Science.gov

Sample records for adaptive color image

  1. Adaptive color contrast enhancement for digital images

    NASA Astrophysics Data System (ADS)

    Wang, Yanfang; Luo, Yupin

    2011-11-01

    Noncanonical illumination that is too dim or carries a color cast degrades images. To cope with this, we propose a method for color-contrast enhancement. First, intensity, chrominance, and contrast characteristics are explored and integrated into the Naka-Rushton equation to remove underexposure and color cast simultaneously. Motivated by the comparison mechanism in Retinex, the ratio of each pixel to its surroundings is used to improve image contrast. Finally, inspired by the two color-opponent dimensions of CIELAB space, a color-enhancement strategy is devised based on the transformation from the CIEXYZ to the CIELAB color space. For images that suffer from underexposure, color cast, or both, our algorithm produces promising results without halo artifacts or corruption of uniform areas.
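    The Naka-Rushton equation referenced above is a compressive response function that boosts dim inputs relative to bright ones. A minimal Python sketch, where the adaptation constant `sigma` and the [0, 1] intensity normalization are illustrative assumptions rather than the authors' exact formulation:

```python
def naka_rushton(intensity, sigma=0.2):
    """Compressive response V = I / (I + sigma), with I in [0, 1].

    Dim inputs gain proportionally more than bright ones, which is why
    this equation is a natural tool for correcting underexposure.
    `sigma` (the semi-saturation constant) is an assumed value here.
    """
    return intensity / (intensity + sigma)

# A dim pixel is amplified relatively more than a bright one:
dim_gain = naka_rushton(0.1) / 0.1
bright_gain = naka_rushton(0.8) / 0.8
```

In the paper, the constant would be driven by the image's intensity and chrominance statistics rather than fixed.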

  2. An adaptive algorithm for motion compensated color image coding

    NASA Technical Reports Server (NTRS)

    Kwatra, Subhash C.; Whyte, Wayne A.; Lin, Chow-Ming

    1987-01-01

    This paper presents an adaptive algorithm for motion compensated color image coding. The algorithm can be used for video teleconferencing or broadcast signals. Activity segmentation is used to reduce the bit rate, and a variable stage search is conducted to save computation. The adaptive algorithm is compared with the nonadaptive algorithm, and it is shown that with approximately 60 percent savings in motion-vector computation and 33 percent additional compression, the performance of the adaptive algorithm is similar to that of the nonadaptive algorithm. The adaptive algorithm also shows an improvement of up to 1 bit/pel over interframe DPCM coding with nonuniform quantization. The test pictures used for this study were recorded directly from broadcast video in color.
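    Motion compensation of this kind rests on block matching: for each block of the current frame, search the previous frame for the best-matching block and code only the motion vector and residual. A toy exhaustive-search sketch in Python; the paper's variable stage search prunes this search to save computation, and the block size and search radius here are arbitrary choices:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb))

def best_motion_vector(prev, cur, top, left, size=2, radius=1):
    """Exhaustively search a small window in `prev` for the block of
    `cur` anchored at (top, left); return the (dy, dx) with minimal SAD."""
    target = [row[left:left + size] for row in cur[top:top + size]]
    best = None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= len(prev) - size and 0 <= x <= len(prev[0]) - size:
                cand = [row[x:x + size] for row in prev[y:y + size]]
                cost = sad(target, cand)
                if best is None or cost < best[0]:
                    best = (cost, (dy, dx))
    return best[1]
```

A variable stage search would stop refining once the SAD falls below a threshold, trading a little accuracy for far fewer block comparisons.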

  3. Multiobjective Image Color Quantization Algorithm Based on Self-Adaptive Hybrid Differential Evolution

    PubMed Central

    Xia, Xuewen

    2016-01-01

    In recent years, some researchers considered image color quantization as a single-objective problem and applied heuristic algorithms to solve it. This paper establishes a multiobjective image color quantization model with intracluster distance and intercluster separation as its objectives. Inspired by a multipopulation idea, a multiobjective image color quantization algorithm based on self-adaptive hybrid differential evolution (MoDE-CIQ) is then proposed to solve this model. Two numerical experiments on four common test images are conducted to analyze the effectiveness and competitiveness of the multiobjective model and the proposed algorithm. PMID:27738423
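    The two objectives named above can be stated concretely. A small Python sketch of one plausible reading of the model; squared Euclidean distance and these exact aggregations (mean over pixels, minimum over palette pairs) are our assumptions, not necessarily the paper's definitions:

```python
def squared_dist(a, b):
    """Squared Euclidean distance between two RGB triples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def quantization_objectives(pixels, palette):
    """Two competing objectives of a multiobjective quantization model:
    mean intracluster distance (to be minimized) and minimum
    intercluster separation (to be maximized)."""
    intra = sum(min(squared_dist(p, c) for c in palette)
                for p in pixels) / len(pixels)
    inter = min(squared_dist(a, b)
                for i, a in enumerate(palette)
                for b in palette[i + 1:])
    return intra, inter
```

A differential-evolution search would then evolve candidate palettes and keep the nondominated ones under these two objectives.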

  4. Hierarchical prediction and context adaptive coding for lossless color image compression.

    PubMed

    Kim, Seyun; Cho, Nam Ik

    2014-01-01

    This paper presents a new lossless color image compression algorithm based on hierarchical prediction and context-adaptive arithmetic coding. For lossless compression of an RGB image, the image is first decorrelated by a reversible color transform, and the Y component is then encoded by a conventional lossless grayscale image compression method. For encoding the chrominance images, we develop a hierarchical scheme that enables the use of upper, left, and lower pixels for pixel prediction, whereas conventional raster-scan prediction methods use only upper and left pixels. An appropriate context model for the prediction error is also defined, and arithmetic coding is applied to the error signal corresponding to each context. For several sets of images, it is shown that the proposed method reduces bit rates further than JPEG2000 and JPEG-XR.
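    A reversible color transform of the kind this method depends on can be illustrated with the JPEG 2000-style RCT. The abstract does not state which transform the paper uses, so treat this as an example of the class: integer-only arithmetic, hence exactly invertible, which is what lossless coding requires:

```python
def rct_forward(r, g, b):
    """JPEG 2000-style reversible color transform (RCT).

    Produces a luminance approximation Y and two chrominance
    differences.  Because only integer shifts and differences are
    used, the transform is losslessly invertible."""
    y = (r + 2 * g + b) >> 2     # floor((R + 2G + B) / 4)
    cu = b - g
    cv = r - g
    return y, cu, cv

def rct_inverse(y, cu, cv):
    """Exact inverse of rct_forward (Python's >> floors, matching
    the forward transform even for negative chrominance values)."""
    g = y - ((cu + cv) >> 2)
    return cv + g, g, cu + g
```

The roundtrip is exact for any 8-bit RGB triple, which is the property that lets the chrominance planes be predicted and entropy-coded independently without loss.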

  5. An efficient and self-adapted approach to the sharpening of color images.

    PubMed

    Kau, Lih-Jen; Lee, Tien-Lin

    2013-01-01

    An efficient approach to the sharpening of color images is proposed in this paper. The image to be sharpened is first transformed to the HSV color model, and only the Value channel is used for sharpening while the other channels are left unchanged. A proposed edge detector and a low-pass filter are then applied to the Value channel to pick out pixels around boundaries. Pixels detected as lying around edges or boundaries are adjusted so that the boundary is sharpened, while non-edge pixels are kept unaltered. The increment or decrement added to the edge pixels is determined adaptively from global statistics of the image and local statistics of the pixel being sharpened. With the proposed approach, discontinuities are highlighted while most of the original information in the image is retained. Finally, the adjusted Value channel is recombined with the Hue and Saturation channels to obtain the sharpened color image. Extensive experiments on natural images are given to demonstrate the effectiveness and efficiency of the proposed approach. PMID:24348136
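    The core idea, adjusting only detected edge pixels of the Value channel by an adaptive increment, can be sketched in one dimension. The Laplacian edge test, the fixed threshold, and the gain below are illustrative stand-ins for the paper's statistics-based choices:

```python
def sharpen_value_channel(v_row, gain_scale=0.5):
    """1-D sketch: sharpen only where a local second difference marks
    an edge, scaling the increment by the local contrast.  The edge
    threshold (10) and `gain_scale` are assumed values; the paper
    derives them from global and local image statistics."""
    out = list(v_row)
    for i in range(1, len(v_row) - 1):
        laplacian = 2 * v_row[i] - v_row[i - 1] - v_row[i + 1]
        if abs(laplacian) > 10:          # treat as an edge pixel
            out[i] = min(255, max(0, v_row[i] + gain_scale * laplacian))
    return out
```

Non-edge pixels pass through unchanged, which is what preserves the original information in smooth regions.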

  6. Adaptive color image watermarking based on the just noticeable distortion model in balanced multiwavelet domain

    NASA Astrophysics Data System (ADS)

    Zhang, Yuan; Ding, Yong

    2011-10-01

    In this paper, a novel adaptive color image watermarking scheme based on the just noticeable distortion (JND) model in the balanced multiwavelet domain is proposed. The balanced multiwavelet transform achieves orthogonality, symmetry, and a high order of approximation simultaneously, without requiring any input prefiltering, which makes it a good choice for image processing. Based on the properties of the human visual system, a novel multiresolution JND model is proposed in the balanced multiwavelet domain. This model incorporates the spatial contrast sensitivity function, the luminance adaptation effect, and the contrast masking effect by separating sharp edges from texture. Based on this model, the watermark is then adaptively inserted into the most distortion-tolerant locations of the luminance and chrominance components without introducing perceivable distortions. Experimental results show that the proposed watermarking scheme is transparent and highly robust to various attacks such as low-pass filtering, noise, and JPEG and JPEG2000 compression.

  7. Adaptive Spread-Transform Dither Modulation Using a New Perceptual Model for Color Image Watermarking

    NASA Astrophysics Data System (ADS)

    Ma, Lihong; Yu, Dong; Wei, Gang; Tian, Jing; Lu, Hanqing

    Major challenges of the conventional spread-transform dither modulation (STDM) watermarking approach are two-fold: (i) it applies a fixed watermarking strength (more particularly, a fixed quantization index step size) to the whole cover image; and (ii) it is fairly vulnerable to amplitude changes. To tackle these challenges, an adaptive spread-transform dither modulation (ASTDM) approach is proposed in this paper for robust color image watermarking, incorporating a new perceptual model into the conventional STDM framework. The proposed approach exploits the new perceptual model to adjust the quantization index step sizes according to the local perceptual characteristics of the cover image. Furthermore, in contrast to the conventional Watson model, which is vulnerable to amplitude changes, our perceptual model keeps the luminance masking thresholds consistent under amplitude changes while remaining faithful to the properties of the human visual system. In addition, color artifacts can be incurred during watermark embedding, since some intensity values are perceptibly changed to carry the watermark. To address this, a color artifact suppression algorithm is proposed by mathematically deriving an upper bound for the intensity values from the inherent relationship between the saturation and intensity components. Extensive experiments are conducted on 500 images selected from the Corel database to demonstrate the superior performance of the proposed ASTDM approach.
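    The STDM quantization step that the adaptive scheme builds on is standard dither modulation. A minimal Python sketch; a scalar projection value is assumed here, whereas real STDM first projects the host vector onto a spread direction, and the step size is exactly the quantity the adaptive scheme tunes per region:

```python
def stdm_embed(projection, bit, step, dither=0.0):
    """Quantize the host projection with a bit-dependent dither:
    the lattice for bit 1 is offset by step/2 from the lattice for
    bit 0, so the two bits land on interleaved quantizer cells."""
    d = dither + (step / 2.0) * bit
    return round((projection - d) / step) * step + d

def stdm_detect(projection, step, dither=0.0):
    """Minimum-distance detection: re-quantize against both lattices
    and pick the bit whose lattice is closer."""
    dists = [abs(projection - stdm_embed(projection, b, step, dither))
             for b in (0, 1)]
    return 0 if dists[0] <= dists[1] else 1
```

A larger `step` survives stronger attacks but distorts the image more, which is why the perceptual model above is used to pick the largest step that remains invisible locally.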

  8. Adaptive Morphological Feature-Based Object Classifier for a Color Imaging System

    NASA Technical Reports Server (NTRS)

    McDowell, Mark; Gray, Elizabeth

    2009-01-01

    Utilizing a Compact Color Microscope Imaging System (CCMIS), a unique algorithm has been developed that combines human intelligence with machine vision techniques to produce an autonomous microscope tool for biomedical, industrial, and space applications. The technique is based on an adaptive, morphological, feature-based mapping function comprising 24 mutually inclusive feature metrics that are used to characterize complex cells/objects derived from color image analysis. Some of the features include: Area (total number of non-background pixels inside and including the perimeter); Bounding Box (smallest rectangle that bounds an object); CenterX (x-coordinate of the intensity-weighted center of mass of an entire object or multi-object blob); CenterY (y-coordinate of the intensity-weighted center of mass of an entire object or multi-object blob); Circumference (a measure of circumference that accounts for whether neighboring pixels are joined diagonally, a longer distance than horizontally or vertically joined pixels); Elongation (a measure of particle elongation given as a number between 0 and 1; if equal to 1, the particle bounding box is square, and as the elongation decreases from 1 the particle becomes more elongated); Ext_vector (extremal vector); Major Axis (the length of the major axis of the smallest ellipse encompassing an object); Minor Axis (the length of the minor axis of the smallest ellipse encompassing an object); Partial (indicates whether the particle extends beyond the field of view); Perimeter Points (points that make up a particle's perimeter); Roundness (4π × area / perimeter², a measure of object roundness or compactness given as a value between 0 and 1; the greater the ratio, the rounder the object); Thin in center (determines whether an object becomes thin in the center, i.e., figure-eight-shaped); Theta (orientation of the major axis); and smoothness and color metrics for each color component (red, green, blue).
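    The Roundness feature in the list above has a simple closed form worth stating explicitly; a small Python sketch:

```python
import math

def roundness(area, perimeter):
    """Roundness metric from the feature list: 4*pi*area / perimeter**2.

    Equals exactly 1 for a perfect circle (area = pi*r^2,
    perimeter = 2*pi*r) and decreases toward 0 as shapes elongate."""
    return 4 * math.pi * area / perimeter ** 2
```

For example, a square of side s (area s², perimeter 4s) scores π/4 ≈ 0.785, regardless of s.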

  9. Adaptive optics retinal imaging reveals S-cone dystrophy in tritan color-vision deficiency

    NASA Astrophysics Data System (ADS)

    Baraas, Rigmor C.; Carroll, Joseph; Gunther, Karen L.; Chung, Mina; Williams, David R.; Foster, David H.; Neitz, Maureen

    2007-05-01

    Tritan color-vision deficiency is an autosomal dominant disorder associated with mutations in the short-wavelength-sensitive- (S-) cone-pigment gene. An unexplained feature of the disorder is that individuals with the same mutation manifest different degrees of deficiency. To date, it has not been possible to examine whether any loss of S-cone function is accompanied by physical disruption in the cone mosaic. Two related tritan subjects with the same novel mutation in their S-cone-opsin gene, but different degrees of deficiency, were examined. Adaptive optics was used to obtain high-resolution retinal images, which revealed distinctly different S-cone mosaics consistent with their discrepant phenotypes. In addition, a significant disruption in the regularity of the overall cone mosaic was observed in the subject completely lacking S-cone function. These results taken together with other recent findings from molecular genetics indicate that, with rare exceptions, tritan deficiency is progressive in nature.

  10. Adaptive color correction based on object color classification

    NASA Astrophysics Data System (ADS)

    Kotera, Hiroaki; Morimoto, Tetsuro; Yasue, Nobuyuki; Saito, Ryoichi

    1998-09-01

    An adaptive color management strategy that depends on the image content is proposed. A pictorial color image is classified into different object areas, each with a clustered color distribution. Euclidean and Mahalanobis color distance measures, as well as a maximum likelihood method based on Bayesian decision theory, are introduced for the classification. After classification, the pixels of each cluster are projected onto a principal component space by the Hotelling transform, and color corrections are performed so that the principal components match between the corresponding clustered color areas of the original and printed images.

  11. Color images in telepathology: how many colors do we need?

    PubMed

    Doolittle, M H; Doolittle, K W; Winkelman, Z; Weinberg, D S

    1997-01-01

    It is generally assumed that for telepathology, accurate depiction of microscopic images requires the use of "true color" (i.e., 24 bits, eight bits each for red, green, and blue) in the digitized image used for transmission. If such a 24-bit color image file, which provides a palette of 16.7 million colors, could be reduced in size by decreasing the number of colors displayed in the image to 8 bits (a palette of 256 colors), the image files would require less storage space, could be transmitted more rapidly, and would require less telecommunications bandwidth. However, such color reduction must not result in detectable image degradation, especially if the images are to be used for diagnosis. Therefore, we performed a carefully controlled study to determine whether pathologists could detect differences in the quality of microscopic images that were reduced from 24 to 8 bits of color. Thirty pathologists were each asked to view a set of 30 image pairs displayed on a computer monitor. Each image pair consisted of the original 24-bit color version and an 8-bit color-reduced version derived using an adaptive color reduction algorithm with diffusion dithering. Observers were asked whether they could detect any difference in quality between the image pairs. Then, regardless of their answer, they were asked to choose the better-quality image of the pair. Overall, there was not a statistically significant ability to consciously detect differences between the image pairs (P < .750). However, when forced to choose, there was a significant preference for the 8-bit images as being of "better quality" (P < .005). We conclude that telepathology applications may be able to take advantage of adaptive color reduction algorithms to reduce image file size without sacrificing image quality. Additional studies must be performed to determine the minimal image requirements for accurate diagnosis by telepathology.
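    The palette-reduction step being evaluated can be illustrated with a deliberately naive Python sketch. This uses a simple "popularity" palette rather than the study's adaptive algorithm, and omits the diffusion dithering the study applied on top, so it is an assumption-laden stand-in for the general technique:

```python
from collections import Counter

def reduce_colors(pixels, n_colors):
    """Toy palette reduction: keep the n most frequent colors and snap
    every pixel to its nearest palette entry (squared RGB distance).
    Real adaptive algorithms (median cut, octree) choose the palette
    more carefully."""
    palette = [c for c, _ in Counter(pixels).most_common(n_colors)]

    def nearest(p):
        return min(palette,
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(p, c)))

    return [nearest(p) for p in pixels]
```

Going from 24-bit to an 8-bit palette cuts the per-pixel payload by a factor of three before any entropy coding, which is the bandwidth saving the study quantifies.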

  12. Color harmonization for images

    NASA Astrophysics Data System (ADS)

    Tang, Zhen; Miao, Zhenjiang; Wan, Yanli; Wang, Zhifei

    2011-04-01

    Color harmonization is an artistic technique to adjust a set of colors in order to enhance their visual harmony so that they are aesthetically pleasing in terms of human visual perception. We present a new color harmonization method that treats the harmonization as a function optimization. For a given image, we derive a cost function based on the observation that pixels in a small window that have similar unharmonic hues should be harmonized with similar harmonic hues. By minimizing the cost function, we get a harmonized image in which the spatial coherence is preserved. A new matching function is proposed to select the best matching harmonic schemes, and a new component-based preharmonization strategy is proposed to preserve the hue distribution of the harmonized images. Our approach overcomes several shortcomings of the existing color harmonization methods. We test our algorithm with a variety of images to demonstrate the effectiveness of our approach.

  13. Hard color-shrinkage for color-image processing of a digital color camera

    NASA Astrophysics Data System (ADS)

    Saito, Takahiro; Ueda, Yasutaka; Fujii, Nobuhiro; Komatsu, Takashi

    2010-01-01

    The classic shrinkage works well for monochrome-image denoising. To exploit inter-channel color correlations, a noisy image undergoes a color transformation from RGB to a luminance-and-chrominance color space, and the luminance and chrominance components are denoised separately. However, this approach cannot cope with the signal-dependent noise of a digital color camera. To exploit the noise's signal dependencies, we previously proposed the soft color-shrinkage, in which the inter-channel color correlations are used directly in the RGB color space. The soft color-shrinkage works well but requires a large amount of computation. To alleviate this drawback, starting from the l0-l2 optimization problem whose solution yields the hard shrinkage, we introduce the l0 norms of color differences and color sums into the model and derive the hard color-shrinkage as its solution. For each triplet of the three primary colors, the hard color-shrinkage has 24 feasible solutions, from which it selects the optimal feasible solution giving the minimal energy. We propose a method to control its shrinkage parameters spatially adaptively, according to both the local image statistics and the noise's signal dependencies, and apply the spatially adaptive hard color-shrinkage to the removal of signal-dependent noise in a shift-invariant wavelet transform domain. The hard color-shrinkage performs better than the soft color-shrinkage in most cases, from both objective and subjective viewpoints.

  14. Edge detection of color images using the HSL color space

    NASA Astrophysics Data System (ADS)

    Weeks, Arthur R.; Felix, Carlos E.; Myler, Harley R.

    1995-03-01

    Various edge detectors have been proposed, as well as several types of adaptive edge detectors, but the performance of many of them depends on the features and the noise present in the grayscale image. Attempts have been made to extend edge detection to color images by applying grayscale edge detection methods to each of the individual red, green, and blue color components, as well as to the hue, saturation, and intensity components of the color image. The modulo-2π nature of the hue component makes its detection difficult. For example, hues of 0 and 2π correspond to the same color tint. Normal edge detection of a color image containing adjacent pixels with hues of 0 and 2π can therefore indicate an edge where none is actually present. This paper presents a method of mapping the modulo-2π hue space to a linear space, enabling edge detection of the hue component with the Sobel edge detector. The results of this algorithm are compared against edge detection using the red, green, and blue color components. By combining the hue edge image with the intensity and saturation edge images, more edge information is observed.
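    The modulo-2π problem described above disappears if hue differences are measured circularly. A minimal Python sketch of the shortest angular distance; this is the general fix for the wraparound, not necessarily the paper's specific linear remapping:

```python
import math

def hue_difference(h1, h2):
    """Shortest angular distance between two hues in radians.

    Hues of 0 and 2*pi (the same tint) yield a difference of 0, so a
    gradient built on this distance does not report a spurious edge
    at the wraparound."""
    d = abs(h1 - h2) % (2 * math.pi)
    return min(d, 2 * math.pi - d)
```

A Sobel-style operator applied to hue can use this circular difference in place of plain subtraction to avoid false edges between near-0 and near-2π pixels.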

  15. Region Adaptive Color Demosaicing Algorithm Using Color Constancy

    NASA Astrophysics Data System (ADS)

    Kim, Chang Won; Oh, Hyun Mook; Yoo, Du Sic; Kang, Moon Gi

    2010-12-01

    This paper proposes a novel way of combining color demosaicing and the auto white balance (AWB) method, which are important parts of image processing. Performance of the AWB is generally affected by demosaicing results because most AWB algorithms are performed after color demosaicing. In this paper, in order to increase the performance and efficiency of the AWB algorithm, the color constancy problem is examined during the color demosaicing step. Initial estimates of the directional luminance and chrominance values are defined for estimating edge direction and calculating the AWB gain. In order to prevent color failure in conventional edge-based AWB methods, we propose a modified edge-based AWB method that uses a predefined achromatic region. The estimation of edge direction is performed region-adaptively using the local statistics of the initial estimates of the luminance and chrominance information. Simulated and real Bayer color filter array (CFA) data are used to evaluate the performance of the proposed method. When compared to conventional methods, the proposed method shows significant improvements in terms of visual and numerical criteria.

  16. Image colorization based on texture map

    NASA Astrophysics Data System (ADS)

    Liu, Shiguang; Zhang, Xiang

    2013-01-01

    Colorizing grayscale images so that the result appears natural is a hard problem. Previous colorization algorithms generally use only luminance information and ignore the rich texture information, which means that regions with the same luminance but different textures may mistakenly be assigned the same color. A novel automatic texture-map-based grayscale image colorization method is proposed. The texture map is generated with bilateral decomposition and a Gaussian high-pass filter, and is further optimized using a statistical adaptive gamma correction method. The texture map is segmented using locally weighted linear regression on its histogram in order to match the grayscale image to the source image. Within each segment, a weighted color-luminance correspondence is obtained from the results of the locally weighted linear regression. The luminance-color correspondence between the grayscale image and the source image can thus be used to colorize the grayscale image directly. By considering the consistency of both color and texture information between the two images, plausible colorization results are generated with this new method.

  17. Image indexing using color correlograms

    DOEpatents

    Huang, Jing; Kumar, Shanmugasundaram Ravi; Mitra, Mandar; Zhu, Wei-Jing

    2001-01-01

    A color correlogram is a three-dimensional table indexed by color and distance between pixels, which expresses how the spatial correlation of color changes with distance in a stored image. The color correlogram may be used to distinguish an image from other images in a database. To create a color correlogram, the colors in the image are quantized into m color values, c_1, . . ., c_m. Also, the distance values k ∈ [d] to be used in the correlogram are determined, where [d] is the set of distances between pixels in the image and d_max is the maximum distance between pixels in the image. Each entry (i, j, k) in the table is the probability of finding a pixel of color c_j at a selected distance k from a pixel of color c_i. A color autocorrelogram, a restricted version of the color correlogram that considers only color pairs of the form (i, i), may also be used to identify an image.
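    The restricted autocorrelogram is straightforward to compute directly from its definition. A small Python sketch using chessboard (L∞) distance; the distance metric is an assumption, since this excerpt of the patent does not fix it:

```python
def autocorrelogram(image, colors, k):
    """For each color c, the probability that a pixel at chessboard
    distance exactly k from a pixel of color c also has color c
    (the restricted (i, i) form of the correlogram).

    `image` is a 2-D list of quantized color indices."""
    h, w = len(image), len(image[0])
    result = {}
    for c in colors:
        same = total = 0
        for y in range(h):
            for x in range(w):
                if image[y][x] != c:
                    continue
                for dy in range(-k, k + 1):
                    for dx in range(-k, k + 1):
                        if max(abs(dy), abs(dx)) != k:
                            continue  # only the ring at distance k
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            total += 1
                            same += image[ny][nx] == c
        result[c] = same / total if total else 0.0
    return result
```

Unlike a plain color histogram, these per-distance probabilities capture spatial layout, which is what makes the correlogram discriminative for image retrieval.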

  18. CFA-aware features for steganalysis of color images

    NASA Astrophysics Data System (ADS)

    Goljan, Miroslav; Fridrich, Jessica

    2015-03-01

    Color interpolation is a form of upsampling, which introduces constraints on the relationship between neighboring pixels in a color image. These constraints can be exploited to substantially boost the accuracy of steganography detectors. In this paper, we introduce a rich model formed by 3D co-occurrences of color noise residuals, split according to the structure of the Bayer color filter array, to further improve detection. Some color interpolation algorithms, such as AHD and PPG, impose pixel constraints so tight that extremely accurate detection becomes possible with merely eight features, eliminating the need for a richer model. We carry out experiments on non-adaptive LSB matching and the content-adaptive algorithm WOW with five different color interpolation algorithms. In contrast to grayscale images, in color images that exhibit traces of color interpolation the security of WOW is significantly lower and, depending on the interpolation algorithm, may even be lower than that of non-adaptive LSB matching.

  19. Sparse representation for color image restoration.

    PubMed

    Mairal, Julien; Elad, Michael; Sapiro, Guillermo

    2008-01-01

    Sparse representations of signals have drawn considerable interest in recent years. The assumption that natural signals, such as images, admit a sparse decomposition over a redundant dictionary leads to efficient algorithms for handling such sources of data. In particular, the design of well-adapted dictionaries for images has been a major challenge. The K-SVD has recently been proposed for this task and shown to perform very well on various grayscale image processing tasks. In this paper, we address the problem of learning dictionaries for color images and extend the previously reported K-SVD-based grayscale image denoising algorithm. This work puts forward ways of handling nonhomogeneous noise and missing information, paving the way to state-of-the-art results in applications such as color image denoising, demosaicing, and inpainting, as demonstrated in this paper. PMID:18229804

  20. Color transfer between high-dynamic-range images

    NASA Astrophysics Data System (ADS)

    Hristova, Hristina; Cozot, Rémi; Le Meur, Olivier; Bouatouch, Kadi

    2015-09-01

    Color transfer methods alter the look of a source image with regard to a reference image. So far, the proposed color transfer methods have been limited to low-dynamic-range (LDR) images. Unlike LDR images, which are display-dependent, high-dynamic-range (HDR) images contain real physical values of world luminance and are able to capture high luminance variations and the finest details of real-world scenes. There is therefore a strong discrepancy between the two types of images. In this paper, we bridge the gap between the color transfer domain and HDR imagery by introducing HDR extensions to LDR color transfer methods. We tackle the main issues of applying a color transfer between two HDR images. First, to address the nature of light and color distributions in the context of HDR imagery, we carry out modifications of traditional color spaces. Furthermore, we ensure high precision in the quantization of the dynamic range for histogram computations. As image clustering (based on light and colors) proved to be an important aspect of color transfer, we analyze it and adapt it to the HDR domain. Our framework has been applied to several state-of-the-art color transfer methods. Qualitative experiments have shown that results obtained with the proposed adaptation approach exhibit fewer artifacts and are visually more pleasing than results obtained when straightforwardly applying existing color transfer methods to HDR images.

  1. Color hard copy requirements for medical imaging

    NASA Astrophysics Data System (ADS)

    Cargill, Ellen B.

    1995-04-01

    Traditionally, color mapping has not been utilized for diagnostic medical imaging. Color mapping was not possible prior to the emergence of electronic imaging modalities. Diagnostic imaging is considered in view of its purpose and goals as distinguished from photographic and scientific imaging. The applications for color in digital imaging modalities are discussed, as well as research directions for color utilized as a means of increasing the information density available to an observer. Requirements for color hardcopy are discussed.

  2. Image subregion querying using color correlograms

    DOEpatents

    Huang, Jing; Kumar, Shanmugasundaram Ravi; Mitra, Mandar; Zhu, Wei-Jing

    2002-01-01

    A color correlogram (10) is a representation expressing the spatial correlation of color and distance between pixels in a stored image. The color correlogram (10) may be used to distinguish objects in an image as well as between images in a plurality of images. By intersecting a color correlogram of an image object with correlograms of images to be searched, those images which contain the objects are identified by the intersection correlogram.

  3. Transfer color to night vision images

    NASA Astrophysics Data System (ADS)

    Sun, Shaoyuan; Jing, Zhongliang; Liu, Gang; Li, Zhenhua

    2005-08-01

    Natural color appearance is the key problem in the field of color night vision. In this paper, the color mood of a daytime color image is transferred to a monochromatic night vision image, giving the night image a natural color appearance. For each pixel in the night vision image, the best-matching pixel in the color image is found using a texture similarity measure. Entropy, energy, contrast, homogeneity, and correlation features based on the co-occurrence matrix are combined into the texture similarity measure used to find corresponding pixels between the two images. We use a genetic algorithm (GA) to find the optimal weighting factors assigned to the five features. The GA is also employed in searching for matching pixels, making the color transfer algorithm faster. When the best-matching pixel in the color image is found, its chromaticity values are transferred to the corresponding pixel of the night vision image. The experimental results demonstrate the efficiency of this natural color transfer technique.
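    The co-occurrence matrix and one of the five texture features named above can be sketched directly in Python. The offset, normalization, and contrast definition follow the common Haralick formulation, which may differ in detail from the authors' implementation:

```python
from collections import Counter

def cooccurrence(gray, dx=1, dy=0):
    """Grey-level co-occurrence counts for one pixel offset (dx, dy).

    `gray` is a 2-D list of quantized grey levels; each in-bounds
    pixel pair (p, neighbor) increments one cell of the matrix."""
    h, w = len(gray), len(gray[0])
    pairs = Counter()
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                pairs[(gray[y][x], gray[ny][nx])] += 1
    return pairs

def contrast(pairs):
    """Haralick contrast: squared grey-level difference weighted by
    co-occurrence probability.  High for busy textures, 0 for flat
    regions."""
    n = sum(pairs.values())
    return sum((i - j) ** 2 * c / n for (i, j), c in pairs.items())
```

Entropy, energy, homogeneity, and correlation are computed from the same normalized matrix, and the GA then weights the five features into a single similarity score.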

  4. Factors of Incomplete Adaptation for Color Reproduction Considering Subjective White Point Shift for Varying Illuminant

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Hak; Lee, Myoung-Hwa; Sohng, Kyu-Ik

    In this paper, we investigate the effect of the chromaticity and luminance of the surround on the subjective neutral white point, and construct a mathematical model of the degree of adaptation to the viewing environment. The adaptation factors consist of two parts: the degree of adaptation to ambient chromaticity and to color saturation. These factors can be applied to color appearance models (CAMs) and in practice improve the color-matching performance of a CAM, yielding a method of image reproduction for general display systems.

  5. Adaptation and perceptual norms in color vision.

    PubMed

    Webster, Michael A; Leonard, Deanne

    2008-11-01

    Many perceptual dimensions are thought to be represented relative to an average value or norm. Models of norm-based coding assume that the norm appears psychologically neutral because it reflects a neutral response in the underlying neural code. We tested this assumption in human color vision by asking how judgments of "white" are affected as neural responses are altered by adaptation. The adapting color was varied to determine the stimulus level that did not bias the observer's subjective white point. This level represents a response norm at the stages at which sensitivity is regulated by the adaptation, and we show that these response norms correspond to the perceptually neutral stimulus and that they can account for how the perception of white varies both across different observers and within the same observer at different locations in the visual field. We also show that individual differences in perceived white are reduced when observers are exposed to a common white adapting stimulus, suggesting that the perceptual differences are due in part to differences in how neural responses are normalized. These results suggest a close link between the norms for appearance and coding in color vision and illustrate a general paradigm for exploring this link in other perceptual domains.

  6. Color Imaging management in film processing

    NASA Astrophysics Data System (ADS)

    Tremeau, Alain; Konik, Hubert; Colantoni, Philippe

    2003-12-01

    The latest research projects in the LIGIV laboratory concern the capture, processing, archiving, and display of color images, considering the trichromatic nature of the Human Visual System (HVS). One of these projects addresses digital cinematographic film sequences of high resolution and dynamic range, aiming to optimize the use of content for post-production operators and for the end user. The studies presented in this paper address the use of metadata to optimize the consumption of video content on a device of the user's choice, independent of the nature of the equipment that captured the content. Optimizing consumption includes enhancing the quality of image reconstruction on a display. Another part of this project addresses the content-based adaptation of image display, focusing on Region of Interest (ROI) operations based on the ROI concepts of MPEG-7. The aim of this second part is to characterize and ensure the conditions of display even if the display device or display medium changes. This requires, first, the definition of a reference color space and of bidirectional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and the ambient illumination under which they are viewed. To match the displayed image to the intended appearance, all kinds of production metadata (camera specification, camera color primaries, lighting conditions) should be associated with the film material. Metadata and content together build rich content. The author is assumed to specify conditions as known from the digital graphic arts. To control image pre-processing and post-processing, these specifications should be contained in the film's metadata. The specifications are related to ICC profiles but additionally need to consider mesopic viewing conditions.

  7. Bio-inspired color image enhancement

    NASA Astrophysics Data System (ADS)

    Meylan, Laurence; Susstrunk, Sabine

    2004-06-01

    Capturing and rendering an image that fulfills the observer's expectations is a difficult task. This is due to the fact that the signal reaching the eye is processed by a complex mechanism before forming a percept, whereas a capturing device only retains the physical value of light intensities. It is especially difficult to render complex scenes with highly varying luminances. For example, a picture taken inside a room where objects are visible through the windows will not be rendered correctly by a global technique. Either details in the dim room will be hidden in shadow or the objects viewed through the window will be too bright. The image has to be treated locally to resemble more closely what the observer remembers. The purpose of this work is to develop a technique for rendering images based on human local adaptation. We take inspiration from a model of color vision called Retinex. This model determines the perceived color given the spatial relationships of the captured signals. Retinex has been used as a computational model for image rendering. In this article, we propose a new solution inspired by Retinex that is based on a single filter applied to the luminance channel. All parameters are image-dependent, so the process requires no parameter tuning. That makes the method more flexible than other existing ones. The presented results show that our method suitably enhances high dynamic range images.
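
    The local center/surround comparison described above can be sketched as a single Gaussian surround applied to the luminance channel, with the ratio taken in the log domain. This is a generic Retinex-style sketch, not the authors' adaptive filter; the kernel size, sigma, and output rescaling are illustrative choices.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Separable 1-D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def surround(lum, sigma):
    """Blur luminance with a separable Gaussian (reflect padding)."""
    k = gaussian_kernel(int(6 * sigma) | 1, sigma)  # odd kernel size
    pad = len(k) // 2
    padded = np.pad(lum, pad, mode="reflect")
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, "valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, "valid"), 0, tmp)

def retinex_enhance(lum, sigma=15.0, eps=1e-6):
    """Center/surround log-ratio, rescaled to [0, 1]."""
    out = np.log(lum + eps) - np.log(surround(lum, sigma) + eps)
    return (out - out.min()) / (out.max() - out.min() + eps)
```

    A single surround scale is used here; a tone-mapping method in the spirit of the paper would derive the filter support from image statistics rather than fix sigma.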

  8. Appearance can be deceiving: using appearance models in color imaging

    NASA Astrophysics Data System (ADS)

    Johnson, Garrett M.

    2007-01-01

    As color imaging has evolved through the years, our toolset for understanding it has similarly evolved. Research in color difference equations and uniform color spaces spawned tools such as CIELAB, which has had tremendous success over the years. Research on chromatic adaptation and other appearance phenomena then extended CIELAB to form the basis of color appearance models, such as CIECAM02. Color difference equations such as CIEDE2000 evolved to reconcile weaknesses in areas of the CIELAB space. Similarly, models such as S-CIELAB were developed to predict more spatially complex color difference calculations between images. Research in all of these fields is still going strong, and there seems to be a trend towards unification of some of the tools, such as calculating color differences in a color appearance space. Along such lines, image appearance models have been developed that attempt to combine all of the above models and metrics into one common framework. The goal is to allow color imaging researchers to pick and choose the appropriate modeling toolset for their needs. Along these lines, the iCAM image appearance model framework was developed to study a variety of color imaging problems. These include image difference and image quality evaluations as well as gamut mapping and high-dynamic range (HDR) rendering. It is important to stress that iCAM was not designed to be a complete color imaging solution, but rather a starting point for unifying models of color appearance, color difference, and spatial vision. As such, the choice of model components is highly dependent on the problem being addressed. For example, with CIELAB it is clearly evident that it is not necessary to use the associated color difference equations to have great success as a device-independent color space. Likewise, it may not be necessary to use the spatial filtering components of an image appearance model when performing image rendering. This paper attempts to shed some light on some of the

  9. Preparing Colorful Astronomical Images II

    NASA Astrophysics Data System (ADS)

    Levay, Z. G.; Frattare, L. M.

    2002-12-01

    We present additional techniques for using mainstream graphics software (Adobe Photoshop and Illustrator) to produce composite color images and illustrations from astronomical data. These techniques have been used on numerous images from the Hubble Space Telescope to produce photographic, print and web-based products for news, education and public presentation as well as illustrations for technical publication. We expand on a previous paper to present more detail and additional techniques, taking advantage of new or improved features available in the latest software versions. While Photoshop is not intended for quantitative analysis of full dynamic range data (as are IRAF or IDL, for example), we have had much success applying Photoshop's numerous, versatile tools to work with scaled images, masks, text and graphics in multiple semi-transparent layers and channels.

  10. Adaptive Image Denoising by Mixture Adaptation

    NASA Astrophysics Data System (ADS)

    Luo, Enming; Chan, Stanley H.; Nguyen, Truong Q.

    2016-10-01

    We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called Expectation-Maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper: First, we provide a full derivation of the EM adaptation algorithm and demonstrate methods to reduce its computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. Experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms.

  11. Feature encoding for color image segmentation

    NASA Astrophysics Data System (ADS)

    Li, Ning; Li, Youfu

    2001-09-01

    An approach for color image segmentation is proposed based on the contributions of color features to segmentation rather than on the choice of a particular color space. It differs from previous methods in that a SOFM is used to construct the feature encoding, so that the encoding can self-organize the effective features for different color images. Fuzzy clustering is applied for the final segmentation once the well-suited color features and the initial parameters are available. The proposed method has been applied to segmenting different types of color images, and the experimental results show that it outperforms the classical clustering method. Our study shows that the feature-encoding approach offers great promise in automating and optimizing color image segmentation.

  12. Image color reduction method for color-defective observers using a color palette composed of 20 particular colors

    NASA Astrophysics Data System (ADS)

    Sakamoto, Takashi

    2015-01-01

    This study describes a color enhancement method that uses a color palette especially designed for protan and deutan defects, commonly known as red-green color blindness. The proposed color reduction method is based on a simple color mapping. Complicated computation and image processing are not required, and the method can replace protan and deutan confusion (p/d-confusion) colors with protan and deutan safe (p/d-safe) colors. Color palettes for protan and deutan defects proposed by previous studies are composed of few p/d-safe colors, so the colors contained in these palettes are insufficient for replacing colors in photographs. Recently, Ito et al. proposed a p/d-safe color palette composed of 20 particular colors. The author demonstrated that this p/d-safe color palette can be applied to color reduction in photographs as a means of replacing p/d-confusion colors. This study presents the results of the proposed color reduction on photographs that include typical p/d-confusion colors; after the reduction process is completed, color-defective observers can distinguish the formerly confusing colors.
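
    The "simple color mapping" at the core of such a method can be sketched as a nearest-neighbor lookup into the palette. The palette entries below are placeholders, not Ito et al.'s actual 20 colors, and plain Euclidean RGB distance is an illustrative choice.

```python
import numpy as np

# Hypothetical p/d-safe palette (NOT Ito et al.'s actual 20 colors):
# a handful of RGB triplets stands in for the real palette here.
PALETTE = np.array([
    [  0,   0,   0], [255, 255, 255], [255, 165,   0],
    [  0, 114, 178], [240, 228,  66], [204, 121, 167],
], dtype=float)

def reduce_to_palette(img, palette=PALETTE):
    """Map every pixel to its nearest palette color (Euclidean RGB).

    img: H x W x 3 uint8 array. Returns an array of the same shape
    containing only palette colors -- the simple color mapping the
    method is built on, with no other image processing required.
    """
    pixels = img.reshape(-1, 3).astype(float)
    # Squared distance from each pixel to each palette entry.
    d2 = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)
    return palette[nearest].reshape(img.shape).astype(np.uint8)
```

    A perceptually uniform distance (e.g. computed in CIELAB) would be a natural refinement of the plain RGB metric used here.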

  13. Hepatitis Diagnosis Using Facial Color Image

    NASA Astrophysics Data System (ADS)

    Liu, Mingjia; Guo, Zhenhua

    Facial color diagnosis is an important diagnostic method in traditional Chinese medicine (TCM). However, due to its qualitative, subjective and experience-based nature, traditional facial color diagnosis has a very limited application in clinical medicine. To circumvent the subjective and qualitative problems of facial color diagnosis in TCM, in this paper we present a novel computer-aided facial color diagnosis method (CAFCDM). The method has three parts: a face image database, an image preprocessing module and a diagnosis engine. The face image database was built from a group of 116 patients affected by two kinds of liver disease and 29 healthy volunteers. Quantitative color features are extracted from the facial images by using popular digital image processing techniques. Then, a KNN classifier is employed to model the relationship between the quantitative color features and the diseases. The results show that the method can properly identify three groups: healthy, severe hepatitis with jaundice, and severe hepatitis without jaundice, with accuracy higher than 73%.

  14. High-resolution color images of Io

    NASA Technical Reports Server (NTRS)

    Mcewen, A. S.; Soderblom, L. A.

    1984-01-01

    Color versions of the highest resolution Voyager images of Io were produced by combining the low-resolution color images with the high-resolution, clear-filter images. High-resolution versions of the orange, blue, and violet filter images are produced by:

        orange = high-res clear * low-res orange / low-res clear
        blue   = high-res clear * low-res blue   / low-res clear
        violet = high-res clear * low-res violet / low-res clear

    The spectral responses of the high- and low-resolution clear-filter images cancel, leaving the color, while the spatial frequencies of the two low-resolution images cancel, leaving the high resolution.
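
    The ratio construction above translates directly into a few lines of array code. A minimal sketch, assuming the low-resolution frames have already been registered and resampled to the high-resolution pixel grid:

```python
import numpy as np

def sharpen_color(clear_hi, clear_lo, band_lo, eps=1e-6):
    """High-resolution color band from the ratio construction:
        band_hi = clear_hi * band_lo / clear_lo
    All inputs are assumed already resampled to the same pixel grid;
    eps guards against division by zero in dark pixels.
    """
    return clear_hi * band_lo / (clear_lo + eps)

def sharpen_rgb(clear_hi, clear_lo, orange_lo, blue_lo, violet_lo):
    """Apply the construction to the orange, blue, and violet filters."""
    return tuple(sharpen_color(clear_hi, clear_lo, band)
                 for band in (orange_lo, blue_lo, violet_lo))
```

    This is the same idea as modern ratio-based pan-sharpening: the clear (panchromatic) channel supplies spatial detail, the low-resolution bands supply color.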

  15. Low color distortion adaptive dimming scheme for power efficient LCDs

    NASA Astrophysics Data System (ADS)

    Nam, Hyoungsik; Song, Eun-Ji

    2013-06-01

    This paper demonstrates a color compensation algorithm that reduces the color distortion caused by mismatches between the reference gamma value of a dimming algorithm and the display gamma values of an LCD panel in a low-power adaptive dimming scheme. In 2010, we presented the YrYgYb algorithm, which used the display gamma values extracted from the luminance data of the red, green, and blue sub-pixels (Yr, Yg, and Yb), with simulation results. It was based on an ideal panel model in which the color coordinates were maintained at fixed values over the gray levels. In contrast, this work introduces an XrYgZb color compensation algorithm, which obtains the display gamma values of red, green, and blue from the tri-stimulus data Xr, Yg, and Zb to further reduce the color distortion. Both simulation and measurement results confirm that the XrYgZb algorithm outperforms the previous YrYgYb algorithm. In simulations conducted on a practical panel model derived from measured data, the XrYgZb scheme achieves lower maximum and average color difference values of 3.7743 and 0.6230 over 24 test picture images, compared to 4.864 and 0.7156 for the YrYgYb one. In measurements of a 19-inch LCD panel, the XrYgZb method also achieves smaller color difference values of 1.444072 and 5.588195 over 49 combinations of red, green, and blue data, compared to 1.50578 and 6.00403 for the YrYgYb, at backlight dimming ratios of 0.85 and 0.4.

  16. Digital color image analysis of core

    SciTech Connect

    Digoggio, R.; Burleigh, K.

    1990-05-01

    Geologists often identify sands, shales, or UV-fluorescent zones by their color in photos of slabbed core or sidewalls. Similarly, they observe porosity as blue-dyed epoxy in thin sections. Of course, it is difficult to accurately quantify the amount of sand, shale, fluorescence, or porosity by eye. With digital images, a computer can quantify the area of an image that is close in shade to a selected color, which is particularly useful for determining net sand or net fluorescence in thinly laminated zones. Digital color photography stores a video image as a large array of numbers (512 × 400 × 3 colors) in a computer file. With 32 intensity levels each for red, green, and blue, one can distinguish 32,768 different colors. A fluorescent streak or a shale has some natural variation in color that corresponds to hundreds of very similar shades. Thus, to process a digital image, one picks representative shades of a selected feature (e.g., fluorescence). The computer then calculates the eigenvalues and eigenvectors of the mean-centered covariance matrix of these representative colors. Based on these calculations, it determines which parts of the image have colors similar enough to the representative colors to be considered part of the selected feature. The results show good agreement with independently measured thin-section porosity and with specially prepared images having known amounts of a given color.
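
    The covariance-based similarity test described above can be sketched with a Mahalanobis distance: the eigen-structure of the mean-centered covariance matrix of the representative shades defines an ellipsoid of "similar" colors. The threshold value and regularization below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def fit_color_model(samples):
    """Mean and inverse covariance of representative RGB shades.

    samples: N x 3 array of colors picked from the feature of interest
    (e.g. fluorescence). The mean-centered covariance matrix defines
    the ellipsoid of colors considered part of the feature.
    """
    mean = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False)
    # Regularize so near-degenerate color clouds stay invertible.
    cov += 1e-6 * np.eye(3)
    return mean, np.linalg.inv(cov)

def feature_fraction(img, mean, inv_cov, threshold=3.0):
    """Fraction of image area within `threshold` Mahalanobis units of
    the representative colors -- e.g. a net-fluorescence fraction."""
    diff = img.reshape(-1, 3).astype(float) - mean
    d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)
    return float((d2 <= threshold**2).mean())
```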

  17. Epistatic adaptive evolution of human color vision.

    PubMed

    Yokoyama, Shozo; Xing, Jinyi; Liu, Yang; Faggionato, Davide; Altun, Ahmet; Starmer, William T

    2014-12-01

    Establishing the genotype-phenotype relationship is the key to understanding the molecular mechanism of phenotypic adaptation. This initial step may be untangled by analyzing appropriate ancestral molecules, but it is a daunting task to recapitulate the evolution of the non-additive (epistatic) interactions of amino acids and the function of a protein separately. To adapt to the ultraviolet (UV)-free retinal environment, the short wavelength-sensitive (SWS1) visual pigment in human (human S1) switched from detecting UV to absorbing blue light during the last 90 million years. Mutagenesis experiments of the UV-sensitive pigment in the Boreoeutherian ancestor show that the blue-sensitivity was achieved by seven mutations. The experimental and quantum chemical analyses show that 4,008 of all 5,040 possible evolutionary trajectories are terminated prematurely by containing a dehydrated nonfunctional pigment. Phylogenetic analysis further suggests that human ancestors achieved the blue-sensitivity gradually and almost exclusively by epistasis. When the final stage of spectral tuning of human S1 was underway 45-30 million years ago, the middle and long wavelength-sensitive (MWS/LWS) pigments appeared and so-called trichromatic color vision was established by interprotein epistasis. The adaptive evolution of human S1 differs dramatically from that of orthologous pigments with a major mutational effect used in achieving blue-sensitivity in a fish and several mammalian species and in regaining UV vision in birds. These observations imply that the mechanisms of epistatic interactions must be understood by studying various orthologues in different species that have adapted to various ecological and physiological environments. PMID:25522367

  20. High Image Quality Laser Color Printer

    NASA Astrophysics Data System (ADS)

    Nagao, Kimitoshi; Morimoto, Yoshinori

    1989-07-01

    A laser color printer has been developed to depict continuous-tone color images on photographic color film or color paper with high resolution and fidelity. We used three lasers, He-Cd (441.6 nm), Ar+ (514.5 nm), and He-Ne (632.8 nm), for the blue, green, and red exposures, and employed a drum scanner for two-dimensional scanning. The maximum resolution of our system is 40 c/mm (80 lines/mm), and the accuracy of density reproduction is within 1.0 when measured in color difference, where most observers cannot distinguish the difference. The scanning artifacts and noise are diminished to a visually negligible level. The image quality of the output images compares well to that of actual color photographs and is suitable for photographic image simulations.

  1. A model of incomplete chromatic adaptation for calculating corresponding colors

    SciTech Connect

    Fairchild, M.D.

    1990-01-01

    A new mathematical model of chromatic adaptation for calculating corresponding colors across changes in illumination is formulated and tested. This model consists of a modified von Kries transform that accounts for incomplete levels of adaptation. The model predicts that adaptation will be less complete as the saturation of the adapting stimulus increases and more complete as the luminance of the adapting stimulus increases. The model is tested with experimental results from two different studies and found to be significantly better at predicting corresponding colors than other proposed models. This model represents a first step toward the specification of color appearance across varying conditions. 30 refs., 3 figs., 1 tab.
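
    The structure of a modified von Kries transform with incomplete adaptation can be sketched with a single adaptation-degree parameter. Note the hedge: Fairchild's published model computes the degree of adaptation from the luminance and saturation of the adapting stimulus; here it is exposed as a free parameter D purely for illustration.

```python
import numpy as np

def incomplete_von_kries(lms, lms_white, D=0.8):
    """von Kries scaling with adaptation degree D in [0, 1].

    D = 1 discounts the adapting white completely (classic von Kries);
    D = 0 leaves the signal unchanged. The gain for each cone channel
    interpolates between 1 and 1/white, mimicking partial adaptation.
    (In the published model, D increases with adapting luminance and
    decreases with adapting saturation; here it is a free parameter.)
    """
    lms = np.asarray(lms, float)
    white = np.asarray(lms_white, float)
    gain = D / white + (1.0 - D)
    return lms * gain
```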

  2. An Underwater Color Image Quality Evaluation Metric.

    PubMed

    Yang, Miao; Sowmya, Arcot

    2015-12-01

    Quality evaluation of underwater images is a key goal of underwater video image retrieval and intelligent processing. To date, no metric has been proposed for underwater color image quality evaluation (UCIQE). The special absorption and scattering characteristics of the water medium do not allow direct application of natural color image quality metrics, especially to different underwater environments. In this paper, subjective testing for underwater image quality has been organized. The statistical distribution of underwater image pixels in the CIELab color space, related to the subjective evaluation, indicates that the sharpness and colorfulness factors correlate well with subjective image quality perception. Based on these findings, a new UCIQE metric, which is a linear combination of chroma, saturation, and contrast, is proposed to quantify the non-uniform color cast, blurring, and low contrast that characterize underwater engineering and monitoring images. Experiments are conducted to illustrate the performance of the proposed UCIQE metric and its capability to measure underwater image enhancement results. They show that the proposed metric has comparable performance to the leading natural color image quality metrics and the underwater grayscale image quality metrics available in the literature, and can predict with higher accuracy the relative amount of degradation with similar image content in underwater environments. Importantly, UCIQE is a simple and fast solution for real-time underwater video processing. The effectiveness of the presented measure is also demonstrated by subjective evaluation. The results show better correlation between the UCIQE and the subjective mean opinion score.
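
    The metric's overall shape, a weighted sum of chroma variability, luminance contrast, and mean saturation in CIELab, can be sketched as below. The weights and the exact component definitions (percentile-based contrast, chroma-over-L saturation) are assumptions in this sketch, not the reference implementation.

```python
import numpy as np

# Weights for the three components (treat these values as assumptions).
C1, C2, C3 = 0.4680, 0.2745, 0.2576

def uciqe(L, a, b, eps=1e-6):
    """Linear combination of chroma std, luminance contrast, and mean
    saturation over CIELab channels L, a, b (2-D arrays of equal shape).

    A sketch of the metric's structure: contrast is taken here as the
    spread between the 1st and 99th luminance percentiles, and
    saturation as chroma relative to lightness.
    """
    chroma = np.sqrt(a**2 + b**2)
    sigma_c = chroma.std()                                  # chroma variability
    con_l = np.percentile(L, 99) - np.percentile(L, 1)      # luminance contrast
    mu_s = (chroma / (L + eps)).mean()                      # mean saturation
    return C1 * sigma_c + C2 * con_l + C3 * mu_s
```

    A flat gray image scores zero on all three terms, while a colorful, high-contrast image scores higher, which matches the intended behavior of a no-reference quality index.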

  3. Color image projection based on Fourier holograms.

    PubMed

    Makowski, Michal; Ducin, Izabela; Sypek, Maciej; Siemion, Agnieszka; Siemion, Andrzej; Suszek, Jaroslaw; Kolodziejczyk, Andrzej

    2010-04-15

    A method of color image projection is experimentally validated. It assumes a simultaneous illumination of a spatial light modulator (SLM) with three laser beams converging in a common point on a projection screen. The beams are masked with amplitude filters so that each one illuminates one third of the area of the SLM. A Fourier hologram of a chosen color component of an input image is calculated, and its phase pattern is addressed on a corresponding part of the SLM area. A full-color flat image is formed on the screen as a result of color mixing. Additional techniques of image optimization are applied: time-integral speckle averaging and an off-axis shift of a zero-order peak. Static and animated experimental results of such a color holographic projection with a good image quality are presented.

  4. Do common mechanisms of adaptation mediate color discrimination and appearance? Contrast adaptation

    NASA Astrophysics Data System (ADS)

    Hillis, James M.; Brainard, David H.

    2007-08-01

    Are effects of background contrast on color appearance and sensitivity controlled by the same mechanism of adaptation? We examined the effects of background color contrast on color appearance and on color-difference sensitivity under well-matched conditions. We linked the data using Fechner's hypothesis that the rate of apparent stimulus change is proportional to sensitivity and examined a family of parametric models of adaptation. Our results show that both appearance and discrimination are consistent with the same mechanism of adaptation.

  6. Image-based color ink diffusion rendering.

    PubMed

    Wang, Chung-Ming; Wang, Ren-Jie

    2007-01-01

    This paper proposes an image-based painterly rendering algorithm for automatically synthesizing an image with color ink diffusion. We suggest a mathematical model with a physical basis to simulate the phenomenon of color colloidal ink diffusing into absorbent paper. Our algorithm contains three main parts: a feature extraction phase, a Kubelka-Munk (KM) color mixing phase, and a color ink diffusion synthesis phase. In the feature extraction phase, the information of the reference image is simplified by luminance division and color segmentation. In the color mixing phase, the KM theory is employed to approximate the result when one pigment is painted upon another pigment layer. Then, in the color ink diffusion synthesis phase, the physically-based model that we propose is employed to simulate the result of color ink diffusion in absorbent paper using a texture synthesis technique. Our image-based color ink diffusion rendering (IBCIDR) algorithm eliminates the drawback of conventional Chinese ink simulations, which are limited to the black ink domain, and our approach demonstrates that, without using any strokes, a color image can be automatically converted to the diffused ink style with a visually pleasing appearance.

  7. Mobile image based color correction using deblurring

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Xu, Chang; Boushey, Carol; Zhu, Fengqing; Delp, Edward J.

    2015-03-01

    Dietary intake, the process of determining what someone eats during the course of a day, provides valuable insights for mounting intervention programs for prevention of many chronic diseases such as obesity and cancer. The goal of the Technology Assisted Dietary Assessment (TADA) System, developed at Purdue University, is to automatically identify and quantify foods and beverages consumed, utilizing food images acquired with a mobile device. Color correction serves as a critical step to ensure accurate food identification and volume estimation. We make use of a specifically designed color checkerboard (i.e., a fiducial marker) to calibrate the imaging system so that the variations of food appearance under different lighting conditions can be determined. In this paper, we propose an image quality enhancement technique that combines image deblurring and color correction. The contribution consists of introducing an automatic camera-shake removal method using a saliency map and improving the polynomial color correction model using the LMS color space.
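
    A polynomial color correction of the kind mentioned above can be sketched as a least-squares fit from the captured checkerboard patch colors to their known reference values. The second-order RGB basis below is one common choice and an assumption of this sketch; the paper's improved model additionally involves the LMS color space.

```python
import numpy as np

def _poly_terms(pixels):
    """Second-order polynomial expansion of N x 3 RGB values:
    [1, r, g, b, r*g, r*b, g*b, r^2, g^2, b^2]."""
    r, g, b = pixels.T
    return np.stack([np.ones_like(r), r, g, b,
                     r * g, r * b, g * b, r**2, g**2, b**2], axis=1)

def fit_polynomial_correction(measured, reference):
    """Least-squares fit of a 2nd-order polynomial color correction.

    measured, reference: N x 3 RGB values of the checkerboard patches
    as captured and as known, respectively. Returns a 10 x 3 matrix
    mapping polynomial terms to corrected RGB.
    """
    coeffs, *_ = np.linalg.lstsq(_poly_terms(measured), reference, rcond=None)
    return coeffs

def apply_correction(img, coeffs):
    """Apply the fitted correction to an H x W x 3 image."""
    pixels = img.reshape(-1, 3).astype(float)
    return (_poly_terms(pixels) @ coeffs).reshape(img.shape)
```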

  8. Color filter array demosaicing: an adaptive progressive interpolation based on the edge type

    NASA Astrophysics Data System (ADS)

    Dong, Qiqi; Liu, Zhaohui

    2015-10-01

    A color filter array (CFA) is one of the key components that allow single-sensor digital cameras to produce color images. The Bayer CFA is the most commonly used pattern. In this array structure, the sampling frequency of green is twice that of red or blue, which is consistent with the sensitivity of human eyes to colors. However, each sensor pixel samples only one of the three primary color values. To render a full-color image, an interpolation process, commonly referred to as CFA demosaicing, is required to estimate the other two missing color values at each pixel. In this paper, we explore an adaptive progressive interpolation algorithm based on edge type. The proposed demosaicing method consists of two successive steps: an interpolation step that estimates missing color values according to the various edges, and a post-processing step using iterative interpolation.
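
    The edge-adaptive idea can be illustrated on the green channel alone: at each missing site, compare horizontal and vertical gradients and interpolate along the smaller one, i.e. along the likely edge. The GRBG-style layout (green where row + col is even) and the plain two-way classification are simplifying assumptions of this sketch, not the paper's full method.

```python
import numpy as np

def interpolate_green(bayer):
    """Edge-directed green interpolation for a Bayer mosaic where
    green samples lie at pixels with (row + col) even (assumed layout).

    At each missing green site, horizontal and vertical green gradients
    are compared and interpolation follows the direction of the smaller
    gradient; borders are left unprocessed for simplicity.
    """
    g = bayer.astype(float).copy()
    H, W = g.shape
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            if (i + j) % 2 == 0:
                continue                      # green already sampled here
            dh = abs(g[i, j - 1] - g[i, j + 1])   # horizontal gradient
            dv = abs(g[i - 1, j] - g[i + 1, j])   # vertical gradient
            if dh < dv:
                g[i, j] = (g[i, j - 1] + g[i, j + 1]) / 2
            elif dv < dh:
                g[i, j] = (g[i - 1, j] + g[i + 1, j]) / 2
            else:                             # flat region: 4-neighbor mean
                g[i, j] = (g[i, j - 1] + g[i, j + 1] +
                           g[i - 1, j] + g[i + 1, j]) / 4
    return g
```

    Across a sharp vertical edge, this reproduces the true green values exactly, whereas a non-adaptive bilinear average would blur the boundary.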

  9. Measurement and control of color image quality

    NASA Astrophysics Data System (ADS)

    Schneider, Eric; Johnson, Kate; Wolin, David

    1998-12-01

    Color hardcopy output is subject to many of the same image quality concerns as monochrome hardcopy output. Line and dot quality, uniformity, halftone quality, and the presence of bands, spots, or deletions are just a few of the attributes shared by both color and monochrome output. Although the measurement of color requires specialized instrumentation, the techniques used to assess color-dependent image quality attributes on color hardcopy output are based on many of the same techniques as those used in monochrome image quality quantification. In this paper we present several different aspects of color quality assessment in both R&D and production environments, along with several examples of color quality measurements similar to those currently being used at Hewlett-Packard to characterize color devices and to verify system performance. We then discuss some important considerations for choosing appropriate color quality measurement equipment for use in either R&D or production environments. Finally, we discuss the critical relationship between objective measurements and human perception.

  10. Statistical pressure snakes based on color images.

    SciTech Connect

    Schaub, Hanspeter

    2004-05-01

    The traditional mono-color statistical pressure snake was modified to function on a color image with target errors defined in HSV color space. Large variations in target lighting and shading are permitted if the target color is only specified in terms of hue. This method works well with custom targets where the target is surrounded by a color of a very different hue. A significant robustness increase is achieved in the computer vision capability to track a specific target in an unstructured, outdoor environment. By specifying the target color to contain hue, saturation and intensity values, it is possible to establish a reasonably robust method to track general image features of a single color. This method is convenient to allow the operator to select arbitrary targets, or sections of a target, which have a common color. Further, a modification to the standard pixel averaging routine is introduced which allows the target to be specified not only in terms of a single color, but also using a list of colors. These algorithms were tested and verified by using a web camera attached to a personal computer.
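
    The hue-based target error that drives such a snake can be sketched as follows. The wrap-around hue distance and the saturation/value gating thresholds are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def rgb_to_hsv(rgb):
    """Vectorized RGB (floats in [0,1]) to hue (degrees), saturation, value."""
    mx = rgb.max(axis=-1)
    mn = rgb.min(axis=-1)
    d = mx - mn
    safe = np.where(d > 0, d, 1.0)            # avoid division by zero
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    h = np.zeros_like(mx)
    h = np.where(mx == r, ((g - b) / safe) % 6, h)
    h = np.where(mx == g, (b - r) / safe + 2, h)
    h = np.where(mx == b, (r - g) / safe + 4, h)
    h = np.where(d > 0, h * 60.0, 0.0)
    s = np.where(mx > 0, d / np.where(mx > 0, mx, 1.0), 0.0)
    return h, s, mx

def hue_error(img_rgb, target_hue_deg, sat_min=0.2, val_min=0.2):
    """Per-pixel target error in HSV hue, normalized to [0, 1].

    Hue distance wraps around the color circle; pixels too gray or too
    dark to carry a reliable hue are assigned the maximum error.
    """
    h, s, v = rgb_to_hsv(img_rgb.astype(float) / 255.0)
    d = np.abs(h - target_hue_deg)
    d = np.minimum(d, 360.0 - d)              # wrap around the circle
    err = d / 180.0
    return np.where((s >= sat_min) & (v >= val_min), err, 1.0)
```

    Robustness to lighting comes from the gating: shading changes value and saturation but leaves hue, and therefore the error, largely untouched.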

  11. Dimensionality of color space in natural images.

    PubMed

    Buades, Antoni; Lisani, Jose-Luis; Morel, Jean-Michel

    2011-02-01

    The color histogram (or color cloud) of a digital image displays the colors present in an image regardless of their spatial location and can be visualized in (R,G,B) coordinates. Therefore, it contains essential information about the structure of colors in natural scenes. The analysis and visual exploration of this structure is difficult: the color cloud being thick, its denser points are hidden in the clutter, making it impossible to properly visualize the cloud density. This paper proposes a visualization method that also enables one to validate a general model for color clouds. It argues first, by physical arguments, that the color cloud must be essentially a two-dimensional (2D) manifold. A color cloud-filtering algorithm is proposed to reveal this 2D structure. A quantitative analysis shows that the reconstructed 2D manifold is strikingly close to the color cloud and only marginally depends on the filtering parameter. Thanks to this algorithm, it is finally possible to visualize the color cloud density as a gray-level function defined on the 2D manifold.

  12. Color Image Segmentation in a Quaternion Framework

    PubMed Central

    Subakan, Özlem N.; Vemuri, Baba C.

    2010-01-01

    In this paper, we present a feature/detail preserving color image segmentation framework using Hamiltonian quaternions. First, we introduce a novel Quaternionic Gabor Filter (QGF) which can combine the color channels and the orientations in the image plane. Using the QGFs, we extract the local orientation information in the color images. Second, in order to model this derived orientation information, we propose a continuous mixture of appropriate hypercomplex exponential basis functions. We derive a closed form solution for this continuous mixture model. This analytic solution is in the form of a spatially varying kernel which, when convolved with the signed distance function of an evolving contour (placed in the color image), yields a detail preserving segmentation. PMID:21243101

  13. Compression of color-mapped images

    NASA Technical Reports Server (NTRS)

    Hadenfeldt, A. C.; Sayood, Khalid

    1992-01-01

    In a standard image coding scenario, pixel-to-pixel correlation nearly always exists in the data, especially if the image is a natural scene. This correlation is what allows predictive coding schemes (e.g., DPCM) to perform efficient compression. In a color-mapped image, the values stored in the pixel array are no longer directly related to the pixel intensity. Two color indices which are numerically adjacent (close) may point to two very different colors. The correlation still exists, but only via the colormap. This fact can be exploited by sorting the color map to reintroduce the structure. The sorting of colormaps is studied and it is shown how the resulting structure can be used in both lossless and lossy compression of images.
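A minimal sketch of the colormap-sorting idea, assuming a simple luminance sort (the paper studies the sorting problem more generally): reorder the palette so that numerically close indices map to similar colors, then remap the index plane accordingly.

```python
import numpy as np

def sort_colormap(palette, indices):
    """Reorder a palette by luminance and remap the index image.

    After sorting, numerically close indices point to perceptually
    closer colors, restoring the correlation that predictive coders
    exploit.  A luminance sort is one simple ordering; the paper
    studies the sorting problem more generally.
    """
    luma = palette @ np.array([0.299, 0.587, 0.114])
    order = np.argsort(luma)        # new position -> old palette index
    inverse = np.argsort(order)     # old palette index -> new position
    return palette[order], inverse[indices]

palette = np.array([[255, 255, 255],    # 0: white
                    [0, 0, 0],          # 1: black
                    [128, 128, 128]],   # 2: gray
                   dtype=float)
indices = np.array([[0, 1],
                    [2, 1]])
new_pal, new_idx = sort_colormap(palette, indices)
# new_pal runs black, gray, white; new_idx still decodes the same colors
```

Decoding through the sorted palette reproduces the original image exactly; only the index statistics change, which is what makes DPCM-style coding of the index plane effective again.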

  14. Adaptive wiener image restoration kernel

    DOEpatents

    Yuan, Ding

    2007-06-05

    A method and device for restoration of electro-optical image data using an adaptive Wiener filter begins by constructing the imaging system's Optical Transfer Function and the Fourier transforms of the noise and the image. A spatial representation of the imaged object is restored by spatial convolution of the image with a Wiener restoration kernel.
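A generic frequency-domain Wiener restoration can be sketched as below, under the usual assumptions (known OTF, assumed noise-to-signal power ratio); the patent's adaptive kernel construction is not reproduced.

```python
import numpy as np

def wiener_restore(blurred, otf, nsr=0.01):
    """Frequency-domain Wiener restoration.

    otf: the imaging system's Optical Transfer Function, same shape as
    the image; nsr: assumed noise-to-signal power ratio.  A generic
    Wiener filter sketch -- the patent's adaptive kernel construction
    is not reproduced.
    """
    B = np.fft.fft2(blurred)
    W = np.conj(otf) / (np.abs(otf) ** 2 + nsr)   # Wiener filter
    return np.real(np.fft.ifft2(W * B))

# Sanity check: identity OTF and zero noise return the image unchanged
img = np.arange(16.0).reshape(4, 4)
restored = wiener_restore(img, otf=np.ones((4, 4)), nsr=0.0)
```

Multiplying by `W` in the frequency domain is equivalent to the spatial convolution with a Wiener kernel described in the record.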

  15. Adaptive color rendering of maps for users with color vision deficiencies

    NASA Astrophysics Data System (ADS)

    Kvitle, Anne Kristin; Green, Phil; Nussbaum, Peter

    2015-01-01

    A map is an information design object for which canonical colors for the most common elements are well established. For a CVD observer, it may be difficult to discriminate between such elements - for example, it may be hard to distinguish a red road from a green landscape on the basis of color alone. We address this problem through an adaptive color schema in which the conspicuity of map elements to the individual user is maximized. This paper outlines a method to perform adaptive color rendering of map information for users with color vision deficiencies. The palette selection method is based on a pseudo-color palette generation technique which constrains colors to those which lie on the boundary of a reference object color gamut. A user performs a color vision discrimination task, and based on the results of the test, a palette of colors is selected using the pseudo-color palette generation method. This ensures that the perceived difference between palette elements is high while retaining the canonical colors of well-known elements as far as possible. We show examples of color palettes computed for a selection of normal and CVD observers, together with maps rendered using these palettes.

  16. Color image enhancement based on HVS and MSRCR

    NASA Astrophysics Data System (ADS)

    Xue, Rong kun; Li, Yu feng

    2015-10-01

    Under frequently occurring inclement weather such as clouds, fog, and rain, the light intensity falling on objects drops sharply, making the captured scenes unclear, of poor visual quality, and low in contrast. To improve the overall quality of such images, especially badly illuminated ones, this paper proposes a new color image enhancement algorithm based on multi-scale Retinex theory with a color recovering factor (MSRCR) and the human visual system (HVS). It effectively addresses the color balance of digital images by removing the influence of the illumination and obtaining component images that reflect the reflectance of the object surface, while reducing the impact of non-artificial factors and overcoming ringing effects and human interference. Experimental comparisons, combining evaluation parameters of the enhanced image such as variance, average gradient, and sharpness, against traditional enhancement methods such as histogram equalization and adaptive histogram equalization, show that the MSRCR algorithm is effective in improving image contrast, detail enhancement, and color fidelity.
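The multi-scale Retinex core that MSRCR builds on can be sketched as follows, without the color recovering factor or the HVS-based processing that are the paper's contributions; `scipy.ndimage.gaussian_filter` supplies the surround estimate at each scale.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def msr(channel, sigmas=(15, 80, 250)):
    """Multi-scale retinex of a single channel.

    Averages log(I) - log(Gaussian surround) over several scales.
    Bare-bones sketch: the paper's MSRCR adds the color recovering
    factor and HVS-based processing on top of this core.
    """
    x = channel.astype(float) + 1.0              # avoid log(0)
    out = np.zeros_like(x)
    for s in sigmas:
        surround = gaussian_filter(x, sigma=s)
        out += np.log(x) - np.log(surround)
    return out / len(sigmas)

# A perfectly flat region carries no reflectance detail: MSR returns ~0
flat = np.full((32, 32), 100.0)
r = msr(flat, sigmas=(2, 4))
```

Because the output compares each pixel to its smoothed surround, a uniform illumination field cancels out, which is how the method removes the influence of the light source.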

  17. Multichannel linear predictive coding of color images

    NASA Astrophysics Data System (ADS)

    Maragos, P. A.; Mersereau, R. M.; Schafer, R. W.

    This paper reports on a preliminary study applying single-channel (scalar) and multichannel (vector) 2-D linear prediction to color image modeling and coding. The novel idea of a multi-input single-output 2-D ADPCM coder is also introduced. The results of this study indicate that texture information in multispectral images can be represented by linear prediction coefficients or matrices, whereas the prediction error conveys edge information. Moreover, by using single-channel edge information, the investigators obtained, from original color images of 24 bits/pixel, reconstructed images of good quality at information rates of 1 bit/pixel or less.

  18. Image quality and automatic color equalization

    NASA Astrophysics Data System (ADS)

    Chambah, M.; Rizzi, A.; Saint Jean, C.

    2007-01-01

    In the professional movie field, image quality is mainly judged visually: experts and technicians judge and determine the quality of the film images during the calibration (post-production) process. As a consequence, the quality of a restored movie is also estimated subjectively by experts [26,27]. On the other hand, objective quality metrics do not necessarily correlate well with perceived quality [28]. Moreover, some measures assume that a reference in the form of an "original" exists to compare against, which prevents their use in the digital restoration field, where often there is no reference. That is why subjective evaluation has been the most used and most efficient approach up to now. But subjective assessment is expensive and time consuming, and hence does not meet the economic requirements of the field [29,25]. Thus, reliable automatic methods for visual quality assessment are needed in the field of digital film restoration. Ideally, a quality assessment system would perceive and measure image or video impairments just like a human being. The ACE method, for Automatic Color Equalization [1,2], is an algorithm for unsupervised enhancement of digital images. Like our vision system, ACE is able to adapt to widely varying lighting conditions and to extract visual information from the environment efficaciously. We present in this paper the use of ACE as the basis of a reference-free image quality metric. ACE output is an estimate of our visual perception of a scene. The assumption, tested in other papers [3,4], is that ACE, by enhancing images toward the way our vision system would perceive them, increases their overall perceived quality. The basic idea proposed in this paper is that ACE output can differ from the input more or less according to the visual quality of the input image. In other words, an image appears good if it is close to the visual appearance we (estimate to) have of it; conversely, bad-quality images will need "more filtering".

  19. Functional photoreceptor loss revealed with adaptive optics: an alternate cause of color blindness.

    PubMed

    Carroll, Joseph; Neitz, Maureen; Hofer, Heidi; Neitz, Jay; Williams, David R

    2004-06-01

    There is enormous variation in the X-linked L/M (long/middle wavelength sensitive) gene array underlying "normal" color vision in humans. This variability has been shown to underlie individual variation in color matching behavior. Recently, red-green color blindness has also been shown to be associated with distinctly different genotypes. This has opened the possibility that there may be important phenotypic differences within classically defined groups of color blind individuals. Here, adaptive optics retinal imaging has revealed a mechanism for producing dichromatic color vision in which the expression of a mutant cone photopigment gene leads to the loss of the entire corresponding class of cone photoreceptor cells. Previously, the theory that common forms of inherited color blindness could be caused by the loss of photoreceptor cells had been discounted. We confirm that remarkably, this loss of one-third of the cones does not impair any aspect of vision other than color.

  20. Embedding color watermarks in color images based on Schur decomposition

    NASA Astrophysics Data System (ADS)

    Su, Qingtang; Niu, Yugang; Liu, Xianxi; Zhu, Yu

    2012-04-01

    In this paper, a blind dual color image watermarking scheme based on Schur decomposition is introduced. This is the first scheme to use Schur decomposition to embed a color image watermark in a color host image, as opposed to using a binary image as the watermark. By analyzing the 4 × 4 unitary matrix U obtained via Schur decomposition, we find a strong correlation between the element in the second row, first column and the element in the third row, first column. This property can be exploited for embedding and extracting the watermark in a blind manner. Since Schur decomposition is an intermediate step of SVD, the proposed method requires fewer computations. Experimental results show that the proposed scheme is robust against most common attacks, including JPEG lossy compression, JPEG 2000 compression, low-pass filtering, cropping, noise addition, blurring, rotation, scaling, and sharpening. Moreover, the proposed algorithm outperforms the closely related SVD-based algorithm and the spatial-domain algorithm.
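The decomposition step can be illustrated with `scipy.linalg.schur`. The sketch below only verifies the factorization of a 4 × 4 block and exposes the two U-matrix elements whose relation the embedding rule perturbs; the embedding rule itself is the paper's own and is not reproduced.

```python
import numpy as np
from scipy.linalg import schur

# Real Schur decomposition of a 4x4 pixel block: A = Z T Z^T, with Z
# orthogonal and T quasi-upper-triangular.  The scheme embeds watermark
# bits by adjusting the relation between Z[1, 0] and Z[2, 0]; here we
# only verify the factorization (the embedding rule is the paper's own).
rng = np.random.default_rng(0)
A = rng.uniform(0.0, 255.0, (4, 4))
T, Z = schur(A)                   # scipy returns (T, Z)
block_ok = np.allclose(Z @ T @ Z.T, A)
ratio = Z[1, 0] / Z[2, 0]         # quantity an embedder would perturb
```

Computing only the Schur form, rather than the full SVD that contains it as an intermediate step, is where the claimed computational saving comes from.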

  1. How Phoenix Creates Color Images (Animation)

    NASA Technical Reports Server (NTRS)

    2008-01-01

    [figure removed for brevity, see original site]

    This simple animation shows how a color image is made from images taken by Phoenix.

    The Surface Stereo Imager captures the same scene with three different filters. The images are sent to Earth in black and white and the color is added by mission scientists.

    By contrast, consumer digital cameras and cell phones have filters built in and do all of the color processing within the camera itself.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  2. Outstanding-objects-oriented color image segmentation using fuzzy logic

    NASA Astrophysics Data System (ADS)

    Hayasaka, Rina; Zhao, Jiying; Matsushita, Yutaka

    1997-10-01

    This paper presents a novel fuzzy-logic-based color image segmentation scheme focusing on objects that stand out to human eyes. The scheme first segments the image into rough fuzzy regions, chooses visually significant regions, and conducts fine segmentation on the chosen regions. It not only reduces the computational load, but also makes contour detection easy because the rough object outlines have already been determined. The scheme reflects human perception, and it can be used efficiently in automatic extraction of image retrieval keys, robot vision, and region-adaptive image compression.

  3. Real-Time Adaptive Color Segmentation by Neural Networks

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.

    2004-01-01

    Artificial neural networks that would utilize the cascade error projection (CEP) algorithm have been proposed as a means of autonomous, real-time, adaptive color segmentation of images that change with time. In the original intended application, such a neural network would be used to analyze digitized color video images of terrain on a remote planet as viewed from an uninhabited spacecraft approaching the planet. During descent toward the surface of the planet, information on the segmentation of the images into differently colored areas would be updated adaptively in real time to capture changes in contrast, brightness, and resolution, all in an effort to identify a safe and scientifically productive landing site and provide control feedback to steer the spacecraft toward that site. Potential terrestrial applications include monitoring images of crops to detect insect invasions and monitoring of buildings and other facilities to detect intruders. The CEP algorithm is reliable and is well suited to implementation in very-large-scale integrated (VLSI) circuitry. It was chosen over other neural-network learning algorithms because it is better suited to real-time learning: it provides a self-evolving neural-network structure, requires fewer iterations to converge, and is more tolerant of low resolution (that is, fewer bits) in the quantization of neural-network synaptic weights. Consequently, a CEP neural network learns relatively quickly, and the circuitry needed to implement it is relatively simple. Like other neural networks, a CEP neural network includes an input layer, hidden units, and output units (see figure). As in other neural networks, a CEP network is presented with a succession of input training patterns, giving rise to a set of outputs that are compared with the desired outputs. Also as in other neural networks, the synaptic weights are updated iteratively in an effort to bring the outputs closer to target values. A distinctive feature of the CEP neural

  4. Color image fusion for concealed weapon detection

    NASA Astrophysics Data System (ADS)

    Toet, Alexander

    2003-09-01

    Recent advances in passive and active imaging sensor technology offer the potential to detect weapons that are concealed underneath a person's clothing or carried along in bags. Although the concealed weapons can sometimes easily be detected, it can be difficult to perceive their context, due to the non-literal nature of these images. Especially for dynamic crowd surveillance purposes it may be impossible to rapidly assess with certainty which individual in the crowd is the one carrying the observed weapon. Sensor fusion is an enabling technology that may be used to solve this problem. Through fusion the signal of the sensor that depicts the weapon can be displayed in the context provided by a sensor of a different modality. We propose an image fusion scheme in which non-literal imagery can be fused with standard color images such that the result clearly displays the observed weapons in the context of the original color image. The procedure is such that the relevant contrast details from the non-literal image are transferred to the color image without altering the original color distribution of this image. The result is a natural looking color image that fluently combines all details from both input sources. When an observer who performs a dynamic crowd surveillance task detects a weapon in the scene, he will also be able to quickly determine which person in the crowd is actually carrying the observed weapon (e.g. "the man with the red T-shirt and blue jeans"). The method is illustrated by the fusion of thermal 8-12 μm imagery with standard RGB color images.
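The color-preserving fusion idea, injecting non-literal contrast into luminance only while leaving chrominance untouched, can be sketched as follows. This is a simplified stand-in for the paper's scheme; `fuse_luminance` and the YCbCr-style luminance split are illustrative choices, not the authors' exact method.

```python
import numpy as np

def fuse_luminance(rgb, thermal, alpha=0.5):
    """Blend thermal contrast into a color image's luminance only.

    The chrominance (channel differences), and so the original color
    distribution, is untouched.  A simplified stand-in for the paper's
    fusion scheme; the YCbCr-style luminance split is an illustrative
    choice, not the authors' exact method.
    """
    y = rgb @ np.array([0.299, 0.587, 0.114])     # luminance
    y_new = (1.0 - alpha) * y + alpha * thermal
    # Apply the luminance change equally to all three channels
    return rgb + (y_new - y)[..., None]

# Mid-gray scene; the thermal channel shows one hot spot
rgb = np.full((2, 2, 3), 0.5)
thermal = np.zeros((2, 2))
thermal[0, 0] = 1.0
fused = fuse_luminance(rgb, thermal)
```

The hot spot brightens its pixel while the channel differences stay zero everywhere, so colors (here neutral gray) are preserved around the injected detail.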

  5. Color structured light imaging of skin

    NASA Astrophysics Data System (ADS)

    Yang, Bin; Lesicko, John; Moy, Austin; Reichenberg, Jason; Sacks, Michael; Tunnell, James W.

    2016-05-01

    We illustrate wide-field imaging of skin using a structured light (SL) approach that highlights the contrast from superficial tissue scattering. Setting the spatial frequency of the SL in a regime that limits the penetration depth effectively gates the image for photons that originate from the skin surface. Further, rendering the SL images in a color format provides an intuitive format for viewing skin pathologies. We demonstrate this approach in skin pathologies using a custom-built handheld SL imaging system.

  6. Image mosaic with color and brightness correction

    NASA Astrophysics Data System (ADS)

    Zhao, Yili; Xu, Dan; Pan, Zhigeng

    2004-03-01

    An image mosaic builds a large field of view from a sequence of smaller images. It is produced by registering, projectively warping, resampling, and compositing a series of images. Because many factors can cause color and brightness variations when the images are taken, the result may suffer from mismatches and poor stitching. Although an image mosaic can be adjusted manually using photo editors such as Photoshop, this is not only tedious but also requires skill, knowledge, and experience. Automatic adjustment is therefore desirable. By converting images to lαβ space and applying a statistical analysis, color and brightness correction can be done automatically and an improved image mosaic can be obtained.
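The statistical correction can be sketched as per-channel mean/std matching between overlapping frames. The paper applies it in lαβ space; the fixed 3 × 3 RGB↔lαβ transform matrices are omitted here, and the matching is shown per channel for brevity.

```python
import numpy as np

def match_stats(src, ref):
    """Match each channel's mean and std of src to those of ref.

    The paper performs this statistical correction after converting to
    lαβ space; the fixed 3x3 RGB<->lαβ matrices are omitted here and
    the matching is shown per channel for brevity.
    """
    out = np.empty(src.shape, dtype=float)
    for c in range(src.shape[-1]):
        s, r = src[..., c], ref[..., c]
        scale = r.std() / (s.std() + 1e-12)
        out[..., c] = (s - s.mean()) * scale + r.mean()
    return out

# A dimmed, offset copy of a reference patch is corrected back exactly
ref = np.random.default_rng(1).uniform(0.0, 1.0, (8, 8, 3))
src = 0.5 * ref + 0.1
corrected = match_stats(src, ref)
```

Any per-channel affine exposure difference (gain and offset) between two frames is undone exactly by this transform, which is why mean/std matching removes visible seams along mosaic boundaries.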

  7. Color image interpolation using vector rational filters

    NASA Astrophysics Data System (ADS)

    Cheikh, Faouzi A.; Khriji, Lazhar; Gabbouj, Moncef; Ramponi, Giovanni

    1998-04-01

    Rational filters are extended to multichannel signal processing and applied to the image interpolation problem. The proposed nonlinear interpolator exhibits desirable properties such as edge and detail preservation. In this approach the pixels of the color image are considered as 3-component vectors in the color space. Therefore, the inherent correlation that exists between the different color components is not ignored, leading to better image quality than that obtained by component-wise processing. Simulations show that the resulting edges obtained using vector rational filters (VRF) are free from the blockiness and jaggedness that are usually present in images interpolated using linear and even some nonlinear techniques, e.g., vector median hybrid filters (VFMH).

  8. New approach of color image quantization based on multidimensional directory

    NASA Astrophysics Data System (ADS)

    Chang, Chin-Chen; Su, Yuan-Yuan

    2003-04-01

    Color image quantization is a strategy in which a smaller number of colors is used to represent an image. The objective is to make the quantized image approximate the original true-color image as closely as possible. The technology is widely used in non-true-color displays and in color printers that cannot reproduce a large number of different colors. The main problem color image quantization faces is how to represent the image with fewer colors, so it is very important to choose a suitable palette for an indexed color image. In this paper, we propose a new approach that employs the concept of a Multi-Dimensional Directory (MDD) together with one cycle of the LBG algorithm to create a high-quality indexed color image. Compared with approaches such as VQ, ISQ, and Photoshop v.5, our approach not only acquires a high-quality image but also shortens the operation time.
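The LBG iteration at the core of such palette design is k-means on pixel vectors; a minimal sketch follows (the MDD acceleration structure that is the paper's contribution is not reproduced).

```python
import numpy as np

def quantize(rgb, k=4, iters=10, seed=0):
    """Palette design by k-means (the LBG iteration on pixel vectors).

    Returns (palette, index image).  A generic sketch: the paper's
    contribution is accelerating the palette search with its
    Multi-Dimensional Directory, which is not reproduced here.
    """
    pts = rgb.reshape(-1, 3).astype(float)
    rng = np.random.default_rng(seed)
    pal = pts[rng.choice(len(pts), size=k, replace=False)]
    for _ in range(iters):
        d = ((pts[:, None, :] - pal[None, :, :]) ** 2).sum(axis=-1)
        idx = d.argmin(axis=1)                 # nearest palette entry
        for j in range(k):
            if (idx == j).any():
                pal[j] = pts[idx == j].mean(axis=0)
    return pal, idx.reshape(rgb.shape[:2])

# Two pure colors and k=2: the palette recovers them exactly
img = np.zeros((2, 4, 3))
img[:, 2:] = [1.0, 0.0, 0.0]
pal, idx = quantize(img, k=2)
```

Each pixel is then stored as a small index into the palette; with more colors than palette entries, the assignment step is what dominates the cost and what the MDD structure is designed to speed up.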

  9. Motion detection in color image sequence and shadow elimination

    NASA Astrophysics Data System (ADS)

    Shen, Jun

    2004-01-01

    Most research has concentrated on motion detection in gray-value image sequences, with methods based on background subtraction or on temporal gray-value derivatives. Methods based on background subtraction, including auto-adaptive ones, meet difficulties in the presence of illumination changes and slowly moving objects, and need to be re-initialized from time to time. Methods based on temporal derivatives are in general sensitive to noise. Since color images contain much richer information than gray-value ones, it is natural to use them to better detect moving objects. In this paper, we address the problem of motion detection in color image sequences and the problems of illumination changes and shadow elimination. Our motion detection method is based on fuzzy segmentation of the color difference image with the help of non-symmetric π membership functions. The elimination of false moving objects detected due to illumination changes is realized by combining the background subtraction method with the temporal derivative method and motion continuity. Shadows are removed by comparing the color of mobile pixels detected in the current frame with that in the preceding frame in HSL color space. Experimental results are reported.

  10. The synthesis and analysis of color images

    NASA Technical Reports Server (NTRS)

    Wandell, B. A.

    1985-01-01

    A method is described for performing the synthesis and analysis of digital color images. The method is based on two principles. First, image data are represented with respect to the separate physical factors, surface reflectance and the spectral power distribution of the ambient light, that give rise to the perceived color of an object. Second, the encoding is made efficient by using a basis expansion for the surface spectral reflectance and spectral power distribution of the ambient light that takes advantage of the high degree of correlation across the visible wavelengths normally found in such functions. Within this framework, the same basic methods can be used to synthesize image data for color display monitors and printed materials, and to analyze image data into estimates of the spectral power distribution and surface spectral reflectances. The method can be applied to a variety of tasks. Examples of applications include the color balancing of color images, and the identification of material surface spectral reflectance when the lighting cannot be completely controlled.
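The basis-expansion idea can be illustrated with a least-squares fit. The 3-term polynomial basis below is a synthetic stand-in — the paper's bases are derived empirically from measured reflectance and illuminant spectra, exploiting their smoothness across the visible wavelengths.

```python
import numpy as np

# Surface spectral reflectance modeled as a basis expansion:
#   s(λ) ≈ Σ_i w_i B_i(λ)
# The 3-term polynomial basis here is a synthetic stand-in (the paper
# uses empirically derived bases); least squares recovers the weights
# from samples across the visible wavelengths.
lam = np.linspace(400, 700, 31)          # wavelength samples, nm
x = (lam - 550.0) / 150.0                # normalized coordinate
B = np.stack([np.ones_like(x), x, x**2], axis=1)
w_true = np.array([0.5, 0.2, -0.1])
s = B @ w_true                           # smooth synthetic reflectance
w_fit, *_ = np.linalg.lstsq(B, s, rcond=None)
```

Because smooth spectra are well captured by a handful of basis functions, a 31-sample spectrum collapses to three weights, which is the efficiency the encoding relies on.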

  11. Color constancy and the natural image

    NASA Technical Reports Server (NTRS)

    Wandell, Brian A.

    1989-01-01

    Color vision is useful only if it is possible to identify an object's color across many viewing contexts. Here, consideration is given to recent results on how to estimate the surface reflectance function of an object from image data, despite (1) uncertainty in the spectral power distribution of the ambient lighting, and (2) uncertainty about what other surfaces will be in the field of view.

  12. Habitual wearers of colored lenses adapt more rapidly to the color changes the lenses produce.

    PubMed

    Engel, Stephen A; Wilkins, Arnold J; Mand, Shivraj; Helwig, Nathaniel E; Allen, Peter M

    2016-08-01

    The visual system continuously adapts to the environment, allowing it to perform optimally in a changing visual world. One large change occurs every time one takes off or puts on a pair of spectacles. It would be advantageous for the visual system to learn to adapt particularly rapidly to such large, commonly occurring events, but whether it can do so remains unknown. Here, we tested whether people who routinely wear spectacles with colored lenses increase how rapidly they adapt to the color shifts their lenses produce. Adaptation to a global color shift causes the appearance of a test color to change. We measured changes in the color that appeared "unique yellow", that is neither reddish nor greenish, as subjects donned and removed their spectacles. Nine habitual wearers and nine age-matched control subjects judged the color of a small monochromatic test light presented with a large, uniform, whitish surround every 5s. Red lenses shifted unique yellow to more reddish colors (longer wavelengths), and greenish lenses shifted it to more greenish colors (shorter wavelengths), consistent with adaptation "normalizing" the appearance of the world. In controls, the time course of this adaptation contained a large, rapid component and a smaller gradual one, in agreement with prior results. Critically, in habitual wearers the rapid component was significantly larger, and the gradual component significantly smaller than in controls. The total amount of adaptation was also larger in habitual wearers than in controls. These data suggest strongly that the visual system adapts with increasing rapidity and strength as environments are encountered repeatedly over time. An additional unexpected finding was that baseline unique yellow shifted in a direction opposite to that produced by the habitually worn lenses. Overall, our results represent one of the first formal reports that adjusting to putting on or taking off spectacles becomes easier over time, and may have important

  13. Color gradient background-oriented schlieren imaging

    NASA Astrophysics Data System (ADS)

    Mier, Frank Austin; Hargather, Michael J.

    2016-06-01

    Background-oriented schlieren is a method of visualizing refractive disturbances by comparing digital images with and without a refractive disturbance distorting a background pattern. Traditionally, backgrounds consist of random distributions of high-contrast color transitions or speckle patterns. To image a refractive disturbance, a digital image correlation algorithm is used to identify the location and magnitude of apparent pixel shifts in the background pattern between the two images. Here, a novel method of using color gradient backgrounds is explored as an alternative that eliminates the need to perform a complex image correlation between the digital images. A simple image subtraction can be used instead to identify the location, magnitude, and direction of the image distortions. Gradient backgrounds are demonstrated to provide quantitative data only limited by the camera's pixel resolution, whereas speckle backgrounds limit resolution to the size of the random pattern features and image correlation window size. Quantitative measurement of density in a thermal boundary layer is presented. Two-dimensional gradient backgrounds using multiple colors are demonstrated to allow measurement of two-dimensional refractions. A computer screen is used as the background, which allows for rapid modification of the gradient to tune sensitivity for a particular application.
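The central simplification — subtraction instead of correlation — follows directly from a linear gradient: an apparent shift of d pixels on a background with intensity slope s changes the recorded intensity by d·s. An idealized, noise-free sketch:

```python
import numpy as np

def bos_shift(ref, distorted, slope):
    """Apparent pixel shift from a linear-gradient background.

    On a background whose intensity rises by `slope` per pixel, an
    apparent shift of d pixels changes intensity by d * slope, so the
    shift map is recovered by subtraction alone.  Idealized and
    noise-free; real setups must calibrate the gradient.
    """
    return (distorted - ref) / slope

# Slope-1 ramp background; a refraction shifts it by 2 pixels
ref = np.tile(np.arange(16.0), (4, 1))
distorted = ref + 2.0            # intensity change of a 2-pixel shift
shift = bos_shift(ref, distorted, slope=1.0)
```

Using one gradient direction per color channel extends this to two-dimensional shift measurement, as the record describes.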

  14. A color image database for realistic image rendition

    NASA Astrophysics Data System (ADS)

    Xiao, Man-jun; Chen, Si-ying; Ni, Guo-qiang; Wen, Yan

    2008-12-01

    A color image database of different scenes under several fixed illuminants is constructed in this paper. It contains images of 45 scenes captured under illumination of various colors and lightness levels. Some analyses based on the database are described to find the relationship between the chromatic/lightness distributions of images under different illumination. Indexes such as overall mean and SD are introduced, which reasonably evaluate image lightness and contrast in accordance with visual perception. In order to objectively assess the influence of illuminant color on images, investigations of the lαβ and r-g chromatic maps are explored. An improved hue correlation method is proposed based on lαβ mean/SD statistical analysis, which shows excellent color constancy performance on the CPVO measurement. The CPVO measurement is also established on r-g chromatic peak offset tests in this paper.

  15. Retinal Imaging: Adaptive Optics

    NASA Astrophysics Data System (ADS)

    Goncharov, A. S.; Iroshnikov, N. G.; Larichev, Andrey V.

    This chapter describes several factors influencing the performance of ophthalmic diagnostic systems with adaptive optics compensation of human eye aberration. Particular attention is paid to speckle modulation, temporal behavior of aberrations, and anisoplanatic effects. The implementation of a fundus camera with adaptive optics is considered.

  16. Color night vision method based on the correlation between natural color and dual band night image

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Bai, Lian-fa; Zhang, Chuang; Chen, Qian; Gu, Guo-hua

    2009-07-01

    Color night vision technology can effectively improve detection and identification probability. Current color night vision methods based on gray-scale modulation fusion, spectrum-field fusion, and special-component fusion, as well as the well-known NRL and TNO methods, bring about serious color distortion, and observers become visually fatigued after long observation. Alexander Toet of TNO Human Factors presented a method to give a fused multiband night image a natural daytime color appearance, but it needs a true-color image of the scene to be observed. In this paper we put forward a color night vision method based on the correlation between a natural color image and a dual-band night image. Color display is attained through dual-band low light level (LLL) images and their fusion image. An actual color image of a similar scene is needed to obtain the color night vision image: the actual color image is decomposed into three gray-scale images of the RGB color channels, and the short-wave LLL image, long-wave LLL image, and their fusion image are compared to them through a gray-scale spatial correlation method; the color space mapping scheme is then confirmed by this correlation. The gray-scale LLL images and their fusion image are adjusted through variation of the HSI color space coefficients, and the coefficient matrix is built. The color display coefficient matrix of the LLL night vision system is obtained by multiplying the above coefficient matrix and the RGB color space mapping matrix. Emulation experiments on general-scene dual-band color night vision indicate that the color display effect is satisfactory. The method was tested on a dual-channel, dual-spectrum LLL color night vision experimental apparatus based on the Texas Instruments digital video processing device DM642.

  17. Textured surface identification in noisy color images

    NASA Astrophysics Data System (ADS)

    Celenk, Mehmet

    1996-06-01

    Automatic identification of textured surfaces is essential in many imaging applications such as image data compression and scene recognition. In these applications, a vision system is required to detect and identify irregular textures in noisy color images. This work proposes a method for texture field characterization based on local textural features. We first divide a given color image into n × n local windows and extract textural features in each window independently. In this step, the size of a window should be small enough that each window includes at most two texture fields. Separation of texture areas in a local window is first carried out by the Otsu or Kullback threshold selection technique on the three color components separately. The 3-D class separation is then performed using the Fisher discriminant. The results of local texture classification are combined by the K-means clustering algorithm. The texture fields detected in a window are characterized by their mean vectors and an element-to-set membership relation. We have experimented with the local feature extraction part of the method using a color image of irregular textures. Results show that the method is effective at capturing local textural features.
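The per-channel thresholding step can use the standard Otsu criterion, which picks the threshold maximizing between-class variance over the channel's histogram; a sketch of the single-channel version:

```python
import numpy as np

def otsu_threshold(channel, bins=256):
    """Otsu's threshold: maximize between-class variance.

    The paper applies such a threshold to each color component before
    the Fisher-discriminant step; this is the standard single-channel
    version for values in [0, 1].
    """
    hist, edges = np.histogram(channel, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    mids = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                       # class-0 probability
    mu0 = np.cumsum(p * mids)               # class-0 partial mean
    mu_t = mu0[-1]                          # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu0) ** 2 / (w0 * (1.0 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return mids[np.argmax(sigma_b)]

# Bimodal data: the threshold must fall between the two modes
x = np.concatenate([np.full(50, 0.2), np.full(50, 0.8)])
t = otsu_threshold(x)
```

Running this on each of the three color components gives the per-channel separations that the method then fuses with the Fisher discriminant.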

  18. Calibration Image of Earth by Mars Color Imager

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Three days after the Mars Reconnaissance Orbiter's Aug. 12, 2005, launch, the NASA spacecraft was pointed toward Earth and the Mars Color Imager camera was powered up to acquire a suite of color and ultraviolet images of Earth and the Moon. When it gets to Mars, the Mars Color Imager's main objective will be to obtain daily global color and ultraviolet images of the planet to observe martian meteorology by documenting the occurrence of dust storms, clouds, and ozone. This camera will also observe how the martian surface changes over time, including changes in frost patterns and surface brightness caused by dust storms and dust devils.

    The purpose of acquiring an image of Earth and the Moon just three days after launch was to help the Mars Color Imager science team obtain a measure, in space, of the instrument's sensitivity, as well as to check that no contamination occurred on the camera during launch. Prior to launch, the team determined that, three days out from Earth, the planet would only be about 4.77 pixels across, and the Moon would be less than one pixel in size, as seen from the Mars Color Imager's wide-angle perspective. If the team waited any longer than three days to test the camera's performance in space, Earth would be too small to obtain meaningful results.

    The images were acquired by turning Mars Reconnaissance Orbiter toward Earth, then slewing the spacecraft so that the Earth and Moon would pass before each of the five color and two ultraviolet filters of the Mars Color Imager. The distance to Earth was about 1,170,000 kilometers (about 727,000 miles).

    This image shows a color composite view of Mars Color Imager's image of Earth. As expected, it covers only five pixels. This color view has been enlarged five times. The Sun was illuminating our planet from the left, thus only one quarter of Earth is seen from this perspective. North America was in daylight and facing toward the camera at the time the picture was taken; the data

  19. Color gradient background oriented schlieren imaging

    NASA Astrophysics Data System (ADS)

    Mier, Frank Austin; Hargather, Michael

    2015-11-01

    Background oriented schlieren (BOS) imaging is a method of visualizing refractive disturbances through the comparison of digital images. By comparing images with and without a refractive disturbance visualizations can be achieved via a range of image processing methods. Traditionally, backgrounds consist of random distributions of high contrast speckle patterns. To image a refractive disturbance, a digital image correlation algorithm is used to identify the location and magnitude of apparent pixel shifts in the background pattern. Here a novel method of using color gradient backgrounds is explored as an alternative. The gradient background eliminates the need to perform an image correlation between the two digital images, as simple image subtraction can be used to identify the location, magnitude, and direction of the image distortions. This allows for quicker processing. Two-dimensional gradient backgrounds using multiple colors are shown. The gradient backgrounds are demonstrated to provide quantitative data limited only by the camera's pixel resolution, whereas speckle backgrounds limit resolution to the size of the random pattern features and image correlation window size. Additional results include the use of a computer screen as a background.
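
    The subtraction idea can be illustrated in one dimension: with a linear intensity ramp as the background, an apparent pixel shift d along the gradient changes the recorded intensity by slope × d, so subtracting the reference image recovers the location, magnitude, and sign of the shift without any correlation search. A hedged sketch with synthetic data (not the authors' experimental setup):

```python
import numpy as np

# 1-D sketch: a linear background ramp lets plain image subtraction recover
# apparent pixel shifts (values here are synthetic, for illustration only).
W = 200
slope = 1.0                                     # intensity change per pixel
x = np.arange(W, dtype=float)
background = slope * x                          # reference gradient background

shift = np.zeros(W)
shift[80:120] = 3.0                             # refractive disturbance region

# Distorted image: the background appears sampled at displaced positions
distorted = np.interp(x + shift, x, background)

diff = distorted - background                   # simple subtraction, no correlation
recovered = diff / slope                        # intensity change -> pixel shift
```

    Because no correlation window is involved, the recovered shift is resolved per pixel, which mirrors the resolution argument made in the abstract.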

  20. [Multispectral image compression algorithms for color reproduction].

    PubMed

    Liang, Wei; Zeng, Ping; Luo, Xue-mei; Wang, Yi-feng; Xie, Kun

    2015-01-01

    To improve the compression efficiency of multispectral images and facilitate their storage and transmission in applications such as color reproduction, where high color accuracy is required, the WF serial methods are proposed and the APWS_RA algorithm is designed. The WF_APWS_RA algorithm, which has the advantages of low complexity, good illuminant stability, and support for consistent color reproduction across devices, is then presented. The conventional MSE-based wavelet embedded coding principle is first studied, and a color perception distortion criterion and visual characteristic matrix W are proposed. Meanwhile, the APWS_RA algorithm is formed by optimizing the rate allocation strategy of APWS. Finally, combining the above techniques, a new coding method named WF_APWS_RA is designed: a colorimetric error criterion is used in the algorithm, and APWS_RA is applied to the visually weighted multispectral image. In WF_APWS_RA, affinity propagation clustering is utilized to exploit the spectral correlation of the weighted image, and a two-dimensional wavelet transform is used to remove the spatial redundancy. Subsequently, an error compensation mechanism and rate pre-allocation are combined to accomplish the embedded wavelet coding. Experimental results show that, at the same bit rate, the WF serial algorithms outperform classical coding algorithms in color retention: APWS_RA preserves the least spectral error, and the WF_APWS_RA algorithm has an obvious superiority in color accuracy.

  1. Color Histogram Diffusion for Image Enhancement

    NASA Technical Reports Server (NTRS)

    Kim, Taemin

    2011-01-01

    Various color histogram equalization (CHE) methods have been proposed to extend grayscale histogram equalization (GHE) for color images. In this paper a new method called histogram diffusion that extends the GHE method to arbitrary dimensions is proposed. Ranges in a histogram are specified as overlapping bars of uniform heights and variable widths which are proportional to their frequencies. This diagram is called the vistogram. As an alternative approach to GHE, the squared error of the vistogram from the uniform distribution is minimized. Each bar in the vistogram is approximated by a Gaussian function. Gaussian particles in the vistogram diffuse as a nonlinear autonomous system of ordinary differential equations. CHE results on color images showed that the approach is effective.
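
    For reference, the grayscale histogram equalization (GHE) that the vistogram approach generalizes can be written in a few lines; this sketch (illustrative, not the paper's code) maps the image through its normalized cumulative histogram:

```python
import numpy as np

def equalize_gray(img):
    """Classic grayscale histogram equalization via the cumulative distribution."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize to [0, 1]
    return (cdf[img] * 255).astype(np.uint8)            # map pixels through the CDF

# A low-contrast image occupying only [100, 150) spreads toward the full range
img = np.random.default_rng(1).integers(100, 150, size=(64, 64))
out = equalize_gray(img)
```

    After equalization the output occupies nearly the full 0-255 range, which is exactly the behavior GHE cannot directly provide for multichannel color data.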

  2. Preparing Colorful Astronomical Images and Illustrations

    NASA Astrophysics Data System (ADS)

    Levay, Z. G.; Frattare, L. M.

    2001-12-01

    We present techniques for using mainstream graphics software, specifically Adobe Photoshop and Illustrator, for producing composite color images and illustrations from astronomical data. These techniques have been used with numerous images from the Hubble Space Telescope to produce printed and web-based news, education and public presentation products as well as illustrations for technical publication. While Photoshop is not intended for quantitative analysis of full dynamic range data (as are IRAF or IDL, for example), we have had much success applying Photoshop's numerous, versatile tools to work with scaled images, masks, text and graphics in multiple semi-transparent layers and channels. These features, along with its user-oriented, visual interface, provide convenient tools to produce high-quality, full-color images and graphics for printed and on-line publication and presentation.

  3. The adaptive value of primate color vision for predator detection.

    PubMed

    Pessoa, Daniel Marques Almeida; Maia, Rafael; de Albuquerque Ajuz, Rafael Cavalcanti; De Moraes, Pedro Zurvaino Palmeira Melo Rosa; Spyrides, Maria Helena Constantino; Pessoa, Valdir Filgueiras

    2014-08-01

    The complex evolution of primate color vision has puzzled biologists for decades. Primates are the only eutherian mammals that evolved an enhanced capacity for discriminating colors in the green-red part of the spectrum (trichromatism). However, while Old World primates present three types of cone pigments and are routinely trichromatic, most New World primates exhibit a color vision polymorphism, characterized by the occurrence of trichromatic and dichromatic females and obligatory dichromatic males. Even though this has stimulated a prolific line of inquiry, the selective forces and relative benefits influencing color vision evolution in primates are still under debate, with current explanations focusing almost exclusively on the advantages in finding food and detecting socio-sexual signals. Here, we evaluate a previously untested possibility, the adaptive value of primate color vision for predator detection. By combining color vision modeling data on New World and Old World primates, as well as behavioral information from human subjects, we demonstrate that primates exhibiting better color discrimination (trichromats) outperform those with poorer color vision (dichromats) at detecting carnivoran predators against a green foliage background. The distribution of color vision found in extant anthropoid primates agrees with our results and may be explained by the advantages of trichromats and dichromats in detecting predators and insects, respectively.

  5. Novel calibration and color adaptation schemes in three-fringe RGB photoelasticity

    NASA Astrophysics Data System (ADS)

    Swain, Digendranath; Thomas, Binu P.; Philip, Jeby; Pillai, S. Annamala

    2015-03-01

    Isochromatic demodulation in digital photoelasticity using RGB calibration is a two step process. The first step involves the construction of a look-up table (LUT) from a calibration experiment. In the second step, isochromatic data is demodulated by matching the colors of an analysis image with the colors existing in the LUT. As actual test and calibration experiment tint conditions vary due to different sources, color adaptation techniques for modifying an existing primary LUT are employed. However, the primary LUT is still generated from bending experiments. In this paper, RGB demodulation based on a theoretically constructed LUT has been attempted to exploit the advantages of color adaptation schemes. Thereby, the experimental mode of LUT generation and some uncertainties therein can be minimized. Additionally, a new color adaptation algorithm is proposed using quadratic Lagrangian interpolation polynomials, which is numerically better than the two-point linear interpolations available in the literature. The new calibration and color adaptation schemes are validated and applied to demodulate fringe orders in live models and stress frozen slices.
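
    The color-matching step common to all RGB calibration schemes reduces, at each pixel, to a nearest-neighbor search over the LUT. A minimal sketch with a hypothetical four-entry LUT (colors and fringe orders are made up for illustration):

```python
import numpy as np

def demodulate_pixel(rgb, lut_colors, lut_orders):
    """Return the fringe order whose calibration color is nearest (in RGB) to the pixel."""
    d2 = np.sum((lut_colors - rgb) ** 2, axis=1)   # squared color distance to each entry
    return lut_orders[np.argmin(d2)]

# Hypothetical four-entry LUT; colors and orders are illustrative only
lut_colors = np.array([[10, 10, 10],
                       [200, 50, 40],
                       [60, 180, 70],
                       [40, 60, 200]], dtype=float)
lut_orders = np.array([0.0, 1.0, 2.0, 3.0])
order = demodulate_pixel(np.array([195.0, 55.0, 45.0]), lut_colors, lut_orders)
```

    The color adaptation schemes discussed in the abstract modify `lut_colors` to account for tint differences between the calibration and test conditions before this search is performed.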

  6. Color and depth priors in natural images.

    PubMed

    Su, Che-Chun; Cormack, Lawrence K; Bovik, Alan C

    2013-06-01

    Natural scene statistics have played an increasingly important role in both our understanding of the function and evolution of the human vision system, and in the development of modern image processing applications. Because range (egocentric distance) is arguably the most important thing a visual system must compute (from an evolutionary perspective), the joint statistics between image information (color and luminance) and range information are of particular interest. It seems obvious that where there is a depth discontinuity, there must be a higher probability of a brightness or color discontinuity too. This is true, but the more interesting case is in the other direction--because image information is much more easily computed than range information, the key conditional probabilities are those of finding a range discontinuity given an image discontinuity. Here, the intuition is much weaker; the plethora of shadows and textures in the natural environment imply that many image discontinuities must exist without corresponding changes in range. In this paper, we extend previous work in two ways--we use as our starting point a very high quality data set of coregistered color and range values collected specifically for this purpose, and we evaluate the statistics of perceptually relevant chromatic information in addition to luminance, range, and binocular disparity information. The most fundamental finding is that the probabilities of finding range changes do in fact depend in a useful and systematic way on color and luminance changes; larger range changes are associated with larger image changes. Second, we are able to parametrically model the prior marginal and conditional distributions of luminance, color, range, and (computed) binocular disparity. Finally, we provide a proof of principle that this information is useful by showing that our distribution models improve the performance of a Bayesian stereo algorithm on an independent set of input images. To summarize

  7. The Influence of a Low-Level Color or Figure Adaptation on a High-Level Face Perception

    NASA Astrophysics Data System (ADS)

    Song, Miao; Shinomori, Keizo; Zhang, Shiyong

    Visual adaptation is a universal phenomenon associated with human visual system. This adaptation affects not only the perception of low-level visual systems processing color, motion, and orientation, but also the perception of high-level visual systems processing complex visual patterns, such as facial identity and expression. Although it remains unclear for the mutual interaction mechanism between systems at different levels, this issue is the key to understand the hierarchical neural coding and computation mechanism. Thus, we examined whether the low-level adaptation influences on the high-level aftereffect by means of cross-level adaptation paradigm (i.e. color, figure adaptation versus facial identity adaptation). We measured the identity aftereffects within the real face test images on real face, color chip and figure adapting conditions. The cross-level mutual influence was evaluated by the aftereffect size among different adapting conditions. The results suggest that the adaptation to color and figure contributes to the high-level facial identity aftereffect. Besides, the real face adaptation obtained the significantly stronger aftereffect than the color chip or the figure adaptation. Our results reveal the possibility of cross-level adaptation propagation and implicitly indicate a high-level holistic facial neural representation. Based on these results, we discussed the theoretical implication of cross-level adaptation propagation for understanding the hierarchical sensory neural systems.

  8. Image Transform Based on the Distribution of Representative Colors for Color Deficient

    NASA Astrophysics Data System (ADS)

    Ohata, Fukashi; Kudo, Hiroaki; Matsumoto, Tetsuya; Takeuchi, Yoshinori; Ohnishi, Noboru

    This paper proposes a method for converting digital images that contain color combinations difficult to distinguish into images with high visibility. We set up four criteria: automatic processing by a computer; retaining continuity in color space; not reducing visibility for people with normal color vision; and not reducing the visibility of images that originally contain no hard-to-distinguish color combinations. In a psychological experiment on 40 images, the visibility of the converted images improved in 60% of cases, and the main criterion, continuity in color space, was confirmed to be preserved.

  9. AIDA: Adaptive Image Deconvolution Algorithm

    NASA Astrophysics Data System (ADS)

    Hom, Erik; Haase, Sebastian; Marchis, Franck

    2013-10-01

    AIDA is an implementation and extension of the MISTRAL myopic deconvolution method developed by Mugnier et al. (2004) (see J. Opt. Soc. Am. A 21:1841-1854). The MISTRAL approach has been shown to yield object reconstructions with excellent edge preservation and photometric precision when used to process astronomical images. AIDA improves upon the original MISTRAL implementation. AIDA, written in Python, can deconvolve multiple frame data and three-dimensional image stacks encountered in adaptive optics and light microscopic imaging.

  10. Scannerless loss modulated flash color range imaging

    DOEpatents

    Sandusky, John V.; Pitts, Todd Alan

    2008-09-02

    Scannerless loss modulated flash color range imaging methods and apparatus are disclosed for producing three dimensional (3D) images of a target within a scene. Apparatus and methods according to the present invention comprise a light source providing at least three wavelengths (passbands) of illumination that are each loss modulated, phase delayed and simultaneously directed to illuminate the target. Phase delayed light backscattered from the target is spectrally filtered, demodulated and imaged by a planar detector array. Images of the intensity distributions for the selected wavelengths are obtained under modulated and unmodulated (dc) illumination of the target, and the information contained in the images combined to produce a 3D image of the target.

  11. Scannerless loss modulated flash color range imaging

    DOEpatents

    Sandusky, John V.; Pitts, Todd Alan

    2009-02-24

    Scannerless loss modulated flash color range imaging methods and apparatus are disclosed for producing three dimensional (3D) images of a target within a scene. Apparatus and methods according to the present invention comprise a light source providing at least three wavelengths (passbands) of illumination that are each loss modulated, phase delayed and simultaneously directed to illuminate the target. Phase delayed light backscattered from the target is spectrally filtered, demodulated and imaged by a planar detector array. Images of the intensity distributions for the selected wavelengths are obtained under modulated and unmodulated (dc) illumination of the target, and the information contained in the images combined to produce a 3D image of the target.

  12. Improvements to Color HRSC+OMEGA Image Mosaics of Mars

    NASA Astrophysics Data System (ADS)

    McGuire, P. C.; Audouard, J.; Dumke, A.; Dunker, T.; Gross, C.; Kneissl, T.; Michael, G.; Ody, A.; Poulet, F.; Schreiner, B.; van Gasselt, S.; Walter, S. H. G.; Wendt, L.; Zuschneid, W.

    2015-10-01

    The High Resolution Stereo Camera (HRSC) on the Mars Express (MEx) orbiter has acquired 3640 images (with 'preliminary level 4' processing as described in [1]) of the Martian surface since arriving in orbit in 2003, covering over 90% of the planet [2]. At resolutions that can reach 10 meters/pixel, these MEx/HRSC images [3-4] are constructed in a pushbroom manner from 9 different CCD line sensors, including a panchromatic nadir-looking (Pan) channel, 4 color channels (R, G, B, IR), and 4 other panchromatic channels for stereo imaging or photometric imaging. In [5], we discussed our first approach towards mosaicking hundreds of the MEx/HRSC RGB or Pan images together. The images were acquired under different atmospheric conditions over the entire mission and under different observation/illumination geometries. Therefore, the main challenge that we have addressed is the color (or gray-scale) matching of these images, which have varying colors (or gray scales) due to the different observing conditions. Using this first approach, our best results for a semiglobal mosaic consist of adding a high-pass-filtered version of the HRSC mosaic to a low-pass-filtered version of the MEx/OMEGA [6] global mosaic. Herein, we will present our latest results using a new, improved, second approach for mosaicking MEx/HRSC images [7], but focusing on the RGB Color processing when using this new second approach. Currently, when the new second approach is applied to Pan images, we match local spatial averages of the Pan images to the local spatial averages of a mosaic made from the images acquired by the Mars Global Surveyor TES bolometer. Since these MGS/TES images have already been atmospherically-corrected, this matching allows us to bootstrap the process of mosaicking the HRSC images without actually atmospherically correcting the HRSC images. In this work, we will adapt this technique of MEx/HRSC Pan images being matched with the MGS/TES mosaic, so that instead, MEx/HRSC RGB images
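
    The brightness-bootstrapping idea, matching averages of the uncorrected images to an already atmospherically corrected reference mosaic, can be sketched as a simple gain match (a single global gain is used here purely for illustration; the paper matches local spatial averages):

```python
import numpy as np

# Sketch of the bootstrapping idea: scale an uncorrected image so its average
# brightness matches an already-corrected reference mosaic. (The paper matches
# *local* spatial averages; one global gain is used here for brevity.)
def match_mean(img, ref):
    return img * (ref.mean() / img.mean())

rng = np.random.default_rng(3)
ref = rng.random((32, 32)) + 0.5        # stands in for the corrected reference mosaic
img = 2.5 * ref                         # same scene seen with a different gain
matched = match_mean(img, ref)
```

    When the two images differ only by gain, the match is exact; in practice the local version absorbs spatially varying atmospheric and illumination differences.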

  13. Cathodoluminescence Imaging Using Nanodiamond Color Centers

    NASA Astrophysics Data System (ADS)

    Glenn, David; Zhang, Huiliang; Kasthuri, Narayanan; Trifonov, Alexei; Schalek, Richard; Lichtman, Jeff; Walsworth, Ronald

    2011-05-01

    We demonstrate a nanoscale imaging technique based on cathodoluminescence (CL) emitted by color centers in nanodiamonds (NDs) under excitation by an electron beam in a scanning electron microscope (SEM). We have identified several classes of color centers that are spectrally distinct at room temperature and can be obtained with high reliability in NDs with diameters on the order of 50 nm or smaller. Compared to standard CL markers, ND color centers are bright and highly stable under SEM excitation. In conjunction with appropriate functionalization of the ND surfaces, ND-CL will provide nanoscale information about molecular function to augment the structural information obtained with standard SEM techniques. We discuss an exciting application of this approach to neuroscience, specifically in the generation of high-resolution maps of the connections between neurons (``Connectomics'').

  14. Stereo matching image processing by synthesized color and the characteristic area by the synthesized color

    NASA Astrophysics Data System (ADS)

    Akiyama, Akira; Mutoh, Eiichiro; Kumagai, Hideo

    2014-09-01

    We have developed stereo matching image processing based on synthesized color and the corresponding areas sharing that synthesized color, for object ranging and image recognition. The images from a pair of stereo imagers may disagree with each other due to size changes, missing regions, appearance changes, and deformation of the characteristic areas. To make the stereo matching distinct, we construct the synthesized color and the corresponding areas of the same synthesized color in three steps. The first step makes a binary edge image by differentiating the focused image from each imager and verifying that the differentiated image has a normal frequency distribution, from which the binarization threshold is found; we used the Daubechies wavelet transform for the differentiation in this study. The second step derives the synthesized color by averaging color brightness between binary edge points, alternating between the horizontal and vertical directions; the averaging is repeated until the fluctuation of the averaged color becomes negligible with respect to the 256 brightness levels. The third step extracts areas of the same synthesized color by collecting pixels of that color and grouping them by 4-directional connectivity. The matching areas for stereo matching are determined from these synthesized color areas, with the matching point taken as the center of gravity of each area; the parallax between the pair of images is then easily derived from these centers of gravity. An experiment on a toy soccer ball showed that stereo matching by the synthesized color technique is simple and effective.
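
    The final ranging step reduces to comparing centers of gravity of matched areas. A small sketch, with synthetic masks standing in for the matched synthesized-color areas:

```python
import numpy as np

def centroid(mask):
    """Center of gravity (row, col) of a boolean region mask."""
    ys, xs = np.nonzero(mask)
    return ys.mean(), xs.mean()

# Hypothetical matched synthesized-color areas in the left and right images
left = np.zeros((10, 20), dtype=bool)
left[4:7, 8:12] = True
right = np.zeros((10, 20), dtype=bool)
right[4:7, 5:9] = True

# Parallax (disparity) is the horizontal offset between the centers of gravity
disparity = centroid(left)[1] - centroid(right)[1]
```

    Using region centroids rather than raw pixels is what makes the matching robust to the small shape deformations between the two views that the abstract mentions.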

  15. Responding to color: the regulation of complementary chromatic adaptation.

    PubMed

    Kehoe, David M; Gutu, Andrian

    2006-01-01

    The acclimation of photosynthetic organisms to changes in light color is ubiquitous and may be best illustrated by the colorful process of complementary chromatic adaptation (CCA). During CCA, cyanobacterial cells change from brick red to bright blue green, depending on their light color environment. The apparent simplicity of this spectacular, photoreversible event belies the complexity of the cellular response to changes in light color. Recent results have shown that the regulation of CCA is also complex and involves at least three pathways. One is controlled by a phytochrome-class photoreceptor that is responsive to green and red light and a complex two-component signal transduction pathway, whereas another is based on sensing redox state. Studies of CCA are uncovering the strategies used by photosynthetic organisms during light acclimation and the means by which they regulate these responses.

  16. Passive adaptive imaging through turbulence

    NASA Astrophysics Data System (ADS)

    Tofsted, David

    2016-05-01

    Standard methods for improved imaging system performance under degrading optical turbulence conditions typically involve active adaptive techniques or post-capture image processing. Here, passive adaptive methods are considered where active sources are disallowed, a priori. Theoretical analyses of short-exposure turbulence impacts indicate that varying aperture sizes experience different degrees of turbulence impacts. Smaller apertures often outperform larger aperture systems as turbulence strength increases. This suggests a controllable aperture system is advantageous. In addition, sub-aperture sampling of a set of training images permits the system to sense tilts in different sub-aperture regions through image acquisition and image cross-correlation calculations. A four sub-aperture pattern supports corrections involving five realizable operating modes (beyond tip and tilt) for removing aberrations over an annular pattern. Progress to date will be discussed regarding development and field trials of a prototype system.
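
    The sub-aperture tilt sensing described above amounts to estimating image shifts by cross-correlation. A sketch of FFT-based shift estimation between two sub-aperture images (synthetic data; the prototype's actual processing may differ):

```python
import numpy as np

def estimate_shift(ref, img):
    """Integer (row, col) shift of img relative to ref via FFT cross-correlation."""
    c = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)).real
    idx = np.unravel_index(np.argmax(c), c.shape)
    # Indices past N/2 correspond to negative (wrapped) shifts
    return tuple(int(i) - n if i > n // 2 else int(i) for i, n in zip(idx, c.shape))

rng = np.random.default_rng(2)
ref = rng.random((64, 64))                  # one sub-aperture "training" image
img = np.roll(ref, (3, -5), axis=(0, 1))    # tilt appears as a (3, -5) pixel shift
shift = estimate_shift(ref, img)
```

    Each sub-aperture's estimated shift corresponds to a local wavefront tilt; combining the four estimates drives the low-order correction modes mentioned in the abstract.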

  17. Converting color images to grayscale images by reducing dimensions

    NASA Astrophysics Data System (ADS)

    Lee, Tae-Hee; Kim, Byoung-Kwang; Song, Woo-Jin

    2010-05-01

    A novel color-to-grayscale method is presented for converting color images to grayscale images by reducing dimensions. The proposed method converts three-dimensional (3-D) RGB color vectors into one-dimensional (1-D) grayscale values by projecting the 3-D vector into a two-dimensional (2-D) intermediate one followed by compressing the 2-D vector into the 1-D value. Characteristics of color are introduced to facilitate the final determination of the 1-D values in the reducing dimensions. The proposed method has the advantages of preserving chromatic contrasts, maintaining luminance consistency, and having a low computational cost. Furthermore, the proposed method has high resistance to artifacts, such as halos, which can occur when using local contents.
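
    For context, the fixed linear projection that such methods improve upon collapses each 3-D RGB vector onto a single luminance axis, discarding chromatic contrast entirely. This baseline (standard Rec. 601 luma weights, not the proposed method) is:

```python
import numpy as np

def rgb_to_gray(img):
    """Fixed 3-D -> 1-D projection onto the Rec. 601 luminance axis."""
    weights = np.array([0.299, 0.587, 0.114])   # standard luma coefficients
    return img @ weights

# A red and a green pixel: chromatic contrast survives only weakly in plain luma
img = np.array([[[255.0, 0.0, 0.0], [0.0, 255.0, 0.0]]])
gray = rgb_to_gray(img)
```

    Two colors that are strongly distinct chromatically can land on similar gray values under this projection, which is precisely the failure mode the intermediate 2-D step in the proposed method is designed to avoid.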

  18. Color Sparse Representations for Image Processing: Review, Models, and Prospects.

    PubMed

    Barthélemy, Quentin; Larue, Anthony; Mars, Jérôme I

    2015-11-01

    Sparse representations have been extended to deal with color images composed of three channels. A review of dictionary-learning-based sparse representations for color images is made here, detailing the differences between the models and comparing their results on real and simulated data. These models are considered in a unifying framework based on the degrees of freedom of the linear filtering/transformation of the color channels. This framework also shows that the scalar quaternionic linear model is equivalent to constrained matrix-based color filtering, which highlights the filtering implicitly applied through this model. Based on this reformulation, a new color filtering model using unconstrained filters is introduced. In this model, spatial morphologies of color images are encoded by atoms, and colors are encoded by color filters. Color variability is no longer captured by increasing the dictionary size but by the color filters, which gives an efficient color representation.

  19. The Artist, the Color Copier, and Digital Imaging.

    ERIC Educational Resources Information Center

    Witte, Mary Stieglitz

    The impact that color-copying technology and digital imaging have had on art, photography, and design is explored. Color copiers have provided new opportunities for direct and spontaneous image making and the potential for new transformations in art. The current generation of digital color copiers permits new directions in imaging, but the…

  20. Vector sparse representation of color image using quaternion matrix analysis.

    PubMed

    Xu, Yi; Yu, Licheng; Xu, Hongteng; Zhang, Hao; Nguyen, Truong

    2015-04-01

    Traditional sparse image models treat a color image pixel as a scalar, representing the color channels separately or concatenating them as a monochrome image. In this paper, we propose a vector sparse representation model for color images using quaternion matrix analysis. As a new tool for color image representation, its potential applications in several image-processing tasks are presented, including color image reconstruction, denoising, inpainting, and super-resolution. The proposed model represents the color image as a quaternion matrix, and a quaternion-based dictionary learning algorithm is presented using the K-QSVD (generalized K-means clustering for quaternion singular value decomposition) method. It conducts the sparse basis selection in quaternion space, which uniformly transforms the channel images to an orthogonal color space. In this new color space, the inherent color structures can be completely preserved during vector reconstruction. Moreover, the proposed sparse model is more efficient than current sparse models for image restoration tasks due to the lower redundancy between the atoms of different color channels. The experimental results demonstrate that the proposed sparse image model successfully avoids the hue bias issue and shows its potential as a general and powerful tool in the color image analysis and processing domain.

  1. Color image registration based on quaternion Fourier transformation

    NASA Astrophysics Data System (ADS)

    Wang, Qiang; Wang, Zhengzhi

    2012-05-01

    The traditional Fourier Mellin transform is applied to quaternion algebra in order to investigate quaternion Fourier transformation properties useful for color image registration in frequency domain. Combining with the quaternion phase correlation, we propose a method for color image registration based on the quaternion Fourier transform. The registration method, which processes color image in a holistic manner, is convenient to realign color images differing in translation, rotation, and scaling. Experimental results on different types of color images indicate that the proposed method not only obtains high accuracy in similarity transform in the image plane but also is computationally efficient.

  2. Autonomous color theme extraction from images using saliency

    NASA Astrophysics Data System (ADS)

    Jahanian, Ali; Vishwanathan, S. V. N.; Allebach, Jan P.

    2015-03-01

    Color theme (palette) is a collection of color swatches for representing or describing colors in a visual design or an image. Color palettes have broad applications such as serving as means in automatic/semi-automatic design of visual media, as measures in quantifying aesthetics of visual design, and as metrics in image retrieval, image enhancement, and color semantics. In this paper, we suggest an autonomous mechanism for extracting color palettes from an image. Our method is simple and fast, and it works on the notion of visual saliency. By using visual saliency, we extract the fine colors appearing in the foreground along with the various colors in the background regions of an image. Our method accounts for defining different numbers of colors in the palette as well as presenting the proportion of each color according to its visual conspicuity in a given image. This flexibility supports an interactive color palette which may facilitate the designer's color design task. As an application, we present how our extracted color palettes can be utilized as a color similarity metric to enhance the current color semantic based image retrieval techniques.
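
    The extraction idea, clustering pixel colors while weighting each pixel by its visual saliency, can be sketched with a tiny weighted k-means (toy data and deterministic initialization chosen for illustration; the authors' algorithm may differ in its details):

```python
import numpy as np

def extract_palette(pixels, weights, k=2, iters=10):
    """Tiny weighted k-means: cluster pixel colors, weighting each pixel by saliency."""
    centers = pixels[[0, -1]].astype(float)   # deterministic init for this k=2 sketch
    for _ in range(iters):
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = labels == j
            if members.any():
                centers[j] = np.average(pixels[members], axis=0,
                                        weights=weights[members])
    # Proportion of each swatch is the share of total saliency assigned to it
    props = np.array([weights[labels == j].sum() for j in range(k)]) / weights.sum()
    return centers, props

# Toy image: salient red foreground (few pixels), less salient blue background
pixels = np.array([[250, 10, 10]] * 30 + [[10, 10, 240]] * 70, dtype=float)
weights = np.array([1.0] * 30 + [0.2] * 70)    # saliency per pixel
centers, props = extract_palette(pixels, weights, k=2)
```

    Although the red foreground has fewer pixels, its higher saliency gives it the larger proportion in the palette, mirroring the conspicuity-based proportions described in the abstract.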

  3. Extremely simple holographic projection of color images

    NASA Astrophysics Data System (ADS)

    Makowski, Michal; Ducin, Izabela; Kakarenko, Karol; Suszek, Jaroslaw; Kolodziejczyk, Andrzej; Sypek, Maciej

    2012-03-01

    A very simple scheme of holographic projection is presented with some experimental results showing good quality image projection without any imaging lens. This technique can be regarded as an alternative to classic projection methods. It is based on the reconstruction real images from three phase iterated Fourier holograms. The illumination is performed with three laser beams of primary colors. A divergent wavefront geometry is used to achieve an increased throw angle of the projection, compared to plane wave illumination. Light fibers are used as light guidance in order to keep the setup as simple as possible and to provide point-like sources of high quality divergent wave-fronts at optimized position against the light modulator. Absorbing spectral filters are implemented to multiplex three holograms on a single phase-only spatial light modulator. Hence color mixing occurs without any time-division methods, which cause rainbow effects and color flicker. The zero diffractive order with divergent illumination is practically invisible and speckle field is effectively suppressed with phase optimization and time averaging techniques. The main advantages of the proposed concept are: a very simple and highly miniaturizable configuration; lack of lens; a single LCoS (Liquid Crystal on Silicon) modulator; a strong resistance to imperfections and obstructions of the spatial light modulator like dead pixels, dust, mud, fingerprints etc.; simple calculations based on Fast Fourier Transform (FFT) easily processed in real time mode with GPU (Graphic Programming).

  4. Influence of imaging resolution on color fidelity in digital archiving.

    PubMed

    Zhang, Pengchang; Toque, Jay Arre; Ide-Ektessabi, Ari

    2015-11-01

    Color fidelity is of paramount importance in digital archiving. In this paper, the relationship between color fidelity and imaging resolution was explored by calculating the color difference of an IT8.7/2 color chart with a CIELAB color difference formula for scanning and simulation images. Microscopic spatial sampling was used in selecting the image pixels for the calculations to highlight the loss of color information. A ratio, called the relative imaging definition (RID), was defined to express the correlation between image resolution and color fidelity. The results show that in order for color differences to remain unrecognizable, the imaging resolution should be at least 10 times higher than the physical dimension of the smallest feature in the object being studied.
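
    The CIELAB color-difference computation referred to above can be sketched as follows; the abstract does not state which ΔE variant was used, so the simple Euclidean CIE76 form is shown.

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 color difference between two CIELAB triples (L*, a*, b*):
    the Euclidean distance in CIELAB space."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

# differences around 2.3 are often cited as just noticeable
print(delta_e_ab((50.0, 2.0, 2.0), (50.0, 0.0, 0.0)))  # about 2.83
```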

  5. Locally tuned inverse sine nonlinear technique for color image enhancement

    NASA Astrophysics Data System (ADS)

    Arigela, Saibabu; Asari, Vijayan K.

    2013-02-01

    In this paper, a novel inverse sine nonlinear transformation based image enhancement technique is proposed to improve the visual quality of images captured in extreme lighting conditions. This method is adaptive, local, and simple. The proposed technique consists of four main stages, namely histogram adjustment, dynamic range compression, contrast enhancement, and nonlinear color restoration. Histogram adjustment on each spectral band is performed to reduce the effect of illumination. Dynamic range compression is accomplished by an inverse sine nonlinear function with a locally tunable, image-dependent parameter based on the local statistics of each pixel's neighborhood in the luminance image. A nonlinear color restoration process based on the chromatic information and luminance of the original image is employed. A statistical quantitative evaluation is performed against state-of-the-art techniques to analyze and compare the performance of the proposed technique. The proposed technique is also tested on face detection in complex lighting conditions, and results on images captured in hazy/foggy weather are also presented. The evaluation results confirm that the proposed method can be applied to surveillance and security applications in complex lighting environments.
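
    The dynamic range compression stage can be sketched roughly as below. The abstract does not give the exact rule tying the exponent to the neighborhood statistics, so the tuning used here (a Gaussian-blurred local mean steering the exponent) is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def inverse_sine_compress(lum, sigma=4.0):
    """Hedged sketch of locally tuned inverse sine compression.

    lum: luminance normalized to [0, 1]. Each pixel is mapped through
    (2/pi) * arcsin(lum ** q), where q is derived from the local mean:
    dark surroundings give q < 1 and boost the pixel, bright ones
    give q > 1. The mapping from local mean to q is an illustrative
    assumption, not the paper's formula.
    """
    local_mean = gaussian_filter(lum, sigma=sigma)
    q = 0.5 + local_mean                     # exponent in [0.5, 1.5]
    return (2.0 / np.pi) * np.arcsin(np.clip(lum, 0.0, 1.0) ** q)
```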

  6. A hybrid and adaptive segmentation method using color and texture information

    NASA Astrophysics Data System (ADS)

    Meurie, C.; Ruichek, Y.; Cohen, A.; Marais, J.

    2010-01-01

    This paper presents a new image segmentation method based on the combination of texture and color information. The method first computes the morphological color and texture gradients. The color gradient is analyzed taking into account different color spaces. The texture gradient is computed using the luminance component of the HSL color space, via a morphological filter and a granulometric, local-energy analysis. To overcome the limitations of a linear/barycentric combination, the two morphological gradients are then mixed using a gradient component fusion strategy (to fuse the three components of the color gradient and the single component of the texture gradient) and an adaptive technique to choose the weighting coefficients. The segmentation is finally performed by applying the watershed technique using different types of germ images. The segmentation method is evaluated in different object classification applications using the k-means algorithm. The obtained results are compared with other known segmentation methods. The evaluation analysis shows that the proposed method gives better results, especially under hard image acquisition conditions.
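
    The morphological gradient that each channel goes through before fusion is simply a dilation minus an erosion; a minimal sketch with SciPy (the adaptive fusion weighting itself is not reproduced here):

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def morphological_gradient(channel, size=3):
    """Morphological gradient of one image channel: grey-level
    dilation minus erosion over a flat size-by-size structuring
    element. Flat regions give 0; edges give large values."""
    return grey_dilation(channel, size=size) - grey_erosion(channel, size=size)

# a vertical step edge: the gradient fires only near the edge
step = np.zeros((8, 8))
step[:, 4:] = 1.0
grad = morphological_gradient(step)
```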

  7. Bio-inspired color image enhancement model

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng

    2009-05-01

    Human beings can perceive natural scenes very well under various illumination conditions, partly owing to the contrast enhancement of center/surround networks and opponent analysis on the human retina. In this paper, we propose an image enhancement model to simulate the color processes in the human retina. Specifically, there are two center/surround layers, bipolar/horizontal and ganglion/amacrine, and four color opponents: red (R), green (G), blue (B), and yellow (Y). The central cell (bipolar or ganglion) takes the surrounding information from one or several horizontal or amacrine cells; bipolar and ganglion cells both have ON and OFF subtypes. For example, a +R/-G bipolar (red-center-ON/green-surround-OFF) will be excited if only the center is illuminated, inhibited if only the surroundings (bipolars) are illuminated, or stay neutral if both center and surroundings are illuminated. Likewise, the other two color opponents with ON-center/OFF-surround, +G/-R and +B/-Y, follow the same rules. The yellow (Y) channel can be obtained by averaging the red and green channels. On the other hand, OFF-center/ON-surround bipolars (i.e., -R/+G and -G/+R, but no -B/+Y) are inhibited when the center is illuminated. An ON-bipolar (or OFF-bipolar) only transfers signals to an ON-ganglion (or OFF-ganglion), where amacrines provide surrounding information. Ganglion cells have strong spatiotemporal responses to moving objects. In our proposed enhancement model, the surrounding information is obtained using a weighted average of the neighborhood; excitation or inhibition is implemented as a pixel intensity increase or decrease according to a linear or nonlinear response; and center/surround excitations are decided by comparing their intensities. A difference of Gaussian (DOG) model is used to simulate the ganglion differential response. Experimental results using natural scenery pictures show that the proposed image enhancement model, by simulating the two-layer center
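
    The center/surround comparison in the model is summarized by the difference of Gaussian (DOG) response; a minimal per-channel sketch (the sigmas are illustrative choices, not the paper's values):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround_dog(channel, sigma_center=1.0, sigma_surround=3.0):
    """Difference-of-Gaussians center/surround response: a narrow
    Gaussian (center) minus a wider one (surround). Positive values
    mimic ON responses, negative values OFF responses; a uniform
    field yields zero, matching the 'stay neutral' behavior above."""
    return gaussian_filter(channel, sigma_center) - gaussian_filter(channel, sigma_surround)
```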

  8. Structure preserving color deconvolution for immunohistochemistry images

    NASA Astrophysics Data System (ADS)

    Chen, Ting; Srinivas, Chukka

    2015-03-01

    Immunohistochemistry (IHC) staining is an important technique for the detection of one or more biomarkers within a single tissue section. In digital pathology applications, the correct unmixing of the tissue image into its individual constituent dyes for each biomarker is a prerequisite for accurate detection and identification of the underlying cellular structures. A popular technique thus far is the color deconvolution method proposed by Ruifrok et al. [1]. However, Ruifrok's method independently estimates the individual dye contributions at each pixel, which potentially leads to "holes and cracks" in the cells in the unmixed images. This is clearly inadequate, since strong spatial dependencies exist in the tissue images, which contain rich cellular structures. In this paper, we formulate the unmixing algorithm into a least-squares framework of image patches, and propose a novel color deconvolution method which explicitly incorporates the spatial smoothness and structure continuity constraints into a neighborhood graph regularizer. An analytical closed-form solution to the cost function is derived for fast implementation. The algorithm is evaluated on a clinical data set containing a number of 3,3'-Diaminobenzidine (DAB) and hematoxylin (HTX) stained IHC slides and demonstrates better unmixing results than the existing strategy.
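
    The per-pixel baseline that the paper improves on (Ruifrok-style deconvolution) can be sketched as follows; the proposed neighborhood-graph regularizer is not included, and the stain matrix in the test is an illustrative H-DAB-like example, not calibrated values.

```python
import numpy as np

def color_deconvolution(rgb, stain_matrix):
    """Pixel-independent color deconvolution (the Ruifrok baseline).

    rgb: (h, w, 3) float image in [0, 255]. stain_matrix: (3, 3) with
    one stain's optical-density RGB vector per row. Intensities are
    converted to optical density (Beer-Lambert) and unmixed by the
    matrix inverse, independently at each pixel.
    """
    od = -np.log((np.asarray(rgb, float) + 1.0) / 256.0)  # optical density
    h, w, _ = od.shape
    conc = od.reshape(-1, 3) @ np.linalg.inv(stain_matrix)
    return conc.reshape(h, w, 3)
```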

  9. Robust overlay schemes for the fusion of fluorescence and color channels in biological imaging.

    PubMed

    Glatz, Jürgen; Symvoulidis, Panagiotis; Garcia-Allende, P Beatriz; Ntziachristos, Vasilis

    2014-04-01

    Molecular fluorescence imaging is a commonly used method in various biomedical fields and is undergoing rapid translation toward clinical applications. Color images are commonly superimposed with fluorescence measurements to provide orientation, anatomical information, and molecular tissue properties in a single image. New adaptive methods that produce a more robust composite image than conventional lime green alpha blending are presented and demonstrated herein. Moreover, visualization through temporal changes is showcased as an alternative for real-time imaging systems.

  10. Dissociation of equilibrium points for color-discrimination and color-appearance mechanisms in incomplete chromatic adaptation.

    PubMed

    Sato, Tomoharu; Nagai, Takehiro; Kuriki, Ichiro; Nakauchi, Shigeki

    2016-03-01

    We compared the color-discrimination thresholds and supra-threshold color differences (STCDs) obtained in complete chromatic adaptation (gray) and incomplete chromatic adaptation (red). The color-difference profiles were examined by evaluating the perceptual distances between various color pairs using maximum likelihood difference scaling. In the gray condition, the chromaticities corresponding with the smallest threshold and the largest color difference were almost identical. In contrast, in the red condition, they were dissociated. The peaks of the sensitivity functions derived from the color-discrimination thresholds and STCDs along the L-M axis were systematically different between the adaptation conditions. These results suggest that the color signals involved in color discrimination and STCD tasks are controlled by separate mechanisms with different characteristic properties.

  11. Mosaicking of NEAR MSI Color Image Sequences

    NASA Astrophysics Data System (ADS)

    Digilio, J. G.; Robinson, M. S.

    2004-05-01

    Of the over 160,000 frames of 433 Eros captured by the NEAR-Shoemaker spacecraft, 21,936 frames are components of 226 multi-spectral image sequences. As part of the ongoing NEAR Data Analysis Program, we are mosaicking (and delivering via a web interface) all color sequences in two versions: I/F and photometrically normalized I/F (30° incidence, 0° emission). Multi-spectral sets were acquired with varying bandpasses depending on mission constraints, and all sets include 550-nm, 760-nm, and 950-nm (32% of the sequences are all wavelengths except the 700-nm clear filter). Resolutions range from 20 m/pixel down to 3.5 m/pixel. To support color analysis and interpretation, we are co-registering the highest resolution black and white images to match each of the color mosaics. Due to Eros's highly irregular shape, the scale of a pixel can vary by almost a factor of 2 within a single frame acquired in the 35-km orbit. Thus, map-projecting requires a pixel-by-pixel correction for local topography [1]. Scattered light problems with the NEAR Multi-Spectral Imager (MSI) required the acquisition of ride-along zero-exposure calibration frames. Without correction, scattered light artifacts within the MSI were larger than the subtle color differences found on Eros [see details in 2]. Successful correction requires that the same region of the surface (within a few pixels) be in the field-of-view of the zero-exposure frame as when the normal frame was acquired. Due to engineering constraints, the timing of frame acquisition was not always optimal for the scattered light correction. During the co-registration process we are tracking apparent ground motion during a sequence to estimate the efficacy of the correction, and thus the integrity of the color information. Currently several web-based search and browse tools allow interested users to locate individual MSI frames from any spot on the asteroid using various search criteria (cps.earth.northwestern.edu). Final color and BW map products

  12. Color contrast enhancement method of infrared polarization fused image

    NASA Astrophysics Data System (ADS)

    Yang, Fan; Xie, Chen

    2015-10-01

    The traditional color fusion method based on the color transfer algorithm suffers from the problem that the colors of target and background are similar. To address this, an infrared polarization image color fusion method based on color contrast enhancement is proposed. Firstly, the infrared radiation intensity image and the polarization image are color fused, and then color transfer is applied between the color reference image and the initial fused image in the YCbCr color space. Secondly, the Otsu segmentation method is used to extract the target area from the infrared polarization image. Lastly, the H, S, and I components of the color fusion image obtained by color transfer are adjusted in the HSI space using the target area, yielding the final fused image. Experimental results show that the fused result obtained by the proposed method is rich in detail and makes the contrast between target and background more pronounced; the method thus improves target detection and identification.
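
    The color transfer step in YCbCr space is commonly implemented as per-channel mean/standard-deviation matching (Reinhard-style); the sketch below shows that mechanism, though the abstract does not spell out the exact variant used.

```python
import numpy as np

def color_transfer(source, reference):
    """Shift and scale each channel of `source` so its mean and
    standard deviation match `reference`. Both are float arrays of
    shape (h, w, c) in some luminance/chrominance space (YCbCr in
    the method above)."""
    out = np.empty_like(source, dtype=float)
    for c in range(source.shape[-1]):
        s, r = source[..., c], reference[..., c]
        gain = r.std() / (s.std() + 1e-12)
        out[..., c] = (s - s.mean()) * gain + r.mean()
    return out
```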

  13. Hyperspectral image analysis using artificial color

    NASA Astrophysics Data System (ADS)

    Fu, Jian; Caulfield, H. John; Wu, Dongsheng; Tadesse, Wubishet

    2010-03-01

    By definition, HSC (HyperSpectral Camera) images are much richer in spectral data than, say, COTS (Commercial-Off-The-Shelf) color camera images. But data are not information. If we do the task right, useful information can be derived from the data in HSC images. Nature faced essentially the identical problem. The incident light is so complex spectrally that measuring it with high resolution would provide far more data than animals can handle in real time. Nature's solution was to perform irreversible POCS (Projections Onto Convex Sets) to achieve huge reductions in data with minimal reduction in information. Thus we can arrange for our manmade systems to do what nature did: project the HSC image onto two or more broad, overlapping curves. The task we have undertaken in the last few years is to develop this idea, which we call Artificial Color. What we report here is the use of the measured HSC image data projected onto two or three convex, overlapping, broad curves in analogy with the sensitivity curves of human cone cells. Testing two quite different HSC images in that manner produced the desired result: good discrimination or segmentation that can be done very simply and hence is likely to be doable in real time with specialized computers. Using POCS on the HSC data to reduce the processing complexity produced excellent discrimination in those two cases. For technical reasons discussed here, the figures of merit for the kind of pattern recognition we use are incommensurate with the figures of merit of conventional pattern recognition. We used some force fitting to make a comparison nevertheless, because it shows what is also obvious qualitatively: on our tasks, our method works better.
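
    The projection step, reducing a hyperspectral cube onto a few broad, overlapping curves in analogy with cone sensitivities, can be sketched as below. The Gaussian curve centers and width are illustrative assumptions; the paper's actual curves are not specified in the abstract.

```python
import numpy as np

def project_onto_curves(cube, wavelengths, centers=(450.0, 550.0, 600.0), width=60.0):
    """Project each hyperspectral pixel onto broad Gaussian
    sensitivity curves, collapsing b spectral bands to a few
    'artificial color' channels.

    cube: (h, w, b) radiance cube; wavelengths: (b,) band centers in nm.
    """
    centers = np.asarray(centers, float)
    curves = np.exp(-0.5 * ((wavelengths[None, :] - centers[:, None]) / width) ** 2)
    h, w, b = cube.shape
    return (cube.reshape(-1, b) @ curves.T).reshape(h, w, -1)
```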

  14. Multiresolution ARMA modeling of facial color images

    NASA Astrophysics Data System (ADS)

    Celenk, Mehmet; Al-Jarrah, Inad

    2002-05-01

    Human face perception is key to identity confirmation in security systems, video teleconferencing, picture telephony, and web navigation. Modeling of human faces and facial expressions for different persons can be dealt with by building a point distribution model (PDM) based on spatial (shape) information or a gray-level model (GLM) based on spectral (intensity) information. To avoid the shortcomings of the local modeling of PDM and GLM, we propose a new approach for recognizing human faces and discriminating expressions associated with them in color images. It is based on Laplacian of Gaussian (LoG) edge detection, the KL transformation, and auto-regressive moving average (ARMA) filtering. First, the KL transform is applied to the R, G, and B dimensions, and a facial image is described by its principal component. A LoG edge detector is then used for a line-drawing schematic of the face. The resultant face silhouette is divided into 5 × 5 non-overlapping blocks, each of which is represented by the auto-regressive (AR) parameter vector a. The ensemble average of a over the whole image is taken as the feature vector for the description of a facial pattern, and each face class is represented by such an ensemble average vector. Efficacy of the ARMA model is evaluated by the non-metric similarity measure S = a·b/(‖a‖‖b‖) for two facial images whose feature vectors a and b are the ensemble averages of their ARMA parameters. Our measurements show that ARMA modeling is effective for discriminating facial features in color images and has the potential of distinguishing the corresponding facial expressions.
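
    The similarity measure is the familiar cosine of the angle between the two ARMA feature vectors:

```python
import numpy as np

def arma_similarity(a, b):
    """Non-metric similarity S = a.b / (||a|| ||b||) between the
    ensemble-averaged ARMA parameter vectors of two facial images."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(arma_similarity([1.0, 0.0], [1.0, 0.0]))  # -> 1.0
```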

  15. Multimodal digital color imaging system for facial skin lesion analysis

    NASA Astrophysics Data System (ADS)

    Bae, Youngwoo; Lee, Youn-Heum; Jung, Byungjo

    2008-02-01

    In dermatology, various digital imaging modalities have been used as important tools to quantitatively evaluate the treatment effect on skin lesions. Cross-polarization color imaging has been used to evaluate skin chromophore (melanin and hemoglobin) information, and parallel-polarization imaging to evaluate skin texture information. In addition, UV-A induced fluorescent imaging has been widely used to evaluate various skin conditions such as sebum, keratosis, sun damage, and vitiligo. In order to maximize the evaluation efficacy for various skin lesions, it is necessary to integrate these imaging modalities into one system. In this study, we propose a multimodal digital color imaging system which provides four different digital color images: a standard color image, parallel- and cross-polarization color images, and a UV-A induced fluorescent color image. Herein, we describe the imaging system and present examples of image analysis. By analyzing the color information and morphological features of facial skin lesions, we are able to evaluate various skin lesions comparably and simultaneously. In conclusion, the multimodal color imaging system can serve as an important assistive tool in dermatology.

  16. Color Image Secret Watermarking Erase and Write Algorithm Based on SIFT

    NASA Astrophysics Data System (ADS)

    Qu, Jubao

    Using the adaptive characteristics of SIFT image features, write and erase operations are implemented for the extraction and hiding of watermarks in color images. Experimental results show that the algorithm has good imperceptibility and, at the same time, is robust against geometric attacks and common signal processing.

  17. Efficient color representation for image segmentation under nonwhite illumination

    NASA Astrophysics Data System (ADS)

    Park, Jae Byung

    2003-10-01

    Color image segmentation algorithms often consider object color to be a constant property of an object. If the light source dominantly exhibits a particular color, however, it becomes necessary to consider the color variation induced by the colored illuminant. This paper presents a new approach to segmenting color images that are photographed under non-white illumination conditions. It also addresses how to estimate the color of the illuminant in terms of standard RGB color values rather than the spectrum of the illuminant. With respect to the illumination axis that goes through the origin and the centroid of the illuminant color clusters (given a priori by the estimation process), the RGB color space is transformed into our new color coordinate system. Our new color scheme shares the intuitiveness of the HSI (HSL or HSV) space that comes from the conical (double-conical or cylindrical) structure of hue and saturation aligned with the intensity variation at its center. It has been developed by locating the ordinary RGB cube in such a way that the illumination axis aligns with the vertical axis (Z-axis) of a larger Cartesian (XYZ) space. The work in this paper uses the dichromatic reflection model [1] to interpret the physics of light and optical effects in color images. The linearity proposed in the dichromatic reflection model is essential and is well preserved in the RGB color space. By proposing a straightforward color model transduction, we suggest dimensionality reduction and provide an efficient way to analyze color images of dielectric objects under non-white illumination conditions. The feasibility of the proposed color representation has been demonstrated by a twofold experiment: 1) segmentation results from a multi-modal histogram-based thresholding technique, and 2) color constancy results from further discounting the illumination effect by color balancing.
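
    The coordinate change described above, rotating RGB so the estimated illumination axis becomes the vertical (Z) axis, can be sketched by building an orthonormal basis around the illuminant direction. This construction is one standard way to realize it; the paper's exact parameterization may differ.

```python
import numpy as np

def illumination_axis_rotation(illuminant_rgb):
    """Return a 3x3 rotation whose third row is the normalized
    illuminant direction, so R @ v expresses RGB vector v in a frame
    where the illumination axis is the Z-axis."""
    z = np.asarray(illuminant_rgb, float)
    z = z / np.linalg.norm(z)
    # pick a seed vector not parallel to z, then Gram-Schmidt it
    seed = np.array([1.0, 0.0, 0.0]) if abs(z[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    x = seed - z * (seed @ z)     # remove the component along z
    x /= np.linalg.norm(x)
    y = np.cross(z, x)            # completes the right-handed basis
    return np.stack([x, y, z])

# a white illuminant direction maps onto the Z-axis
R = illumination_axis_rotation([1.0, 1.0, 1.0])
```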

  18. Acceleration of color computer-generated hologram from RGB-D images using color space conversion

    NASA Astrophysics Data System (ADS)

    Hiyama, Daisuke; Shimobaba, Tomoyoshi; Kakue, Takashi; Ito, Tomoyoshi

    2015-04-01

    We report acceleration of color computer-generated holograms (CGHs) from three-dimensional (3D) scenes that are expressed as RGB and depth (D) images. These images are captured by a depth camera or the depth buffer of a 3D graphics library. RGB and depth images preserve the color and depth information of a 3D scene, respectively, so we can regard them as two-dimensional (2D) section images along the depth direction. In general, convolution-based diffraction such as the angular spectrum method is used in calculating CGHs from the 2D section images; however, it takes an enormous amount of time because of the multiple diffraction calculations. In this paper, we first describe 'band-limited double-step Fresnel diffraction (BL-DSF)', which accelerates the diffraction calculation compared with convolution-based diffraction. Next, we describe acceleration of color CGH using color space conversion. Color CGHs are generally calculated in RGB color space; however, the same calculations must be performed repeatedly for each color component, so the computational cost of a color CGH is three times that of a monochrome CGH. Instead, we use the YCbCr color space, because the 2D section images in YCbCr color space can be down-sampled without deterioration of the image quality.
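
    The saving from the color space conversion can be sketched as follows: convert RGB to YCbCr (the BT.601 full-range form is shown; the paper's exact variant is not stated) and keep only the luma plane at full resolution.

```python
import numpy as np

# BT.601 full-range RGB -> YCbCr matrix
M601 = np.array([[ 0.299,     0.587,     0.114],
                 [-0.168736, -0.331264,  0.5],
                 [ 0.5,      -0.418688, -0.081312]])

def rgb_to_ycbcr(rgb):
    """rgb: (h, w, 3) values in [0, 255]; chroma is offset by 128."""
    ycbcr = np.asarray(rgb, float) @ M601.T
    ycbcr[..., 1:] += 128.0
    return ycbcr

def split_and_downsample(ycbcr, factor=2):
    """Keep Y at full resolution and subsample Cb and Cr by `factor`,
    the source of the computational saving described above."""
    return (ycbcr[..., 0],
            ycbcr[::factor, ::factor, 1],
            ycbcr[::factor, ::factor, 2])
```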

  19. The Airborne Ocean Color Imager - System description and image processing

    NASA Technical Reports Server (NTRS)

    Wrigley, Robert C.; Slye, Robert E.; Klooster, Steven A.; Freedman, Richard S.; Carle, Mark; Mcgregor, Lloyd F.

    1992-01-01

    The Airborne Ocean Color Imager was developed as an aircraft instrument to simulate the spectral and radiometric characteristics of the next generation of satellite ocean color instrumentation. Data processing programs have been developed as extensions of the Coastal Zone Color Scanner algorithms for atmospheric correction and bio-optical output products. The latter include several bio-optical algorithms for estimating phytoplankton pigment concentration, as well as one for the diffuse attenuation coefficient of the water. Additional programs have been developed to geolocate these products and remap them into a georeferenced data base, using data from the aircraft's inertial navigation system. Examples illustrate the sequential data products generated by the processing system, using data from flightlines near the mouth of the Mississippi River: from raw data to atmospherically corrected data, to bio-optical data, to geolocated data, and, finally, to georeferenced data.

  20. Automatic watershed segmentation of randomly textured color images.

    PubMed

    Shafarenko, L; Petrou, M; Kittler, J

    1997-01-01

    A new method is proposed for processing randomly textured color images. The method is based on a bottom-up segmentation algorithm that takes into consideration both color and texture properties of the image. An LUV gradient is introduced, which provides both a color similarity measure and a basis for applying the watershed transform. The patches of the watershed mosaic are merged according to their color contrast until a termination criterion is met. This criterion is based on the topology of the typical processed image. The resulting algorithm does not require any additional information, be it various thresholds, marker extraction rules, and suchlike, thus being suitable for automatic processing of color images. The algorithm is demonstrated within the framework of the problem of automatic granite inspection. The segmentation procedure has been found to be very robust, producing good results not only on granite images, but on a wide range of other noisy color images as well, subject to the termination criterion.

  1. Binarization of color document images via luminance and saturation color features.

    PubMed

    Tsai, Chun-Ming; Lee, Hsi-Jian

    2002-01-01

    This paper presents a novel binarization algorithm for color document images. Conventional thresholding methods do not produce satisfactory binarization results for documents with close or mixed foreground colors and background colors. Initially, statistical image features are extracted from the luminance distribution. Then, a decision-tree based binarization method is proposed, which selects various color features to binarize color document images. First, if the document image colors are concentrated within a limited range, saturation is employed. Second, if the image foreground colors are significant, luminance is adopted. Third, if the image background colors are concentrated within a limited range, luminance is also applied. Fourth, if the total number of pixels with low luminance (less than 60) is limited, saturation is applied; else both luminance and saturation are employed. Our experiments include 519 color images, most of which are uniform invoice and name-card document images. The proposed binarization method generates better results than other available methods in shape and connected-component measurements. Also, the binarization method obtains higher recognition accuracy in a commercial OCR system than other comparable methods. PMID:18244645

  2. A color image processing pipeline for digital microscope

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Liu, Peng; Zhuang, Zhefeng; Chen, Enguo; Yu, Feihong

    2012-10-01

    Digital microscopes have found wide application in fields such as biology and medicine. A digital microscope differs from a traditional optical microscope in that there is no need to observe the sample through an eyepiece directly, because the optical image is projected directly onto the CCD/CMOS camera. However, because of the imaging differences between the human eye and the sensor, a color image processing pipeline is needed for the digital microscope electronic eyepiece to obtain a fine image. The color image pipeline for a digital microscope, comprising the procedures that convert the RAW image data captured by the sensor into a real color image, is of great concern to the quality of the microscopic image. The color pipeline for a digital microscope differs from that of digital still cameras and video cameras because of the specific requirements of microscopic images, which should have a high dynamic range, keep the same color as the objects observed, and support a variety of image post-processing. In this paper, a new color image processing pipeline is proposed to satisfy the requirements of digital microscope images. The algorithm of each step in the pipeline is designed and optimized with the purpose of obtaining high quality images and accommodating diverse user preferences. With the proposed pipeline implemented on the digital microscope platform, the output color images meet the various image analysis requirements of the medicine and biology fields very well. The major steps of the proposed color imaging pipeline are: black level adjustment, defect pixel removal, noise reduction, linearization, white balance, RGB color correction, tone scale correction, and gamma correction.
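
    A few of the listed stages can be compressed into a toy sketch on an already-demosaicked RGB frame; the black level, gains, and gamma below are illustrative placeholders, and the paper's pipeline contains several more stages (defect pixel removal, noise reduction, linearization, color correction, tone scale).

```python
import numpy as np

def mini_pipeline(raw_rgb, black_level=64.0, wb_gains=(1.9, 1.0, 1.6), gamma=2.2):
    """Black-level adjustment -> white balance -> normalize -> gamma
    encode, on a (h, w, 3) float array of raw sensor values. The
    constants are placeholders; real pipelines calibrate them per
    sensor and illuminant."""
    img = np.clip(np.asarray(raw_rgb, float) - black_level, 0.0, None)
    img = img * np.asarray(wb_gains)              # per-channel gains
    peak = img.max()
    if peak > 0:
        img = img / peak                          # normalize to [0, 1]
    return np.clip(img, 0.0, 1.0) ** (1.0 / gamma)
```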

  3. Color reproductivity improvement with additional virtual color filters for WRGB image sensor

    NASA Astrophysics Data System (ADS)

    Kawada, Shun; Kuroda, Rihito; Sugawa, Shigetoshi

    2013-02-01

    We have developed a high-accuracy color reproduction method based on an estimated spectral reflectance of objects using additional virtual color filters for a wide dynamic range WRGB color filter CMOS image sensor. The four virtual color filters are created by multiplying the spectral sensitivity of the White pixel by Gaussian functions with different central wavelengths and standard deviations, and the virtual sensor outputs of those filters are estimated from the four real output signals of the WRGB image sensor. The accuracy of color reproduction was evaluated with a Macbeth Color Checker (MCC), and the average color difference ΔEab over the 24 colors was 1.88 with our approach.

  4. Mississippi Delta, Radar Image with Colored Height

    NASA Technical Reports Server (NTRS)

    2005-01-01

    About the animation: This simulated view of the potential effects of storm surge flooding on Lake Pontchartrain and the New Orleans area was generated with data from the Shuttle Radar Topography Mission. Although it is protected by levees and sea walls against storm surges of 18 to 20 feet, much of the city is below sea level, and flooding due to storm surges caused by major hurricanes is a concern. The animation shows regions that, if unprotected, would be inundated with water. The animation depicts flooding in one-meter increments.

    About the image: The geography of the New Orleans and Mississippi delta region is well shown in this radar image from the Shuttle Radar Topography Mission. In this image, bright areas show regions of high radar reflectivity, such as from urban areas, and elevations have been coded in color using height data also from the mission. Dark green colors indicate low elevations, rising through yellow and tan, to white at the highest elevations.

    New Orleans is situated along the southern shore of Lake Pontchartrain, the large, roughly circular lake near the center of the image. The line spanning the lake is the Lake Pontchartrain Causeway, the world's longest over water highway bridge. Major portions of the city of New Orleans are below sea level, and although it is protected by levees and sea walls, flooding during storm surges associated with major hurricanes is a significant concern.

    Data used in this image were acquired by the Shuttle Radar Topography Mission aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. The mission used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar that flew twice on the Space Shuttle Endeavour in 1994. The Shuttle Radar Topography Mission was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data

  5. Color Composite Image of the Supernova Remnant

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This image is a color composite of the supernova remnant E0102-72: x-ray (blue), optical (green), and radio (red). E0102-72 is the remnant of a star that exploded in a nearby galaxy known as the Small Magellanic Cloud. The star exploded outward at speeds in excess of 20 million kilometers per hour (12 million mph) and collided with surrounding gas. This collision produced two shock waves, or cosmic sonic booms, one traveling outward, and the other rebounding back into the material ejected by the explosion. The radio image, shown in red, was made using the Australia Telescope Compact Array. The radio waves are due to extremely high-energy electrons spiraling around magnetic field lines in the gas and trace the outward moving shock wave. The Chandra X-ray Observatory image, shown in blue, shows gas that has been heated to millions of degrees by the rebounding, or reverse shock wave. The x-ray data show that this gas is rich in oxygen and neon. These elements were created by nuclear reactions inside the star and hurled into space by the supernova. The Hubble Space Telescope optical image, shown in green, shows dense clumps of oxygen gas that have 'cooled' to about 30,000 degrees. Photo Credit: X-ray (NASA/CXC/SAO); optical (NASA/HST); radio (ATCA)

  6. New Windows based Color Morphological Operators for Biomedical Image Processing

    NASA Astrophysics Data System (ADS)

    Pastore, Juan; Bouchet, Agustina; Brun, Marcel; Ballarin, Virginia

    2016-04-01

    Morphological image processing is well known as an efficient methodology for image processing and computer vision. With the wide use of color in many areas, interest in color perception and processing has been growing rapidly. Many models have been proposed to extend morphological operators to the field of color images, dealing with new problems not present in the binary and gray-level contexts. These solutions usually deal with the lattice structure of the color space, or provide it with total orders, to be able to define basic operators with the required properties. In this work we propose a new locally defined ordering, in the context of window-based morphological operators, for the definition of erosion-like and dilation-like operators, which provides the desired properties expected from color morphology while avoiding some of the drawbacks of prior approaches. Experimental results show that the proposed color operators can be efficiently used for color image processing.
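
    The window-based selection structure underlying such operators can be illustrated with a toy color "erosion". This sketch uses a simple global lexicographic order (luminance first, then channels as tie-breakers), not the paper's locally defined ordering, purely to show how a total order turns window selection into a morphological operator.

```python
import numpy as np

# Illustrative sketch only: a color "erosion" that, in each 3x3 window,
# selects the pixel minimal under a lexicographic order (luminance, then
# R, G, B). The paper's locally defined ordering differs.
def color_erode(img):                          # img: (H, W, 3) float array
    H, W, _ = img.shape
    pad = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode='edge')
    lum = pad @ np.array([0.299, 0.587, 0.114])  # luminance key
    out = np.empty_like(img)
    for y in range(H):
        for x in range(W):
            win = pad[y:y + 3, x:x + 3].reshape(-1, 3)
            keys = lum[y:y + 3, x:x + 3].reshape(-1)
            # np.lexsort: last key is primary (luminance), others break ties
            order = np.lexsort((win[:, 2], win[:, 1], win[:, 0], keys))
            out[y, x] = win[order[0]]          # window minimum -> erosion
    return out

img = np.zeros((5, 5, 3))
img[2, 2] = [1.0, 0.5, 0.2]                    # one isolated bright pixel
eroded = color_erode(img)
print(eroded[2, 2])                            # bright pixel eroded away
```

    A dilation-like operator follows by taking the window maximum under the same order; using a single total order for both guarantees the usual erosion/dilation duality.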

  7. Radar Image, Color as Height , Salalah, Oman

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This radar image includes the city of Salalah, the second largest city in Oman. It illustrates how topography determines local climate and, in turn, where people live. This area on the southern coast of the Arabian Peninsula is characterized by a narrow coastal plain (bottom) facing southward into the Arabian Sea, backed by the steep escarpment of the Qara Mountains. The backslope of the Qara Mountains slopes gently into the vast desert of the Empty Quarter (at top). This area is subject to strong monsoonal storms from the Arabian Sea during the summer, when the mountains are enveloped in a sort of perpetual fog. The moisture from the monsoon enables agriculture on the Salalah plain, and also provides moisture for Frankincense trees growing on the desert (north) side of the mountains. In ancient times, incense derived from the sap of the Frankincense tree was the basis for an extremely lucrative trade. Radar and topographic data are used by historians and archaeologists to discover ancient trade routes and other significant ruins.

    This image combines two types of data from the Shuttle Radar Topography Mission. The image brightness corresponds to the strength of the radar signal reflected from the ground, while colors show the elevation as measured by SRTM. Colors range from green at the lowest elevations to brown at the highest elevations. This image contains about 1070 meters (3500 feet) of total relief. White speckles on the face of some of the mountains are holes in the data caused by steep terrain. These will be filled using coverage from an intersecting pass.

    The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission is designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter

  8. Adaptive optical ghost imaging through atmospheric turbulence.

    PubMed

    Shi, Dongfeng; Fan, Chengyu; Zhang, Pengfei; Zhang, Jinghui; Shen, Hong; Qiao, Chunhong; Wang, Yingjian

    2012-12-17

    We demonstrate for the first time (to our knowledge) that a high-quality image can still be obtained in atmospheric turbulence by applying an adaptive optical ghost imaging (AOGI) system, even when a conventional ghost imaging system fails to produce an image. The performance of AOGI under different strengths of atmospheric turbulence is investigated by simulation. The influence of an adaptive optics system with different numbers of adaptive mirror elements on the obtained image quality is also studied.
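
    The conventional ghost-imaging baseline that AOGI builds on can be sketched in a few lines: illuminate the object with known random patterns, record only a single-pixel "bucket" signal, and reconstruct by correlating the two. This is the turbulence-free computational variant, not the authors' adaptive-optics system.

```python
import numpy as np

# Minimal computational ghost-imaging sketch (no turbulence, no adaptive
# optics): correlate random illumination patterns with bucket detections.
rng = np.random.default_rng(0)
obj = np.zeros((16, 16))
obj[4:12, 6:10] = 1.0                          # toy transmissive object

N = 20000                                      # number of pattern realizations
patterns = rng.random((N, 16, 16))
bucket = (patterns * obj).sum(axis=(1, 2))     # single-pixel detections

# Correlation reconstruction: <b * P> - <b><P>
recon = (bucket[:, None, None] * patterns).mean(axis=0) \
        - bucket.mean() * patterns.mean(axis=0)
print(recon[obj == 1].mean() > recon[obj == 0].mean())  # → True
```

    The reconstruction sharpens as N grows (the estimator variance falls as 1/N); turbulence corrupts the pattern actually reaching the object, which is what the adaptive-optics correction in AOGI compensates for.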

  9. Fusion Of Edge Maps In Color Images

    NASA Astrophysics Data System (ADS)

    Delcroix, C. J.; Abidi, M. A.

    1988-10-01

    In this paper, a new analytic method for the detection of edges in color images is presented. This method focuses on the integration of three edge maps in order to increase one's confidence about the presence or absence of edges in a depicted scene. The integration process utilizes an algorithm developed by the authors under a broader research topic: the integration of registered multisensory data. It is based on the interaction between the following two constraints: the principle of existence, which tends to maximize the value of the output edge map at a given location if one input edge map features an edge, and the principle of confirmability, which adjusts this value according to the edge contents of the other input edge map at the same location by maximizing the similarity between them. These two maximizations are achieved using the Euler-Lagrange equations of the calculus of variations. This algorithm, which optimally fuses two correlated edge maps with regard to the above principles, is extended to the simultaneous fusion of three edge maps. Experiments were conducted using not only the red, green, and blue representation of color information but also other bases.

  10. Tiny Devices Project Sharp, Colorful Images

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Displaytech Inc., based in Longmont, Colorado and recently acquired by Micron Technology Inc. of Boise, Idaho, first received a Small Business Innovation Research contract in 1993 from Johnson Space Center to develop tiny, electronic, color displays, called microdisplays. Displaytech has since sold over 20 million microdisplays and was ranked one of the fastest growing technology companies by Deloitte and Touche in 2005. Customers currently incorporate the microdisplays in tiny pico-projectors, which weigh only a few ounces and attach to media players, cell phones, and other devices. The projectors can convert a digital image from the typical postage stamp size into a bright, clear, four-foot projection. The company believes sales of this type of pico-projector may exceed $1.1 billion within 5 years.

  11. Color Doppler imaging of retinal diseases.

    PubMed

    Dimitrova, Galina; Kato, Satoshi

    2010-01-01

    Color Doppler imaging (CDI) is a widely used method for evaluating ocular circulation that has been used in a number of studies on retinal diseases. CDI assesses blood velocity parameters by using ultrasound waves. In ophthalmology, these assessments are mainly performed on the retrobulbar blood vessels: the ophthalmic, the central retinal, and the short posterior ciliary arteries. In this review, we discuss CDI use for the assessment of retinal diseases classified into the following: vascular diseases, degenerations, dystrophies, and detachment. The retinal vascular diseases that have been investigated by CDI include diabetic retinopathy, retinal vein occlusions, retinal artery occlusions, ocular ischemic conditions, and retinopathy of prematurity. Degenerations and dystrophies included in this review are age-related macular degeneration, myopia, and retinitis pigmentosa. CDI has been used for the differential diagnosis of retinal detachment, as well as the evaluation of retrobulbar circulation in this condition. CDI is valuable for research and is a potentially useful diagnostic tool in the clinical setting.

  12. Color Voyager 2 Image Showing Crescent Uranus

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This image shows a crescent Uranus, a view that Earthlings never witnessed until Voyager 2 flew near and then beyond Uranus on January 24, 1986. This planet's natural blue-green color is due to the absorption of redder wavelengths in the atmosphere by traces of methane gas. Uranus' diameter is 32,500 miles, a little over four times that of Earth. The hazy blue-green atmosphere probably extends to a depth of around 5,400 miles, where it rests above what is believed to be an icy or liquid mixture (an 'ocean') of water, ammonia, methane, and other volatiles, which in turn surrounds a rocky core perhaps a little smaller than Earth.

  13. Color accuracy and reproducibility in whole slide imaging scanners

    PubMed Central

    Shrestha, Prarthana; Hulsken, Bas

    2014-01-01

    Abstract We propose a workflow for color reproduction in whole slide imaging (WSI) scanners, such that the colors in the scanned images match the actual slide colors and the inter-scanner variation is minimal. We describe a new method of preparation and verification of the color phantom slide, consisting of a standard IT8-target transmissive film, which is used in color calibrating and profiling the WSI scanner. We explore several International Color Consortium (ICC) compliant techniques in color calibration/profiling and rendering intents for translating the scanner-specific colors to the standard display (sRGB) color space. Based on the quality of the color reproduction in histopathology slides, we propose the matrix-based calibration/profiling and absolute colorimetric rendering approach. The main advantages of the proposed workflow are that it is compliant with the ICC standard, applicable to color management systems on different platforms, and involves no external color measurement devices. We quantify color difference using the CIE-DeltaE2000 metric, where DeltaE values below 1 are considered imperceptible. Our evaluation on 14 phantom slides, manufactured according to the proposed method, shows an average inter-slide color difference below 1 DeltaE. The proposed workflow is implemented and evaluated in 35 WSI scanners developed at Philips, called the Ultra Fast Scanners (UFS). The color accuracy, measured as DeltaE between the scanner-reproduced colors and the reference colorimetric values of the phantom patches, improves on average from 10 DeltaE in uncalibrated scanners to 3.5 DeltaE in calibrated scanners. The average inter-scanner color difference is found to be 1.2 DeltaE. The improvement in color performance upon using the proposed method is apparent in the visual color quality of the tissue scans. PMID:26158041

  14. Tongue color analysis and discrimination based on hyperspectral images.

    PubMed

    Li, Qingli; Liu, Zhi

    2009-04-01

    The human tongue is one of the important organs of the body and carries abundant information about health status. Among the various features of the tongue, color is the most important. Most existing methods perform pixel-wise or RGB color space classification on tongue images captured with color CCD cameras. However, these conventional methods impede accurate analysis of the tongue surface because such images carry limited information. To address the problems with RGB images, a pushbroom hyperspectral tongue imager is developed and its spectral response calibration method is discussed. A new approach to analyzing tongue color based on spectra with a spectral angle mapper is presented. In addition, 200 hyperspectral tongue images from the tongue image database were selected, on which color recognition was performed with the new method. The experimental results show that the proposed method performs well in terms of correct color recognition of tongue coatings and substances. The overall rate of correctness for each color category was 85% for tongue substances and 88% for tongue coatings with the new method. In addition, this algorithm can trace the color distribution on the tongue surface, which is very helpful for tongue disease diagnosis. The spectrum of an organism can be used to retrieve its colors more accurately. This new color analysis approach is superior to the traditional method, especially in delineating meaningful areas of tongue substances and coatings. PMID:19157779
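
    The spectral angle mapper (SAM) used above classifies a pixel by the smallest angle between its spectrum and a set of reference spectra, making it insensitive to overall brightness. A minimal sketch (the class names and 4-band reference spectra below are hypothetical toy values, not from the paper):

```python
import numpy as np

# Spectral angle mapper (SAM): classify a pixel spectrum by the smallest
# angle to a set of reference spectra; scale-invariant by construction.
def spectral_angle(s, r):
    cos = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
    return np.arccos(np.clip(cos, -1.0, 1.0))   # radians

refs = {
    "pale":   np.array([0.8, 0.7, 0.6, 0.5]),   # toy 4-band reference spectra
    "red":    np.array([0.2, 0.3, 0.5, 0.9]),
    "purple": np.array([0.6, 0.3, 0.3, 0.7]),
}
pixel = np.array([0.25, 0.33, 0.52, 0.88])      # unknown pixel spectrum

best = min(refs, key=lambda k: spectral_angle(pixel, refs[k]))
print(best)                                     # → red
```

    Because SAM compares spectral shape rather than magnitude, it is robust to the uneven illumination common in pushbroom imaging, which is one reason it suits hyperspectral tongue data better than RGB thresholding.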

  15. Demosaiced pixel super-resolution for multiplexed holographic color imaging.

    PubMed

    Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan

    2016-01-01

    To synthesize a holographic color image, one can sequentially take three holograms at different wavelengths, e.g., at red (R), green (G) and blue (B) parts of the spectrum, and digitally merge them. To speed up the imaging process by a factor of three, a Bayer color sensor-chip can also be used to demultiplex three wavelengths that simultaneously illuminate the sample and digitally retrieve individual sets of holograms using the known transmission spectra of the Bayer color filters. However, because the pixels of different channels (R, G, B) on a Bayer color sensor are not at the same physical location, conventional demosaicing techniques generate color artifacts in holographic imaging using simultaneous multi-wavelength illumination. Here we demonstrate that pixel super-resolution can be merged into the color de-multiplexing process to significantly suppress the artifacts in wavelength-multiplexed holographic color imaging. This new approach, termed Demosaiced Pixel Super-Resolution (D-PSR), generates color images that are similar in performance to sequential illumination at three wavelengths, and therefore improves the speed of holographic color imaging by 3-fold. The D-PSR method is broadly applicable to holographic microscopy applications, where high-resolution imaging and multi-wavelength illumination are desired.
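
    The de-multiplexing step described above amounts to a per-pixel linear inversion: each Bayer channel records a known mixture of the three illumination wavelengths, so the true per-wavelength signals are recovered by solving a small linear system. A sketch with a hypothetical 3x3 transmission matrix (the actual filter spectra are measured per sensor):

```python
import numpy as np

# Wavelength de-multiplexing with known Bayer filter transmissions:
# measured = T @ true per pixel, with T the (hypothetical) transmission of
# each color filter at each illumination wavelength.
T = np.array([[0.80, 0.15, 0.05],    # red filter   @ (R, G, B) wavelengths
              [0.10, 0.75, 0.15],    # green filter
              [0.05, 0.20, 0.85]])   # blue filter

true = np.array([0.6, 0.3, 0.9])     # true per-wavelength signal at a pixel
measured = T @ true                   # what the Bayer channels record

recovered = np.linalg.solve(T, measured)   # de-multiplexed signals
print(np.round(recovered, 3))              # ≈ [0.6 0.3 0.9]
```

    D-PSR's contribution is to perform this inversion jointly with pixel super-resolution, since the R, G, and B samples sit at different physical locations and a naive per-pixel inversion would introduce the demosaicing artifacts noted above.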

  19. Edge detection, color quantization, segmentation, texture removal, and noise reduction of color image using quaternion iterative filtering

    NASA Astrophysics Data System (ADS)

    Hsiao, Yu-Zhe; Pei, Soo-Chang

    2014-07-01

    Empirical mode decomposition (EMD) is a simple, local, adaptive, and efficient method for nonlinear and nonstationary signal analysis. However, for multidimensional signals, EMD and its variants, such as bidimensional EMD (BEMD) and multidimensional EMD (MEMD), are very slow due to the large number of envelope interpolations required. Recently, a method called iterative filtering has been proposed. This filtering-based method is not as precise as EMD, but it is very fast and can achieve results comparable to EMD in many image and signal processing applications. We combine quaternion algebra and iterative filtering to perform edge detection, color quantization, segmentation, texture removal, and noise reduction on color images. Similar results can be obtained using quaternions combined with EMD; however, as mentioned before, EMD is slow and cumbersome. Therefore, we propose quaternion iterative filtering as an alternative to quaternion EMD (QEMD). The edges of color images can be detected using intrinsic mode functions (IMFs), and the color quantization results can be obtained from the residual image. The noise reduction algorithm of our method can handle Gaussian, salt-and-pepper, and speckle noise, among others. The peak signal-to-noise ratio results are satisfactory and the processing speed is very fast. Since textures in a color image are high-frequency components, we can also use quaternion iterative filtering to decompose a color image into many high- and low-frequency IMFs and remove textures by eliminating the high-frequency IMFs.
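
    The iterative-filtering idea, in its scalar 1-D form, extracts an IMF-like fast component by repeatedly subtracting a smoothed version of the signal, leaving the slow trend in the residual. The sketch below uses a triangular smoothing kernel (chosen here for stability) with illustrative window size and iteration count; it is a stand-in for the quaternion formulation, which applies the same machinery to color pixels encoded as quaternions.

```python
import numpy as np

# 1-D iterative-filtering sketch: extract a fast, IMF-like component by
# repeatedly subtracting a triangular-kernel moving average; the residual
# keeps the slow trend.
def smooth(x, w):
    k = np.convolve(np.ones(w), np.ones(w))
    k = k / k.sum()                              # triangular kernel
    pad = len(k) // 2
    return np.convolve(np.pad(x, pad, mode='edge'), k, mode='valid')

def iterative_filter(x, w=15, n_iter=10):
    imf = x.copy()
    for _ in range(n_iter):
        imf = imf - smooth(imf, w)               # sifting-like step
    return imf, x - imf                          # (fast component, residual)

t = np.linspace(0, 1, 512)
signal = np.sin(2 * np.pi * 40 * t) + t ** 2     # fast oscillation + slow trend
fast, slow = iterative_filter(signal)
print(np.corrcoef(slow, t ** 2)[0, 1])           # residual tracks the trend
```

    Repeating the extraction on the residual yields successive IMFs from fast to slow, which is how the texture-removal step above discards high-frequency IMFs while keeping the low-frequency ones.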

  20. Edge-preserving image denoising via optimal color space projection.

    PubMed

    Lian, Nai-Xiang; Zagorodnov, Vitali; Tan, Yap-Peng

    2006-09-01

    Denoising of color images can be done on each color component independently. Recent work has shown that exploiting strong correlation between high-frequency content of different color components can improve the denoising performance. We show that for typical color images high correlation also means similarity, and propose to exploit this strong intercolor dependency using an optimal luminance/color-difference space projection. Experimental results confirm that performing denoising on the projected color components yields superior denoising performance, both in peak signal-to-noise ratio and visual quality sense, compared to that of existing solutions. We also develop a novel approach to estimate directly from the noisy image data the image and noise statistics, which are required to determine the optimal projection.
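
    The intuition behind the optimal projection can be illustrated with PCA of the RGB covariance: when the three channels share most of their content, one luminance-like direction captures nearly all the energy, and the color-difference directions carry mostly noise. This toy sketch (synthetic data, not the paper's estimator) shows the effect:

```python
import numpy as np

# Sketch: a luminance/color-difference projection via PCA of the RGB
# covariance; strong inter-channel correlation concentrates the shared
# content into one channel, which helps channel-wise denoising.
rng = np.random.default_rng(2)
base = rng.random(5000)                          # shared (luminance) content
rgb = np.stack([base + 0.05 * rng.standard_normal(5000)
                for _ in range(3)], axis=1)      # three correlated channels

cov = np.cov(rgb, rowvar=False)
evals, evecs = np.linalg.eigh(cov)               # ascending eigenvalues
proj = rgb @ evecs                               # decorrelated channels

ratio = evals[-1] / evals.sum()                  # energy in the top component
print(ratio)                                     # most energy in one channel
```

    Denoising the low-energy color-difference channels aggressively while treating the luminance channel more carefully is what yields the PSNR gains reported above; the paper additionally estimates the image and noise statistics directly from the noisy data.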

  1. Information-Adaptive Image Encoding and Restoration

    NASA Technical Reports Server (NTRS)

    Park, Stephen K.; Rahman, Zia-ur

    1998-01-01

    The multiscale retinex with color restoration (MSRCR) has shown itself to be a very versatile automatic image enhancement algorithm that simultaneously provides dynamic range compression, color constancy, and color rendition. A number of algorithms exist that provide one or more of these features, but not all. In this paper we compare the performance of the MSRCR with techniques that are widely used for image enhancement. Specifically, we compare the MSRCR with color adjustment methods such as gamma correction and gain/offset application, histogram modification techniques such as histogram equalization and manual histogram adjustment, and other more powerful techniques such as homomorphic filtering and 'burning and dodging'. The comparison is carried out by testing the suite of image enhancement methods on a set of diverse images. We find that though some of these techniques work well for some of these images, only the MSRCR performs universally well on the test set.
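
    The multiscale retinex core (without the color restoration step) averages, over several surround scales, the difference between the log image and a log of its Gaussian-blurred version. A single-channel sketch, with illustrative scales and log1p standing in for log to avoid zeros:

```python
import numpy as np

# Single-channel multiscale retinex sketch: mean over scales of
# log(image) - log(Gaussian surround); scales below are illustrative.
def gauss_blur(img, sigma):
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    pad = np.pad(img, r, mode='edge')            # separable convolution
    tmp = np.apply_along_axis(lambda v: np.convolve(v, k, 'valid'), 0, pad)
    return np.apply_along_axis(lambda v: np.convolve(v, k, 'valid'), 1, tmp)

def msr(img, sigmas=(2.0, 8.0, 32.0)):
    logI = np.log1p(img)                         # log1p avoids log(0)
    return np.mean([logI - np.log1p(gauss_blur(img, s)) for s in sigmas],
                   axis=0)

# A dark gradient with a small bright patch: MSR flattens the illumination
img = np.tile(np.linspace(0.05, 0.5, 64), (64, 1))
img[30:34, 30:34] += 0.4
out = msr(img)
print(out[31, 31] > out[10, 10])                 # local contrast preserved
```

    The full MSRCR adds a per-channel color restoration factor and a final gain/offset; the sketch shows only the dynamic-range-compression behavior being compared against gamma correction and histogram methods above.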

  2. Adaptive image segmentation by quantization

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Yun, David Y.

    1992-12-01

    Segmentation of images into texturally homogeneous regions is a fundamental problem in an image understanding system. Most region-oriented segmentation approaches suffer from the problem of selecting different thresholds for different images. In this paper, an adaptive image segmentation method based on vector quantization is presented. It automatically segments images without preset thresholds. The approach contains a feature extraction module and a two-layer hierarchical clustering module, with a vector quantizer (VQ), implemented by a competitive learning neural network, in the first layer. A near-optimal competitive learning algorithm (NOLA) is employed to train the vector quantizer. NOLA combines the advantages of both the Kohonen self-organizing feature map (KSFM) and the K-means clustering algorithm. After the VQ is trained, the weights of the network and the number of input vectors clustered by each neuron form a 3-D topological feature map with separable hills aggregated from similar vectors. This overcomes the inability of most other clustering algorithms to visualize the geometric properties of data in a high-dimensional space. The second clustering algorithm operates on the feature map instead of the input set itself. Since the number of units in the feature map is much smaller than the number of feature vectors in the feature set, it is easy to check all peaks and find the 'correct' number of clusters, also a key problem in current clustering techniques. In the experiments, we compare our algorithm with the K-means clustering method on a variety of images. The results show that our algorithm achieves better performance.
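
    The competitive-learning update at the heart of such a vector quantizer is simple: the codeword closest to each sample wins and moves toward it. The sketch below is a much-simplified winner-take-all stand-in for NOLA (which additionally blends KSFM and K-means behavior); the data, learning rate, and initialization are illustrative.

```python
import numpy as np

# Minimal competitive-learning vector quantizer (winner-take-all updates),
# a simplified stand-in for the NOLA training described above.
rng = np.random.default_rng(3)

def train_vq(data, codebook, lr=0.1, epochs=20):
    w = codebook.astype(float).copy()
    for _ in range(epochs):
        for x in data[rng.permutation(len(data))]:
            win = np.argmin(((w - x) ** 2).sum(axis=1))  # closest codeword
            w[win] += lr * (x - w[win])                  # move winner toward x
    return w

# Two well-separated 2-D clusters; codewords should settle on their centers
data = np.vstack([rng.normal(0.0, 0.05, (100, 2)),
                  rng.normal(1.0, 0.05, (100, 2))])
init = data[[0, -1]]                   # simple init: one sample per cluster
w = train_vq(data, init)
print(np.sort(w[:, 0]).round(1))       # ≈ [0. 1.]
```

    Counting how many inputs each trained unit wins gives the "hill heights" of the 3-D topological feature map described above, on which the second-layer clustering operates.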

  3. Using color and grayscale images to teach histology to color-deficient medical students.

    PubMed

    Rubin, Lindsay R; Lackey, Wendy L; Kennedy, Frances A; Stephenson, Robert B

    2009-01-01

    Examination of histologic and histopathologic microscopic sections relies upon differential colors provided by staining techniques, such as hematoxylin and eosin, to delineate normal tissue components and to identify pathologic alterations in these components. Given the prevalence of color deficiency (commonly called "color blindness") in the general population, it is likely that this reliance upon color differentiation poses a significant obstacle for several medical students beginning a course of study that includes examination of histologic slides. In the past, first-year medical students at Michigan State University who identified themselves as color deficient were encouraged to use color transparency overlays or tinted contact lenses to filter out problematic colors. Recently, however, we have offered such students a computer monitor adjusted to grayscale for in-lab work, as well as grayscale copies of color photomicrographs for examination purposes. Grayscale images emphasize the texture of tissues and the contrasts between tissues as the students learn histologic architecture. Using this approach, color-deficient students have quickly learned to compensate for their deficiency by focusing on cell and tissue structure rather than on color variation. Based upon our experience with color-deficient students, we believe that grayscale photomicrographs may also prove instructional for students with normal (trichromatic) color vision, by encouraging them to consider structural characteristics of cells and tissues that may otherwise be overshadowed by stain colors. PMID:19347949

  4. Image reconstruction for hybrid true-color micro-CT.

    PubMed

    Xu, Qiong; Yu, Hengyong; Bennett, James; He, Peng; Zainon, Rafidah; Doesburg, Robert; Opie, Alex; Walsh, Mike; Shen, Haiou; Butler, Anthony; Butler, Phillip; Mou, Xuanqin; Wang, Ge

    2012-06-01

    X-ray micro-CT is an important imaging tool for biomedical researchers. Our group has recently proposed a hybrid "true-color" micro-CT system to improve contrast resolution with lower system cost and radiation dose. The system incorporates an energy-resolved photon-counting true-color detector into a conventional micro-CT configuration, and can be used for material decomposition. In this paper, we demonstrate an interior color-CT image reconstruction algorithm developed for this hybrid true-color micro-CT system. A compressive sensing-based statistical interior tomography method is employed to reconstruct each channel in the local spectral imaging chain, where the reconstructed global gray-scale image from the conventional imaging chain serves as the initial guess. Principal component analysis was used to map the spectral reconstructions into the color space. The proposed algorithm was evaluated by numerical simulations, physical phantom experiments, and animal studies. The results confirm the merits of the proposed algorithm, and demonstrate the feasibility of the hybrid true-color micro-CT system. Additionally, a "color diffusion" phenomenon was observed whereby high-quality true-color images are produced not only inside the region of interest, but also in neighboring regions. It appears that harnessing this phenomenon could potentially reduce the color detector size for a given ROI, further reducing system cost and radiation dose.

  5. Exploring the use of memory colors for image enhancement

    NASA Astrophysics Data System (ADS)

    Xue, Su; Tan, Minghui; McNamara, Ann; Dorsey, Julie; Rushmeier, Holly

    2014-02-01

    Memory colors refer to those colors recalled in association with familiar objects. While some previous work has introduced this concept to assist digital image enhancement, its basis, i.e., on-screen memory colors, has not been appropriately investigated. In addition, the resulting adjustment methods are not evaluated from a perceptual point of view. In this paper, we first perform a context-free perceptual experiment to establish the overall distributions of screen memory colors for three pervasive objects. Then, we use a context-based experiment to locate the most representative memory colors; at the same time, we investigate the interactions of memory colors between different objects. Finally, we show a simple yet effective application using representative memory colors to enhance digital images. A user study is performed to evaluate the performance of our technique.

  6. Evaluation of color error and noise on simulated images

    NASA Astrophysics Data System (ADS)

    Mornet, Clémence; Vaillant, Jérôme; Decroux, Thomas; Hérault, Didier; Schanen, Isabelle

    2010-01-01

    The evaluation of CMOS sensor performance in terms of color accuracy and noise is a big challenge for camera phone manufacturers. In this paper, we present a tool developed with Matlab at STMicroelectronics which allows quality parameters to be evaluated on simulated images. These images are computed based on measured or predicted Quantum Efficiency (QE) curves and a noise model. By setting the parameters of integration time and illumination, the tool optimizes the color correction matrix (CCM) and calculates the color error, color saturation, and signal-to-noise ratio (SNR). After this color correction optimization step, a Graphical User Interface (GUI) has been designed to display a simulated image at a chosen illumination level, with all the characteristics of a real image taken by the sensor with the previous color correction. Simulated images can be a synthetic Macbeth ColorChecker, for which the reflectance of each patch is known, a multi-spectral image described by the reflectance spectrum of each pixel, or an image taken at a high light level. A validation of the results has been performed with ST sensors under development. Finally, we present two applications: one based on the trade-off between color saturation and noise when optimizing the CCM, and the other based on demosaicking SNR trade-offs.
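
    The saturation-versus-noise trade-off in CCM optimization has a simple mechanism: for i.i.d. pixel noise, applying a CCM amplifies each output channel's noise by the root sum of squares of the corresponding matrix row, so a more saturating matrix (larger off-diagonal terms) costs SNR. The matrices below are illustrative, not from the paper:

```python
import numpy as np

# Sketch of the color-saturation vs. noise trade-off: per-channel output
# noise std relative to input, for i.i.d. pixel noise, is the L2 norm of
# each CCM row. Rows sum to 1 so white is preserved.
def noise_gain(ccm):
    return np.sqrt((ccm ** 2).sum(axis=1))

ccm_mild = np.array([[ 1.2, -0.1, -0.1],
                     [-0.1,  1.2, -0.1],
                     [-0.1, -0.1,  1.2]])
ccm_strong = np.array([[ 1.8, -0.4, -0.4],
                       [-0.4,  1.8, -0.4],
                       [-0.4, -0.4,  1.8]])   # more saturating

print(noise_gain(ccm_mild), noise_gain(ccm_strong))
```

    An optimizer like the one described above searches this trade-off, typically minimizing a weighted combination of color error (ΔE) and the noise amplification shown here at the target illumination level.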

  7. Efficient text segmentation and adaptive color error diffusion for text enhancement

    NASA Astrophysics Data System (ADS)

    Kwon, Jae-Hyun; Park, Tae-Yong; Kim, Yun-Tae; Cho, Yang-Ho; Ha, Yeong-Ho

    2005-01-01

    This paper proposes an adaptive error diffusion algorithm for text enhancement, preceded by an efficient text segmentation that uses the maximum gradient difference (MGD). The gradients are calculated along scan lines, and the MGD values are filled within a local window to merge text segments. If the value is above a threshold, the pixel is considered potential text. Isolated segments are then eliminated in a non-text region filtering process. After the text segmentation, a conventional error diffusion method is applied to the background, while edge enhancement error diffusion is used for the text. Since visually objectionable artifacts are inevitable when using two different halftoning algorithms, gradual dilation is proposed to minimize the boundary artifacts in the segmented text blocks before halftoning. Sharpening based on the gradually dilated text region (GDTR) then prevents the printing of successive dots around the text region boundaries. The method is extended to halftone color images to sharpen the text regions. The proposed adaptive error diffusion algorithm involves color halftoning that controls the amount of edge enhancement using a general error filter. However, edge enhancement produces color distortion, as there is a trade-off between edge enhancement and color difference. The multiplicative edge enhancement parameters are selected based on the amount of edge sharpening and color difference. In addition, an error factor is introduced to reduce the dot elimination artifact generated by the edge enhancement error diffusion. In experiments, the text in a scanned image was sharper with the proposed algorithm than with conventional error diffusion, without changes to the background.
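    The MGD text-candidate measure can be sketched as follows; the window size and threshold are illustrative choices, not the authors' parameters.

```python
import numpy as np

# Maximum gradient difference (MGD) along scan lines: for each pixel, take
# the difference between the largest and smallest horizontal gradient in a
# local window. Text strokes produce paired large positive/negative
# gradients, so their MGD is high.
def mgd_map(gray, win=7):
    grad = np.zeros_like(gray, dtype=float)
    grad[:, 1:] = np.diff(gray.astype(float), axis=1)  # horizontal gradient per scan line
    h, w = gray.shape
    out = np.zeros((h, w))
    half = win // 2
    for x in range(w):
        lo, hi = max(0, x - half), min(w, x + half + 1)
        window = grad[:, lo:hi]
        out[:, x] = window.max(axis=1) - window.min(axis=1)
    return out

# A dark vertical "stroke" on a light background
img = np.full((5, 20), 200, dtype=np.uint8)
img[:, 8:12] = 20
mgd = mgd_map(img)
mask = mgd > 100          # candidate text pixels (illustrative threshold)
```

Pixels near the stroke get a large MGD (the window spans both the falling and rising edge), while flat background columns stay at zero.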

  8. Efficient text segmentation and adaptive color error diffusion for text enhancement

    NASA Astrophysics Data System (ADS)

    Kwon, Jae-Hyun; Park, Tae-Yong; Kim, Yun-Tae; Cho, Yang-Ho; Ha, Yeong-Ho

    2004-12-01

    This paper proposes an adaptive error diffusion algorithm for text enhancement, preceded by an efficient text segmentation that uses the maximum gradient difference (MGD). The gradients are calculated along scan lines, and the MGD values are filled within a local window to merge text segments. If the value is above a threshold, the pixel is considered potential text. Isolated segments are then eliminated in a non-text region filtering process. After the text segmentation, a conventional error diffusion method is applied to the background, while edge enhancement error diffusion is used for the text. Since visually objectionable artifacts are inevitable when using two different halftoning algorithms, gradual dilation is proposed to minimize the boundary artifacts in the segmented text blocks before halftoning. Sharpening based on the gradually dilated text region (GDTR) then prevents the printing of successive dots around the text region boundaries. The method is extended to halftone color images to sharpen the text regions. The proposed adaptive error diffusion algorithm involves color halftoning that controls the amount of edge enhancement using a general error filter. However, edge enhancement produces color distortion, as there is a trade-off between edge enhancement and color difference. The multiplicative edge enhancement parameters are selected based on the amount of edge sharpening and color difference. In addition, an error factor is introduced to reduce the dot elimination artifact generated by the edge enhancement error diffusion. In experiments, the text in a scanned image was sharper with the proposed algorithm than with conventional error diffusion, without changes to the background.

  9. Color accuracy and reproducibility in whole slide imaging scanners

    NASA Astrophysics Data System (ADS)

    Shrestha, Prarthana; Hulsken, Bas

    2014-03-01

    In this paper, we propose a work-flow for color reproduction in whole slide imaging (WSI) scanners such that the colors in the scanned images match the actual slide colors and the inter-scanner variation is minimal. We describe a novel method of preparation and verification of the color phantom slide, consisting of a standard IT8-target transmissive film, which is used in color calibrating and profiling the WSI scanner. We explore several ICC-compliant techniques in color calibration/profiling and rendering intents for translating the scanner-specific colors to the standard display (sRGB) color space. Based on the quality of color reproduction in histopathology tissue slides, we propose the matrix-based calibration/profiling and absolute colorimetric rendering approach. The main advantages of the proposed work-flow are that it is compliant with the ICC standard, applicable to color management systems on different platforms, and involves no external color measurement devices. We measure objective color performance using the CIE DeltaE2000 metric, where DeltaE values below 1 are considered imperceptible. Our evaluation of 14 phantom slides, manufactured according to the proposed method, shows an average inter-slide color difference below 1 DeltaE. The proposed work-flow is implemented and evaluated in 35 Philips Ultra Fast Scanners (UFS). The results show that the average color difference between a scanner and the reference is 3.5 DeltaE, and among the scanners is 3.1 DeltaE. The improvement in color performance when using the proposed method is apparent in the visual color quality of the tissue scans.
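    The evaluation above uses CIE DeltaE2000. As a simpler illustration of a Lab color difference, the older CIE76 metric is plain Euclidean distance in CIELAB; DeltaE2000 adds lightness, chroma and hue weighting terms on top of this idea.

```python
import numpy as np

# CIE76 color difference: Euclidean distance between two CIELAB points.
# A value below ~1 is generally taken as imperceptible, the same rule of
# thumb the abstract applies to its DeltaE2000 figures.
def delta_e76(lab1, lab2):
    lab1 = np.asarray(lab1, dtype=float)
    lab2 = np.asarray(lab2, dtype=float)
    return float(np.linalg.norm(lab1 - lab2))

# Two near-identical colors (illustrative Lab values)
d = delta_e76([52.0, 10.0, -6.0], [52.3, 10.4, -6.2])
```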

  10. New Colors for Histology: Optimized Bivariate Color Maps Increase Perceptual Contrast in Histological Images

    PubMed Central

    Kather, Jakob Nikolas; Weis, Cleo-Aron; Marx, Alexander; Schuster, Alexander K.; Schad, Lothar R.; Zöllner, Frank Gerrit

    2015-01-01

    Background Accurate evaluation of immunostained histological images is required for reproducible research in many different areas and forms the basis of many clinical decisions. The quality and efficiency of histopathological evaluation is limited by the information content of a histological image, which is primarily encoded as perceivable contrast differences between objects in the image. However, the colors of chromogen and counterstain used for histological samples are not always optimally distinguishable, even under optimal conditions. Methods and Results In this study, we present a method to extract the bivariate color map inherent in a given histological image and to retrospectively optimize this color map. We use a novel, unsupervised approach based on color deconvolution and principal component analysis to show that the commonly used blue and brown color hues in Hematoxylin-3,3'-Diaminobenzidine (DAB) images are poorly suited for human observers. We then demonstrate that it is possible to construct improved color maps according to objective criteria and that these color maps can be used to digitally re-stain histological images. Validation To validate whether this procedure improves distinguishability of objects and background in histological images, we re-stain phantom images and N = 596 large histological images of immunostained samples of human solid tumors. We show that perceptual contrast is improved by a factor of 2.56 in phantom images and up to a factor of 2.17 in sets of histological tumor images. Context Thus, we provide an objective and reliable approach to measure object distinguishability in a given histological image and to maximize visual information available to a human observer. This method could easily be incorporated in digital pathology image viewing systems to improve accuracy and efficiency in research and diagnostics. PMID:26717571
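    The color deconvolution step can be sketched in the classic optical-density formulation (Ruifrok-Johnston style). The stain vectors below are commonly quoted hematoxylin/DAB directions, used here as illustrative assumptions rather than values fitted by the authors.

```python
import numpy as np

# Color deconvolution: convert RGB to optical density (Beer-Lambert),
# then solve for per-stain concentrations against a matrix of unit
# stain vectors. The third row is a residual vector orthogonal to both.
hema = np.array([0.650, 0.704, 0.286])   # hematoxylin OD direction (assumed)
dab  = np.array([0.269, 0.568, 0.778])   # DAB OD direction (assumed)
resid = np.cross(hema, dab)
stains = np.stack([hema, dab, resid / np.linalg.norm(resid)])

def separate(rgb):
    od = -np.log10(np.clip(rgb / 255.0, 1e-6, 1.0))   # optical density per channel
    return od @ np.linalg.inv(stains)                 # per-pixel stain concentrations

# A pixel generated purely by the hematoxylin vector at strength 0.8
pixel = 255.0 * 10 ** (-0.8 * hema)
conc = separate(pixel[None, :])[0]       # loads on channel 0 only
```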

  11. Color normalization for robust evaluation of microscopy images

    NASA Astrophysics Data System (ADS)

    Švihlík, Jan; Kybic, Jan; Habart, David

    2015-09-01

    This paper deals with color normalization of microscopy images of Langerhans islets in order to increase robustness of the islet segmentation to illumination changes. The main application is automatic quantitative evaluation of the islet parameters, useful for determining the feasibility of islet transplantation in diabetes. First, background illumination inhomogeneity is compensated and a preliminary foreground/background segmentation is performed. The color normalization itself is done in either lαβ or logarithmic RGB color spaces, by comparison with a reference image. The color-normalized images are segmented using color-based features and pixel-wise logistic regression, trained on manually labeled images. Finally, relevant statistics such as the total islet area are evaluated in order to determine the success likelihood of the transplantation.
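    The reference-based normalization can be sketched as per-channel mean/standard-deviation matching in logarithmic RGB; the images below are synthetic stand-ins, and the exact statistics matched by the authors may differ.

```python
import numpy as np

# Match each channel's log-domain mean and standard deviation to those of
# a reference image, then transform back. This removes a global color
# cast while preserving relative structure.
def normalize_to_reference(img, ref):
    log_img = np.log(img.astype(float) + 1.0)
    log_ref = np.log(ref.astype(float) + 1.0)
    out = np.empty_like(log_img)
    for c in range(3):   # per color channel
        mu_i, sd_i = log_img[..., c].mean(), log_img[..., c].std()
        mu_r, sd_r = log_ref[..., c].mean(), log_ref[..., c].std()
        out[..., c] = (log_img[..., c] - mu_i) / (sd_i + 1e-9) * sd_r + mu_r
    return np.clip(np.exp(out) - 1.0, 0, 255)

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(32, 32, 3))
img = np.clip(ref * 0.6 + 40, 0, 255)     # simulated illumination change
norm = normalize_to_reference(img, ref)
```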

  12. Refinement of Colored Mobile Mapping Data Using Intensity Images

    NASA Astrophysics Data System (ADS)

    Yamakawa, T.; Fukano, K.; Onodera, R.; Masuda, H.

    2016-06-01

    Mobile mapping systems (MMS) can capture dense point-clouds of urban scenes. For visualizing realistic scenes using point-clouds, RGB colors have to be added to point-clouds. To generate colored point-clouds in a post-process, each point is projected onto camera images and an RGB color is copied to the point at the projected position. However, incorrect colors are often added to point-clouds because of the misalignment of laser scanners, the calibration errors of cameras and laser scanners, or the failure of GPS acquisition. In this paper, we propose a new method to correct RGB colors of point-clouds captured by an MMS. In our method, RGB colors of a point-cloud are corrected by comparing intensity images and RGB images. However, since an MMS outputs sparse and anisotropic point-clouds, regular images cannot be obtained from the intensities of points. Therefore, we convert a point-cloud into a mesh model and project triangle faces onto image space, on which regular lattices are defined. Then we extract edge features from intensity images and RGB images, and detect their correspondences. In our experiments, our method worked very well for correcting RGB colors of point-clouds captured by an MMS.
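    The color-assignment step (project each 3-D point into a camera image and copy the RGB value at the projected pixel) can be sketched with a pinhole model; the intrinsic matrix and image below are illustrative, not MMS calibration data.

```python
import numpy as np

# Project camera-frame 3-D points with intrinsics K and copy the RGB
# value at the projected pixel to each point. Points falling outside the
# image are flagged invalid.
def colorize_points(points, image, K):
    uvw = (K @ points.T).T            # homogeneous pixel coords [u*z, v*z, z]
    uv = uvw[:, :2] / uvw[:, 2:3]     # perspective divide
    px = np.round(uv).astype(int)
    h, w, _ = image.shape
    ok = (px[:, 0] >= 0) & (px[:, 0] < w) & (px[:, 1] >= 0) & (px[:, 1] < h)
    colors = np.zeros((len(points), 3), dtype=image.dtype)
    colors[ok] = image[px[ok, 1], px[ok, 0]]
    return colors, ok

K = np.array([[100.0, 0.0, 32.0],     # assumed focal length and principal point
              [0.0, 100.0, 32.0],
              [0.0, 0.0, 1.0]])
image = np.zeros((64, 64, 3), dtype=np.uint8)
image[32, 32] = (255, 0, 0)           # single red pixel at the image center
points = np.array([[0.0, 0.0, 5.0]])  # point on the optical axis
colors, ok = colorize_points(points, image, K)
```

The misalignment problems the paper addresses enter through errors in K, the camera pose, or timing; the correction then shifts these projections using intensity/RGB edge correspondences.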

  13. Spatial imaging in color and HDR: prometheus unchained

    NASA Astrophysics Data System (ADS)

    McCann, John J.

    2013-03-01

    The Human Vision and Electronic Imaging Conferences (HVEI) at the IS&T/SPIE Electronic Imaging meetings have brought together research in the fundamentals of both vision and digital technology. This conference has incorporated many color disciplines that have contributed to the theory and practice of today's imaging: color constancy, models of vision, digital output, high-dynamic-range imaging, and the understanding of perceptual mechanisms. Before digital imaging, silver halide color was a pixel-based mechanism. Color films are closely tied to colorimetry, the science of matching pixels in a black surround. The quanta catch of the sensitized silver salts determines the amount of colored dyes in the final print. The rapid expansion of digital imaging over the past 25 years has eliminated the limitations of using small local regions in forming images. Spatial interactions can now generate images more like vision. Since the 1950's, neurophysiology has shown that post-receptor neural processing is based on spatial interactions. These results reinforced the findings of 19th century experimental psychology. This paper reviews the role of HVEI in color, emphasizing the interaction of research on vision and the new algorithms and processes made possible by electronic imaging.

  14. False Color Image of Volcano Sapas Mons

    NASA Technical Reports Server (NTRS)

    1991-01-01

    This false-color image shows the volcano Sapas Mons, which is located in the broad equatorial rise called Atla Regio (8 degrees north latitude and 188 degrees east longitude). The area shown is approximately 650 kilometers (404 miles) on a side. Sapas Mons measures about 400 kilometers (248 miles) across and 1.5 kilometers (0.9 mile) high. Its flanks show numerous overlapping lava flows. The dark flows on the lower right are thought to be smoother than the brighter ones near the central part of the volcano. Many of the flows appear to have been erupted along the flanks of the volcano rather than from the summit. This type of flank eruption is common on large volcanoes on Earth, such as the Hawaiian volcanoes. The summit area has two flat-topped mesas, whose smooth tops give a relatively dark appearance in the radar image. Also seen near the summit are groups of pits, some as large as one kilometer (0.6 mile) across. These are thought to have formed when underground chambers of magma were drained through other subsurface tubes, leading to a collapse at the surface. A 20 kilometer-diameter (12-mile diameter) impact crater northeast of the volcano is partially buried by the lava flows. Little was known about Atla Regio prior to Magellan. The new data, acquired in February 1991, show the region to be composed of at least five large volcanoes such as Sapas Mons, which are commonly linked by complex systems of fractures or rift zones. If comparable to similar features on Earth, Atla Regio probably formed when large volumes of molten rock upwelled from areas within the interior of Venus known as 'hot spots.' Magellan is a NASA spacecraft mission to map the surface of Venus with imaging radar. The basic scientific instrument is a synthetic aperture radar, or SAR, which can look through the thick clouds perpetually shielding the surface of Venus. Magellan is in orbit around Venus, which completes one turn around its axis in 243 Earth days; that period of time is one Venus day.

  15. Uniform color spaces and natural image statistics

    PubMed Central

    McDermott, Kyle C.; Webster, Michael A.

    2011-01-01

    Many aspects of visual coding have been successfully predicted by starting from the statistics of natural scenes and then asking how the stimulus could be efficiently represented. We started from the representation of color characterized by uniform color spaces, and then asked what type of color environment they implied. These spaces are designed to represent equal perceptual differences in color discrimination or appearance by equal distances in the space. The relative sensitivity to different axes within the space might therefore reflect the gamut of colors in natural scenes. To examine this, we projected perceptually uniform distributions within the Munsell, CIE L*u*v* or CIE L*a*b* spaces into cone-opponent space. All were elongated along a bluish-yellowish axis reflecting covarying signals along the L-M and S-(L+M) cardinal axes, a pattern typical (though not identical) to many natural environments. In turn, color distributions from environments were more uniform when projected into the CIE L*a*b* perceptual space than when represented in a normalized cone-opponent space. These analyses suggest the bluish-yellowish bias in environmental colors might be an important factor shaping chromatic sensitivity, and also suggest that perceptually uniform color metrics could be derived from natural scene statistics and potentially tailored to specific environments. PMID:22330376

  16. Uniform color spaces and natural image statistics.

    PubMed

    McDermott, Kyle C; Webster, Michael A

    2012-02-01

    Many aspects of visual coding have been successfully predicted by starting from the statistics of natural scenes and then asking how the stimulus could be efficiently represented. We started from the representation of color characterized by uniform color spaces, and then asked what type of color environment they implied. These spaces are designed to represent equal perceptual differences in color discrimination or appearance by equal distances in the space. The relative sensitivity to different axes within the space might therefore reflect the gamut of colors in natural scenes. To examine this, we projected perceptually uniform distributions within the Munsell, CIE L(*)u(*)v(*) or CIE L(*)a(*)b(*) spaces into cone-opponent space. All were elongated along a bluish-yellowish axis reflecting covarying signals along the L-M and S-(L+M) cardinal axes, a pattern typical (though not identical) to many natural environments. In turn, color distributions from environments were more uniform when projected into the CIE L(*)a(*)b(*) perceptual space than when represented in a normalized cone-opponent space. These analyses suggest the bluish-yellowish bias in environmental colors might be an important factor shaping chromatic sensitivity, and also suggest that perceptually uniform color metrics could be derived from natural scene statistics and potentially tailored to specific environments.
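    The projections above rely on standard colorimetric transforms. A minimal sketch of the sRGB-to-CIELAB conversion under a D65 white point, the kind of mapping used to move color distributions into a perceptually uniform space:

```python
import numpy as np

# sRGB (values in [0,1]) -> linear RGB -> CIEXYZ -> CIELAB, using the
# standard sRGB primaries and D65 white point.
def srgb_to_lab(rgb):
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB transfer function
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = lin @ M.T
    white = np.array([0.9505, 1.0, 1.089])      # D65 reference white
    t = xyz / white
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

lab_white = srgb_to_lab([1.0, 1.0, 1.0])        # white maps to L=100, a=b=0
```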

  17. Objective color classification of ecstasy tablets by hyperspectral imaging.

    PubMed

    Edelman, Gerda; Lopatka, Martin; Aalders, Maurice

    2013-07-01

    The general procedure followed in the examination of ecstasy tablets for profiling purposes includes a color description, which depends highly on the observers' perception. This study aims to provide objective quantitative color information using visible hyperspectral imaging. Both self-manufactured and illicit tablets, created with different amounts of known colorants were analyzed. We derived reflectance spectra from hyperspectral images of these tablets, and successfully determined the most likely colorant used in the production of all self-manufactured tablets and four of five illicit tablets studied. Upon classification, the concentration of the colorant was estimated using a photon propagation model and a single reference measurement of a tablet of known concentration. The estimated concentrations showed a high correlation with the actual values (R(2) = 0.9374). The achieved color information, combined with other physical and chemical characteristics, can provide a powerful tool for the comparison of tablet seizures, which may reveal their origin.

  18. Color image reproduction: the evolution from print to multimedia

    NASA Astrophysics Data System (ADS)

    MacDonald, Lindsay W.

    1997-02-01

    The electronic pre-press industry has undergone a very rapid evolution over the past decade, driven by the accelerating performance of desktop computers and affordable application software for image manipulation, page layout and color separation. These have been supported by the steady development of color scanners, digital cameras, proof printers, RIPs and image setters, all of which make the process of reproducing color images in print easier than ever before. But is color print itself in decline as a medium? New channels of delivery for digital color images include CD-ROM, wideband networks and the Internet, with soft-copy screen display competing with hard-copy print for applications ranging from corporate brochures to home shopping. Present indications are that the most enduring of the graphic arts skills in the new multimedia world will be image rendering and production control rather than those related to photographic film and ink on paper.

  19. Color image quality in projection displays: a case study

    NASA Astrophysics Data System (ADS)

    Strand, Monica; Hardeberg, Jon Y.; Nussbaum, Peter

    2005-01-01

    Recently the use of projection displays has increased dramatically in different applications such as digital cinema, home theatre, and business and educational presentations. Even if the color image quality of these devices has improved significantly over the years, it is still a common situation for users of projection displays that the projected colors differ significantly from the intended ones. The study presented in this paper attempts to analyze the color image quality of a large set of projection display devices, particularly investigating the variations in color reproduction. As a case study, a set of 14 projectors (LCD and DLP technology) at Gjøvik University College have been tested under four different conditions: dark and light room, with and without using an ICC-profile. To find out more about the importance of the illumination conditions in a room, and the degree of improvement when using an ICC-profile, the results from the measurements were processed and analyzed. Eye-One Beamer from GretagMacbeth was used to make the profiles. The color image quality was evaluated both visually and by color difference calculations. The results from the analysis indicated large visual and colorimetric differences between the projectors. The DLP projectors generally have smaller color gamuts than the LCD projectors, and the color gamuts of older projectors are significantly smaller than those of newer ones. The amount of ambient light reaching the screen is of great importance for the visual impression. If too much reflected and other ambient light reaches the screen, the projected image becomes pale and has low contrast. When using a profile, the differences in colors between the projectors get smaller and the colors appear more correct. For one device, the average ΔE*ab color difference when compared to a relative white reference was reduced from 22 to 11, for another from 13 to 6. Blue colors have the largest variations among the projection displays and make them

  20. Color image quality in projection displays: a case study

    NASA Astrophysics Data System (ADS)

    Strand, Monica; Hardeberg, Jon Y.; Nussbaum, Peter

    2004-10-01

    Recently the use of projection displays has increased dramatically in different applications such as digital cinema, home theatre, and business and educational presentations. Even if the color image quality of these devices has improved significantly over the years, it is still a common situation for users of projection displays that the projected colors differ significantly from the intended ones. The study presented in this paper attempts to analyze the color image quality of a large set of projection display devices, particularly investigating the variations in color reproduction. As a case study, a set of 14 projectors (LCD and DLP technology) at Gjøvik University College have been tested under four different conditions: dark and light room, with and without using an ICC-profile. To find out more about the importance of the illumination conditions in a room, and the degree of improvement when using an ICC-profile, the results from the measurements were processed and analyzed. Eye-One Beamer from GretagMacbeth was used to make the profiles. The color image quality was evaluated both visually and by color difference calculations. The results from the analysis indicated large visual and colorimetric differences between the projectors. The DLP projectors generally have smaller color gamuts than the LCD projectors, and the color gamuts of older projectors are significantly smaller than those of newer ones. The amount of ambient light reaching the screen is of great importance for the visual impression. If too much reflected and other ambient light reaches the screen, the projected image becomes pale and has low contrast. When using a profile, the differences in colors between the projectors get smaller and the colors appear more correct. For one device, the average ΔE*ab color difference when compared to a relative white reference was reduced from 22 to 11, for another from 13 to 6. Blue colors have the largest variations among the projection displays and make them

  1. Colored three-dimensional reconstruction of vehicular thermal infrared images

    NASA Astrophysics Data System (ADS)

    Sun, Shaoyuan; Leung, Henry; Shen, Zhenyi

    2015-06-01

    Enhancement of vehicular night vision thermal infrared images is an important problem in intelligent vehicles. We propose to create a colorful three-dimensional (3-D) display of infrared images for the vehicular night vision assistant driving system. We combine the plane parameter Markov random field (PP-MRF) model-based depth estimation with classification-based infrared image colorization to perform colored 3-D reconstruction of vehicular thermal infrared images. We first train the PP-MRF model to learn the relationship between superpixel features and plane parameters. The infrared images are then colorized and we perform superpixel segmentation and feature extraction on the colorized images. The PP-MRF model is used to estimate the superpixel plane parameter and to analyze the structure of the superpixels according to the characteristics of vehicular thermal infrared images. Finally, we estimate the depth of each pixel to perform 3-D reconstruction. Experimental results demonstrate that the proposed method can give a visually pleasing and daytime-like colorful 3-D display from a monochromatic vehicular thermal infrared image, which can help drivers to have a better understanding of the environment.

  2. A New Color Image of the Crab Nebula

    NASA Astrophysics Data System (ADS)

    Wainscoat, R. J.; Kormendy, K.

    1997-03-01

    A new color image of the Crab Nebula is presented. This is a 2782 × 1904 pixel mosaic of CCD frames taken through B (blue), V (green), and R (red) filters; it was carefully color balanced so that the Sun would appear white. The resolution of the final image is approximately 0.8 arcsec FWHM. The technique by which this image was constructed is described, and some aspects of the structure of the Crab Nebula revealed by the image are discussed. We also discuss the weaknesses of this technique for producing "true-color" images, and describe how our image would differ from what the human eye might see in a very large wide-field telescope. The structure of the inner part of the synchrotron nebula is compared with recent high-resolution images from the Hubble Space Telescope and from the Canada-France-Hawaii Telescope. (SECTION: Interstellar Medium and Nebulae)

  3. An improved visual enhancement method for color images

    NASA Astrophysics Data System (ADS)

    Wang, Wenhan; Zhang, Biao

    2013-07-01

    The paper presents an improved method for color image enhancement based on the dynamic range compression and shadow compensation method proposed by Felix Albu et al. The improved method consists of not only dynamic range compression and shadow compensation but also intensity contrast stretching implemented under the logarithmic image processing (LIP) model. The previous method enhances images while preserving image details and color information without generating visual artifacts. While retaining the advantages of the previous method, our improved method enhances the intensity of the whole image, especially the low-light areas. The experimental results illustrate the effectiveness of the proposed method and its superiority over the previous one.
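    The dynamic range compression such a method builds on can be sketched with a simple log-domain curve; this is not the authors' exact LIP operators, just an illustration of lifting low-light areas while preserving the full range.

```python
import numpy as np

# Log-domain dynamic range compression: a concave curve that boosts dark
# values strongly while leaving white unchanged. `strength` controls the
# amount of compression (illustrative parameter).
def compress_dynamic_range(img, strength=0.5):
    f = img.astype(float) / 255.0
    out = np.log1p(strength * 255.0 * f) / np.log1p(strength * 255.0)
    return (out * 255.0).astype(np.uint8)

dark = np.full((2, 2), 20, dtype=np.uint8)
lifted = compress_dynamic_range(dark)[0, 0]   # low-light value is lifted well above 20
```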

  4. Color Image Segmentation Based on Different Color Space Models Using Automatic GrabCut

    PubMed Central

    Ebied, Hala Mousher; Hussein, Ashraf Saad; Tolba, Mohamed Fahmy

    2014-01-01

    This paper presents a comparative study using different color spaces to evaluate the performance of color image segmentation using the automatic GrabCut technique. GrabCut is considered as one of the semiautomatic image segmentation techniques, since it requires user interaction for the initialization of the segmentation process. The automation of the GrabCut technique is proposed as a modification of the original semiautomatic one in order to eliminate the user interaction. The automatic GrabCut utilizes the unsupervised Orchard and Bouman clustering technique for the initialization phase. Comparisons with the original GrabCut show the efficiency of the proposed automatic technique in terms of segmentation quality and accuracy. As no explicit color space is recommended for every segmentation problem, automatic GrabCut is applied with RGB, HSV, CMY, XYZ, and YUV color spaces. The comparative study and experimental results using different color images show that RGB color space is the best color space representation for the set of the images used. PMID:25254226
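    Comparing segmentation across color spaces presupposes the standard conversions from RGB; a minimal sketch for two of the candidate spaces (HSV via the Python standard library, CMY by simple complement):

```python
import colorsys

# Convert an 8-bit RGB triple to HSV (all components in [0,1]) and to CMY
# (complement of normalized RGB). These are the straightforward textbook
# definitions, not anything specific to the GrabCut study.
def to_hsv(r, g, b):
    return colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)

def to_cmy(r, g, b):
    return (1 - r / 255.0, 1 - g / 255.0, 1 - b / 255.0)

h, s, v = to_hsv(255, 0, 0)   # pure red: hue 0, full saturation and value
cmy = to_cmy(0, 255, 255)     # pure cyan: full C, zero M and Y
```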

  5. White balance in a color imaging device with electrically tunable color filters

    NASA Astrophysics Data System (ADS)

    Langfelder, G.; Zaraga, F.; Longoni, A.

    2009-08-01

    A new method for white balance, which compensates for changes in the illuminant spectrum by changing the native chromatic reference system accordingly, is presented. A set of base color filters is selected in the sensor, according to the scene illuminant, in order to keep the chromatic components of a white object independent of the illuminant. By contrast, conventional white balance methods do not change the native color space, but change the chromatic coordinates in order to adjust the white vector direction in the same space. The development over the last ten years of CMOS color sensors for digital imaging whose color reconstruction principle is based on the absorption properties of silicon, rather than on the presence of color filters, makes the new method applicable in a straightforward manner. An implementation of this method with the Transverse Field Detector, a color pixel with electrically tunable spectral responses, is discussed. The experimental results show that this method is effective for scene illuminants ranging from standard D75 to standard A (i.e., for scene correlated color temperatures from 7500 K to 2850 K). The color reconstruction error specific to each set of electrically selected filters, measured in a perceptual color space after the subsequent color correction, does not change significantly over the tested tuning interval.
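    The conventional baseline the paper contrasts itself with can be sketched as von Kries style channel scaling: gains are chosen so that a known white patch becomes achromatic. The patch values below are illustrative.

```python
import numpy as np

# Conventional white balance: scale each channel so the measured white
# patch maps to a neutral (equal-channel) value. This adjusts coordinates
# within the native color space, unlike the tunable-filter approach.
def white_balance(img, white_rgb):
    white_rgb = np.asarray(white_rgb, dtype=float)
    gains = np.mean(white_rgb) / white_rgb      # per-channel von Kries gains
    return np.clip(img * gains, 0, 255)

white_under_illuminant = np.array([250.0, 220.0, 180.0])   # warm (low CCT) cast
img = np.tile(white_under_illuminant, (4, 4, 1))
balanced = white_balance(img, white_under_illuminant)      # all channels equal
```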

  6. Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy

    NASA Technical Reports Server (NTRS)

    Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers ("CTIS"s) having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3® digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.

  7. Indexing and retrieval of color images using vector quantization

    NASA Astrophysics Data System (ADS)

    Panchanathan, Sethuraman; Huang, Changan

    1999-10-01

    Image and video indexing is becoming popular with the increasing volumes of visual information being stored and transmitted in various multimedia applications. An important focus of the upcoming MPEG-7 standard is on indexing and retrieval of multimedia data. The visual information can be indexed using spatial (color, texture, shape, sketch, etc.) and temporal (motion, camera operations, etc.) features. Since multimedia data is likely to be stored in compressed form, indexing the information in the compressed domain entails savings in compute time and storage space. In this paper, we present a novel indexing and retrieval technique using vector quantization of color images. Color is an important feature for indexing visual information, and several color-based indexing schemes have been reported in the recent literature. Vector Quantization (VQ) is a popular compression technique for low-power applications, and indexing based on VQ features such as luminance codebooks and labels has also been presented in the literature. Previous VQ-based indexing techniques describe the entire image content by modeling the histogram of the image without taking into account the location of colors, which may result in unsatisfactory retrieval. We propose to incorporate spatial information in the content representation in the VQ-compressed domain. We employ the luminance and chrominance codebooks trained and generated from wavelet-vector-quantized (WVQ) images, in which the images are first decomposed using the wavelet transform followed by vector quantization of the transform coefficients. The labels and the usage maps corresponding to the utilization pattern of codebooks for the individual images serve as indices to the associated color information contained in the images. Hence, the VQ compression parameters serve the purpose of indexing, resulting in joint compression and indexing of the color information.
Our simulations indicate superior indexing and
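
    The codebook-label indexing idea above can be sketched compactly. The following is a minimal stand-in with hypothetical function names, using plain k-means in place of a wavelet-VQ-trained codebook: an image's blocks are encoded to codeword labels, and the label histogram plus usage map serve as the index; indices are compared by histogram intersection.

```python
import numpy as np

def train_codebook(vectors, k, iters=20, seed=0):
    """Tiny k-means codebook trainer (a stand-in for an LBG/WVQ-trained codebook)."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), k, replace=False)]
    for _ in range(iters):
        # assign each vector to its nearest codeword, then recenter
        d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                codebook[j] = vectors[labels == j].mean(axis=0)
    return codebook

def vq_index(image_blocks, codebook):
    """Encode blocks to codeword labels; the normalized label histogram and
    the codebook usage map together form a compact color index."""
    d = np.linalg.norm(image_blocks[:, None, :] - codebook[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(codebook)).astype(float)
    hist /= hist.sum()
    usage_map = hist > 0          # which codewords this image actually uses
    return labels, hist, usage_map

def histogram_intersection(h1, h2):
    """Similarity of two indices: 1.0 = identical, 0.0 = disjoint."""
    return np.minimum(h1, h2).sum()
```

Retrieval would then rank database images by the intersection of their stored histograms with the query's histogram, optionally restricted to images whose usage maps overlap the query's.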

  8. Color image based sorter for separating red and white wheat

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A simple imaging system was developed to inspect and sort wheat samples and other grains at moderate feed-rates (30 kernels/s or 3.5 kg/h). A single camera captured color images of three sides of each kernel by using mirrors, and the images were processed using a personal computer (PC). The camer...

  9. Pixel classification based color image segmentation using quaternion exponent moments.

    PubMed

    Wang, Xiang-Yang; Wu, Zhi-Fang; Chen, Liang; Zheng, Hong-Liang; Yang, Hong-Ying

    2016-02-01

    Image segmentation remains an important, but hard-to-solve, problem, since it appears to be application dependent, with usually no a priori information available regarding the image structure. In recent years, many image segmentation algorithms have been developed, but they are often very complex and some undesired results occur frequently. In this paper, we propose a pixel-classification-based color image segmentation method using quaternion exponent moments. Firstly, the pixel-level image feature is extracted based on quaternion exponent moments (QEMs), which can effectively capture the image pixel content by considering the correlation between different color channels. Then, the pixel-level image feature is used as input to a twin support vector machines (TSVM) classifier, and the TSVM model is trained by selecting the training samples with Arimoto entropy thresholding. Finally, the color image is segmented with the trained TSVM model. The proposed scheme has the following advantages: (1) effective QEMs are introduced to describe color image pixel content, taking into account the correlation between different color channels; (2) an excellent TSVM classifier is utilized, which has lower computation time and higher classification accuracy. Experimental results show that our proposed method has very promising segmentation performance compared with the state-of-the-art segmentation approaches recently proposed in the literature.

  11. Complex adaptation-based LDR image rendering for 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-07-01

    A low-dynamic-range tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming brightness dimming in the 3D display mode. The 3D surround provides varying conditions for image quality: illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization have performed well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost in bright and dark areas. Thus, an enhanced image mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround, combining global and local adaptation. Evaluation of local image rendering in terms of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.
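
    The combination of global and local adaptation can be illustrated with a Naka-Rushton-style compression whose adaptation level mixes the global mean with a local surround average. This is only an illustration of the idea; the weight, surround size, and surround filter here are assumptions, not the paper's values.

```python
import numpy as np

def local_mean(lum, radius=2):
    """Local surround average with edge padding (a simple box-filter stand-in
    for a space-varying surround)."""
    h, w = lum.shape
    pad = np.pad(lum, radius, mode="edge")
    acc = np.zeros((h, w))
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            acc += pad[dy:dy + h, dx:dx + w]
    return acc / (2 * radius + 1) ** 2

def tone_compress(lum, w_global=0.5, radius=2):
    """Naka-Rushton-style compression L / (L + A) where the adaptation level A
    mixes global and local averages, echoing 'complex adaptation'."""
    adapt = w_global * lum.mean() + (1 - w_global) * local_mean(lum, radius)
    return lum / (lum + adapt + 1e-12)
```

On a uniform luminance field the adaptation level equals the luminance itself, so the output settles at 0.5; spatial structure shifts pixels above or below that midpoint according to their local surround.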

  12. Color image digitization and analysis for drum inspection

    SciTech Connect

    Muller, R.C.; Armstrong, G.A.; Burks, B.L.; Kress, R.L.; Heckendorn, F.M.; Ward, C.R.

    1993-05-01

    A rust inspection system that uses color analysis to find rust spots on drums has been developed. The system is composed of high-resolution color video equipment that permits the inspection of rust spots on the order of 0.25 cm (0.1 in.) in diameter. Because of the modular nature of the system design and the use of open-systems software (X11, etc.), the inspection system can be easily integrated into other environmental restoration and waste management programs. The inspection system represents an excellent platform for the integration of other color inspection and color image processing algorithms.

  13. Photographic copy of computer enhanced color photographic image. Photographer and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Photographic copy of computer enhanced color photographic image. Photographer and computer draftsman unknown. Original photographic image located in the office of Modjeski and Masters, Consulting Engineers at 1055 St. Charles Avenue, New Orleans, LA 70130. COMPUTER ENHANCED COLOR PHOTOGRAPH SHOWING THE PROPOSED HUEY P. LONG BRIDGE WIDENING LOOKING FROM THE WEST BANK TOWARD THE EAST BANK. - Huey P. Long Bridge, Spanning Mississippi River approximately midway between nine & twelve mile points upstream from & west of New Orleans, Jefferson, Jefferson Parish, LA

  14. a New Color Correction Method for Underwater Imaging

    NASA Astrophysics Data System (ADS)

    Bianco, G.; Muzzupappa, M.; Bruno, F.; Garcia, R.; Neumann, L.

    2015-04-01

    Recovering correct, or at least realistic, colors of underwater scenes is a very challenging issue for imaging techniques, since illumination conditions in a refractive and turbid medium such as the sea are seriously altered. The need to correct the colors of underwater images or videos is an important task required in all image-based applications like 3D imaging, navigation, documentation, etc. Many image enhancement methods have been proposed in the literature for these purposes. The advantage of these methods is that they do not require knowledge of the medium's physical parameters, while some image adjustments can be performed manually (such as histogram stretching) or automatically, by algorithms based on criteria suggested by computational color constancy methods. One of the most popular criteria is the gray-world hypothesis, which assumes that the average of the captured image should be gray. An interesting application of this assumption is performed in the Ruderman opponent color space lαβ, used in a previous work for hue correction of images captured under colored light sources, which allows the luminance component of the scene to be separated from its chromatic components. In this work, we present the first proposal for color correction of underwater images using the lαβ color space. In particular, the chromatic components are changed by moving their distributions around the white point (white balancing), and histogram cutoff and stretching of the luminance component are performed to improve image contrast. The experimental results demonstrate the effectiveness of this method under the gray-world assumption and supposing uniform illumination of the scene. Moreover, due to its low computational cost, it is suitable for real-time implementation.
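
    A minimal sketch of white balancing in lαβ follows, using the RGB↔lαβ matrices popularized in the color-transfer literature (whether this paper uses these exact matrices is an assumption). The chromatic α and β distributions are shifted to zero mean, i.e., moved around the white point.

```python
import numpy as np

# RGB->LMS matrix and the log-LMS -> l-alpha-beta transform, as given in the
# color-transfer literature (assumed here; the paper may use a variant).
RGB2LMS = np.array([[0.3811, 0.5783, 0.0402],
                    [0.1967, 0.7244, 0.0782],
                    [0.0241, 0.1288, 0.8444]])
LMS2LAB = np.diag([1 / np.sqrt(3), 1 / np.sqrt(6), 1 / np.sqrt(2)]) @ np.array(
    [[1.0, 1.0, 1.0],
     [1.0, 1.0, -2.0],
     [1.0, -1.0, 0.0]])

def rgb_to_lab(rgb):
    """rgb: (..., 3) floats in (0, 1] -> (l, alpha, beta)."""
    lms = np.clip(rgb @ RGB2LMS.T, 1e-6, None)   # avoid log of zero
    return np.log10(lms) @ LMS2LAB.T

def lab_to_rgb(lab):
    lms = 10.0 ** (lab @ np.linalg.inv(LMS2LAB).T)
    return lms @ np.linalg.inv(RGB2LMS).T

def gray_world_lab(rgb):
    """White-balance by moving the chromatic (alpha, beta) distributions
    to zero mean, per the gray-world assumption."""
    lab = rgb_to_lab(rgb)
    lab[..., 1] -= lab[..., 1].mean()
    lab[..., 2] -= lab[..., 2].mean()
    return np.clip(lab_to_rgb(lab), 0.0, 1.0)
```

The luminance channel l is untouched here; the paper's additional histogram cutoff and stretching would be applied to l before converting back.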

  15. MUNSELL COLOR ANALYSIS OF LANDSAT COLOR-RATIO-COMPOSITE IMAGES OF LIMONITIC AREAS IN SOUTHWEST NEW MEXICO.

    USGS Publications Warehouse

    Kruse, Fred A.

    1984-01-01

    Green areas on Landsat 4/5 - 4/6 - 6/7 (red - blue - green) color-ratio-composite (CRC) images represent limonite on the ground. Color variation on such images was analyzed to determine the causes of the color differences within and between the green areas. Digital transformation of the CRC data into the modified cylindrical Munsell color coordinates - hue, value, and saturation - was used to correlate image color characteristics with properties of surficial materials. The amount of limonite visible to the sensor is the primary cause of color differences in green areas on the CRCs. Vegetation density is a secondary cause of color variation of green areas on Landsat CRC images. Digital color analysis of Landsat CRC images can be used to map unknown areas. Color variations of green pixels allow discrimination among limonitic bedrock, nonlimonitic bedrock, nonlimonitic alluvium, and limonitic alluvium.
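
    The hue/value/saturation decomposition can be approximated with the standard HSV model. The sketch below, whose hue and saturation thresholds are illustrative rather than the study's, flags green CRC pixels and grades them by saturation, the axis the study ties to visible limonite abundance.

```python
import colorsys

def hvs_of_pixel(r, g, b):
    """Approximate hue/value/saturation via the HSV model (a stand-in for
    the modified cylindrical Munsell coordinates in the study)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, v, s            # hue in degrees, value, saturation

def classify_crc_pixel(r, g, b, green_range=(90.0, 150.0)):
    """Flag 'green' CRC pixels (candidate limonite) and grade them by
    saturation; the hue window and 0.5 cutoff are illustrative."""
    hue, value, sat = hvs_of_pixel(r, g, b)
    if not (green_range[0] <= hue <= green_range[1]):
        return "non-limonitic"
    return "strongly limonitic" if sat > 0.5 else "weakly limonitic"
```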

  16. Investigation of the effects of color on judgments of sweetness using a taste adaptation method.

    PubMed

    Hidaka, Souta; Shimoda, Kazumasa

    2014-01-01

    It has been reported that color can affect the judgment of taste. For example, a dark red color enhances the subjective intensity of sweetness. However, the underlying mechanisms of the effect of color on taste have not been fully investigated; in particular, it remains unclear whether the effect is based on cognitive/decisional or perceptual processes. Here, we investigated the effect of color on sweetness judgments using a taste adaptation method. A sweet solution whose color was subjectively congruent with sweetness was judged as sweeter than an uncolored sweet solution both before and after adaptation to an uncolored sweet solution. In contrast, subjective judgment of sweetness for uncolored sweet solutions did not differ between the conditions following adaptation to a colored sweet solution and following adaptation to an uncolored one. Color affected sweetness judgment when the target solution was colored, but the colored sweet solution did not modulate the magnitude of taste adaptation. Therefore, it is concluded that the effect of color on the judgment of taste would occur mainly in cognitive/decisional domains.

  17. Constrained acquisition of ink spreading curves from printed color images.

    PubMed

    Bugnon, Thomas; Hersch, Roger D

    2011-02-01

    Today's spectral reflection prediction models are able to predict the reflection spectra of printed color images with an accuracy as high as the reproduction variability allows. However, to calibrate such models, special uniform calibration patches need to be printed. These calibration patches take up space and have to be removed from the final product. The present contribution shows how to deduce the ink spreading behavior of color halftones from spectral reflectances acquired within printed color images. Image tiles of a color as uniform as possible are selected within the printed images. The ink spreading behavior is fitted by relying on the spectral reflectances of the selected image tiles. A relevance metric specifies the impact of each ink spreading curve on the selected image tiles. These relevance metrics are used to constrain the corresponding ink spreading curves. Experiments performed on an inkjet printer demonstrate that the new constraint-based calibration of the spectral reflection prediction model performs well when predicting color halftones significantly different from the selected image tiles. For some prints, the proposed image-based model calibration is more accurate than a classical calibration.

  18. Solving the color calibration problem of Martian lander images

    NASA Astrophysics Data System (ADS)

    Levin, Ron L.; Levin, Gilbert V.

    2004-02-01

    The color of published Viking and Pathfinder images varies greatly in hue, saturation and chromaticity. True color is important for interpretation of physical, chemical, geological and, possibly, biological information about Mars. The weak link in the imaging process for both missions was the reliance on imaging color charts reflecting Martian ambient light. While the reflectivity of the charts is well known, the spectrum of their illumination on Mars is not. "Calibrated" images are usually reddish, attributed to atmospheric dust, but hues range widely because of the great uncertainty in the illumination spectrum. Solar black body radiation, the same on Mars as on Earth, is minimally modified by the atmosphere of either planet. For red dust to change the spectrum significantly, reflected light must exceed transmitted light; were this the case, shadows would be virtually eliminated, yet Viking images show prominent shadows. Also, Pathfinder's solar cells, activated by blue light, would have failed under the predominantly red spectrum generally attributed to Mars. Accordingly, no consensus has emerged on the colors of the soil, rocks and sky of Mars. This paper proposes two techniques to eliminate color uncertainty from future images and to allow recalibration of past images: 1. calibration of cameras at night through minimal atmospheric paths using light sources brought from Earth, which, used during the day, would permit calculation of red, green and blue intensities independent of scene illumination; 2. use of hyperspectral imaging to measure the complete spectrum of each pixel. This paper includes a calibration of a NASA Viking lander image based on its color chart as it appears on Earth. The more realistic Martian colors become far more interesting, showing blue skies, brownish soil and rocks, both with yellow, olive, and greenish areas.

  19. Color calibration of swine gastrointestinal tract images acquired by radial imaging capsule endoscope

    NASA Astrophysics Data System (ADS)

    Ou-Yang, Mang; Jeng, Wei-De; Lai, Chien-Cheng; Wu, Hsien-Ming; Lin, Jyh-Hung

    2016-01-01

    The types of illumination systems and color filters used typically generate varying levels of color difference in capsule endoscopes, which influence medical diagnoses. In order to calibrate the color difference caused by the optical system, this study applied a radial imaging capsule endoscope (RICE) to photograph standard color charts, which were then employed to calculate the color gamut of RICE. The color gamut was also measured using a spectrometer in order to obtain high-precision color information, and the results obtained using both methods were compared. Subsequently, color-correction methods, namely polynomial transform and conformal mapping, were used to reduce the color difference. Before color calibration, the color difference value caused by the optical system in RICE was 21.45±1.09. Through the proposed polynomial transformation, the color difference could be reduced effectively to 1.53±0.07. With the proposed conformal mapping, the color difference value was further reduced to 1.32±0.11; this color difference is imperceptible to the human eye because it is <1.5. Real-time color correction was then achieved using this algorithm combined with a field-programmable gate array, and the results of the color correction can be viewed in real-time images.
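
    A polynomial color transform of this kind is typically fitted by least squares from chart measurements. The second-order term set below is a common choice and an assumption, since the paper's exact polynomial is not given here.

```python
import numpy as np

def poly_features(rgb):
    """Second-order polynomial expansion of RGB:
    [1, R, G, B, R^2, G^2, B^2, RG, RB, GB] (an assumed term set)."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([np.ones_like(r), r, g, b,
                     r * r, g * g, b * b, r * g, r * b, g * b], axis=1)

def fit_polynomial_transform(measured, reference):
    """Least-squares fit mapping measured chart colors to reference values."""
    A = poly_features(measured)
    coeffs, *_ = np.linalg.lstsq(A, reference, rcond=None)
    return coeffs                      # shape (10, 3)

def apply_polynomial_transform(rgb, coeffs):
    """Correct a batch of (N, 3) colors with the fitted transform."""
    return poly_features(rgb) @ coeffs
```

In practice the residual color difference of the fit would be reported as a ΔE statistic over the chart patches, as in the abstract.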

  20. Adaptive Ambient Illumination Based on Color Harmony Model

    NASA Astrophysics Data System (ADS)

    Kikuchi, Ayano; Hirai, Keita; Nakaguchi, Toshiya; Tsumura, Norimichi; Miyake, Yoichi

    We investigated the relationship between ambient illumination and psychological effect by applying a modified color harmony model. We verified the proposed model by analyzing the correlation between psychological values and the modified color harmony score. Experimental results showed that the best color for illumination can be obtained using this model.

  1. Combined blazed grating and microlens array for color image sensing

    NASA Astrophysics Data System (ADS)

    Hirano, Tadayuki; Shimatani, Naoko; Kintaka, Kenji; Nishio, Kenzo; Awatsuji, Yasuhiro; Ura, Shogo

    2014-03-01

    A combination of a blazed grating and a microlens array is discussed for high-efficiency color image sensing. Each image segment includes a microlens with a blazed grating and three photodiodes assigned to red, green, and blue colors. Color-splitting performances of design examples were simulated by the two-dimensional finite-difference time-domain method. It was found that the spectral characteristics were similar to the ideal NTSC specifications for a segment size of 10 µm with a polymer microlens and a TiO2 blazed grating. A prototype consisting of a honeycomb array of microlenses of 15 µm cell diameter and a TiO2 blazed grating of 1.22 µm period and 0.35 µm height was fabricated and characterized. A power utilization efficiency of about 60% was predicted theoretically and estimated experimentally, much higher than that of a conventional image sensor utilizing color filters.

  2. Color calculations for and perceptual assessment of computer graphic images

    SciTech Connect

    Meyer, G.W.

    1986-01-01

    Realistic image synthesis involves the modelling of an environment in accordance with the laws of physics and the production of a final simulation that is perceptually acceptable. To be considered a scientific endeavor, synthetic image generation should also include the final step of experimental verification. This thesis concentrates on the color calculations that are inherent in the production of the final simulation and on the perceptual assessment of the resulting computer graphic images. The fundamental spectral sensitivity functions that are active in the human visual system are introduced and used to address color-blindness issues in computer graphics. A digitally controlled color television monitor is employed to successfully implement both the Farnsworth-Munsell 100-hue test and a new color vision test that yields more accurate diagnoses. Images that simulate color-blind vision are synthesized and used to evaluate color scales for data display. Gaussian quadrature is used with a set of opponent fundamentals to select the wavelengths at which to perform synthetic image generation.

  3. Full-color holographic 3D imaging system using color optical scanning holography

    NASA Astrophysics Data System (ADS)

    Kim, Hayan; Kim, You Seok; Kim, Taegeun

    2016-06-01

    We propose a full-color holographic three-dimensional imaging system that comprises a recording stage, a transmission and processing stage, and a reconstruction stage. In the recording stage, color optical scanning holography (OSH) records the complex RGB holograms of an object. In the transmission and processing stage, the recorded complex RGB holograms are converted to off-axis RGB holograms and transmitted to the reconstruction stage. In the reconstruction stage, the off-axis RGB holograms are reconstructed optically.

  4. Weighted color and texture sample selection for image matting.

    PubMed

    Varnousfaderani, Ehsan Shahrian; Rajan, Deepu

    2013-11-01

    Color sampling based matting methods find the best known samples for foreground and background colors of unknown pixels. Such methods do not perform well if there is an overlap in the color distributions of the foreground and background regions, because color cannot distinguish between these regions and hence the selected samples cannot reliably estimate the matte. Furthermore, current sampling based matting methods choose samples that are located around the boundaries of foreground and background regions. In this paper, we overcome these two problems. First, we propose texture as a feature that can complement color to improve matting by discriminating between known regions with similar colors. The contributions of texture and color are automatically estimated by analyzing the content of the image. Second, we combine local sampling with a global sampling scheme that prevents true foreground or background samples from being missed during the sample collection stage. An objective function containing color and texture components is optimized to choose the best foreground and background pair among a set of candidate pairs. Experiments are carried out on a benchmark data set, and an independent evaluation of the results shows that the proposed method is ranked first among competing image matting methods.

  5. Orientation and spatial frequency selectivity of adaptation to color and luminance gratings.

    PubMed

    Bradley, A; Switkes, E; De Valois, K

    1988-01-01

    Prolonged viewing of sinusoidal luminance gratings produces elevated contrast detection thresholds for test gratings that are similar in spatial frequency and orientation to the adaptation stimulus. We have used this technique to investigate orientation and spatial frequency selectivity in the processing of color contrast information. Adaptation to isoluminant red-green gratings produces elevated color contrast thresholds that are selective for grating orientation and spatial frequency. Only small elevations in color contrast thresholds occur after adaptation to luminance gratings, and vice versa. Although the color adaptation effects appear slightly less selective than those for luminance, our results suggest similar spatial processing of color and luminance contrast patterns by early stages of the human visual system.

  6. Wavelength-adaptive dehazing using histogram merging-based classification for UAV images.

    PubMed

    Yoon, Inhye; Jeong, Seokhwa; Jeong, Jaeheon; Seo, Doochun; Paik, Joonki

    2015-03-19

    Since incoming light to an unmanned aerial vehicle (UAV) platform can be scattered by haze and dust in the atmosphere, the acquired image loses the original color and brightness of the subject. Enhancement of hazy images is an important task in improving the visibility of various UAV images. This paper presents a spatially-adaptive dehazing algorithm that merges color histograms with consideration of the wavelength-dependent atmospheric turbidity. Based on the wavelength-adaptive hazy image acquisition model, the proposed dehazing algorithm consists of three steps: (i) image segmentation based on geometric classes; (ii) generation of the context-adaptive transmission map; and (iii) intensity transformation for enhancing a hazy UAV image. The major contribution of the research is a novel hazy UAV image degradation model by considering the wavelength of light sources. In addition, the proposed transmission map provides a theoretical basis to differentiate visually important regions from others based on the turbidity and merged classification results.
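
    Transmission-map dehazing of this kind rests on the standard haze model I = J·t + A·(1 − t), where J is the scene radiance, A the airlight, and t the transmission. The sketch below only inverts that model, with a per-channel t to reflect the wavelength dependence; the paper's actual estimation of the transmission map and airlight is not reproduced.

```python
import numpy as np

def dehaze(image, airlight, transmission, t_min=0.1):
    """Invert I = J*t + A*(1 - t) to recover J.
    'transmission' may be per-channel (broadcastable to image) to mimic
    wavelength-dependent turbidity; t is floored to avoid noise blow-up."""
    t = np.maximum(transmission, t_min)
    return (image - airlight) / t + airlight
```

With a synthetic hazy image built from a known J, A, and t, the inversion recovers J exactly, which is a convenient sanity check for any transmission-map estimator plugged in front of it.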

  9. An imaging colorimeter for noncontact tissue color mapping.

    PubMed

    Balas, C

    1997-06-01

    There has been considerable effort in several medical fields toward objective color analysis and characterization of biological tissues. Conventional colorimeters have proved inadequate for this purpose, since they do not provide spatial color information and because the measuring procedure randomly affects the color of the tissue. In this paper an imaging colorimeter is presented, in which the nonimaging optical photodetector of conventional colorimeters is replaced with the charge-coupled device (CCD) sensor of a color video camera, enabling the independent capture of color information for any spatial point within its field of view. Combining imaging and colorimetry methods, the acquired image is calibrated and corrected under several ambient light conditions, providing noncontact, reproducible color measurements and mapping, free of the errors and limitations present in conventional colorimeters. This system was used for monitoring blood supply changes in psoriatic plaques undergoing psoralen and ultraviolet-A (PUVA) therapy, where reproducible and reliable measurements were demonstrated. These features highlight the potential of imaging colorimeters as clinical and research tools for the standardization of clinical diagnosis and for the objective evaluation of treatment effectiveness. PMID:9151480

  10. Fluorescence lidar multi-color imaging of vegetation

    NASA Technical Reports Server (NTRS)

    Johansson, J.; Wallinder, E.; Edner, H.; Svanberg, S.

    1992-01-01

    Multi-color imaging of vegetation fluorescence following laser excitation is reported for distances of 50 m. A mobile laser radar system equipped with a Nd:YAG laser transmitter and a 40 cm diameter telescope was used. Image processing allows extraction of information related to the physiological status of the vegetation and might prove useful in forest decline research.

  11. False-color composite image of Raco, Michigan

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This image is a false color composite of Raco, Michigan, centered at 46.39 north latitude and 84.88 west longitude. This image was acquired by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) on the 20th orbit of the Shuttle Endeavour. The area shown is approximately 20 kilometers by 50 kilometers. Raco is located at the eastern end of Michigan's upper peninsula, west of Sault Ste. Marie and south of Whitefish Bay on Lake Superior. In this color representation, darker areas in the image are smooth surfaces such as frozen lakes and other non-forested areas. The colors are related to the types of trees, and the brightness is related to the amount of plant material covering the surface, called forest biomass. The Jet Propulsion Laboratory alternative photo number is P-43882.

  12. Multiple color-image authentication system using HSI color space and QR decomposition in gyrator domains

    NASA Astrophysics Data System (ADS)

    Rafiq Abuturab, Muhammad

    2016-06-01

    A new multiple color-image authentication system based on HSI (Hue-Saturation-Intensity) color space and QR decomposition in gyrator domains is proposed. In this scheme, original color images are converted from RGB (Red-Green-Blue) color space to HSI color space and divided into their H, S, and I components, from which the corresponding phase-encoded components are obtained. All the phase-encoded H, S, and I components are individually multiplied and then modulated by random phase functions. The modulated H, S, and I components are convolved into a single gray image with an asymmetric cryptosystem. The resulting image is segregated into Q and R parts by QR decomposition. Finally, they are independently gyrator transformed to obtain their encoded parts. Both encoded Q and R parts must be gathered for decryption. The angles of the gyrator transform afford sensitive keys. The protocol, based on QR decomposition of the encoded matrix and recovery of the decoded matrix by multiplying Q and R, enhances the security level. The random phase keys, individual phase keys, and asymmetric phase keys provide high robustness to the cryptosystem. Numerical simulation results demonstrate that this scheme is superior to existing techniques.
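
    The RGB-to-HSI conversion step can be written directly from the classic formulas; this sketch covers only the color-space stage, not the paper's cryptosystem.

```python
import math

def rgb_to_hsi(r, g, b):
    """Classic RGB -> HSI conversion for components in [0, 1]:
    H in degrees, S and I in [0, 1]."""
    total = r + g + b
    if total == 0:
        return 0.0, 0.0, 0.0                 # black: hue/saturation undefined
    i = total / 3.0
    s = 1.0 - 3.0 * min(r, g, b) / total
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    # clamp guards against rounding just outside acos's domain
    h = math.degrees(math.acos(max(-1.0, min(1.0, num / den)))) if den else 0.0
    if b > g:
        h = 360.0 - h
    return h, s, i
```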

  13. Local adaptation for body color in Drosophila americana

    PubMed Central

    Wittkopp, P J; Smith-Winberry, G; Arnold, L L; Thompson, E M; Cooley, A M; Yuan, D C; Song, Q; McAllister, B F

    2011-01-01

    Pigmentation is one of the most variable traits within and between Drosophila species. Much of this diversity appears to be adaptive, with environmental factors often invoked as selective forces. Here, we describe the geographic structure of pigmentation in Drosophila americana and evaluate the hypothesis that it is a locally adapted trait. Body pigmentation was quantified using digital images and spectrometry in up to 10 flies from each of 93 isofemale lines collected from 17 locations across the United States and found to correlate most strongly with longitude. Sequence variation at putatively neutral loci showed no evidence of population structure and was inconsistent with an isolation-by-distance model, suggesting that the pigmentation cline exists despite extensive gene flow throughout the species range, and is most likely the product of natural selection. In all other Drosophila species examined to date, dark pigmentation is associated with arid habitats; however, in D. americana, the darkest flies were collected from the most humid regions. To investigate this relationship further, we examined desiccation resistance attributable to an allele that darkens pigmentation in D. americana. We found no significant effect of pigmentation on desiccation resistance in this experiment, suggesting that pigmentation and desiccation resistance are not unequivocally linked in all Drosophila species. PMID:20606690

  14. Color image reproduction based on multispectral and multiprimary imaging: experimental evaluation

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Masahiro; Teraji, Taishi; Ohsawa, Kenro; Uchiyama, Toshio; Motomura, Hideto; Murakami, Yuri; Ohyama, Nagaaki

    2001-12-01

    Multispectral imaging is a significant technology for the acquisition and display of accurate color information. Natural color reproduction under arbitrary illumination becomes possible using spectral information about both the image and the illumination light. In addition, multiprimary color display, i.e., display using more than three primary colors, has also been developed for reproducing an expanded color gamut and for discounting observer metamerism. In this paper, we present the concept of multispectral data interchange for natural color reproduction, along with experimental results using a 16-band multispectral camera and a 6-primary color display. In the experiment, the accuracy of color reproduction is evaluated in CIE ΔE*ab for both the image capture and display systems. The average and maximum ΔE*ab were 1.0 and 2.1 for the 16-band multispectral camera system, using the Macbeth 24 color patches. For the six-primary color projection display, the average and maximum ΔE*ab were 1.3 and 2.7 with 30 test colors inside the display gamut. Moreover, color reproduction results with different spectral distributions but the same CIE tristimulus values were visually compared, confirming that the 6-primary display gives improved agreement between the original and reproduced colors.
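    The ΔE*ab figures quoted here are CIE 1976 color differences, i.e. Euclidean distances between two colors in CIELAB coordinates; a minimal sketch:

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE 1976 Delta E*ab between two (L*, a*, b*) triples."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

# A difference of 1 unit along a* gives Delta E*ab = 1.0, roughly on
# the order of a just-noticeable difference.
print(delta_e_ab((50.0, 10.0, 10.0), (50.0, 11.0, 10.0)))  # 1.0
```

    On this scale, the reported camera and display averages of 1.0 and 1.3 correspond to barely perceptible errors.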

  15. Peripheral visual response time to colored stimuli imaged on the horizontal meridian

    NASA Technical Reports Server (NTRS)

    Haines, R. F.; Gross, M. M.; Nylen, D.; Dawson, L. M.

    1974-01-01

    Two male observers were administered a binocular visual response time task to small (45 min arc), flashed, photopic stimuli at four dominant wavelengths (632 nm red; 583 nm yellow; 526 nm green; 464 nm blue) imaged across the horizontal retinal meridian. The stimuli were imaged at 10 deg arc intervals from 80 deg left to 90 deg right of fixation. Testing followed either prior light adaptation or prior dark adaptation. Results indicated that mean response time (RT) varies with stimulus color. RT is faster to yellow than to blue and green and slowest to red. In general, mean RT was found to increase from fovea to periphery for all four colors, with the curve for red stimuli exhibiting the most rapid positive acceleration with increasing angular eccentricity from the fovea. The shape of the RT distribution across the retina was also found to depend upon the state of light or dark adaptation. The findings are related to previous RT research and are discussed in terms of optimizing the color and position of colored displays on instrument panels.

  16. Color impact in visual attention deployment considering emotional images

    NASA Astrophysics Data System (ADS)

    Chamaret, C.

    2012-03-01

    Color is a predominant factor in the human visual attention system. Even if it is not sufficient for a global or complete understanding of a scene, it may affect the deployment of visual attention. We propose to study the impact of color, as well as the emotional content of pictures, on the deployment of visual attention. An eye-tracking campaign was conducted in which twenty people watched half of a picture database in full color and the other half in grayscale. The eye fixations for color and grayscale images were highly correlated, raising the question of whether such cues belong in the design of visual attention models. Indeed, the predictions of two state-of-the-art computational models show similar results for the two color categories. Similarly, the study of saccade amplitude and fixation duration versus viewing time did not reveal any significant differences between the two categories. In addition, the spatial coordinates of eye fixations provide an interesting indicator for investigating differences in visual attention deployment over time and fixation number. The second factor, emotion category, shows evidence of inter-category differences between color and grayscale eye fixations for passive and positive emotions. The particular aspect associated with this category induces a specific behavior, based rather on high frequencies, in which the color components influence the deployment of visual attention.

  17. Weighted MinMax Algorithm for Color Image Quantization

    NASA Technical Reports Server (NTRS)

    Reitan, Paula J.

    1999-01-01

    The maximum intercluster distance and the maximum quantization error that are minimized by the MinMax algorithm are shown to be inappropriate error measures for color image quantization. A fast and effective (image-quality-improving) method for generalizing activity weighting to any histogram-based color quantization algorithm is presented. A new non-hierarchical color quantization technique called weighted MinMax, a hybrid between the MinMax and Linde-Buzo-Gray (LBG) algorithms, is also described. The weighted MinMax algorithm incorporates activity weighting and seeks to minimize the weighted root-mean-square error (WRMSE), thereby obtaining high-quality quantized images with significantly less visual distortion than the MinMax algorithm.
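    As a rough illustration of a WRMSE objective of this kind (the weighting scheme here is a generic per-pixel weight, not the paper's activity measure):

```python
import numpy as np

def wrmse(original, quantized, weights):
    """Weighted RMSE between an image and its quantized version;
    larger weights penalize errors in those pixels more heavily."""
    err = (original.astype(float) - quantized.astype(float)) ** 2
    return float(np.sqrt(np.sum(weights * err) / np.sum(weights)))

orig = np.array([[10.0, 20.0], [30.0, 40.0]])
quant = np.array([[12.0, 20.0], [30.0, 40.0]])
flat = np.ones_like(orig)           # uniform weights reduce to plain RMSE
print(wrmse(orig, quant, flat))     # 1.0
```

    With activity-derived weights, the same formula biases the quantizer toward fidelity in visually busy regions.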

  18. Optical color-image encryption and synthesis using coherent diffractive imaging in the Fresnel domain.

    PubMed

    Chen, Wen; Chen, Xudong; Sheppard, Colin J R

    2012-02-13

    We propose a new method using coherent diffractive imaging for optical color-image encryption and synthesis in the Fresnel domain. An optical multiple-random-phase-mask encryption system is applied, and a strategy based on lateral translations of a phase-only mask is employed during image encryption. For decryption, an iterative phase retrieval algorithm is applied to extract high-quality decrypted color images from diffraction intensity maps (i.e., ciphertexts). In addition, optical color-image synthesis is also investigated based on coherent diffractive imaging. Numerical results are presented to demonstrate the feasibility and effectiveness of the proposed method. Compared with conventional interference methods, the coherent diffractive imaging approach may open up a new research perspective and can provide an effective alternative for optical color-image encryption and synthesis.

  19. SCID: full reference spatial color image quality metric

    NASA Astrophysics Data System (ADS)

    Ouni, S.; Chambah, M.; Herbin, M.; Zagrouba, E.

    2009-01-01

    The most commonly used full-reference image quality assessments are error-based methods, performed with pixel-based difference metrics such as ΔE, MSE, and PSNR. These metrics compute differences pixel by pixel, so only a local fidelity of color is defined. However, they do not correlate well with perceived image quality: they omit the properties of the human visual system (HVS) and therefore cannot reliably predict perceived visual quality, which is sensitive to global rather than local fidelity. In this paper, we present a novel full-reference color metric based on characteristics of the human visual system that takes the notion of adjacency into account. This metric, called SCID for Spatial Color Image Difference, is more perceptually correlated than other color differences such as ΔE. The suggested full-reference metric is generic and independent of the image distortion type, and can be used in different applications such as compression and restoration.
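    The pixel-based error measures the abstract contrasts with SCID can be sketched directly; MSE averages squared pixel differences and PSNR is its logarithmic form:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images of equal shape."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB (infinite for identical images)."""
    m = mse(a, b)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)

a = np.zeros((4, 4), dtype=np.uint8)
b = np.full((4, 4), 16, dtype=np.uint8)
print(mse(a, b))    # 256.0
```

    Both scores depend only on per-pixel differences, which is exactly the locality the SCID metric is designed to overcome.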

  20. Adaptive predictive image coding using local characteristics

    NASA Astrophysics Data System (ADS)

    Hsieh, C. H.; Lu, P. C.; Liou, W. G.

    1989-12-01

    The paper presents an efficient adaptive predictive coding method using the local characteristics of images. In this method, three coding schemes, namely, mean, subsampling combined with fixed DPCM, and ADPCM/PCM, are used and one of these is chosen adaptively based on the local characteristics of images. The prediction parameters of the two-dimensional linear predictor in the ADPCM/PCM are extracted on a block by block basis. Simulation results show that the proposed method is effective in reducing the slope overload distortion and the granular noise at low bit rates, and thus it can improve the visual quality of reconstructed images.
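    For background, the plain (non-adaptive) first-order DPCM loop that the paper's ADPCM/PCM scheme builds on can be sketched as follows; the step size and previous-sample predictor are illustrative:

```python
def dpcm_encode(signal, step=4):
    """First-order DPCM: quantize the residual against the previous
    reconstructed sample, mirroring the decoder to avoid drift."""
    codes, recon = [], 0.0
    for x in signal:
        q = round((x - recon) / step)   # quantized prediction residual
        codes.append(q)
        recon += q * step               # decoder-matched reconstruction
    return codes

def dpcm_decode(codes, step=4):
    out, recon = [], 0.0
    for q in codes:
        recon += q * step
        out.append(recon)
    return out

print(dpcm_encode([0, 8, 12, 12]))  # [0, 2, 1, 0]
```

    An adaptive scheme like the paper's would switch predictors or coding modes per block instead of using one fixed quantizer.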

  1. Digital watermarking for color images in hue-saturation-value color space

    NASA Astrophysics Data System (ADS)

    Tachaphetpiboon, Suwat; Thongkor, Kharittha; Amornraksa, Thumrongrat; Delp, Edward J.

    2014-05-01

    This paper proposes a new watermarking scheme for color images in which all pixels of the image are used for embedding watermark bits, in order to maximize embedding capacity. For watermark embedding, the S component in the hue-saturation-value (HSV) color space carries the watermark bits, while the V component is used, in accordance with a human visual system model, to determine the proper watermark strength. In the proposed scheme, the number of watermark bits equals the number of pixels in the host image. Watermark extraction is accomplished blindly using a 3×3 spatial-domain Wiener filter. The efficiency of our proposed image watermarking scheme depends mainly on the accuracy of the estimate of the original S component. The experimental results show that the performance of the proposed scheme, both under no attacks and against various types of attacks, was superior to previously existing watermarking schemes.
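    The HSV decomposition this scheme relies on (S carries the watermark bit, V sets the embedding strength) can be sketched with Python's standard colorsys module; the pixel value below is arbitrary:

```python
import colorsys

# Decompose one RGB pixel (components in [0, 1]) into HSV.
# In the scheme above, the watermark bit would perturb s, with the
# perturbation magnitude scaled by v per a visual-system model.
r, g, b = 0.8, 0.4, 0.2
h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(round(s, 2), round(v, 2))  # 0.75 0.8
```

    Since V equals max(R, G, B), brighter pixels tolerate a stronger perturbation of S before it becomes visible.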

  2. Color correction with blind image restoration based on multiple images using a low-rank model

    NASA Astrophysics Data System (ADS)

    Li, Dong; Xie, Xudong; Lam, Kin-Man

    2014-03-01

    We present a method that can handle color correction of multiple photographs together with blind image restoration, simultaneously and automatically. We prove that the local colors of a set of images of the same scene exhibit the low-rank property locally, both before and after a color-correction operation. This property allows us to correct all kinds of errors in an image under a low-rank matrix model, without particular priors or assumptions. The possible errors may be caused by changes of viewpoint, large illumination variations, gross pixel corruptions, partial occlusions, etc. Furthermore, a new iterative soft-segmentation method is proposed for local color transfer using color influence maps. Because the correct color information and the spatial information of images can be recovered using the low-rank model, more precise color correction and many other image-restoration tasks, including image denoising, image deblurring, and grayscale image colorization, can be performed simultaneously. Experiments have verified that our method achieves consistent and promising results on uncontrolled real photographs acquired from the Internet and that it outperforms current state-of-the-art methods.
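    The low-rank idea can be illustrated with synthetic data: stacking aligned observations of the same scene yields a low-rank matrix, so a truncated SVD recovers the shared structure from a lightly corrupted copy. The data, rank, and noise level here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
base = rng.random((20, 3))             # "true" local colors, rank <= 3
M = base @ rng.random((3, 10))         # 10 observations -> rank-3 matrix
noisy = M + 0.01 * rng.standard_normal(M.shape)   # small corruption

# Keep only the top 3 singular components of the corrupted matrix.
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
recovered = (U[:, :3] * s[:3]) @ Vt[:3]

assert np.linalg.matrix_rank(M) == 3
assert np.linalg.norm(recovered - M) / np.linalg.norm(M) < 0.05
```

    The paper's model handles gross corruptions as well, which requires robust decompositions rather than plain SVD, but the shared-structure intuition is the same.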

  3. Ocean color products from the Korean Geostationary Ocean Color Imager (GOCI).

    PubMed

    Wang, Menghua; Ahn, Jae-Hyun; Jiang, Lide; Shi, Wei; Son, SeungHyun; Park, Young-Je; Ryu, Joo-Hyung

    2013-02-11

    The first geostationary ocean color satellite sensor, the Geostationary Ocean Color Imager (GOCI), onboard the South Korean Communication, Ocean, and Meteorological Satellite (COMS), was successfully launched in June of 2010. GOCI has local-area coverage of the western Pacific region, centered at around 36°N and 130°E and covering ~2500 × 2500 km², with eight spectral bands from 412 to 865 nm and hourly measurements during daytime from 9:00 to 16:00 local time, i.e., eight images per day. In a collaboration between the NOAA Center for Satellite Applications and Research (STAR) and the Korea Institute of Ocean Science and Technology (KIOST), we have been working on deriving and improving GOCI ocean color products, e.g., normalized water-leaving radiance spectra (nLw(λ)), chlorophyll-a concentration, the diffuse attenuation coefficient at 490 nm (Kd(490)), etc. The GOCI-covered ocean region includes some of the world's most turbid and optically complex waters. To improve the GOCI-derived nLw(λ) spectra, a new atmospheric correction algorithm was developed and implemented in the GOCI ocean color data processing, designed specifically for this highly turbid western Pacific region. In this paper, we show GOCI ocean color results from our collaborative effort. In situ validation analyses show that ocean color products derived from the new GOCI ocean color data processing have been significantly improved, and generally have data quality comparable to that of products from the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Aqua satellite. We show that GOCI-derived ocean color data provide an effective tool to monitor ocean phenomena in the region, such as tide-induced re-suspension of sediments, diurnal variation of ocean optical and biogeochemical properties, and horizontal advection of river discharge. In particular, we show some examples of ocean

  4. Color Doppler Imaging of Cardiac Catheters Using Vibrating Motors

    PubMed Central

    Reddy, Kalyan E.; Light, Edward D.; Rivera, Danny J.; Kisslo, Joseph A.; Smith, Stephen W.

    2010-01-01

    We attached a miniature motor rotating at 11,000 rpm onto the proximal end of cardiac electrophysiological (EP) catheters in order to produce vibrations at the tip which were then visualized by color Doppler on ultrasound scanners. We imaged the catheter tip within a vascular graft submerged in a water tank using the Volumetrics Medical Imaging 3D scanner, the Siemens Sonoline Antares 2D scanner, and the Philips ie33 3D ultrasound scanner with TEE probe. The vibrating catheter tip was visualized in each case though results varied with the color Doppler properties of the individual scanner. PMID:19514134

  5. Segmentation and tracking of facial regions in color image sequences

    NASA Astrophysics Data System (ADS)

    Menser, Bernd; Wien, Mathias

    2000-05-01

    In this paper a new algorithm for joint detection and segmentation of human faces in color image sequences is presented. A skin probability image is generated using a model for skin color. Instead of a binary segmentation to detect skin regions, connected operators are used to analyze the skin probability image at different threshold levels. A hierarchical scheme of operators using shape and texture simplifies the skin probability image. For the remaining connected components, the likelihood of being a face is estimated using principal component analysis. To track a detected face region through the sequence, the connected component that represents the face in the previous frame is projected into the current frame. Using the projected segment as a marker, connected operators extract the actual face region from the skin probability image.
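    A skin probability image of this kind can be sketched as a per-pixel Gaussian likelihood in a chrominance space; the mean and covariance below are illustrative, not taken from the paper:

```python
import numpy as np

def skin_probability(pixels, mean, cov):
    """Gaussian likelihood of 2-D chrominance vectors (shape (..., 2));
    applied per pixel this yields a skin probability image."""
    diff = pixels - mean
    inv = np.linalg.inv(cov)
    mahal = np.einsum('...i,ij,...j->...', diff, inv, diff)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
    return norm * np.exp(-0.5 * mahal)

mean = np.array([110.0, 150.0])              # hypothetical Cb/Cr skin center
cov = np.array([[80.0, 0.0], [0.0, 60.0]])   # hypothetical spread
# A pixel at the model center scores higher than one far from it.
print(bool(skin_probability(mean, mean, cov) >
           skin_probability(mean + 30.0, mean, cov)))  # True
```

    The paper then thresholds this probability image at multiple levels with connected operators instead of picking a single cutoff.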

  6. Two-color ghost imaging with enhanced angular resolving power

    SciTech Connect

    Karmakar, Sanjit; Shih, Yanhua

    2010-03-15

    This article reports an experimental demonstration of nondegenerate, two-color, biphoton ghost imaging which reproduced a ghost image with enhanced angular resolving power by means of a greater field of view compared with that of classical imaging. With the same imaging magnification, the enhanced angular resolving power and field of view compared with those of classical imaging are 1.25:1 and 1.16:1, respectively. The enhancement of angular resolving power depends on the ratio between the idler and the signal photon frequencies, and the enhancement of the field of view depends mainly on the same ratio and also on the distances of the object plane and the imaging lens from the two-photon source. This article also reports the possibility of reproducing a ghost image with enhanced angular resolving power by means of a greater imaging amplification compared with that of classical imaging.

  7. Fixation light hue bias revisited: implications for using adaptive optics to study color vision.

    PubMed

    Hofer, H J; Blaschke, J; Patolia, J; Koenig, D E

    2012-03-01

    Current vision science adaptive optics systems use near infrared wavefront sensor 'beacons' that appear as red spots in the visual field. Colored fixation targets are known to influence the perceived color of macroscopic visual stimuli (Jameson, D., & Hurvich, L. M. (1967). Fixation-light bias: An unwanted by-product of fixation control. Vision Research, 7, 805-809.), suggesting that the wavefront sensor beacon may also influence perceived color for stimuli displayed with adaptive optics. Despite its importance for proper interpretation of adaptive optics experiments on the fine scale interaction of the retinal mosaic and spatial and color vision, this potential bias has not yet been quantified or addressed. Here we measure the impact of the wavefront sensor beacon on color appearance for dim, monochromatic point sources in five subjects. The presence of the beacon altered color reports both when used as a fixation target as well as when displaced in the visual field with a chromatically neutral fixation target. This influence must be taken into account when interpreting previous experiments and new methods of adaptive correction should be used in future experiments using adaptive optics to study color.

  8. Implementation of a multi-spectral color imaging device without color filter array

    NASA Astrophysics Data System (ADS)

    Langfelder, G.; Longoni, A. F.; Zaraga, F.

    2011-01-01

    In this work the use of the Transverse Field Detector (TFD) as a device for multispectral image acquisition is proposed. The TFD is a color imaging pixel capable of color reconstruction without color filters. Its basic working principle is the generation of a suitable electric field configuration inside a silicon depleted region by means of biasing voltages applied to surface contacts. With respect to previously proposed methods for multispectral capture, the TFD has the unique characteristic of electrically tunable spectral responses. This feature allows capturing an image with different sets of spectral responses (RGB, R'G'B', and so on) simply by tuning the device biasing voltages across multiple captures. In this way no hardware complexity (no external filter wheels or varying sources) is added with respect to a colorimetric device. The estimation of the spectral reflectance of the area imaged by a TFD pixel is based in this work on a linear combination of six eigenfunctions. It is shown that a spectral reconstruction can be obtained either (1) using two subsequent image captures that generate six TFD spectral responses or (2) using a new asymmetric biasing scheme, which implements five spectral responses for each TFD pixel site in a single configuration, thus allowing one-shot multispectral imaging.
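    The reconstruction step, expressing reflectance as a linear combination of basis functions and recovering the coefficients from the sensor responses, can be sketched with least squares; the basis spectra, sensor responses, and dimensions here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
wavelengths = 31                     # e.g. 400-700 nm in 10 nm steps
B = rng.random((wavelengths, 6))     # 6 basis spectra ("eigenfunctions")
S = rng.random((6, wavelengths))     # 6 tunable sensor responses

true_coeffs = rng.random(6)
reflectance = B @ true_coeffs        # reflectance lies in the basis span
responses = S @ reflectance          # what the device actually measures

# Solve (S @ B) c = responses for the basis coefficients c,
# then rebuild the reflectance estimate as B @ c.
c, *_ = np.linalg.lstsq(S @ B, responses, rcond=None)
assert np.allclose(B @ c, reflectance)
```

    With six responses and six coefficients the system is square; with five responses (the one-shot scheme) it becomes an underdetermined fit and recovery is only approximate.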

  9. Preparing Colorful Astronomical Images III: Cosmetic Cleaning

    NASA Astrophysics Data System (ADS)

    Frattare, L. M.; Levay, Z. G.

    2003-12-01

    We present cosmetic cleaning techniques for use with mainstream graphics software (Adobe Photoshop) to produce presentation-quality images and illustrations from astronomical data. These techniques have been used on numerous images from the Hubble Space Telescope when producing photographic, print and web-based products for news, education and public presentation as well as illustrations for technical publication. We expand on a previous paper to discuss the treatment of various detector-attributed artifacts such as cosmic rays, chip seams, gaps, optical ghosts, diffraction spikes and the like. While Photoshop is not intended for quantitative analysis of full dynamic range data (as are IRAF or IDL, for example), we have had much success applying Photoshop's numerous, versatile tools to final presentation images. Other pixel-to-pixel applications such as filter smoothing and global noise reduction will be discussed.

  10. Incorporating Adaptive Local Information Into Fuzzy Clustering for Image Segmentation.

    PubMed

    Liu, Guoying; Zhang, Yun; Wang, Aimin

    2015-11-01

    Fuzzy c-means (FCM) clustering with spatial constraints has attracted great attention in the field of image segmentation. However, most of the popular techniques fail to resolve misclassification problems due to the inaccuracy of their spatial models. This paper presents a new unsupervised FCM-based image segmentation method by paying closer attention to the selection of local information. In this method, region-level local information is incorporated into the fuzzy clustering procedure to adaptively control the range and strength of interactive pixels. First, a novel dissimilarity function is established by combining region-based and pixel-based distance functions together, in order to enhance the relationship between pixels which have similar local characteristics. Second, a novel prior probability function is developed by integrating the differences between neighboring regions into the mean template of the fuzzy membership function, which adaptively selects local spatial constraints by a tradeoff weight depending upon whether a pixel belongs to a homogeneous region or not. Through incorporating region-based information into the spatial constraints, the proposed method strengthens the interactions between pixels within the same region and prevents over smoothing across region boundaries. Experimental results over synthetic noise images, natural color images, and synthetic aperture radar images show that the proposed method achieves more accurate segmentation results, compared with five state-of-the-art image segmentation methods.
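    For reference, the plain fuzzy c-means iteration that such methods extend (without any spatial or region term) can be sketched on 1-D intensity data:

```python
import numpy as np

def fcm(data, c=2, m=2.0, iters=50):
    """Plain fuzzy c-means on 1-D data: alternate membership and
    centroid updates minimizing sum_ik u_ik^m * (x_i - v_k)^2."""
    v = np.linspace(data.min(), data.max(), c)   # spread initial centroids
    for _ in range(iters):
        d = np.abs(data[:, None] - v[None, :]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))              # membership update
        u /= u.sum(axis=1, keepdims=True)        # rows sum to 1
        v = (u ** m).T @ data / (u ** m).sum(axis=0)   # centroid update
    return u, v

u, v = fcm(np.array([0.0, 0.1, 0.05, 1.0, 0.9, 0.95]))
print(np.sort(np.round(v, 2)))   # centroids near the two clusters
```

    The paper's contribution is what replaces the bare distance d here: a region-aware dissimilarity and an adaptive spatial prior.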

  11. Adaptive optics imaging of the retina.

    PubMed

    Battu, Rajani; Dabir, Supriya; Khanna, Anjani; Kumar, Anupama Kiran; Roy, Abhijit Sinha

    2014-01-01

    Adaptive optics is a relatively new tool that is available to ophthalmologists for study of cellular level details. In addition to the axial resolution provided by the spectral-domain optical coherence tomography, adaptive optics provides an excellent lateral resolution, enabling visualization of the photoreceptors, blood vessels and details of the optic nerve head. We attempt a mini review of the current role of adaptive optics in retinal imaging. PubMed search was performed with key words Adaptive optics OR Retina OR Retinal imaging. Conference abstracts were searched from the Association for Research in Vision and Ophthalmology (ARVO) and American Academy of Ophthalmology (AAO) meetings. In total, 261 relevant publications and 389 conference abstracts were identified.

  12. Adaptive optics imaging of the retina

    PubMed Central

    Battu, Rajani; Dabir, Supriya; Khanna, Anjani; Kumar, Anupama Kiran; Roy, Abhijit Sinha

    2014-01-01

    Adaptive optics is a relatively new tool that is available to ophthalmologists for study of cellular level details. In addition to the axial resolution provided by the spectral-domain optical coherence tomography, adaptive optics provides an excellent lateral resolution, enabling visualization of the photoreceptors, blood vessels and details of the optic nerve head. We attempt a mini review of the current role of adaptive optics in retinal imaging. PubMed search was performed with key words Adaptive optics OR Retina OR Retinal imaging. Conference abstracts were searched from the Association for Research in Vision and Ophthalmology (ARVO) and American Academy of Ophthalmology (AAO) meetings. In total, 261 relevant publications and 389 conference abstracts were identified. PMID:24492503

  13. Fuzzy logic color detection: Blue areas in melanoma dermoscopy images.

    PubMed

    Lingala, Mounika; Stanley, R Joe; Rader, Ryan K; Hagerty, Jason; Rabinovitz, Harold S; Oliviero, Margaret; Choudhry, Iqra; Stoecker, William V

    2014-07-01

    Fuzzy logic image analysis techniques were used to analyze three shades of blue (lavender blue, light blue, and dark blue) in dermoscopic images for melanoma detection. A logistic regression model provided up to 82.7% accuracy in melanoma discrimination for 866 images. With a support vector machine (SVM) classifier, lower accuracy was obtained for individual shades (79.9-80.1%) compared with up to 81.4% accuracy with multiple shades. All fuzzy blue logic alpha cuts scored higher than the crisp case. Fuzzy logic techniques applied to multiple shades of blue can assist in melanoma detection, and these vector-based fuzzy logic techniques can be extended to other image analysis problems involving multiple colors or color shades.
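    The fuzzy-logic ingredients named here, a membership function for a color shade and an alpha cut over it, can be sketched as follows; the breakpoints are invented for illustration:

```python
def triangular_membership(x, a, b, c):
    """Membership rising from zero at a to a peak of 1 at b,
    falling back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def alpha_cut(values, mu, alpha):
    """Crisp set of values whose membership exceeds alpha."""
    return [v for v in values if mu(v) > alpha]

# Hypothetical "light blue" hue band peaking at 220.
mu = lambda x: triangular_membership(x, 180.0, 220.0, 260.0)
print(alpha_cut([170, 200, 220, 250, 270], mu, 0.4))  # [200, 220]
```

    Raising alpha tightens the cut toward the core of the shade, which is how the alpha cuts compared in the abstract differ from the all-or-nothing crisp case.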

  14. Color image processing and object tracking workstation

    NASA Technical Reports Server (NTRS)

    Klimek, Robert B.; Paulick, Michael J.

    1992-01-01

    A system is described for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the microgravity combustion and fluid science experiments at NASA Lewis. The system consists of individual hardware parts working under computer control to achieve a high degree of automation. The most important hardware parts include a 16 mm film projector, a lens system, a video camera, an S-VHS tape deck, a frame grabber, and storage and output devices. Both the projector and the tape deck have a computer interface enabling remote control. Tracking software was developed to control the overall operation. In the automatic mode, the main tracking program controls the projector or tape deck frame incrementation, grabs a frame, processes it, locates the edges of the objects being tracked, and stores the coordinates in a file. This process is repeated until the last frame is reached. Three representative applications are described: tracking the propagation of a flame front, tracking the movement of a liquid-gas interface with extremely poor visibility, and characterizing a diffusion flame according to color and shape.

  15. Luminosity and contrast normalization in color retinal images based on standard reference image

    NASA Astrophysics Data System (ADS)

    S. Varnousfaderani, Ehsan; Yousefi, Siamak; Belghith, Akram; Goldbaum, Michael H.

    2016-03-01

    Color retinal images are used, manually or automatically, for diagnosis and for monitoring the progression of retinal diseases. They exhibit large luminosity and contrast variability within and across images due to large natural variations in retinal pigmentation and complex imaging setups. Since the quality of retinal images may affect the performance of automatic screening tools, different normalization methods have been developed to make the data uniform before any further analysis or processing. In this paper we propose a new, reliable method to remove non-uniform illumination in retinal images and improve their contrast based on the contrast of a reference image. The non-uniform illumination is removed by normalizing the luminance image using the local mean and standard deviation. The contrast is then enhanced by shifting the histograms of the uniformly illuminated retinal image toward the histograms of the reference image so that their peaks coincide. This process improves the contrast without changing the inter-correlation of pixels in different color channels. In keeping with the way humans perceive color, the uniform color space LUV is used for normalization. The proposed method was widely tested on a large dataset of retinal images with different pathologies present, such as exudates, lesions, hemorrhages, and cotton-wool spots, under different illumination conditions and imaging setups. Results show that the proposed method successfully equalizes illumination and enhances the contrast of retinal images without adding extra artifacts.
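    A global version of the mean/standard-deviation normalization described above (the paper applies it with local statistics over neighborhoods) can be sketched as:

```python
import numpy as np

def match_mean_std(image, ref_mean, ref_std):
    """Shift and scale an image's luminance so its mean and standard
    deviation match those of a reference."""
    m, s = image.mean(), image.std()
    return (image - m) / (s + 1e-12) * ref_std + ref_mean

img = np.array([[0.2, 0.4], [0.6, 0.8]])
out = match_mean_std(img, 0.5, 0.1)
print(round(out.mean(), 6), round(out.std(), 6))  # 0.5 0.1
```

    Replacing the global mean and standard deviation with values computed in a sliding window yields the local illumination correction the paper uses.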

  16. Clinical skin imaging using color spatial frequency domain imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Yang, Bin; Lesicko, John; Moy, Austin J.; Reichenberg, Jason; Tunnell, James W.

    2016-02-01

    Skin diseases are typically associated with underlying biochemical and structural changes relative to normal tissue, which alter the optical properties of skin lesions, such as tissue absorption and scattering. Although widely used in dermatology clinics, conventional dermatoscopes cannot selectively image tissue absorption and scattering, which may limit their diagnostic power. Here we report a novel clinical skin imaging technique called color spatial frequency domain imaging (cSFDI), which enhances contrast by rendering color spatial frequency domain (SFD) images at high spatial frequency. Moreover, by tuning the spatial frequency, we can obtain both absorption-weighted and scattering-weighted images. We developed a handheld imaging system specifically for clinical skin imaging; its flexible configuration allows better access to skin lesions in hard-to-reach regions. A total of 48 lesions from 31 patients were imaged under 470 nm, 530 nm, and 655 nm illumination at a spatial frequency of 0.6 mm^-1. The SFD reflectance images at 470 nm, 530 nm, and 655 nm were assigned to the blue (B), green (G), and red (R) channels to render a color SFD image. Our results indicated that color SFD images at f = 0.6 mm^-1 revealed properties not seen in standard color images: structural features were enhanced and absorption features were reduced, which helped to identify the sources of the contrast. This imaging technique provides additional insight into skin lesions and may better assist clinical diagnosis.

  17. Animal Detection in Natural Images: Effects of Color and Image Database

    PubMed Central

    Zhu, Weina; Drewes, Jan; Gegenfurtner, Karl R.

    2013-01-01

    The visual system has a remarkable ability to extract categorical information from complex natural scenes. In order to elucidate the role of low-level image features in the recognition of objects in natural scenes, we recorded saccadic eye movements and event-related potentials (ERPs) in two experiments in which human subjects had to detect animals in previously unseen natural images. We used a new natural image database (ANID) that is free of some of the potential artifacts that have plagued the widely used COREL images. Color and grayscale images picked from the ANID and COREL databases were used. In all experiments, color images induced a greater N1 EEG component at earlier time points than grayscale images. We suggest that this influence of color in animal detection may be masked by later processes when measuring reaction times. The ERP results of the go/nogo and forced-choice tasks were similar to those reported earlier. The non-animal stimuli induced a larger N1 than animal stimuli in both the COREL and ANID databases. This result indicates that ultra-fast processing of animal images is possible irrespective of the particular database. With the ANID images, the difference between color and grayscale images is more pronounced than with the COREL images. The earlier use of the COREL images might therefore have led to an underestimation of the contribution of color. We conclude that the ANID image database is better suited to investigating the processing of natural scenes than other commonly used databases. PMID:24130744

  18. Multi-color magnetic particle imaging for cardiovascular interventions

    NASA Astrophysics Data System (ADS)

    Haegele, Julian; Vaalma, Sarah; Panagiotopoulos, Nikolaos; Barkhausen, Jörg; Vogt, Florian M.; Borgert, Jörn; Rahmer, Jürgen

    2016-08-01

    Magnetic particle imaging (MPI) uses magnetic fields to visualize the spatial distribution of superparamagnetic iron oxide nanoparticles (SPIOs). Guidance of cardiovascular interventions is seen as one possible application of MPI. To safely guide interventions, the vessel lumen as well as all required interventional devices have to be visualized and be discernible from each other. Until now, different tracer concentrations were used for discerning devices from blood in MPI, because only one type of SPIO could be imaged at a time. Recently, it was shown for 3D MPI that it is possible to separate different signal sources in one volume of interest, i.e. to visualize and discern different SPIOs or different binding states of the same SPIO. The approach was termed multi-color MPI. In this work, the use of multi-color MPI for differentiation of a SPIO coated guide wire (Terumo Radifocus 0.035″) from the lumen of a vessel phantom filled with diluted Resovist is demonstrated. This is achieved by recording dedicated system functions of the coating material containing solid Resovist and of liquid Resovist, which allows separation of their respective signal in the image reconstruction process. Assigning a color to the different signal sources results in a differentiation of guide wire and vessel phantom lumen into colored images.

  19. Color Image of Phoenix Heat Shield and Bounce Mark

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This shows a color image from Mars Reconnaissance Orbiter's High Resolution Imaging Science Experiment camera. It shows the Phoenix heat shield and bounce mark on the Mars surface.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  20. Use of discrete chromatic space to tune the image tone in a color image mosaic

    NASA Astrophysics Data System (ADS)

    Zhang, Zuxun; Li, Zhijiang; Zhang, Jianqing; Zheng, Li

    2003-09-01

    Color image processing is an important problem. The prevailing approach is to transform the RGB color space into another color space, such as HSI (hue, intensity, and saturation), YIQ, or LUV. However, processing a color airborne image in a single color space may not be valid: the electromagnetic signal is physically altered in each wave band, while the color image is perceived through psychological vision. An approach is therefore needed that accords with both the physical transformation and psychological perception. An analysis of how to use the relevant color spaces to process color airborne photographs is discussed, and an application to tuning the image tone in a color airborne image mosaic is introduced. As a practical demonstration, a complete approach to performing the mosaic on color airborne images by taking full advantage of the relevant color spaces is presented.
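
The RGB-to-other-color-space transforms named in this abstract can be illustrated with Python's standard colorsys module. This is a generic sketch, not the authors' airborne-mosaic pipeline; HLS stands in for the closely related HSI space, and the pixel values are illustrative.

```python
import colorsys

# A sky-blue pixel, RGB components in [0, 1].
r, g, b = 0.30, 0.55, 0.85

# Hue/Lightness/Saturation, a close relative of the HSI space named in
# the abstract; colorsys returns hue as a fraction of a full turn.
h, l, s = colorsys.rgb_to_hls(r, g, b)

# YIQ: luma plus two chroma axes, another space the abstract cites.
y, i, q = colorsys.rgb_to_yiq(r, g, b)

# The transforms are invertible: converting back recovers the pixel,
# so tone adjustments can be made in HLS/YIQ and mapped back to RGB.
r2, g2, b2 = colorsys.hls_to_rgb(h, l, s)
assert abs(r2 - r) < 1e-9 and abs(g2 - g) < 1e-9 and abs(b2 - b) < 1e-9
```

In a mosaic-tone-tuning setting, the hue and saturation channels could be adjusted per image before inverting back to RGB.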

  1. Improved Calibration Shows Images True Colors

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Innovative Imaging and Research, located at Stennis Space Center, used a single SBIR contract with the center to build a large-scale integrating sphere, capable of calibrating a whole array of cameras simultaneously, at a fraction of the usual cost for such a device. Through the use of LEDs, the company also made the sphere far more efficient than existing products and able to mimic sunlight.

  2. Color Image Processing and Object Tracking System

    NASA Technical Reports Server (NTRS)

    Klimek, Robert B.; Wright, Ted W.; Sielken, Robert S.

    1996-01-01

    This report describes a personal computer based system for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the Microgravity Combustion and Fluids Science Research Programs at the NASA Lewis Research Center. The system consists of individual hardware components working under computer control to achieve a high degree of automation. The most important hardware components include 16-mm and 35-mm film transports, a high-resolution digital camera mounted on an x-y-z micro-positioning stage, an S-VHS tapedeck, a Hi8 tapedeck, a video laserdisk, and a framegrabber. All of the image input devices are remotely controlled by a computer. Software was developed to integrate the overall operation of the system, including device frame incrementation, grabbing of image frames, image processing of the object's neighborhood, locating the position of the object being tracked, and storing the coordinates in a file. This process is performed repeatedly until the last frame is reached. Several different tracking methods are supported. To illustrate the process, two representative applications of the system are described. These applications represent typical uses of the system and include tracking the propagation of a flame front and tracking the movement of a liquid-gas interface with extremely poor visibility.

  3. Digital image modification detection using color information and its histograms.

    PubMed

    Zhou, Haoyu; Shen, Yue; Zhu, Xinghui; Liu, Bo; Fu, Zigang; Fan, Na

    2016-09-01

    The rapid development of open source and commercial image editing software makes the authenticity of digital images questionable. Copy-move forgery is one of the most widely used tampering techniques to create desirable objects or conceal undesirable objects in a scene. Existing techniques reported in the literature to detect such tampering aim to improve robustness against the use of JPEG compression, blurring, noise, or other types of post-processing operations. These post-processing operations are frequently used with the intention of concealing tampering and reducing tampering clues. A robust method based on color moments and five other image descriptors is proposed in this paper. The method divides the image into fixed-size overlapping blocks. A clustering operation divides the entire search space into smaller pieces with similar color distribution. Blocks from the tampered regions will reside within the same cluster, since both copied and moved regions have similar color distributions. Five image descriptors are used to extract block features, which makes the method more robust to post-processing operations. An ensemble of deep compositional pattern-producing neural networks is trained with these extracted features. Similarity among feature vectors in clusters indicates possible forged regions. Experimental results show that the proposed method can detect copy-move forgery even if an image was distorted by gamma correction, additive white Gaussian noise, JPEG compression, or blurring. PMID:27391780
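
The block-based stage of such detectors (overlapping blocks, color-moment features, grouping of blocks with similar color distributions) can be sketched in miniature. This uses only the first color moment (per-channel mean) as the feature; the toy 6x6 image and helper names are illustrative, not the paper's implementation.

```python
from collections import defaultdict

def block_mean(img, y, x, size):
    """First color moment (per-channel mean) of a size x size block."""
    means = [0.0, 0.0, 0.0]
    for dy in range(size):
        for dx in range(size):
            for c in range(3):
                means[c] += img[y + dy][x + dx][c]
    n = size * size
    return tuple(round(m / n, 4) for m in means)

def find_duplicate_blocks(img, size=2):
    """Group overlapping blocks by their color-moment feature; a feature
    shared by blocks at different positions is a copy-move candidate.
    (A real detector would further filter by consistent spatial offsets
    and use richer descriptors, as the paper does.)"""
    h, w = len(img), len(img[0])
    groups = defaultdict(list)
    for y in range(h - size + 1):
        for x in range(w - size + 1):
            groups[block_mean(img, y, x, size)].append((y, x))
    return {f: ps for f, ps in groups.items() if len(ps) > 1}

# Flat background with a distinctive patch copied from (0, 0) to (4, 4).
img = [[(10, 20, 30)] * 6 for _ in range(6)]
for y, x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    img[y][x] = img[y + 4][x + 4] = (200, 50, 50)

dupes = find_duplicate_blocks(img)
patch_positions = dupes[block_mean(img, 0, 0, 2)]
```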

  4. Offset-sparsity decomposition for automated enhancement of color microscopic image of stained specimen in histopathology

    NASA Astrophysics Data System (ADS)

    Kopriva, Ivica; Hadžija, Marijana Popović; Hadžija, Mirko; Aralica, Gorana

    2015-07-01

    We propose an offset-sparsity decomposition method for the enhancement of a color microscopic image of a stained specimen. The method decomposes vectorized spectral images into offset terms and sparse terms. A sparse term represents an enhanced image, and an offset term represents a "shadow." The related optimization problem is solved by computational improvement of the accelerated proximal gradient method used initially to solve the related rank-sparsity decomposition problem. Removal of an image-adapted color offset yields an enhanced image with improved colorimetric differences among the histological structures. This is verified by a no-reference colorfulness measure estimated from 35 specimens of the human liver, 1 specimen of the mouse liver stained with hematoxylin and eosin, 6 specimens of the mouse liver stained with Sudan III, and 3 specimens of the human liver stained with the anti-CD34 monoclonal antibody. The colorimetric difference improves on average by 43.86% with a 99% confidence interval (CI) of [35.35%, 51.62%]. Furthermore, according to the mean opinion score, estimated on the basis of the evaluations of five pathologists, images enhanced by the proposed method exhibit an average quality improvement of 16.60% with a 99% CI of [10.46%, 22.73%].

  5. Real-time color image fusion for infrared and low-light-level cameras

    NASA Astrophysics Data System (ADS)

    Zhang, Junju; Han, Yiyong; Chang, Benkang; Yuan, Yihui; Qian, Yunsheng; Qiu, Yafeng

    2009-07-01

    A real-time color image fusion system is presented for an infrared thermal camera and a low-light-level camera, providing more complete spectral image information. A statistical transform method based on the YCrCb color model transfers the first-order statistics of the color distribution of a representative natural-color daytime reference image to the false-color dual-band images. This mapping is usually performed in a perceptually decorrelated color space. The colors in the resulting colorized dual-band images closely resemble the colors in the daytime color reference image. Also, object colors remain invariant under panning operations and are independent of the scene content. Preliminary field trials demonstrate the potential of these systems for applications such as surveillance, security and target detection.
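
The first-order statistics transfer mentioned above amounts to a per-channel mean/standard-deviation match. A minimal sketch follows; plain Python lists stand in for image planes, and the decorrelated color space and YCrCb details of the actual system are omitted.

```python
def transfer_statistics(src, ref):
    """Shift and scale src so its mean and standard deviation match
    ref (one channel). In the fusion system described above this is
    applied per channel of the false-color dual-band image."""
    def stats(xs):
        m = sum(xs) / len(xs)
        var = sum((x - m) ** 2 for x in xs) / len(xs)
        return m, var ** 0.5
    ms, ss = stats(src)
    mr, sr = stats(ref)
    scale = sr / ss if ss else 1.0
    return [(x - ms) * scale + mr for x in src]

# A dark, low-contrast false-color band pulled toward a daytime
# reference channel (brighter, higher contrast). Values are illustrative.
band = [10, 12, 14, 16, 18]
reference = [100, 120, 140, 160, 180]
out = transfer_statistics(band, reference)
```

After the transfer, the band's mean and spread match the reference channel's, which is what makes the colorized output resemble the daytime imagery.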

  6. Spatial arrangement of color filter array for multispectral image acquisition

    NASA Astrophysics Data System (ADS)

    Shrestha, Raju; Hardeberg, Jon Y.; Khan, Rahat

    2011-03-01

    In the past few years there has been a significant volume of research carried out in the field of multispectral image acquisition. The focus of most of this work has been on multispectral image acquisition systems that usually require multiple subsequent shots (e.g. systems based on filter wheels, liquid crystal tunable filters, or active lighting). Recently, an alternative approach for one-shot multispectral image acquisition has been proposed, based on an extension of the color filter array (CFA) standard to produce more than three channels. We can thus introduce the concept of the multispectral color filter array (MCFA). But this field has not been much explored; in particular, little attention has been given to systems that focus on reconstructing scene spectral reflectance. In this paper, we have explored how the spatial arrangement of a multispectral color filter array affects acquisition accuracy, constructing MCFAs of different sizes. We have simulated acquisitions of several spectral scenes using different numbers of filters/channels, and compared the results with those obtained by the conventional regular MCFA arrangement, evaluating the precision of the reconstructed scene spectral reflectance in terms of spectral RMS error and colorimetric ΔE*ab color differences. It has been found that the precision and quality of the reconstructed images are significantly influenced by the spatial arrangement of the MCFA, and the effect becomes more prominent as the number of channels increases. We believe that MCFA-based systems can be a viable alternative for affordable acquisition of multispectral color images, in particular for applications where spatial resolution can be traded off for spectral resolution. We have shown that the spatial arrangement of the array is an important design issue.
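
One of the two evaluation metrics cited, the spectral RMS error between a reconstructed and a measured reflectance curve, is straightforward to compute. The curves below are made-up sample values, not data from the paper.

```python
from math import sqrt

def spectral_rms(estimate, true_curve):
    """Root-mean-square error between two reflectance curves sampled
    at the same wavelengths (values in [0, 1])."""
    n = len(true_curve)
    return sqrt(sum((e - t) ** 2 for e, t in zip(estimate, true_curve)) / n)

# Reflectance sampled at four wavelengths (illustrative values).
true_curve = [0.20, 0.35, 0.50, 0.40]
estimate   = [0.22, 0.33, 0.52, 0.38]
err = spectral_rms(estimate, true_curve)
```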

  7. Restoration of color images by multichannel Kalman filtering

    NASA Technical Reports Server (NTRS)

    Galatsanos, Nikolas P.; Chin, Roland T.

    1991-01-01

    A Kalman filter for optimal restoration of multichannel images is presented. This filter is derived using a multichannel semicausal image model that includes between-channel degradation. Both stationary and nonstationary image models are developed. The filter is implemented in the Fourier domain, and computation is reduced from O(Λ³N³M⁴) to O(Λ³N³M²) for an M x M N-channel image with degradation length Λ. Color (red, green, and blue (RGB)) images are used as examples of multichannel images, and restoration in the RGB and YIQ domains is investigated. Simulations are presented in which the effectiveness of this filter is tested for different types of degradation and different image model estimates.

  8. Block adaptive rate controlled image data compression

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Hilbert, E.; Lee, J.-J.; Schlutsmeyer, A.

    1979-01-01

    A block adaptive rate controlled (BARC) image data compression algorithm is described. It is noted that in the algorithm's principal rate controlled mode, image lines can be coded at selected rates by combining practical universal noiseless coding techniques with block adaptive adjustments in linear quantization. Compression of any source data at chosen rates of 3.0 bits/sample and above can be expected to yield visual image quality with imperceptible degradation. Exact reconstruction will be obtained if the one-dimensional difference entropy is below the selected compression rate. It is noted that the compressor can also be operated as a floating rate noiseless coder by simply not altering the input data quantization. Here, the universal noiseless coder ensures that the code rate is always close to the entropy. Application of BARC image data compression to the Galileo orbiter mission of Jupiter is considered.
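
The noiseless-coding condition stated above (exact reconstruction is possible when the one-dimensional difference entropy is below the selected rate) can be checked directly on an image line. This is a generic entropy computation, not the BARC coder itself; the sample line is illustrative.

```python
from collections import Counter
from math import log2

def difference_entropy(line):
    """Entropy in bits/sample of the first differences along an image
    line. Per the BARC description, lossless coding at a chosen rate
    is achievable when this entropy falls below that rate."""
    diffs = [b - a for a, b in zip(line, line[1:])]
    counts = Counter(diffs)
    n = len(diffs)
    return -sum(c / n * log2(c / n) for c in counts.values())

# A smooth line: differences concentrate on a few values, so the
# entropy is low and far below the 3.0 bits/sample rate cited above.
smooth = [10, 11, 12, 12, 13, 14, 14, 15]
h = difference_entropy(smooth)
```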

  9. Venus - False Color Image of Alpha Regio

    NASA Technical Reports Server (NTRS)

    1991-01-01

    This Magellan radar image shows Alpha Regio, a topographic upland approximately 1,300 kilometers (806 miles) across, centered at 25 degrees south latitude, 4 degrees east longitude. In 1963 Alpha Regio was the first feature on Venus to be identified from Earth-based radar. The radar-bright area of Alpha Regio is characterized by multiple sets of intersecting trends of structural features such as ridges, troughs and flat-floored fault valleys that together form a polygonal outline. Circular to oblong dark patches within the complex terrain are local topographic lows that are filled with smooth volcanic lava. Complex ridged terrains such as Alpha, formerly called 'tessera' in the Soviet Venera 15 and 16 radar missions and the Arecibo radar data, appear to be widespread and common surface expressions of Venusian tectonic processes. Directly south of the complex ridged terrain is a large ovoid-shaped feature named Eve. The radar-bright spot located centrally within Eve marks the location of the prime meridian of Venus. Magellan radar data reveal that relatively young lava flows emanate from Eve and extend into the southern margin of the ridged terrain at Alpha. The mosaic was produced by Eric de Jong and Myche McAuley in the JPL Multimission Image Processing Laboratory.

  10. Adaptive sigmoid function bihistogram equalization for image contrast enhancement

    NASA Astrophysics Data System (ADS)

    Arriaga-Garcia, Edgar F.; Sanchez-Yanez, Raul E.; Ruiz-Pinales, Jose; Garcia-Hernandez, Ma. de Guadalupe

    2015-09-01

    Contrast enhancement plays a key role in a wide range of applications including consumer electronic applications, such as video surveillance, digital cameras, and televisions. The main goal of contrast enhancement is to increase the quality of images. However, most state-of-the-art methods induce different types of distortion such as intensity shift, wash-out, noise, intensity burn-out, and intensity saturation. In addition, in consumer electronics, simple and fast methods are required in order to be implemented in real time. A bihistogram equalization method based on adaptive sigmoid functions is proposed. It consists of splitting the image histogram into two parts that are equalized independently by using adaptive sigmoid functions. In order to preserve the mean brightness of the input image, the parameter of the sigmoid functions is chosen to minimize the absolute mean brightness metric. Experiments on the Berkeley database have shown that the proposed method improves the quality of images and preserves their mean brightness. An application to improve the colorfulness of images is also presented.
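
The core of bihistogram equalization (splitting the histogram at the image mean and equalizing each part into its own output range) can be sketched as follows. The paper's adaptive sigmoid weighting and brightness-metric minimization are omitted, and the pixel values are illustrative.

```python
def equalize(levels, lo, hi, nbins=256):
    """Histogram-equalize integer `levels` into the range [lo, hi]."""
    n = len(levels)
    hist = [0] * nbins
    for v in levels:
        hist[v] += 1
    cdf, acc = [0] * nbins, 0
    for idx, c in enumerate(hist):
        acc += c
        cdf[idx] = acc
    return [lo + (hi - lo) * cdf[v] / n for v in levels]

def bihistogram_equalize(levels):
    """Split the histogram at the mean and equalize each part into its
    own sub-range; keeping the split point at the mean is what helps
    preserve the input brightness."""
    mean = sum(levels) / len(levels)
    low = [v for v in levels if v <= mean]
    high = [v for v in levels if v > mean]
    lo_map = dict(zip(low, equalize(low, 0, mean)))
    hi_map = dict(zip(high, equalize(high, mean, 255)))
    return [lo_map[v] if v <= mean else hi_map[v] for v in levels]

# A bimodal set of gray levels (dark cluster + bright cluster).
pixels = [40, 42, 45, 50, 200, 205, 210, 220]
out = bihistogram_equalize(pixels)
```

Each half stretches over its own sub-range, so dark and bright structures both gain contrast without one washing out the other.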

  11. Demonstrating Hormonal Control of Vertebrate Adaptive Color Changes in Vitro.

    ERIC Educational Resources Information Center

    Hadley, Mac E.; Younggren, Newell A.

    1980-01-01

    Presented is a short discussion of factors causing color changes in animals. Also described is an activity which may be used to demonstrate the response of amphibian skin to a melanophore stimulating hormone in high school or college biology classes. (PEB)

  12. 78 FR 18611 - Summit on Color in Medical Imaging; Cosponsored Public Workshop; Request for Comments

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-27

    ... HUMAN SERVICES Food and Drug Administration Summit on Color in Medical Imaging; Cosponsored Public... International Color Consortium (ICC) are announcing the following public workshop entitled ``Summit on Color in... Approaches for Dealing with Color in Medical Images.'' The purpose of the workshop is to bring together...

  13. Multi-clues image retrieval based on improved color invariants

    NASA Astrophysics Data System (ADS)

    Liu, Liu; Li, Jian-Xun

    2012-05-01

    At present, image retrieval has made great progress in indexing efficiency and memory usage, which mainly benefits from the use of text retrieval technology, such as the bag-of-features (BOF) model and the inverted-file structure. Because robust local feature invariants are selected to build the BOF, its retrieval precision is high, especially when it is applied to a large-scale database. However, these local feature invariants mainly capture the geometric variation of the objects in the images, so the color information of the objects goes unused. With the development of information technology and the Internet, the majority of retrieval targets are color images. Retrieval performance can therefore be further improved through proper use of the color information. We propose an improved method based on an analysis of the flaw in the shadow-shading quasi-invariant. The response and performance of the shadow-shading quasi-invariant at object edges under varying lighting are enhanced. The color descriptors of the invariant regions are extracted and integrated into the BOF based on the local feature. The robustness of the algorithm and the improvement in performance are verified in the final experiments.

  14. Role of color and spatial resolution in digital imaging colposcopy

    NASA Astrophysics Data System (ADS)

    Craine, Eric R.; Engel, John R.; Craine, Brian L.

    1990-07-01

    We have developed a practical digital imaging colposcope for use in research on early detection of cancerous and pre-cancerous tissue in the cervix. Several copies of the system have now been used in a variety of clinical and research environments. Two issues of considerable interest which emerged early in our work involved the roles of color and spatial resolution as they applied to digital imaging colposcopy. In each instance these qualities potentially have a significant impact on the diagnostic efficacy of the system. In order to evaluate the role of these parameters we devised and conducted a receiver operating characteristic (ROC) evaluation of the system. It is apparent from these tests that a spatial resolution of 512 x 480 pixels with 7 or 8 bits of contrast is adequate for the task. The more interesting result arises from the study of the use of color in these examinations; it appears that in general, contrary to the widely held perception of the physicians involved, color provides the clinician with little or no diagnostic information. Indeed, in some instances, access to color seemed to confuse the physician and resulted in an elevated rate of false positives. Results of the ROC tests are presented in this paper along with their implications for further development of this imaging modality.

  15. Adaptation and visual search in mammographic images.

    PubMed

    Kompaniez-Dunigan, Elysse; Abbey, Craig K; Boone, John M; Webster, Michael A

    2015-05-01

    Radiologists face the visually challenging task of detecting suspicious features within the complex and noisy backgrounds characteristic of medical images. We used a search task to examine whether the salience of target features in x-ray mammograms could be enhanced by prior adaptation to the spatial structure of the images. The observers were not radiologists, and thus had no diagnostic training with the images. The stimuli were randomly selected sections from normal mammograms previously classified with BIRADS Density scores of "fatty" versus "dense," corresponding to differences in the relative quantities of fat versus fibroglandular tissue. These categories reflect conspicuous differences in visual texture, with dense tissue being more likely to obscure lesion detection. The targets were simulated masses corresponding to bright Gaussian spots, superimposed by adding the luminance to the background. A single target was randomly added to each image, with contrast varied over five levels so that they varied from difficult to easy to detect. Reaction times were measured for detecting the target location, before or after adapting to a gray field or to random sequences of a different set of dense or fatty images. Observers were faster at detecting the targets in either dense or fatty images after adapting to the specific background type (dense or fatty) that they were searching within. Thus, the adaptation led to a facilitation of search performance that was selective for the background texture. Our results are consistent with the hypothesis that adaptation allows observers to more effectively suppress the specific structure of the background, thereby heightening visual salience and search efficiency.

  16. Multifocus color image fusion based on quaternion curvelet transform.

    PubMed

    Guo, Liqiang; Dai, Ming; Zhu, Ming

    2012-08-13

    Multifocus color image fusion is an active research area in image processing, and many fusion algorithms have been developed. However, the existing techniques can hardly deal with the problem of image blur. This study presents a novel fusion approach that integrates the quaternion with the traditional curvelet transform to overcome the above disadvantage. The proposed method uses a multiresolution analysis procedure based on the quaternion curvelet transform. Experimental results show that the proposed method is promising, and it significantly improves the fusion quality compared to the existing fusion methods. PMID:23038524

  17. Research on adaptive segmentation and activity classification method of filamentous fungi image in microbe fermentation

    NASA Astrophysics Data System (ADS)

    Cai, Xiaochun; Hu, Yihua; Wang, Peng; Sun, Dujuan; Hu, Guilan

    2009-10-01

    The paper presents an adaptive segmentation and activity classification method for filamentous fungi images. Firstly, an adaptive structuring element (SE) construction algorithm is proposed for image background suppression. Based on the watershed transform method, color-labeled segmentation of the fungi image is performed. Secondly, the feature space of the fungal elements is described and the feature set for fungal hyphae activity classification is extracted. The growth rate of the fungal hyphae is evaluated using an SVM classifier. Experimental results demonstrate that the proposed method is effective for filamentous fungi image processing.

  18. Multiple-spark photography with image separation by color coding.

    PubMed

    Kent, J C

    1969-05-01

    The application of color-coded image separation to multiple-spark photography provides a simple method for recording shadow or schlieren photographs with arbitrary magnification, extremely high frame rate, short exposure time per frame, and no parallax. A color separation, multiple-spark camera is described which produces a sequence of three frames at 5 x magnification with a maximum frame rate of 10(6)/sec and an exposure time per frame of about 0.3 microsec. Standard 10.16-cm x 12.7-cm (4-in. x 5-in.) color film is used. This camera has been useful for observing liquid atomization processes and spray motion, since it enables direct measurement of droplet size, location, velocity, and deceleration. PMID:20072366

  19. Need for constraints in component-separable color image processing

    NASA Astrophysics Data System (ADS)

    Thomas, Bruce A.

    1995-03-01

    The component-wise processing of color image data is performed in a variety of applications. These operations are typically carried out using lookup table (LUT) based processing techniques, making them well suited for digital implementation. A general exposition of this type of processing is provided, indicating its remarkable utility along with some of the practical issues that can arise. These motivate a call for the use of constraints on the types of operators that are used during the construction of LUTs. Several particularly useful classes of constrained operators are identified. These lead to an object-oriented approach generalized to operate in a variety of color spaces. The power of this type of framework is then demonstrated via several novel applications in the HSL color space.
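
LUT-based component-wise processing can be sketched with a simple per-channel table. Gamma correction stands in here for an arbitrary operator, and the monotonicity check illustrates the kind of operator constraint argued for above; the helper names and values are illustrative, not the paper's framework.

```python
def build_lut(op, nbins=256):
    """Precompute a 256-entry lookup table for a per-component operator,
    clamping to the valid 8-bit range."""
    return [min(255, max(0, round(op(i)))) for i in range(nbins)]

def apply_lut(pixels, lut):
    """Component-wise processing: every channel value is replaced by a
    table lookup, the digital implementation described above."""
    return [tuple(lut[c] for c in px) for px in pixels]

# Example operator: gamma correction with gamma = 2.2. A constraint
# such as monotonicity (order preservation) is easy to verify directly
# on the finished table.
gamma_lut = build_lut(lambda v: 255.0 * (v / 255.0) ** (1 / 2.2))
assert all(a <= b for a, b in zip(gamma_lut, gamma_lut[1:]))  # monotone

out = apply_lut([(0, 128, 255)], gamma_lut)
```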

  20. Autonomous ship classification using synthetic and real color images

    NASA Astrophysics Data System (ADS)

    Kumlu, Deniz; Jenkins, B. Keith

    2013-03-01

    This work classifies color images of ships acquired using cameras mounted on ships and in harbors. Our data-sets contain 9 different types of ship with 18 different perspectives for the training set, development set and testing set. The training data-set contains modeled synthetic images; the development and testing data-sets contain real images. The database of real images was gathered from the internet, and 3D models for the synthetic images were imported from Google 3D Warehouse. A key goal in this work is to use synthetic images to increase overall classification accuracy. We present a novel approach for autonomous segmentation and feature extraction for this problem. A support vector machine is used for multi-class classification. This work reports three experimental results for the multi-class ship classification problem. The first experiment trains on the synthetic image data-set and tests on a real image data-set; the obtained accuracy is 87.8%. The second experiment trains on a real image data-set and tests on a separate real image data-set; the obtained accuracy is 87.8%. The last experiment trains on the combined real + synthetic data-set and tests on a separate real image data-set; the obtained accuracy is 93.3%.

  1. Color calibration of a CMOS digital camera for mobile imaging

    NASA Astrophysics Data System (ADS)

    Eliasson, Henrik

    2010-01-01

    As white balance algorithms employed in mobile phone cameras become increasingly sophisticated by using, e.g., elaborate white-point estimation methods, a proper color calibration is necessary. Without such a calibration, the estimation of the light source for a given situation may go wrong, giving rise to large color errors. At the same time, the demands for efficiency in the production environment require the calibration to be as simple as possible. Thus it is important to find the correct balance between image quality and production efficiency requirements. The purpose of this work is to investigate camera color variations using a simple model where the sensor and IR filter are specified in detail. As input to the model, spectral data of the 24-color Macbeth Colorchecker was used. This data was combined with the spectral irradiance of mainly three different light sources: CIE A, D65 and F11. The sensor variations were determined from a very large population from which 6 corner samples were picked out for further analysis. Furthermore, a set of 100 IR filters were picked out and measured. The resulting images generated by the model were then analyzed in the CIELAB space and color errors were calculated using the ΔE94 metric. The results of the analysis show that the maximum deviations from the typical values are small enough to suggest that a white balance calibration is sufficient. Furthermore, it is also demonstrated that the color temperature dependence is small enough to justify the use of only one light source in a production environment.
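
The ΔE94 metric used in this analysis is the CIE 1994 color-difference formula. A minimal implementation with graphic-arts weights (kL = kC = kH = 1) follows; the Lab triples are illustrative, not the study's measurements.

```python
from math import sqrt

def delta_e94(lab1, lab2):
    """CIE Delta-E 1994 color difference, graphic-arts weighting
    (kL = kC = kH = 1)."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    dL = L1 - L2
    C1 = sqrt(a1 * a1 + b1 * b1)
    C2 = sqrt(a2 * a2 + b2 * b2)
    dC = C1 - C2
    da, db = a1 - a2, b1 - b2
    # Delta-H^2 derived from the Lab differences; clamp tiny negatives
    # caused by floating-point rounding.
    dH2 = max(0.0, da * da + db * db - dC * dC)
    SC = 1.0 + 0.045 * C1  # chroma weighting (S_L = 1)
    SH = 1.0 + 0.015 * C1  # hue weighting
    return sqrt(dL ** 2 + (dC / SC) ** 2 + dH2 / SH ** 2)

# Identical colors differ by zero; a pure lightness shift of 2 units
# yields Delta-E94 = 2 because S_L = 1.
assert delta_e94((50, 10, 10), (50, 10, 10)) == 0.0
assert abs(delta_e94((52, 10, 10), (50, 10, 10)) - 2.0) < 1e-9
```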

  2. Microscale halftone color image analysis: perspective of spectral color prediction modeling

    NASA Astrophysics Data System (ADS)

    Rahaman, G. M. Atiqur; Norberg, Ole; Edström, Per

    2014-01-01

    A method has been proposed whereby a k-means clustering technique is applied to segment a microscale single-color halftone image into three components: solid ink, ink/paper mixed area, and unprinted paper. The method has been evaluated using impact (offset) and non-impact (electrophotography) based single-color prints halftoned by amplitude modulation (AM) and frequency modulation (FM) techniques. The print samples also included a range of paper substrates. The colors of the segmented regions have been analyzed in CIELAB color space to reveal the variations, in particular those present in the mixed regions. The statistics of the intensity distribution in the segmented areas have been utilized to derive expressions that can be used to calculate simple thresholds. The segmented results have also been employed to study dot gain in comparison with the traditional estimation technique using the Murray-Davies formula. The performance of halftone reflectance prediction by the spectral Murray-Davies model has been reported using estimated and measured parameters. Finally, a general idea has been proposed to expand the classical Murray-Davies model based on experimental observations. Hence, the present study primarily presents the outcome of experimental efforts to characterize halftone print-media interactions with respect to color prediction models. Currently, most regression-based color prediction models rely on mathematical optimization to estimate the parameters using the measured average reflectance of an area large compared to the dot size. While this general approach has been accepted as a useful tool, experimental investigations can enhance understanding of the physical processes and facilitate exploration of new modeling strategies. Furthermore, the reported findings may help reduce the number of samples that must be printed and measured in the process of multichannel printer characterization and calibration.
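
The Murray-Davies relation referred to above predicts halftone reflectance as an area-weighted mix of solid-ink and bare-paper reflectance; inverting it gives the effective (optical) dot area used in dot-gain estimation. The reflectance numbers below are illustrative, not measurements from the study.

```python
def murray_davies_reflectance(a, r_ink, r_paper):
    """Classical Murray-Davies prediction: halftone reflectance is the
    area-weighted mix of solid-ink and unprinted-paper reflectance
    (applied per wavelength in the spectral variant)."""
    return a * r_ink + (1.0 - a) * r_paper

def effective_dot_area(r_halftone, r_ink, r_paper):
    """Invert the model to estimate effective dot area from a measured
    halftone reflectance; the excess over nominal coverage is dot gain."""
    return (r_paper - r_halftone) / (r_paper - r_ink)

# A nominal 50% tone measures darker than the model predicts for
# a = 0.5, implying positive (optical + mechanical) dot gain.
r_paper, r_ink = 0.85, 0.05
measured = 0.38
a_eff = effective_dot_area(measured, r_ink, r_paper)
dot_gain = a_eff - 0.50
assert abs(murray_davies_reflectance(a_eff, r_ink, r_paper) - measured) < 1e-9
```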

  3. Color image classification systems for poultry viscera inspection

    NASA Astrophysics Data System (ADS)

    Chao, Kevin; Chen, Yud-Ren; Early, Howard; Park, Bosoon

    1999-01-01

    A neuro-fuzzy based image classification system that utilizes color-imaging features of poultry viscera in the spectral and spatial domains was developed in this study. Poultry viscera of liver and heart were separated into four classes: normal, airsacculitis, cadaver, and septicemia. Color images for the classified poultry viscera were collected in the poultry processing plant. These images in RGB color space were segmented and statistical analysis was performed for feature selection. The neuro-fuzzy system utilizes hybrid paradigms of a fuzzy inference system and neural networks to enhance the robustness of the classification processes. The results showed that the accuracy for separation of normal from abnormal livers was 87.5 to 92.5% when two classes of validation data were used. For two-class classification of chicken hearts, the accuracies were 92.5 to 97.5%. When neuro-fuzzy models were employed to separate chicken livers into three classes (normal, airsacculitis, and cadaver), the accuracy was 88.3% for the training data and 83.3% for the validation data. Combining features of chicken liver and heart, a generalized neuro-fuzzy model was designed to classify poultry viscera into four classes (normal, airsacculitis, cadaver, and septicemia). A classification accuracy of 86.3% was achieved for the training data and 82.5% for the validation data.

  4. Colored coded-apertures for spectral image unmixing

    NASA Astrophysics Data System (ADS)

    Vargas, Hector M.; Arguello Fuentes, Henry

    2015-10-01

    Hyperspectral remote sensing technology provides detailed spectral information from every pixel in an image. Due to the low spatial resolution of hyperspectral image sensors, and the presence of multiple materials in a scene, each pixel can contain more than one spectral signature. Therefore, endmember extraction is used to determine the pure spectral signature of the mixed materials and its corresponding abundance map in a remotely sensed hyperspectral scene. Advanced endmember extraction algorithms have been proposed to solve this linear problem, called spectral unmixing. However, such techniques require the acquisition of the complete hyperspectral data cube to perform the unmixing procedure. Researchers have shown that using colored coded-apertures improves the quality of reconstruction in compressive spectral imaging (CSI) systems under compressive sensing (CS) theory. This work aims at developing a compressive supervised spectral unmixing scheme to estimate the endmembers and the abundance map from compressive measurements. The compressive measurements are acquired by using colored coded-apertures in a compressive spectral imaging system. Then a numerical procedure estimates the sparse vector representation in a 3D dictionary by solving a constrained sparse optimization problem. The 3D dictionary is formed by a 2D wavelet basis and a known endmember spectral library, where the wavelet basis is used to exploit the spatial information. The colored coded-apertures are designed such that the sensing matrix satisfies the restricted isometry property with high probability. Simulations show that the proposed scheme attains comparable results to the full data cube unmixing technique, but using fewer measurements.

  5. Improved nonlocal fuzzy color segmentation-based color reconstruction hybrid approach for white-RGB imaging systems

    NASA Astrophysics Data System (ADS)

    Wang, Chang-shuai; Chong, Jong-wha

    2014-11-01

    Conventional local-based color reconstruction for a white-RGB (WRGB) imaging system relies excessively on reference pixels and is therefore sensitive to noise interference. To address this issue, we propose an improved nonlocal fuzzy color segmentation-based color reconstruction hybrid approach for a WRGB imaging system. Unlike local-based approaches, we attempt to reproduce color information based on the statistical color distribution of the raw sensor data. According to the distribution analysis, the color distribution (histogram and cumulative distribution function) is close to that of the full-resolution image. However, naive histogram matching gives rise to zipper artifacts, which result from the multiple combinations of red, green, and blue values corresponding to one white value. Therefore, a hybrid color segmentation is proposed to address this issue. The first step is a brief sorting-based color segmentation in the hue channel. Fuzzy-based color segmentation is then utilized to acquire more subregions in the proposed saturation space. Finally, fast histogram matching is carried out to obtain the full-color information for the white pixel in each region. Compared with state-of-the-art approaches, the proposed nonlocal hybrid approach significantly reduces the influence of noise, achieving higher peak signal-to-noise ratios. Furthermore, owing to the hybrid color segmentation, zipper artifacts are successfully avoided.
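    The histogram-matching step can be sketched for a single channel via CDF mapping (a generic implementation of histogram matching, not the paper's segmentation-aware variant):

```python
import numpy as np

def match_histogram(source, reference):
    # Map source values so that their empirical CDF matches the
    # reference's (single-channel histogram matching).
    _, s_idx, s_counts = np.unique(source.ravel(),
                                   return_inverse=True,
                                   return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    return mapped[s_idx].reshape(source.shape)

out = match_histogram(np.array([0.0, 1.0, 2.0, 3.0]),
                      np.array([10.0, 20.0, 30.0, 40.0]))
# out is [10., 20., 30., 40.]: the source now follows the reference CDF
```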

  6. Digital image fusion systems: color imaging and low-light targets

    NASA Astrophysics Data System (ADS)

    Estrera, Joseph P.

    2009-05-01

    This paper presents digital image fusion (enhanced A+B) systems in color imaging and low-light target applications. The paper first discusses the digital sensors utilized in the noted image fusion applications: a 1900x1086 (high-definition format) CMOS imager coupled to a Generation III image intensifier for the visible/near-infrared (NIR) digital sensor, and a 320x240 or 640x480 uncooled microbolometer thermal imager for the long-wavelength infrared (LWIR) digital sensor. Performance metrics for these digital imaging sensors are presented. The digital image fusion (enhanced A+B) process is presented in the context of early fused night vision systems such as the digital image fused system (DIFS) and the digital enhanced night vision goggle, and later the long-range digitally fused night vision sighting system. Next, the paper discusses the effects of user display color in a dual-color digital image fusion system. Dual-color image fusion schemes such as Green/Red, Cyan/Yellow, and White/Blue for image intensifier and thermal infrared sensor color representation, respectively, are discussed. Finally, the paper presents digitally fused imagery and image analysis of long-distance targets in low light from these digitally fused systems. The result of this image analysis with enhanced A+B digital image fusion systems is that maximum contrast and spatial resolution are achieved in a digital fusion mode as compared to individual sensor modalities in low-light, long-distance imaging applications. Paper has been cleared by DoD/OSR for Public Release under Ref: 08-S-2183 on August 8, 2008.
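    At its core, the A+B idea is a weighted additive combination of the co-registered intensified and thermal frames, optionally presented in a dual-color display scheme. A minimal sketch (the weights, normalization, and channel assignment are illustrative assumptions, not the fielded algorithm):

```python
import numpy as np

def fuse_a_plus_b(intensified, thermal, w_a=0.5, w_b=0.5):
    # Additive (A+B) fusion of two co-registered frames scaled to [0, 1].
    return np.clip(w_a * intensified + w_b * thermal, 0.0, 1.0)

def dual_color_display(intensified, thermal):
    # Green/Red dual-color scheme: image intensifier -> green, LWIR -> red.
    h, w = intensified.shape
    rgb = np.zeros((h, w, 3))
    rgb[..., 1] = intensified
    rgb[..., 0] = thermal
    return rgb

a = np.array([[0.2, 0.4]])         # toy intensified frame
b = np.array([[0.6, 0.8]])         # toy thermal frame
fused = fuse_a_plus_b(a, b)        # [[0.4, 0.6]]
display = dual_color_display(a, b)
```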

  8. Client-side Medical Image Colorization in a Collaborative Environment.

    PubMed

    Virag, Ioan; Stoicu-Tivadar, Lăcrămioara; Crişan-Vida, Mihaela

    2015-01-01

    The paper presents an application related to collaborative medicine using a browser-based medical visualization system, with focus on the medical image colorization process and the underlying open-source web development technologies involved. Browser-based systems allow physicians to share medical data with their remotely located counterparts or medical students, assisting them during patient diagnosis, treatment monitoring, surgery planning, or for educational purposes. This approach brings forth the advantage of ubiquity: the system can be accessed from any device in order to process the images, assuring independence from any specific proprietary operating system. The current work starts with processing of DICOM (Digital Imaging and Communications in Medicine) files and ends with the rendering of the resulting bitmap images on an HTML5 (fifth revision of the HyperText Markup Language) canvas element. The application improves image visualization by emphasizing different tissue densities. PMID:25991287
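    The core of such colorization is a window/level mapping of raw DICOM pixel values followed by a palette lookup. A minimal sketch of those two steps (the ramp palette is an illustrative choice, not the application's actual color map; the real system performs this client-side on an HTML5 canvas):

```python
import numpy as np

def window_level(pixels, center, width):
    # DICOM-style window/level: map raw values into [0, 1] for display.
    lo = center - width / 2.0
    hi = center + width / 2.0
    return np.clip((pixels - lo) / (hi - lo), 0.0, 1.0)

def pseudo_color(gray):
    # Toy blue-to-red ramp emphasizing different tissue densities.
    rgb = np.stack([gray, np.zeros_like(gray), 1.0 - gray], axis=-1)
    return (rgb * 255).astype(np.uint8)

raw = np.array([[0.0, 50.0, 100.0]])                 # toy raw values
norm = window_level(raw, center=50, width=100)       # [[0.0, 0.5, 1.0]]
colored = pseudo_color(norm)                         # uint8 RGB image
```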

  9. False color image of Safsaf Oasis in southern Egypt

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a false color image of the uninhabited Safsaf Oasis in southern Egypt near the Egypt/Sudan border. It was produced from data obtained from the L-band and C-band radars that are part of the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar onboard the Shuttle Endeavour on April 9, 1994. The image is centered at 22 degrees North latitude, 29 degrees East longitude. It shows detailed structures of bedrock, and the dark blue sinuous lines are braided channels that occupy part of an old broad river valley. Virtually none of the features visible in this radar composite image can be seen from the ground or in photographs or satellite images such as Landsat. The Jet Propulsion Laboratory alternative photo number is P-43920.

  10. Synthesis of color filter array pattern in digital images

    NASA Astrophysics Data System (ADS)

    Kirchner, Matthias; Böhme, Rainer

    2009-02-01

    We propose a method to synthetically create or restore a typical color filter array (CFA) pattern in digital images. This can be useful, inter alia, to conceal traces of manipulation from forensic techniques that analyze the CFA structure of images. For continuous signals, our solution maintains optimal image quality under a quadratic cost function, and it can be computed efficiently. Our general approach allows us to derive even more efficient approximate solutions that achieve linear complexity in the number of pixels. The effectiveness of CFA synthesis as a tamper-hiding technique and its superior image quality are backed with experimental evidence on large image sets and against state-of-the-art forensic techniques. This exposition is confined to the most relevant 'Bayer' grid, but the method can be generalized to other layouts as well.
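    The basic operation, re-imposing a CFA sampling structure on a full-color image, can be sketched as follows (masking only; the paper's method additionally chooses the retained samples to minimize a quadratic quality cost):

```python
import numpy as np

def bayer_masks(h, w):
    # Boolean sampling masks for an RGGB Bayer grid.
    r = np.zeros((h, w), dtype=bool); r[0::2, 0::2] = True
    b = np.zeros((h, w), dtype=bool); b[1::2, 1::2] = True
    g = ~(r | b)
    return r, g, b

def synthesize_cfa(rgb):
    # Keep each channel only where the Bayer grid samples it,
    # producing a single-plane mosaic as a real sensor would.
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    for channel, mask in zip(range(3), bayer_masks(h, w)):
        mosaic[mask] = rgb[..., channel][mask]
    return mosaic
```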

  11. Adaptive evolution of color vision genes in higher primates.

    PubMed

    Shyue, S K; Hewett-Emmett, D; Sperling, H G; Hunt, D M; Bowmaker, J K; Mollon, J D; Li, W H

    1995-09-01

    The intron 4 sequences of the three polymorphic alleles at the X-linked color photo-pigment locus in the squirrel monkey and the marmoset reveal that the alleles in each species are exceptionally divergent. The data further suggest either that each triallelic system has arisen independently in these two New World monkey lineages, or that in each species at least seven deletions and insertions (14 in the two species) in intron 4 have been transferred and homogenized among the alleles by gene conversion or recombination. In either case, the alleles in each species apparently have persisted more than 5 million years and probably have been maintained by overdominant selection.

  12. Quaternion structural similarity: a new quality index for color images.

    PubMed

    Kolaman, Amir; Yadid-Pecht, Orly

    2012-04-01

    One of the most important issues for researchers developing image processing algorithms is image quality. Methodical quality evaluation, by showing images to several human observers, is slow, expensive, and highly subjective. On the other hand, a visual quality matrix (VQM) is a fast, cheap, and objective tool for evaluating image quality. Although most VQMs are good at predicting the quality of an image degraded by a single degradation, they perform poorly for a combination of two degradations. An example of such degradation is the color crosstalk (CTK) effect, which introduces blur with desaturation. CTK is expected to become a bigger issue in image quality as the industry moves toward smaller sensors. In this paper, we develop a VQM that better evaluates the quality of an image degraded by a combined blur/desaturation degradation and performs as well as other VQMs on single degradations such as blur, compression, and noise. We show why standard scalar techniques are insufficient to measure a combined blur/desaturation degradation and explain why a vectorial approach is better suited. We introduce quaternion image processing (QIP), which is a true vectorial approach and has many uses in the fields of physics and engineering. Our new VQM is a vectorial expansion of structural similarity using QIP, which gave it its name: Quaternion Structural SIMilarity (QSSIM). We built a new database of a combined blur/desaturation degradation and conducted a quality survey with human subjects. An extensive comparison between QSSIM and other VQMs on several image quality databases, including our new database, shows the superiority of this new approach in predicting the visual quality of color images.
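    The QIP representation treats each RGB pixel as a pure quaternion, so a color pixel can be manipulated as one algebraic object rather than three separate scalars. A minimal sketch of that representation (just the encoding and the Hamilton product, not the QSSIM index itself):

```python
import numpy as np

def rgb_to_quaternion(rgb):
    # Encode an RGB pixel as a pure quaternion (0, r, g, b),
    # the basic representation used in quaternion image processing.
    r, g, b = rgb
    return np.array([0.0, r, g, b])

def quaternion_multiply(p, q):
    # Hamilton product of two quaternions stored as [w, x, y, z].
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

# Sanity check of the algebra: i * j = k.
i = np.array([0.0, 1.0, 0.0, 0.0])
j = np.array([0.0, 0.0, 1.0, 0.0])
k = quaternion_multiply(i, j)      # [0, 0, 0, 1]
```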

  13. Approach for reconstructing anisoplanatic adaptive optics images.

    PubMed

    Aubailly, Mathieu; Roggemann, Michael C; Schulz, Timothy J

    2007-08-20

    Atmospheric turbulence corrupts astronomical images formed by ground-based telescopes. Adaptive optics systems allow the effects of turbulence-induced aberrations to be reduced for a narrow field of view corresponding approximately to the isoplanatic angle theta(0). For field angles larger than theta(0), the point spread function (PSF) gradually degrades as the field angle increases. We present a technique to estimate the PSF of an adaptive optics telescope as a function of the field angle, and use this information in a space-varying image reconstruction technique. Simulated anisoplanatic intensity images of a star field are reconstructed by means of a block-processing method using the predicted local PSF. Two methods for image recovery are used: matrix inversion with Tikhonov regularization, and the Lucy-Richardson algorithm. Image reconstruction results obtained using the space-varying predicted PSF are compared to space-invariant deconvolution results obtained using the on-axis PSF. The anisoplanatic reconstruction technique using the predicted PSF provides a significant improvement in the mean squared error between the reconstructed image and the object compared to the deconvolution performed using the on-axis PSF. PMID:17712366
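    For reference, the Lucy-Richardson update used in the block-wise reconstruction has a compact FFT form. This is a minimal sketch assuming a PSF stored at full image size with its origin at index [0, 0] and circular boundary conditions; the paper applies the update per block with a locally predicted PSF:

```python
import numpy as np

def richardson_lucy(image, psf, iterations=20):
    # Multiplicative Lucy-Richardson update via FFT-based circular
    # convolution; `psf` has the same shape as `image`, origin at [0, 0].
    psf_ft = np.fft.fft2(psf)
    estimate = np.full(image.shape, image.mean())
    for _ in range(iterations):
        blurred = np.real(np.fft.ifft2(np.fft.fft2(estimate) * psf_ft))
        ratio = image / np.maximum(blurred, 1e-12)
        correction = np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(psf_ft)))
        estimate = estimate * correction
    return estimate
```

    With a delta-function PSF the update reproduces the observed image in one step, a quick sanity check of the implementation.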

  14. Block-based embedded color image and video coding

    NASA Astrophysics Data System (ADS)

    Nagaraj, Nithin; Pearlman, William A.; Islam, Asad

    2004-01-01

    The Set Partitioned Embedded bloCK coder (SPECK) has been found to perform comparably to the best-known still grayscale image coders such as EZW, SPIHT, and JPEG2000. In this paper, we first propose Color-SPECK (CSPECK), a natural extension of SPECK to handle color still images in the YUV 4:2:0 format. Extensions to other YUV formats are also possible. PSNR results indicate that CSPECK is among the best known color coders, while the perceptual quality of its reconstruction is superior to that of SPIHT and JPEG2000. We then propose a moving-picture-based coding system called Motion-SPECK, with CSPECK as the core algorithm in an intra-based setting. Specifically, we demonstrate two modes of operation of Motion-SPECK, namely the constant-rate mode, where every frame is coded at the same bit-rate, and the constant-distortion mode, where we ensure the same quality for each frame. Results on well-known CIF sequences indicate that Motion-SPECK performs comparably to Motion-JPEG2000, while the visual quality of the sequence is in general superior. Both CSPECK and Motion-SPECK automatically inherit all the desirable features of SPECK, such as embeddedness, low computational complexity, highly efficient performance, fast decoding, and low dynamic memory requirements. The intended applications of Motion-SPECK would be high-end and emerging video applications such as high-quality digital video recording systems, Internet video, and medical imaging.

  15. Beef quality parameters estimation using ultrasound and color images

    PubMed Central

    2015-01-01

    Background Beef quality measurement is a complex task with high economic impact. There is high interest in obtaining automatic quality parameter estimates in live cattle or post mortem. In this paper we set out to obtain beef quality estimates from the analysis of ultrasound (in vivo) and color images (post mortem), with the measurement of various parameters related to tenderness and amount of meat: rib eye area, percentage of intramuscular fat, and backfat thickness or subcutaneous fat. Proposal An algorithm based on curve evolution is implemented to calculate the rib eye area. The backfat thickness is estimated from the profile of distances between two curves that delimit the steak and the rib eye, previously detected. A model based on Support Vector Regression (SVR) is trained to estimate the intramuscular fat percentage. A series of features extracted from a region of interest, previously detected in both ultrasound and color images, was proposed. In all cases, a complete evaluation was performed with different databases including: color and ultrasound images acquired by a beef industry expert, intramuscular fat estimates obtained by an expert using commercial software, and chemical analysis. Conclusions The proposed algorithms show good results in calculating the rib eye area and the backfat thickness measure and profile. They are also promising in predicting the percentage of intramuscular fat. PMID:25734452
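    The backfat-thickness step reduces to a pointwise distance profile between the two detected curves. A minimal sketch, representing each curve by one y-coordinate per image column (the curve-evolution detection itself is omitted):

```python
import numpy as np

def backfat_profile(outer_curve, inner_curve):
    # Pointwise distance between the two detected boundaries,
    # plus its mean as a scalar thickness estimate.
    profile = np.abs(np.asarray(outer_curve, dtype=float)
                     - np.asarray(inner_curve, dtype=float))
    return profile, profile.mean()

# Toy curves: steak boundary at rows 10-12, rib eye boundary at row 7.
profile, thickness = backfat_profile([10, 11, 12], [7, 7, 7])
# profile is [3., 4., 5.], thickness is 4.0
```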

  16. Validation of tablet-based evaluation of color fundus images

    PubMed Central

    Christopher, Mark; Moga, Daniela C.; Russell, Stephen R.; Folk, James C.; Scheetz, Todd; Abràmoff, Michael D.

    2012-01-01

    Purpose To compare diabetic retinopathy (DR) referral recommendations made by viewing fundus images using a tablet computer to recommendations made using a standard desktop display. Methods A tablet computer (iPad) and a desktop PC with a high-definition color display were compared. For each platform, two retinal specialists independently rated 1200 color fundus images from patients at risk for DR using an annotation program, Truthseeker. The specialists determined whether each image had referable DR, and also how urgently each patient should be referred for medical examination. Graders viewed and rated the randomly presented images independently and were masked to their ratings on the alternative platform. Tablet- and desktop display-based referral ratings were compared using cross-platform, intra-observer kappa as the primary outcome measure. Additionally, inter-observer kappa, sensitivity, specificity, and area under the ROC curve (AUC) were determined. Results A high level of cross-platform, intra-observer agreement was found for the DR referral ratings between the platforms (κ=0.778) and for the two graders (κ=0.812). Inter-observer agreement was similar for the two platforms (κ=0.544 and κ=0.625 for tablet and desktop, respectively). The tablet-based ratings achieved a sensitivity of 0.848, a specificity of 0.987, and an AUC of 0.950 compared to desktop display-based ratings. Conclusions In this pilot study, tablet-based rating of color fundus images for subjects at risk for DR was consistent with desktop display-based rating. These results indicate that tablet computers can be reliably used for clinical evaluation of fundus images for DR. PMID:22495326

  17. Stokes image reconstruction for two-color microgrid polarization imaging systems.

    PubMed

    Lemaster, Daniel A

    2011-07-18

    The Air Force Research Laboratory has developed a new microgrid polarization imaging system capable of simultaneously reconstructing linear Stokes parameter images in two colors on a single focal plane array. In this paper, an effective method for extracting Stokes images is presented for this type of camera system. It is also shown that correlations between the color bands can be exploited to significantly increase overall spatial resolution. Test data is used to show the advantages of this approach over bilinear interpolation. The bounds (in terms of available reconstruction bandwidth) on image resolution are also provided. PMID:21934823

  18. Ecological genetics of adaptive color polymorphism in pocket mice: geographic variation in selected and neutral genes.

    PubMed

    Hoekstra, Hopi E; Drumm, Kristen E; Nachman, Michael W

    2004-06-01

    Patterns of geographic variation in phenotype or genotype may provide evidence for natural selection. Here, we compare phenotypic variation in color, allele frequencies of a pigmentation gene (the melanocortin-1 receptor, Mc1r), and patterns of neutral mitochondrial DNA (mtDNA) variation in rock pocket mice (Chaetodipus intermedius) across a habitat gradient in southern Arizona. Pocket mice inhabiting volcanic lava have dark coats with unbanded, uniformly melanic hairs, whereas mice from nearby light-colored granitic rocks have light coats with banded hairs. This color polymorphism is a presumed adaptation to avoid predation. Previous work has demonstrated that two Mc1r alleles, D and d, differ by four amino acids, and are responsible for the color polymorphism: DD and Dd genotypes are melanic whereas dd genotypes are light colored. To determine the frequency of the two Mc1r allelic classes across the dark-colored lava and neighboring light-colored granite, we sequenced the Mc1r gene in 175 individuals from a 35-km transect in the Pinacate lava region. We also sequenced two neutral mtDNA genes, COIII and ND3, in the same individuals. We found a strong correlation between Mc1r allele frequency and habitat color and no correlation between mtDNA markers and habitat color. Using estimates of migration from mtDNA haplotypes between dark- and light-colored sampling sites and Mc1r allele frequencies at each site, we estimated selection coefficients against mismatched Mc1r alleles, assuming a simple model of migration-selection balance. Habitat-dependent selection appears strong but asymmetric: selection is stronger against light mice on dark rock than against melanic mice on light rock. Together these results suggest that natural selection acts to match pocket mouse coat color to substrate color, despite high levels of gene flow between light and melanic populations.
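    The migration-selection balance model invoked above can be written out. In the standard one-island approximation, immigration of mismatched alleles at rate m is balanced by selection s against them; this is the textbook form, not necessarily the paper's exact estimator:

```latex
% Equilibrium under migration-selection balance (one-island approximation):
% change from selection + change from migration = 0
\Delta q \;=\; -\, s\,\hat{q}\,(1-\hat{q}) \;+\; m\,(q_m - \hat{q}) \;=\; 0
% For small \hat{q} (the mismatched allele rare on its "wrong" substrate):
\hat{q} \;\approx\; \frac{m\,q_m}{s}
% so the selection coefficient can be recovered from observed frequencies:
s \;\approx\; \frac{m\,(q_m - \hat{q})}{\hat{q}\,(1-\hat{q})}
```

    Here q_m is the mismatched-allele frequency among immigrants and q̂ its equilibrium frequency on the focal substrate; estimating m from the neutral mtDNA markers then yields s, as described above.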

  19. Uniform color space analysis of LACIE image products

    NASA Technical Reports Server (NTRS)

    Nalepka, R. F. (Principal Investigator); Balon, R. J.; Cicone, R. C.

    1979-01-01

    The author has identified the following significant results. Analysis and comparison of image products generated by different algorithms show that the scaling and biasing of data channels for control of PFC primaries lead to loss of information (in a probability-of-misclassification sense) by two major processes. In order of importance, they are: neglecting the input of one channel of data in any one image, and failing to provide sufficient color resolution of the data. The scaling and biasing approach tends to distort distance relationships in data space and provides less than desirable resolution when the data variation is typical of a developed, non-hazy agricultural scene.

  20. Color-matched esophagus phantom for fluorescent imaging

    NASA Astrophysics Data System (ADS)

    Yang, Chenying; Hou, Vivian; Nelson, Leonard Y.; Seibel, Eric J.

    2013-02-01

    We developed a stable, reproducible three-dimensional optical phantom for the evaluation of a wide-field endoscopic molecular imaging system. This phantom mimicked a human esophagus structure with flexibility to demonstrate body movements. At the same time, realistic visual appearance and diffuse spectral reflectance properties of the tissue were simulated by a color matching methodology. A photostable dye-in-polymer technology was applied to represent biomarker probed "hot-spot" locations. Furthermore, fluorescent target quantification of the phantom was demonstrated using a 1.2mm ultrathin scanning fiber endoscope with concurrent fluorescence-reflectance imaging.

  1. The Athena Pancam and Color Microscopic Imager (CMI)

    NASA Technical Reports Server (NTRS)

    Bell, J. F., III; Herkenhoff, K. E.; Schwochert, M.; Morris, R. V.; Sullivan, R.

    2000-01-01

    The Athena Mars rover payload includes two primary science-grade imagers: Pancam, a multispectral, stereo, panoramic camera system, and the Color Microscopic Imager (CMI), a multispectral and variable depth-of-field microscope. Both of these instruments will help to achieve the primary Athena science goals by providing information on the geology, mineralogy, and climate history of the landing site. In addition, Pancam provides important support for rover navigation and target selection for Athena in situ investigations. Here we describe the science goals, instrument designs, and instrument performance of the Pancam and CMI investigations.

  2. Shear wave transmissivity measurement by color Doppler shear wave imaging

    NASA Astrophysics Data System (ADS)

    Yamakoshi, Yoshiki; Yamazaki, Mayuko; Kasahara, Toshihiro; Sunaguchi, Naoki; Yuminaka, Yasushi

    2016-07-01

    Shear wave elastography is a useful method for evaluating tissue stiffness. We have proposed a novel shear wave imaging method (color Doppler shear wave imaging: CD SWI), which utilizes the signal processing unit of ultrasound color flow imaging to detect the shear wave wavefront in real time. Shear wave velocity is adopted to characterize tissue stiffness; however, it is difficult to measure tissue stiffness with high spatial resolution because of the artifacts produced by shear wave diffraction. Spatial averaging in the image reconstruction method also degrades the spatial resolution. In this paper, we propose a novel measurement method for the shear wave transmissivity of a tissue boundary. Shear wave wavefront maps are acquired while changing the displacement amplitude of the shear wave, and the transmissivity of the shear wave, which reflects the difference in shear wave velocity between the two media separated by the boundary, is measured from the ratio of the two threshold voltages required to form the shear wave wavefronts in the two media. Based on this method, a high-resolution shear wave amplitude imaging method that reconstructs a tissue boundary is proposed.

  3. Data Hiding Scheme on Medical Image using Graph Coloring

    NASA Astrophysics Data System (ADS)

    Astuti, Widi; Adiwijaya; Novia Wisety, Untari

    2015-06-01

    The utilization of digital medical images is now widespread [4]. Medical images require protection because they may pass through insecure networks. Several watermarking techniques have been developed so that the originality of digital medical images can be guaranteed. In watermarking, the medical image becomes a protected object. Nevertheless, the medical image can also serve as a medium for hiding secret data such as a patient's medical record. The data hiding is done by inserting data into the image, usually called steganography in images. Because alterations to the medical image can influence the diagnosis, steganography is applied only to the non-interest region. Vector Quantization (VQ) is a prominent and frequently used lossy data compression technique. Generally, VQ-based steganography schemes are limited in the amount of data that can be inserted. This research aims to build a steganography scheme based on Vector Quantization and graph coloring. The test results show that the scheme can insert 28768 bytes of data, equal to 10077 characters, into an image area of 3696 pixels.
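    The graph-coloring component can be illustrated with the standard greedy algorithm (a generic sketch; how the paper builds the graph from the VQ codebook is not reproduced here):

```python
def greedy_coloring(adjacency):
    # Assign each vertex the smallest color index not already used
    # by one of its colored neighbors.
    colors = {}
    for v in sorted(adjacency):
        used = {colors[u] for u in adjacency[v] if u in colors}
        c = 0
        while c in used:
            c += 1
        colors[v] = c
    return colors

# A triangle needs three colors; a path of three vertices needs two.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(greedy_coloring(triangle))   # {0: 0, 1: 1, 2: 2}
```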

  4. Adaptive Optics Imaging in Laser Pointer Maculopathy.

    PubMed

    Sheyman, Alan T; Nesper, Peter L; Fawzi, Amani A; Jampol, Lee M

    2016-08-01

    The authors report multimodal imaging including adaptive optics scanning laser ophthalmoscopy (AOSLO) (Apaeros retinal image system AOSLO prototype; Boston Micromachines Corporation, Boston, MA) in a case of previously diagnosed unilateral acute idiopathic maculopathy (UAIM) that demonstrated features of laser pointer maculopathy. The authors also show the adaptive optics images of a laser pointer maculopathy case previously reported. A 15-year-old girl was referred for the evaluation of a maculopathy suspected to be UAIM. The authors reviewed the patient's history and obtained fluorescein angiography, autofluorescence, optical coherence tomography, infrared reflectance, and AOSLO. The time course of disease and clinical examination did not fit with UAIM, but the linear pattern of lesions was suspicious for self-inflicted laser pointer injury. This was confirmed on subsequent questioning of the patient. The presence of linear lesions in the macula that are best highlighted with multimodal imaging techniques should alert the physician to the possibility of laser pointer injury. AOSLO further characterizes photoreceptor damage in this condition. [Ophthalmic Surg Lasers Imaging Retina. 2016;47:782-785.]. PMID:27548458

  6. Precise color images by a high-speed color video camera system with three intensified sensors

    NASA Astrophysics Data System (ADS)

    Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.

    1999-06-01

    High-speed imaging systems have been used in many fields of science and engineering. Although high-speed camera systems have been improved to high performance, most of their applications are only to obtain high-speed motion pictures. However, in some fields of science and technology, it is useful to obtain other information, such as the temperature of combustion flames, thermal plasma, and molten materials. Recent digital high-speed video imaging technology should be able to obtain such information from these objects. For this purpose, we have already developed a high-speed video camera system with three intensified sensors and a cubic-prism image splitter. The maximum frame rate is 40,500 pps (pictures per second) at 64 x 64 pixels and 4,500 pps at 256 x 256 pixels, with 256 (8-bit) intensity resolution for each pixel. The camera system can store more than 1,000 pictures continuously in solid-state memory. In order to obtain precise color images from this camera system, we need to develop a digital technique, consisting of a computer program and ancillary instruments, to adjust the displacement of images taken from two or three image sensors and to calibrate the relationship between incident light intensity and the corresponding digital output signals. In this paper, the digital technique for pixel-based displacement adjustment is proposed. Although the displacement of the corresponding circle was more than 8 pixels in the original image, the displacement was adjusted to within 0.2 pixels at most by this method.
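    The integer-pixel part of such displacement adjustment is commonly done with FFT phase correlation; subpixel refinement (needed to reach the ~0.2-pixel figure quoted above) would interpolate around the correlation peak. A minimal sketch of the integer-pixel step (a standard technique, not necessarily the authors' exact method):

```python
import numpy as np

def estimate_shift(ref, moving):
    # Phase correlation: the normalized cross-power spectrum has a
    # delta-like peak at the integer displacement between the frames.
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(moving)
    corr = np.real(np.fft.ifft2(cross / np.maximum(np.abs(cross), 1e-12)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the frame to negative offsets.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
ref = rng.random((16, 16))
moving = np.roll(ref, (3, 5), axis=(0, 1))   # displaced copy of ref
shift = estimate_shift(ref, moving)          # (3, 5)
```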

  7. Method for creating synthesized images using diazo color films

    NASA Astrophysics Data System (ADS)

    Kaczinski, R.

    1984-06-01

    The diazo film method involves the preparation of color synthesized images by the ultraviolet exposure of diazo film without the use of additive projectors. Its relative simplicity makes possible the production of synthesized images at all centers making use of multizonal aerial and space photographs, with an arbitrary combination of images obtained in different spectral channels to produce the desired synthesized images. Diazo material makes it possible to obtain copies with greater contrast than the original materials. The successive steps in the processing of diazo materials are discussed. By forming different combinations, it is possible to obtain from several tens to more than a hundred different variants of sets for interpretation.

  8. False-Color-Image Map of Quadrangle 3266, Ourzgan (519) and Moqur (520) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
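    The stretch-and-stack recipe described above can be sketched with a global histogram-equalization stretch per band (the map uses an adaptive, i.e. locally windowed, variant; this global version only illustrates the idea):

```python
import numpy as np

def equalize(band, bins=256):
    # Global histogram equalization: map each value to its CDF position.
    hist, edges = np.histogram(band, bins=bins)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]
    return np.interp(band.ravel(), edges[:-1], cdf).reshape(band.shape)

def false_color_composite(band7, band4, band2):
    # Landsat bands 7, 4, 2 stretched and stacked as R, G, B.
    return np.dstack([equalize(b) for b in (band7, band4, band2)])
```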

  9. False-Color-Image Map of Quadrangle 3464, Shahrak (411) and Kasi (412) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  10. False-Color-Image Map of Quadrangle 3164, Lashkargah (605) and Kandahar (606) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  11. False-Color-Image Map of Quadrangle 3564, Chahriaq (Joand) (405) and Gurziwan (406) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  12. False-Color-Image Map of Quadrangle 3568, Polekhomri (503) and Charikar (504) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  13. False-Color-Image Map of Quadrangle 3162, Chakhansur (603) and Kotalak (604) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  14. False-Color-Image Map of Quadrangle 3366, Gizab (513) and Nawer (514) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  15. A perceptually tuned watermarking scheme for color images.

    PubMed

    Chou, Chun-Hsien; Liu, Kuo-Cheng

    2010-11-01

    Transparency and robustness are two conflicting requirements of digital image watermarking for copyright protection and many other purposes. A feasible way to satisfy both simultaneously is to embed high-strength watermark signals in host signals that can accommodate the distortion due to watermark insertion as part of their perceptual redundancy. The search for distortion-tolerable host signals and the determination of watermark strength are hence crucial to realizing a transparent yet robust watermark. This paper presents a color image watermarking scheme that hides watermark signals in the most distortion-tolerable signals within the three color channels of the host image without producing perceivable distortion. The distortion-tolerable host signals, i.e., the signals possessing high perceptual redundancy, are sought in the wavelet domain. A visual model based on the CIEDE2000 color difference equation is used to measure the perceptual redundancy inherent in each wavelet coefficient of the host image. By means of quantization index modulation, binary watermark signals are embedded in the qualified wavelet coefficients. To reinforce robustness, the watermark signals are repeated and permuted before embedding and restored by a majority-vote decision-making process during extraction. Original images are not required for watermark extraction; only a small amount of information, including the locations of the qualified coefficients and the data associated with coefficient quantization, is needed. Experimental results show that the embedded watermark is transparent and quite robust in the face of various attacks such as cropping, low-pass filtering, scaling, median filtering, white-noise addition, and JPEG and JPEG2000 coding at high compression ratios. PMID:20529748
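    Quantization index modulation, the embedding mechanism named in the abstract, can be illustrated with a minimal sketch: each coefficient is snapped to one of two interleaved quantizer lattices depending on the bit to hide. The `delta` step size here is an illustrative parameter, not the paper's perceptually derived watermark strength:

```python
import numpy as np

def qim_embed(coeffs, bits, delta=8.0):
    # Bit 0 uses the lattice {m*delta}, bit 1 the lattice {m*delta + delta/2}.
    bits = np.asarray(bits)
    offset = bits * (delta / 2.0)
    return np.round((coeffs - offset) / delta) * delta + offset

def qim_extract(coeffs, delta=8.0):
    # Recover each bit by choosing the nearer of the two lattices.
    d0 = np.abs(coeffs - np.round(coeffs / delta) * delta)
    off = coeffs - delta / 2.0
    d1 = np.abs(off - np.round(off / delta) * delta)
    return (d1 < d0).astype(int)
```

Because embedding snaps coefficients exactly onto a lattice, extraction from an unattacked signal is error-free; robustness comes from the delta/4 decision margin.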

  17. A channel-based color fusion technique using multispectral images for night vision enhancement

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng

    2011-09-01

    A fused image created from multispectral images can increase the reliability of interpretation because it combines the complementary information apparent in the individual bands. Moreover, a color image is easily interpreted by human users (for visual analysis) and thus improves observer performance and reaction times. We propose a fast color fusion method, termed channel-based color fusion, which is efficient enough for real-time applications. Note that the term "color fusion" here means combining multispectral images into a single color image that resembles a natural scene, whereas false-coloring techniques usually make no attempt to resemble natural scenery. The framework of channel-based color fusion is as follows: (1) prepare for color fusion by preprocessing and image registration; (2) form a color-fusion image by properly assigning the multispectral images to the red, green, and blue channels; (3) fuse the multispectral images (gray fusion) using a wavelet-based fusion algorithm; and (4) replace the value component of the color-fusion image in HSV color space with the gray-fusion image, and finally transform back to RGB space. In night vision imaging, two or more bands of images may be available, for example, visible (RGB), image intensified (II), near infrared (NIR), medium-wave infrared (MWIR), and long-wave infrared (LWIR). The proposed channel-based color fusion was tested with two-band (e.g., NIR + LWIR, II + LWIR, RGB + LWIR) and three-band (e.g., RGB + NIR + LWIR) multispectral images. Experimental results show that the colors in the images fused by the proposed method are vivid and comparable with those of segmentation-based colorization, while the processing speed of the new method is much faster than that of any segmentation-based method.
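    Steps (2) and (4) of the framework can be sketched as below. The band-to-channel order and the [0, 1] value range are assumptions for illustration; replacing the HSV value channel, which equals max(R, G, B), reduces to a per-pixel rescaling of the RGB triplet:

```python
import numpy as np

def color_fusion(nir, lwir, visible, gray_fused):
    # Step (2): assign the bands to R, G, B (this band order is
    # illustrative, not necessarily the paper's exact assignment).
    rgb = np.dstack([lwir, nir, visible]).astype(float)
    # Step (4): in HSV, value = max(R, G, B); imposing the gray-fusion
    # image as the value channel is a per-pixel rescaling of RGB, which
    # preserves hue and saturation.
    value = rgb.max(axis=2, keepdims=True)
    scale = np.where(value > 0, gray_fused[..., None] / np.maximum(value, 1e-6), 0.0)
    return np.clip(rgb * scale, 0.0, 1.0)
```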

  18. AIDA: An Adaptive Image Deconvolution Algorithm

    NASA Astrophysics Data System (ADS)

    Hom, Erik; Marchis, F.; Lee, T. K.; Haase, S.; Agard, D. A.; Sedat, J. W.

    2007-10-01

    We recently described an adaptive image deconvolution algorithm (AIDA) for myopic deconvolution of multi-frame and three-dimensional data acquired through astronomical and microscopic imaging [Hom et al., J. Opt. Soc. Am. A 24, 1580 (2007)]. AIDA is a reimplementation and extension of the MISTRAL method developed by Mugnier and co-workers and shown to yield object reconstructions with excellent edge preservation and photometric precision [J. Opt. Soc. Am. A 21, 1841 (2004)]. Written in Numerical Python with calls to a robust constrained conjugate gradient method, AIDA has significantly improved run times over the original MISTRAL implementation. AIDA includes a scheme to automatically balance maximum-likelihood estimation and object regularization, which significantly decreases the amount of time and effort needed to generate satisfactory reconstructions. Here, we present a gallery of results demonstrating the effectiveness of AIDA in processing planetary science images acquired using adaptive-optics systems. Offered as an open-source alternative to MISTRAL, AIDA is available for download and further development at: http://msg.ucsf.edu/AIDA. This work was supported in part by the W. M. Keck Observatory, the National Institutes of Health, NASA, the National Science Foundation Science and Technology Center for Adaptive Optics at UC-Santa Cruz, and the Howard Hughes Medical Institute.
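    As a point of reference only, the classical Richardson-Lucy iteration below shows the basic shape of an iterative deconvolution loop with a fixed, known PSF; AIDA's myopic MAP estimation (joint object/PSF estimation with automatic regularization balancing) is considerably more elaborate:

```python
import numpy as np

def circ_conv(a, k):
    # Circular 2-D convolution via FFT, with the kernel centered at the origin.
    pk = np.zeros_like(a, dtype=float)
    kh, kw = k.shape
    pk[:kh, :kw] = k
    pk = np.roll(pk, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(pk)))

def richardson_lucy(observed, psf, n_iter=50):
    # Classical fixed-PSF Richardson-Lucy: multiplicative updates that
    # preserve positivity of the estimate.
    est = np.full(observed.shape, observed.mean(), dtype=float)
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        ratio = observed / np.maximum(circ_conv(est, psf), 1e-12)
        est = est * circ_conv(ratio, psf_flip)
    return est
```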

  19. Extending the depth-of-field for microscopic imaging by means of multifocus color image fusion

    NASA Astrophysics Data System (ADS)

    Hurtado-Pérez, R.; Toxqui-Quitl, C.; Padilla-Vivanco, A.; Ortega-Mendoza, G.

    2015-09-01

    In microscopy, the depth of field (DOF) is limited by the physical characteristics of the imaging system, so imaging a scene with the entire field of view in focus can be impossible. In this paper, metal samples are inspected on multiple focal planes by moving the microscope stage along the z-axis; for each z plane, an image is digitized. Through digital image processing, an image with all regions in focus is generated from the set of multifocus images. The proposed fusion algorithm yields a single sharp image. The fusion scheme is simple, fast, and virtually free of artifacts or false color. Experimental fusion results are shown.
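    A minimal version of such a multifocus fusion can be sketched as follows, using the absolute Laplacian of the luminance as the focus measure and a per-pixel winner-take-all selection; the paper's actual merger scheme may differ:

```python
import numpy as np

def sharpness(gray):
    # Absolute Laplacian as a focus measure (rolled-array 5-point stencil,
    # with wrap-around at the borders).
    lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0) +
           np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4 * gray)
    return np.abs(lap)

def fuse_stack(stack):
    # stack: (n, H, W, 3) registered color slices taken at different z planes.
    focus = np.stack([sharpness(s.mean(axis=2)) for s in stack])
    best = focus.argmax(axis=0)                        # winning slice per pixel
    idx = np.broadcast_to(best[None, ..., None], (1,) + stack.shape[1:])
    return np.take_along_axis(stack, idx, axis=0)[0]
```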

  20. Self-adaptive iris image acquisition system

    NASA Astrophysics Data System (ADS)

    Dong, Wenbo; Sun, Zhenan; Tan, Tieniu; Qiu, Xianchao

    2008-03-01

    Iris image acquisition is the fundamental step of iris recognition, but capturing high-resolution iris images in real time is very difficult. Most existing systems have a small capture volume and demand that users fully cooperate with the machine, which has become the bottleneck for applying iris recognition. In this paper, we aim at building an active iris image acquisition system that is self-adaptive to users. Two low-resolution cameras are co-located on a pan-tilt unit (PTU), for face and iris image acquisition, respectively. Once the face camera detects a face region in the real-time video, the system steers the PTU toward the eye region and automatically zooms until the iris camera captures a clear iris image for recognition. Compared with similar works, our contribution is the use of low-resolution cameras, which can transmit image data much faster and are much cheaper than high-resolution cameras. In the system, we use cascaded Haar-like features to detect faces and eyes, a linear transformation to predict the iris camera's position, and a simple heuristic PTU control method to track the eyes. A prototype device has been built, and experiments show that our system can automatically capture a high-quality iris image within a volume of 0.6 m x 0.4 m x 0.4 m in 3 to 5 seconds on average.
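    The linear transformation used to predict the iris camera's position can be sketched as a least-squares affine fit from calibration pairs; the calibration procedure and names below are assumptions for illustration:

```python
import numpy as np

def fit_linear_map(eye_xy, ptu_angles):
    # Affine least-squares fit from face-camera eye coordinates (x, y)
    # to pan/tilt angles, given calibration pairs.
    A = np.hstack([eye_xy, np.ones((len(eye_xy), 1))])  # affine design matrix
    coef, *_ = np.linalg.lstsq(A, ptu_angles, rcond=None)
    return coef  # shape (3, 2): x weight, y weight, bias for pan and tilt

def predict_angles(coef, xy):
    # Predict (pan, tilt) for one detected eye position.
    return np.append(xy, 1.0) @ coef
```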

  1. False-color composite image of Prince Albert, Canada

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a false color composite of Prince Albert, Canada, centered at 53.91 north latitude and 104.69 west longitude. This image was acquired by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) on the 20th orbit of the Shuttle Endeavour. The area is located 40 km north and 30 km east of the town of Prince Albert in the Saskatchewan province of Canada. The image covers the area east of the Candle Lake, between gravel surface highways 120 and 106 and west of 106. The area in the middle of the image covers the entire Nipawin (Narrow Hills) provincial park. The look angle of the radar is 30 degrees and the size of the image is approximately 20 kilometers by 50 kilometers (12 by 30 miles). Most of the dark areas in the image are the ice-covered lakes in the region. The dark area on the top right corner of the image is the White Gull Lake north of the intersection of Highway 120 and 913. The right middle part of the image shows Lake Ispuchaw and Lower Fishing Lake

  2. Three frequency false-color image of Prince Albert, Canada

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a three-frequency, false color image of Prince Albert, Canada, centered at 53.91 north latitude and 104.69 west longitude. It was produced using data from the X-band, C-band and L-band radars that comprise the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR). SIR-C/X-SAR acquired this image on the 20th orbit of the Shuttle Endeavour. The area is located 40 km north and 30 km east of the town of Prince Albert in the Saskatchewan province of Canada. The image covers the area east of the Candle Lake, between gravel surface highways 120 and 106 and west of 106. The area in the middle of the image covers the entire Nipawin (Narrow Hills) provincial park. Most of the dark blue areas in the image are the ice covered lakes. The dark area on the top right corner of the image is the White Gull Lake north of the intersection of highway 120 and 913. The right middle part of the image shows Lake Ispuchaw and Lower Fishing Lake. The deforested areas are shown by light

  3. Blood flow estimation in gastroscopic true-color images

    NASA Astrophysics Data System (ADS)

    Jacoby, Raffael S.; Herpers, Rainer; Zwiebel, Franz M.; Englmeier, Karl-Hans

    1995-05-01

    The assessment of blood flow in the gastrointestinal mucosa might be an important factor for the diagnosis and treatment of several diseases such as ulcers, gastritis, colitis, or early cancer. The quantity of blood flow is roughly estimated by computing the spatial hemoglobin distribution in the mucosa. The presented method enables a practical realization by approximately calculating the hemoglobin concentration from a spectrophotometric analysis of endoscopic true-color images, which are recorded during routine examinations. A system model based on the reflectance-spectroscopic law of Kubelka-Munk is derived, which enables an estimation of the hemoglobin concentration from the color values of the images. Additionally, a transformation of the color values is developed to improve luminance independence. Applying this transformation and estimating the hemoglobin concentration for each pixel of interest, the hemoglobin distribution can be computed. The obtained results are largely independent of luminance. An initial validation of the presented method is performed by a quantitative estimation of its reproducibility.
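    As a simpler stand-in for the paper's Kubelka-Munk model, a widely used endoscopic approximation derives a hemoglobin index from the red/green ratio of each pixel. This sketch illustrates only the general idea of mapping color values to a hemoglobin estimate, not the paper's actual formulation:

```python
import numpy as np

def hemoglobin_index(rgb):
    # Per-pixel hemoglobin index 32*log2(R/G): redder mucosa (more
    # hemoglobin absorption in green) scores higher. The small epsilon
    # guards against division by zero.
    r = rgb[..., 0].astype(float) + 1e-6
    g = rgb[..., 1].astype(float) + 1e-6
    return 32.0 * np.log2(r / g)
```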

  4. Edge-suppressed color clustering for image thresholding

    NASA Astrophysics Data System (ADS)

    Celenk, Mehmet; Uijt de Haag, Maarten

    2000-03-01

    This paper discusses the development of an iterative algorithm for fully automatic (gross or fine) segmentation of color images. The basic idea is to automate segmentation for on-line operation, which is needed for such critical applications as internet communication, video indexing, target tracking, visual guidance, remote control, and motion detection. The method is composed of an edge-suppressed clustering (learning) step and a principal-component thresholding (classification) step. In the learning phase, image clusters are well formed in the (R,G,B) space by considering only the non-edge points. The unknown number N of mutually exclusive image segments is learned in an unsupervised operation mode based on a cluster-fidelity measure and the K-means algorithm. The classification phase is a correlation-based segmentation strategy that operates in the K-L transform domain using the Otsu thresholding principle. It is demonstrated experimentally that the method is effective and efficient for color images of natural scenes with irregular textures and objects of varying sizes and dimensions.
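    The classification phase, thresholding in the K-L (principal component) domain with Otsu's rule, can be sketched as follows; this is a simplified stand-in for the full correlation-based strategy:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    # Otsu's rule: pick the histogram cut that maximizes the
    # between-class variance (mu_t*w0 - mu)^2 / (w0*w1).
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)
    mu = np.cumsum(p * centers)
    mu_t = mu[-1]
    w1 = 1 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    between[valid] = (mu_t * w0 - mu)[valid] ** 2 / (w0 * w1)[valid]
    return centers[np.argmax(between)]

def segment_colors(pixels):
    # Project (n, 3) RGB pixels onto their first principal (K-L) axis,
    # then split them with Otsu's threshold.
    centered = pixels - pixels.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt[0]
    return proj > otsu_threshold(proj)
```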

  5. Glaucoma risk index: automated glaucoma detection from color fundus images.

    PubMed

    Bock, Rüdiger; Meier, Jörg; Nyúl, László G; Hornegger, Joachim; Michelson, Georg

    2010-06-01

    Glaucoma, a neurodegeneration of the optic nerve, is one of the most common causes of blindness. Because revitalization of the degenerated optic nerve fibers is impossible, early detection of the disease is essential. This can be supported by robust, automated mass screening. We propose a novel automated glaucoma detection system that operates on digital color fundus images, which are inexpensive to acquire and widely used. After a glaucoma-specific preprocessing, different generic feature types are compressed by an appearance-based dimension reduction technique. Subsequently, a probabilistic two-stage classification scheme combines these feature types to extract the novel Glaucoma Risk Index (GRI), which shows reasonable glaucoma detection performance. On a sample set of 575 fundus images, a classification accuracy of 80% was achieved in a 5-fold cross-validation setup. The GRI attains a competitive area under the ROC curve (AUC) of 88%, compared with an AUC of 87% for the established topography-based glaucoma probability score of scanning laser tomography. The proposed color-fundus-image-based GRI thus achieves competitive and reliable detection performance on a low-priced modality by statistical analysis of entire images of the optic nerve head.
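    The appearance-based dimension reduction step can be illustrated with a generic PCA projection of image feature vectors; this is a stand-in sketch, not the paper's exact pipeline:

```python
import numpy as np

def pca_compress(X, n_components):
    # Appearance-based compression: center the (n_samples, n_features)
    # matrix and project it onto the top principal components from an SVD.
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    comps = vt[:n_components]          # (n_components, n_features)
    return (X - mean) @ comps.T, comps, mean
```

With all components kept, `Z @ comps + mean` reconstructs X exactly; keeping only the leading components gives the compressed representation fed to the classifier.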

  6. Color binarization for complex camera-based images

    NASA Astrophysics Data System (ADS)

    Thillou, Céline; Gosselin, Bernard

    2005-01-01

    This paper describes a new automatic color thresholding method, based on wavelet denoising and K-means color clustering, for segmenting text information in a camera-based image. Several parameters contribute different information, and this paper explains how to exploit their complementarity. The method is mainly based on discriminating between two kinds of backgrounds: clean and complex. This separation is useful, on one hand, to apply a dedicated algorithm to each case and, on the other hand, to decrease computation time in clean cases, for which a faster method can be used. Finally, several experiments are discussed, leading to the conclusion that discriminating between kinds of backgrounds gives better results in terms of precision and recall.
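    The K-means color-clustering step can be sketched as follows; the wavelet-denoising stage is omitted, and the deterministic farthest-point initialization is an illustrative choice:

```python
import numpy as np

def kmeans(points, k, iters=20):
    # Plain K-means on (n, 3) color vectors.
    points = np.asarray(points, dtype=float)
    # Farthest-point initialization (deterministic stand-in for k-means++).
    centers = [points[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(points - c, axis=1) for c in centers], axis=0)
        centers.append(points[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute centers.
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels
```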

  8. Butterfly wing coloration studied with a novel imaging scatterometer

    NASA Astrophysics Data System (ADS)

    Stavenga, Doekele

    2010-03-01

    Animal coloration functions for display or camouflage. Notably, insects provide numerous examples of a rich variety of optical mechanisms. For instance, many butterflies feature a distinct dichromatism; that is, the wing coloration of the male and the female differ substantially. The male Brimstone, Gonepteryx rhamni, has yellow wings that are strongly UV iridescent, but the female has white wings with low reflectance in the UV and a high reflectance in the visible wavelength range. In the Small White cabbage butterfly, Pieris rapae crucivora, the wing reflectance of the male is low in the UV and high at visible wavelengths, whereas the wing reflectance of the female is higher in the UV and lower in the visible. Pierid butterflies apply nanosized, strongly scattering beads to achieve their bright coloration. The male Pipevine Swallowtail butterfly, Battus philenor, has dorsal wings with scales functioning as thin-film gratings that exhibit polarized iridescence; the dorsal wings of the female are matte black. The polarized iridescence probably functions in intraspecific sexual signaling, as has been demonstrated in Heliconius butterflies. An example of camouflage is the Green Hairstreak butterfly, Callophrys rubi, in which photonic-crystal domains in the ventral wing scales produce a matte green color that closely matches the color of plant leaves. The spectral reflection and polarization characteristics of biological tissues can be assessed rapidly and with unprecedented detail using a novel imaging scatterometer-spectrophotometer built around an elliptical mirror [1]. Examples of butterfly and damselfly wings, bird feathers, and beetle cuticle will be presented. [1] D.G. Stavenga, H.L. Leertouwer, P. Pirih, M.F. Wehling, Optics Express 17, 193-202 (2009)

  9. Adaptive thresholding of digital subtraction angiography images

    NASA Astrophysics Data System (ADS)

    Sang, Nong; Li, Heng; Peng, Weixue; Zhang, Tianxu

    2005-10-01

    In clinical practice, digital subtraction angiography (DSA) is a powerful technique for the visualization of blood vessels in the human body. Blood vessel segmentation is a central problem in 3D vascular reconstruction. In this paper, we propose a new adaptive thresholding method for the segmentation of DSA images. Each pixel of a DSA image is declared to be a vessel or background point according to a threshold and a few local characteristic limits computed from information contained in the pixel's neighborhood window. The size of the neighborhood window is set according to a priori knowledge of vessel diameters, to ensure that each window definitely contains background. Experiments on cerebral DSA images show that the proposed method yields better results than global thresholding methods and several other local thresholding methods.
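    The core idea, comparing each pixel against its local neighborhood rather than a single global threshold, can be sketched as follows (an illustration only: the paper additionally uses local characteristic limits, and the window size, offset, and test image here are hypothetical):

```python
def adaptive_threshold(img, win=3, offset=10):
    """Mark a pixel as foreground (1) if it exceeds the local
    neighborhood mean by `offset`, else background (0)."""
    h, w = len(img), len(img[0])
    r = win // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Gather the window, clipped at the image borders.
            vals = [img[yy][xx]
                    for yy in range(max(0, y - r), min(h, y + r + 1))
                    for xx in range(max(0, x - r), min(w, x + r + 1))]
            mean = sum(vals) / len(vals)
            out[y][x] = 1 if img[y][x] > mean + offset else 0
    return out

# A single bright "vessel" pixel against a darker background.
img = [[20, 20, 20],
       [20, 200, 20],
       [20, 20, 20]]
mask = adaptive_threshold(img)
```

Only the bright pixel stands out against its local mean, so only it survives the thresholding; a global threshold would behave the same here but fails when the background intensity drifts across the image.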

  10. Digital Image Watermarking via Adaptive Logo Texturization.

    PubMed

    Andalibi, Mehran; Chandler, Damon M

    2015-12-01

    Grayscale logo watermarking is a well-developed area of digital image watermarking in which a smaller logo image is embedded into a host image. The key advantage of this approach is the ability to visually analyze the extracted logo for rapid visual authentication and other visual tasks. However, logos pose new challenges for invisible watermarking applications, which must keep the watermark imperceptible within the host image while simultaneously maintaining robustness to attacks. This paper presents an algorithm for invisible grayscale logo watermarking that operates via adaptive texturization of the logo. The central idea of our approach is to recast the watermarking task as a texture-similarity task. We first separate the host image into sufficiently textured and poorly textured regions. Next, for textured regions, we transform the logo into a visually similar texture via the Arnold transform and one lossless rotation; for poorly textured regions, we use only a lossless rotation. The number of Arnold iterations and the angle of the lossless rotation are determined by a model of visual texture similarity. Finally, for each region, we embed the transformed logo into that region via a standard wavelet-based embedding scheme. We employ a multistep extraction stage in which affine parameter estimation is first performed to compensate for possible geometric transformations. Testing with multiple logos on a database of host images and under a variety of attacks demonstrates that the proposed algorithm yields better overall performance than competing methods.
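    The Arnold transform that scrambles the logo can be sketched as follows (a generic illustration of the scrambling step only; the texture-similarity model that chooses the iteration count, the lossless rotation, and the wavelet embedding are all omitted):

```python
def arnold(img, iterations=1):
    """Apply the Arnold cat map to a square N x N image: pixel (x, y)
    moves to ((x + y) mod N, (x + 2y) mod N). The map scrambles spatial
    structure but is periodic, hence exactly invertible."""
    n = len(img)
    for _ in range(iterations):
        out = [[0] * n for _ in range(n)]
        for y in range(n):
            for x in range(n):
                out[(x + 2 * y) % n][(x + y) % n] = img[y][x]
        img = out
    return img

logo = [[1, 2], [3, 4]]
scrambled = arnold(logo, 1)
restored = arnold(logo, 3)   # the map's period for N = 2 is 3
```

Because the map is periodic, iterating it the right number of extra times recovers the original logo exactly, which is what makes the scrambling lossless.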

  11. Automated rice leaf disease detection using color image analysis

    NASA Astrophysics Data System (ADS)

    Pugoy, Reinald Adrian D. L.; Mariano, Vladimir Y.

    2011-06-01

    In rice-related institutions such as the International Rice Research Institute, assessing the health condition of a rice plant through its leaves, which is usually done as a manual eyeball exercise, is important to come up with good nutrient and disease management strategies. In this paper, an automated system that can detect diseases present in a rice leaf using color image analysis is presented. In the system, the outlier region is first obtained from a rice leaf image to be tested using histogram intersection between the test and healthy rice leaf images. Upon obtaining the outlier, it is then subjected to a threshold-based K-means clustering algorithm to group related regions into clusters. Then, these clusters are subjected to further analysis to finally determine the suspected diseases of the rice leaf.
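    The histogram-intersection step used to isolate outlier regions can be sketched on a single channel (an illustration only; the bin count and the sample pixel values are made up, and the thresholded K-means stage is omitted):

```python
def histogram(pixels, bins=4, max_val=256):
    """Normalized histogram of single-channel pixel values."""
    h = [0] * bins
    step = max_val // bins
    for v in pixels:
        h[min(v // step, bins - 1)] += 1
    total = len(pixels)
    return [c / total for c in h]

def intersection(h1, h2):
    """Histogram intersection: 1.0 for identical distributions,
    approaching 0.0 as they diverge."""
    return sum(min(a, b) for a, b in zip(h1, h2))

healthy = histogram([60, 70, 80, 90])    # mid-range leaf values
test = histogram([60, 70, 200, 210])     # two outlier (lesion-like) values
sim = intersection(healthy, test)
```

A low intersection score flags the test histogram as containing mass outside the healthy distribution, which is the cue for extracting the outlier region.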

  12. From printed color to image appearance: tool for advertising assessment

    NASA Astrophysics Data System (ADS)

    Bonanomi, Cristian; Marini, Daniele; Rizzi, Alessandro

    2012-07-01

    We present a methodology to calculate the color appearance of advertising billboards set in indoor and outdoor environments, printed on different types of paper support, and viewed under different illuminations. The aim is to simulate the visual appearance of an image printed on a specific support, observed in a certain context, and illuminated with a specific light source. Knowing in advance the visual rendering of an image under different conditions can avoid problems related to its visualization. The proposed method applies a sequence of transformations to convert a four-channel (CMYK) image into a spectral one, taking the paper support into account; it then simulates the chosen illumination and finally computes an estimate of the appearance.
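    The starting point of such a pipeline, turning CMYK values into a displayable color, can be illustrated with the textbook device-independent approximation (the paper's actual conversion is spectral and models the paper support; this naive formula ignores both and is given only for orientation):

```python
def cmyk_to_rgb(c, m, y, k):
    """Naive CMYK -> RGB conversion, all channels in [0, 1].
    No ICC profile, no illumination, no paper model."""
    r = (1 - c) * (1 - k)
    g = (1 - m) * (1 - k)
    b = (1 - y) * (1 - k)
    return r, g, b
```

For example, full cyan ink with no black gives (0, 1, 1); any pixel with full black ink maps to (0, 0, 0) regardless of the other channels.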

  13. Automatic Microaneurysm Detection and Characterization Through Digital Color Fundus Images

    SciTech Connect

    Martins, Charles; Veras, Rodrigo; Ramalho, Geraldo; Medeiros, Fatima; Ushizima, Daniela

    2008-08-29

    Ocular fundus images can provide information about retinal, ophthalmic, and even systemic diseases such as diabetes. Microaneurysms (MAs) are the earliest sign of diabetic retinopathy, a frequently observed complication in both type 1 and type 2 diabetes. Robust detection of MAs in digital color fundus images is critical in the development of automated screening systems for this disease. Automatic grading of these images is being considered by health boards to reduce the human grading workload. In this paper we describe segmentation and feature extraction methods for candidate MA detection. We show that the candidate MAs detected with this methodology were successfully classified by an MLP neural network (84% correct classification).

  14. Quantization of polyphenolic compounds in histological sections of grape berries by automated color image analysis

    NASA Astrophysics Data System (ADS)

    Clement, Alain; Vigouroux, Bertnand

    2003-04-01

    We present new results in applied color image analysis that demonstrate the significant influence of soil on the localization and appearance of polyphenols in grapes. These results have been obtained with a new unsupervised classification algorithm based on hierarchical analysis of color histograms. The process is automated by a software platform we developed specifically for color image analysis and its applications.

  15. Visual Tracking Based on the Adaptive Color Attention Tuned Sparse Generative Object Model.

    PubMed

    Tian, Chunna; Gao, Xinbo; Wei, Wei; Zheng, Hong

    2015-12-01

    This paper presents a new visual tracking framework based on an adaptive color-attention-tuned local sparse model. The histograms of sparse coefficients of all patches in an object are pooled together according to their spatial distribution. A particle filter is used as the location model to predict candidates for object verification during tracking. Since color is an important visual cue for distinguishing objects from background, we calculate the color similarity between objects in previous frames and candidates in the current frame, which is adopted as color attention to tune the local sparse-representation-based appearance similarity measurement between the object template and candidates. The color similarity can be calculated efficiently with hash-coded color names, which helps the tracker find more reliable objects during tracking. We use a flexible local sparse coding of the object to evaluate the degeneration of the appearance model, based on which we build a model updating mechanism to alleviate drifting caused by temporally varying factors. Experiments on 76 challenging benchmark color sequences and the evaluation under the object tracking benchmark protocol demonstrate the superiority of the proposed tracker over state-of-the-art methods in accuracy. PMID:26390460

  16. Five-color fluorescent imaging in living tumor cells

    NASA Astrophysics Data System (ADS)

    Wang, Liang; Yang, Jie; Chu, Jun; Luo, Qingming; Zhang, Zhihong

    2008-12-01

    Fluorescent probes based on fluorescent proteins (FPs) have been widely used to investigate molecules of interest in living cells. Molecular events in living cells are very complicated, and cell activities generally involve multi-molecular interactions. With the development of novel fluorescent protein mutants and imaging technology, molecular signals in living cells can now be detected accurately. In this study, with appropriate targeting signals, fluorescent proteins were localized to the plasma membrane (Rac1-mCerulean), Golgi membrane (EYFP-go), ER membrane (RFP2-er), and mitochondrial membrane (RFP1-mt). Cultured HeLa cells were cotransfected with these four plasmids and, 36 h later, labeled with Hoechst 33258, which localizes to the nucleus of a living cell. Using confocal microscopy with 405 nm, 458 nm, and 514 nm laser lines, a five-color fluorescent image was obtained in which five subcellular structures were clearly shown in living cells. This technique of multi-color imaging in a single cell provides a powerful tool for simultaneously studying multi-molecular events in living cells.

  17. Structure of mouse spleen investigated by 7-color fluorescence imaging

    NASA Astrophysics Data System (ADS)

    Tsurui, Hiromichi; Niwa, Shinichirou; Hirose, Sachiko; Okumura, Ko; Shirai, Toshikazu

    2001-07-01

    Multi-color fluorescence imaging of tissue samples is an urgent requirement in current biology. As long as fluorescence signals must be isolated with optical bandpass filter sets, the scarcity of chromophore combinations with little spectral overlap makes this demand hard to satisfy. The additivity of signals in a fluorescence image, however, permits linear unmixing of superposed spectra based on singular value decomposition, and hence complete separation of fluorescence signals that overlap considerably. We have developed 7-color fluorescence imaging based on this principle and applied the method to the investigation of mouse spleen. Not only coarse structural features of the spleen, such as red pulp, marginal zone, and white pulp, but also their fine structures, the periarteriolar lymphocyte sheath (PALS), follicle, and germinal center, were clearly pictured simultaneously. The distributions of subsets of dendritic cell (DC) and macrophage (M(phi)) markers such as BM8, F4/80, MOMA2, and Mac3 around the marginal zone were imaged simultaneously, and their inhomogeneous expression was clearly demonstrated. These results show the usefulness of the method in the study of structures that consist of many kinds of cells and in the identification of cells characterized by multiple markers.
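    The linear-unmixing principle can be sketched with a tiny least-squares example (the authors use singular value decomposition; this illustration solves the equivalent normal equations directly, and the two reference spectra and measured pixel are made up):

```python
def unmix(measured, spectra):
    """Least-squares unmixing: solve S a ~ m for abundances a, where each
    row of `spectra` is one fluorophore's reference spectrum sampled at
    the same wavelengths as `measured`. Uses the normal equations
    (S^T S) a = S^T m with Gaussian elimination."""
    n = len(spectra)                       # number of fluorophores
    nl = len(measured)                     # number of spectral samples
    A = [[sum(spectra[i][k] * spectra[j][k] for k in range(nl))
          for j in range(n)] for i in range(n)]
    b = [sum(spectra[i][k] * measured[k] for k in range(nl))
         for i in range(n)]
    # Gaussian elimination with back-substitution.
    for col in range(n):
        piv = A[col][col]
        for j in range(col, n):
            A[col][j] /= piv
        b[col] /= piv
        for row in range(col + 1, n):
            f = A[row][col]
            for j in range(col, n):
                A[row][j] -= f * A[col][j]
            b[row] -= f * b[col]
    a = [0.0] * n
    for i in range(n - 1, -1, -1):
        a[i] = b[i] - sum(A[i][j] * a[j] for j in range(i + 1, n))
    return a

# Two overlapping reference spectra sampled at three wavelengths.
s1, s2 = [1.0, 0.5, 0.0], [0.0, 0.5, 1.0]
measured = [2.0, 2.5, 3.0]                 # constructed as 2*s1 + 3*s2
abund = unmix(measured, [s1, s2])
```

Even though the two spectra overlap at the middle wavelength, the solver recovers the abundances 2 and 3 exactly, which is the sense in which additivity permits complete separation.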

  18. Influence of Blackness on Visual Impression of Color Images

    NASA Astrophysics Data System (ADS)

    Eda, Tetsuya; Koike, Yoshiki; Matsushima, Sakurako; Ishikawa, Tomoharu; Ozaki, Koichi; Ayama, Miyoshi

    Two experiments, using color images of Japanese lacquer objects, investigated the relation between the strength of blackness and the visual and artistic impression of digital color images presented on a display. The first experiment determined the mean RGB values of black surface areas in the test stimuli at which observers began to perceive the areas as “black”, and the mean RGB values at which observers perceived the areas as “really black”. Results indicate that to perceive a “really black” surface, the RGB values should be lower than those of the original image in some pictures. The second experiment investigated how, and to what degree, the RGB values of black areas affect the visual impression of an artistic picture. Three factors, a “high-quality axis”, a “mysterious axis”, and a “feeling of material axis”, were extracted by factor analysis. Results indicate that the Art students seem to be more sensitive in evaluations along the “high-quality axis” and “mysterious axis” than the Engineering students, while the opposite tendency is observed in evaluations along the “feeling of material axis”.

  19. SRTM Radar Image with Color as Height: Kachchh, Gujarat, India

    NASA Technical Reports Server (NTRS)

    2001-01-01

    This image shows the area around the January 26, 2001, earthquake in western India, the deadliest in the country's history with some 20,000 fatalities. The epicenter of the magnitude 7.6 earthquake was just to the left of the center of the image. The Gulf of Kachchh (or Kutch) is the black area running from the lower left corner towards the center of the image. The city of Bhuj is in the yellow-toned area among the brown hills left of the image center and is the historical capital of the Kachchh region. Bhuj and many other towns and cities nearby were almost completely destroyed by the shaking of the earthquake. These hills reach up to 500 meters (1,500 feet) elevation. The city of Ahmedabad, capital of Gujarat state, is the radar-bright area next to the right side of the image. Several buildings in Ahmedabad were also destroyed by the earthquake. The dark blue areas around the center of the image and extending to the left side are low-lying salt flats called the Rann of Kachchh with the Little Rann just to the right of the image center. The bumpy area north of the Rann (green and yellow colors) is a large area of sand dunes in Pakistan. A branch of the Indus River used to flow through the area on the left side of this image, but it was diverted by a previous large earthquake that struck this area in 1819.

    The annotated version of the image includes a 'beachball' that shows the location and slip direction of the January 26, 2001, earthquake from the Harvard Quick CMT catalog: http://www.seismology.harvard.edu/CMTsearch.html. [figure removed for brevity, see original site]

    This image combines two types of data from the Shuttle Radar Topography Mission (SRTM). The image brightness corresponds to the strength of the radar signal reflected from the ground, while colors show the elevation as measured by SRTM. Colors range from blue at the lowest elevations to brown and white at the highest elevations. This image is a mosaic of four SRTM swaths.

    This image

  20. Adaptive Optics Imaging of Solar System Objects

    NASA Technical Reports Server (NTRS)

    Roddier, Francois; Owen, Toby

    1997-01-01

    Most solar system objects have never been observed at wavelengths longer than the R band with an angular resolution better than 1 arcsec. The Hubble Space Telescope itself has only recently been equipped to observe in the infrared. However, because of its small diameter, its angular resolution is lower than what can now be achieved from the ground with adaptive optics, and the time allocated to planetary science is limited. We have been using adaptive optics (AO) on a 4-m class telescope to obtain 0.1 arcsec resolution images of solar system objects at the far-red and near-infrared wavelengths (0.7-2.5 micron) which best discriminate their spectral signatures. Our efforts have been put into areas of research for which high angular resolution is essential, such as the mapping of Titan and of large asteroids, the dynamics and composition of Neptune's stratospheric clouds, and the infrared photometry of Pluto, Charon, and close satellites previously undetected from the ground.

  1. Adaptive Optics Imaging of Solar System Objects

    NASA Technical Reports Server (NTRS)

    Roddier, Francois; Owen, Toby

    1999-01-01

    Most solar system objects have never been observed at wavelengths longer than the R band with an angular resolution better than 1". The Hubble Space Telescope itself has only recently been equipped to observe in the infrared. However, because of its small diameter, its angular resolution is lower than what can now be achieved from the ground with adaptive optics, and the time allocated to planetary science is limited. We have successfully used adaptive optics on a 4-m class telescope to obtain 0.1" resolution images of solar system objects in the far red and near infrared (0.7-2.5 microns), at wavelengths which best discriminate their spectral signatures. Our efforts have been put into areas of research for which high angular resolution is essential.

  2. Los Angeles, California, Radar Image, Wrapped Color as Height

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This topographic radar image shows the relationship between the dense urban development of Los Angeles and the natural contours of the land. The image includes the Pacific Ocean on the left, the flat Los Angeles Basin across the center, and the steep ranges of the Santa Monica and Verdugo mountains along the top. The two dark strips near the coast at lower left are the runways of Los Angeles International Airport. Downtown Los Angeles is the bright yellow and pink area at lower center. Pasadena, including the Rose Bowl, is seen halfway down the right edge of the image. The communities of Glendale and Burbank, including the Burbank Airport, are seen at the center of the top edge of the image. Hazards from earthquakes, floods, and fires are intimately related to the topography in this area. Topographic data and other remote sensing images provide valuable information for assessing and mitigating natural hazards for cities such as Los Angeles.

    This image combines two types of data from the Shuttle Radar Topography Mission. The image brightness corresponds to the strength of the radar signal reflected from the ground, while colors show the elevation as measured by SRTM. Each cycle of colors (from pink through blue back to pink) represents an equal amount of elevation difference (400 meters, or 1300 feet) similar to contour lines on a standard topographic map. This image contains about 2400 meters (8000 feet) of total relief.
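    The wrapped color-as-height idea can be sketched as a cyclic hue mapping (an illustration only: the actual product's pink-through-blue palette is replaced here by a plain HSV hue ramp; the 400-meter cycle matches the caption):

```python
import colorsys

def wrapped_color(elev_m, cycle_m=400.0):
    """Map elevation to a repeating hue ramp: each full cycle of color
    corresponds to cycle_m meters, like contour bands on a topo map."""
    frac = (elev_m % cycle_m) / cycle_m          # position within the cycle
    r, g, b = colorsys.hsv_to_rgb(frac, 1.0, 1.0)
    return round(r * 255), round(g * 255), round(b * 255)

low = wrapped_color(0.0)
mid = wrapped_color(200.0)       # half a cycle away: opposite hue
wrapped = wrapped_color(400.0)   # one full cycle: same color as 0 m
```

Elevations exactly one cycle apart receive identical colors, so equal-color bands play the role of contour lines rather than encoding absolute height.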

    The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission is designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna, and improved tracking and navigation devices. The mission is a cooperative project between

  3. High-performance VGA-resolution digital color CMOS imager

    NASA Astrophysics Data System (ADS)

    Agwani, Suhail; Domer, Steve; Rubacha, Ray; Stanley, Scott

    1999-04-01

    This paper discusses the performance of a new VGA-resolution color CMOS imager developed by Motorola on a 0.5-micrometer/3.3-V CMOS process. This fully integrated, high-performance imager has an on-chip timing, control, and analog signal processing chain for digital imaging applications. The picture elements are based on 7.8-micrometer active CMOS pixels that use pinned photodiodes for higher quantum efficiency and low noise. The image processing engine includes a bank of programmable gain amplifiers, line-rate clamping for dark offset removal, real-time auto white balancing, per-column gain and offset calibration, and a 10-bit pipelined RSD analog-to-digital converter with a programmable input range. Post-ADC signal processing includes features such as bad-pixel replacement based on user-defined threshold levels, 10-to-8-bit companding, and 5-tap FIR filtering. The sensor can be programmed via a standard I2C interface that runs on 3.3-V clocks. Programmable features include variable frame rates using a constant-frequency master clock, electronic exposure control, continuous or single-frame capture, and progressive or interlaced scanning modes. Each pixel is individually addressable, allowing region-of-interest imaging and image subsampling. The sensor operates with master clock frequencies of up to 13.5 MHz, resulting in 30 frames per second. A total programmable gain of 27 dB is available. The sensor power dissipation is 400 mW at full speed. The low-noise design yields a measured 'system on a chip' dynamic range of 50 dB, giving over 8 true bits of resolution. Extremely high conversion gain results in an excellent peak sensitivity of 22 V/(microjoule/cm2), or 3.3 V/lux-sec. This monolithic image capture and processing engine represents a complete imaging solution, making it a true 'camera on a chip'. Yet it remains extremely easy to use, requiring only one clock and a 3.3-V power supply.
Given the available features and performance levels, this sensor will be

  4. Three frequency false color image of Flevoland, the Netherlands

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a three-frequency false color image of Flevoland, the Netherlands, centered at 52.4 degrees north latitude, 5.4 degrees east longitude. This image was acquired by the Spaceborne Imaging Radar-C and X-Band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the Shuttle Endeavour. The image covers an area of approximately 25 by 28 kilometers. Flevoland, which fills the lower two-thirds of the image, is a very flat area of reclaimed land used for agriculture and forestry. At the top of the image, across the canal from Flevoland, is an older forest shown in red; the city of Harderwijk is shown in white on the shore of the canal. At this time of the year, the agricultural fields are bare soil, and they show up in this image in blue. The dark blue areas are water, and the small dots in the canal are boats. The Jet Propulsion Laboratory alternative photo number is P-43941.

  5. Adaptive optics retinal imaging: emerging clinical applications.

    PubMed

    Godara, Pooja; Dubis, Adam M; Roorda, Austin; Duncan, Jacque L; Carroll, Joseph

    2010-12-01

    The human retina is a uniquely accessible tissue. Tools like scanning laser ophthalmoscopy and spectral domain-optical coherence tomography provide clinicians with remarkably clear pictures of the living retina. Although the anterior optics of the eye permit such non-invasive visualization of the retina and associated pathology, the same optics induce significant aberrations that preclude cellular-resolution imaging in most cases. Adaptive optics (AO) imaging systems use active optical elements to compensate for aberrations in the optical path between the object and the camera. When applied to the human eye, AO allows direct visualization of individual rod and cone photoreceptor cells, retinal pigment epithelium cells, and white blood cells. AO imaging has changed the way vision scientists and ophthalmologists see the retina, helping to clarify our understanding of retinal structure, function, and the etiology of various retinal pathologies. Here, we review some of the advances that were made possible with AO imaging of the human retina and discuss applications and future prospects for clinical imaging.

  6. Quaternionic Local Ranking Binary Pattern: A Local Descriptor of Color Images.

    PubMed

    Lan, Rushi; Zhou, Yicong; Tang, Yuan Yan

    2016-02-01

    This paper proposes a local descriptor called the quaternionic local ranking binary pattern (QLRBP) for color images. Unlike traditional descriptors that are extracted from each color channel separately or from vector representations, QLRBP works on the quaternionic representation (QR) of the color image, which encodes a color pixel using a quaternion. QLRBP is able to handle all color channels directly in the quaternionic domain and include their relations simultaneously. Applying a Clifford translation to the QR of the color image, QLRBP uses a reference quaternion to rank the QRs of two color pixels, and performs local binary coding on the phase of the transformed result to generate local descriptors of the color image. Experiments demonstrate that QLRBP outperforms several state-of-the-art methods. PMID:26672041
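    The quaternionic encoding, and a Clifford-style translation as left multiplication by a unit quaternion, can be sketched as follows (an illustration of the representation only; the ranking, phase coding, and binary-pattern stages of QLRBP are omitted, and the sample pixel and reference quaternion are made up):

```python
import math

def qmul(p, q):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw * qw - px * qx - py * qy - pz * qz,
            pw * qx + px * qw + py * qz - pz * qy,
            pw * qy - px * qz + py * qw + pz * qx,
            pw * qz + px * qy - py * qx + pz * qw)

def encode_pixel(r, g, b):
    """Encode an RGB pixel as the pure quaternion r*i + g*j + b*k, so
    all three channels are handled as a single algebraic object."""
    return (0.0, float(r), float(g), float(b))

def norm(q):
    return math.sqrt(sum(c * c for c in q))

p = encode_pixel(10, 20, 30)
mu = (0.5, 0.5, 0.5, 0.5)   # a unit quaternion (norm 1)
t = qmul(mu, p)             # translation: left multiplication by mu
```

Because the quaternion norm is multiplicative, left multiplication by a unit quaternion preserves the pixel's magnitude while mixing the three channels, which is why the relations between channels can be encoded jointly.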

  7. Speckle image reconstruction of the adaptive optics solar images.

    PubMed

    Zhong, Libo; Tian, Yu; Rao, Changhui

    2014-11-17

    In this paper, speckle image reconstruction, in which the speckle transfer function (STF) is modeled as an annular distribution according to the angular dependence of adaptive optics (AO) compensation, and the individual STF in each annulus is obtained from the corresponding Fried parameter calculated with the traditional spectral ratio method, is used to restore solar images corrected by an AO system. Reconstructions of solar images acquired by a 37-element AO system validate this method, and image quality is clearly improved. Moreover, we found that the photometric accuracy of the reconstruction is field dependent due to the influence of AO correction. With increasing angular separation of the object from the AO lockpoint, the relative improvement becomes more and more effective and tends toward a constant in regions far from the central field of view. Simulation results show that this phenomenon is mainly due to the angle-dependent disparity between the calculated STF and the real AO STF.

  8. Radar Image with Color as Height, Ancharn Kuy, Cambodia

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This image of Ancharn Kuy, Cambodia, was taken by NASA's Airborne Synthetic Aperture Radar (AIRSAR). The image depicts an area northwest of Angkor Wat. The radar has highlighted a number of circular village mounds in this region, many of which have a circular pattern of rice fields surrounding the slightly elevated site. Most of them have evidence of what seems to be pre-Angkor occupation, such as stone tools and potsherds. Most of them also have a group of five spirit posts, a pattern not found in other parts of Cambodia. The shape of the mound, the location in the midst of a ring of rice fields, the stone tools and the current practice of spirit veneration have revealed themselves through a unique 'marriage' of radar imaging, archaeological investigation, and anthropology.

    Ancharn Kuy is a small village adjacent to the road, with just this combination of features. The region gets slowly higher in elevation, something seen in the shift of color from yellow to blue as you move to the top of the image.

    The small dark rectangles are typical of the smaller water control devices employed in this area. While many of these in the center of Angkor are linked to temples of the 9th to 14th Century A.D., we cannot be sure of the construction date of these small village tanks. They may pre-date the temple complex, or they may have just been dug ten years ago!

    The image dimensions are approximately 4.75 by 4.3 kilometers (3 by 2.7 miles) with a pixel spacing of 5 meters (16.4 feet). North is at top. Image brightness is from the C-band (5.6 centimeters, or 2.2 inches) wavelength radar backscatter, which is a measure of how much energy the surface reflects back toward the radar. Color is used to represent elevation contours. One cycle of color (going from blue to red to yellow to green and back to blue again) corresponds to 10 meters (32.8 feet) of elevation change.

    AIRSAR flies aboard a NASA DC-8 based at NASA's Dryden Flight Research Center, Edwards, Calif

  9. Optical color-image encryption in the diffractive-imaging scheme

    NASA Astrophysics Data System (ADS)

    Qin, Yi; Wang, Zhipeng; Pan, Qunna; Gong, Qiong

    2016-02-01

    By introducing the theta modulation technique into a diffractive-imaging-based optical scheme, we propose a novel approach for color image encryption. For encryption, a color image is divided into three channels, i.e., red, green, and blue, and redundant data are appended to these components before they are sent to the encryption scheme. The carefully designed optical setup, which comprises three 4f optical architectures and a diffractive-imaging-based optical scheme, encodes the three plaintexts into a single noise-like intensity pattern. For decryption, an iterative phase retrieval algorithm, together with a filtering operation, is applied to extract the primary color images from the diffraction intensity map. Compared with previous methods, our proposal encrypts a color rather than grayscale image into a single intensity pattern, so the capacity and practicability are remarkably enhanced. The performance and security of the method are also investigated. The validity and feasibility of the proposed method are supported by numerical simulations.

  10. Functional magnetic resonance imaging adaptation reveals a noncategorical representation of hue in early visual cortex.

    PubMed

    Persichetti, Andrew S; Thompson-Schill, Sharon L; Butt, Omar H; Brainard, David H; Aguirre, Geoffrey K

    2015-01-01

    Color names divide the fine-grained gamut of color percepts into discrete categories. A categorical transition must occur somewhere between the initial encoding of the continuous spectrum of light by the cones and the verbal report of the name of a color stimulus. Here, we used a functional magnetic resonance imaging (fMRI) adaptation experiment to examine the representation of hue in the early visual cortex. Our stimuli varied in hue between blue and green. We found in the early visual areas (V1, V2/3, and hV4) a smoothly increasing recovery from adaptation with increasing hue distance between adjacent stimuli during both passive viewing (Experiment 1) and active categorization (Experiment 2). We examined the form of the adaptation effect and found no evidence that a categorical representation mediates the release from adaptation for stimuli that cross the blue-green color boundary. Examination of the direct effect of stimulus hue on the fMRI response did, however, reveal an enhanced response to stimuli near the blue-green category border. This was largest in hV4 and when subjects were engaged in active categorization of the stimulus hue. In contrast with a recent report from another laboratory (Bird, Berens, Horner, & Franklin, 2014), we found no evidence for a categorical representation of color in the middle frontal gyrus. A post hoc whole-brain analysis, however, revealed several regions in the frontal cortex with a categorical effect in the adaptation response. Overall, our results support the idea that the representation of color in the early visual cortex is primarily fine grained and does not reflect color categories. PMID:26024465

  11. Estimation of spectral transmittance curves from RGB images in color digital holographic microscopy using speckle illuminations

    NASA Astrophysics Data System (ADS)

    Funamizu, Hideki; Tokuno, Yuta; Aizu, Yoshihisa

    2016-06-01

    We investigate the estimation of spectral transmittance curves in color digital holographic microscopy using speckle illumination. Color digital holography has the disadvantage that the color-composite image gives poor color information because lasers at only two or three wavelengths are used. To overcome this disadvantage, the Wiener estimation method and an averaging process using multiple holograms are applied to color digital holographic microscopy. Estimated spectral transmittance and color-composite images are shown to indicate the usefulness of the proposed method.

  12. Enhancement dark channel algorithm of color fog image based on the local segmentation

    NASA Astrophysics Data System (ADS)

    Yun, Lijun; Gao, Yin; Shi, Jun-sheng; Xu, Ling-zhang

    2015-04-01

    The classical dark channel algorithm yields good results on single fog images, but in regions of higher contrast it introduces some distortion of image hue, brightness, and saturation, and also produces halo artifacts. In view of this, and through extensive experiments, this paper identifies some of the factors causing the halo phenomenon, and an enhanced dark channel algorithm for color fog images based on local segmentation is proposed. On the basis of dark channel theory, the classical mathematical model is first modified, mainly to correct the brightness and saturation of the image. Then, according to local adaptive segmentation theory, the image is processed in overlapping blocks; based on statistical rules, each pixel value is obtained from the segmentation processing to form the local image. Finally, dark channel theory is applied to obtain the enhanced fog image. Subjective observation and objective evaluation show that the algorithm outperforms the classical dark channel algorithm both overall and in detail.
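    The unmodified dark channel computation that the paper builds on can be sketched as follows (a generic illustration of the standard prior, not the proposed local-segmentation variant; the patch size, airlight value, and test pixels are made up):

```python
def dark_channel(img, patch=1):
    """Dark channel of an RGB image: per pixel, the minimum color value
    over a local patch. It is near 0 in haze-free regions; haze lifts it
    toward the airlight."""
    h, w = len(img), len(img[0])
    r = patch // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = min(
                min(img[yy][xx])
                for yy in range(max(0, y - r), min(h, y + r + 1))
                for xx in range(max(0, x - r), min(w, x + r + 1)))
    return out

# A hazy (uniformly bright) region next to one dark, haze-free pixel.
img = [[(200, 200, 210), (200, 205, 210)],
       [(200, 200, 215), (10, 30, 50)]]
dc = dark_channel(img)
A = 220.0                                             # assumed airlight
t = [[1 - 0.95 * v / A for v in row] for row in dc]   # transmission estimate
```

The hazy pixels get a high dark channel and hence a low estimated transmission, while the dark pixel is judged nearly haze-free; the halo artifacts the paper targets arise when a patch straddles a sharp contrast boundary.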

  13. A POCS-based restoration algorithm for restoring halftoned color-quantized images.

    PubMed

    Fung, Yik-Hing; Chan, Yuk-Hee

    2006-07-01

    This paper studies the restoration of images that are color-quantized with error diffusion. Though many algorithms have been proposed for restoring noisy blurred color images and for inverse halftoning, restoration of color-quantized images is rarely addressed in the literature, especially when the images are color-quantized with halftoning. Direct application of existing restoration techniques is generally inadequate to deal with this problem. In this paper, a restoration algorithm based on projection onto convex sets is proposed. This algorithm makes use of the available color palette and the mechanism of the halftoning process to derive useful a priori information for restoration. Simulation results showed that it could improve the quality of a halftoned color-quantized image remarkably in terms of both SNR and the CIELAB color difference metric.
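
The projection-onto-convex-sets idea can be sketched on a simplified 1-D analogue of the problem: the set of signals consistent with the observed quantization is a convex box per sample, the set of band-limited signals is a convex subspace, and alternating the two projections moves the estimate toward their intersection. A toy version (scalar quantization in place of a color palette with halftoning; every value here is hypothetical):

```python
import numpy as np

# POCS for inverse quantization: alternate projections onto
#   C1 = {x : x quantizes to the observed levels}  (a convex box per sample)
#   C2 = {x : x is band-limited}                   (a convex linear subspace)

def project_band_limited(x, keep):
    X = np.fft.rfft(x)
    X[keep:] = 0            # exact projection onto the low-frequency subspace
    return np.fft.irfft(X, len(x))

def project_quantization(x, y, step):
    lo, hi = y - step / 2, y + step / 2
    return np.clip(x, lo, hi)   # nearest point inside each quantization cell

n, step = 256, 0.25
k = np.arange(n)
truth = np.sin(2 * np.pi * k / n) + 0.3 * np.cos(6 * np.pi * k / n)
y = step * np.round(truth / step)        # quantized observation

x = y.copy()
for _ in range(50):
    x = project_quantization(project_band_limited(x, keep=8), y, step)

err_before = np.abs(y - truth).mean()
err_after = np.abs(x - truth).mean()
```

Because the true signal lies in both sets, each projection can only move the estimate closer to it, and the staircase quantization error largely disappears.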

  14. Automatic sputum color image segmentation for tuberculosis diagnosis

    NASA Astrophysics Data System (ADS)

    Forero-Vargas, Manuel G.; Sierra-Ballen, Eduard L.; Alvarez-Borrego, Josue; Pech-Pacheco, Jose L.; Cristobal-Perez, Gabriel; Alcala, Luis; Desco, Manuel

    2001-11-01

    Tuberculosis (TB) and other mycobacterioses are serious illnesses whose control is mainly based on presumptive diagnosis. Besides clinical suspicion, the diagnosis of mycobacteriosis must be confirmed through genus-specific smears of clinical specimens. However, these techniques lack sensitivity, and consequently clinicians must wait for culture results for as long as two months. Computer analysis of digital images from these smears could improve the sensitivity of the test and, moreover, decrease the workload of the mycobacteriologist. Segmenting bacteria of a particular species is a complex process: bacterial shape is not a sufficient discriminant feature, because many species share the same shape, so the segmentation procedure must be improved using color image information. In this paper we present two segmentation procedures, based on fuzzy rules and on phase-only correlation techniques respectively, that will provide the basis of a future automatic particle screening system.
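
Of the two procedures, phase-only correlation is the more self-contained to illustrate: the cross-power spectrum is normalized to unit magnitude so that only phase (i.e., position) information survives, and its inverse FFT peaks at the displacement of one pattern relative to the other. A minimal sketch on synthetic data (not the smear images used in the paper):

```python
import numpy as np

def phase_only_correlation(a, b):
    """POC surface of two equal-size images; it peaks at the shift taking a to b."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(A) * B
    cross /= np.abs(cross) + 1e-12       # discard magnitude, keep phase only
    return np.real(np.fft.ifft2(cross))

rng = np.random.default_rng(3)
a = rng.normal(size=(32, 32))
b = np.roll(a, (5, 3), axis=(0, 1))      # a circularly shifted copy of a
poc = phase_only_correlation(a, b)
dy, dx = np.unravel_index(int(np.argmax(poc)), poc.shape)
```

The peak location recovers the (5, 3) shift; matching a bacterium template against a smear image works the same way, with the peak height serving as the detection score.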

  15. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition

    PubMed Central

    Park, Chulhee; Kang, Moon Gi

    2016-01-01

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications with the advantages that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component of each RGB color channel. To overcome color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible band component and the NIR band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors. PMID:27213381

  18. Honolulu, Hawaii Radar Image, Wrapped Color as Height

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This topographic radar image shows the city of Honolulu, Hawaii and adjacent areas on the island of Oahu. Honolulu lies on the south shore of the island, right of center of the image. Just below the center is Pearl Harbor, marked by several inlets and bays. Runways of the airport can be seen to the right of Pearl Harbor. Diamond Head, an extinct volcanic crater, is a blue circle along the coast right of center. The Koolau mountain range runs through the center of the image. The steep cliffs on the north side of the range are thought to be remnants of massive landslides that ripped apart the volcanic mountains that built the island thousands of years ago. On the north shore of the island are the Mokapu Peninsula and Kaneohe Bay. High resolution topographic data allow ecologists and planners to assess the effects of urban development on the sensitive ecosystems in tropical regions.

    This image combines two types of data from the Shuttle Radar Topography Mission. The image brightness corresponds to the strength of the radar signal reflected from the ground, while colors show the elevation as measured by SRTM. Each cycle of colors (from pink through blue back to pink) represents an equal amount of elevation difference (400 meters, or 1300 feet) similar to contour lines on a standard topographic map. This image contains about 2400 meters (8000 feet) of total relief.
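
The wrapped "color as height" rendering amounts to mapping elevation onto a cyclic hue, so that each 400 meters of relief sweeps once through the palette and the colors repeat like contour bands. A minimal sketch using a plain HSV cycle (the actual SRTM pink-blue-pink palette differs):

```python
import colorsys

CYCLE_M = 400.0   # one full color cycle per 400 m of elevation

def wrapped_color(elevation_m):
    """Map elevation to a repeating hue, like contour-style color cycling."""
    hue = (elevation_m % CYCLE_M) / CYCLE_M
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)
```

Elevations exactly one cycle apart (say 0 m and 400 m) land on the same color, while elevations inside a cycle spread across the palette, which is why the 2400 m of relief in this image produces six bands of repeating color.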

    The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission is designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense and the German and Italian space agencies.

  19. Radar image with color as height, Bahia State, Brazil

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This radar image is the first to show the full 240-kilometer-wide (150-mile) swath collected by the Shuttle Radar Topography Mission (SRTM). The area shown is in the state of Bahia in Brazil. The semi-circular mountains along the left side of the image are the Serra Da Jacobin, which rise to 1100 meters (3600 feet) above sea level. The total relief shown is approximately 800 meters (2600 feet). The top part of the image is the Sertao, a semi-arid region that is subject to severe droughts during El Nino events. A small portion of the San Francisco River, the longest river (1609 kilometers or 1000 miles) entirely within Brazil, cuts across the upper right corner of the image. This river is a major source of water for irrigation and hydroelectric power. Mapping such regions will allow scientists to better understand the relationships between flooding cycles, drought and human influences on ecosystems.

    This image combines two types of data from the Shuttle Radar Topography Mission. The image brightness corresponds to the strength of the radar signal reflected from the ground, while colors show the elevation as measured by SRTM. The three dark vertical stripes show the boundaries where four segments of the swath are merged to form the full scanned swath. These will be removed in later processing. Colors range from green at the lowest elevations to reddish at the highest elevations.

    The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission is designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense and the German and Italian space agencies.

  20. San Gabriel Mountains, California, Radar image, color as height

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This topographic radar image shows the relationship of the urban area of Pasadena, California to the natural contours of the land. The image includes the alluvial plain on which Pasadena and the Jet Propulsion Laboratory sit, and the steep range of the San Gabriel Mountains. The mountain front and the arcuate valley running from upper left to the lower right are active fault zones, along which the mountains are rising. The chaparral-covered slopes above Pasadena are also a prime area for wildfires and mudslides. Hazards from earthquakes, floods and fires are intimately related to the topography in this area. Topographic data and other remote sensing images provide valuable information for assessing and mitigating the natural hazards for cities along the front of active mountain ranges.

    This image combines two types of data from the Shuttle Radar Topography Mission. The image brightness corresponds to the strength of the radar signal reflected from the ground, while colors show the elevation as measured by SRTM. Colors range from blue at the lowest elevations to white at the highest elevations. This image contains about 2300 meters (7500 feet) of total relief. White speckles on the face of some of the mountains are holes in the data caused by steep terrain. These will be filled using coverage from an intersecting pass.

    The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission is designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense and the German and Italian space agencies.

  1. Context cue-dependent saccadic adaptation in rhesus macaques cannot be elicited using color.

    PubMed

    Cecala, Aaron L; Smalianchuk, Ivan; Khanna, Sanjeev B; Smith, Matthew A; Gandhi, Neeraj J

    2015-07-01

    When the head does not move, rapid movements of the eyes called saccades are used to redirect the line of sight. Saccades are defined by a series of metrical and kinematic (evolution of a movement as a function of time) relationships. For example, the amplitude of a saccade made from one visual target to another is roughly 90% of the distance between the initial fixation point (T0) and the peripheral target (T1). However, this stereotypical relationship between saccade amplitude and initial retinal error (T1-T0) may be altered, either increased or decreased, by surreptitiously displacing a visual target during an ongoing saccade. This form of motor learning (called saccadic adaptation) has been described in both humans and monkeys. Recent experiments in humans and monkeys have suggested that internal (proprioceptive) and external (target shape, color, and/or motion) cues may be used to produce context-dependent adaptation. We tested the hypothesis that an external contextual cue (target color) could be used to evoke differential gain (actual saccade/initial retinal error) states in rhesus monkeys. We did not observe differential gain states correlated with target color regardless of whether targets were displaced along the same vector as the primary saccade or perpendicular to it. Furthermore, this observation held true regardless of whether adaptation trials using various colors and intrasaccade target displacements were randomly intermixed or presented in short or long blocks of trials. These results are consistent with hypotheses that state that color cannot be used as a contextual cue and are interpreted in light of previous studies of saccadic adaptation in both humans and monkeys. PMID:25995353

  3. Color measurement of tea leaves at different drying periods using hyperspectral imaging technique.

    PubMed

    Xie, Chuanqi; Li, Xiaoli; Shao, Yongni; He, Yong

    2014-01-01

    This study investigated the feasibility of using a hyperspectral imaging technique for nondestructive measurement of color components (ΔL*, Δa* and Δb*) and classification of tea leaves during different drying periods. Hyperspectral images of tea leaves at five drying periods were acquired in the spectral region of 380-1030 nm. The three color features were measured by a colorimeter. Different preprocessing algorithms were applied to select the best one according to the prediction results of partial least squares regression (PLSR) models. Competitive adaptive reweighted sampling (CARS) and the successive projections algorithm (SPA) were used to identify the effective wavelengths. Different models (least squares-support vector machine [LS-SVM], PLSR, principal components regression [PCR] and multiple linear regression [MLR]) were established to predict the three color components. The SPA-LS-SVM model performed excellently, with correlation coefficients (r_p) of 0.929 for ΔL*, 0.849 for Δa* and 0.917 for Δb*. An LS-SVM model was built for the classification of different tea leaves. The correct classification rates (CCRs) ranged from 89.29% to 100% in the calibration set and from 71.43% to 100% in the prediction set; the total classification results were 96.43% in the calibration set and 85.71% in the prediction set. The results showed that hyperspectral imaging could be used as an objective, nondestructive method to determine color features and classify tea leaves at different drying periods.
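
Among the compared models, the MLR baseline is the simplest to sketch: fit a linear map from selected band intensities to a color component on a calibration set, then report the correlation r_p on held-out samples. A toy version on synthetic data (the real pipeline uses measured hyperspectral bands at CARS/SPA-selected wavelengths; all numbers here are made up):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for hyperspectral data: 60 samples x 10 selected bands,
# with a "color component" that is a noisy linear function of the bands.
X = rng.normal(size=(60, 10))
true_w = rng.normal(size=10)
y = X @ true_w + 0.05 * rng.normal(size=60)

# MLR: least-squares fit on the calibration set, evaluated on the prediction set.
Xc, yc, Xp, yp = X[:40], y[:40], X[40:], y[40:]
w, *_ = np.linalg.lstsq(np.c_[Xc, np.ones(40)], yc, rcond=None)
pred = np.c_[Xp, np.ones(20)] @ w
r_p = np.corrcoef(yp, pred)[0, 1]
```

The calibration/prediction split mirrors the paper's evaluation protocol; LS-SVM and PLSR replace the plain least-squares fit but are scored the same way.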

  4. Color

    ERIC Educational Resources Information Center

    Bowman, Bruce

    1975-01-01

    The color wheel, because it is an excellent way to teach color theory, has become something of a traditional assignment in most basic design courses. The article describes a way to change this situation by redesigning and improving upon the basic color wheel. (Author/RK)

  5. Color image encryption based on gyrator transform and Arnold transform

    NASA Astrophysics Data System (ADS)

    Sui, Liansheng; Gao, Bo

    2013-06-01

    A color image encryption scheme using gyrator transform and Arnold transform is proposed, which has two security levels. In the first level, the color image is separated into three components: red, green and blue, which are normalized and scrambled using the Arnold transform. The green component is combined with the first random phase mask and transformed to an interim using the gyrator transform. The first random phase mask is generated with the sum of the blue component and a logistic map. Similarly, the red component is combined with the second random phase mask and transformed to three-channel-related data. The second random phase mask is generated with the sum of the phase of the interim and an asymmetrical tent map. In the second level, the three-channel-related data are scrambled again and combined with the third random phase mask generated with the sum of the previous chaotic maps, and then encrypted into a gray scale ciphertext. The encryption result has stationary white noise distribution and camouflage property to some extent. In the process of encryption and decryption, the rotation angle of gyrator transform, the iterative numbers of Arnold transform, the parameters of the chaotic map and generated accompanied phase function serve as encryption keys, and hence enhance the security of the system. Simulation results and security analysis are presented to confirm the security, validity and feasibility of the proposed scheme.
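
The Arnold transform used for scrambling is the cat map on pixel coordinates, (x, y) → (x + y, x + 2y) mod n. Because the map is a bijection on the n×n grid, applying the inverse map the same number of times restores the image exactly, which is why the iteration count can serve as a key. A minimal sketch:

```python
import numpy as np

def arnold(img, iterations=1):
    """Arnold cat-map scrambling of a square image: (x, y) -> (x+y, x+2y) mod n."""
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        scrambled = np.empty_like(out)
        scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

def arnold_inverse(img, iterations=1):
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        unscrambled = np.empty_like(out)
        unscrambled[x, y] = out[(x + y) % n, (x + 2 * y) % n]
        out = unscrambled
    return out

img = np.arange(64).reshape(8, 8)
enc = arnold(img, iterations=5)
dec = arnold_inverse(enc, iterations=5)
```

In the scheme above this scrambling is only one layer; the gyrator transform and the chaotic-map phase masks supply the diffusion that scrambling alone does not.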

  6. Retinal imaging using adaptive optics technology

    PubMed Central

    Kozak, Igor

    2014-01-01

    Adaptive optics (AO) is a technology used to improve the performance of optical systems by reducing the effect of wavefront distortions. Retinal imaging using AO aims to compensate for higher-order aberrations originating from the cornea and the lens by using a deformable mirror. The main application of AO retinal imaging has been to assess photoreceptor cell density, spacing, and mosaic regularity in normal and diseased eyes. Apart from photoreceptors, the retinal pigment epithelium, retinal nerve fiber layer, retinal vessel wall and lamina cribrosa can also be visualized with AO technology. Recent interest in AO technology in eye research has resulted in a growing number of reports and publications utilizing this technology in both animals and humans. With the availability of the first commercially available instruments, AO technology is making the transformation from a research tool to a diagnostic instrument. The current challenges include imaging eyes with less than perfect optical media, the formation of normative databases for acquired images such as cone mosaics, and the cost of the technology. The opportunities for AO will include more detailed diagnosis, with descriptions of new findings in retinal diseases and glaucoma, as well as the expansion of AO into clinical trials, which has already started. PMID:24843304

  8. Dual-color 3D superresolution microscopy by combined spectral-demixing and biplane imaging.

    PubMed

    Winterflood, Christian M; Platonova, Evgenia; Albrecht, David; Ewers, Helge

    2015-07-01

    Multicolor three-dimensional (3D) superresolution techniques allow important insight into the relative organization of cellular structures. While a number of innovative solutions have emerged, multicolor 3D techniques still face significant technical challenges. In this Letter we provide a straightforward approach to single-molecule localization microscopy imaging in three dimensions and two colors. We combine biplane imaging and spectral-demixing, which eliminates a number of problems, including color cross-talk, chromatic aberration effects, and problems with color registration. We present 3D dual-color images of nanoscopic structures in hippocampal neurons with a 3D compound resolution routinely achieved only in a single color.

  9. Extreme Adaptive Optics Planet Imager: XAOPI

    SciTech Connect

    Macintosh, B A; Graham, J; Poyneer, L; Sommargren, G; Wilhelmsen, J; Gavel, D; Jones, S; Kalas, P; Lloyd, J; Makidon, R; Olivier, S; Palmer, D; Patience, J; Perrin, M; Severson, S; Sheinis, A; Sivaramakrishnan, A; Troy, M; Wallace, K

    2003-09-17

    Ground-based adaptive optics is a potentially powerful technique for direct-imaging detection of extrasolar planets. Turbulence in the Earth's atmosphere imposes some fundamental limits, but the large size of ground-based telescopes compared to spacecraft can work to mitigate this. We are carrying out a design study for a dedicated ultra-high-contrast system, the eXtreme Adaptive Optics Planet Imager (XAOPI), which could be deployed on an 8-10 m telescope in 2007. With a 4096-actuator MEMS deformable mirror it should achieve Strehl >0.9 in the near-IR. Using an innovative spatially filtered wavefront sensor, the system will be optimized to control scattered light over a large radius and suppress artifacts caused by static errors. We predict that it will achieve contrast levels of 10^7-10^8 at angular separations of 0.2-0.8 arcseconds around a large sample of stars (R<7-10), sufficient to detect Jupiter-like planets through their near-IR emission over a wide range of ages and masses. We are constructing a high-contrast AO testbed to verify key concepts of our system, and present preliminary results here, showing an RMS wavefront error of <1.3 nm with a flat mirror.

  10. Separate channels for processing form, texture, and color: evidence from FMRI adaptation and visual object agnosia.

    PubMed

    Cavina-Pratesi, C; Kentridge, R W; Heywood, C A; Milner, A D

    2010-10-01

    Previous neuroimaging research suggests that although object shape is analyzed in the lateral occipital cortex, surface properties of objects, such as color and texture, are dealt with in more medial areas, close to the collateral sulcus (CoS). The present study sought to determine whether there is a single medial region concerned with surface properties in general or whether instead there are multiple foci independently extracting different surface properties. We used stimuli varying in their shape, texture, or color, and tested healthy participants and 2 object-agnosic patients, in both a discrimination task and a functional MR adaptation paradigm. We found a double dissociation between medial and lateral occipitotemporal cortices in processing surface (texture or color) versus geometric (shape) properties, respectively. In Experiment 2, we found that the medial occipitotemporal cortex houses separate foci for color (within anterior CoS and lingual gyrus) and texture (caudally within posterior CoS). In addition, we found that areas selective for shape, texture, and color individually were quite distinct from those that respond to all of these features together (shape and texture and color). These latter areas appear to correspond to those associated with the perception of complex stimuli such as faces and places.

  11. Comparison of Color Model in Cotton Image Under Conditions of Natural Light

    NASA Astrophysics Data System (ADS)

    Zhang, J. H.; Kong, F. T.; Wu, J. Z.; Wang, S. W.; Liu, J. J.; Zhao, P.

    Although color images contain a large amount of information reflecting species characteristics, different color models convey different information, and the selection of a color model is the key to separating crops from background effectively and rapidly. Taking cotton images collected under natural light as the object, we convert them into the color components of the RGB, HSL and YIQ color models, and then evaluate the nine converted color components with both subjective and objective methods. Subjective evaluation concludes that the gray values of the soil, straw and plastic-film regions remain stable, without large fluctuation, in the Q component. For objective evaluation, we use the variance, average gradient, gray-prediction error statistics and information entropy methods, and find that the Q color component is the most suitable for background segmentation.
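
The Q (purple-green opponent) component singled out here comes from the NTSC YIQ transform; one common convention is Q = 0.212R − 0.523G + 0.311B, whose coefficients sum to zero, so neutral grays (soil, film) map to Q ≈ 0 while green vegetation goes strongly negative. A small sketch (the coefficients are one standard convention, and the sample pixels are illustrative, not measured):

```python
import numpy as np

def rgb_to_q(rgb):
    """Q component of the NTSC YIQ transform for float RGB in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.212 * r - 0.523 * g + 0.311 * b

green_leaf = np.array([0.2, 0.6, 0.1])   # hypothetical cotton-plant pixel
gray_soil = np.array([0.5, 0.5, 0.5])    # hypothetical soil/film pixel
q_leaf, q_soil = rgb_to_q(green_leaf), rgb_to_q(gray_soil)
```

This zero-sum property is what makes Q stable over the gray background regions and thus a convenient single channel for thresholding plants against soil.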

  12. Landsat ETM+ False-Color Image Mosaics of Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.

    2007-01-01

    In 2005, the U.S. Agency for International Development and the U.S. Trade and Development Agency contracted with the U.S. Geological Survey to perform assessments of the natural resources within Afghanistan. The assessments concentrate on the resources that are related to the economic development of that country. Therefore, assessments were initiated in oil and gas, coal, mineral resources, water resources, and earthquake hazards. All of these assessments require geologic, structural, and topographic information throughout the country at a finer scale and better accuracy than that provided by the existing maps, which were published in the 1970's by the Russians and Germans. The very rugged terrain in Afghanistan, the large scale of these assessments, and the terrorist threat in Afghanistan indicated that the best approach to provide the preliminary assessments was to use remotely sensed, satellite image data, although this may also apply to subsequent phases of the assessments. Therefore, the first step in the assessment process was to produce satellite image mosaics of Afghanistan that would be useful for these assessments. This report discusses the production of the Landsat false-color image database produced for these assessments, which was produced from the calibrated Landsat ETM+ image mosaics described by Davis (2006).

  13. Colors of Alien Worlds from Direct Imaging Exoplanet Missions

    NASA Astrophysics Data System (ADS)

    Hu, Renyu

    2016-01-01

    Future direct-imaging exoplanet missions such as WFIRST will measure the reflectivity of exoplanets at visible wavelengths. Most of the exoplanets to be observed will be located further away from their parent stars than is Earth from the Sun. These "cold" exoplanets have atmospheric environments conducive for the formation of water and/or ammonia clouds, like Jupiter in the Solar System. I find the mixing ratio of methane and the pressure level of the uppermost cloud deck on these planets can be uniquely determined from their reflection spectra, with moderate spectral resolution, if the cloud deck is between 0.6 and 1.5 bars. The existence of this unique solution is useful for exoplanet direct imaging missions for several reasons. First, the weak bands and strong bands of methane enable the measurement of the methane mixing ratio and the cloud pressure, although an overlying haze layer can bias the estimate of the latter. Second, the cloud pressure, once derived, yields an important constraint on the internal heat flux from the planet, thus indicating its thermal evolution. Third, water worlds having H2O-dominated atmospheres are likely to have water clouds located higher than the 10^-3 bar pressure level, and muted spectral absorption features. These planets would occupy a confined phase space in the color-color diagrams, likely distinguishable from H2-rich giant exoplanets by broadband observations. Therefore, direct-imaging exoplanet missions may offer the capability to broadly distinguish H2-rich giant exoplanets versus H2O-rich super-Earth exoplanets, and to detect ammonia and/or water clouds and methane gas in their atmospheres.

  14. Colors of Alien Worlds from Direct Imaging Exoplanet Missions

    NASA Astrophysics Data System (ADS)

    Hu, Renyu

    2015-08-01

    Future direct-imaging exoplanet missions such as WFIRST/AFTA, Exo-C, and Exo-S will measure the reflectivity of exoplanets at visible wavelengths. Most of the exoplanets to be observed will be located further away from their parent stars than is Earth from the Sun. These “cold” exoplanets have atmospheric environments conducive to the formation of water and/or ammonia clouds, like Jupiter in the Solar System. I find the mixing ratio of methane and the pressure level of the uppermost cloud deck on these planets can be uniquely determined from their reflection spectra, with moderate spectral resolution, if the cloud deck is between 0.6 and 1.5 bars. The existence of this unique solution is useful for exoplanet direct imaging missions for several reasons. First, the weak bands and strong bands of methane enable the measurement of the methane mixing ratio and the cloud pressure, although an overlying haze layer can bias the estimate of the latter. Second, the cloud pressure, once derived, yields an important constraint on the internal heat flux from the planet, and thus on its thermal evolution. Third, water worlds having H2O-dominated atmospheres are likely to have water clouds located higher than the 10^-3 bar pressure level, and muted spectral absorption features. These planets would occupy a confined phase space in the color-color diagrams, likely distinguishable from H2-rich giant exoplanets by broadband observations. Therefore, direct-imaging exoplanet missions may offer the capability to broadly distinguish H2-rich giant exoplanets from H2O-rich super-Earth exoplanets, and to detect ammonia and/or water clouds and methane gas in their atmospheres.

  15. Radar Image with Color as Height, Old Khmer Road, Cambodia

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This image shows the Old Khmer Road (Inrdratataka-Bakheng causeway) in Cambodia extending from the 9th Century A.D. capital city of Hariharalaya in the lower right portion of the image to the later 10th Century A.D. capital of Yasodharapura. This was located in the vicinity of Phnom Bakheng (not shown in image). The Old Road is believed to be more than 1000 years old. Its precise role and destination within the 'new' city at Angkor is still being studied by archeologists. But wherever it ended, it not only offered an immense processional way for the King to move between old and new capitals, it also linked the two areas, widening the territorial base of the Khmer King. Finally, in the past and today, the Old Road managed the waters of the floodplain. It acted as a long barrage or dam not only for the natural streams of the area but also for the changes brought to the local hydrology by Khmer population growth.

    The image was acquired by NASA's Airborne Synthetic Aperture Radar (AIRSAR). Image brightness is from the P-band (68 cm wavelength) radar backscatter, which is a measure of how much energy the surface reflects back towards the radar. Color is used to represent elevation contours. One cycle of color represents 20 m of elevation change; that is, going from blue to red to yellow to green and back to blue again corresponds to 20 m of elevation change. Image dimensions are approximately 3.4 km by 3.5 km with a pixel spacing of 5 m. North is at top.
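
    The color-wrapping scheme described in the caption (one full color cycle per 20 m of elevation change) amounts to a modular hue mapping. A minimal sketch follows; the hue ordering is illustrative, not AIRSAR's actual palette:

```python
import colorsys

CYCLE_M = 20.0  # one full color cycle per 20 m of elevation change

def elevation_to_rgb(elev_m):
    """Map elevation to a cyclic hue: heights one full cycle apart wrap
    around the color wheel and render as the same color."""
    hue = (elev_m % CYCLE_M) / CYCLE_M          # fraction of the color wheel
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)   # fully saturated, full value

# 3 m and 23 m differ by exactly one 20 m cycle, so they share a color:
print(elevation_to_rgb(3.0) == elevation_to_rgb(23.0))  # True
```

    In a real product this hue would modulate the radar-backscatter brightness so both elevation and reflectivity remain visible.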

    AIRSAR flies aboard a NASA DC-8 based at NASA's Dryden Flight Research Center, Edwards, Calif. In the TOPSAR mode, AIRSAR collects radar interferometry data from two spatially separated antennas (2.6 meters, or 8.5 feet). Information from the two antennas is used to form radar backscatter imagery and to generate highly accurate elevation data. Built, operated and managed by JPL, AIRSAR is part of NASA's Earth Science Enterprise program. JPL is a division of the California Institute of Technology in Pasadena.

  16. Single camera imaging system for color and near-infrared fluorescence image guided surgery

    PubMed Central

    Chen, Zhenyue; Zhu, Nan; Pacheco, Shaun; Wang, Xia; Liang, Rongguang

    2014-01-01

    Near-infrared (NIR) fluorescence imaging systems have been developed for image guided surgery in recent years. However, current systems are typically bulky and work only when surgical light in the operating room (OR) is off. We propose a single camera imaging system that is capable of capturing NIR fluorescence and color images under normal surgical lighting illumination. Using a new RGB-NIR sensor and synchronized NIR excitation illumination, we have demonstrated that the system can acquire both color information and fluorescence signal with high sensitivity under normal surgical lighting illumination. The experimental results show that an ICG sample with a concentration of 0.13 μM can be detected when the excitation irradiance is 3.92 mW/cm2 at an exposure time of 10 ms. PMID:25136502

  17. Adaptive Optics Imaging of Exoplanet Host Stars

    NASA Astrophysics Data System (ADS)

    Herman, Miranda; Waaler, Mason; Patience, Jennifer; Ward-Duong, Kimberly; Rajan, Abhijith; McCarthy, Don; Kulesa, Craig; Wilson, Paul A.

    2016-01-01

    With the Arizona Infrared imager and Echelle Spectrograph (ARIES) instrument on the 6.5m MMT telescope, we obtained high angular resolution adaptive optics images of 12 exoplanet host stars. The targets are all systems with exoplanets in extremely close orbits such that the planets transit the host stars and cause regular brightness changes in the stars. The transit depth of the light curve is used to infer the radius and, in combination with radial velocity measurements, the density of the planet, but the results can be biased if the light from the host star is the combined light of a pair of stars in a binary system or a chance alignment of two stars. Given the high frequency of binary star systems and the increasing number of transit exoplanet discoveries from Kepler, K2, and anticipated discoveries with the Transiting Exoplanet Survey Satellite (TESS), this is a crucial point to consider when interpreting exoplanet properties. Companions were identified around five of the twelve targets at separations close enough that the brightness measurements of these host stars are in fact the combined brightness of two stars. Images of the resolved stellar systems and reanalysis of the exoplanet properties accounting for the presence of two stars are presented.

  18. New Orleans Topography, Radar Image with Colored Height

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site]

    About the animation: This simulated view of the potential effects of storm surge flooding on Lake Pontchartrain and the New Orleans area was generated with data from the Shuttle Radar Topography Mission. Although it is protected by levees and sea walls against storm surges of 18 to 20 feet, much of the city is below sea level, and flooding due to storm surges caused by major hurricanes is a concern. The animation shows regions that, if unprotected, would be inundated with water. The animation depicts flooding in one-meter increments.

    About the image: The city of New Orleans, situated on the southern shore of Lake Pontchartrain, is shown in this radar image from the Shuttle Radar Topography Mission (SRTM). In this image bright areas show regions of high radar reflectivity, such as from urban areas, and elevations have been coded in color using height data also from the SRTM mission. Dark green colors indicate low elevations, rising through yellow and tan, to white at the highest elevations.

    New Orleans is near the center of this scene, between the lake and the Mississippi River. The line spanning the lake is the Lake Pontchartrain Causeway, the world's longest overwater highway bridge. Major portions of the city of New Orleans are actually below sea level, and although it is protected by levees and sea walls that are designed to protect against storm surges of 18 to 20 feet, flooding during storm surges associated with major hurricanes is a significant concern.

    Data used in this image were acquired by the Shuttle Radar Topography Mission aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface

  19. Filter-free image sensor pixels comprising silicon nanowires with selective color absorption.

    PubMed

    Park, Hyunsung; Dan, Yaping; Seo, Kwanyong; Yu, Young J; Duane, Peter K; Wober, Munib; Crozier, Kenneth B

    2014-01-01

    The organic dye filters of conventional color image sensors achieve the red/green/blue response needed for color imaging, but have disadvantages related to durability, low absorption coefficient, and fabrication complexity. Here, we report a new paradigm for color imaging based on all-silicon nanowire devices and no filters. We fabricate pixels consisting of vertical silicon nanowires with integrated photodetectors, demonstrate that their spectral sensitivities are governed by nanowire radius, and perform color imaging. Our approach is conceptually different from filter-based methods, as absorbed light is converted to photocurrent, ultimately presenting the opportunity for very high photon efficiency.

  20. Color image super-resolution reconstruction based on POCS with edge preserving

    NASA Astrophysics Data System (ADS)

    Wang, Rui; Liang, Ying; Liang, Yu

    2015-10-01

    A color image super-resolution (SR) reconstruction method based on an improved Projection onto Convex Sets (POCS) in YCbCr space is proposed. Compared with other methods, the POCS method is more intuitive and generally simple to implement. However, the conventional POCS algorithm is sensitive to the accuracy of motion estimation and is not well suited to recovering the edges and details of images. To address these two problems, we first improve the LOG operator to detect edges along the +/-0°, +/-45°, +/-90°, and +/-135° directions in order to inhibit edge degradation. Using this edge information, we propose a self-adaptive edge-directed interpolation to construct a reference image, together with a modified adaptive directional PSF to reduce edge oscillation when revising the reference. Second, instead of block matching, the Speeded-Up Robust Features (SURF) matching algorithm, which accurately extracts feature points invariant to affine transformation, rotation, scale, and illumination changes, is utilized to improve the robustness and real-time performance of motion estimation. The performance of the proposed approach has been tested on several images, and the results demonstrate that it is competitive with, or better than, the traditional POCS in both quality and efficiency.
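
    The core of any POCS reconstruction is a data-consistency projection: correct the current high-resolution (HR) estimate so that re-simulating the low-resolution (LR) observation reproduces it. A toy one-dimensional sketch follows; the block-averaging observation model and uniform back-projection are simplifications, not the paper's edge-directed variant:

```python
def downsample(x, factor=2):
    """Toy observation model: average each block of `factor` samples."""
    return [sum(x[i:i + factor]) / factor for i in range(0, len(x), factor)]

def pocs_projection(hr, lr, factor=2):
    """One POCS data-consistency step: adjust the HR estimate so the
    simulated LR observation matches `lr` exactly."""
    out = hr[:]
    sim = downsample(hr, factor)
    for k, (s, o) in enumerate(zip(sim, lr)):
        err = o - s                              # residual at this LR sample
        for j in range(k * factor, (k + 1) * factor):
            out[j] += err                        # back-project uniformly
    return out

hr = [0.0] * 8                  # initial HR guess
lr = [1.0, 2.0, 3.0, 4.0]       # observed LR signal
hr = pocs_projection(hr, lr)
print(downsample(hr))           # [1.0, 2.0, 3.0, 4.0] — consistent
```

    Real POCS iterates this projection together with other convex constraints (amplitude bounds, smoothness) and a motion-compensated PSF rather than plain averaging.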

  1. [Color processing of ultrasonographic images in extracorporeal lithotripsy].

    PubMed

    Lardennois, B; Ziade, A; Walter, K

    1991-02-01

    A number of technical difficulties are encountered in the ultrasonographic detection of renal stones which unfortunately limit its performance. The margin of error of firing in extracorporeal shock-wave lithotripsy (ESWL) must be reduced to a minimum. The role of ultrasonographic monitoring during lithotripsy is also essential: continuous control of the focusing of the shock-wave beam and assessment of the quality of fragmentation. The authors propose to improve ultrasonographic imaging in ESWL by means of intraoperative colour processing of the stone. Each shot must be directed to its target with an economy of vision, avoiding excessive fatigue. The principle of the technique consists of digitization of the ultrasound video images using a Macintosh Mac 2 computer. The Graphis Paint II program is interfaced directly with the Quick Capture card and recovers the images on its work surface in real time. The program is then able to attribute to each of the 256 shades of grey any one of the 16.6 million colours of the Macintosh universe, with specific intensity and saturation. During fragmentation, using the principle of a palette, the stone changes colour from green to red, indicating complete fragmentation. A Color Space card converts the digital image obtained into an analogue video source which is visualized on the monitor. It can be superimposed and/or juxtaposed with the source image by means of a multi-standard mixing table. Colour processing of ultrasonographic images in extracorporeal shock-wave lithotripsy allows better visualization of the stones and better follow-up of fragmentation, and allows the shock-wave treatment to be stopped earlier. It increases the stone-free performance at 6 months. This configuration will eventually be able to be integrated into the ultrasound apparatus itself.
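
    The palette idea — remapping the 256 grey levels so the stone shifts from green toward red as fragmentation progresses — can be sketched as a lookup table. The linear green-to-red ramp below is an illustrative choice, not the authors' exact palette:

```python
def fragmentation_palette(gray):
    """Map an 8-bit grey level to an (R, G, B) triple: low echo intensity
    renders green (intact stone), high renders red (fragmented)."""
    t = gray / 255.0
    return (round(255 * t), round(255 * (1 - t)), 0)

# Build the full 256-entry lookup table once, then index per pixel:
lut = [fragmentation_palette(g) for g in range(256)]
print(lut[0], lut[255])  # (0, 255, 0) (255, 0, 0)
```

    Applying `lut` to a digitized frame is a single table lookup per pixel, which is why such pseudocolor displays were feasible in real time even on 1991-era hardware.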

  2. Survey of contemporary trends in color image segmentation

    NASA Astrophysics Data System (ADS)

    Vantaram, Sreenath Rao; Saber, Eli

    2012-10-01

    In recent years, the acquisition of image and video information for processing, analysis, understanding, and exploitation of the underlying content in various applications, ranging from remote sensing to biomedical imaging, has grown at an unprecedented rate. Analysis by human observers is quite laborious, tiresome, and time consuming, if not infeasible, given the large and continuously rising volume of data. Hence the need for systems capable of automatically and effectively analyzing the aforementioned imagery for a variety of uses that span the spectrum from homeland security to elderly care. In order to achieve the above, tools such as image segmentation provide the appropriate foundation for expediting and improving the effectiveness of subsequent high-level tasks by providing a condensed and pertinent representation of image information. We provide a comprehensive survey of color image segmentation strategies adopted over the last decade, though notable contributions in the gray scale domain will also be discussed. Our taxonomy of segmentation techniques is sampled from a wide spectrum of spatially blind (or feature-based) approaches such as clustering and histogram thresholding as well as spatially guided (or spatial domain-based) methods such as region growing/splitting/merging, energy-driven parametric/geometric active contours, supervised/unsupervised graph cuts, and watersheds, to name a few. In addition, qualitative and quantitative results of prominent algorithms on several images from the Berkeley segmentation dataset are shown in order to furnish a fair indication of the current quality of the state of the art. Finally, we provide a brief discussion on our current perspective of the field as well as its associated future trends.

  3. Vicarious calibration of the Geostationary Ocean Color Imager.

    PubMed

    Ahn, Jae-Hyun; Park, Young-Je; Kim, Wonkook; Lee, Boram; Oh, Im Sang

    2015-09-01

    Measurements of ocean color from Geostationary Ocean Color Imager (GOCI) with a moderate spatial resolution and a high temporal frequency demonstrate high value for a number of oceanographic applications. This study aims to propose and evaluate the calibration of GOCI as needed to achieve the level of radiometric accuracy desired for ocean color studies. Previous studies reported that the GOCI retrievals of normalized water-leaving radiances (nLw) are biased high for all visible bands due to the lack of vicarious calibration. The vicarious calibration approach described here relies on the assumed constant aerosol characteristics over the open-ocean sites to accurately estimate atmospheric radiances for the two near-infrared (NIR) bands. The vicarious calibration of visible bands is performed using in situ nLw measurements and the satellite-estimated atmospheric radiance using two NIR bands over the case-1 waters. Prior to this analysis, the in situ nLw spectra in the NIR are corrected by the spectrum optimization technique based on the NIR similarity spectrum assumption. The vicarious calibration gain factors derived for all GOCI bands (except 865 nm) significantly improve agreement in retrieved remote-sensing reflectance (Rrs) relative to in situ measurements. These gain factors are independent of angular geometry and possible temporal variability. To further increase the confidence in the calibration gain factors, a large data set from shipboard measurements and AERONET-OC is used in the validation process. It is shown that the absolute percentage difference of the atmospheric correction results from the vicariously calibrated GOCI system is reduced by ~6.8%.
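
    In essence, a vicarious gain factor is a per-band multiplicative correction: the ratio of the radiance the sensor should have measured (reconstructed from in situ data plus a modeled atmosphere) to what it actually reported. A schematic sketch, with hypothetical band names and radiance values:

```python
def vicarious_gains(l_target, l_measured):
    """Band-by-band gain factors: ratio of the expected top-of-atmosphere
    radiance to the radiance the satellite actually measured."""
    return {band: l_target[band] / l_measured[band] for band in l_target}

# Hypothetical radiances (arbitrary units) for three visible bands:
target   = {"443": 80.0, "555": 60.0, "660": 40.0}
measured = {"443": 84.0, "555": 58.8, "660": 41.2}

gains = vicarious_gains(target, measured)
calibrated = {b: gains[b] * measured[b] for b in measured}
print(all(abs(calibrated[b] - target[b]) < 1e-9 for b in target))  # True
```

    The hard part in practice is not this ratio but building `target` credibly, which is why the paper fixes the NIR bands first via the constant-aerosol assumption.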

  4. Colorization and Automated Segmentation of Human T2 MR Brain Images for Characterization of Soft Tissues

    PubMed Central

    Attique, Muhammad; Gilanie, Ghulam; Hafeez-Ullah; Mehmood, Malik S.; Naweed, Muhammad S.; Ikram, Masroor; Kamran, Javed A.; Vitkin, Alex

    2012-01-01

    Characterization of tissues like brain by using magnetic resonance (MR) images and colorization of the gray-scale image have been reported in the literature, along with their advantages and drawbacks. Here, we present two independent methods: (i) a novel colorization method to underscore the variability in brain MR images, indicative of the underlying physical density of biological tissue, and (ii) a segmentation method (both hard and soft segmentation) to characterize gray brain MR images. The segmented images are then transformed into color using the above-mentioned colorization method, yielding promising results for manual tracing. Our color transformation incorporates the voxel classification by matching the luminance of voxels of the source MR image and the provided color image by measuring the distance between them. The segmentation method is based on single-phase clustering for 2D and 3D image segmentation with a new automatic centroid selection method, which divides the image into three distinct regions (gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF)) using prior anatomical knowledge. Results have been successfully validated on human T2-weighted (T2) brain MR images. The proposed method can potentially be applied to gray-scale images from other imaging modalities, bringing out additional diagnostic tissue information through the colorized image processing approach described. PMID:22479421
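
    Clustering voxel intensities into three tissue classes can be sketched with a generic one-dimensional k-means; the paper's own automatic centroid selection is not reproduced here, and quantile seeding plus the intensity values are illustrative assumptions:

```python
def kmeans_1d(values, k=3, iters=25):
    """Cluster scalar voxel intensities into k tissue classes.
    Initial centroids are picked at evenly spaced quantiles."""
    vs = sorted(values)
    cents = [vs[(2 * i + 1) * len(vs) // (2 * k)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            j = min(range(k), key=lambda i: abs(v - cents[i]))
            clusters[j].append(v)
        cents = [sum(c) / len(c) if c else cents[i]
                 for i, c in enumerate(clusters)]
    return sorted(cents)

# Hypothetical T2 intensities: dark WM, mid GM, bright CSF voxels
intensities = [30, 32, 35, 33, 90, 95, 92, 88, 200, 210, 205, 198]
print(kmeans_1d(intensities))  # [32.5, 91.25, 203.25]
```

    The three centroids then label every voxel by nearest class, after which the colorization step assigns each class its own hue.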

  5. Color-coded LED microscopy for multi-contrast and quantitative phase-gradient imaging.

    PubMed

    Lee, Donghak; Ryu, Suho; Kim, Uihan; Jung, Daeseong; Joo, Chulmin

    2015-12-01

    We present a multi-contrast microscope based on color-coded illumination and computation. A programmable three-color light-emitting diode (LED) array illuminates a specimen, in which each color corresponds to a different illumination angle. A single color image sensor records light transmitted through the specimen, and images at each color channel are then separated and utilized to obtain bright-field, dark-field, and differential phase contrast (DPC) images simultaneously. Quantitative phase imaging is also achieved based on DPC images acquired with two different LED illumination patterns. The multi-contrast and quantitative phase imaging capabilities of our method are demonstrated by presenting images of various transparent biological samples. PMID:26713205
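
    The DPC computation itself is a normalized difference of two images acquired under complementary half-pupil illumination, which is what the color channels encode here. A minimal sketch on flattened pixel lists (the sample values are made up):

```python
def dpc(i_a, i_b):
    """Differential phase contrast: normalized difference of two images
    taken with complementary (e.g., top/bottom) illumination halves."""
    return [(a - b) / (a + b) if (a + b) else 0.0
            for a, b in zip(i_a, i_b)]

# A phase gradient tilts light toward one illumination half, producing
# an intensity asymmetry; flat regions give zero DPC signal:
print(dpc([2.0, 1.0, 3.0], [1.0, 1.0, 3.0]))  # ≈ [0.333, 0.0, 0.0]
```

    Because the three LED colors are captured in one exposure of a color sensor, both complementary images come from the same frame, which is what makes the bright-field/dark-field/DPC outputs simultaneous.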

  6. Single underwater image enhancement based on color cast removal and visibility restoration

    NASA Astrophysics Data System (ADS)

    Li, Chongyi; Guo, Jichang; Wang, Bo; Cong, Runmin; Zhang, Yan; Wang, Jian

    2016-05-01

    Images taken under underwater condition usually have color cast and serious loss of contrast and visibility. Degraded underwater images are inconvenient for observation and analysis. In order to address these problems, an underwater image-enhancement method is proposed. A simple yet effective underwater image color cast removal algorithm is first presented based on the optimization theory. Then, based on the minimum information loss principle and inherent relationship of medium transmission maps of three color channels in an underwater image, an effective visibility restoration algorithm is proposed to recover visibility, contrast, and natural appearance of degraded underwater images. To evaluate the performance of the proposed method, qualitative comparison, quantitative comparison, and color accuracy test are conducted. Experimental results demonstrate that the proposed method can effectively remove color cast, improve contrast and visibility, and recover natural appearance of degraded underwater images. Additionally, the proposed method is comparable to and even better than several state-of-the-art methods.
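
    The paper's optimization-based cast removal is not reproduced here; the classic gray-world correction below conveys the underlying idea — rescale each channel so no single color dominates the scene average:

```python
def gray_world(pixels):
    """Remove a global color cast by scaling each channel so that all
    three channel means equal the overall mean intensity."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    return [tuple(p[c] * gray / means[c] for c in range(3)) for p in pixels]

# A toy blue-cast scene: the blue channel mean is double the red mean.
cast = [(0.2, 0.3, 0.6), (0.4, 0.5, 0.8)]
balanced = gray_world(cast)
```

    After correction the three channel means coincide, so the cast is gone; a full underwater pipeline would follow this with the transmission-map-based visibility restoration the abstract describes.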

  7. Color image encryption based on color blend and chaos permutation in the reality-preserving multiple-parameter fractional Fourier transform domain

    NASA Astrophysics Data System (ADS)

    Lang, Jun

    2015-03-01

    In this paper, we propose a novel color image encryption method by using Color Blend (CB) and Chaos Permutation (CP) operations in the reality-preserving multiple-parameter fractional Fourier transform (RPMPFRFT) domain. The original color image is first exchanged and mixed randomly from the standard red-green-blue (RGB) color space to R‧G‧B‧ color space by rotating the color cube with a random angle matrix. Then RPMPFRFT is employed for changing the pixel values of the color image, and the three components of the scrambled RGB color space are converted by RPMPFRFT with three different transform pairs, respectively. In contrast to transforms with complex-valued output, the RPMPFRFT ensures that the output is real, which saves image storage space and is convenient for transmission in practical applications. To further enhance the security of the encryption system, the output of the former steps is scrambled by juxtaposition of sections of the image in the reality-preserving multiple-parameter fractional Fourier domains, and the alignment of sections is determined by two coupled chaotic logistic maps. The parameters in the Color Blend, Chaos Permutation and the RPMPFRFT transform are regarded as the key in the encryption algorithm. The proposed color image encryption can also be applied to encrypt three gray images by transforming the gray images into three RGB color components of a specially constructed color image. Numerical simulations are performed to demonstrate that the proposed algorithm is feasible, secure, sensitive to keys and robust to noise attack and data loss.
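
    The chaos-permutation ingredient can be illustrated with the logistic map the abstract names: iterate the map from a secret seed, then sort the iterates to obtain a key-dependent permutation. The seed, parameter, and sorting construction below are illustrative, not the paper's exact coupled-map scheme:

```python
def logistic_permutation(n, x0=0.3141, r=3.99):
    """Derive a permutation of range(n) by sorting logistic-map iterates.
    The same key (x0, r) regenerates the same permutation for decryption."""
    x, seq = x0, []
    for _ in range(n):
        x = r * x * (1 - x)      # chaotic logistic map
        seq.append(x)
    return sorted(range(n), key=lambda i: seq[i])

def apply_perm(data, perm):
    return [data[i] for i in perm]

def invert_perm(data, perm):
    out = [None] * len(data)
    for j, i in enumerate(perm):
        out[i] = data[j]
    return out

msg = list(b"color image")
perm = logistic_permutation(len(msg))
scrambled = apply_perm(msg, perm)
print(invert_perm(scrambled, perm) == msg)  # True
```

    Sensitivity to the key follows from the map's chaos: a tiny change in `x0` or `r` yields an unrelated permutation.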

  9. A new Watermarking System based on Discrete Cosine Transform (DCT) in color biometric images.

    PubMed

    Dogan, Sengul; Tuncer, Turker; Avci, Engin; Gulten, Arif

    2012-08-01

    This paper recommends a biometric color-image hiding approach, a watermarking system based on the Discrete Cosine Transform (DCT), to protect the security and integrity of transmitted biometric color images. Watermarking is an important information-hiding technique for audio, video, color images, and gray images, and it has become common on digital objects as the technology has developed over the last few years. One of the common methods for hiding information in image files is the DCT, which operates in the frequency domain. In this study, DCT methods are used to embed watermark data into face images without corrupting their features. PMID:21537852
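
    A generic DCT-domain embedding step can be sketched with quantization index modulation (QIM) on a one-dimensional block: hide a bit in the parity of a quantized mid-frequency coefficient. This is a standard textbook scheme standing in for the authors' unspecified embedding rule, and the coefficient index and step size are assumptions:

```python
import math

def dct(x):
    """DCT-II of a 1-D block (scaled by 2/N so idct below inverts it)."""
    N = len(x)
    return [(2.0 / N) * sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                            for n in range(N)) for k in range(N)]

def idct(c):
    """Inverse of dct() above (DCT-III with matching scaling)."""
    N = len(c)
    return [c[0] / 2.0 + sum(c[k] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                             for k in range(1, N)) for n in range(N)]

def embed_bit(block, bit, coef=2, step=8.0):
    """Force the parity of a quantized mid-frequency coefficient (QIM)."""
    c = dct(block)
    q = round(c[coef] / step)
    if q % 2 != bit:
        q += 1
    c[coef] = q * step
    return idct(c)

def extract_bit(block, coef=2, step=8.0):
    return round(dct(block)[coef] / step) % 2

pixels = [10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0]
marked = embed_bit(pixels, 1)
print(extract_bit(marked))  # 1
```

    Small `step` values keep the pixel distortion below visibility while still surviving mild processing, which is the usual robustness/imperceptibility trade-off in DCT watermarking.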

  10. Visualization of multivariate image data using image fusion and perceptually optimized color scales based on sRGB

    NASA Astrophysics Data System (ADS)

    Saalbach, Axel; Twellmann, Thorsten; Nattkemper, Tim; White, Mark; Khazen, Michael; Leach, Martin O.

    2004-05-01

    Due to the rapid progress in medical imaging technology, analysis of multivariate image data is receiving increased interest. However, their visual exploration is a challenging task since it requires the integration of information from many different sources which usually cannot be perceived at once by an observer. Image fusion techniques are commonly used to obtain information from multivariate image data, while psychophysical aspects of data visualization are usually not considered. Visualization is typically achieved by means of device derived color scales. With respect to psychophysical aspects of visualization, more sophisticated color mapping techniques based on device independent (and perceptually uniform) color spaces like CIELUV have been proposed. Nevertheless, the benefit of these techniques is limited by the fact that they require complex color space transformations to account for device characteristics and viewing conditions. In this paper we present a new framework for the visualization of multivariate image data using image fusion and color mapping techniques. In order to overcome problems of consistent image presentations and color space transformations, we propose perceptually optimized color scales based on CIELUV in combination with sRGB (IEC 61966-2-1) color specification. In contrast to color definitions based purely on CIELUV, sRGB data can be used directly under reasonable conditions, without complex transformations and additional information. In the experimental section we demonstrate the advantages of our approach in an application of these techniques to the visualization of DCE-MRI images from breast cancer research.

  11. An investigation on the intra-sample distribution of cotton color by using image analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The colorimeter principle is widely used to measure cotton color. This method provides the sample’s color grade, but the result does not include information about the color distribution or any variation within the sample. We conducted an investigation that used an image analysis method to study the ...

  12. A two color pupil imaging method to detect stellar oscillations

    NASA Astrophysics Data System (ADS)

    Cacciani, A.; Dolci, M.; Jefferies, S. M.; Finsterle, W.; Fossat, E.; Sigismondi, C.; Cesario, L.; Bertello, L.; Varadi, F.

    Observations of stellar intensity oscillations from the ground are strongly affected by intensity fluctuations caused by the atmosphere (scintillation). However, by using a differential observational method that images the pupil of the telescope in two colors at the same time on a single CCD, we can partially compensate for this source of atmospheric noise (which is color dependent) as well as other problems, such as guiding and saturation. Moreover, by placing instruments at different locations (e.g., Dome C and the South Pole) we can further reduce the atmospheric noise contribution by using cross-spectral methods, such as Random Lag Singular Cross-Spectrum Analysis (RLSCA). (We also decrease the likelihood of gaps in the data string due to bad weather.) The RLSCA method is well suited for extracting common oscillatory components from two or more observations, including their relative phases. We have evaluated the performance of our method using real data from SOHO. We find that our differential algorithm can recover the absolute amplitudes of the solar intensity oscillations with an efficiency of 70%. We are currently carrying out tests using a number of telescopes, including Big Bear, Mt. Wilson, Teramo and Milano, while waiting for the South Pole and Dome C sites to become available.
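
    The differential idea can be simulated in a few lines: scintillation enters both color channels as a common multiplicative factor, so dividing the two simultaneous pupil intensities cancels it while the (color-dependent) stellar oscillation survives. The oscillation amplitudes and noise level below are hypothetical:

```python
import math, random

random.seed(7)
N = 400
osc = [math.sin(2 * math.pi * 0.02 * t) for t in range(N)]      # stellar signal
scint = [1.0 + 0.2 * random.gauss(0.0, 1.0) for _ in range(N)]  # shared scintillation

a_blue, a_red = 0.010, 0.004   # hypothetical oscillation amplitudes per color
blue = [s * (1 + a_blue * o) for s, o in zip(scint, osc)]
red  = [s * (1 + a_red  * o) for s, o in zip(scint, osc)]

# The ratio cancels the common multiplicative scintillation, leaving a
# clean differential signal of amplitude ~(a_blue - a_red):
diff = [b / r - 1.0 for b, r in zip(blue, red)]

def corr(x, y):
    """Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

print(corr(diff, osc) > 0.999)  # the oscillation survives the division
```

    The compensation is only partial in practice (hence the quoted 70% efficiency) because real scintillation is not perfectly achromatic.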

  13. 2-COLOR Pupil Imaging Method to Detect Stellar Oscillations

    NASA Astrophysics Data System (ADS)

    Costantino, Sigismondi; Alessandro, Cacciani; Mauro, Dolci; Stuart, Jeffries; Eric, Fossat; Ludovico, Cesario; Paolo, Rapex; Luca, Bertello; Ferenc, Varadi; Wolfgang, Finsterle

    Stellar intensity oscillations observed from the ground are strongly affected by atmospheric noise. For solar-type stars, even Antarctic scintillation noise is still overwhelming. We proposed and tested a differential method that images two-color pupils of the telescope on the same CCD detector in order to compensate for sky intensity fluctuations as well as guiding and saturation problems. SOHO data reveal that our method has an efficiency of 70% with respect to the absolute amplitude variations. Using two instruments at Dome C and the South Pole, we can further minimize atmospheric color noise with cross-spectrum methods. This way we also decrease the likelihood of gaps in the data string due to bad weather. Observationally, while waiting for the South Pole and Dome C sites, we are carrying out tests from available telescopes, including Big Bear, Mt. Wilson, Teramo, and Milano. On the data-analysis side we use Random Lag Singular Cross-Spectrum Analysis, which eliminates noise from the observed signal better than the traditional Fourier transform. This method is also well suited for extracting common oscillatory components from two or more observations, including their relative phases, as we are planning to do.

  14. Comparison of color image segmentations for lane following

    NASA Astrophysics Data System (ADS)

    Sandt, Frederic; Aubert, Didier

    1993-05-01

    For ten years, unstructured road following has been the subject of many studies. Road following must support the automatic navigation, at reasonable speed, of mobile robots on irregular paths and roads, with inhomogeneous surfaces and under variable lighting conditions. Civil and military applications of this technology include transportation, logistics, security and engineering. The definition of our lane following system requires an evaluation of the existing technologies. Although the various operational systems converge on color perception and region segmentation, optimizing discrimination and stability respectively, the processing and performance vary. In this paper, the robustness of four operational systems and two connected techniques is compared according to common evaluation criteria. We identify typical situations which constitute a basis for the realization of an image database. We describe the process of experimentation conceived for the comparative analysis of performances. The analytical results are useful for inferring a few optimal situation-driven combinations of techniques, and for defining the present limits of color perception's validity.

  15. Adaptive Optics Imaging and Spectroscopy of Neptune

    NASA Technical Reports Server (NTRS)

    Johnson, Lindley (Technical Monitor); Sromovsky, Lawrence A.

    2005-01-01

    OBJECTIVES: We proposed to use high spectral resolution imaging and spectroscopy of Neptune in the visible and near-IR spectral ranges to advance our understanding of Neptune's cloud structure. We intended to use the adaptive optics (AO) system at Mt. Wilson at visible wavelengths to try to obtain the first ground-based observations of dark spots on Neptune; we intended to use AO observations at the IRTF to obtain near-IR R=2000 spatially resolved spectra, and near-IR AO observations at the Keck observatory to obtain the highest spatial resolution studies of cloud feature dynamics and atmospheric motions. Vertical structure of cloud features was to be inferred from the wavelength-dependent absorption of methane and hydrogen.

  16. Using Color and Grayscale Images to Teach Histology to Color-Deficient Medical Students

    ERIC Educational Resources Information Center

    Rubin, Lindsay R.; Lackey, Wendy L.; Kennedy, Frances A.; Stephenson, Robert B.

    2009-01-01

    Examination of histologic and histopathologic microscopic sections relies upon differential colors provided by staining techniques, such as hematoxylin and eosin, to delineate normal tissue components and to identify pathologic alterations in these components. Given the prevalence of color deficiency (commonly called "color blindness") in the…

  17. Image mosaicking based on feature points using color-invariant values

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Chang; Kwon, Oh-Seol; Ko, Kyung-Woo; Lee, Ho-Young; Ha, Yeong-Ho

    2008-02-01

    In the field of computer vision, image mosaicking is achieved using image features, such as textures, colors, and shapes between corresponding images, or local descriptors representing neighborhoods of feature points extracted from corresponding images. However, image mosaicking based on feature points has attracted more recent attention due to the simplicity of the geometric transformation, regardless of distortion and differences in intensity generated by camera motion in consecutive images. Yet, since most feature-point matching algorithms extract feature points using gray values, identifying corresponding points becomes difficult in the case of changing illumination and images with a similar intensity. Accordingly, to solve these problems, this paper proposes a method of image mosaicking based on feature points using color information of images. Essentially, the digital values acquired from a real digital color camera are converted to values of a virtual camera with distinct narrow bands. Values based on the surface reflectance and invariant to the chromaticity of various illuminations are then derived from the virtual camera values and defined as color-invariant values invariant to changing illuminations. The validity of these color-invariant values is verified in a test using a Macbeth Color-Checker under simulated illuminations. The test also compares the proposed method using the color-invariant values with the conventional SIFT algorithm. The accuracy of the matching between the feature points extracted using the proposed method is increased, while image mosaicking using color information is also achieved.
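
The invariance idea above can be illustrated with a toy computation. The sketch below is not the authors' virtual-camera derivation (which also handles illuminant chromaticity); it only shows the simpler fact that log band-ratios survive an overall change in light level, which is why such values make feature matching robust where raw gray values fail:

```python
import numpy as np

def log_chromaticity(rgb):
    """Per-pixel log band-ratios. Under a narrow-band sensor model these are
    invariant to an overall scaling of the illumination intensity -- a
    simplified stand-in for the paper's color-invariant values."""
    rgb = np.asarray(rgb, dtype=np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.stack([np.log(r / g), np.log(b / g)], axis=-1)

# Halving or doubling the overall light level leaves the ratios unchanged,
# so feature points matched on these values survive illumination changes.
rng = np.random.default_rng(0)
patch = rng.uniform(0.1, 1.0, size=(4, 4, 3))
inv_dim = log_chromaticity(0.5 * patch)     # dimmer scene
inv_bright = log_chromaticity(2.0 * patch)  # brighter scene
```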

  18. Dual-color photoacoustic lymph node imaging using nanoformulated naphthalocyanines.

    PubMed

    Lee, Changho; Kim, Jeesu; Zhang, Yumiao; Jeon, Mansik; Liu, Chengbo; Song, Liang; Lovell, Jonathan F; Kim, Chulhong

    2015-12-01

    Demarking lymph node networks is important for cancer staging in clinical practice. Here, we demonstrate in vivo dual-color photoacoustic lymphangiography using all-organic nanoformulated naphthalocyanines (referred to as nanonaps). Nanonap frozen micelles were self-assembled from two different naphthalocyanine dyes with near-infrared absorption at 707 nm or 860 nm. These allowed for noninvasive, nonionizing, high resolution photoacoustic identification of separate lymphatic drainage systems in vivo. With both types of nanonaps, rat lymph nodes buried deeply below an exogenously-placed 10 mm thick layer of chicken breast were clearly visualized in vivo. These results show the potential of multispectral photoacoustic imaging with nanonaps for detailed mapping of lymphatic drainage systems. PMID:26408999

  20. Color Index Imaging of the Stellar Stream Around NGC 5907

    NASA Astrophysics Data System (ADS)

    Laine, Seppo; Grillmair, Carl J.; Martinez-Delgado, David; Romanowsky, Aaron J.; Capak, Peter; Arendt, Richard G.; Ashby, Matthew; Davies, James E.; Majewski, Steven R.; GaBany, R. Jay

    2015-01-01

    We have obtained deep g, r, and i-band Subaru and ultra-deep 3.6 micron Spitzer/IRAC images of parts of the stellar stream around the nearby edge-on disk galaxy NGC 5907. We report on the color index distribution of the resolved emission along the stream, and indicators of recent star formation associated with the stream. We present scenarios regarding the nature of the disrupted satellite galaxy, based on our data. This work is based in part on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. This work is based in part on data collected with the Subaru Telescope, which is operated by the National Astronomical Observatory of Japan. Support for this work was provided by NASA through an award issued by JPL/Caltech.

  1. Color Index Imaging of the Stellar Stream Around NGC 5907

    NASA Astrophysics Data System (ADS)

    Laine, Seppo; Grillmair, Carl J.; Martinez-Delgado, David; Romanowsky, Aaron; Capak, Peter; Arendt, Richard G.; Ashby, M. L. N.; Davies, James; Majewski, Steven; GaBany, R. Jay

    2015-08-01

    We have obtained deep g, r, and i-band Subaru and ultra-deep 3.6 micron Spitzer/IRAC images of parts of the spectacular, multiply-looped stellar stream around the nearby edge-on disk galaxy NGC 5907. We report on the color index distribution of the integrated starlight and the derived stellar populations along the stream. We present scenarios regarding the nature of the disrupted satellite galaxy, based on our data. This work is based in part on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. This work is based in part on data collected with the Subaru Telescope, which is operated by the National Astronomical Observatory of Japan. Support for this work was provided by NASA through an award issued by JPL/Caltech.

  2. Cloud screening Coastal Zone Color Scanner images using channel 5

    NASA Technical Reports Server (NTRS)

    Eckstein, B. A.; Simpson, J. J.

    1991-01-01

    Clouds are removed from Coastal Zone Color Scanner (CZCS) data using channel 5. Instrumentation problems require pre-processing of channel 5 before an intelligent cloud-screening algorithm can be used. For example, at intervals of about 16 lines, the sensor records anomalously low radiances. Moreover, the calibration equation yields negative radiances when the sensor records zero counts, and pixels corrupted by electronic overshoot must also be excluded. The remaining pixels may then be used in conjunction with the procedure of Simpson and Humphrey to determine the CZCS cloud mask. These results plus in situ observations of phytoplankton pigment concentration show that pre-processing and proper cloud-screening of CZCS data are necessary for accurate satellite-derived pigment concentrations. This is especially true in the coastal margins, where pigment content is high and image distortion associated with electronic overshoot is also present. The pre-processing algorithm is critical to obtaining accurate global estimates of pigment from spacecraft data.
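
The pre-processing steps listed above (periodic anomalous scan lines, negative calibrated radiances, zero-count pixels) amount to building a validity mask before cloud screening. A minimal sketch with illustrative values and a hypothetical helper name, not the operational CZCS code:

```python
import numpy as np

def prescreen_czcs(radiance, counts, bad_line_period=16):
    """Build a validity mask for a channel-5 scene along the lines of the
    pre-processing described above. Flags (1) periodic anomalous scan
    lines, (2) negative calibrated radiances, (3) zero-count pixels."""
    valid = np.ones(radiance.shape, dtype=bool)
    valid[::bad_line_period, :] = False   # drop anomalous scan lines
    valid &= radiance > 0.0               # calibration can yield negatives
    valid &= counts > 0                   # zero counts are unusable
    return valid

rad = np.full((32, 8), 5.0)
rad[3, 2] = -1.0                          # a negative calibrated radiance
cnt = np.full((32, 8), 100)
cnt[5, 1] = 0                             # a zero-count pixel
mask = prescreen_czcs(rad, cnt)
```

Only pixels surviving this mask would then enter the cloud-screening procedure proper.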

  3. Chromatic adaptation-based tone reproduction for high-dynamic-range imaging

    NASA Astrophysics Data System (ADS)

    Lee, Joohyun; Jeon, Gwanggil; Jeong, Jechang

    2009-10-01

    We present an adaptive tone reproduction algorithm for displaying high-dynamic-range (HDR) images on conventional low-dynamic-range (LDR) display devices. The proposed algorithm consists of an adaptive tone reproduction operator and chromatic adaptation. The algorithm for dynamic range reduction relies on suitable tone reproduction functions that depend on histogram-based parameter estimation to adjust luminance according to global and local features. Rather than relying only on reduction of dynamic range, the chromatic adaptation technique also preserves chromatic appearance and color consistency across scene and display environments. Our experimental results demonstrate that the proposed algorithm achieves good subjective quality while preserving image details. Furthermore, the proposed algorithm is simple and practical to implement.
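
As a rough illustration of statistics-driven global tone reproduction, the sketch below scales by a log-average "key" estimate and then compresses with L/(1+L). This is a generic operator in the same spirit, not the authors' histogram-based one:

```python
import numpy as np

def tone_map(luminance, key=0.18):
    """Global tone-reproduction sketch: map the scene's log-average
    luminance to a mid-grey key value, then compress into [0, 1)."""
    L = np.asarray(luminance, dtype=np.float64)
    log_avg = np.exp(np.mean(np.log(L + 1e-6)))  # scene "key" estimate
    Lm = key * L / log_avg                       # scale toward mid-grey
    return Lm / (1.0 + Lm)                       # compress dynamic range

hdr = np.array([0.01, 1.0, 100.0, 10000.0])      # six decades of luminance
ldr = tone_map(hdr)
```

The output is monotone in the input, so local ordering of luminances survives the compression.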

  4. Genomic architecture of adaptive color pattern divergence and convergence in Heliconius butterflies.

    PubMed

    Supple, Megan A; Hines, Heather M; Dasmahapatra, Kanchon K; Lewis, James J; Nielsen, Dahlia M; Lavoie, Christine; Ray, David A; Salazar, Camilo; McMillan, W Owen; Counterman, Brian A

    2013-08-01

    Identifying the genetic changes driving adaptive variation in natural populations is key to understanding the origins of biodiversity. The mosaic of mimetic wing patterns in Heliconius butterflies makes an excellent system for exploring adaptive variation using next-generation sequencing. In this study, we use a combination of techniques to annotate the genomic interval modulating red color pattern variation, identify a narrow region responsible for adaptive divergence and convergence in Heliconius wing color patterns, and explore the evolutionary history of these adaptive alleles. We use whole genome resequencing from four hybrid zones between divergent color pattern races of Heliconius erato and two hybrid zones of the co-mimic Heliconius melpomene to examine genetic variation across 2.2 Mb of a partial reference sequence. In the intergenic region near optix, the gene previously shown to be responsible for the complex red pattern variation in Heliconius, population genetic analyses identify a shared 65-kb region of divergence that includes several sites perfectly associated with phenotype within each species. This region likely contains multiple cis-regulatory elements that control discrete expression domains of optix. The parallel signatures of genetic differentiation in H. erato and H. melpomene support a shared genetic architecture between the two distantly related co-mimics; however, phylogenetic analysis suggests mimetic patterns in each species evolved independently. Using a combination of next-generation sequencing analyses, we have refined our understanding of the genetic architecture of wing pattern variation in Heliconius and gained important insights into the evolution of novel adaptive phenotypes in natural populations.

  5. Genomic architecture of adaptive color pattern divergence and convergence in Heliconius butterflies

    PubMed Central

    Supple, Megan A.; Hines, Heather M.; Dasmahapatra, Kanchon K.; Lewis, James J.; Nielsen, Dahlia M.; Lavoie, Christine; Ray, David A.; Salazar, Camilo; McMillan, W. Owen; Counterman, Brian A.

    2013-01-01

    Identifying the genetic changes driving adaptive variation in natural populations is key to understanding the origins of biodiversity. The mosaic of mimetic wing patterns in Heliconius butterflies makes an excellent system for exploring adaptive variation using next-generation sequencing. In this study, we use a combination of techniques to annotate the genomic interval modulating red color pattern variation, identify a narrow region responsible for adaptive divergence and convergence in Heliconius wing color patterns, and explore the evolutionary history of these adaptive alleles. We use whole genome resequencing from four hybrid zones between divergent color pattern races of Heliconius erato and two hybrid zones of the co-mimic Heliconius melpomene to examine genetic variation across 2.2 Mb of a partial reference sequence. In the intergenic region near optix, the gene previously shown to be responsible for the complex red pattern variation in Heliconius, population genetic analyses identify a shared 65-kb region of divergence that includes several sites perfectly associated with phenotype within each species. This region likely contains multiple cis-regulatory elements that control discrete expression domains of optix. The parallel signatures of genetic differentiation in H. erato and H. melpomene support a shared genetic architecture between the two distantly related co-mimics; however, phylogenetic analysis suggests mimetic patterns in each species evolved independently. Using a combination of next-generation sequencing analyses, we have refined our understanding of the genetic architecture of wing pattern variation in Heliconius and gained important insights into the evolution of novel adaptive phenotypes in natural populations. PMID:23674305

  6. Joint sparse coding based spatial pyramid matching for classification of color medical image.

    PubMed

    Shi, Jun; Li, Yi; Zhu, Jie; Sun, Haojie; Cai, Yin

    2015-04-01

    Although color medical images are important in clinical practice, they are usually converted to grayscale for further processing in pattern recognition, resulting in loss of rich color information. The sparse coding based linear spatial pyramid matching (ScSPM) and its variants are popular for grayscale image classification, but cannot extract color information. In this paper, we propose a joint sparse coding based SPM (JScSPM) method for the classification of color medical images. A joint dictionary can represent both the color information in each color channel and the correlation between channels. Consequently, the joint sparse codes calculated from a joint dictionary can carry color information, and therefore this method can easily transform a feature descriptor originally designed for grayscale images to a color descriptor. A color hepatocellular carcinoma histological image dataset was used to evaluate the performance of the proposed JScSPM algorithm. Experimental results show that JScSPM provides significant improvements as compared with the majority voting based ScSPM and the original ScSPM for color medical image classification.

  7. Color model comparative analysis for breast cancer diagnosis using H and E stained images

    NASA Astrophysics Data System (ADS)

    Li, Xingyu; Plataniotis, Konstantinos N.

    2015-03-01

    Digital cancer diagnosis is a research realm where signal processing techniques are used to analyze and classify color histopathology images. Unlike grayscale image analysis of magnetic resonance imaging or X-ray, colors in histopathology images convey a large amount of histological information and thus play a significant role in cancer diagnosis. Though color information is widely used in histopathology work, to date there are few studies on color model selection for feature extraction in cancer diagnosis schemes. This paper addresses the problem of color space selection for digital cancer classification using H and E stained images, and investigates the effectiveness of various color models (RGB, HSV, CIE L*a*b*, and the stain-dependent H and E decomposition model) in breast cancer diagnosis. In particular, we build a diagnosis framework as a comparison benchmark and take specific concerns of medical decision systems into account in the evaluation. The evaluation methodologies include feature discriminative power evaluation and final diagnosis performance comparison. Experimentation on a publicly accessible histopathology image set suggests that the H and E decomposition model outperforms the other assessed color spaces. Regarding the reasons behind the varying performance of color spaces, our analysis via mutual information estimation demonstrates that the color components in the H and E model are less dependent, and thus most feature discriminative power is collected in one channel instead of spreading out among channels as in other color spaces.
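
A stain-dependent H and E decomposition of the kind compared above is commonly implemented as Beer-Lambert color deconvolution. The sketch below uses the widely quoted Ruifrok-Johnston stain vectors as an assumption; the paper's exact stain matrix may differ:

```python
import numpy as np

# Commonly quoted H and E (+ residual) stain vectors (assumed values).
STAINS = np.array([[0.65, 0.70, 0.29],   # hematoxylin
                   [0.07, 0.99, 0.11],   # eosin
                   [0.27, 0.57, 0.78]])  # residual channel

def he_decompose(rgb):
    """Map RGB transmittances (0..1) to per-stain optical densities via
    Beer-Lambert: OD = -log10(I), then invert the stain matrix so each
    output channel tracks one stain rather than a mix."""
    od = -np.log10(np.clip(rgb, 1e-6, 1.0))
    return od @ np.linalg.inv(STAINS)    # columns: H, E, residual

# A pixel attenuated purely along the hematoxylin direction should load
# on the first output channel only -- the "less dependent" channels the
# abstract's mutual-information analysis refers to.
pure_h = 10.0 ** (-0.5 * STAINS[0])      # OD = 0.5 * hematoxylin vector
conc = he_decompose(pure_h[np.newaxis, :])
```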

  8. Diffusion-Weighted Imaging with Color-Coded Images: Towards a Reduction in Reading Time While Keeping a Similar Accuracy.

    PubMed

    Campos Kitamura, Felipe; de Medeiros Alves, Srhael; Antônio Tobaru Tibana, Luis; Abdala, Nitamar

    2016-01-01

    The aim of this study was to develop a diagnostic tool capable of providing diffusion and apparent diffusion coefficient (ADC) map information in a single color-coded image and to assess the performance of color-coded images compared with their corresponding diffusion and ADC map. The institutional review board approved this retrospective study, which sequentially enrolled 36 head MRI scans. Diffusion-weighted images (DWI) and ADC maps were compared to their corresponding color-coded images. Four raters had their interobserver agreement measured for both conventional (DWI) and color-coded images. Differences between conventional and color-coded images were also estimated for each of the 4 raters. Cohen's kappa and percent agreement were used. Also, paired-samples t-test was used to compare reading time for rater 1. Conventional and color-coded images had substantial or almost perfect agreement for all raters. Mean reading time of rater 1 was 47.4 seconds for DWI and 27.9 seconds for color-coded images (P = .00007). These findings are important because they support the role of color-coded images as being equivalent to that of the conventional DWI in terms of diagnostic capability. Reduction in reading time (which makes the reading easier) is also demonstrated for one rater in this study.
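
One plausible way to fold a DWI and its ADC map into a single color-coded image is sketched below. This is an illustrative scheme with an assumed restriction threshold, not necessarily the authors' encoding:

```python
import numpy as np

def fuse_dwi_adc(dwi, adc, adc_thresh=0.6e-3):
    """Illustrative single-image encoding: normalised DWI intensity as a
    grayscale base, with pixels whose ADC falls below a restriction
    threshold (mm^2/s, assumed value) tinted red, so diffusion restriction
    can be read off one color-coded image instead of two."""
    lo, hi = dwi.min(), dwi.max()
    base = (dwi - lo) / (hi - lo + 1e-9)
    rgb = np.stack([base, base, base], axis=-1)
    restricted = adc < adc_thresh
    rgb[restricted, 0] = np.maximum(rgb[restricted, 0], 0.8)  # red tint
    rgb[restricted, 1] *= 0.3
    rgb[restricted, 2] *= 0.3
    return rgb

dwi = np.array([[0.2, 0.9], [0.5, 0.1]])
adc = np.array([[1.0e-3, 0.4e-3], [0.8e-3, 1.2e-3]])  # mm^2/s
img = fuse_dwi_adc(dwi, adc)
```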

  9. Best Color Image of Jupiter's Little Red Spot

    NASA Technical Reports Server (NTRS)

    2007-01-01

    This amazing color portrait of Jupiter's 'Little Red Spot' (LRS) combines high-resolution images from the New Horizons Long Range Reconnaissance Imager (LORRI), taken at 03:12 UT on February 27, 2007, with color images taken nearly simultaneously by the Wide Field Planetary Camera 2 (WFPC2) on the Hubble Space Telescope. The LORRI images provide details as fine as 9 miles across (15 kilometers), which is approximately 10 times better than Hubble can provide on its own. The improved resolution is possible because New Horizons was only 1.9 million miles (3 million kilometers) away from Jupiter when LORRI snapped its pictures, while Hubble was more than 500 million miles (800 million kilometers) away from the Gas Giant planet.

    The Little Red Spot is the second largest storm on Jupiter, roughly 70% the size of the Earth, and it started turning red in late-2005. The clouds in the Little Red Spot rotate counterclockwise, or in the anticyclonic direction, because it is a high-pressure region. In that sense, the Little Red Spot is the opposite of a hurricane on Earth, which is a low-pressure region - and, of course, the Little Red Spot is far larger than any hurricane on Earth.

    Scientists don't know exactly how or why the Little Red Spot turned red, though they speculate that the change could stem from a surge of exotic compounds from deep within Jupiter, caused by an intensification of the storm system. In particular, sulfur-bearing cloud droplets might have been propelled about 50 kilometers into the upper level of ammonia clouds, where brighter sunlight bathing the cloud tops released the red-hued sulfur embedded in the droplets, causing the storm to turn red. A similar mechanism has been proposed for the Little Red Spot's 'older brother,' the Great Red Spot, a massive energetic storm system that has persisted for over a century.

    New Horizons is providing an opportunity to examine an 'infant' red storm system in detail, which may help scientists

  10. Color enhancement and image defogging in HSI based on Retinex model

    NASA Astrophysics Data System (ADS)

    Gao, Han; Wei, Ping; Ke, Jun

    2015-08-01

    Retinex is a luminance perceptual algorithm based on color constancy. It performs well for color enhancement. But in some cases the traditional Retinex algorithms, both Single-Scale Retinex (SSR) and Multi-Scale Retinex (MSR) in RGB color space, do not work well and cause color deviation. To solve this problem, we present improved SSR and MSR algorithms. In contrast to other Retinex algorithms, we implement the Retinex algorithms in HSI (Hue, Saturation, Intensity) color space and use a parameter α to improve the quality of the image. Moreover, the algorithms presented in this paper perform well in image defogging. In contrast to traditional Retinex algorithms, we use the intensity channel to obtain the reflection information of an image. The intensity channel is processed with a Gaussian center-surround image filter to obtain the light information, which should be removed from the intensity channel. After that, we subtract the light information from the intensity channel to obtain the reflection image, which includes only the attributes of the objects in the image. Using the reflection image and a parameter α, an arbitrary scale factor set manually, we improve the intensity channel and complete the color enhancement. Our experiments show that this approach works well compared with existing methods for color enhancement. Besides better performance on the color deviation problem and in image defogging, a visible improvement in image quality for human contrast perception is also observed.
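
The center-surround subtraction described above can be sketched as Single-Scale Retinex on the intensity channel. This is a minimal periodic-boundary version using FFT convolution; the paper's α-based normalisation and HSI round trip are not reproduced:

```python
import numpy as np

def ssr_intensity(I, sigma=3.0, alpha=1.0):
    """Single-Scale Retinex on an intensity channel:
    reflectance = alpha * (log(I) - log(Gaussian surround of I))."""
    I = np.asarray(I, dtype=np.float64) + 1e-6     # avoid log(0)
    h, w = I.shape
    # Gaussian kernel centred at (0, 0) with wrap-around distances,
    # so plain FFT multiplication performs the circular convolution.
    y = np.minimum(np.arange(h), h - np.arange(h))[:, None]
    x = np.minimum(np.arange(w), w - np.arange(w))[None, :]
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    g /= g.sum()
    surround = np.real(np.fft.ifft2(np.fft.fft2(I) * np.fft.fft2(g)))
    return alpha * (np.log(I) - np.log(surround))

# On a uniform image the surround equals the image, so the estimated
# reflectance is zero everywhere -- illumination is fully removed.
flat = np.full((16, 16), 0.5)
out = ssr_intensity(flat)
```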

  11. Alpha-rooting method of color image enhancement by discrete quaternion Fourier transform

    NASA Astrophysics Data System (ADS)

    Grigoryan, Artyom M.; Agaian, Sos S.

    2014-02-01

    This paper presents a novel method for color image enhancement based on the discrete quaternion Fourier transform. We choose the quaternion Fourier transform because it is well-suited for color image processing applications: it processes all three color components (R, G, B) simultaneously, it captures the inherent correlation between the components, it does not generate color artifacts or blending, and it does not need an additional color restoration process. We also introduce a new CEME measure to evaluate the quality of the enhanced color images. Preliminary results show that α-rooting based on the quaternion Fourier transform outperforms other enhancement methods such as the Fourier-transform-based α-rooting algorithm and the Multi-Scale Retinex. Moreover, the new method not only provides true color fidelity for poor-quality images but also averages the color components to gray value for balancing colors. It can be used to enhance edge information and sharp features in images, as well as to enhance even low-contrast images. The proposed algorithms are simple to apply and design, which makes them very practical in image enhancement.
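
The α-rooting idea itself can be sketched with the ordinary 2-D FFT: keep the phase, raise the spectral magnitude to the power α (0 < α ≤ 1), which boosts high-frequency detail relative to the dominant low frequencies. The paper's contribution is doing this with a quaternion Fourier transform so all three channels are processed jointly; coding a single channel independently, as below, is a deliberate simplification:

```python
import numpy as np

def alpha_root(channel, alpha=0.95):
    """Alpha-rooting of one channel via the 2-D FFT: magnitude ** alpha
    with the original phase, then inverse transform."""
    F = np.fft.fft2(np.asarray(channel, dtype=np.float64))
    mag, phase = np.abs(F), np.angle(F)
    out = np.fft.ifft2((mag ** alpha) * np.exp(1j * phase))
    return np.real(out)

img = np.outer(np.linspace(0, 1, 8), np.linspace(0, 1, 8))
identity = alpha_root(img, alpha=1.0)   # alpha = 1 returns the input
```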

  12. Local adaptation and matching habitat choice in female barn owls with respect to melanic coloration.

    PubMed

    Dreiss, A N; Antoniazza, S; Burri, R; Fumagalli, L; Sonnay, C; Frey, C; Goudet, J; Roulin, Alexandre

    2012-01-01

    Local adaptation is a major mechanism underlying the maintenance of phenotypic variation in spatially heterogeneous environments. In the barn owl (Tyto alba), dark and pale reddish-pheomelanic individuals are adapted to the conditions prevailing in northern and southern Europe, respectively. Using a long-term dataset from Central Europe, we report results consistent with the hypothesis that the different pheomelanic phenotypes are adapted to specific local conditions in females, but not in males. Compared to whitish females, reddish females bred in sites surrounded by more arable fields and less forest. Colour-dependent habitat choice was apparently beneficial. First, whitish females produced more fledglings when breeding in wooded areas, whereas reddish females produced more when breeding in sites with more arable fields. Second, cross-fostering experiments showed that female nestlings grew their wings more rapidly when both their foster and biological mothers were of similar colour. The latter result suggests that mothers should preferentially produce daughters in environments that best match their own coloration. Accordingly, whiter females produced fewer daughters in territories with more arable fields. In conclusion, females displaying alternative melanic phenotypes bred in the habitats providing them with the highest fitness benefits. Although small in magnitude, matching habitat selection and local adaptation may help maintain variation in pheomelanin coloration in the barn owl. PMID:22070193

  13. Biological versus electronic adaptive coloration: how can one inform the other?

    PubMed Central

    Kreit, Eric; Mäthger, Lydia M.; Hanlon, Roger T.; Dennis, Patrick B.; Naik, Rajesh R.; Forsythe, Eric; Heikenfeld, Jason

    2013-01-01

    Adaptive reflective surfaces have been a challenge for both electronic paper (e-paper) and biological organisms. Multiple colours, contrast, polarization, reflectance, diffusivity and texture must all be controlled simultaneously without optical losses in order to fully replicate the appearance of natural surfaces and vividly communicate information. This review merges the frontiers of knowledge for both biological adaptive coloration, with a focus on cephalopods, and synthetic reflective e-paper within a consistent framework of scientific metrics. Currently, the highest performance approach for both nature and technology uses colourant transposition. Three outcomes are envisioned from this review: reflective display engineers may gain new insights from millions of years of natural selection and evolution; biologists will benefit from understanding the types of mechanisms, characterization and metrics used in synthetic reflective e-paper; all scientists will gain a clearer picture of the long-term prospects for capabilities such as adaptive concealment and signalling. PMID:23015522

  14. Quantitative Evaluation of Surface Color of Tomato Fruits Cultivated in Remote Farm Using Digital Camera Images

    NASA Astrophysics Data System (ADS)

    Hashimoto, Atsushi; Suehara, Ken-Ichiro; Kameoka, Takaharu

    To measure quantitative surface color information of agricultural products together with ambient information during cultivation, a color calibration method for digital camera images and a Web-based remote color-imaging monitoring system were developed. Single-lens reflex and web digital cameras were used for image acquisition. Tomato images through the post-ripening process were taken by digital camera both in the standard image acquisition system and under field conditions from morning to evening. Several kinds of images were acquired with the standard RGB color chart set up just behind the tomato fruit on a black matte, and a color calibration was carried out. The influence of sunlight could be experimentally eliminated, and the calibrated color information consistently agreed with the standard information acquired in the system through the post-ripening process. Furthermore, the surface color change of tomatoes on the tree in a greenhouse was remotely monitored during maturation using digital cameras equipped with the Field Server. The acquired digital color images were sent from the Farm Station to the BIFE Laboratory of Mie University via VPN. The time behavior of the tomato surface color change during the maturing process could be measured using the color parameter calculated from the acquired and calibrated color images, along with the ambient atmospheric record. This study is an important step in developing surface color analysis both for simple and rapid evaluation of crop vigor in the field and for constructing an ambient, networked remote monitoring system for food security, precision agriculture, and agricultural research.
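
The chart-based calibration step can be sketched as fitting a 3x3 correction matrix by least squares from the photographed chart patches. This is a simplified linear model under simulated data; the authors' calibration may include nonlinear terms:

```python
import numpy as np

def fit_color_correction(measured, reference):
    """Least-squares 3x3 matrix M with reference ~= measured @ M, fitted
    from color-chart patches photographed under the scene illumination."""
    M, *_ = np.linalg.lstsq(measured, reference, rcond=None)
    return M

rng = np.random.default_rng(1)
# Hypothetical ground-truth correction and a simulated 24-patch chart.
true_M = np.array([[1.10, 0.05, 0.00],
                   [0.02, 0.95, 0.03],
                   [0.00, 0.04, 1.20]])
chart_ref = rng.uniform(0.1, 0.9, size=(24, 3))   # chart reference colors
chart_meas = chart_ref @ np.linalg.inv(true_M)    # camera values with cast
M = fit_color_correction(chart_meas, chart_ref)
corrected = chart_meas @ M                        # cast removed
```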

  15. Radar Image with Color as Height, Hariharalaya, Cambodia

    NASA Technical Reports Server (NTRS)

    2002-01-01

    top. Image brightness is from the C-band (5.6 centimeters, or 2.2 inches wavelength) radar backscatter, which is a measure of how much energy the surface reflects back toward the radar. Color is used to represent elevation contours. One cycle of color--from blue to red to yellow to green and back to blue again--represents 10 meters (32.8 feet) of elevation change.

    AIRSAR flies aboard a NASA DC-8 based at NASA's Dryden Flight Research Center, Edwards, Calif. In the TOPSAR mode, AIRSAR collects radar interferometry data from two spatially separated antennas (2.6 meters, or 8.5 feet). Information from the two antennas is used to form radar backscatter imagery and to generate highly accurate elevation data. Built, operated and managed by JPL, AIRSAR is part of NASA's Earth Science Enterprise program. JPL is a division of the California Institute of Technology in Pasadena.
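
The color-as-height convention in these captions (brightness from backscatter, hue cycling once per fixed elevation interval) can be sketched with a toy renderer; the colorsys-based mapping below is an assumption about the palette, not the actual AIRSAR product pipeline:

```python
import numpy as np
import colorsys

def color_as_height(backscatter, elevation, cycle_m=10.0):
    """Toy color-as-height renderer: pixel brightness from normalised
    radar backscatter, hue cycling once per `cycle_m` metres of elevation
    (10 m here, matching this caption; other scenes use 20 or 25 m)."""
    lo, hi = backscatter.min(), backscatter.max()
    value = (backscatter - lo) / (hi - lo + 1e-9)
    hue = np.mod(elevation / cycle_m, 1.0)        # one color cycle per 10 m
    out = np.empty(backscatter.shape + (3,))
    for idx in np.ndindex(backscatter.shape):
        out[idx] = colorsys.hsv_to_rgb(hue[idx], 1.0, value[idx])
    return out

elev = np.array([[0.0, 5.0], [10.0, 12.5]])       # metres
bs = np.array([[1.0, 0.2], [1.0, 0.6]])           # backscatter strength
img = color_as_height(bs, elev)
```

Pixels 0 m and 10 m apart in elevation land on the same hue, which is exactly the wrap-around the captions describe.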

  16. Offset-sparsity decomposition for enhancement of color microscopic image of stained specimen in histopathology: further results

    NASA Astrophysics Data System (ADS)

    Kopriva, Ivica; Popović Hadžija, Marijana; Hadžija, Mirko; Aralica, Gorana

    2016-03-01

    Recently, we proposed a novel data-driven offset-sparsity decomposition (OSD) method to increase the colorimetric difference between tissue structures present in color microscopic images of stained specimens in histopathology. The OSD method performs an additive decomposition of vectorized spectral images into an image-adapted offset term and a sparse term, where the sparse term represents the enhanced image. The method was tested on images of histological slides of human liver stained with hematoxylin and eosin, anti-CD34 monoclonal antibody, and Sudan III. Herein, we present further results on the increase in colorimetric difference between tissue structures present in images of human liver specimens with pancreatic carcinoma metastasis stained with Gomori, CK7, CDX2, and LCA, and with colon carcinoma metastasis stained with Gomori, CK20, and PAN CK. The obtained relative increase in colorimetric difference ranges from 19.36% to 103.94%.
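
A toy additive decomposition in the spirit of OSD is sketched below. The published method estimates the offset and sparse terms jointly from the data; here, as a labeled simplification, the offset is just the per-channel median and the sparse term a soft-thresholded residual:

```python
import numpy as np

def soft_threshold(x, t):
    """Shrink toward zero by t; the standard sparsity-inducing operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def offset_sparsity(X, t=0.05):
    """Toy OSD-style split of vectorized spectral data (rows = pixels,
    cols = channels) into a constant offset term plus a sparse term that
    keeps only pixels deviating strongly from the bulk."""
    offset = np.median(X, axis=0, keepdims=True)
    sparse = soft_threshold(X - offset, t)
    return offset, sparse

# Two channels; one outlier pixel per channel should survive in the
# sparse (enhanced) term while the bulk collapses into the offset.
X = np.array([[0.50, 0.40],
              [0.52, 0.41],
              [0.90, 0.42],
              [0.51, 0.80]])
offset, sparse = offset_sparsity(X)
```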

  17. Radar Image with Color as Height, Sman Teng, Temple, Cambodia

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This image of Cambodia's Angkor region, taken by NASA's Airborne Synthetic Aperture Radar (AIRSAR), reveals a temple (upper-right) not depicted on early 19th Century French archeological survey maps and American topographic maps. The temple, known as 'Sman Teng,' was known to the local Khmer people, but had remained unknown to historians due to the remoteness of its location. The temple is thought to date to the 11th Century: the heyday of Angkor. It is an important indicator of the strategic and natural resource contributions of the area northwest of the capital, to the urban center of Angkor. Sman Teng, the name designating one of the many types of rice enjoyed by the Khmer, was 'discovered' by a scientist at NASA's Jet Propulsion Laboratory, Pasadena, Calif., working in collaboration with an archaeological expert on the Angkor region. Analysis of this remote area was a true collaboration of archaeology and technology. Locating the temple of Sman Teng required the skills of scientists trained to spot the types of topographic anomalies that only radar can reveal.

    This image, with a pixel spacing of 5 meters (16.4 feet), depicts an area of approximately 5 by 4.7 kilometers (3.1 by 2.9 miles). North is at top. Image brightness is from the P-band (68 centimeters, or 26.8 inches) wavelength radar backscatter, a measure of how much energy the surface reflects back toward the radar. Color is used to represent elevation contours. One cycle of color represents 25 meters (82 feet) of elevation change, so going from blue to red to yellow to green and back to blue again corresponds to 25 meters (82 feet) of elevation change.

    AIRSAR flies aboard a NASA DC-8 based at NASA's Dryden Flight Research Center, Edwards, Calif. In the TOPSAR mode, AIRSAR collects radar interferometry data from two spatially separated antennas (2.6 meters, or 8.5 feet, apart). Information from the two antennas is used to form radar backscatter imagery and to generate highly accurate elevation data.

  18. Radar Image with Color as Height, Nokor Pheas Trapeng, Cambodia

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Nokor Pheas Trapeng is the name of the large black rectangular feature in the center-bottom of this image, acquired by NASA's Airborne Synthetic Aperture Radar (AIRSAR). Its Khmer name translates as 'Tank of the City of Refuge'. The immense tank is a typical structure built by the Khmer for water storage and control, but its size is unusually large. This suggests, as does 'city' in its name, that in ancient times this area was far more prosperous than today.

    A visit to this remote, inaccessible site was made in December 1998. The huge water tank was hardly visible. From the radar data we knew that the tank stretched some 500 meters (1,640 feet) from east to west. However, between all the plants growing on the surface of the water and the trees and other vegetation in the area, the water tank blended with the surrounding topography. Among the vegetation, on the northeast of the tank, were remains of an ancient temple and a spirit shrine. So although far from the temples of Angkor, to the southeast, the ancient water structure is still venerated by the local people.

    The image covers an area approximately 9.5 by 8.7 kilometers (5.9 by 5.4 miles) with a pixel spacing of 5 meters (16.4 feet). North is at top. Image brightness is from the C-band (5.6 centimeters, or 2.2 inches) wavelength radar backscatter, which is a measure of how much energy the surface reflects back toward the radar. Color is used to represent elevation contours. One cycle of color represents 20 meters (65.6 feet) of elevation change; that is, going from blue to red to yellow to green and back to blue again corresponds to 20 meters (65.6 feet) of elevation change.

    AIRSAR flies aboard a NASA DC-8 based at NASA's Dryden Flight Research Center, Edwards, Calif. In the TOPSAR mode, AIRSAR collects radar interferometry data from two spatially separated antennas (2.6 meters, or 8.5 feet, apart). Information from the two antennas is used to form radar backscatter imagery and to generate highly accurate elevation data.

  19. 32-megapixel dual-color CCD imaging system

    NASA Astrophysics Data System (ADS)

    Stubbs, Christopher W.; Marshall, Stuart; Cook, Kenneth H.; Hills, Robert F.; Noonan, Joseph; Akerlof, Carl W.; Alcock, Charles R.; Axelrod, Timothy S.; Bennett, D.; Dagley, K.; Freeman, K. C.; Griest, Kim; Park, Hye-Sook; Perlmutter, Saul; Peterson, Bruce A.; Quinn, Peter J.; Rodgers, A. W.; Sosin, C.; Sutherland, W. J.

    1993-07-01

    We have developed an astronomical imaging system that incorporates a total of eight 2048 × 2048 pixel CCDs into two focal planes, to allow simultaneous imaging in two colors. Each focal plane comprises four 'edge-buttable' detector arrays on custom Kovar mounts. The clocking and bias voltage levels for each CCD are independently adjustable, but all the CCDs are operated synchronously. The sixteen analog outputs (two per chip) are digitized at 16 bits with commercially available correlated double sampling A/D converters. The resulting 74 MBytes of data per frame are transferred over fiber optic links into dual-ported VME memory. The total readout time is just over one minute. We obtain read noise ranging from 6.5 e- to 10 e- for the various channels when digitizing at 34 Kpixels/sec, with full well depths (MPP mode) of approximately 100,000 e- per 15 × 15 micrometer pixel. This instrument is currently being used in a search for gravitational microlensing by compact objects in our Galactic halo, using the newly refurbished 1.3 m telescope at the Mt. Stromlo Observatory, Australia.
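
    The quoted figures are mutually consistent, as a quick back-of-the-envelope check shows (the 74 MBytes per frame presumably includes overscan beyond the roughly 67 MB of imaging pixels):

```python
# Sanity-check the quoted figures for the 32-megapixel dual-color system.
pixels_total = 8 * 2048 * 2048          # eight 2048 x 2048 CCDs
channels = 16                            # two analog outputs per chip
rate = 34_000                            # pixels/s per channel at 16 bits

data_mbytes = pixels_total * 2 / 1e6     # 16 bits = 2 bytes per pixel
readout_s = pixels_total / channels / rate

print(f"{data_mbytes:.0f} MB per frame, readout {readout_s:.0f} s")
```

    Sixteen channels at 34 kpixels/s drain 33.5 million pixels in about 62 seconds, matching the "just over one minute" readout time.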

  20. Aerosol retrieval using Geostationary Ocean Color Imager (GOCI)

    NASA Astrophysics Data System (ADS)

    Kim, J.; Lee, J.; Choi, M.

    2012-12-01

    Hourly aerosol properties over East Asia are retrieved from the first Geostationary Ocean Color Imager (GOCI), launched in June 2010 onboard the Communication, Ocean, and Meteorological Satellite (COMS). A multi-channel algorithm was developed to retrieve aerosol optical depth (AOD), fine-mode fraction (FMF), and aerosol type at 500 m × 500 m resolution. To develop an algorithm optimized for the GOCI target area, aerosol optical properties are analyzed from extensive AERONET sunphotometer observations to generate a lookup table. The surface reflectance of turbid water is determined from a 30-day composite of Rayleigh- and gas-corrected reflectances. By applying the algorithm to top-of-atmosphere reflectance, three aerosol cases, dominated respectively by anthropogenic aerosol containing black carbon (BC), by dust, and by non-absorbing aerosol, are analyzed to test the algorithm. The algorithm retrieves AOD and size information together with aerosol type, qualitatively consistent with what RGB imagery suggests. Comparison of the retrieved AOD with MODIS Collection 5 retrievals and AERONET sunphotometer observations shows reliable results. In particular, the turbid-water treatment significantly increases the accuracy of the AOD retrieved at the Anmyon station.
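
    The lookup-table inversion at the heart of such retrievals can be illustrated in one dimension: simulate top-of-atmosphere reflectance for a grid of AODs, then invert an observation by interpolation. The table values below are made up for illustration; the real GOCI algorithm is multi-channel and also retrieves FMF and aerosol type.

```python
import numpy as np

def retrieve_aod(toa_obs, lut_aod, lut_toa):
    """Invert a 1-D lookup table of simulated top-of-atmosphere (TOA)
    reflectance versus AOD: find the AOD whose simulated reflectance
    matches the observation, by linear interpolation.
    `lut_toa` must increase monotonically with AOD."""
    return np.interp(toa_obs, lut_toa, lut_aod)
```

    For example, with a table `lut_aod = [0.0, 0.5, 1.0, 2.0]` and `lut_toa = [0.05, 0.10, 0.14, 0.20]`, an observed reflectance of 0.12 falls halfway between the AOD = 0.5 and AOD = 1.0 entries and yields 0.75.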

  1. Hue-preserving local contrast enhancement and illumination compensation for outdoor color images

    NASA Astrophysics Data System (ADS)

    Tektonidis, Marco; Monnin, David; Christnacher, Frank

    2015-10-01

    Real-time applications in the field of security and defense use dynamic color camera systems to gain a better understanding of outdoor scenes. Enhancing details and improving the visibility in images requires local image processing, and reducing lightness and color inconsistencies between images acquired under different illumination conditions requires compensating illumination effects. We introduce an automatic hue-preserving local contrast enhancement and illumination compensation approach for outdoor color images. Our approach is based on a shadow-weighted intensity-based Retinex model which enhances details and compensates the illumination effect on the lightness of an image. The Retinex model exploits information from a shadow detection approach to reduce lightness halo artifacts on shadow boundaries. We employ a hue-preserving color transformation to obtain a color image based on the original color information. To reduce color inconsistencies between images acquired under different illumination conditions, we process the saturation using a scaling function. The approach has been successfully applied to static and dynamic color image sequences of outdoor scenes, and an experimental comparison with previous Retinex-based approaches has been carried out.
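
    The hue-preserving color transformation step can be sketched independently of the Retinex model itself: enhance only an intensity channel, then scale all three RGB channels by the same ratio so that the channel proportions, and hence the hue, are unchanged. The intensity definition and the placeholder `enhance` function are assumptions; the paper's shadow-weighted Retinex is not reproduced here.

```python
import numpy as np

def hue_preserving_enhance(rgb, enhance):
    """Enhance a color image while preserving hue by processing only the
    intensity and rescaling all RGB channels by the enhancement ratio.

    rgb: (H, W, 3) float array in [0, 1]; `enhance` maps an intensity
    image to an enhanced intensity image (stand-in for a Retinex model).
    """
    intensity = rgb.mean(axis=-1)                       # simple intensity
    ratio = enhance(intensity) / np.maximum(intensity, 1e-6)
    out = rgb * ratio[..., None]                        # same hue, new lightness
    return np.clip(out, 0.0, 1.0)
```

    A gamma-style brightening, for instance, is `hue_preserving_enhance(img, np.sqrt)`; per-pixel channel ratios, and thus hues, survive as long as no channel clips.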

  2. ATR/OTR-SY Tank Camera Purge System and in Tank Color Video Imaging System

    SciTech Connect

    Werry, S.M.

    1995-06-06

    This procedure will document the satisfactory operation of the 101-SY tank Camera Purge System (CPS) and the 101-SY in-tank Color Camera Video Imaging System (CCVIS). Included in the CPS is the nitrogen purging system safety interlock, which shuts down all the color video imaging system electronics within the 101-SY tank vapor space during loss of nitrogen purge pressure.

  3. Color imaging of Mars by the High Resolution Imaging Science Experiment (HiRISE)

    USGS Publications Warehouse

    Delamere, W.A.; Tornabene, L.L.; McEwen, A.S.; Becker, K.; Bergstrom, J.W.; Bridges, N.T.; Eliason, E.M.; Gallagher, D.; Herkenhoff, K. E.; Keszthelyi, L.; Mattson, S.; McArthur, G.K.; Mellon, M.T.; Milazzo, M.; Russell, P.S.; Thomas, N.

    2010-01-01

    HiRISE has been producing a large number of scientifically useful color products of Mars and other planetary objects. The three broad spectral bands, coupled with the highly sensitive 14 bit detectors and time delay integration, enable detection of subtle color differences. The very high spatial resolution of HiRISE can augment the mineralogic interpretations based on multispectral (THEMIS) and hyperspectral datasets (TES, OMEGA and CRISM) and thereby enable detailed geologic and stratigraphic interpretations at meter scales. In addition to providing some examples of color images and their interpretation, we describe the processing techniques used to produce them and note some of the minor artifacts in the output. We also provide an example of how HiRISE color products can be effectively used to expand mineral and lithologic mapping provided by CRISM data products that are backed by other spectral datasets. The utility of high quality color data for understanding geologic processes on Mars has been one of the major successes of HiRISE. © 2009 Elsevier Inc.

  4. Color imaging of Mars by the High Resolution Imaging Science Experiment (HiRISE)

    NASA Astrophysics Data System (ADS)

    Delamere, W. Alan; Tornabene, Livio L.; McEwen, Alfred S.; Becker, Kris; Bergstrom, James W.; Bridges, Nathan T.; Eliason, Eric M.; Gallagher, Dennis; Herkenhoff, Kenneth E.; Keszthelyi, Laszlo; Mattson, Sarah; McArthur, Guy K.; Mellon, Michael T.; Milazzo, Moses; Russell, Patrick S.; Thomas, Nicolas

    2010-01-01

    HiRISE has been producing a large number of scientifically useful color products of Mars and other planetary objects. The three broad spectral bands, coupled with the highly sensitive 14 bit detectors and time delay integration, enable detection of subtle color differences. The very high spatial resolution of HiRISE can augment the mineralogic interpretations based on multispectral (THEMIS) and hyperspectral datasets (TES, OMEGA and CRISM) and thereby enable detailed geologic and stratigraphic interpretations at meter scales. In addition to providing some examples of color images and their interpretation, we describe the processing techniques used to produce them and note some of the minor artifacts in the output. We also provide an example of how HiRISE color products can be effectively used to expand mineral and lithologic mapping provided by CRISM data products that are backed by other spectral datasets. The utility of high quality color data for understanding geologic processes on Mars has been one of the major successes of HiRISE.

  5. Sparse Representation-Based Image Quality Index With Adaptive Sub-Dictionaries.

    PubMed

    Li, Leida; Cai, Hao; Zhang, Yabin; Lin, Weisi; Kot, Alex C; Sun, Xingming

    2016-08-01

    Distortions cause structural changes in digital images, leading to degraded visual quality. Dictionary-based sparse representation has been widely studied recently due to its ability to extract inherent image structures; meanwhile, it can extract image features with slightly higher-level semantics. Intuitively, sparse representation can be used for image quality assessment, because visible distortions can cause significant changes to the sparse features. In this paper, a new sparse representation-based image quality assessment model is proposed based on the construction of adaptive sub-dictionaries. An overcomplete dictionary trained on natural images is employed to capture the structure changes between the reference and distorted images by sparse feature extraction via adaptive sub-dictionary selection. Based on the observation that image sparse features are invariant to weak degradations and that perceived image quality is generally influenced by diverse issues, three auxiliary quality features are added: gradient, color, and luminance information. The proposed method is not sensitive to the training images, so a universal dictionary can be adopted for quality evaluation. Extensive experiments on five public image quality databases demonstrate that the proposed method produces state-of-the-art results and performs consistently well across different image quality databases.
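
    A toy version of the core idea, sparse-code the reference and distorted signals over a dictionary and compare the codes, can be written with orthogonal matching pursuit. The random orthonormal dictionary and the normalized-correlation similarity below are stand-ins; the paper's trained overcomplete dictionary and adaptive sub-dictionary selection are not reproduced.

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: sparse-code x over dictionary D
    (columns are unit-norm atoms) with at most k nonzero coefficients."""
    residual, support = x.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coefs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coefs
    code = np.zeros(D.shape[1])
    code[support] = coefs
    return code

def sparse_similarity(D, ref_patch, dist_patch, k=4, c=1e-6):
    """Similarity between the sparse features of a reference and a
    distorted patch: 1 for identical codes, near 0 for unrelated ones."""
    a, b = omp(D, ref_patch, k), omp(D, dist_patch, k)
    return (2 * a @ b + c) / (a @ a + b @ b + c)
```

    A distortion that perturbs the image structure perturbs the selected atoms and coefficients, pushing the similarity below 1; pooling such scores over patches yields a quality index.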

  6. Probing the functions of contextual modulation by adapting images rather than observers

    PubMed Central

    Webster, Michael A.

    2014-01-01

    Countless visual aftereffects have illustrated how visual sensitivity and perception can be biased by adaptation to the recent temporal context. This contextual modulation has been proposed to serve a variety of functions, but the actual benefits of adaptation remain uncertain. We describe an approach we have recently developed for exploring these benefits by adapting images instead of observers, to simulate how images should appear under theoretically optimal states of adaptation. This allows the long-term consequences of adaptation to be evaluated in ways that are difficult to probe by adapting observers, and provides a common framework for understanding how visual coding changes when the environment or the observer changes, or for evaluating how the effects of temporal context depend on different models of visual coding or the adaptation processes. The approach is illustrated for the specific case of adaptation to color, for which the initial neural coding and adaptation processes are relatively well understood, but can in principle be applied to examine the consequences of adaptation for any stimulus dimension. A simple calibration that adjusts each neuron’s sensitivity according to the stimulus level it is exposed to is sufficient to normalize visual coding and generate a host of benefits, from increased efficiency to perceptual constancy to enhanced discrimination. This temporal normalization may also provide an important precursor for the effective operation of contextual mechanisms operating across space or feature dimensions. To the extent that the effects of adaptation can be predicted, images from new environments could be “pre-adapted” to match them to the observer, eliminating the need for observers to adapt. PMID:25281412
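
    "Pre-adapting" an image to an observer can be sketched as a von Kries-style per-channel gain, a simplified stand-in for the sensitivity calibration the authors describe; the choice of channel means as the adapting statistic is an assumption.

```python
import numpy as np

def pre_adapt(img, env_mean, target_mean):
    """Adapt the image instead of the observer: rescale each color
    channel by the ratio of adapted sensitivities, i.e. a von Kries-style
    gain g_c = target_mean_c / env_mean_c, so that a new environment's
    statistics are mapped onto the observer's current adapted state."""
    gain = np.asarray(target_mean) / np.asarray(env_mean)
    return np.clip(img * gain, 0.0, 1.0)
```

    Each channel's sensitivity is adjusted according to the level it is exposed to, which is exactly the simple temporal normalization the abstract argues is sufficient to generate many of adaptation's benefits.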

  7. Fusion of color Doppler and magnetic resonance images of the heart.

    PubMed

    Wang, Chao; Chen, Ming; Zhao, Jiang-Min; Liu, Yi

    2011-12-01

    This study was designed to establish and analyze color Doppler and magnetic resonance fusion images of the heart, an approach for simultaneous testing of cardiac pathological alterations, performance, and hemodynamics. Ten volunteers were tested in this study. The echocardiographic images were produced by Philips IE33 system and the magnetic resonance images were generated from Philips 3.0-T system. The fusion application was implemented on MATLAB platform utilizing image processing technology. The fusion image was generated from the following steps: (1) color Doppler blood flow segmentation, (2) image registration of color Doppler and magnetic resonance imaging, and (3) image fusion of different image types. The fusion images of color Doppler blood flow and magnetic resonance images were implemented by MATLAB programming in our laboratory. Images and videos were displayed and saved as AVI and JPG. The present study shows that the method we have developed can be used to fuse color flow Doppler and magnetic resonance images of the heart. We believe that the method has the potential to: fill in information missing from the ultrasound or MRI alone, show structures outside the field of view of the ultrasound through MR imaging, and obtain complementary information through the fusion of the two imaging methods (structure from MRI and function from ultrasound). PMID:21656081

  8. Radar Image with Color as Height, Lovea, Cambodia

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This image of Lovea, Cambodia, was acquired by NASA's Airborne Synthetic Aperture Radar (AIRSAR). Lovea, the roughly circular feature in the middle-right of the image, rises some 5 meters (16.4 feet) above the surrounding terrain. Lovea is larger than many of the other mound sites, with a diameter greater than 300 meters (984.3 feet), but it is one of a number of such sites highlighted by the radar imagery. The present-day village of Lovea does not occupy all of the elevated area; however, at the center of the mound is an ancient spirit post honoring the legendary founder of the village. The mound is surrounded by earthworks and has vestiges of additional curvilinear features. Today, as in the past, these harnessed water during the rainy season and conserved it during the long dry months of the year.

    The village of Lovea located on the mound was established in pre-Khmer times, probably before 500 A.D. In the lower left portion of the image is a large trapeng and square moat. These are good examples of construction during the historical 9th to 14th Century A.D. Khmer period; construction that honored and protected earlier circular villages. This suggests a cultural and technical continuity between prehistoric circular villages and the immense urban site of Angkor. This connection is one of the significant finds generated by NASA's radar imaging of Angkor. It shows that the city of Angkor was a particularly Khmer construction. The temple forms and water management structures of Angkor were the result of pre-existing Khmer beliefs and methods of water management.

    Image dimensions are approximately 6.3 by 4.7 kilometers (3.9 by 2.9 miles). North is at top. Image brightness is from the C-band (5.6 centimeters, or 2.2 inches, wavelength) radar backscatter, which is a measure of how much energy the surface reflects back toward the radar. Color is used to represent elevation contours. One cycle of color represents 20 meters (65.6 feet) of elevation change; that is, going from blue to red to yellow to green and back to blue again corresponds to 20 meters (65.6 feet) of elevation change.

  9. Do common mechanisms of adaptation mediate color discrimination and appearance? Uniform backgrounds.

    PubMed

    Hillis, James M; Brainard, David H

    2005-10-01

    Color vision is useful for detecting surface boundaries and identifying objects. Are the signals used to perform these two functions processed by common mechanisms, or has the visual system optimized its processing separately for each task? We measured the effect of mean chromaticity and luminance on color discriminability and on color appearance under well-matched stimulus conditions. In the discrimination experiments, a pedestal spot was presented in one interval and a pedestal + test in a second. Observers indicated which interval contained the test. In the appearance experiments, observers matched the appearance of test spots across a change in background. We analyzed the data using a variant of Fechner's proposal, that the rate of apparent stimulus change is proportional to visual sensitivity. We found that saturating visual response functions together with a model of adaptation that included multiplicative gain control and a subtractive term accounted for data from both tasks. This result suggests that effects of the contexts we studied on color appearance and discriminability are controlled by the same underlying mechanism.
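
    The adaptation model the data supported, multiplicative gain control plus a subtractive term feeding a saturating response function, can be written compactly. The Naka-Rushton form of the saturating nonlinearity and the parameter values here are illustrative, not the paper's fitted model.

```python
import numpy as np

def visual_response(x, gain=1.0, subtract=0.0, semi_sat=1.0):
    """Saturating visual response combining the two adaptation mechanisms
    the paper's model includes: multiplicative gain control and a
    subtractive term, followed by a saturating nonlinearity."""
    drive = np.maximum(gain * x - subtract, 0.0)
    return drive / (drive + semi_sat)
```

    Under the Fechnerian reading in the abstract, discriminability is proportional to the slope of this function, so shifting `gain` and `subtract` with the background moves the steep (most discriminable) region to the adapted mean, and appearance matches follow the same curve.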

  10. Color Doppler imaging of the retrobulbar vessels in diabetic retinopathy.

    PubMed

    Pauk-Domańska, Magdalena; Walasik-Szemplińska, Dorota

    2014-03-01

    Diabetes is a metabolic disease characterized by elevated blood glucose levels due to impaired insulin secretion and activity. Chronic hyperglycemia leads to functional disorders of numerous organs and to their damage. Vascular lesions belong to the most common late complications of diabetes. Microangiopathic lesions can be found in the eyeball, kidneys and nervous system. Macroangiopathy is associated with coronary and peripheral vessels. Diabetic retinopathy is the most common microangiopathic complication, characterized by the closure of small retinal blood vessels and increased vessel permeability. Despite intensive research, the pathomechanism that leads to the development and progression of diabetic retinopathy is not fully understood. The examinations used in assessing diabetic retinopathy usually involve imaging of the vessels in the eyeball and the retina. The examinations therefore include: fluorescein angiography, optical coherence tomography of the retina, B-mode ultrasound imaging, perimetry and digital retinal photography. Many papers discuss the correlations between retrobulbar circulation alterations and the progression of diabetic retinopathy based on Doppler sonography. Color Doppler imaging is a non-invasive method enabling measurements of blood flow velocities in small vessels of the eyeball. The most frequently assessed vessels include: the ophthalmic artery, which is the first branch of the internal carotid artery, as well as the central retinal vein and artery, and the posterior ciliary arteries. The analysis of hemodynamic alterations in the retrobulbar vessels may deliver important information concerning circulation in diabetes and help to answer the question whether there is a relation between the progression of diabetic retinopathy and the changes observed in blood flow in the vessels of the eyeball. This paper presents an overview of the literature regarding studies on blood flow in the vessels of the eyeball in patients with diabetic retinopathy.

  11. Color Doppler imaging of the retrobulbar vessels in diabetic retinopathy

    PubMed Central

    Walasik-Szemplińska, Dorota

    2014-01-01

    Diabetes is a metabolic disease characterized by elevated blood glucose levels due to impaired insulin secretion and activity. Chronic hyperglycemia leads to functional disorders of numerous organs and to their damage. Vascular lesions belong to the most common late complications of diabetes. Microangiopathic lesions can be found in the eyeball, kidneys and nervous system. Macroangiopathy is associated with coronary and peripheral vessels. Diabetic retinopathy is the most common microangiopathic complication, characterized by the closure of small retinal blood vessels and increased vessel permeability. Despite intensive research, the pathomechanism that leads to the development and progression of diabetic retinopathy is not fully understood. The examinations used in assessing diabetic retinopathy usually involve imaging of the vessels in the eyeball and the retina. The examinations therefore include: fluorescein angiography, optical coherence tomography of the retina, B-mode ultrasound imaging, perimetry and digital retinal photography. Many papers discuss the correlations between retrobulbar circulation alterations and the progression of diabetic retinopathy based on Doppler sonography. Color Doppler imaging is a non-invasive method enabling measurements of blood flow velocities in small vessels of the eyeball. The most frequently assessed vessels include: the ophthalmic artery, which is the first branch of the internal carotid artery, as well as the central retinal vein and artery, and the posterior ciliary arteries. The analysis of hemodynamic alterations in the retrobulbar vessels may deliver important information concerning circulation in diabetes and help to answer the question whether there is a relation between the progression of diabetic retinopathy and the changes observed in blood flow in the vessels of the eyeball. This paper presents an overview of the literature regarding studies on blood flow in the vessels of the eyeball in patients with diabetic retinopathy.

  12. Image Watermarking Based on Adaptive Models of Human Visual Perception

    NASA Astrophysics Data System (ADS)

    Khawne, Amnach; Hamamoto, Kazuhiko; Chitsobhuk, Orachat

    This paper proposes a digital image watermarking scheme based on adaptive models of human visual perception. The algorithm exploits the local activities estimated from the wavelet coefficients of each subband to adaptively control the luminance masking. The adaptive luminance is then combined with contrast masking and edge detection and adopted as a visibility threshold. With the proposed combination of adaptive visual sensitivity parameters, the perceptual model can better match the differing characteristics of various images. The weighting function is chosen such that fidelity, imperceptibility and robustness are preserved without making any perceptual difference to the image quality.

  13. Quantum Color Image Encryption Algorithm Based on A Hyper-Chaotic System and Quantum Fourier Transform

    NASA Astrophysics Data System (ADS)

    Tan, Ru-Chao; Lei, Tong; Zhao, Qing-Min; Gong, Li-Hua; Zhou, Zhi-Hong

    2016-09-01

    To improve the slow processing speed of classical image encryption algorithms and enhance the security of private color images, a new quantum color image encryption algorithm based on a hyper-chaotic system is proposed, in which the sequences generated by Chen's hyper-chaotic system are used to scramble and diffuse the three components of the original color image. Subsequently, the quantum Fourier transform is exploited to complete the encryption. Numerical simulations show that the presented quantum color image encryption algorithm possesses a large key space to resist illegal attacks, sensitive dependence on the initial keys, a uniform distribution of gray values in the encrypted image, and weak correlation between adjacent pixels in the cipher-image.

  14. Digital image color printing on plastics using a slab RF-excited CO2 laser

    NASA Astrophysics Data System (ADS)

    Kawarazaki, Masaru; Sakurada, Noriyo; Ishii, Yoshio; Kubota, Yuzuru; Watanabe, Kazuhiro

    2003-11-01

    An innovative coloring method for whole plastic materials using a laser system (the Laser Plastic Coloring, or LPC, method) has been developed in this work. When a laser beam irradiates a dye diluted with water, the dye solution is heated and the material is dyed in the laser-exposed local area. A CO2 laser was used as the heating source, since water absorbs CO2 laser light more strongly than light from other lasers. Using this LPC method, laser color marking and color image expression have been attempted. Figures and characters in the four colors cyan, magenta, yellow, and black (CMYK) have been created using the laser marking method. By mixing dots of two colors, additional colors have been printed, and varying the mixing ratio of the two colors' dots makes it possible to create still more colors. From a digital image containing color information, a colorful image has been successfully rendered on a plastic object using the LPC method combined with a segmented pixel drawing (SPD) method developed in our laboratory to produce an artistic drawing.

  15. Electronic imaging aids for night driving: low-light CCD, uncooled thermal IR, and color-fused visible/LWIR

    NASA Astrophysics Data System (ADS)

    Waxman, Allen M.; Savoye, Eugene D.; Fay, David A.; Aguilar, Mario; Gove, Alan N.; Carrick, James E.; Racamato, Joseph P.

    1997-02-01

    MIT Lincoln Laboratory is developing new electronic night vision technologies for defense applications which can be adapted for civilian applications such as night driving aids. These technologies include (1) low-light CCD imagers capable of operating under starlight illumination conditions at video rates, (2) realtime processing of wide dynamic range imagery (visible and IR) to enhance contrast and adaptively compress dynamic range, and (3) realtime fusion of low-light visible and thermal IR imagery to provide color display of the night scene to the operator in order to enhance situational awareness. This paper compares imagery collected during night driving including: low-light CCD visible imagery, intensified-CCD visible imagery, uncooled long-wave IR imagery, cryogenically cooled mid-wave IR imagery, and visible/IR dual-band imagery fused for gray and color display.

  16. Color image segmentation by the vector-valued Allen-Cahn phase-field model: a multigrid solution.

    PubMed

    Kay, David A; Tomasi, Alessandro

    2009-10-01

    We present an efficient numerical solution of a PDE-driven model for color image segmentation and give numerical examples of the results. The method combines the vector-valued Allen-Cahn phase field equation with initial data fitting terms with prescribed interface width and fidelity constants. Efficient numerical solution is achieved using a multigrid splitting of a finite element space, thereby producing an efficient and robust method for the segmentation of large images. We also present the use of adaptive mesh refinement to further speed up the segmentation process.
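
    A minimal scalar finite-difference version of the Allen-Cahn segmentation dynamics, with a double-well potential and a data-fitting term, is sketched below. The paper's method is a vector-valued finite-element scheme solved by multigrid with adaptive refinement; this explicit toy step only illustrates the PDE being solved.

```python
import numpy as np

def allen_cahn_step(u, fit, eps=0.05, fidelity=10.0, dt=1e-3):
    """One explicit time step of the Allen-Cahn phase-field equation with
    a data-fitting term: u_t = eps*Lap(u) - W'(u)/eps + fidelity*(fit - u),
    where W(u) = u^2 (1-u)^2 is a double-well potential whose minima at
    0 and 1 label the two segments."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)   # periodic 5-point Laplacian
    w_prime = 2 * u * (1 - u) * (1 - 2 * u)                 # W'(u)
    return u + dt * (eps * lap - w_prime / eps + fidelity * (fit - u))
```

    The well term sharpens the phase field toward 0/1 labels with interface width controlled by `eps`, while the fidelity term keeps the labels close to the image-fitting data, which is the balance the abstract describes.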

  17. Application of image quality metamerism to investigate gold color area in cultural property

    NASA Astrophysics Data System (ADS)

    Miyata, Kimiyoshi; Tsumura, Norimichi

    2013-01-01

    A concept of image quality metamerism, an expansion of the conventional metamerism defined in color science, is introduced and applied to segment similar color areas in a cultural property. Image quality metamerism can unify different image quality attributes through a proposed index showing the degree of image quality metamerism. As a basic research step, the index consists of color and texture information and is examined to investigate a cultural property. The property investigated is a pair of folding screen paintings that depict the thriving city of Kyoto, designated as a nationally important cultural property in Japan. Gold-colored areas, painted using colorants of higher granularity than those of other color areas, are evaluated locally based on the image quality metamerism index; the index is then visualized as a map showing the possibility of an image quality metamer relative to a reference pixel set in the same image. This visualization amounts to a segmentation of areas where color is similar but texture is different. The experimental results showed that the proposed method was effective in identifying the gold-colored areas in the property.

  18. A novel false color mapping model-based fusion method of visual and infrared images

    NASA Astrophysics Data System (ADS)

    Qi, Bin; Kun, Gao; Tian, Yue-xin; Zhu, Zhen-yu

    2013-12-01

    A fast and efficient image fusion method is presented to generate near-natural colors from panchromatic visual and thermal imaging sensors. Firstly, a set of daytime color reference images is analyzed and the false color mapping principle is proposed according to human visual and emotional habits: object colors should remain invariant after color mapping operations, differences between the infrared and visual images should be enhanced, and the background color should be consistent with the main scene content. Then a novel nonlinear color mapping model is given by introducing the geometric average of the gray values of the input visual and infrared images and a weighted-average algorithm. To determine the control parameters in the mapping model, boundary conditions are listed according to the mapping principle above. Fusion experiments show that the new fusion method achieves a near-natural appearance of the fused image and, compared with the traditional TNO algorithm, enhances color contrast and highlights bright infrared objects. Moreover, it has low complexity and is easy to realize in real-time processing, so it is quite suitable for nighttime imaging apparatus.
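
    A toy mapping in the spirit described, combining a geometric average and a weighted average of the two gray images so that IR-bright objects stand out, might look like the following. The channel assignments and the opponent term are assumptions for illustration, not the paper's fitted model.

```python
import numpy as np

def false_color_fuse(vis, ir, w=0.5):
    """Fuse a visual and an infrared gray image into a false-color RGB
    image: equal inputs stay achromatic, while IR excess is pushed toward
    red and visual excess toward blue."""
    geo = np.sqrt(vis * ir)                     # geometric average term
    avg = w * vis + (1 - w) * ir                # weighted average term
    r = np.clip(avg + (ir - vis) / 2, 0, 1)     # IR-hot objects turn reddish
    g = np.clip(geo, 0, 1)
    b = np.clip(avg - (ir - vis) / 2, 0, 1)     # visual-bright areas turn bluish
    return np.stack([r, g, b], axis=-1)
```

    The boundary conditions in the abstract play the role the tests below check informally: identical inputs should map to a neutral color, and an object much brighter in the IR band should be rendered conspicuously warm.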

  19. Hyperspectral imaging of cuttlefish camouflage indicates good color match in the eyes of fish predators

    PubMed Central

    Chiao, Chuan-Chin; Wickiser, J. Kenneth; Allen, Justine J.; Genter, Brock; Hanlon, Roger T.

    2011-01-01

    Camouflage is a widespread phenomenon throughout nature and an important antipredator tactic in natural selection. Many visual predators have keen color perception, and thus camouflage patterns should provide some degree of color matching in addition to other visual factors such as pattern, contrast, and texture. Quantifying camouflage effectiveness in the eyes of the predator is a challenge from the perspectives of both biology and optical imaging technology. Here we take advantage of hyperspectral imaging (HSI), which records full-spectrum light data, to simultaneously visualize color match and pattern match in the spectral and the spatial domains, respectively. Cuttlefish can dynamically camouflage themselves on any natural substrate and, despite their colorblindness, produce body patterns that appear to have high-fidelity color matches to the substrate when viewed directly by humans or with RGB images. Live camouflaged cuttlefish on natural backgrounds were imaged using HSI, and subsequent spectral analysis revealed that most reflectance spectra of individual cuttlefish and substrates were similar, rendering the color match possible. Modeling color vision of potential di- and trichromatic fish predators of cuttlefish corroborated the spectral match analysis and demonstrated that camouflaged cuttlefish show good color match as well as pattern match in the eyes of fish predators. These findings (i) indicate the strong potential of HSI technology to enhance studies of biological coloration and (ii) provide supporting evidence that cuttlefish can produce color-coordinated camouflage on natural substrates despite lacking color vision. PMID:21576487

  20. Brightness, lightness, and specifying color in high-dynamic-range scenes and images

    NASA Astrophysics Data System (ADS)

    Fairchild, Mark D.; Chen, Ping-Hsu

    2011-01-01

    Traditional color spaces have been widely used in a variety of applications including digital color imaging, color image quality, and color management. These spaces, however, were designed for the domain of color stimuli typically encountered with reflecting objects and image displays of such objects. This means the domain of stimuli with luminance levels from slightly above zero to that of a perfect diffuse white (or display white point). This limits the applicability of such spaces to color problems in HDR imaging. This is caused by their hard intercepts at zero luminance/lightness and by their uncertain applicability for colors brighter than diffuse white. To address HDR applications, two new color spaces were recently proposed, hdr-CIELAB and hdr-IPT. They are based on replacing the power-function nonlinearities in CIELAB and IPT with more physiologically plausible hyperbolic functions optimized to most closely simulate the original color spaces in the diffuse reflecting color domain. This paper presents the formulation of the new models, evaluations using Munsell data in comparison with CIELAB, IPT, and CIECAM02, two sets of lightness-scaling data above diffuse white, and various possible formulations of hdr-CIELAB and hdr-IPT to predict the visual results.
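    The key substitution, replacing CIELAB's cube-root compression with a hyperbolic (Michaelis-Menten style) function that remains well-behaved above diffuse white, can be illustrated as follows. The exponent and semi-saturation constant below are illustrative stand-ins, not the optimized values from the paper.

```python
def lab_lightness(Y):
    """Classic CIELAB L* for relative luminance Y (diffuse white: Y = 1)."""
    f = Y ** (1.0 / 3.0) if Y > 0.008856 else (903.3 * Y + 16.0) / 116.0
    return 116.0 * f - 16.0

def hdr_lightness(Y, e=0.58, s=0.2, scale=100.0):
    """Hyperbolic lightness in the spirit of hdr-CIELAB.

    Michaelis-Menten form: it saturates smoothly instead of having a
    hard ceiling, so stimuli brighter than diffuse white (Y > 1) still
    map to meaningful values. e and s are illustrative, not the
    paper's optimized parameters.
    """
    return scale * Y ** e / (Y ** e + s ** e)
```

    L* is conventionally defined only up to diffuse white, whereas the hyperbolic curve keeps increasing monotonically for Y > 1, which is exactly the property HDR applications need.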

  1. Real-time adaptive video image enhancement

    NASA Astrophysics Data System (ADS)

    Garside, John R.; Harrison, Chris G.

    1999-07-01

    As part of a continuing collaboration between the University of Manchester and British Aerospace, a signal processing array has been constructed to demonstrate that it is feasible to compensate a video signal for the degradation caused by atmospheric haze in real time. Previously reported work has shown good agreement between a simple physical model of light scattering by atmospheric haze and the observed loss of contrast. This model predicts a characteristic relationship between contrast loss in the image and the range from the camera to the scene. For an airborne camera, the slant-range to a point on the ground may be estimated from the airplane's pose, as reported by the inertial navigation system, and the contrast may be obtained from the camera's output. Fusing data from these two streams provides a means of estimating model parameters such as the visibility and the overall illumination of the scene. This knowledge allows the same model to be applied in reverse, thus restoring the contrast lost to atmospheric haze. An efficient approximation of range is vital for a real-time implementation of the method. Preliminary results show that an adaptive approach to fitting the model's parameters, exploiting the temporal correlation between video frames, leads to a robust implementation with a significantly accelerated throughput.
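    The abstract does not give the scattering model itself, but a standard Koschmieder-style formulation behaves as described (contrast decays exponentially with range) and can be inverted per pixel once visibility and illumination are estimated. The model form and parameter names below are assumptions for illustration, not the authors' implementation.

```python
import math

def restore_contrast(observed, airlight, beta, slant_range):
    """Invert a Koschmieder-style haze model for one pixel.

    Assumed forward model (not taken from the paper):
        observed = radiance * t + airlight * (1 - t),  t = exp(-beta * range)
    With the slant range known from aircraft pose, and beta (visibility)
    and airlight (illumination) estimated by the adaptive fit, the
    haze-free radiance follows directly.
    """
    t = math.exp(-beta * slant_range)   # atmospheric transmission
    return (observed - airlight * (1.0 - t)) / t
```

    Applying the forward model and then this inverse recovers the original radiance, which is the per-pixel core of the restoration; the engineering challenge noted in the abstract is computing `slant_range` efficiently for every pixel.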

  2. Color Image of Death Valley, California from SIR-C

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This radar image shows the area of Death Valley, California and the different surface types in the area. Radar is sensitive to surface roughness with rough areas showing up brighter than smooth areas, which appear dark. This is seen in the contrast between the bright mountains that surround the dark, smooth basins and valleys of Death Valley. The image shows Furnace Creek alluvial fan (green crescent feature) at the far right, and the sand dunes near Stove Pipe Wells at the center. Alluvial fans are gravel deposits that wash down from the mountains over time. Several other alluvial fans (semicircular features) can be seen along the mountain fronts in this image. The dark wrench-shaped feature between Furnace Creek fan and the dunes is a smooth flood-plain which encloses Cottonball Basin. Elevations in the valley range from 70 meters (230 feet) below sea level, the lowest in the United States, to more than 3,300 meters (10,800 feet) above sea level. Scientists are using these radar data to help answer a number of different questions about Earth's geology including how alluvial fans form and change through time in response to climatic changes and earthquakes. The image is centered at 36.629 degrees north latitude, 117.069 degrees west longitude. Colors in the image represent different radar channels as follows: red = L-band horizontally transmitted, horizontally received (LHH); green = L-band horizontally transmitted, vertically received (LHV); and blue = C-band horizontally transmitted, vertically received (CHV).

    SIR-C/X-SAR is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground

  3. Plasmonics-Based Multifunctional Electrodes for Low-Power-Consumption Compact Color-Image Sensors.

    PubMed

    Lin, Keng-Te; Chen, Hsuen-Li; Lai, Yu-Sheng; Chi, Yi-Min; Chu, Ting-Wei

    2016-03-01

    High pixel density, efficient color splitting, a compact structure, superior quantum efficiency, and low power consumption are all important features for contemporary color-image sensors. In this study, we developed a surface plasmonics-based color-image sensor displaying a high photoelectric response, a microlens-free structure, and a zero-bias working voltage. Our compact sensor comprised only (i) a multifunctional electrode based on a single-layer structured aluminum (Al) film and (ii) an underlying silicon (Si) substrate. This approach significantly simplifies the device structure and fabrication processes; for example, the red, green, and blue color pixels can be prepared simultaneously in a single lithography step. Moreover, such Schottky-based plasmonic electrodes perform multiple functions, including color splitting, optical-to-electrical signal conversion, and photogenerated carrier collection for color-image detection. Our multifunctional, electrode-based device could also avoid the interference phenomenon that degrades the color-splitting spectra found in conventional color-image sensors. Furthermore, the device took advantage of the near-field surface plasmonic effect around the Al-Si junction to enhance the optical absorption of Si, resulting in a significant photoelectric current output even under low-light surroundings and zero bias voltage. These plasmonic Schottky-based color-image devices could convert a photocurrent directly into a photovoltage and provided sufficient voltage output for color-image detection even under a light intensity of only several femtowatts per square micrometer. Unlike conventional color image devices, using voltage as the output signal decreases the area of the periphery read-out circuit because it does not require a current-to-voltage conversion capacitor or its related circuit. Therefore, this strategy has great potential for direct integration with complementary metal-oxide-semiconductor (CMOS)-compatible circuit

  5. Artificial frame filling using adaptive neural fuzzy inference system for particle image velocimetry dataset

    NASA Astrophysics Data System (ADS)

    Akdemir, Bayram; Doǧan, Sercan; Aksoy, Muharrem H.; Canli, Eyüp; Özgören, Muammer

    2015-03-01

    Liquid behaviors are very important in many areas, especially mechanical engineering, and high-speed cameras are a common way to observe and study them. The camera traces dust or colored markers travelling in the liquid and captures as many images per second as possible, each with a large data volume due to its resolution. For fast liquid flows it is not easy to evaluate the motion or to produce a fluent frame sequence from the captured images. Artificial intelligence is widely used in science to solve such nonlinear problems, and the adaptive neural fuzzy inference system (ANFIS) is a common technique in the literature. A particle moving in a liquid has a two-dimensional velocity and its derivatives; ANFIS uses these velocities and vorticities to create a crossing-point vector between the previous and subsequent frames. In this study, ANFIS was used offline to synthesize virtual frames between the real frames of a particle image velocimetry dataset in order to improve image continuity, which makes the sequence much easier to interpret at chaotic or high-vorticity points. After applying ANFIS, the dataset doubles in size, alternating virtual and real frames. The results were evaluated using R² testing and mean squared error; R², a statistical measure of similarity, reached 0.82, 0.81, 0.85, and 0.8 for the velocities and their derivatives, respectively.
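    ANFIS itself is beyond a short sketch, but the frame-filling step it performs has a simple linear baseline: a virtual frame whose velocity vectors are the component-wise average of the previous and post frames. The sketch below shows only that baseline; the trained ANFIS would refine these crossing-point estimates using vorticity as well.

```python
def interpolate_frame(prev_field, post_field):
    """Synthesize a virtual PIV frame between two real velocity fields.

    Each field is a 2-D grid (list of rows) of (u, v) velocity tuples.
    Linear baseline only: each virtual vector is the component-wise
    average of its two real neighbors in time; the paper's ANFIS model
    replaces this average with a learned nonlinear estimate.
    """
    return [[((u0 + u1) / 2.0, (v0 + v1) / 2.0)
             for (u0, v0), (u1, v1) in zip(row0, row1)]
            for row0, row1 in zip(prev_field, post_field)]
```

    Interleaving the returned virtual frames with the real ones doubles the dataset length, as described in the abstract.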

  6. Image bleed in color ink-jet printing of plain paper

    NASA Astrophysics Data System (ADS)

    Barker, Lesley J.; dePierne, Otto S.; Proverb, Robert J.; Wasser, Richard B.

    1994-05-01

    The bleed of one color into another is detrimental to perceived print quality of color-printed images, and is one of the problems encountered in ink-jet color printing. Rapid absorption of ink dye and vehicle into the paper acts to prevent coalescence of color droplets, but too strong an absorption of the vehicle along the paper fibers causes spreading and feathering of the image boundary. The process is therefore very delicate and sensitive to the physical and chemical characteristics of the paper surface. In this work, color bleed of characters printed on experimental sheets by an HP 500C DeskJet printer was measured quantitatively by image analysis. The effects of variation of internal sizing on color bleed and color optical density were measured, as well as effects resulting from surface treatments with different levels of starch and polymeric surface size. Results were compared with analogous measurements for printing without an adjacent color, and also for black ink printing on the same paper. The level of starch in the surface treatment was most important in controlling color bleed, whereas surface size was most helpful in preventing image spread in black ink printing, and in increasing the optical density of both black and composite black images.

  7. A novel color image encryption scheme using alternate chaotic mapping structure

    NASA Astrophysics Data System (ADS)

    Wang, Xingyuan; Zhao, Yuanyuan; Zhang, Huili; Guo, Kang

    2016-07-01

    This paper proposes a color image encryption algorithm using an alternate chaotic mapping structure. Initially, the R, G, and B components are used to form a matrix. One-dimensional and two-dimensional logistic maps are then used to generate a chaotic matrix, and the two chaotic maps are iterated alternately to permute the matrix. At every iteration an XOR operation encrypts the plain-image matrix, followed by a further transformation to diffuse it. Finally, the encrypted color image is obtained from the confused matrix. Theoretical analysis and experimental results show that the cryptosystem is secure, practical, and suitable for encrypting color images.
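    The keystream half of such a scheme is easy to illustrate. The sketch below uses only the one-dimensional logistic map and an XOR stage on a single color channel; the paper's alternation with a two-dimensional map and its permutation and diffusion stages are omitted, and the seed values are arbitrary examples.

```python
def logistic_keystream(x0, r, n):
    """Quantize iterates of the 1-D logistic map x -> r*x*(1-x) to bytes.

    Minimal sketch of the chaotic-keystream idea; the paper alternates
    1-D and 2-D logistic maps and adds permutation/diffusion stages
    that are omitted here. x0 and r are arbitrary example values.
    """
    x, stream = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)          # chaotic iterate stays in (0, 1)
        stream.append(int(x * 256) % 256)
    return stream

def xor_encrypt(channel, x0=0.3456, r=3.99):
    """XOR one color channel (list of 0-255 ints) with the keystream.

    XOR is an involution: applying this function twice with the same
    key parameters restores the plain channel.
    """
    ks = logistic_keystream(x0, r, len(channel))
    return [b ^ k for b, k in zip(channel, ks)]
```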

  8. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction

    PubMed Central

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-01-01

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed “digital color fusion microscopy” (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available. PMID:27283459
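    DCFM's wavelet-transform colorization is more elaborate, but the underlying idea, transferring high-resolution luminance from the holographic channel onto lower-resolution color data, can be sketched with a per-pixel intensity substitution. The BT.601 luma weights and the rescaling rule here are illustrative assumptions, not the authors' method.

```python
def fuse_color(mono_lum, rgb):
    """Transfer high-resolution luminance onto an RGB pixel.

    DCFM does this with a wavelet transform; this is the simpler
    intensity-substitution variant of the same idea: rescale the RGB
    triple so its luma equals the holographic (mono) luminance.
    Weights and rule are illustrative assumptions.
    """
    r, g, b = rgb
    luma = 0.299 * r + 0.587 * g + 0.114 * b   # ITU-R BT.601 luma
    if luma == 0:
        return (mono_lum, mono_lum, mono_lum)  # no chroma to preserve
    s = mono_lum / luma
    return tuple(min(255, int(round(c * s))) for c in (r, g, b))
```

    The hue ratio of the color-calibrated image is preserved while the fine spatial detail comes from the holographic channel, which is the division of labor the abstract describes.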

  12. Estimation of color modification in digital images by CFA pattern change.

    PubMed

    Choi, Chang-Hee; Lee, Hae-Yeoun; Lee, Heung-Kyu

    2013-03-10

    Extensive studies have been carried out for detecting image forgery such as copy-move, re-sampling, blurring, and contrast enhancement. Although color modification is a common forgery technique, there is no reported forensic method for detecting this type of manipulation. In this paper, we propose a novel algorithm for estimating color modification in images acquired from digital cameras when the images are modified. Most commercial digital cameras are equipped with a color filter array (CFA) for acquiring the color information of each pixel. As a result, the images acquired from such digital cameras include a trace from the CFA pattern. This pattern is composed of the basic red green blue (RGB) colors, and it is changed when color modification is carried out on the image. We designed an advanced intermediate value counting method for measuring the change in the CFA pattern and estimating the extent of color modification. The proposed method is verified experimentally by using 10,366 test images. The results confirmed the ability of the proposed method to estimate color modification with high accuracy.
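    The intermediate-value-counting idea can be illustrated in one dimension: interpolated (demosaicked) samples almost always lie between their neighbors, while sensed samples do so only by chance, so the CFA phase with the highest such ratio reveals the pattern alignment, and a change of that phase after color modification exposes the edit. The function below is a toy version, not the paper's advanced counting method.

```python
def intermediate_ratio(row, phase):
    """Fraction of samples at one CFA phase that lie between their
    horizontal neighbors, i.e. that look interpolated rather than sensed.

    Toy version of intermediate-value counting: demosaicked pixels pass
    this test almost always, sensed pixels only by chance, so the phase
    with the highest ratio indicates the CFA alignment. Illustrative
    only; the paper's method is more advanced.
    """
    hits = total = 0
    for i in range(1, len(row) - 1):
        if i % 2 == phase:
            total += 1
            lo, hi = sorted((row[i - 1], row[i + 1]))
            if lo <= row[i] <= hi:
                hits += 1
    return hits / total if total else 0.0
```

    Comparing the ratios of the two phases of a row (or the four phases of a 2-D block) gives a simple detector of where the interpolated samples sit.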

  13. Sparse Non-negative Matrix Factorization (SNMF) based color unmixing for breast histopathological image analysis.

    PubMed

    Xu, Jun; Xiang, Lei; Wang, Guanhao; Ganesan, Shridar; Feldman, Michael; Shih, Natalie N C; Gilmore, Hannah; Madabhushi, Anant

    2015-12-01

    Color deconvolution has emerged as a popular method for color unmixing as a pre-processing step for image analysis of digital pathology images. One deficiency of this approach is that the stain matrix is pre-defined, which requires specific knowledge of the data. This paper presents an unsupervised Sparse Non-negative Matrix Factorization (SNMF) based approach for color unmixing. We evaluate this approach for color unmixing of breast pathology images. Compared to Non-negative Matrix Factorization (NMF), the sparseness constraint imposed on the coefficient matrix aims to use a more meaningful representation of color components for separating stained colors. In this work SNMF is leveraged for decomposing pure stained color in both Immunohistochemistry (IHC) and Hematoxylin and Eosin (H&E) images. SNMF is compared with Principal Component Analysis (PCA), Independent Component Analysis (ICA), Color Deconvolution (CD), and Non-negative Matrix Factorization (NMF) based approaches. SNMF demonstrated improved performance in decomposing the brown diaminobenzidine (DAB) component from 36 IHC images as well as accurately segmenting about 1400 nuclei and 500 lymphocytes from H&E images.
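    The factorization step can be sketched with textbook multiplicative NMF updates plus an L1 term on the coefficient matrix, which is the usual way a sparseness constraint enters. This is a didactic stand-in, not the authors' solver; a real stain-unmixing pipeline would first convert RGB to optical density via the Beer-Lambert law and tune the sparsity weight.

```python
import random

def nmf(V, k=2, iters=100, l1=0.0):
    """Multiplicative-update NMF with an optional L1 (sparseness)
    penalty on the coefficient matrix H, in the spirit of SNMF.

    V is a channels-x-pixels matrix (list of rows) of non-negative
    values. Didactic stand-in only, not the authors' solver.
    """
    random.seed(0)
    m, n = len(V), len(V[0])
    W = [[random.random() + 0.1 for _ in range(k)] for _ in range(m)]
    H = [[random.random() + 0.1 for _ in range(n)] for _ in range(k)]
    for _ in range(iters):
        WH = [[sum(W[i][t] * H[t][j] for t in range(k)) for j in range(n)]
              for i in range(m)]
        # H <- H * (W^T V) / (W^T W H + l1): the L1 term shrinks H toward sparsity
        for t in range(k):
            for j in range(n):
                num = sum(W[i][t] * V[i][j] for i in range(m))
                den = sum(W[i][t] * WH[i][j] for i in range(m)) + l1 + 1e-12
                H[t][j] *= num / den
        WH = [[sum(W[i][t] * H[t][j] for t in range(k)) for j in range(n)]
              for i in range(m)]
        # W <- W * (V H^T) / (W H H^T): columns of W play the role of stain colors
        for i in range(m):
            for t in range(k):
                num = sum(V[i][j] * H[t][j] for j in range(n))
                den = sum(WH[i][j] * H[t][j] for j in range(n)) + 1e-12
                W[i][t] *= num / den
    return W, H
```

    In the unmixing setting, the columns of W are the learned stain colors (replacing the pre-defined stain matrix of color deconvolution) and the rows of H are the per-pixel stain concentrations.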

  15. Advanced microlens and color filter process technology for the high-efficiency CMOS and CCD image sensors

    NASA Astrophysics Data System (ADS)

    Fan, Yang-Tung; Peng, Chiou-Shian; Chu, Cheng-Yu

    2000-12-01

    New markets are emerging for digital electronic imaging devices, especially in visual communications, PC cameras, mobile/cell phones, security systems, toys, vehicle imaging systems, and computer peripherals for document capture. One-chip image systems, in which the image sensor has a fully digital interface, bring image capture devices into our daily lives. Adding a color filter to such an image sensor, in a pattern of mosaic pixels or wide stripes, makes the captured image more realistic and colorful; one could say that the color filter makes life more colorful. A color filter transmits only light of the specific wavelength band, and with the transmittance, of the filter material itself. The color filter process coats and patterns green, red, and blue (or cyan, magenta, and yellow) mosaic resists onto the matched pixels of the image sensing array, and from the signal caught at each pixel the scene image can be reconstructed. The widespread use of digital electronic cameras and multimedia applications today makes color filter technology increasingly important; although challenging, developing the color filter process is well worthwhile. We provide the best service in terms of short cycle time, excellent color quality, and high, stable yield. The key issues of an advanced color process that must be solved and implemented are planarization and micro-lens technology; many other key points of color filter process technology that must be considered are also described in this paper.

  16. A blind dual color images watermarking based on IWT and state coding

    NASA Astrophysics Data System (ADS)

    Su, Qingtang; Niu, Yugang; Liu, Xianxi; Zhu, Yu

    2012-04-01

    In this paper, a state-coding based blind watermarking algorithm is proposed to embed a color image watermark into a color host image. The technique of state coding, which makes the state code of a data set equal to the hidden watermark information, is introduced. When embedding the watermark, the R, G, and B components of the color image watermark are embedded into the Y, Cr, and Cb components of the color host image using the Integer Wavelet Transform (IWT) and the rules of state coding. The same rules are used to extract the watermark from the watermarked image without resorting to the original watermark or original host image. Experimental results show that the proposed algorithm not only meets the demands of watermark invisibility and robustness, but also performs well compared with the other methods considered in this work.

  17. Application of the airborne ocean color imager for commercial fishing

    NASA Technical Reports Server (NTRS)

    Wrigley, Robert C.

    1993-01-01

    The objective of the investigation was to develop a commercial remote sensing system for providing near-real-time data (within one day) in support of commercial fishing operations. The Airborne Ocean Color Imager (AOCI) had been built for NASA by Daedalus Enterprises, Inc., but it needed certain improvements, data processing software, and a delivery system to make it into a commercial system for fisheries. Two products were developed to support this effort: the AOCI with its associated processing system and an information service for both commercial and recreational fisheries to be created by Spectro Scan, Inc. The investigation achieved all technical objectives: improving the AOCI, creating software for atmospheric correction and bio-optical output products, georeferencing the output products, and creating a delivery system to get those products into the hands of commercial and recreational fishermen in near-real-time. The first set of business objectives involved Daedalus Enterprises and also were achieved: they have an improved AOCI and new data processing software with a set of example data products for fisheries applications to show their customers. Daedalus' marketing activities showed the need for simplification of the product for fisheries, but they successfully marketed the current version to an Italian consortium. The second set of business objectives tasked Spectro Scan to provide an information service and they could not be achieved because Spectro Scan was unable to obtain necessary venture capital to start up operations.

  18. Private anonymous fingerprinting for color images in the wavelet domain

    NASA Astrophysics Data System (ADS)

    Abdul, W.; Gaborit, P.; Carré, P.

    2010-01-01

    An online buyer of multimedia content does not want to reveal his identity or his choice of multimedia content whereas the seller or owner of the content does not want the buyer to further distribute the content illegally. To address these issues we present a new private anonymous fingerprinting protocol. It is based on superposed sending for communication security, group signature for anonymity and traceability and single database private information retrieval (PIR) to allow the user to get an element of the database without giving any information about the acquired element. In the presence of a semi-honest model, the protocol is implemented using a blind, wavelet based color image watermarking scheme. The main advantage of the proposed protocol is that both the user identity and the acquired database element are unknown to any third party and in the case of piracy, the pirate can be identified using the group signature scheme. The robustness of the watermarking scheme against Additive White Gaussian Noise is also shown.

  19. Mars Color Imager (MARCI) on the Mars Climate Orbiter

    USGS Publications Warehouse

    Malin, M.C.; Bell, J.F.; Calvin, W.; Clancy, R.T.; Haberle, R.M.; James, P.B.; Lee, S.W.; Thomas, P.C.; Caplinger, M.A.

    2001-01-01

    The Mars Color Imager, or MARCI, experiment on the Mars Climate Orbiter (MCO) consists of two cameras with unique optics and identical focal plane assemblies (FPAs), Data Acquisition System (DAS) electronics, and power supplies. Each camera is characterized by small physical size and mass (~6 x 6 x 12 cm, including baffle; <500 g), low power requirements (<2.5 W, including power supply losses), and high science performance (1000 x 1000 pixel, low noise). The Wide Angle (WA) camera will have the capability to map Mars in five visible and two ultraviolet spectral bands at a resolution of better than 8 km/pixel under the worst case downlink data rate. Under better downlink conditions the WA will provide kilometer-scale global maps of atmospheric phenomena such as clouds, hazes, dust storms, and the polar hood. Limb observations will provide additional detail on atmospheric structure at 1/3 scale-height resolution. The Medium Angle (MA) camera is designed to study selected areas of Mars at regional scale. From 400 km altitude its 6° FOV, which covers ~40 km at 40 m/pixel, will permit all locations on the planet except the poles to be accessible for image acquisitions every two mapping cycles (roughly 52 sols). Eight spectral channels between 425 and 1000 nm provide the ability to discriminate both atmospheric and surface features on the basis of composition. The primary science objectives of MARCI are to (1) observe Martian atmospheric processes at synoptic scales and mesoscales, (2) study details of the interaction of the atmosphere with the surface at a variety of scales in both space and time, and (3) examine surface features characteristic of the evolution of the Martian climate over time. MARCI will directly address two of the three high-level goals of the Mars Surveyor Program: Climate and Resources. Life, the third goal, will be addressed indirectly through the environmental factors associated with the other two goals. Copyright 2001 by the American

  20. Voyager 2 Color Image of Enceladus, Almost Full Disk

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This color Voyager 2 image mosaic shows the water-ice-covered surface of Enceladus, one of Saturn's icy moons. Enceladus' diameter of just 500 km would fit across the state of Arizona, yet despite its small size Enceladus exhibits one of the most interesting surfaces of all the icy satellites. Enceladus reflects about 90% of the incident sunlight (about like fresh-fallen snow), placing it among the most reflective objects in the Solar System. Several geologic terrains have superposed crater densities that span a factor of at least 500, thereby indicating huge differences in the ages of these terrains. It is possible that the high reflectivity of Enceladus' surface results from continuous deposition of icy particles from Saturn's E-ring, which in fact may originate from icy volcanoes on Enceladus' surface. Some terrains are dominated by sinuous mountain ridges from 1 to 2 km high (3300 to 6600 feet), whereas other terrains are scarred by linear cracks, some of which show evidence for possible sideways fault motion such as that of California's infamous San Andreas fault. Some terrains appear to have formed by separation of icy plates along cracks, and other terrains are exceedingly smooth at the resolution of this image. The implication carried by Enceladus' surface is that this tiny ice ball has been geologically active and perhaps partially liquid in its interior for much of its history. The heat engine that powers geologic activity here is thought to be elastic deformation caused by tides induced by Enceladus' orbital motion around Saturn and the motion of another moon, Dione.

  1. Iterative color constancy with temporal filtering for an image sequence with no relative motion between the camera and the scene.

    PubMed

    Simão, Josemar; Schneebeli, Hans Jörg Andreas; Vassallo, Raquel Frizera

    2015-11-01

    Color constancy is the ability to perceive the color of a surface as invariant even under changing illumination. In outdoor applications, such as mobile robot navigation or surveillance, the lack of this ability harms segmentation, tracking, and object recognition tasks. The main approaches to color constancy generally target static images and aim to estimate the scene illuminant color from the images themselves. We present an iterative color constancy method with temporal filtering applied to image sequences, in which reference colors are estimated from previously corrected images. Furthermore, two strategies for sampling colors from the images are tested. The proposed method has been tested on image sequences with no relative movement between the scene and the camera, and compared with known color constancy algorithms such as gray-world, max-RGB, and gray-edge. In most cases, the iterative color constancy method achieved better results than the other approaches. PMID:26560917
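
The gray-world baseline compared above is simple enough to sketch. The version below works on a flat list of RGB tuples rather than an image array, purely for brevity; it assumes the average scene color should be achromatic and scales each channel so its mean matches the overall mean:

```python
def gray_world(pixels):
    """Gray-world color constancy: assume the scene averages to gray under a
    neutral illuminant, so scale each channel so its mean equals the
    overall mean brightness."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    gains = [gray / m if m > 0 else 1.0 for m in means]
    return [tuple(min(255.0, p[c] * gains[c]) for c in range(3)) for p in pixels]

# A scene under a reddish illuminant: the red channel is uniformly inflated.
scene = [(200, 100, 100), (100, 50, 50), (150, 75, 75)]
corrected = gray_world(scene)  # channel means are equalized
```

The iterative method in the paper instead estimates reference colors from previously corrected frames; gray-world is one of the static baselines it is compared against.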

  2. Analyzing visual enjoyment of color: using female nude digital Image as example

    NASA Astrophysics Data System (ADS)

    Chin, Sin-Ho

    2014-04-01

    This research adopts the three primary colors and their three mixed colors as the main hue variants by changing the background of a female nude digital image. Color saturation is set at 9S (high saturation) and 3S (low saturation) on the PCCS scale, and color tone is set at 3.5 (low brightness), 5.5 (medium brightness, used for the primary colors), and 7.5 (high brightness). A watercolor brush stroke is applied to two female-body digital images, one visually pleasant with an elegant posture and one unpleasant with stiff body language, to add visual intimacy. Results show that brightness is the main factor affecting visual enjoyment, followed by saturation. Explicitly, high brightness with high saturation gains the highest enjoyment rating, medium brightness (primary color) with high saturation the second, high brightness with low saturation the third, and low brightness with low saturation the least.

  3. Reconstruction of color images via Haar wavelet based on digital micromirror device

    NASA Astrophysics Data System (ADS)

    Liu, Xingjiong; He, Weiji; Gu, Guohua

    2015-10-01

    A digital micromirror device (DMD) is introduced to form the Haar wavelet basis, which is projected onto the color target image using structured illumination in red, green, and blue light. The light intensity signals reflected from the target image are received synchronously by a bucket detector with no spatial resolution, converted into voltage signals, and then transferred to a PC [1]. To achieve synchronization, several synchronization steps are added during data acquisition. During data collection, following the wavelet tree structure, the locations of significant coefficients at the finer scales are predicted by comparing the coefficients sampled at the coarsest scale with a threshold. Monochrome grayscale images are obtained under red, green, and blue structured illumination by applying the inverse Haar wavelet transform, and a color fusion algorithm then combines the three monochrome grayscale images into the final color image. The experimental demonstration device was assembled according to this imaging principle. The letter "K" and the X-Rite ColorChecker Passport were projected and reconstructed as target images, and the final reconstructed color images are of good quality. By using Haar wavelet reconstruction, this approach reduces the sampling rate considerably and provides color information without compromising the resolution of the final image.
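
The Haar transform and inverse at the heart of the reconstruction can be sketched as a separable one-level transform (pairwise averages and differences; the actual system additionally predicts significant coefficients across scales before inverting):

```python
def haar_1d(v):
    """One level of the 1-D Haar transform: pairwise averages, then differences."""
    n = len(v) // 2
    return [(v[2*i] + v[2*i+1]) / 2 for i in range(n)] + \
           [(v[2*i] - v[2*i+1]) / 2 for i in range(n)]

def ihaar_1d(v):
    """Inverse of haar_1d: rebuild each pair from its average and difference."""
    n = len(v) // 2
    out = []
    for a, d in zip(v[:n], v[n:]):
        out += [a + d, a - d]
    return out

def transpose(m):
    return [list(r) for r in zip(*m)]

def haar_2d(img):
    """Separable 2-D Haar: transform rows, then columns."""
    return transpose([haar_1d(c) for c in transpose([haar_1d(r) for r in img])])

def ihaar_2d(coef):
    """Inverse 2-D Haar: undo columns, then rows."""
    return [ihaar_1d(r) for r in transpose([ihaar_1d(c) for c in transpose(coef)])]

img = [[1.0, 2.0], [3.0, 4.0]]
rec = ihaar_2d(haar_2d(img))  # perfect reconstruction
```

One such monochrome reconstruction would be run per illumination color before the fusion step.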

  4. A New Adaptive Image Denoising Method Based on Neighboring Coefficients

    NASA Astrophysics Data System (ADS)

    Biswas, Mantosh; Om, Hari

    2016-03-01

    Many techniques have been proposed for image denoising, including NeighShrink, the improved adaptive wavelet denoising method based on neighboring coefficients (IAWDMBNC), the improved wavelet shrinkage technique for image denoising (IWST), the local adaptive Wiener filter (LAWF), wavelet packet thresholding using median and Wiener filters (WPTMWF), and the adaptive image denoising method based on thresholding (AIDMT). These techniques are based on a local statistical description of the neighboring coefficients in a window. However, they do not always give good image quality, since their thresholds cannot modify and remove many small wavelet coefficients simultaneously. In this paper, a new image denoising method is proposed that shrinks the noisy coefficients using an adaptive threshold. Our method overcomes these drawbacks and performs better than the NeighShrink, IAWDMBNC, IWST, LAWF, WPTMWF, and AIDMT denoising methods.
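
Neighboring-coefficient shrinkage of the NeighShrink family can be sketched as follows: each detail coefficient is scaled by max(0, 1 − λ²/S²), where S² is the energy of the surrounding window (window size and λ below are illustrative, not values from any of the cited papers):

```python
def neigh_shrink(coeffs, lam, win=3):
    """NeighShrink-style rule: scale each coefficient by
    max(0, 1 - lam^2 / S^2), where S^2 sums the squared coefficients in the
    win x win neighborhood around it."""
    h, w = len(coeffs), len(coeffs[0])
    r = win // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            s2 = 0.0
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    y, x = i + di, j + dj
                    if 0 <= y < h and 0 <= x < w:
                        s2 += coeffs[y][x] ** 2
            factor = max(0.0, 1.0 - lam * lam / s2) if s2 > 0 else 0.0
            out[i][j] = coeffs[i][j] * factor
    return out

# One strong (signal) coefficient survives shrinkage; isolated noise-free
# regions collapse to zero.
shrunk = neigh_shrink([[10.0, 0, 0], [0, 0, 0], [0, 0, 0]], lam=5.0)
```

The drawback discussed above is visible here: the single scalar λ decides the fate of every small coefficient at once.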

  5. Development of an image capturing system for the reproduction of high-fidelity color

    NASA Astrophysics Data System (ADS)

    Ejaz, Tahseen; Shoichi, Yokoi; Horiuchi, Tomohiro; Yokota, Tetsuya; Takaya, Masanori; Ohashi, Gosuke; Shimodaira, Yoshifumi

    2004-12-01

    An image capturing system for the reproduction of high-fidelity color was developed, and a set of three optical filters was designed for this purpose. A simulation was performed on the SOCS database, which contains spectral reflectance data of various objects over the 400-700 nm wavelength range, in order to calculate the CIELAB color difference ΔEab. The average color difference was found to be 1.049. The camera was mounted with the filters and color photographs of all 24 color patches of the Macbeth chart were taken. The measured tristimulus values of the patches were compared with those of the digital images captured by the camera; the average ΔEab was found to be 5.916.
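
The ΔEab score used to evaluate the filters can be computed from tristimulus values as below (CIE76 Euclidean form; the D65 white point is my assumption, since the abstract does not state the illuminant):

```python
def xyz_to_lab(X, Y, Z, white=(95.047, 100.0, 108.883)):
    """CIE 1976 L*a*b* from XYZ tristimulus values (D65 white assumed)."""
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = (f(v / w) for v, w in zip((X, Y, Z), white))
    return (116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz))

def delta_e_ab(lab1, lab2):
    """CIE76 color difference: Euclidean distance in L*a*b* space."""
    return sum((a - b) ** 2 for a, b in zip(lab1, lab2)) ** 0.5

white_lab = xyz_to_lab(95.047, 100.0, 108.883)  # the white point maps to ~(100, 0, 0)
```

Averaging delta_e_ab over the 24 Macbeth patches gives exactly the kind of figure of merit (1.049 simulated, 5.916 measured) reported above.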

  6. Development of an image capturing system for the reproduction of high-fidelity color

    NASA Astrophysics Data System (ADS)

    Ejaz, Tahseen; Shoichi, Yokoi; Horiuchi, Tomohiro; Yokota, Tetsuya; Takaya, Masanori; Ohashi, Gosuke; Shimodaira, Yoshifumi

    2005-01-01

    An image capturing system for the reproduction of high-fidelity color was developed, and a set of three optical filters was designed for this purpose. A simulation was performed on the SOCS database, which contains spectral reflectance data of various objects over the 400-700 nm wavelength range, in order to calculate the CIELAB color difference ΔEab. The average color difference was found to be 1.049. The camera was mounted with the filters and color photographs of all 24 color patches of the Macbeth chart were taken. The measured tristimulus values of the patches were compared with those of the digital images captured by the camera; the average ΔEab was found to be 5.916.

  7. Segmentation and classification of burn images by color and texture information.

    PubMed

    Acha, Begoña; Serrano, Carmen; Acha, José I; Roa, Laura M

    2005-01-01

    In this paper, a burn color image segmentation and classification system is proposed. The aim of the system is to separate burn wounds from healthy skin, and to distinguish among the different types of burns (burn depths). Digital color photographs are used as inputs to the system. The system is based on color and texture information, since these are the characteristics observed by physicians in order to form a diagnosis. A perceptually uniform color space (L*u*v*) was used, since Euclidean distances calculated in this space correspond to perceptual color differences. After the burn is segmented, a set of color and texture features is calculated that serves as the input to a Fuzzy-ARTMAP neural network. The neural network classifies burns into three types of burn depths: superficial dermal, deep dermal, and full thickness. Clinical effectiveness of the method was demonstrated on 62 clinical burn wound images, yielding an average classification success rate of 82%.
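
The L*u*v* space the authors segment in can be reached from XYZ as below; Euclidean distances between the resulting triples approximate perceived color differences, which is exactly the property the segmentation relies on (D65 white point assumed, as the paper does not state the illuminant here):

```python
def xyz_to_luv(X, Y, Z, white=(95.047, 100.0, 108.883)):
    """CIE 1976 L*u*v* from XYZ tristimulus values (D65 white assumed)."""
    def uv_prime(x, y, z):
        d = x + 15 * y + 3 * z
        return (4 * x / d, 9 * y / d) if d > 0 else (0.0, 0.0)
    up, vp = uv_prime(X, Y, Z)
    upn, vpn = uv_prime(*white)
    yr = Y / white[1]
    L = 116 * yr ** (1 / 3) - 16 if yr > (6 / 29) ** 3 else (29 / 3) ** 3 * yr
    return (L, 13 * L * (up - upn), 13 * L * (vp - vpn))

white_luv = xyz_to_luv(95.047, 100.0, 108.883)  # white maps to ~(100, 0, 0)
```

Pixels can then be clustered by plain Euclidean distance in (L*, u*, v*) before the texture features are extracted.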

  8. Segmentation and classification of burn images by color and texture information.

    PubMed

    Acha, Begoña; Serrano, Carmen; Acha, José I; Roa, Laura M

    2005-01-01

    In this paper, a burn color image segmentation and classification system is proposed. The aim of the system is to separate burn wounds from healthy skin, and to distinguish among the different types of burns (burn depths). Digital color photographs are used as inputs to the system. The system is based on color and texture information, since these are the characteristics observed by physicians in order to form a diagnosis. A perceptually uniform color space (L*u*v*) was used, since Euclidean distances calculated in this space correspond to perceptual color differences. After the burn is segmented, a set of color and texture features is calculated that serves as the input to a Fuzzy-ARTMAP neural network. The neural network classifies burns into three types of burn depths: superficial dermal, deep dermal, and full thickness. Clinical effectiveness of the method was demonstrated on 62 clinical burn wound images, yielding an average classification success rate of 82%. PMID:16229658

  9. Color image authentication scheme via multispectral photon-counting double random phase encoding

    NASA Astrophysics Data System (ADS)

    Moon, Inkyu

    2015-05-01

    In this paper, we present an overview of a color image authentication scheme based on multispectral photon-counting imaging (MPCI) and double random phase encoding (DRPE). The MPCI makes the image sparsely distributed, and DRPE turns the image into stationary white noise, both of which make intruder attacks difficult. In this method, the original RGB image is down-sampled into a Bayer image and then encrypted with DRPE. The encrypted image is photon-counted and transmitted over an internet channel. For image authentication, the decrypted Bayer image is interpolated into an RGB image with a demosaicing algorithm. Experimental results show that the decrypted image cannot be visually recognized under low light levels but can be verified with a nonlinear correlation algorithm.
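
The first step, down-sampling RGB to a Bayer image, can be sketched as follows (an RGGB layout is assumed here; the paper does not specify which pattern it uses):

```python
def to_bayer(rgb):
    """Down-sample an RGB image (rows of (r, g, b) tuples) to a single-channel
    Bayer mosaic with an RGGB layout: one color sample kept per pixel."""
    h, w = len(rgb), len(rgb[0])
    bayer = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if i % 2 == 0 and j % 2 == 0:
                bayer[i][j] = rgb[i][j][0]   # red sites
            elif i % 2 == 1 and j % 2 == 1:
                bayer[i][j] = rgb[i][j][2]   # blue sites
            else:
                bayer[i][j] = rgb[i][j][1]   # green sites
    return bayer

mosaic = to_bayer([[(10, 20, 30)] * 2 for _ in range(2)])
```

DRPE encryption and photon counting then operate on this single-channel mosaic, and a demosaicing step restores the three channels after decryption.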

  10. Color images of Kansas subsurface geology from well logs

    USGS Publications Warehouse

    Collins, D.R.; Doveton, J.H.

    1986-01-01

    Modern wireline log combinations give highly diagnostic information that goes beyond the basic shale content, pore volume, and fluid saturation of older logs. Pattern recognition of geology from logs is made conventionally through either the examination of log overlays or log crossplots. Both methods can be combined through the use of color as a medium of information by setting the three color primaries of blue, green, and red light as axes of a three-dimensional color space. Multiple log readings of zones are rendered as composite color mixtures which, when plotted sequentially with depth, show lithological successions in a striking manner. The method is extremely simple to program and display on a color monitor. Illustrative examples are described from the Kansas subsurface. © 1986.
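
The composite-color idea reduces to normalizing three log readings onto the three primaries. A minimal sketch, in which the choice of logs and their display ranges are illustrative rather than taken from the paper:

```python
def logs_to_rgb(readings, ranges):
    """Render one depth zone's three log readings as a composite color: each
    reading is clamped and normalized within its expected range, then mapped
    to one axis (R, G, B) of color space."""
    def scale(v, lo, hi):
        v = min(max(v, lo), hi)                  # clamp to the display range
        return round(255 * (v - lo) / (hi - lo))
    return tuple(scale(v, lo, hi) for v, (lo, hi) in zip(readings, ranges))

# e.g. gamma ray, neutron porosity, deep resistivity (hypothetical ranges)
color = logs_to_rgb((150.0, 0.0, 1.0), [(0, 150), (0, 0.3), (0, 1)])
```

Plotting one such color per depth sample produces the lithological-succession strip described above.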

  11. Optical color image encryption based on an asymmetric cryptosystem in the Fresnel domain

    NASA Astrophysics Data System (ADS)

    Chen, Wen; Chen, Xudong

    2011-08-01

    In recent years, optical color image encryption has attracted much attention in the information security field. Some approaches, such as digital holography, have been proposed to encrypt color images, but the previously proposed methods are developed based on optical symmetric cryptographic strategies. In this paper, we apply an optical asymmetric cryptosystem for the color image encryption instead of conventional symmetric cryptosystems. A phase-truncated strategy is applied in the Fresnel domain, and multiple-wavelength and indexed image methods are further employed. The security of optical asymmetric cryptosystem is also analyzed during the decryption. Numerical results are presented to demonstrate the feasibility and effectiveness of the proposed optical asymmetric cryptosystem for color image encryption.

  12. Optical color image hiding scheme based on chaotic mapping and Hartley transform

    NASA Astrophysics Data System (ADS)

    Liu, Zhengjun; Zhang, Yu; Liu, Wei; Meng, Fanyi; Wu, Qun; Liu, Shutian

    2013-08-01

    We present a color image encryption algorithm using chaotic mapping and the Hartley transform. The three components of the color image are scrambled by the Baker mapping. The coordinates composed of the scrambled monochrome components are converted from Cartesian to spherical coordinates. The azimuth-angle data are normalized and used as the key, while the radius and zenith-angle data are encoded with the help of an optical Hartley transform with a scrambled key. An electro-optical encryption structure is designed. The final encrypted image is constituted by two selected color components of the output in the real-number domain.
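
The Cartesian-to-spherical step can be sketched per pixel as below; it is only the coordinate conversion, with the scrambling and optical Hartley encoding omitted:

```python
import math

def rgb_to_spherical(r, g, b):
    """Treat a scrambled (R, G, B) triple as Cartesian coordinates and convert
    to spherical (radius, zenith, azimuth); in the scheme the azimuth data
    become key material while radius and zenith are encoded optically."""
    radius = math.sqrt(r * r + g * g + b * b)
    zenith = math.acos(b / radius) if radius > 0 else 0.0
    azimuth = math.atan2(g, r)
    return radius, zenith, azimuth

coords = rgb_to_spherical(1.0, 0.0, 0.0)  # radius 1, zenith pi/2, azimuth 0
```

Here the blue component plays the role of the polar (z) axis, an arbitrary but conventional choice.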

  13. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of which coder codes any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incurring extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
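
Each DCT source coder in the mixture transforms its block before quantization. A minimal sketch of the orthonormal 2-D DCT-II, written in the direct O(N⁴) form for clarity (real coders use fast separable versions):

```python
import math

def dct2_block(block):
    """Orthonormal 2-D DCT-II of an N x N block: out[u][v] is the amplitude
    of the (u, v) cosine basis pattern in the block."""
    N = len(block)
    def c(k):
        return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                    for x in range(N) for y in range(N))
            out[u][v] = c(u) * c(v) * s
    return out

spec = dct2_block([[4.0, 4.0], [4.0, 4.0]])  # flat block: energy packs into DC
```

The threshold-driven criterion then compares the distortion of coding a region at each block size and keeps the cheapest acceptable coder.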

  14. Use of ultrasound, color Doppler imaging and radiography to monitor periapical healing after endodontic surgery.

    PubMed

    Tikku, Aseem P; Kumar, Sunil; Loomba, Kapil; Chandra, Anil; Verma, Promila; Aggarwal, Renu

    2010-09-01

    This study evaluated the effectiveness of ultrasound, color Doppler imaging and conventional radiography in monitoring the post-surgical healing of periapical lesions of endodontic origin. Fifteen patients who underwent periapical surgery for endodontic pathology were randomly selected. In all patients, periapical lesions were evaluated preoperatively using ultrasound, color Doppler imaging and conventional radiography, to analyze characteristics such as size, shape and dimensions. On radiographic evaluation, dimensions were measured in the superoinferior and mesiodistal direction using image-analysis software. Ultrasound evaluation was used to measure the changes in shape and dimensions on the anteroposterior, superoinferior, and mesiodistal planes. Color Doppler imaging was used to detect the blood-flow velocity. Postoperative healing was monitored in all patients at 1 week and 6 months by using ultrasound and color Doppler imaging, together with conventional radiography. The findings were then analyzed to evaluate the effectiveness of the 3 imaging techniques. At 6 months, ultrasound and color Doppler imaging were significantly better than conventional radiography in detecting changes in the healing of hard tissue at the surgical site (P < 0.004). This study demonstrates that ultrasound and color Doppler imaging have the potential to supplement conventional radiography in monitoring the post-surgical healing of periapical lesions of endodontic origin.

  15. True color blood flow imaging using a high-speed laser photography system

    NASA Astrophysics Data System (ADS)

    Liu, Chien-Sheng; Lin, Cheng-Hsien; Sun, Yung-Nien; Ho, Chung-Liang; Hsu, Chung-Chi

    2012-10-01

    Physiological changes in the retinal vasculature are commonly indicative of such disorders as diabetic retinopathy, glaucoma, and age-related macular degeneration. Thus, various methods have been developed for noninvasive clinical evaluation of ocular hemodynamics. However, to the best of our knowledge, current ophthalmic instruments do not provide a true color blood flow imaging capability. Accordingly, we propose a new method for the true color imaging of blood flow using a high-speed pulsed laser photography system. In the proposed approach, monochromatic images of the blood flow are acquired using a system of three cameras and three color lasers (red, green, and blue). A high-quality true color image of the blood flow is obtained by assembling the monochromatic images by means of image realignment and color calibration processes. The effectiveness of the proposed approach is demonstrated by imaging the flow of mouse blood within a microfluidic channel device. The experimental results confirm the proposed system provides a high-quality true color blood flow imaging capability, and therefore has potential for noninvasive clinical evaluation of ocular hemodynamics.

  16. Rapid production of structural color images with optical data storage capabilities

    NASA Astrophysics Data System (ADS)

    Rezaei, Mohamad; Jiang, Hao; Qarehbaghi, Reza; Naghshineh, Mohammad; Kaminska, Bozena

    2015-03-01

    In this paper, we present novel methods to produce a structural color image for any given color picture using a pixelated generic stamp called a nanosubstrate. The nanosubstrate is composed of prefabricated arrays of red, green, and blue subpixels. Each subpixel contains nano-gratings and/or sub-wavelength structures that produce structural colors through light diffraction. Micro-patterning techniques were implemented to produce the color images from the nanosubstrate by selective activation of subpixels. The nano-grating structures can be nanohole arrays, which after replication are converted to nanopillar arrays, or vice versa. It has been demonstrated that visible and invisible data can be easily stored using these fabrication methods and that the information can be easily read. The techniques can therefore be employed to produce personalized and customized color images for applications in optical document security and publicity, complemented by combined optical data storage capabilities.

  17. Tone reproduction for high-dynamic range imaging based on adaptive filtering

    NASA Astrophysics Data System (ADS)

    Ha, Changwoo; Lee, Joohyun; Jeong, Jechang

    2014-03-01

    A tone reproduction algorithm with enhanced contrast of high-dynamic range images on conventional low-dynamic range display devices is presented. The proposed algorithm consists mainly of block-based parameter estimation, a characteristic-based luminance adjustment, and an adaptive Gaussian filter using minimum description length. Instead of relying only on the reduction of the dynamic range, a characteristic-based luminance adjustment process modifies the luminance values. The Gaussian-filtered luminance value is obtained from appropriate value of variance, and the contrast is then enhanced through the use of a relation between the adjusted luminance and Gaussian-filtered luminance values. In the final tone-reproduction process, the proposed algorithm combines color and luminance components in order to preserve the color consistency. The experimental results demonstrate that the proposed algorithm achieves a good subjective quality while enhancing the contrast of the image details.
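
A minimal base/detail sketch of this family of operators: blur the luminance, compress the blurred base, and reapply the detail ratio. The fixed Gaussian σ and square-root compression below are simplifications of the paper's MDL-selected variance and characteristic-based adjustment:

```python
import math

def gaussian_kernel(sigma):
    r = int(3 * sigma)
    k = [math.exp(-x * x / (2 * sigma * sigma)) for x in range(-r, r + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur(img, sigma):
    """Separable Gaussian filtering with edge clamping (rows, then columns)."""
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    def conv_rows(m):
        out = []
        for row in m:
            n = len(row)
            out.append([sum(k[t + r] * row[min(max(j + t, 0), n - 1)]
                            for t in range(-r, r + 1)) for j in range(n)])
        return out
    cols = [list(c) for c in zip(*conv_rows(img))]
    return [list(c) for c in zip(*conv_rows(cols))]

def tone_map(lum, compression=0.5, sigma=2.0):
    """Base/detail tone reproduction: compress the Gaussian-filtered base
    layer, then reapply the detail ratio so local contrast survives the
    dynamic-range reduction."""
    base = blur(lum, sigma)
    return [[(b ** compression) * (l / b if b > 0 else 1.0)
             for l, b in zip(lr, br)] for lr, br in zip(lum, base)]

mapped = tone_map([[4.0] * 8 for _ in range(8)])  # uniform 4.0 compresses to ~2.0
```

The color components would then be recombined with the remapped luminance, as in the final step described above.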

  18. [Pseudo-color filter in two-dimensional imaging in dentistry].

    PubMed

    Kats, L; Vered, M

    2014-10-01

    Most digital systems currently used in two-dimensional imaging in dentistry provide a range of image-processing filters. One possible means of enhancing a digital radiographic image is pseudocoloring, i.e., color conversion of gray-scale images. Recently, this method has become widely used in digital radiology. The human eye is more sensitive to differences in color than to differences in shades of gray, so converting the gray-scale intensity levels of a digital image into colors could, in theory, enhance the radiographic information. Some studies have applied pseudocoloring of digital radiographic images to the detection of caries and periodontal defects; thus far, however, the method has failed to show a significantly improved ability to detect these lesions. Further investigations are necessary in order to develop specific algorithms that will increase the validity of pseudocoloring in two-dimensional imaging in dentistry.
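
A minimal gray-to-pseudocolor mapping of the kind such filters apply (a jet-like blue-to-red ramp; actual dental software uses its own lookup tables):

```python
def pseudocolor(gray):
    """Map an 8-bit gray level to RGB along a blue -> cyan -> green ->
    yellow -> red ramp, so small intensity differences become hue
    differences the eye separates more easily."""
    def clamp(x):
        return min(max(x, 0.0), 1.0)
    g = gray / 255.0
    r = clamp(1.5 - abs(4 * g - 3))
    gr = clamp(1.5 - abs(4 * g - 2))
    b = clamp(1.5 - abs(4 * g - 1))
    return tuple(round(255 * c) for c in (r, gr, b))

low, high = pseudocolor(0), pseudocolor(255)  # dark -> blue, bright -> red
```

Whether a lesion becomes easier to detect with such a ramp is exactly the open question the abstract raises.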

  19. Numerical Demultiplexing of Color Image Sensor Measurements via Non-linear Random Forest Modeling

    NASA Astrophysics Data System (ADS)

    Deglint, Jason; Kazemzadeh, Farnoud; Cho, Daniel; Clausi, David A.; Wong, Alexander

    2016-06-01

    The simultaneous capture of imaging data at multiple wavelengths across the electromagnetic spectrum is highly challenging, requiring complex and costly multispectral image devices. In this study, we investigate the feasibility of simultaneous multispectral imaging using conventional image sensors with color filter arrays via a novel comprehensive framework for numerical demultiplexing of the color image sensor measurements. A numerical forward model characterizing the formation of sensor measurements from light spectra hitting the sensor is constructed based on a comprehensive spectral characterization of the sensor. A numerical demultiplexer is then learned via non-linear random forest modeling based on the forward model. Given the learned numerical demultiplexer, one can then demultiplex simultaneously-acquired measurements made by the color image sensor into reflectance intensities at discrete selectable wavelengths, resulting in a higher resolution reflectance spectrum. Experimental results demonstrate the feasibility of such a method for the purpose of simultaneous multispectral imaging.
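
The paper learns a non-linear random-forest demultiplexer; as a minimal linear stand-in with the same forward-model-then-invert structure, the sketch below uses a 3-band "spectrum", a 3x3 sensitivity matrix (values made up for illustration), and Cramer's rule for the inverse:

```python
def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def demultiplex(rgb, S):
    """Invert the linear forward model rgb = S @ spectrum by Cramer's rule.
    The actual demultiplexer in the paper is a learned non-linear random
    forest; this linear stand-in only illustrates the structure."""
    d = det3(S)
    spectrum = []
    for i in range(3):
        Si = [row[:] for row in S]
        for r in range(3):
            Si[r][i] = rgb[r]
        spectrum.append(det3(Si) / d)
    return spectrum

# Hypothetical per-channel spectral sensitivities (rows: R, G, B channels).
S = [[0.8, 0.1, 0.0], [0.1, 0.8, 0.1], [0.0, 0.1, 0.8]]
measured = [sum(s * t for s, t in zip(row, [1.0, 2.0, 3.0])) for row in S]
recovered = demultiplex(measured, S)  # close to the true [1.0, 2.0, 3.0]
```

The appeal of the random-forest version is that it also captures the sensor's non-linearities, which this linear inverse cannot.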

  20. Numerical Demultiplexing of Color Image Sensor Measurements via Non-linear Random Forest Modeling.

    PubMed

    Deglint, Jason; Kazemzadeh, Farnoud; Cho, Daniel; Clausi, David A; Wong, Alexander

    2016-01-01

    The simultaneous capture of imaging data at multiple wavelengths across the electromagnetic spectrum is highly challenging, requiring complex and costly multispectral image devices. In this study, we investigate the feasibility of simultaneous multispectral imaging using conventional image sensors with color filter arrays via a novel comprehensive framework for numerical demultiplexing of the color image sensor measurements. A numerical forward model characterizing the formation of sensor measurements from light spectra hitting the sensor is constructed based on a comprehensive spectral characterization of the sensor. A numerical demultiplexer is then learned via non-linear random forest modeling based on the forward model. Given the learned numerical demultiplexer, one can then demultiplex simultaneously-acquired measurements made by the color image sensor into reflectance intensities at discrete selectable wavelengths, resulting in a higher resolution reflectance spectrum. Experimental results demonstrate the feasibility of such a method for the purpose of simultaneous multispectral imaging. PMID:27346434

  1. Numerical Demultiplexing of Color Image Sensor Measurements via Non-linear Random Forest Modeling

    PubMed Central

    Deglint, Jason; Kazemzadeh, Farnoud; Cho, Daniel; Clausi, David A.; Wong, Alexander

    2016-01-01

    The simultaneous capture of imaging data at multiple wavelengths across the electromagnetic spectrum is highly challenging, requiring complex and costly multispectral image devices. In this study, we investigate the feasibility of simultaneous multispectral imaging using conventional image sensors with color filter arrays via a novel comprehensive framework for numerical demultiplexing of the color image sensor measurements. A numerical forward model characterizing the formation of sensor measurements from light spectra hitting the sensor is constructed based on a comprehensive spectral characterization of the sensor. A numerical demultiplexer is then learned via non-linear random forest modeling based on the forward model. Given the learned numerical demultiplexer, one can then demultiplex simultaneously-acquired measurements made by the color image sensor into reflectance intensities at discrete selectable wavelengths, resulting in a higher resolution reflectance spectrum. Experimental results demonstrate the feasibility of such a method for the purpose of simultaneous multispectral imaging. PMID:27346434

  2. Numerical Demultiplexing of Color Image Sensor Measurements via Non-linear Random Forest Modeling.

    PubMed

    Deglint, Jason; Kazemzadeh, Farnoud; Cho, Daniel; Clausi, David A; Wong, Alexander

    2016-06-27

    The simultaneous capture of imaging data at multiple wavelengths across the electromagnetic spectrum is highly challenging, requiring complex and costly multispectral image devices. In this study, we investigate the feasibility of simultaneous multispectral imaging using conventional image sensors with color filter arrays via a novel comprehensive framework for numerical demultiplexing of the color image sensor measurements. A numerical forward model characterizing the formation of sensor measurements from light spectra hitting the sensor is constructed based on a comprehensive spectral characterization of the sensor. A numerical demultiplexer is then learned via non-linear random forest modeling based on the forward model. Given the learned numerical demultiplexer, one can then demultiplex simultaneously-acquired measurements made by the color image sensor into reflectance intensities at discrete selectable wavelengths, resulting in a higher resolution reflectance spectrum. Experimental results demonstrate the feasibility of such a method for the purpose of simultaneous multispectral imaging.

  3. Note: In vivo pH imaging system using luminescent indicator and color camera

    NASA Astrophysics Data System (ADS)

    Sakaue, Hirotaka; Dan, Risako; Shimizu, Megumi; Kazama, Haruko

    2012-07-01

    A microscopic in vivo pH imaging system is developed that can capture both luminescent and color images. The former gives a quantitative measurement of the pH distribution in vivo; the latter captures structural information that can be overlaid on the pH distribution to correlate the structure of a specimen with its pH distribution. By using a digital color camera, a luminescent image as well as a color image is obtained. The system uses HPTS (8-hydroxypyrene-1,3,6-trisulfonate) as a luminescent pH indicator for the luminescent imaging. Filter units mounted in the microscope extract two luminescent images for the excitation-ratio method. The ratio of the two images is converted to a pH distribution through an a priori pH calibration. An application of the system to epidermal cells of Lactuca sativa L. is shown.
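
The final conversion step, ratio to pH via the a priori calibration, can be sketched as interpolation in a calibration table (the table values below are illustrative, not the paper's HPTS calibration):

```python
def ratio_to_ph(ratio, calib):
    """Convert an excitation-ratio value to pH by linear interpolation in an
    a-priori calibration table of (ratio, pH) pairs."""
    calib = sorted(calib)
    for (r0, p0), (r1, p1) in zip(calib, calib[1:]):
        if r0 <= ratio <= r1:
            return p0 + (p1 - p0) * (ratio - r0) / (r1 - r0)
    raise ValueError("ratio outside calibration range")

ph = ratio_to_ph(0.4, [(0.2, 5.0), (0.6, 7.0), (1.0, 9.0)])  # ~6.0
```

Applying this per pixel to the ratio of the two filtered luminescent images yields the pH map that is overlaid on the color image.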

  4. Principles of image processing in machine vision systems for the color analysis of minerals

    NASA Astrophysics Data System (ADS)

    Petukhova, Daria B.; Gorbunova, Elena V.; Chertov, Aleksandr N.; Korotaev, Valery V.

    2014-09-01

    At the moment, color sorting is one of the promising methods for mineral raw material enrichment. It is based on registering color differences between images of the analyzed objects. As is generally known, difficulty in delimiting close color tints when sorting low-contrast minerals is one of the main disadvantages of the color sorting method. This can stem from a wrong choice of color model and incomplete image processing in the machine vision system that implements the color sorting algorithm. Another problem is the need to reconfigure the image-processing parameters when the type of analyzed mineral changes, because the optical properties of mineral samples vary from one deposit to another. Searching for suitable image-processing parameter values is therefore a non-trivial task, and it does not always have an acceptable solution. In addition, there are no uniform guidelines for determining the criteria of mineral sample separation. Ideally, reconfiguration of the image-processing parameters would be done by machine learning, but in practice it is carried out by adjusting operating parameters until they are satisfactory for one specific enrichment task. This approach usually means that the machine vision system is unable to rapidly estimate the concentration of the analyzed mineral ore by color sorting. This paper presents the results of research aimed at addressing these shortcomings in the organization of image processing for machine vision systems used for color sorting of mineral samples. The principles of color analysis of low-contrast minerals using machine vision systems are also studied. In addition, a special processing algorithm for color images of mineral samples is developed, which automatically determines the criteria of mineral sample separation based on an analysis of representative mineral samples.
Experimental studies of the proposed algorithm

  5. Visual Fatigue in Congenital Nystagmus Caused by Viewing Images of Color Sequential Projectors

    NASA Astrophysics Data System (ADS)

    Ogata, Masaki; Ukai, Kazuhiko; Kawai, Takashi

    2005-12-01

    Color breakup is the perceived splitting of the white portions of an image into its red, green, and blue components when the image is projected with the color sequential method and the viewer is moving his or her eyes. This study aims to evaluate how color breakup affects symptoms of visual fatigue in people with congenital nystagmus, whose eyes oscillate continuously and who therefore experience color breakup without pause. About one in every 1,500 persons is afflicted with congenital nystagmus; many sufferers have almost no symptoms in daily life apart from a mild deterioration of visual acuity. Five subjects with congenital nystagmus were shown a 15-min portion of a movie projected with three video projectors (one liquid crystal display (LCD) projector and two single-chip digital light processing (DLP) projectors). They completed a questionnaire listing visual fatigue symptoms both pre- and post-viewing. One subject was tested in an additional experiment using six more projectors. Results indicated that subjects with congenital nystagmus felt severe visual fatigue after viewing images produced by color sequential projectors. Although the mechanism by which color breakup causes visual fatigue is not clear, either in general or in congenital nystagmus specifically, it was clear that people with nystagmus perceived the continuing color breakup as a flickering image, and flickering light is a major cause of visual fatigue. Color sequential projectors are therefore best avoided in public settings such as classrooms, lecture theaters, and conference sites.

  6. Empirical comparison of color normalization methods for epithelial-stromal classification in H and E images

    PubMed Central

    Sethi, Amit; Sha, Lingdao; Vahadane, Abhishek Ramnath; Deaton, Ryan J.; Kumar, Neeraj; Macias, Virgilia; Gann, Peter H.

    2016-01-01

    Context: Color normalization techniques for histology have not been empirically tested for their utility in computational pathology pipelines. Aims: We compared two contemporary techniques for achieving a common intermediate goal – epithelial-stromal classification. Settings and Design: Expert-annotated regions of epithelium and stroma were treated as ground truth for comparing classifiers on original and color-normalized images. Materials and Methods: Epithelial and stromal regions were annotated on thirty diverse-appearing H and E stained prostate cancer tissue microarray cores. Corresponding sets of thirty images each were generated using the two color normalization techniques. Color metrics were compared for original and color-normalized images. Separate epithelial-stromal classifiers were trained and compared on test images. Main analyses were conducted using a multiresolution segmentation (MRS) approach; comparative analyses using two other classification approaches (convolutional neural network [CNN], Wndchrm) were also performed. Statistical Analysis: For the main MRS method, which relied on classification of super-pixels, the number of variables used was reduced by backward elimination without compromising accuracy, and test area-under-the-curve (AUC) values were compared for original and normalized images. For CNN and Wndchrm, pixel classification test-AUCs were compared. Results: The Khan method reduced color saturation, while the Vahadane method reduced hue variance. Super-pixel-level test-AUC for MRS was 0.010–0.025 (95% confidence interval limits ± 0.004) higher for the two normalized image sets than for the original in the 10–80 variable range. Improvement in pixel classification accuracy was also observed for CNN and Wndchrm on color-normalized images. Conclusions: Color normalization can give a small incremental benefit when a super-pixel-based classification method is used with features that perform implicit color normalization while the gain is

  7. Adaptive SVD-Based Digital Image Watermarking

    NASA Astrophysics Data System (ADS)

    Shirvanian, Maliheh; Torkamani Azar, Farah

    Digital data utilization, along with the increased popularity of the Internet, has facilitated information sharing and distribution. However, such applications have also raised concerns about copyright and about unauthorized modification and distribution of digital data. Digital watermarking techniques, which are proposed to solve these problems, hide some information in digital media and extract it whenever needed to indicate the data owner. In this paper a new method of image watermarking based on singular value decomposition (SVD) of images is proposed, which takes the human visual system into account prior to embedding the watermark by segmenting the original image into several blocks of different sizes, with higher density at the edges of the image. In this way the quality of the original image is preserved in the watermarked image. Additional advantages of the proposed technique are a large watermark-embedding capacity and robustness against different types of image manipulation.
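
    The core SVD embedding step can be sketched for a single block as follows. This is a generic illustration, not the paper's method: the adaptive block-size segmentation is omitted, and extraction here is non-blind (it assumes access to the original block), which is one common design choice for SVD watermarking.

    ```python
    # Sketch of SVD watermark embedding in one image block: the watermark
    # is added, scaled by alpha, to the block's singular values.
    import numpy as np

    def embed_watermark(block, watermark, alpha=0.01):
        """Return the block rebuilt with perturbed singular values."""
        u, s, vt = np.linalg.svd(block, full_matrices=False)
        return u @ np.diag(s + alpha * watermark) @ vt

    def extract_watermark(marked_block, original_block, alpha=0.01):
        """Non-blind extraction: difference of singular values."""
        s_marked = np.linalg.svd(marked_block, compute_uv=False)
        s_orig = np.linalg.svd(original_block, compute_uv=False)
        return (s_marked - s_orig) / alpha
    ```

    The robustness of such schemes comes from the stability of singular values under common image manipulations; alpha trades imperceptibility against extraction reliability.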

  8. False-Color-Image Map of Quadrangle 3166, Jaldak (701) and Maruf-Nawa (702) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
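
    The band combination and stretch described above can be sketched in a few lines: each band is contrast-stretched independently, then band 7 is displayed as red, band 4 as green, and band 2 as blue. Plain global histogram equalization stands in here for the adaptive variant actually used for the maps.

    ```python
    # Sketch: per-band histogram equalization, then a 7-4-2 RGB composite.
    import numpy as np

    def hist_equalize(band, levels=256):
        """Global histogram equalization of a 2-D uint8 band."""
        hist = np.bincount(band.ravel(), minlength=levels)
        cdf = hist.cumsum()
        cdf = (cdf - cdf.min()) * (levels - 1) / (cdf.max() - cdf.min())
        return cdf.astype(np.uint8)[band]   # map each pixel through the CDF

    def false_color(band7, band4, band2):
        """Stack stretched bands into an RGB array of shape (..., 3)."""
        return np.dstack([hist_equalize(b) for b in (band7, band4, band2)])
    ```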

  9. False-Color-Image Map of Quadrangle 3462, Herat (409) and Chesht-Sharif (410) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  10. False-Color-Image Map of Quadrangle 3364, Pasa-Band (417) and Kejran (418) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  11. False-Color-Image Map of Quadrangle 3362, Shin-Dand (415) and Tulak (416) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  12. False-Color-Image Map of Quadrangle 3466, Lal-Sarjangal (507) and Bamyan (508) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  13. False-Color-Image Map of Quadrangle 3670, Jarm-Keshem (223) and Zebak (224) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  14. False-Color-Image Map of Quadrangle 3570, Tagab-E-Munjan (505) and Asmar-Kamdesh (506) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  15. False-Color-Image Map of Quadrangle 3262, Farah (421) and Hokumat-E-Pur-Chaman (422) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  16. False-Color-Image Map of Quadrangle 3566, Sang-Charak (501) and Sayghan-O-Kamard (502) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  17. False-Color-Image Map of Quadrangle 3264, Nawzad-Musa-Qala (423) and Dehrawat (424) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  18. False-Color-Image Map of Quadrangle 3468, Chak Wardak-Syahgerd (509) and Kabul (510) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  19. Accurate color synthesis of three-dimensional objects in an image

    NASA Astrophysics Data System (ADS)

    Xin, John H.; Shen, Hui-Liang

    2004-05-01

    Our study deals with color synthesis of a three-dimensional object in an image; i.e., given a single image, a target color can be accurately mapped onto the object such that the color appearance of the synthesized object closely resembles that of the actual one. As it is almost impossible to acquire the complete geometric description of the surfaces of an object in an image, this study attempted to recover the implicit description of geometry for the color synthesis. The description was obtained from either a series of spectral reflectances or the RGB signals at different surface positions on the basis of the dichromatic reflection model. The experimental results showed that this implicit image-based representation is related to the object geometry and is sufficient for accurate color synthesis of three-dimensional objects in an image. The method established is applicable to the color synthesis of both rigid and deformable objects and should contribute to color fidelity in virtual design, manufacturing, and retailing.
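
    The dichromatic reflection model the method builds on treats each pixel's RGB as a mixture of a body (diffuse) color and a surface (specular) color with geometry-dependent weights; recovering the weights is a small least-squares problem. The function names and solver below are assumptions for illustration, not the paper's implementation.

    ```python
    # Sketch of the dichromatic model: pixel = m_b*body + m_s*surface.
    import numpy as np

    def dichromatic_weights(pixel_rgb, body_rgb, surface_rgb):
        """Solve pixel = m_b*body + m_s*surface for the weights (m_b, m_s)."""
        A = np.column_stack([body_rgb, surface_rgb])   # 3x2 design matrix
        m, *_ = np.linalg.lstsq(A, pixel_rgb, rcond=None)
        return m

    def recolor(m, new_body_rgb, surface_rgb):
        """Synthesize the pixel with a new body color, same geometry."""
        return m[0] * np.asarray(new_body_rgb) + m[1] * np.asarray(surface_rgb)
    ```

    Because the weights encode the geometry-dependent part of the appearance, a target color substituted for the body color keeps the shading and highlights of the original object.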

  20. Accurate color synthesis of three-dimensional objects in an image.

    PubMed

    Xin, John H; Shen, Hui-Liang

    2004-05-01

    Our study deals with color synthesis of a three-dimensional object in an image; i.e., given a single image, a target color can be accurately mapped onto the object such that the color appearance of the synthesized object closely resembles that of the actual one. As it is almost impossible to acquire the complete geometric description of the surfaces of an object in an image, this study attempted to recover the implicit description of geometry for the color synthesis. The description was obtained from either a series of spectral reflectances or the RGB signals at different surface positions on the basis of the dichromatic reflection model. The experimental results showed that this implicit image-based representation is related to the object geometry and is sufficient for accurate color synthesis of three-dimensional objects in an image. The method established is applicable to the color synthesis of both rigid and deformable objects and should contribute to color fidelity in virtual design, manufacturing, and retailing. PMID:15139423

  1. Gray-scale and color optical encryption based on computational ghost imaging

    NASA Astrophysics Data System (ADS)

    Tanha, Mehrdad; Kheradmand, Reza; Ahmadi-Kandjani, Sohrab

    2012-09-01

    We propose two approaches for optical encryption based on computational ghost imaging. These methods are capable of encoding ghost images reconstructed from gray-scale images and colored objects. We experimentally demonstrate both approaches under eavesdropping in two different setups, showing their robustness and simplicity for encryption compared with previous algorithms.

  2. A novel color image compression algorithm using the human visual contrast sensitivity characteristics

    NASA Astrophysics Data System (ADS)

    Yao, Juncai; Liu, Guizhong

    2016-07-01

    In order to achieve a higher image compression ratio and improve the visual perception of the decompressed image, a novel color image compression scheme based on the contrast sensitivity characteristics of the human visual system (HVS) is proposed. In the proposed scheme, the image is first converted into the YCrCb color space and divided into sub-blocks. The discrete cosine transform is then applied to each sub-block, and three quantization matrices, built by incorporating the contrast sensitivity characteristics of the HVS, are used to quantize the frequency-spectrum coefficients. The Huffman algorithm is used to encode the quantized data. The inverse process involves decompression and matching to reconstruct the decompressed color image. Simulations were carried out for two color images. The results show that, at approximately the same compression ratio, the average structural similarity index measurement (SSIM) and peak signal-to-noise ratio (PSNR) increased by 2.78% and 5.48%, respectively, compared with Joint Photographic Experts Group (JPEG) compression. The results indicate that the proposed compression algorithm is feasible and effective, achieving a higher compression ratio while preserving encoding and image quality, and can fully meet the needs of storing and transmitting color images in daily life.
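
    The per-block transform-and-quantize step described above can be sketched as follows. The CSF-weighted quantization matrices of the paper are not published in this abstract, so the standard JPEG luminance table stands in for them here.

    ```python
    # Sketch: 8x8 orthonormal DCT-II, quantization by a table, and the
    # inverse. A CSF-derived table would replace Q in the proposed scheme.
    import numpy as np

    N = 8
    # Orthonormal DCT-II basis matrix (C @ C.T == I)
    C = np.array([[np.sqrt((1 if k == 0 else 2) / N) *
                   np.cos(np.pi * (2 * n + 1) * k / (2 * N))
                   for n in range(N)] for k in range(N)])

    # Standard JPEG luminance quantization table (stand-in for the
    # paper's CSF-based matrices).
    Q = np.array([
        [16, 11, 10, 16, 24, 40, 51, 61],
        [12, 12, 14, 19, 26, 58, 60, 55],
        [14, 13, 16, 24, 40, 57, 69, 56],
        [14, 17, 22, 29, 51, 87, 80, 62],
        [18, 22, 37, 56, 68, 109, 103, 77],
        [24, 35, 55, 64, 81, 104, 113, 92],
        [49, 64, 78, 87, 103, 121, 120, 101],
        [72, 92, 95, 98, 112, 100, 103, 99],
    ], dtype=float)

    def quantize_block(block):
        coeffs = C @ (block - 128.0) @ C.T   # 2-D DCT of a centered block
        return np.round(coeffs / Q).astype(int)

    def dequantize_block(q):
        return C.T @ (q * Q) @ C + 128.0     # inverse 2-D DCT
    ```

    The quantized integer coefficients are what the Huffman stage would then entropy-code.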

  3. Natural-color and color-infrared image mosaics of the Colorado River corridor in Arizona derived from the May 2009 airborne image collection

    USGS Publications Warehouse

    Davis, Philip A.

    2013-01-01

    The Grand Canyon Monitoring and Research Center (GCMRC) of the U.S. Geological Survey (USGS) periodically collects airborne image data for the Colorado River corridor within Arizona (fig. 1) to allow scientists to study the impacts of Glen Canyon Dam water release on the corridor’s natural and cultural resources. These data are collected from just above Glen Canyon Dam (in Lake Powell) down to the entrance of Lake Mead, for a total distance of 450 kilometers (km) and within a 500-meter (m) swath centered on the river’s mainstem and its seven main tributaries (fig. 1). The most recent airborne data collection in 2009 acquired image data in four wavelength bands (blue, green, red, and near infrared) at a spatial resolution of 20 centimeters (cm). The image collection used the latest model of the Leica ADS40 airborne digital sensor (the SH52), which uses a single optic for all four bands and collects and stores band radiance in 12-bits. Davis (2012) reported on the performance of the SH52 sensor and on the processing steps required to produce the nearly flawless four-band image mosaic (sectioned into map tiles) for the river corridor. The final image mosaic has a total of only 3 km of surface defects in addition to some areas of cloud shadow because of persistent inclement weather during data collection. The 2009 four-band image mosaic is perhaps the best image dataset that exists for the entire Arizona part of the Colorado River. Some analyses of these image mosaics do not require the full 12-bit dynamic range or all four bands of the calibrated image database, in which atmospheric scattering (or haze) had not been removed from the four bands. To provide scientists and the general public with image products that are more useful for visual interpretation, the 12-bit image data were converted to 8-bit natural-color and color-infrared images, which also removed atmospheric scattering within each wavelength-band image. The conversion required an evaluation of the

  4. Coherent Image Layout using an Adaptive Visual Vocabulary

    SciTech Connect

    Dillard, Scott E.; Henry, Michael J.; Bohn, Shawn J.; Gosink, Luke J.

    2013-03-06

    When querying a huge image database containing millions of images, the result of the query may still contain many thousands of images that need to be presented to the user. We consider the problem of arranging such a large set of images into a visually coherent layout, one that places similar images next to each other. Image similarity is determined using a bag-of-features model, and the layout is constructed from a hierarchical clustering of the image set by mapping an in-order traversal of the hierarchy tree into a space-filling curve. This layout method provides strong locality guarantees so we are able to quantitatively evaluate performance using standard image retrieval benchmarks. Performance of the bag-of-features method is best when the vocabulary is learned on the image set being clustered. Because learning a large, discriminative vocabulary is a computationally demanding task, we present a novel method for efficiently adapting a generic visual vocabulary to a particular dataset. We evaluate our clustering and vocabulary adaptation methods on a variety of image datasets and show that adapting a generic vocabulary to a particular set of images improves performance on both hierarchical clustering and image retrieval tasks.
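The layout step above can be illustrated with a minimal sketch: given a linear ordering of images (such as the in-order traversal of the hierarchy's leaves), place it along a serpentine grid path. A serpentine path is a simple stand-in for the space-filling curve in the abstract; it likewise keeps neighbours in the ordering adjacent on the grid:

```python
def serpentine_layout(order, grid_w):
    """Map a linear image ordering onto a serpentine grid path so
    that consecutive images land in adjacent cells.  `order` is a
    sequence of image indices; returns {image_index: (row, col)}."""
    coords = {}
    for rank, img_idx in enumerate(order):
        row, col = divmod(rank, grid_w)
        if row % 2 == 1:                 # reverse every other row
            col = grid_w - 1 - col
        coords[img_idx] = (row, col)
    return coords
```

Because the hierarchy traversal places similar images consecutively and the curve preserves adjacency, similar images end up near each other on screen.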

  5. An adaptive algorithm for low contrast infrared image enhancement

    NASA Astrophysics Data System (ADS)

    Liu, Sheng-dong; Peng, Cheng-yuan; Wang, Ming-jia; Wu, Zhi-guo; Liu, Jia-qi

    2013-08-01

An adaptive infrared image enhancement algorithm for low-contrast images is proposed in this paper, to address the problem that conventional enhancement algorithms cannot effectively identify regions of interest when the image dynamic range is large. The algorithm starts from the characteristics of human visual perception and combines global adaptive enhancement with local feature boosting, so that image contrast is raised and texture is rendered more distinctly. Firstly, the global dynamic range is adjusted: a correspondence is established between the dynamic range of the original image and the display gray scale, raising the gray levels of bright objects while reducing those of dark targets, which improves overall image contrast. Secondly, a filtering algorithm applied to each pixel and its neighborhood extracts texture information and adjusts the brightness of the current pixel to enhance the local contrast of the image. This overcomes the tendency of traditional edge-detection algorithms to blur outlines and preserves the distinctness of texture detail during enhancement. Lastly, the globally and locally adjusted images are normalized and combined to ensure a smooth transition of image details. Extensive experiments compare the proposed algorithm with other conventional enhancement algorithms, using two groups of blurred IR images. They show that histogram equalization boosts contrast but leaves detail unclear, while the Retinex algorithm makes detail distinguishable. The image processed by the self-adaptive enhancement algorithm proposed in this paper shows clear detail, and its contrast is markedly improved compared with Retinex.

  6. Color enhancement of highly correlated images. I - Decorrelation and HSI contrast stretches. [hue saturation intensity]

    NASA Technical Reports Server (NTRS)

    Gillespie, Alan R.; Kahle, Anne B.; Walker, Richard E.

    1986-01-01

    Conventional enhancements for the color display of multispectral images are based on independent contrast modifications or 'stretches' of three input images. This approach is not effective if the image channels are highly correlated or if the image histograms are strongly bimodal or more complex. Any of several procedures that tend to 'stretch' color saturation while leaving hue unchanged may better utilize the full range of colors for the display of image information. Two conceptually different enhancements are discussed: the 'decorrelation stretch', based on principal-component (PC) analysis, and the 'stretch' of 'hue'-'saturation'-intensity (HSI) transformed data. The PC transformation is scene-dependent, but the HSI transformation is invariant. Examples of images enhanced by conventional linear stretches, by the decorrelation stretch, and by stretches of HSI-transformed data are compared. Schematic variation diagrams or two- and three-dimensional histograms are used to illustrate the 'decorrelation stretch' method and the effect of the different enhancements.
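A decorrelation stretch can be sketched directly from its definition: rotate pixel vectors into principal components, equalize the component variances, and rotate back. Scaling every component up to the largest eigenvalue, as below, is one common choice among several:

```python
import numpy as np

def decorrelation_stretch(rgb):
    """Decorrelation stretch of an (h, w, 3) image: equalize the
    variance of the principal components, then rotate back so hues
    are roughly preserved while saturation is exaggerated."""
    h, w, c = rgb.shape
    X = rgb.reshape(-1, c).astype(np.float64)
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    # scale each component so its variance matches the largest one
    scale = np.diag(np.sqrt(evals.max() / np.maximum(evals, 1e-12)))
    T = evecs @ scale @ evecs.T              # symmetric stretch matrix
    Y = (X - mean) @ T + mean
    return np.clip(Y, 0, 255).reshape(h, w, c)
```

On highly correlated channels the minor components carry little variance, so this amplification is exactly what spreads the display over the full color range.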

  7. Towards Adaptive High-Resolution Images Retrieval Schemes

    NASA Astrophysics Data System (ADS)

    Kourgli, A.; Sebai, H.; Bouteldja, S.; Oukil, Y.

    2016-06-01

    Nowadays, content-based image-retrieval techniques constitute powerful tools for archiving and mining large remote-sensing image databases. High-spatial-resolution images are complex and differ widely in their content, even within the same category. All images are more or less textured and structured. During the last decade, different approaches for the retrieval of this type of image have been proposed, differing mainly in the type of features extracted. As these features are supposed to represent the query image efficiently, they should be suited to every kind of image contained in the database. However, if the image to recognize is somewhat or highly structured, a shape feature will be correspondingly effective, whereas if the image is composed of a single texture, a parameter reflecting that texture will prove more efficient. This motivates the use of adaptive schemes. For this purpose, we propose to investigate this idea by adapting the retrieval scheme to the nature of the image. This is achieved through a preliminary analysis that makes the indexing stage supervised. First results show that, in this way, simple methods can match the performance of complex methods such as those based on building a bag of visual words from SIFT (Scale Invariant Feature Transform) descriptors and those based on multiscale feature extraction using wavelets and steerable pyramids.
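The preliminary analysis that selects a descriptor can be sketched as a single "structuredness" test. The gradient-density measure and the threshold below are hypothetical illustrations of the idea, not the authors' actual criterion:

```python
import numpy as np

def choose_descriptor(gray, edge_thresh=0.10):
    """Pick an indexing strategy from a quick pre-analysis: the
    fraction of strong-gradient pixels serves as a structuredness
    measure (the threshold is an assumed illustrative value)."""
    gy, gx = np.gradient(gray.astype(np.float64))
    mag = np.hypot(gx, gy)
    edge_fraction = (mag > mag.mean() + mag.std()).mean()
    return "shape" if edge_fraction > edge_thresh else "texture"
```

A retrieval system would then index the image with a shape descriptor or a texture descriptor according to the returned label.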

  8. Bayer patterned high dynamic range image reconstruction using adaptive weighting function

    NASA Astrophysics Data System (ADS)

    Kang, Hee; Lee, Suk Ho; Song, Ki Sun; Kang, Moon Gi

    2014-12-01

    It is not easy to acquire a desired high dynamic range (HDR) image directly from a camera due to the limited dynamic range of most image sensors. Therefore, generally, a post-process called HDR image reconstruction is used, which reconstructs an HDR image from a set of differently exposed images to overcome the limited dynamic range. However, conventional HDR image reconstruction methods suffer from noise factors and ghost artifacts. This is due to the fact that the input images taken with a short exposure time contain much noise in the dark regions, which contributes to increased noise in the corresponding dark regions of the reconstructed HDR image. Furthermore, since input images are acquired at different times, the images contain different motion information, which results in ghost artifacts. In this paper, we propose an HDR image reconstruction method which reduces the impact of the noise factors and prevents ghost artifacts. To reduce the influence of the noise factors, the weighting function, which determines the contribution of a certain input image to the reconstructed HDR image, is designed to adapt to the exposure time and local motions. Furthermore, the weighting function is designed to exclude ghosting regions by considering the differences of the luminance and the chrominance values between several input images. Unlike conventional methods, which generally work on a color image processed by the image processing module (IPM), the proposed method works directly on the Bayer raw image. This allows for a linear camera response function and also improves the efficiency in hardware implementation. Experimental results show that the proposed method can reconstruct high-quality Bayer patterned HDR images while being robust against ghost artifacts and noise factors.
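The exposure-adaptive weighting idea can be illustrated with a minimal merge of differently exposed frames. The Gaussian "hat" weight and its exposure scaling are illustrative; the paper's actual weighting also handles local motion and works on Bayer raw data, which this sketch omits:

```python
import numpy as np

def merge_hdr(images, exposures, sigma=0.2):
    """Merge differently exposed 8-bit frames into a radiance map.
    Each pixel's weight favours mid-range values and is scaled by
    exposure time, so short, noisy exposures contribute less in
    dark regions, as the abstract motivates."""
    acc = np.zeros_like(images[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, exposures):
        z = img.astype(np.float64) / 255.0
        w = np.exp(-((z - 0.5) ** 2) / (2 * sigma ** 2)) * t
        acc += w * (z / t)                 # radiance estimate z / t
        wsum += w
    return acc / np.maximum(wsum, 1e-9)
```

Working on raw data, as the authors do, keeps the response linear so that `z / t` really is proportional to scene radiance; after tone-mapped processing that proportionality no longer holds.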

  9. Seed viability detection using computerized false-color radiographic image enhancement

    NASA Technical Reports Server (NTRS)

    Vozzo, J. A.; Marko, Michael

    1994-01-01

    Seed radiographs are divided into density zones which are related to seed germination. The seeds that germinate have densities corresponding to false-color red. In turn, a seed sorter may be designed that rejects any seed without enough red to activate a gate along a moving belt carrying the seed source. This separates out only the seeds with preselected densities representing biological viability leading to germination; these selected seeds command a higher market value. Actual false-coloring isn't required for a computer to distinguish the significant gray-zone range: the range can be predetermined and screened without red imaging. Applying false-color enhancement is a means of emphasizing differences in gray-level density within any subject from photographic, radiographic, or video imaging. Within the 0-255 range of gray levels, colors can be assigned to any single level or group of gray levels. Densitometric values then become easily recognized colors that relate to image density. Choosing a color to identify a given density allows separation by morphology or composition (form or function). Additionally, the relative area of each color is readily available for determining the distribution of that density by comparison with other densities within the image.
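The gray-zone screening described above reduces to a windowed threshold test. The gray-level window and the area fraction below are hypothetical values for illustration; in practice they would be calibrated against germination trials:

```python
import numpy as np

# hypothetical gray-level window that false-coloring would show as red
VIABLE_LO, VIABLE_HI = 140, 200

def viable_mask(gray, min_fraction=0.10):
    """Return True when at least `min_fraction` of a seed's pixels
    fall inside the viability density window, i.e. the seed would
    carry enough 'red' to trip the sorter gate."""
    in_zone = (gray >= VIABLE_LO) & (gray <= VIABLE_HI)
    return bool(in_zone.mean() >= min_fraction)
```

As the abstract notes, no actual false-color rendering is needed; the computer only compares gray levels against the predetermined window.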

  10. Image segmentation with implicit color standardization using spatially constrained expectation maximization: detection of nuclei.

    PubMed

    Monaco, James; Hipp, J; Lucas, D; Smith, S; Balis, U; Madabhushi, Anant

    2012-01-01

    Color nonstandardness--the propensity for similar objects to exhibit different color properties across images--poses a significant problem in the computerized analysis of histopathology. Though many papers propose means for improving color constancy, the vast majority assume image formation via reflective light instead of light transmission as in microscopy, and thus are inappropriate for histological analysis. Previously, we presented a novel Bayesian color segmentation algorithm for histological images that is highly robust to color nonstandardness; this algorithm employed the expectation maximization (EM) algorithm to dynamically estimate for each individual image the probability density functions that describe the colors of salient objects. However, our approach, like most EM-based algorithms, ignored important spatial constraints, such as those modeled by Markov random fields (MRFs). Addressing this deficiency, we now present spatially-constrained EM (SCEM), a novel approach for incorporating Markov priors into the EM framework. With respect to our segmentation system, we replace EM with SCEM and then assess its improved ability to segment nuclei in H&E-stained histopathology. Segmentation performance is evaluated over seven (nearly) identical sections of gastrointestinal tissue stained using different protocols (simulating severe color nonstandardness). Over this dataset, our system identifies nuclear regions with an area under the receiver operator characteristic curve (AUC) of 0.838. If we disregard spatial constraints, the AUC drops to 0.748.
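As a toy illustration of folding a spatial prior into EM, the sketch below runs two-class Gaussian EM on a 1-D intensity profile and, after each E-step, averages each pixel's posterior with its neighbours. This simple posterior smoothing is only a stand-in for the MRF machinery of SCEM, not the authors' formulation:

```python
import numpy as np

def scem_1d(x, n_iter=20):
    """Two-class EM on a 1-D intensity array with a crude spatial
    step: posteriors are averaged with their (circular) neighbours
    after each E-step.  Returns a 0/1 label per pixel."""
    mu = np.array([x.min(), x.max()], dtype=np.float64)
    var = np.full(2, x.var() + 1e-6)
    pi = np.full(2, 0.5)
    for _ in range(n_iter):
        # E-step: unnormalized Gaussian responsibilities
        r = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(var)
        r /= r.sum(axis=1, keepdims=True)
        # spatial step: smooth posteriors along the pixel sequence
        r = (np.roll(r, 1, axis=0) + r + np.roll(r, -1, axis=0)) / 3.0
        # M-step: update mixture parameters
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        pi = nk / len(x)
    return r.argmax(axis=1)
```

Plain EM treats every pixel independently; the smoothing step discourages isolated label flips, which is the intuition behind the AUC gain the abstract reports.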

  11. Dual-tree complex wavelet transform applied on color descriptors for remote-sensed images retrieval

    NASA Astrophysics Data System (ADS)

    Sebai, Houria; Kourgli, Assia; Serir, Amina

    2015-01-01

    This paper highlights color component features that improve high-resolution satellite (HRS) images retrieval. Color component correlation across image lines and columns is used to define a revised color space, designed to take both color and neighborhood information into account simultaneously. From this space, color descriptors, namely rotation-invariant uniform local binary pattern, histogram of gradient, and a modified version of local variance, are derived through dual-tree complex wavelet transform (DT-CWT). A new color descriptor called smoothed local variance (SLV) using an edge-preserving smoothing filter is introduced. It is intended to offer an efficient way to represent texture/structure information using a rotation-invariant descriptor. This descriptor takes advantage of DT-CWT representation to enhance the retrieval performance of HRS images. We report an evaluation of the SLV descriptor associated with the new color space using different similarity distances in our content-based image retrieval scheme. We also perform comparison with some standard features. Experimental results show that the SLV descriptor allied to DT-CWT representation outperforms the other approaches.

  12. Color image detection by biomolecular photoreceptor using bacteriorhodopsin-based complex LB films.

    PubMed

    Choi, H G; Jung, W C; Min, J; Lee, W H; Choi, J W

    2001-12-01

    A biomolecular photoreceptor consisting of bacteriorhodopsin (bR)-based complex Langmuir-Blodgett (LB) films was developed for color image detection. By mimicking the functions of the pigments in the retina of the human visual system, biomolecules with photoelectric conversion function were chosen and used as constituents for an artificial photoreceptor. bR and flavin were deposited onto the patterned (9-pixelized) ITO glass by LB technique. A 9-pixel biomolecular photoreceptor was fabricated with a sandwich-type structure of ITO/LB films/electrolyte gel/Pt. Since each functional molecule shows its own response characteristic according to the light illumination in the visible region, a simplified knowledge-based algorithm for interpretation of the incident light wavelength (color) was proposed, based on the basic rule describing the relationship between the photoelectric response characteristics and the incident light wavelength. When simple color images were projected onto the photoreceptor, the primary colors of the visible region (red, green, and blue) were clearly recognized, and the projected color images were fairly well reproduced onto the color monitor by the proposed photoreceptor with the knowledge-based algorithm. It is concluded that the proposed device has the capability of recognizing color images and can be used as a model system to simulate the information-processing function of the human visual system.

  13. Double color image encryption using iterative phase retrieval algorithm in quaternion gyrator domain.

    PubMed

    Shao, Zhuhong; Shu, Huazhong; Wu, Jiasong; Dong, Zhifang; Coatrieux, Gouenou; Coatrieux, Jean Louis

    2014-03-10

    This paper describes a novel algorithm to encrypt double color images into a single indistinguishable image in the quaternion gyrator domain. By using an iterative phase retrieval algorithm, the phase masks used for encryption are obtained. Subsequently, the encrypted image is generated via cascaded quaternion gyrator transforms with different rotation angles. The parameters in the quaternion gyrator transforms and phases serve as encryption keys. By knowing these keys, the original color images can be fully restored. Numerical simulations have demonstrated the validity of the proposed encryption system as well as its robustness against loss of data and additive Gaussian noise. PMID:24663832

  14. Compressive spectral polarization imaging by a pixelized polarizer and colored patterned detector.

    PubMed

    Fu, Chen; Arguello, Henry; Sadler, Brian M; Arce, Gonzalo R

    2015-11-01

    A compressive spectral and polarization imager based on a pixelized polarizer and colored patterned detector is presented. The proposed imager captures several dispersed compressive projections with spectral and polarization coding. Stokes parameter images at several wavelengths are reconstructed directly from 2D projections. Employing a pixelized polarizer and colored patterned detector enables compressive sensing over spatial, spectral, and polarization domains, reducing the total number of measurements. Compressive sensing codes are specially designed to enhance the peak signal-to-noise ratio in the reconstructed images. Experiments validate the architecture and reconstruction algorithms.

  15. Spatial distribution of jovian clouds, hazes and colors from Cassini ISS multi-spectral images

    NASA Astrophysics Data System (ADS)

    Ordonez-Etxeberria, I.; Hueso, R.; Sánchez-Lavega, A.; Pérez-Hoyos, S.

    2016-03-01

    The Cassini spacecraft made a gravity assist flyby of Jupiter in December 2000. The Imaging Science Subsystem (ISS) acquired images of the planet that covered the visual range with filters sensitive to the distribution of clouds and hazes, their altitudes and color. We use a selection of these images to build high-resolution cylindrical maps of the planet in 9 wavelengths. We explore the spatial distribution of the planet reflectivity examining the distribution of color and altitudes of hazes as well as their relation. A variety of analyses is presented: (a) Principal Component Analysis (PCA); (b) color-altitude indices; and (c) chromaticity diagrams (for a quantitative characterization of Jupiter "true" colors as they would be perceived by a human observer). PCA of the full dataset indicates that six components are required to explain the data. These components are likely related to the distribution of cloud opacity at the main cloud, the distribution of two types of hazes, two chromophores or coloring processes and the distribution of convective storms. While the distribution of a single chromophore can explain most of the color variations in the atmosphere, a second coloring agent is required to explain the brownish cyclones in the North Equatorial Belt (NEB). This second colorant could be caused by a different chromophore or by the same chromophore located in structures deeper in the atmosphere. Color indices separate different dynamical regions where cloud color and altitude are correlated from those where they are not. The Great Red Spot (GRS) appears as a well separated region in terms of its position in a global color-altitude scatter diagram and different families of vortices are examined, including the red cyclones which are located deeper in the atmosphere. Finally, a chromaticity diagram of Jupiter nearly true color images quantifies the color variations in Jupiter's clouds from the perspective of a visual observer and helps to quantify how different

  16. A new fringeline-tracking approach for color Doppler ultrasound imaging phase unwrapping

    NASA Astrophysics Data System (ADS)

    Saad, Ashraf A.; Shapiro, Linda G.

    2008-03-01

    Color Doppler ultrasound imaging is a powerful non-invasive diagnostic tool for many clinical applications that involve examining the anatomy and hemodynamics of human blood vessels. These clinical applications include cardio-vascular diseases, obstetrics, and abdominal diseases. Since its commercial introduction in the early eighties, color Doppler ultrasound imaging has been used mainly as a qualitative tool, with very few attempts to quantify its images. Many imaging artifacts hinder the quantification of color Doppler images, the most important of which is the aliasing artifact that distorts the blood flow velocities measured by the color Doppler technique. In this work we address the color Doppler aliasing problem and present a methodology for recovering the true flow velocities from the aliased ones. The problem is formulated as a 2D phase-unwrapping problem, which is a well-defined problem with solid theoretical foundations in other imaging domains, including synthetic aperture radar and magnetic resonance imaging. This paper documents the need for a phase-unwrapping algorithm for use in color Doppler ultrasound image analysis. It describes a new phase-unwrapping algorithm that relies on the recently developed cutline detection approaches. The algorithm is novel in its use of heuristic information provided by the ultrasound imaging modality to guide the phase-unwrapping process. Experiments have been performed on both in-vitro flow-phantom data and in-vivo human blood flow data. Both data types were acquired under a controlled acquisition protocol developed to minimize the distortion of the color Doppler data and hence to simplify the phase-unwrapping task. In addition to the qualitative assessment of the results, a quantitative assessment approach was developed to measure the success of the results. The results of our new algorithm have been compared on ultrasound data to those from other well-known algorithms, and it outperforms all of them.
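The core idea, velocity aliasing as phase wrapping, is easy to demonstrate in one dimension. The sketch below maps aliased velocities to phase, unwraps, and maps back; it is a textbook 1-D illustration, not the paper's 2-D cutline-guided algorithm:

```python
import numpy as np

def unwrap_velocities(v_aliased, v_nyquist):
    """Recover true velocities from an aliased Doppler profile.
    v_nyquist is the Nyquist velocity at which wrap-around occurs;
    velocities are mapped to phase in [-pi, pi), unwrapped, and
    mapped back.  Assumes sample-to-sample velocity changes stay
    below v_nyquist."""
    phase = v_aliased / v_nyquist * np.pi    # velocity -> phase
    return np.unwrap(phase) / np.pi * v_nyquist
```

In 2-D the unwrapping path matters, which is why the paper's cutline detection and modality-specific heuristics are needed; this simple version only handles a single scan line.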

  17. Hyperspectral imaging using a color camera and its application for pathogen detection

    NASA Astrophysics Data System (ADS)

    Yoon, Seung-Chul; Shin, Tae-Sung; Heitschmidt, Gerald W.; Lawrence, Kurt C.; Park, Bosoon; Gamble, Gary

    2015-02-01

    This paper reports the results of a feasibility study for the development of a hyperspectral image recovery (reconstruction) technique using a RGB color camera and regression analysis in order to detect and classify colonies of foodborne pathogens. The target bacterial pathogens were the six representative non-O157 Shiga-toxin producing Escherichia coli (STEC) serogroups (O26, O45, O103, O111, O121, and O145) grown in Petri dishes of Rainbow agar. The purpose of the feasibility study was to evaluate whether a DSLR camera (Nikon D700) could be used to predict hyperspectral images in the wavelength range from 400 to 1,000 nm and even to predict the types of pathogens using a hyperspectral STEC classification algorithm that was previously developed. Unlike many other studies using color charts with known and noise-free spectra for training reconstruction models, this work used hyperspectral and color images, separately measured by a hyperspectral imaging spectrometer and the DSLR color camera. The color images were calibrated (i.e. normalized) to relative reflectance, subsampled and spatially registered to match with counterpart pixels in hyperspectral images that were also calibrated to relative reflectance. Polynomial multivariate least-squares regression (PMLR) was previously developed with simulated color images. In this study, partial least squares regression (PLSR) was also evaluated as a spectral recovery technique to minimize multicollinearity and overfitting. The two spectral recovery models (PMLR and PLSR) and their parameters were evaluated by cross-validation. The QR decomposition was used to find a numerically more stable solution of the regression equation. The preliminary results showed that PLSR was more effective especially with higher order polynomial regressions than PMLR. The best classification accuracy measured with an independent test set was about 90%. 
The results suggest the potential of cost-effective color imaging using hyperspectral image
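The spectral-recovery regression above can be sketched in its simplest linear form: fit a regularized map from RGB triplets to full spectra on paired training pixels, then apply it to new color pixels. This plain ridge regression is a stand-in for the PMLR/PLSR models evaluated in the study:

```python
import numpy as np

def fit_spectral_map(rgb_train, spectra_train, lam=1e-6):
    """Fit a ridge-regularized linear map from RGB (plus bias) to a
    full spectrum; rows of the inputs are paired training pixels."""
    X = np.hstack([rgb_train, np.ones((len(rgb_train), 1))])  # add bias
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ spectra_train)

def recover_spectra(rgb, B):
    """Predict spectra for new RGB pixels with the fitted map B."""
    X = np.hstack([rgb, np.ones((len(rgb), 1))])
    return X @ B
```

The regularization term plays the same role as PLSR's latent-variable truncation in the study: both guard against the multicollinearity of the three highly overlapping RGB channels.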

  18. Adaptive filtering image preprocessing for smart FPA technology

    NASA Astrophysics Data System (ADS)

    Brooks, Geoffrey W.

    1995-05-01

    This paper discusses two applications of adaptive filters for image processing on parallel architectures. The first, based on