Science.gov

Sample records for adaptive color image

  1. Adaptive color image watermarking algorithm

    NASA Astrophysics Data System (ADS)

    Feng, Gui; Lin, Qiwei

    2008-03-01

As a major method of intellectual property protection, digital watermarking techniques have been widely studied and used. Owing to the problems of data volume and color shift, however, watermarking of color images has been less widely studied, even though color images are the principal medium in multimedia applications. Considering the characteristics of the Human Visual System (HVS), an adaptive color image watermarking algorithm is proposed in this paper. The HSI color model is adopted for both the host and watermark images, and the DCT coefficients of the intensity component (I) of the host color image are used for watermark data embedding; during embedding, the number of embedded bits is adaptively varied with the complexity of the host image. The watermark image is first preprocessed by a two-level wavelet decomposition and then scrambled to enhance the anti-attack ability and security of the algorithm. According to their significance, some watermark bits are selected and others discarded to form the actual embedding data. Experimental results show that the proposed watermarking algorithm is robust to several common attacks while maintaining good perceptual quality.
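
The DCT-domain embedding idea can be sketched with a simple quantization scheme. This is an illustrative sketch, not the authors' exact algorithm: it hides one bit per 8x8 block by quantizing a single mid-frequency DCT coefficient of the intensity channel; the coefficient position (3, 4) and step size q are assumptions for the example.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (rows are basis vectors).
    k = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * j + 1) * k / (2 * n))
    m[0] /= np.sqrt(2)
    return m * np.sqrt(2.0 / n)

D = dct_matrix(8)

def embed_bit(block, bit, q=16.0):
    # Quantize one mid-frequency coefficient so that its multiple of q
    # carries the parity of the watermark bit (quantization index modulation).
    C = D @ block @ D.T                 # forward 2-D DCT
    C[3, 4] = (2 * np.round(C[3, 4] / (2 * q)) + bit) * q
    return D.T @ C @ D                  # inverse 2-D DCT

def extract_bit(block, q=16.0):
    C = D @ block @ D.T
    return int(np.round(C[3, 4] / q)) % 2
```

Because the embedded coefficient is quantized with margin q, the recovered bit survives small perturbations of the block.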

  2. Colored adaptive compressed imaging with a single photodiode.

    PubMed

    Yan, Yiyun; Dai, Huidong; Liu, Xingjiong; He, Weiji; Chen, Qian; Gu, Guohua

    2016-05-10

Computational ghost imaging is commonly used to reconstruct grayscale images; to date, however, there has been little research on reconstructing color images. In this paper, we theoretically and experimentally demonstrate a colored adaptive compressed imaging method. By imaging in the YUV color space, the proposed method exploits the sparsity of the U and V components in the wavelet domain, the interdependence between luminance and chrominance, and human visual characteristics. Simulation and experimental results show that our method greatly reduces the number of measurements required and offers better image quality than recovering the red (R), green (G), and blue (B) components separately in RGB color space. As single-photodiode applications continue to grow, our method shows great potential in many fields. PMID:27168280

  3. Adaptation and the color statistics of natural images.

    PubMed

    Webster, M A; Mollon, J D

    1997-12-01

    Color perception depends profoundly on adaptation processes that adjust sensitivity in response to the prevailing pattern of stimulation. We examined how color sensitivity and appearance might be influenced by adaptation to the color distributions characteristic of natural images. Color distributions were measured for natural scenes by sampling an array of locations within each scene with a spectroradiometer, or by recording each scene with a digital camera successively through 31 interference filters. The images were used to reconstruct the L, M and S cone excitation at each spatial location, and the contrasts along three post-receptoral axes [L + M, L - M or S - (L + M)]. Individual scenes varied substantially in their mean chromaticity and luminance, in the principal color-luminance axes of their distributions, and in the range of contrasts in their distributions. Chromatic contrasts were biased along a relatively narrow range of bluish to yellowish-green angles, lying roughly between the S - (L + M) axis (which was more characteristic of scenes with lush vegetation and little sky) and a unique blue-yellow axis (which was more typical of arid scenes). For many scenes L - M and S - (L + M) signals were highly correlated, with weaker correlations between luminance and chromaticity. We use a two-stage model (von Kries scaling followed by decorrelation) to show how the appearance of colors may be altered by light adaptation to the mean of the distributions and by contrast adaptation to the contrast range and principal axes of the distributions; and we show that such adjustments are qualitatively consistent with empirical measurements of asymmetric color matches obtained after adaptation to successive random samples drawn from natural distributions of chromaticities and lightnesses. Such adaptation effects define the natural range of operating states of the visual system. PMID:9425544
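
The two-stage model described above (von Kries scaling followed by decorrelation) can be sketched numerically. This is a schematic sketch, not the authors' fitted model: stage one divides cone excitations by the mean of the adapting distribution, and stage two whitens the post-receptoral contrast distribution along its principal axes.

```python
import numpy as np

def von_kries(lms, adapt_mean):
    # Stage 1: light adaptation, scaling each cone class (L, M, S)
    # by the mean of the adapting distribution.
    return lms / adapt_mean

def contrast_adapt(contrasts):
    # Stage 2: contrast adaptation modeled as decorrelation, rescaling
    # responses along the principal axes of the scene's contrast distribution.
    c = contrasts - contrasts.mean(axis=0)
    cov = np.cov(c, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    return (c @ evecs) / np.sqrt(evals)   # unit variance, zero correlation
```

After stage two, the adapted contrast signals are uncorrelated with equal variance, which is the sense in which adaptation "matches" the visual system to the scene statistics.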

  4. Color Enhancement in Endoscopic Images Using Adaptive Sigmoid Function and Space Variant Color Reproduction.

    PubMed

    Imtiaz, Mohammad S; Wahid, Khan A

    2015-01-01

Modern endoscopes play an important role in diagnosing various gastrointestinal (GI) tract diseases, and improved visual quality of endoscopic images supports better diagnosis. This paper presents an efficient color image enhancement method for endoscopic images, achieved in two stages: image enhancement at the gray level followed by space-variant chrominance mapping for color reproduction. Image enhancement is performed by applying an adaptive sigmoid function and uniformly distributing the sigmoid-mapped pixels. A space-variant chrominance mapping is then used to generate new chrominance components. The proposed method is applied to low-contrast white light images (WLI) to enhance and highlight the vascular and mucosal structures of the GI tract, and is also used to colorize grayscale narrow band images (NBI) and video frames. The focus value and color enhancement factor show that the enhancement level of the processed image is greatly increased compared to the original endoscopic image, and its overall contrast is higher. A color similarity test confirms that the proposed method does not introduce any color absent from the original image. The algorithm has low complexity and executes faster than related methods. PMID:26089969
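
An adaptive sigmoid stage of the kind described can be sketched as follows. This is a hedged illustration rather than the authors' exact function: the sigmoid is centred on the image mean (the adaptive part), and the gain value is an assumption of this sketch.

```python
import numpy as np

def adaptive_sigmoid(gray, gain=8.0):
    # Contrast enhancement with a sigmoid centred on the image mean;
    # centring on the mean adapts the mapping to each frame's brightness.
    g = gray.astype(float) / 255.0
    centre = g.mean()                       # adaptive parameter from image statistics
    out = 1.0 / (1.0 + np.exp(-gain * (g - centre)))
    # Rescale so the darkest and brightest possible inputs map to 0 and 255.
    lo = 1.0 / (1.0 + np.exp(-gain * (0.0 - centre)))
    hi = 1.0 / (1.0 + np.exp(-gain * (1.0 - centre)))
    return (255 * (out - lo) / (hi - lo)).astype(np.uint8)
```

The mapping is monotone, so pixel ordering is preserved while mid-tone contrast around the mean is stretched.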

  5. An adaptive algorithm for motion compensated color image coding

    NASA Technical Reports Server (NTRS)

    Kwatra, Subhash C.; Whyte, Wayne A.; Lin, Chow-Ming

    1987-01-01

    This paper presents an adaptive algorithm for motion compensated color image coding. The algorithm can be used for video teleconferencing or broadcast signals. Activity segmentation is used to reduce the bit rate and a variable stage search is conducted to save computations. The adaptive algorithm is compared with the nonadaptive algorithm and it is shown that with approximately 60 percent savings in computing the motion vector and 33 percent additional compression, the performance of the adaptive algorithm is similar to the nonadaptive algorithm. The adaptive algorithm results also show improvement of up to 1 bit/pel over interframe DPCM coding with nonuniform quantization. The test pictures used for this study were recorded directly from broadcast video in color.

  6. Adaptive Wiener filter super-resolution of color filter array images.

    PubMed

    Karch, Barry K; Hardie, Russell C

    2013-08-12

    Digital color cameras using a single detector array with a Bayer color filter array (CFA) require interpolation or demosaicing to estimate missing color information and provide full-color images. However, demosaicing does not specifically address fundamental undersampling and aliasing inherent in typical camera designs. Fast non-uniform interpolation based super-resolution (SR) is an attractive approach to reduce or eliminate aliasing and its relatively low computational load is amenable to real-time applications. The adaptive Wiener filter (AWF) SR algorithm was initially developed for grayscale imaging and has not previously been applied to color SR demosaicing. Here, we develop a novel fast SR method for CFA cameras that is based on the AWF SR algorithm and uses global channel-to-channel statistical models. We apply this new method as a stand-alone algorithm and also as an initialization image for a variational SR algorithm. This paper presents the theoretical development of the color AWF SR approach and applies it in performance comparisons to other SR techniques for both simulated and real data. PMID:23938797

  7. An Efficient and Self-Adapted Approach to the Sharpening of Color Images

    PubMed Central

    Lee, Tien-Lin

    2013-01-01

An efficient approach to the sharpening of color images is proposed in this paper. The image to be sharpened is first transformed to the HSV color model, and only the Value channel is used in the sharpening process while the Hue and Saturation channels are left unchanged. A proposed edge detector and a low-pass filter are then applied to the Value channel to pick out pixels around boundaries. Pixels detected as lying near edges or boundaries are adjusted to sharpen the boundary, while non-edge pixels are kept unaltered. The increment or decrement applied to each edge pixel is determined adaptively from global statistics of the image and local statistics of the pixel being sharpened. With the proposed approach, discontinuities are highlighted while most of the original information in the image is retained. Finally, the adjusted Value channel is recombined with the Hue and Saturation channels to obtain the sharpened color image. Extensive experiments on natural images demonstrate the effectiveness and efficiency of the proposed approach. PMID:24348136
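
The overall pipeline (sharpen Value, keep Hue and Saturation) can be sketched with a simple unsharp mask standing in for the paper's edge-adaptive adjustment; the stdlib `colorsys` conversion and the `amount` parameter are assumptions of this sketch.

```python
import colorsys
import numpy as np

def sharpen_value_channel(rgb, amount=0.6):
    # Sharpen only the V channel of an HSV decomposition; H and S are untouched,
    # so hue and saturation of every pixel are preserved.
    h, w, _ = rgb.shape
    hsv = np.array([colorsys.rgb_to_hsv(*px) for px in rgb.reshape(-1, 3) / 255.0])
    v = hsv[:, 2].reshape(h, w)
    # 3x3 box blur via edge padding, then unsharp mask: v + amount * (v - blur).
    p = np.pad(v, 1, mode="edge")
    blur = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    hsv[:, 2] = np.clip(v + amount * (v - blur), 0.0, 1.0).ravel()
    out = np.array([colorsys.hsv_to_rgb(*px) for px in hsv])
    return (out.reshape(h, w, 3) * 255).round().astype(np.uint8)
```

On a flat region the mask term vanishes, so uniform areas pass through unchanged, which matches the paper's goal of leaving non-edge pixels unaltered.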

  8. Adaptive Color Constancy Using Faces.

    PubMed

    Bianco, Simone; Schettini, Raimondo

    2014-08-01

    In this work we design an adaptive color constancy algorithm that, exploiting the skin regions found in faces, is able to estimate and correct the scene illumination. The algorithm automatically switches from global to spatially varying color correction on the basis of the illuminant estimations on the different faces detected in the image. An extensive comparison with both global and local color constancy algorithms is carried out to validate the effectiveness of the proposed algorithm in terms of both statistical and perceptual significance on a large heterogeneous data set of RAW images containing faces. PMID:26353334

  9. Adaptive Morphological Feature-Based Object Classifier for a Color Imaging System

    NASA Technical Reports Server (NTRS)

    McDowell, Mark; Gray, Elizabeth

    2009-01-01

Utilizing a Compact Color Microscope Imaging System (CCMIS), a unique algorithm has been developed that combines human intelligence with machine vision techniques to produce an autonomous microscope tool for biomedical, industrial, and space applications. The technique is based on an adaptive, morphological, feature-based mapping function comprising 24 mutually inclusive feature metrics used to characterize complex cells/objects derived from color image analysis. The features include: Area (total number of non-background pixels inside and including the perimeter); Bounding Box (smallest rectangle that bounds an object); CenterX and CenterY (x- and y-coordinates of the intensity-weighted center of mass of an entire object or multi-object blob); Circumference (a measure of perimeter length that accounts for diagonal neighbors being farther apart than horizontally or vertically adjacent pixels); Elongation (a measure of particle elongation between 0 and 1: it equals 1 when the bounding box is square and decreases as the particle becomes more elongated); Ext_vector (extremal vector); Major Axis and Minor Axis (lengths of the major and minor axes of the smallest ellipse encompassing an object); Partial (indicates whether the particle extends beyond the field of view); Perimeter Points (the points making up a particle's perimeter); Roundness [(4(pi) x area)/perimeter^2, a measure of object roundness or compactness between 0 and 1, where larger values indicate rounder objects]; Thin in Center (determines whether an object becomes thin in the center, i.e., figure-eight shaped); Theta (orientation of the major axis); and smoothness and color metrics for each component (red, green, blue).
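
Two of the simpler metrics in the list have closed-form definitions that can be written down directly; this sketch follows the stated formulas (roundness as 4*pi*area/perimeter^2, elongation as the bounding-box side ratio).

```python
import math

def roundness(area, perimeter):
    # 4*pi*area / perimeter**2: equals 1 for a perfect circle and
    # decreases toward 0 as the shape becomes less compact.
    return 4 * math.pi * area / perimeter ** 2

def elongation(bbox_w, bbox_h):
    # Ratio of the shorter to the longer bounding-box side:
    # 1 means a square bounding box, smaller values mean more elongated.
    return min(bbox_w, bbox_h) / max(bbox_w, bbox_h)
```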

  10. Adaptive optics retinal imaging reveals S-cone dystrophy in tritan color-vision deficiency

    NASA Astrophysics Data System (ADS)

    Baraas, Rigmor C.; Carroll, Joseph; Gunther, Karen L.; Chung, Mina; Williams, David R.; Foster, David H.; Neitz, Maureen

    2007-05-01

    Tritan color-vision deficiency is an autosomal dominant disorder associated with mutations in the short-wavelength-sensitive- (S-) cone-pigment gene. An unexplained feature of the disorder is that individuals with the same mutation manifest different degrees of deficiency. To date, it has not been possible to examine whether any loss of S-cone function is accompanied by physical disruption in the cone mosaic. Two related tritan subjects with the same novel mutation in their S-cone-opsin gene, but different degrees of deficiency, were examined. Adaptive optics was used to obtain high-resolution retinal images, which revealed distinctly different S-cone mosaics consistent with their discrepant phenotypes. In addition, a significant disruption in the regularity of the overall cone mosaic was observed in the subject completely lacking S-cone function. These results taken together with other recent findings from molecular genetics indicate that, with rare exceptions, tritan deficiency is progressive in nature.

  11. Adaptive clutter filter in 2-D color flow imaging based on in vivo I/Q signal.

    PubMed

    Zhou, Xiaoming; Zhang, Congyao; Liu, Dong C

    2014-01-01

Color flow imaging is widely used in clinical diagnosis. To obtain high-quality color flow images, clutter filtering is essential for separating the Doppler signals of blood from those of tissue. Traditional clutter filters, such as finite impulse response, infinite impulse response, and regression filters, are based on the assumption that the clutter signal is stationary or that tissue moves slowly. In realistic clinical color flow imaging, however, the signals are non-stationary because of accelerating tissue motion. Moreover, most related work relies on simulated RF signals rather than in vivo I/Q data. In this paper, an adaptive polynomial regression filter with down-mixing by the instantaneous clutter frequency is therefore proposed and evaluated on in vivo carotid I/Q signals from realistic color flow imaging. To obtain the best performance, the optimal polynomial order of the regression filter and the optimal polynomial order for estimating the instantaneous clutter frequency were determined. Comparisons of mean blood velocity and 2-D color flow image quality show that the proposed adaptive filter significantly improves the mean blood velocity estimate and yields high-quality 2-D color flow images. PMID:24211911
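
A plain (non-adaptive) polynomial regression clutter filter, the baseline this work builds on, can be sketched as a projection onto a low-order polynomial subspace along slow time; the order and the use of a pseudoinverse are choices of this sketch, and the paper's down-mixing step is omitted.

```python
import numpy as np

def poly_regression_filter(iq, order=2):
    # Remove clutter by subtracting the least-squares polynomial fit
    # along the slow-time (ensemble) axis of complex I/Q samples.
    n = iq.shape[0]
    t = np.arange(n)
    V = np.vander(t, order + 1)          # polynomial basis, columns t**k
    proj = V @ np.linalg.pinv(V)         # projection onto the clutter subspace
    return iq - proj @ iq                # blood signal = residual
```

Slowly varying tissue echoes are well modeled by a low-order polynomial and are removed, while faster blood Doppler components survive in the residual.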

  12. Computationally Efficient Locally Adaptive Demosaicing of Color Filter Array Images Using the Dual-Tree Complex Wavelet Packet Transform

    PubMed Central

    Aelterman, Jan; Goossens, Bart; De Vylder, Jonas; Pižurica, Aleksandra; Philips, Wilfried

    2013-01-01

    Most digital cameras use an array of alternating color filters to capture the varied colors in a scene with a single sensor chip. Reconstruction of a full color image from such a color mosaic is what constitutes demosaicing. In this paper, a technique is proposed that performs this demosaicing in a way that incurs a very low computational cost. This is done through a (dual-tree complex) wavelet interpretation of the demosaicing problem. By using a novel locally adaptive approach for demosaicing (complex) wavelet coefficients, we show that many of the common demosaicing artifacts can be avoided in an efficient way. Results demonstrate that the proposed method is competitive with respect to the current state of the art, but incurs a lower computational cost. The wavelet approach also allows for computationally effective denoising or deblurring approaches. PMID:23671575

  13. Color image processing for date quality evaluation

    NASA Astrophysics Data System (ADS)

    Lee, Dah Jye; Archibald, James K.

    2010-01-01

Many agricultural non-contact visual inspection applications use color image processing because color is often a good indicator of product quality. Color evaluation is an essential step in the processing and inventory control of fruits and vegetables that directly affects profitability. Most color spaces, such as RGB and HSV, represent colors with three-dimensional data, which makes color image processing a challenging task. Since most agricultural applications only require analysis of a predefined set or range of colors, mapping these relevant colors to a small number of indexes allows simple and efficient color image processing for quality evaluation. This paper presents a simple but efficient color mapping and image processing technique designed specifically for real-time quality evaluation of Medjool dates. In contrast with more complex color image processing techniques, the proposed color mapping method makes it easy for a human operator to specify and adjust color-preference settings for the color groups representing distinct quality levels. With this technique, the color image is first converted to a color map in which a single color index represents the color value of each pixel, and fruit maturity level is evaluated from these indices. A skin lamination threshold is then determined from the fruit surface characteristics; this adaptive threshold is used to detect delaminated fruit skin and hence determine fruit quality. This robust color grading technique has been applied to real-time Medjool date grading.
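
The color mapping step can be sketched as nearest-reference-color indexing; the reference colors and the squared-Euclidean distance are assumptions of this sketch, not the paper's operator-tuned settings.

```python
import numpy as np

def map_to_color_indices(rgb, reference_colors):
    # Assign every pixel the index of its nearest reference color
    # (squared Euclidean distance in RGB), yielding one index per pixel.
    pixels = rgb.reshape(-1, 3).astype(float)
    refs = np.asarray(reference_colors, dtype=float)
    d2 = ((pixels[:, None, :] - refs[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1).reshape(rgb.shape[:2])
```

Downstream quality rules then operate on the small index map instead of the full three-dimensional color data.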

  14. Visual color image processing

    NASA Astrophysics Data System (ADS)

    Qiu, Guoping; Schaefer, Gerald

    1999-12-01

In this paper, we propose a color image processing method that combines modern signal processing techniques with knowledge of the properties of the human color vision system. Color signals are processed differently according to their visual importance. The emphasis of the technique is on preserving the total visual quality of the image while taking computational efficiency into account. A specific color image enhancement technique, termed Hybrid Vector Median Filtering, is presented. Computer simulations demonstrate that the new approach is technically sound and that its results are comparable to or better than those of traditional methods.
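
A plain (not hybrid) vector median filter illustrates the idea underlying the technique named above: each output pixel is the window sample minimizing the summed distance to all other samples, so no new colors are invented. The 3x3 window and L2 distance are assumptions of this sketch.

```python
import numpy as np

def vector_median_filter(rgb, radius=1):
    # Vector median over a (2*radius+1)^2 window: the output pixel is the
    # window sample with the smallest summed L2 distance to all other samples.
    h, w, _ = rgb.shape
    out = np.empty_like(rgb)
    p = np.pad(rgb.astype(float), ((radius,) * 2, (radius,) * 2, (0, 0)),
               mode="edge")
    for y in range(h):
        for x in range(w):
            win = p[y:y + 2 * radius + 1, x:x + 2 * radius + 1].reshape(-1, 3)
            dists = np.linalg.norm(win[:, None] - win[None, :], axis=2).sum(axis=1)
            out[y, x] = win[dists.argmin()]
    return out
```

Because the output is always one of the input vectors, impulse noise is suppressed without introducing color artifacts.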

  15. Color demosaicking via robust adaptive sparse representation

    NASA Astrophysics Data System (ADS)

    Huang, Lili; Xiao, Liang; Chen, Qinghua; Wang, Kai

    2015-09-01

    A single sensor camera can capture scenes by means of a color filter array. Each pixel samples only one of the three primary colors. We use a color demosaicking (CDM) technique to produce full color images and propose a robust adaptive sparse representation model for high quality CDM. The data fidelity term is characterized by l1 norm to suppress the heavy-tailed visual artifacts with an adaptively learned dictionary, while the regularization term is encouraged to seek sparsity by forcing sparse coding close to its nonlocal means to reduce coding errors. Based on the classical quadratic penalty function technique in optimization and an operator splitting method in convex analysis, we further present an effective iterative algorithm to solve the variational problem. The efficiency of the proposed method is demonstrated by experimental results with simulated and real camera data.

  16. Color harmonization for images

    NASA Astrophysics Data System (ADS)

    Tang, Zhen; Miao, Zhenjiang; Wan, Yanli; Wang, Zhifei

    2011-04-01

    Color harmonization is an artistic technique to adjust a set of colors in order to enhance their visual harmony so that they are aesthetically pleasing in terms of human visual perception. We present a new color harmonization method that treats the harmonization as a function optimization. For a given image, we derive a cost function based on the observation that pixels in a small window that have similar unharmonic hues should be harmonized with similar harmonic hues. By minimizing the cost function, we get a harmonized image in which the spatial coherence is preserved. A new matching function is proposed to select the best matching harmonic schemes, and a new component-based preharmonization strategy is proposed to preserve the hue distribution of the harmonized images. Our approach overcomes several shortcomings of the existing color harmonization methods. We test our algorithm with a variety of images to demonstrate the effectiveness of our approach.

  17. Color Doppler flow imaging.

    PubMed

    Foley, W D; Erickson, S J

    1991-01-01

    The performance requirements and operational parameters of a color Doppler system are outlined. The ability of an operator to recognize normal and abnormal variations in physiologic flow and artifacts caused by noise and aliasing is emphasized. The use of color Doppler flow imaging is described for the vessels of the neck and extremities, upper abdomen and abdominal transplants, obstetrics and gynecology, dialysis fistulas, and testicular and penile flow imaging. PMID:1898567

  18. Color image segmentation

    NASA Astrophysics Data System (ADS)

    McCrae, Kimberley A.; Ruck, Dennis W.; Rogers, Steven K.; Oxley, Mark E.

    1994-03-01

    The most difficult stage of automated target recognition is segmentation. Current segmentation problems include faces and tactical targets; previous efforts to segment these objects have used intensity and motion cues. This paper develops a color preprocessing scheme to be used with the other segmentation techniques. A neural network is trained to identify the color of a desired object, eliminating all but that color from the scene. Gabor correlations and 2D wavelet transformations will be performed on stationary images; and 3D wavelet transforms on multispectral data will incorporate color and motion detection into the machine visual system. The paper will demonstrate that color and motion cues can enhance a computer segmentation system. Results from segmenting faces both from the AFIT data base and from video taped television are presented; results from tactical targets such as tanks and airplanes are also given. Color preprocessing is shown to greatly improve the segmentation in most cases.

  19. Adaptive characterization method for desktop color printers

    NASA Astrophysics Data System (ADS)

    Shen, Hui-Liang; Zheng, Zhi-Huan; Jin, Chong-Chao; Du, Xin; Shao, Si-Jie; Xin, John H.

    2013-04-01

With the rapid development of multispectral imaging techniques, it is desirable that spectral color be accurately reproduced using desktop color printers. However, due to the specific spectral gamuts determined by printer inks, it is almost impossible to exactly replicate reflectance spectra in other media. In addition, as ink densities cannot be individually controlled, desktop printers can only be treated as red-green-blue devices, making physical models infeasible. We propose a locally adaptive method, consisting of both forward and inverse models, for desktop printer characterization. In the forward model, we establish an adaptive transform between control values and reflectance spectra on individual cellular subsets using weighted polynomial regression. In the inverse model, we first determine the candidate space of control values based on global inverse regression, and then compute the optimal control values by minimizing the color difference between the actual spectrum and the spectrum predicted by the forward transform. Experimental results show that the proposed method reproduces colors accurately on different media under multiple illuminants.

  20. Image indexing using color correlograms

    DOEpatents

    Huang, Jing; Kumar, Shanmugasundaram Ravi; Mitra, Mandar; Zhu, Wei-Jing

    2001-01-01

A color correlogram is a three-dimensional table indexed by color and distance between pixels which expresses how the spatial correlation of color changes with distance in a stored image. The color correlogram may be used to distinguish an image from other images in a database. To create a color correlogram, the colors in the image are quantized into m color values, c_1, ..., c_m. Also, the distance values k ∈ [d] to be used in the correlogram are determined, where [d] is the set of distances between pixels in the image and d_max is the maximum distance between pixels in the image. Each entry (i, j, k) in the table is the probability of finding a pixel of color c_j at a selected distance k from a pixel of color c_i. A color autocorrelogram, a restricted version of the color correlogram that considers only color pairs of the form (i, i), may also be used to identify an image.
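
The table construction described in the patent can be sketched directly for small quantized images; the use of the L-infinity (chessboard) distance follows the usual correlogram formulation and is an assumption of this sketch.

```python
import numpy as np

def color_correlogram(img, n_colors, distances):
    # gram[i, j, k] estimates the probability that a pixel at L-infinity
    # distance distances[k] from a color-i pixel has color j, sampled over
    # the square ring of pixels at that distance.
    h, w = img.shape
    gram = np.zeros((n_colors, n_colors, len(distances)))
    counts = np.zeros((n_colors, len(distances)))
    for ki, d in enumerate(distances):
        for y in range(h):
            for x in range(w):
                ci = img[y, x]
                for dy in range(-d, d + 1):
                    for dx in range(-d, d + 1):
                        if max(abs(dy), abs(dx)) != d:
                            continue  # keep only the ring at distance d
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            gram[ci, img[ny, nx], ki] += 1
                            counts[ci, ki] += 1
    return gram / np.maximum(counts[:, None, :], 1)
```

The diagonal entries gram[i, i, k] form the autocorrelogram mentioned at the end of the record.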

  1. CFA-aware features for steganalysis of color images

    NASA Astrophysics Data System (ADS)

    Goljan, Miroslav; Fridrich, Jessica

    2015-03-01

Color interpolation is a form of upsampling that introduces constraints on the relationships between neighboring pixels in a color image. These constraints can be exploited to substantially boost the accuracy of steganography detectors. In this paper, we introduce a rich model formed by 3D co-occurrences of color noise residuals split according to the structure of the Bayer color filter array to further improve detection. Some color interpolation algorithms, such as AHD and PPG, impose pixel constraints so tight that extremely accurate detection becomes possible with merely eight features, eliminating the need for a richer model. We carry out experiments on non-adaptive LSB matching and the content-adaptive algorithm WOW with five different color interpolation algorithms. In contrast to grayscale images, in color images exhibiting traces of color interpolation the security of WOW is significantly lower and, depending on the interpolation algorithm, may even be lower than that of non-adaptive LSB matching.

  2. Sparse representation for color image restoration.

    PubMed

    Mairal, Julien; Elad, Michael; Sapiro, Guillermo

    2008-01-01

Sparse representations of signals have drawn considerable interest in recent years. The assumption that natural signals, such as images, admit a sparse decomposition over a redundant dictionary leads to efficient algorithms for handling such sources of data. In particular, the design of well-adapted dictionaries for images has been a major challenge. The K-SVD has recently been proposed for this task and shown to perform very well on various grayscale image processing tasks. In this paper, we address the problem of learning dictionaries for color images and extend the K-SVD-based grayscale image denoising algorithm that appears in earlier work. This work puts forward ways of handling nonhomogeneous noise and missing information, paving the way to state-of-the-art results in applications such as color image denoising, demosaicing, and inpainting, as demonstrated in this paper. PMID:18229804

  3. Color image segmentation considering human sensitivity for color pattern variations

    NASA Astrophysics Data System (ADS)

    Yoon, Kuk-Jin; Kweon, In-So

    2001-10-01

Color image segmentation plays an important role in computer vision and image processing. In this paper, we propose a novel color image segmentation algorithm that accounts for human visual sensitivity to color pattern variations by generalizing K-means clustering. The human visual system's sensitivity to color differences varies with the spatial variation of the color pattern. To reflect this effect, we define the Color Complexity Measure (CCM) by calculating the absolute deviation with Gaussian weighting within a local mask, and assign a weight to each color vector using the CCM values.
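
The CCM as described (Gaussian-weighted absolute deviation within a local mask) can be sketched on a single channel; applying it per channel of a color vector, and the mask size and sigma, are assumptions of this sketch.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # Normalized 2-D Gaussian weighting mask.
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def color_complexity(channel, size=5, sigma=1.0):
    # CCM sketch: Gaussian-weighted mean absolute deviation from the local
    # weighted mean inside each mask; high where the pattern varies rapidly.
    k = gaussian_kernel(size, sigma)
    h, w = channel.shape
    pad = size // 2
    p = np.pad(channel.astype(float), pad, mode="reflect")
    ccm = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            win = p[y:y + size, x:x + size]
            mu = (k * win).sum()
            ccm[y, x] = (k * np.abs(win - mu)).sum()
    return ccm
```

Flat regions score zero and busy regions score high, so the CCM can down-weight colors in visually complex areas during clustering.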

  4. Image subregion querying using color correlograms

    DOEpatents

    Huang, Jing; Kumar, Shanmugasundaram Ravi; Mitra, Mandar; Zhu, Wei-Jing

    2002-01-01

    A color correlogram (10) is a representation expressing the spatial correlation of color and distance between pixels in a stored image. The color correlogram (10) may be used to distinguish objects in an image as well as between images in a plurality of images. By intersecting a color correlogram of an image object with correlograms of images to be searched, those images which contain the objects are identified by the intersection correlogram.

  5. Transfer color to night vision images

    NASA Astrophysics Data System (ADS)

    Sun, Shaoyuan; Jing, Zhongliang; Liu, Gang; Li, Zhenhua

    2005-08-01

Natural color appearance is the key problem in the color night vision field. In this paper, the color mood of a daytime color image is transferred to a monochrome night vision image, giving the night image a natural color appearance. For each pixel in the night vision image, the best-matching pixel in the color image is found using a texture similarity measure: entropy, energy, contrast, homogeneity, and correlation features based on the co-occurrence matrix are combined to find corresponding pixels between the two images. We use a genetic algorithm (GA) to find the optimal weighting factors assigned to the five features; a GA is also employed in searching for matching pixels to speed up the color transfer. When the best-matching pixel in the color image is found, its chromaticity values are transferred to the corresponding pixel of the night vision image. Experimental results demonstrate the efficiency of this natural color transfer technique.

  6. Snapshot colored compressive spectral imager.

    PubMed

    Correa, Claudia V; Arguello, Henry; Arce, Gonzalo R

    2015-10-01

    Traditional spectral imaging approaches require sensing all the voxels of a scene. Colored mosaic FPA detector-based architectures can acquire sets of the scene's spectral components, but the number of spectral planes depends directly on the number of available filters used on the FPA, which leads to reduced spatiospectral resolutions. Instead of sensing all the voxels of the scene, compressive spectral imaging (CSI) captures coded and dispersed projections of the spatiospectral source. This approach mitigates the resolution issues by exploiting optical phenomena in lenses and other elements, which, in turn, compromise the portability of the devices. This paper presents a compact snapshot colored compressive spectral imager (SCCSI) that exploits the benefits of the colored mosaic FPA detectors and the compression capabilities of CSI sensing techniques. The proposed optical architecture has no moving parts and can capture the spatiospectral information of a scene in a single snapshot by using a dispersive element and a color-patterned detector. The optical and the mathematical models of SCCSI are presented along with a testbed implementation of the system. Simulations and real experiments show the accuracy of SCCSI and compare the reconstructions with those of similar CSI optical architectures, such as the CASSI and SSCSI systems, resulting in improvements of up to 6 dB and 1 dB of PSNR, respectively. PMID:26479928

  7. Color (RGB) imaging laser radar

    NASA Astrophysics Data System (ADS)

    Ferri De Collibus, M.; Bartolini, L.; Fornetti, G.; Francucci, M.; Guarneri, M.; Nuvoli, M.; Paglia, E.; Ricci, R.

    2008-03-01

    We present a new color (RGB) imaging 3D laser scanner prototype recently developed at ENEA (Italy). The sensor is based on the AM range-finding technique and uses three distinct beams (650 nm, 532 nm, and 450 nm) in a monostatic configuration. During a scan, the laser beams are simultaneously swept over the target, yielding range and three separate channels (R, G, and B) of reflectance information for each sampled point. This information, organized into range and reflectance images, is then processed to produce very high-definition color pictures and faithful, natively colored 3D models. Notable characteristics of the system are the absence of shadows in the acquired reflectance images - due to the system's monostatic setup and intrinsic self-illumination capability - and high noise rejection, achieved by using a narrow field of view and interferential filters. The system is also very accurate in range determination (relative accuracy better than 10^-4) at distances up to several meters. These unprecedented features make the system particularly suited to applications in the domain of cultural heritage preservation, where it could be used by conservators for examining in detail the state of degradation of frescoed walls, monuments, and paintings, even at distances of several meters and in hardly accessible locations. After providing some theoretical background, we describe the general architecture and operation modes of the color 3D laser scanner, reporting and discussing first experimental results and comparing high-definition color images produced by the instrument with photographs of the same subjects taken with a Nikon D70 digital camera.

  8. Color Imaging management in film processing

    NASA Astrophysics Data System (ADS)

    Tremeau, Alain; Konik, Hubert; Colantoni, Philippe

    2003-12-01

    The latest research projects in the LIGIV laboratory concern the capture, processing, archiving, and display of color images, considering the trichromatic nature of the Human Visual System (HVS). One of these projects addresses digital cinematographic film sequences of high resolution and dynamic range. This project aims to optimize the use of content for post-production operators and for the end user. The studies presented in this paper address the use of metadata to optimize the consumption of video content on a device of the user's choice, independent of the nature of the equipment that captured the content. Optimizing consumption includes enhancing the quality of image reconstruction on a display. Another part of this project addresses the content-based adaptation of image display. The main focus is on Region of Interest (ROI) operations, based on the ROI concepts of MPEG-7. The aim of this second part is to characterize and ensure the conditions of display even if the display device or display medium changes. This requires, first, the definition of a reference color space and of bi-directional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and the ambient illumination under which they are viewed. To match the displayed image to the intended appearance, all kinds of production metadata (camera specifications, camera color primaries, lighting conditions) should be associated with the film material. Metadata and content together build rich content. The author is assumed to specify conditions as known from the digital graphic arts. To control image pre-processing and post-processing, these specifications should be contained in the film's metadata. The specifications are related to ICC profiles but must additionally consider mesopic viewing conditions.

  9. Bio-inspired color image enhancement

    NASA Astrophysics Data System (ADS)

    Meylan, Laurence; Susstrunk, Sabine

    2004-06-01

    Capturing and rendering an image that fulfills the observer's expectations is a difficult task. This is because the signal reaching the eye is processed by a complex mechanism before forming a percept, whereas a capturing device only retains the physical value of light intensities. It is especially difficult to render complex scenes with highly varying luminances. For example, a picture taken inside a room where objects are visible through the windows will not be rendered correctly by a global technique: either details in the dim room will be hidden in shadow, or the objects viewed through the window will be too bright. The image has to be treated locally to resemble more closely what the observer remembers. The purpose of this work is to develop a technique for rendering images based on human local adaptation. We take inspiration from a model of color vision called Retinex, which determines the perceived color given the spatial relationships of the captured signals. Retinex has been used as a computational model for image rendering. In this article, we propose a new solution inspired by Retinex that is based on a single filter applied to the luminance channel. All parameters are image-dependent, so the process requires no parameter tuning, which makes the method more flexible than existing ones. The presented results show that our method suitably enhances high dynamic range images.
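A minimal single-filter, Retinex-inspired operator of the kind described - the log ratio of each pixel's luminance to a local surround - can be sketched as follows. The box-blur surround and the final rescaling are simplifications; the paper's filter and its image-dependent parameters differ:

```python
import numpy as np

def box_blur(x, r):
    """Separable box blur of radius r with edge padding (numpy only)."""
    k = 2 * r + 1
    p = np.pad(x, r, mode='edge')
    c = np.cumsum(p, axis=1)  # horizontal running mean via cumulative sums
    h = (c[:, k - 1:] - np.concatenate([np.zeros((p.shape[0], 1)), c[:, :-k]], axis=1)) / k
    c = np.cumsum(h, axis=0)  # then vertical
    v = (c[k - 1:, :] - np.concatenate([np.zeros((1, h.shape[1])), c[:-k, :]], axis=0)) / k
    return v

def retinex_enhance(lum, r=15, eps=1e-6):
    """Single-filter Retinex-style enhancement of a luminance channel in [0, 1]:
    log ratio of each pixel to its local surround, rescaled back to [0, 1]."""
    logr = np.log(lum + eps) - np.log(box_blur(lum, r) + eps)
    lo, hi = logr.min(), logr.max()
    return (logr - lo) / (hi - lo) if hi > lo else np.zeros_like(logr)
```

In a full pipeline, the enhanced luminance would be recombined with the original chrominance to preserve color.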

  10. Color image simulation for underwater optics.

    PubMed

    Boffety, Matthieu; Galland, Frédéric; Allais, Anne-Gaëlle

    2012-08-10

    Underwater optical image simulation is a valuable tool for oceanic science, especially for the characterization of image processing techniques such as color restoration. In this context, simulating images with a correct color rendering is crucial. This paper presents an extension of existing image simulation models to RGB imaging. The influence of the spectral discretization of the model parameters on the color rendering of the simulated images is studied. It is especially shown that, if only RGB data of the scene chosen for simulations are available, a spectral reconstruction step prior to the simulations improves the image color rendering. PMID:22885575

  11. Appearance can be deceiving: using appearance models in color imaging

    NASA Astrophysics Data System (ADS)

    Johnson, Garrett M.

    2007-01-01

    As color imaging has evolved through the years, our toolset for understanding it has similarly evolved. Research in color difference equations and uniform color spaces spawned tools such as CIELAB, which has had tremendous success over the years. Research on chromatic adaptation and other appearance phenomena then extended CIELAB to form the basis of color appearance models, such as CIECAM02. Color difference equations such as CIEDE2000 evolved to reconcile weaknesses in areas of the CIELAB space. Similarly, models such as S-CIELAB were developed to predict more spatially complex color difference calculations between images. Research in all of these fields is still going strong, and there seems to be a trend toward unification of some of the tools, such as calculating color differences in a color appearance space. Along these lines, image appearance models have been developed that attempt to combine all of the above models and metrics into one common framework. The goal is to allow the color imaging researcher to pick and choose the appropriate modeling toolset for their needs. To this end, the iCAM image appearance model framework was developed to study a variety of color imaging problems, including image difference and image quality evaluations as well as gamut mapping and high-dynamic-range (HDR) rendering. It is important to stress that iCAM was not designed to be a complete color imaging solution, but rather a starting point for unifying models of color appearance, color difference, and spatial vision. As such, the choice of model components is highly dependent on the problem being addressed. For example, with CIELAB it is clearly evident that it is not necessary to use the associated color difference equations to have great success as a device-independent color space. Likewise, it may not be necessary to use the spatial filtering components of an image appearance model when performing image rendering. This paper attempts to shed some light on some of the

  12. Computational efficiency improvements for image colorization

    NASA Astrophysics Data System (ADS)

    Yu, Chao; Sharma, Gaurav; Aly, Hussein

    2013-03-01

    We propose an efficient algorithm for colorization of greyscale images. As in prior work, colorization is posed as an optimization problem: a user specifies the color for a few scribbles drawn on the greyscale image, and the color image is obtained by propagating color information from the scribbles to surrounding regions while maximizing the local smoothness of colors. In this formulation, colorization is obtained by solving a large sparse linear system, which normally requires substantial computation and memory resources. Our algorithm improves computational performance through three innovations over prior colorization implementations. First, the linear system is solved iteratively without explicitly constructing the sparse matrix, which significantly reduces the required memory. Second, we formulate each iteration in terms of integral images obtained by dynamic programming, reducing repetitive computation. Third, we use a coarse-to-fine framework, where a lower-resolution subsampled image is first colorized and this low-resolution color image is upsampled to initialize the colorization process at the fine level. The improvements we develop provide significant speedup and memory savings compared to the conventional approach of solving the linear system directly using off-the-shelf sparse solvers, and allow us to colorize images with typical sizes encountered in realistic applications on typical commodity computing platforms.
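The first innovation - solving the propagation system iteratively without ever forming the sparse matrix - can be illustrated with a matrix-free Jacobi-style sketch. The luminance-similarity weighting and the periodic boundary handling (via np.roll, for brevity) are simplifying assumptions, not the authors' exact scheme:

```python
import numpy as np

def colorize(lum, scribble, mask, iters=500, sigma=0.1):
    """Matrix-free iterative colorization: chrominance values at scribbled
    pixels (mask == True) are held fixed, and every other pixel is repeatedly
    replaced by a luminance-similarity-weighted average of its 4-neighbors.
    This converges to the smoothness-maximizing solution without building
    the sparse system explicitly."""
    u = np.where(mask, scribble, 0.0)
    for _ in range(iters):
        num = np.zeros_like(u)
        den = np.zeros_like(u)
        for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
            ln = np.roll(lum, shift, axis=axis)   # neighbor luminance
            un = np.roll(u, shift, axis=axis)     # neighbor chrominance
            w = np.exp(-((lum - ln) ** 2) / (2 * sigma ** 2))
            num += w * un
            den += w
        u = np.where(mask, scribble, num / den)   # keep scribbles fixed
    return u
```

Each chrominance channel would be propagated independently; the memory footprint is a handful of image-sized arrays rather than an N x N sparse matrix.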

  13. Color space selection for JPEG image compression

    NASA Astrophysics Data System (ADS)

    Moroney, Nathan; Fairchild, Mark D.

    1995-10-01

    The Joint Photographic Experts Group's image compression algorithm has been shown to provide a very efficient and powerful method of compressing images. However, there is little substantive information about which color space should be utilized when implementing the JPEG algorithm. Currently, the JPEG algorithm can be used with any three-component color space. The objective of this research is to determine whether the color space selected significantly affects the achievable compression. The RGB, XYZ, YIQ, CIELAB, CIELUV, and CIELAB LCh color spaces were examined and compared. Both numerical measures and psychophysical techniques were used to assess the results. The final results indicate that the device space, RGB, is the worst color space in which to compress images. In comparison, the nonlinear transforms of the device space, CIELAB and CIELUV, are the best color spaces for compressing images. The XYZ, YIQ, and CIELAB LCh color spaces resulted in intermediate levels of compression.
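Although YCbCr is not among the spaces compared above, it illustrates why a luma-chroma transform tends to compress better than device RGB: it decorrelates the channels so the chroma components can be coded more coarsely. A sketch of the standard full-range JFIF conversion:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range RGB -> YCbCr as used by JFIF/JPEG (BT.601 luma weights).
    Input is 8-bit RGB; Cb and Cr are offset by 128 so neutral grays map
    to (Y, 128, 128)."""
    m = np.array([[ 0.299,     0.587,     0.114   ],
                  [-0.168736, -0.331264,  0.5     ],
                  [ 0.5,      -0.418688, -0.081312]])
    ycc = rgb.astype(float) @ m.T
    ycc[..., 1:] += 128.0
    return ycc
```

For a neutral pixel (R = G = B), the chroma channels land exactly at 128, so nearly achromatic image regions cost almost nothing to code.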

  14. Image color reduction method for color-defective observers using a color palette composed of 20 particular colors

    NASA Astrophysics Data System (ADS)

    Sakamoto, Takashi

    2015-01-01

    This study describes a color enhancement method that uses a color palette especially designed for protan and deutan defects, commonly known as red-green color blindness. The proposed color reduction method is based on a simple color mapping; it requires no complicated computation or image processing, and it can replace protan and deutan confusion (p/d-confusion) colors with protan and deutan safe (p/d-safe) colors. Color palettes for protan and deutan defects proposed by previous studies are composed of only a few p/d-safe colors, so the colors contained in these palettes are insufficient for replacing colors in photographs. Recently, Ito et al. proposed a p/d-safe color palette composed of 20 particular colors. The author demonstrated that their p/d-safe color palette can be applied to image color reduction in photographs as a means to replace p/d-confusion colors. This study presents the results of the proposed color reduction on photographs that include typical p/d-confusion colors, which can now be replaced. After the reduction process is completed, color-defective observers can distinguish these formerly confusing colors.
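The simple color mapping underlying such palette-based reduction can be sketched as a nearest-neighbor lookup: every pixel is replaced by the closest palette entry. This is a generic illustration using Euclidean distance, not Sakamoto's exact mapping for the 20-color p/d-safe palette:

```python
import numpy as np

def reduce_to_palette(img, palette):
    """Map every pixel to its nearest palette color (Euclidean distance in
    whatever color space img and palette share). img: (H, W, 3), palette: (K, 3)."""
    flat = img.reshape(-1, 1, 3).astype(float)
    pal = palette.reshape(1, -1, 3).astype(float)
    idx = np.argmin(np.sum((flat - pal) ** 2, axis=2), axis=1)  # (H*W,) indices
    return palette[idx].reshape(img.shape)
```

A perceptually uniform space (e.g. CIELAB) would be a better choice of distance space than raw RGB for this kind of mapping.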

  15. Enriching text with images and colored light

    NASA Astrophysics Data System (ADS)

    Sekulovski, Dragan; Geleijnse, Gijs; Kater, Bram; Korst, Jan; Pauws, Steffen; Clout, Ramon

    2008-01-01

    We present an unsupervised method to enrich textual applications with relevant images and colors. The images are collected by querying large image repositories, and the colors are subsequently computed using image processing. A prototype system based on this method is presented, in which the method is applied to song lyrics; in combination with a lyrics synchronization algorithm, the system produces a rich multimedia experience. To identify terms within the text that may be associated with images and colors, we select noun phrases using a part-of-speech tagger. Large image repositories are queried with these terms, and representative colors are extracted per term from the collected images, using either a histogram-based or a mean-shift-based algorithm. The representative color extraction exploits the non-uniform distribution of the colors found in the large repositories. The images ranked best by the search engine are displayed on a screen, while the extracted representative colors are rendered on controllable lighting devices in the living room. We evaluate our method by comparing the computed colors to standard color representations of a set of English color terms. A second evaluation focuses on the distance in color between a queried term in English and its translation in a foreign language. Based on results from three sets of terms, a measure of the suitability of a term for color extraction based on KL divergence is proposed. Finally, we compare the performance of the algorithm using the automatically indexed repository of Google Images and the manually annotated Flickr.com. Based on the results of these experiments, we conclude that the presented method can compute the relevant color for a term using a large image repository and image processing.
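A histogram-based representative color extraction of the kind mentioned can be sketched as follows; the bin count and the choice of returning the most populated cell's center are my assumptions, not the paper's exact algorithm:

```python
import numpy as np

def representative_color(pixels, bins=8):
    """Histogram-based representative color: quantize RGB into bins^3 cells
    and return the center of the most populated cell.
    pixels: (N, 3) integer RGB values in [0, 256)."""
    q = np.minimum((pixels.astype(int) * bins) // 256, bins - 1)
    cells = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    mode = np.bincount(cells, minlength=bins ** 3).argmax()
    i, j, k = mode // (bins * bins), (mode // bins) % bins, mode % bins
    w = 256.0 / bins
    return np.array([i, j, k]) * w + w / 2  # cell center as RGB triple
```

Running this over all images returned for a term yields that term's representative color, which could then be sent to the lighting devices.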

  16. Epistatic Adaptive Evolution of Human Color Vision

    PubMed Central

    Yokoyama, Shozo; Xing, Jinyi; Liu, Yang; Faggionato, Davide; Altun, Ahmet; Starmer, William T.

    2014-01-01

    Establishing the genotype-phenotype relationship is the key to understanding the molecular mechanism of phenotypic adaptation. This initial step may be untangled by analyzing appropriate ancestral molecules, but it is a daunting task to recapitulate the evolution of non-additive (epistatic) interactions of amino acids and the function of a protein separately. To adapt to the ultraviolet (UV)-free retinal environment, the short wavelength-sensitive (SWS1) visual pigment in human (human S1) switched from detecting UV to absorbing blue light during the last 90 million years. Mutagenesis experiments on the UV-sensitive pigment in the Boreoeutherian ancestor show that blue-sensitivity was achieved by seven mutations. The experimental and quantum chemical analyses show that 4,008 of all 5,040 possible evolutionary trajectories are terminated prematurely because they contain a dehydrated, nonfunctional pigment. Phylogenetic analysis further suggests that human ancestors achieved blue-sensitivity gradually and almost exclusively by epistasis. When the final stage of spectral tuning of human S1 was underway 45–30 million years ago, the middle and long wavelength-sensitive (MWS/LWS) pigments appeared and so-called trichromatic color vision was established by interprotein epistasis. The adaptive evolution of human S1 differs dramatically from that of orthologous pigments, in which a mutation with a major effect achieves blue-sensitivity in a fish and several mammalian species and regains UV vision in birds. These observations imply that the mechanisms of epistatic interactions must be understood by studying various orthologues in different species that have adapted to various ecological and physiological environments. PMID:25522367

  17. Low color distortion adaptive dimming scheme for power efficient LCDs

    NASA Astrophysics Data System (ADS)

    Nam, Hyoungsik; Song, Eun-Ji

    2013-06-01

    This paper presents a color compensation algorithm that reduces the color distortion caused by mismatches between the reference gamma value of a dimming algorithm and the display gamma values of an LCD panel in a low-power adaptive dimming scheme. In 2010, we presented the YrYgYb algorithm, which used display gamma values extracted from the luminance data of the red, green, and blue sub-pixels (Yr, Yg, and Yb), with simulation results based on an ideal panel model in which the color coordinates remain fixed over the gray levels. In contrast, this work introduces an XrYgZb color compensation algorithm that obtains the display gamma values of red, green, and blue from the tri-stimulus data Xr, Yg, and Zb, achieving a further reduction in color distortion. Both simulation and measurement results confirm that the XrYgZb algorithm outperforms the previous YrYgYb algorithm. In simulations conducted on a practical model derived from measured data, the XrYgZb scheme achieves lower maximum and average color difference values of 3.7743 and 0.6230 over 24 test picture images, compared to 4.864 and 0.7156 for the YrYgYb one. In measurements of a 19-inch LCD panel, the XrYgZb method also achieves smaller color difference values of 1.444072 and 5.588195 over 49 combinations of red, green, and blue data, compared to 1.50578 and 6.00403 for the YrYgYb, at backlight dimming ratios of 0.85 and 0.4.
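The basic interplay between backlight dimming and gamma-domain pixel compensation - the setting in which the reference/display gamma mismatch arises - can be sketched as follows. This is a single-channel illustration with an assumed gamma of 2.2, not the XrYgZb algorithm itself:

```python
import numpy as np

def compensate(gray, dim_ratio, gamma=2.2):
    """Boost pixel data to offset a dimmed backlight. With display luminance
    L ~ dim_ratio * (v/255)^gamma, choosing v' = v * (1/dim_ratio)^(1/gamma)
    keeps L unchanged for every pixel that does not clip at 255."""
    v = gray.astype(float) / 255.0
    out = np.clip(v * (1.0 / dim_ratio) ** (1.0 / gamma), 0.0, 1.0)
    return np.round(out * 255.0).astype(np.uint8)
```

The color distortion the paper targets appears precisely when the gamma assumed in this compensation differs per channel from the panel's actual red, green, and blue gamma curves.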

  18. Adaptive Image Denoising by Mixture Adaptation.

    PubMed

    Luo, Enming; Chan, Stanley H; Nguyen, Truong Q

    2016-10-01

    We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called expectation-maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions in this paper. First, we provide a full derivation of the EM adaptation algorithm and demonstrate methods to reduce its computational cost. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. The experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms. PMID:27416593

  19. Adaptive synchronization and pinning control of colored networks

    NASA Astrophysics Data System (ADS)

    Wu, Zhaoyan; Xu, Xin-Jian; Chen, Guanrong; Fu, Xinchu

    2012-12-01

    A colored network model, corresponding to a colored graph in mathematics, is used for describing the complexity of some interconnected physical systems. A colored network consists of colored nodes and edges. Colored nodes may have identical or nonidentical local dynamics, while colored edges between any pair of nodes denote not only the outer coupling topology but also the inner interactions. In this paper, synchronization of edge-colored networks is first studied via adaptive control and pinning control approaches. Then, synchronization of general colored networks is considered. To achieve synchronization of a colored network to an arbitrarily given orbit, open-loop control, pinning control, and adaptive coupling strength methods are proposed and tested, with some synchronization criteria derived. Finally, numerical examples are given to illustrate the theoretical results.

  20. Hepatitis Diagnosis Using Facial Color Image

    NASA Astrophysics Data System (ADS)

    Liu, Mingjia; Guo, Zhenhua

    Facial color diagnosis is an important diagnostic method in traditional Chinese medicine (TCM). However, due to its qualitative, subjective, and experience-based nature, traditional facial color diagnosis has a very limited application in clinical medicine. To circumvent these subjective and qualitative problems, in this paper we present a novel computer-aided facial color diagnosis method (CAFCDM). The method has three parts: a Face Image Database, an Image Preprocessing Module, and a Diagnosis Engine. The Face Image Database is built from a group of 116 patients affected by two kinds of liver disease and 29 healthy volunteers. Quantitative color features are extracted from the facial images using popular digital image processing techniques. A KNN classifier is then employed to model the relationship between the quantitative color features and the diseases. The results show that the method can properly identify three groups - healthy, severe hepatitis with jaundice, and severe hepatitis without jaundice - with accuracy higher than 73%.
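The KNN modeling step can be sketched in a few lines. This is a generic majority-vote KNN on color feature vectors; the paper's specific features, distance, and choice of k may differ:

```python
import numpy as np

def knn_predict(train_x, train_y, query, k=3):
    """Plain k-nearest-neighbor classifier: majority vote among the k training
    samples closest (Euclidean distance) to the query feature vector."""
    d = np.sum((train_x - query) ** 2, axis=1)     # squared distances
    nearest = train_y[np.argsort(d)[:k]]           # labels of k nearest
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]
```

Here train_x would hold the quantitative color features of the 145 subjects and train_y their group labels (healthy / hepatitis with jaundice / hepatitis without jaundice).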

  1. Do common mechanisms of adaptation mediate color discrimination and appearance? Contrast adaptation

    NASA Astrophysics Data System (ADS)

    Hillis, James M.; Brainard, David H.

    2007-08-01

    Are effects of background contrast on color appearance and sensitivity controlled by the same mechanism of adaptation? We examined the effects of background color contrast on color appearance and on color-difference sensitivity under well-matched conditions. We linked the data using Fechner's hypothesis that the rate of apparent stimulus change is proportional to sensitivity and examined a family of parametric models of adaptation. Our results show that both appearance and discrimination are consistent with the same mechanism of adaptation.

  2. Adaptive prediction trees for image compression.

    PubMed

    Robinson, John A

    2006-08-01

    This paper presents a complete general-purpose method for still-image compression called adaptive prediction trees. Efficient lossy and lossless compression of photographs, graphics, textual, and mixed images is achieved by ordering the data in a multicomponent binary pyramid, applying an empirically optimized nonlinear predictor, exploiting structural redundancies between color components, then coding with hex-trees and adaptive run-length/Huffman coders. Color palettization and order-statistics prefiltering are applied adaptively as appropriate. Over a diverse image test set, the method outperforms standard lossless and lossy alternatives. The competing lossy alternatives use block transforms and wavelets in well-studied configurations. A major result of this paper is that predictive coding is a viable and sometimes preferable alternative to these methods. PMID:16900671

  3. Digital image colorization based on distance transformation

    NASA Astrophysics Data System (ADS)

    Lagodzinski, Przemyslaw; Smolka, Bogdan

    2008-01-01

    Colorization is a term introduced by W. Markle to describe a computerized process for adding color to black-and-white pictures, movies, or TV programs. The task involves replacing the scalar value stored at each pixel of a grayscale image by a vector in a three-dimensional color space with luminance, saturation, and hue, or simply RGB. Since different colors may carry the same luminance value but vary in hue and/or saturation, the problem of colorization has no inherently "correct" solution. Due to these ambiguities, human interaction usually plays a large role. In this paper we present a novel colorization method that takes advantage of the morphological distance transformation, changes in neighboring pixel intensities, and gradients to propagate color within the grayscale image. The proposed method frees the user from segmenting the image, as color is provided simply by scribbles, which are then automatically propagated within the image. The effectiveness of the algorithm allows the user to work interactively and to obtain the desired results promptly after providing the color scribbles. In the paper we show that the proposed method yields high-quality colorization results for still images.
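The distance-transformation idea - assigning each pixel the color of its nearest scribble - can be sketched with a multi-source BFS. This uses city-block distance only; the paper additionally weights propagation by intensity changes and gradients:

```python
import numpy as np
from collections import deque

def propagate_scribbles(labels):
    """Distance-transform-style color propagation: every unlabeled pixel
    receives the label of its nearest scribbled pixel, found by 4-connected
    multi-source BFS (i.e. city-block distance).
    labels: (H, W) int array, 0 = unlabeled, >0 = scribble color index."""
    out = labels.copy()
    q = deque(zip(*np.nonzero(out)))  # seed the queue with all scribbled pixels
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < out.shape[0] and 0 <= nx < out.shape[1] and out[ny, nx] == 0:
                out[ny, nx] = out[y, x]
                q.append((ny, nx))
    return out
```

Replacing the unit step cost with a cost that grows across strong intensity edges would move this sketch closer to the gradient-aware propagation the paper describes.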

  4. An Underwater Color Image Quality Evaluation Metric.

    PubMed

    Yang, Miao; Sowmya, Arcot

    2015-12-01

    Quality evaluation of underwater images is a key goal of underwater video image retrieval and intelligent processing. To date, no metric has been proposed for underwater color image quality evaluation (UCIQE). The special absorption and scattering characteristics of the water medium do not allow direct application of natural color image quality metrics, especially across different underwater environments. In this paper, subjective testing for underwater image quality was organized. The statistical distribution of underwater image pixels in the CIELab color space, related to the subjective evaluation, indicates that sharpness and colorfulness correlate well with subjective image quality perception. Based on these findings, a new UCIQE metric, a linear combination of chroma, saturation, and contrast, is proposed to quantify the non-uniform color cast, blurring, and low contrast that characterize underwater engineering and monitoring images. Experiments are conducted to illustrate the performance of the proposed UCIQE metric and its ability to measure underwater image enhancement results. They show that the proposed metric has performance comparable to the leading natural color image quality metrics and the underwater grayscale image quality metrics available in the literature, and can predict with higher accuracy the relative amount of degradation for similar image content in underwater environments. Importantly, UCIQE is a simple and fast solution for real-time underwater video processing. The effectiveness of the presented measure is also demonstrated by subjective evaluation, which shows good correlation between UCIQE and the subjective mean opinion score. PMID:26513783
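A common reading of the metric - a weighted sum of chroma standard deviation, luminance contrast, and mean saturation over a CIELab image - can be sketched as below. The coefficients and the per-pixel saturation definition follow widely circulated reimplementations and should be checked against the paper:

```python
import numpy as np

def uciqe(L, a, b, c=(0.4680, 0.2745, 0.2576)):
    """Sketch of a UCIQE-style score on CIELab channels: weighted sum of
    chroma standard deviation, luminance contrast (99th minus 1st percentile
    of L), and mean per-pixel saturation (chroma / luminance here)."""
    chroma = np.hypot(a, b)
    sigma_c = chroma.std()
    lo, hi = np.percentile(L, [1, 99])
    con_l = hi - lo
    mu_s = np.mean(chroma / (L + 1e-6))  # epsilon guards division by zero
    return c[0] * sigma_c + c[1] * con_l + c[2] * mu_s
```

Higher scores indicate images with richer chroma variation, stronger luminance contrast, and higher saturation, which is why the measure tracks the success of underwater enhancement.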

  5. Color filter array demosaicing: an adaptive progressive interpolation based on the edge type

    NASA Astrophysics Data System (ADS)

    Dong, Qiqi; Liu, Zhaohui

    2015-10-01

    A color filter array (CFA) is one of the key components that allow single-sensor digital cameras to produce color images. The Bayer CFA is the most commonly used pattern. In this array structure, the sampling frequency of green is twice that of red or blue, which is consistent with the sensitivity of human eyes to colors. However, each sensor pixel samples only one of the three primary color values. To render a full-color image, an interpolation process, commonly referred to as CFA demosaicing, is required to estimate the other two missing color values at each pixel. In this paper, we explore an adaptive progressive interpolation algorithm based on edge type. The proposed demosaicing method consists of two successive steps: an interpolation step that estimates missing color values according to the various edges, and a post-processing step based on iterative interpolation.
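The baseline that edge-adaptive methods improve on is plain bilinear interpolation over the mosaic, which can be sketched as a normalized convolution on an RGGB pattern. This is a generic baseline for context, not the proposed edge-type algorithm:

```python
import numpy as np

def conv3(x, k):
    """'Same' 3x3 convolution with zero padding (numpy only; k is symmetric)."""
    p = np.pad(x, 1)
    out = np.zeros_like(x, dtype=float)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + x.shape[0], j:j + x.shape[1]]
    return out

def demosaic_bilinear(raw):
    """Bilinear demosaicing of an RGGB Bayer mosaic: each missing color value
    is the average of the available same-color neighbors, computed as a
    normalized convolution (sum of masked samples / sum of mask weights)."""
    h, w = raw.shape
    masks = {}
    for name, sites in {'r': [(0, 0)], 'b': [(1, 1)],
                        'g': [(0, 1), (1, 0)]}.items():
        m = np.zeros((h, w))
        for y0, x0 in sites:
            m[y0::2, x0::2] = 1.0
        masks[name] = m
    k = np.array([[0.25, 0.5, 0.25],
                  [0.5,  1.0, 0.5 ],
                  [0.25, 0.5, 0.25]])
    out = []
    for name in ('r', 'g', 'b'):
        m = masks[name]
        out.append(conv3(raw * m, k) / np.maximum(conv3(m, k), 1e-12))
    return np.stack(out, axis=-1)
```

The zipper and false-color artifacts this baseline produces along edges are exactly what the paper's edge-type-adaptive interpolation is designed to suppress.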

  6. Variational exemplar-based image colorization.

    PubMed

    Bugeau, Aurélie; Ta, Vinh-Thong; Papadakis, Nicolas

    2014-01-01

    In this paper, we address the problem of recovering a color image from a grayscale one. The input color data comes from a source image considered as a reference image. Reconstructing the missing color of a grayscale pixel is viewed here as the problem of automatically selecting the best color among a set of color candidates while simultaneously ensuring the local spatial coherency of the reconstructed color information. To solve this problem, we propose a variational approach in which a specific energy is designed to model the color selection and the spatial constraint problems simultaneously. The contributions of this paper are twofold. First, we introduce a variational formulation modeling the color selection problem under spatial constraints and propose a minimization scheme that computes a local minimum of the defined nonconvex energy. Second, we combine different patch-based features and distances in order to construct a consistent set of possible color candidates. This set is used as input data, and our energy minimization automatically selects the best color to transfer to each pixel of the grayscale image. Finally, the experiments illustrate the potential of our simple methodology and show that our results are very competitive with respect to state-of-the-art methods. PMID:24235307

  7. Image-based color ink diffusion rendering.

    PubMed

    Wang, Chung-Ming; Wang, Ren-Jie

    2007-01-01

    This paper proposes an image-based painterly rendering algorithm for automatically synthesizing an image with color ink diffusion. We suggest a mathematical model with a physical basis to simulate the phenomenon of colloidal color ink diffusing into absorbent paper. Our algorithm contains three main parts: a feature extraction phase, a Kubelka-Munk (KM) color mixing phase, and a color ink diffusion synthesis phase. In the feature extraction phase, the information in the reference image is simplified by luminance division and color segmentation. In the color mixing phase, KM theory is employed to approximate the result when one pigment is painted upon another pigment layer. Then, in the color ink diffusion synthesis phase, the physically based model that we propose is employed to simulate the result of color ink diffusion in absorbent paper using a texture synthesis technique. Our image-based color ink diffusion rendering (IBCIDR) algorithm eliminates the drawback of conventional Chinese ink simulations, which are limited to the black-ink domain, and our approach demonstrates that, without using any strokes, a color image can be automatically converted to the diffused-ink style with a visually pleasing appearance. PMID:17218741

  8. Mobile image based color correction using deblurring

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Xu, Chang; Boushey, Carol; Zhu, Fengqing; Delp, Edward J.

    2015-03-01

    Dietary intake, the process of determining what someone eats during the course of a day, provides valuable insights for mounting intervention programs for the prevention of many chronic diseases such as obesity and cancer. The goal of the Technology Assisted Dietary Assessment (TADA) System, developed at Purdue University, is to automatically identify and quantify foods and beverages consumed, using food images acquired with a mobile device. Color correction is a critical step in ensuring accurate food identification and volume estimation. We make use of a specifically designed color checkerboard (i.e., a fiducial marker) to calibrate the imaging system so that variations in food appearance under different lighting conditions can be determined. In this paper, we propose an image quality enhancement technique combining image deblurring and color correction. The contribution consists of introducing an automatic camera-shake removal method using a saliency map and improving the polynomial color correction model using the LMS color space.
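A polynomial color correction model of the general kind mentioned can be fitted by least squares from checkerboard patch pairs. The degree-2 feature set and the use of RGB here are illustrative assumptions; the paper improves the model by working in the LMS color space and coupling it with deblurring:

```python
import numpy as np

def _poly_features(o):
    """Degree-2 polynomial features per color triple: linear, squared,
    pairwise products, and a constant term (10 features total)."""
    return np.hstack([o, o ** 2,
                      o[:, [0]] * o[:, [1]],
                      o[:, [1]] * o[:, [2]],
                      o[:, [0]] * o[:, [2]],
                      np.ones((o.shape[0], 1))])

def fit_color_correction(observed, reference):
    """Least-squares polynomial color correction fitted from checker patches:
    observed and reference are (N, 3) color arrays for the same N patches."""
    coef, *_ = np.linalg.lstsq(_poly_features(observed.astype(float)),
                               reference.astype(float), rcond=None)
    return coef  # (10, 3) matrix mapping features -> corrected color

def apply_color_correction(img, coef):
    o = img.reshape(-1, 3).astype(float)
    return (_poly_features(o) @ coef).reshape(img.shape)
```

In the TADA setting, the observed colors come from the fiducial-marker patches in the captured photo and the reference colors from the marker's known values; the fitted transform is then applied to the whole food image.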

  9. Adaptive color rendering of maps for users with color vision deficiencies

    NASA Astrophysics Data System (ADS)

    Kvitle, Anne Kristin; Green, Phil; Nussbaum, Peter

    2015-01-01

    A map is an information design object for which canonical colors for the most common elements are well established. For a CVD observer, it may be difficult to discriminate between such elements - for example, it may be hard to distinguish a red road from a green landscape on the basis of color alone. We address this problem through an adaptive color schema in which the conspicuity of map elements to the individual user is maximized. This paper outlines a method to perform adaptive color rendering of map information for users with color vision deficiencies. The palette selection method is based on a pseudo-color palette generation technique which constrains colors to those which lie on the boundary of a reference object color gamut. A user performs a color vision discrimination task, and based on the results of the test, a palette of colors is selected using the pseudo-color palette generation method. This ensures that the perceived difference between palette elements is high while retaining the canonical colors of well-known elements as far as possible. We show examples of color palettes computed for a selection of normal and CVD observers, together with maps rendered using these palettes.

  10. Improving dermoscopy image classification using color constancy.

    PubMed

    Barata, Catarina; Celebi, M Emre; Marques, Jorge S

    2015-05-01

    Robustness is one of the most important characteristics of computer-aided diagnosis systems designed for dermoscopy images. However, it is difficult to ensure this characteristic if the systems operate with multisource images acquired under different setups. Changes in the illumination and acquisition devices alter the color of images and often reduce the performance of the systems. Thus, it is important to normalize the colors of dermoscopy images before training and testing any system. In this paper, we investigate four color constancy algorithms: Gray World, max-RGB, Shades of Gray, and General Gray World. Our results show that color constancy improves the classification of multisource images, increasing the sensitivity of a bag-of-features system from 71.0% to 79.7% and the specificity from 55.2% to 76% using only 1-D RGB histograms as features. PMID:25073179
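
    Of the four algorithms, Shades of Gray is representative: it estimates the illuminant as the Minkowski p-norm mean of each channel (p = 1 gives Gray World; p → ∞ approaches max-RGB). A minimal sketch for float images in [0, 1]:

```python
import numpy as np

def shades_of_gray(image, p=6):
    """Estimate the illuminant per channel as the Minkowski p-norm mean,
    then rescale channels so the estimate becomes achromatic."""
    img = image.astype(np.float64)
    e = np.mean(img.reshape(-1, 3) ** p, axis=0) ** (1.0 / p)
    e = e / e.mean()                  # relative illuminant color
    return np.clip(img / e, 0.0, 1.0)

# A neutral scene under a warm cast comes back neutral:
cast = np.ones((8, 8, 3)) * np.array([0.6, 0.5, 0.4])
print(shades_of_gray(cast).mean(axis=(0, 1)))
```

    Normalizing dermoscopy images this way before feature extraction is what lets a classifier trained on one acquisition setup generalize to another.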

  11. Statistical pressure snakes based on color images.

    SciTech Connect

    Schaub, Hanspeter

    2004-05-01

    The traditional mono-color statistical pressure snake was modified to function on a color image with target errors defined in HSV color space. Large variations in target lighting and shading are permitted if the target color is only specified in terms of hue. This method works well with custom targets where the target is surrounded by a color of a very different hue. A significant robustness increase is achieved in the computer vision capability to track a specific target in an unstructured, outdoor environment. By specifying the target color to contain hue, saturation and intensity values, it is possible to establish a reasonably robust method to track general image features of a single color. This method is convenient to allow the operator to select arbitrary targets, or sections of a target, which have a common color. Further, a modification to the standard pixel averaging routine is introduced which allows the target to be specified not only in terms of a single color, but also using a list of colors. These algorithms were tested and verified by using a web camera attached to a personal computer.
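
    A hue-only target error of the kind described turns on a circular distance, since hue wraps around (red sits at both ends of the range). A minimal sketch with hue normalized to [0, 1), as Python's `colorsys` produces:

```python
import colorsys

def hue_error(h_pixel, h_target):
    """Circular distance between two hues in [0, 1); lighting and shading
    mostly move saturation and value, so a hue-only error tolerates them."""
    d = abs(h_pixel - h_target) % 1.0
    return min(d, 1.0 - d)

# Hue of a reddish pixel, then its error against an orange target hue.
h, s, v = colorsys.rgb_to_hsv(0.8, 0.2, 0.1)
print(hue_error(h, 0.08))
```

    Extending the target from a single hue to a list of hues, as the abstract describes, amounts to taking the minimum of this error over the list.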

  12. Compression of color-mapped images

    NASA Technical Reports Server (NTRS)

    Hadenfeldt, A. C.; Sayood, Khalid

    1992-01-01

    In a standard image coding scenario, pixel-to-pixel correlation nearly always exists in the data, especially if the image is a natural scene. This correlation is what allows predictive coding schemes (e.g., DPCM) to perform efficient compression. In a color-mapped image, the values stored in the pixel array are no longer directly related to the pixel intensity. Two color indices which are numerically adjacent (close) may point to two very different colors. The correlation still exists, but only via the colormap. This fact can be exploited by sorting the color map to reintroduce the structure. The sorting of colormaps is studied and it is shown how the resulting structure can be used in both lossless and lossy compression of images.
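
    One simple instance of the idea is sorting the palette by luminance so that numerically close indices point to similar colors; the paper studies colormap sorting more generally, and luminance ordering is only one heuristic.

```python
import numpy as np

def sort_colormap(palette, indices):
    """Reorder a palette by Rec.601 luma and remap the index image so
    adjacent index values map to similar colors, restoring the pixel
    correlation that predictive coders (e.g., DPCM) exploit."""
    pal = np.asarray(palette, dtype=np.float64)
    order = np.argsort(pal @ np.array([0.299, 0.587, 0.114]))
    inverse = np.empty_like(order)
    inverse[order] = np.arange(order.size)  # old index -> new index
    return pal[order], inverse[np.asarray(indices)]
```

    The remapping is lossless: `sorted_palette[new_indices]` reproduces exactly the colors of `palette[indices]`.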

  13. Real-Time Adaptive Color Segmentation by Neural Networks

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.

    2004-01-01

    Artificial neural networks that would utilize the cascade error projection (CEP) algorithm have been proposed as a means of autonomous, real-time, adaptive color segmentation of images that change with time. In the original intended application, such a neural network would be used to analyze digitized color video images of terrain on a remote planet as viewed from an uninhabited spacecraft approaching the planet. During descent toward the surface of the planet, information on the segmentation of the images into differently colored areas would be updated adaptively in real time to capture changes in contrast, brightness, and resolution, all in an effort to identify a safe and scientifically productive landing site and provide control feedback to steer the spacecraft toward that site. Potential terrestrial applications include monitoring images of crops to detect insect invasions and monitoring of buildings and other facilities to detect intruders. The CEP algorithm is reliable and is well suited to implementation in very-large-scale integrated (VLSI) circuitry. It was chosen over other neural-network learning algorithms because it is better suited to real-time learning: It provides a self-evolving neural-network structure, requires fewer iterations to converge, and is more tolerant of low resolution (that is, fewer bits) in the quantization of neural-network synaptic weights. Consequently, a CEP neural network learns relatively quickly, and the circuitry needed to implement it is relatively simple. Like other neural networks, a CEP neural network includes an input layer, hidden units, and output units (see figure). As in other neural networks, a CEP network is presented with a succession of input training patterns, giving rise to a set of outputs that are compared with the desired outputs. Also as in other neural networks, the synaptic weights are updated iteratively in an effort to bring the outputs closer to target values. A distinctive feature of the CEP neural

  14. Color image enhancement based on HVS and MSRCR

    NASA Astrophysics Data System (ADS)

    Xue, Rong kun; Li, Yu feng

    2015-10-01

    Inclement weather such as cloud, fog, and rain sharply reduces the light reaching illuminated objects, making captured scenes unclear, of poor visual quality, and low in contrast. To improve the overall quality of such images, especially poorly illuminated ones, this paper proposes a new color image enhancement algorithm based on multi-scale Retinex with color restoration (MSRCR) and the human visual system (HVS). It can effectively solve the color balance problem of digital images by removing the influence of the illumination and obtaining component images that reflect the reflectance of object surfaces; meanwhile, it reduces the impact of non-artificial factors and suppresses ringing artifacts and human interference. Experiments comparing evaluation parameters of the enhanced images, such as variance, average gradient, and sharpness, against traditional enhancement methods such as histogram equalization and adaptive histogram equalization show that the MSRCR-based algorithm is effective in improving image contrast, detail, and color fidelity.
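
    The Retinex core of such a method can be sketched per channel as log(image) minus log(smoothed image), averaged over several surround scales. For a dependency-free sketch, a separable box blur stands in for MSRCR's Gaussian surrounds, and the color restoration factor is omitted.

```python
import numpy as np

def box_blur(x, k):
    """Separable box blur as a stand-in for the Gaussian surround."""
    kern = np.ones(k) / k
    x = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 1, x)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="same"), 0, x)

def multi_scale_retinex(channel, ks=(3, 7, 15), eps=1e-6):
    """Average of log(channel) - log(surround) over several scales; the
    surround approximates the illumination, so the difference
    approximates surface reflectance."""
    ch = channel.astype(np.float64) + eps  # keep the log well-defined
    acc = np.zeros_like(ch)
    for k in ks:
        acc += np.log(ch) - np.log(box_blur(ch, k))
    return acc / len(ks)
```

    On a uniformly lit region the surround equals the signal, so the output is near zero; structure survives while slowly varying illumination is removed.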

  15. Embedding color watermarks in color images based on Schur decomposition

    NASA Astrophysics Data System (ADS)

    Su, Qingtang; Niu, Yugang; Liu, Xianxi; Zhu, Yu

    2012-04-01

    In this paper, a blind dual color image watermarking scheme based on Schur decomposition is introduced. To our knowledge, this is the first use of Schur decomposition to embed a color image watermark in a color host image, in contrast to schemes that use a binary image as the watermark. By analyzing the 4 × 4 unitary matrix U obtained via Schur decomposition, we find a strong correlation between the element in the second row, first column and the element in the third row, first column. This property can be exploited to embed and extract the watermark in a blind manner. Since Schur decomposition is an intermediate step of SVD, the proposed method requires fewer computations. Experimental results show that the proposed scheme is robust against most common attacks, including JPEG lossy compression, JPEG 2000 compression, low-pass filtering, cropping, noise addition, blurring, rotation, scaling, and sharpening. Moreover, the proposed algorithm outperforms the closely related SVD-based algorithm and the spatial-domain algorithm.
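
    For reference, the decomposition itself (here via SciPy, which this sketch assumes is available) factors a real 4 × 4 pixel block A as A = U T Uᵀ with U orthogonal and T quasi-upper-triangular; the scheme works with elements of U such as U[1, 0] and U[2, 0]. The block values below are made up; the actual embedding rule is the paper's.

```python
import numpy as np
from scipy.linalg import schur

# A hypothetical 4x4 block of blue-channel pixel values.
A = np.array([[162., 160., 158., 157.],
              [161., 159., 158., 156.],
              [159., 158., 156., 155.],
              [158., 156., 155., 153.]])

T, U = schur(A)                     # A = U @ T @ U.T, U orthogonal
print(U[1, 0], U[2, 0])             # the correlated pair the scheme uses
print(np.allclose(U @ T @ U.T, A))  # the factorization is exact
```

    For smooth image blocks the first column of U is close to a uniform direction, which is why neighboring entries of that column are strongly correlated.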

  16. A dual-channel fusion system of visual and infrared images based on color transfer

    NASA Astrophysics Data System (ADS)

    Pei, Chuang; Jiang, Xiao-yu; Zhang, Peng-wei; Liang, Hao-cong

    2013-09-01

    The increasing availability and deployment of imaging sensors operating in multiple spectral bands has led to a large research effort in image fusion, resulting in a plethora of pixel-level image fusion algorithms. However, most of these algorithms produce gray or false-color fusion results that are not well adapted to human vision. Transferring color from a daytime reference image to obtain a natural-color fusion result is an effective way to solve this problem, but the computational cost of color transfer is high and cannot meet real-time image processing requirements. We developed a dual-channel infrared and visual image fusion system based on the TMS320DM642 digital signal processing chip. The system is divided into an image acquisition and registration unit, an image fusion processing unit, a system control unit, and an image fusion output unit. Registration of the dual-channel images is realized by combining hardware and software methods. A false-color fusion algorithm in RGB color space is used to obtain an R-G fused image, and the system then chooses a reference image from which to transfer color to the fusion result. A color lookup table based on statistical properties of the images is proposed to address the computational complexity of color transfer: the mapping between the standard lookup table and the improved color lookup table is simple and needs to be computed only once for a fixed scene. Real-time fusion and natural colorization of infrared and visual images are realized by this system. Experimental results show that the color-transferred images have a natural color appearance to human eyes and highlight targets effectively while preserving clear background details. Observers using this system are able to interpret the images better and faster, improving situational awareness and reducing target detection time.

  17. Color standardization in whole slide imaging using a color calibration slide

    PubMed Central

    Bautista, Pinky A.; Hashimoto, Noriaki; Yagi, Yukako

    2014-01-01

    Background: Color consistency in histology images is still an issue in digital pathology. Different imaging systems reproduce the colors of a histological slide differently. Materials and Methods: Color correction was implemented using the color information of the nine color patches of a color calibration slide. The inherent spectral colors of these patches along with their scanned colors were used to derive a color correction matrix whose coefficients were used to convert the pixels’ colors to their target colors. Results: There was a significant reduction in the CIELAB color difference between images of the same H & E histological slide produced by two different whole slide scanners, by 3.42 units, P < 0.001 at the 95% confidence level. Conclusion: Color variations in histological images brought about by whole slide scanning can be effectively normalized with the use of the color calibration slide. PMID:24672739
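
    The correction-matrix step can be sketched as a least-squares 3 × 3 mapping from the nine scanned patch colors to their target colors, applied per pixel. The patch values below are synthetic; the paper derives targets from the patches' inherent spectral colors.

```python
import numpy as np

def color_correction_matrix(scanned, target):
    """Least-squares 3x3 matrix taking scanned patch colors to targets;
    applied per pixel to normalize colors between scanners."""
    M, *_ = np.linalg.lstsq(np.asarray(scanned, float),
                            np.asarray(target, float), rcond=None)
    return M

def correct(image, M):
    """Apply the correction matrix to every pixel of an H x W x 3 image."""
    return (image.reshape(-1, 3) @ M).reshape(image.shape)
```

    With nine patches and nine unknowns per output channel well covered, the fit is overdetermined and robust to moderate patch-measurement noise.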

  18. How Phoenix Creates Color Images (Animation)

    NASA Technical Reports Server (NTRS)

    2008-01-01

    [figure removed for brevity, see original site] Click on image for animation

    This simple animation shows how a color image is made from images taken by Phoenix.

    The Surface Stereo Imager captures the same scene with three different filters. The images are sent to Earth in black and white and the color is added by mission scientists.

    By contrast, consumer digital cameras and cell phones have filters built in and do all of the color processing within the camera itself.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  19. Beyond Color Difference: Residual Interpolation for Color Image Demosaicking.

    PubMed

    Kiku, Daisuke; Monno, Yusuke; Tanaka, Masayuki; Okutomi, Masatoshi

    2016-03-01

    In this paper, we propose residual interpolation (RI) as an alternative to color difference interpolation, which is a widely accepted technique for color image demosaicking. Our proposed RI performs the interpolation in a residual domain, where the residuals are differences between observed and tentatively estimated pixel values. Our hypothesis for the RI is that if image interpolation is performed in a domain with a smaller Laplacian energy, its accuracy is improved. Based on the hypothesis, we estimate the tentative pixel values to minimize the Laplacian energy of the residuals. We incorporate the RI into the gradient-based threshold free algorithm, which is one of the state-of-the-art Bayer demosaicking algorithms. Experimental results demonstrate that our proposed demosaicking algorithm using the RI surpasses the state-of-the-art algorithms for the Kodak, the IMAX, and the beyond Kodak data sets. PMID:26780794
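
    The core idea can be shown in a 1-D sketch: interpolate the residual against a tentative estimate rather than the samples themselves; when the tentative estimate captures the structure (an edge, say), the residual is smooth and even linear interpolation crosses the edge cleanly. This is illustrative only; the paper's tentative estimates come from a guide channel and are chosen to minimize the residual's Laplacian energy.

```python
import numpy as np

def residual_interpolate(known_pos, known_vals, tentative):
    """Interpolate residuals (samples minus tentative estimate) onto the
    full grid, then add the tentative estimate back."""
    residual = known_vals - tentative[known_pos]
    grid = np.arange(tentative.size)
    return tentative + np.interp(grid, known_pos, residual)

# A step edge sampled at even positions:
signal = (np.arange(10) >= 5).astype(float)
pos = np.arange(0, 10, 2)
direct = np.interp(np.arange(10), pos, signal[pos])  # blurs the edge
ri = residual_interpolate(pos, signal[pos], signal)  # perfect guide
print(direct[5], ri[5])
```

    With a perfect guide the residual is identically zero, so the edge is reproduced exactly, while direct interpolation averages across it.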

  20. Color image attribute and quality measurements

    NASA Astrophysics Data System (ADS)

    Gao, Chen; Panetta, Karen; Agaian, Sos

    2014-05-01

    Color image quality measures have been used for many computer vision tasks. In practical applications, no-reference (NR) measures are desirable because reference images are not always accessible. However, only limited success has been achieved. Most existing NR quality assessments require that the type of image distortion be known a priori. In this paper, three NR color image attributes - colorfulness, sharpness, and contrast - are quantified by new metrics. Using these metrics, a new Color Quality Measure (CQM), based on a linear combination of the three attributes, is presented. We evaluated the performance of several state-of-the-art no-reference measures for comparison purposes. Experimental results demonstrate that the CQM correlates well with evaluations obtained from human observers and operates in real time. The results also show that the presented CQM outperforms previous works in ranking image quality among images containing the same or different content. Finally, the performance of the CQM is independent of distortion type, as demonstrated in the experimental results.
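
    As one illustration of a no-reference color attribute (the CQM's own metric definitions are new to the paper and not reproduced here), the widely used Hasler-Süsstrunk colorfulness statistic combines opponent-component statistics:

```python
import numpy as np

def colorfulness(image):
    """Hasler-Suesstrunk style colorfulness from opponent components
    rg = R - G and yb = (R + G)/2 - B; larger means more colorful."""
    r = image[..., 0].astype(np.float64)
    g = image[..., 1].astype(np.float64)
    b = image[..., 2].astype(np.float64)
    rg = r - g
    yb = 0.5 * (r + g) - b
    sigma = np.hypot(rg.std(), yb.std())
    mu = np.hypot(rg.mean(), yb.mean())
    return sigma + 0.3 * mu

gray = np.full((8, 8, 3), 0.5)
vivid = np.zeros((8, 8, 3))
vivid[:, :4, 0] = 1.0  # left half pure red
vivid[:, 4:, 1] = 1.0  # right half pure green
print(colorfulness(gray), colorfulness(vivid))
```

    A purely achromatic image scores zero; saturated, varied colors score high, which is the behavior any colorfulness attribute in a combined measure needs.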

  1. Outstanding-objects-oriented color image segmentation using fuzzy logic

    NASA Astrophysics Data System (ADS)

    Hayasaka, Rina; Zhao, Jiying; Matsushita, Yutaka

    1997-10-01

    This paper presents a novel fuzzy-logic-based color image segmentation scheme focusing on objects that stand out to human eyes. The scheme first segments the image into rough fuzzy regions, chooses visually significant regions, and conducts fine segmentation on the chosen regions. It can not only reduce the computational load, but also make contour detection easy, because the approximate object outlines have already been determined. The scheme reflects human visual perception, and it can be used efficiently in automatic extraction of image retrieval keys, robot vision, and region-adaptive image compression.

  2. Habitual wearers of colored lenses adapt more rapidly to the color changes the lenses produce.

    PubMed

    Engel, Stephen A; Wilkins, Arnold J; Mand, Shivraj; Helwig, Nathaniel E; Allen, Peter M

    2016-08-01

    The visual system continuously adapts to the environment, allowing it to perform optimally in a changing visual world. One large change occurs every time one takes off or puts on a pair of spectacles. It would be advantageous for the visual system to learn to adapt particularly rapidly to such large, commonly occurring events, but whether it can do so remains unknown. Here, we tested whether people who routinely wear spectacles with colored lenses increase how rapidly they adapt to the color shifts their lenses produce. Adaptation to a global color shift causes the appearance of a test color to change. We measured changes in the color that appeared "unique yellow", that is, neither reddish nor greenish, as subjects donned and removed their spectacles. Nine habitual wearers and nine age-matched control subjects judged the color of a small monochromatic test light presented with a large, uniform, whitish surround every 5 s. Red lenses shifted unique yellow to more reddish colors (longer wavelengths), and greenish lenses shifted it to more greenish colors (shorter wavelengths), consistent with adaptation "normalizing" the appearance of the world. In controls, the time course of this adaptation contained a large, rapid component and a smaller gradual one, in agreement with prior results. Critically, in habitual wearers the rapid component was significantly larger, and the gradual component significantly smaller, than in controls. The total amount of adaptation was also larger in habitual wearers than in controls. These data strongly suggest that the visual system adapts with increasing rapidity and strength as environments are encountered repeatedly over time. An additional unexpected finding was that baseline unique yellow shifted in a direction opposite to that produced by the habitually worn lenses. Overall, our results represent one of the first formal reports that adjusting to putting on or taking off spectacles becomes easier over time, and may have important

  3. Adaptive wiener image restoration kernel

    DOEpatents

    Yuan, Ding

    2007-06-05

    A method and device for restoration of electro-optical image data using an adaptive Wiener filter begins with constructing the imaging system's optical transfer function and the Fourier transforms of the noise and the image. A spatial representation of the imaged object is restored by spatial convolution of the image with a Wiener restoration kernel.
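
    A minimal frequency-domain sketch of the idea; the patent constructs the OTF and spectra from the imaging system model and adapts the kernel, whereas here a known blur and a scalar noise-to-signal ratio stand in.

```python
import numpy as np

def wiener_restore(blurred, otf, nsr):
    """Wiener kernel G = conj(H) / (|H|^2 + NSR), applied in the
    frequency domain; with NSR -> 0 it approaches inverse filtering."""
    G = np.conj(otf) / (np.abs(otf) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))

rng = np.random.default_rng(0)
img = rng.random((16, 16))
psf = np.zeros((16, 16))
psf[:3, :3] = 1.0 / 9.0                 # 3x3 box-blur point spread function
otf = np.fft.fft2(psf)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * otf))
restored = wiener_restore(blurred, otf, 1e-9)
print(np.max(np.abs(restored - img)))
```

    The NSR term is what keeps the kernel stable where the OTF is small; a realistic (larger) NSR trades residual blur for noise suppression.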

  4. Do common mechanisms of adaptation mediate color discrimination and appearance? Contrast adaptation

    PubMed Central

    Hillis, James M.; Brainard, David H.

    2009-01-01

    Are effects of background contrast on color appearance and sensitivity controlled by the same mechanism of adaptation? We examined the effects of background color contrast on color appearance and on color-difference sensitivity under well-matched conditions. We linked the data using Fechner’s hypothesis that the rate of apparent stimulus change is proportional to sensitivity and examined a family of parametric models of adaptation. Our results show that both appearance and discrimination are consistent with the same mechanism of adaptation. PMID:17621318

  5. Color image fusion for concealed weapon detection

    NASA Astrophysics Data System (ADS)

    Toet, Alexander

    2003-09-01

    Recent advances in passive and active imaging sensor technology offer the potential to detect weapons that are concealed underneath a person's clothing or carried along in bags. Although the concealed weapons can sometimes easily be detected, it can be difficult to perceive their context, due to the non-literal nature of these images. Especially for dynamic crowd surveillance purposes it may be impossible to rapidly assess with certainty which individual in the crowd is the one carrying the observed weapon. Sensor fusion is an enabling technology that may be used to solve this problem. Through fusion, the signal of the sensor that depicts the weapon can be displayed in the context provided by a sensor of a different modality. We propose an image fusion scheme in which non-literal imagery can be fused with standard color images such that the result clearly displays the observed weapons in the context of the original color image. The procedure is such that the relevant contrast details from the non-literal image are transferred to the color image without altering the original color distribution of this image. The result is a natural-looking color image that fluently combines all details from both input sources. When an observer who performs a dynamic crowd surveillance task detects a weapon in the scene, he will also be able to quickly determine which person in the crowd is actually carrying the observed weapon (e.g., "the man with the red T-shirt and blue jeans"). The method is illustrated by the fusion of thermal 8-12 μm imagery with standard RGB color images.

  6. Color structured light imaging of skin

    NASA Astrophysics Data System (ADS)

    Yang, Bin; Lesicko, John; Moy, Austin; Reichenberg, Jason; Sacks, Michael; Tunnell, James W.

    2016-05-01

    We illustrate wide-field imaging of skin using a structured light (SL) approach that highlights the contrast from superficial tissue scattering. Setting the spatial frequency of the SL in a regime that limits the penetration depth effectively gates the image for photons that originate from the skin surface. Further, rendering the SL images in a color format provides an intuitive format for viewing skin pathologies. We demonstrate this approach in skin pathologies using a custom-built handheld SL imaging system.

  7. The influence of contrast adaptation on color appearance.

    PubMed

    Webster, M A; Mollon, J D

    1994-08-01

    Most models of color vision assume that signals from the three classes of cone receptor are recoded into only three independent post-receptoral channels: one that encodes luminance and two that encode color. Stimuli that are equated for their effects on two of the channels should be discriminable only to the remaining channel, and are thus assumed to isolate the responses of single channels. We used an asymmetric matching task to examine whether such models can account for changes in color appearance following adaptation to contrast--to temporal variations in luminance and chromaticity around a fixed mean luminance and chromaticity. The experiments extend to suprathreshold color appearance the threshold adaptation paradigm of Krauskopf, Williams and Heeley [(1982) Vision Research, 32, 1123-1131]. Adaptation changes the perceived color of chromatic test stimuli both by reducing their saturation (contrast) and by changing their hue (direction within the equiluminant plane). The saturation losses are largest for test stimuli that lie along the chromatic axis defining the adapting modulation, while the hue changes are rotations away from the adapting direction and toward an orthogonal direction within the S and L-M plane. Similar selective changes in both perceived color and perceived lightness occur following adaptation to stimuli that covary in luminance and chromaticity. The selectivity of the aftereffects for multiple directions within color-luminance space is inconsistent with sensitivity changes in only three independent channels. These aftereffects suggest instead that color appearance depends on channels that can be selectively tuned to any color-luminance direction, and that there are no directions that invariably isolate responses in only a single channel. We use the perceived color changes to examine the spectral sensitivities of the chromatic channels and to estimate the distribution of channels. We also examine how adaptation alters the contrast

  8. High capacity image barcodes using color separability

    NASA Astrophysics Data System (ADS)

    Bulan, Orhan; Oztan, Basak; Sharma, Gaurav

    2011-01-01

    Two-dimensional barcodes are widely used for encoding data in printed documents. In a number of applications, the visual appearance of the barcode constitutes a fundamental restriction. In this paper, we propose high capacity color image barcodes that encode data in an image while preserving its basic appearance. Our method aims at high embedding rates and sacrifices image fidelity in favor of embedding robustness in regions where these two goals conflict with each other. The method operates by utilizing cyan, magenta, and yellow printing channels with elongated dots whose orientations are modulated in order to encode the data. At the receiver, by using the complementary sensor channels to estimate the colorant channels, data is extracted in each individual colorant channel. In order to recover from errors introduced in the channel, error correction coding is employed. Our simulation and experimental results indicate that the proposed method can achieve high encoding rates while preserving the appearance of the base image.

  9. Paper roughness and the color gamut of color laser images

    NASA Astrophysics Data System (ADS)

    Arney, J. S.; Spampata, Michelle; Farnand, Susan; Oswald, Tom; Chauvin, Jim

    2007-01-01

    Common experience indicates the quality of a printed image depends on the choice of the paper used in the printing process. In the current report, we have used a recently developed device called a micro-goniophotometer to examine toner on a variety of substrates fused to varying degrees. The results indicate that the relationship between the printed color gamut and the topography of the substrate paper is a simple one for a color electrophotographic process. If the toner is fused completely to an equilibrium state with the substrate paper, then the toner conforms to the overall topographic features of the substrate. For rougher papers, the steeper topographic features are smoothed out by the toner. The maximum achievable color gamut is limited by the topographic smoothness of the resulting fused surface. Of course, achieving a fully fused surface at a competitive printing rate with a minimum of power consumption is not always feasible. However, the only significant factor found to limit the maximum state of fusing and the ultimate achievable color gamut is the smoothness of the paper.

  10. The synthesis and analysis of color images

    NASA Technical Reports Server (NTRS)

    Wandell, Brian A.

    1987-01-01

    A method is described for performing the synthesis and analysis of digital color images. The method is based on two principles. First, image data are represented with respect to the separate physical factors, surface reflectance and the spectral power distribution of the ambient light, that give rise to the perceived color of an object. Second, the encoding is made efficient by using a basis expansion for the surface spectral reflectance and spectral power distribution of the ambient light that takes advantage of the high degree of correlation across the visible wavelengths normally found in such functions. Within this framework, the same basic methods can be used to synthesize image data for color display monitors and printed materials, and to analyze image data into estimates of the spectral power distribution and surface spectral reflectances. The method can be applied to a variety of tasks. Examples of applications include the color balancing of color images, and the identification of material surface spectral reflectance when the lighting cannot be completely controlled.
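
    The basis-expansion step can be sketched with a small linear model. The polynomial basis below is purely illustrative; in practice bases are derived from measured reflectance datasets, but the mechanics of recovering a spectrum from a few weights are the same.

```python
import numpy as np

wl = np.linspace(400, 700, 31)  # wavelengths, nm
t = (wl - 550.0) / 150.0        # normalized to [-1, 1]
basis = np.stack([np.ones_like(t), t, t * t], axis=1)

def fit_basis(reflectance):
    """Least-squares weights of the basis for a reflectance spectrum;
    smooth natural spectra are well captured by a few weights."""
    w, *_ = np.linalg.lstsq(basis, reflectance, rcond=None)
    return w

# A smooth spectrum in the basis's span is recovered from 3 numbers.
refl = 0.4 + 0.2 * t - 0.1 * t * t
w = fit_basis(refl)
print(w)
```

    Representing 31 samples with 3 weights is the efficiency the abstract refers to: the correlation across wavelengths means little is lost in the projection.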

  11. The synthesis and analysis of color images

    NASA Technical Reports Server (NTRS)

    Wandell, B. A.

    1985-01-01

    A method is described for performing the synthesis and analysis of digital color images. The method is based on two principles. First, image data are represented with respect to the separate physical factors, surface reflectance and the spectral power distribution of the ambient light, that give rise to the perceived color of an object. Second, the encoding is made efficient by using a basis expansion for the surface spectral reflectance and spectral power distribution of the ambient light that takes advantage of the high degree of correlation across the visible wavelengths normally found in such functions. Within this framework, the same basic methods can be used to synthesize image data for color display monitors and printed materials, and to analyze image data into estimates of the spectral power distribution and surface spectral reflectances. The method can be applied to a variety of tasks. Examples of applications include the color balancing of color images, and the identification of material surface spectral reflectance when the lighting cannot be completely controlled.

  12. Color constancy and the natural image

    NASA Technical Reports Server (NTRS)

    Wandall, Brian A.

    1989-01-01

    Color vision is useful only if it is possible to identify an object's color across many viewing contexts. Here, consideration is given to recent results on how to estimate the surface reflectance function of an object from image data, despite (1) uncertainty in the spectral power distribution of the ambient lighting, and (2) uncertainty about what other surfaces will be in the field of view.

  13. Color Image Magnification: Geometrical Pattern Classification Approach

    NASA Astrophysics Data System (ADS)

    Yong, Tien Fui; Choo, Wou Onn; Meian Kok, Hui

    In an era where technology keeps advancing, it is vital that high-resolution images are available to produce high-quality displayed images and fine-quality prints. The problem is that it is quite impossible to produce high-resolution images with acceptable clarity even with the latest digital cameras. Therefore, there is a need to enlarge original images using an effective and efficient algorithm. The main contribution of this paper is to produce an enlarged color image with high visual quality, up to four times the size of an original 100×100-pixel image. In the classification phase, the basic idea is to separate the interpolation region into geometrical shapes. Then, in the intensity determination phase, the interpolator assigns a proper color intensity value to each undefined pixel inside the interpolation region. This paper discusses the problem statement, literature review, research methodology, research outcome, initial results, and finally the conclusion.

  14. Color gradient background-oriented schlieren imaging

    NASA Astrophysics Data System (ADS)

    Mier, Frank Austin; Hargather, Michael J.

    2016-06-01

    Background-oriented schlieren is a method of visualizing refractive disturbances by comparing digital images with and without a refractive disturbance distorting a background pattern. Traditionally, backgrounds consist of random distributions of high-contrast color transitions or speckle patterns. To image a refractive disturbance, a digital image correlation algorithm is used to identify the location and magnitude of apparent pixel shifts in the background pattern between the two images. Here, a novel method of using color gradient backgrounds is explored as an alternative that eliminates the need to perform a complex image correlation between the digital images. A simple image subtraction can be used instead to identify the location, magnitude, and direction of the image distortions. Gradient backgrounds are demonstrated to provide quantitative data only limited by the camera's pixel resolution, whereas speckle backgrounds limit resolution to the size of the random pattern features and image correlation window size. Quantitative measurement of density in a thermal boundary layer is presented. Two-dimensional gradient backgrounds using multiple colors are demonstrated to allow measurement of two-dimensional refractions. A computer screen is used as the background, which allows for rapid modification of the gradient to tune sensitivity for a particular application.
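
    The subtraction-based measurement reduces to a per-pixel relation: with a linear intensity gradient of known slope, an apparent shift d changes the recorded intensity by slope × d, so the shift map falls out of a difference image. A 1-D sketch with a made-up Gaussian-shaped disturbance:

```python
import numpy as np

def shift_from_difference(reference, distorted, slope):
    """Apparent pixel-shift map from a gradient-background BOS pair:
    intensity change divided by the background's intensity slope."""
    return (distorted - reference) / slope

x = np.arange(32, dtype=np.float64)
slope = 0.01
true_shift = 0.5 * np.exp(-((x - 16.0) / 4.0) ** 2)
reference = slope * x                 # linear gradient background
distorted = slope * (x + true_shift)  # background seen through the flow
recovered = shift_from_difference(reference, distorted, slope)
print(np.max(np.abs(recovered - true_shift)))
```

    Two-color (2-D) gradients extend this to both shift components, one per color channel, which is the multi-color arrangement the abstract describes.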

  15. Matching image color from different cameras

    NASA Astrophysics Data System (ADS)

    Fairchild, Mark D.; Wyble, David R.; Johnson, Garrett M.

    2008-01-01

    Can images from professional digital SLR cameras be made equivalent in color using simple colorimetric characterization? Two cameras were characterized, these characterizations were implemented on a variety of images, and the results were evaluated both colorimetrically and psychophysically. A Nikon D2x and a Canon 5D were used. The colorimetric analyses indicated that accurate reproductions were obtained. The median CIELAB color differences between the measured ColorChecker SG and the reproduced image were 4.0 and 6.1 for the Canon (chart and spectral respectively) and 5.9 and 6.9 for the Nikon. The median differences between cameras were 2.8 and 3.4 for the chart and spectral characterizations, near the expected threshold for reliable image difference perception. Eight scenes were evaluated psychophysically in three forced-choice experiments in which a reference image from one of the cameras was shown to observers in comparison with a pair of images, one from each camera. The three experiments were (1) a comparison of the two cameras with the chart-based characterizations, (2) a comparison with the spectral characterizations, and (3) a comparison of chart vs. spectral characterization within and across cameras. The results for the three experiments are 64%, 64%, and 55% correct respectively. Careful and simple colorimetric characterization of digital SLR cameras can result in visually equivalent color reproduction.
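    The CIELAB color differences quoted above are Euclidean distances in L*a*b* space (the classic Delta E*ab formula). A minimal sketch, with made-up patch values standing in for the ColorChecker SG measurements:

```python
import numpy as np

# Delta E*ab: Euclidean distance between two colors in CIELAB space.
# Median values near 2-3, as reported for the inter-camera comparison,
# sit near the threshold of reliable perceived image difference.
def delta_e_ab(lab1, lab2):
    lab1, lab2 = np.asarray(lab1, float), np.asarray(lab2, float)
    return np.sqrt(np.sum((lab1 - lab2) ** 2, axis=-1))

# Illustrative patch values (not actual chart measurements).
measured   = np.array([[52.0, 10.0, -8.0], [70.0, -5.0, 20.0]])
reproduced = np.array([[50.0, 12.0, -6.0], [71.0, -4.0, 18.0]])
print(round(float(np.median(delta_e_ab(measured, reproduced))), 2))  # -> 2.96
```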

  16. Color night vision method based on the correlation between natural color and dual band night image

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Bai, Lian-fa; Zhang, Chuang; Chen, Qian; Gu, Guo-hua

    2009-07-01

    Color night vision technology can effectively improve detection and identification probability. Current color night vision methods based on gray-scale modulation fusion, spectrum-field fusion, and special-component fusion, as well as the well-known NRL and TNO methods, introduce serious color distortion, and observers become visually fatigued after long periods of observation. Alexander Toet of TNO Human Factors presented a method that gives fused multiband night imagery a natural daytime color appearance, but it requires a true-color image of the scene being observed. In this paper we put forward a color night vision method based on the correlation between a natural color image and a dual-band night image. Color display is attained through dual-band low-light-level (LLL) images and their fusion image. An actual color image of a similar scene is needed to obtain the color night vision image: the actual color image is decomposed into three gray-scale images in the RGB color model, and the short-wave LLL image, long-wave LLL image, and their fusion image are compared to these using a gray-scale spatial correlation method; the color-space mapping scheme is confirmed by this correlation. The gray-scale LLL images and their fusion image are adjusted through variation of the HSI color-space coefficients, and the coefficient matrix is built. The color display coefficient matrix of the LLL night vision system is obtained by multiplying this coefficient matrix by the RGB color-space mapping matrix. Emulation experiments on general-scene dual-band color night vision indicate that the color display effect is satisfactory. The method was tested on a dual-channel, dual-spectrum LLL color night vision experimental apparatus based on the Texas Instruments digital video processing device DM642.

  17. Edge and color preserving single image superresolution

    NASA Astrophysics Data System (ADS)

    Tang, Songze; Xiao, Liang; Liu, Pengfei; Zhang, Jun; Huang, Lili

    2014-05-01

    Most existing superresolution (SR) techniques focus primarily on improving the quality in the luminance component of SR images, while paying less attention to the chrominance component. We present an edge and color preserving image SR approach. First, for the luminance channel, a heavy-tailed gradient distribution of natural images is investigated as an image prior. Then, an efficient optimization algorithm is developed to recover the latent high-resolution (HR) luminance component. Second, for the chrominance channels, we propose a two-stage framework for luminance-guided chrominance SR. In the first stage, since most of the shape and structural information is contained in the luminance channel, a simple Markov random field formulation is introduced to search the optimal direction for color local interpolation guided by HR luminance components. To further improve the quality of the chrominance channels, in the second stage, a nonlocal auto regression model is utilized to refine the initial HR chrominance. Finally, we combine the SR reconstructed luminance components with the generated HR chrominance maps to get the final SR color image. Systematic experimental results demonstrated that our method outperforms some state-of-the-art methods in terms of the peak signal-to-noise ratio, structural similarity, feature similarity, and the mean color errors.

  18. Textured surface identification in noisy color images

    NASA Astrophysics Data System (ADS)

    Celenk, Mehmet

    1996-06-01

    Automatic identification of textured surfaces is essential in many imaging applications such as image data compression and scene recognition. In these applications, a vision system is required to detect and identify irregular textures in noisy color images. This work proposes a method for texture field characterization based on local textural features. We first divide a given color image into n × n local windows and extract textural features in each window independently. In this step, the size of a window should be small enough that each window contains no more than two texture fields. Separation of texture areas in a local window is first carried out by the Otsu or Kullback threshold selection technique on the three color components separately. The 3-D class separation is then performed using the Fisher discriminant. The results of local texture classification are combined by the K-means clustering algorithm. The texture fields detected in a window are characterized by their mean vectors and an element-to-set membership relation. We have experimented with the local feature extraction part of the method using a color image of irregular textures. Results show that the method is effective for capturing local textural features.
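    The per-component Otsu step can be sketched for one color channel. This is a generic textbook implementation of Otsu's between-class-variance criterion, not the authors' code:

```python
import numpy as np

# Otsu's method: pick the threshold that maximizes the between-class
# variance of the two resulting intensity populations.
def otsu_threshold(channel):
    hist, _ = np.histogram(channel, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()      # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        between = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if between > best_var:
            best_var, best_t = between, t
    return best_t

# Two well-separated intensity populations: threshold lands between them.
channel = np.concatenate([np.full(500, 50), np.full(500, 200)])
print(otsu_threshold(channel))
```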

  19. Adaptive compression of image data

    NASA Astrophysics Data System (ADS)

    Hludov, Sergei; Schroeter, Claus; Meinel, Christoph

    1998-09-01

    In this paper we introduce a method of analyzing images, a criterion for differentiating between images, a compression method for medical images in digital form based on classification of the image bit planes, and finally an algorithm for adaptive image compression. The analysis of image content is based on an evaluation of the relative number and absolute values of the wavelet coefficients. A comparison between the original image and the decoded image is made via a difference criterion calculated from the wavelet coefficients of the original and decoded images at the first and second iteration steps of the wavelet transformation. The adaptive image compression algorithm is based on a classification of digital images into three classes, followed by compression of the image with a suitable compression algorithm. Furthermore, we show that applying these classification rules to DICOM images is a very effective way to perform adaptive compression. The image classification and compression algorithms have been implemented in Java.

  20. Novel calibration and color adaptation schemes in three-fringe RGB photoelasticity

    NASA Astrophysics Data System (ADS)

    Swain, Digendranath; Thomas, Binu P.; Philip, Jeby; Pillai, S. Annamala

    2015-03-01

    Isochromatic demodulation in digital photoelasticity using RGB calibration is a two step process. The first step involves the construction of a look-up table (LUT) from a calibration experiment. In the second step, isochromatic data is demodulated by matching the colors of an analysis image with the colors existing in the LUT. As actual test and calibration experiment tint conditions vary due to different sources, color adaptation techniques for modifying an existing primary LUT are employed. However, the primary LUT is still generated from bending experiments. In this paper, RGB demodulation based on a theoretically constructed LUT has been attempted to exploit the advantages of color adaptation schemes. Thereby, the experimental mode of LUT generation and some uncertainties therein can be minimized. Additionally, a new color adaptation algorithm is proposed using quadratic Lagrangian interpolation polynomials, which is numerically better than the two-point linear interpolations available in the literature. The new calibration and color adaptation schemes are validated and applied to demodulate fringe orders in live models and stress frozen slices.

  1. Color gradient background oriented schlieren imaging

    NASA Astrophysics Data System (ADS)

    Mier, Frank Austin; Hargather, Michael

    2015-11-01

    Background oriented schlieren (BOS) imaging is a method of visualizing refractive disturbances through the comparison of digital images. By comparing images with and without a refractive disturbance, visualizations can be achieved via a range of image processing methods. Traditionally, backgrounds consist of random distributions of high-contrast speckle patterns. To image a refractive disturbance, a digital image correlation algorithm is used to identify the location and magnitude of apparent pixel shifts in the background pattern. Here, a novel method of using color gradient backgrounds is explored as an alternative. The gradient background eliminates the need to perform an image correlation between the two digital images, as simple image subtraction can be used to identify the location, magnitude, and direction of the image distortions. This allows for quicker processing. Two-dimensional gradient backgrounds using multiple colors are shown. The gradient backgrounds are demonstrated to provide quantitative data limited only by the camera's pixel resolution, whereas speckle backgrounds limit resolution to the size of the random pattern features and image correlation window size. Additional results include the use of a computer screen as a background.

  2. Calibration Image of Earth by Mars Color Imager

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Three days after the Mars Reconnaissance Orbiter's Aug. 12, 2005, launch, the NASA spacecraft was pointed toward Earth and the Mars Color Imager camera was powered up to acquire a suite of color and ultraviolet images of Earth and the Moon. When it gets to Mars, the Mars Color Imager's main objective will be to obtain daily global color and ultraviolet images of the planet to observe martian meteorology by documenting the occurrence of dust storms, clouds, and ozone. This camera will also observe how the martian surface changes over time, including changes in frost patterns and surface brightness caused by dust storms and dust devils.

    The purpose of acquiring an image of Earth and the Moon just three days after launch was to help the Mars Color Imager science team obtain a measure, in space, of the instrument's sensitivity, as well as to check that no contamination occurred on the camera during launch. Prior to launch, the team determined that, three days out from Earth, the planet would only be about 4.77 pixels across, and the Moon would be less than one pixel in size, as seen from the Mars Color Imager's wide-angle perspective. If the team waited any longer than three days to test the camera's performance in space, Earth would be too small to obtain meaningful results.

    The images were acquired by turning Mars Reconnaissance Orbiter toward Earth, then slewing the spacecraft so that the Earth and Moon would pass before each of the five color and two ultraviolet filters of the Mars Color Imager. The distance to Earth was about 1,170,000 kilometers (about 727,000 miles).

    This image shows a color composite view of Mars Color Imager's image of Earth. As expected, it covers only five pixels. This color view has been enlarged five times. The Sun was illuminating our planet from the left, thus only one quarter of Earth is seen from this perspective. North America was in daylight and facing toward the camera at the time the picture was taken; the data

  3. Color Histogram Diffusion for Image Enhancement

    NASA Technical Reports Server (NTRS)

    Kim, Taemin

    2011-01-01

    Various color histogram equalization (CHE) methods have been proposed to extend grayscale histogram equalization (GHE) to color images. In this paper, a new method called histogram diffusion, which extends GHE to arbitrary dimensions, is proposed. Ranges in a histogram are specified as overlapping bars of uniform height and variable width proportional to their frequencies; this diagram is called the vistogram. As an alternative approach to GHE, the squared error of the vistogram from the uniform distribution is minimized. Each bar in the vistogram is approximated by a Gaussian function, and the Gaussian particles in the vistogram diffuse as a nonlinear autonomous system of ordinary differential equations. CHE results on color images show that the approach is effective.
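    For context, the grayscale histogram equalization (GHE) that the method generalizes can be sketched as a CDF remapping (a standard implementation, not the histogram-diffusion method itself):

```python
import numpy as np

# GHE: map each intensity through the normalized cumulative histogram,
# so the output intensities are spread toward a uniform distribution.
def equalize(gray):
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    cdf = hist.cumsum() / hist.sum()
    return np.round(255 * cdf[gray]).astype(np.uint8)

gray = np.array([[10, 10, 200], [200, 200, 200]], dtype=np.uint8)
print(equalize(gray))
```

    Extending this remapping to the joint 3-D color histogram is exactly where the dimension-independent diffusion formulation comes in.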

  4. Retinal Imaging: Adaptive Optics

    NASA Astrophysics Data System (ADS)

    Goncharov, A. S.; Iroshnikov, N. G.; Larichev, Andrey V.

    This chapter describes several factors influencing the performance of ophthalmic diagnostic systems with adaptive optics compensation of human eye aberration. Particular attention is paid to speckle modulation, temporal behavior of aberrations, and anisoplanatic effects. The implementation of a fundus camera with adaptive optics is considered.

  5. Improved colorization for night vision system based on image splitting

    NASA Astrophysics Data System (ADS)

    Ali, E.; Kozaitis, S. P.

    2015-03-01

    The success of a color night navigation system often depends on the accuracy of the colors in the resulting image. Small regions can incorrectly adopt the color of large regions simply because of the relative sizes of the regions. We present a method to improve the color accuracy of a night navigation system by splitting a fused image into two distinct sections, generally road and sky regions, before colorization and processing them separately to obtain improved color accuracy in each region. Using this approach, small regions were colored correctly compared with the results obtained without separating the regions.

  6. Image Transform Based on the Distribution of Representative Colors for Color Deficient

    NASA Astrophysics Data System (ADS)

    Ohata, Fukashi; Kudo, Hiroaki; Matsumoto, Tetsuya; Takeuchi, Yoshinori; Ohnishi, Noboru

    This paper proposes a method to convert digital images containing sets of colors that are difficult to distinguish into images with high visibility. We set up four criteria: automatic processing by a computer; retaining continuity in color space; not lowering the visibility of images for people with normal color vision; and not lowering the visibility of images that do not originally contain hard-to-distinguish color sets. We conducted a psychological experiment and obtained the result that visibility improved for 60% of the 40 converted images, and we confirmed that the main criterion, continuity in color space, was maintained.

  7. Image recognition of diseased rice seeds based on color feature

    NASA Astrophysics Data System (ADS)

    Cheng, Fang; Ying, Yibin

    2004-11-01

    The objective of this research is to develop a digital image analysis algorithm for detecting diseased rice seeds based on color features. The rice seeds used in this study comprised five varieties: Jinyou402, Shanyou10, Zhongyou207, Jiayou99, and IIyou3207. Images of rice seeds were acquired with a color machine vision system. Each original RGB image was converted to HSV color space and preprocessed so that hue was retained in the seed region while background pixel values were set to zero. The hue values were scaled to vary from 0.0 to 1.0. Six color features were then extracted and evaluated for their contribution to seed classification. Determined using the Blocks method, the mean hue value showed the strongest classification ability. A Parzen window function method was used to estimate the probability density distribution, and a threshold on mean hue was drawn to separate normal seeds from diseased seeds. The average accuracy on the test data set was 95% for Jinyou402. The hue histogram feature was then extracted for diseased seeds and partitioned into two clusters, spot-diseased seeds and severely diseased seeds. The desired results were achieved when the two centroid locations were used to discriminate disease degree. Combining the two features of mean hue and hue histogram, all seeds could be classified as normal, spot-diseased, or severely diseased. Finally, the algorithm was applied to all five varieties to test its adaptability.
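    The mean-hue thresholding step can be sketched as follows. The seed colors and the threshold value are illustrative assumptions; the paper estimates its threshold with Parzen windows:

```python
import colorsys
import numpy as np

# Mean-hue feature: convert seed pixels from RGB to HSV (hue scaled to
# 0.0-1.0 as in the paper) and threshold the mean hue of the seed region.
def mean_hue(rgb_pixels):
    hues = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[0]
            for r, g, b in rgb_pixels]
    return float(np.mean(hues))

normal_seed   = [(200, 180, 60)] * 4   # yellowish husk (invented colors)
diseased_seed = [(90, 50, 40)] * 4     # brownish spot  (invented colors)

THRESHOLD = 0.08                       # illustrative hue threshold
for seed in (normal_seed, diseased_seed):
    label = "normal" if mean_hue(seed) > THRESHOLD else "diseased"
    print(label)
```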

  8. Scannerless loss modulated flash color range imaging

    DOEpatents

    Sandusky, John V.; Pitts, Todd Alan

    2008-09-02

    Scannerless loss modulated flash color range imaging methods and apparatus are disclosed for producing three dimensional (3D) images of a target within a scene. Apparatus and methods according to the present invention comprise a light source providing at least three wavelengths (passbands) of illumination that are each loss modulated, phase delayed and simultaneously directed to illuminate the target. Phase delayed light backscattered from the target is spectrally filtered, demodulated and imaged by a planar detector array. Images of the intensity distributions for the selected wavelengths are obtained under modulated and unmodulated (dc) illumination of the target, and the information contained in the images combined to produce a 3D image of the target.

  9. Scannerless loss modulated flash color range imaging

    DOEpatents

    Sandusky, John V.; Pitts, Todd Alan

    2009-02-24

    Scannerless loss modulated flash color range imaging methods and apparatus are disclosed for producing three dimensional (3D) images of a target within a scene. Apparatus and methods according to the present invention comprise a light source providing at least three wavelengths (passbands) of illumination that are each loss modulated, phase delayed and simultaneously directed to illuminate the target. Phase delayed light backscattered from the target is spectrally filtered, demodulated and imaged by a planar detector array. Images of the intensity distributions for the selected wavelengths are obtained under modulated and unmodulated (dc) illumination of the target, and the information contained in the images combined to produce a 3D image of the target.

  10. Stereo matching image processing by synthesized color and the characteristic area by the synthesized color

    NASA Astrophysics Data System (ADS)

    Akiyama, Akira; Mutoh, Eiichiro; Kumagai, Hideo

    2014-09-01

    We have developed stereo matching image processing based on synthesized color and the corresponding characteristic areas of that synthesized color, for object ranging and image recognition. Typical images from a pair of stereo imagers may disagree with each other due to size changes, missed locations, appearance changes, and deformation of characteristic areas. We therefore construct a synthesized color, and corresponding areas of the same synthesized color, to make stereo matching distinct. The construction proceeds in three steps. The first step makes a binary edge image by differentiating the focused image from each imager and verifying that the differentiated image has a normal frequency-density distribution, in order to find the threshold level for binarization; we used the Daubechies wavelet transformation for the differentiation in this study. The second step derives the synthesized color by averaging color brightness between binary edge points, alternating between the horizontal and vertical directions; the averaging procedure is repeated until the fluctuation of the averaged color becomes negligible with respect to the 256 brightness levels. The third step extracts areas of the same synthesized color by collecting pixels of that color and grouping them using 4-directional connectivity. The matching areas for stereo matching are determined from these synthesized color areas, with the matching point taken as the center of gravity of each area; the parallax between a pair of images is then easily derived from these centers of gravity. An experiment on this stereo matching was carried out with a toy soccer ball as the object, showing that stereo matching by the synthesized-color technique is simple and effective.

  11. Three-dimensional color image processing procedures using DSP

    NASA Astrophysics Data System (ADS)

    Rosales, Alberto J.; Ponomaryov, Volodymyr I.; Gallegos-Funes, Francisco

    2007-02-01

    Processing of vector image information is important because multichannel sensors are used in many applications. We introduce novel algorithms for processing color images based on order statistics and vector processing techniques: the Video Adaptive Vector Directional (VAVDF) and Vector Median M-type K-Nearest Neighbour (VMMKNN) filters presented in this paper. It is demonstrated that the novel algorithms effectively suppress impulsive noise in 3D video color sequences in comparison with several other methods. Simulation results were obtained using the video sequences "Miss America" and "Flowers", corrupted by noise. The filters KNNF, VGVDF, VMMKNN, and finally the proposed VAVDATM were investigated. The PSNR, MAE, and NCD criteria show that the VAVDATM filter gives the best performance on every criterion when the noise intensity is more than 7-10%. An attempt to realize real-time processing of the median-type algorithms on a DSP is presented.
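    The basic vector median operation underlying this filter family can be sketched as follows (a generic illustration with a hand-made window, not the VAVDF/VMMKNN filters themselves):

```python
import numpy as np

# Vector median: within a window, pick the color vector whose summed
# distance to all other vectors is smallest. The pixel is treated as a
# vector, so an impulsive outlier in any channel is rejected as a whole.
def vector_median(window_pixels):
    d = np.linalg.norm(window_pixels[:, None] - window_pixels[None], axis=2)
    return window_pixels[d.sum(axis=1).argmin()]

window = np.array([[100, 100, 100],
                   [102,  98, 101],
                   [ 99, 103, 100],
                   [255,   0,   0]])   # impulsive outlier
print(vector_median(window))           # one of the three similar pixels
```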

  12. Vector sparse representation of color image using quaternion matrix analysis.

    PubMed

    Xu, Yi; Yu, Licheng; Xu, Hongteng; Zhang, Hao; Nguyen, Truong

    2015-04-01

    Traditional sparse image models treat a color image pixel as a scalar, representing color channels separately or concatenating them as a monochrome image. In this paper, we propose a vector sparse representation model for color images using quaternion matrix analysis. As a new tool for color image representation, its potential applications in several image-processing tasks are presented, including color image reconstruction, denoising, inpainting, and super-resolution. The proposed model represents the color image as a quaternion matrix, where a quaternion-based dictionary learning algorithm is presented using the K-quaternion singular value decomposition (K-QSVD, a generalized K-means clustering for QSVD) method. It conducts sparse basis selection in quaternion space, which uniformly transforms the channel images to an orthogonal color space. In this new color space, the inherent color structures can be completely preserved during vector reconstruction. Moreover, the proposed sparse model is more efficient than current sparse models for image restoration tasks due to lower redundancy between the atoms of different color channels. The experimental results demonstrate that the proposed sparse image model successfully avoids the hue bias issue and shows its potential as a general and powerful tool in the color image analysis and processing domain. PMID:25643407
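    The pure-quaternion encoding of a color pixel can be sketched as follows. This is a generic illustration of quaternion color algebra (rotating the color vector with a unit quaternion), not the paper's K-QSVD dictionary learning:

```python
import numpy as np

# A color pixel (r, g, b) becomes the pure quaternion 0 + r*i + g*j + b*k,
# so the three channels are processed as one algebraic unit.
def qmul(p, q):
    # Hamilton product of quaternions given as (w, x, y, z).
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

pixel = np.array([0.0, 0.2, 0.5, 0.7])        # pure quaternion (0, r, g, b)
axis = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)  # gray axis
theta = np.pi / 3
q = np.concatenate([[np.cos(theta / 2)], np.sin(theta / 2) * axis])
q_conj = q * np.array([1, -1, -1, -1])

rotated = qmul(qmul(q, pixel), q_conj)         # q p q*: rotate color vector
# The result stays a pure quaternion and keeps the color vector's norm,
# i.e. the channels transform jointly rather than independently.
print(np.linalg.norm(rotated[1:]), np.linalg.norm(pixel[1:]))
```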

  13. The Artist, the Color Copier, and Digital Imaging.

    ERIC Educational Resources Information Center

    Witte, Mary Stieglitz

    The impact that color-copying technology and digital imaging have had on art, photography, and design is explored. Color copiers have provided new opportunities for direct and spontaneous image making and the potential for new transformations in art. The current generation of digital color copiers permits new directions in imaging, but the…

  14. Autonomous color theme extraction from images using saliency

    NASA Astrophysics Data System (ADS)

    Jahanian, Ali; Vishwanathan, S. V. N.; Allebach, Jan P.

    2015-03-01

    Color theme (palette) is a collection of color swatches for representing or describing colors in a visual design or an image. Color palettes have broad applications such as serving as means in automatic/semi-automatic design of visual media, as measures in quantifying aesthetics of visual design, and as metrics in image retrieval, image enhancement, and color semantics. In this paper, we suggest an autonomous mechanism for extracting color palettes from an image. Our method is simple and fast, and it works on the notion of visual saliency. By using visual saliency, we extract the fine colors appearing in the foreground along with the various colors in the background regions of an image. Our method accounts for defining different numbers of colors in the palette as well as presenting the proportion of each color according to its visual conspicuity in a given image. This flexibility supports an interactive color palette which may facilitate the designer's color design task. As an application, we present how our extracted color palettes can be utilized as a color similarity metric to enhance the current color semantic based image retrieval techniques.
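    The saliency-weighting idea can be sketched with a plain weighted k-means standing in for the paper's actual mechanism; the pixel data and saliency values are invented for illustration:

```python
import numpy as np

# Weight pixels by a saliency map when clustering colors, so conspicuous
# foreground colors survive in the palette and each swatch's proportion
# reflects its visual weight rather than its raw pixel count.
def weighted_palette(pixels, saliency, k=2, iters=20):
    # Deterministic init: spread initial centers across the pixel list.
    centers = pixels[np.linspace(0, len(pixels) - 1, k).astype(int)].copy()
    for _ in range(iters):
        d = np.linalg.norm(pixels[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            w = saliency * (labels == j)
            if w.sum() > 0:
                centers[j] = (pixels * w[:, None]).sum(0) / w.sum()
    props = np.array([saliency[labels == j].sum() for j in range(k)])
    return centers, props / props.sum()

pixels = np.vstack([np.tile([0.9, 0.1, 0.1], (20, 1)),   # salient red object
                    np.tile([0.5, 0.5, 0.5], (80, 1))])  # dull background
saliency = np.concatenate([np.full(20, 1.0), np.full(80, 0.1)])
centers, props = weighted_palette(pixels, saliency)
print(np.round(centers, 2), np.round(props, 2))
```

    Although the red object covers only 20% of the pixels, its saliency weight gives it the dominant share of the palette, which is the flexibility described above.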

  15. Color image registration based on quaternion Fourier transformation

    NASA Astrophysics Data System (ADS)

    Wang, Qiang; Wang, Zhengzhi

    2012-05-01

    The traditional Fourier Mellin transform is applied to quaternion algebra in order to investigate quaternion Fourier transformation properties useful for color image registration in frequency domain. Combining with the quaternion phase correlation, we propose a method for color image registration based on the quaternion Fourier transform. The registration method, which processes color image in a holistic manner, is convenient to realign color images differing in translation, rotation, and scaling. Experimental results on different types of color images indicate that the proposed method not only obtains high accuracy in similarity transform in the image plane but also is computationally efficient.
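    The frequency-domain building block that the quaternion method generalizes, scalar phase correlation, can be sketched as follows (a standard grayscale implementation, not the quaternion variant):

```python
import numpy as np

# Phase correlation: a translation between two images becomes a sharp
# peak in the inverse FFT of the normalized cross-power spectrum.
def phase_correlation_shift(a, b):
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12          # keep phase, drop magnitude
    corr = np.fft.ifft2(cross).real
    return np.unravel_index(np.argmax(corr), corr.shape)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(5, 9), axis=(0, 1))
print(phase_correlation_shift(shifted, img))  # -> (5, 9)
```

    The quaternion formulation applies the same idea to a holistic color transform instead of running it per channel.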

  16. TECHNIQUE FOR ENHANCING DIGITAL COLOR IMAGES BY CONTRAST STRETCHING IN MUNSELL COLOR SPACE.

    USGS Publications Warehouse

    Kruse, Fred A.; Raines, Gary L.

    1984-01-01

    The Munsell color system can be used to further enhance the appearance of high-quality digital color-composite images. A color-balanced 'standard' color-composite image is first produced using any desired contrast stretching algorithm. The stretched digital data are then transformed into the cylindrical Munsell color space. An enhanced version of a color-composite image is produced by stretching the saturation parameter over the full digital range and inverting the modified Munsell coordinates to red-blue-green (tristimulus) data space. The resulting image has greater color-saturation contrast than the original image, without hue change. Contrast stretching in Munsell color space reduces the correlation between individual bands or ratios and is similar to decorrelation processing based on principal-components transforms. However, principal components are based on data variance, with less variance being explained by each higher order component.
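    The saturation-stretch idea can be sketched in HSV as a stand-in for the cylindrical Munsell space (the actual technique operates on Munsell coordinates, which require a colorimetric transform not shown here):

```python
import colorsys
import numpy as np

# Stretch the saturation channel over its full range while leaving hue
# untouched, then convert back: more color-saturation contrast, no hue change.
def stretch_saturation(rgb_pixels):
    hsv = [colorsys.rgb_to_hsv(*p) for p in rgb_pixels]
    s = np.array([sat for _, sat, _ in hsv])
    lo, hi = s.min(), s.max()
    out = []
    for h, sat, v in hsv:
        s_new = (sat - lo) / (hi - lo) if hi > lo else sat
        out.append(colorsys.hsv_to_rgb(h, s_new, v))
    return out

# Three dull, low-saturation pixels (invented values).
pixels = [(0.5, 0.4, 0.4), (0.6, 0.3, 0.3), (0.55, 0.5, 0.45)]
stretched = stretch_saturation(pixels)
stretched_sats = [round(colorsys.rgb_to_hsv(*p)[1], 3) for p in stretched]
print(stretched_sats)   # saturation now spans the full 0-1 range
```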

  17. Extremely simple holographic projection of color images

    NASA Astrophysics Data System (ADS)

    Makowski, Michal; Ducin, Izabela; Kakarenko, Karol; Suszek, Jaroslaw; Kolodziejczyk, Andrzej; Sypek, Maciej

    2012-03-01

    A very simple scheme of holographic projection is presented, with experimental results showing good-quality image projection without any imaging lens. This technique can be regarded as an alternative to classic projection methods. It is based on the reconstruction of real images from three phase-iterated Fourier holograms. The illumination is performed with three laser beams of primary colors. A divergent-wavefront geometry is used to achieve an increased throw angle of the projection compared to plane-wave illumination. Light fibers are used for light guidance in order to keep the setup as simple as possible and to provide point-like sources of high-quality divergent wavefronts at an optimized position with respect to the light modulator. Absorbing spectral filters are implemented to multiplex three holograms on a single phase-only spatial light modulator. Hence color mixing occurs without any time-division methods, which cause rainbow effects and color flicker. The zero diffractive order with divergent illumination is practically invisible, and the speckle field is effectively suppressed with phase optimization and time-averaging techniques. The main advantages of the proposed concept are: a very simple and highly miniaturizable configuration; the lack of a lens; a single LCoS (Liquid Crystal on Silicon) modulator; strong resistance to imperfections and obstructions of the spatial light modulator such as dead pixels, dust, mud, and fingerprints; and simple calculations based on the Fast Fourier Transform (FFT), easily processed in real time on a GPU.
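    The "phase-iterated Fourier holograms" suggest a Gerchberg-Saxton-style computation. A hedged single-color sketch follows; the iteration count, target, and efficiency metric are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

# Gerchberg-Saxton iteration for a phase-only Fourier hologram: alternate
# between the phase-only constraint in the hologram plane and the target
# amplitude constraint in the image plane.
def gerchberg_saxton(target_amp, iters=30, seed=0):
    rng = np.random.default_rng(seed)
    field = target_amp * np.exp(2j * np.pi * rng.random(target_amp.shape))
    for _ in range(iters):
        holo = np.exp(1j * np.angle(np.fft.ifft2(field)))        # phase only
        field = target_amp * np.exp(1j * np.angle(np.fft.fft2(holo)))
    return np.angle(holo)

target = np.zeros((32, 32))
target[12:20, 12:20] = 1.0                    # desired bright square
phase = gerchberg_saxton(target)

recon = np.abs(np.fft.fft2(np.exp(1j * phase)))
efficiency = (recon[12:20, 12:20] ** 2).sum() / (recon ** 2).sum()
print(round(float(efficiency), 2))            # energy fraction in the square
```

    In the projector described above, three such phase holograms (one per primary color) are multiplexed on the single phase-only modulator.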

  18. Passive adaptive imaging through turbulence

    NASA Astrophysics Data System (ADS)

    Tofsted, David

    2016-05-01

    Standard methods for improved imaging system performance under degrading optical turbulence conditions typically involve active adaptive techniques or post-capture image processing. Here, passive adaptive methods are considered where active sources are disallowed, a priori. Theoretical analyses of short-exposure turbulence impacts indicate that varying aperture sizes experience different degrees of turbulence impacts. Smaller apertures often outperform larger aperture systems as turbulence strength increases. This suggests a controllable aperture system is advantageous. In addition, sub-aperture sampling of a set of training images permits the system to sense tilts in different sub-aperture regions through image acquisition and image cross-correlation calculations. A four sub-aperture pattern supports corrections involving five realizable operating modes (beyond tip and tilt) for removing aberrations over an annular pattern. Progress to date will be discussed regarding development and field trials of a prototype system.

  19. Rectangular pixels for efficient color image sampling

    NASA Astrophysics Data System (ADS)

    Singh, Tripurari; Singh, Mritunjay

    2011-01-01

    We present CFA designs that faithfully capture images with specified luminance and chrominance bandwidths. Previous academic research has mostly been concerned with maximizing PSNR of reconstructed images without regard to chrominance bandwidth and cross-talk. Commercial systems, on the other hand, pay close attention to both these parameters as well as to the visual quality of reconstructed images. They commonly sacrifice resolution by using a sufficiently aggressive OLPF to achieve low cross-talk and artifact free images. In this paper, we present the so called Chrominance Bandwidth Ratio, r, model in an attempt to capture both the chrominance bandwidth and the cross-talk between the various signals. Next, we examine the effect of tuning photosite aspect ratio, a hitherto neglected design parameter, and show the benefit of setting it at a different value than the pixel aspect ratio of the display. We derive panchromatic CFA patterns that provably minimize the photo-site count for all values of r. An interesting outcome is a CFA design that captures full chrominance bandwidth, yet uses fewer photosites than the venerable color-stripe design. Another interesting outcome is a low cost practical CFA design that captures chrominance at half the resolution of luminance using only 4 unique filter colors, that lends itself to efficient linear demosaicking, and yet vastly outperforms the Bayer CFA with identical number of photosites demosaicked with state of the art compute-intensive nonlinear algorithms.

  20. Structure preserving color deconvolution for immunohistochemistry images

    NASA Astrophysics Data System (ADS)

    Chen, Ting; Srinivas, Chukka

    2015-03-01

    Immunohistochemistry (IHC) staining is an important technique for the detection of one or more biomarkers within a single tissue section. In digital pathology applications, the correct unmixing of the tissue image into its individual constituent dyes for each biomarker is a prerequisite for accurate detection and identification of the underlying cellular structures. A popular technique thus far is the color deconvolution method proposed by Ruifrok et al. However, Ruifrok's method independently estimates the individual dye contributions at each pixel, which potentially leads to "holes and cracks" in the cells in the unmixed images. This is clearly inadequate, since strong spatial dependencies exist in tissue images, which contain rich cellular structures. In this paper, we formulate the unmixing algorithm in a least-squares framework over image patches and propose a novel color deconvolution method which explicitly incorporates spatial smoothness and structure continuity constraints through a neighborhood-graph regularizer. An analytical closed-form solution to the cost function is derived for fast implementation. The algorithm is evaluated on a clinical data set containing a number of 3,3'-Diaminobenzidine (DAB) and hematoxylin (HTX) stained IHC slides and demonstrates better unmixing results than the existing strategy.
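    For reference, the per-pixel Ruifrok baseline that the paper improves upon amounts to a Beer-Lambert optical-density transform followed by inversion of a stain matrix. A minimal sketch follows; the stain vectors are illustrative placeholders, since real values are calibrated per scanner and stain batch.

```python
import numpy as np

# Illustrative HTX / DAB / residual stain vectors (rows are unit
# optical-density directions); real values are calibrated per scanner.
M = np.array([[0.65, 0.70, 0.29],    # hematoxylin
              [0.27, 0.57, 0.78],    # DAB
              [0.71, 0.42, 0.56]])   # residual
M = M / np.linalg.norm(M, axis=1, keepdims=True)

def unmix(rgb, M):
    """Per-pixel Ruifrok deconvolution: RGB -> optical density -> stain
    concentrations via the inverse stain matrix."""
    od = -np.log10(np.clip(rgb, 1e-6, 1.0))      # Beer-Lambert OD
    return od.reshape(-1, 3) @ np.linalg.inv(M)  # concentrations per pixel

# Synthesize a pixel holding 0.8 units of HTX and 0.3 of DAB, then recover it.
c_true = np.array([0.8, 0.3, 0.0])
rgb = 10.0 ** (-(c_true @ M))
print(unmix(rgb[None, :], M).round(3))   # recovers ≈ [0.8, 0.3, 0.0]
```

    The "holes and cracks" criticized in the paper arise because each pixel is inverted independently of its neighbors, with no spatial term in this least-squares solution.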

  1. Invariant quaternion radial harmonic Fourier moments for color image retrieval

    NASA Astrophysics Data System (ADS)

    Xiang-yang, Wang; Wei-yi, Li; Hong-ying, Yang; Pan-pan, Niu; Yong-wei, Li

    2015-03-01

    Moments and moment invariants have become a powerful tool in image processing owing to their image description capability and invariance properties. However, conventional methods were mainly introduced for binary or gray-scale images, and the few existing approaches for color images often have poor color image description capability. Based on radial harmonic Fourier moments (RHFMs) and quaternions, we introduce in this paper the quaternion radial harmonic Fourier moments (QRHFMs) for representing color images, which can be seen as the generalization to color images of the RHFMs for gray-level images. It is shown that the QRHFMs can be obtained from the RHFMs of each color channel. We derive and analyze the rotation, scaling, and translation (RST) invariance properties of the QRHFMs, and we also discuss the problem of color image retrieval using invariant QRHFMs. Experimental results are provided to illustrate the efficiency of the proposed color image representation.

  2. A dendritic lattice neural network for color image segmentation

    NASA Astrophysics Data System (ADS)

    Urcid, Gonzalo; Lara-Rodríguez, Luis David; López-Meléndez, Elizabeth

    2015-09-01

    A two-layer dendritic lattice neural network is proposed to segment color images in the Red-Green-Blue (RGB) color space. The network is a fully interconnected feedforward net consisting of an input layer that receives color pixel values, an intermediate layer that computes pixel interdistances, and an output layer that classifies colors by hetero-association. The network is first initialized with a small finite subset of the colors present in the input image; these colors are obtained by means of an automatic clustering procedure such as k-means or fuzzy c-means. In the second stage, the color image is scanned on a pixel-by-pixel basis, and each picture element is treated as a vector and fed into the network. For illustration purposes we use public-domain color images to show the performance of our proposed image segmentation technique.
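    The initialization-plus-classification pipeline described above (automatic clustering to pick seed colors, then nearest-color assignment per pixel) can be sketched without the network machinery; `kmeans` and `segment` below are hypothetical names, and the farthest-point seeding is an implementation choice, not the paper's.

```python
import numpy as np

def kmeans(pixels, k, iters=10):
    """Plain k-means in RGB space, seeded by farthest-point initialization,
    to obtain the small set of seed colors the network is built from."""
    centers = [pixels[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(pixels - c, axis=1) for c in centers], axis=0)
        centers.append(pixels[d.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        labels = np.linalg.norm(pixels[:, None] - centers[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return centers

def segment(image, centers):
    """Assign each pixel to its nearest seed color (the classification the
    network's interdistance and output layers perform)."""
    d = np.linalg.norm(image.reshape(-1, 3)[:, None] - centers[None], axis=2)
    return d.argmin(axis=1).reshape(image.shape[:2])

img = np.zeros((4, 4, 3))
img[:, 2:] = [1.0, 0.0, 0.0]             # right half red, left half black
seg = segment(img, kmeans(img.reshape(-1, 3), 2))
print(seg)                                # left half labeled 0, right half 1
```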

  3. RGB calibration for color image analysis in machine vision.

    PubMed

    Chang, Y C; Reid, J F

    1996-01-01

    A color calibration method for correcting the variations in RGB color values caused by vision system components was developed and tested in this study. The calibration scheme concentrated on comprehensively estimating and removing the RGB errors without specifying the error sources and their effects. The algorithm for color calibration was based upon the use of a standardized color chart and was developed as a preprocessing tool for color image analysis. According to the theory of image formation, RGB errors in color images were categorized into multiplicative and additive errors. These errors arise from various sources: gray-level shift, variation in amplification and quantization in the camera electronics or frame grabber, change of the color temperature of the illumination with time, and related factors. The RGB errors of arbitrary colors in an image were estimated from the RGB errors of standard colors contained in the image. The color calibration method also contained an algorithm for correcting nonuniformity of illumination in the scene. The algorithm was tested under two conditions: uniform and nonuniform illumination of the scene. The RGB errors of arbitrary colors in test images were almost completely removed after color calibration; the maximum residual error was seven gray levels under uniform illumination and 12 gray levels under nonuniform illumination. Most residual RGB errors were caused by residual nonuniformity of illumination in the images. The test results showed that the developed method was effective in correcting the variations in RGB color values caused by vision system components. PMID:18290059
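    The multiplicative/additive error model described above can be illustrated with a per-channel sketch: two reference patches of known color suffice to solve observed = gain x true + offset for each channel. The chart values below are hypothetical.

```python
import numpy as np

# Hypothetical: two chart patches with known true RGB and their observed values.
true_ref = np.array([[50.0, 50.0, 50.0], [200.0, 200.0, 200.0]])
observed_ref = np.array([[65.0, 60.0, 55.0], [245.0, 240.0, 235.0]])

# Per-channel model: observed = gain * true + offset. Two reference patches
# determine both unknowns for each channel.
gain = (observed_ref[1] - observed_ref[0]) / (true_ref[1] - true_ref[0])
offset = observed_ref[0] - gain * true_ref[0]

def calibrate(rgb):
    """Remove the estimated multiplicative and additive RGB errors."""
    return (rgb - offset) / gain

obs = gain * np.array([120.0, 80.0, 30.0]) + offset   # simulated observation
print(calibrate(obs))   # ≈ [120. 80. 30.]
```

    The paper's method generalizes this idea to a full standardized chart and additionally corrects spatial illumination nonuniformity.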

  4. Mosaicking of NEAR MSI Color Image Sequences

    NASA Astrophysics Data System (ADS)

    Digilio, J. G.; Robinson, M. S.

    2004-05-01

    Of the over 160,000 frames of 433 Eros captured by the NEAR-Shoemaker spacecraft, 21,936 frames are components of 226 multi-spectral image sequences. As part of the ongoing NEAR Data Analysis Program, we are mosaicking (and delivering via a web interface) all color sequences in two versions: I/F and photometrically normalized I/F (30° incidence, 0° emission). Multi-spectral sets were acquired with varying bandpasses depending on mission constraints; all sets include the 550-nm, 760-nm, and 950-nm bands, and 32% of the sequences include all wavelengths except the 700-nm clear filter. Resolutions range from 20 m/pixel down to 3.5 m/pixel. To support color analysis and interpretation we are co-registering the highest-resolution black-and-white images to match each of the color mosaics. Due to Eros's highly irregular shape, the scale of a pixel can vary by almost a factor of 2 within a single frame acquired in the 35-km orbit; map-projecting therefore requires a pixel-by-pixel correction for local topography [1]. Scattered-light problems with the NEAR Multi-Spectral Imager (MSI) required the acquisition of ride-along zero-exposure calibration frames. Without correction, scattered-light artifacts within the MSI were larger than the subtle color differences found on Eros [see details in 2]. Successful correction requires that the same region of the surface (within a few pixels) be in the field of view of the zero-exposure frame as when the normal frame was acquired. Due to engineering constraints, the timing of frame acquisition was not always optimal for the scattered-light correction. During the co-registration process we are tracking apparent ground motion during a sequence to estimate the efficacy of the correction, and thus the integrity of the color information. Currently several web-based search and browse tools allow interested users to locate individual MSI frames from any spot on the asteroid using various search criteria (cps.earth.northwestern.edu). Final color and BW map products

  5. Color enhancement in multispectral image of human skin

    NASA Astrophysics Data System (ADS)

    Mitsui, Masanori; Murakami, Yuri; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki

    2003-07-01

    Multispectral imaging is receiving attention in medical color imaging, as high-fidelity color information can be acquired by multispectral image capture. Since color enhancement in medical color images is effective for distinguishing lesions from normal tissue, we apply a new technique for color enhancement that uses the multispectral image to enhance the features contained in a certain spectral band without changing the average color distribution of the original image. In this method, to keep the average color distribution, the KL transform is applied to the spectral data, and only the high-order KL coefficients are amplified in the enhancement. Multispectral images of the human skin of a bruised arm were captured by a 16-band multispectral camera, and the proposed color enhancement was applied. The resultant images were compared with color images reproduced assuming a CIE D65 illuminant (obtained by a natural color reproduction technique). The proposed technique successfully visualizes unclear bruised lesions that are almost invisible in the natural color images. It should provide a support tool for diagnosis in dermatology, visual examination in internal medicine, nursing care for preventing bedsores, and so on.
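    The enhancement step (KL transform, amplify only high-order coefficients, invert) can be sketched as follows; the number of preserved components and the gain are illustrative choices. Because the amplified coefficients are zero-mean, the average spectrum, and hence the average color distribution, is preserved.

```python
import numpy as np

def kl_enhance(spectra, keep=2, gain=3.0):
    """Amplify only the high-order KL (principal-component) coefficients of
    per-pixel spectra; the first `keep` components, and hence the average
    color distribution, are left untouched."""
    mean = spectra.mean(axis=0)
    centered = spectra - mean
    # KL basis = eigenvectors of the spectral covariance (via SVD).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    coeff = centered @ vt.T
    coeff[:, keep:] *= gain          # boost subtle high-order structure
    return coeff @ vt + mean

rng = np.random.default_rng(1)
spectra = rng.random((100, 16))       # 100 pixels x 16 spectral bands
enhanced = kl_enhance(spectra)
print(np.allclose(enhanced.mean(axis=0), spectra.mean(axis=0)))  # True
```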

  6. Imaging an Adapted Dentoalveolar Complex

    PubMed Central

    Herber, Ralf-Peter; Fong, Justine; Lucas, Seth A.; Ho, Sunita P.

    2012-01-01

    Adaptation of a rat dentoalveolar complex was illustrated using various imaging modalities. Micro-X-ray computed tomography for 3D modeling, combined with complementary techniques including image processing, scanning electron microscopy, fluorochrome labeling, conventional histology (H&E, TRAP), and immunohistochemistry (RANKL, OPN), elucidated the dynamic nature of bone, the periodontal ligament space, and cementum in the rat periodontium. Tomography and electron microscopy illustrated structural adaptation of the calcified tissues at higher resolution. Ongoing biomineralization was analyzed using fluorochrome labeling and by evaluating attenuation profiles in virtual sections from the 3D tomographies. Osteoclastic distribution as a function of anatomical location was illustrated by combining histology, immunohistochemistry, and tomography. While tomography and SEM provided evidence of past resorption-related events, future adaptive changes were deduced by identifying matrix biomolecules using immunohistochemistry. Thus, a dynamic picture of the dentoalveolar complex in rats was illustrated. PMID:22567314

  7. Color contrast enhancement method of infrared polarization fused image

    NASA Astrophysics Data System (ADS)

    Yang, Fan; Xie, Chen

    2015-10-01

    The traditional color fusion method based on the color transfer algorithm has the problem that the colors of the target and the background are similar, so an infrared polarization image color fusion method based on color contrast enhancement is proposed. First, the infrared radiation intensity image and the polarization image are color fused, and color transfer is then applied between a color reference image and the initial fused image in the YCbCr color space. Second, the Otsu segmentation method is used to extract the target area from the infrared polarization image. Lastly, the H, S, and I components of the color fusion image obtained by color transfer are adjusted in HSI space using the target area, yielding the final fused image. Experimental results show that the fused result obtained by the proposed method is rich in detail and makes the contrast between target and background more pronounced, thereby improving the ability to detect and identify targets.
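    The color-transfer step, which the abstract applies between the reference and the initial fused image in YCbCr space, is commonly implemented as per-channel matching of means and standard deviations. A minimal sketch under that assumption:

```python
import numpy as np

def transfer_stats(src, ref):
    """Match each channel's mean and standard deviation of `src` to those
    of `ref` -- the statistics-transfer step typically applied after
    converting both images to YCbCr."""
    s_mu, s_sd = src.mean(axis=(0, 1)), src.std(axis=(0, 1))
    r_mu, r_sd = ref.mean(axis=(0, 1)), ref.std(axis=(0, 1))
    return (src - s_mu) / np.where(s_sd == 0, 1, s_sd) * r_sd + r_mu

rng = np.random.default_rng(2)
src = rng.random((8, 8, 3))               # stand-in for the initial fused image
ref = rng.random((8, 8, 3)) * 2.0 + 1.0   # stand-in for the color reference
out = transfer_stats(src, ref)
print(out.mean(axis=(0, 1)).round(3), ref.mean(axis=(0, 1)).round(3))
```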

  8. Color Sequence of Triton Approach Images

    NASA Technical Reports Server (NTRS)

    1989-01-01

    Triton Voyager 2 approach sequence with latitude-longitude grid superposed. The color image was reconstructed by making a computer composite of three black-and-white images taken through red, green, and blue filters. Details on Triton's surface unfold dramatically in this sequence of approach images. The South Pole is near the bottom of the images, at the convergence of the lines of longitude. Resolution changes from about 60 km/pixel (37 mi/pixel) in the image at upper left, taken from a distance of 500,000 km (311,000 mi), to about 5 km/pixel (3.1 mi/pixel) for the image at lower right. Global and regional albedo features are visible in all of the images. The albedo features can be tracked in successive images and show that Triton underwent about 3/4 of a rotation during the 4.3-day interval over which these images were obtained. A southern polar cap of bright pink, yellow, and white materials covers nearly all of the southern hemisphere; these materials consist of nitrogen ice with traces of other substances, including frozen methane and carbon monoxide. Feeble ultraviolet radiation from the sun is thought to act on the methane, causing chemical reactions that produce the pinkish and yellowish substances. At the time of the Voyager 2 flyby (Jan. 1989), Triton's southern hemisphere was starting the summer season and the South Pole was canted toward the sun day and night, such that the polar cap was sublimating under the relatively 'hot' summer sun (surface temperature about 38 K, about -391 degrees F). Numerous dark streaks on the southern polar nitrogen-ice cap are thought to consist of dark dust deposited by prevailing winds in Triton's tenuous nitrogen atmosphere. A bluish band, seen in all of the images, nearly encircles Triton's equator; this band is thought to consist of fairly fresh nitrogen frost, perhaps deposited in the decade prior to Voyager 2's flyby.

  9. Hyperspectral image analysis using artificial color

    NASA Astrophysics Data System (ADS)

    Fu, Jian; Caulfield, H. John; Wu, Dongsheng; Tadesse, Wubishet

    2010-03-01

    By definition, HSC (HyperSpectral Camera) images are much richer in spectral data than, say, those of a COTS (Commercial-Off-The-Shelf) color camera. But data are not information; if we do the task right, useful information can be derived from the data in HSC images. Nature faced essentially the identical problem: the incident light is so complex spectrally that measuring it with high resolution would provide far more data than animals can handle in real time. Nature's solution was to perform irreversible POCS (Projections Onto Convex Sets) to achieve huge reductions in data with minimal reduction in information. Thus we can arrange for our manmade systems to do what nature did: project the HSC image onto two or more broad, overlapping curves. The task we have undertaken in the last few years is to develop this idea, which we call Artificial Color. What we report here is the use of measured HSC image data projected onto two or three convex, overlapping, broad curves in analogy with the sensitivity curves of human cone cells. Testing two quite different HSC images in that manner produced the desired result: good discrimination or segmentation that can be done very simply and hence is likely to be feasible in real time with specialized computers. Using POCS on the HSC data to reduce the processing complexity produced excellent discrimination in both cases. For technical reasons discussed here, the figures of merit for the kind of pattern recognition we use are incommensurate with the figures of merit of conventional pattern recognition. We nevertheless used some force fitting to make a comparison, because it shows what is also obvious qualitatively: on our tasks, our method works better.
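    The projection onto broad overlapping curves is just a band-weighted sum per pixel. A sketch with hypothetical Gaussian curves standing in for the cone-like sensitivities:

```python
import numpy as np

bands = np.linspace(400, 1000, 64)          # hypothetical band centers (nm)

def sensitivity(center, width):
    """Broad, overlapping Gaussian response curve over the bands."""
    return np.exp(-0.5 * ((bands - center) / width) ** 2)

# Three cone-like curves: each hyperspectral pixel projects to 3 numbers.
curves = np.stack([sensitivity(c, 120.0) for c in (450, 550, 650)])

def artificial_color(cube):
    """Project an H x W x 64 hyperspectral cube onto the curves."""
    return cube @ curves.T      # -> H x W x 3

rng = np.random.default_rng(3)
cube = rng.random((4, 4, 64))
print(artificial_color(cube).shape)   # (4, 4, 3)
```

    The reduction is irreversible (64 bands collapse to 3 projections), which is exactly the POCS-style data reduction the abstract describes.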

  10. A novel method of target recognition based on 3D-color-space locally adaptive regression kernels model

    NASA Astrophysics Data System (ADS)

    Liu, Jiaqi; Han, Jing; Zhang, Yi; Bai, Lianfa

    2015-10-01

    The locally adaptive regression kernels model can describe the edge shapes of images accurately and their graphic trends integrally, but it does not consider color information, even though color is an important element of an image. Therefore, we present a novel method of target recognition based on a 3-D-color-space locally adaptive regression kernels model. Rather than treating color as generic additional information, this method directly calculates local similarity features from the 3-D data of the color image. The proposed method uses a few examples of an object as a query to detect generic objects with incompact, complex, and changeable shapes. Our method involves three phases. First, novel color-space descriptors, which measure the likeness of a voxel to its surroundings, are calculated from the RGB color space of the query image; salient features including spatial-dimensional and color-dimensional information are extracted from these descriptors and simplified by principal components analysis (PCA) to construct a non-similar local structure feature set of the object class. Second, we compare the salient features with analogous features from the target image, using a matrix generalization of the cosine similarity measure; the similar structures in the target image are then obtained by local similarity structure statistical matching. Finally, we apply non-maxima suppression to the similarity image to extract the object position and mark the object in the test image. Experimental results demonstrate that our approach is effective and accurate in improving the ability to identify targets.
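    The matrix generalization of cosine similarity used in the comparison phase is commonly taken as trace(A^T B) normalized by the Frobenius norms; the sketch below assumes that form, which may differ in detail from the authors' measure.

```python
import numpy as np

def matrix_cosine(A, B):
    """Matrix cosine similarity: trace(A^T B) / (||A||_F * ||B||_F),
    a matrix generalization of the vector cosine measure."""
    return np.trace(A.T @ B) / (np.linalg.norm(A) * np.linalg.norm(B))

rng = np.random.default_rng(4)
F = rng.random((8, 5))                       # a query feature matrix
print(round(matrix_cosine(F, F), 6))          # identical features -> 1.0
print(matrix_cosine(F, rng.random((8, 5))) < 1.0)
```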

  11. Color-invariant three-dimensional feature descriptor for color-shift-model-based image processing

    NASA Astrophysics Data System (ADS)

    Lim, Joohyun; Paik, Joonki

    2011-11-01

    We present a novel color-invariant depth feature descriptor for color-shift-model (CSM)-based image processing. Color images acquired by a single camera equipped with a multiple color-filter aperture (MCA) contain depth-dependent color misalignment; the amount and direction of the misalignment provide the object's distance from the camera. CSM-based image processing, which represents the combined image-acquisition and depth-estimation framework, requires a color-invariant feature descriptor that can convey depth information. To improve depth-estimation performance, color boosting is performed on a color image acquired by the MCA camera, and CSM-based channel-shifting descriptor vectors, or channel-shifting vectors (CSVs), are generated using a feasibility test. Color-invariant features are also extracted from the luminance image. The proposed color-invariant three-dimensional (3-D) feature descriptor is finally obtained by combining the CSVs and the luminance features. We present an experimental analysis of the proposed feature descriptor and show that the descriptors are proportional to the depth of an object. The proposed descriptor can be used for feature-based image matching in various applications, including 3-D scene modeling, 3-D object recognition, 3-D video tracking, and multifocusing, to name a few.

  12. Acceleration of color computer-generated hologram from RGB-D images using color space conversion

    NASA Astrophysics Data System (ADS)

    Hiyama, Daisuke; Shimobaba, Tomoyoshi; Kakue, Takashi; Ito, Tomoyoshi

    2015-04-01

    We report the acceleration of color computer-generated holograms (CGHs) from three-dimensional (3D) scenes that are expressed as RGB and depth (D) images. These images are captured by a depth camera or the depth buffer of a 3D graphics library. The RGB and depth images preserve the color and depth information of the 3D scene, respectively, so we can regard them as two-dimensional (2D) section images along the depth direction. In general, convolution-based diffraction such as the angular spectrum method is used to calculate CGHs from the 2D section images; however, this takes an enormous amount of time because of the multiple diffraction calculations. In this paper, we first describe 'band-limited double-step Fresnel diffraction (BL-DSF)', which accelerates the diffraction calculation compared to convolution-based diffraction. Next, we describe the acceleration of color CGH using color space conversion. Color CGHs are generally calculated in RGB color space, where the same calculation must be repeated for each color component, so the computational cost of a color CGH is three times that of a monochrome CGH. Instead, we use YCbCr color space, because the 2D section images in YCbCr color space can be down-sampled without deterioration of the image quality.
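    The saving from YCbCr processing comes from chroma down-sampling: luminance keeps full resolution while Cb and Cr carry a quarter of the samples. A sketch using the BT.601 conversion (the paper does not specify which RGB-to-YCbCr matrix it uses):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """BT.601 RGB -> YCbCr (inputs in [0,1]; Cb/Cr centered on 0)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = (b - y) * 0.564
    cr = (r - y) * 0.713
    return y, cb, cr

def downsample2(c):
    """2x2 box down-sampling of a chroma plane."""
    return c.reshape(c.shape[0] // 2, 2, c.shape[1] // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(5)
rgb = rng.random((8, 8, 3))
y, cb, cr = rgb_to_ycbcr(rgb)
# Chroma planes carry a quarter of the samples; luminance stays full size.
print(y.shape, downsample2(cb).shape, downsample2(cr).shape)  # (8, 8) (4, 4) (4, 4)
```

    With quarter-resolution chroma, the three diffraction calculations cost roughly 1 + 1/4 + 1/4 = 1.5 times a monochrome CGH instead of 3 times.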

  13. Multimodal digital color imaging system for facial skin lesion analysis

    NASA Astrophysics Data System (ADS)

    Bae, Youngwoo; Lee, Youn-Heum; Jung, Byungjo

    2008-02-01

    In dermatology, various digital imaging modalities have been used as important tools to quantitatively evaluate the treatment of skin lesions. Cross-polarization color imaging is used to evaluate skin chromophore (melanin and hemoglobin) information, and parallel-polarization imaging to evaluate skin texture. In addition, UV-A-induced fluorescence imaging has been widely used to evaluate various skin conditions such as sebum, keratosis, sun damage, and vitiligo. To maximize the efficacy of evaluating various skin lesions, it is necessary to integrate these modalities into a single imaging system. In this study, we propose a multimodal digital color imaging system that provides four different digital color images: a standard color image, parallel- and cross-polarization color images, and a UV-A-induced fluorescent color image. Herein, we describe the imaging system and present examples of image analysis. By analyzing the color information and morphological features of facial skin lesions, we are able to evaluate various skin lesions comparably and simultaneously. In conclusion, the multimodal color imaging system can serve as an important assistive tool in dermatology.

  14. Application of new advanced CNN structure with adaptive thresholds to color edge detection

    NASA Astrophysics Data System (ADS)

    Deng, Shaojiang; Tian, Yuan; Hu, Xipeng; Wei, Pengcheng; Qin, Mingfu

    2012-04-01

    Color edge detection is much more efficient than gray-scale detection when edges exist at the boundary between regions of different colors with no change in intensity. This paper presents adaptive templates capable of detecting various color and intensity changes in color images. To avoid the multilayer conception proposed in the literature, the cellular neural network (CNN) structure is modified: a matrix C, which carries the change information of the pixels, replaces the control parts in the basic CNN equation. This modification is necessary because a multilayer structure faces the challenge of representing the intrinsic relationships among the primary layers. Additionally, in order to enhance the accuracy of edge detection, an adaptive detection threshold is employed; the adaptive thresholds serve as alterable criteria in designing the matrix C. The proposed synthetic system not only avoids the problems engendered by multiple layers but also exploits the full information of the pixels themselves. Experimental results prove that the proposed method is efficient.
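    As a much-simplified illustration of adaptive thresholding for color edges (not the paper's cellular-neural-network formulation), one can threshold the color gradient magnitude at a statistic of the image itself, so that a pure hue boundary with constant intensity is still detected:

```python
import numpy as np

def color_edges(img, k=1.0):
    """Edge map from the color gradient magnitude, thresholded adaptively
    at mean + k * std of the magnitudes (an adaptive criterion in the
    spirit of the paper's matrix C, not its CNN formulation)."""
    dy = np.linalg.norm(np.diff(img, axis=0, append=img[-1:]), axis=2)
    dx = np.linalg.norm(np.diff(img, axis=1, append=img[:, -1:]), axis=2)
    mag = np.hypot(dx, dy)
    thr = mag.mean() + k * mag.std()
    return mag > thr

# A pure-hue boundary (red|green) with constant intensity is still detected.
img = np.zeros((6, 6, 3))
img[:, :3] = [1.0, 0.0, 0.0]
img[:, 3:] = [0.0, 1.0, 0.0]
print(color_edges(img)[:, 2])   # the column adjacent to the boundary is all True
```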

  15. Color appearance for photorealistic image synthesis

    NASA Astrophysics Data System (ADS)

    Marini, Daniele; Rizzi, Alessandro; Rossi, Maurizio

    2000-12-01

    Photorealistic image synthesis is a relevant research and application field in computer graphics, whose aim is to produce synthetic images that are indistinguishable from real ones. Photorealism is based upon accurate computational models of light-material interaction, which allow us to compute the spectral intensity light field of a geometrically described scene. The fundamental methods are ray tracing and radiosity. While radiosity allows us to compute the diffuse component of the emitted and reflected light, applying ray tracing in a two-pass solution lets us also cope with the non-diffuse properties of the model surfaces. Both methods can be implemented to generate an accurate photometric distribution of light in the simulated environment. A still-open problem is the visualization phase, whose purpose is to display the final result of the simulated model on a monitor screen or on printed paper. The tone reproduction problem consists of finding the best way to compress the extended dynamic range of the computed light field into the limited range of displayable colors. Recently some scholars have addressed this problem by considering the perception stage of image formation, thus including a model of the human visual system in the visualization process. In this paper we present a working hypothesis for solving the tone reproduction problem of synthetic image generation, integrating the Retinex perception model into the photorealistic image synthesis context.
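    To make the tone reproduction problem concrete: a minimal global operator such as L/(1+L) already compresses an unbounded luminance range into [0, 1), though it ignores the perceptual (Retinex) modeling the paper advocates.

```python
import numpy as np

def tonemap(luminance):
    """Simple global operator L/(1+L): compresses an unbounded dynamic
    range into [0, 1). A far simpler stand-in for the perceptual model
    the paper integrates."""
    return luminance / (1.0 + luminance)

L = np.array([0.01, 1.0, 100.0, 10000.0])   # radiance spanning 6 decades
print(tonemap(L).round(4))   # ≈ [0.0099 0.5 0.9901 0.9999]
```

    Such global curves treat every pixel identically; Retinex-style models instead adapt locally, which is why they preserve perceived contrast better in high-dynamic-range scenes.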

  16. Color reproductivity improvement with additional virtual color filters for WRGB image sensor

    NASA Astrophysics Data System (ADS)

    Kawada, Shun; Kuroda, Rihito; Sugawa, Shigetoshi

    2013-02-01

    We have developed a high-accuracy color reproduction method based on an estimated spectral reflectance of objects using additional virtual color filters for a wide-dynamic-range WRGB color filter CMOS image sensor. The four virtual color filters are created by multiplying the spectral sensitivity of the white pixel by Gaussian functions that have different center wavelengths and standard deviations, and the virtual sensor outputs of these filters are estimated from the four real output signals of the WRGB image sensor. The accuracy of color reproduction was evaluated with a Macbeth Color Checker (MCC); the averaged color difference ΔEab over the 24 colors was 1.88 with our approach.
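    The construction of a virtual filter (white-pixel sensitivity times a Gaussian) can be sketched directly; the wavelength grid, the flat white sensitivity, and the filter centers below are all illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

wl = np.arange(400, 701, 10)                 # wavelength samples (nm), assumed
white = np.ones_like(wl, dtype=float)        # idealized flat White sensitivity

def virtual_filter(center, sigma):
    """Virtual color filter: the White pixel's spectral sensitivity
    multiplied by a Gaussian of given center wavelength and std dev."""
    return white * np.exp(-0.5 * ((wl - center) / sigma) ** 2)

# Four virtual filters at hypothetical centers; real parameters would be
# chosen to optimize the reflectance estimate from the four WRGB outputs.
filters = [virtual_filter(c, 30.0) for c in (430, 510, 590, 650)]
reflectance = np.ones_like(wl, dtype=float)  # a flat test reflectance
print([round(float(f @ reflectance), 1) for f in filters])
```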

  17. Binarization of color document images via luminance and saturation color features.

    PubMed

    Tsai, Chun-Ming; Lee, Hsi-Jian

    2002-01-01

    This paper presents a novel binarization algorithm for color document images. Conventional thresholding methods do not produce satisfactory binarization results for documents with close or mixed foreground colors and background colors. Initially, statistical image features are extracted from the luminance distribution. Then, a decision-tree based binarization method is proposed, which selects various color features to binarize color document images. First, if the document image colors are concentrated within a limited range, saturation is employed. Second, if the image foreground colors are significant, luminance is adopted. Third, if the image background colors are concentrated within a limited range, luminance is also applied. Fourth, if the total number of pixels with low luminance (less than 60) is limited, saturation is applied; else both luminance and saturation are employed. Our experiments include 519 color images, most of which are uniform invoice and name-card document images. The proposed binarization method generates better results than other available methods in shape and connected-component measurements. Also, the binarization method obtains higher recognition accuracy in a commercial OCR system than other comparable methods. PMID:18244645
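    The four-rule decision tree can be sketched directly; the numeric thresholds below are illustrative stand-ins for the paper's calibrated criteria (the abstract only specifies the low-luminance cutoff of 60).

```python
import numpy as np

def choose_feature(lum):
    """Decision-tree feature selection following the paper's four rules;
    the thresholds here are illustrative, not the paper's calibrated ones."""
    if lum.std() < 20:                 # 1. colors concentrated -> saturation
        return "saturation"
    if (lum < 128).mean() > 0.5:       # 2. significant foreground -> luminance
        return "luminance"
    bg = lum[lum >= 128]
    if bg.size and bg.std() < 20:      # 3. concentrated background -> luminance
        return "luminance"
    if (lum < 60).mean() < 0.05:       # 4. few dark pixels -> saturation
        return "saturation"
    return "luminance+saturation"      # otherwise combine both features

def binarize(feature, thr):
    """Threshold whichever feature map the tree selected."""
    return feature > thr

rng = np.random.default_rng(6)
lum = rng.integers(0, 256, (32, 32)).astype(float)
print(choose_feature(lum))
print(choose_feature(np.full((8, 8), 100.0)))   # concentrated -> "saturation"
```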

  18. High Quality Color Imaging on the Mead Microencapsulated Imaging System Using a Fiber Optic CRT

    NASA Astrophysics Data System (ADS)

    Duke, Ronald J.

    1989-07-01

    Mead Imaging's unique microencapsulated color imaging system (CYCOLOR) has many applications. Mead Imaging and Hughes have combined CYCOLOR and Fiber Optic Cathode Ray Tubes (FOCRT) to develop digital color printers.

  19. The Airborne Ocean Color Imager - System description and image processing

    NASA Technical Reports Server (NTRS)

    Wrigley, Robert C.; Slye, Robert E.; Klooster, Steven A.; Freedman, Richard S.; Carle, Mark; Mcgregor, Lloyd F.

    1992-01-01

    The Airborne Ocean Color Imager was developed as an aircraft instrument to simulate the spectral and radiometric characteristics of the next generation of satellite ocean color instrumentation. Data processing programs have been developed as extensions of the Coastal Zone Color Scanner algorithms for atmospheric correction and bio-optical output products. The latter include several bio-optical algorithms for estimating phytoplankton pigment concentration, as well as one for the diffuse attenuation coefficient of the water. Additional programs have been developed to geolocate these products and remap them into a georeferenced database, using data from the aircraft's inertial navigation system. Examples illustrate the sequential data products generated by the processing system, using data from flightlines near the mouth of the Mississippi River: from raw data to atmospherically corrected data, to bio-optical data, to geolocated data, and, finally, to georeferenced data.

  20. Mississippi Delta, Radar Image with Colored Height

    NASA Technical Reports Server (NTRS)

    2005-01-01


    About the animation: This simulated view of the potential effects of storm surge flooding on Lake Pontchartrain and the New Orleans area was generated with data from the Shuttle Radar Topography Mission. Although it is protected by levees and sea walls against storm surges of 18 to 20 feet, much of the city is below sea level, and flooding due to storm surges caused by major hurricanes is a concern. The animation shows regions that, if unprotected, would be inundated with water. The animation depicts flooding in one-meter increments.

    About the image: The geography of the New Orleans and Mississippi delta region is well shown in this radar image from the Shuttle Radar Topography Mission. In this image, bright areas show regions of high radar reflectivity, such as from urban areas, and elevations have been coded in color using height data also from the mission. Dark green colors indicate low elevations, rising through yellow and tan, to white at the highest elevations.

    New Orleans is situated along the southern shore of Lake Pontchartrain, the large, roughly circular lake near the center of the image. The line spanning the lake is the Lake Pontchartrain Causeway, the world's longest over-water highway bridge. Major portions of the city of New Orleans are below sea level, and although it is protected by levees and sea walls, flooding during storm surges associated with major hurricanes is a significant concern.

    Data used in this image were acquired by the Shuttle Radar Topography Mission aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. The mission used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar that flew twice on the Space Shuttle Endeavour in 1994. The Shuttle Radar Topography Mission was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data

  1. New Windows based Color Morphological Operators for Biomedical Image Processing

    NASA Astrophysics Data System (ADS)

    Pastore, Juan; Bouchet, Agustina; Brun, Marcel; Ballarin, Virginia

    2016-04-01

    Morphological image processing is well known as an efficient methodology for image processing and computer vision. With the wide use of color in many areas, interest in color perception and processing has been growing rapidly. Many models have been proposed to extend morphological operators to the field of color images, dealing with new problems not present in the binary and gray-level contexts. These solutions usually deal with the lattice structure of the color space, or provide it with total orders, in order to define basic operators with the required properties. In this work we propose a new locally defined ordering, in the context of window-based morphological operators, for the definition of erosion-like and dilation-like operators, which provides the desired properties expected from color morphology while avoiding some of the drawbacks of prior approaches. Experimental results show that the proposed color operators can be used efficiently for color image processing.
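    A window-based color erosion needs only a total order on colors; the sketch below uses a plain lexicographic (R, then G, then B) order as a stand-in for the paper's locally defined ordering.

```python
import numpy as np

def color_erosion(img, r=1):
    """Window-based color erosion: each pixel takes the window's minimal
    color under a lexicographic (R, then G, then B) total order -- a
    simple global order standing in for the paper's locally defined one."""
    h, w, _ = img.shape
    pad = np.pad(img, ((r, r), (r, r), (0, 0)), mode="edge")
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 2 * r + 1, x:x + 2 * r + 1].reshape(-1, 3)
            # np.lexsort sorts by the last key first, so pass (B, G, R).
            idx = np.lexsort((win[:, 2], win[:, 1], win[:, 0]))[0]
            out[y, x] = win[idx]
    return out

img = np.ones((4, 4, 3))
img[1, 1] = [0.2, 0.9, 0.9]        # one lexicographically "small" pixel
eroded = color_erosion(img)
# The small pixel spreads to its 3x3 neighborhood, as in gray-scale erosion.
print((eroded == [0.2, 0.9, 0.9]).all(axis=2).sum())   # 9
```

    A key property of such vector orderings is that no false colors are created: every output pixel is a color present in the input window.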

  2. A local adaptive image descriptor

    NASA Astrophysics Data System (ADS)

    Zahid Ishraque, S. M.; Shoyaib, Mohammad; Abdullah-Al-Wadud, M.; Monirul Hoque, Md; Chae, Oksam

    2013-12-01

    The local binary pattern (LBP) is a robust but computationally simple approach to texture analysis. However, LBP performs poorly in the presence of noise and large illumination variation. Thus, a local adaptive image descriptor, termed LAID, is introduced in this proposal. It is a ternary pattern and is able to generate persistent codes to represent microtextures in a given image, especially in noisy conditions. It can also generate stable texture codes when pixel intensities change abruptly due to illumination changes. Experimental results show the superiority of the proposed method over other state-of-the-art methods.
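    The ternary coding idea behind LAID can be illustrated with a toy local ternary pattern (a generic LTP-style sketch under assumed conventions, not the authors' exact LAID descriptor): neighbors within a tolerance t of the center code to 0, which is what keeps the code stable under small noise.

```python
def ternary_pattern(patch, t=5):
    """Code each 3x3 neighbor as -1/0/+1 relative to the center pixel.

    Neighbors within +/- t of the center map to 0, which makes the code
    stable under small noise (the weakness of plain binary LBP)."""
    c = patch[1][1]
    # Clockwise neighbor order starting at the top-left corner.
    offsets = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = []
    for r, col in offsets:
        d = patch[r][col] - c
        if d > t:
            code.append(1)
        elif d < -t:
            code.append(-1)
        else:
            code.append(0)
    return code

# Small perturbations around 50 code to 0; only the two strong
# transitions (60 and 90) survive in the pattern.
patch = [[52, 60, 48],
         [55, 50, 90],
         [47, 50, 53]]
print(ternary_pattern(patch, t=5))  # -> [0, 1, 0, 1, 0, 0, 0, 0]
```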

  3. Preliminary images from an adaptive imaging system.

    PubMed

    Griffiths, J A; Metaxas, M G; Pani, S; Schulerud, H; Esbrand, C; Royle, G J; Price, B; Rokvic, T; Longo, R; Asimidis, A; Bletsas, E; Cavouras, D; Fant, A; Gasiorek, P; Georgiou, H; Hall, G; Jones, J; Leaver, J; Li, G; Machin, D; Manthos, N; Matheson, J; Noy, M; Ostby, J M; Psomadellis, F; van der Stelt, P F; Theodoridis, S; Triantis, F; Turchetta, R; Venanzi, C; Speller, R D

    2008-06-01

    I-ImaS (Intelligent Imaging Sensors) is a European project aiming to produce real-time adaptive X-ray imaging systems that use Monolithic Active Pixel Sensors (MAPS) to create images with maximum diagnostic information within given dose constraints. Initial systems concentrate on mammography and cephalography. In our system, the exposure in each image region is optimised, and the beam intensity is a function of tissue thickness and attenuation, as well as of local physical and statistical parameters in the image. Using a linear array of detectors, the system will perform on-line analysis of the image during the scan, followed by optimisation of the X-ray intensity to obtain the maximum diagnostic information from the region of interest while minimising exposure of diagnostically less important regions. This paper presents preliminary images obtained with a small-area CMOS detector developed for this application. Wedge systems were used to modulate the beam intensity during breast and dental imaging using suitable X-ray spectra. The sensitive imaging area of the sensor is 512 × 32 pixels, each 32 × 32 μm² in size. The sensor's X-ray sensitivity was increased by coupling it to a structured CsI(Tl) scintillator. In order to develop the I-ImaS prototype, the on-line data analysis and data acquisition control are based on custom-developed electronics using multiple FPGAs. Images of both breast tissues and jaw samples were acquired and different exposure optimisation algorithms applied. Results are very promising, as the average dose was reduced to around 60% of the dose delivered by conventional imaging systems without a decrease in the visibility of details. PMID:18291697

  4. Color Voyager 2 Image Showing Crescent Uranus

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This image shows a crescent Uranus, a view that Earthlings never witnessed until Voyager 2 flew near and then beyond Uranus on January 24, 1986. This planet's natural blue-green color is due to the absorption of redder wavelengths in the atmosphere by traces of methane gas. Uranus' diameter is 32,500 miles, a little over four times that of Earth. The hazy blue-green atmosphere probably extends to a depth of around 5,400 miles, where it rests above what is believed to be an icy or liquid mixture (an 'ocean') of water, ammonia, methane, and other volatiles, which in turn surrounds a rocky core perhaps a little smaller than Earth.

  5. Color Doppler imaging of retinal diseases.

    PubMed

    Dimitrova, Galina; Kato, Satoshi

    2010-01-01

    Color Doppler imaging (CDI) is a widely used method for evaluating ocular circulation that has been used in a number of studies on retinal diseases. CDI assesses blood velocity parameters by using ultrasound waves. In ophthalmology, these assessments are mainly performed on the retrobulbar blood vessels: the ophthalmic, the central retinal, and the short posterior ciliary arteries. In this review, we discuss CDI use for the assessment of retinal diseases classified into the following: vascular diseases, degenerations, dystrophies, and detachment. The retinal vascular diseases that have been investigated by CDI include diabetic retinopathy, retinal vein occlusions, retinal artery occlusions, ocular ischemic conditions, and retinopathy of prematurity. Degenerations and dystrophies included in this review are age-related macular degeneration, myopia, and retinitis pigmentosa. CDI has been used for the differential diagnosis of retinal detachment, as well as the evaluation of retrobulbar circulation in this condition. CDI is valuable for research and is a potentially useful diagnostic tool in the clinical setting. PMID:20385332

  6. Tiny Devices Project Sharp, Colorful Images

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Displaytech Inc., based in Longmont, Colorado and recently acquired by Micron Technology Inc. of Boise, Idaho, first received a Small Business Innovation Research contract in 1993 from Johnson Space Center to develop tiny, electronic, color displays, called microdisplays. Displaytech has since sold over 20 million microdisplays and was ranked one of the fastest growing technology companies by Deloitte and Touche in 2005. Customers currently incorporate the microdisplays in tiny pico-projectors, which weigh only a few ounces and attach to media players, cell phones, and other devices. The projectors can convert a digital image from the typical postage stamp size into a bright, clear, four-foot projection. The company believes sales of this type of pico-projector may exceed $1.1 billion within 5 years.

  7. Edge detection, color quantization, segmentation, texture removal, and noise reduction of color image using quaternion iterative filtering

    NASA Astrophysics Data System (ADS)

    Hsiao, Yu-Zhe; Pei, Soo-Chang

    2014-07-01

    Empirical mode decomposition (EMD) is a simple, local, adaptive, and efficient method for nonlinear and nonstationary signal analysis. However, for multidimensional signals, EMD and its variants, such as bidimensional EMD (BEMD) and multidimensional EMD (MEMD), are very slow due to the large number of envelope interpolations required. Recently, a method called iterative filtering has been proposed. This filtering-based method is not as precise as EMD, but its processing speed is very fast, and it achieves results comparable to EMD in many image and signal processing applications. We combine quaternion algebra and iterative filtering to perform edge detection, color quantization, segmentation, texture removal, and noise reduction on color images. Similar results can be obtained by combining quaternions with EMD; however, as mentioned before, EMD is slow and cumbersome. We therefore propose quaternion iterative filtering as an alternative to quaternion EMD (QEMD). The edges of color images can be detected using the intrinsic mode functions (IMFs), and color quantization results can be obtained from the residual image. The noise reduction algorithm of our method can deal with Gaussian, salt-and-pepper, and speckle noise, among others. The peak signal-to-noise ratio results are satisfactory, and the processing speed is very fast. Since textures in a color image are high-frequency components, we can also use quaternion iterative filtering to decompose a color image into many high- and low-frequency IMFs and remove textures by eliminating the high-frequency IMFs.
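    The iterative-filtering idea the abstract contrasts with EMD can be sketched as follows (a hypothetical simplification using a plain moving average as the low-pass filter; the actual method uses designed filter masks and a quaternion extension for color):

```python
def first_imf_by_iterative_filtering(signal, half=2, iters=20):
    """Extract an IMF-like component by repeatedly subtracting a moving
    average: what survives is the local high-frequency fluctuation, the
    role played by the first intrinsic mode function in EMD."""
    n = len(signal)
    sig = list(signal)
    for _ in range(iters):
        avg = []
        for i in range(n):
            lo, hi = max(0, i - half), min(n, i + half + 1)
            avg.append(sum(sig[lo:hi]) / (hi - lo))
        # Subtract the low-pass estimate; no envelope interpolation needed,
        # which is why this is much cheaper than EMD sifting.
        sig = [s - a for s, a in zip(sig, avg)]
    return sig

# A constant (pure low-frequency) signal has no fluctuation, so its
# first IMF is identically zero.
print(first_imf_by_iterative_filtering([5.0] * 6))  # -> [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```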

  8. Natural and seamless image composition with color control.

    PubMed

    Yang, Wenxian; Zheng, Jianmin; Cai, Jianfei; Rahardja, Susanto; Chen, Chang Wen

    2009-11-01

    While state-of-the-art image composition algorithms subtly handle the object boundary to achieve seamless image copy-and-paste, it is observed that they are unable to preserve the color fidelity of the source object, often require a considerable amount of user interaction, and often fail to achieve realism when there is salient discrepancy between the background textures in the source and destination images. These observations motivate our research toward color-controlled, natural, and seamless image composition with minimal user interaction. In particular, based on the Poisson image editing framework, we first propose a variational model that considers both the gradient constraint and color fidelity. The proposed model allows users to control the coloring effect caused by gradient-domain fusion. Second, to reduce user interaction, we propose a distance-enhanced random walks algorithm, through which we avoid the need for accurate image segmentation while still being able to highlight the foreground object. Third, we propose a multiresolution framework that performs image composition in different subbands so as to separate the texture and color components and simultaneously achieve smooth texture transition and the desired color control. The experimental results demonstrate that our framework achieves better and more realistic results for images with salient background color or texture differences, while providing results comparable to the state-of-the-art algorithms for images that do not require preserving the object's color fidelity and have no significant background texture discrepancy. PMID:19596637

  9. A probabilistic approach for color correction in image mosaicking applications.

    PubMed

    Oliveira, Miguel; Sappa, Angel Domingo; Santos, Vitor

    2015-02-01

    Image mosaicking applications require both geometrical and photometrical registration between the images that compose the mosaic. This paper proposes a probabilistic color correction algorithm for correcting photometrical disparities. First, the image to be color corrected is segmented into several regions using mean shift. Then, connected regions are extracted using a region fusion algorithm. Local joint image histograms of each region are modeled as collections of truncated Gaussians using a maximum likelihood estimation procedure, and local color palette mapping functions are computed from these sets of Gaussians. The color correction is performed by applying those functions to all the regions of the image. An extensive comparison with ten other state-of-the-art color correction algorithms is presented, using two different image pair data sets. Results show that the proposed approach obtains the best average scores in both data sets and evaluation metrics, and is also the most robust to failures. PMID:25438315

  10. Demosaiced pixel super-resolution for multiplexed holographic color imaging

    PubMed Central

    Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan

    2016-01-01

    To synthesize a holographic color image, one can sequentially take three holograms at different wavelengths, e.g., at red (R), green (G) and blue (B) parts of the spectrum, and digitally merge them. To speed up the imaging process by a factor of three, a Bayer color sensor-chip can also be used to demultiplex three wavelengths that simultaneously illuminate the sample and digitally retrieve individual set of holograms using the known transmission spectra of the Bayer color filters. However, because the pixels of different channels (R, G, B) on a Bayer color sensor are not at the same physical location, conventional demosaicing techniques generate color artifacts in holographic imaging using simultaneous multi-wavelength illumination. Here we demonstrate that pixel super-resolution can be merged into the color de-multiplexing process to significantly suppress the artifacts in wavelength-multiplexed holographic color imaging. This new approach, termed Demosaiced Pixel Super-Resolution (D-PSR), generates color images that are similar in performance to sequential illumination at three wavelengths, and therefore improves the speed of holographic color imaging by 3-fold. D-PSR method is broadly applicable to holographic microscopy applications, where high-resolution imaging and multi-wavelength illumination are desired. PMID:27353242
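    The sampling-offset problem the abstract describes can be made concrete with a sketch of an RGGB Bayer layout (an illustrative assumption about the filter arrangement, not part of the D-PSR algorithm itself):

```python
def bayer_sample_positions(rows, cols):
    """Pixel coordinates sampled by each channel of an RGGB Bayer mosaic.

    R, G, and B are never measured at the same physical location, which
    is why naive demosaicing of wavelength-multiplexed holograms leaves
    color artifacts that pixel super-resolution must suppress."""
    pattern = {(0, 0): 'R', (0, 1): 'G', (1, 0): 'G', (1, 1): 'B'}
    pos = {'R': [], 'G': [], 'B': []}
    for r in range(rows):
        for c in range(cols):
            pos[pattern[(r % 2, c % 2)]].append((r, c))
    return pos

# In every 2x2 cell, each channel occupies a different physical pixel.
pos = bayer_sample_positions(2, 2)
print(pos)  # -> {'R': [(0, 0)], 'G': [(0, 1), (1, 0)], 'B': [(1, 1)]}
```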

  13. Efficient text segmentation and adaptive color error diffusion for text enhancement

    NASA Astrophysics Data System (ADS)

    Kwon, Jae-Hyun; Park, Tae-Yong; Kim, Yun-Tae; Cho, Yang-Ho; Ha, Yeong-Ho

    2005-01-01

    This paper proposes an efficient text segmentation based on the maximum gradient difference (MGD), followed by an adaptive error diffusion algorithm for text enhancement. The gradients are calculated along scan lines, and the MGD values are then filled within a local window to merge text segments. If a value is above a threshold, the pixel is considered potential text. Isolated segments are then eliminated in a non-text-region filtering process. After the text segmentation, conventional error diffusion is applied to the background, while edge-enhancement error diffusion is used for the text. Since visually objectionable artifacts are inevitable when two different halftoning algorithms are used, gradual dilation is proposed to minimize the boundary artifacts in the segmented text blocks before halftoning. Sharpening based on the gradually dilated text region (GDTR) then prevents the printing of successive dots around the text-region boundaries. The method is extended to halftone color images to sharpen the text regions. The proposed adaptive error diffusion algorithm involves color halftoning that controls the amount of edge enhancement using a general error filter. However, edge enhancement produces color distortion, as edge enhancement and color difference are a trade-off. The multiplicative edge-enhancement parameters are selected based on the amount of edge sharpening and the color difference. In addition, an error factor is introduced to reduce the dot-elimination artifact generated by edge-enhancement error diffusion. In experiments, the text of a scanned image was sharper with the proposed algorithm than with conventional error diffusion, without changing the background.
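    The MGD cue can be sketched as follows (a hypothetical simplification; the window size is an assumption, and the paper's segment merging and non-text filtering are omitted): each pixel gets the max-minus-min of the horizontal gradient within a local window, which is large exactly where text strokes cause sharp up-and-down transitions.

```python
def max_gradient_difference(scanline, window=5):
    """Per-pixel MGD: max minus min of the horizontal gradient inside a
    sliding window centered on each gradient sample."""
    grad = [scanline[i + 1] - scanline[i] for i in range(len(scanline) - 1)]
    half = window // 2
    mgd = []
    for i in range(len(grad)):
        lo = max(0, i - half)
        hi = min(len(grad), i + half + 1)
        w = grad[lo:hi]
        mgd.append(max(w) - min(w))
    return mgd

# Text-like strokes (sharp rises immediately followed by sharp falls)
# produce high MGD; smooth background produces low MGD.
line = [10, 10, 200, 10, 200, 10, 10, 10]
print(max_gradient_difference(line))  # -> [380, 380, 380, 380, 380, 380, 190]
```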

  15. Domain adaptation for microscopy imaging.

    PubMed

    Becker, Carlos; Christoudias, C Mario; Fua, Pascal

    2015-05-01

    Electron and light microscopy imaging can now deliver high-quality image stacks of neural structures. However, the amount of human annotation effort required to analyze them remains a major bottleneck. While machine learning algorithms can be used to help automate this process, they require training data, which is time-consuming to obtain manually, especially in image stacks. Furthermore, due to changing experimental conditions, successive stacks often exhibit differences that are severe enough to make it difficult to use a classifier trained for a specific one on another. This means that this tedious annotation process has to be repeated for each new stack. In this paper, we present a domain adaptation algorithm that addresses this issue by effectively leveraging labeled examples across different acquisitions and significantly reducing the annotation requirements. Our approach can handle complex, nonlinear image feature transformations and scales to large microscopy datasets that often involve high-dimensional feature spaces and large 3D data volumes. We evaluate our approach on four challenging electron and light microscopy applications that exhibit very different image modalities and where annotation is very costly. Across all applications we achieve a significant improvement over the state-of-the-art machine learning methods and demonstrate our ability to greatly reduce human annotation effort. PMID:25474809

  16. EVALUATION OF COLOR ALTERATION ON FABRICS BY IMAGE ANALYSIS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Evaluation of color changes is usually done manually and is often inconsistent. Image analysis provides a method by which to evaluate color-related testing that is not only simple but also consistent. Image analysis can also be used to measure areas that were considered too large for the colorimeter...

  17. Specialized Color Targets for Spectral Reflectance Reconstruction of Magnified Images

    NASA Astrophysics Data System (ADS)

    Kruschwitz, Jennifer D. T.

    Digital images are used almost exclusively instead of film to capture visual information across many scientific fields. The colorimetric color representation within these digital images can be relayed from the digital counts produced by the camera with the use of a known color target. In image capture of magnified images, there is currently no reliable color target that can be used at multiple magnifications and give the user a solid understanding of the color ground truth within those images. The first part of this dissertation included the design, fabrication, and testing of a color target produced with optical interference coated microlenses for use in an off-axis illumination, compound microscope. An ideal target was designed to increase the color gamut for colorimetric imaging and provide the necessary "Block Dye" spectral reflectance profiles across the visible spectrum to reduce the number of color patches necessary for multiple filter imaging systems that rely on statistical models for spectral reflectance reconstruction. There are other scientific disciplines that can benefit from a specialized color target to determine the color ground truth in their magnified images and perform spectral estimation. Not every discipline has the luxury of having a multi-filter imaging system. The second part of this dissertation developed two unique ways of using an interference coated color mirror target: one that relies on multiple light-source angles, and one that leverages a dynamic color change with time. The source multi-angle technique would be used for the microelectronic discipline where the reconstructed spectral reflectance would be used to determine a dielectric film thickness on a silicon substrate, and the time varying technique would be used for a biomedical example to determine the thickness of human tear film.

  18. Imaging Radio Galaxies with Adaptive Optics

    NASA Astrophysics Data System (ADS)

    de Vries, W. H.; van Breugel, W. J. M.; Quirrenbach, A.; Roberts, J.; Fidkowski, K.

    2000-12-01

    We present 42-milliarcsecond-resolution adaptive optics near-infrared images of 3C 452 and 3C 294, two powerful radio galaxies at z=0.081 and z=1.79 respectively, obtained with the NIRSPEC/SCAM+AO instrument on the Keck telescope. The observations provide unprecedented morphological detail of radio galaxy components such as nuclear dust lanes, off-centered or binary nuclei, and merger-induced star-forming structures, all of which are key features for understanding galaxy formation and the onset of powerful radio emission. Complementary optical HST imaging data are used to construct high-resolution color images, which, for the first time, have matching optical and near-IR resolutions. Based on these maps, the extra-nuclear structural morphologies and compositions of both galaxies are discussed. Furthermore, detailed brightness-profile analysis of 3C 452 allows a direct comparison with a large literature sample of nearby ellipticals, all of which have been observed in the optical and near-IR by HST. Both the imaging data and the profile information on 3C 452 are consistent with it being a relatively diminutive and well-evolved elliptical, in stark contrast to 3C 294, which seems to be in its initial formation throes, with an active AGN off-centered from the main body of the galaxy. These results are discussed further within the framework of radio galaxy triggering and the formation of massive ellipticals. The work of WdV and WvB was performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48. The work at UCSD has been supported by the NSF Science and Technology Center for Adaptive Optics, under agreement No. AST-98-76783.

  19. Local intensity adaptive image coding

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.

    1989-01-01

    The objective of preprocessing for machine vision is to extract intrinsic target properties, the most important of which are ordinarily structure and reflectance. Illumination in space, however, is a significant problem: the extreme range of light intensity, stretching from deep shadow to highly reflective surfaces in direct sunlight, impairs the effectiveness of standard approaches to machine vision. To overcome this critical constraint, an image coding scheme is being investigated that combines local intensity adaptivity, image enhancement, and data compression. It is very effective under the highly variant illumination that can exist within a single frame or field of view, and it is very robust to noise at low illumination. Some of the theory and salient features of the coding scheme are reviewed, its performance is characterized in a simulated space application, and the research and development activities are described.

  20. Color image encryption scheme using CML and DNA sequence operations.

    PubMed

    Wang, Xing-Yuan; Zhang, Hui-Li; Bao, Xue-Mei

    2016-06-01

    In this paper, an encryption algorithm for color images using a chaotic system and DNA (deoxyribonucleic acid) sequence operations is proposed. The three components of the color plain image are employed to construct a matrix, and a confusion operation is then performed on the pixel matrix using sequences generated by the spatiotemporal chaotic system, i.e., a CML (coupled map lattice). DNA encoding and decoding rules are introduced in the permutation phase. An extended Hamming distance is proposed to generate new initial values for the CML iteration from the color plain image. The rows and columns of the DNA matrix are permuted, and the color cipher image is then obtained from this matrix. Theoretical analysis and experimental results show that the cryptosystem is secure and practical, and it is suitable for encrypting color images of any size. PMID:27026385
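    The confusion stage described above is driven by a coupled map lattice. A minimal sketch of the standard CML recurrence (the parameters eps and mu are illustrative assumptions; the paper's derivation of initial values from the plain image via the extended Hamming distance, and the DNA coding, are omitted):

```python
def cml_step(lattice, eps=0.1, mu=3.99):
    """One CML iteration: each site is driven by the logistic map
    f(x) = mu*x*(1-x) and diffusively coupled to its two neighbors
    (periodic boundary). The chaotic site values would later be
    quantized to index pixel-confusion and DNA-rule choices."""
    f = lambda x: mu * x * (1.0 - x)
    n = len(lattice)
    return [(1.0 - eps) * f(lattice[i])
            + (eps / 2.0) * (f(lattice[(i - 1) % n]) + f(lattice[(i + 1) % n]))
            for i in range(n)]

# Iterate a 4-site lattice; values stay in (0, 1) but are highly
# sensitive to the initial state, which is what makes them usable
# as a keystream.
state = [0.23, 0.61, 0.47, 0.89]
for _ in range(3):
    state = cml_step(state)
print(state)
```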

  1. Exploring the use of memory colors for image enhancement

    NASA Astrophysics Data System (ADS)

    Xue, Su; Tan, Minghui; McNamara, Ann; Dorsey, Julie; Rushmeier, Holly

    2014-02-01

    Memory colors are those colors recalled in association with familiar objects. While some previous work introduces this concept to assist digital image enhancement, its basis, i.e., on-screen memory colors, has not been appropriately investigated. In addition, the resulting adjustment methods are not evaluated from a perceptual point of view. In this paper, we first perform a context-free perceptual experiment to establish the overall distributions of screen memory colors for three pervasive objects. We then use a context-based experiment to locate the most representative memory colors; at the same time, we investigate the interactions of memory colors between different objects. Finally, we show a simple yet effective application that uses representative memory colors to enhance digital images. A user study is performed to evaluate the performance of our technique.

  2. Processing halftone color images by vector space methods.

    PubMed

    Liu, Li; Yang, Yongyi; Stark, Henry

    2006-02-01

    The reproduction of color images by color halftoning can be characterized by the Neugebauer model/equation. However, the Neugebauer equation is not easy to solve because of the highly nonlinear relationship between the underlying Neugebauer primaries and the colorants. We attempt to solve the Neugebauer equation by vector space methods. The proposed method of solution is applicable to any number of colorants, although our experimental results are confined to the CMY and CMYK cases. Among the constraints we consider are those related to a bound on the permissible amount of total ink and a bound on the total cost of applying colorants to achieve a satisfactory level of color reproduction. Our results demonstrate that the vector space method is a feasible approach for solving for the required amounts of colorants in the constrained color halftoning problem. PMID:16477829
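    The Neugebauer model mentioned above predicts a printed color as an area-weighted sum of the colors of the eight CMY overprint combinations (the Neugebauer primaries). A minimal sketch of that forward model with idealized block-dye primaries and Demichel dot-overlap weights (real primaries would be measured on the target printer; this is only the equation the paper's vector space method inverts, not the solver itself):

```python
from itertools import product

# Idealized block-dye reflectances (R, G, B) for the 8 CMY Neugebauer
# primaries; an assumption for illustration, not measured data.
IDEAL = {
    (0, 0, 0): (1, 1, 1),  # paper white
    (1, 0, 0): (0, 1, 1),  # cyan
    (0, 1, 0): (1, 0, 1),  # magenta
    (0, 0, 1): (1, 1, 0),  # yellow
    (1, 1, 0): (0, 0, 1),  # cyan + magenta = blue
    (1, 0, 1): (0, 1, 0),  # cyan + yellow = green
    (0, 1, 1): (1, 0, 0),  # magenta + yellow = red
    (1, 1, 1): (0, 0, 0),  # three-color black
}

def neugebauer_predict(c, m, y, primaries=IDEAL):
    """Neugebauer equation: the printed color is the area-weighted sum of
    the primaries, with Demichel weights assuming random dot overlap."""
    out = [0.0, 0.0, 0.0]
    for dot in product((0, 1), repeat=3):
        w = 1.0
        for amount, on in zip((c, m, y), dot):
            w *= amount if on else 1.0 - amount
        for k in range(3):
            out[k] += w * primaries[dot][k]
    return out

# A 50% cyan tint averages paper white and solid cyan.
print(neugebauer_predict(0.5, 0.0, 0.0))  # -> [0.5, 1.0, 1.0]
```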

  3. Information-Adaptive Image Encoding and Restoration

    NASA Technical Reports Server (NTRS)

    Park, Stephen K.; Rahman, Zia-ur

    1998-01-01

    The multiscale retinex with color restoration (MSRCR) has shown itself to be a very versatile automatic image enhancement algorithm that simultaneously provides dynamic range compression, color constancy, and color rendition. A number of algorithms exist that provide one or more of these features, but not all. In this paper we compare the performance of the MSRCR with techniques that are widely used for image enhancement. Specifically, we compare the MSRCR with color adjustment methods such as gamma correction and gain/offset application, histogram modification techniques such as histogram equalization and manual histogram adjustment, and other more powerful techniques such as homomorphic filtering and 'burning and dodging'. The comparison is carried out by testing the suite of image enhancement methods on a set of diverse images. We find that though some of these techniques work well for some of these images, only the MSRCR performs universally well on the test set.
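    The dynamic-range-compressing core of the retinex family compared here can be sketched in one dimension (a toy single-scale retinex with a box surround; the MSRCR itself uses Gaussian surrounds at several scales plus a color restoration step, none of which are shown):

```python
import math

def single_scale_retinex(signal, half=2):
    """Single-scale retinex on a 1D luminance signal: the log of each
    sample minus the log of its local (box) surround average. Output
    depends on local contrast rather than absolute intensity, which is
    the source of retinex's dynamic range compression."""
    out = []
    n = len(signal)
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        surround = sum(signal[lo:hi]) / (hi - lo)
        out.append(math.log(signal[i]) - math.log(surround))
    return out

# A bright region next to a dark one: flat interiors map near zero at
# either brightness, while the edge produces the large responses.
vals = single_scale_retinex([10, 10, 10, 200, 200, 200])
print([round(v, 3) for v in vals])
```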

  4. Evaluation of color error and noise on simulated images

    NASA Astrophysics Data System (ADS)

    Mornet, Clémence; Vaillant, Jérôme; Decroux, Thomas; Hérault, Didier; Schanen, Isabelle

    2010-01-01

    The evaluation of CMOS sensor performance in terms of color accuracy and noise is a big challenge for camera phone manufacturers. In this paper, we present a tool developed with Matlab at STMicroelectronics which allows quality parameters to be evaluated on simulated images. These images are computed based on measured or predicted quantum efficiency (QE) curves and a noise model. By setting the parameters of integration time and illumination, the tool optimizes the color correction matrix (CCM) and calculates the color error, color saturation, and signal-to-noise ratio (SNR). After this color correction optimization step, a graphical user interface (GUI) has been designed to display a simulated image at a chosen illumination level, with all the characteristics of a real image taken by the sensor with the previous color correction. Simulated images can be a synthetic Macbeth ColorChecker, for which the reflectance of each patch is known; a multi-spectral image, described by the reflectance spectrum of each pixel; or an image taken at a high light level. A validation of the results has been performed with ST sensors under development. Finally, we present two applications: one based on the trade-off between color saturation and noise obtained by optimizing the CCM, and the other based on demosaicking SNR trade-offs.

  5. Color separation in forensic image processing using interactive differential evolution.

    PubMed

    Mushtaq, Harris; Rahnamayan, Shahryar; Siddiqi, Areeb

    2015-01-01

    Color separation is an image processing technique that has often been used in forensic applications to differentiate among variant colors and to remove unwanted image interference. This process can reveal important information such as covered text or fingerprints in forensic investigation procedures. However, several limitations prevent users from selecting the appropriate parameters pertaining to the desired and undesired colors. This study proposes the hybridization of interactive differential evolution (IDE) and a color separation technique that no longer requires users to guess the required control parameters. The IDE algorithm optimizes these parameters in an interactive manner by utilizing human visual judgment to uncover desired objects. A comprehensive experimental verification has been conducted on various sample test images, including heavily obscured texts, texts with subtle color variations, and fingerprint smudges. The advantage of IDE is apparent as it effectively optimizes the color separation parameters at a level indiscernible to the naked eye. PMID:25400037
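A minimal sketch of the underlying idea, under stated assumptions: a tiny differential evolution loop (rand/1/bin scheme) searches for a target color and distance threshold that separate "text" pixels from background. In the paper a human judges each candidate interactively; here a synthetic ground truth stands in for that judgment, and all colors and values are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic "document": text pixels near one color, background near another
text_color = np.array([120.0, 60.0, 40.0])
bg_color = np.array([200.0, 190.0, 180.0])
is_text = rng.random(500) < 0.3
pixels = np.where(is_text[:, None], text_color, bg_color) + rng.normal(0, 5, (500, 3))

def separate(params):
    # params: a target color (3 values) and a color-distance threshold
    target, thresh = params[:3], params[3]
    return np.linalg.norm(pixels - target, axis=1) < thresh

def fitness(params):
    # stand-in for the human judgment step (lower is better)
    return -np.mean(separate(params) == is_text)

# minimal differential evolution (rand/1/bin)
pop = rng.uniform([0, 0, 0, 1], [255, 255, 255, 200], size=(20, 4))
scores = np.array([fitness(p) for p in pop])
for _ in range(200):
    for i in range(len(pop)):
        a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
        trial = np.where(rng.random(4) < 0.9, a + 0.8 * (b - c), pop[i])
        f = fitness(trial)
        if f <= scores[i]:
            pop[i], scores[i] = trial, f
print(scores.min())
```

The interactive variant simply replaces `fitness` with a human ranking of the displayed separation results.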

  6. Color preservation for tone reproduction and image enhancement

    NASA Astrophysics Data System (ADS)

    Hsin, Chengho; Lee, Zong Wei; Lee, Zheng Zhan; Shin, Shaw-Jyh

    2014-01-01

    Applications based on luminance processing often face the problem of recovering the original chrominance in the output color image. A common approach to reconstructing a color image from the luminance output is to preserve the original hue and saturation. However, this approach often produces an overly colorful image, which is undesirable. We develop a color preservation method that not only retains the ratios of the input tri-chromatic values but also adjusts the output chroma in an appropriate way. Linearizing the output luminance is the key idea in realizing this method. In addition, a lightness difference metric and a colorfulness difference metric are proposed to evaluate the performance of color preservation methods. Results show that the proposed method performs consistently better than existing approaches.
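The ratio-preserving baseline described above can be sketched in a few lines: scale each pixel's tri-chromatic values by the luminance gain, which keeps the R:G:B ratios (and hence hue and saturation) of the input. The luminance weights are the Rec. 709 coefficients, used here as an illustrative assumption; the paper's chroma-adjustment step is not reproduced:

```python
import numpy as np

def preserve_ratios(rgb, lum_out):
    # scale tri-chromatic values by the luminance gain; this keeps
    # the R:G:B ratios, i.e. the original hue and saturation
    lum_in = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    gain = lum_out / np.maximum(lum_in, 1e-6)
    return np.clip(rgb * gain[..., None], 0.0, 1.0)

rgb = np.array([[[0.2, 0.4, 0.6]]])          # one pixel with ratios 1:2:3
lum_in = 0.2126 * 0.2 + 0.7152 * 0.4 + 0.0722 * 0.6
out = preserve_ratios(rgb, np.array([[1.5 * lum_in]]))  # brightened luminance
print(np.round(out, 3))  # [[[0.3 0.6 0.9]]]
```

The clipping step is one source of the over-colorful artifacts the abstract mentions: channels that saturate break the preserved ratios.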

  7. Unsupervised color image segmentation using a lattice algebra clustering technique

    NASA Astrophysics Data System (ADS)

    Urcid, Gonzalo; Ritter, Gerhard X.

    2011-08-01

    In this paper we introduce a lattice algebra clustering technique for segmenting digital images in the Red-Green-Blue (RGB) color space. The proposed technique is a two-step procedure. Given an input color image, the first step determines the finite set of its extreme pixel vectors within the color cube by means of the scaled min-W and max-M lattice auto-associative memory matrices, including the minimum and maximum vector bounds. In the second step, maximal rectangular boxes enclosing each extreme color pixel are found using the Chebyshev distance between color pixels; afterwards, clustering is performed by assigning each image pixel to its corresponding maximal box. The two steps in our proposed method are completely unsupervised or autonomous. Illustrative examples are provided to demonstrate the color segmentation results, including a brief numerical comparison with two other non-maximal variations of the same clustering technique.
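The second step's assignment rule can be illustrated directly. Assuming a set of extreme color pixels has already been found (the hand-picked values below stand in for the lattice-memory step), each pixel is assigned to the nearest extreme under the Chebyshev (L-infinity) distance:

```python
import numpy as np

# hypothetical extreme color pixels from step one (hand-picked here)
extremes = np.array([[250, 20, 20], [20, 240, 30], [30, 30, 250]], dtype=float)

pixels = np.array([[240, 40, 35], [25, 200, 60], [10, 40, 230]], dtype=float)

# Chebyshev distance from each pixel to each extreme color
d = np.abs(pixels[:, None, :] - extremes[None, :, :]).max(axis=2)
labels = d.argmin(axis=1)
print(labels)  # [0 1 2]
```

Using the L-infinity norm makes each cluster an axis-aligned box in the RGB cube, matching the "maximal rectangular boxes" of the abstract.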

  8. Adaptive image segmentation by quantization

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Yun, David Y.

    1992-12-01

    Segmentation of images into texturally homogeneous regions is a fundamental problem in an image understanding system. Most region-oriented segmentation approaches suffer from the problem of selecting different thresholds for different images. In this paper an adaptive image segmentation method based on vector quantization is presented. It automatically segments images without preset thresholds. The approach contains a feature extraction module and a two-layer hierarchical clustering module, with a vector quantizer (VQ) implemented by a competitive-learning neural network in the first layer. A near-optimal competitive learning algorithm (NOLA) is employed to train the vector quantizer. NOLA combines the advantages of both the Kohonen self-organizing feature map (KSFM) and the K-means clustering algorithm. After the VQ is trained, the weights of the network and the number of input vectors clustered by each neuron form a 3-D topological feature map with separable hills aggregated from similar vectors. This overcomes the inability to visualize the geometric properties of data in a high-dimensional space that afflicts most other clustering algorithms. The second clustering algorithm operates on the feature map instead of the input set itself. Since the number of units in the feature map is much smaller than the number of feature vectors in the feature set, it is easy to check all peaks and find the `correct' number of clusters, also a key problem in current clustering techniques. In the experiments, we compare our algorithm with the K-means clustering method on a variety of images. The results show that our algorithm achieves better performance.
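The first-layer VQ can be sketched with a plain winner-take-all competitive learning rule (a simple stand-in for NOLA, whose details are not given here). The neuron weights plus the per-neuron win counts are exactly the "hills" of the feature map that the second clustering layer would then analyze; the data and layer sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
# feature vectors drawn from two texture-like clusters
data = np.vstack([rng.normal(0.2, 0.05, (200, 4)),
                  rng.normal(0.8, 0.05, (200, 4))])
rng.shuffle(data)

weights = rng.random((8, 4))      # 8 neurons in the VQ layer
lr = 0.1
for _ in range(5):                # winner-take-all competitive learning
    for x in data:
        w = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
        weights[w] += lr * (x - weights[w])

# histogram of input vectors won by each neuron: the "hills" of the
# topological feature map that the second clustering layer operates on
counts = np.zeros(8, dtype=int)
for x in data:
    counts[int(np.argmin(np.linalg.norm(weights - x, axis=1)))] += 1
print(counts.sum())  # 400
```

Peak-finding over `counts` (rather than over the raw 400 vectors) is what makes discovering the number of clusters tractable.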

  9. New Colors for Histology: Optimized Bivariate Color Maps Increase Perceptual Contrast in Histological Images

    PubMed Central

    Kather, Jakob Nikolas; Weis, Cleo-Aron; Marx, Alexander; Schuster, Alexander K.; Schad, Lothar R.; Zöllner, Frank Gerrit

    2015-01-01

    Background: Accurate evaluation of immunostained histological images is required for reproducible research in many different areas and forms the basis of many clinical decisions. The quality and efficiency of histopathological evaluation is limited by the information content of a histological image, which is primarily encoded as perceivable contrast differences between objects in the image. However, the colors of chromogen and counterstain used for histological samples are not always optimally distinguishable, even under optimal conditions. Methods and Results: In this study, we present a method to extract the bivariate color map inherent in a given histological image and to retrospectively optimize this color map. We use a novel, unsupervised approach based on color deconvolution and principal component analysis to show that the commonly used blue and brown color hues in Hematoxylin-3,3'-Diaminobenzidine (DAB) images are poorly suited for human observers. We then demonstrate that it is possible to construct improved color maps according to objective criteria and that these color maps can be used to digitally re-stain histological images. Validation: To validate whether this procedure improves distinguishability of objects and background in histological images, we re-stain phantom images and N = 596 large histological images of immunostained samples of human solid tumors. We show that perceptual contrast is improved by a factor of 2.56 in phantom images and up to a factor of 2.17 in sets of histological tumor images. Context: Thus, we provide an objective and reliable approach to measure object distinguishability in a given histological image and to maximize visual information available to a human observer. This method could easily be incorporated in digital pathology image viewing systems to improve accuracy and efficiency in research and diagnostics. PMID:26717571

  10. SWT voting-based color reduction for text detection in natural scene images

    NASA Astrophysics Data System (ADS)

    Ikica, Andrej; Peer, Peter

    2013-12-01

    In this article, we propose a novel stroke width transform (SWT) voting-based color reduction method for detecting text in natural scene images. Unlike other text detection approaches that mostly rely on either text structure or color, the proposed method combines both by supervising text-oriented color reduction process with additional SWT information. SWT pixels mapped to color space vote in favor of the color they correspond to. Colors receiving high SWT vote most likely belong to text areas and are blocked from being mean-shifted away. Literature does not explicitly address SWT search direction issue; thus, we propose an adaptive sub-block method for determining correct SWT direction. Both SWT voting-based color reduction and SWT direction determination methods are evaluated on binary (text/non-text) images obtained from a challenging Computer Vision Lab optical character recognition database. SWT voting-based color reduction method outperforms the state-of-the-art text-oriented color reduction approach.

  11. Color normalization for robust evaluation of microscopy images

    NASA Astrophysics Data System (ADS)

    Švihlík, Jan; Kybic, Jan; Habart, David

    2015-09-01

    This paper deals with color normalization of microscopy images of Langerhans islets in order to increase robustness of the islet segmentation to illumination changes. The main application is automatic quantitative evaluation of the islet parameters, useful for determining the feasibility of islet transplantation in diabetes. First, background illumination inhomogeneity is compensated and a preliminary foreground/background segmentation is performed. The color normalization itself is done in either lαβ or logarithmic RGB color spaces, by comparison with a reference image. The color-normalized images are segmented using color-based features and pixel-wise logistic regression, trained on manually labeled images. Finally, relevant statistics such as the total islet area are evaluated in order to determine the success likelihood of the transplantation.
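A hedged sketch of reference-based color normalization: match each channel's mean and standard deviation to those of the reference image in log-RGB space. This Reinhard-style moment matching is a simple stand-in for the paper's lαβ / logarithmic-RGB normalization, and the images below are synthetic:

```python
import numpy as np

def normalize_to_reference(img, ref, eps=1e-6):
    # match per-channel mean and std in log-RGB space to the reference
    src = np.log(img + eps)
    dst = np.log(ref + eps)
    out = (src - src.mean(axis=(0, 1))) / (src.std(axis=(0, 1)) + eps)
    out = out * dst.std(axis=(0, 1)) + dst.mean(axis=(0, 1))
    return np.clip(np.exp(out) - eps, 0.0, 1.0)

rng = np.random.default_rng(2)
img = rng.random((32, 32, 3)) * 0.5          # dim image
ref = rng.random((32, 32, 3)) * 0.5 + 0.5    # bright reference
out = normalize_to_reference(img, ref)
print(out.mean() > img.mean())  # True
```

Working in a log-domain space makes multiplicative illumination changes additive, which is why the moment matching is done there rather than in linear RGB.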

  12. An innovative lossless compression method for discrete-color images.

    PubMed

    Alzahir, Saif; Borici, Arber

    2015-01-01

    In this paper, we present an innovative method for lossless compression of discrete-color images, such as map images, graphics, and GIS images, as well as binary images. This method comprises two main components. The first is a fixed-size codebook encompassing 8×8 bit blocks of two-tone data along with their corresponding Huffman codes and their relative probabilities of occurrence. The probabilities were obtained from a very large set of discrete-color images and are also used for arithmetic coding. The second component is row-column reduction coding, which encodes those blocks that are not in the codebook. The proposed method has been successfully applied to two major image categories: 1) images with a predetermined number of discrete colors, such as digital maps, graphs, and GIS images, and 2) binary images. The results show that our method compresses images from both categories (discrete-color and binary images) by 90% in most cases, and outperforms JBIG-2 by 5%-20% for binary images and by 2%-6.3% for discrete-color images on average. PMID:25330487
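The codebook component can be illustrated with a toy version: frequent two-tone blocks map to short prefix-free codes, and blocks outside the codebook fall through to a fallback encoding. The 2×2 blocks, the specific codes, and the escape-plus-raw-bits fallback are all illustrative assumptions (the paper uses 8×8 blocks, trained Huffman codes, and row-column reduction coding):

```python
import numpy as np

# toy codebook: a few frequent 2x2 binary blocks with prefix-free codes
codebook = {
    (0, 0, 0, 0): "0",      # all-white block, most frequent -> shortest code
    (1, 1, 1, 1): "10",
    (1, 0, 1, 0): "110",
}

def encode(img):
    bits = []
    for r in range(0, img.shape[0], 2):
        for c in range(0, img.shape[1], 2):
            block = tuple(img[r:r + 2, c:c + 2].flatten())
            if block in codebook:
                bits.append(codebook[block])
            else:
                # escape code + raw bits stands in for the paper's
                # row-column reduction coding of out-of-codebook blocks
                bits.append("111" + "".join(map(str, block)))
    return "".join(bits)

img = np.array([[0, 0, 1, 1], [0, 0, 1, 1], [1, 0, 0, 1], [1, 0, 1, 0]])
print(encode(img))  # prints "0101101110110"
```

Because "0", "10", "110", and the "111" escape are prefix-free, the bitstream can be decoded unambiguously.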

  13. Refinement of Colored Mobile Mapping Data Using Intensity Images

    NASA Astrophysics Data System (ADS)

    Yamakawa, T.; Fukano, K.; Onodera, R.; Masuda, H.

    2016-06-01

    Mobile mapping systems (MMS) can capture dense point-clouds of urban scenes. For visualizing realistic scenes using point-clouds, RGB colors have to be added to the point-clouds. To generate colored point-clouds in a post-process, each point is projected onto camera images and an RGB color is copied to the point at the projected position. However, incorrect colors are often added to point-clouds because of the misalignment of laser scanners, calibration errors between cameras and laser scanners, or failures of GPS acquisition. In this paper, we propose a new method to correct the RGB colors of point-clouds captured by an MMS. In our method, the RGB colors of a point-cloud are corrected by comparing intensity images and RGB images. However, since an MMS outputs sparse and anisotropic point-clouds, regular images cannot be obtained from the intensities of points. Therefore, we convert the point-cloud into a mesh model and project triangle faces onto image space, on which regular lattices are defined. Then we extract edge features from the intensity images and RGB images and detect their correspondences. In our experiments, our method worked very well for correcting the RGB colors of point-clouds captured by an MMS.

  14. Spatial imaging in color and HDR: prometheus unchained

    NASA Astrophysics Data System (ADS)

    McCann, John J.

    2013-03-01

    The Human Vision and Electronic Imaging Conferences (HVEI) at the IS&T/SPIE Electronic Imaging meetings have brought together research in the fundamentals of both vision and digital technology. This conference has incorporated many color disciplines that have contributed to the theory and practice of today's imaging: color constancy, models of vision, digital output, high-dynamic-range imaging, and the understanding of perceptual mechanisms. Before digital imaging, silver halide color photography was a pixel-based mechanism. Color films are closely tied to colorimetry, the science of matching pixels in a black surround. The quanta catch of the sensitized silver salts determines the amount of colored dyes in the final print. The rapid expansion of digital imaging over the past 25 years has eliminated the limitations of using small local regions in forming images. Spatial interactions can now generate images more like vision. Since the 1950's, neurophysiology has shown that post-receptor neural processing is based on spatial interactions. These results reinforced the findings of 19th century experimental psychology. This paper reviews the role of HVEI in color, emphasizing the interaction of research on vision and the new algorithms and processes made possible by electronic imaging.

  15. Nonlocal Mumford-Shah regularizers for color image restoration.

    PubMed

    Jung, Miyoun; Bresson, Xavier; Chan, Tony F; Vese, Luminita A

    2011-06-01

    We propose here a class of restoration algorithms for color images, based upon the Mumford-Shah (MS) model and nonlocal image information. The Ambrosio-Tortorelli and Shah elliptic approximations are defined to work in a small local neighborhood, which is sufficient to denoise smooth regions with sharp boundaries. However, texture is nonlocal in nature and requires semilocal/nonlocal information for efficient image denoising and restoration. Inspired by recent work (the nonlocal means of Buades, Coll, and Morel, and the nonlocal total variation of Gilboa and Osher), we extend the local Ambrosio-Tortorelli and Shah approximations to the MS functional to novel nonlocal formulations, for better restoration of fine structures and texture. We present several applications of the proposed nonlocal MS regularizers in image processing such as color image denoising, color image deblurring in the presence of Gaussian or impulse noise, color image inpainting, color image super-resolution, and color filter array demosaicing. In all the applications, the proposed nonlocal regularizers produce superior results over the local ones, especially in image inpainting with large missing regions. We also prove several characterizations of minimizers based upon dual norm formulations. PMID:21078579

  16. Objective color classification of ecstasy tablets by hyperspectral imaging.

    PubMed

    Edelman, Gerda; Lopatka, Martin; Aalders, Maurice

    2013-07-01

    The general procedure followed in the examination of ecstasy tablets for profiling purposes includes a color description, which depends highly on the observers' perception. This study aims to provide objective quantitative color information using visible hyperspectral imaging. Both self-manufactured and illicit tablets, created with different amounts of known colorants were analyzed. We derived reflectance spectra from hyperspectral images of these tablets, and successfully determined the most likely colorant used in the production of all self-manufactured tablets and four of five illicit tablets studied. Upon classification, the concentration of the colorant was estimated using a photon propagation model and a single reference measurement of a tablet of known concentration. The estimated concentrations showed a high correlation with the actual values (R(2) = 0.9374). The achieved color information, combined with other physical and chemical characteristics, can provide a powerful tool for the comparison of tablet seizures, which may reveal their origin. PMID:23683098

  17. Color image quality in projection displays: a case study

    NASA Astrophysics Data System (ADS)

    Strand, Monica; Hardeberg, Jon Y.; Nussbaum, Peter

    2005-01-01

    Recently the use of projection displays has increased dramatically in different applications such as digital cinema, home theatre, and business and educational presentations. Even if the color image quality of these devices has improved significantly over the years, it is still a common situation for users of projection displays that the projected colors differ significantly from the intended ones. The study presented in this paper attempts to analyze the color image quality of a large set of projection display devices, particularly investigating the variations in color reproduction. As a case study, a set of 14 projectors (LCD and DLP technology) at Gjøvik University College have been tested under four different conditions: dark and light room, with and without using an ICC-profile. To find out more about the importance of the illumination conditions in a room, and the degree of improvement when using an ICC-profile, the results from the measurements were processed and analyzed. Eye-One Beamer from GretagMacbeth was used to make the profiles. The color image quality was evaluated both visually and by color difference calculations. The results from the analysis indicated large visual and colorimetric differences between the projectors. The DLP projectors generally have smaller color gamuts than the LCD projectors, and the color gamuts of older projectors are significantly smaller than those of newer ones. The amount of ambient light reaching the screen is of great importance for the visual impression: if too much reflected or other ambient light reaches the screen, the projected image becomes pale and has low contrast. When using a profile, the differences in colors between the projectors get smaller and the colors appear more correct. For one device, the average ΔE*ab color difference when compared to a relative white reference was reduced from 22 to 11, for another from 13 to 6. Blue colors have the largest variations among the projection displays and make them
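The ΔE*ab figures quoted in this record are the CIE 1976 color difference, the Euclidean distance between two colors in CIELAB space. A quick sketch (the measured and reference Lab values below are hypothetical):

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    # CIE 1976 ΔE*ab: Euclidean distance in CIELAB (L*, a*, b*) space
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

measured = (62.0, 8.0, -14.0)    # hypothetical projected patch, in Lab
reference = (65.0, 2.0, -5.0)    # intended color of the same patch
print(round(delta_e_ab(measured, reference), 2))  # 11.22
```

A ΔE*ab of roughly 2 is commonly cited as a just-noticeable difference under controlled viewing, so the reported reductions (22 to 11, 13 to 6) remain well above perceptual thresholds.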

  18. Color image quality in projection displays: a case study

    NASA Astrophysics Data System (ADS)

    Strand, Monica; Hardeberg, Jon Y.; Nussbaum, Peter

    2004-10-01

    Recently the use of projection displays has increased dramatically in different applications such as digital cinema, home theatre, and business and educational presentations. Even if the color image quality of these devices has improved significantly over the years, it is still a common situation for users of projection displays that the projected colors differ significantly from the intended ones. The study presented in this paper attempts to analyze the color image quality of a large set of projection display devices, particularly investigating the variations in color reproduction. As a case study, a set of 14 projectors (LCD and DLP technology) at Gjøvik University College have been tested under four different conditions: dark and light room, with and without using an ICC-profile. To find out more about the importance of the illumination conditions in a room, and the degree of improvement when using an ICC-profile, the results from the measurements were processed and analyzed. Eye-One Beamer from GretagMacbeth was used to make the profiles. The color image quality was evaluated both visually and by color difference calculations. The results from the analysis indicated large visual and colorimetric differences between the projectors. The DLP projectors generally have smaller color gamuts than the LCD projectors, and the color gamuts of older projectors are significantly smaller than those of newer ones. The amount of ambient light reaching the screen is of great importance for the visual impression: if too much reflected or other ambient light reaches the screen, the projected image becomes pale and has low contrast. When using a profile, the differences in colors between the projectors get smaller and the colors appear more correct. For one device, the average ΔE*ab color difference when compared to a relative white reference was reduced from 22 to 11, for another from 13 to 6. Blue colors have the largest variations among the projection displays and make them

  19. Color image reproduction: the evolution from print to multimedia

    NASA Astrophysics Data System (ADS)

    MacDonald, Lindsay W.

    1997-02-01

    The electronic pre-press industry has undergone a very rapid evolution over the past decade, driven by the accelerating performance of desktop computers and affordable application software for image manipulation, page layout and color separation. These have been supported by the steady development of color scanners, digital cameras, proof printers, RIPs and imagesetters, all of which make the process of reproducing color images in print easier than ever before. But is color print itself in decline as a medium? New channels of delivery for digital color images include CD-ROM, wideband networks and the Internet, with soft-copy screen display competing with hard-copy print for applications ranging from corporate brochures to home shopping. Present indications are that the most enduring of the graphic arts skills in the new multimedia world will be image rendering and production control rather than those related to photographic film and ink on paper.

  20. Colorful holographic imaging reconstruction based on one thin phase plate

    NASA Astrophysics Data System (ADS)

    Zhu, Jing; Song, Qiang; Wang, Jian; Yue, Weirui; Zhang, Fang; Huang, Huijie

    2014-11-01

    A method of realizing color holographic imaging using a single thin diffractive optical element (DOE) is proposed. This method can reconstruct a two-dimensional color image with one phase plate at a user-defined distance from the DOE. To improve the resolution of the reproduced color images, the DOE is optimized by combining the Gerchberg-Saxton algorithm with a compensation algorithm. To accelerate the computational process, the Graphics Processing Unit (GPU) is used. Finally, the simulation results are analyzed to verify the validity of this method.
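A minimal single-wavelength Gerchberg-Saxton sketch, under stated assumptions: a far-field (FFT) propagation model, a phase-only DOE, and a random target intensity. The paper's compensation algorithm, color multiplexing, and GPU acceleration are omitted:

```python
import numpy as np

rng = np.random.default_rng(4)
target = rng.random((32, 32))        # desired reconstruction intensity
target_amp = np.sqrt(target)

# Gerchberg-Saxton: alternate between the DOE plane and the image plane,
# keeping the computed phase and replacing amplitudes with the constraints
phase = rng.uniform(0.0, 2.0 * np.pi, (32, 32))
for _ in range(50):
    field = np.exp(1j * phase)                         # phase-only DOE
    image = np.fft.fft2(field)                         # propagate to image plane
    image = target_amp * np.exp(1j * np.angle(image))  # impose target amplitude
    phase = np.angle(np.fft.ifft2(image))              # back to DOE plane

recon = np.abs(np.fft.fft2(np.exp(1j * phase))) ** 2
corr = np.corrcoef(target.ravel(), recon.ravel())[0, 1]
print(round(float(corr), 2))
```

For imaging at a finite user-defined distance, the FFT pair would be replaced by a Fresnel propagation kernel.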

  1. Improving color saturation for color managed images rendered using the perceptual intent

    NASA Astrophysics Data System (ADS)

    Marcu, Gabriel G.

    2008-01-01

    In many cases, rendering images using a color management approach may result in unsatisfactory color, particularly when the gamut mismatch is large and the source/destination profile pair does not lead to a satisfactory color. This is more often the case when images on laptop computer screens with limited color gamut are transferred to print and color management is used. For those cases, we present a method of improving image quality by manipulating the display profile such that the color quality of the printouts is not compromised by the small gamut of the portable display and color management. The basic idea consists of using, in the color management pipeline, a virtual gamut that plays the role of either the source or the destination, depending on the type of transformation and the gamut sizes of the source and destination in the color management pipeline. If the mismatch between the source and destination gamuts is below a threshold, the virtual gamut is not used. The virtual gamut is constructed directly in the CIE 1931 chromaticity diagram, although other color spaces may be used. A procedure to derive a constant-hue line from two adjacent lines is presented. The chromaticities of the virtual gamut are computed based on the replaced gamut's chromaticities and a weighting factor computed automatically at the time of rendering. The method gives very pleasing results in prints, for example, and the boost in saturation approximates very well the color enhancement achieved in silver-halide photographic prints, even for relatively modest print media.

  2. Reflectance model for recto-verso color halftone images

    NASA Astrophysics Data System (ADS)

    Tian, Dongwen; Wang, Qingjuan; Zhang, Yixin

    2012-01-01

    In the color reproduction process, accurately predicting the color of recto-verso images and establishing a spectral reflectance model for halftone images are of great concern in the field of imaging quality control. The scattering of light within paper and the penetration of ink into the substrate are the key factors affecting color reproduction. A reflectance model for recto-verso color halftone prints that considers these factors is introduced in this paper. The model is based on the assumptions that the colorant is non-scattering and that the paper is a strongly scattering substrate. By accounting for multiple internal reflections of light between the paper substrate and the print-air interface, and for light traveling along oblique paths as in the Williams-Clapper model, we propose a precise spectral reflectance prediction model for recto-verso halftone images. The model also takes into account ink spreading, a phenomenon that occurs when an ink halftone is printed in superposition with one or several solid inks. The ink-spreading model includes nominal-to-effective dot area coverage functions, obtained by least-squares curve fitting for each ink overprint condition, giving the physical dot gain of the various overprint halftones. This model provides a theoretical foundation for the color prediction analysis of recto-verso halftone images and for the development of image quality detection systems.
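For orientation, the simplest member of this family of halftone reflectance models is the Murray-Davies equation, which mixes ink and paper reflectance weighted by the effective dot area. It ignores the internal reflections and oblique paths the abstract's model accounts for, and the dot-gain curve below is a made-up illustration of a nominal-to-effective coverage function:

```python
import numpy as np

def murray_davies(a_eff, r_ink, r_paper):
    # halftone reflectance as an area-weighted mix of ink and paper;
    # a much simpler relative of the recto-verso model in the abstract
    return a_eff * r_ink + (1.0 - a_eff) * r_paper

# hypothetical nominal-to-effective dot area (dot gain) curve
nominal = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
effective = np.clip(nominal + 0.1 * np.sin(np.pi * nominal), 0.0, 1.0)

refl = murray_davies(effective, r_ink=0.05, r_paper=0.9)
print(np.round(refl, 3))
```

In practice the effective coverage curve is fitted to measured patches, which is the least-squares step the abstract describes.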

  3. Colored three-dimensional reconstruction of vehicular thermal infrared images

    NASA Astrophysics Data System (ADS)

    Sun, Shaoyuan; Leung, Henry; Shen, Zhenyi

    2015-06-01

    Enhancement of vehicular night vision thermal infrared images is an important problem in intelligent vehicles. We propose to create a colorful three-dimensional (3-D) display of infrared images for the vehicular night vision assistant driving system. We combine the plane parameter Markov random field (PP-MRF) model-based depth estimation with classification-based infrared image colorization to perform colored 3-D reconstruction of vehicular thermal infrared images. We first train the PP-MRF model to learn the relationship between superpixel features and plane parameters. The infrared images are then colorized and we perform superpixel segmentation and feature extraction on the colorized images. The PP-MRF model is used to estimate the superpixel plane parameter and to analyze the structure of the superpixels according to the characteristics of vehicular thermal infrared images. Finally, we estimate the depth of each pixel to perform 3-D reconstruction. Experimental results demonstrate that the proposed method can give a visually pleasing and daytime-like colorful 3-D display from a monochromatic vehicular thermal infrared image, which can help drivers to have a better understanding of the environment.

  4. A New Color Image of the Crab Nebula

    NASA Astrophysics Data System (ADS)

    Wainscoat, R. J.; Kormendy, K.

    1997-03-01

    A new color image of the Crab Nebula is presented. This is a 2782 × 1904 pixel mosaic of CCD frames taken through B (blue), V (green), and R (red) filters; it was carefully color balanced so that the Sun would appear white. The resolution of the final image is approximately 0.8 arcsec FWHM. The technique by which this image was constructed is described, and some aspects of the structure of the Crab Nebula revealed by the image are discussed. We also discuss the weaknesses of this technique for producing "true-color" images, and describe how our image would differ from what the human eye might see in a very large wide-field telescope. The structure of the inner part of the synchrotron nebula is compared with recent high-resolution images from the Hubble Space Telescope and from the Canada-France-Hawaii Telescope. (SECTION: Interstellar Medium and Nebulae)

  5. Ultrasound, color - normal umbilical cord (image)

    MedlinePlus

    ... is a normal color Doppler ultrasound of the umbilical cord performed at 30 weeks gestation. The cord ... the cord, two arteries and one vein. The umbilical cord is connected to the placenta, located in ...

  6. Adaptive Ambient Illumination Based on Color Harmony Model

    NASA Astrophysics Data System (ADS)

    Kikuchi, Ayano; Hirai, Keita; Nakaguchi, Toshiya; Tsumura, Norimichi; Miyake, Yoichi

    We investigated the relationship between ambient illumination and psychological effect by applying a modified color harmony model. We verified the proposed model by analyzing the correlation between psychological values and modified color harmony scores. Experimental results showed the possibility of obtaining the best color for illumination using this model.

  7. Semi-Automated Segmentation of Microbes in Color Images

    NASA Astrophysics Data System (ADS)

    Reddy, Chandankumar K.; Liu, Feng-I.; Dazzo, Frank B.

    2003-01-01

    The goal of this work is to develop a system that can semi-automate the detection of multicolored foreground objects in digitized color images that also contain complex and very noisy backgrounds. Although considered a general problem of color image segmentation, our application is microbiology where various colored stains are used to reveal information on the microbes without cultivation. Instead of providing a simple threshold, the proposed system offers an interactive environment whereby the user chooses multiple sample points to define the range of color pixels comprising the foreground microbes of interest. The system then uses the color and spatial distances of these target points to segment the microbes from the confusing background of pixels whose RGB values lie outside the newly defined range and finally finds each cell's boundary using region-growing and mathematical morphology. Some other image processing methods are also applied to enhance the resultant image containing the colored microbes against a noise-free background. The prototype performs with 98% accuracy on a test set compared to ground truth data. The system described here will have many applications in image processing and analysis where one needs to segment typical pixel regions of similar but non-identical colors.

  8. Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy

    NASA Technical Reports Server (NTRS)

    Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers ("CTISs") having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3® digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.

  9. Skin image reconstruction using Monte Carlo based color generation

    NASA Astrophysics Data System (ADS)

    Aizu, Yoshihisa; Maeda, Takaaki; Kuwahara, Tomohiro; Hirao, Tetsuji

    2010-11-01

    We propose a novel method of skin image reconstruction based on color generation using Monte Carlo simulation of spectral reflectance in the nine-layered skin tissue model. The RGB image and spectral reflectance of human skin are obtained by RGB camera and spectrophotometer, respectively. The skin image is separated into the color component and texture component. The measured spectral reflectance is used to evaluate scattering and absorption coefficients in each of the nine layers which are necessary for Monte Carlo simulation. Various skin colors are generated by Monte Carlo simulation of spectral reflectance in given conditions for the nine-layered skin tissue model. The new color component is synthesized to the original texture component to reconstruct the skin image. The method is promising for applications in the fields of dermatology and cosmetics.

  10. MUNSELL COLOR ANALYSIS OF LANDSAT COLOR-RATIO-COMPOSITE IMAGES OF LIMONITIC AREAS IN SOUTHWEST NEW MEXICO.

    USGS Publications Warehouse

    Kruse, Fred A.

    1984-01-01

    Green areas on Landsat 4/5 - 4/6 - 6/7 (red - blue - green) color-ratio-composite (CRC) images represent limonite on the ground. Color variation on such images was analyzed to determine the causes of the color differences within and between the green areas. Digital transformation of the CRC data into the modified cylindrical Munsell color coordinates - hue, value, and saturation - was used to correlate image color characteristics with properties of surficial materials. The amount of limonite visible to the sensor is the primary cause of color differences in green areas on the CRCs. Vegetation density is a secondary cause of color variation of green areas on Landsat CRC images. Digital color analysis of Landsat CRC images can be used to map unknown areas. Color variations of green pixels allow discrimination among limonitic bedrock, nonlimonitic bedrock, nonlimonitic alluvium, and limonitic alluvium.
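    The rationale for a cylindrical color transform can be seen with plain HSV (a stand-in here for the paper's modified cylindrical Munsell coordinates): hue and saturation stay fixed when only brightness changes, so color differences can be read independently of illumination:

```python
import colorsys

# A green CRC pixel and the same pixel at half brightness:
# hue and saturation are unchanged, only value drops.
h1, s1, v1 = colorsys.rgb_to_hsv(0.2, 0.6, 0.2)
h2, s2, v2 = colorsys.rgb_to_hsv(0.1, 0.3, 0.1)
```

    This decoupling is what lets hue/value/saturation analysis separate the limonite signal from variations in surface brightness.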

  11. Color image digitization and analysis for drum inspection

    SciTech Connect

    Muller, R.C.; Armstrong, G.A.; Burks, B.L.; Kress, R.L.; Heckendorn, F.M.; Ward, C.R.

    1993-05-01

    A rust inspection system that uses color analysis to find rust spots on drums has been developed. The system is composed of high-resolution color video equipment that permits the inspection of rust spots on the order of 0.25 cm (0.1 in.) in diameter. Because of the modular nature of the system design and the use of open systems software (X11, etc.), the inspection system can be easily integrated into other environmental restoration and waste management programs. The inspection system represents an excellent platform for the integration of other color inspection and color image processing algorithms.

  12. Pixel classification based color image segmentation using quaternion exponent moments.

    PubMed

    Wang, Xiang-Yang; Wu, Zhi-Fang; Chen, Liang; Zheng, Hong-Liang; Yang, Hong-Ying

    2016-02-01

    Image segmentation remains an important but hard-to-solve problem, since it appears to be application dependent, with usually no a priori information available regarding the image structure. In recent years, many image segmentation algorithms have been developed, but they are often very complex and some undesired results occur frequently. In this paper, we propose a pixel-classification-based color image segmentation method using quaternion exponent moments. Firstly, the pixel-level image feature is extracted based on quaternion exponent moments (QEMs), which can effectively capture the image pixel content by considering the correlation between different color channels. Then, the pixel-level image feature is used as the input of a twin support vector machines (TSVM) classifier, and the TSVM model is trained by selecting the training samples with Arimoto entropy thresholding. Finally, the color image is segmented with the trained TSVM model. The proposed scheme has the following advantages: (1) the effective QEMs are introduced to describe color image pixel content, taking into account the correlation between different color channels; (2) the excellent TSVM classifier is utilized, which has lower computation time and higher classification accuracy. Experimental results show that our proposed method has very promising segmentation performance compared with the state-of-the-art segmentation approaches recently proposed in the literature. PMID:26618250

  13. Color Image Restoration Using Nonlocal Mumford-Shah Regularizers

    NASA Astrophysics Data System (ADS)

    Jung, Miyoun; Bresson, Xavier; Chan, Tony F.; Vese, Luminita A.

    We introduce several color image restoration algorithms based on the Mumford-Shah model and nonlocal image information. The standard Ambrosio-Tortorelli and Shah models are defined to work in a small local neighborhood, which is sufficient to denoise smooth regions with sharp boundaries. However, textures are not local in nature and require semi-local/non-local information to be denoised efficiently. Inspired by recent work (NL-means of Buades, Coll, Morel and NL-TV of Gilboa, Osher), we extend the standard Ambrosio-Tortorelli and Shah approximations to Mumford-Shah functionals to work with nonlocal information, for better restoration of fine structures and textures. We present several applications of the proposed nonlocal MS regularizers in image processing such as color image denoising, color image deblurring in the presence of Gaussian or impulse noise, color image inpainting, and color image super-resolution. In the formulation of nonlocal variational models for image deblurring with impulse noise, we propose an efficient preprocessing step for the computation of the weight function w. In all the applications, the proposed nonlocal regularizers produce superior results over the local ones, especially in image inpainting with large missing regions. Experimental results and comparisons between the proposed nonlocal methods and the local ones are shown.

  14. Color image based sorter for separating red and white wheat

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A simple imaging system was developed to inspect and sort wheat samples and other grains at moderate feed-rates (30 kernels/s or 3.5 kg/h). A single camera captured color images of three sides of each kernel by using mirrors, and the images were processed using a personal computer (PC). The camer...

  15. Photographic copy of computer enhanced color photographic image. Photographer and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Photographic copy of computer enhanced color photographic image. Photographer and computer draftsman unknown. Original photographic image located in the office of Modjeski and Masters, Consulting Engineers at 1055 St. Charles Avenue, New Orleans, LA 70130. COMPUTER ENHANCED COLOR PHOTOGRAPH SHOWING THE PROPOSED HUEY P. LONG BRIDGE WIDENING LOOKING FROM THE WEST BANK TOWARD THE EAST BANK. - Huey P. Long Bridge, Spanning Mississippi River approximately midway between nine & twelve mile points upstream from & west of New Orleans, Jefferson, Jefferson Parish, LA

  16. A New Color Correction Method for Underwater Imaging

    NASA Astrophysics Data System (ADS)

    Bianco, G.; Muzzupappa, M.; Bruno, F.; Garcia, R.; Neumann, L.

    2015-04-01

    Recovering correct, or at least realistic, colors of underwater scenes is a very challenging issue for imaging techniques, since illumination conditions in a refractive and turbid medium such as the sea are seriously altered. The need to correct the colors of underwater images or videos is an important task required in all image-based applications like 3D imaging, navigation, documentation, etc. Many image enhancement methods have been proposed in the literature for these purposes. The advantage of these methods is that they do not require knowledge of the medium's physical parameters, while some image adjustments can be performed manually (such as histogram stretching) or automatically, by algorithms based on criteria suggested by computational color constancy methods. One of the most popular criteria is based on the gray-world hypothesis, which assumes that the average of the captured image should be gray. An interesting application of this assumption is performed in the Ruderman opponent color space lαβ, used in a previous work for hue correction of images captured under colored light sources, which allows the luminance component of the scene to be separated from its chromatic components. In this work, we present the first proposal for color correction of underwater images using the lαβ color space. In particular, the chromatic components are changed by moving their distributions around the white point (white balancing), and histogram cutoff and stretching of the luminance component are performed to improve image contrast. The experimental results demonstrate the effectiveness of this method under the gray-world assumption and supposing uniform illumination of the scene. Moreover, due to its low computational cost, it is suitable for real-time implementation.
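    The gray-world criterion can be sketched in a few lines. This version scales channels directly in RGB rather than in the lαβ space used by the authors, so it is only a rough stand-in for their chromatic correction:

```python
import numpy as np

def gray_world(image):
    """Gray-world white balance: scale each channel so its mean
    matches the overall mean intensity."""
    img = image.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gray = channel_means.mean()
    balanced = img * (gray / channel_means)
    return np.clip(balanced, 0, 255)

# A blue-green underwater cast: red suppressed, blue boosted.
cast = np.ones((4, 4, 3)) * np.array([60.0, 120.0, 180.0])
out = gray_world(cast)
```

    On this uniform cast, all three channel means are pulled to the common gray level, removing the color bias.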

  17. Comparison of perceptual color spaces for natural image segmentation tasks

    NASA Astrophysics Data System (ADS)

    Correa-Tome, Fernando E.; Sanchez-Yanez, Raul E.; Ayala-Ramirez, Victor

    2011-11-01

    Color image segmentation largely depends on the color space chosen. Furthermore, spaces that show perceptual uniformity seem to outperform others due to their emulation of the human perception of color. We evaluate three perceptual color spaces, CIELAB, CIELUV, and RLAB, in order to determine their contribution to natural image segmentation and to identify the space that obtains the best results over a test set of images. The nonperceptual color space RGB is also included for reference purposes. In order to quantify the quality of resulting segmentations, an empirical discrepancy evaluation methodology is discussed. The Berkeley Segmentation Dataset and Benchmark is used in test series, and two approaches are taken to perform the experiments: supervised pixelwise classification using reference colors, and unsupervised clustering using k-means. A majority filter is used as a postprocessing stage, in order to determine its contribution to the result. Furthermore, a comparison of elapsed times taken by the required transformations is included. The main finding of our study is that the CIELUV color space outperforms the other color spaces in both discriminatory performance and computational speed, for the average case.
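    The CIELAB conversion underlying these comparisons follows a fixed pipeline: undo the sRGB gamma, apply the RGB-to-XYZ matrix, normalize by the D65 white point, then apply the cube-root compression. A sketch:

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert sRGB values in [0, 1] to CIELAB (D65 white point)."""
    rgb = np.asarray(rgb, dtype=np.float64)
    # Undo the sRGB gamma.
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ (sRGB primaries, D65).
    M = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = lin @ M.T
    # Normalize by the D65 white point and apply the CIE f() curve.
    xyz /= np.array([0.95047, 1.0, 1.08883])
    eps = (6 / 29) ** 3
    f = np.where(xyz > eps, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)
```

    Pixels converted this way can be fed directly to k-means or a pixelwise classifier, as in the experiments described above.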

  18. Color responses and their adaptation in human superior colliculus and lateral geniculate nucleus.

    PubMed

    Chang, Dorita H F; Hess, Robert F; Mullen, Kathy T

    2016-09-01

    We use an fMRI adaptation paradigm to explore the selectivity of human responses in the lateral geniculate nucleus (LGN) and superior colliculus (SC) to red-green color and achromatic contrast. We measured responses to red-green (RG) and achromatic (ACH) high contrast sinewave counter-phasing rings with and without adaptation, within a block design. The signal for the RG test stimulus was reduced following both RG and ACH adaptation, whereas the signal for the ACH test was unaffected by either adaptor. These results provide compelling evidence that the human LGN and SC have significant capacity for color adaptation. Since in the LGN red-green responses are mediated by P cells, these findings are in contrast to earlier neurophysiological data from non-human primates that have shown weak or no contrast adaptation in the P pathway. Cross-adaptation of the red-green color response by achromatic contrast suggests unselective response adaptation and points to a dual role for P cells in responding to both color and achromatic contrast. We further show that subcortical adaptation is not restricted to the geniculostriate system, but is also present in the superior colliculus (SC), an oculomotor region that until recently, has been thought to be color-blind. Our data show that the human SC not only responds to red-green color contrast, but like the LGN, shows reliable but unselective adaptation. PMID:27150230

  19. Dominant color correlogram descriptor for content-based image retrieval

    NASA Astrophysics Data System (ADS)

    Fierro-Radilla, Atoany; Perez-Daniel, Karina; Nakano-Miyatake, Mariko; Benois, Jenny

    2015-03-01

    Content-based image retrieval (CBIR) has become an interesting and urgent research topic due to the increasing need to index and classify multimedia content in large databases. Low-level visual descriptors, such as color-based, texture-based, and shape-based descriptors, have been used for the CBIR task. In this paper we propose a color-based descriptor that describes image content well, integrating both the global feature provided by the dominant colors and the local features provided by the color correlogram. The performance of the proposed descriptor, called the Dominant Color Correlogram descriptor (DCCD), is evaluated against some MPEG-7 visual descriptors and other color-based descriptors reported in the literature, using two image datasets of different size and content. The performance of the proposed descriptor is assessed using three metrics commonly used in image retrieval: ARP (Average Retrieval Precision), ARR (Average Retrieval Rate), and ANMRR (Average Normalized Modified Retrieval Rank). Precision-recall curves are also provided to show the better performance of the proposed descriptor compared with other color-based descriptors.
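    The per-query quantities behind ARP and ARR are ordinary precision and recall; the aggregate metrics average them over all queries (exact definitions vary across papers, so this is only a generic sketch):

```python
def retrieval_metrics(retrieved, relevant):
    """Precision and recall for a single query.

    retrieved: list of image ids returned by the system
    relevant:  list of ground-truth relevant image ids
    """
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)

# Hypothetical query: 4 images returned, 5 relevant in the database.
precision, recall = retrieval_metrics(retrieved=[3, 7, 9, 21],
                                      relevant=[7, 21, 40, 41, 42])
```

    Here two of the four returned images are relevant, giving precision 0.5 and recall 0.4.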

  20. Stereoscopic high-speed imaging using additive colors

    NASA Astrophysics Data System (ADS)

    Sankin, Georgy N.; Piech, David; Zhong, Pei

    2012-04-01

    An experimental system for digital stereoscopic imaging produced by using a high-speed color camera is described. Two bright-field image projections of a three-dimensional object are captured utilizing additive-color backlighting (blue and red). The two images are simultaneously combined on a two-dimensional image sensor using a set of dichromatic mirrors, and stored for off-line separation of each projection. This method has been demonstrated in analyzing cavitation bubble dynamics near boundaries. This technique may be useful for flow visualization and in machine vision applications.
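    The off-line separation step amounts to reading back the color channels of the combined frame. A toy sketch, assuming ideal red and blue backlights with no crosstalk:

```python
import numpy as np

# Two grayscale bright-field projections, one lit in red, one in blue.
view_red = np.array([[0.9, 0.1],
                     [0.9, 0.9]])
view_blue = np.array([[0.9, 0.9],
                      [0.1, 0.9]])

# Additive-color combination on a single RGB sensor frame.
frame = np.stack([view_red, np.zeros_like(view_red), view_blue], axis=-1)

# Off-line separation: each projection is just one channel.
recovered_red, recovered_blue = frame[..., 0], frame[..., 2]
```

    With real dichromatic mirrors and filters there is some spectral crosstalk, so the actual separation is only approximately this clean.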

  1. Perceived assessment metrics for visible and infrared color fused image quality without reference image

    NASA Astrophysics Data System (ADS)

    Yu, Xuelian; Chen, Qian; Gu, Guohua; Ren, Jianle; Sui, Xiubao

    2015-02-01

    Designing an objective quality assessment for color-fused images is a very demanding and challenging task. We propose four no-reference metrics based on human visual system characteristics for objectively evaluating the quality of false-color fusion images. The perceived edge metric (PEM) is defined based on a visual perception model and the color image gradient similarity between the fused image and the source images. The perceptual contrast metric (PCM) is established by associating multi-scale contrast and a varying contrast sensitivity filter (CSF) with the color components. A linear combination of the standard deviation and mean value over the fused image constructs the image colorfulness metric (ICM). The color comfort metric (CCM) is designed from the average saturation and the ratio of pixels with high and low saturation. The qualitative and quantitative experimental results demonstrate that the proposed metrics agree well with subjective perception.
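    As a concrete stand-in for the ICM, the widely used Hasler-Süsstrunk colorfulness statistic also combines standard deviations and means, computed over two opponent channels (the abstract does not give the paper's exact weights, so this is only illustrative):

```python
import numpy as np

def colorfulness(image):
    """Hasler-Süsstrunk colorfulness: std/mean combination of the
    rg and yb opponent channels of an RGB image."""
    r, g, b = (image[..., i].astype(np.float64) for i in range(3))
    rg = r - g
    yb = 0.5 * (r + g) - b
    return (np.hypot(rg.std(), yb.std())
            + 0.3 * np.hypot(rg.mean(), yb.mean()))

gray = np.full((8, 8, 3), 128.0)      # achromatic image
vivid = np.zeros((8, 8, 3))
vivid[..., 0] = 255.0                 # saturated red image
```

    An achromatic image scores zero, while a saturated image scores high, matching the intuition the ICM is built on.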

  2. Color calibration of swine gastrointestinal tract images acquired by radial imaging capsule endoscope

    NASA Astrophysics Data System (ADS)

    Ou-Yang, Mang; Jeng, Wei-De; Lai, Chien-Cheng; Wu, Hsien-Ming; Lin, Jyh-Hung

    2016-01-01

    The type of illumination system and the color filters used typically generate varying levels of color difference in capsule endoscopes, which influence medical diagnoses. In order to calibrate the color difference caused by the optical system, this study applied a radial imaging capsule endoscope (RICE) to photograph standard color charts, which were then employed to calculate the color gamut of the RICE. The color gamut was also measured using a spectrometer in order to obtain high-precision color information, and the results obtained using both methods were compared. Subsequently, color-correction methods, namely polynomial transform and conformal mapping, were used to reduce the color difference. Before color calibration, the color difference value caused by the optical system in the RICE was 21.45±1.09. Through the proposed polynomial transformation, the color difference could be reduced effectively to 1.53±0.07. With the other proposed method, conformal mapping, the color difference value was further reduced to 1.32±0.11; this color difference is imperceptible to the human eye because it is <1.5. Real-time color correction was then achieved using this algorithm combined with a field-programmable gate array, and the results of the color correction can be viewed in real-time images.
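    A polynomial color transform of the kind described is typically fitted by least squares from chart measurements. A first-order (affine) sketch on a synthetic chart; the distortion matrix, offset, and 24-patch count are made up for illustration:

```python
import numpy as np

def fit_polynomial_correction(measured, reference):
    """Fit reference ≈ [measured, 1] @ M by least squares
    (first-order polynomial color correction)."""
    A = np.hstack([measured, np.ones((measured.shape[0], 1))])
    M, *_ = np.linalg.lstsq(A, reference, rcond=None)
    return M

def apply_correction(colors, M):
    A = np.hstack([colors, np.ones((colors.shape[0], 1))])
    return A @ M

# Synthetic chart: the "camera" applies a known affine distortion.
rng = np.random.default_rng(0)
reference = rng.uniform(0, 255, size=(24, 3))        # 24-patch chart
true_M = np.array([[0.9, 0.02, 0.0],
                   [0.05, 0.8, 0.1],
                   [0.0, 0.1, 1.1]])
measured = reference @ true_M + np.array([4.0, -2.0, 1.0])
M = fit_polynomial_correction(measured, reference)
corrected = apply_correction(measured, M)
```

    Because the synthetic distortion is exactly affine, the fit recovers it and the corrected patches match the reference chart; real endoscope data needs higher-order terms, as in the paper.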

  3. Minimized-Laplacian residual interpolation for color image demosaicking

    NASA Astrophysics Data System (ADS)

    Kiku, Daisuke; Monno, Yusuke; Tanaka, Masayuki; Okutomi, Masatoshi

    2014-03-01

    A color difference interpolation technique is widely used for color image demosaicking. In this paper, we propose minimized-Laplacian residual interpolation (MLRI) as an alternative to color difference interpolation, where the residuals are differences between observed and tentatively estimated pixel values. In the MLRI, we estimate the tentative pixel values by minimizing the Laplacian energies of the residuals. This residual image transformation allows us to interpolate more easily than the standard color difference transformation. We incorporate the proposed MLRI into the gradient-based threshold-free (GBTF) algorithm, which is one of the current state-of-the-art demosaicking algorithms. Experimental results demonstrate that our proposed demosaicking algorithm can outperform the state-of-the-art algorithms on the 30 images of the IMAX and Kodak datasets.

  4. Lensfree color imaging on a nanostructured chip using compressive decoding

    PubMed Central

    Khademhosseinieh, Bahar; Biener, Gabriel; Sencan, Ikbal; Ozcan, Aydogan

    2010-01-01

    We demonstrate subpixel level color imaging capability on a lensfree incoherent on-chip microscopy platform. By using a nanostructured substrate, the incoherent emission from the object plane is modulated to create a unique far-field diffraction pattern corresponding to each point at the object plane. These lensfree diffraction patterns are then sampled in the far-field using a color sensor-array, where the pixels have three different types of color filters at red, green, and blue (RGB) wavelengths. The recorded RGB diffraction patterns (for each point on the structured substrate) form a basis that can be used to rapidly reconstruct any arbitrary multicolor incoherent object distribution at subpixel resolution, using a compressive sampling algorithm. This lensfree computational imaging platform could be quite useful to create a compact fluorescent on-chip microscope that has color imaging capability. PMID:21173866

  5. Full-color holographic 3D imaging system using color optical scanning holography

    NASA Astrophysics Data System (ADS)

    Kim, Hayan; Kim, You Seok; Kim, Taegeun

    2016-06-01

    We propose a full-color holographic three-dimensional imaging system that comprises a recording stage, a transmission and processing stage, and a reconstruction stage. In the recording stage, color optical scanning holography (OSH) records the complex RGB holograms of an object. In the transmission and processing stage, the recorded complex RGB holograms are transmitted to the reconstruction stage after conversion to off-axis RGB holograms. In the reconstruction stage, the off-axis RGB holograms are reconstructed optically.

  6. Color calculations for and perceptual assessment of computer graphic images

    SciTech Connect

    Meyer, G.W.

    1986-01-01

    Realistic image synthesis involves the modelling of an environment in accordance with the laws of physics and the production of a final simulation that is perceptually acceptable. To be considered a scientific endeavor, synthetic image generation should also include the final step of experimental verification. This thesis concentrates on the color calculations that are inherent in the production of the final simulation and on the perceptual assessment of the computer graphic images that result. The fundamental spectral sensitivity functions that are active in the human visual system are introduced and are used to address color-blindness issues in computer graphics. A digitally controlled color television monitor is employed to successfully implement both the Farnsworth-Munsell 100 hue test and a new color vision test that yields more accurate diagnoses. Images that simulate color-blind vision are synthesized and are used to evaluate color scales for data display. Gaussian quadrature is used with a set of opponent fundamentals to select the wavelengths at which to perform synthetic image generation.

  7. Color transformation for the compression of CMYK images

    NASA Astrophysics Data System (ADS)

    de Queiroz, Ricardo L.

    1999-12-01

    A CMYK image is often viewed as a large amount of device-dependent data ready to be printed. In several circumstances, CMYK data needs to be compressed, but conversion to and from device-independent spaces is imprecise at best. In this paper, with the goal of compressing CMYK images, color space transformations were studied. In order to have practical importance, we developed a new transformation to a YYCC color space, which is device-independent and image-independent, i.e., a simple linear transformation between device-dependent color spaces. The transformation from CMYK to YYCC was studied extensively in image compression. For that purpose, a distortion measure that accounts for both device dependence and spatial visual sensitivity was developed. It is shown that transformation to YYCC consistently outperforms transformation to other device-dependent 4D color spaces such as YCbCrK, while being competitive with the image-dependent KLT-based approach. Other interesting conclusions were also drawn from the experiments, among them the fact that color transformations are not always advantageous over independent compression of the CMYK color planes, and the fact that chrominance subsampling is rarely advantageous.

  8. Local adaptation for body color in Drosophila americana

    PubMed Central

    Wittkopp, P J; Smith-Winberry, G; Arnold, L L; Thompson, E M; Cooley, A M; Yuan, D C; Song, Q; McAllister, B F

    2011-01-01

    Pigmentation is one of the most variable traits within and between Drosophila species. Much of this diversity appears to be adaptive, with environmental factors often invoked as selective forces. Here, we describe the geographic structure of pigmentation in Drosophila americana and evaluate the hypothesis that it is a locally adapted trait. Body pigmentation was quantified using digital images and spectrometry in up to 10 flies from each of 93 isofemale lines collected from 17 locations across the United States and found to correlate most strongly with longitude. Sequence variation at putatively neutral loci showed no evidence of population structure and was inconsistent with an isolation-by-distance model, suggesting that the pigmentation cline exists despite extensive gene flow throughout the species range, and is most likely the product of natural selection. In all other Drosophila species examined to date, dark pigmentation is associated with arid habitats; however, in D. americana, the darkest flies were collected from the most humid regions. To investigate this relationship further, we examined desiccation resistance attributable to an allele that darkens pigmentation in D. americana. We found no significant effect of pigmentation on desiccation resistance in this experiment, suggesting that pigmentation and desiccation resistance are not unequivocally linked in all Drosophila species. PMID:20606690

  9. Weighted color and texture sample selection for image matting.

    PubMed

    Varnousfaderani, Ehsan Shahrian; Rajan, Deepu

    2013-11-01

    Color sampling based matting methods find the best known samples for the foreground and background colors of unknown pixels. Such methods do not perform well if there is an overlap in the color distributions of the foreground and background regions, because color cannot distinguish between these regions and hence the selected samples cannot reliably estimate the matte. Furthermore, current sampling based matting methods choose samples that are located around the boundaries of the foreground and background regions. In this paper, we overcome these two problems. First, we propose texture as a feature that can complement color to improve matting by discriminating between known regions with similar colors. The contributions of texture and color are automatically estimated by analyzing the content of the image. Second, we combine local sampling with a global sampling scheme that prevents true foreground or background samples from being missed during the sample collection stage. An objective function containing color and texture components is optimized to choose the best foreground and background pair among a set of candidate pairs. Experiments are carried out on a benchmark data set, and an independent evaluation of the results shows that the proposed method is ranked first among all other image matting methods. PMID:23807448
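    Given a candidate foreground/background pair, sampling-based matting estimates alpha by projecting the observed color onto the line between the two samples. A minimal sketch of that standard estimate (the paper's objective additionally weighs texture, which is not modeled here):

```python
import numpy as np

def estimate_alpha(pixel, fg, bg):
    """Alpha from one candidate (foreground, background) color pair:
    project pixel - bg onto fg - bg and clamp to [0, 1]."""
    p, f, b = (np.asarray(v, dtype=np.float64) for v in (pixel, fg, bg))
    d = f - b
    alpha = np.dot(p - b, d) / max(np.dot(d, d), 1e-12)
    return float(np.clip(alpha, 0.0, 1.0))

# A pixel exactly halfway between the candidate colors.
a = estimate_alpha([128, 64, 0], [255, 128, 0], [1, 0, 0])
```

    A mixed-boundary pixel midway between the pair yields an alpha of 0.5; sample selection then picks the pair whose reconstruction error (plus, in the paper, a texture term) is smallest.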

  10. Multiple color-image authentication system using HSI color space and QR decomposition in gyrator domains

    NASA Astrophysics Data System (ADS)

    Rafiq Abuturab, Muhammad

    2016-06-01

    A new multiple color-image authentication system based on the HSI (Hue-Saturation-Intensity) color space and QR decomposition in gyrator domains is proposed. In this scheme, the original color images are converted from the RGB (Red-Green-Blue) color space to the HSI color space and divided into their H, S, and I components, from which the corresponding phase-encoded components are obtained. All the phase-encoded H, S, and I components are individually multiplied and then modulated by random phase functions. The modulated H, S, and I components are convolved into a single gray image with an asymmetric cryptosystem. The resulting image is separated into Q and R parts by QR decomposition. Finally, they are independently gyrator transformed to obtain their encoded parts. The encoded Q and R parts must all be gathered for decryption. The angles of the gyrator transform serve as sensitive keys. The protocol, based on QR decomposition of the encoded matrix and recovery of the decoded matrix by multiplying the matrices Q and R, enhances the security level. The random phase keys, individual phase keys, and asymmetric phase keys provide high robustness to the cryptosystem. Numerical simulation results demonstrate that this scheme is superior to the existing techniques.
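    The QR step itself is standard linear algebra: the matrix splits into an orthogonal factor Q and an upper-triangular factor R, and only their product restores the original, which is why both encoded parts are needed for decryption. A numpy sketch on a stand-in image matrix:

```python
import numpy as np

# Stand-in for the single gray image produced by the scheme.
rng = np.random.default_rng(1)
A = rng.uniform(0, 255, size=(8, 8))

# QR decomposition: A = Q @ R, with Q orthogonal and R upper-triangular.
Q, R = np.linalg.qr(A)
restored = Q @ R
```

    Neither Q nor R alone resembles the image; the exact reconstruction A = QR is what makes the split-and-recombine protocol lossless.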

  11. False-color composite image of Raco, Michigan

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This image is a false color composite of Raco, Michigan, centered at 46.39 north latitude and 84.88 west longitude. This image was acquired by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) on the 20th orbit of the Shuttle Endeavour. The area shown is approximately 20 kilometers by 50 kilometers. Raco is located at the eastern end of Michigan's Upper Peninsula, west of Sault Ste. Marie and south of Whitefish Bay on Lake Superior. In this color representation, darker areas in the image are smooth surfaces such as frozen lakes and other non-forested areas. The colors are related to the types of trees, and the brightness is related to the amount of plant material covering the surface, called forest biomass. The Jet Propulsion Laboratory alternative photo number is P-43882.

  12. Color image encoding in DOST domain using DWT and SVD

    NASA Astrophysics Data System (ADS)

    Kumar, Manoj; Agrawal, Smita

    2015-12-01

    In this paper, a new color image encoding and decoding technique based on the Discrete Orthonormal Stockwell Transform (DOST) using the Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD) is proposed. The images are encrypted using bands of the DOST and wavelets, along with the singular values of the wavelet subbands. We use the number of DOST bands, the values and arrangement of some predefined parameters derived from the singular values of all wavelet subbands, and the arrangement of the wavelet subbands as encoding and decoding keys in all three color planes. To ensure correct decoding of the encoded image, it is necessary to use all the keys in the correct order along with their exact values. A comparison of our technique with a recently proposed technique, together with the experimental results, is used to analyze its effectiveness. The proposed technique can be used to transmit a color image more securely and efficiently through both secured and unsecured communication networks.

  13. Color image reproduction based on multispectral and multiprimary imaging: experimental evaluation

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Masahiro; Teraji, Taishi; Ohsawa, Kenro; Uchiyama, Toshio; Motomura, Hideto; Murakami, Yuri; Ohyama, Nagaaki

    2001-12-01

    Multispectral imaging is a significant technology for the acquisition and display of accurate color information. Natural color reproduction under arbitrary illumination becomes possible using spectral information of both the image and the illumination light. In addition, multiprimary color display, i.e., using more than three primary colors, has also been developed for the reproduction of an expanded color gamut and for discounting observer metamerism. In this paper, we present the concept of multispectral data interchange for natural color reproduction, and experimental results using a 16-band multispectral camera and a 6-primary color display. In the experiment, the accuracy of color reproduction is evaluated in CIE ΔE*ab for both the image capture and display systems. The average and maximum ΔE*ab were 1.0 and 2.1 for the 16-band multispectral camera system, using the Macbeth 24 color patches. For the six-primary color projection display, the average and maximum ΔE*ab were 1.3 and 2.7 with 30 test colors inside the display gamut. Moreover, color reproduction results with different spectral distributions but the same CIE tristimulus values are visually compared, and it is confirmed that the 6-primary display gives improved agreement between the original and reproduced colors.
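    The ΔE*ab figures quoted above are Euclidean distances between colors in CIELAB:

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    """CIE ΔE*ab: Euclidean distance between two CIELAB colors."""
    return float(np.linalg.norm(np.asarray(lab1, dtype=np.float64)
                                - np.asarray(lab2, dtype=np.float64)))

# A 3-4-5 offset in (a*, b*) at equal lightness gives ΔE*ab = 5.
d = delta_e_ab([60.0, 10.0, 20.0], [60.0, 13.0, 24.0])
```

    Values around 1, as reported for the camera and display systems here, are at roughly the threshold of a just-noticeable color difference.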

  14. Visual cryptography for JPEG color images

    NASA Astrophysics Data System (ADS)

    Sudharsanan, Subramania I.

    2004-10-01

    A large number of methods have been proposed for encrypting images by shared key encryption mechanisms. The existing techniques are primarily applicable to non-compressed images. However, most imaging applications, including digital photography, archiving, and internet communications, nowadays use images in the JPEG domain. Applying the existing shared key cryptographic schemes to these images requires conversion back into the spatial domain. In this paper we propose a shared key algorithm that works directly in the JPEG domain, thus enabling shared key image encryption for a variety of applications. The scheme works directly on the quantized DCT coefficients, and the resulting noise-like shares are also stored in the JPEG format. The decryption process is lossless. Our experiments indicate that each share image is approximately the same size as the original JPEG, retaining the storage advantage provided by JPEG.
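    The key property claimed above is lossless, keyed manipulation of quantized DCT coefficients without leaving the JPEG domain. A sketch of that idea, not the paper's scheme: a shared seed drives a pseudo-random permutation plus sign flips of one coefficient block, and the same seed inverts it exactly:

```python
import numpy as np

def scramble(coeffs, key):
    """Keyed, lossless scrambling of integer DCT coefficients:
    pseudo-random permutation plus ±1 sign flips."""
    rng = np.random.default_rng(key)
    flat = coeffs.ravel()
    perm = rng.permutation(flat.size)
    signs = rng.choice([-1, 1], size=flat.size)
    return (flat[perm] * signs).reshape(coeffs.shape)

def unscramble(enc, key):
    rng = np.random.default_rng(key)       # same key -> same perm and signs
    flat = enc.ravel()
    perm = rng.permutation(flat.size)
    signs = rng.choice([-1, 1], size=flat.size)
    out = np.empty_like(flat)
    out[perm] = flat * signs                # signs are ±1, so this undoes them
    return out.reshape(enc.shape)

coeffs = np.arange(-32, 32).reshape(8, 8)   # stand-in quantized DCT block
roundtrip = unscramble(scramble(coeffs, key=1234), key=1234)
```

    Because every operation is integer-exact, the round trip is bit-for-bit lossless, mirroring the lossless decryption property of the proposed scheme.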

  15. Wavelength-adaptive dehazing using histogram merging-based classification for UAV images.

    PubMed

    Yoon, Inhye; Jeong, Seokhwa; Jeong, Jaeheon; Seo, Doochun; Paik, Joonki

    2015-01-01

    Since incoming light to an unmanned aerial vehicle (UAV) platform can be scattered by haze and dust in the atmosphere, the acquired image loses the original color and brightness of the subject. Enhancement of hazy images is an important task in improving the visibility of various UAV images. This paper presents a spatially-adaptive dehazing algorithm that merges color histograms with consideration of the wavelength-dependent atmospheric turbidity. Based on the wavelength-adaptive hazy image acquisition model, the proposed dehazing algorithm consists of three steps: (i) image segmentation based on geometric classes; (ii) generation of the context-adaptive transmission map; and (iii) intensity transformation for enhancing a hazy UAV image. The major contribution of the research is a novel hazy UAV image degradation model by considering the wavelength of light sources. In addition, the proposed transmission map provides a theoretical basis to differentiate visually important regions from others based on the turbidity and merged classification results. PMID:25808767
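    The role of the transmission map can be seen from the standard haze model I = J·t + A·(1 − t): given an airlight estimate A and per-pixel transmission t, the clear scene J is recovered by inverting the model. A generic sketch, not the paper's wavelength-adaptive estimator:

```python
import numpy as np

def dehaze(image, airlight, transmission, t_min=0.1):
    """Invert I = J*t + A*(1 - t) to recover scene radiance J.
    transmission is a per-pixel map; t_min avoids division blow-up."""
    t = np.maximum(transmission, t_min)[..., None]
    return (image - airlight) / t + airlight

# Synthesize a hazy image from a known clear one, then invert.
clear = np.ones((2, 2, 3)) * np.array([0.2, 0.5, 0.7])
A = np.array([0.9, 0.9, 0.9])
t = np.full((2, 2), 0.6)
hazy = clear * t[..., None] + A * (1 - t[..., None])
restored = dehaze(hazy, A, t)
```

    The paper's contribution lies in how t is estimated: per-wavelength, from the merged histogram classification, rather than assumed known as in this toy inversion.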

  17. Peripheral visual response time to colored stimuli imaged on the horizontal meridian

    NASA Technical Reports Server (NTRS)

    Haines, R. F.; Gross, M. M.; Nylen, D.; Dawson, L. M.

    1974-01-01

    Two male observers were administered a binocular visual response time task to small (45 min arc), flashed, photopic stimuli at four dominant wavelengths (632 nm red; 583 nm yellow; 526 nm green; 464 nm blue) imaged across the horizontal retinal meridian. The stimuli were imaged at 10 deg arc intervals from 80 deg left to 90 deg right of fixation. Testing followed either prior light adaptation or prior dark adaptation. Results indicated that mean response time (RT) varies with stimulus color. RT is faster to yellow than to blue and green and slowest to red. In general, mean RT was found to increase from fovea to periphery for all four colors, with the curve for red stimuli exhibiting the most rapid positive acceleration with increasing angular eccentricity from the fovea. The shape of the RT distribution across the retina was also found to depend upon the state of light or dark adaptation. The findings are related to previous RT research and are discussed in terms of optimizing the color and position of colored displays on instrument panels.

  18. Color impact in visual attention deployment considering emotional images

    NASA Astrophysics Data System (ADS)

    Chamaret, C.

    2012-03-01

    Color is a predominant factor in the human visual attention system. Even if it is not sufficient for a global or complete understanding of a scene, it may impact the deployment of visual attention. We propose to study the impact of color, as well as the emotional aspect of pictures, on the deployment of visual attention. An eye-tracking campaign was conducted in which twenty people watched half of the database images in full color and the other half in grayscale. The eye fixations on color and black-and-white images were highly correlated, raising the question of whether such cues should be integrated in the design of visual attention models. Indeed, the predictions of two state-of-the-art computational models show similar results for the two color categories. Similarly, the study of saccade amplitude and fixation duration versus viewing time did not reveal any significant differences between the two categories. In addition, the spatial coordinates of eye fixations are an interesting indicator for investigating differences in visual attention deployment over time and fixation number. The second factor, related to emotion categories, shows evidence of inter-category differences between color and grayscale eye fixations for passive and positive emotions. The particular aspect associated with this category induces a specific behavior, based rather on high frequencies, where the color components influence the deployment of visual attention.

  19. Perceptual quality metric of color quantization errors on still images

    NASA Astrophysics Data System (ADS)

    Pefferkorn, Stephane; Blin, Jean-Louis

    1998-07-01

    A new metric for the assessment of color image coding quality is presented in this paper. Two models of chromatic and achromatic error visibility have been investigated, incorporating many aspects of human vision and color perception. The achromatic model accounts for both retinal and cortical phenomena such as visual sensitivity to spatial contrast and orientation. The chromatic metric is based on a multi-channel model of human color vision that is parameterized for video coding applications using psychophysical experiments, assuming that the perception of color quantization errors can be assimilated to the perception of supra-threshold local color differences. The final metric merges the chromatic and achromatic models and accounts for phenomena such as masking. The metric is tested on 6 real images at 5 quality levels using subjective assessments. The high correlation between objective and subjective scores shows that the described metric accurately rates the rendition of important image features such as color contours and textures.

  20. Image visualization based on MPEG-7 color descriptors

    NASA Astrophysics Data System (ADS)

    Meiers, Thomas; Czernoch-Peters, H.; Ihlenburg, L.; Sikora, Thomas

    2000-05-01

    In this paper we address user navigation through large volumes of image data. A similarity measure based on MPEG-7 color histograms is introduced, and Multidimensional Scaling (MDS) concepts are employed to display images in two dimensions according to their mutual similarities. With such a view the user can easily see relations and color similarity between images and understand the structure of the database. In order to cope with large volumes of images, a modified version of the k-means clustering technique is introduced which identifies representative image samples for each cluster. Representative images (up to 100) are then displayed in two dimensions using MDS structuring. The modified clustering technique produces a hierarchical structure of clusters--similar to street maps with various resolutions of detail. The user can zoom into various cluster levels to obtain more or less detail as required. The results obtained verify the attractiveness of the approach for navigation and retrieval applications.
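
    The display idea can be sketched with a histogram distance and classical MDS. This is a stand-in for the exact MPEG-7 descriptor distance and the MDS variant used in the paper; all names and the L1 distance choice are illustrative.

```python
import numpy as np

def hist_distance(h1, h2):
    # L1 distance between normalized color histograms
    return np.abs(h1 - h2).sum()

def classical_mds(D, dims=2):
    """Embed items in 2-D from a pairwise distance matrix D
    (classical MDS via double-centering)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J          # double-centered squared distances
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dims]     # keep the largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

hists = np.array([[.5, .5, 0], [.4, .6, 0], [0, 0, 1.]])
n = len(hists)
D = np.array([[hist_distance(hists[i], hists[j]) for j in range(n)]
              for i in range(n)])
XY = classical_mds(D)
# similar histograms land closer together in the 2-D map
assert np.linalg.norm(XY[0] - XY[1]) < np.linalg.norm(XY[0] - XY[2])
```

    Images with similar color histograms end up near each other in the 2-D layout, which is exactly the navigation property the abstract describes.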

  1. Weighted MinMax Algorithm for Color Image Quantization

    NASA Technical Reports Server (NTRS)

    Reitan, Paula J.

    1999-01-01

    The maximum intercluster distance and the maximum quantization error that are minimized by the MinMax algorithm are shown to be inappropriate error measures for color image quantization. A fast and effective (image-quality-improving) method for generalizing activity weighting to any histogram-based color quantization algorithm is presented. A new non-hierarchical color quantization technique called weighted MinMax, a hybrid between the MinMax and Linde-Buzo-Gray (LBG) algorithms, is also described. The weighted MinMax algorithm incorporates activity weighting and seeks to minimize the weighted root mean squared error (WRMSE), thereby obtaining high-quality quantized images with significantly less visual distortion than the MinMax algorithm.
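
    A weighted error of the kind the algorithm minimizes can be sketched as follows. The activity weighting shown, down-weighting high-gradient regions because quantization errors hide better in detail, is one common choice and an assumption here, not necessarily the paper's exact definition.

```python
import numpy as np

def activity(img):
    """Activity weight: smooth regions get weight near 1, busy
    (high-gradient) regions less."""
    gy, gx = np.gradient(img.astype(float))
    return 1.0 / (1.0 + np.hypot(gx, gy))

def wrmse(original, quantized):
    """Activity-weighted RMSE between an image and its quantized version."""
    w = activity(original)
    err = (np.asarray(original, float) - quantized) ** 2
    return float(np.sqrt((w * err).sum() / w.sum()))

# same-magnitude error in a smooth region costs more than in a busy one
img = np.zeros((8, 8))
img[:, 4:] = 2.0 * np.arange(4)          # busy right half (steep ramp)
q1 = img.copy(); q1[2, 1] += 0.3         # error in the smooth half
q2 = img.copy(); q2[2, 6] += 0.3         # same error in the busy half
assert wrmse(img, q1) > wrmse(img, q2)
```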

  2. An Effective and Fast Hybrid Framework for Color Image Retrieval

    NASA Astrophysics Data System (ADS)

    Walia, Ekta; Vesal, Sulaiman; Pal, Aman

    2014-11-01

    This paper presents a novel, fast and effective hybrid framework for color image retrieval through a combination of all the low-level features, which gives higher retrieval accuracy than other such systems. The color moments (CMs), angular radial transform descriptor and edge histogram descriptor (EHD) features are exploited to capture color, shape and texture information, respectively. A multistage framework is designed to imitate human perception so that in the first stage, images are retrieved based on their CMs, and then the shape and texture descriptors are utilized to identify the closest matches in the second stage. The scheme employs division of images into non-overlapping regions for effective computation of CMs and EHD features. To demonstrate the efficacy of this framework, experiments are conducted on Wang's, VisTex and OT-Scene databases. In spite of its multistage design, the system is observed to be faster than other hybrid approaches.
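
    The first retrieval stage can be sketched with color moments alone; the function names are illustrative, the per-region division and the shape/texture second stage are omitted for brevity.

```python
import numpy as np

def color_moments(img):
    """First three moments (mean, std, skewness) per channel:
    the CM feature used for the first retrieval stage."""
    feats = []
    for c in range(img.shape[2]):
        ch = img[..., c].astype(float).ravel()
        mu = ch.mean()
        sigma = ch.std()
        skew = np.cbrt(((ch - mu) ** 3).mean())
        feats += [mu, sigma, skew]
    return np.array(feats)

def first_stage(query, database, keep=2):
    """Rank database images by CM distance and keep the closest few
    for the (omitted) shape/texture second stage."""
    q = color_moments(query)
    d = [np.linalg.norm(color_moments(img) - q) for img in database]
    return list(np.argsort(d)[:keep])

rng = np.random.default_rng(0)
dark = rng.uniform(0.0, 0.3, (8, 8, 3))
bright = rng.uniform(0.7, 1.0, (8, 8, 3))
query = rng.uniform(0.0, 0.3, (8, 8, 3))
assert first_stage(query, [dark, bright], keep=1) == [0]
```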

  3. Digital watermarking for color images in hue-saturation-value color space

    NASA Astrophysics Data System (ADS)

    Tachaphetpiboon, Suwat; Thongkor, Kharittha; Amornraksa, Thumrongrat; Delp, Edward J.

    2014-05-01

    This paper proposes a new watermarking scheme for color images, in which all pixels of the image are used for embedding watermark bits in order to achieve the highest amount of embedding. For watermark embedding, the S component in the hue-saturation-value (HSV) color space is used to carry the watermark bits, while the V component is used in accordance with a human visual system model to determine the proper watermark strength. In the proposed scheme, the number of watermark bits equals the number of pixels in the host image. Watermark extraction is accomplished blindly based on the use of a 3×3 spatial domain Wiener filter. The efficiency of our proposed image watermarking scheme depends mainly on the accuracy of the estimate of the original S component. The experimental results show that the performance of the proposed scheme, under no attacks and against various types of attacks, was superior to previously existing watermarking schemes.
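
    The embedding idea can be sketched as follows, with a simplified brightness-dependent strength standing in for the paper's HVS model, and the original S channel given directly to the extractor instead of being estimated with the 3×3 Wiener filter; all names and the strength formula are assumptions.

```python
import numpy as np

def embed(S, V, bits, alpha=0.05):
    """Embed one bit per pixel into the saturation channel.

    The per-pixel strength scales with brightness V (a crude
    stand-in for an HVS model); bit 1 adds, bit 0 subtracts.
    """
    strength = alpha * (0.5 + V)          # brighter pixels tolerate more
    return np.clip(S + np.where(bits == 1, strength, -strength), 0, 1)

def extract(S_wm, S_est):
    """Blind extraction: compare the watermarked S with an estimate
    of the original S (here exact; the paper estimates it with a
    3x3 Wiener filter)."""
    return (S_wm > S_est).astype(int)

rng = np.random.default_rng(1)
S = rng.uniform(0.3, 0.7, (8, 8))
V = rng.uniform(0.2, 0.9, (8, 8))
bits = rng.integers(0, 2, (8, 8))
S_wm = embed(S, V, bits)
assert np.array_equal(extract(S_wm, S), bits)
```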

  4. Web Services for Dynamic Coloring of UAVSAR Images

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Pierce, Marlon; Donnellan, Andrea; Parker, Jay

    2015-08-01

    QuakeSim has implemented a service-based Geographic Information System to enable users to access large amounts of Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) data through an online interface. The QuakeSim Interferometric Synthetic Aperture Radar (InSAR) profile tool calculates radar-observed displacement (from an unwrapped interferogram product) along user-specified lines. Pre-rendered thumbnails with InSAR fringe patterns are used to display interferogram and unwrapped phase images on a Google Map in the InSAR profile tool. One challenge with this tool lies in the user visually identifying regions of interest when drawing the profile line. This requires that the user correctly interpret the InSAR imagery, which currently uses fringe patterns. In a fringe pattern, the mapping between pixel color and pixel value is not one-to-one, which makes it difficult for QuakeSim users to read general displacement information from the image. The goal of this work is to generate color maps that directly reflect the pixel values (displacement), as an addition to the pre-rendered images. Because the distribution of pixel values in an InSAR image is extremely uneven, a histogram-based, nonlinear color-template generation algorithm is currently under development. A web service enables on-the-fly coloring of UAVSAR images with dynamically generated color templates.
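
    A histogram-based nonlinear color template of the kind described can be sketched by placing color breakpoints at equal-population percentiles, so that crowded displacement ranges receive more color resolution; the palette and helper names are assumptions, not QuakeSim's.

```python
import numpy as np

def histogram_color_template(values, colors):
    """Build a nonlinear color template from the data histogram:
    breakpoints sit at equal-population percentiles, so value ranges
    holding many pixels get more color resolution."""
    edges = np.percentile(values, np.linspace(0, 100, len(colors) + 1))
    def colorize(v):
        bin_idx = np.clip(np.searchsorted(edges, v, side="right") - 1,
                          0, len(colors) - 1)
        return np.asarray(colors)[bin_idx]
    return edges, colorize

# displacement values crowded near 0 with a long tail
vals = np.concatenate([np.random.default_rng(2).normal(0, 1, 1000),
                       [50.0, 60.0]])
palette = [(0, 0, 255), (0, 255, 0), (255, 255, 0), (255, 0, 0)]
edges, colorize = histogram_color_template(vals, palette)
# equal-population bins: the interior breakpoints hug the crowded range
assert edges[1] < 1.0 and edges[2] < 1.0
```

    A linear template would spend most of its colors on the nearly empty tail up to 60; the histogram-based one does not.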

  5. SCID: full reference spatial color image quality metric

    NASA Astrophysics Data System (ADS)

    Ouni, S.; Chambah, M.; Herbin, M.; Zagrouba, E.

    2009-01-01

    The most widely used full-reference image quality assessments are error-based methods, performed with pixel-based difference metrics such as Delta E (ΔE), MSE, PSNR, etc. All of these metrics compute differences pixel by pixel, so only a local fidelity of color is defined. However, such metrics do not correlate well with perceived image quality because they omit the properties of the human visual system (HVS), which is rather sensitive to global quality; they therefore cannot be reliable predictors of perceived visual quality. In this paper, we present a novel full-reference color metric based on characteristics of the human visual system, in particular the notion of adjacency. This metric, called SCID for Spatial Color Image Difference, is more perceptually correlated than other color differences such as Delta E. The suggested full-reference metric is generic and independent of the image distortion type. It can be used in different applications such as compression, restoration, etc.
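
    The adjacency idea can be sketched in the spirit of S-CIELAB: spatially pool both images before taking a Euclidean Delta E, so that a uniform color shift is penalized more than a high-frequency error of equal pixelwise magnitude. This is a generic sketch of the principle, not the exact SCID formulation.

```python
import numpy as np

def spatial_delta_e(img_lab, ref_lab, k=3):
    """Mean Euclidean Delta E after k x k box-blurring both Lab images.
    The blur is the spatial pooling / adjacency step."""
    def box_blur(x):
        pad = np.pad(x, ((k//2, k//2), (k//2, k//2), (0, 0)), mode="edge")
        out = np.zeros_like(x, dtype=float)
        for i in range(x.shape[0]):
            for j in range(x.shape[1]):
                out[i, j] = pad[i:i+k, j:j+k].reshape(-1, x.shape[2]).mean(axis=0)
        return out
    diff = box_blur(img_lab) - box_blur(ref_lab)
    return float(np.sqrt((diff ** 2).sum(axis=2)).mean())

# uniform shift vs. high-frequency error of equal pixelwise magnitude
ref = np.zeros((8, 8, 3)); ref[..., 0] = 50.0
shift = ref.copy(); shift[..., 0] += 2.0           # global L* shift
parity = np.indices((8, 8)).sum(axis=0) % 2
checker = ref.copy(); checker[..., 0] += np.where(parity == 0, 2.0, -2.0)
assert spatial_delta_e(shift, ref) > spatial_delta_e(checker, ref)
```

    A plain per-pixel ΔE scores both errors identically; the pooled metric ranks the visually more objectionable uniform shift higher, which is the "global quality" argument above.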

  6. Restoration of cloud contaminated ocean color images using numerical simulation

    NASA Astrophysics Data System (ADS)

    Yang, Xuefei; Mao, Zhihua; Chen, Jianyu; Huang, Haiqing

    2015-10-01

    It is very hard to obtain cloud-free remote sensing data, especially for ocean color images. A cloud removal approach for ocean color satellite images based on numerical modeling is introduced. The approach removes cloud-contaminated portions and then reconstructs the missing data using model-simulated values. The basic idea is to establish a relationship between cloud-free patches and cloud-contaminated patches under the assumption that both are influenced by the same marine hydrodynamic conditions. Firstly, we find cloud-free GOCI (Geostationary Ocean Color Imager) retrievals of suspended sediment concentration (SSC) in the East China Sea before and after the time of the cloudy images, which are used as the initial field and validation data for the numerical model, respectively. Secondly, a sediment transport model is configured based on COHERENS, a coupled hydrodynamic-ecological ocean model for regional and shelf seas. The comparison between simulated results and validation images shows that the sediment transport model can simulate the actual sediment distribution and transport in the East China Sea. Then, the simulated SSCs corresponding to the cloudy portions are used to remove the cloud and replace the missing values. Finally, accuracy assessments of the results are carried out by visual and statistical analysis. The experimental results demonstrate that the proposed method can effectively remove cloud from GOCI images and reconstruct the missing data, offering a new way to enhance the effectiveness and availability of ocean color data that is of great practical significance.

  7. Implementation of a multi-spectral color imaging device without color filter array

    NASA Astrophysics Data System (ADS)

    Langfelder, G.; Longoni, A. F.; Zaraga, F.

    2011-01-01

    In this work the use of the Transverse Field Detector (TFD) as a device for multispectral image acquisition is proposed. The TFD is a color imaging pixel capable of color reconstruction without color filters. Its basic working principle is based on the generation of a suitable electric field configuration inside a Silicon depleted region by means of biasing voltages applied to surface contacts. With respect to previously proposed methods for performing multispectral capture, the TFD has a unique characteristic of electrically tunable spectral responses. This feature allows capturing an image with different sets of spectral responses (RGB, R'G'B', and so on) simply by tuning the device biasing voltages in multiple captures. In this way no hardware complexity (no external filter wheels or varying sources) is added with respect to a colorimetric device. The estimation of the spectral reflectance of the area imaged by a TFD pixel is based in this work on a linear combination of six eigenfunctions. It is shown that a spectral reconstruction can be obtained either (1) using two subsequent image captures that generate six TFD spectral responses or (2) using a new asymmetric biasing scheme, which allows the implementation of five spectral responses for each TFD pixel site in a single configuration, definitely allowing one-shot multispectral imaging.
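
    The reflectance estimation from six responses can be sketched as a least-squares problem: with eigenfunction matrix E and spectral-response matrix S, the measurements are m = S·E·c, and the coefficient vector c is recovered by solving that small linear system. All matrices below are synthetic placeholders, not measured TFD responses.

```python
import numpy as np

rng = np.random.default_rng(3)
n_lambda = 31                        # wavelength samples, e.g. 400-700 nm
E = rng.uniform(size=(n_lambda, 6))  # 6 reflectance eigenfunctions (assumed)
S = rng.uniform(size=(6, n_lambda))  # 6 tunable TFD spectral responses

# ground-truth reflectance lying in the span of the eigenfunctions
c_true = rng.normal(size=6)
r_true = E @ c_true

# pixel measurements under the six responses, then least-squares inversion
m = S @ r_true
c_hat, *_ = np.linalg.lstsq(S @ E, m, rcond=None)
r_hat = E @ c_hat
assert np.allclose(r_hat, r_true, atol=1e-6)
```

    The six responses can come either from two captures with three responses each or, with the asymmetric biasing scheme, from five responses in a single shot plus the constraint of the eigenfunction basis.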

  8. Color correction with blind image restoration based on multiple images using a low-rank model

    NASA Astrophysics Data System (ADS)

    Li, Dong; Xie, Xudong; Lam, Kin-Man

    2014-03-01

    We present a method that can handle the color correction of multiple photographs with blind image restoration simultaneously and automatically. We prove that the local colors of a set of images of the same scene exhibit the low-rank property locally, both before and after a color-correction operation. This property allows us to correct all kinds of errors in an image under a low-rank matrix model without particular priors or assumptions. The possible errors may be caused by changes of viewpoint, large illumination variations, gross pixel corruptions, partial occlusions, etc. Furthermore, a new iterative soft-segmentation method is proposed for local color transfer using color influence maps. Because the correct color information and the spatial information of images can be recovered using the low-rank model, more precise color correction and many other image-restoration tasks, including image denoising, image deblurring, and gray-scale image colorization, can be performed simultaneously. Experiments have verified that our method achieves consistent and promising results on uncontrolled real photographs acquired from the Internet and that it outperforms current state-of-the-art methods.
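
    The low-rank property can be illustrated with a plain truncated SVD, a much-simplified stand-in for the robust low-rank optimization used in the paper: stacking the local colors of several images of one scene gives a (here exactly rank-1) matrix, and the low-rank approximation suppresses a gross pixel corruption.

```python
import numpy as np

rng = np.random.default_rng(7)
base = rng.uniform(size=30)              # local colors of a scene patch
gains = np.array([1.0, 0.8, 1.3, 0.9])   # per-image photometric scaling
M = np.outer(gains, base)                # 4 images x 30 values: rank 1

corrupted = M.copy()
corrupted[2, 5] += 0.5                   # a gross pixel corruption

# rank-1 truncated SVD as a simplified low-rank recovery
U, s, Vt = np.linalg.svd(corrupted, full_matrices=False)
recovered = s[0] * np.outer(U[:, 0], Vt[0])
# the corrupted entry is pulled back toward its uncorrupted value
assert abs(recovered[2, 5] - M[2, 5]) < abs(corrupted[2, 5] - M[2, 5])
```

    The paper's robust formulation separates such sparse errors explicitly rather than merely attenuating them, but the redundancy it exploits is the same.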

  9. Camera lens adapter magnifies image

    NASA Technical Reports Server (NTRS)

    Moffitt, F. L.

    1967-01-01

    A Polaroid Land camera with an illuminated 7-power magnifier adapted to the lens photographs weld flaws. The flaws are located by inspection with a 10-power magnifying glass and then photographed with this device, thus providing immediate pictorial data for use in remedial procedures.

  10. Two-color ghost imaging with enhanced angular resolving power

    SciTech Connect

    Karmakar, Sanjit; Shih, Yanhua

    2010-03-15

    This article reports an experimental demonstration of nondegenerate, two-color, biphoton ghost imaging, which reproduced a ghost image with enhanced angular resolving power by means of a greater field of view compared with that of classical imaging. With the same imaging magnification, the enhanced angular resolving power and field of view compared with those of classical imaging are 1.25:1 and 1.16:1, respectively. The enhancement of angular resolving power depends on the ratio between the idler and the signal photon frequencies, and the enhancement of the field of view depends mainly on the same ratio and also on the distances of the object plane and the imaging lens from the two-photon source. This article also reports the possibility of reproducing a ghost image with enhanced angular resolving power by means of a greater imaging magnification compared with that of classical imaging.

  11. Digital images for eternity: color microfilm as archival medium

    NASA Astrophysics Data System (ADS)

    Normand, C.; Gschwind, R.; Fornaro, P.

    2007-01-01

    In the archiving and museum communities, the long-term preservation of artworks has traditionally been guaranteed by making duplicates of the original. For photographic reproductions, digital imaging devices have now become standard, providing better quality control and lower costs than film photography. However, due to the very short life cycle of digital data, losses are unavoidable without repetitive data migrations to new file formats and storage media. We present a solution for the long-term archiving of digital images on color microfilm (Ilfochrome® Micrographic). This extremely stable and high-resolution medium, combined with the use of a novel laser film recorder is particularly well suited for this task. Due to intrinsic limitations of the film, colorimetric reproductions of the originals are not always achievable. The microfilm must be first considered as an information carrier and not primarily as an imaging medium. Color transformations taking into account the film characteristics and possible degradations of the medium due to aging are investigated. An approach making use of readily available color management tools is presented which assures the recovery of the original colors after re-digitization. An extension of this project considering the direct recording of digital information as color bit-code on the film is also introduced.

  12. Spatial-frequency-contingent color aftereffects: adaptation with one-dimensional stimuli.

    PubMed

    Day, R H; Webster, W R; Gillies, O; Crassini, B

    1992-01-01

    The McCollough effect was shown to be spatial-frequency selective by Lovegrove and Over (1972) after adaptation with vertical colored square-wave gratings separated by 1 octave. Adaptation with slide-presented red and green vertical square-wave gratings separated by 1 octave failed to produce contingent color aftereffects (CAEs). However, when each of these gratings was adapted alone, strong CAEs were produced. Adaptation with vertical colored sine-wave gratings separated by 1 octave also failed to produce CAEs, but strong effects were produced by adaptation with each grating alone. By varying the spatial frequency of the test sine wave, CAEs were found to be tuned for spatial frequency at 2.85 octaves after adaptation of 4 cycles per degree (cpd) and at 2.30 octaves after adaptation of 8 cpd. Adaptation of both vertical and horizontal sine-wave gratings produced strong CAEs, with bandwidths ranging from 1.96 to 2.90 octaves and with lower adapting contrast producing weaker CAEs. These results indicate that the McCollough effect is more broadly tuned for spatial frequency than are simple adaptation effects. PMID:1549425

  13. Adaptive fusion of infrared and visible images in dynamic scene

    NASA Astrophysics Data System (ADS)

    Yang, Guang; Yin, Yafeng; Man, Hong; Desai, Sachi

    2011-11-01

    Multi-modality sensor fusion has been widely employed in various surveillance and military applications. A variety of image fusion techniques, including PCA, wavelet, curvelet and HSV, have been proposed in recent years to improve human visual perception for object detection. One of the main challenges for visible and infrared image fusion is to automatically determine an optimal fusion strategy for different input scenes at an acceptable computational cost. In this paper, we propose a fast and adaptive feature-selection-based image fusion method to obtain a high-contrast image from visible and infrared sensors for target detection. First, fuzzy c-means clustering is applied to the infrared image to highlight possible hotspot regions, which are considered potential target locations. After that, the region surrounding the target area is segmented as the background region. Image fusion is then applied locally on the selected target and background regions by computing different linear combinations of color components from the registered visible and infrared images. After obtaining the different fused images, histogram distributions are computed on these local fusion images as the fusion feature set. The variance ratio, a measure based on Linear Discriminant Analysis (LDA), is employed to sort the feature set, and the most discriminative feature is selected for the whole-image fusion. As the feature selection is performed over time, the process dynamically determines the most suitable feature for image fusion in different scenes. Experiments are conducted on the OSU Color-Thermal database and the TNO Human Factors dataset. The fusion results indicate that our proposed method achieves competitive performance compared with other fusion algorithms at a relatively low computational cost.
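
    The variance-ratio selection over candidate fusions can be sketched as follows, using a simplified scalar-weight family of linear combinations and a plain between/within variance ratio; the paper's feature set and LDA-based measure are richer, and all names here are illustrative.

```python
import numpy as np

def variance_ratio(fused, target_mask):
    """Score how well a fused image separates target from background:
    between-region variance over within-region variance (a simplified
    stand-in for the LDA-based measure)."""
    t, b = fused[target_mask], fused[~target_mask]
    between = (t.mean() - b.mean()) ** 2
    within = t.var() + b.var() + 1e-12
    return between / within

def select_fusion(vis, ir, target_mask, weights=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Try candidate combinations w*ir + (1-w)*vis and keep the one
    with the best variance ratio."""
    best = max(weights, key=lambda w: variance_ratio(w * ir + (1 - w) * vis,
                                                     target_mask))
    return best, best * ir + (1 - best) * vis

# synthetic scene: the hotspot is visible only in the IR channel
vis = np.full((8, 8), 0.5)
ir = np.zeros((8, 8)); ir[2:4, 2:4] = 1.0
mask = ir > 0.5
w, fused = select_fusion(vis, ir, mask)
assert w == 1.0     # pure IR separates target from background best here
```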

  14. Demonstrating Hormonal Control of Vertebrate Adaptive Color Changes in Vitro.

    ERIC Educational Resources Information Center

    Hadley, Mac E.; Younggren, Newell A.

    1980-01-01

    Presented is a short discussion of factors causing color changes in animals. Also described is an activity which may be used to demonstrate the response of amphibian skin to a melanophore stimulating hormone in high school or college biology classes. (PEB)

  15. Color image processing and object tracking workstation

    NASA Technical Reports Server (NTRS)

    Klimek, Robert B.; Paulick, Michael J.

    1992-01-01

    A system is described for automatic and semiautomatic tracking of objects on film or video tape, which was developed to meet the needs of the microgravity combustion and fluid science experiments at NASA Lewis. The system consists of individual hardware parts working under computer control to achieve a high degree of automation. The most important hardware parts include a 16 mm film projector, a lens system, a video camera, an S-VHS tapedeck, a frame grabber, and some storage and output devices. Both the projector and the tapedeck have a computer interface enabling remote control. Tracking software was developed to control the overall operation. In the automatic mode, the main tracking program controls the projector or tapedeck frame incrementation, grabs a frame, processes it, locates the edge of the objects being tracked, and stores the coordinates in a file. This process is performed repeatedly until the last frame is reached. Three representative applications are described. These applications represent typical uses and include tracking the propagation of a flame front, tracking the movement of a liquid-gas interface with extremely poor visibility, and characterizing a diffusion flame according to color and shape.

  16. Predicting beef tenderness using color and multispectral image texture features.

    PubMed

    Sun, X; Chen, K J; Maddock-Carlin, K R; Anderson, V L; Lepper, A N; Schwartz, C A; Keller, W L; Ilse, B R; Magolski, J D; Berg, E P

    2012-12-01

    The objective of this study was to investigate the usefulness of raw meat surface characteristics (texture) in predicting cooked beef tenderness. Color and multispectral texture features, including 4 different wavelengths and 217 image texture features, were extracted from 2 laboratory-based multispectral camera imaging systems. Steaks were segregated into tough and tender classification groups based on Warner-Bratzler shear force. The texture features were submitted to STEPWISE multiple regression and support vector machine (SVM) analyses to establish prediction models for beef tenderness. A subsample (80%) of the tender- or tough-classified steaks was used to train the models, which were then validated on the remaining (20%) test steaks. For color images, the SVM model identified tender steaks with 100% accuracy, while the STEPWISE equation identified 94.9% of the tender steaks correctly. For multispectral images, the SVM model achieved 91% and STEPWISE 87% average accuracy in predicting beef tenderness. PMID:22647652
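
    The feature-then-classifier pipeline can be sketched with a few simple texture statistics and a least-squares linear classifier standing in for the 217 features and the SVM; the data is synthetic and the whole block is an illustration of the pipeline shape, not the paper's model.

```python
import numpy as np

def texture_features(img):
    """A few gray-level texture statistics (mean, variance, horizontal
    contrast) standing in for the full feature set."""
    gx = np.diff(img, axis=1)
    return np.array([img.mean(), img.var(), (gx ** 2).mean()])

# synthetic 'tender' (smooth) vs 'tough' (high-contrast) meat surfaces
rng = np.random.default_rng(6)
tender = [rng.normal(0.5, 0.02, (16, 16)) for _ in range(20)]
tough = [rng.normal(0.5, 0.2, (16, 16)) for _ in range(20)]
X = np.array([texture_features(i) for i in tender + tough])
y = np.array([1.0] * 20 + [-1.0] * 20)      # +1 tender, -1 tough

# least-squares linear classifier (a stand-in for the SVM)
Xb = np.hstack([X, np.ones((40, 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

test_img = rng.normal(0.5, 0.02, (16, 16))  # smooth -> should read 'tender'
score = np.append(texture_features(test_img), 1.0) @ w
assert score > 0
```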

  17. Color Image Classification Using Block Matching and Learning

    NASA Astrophysics Data System (ADS)

    Kondo, Kazuki; Hotta, Seiji

    In this paper, we propose block matching and learning for color image classification. In our method, training images are partitioned into small blocks. Given a test image, it is also partitioned into small blocks, and mean-blocks corresponding to each test block are calculated from neighboring training blocks. Our method classifies a test image into the class that has the smallest total sum of distances between mean-blocks and test blocks. We also propose a learning method for reducing the memory requirement. Experimental results show that our classification outperforms other classifiers such as a support vector machine with bag of keypoints.
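
    The block-matching rule can be sketched as follows, on grayscale images instead of color for brevity; the block size, the number of neighbors k, and all names are assumptions.

```python
import numpy as np

def blocks(img, bs):
    h, w = img.shape
    return [img[i:i+bs, j:j+bs]
            for i in range(0, h, bs) for j in range(0, w, bs)]

def class_score(test_img, train_imgs, bs=4, k=3):
    """Total distance between each test block and the mean of its k
    nearest training blocks (the 'mean-block'), summed over all
    test blocks."""
    train_blocks = [b for img in train_imgs for b in blocks(img, bs)]
    total = 0.0
    for tb in blocks(test_img, bs):
        nearest = sorted(train_blocks, key=lambda b: np.linalg.norm(b - tb))
        mean_block = np.mean(nearest[:k], axis=0)
        total += np.linalg.norm(mean_block - tb)
    return total

rng = np.random.default_rng(4)
class_a = [rng.uniform(0.0, 0.2, (8, 8)) for _ in range(3)]
class_b = [rng.uniform(0.8, 1.0, (8, 8)) for _ in range(3)]
test = rng.uniform(0.0, 0.2, (8, 8))
pred = min(("a", "b"),
           key=lambda c: class_score(test, class_a if c == "a" else class_b))
assert pred == "a"
```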

  18. Multi-color magnetic particle imaging for cardiovascular interventions

    NASA Astrophysics Data System (ADS)

    Haegele, Julian; Vaalma, Sarah; Panagiotopoulos, Nikolaos; Barkhausen, Jörg; Vogt, Florian M.; Borgert, Jörn; Rahmer, Jürgen

    2016-08-01

    Magnetic particle imaging (MPI) uses magnetic fields to visualize the spatial distribution of superparamagnetic iron oxide nanoparticles (SPIOs). Guidance of cardiovascular interventions is seen as one possible application of MPI. To safely guide interventions, the vessel lumen as well as all required interventional devices have to be visualized and be discernible from each other. Until now, different tracer concentrations were used for discerning devices from blood in MPI, because only one type of SPIO could be imaged at a time. Recently, it was shown for 3D MPI that it is possible to separate different signal sources in one volume of interest, i.e. to visualize and discern different SPIOs or different binding states of the same SPIO. The approach was termed multi-color MPI. In this work, the use of multi-color MPI to differentiate an SPIO-coated guide wire (Terumo Radifocus 0.035″) from the lumen of a vessel phantom filled with diluted Resovist is demonstrated. This is achieved by recording dedicated system functions of the coating material containing solid Resovist and of liquid Resovist, which allows separation of their respective signals in the image reconstruction process. Assigning a color to each signal source results in colored images in which the guide wire and the vessel phantom lumen are clearly differentiated.
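
    The separation step can be sketched as a joint linear inverse problem: with one system matrix per signal source, the measured spectrum u = S_solid·x_solid + S_liquid·x_liquid is inverted for both concentration maps at once. Synthetic random system functions stand in for the measured ones, and the dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n_freq, n_pos = 40, 10          # frequency components, image positions

# system functions recorded for the two signal sources:
# solid (wire-coating) Resovist and liquid Resovist
S_solid = rng.normal(size=(n_freq, n_pos))
S_liquid = rng.normal(size=(n_freq, n_pos))

# phantom: wire occupies positions 2-3, lumen positions 5-8
x_solid = np.zeros(n_pos); x_solid[2:4] = 1.0
x_liquid = np.zeros(n_pos); x_liquid[5:9] = 0.5

# the measured spectrum is the superposition of both sources
u = S_solid @ x_solid + S_liquid @ x_liquid

# joint reconstruction separates the two 'colors'
S = np.hstack([S_solid, S_liquid])
x_hat, *_ = np.linalg.lstsq(S, u, rcond=None)
assert np.allclose(x_hat[:n_pos], x_solid, atol=1e-6)
assert np.allclose(x_hat[n_pos:], x_liquid, atol=1e-6)
```

    Each half of the recovered vector can then be rendered in its own color, which is how the wire and the lumen become separately visible.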

  20. Hyperspectral imaging using RGB color for foodborne pathogen detection

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper reports the latest development of a color vision technique for detecting colonies of foodborne pathogens grown on agar plates with a hyperspectral image classification model that was developed using full hyperspectral data. The hyperspectral classification model depended on reflectance sp...

  1. Color Image of Phoenix Heat Shield and Bounce Mark

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This is a color image from the Mars Reconnaissance Orbiter's High Resolution Imaging Science Experiment (HiRISE) camera, showing the Phoenix heat shield and its bounce mark on the Martian surface.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  2. Improved Calibration Shows Images True Colors

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Innovative Imaging and Research, located at Stennis Space Center, used a single SBIR contract with the center to build a large-scale integrating sphere, capable of calibrating a whole array of cameras simultaneously, at a fraction of the usual cost for such a device. Through the use of LEDs, the company also made the sphere far more efficient than existing products and able to mimic sunlight.

  3. Color Image Processing and Object Tracking System

    NASA Technical Reports Server (NTRS)

    Klimek, Robert B.; Wright, Ted W.; Sielken, Robert S.

    1996-01-01

    This report describes a personal computer based system for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the Microgravity Combustion and Fluids Science Research Programs at the NASA Lewis Research Center. The system consists of individual hardware components working under computer control to achieve a high degree of automation. The most important hardware components include 16-mm and 35-mm film transports, a high resolution digital camera mounted on an x-y-z micro-positioning stage, an S-VHS tapedeck, a Hi8 tapedeck, a video laserdisk, and a framegrabber. All of the image input devices are remotely controlled by a computer. Software was developed to integrate the overall operation of the system including device frame incrementation, grabbing of image frames, image processing of the object's neighborhood, locating the position of the object being tracked, and storing the coordinates in a file. This process is performed repeatedly until the last frame is reached. Several different tracking methods are supported. To illustrate the process, two representative applications of the system are described. These applications represent typical uses of the system and include tracking the propagation of a flame front and tracking the movement of a liquid-gas interface with extremely poor visibility.

  4. Image super-resolution based on image adaptive decomposition

    NASA Astrophysics Data System (ADS)

    Xie, Qiwei; Wang, Haiyan; Shen, Lijun; Chen, Xi; Han, Hua

    2011-11-01

    In this paper we propose an image super-resolution algorithm based on a Gaussian mixture model (GMM) and a new adaptive image decomposition algorithm. The new decomposition algorithm uses local extrema of the image to extract its cartoon and oscillating parts. We first decompose an image into oscillating and piecewise-smooth (cartoon) parts, then enlarge the cartoon part with interpolation. Because a GMM accurately characterizes the oscillating part, we specify it as the prior distribution and formulate the image super-resolution problem as a constrained optimization problem to recover the enlarged texture part, from which we finally obtain a fine result.
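    As a rough illustration of the cartoon/oscillating split, the sketch below uses a box filter as a stand-in for the paper's extrema-based decomposition, and nearest-neighbour repetition as a stand-in for its interpolation step; all sizes are arbitrary:

    ```python
    import numpy as np

    def box_blur(img, k=5):
        """Crude cartoon extraction via a k x k box filter (reflect padding)."""
        p = k // 2
        padded = np.pad(img, p, mode="reflect")
        out = np.zeros_like(img, dtype=float)
        for dy in range(k):
            for dx in range(k):
                out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / (k * k)

    img = np.add.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
    cartoon = img_cartoon = box_blur(img)      # piecewise-smooth part
    texture = img - cartoon                    # oscillating (residual) part
    # Enlarge the cartoon part by interpolation (nearest-neighbour repeat here)
    cartoon_2x = np.repeat(np.repeat(cartoon, 2, axis=0), 2, axis=1)
    ```

    The texture part would then be recovered at the higher resolution under the GMM prior rather than simply upsampled.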

  5. Offset-sparsity decomposition for automated enhancement of color microscopic image of stained specimen in histopathology

    NASA Astrophysics Data System (ADS)

    Kopriva, Ivica; Hadžija, Marijana Popović; Hadžija, Mirko; Aralica, Gorana

    2015-07-01

    We propose an offset-sparsity decomposition method for the enhancement of a color microscopic image of a stained specimen. The method decomposes vectorized spectral images into offset terms and sparse terms. A sparse term represents an enhanced image, and an offset term represents a "shadow." The related optimization problem is solved by computational improvement of the accelerated proximal gradient method used initially to solve the related rank-sparsity decomposition problem. Removal of an image-adapted color offset yields an enhanced image with improved colorimetric differences among the histological structures. This is verified by a no-reference colorfulness measure estimated from 35 specimens of the human liver, 1 specimen of the mouse liver stained with hematoxylin and eosin, 6 specimens of the mouse liver stained with Sudan III, and 3 specimens of the human liver stained with the anti-CD34 monoclonal antibody. The colorimetric difference improves on average by 43.86% with a 99% confidence interval (CI) of [35.35%, 51.62%]. Furthermore, according to the mean opinion score, estimated on the basis of the evaluations of five pathologists, images enhanced by the proposed method exhibit an average quality improvement of 16.60% with a 99% CI of [10.46%, 22.73%].

  6. Image space adaptive volume rendering

    NASA Astrophysics Data System (ADS)

    Corcoran, Andrew; Dingliana, John

    2012-01-01

    We present a technique for interactive direct volume rendering which provides adaptive sampling at a reduced memory requirement compared to traditional methods. Our technique exploits frame to frame coherence to quickly generate a two-dimensional importance map of the volume which guides sampling rate optimisation and allows us to provide interactive frame rates for user navigation and transfer function changes. In addition our ray casting shader detects any inconsistencies in our two-dimensional map and corrects them on the fly to ensure correct classification of important areas of the volume.

  7. Luminosity and contrast normalization in color retinal images based on standard reference image

    NASA Astrophysics Data System (ADS)

    S. Varnousfaderani, Ehsan; Yousefi, Siamak; Belghith, Akram; Goldbaum, Michael H.

    2016-03-01

    Color retinal images are used, manually or automatically, for diagnosis and for monitoring the progression of retinal diseases. They exhibit large luminosity and contrast variability within and across images, due to large natural variations in retinal pigmentation and complex imaging setups. Because image quality may affect the performance of automatic screening tools, different normalization methods have been developed to make the data uniform before any further analysis or processing. In this paper we propose a new, reliable method to remove non-uniform illumination in retinal images and to improve their contrast based on the contrast of a reference image. Non-uniform illumination is removed by normalizing the luminance image using its local mean and standard deviation. The contrast is then enhanced by shifting the histograms of the uniformly illuminated retinal image toward the histograms of the reference image so that their peaks align. This process improves contrast without changing the inter-correlation of pixels across color channels. In compliance with the way humans perceive color, the uniform LUV color space is used for normalization. The proposed method was tested extensively on a large dataset of retinal images in the presence of different pathologies, such as exudates, lesions, hemorrhages, and cotton-wool spots, and under different illumination conditions and imaging setups. Results show that the proposed method successfully equalizes illumination and enhances the contrast of retinal images without adding any extra artifacts.
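    The illumination-removal step (normalizing the luminance channel by its local mean and standard deviation) can be sketched as follows; the window size is an assumption, and the histogram-shifting contrast step is omitted:

    ```python
    import numpy as np

    def normalize_luminance(L, win=15, eps=1e-6):
        """Remove slowly varying illumination: subtract the local mean and
        divide by the local standard deviation over a win x win window."""
        p = win // 2
        padded = np.pad(L.astype(float), p, mode="reflect")
        # Accumulate sliding-window sums directly; cumulative sums would be
        # faster, but a plain loop keeps the sketch obvious.
        mean = np.zeros_like(L, dtype=float)
        sq = np.zeros_like(L, dtype=float)
        n = win * win
        for dy in range(win):
            for dx in range(win):
                w = padded[dy:dy + L.shape[0], dx:dx + L.shape[1]]
                mean += w
                sq += w * w
        mean /= n
        std = np.sqrt(np.maximum(sq / n - mean ** 2, 0.0))
        return (L - mean) / (std + eps)

    # A pure illumination ramp is flattened to (near) zero in the interior.
    L = np.add.outer(np.arange(32, dtype=float), np.arange(32, dtype=float))
    flat = normalize_luminance(L)
    ```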

  8. Clinical skin imaging using color spatial frequency domain imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Yang, Bin; Lesicko, John; Moy, Austin J.; Reichenberg, Jason; Tunnell, James W.

    2016-02-01

    Skin diseases are typically associated with underlying biochemical and structural changes relative to normal tissue, which alter the optical properties of skin lesions, such as tissue absorption and scattering. Although widely used in dermatology clinics, conventional dermatoscopes cannot selectively image tissue absorption and scattering, which may limit their diagnostic power. Here we report a novel clinical skin imaging technique called color spatial frequency domain imaging (cSFDI), which enhances contrast by rendering color spatial frequency domain (SFD) images at high spatial frequency. Moreover, by tuning the spatial frequency, we can obtain both absorption-weighted and scattering-weighted images. We developed a handheld imaging system specifically for clinical skin imaging; the flexible configuration of the system allows better access to skin lesions in hard-to-reach regions. A total of 48 lesions from 31 patients were imaged under 470 nm, 530 nm and 655 nm illumination at a spatial frequency of 0.6 mm^(-1). The SFD reflectance images at 470 nm, 530 nm and 655 nm were assigned to the blue (B), green (G) and red (R) channels to render a color SFD image. Our results indicated that color SFD images at f = 0.6 mm^(-1) revealed properties that were not seen in standard color images: structural features were enhanced and absorption features were reduced, which helped to identify the sources of the contrast. This imaging technique provides additional insight into skin lesions and may better assist clinical diagnosis.

  9. Incorporating Adaptive Local Information Into Fuzzy Clustering for Image Segmentation.

    PubMed

    Liu, Guoying; Zhang, Yun; Wang, Aimin

    2015-11-01

    Fuzzy c-means (FCM) clustering with spatial constraints has attracted great attention in the field of image segmentation. However, most of the popular techniques fail to resolve misclassification problems due to the inaccuracy of their spatial models. This paper presents a new unsupervised FCM-based image segmentation method by paying closer attention to the selection of local information. In this method, region-level local information is incorporated into the fuzzy clustering procedure to adaptively control the range and strength of interactive pixels. First, a novel dissimilarity function is established by combining region-based and pixel-based distance functions together, in order to enhance the relationship between pixels which have similar local characteristics. Second, a novel prior probability function is developed by integrating the differences between neighboring regions into the mean template of the fuzzy membership function, which adaptively selects local spatial constraints by a tradeoff weight depending upon whether a pixel belongs to a homogeneous region or not. Through incorporating region-based information into the spatial constraints, the proposed method strengthens the interactions between pixels within the same region and prevents over smoothing across region boundaries. Experimental results over synthetic noise images, natural color images, and synthetic aperture radar images show that the proposed method achieves more accurate segmentation results, compared with five state-of-the-art image segmentation methods. PMID:26186787
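    For orientation, the baseline fuzzy c-means iteration that the proposed method extends looks roughly like this (plain FCM only; the paper's region-level dissimilarity function and adaptive spatial prior are not reproduced):

    ```python
    import numpy as np

    def fcm(X, c=2, m=2.0, iters=50, seed=0):
        """Plain fuzzy c-means on feature vectors X (n x d): alternate
        between weighted center updates and membership updates."""
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), c))
        U /= U.sum(axis=1, keepdims=True)          # fuzzy memberships
        for _ in range(iters):
            W = U ** m
            centers = (W.T @ X) / W.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
            U = 1.0 / (d ** (2.0 / (m - 1.0)))     # inverse-distance weights
            U /= U.sum(axis=1, keepdims=True)
        return U, centers

    # Two well-separated intensity clusters
    X = np.concatenate([np.full((50, 1), 0.1), np.full((50, 1), 0.9)])
    U, centers = fcm(X)
    labels = U.argmax(axis=1)
    ```

    The paper's contribution is precisely what this sketch lacks: distance and membership terms that adapt to region-level local information.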

  10. Use of discrete chromatic space to tune the image tone in a color image mosaic

    NASA Astrophysics Data System (ADS)

    Zhang, Zuxun; Li, Zhijiang; Zhang, Jianqing; Zheng, Li

    2003-09-01

    Color image processing is an important problem. The prevailing approach is to transform the RGB color space into another color space, such as HSI (hue, saturation, intensity), YIQ, or LUV. However, it may not be valid to process a color airborne image in just one color space, because the electromagnetic signal is physically determined in each wave band, while the color image is perceived psychologically by the human visual system. It is therefore necessary to propose an approach consistent with both the physical transformation and psychological perception. An analysis of how to use the relevant color spaces to process color airborne photographs is discussed, and an application to tuning the image tone in a color airborne image mosaic is introduced. As a practical matter, a complete approach to performing a mosaic of color airborne images that takes full advantage of the relevant color spaces is discussed in the application.
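    A per-pixel RGB-to-HSI conversion of the kind referred to above can be written as follows (the standard geometric formulation, with normalized inputs assumed; hue is returned in degrees):

    ```python
    import math

    def rgb_to_hsi(r, g, b):
        """Convert normalized RGB (0..1) to HSI: hue in degrees,
        saturation and intensity in 0..1."""
        i = (r + g + b) / 3.0
        mn = min(r, g, b)
        s = 0.0 if i == 0 else 1.0 - mn / i
        num = 0.5 * ((r - g) + (r - b))
        den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
        # Clamp guards against rounding just outside acos's domain.
        h = 0.0 if den == 0 else math.degrees(
            math.acos(max(-1.0, min(1.0, num / den))))
        if b > g:
            h = 360.0 - h
        return h, s, i

    h, s, i = rgb_to_hsi(0.2, 0.8, 0.1)   # a greenish pixel
    ```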

  11. Adaptive optics imaging of the retina

    PubMed Central

    Battu, Rajani; Dabir, Supriya; Khanna, Anjani; Kumar, Anupama Kiran; Roy, Abhijit Sinha

    2014-01-01

    Adaptive optics is a relatively new tool that is available to ophthalmologists for study of cellular level details. In addition to the axial resolution provided by the spectral-domain optical coherence tomography, adaptive optics provides an excellent lateral resolution, enabling visualization of the photoreceptors, blood vessels and details of the optic nerve head. We attempt a mini review of the current role of adaptive optics in retinal imaging. PubMed search was performed with key words Adaptive optics OR Retina OR Retinal imaging. Conference abstracts were searched from the Association for Research in Vision and Ophthalmology (ARVO) and American Academy of Ophthalmology (AAO) meetings. In total, 261 relevant publications and 389 conference abstracts were identified. PMID:24492503

  12. Image-Specific Prior Adaptation for Denoising.

    PubMed

    Lu, Xin; Lin, Zhe; Jin, Hailin; Yang, Jianchao; Wang, James Z

    2015-12-01

    Image priors are essential to many image restoration applications, including denoising, deblurring, and inpainting. Existing methods use either priors from the given image (internal) or priors from a separate collection of images (external). We find through statistical analysis that unifying the internal and external patch priors may yield a better patch prior. We propose a novel prior learning algorithm that combines the strength of both internal and external priors. In particular, we first learn a generic Gaussian mixture model from a collection of training images and then adapt the model to the given image by simultaneously adding additional components and refining the component parameters. We apply this image-specific prior to image denoising. The experimental results show that our approach yields better or competitive denoising results in terms of both the peak signal-to-noise ratio and structural similarity. PMID:26316129

  13. Restoration of color images by multichannel Kalman filtering

    NASA Technical Reports Server (NTRS)

    Galatsanos, Nikolas P.; Chin, Roland T.

    1991-01-01

    A Kalman filter for optimal restoration of multichannel images is presented. This filter is derived using a multichannel semicausal image model that includes between-channel degradation. Both stationary and nonstationary image models are developed. This filter is implemented in the Fourier domain, and computation is reduced from O(Λ^3 N^3 M^4) to O(Λ^3 N^3 M^2) for an M x M N-channel image with degradation length Λ. Color (red, green, and blue (RGB)) images are used as examples of multichannel images, and restoration in the RGB and YIQ domains is investigated. Simulations are presented in which the effectiveness of this filter is tested for different types of degradation and different image model estimates.

  14. Multi-clues image retrieval based on improved color invariants

    NASA Astrophysics Data System (ADS)

    Liu, Liu; Li, Jian-Xun

    2012-05-01

    At present, image retrieval has made great progress in indexing efficiency and memory usage, which mainly benefits from the utilization of text retrieval technology, such as the bag-of-features (BOF) model and the inverted-file structure. Meanwhile, because robust local feature invariants are selected to establish the BOF, its retrieval precision is enhanced, especially when applied to a large-scale database. However, these local feature invariants mainly consider the geometric variance of the objects in the images, so the color information of the objects fails to be exploited. With the development of information technology and the Internet, the majority of our retrieval objects are color images. Therefore, retrieval performance can be further improved through proper utilization of the color information. We propose an improved method based on an analysis of a flaw of the shadow-shading quasi-invariant: its response at object edges under varying lighting is enhanced. The color descriptors of the invariant regions are extracted and integrated into the BOF based on the local features. The robustness of the algorithm and the improvement in performance are verified in the final experiments.

  15. Color image encryption based on joint fractional Fourier transform correlator

    NASA Astrophysics Data System (ADS)

    Lu, Ding; Jin, Weimin

    2011-06-01

    In this paper, an optical color image encryption/decryption technology based on joint fractional Fourier transform correlator and double random phase encoding (DRPE) is developed. In this method, the joint fractional power spectrum of the image to be encrypted and the key codes is recorded as the encrypted data. Different from the case with classical DRPE, the same key code was used both in the encryption and decryption. The security of the system is enhanced because of the fractional order as a new added key. This method takes full advantage of the parallel processing features of the optical system, and could optically realize single-channel color image encryption. The experimental results indicate that the new method is feasible.
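    The double random phase encoding step can be sketched numerically, with an ordinary Fourier transform standing in for the fractional one (whose order would act as the extra key described above):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    f = rng.random((32, 32))                  # intensity image to encrypt

    # Two statistically independent random phase masks: one in the input
    # plane, one in the (here: ordinary) Fourier plane.
    phi1 = np.exp(2j * np.pi * rng.random(f.shape))
    phi2 = np.exp(2j * np.pi * rng.random(f.shape))

    # Encryption: mask, transform, mask again, transform back.
    encrypted = np.fft.ifft2(np.fft.fft2(f * phi1) * phi2)

    # Decryption with the correct (conjugate) keys recovers the image.
    decrypted = np.abs(
        np.fft.ifft2(np.fft.fft2(encrypted) * np.conj(phi2)) * np.conj(phi1))
    ```

    The encrypted field is complex white-noise-like; without both keys (and, in the paper's scheme, the correct fractional order) the image cannot be recovered.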

  16. Power Doppler imaging: clinical experience and correlation with color Doppler US and other imaging modalities.

    PubMed

    Hamper, U M; DeJong, M R; Caskey, C I; Sheth, S

    1997-01-01

    Power Doppler imaging has recently gained attention as an additional color flow imaging technique that overcomes some of the limitations of conventional color Doppler ultrasound (US). Limitations of conventional color Doppler US include angle dependence, aliasing, and difficulty in separating background noise from true flow in slow-flow states. Owing to its increased sensitivity to flow, power Doppler sonography is valuable in low-flow states and when optimal Doppler angles cannot be obtained. Longer segments of vessels and more individual vessels can be visualized with power Doppler US than with conventional color Doppler sonography. Power Doppler sonography increases diagnostic confidence when verifying or excluding testicular or ovarian torsion and confirming thrombosis or occlusion of vessels. Power Doppler sonography also improves evaluation of parenchymal flow and decreases examination times in technically challenging cases. Power Doppler US is a useful adjunct to mean-frequency color Doppler sonography, especially when color Doppler US cannot adequately obtain or display diagnostic information. PMID:9084086

  17. Multifocus color image fusion based on quaternion curvelet transform.

    PubMed

    Guo, Liqiang; Dai, Ming; Zhu, Ming

    2012-08-13

    Multifocus color image fusion is an active research area in image processing, and many fusion algorithms have been developed. However, the existing techniques can hardly deal with the problem of image blur. This study presents a novel fusion approach that integrates quaternions with the traditional curvelet transform to overcome this disadvantage. The proposed method uses a multiresolution analysis procedure based on the quaternion curvelet transform. Experimental results show that the proposed method is promising, and that it significantly improves fusion quality compared to existing fusion methods. PMID:23038524

  18. Microscale halftone color image analysis: perspective of spectral color prediction modeling

    NASA Astrophysics Data System (ADS)

    Rahaman, G. M. Atiqur; Norberg, Ole; Edström, Per

    2014-01-01

    A method has been proposed in which the k-means clustering technique is applied to segment a microscale single-color halftone image into three components: solid ink, mixed ink/paper area, and unprinted paper. The method has been evaluated using impact (offset) and non-impact (electro-photography) single-color prints halftoned by amplitude modulation (AM) and frequency modulation (FM) techniques. The print samples also included a range of paper substrates. The colors of the segmented regions have been analyzed in CIELAB color space to reveal the variations, in particular those present in the mixed regions. The statistics of the intensity distribution in the segmented areas have been used to derive expressions from which simple thresholds can be calculated. The segmented results have also been employed to study dot gain in comparison with the traditional estimation technique using the Murray-Davies formula, and the performance of halftone reflectance prediction by the spectral Murray-Davies model is reported using estimated and measured parameters. Finally, a general idea is proposed to expand the classical Murray-Davies model based on experimental observations. The present study thus primarily presents the outcome of experimental efforts to characterize halftone print-media interactions with respect to color prediction models. Currently, most regression-based color prediction models rely on mathematical optimization to estimate the parameters using the measured average reflectance of an area that is large compared to the dot size. While this general approach has been accepted as a useful tool, experimental investigations can enhance understanding of the physical processes and facilitate exploration of new modeling strategies. Furthermore, the reported findings may help reduce the number of samples that must be printed and measured in the process of multichannel printer characterization and calibration.
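    The classical Murray-Davies relation referred to above is simple enough to state directly; inverting it at a single wavelength gives the effective dot area used in dot-gain estimation (the reflectance values below are illustrative only):

    ```python
    def murray_davies(a, R_ink, R_paper):
        """Murray-Davies halftone reflectance per wavelength band:
        fractional area coverage a of solid ink on otherwise bare paper."""
        return [a * ri + (1.0 - a) * rp for ri, rp in zip(R_ink, R_paper)]

    def effective_coverage(R_meas, R_ink, R_paper):
        """Invert the model at one wavelength to estimate effective dot
        area; the excess over nominal coverage is the dot gain."""
        return (R_paper - R_meas) / (R_paper - R_ink)

    # Example: 40% nominal coverage, dark ink on bright paper.
    R = murray_davies(0.4, [0.05], [0.9])
    a_eff = effective_coverage(R[0], 0.05, 0.9)
    ```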

  19. Munsell color analysis of Landsat color-ratio-composite images of limonitic areas in southwest New Mexico

    NASA Technical Reports Server (NTRS)

    Kruse, F. A.

    1985-01-01

    The causes of color variations in the green areas on Landsat 4/5-4/6-6/7 (red-blue-green) color-ratio-composite (CRC) images, defined as limonitic areas, were investigated by analyzing the CRC images of the Lordsburg, New Mexico area. The red-blue-green additive color system was mathematically transformed into the cylindrical Munsell color coordinates (hue, saturation, and value), and selected areas were digitally analyzed for color variation. The obtained precise color characteristics were then correlated with properties of the surface material. The amount of limonite (L) visible to the sensor was found to be the primary cause of the observed color differences. The visible L is, in turn, affected by the amount of L on the material's surface and by within-pixel mixing of limonitic and nonlimonitic materials. The secondary cause of variation was vegetation density, which shifted CRC hues toward yellow-green, decreased saturation, and increased value.
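    A cylindrical hue/saturation/value decomposition in the same spirit as the Munsell transformation is available in the Python standard library; the paper's exact RGB-to-Munsell mapping is not reproduced here, but the shift of a composite color toward yellow-green shows up directly in the hue coordinate:

    ```python
    import colorsys

    # Hue, saturation and value of an RGB triple (all in 0..1; hue as a
    # fraction of the color circle, so green sits at 1/3).
    h, s, v = colorsys.rgb_to_hsv(0.2, 0.8, 0.1)
    ```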

  20. Adaptive sigmoid function bihistogram equalization for image contrast enhancement

    NASA Astrophysics Data System (ADS)

    Arriaga-Garcia, Edgar F.; Sanchez-Yanez, Raul E.; Ruiz-Pinales, Jose; Garcia-Hernandez, Ma. de Guadalupe

    2015-09-01

    Contrast enhancement plays a key role in a wide range of applications including consumer electronic applications, such as video surveillance, digital cameras, and televisions. The main goal of contrast enhancement is to increase the quality of images. However, most state-of-the-art methods induce different types of distortion such as intensity shift, wash-out, noise, intensity burn-out, and intensity saturation. In addition, in consumer electronics, simple and fast methods are required in order to be implemented in real time. A bihistogram equalization method based on adaptive sigmoid functions is proposed. It consists of splitting the image histogram into two parts that are equalized independently by using adaptive sigmoid functions. In order to preserve the mean brightness of the input image, the parameter of the sigmoid functions is chosen to minimize the absolute mean brightness metric. Experiments on the Berkeley database have shown that the proposed method improves the quality of images and preserves their mean brightness. An application to improve the colorfulness of images is also presented.
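    The histogram-splitting idea can be illustrated with classical brightness-preserving bi-histogram equalization; the paper's adaptive sigmoid mapping of each half is replaced here by plain per-half equalization:

    ```python
    import numpy as np

    def bbhe(img):
        """Bi-histogram equalization: split the histogram at the mean and
        equalize the two halves independently, each within its own range.
        (The paper instead maps each half through an adaptive sigmoid
        chosen to minimize the absolute mean brightness error.)"""
        img = img.astype(np.uint8)
        mean = int(img.mean())
        out = np.empty_like(img)
        for lo, hi, mask in ((0, mean, img <= mean),
                             (mean + 1, 255, img > mean)):
            vals = img[mask]
            if vals.size == 0:
                continue
            hist = np.bincount(vals, minlength=256).astype(float)
            cdf = np.cumsum(hist) / vals.size
            mapping = (lo + cdf * (hi - lo)).astype(np.uint8)
            out[mask] = mapping[vals]
        return out

    img = np.tile(np.arange(256, dtype=np.uint8), (4, 1))
    out = bbhe(img)
    ```

    Each half stays within its side of the mean, which is what preserves mean brightness relative to global equalization.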

  1. Adaptive inter color residual prediction for efficient red-green-blue intra coding

    NASA Astrophysics Data System (ADS)

    Jeong, Jinwoo; Choe, Yoonsik; Kim, Yong-Goo

    2011-07-01

    Intra coding of an RGB video is important to many high fidelity multimedia applications because video acquisition is mostly done in RGB space, and the coding of decorrelated color video loses its virtue in high quality ranges. In order to improve the compression performance of an RGB video, this paper proposes an inter color prediction using adaptive weights. For making full use of spatial, as well as inter color correlation of an RGB video, the proposed scheme is based on a residual prediction approach, and thus the incorporated prediction is performed on the transformed frequency components of spatially predicted residual data of each color plane. With the aid of efficient prediction employing frequency domain inter color residual correlation, the proposed scheme achieves up to 24.3% of bitrate reduction, compared to the common mode of H.264/AVC high 4:4:4 intra profile.

  2. A New Adaptive Image Denoising Method

    NASA Astrophysics Data System (ADS)

    Biswas, Mantosh; Om, Hari

    2016-03-01

    In this paper, a new adaptive image denoising method is proposed that follows the soft-thresholding technique. A new threshold function is also proposed, determined from combinations of the noise level, noise-free signal variance, subband size, and decomposition level. It is simple and adaptive, as it depends on data-driven parameter estimation in each subband. The state-of-the-art denoising methods, viz. VisuShrink, SureShrink, BayesShrink, WIDNTF and IDTVWT, are not able to modify the coefficients efficiently enough to provide good image quality. Our method removes noise from the noisy image significantly and provides better visual quality.
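    For reference, the soft-thresholding operator and one classical, non-adaptive threshold choice look like this; the paper's threshold additionally depends on the noise-free signal variance, subband size and decomposition level:

    ```python
    import math

    def soft_threshold(x, t):
        """Soft-thresholding operator: shrink |x| by t, zeroing small
        coefficients entirely."""
        return math.copysign(max(abs(x) - t, 0.0), x)

    def universal_threshold(sigma, n):
        """VisuShrink's universal threshold for n coefficients with noise
        standard deviation sigma (one of the baselines cited above)."""
        return sigma * math.sqrt(2.0 * math.log(n))
    ```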

  3. Adapting overcomplete wavelet models to natural images

    NASA Astrophysics Data System (ADS)

    Sallee, Phil; Olshausen, Bruno A.

    2003-11-01

    Overcomplete wavelet representations have become increasingly popular for their ability to provide highly sparse and robust descriptions of natural signals. We describe a method for incorporating an overcomplete wavelet representation as part of a statistical model of images which includes a sparse prior distribution over the wavelet coefficients. The wavelet basis functions are parameterized by a small set of 2-D functions. These functions are adapted to maximize the average log-likelihood of the model for a large database of natural images. When adapted to natural images, these functions become selective to different spatial orientations, and they achieve a superior degree of sparsity on natural images as compared with traditional wavelet bases. The learned basis is similar to the Steerable Pyramid basis, and yields slightly higher SNR for the same number of active coefficients. Inference with the learned model is demonstrated for applications such as denoising, with results that compare favorably with other methods.

  4. Color calibration of a CMOS digital camera for mobile imaging

    NASA Astrophysics Data System (ADS)

    Eliasson, Henrik

    2010-01-01

    As white balance algorithms employed in mobile phone cameras become increasingly sophisticated by using, e.g., elaborate white-point estimation methods, a proper color calibration is necessary. Without such a calibration, the estimation of the light source for a given situation may go wrong, giving rise to large color errors. At the same time, the demands for efficiency in the production environment require the calibration to be as simple as possible. Thus it is important to find the correct balance between image quality and production efficiency requirements. The purpose of this work is to investigate camera color variations using a simple model where the sensor and IR filter are specified in detail. As input to the model, spectral data of the 24-color Macbeth Colorchecker was used. This data was combined with the spectral irradiance of mainly three different light sources: CIE A, D65 and F11. The sensor variations were determined from a very large population from which 6 corner samples were picked out for further analysis. Furthermore, a set of 100 IR filters were picked out and measured. The resulting images generated by the model were then analyzed in the CIELAB space and color errors were calculated using the ΔE94 metric. The results of the analysis show that the maximum deviations from the typical values are small enough to suggest that a white balance calibration is sufficient. Furthermore, it is also demonstrated that the color temperature dependence is small enough to justify the use of only one light source in a production environment.
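    The ΔE94 metric used for the color-error analysis above can be computed directly from CIELAB coordinates (graphic-arts weighting constants assumed):

    ```python
    import math

    def delta_e94(lab1, lab2, kL=1.0, kC=1.0, kH=1.0):
        """CIE Delta E*94 color difference between two L*a*b* triples."""
        L1, a1, b1 = lab1
        L2, a2, b2 = lab2
        dL = L1 - L2
        C1 = math.hypot(a1, b1)
        C2 = math.hypot(a2, b2)
        dC = C1 - C2
        # Hue difference squared, clamped against rounding below zero.
        dH2 = max((a1 - a2) ** 2 + (b1 - b2) ** 2 - dC ** 2, 0.0)
        SL, SC, SH = 1.0, 1.0 + 0.045 * C1, 1.0 + 0.015 * C1
        return math.sqrt((dL / (kL * SL)) ** 2
                         + (dC / (kC * SC)) ** 2
                         + dH2 / (kH * SH) ** 2)
    ```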

  5. Effects of chromatic image statistics on illumination induced color differences.

    PubMed

    Lucassen, Marcel P; Gevers, Theo; Gijsenij, Arjan; Dekker, Niels

    2013-09-01

    We measure the color fidelity of visual scenes that are rendered under different (simulated) illuminants and shown on a calibrated LCD display. Observers make triad illuminant comparisons involving the renderings from two chromatic test illuminants and one achromatic reference illuminant shown simultaneously. Four chromatic test illuminants are used: two along the daylight locus (yellow and blue), and two perpendicular to it (red and green). The observers select the rendering having the best color fidelity, thereby indirectly judging which of the two test illuminants induces the smallest color differences compared to the reference. Both multicolor test scenes and natural scenes are studied. The multicolor scenes are synthesized and represent ellipsoidal distributions in CIELAB chromaticity space having the same mean chromaticity but different chromatic orientations. We show that, for those distributions, color fidelity is best when the vector of the illuminant change (pointing from neutral to chromatic) is parallel to the major axis of the scene's chromatic distribution. For our selection of natural scenes, which generally have much broader chromatic distributions, we measure a higher color fidelity for the yellow and blue illuminants than for red and green. Scrambled versions of the natural images are also studied to exclude possible semantic effects. We quantitatively predict the average observer response (i.e., the illuminant probability) with four types of models, differing in the extent to which they incorporate information processing by the visual system. Results show different levels of performance for the models, and different levels for the multicolor scenes and the natural scenes. Overall, models based on the scene averaged color difference have the best performance. We discuss how color constancy algorithms may be improved by exploiting knowledge of the chromatic distribution of the visual scene. PMID:24323269

  6. Availability of color calibration for consistent color display in medical images and optimization of reference brightness for clinical use

    NASA Astrophysics Data System (ADS)

    Iwai, Daiki; Suganami, Haruka; Hosoba, Minoru; Ohno, Kazuko; Emoto, Yutaka; Tabata, Yoshito; Matsui, Norihisa

    2013-03-01

    Color image consistency has not yet been achieved, except through the Digital Imaging and Communications in Medicine (DICOM) Supplement 100, which implements a color reproduction pipeline and device-independent color spaces. Thus, most healthcare enterprises cannot check monitor degradation routinely. To ensure color consistency in medical color imaging, monitor color calibration should be introduced. Using a simple color calibration device, the chromaticities of typical colors (red, green, blue, and white) are measured as device-independent profile connection space values, called u'v', before and after calibration. In addition, clinical color images are displayed and visual differences are observed. In color calibration, the monitor brightness level has to be set to the rather low value of 80 cd/m2 according to the sRGB standard. As the maximum brightness of most color monitors currently available for medical use is much higher than 80 cd/m2, that level does not seem appropriate for calibration. We therefore propose that a new brightness standard be introduced while maintaining the color representation in clinical use. To evaluate the effect of brightness on chromaticity experimentally, the brightness level of two monitors was varied from 80 to 270 cd/m2 and the chromaticity values were compared across brightness levels. As a result, there were no significant differences in the chromaticity diagram when the brightness level was changed. In conclusion, chromaticity is close to the theoretical value after color calibration, and chromaticity does not shift when brightness is changed. The results indicate that an optimized reference brightness level for clinical use could be set at the high brightness of current monitors.
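    The u'v' chromaticity coordinates used for the before/after comparison follow directly from the CIE tristimulus values:

    ```python
    def xyz_to_uv_prime(X, Y, Z):
        """CIE 1976 u'v' chromaticity coordinates from tristimulus XYZ."""
        d = X + 15.0 * Y + 3.0 * Z
        return 4.0 * X / d, 9.0 * Y / d

    # D65 white point as an example input.
    u, v = xyz_to_uv_prime(95.047, 100.0, 108.883)
    ```

    Because u'v' is a chromaticity (ratio) space, scaling X, Y, Z together, as a pure brightness change would, leaves the coordinates unchanged, consistent with the finding above.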

  7. Autonomous ship classification using synthetic and real color images

    NASA Astrophysics Data System (ADS)

    Kumlu, Deniz; Jenkins, B. Keith

    2013-03-01

    This work classifies color images of ships acquired using cameras mounted on ships and in harbors. Our data-sets contain 9 different types of ship, each with 18 different perspectives, across the training set, development set and testing set. The training data-set contains modeled synthetic images; the development and testing data-sets contain real images. The database of real images was gathered from the internet, and 3D models for the synthetic images were imported from Google 3D Warehouse. A key goal in this work is to use synthetic images to increase overall classification accuracy. We present a novel approach for autonomous segmentation and feature extraction for this problem. A support vector machine is used for multi-class classification. This work reports three experimental results for the multi-class ship classification problem. The first experiment trains on the synthetic image data-set and tests on a real image data-set, obtaining an accuracy of 87.8%. The second experiment trains on a real image data-set and tests on a separate real image data-set, also obtaining an accuracy of 87.8%. The last experiment trains on the combined real + synthetic data-set and tests on a separate real image data-set, obtaining an accuracy of 93.3%.
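
    The multi-class SVM stage can be illustrated with a small sketch. The features and class counts below are placeholder stand-ins, not the paper's segmentation-derived features, and scikit-learn is assumed to be available:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical stand-in features: three well-separated "ship classes"
X = np.vstack([rng.normal(c, 0.1, size=(20, 4)) for c in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], 20)

# SVC handles multi-class problems via one-vs-one decomposition internally
clf = SVC(kernel="linear", C=1.0).fit(X, y)
print(clf.score(X, y))  # well-separated clusters: training accuracy 1.0
```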

  8. Colored coded-apertures for spectral image unmixing

    NASA Astrophysics Data System (ADS)

    Vargas, Hector M.; Arguello Fuentes, Henry

    2015-10-01

    Hyperspectral remote sensing technology provides detailed spectral information from every pixel in an image. Due to the low spatial resolution of hyperspectral image sensors, and the presence of multiple materials in a scene, each pixel can contain more than one spectral signature. Therefore, endmember extraction is used to determine the pure spectral signatures of the mixed materials and their corresponding abundance maps in a remotely sensed hyperspectral scene. Advanced endmember extraction algorithms have been proposed to solve this linear problem, called spectral unmixing. However, such techniques require the acquisition of the complete hyperspectral data cube to perform the unmixing procedure. Researchers have shown that using colored coded-apertures improves the quality of reconstruction in compressive spectral imaging (CSI) systems under compressive sensing (CS) theory. This work aims at developing a compressive supervised spectral unmixing scheme to estimate the endmembers and the abundance map from compressive measurements. The compressive measurements are acquired by using colored coded-apertures in a compressive spectral imaging system. Then a numerical procedure estimates the sparse vector representation in a 3D dictionary by solving a constrained sparse optimization problem. The 3D dictionary is formed by a 2D wavelet basis and a known endmember spectral library, where the wavelet basis is used to exploit the spatial information. The colored coded-apertures are designed such that the sensing matrix satisfies the restricted isometry property with high probability. Simulations show that the proposed scheme attains results comparable to the full-data-cube unmixing technique, but using fewer measurements.
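
    The constrained sparse optimization step can be approximated by a simple iterative soft-thresholding (ISTA) loop. This is a generic sketch of l1-regularized sparse recovery, not the authors' specific solver or 3D dictionary:

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=500):
    """Iterative soft-thresholding for min ||y - Ax||^2/2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L            # gradient step on the quadratic term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 60)) / np.sqrt(30)      # compressive sensing matrix
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [1.0, -1.0, 0.5]           # sparse vector to recover
x_hat = ista(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true))            # small recovery error
```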

  9. Online monitoring of red meat color using hyperspectral imaging.

    PubMed

    Kamruzzaman, Mohammed; Makino, Yoshio; Oshita, Seiichi

    2016-06-01

    A hyperspectral imaging system in the spectral range of 400-1000 nm was tested to develop an online monitoring system for red meat (beef, lamb, and pork) color in the meat industry. Instead of selecting different sets of important wavelengths for beef, lamb, and pork, a single set of feature wavelengths was selected using the successive projection algorithm for the red meat color components (L*, a*, b*) for convenient industrial application. Only six wavelengths (450, 460, 600, 620, 820, and 980 nm) were chosen as predictive feature wavelengths for predicting L*, a*, and b* in red meat. Multiple linear regression models were then developed and predicted L*, a*, and b* with coefficients of determination (R²p) of 0.97, 0.84, and 0.82, and root mean square errors of prediction of 1.72, 1.73, and 1.35, respectively. Finally, distribution maps of meat surface color were generated. The results indicated that hyperspectral imaging has the potential to be used for rapid assessment of meat color. PMID:26874594
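
    A multiple linear regression of the kind described, mapping reflectance at the six selected wavelengths to one color component, can be sketched with ordinary least squares. The reflectance values and coefficients below are synthetic stand-ins, not the paper's data:

```python
import numpy as np

# Hypothetical reflectances at the six selected wavelengths (450...980 nm)
rng = np.random.default_rng(0)
R = rng.uniform(0.1, 0.9, size=(50, 6))
true_w = np.array([40.0, -12.0, 8.0, 5.0, -3.0, 2.0])
L_star = R @ true_w + 10.0                      # synthetic L* values, noise-free

X = np.hstack([R, np.ones((50, 1))])            # add an intercept column
coef, *_ = np.linalg.lstsq(X, L_star, rcond=None)
print(np.allclose(coef[:6], true_w))            # exact fit on noise-free data
```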

  10. Research on adaptive segmentation and activity classification method of filamentous fungi image in microbe fermentation

    NASA Astrophysics Data System (ADS)

    Cai, Xiaochun; Hu, Yihua; Wang, Peng; Sun, Dujuan; Hu, Guilan

    2009-10-01

    The paper presents an adaptive segmentation and activity classification method for filamentous fungi images. First, an adaptive structuring element (SE) construction algorithm is proposed for image background suppression, and a color-labeled segmentation of the fungi image is performed based on the watershed transform. Second, the feature space of fungi elements is described and the feature set for hyphae activity classification is extracted. The growth rate of fungi hyphae is evaluated using an SVM classifier. Experimental results demonstrate that the proposed method is effective for filamentous fungi image processing.

  11. False color image of Safsaf Oasis in southern Egypt

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a false color image of the uninhabited Safsaf Oasis in southern Egypt near the Egypt/Sudan border. It was produced from data obtained from the L-band and C-band radars that are part of the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar onboard the Shuttle Endeavour on April 9, 1994. The image is centered at 22 degrees North latitude, 29 degrees East longitude. It shows detailed structures of bedrock, and the dark blue sinuous lines are braided channels that occupy part of an old broad river valley. Virtually everything visible on this radar composite image cannot be seen either when standing on the ground or when viewing photographs or satellite images such as Landsat. The Jet Propulsion Laboratory alternative photo number is P-43920.

  12. Client-side Medical Image Colorization in a Collaborative Environment.

    PubMed

    Virag, Ioan; Stoicu-Tivadar, Lăcrămioara; Crişan-Vida, Mihaela

    2015-01-01

    The paper presents an application related to collaborative medicine using a browser-based medical visualization system, with focus on the medical image colorization process and the underlying open source web development technologies involved. Browser-based systems allow physicians to share medical data with their remotely located counterparts or medical students, assisting them during patient diagnosis, treatment monitoring, surgery planning or for educational purposes. This approach brings forth the advantage of ubiquity: the system can be accessed from any device in order to process the images, ensuring independence from any specific proprietary operating system. The current work starts with the processing of DICOM (Digital Imaging and Communications in Medicine) files and ends with the rendering of the resulting bitmap images on an HTML5 (fifth revision of the HyperText Markup Language) canvas element. The application improves image visualization by emphasizing different tissue densities. PMID:25991287

  13. Joint high dynamic range imaging and color demosaicing

    NASA Astrophysics Data System (ADS)

    Herwig, Johannes; Pauli, Josef

    2011-11-01

    A non-parametric high dynamic range (HDR) fusion approach is proposed that works on raw images of single-sensor color imaging devices which incorporate the Bayer pattern. Thereby the non-linear opto-electronic conversion function (OECF) is recovered before color demosaicing, so that interpolation artifacts do not affect the photometric calibration. Graph-based segmentation greedily clusters the exposure set into regions of roughly constant radiance in order to regularize the OECF estimation. The segmentation works on Gaussian-blurred sensor images, whereby the artificial gray value edges caused by the Bayer pattern are smoothed away. With the OECF known, the 32-bit HDR radiance map is reconstructed by weighted summation from the differently exposed raw sensor images. Because the radiance map contains lower sensor noise than the individual images, it is finally demosaiced by weighted bilinear interpolation which prevents interpolation across edges. Here, the previous segmentation results from the photometric calibration are utilized. After demosaicing, tone mapping is applied, whereby remaining interpolation artifacts are further damped due to the coarser tonal quantization of the resulting image.
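
    The weighted summation of differently exposed frames into a radiance map can be sketched as follows, assuming a linear (already calibrated) response and a hat-shaped weighting; the paper's actual OECF recovery and Bayer-pattern handling are omitted:

```python
import numpy as np

def fuse_hdr(images, times):
    """Weighted average of per-exposure radiance estimates z/t (linear sensor assumed)."""
    images = np.asarray(images, dtype=float)
    w = 1.0 - np.abs(2.0 * images - 1.0)   # hat weight: trust mid-range pixels most
    w = np.maximum(w, 1e-6)                # avoid division by zero at the extremes
    rad = images / np.asarray(times, dtype=float)[:, None, None]
    return (w * rad).sum(axis=0) / w.sum(axis=0)

E = np.full((4, 4), 0.3)                   # true scene radiance
stack = np.stack([np.clip(E * t, 0, 1) for t in (1.0, 2.0)])  # two exposures
print(np.allclose(fuse_hdr(stack, (1.0, 2.0)), E))
```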

  14. Quaternion structural similarity: a new quality index for color images.

    PubMed

    Kolaman, Amir; Yadid-Pecht, Orly

    2012-04-01

    One of the most important issues for researchers developing image processing algorithms is image quality. Methodical quality evaluation, by showing images to several human observers, is slow, expensive, and highly subjective. On the other hand, a visual quality matrix (VQM) is a fast, cheap, and objective tool for evaluating image quality. Although most VQMs are good at predicting the quality of an image degraded by a single degradation, they perform poorly for a combination of two degradations. An example of such a degradation is the color crosstalk (CTK) effect, which introduces blur with desaturation. CTK is expected to become a bigger issue in image quality as the industry moves toward smaller sensors. In this paper, we develop a VQM that is able to better evaluate the quality of an image degraded by a combined blur/desaturation degradation and that performs as well as other VQMs on single degradations such as blur, compression, and noise. We show why standard scalar techniques are insufficient to measure a combined blur/desaturation degradation and explain why a vectorial approach is better suited. We introduce quaternion image processing (QIP), which is a true vectorial approach and has many uses in the fields of physics and engineering. Our new VQM is a vectorial expansion of structural similarity using QIP, which gave it its name: Quaternion Structural SIMilarity (QSSIM). We built a new database of a combined blur/desaturation degradation and conducted a quality survey with human subjects. An extensive comparison between QSSIM and other VQMs on several image quality databases, including our new database, shows the superiority of this new approach in predicting visual quality of color images. PMID:22203713
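
    QIP treats an RGB pixel as a pure quaternion r·i + g·j + b·k and manipulates it with the Hamilton product. A minimal sketch of that product (the QSSIM statistics themselves are not reproduced here):

```python
def qmul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
print(qmul(i, j) == k)   # i*j = k; note j*i = -k (non-commutative)
```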

  15. Digital image fusion systems: color imaging and low-light targets

    NASA Astrophysics Data System (ADS)

    Estrera, Joseph P.

    2009-05-01

    This paper presents digital image fusion (enhanced A+B) systems in color imaging and low-light target applications. It first discusses the digital sensors utilized in these image fusion applications: a 1900x1086 (high-definition format) CMOS imager coupled to a Generation III image intensifier as the visible/near-infrared (NIR) digital sensor, and a 320x240 or 640x480 uncooled microbolometer thermal imager as the long-wavelength infrared (LWIR) digital sensor. Performance metrics for these digital imaging sensors are presented. The digital image fusion (enhanced A+B) process is presented in the context of early fused night vision systems such as the digital image fused system (DIFS) and the digital enhanced night vision goggle, and later the long-range digitally fused night vision sighting system. Next, the paper discusses the effects of user display color in a dual-color digital image fusion system. Dual-color image fusion schemes such as Green/Red, Cyan/Yellow, and White/Blue for image intensifier and thermal infrared sensor color representation, respectively, are discussed. Finally, the paper presents digitally fused imagery and image analysis of long-distance targets in low light from these fused systems. The result of this image analysis with enhanced A+B digital image fusion systems is that maximum contrast and spatial resolution are achieved in the digital fusion mode as compared to the individual sensor modalities in low-light, long-distance imaging applications. This paper has been cleared by DoD/OSR for Public Release under Ref: 08-S-2183 on August 8, 2008.
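
    The exact "enhanced A+B" weighting is not specified in the abstract, but the general idea of additive fusion of two registered sensor frames can be sketched as follows (the equal weights are illustrative only):

```python
import numpy as np

def fuse_additive(intensified, thermal, wa=0.5, wb=0.5):
    """Weighted additive (A+B-style) fusion of two registered 8-bit frames.
    A plain sketch; the 'enhanced A+B' weighting itself is not public."""
    a = intensified.astype(float)
    b = thermal.astype(float)
    return np.clip(wa * a + wb * b, 0, 255).astype(np.uint8)

a = np.full((2, 2), 200, dtype=np.uint8)   # image-intensifier channel
b = np.full((2, 2), 100, dtype=np.uint8)   # thermal channel
print(fuse_additive(a, b)[0, 0])           # 150
```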

  16. Beef quality parameters estimation using ultrasound and color images

    PubMed Central

    2015-01-01

    Background: Beef quality measurement is a complex task with high economic impact. There is high interest in obtaining automatic quality parameter estimation in live cattle or post mortem. In this paper we set out to obtain beef quality estimates from the analysis of ultrasound (in vivo) and color images (post mortem), with the measurement of various parameters related to tenderness and amount of meat: rib eye area, percentage of intramuscular fat, and backfat (subcutaneous fat) thickness. Proposal: An algorithm based on curve evolution is implemented to calculate the rib eye area. The backfat thickness is estimated from the profile of distances between two curves that delimit the steak and the rib eye, previously detected. A model based on Support Vector Regression (SVR) is trained to estimate the intramuscular fat percentage. A series of features extracted from a region of interest, previously detected in both ultrasound and color images, was proposed. In all cases, a complete evaluation was performed with different databases including: color and ultrasound images acquired by a beef industry expert, intramuscular fat estimates obtained by an expert using commercial software, and chemical analysis. Conclusions: The proposed algorithms show good results in calculating the rib eye area and the backfat thickness measure and profile. They are also promising in predicting the percentage of intramuscular fat. PMID:25734452

  17. Optimized mean shift algorithm for color segmentation in image sequences

    NASA Astrophysics Data System (ADS)

    Bailer, Werner; Schallauer, Peter; Haraldsson, Harald B.; Rehatschek, Herwig

    2005-03-01

    The application of the mean shift algorithm to color image segmentation was proposed in 1997 by Comaniciu and Meer. We apply mean shift color segmentation to image sequences, as the first step of a moving object segmentation algorithm. Previous work has shown that it is well suited for this task, because it provides better temporal stability of the segmentation result than other approaches. The drawback is higher computational cost. To speed up processing on image sequences, we exploit the fact that subsequent frames are similar and use the cluster centers of previous frames as initial estimates, which also enhances spatial segmentation continuity. In contrast to other implementations, we use the originally proposed CIE LUV color space to ensure high-quality segmentation results. We show that moderate quantization of the input data before conversion to CIE LUV has little influence on the segmentation quality but results in a significant speed-up. We also propose changes in the post-processing step to increase the temporal stability of border pixels. We perform an objective evaluation of the segmentation results to compare the original algorithm with our modified version. We show that our optimized algorithm reduces processing time and increases the temporal stability of the segmentation.
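
    The core mean shift iteration moves each feature-space point to the mean of its neighbors within a bandwidth until it settles on a density mode. A flat-kernel sketch on toy 2D data (the paper works in CIE LUV feature space):

```python
import numpy as np

def mean_shift_modes(points, bandwidth=1.0, n_iter=30):
    """Flat-kernel mean shift: iterate each point toward the mean of its neighbors."""
    modes = points.astype(float).copy()
    for _ in range(n_iter):
        for i, p in enumerate(modes):
            near = points[np.linalg.norm(points - p, axis=1) < bandwidth]
            modes[i] = near.mean(axis=0)     # shift toward the local density mode
    return modes

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
modes = mean_shift_modes(pts, bandwidth=1.0)
print(np.round(modes, 2))  # two clusters collapse onto their two modes
```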

  18. Block-based embedded color image and video coding

    NASA Astrophysics Data System (ADS)

    Nagaraj, Nithin; Pearlman, William A.; Islam, Asad

    2004-01-01

    Set Partitioned Embedded bloCK coder (SPECK) has been found to perform comparably to the best-known grayscale still image coders such as EZW, SPIHT and JPEG2000. In this paper, we first propose Color-SPECK (CSPECK), a natural extension of SPECK to handle color still images in the YUV 4:2:0 format. Extensions to other YUV formats are also possible. PSNR results indicate that CSPECK is among the best known color coders, while the perceptual quality of its reconstruction is superior to that of SPIHT and JPEG2000. We then propose a moving-picture coding system called Motion-SPECK, with CSPECK as the core algorithm in an intra-based setting. Specifically, we demonstrate two modes of operation of Motion-SPECK, namely the constant-rate mode, where every frame is coded at the same bit-rate, and the constant-distortion mode, where we ensure the same quality for each frame. Results on well-known CIF sequences indicate that Motion-SPECK performs comparably to Motion-JPEG2000, while the visual quality of the sequence is in general superior. Both CSPECK and Motion-SPECK automatically inherit all the desirable features of SPECK, such as embeddedness, low computational complexity, highly efficient performance, fast decoding and low dynamic memory requirements. The intended applications of Motion-SPECK would be high-end and emerging video applications such as High Quality Digital Video Recording Systems, Internet Video and Medical Imaging.

  19. Validation of tablet-based evaluation of color fundus images

    PubMed Central

    Christopher, Mark; Moga, Daniela C.; Russell, Stephen R.; Folk, James C.; Scheetz, Todd; Abràmoff, Michael D.

    2012-01-01

    Purpose To compare diabetic retinopathy (DR) referral recommendations made by viewing fundus images using a tablet computer to recommendations made using a standard desktop display. Methods A tablet computer (iPad) and a desktop PC with a high-definition color display were compared. For each platform, two retinal specialists independently rated 1200 color fundus images from patients at risk for DR using an annotation program, Truthseeker. The specialists determined whether each image had referable DR, and also how urgently each patient should be referred for medical examination. Graders viewed and rated the randomly presented images independently and were masked to their ratings on the alternative platform. Tablet- and desktop display-based referral ratings were compared using cross-platform, intra-observer kappa as the primary outcome measure. Additionally, inter-observer kappa, sensitivity, specificity, and area under ROC (AUC) were determined. Results A high level of cross-platform, intra-observer agreement was found for the DR referral ratings between the platforms (κ=0.778), and for the two graders, (κ=0.812). Inter-observer agreement was similar for the two platforms (κ=0.544 and κ=0.625 for tablet and desktop, respectively). The tablet-based ratings achieved a sensitivity of 0.848, a specificity of 0.987, and an AUC of 0.950 compared to desktop display-based ratings. Conclusions In this pilot study, tablet-based rating of color fundus images for subjects at risk for DR was consistent with desktop display-based rating. These results indicate that tablet computers can be reliably used for clinical evaluation of fundus images for DR. PMID:22495326
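
    The cross-platform agreement statistic used here, Cohen's kappa, compares observed agreement with the agreement expected by chance. A minimal computation for binary referral ratings (illustrative values, not the study's data):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two binary raters over the same items."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n            # observed agreement
    pa1 = sum(a) / n
    pb1 = sum(b) / n
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)                # chance agreement
    return (po - pe) / (1 - pe)

kappa = cohens_kappa([1, 1, 0, 0], [1, 1, 0, 1])
print(kappa)  # 0.5
```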

  20. Local Skin Warming Enhances Color Duplex Imaging of Cutaneous Perforators.

    PubMed

    Li, Haizhou; Du, Zijing; Xie, Feng; Zan, Tao; Li, QingFeng

    2015-07-01

    The perforator flap is one of the most useful techniques in reconstructive surgery. The operative procedure for these flaps will be greatly simplified if accurate localization of the course of the perforator can be preoperatively confirmed. However, small vessels with diameters less than 0.5 mm cannot be readily traced with conventional imaging techniques. Local skin warming temporarily increases cutaneous blood flow and vasodilation. In this study, we established a local skin warming procedure, and performed this before color duplex imaging to improve preoperative perforator mapping and enable precise flap design. PMID:23903089

  1. PCIF: An Algorithm for Lossless True Color Image Compression

    NASA Astrophysics Data System (ADS)

    Barcucci, Elena; Brlek, Srecko; Brocchi, Stefano

    An efficient algorithm for compressing true color images is proposed. The technique uses a combination of simple and computationally cheap operations. The three main steps consist of predictive image filtering, decomposition of data, and data compression through the use of run length encoding, Huffman coding and grouping the values into polyominoes. The result is a practical scheme that achieves good compression while providing fast decompression. The approach has performance comparable to, and often better than, competing standards such as JPEG 2000 and JPEG-LS.
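
    Of the listed steps, run length encoding is the simplest to illustrate. A generic sketch (not the PCIF implementation):

```python
def rle_encode(data):
    """Run-length encode a byte sequence as (value, count) pairs."""
    runs = []
    for b in data:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([b, 1])       # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    return bytes(b for value, count in runs for b in [value] * count)

encoded = rle_encode(b"aaabbc")
print(encoded, rle_decode(encoded) == b"aaabbc")
```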

  2. Uniform color space analysis of LACIE image products

    NASA Technical Reports Server (NTRS)

    Nalepka, R. F. (Principal Investigator); Balon, R. J.; Cicone, R. C.

    1979-01-01

    The author has identified the following significant results. Analysis and comparison of image products generated by different algorithms show that the scaling and biasing of data channels for control of PFC primaries lead to loss of information (in a probability-of-misclassification sense) by two major processes. In order of importance, they are: neglecting the input of one channel of data in any one image, and failing to provide sufficient color resolution of the data. The scaling and biasing approach tends to distort distance relationships in data space and provides less than desirable resolution when the data variation is typical of a developed, nonhazy agricultural scene.

  3. The Athena Pancam and Color Microscopic Imager (CMI)

    NASA Technical Reports Server (NTRS)

    Bell, J. F., III; Herkenhoff, K. E.; Schwochert, M.; Morris, R. V.; Sullivan, R.

    2000-01-01

    The Athena Mars rover payload includes two primary science-grade imagers: Pancam, a multispectral, stereo, panoramic camera system, and the Color Microscopic Imager (CMI), a multispectral and variable depth-of-field microscope. Both of these instruments will help to achieve the primary Athena science goals by providing information on the geology, mineralogy, and climate history of the landing site. In addition, Pancam provides important support for rover navigation and target selection for Athena in situ investigations. Here we describe the science goals, instrument designs, and instrument performance of the Pancam and CMI investigations.

  4. Precise color images: a high-speed color video camera system with three intensified sensors

    NASA Astrophysics Data System (ADS)

    Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.

    1999-06-01

    High-speed imaging systems have been used in many fields of science and engineering. Although high-speed camera systems have improved considerably, most of their applications are only to obtain high-speed motion pictures. However, in some fields of science and technology, it is useful to obtain other information as well, such as the temperature of combustion flames, thermal plasma and molten materials. Recent digital high-speed video imaging technology should be able to extract such information from those objects. For this purpose, we have already developed a high-speed video camera system with three intensified sensors and a cubic prism image splitter. The maximum frame rate is 40,500 pps (pictures per second) at 64 x 64 pixels and 4,500 pps at 256 x 256 pixels, with 256-level (8-bit) intensity resolution for each pixel. The camera system can store more than 1,000 pictures continuously in solid-state memory. In order to obtain precise color images from this camera system, we need to develop a digital technique, consisting of a computer program and ancillary instruments, to adjust the displacement of the images taken from two or three image sensors and to calibrate the relationship between incident light intensity and the corresponding digital output signals. In this paper, a digital technique for pixel-based displacement adjustment is proposed. Although the displacement of the corresponding circle was more than 8 pixels in the original image, it was adjusted to within 0.2 pixels at most by this method.
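
    Displacement between two sensor images can be estimated, for example, by phase correlation. This generic sketch recovers integer shifts only, whereas the paper reports sub-pixel (0.2 pixel) adjustment:

```python
import numpy as np

def integer_shift(img1, img2):
    """Estimate the cyclic shift (dy, dx) taking img1 to img2 via phase correlation."""
    F1, F2 = np.fft.fft2(img1), np.fft.fft2(img2)
    R = np.conj(F1) * F2
    R /= np.abs(R) + 1e-12                       # normalized cross-power spectrum
    corr = np.abs(np.fft.ifft2(R))               # peak sits at the shift
    return tuple(int(v) for v in np.unravel_index(np.argmax(corr), corr.shape))

rng = np.random.default_rng(0)
img = rng.random((32, 32))
shifted = np.roll(img, shift=(3, 5), axis=(0, 1))
print(integer_shift(img, shifted))  # (3, 5)
```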

  5. Shear wave transmissivity measurement by color Doppler shear wave imaging

    NASA Astrophysics Data System (ADS)

    Yamakoshi, Yoshiki; Yamazaki, Mayuko; Kasahara, Toshihiro; Sunaguchi, Naoki; Yuminaka, Yasushi

    2016-07-01

    Shear wave elastography is a useful method for evaluating tissue stiffness. We have proposed a novel shear wave imaging method (color Doppler shear wave imaging: CD SWI), which utilizes the signal processing unit of ultrasound color flow imaging to detect the shear wave wavefront in real time. Shear wave velocity is adopted to characterize tissue stiffness; however, it is difficult to measure tissue stiffness with high spatial resolution because of the artifact produced by shear wave diffraction. Spatial averaging in the image reconstruction method also degrades the spatial resolution. In this paper, we propose a novel method for measuring the shear wave transmissivity of a tissue boundary. Shear wave wavefront maps are acquired while changing the displacement amplitude of the shear wave, and the transmissivity, which gives the difference in shear wave velocity between the two media separated by the boundary, is measured from the ratio of the two threshold voltages required to form the shear wave wavefronts in the two media. Based on this method, a high-resolution shear wave amplitude imaging method that reconstructs a tissue boundary is proposed.

  6. False-Color-Image Map of Quadrangle 3266, Ourzgan (519) and Moqur (520) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
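
    The adaptive histogram equalization stretch applied to each Landsat band is tile-based; the underlying principle can be shown with a global histogram equalization sketch on one synthetic low-contrast band:

```python
import numpy as np

def equalize_band(band):
    """Global histogram equalization of one 8-bit band (the map used an
    adaptive, tile-based variant; this global version shows the principle)."""
    hist = np.bincount(band.ravel(), minlength=256)
    cdf = hist.cumsum() / band.size                  # cumulative distribution
    lut = np.round(255.0 * cdf).astype(np.uint8)     # monotonic gray-level map
    return lut[band]

rng = np.random.default_rng(0)
band = rng.integers(100, 140, size=(64, 64), dtype=np.uint8)  # low-contrast band
out = equalize_band(band)
print(out.min(), out.max())  # dynamic range is stretched across 0-255
```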

  7. False-Color-Image Map of Quadrangle 3164, Lashkargah (605) and Kandahar (606) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  8. False-Color-Image Map of Quadrangle 3564, Chahriaq (Joand) (405) and Gurziwan (406) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  9. False-Color-Image Map of Quadrangle 3162, Chakhansur (603) and Kotalak (604) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  10. False-Color-Image Map of Quadrangle 3464, Shahrak (411) and Kasi (412) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  11. False-Color-Image Map of Quadrangle 3568, Polekhomri (503) and Charikar (504) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  12. False-Color-Image Map of Quadrangle 3366, Gizab (513) and Nawer (514) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  13. Content-Based Image Retrieval Using a Composite Color-Shape Approach.

    ERIC Educational Resources Information Center

    Mehtre, Babu M.; Kankanhalli, Mohan S.; Lee, Wing Foon

    1998-01-01

    Proposes a composite feature measure which combines the shape and color features of an image based on a clustering technique. A similarity measure computes the degree of match between a given pair of images; this technique can be used for content-based image retrieval of images using shape and/or color. Tests the technique on two image databases;…

  14. Data Hiding Scheme on Medical Image using Graph Coloring

    NASA Astrophysics Data System (ADS)

    Astuti, Widi; Adiwijaya; Novia Wisety, Untari

    2015-06-01

    The utilization of digital medical images is now widespread. Medical images need protection, since they are likely to pass through insecure networks. Several watermarking techniques have been developed to guarantee the originality of digital medical images. In watermarking, the medical image is the protected object; nevertheless, a medical image can also serve as a medium for hiding secret data such as a patient's medical record. Hiding data by inserting it into an image is usually called steganography. Because changes to a medical image can alter a diagnosis, steganography is applied only to the non-interest region. Vector Quantization (VQ) is a prominent and frequently used lossy data compression technique, but VQ-based steganography schemes are generally limited in the amount of data that can be inserted. This research aims to build a steganography scheme based on Vector Quantization and graph coloring. Test results show that the scheme can insert 28768 bytes of data, equal to 10077 characters, in an image area of 3696 pixels.
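    The graph-coloring component can be illustrated with a standard greedy coloring. The abstract does not specify the paper's actual coloring algorithm or how colors are mapped to VQ codewords, so the following is only a generic sketch:

```python
def greedy_coloring(adjacency):
    """Classic greedy graph coloring: visit vertices in sorted order and
    assign each the smallest color index not used by a colored neighbor."""
    colors = {}
    for v in sorted(adjacency):
        taken = {colors[u] for u in adjacency[v] if u in colors}
        c = 0
        while c in taken:
            c += 1
        colors[v] = c
    return colors
```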

  15. Automatic assessment of macular edema from color retinal images.

    PubMed

    Deepak, K Sai; Sivaswamy, Jayanthi

    2012-03-01

    Diabetic macular edema (DME) is an advanced symptom of diabetic retinopathy and can lead to irreversible vision loss. In this paper, a two-stage methodology for the detection and classification of DME severity from color fundus images is proposed. DME detection is carried out via a supervised learning approach using normal fundus images. A feature extraction technique is introduced to capture the global characteristics of the fundus images and discriminate the normal from DME images. Disease severity is assessed using a rotational asymmetry metric by examining the symmetry of the macular region. The performance of the proposed methodology and features is evaluated against several publicly available datasets. The detection performance has a sensitivity of 100% with specificity between 74% and 90%. Cases needing immediate referral are detected with a sensitivity of 100% and specificity of 97%. The severity classification accuracy is 81% for moderate cases and 100% for severe cases. These results establish the effectiveness of the proposed solution. PMID:22167598

  16. A perceptually tuned watermarking scheme for color images.

    PubMed

    Chou, Chun-Hsien; Liu, Kuo-Cheng

    2010-11-01

    Transparency and robustness are two conflicting requirements demanded by digital image watermarking for copyright protection and many other purposes. A feasible way to simultaneously satisfy the two conflicting requirements is to embed high-strength watermark signals in the host signals that can accommodate the distortion due to watermark insertion as part of perceptual redundancy. The search for distortion-tolerable host signals for watermark insertion and the determination of watermark strength are hence crucial to the realization of a transparent yet robust watermark. This paper presents a color image watermarking scheme that hides watermark signals in the most distortion-tolerable signals within the three color channels of the host image without resulting in perceivable distortion. The distortion-tolerable host signals, i.e., the signals that possess high perceptual redundancy, are sought in the wavelet domain for watermark insertion. A visual model based on the CIEDE2000 color difference equation is used to measure the perceptual redundancy inherent in each wavelet coefficient of the host image. By means of quantization index modulation, binary watermark signals are embedded in qualified wavelet coefficients. To reinforce the robustness, the watermark signals are repeated and permuted before embedding, and restored by a majority-vote decision-making process in watermark extraction. Original images are not required for watermark extraction. Only a small amount of information, including the locations of qualified coefficients and the data associated with coefficient quantization, is needed for watermark extraction. Experimental results show that the embedded watermark is transparent and quite robust in the face of various attacks such as cropping, low-pass filtering, scaling, median filtering, and white-noise addition, as well as JPEG and JPEG2000 coding at high compression ratios. PMID:20529748
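    Quantization index modulation, the embedding step named in the abstract, has a compact textbook form: each bit selects one of two interleaved quantization lattices. The sketch below shows scalar QIM only; in the actual scheme the step size and coefficient selection follow the CIEDE2000-based visual model, which is not reproduced here.

```python
def qim_embed(coeff, bit, step):
    """Embed one bit by quantizing the coefficient onto one of two
    interleaved lattices: multiples of step (bit 0) or multiples of
    step shifted by step/2 (bit 1)."""
    offset = step / 2.0 if bit else 0.0
    return round((coeff - offset) / step) * step + offset

def qim_extract(coeff, step):
    """Decode the bit as whichever lattice lies nearer the received value."""
    d0 = abs(coeff - round(coeff / step) * step)
    d1 = abs(coeff - (round((coeff - step / 2.0) / step) * step + step / 2.0))
    return 0 if d0 <= d1 else 1
```

    Decoding stays correct as long as the perturbation of an embedded coefficient is below step/4, which is what links the step size to both robustness and visibility.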

  17. Adaptive contrast imaging: transmit frequency optimization

    NASA Astrophysics Data System (ADS)

    Ménigot, Sébastien; Novell, Anthony; Voicu, Iulian; Bouakaz, Ayache; Girault, Jean-Marc

    2010-01-01

    Introduction: Since the introduction of ultrasound (US) contrast imaging, imaging systems have used a fixed emitting frequency. However, it is known that the insonified medium is time-varying, and therefore an adapted time-varying excitation is expected. We suggest an adaptive imaging technique which selects the optimal transmit frequency that maximizes the acoustic contrast. Two algorithms have been proposed to find a US excitation whose frequency is optimal for microbubbles. Methods and Materials: Simulations were carried out for encapsulated microbubbles of 2 microns by considering the modified Rayleigh-Plesset equation for a 2 MHz transmit frequency and for various pressure levels (20 kPa up to 420 kPa). In vitro experiments were carried out using a transducer operating at 2 MHz and a programmable waveform generator. Contrast agent was then injected into a small container filled with water. Results and discussion: We show through simulations and in vitro experiments that our adaptive imaging technique gives: 1) in simulations, a gain in acoustic contrast of up to 9 dB compared to the traditional technique without optimization, and 2) in vitro, a gain of up to 18 dB. There is a non-negligible discrepancy between simulations and experiments. These differences are certainly due to the fact that our simulations do not take into account diffraction and nonlinear propagation effects. Further optimizations are underway.

  18. Approach for reconstructing anisoplanatic adaptive optics images.

    PubMed

    Aubailly, Mathieu; Roggemann, Michael C; Schulz, Timothy J

    2007-08-20

    Atmospheric turbulence corrupts astronomical images formed by ground-based telescopes. Adaptive optics systems allow the effects of turbulence-induced aberrations to be reduced for a narrow field of view corresponding approximately to the isoplanatic angle theta(0). For field angles larger than theta(0), the point spread function (PSF) gradually degrades as the field angle increases. We present a technique to estimate the PSF of an adaptive optics telescope as a function of the field angle, and use this information in a space-varying image reconstruction technique. Simulated anisoplanatic intensity images of a star field are reconstructed by means of a block-processing method using the predicted local PSF. Two methods for image recovery are used: matrix inversion with Tikhonov regularization, and the Lucy-Richardson algorithm. Image reconstruction results obtained using the space-varying predicted PSF are compared to space-invariant deconvolution results obtained using the on-axis PSF. The anisoplanatic reconstruction technique using the predicted PSF provides a significant improvement of the mean squared error between the reconstructed image and the object compared to the deconvolution performed using the on-axis PSF. PMID:17712366
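    Of the two recovery methods, the Lucy-Richardson algorithm has a compact iterative form. A minimal 1-D version with circular convolution done in the Fourier domain (the paper applies it block-wise in 2-D with a field-angle-dependent PSF) might look like:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50):
    """Lucy-Richardson iteration: est <- est * correlate(obs / (est * psf), psf).
    Circular convolution/correlation is implemented via the FFT for brevity."""
    psf_f = np.fft.fft(psf)
    est = np.full(observed.shape, observed.mean(), dtype=float)
    for _ in range(iterations):
        blurred = np.real(np.fft.ifft(np.fft.fft(est) * psf_f))
        ratio = observed / np.maximum(blurred, 1e-12)
        # correlation with the PSF = convolution with its conjugate spectrum
        est = est * np.real(np.fft.ifft(np.fft.fft(ratio) * np.conj(psf_f)))
    return est
```

    The multiplicative update keeps the estimate nonnegative, which is one reason the algorithm is popular for astronomical intensity images.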

  19. Blood flow estimation in gastroscopic true-color images

    NASA Astrophysics Data System (ADS)

    Jacoby, Raffael S.; Herpers, Rainer; Zwiebel, Franz M.; Englmeier, Karl-Hans

    1995-05-01

    The assessment of blood flow in the gastrointestinal mucosa might be an important factor for the diagnosis and treatment of several diseases such as ulcers, gastritis, colitis, or early cancer. The quantity of blood flow is roughly estimated by computing the spatial hemoglobin distribution in the mucosa. The presented method enables a practical realization by calculating approximately the hemoglobin concentration based on a spectrophotometric analysis of endoscopic true-color images, which are recorded during routine examinations. A system model based on the reflectance spectroscopic law of Kubelka-Munk is derived which enables an estimation of the hemoglobin concentration by means of the color values of the images. Additionally, a transformation of the color values is developed in order to improve the luminance independence. Applying this transformation and estimating the hemoglobin concentration for each pixel of interest, the hemoglobin distribution can be computed. The obtained results are mostly independent of luminance. An initial validation of the presented method is performed by a quantitative estimation of the reproducibility.
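    The Kubelka-Munk relation underlying the estimation is compact enough to state directly: for a diffusely reflecting layer, the remission function links the measured reflectance R to the absorption-to-scattering ratio K/S, which scales with the absorber (here, hemoglobin) concentration. A sketch of just that formula:

```python
def kubelka_munk(reflectance):
    """Kubelka-Munk remission function F(R) = (1 - R)^2 / (2 R),
    giving the absorption/scattering ratio K/S for reflectance R in (0, 1]."""
    return (1.0 - reflectance) ** 2 / (2.0 * reflectance)
```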

  20. Automated retinal vessel type classification in color fundus images

    NASA Astrophysics Data System (ADS)

    Yu, H.; Barriga, S.; Agurto, C.; Nemeth, S.; Bauman, W.; Soliz, P.

    2013-02-01

    Automated retinal vessel type classification is an essential first step toward machine-based quantitative measurement of various vessel topological parameters and identifying vessel abnormalities and alterations in cardiovascular disease risk analysis. This paper presents a new and accurate automatic artery and vein classification method developed for arteriolar-to-venular width ratio (AVR) and artery and vein tortuosity measurements in regions of interest (ROI) of 1.5 and 2.5 optic disc diameters from the disc center, respectively. This method includes illumination normalization, automatic optic disc detection and retinal vessel segmentation, feature extraction, and a partial least squares (PLS) classification. Normalized multi-color information, color variation, and multi-scale morphological features are extracted on each vessel segment. We trained the algorithm on a set of 51 color fundus images using manually marked arteries and veins. We tested the proposed method on a previously unseen test data set consisting of 42 images. We obtained an area under the ROC curve (AUC) of 93.7% in the ROI of AVR measurement and 91.5% in the ROI of tortuosity measurement. The proposed AV classification method has the potential to assist automatic early detection of cardiovascular disease and risk analysis.

  1. Color imaging the magnetic field distribution in superconductors

    SciTech Connect

    Batalla, E.; Zwartz, E.G.; Goudreault, R.; Wright, L.S.

    1990-08-01

    A magneto-optically active glass was used to image the magnetic field distribution in superconductors using the Faraday effect. Polarized white light illumination of the glass resulted in various colors depending on the setting of the analyzing polaroid. These colors are shown to be consistent with the known dependence of the Faraday rotation angle on the applied magnetic field, the temperature of the glass, and the wavelength of the light. This technique was used to observe field distributions in polycrystalline and single-crystal YBa{sub 2}Cu{sub 3}O{sub 7} samples. In the ceramic sample, the field was uniform within the resolution (50 {mu}m) of this technique and field magnitudes were measured with a 10% accuracy. In the single crystal, the magnetic field distribution was not uniform showing field gradients imaged as color gradients on the pictures of the glass. Contours of constant magnetic field were drawn from these photographs and from these, a critical current density of 10{sup 9} A/m{sup 2} was deduced in an external field of 136 mT.

  2. False-color composite image of Prince Albert, Canada

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a false color composite of Prince Albert, Canada, centered at 53.91 north latitude and 104.69 west longitude. This image was acquired by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) on the 20th orbit of the Shuttle Endeavour. The area is located 40 km north and 30 km east of the town of Prince Albert in the Saskatchewan province of Canada. The image covers the area east of the Candle Lake, between gravel surface highways 120 and 106 and west of 106. The area in the middle of the image covers the entire Nipawin (Narrow Hills) provincial park. The look angle of the radar is 30 degrees and the size of the image is approximately 20 kilometers by 50 kilometers (12 by 30 miles). Most of the dark areas in the image are the ice-covered lakes in the region. The dark area on the top right corner of the image is the White Gull Lake north of the intersection of Highway 120 and 913. The right middle part of the image shows Lake Ispuchaw and Lower Fishing Lake

  3. Three frequency false-color image of Prince Albert, Canada

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a three-frequency, false color image of Prince Albert, Canada, centered at 53.91 north latitude and 104.69 west longitude. It was produced using data from the X-band, C-band and L-band radars that comprise the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR). SIR-C/X-SAR acquired this image on the 20th orbit of the Shuttle Endeavour. The area is located 40 km north and 30 km east of the town of Prince Albert in the Saskatchewan province of Canada. The image covers the area east of the Candle Lake, between gravel surface highways 120 and 106 and west of 106. The area in the middle of the image covers the entire Nipawin (Narrow Hills) provincial park. Most of the dark blue areas in the image are the ice covered lakes. The dark area on the top right corner of the image is the White Gull Lake north of the intersection of highway 120 and 913. The right middle part of the image shows Lake Ispuchaw and Lower Fishing Lake. The deforested areas are shown by light

  4. Edge-suppressed color clustering for image thresholding

    NASA Astrophysics Data System (ADS)

    Celenk, Mehmet; Uijt de Haag, Maarten

    2000-03-01

    This paper discusses the development of an iterative algorithm for fully automatic (gross or fine) segmentation of color images. The basic idea here is to automate segmentation for on-line operations. This is needed for such critical applications as internet communication, video indexing, target tracking, visual guidance, remote control, and motion detection. The method is composed of an edge-suppressed clustering (learning) step and a principal component thresholding (classification) step. In the learning phase, image clusters are well formed in the (R,G,B) space by considering only the non-edge points. The unknown number (N) of mutually exclusive image segments is learned in an unsupervised mode based on a cluster fidelity measure and the K-means algorithm. The classification phase is a correlation-based segmentation strategy that operates in the K-L transform domain using the Otsu thresholding principle. It is demonstrated experimentally that the method is effective and efficient for color images of natural scenes with irregular textures and objects of varying sizes and dimensions.
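    The classification step invokes the Otsu principle: choose the threshold that maximizes between-class variance of a 1-D projection (here that projection would be the first principal component from the K-L transform). A minimal histogram-based sketch:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's method on a 1-D sample: maximize the between-class variance
    sigma_b^2(k) = (mu_T * omega(k) - mu(k))^2 / (omega(k) * (1 - omega(k)))."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    omega = np.cumsum(p)                      # class-0 probability up to bin k
    mu = np.cumsum(p * np.arange(bins))       # cumulative first moment
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    k = int(np.nanargmax(sigma_b))            # NaNs mark empty classes
    return edges[k + 1]
```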

  5. Extending the depth-of-field for microscopic imaging by means of multifocus color image fusion

    NASA Astrophysics Data System (ADS)

    Hurtado-Pérez, R.; Toxqui-Quitl, C.; Padilla-Vivanco, A.; Ortega-Mendoza, G.

    2015-09-01

    In microscopy, the depth of field (DOF) is limited by the physical characteristics of the imaging system, and imaging a scene with the entire field of view in focus can be impossible. In this paper, metal samples are inspected on multiple focal planes by moving the microscope stage along the z-axis; for each z plane, an image is digitized. Through digital image processing, an image with all regions in focus is generated from the set of multifocus images. The proposed fusion algorithm gives a single sharp image. The fusion scheme is simple, fast, and virtually free of artifacts or false color. Experimental fusion results are shown.

  6. Butterfly wing coloration studied with a novel imaging scatterometer

    NASA Astrophysics Data System (ADS)

    Stavenga, Doekele

    2010-03-01

    Animal coloration functions for display or camouflage. Notably, insects provide numerous examples of a rich variety of optical mechanisms. For instance, many butterflies feature a distinct dichromatism, that is, the wing coloration of the male and the female differ substantially. The male Brimstone, Gonepteryx rhamni, has yellow wings that are strongly UV iridescent, but the female has white wings with low reflectance in the UV and a high reflectance in the visible wavelength range. In the Small White cabbage butterfly, Pieris rapae crucivora, the wing reflectance of the male is low in the UV and high at visible wavelengths, whereas the wing reflectance of the female is higher in the UV and lower in the visible. Pierid butterflies apply nanosized, strongly scattering beads to achieve their bright coloration. The male Pipevine Swallowtail butterfly, Battus philenor, has dorsal wings with scales functioning as thin film gratings that exhibit polarized iridescence; the dorsal wings of the female are matte black. The polarized iridescence probably functions in intraspecific, sexual signaling, as has been demonstrated in Heliconius butterflies. An example of camouflage is the Green Hairstreak butterfly, Callophrys rubi, where photonic crystal domains exist in the ventral wing scales, resulting in a matte green color that well matches the color of plant leaves. The spectral reflection and polarization characteristics of biological tissues can be rapidly and with unprecedented detail assessed with a novel imaging scatterometer-spectrophotometer, built around an elliptical mirror [1]. Examples of butterfly and damselfly wings, bird feathers, and beetle cuticle will be presented. [1] D.G. Stavenga, H.L. Leertouwer, P. Pirih, M.F. Wehling, Optics Express 17, 193-202 (2009)

  7. Color binarization for complex camera-based images

    NASA Astrophysics Data System (ADS)

    Thillou, Céline; Gosselin, Bernard

    2005-01-01

    This paper describes a new automatic color thresholding based on wavelet denoising and color clustering with K-means in order to segment text information in a camera-based image. Several parameters bring different information and this paper tries to explain how to use this complementarity. It is mainly based on the discrimination between two kinds of backgrounds: clean or complex. On one hand, this separation is useful to apply a particular algorithm on each of these cases and on the other hand to decrease the computation time for clean cases for which a faster method could be considered. Finally, several experiments were done to discuss results and to conclude that the use of a discrimination between kinds of backgrounds gives better results in terms of Precision and Recall.

  8. Color binarization for complex camera-based images

    NASA Astrophysics Data System (ADS)

    Thillou, Céline; Gosselin, Bernard

    2004-12-01

    This paper describes a new automatic color thresholding based on wavelet denoising and color clustering with K-means in order to segment text information in a camera-based image. Several parameters bring different information and this paper tries to explain how to use this complementarity. It is mainly based on the discrimination between two kinds of backgrounds: clean or complex. On one hand, this separation is useful to apply a particular algorithm on each of these cases and on the other hand to decrease the computation time for clean cases for which a faster method could be considered. Finally, several experiments were done to discuss results and to conclude that the use of a discrimination between kinds of backgrounds gives better results in terms of Precision and Recall.

  9. Iterative blind deconvolution of adaptive optics images

    NASA Astrophysics Data System (ADS)

    Liang, Ying; Rao, Changhui; Li, Mei; Geng, Zexun

    2006-04-01

    Adaptive optics (AO) techniques have been extensively used in large ground-based optical telescopes to overcome the effects of atmospheric turbulence, but the correction is often partial. An iterative blind deconvolution (IBD) algorithm based on the maximum-likelihood (ML) method is proposed to restore the details of object images corrected by AO. The IBD algorithm and its procedure are briefly introduced and experimental results are presented. The results show that the IBD algorithm is effective at restoring useful high-frequency content of the image.

  10. Adaptive Optics Imaging in Laser Pointer Maculopathy.

    PubMed

    Sheyman, Alan T; Nesper, Peter L; Fawzi, Amani A; Jampol, Lee M

    2016-08-01

    The authors report multimodal imaging including adaptive optics scanning laser ophthalmoscopy (AOSLO) (Apaeros retinal image system AOSLO prototype; Boston Micromachines Corporation, Boston, MA) in a case of previously diagnosed unilateral acute idiopathic maculopathy (UAIM) that demonstrated features of laser pointer maculopathy. The authors also show the adaptive optics images of a laser pointer maculopathy case previously reported. A 15-year-old girl was referred for the evaluation of a maculopathy suspected to be UAIM. The authors reviewed the patient's history and obtained fluorescein angiography, autofluorescence, optical coherence tomography, infrared reflectance, and AOSLO. The time course of disease and clinical examination did not fit with UAIM, but the linear pattern of lesions was suspicious for self-inflicted laser pointer injury. This was confirmed on subsequent questioning of the patient. The presence of linear lesions in the macula that are best highlighted with multimodal imaging techniques should alert the physician to the possibility of laser pointer injury. AOSLO further characterizes photoreceptor damage in this condition. [Ophthalmic Surg Lasers Imaging Retina. 2016;47:782-785.]. PMID:27548458

  11. Automated rice leaf disease detection using color image analysis

    NASA Astrophysics Data System (ADS)

    Pugoy, Reinald Adrian D. L.; Mariano, Vladimir Y.

    2011-06-01

    In rice-related institutions such as the International Rice Research Institute, assessing the health condition of a rice plant through its leaves, which is usually done as a manual eyeball exercise, is important to come up with good nutrient and disease management strategies. In this paper, an automated system that can detect diseases present in a rice leaf using color image analysis is presented. In the system, the outlier region is first obtained from a rice leaf image to be tested using histogram intersection between the test and healthy rice leaf images. Upon obtaining the outlier, it is then subjected to a threshold-based K-means clustering algorithm to group related regions into clusters. Then, these clusters are subjected to further analysis to finally determine the suspected diseases of the rice leaf.
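    Histogram intersection, the comparison used above to flag the outlier region, reduces to a sum of bin-wise minima over normalized histograms. A minimal sketch (the bin layout and region partitioning of the actual system are not given in the abstract):

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Similarity of two color histograms in [0, 1]: 1 for identical
    distributions, 0 for disjoint support. Regions of a test leaf whose
    histograms intersect poorly with the healthy reference are outliers."""
    h1 = np.asarray(h1, dtype=float) / np.sum(h1)
    h2 = np.asarray(h2, dtype=float) / np.sum(h2)
    return float(np.minimum(h1, h2).sum())
```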

  12. From printed color to image appearance: tool for advertising assessment

    NASA Astrophysics Data System (ADS)

    Bonanomi, Cristian; Marini, Daniele; Rizzi, Alessandro

    2012-07-01

We present a methodology to calculate the color appearance of advertising billboards set in indoor and outdoor environments, printed on different types of paper support and viewed under different illuminations. The aim is to simulate the visual appearance of an image printed on a specific support, observed in a certain context and illuminated with a specific source of light. Knowing in advance the visual rendering of an image in different conditions can avoid problems related to its visualization. The proposed method applies a sequence of transformations to convert a four-channel (CMYK) image into a spectral one, considering the paper support; it then simulates the chosen illumination and finally computes an estimate of the appearance.

  13. Automatic Microaneurysm Detection and Characterization Through Digital Color Fundus Images

    SciTech Connect

    Martins, Charles; Veras, Rodrigo; Ramalho, Geraldo; Medeiros, Fatima; Ushizima, Daniela

    2008-08-29

Ocular fundus images can provide information about retinal, ophthalmic, and even systemic diseases such as diabetes. Microaneurysms (MAs) are the earliest sign of diabetic retinopathy, a frequently observed complication in both type 1 and type 2 diabetes. Robust detection of MAs in digital color fundus images is critical in the development of automated screening systems for this kind of disease. Automatic grading of these images is being considered by health boards so that the human grading task is reduced. In this paper we describe the segmentation and feature extraction methods for candidate MA detection. We show that the candidate MAs detected with this methodology have been successfully classified by an MLP neural network (correct classification rate of 84%).

  14. Characterizing pigments with hyperspectral imaging variable false-color composites

    NASA Astrophysics Data System (ADS)

    Hayem-Ghez, Anita; Ravaud, Elisabeth; Boust, Clotilde; Bastian, Gilles; Menu, Michel; Brodie-Linder, Nancy

    2015-11-01

Hyperspectral imaging has been used for pigment characterization on paintings for the last 10 years. It is a noninvasive technique, which mixes the power of spectrophotometry and that of imaging technologies. We have access to a visible and near-infrared hyperspectral camera, ranging from 400 to 1000 nm in 80-160 spectral bands. In order to treat the large amount of data that this imaging technique generates, one can use statistical tools such as principal component analysis (PCA). To conduct the characterization of pigments, researchers mostly use PCA, convex geometry algorithms and the comparison of resulting clusters to database spectra with a specific tolerance (like the Spectral Angle Mapper tool on the dedicated software ENVI). Our approach originates from false-color photography and aims at providing a simple tool to identify pigments thanks to imaging spectroscopy. It can be considered as a quick first analysis to see the principal pigments of a painting, before using a more complete multivariate statistical tool. We study pigment spectra, for each kind of hue (blue, green, red and yellow), to identify the wavelength maximizing spectral differences. The case of red pigments is most interesting because our methodology can discriminate the red pigments very well, even red lakes, which are always difficult to identify. For the yellow and blue categories, the method represents a clear advance over infrared false-color (IRFC) photography for pigment discrimination. We apply our methodology to study the pigments on a painting by Eustache Le Sueur, a French painter of the seventeenth century. We compare the results to other noninvasive analyses such as X-ray fluorescence and optical microscopy. Finally, we draw conclusions about the advantages and limits of the variable false-color image method using hyperspectral imaging.

  15. Color microscopy image segmentation using competitive learning and fuzzy Kohonen networks

    NASA Astrophysics Data System (ADS)

    Gaddipatti, Ajeetkumar; Vince, David G.; Cothren, Robert M., Jr.; Cornhill, J. Fredrick

    1998-06-01

Over the past decade, there has been increased interest in quantifying cell populations in tissue sections. Image analysis is now being used in limited pathological applications, such as Pap smear evaluation, with the dual aim of increasing the accuracy of diagnosis and reducing the review time. These applications primarily used grayscale images and dealt with cytological smears in which cells were well separated. Quantification of routinely stained tissue represents a more difficult problem in that objects cannot be separated in gray scale, as parts of the background can have the same intensity as the objects of interest. Many of the existing semiautomatic algorithms are specific to a particular application and computationally expensive. Hence, this paper investigates general adaptive automated color segmentation approaches, which alleviate these problems. In particular, competitive learning and fuzzy Kohonen networks are studied. Four adaptive segmentation algorithms are compared using synthetic images and clinical microscopy slide images. Both qualitative and quantitative performance comparisons are performed with the clinical images. A method for finding the optimal number of clusters in the image is also validated. Finally, the merits and feasibility of including contextual information in the segmentation are discussed along with future directions.
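The competitive-learning approach studied here can be illustrated with a minimal winner-take-all clusterer over pixel colors; the learning rate, epoch count, and initialization below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def competitive_learning(pixels, n_clusters=4, lr=0.1, epochs=5, seed=0):
    """Winner-take-all competitive learning: a simple online color clusterer.

    pixels: (N, 3) float array of RGB values. Returns cluster prototypes.
    """
    rng = np.random.default_rng(seed)
    # Initialize prototypes from randomly chosen pixels.
    protos = pixels[rng.choice(len(pixels), n_clusters, replace=False)].astype(float)
    for _ in range(epochs):
        for x in pixels[rng.permutation(len(pixels))]:
            # Only the nearest (winning) prototype moves toward the sample.
            winner = np.argmin(np.linalg.norm(protos - x, axis=1))
            protos[winner] += lr * (x - protos[winner])
    return protos
```

Segmentation then follows by labeling each pixel with its nearest prototype; the fuzzy Kohonen variant in the paper would instead update all prototypes with graded membership weights.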

  16. Context cue-dependent saccadic adaptation in rhesus macaques cannot be elicited using color.

    PubMed

    Cecala, Aaron L; Smalianchuk, Ivan; Khanna, Sanjeev B; Smith, Matthew A; Gandhi, Neeraj J

    2015-07-01

    When the head does not move, rapid movements of the eyes called saccades are used to redirect the line of sight. Saccades are defined by a series of metrical and kinematic (evolution of a movement as a function of time) relationships. For example, the amplitude of a saccade made from one visual target to another is roughly 90% of the distance between the initial fixation point (T0) and the peripheral target (T1). However, this stereotypical relationship between saccade amplitude and initial retinal error (T1-T0) may be altered, either increased or decreased, by surreptitiously displacing a visual target during an ongoing saccade. This form of motor learning (called saccadic adaptation) has been described in both humans and monkeys. Recent experiments in humans and monkeys have suggested that internal (proprioceptive) and external (target shape, color, and/or motion) cues may be used to produce context-dependent adaptation. We tested the hypothesis that an external contextual cue (target color) could be used to evoke differential gain (actual saccade/initial retinal error) states in rhesus monkeys. We did not observe differential gain states correlated with target color regardless of whether targets were displaced along the same vector as the primary saccade or perpendicular to it. Furthermore, this observation held true regardless of whether adaptation trials using various colors and intrasaccade target displacements were randomly intermixed or presented in short or long blocks of trials. These results are consistent with hypotheses that state that color cannot be used as a contextual cue and are interpreted in light of previous studies of saccadic adaptation in both humans and monkeys. PMID:25995353

  17. Structure of mouse spleen investigated by 7-color fluorescence imaging

    NASA Astrophysics Data System (ADS)

    Tsurui, Hiromichi; Niwa, Shinichirou; Hirose, Sachiko; Okumura, Ko; Shirai, Toshikazu

    2001-07-01

Multicolor fluorescence imaging of tissue samples has been an urgent requirement in current biology. As long as fluorescence signals must be isolated with optical bandpass filter sets, the scarcity of chromophore combinations with little spectral overlap has made this demand hard to satisfy. The additivity of signals in a fluorescence image, however, permits linear unmixing of superposed spectra based on singular value decomposition, and hence complete separation of fluorescence signals that overlap considerably. We have developed 7-color fluorescence imaging based on this principle and applied the method to the investigation of mouse spleen. Not only coarse structural features of the spleen, such as red pulp, marginal zone, and white pulp, but also their fine structures, the periarteriolar lymphocyte sheath (PALS), follicle, and germinal center, were clearly pictured simultaneously. The distributions of dendritic cell (DC) and macrophage (Mφ) markers such as BM8, F4/80, MOMA2, and Mac3 around the marginal zone were imaged simultaneously, and their inhomogeneous expression was clearly demonstrated. These results show the usefulness of the method in studying structures that consist of many kinds of cells and in identifying cells characterized by multiple markers.
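The linear-unmixing step exploits the additivity of fluorescence signals: each pixel's measured spectrum is modeled as a weighted sum of known fluorophore spectra, and the weights are recovered by least squares (solved internally via singular value decomposition). A sketch with two hypothetical overlapping Gaussian emission spectra:

```python
import numpy as np

def unmix(measured, endmembers):
    """Linear spectral unmixing: solve measured ≈ endmembers @ abundances.

    measured:   (B,) spectrum observed in one pixel (B spectral bands).
    endmembers: (B, K) reference emission spectra of the K fluorophores.
    Returns the least-squares abundance of each fluorophore.
    """
    abund, *_ = np.linalg.lstsq(endmembers, measured, rcond=None)
    return abund

# Two overlapping emission spectra and a 30/70 mixture of them:
bands = np.linspace(400, 700, 61)
e1 = np.exp(-((bands - 520) / 30) ** 2)
e2 = np.exp(-((bands - 560) / 30) ** 2)
E = np.column_stack([e1, e2])
mix = 0.3 * e1 + 0.7 * e2
print(np.round(unmix(mix, E), 3))  # [0.3 0.7]
```

Even though the two spectra overlap heavily, the abundances are recovered exactly because the mixture is a linear combination of the references.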

  18. Color reproduction and processing algorithm based on real-time mapping for endoscopic images.

    PubMed

    Khan, Tareq H; Mohammed, Shahed K; Imtiaz, Mohammad S; Wahid, Khan A

    2016-01-01

In this paper, we present a real-time preprocessing algorithm for image enhancement of endoscopic images. A novel dictionary-based color mapping algorithm is used for reproducing the color information from a theme image. The theme image is selected from a nearby anatomical location, and a database of color endoscopy images from different locations is prepared for this purpose. The color map is dynamic: its contents change with the theme image. This method is used on low-contrast grayscale white-light images and raw narrow-band images to highlight the vascular and mucosa structures and to colorize the images. It can also be applied to enhance the tone of color images. The statistical visual representation and universal image quality measures show that the proposed method can highlight the mucosa structure compared to other methods. The color similarity has been verified using Delta E color difference, structure similarity index, mean structure similarity index, and structure and hue similarity. The color enhancement was measured using a color enhancement factor that shows considerable improvements. The proposed algorithm has low and linear time complexity, which results in higher execution speed than other related works. PMID:26759756
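The dictionary-based mapping itself is not reproduced in the abstract, but the general idea of borrowing color statistics from a theme image can be sketched with a Reinhard-style global statistics transfer; this is a stand-in for illustration, not the authors' algorithm.

```python
import numpy as np

def color_transfer(src, theme):
    """Shift each channel of src to the theme image's per-channel
    mean and standard deviation (Reinhard-style global transfer).

    src, theme: (..., 3) color images. A simple stand-in for
    dictionary-based color mapping.
    """
    src = src.astype(float)
    theme = theme.astype(float)
    out = np.empty_like(src)
    for ch in range(3):
        s, t = src[..., ch], theme[..., ch]
        # Normalize src statistics, then rescale to the theme's.
        out[..., ch] = (s - s.mean()) / (s.std() + 1e-9) * t.std() + t.mean()
    return np.clip(out, 0, 255)
```

After the transfer, the output's per-channel means and spreads match the theme image, which is the effect the paper's mapping aims for on a per-structure rather than global basis.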

  19. SRTM Radar Image with Color as Height: Kachchh, Gujarat, India

    NASA Technical Reports Server (NTRS)

    2001-01-01

    This image shows the area around the January 26, 2001, earthquake in western India, the deadliest in the country's history with some 20,000 fatalities. The epicenter of the magnitude 7.6 earthquake was just to the left of the center of the image. The Gulf of Kachchh (or Kutch) is the black area running from the lower left corner towards the center of the image. The city of Bhuj is in the yellow-toned area among the brown hills left of the image center and is the historical capital of the Kachchh region. Bhuj and many other towns and cities nearby were almost completely destroyed by the shaking of the earthquake. These hills reach up to 500 meters (1,500 feet) elevation. The city of Ahmedabad, capital of Gujarat state, is the radar-bright area next to the right side of the image. Several buildings in Ahmedabad were also destroyed by the earthquake. The dark blue areas around the center of the image and extending to the left side are low-lying salt flats called the Rann of Kachchh with the Little Rann just to the right of the image center. The bumpy area north of the Rann (green and yellow colors) is a large area of sand dunes in Pakistan. A branch of the Indus River used to flow through the area on the left side of this image, but it was diverted by a previous large earthquake that struck this area in 1819.

The annotated version of the image includes a 'beachball' that shows the location and slip direction of the January 26, 2001, earthquake from the Harvard Quick CMT catalog: http://www.seismology.harvard.edu/CMTsearch.html.

    This image combines two types of data from the Shuttle Radar Topography Mission (SRTM). The image brightness corresponds to the strength of the radar signal reflected from the ground, while colors show the elevation as measured by SRTM. Colors range from blue at the lowest elevations to brown and white at the highest elevations. This image is a mosaic of four SRTM swaths.

    This image

  20. Los Angeles, California, Radar Image, Wrapped Color as Height

    NASA Technical Reports Server (NTRS)

    2000-01-01

This topographic radar image shows the relationships of the dense urban development of Los Angeles and the natural contours of the land. The image includes the Pacific Ocean on the left, the flat Los Angeles Basin across the center, and the steep ranges of the Santa Monica and Verdugo mountains along the top. The two dark strips near the coast at lower left are the runways of Los Angeles International Airport. Downtown Los Angeles is the bright yellow and pink area at lower center. Pasadena, including the Rose Bowl, is seen halfway down the right edge of the image. The communities of Glendale and Burbank, including the Burbank Airport, are seen at the center of the top edge of the image. Hazards from earthquakes, floods and fires are intimately related to the topography in this area. Topographic data and other remote sensing images provide valuable information for assessing and mitigating the natural hazards for cities such as Los Angeles.

    This image combines two types of data from the Shuttle Radar Topography Mission. The image brightness corresponds to the strength of the radar signal reflected from the ground, while colors show the elevation as measured by SRTM. Each cycle of colors (from pink through blue back to pink) represents an equal amount of elevation difference (400 meters, or 1300 feet) similar to contour lines on a standard topographic map. This image contains about 2400 meters (8000 feet) of total relief.

The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission is designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between

  1. High-performance VGA-resolution digital color CMOS imager

    NASA Astrophysics Data System (ADS)

    Agwani, Suhail; Domer, Steve; Rubacha, Ray; Stanley, Scott

    1999-04-01

This paper discusses the performance of a new VGA-resolution color CMOS imager developed by Motorola on a 0.5-micrometer/3.3V CMOS process. This fully integrated, high-performance imager has on-chip timing, control, and an analog signal processing chain for digital imaging applications. The picture elements are based on 7.8-micrometer active CMOS pixels that use pinned photodiodes for higher quantum efficiency and low-noise performance. The image processing engine includes a bank of programmable gain amplifiers, line-rate clamping for dark offset removal, real-time auto white balancing, per-column gain and offset calibration, and a 10-bit pipelined RSD analog-to-digital converter with a programmable input range. Post-ADC signal processing includes features such as bad-pixel replacement based on user-defined threshold levels, 10-to-8-bit companding, and 5-tap FIR filtering. The sensor can be programmed via a standard I2C interface that runs on 3.3V clocks. Programmable features include variable frame rates using a constant-frequency master clock, electronic exposure control, continuous or single-frame capture, and progressive or interlaced scanning modes. Each pixel is individually addressable, allowing region-of-interest imaging and image subsampling. The sensor operates with master clock frequencies of up to 13.5 MHz, resulting in 30 frames per second. A total programmable gain of 27 dB is available. The sensor power dissipation is 400 mW at full speed of operation. The low-noise design yields a measured 'system on a chip' dynamic range of 50 dB, thus giving over 8 true bits of resolution. Extremely high conversion gain results in an excellent peak sensitivity of 22 V/µJ/cm² or 3.3 V/lux-sec. This monolithic image capture and processing engine represents a complete imaging solution, making it a true 'camera on a chip'. Yet in operation it remains extremely easy to use, requiring only one clock and a 3.3V power supply.
Given the available features and performance levels, this sensor will be

  2. Quaternionic Local Ranking Binary Pattern: A Local Descriptor of Color Images.

    PubMed

    Lan, Rushi; Zhou, Yicong; Tang, Yuan Yan

    2016-02-01

    This paper proposes a local descriptor called quaternionic local ranking binary pattern (QLRBP) for color images. Different from traditional descriptors that are extracted from each color channel separately or from vector representations, QLRBP works on the quaternionic representation (QR) of the color image that encodes a color pixel using a quaternion. QLRBP is able to handle all color channels directly in the quaternionic domain and include their relations simultaneously. Applying a Clifford translation to QR of the color image, QLRBP uses a reference quaternion to rank QRs of two color pixels, and performs a local binary coding on the phase of the transformed result to generate local descriptors of the color image. Experiments demonstrate that the QLRBP outperforms several state-of-the-art methods. PMID:26672041
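A minimal sketch of the quaternionic machinery QLRBP builds on: a color pixel encoded as a pure quaternion, a Clifford translation (left multiplication by a unit quaternion), and the phase of the result. The choice of the gray axis as the reference quaternion is an illustrative assumption, and the ranking and binary-coding stages of the actual descriptor are omitted.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) arrays."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def phase_after_clifford(rgb, ref):
    """Left Clifford translation L_mu(q) = mu * q of a color pixel,
    followed by the phase of the transformed result.

    rgb: (r, g, b) encoded as the pure quaternion r*i + g*j + b*k.
    ref: reference direction, normalized to a unit pure quaternion
         (here, hypothetically, the gray axis (1, 1, 1)).
    """
    q = np.array([0.0, *rgb])
    mu = np.array([0.0, *ref]) / np.linalg.norm(ref)
    t = qmul(mu, q)
    # Phase: angle between the scalar part and the vector part.
    return np.arctan2(np.linalg.norm(t[1:]), t[0])
```

Because all three channels enter one quaternion, the phase depends on the relations between channels rather than on any single channel, which is the property the descriptor exploits.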

  3. Three frequency false color image of Flevoland, the Netherlands

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a three-frequency false color image of Flevoland, the Netherlands, centered at 52.4 degrees north latitude, 5.4 degrees east longitude. This image was acquired by the Spaceborne Imaging Radar-C and X-Band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the Shuttle Endeavour. The area shown covers an area approximately 25 kilometers by 28 kilometers. Flevoland, which fills the lower two-thirds of the image, is a very flat area that is made up of reclaimed land that is used for agriculture and forestry. At the top of the image, across the canal from Flevoland, is an older forest shown in red; the city of Harderwijk is shown in white on the shore of the canal. At this time of the year, the agricultural fields are bare soil, and they show up in this image in blue. The dark blue areas are water and the small dots in the canal are boats. The Jet Propulsion Laboratory alternative photo number is P-43941.

  4. Radar Image with Color as Height, Ancharn Kuy, Cambodia

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This image of Ancharn Kuy, Cambodia, was taken by NASA's Airborne Synthetic Aperture Radar (AIRSAR). The image depicts an area northwest of Angkor Wat. The radar has highlighted a number of circular village mounds in this region, many of which have a circular pattern of rice fields surrounding the slightly elevated site. Most of them have evidence of what seems to be pre-Angkor occupation, such as stone tools and potsherds. Most of them also have a group of five spirit posts, a pattern not found in other parts of Cambodia. The shape of the mound, the location in the midst of a ring of rice fields, the stone tools and the current practice of spirit veneration have revealed themselves through a unique 'marriage' of radar imaging, archaeological investigation, and anthropology.

    Ancharn Kuy is a small village adjacent to the road, with just this combination of features. The region gets slowly higher in elevation, something seen in the shift of color from yellow to blue as you move to the top of the image.

    The small dark rectangles are typical of the smaller water control devices employed in this area. While many of these in the center of Angkor are linked to temples of the 9th to 14th Century A.D., we cannot be sure of the construction date of these small village tanks. They may pre-date the temple complex, or they may have just been dug ten years ago!

The image dimensions are approximately 4.75 by 4.3 kilometers (3 by 2.7 miles) with a pixel spacing of 5 meters (16.4 feet). North is at top. Image brightness is from the C-band (5.6 centimeters, or 2.2 inches) wavelength radar backscatter, which is a measure of how much energy the surface reflects back toward the radar. Color is used to represent elevation contours. One cycle of color (going from blue to red to yellow to green and back to blue again) corresponds to 10 meters (32.8 feet) of elevation change.

    AIRSAR flies aboard a NASA DC-8 based at NASA's Dryden Flight Research Center, Edwards, Calif

  5. Time course of adaptation to stimuli presented along cardinal lines in color space

    NASA Astrophysics Data System (ADS)

    Hughes, Alan; Demarco, Paul J.

    2003-12-01

Visual sensitivity is a process that allows the visual system to maintain optimal response over a wide range of ambient light levels and chromaticities. Several studies have used variants of the probe-flash paradigm to show that the time course of adaptation to abrupt changes in ambient luminance depends on both receptoral and postreceptoral mechanisms. Though a few studies have explored how these processes govern adaptation to color changes, most of this effort has targeted the L-M-cone pathway. The purpose of our work was to use the probe-flash paradigm to more fully explore light adaptation in both the L-M- and the S-cone pathways. We measured sensitivity to chromatic probes presented after the onset of a 2-s chromatic flash. Test and flash stimuli were spatially coextensive 2° fields presented in Maxwellian view. Flash stimuli were presented as excursions from white and could extend in one of two directions along an equiluminant L-M-cone or S-cone line. Probes were presented as excursions from the adapting flash chromaticity and could extend either toward the spectrum locus or toward white. For both color lines, the data show a fast and slow adaptation component, although this was less evident in the S-cone data. The fast and slow components were modeled as first- and second-site adaptive processes, respectively. We find that the time course of adaptation is different for the two cardinal pathways. In addition, the time course for S-cone stimulation is polarity dependent. Our results characterize the rapid time course of adaptation in the chromatic pathways and reveal that the mechanics of adaptation within the S-cone pathway are distinct from those in the L-M-cone pathways.

  6. Estimation of spectral transmittance curves from RGB images in color digital holographic microscopy using speckle illuminations

    NASA Astrophysics Data System (ADS)

    Funamizu, Hideki; Tokuno, Yuta; Aizu, Yoshihisa

    2016-06-01

We investigate the estimation of spectral transmittance curves in color digital holographic microscopy using speckle illuminations. Color digital holography has the disadvantage that the color-composite image gives poor color information, because lasers of only two or three wavelengths are used. To overcome this disadvantage, the Wiener estimation method and an averaging process using multiple holograms are applied to color digital holographic microscopy. Estimated spectral transmittance and color-composite images are shown to indicate the usefulness of the proposed method.
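Wiener estimation recovers a full transmittance spectrum from three camera responses via a linear estimator built from correlation statistics. Below is a sketch of the training-pair formulation; the paper may instead build the matrix from the camera's spectral sensitivities and an a priori correlation model, and the synthetic spectra and sensitivities here are purely illustrative.

```python
import numpy as np

def wiener_matrix(train_spectra, train_rgb):
    """Wiener estimation matrix W with  spectrum ≈ W @ rgb,
    built from training pairs: W = K_sr @ inv(K_rr).

    train_spectra: (N, B) spectra; train_rgb: (N, 3) camera responses.
    """
    Ksr = train_spectra.T @ train_rgb / len(train_rgb)  # (B, 3) cross-correlation
    Krr = train_rgb.T @ train_rgb / len(train_rgb)      # (3, 3) autocorrelation
    return Ksr @ np.linalg.inv(Krr)

# Synthetic check: transmittances spanned by 3 smooth basis functions,
# imaged through 3 hypothetical sensor sensitivity curves.
rng = np.random.default_rng(0)
bands = np.linspace(400, 700, 31)
basis = np.stack([np.exp(-((bands - c) / 60) ** 2) for c in (450, 550, 650)], axis=1)
coeffs = rng.uniform(0.2, 1.0, size=(100, 3))
spectra = coeffs @ basis.T                  # (100, 31) transmittance curves
sens = rng.uniform(0.0, 1.0, size=(3, 31))  # camera sensitivities
rgb = spectra @ sens.T                      # (100, 3) camera responses
W = wiener_matrix(spectra, rgb)
est = rgb @ W.T                             # estimated spectra
```

When the spectra truly live in a three-dimensional subspace, as in this toy setup, the estimator recovers them exactly; real transmittances are higher-dimensional, which is where the estimation error and the paper's hologram averaging come in.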

  7. Color Measurement of Tea Leaves at Different Drying Periods Using Hyperspectral Imaging Technique

    PubMed Central

    Xie, Chuanqi; Li, Xiaoli; Shao, Yongni; He, Yong

    2014-01-01

This study investigated the feasibility of using hyperspectral imaging technique for nondestructive measurement of color components (ΔL*, Δa* and Δb*) and classify tea leaves during different drying periods. Hyperspectral images of tea leaves at five drying periods were acquired in the spectral region of 380–1030 nm. The three color features were measured by the colorimeter. Different preprocessing algorithms were applied to select the best one in accordance with the prediction results of partial least squares regression (PLSR) models. Competitive adaptive reweighted sampling (CARS) and successive projections algorithm (SPA) were used to identify the effective wavelengths, respectively. Different models (least squares-support vector machine [LS-SVM], PLSR, principal components regression [PCR] and multiple linear regression [MLR]) were established to predict the three color components, respectively. SPA-LS-SVM model performed excellently with the correlation coefficient (rp) of 0.929 for ΔL*, 0.849 for Δa* and 0.917 for Δb*, respectively. LS-SVM model was built for the classification of different tea leaves. The correct classification rates (CCRs) ranged from 89.29% to 100% in the calibration set and from 71.43% to 100% in the prediction set, respectively. The total classification results were 96.43% in the calibration set and 85.71% in the prediction set. The results showed that hyperspectral imaging technique could be used as an objective and nondestructive method to determine color features and classify tea leaves at different drying periods. PMID:25546335

  8. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition.

    PubMed

    Park, Chulhee; Kang, Moon Gi

    2016-01-01

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications with the advantages that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-offfilter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component of each RGB color channel. To overcome color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in NIR band and recovers hue and color saturation by decomposing the visible band component and the NIR band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors. PMID:27213381
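The decomposition of each raw channel into a visible part and an NIR part can be sketched as follows; the per-channel leakage weights are hypothetical stand-ins for coefficients that, per the paper, would come from the MSFA sensor's measured spectral characteristics.

```python
import numpy as np

def remove_nir(raw_rgb, nir, k=(0.3, 0.25, 0.2)):
    """Model each raw channel as visible + NIR leakage and subtract
    the NIR contribution estimated from the N channel.

    raw_rgb: (..., 3) raw RGB channels captured without the IRCF.
    nir:     (...,)   N-channel image.
    k:       per-channel NIR contribution weights (hypothetical values;
             in practice calibrated from the sensor's spectral response).
    """
    k = np.asarray(k, float)
    visible = raw_rgb - nir[..., None] * k
    # Negative values are physically meaningless; clip them away.
    return np.clip(visible, 0.0, None)
```

Subtracting the scaled N channel restores saturation because the NIR leakage adds a nearly uniform offset to all three channels, which is what washes the colors out.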


  10. Functional magnetic resonance imaging adaptation reveals a noncategorical representation of hue in early visual cortex

    PubMed Central

    Persichetti, Andrew S.; Thompson-Schill, Sharon L.; Butt, Omar H.; Brainard, David H.; Aguirre, Geoffrey K.

    2015-01-01

    Color names divide the fine-grained gamut of color percepts into discrete categories. A categorical transition must occur somewhere between the initial encoding of the continuous spectrum of light by the cones and the verbal report of the name of a color stimulus. Here, we used a functional magnetic resonance imaging (fMRI) adaptation experiment to examine the representation of hue in the early visual cortex. Our stimuli varied in hue between blue and green. We found in the early visual areas (V1, V2/3, and hV4) a smoothly increasing recovery from adaptation with increasing hue distance between adjacent stimuli during both passive viewing (Experiment 1) and active categorization (Experiment 2). We examined the form of the adaptation effect and found no evidence that a categorical representation mediates the release from adaptation for stimuli that cross the blue–green color boundary. Examination of the direct effect of stimulus hue on the fMRI response did, however, reveal an enhanced response to stimuli near the blue–green category border. This was largest in hV4 and when subjects were engaged in active categorization of the stimulus hue. In contrast with a recent report from another laboratory (Bird, Berens, Horner, & Franklin, 2014), we found no evidence for a categorical representation of color in the middle frontal gyrus. A post hoc whole-brain analysis, however, revealed several regions in the frontal cortex with a categorical effect in the adaptation response. Overall, our results support the idea that the representation of color in the early visual cortex is primarily fine grained and does not reflect color categories. PMID:26024465

  11. Adaptive Optics Imaging of Solar System Objects

    NASA Technical Reports Server (NTRS)

    Roddier, Francois; Owen, Toby

    1999-01-01

Most solar system objects have never been observed at wavelengths longer than the R band with an angular resolution better than 1". The Hubble Space Telescope itself has only recently been equipped to observe in the infrared. However, because of its small diameter, the angular resolution is lower than what can now be achieved from the ground with adaptive optics, and time allocated to planetary science is limited. We have successfully used adaptive optics on a 4-m class telescope to obtain 0.1" resolution images of solar system objects in the far red and near infrared (0.7-2.5 microns), at wavelengths which best discriminate their spectral signatures. Our efforts have been put into areas of research for which high angular resolution is essential.

  12. Adaptive Optics Imaging of Solar System Objects

    NASA Technical Reports Server (NTRS)

    Roddier, Francois; Owen, Toby

    1997-01-01

Most solar system objects have never been observed at wavelengths longer than the R band with an angular resolution better than 1 arcsec. The Hubble Space Telescope itself has only recently been equipped to observe in the infrared. However, because of its small diameter, the angular resolution is lower than what can now be achieved from the ground with adaptive optics, and time allocated to planetary science is limited. We have been using adaptive optics (AO) on a 4-m class telescope to obtain 0.1 arcsec resolution images of solar system objects at far red and near infrared wavelengths (0.7-2.5 microns), which best discriminate their spectral signatures. Our efforts have been put into areas of research for which high angular resolution is essential, such as the mapping of Titan and of large asteroids, the dynamics and composition of Neptune's stratospheric clouds, and the infrared photometry of Pluto, Charon, and close satellites previously undetected from the ground.

  13. Automatic sputum color image segmentation for tuberculosis diagnosis

    NASA Astrophysics Data System (ADS)

    Forero-Vargas, Manuel G.; Sierra-Ballen, Eduard L.; Alvarez-Borrego, Josue; Pech-Pacheco, Jose L.; Cristobal-Perez, Gabriel; Alcala, Luis; Desco, Manuel

    2001-11-01

    Tuberculosis (TB) and other mycobacterioses are serious illnesses whose control is mainly based on presumptive diagnosis. Besides clinical suspicion, the diagnosis of mycobacteriosis must be made through genus-specific smears of clinical specimens. However, these techniques lack sensitivity, and consequently clinicians must wait for culture results for as long as two months. Computer analysis of digital images from these smears could improve the sensitivity of the test and, moreover, decrease the workload of the mycobacteriologist. Segmentation of bacteria of particular species entails a complex process. Bacterial shape is not enough as a discriminant feature, because many species share the same shape. Therefore the segmentation procedure must be improved using color image information. In this paper we present two segmentation procedures, based on fuzzy rules and phase-only correlation techniques respectively, that will provide the basis of a future automatic particle screening.

  14. Color image segmentation using vector angle-based region growing

    NASA Astrophysics Data System (ADS)

    Wesolkowski, Slawo; Fieguth, Paul W.

    2002-06-01

    A new region growing color image segmentation algorithm is presented in this paper. This algorithm is invariant to highlights and shading. This is accomplished in two steps. First, the average pixel intensity is removed from each RGB coordinate. This transformation mitigates the effects of highlights. Next, region seeds are obtained using the Mixture of Principal Components algorithm. Each region is characterized using two parameters. The first is the distance between the region prototype and the candidate pixel. The second is the distance between the candidate pixel and its nearest neighbor in the region. The inner vector product or vector angle is used as the similarity measure which makes both of these measures shading invariant. Results on a real image illustrate the effectiveness of the method.
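    The shading-invariant vector-angle similarity described above can be sketched as follows (a minimal illustration with our own function name, not the authors' code):

```python
import numpy as np

def shading_invariant_similarity(p, q):
    """Cosine of the angle between two RGB vectors after the average pixel
    intensity is removed from each coordinate (mitigates highlights)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p - p.mean()   # subtract average intensity from the RGB coordinates
    q = q - q.mean()
    denom = np.linalg.norm(p) * np.linalg.norm(q)
    if denom == 0:     # achromatic pixels have no chromatic component left
        return 1.0
    return float(np.dot(p, q) / denom)

# Two pixels differing only by a brightness scale factor (shading)
# yield a similarity of 1; dissimilar hues score much lower.
print(shading_invariant_similarity([120, 60, 30], [240, 120, 60]))
```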

  15. Optical color-image encryption in the diffractive-imaging scheme

    NASA Astrophysics Data System (ADS)

    Qin, Yi; Wang, Zhipeng; Pan, Qunna; Gong, Qiong

    2016-02-01

    By introducing the theta modulation technique into the diffractive-imaging-based optical scheme, we propose a novel approach for color image encryption. For encryption, a color image is divided into three channels, i.e., red, green and blue, and thereafter these components are appended with redundant data before being sent to the encryption scheme. The carefully designed optical setup, which comprises three 4f optical architectures and a diffractive-imaging-based optical scheme, encodes the three plaintexts into a single noise-like intensity pattern. For decryption, an iterative phase retrieval algorithm, together with a filter operation, is applied to extract the primary color images from the diffraction intensity map. Compared with previous methods, our proposal encrypts a color rather than grayscale image into a single intensity pattern, as a result of which the capacity and practicability are remarkably enhanced. In addition, its performance and security are also investigated. The validity as well as feasibility of the proposed method is supported by numerical simulations.

  16. Honolulu, Hawaii Radar Image, Wrapped Color as Height

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This topographic radar image shows the city of Honolulu, Hawaii and adjacent areas on the island of Oahu. Honolulu lies on the south shore of the island, right of center of the image. Just below the center is Pearl Harbor, marked by several inlets and bays. Runways of the airport can be seen to the right of Pearl Harbor. Diamond Head, an extinct volcanic crater, is a blue circle along the coast right of center. The Koolau mountain range runs through the center of the image. The steep cliffs on the north side of the range are thought to be remnants of massive landslides that ripped apart the volcanic mountains that built the island thousands of years ago. On the north shore of the island are the Mokapu Peninsula and Kaneohe Bay. High resolution topographic data allow ecologists and planners to assess the effects of urban development on the sensitive ecosystems in tropical regions.

    This image combines two types of data from the Shuttle Radar Topography Mission. The image brightness corresponds to the strength of the radar signal reflected from the ground, while colors show the elevation as measured by SRTM. Each cycle of colors (from pink through blue back to pink) represents an equal amount of elevation difference (400 meters, or 1300 feet) similar to contour lines on a standard topographic map. This image contains about 2400 meters (8000 feet) of total relief.
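    The wrapped color-as-height scheme, in which each 400-meter cycle of elevation repeats the full color sequence, can be sketched with a minimal mapping onto the HSV hue wheel (our own illustration; the actual SRTM palette differs):

```python
import colorsys

def wrapped_color(elevation_m, cycle_m=400.0):
    """Map elevation to a repeating hue, like cyclic contour bands:
    every `cycle_m` meters of relief wraps once around the color wheel."""
    hue = (elevation_m % cycle_m) / cycle_m          # 0..1 within one cycle
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)     # full saturation/value
    return tuple(round(c * 255) for c in (r, g, b))

# 0 m and 400 m land on the same color, exactly one cycle apart.
print(wrapped_color(0.0), wrapped_color(400.0))
```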

    The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission is designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA

  17. San Gabriel Mountains, California, Radar image, color as height

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This topographic radar image shows the relationship of the urban area of Pasadena, California to the natural contours of the land. The image includes the alluvial plain on which Pasadena and the Jet Propulsion Laboratory sit, and the steep range of the San Gabriel Mountains. The mountain front and the arcuate valley running from upper left to the lower right are active fault zones, along which the mountains are rising. The chaparral-covered slopes above Pasadena are also a prime area for wildfires and mudslides. Hazards from earthquakes, floods and fires are intimately related to the topography in this area. Topographic data and other remote sensing images provide valuable information for assessing and mitigating the natural hazards for cities along the front of active mountain ranges.

    This image combines two types of data from the Shuttle Radar Topography Mission. The image brightness corresponds to the strength of the radar signal reflected from the ground, while colors show the elevation as measured by SRTM. Colors range from blue at the lowest elevations to white at the highest elevations. This image contains about 2300 meters (7500 feet) of total relief. White speckles on the face of some of the mountains are holes in the data caused by steep terrain. These will be filled using coverage from an intersecting pass.

    The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission is designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency

  18. Radar image with color as height, Bahia State, Brazil

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This radar image is the first to show the full 240-kilometer-wide (150-mile) swath collected by the Shuttle Radar Topography Mission (SRTM). The area shown is in the state of Bahia in Brazil. The semi-circular mountains along the left side of the image are the Serra Da Jacobin, which rise to 1100 meters (3600 feet) above sea level. The total relief shown is approximately 800 meters (2600 feet). The top part of the image is the Sertao, a semi-arid region that is subject to severe droughts during El Nino events. A small portion of the San Francisco River, the longest river (1609 kilometers or 1000 miles) entirely within Brazil, cuts across the upper right corner of the image. This river is a major source of water for irrigation and hydroelectric power. Mapping such regions will allow scientists to better understand the relationships between flooding cycles, drought and human influences on ecosystems.

    This image combines two types of data from the Shuttle Radar Topography Mission. The image brightness corresponds to the strength of the radar signal reflected from the ground, while colors show the elevation as measured by SRTM. The three dark vertical stripes show the boundaries where four segments of the swath are merged to form the full scanned swath. These will be removed in later processing. Colors range from green at the lowest elevations to reddish at the highest elevations.

    The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission is designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space

  19. 3D pulmonary airway color image reconstruction via shape from shading and virtual bronchoscopy imaging techniques

    NASA Astrophysics Data System (ADS)

    Suter, Melissa; Reinhardt, Joseph M.; Hoffman, Eric A.; McLennan, Geoffrey

    2005-04-01

    The dependence on macro-optical imaging of the human body in the assessment of possible disease is rapidly increasing, concurrent with, and as a direct result of, advancements made in medical imaging technologies. Assessing the pulmonary airways through bronchoscopy is performed extensively in clinical practice; however, it remains highly subjective due to limited visualization techniques and the lack of quantitative analyses. The representation of 3D structures in 2D visualization modes, although providing an insight into the structural content of the scene, may in fact skew the perception of the structural form. We have developed two methods for visualizing the optically derived airway mucosal features whilst preserving the structural scene integrity. Shape from shading (SFS) techniques can be used to extract 3D structural information from 2D optical images. The SFS technique presented addresses many limitations encountered in conventional techniques, resulting in high-resolution 3D color images. The second method presented to combine both color and structural information relies on combined CT and bronchoscopy imaging modalities. External imaging techniques such as CT provide a means of determining the gross structural anatomy of the pulmonary airways, but lack the important optically derived mucosal color. Virtual bronchoscopy is used to provide a direct link between the CT-derived structural anatomy and the macro-optically derived mucosal color. Through utilization of a virtual and true bronchoscopy matching technique we are able to directly extract combined, structurally sound 3D color segments of the pulmonary airways. Various pulmonary airway diseases are assessed, and the resulting combined color and texture results are presented, demonstrating the effectiveness of the presented techniques.

  20. A visible/infrared gray image fusion algorithm based on the YUV color transformation

    NASA Astrophysics Data System (ADS)

    Zhu, Jin; Jin, Weiqi; Li, Jiakun; Li, Li

    2012-11-01

    Color fusion, in which multiband images are fused into a single color image, has received attention worldwide, and several effective visible/thermal-infrared color fusion algorithms have been proposed. We have previously run a real-time, natural-sense visible/infrared color fusion algorithm on DSP and FPGA hardware processing platforms. Gray image fusion, however, has its own unique applications. Building on our natural-sense visible/infrared color fusion algorithm, we propose a visible/infrared gray image fusion algorithm: color fusion is first performed in YUV color space, and the luminance of the fused result is then output as the gray fusion image. The algorithm is compared with typical fusion algorithms (weighted average, Laplacian pyramid, and Haar wavelet) using several objective evaluation indicators. Objective and subjective comparisons show that the proposed algorithm performs best, demonstrating that multiband gray image fusion in a color space is feasible. The algorithm runs in real time on a DSP image processing platform with a TI chip as the kernel processor, integrating natural-sense color fusion and gray fusion of visible (low-light-level) and thermal imagery, so that users can conveniently choose between the natural-sense color fusion and gray fusion modes for real-time video output.
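    The final gray-fusion step, outputting the luminance of a YUV color fusion, can be sketched as follows (our own minimal illustration using the standard BT.601 luma weights; the paper's full fusion pipeline is not reproduced here):

```python
import numpy as np

def gray_fusion_y(fused_rgb):
    """Given a color-fused RGB image (H x W x 3, values 0-255), return the
    YUV luminance (Y) channel as the gray fusion output (BT.601 weights)."""
    rgb = np.asarray(fused_rgb, dtype=float)
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return np.clip(y, 0, 255).astype(np.uint8)

# A pure-red fused pixel maps to its luma value; white stays white.
red = np.zeros((1, 1, 3)); red[..., 0] = 255
print(int(gray_fusion_y(red)[0, 0]))
```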

  1. Color constancy using 3D scene geometry derived from a single image.

    PubMed

    Elfiky, Noha; Gevers, Theo; Gijsenij, Arjan; Gonzalez, Jordi

    2014-09-01

    The aim of color constancy is to remove the effect of the color of the light source. As color constancy is inherently an ill-posed problem, most of the existing color constancy algorithms are based on specific imaging assumptions (e.g., the gray-world and white-patch assumptions). In this paper, 3D geometry models are used to determine which color constancy method to use for the different geometrical regions (depth/layer) found in images. The aim is to classify images into stages (rough 3D geometry models). According to stage models, images are divided into stage regions using hard and soft segmentation. After that, the best color constancy methods are selected for each geometry depth. To this end, we propose a method to combine color constancy algorithms by investigating the relation between depth, local image statistics, and color constancy. Image statistics are then exploited per depth to select the proper color constancy method. Our approach opens the possibility to estimate multiple illuminants by distinguishing nearby light sources from distant illuminants. Experiments on state-of-the-art data sets show that the proposed algorithm outperforms state-of-the-art single color constancy algorithms with an improvement of almost 50% in median angular error. When using a perfect classifier (i.e., all of the test images are correctly classified into stages), the performance of the proposed method achieves an improvement of 52% in median angular error compared with the best-performing single color constancy algorithm. PMID:25051548
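    The gray-world assumption mentioned above is one of the single algorithms the paper combines. A minimal sketch of gray-world white balancing (our own illustration, not the paper's combined method):

```python
import numpy as np

def gray_world(img):
    """Gray-world white balance: assume the average scene color is gray,
    so scale each channel to equalize the per-channel means."""
    img = np.asarray(img, dtype=float)
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel mean under the illuminant
    gains = means.mean() / means              # scale each channel toward a common gray
    return np.clip(img * gains, 0, 255).astype(np.uint8)
```

Applied to an image with a strong color cast, the corrected channel means come out (nearly) equal, which is exactly the gray-world assumption enforced.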

  2. Adaptive Optics Retinal Imaging: Emerging Clinical Applications

    PubMed Central

    Godara, Pooja; Dubis, Adam M.; Roorda, Austin; Duncan, Jacque L.; Carroll, Joseph

    2010-01-01

    The human retina is a uniquely accessible tissue. Tools like scanning laser ophthalmoscopy (SLO) and spectral domain optical coherence tomography (SD-OCT) provide clinicians with remarkably clear pictures of the living retina. While the anterior optics of the eye permit such non-invasive visualization of the retina and associated pathology, these same optics induce significant aberrations that in most cases obviate cellular-resolution imaging. Adaptive optics (AO) imaging systems use active optical elements to compensate for aberrations in the optical path between the object and the camera. Applied to the human eye, AO allows direct visualization of individual rod and cone photoreceptor cells, RPE cells, and white blood cells. AO imaging has changed the way vision scientists and ophthalmologists see the retina, helping to clarify our understanding of retinal structure, function, and the etiology of various retinal pathologies. Here we review some of the advances made possible with AO imaging of the human retina, and discuss applications and future prospects for clinical imaging. PMID:21057346

  3. Comparison of Color Model in Cotton Image Under Conditions of Natural Light

    NASA Astrophysics Data System (ADS)

    Zhang, J. H.; Kong, F. T.; Wu, J. Z.; Wang, S. W.; Liu, J. J.; Zhao, P.

    Although color images contain a large amount of information reflecting crop characteristics, different color models capture different information. The selection of a color model is the key to separating crops from background effectively and rapidly. Taking cotton images collected under natural light as the object, we extract the color components of the RGB, HSL and YIQ color models, and then evaluate the nine resulting color components using both subjective and objective methods. Subjective evaluation shows that the gray values of the Q component remain stable, without large fluctuation, across the soil, straw and plastic-film regions. In the objective evaluation, we use the variance, average gradient, gray-prediction error statistics and information entropy methods respectively, which confirm that the Q color component is the most suitable for background segmentation.
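    The Q component singled out above is the second chrominance axis of the NTSC YIQ transform. A minimal sketch (standard YIQ coefficients; the function name is ours):

```python
import numpy as np

def q_component(rgb):
    """Q (chrominance) channel of the NTSC YIQ transform for RGB data
    (last axis = R, G, B in 0-255). Gray pixels map to Q ~ 0."""
    rgb = np.asarray(rgb, dtype=float)
    # standard NTSC coefficients for the Q axis
    return 0.211 * rgb[..., 0] - 0.523 * rgb[..., 1] + 0.312 * rgb[..., 2]

# Achromatic (gray) pixels have near-zero Q; saturated colors do not,
# which is why Q separates uniform background regions from colored crops.
print(round(float(q_component([100, 100, 100])), 6))
```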

  4. Adaptive optics for directly imaging planetary systems

    NASA Astrophysics Data System (ADS)

    Bailey, Vanessa Perry

    In this dissertation I present the results from five papers (including one in preparation) on giant planets, brown dwarfs, and their environments, as well as on the commissioning and optimization of the Adaptive Optics system for the Large Binocular Telescope Interferometer. The first three chapters cover direct imaging results on several distantly-orbiting planets and brown dwarf companions. The boundary between giant planets and brown dwarf companions in wide orbits is a blurry one. In Chapter 2, I use 3-5 micron imaging of several brown dwarf companions, combined with mid-infrared photometry for each system, to constrain the circum-substellar disks around the brown dwarfs. I then use this information to discuss limits on scattering events versus in situ formation. In Chapters 3 and 4, I present results from an adaptive optics imaging survey for giant planets, where the target stars were selected based on the properties of their circumstellar debris disks. Specifically, we targeted systems with debris disks whose SEDs indicated gaps, clearings, or truncations; these features may possibly be sculpted by planets. I discuss in detail one planet-mass companion discovered as part of this survey, HD 106906 b. At a projected separation of 650 AU and weighing in at 11 Jupiter masses, a companion such as this is not a common outcome of any planet or binary star formation model. In the remaining three chapters, I discuss pre-commissioning, on-sky results, and planned work on the Large Binocular Telescope Interferometer Adaptive Optics system. Before construction of the LBT AO system was complete, I tested a prototype of LBTI's pyramid wavefront sensor unit at the MMT with synthetically-generated calibration files. I present the methodology and MMT on-sky tests in Chapter 5. In Chapter 6, I present the commissioned performance of the LBTI AO system.
Optical imperfections within LBTI limited the quality of the science images, and I describe a simple method to use the adaptive optics system

  5. A quaternion-based spectral clustering method for color image segmentation

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Jin, Lianghai; Liu, Hong; He, Zeng

    2011-11-01

    Spectral clustering methods have been widely used in image segmentation. A key issue in spectral clustering is how to build the affinity matrix. When applied to color image segmentation, most existing methods either use a Euclidean metric to define the affinity matrix, or first convert color images into gray-level images and then use the gray-level images to construct the affinity matrix (the component-wise method). However, it is known that Euclidean distances cannot represent color differences well, and the component-wise method does not consider the correlation between color channels. In this paper, we propose a new method to produce the affinity matrix, in which the color images are first represented in quaternion form and then the similarities between color pixels are measured by a quaternion rotation (QR) mechanism. The experimental results show the superiority of the new method.

  6. Combining color and shape information for content-based image retrieval on the Internet

    NASA Astrophysics Data System (ADS)

    Diplaros, Aristeidis; Gevers, Theo; Patras, Ioannis

    2003-12-01

    We propose a new image feature that merges color and shape information. This global feature, which we call color shape context, is a histogram that combines the spatial (shape) and color information of the image in one compact representation. This histogram codes the locality of color transitions in an image. Illumination invariant derivatives are first computed and provide the edges of the image, which is the shape information of our feature. These edges are used to obtain similarity (rigid) invariant shape descriptors. The color transitions that take place on the edges are coded in an illumination invariant way and are used as the color information. The color and shape information are combined in one multidimensional vector. The matching function of this feature is a metric and allows for existing indexing methods such as R-trees to be used for fast and efficient retrieval.

  7. The effect of different standard illumination conditions on color balance failure in offset printed images on glossy coated paper expressed by color difference

    NASA Astrophysics Data System (ADS)

    Spiridonov, I.; Shopova, M.; Boeva, R.; Nikolov, M.

    2012-05-01

    One of the biggest problems in color reproduction processes is the color shift that occurs when images are viewed under different illuminants. Process ink colors and their combinations that match under one light source will often appear different under another light source. This problem is referred to as color balance failure or color inconstancy. The main goals of the present study are to investigate and determine the color balance failure (color inconstancy) of offset-printed images, expressed by color difference and color gamut changes, for three of the illuminants most commonly used in practice: CIE D50, CIE F2 and CIE A. The results obtained are important from both a scientific and a practical point of view. For the first time, a methodology is suggested and implemented for the examination and estimation of color shifts by studying a large number of color and gamut changes in various ink combinations for different illuminants.
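    Color shifts of this kind are conventionally quantified as a CIE color difference between measurements of the same patch under two illuminants. A minimal sketch of the CIE 1976 ΔE formula (our own illustration; the record does not specify which ΔE variant the study uses):

```python
import math

def delta_e76(lab1, lab2):
    """CIE 1976 color difference between two CIELAB colors (L*, a*, b*):
    the Euclidean distance in Lab space."""
    return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

# Hypothetical measurements of one printed patch under two illuminants;
# a Delta E above roughly 2-3 is usually taken to be a visible shift.
print(round(delta_e76((52.0, 41.0, 28.0), (50.0, 44.0, 30.0)), 2))
```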

  8. Genomic architecture of adaptive color pattern divergence and convergence in Heliconius butterflies.

    PubMed

    Supple, Megan A; Hines, Heather M; Dasmahapatra, Kanchon K; Lewis, James J; Nielsen, Dahlia M; Lavoie, Christine; Ray, David A; Salazar, Camilo; McMillan, W Owen; Counterman, Brian A

    2013-08-01

    Identifying the genetic changes driving adaptive variation in natural populations is key to understanding the origins of biodiversity. The mosaic of mimetic wing patterns in Heliconius butterflies makes an excellent system for exploring adaptive variation using next-generation sequencing. In this study, we use a combination of techniques to annotate the genomic interval modulating red color pattern variation, identify a narrow region responsible for adaptive divergence and convergence in Heliconius wing color patterns, and explore the evolutionary history of these adaptive alleles. We use whole genome resequencing from four hybrid zones between divergent color pattern races of Heliconius erato and two hybrid zones of the co-mimic Heliconius melpomene to examine genetic variation across 2.2 Mb of a partial reference sequence. In the intergenic region near optix, the gene previously shown to be responsible for the complex red pattern variation in Heliconius, population genetic analyses identify a shared 65-kb region of divergence that includes several sites perfectly associated with phenotype within each species. This region likely contains multiple cis-regulatory elements that control discrete expression domains of optix. The parallel signatures of genetic differentiation in H. erato and H. melpomene support a shared genetic architecture between the two distantly related co-mimics; however, phylogenetic analysis suggests mimetic patterns in each species evolved independently. Using a combination of next-generation sequencing analyses, we have refined our understanding of the genetic architecture of wing pattern variation in Heliconius and gained important insights into the evolution of novel adaptive phenotypes in natural populations. PMID:23674305

  9. Color Mosaics and Multispectral Analyses of Mars Reconnaissance Orbiter Mars Color Imager (MARCI) Observations

    NASA Astrophysics Data System (ADS)

    Bell, J. F.; Anderson, R. B.; Kressler, K.; Wolff, M. J.; Cantor, B.; MARCI Science and Operations Teams

    2008-12-01

    The Mars Color Imager (MARCI) on the Mars Reconnaissance Orbiter (MRO) spacecraft is a wide-angle, multispectral Charge-Coupled Device (CCD) "push-frame" imaging camera designed to provide frequent, synoptic-scale imaging of Martian atmospheric and surface features and phenomena. MARCI uses a 1024x1024 pixel interline transfer CCD detector that has seven narrowband interference filters bonded directly to the CCD. Five of the filters are in the visible to short-wave near-IR wavelength range (MARCI-VIS: 437, 546, 604, 653, and 718 nm) and two are in the UV (MARCI-UV: 258 and 320 nm). During the MRO primary mission (November 2006 through November 2008), the instrument has acquired data swaths on the dayside of the planet, at an equator-crossing local solar time of about 3:00 p.m. We are analyzing the MARCI-VIS multispectral imaging data from the MRO primary mission in order to investigate (a) color variations in the surface and their potential relationship to variations in iron mineralogy; and (b) the time variability of surface albedo features at the approx. 1 km/pixel scale typical of MARCI nadir-pointed observations. Raw MARCI images were calibrated to radiance factor (I/F) using pre-flight and in-flight calibration files and a pipeline calibration process developed by the science team. We are using these calibrated MARCI files to generate map-projected mosaics of each of the 30 USGS standard quadrangles on Mars in each of the five MARCI-VIS bands. Our mosaicking software searches the MARCI data set to identify files that match a user-defined set of limits such as latitude, longitude, Ls, incidence angle, emission angle, and year. Each of the files matching the desired criteria is then map-projected and inserted in series into an output mosaic covering the desired lat/lon range. 
In cases of redundant coverage of the same pixels by different files, the user can set the program to use the pixel with the lowest I/F value for each individual MARCI-VIS band, thus
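    The lowest-I/F compositing rule described above, keeping the darkest redundant observation per pixel, can be sketched for a single band as follows (our own minimal illustration, assuming NaN marks pixels a swath does not cover):

```python
import numpy as np

def min_if_composite(swaths):
    """Composite overlapping map-projected swaths of one band by keeping,
    per pixel, the lowest I/F value; NaN marks uncovered pixels."""
    stack = np.stack([np.asarray(s, dtype=float) for s in swaths])
    return np.nanmin(stack, axis=0)   # ignores NaN gaps, favors low I/F
```

Favoring the lowest I/F per pixel tends to suppress transient bright features such as clouds, which is one reason a minimum composite is a common choice for surface mosaics.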

  10. Landsat ETM+ False-Color Image Mosaics of Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.

    2007-01-01

    In 2005, the U.S. Agency for International Development and the U.S. Trade and Development Agency contracted with the U.S. Geological Survey to perform assessments of the natural resources within Afghanistan. The assessments concentrate on the resources that are related to the economic development of that country. Therefore, assessments were initiated in oil and gas, coal, mineral resources, water resources, and earthquake hazards. All of these assessments require geologic, structural, and topographic information throughout the country at a finer scale and better accuracy than that provided by the existing maps, which were published in the 1970's by the Russians and Germans. The very rugged terrain in Afghanistan, the large scale of these assessments, and the terrorist threat in Afghanistan indicated that the best approach to provide the preliminary assessments was to use remotely sensed, satellite image data, although this may also apply to subsequent phases of the assessments. Therefore, the first step in the assessment process was to produce satellite image mosaics of Afghanistan that would be useful for these assessments. This report discusses the production of the Landsat false-color image database produced for these assessments, which was produced from the calibrated Landsat ETM+ image mosaics described by Davis (2006).

  11. Colors of Alien Worlds from Direct Imaging Exoplanet Missions

    NASA Astrophysics Data System (ADS)

    Hu, Renyu

    2015-08-01

    Future direct-imaging exoplanet missions such as WFIRST/AFTA, Exo-C, and Exo-S will measure the reflectivity of exoplanets at visible wavelengths. Most of the exoplanets to be observed will be located further away from their parent stars than is Earth from the Sun. These “cold” exoplanets have atmospheric environments conducive for the formation of water and/or ammonia clouds, like Jupiter in the Solar System. I find the mixing ratio of methane and the pressure level of the uppermost cloud deck on these planets can be uniquely determined from their reflection spectra, with moderate spectral resolution, if the cloud deck is between 0.6 and 1.5 bars. The existence of this unique solution is useful for exoplanet direct imaging missions for several reasons. First, the weak bands and strong bands of methane enable the measurement of the methane mixing ratio and the cloud pressure, although an overlying haze layer can bias the estimate of the latter. Second, the cloud pressure, once derived, yields an important constraint on the internal heat flux from the planet, and thus indicates its thermal evolution. Third, water worlds having H2O-dominated atmospheres are likely to have water clouds located higher than the 10^-3 bar pressure level, and muted spectral absorption features. These planets would occupy a confined phase space in the color-color diagrams, likely distinguishable from H2-rich giant exoplanets by broadband observations. Therefore, direct-imaging exoplanet missions may offer the capability to broadly distinguish H2-rich giant exoplanets versus H2O-rich super-Earth exoplanets, and to detect ammonia and/or water clouds and methane gas in their atmospheres.

  12. Colors of Alien Worlds from Direct Imaging Exoplanet Missions

    NASA Astrophysics Data System (ADS)

    Hu, Renyu

    2016-01-01

    Future direct-imaging exoplanet missions such as WFIRST will measure the reflectivity of exoplanets at visible wavelengths. Most of the exoplanets to be observed will be located further away from their parent stars than is Earth from the Sun. These "cold" exoplanets have atmospheric environments conducive for the formation of water and/or ammonia clouds, like Jupiter in the Solar System. I find the mixing ratio of methane and the pressure level of the uppermost cloud deck on these planets can be uniquely determined from their reflection spectra, with moderate spectral resolution, if the cloud deck is between 0.6 and 1.5 bars. The existence of this unique solution is useful for exoplanet direct imaging missions for several reasons. First, the weak bands and strong bands of methane enable the measurement of the methane mixing ratio and the cloud pressure, although an overlying haze layer can bias the estimate of the latter. Second, the cloud pressure, once derived, yields an important constraint on the internal heat flux from the planet, and thus indicates its thermal evolution. Third, water worlds having H2O-dominated atmospheres are likely to have water clouds located higher than the 10^-3 bar pressure level, and muted spectral absorption features. These planets would occupy a confined phase space in the color-color diagrams, likely distinguishable from H2-rich giant exoplanets by broadband observations. Therefore, direct-imaging exoplanet missions may offer the capability to broadly distinguish H2-rich giant exoplanets versus H2O-rich super-Earth exoplanets, and to detect ammonia and/or water clouds and methane gas in their atmospheres.

  13. Radar Image with Color as Height, Old Khmer Road, Cambodia

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This image shows the Old Khmer Road (Indratataka-Bakheng causeway) in Cambodia extending from the 9th century A.D. capital city of Hariharalaya in the lower right portion of the image to the later 10th century A.D. capital of Yasodharapura, which was located in the vicinity of Phnom Bakheng (not shown in image). The Old Road is believed to be more than 1,000 years old. Its precise role and destination within the 'new' city at Angkor is still being studied by archeologists. But wherever it ended, it not only offered an immense processional way for the King to move between old and new capitals, it also linked the two areas, widening the territorial base of the Khmer King. Finally, in the past and today, the Old Road managed the waters of the floodplain. It acted as a long barrage or dam, not only for the natural streams of the area but also for the changes brought to the local hydrology by Khmer population growth.

    The image was acquired by NASA's Airborne Synthetic Aperture Radar (AIRSAR). Image brightness is from the P-band (68 cm wavelength) radar backscatter, which is a measure of how much energy the surface reflects back towards the radar. Color is used to represent elevation contours. One cycle of color represents 20 m of elevation change, that is going from blue to red to yellow to green and back to blue again corresponds to 20 m of elevation change. Image dimensions are approximately 3.4 km by 3.5 km with a pixel spacing of 5 m. North is at top.
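    The cyclic color coding described above (one full color cycle per 20 m of elevation, so heights one cycle apart share a color) can be sketched with a simple wrap-around hue mapping. This is a generic illustration; the actual AIRSAR blue-red-yellow-green palette ordering is not reproduced here.

```python
import colorsys

def elevation_to_rgb(elev_m, cycle_m=20.0):
    """Map elevation to a cyclic hue: one full trip around the color wheel per
    `cycle_m` meters, so 0 m and 20 m render as the same color."""
    hue = (elev_m % cycle_m) / cycle_m          # fraction of the way around the wheel
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)   # full saturation and brightness

# Elevations exactly one cycle apart wrap to the same color.
assert elevation_to_rgb(5.0) == elevation_to_rgb(25.0)
```

The wrap-around is what lets a single color scale cover arbitrarily large elevation ranges at the cost of absolute-height ambiguity.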

    AIRSAR flies aboard a NASA DC-8 based at NASA's Dryden Flight Research Center, Edwards, Calif. In the TOPSAR mode, AIRSAR collects radar interferometry data from two spatially separated antennas (2.6 meters, or 8.5 feet). Information from the two antennas is used to form radar backscatter imagery and to generate highly accurate elevation data. Built, operated and managed by JPL, AIRSAR is part of NASA's Earth Science Enterprise program. JPL is a division of the California Institute of Technology in Pasadena.

  14. Spatial-frequency-contingent color aftereffects: adaptation with two-dimensional stimulus patterns.

    PubMed

    Webster, W R; Day, R H; Gillies, O; Crassini, B

    1992-01-01

    The spatial-frequency theory of vision has been supported by adaptation studies using checkerboards in which contingent color aftereffects (CAEs) were produced at fundamental frequencies oriented at 45 degrees to the edges. A replication of this study failed to produce CAEs at the orientation of either the edges or the fundamentals. Using a computer-generated display, no CAEs were produced by adaptation of a square or an oblique checkerboard. But when one type of checkerboard (4 cpd) was adapted alone, CAEs were produced on the adapted checkerboard and on sine-wave gratings aligned with the fundamental and third harmonics of the checkerboard spectrum. Adaptation of a coarser checkerboard (0.80 cpd) produced CAEs aligned with both the edges and the harmonic frequencies. With checkerboards of both frequencies, CAEs were also found on the other type of checkerboard that had not been adapted. This observation raises problems for any edge-detector theory of vision, because there was no adaptation to edges. It was concluded that spatial-frequency mechanisms are operating at both low- and high-spatial frequencies and that an edge mechanism is operative at lower frequencies. The implications of these results are assessed for other theories of spatial vision. PMID:1549426

  15. Extreme Adaptive Optics Planet Imager: XAOPI

    SciTech Connect

    Macintosh, B A; Graham, J; Poyneer, L; Sommargren, G; Wilhelmsen, J; Gavel, D; Jones, S; Kalas, P; Lloyd, J; Makidon, R; Olivier, S; Palmer, D; Patience, J; Perrin, M; Severson, S; Sheinis, A; Sivaramakrishnan, A; Troy, M; Wallace, K

    2003-09-17

    Ground based adaptive optics is a potentially powerful technique for direct imaging detection of extrasolar planets. Turbulence in the Earth's atmosphere imposes some fundamental limits, but the large size of ground-based telescopes compared to spacecraft can work to mitigate this. We are carrying out a design study for a dedicated ultra-high-contrast system, the eXtreme Adaptive Optics Planet Imager (XAOPI), which could be deployed on an 8-10 m telescope in 2007. With a 4096-actuator MEMS deformable mirror it should achieve Strehl >0.9 in the near-IR. Using an innovative spatially filtered wavefront sensor, the system will be optimized to control scattered light over a large radius and suppress artifacts caused by static errors. We predict that it will achieve contrast levels of 10^7-10^8 at angular separations of 0.2-0.8 arcseconds around a large sample of stars (R<7-10), sufficient to detect Jupiter-like planets through their near-IR emission over a wide range of ages and masses. We are constructing a high-contrast AO testbed to verify key concepts of our system, and present preliminary results here, showing an RMS wavefront error of <1.3 nm with a flat mirror.

  16. New Orleans Topography, Radar Image with Colored Height

    NASA Technical Reports Server (NTRS)

    2005-01-01

    About the animation: This simulated view of the potential effects of storm surge flooding on Lake Pontchartrain and the New Orleans area was generated with data from the Shuttle Radar Topography Mission. Although it is protected by levees and sea walls against storm surges of 18 to 20 feet, much of the city is below sea level, and flooding due to storm surges caused by major hurricanes is a concern. The animation shows regions that, if unprotected, would be inundated with water. The animation depicts flooding in one-meter increments.

    About the image: The city of New Orleans, situated on the southern shore of Lake Pontchartrain, is shown in this radar image from the Shuttle Radar Topography Mission (SRTM). In this image bright areas show regions of high radar reflectivity, such as from urban areas, and elevations have been coded in color using height data also from the SRTM mission. Dark green colors indicate low elevations, rising through yellow and tan, to white at the highest elevations.

    New Orleans is near the center of this scene, between the lake and the Mississippi River. The line spanning the lake is the Lake Pontchartrain Causeway, the world's longest overwater highway bridge. Major portions of the city of New Orleans are actually below sea level, and although it is protected by levees and sea walls that are designed to protect against storm surges of 18 to 20 feet, flooding during storm surges associated with major hurricanes is a significant concern.

    Data used in this image were acquired by the Shuttle Radar Topography Mission aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface

  17. Filter-free image sensor pixels comprising silicon nanowires with selective color absorption.

    PubMed

    Park, Hyunsung; Dan, Yaping; Seo, Kwanyong; Yu, Young J; Duane, Peter K; Wober, Munib; Crozier, Kenneth B

    2014-01-01

    The organic dye filters of conventional color image sensors achieve the red/green/blue response needed for color imaging, but have disadvantages related to durability, low absorption coefficient, and fabrication complexity. Here, we report a new paradigm for color imaging based on all-silicon nanowire devices and no filters. We fabricate pixels consisting of vertical silicon nanowires with integrated photodetectors, demonstrate that their spectral sensitivities are governed by nanowire radius, and perform color imaging. Our approach is conceptually different from filter-based methods, as absorbed light is converted to photocurrent, ultimately presenting the opportunity for very high photon efficiency. PMID:24588103

  18. Vicarious calibration of the Geostationary Ocean Color Imager.

    PubMed

    Ahn, Jae-Hyun; Park, Young-Je; Kim, Wonkook; Lee, Boram; Oh, Im Sang

    2015-09-01

    Measurements of ocean color from Geostationary Ocean Color Imager (GOCI) with a moderate spatial resolution and a high temporal frequency demonstrate high value for a number of oceanographic applications. This study aims to propose and evaluate the calibration of GOCI as needed to achieve the level of radiometric accuracy desired for ocean color studies. Previous studies reported that the GOCI retrievals of normalized water-leaving radiances (nLw) are biased high for all visible bands due to the lack of vicarious calibration. The vicarious calibration approach described here relies on the assumed constant aerosol characteristics over the open-ocean sites to accurately estimate atmospheric radiances for the two near-infrared (NIR) bands. The vicarious calibration of visible bands is performed using in situ nLw measurements and the satellite-estimated atmospheric radiance using two NIR bands over the case-1 waters. Prior to this analysis, the in situ nLw spectra in the NIR are corrected by the spectrum optimization technique based on the NIR similarity spectrum assumption. The vicarious calibration gain factors derived for all GOCI bands (except 865 nm) significantly improve agreement in retrieved remote-sensing reflectance (Rrs) relative to in situ measurements. These gain factors are independent of angular geometry and possible temporal variability. To further increase the confidence in the calibration gain factors, a large data set from shipboard measurements and AERONET-OC is used in the validation process. It is shown that the absolute percentage difference of the atmospheric correction results from the vicariously calibrated GOCI system is reduced by ~6.8%. PMID:26368426
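    The core of a vicarious gain factor is a per-band ratio: the top-of-atmosphere radiance predicted from trusted in situ (or assumed-aerosol) data divided by the radiance the sensor actually reported. The sketch below is a hypothetical simplification of that idea, not the GOCI team's actual procedure.

```python
def vicarious_gain(predicted_radiances, measured_radiances):
    """Per-band vicarious calibration gain, averaged over match-up observations:
    g = mean(L_predicted / L_measured). Multiplying future measurements by g
    nudges the sensor onto the trusted radiometric scale."""
    ratios = [p / m for p, m in zip(predicted_radiances, measured_radiances)]
    return sum(ratios) / len(ratios)

# A sensor reading 5% high over every match-up gets a gain just under 1.
g = vicarious_gain([95.0, 190.0], [100.0, 200.0])
assert abs(g - 0.95) < 1e-12
assert abs(g * 100.0 - 95.0) < 1e-9   # applying the gain corrects the bias
```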

  19. Using Color and Grayscale Images to Teach Histology to Color-Deficient Medical Students

    ERIC Educational Resources Information Center

    Rubin, Lindsay R.; Lackey, Wendy L.; Kennedy, Frances A.; Stephenson, Robert B.

    2009-01-01

    Examination of histologic and histopathologic microscopic sections relies upon differential colors provided by staining techniques, such as hematoxylin and eosin, to delineate normal tissue components and to identify pathologic alterations in these components. Given the prevalence of color deficiency (commonly called "color blindness") in the…

  20. Survey of contemporary trends in color image segmentation

    NASA Astrophysics Data System (ADS)

    Vantaram, Sreenath Rao; Saber, Eli

    2012-10-01

    In recent years, the acquisition of image and video information for processing, analysis, understanding, and exploitation of the underlying content in various applications, ranging from remote sensing to biomedical imaging, has grown at an unprecedented rate. Analysis by human observers is quite laborious, tiresome, and time consuming, if not infeasible, given the large and continuously rising volume of data. Hence the need for systems capable of automatically and effectively analyzing the aforementioned imagery for a variety of uses that span the spectrum from homeland security to elderly care. In order to achieve the above, tools such as image segmentation provide the appropriate foundation for expediting and improving the effectiveness of subsequent high-level tasks by providing a condensed and pertinent representation of image information. We provide a comprehensive survey of color image segmentation strategies adopted over the last decade, though notable contributions in the gray scale domain will also be discussed. Our taxonomy of segmentation techniques is sampled from a wide spectrum of spatially blind (or feature-based) approaches such as clustering and histogram thresholding as well as spatially guided (or spatial domain-based) methods such as region growing/splitting/merging, energy-driven parametric/geometric active contours, supervised/unsupervised graph cuts, and watersheds, to name a few. In addition, qualitative and quantitative results of prominent algorithms on several images from the Berkeley segmentation dataset are shown in order to furnish a fair indication of the current quality of the state of the art. Finally, we provide a brief discussion on our current perspective of the field as well as its associated future trends.
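    One of the spatially blind, clustering-based strategies the survey covers can be sketched as plain k-means over pixel colors: pixels are grouped purely by where they sit in RGB space, with positions ignored. This is a generic illustration, not any specific surveyed algorithm.

```python
import random

def kmeans_color_segmentation(pixels, k=2, iters=10, seed=0):
    """Spatially blind (feature-based) segmentation sketch: cluster pixels by
    RGB color alone with plain k-means, ignoring pixel positions."""
    rng = random.Random(seed)
    centers = rng.sample(sorted(set(pixels)), k)   # k distinct starting colors
    labels = [0] * len(pixels)
    for _ in range(iters):
        # Assign each pixel to its nearest center (squared distance in RGB).
        labels = [min(range(k),
                      key=lambda c: sum((p[i] - centers[c][i]) ** 2 for i in range(3)))
                  for p in pixels]
        # Move each center to the mean color of its assigned pixels.
        for c in range(k):
            members = [p for p, lab in zip(pixels, labels) if lab == c]
            if members:
                centers[c] = tuple(sum(m[i] for m in members) / len(members)
                                   for i in range(3))
    return labels

# Two well-separated color populations land in two different clusters.
pix = [(250, 10, 10)] * 3 + [(10, 10, 250)] * 3
assert len(set(kmeans_color_segmentation(pix))) == 2
```

The spatially guided methods in the survey (region growing, graph cuts, watersheds) differ precisely in that they add the pixel coordinates, or neighborhood structure, that this sketch discards.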

  1. [Color processing of ultrasonographic images in extracorporeal lithotripsy].

    PubMed

    Lardennois, B; Ziade, A; Walter, K

    1991-02-01

    A number of technical difficulties are encountered in the ultrasonographic detection of renal stones which unfortunately limit its performance. The margin of error of firing in extracorporeal shock-wave lithotripsy (ESWL) must be reduced to a minimum. The role of ultrasonographic monitoring during lithotripsy is also essential: continuous control of the focusing of the shock-wave beam and assessment of the quality of fragmentation. The authors propose to improve ultrasonographic imaging in ESWL by means of intraoperative colour processing of the stone. Each shot must be directed to its target with an economy of vision, avoiding excessive fatigue. The principle of the technique consists of digitization of the ultrasound video images using a Macintosh Mac 2 computer. The Graphis Paint II program is interfaced directly with the Quick Capture card and recovers the images on its work surface in real time. The program is then able to attribute to each of the 256 shades of grey any one of the 16.6 million colours of the Macintosh universe, with specific intensity and saturation. During fragmentation, using the principle of a palette, the stone changes colour from green to red, indicating complete fragmentation. A Color Space card converts the digital image obtained into an analogue video source which is visualized on the monitor. It can be superimposed and/or juxtaposed with the source image by means of a multi-standard mixing table. Colour processing of ultrasonographic images in extracorporeal shock-wave lithotripsy allows better visualization of the stones and better follow-up of fragmentation, and allows the shock-wave treatment to be stopped earlier. It increases the stone-free rate at 6 months. This configuration could eventually be integrated into the ultrasound apparatus itself. PMID:1364639
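    The mapping of 256 gray shades onto colors described above is essentially a lookup table. A hypothetical green-to-red ramp in that spirit (the abstract does not give the actual palette values) might look like:

```python
def build_fragmentation_palette():
    """Hypothetical 256-entry lookup table: map each gray shade to a point on a
    green-to-red ramp, so the displayed stone 'changes colour from green to red'
    as its echo levels shift during fragmentation."""
    lut = []
    for g in range(256):
        t = g / 255.0                                   # 0.0 (green) .. 1.0 (red)
        lut.append((int(round(255 * t)),                # red rises
                    int(round(255 * (1 - t))),          # green falls
                    0))                                 # no blue
    return lut

lut = build_fragmentation_palette()
assert len(lut) == 256
assert lut[0] == (0, 255, 0) and lut[255] == (255, 0, 0)
```

A LUT keeps the transformation cheap enough to run on every video frame, which is what makes real-time recoloring on 1990s hardware plausible.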

  2. Color image encryption based on color blend and chaos permutation in the reality-preserving multiple-parameter fractional Fourier transform domain

    NASA Astrophysics Data System (ADS)

    Lang, Jun

    2015-03-01

    In this paper, we propose a novel color image encryption method using Color Blend (CB) and Chaos Permutation (CP) operations in the reality-preserving multiple-parameter fractional Fourier transform (RPMPFRFT) domain. The original color image is first exchanged and mixed randomly from the standard red-green-blue (RGB) color space to an R′G′B′ color space by rotating the color cube with a random angle matrix. Then the RPMPFRFT is employed to change the pixel values of the color image: the three components of the scrambled RGB color space are transformed by the RPMPFRFT with three different transform pairs, respectively. Compared to transforms with complex-valued output, the RPMPFRFT ensures that the output is real, which saves image storage space and is convenient for transmission in practical applications. To further enhance the security of the encryption system, the output of the former steps is scrambled by juxtaposing sections of the image in the reality-preserving multiple-parameter fractional Fourier domains, with the alignment of sections determined by two coupled chaotic logistic maps. The parameters in the Color Blend, Chaos Permutation, and RPMPFRFT operations are regarded as the key of the encryption algorithm. The proposed color image encryption can also be applied to encrypt three gray images by transforming them into the three RGB color components of a specially constructed color image. Numerical simulations are performed to demonstrate that the proposed algorithm is feasible, secure, sensitive to keys, and robust to noise attack and data loss.
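    The chaos-permutation idea can be illustrated with a single logistic map used to order indices; the map's initial value and parameter play the role of the key. This is a simplified stand-in for the paper's two coupled logistic maps, not its actual scheme.

```python
def logistic_permutation(n, x0=0.3739, r=3.99):
    """Chaos-driven permutation sketch: iterate the logistic map x <- r*x*(1-x)
    n times, then order the indices 0..n-1 by the generated values.
    (x0, r) act as the secret key; only someone holding it can rebuild the order."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return sorted(range(n), key=lambda i: xs[i])

def apply_perm(data, perm):
    """Scramble: position dst of the output takes element perm[dst] of the input."""
    return [data[i] for i in perm]

def invert_perm(data, perm):
    """Unscramble: send each element back to where it came from."""
    out = [None] * len(perm)
    for dst, src in enumerate(perm):
        out[src] = data[dst]
    return out

data = list(range(8))
perm = logistic_permutation(len(data))
assert sorted(perm) == data                         # a genuine permutation
assert invert_perm(apply_perm(data, perm), perm) == data   # decryption round-trips
```

The round-trip property is what makes permutation-based scrambling lossless: the ciphertext contains exactly the original samples, only rearranged.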

  3. Images as embedding maps and minimal surfaces: Movies, color, and volumetric medical images

    SciTech Connect

    Kimmel, R.; Malladi, R.; Sochen, N.

    1997-02-01

    A general geometrical framework for image processing is presented. The authors consider intensity images as surfaces in the (x,I) space. The image is thereby a two dimensional surface in three dimensional space for gray level images. The new formulation unifies many classical schemes, algorithms, and measures via choices of parameters in a "master" geometrical measure. More important, it is a simple and efficient tool for the design of natural schemes for image enhancement, segmentation, and scale space. Here the authors give the basic motivation and apply the scheme to enhance images. They present the concept of an image as a surface in dimensions higher than the three dimensional intuitive space. This will help them handle movies, color, and volumetric medical images.

  4. Remote sensing image fusion method in CIELab color space using nonsubsampled shearlet transform and pulse coupled neural networks

    NASA Astrophysics Data System (ADS)

    Jin, Xin; Zhou, Dongming; Yao, Shaowen; Nie, Rencan; Yu, Chuanbo; Ding, Tingting

    2016-04-01

    In CIELab color space, we propose a remote sensing image fusion technique based on the nonsubsampled shearlet transform (NSST) and pulse coupled neural networks (PCNN), which aims to improve the efficiency and performance of remote sensing image fusion by combining the excellent properties of the two methods. First, the panchromatic (PAN) and multispectral (MS) images are transformed into CIELab color space to get the different color components. Second, the PAN image and the L component of MS are decomposed by the NSST to obtain the corresponding low-frequency and high-frequency coefficients. Third, the low-frequency coefficients are fused by an intersecting cortical model (ICM); the high-frequency coefficients are divided into several sub-blocks to calculate the average gradient (AG), and the linking strength β of the PCNN model is determined by the AG, so that β can be adaptively set according to the quality of the sub-block images; the sub-block images are then input into the PCNN to get the oscillation frequency graph (OFG), from which the fused high-frequency coefficients are obtained. Finally, the fused L component is obtained by inverse NSST, and the fused RGB color image is obtained through the inverse CIELab transform. The experimental results demonstrate that the proposed method provides better results compared with other common methods.

  5. Single camera imaging system for color and near-infrared fluorescence image guided surgery

    PubMed Central

    Chen, Zhenyue; Zhu, Nan; Pacheco, Shaun; Wang, Xia; Liang, Rongguang

    2014-01-01

    Near-infrared (NIR) fluorescence imaging systems have been developed for image guided surgery in recent years. However, current systems are typically bulky and work only when surgical light in the operating room (OR) is off. We propose a single camera imaging system that is capable of capturing NIR fluorescence and color images under normal surgical lighting illumination. Using a new RGB-NIR sensor and synchronized NIR excitation illumination, we have demonstrated that the system can acquire both color information and fluorescence signal with high sensitivity under normal surgical lighting illumination. The experimental results show that ICG sample with concentration of 0.13 μM can be detected when the excitation irradiance is 3.92 mW/cm2 at an exposure time of 10 ms. PMID:25136502

  6. Toward a unified color space for perception-based image processing.

    PubMed

    Lissner, Ingmar; Urban, Philipp

    2012-03-01

    Image processing methods that utilize characteristics of the human visual system require color spaces with certain properties to operate effectively. After analyzing different types of perception-based image processing problems, we present a list of properties that a unified color space should have. Due to contradictory perceptual phenomena and geometric issues, a color space cannot incorporate all these properties. We therefore identify the most important properties and focus on creating opponent color spaces without cross contamination between color attributes (i.e., lightness, chroma, and hue) and with maximum perceptual uniformity induced by color-difference formulas. Color lookup tables define simple transformations from an initial color space to the new spaces. We calculate such tables using multigrid optimization considering the Hung and Berns data of constant perceived hue and the CMC, CIE94, and CIEDE2000 color-difference formulas. The resulting color spaces exhibit low cross contamination between color attributes and are only slightly less perceptually uniform than spaces optimized exclusively for perceptual uniformity. We compare the CIEDE2000-based space with commonly used color spaces in two examples of perception-based image processing. In both cases, standard methods show improved results if the new space is used. All color-space transformations and examples are provided as MATLAB codes on our website. PMID:21824846

  7. An investigation on the intra-sample distribution of cotton color by using image analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The colorimeter principle is widely used to measure cotton color. This method provides the sample’s color grade; but the result does not include information about the color distribution and any variation within the sample. We conducted an investigation that used image analysis method to study the ...

  8. Comparison of color image segmentations for lane following

    NASA Astrophysics Data System (ADS)

    Sandt, Frederic; Aubert, Didier

    1993-05-01

    For ten years, unstructured road following has been the subject of many studies. Road following must support the automatic navigation, at reasonable speed, of mobile robots on irregular paths and roads, with inhomogeneous surfaces and under variable lighting conditions. Civil and military applications of this technology include transportation, logistics, security and engineering. The definition of our lane following system requires an evaluation of the existing technologies. Although the various operational systems converge on color perception and region segmentation, optimizing discrimination and stability respectively, the treatments and performances vary. In this paper, the robustness of four operational systems and two related techniques is compared according to common evaluation criteria. We identify typical situations which constitute a basis for the realization of an image database. We describe the process of experimentation conceived for the comparative analysis of performances. The analytical results are useful for inferring a few optimal combinations of techniques driven by the situations, and for defining the present limits of color perception's validity.

  9. Colorization and Automated Segmentation of Human T2 MR Brain Images for Characterization of Soft Tissues

    PubMed Central

    Attique, Muhammad; Gilanie, Ghulam; Hafeez-Ullah; Mehmood, Malik S.; Naweed, Muhammad S.; Ikram, Masroor; Kamran, Javed A.; Vitkin, Alex

    2012-01-01

    Characterization of tissues like brain by using magnetic resonance (MR) images and colorization of the gray scale image has been reported in the literature, along with the advantages and drawbacks. Here, we present two independent methods: (i) a novel colorization method to underscore the variability in brain MR images, indicative of the underlying physical density of bio tissue, and (ii) a segmentation method (both hard and soft segmentation) to characterize gray brain MR images. The segmented images are then transformed into color using the above-mentioned colorization method, yielding promising results for manual tracing. Our color transformation incorporates the voxel classification by matching the luminance of voxels of the source MR image and the provided color image by measuring the distance between them. The segmentation method is based on single-phase clustering for 2D and 3D image segmentation, with a new auto centroid selection method, which divides the image into three distinct regions (gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF)) using prior anatomical knowledge. Results have been successfully validated on human T2-weighted (T2) brain MR images. The proposed method can be potentially applied to gray-scale images from other imaging modalities, bringing out additional diagnostic tissue information contained in the colorized image processing approach as described. PMID:22479421

  10. Color-coded LED microscopy for multi-contrast and quantitative phase-gradient imaging.

    PubMed

    Lee, Donghak; Ryu, Suho; Kim, Uihan; Jung, Daeseong; Joo, Chulmin

    2015-12-01

    We present a multi-contrast microscope based on color-coded illumination and computation. A programmable three-color light-emitting diode (LED) array illuminates a specimen, in which each color corresponds to a different illumination angle. A single color image sensor records light transmitted through the specimen, and images at each color channel are then separated and utilized to obtain bright-field, dark-field, and differential phase contrast (DPC) images simultaneously. Quantitative phase imaging is also achieved based on DPC images acquired with two different LED illumination patterns. The multi-contrast and quantitative phase imaging capabilities of our method are demonstrated by presenting images of various transparent biological samples. PMID:26713205
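    DPC contrast is conventionally computed as the normalized difference of two images taken under complementary (e.g. left/right half-circle) illumination. The sketch below assumes the standard (I_A - I_B)/(I_A + I_B) formula, which the abstract does not spell out.

```python
def dpc_contrast(i_a, i_b):
    """Differential phase contrast sketch: per-pixel normalized difference of two
    images acquired under complementary illumination halves,
    (I_A - I_B) / (I_A + I_B). Sign encodes the direction of the phase gradient."""
    return [(a - b) / (a + b) for a, b in zip(i_a, i_b)]

# A symmetric (flat-phase) pixel gives zero; an asymmetric one gives a signed value.
assert dpc_contrast([1.0, 2.0], [1.0, 1.0]) == [0.0, 1.0 / 3.0]
```

Dividing by the sum normalizes out the sample's absorption, which is why DPC isolates the phase-gradient information that bright-field alone cannot show.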

  11. Color-coded LED microscopy for multi-contrast and quantitative phase-gradient imaging

    PubMed Central

    Lee, Donghak; Ryu, Suho; Kim, Uihan; Jung, Daeseong; Joo, Chulmin

    2015-01-01

    We present a multi-contrast microscope based on color-coded illumination and computation. A programmable three-color light-emitting diode (LED) array illuminates a specimen, in which each color corresponds to a different illumination angle. A single color image sensor records light transmitted through the specimen, and images at each color channel are then separated and utilized to obtain bright-field, dark-field, and differential phase contrast (DPC) images simultaneously. Quantitative phase imaging is also achieved based on DPC images acquired with two different LED illumination patterns. The multi-contrast and quantitative phase imaging capabilities of our method are demonstrated by presenting images of various transparent biological samples. PMID:26713205

  12. [Image Feature Extraction and Discriminant Analysis of Xinjiang Uygur Medicine Based on Color Histogram].

    PubMed

    Hamit, Murat; Yun, Weikang; Yan, Chuanbo; Kutluk, Abdugheni; Fang, Yang; Alip, Elzat

    2015-06-01

    Image feature extraction is an important part of image processing and an important field of research and application of image processing technology. Uygur medicine is one of the traditional Chinese medicines, and researchers are paying increasing attention to it. But large amounts of Uygur medicine data have not been fully utilized. In this study, we extracted the image color histogram features of herbal and zooid medicines of Xinjiang Uygur. First, we did preprocessing, including image color enhancement, size normalization and color space transformation. Then we extracted the color histogram features and analyzed them with statistical methods. Finally, we evaluated the classification ability of the features by Bayes discriminant analysis. Experimental results showed that high accuracy for Uygur medicine image classification was obtained by using the color histogram feature. This study should be of help for content-based medical image retrieval of Xinjiang Uygur medicine. PMID:26485983
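    A generic color-histogram feature of the kind described, with each channel quantized into a few bins and joint occurrences counted, can be sketched as follows; this is an illustration of the feature class, not the paper's exact preprocessing or binning.

```python
def color_histogram(pixels, bins=4):
    """Color-histogram feature sketch: quantize each 8-bit RGB channel into `bins`
    levels and count joint occurrences, giving a bins**3-dimensional vector
    usable as input to a classifier such as Bayes discriminant analysis."""
    hist = [0] * (bins ** 3)
    step = 256 // bins                      # assumes bins divides 256 evenly
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
    n = len(pixels)
    return [h / n for h in hist]            # normalize for image-size invariance

hist = color_histogram([(0, 0, 0), (255, 255, 255), (255, 255, 255), (0, 0, 0)])
assert hist[0] == 0.5 and hist[-1] == 0.5
assert abs(sum(hist) - 1.0) < 1e-9
```

Normalizing by pixel count is what lets histograms from images of different sizes be compared in the same feature space.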

  13. Biological versus electronic adaptive coloration: how can one inform the other?

    PubMed Central

    Kreit, Eric; Mäthger, Lydia M.; Hanlon, Roger T.; Dennis, Patrick B.; Naik, Rajesh R.; Forsythe, Eric; Heikenfeld, Jason

    2013-01-01

    Adaptive reflective surfaces have been a challenge for both electronic paper (e-paper) and biological organisms. Multiple colours, contrast, polarization, reflectance, diffusivity and texture must all be controlled simultaneously without optical losses in order to fully replicate the appearance of natural surfaces and vividly communicate information. This review merges the frontiers of knowledge for both biological adaptive coloration, with a focus on cephalopods, and synthetic reflective e-paper within a consistent framework of scientific metrics. Currently, the highest performance approach for both nature and technology uses colourant transposition. Three outcomes are envisioned from this review: reflective display engineers may gain new insights from millions of years of natural selection and evolution; biologists will benefit from understanding the types of mechanisms, characterization and metrics used in synthetic reflective e-paper; all scientists will gain a clearer picture of the long-term prospects for capabilities such as adaptive concealment and signalling. PMID:23015522

  14. Color Index Imaging of the Stellar Stream Around NGC 5907

    NASA Astrophysics Data System (ADS)

    Laine, Seppo; Grillmair, Carl J.; Martinez-Delgado, David; Romanowsky, Aaron J.; Capak, Peter; Arendt, Richard G.; Ashby, Matthew; Davies, James E.; Majewski, Steven R.; GaBany, R. Jay

    2015-01-01

    We have obtained deep g, r, and i-band Subaru and ultra-deep 3.6 micron Spitzer/IRAC images of parts of the stellar stream around the nearby edge-on disk galaxy NGC 5907. We report on the color index distribution of the resolved emission along the stream, and indicators of recent star formation associated with the stream. We present scenarios regarding the nature of the disrupted satellite galaxy, based on our data. This work is based in part on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. This work is based in part on data collected with the Subaru Telescope, which is operated by the National Astronomical Observatory of Japan. Support for this work was provided by NASA through an award issued by JPL/Caltech.

  15. Color Index Imaging of the Stellar Stream Around NGC 5907

    NASA Astrophysics Data System (ADS)

    Laine, Seppo; Grillmair, Carl J.; Martinez-Delgado, David; Romanowsky, Aaron; Capak, Peter; Arendt, Richard G.; Ashby, M. L. N.; Davies, James; Majewski, Steven; GaBany, R. Jay

    2015-08-01

    We have obtained deep g, r, and i-band Subaru and ultra-deep 3.6 micron Spitzer/IRAC images of parts of the spectacular, multiply-looped stellar stream around the nearby edge-on disk galaxy NGC 5907. We report on the color index distribution of the integrated starlight and the derived stellar populations along the stream. We present scenarios regarding the nature of the disrupted satellite galaxy, based on our data. This work is based in part on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. This work is based in part on data collected with the Subaru Telescope, which is operated by the National Astronomical Observatory of Japan. Support for this work was provided by NASA through an award issued by JPL/Caltech.

  16. Cloud screening Coastal Zone Color Scanner images using channel 5

    NASA Technical Reports Server (NTRS)

    Eckstein, B. A.; Simpson, J. J.

    1991-01-01

    Clouds are removed from Coastal Zone Color Scanner (CZCS) data using channel 5. Instrumentation problems require pre-processing of channel 5 before an intelligent cloud-screening algorithm can be used. For example, at intervals of about 16 lines, the sensor records anomalously low radiances. Moreover, the calibration equation yields negative radiances when the sensor records zero counts, and pixels corrupted by electronic overshoot must also be excluded. The remaining pixels may then be used in conjunction with the procedure of Simpson and Humphrey to determine the CZCS cloud mask. These results plus in situ observations of phytoplankton pigment concentration show that pre-processing and proper cloud-screening of CZCS data are necessary for accurate satellite-derived pigment concentrations. This is especially true in the coastal margins, where pigment content is high and image distortion associated with electronic overshoot is also present. The pre-processing algorithm is critical to obtaining accurate global estimates of pigment from spacecraft data.
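The pre-processing steps described above (excluding negative calibrated radiances and periodically corrupted scan lines) can be sketched as a masking pass; the thresholds and the line-anomaly test below are illustrative assumptions, not the published CZCS values.

```python
import numpy as np

def czcs_channel5_mask(radiance, line_period=16):
    """Sketch of CZCS channel-5 pre-processing before cloud screening.

    Flags pixels that must be excluded:
      * negative radiances produced by the calibration at zero counts,
      * scan lines with anomalously low radiance (the sensor records
        low values at intervals of about `line_period` lines).
    Thresholds here are illustrative, not the published values.
    """
    valid = radiance > 0.0  # drop negative calibrated radiances

    # Flag scan lines whose mean is anomalously low versus the scene median.
    line_mean = np.nanmean(np.where(valid, radiance, np.nan), axis=1)
    overall = np.nanmedian(line_mean)
    bad_lines = line_mean < 0.5 * overall  # illustrative threshold
    valid[bad_lines, :] = False
    return valid
```

Only pixels surviving this mask would then be passed to the cloud-screening procedure proper.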

  17. Lifting-based reversible color transformations for image compression

    NASA Astrophysics Data System (ADS)

    Malvar, Henrique S.; Sullivan, Gary J.; Srinivasan, Sridhar

    2008-08-01

This paper reviews a set of color spaces that allow reversible mapping between red-green-blue and luma-chroma representations in integer arithmetic. The YCoCg transform and its reversible form YCoCg-R can improve coding gain by over 0.5 dB with respect to the popular YCbCr transform, while achieving much lower computational complexity. We also present extensions of the YCoCg transform for four-channel CMYK pixel data. Thanks to their reversibility under integer arithmetic, these transforms are useful for both lossy and lossless compression. Versions of these transforms are used in the HD Photo image coding technology (which is the basis for the upcoming JPEG XR standard) and in recent editions of the H.264/MPEG-4 AVC video coding standard.
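The reversible YCoCg-R transform mentioned above can be written as four lifting steps in integer arithmetic; this is the standard published lifting form, not code from the paper itself:

```python
def rgb_to_ycocg_r(r, g, b):
    """Forward YCoCg-R: four integer lifting steps, exactly invertible."""
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    y = t + (cg >> 1)
    return y, co, cg

def ycocg_r_to_rgb(y, co, cg):
    """Exact integer inverse: undo the lifting steps in reverse order."""
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b
```

Because the inverse reuses the same shifted values, the round trip is lossless for any integer inputs, which is what makes the transform usable for lossless compression.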

  18. Visualization of multivariate image data using image fusion and perceptually optimized color scales based on sRGB

    NASA Astrophysics Data System (ADS)

    Saalbach, Axel; Twellmann, Thorsten; Nattkemper, Tim; White, Mark; Khazen, Michael; Leach, Martin O.

    2004-05-01

    Due to the rapid progress in medical imaging technology, analysis of multivariate image data is receiving increased interest. However, their visual exploration is a challenging task since it requires the integration of information from many different sources which usually cannot be perceived at once by an observer. Image fusion techniques are commonly used to obtain information from multivariate image data, while psychophysical aspects of data visualization are usually not considered. Visualization is typically achieved by means of device derived color scales. With respect to psychophysical aspects of visualization, more sophisticated color mapping techniques based on device independent (and perceptually uniform) color spaces like CIELUV have been proposed. Nevertheless, the benefit of these techniques is limited by the fact that they require complex color space transformations to account for device characteristics and viewing conditions. In this paper we present a new framework for the visualization of multivariate image data using image fusion and color mapping techniques. In order to overcome problems of consistent image presentations and color space transformations, we propose perceptually optimized color scales based on CIELUV in combination with sRGB (IEC 61966-2-1) color specification. In contrast to color definitions based purely on CIELUV, sRGB data can be used directly under reasonable conditions, without complex transformations and additional information. In the experimental section we demonstrate the advantages of our approach in an application of these techniques to the visualization of DCE-MRI images from breast cancer research.

  19. Image mosaicking based on feature points using color-invariant values

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Chang; Kwon, Oh-Seol; Ko, Kyung-Woo; Lee, Ho-Young; Ha, Yeong-Ho

    2008-02-01

    In the field of computer vision, image mosaicking is achieved using image features, such as textures, colors, and shapes between corresponding images, or local descriptors representing neighborhoods of feature points extracted from corresponding images. However, image mosaicking based on feature points has attracted more recent attention due to the simplicity of the geometric transformation, regardless of distortion and differences in intensity generated by camera motion in consecutive images. Yet, since most feature-point matching algorithms extract feature points using gray values, identifying corresponding points becomes difficult in the case of changing illumination and images with a similar intensity. Accordingly, to solve these problems, this paper proposes a method of image mosaicking based on feature points using color information of images. Essentially, the digital values acquired from a real digital color camera are converted to values of a virtual camera with distinct narrow bands. Values based on the surface reflectance and invariant to the chromaticity of various illuminations are then derived from the virtual camera values and defined as color-invariant values invariant to changing illuminations. The validity of these color-invariant values is verified in a test using a Macbeth Color-Checker under simulated illuminations. The test also compares the proposed method using the color-invariant values with the conventional SIFT algorithm. The accuracy of the matching between the feature points extracted using the proposed method is increased, while image mosaicking using color information is also achieved.

  20. Application of digital color image analysis for colorimetric quality evaluation of surface defects on paint coatings

    NASA Astrophysics Data System (ADS)

    Steckert, Carsten; Witt, Klaus

    2000-12-01

A method for the quality management of paint producers was developed that allows an objective description of inhomogeneous fading of paint coatings after free weathering, using relevant metric quantities such as color contrast, gradient of color contrast, and geometric features of the inhomogeneous structures. These can be quantified with digital color image analysis. The first step in applying this technique is a systematic investigation of the color transformation properties specific to the selected input/output devices used for digital imaging. To build a color management system, mathematical models of the color transformation processes were optimized and embedded in a commercial color image analysis software package. The required metric parameters, which evaluate the damage on the coated surfaces, must be derived so as to agree as closely as possible with experts' visual judgements of damage categorization. 150 samples of paint coatings after weathering were selected to investigate this correlation.

  1. Joint sparse coding based spatial pyramid matching for classification of color medical image.

    PubMed

    Shi, Jun; Li, Yi; Zhu, Jie; Sun, Haojie; Cai, Yin

    2015-04-01

    Although color medical images are important in clinical practice, they are usually converted to grayscale for further processing in pattern recognition, resulting in loss of rich color information. The sparse coding based linear spatial pyramid matching (ScSPM) and its variants are popular for grayscale image classification, but cannot extract color information. In this paper, we propose a joint sparse coding based SPM (JScSPM) method for the classification of color medical images. A joint dictionary can represent both the color information in each color channel and the correlation between channels. Consequently, the joint sparse codes calculated from a joint dictionary can carry color information, and therefore this method can easily transform a feature descriptor originally designed for grayscale images to a color descriptor. A color hepatocellular carcinoma histological image dataset was used to evaluate the performance of the proposed JScSPM algorithm. Experimental results show that JScSPM provides significant improvements as compared with the majority voting based ScSPM and the original ScSPM for color medical image classification. PMID:24976104

  2. Best Color Image of Jupiter's Little Red Spot

    NASA Technical Reports Server (NTRS)

    2007-01-01

    This amazing color portrait of Jupiter's 'Little Red Spot' (LRS) combines high-resolution images from the New Horizons Long Range Reconnaissance Imager (LORRI), taken at 03:12 UT on February 27, 2007, with color images taken nearly simultaneously by the Wide Field Planetary Camera 2 (WFPC2) on the Hubble Space Telescope. The LORRI images provide details as fine as 9 miles across (15 kilometers), which is approximately 10 times better than Hubble can provide on its own. The improved resolution is possible because New Horizons was only 1.9 million miles (3 million kilometers) away from Jupiter when LORRI snapped its pictures, while Hubble was more than 500 million miles (800 million kilometers) away from the Gas Giant planet.

    The Little Red Spot is the second largest storm on Jupiter, roughly 70% the size of the Earth, and it started turning red in late-2005. The clouds in the Little Red Spot rotate counterclockwise, or in the anticyclonic direction, because it is a high-pressure region. In that sense, the Little Red Spot is the opposite of a hurricane on Earth, which is a low-pressure region - and, of course, the Little Red Spot is far larger than any hurricane on Earth.

    Scientists don't know exactly how or why the Little Red Spot turned red, though they speculate that the change could stem from a surge of exotic compounds from deep within Jupiter, caused by an intensification of the storm system. In particular, sulfur-bearing cloud droplets might have been propelled about 50 kilometers into the upper level of ammonia clouds, where brighter sunlight bathing the cloud tops released the red-hued sulfur embedded in the droplets, causing the storm to turn red. A similar mechanism has been proposed for the Little Red Spot's 'older brother,' the Great Red Spot, a massive energetic storm system that has persisted for over a century.

    New Horizons is providing an opportunity to examine an 'infant' red storm system in detail, which may help scientists

  3. Do common mechanisms of adaptation mediate color discrimination and appearance? Uniform backgrounds

    PubMed Central

    Hillis, James M.; Brainard, David H.

    2007-01-01

    Color vision is useful for detecting surface boundaries and identifying objects. Are the signals used to perform these two functions processed by common mechanisms, or has the visual system optimized its processing separately for each task? We measured the effect of mean chromaticity and luminance on color discriminability and on color appearance under well-matched stimulus conditions. In the discrimination experiments, a pedestal spot was presented in one interval and a pedestal + test in a second. Observers indicated which interval contained the test. In the appearance experiments, observers matched the appearance of test spots across a change in background. We analyzed the data using a variant of Fechner's proposal, that the rate of apparent stimulus change is proportional to visual sensitivity. We found that saturating visual response functions together with a model of adaptation that included multiplicative gain control and a subtractive term accounted for data from both tasks. This result suggests that effects of the contexts we studied on color appearance and discriminability are controlled by the same underlying mechanism. PMID:16277280
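The model class described in the abstract, a saturating response function preceded by multiplicative gain control and a subtractive term, can be sketched with a Naka-Rushton-style nonlinearity; the specific functional form and parameter values below are illustrative assumptions, not the fitted model from the paper.

```python
import numpy as np

def adapted_response(stimulus, gain, subtractive, semisat=1.0, n=2.0):
    """Saturating response with multiplicative gain and a subtractive term.

    A Naka-Rushton-style sketch: the adapted drive is g*s - b (rectified),
    passed through a saturating nonlinearity. Under the Fechnerian reading
    used in the paper, discriminability tracks the local slope of this
    function, while appearance matches equate its output across contexts.
    """
    drive = np.maximum(gain * stimulus - subtractive, 0.0)
    return drive ** n / (drive ** n + semisat ** n)
```

Reducing `gain` or increasing `subtractive` (as adaptation to a brighter or more chromatic background would) shifts the operating range of the response, which simultaneously predicts changes in both matching and discrimination data.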

  4. Do common mechanisms of adaptation mediate color discrimination and appearance? Uniform backgrounds

    NASA Astrophysics Data System (ADS)

    Hillis, James M.; Brainard, David H.

    2005-10-01

    Color vision is useful for detecting surface boundaries and identifying objects. Are the signals used to perform these two functions processed by common mechanisms, or has the visual system optimized its processing separately for each task? We measured the effect of mean chromaticity and luminance on color discriminability and on color appearance under well-matched stimulus conditions. In the discrimination experiments, a pedestal spot was presented in one interval and a pedestal + test in a second. Observers indicated which interval contained the test. In the appearance experiments, observers matched the appearance of test spots across a change in background. We analyzed the data using a variant of Fechner's proposal, that the rate of apparent stimulus change is proportional to visual sensitivity. We found that saturating visual response functions together with a model of adaptation that included multiplicative gain control and a subtractive term accounted for data from both tasks. This result suggests that effects of the contexts we studied on color appearance and discriminability are controlled by the same underlying mechanism.

  5. Color enhancement and image defogging in HSI based on Retinex model

    NASA Astrophysics Data System (ADS)

    Gao, Han; Wei, Ping; Ke, Jun

    2015-08-01

Retinex is a luminance perception algorithm based on color constancy, and it performs well for color enhancement. In some cases, however, the traditional Retinex algorithms, both Single-Scale Retinex (SSR) and Multi-Scale Retinex (MSR) in RGB color space, do not work well and cause color deviation. To solve this problem, we present improved SSR and MSR algorithms. In contrast to other Retinex algorithms, we implement Retinex in HSI (Hue, Saturation, Intensity) color space and use a parameter α to improve image quality. Moreover, the algorithms presented in this paper perform well for image defogging. Unlike traditional Retinex algorithms, we use the intensity channel to obtain the reflection information of an image. The intensity channel is processed with a Gaussian center-surround filter to estimate the illumination, which should be removed from the intensity channel. We then subtract the illumination estimate from the intensity channel to obtain the reflection image, which contains only the attributes of the objects in the image. Using the reflection image and the parameter α, an arbitrary scale factor set manually, we improve the intensity channel and complete the color enhancement. Our experiments show that this approach works well compared with existing color enhancement methods. Besides better handling of color deviation and image defogging, a visible improvement in image quality for human contrast perception is also observed.
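The pipeline described in this abstract (Gaussian center-surround estimate of illumination subtracted from the intensity channel in log space, scaled by α) can be sketched as follows; the normalization step and the way α is applied are our assumptions, not the paper's exact procedure.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    radius = radius or int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def ssr_intensity(rgb, sigma=15.0, alpha=1.0):
    """Single-Scale Retinex on the intensity channel (sketch).

    Estimates illumination with a separable Gaussian center-surround
    filter, removes it from the intensity channel in log space, and
    rescales the RGB channels by the enhanced/original intensity ratio.
    `alpha` is the manually set scale factor; its exact role here is an
    illustrative assumption.
    """
    rgb = rgb.astype(np.float64) + 1.0
    i = rgb.mean(axis=2)  # intensity channel of HSI
    k = gaussian_kernel(sigma)
    blur = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, i)
    blur = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, blur)
    refl = alpha * (np.log(i) - np.log(blur + 1.0))     # remove illumination estimate
    refl = (refl - refl.min()) / (np.ptp(refl) + 1e-12)  # stretch to [0, 1]
    # Scale R, G, B by the same per-pixel ratio so hue is untouched.
    return np.clip(rgb * (refl * 255.0 / i)[..., None], 0, 255).astype(np.uint8)
```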

  6. Adaptive Optics Imaging and Spectroscopy of Neptune

    NASA Technical Reports Server (NTRS)

    Johnson, Lindley (Technical Monitor); Sromovsky, Lawrence A.

    2005-01-01

OBJECTIVES: We proposed to use high spectral resolution imaging and spectroscopy of Neptune in the visible and near-IR spectral ranges to advance our understanding of Neptune's cloud structure. We intended to use the adaptive optics (AO) system at Mt. Wilson at visible wavelengths to try to obtain the first ground-based observations of dark spots on Neptune; we intended to use AO observations at the IRTF to obtain near-IR R=2000 spatially resolved spectra, and near-IR AO observations at the Keck observatory to obtain the highest spatial resolution studies of cloud feature dynamics and atmospheric motions. Vertical structure of cloud features was to be inferred from the wavelength-dependent absorption of methane and hydrogen,

  7. On independent color space transformations for the compression of CMYK images.

    PubMed

    de Queiroz, R L

    1999-01-01

    Device and image-independent color space transformations for the compression of CMYK images were studied. A new transformation (to a YYCC color space) was developed and compared to known ones. Several tests were conducted leading to interesting conclusions. Among them, color transformations are not always advantageous over independent compression of CMYK color planes. Another interesting conclusion is that chrominance subsampling is rarely advantageous in this context. Also, it is shown that transformation to YYCC consistently outperforms the transformation to YCbCrK, while being competitive with the image-dependent KLT-based approach. PMID:18267416

  8. Probing the functions of contextual modulation by adapting images rather than observers.

    PubMed

    Webster, Michael A

    2014-11-01

    Countless visual aftereffects have illustrated how visual sensitivity and perception can be biased by adaptation to the recent temporal context. This contextual modulation has been proposed to serve a variety of functions, but the actual benefits of adaptation remain uncertain. We describe an approach we have recently developed for exploring these benefits by adapting images instead of observers, to simulate how images should appear under theoretically optimal states of adaptation. This allows the long-term consequences of adaptation to be evaluated in ways that are difficult to probe by adapting observers, and provides a common framework for understanding how visual coding changes when the environment or the observer changes, or for evaluating how the effects of temporal context depend on different models of visual coding or the adaptation processes. The approach is illustrated for the specific case of adaptation to color, for which the initial neural coding and adaptation processes are relatively well understood, but can in principle be applied to examine the consequences of adaptation for any stimulus dimension. A simple calibration that adjusts each neuron's sensitivity according to the stimulus level it is exposed to is sufficient to normalize visual coding and generate a host of benefits, from increased efficiency to perceptual constancy to enhanced discrimination. This temporal normalization may also provide an important precursor for the effective operation of contextual mechanisms operating across space or feature dimensions. To the extent that the effects of adaptation can be predicted, images from new environments could be "pre-adapted" to match them to the observer, eliminating the need for observers to adapt. PMID:25281412
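The "simple calibration that adjusts each neuron's sensitivity according to the stimulus level it is exposed to" can be illustrated with a von-Kries-style gain per channel, applied to the image rather than the observer; using RGB channels instead of cone or opponent responses is a simplifying assumption here.

```python
import numpy as np

def adapt_image(img, target_mean=0.5):
    """'Adapt the image instead of the observer' (sketch).

    Applies a multiplicative gain to each channel so that its mean maps
    to the target level, simulating a theoretically complete state of
    multiplicative adaptation to the scene's average. Operating on RGB
    rather than cone responses is a simplifying assumption.
    """
    img = img.astype(np.float64)
    gains = target_mean / np.maximum(img.mean(axis=(0, 1)), 1e-6)
    return np.clip(img * gains, 0.0, 1.0)
```

An image from a new environment processed this way is "pre-adapted": its channel statistics match what a fully adapted observer's normalized coding would produce.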

  9. Quantitative Evaluation of Surface Color of Tomato Fruits Cultivated in Remote Farm Using Digital Camera Images

    NASA Astrophysics Data System (ADS)

    Hashimoto, Atsushi; Suehara, Ken-Ichiro; Kameoka, Takaharu

To measure quantitative surface color information of agricultural products together with ambient information during cultivation, a color calibration method for digital camera images and a Web-based remote color-imaging monitoring system were developed. Single-lens reflex and web digital cameras were used for image acquisition. Images of tomatoes through the post-ripening process were taken by the digital camera both in the standard image acquisition system and under field conditions from morning to evening. Several kinds of images were acquired with the standard RGB color chart set up just behind the tomato fruit on a black matte, and a color calibration was carried out. The influence of sunlight could be experimentally eliminated, and the calibrated color information consistently agreed with the standard values acquired in the system through the post-ripening process. Furthermore, the surface color change of tomatoes on the tree in a greenhouse was remotely monitored during maturation using digital cameras equipped with the Field Server. The acquired digital color images were sent from the Farm Station to the BIFE Laboratory of Mie University via VPN. The time behavior of the tomato surface color change during the maturing process could be measured using color parameters calculated from the calibrated color images, along with the ambient atmospheric record. This study is an important step both in developing surface color analysis for simple and rapid evaluation of crop vigor in the field and in constructing an ambient, networked remote monitoring system for food security, precision agriculture, and agricultural research.
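The chart-based calibration step described above can be sketched as a least-squares affine transform fitted to the chart patches; the affine model and the helper names below are illustrative assumptions, not the calibration actually embedded in the monitoring system.

```python
import numpy as np

def fit_color_correction(measured, reference):
    """Fit an affine color-calibration transform from chart patches.

    `measured`:  N x 3 camera RGB values of the chart patches in the scene.
    `reference`: N x 3 known chart values.
    Returns a 4 x 3 matrix mapping [R, G, B, 1] -> corrected RGB.
    """
    A = np.hstack([measured, np.ones((measured.shape[0], 1))])  # affine term
    M, *_ = np.linalg.lstsq(A, reference, rcond=None)
    return M

def apply_color_correction(M, rgb):
    """Apply the fitted transform to N x 3 camera RGB values."""
    A = np.hstack([rgb, np.ones((rgb.shape[0], 1))])
    return A @ M
```

Fitting the transform against the chart placed in the scene is what lets the varying sunlight be factored out of the tomato color measurements.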

  10. Radar Image with Color as Height, Hariharalaya, Cambodia

    NASA Technical Reports Server (NTRS)

    2002-01-01

North is at top. Image brightness is from the C-band (5.6 centimeters, or 2.2 inches wavelength) radar backscatter, which is a measure of how much energy the surface reflects back toward the radar. Color is used to represent elevation contours. One cycle of color--from blue to red to yellow to green and back to blue again--represents 10 meters (32.8 feet) of elevation change.

    AIRSAR flies aboard a NASA DC-8 based at NASA's Dryden Flight Research Center, Edwards, Calif. In the TOPSAR mode, AIRSAR collects radar interferometry data from two spatially separated antennas (2.6 meters, or 8.5 feet). Information from the two antennas is used to form radar backscatter imagery and to generate highly accurate elevation data. Built, operated and managed by JPL, AIRSAR is part of NASA's Earth Science Enterprise program. JPL is a division of the California Institute of Technology in Pasadena.

  11. Radar Image with Color as Height, Nokor Pheas Trapeng, Cambodia

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Nokor Pheas Trapeng is the name of the large black rectangular feature in the center-bottom of this image, acquired by NASA's Airborne Synthetic Aperture Radar (AIRSAR). Its Khmer name translates as 'Tank of the City of Refuge'. The immense tank is a typical structure built by the Khmer for water storage and control, but its size is unusually large. This suggests, as does 'city' in its name, that in ancient times this area was far more prosperous than today.

    A visit to this remote, inaccessible site was made in December 1998. The huge water tank was hardly visible. From the radar data we knew that the tank stretched some 500 meters (1,640 feet) from east to west. However, between all the plants growing on the surface of the water and the trees and other vegetation in the area, the water tank blended with the surrounding topography. Among the vegetation, on the northeast of the tank, were remains of an ancient temple and a spirit shrine. So although far from the temples of Angkor, to the southeast, the ancient water structure is still venerated by the local people.

    The image covers an area approximately 9.5 by 8.7 kilometers (5.9 by 5.4 miles) with a pixel spacing of 5 meters (16.4 feet). North is at top. Image brightness is from the C-band (5.6 centimeters, or 2.2 inches) wavelength radar backscatter, which is a measure of how much energy the surface reflects back toward the radar. Color is used to represent elevation contours. One cycle of color represents 20 meters (65.6 feet) of elevation change; that is, going from blue to red to yellow to green and back to blue again corresponds to 20 meters (65.6 feet) of elevation change.

    AIRSAR flies aboard a NASA DC-8 based at NASA's Dryden Flight Research Center, Edwards, Calif. In the TOPSAR mode, AIRSAR collects radar interferometry data from two spatially separated antennas (2.6 meters, or 8.5 feet). Information from the two antennas is used to form radar backscatter imagery and to generate highly accurate
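The color-as-height encoding used in these AIRSAR images (one full color cycle per fixed elevation interval) can be sketched as a cyclic hue mapping; the exact hue ordering used by JPL is an assumption here.

```python
import colorsys
import numpy as np

def height_to_color(elev_m, cycle_m=20.0):
    """Map elevation to a cyclic hue, as in AIRSAR color-as-height images.

    One full pass through the hue circle corresponds to `cycle_m` meters,
    so the colors repeat every `cycle_m` of elevation change, just as
    blue -> red -> yellow -> green -> blue spans one cycle in the images.
    """
    hue = (np.asarray(elev_m, dtype=float) % cycle_m) / cycle_m
    return np.array([colorsys.hsv_to_rgb(h, 1.0, 1.0) for h in np.atleast_1d(hue)])
```

In a rendered product this hue layer would be modulated by the radar backscatter brightness, which is why flat but rough areas still show texture.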

  12. Radar Image with Color as Height, Sman Teng, Temple, Cambodia

    NASA Technical Reports Server (NTRS)

    2002-01-01

This image of Cambodia's Angkor region, taken by NASA's Airborne Synthetic Aperture Radar (AIRSAR), reveals a temple (upper-right) not depicted on early 19th Century French archeological survey maps and American topographic maps. The temple, known as 'Sman Teng,' was known to the local Khmer people, but had remained unknown to historians due to the remoteness of its location. The temple is thought to date to the 11th Century: the heyday of Angkor. It is an important indicator of the strategic and natural resource contributions of the area northwest of the capital to the urban center of Angkor. Sman Teng, the name designating one of the many types of rice enjoyed by the Khmer, was 'discovered' by a scientist at NASA's Jet Propulsion Laboratory, Pasadena, Calif., working in collaboration with an archaeological expert on the Angkor region. Analysis of this remote area was a true collaboration of archaeology and technology. Locating the temple of Sman Teng required the skills of scientists trained to spot the types of topographic anomalies that only radar can reveal.

    This image, with a pixel spacing of 5 meters (16.4 feet), depicts an area of approximately 5 by 4.7 kilometers (3.1 by 2.9 miles). North is at top. Image brightness is from the P-band (68 centimeters, or 26.8 inches) wavelength radar backscatter, a measure of how much energy the surface reflects back toward the radar. Color is used to represent elevation contours. One cycle of color represents 25 meters (82 feet) of elevation change, so going from blue to red to yellow to green and back to blue again corresponds to 25 meters (82 feet) of elevation change.

    AIRSAR flies aboard a NASA DC-8 based at NASA's Dryden Flight Research Center, Edwards, Calif. In the TOPSAR mode, AIRSAR collects radar interferometry data from two spatially separated antennas (2.6 meters, or 8.5 feet). Information from the two antennas is used to form radar backscatter imagery and to generate highly accurate elevation data

  13. 32-megapixel dual-color CCD imaging system

    NASA Astrophysics Data System (ADS)

    Stubbs, Christopher W.; Marshall, Stuart; Cook, Kenneth H.; Hills, Robert F.; Noonan, Joseph; Akerlof, Carl W.; Alcock, Charles R.; Axelrod, Timothy S.; Bennett, D.; Dagley, K.; Freeman, K. C.; Griest, Kim; Park, Hye-Sook; Perlmutter, Saul; Peterson, Bruce A.; Quinn, Peter J.; Rodgers, A. W.; Sosin, C.; Sutherland, W. J.

    1993-07-01

We have developed an astronomical imaging system that incorporates a total of eight 2048 X 2048 pixel CCDs into two focal planes, to allow simultaneous imaging in two colors. Each focal plane comprises four 'edge-buttable' detector arrays, on custom Kovar mounts. The clocking and bias voltage levels for each CCD are independently adjustable, but all the CCDs are operated synchronously. The sixteen analog outputs (two per chip) are measured at 16 bits with commercially available correlated double sampling A/D converters. The resulting 74 MBytes of data per frame are transferred over fiber optic links into dual-ported VME memory. The total readout time is just over one minute. We obtain read noise ranging from 6.5 e- to 10 e- for the various channels when digitizing at 34 Kpixels/sec, with full well depths (MPP mode) of approximately 100,000 e- per 15 micrometers X 15 micrometers pixel. This instrument is currently being used in a search for gravitational microlensing by compact objects in our Galactic halo, using the newly refurbished 1.3 m telescope at the Mt. Stromlo Observatory, Australia.

  14. A New Human Perception-Based Over-Exposure Detection Method for Color Images

    PubMed Central

    Yoon, Yeo-Jin; Byun, Keun-Yung; Lee, Dae-Hong; Jung, Seung-Won; Ko, Sung-Jea

    2014-01-01

    To correct an over-exposure within an image, the over-exposed region (OER) must first be detected. Detecting the OER accurately has a significant effect on the performance of the over-exposure correction. However, the results of conventional OER detection methods, which generally use the brightness and color information of each pixel, often deviate from the actual OER perceived by the human eye. To overcome this problem, in this paper, we propose a novel method for detecting the perceived OER more accurately. Based on the observation that recognizing the OER in an image is dependent on the saturation sensitivity of the human visual system (HVS), we detect the OER by thresholding the saturation value of each pixel. Here, a function of the proposed method, which is designed based on the results of a subjective evaluation on the saturation sensitivity of the HVS, adaptively determines the saturation threshold value using the color and the perceived brightness of each pixel. Experimental results demonstrate that the proposed method accurately detects the perceived OER, and furthermore, the over-exposure correction can be improved by adopting the proposed OER detection method. PMID:25225876
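The core idea above, thresholding per-pixel saturation with a threshold adapted to perceived brightness, can be sketched as follows; the specific adaptive rule (threshold grows with luma) is our illustrative assumption, not the function fitted from the paper's subjective evaluation.

```python
import numpy as np

def detect_oer(rgb, base_thresh=0.25):
    """Over-exposed-region detection by saturation thresholding (sketch).

    Flags bright, desaturated pixels: over-exposure drives all channels
    toward the maximum, collapsing HSV saturation. The adaptive threshold
    below is an illustrative stand-in for the paper's HVS-derived function.
    """
    rgb = rgb.astype(np.float64) / 255.0
    mx, mn = rgb.max(axis=2), rgb.min(axis=2)
    sat = np.where(mx > 0, (mx - mn) / np.where(mx > 0, mx, 1.0), 0.0)  # HSV saturation
    luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    thresh = base_thresh * luma                 # brighter pixels get a laxer threshold
    return (sat < thresh) & (luma > 0.9)        # bright AND desaturated
```

The resulting boolean mask is the OER input that an over-exposure correction stage would then operate on.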

  15. Hue-preserving local contrast enhancement and illumination compensation for outdoor color images

    NASA Astrophysics Data System (ADS)

    Tektonidis, Marco; Monnin, David; Christnacher, Frank

    2015-10-01

Real-time applications in the field of security and defense use dynamic color camera systems to gain a better understanding of outdoor scenes. Local image processing is required to enhance details and improve visibility in images, and illumination effects must be compensated to reduce lightness and color inconsistencies between images acquired under different illumination conditions. We introduce an automatic hue-preserving local contrast enhancement and illumination compensation approach for outdoor color images. Our approach is based on a shadow-weighted intensity-based Retinex model, which enhances details and compensates the effect of illumination on the lightness of an image. The Retinex model exploits information from a shadow detection approach to reduce lightness halo artifacts at shadow boundaries. We employ a hue-preserving color transformation to obtain a color image based on the original color information. To reduce color inconsistencies between images acquired under different illumination conditions, we process the saturation using a scaling function. The approach has been successfully applied to static and dynamic color image sequences of outdoor scenes, and an experimental comparison with previous Retinex-based approaches has been carried out.
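A common construction for the hue-preserving color transformation mentioned above is to scale all three channels by the same per-pixel intensity ratio; this sketch is our stand-in for the paper's transformation, not its actual implementation.

```python
import numpy as np

def hue_preserving_enhance(rgb, enhance):
    """Apply an intensity-only enhancement without shifting hue (sketch).

    `enhance` maps the intensity channel to its enhanced version (e.g. the
    output of a Retinex step). Multiplying R, G, and B by the same
    per-pixel ratio preserves the channel ratios, and hence the hue.
    """
    rgb = rgb.astype(np.float64)
    i = rgb.mean(axis=2)
    ratio = enhance(i) / np.maximum(i, 1e-6)
    return np.clip(rgb * ratio[..., None], 0.0, 255.0)
```

Because only the common scale changes, any saturation adjustment (such as the paper's scaling function) can then be applied as a separate, independent step.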

  16. ATR/OTR-SY Tank Camera Purge System and in Tank Color Video Imaging System

    SciTech Connect

    Werry, S.M.

    1995-06-06

This procedure will document the satisfactory operation of the 101-SY tank Camera Purge System (CPS) and 101-SY in-tank Color Camera Video Imaging System (CCVIS). Included in the CPS is the nitrogen purging system safety interlock, which shuts down all color video imaging system electronics within the 101-SY tank vapor space upon loss of nitrogen purge pressure.

  17. Going Beyond RGB: How to Create Color Composite Images that Convey the Science

    NASA Astrophysics Data System (ADS)

    Rector, Travis A.; Levay, Z. G.; Frattare, L. M.; English, J.; Pu'uohau-Pummill, K.

    2010-01-01

The quality of modern astronomical data and the agility of current image-processing software enable new ways to visualize data as images. Two developments in particular have led to a fundamental change in how astronomical images may be assembled. First, the availability of high-quality multiwavelength and narrowband data allow for images that do not correspond to the wavelength sensitivity of the human eye, thereby introducing ambiguity in the usage and interpretation of color. Second, many image-processing software packages now use a layering metaphor that allows for any number of astronomical datasets to be combined into a color composite image. Furthermore, any color may be assigned to each dataset, not just red, green or blue. With this technique, images with as many as eight datasets have been produced. Each dataset is intensity scaled and colorized independently, creating an immense parameter space that may be used to assemble the image. Since such images are intended for data visualization, scaling and color schemes must be chosen that best illustrate the science. I will demonstrate how color composite images can be assembled in Photoshop and GIMP. I will also give examples of how color can be effectively used to convey the science of interest.

  18. Radar Image with Color as Height, Lovea, Cambodia

    NASA Technical Reports Server (NTRS)

    2002-01-01

This image of Lovea, Cambodia, was acquired by NASA's Airborne Synthetic Aperture Radar (AIRSAR). Lovea, the roughly circular feature in the middle-right of the image, rises some 5 meters (16.4 feet) above the surrounding terrain. Lovea is larger than many of the other mound sites, with a diameter greater than 300 meters (984.3 feet). However, it is one of a number of such mounds highlighted by the radar imagery. The present-day village of Lovea does not occupy all of the elevated area; however, at the center of the mound is an ancient spirit post honoring the legendary founder of the village. The mound is surrounded by earthworks and has vestiges of additional curvilinear features. Today, as in the past, these features harnessed water during the rainy season and conserved it during the long dry months of the year.

    The village of Lovea located on the mound was established in pre-Khmer times, probably before 500 A.D. In the lower left portion of the image is a large trapeng and square moat. These are good examples of construction during the historical 9th to 14th Century A.D. Khmer period; construction that honored and protected earlier circular villages. This suggests a cultural and technical continuity between prehistoric circular villages and the immense urban site of Angkor. This connection is one of the significant finds generated by NASA's radar imaging of Angkor. It shows that the city of Angkor was a particularly Khmer construction. The temple forms and water management structures of Angkor were the result of pre-existing Khmer beliefs and methods of water management.

    Image dimensions are approximately 6.3 by 4.7 kilometers (3.9 by 2.9 miles). North is at top. Image brightness is from the C-band (5.6 centimeters, or 2.2 inches wavelength) radar backscatter, which is a measure of how much energy the surface reflects back toward the radar. Color is used to represent elevation contours. One cycle of color represents 20 meters (65.6 feet) of elevation change; that is, going

  19. Color imaging of Mars by the High Resolution Imaging Science Experiment (HiRISE)

    USGS Publications Warehouse

    Delamere, W.A.; Tornabene, L.L.; McEwen, A.S.; Becker, K.; Bergstrom, J.W.; Bridges, N.T.; Eliason, E.M.; Gallagher, D.; Herkenhoff, K. E.; Keszthelyi, L.; Mattson, S.; McArthur, G.K.; Mellon, M.T.; Milazzo, M.; Russell, P.S.; Thomas, N.

    2010-01-01

    HiRISE has been producing a large number of scientifically useful color products of Mars and other planetary objects. The three broad spectral bands, coupled with the highly sensitive 14 bit detectors and time delay integration, enable detection of subtle color differences. The very high spatial resolution of HiRISE can augment the mineralogic interpretations based on multispectral (THEMIS) and hyperspectral datasets (TES, OMEGA and CRISM) and thereby enable detailed geologic and stratigraphic interpretations at meter scales. In addition to providing some examples of color images and their interpretation, we describe the processing techniques used to produce them and note some of the minor artifacts in the output. We also provide an example of how HiRISE color products can be effectively used to expand mineral and lithologic mapping provided by CRISM data products that are backed by other spectral datasets. The utility of high quality color data for understanding geologic processes on Mars has been one of the major successes of HiRISE. ?? 2009 Elsevier Inc.

  20. Color Doppler imaging of the retrobulbar vessels in diabetic retinopathy

    PubMed Central

    Walasik-Szemplińska, Dorota

    2014-01-01

Diabetes is a metabolic disease characterized by elevated blood glucose levels due to impaired insulin secretion and activity. Chronic hyperglycemia leads to functional disorders of numerous organs and to their damage. Vascular lesions are among the most common late complications of diabetes. Microangiopathic lesions can be found in the eyeball, kidneys, and nervous system. Macroangiopathy is associated with coronary and peripheral vessels. Diabetic retinopathy is the most common microangiopathic complication, characterized by closure of small retinal blood vessels and their increased permeability. Despite intensive research, the pathomechanism that leads to the development and progression of diabetic retinopathy is not fully understood. The examinations used in assessing diabetic retinopathy usually involve imaging of the vessels in the eyeball and the retina. Therefore, the examinations include: fluorescein angiography, optical coherence tomography of the retina, B-mode ultrasound imaging, perimetry, and digital retinal photography. There are many papers that discuss the correlations between retrobulbar circulation alterations and progression of diabetic retinopathy based on Doppler sonography. Color Doppler imaging is a non-invasive method enabling measurements of blood flow velocities in small vessels of the eyeball. The most frequently assessed vessels include: the ophthalmic artery, which is the first branch of the internal carotid artery, as well as the central retinal vein and artery, and the posterior ciliary arteries. The analysis of hemodynamic alterations in the retrobulbar vessels may deliver important information concerning circulation in diabetes and help to answer the question whether there is a relation between the progression of diabetic retinopathy and the changes observed in blood flow in the vessels of the eyeball. This paper presents an overview of the literature regarding studies on blood flow in the vessels of the eyeball in patients with diabetic

  1. Hyperspectral imaging of cuttlefish camouflage indicates good color match in the eyes of fish predators

    PubMed Central

    Chiao, Chuan-Chin; Wickiser, J. Kenneth; Allen, Justine J.; Genter, Brock; Hanlon, Roger T.

    2011-01-01

    Camouflage is a widespread phenomenon throughout nature and an important antipredator tactic in natural selection. Many visual predators have keen color perception, and thus camouflage patterns should provide some degree of color matching in addition to other visual factors such as pattern, contrast, and texture. Quantifying camouflage effectiveness in the eyes of the predator is a challenge from the perspectives of both biology and optical imaging technology. Here we take advantage of hyperspectral imaging (HSI), which records full-spectrum light data, to simultaneously visualize color match and pattern match in the spectral and the spatial domains, respectively. Cuttlefish can dynamically camouflage themselves on any natural substrate and, despite their colorblindness, produce body patterns that appear to have high-fidelity color matches to the substrate when viewed directly by humans or with RGB images. Live camouflaged cuttlefish on natural backgrounds were imaged using HSI, and subsequent spectral analysis revealed that most reflectance spectra of individual cuttlefish and substrates were similar, rendering the color match possible. Modeling color vision of potential di- and trichromatic fish predators of cuttlefish corroborated the spectral match analysis and demonstrated that camouflaged cuttlefish show good color match as well as pattern match in the eyes of fish predators. These findings (i) indicate the strong potential of HSI technology to enhance studies of biological coloration and (ii) provide supporting evidence that cuttlefish can produce color-coordinated camouflage on natural substrates despite lacking color vision. PMID:21576487
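The predator-vision modeling step above can be illustrated with a minimal receptor quantum-catch computation: each cone class integrates reflectance times illuminant times spectral sensitivity, and animal and substrate patches are compared in receptor space rather than by human color appearance. The spectra and sensitivity curves below are synthetic placeholders, not the study's measurements.

```python
import numpy as np

def cone_catches(reflectance, illuminant, sensitivities):
    """Quantum catch of each cone class for one reflectance spectrum:
    Q_i = sum over wavelengths of R(l) * I(l) * S_i(l). Comparing the
    catch vectors of animal and substrate is how a color match is
    judged 'in the eyes of' a di- or trichromatic fish."""
    return sensitivities @ (reflectance * illuminant)

# toy 31-band spectra (400-700 nm in 10 nm steps) for a dichromat
wl = np.linspace(400.0, 700.0, 31)
illum = np.ones_like(wl)                            # flat illuminant
sens = np.stack([np.exp(-((wl - 450.0) / 40.0) ** 2),   # short-wave cone
                 np.exp(-((wl - 550.0) / 40.0) ** 2)])  # long-wave cone
cuttle = 0.5 + 0.1 * np.sin(wl / 50.0)              # animal reflectance
substrate = cuttle + 0.01                           # near-identical background
dq = cone_catches(cuttle, illum, sens) - cone_catches(substrate, illum, sens)
```

A small catch difference `dq` relative to the catches themselves is the receptor-space signature of a good color match.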

  2. Hyperspectral imaging of cuttlefish camouflage indicates good color match in the eyes of fish predators.

    PubMed

    Chiao, Chuan-Chin; Wickiser, J Kenneth; Allen, Justine J; Genter, Brock; Hanlon, Roger T

    2011-05-31

    Camouflage is a widespread phenomenon throughout nature and an important antipredator tactic in natural selection. Many visual predators have keen color perception, and thus camouflage patterns should provide some degree of color matching in addition to other visual factors such as pattern, contrast, and texture. Quantifying camouflage effectiveness in the eyes of the predator is a challenge from the perspectives of both biology and optical imaging technology. Here we take advantage of hyperspectral imaging (HSI), which records full-spectrum light data, to simultaneously visualize color match and pattern match in the spectral and the spatial domains, respectively. Cuttlefish can dynamically camouflage themselves on any natural substrate and, despite their colorblindness, produce body patterns that appear to have high-fidelity color matches to the substrate when viewed directly by humans or with RGB images. Live camouflaged cuttlefish on natural backgrounds were imaged using HSI, and subsequent spectral analysis revealed that most reflectance spectra of individual cuttlefish and substrates were similar, rendering the color match possible. Modeling color vision of potential di- and trichromatic fish predators of cuttlefish corroborated the spectral match analysis and demonstrated that camouflaged cuttlefish show good color match as well as pattern match in the eyes of fish predators. These findings (i) indicate the strong potential of HSI technology to enhance studies of biological coloration and (ii) provide supporting evidence that cuttlefish can produce color-coordinated camouflage on natural substrates despite lacking color vision. PMID:21576487

  3. Brightness, lightness, and specifying color in high-dynamic-range scenes and images

    NASA Astrophysics Data System (ADS)

    Fairchild, Mark D.; Chen, Ping-Hsu

    2011-01-01

    Traditional color spaces have been widely used in a variety of applications including digital color imaging, color image quality, and color management. These spaces, however, were designed for the domain of color stimuli typically encountered with reflecting objects and image displays of such objects. This means the domain of stimuli with luminance levels from slightly above zero to that of a perfect diffuse white (or display white point). This limits the applicability of such spaces to color problems in HDR imaging. This is caused by their hard intercepts at zero luminance/lightness and by their uncertain applicability for colors brighter than diffuse white. To address HDR applications, two new color spaces were recently proposed, hdr-CIELAB and hdr-IPT. They are based on replacing the power-function nonlinearities in CIELAB and IPT with more physiologically plausible hyperbolic functions optimized to most closely simulate the original color spaces in the diffuse reflecting color domain. This paper presents the formulation of the new models, evaluations using Munsell data in comparison with CIELAB, IPT, and CIECAM02, two sets of lightness-scaling data above diffuse white, and various possible formulations of hdr-CIELAB and hdr-IPT to predict the visual results.
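The core change described above, swapping the cube-root compression for a saturating hyperbolic (Michaelis-Menten-type) response, can be sketched as follows. The constants `semi` and `lmax` are illustrative, not the published hdr-CIELAB fit.

```python
import numpy as np

def lightness_cielab(y):
    """CIELAB-style lightness: cube-root compression with a hard zero
    intercept, intended only for y = Y/Yn between 0 and 1."""
    return 116.0 * np.cbrt(y) - 16.0

def lightness_hyperbolic(y, semi=0.18, lmax=247.0):
    """Michaelis-Menten-style hyperbolic response. Unlike the power
    function it saturates smoothly, so relative luminances above 1
    (brighter than diffuse white) still map to finite, ordered values.
    semi (semisaturation) and lmax are illustrative constants."""
    return lmax * y / (y + semi)

y = np.array([0.05, 0.18, 1.0, 4.0, 100.0])  # spans an HDR scene
l_hdr = lightness_hyperbolic(y)
```

The hyperbolic form stays monotone and bounded over the whole HDR luminance range, which is exactly what the power-function spaces lack above diffuse white.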

  4. Probing the functions of contextual modulation by adapting images rather than observers

    PubMed Central

    Webster, Michael A.

    2014-01-01

    Countless visual aftereffects have illustrated how visual sensitivity and perception can be biased by adaptation to the recent temporal context. This contextual modulation has been proposed to serve a variety of functions, but the actual benefits of adaptation remain uncertain. We describe an approach we have recently developed for exploring these benefits by adapting images instead of observers, to simulate how images should appear under theoretically optimal states of adaptation. This allows the long-term consequences of adaptation to be evaluated in ways that are difficult to probe by adapting observers, and provides a common framework for understanding how visual coding changes when the environment or the observer changes, or for evaluating how the effects of temporal context depend on different models of visual coding or the adaptation processes. The approach is illustrated for the specific case of adaptation to color, for which the initial neural coding and adaptation processes are relatively well understood, but can in principle be applied to examine the consequences of adaptation for any stimulus dimension. A simple calibration that adjusts each neuron’s sensitivity according to the stimulus level it is exposed to is sufficient to normalize visual coding and generate a host of benefits, from increased efficiency to perceptual constancy to enhanced discrimination. This temporal normalization may also provide an important precursor for the effective operation of contextual mechanisms operating across space or feature dimensions. To the extent that the effects of adaptation can be predicted, images from new environments could be “pre-adapted” to match them to the observer, eliminating the need for observers to adapt. PMID:25281412
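In its simplest form, the "calibration that adjusts each neuron's sensitivity according to the stimulus level it is exposed to" reduces to a von Kries-style diagonal gain control, which can be applied to images directly. This is a simplification of the paper's model, which normalizes individual neurons along color-coding dimensions; the channel means below are invented.

```python
import numpy as np

def adapt_image(img, adapt_mean, target_mean):
    """Simulate an optimally adapted observer by adapting the image
    instead: rescale each color channel by a diagonal gain, dividing
    out the mean response of the environment the observer is (or
    should be) adapted to."""
    return np.clip(img * (target_mean / adapt_mean), 0.0, 1.0)

# "pre-adapt" a neutral image for an observer used to a reddish world:
# red responses are attenuated, blue boosted (channel means invented)
img = np.full((2, 2, 3), 0.5)
out = adapt_image(img, adapt_mean=np.array([0.6, 0.5, 0.4]),
                  target_mean=np.array([0.5, 0.5, 0.5]))
```

Rendering images through the gains of a hypothetical adapted observer is what lets the long-term benefits of adaptation be evaluated without waiting for a real observer to adapt.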

  5. A novel false color mapping model-based fusion method of visual and infrared images

    NASA Astrophysics Data System (ADS)

    Qi, Bin; Kun, Gao; Tian, Yue-xin; Zhu, Zhen-yu

    2013-12-01

A fast and efficient image fusion method is presented to generate near-natural colors from panchromatic visual and thermal imaging sensors. First, a set of daytime color reference images is analyzed and a false color mapping principle is proposed based on human visual and emotional habits: object colors should remain invariant under the mapping, differences between the infrared and visual images should be enhanced, and the background color should be consistent with the main scene content. A novel nonlinear color mapping model is then constructed from the geometric average of the input visual and infrared gray values together with a weighted average term. The control parameters of the model are determined from boundary conditions derived from the mapping principle above. Fusion experiments show that the new method achieves a near-natural appearance in the fused image, enhancing color contrast and highlighting bright infrared objects compared with the traditional TNO algorithm. Moreover, it has low computational complexity and is easy to run in real time, making it well suited to nighttime imaging apparatus.
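A hypothetical sketch of how a geometric-average-based mapping of this kind might look (the abstract does not give the paper's actual model or boundary-condition parameters, so the formulas below are invented to match the stated principle): the geometric mean of the two grays carries shared scene structure, and a weighted vis/IR difference is pushed into opposing color channels so IR-hot objects render warm and visually bright areas cool.

```python
import numpy as np

def fuse_false_color(vis, ir, w=0.5):
    """Toy false-color fusion of panchromatic visual and IR grays.
    Equal inputs stay neutral gray (object colors invariant); the
    weighted difference term enhances vis/IR contrast."""
    vis, ir = np.asarray(vis, float), np.asarray(ir, float)
    base = np.sqrt(vis * ir)            # geometric average term
    diff = w * (ir - vis)               # weighted difference term
    return np.stack([np.clip(base + diff, 0, 1),   # warm channel
                     np.clip(base, 0, 1),
                     np.clip(base - diff, 0, 1)],  # cool channel
                    axis=-1)

gray = fuse_false_color(np.full((2, 2), 0.5), np.full((2, 2), 0.5))
warm = fuse_false_color(np.full((2, 2), 0.1), np.full((2, 2), 0.9))
```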

  6. Plasmonics-Based Multifunctional Electrodes for Low-Power-Consumption Compact Color-Image Sensors.

    PubMed

    Lin, Keng-Te; Chen, Hsuen-Li; Lai, Yu-Sheng; Chi, Yi-Min; Chu, Ting-Wei

    2016-03-01

    High pixel density, efficient color splitting, a compact structure, superior quantum efficiency, and low power consumption are all important features for contemporary color-image sensors. In this study, we developed a surface plasmonics-based color-image sensor displaying a high photoelectric response, a microlens-free structure, and a zero-bias working voltage. Our compact sensor comprised only (i) a multifunctional electrode based on a single-layer structured aluminum (Al) film and (ii) an underlying silicon (Si) substrate. This approach significantly simplifies the device structure and fabrication processes; for example, the red, green, and blue color pixels can be prepared simultaneously in a single lithography step. Moreover, such Schottky-based plasmonic electrodes perform multiple functions, including color splitting, optical-to-electrical signal conversion, and photogenerated carrier collection for color-image detection. Our multifunctional, electrode-based device could also avoid the interference phenomenon that degrades the color-splitting spectra found in conventional color-image sensors. Furthermore, the device took advantage of the near-field surface plasmonic effect around the Al-Si junction to enhance the optical absorption of Si, resulting in a significant photoelectric current output even under low-light surroundings and zero bias voltage. These plasmonic Schottky-based color-image devices could convert a photocurrent directly into a photovoltage and provided sufficient voltage output for color-image detection even under a light intensity of only several femtowatts per square micrometer. Unlike conventional color image devices, using voltage as the output signal decreases the area of the periphery read-out circuit because it does not require a current-to-voltage conversion capacitor or its related circuit. Therefore, this strategy has great potential for direct integration with complementary metal-oxide-semiconductor (CMOS)-compatible circuit

  7. Color Image of Death Valley, California from SIR-C

    NASA Technical Reports Server (NTRS)

    1999-01-01

This radar image shows the area of Death Valley, California and the different surface types in the area. Radar is sensitive to surface roughness, with rough areas showing up brighter than smooth areas, which appear dark. This is seen in the contrast between the bright mountains that surround the dark, smooth basins and valleys of Death Valley. The image shows Furnace Creek alluvial fan (green crescent feature) at the far right, and the sand dunes near Stove Pipe Wells at the center. Alluvial fans are gravel deposits that wash down from the mountains over time. Several other alluvial fans (semicircular features) can be seen along the mountain fronts in this image. The dark wrench-shaped feature between Furnace Creek fan and the dunes is a smooth flood-plain which encloses Cottonball Basin. Elevations in the valley range from 70 meters (230 feet) below sea level, the lowest in the United States, to more than 3,300 meters (10,800 feet) above sea level. Scientists are using these radar data to help answer a number of different questions about Earth's geology including how alluvial fans form and change through time in response to climatic changes and earthquakes. The image is centered at 36.629 degrees north latitude, 117.069 degrees west longitude. Colors in the image represent different radar channels as follows: red = L-band horizontally transmitted, horizontally received (LHH); green = L-band horizontally transmitted, vertically received (LHV); and blue = C-band horizontally transmitted, vertically received (CHV).

    SIR-C/X-SAR is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground

  8. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction

    PubMed Central

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-01-01

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed “digital color fusion microscopy” (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available. PMID:27283459
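The fusion idea above, high-resolution luminance from the single-wavelength hologram and chrominance from the color-calibrated phone image, can be sketched without the wavelet machinery as a plain luminance replacement in a YCbCr-like space. (DCFM itself performs the transfer with a wavelet transform; this is a minimal stand-in.)

```python
import numpy as np

# RGB -> YCbCr-like transform (BT.601 full-range, offsets omitted)
M = np.array([[ 0.299,   0.587,   0.114 ],
              [-0.1687, -0.3313,  0.5   ],
              [ 0.5,    -0.4187, -0.0813]])

def fuse(holo_gray, phone_rgb):
    """Keep the fine detail of the high-resolution single-wavelength
    hologram as luminance and borrow chrominance from the color-
    calibrated mobile-phone image."""
    ycc = phone_rgb @ M.T            # to luma + two chroma channels
    ycc[..., 0] = holo_gray          # replace luma with the hologram
    return np.clip(ycc @ np.linalg.inv(M).T, 0.0, 1.0)

# sanity case: a gray (chroma-free) phone image reproduces the hologram
holo = np.array([[0.2, 0.8, 0.5]])       # 1 x 3 grayscale hologram
phone = np.full((1, 3, 3), 0.5)          # flat gray color image
fused = fuse(holo, phone)
```

The wavelet version differs mainly in transferring only the low-frequency chroma while preserving the hologram's high-frequency detail per subband.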

  9. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction.

    PubMed

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-01-01

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed "digital color fusion microscopy" (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available. PMID:27283459

  10. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction

    NASA Astrophysics Data System (ADS)

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-06-01

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed “digital color fusion microscopy” (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available.

  11. A novel color image encryption scheme using alternate chaotic mapping structure

    NASA Astrophysics Data System (ADS)

    Wang, Xingyuan; Zhao, Yuanyuan; Zhang, Huili; Guo, Kang

    2016-07-01

This paper proposes a color image encryption algorithm using an alternate chaotic mapping structure. First, the R, G, and B components are combined into a single matrix. One-dimensional and two-dimensional logistic maps are then used to generate chaotic matrices, and the two maps are iterated alternately to permute the image matrix. At each iteration, an XOR operation encrypts the plain-image matrix, followed by a further transformation to diffuse it. Finally, the encrypted color image is obtained from the confused matrix. Theoretical analysis and experimental results show that the cryptosystem is secure and practical, and well suited to encrypting color images.
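The XOR-diffusion step can be illustrated with a single logistic map driving a keystream. This is a simplification: the paper alternates a 1D and a 2D map and adds a permutation stage, and the `x0`/`r` values below merely stand in for the secret key.

```python
import numpy as np

def logistic_keystream(x0, r, n):
    """Generate n pseudo-random bytes by iterating the logistic map
    x_{k+1} = r * x_k * (1 - x_k) and quantizing each state to 8 bits."""
    x = x0
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def xor_cipher(channel, x0=0.3141, r=3.99):
    """XOR one color channel (uint8 array) with the chaotic keystream.
    Applying the same function twice recovers the original channel."""
    ks = logistic_keystream(x0, r, channel.size).reshape(channel.shape)
    return channel ^ ks

# roundtrip on a small synthetic "R channel"
rng = np.random.default_rng(0)
plain = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
cipher = xor_cipher(plain)
```

Because XOR is its own inverse, decryption reuses the same keystream; security rests entirely on keeping `x0` and `r` secret and on the diffusion/permutation stages the sketch omits.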

  12. Sparse Representation-Based Image Quality Index With Adaptive Sub-Dictionaries.

    PubMed

    Li, Leida; Cai, Hao; Zhang, Yabin; Lin, Weisi; Kot, Alex C; Sun, Xingming

    2016-08-01

Distortions cause structural changes in digital images, leading to degraded visual quality. Dictionary-based sparse representation has been widely studied recently due to its ability to extract inherent image structures. Meanwhile, it can extract image features with slightly higher-level semantics. Intuitively, sparse representation can be used for image quality assessment, because visible distortions cause significant changes to the sparse features. In this paper, a new sparse representation-based image quality assessment model is proposed based on the construction of adaptive sub-dictionaries. An overcomplete dictionary trained on natural images is employed to capture the structural changes between the reference and distorted images by sparse feature extraction via adaptive sub-dictionary selection. Based on the observations that image sparse features are invariant to weak degradations and that perceived image quality is generally influenced by diverse factors, three auxiliary quality features are added: gradient, color, and luminance information. The proposed method is not sensitive to training images, so a universal dictionary can be adopted for quality evaluation. Extensive experiments on five public image quality databases demonstrate that the proposed method produces state-of-the-art results and performs consistently well across different image quality databases. PMID:27295675
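The adaptive sub-dictionary idea can be sketched per patch: atoms are selected by their response to the reference patch, both images are coded over that shared sub-dictionary, and the coefficient vectors are compared. The dictionary, patch size, and cosine score below are illustrative; the full model pools this over many patches and adds the gradient, color, and luminance features.

```python
import numpy as np

def sparse_quality(ref_patch, dist_patch, dictionary, k=4):
    """Pick the k dictionary atoms responding most strongly to the
    *reference* patch (the adaptive sub-dictionary), code both patches
    over those atoms by least squares, and compare the coefficient
    vectors with a cosine score: structural distortion changes the
    distorted patch's coefficients and pulls the score below 1."""
    corr = np.abs(dictionary.T @ ref_patch)
    idx = np.argsort(corr)[-k:]              # adaptive sub-dictionary
    sub = dictionary[:, idx]
    cr, *_ = np.linalg.lstsq(sub, ref_patch, rcond=None)
    cd, *_ = np.linalg.lstsq(sub, dist_patch, rcond=None)
    denom = np.linalg.norm(cr) * np.linalg.norm(cd) + 1e-12
    return float(cr @ cd) / denom

rng = np.random.default_rng(1)
D = rng.normal(size=(16, 64))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
patch = rng.normal(size=16)
noisy = patch + 0.5 * rng.normal(size=16)
```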

  13. Image Watermarking Based on Adaptive Models of Human Visual Perception

    NASA Astrophysics Data System (ADS)

    Khawne, Amnach; Hamamoto, Kazuhiko; Chitsobhuk, Orachat

This paper proposes a digital image watermarking scheme based on adaptive models of human visual perception. The algorithm exploits the local activity estimated from the wavelet coefficients of each subband to adaptively control luminance masking. The adaptive luminance term is then carefully combined with contrast masking and edge detection and adopted as a visibility threshold. With the proposed combination of adaptive visual sensitivity parameters, the perceptual model adapts better to the differing characteristics of various images. The weighting function is chosen so that fidelity, imperceptibility, and robustness are preserved without making any perceptual difference to image quality.

  14. JPEG 2000 coding of image data over adaptive refinement grids

    NASA Astrophysics Data System (ADS)

    Gamito, Manuel N.; Dias, Miguel S.

    2003-06-01

An extension of the JPEG 2000 standard is presented for non-conventional images resulting from an adaptive subdivision process. Samples generated through adaptive subdivision can have different sizes, depending on the amount of subdivision that was locally introduced in each region of the image. The subdivision principle allows each individual sample to be recursively subdivided into sets of four progressively smaller samples. Image datasets generated through adaptive subdivision find application in Computational Physics, where simulations of natural processes are often performed over adaptive grids. It is also found that compression gains can be achieved for non-natural imagery, like text or graphics, if it first undergoes an adaptive subdivision process. The representation of adaptive subdivision images is performed by first coding the subdivision structure into the JPEG 2000 bitstream, in a lossless manner, followed by the entropy-coded and quantized transform coefficients. Due to the irregular distribution of sample sizes across the image, the wavelet transform must be applied on irregular image subsets that are nested across all the resolution levels. Using the conventional JPEG 2000 coding standard, adaptive subdivision images would first have to be upsampled to the smallest sample size in order to attain a uniform resolution. The proposed method for coding adaptive subdivision images is shown to perform better than conventional JPEG 2000 for medium to high bitrates.
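The recursive four-way subdivision can be sketched with a quadtree over a square image; the value-range test below is a stand-in for whatever refinement criterion produced the grid in the simulation.

```python
import numpy as np

def subdivide(img, thresh, x=0, y=0, size=None, leaves=None):
    """Recursively split a square image into four half-size blocks
    wherever the block's value range exceeds `thresh`; returns the
    (x, y, size) leaf blocks. This irregular, nested layout is what
    the codec must represent: the subdivision structure is coded
    losslessly, then the transform runs per resolution level."""
    if size is None:
        size, leaves = img.shape[0], []
    block = img[y:y + size, x:x + size]
    if size == 1 or block.max() - block.min() <= thresh:
        leaves.append((x, y, size))
    else:
        h = size // 2
        for dy in (0, h):
            for dx in (0, h):
                subdivide(img, thresh, x + dx, y + dy, h, leaves)
    return leaves

# a single bright pixel forces refinement only around itself
img = np.zeros((8, 8))
img[0, 0] = 1.0
leaves = subdivide(img, 0.5)
```

Uniform regions stay as large single samples while detail is refined locally, which is the compression advantage over upsampling everything to the finest grid.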

  15. Structure-Preserving Color Normalization and Sparse Stain Separation for Histological Images.

    PubMed

    Vahadane, Abhishek; Peng, Tingying; Sethi, Amit; Albarqouni, Shadi; Wang, Lichao; Baust, Maximilian; Steiger, Katja; Schlitter, Anna Melissa; Esposito, Irene; Navab, Nassir

    2016-08-01

    Staining and scanning of tissue samples for microscopic examination is fraught with undesirable color variations arising from differences in raw materials and manufacturing techniques of stain vendors, staining protocols of labs, and color responses of digital scanners. When comparing tissue samples, color normalization and stain separation of the tissue images can be helpful for both pathologists and software. Techniques that are used for natural images fail to utilize structural properties of stained tissue samples and produce undesirable color distortions. The stain concentration cannot be negative. Tissue samples are stained with only a few stains and most tissue regions are characterized by at most one effective stain. We model these physical phenomena that define the tissue structure by first decomposing images in an unsupervised manner into stain density maps that are sparse and non-negative. For a given image, we combine its stain density maps with stain color basis of a pathologist-preferred target image, thus altering only its color while preserving its structure described by the maps. Stain density correlation with ground truth and preference by pathologists were higher for images normalized using our method when compared to other alternatives. We also propose a computationally faster extension of this technique for large whole-slide images that selects an appropriate patch sample instead of using the entire image to compute the stain color basis. PMID:27164577
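Given a known stain color basis, the non-negative density maps can be estimated with standard multiplicative NMF updates, and normalization then recomposes them with a target image's basis. This is a simplification: the paper also learns the basis itself in an unsupervised manner with a sparsity penalty, whereas here it is assumed given (the H&E-like vectors below are invented).

```python
import numpy as np

def stain_densities(od, basis, iters=500):
    """Estimate non-negative stain density maps H (pixels x stains)
    such that od ~= H @ basis, via multiplicative NMF updates.
    od holds optical-density pixels (pixels x 3); basis holds one
    RGB optical-density direction per stain (stains x 3)."""
    H = np.ones((od.shape[0], basis.shape[0]))
    for _ in range(iters):
        H *= (od @ basis.T) / (H @ basis @ basis.T + 1e-8)
    return H

def normalize_stains(od, src_basis, tgt_basis):
    """Structure-preserving normalization: keep the source image's
    density maps, recompose with the target image's stain colors."""
    return stain_densities(od, src_basis) @ tgt_basis

# demo: recover known densities from synthetic optical-density pixels
basis = np.array([[0.65, 0.70, 0.29],    # hematoxylin-like direction
                  [0.07, 0.99, 0.11]])   # eosin-like direction
rng = np.random.default_rng(0)
H_true = rng.uniform(0.1, 1.0, size=(50, 2))
od = H_true @ basis
H_est = stain_densities(od, basis)
```

Because only the color basis is swapped while the density maps are kept, tissue structure is preserved exactly, which is the property the method is named for.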

  16. Multi-color imaging of selected southern interacting galaxies

    NASA Technical Reports Server (NTRS)

    Smith, Eric P.; Hintzen, Paul

    1990-01-01

    The authors present preliminary results from a study of selected Arp-Madore Southern Hemisphere peculiar galaxies. Broadband charge coupled device (CCD) images (BVRI) of a subset of these galaxies allow us to study each galaxy's optical morphology, color, and (in a crude manner) degree of nuclear activity, and to compare them with similar data we possess on other active galaxies. Many of these galaxies have optical morphologies closely resembling those of powerful radio galaxies (Smith and Heckman 1989), yet their radio emission is unremarkable. Accurate positions for subsequent spectroscopic studies have been determined along with broad band photometry and morphology studies. Detailed observations of these comparatively bright, low-redshift, well-resolved interacting systems should aid our understanding of the role interactions play in triggering galaxy activity. This work is the initial effort in a long term project to study the role played by the dynamics of the interaction in the production and manifestations of activity in galaxies, and the frequency of galaxy mergers.

  17. Private anonymous fingerprinting for color images in the wavelet domain

    NASA Astrophysics Data System (ADS)

    Abdul, W.; Gaborit, P.; Carré, P.

    2010-01-01

    An online buyer of multimedia content does not want to reveal his identity or his choice of multimedia content whereas the seller or owner of the content does not want the buyer to further distribute the content illegally. To address these issues we present a new private anonymous fingerprinting protocol. It is based on superposed sending for communication security, group signature for anonymity and traceability and single database private information retrieval (PIR) to allow the user to get an element of the database without giving any information about the acquired element. In the presence of a semi-honest model, the protocol is implemented using a blind, wavelet based color image watermarking scheme. The main advantage of the proposed protocol is that both the user identity and the acquired database element are unknown to any third party and in the case of piracy, the pirate can be identified using the group signature scheme. The robustness of the watermarking scheme against Additive White Gaussian Noise is also shown.

  18. Application of the airborne ocean color imager for commercial fishing

    NASA Technical Reports Server (NTRS)

    Wrigley, Robert C.

    1993-01-01

    The objective of the investigation was to develop a commercial remote sensing system for providing near-real-time data (within one day) in support of commercial fishing operations. The Airborne Ocean Color Imager (AOCI) had been built for NASA by Daedalus Enterprises, Inc., but it needed certain improvements, data processing software, and a delivery system to make it into a commercial system for fisheries. Two products were developed to support this effort: the AOCI with its associated processing system and an information service for both commercial and recreational fisheries to be created by Spectro Scan, Inc. The investigation achieved all technical objectives: improving the AOCI, creating software for atmospheric correction and bio-optical output products, georeferencing the output products, and creating a delivery system to get those products into the hands of commercial and recreational fishermen in near-real-time. The first set of business objectives involved Daedalus Enterprises and also were achieved: they have an improved AOCI and new data processing software with a set of example data products for fisheries applications to show their customers. Daedalus' marketing activities showed the need for simplification of the product for fisheries, but they successfully marketed the current version to an Italian consortium. The second set of business objectives tasked Spectro Scan to provide an information service and they could not be achieved because Spectro Scan was unable to obtain necessary venture capital to start up operations.

  19. Voyager 2 Color Image of Enceladus, Almost Full Disk

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This color Voyager 2 image mosaic shows the water-ice-covered surface of Enceladus, one of Saturn's icy moons. Enceladus' diameter of just 500 km would fit across the state of Arizona, yet despite its small size Enceladus exhibits one of the most interesting surfaces of all the icy satellites. Enceladus reflects about 90% of the incident sunlight (about like fresh-fallen snow), placing it among the most reflective objects in the Solar System. Several geologic terrains have superposed crater densities that span a factor of at least 500, thereby indicating huge differences in the ages of these terrains. It is possible that the high reflectivity of Enceladus' surface results from continuous deposition of icy particles from Saturn's E-ring, which in fact may originate from icy volcanoes on Enceladus' surface. Some terrains are dominated by sinuous mountain ridges from 1 to 2 km high (3300 to 6600 feet), whereas other terrains are scarred by linear cracks, some of which show evidence for possible sideways fault motion such as that of California's infamous San Andreas fault. Some terrains appear to have formed by separation of icy plates along cracks, and other terrains are exceedingly smooth at the resolution of this image. The implication carried by Enceladus' surface is that this tiny ice ball has been geologically active and perhaps partially liquid in its interior for much of its history. The heat engine that powers geologic activity here is thought to be elastic deformation caused by tides induced by Enceladus' orbital motion around Saturn and the motion of another moon, Dione.

  20. Mars Color Imager (MARCI) on the Mars Climate Orbiter

    USGS Publications Warehouse

    Malin, M.C.; Bell, J.F., III; Calvin, W.; Clancy, R.T.; Haberle, R.M.; James, P.B.; Lee, S.W.; Thomas, P.C.; Caplinger, M.A.

    2001-01-01

    The Mars Color Imager, or MARCI, experiment on the Mars Climate Orbiter (MCO) consists of two cameras with unique optics and identical focal plane assemblies (FPAs), Data Acquisition System (DAS) electronics, and power supplies. Each camera is characterized by small physical size and mass (~6 x 6 x 12 cm, including baffle; <500 g), low power requirements (<2.5 W, including power supply losses), and high science performance (1000 x 1000 pixel, low noise). The Wide Angle (WA) camera will have the capability to map Mars in five visible and two ultraviolet spectral bands at a resolution of better than 8 km/pixel under the worst case downlink data rate. Under better downlink conditions the WA will provide kilometer-scale global maps of atmospheric phenomena such as clouds, hazes, dust storms, and the polar hood. Limb observations will provide additional detail on atmospheric structure at 1/3 scale-height resolution. The Medium Angle (MA) camera is designed to study selected areas of Mars at regional scale. From 400 km altitude its 6° FOV, which covers ~40 km at 40 m/pixel, will permit all locations on the planet except the poles to be accessible for image acquisitions every two mapping cycles (roughly 52 sols). Eight spectral channels between 425 and 1000 nm provide the ability to discriminate both atmospheric and surface features on the basis of composition. The primary science objectives of MARCI are to (1) observe Martian atmospheric processes at synoptic scales and mesoscales, (2) study details of the interaction of the atmosphere with the surface at a variety of scales in both space and time, and (3) examine surface features characteristic of the evolution of the Martian climate over time. MARCI will directly address two of the three high-level goals of the Mars Surveyor Program: Climate and Resources. Life, the third goal, will be addressed indirectly through the environmental factors associated with the other two goals. Copyright 2001 by the American

  1. Artificial frame filling using adaptive neural fuzzy inference system for particle image velocimetry dataset

    NASA Astrophysics Data System (ADS)

    Akdemir, Bayram; Doǧan, Sercan; Aksoy, Muharrem H.; Canli, Eyüp; Özgören, Muammer

    2015-03-01

    Liquid behaviors are very important in many areas, especially in mechanical engineering, and high-speed cameras are a common way to observe and study them. The camera traces dust or colored markers travelling in the liquid and takes as many pictures per second as possible. Every image carries a large amount of data due to its resolution, and at high flow velocities it is not easy to produce a fluent sequence of frames from the captured images. Artificial intelligence is widely used in science to solve nonlinear problems, and the adaptive neural fuzzy inference system (ANFIS) is a common technique in the literature. A particle moving in a liquid has a two-dimensional velocity and its derivatives. In this study, ANFIS was used offline to create an artificial frame between each pair of consecutive frames, using velocities and vorticities to estimate a crossing-point vector between the previous and subsequent points, in order to improve image continuity. This makes the images much more understandable at chaotic or high-vorticity points. After ANFIS is executed, the image dataset doubles in size, with virtual and real frames alternating. The results were evaluated using R2 testing and mean squared error; R2, a statistical measure of similarity, reached 0.82, 0.81, 0.85 and 0.80 for the velocities and their derivatives, respectively.
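
The interleaving of real and virtual frames can be sketched with a linear midpoint stand-in for the ANFIS interpolator. The paper's model predicts frames from velocities and vorticities; a plain average is shown here only to illustrate how the sequence doubles into alternating real and virtual frames.

```python
import numpy as np

def fill_virtual_frames(frames):
    """Interleave each pair of consecutive PIV frames with a synthetic
    midpoint frame, so the output alternates real, virtual, real, ...
    (a plain average stands in for the ANFIS-predicted frame)."""
    out = []
    for prev, nxt in zip(frames[:-1], frames[1:]):
        out.append(prev)
        out.append(0.5 * (np.asarray(prev) + np.asarray(nxt)))
    out.append(frames[-1])
    return out
```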

  2. A blind dual color images watermarking based on IWT and state coding

    NASA Astrophysics Data System (ADS)

    Su, Qingtang; Niu, Yugang; Liu, Xianxi; Zhu, Yu

    2012-04-01

    In this paper, a state-coding based blind watermarking algorithm is proposed to embed a color image watermark into a color host image. The technique of state coding, which makes the state code of a data set equal to the hidden watermark information, is introduced. When embedding the watermark, the R, G and B components of the color watermark image are embedded into the Y, Cr and Cb components of the color host image using the Integer Wavelet Transform (IWT) and the rules of state coding. The same rules are then used to extract the watermark from the watermarked image without resorting to the original watermark or original host image. Experimental results show that the proposed algorithm not only meets the demands on invisibility and robustness of the watermark, but also performs well compared with the other methods considered in this work.

  3. Design of Pel Adaptive DPCM coding based upon image partition

    NASA Astrophysics Data System (ADS)

    Saitoh, T.; Harashima, H.; Miyakawa, H.

    1982-01-01

    A Pel Adaptive DPCM coding system based on image partition is developed which possesses coding characteristics superior to those of the Block Adaptive DPCM coding system. This method uses multiple DPCM coding loops and nonhierarchical cluster analysis. It is found that the coding performances of the Pel Adaptive DPCM coding method differ depending on the subject images. The Pel Adaptive DPCM designed using these methods is shown to yield a maximum performance advantage of 2.9 dB for the Girl and Couple images and 1.5 dB for the Aerial image, although no advantage was obtained for the moon image. These results show an improvement over the optimally designed Block Adaptive DPCM coding method proposed by Saito et al. (1981).

  4. Real-time adaptive video image enhancement

    NASA Astrophysics Data System (ADS)

    Garside, John R.; Harrison, Chris G.

    1999-07-01

    As part of a continuing collaboration between the University of Manchester and British Aerospace, a signal processing array has been constructed to demonstrate that it is feasible to compensate a video signal for the degradation caused by atmospheric haze in real-time. Previously reported work has shown good agreement between a simple physical model of light scattering by atmospheric haze and the observed loss of contrast. This model predicts a characteristic relationship between contrast loss in the image and the range from the camera to the scene. For an airborne camera, the slant-range to a point on the ground may be estimated from the airplane's pose, as reported by the inertial navigation system, and the contrast may be obtained from the camera's output. Fusing data from these two streams provides a means of estimating model parameters such as the visibility and the overall illumination of the scene. This knowledge allows the same model to be applied in reverse, thus restoring the contrast lost to atmospheric haze. An efficient approximation of range is vital for a real-time implementation of the method. Preliminary results show that an adaptive approach to fitting the model's parameters, exploiting the temporal correlation between video frames, leads to a robust implementation with a significantly accelerated throughput.
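
The inversion described above can be sketched in a few lines once the slant range and the fitted model parameters are known. This is a minimal sketch of the standard single-scattering (Koschmieder-style) form I = J·t + A·(1 − t) with transmission t = exp(−β·range), not the reported hardware implementation.

```python
import numpy as np

def dehaze(observed, slant_range, beta, airlight):
    """Invert the single-scattering haze model I = J*t + A*(1 - t),
    where t = exp(-beta * range) is the transmission along the
    camera-to-scene path, to restore the haze-free radiance J."""
    t = np.exp(-beta * slant_range)
    # Clamp t to avoid amplifying noise without bound at long ranges
    restored = airlight + (observed - airlight) / np.maximum(t, 1e-3)
    return np.clip(restored, 0.0, 1.0)
```

In the airborne setting described above, `slant_range` would come from the inertial navigation data, while `beta` (visibility) and `airlight` (overall illumination) are the model parameters estimated by fusing the two data streams.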

  5. Iterative color constancy with temporal filtering for an image sequence with no relative motion between the camera and the scene.

    PubMed

    Simão, Josemar; Jörg Andreas Schneebeli, Hans; Vassallo, Raquel Frizera

    2015-11-01

    Color constancy is the ability to perceive the color of a surface as invariant even under changing illumination. In outdoor applications, such as mobile robot navigation or surveillance, the lack of this ability harms segmentation, tracking, and object recognition. The main approaches to color constancy are generally targeted at static images and aim to estimate the scene illuminant color from the image. We present an iterative color constancy method with temporal filtering applied to image sequences, in which reference colors are estimated from previously corrected images. Furthermore, two strategies for sampling colors from the images are tested. The proposed method has been tested on image sequences with no relative movement between the scene and the camera, and compared with known color constancy algorithms such as gray-world, max-RGB, and gray-edge. In most cases, the iterative color constancy method achieved better results than the other approaches. PMID:26560917
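
For reference, the gray-world baseline that the paper compares against assumes the average scene reflectance is achromatic and rescales each channel accordingly; a minimal sketch:

```python
import numpy as np

def gray_world(img):
    """Gray-world white balance: assume the average scene color is
    achromatic, and scale each channel so its mean matches the
    global mean intensity."""
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / np.maximum(means, 1e-6)
    return np.clip(img * gains, 0.0, 1.0)
```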

  6. Analyzing visual enjoyment of color: using female nude digital Image as example

    NASA Astrophysics Data System (ADS)

    Chin, Sin-Ho

    2014-04-01

    This research adopts the three primary colors and their three mixed colors as the main hue variations by changing the background of a female nude digital image. Color saturation is set at 9S (high saturation) and 3S (low saturation) on the PCCS scale, and tone is varied across 3.5 (low brightness), 5.5 (medium brightness, for the primary colors), and 7.5 (high brightness). A watercolor brush stroke is applied to two female nude digital images, one visually pleasant with an elegant posture and the other unpleasant with stiff body language, to add visual intimacy. Results show that brightness is the main factor influencing visual enjoyment, followed by saturation. Specifically, high brightness with high saturation gains the highest enjoyment rating, medium brightness (primary colors) with high saturation the second, high brightness with low saturation the third, and low brightness with low saturation the least.

  7. Improving the image discontinuous problem by using color temperature mapping method

    NASA Astrophysics Data System (ADS)

    Jeng, Wei-De; Mang, Ou-Yang; Lai, Chien-Cheng; Wu, Hsien-Ming

    2011-09-01

    This article mainly focuses on image processing for the radial imaging capsule endoscope (RICE). RICE was first used to capture images of a pig's intestine, but the captured images were blurred because RICE suffers from aberration at the image center, and low illumination uniformity further degrades image quality. Image processing can be used to overcome these problems: images captured at different times are connected using the Pearson correlation coefficient, and a color temperature mapping method is applied to remove the discontinuity in the connection region.

  8. Reconstruction of color images via Haar wavelet based on digital micromirror device

    NASA Astrophysics Data System (ADS)

    Liu, Xingjiong; He, Weiji; Gu, Guohua

    2015-10-01

    A digital micromirror device (DMD) is introduced to form a Haar wavelet basis, projecting structured illumination in red, green and blue light onto the color target image. The light intensity signals reflected from the target image are received synchronously by a bucket detector with no spatial resolution, converted into voltage signals and then transferred to a PC [1]. Several synchronization steps are added during data acquisition. In the data collection process, following the wavelet tree structure, the locations of significant coefficients at the finer scale are predicted by comparing the coefficients sampled at the coarsest scale with a threshold. Monochrome grayscale images are obtained under red, green and blue structured illumination using the inverse Haar wavelet transform, and a color fusion algorithm combines the three monochrome grayscale images into the final color image. An experimental demonstration device was assembled according to this imaging principle; the letter "K" and the X-rite ColorChecker Passport were projected and reconstructed as target images, and the final reconstructed color images are of good quality. The Haar wavelet reconstruction considerably reduces the sampling rate and provides color information without compromising the resolution of the final image.
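
The reconstruction step relies on the inverse Haar transform; a one-level 2-D sketch is shown below, with one particular orthogonal normalization assumed. In the single-pixel scheme each coefficient would come from bucket-detector measurements under Haar-patterned illumination; here they are simply given.

```python
import numpy as np

def haar2_inverse(ll, lh, hl, hh):
    """One-level inverse 2-D Haar transform: rebuild a (2H, 2W) image
    from four (H, W) subbands (approximation plus vertical, horizontal
    and diagonal details, each scaled by 1/2 in the forward transform)."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    a, b, c, d = ll / 2, lh / 2, hl / 2, hh / 2
    out[0::2, 0::2] = a + b + c + d
    out[0::2, 1::2] = a + b - c - d
    out[1::2, 0::2] = a - b + c - d
    out[1::2, 1::2] = a - b - c + d
    return out
```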

  9. Development of an image capturing system for the reproduction of high-fidelity color

    NASA Astrophysics Data System (ADS)

    Ejaz, Tahseen; Shoichi, Yokoi; Horiuchi, Tomohiro; Yokota, Tetsuya; Takaya, Masanori; Ohashi, Gosuke; Shimodaira, Yoshifumi

    2005-01-01

    An image capturing system for the reproduction of high-fidelity color was developed, and a set of three optical filters was designed for this purpose. A simulation was performed on the SOCS database, which contains spectral reflectance data of various objects over the 400-700 nm wavelength range, in order to calculate the CIELAB color difference ΔEab; the average color difference was found to be 1.049. The camera was then mounted with the filters and color photographs of all 24 color patches of the Macbeth chart were taken. The measured tristimulus values of the patches were compared with those of the digital images captured by the camera, and the average ΔEab was found to be 5.916.

  10. Development of an image capturing system for the reproduction of high-fidelity color

    NASA Astrophysics Data System (ADS)

    Ejaz, Tahseen; Shoichi, Yokoi; Horiuchi, Tomohiro; Yokota, Tetsuya; Takaya, Masanori; Ohashi, Gosuke; Shimodaira, Yoshifumi

    2004-12-01

    An image capturing system for the reproduction of high-fidelity color was developed, and a set of three optical filters was designed for this purpose. A simulation was performed on the SOCS database, which contains spectral reflectance data of various objects over the 400-700 nm wavelength range, in order to calculate the CIELAB color difference ΔEab; the average color difference was found to be 1.049. The camera was then mounted with the filters and color photographs of all 24 color patches of the Macbeth chart were taken. The measured tristimulus values of the patches were compared with those of the digital images captured by the camera, and the average ΔEab was found to be 5.916.

  11. Color filter array patterns for small-pixel image sensors with substantial cross talk.

    PubMed

    Anzagira, Leo; Fossum, Eric R

    2015-01-01

    Digital image sensor outputs usually must be transformed to suit the human visual system. This color correction amplifies noise, thus reducing the signal-to-noise ratio (SNR) of the image. In subdiffraction-limit (SDL) pixels, where optical and carrier cross talk can be substantial, this problem can become significant when conventional color filter arrays (CFAs) such as the Bayer patterns (RGB and CMY) are used. We present the design and analysis of new color filter array patterns for improving the color error and SNR deterioration caused by cross talk in these SDL pixels. We demonstrate an improvement in the color reproduction accuracy and SNR in high cross-talk conditions. Finally, we investigate the trade-off between color accuracy and SNR for the different CFA patterns. PMID:26366487
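
The noise amplification the abstract refers to can be quantified directly from the color-correction matrix: for independent, equal-variance channel noise, the corrected channel's noise standard deviation scales with the L2 norm of the corresponding matrix row. A minimal sketch:

```python
import numpy as np

def ccm_noise_gain(ccm):
    """Per-channel noise amplification of a 3x3 color-correction matrix:
    for independent, equal-variance sensor noise, the corrected channel's
    noise standard deviation grows by the L2 norm of the matrix row."""
    return np.linalg.norm(ccm, axis=1)
```

Strong cross talk forces larger off-diagonal (and compensating diagonal) entries, hence larger row norms and lower post-correction SNR, which is the trade-off the proposed CFA patterns target.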

  12. Color images of Kansas subsurface geology from well logs

    USGS Publications Warehouse

    Collins, D.R.; Doveton, J.H.

    1986-01-01

    Modern wireline log combinations give highly diagnostic information that goes beyond the basic shale content, pore volume, and fluid saturation of older logs. Pattern recognition of geology from logs is made conventionally through either the examination of log overlays or log crossplots. Both methods can be combined through the use of color as a medium of information by setting the three color primaries of blue, green, and red light as axes of three dimensional color space. Multiple log readings of zones are rendered as composite color mixtures which, when plotted sequentially with depth, show lithological successions in a striking manner. The method is extremely simple to program and display on a color monitor. Illustrative examples are described from the Kansas subsurface. ?? 1986.
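
The mapping described, three log curves assigned to the blue, green, and red primaries so each depth sample becomes one composite color, is indeed simple to program; a sketch, with min-max scaling over the logged interval assumed as the normalization:

```python
import numpy as np

def logs_to_rgb(logs):
    """Map three wireline log curves onto the color primaries.
    logs is an (N, 3) array sampled with depth; each curve is min-max
    scaled over the interval, so every depth sample becomes one
    composite color that can be plotted sequentially with depth."""
    lo = logs.min(axis=0)
    hi = logs.max(axis=0)
    return (logs - lo) / np.maximum(hi - lo, 1e-12)
```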

  13. Rapid production of structural color images with optical data storage capabilities

    NASA Astrophysics Data System (ADS)

    Rezaei, Mohamad; Jiang, Hao; Qarehbaghi, Reza; Naghshineh, Mohammad; Kaminska, Bozena

    2015-03-01

    In this paper, we present novel methods to produce a structural color image for any given color picture using a pixelated generic stamp named a nanosubstrate. The nanosubstrate is composed of prefabricated arrays of red, green and blue subpixels. Each subpixel contains nano-gratings and/or sub-wavelength structures that give structural colors through light diffraction. Micro-patterning techniques were implemented to produce the color images from the nanosubstrate by selective activation of subpixels. The nano-grating structures can be nanohole arrays, which after replication are converted to nanopillar arrays, or vice versa. It has been demonstrated that visible and invisible data can be easily stored using these fabrication methods and that the information can be easily read. The techniques can therefore be employed to produce personalized and customized color images for applications in optical document security and publicity, and can also be complemented by combined optical data storage capabilities.

  14. Adaptive predictive multiplicative autoregressive model for medical image compression.

    PubMed

    Chen, Z D; Chang, R F; Kuo, W J

    1999-02-01

    In this paper, an adaptive predictive multiplicative autoregressive (APMAR) method is proposed for lossless medical image coding. The adaptive predictor is used for improving the prediction accuracy of encoded image blocks in our proposed method. Each block is first adaptively predicted by one of the seven predictors of the JPEG lossless mode and a local mean predictor. It is clear that the prediction accuracy of an adaptive predictor is better than that of a fixed predictor. Then the residual values are processed by the MAR model with Huffman coding. Comparisons with other methods [MAR, SMAR, adaptive JPEG (AJPEG)] on a series of test images show that our method is suitable for reversible medical image compression. PMID:10232675
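
The per-block predictor selection can be illustrated with a few JPEG-lossless-style predictors; a simplified sketch, where the predictor set and cost function are illustrative rather than the paper's exact APMAR configuration:

```python
import numpy as np

# A few JPEG-lossless-style predictors; a, b, c are the left, above,
# and above-left neighbors of the pixel being predicted.
PREDICTORS = {
    "left":  lambda a, b, c: a,
    "above": lambda a, b, c: b,
    "avg":   lambda a, b, c: (a + b) // 2,
    "plane": lambda a, b, c: a + b - c,
}

def best_predictor(block):
    """Pick, per block, the predictor with the smallest sum of absolute
    residuals (border pixels are skipped for brevity)."""
    a = block[1:, :-1]
    b = block[:-1, 1:]
    c = block[:-1, :-1]
    x = block[1:, 1:]
    costs = {name: int(np.abs(x - f(a, b, c)).sum())
             for name, f in PREDICTORS.items()}
    return min(costs, key=costs.get)
```

The winning predictor's residuals would then be entropy coded, which is where the MAR model and Huffman coding enter in the proposed method.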

  15. Modeling human performance with low light sparse color imagers

    NASA Astrophysics Data System (ADS)

    Haefner, David P.; Reynolds, Joseph P.; Cha, Jae; Hodgkin, Van

    2011-05-01

    Reflective band sensors are often signal to noise limited in low light conditions. Any additional filtering to obtain spectral information further reduces the signal to noise, greatly affecting range performance. Modern sensors, such as the sparse color filter CCD, circumvent this additional degradation through reducing the number of pixels affected by filters and distributing the color information. As color sensors become more prevalent in the warfighter arsenal, the performance of the sensor-soldier system must be quantified. While field performance testing ultimately validates the success of a sensor, accurately modeling sensor performance greatly reduces the development time and cost, allowing the best technology to reach the soldier the fastest. Modeling of sensors requires accounting for how the signal is affected through the modulation transfer function (MTF) and noise of the system. For the modeling of these new sensors, the MTF and noise for each color band must be characterized, and the appropriate sampling and blur must be applied. We show how sparse array color filter sensors may be modeled and how a soldier's performance with such a sensor may be predicted. This general approach to modeling color sensors can be extended to incorporate all types of low light color sensors.

  16. True color blood flow imaging using a high-speed laser photography system

    NASA Astrophysics Data System (ADS)

    Liu, Chien-Sheng; Lin, Cheng-Hsien; Sun, Yung-Nien; Ho, Chung-Liang; Hsu, Chung-Chi

    2012-10-01

    Physiological changes in the retinal vasculature are commonly indicative of such disorders as diabetic retinopathy, glaucoma, and age-related macular degeneration. Thus, various methods have been developed for noninvasive clinical evaluation of ocular hemodynamics. However, to the best of our knowledge, current ophthalmic instruments do not provide a true color blood flow imaging capability. Accordingly, we propose a new method for the true color imaging of blood flow using a high-speed pulsed laser photography system. In the proposed approach, monochromatic images of the blood flow are acquired using a system of three cameras and three color lasers (red, green, and blue). A high-quality true color image of the blood flow is obtained by assembling the monochromatic images by means of image realignment and color calibration processes. The effectiveness of the proposed approach is demonstrated by imaging the flow of mouse blood within a microfluidic channel device. The experimental results confirm the proposed system provides a high-quality true color blood flow imaging capability, and therefore has potential for noninvasive clinical evaluation of ocular hemodynamics.

  17. Use of ultrasound, color Doppler imaging and radiography to monitor periapical healing after endodontic surgery.

    PubMed

    Tikku, Aseem P; Kumar, Sunil; Loomba, Kapil; Chandra, Anil; Verma, Promila; Aggarwal, Renu

    2010-09-01

    This study evaluated the effectiveness of ultrasound, color Doppler imaging and conventional radiography in monitoring the post-surgical healing of periapical lesions of endodontic origin. Fifteen patients who underwent periapical surgery for endodontic pathology were randomly selected. In all patients, periapical lesions were evaluated preoperatively using ultrasound, color Doppler imaging and conventional radiography, to analyze characteristics such as size, shape and dimensions. On radiographic evaluation, dimensions were measured in the superoinferior and mesiodistal direction using image-analysis software. Ultrasound evaluation was used to measure the changes in shape and dimensions on the anteroposterior, superoinferior, and mesiodistal planes. Color Doppler imaging was used to detect the blood-flow velocity. Postoperative healing was monitored in all patients at 1 week and 6 months by using ultrasound and color Doppler imaging, together with conventional radiography. The findings were then analyzed to evaluate the effectiveness of the 3 imaging techniques. At 6 months, ultrasound and color Doppler imaging were significantly better than conventional radiography in detecting changes in the healing of hard tissue at the surgical site (P < 0.004). This study demonstrates that ultrasound and color Doppler imaging have the potential to supplement conventional radiography in monitoring the post-surgical healing of periapical lesions of endodontic origin. PMID:20881334

  18. A color image quality assessment using a reduced-reference image machine learning expert

    NASA Astrophysics Data System (ADS)

    Charrier, Christophe; Lebrun, Gilles; Lezoray, Olivier

    2008-01-01

    A quality metric based on a classification process is introduced. The main idea of the proposed method is to avoid the error-pooling step over many factors (in the frequency and spatial domains) commonly applied to obtain a final quality score. Instead, a classification process assigns each image a final quality class with respect to the standard quality scale provided by the ITU. For each degraded color image, a feature vector is computed that includes several Human Visual System characteristics, such as the contrast masking effect and color correlation. The selected features are of two kinds: 1) full-reference features and 2) no-reference characteristics. In this way, a machine learning expert providing a final class number is designed.

  19. Progress in color night vision

    NASA Astrophysics Data System (ADS)

    Toet, Alexander; Hogervorst, Maarten A.

    2012-01-01

    We present an overview of our recent progress and the current state-of-the-art techniques of color image fusion for night vision applications. Inspired by previously developed color opponent fusing schemes, we initially developed a simple pixel-based false color-mapping scheme that yielded fused false color images with large color contrast and preserved the identity of the input signals. This method has been successfully deployed in different areas of research. However, since this color mapping did not produce realistic colors, we continued to develop a statistical color-mapping procedure that would transfer the color distribution of a given example image to a multiband nighttime image. This procedure yields a realistic color rendering. However, it is computationally expensive and achieves no color constancy since the mapping depends on the relative amounts of the different materials in the scene. By applying the statistical mapping approach in a color look-up-table framework, we finally achieved both color constancy and computational simplicity. This sample-based color transfer method is specific for different types of materials in a scene and can be easily adapted for the intended operating theatre and the task at hand. The method can be implemented as a look-up-table transform and is highly suitable for real-time implementations.
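
The statistical color-mapping procedure mentioned above transfers first- and second-order color statistics from an example daytime image; a minimal Reinhard-style sketch, done directly in RGB here for brevity (published color-transfer methods typically work in a decorrelated, lab-like color space):

```python
import numpy as np

def color_transfer(source, target):
    """Shift and scale each channel of `source` so its mean and standard
    deviation match those of `target` (statistics transfer; done directly
    in RGB here for brevity)."""
    s = source.reshape(-1, 3)
    t = target.reshape(-1, 3)
    out = (s - s.mean(0)) / np.maximum(s.std(0), 1e-6) * t.std(0) + t.mean(0)
    return np.clip(out, 0.0, 1.0).reshape(source.shape)
```

Baking such a mapping into a per-material look-up table, as the overview describes, is what restores color constancy and real-time performance.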

  20. Numerical Demultiplexing of Color Image Sensor Measurements via Non-linear Random Forest Modeling

    PubMed Central

    Deglint, Jason; Kazemzadeh, Farnoud; Cho, Daniel; Clausi, David A.; Wong, Alexander

    2016-01-01

    The simultaneous capture of imaging data at multiple wavelengths across the electromagnetic spectrum is highly challenging, requiring complex and costly multispectral image devices. In this study, we investigate the feasibility of simultaneous multispectral imaging using conventional image sensors with color filter arrays via a novel comprehensive framework for numerical demultiplexing of the color image sensor measurements. A numerical forward model characterizing the formation of sensor measurements from light spectra hitting the sensor is constructed based on a comprehensive spectral characterization of the sensor. A numerical demultiplexer is then learned via non-linear random forest modeling based on the forward model. Given the learned numerical demultiplexer, one can then demultiplex simultaneously-acquired measurements made by the color image sensor into reflectance intensities at discrete selectable wavelengths, resulting in a higher resolution reflectance spectrum. Experimental results demonstrate the feasibility of such a method for the purpose of simultaneous multispectral imaging. PMID:27346434
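
The demultiplexing idea can be illustrated with a linear stand-in: given a forward model mapping intensities at k discrete wavelengths to the three sensor channels, least squares inverts the measurements. The paper instead learns a non-linear random-forest demultiplexer, which can handle more wavelengths than channels and sensor non-linearities; the sketch below keeps k <= 3 so the linear system stays determined.

```python
import numpy as np

def demultiplex(measurements, forward):
    """Recover per-pixel intensities at k discrete wavelengths from
    3-channel color sensor measurements, given a (3, k) forward model
    of the sensor's spectral response (linear stand-in for the learned
    non-linear demultiplexer)."""
    flat = measurements.reshape(-1, 3).T                   # (3, N)
    spectra = np.linalg.lstsq(forward, flat, rcond=None)[0]
    return spectra.T.reshape(measurements.shape[:-1] + (-1,))
```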

  1. Joint demosaicking and integer-ratio downsampling algorithm for color filter array image

    NASA Astrophysics Data System (ADS)

    Lee, Sangyoon; Kang, Moon Gi

    2015-03-01

    This paper presents a joint demosaicking and integer-ratio downsampling algorithm for color filter array (CFA) images. Color demosaicking is a necessary part of image signal processing to obtain a full color image in a digital recording system using a single sensor. In addition, on devices such as mobile phones, the captured image has to be downsampled for display because the display resolution is smaller than that of the image. The conventional procedure is "demosaicking first, downsampling later", which requires significant hardware resources and computational cost. In this paper, we propose a method in which demosaicking and downsampling are performed simultaneously. We analyze the Bayer CFA image in the frequency domain and then jointly demosaick and downsample with an integer-ratio scheme based on decomposing the signal into luma and chrominance components. Experimental results show that the proposed method produces high quality images with much lower computational cost and fewer hardware resources.
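
One degenerate but instructive case of joint demosaicking and downsampling is the 2x integer ratio for an RGGB Bayer mosaic: each 2x2 cell already contains a full RGB triple, so no full-resolution interpolation is ever needed. A sketch of this common shortcut (not the frequency-domain method proposed in the paper):

```python
import numpy as np

def bayer_to_half_rgb(cfa):
    """Joint demosaick + 2x downsample for an RGGB Bayer mosaic: each
    2x2 cell contributes its R sample, the mean of its two G samples,
    and its B sample to a single output pixel."""
    r = cfa[0::2, 0::2]
    g = (cfa[0::2, 1::2] + cfa[1::2, 0::2]) / 2.0
    b = cfa[1::2, 1::2]
    return np.stack([r, g, b], axis=-1)
```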

  2. Numerical Demultiplexing of Color Image Sensor Measurements via Non-linear Random Forest Modeling.

    PubMed

    Deglint, Jason; Kazemzadeh, Farnoud; Cho, Daniel; Clausi, David A; Wong, Alexander

    2016-01-01

    The simultaneous capture of imaging data at multiple wavelengths across the electromagnetic spectrum is highly challenging, requiring complex and costly multispectral image devices. In this study, we investigate the feasibility of simultaneous multispectral imaging using conventional image sensors with color filter arrays via a novel comprehensive framework for numerical demultiplexing of the color image sensor measurements. A numerical forward model characterizing the formation of sensor measurements from light spectra hitting the sensor is constructed based on a comprehensive spectral characterization of the sensor. A numerical demultiplexer is then learned via non-linear random forest modeling based on the forward model. Given the learned numerical demultiplexer, one can then demultiplex simultaneously-acquired measurements made by the color image sensor into reflectance intensities at discrete selectable wavelengths, resulting in a higher resolution reflectance spectrum. Experimental results demonstrate the feasibility of such a method for the purpose of simultaneous multispectral imaging. PMID:27346434

  3. Note: In vivo pH imaging system using luminescent indicator and color camera

    NASA Astrophysics Data System (ADS)

    Sakaue, Hirotaka; Dan, Risako; Shimizu, Megumi; Kazama, Haruko

    2012-07-01

    A microscopic in vivo pH imaging system is developed that captures both luminescent and color images. The former gives a quantitative measurement of the pH distribution in vivo; the latter captures structural information that can be overlaid on the pH distribution to correlate the structure of a specimen with its pH distribution. By using a digital color camera, a luminescent image as well as a color image is obtained. The system uses HPTS (8-hydroxypyrene-1,3,6-trisulfonate) as a luminescent pH indicator for the luminescent imaging. Filter units mounted in the microscope extract two luminescent images for the excitation-ratio method. The ratio of the two images is converted to a pH distribution through a priori pH calibration. An application of the system to epidermal cells of Lactuca sativa L. is shown.
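    The excitation-ratio step reduces to a pixel-wise ratio of the two luminescent images followed by a lookup through the a priori calibration curve. A sketch in NumPy; the calibration numbers below are hypothetical placeholders, not values from the paper:

```python
import numpy as np

# Hypothetical calibration: ratio values measured at known pH buffers
# (the HPTS excitation-ratio response is roughly monotonic near pH 6-8).
cal_ratio = np.array([0.2, 0.5, 1.0, 1.8, 2.5])
cal_ph    = np.array([5.5, 6.2, 7.0, 7.6, 8.0])

def ratio_to_ph(img_ex1, img_ex2):
    """Excitation-ratio method: pixel-wise ratio of two luminescent
    images, converted to pH through the a priori calibration curve."""
    ratio = img_ex1 / np.maximum(img_ex2, 1e-9)  # guard against divide-by-zero
    return np.interp(ratio, cal_ratio, cal_ph)   # piecewise-linear lookup

ex1 = np.array([[0.5, 1.0], [1.8, 2.5]])  # toy 2x2 luminescent images
ex2 = np.ones((2, 2))
ph = ratio_to_ph(ex1, ex2)
print(ph)  # [[6.2 7. ] [7.6 8. ]]
```

    Taking the ratio cancels factors common to both excitation images (indicator concentration, illumination nonuniformity), which is what makes the measurement quantitative.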

  4. Floating full-color image with computer-generated alcove rainbow hologram

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Takeshi; Yoshikawa, Hiroshi

    2014-02-01

    We have investigated a floating full-color image display based on a computer-generated hologram (CGH). When used as a 3D display, a floating image makes a strong impression on the viewer. In our previous study, changing the CGH shape from flat to half-cylindrical gave the floating image from the output CGH a viewing angle of nearly 180 degrees. However, since that CGH had no wavelength selectivity, the reconstructed image was monochrome; the huge amount of computation required for the fringe pattern was also a serious problem. We therefore propose a rainbow-type computer-generated alcove hologram. To reduce the computation, the rainbow hologram sacrifices vertical parallax, and it can reconstruct an image under white-light illumination. Compared with our previous Fresnel-type hologram, the calculation is 165 times faster. After calculation, we print the hologram with a fringe printer and evaluate the reconstructed floating full-color images. In this study, we introduce the computer-generated rainbow hologram into the floating image display. The rainbow hologram reconstructs a full-color image under white-light illumination and is recorded using a horizontal slit that limits the vertical parallax. By changing this slit into a half-cylindrical slit, the display reconstructs a full-color floating image with a wide viewing angle.

  5. A New Adaptive Image Denoising Method Based on Neighboring Coefficients

    NASA Astrophysics Data System (ADS)

    Biswas, Mantosh; Om, Hari

    2016-03-01

    Many effective techniques have been proposed for image denoising, including NeighShrink, the improved adaptive wavelet denoising method based on neighboring coefficients (IAWDMBNC), the improved wavelet shrinkage technique for image denoising (IWST), the local adaptive Wiener filter (LAWF), wavelet packet thresholding using median and Wiener filters (WPTMWF), and the adaptive image denoising method based on thresholding (AIDMT). These techniques are based on a local statistical description of the neighboring coefficients in a window. However, because their thresholds cannot modify and remove many small wavelet coefficients simultaneously, they do not always yield good image quality. In this paper, a new image denoising method is proposed that shrinks the noisy coefficients using an adaptive threshold. Our method overcomes these drawbacks and outperforms the NeighShrink, IAWDMBNC, IWST, LAWF, WPTMWF, and AIDMT denoising methods.
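    NeighShrink, the baseline this work improves on, scales each wavelet coefficient by max(0, 1 - lam^2/S^2), where S^2 sums the squared coefficients over a small window around it. A self-contained NumPy sketch of that neighborhood-shrinkage mechanism with a fixed (non-adaptive) threshold lam:

```python
import numpy as np

def neighshrink(coeffs, lam, win=3):
    """NeighShrink-style shrinkage: each wavelet coefficient is scaled by
    max(0, 1 - lam^2 / S^2), where S^2 sums squared coefficients over a
    win x win neighborhood. lam is a fixed threshold here."""
    pad = win // 2
    sq = np.pad(coeffs ** 2, pad, mode="edge")
    s2 = np.zeros_like(coeffs, dtype=float)
    for dy in range(win):                      # sliding-window sum of squares
        for dx in range(win):
            s2 += sq[dy:dy + coeffs.shape[0], dx:dx + coeffs.shape[1]]
    factor = np.maximum(0.0, 1.0 - lam ** 2 / np.maximum(s2, 1e-12))
    return coeffs * factor

rng = np.random.default_rng(0)
c = rng.normal(0, 1, (8, 8))   # stand-in for one noisy wavelet subband
d = neighshrink(c, lam=2.0)
print(np.abs(d).max() <= np.abs(c).max())  # True: shrinkage never amplifies
```

    The proposed method replaces the fixed lam with an adaptive threshold; this sketch only illustrates the neighborhood mechanism the paper's comparisons are built around.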

  6. False-Color-Image Map of Quadrangle 3566, Sang-Charak (501) and Sayghan-O-Kamard (502) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
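    The band-composition step these map entries describe — stretch three Landsat bands and display them as red, green, and blue — can be sketched with a global histogram equalization standing in for the adaptive (locally windowed) equalization actually used:

```python
import numpy as np

def hist_equalize(band, levels=256):
    """Global histogram equalization of one band onto 0..255 (a simplified
    stand-in for the adaptive, locally windowed stretch used for the maps)."""
    hist, bins = np.histogram(band.ravel(), bins=levels)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to 0..1
    return np.interp(band.ravel(), bins[:-1], cdf * 255).reshape(band.shape)

def false_color(b7, b4, b2):
    """Stack stretched Landsat bands 7, 4, 2 into an RGB false-color composite."""
    return np.stack([hist_equalize(b) for b in (b7, b4, b2)], axis=-1)

rng = np.random.default_rng(1)
rgb = false_color(*(rng.random((32, 32)) for _ in range(3)))
print(rgb.shape)  # (32, 32, 3)
```

    Mapping each band through its own cumulative distribution spreads pixel values evenly across the display range, which is what makes subtle reflectance differences between surface materials visible.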

  7. False-Color-Image Map of Quadrangle 3364, Pasa-Band (417) and Kejran (418) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  8. False-Color-Image Map of Quadrangle 3166, Jaldak (701) and Maruf-Nawa (702) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  9. False-Color-Image Map of Quadrangle 3462, Herat (409) and Chesht-Sharif (410) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  10. False-Color-Image Map of Quadrangle 3262, Farah (421) and Hokumat-E-Pur-Chaman (422) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  11. False-Color-Image Map of Quadrangle 3362, Shin-Dand (415) and Tulak (416) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  12. False-Color-Image Map of Quadrangle 3264, Nawzad-Musa-Qala (423) and Dehrawat (424) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  13. False-Color-Image Map of Quadrangle 3468, Chak Wardak-Syahgerd (509) and Kabul (510) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  14. False-Color-Image Map of Quadrangle 3570, Tagab-E-Munjan (505) and Asmar-Kamdesh (506) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  15. False-Color-Image Map of Quadrangle 3466, Lal-Sarjangal (507) and Bamyan (508) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  16. False-Color-Image Map of Quadrangle 3670, Jarm-Keshem (223) and Zebak (224) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  17. Empirical comparison of color normalization methods for epithelial-stromal classification in H and E images

    PubMed Central

    Sethi, Amit; Sha, Lingdao; Vahadane, Abhishek Ramnath; Deaton, Ryan J.; Kumar, Neeraj; Macias, Virgilia; Gann, Peter H.

    2016-01-01

    Context: Color normalization techniques for histology have not been empirically tested for their utility for computational pathology pipelines. Aims: We compared two contemporary techniques for achieving a common intermediate goal – epithelial-stromal classification. Settings and Design: Expert-annotated regions of epithelium and stroma were treated as ground truth for comparing classifiers on original and color-normalized images. Materials and Methods: Epithelial and stromal regions were annotated on thirty diverse-appearing H and E stained prostate cancer tissue microarray cores. Corresponding sets of thirty images each were generated using the two color normalization techniques. Color metrics were compared for original and color-normalized images. Separate epithelial-stromal classifiers were trained and compared on test images. Main analyses were conducted using a multiresolution segmentation (MRS) approach; comparative analyses using two other classification approaches (convolutional neural network [CNN], Wndchrm) were also performed. Statistical Analysis: For the main MRS method, which relied on classification of super-pixels, the number of variables used was reduced using backward elimination without compromising accuracy, and test area-under-the-curve (AUC) values were compared for original and normalized images. For CNN and Wndchrm, pixel classification test-AUCs were compared. Results: The Khan method reduced color saturation while the Vahadane method reduced hue variance. Super-pixel-level test-AUC for MRS was 0.010–0.025 (95% confidence interval limits ± 0.004) higher for the two normalized image sets compared to the original in the 10–80 variable range. Improvement in pixel classification accuracy was also observed for CNN and Wndchrm for color-normalized images. Conclusions: Color normalization can give a small incremental benefit when a super-pixel-based classification method is used with features that perform implicit color normalization while the gain is

  18. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized, variable-block-size transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The choice of coder for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
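    The threshold-driven selection can be illustrated with scalar quantizers of increasing rate standing in for the paper's DCT source coders: try the cheapest coder first and fall through to finer ones until the distortion criterion is met. A hedged sketch (the step sizes and threshold below are arbitrary illustrations):

```python
import numpy as np

def mbc_select(block, steps=(32.0, 8.0, 2.0), dmax=4.0):
    """Threshold-driven coder selection sketch: try quantizers from
    coarsest (cheapest) to finest and keep the first whose distortion
    (max abs error) falls below the threshold dmax. Plain scalar
    quantizers stand in for the paper's DCT source coders."""
    for i, q in enumerate(steps):
        rec = np.round(block / q) * q          # code/decode with step q
        if np.abs(rec - block).max() <= dmax:  # distortion criterion met?
            return i, rec
    return len(steps) - 1, rec                 # fall back to the finest coder

block = np.arange(16, dtype=float).reshape(4, 4)
idx, rec = mbc_select(block)
print(idx)  # 1: the mid-rate coder is the coarsest one meeting the criterion
```

    Smooth regions satisfy the criterion with a coarse (low-rate) coder, while busy regions fall through to finer coders, which is how MBC achieves variable-rate coding.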

  19. Wide-field computational color imaging using pixel super-resolved on-chip microscopy

    PubMed Central

    Greenbaum, Alon; Feizi, Alborz; Akbari, Najva; Ozcan, Aydogan

    2013-01-01

    Lens-free holographic on-chip imaging is an emerging approach that offers both wide field-of-view (FOV) and high spatial resolution in a cost-effective and compact design using source shifting based pixel super-resolution. However, color imaging has remained relatively immature for lens-free on-chip imaging, since a ‘rainbow’ like color artifact appears in reconstructed holographic images. To provide a solution for pixel super-resolved color imaging on a chip, here we introduce and compare the performances of two computational methods based on (1) YUV color space averaging, and (2) Dijkstra’s shortest path, both of which eliminate color artifacts in reconstructed images, without compromising the spatial resolution or the wide FOV of lens-free on-chip microscopes. To demonstrate the potential of this lens-free color microscope we imaged stained Papanicolaou (Pap) smears over a wide FOV of ~14 mm2 with sub-micron spatial resolution. PMID:23736466
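    The first of the two methods, YUV color-space averaging, exploits the fact that the rainbow artifact lives mostly in chrominance while fine detail lives in luminance. A rough NumPy sketch (the box blur and BT.601-style matrices are illustrative choices, not the paper's exact filters):

```python
import numpy as np

# BT.601-style RGB -> YUV matrix (illustrative choice)
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])

def suppress_rainbow(rgb, k=5):
    """YUV color-space averaging sketch: keep full-resolution luminance Y,
    box-blur the chrominance channels U and V to wash out wavelength-
    dependent 'rainbow' fringes, then convert back to RGB."""
    yuv = rgb @ RGB2YUV.T
    pad = k // 2
    for c in (1, 2):  # blur U and V only; Y is untouched
        padded = np.pad(yuv[..., c], pad, mode="edge")
        acc = np.zeros_like(yuv[..., c])
        for dy in range(k):
            for dx in range(k):
                acc += padded[dy:dy + rgb.shape[0], dx:dx + rgb.shape[1]]
        yuv[..., c] = acc / (k * k)
    return yuv @ np.linalg.inv(RGB2YUV).T

rng = np.random.default_rng(2)
img = rng.random((16, 16, 3))
out = suppress_rainbow(img)
print(out.shape)  # (16, 16, 3)
```

    Because only U and V are filtered, the luminance channel, and with it the spatial resolution, is preserved exactly, which is the stated goal of the method.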

  20. HST Imaging of the Globular Clusters in the Fornax Cluster: Color and Luminosity Distributions

    NASA Technical Reports Server (NTRS)

    Grillmair, C. J.; Forbes, D. A.; Brodie, J.; Elson, R.

    1998-01-01

    We examine the luminosity and B - I color distribution of globular clusters for three early-type galaxies in the Fornax cluster using imaging data from the Wide Field/Planetary Camera 2 on the Hubble Space Telescope.

  1. A novel color image compression algorithm using the human visual contrast sensitivity characteristics

    NASA Astrophysics Data System (ADS)

    Yao, Juncai; Liu, Guizhong

    2016-07-01

    In order to achieve a higher compression ratio and improve the visual quality of the decompressed image, a novel color image compression scheme based on the contrast sensitivity characteristics of the human visual system (HVS) is proposed. In the proposed scheme, the image is first converted into the YCrCb color space and divided into sub-blocks. The discrete cosine transform is then applied to each sub-block, and three quantization matrices, built from the contrast sensitivity characteristics of the HVS, are used to quantize the frequency-spectrum coefficients. The Huffman algorithm encodes the quantized data, and the inverse process reconstructs the decompressed color image. Simulations on two color images show that, at comparable compression ratios, the average structural similarity index measurement (SSIM) and peak signal-to-noise ratio (PSNR) increase by 2.78% and 5.48%, respectively, compared with joint photographic experts group (JPEG) compression. These results indicate that the proposed compression algorithm is feasible and effective, achieving a higher compression ratio while preserving encoding and image quality, and can fully meet the storage and transmission needs of color images in daily life.
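A minimal sketch of the first two stages (color-space conversion and quantize-and-dequantize) follows. The paper's contribution, the HVS-derived quantization matrices, is not reproduced here; `q` is a placeholder, and the full-range BT.601 matrix is an assumed standard choice:

```python
import numpy as np

def rgb_to_ycrcb(rgb):
    """Full-range BT.601 RGB -> luma plus two chroma channels.
    `rgb` is a float array shaped (..., 3) with values in [0, 255]."""
    m = np.array([[ 0.299,     0.587,     0.114   ],   # Y
                  [ 0.5,      -0.418688, -0.081312],   # Cr
                  [-0.168736, -0.331264,  0.5     ]])  # Cb
    ycc = rgb @ m.T
    ycc[..., 1:] += 128.0   # center the chroma channels
    return ycc

def quantize(coeffs, q):
    """Uniform quantize-and-dequantize of DCT coefficients by matrix q,
    as in JPEG-style coders; a perceptual coder would derive q from
    HVS contrast sensitivity."""
    return np.round(coeffs / q) * q

gray = np.array([[[128.0, 128.0, 128.0]]])   # a neutral pixel
ycc = rgb_to_ycrcb(gray)
print(np.allclose(ycc[0, 0], [128.0, 128.0, 128.0]))
```

A neutral gray pixel maps to centered chroma (Cr = Cb = 128), a quick check that the conversion matrix rows for chroma sum to zero.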

  2. Natural-color and color-infrared image mosaics of the Colorado River corridor in Arizona derived from the May 2009 airborne image collection

    USGS Publications Warehouse

    Davis, Philip A.

    2013-01-01

    The Grand Canyon Monitoring and Research Center (GCMRC) of the U.S. Geological Survey (USGS) periodically collects airborne image data for the Colorado River corridor within Arizona (fig. 1) to allow scientists to study the impacts of Glen Canyon Dam water release on the corridor’s natural and cultural resources. These data are collected from just above Glen Canyon Dam (in Lake Powell) down to the entrance of Lake Mead, for a total distance of 450 kilometers (km) and within a 500-meter (m) swath centered on the river’s mainstem and its seven main tributaries (fig. 1). The most recent airborne data collection in 2009 acquired image data in four wavelength bands (blue, green, red, and near infrared) at a spatial resolution of 20 centimeters (cm). The image collection used the latest model of the Leica ADS40 airborne digital sensor (the SH52), which uses a single optic for all four bands and collects and stores band radiance in 12-bits. Davis (2012) reported on the performance of the SH52 sensor and on the processing steps required to produce the nearly flawless four-band image mosaic (sectioned into map tiles) for the river corridor. The final image mosaic has a total of only 3 km of surface defects in addition to some areas of cloud shadow because of persistent inclement weather during data collection. The 2009 four-band image mosaic is perhaps the best image dataset that exists for the entire Arizona part of the Colorado River. Some analyses of these image mosaics do not require the full 12-bit dynamic range or all four bands of the calibrated image database, in which atmospheric scattering (or haze) had not been removed from the four bands. To provide scientists and the general public with image products that are more useful for visual interpretation, the 12-bit image data were converted to 8-bit natural-color and color-infrared images, which also removed atmospheric scattering within each wavelength-band image. The conversion required an evaluation of the

  3. False color image of a portion of the Hammersley Mountains in Australia

    NASA Technical Reports Server (NTRS)

    1981-01-01

    False color image of a portion of the Hammersley Mountains in Western Australia was processed from data acquired by JPL's Shuttle Imaging Radar-A (SIR-A) when it flew aboard STS-2. Color processing of SIR-A data is used to separate variations in topography. Red areas represent very rough mountain terrain; pink is less rugged; yellow is textured; green is desert-like terrain; and blue represents smooth areas, such as a dry lakebed. Finer details appear as thin lines.

  4. MRO Mars Color Imager (MARCI) Investigation Primary Mission Results

    NASA Astrophysics Data System (ADS)

    Edgett, K. S.; Cantor, B. A.; Malin, M. C.; Science; Operations Teams, M.

    2008-12-01

    The Mars Reconnaissance Orbiter (MRO) Mars Color Imager (MARCI) investigation was designed to recover the wide angle camera science objectives of the Mars Climate Orbiter MARCI which was destroyed upon arrival at Mars in 1999 and extend the daily meteorological coverage of the Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) wide angle investigation that was systematically conducted from March 1999 to October 2006. MARCI consists of two wide angle cameras, each with a 180° field of view. The first acquires data in 5 visible wavelength channels (420, 550, 600, 650, 720 nm), the second in 2 UV channels (260, 320 nm). Data have been acquired daily, except during spacecraft upsets, since 24 September 2006. From the MRO 250 to 315 km altitude orbit, inclined 93 degrees, visible wavelength images usually have a pixel scale of about 1 km at nadir and the UV data are at about 8 km per pixel. Data are obtained during every orbit on the day side of the planet from terminator to terminator. These provide a nearly continuous record of meteorological events and changes in surface frost and albedo patterns that span more than 1 martian year and extend the daily global record of such events documented by the MGS MOC. For a few weeks in September and October 2006, both camera systems operated simultaneously, providing views of weather events at about 1400 local time (MOC) and an hour later at about 1500 (MARCI). The continuous meteorological record, now spanning more than 5 Mars years, shows very repeatable weather from year to year with cloud and dust-raising events occurring in the same regions within about 2 weeks of their prior occurrence in previous years. This provides a measure of predictability ideal for assessing future landing sites, orbiter aerobraking plans, and conditions to be encountered by the current landed spacecraft on Mars. However, less predictable are planet-encircling dust events. MOC observed one in 2001, the next was observed by MARCI in 2007. These

  5. Image edge detection based on adaptive lifting scheme

    NASA Astrophysics Data System (ADS)

    Xia, Ping; Xiang, Xuejun; Wan, Junli

    2009-10-01

    Image edges arise from discontinuities in gray level; they are a fundamental carrier of image information and remain a hot topic in image processing. This paper analyzes traditional edge-detection algorithms and their shortcomings, then applies adaptive lifting wavelet analysis, adjusting the predict and update filters according to local image characteristics so that the transform accurately matches the signal. It also improves the wavelet edge-detection operator, yielding an edge-detection algorithm suited to the adaptive lifting scheme, and applies this method to medical image edge detection. Experimental results show that the proposed algorithm performs better than traditional algorithms.

  6. Color enhancement of highly correlated images. I - Decorrelation and HSI contrast stretches. [hue saturation intensity

    NASA Technical Reports Server (NTRS)

    Gillespie, Alan R.; Kahle, Anne B.; Walker, Richard E.

    1986-01-01

    Conventional enhancements for the color display of multispectral images are based on independent contrast modifications or 'stretches' of three input images. This approach is not effective if the image channels are highly correlated or if the image histograms are strongly bimodal or more complex. Any of several procedures that tend to 'stretch' color saturation while leaving hue unchanged may better utilize the full range of colors for the display of image information. Two conceptually different enhancements are discussed: the 'decorrelation stretch', based on principal-component (PC) analysis, and the 'stretch' of hue-saturation-intensity (HSI) transformed data. The PC transformation is scene-dependent, but the HSI transformation is invariant. Examples of images enhanced by conventional linear stretches, the decorrelation stretch, and stretches of HSI-transformed data are compared. Schematic variation diagrams and two- and three-dimensional histograms are used to illustrate the 'decorrelation stretch' method and the effects of the different enhancements.
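The decorrelation stretch can be sketched in a few lines. This is the symmetric form (rotate to principal components, equalize variances, rotate back through the same eigenvectors, which tends to preserve hue); the target spread is an arbitrary illustrative choice:

```python
import numpy as np

def decorrelation_stretch(img, target_sigma=50.0):
    """Symmetric PCA-based decorrelation stretch of an (H, W, bands) stack:
    rotate to principal components, equalize their variances, rotate back."""
    h, w, b = img.shape
    x = img.reshape(-1, b).astype(float)
    mean = x.mean(axis=0)
    cov = np.cov(x - mean, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    scale = target_sigma / np.sqrt(np.maximum(evals, 1e-12))
    t = evecs @ np.diag(scale) @ evecs.T   # whiten, then rescale uniformly
    return ((x - mean) @ t + mean).reshape(h, w, b)

# Three nearly identical (highly correlated) bands, the setting the
# abstract says defeats independent per-channel stretches.
rng = np.random.default_rng(1)
base = rng.standard_normal((64, 64))
img = np.stack([base + 0.05 * rng.standard_normal((64, 64)) for _ in range(3)],
               axis=-1)
out = decorrelation_stretch(img)
cov_out = np.cov(out.reshape(-1, 3), rowvar=False)
print(np.allclose(cov_out, 50.0**2 * np.eye(3), atol=1.0))
```

After the transform the band covariance is diagonal with equal variances, i.e. the tight diagonal cluster in the color cube has been spread to fill the full color range.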

  7. Color filters including infrared cut-off integrated on CMOS image sensor.

    PubMed

    Frey, Laurent; Parrein, Pascale; Raby, Jacques; Pellé, Catherine; Hérault, Didier; Marty, Michel; Michailos, Jean

    2011-07-01

    A color image was taken with a CMOS image sensor without any infrared cut-off filter, using red, green and blue metal/dielectric filters arranged in Bayer pattern with 1.75 µm pixel pitch. The three colors were obtained by a thickness variation of only two layers in the 7-layer stack, with a technological process including four photolithography levels. The thickness of the filter stack was only half of the traditional color resists, potentially enabling a reduction of optical crosstalk for smaller pixels. Both color errors and signal to noise ratio derived from optimized spectral responses are expected to be similar to color resists associated with infrared filter. PMID:21747459

  8. Spatial distribution of jovian clouds, hazes and colors from Cassini ISS multi-spectral images

    NASA Astrophysics Data System (ADS)

    Ordonez-Etxeberria, I.; Hueso, R.; Sánchez-Lavega, A.; Pérez-Hoyos, S.

    2016-03-01

    The Cassini spacecraft made a gravity assist flyby of Jupiter in December 2000. The Imaging Science Subsystem (ISS) acquired images of the planet that covered the visual range with filters sensitive to the distribution of clouds and hazes, their altitudes and color. We use a selection of these images to build high-resolution cylindrical maps of the planet in 9 wavelengths. We explore the spatial distribution of the planet reflectivity examining the distribution of color and altitudes of hazes as well as their relation. A variety of analyses is presented: (a) Principal Component Analysis (PCA); (b) color-altitude indices; and (c) chromaticity diagrams (for a quantitative characterization of Jupiter "true" colors as they would be perceived by a human observer). PCA of the full dataset indicates that six components are required to explain the data. These components are likely related to the distribution of cloud opacity at the main cloud, the distribution of two types of hazes, two chromophores or coloring processes and the distribution of convective storms. While the distribution of a single chromophore can explain most of the color variations in the atmosphere, a second coloring agent is required to explain the brownish cyclones in the North Equatorial Belt (NEB). This second colorant could be caused by a different chromophore or by the same chromophore located in structures deeper in the atmosphere. Color indices separate different dynamical regions where cloud color and altitude are correlated from those where they are not. The Great Red Spot (GRS) appears as a well separated region in terms of its position in a global color-altitude scatter diagram and different families of vortices are examined, including the red cyclones which are located deeper in the atmosphere. Finally, a chromaticity diagram of Jupiter nearly true color images quantifies the color variations in Jupiter's clouds from the perspective of a visual observer and helps to quantify how different

  9. Content- and disparity-adaptive stereoscopic image retargeting

    NASA Astrophysics Data System (ADS)

    Yan, Weiqing; Hou, Chunping; Zhou, Yuan; Xiang, Wei

    2016-02-01

    This paper proposes a content- and disparity-adaptive stereoscopic image retargeting method. To avoid distorting both salient content and disparity, we first compute the distortion differences among salient image regions and identify the factors that cause visual distortion. The proposed method then solves a convex quadratic program that simultaneously preserves salient regions and adjusts disparity to a target range, by relating the scaling factor of each salient region to the disparity scaling factor. Experimental results show that the proposed method successfully adapts the image disparity to the target display screen while keeping salient objects undistorted in the retargeted stereoscopic image.

  10. Dual-tree complex wavelet transform applied on color descriptors for remote-sensed images retrieval

    NASA Astrophysics Data System (ADS)

    Sebai, Houria; Kourgli, Assia; Serir, Amina

    2015-01-01

    This paper highlights color component features that improve high-resolution satellite (HRS) image retrieval. Color component correlation across image lines and columns is used to define a revised color space, designed to capture both color and neighborhood information simultaneously. From this space, color descriptors, namely the rotation-invariant uniform local binary pattern, histogram of gradient, and a modified version of local variance, are derived through the dual-tree complex wavelet transform (DT-CWT). A new color descriptor called smoothed local variance (SLV), based on an edge-preserving smoothing filter, is introduced. It is intended to offer an efficient, rotation-invariant way to represent texture/structure information, and it takes advantage of the DT-CWT representation to enhance the retrieval performance of HRS images. We report an evaluation of the SLV descriptor associated with the new color space using different similarity distances in our content-based image retrieval scheme, and we also compare against some standard features. Experimental results show that the SLV descriptor allied to the DT-CWT representation outperforms the other approaches.

  11. Seed viability detection using computerized false-color radiographic image enhancement

    NASA Technical Reports Server (NTRS)

    Vozzo, J. A.; Marko, Michael

    1994-01-01

    Seed radiographs are divided into density zones which are related to seed germination. Seeds that germinate have densities corresponding to false-color red. In turn, a seed sorter may be designed that rejects seeds lacking sufficient red to activate a gate along a moving belt carrying the seed source. This separates only the seeds with preselected densities representing biological viability leading to germination; these selected seeds command a higher market value. Actual false-coloring isn't required for a computer to distinguish the significant gray-zone range: the range can be predetermined and screened without red imaging. Applying false-color enhancement is a means of emphasizing differences in densities of gray within any subject from photographic, radiographic, or video imaging. Within the 0-255 range of gray levels, colors can be assigned to any single level or group of gray levels. Densitometric values then become easily recognized colors that relate to image density. Choosing a color to identify any given density allows separation by morphology or composition (form or function). Additionally, the relative area of each color is readily available for determining the distribution of that density by comparison with other densities within the image.
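The density-zone gating described above might be sketched like this. The zone boundaries and the acceptance threshold are hypothetical values for illustration, not calibrated to any real radiograph:

```python
import numpy as np

# Hypothetical gray-level density zones mapped to false colors.
ZONES = {"red": (180, 255), "yellow": (120, 179), "blue": (0, 119)}

def zone_fraction(gray, zone):
    """Fraction of pixels whose gray level falls inside the given zone."""
    lo, hi = ZONES[zone]
    return float(((gray >= lo) & (gray <= hi)).mean())

def accept_seed(gray, min_red=0.30):
    """Gate decision for the sorter: accept a seed if enough of its
    radiograph lies in the viability-related 'red' density zone."""
    return zone_fraction(gray, "red") >= min_red

dense_seed = np.full((32, 32), 200, dtype=np.uint8)   # mostly red-zone
empty_seed = np.full((32, 32), 60, dtype=np.uint8)    # mostly blue-zone
print(accept_seed(dense_seed), accept_seed(empty_seed))
```

As the abstract notes, no actual coloring is needed: the gate only thresholds gray-level ranges; the false color is a label for the human operator.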

  12. Adaptive SVD-Based Digital Image Watermarking

    NASA Astrophysics Data System (ADS)

    Shirvanian, Maliheh; Torkamani Azar, Farah

    Digital data utilization, along with the increasing popularity of the Internet, has facilitated information sharing and distribution. However, such applications have also raised concerns about copyright issues and the unauthorized modification and distribution of digital data. Digital watermarking techniques, proposed to solve these problems, hide information in digital media and extract it whenever needed to identify the data owner. In this paper a new method of image watermarking based on the singular value decomposition (SVD) of images is proposed, which takes the human visual system into account prior to embedding the watermark by segmenting the original image into blocks of different sizes, with higher density at the edges of the image. In this way the original image quality is preserved in the watermarked image. Additional advantages of the proposed technique are a large watermark-embedding capacity and robustness against different types of image manipulation.

  13. A new fringeline-tracking approach for color Doppler ultrasound imaging phase unwrapping

    NASA Astrophysics Data System (ADS)

    Saad, Ashraf A.; Shapiro, Linda G.

    2008-03-01

    Color Doppler ultrasound imaging is a powerful non-invasive diagnostic tool for many clinical applications that involve examining the anatomy and hemodynamics of human blood vessels. These clinical applications include cardiovascular diseases, obstetrics, and abdominal diseases. Since its commercial introduction in the early eighties, color Doppler ultrasound imaging has been used mainly as a qualitative tool, with few attempts to quantify its images. Many imaging artifacts hinder the quantification of color Doppler images, the most important of which is the aliasing artifact that distorts the blood flow velocities measured by the color Doppler technique. In this work we address the color Doppler aliasing problem and present a methodology for recovering the true flow velocities from the aliased ones. The problem is formulated as a 2D phase-unwrapping problem, which is well defined and has solid theoretical foundations in other imaging domains, including synthetic aperture radar and magnetic resonance imaging. This paper documents the need for a phase-unwrapping algorithm in color Doppler ultrasound image analysis and describes a new phase-unwrapping algorithm that relies on recently developed cutline detection approaches. The algorithm is novel in its use of heuristic information provided by the ultrasound imaging modality to guide the phase-unwrapping process. Experiments were performed on both in-vitro flow-phantom data and in-vivo human blood flow data. Both data types were acquired under a controlled acquisition protocol developed to minimize the distortion of the color Doppler data and hence simplify the phase-unwrapping task. In addition to a qualitative assessment of the results, a quantitative assessment approach was developed to measure their success. On ultrasound data, the new algorithm outperforms other well-known algorithms.
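The aliasing problem is mathematically identical to phase wrapping, which a one-dimensional example makes concrete. This uses NumPy's generic `np.unwrap`, not the paper's cutline-based 2-D algorithm, and the Nyquist velocity is an assumed value:

```python
import numpy as np

def unwrap_velocities(v_meas, v_nyq):
    """Doppler de-aliasing in one dimension: measured velocities wrap
    modulo 2 * v_nyq, exactly as phase wraps modulo 2 * pi."""
    phase = v_meas / v_nyq * np.pi          # map velocities onto [-pi, pi)
    return np.unwrap(phase) / np.pi * v_nyq

v_nyq = 0.5                                          # assumed Nyquist velocity
true_v = np.linspace(0.0, 1.2, 25)                   # exceeds v_nyq, so it aliases
wrapped = (true_v + v_nyq) % (2 * v_nyq) - v_nyq     # what the scanner reports
recovered = unwrap_velocities(wrapped, v_nyq)
print(np.allclose(recovered, true_v))
```

One-dimensional unwrapping succeeds here because the sampled profile is smooth; the hard part addressed by cutline-based 2-D algorithms is choosing consistent unwrapping paths in the presence of noise and flow boundaries.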

  14. Hyperspectral imaging using a color camera and its application for pathogen detection

    NASA Astrophysics Data System (ADS)

    Yoon, Seung-Chul; Shin, Tae-Sung; Heitschmidt, Gerald W.; Lawrence, Kurt C.; Park, Bosoon; Gamble, Gary

    2015-02-01

    This paper reports the results of a feasibility study for the development of a hyperspectral image recovery (reconstruction) technique using a RGB color camera and regression analysis in order to detect and classify colonies of foodborne pathogens. The target bacterial pathogens were the six representative non-O157 Shiga-toxin producing Escherichia coli (STEC) serogroups (O26, O45, O103, O111, O121, and O145) grown in Petri dishes of Rainbow agar. The purpose of the feasibility study was to evaluate whether a DSLR camera (Nikon D700) could be used to predict hyperspectral images in the wavelength range from 400 to 1,000 nm and even to predict the types of pathogens using a hyperspectral STEC classification algorithm that was previously developed. Unlike many other studies using color charts with known and noise-free spectra for training reconstruction models, this work used hyperspectral and color images, separately measured by a hyperspectral imaging spectrometer and the DSLR color camera. The color images were calibrated (i.e. normalized) to relative reflectance, subsampled and spatially registered to match with counterpart pixels in hyperspectral images that were also calibrated to relative reflectance. Polynomial multivariate least-squares regression (PMLR) was previously developed with simulated color images. In this study, partial least squares regression (PLSR) was also evaluated as a spectral recovery technique to minimize multicollinearity and overfitting. The two spectral recovery models (PMLR and PLSR) and their parameters were evaluated by cross-validation. The QR decomposition was used to find a numerically more stable solution of the regression equation. The preliminary results showed that PLSR was more effective especially with higher order polynomial regressions than PMLR. The best classification accuracy measured with an independent test set was about 90%. The results suggest the potential of cost-effective color imaging using hyperspectral image

  15. Double color image encryption using iterative phase retrieval algorithm in quaternion gyrator domain.

    PubMed

    Shao, Zhuhong; Shu, Huazhong; Wu, Jiasong; Dong, Zhifang; Coatrieux, Gouenou; Coatrieux, Jean Louis

    2014-03-10

    This paper describes a novel algorithm to encrypt double color images into a single undistinguishable image in the quaternion gyrator domain. By using an iterative phase retrieval algorithm, the phase masks used for encryption are obtained. Subsequently, the encrypted image is generated via cascaded quaternion gyrator transforms with different rotation angles. The parameters of the quaternion gyrator transforms and the phases serve as encryption keys. By knowing these keys, the original color images can be fully restored. Numerical simulations have demonstrated the validity of the proposed encryption system as well as its robustness against loss of data and additive Gaussian noise. PMID:24663832

  16. Color-to-Grayscale: Does the Method Matter in Image Recognition?

    PubMed Central

    Kanan, Christopher; Cottrell, Garrison W.

    2012-01-01

    In image recognition it is often assumed the method used to convert color images to grayscale has little impact on recognition performance. We compare thirteen different grayscale algorithms with four types of image descriptors and demonstrate that this assumption is wrong: not all color-to-grayscale algorithms work equally well, even when using descriptors that are robust to changes in illumination. These methods are tested using a modern descriptor-based image recognition framework, on face, object, and texture datasets, with relatively few training instances. We identify a simple method that generally works best for face and object recognition, and two that work well for recognizing textures. PMID:22253768
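The point that the conversion method matters is easy to see from the common conversions themselves: a single saturated color maps to very different gray levels depending on the formula chosen. These four conversions are standard textbook definitions, not necessarily the thirteen algorithms the paper compares:

```python
import numpy as np

def to_gray(rgb, method="luminance"):
    """Four common color-to-grayscale conversions; `rgb` is a float
    array shaped (..., 3) with values in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    if method == "luminance":   # BT.601-weighted sum, the usual default
        return 0.299 * r + 0.587 * g + 0.114 * b
    if method == "intensity":   # unweighted channel mean
        return (r + g + b) / 3.0
    if method == "lightness":   # HSL lightness: midpoint of the extremes
        return (np.maximum(np.maximum(r, g), b)
                + np.minimum(np.minimum(r, g), b)) / 2.0
    if method == "value":       # HSV value: channel maximum
        return np.maximum(np.maximum(r, g), b)
    raise ValueError(method)

pure_green = np.array([0.0, 1.0, 0.0])
for m in ("luminance", "intensity", "lightness", "value"):
    print(m, round(float(to_gray(pure_green, m)), 3))
```

Pure green comes out as 0.587, 0.333, 0.5, or 1.0 depending on the method, so any descriptor computed downstream sees a genuinely different image.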

  17. Coherent Image Layout using an Adaptive Visual Vocabulary

    SciTech Connect

    Dillard, Scott E.; Henry, Michael J.; Bohn, Shawn J.; Gosink, Luke J.

    2013-03-06

    When querying a huge image database containing millions of images, the result of the query may still contain many thousands of images that need to be presented to the user. We consider the problem of arranging such a large set of images into a visually coherent layout, one that places similar images next to each other. Image similarity is determined using a bag-of-features model, and the layout is constructed from a hierarchical clustering of the image set by mapping an in-order traversal of the hierarchy tree into a space-filling curve. This layout method provides strong locality guarantees so we are able to quantitatively evaluate performance using standard image retrieval benchmarks. Performance of the bag-of-features method is best when the vocabulary is learned on the image set being clustered. Because learning a large, discriminative vocabulary is a computationally demanding task, we present a novel method for efficiently adapting a generic visual vocabulary to a particular dataset. We evaluate our clustering and vocabulary adaptation methods on a variety of image datasets and show that adapting a generic vocabulary to a particular set of images improves performance on both hierarchical clustering and image retrieval tasks.

  18. Adaptive enhancement for infrared image using shearlet frame

    NASA Astrophysics Data System (ADS)

    Fan, Zunlin; Bi, Duyan; Gao, Shan; He, Linyuan; Ding, Wenshan

    2016-08-01

    An infrared imaging sensor is sensitive to variations in the imaging environment, which may degrade image quality and blur edges in the infrared image, so enhancement is necessary. To improve image contrast and adaptively enhance image structures such as edges and details, this paper proposes a novel infrared image enhancement algorithm in the shearlet transform domain. To avoid over-enhancing strong edges and amplifying noise in plateau regions, we linearly enhance the details in the high-frequency components based on their structure information, and improve global image contrast by non-uniform illumination correction on the low-frequency component. We then convert the processed low- and high-frequency components back to the spatial domain to obtain the final enhanced image. Experimental results show that the proposed algorithm enhances infrared image details well while introducing little noise, which is very helpful for target detection and recognition.

  19. Heritability of Chip Color and Specific Gravity in a Long-Day Adapted Solanum phureja-S. stenotomum Population

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Acceptable chip color and high specific gravity are important characteristics for chipping potatoes. High specific gravity in U.S. chipping varieties traces back to B5141-6 (‘Lenape’). In an effort to expand the germplasm base for high specific gravity, a long-day adapted diploid hybrid Solanum p...

  20. Quaternion higher-order spectra and their invariants for color image recognition

    NASA Astrophysics Data System (ADS)

    Jia, Xiaoning; Yang, Hang; Ma, Siliang; Song, Dongzhe

    2014-06-01

    This paper describes an invariants generation method for color images, which could be a useful tool in color object recognition tasks. First, by using the algebra of quaternions, we introduce the definition of quaternion higher-order spectra (QHOS) in the spatial domain and derive its equivalent form in the frequency domain. Then, QHOS invariants with respect to rotation, translation, and scaling transformations for color images are constructed using the central slice theorem and quaternion bispectral analysis. The feature data are further reduced to a smaller set using quaternion principal component analysis. The proposed method can deal with color images in a holistic manner, and the constructed QHOS invariants are highly immune to background noise. Experimental results show that the extracted QHOS invariants form compact and isolated clusters, and that a simple minimum distance classifier can yield high recognition accuracy.

  1. Unsupervised color image segmentation using graph cuts with multi-components

    NASA Astrophysics Data System (ADS)

    Li, Lei; Jin, Lianghai; Song, Enmin; Dong, Zhuoli

    2013-10-01

    A novel unsupervised color image segmentation method based on graph cuts with multi-components is proposed, which finds an optimal segmentation of an image by treating it as an energy minimization problem. First, the L*a*b* color space is chosen as the color feature, and a multi-scale quaternion Gabor filter is employed to extract texture features from the given image. Then, segmentation is formulated as iterative energy minimization via graph cuts, with the connected regions in each segment treated as the components of that segment in each iteration. In addition, the Canny edge detector combined with a color gradient is used to remove weak edges from the segmentation results. In contrast to previous algorithms, our method greatly reduces the computational complexity of the inference procedure through graph cuts. Experimental results demonstrate the promising performance of the proposed method.

  2. Fresnel domain double-phase encoding encryption of color image via ptychography

    NASA Astrophysics Data System (ADS)

    Qiao, Liang; Wang, Yali; Li, Tuo; Shi, Yishi

    2015-10-01

    In this paper, color image encryption combined with ptychography is investigated. Ptychographic imaging has the notable advantage of a simple optical architecture: the complex amplitude of an object can be reconstructed from a series of diffraction intensity patterns recorded as an aperture is moved. The traditional technique of three-primary-color synthesis is applied to encrypt the color image, and the encryption algorithm operates in the Fresnel transform domain to reduce physical constraints. It is shown that the proposed optical encryption scheme recovers the encrypted color plaintext well and gains security from the introduction of ptychography, since the light probe acts as an additional key and enlarges the key space. Finally, the scheme's immunity to noise and the impact of lateral offsets of the probe on reconstruction are investigated.

  3. PROCEDURES FOR ACCURATE PRODUCTION OF COLOR IMAGES FROM SATELLITE OR AIRCRAFT MULTISPECTRAL DIGITAL DATA.

    USGS Publications Warehouse

    Duval, Joseph S.

    1985-01-01

    Because the display and interpretation of satellite and aircraft remote-sensing data make extensive use of color film products, accurate reproduction of the color images is important. To achieve accurate color reproduction, the exposure and chemical processing of the film must be monitored and controlled. By using a combination of sensitometry, densitometry, and transfer functions that control film response curves, all of the different steps in the making of film images can be monitored and controlled. Because a sensitometer produces a calibrated exposure, the resulting step wedge can be used to monitor the chemical processing of the film. Step wedges put on film by image recording machines provide a means of monitoring the film exposure and color balance of the machines.

  4. A dual-modal retinal imaging system with adaptive optics

    PubMed Central

    Meadway, Alexander; Girkin, Christopher A.; Zhang, Yuhua

    2013-01-01

    An adaptive optics scanning laser ophthalmoscope (AO-SLO) is adapted to provide optical coherence tomography (OCT) imaging. The AO-SLO function is unchanged. The system uses the same light source, scanning optics, and adaptive optics in both imaging modes. The result is a dual-modal system that can acquire retinal images in both en face and cross-section planes at the single cell level. A new spectral shaping method is developed to reduce the large sidelobes in the coherence profile of the OCT imaging when a non-ideal source is used with a minimal introduction of noise. The technique uses a combination of two existing digital techniques. The thickness and position of the traditionally named inner segment/outer segment junction are measured from individual photoreceptors. In-vivo images of healthy and diseased human retinas are demonstrated. PMID:24514529

  5. An adaptive algorithm for low contrast infrared image enhancement

    NASA Astrophysics Data System (ADS)

    Liu, Sheng-dong; Peng, Cheng-yuan; Wang, Ming-jia; Wu, Zhi-guo; Liu, Jia-qi

    2013-08-01

    An adaptive infrared image enhancement algorithm for low-contrast images is proposed in this paper, to address the problem that conventional enhancement algorithms cannot effectively identify regions of interest when the dynamic range of an image is large. Starting from the characteristics of human visual perception, the algorithm combines global adaptive enhancement with local feature boosting, so that both the overall contrast and the texture of the image are improved. First, the global dynamic range is adjusted: a correspondence is established between the dynamic range of the original image and the display gray scale, raising the gray levels of bright objects while lowering those of dark targets, thereby improving overall contrast. Second, a filtering operation over each pixel and its neighborhood extracts texture information, which is used to adjust the brightness of the current pixel and enhance local contrast. This overcomes the tendency of traditional edge-detection algorithms to blur outlines, and preserves the distinctness of texture detail during enhancement. Finally, the globally adjusted and locally adjusted images are normalized together to ensure a smooth transition of image details. Extensive experiments were carried out comparing the proposed algorithm with other conventional enhancement algorithms on two groups of blurred IR images. They show that histogram equalization boosts contrast but leaves detail unclear, and that the Retinex algorithm makes detail distinguishable; the image processed by the proposed self-adaptive enhancement algorithm is clear in its details, and its contrast is markedly improved compared with Retinex.
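The two-stage scheme this abstract describes (global dynamic-range remapping, a neighborhood-filter detail boost, then a normalized blend) can be sketched as follows. The function name, the choice of a plain box-mean filter, and all parameter values are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def enhance_ir(img, detail_gain=2.0, blend=0.5, ksize=5):
    """Global dynamic-range remap plus local detail boost (illustrative sketch)."""
    img = img.astype(np.float64)
    # Global step: map the full dynamic range of the raw IR frame onto [0, 1].
    lo, hi = img.min(), img.max()
    global_adj = (img - lo) / max(hi - lo, 1e-9)
    # Local step: a box mean over each pixel's neighborhood estimates the
    # background; the residual carries the texture detail to be amplified.
    pad = ksize // 2
    padded = np.pad(global_adj, pad, mode="edge")
    local_mean = np.zeros_like(global_adj)
    for dy in range(ksize):
        for dx in range(ksize):
            local_mean += padded[dy:dy + global_adj.shape[0],
                                 dx:dx + global_adj.shape[1]]
    local_mean /= ksize * ksize
    detail = global_adj - local_mean
    local_adj = np.clip(global_adj + detail_gain * detail, 0.0, 1.0)
    # Blend the globally and locally adjusted images for a smooth transition.
    return blend * global_adj + (1.0 - blend) * local_adj
```

A real implementation would replace the box mean with the paper's (unspecified) filtering and make `blend` spatially varying.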

  6. Fusion framework for color image retrieval based on bag-of-words model and color local Haar binary patterns

    NASA Astrophysics Data System (ADS)

    Li, Li; Feng, Lin; Yu, Laihang; Wu, Jun; Liu, Shenglan

    2016-03-01

    Recently, global and local features have demonstrated excellent performance in image retrieval. However, each has problems: (1) local features particularly describe local textures or patterns, but similar textures may confuse local feature extraction methods and yield irrelevant retrieval results; (2) global features delineate overall feature distributions in images, so the retrieved results often appear alike yet may be irrelevant. To address these problems, we propose a fusion framework that combines local and global features and thus obtains higher precision for color image retrieval. Color local Haar binary patterns (CLHBP) and a bag-of-words (BoW) model of local features are exploited to capture global and local information in images. The proposed framework combines the ranking results of BoW and CLHBP through a graph-based fusion method. The average retrieval precision of the fusion framework is 83.6% on the Corel-1000 database, which is 9.9% and 6.4% higher than BoW and CLHBP alone, respectively. Extensive experiments on different databases validate the feasibility of the proposed framework.
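The abstract does not detail the graph-based fusion itself, but the general idea of merging two ranked result lists can be illustrated with a generic reciprocal-rank fusion. This is a stand-in for the paper's method, not a reproduction of it:

```python
def fuse_rankings(rank_a, rank_b, k=60):
    """Reciprocal-rank fusion of two retrieval rankings (a generic stand-in
    for the paper's graph-based fusion, which the abstract does not specify).
    Items ranked highly by either list accumulate a larger score."""
    scores = {}
    for ranking in (rank_a, rank_b):
        for pos, doc in enumerate(ranking):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + pos + 1)
    # Highest combined score first.
    return sorted(scores, key=scores.get, reverse=True)
```

The constant `k` damps the influence of top ranks; any document appearing in both lists is naturally promoted.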

  7. MMW and THz images denoising based on adaptive CBM3D

    NASA Astrophysics Data System (ADS)

    Dai, Li; Zhang, Yousai; Li, Yuanjiang; Wang, Haoxiang

    2014-04-01

    Over the past decades, millimeter wave and terahertz radiation has received considerable interest due to advances in emission and detection technologies, which have enabled the wide application of millimeter wave and terahertz imaging. This paper focuses on removing the stripe noise, block effects, and other interference present in this sort of image. A new kind of nonlocal averaging method is put forward: Gaussian noise of a suitable level is added to resonate with the image, and adaptive color block-matching 3D filtering (CBM3D) is then used to denoise it. Experimental results demonstrate that the method improves the visual quality and removes interference at the same time, making image analysis and target detection easier.

  8. Next generation high resolution adaptive optics fundus imager

    NASA Astrophysics Data System (ADS)

    Fournier, P.; Erry, G. R. G.; Otten, L. J.; Larichev, A.; Irochnikov, N.

    2005-12-01

    The spatial resolution of retinal images is limited by the presence of static and time-varying aberrations present within the eye. An updated High Resolution Adaptive Optics Fundus Imager (HRAOFI) has been built based on the development from the first prototype unit. This entirely new unit was designed and fabricated to increase opto-mechanical integration and ease-of-use through a new user interface. Improved camera systems for the Shack-Hartmann sensor and for the scene image were implemented to enhance the image quality and the frequency of the Adaptive Optics (AO) control loop. An optimized illumination system that uses specific wavelength bands was applied to increase the specificity of the images. Sample images of clinical trials of retinas, taken with and without the system, are shown. Data on the performance of this system will be presented, demonstrating the ability to calculate near diffraction-limited images.

  9. Towards Adaptive High-Resolution Images Retrieval Schemes

    NASA Astrophysics Data System (ADS)

    Kourgli, A.; Sebai, H.; Bouteldja, S.; Oukil, Y.

    2016-06-01

    Nowadays, content-based image-retrieval techniques constitute powerful tools for archiving and mining of large remote sensing image databases. High spatial resolution images are complex and differ widely in their content, even within the same category. All images are more or less textured and structured. During the last decade, different approaches for the retrieval of this type of image have been proposed, differing mainly in the type of features extracted. As these features are supposed to efficiently represent the query image, they should be adapted to all kinds of images contained in the database. However, if the image to be recognized is somewhat or very structured, a shape feature will be correspondingly effective, while if the image is composed of a single texture, a parameter reflecting that texture will prove more efficient. This motivates the use of adaptive schemes. We therefore propose to investigate adapting the retrieval scheme to the nature of the image, achieved through a preliminary analysis so that the indexing stage becomes supervised. First results show that, in this way, simple methods can match the performance of complex methods such as those based on bags of visual words built from SIFT (Scale Invariant Feature Transform) descriptors and those based on multi-scale feature extraction using wavelets and steerable pyramids.

  10. Superresolution restoration of an image sequence: adaptive filtering approach.

    PubMed

    Elad, M; Feuer, A

    1999-01-01

    This paper presents a new method based on adaptive filtering theory for superresolution restoration of continuous image sequences. The proposed methodology suggests least squares (LS) estimators which adapt in time, based on adaptive filters, least mean squares (LMS) or recursive least squares (RLS). The adaptation enables the treatment of linear space and time-variant blurring and arbitrary motion, both of them assumed known. The proposed new approach is shown to be of relatively low computational requirements. Simulations demonstrating the superresolution restoration algorithms are presented. PMID:18262881
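The LMS update at the core of such time-adaptive estimators can be sketched as follows. The superresolution-specific machinery (known space/time-variant blur and motion operators) is omitted, and the step size, filter length, and identification task are illustrative assumptions:

```python
import numpy as np

def lms_step(w, x, d, mu=0.05):
    """One LMS iteration: predict the desired sample d from input vector x,
    then move the weights along the instantaneous error gradient."""
    e = d - w @ x              # prediction error
    return w + mu * e * x, e

# Identify an (assumed) unknown 3-tap filter from noiseless input/output pairs.
rng = np.random.default_rng(0)
true_w = np.array([0.5, -0.3, 0.2])
w = np.zeros(3)
for _ in range(2000):
    x = rng.standard_normal(3)
    w, _ = lms_step(w, x, true_w @ x)
# After adaptation, w approximates true_w.
```

RLS replaces the scalar step `mu` with a recursively updated inverse correlation matrix, trading computation for faster convergence.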

  11. Adaptive filtering image preprocessing for smart FPA technology

    NASA Astrophysics Data System (ADS)

    Brooks, Geoffrey W.

    1995-05-01

    This paper discusses two applications of adaptive filters for image processing on parallel architectures. The first, based on the results of previously accomplished work, summarizes the analyses of various adaptive filters implemented for pixel-level image prediction. FIR filters, fixed and adaptive IIR filters, and various variable step size algorithms were compared with a focus on algorithm complexity against the ability to predict future pixel values. A Gaussian smoothing operation with varying spatial and temporal constants was also applied for comparisons of random noise reduction. The second application is a suggestion to use memory-adaptive IIR filters for detecting and tracking motion within an image. Objects within an image are made of edges, or segments, with varying degrees of motion. An application has been previously published that describes FIR filters connecting pixels and using correlations to determine motion and direction. That implementation seems limited to detecting motion coinciding with the FIR filter operation rate and the associated harmonics. Upgrading the FIR structures to adaptive IIR structures can eliminate these limitations. These and any other pixel-level adaptive filtering applications require data memory for filter parameters and some basic computational capability. Tradeoffs have to be made between chip real estate and these desired features. System tradeoffs will also have to be made as to where it makes the most sense to do which level of processing. Although smart pixels may not be ready to implement adaptive filters, applications such as these should give the smart pixel designer some long-range goals.
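A minimal per-pixel memory-adaptive IIR (leaky-integrator) motion detector along the lines suggested might look like this; the decay constant and threshold are assumed values, and a hardware smart-pixel version would hold only the scalar state per pixel:

```python
import numpy as np

def iir_motion(frames, alpha=0.9, thresh=30.0):
    """Per-pixel first-order IIR memory of the background; pixels deviating
    from their running state beyond `thresh` are flagged as moving."""
    state = frames[0].astype(np.float64)
    masks = []
    for f in frames[1:]:
        f = f.astype(np.float64)
        state = alpha * state + (1.0 - alpha) * f   # IIR background memory
        masks.append(np.abs(f - state) > thresh)    # large residual => motion
    return masks
```

Unlike a fixed-rate FIR correlator, the IIR memory responds to deviations at any rate, which is the limitation the paper suggests it removes.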

  12. Bayer patterned high dynamic range image reconstruction using adaptive weighting function

    NASA Astrophysics Data System (ADS)

    Kang, Hee; Lee, Suk Ho; Song, Ki Sun; Kang, Moon Gi

    2014-12-01

    It is not easy to acquire a desired high dynamic range (HDR) image directly from a camera due to the limited dynamic range of most image sensors. Therefore, generally, a post-process called HDR image reconstruction is used, which reconstructs an HDR image from a set of differently exposed images to overcome the limited dynamic range. However, conventional HDR image reconstruction methods suffer from noise factors and ghost artifacts. This is due to the fact that the input images taken with a short exposure time contain much noise in the dark regions, which contributes to increased noise in the corresponding dark regions of the reconstructed HDR image. Furthermore, since input images are acquired at different times, the images contain different motion information, which results in ghost artifacts. In this paper, we propose an HDR image reconstruction method which reduces the impact of the noise factors and prevents ghost artifacts. To reduce the influence of the noise factors, the weighting function, which determines the contribution of a certain input image to the reconstructed HDR image, is designed to adapt to the exposure time and local motions. Furthermore, the weighting function is designed to exclude ghosting regions by considering the differences of the luminance and the chrominance values between several input images. Unlike conventional methods, which generally work on a color image processed by the image processing module (IPM), the proposed method works directly on the Bayer raw image. This allows for a linear camera response function and also improves the efficiency in hardware implementation. Experimental results show that the proposed method can reconstruct high-quality Bayer patterned HDR images while being robust against ghost artifacts and noise factors.
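The paper's exposure- and motion-adaptive weighting is not specified in the abstract; the sketch below shows only the generic weighted-merge skeleton such methods build on, with a simple mid-gray Gaussian weight standing in for the adaptive weight. Function names, the linear-response assumption, and all parameters are assumptions:

```python
import numpy as np

def merge_hdr(images, exposures, sigma=0.2):
    """Weighted HDR merge of differently exposed frames. The weight favors
    well-exposed pixels (near mid-gray); the paper instead adapts the weight
    to exposure time and local motion."""
    acc = np.zeros_like(images[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, exposures):
        z = img.astype(np.float64) / 255.0
        w = np.exp(-((z - 0.5) ** 2) / (2.0 * sigma ** 2))
        acc += w * z / t          # assumes a linear response: radiance ~ z / t
        wsum += w
    return acc / np.maximum(wsum, 1e-9)
```

Working on Bayer raw data, as the paper does, keeps the linear-response assumption valid, since no IPM tone curve has been applied.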

  13. Progressive transmission of pseudo-color images. Appendix 1: Item 4. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Hadenfeldt, Andrew C.

    1991-01-01

    The transmission of digital images can require considerable channel bandwidth. The cost of obtaining such a channel can be prohibitive, or the channel might simply not be available. In this case, progressive transmission (PT) can be useful. PT presents the user with a coarse initial image approximation, and then proceeds to refine it. In this way, the user tends to receive information about the content of the image sooner than if a sequential transmission method is used. PT finds use in image database browsing, teleconferencing, medical imaging, and other applications. A PT scheme is developed for use with a particular type of image data, the pseudo-color or color mapped image. Such images consist of a table of colors called a colormap, plus a 2-D array of index values which indicate which colormap entry is to be used to display a given pixel. This type of image presents some unique problems for a PT coder, and techniques for overcoming these problems are developed. A computer simulation of the color mapped PT scheme is developed to evaluate its performance. Results of simulation using several test images are presented.
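The colormapped representation and a coarse-to-fine approximation sequence as described above might look like this in outline. The block-replication refinement is a naive illustration of progressive transmission, not the thesis's actual PT coder:

```python
import numpy as np

def apply_colormap(indices, colormap):
    """Indexed-color display: each pixel stores a row number into the
    colormap table; fancy indexing expands (H, W) -> (H, W, 3)."""
    return colormap[indices]

def progressive_levels(indices, levels=2):
    """Coarse-to-fine index approximations: subsample the index array,
    replicate blocks back to full size, ending with the exact image."""
    out = []
    h, w = indices.shape
    for k in range(levels, 0, -1):
        s = 2 ** k
        coarse = indices[::s, ::s]
        up = np.repeat(np.repeat(coarse, s, axis=0), s, axis=1)
        out.append(up[:h, :w])
    out.append(indices)
    return out
```

Note the unique difficulty the thesis addresses: averaging index values is meaningless (adjacent colormap entries need not be similar colors), so a real coder must refine indices, not interpolate them.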

  14. Coherence-Gated Sensorless Adaptive Optics Multiphoton Retinal Imaging

    PubMed Central

    Cua, Michelle; Wahl, Daniel J.; Zhao, Yuan; Lee, Sujin; Bonora, Stefano; Zawadzki, Robert J.; Jian, Yifan; Sarunic, Marinko V.

    2016-01-01

    Multiphoton microscopy enables imaging deep into scattering tissues. The efficient generation of non-linear optical effects is related to both the pulse duration (typically on the order of femtoseconds) and the size of the focused spot. Aberrations introduced by refractive index inhomogeneity in the sample distort the wavefront and enlarge the focal spot, which reduces the multiphoton signal. Traditional approaches to adaptive optics wavefront correction are not effective in thick or multi-layered scattering media. In this report, we present sensorless adaptive optics (SAO) using low-coherence interferometric detection of the excitation light for depth-resolved aberration correction of two-photon excited fluorescence (TPEF) in biological tissue. We demonstrate coherence-gated SAO TPEF using a transmissive multi-actuator adaptive lens for in vivo imaging in a mouse retina. This configuration has significant potential for reducing the laser power required for adaptive optics multiphoton imaging, and for facilitating integration with existing systems. PMID:27599635

  16. Calibration View of Earth and the Moon by Mars Color Imager

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Three days after the Mars Reconnaissance Orbiter's Aug. 12, 2005, launch, the spacecraft was pointed toward Earth and the Mars Color Imager camera was powered up to acquire a suite of images of Earth and the Moon. When it gets to Mars, the Mars Color Imager's main objective will be to obtain daily global color and ultraviolet images of the planet to observe martian meteorology by documenting the occurrence of dust storms, clouds, and ozone. This camera will also observe how the martian surface changes over time, including changes in frost patterns and surface brightness caused by dust storms and dust devils.

    The purpose of acquiring an image of Earth and the Moon just three days after launch was to help the Mars Color Imager science team obtain a measure, in space, of the instrument's sensitivity, as well as to check that no contamination occurred on the camera during launch. Prior to launch, the team determined that, three days out from Earth, the planet would only be about 4.77 pixels across, and the Moon would be less than one pixel in size, as seen from the Mars Color Imager's wide-angle perspective. If the team waited any longer than three days to test the camera's performance in space, Earth would be too small to obtain meaningful results.

    The Earth and Moon images were acquired by turning Mars Reconnaissance Orbiter toward Earth, then slewing the spacecraft so that the Earth and Moon would pass before each of the five color and two ultraviolet filters of the Mars Color Imager. The distance to the Moon was about 1,440,000 kilometers (about 895,000 miles); the range to Earth was about 1,170,000 kilometers (about 727,000 miles).

    This view combines a sequence of frames showing the passage of Earth and the Moon across the field of view of a single color band of the Mars Color Imager. As the spacecraft slewed to view the two objects, they passed through the camera's field of view. Earth has been saturated white in this image so that both Earth

  17. Face illumination manipulation using a single reference image by adaptive layer decomposition.

    PubMed

    Chen, Xiaowu; Wu, Hongyu; Jin, Xin; Zhao, Qinping

    2013-11-01

    This paper proposes a novel image-based framework to manipulate the illumination of the human face through adaptive layer decomposition. In our framework, only a single reference image is needed, without any knowledge of the 3D geometry or material information of the input face. To transfer the illumination effects of a reference face image to a normally lit face, we first decompose the lightness layers of the reference and input images into large-scale and detail layers using a weighted least squares (WLS) filter, with smoothing parameters adapted to the gradient values of the face images. The large-scale layer of the reference image is then filtered with the guidance of the input image by a guided filter, with smoothing parameters adapted to the face structures. The relit result is obtained by replacing the large-scale layer of the input image with that of the reference image. To normalize the illumination effects of a non-normally lit face (i.e., face delighting), we introduce a similar-reflectance prior into the WLS layer decomposition stage, which makes the normalized result less affected by the high-contrast light and shadow effects of the input face. Through these two procedures, we can change the illumination effects of a non-normally lit face by first normalizing its illumination and then transferring the illumination of another reference face to it. We obtain convincing results for both face relighting and delighting on numerous input and reference face images with various illumination effects and genders. Comparisons with previous work show that our framework is less affected by geometry differences and better preserves the identifying structure and skin color of the input face. PMID:23807447

  18. Discrete adaptive zone light elements (DAZLE): a new approach to adaptive imaging

    NASA Astrophysics Data System (ADS)

    Kellogg, Robert L.; Escuti, Michael J.

    2007-09-01

    New advances in Liquid Crystal Spatial Light Modulators (LCSLM) offer opportunities for large adaptive optics in the midwave infrared spectrum. A light focusing adaptive imaging system, using the zero-order diffraction state of a polarizer-free liquid crystal polarization grating modulator to create millions of high transmittance apertures, is envisioned in a system called DAZLE (Discrete Adaptive Zone Light Elements). DAZLE adaptively selects large sets of LCSLM apertures using the principles of coded masks, embodied in a hybrid Discrete Fresnel Zone Plate (DFZP) design. Issues of system architecture, including factors of LCSLM aperture pattern and adaptive control, image resolution and focal plane array (FPA) matching, and trade-offs between filter bandwidths, background photon noise, and chromatic aberration are discussed.

  19. Comparative color space analysis of difference images from adjacent visible human slices for lossless compression

    NASA Astrophysics Data System (ADS)

    Thoma, George R.; Pipkin, Ryan; Mitra, Sunanda

    1997-10-01

    This paper reports the compression ratio performance of the RGB, YIQ, and HSV color plane models for the lossless coding of the National Library of Medicine's Visible Human (VH) color data set. In a previous study the correlation between adjacent VH slices was exploited using the RGB color plane model. The results of that study suggested an investigation into possible improvements using the other two color planes, and alternative differencing methods. YIQ and HSV (also known as HSI) both represent the image by separating the intensity from the color information, and we anticipated higher correlation between the intensity components of adjacent VH slices. However, the compression ratio did not improve with the transformation from RGB into the other color plane models: to maintain lossless performance, YIQ and HSV both require more bits to store each pixel, and this increase in file size is not offset by the increase in compression due to the higher correlation of the intensity values. The best performance was achieved with the RGB color plane model. This study also explored three methods of differencing: average reference image, alternating reference image, and cascaded difference from a single reference. The best method proved to be the first iteration of the cascaded difference from a single reference. In this method, a single reference image is chosen, and the difference between it and its neighbor is calculated; then the difference between the neighbor and its next neighbor is calculated, and so on. This method requires that all preceding images up to the reference image be reconstructed before the target image is available. The compression ratios obtained from this method are significantly better than those of the competing methods.
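The winning differencing method, cascaded difference from a single reference, can be sketched directly from the description. Names are illustrative, and the actual study would additionally entropy-code the (low-entropy) difference arrays; the sketch only shows that the scheme is exactly reversible, i.e. lossless:

```python
import numpy as np

def cascaded_differences(slices):
    """Store the reference slice, then each neighbor-to-neighbor difference.
    Differences are held as int16 so negative values survive exactly."""
    ref = slices[0].astype(np.int16)
    diffs = [b.astype(np.int16) - a.astype(np.int16)
             for a, b in zip(slices[:-1], slices[1:])]
    return ref, diffs

def reconstruct(ref, diffs, k):
    """Rebuild slice k by accumulating differences from the reference,
    mirroring the paper's requirement that all preceding slices be
    reconstructed first."""
    img = ref.copy()
    for d in diffs[:k]:
        img = img + d
    return img.astype(np.uint8)
```

High inter-slice correlation makes the differences cluster near zero, which is what yields the compression gain when they are entropy-coded.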

  20. Improving the visualization and detection of tissue folds in whole slide images through color enhancement

    PubMed Central

    Bautista, Pinky A.; Yagi, Yukako

    2010-01-01

    Objective: The objective of this paper is to improve the visualization and detection of tissue folds, which are prominent among tissue slides, from the pre-scan image of a whole slide image, by introducing a color enhancement method that enables the differentiation between fold and non-fold image pixels. Method: The weighted difference between the color saturation and luminance of the image pixels is used as a shifting factor applied to the original RGB color of the image. Results: Application of the enhancement method to hematoxylin and eosin (H&E) stained images improves the visualization of tissue folds regardless of the colorimetric variations in the images. Detection of tissue folds after application of the enhancement also improves, but the presence of nuclei, which are also stained dark like the folds, was found to sometimes affect the detection accuracy. Conclusion: The presence of tissue artifacts can affect the quality of whole slide images, especially since whole slide scanners select focus points from the pre-scan image, in which artifacts are indistinguishable from real tissue areas. We have presented in this paper an enhancement scheme that improves the visualization and detection of tissue folds from pre-scan images. Since the method works on simulated pre-scan images, its integration into the actual whole slide imaging process should also be possible. PMID:21221170
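One plausible reading of the enhancement rule (shifting the RGB values by a weighted difference of saturation and luminance, so that saturated-but-dark fold pixels are pushed apart from the rest) is sketched below. The exact weighting and color definitions used by the authors are not given in the abstract, so every formula and parameter here is an assumption:

```python
import numpy as np

def enhance_folds(rgb, alpha=1.0):
    """Shift each pixel's RGB by a weighted saturation-minus-luminance term
    (illustrative reading of the abstract, not the authors' exact method)."""
    img = rgb.astype(np.float64) / 255.0
    mx = img.max(axis=2)
    mn = img.min(axis=2)
    luminance = (mx + mn) / 2.0
    saturation = (mx - mn) / (mx + 1e-9)          # 0 where the pixel is black
    # Fold pixels tend to be saturated yet dark: saturation - luminance is
    # large there, so the shift separates them from ordinary tissue.
    shift = alpha * (saturation - luminance)
    out = img + shift[..., None]
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)
```

After such a shift, a simple threshold on the enhanced image can serve as the fold detector, subject to the nuclei confusion noted in the results.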

  1. DTV color and image processing: past, present, and future

    NASA Astrophysics Data System (ADS)

    Kim, Chang-Yeong; Lee, SeongDeok; Park, Du-Sik; Kwak, Youngshin

    2006-01-01

    The image processor in digital TV has started to play an important role due to customers' growing desire for higher image quality. Customers want more vivid and natural images without any visual artifacts, and image processing techniques aim to meet these needs in spite of the physical limitations of the panel. In this paper, developments in image processing techniques for DTV, in conjunction with developments in display technologies at Samsung R&D, are reviewed. The introduced algorithms cover techniques required to solve problems caused by the characteristics of the panel itself, as well as techniques for enhancing the quality of input signals, optimized for the panel and for human visual characteristics.

  2. Quantifying the Onset and Progression of Plant Senescence by Color Image Analysis for High Throughput Applications.

    PubMed

    Cai, Jinhai; Okamoto, Mamoru; Atieno, Judith; Sutton, Tim; Li, Yongle; Miklavcic, Stanley J

    2016-01-01

    Leaf senescence, an indicator of plant age and ill health, is an important phenotypic trait for the assessment of a plant's response to stress. Manual inspection of senescence, however, is time consuming, inaccurate and subjective. In this paper we propose an objective evaluation of plant senescence by color image analysis for use in a high throughput plant phenotyping pipeline. As high throughput phenotyping platforms are designed to capture whole-of-plant features, camera lenses and camera settings are inappropriate for the capture of fine detail. Specifically, plant colors in images may not represent true plant colors, leading to errors in senescence estimation. Our algorithm features a color distortion correction and image restoration step prior to a senescence analysis. We apply our algorithm to two time series of images of wheat and chickpea plants to quantify the onset and progression of senescence. We compare our results with senescence scores resulting from manual inspection. We demonstrate that our procedure is able to process images in an automated way for an accurate estimation of plant senescence even from color distorted and blurred images obtained under high throughput conditions. PMID:27348807
