Multimodal digital color imaging system for facial skin lesion analysis
NASA Astrophysics Data System (ADS)
Bae, Youngwoo; Lee, Youn-Heum; Jung, Byungjo
2008-02-01
In dermatology, various digital imaging modalities have been used as important tools to quantitatively evaluate treatment effects on skin lesions. Cross-polarization color imaging has been used to evaluate skin chromophore (melanin and hemoglobin) information, and parallel-polarization imaging to evaluate skin texture. In addition, UV-A induced fluorescence imaging has been widely used to evaluate skin conditions such as sebum, keratosis, sun damage, and vitiligo. To maximize the efficacy of evaluating diverse skin lesions, it is desirable to integrate these modalities into a single imaging system. In this study, we propose a multimodal digital color imaging system that provides four different digital color images: a standard color image, parallel- and cross-polarization color images, and a UV-A induced fluorescence color image. Herein, we describe the imaging system and present examples of image analysis. By analyzing the color information and morphological features of facial skin lesions, we can evaluate various skin lesions simultaneously and comparably. In conclusion, we believe the multimodal color imaging system can serve as a valuable assistive tool in dermatology.
Kruse, Fred A.
1984-01-01
Green areas on Landsat 4/5 - 4/6 - 6/7 (red - blue - green) color-ratio-composite (CRC) images represent limonite on the ground. Color variation on such images was analyzed to determine the causes of the color differences within and between the green areas. Digital transformation of the CRC data into the modified cylindrical Munsell color coordinates - hue, value, and saturation - was used to correlate image color characteristics with properties of surficial materials. The amount of limonite visible to the sensor is the primary cause of color differences in green areas on the CRCs. Vegetation density is a secondary cause of color variation of green areas on Landsat CRC images. Digital color analysis of Landsat CRC images can be used to map unknown areas. Color variations of green pixels allow discrimination among limonitic bedrock, nonlimonitic bedrock, nonlimonitic alluvium, and limonitic alluvium.
An instructional guide for leaf color analysis using digital imaging software
Paula F. Murakami; Michelle R. Turner; Abby K. van den Berg; Paul G. Schaberg
2005-01-01
Digital color analysis has become an increasingly popular and cost-effective method utilized by resource managers and scientists for evaluating foliar nutrition and health in response to environmental stresses. We developed and tested a new method of digital image analysis that uses Scion Image or NIH Image public domain software to quantify leaf color. This...
NASA Astrophysics Data System (ADS)
Ojima, Nobutoshi; Fujiwara, Izumi; Inoue, Yayoi; Tsumura, Norimichi; Nakaguchi, Toshiya; Iwata, Kayoko
2011-03-01
Uneven distribution of skin color is one of the biggest concerns about facial skin appearance. Recently, several techniques to analyze skin color have been introduced that separate skin color information into chromophore components, such as melanin and hemoglobin. However, there are few reports on quantitative analysis of skin color unevenness that consider the type of chromophore, clusters of different sizes, and the concentration of each chromophore. We propose a new image analysis and simulation method based on chromophore analysis and spatial frequency analysis. The method is composed of three main techniques: independent component analysis (ICA) to extract hemoglobin and melanin chromophores from a single skin color image; an image pyramid technique that decomposes each chromophore map into multi-resolution images, which can be used to identify clusters of different sizes or spatial frequencies; and analysis of the histogram obtained from each multi-resolution image to extract unevenness parameters. As an application of the method, we also introduce an image processing technique to modify the unevenness of the melanin component. The method showed high capability to analyze the unevenness of each skin chromophore: 1) vague unevenness of the skin could be discriminated from noticeable pigmentation such as freckles or acne; 2) by analyzing the unevenness parameters obtained from each multi-resolution image for Japanese women, age-related changes were observed in the parameters of middle spatial frequencies; and 3) an image processing system that modulates the parameters was proposed to change the unevenness of skin images in real time along the axis of the observed age-related change.
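A minimal sketch of the kind of chromophore separation and multi-resolution analysis described above, assuming a skin patch supplied as a float RGB array; the log/optical-density step, the two-component FastICA, and the pyramid depth are illustrative choices rather than the authors' exact pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA
from skimage.transform import pyramid_gaussian

def separate_chromophores(rgb):
    """Split a skin RGB image (H x W x 3, float in (0, 1]) into two independent
    components, nominally melanin-like and hemoglobin-like (assignment of the
    two ICA components to specific chromophores is not resolved here)."""
    od = -np.log(np.clip(rgb, 1e-4, 1.0))            # optical density per channel
    ica = FastICA(n_components=2, random_state=0)
    comps = ica.fit_transform(od.reshape(-1, 3))     # one row per pixel
    return comps.reshape(rgb.shape[0], rgb.shape[1], 2)

def unevenness_parameters(chromophore_map, levels=4):
    """Histogram spread (standard deviation) at each level of a Gaussian
    pyramid, a simple stand-in for the multi-resolution unevenness parameters."""
    return [float(np.std(level))
            for level in pyramid_gaussian(chromophore_map, max_layer=levels)]
```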
Color line scan camera technology and machine vision: requirements to consider
NASA Astrophysics Data System (ADS)
Paernaenen, Pekka H. T.
1997-08-01
Color machine vision has shown a strong uptrend in use within the past few years, as the introduction of new camera and scanner technologies underscores. In the future, the movement from monochrome imaging to color will accelerate as machine vision users demand more knowledge about their product stream. As color has come to machine vision, the equipment used to digitize color images must meet certain requirements. Color machine vision needs not only good color separation but also a high dynamic range and a good linear response from the camera. The importance of these features grows further when the image is converted to another color space, since some information is always lost when converting integer data to another form. Traditionally, color image processing has been a much slower technique than gray-level image processing because of the three times larger data volume per image, and the three times greater memory requirement. Advances in computers, memory, and processing units have made it possible to handle even large color images cost-efficiently today. In some cases, image analysis of color images can in fact be easier and faster than analysis of a comparable gray-level image because of the additional information per pixel. Color machine vision also sets new requirements for lighting: high-intensity, white light is required to acquire good images for further processing or analysis. New developments in lighting technology are gradually bringing solutions for color imaging.
Quantitative Assay for Starch by Colorimetry Using a Desktop Scanner
ERIC Educational Resources Information Center
Matthews, Kurt R.; Landmark, James D.; Stickle, Douglas F.
2004-01-01
A procedure for producing a standard curve for starch concentration measurement by image analysis, using a color scanner and a computer for data acquisition and color analysis, is described. Color analysis is performed by a Visual Basic program that measures red, green, and blue (RGB) color intensities for pixels within the scanner image.
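A minimal sketch of the same workflow in Python rather than Visual Basic: average the RGB intensities over a scanned well region and fit a linear standard curve against known starch concentrations. The rectangular region, the choice of the blue channel, and the linear model are illustrative assumptions.

```python
import numpy as np
from skimage import io

def mean_rgb(image_path, region):
    """Average R, G, B intensities inside a rectangular region
    (row0, row1, col0, col1) of a scanned image."""
    img = io.imread(image_path)
    r0, r1, c0, c1 = region
    return img[r0:r1, c0:c1, :3].reshape(-1, 3).mean(axis=0)

def starch_standard_curve(concentrations, blue_intensities):
    """Fit a linear standard curve mapping blue-channel intensity to starch
    concentration; returns a callable usable for unknown samples."""
    slope, intercept = np.polyfit(blue_intensities, concentrations, 1)
    return lambda b: slope * b + intercept
```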
Color model comparative analysis for breast cancer diagnosis using H and E stained images
NASA Astrophysics Data System (ADS)
Li, Xingyu; Plataniotis, Konstantinos N.
2015-03-01
Digital cancer diagnosis is a research realm where signal processing techniques are used to analyze and classify color histopathology images. Unlike grayscale image analysis of magnetic resonance imaging or X-ray, the colors in histopathology images convey a large amount of histological information and thus play a significant role in cancer diagnosis. Although color information is widely used in histopathology work, there have been few studies to date on color model selection for feature extraction in cancer diagnosis schemes. This paper addresses the problem of color space selection for digital cancer classification using H and E stained images, and investigates the effectiveness of various color models (RGB, HSV, CIE L*a*b*, and a stain-dependent H and E decomposition model) in breast cancer diagnosis. In particular, we build a diagnosis framework as a comparison benchmark and take specific concerns of medical decision systems into account in the evaluation. The evaluation methodologies include feature discriminative power evaluation and final diagnosis performance comparison. Experimentation on a publicly accessible histopathology image set suggests that the H and E decomposition model outperforms the other assessed color spaces. As for the reasons behind the varied performance of color spaces, our analysis via mutual information estimation demonstrates that the color components in the H and E model are less dependent, so most of the feature discriminative power is concentrated in one channel instead of spreading out among channels as in other color spaces.
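A minimal sketch of assembling the color representations compared above with scikit-image; rgb2hed gives a stain-decomposition space (hematoxylin/eosin/DAB) standing in for the paper's H and E decomposition model, and the per-channel means are only a placeholder feature.

```python
import numpy as np
from skimage.color import rgb2hsv, rgb2lab, rgb2hed

def color_representations(rgb):
    """Return an H&E-stained RGB tile (H x W x 3, float in [0, 1]) in the
    color models compared in the study."""
    return {
        "RGB": rgb,
        "HSV": rgb2hsv(rgb),
        "Lab": rgb2lab(rgb),
        "HED": rgb2hed(rgb),   # stain separation; H and E are channels 0 and 1
    }

def channel_means(representations):
    """Per-channel mean features, a placeholder for the paper's feature extraction."""
    return {name: img.reshape(-1, img.shape[-1]).mean(axis=0)
            for name, img in representations.items()}
```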
Hamit, Murat; Yun, Weikang; Yan, Chuanbo; Kutluk, Abdugheni; Fang, Yang; Alip, Elzat
2015-06-01
Image feature extraction is an important part of image processing and an important field of research and application of image processing technology. Uygur medicine is a branch of traditional Chinese medicine that is receiving increasing research attention, yet large amounts of Uygur medicine data have not been fully utilized. In this study, we extracted color histogram features from images of Xinjiang Uygur herbal and zooid medicines. First, we performed preprocessing, including image color enhancement, size normalization, and color space transformation. Then we extracted the color histogram features and analyzed them with statistical methods. Finally, we evaluated the classification ability of the features by Bayes discriminant analysis. Experimental results showed that high accuracy for Uygur medicine image classification was obtained using color histogram features. This study should be helpful for content-based medical image retrieval of Xinjiang Uygur medicine.
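A minimal sketch of the histogram-plus-Bayes idea: HSV color histograms as features and Gaussian naive Bayes standing in for the Bayes discriminant analysis used in the study; the bin count and color space are illustrative.

```python
import numpy as np
from skimage.color import rgb2hsv
from sklearn.naive_bayes import GaussianNB

def color_histogram_feature(rgb, bins=16):
    """Concatenated per-channel HSV histograms of a float RGB image in [0, 1]."""
    hsv = rgb2hsv(rgb)
    feats = [np.histogram(hsv[..., c], bins=bins, range=(0, 1), density=True)[0]
             for c in range(3)]
    return np.concatenate(feats)

def train_medicine_classifier(images, labels):
    """Gaussian naive Bayes classifier over color histogram features."""
    X = np.stack([color_histogram_feature(img) for img in images])
    return GaussianNB().fit(X, labels)
```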
An investigation on the intra-sample distribution of cotton color by using image analysis
USDA-ARS?s Scientific Manuscript database
The colorimeter principle is widely used to measure cotton color. This method provides the sample’s color grade, but the result does not include information about the color distribution or any variation within the sample. We conducted an investigation that used an image analysis method to study the ...
Paul R. Sheppard; Alex Wiedenhoeft
2007-01-01
This paper describes the removal of extraneous color from increment cores of conifers prior to reflected-light image analysis of tree rings. Ponderosa pine in central New Mexico was chosen for study. Peroxide bleaching was used as a pretreatment to remove extraneous color and still yield usable wood for image analysis. The cores were bleached in 3% peroxide raised to...
Quantitative characterization of color Doppler images: reproducibility, accuracy, and limitations.
Delorme, S; Weisser, G; Zuna, I; Fein, M; Lorenz, A; van Kaick, G
1995-01-01
A computer-based quantitative analysis for color Doppler images of complex vascular formations is presented. The red-green-blue (RGB) signal from an Acuson XP10 is frame-grabbed and digitized. By matching each image pixel with the color bar, color pixels are identified and assigned to the corresponding flow velocity (color value). Data analysis consists of delineation of a region of interest and calculation of the relative number of color pixels in this region (color pixel density) as well as the mean color value. The mean color value was compared to flow velocities in a flow phantom. The thyroid and carotid artery in a volunteer were repeatedly examined by a single examiner to assess intra-observer variability. The thyroids in five healthy controls were examined by three experienced physicians to assess the extent of inter-observer variability and observer bias. The correlation between the mean color value and flow velocity ranged from 0.94 to 0.96 for a range of velocities determined by pulse repetition frequency. The average deviation of the mean color value from the flow velocity was 22% to 41%, depending on the selected pulse repetition frequency (range of deviations, -46% to +66%). Flow velocity was underestimated with an inadequately low pulse repetition frequency or an inadequately high reject threshold. An overestimation occurred with an inadequately high pulse repetition frequency. The highest intra-observer variability was 22% (relative standard deviation) for the color pixel density, and 9.1% for the mean color value. The inter-observer variation was approximately 30% for the color pixel density, and 20% for the mean color value. In conclusion, computer-assisted image analysis permits an objective description of color Doppler images. However, the user must be aware that image acquisition under in vivo conditions as well as physical and instrumental factors may considerably influence the results.
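A minimal sketch of the core lookup described above, assuming the color bar has already been sampled into an array of RGB entries with known velocities; the nearest-neighbor match and the tolerance separating color pixels from gray-scale tissue are illustrative.

```python
import numpy as np

def quantify_color_doppler(frame, roi_mask, colorbar_rgb, colorbar_velocity, tol=20.0):
    """Match each ROI pixel of an RGB frame to the nearest color-bar entry and
    return the color pixel density (fraction of ROI pixels within `tol` of a
    color-bar color) and the mean color value (mean velocity of those pixels)."""
    pixels = frame[roi_mask].astype(float)                      # N x 3
    dist = np.linalg.norm(pixels[:, None, :] - colorbar_rgb[None, :, :], axis=2)
    nearest = dist.argmin(axis=1)
    matched = dist.min(axis=1) < tol
    color_pixel_density = float(matched.mean())
    mean_color_value = float(colorbar_velocity[nearest[matched]].mean()) if matched.any() else 0.0
    return color_pixel_density, mean_color_value
```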
True Color Image Analysis For Determination Of Bone Growth In Fluorochromic Biopsies
NASA Astrophysics Data System (ADS)
Madachy, Raymond J.; Chotivichit, Lee; Huang, H. K.; Johnson, Eric E.
1989-05-01
A true color imaging technique has been developed for analysis of microscopic fluorochromic bone biopsy images to quantify new bone growth. The technique searches for specified colors in a medical image for quantification of areas of interest. Based on a user-supplied training set, a multispectral classification of pixel values is performed and used to segment the image. Good results were obtained when compared to manual tracings of new bone growth performed by an orthopedic surgeon. At a 95% confidence level, the hypothesis that there is no difference between the two methods can be accepted. Work is in progress to test bone biopsies with different colored stains and to further optimize the analysis process using three-dimensional spectral ordering techniques.
Li, Xingyu; Plataniotis, Konstantinos N
2015-07-01
In digital histopathology, tasks of segmentation and disease diagnosis are achieved by quantitative analysis of image content. However, color variation in image samples makes it challenging to produce reliable results. This paper introduces a complete normalization scheme to address the problem of color variation in histopathology images jointly caused by inconsistent biopsy staining and nonstandard imaging conditions. Unlike existing normalization methods that either address a partial cause of color variation or lump the causes together, our method identifies the causes of color variation based on a microscopic imaging model and addresses inconsistency in biopsy imaging and staining with an illuminant normalization module and a spectral normalization module, respectively. In the evaluation, we use two public datasets that are representative of histopathology images commonly received in clinics to examine the proposed method with respect to robustness to system settings, performance consistency on achromatic pixels, and normalization effectiveness in terms of histological information preservation. Because the saturation-weighted statistics proposed in this study generate stable and reliable color cues for stain normalization, our scheme is robust to system parameters and insensitive to image content and achromatic colors. Extensive experimentation suggests that our approach outperforms state-of-the-art normalization methods and is the only approach that succeeds in preserving histological information after normalization. The proposed color normalization solution should be useful in mitigating the effects of color variation in pathology images on subsequent quantitative analysis.
Yoon, Woong Bae; Kim, Hyunjin; Kim, Kwang Gi; Choi, Yongdoo; Chang, Hee Jin
2016-01-01
Objectives: We produced hematoxylin and eosin (H&E) staining-like color images by using confocal laser scanning microscopy (CLSM), which can obtain the same or more information in comparison to conventional tissue staining. Methods: We improved the images by using several image-converting techniques, including morphological methods, color space conversion methods, and segmentation methods. Results: An image obtained after image processing showed coloring very similar to that in images produced by H&E staining, and it is advantageous to conduct analysis through fluorescent dye imaging and microscopy rather than analysis based on a single microscopic image. Conclusions: The colors used in CLSM are different from those seen in H&E staining, which is the method most widely used for pathologic diagnosis and is familiar to pathologists. Computer technology can facilitate the conversion of CLSM images into images very similar to H&E-stained images. We believe that the technique used in this study has great potential for application in clinical tissue analysis. PMID:27525165
Yoon, Woong Bae; Kim, Hyunjin; Kim, Kwang Gi; Choi, Yongdoo; Chang, Hee Jin; Sohn, Dae Kyung
2016-07-01
We produced hematoxylin and eosin (H&E) staining-like color images by using confocal laser scanning microscopy (CLSM), which can obtain the same or more information in comparison to conventional tissue staining. We improved the images by using several image-converting techniques, including morphological methods, color space conversion methods, and segmentation methods. An image obtained after image processing showed coloring very similar to that in images produced by H&E staining, and it is advantageous to conduct analysis through fluorescent dye imaging and microscopy rather than analysis based on a single microscopic image. The colors used in CLSM are different from those seen in H&E staining, which is the method most widely used for pathologic diagnosis and is familiar to pathologists. Computer technology can facilitate the conversion of CLSM images into images very similar to H&E-stained images. We believe that the technique used in this study has great potential for application in clinical tissue analysis.
Visual wetness perception based on image color statistics.
Sawayama, Masataka; Adelson, Edward H; Nishida, Shin'ya
2017-05-01
Color vision provides humans and animals with the abilities to discriminate colors based on the wavelength composition of light and to determine the location and identity of objects of interest in cluttered scenes (e.g., ripe fruit among foliage). However, we argue that color vision can inform us about much more than color alone. Since a trichromatic image carries more information about the optical properties of a scene than a monochromatic image does, color can help us recognize complex material qualities. Here we show that human vision uses color statistics of an image for the perception of an ecologically important surface condition (i.e., wetness). Psychophysical experiments showed that overall enhancement of chromatic saturation, combined with a luminance tone change that increases the darkness and glossiness of the image, tended to make dry scenes look wetter. Theoretical analysis along with image analysis of real objects indicated that our image transformation, which we call the wetness enhancing transformation, is consistent with actual optical changes produced by surface wetting. Furthermore, we found that the wetness enhancing transformation operator was more effective for the images with many colors (large hue entropy) than for those with few colors (small hue entropy). The hue entropy may be used to separate surface wetness from other surface states having similar optical properties. While surface wetness and surface color might seem to be independent, there are higher order color statistics that can influence wetness judgments, in accord with the ecological statistics. The present findings indicate that the visual system uses color image statistics in an elegant way to help estimate the complex physical status of a scene.
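A minimal sketch of a wetness-enhancing style transformation in the spirit described above (boost chromatic saturation, darken the tone curve) together with the hue-entropy statistic; the gain and gamma values are illustrative assumptions, not the authors' published operator.

```python
import numpy as np
from skimage.color import rgb2hsv, hsv2rgb

def wetness_enhance(rgb, sat_gain=1.5, gamma=1.8):
    """Make a dry-looking scene appear wetter: increase saturation and apply a
    darkening tone curve to the value channel (illustrative parameters)."""
    hsv = rgb2hsv(np.clip(rgb, 0.0, 1.0))
    hsv[..., 1] = np.clip(hsv[..., 1] * sat_gain, 0.0, 1.0)   # saturation boost
    hsv[..., 2] = hsv[..., 2] ** gamma                        # darker, glossier tones
    return hsv2rgb(hsv)

def hue_entropy(rgb, bins=36):
    """Entropy of the hue histogram, the statistic the study relates to how
    effective the transformation is."""
    p, _ = np.histogram(rgb2hsv(rgb)[..., 0], bins=bins, range=(0, 1))
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```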
Cai, Jinhai; Okamoto, Mamoru; Atieno, Judith; Sutton, Tim; Li, Yongle; Miklavcic, Stanley J.
2016-01-01
Leaf senescence, an indicator of plant age and ill health, is an important phenotypic trait for the assessment of a plant’s response to stress. Manual inspection of senescence, however, is time consuming, inaccurate and subjective. In this paper we propose an objective evaluation of plant senescence by color image analysis for use in a high throughput plant phenotyping pipeline. As high throughput phenotyping platforms are designed to capture whole-of-plant features, camera lenses and camera settings are inappropriate for the capture of fine detail. Specifically, plant colors in images may not represent true plant colors, leading to errors in senescence estimation. Our algorithm features a color distortion correction and image restoration step prior to a senescence analysis. We apply our algorithm to two time series of images of wheat and chickpea plants to quantify the onset and progression of senescence. We compare our results with senescence scores resulting from manual inspection. We demonstrate that our procedure is able to process images in an automated way for an accurate estimation of plant senescence even from color distorted and blurred images obtained under high throughput conditions. PMID:27348807
NASA Astrophysics Data System (ADS)
Ozolinsh, Maris; Fomins, Sergejs
2010-11-01
Multispectral color analysis was used for spectral scanning of the Ishihara and Rabkin color deficiency test book images. It was done using tunable liquid-crystal (LC) filters built into the Nuance II analyzer. Multispectral analysis preserves both the spatial content of the tests and their spectral content. Images were taken in the range of 420-720 nm with a 10 nm step. We calculated retinal neural activity charts taking into account cone sensitivity functions, and processed the charts to find the visibility of latent symbols in the color deficiency plates using a cross-correlation technique. In this way a quantitative measure is obtained for each diagnostic plate for three different color deficiency types - protanopes, deuteranopes, and tritanopes. Multispectral color analysis also allows determination of the CIE xyz color coordinates of the pseudoisochromatic plate design elements and statistical analysis of these data to compare the color quality of available color deficiency test books.
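A minimal sketch of turning a multispectral stack into cone-activity charts and a simple latent-symbol visibility score; the cone sensitivity arrays are placeholders for published L/M/S fundamentals sampled at the same wavelengths as the cube.

```python
import numpy as np

def cone_activity_charts(cube, sensitivities):
    """cube: H x W x B multispectral image sampled at B wavelengths.
    sensitivities: 3 x B array of L, M, S cone sensitivities at those
    wavelengths (placeholder values). Returns an H x W x 3 activity chart."""
    return np.tensordot(cube, sensitivities.T, axes=([2], [0]))

def latent_symbol_visibility(channel, template):
    """Normalized cross-correlation between one cone-activity channel and a
    same-sized template of the latent symbol, as a simple visibility measure."""
    a = (channel - channel.mean()) / (channel.std() + 1e-12)
    t = (template - template.mean()) / (template.std() + 1e-12)
    return float((a * t).mean())
```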
Pet fur color and texture classification
NASA Astrophysics Data System (ADS)
Yen, Jonathan; Mukherjee, Debarghar; Lim, SukHwan; Tretter, Daniel
2007-01-01
Object segmentation is important in image analysis for imaging tasks such as image rendering and image retrieval. Pet owners have been known to be quite vocal about how important it is to render their pets perfectly. We present here an algorithm for pet (mammal) fur color classification and an algorithm for pet (animal) fur texture classification. Pet fur color classification can be applied as a necessary condition for identifying regions of an image that may contain pets, much like skin tone classification for human flesh detection. As a result of evolution, the fur coloration of all mammals is caused by a natural organic pigment, melanin, which has only a very limited color range. We conducted a statistical analysis and concluded that mammal fur colors can only be levels of gray or one of two colors after proper color quantization. This pet fur color classification algorithm has been applied to pet-eye detection. We also present an algorithm for animal fur texture classification using the recently developed multi-resolution directional sub-band contourlet transform. The experimental results are very promising, as these transforms can identify regions of an image that may contain the fur of mammals, the scales of reptiles, the feathers of birds, etc. Combining the color and texture classification, one can build a set of strong classifiers for identifying possible animals in an image.
Xu, Yihua; Pitot, Henry C
2006-03-01
In studies of quantitative stereology of rat hepatocarcinogenesis, we have used image analysis technology (automatic particle analysis) to obtain data such as liver tissue area, the size and location of altered hepatic focal lesions (AHF), and nuclei counts. These data are then used for three-dimensional estimation of AHF occurrence and nuclear labeling index analysis. These are important parameters for quantitative studies of carcinogenesis, for screening and classifying carcinogens, and for risk estimation. To take such measurements, structures or cells of interest must be separated from the other components based on differences in color and density. Common background problems in captured sample images, such as uneven light illumination or color shading, can cause severe problems in the measurement. Two application programs (BK_Correction and Pixel_Separator) have been developed to solve these problems. With BK_Correction, common background problems such as incorrect color temperature setting, color shading, and uneven illumination background can be corrected. With Pixel_Separator, different types of objects can be separated from each other according to their color, such as the differently colored objects seen in immunohistochemically stained slides. The resulting images of objects separated from other components are then ready for particle analysis. Objects that have the same darkness but different colors can be accurately differentiated in a grayscale image analysis system after application of these programs.
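A minimal sketch of the kind of background correction such a program performs, assuming an empty-field (background) image is available; this generic flat-field division is a stand-in, not the BK_Correction implementation itself.

```python
import numpy as np

def flat_field_correct(image, background, eps=1e-6):
    """Correct uneven illumination and color shading by dividing each channel
    by the empty-field image and rescaling to the background's mean level.
    Both inputs are H x W x 3 arrays of the same size."""
    image = image.astype(float)
    background = background.astype(float)
    corrected = image / (background + eps) * background.mean(axis=(0, 1))
    return np.clip(corrected, 0, 255)
```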
Contrast enhancement of bite mark images using the grayscale mixer in ACR in Photoshop®.
Evans, Sam; Noorbhai, Suzanne; Lawson, Zoe; Stacey-Jones, Seren; Carabott, Romina
2013-05-01
Enhanced images may improve bite mark edge definition, assisting forensic analysis. Current contrast enhancement involves color extraction, viewing layered images by channel. A novel technique, producing a single enhanced image using the grayscale mix panel within Adobe Camera Raw®, has been developed and assessed here, allowing adjustment of multiple color channels simultaneously. Stage 1 measured RGB values in 72 versions of a color chart image; eight sliders in Photoshop® were adjusted at 25% intervals, with all corresponding colors affected. Stage 2 used a bite mark image and found that only the red, orange, and yellow sliders had discernible effects. Stage 3 assessed modality preference between color, grayscale, and enhanced images; on average, the 22 survey participants chose the enhanced image as better defined for nine out of 10 bite marks. The study has shown potential benefits for this new technique. However, further research is needed before use in the analysis of bite marks. © 2013 American Academy of Forensic Sciences.
Kaur, Ravneet; Albano, Peter P.; Cole, Justin G.; Hagerty, Jason; LeAnder, Robert W.; Moss, Randy H.; Stoecker, William V.
2015-01-01
Background/Purpose: Early detection of malignant melanoma is an important public health challenge. In the USA, dermatologists are seeing more melanomas at an early stage, before classic melanoma features have become apparent. Pink color is a feature of these early melanomas. If rapid and accurate automatic detection of pink color in these melanomas could be accomplished, there could be significant public health benefits. Methods: Detection of three shades of pink (light pink, dark pink, and orange pink) was accomplished using color analysis techniques in five color planes (red, green, blue, hue, and saturation). Color shade analysis was performed using a logistic regression model trained with an image set of 60 dermoscopic images of melanoma that contained pink areas. Detected pink shade areas were further analyzed with regard to the location within the lesion, average color parameters over the detected areas, and histogram texture features. Results: Logistic regression analysis of a separate set of 128 melanomas and 128 benign images resulted in up to 87.9% accuracy in discriminating melanoma from benign lesions measured using area under the receiver operating characteristic curve. The accuracy in this model decreased when parameters for individual shades, texture, or shade location within the lesion were omitted. Conclusion: Texture, color, and lesion location analysis applied to multiple shades of pink can assist in melanoma detection. When any of these three details (color location, shade analysis, or texture analysis) was omitted from the model, accuracy in separating melanoma from benign lesions was lowered. Separation of colors into shades and further details that enhance the characterization of these color shades are needed for optimal discrimination of melanoma from benign lesions. PMID:25809473
Kaur, R; Albano, P P; Cole, J G; Hagerty, J; LeAnder, R W; Moss, R H; Stoecker, W V
2015-11-01
Early detection of malignant melanoma is an important public health challenge. In the USA, dermatologists are seeing more melanomas at an early stage, before classic melanoma features have become apparent. Pink color is a feature of these early melanomas. If rapid and accurate automatic detection of pink color in these melanomas could be accomplished, there could be significant public health benefits. Detection of three shades of pink (light pink, dark pink, and orange pink) was accomplished using color analysis techniques in five color planes (red, green, blue, hue, and saturation). Color shade analysis was performed using a logistic regression model trained with an image set of 60 dermoscopic images of melanoma that contained pink areas. Detected pink shade areas were further analyzed with regard to the location within the lesion, average color parameters over the detected areas, and histogram texture features. Logistic regression analysis of a separate set of 128 melanomas and 128 benign images resulted in up to 87.9% accuracy in discriminating melanoma from benign lesions measured using area under the receiver operating characteristic curve. The accuracy in this model decreased when parameters for individual shades, texture, or shade location within the lesion were omitted. Texture, color, and lesion location analysis applied to multiple shades of pink can assist in melanoma detection. When any of these three details (color location, shade analysis, or texture analysis) was omitted from the model, accuracy in separating melanoma from benign lesions was lowered. Separation of colors into shades and further details that enhance the characterization of these color shades are needed for optimal discrimination of melanoma from benign lesions. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
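A minimal sketch of the approach described in the two records above: per-lesion color features in the red, green, blue, hue, and saturation planes scored with logistic regression. The mean-value features and any training data are placeholders, not the authors' trained model or full shade/texture/location feature set.

```python
import numpy as np
from skimage.color import rgb2hsv
from sklearn.linear_model import LogisticRegression

def pink_color_features(rgb, lesion_mask):
    """Mean red, green, blue, hue, and saturation over the lesion region of a
    float RGB dermoscopic image in [0, 1] (placeholder feature set)."""
    hsv = rgb2hsv(rgb)
    mean_rgb = rgb[lesion_mask].mean(axis=0)
    mean_hs = hsv[lesion_mask][:, :2].mean(axis=0)
    return np.concatenate([mean_rgb, mean_hs])

def train_melanoma_classifier(feature_rows, labels):
    """Logistic regression scoring melanoma (1) versus benign (0) lesions."""
    return LogisticRegression(max_iter=1000).fit(np.stack(feature_rows), labels)
```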
Quantifying Human Visible Color Variation from High Definition Digital Images of Orb Web Spiders.
Tapia-McClung, Horacio; Ajuria Ibarra, Helena; Rao, Dinesh
2016-01-01
Digital processing and analysis of high resolution images of 30 individuals of the orb web spider Verrucosa arenata were performed to extract and quantify human visible colors present on the dorsal abdomen of this species. Color extraction was performed with minimal user intervention using an unsupervised algorithm to determine groups of colors on each individual spider, which were then analyzed in order to quantify and classify the colors obtained, both spatially and using energy and entropy measures of the digital images. Analysis shows that the colors cover a small region of the visible spectrum, are not spatially homogeneously distributed over the patterns, and, from an entropic point of view, colors that cover a smaller region of the whole pattern carry more information than colors covering a larger region. This study demonstrates the use of processing tools to create automatic systems to extract valuable information from digital images that are precise, efficient, and helpful for the understanding of the underlying biology.
Quantifying Human Visible Color Variation from High Definition Digital Images of Orb Web Spiders
Ajuria Ibarra, Helena; Rao, Dinesh
2016-01-01
Digital processing and analysis of high resolution images of 30 individuals of the orb web spider Verrucosa arenata were performed to extract and quantify human visible colors present on the dorsal abdomen of this species. Color extraction was performed with minimal user intervention using an unsupervised algorithm to determine groups of colors on each individual spider, which were then analyzed in order to quantify and classify the colors obtained, both spatially and using energy and entropy measures of the digital images. Analysis shows that the colors cover a small region of the visible spectrum, are not spatially homogeneously distributed over the patterns, and, from an entropic point of view, colors that cover a smaller region of the whole pattern carry more information than colors covering a larger region. This study demonstrates the use of processing tools to create automatic systems to extract valuable information from digital images that are precise, efficient, and helpful for the understanding of the underlying biology. PMID:27902724
Kikuchi, Kumiko; Masuda, Yuji; Yamashita, Toyonobu; Kawai, Eriko; Hirao, Tetsuji
2015-05-01
Heterogeneity with respect to skin color tone is one of the key factors in visual perception of facial attractiveness and age. However, there have been few studies on quantitative analyses of the color heterogeneity of facial skin. The purpose of this study was to develop image evaluation methods for skin color heterogeneity focusing on skin chromophores and then characterize ethnic differences and age-related changes. A facial imaging system equipped with an illumination unit and a high-resolution digital camera was used to develop image evaluation methods for skin color heterogeneity. First, melanin and/or hemoglobin images were obtained using pigment-specific image-processing techniques, which involved conversion from Commission Internationale de l'Eclairage XYZ color values to melanin and/or hemoglobin indexes as measures of their contents. Second, a spatial frequency analysis with threshold settings was applied to the individual images. Cheek skin images of 194 healthy Asian and Caucasian female subjects were acquired using the imaging system. Applying this methodology, the skin color heterogeneity of Asian and Caucasian faces was characterized. The proposed pigment-specific image-processing techniques allowed visual discrimination of skin redness from skin pigmentation. In the heterogeneity analyses of cheek skin color, age-related changes in melanin were clearly detected in Asian and Caucasian skin. Furthermore, it was found that the heterogeneity indexes of hemoglobin were significantly higher in Caucasian skin than in Asian skin. We have developed evaluation methods for skin color heterogeneity by image analyses based on the major chromophores, melanin and hemoglobin, with special reference to their size. This methodology focusing on skin color heterogeneity should be useful for better understanding of aging and ethnic differences. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Quantifying nonhomogeneous colors in agricultural materials part I: method development.
Balaban, M O
2008-11-01
Measuring the color of food and agricultural materials using machine vision (MV) has advantages not available with other measurement methods such as subjective tests or use of color meters. The perception of consumers may be affected by the nonuniformity of colors. For relatively uniform colors, average color values similar to those given by color meters can be obtained by MV. For nonuniform colors, various image analysis methods (color blocks, contours, and "color change index" [CCI]) can be applied to images obtained by MV. The degree of nonuniformity can be quantified, depending on the level of detail desired. In this article, the development of the CCI concept is presented. For images with a wide range of hue values, the color blocks method quantifies the nonhomogeneity of colors well. For images with a narrow hue range, the CCI method is a better indicator of color nonhomogeneity.
Illuminant color estimation based on pigmentation separation from human skin color
NASA Astrophysics Data System (ADS)
Tanaka, Satomi; Kakinuma, Akihiro; Kamijo, Naohiro; Takahashi, Hiroshi; Tsumura, Norimichi
2015-03-01
Humans have a visual mechanism called "color constancy" that maintains the perceived colors of an object across various light sources. An effective color constancy algorithm has been proposed that uses human facial color in a digital color image; however, this method can produce erroneous estimates because of differences between individual facial colors. In this paper, we present a novel color constancy algorithm based on skin color analysis. The skin color analysis separates skin color into melanin, hemoglobin, and shading components. We use a stationary property of Japanese facial color, calculated from the melanin and hemoglobin components. As a result, we propose a method that uses the subject's facial color in the image without depending on individual differences among Japanese facial colors.
Spatial transform coding of color images.
NASA Technical Reports Server (NTRS)
Pratt, W. K.
1971-01-01
The application of the transform-coding concept to the coding of color images represented by three primary color planes of data is discussed. The principles of spatial transform coding are reviewed and the merits of various methods of color-image representation are examined. A performance analysis is presented for the color-image transform-coding system. Results of a computer simulation of the coding system are also given. It is shown that, by transform coding, the chrominance content of a color image can be coded with an average of 1.0 bits per element or less without serious degradation. If luminance coding is also employed, the average rate reduces to about 2.0 bits per element or less.
Kamei, Ryotaro; Watanabe, Yuji; Sagiyama, Koji; Isoda, Takuro; Togao, Osamu; Honda, Hiroshi
2018-05-23
The aim was to investigate the optimal monochromatic color combination for fusion imaging of FDG-PET and diffusion-weighted MR (DWI) images with regard to the lesion conspicuity of each image. Six linear monochromatic color maps (red, blue, green, cyan, magenta, and yellow) were assigned to the FDG-PET and DWI images. Total perceptual color differences of the lesions were calculated based on the lightness and chromaticity measured with a photometer. Visual lesion conspicuity was also compared among the PET-only, DWI-only, and PET-DWI double-positive portions using mean conspicuity scores. Statistical analysis was performed with a one-way analysis of variance and Spearman's rank correlation coefficient. Among all 12 possible monochromatic color-map combinations, the three combinations of red/cyan, magenta/green, and red/green produced the highest conspicuity scores. Total color differences between PET-positive and double-positive portions correlated with conspicuity scores (ρ = 0.2933, p < 0.005). Lightness differences showed a significant negative correlation with conspicuity scores between the PET-only and DWI-only positive portions. Chromaticity differences showed a marginally significant correlation with conspicuity scores between DWI-positive and double-positive portions. Monochromatic color combinations can facilitate the visual evaluation of FDG uptake and diffusivity, as well as registration accuracy, on FDG-PET/DWI fusion images when red- and green-colored elements are assigned to the FDG-PET and DWI images, respectively.
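A minimal sketch of the red/green monochromatic fusion favored above, assuming co-registered slices normalized to [0, 1]; the channel assignment follows the paper's conclusion, everything else is an illustrative choice.

```python
import numpy as np

def fuse_pet_dwi(pet_slice, dwi_slice):
    """Fuse co-registered, [0, 1]-normalized 2-D slices into one RGB image:
    FDG-PET drives the red channel and DWI drives the green channel, so
    double-positive regions appear yellow."""
    fused = np.zeros(pet_slice.shape + (3,), dtype=float)
    fused[..., 0] = pet_slice   # red   <- FDG-PET
    fused[..., 1] = dwi_slice   # green <- DWI
    return fused
```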
An improved K-means clustering algorithm in agricultural image segmentation
NASA Astrophysics Data System (ADS)
Cheng, Huifeng; Peng, Hui; Liu, Shanmei
Image segmentation is the first important step in image analysis and image processing. In this paper, according to the characteristics of color crop images, we first transform the color space of the image from RGB to HSI, and then select proper initial cluster centers and the cluster number using a mean-variance approach and rough set theory, followed by clustering, so as to rapidly and automatically segment the color components and accurately extract target objects from the background. This provides a reliable basis for the identification, analysis, and subsequent calculation and processing of crop images. Experimental results demonstrate that the improved k-means clustering algorithm reduces the computational load and enhances the precision and accuracy of clustering.
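A minimal sketch of the segmentation pipeline described above: convert RGB to HSI and cluster the pixels with k-means. A fixed cluster count and k-means++ seeding stand in for the paper's mean-variance and rough-set initialization.

```python
import numpy as np
from sklearn.cluster import KMeans

def rgb_to_hsi(rgb):
    """Convert an H x W x 3 RGB image (float in [0, 1]) to HSI."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - np.min(rgb, axis=-1) / (i + 1e-12)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b <= g, theta, 2 * np.pi - theta) / (2 * np.pi)
    return np.stack([h, s, i], axis=-1)

def segment_crop_image(rgb, n_clusters=3):
    """K-means clustering of pixels in HSI space; returns an H x W label map."""
    hsi = rgb_to_hsi(rgb)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)\
        .fit_predict(hsi.reshape(-1, 3))
    return labels.reshape(rgb.shape[:2])
```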
High-content analysis of single cells directly assembled on CMOS sensor based on color imaging.
Tanaka, Tsuyoshi; Saeki, Tatsuya; Sunaga, Yoshihiko; Matsunaga, Tadashi
2010-12-15
A complementary metal oxide semiconductor (CMOS) image sensor was applied to high-content analysis of single cells that were assembled closely or directly onto the CMOS sensor surface. The direct assembly of cell groups on the CMOS sensor surface allows large-field (6.66 mm × 5.32 mm, the entire active area of the CMOS sensor) imaging within a second. Trypan blue-stained and non-stained cells in the same field area on the CMOS sensor were successfully distinguished as white- and blue-colored images under white LED light irradiation. Furthermore, the chemiluminescent signals of each cell were successfully visualized as blue-colored images on the CMOS sensor only when HeLa cells were placed directly on the micro-lens array of the CMOS sensor. Our proposed approach will be a promising technique for real-time and high-content analysis of single cells over a large field area based on color imaging. Copyright © 2010 Elsevier B.V. All rights reserved.
Study on color difference estimation method of medicine biochemical analysis
NASA Astrophysics Data System (ADS)
Wang, Chunhong; Zhou, Yue; Zhao, Hongxia; Sun, Jiashi; Zhou, Fengkun
2006-01-01
Biochemical analysis is an important inspection and diagnosis method in hospital clinics, and the biochemical analysis of urine is one important item. A urine test paper shows a corresponding color for each detection item and degree of illness. The color difference between the standard threshold and the color of the urine test paper can be used to judge the degree of illness, so that further analysis and diagnosis of the urine can be obtained. Color is a three-dimensional psychophysical variable, while reflectance is one-dimensional; therefore, a color difference estimation method for urine tests can achieve better precision and convenience than the conventional test based on one-dimensional reflectance, and can support an accurate diagnosis. A digital camera can easily capture an image of the urine test paper and thus carry out urine biochemical analysis conveniently. In the experiment, color images of urine test paper were taken with a popular color digital camera and saved on a computer running simple color space conversion (RGB -> XYZ -> L*a*b*) and calculation software. Test samples were graded according to intelligent detection of quantitative color. Because the images taken each time are saved on the computer, the whole course of an illness can be monitored. This method can also be used in other medical biochemical analyses related to color. Experimental results show that this test method is quick and accurate; it can be used in hospitals, calibration organizations, and homes, so its application prospects are extensive.
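A minimal sketch of the color-difference step described above using scikit-image: convert the photographed test-pad color and a reference color to CIE L*a*b* and compute a ΔE value. The CIE76 metric and the sRGB assumption inside rgb2lab are illustrative choices.

```python
import numpy as np
from skimage.color import rgb2lab, deltaE_cie76

def pad_color_difference(pad_patch_rgb, reference_rgb):
    """Delta E (CIE76) between the mean color of a test-pad patch
    (H x W x 3 float RGB in [0, 1]) and a reference RGB color (length-3 array);
    the RGB -> XYZ -> L*a*b* conversion is handled inside rgb2lab."""
    pad_lab = rgb2lab(pad_patch_rgb).reshape(-1, 3).mean(axis=0)
    ref_lab = rgb2lab(np.asarray(reference_rgb, dtype=float).reshape(1, 1, 3))[0, 0]
    return float(deltaE_cie76(pad_lab, ref_lab))
```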
Iizaka, Shinji; Sugama, Junko; Nakagami, Gojiro; Kaitani, Toshiko; Naito, Ayumi; Koyanagi, Hiroe; Matsuo, Junko; Kadono, Takafumi; Konya, Chizuko; Sanada, Hiromi
2011-01-01
Granulation tissue color is one indicator for pressure ulcer (PU) assessment. However, it entails a subjective evaluation only, and quantitative methods have not been established. We developed color indicators from digital image analysis and investigated their concurrent validity and reliability for clinical PUs. A cross-sectional study was conducted on 47 patients with 55 full-thickness PUs. After color calibration, a wound photograph was converted into three images representing red color: the erythema index (EI), a modified erythema index with additional color calibration (granulation red index [GRI]), and a*, which represents the artificially created red-green axis of the L*a*b* color space. The mean intensity of the granulation tissue region and the percentage of pixels exceeding the optimal cutoff intensity (% intensity) were calculated. Mean GRI (ρ=0.39, p=0.007) and a* (ρ=0.55, p<0.001), as well as their % intensity indicators, showed positive correlations with a* measured by a tristimulus colorimeter, but the erythema index did not. They were correlated with hydroxyproline concentration in wound fluid, healthy granulation tissue area, and blood hemoglobin level. Intra- and interrater reliability of the indicator calculation using both GRI and a* had an intraclass correlation coefficient >0.9. GRI and a* from digital image analysis can quantitatively evaluate the granulation tissue color of clinical PUs. © 2011 by the Wound Healing Society.
Research on image complexity evaluation method based on color information
NASA Astrophysics Data System (ADS)
Wang, Hao; Duan, Jin; Han, Xue-hui; Xiao, Bo
2017-11-01
In order to evaluate the complexity of a color image more effectively and to find the connection between image complexity and image information, this paper presents a method to compute image complexity based on color information. The theoretical analysis first divides complexity at the subjective level into three classes: low complexity, medium complexity, and high complexity. It then carries out image feature extraction and finally establishes a function between the complexity value and the color characteristic model. The experimental results show that this evaluation method can objectively reconstruct the complexity of an image from the image features, and the results agree well with the complexity perceived by human vision, so the color-based image complexity measure has a certain reference value.
The Role of Color and Morphologic Characteristics in Dermoscopic Diagnosis.
Bajaj, Shirin; Marchetti, Michael A; Navarrete-Dechent, Cristian; Dusza, Stephen W; Kose, Kivanc; Marghoob, Ashfaq A
2016-06-01
Both colors and structures are considered important in the dermoscopic evaluation of skin lesions but their relative significance is unknown. To determine if diagnostic accuracy for common skin lesions differs between gray-scale and color dermoscopic images. A convenience sample of 40 skin lesions (8 nevi, 8 seborrheic keratoses, 7 basal cell carcinomas, 7 melanomas, 4 hemangiomas, 4 dermatofibromas, 2 squamous cell carcinomas [SCCs]) was selected and shown to attendees of a dermoscopy course (2014 Memorial Sloan Kettering Cancer Center dermoscopy course). Twenty lesions were shown only once, either in gray-scale (n = 10) or color (n = 10) (nonpaired). Twenty lesions were shown twice, once in gray-scale (n = 20) and once in color (n = 20) (paired). Participants provided their diagnosis and confidence level for each of the 60 images. Of the 261 attendees, 158 participated (60.5%) in the study. Most were attending physicians (n = 76 [48.1%]). Most participants were practicing or training in dermatology (n = 144 [91.1%]). The median (interquartile range) experience evaluating skin lesions and using dermoscopy of participants was 6 (13.5) and 2 (4.0) years, respectively. Diagnostic accuracy and confidence level of participants evaluating gray-scale and color images. Two separate analyses were performed: (1) an unpaired evaluation comparing gray-scale and color images shown either once or for the first time, and (2) a paired evaluation comparing pairs of gray-scale and color images of the same lesion. In univariate analysis of unpaired images, color images were less likely to be diagnosed correctly compared with gray-scale images (odds ratio [OR], 0.8; P < .001). Using gray-scale images as the reference, multivariate analyses of both unpaired and paired images found no association between correct lesion diagnosis and use of color images (OR, 1.0; P = .99, and OR, 1.2; P = .82, respectively). Stratified analysis of paired images using a color by diagnosis interaction term showed that participants were more likely to make a correct diagnosis of SCC and hemangioma in color (P < .001 for both comparisons) and dermatofibroma in gray-scale (P < .001). Morphologic characteristics (ie, structures and patterns), not color, provide the primary diagnostic clue in dermoscopy. Use of gray-scale images may improve teaching of dermoscopy to novices by emphasizing the evaluation of morphology.
NASA Astrophysics Data System (ADS)
Wang, G. H.; Wang, H. B.; Fan, W. F.; Liu, Y.; Chen, C.
2018-04-01
The traditional change detection algorithm mainly depends on the spectral information of image objects and fails to effectively mine and fuse multiple image features. Borrowing ideas from object-oriented analysis, this article proposes a multi-feature fusion change detection algorithm for remote sensing images. First, image objects are obtained by multi-scale segmentation; then the color histogram and linear gradient (edge) histogram of each object are calculated. Using the earth mover's distance (EMD) between the histograms of the same object in different periods, the color feature distance and the edge line feature distance are combined by an adaptive weighting method to construct the object heterogeneity. Finally, histogram curvature analysis of the heterogeneity yields the change detection result for each image object. The experimental results show that the method can fully fuse the color and edge line features, thus improving the accuracy of change detection.
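A minimal sketch of the per-object distance computation described above: earth mover's distances between color and gradient histograms of one object at two dates, combined by weighting. scipy's 1-D Wasserstein distance stands in for the paper's EMD operator, and the fixed weights replace its adaptive weighting.

```python
import numpy as np
from scipy.stats import wasserstein_distance

BINS = np.linspace(0.0, 1.0, 33)[:-1]   # 32 bin centers on [0, 1)

def norm_hist(values, bins=32, value_range=(0.0, 1.0)):
    """Normalized histogram of a 1-D feature (e.g., hue or gradient magnitude)."""
    h, _ = np.histogram(values, bins=bins, range=value_range)
    return h / max(h.sum(), 1)

def object_heterogeneity(color_t1, color_t2, grad_t1, grad_t2,
                         w_color=0.5, w_edge=0.5):
    """Weighted combination of color-histogram and edge-histogram EMDs for one
    segmented object observed at two acquisition dates."""
    d_color = wasserstein_distance(BINS, BINS, norm_hist(color_t1), norm_hist(color_t2))
    d_edge = wasserstein_distance(BINS, BINS, norm_hist(grad_t1), norm_hist(grad_t2))
    return w_color * d_color + w_edge * d_edge
```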
Color image processing and vision system for an automated laser paint-stripping system
NASA Astrophysics Data System (ADS)
Hickey, John M., III; Hise, Lawson
1994-10-01
Color image processing in machine vision systems has not gained general acceptance. Most machine vision systems use images that are shades of gray. The Laser Automated Decoating System (LADS) required a vision system which could discriminate between substrates of various colors and textures and paints ranging from semi-gloss grays to high gloss red, white and blue (Air Force Thunderbirds). The changing lighting levels produced by the pulsed CO2 laser mandated a vision system that did not require a constant color temperature lighting for reliable image analysis.
NPS assessment of color medical image displays using a monochromatic CCD camera
NASA Astrophysics Data System (ADS)
Roehrig, Hans; Gu, Xiliang; Fan, Jiahua
2012-10-01
This paper presents an approach to Noise Power Spectrum (NPS) assessment of color medical displays without using an expensive imaging colorimeter. Uniform R, G, and B color patterns were shown on the display under study, and images were taken using a high-resolution monochromatic camera. A colorimeter was used to calibrate the camera images. Synthetic intensity images were formed by the weighted sum of the R, G, B, and dark-screen images. Finally, the NPS analysis was conducted on the synthetic images. The proposed method replaces an expensive imaging colorimeter for NPS evaluation, which also suggests a potential solution for routine color medical display QA/QC in the clinical area, especially when imaging of display devices is desired.
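A minimal sketch of a 2-D noise power spectrum estimate on such a synthetic intensity image, using the usual detrend-and-FFT recipe over small ROIs; the ROI size and pixel pitch are placeholder parameters, not values from the paper.

```python
import numpy as np

def noise_power_spectrum(image, roi=128, pixel_pitch_mm=0.1):
    """Average 2-D NPS over non-overlapping ROIs of a uniform intensity image.
    Each ROI is mean-detrended before the FFT; the result is scaled by the
    pixel area divided by the ROI size, as in the usual NPS definition."""
    rows, cols = image.shape
    spectra = []
    for r in range(0, rows - roi + 1, roi):
        for c in range(0, cols - roi + 1, roi):
            block = image[r:r + roi, c:c + roi].astype(float)
            block -= block.mean()                      # remove the DC component
            f = np.fft.fftshift(np.fft.fft2(block))
            spectra.append(np.abs(f) ** 2)
    return np.mean(spectra, axis=0) * (pixel_pitch_mm ** 2) / (roi * roi)
```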
Chiao, Chuan-Chin; Wickiser, J Kenneth; Allen, Justine J; Genter, Brock; Hanlon, Roger T
2011-05-31
Camouflage is a widespread phenomenon throughout nature and an important antipredator tactic in natural selection. Many visual predators have keen color perception, and thus camouflage patterns should provide some degree of color matching in addition to other visual factors such as pattern, contrast, and texture. Quantifying camouflage effectiveness in the eyes of the predator is a challenge from the perspectives of both biology and optical imaging technology. Here we take advantage of hyperspectral imaging (HSI), which records full-spectrum light data, to simultaneously visualize color match and pattern match in the spectral and the spatial domains, respectively. Cuttlefish can dynamically camouflage themselves on any natural substrate and, despite their colorblindness, produce body patterns that appear to have high-fidelity color matches to the substrate when viewed directly by humans or with RGB images. Live camouflaged cuttlefish on natural backgrounds were imaged using HSI, and subsequent spectral analysis revealed that most reflectance spectra of individual cuttlefish and substrates were similar, rendering the color match possible. Modeling color vision of potential di- and trichromatic fish predators of cuttlefish corroborated the spectral match analysis and demonstrated that camouflaged cuttlefish show good color match as well as pattern match in the eyes of fish predators. These findings (i) indicate the strong potential of HSI technology to enhance studies of biological coloration and (ii) provide supporting evidence that cuttlefish can produce color-coordinated camouflage on natural substrates despite lacking color vision.
Citrus fruit recognition using color image analysis
NASA Astrophysics Data System (ADS)
Xu, Huirong; Ying, Yibin
2004-10-01
An algorithm for the automatic recognition of citrus fruit on the tree was developed. Citrus fruits differ in color from the leaf and branch portions. Fifty-three color images of natural citrus-grove scenes were digitized and analyzed for red, green, and blue (RGB) color content. The color characteristics of target surfaces (fruits, leaves, or branches) were extracted using a region of interest (ROI) tool. Several types of contrast color indices were designed and tested. In this study, the fruit image was enhanced using the (R-B) contrast color index, because the results show that the fruit has the highest color difference among the objects in the image. A dynamic threshold function was derived from this color model and used to distinguish citrus fruit from the background. The results show that the algorithm worked well under frontlighting or backlighting conditions. However, misclassifications occur when the fruit or the background is under brighter sunlight.
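A minimal sketch of the (R-B) contrast color index step; Otsu's method is used here as a stand-in for the dynamic threshold function, which the abstract does not specify.

```python
import numpy as np
from skimage.filters import threshold_otsu

def segment_citrus(rgb):
    """Enhance fruit with the (R - B) contrast color index and binarize it.
    `rgb` is an H x W x 3 image; Otsu thresholding replaces the original
    dynamic threshold function."""
    index = rgb[..., 0].astype(float) - rgb[..., 2].astype(float)   # R - B
    return index > threshold_otsu(index)    # True where citrus fruit is likely
```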
Temporal changes in tongue color as criterion for tongue diagnosis in Kampo medicine.
Yamamoto, Satoshi; Ishikawa, Yuya; Nakaguchi, Toshiya; Ogawa-Ochiai, Keiko; Tsumura, Norimichi; Kasahara, Yuji; Namiki, Takao; Miyake, Yoichi
2012-01-01
In Kampo medicine (Japanese traditional herbal medicine), the appearance of the tongue contains a lot of useful information for diagnosis. However, inspection of the tongue is not considered important in modern medical diagnosis, since the skills applied in the examination are difficult to understand. Thus, we developed an imaging system and algorithm for quantitative analysis of the tongue to provide the traditional techniques of Kampo with greater objectivity. Tongue images were taken from 9 healthy subjects for 3 consecutive weeks (5 days/week), 12 times a day, with 300 images taken successively within 30 s each time. The temporal color changes over 30 s, 1 day, and 3 weeks were then measured in the device-independent International Commission on Illumination (CIE) 1976 L*a*b* color space. The tongue color change within 30 s varied between individuals and was mainly classified into 3 patterns. This image acquisition system and valid color management should help all tongue-related research, and the 30-s temporal color change might be an important target for further tongue analysis. We were able to acquire tongue images without specular reflection and with valid color reproduction, and the color change within 30 s was found to vary. Tongue color changes have not been mentioned in the classics of Kampo medicine, since they were certainly impossible to discriminate with the naked eye. The change over 30 s is a new finding made possible by electronic devices, and such measurements are expected to become a new criterion for tongue analysis. Copyright © 2012 S. Karger AG, Basel.
Procurement specification color graphic camera system
NASA Technical Reports Server (NTRS)
Prow, G. E.
1980-01-01
The performance and design requirements for a Color Graphic Camera System are presented. The system is a functional part of the Earth Observation Department Laboratory System (EODLS) and will be interfaced with Image Analysis Stations. It will convert the output of a raster scan computer color terminal into permanent, high resolution photographic prints and transparencies. Images usually displayed will be remotely sensed LANDSAT imager scenes.
Quantitative image analysis of immunohistochemical stains using a CMYK color model
Pham, Nhu-An; Morrison, Andrew; Schwock, Joerg; Aviel-Ronen, Sarit; Iakovlev, Vladimir; Tsao, Ming-Sound; Ho, James; Hedley, David W
2007-01-01
Background: Computer image analysis techniques have decreased the effects of observer bias and increased the sensitivity and throughput of immunohistochemistry (IHC) as a tissue-based procedure for the evaluation of diseases. Methods: We adapted a Cyan/Magenta/Yellow/Key (CMYK) model for automated computer image analysis to quantify IHC stains in hematoxylin counterstained histological sections. Results: The spectral characteristics of the chromogens AEC, DAB, and NovaRed as well as the counterstain hematoxylin were first determined using CMYK, Red/Green/Blue (RGB), normalized RGB, and Hue/Saturation/Lightness (HSL) color models. The contrast of chromogen intensities on a 0–255 scale (24-bit image file), both in absolute terms and relative to the hematoxylin counterstain, was greatest in the Yellow channel of the CMYK color model, suggesting improved sensitivity for IHC evaluation compared to the other color models. An increase in activated STAT3 levels due to growth factor stimulation, quantified using the Yellow channel image analysis, was associated with an increase detected by Western blotting. Two clinical image data sets were used to compare the Yellow channel automated method with observer-dependent methods. First, quantification of the DAB-labeled carbonic anhydrase IX hypoxia marker in 414 sections obtained from 138 biopsies of cervical carcinoma showed a strong association between Yellow channel and positive color selection results. Second, a linear relationship was also demonstrated between Yellow intensity and visual scoring for NovaRed-labeled epidermal growth factor receptor in 256 non-small cell lung cancer biopsies. Conclusion: The Yellow channel image analysis method based on a CMYK color model is independent of observer biases for threshold and positive color selection, applicable to different chromogens, tolerant of hematoxylin, sensitive to small changes in IHC intensity, and amenable to simple automation procedures. These characteristics are advantageous for both basic and clinical research in an unbiased, reproducible, and high throughput evaluation of IHC intensity. PMID:17326824
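A minimal sketch of the Yellow-channel idea using the common naive RGB-to-CMYK conversion (not a color-managed profile, which the study's pipeline may have relied on); the tissue mask and scaling are illustrative.

```python
import numpy as np

def yellow_channel(rgb):
    """Naive RGB -> CMYK conversion returning the Yellow channel scaled to
    0-255; `rgb` is an H x W x 3 float image in [0, 1]."""
    k = 1.0 - rgb.max(axis=-1)
    denom = np.clip(1.0 - k, 1e-6, None)
    y = (1.0 - rgb[..., 2] - k) / denom          # Y = (1 - B - K) / (1 - K)
    return np.clip(y * 255.0, 0, 255)

def mean_ihc_intensity(rgb, tissue_mask):
    """Mean Yellow intensity over a tissue region, a simple stand-in for the
    automated chromogen quantification described above."""
    return float(yellow_channel(rgb)[tissue_mask].mean())
```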
CMOS image sensors as an efficient platform for glucose monitoring.
Devadhasan, Jasmine Pramila; Kim, Sanghyo; Choi, Cheol Soo
2013-10-07
Complementary metal oxide semiconductor (CMOS) image sensors have been used previously in the analysis of biological samples. In the present study, a CMOS image sensor was used to monitor the concentration of oxidized mouse plasma glucose (86-322 mg dL(-1)) based on photon count variation. Measurement of the oxidized glucose concentration was dependent on changes in color intensity; color intensity increased with increasing glucose concentration. The high color density at higher glucose concentrations strongly attenuated the photons passing through the polydimethylsiloxane (PDMS) chip, indicating that the photon count was governed by color intensity. Photons were detected by a photodiode in the CMOS image sensor and converted to digital numbers by an analog-to-digital converter (ADC). Additionally, UV-spectral analysis and time-dependent photon analysis confirmed the efficiency of the detection system. This simple, effective, and consistent method for glucose measurement shows that CMOS image sensors are efficient devices for monitoring glucose in point-of-care applications.
Real-time color image processing for forensic fiber investigations
NASA Astrophysics Data System (ADS)
Paulsson, Nils
1995-09-01
This paper describes a system for automatic fiber debris detection based on color identification. The properties of the system are fast analysis and high selectivity, a necessity when analyzing forensic fiber samples: an ordinary investigation yields well over 100,000 video images to analyze. The system is built from standard components, with a CCD camera, a motorized sample table, and an IBM-compatible PC/AT with add-on boards for video frame digitization and stepping-motor control as the main parts. The instrument can operate at full video rate (25 images/s) with the aid of the HSI (hue-saturation-intensity) color system and software optimization. High selectivity is achieved by separating the analysis into several steps: the first step is fast, direct color identification of objects in the analyzed video images, and the second step analyzes the detected objects in a more complex and time-consuming stage of the investigation to identify single fiber fragments for subsequent analysis with more selective techniques.
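The fast color gating described above rests on an RGB-to-HSI conversion. The sketch below uses the common geometric HSI formulas (an assumption, since the paper does not list its exact equations), and the hue/saturation limits in the usage comment are placeholders.

```python
# A sketch of the RGB -> HSI conversion underlying fast color gating of fiber
# candidates; the geometric HSI definition used here is an assumption.
import numpy as np

def rgb_to_hsi(rgb):
    """rgb: float array in [0, 1], shape (..., 3). Returns H in degrees, S and I in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - np.min(rgb, axis=-1) / np.clip(i, 1e-8, None)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-8
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = np.where(b <= g, theta, 360.0 - theta)
    return h, s, i

# A candidate pixel could then be gated with hypothetical limits, e.g.
# (40 < h) & (h < 80) & (s > 0.3), before any slower per-object analysis.
```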
Gross, Colin A; Reddy, Chandan K; Dazzo, Frank B
2010-02-01
Quantitative microscopy and digital image analysis are underutilized in microbial ecology, largely because of the laborious task of segmenting foreground object pixels from background, especially in complex color micrographs of environmental samples. In this paper, we describe an improved computing technology developed to alleviate this limitation. The system's uniqueness is its ability to edit digital images accurately when presented with the difficult yet commonplace challenge of removing background pixels whose three-dimensional color space overlaps the range that defines foreground objects. Image segmentation is accomplished by utilizing algorithms that address color and spatial relationships of user-selected foreground object pixels. The color segmentation algorithm, evaluated on 26 complex micrographs at single-pixel resolution, had an overall pixel classification accuracy of 99+%. Several applications illustrate how this improved computing technology can successfully resolve numerous challenges of complex color segmentation and produce images from which quantitative information can be accurately extracted, thereby providing new perspectives on the in situ ecology of microorganisms. Examples include improvements in the quantitative analysis of (1) microbial abundance and phylotype diversity of single cells classified by their discriminating color within heterogeneous communities, (2) cell viability, (3) spatial relationships and intensity of bacterial gene expression involved in cellular communication between individual cells within rhizoplane biofilms, and (4) biofilm ecophysiology based on ribotype-differentiated radioactive substrate utilization. The stand-alone executable file plus user manual and tutorial images for this color segmentation computing application are freely available at http://cme.msu.edu/cmeias/. This improved computing technology opens new opportunities for imaging applications where discriminating colors really matter, thereby strengthening quantitative microscopy-based approaches to advance microbial ecology in situ at individual single-cell resolution.
NPS assessment of color medical displays using a monochromatic CCD camera
NASA Astrophysics Data System (ADS)
Roehrig, Hans; Gu, Xiliang; Fan, Jiahua
2012-02-01
This paper presents an approach to Noise Power Spectrum (NPS) assessment of color medical displays without using an expensive imaging colorimeter. The R, G and B color uniform patterns were shown on the display under study and the images were taken using a high resolution monochromatic camera. A colorimeter was used to calibrate the camera images. Synthetic intensity images were formed by the weighted sum of the R, G, B and the dark screen images. Finally the NPS analysis was conducted on the synthetic images. The proposed method replaces an expensive imaging colorimeter for NPS evaluation, which also suggests a potential solution for routine color medical display QA/QC in the clinical area, especially when imaging of display devices is desired.
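A minimal sketch of the final step is given below, assuming a calibrated synthetic intensity image of a uniform patch is already available. NPS normalization conventions vary, so the scaling here is illustrative only, and the channel weights in the usage note are placeholders.

```python
# A minimal sketch of a 2-D noise power spectrum estimate from a synthetic intensity
# image of a uniform patch; the normalization is one common convention, not
# necessarily the one used by the authors.
import numpy as np

def nps_2d(patch, pixel_pitch_mm):
    """patch: 2-D luminance image of a uniform region. Returns the 2-D NPS."""
    patch = patch - patch.mean()                       # remove the DC component
    ny, nx = patch.shape
    spectrum = np.fft.fftshift(np.fft.fft2(patch))
    return (np.abs(spectrum) ** 2) * (pixel_pitch_mm ** 2) / (nx * ny)

# A synthetic intensity image could be a weighted sum of the R, G, B captures minus
# the dark frame, e.g. synth = wr*R + wg*G + wb*B - dark (weights are placeholders),
# before calling nps_2d(synth, 0.009).
```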
Single underwater image enhancement based on color cast removal and visibility restoration
NASA Astrophysics Data System (ADS)
Li, Chongyi; Guo, Jichang; Wang, Bo; Cong, Runmin; Zhang, Yan; Wang, Jian
2016-05-01
Images taken under underwater condition usually have color cast and serious loss of contrast and visibility. Degraded underwater images are inconvenient for observation and analysis. In order to address these problems, an underwater image-enhancement method is proposed. A simple yet effective underwater image color cast removal algorithm is first presented based on the optimization theory. Then, based on the minimum information loss principle and inherent relationship of medium transmission maps of three color channels in an underwater image, an effective visibility restoration algorithm is proposed to recover visibility, contrast, and natural appearance of degraded underwater images. To evaluate the performance of the proposed method, qualitative comparison, quantitative comparison, and color accuracy test are conducted. Experimental results demonstrate that the proposed method can effectively remove color cast, improve contrast and visibility, and recover natural appearance of degraded underwater images. Additionally, the proposed method is comparable to and even better than several state-of-the-art methods.
Unsupervised color normalisation for H and E stained histopathology image analysis
NASA Astrophysics Data System (ADS)
Celis, Raúl; Romero, Eduardo
2015-12-01
In histology, each dye component attempts to specifically characterise different microscopic structures. In the case of the Hematoxylin-Eosin (H&E) stain, universally used for routine examination, quantitative analysis may often require the inspection of different morphological signatures related mainly to nuclei patterns, but also to stroma distribution. Nevertheless, computer systems for automatic diagnosis are often hampered by color variations ranging from the capturing device to the laboratory-specific staining protocol and stains. This paper presents a novel colour normalisation method for H&E stained histopathology images. The method is based upon the opponent process theory and blindly estimates the best color basis for the Hematoxylin and Eosin stains without relying on prior knowledge. Stain normalisation and colour separation are transversal to any framework of histopathology image analysis.
Tongue's substance and coating recognition analysis using HSV color threshold in tongue diagnosis
NASA Astrophysics Data System (ADS)
Kamarudin, Nur Diyana; Ooi, Chia Yee; Kawanabe, Tadaaki; Mi, Xiaoyu
2016-07-01
At the ISO TC249 conference, tongue diagnosis has been one of the most active research topics, and its objectification has become significant with the help of numerous statistical and machine learning algorithms. The color of the substance, or tongue body, holds valuable information regarding the state of disease and its correlation with the internal organs. In order to produce highly reproducible color measurement analysis, tongue images have to undergo several procedures such as color correction, segmentation, and separation of the tongue's substance and coating. This paper presents a novel method to recognize substance and coating from tongue images and eliminate the tongue coating for accurate substance color measurement for diagnosis. By utilizing the Hue, Saturation, Value (HSV) color space, new color-brightness threshold parameters have been devised to improve the efficiency of the substance-coating separation procedure and to eliminate shadows. The algorithm offers a fast processing time of around 0.98 s for a 60,000-pixel tongue image. The reported substance-coating separation success rate is 90% compared to labelled data verified by practitioners. Using 300 tongue images, the small standard deviation of the substance L*a*b* color measurements demonstrated the effectiveness of the proposed method for a computerized tongue diagnosis system.
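A simplified sketch of HSV-based substance/coating/shadow separation is shown below. The threshold values are placeholders rather than the parameters devised in the paper, and the conversion uses scikit-image.

```python
# Simplified sketch of separating tongue coating from the tongue body by HSV
# thresholding; the thresholds below are placeholders, not the paper's values.
import numpy as np
from skimage import color

def split_substance_coating(rgb, sat_thresh=0.25, val_thresh=0.35):
    """rgb: float sRGB image in [0, 1]. Returns boolean masks (substance, coating, shadow)."""
    hsv = color.rgb2hsv(rgb)
    s, v = hsv[..., 1], hsv[..., 2]
    shadow = v < val_thresh                      # dark pixels treated as shadow
    coating = (~shadow) & (s < sat_thresh)       # pale, low-saturation pixels -> coating
    substance = (~shadow) & (s >= sat_thresh)    # saturated reddish pixels -> tongue body
    return substance, coating, shadow
```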
Use of discrete chromatic space to tune the image tone in a color image mosaic
NASA Astrophysics Data System (ADS)
Zhang, Zuxun; Li, Zhijiang; Zhang, Jianqing; Zheng, Li
2003-09-01
Color image processing is an important problem. The main current approach is to transform the RGB colour space into another colour space, such as HSI (hue, intensity and saturation), YIQ, LUV and so on. However, it may not be valid to process a colour airborne image in just one colour space, because the electromagnetic wave is physically altered in every wave band, whereas the colour image is perceived on the basis of psychological vision. Therefore, it is necessary to propose an approach that accords with both the physical transformation and the psychological perception. An analysis of how to use relative colour spaces to process colour airborne photos is then discussed, and an application showing how to tune the image tone in a colour airborne image mosaic is introduced. As a practical example, a complete approach to performing the mosaic on colour airborne images by taking full advantage of relative colour spaces is discussed in the application.
Toward a perceptual image quality assessment of color quantized images
NASA Astrophysics Data System (ADS)
Frackiewicz, Mariusz; Palus, Henryk
2018-04-01
Color image quantization is an important operation in the field of color image processing. In this paper, we consider new perceptual image quality metrics for the assessment of quantized images. These types of metrics, e.g., DSCSI, MDSIs, MDSIm and HPSI, achieve the highest correlation coefficients with MOS during tests on the six publicly available image databases. Research was limited to images distorted by two types of compression: JPG and JPG2K. Statistical analysis of the correlation coefficients based on the Friedman test and post-hoc procedures showed that the differences between the four new perceptual metrics are not statistically significant.
A novel color image encryption scheme using alternate chaotic mapping structure
NASA Astrophysics Data System (ADS)
Wang, Xingyuan; Zhao, Yuanyuan; Zhang, Huili; Guo, Kang
2016-07-01
This paper proposes a color image encryption algorithm using an alternate chaotic mapping structure. Initially, we use the R, G and B components to form a matrix. Then one-dimensional and two-dimensional logistic maps are used to generate a chaotic matrix, and the two chaotic maps are iterated alternately to permute the matrix. For every iteration, an XOR operation is adopted to encrypt the plain-image matrix, followed by a further transformation to diffuse the matrix. Finally, the encrypted color image is obtained from the confused matrix. Theoretical analysis and experimental results have shown that the cryptosystem is secure and practical, and it is suitable for encrypting color images.
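The permute-then-XOR idea can be sketched with 1-D logistic maps alone, as below. The paper's alternation between 1-D and 2-D logistic maps and its extra diffusion transform are not reproduced, and the key values are arbitrary placeholders.

```python
# Reduced sketch of chaotic permutation plus XOR on a color image using only 1-D
# logistic maps; the alternating 1-D/2-D structure and extra diffusion step of the
# paper are not reproduced here. Key values are placeholders.
import numpy as np

def logistic_sequence(x0, r, n, burn_in=100):
    """Iterate x_{k+1} = r * x_k * (1 - x_k) and return n values after a burn-in."""
    x, out = x0, np.empty(n)
    for k in range(burn_in + n):
        x = r * x * (1.0 - x)
        if k >= burn_in:
            out[k - burn_in] = x
    return out

def encrypt(img, key=(0.3571, 3.99, 0.7431, 3.97)):
    """img: uint8 array of shape (H, W, 3). Returns (cipher image, permutation)."""
    flat = img.reshape(-1)
    x0p, rp, x0k, rk = key
    perm = np.argsort(logistic_sequence(x0p, rp, flat.size))        # chaotic permutation
    keystream = (logistic_sequence(x0k, rk, flat.size) * 256).astype(np.uint8)
    cipher = flat[perm] ^ keystream                                  # confusion + XOR
    return cipher.reshape(img.shape), perm

def decrypt(cipher, perm, key=(0.3571, 3.99, 0.7431, 3.97)):
    flat = cipher.reshape(-1)
    keystream = (logistic_sequence(key[2], key[3], flat.size) * 256).astype(np.uint8)
    plain = np.empty_like(flat)
    plain[perm] = flat ^ keystream                                   # undo XOR, then permutation
    return plain.reshape(cipher.shape)
```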
A new efficient method for color image compression based on visual attention mechanism
NASA Astrophysics Data System (ADS)
Shao, Xiaoguang; Gao, Kun; Lv, Lily; Ni, Guoqiang
2010-11-01
One of the key procedures in color image compression is to extract the regions of interest (ROIs) and apply different compression ratios to them. A new non-uniform color image compression algorithm with high efficiency is proposed in this paper, using a biology-motivated selective attention model for the effective extraction of ROIs in natural images. Once the ROIs have been extracted and labeled in the image, the subsequent work is to encode the ROIs and the other regions with different compression ratios via the popular JPEG algorithm. Furthermore, experimental results and quantitative and qualitative analyses show strong performance compared with traditional color image compression approaches.
Security of Color Image Data Designed by Public-Key Cryptosystem Associated with 2D-DWT
NASA Astrophysics Data System (ADS)
Mishra, D. C.; Sharma, R. K.; Kumar, Manish; Kumar, Kuldeep
2014-08-01
In present times, the security of image data is a major issue, so we have proposed a novel technique for the security of color image data using a public-key (asymmetric) cryptosystem. In this technique, color image data are secured using the RSA (Rivest-Shamir-Adleman) cryptosystem with the two-dimensional discrete wavelet transform (2D-DWT). Earlier schemes for the security of color images were designed on the basis of keys alone, whereas this approach provides security with the help of keys and the correct arrangement of the RSA parameters. If the attacker knows the exact keys but has no information about the exact arrangement of the RSA parameters, the original information cannot be recovered from the encrypted data. Computer simulations based on a standard example critically examine the behavior of the proposed technique. A security analysis and a detailed comparison between earlier schemes for the security of color images and the proposed technique are also presented to demonstrate the robustness of the cryptosystem.
Image enhancement and color constancy for a vehicle-mounted change detection system
NASA Astrophysics Data System (ADS)
Tektonidis, Marco; Monnin, David
2016-10-01
Vehicle-mounted change detection systems help improve situational awareness on outdoor itineraries of interest. Since the visibility of acquired images is often affected by illumination effects (e.g., shadows), it is important to enhance local contrast. For the analysis and comparison of color images depicting the same scene at different time points, it is necessary to compensate for color and lightness inconsistencies caused by the different illumination conditions. We have developed an approach for image enhancement and color constancy based on the center/surround Retinex model and the Gray World hypothesis. The combination of the two methods using a color processing function improves color rendition compared to either method alone. The use of stacked integral images (SII) allows local image processing to be performed efficiently. Our combined Retinex/Gray World approach has been successfully applied to image sequences acquired on outdoor itineraries at different time points, and a comparison with previous Retinex-based approaches has been carried out.
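A compact sketch of the two named ingredients, a single-scale center/surround Retinex and a Gray World gain, combined naively, is shown below. The paper's color-processing function and stacked-integral-image speedup are not reproduced, the sigma value is a placeholder, and the input is assumed to be an RGB image scaled to [0, 1].

```python
# Naive combination of a Gray World gain and a single-scale center/surround Retinex;
# this is a sketch under stated assumptions, not the authors' pipeline.
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(channel, sigma=80.0, eps=1e-6):
    """log(center) - log(surround) for one color channel in [0, 1]."""
    surround = gaussian_filter(channel, sigma)
    return np.log(channel + eps) - np.log(surround + eps)

def gray_world_gains(rgb):
    """Per-channel gains that map each channel mean to the global gray mean."""
    means = rgb.reshape(-1, 3).mean(axis=0)
    return means.mean() / np.clip(means, 1e-6, None)

def enhance(rgb):
    balanced = np.clip(rgb * gray_world_gains(rgb), 0.0, 1.0)
    out = np.stack([single_scale_retinex(balanced[..., c]) for c in range(3)], axis=-1)
    return (out - out.min()) / (out.max() - out.min() + 1e-6)   # rescale to [0, 1] for display
```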
Bae, Youngwoo; Son, Taeyoon; Nelson, J. Stuart; Kim, Jae-Hong; Choi, Eung Ho; Jung, Byungjo
2010-01-01
Background/Purpose Digital color image analysis is currently considered as a routine procedure in dermatology. In our previous study, a multimodal facial color imaging modality (MFCIM), which provides a conventional, parallel- and cross-polarization, and fluorescent color image, was introduced for objective evaluation of various facial skin lesions. This study introduces a commercial version of MFCIM, DermaVision-PRO, for routine clinical use in dermatology and demonstrates its dermatological feasibility for cross-evaluation of skin lesions. Methods/Results Sample images of subjects with actinic keratosis or non-melanoma skin cancers were obtained at four different imaging modes. Various image analysis methods were applied to cross-evaluate the skin lesion and, finally, extract valuable diagnostic information. DermaVision-PRO is potentially a useful tool as an objective macroscopic imaging modality for quick prescreening and cross-evaluation of facial skin lesions. Conclusion DermaVision-PRO may be utilized as a useful tool for cross-evaluation of widely distributed facial skin lesions and an efficient database management of patient information. PMID:20923462
NASA Astrophysics Data System (ADS)
Hashimoto, Atsushi; Suehara, Ken-Ichiro; Kameoka, Takaharu
To measure quantitative surface color information of agricultural products together with ambient information during cultivation, a color calibration method for digital camera images and a Web-based remote monitoring system for color imaging were developed. Single-lens reflex and web digital cameras were used for image acquisition. Tomato images through the post-ripening process were taken by the digital camera both in the standard image acquisition system and under field conditions from morning to evening. Several kinds of images were acquired with a standard RGB color chart set up just behind the tomato fruit on a black matte background, and a color calibration was carried out. The influence of sunlight could be experimentally eliminated, and the calibrated color information consistently agreed with the standard values acquired in the system through the post-ripening process. Furthermore, the surface color change of the tomato on the tree in a greenhouse was remotely monitored during maturation using the digital cameras equipped with the Field Server. The acquired digital color images were sent from the Farm Station to the BIFE Laboratory of Mie University via VPN. The time behavior of the tomato surface color change during the maturing process could be measured using the color parameter calculated from the acquired and calibrated color images, along with the ambient atmospheric record. This study is an important step in developing surface color analysis both for simple and rapid evaluation of crop vigor in the field and for constructing an ambient, networked remote monitoring system for food security, precision agriculture, and agricultural research.
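The chart-based calibration step can be sketched as a least-squares fit of a 3x3 color-correction matrix plus offset from the measured chart-patch colors to their reference values. Patch extraction and the reference values are assumed to be available; this is a sketch of the general technique, not the authors' exact procedure.

```python
# Chart-based color calibration sketch: fit a linear correction (3x3 matrix + offset)
# from measured patch colors to reference colors, then apply it to the whole image.
import numpy as np

def fit_color_correction(measured, reference):
    """measured, reference: (N, 3) arrays of patch RGB values in [0, 1].
    Returns a (4, 3) matrix M such that [r, g, b, 1] @ M approximates the reference color."""
    A = np.hstack([measured, np.ones((measured.shape[0], 1))])
    M, *_ = np.linalg.lstsq(A, reference, rcond=None)
    return M

def apply_correction(rgb_image, M):
    h, w, _ = rgb_image.shape
    A = np.hstack([rgb_image.reshape(-1, 3), np.ones((h * w, 1))])
    return np.clip(A @ M, 0.0, 1.0).reshape(h, w, 3)
```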
Enhancement of low light level images using color-plus-mono dual camera.
Jung, Yong Ju
2017-05-15
In digital photography, the improvement of imaging quality in low light shooting is one of the users' needs. Unfortunately, conventional smartphone cameras that use a single, small image sensor cannot provide satisfactory quality in low light level images. A color-plus-mono dual camera that consists of two horizontally separate image sensors, which simultaneously captures both a color and mono image pair of the same scene, could be useful for improving the quality of low light level images. However, an incorrect image fusion between the color and mono image pair could also have negative effects, such as the introduction of severe visual artifacts in the fused images. This paper proposes a selective image fusion technique that applies an adaptive guided filter-based denoising and selective detail transfer to only those pixels deemed reliable with respect to binocular image fusion. We employ a dissimilarity measure and binocular just-noticeable-difference (BJND) analysis to identify unreliable pixels that are likely to cause visual artifacts during image fusion via joint color image denoising and detail transfer from the mono image. By constructing an experimental system of color-plus-mono camera, we demonstrate that the BJND-aware denoising and selective detail transfer is helpful in improving the image quality during low light shooting.
Image Reconstruction for Hybrid True-Color Micro-CT
Xu, Qiong; Yu, Hengyong; Bennett, James; He, Peng; Zainon, Rafidah; Doesburg, Robert; Opie, Alex; Walsh, Mike; Shen, Haiou; Butler, Anthony; Butler, Phillip; Mou, Xuanqin; Wang, Ge
2013-01-01
X-ray micro-CT is an important imaging tool for biomedical researchers. Our group has recently proposed a hybrid “true-color” micro-CT system to improve contrast resolution with lower system cost and radiation dose. The system incorporates an energy-resolved photon-counting true-color detector into a conventional micro-CT configuration, and can be used for material decomposition. In this paper, we demonstrate an interior color-CT image reconstruction algorithm developed for this hybrid true-color micro-CT system. A compressive sensing-based statistical interior tomography method is employed to reconstruct each channel in the local spectral imaging chain, where the reconstructed global gray-scale image from the conventional imaging chain serves as the initial guess. Principal component analysis was used to map the spectral reconstructions into the color space. The proposed algorithm was evaluated by numerical simulations, physical phantom experiments, and animal studies. The results confirm the merits of the proposed algorithm and demonstrate the feasibility of the hybrid true-color micro-CT system. Additionally, a “color diffusion” phenomenon was observed whereby high-quality true-color images are produced not only inside the region of interest, but also in neighboring regions. It appears that harnessing this phenomenon could potentially reduce the color detector size for a given ROI, further reducing system cost and radiation dose. PMID:22481806
Large-scale quantitative analysis of painting arts.
Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong
2014-12-11
Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting arts has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of artistic paints to make a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images - the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety of the medieval period. Interestingly, moreover, the increment of roughness exponent as painting techniques such as chiaroscuro and sfumato have advanced is consistent with historical circumstances.
NASA Astrophysics Data System (ADS)
Dafu, Shen; Leihong, Zhang; Dong, Liang; Bei, Li; Yi, Kang
2017-07-01
The purpose of this study is to improve the reconstruction precision and better copy the color of spectral image surfaces. A new spectral reflectance reconstruction algorithm based on an iterative threshold combined with weighted principal component space is presented in this paper, and the principal component with weighted visual features is the sparse basis. Different numbers of color cards are selected as the training samples, a multispectral image is the testing sample, and the color differences in the reconstructions are compared. The channel response value is obtained by a Mega Vision high-accuracy, multi-channel imaging system. The results show that spectral reconstruction based on weighted principal component space is superior in performance to that based on traditional principal component space. Therefore, the color difference obtained using the compressive-sensing algorithm with weighted principal component analysis is less than that obtained using the algorithm with traditional principal component analysis, and better reconstructed color consistency with human eye vision is achieved.
Image quality analysis of a color LCD as well as a monochrome LCD using a Foveon color CMOS camera
NASA Astrophysics Data System (ADS)
Dallas, William J.; Roehrig, Hans; Krupinski, Elizabeth A.
2007-09-01
We have combined a CMOS color camera with special software to compose a multi-functional image-quality analysis instrument. It functions as a colorimeter as well as measuring modulation transfer functions (MTF) and noise power spectra (NPS). It is presently being expanded to examine fixed-pattern noise and temporal noise. The CMOS camera has 9 μm square pixels and a pixel matrix of 2268 x 1512 x 3. The camera uses a sensor that has co-located pixels for all three primary colors. We have imaged sections of both a color and a monochrome LCD monitor onto the camera sensor with LCD-pixel-size to camera-pixel-size ratios of both 12:1 and 17.6:1. When used as an imaging colorimeter, each camera pixel is calibrated to provide CIE color coordinates and tristimulus values. This capability permits the camera to simultaneously determine chromaticity in different locations on the LCD display. After the color calibration with a CS-200 colorimeter the color coordinates of the display's primaries determined from the camera's luminance response are very close to those found from the CS-200. Only the color coordinates of the display's white point were in error. For calculating the MTF a vertical or horizontal line is displayed on the monitor. The captured image is color-matrix preprocessed, Fourier transformed then post-processed. For NPS, a uniform image is displayed on the monitor. Again, the image is pre-processed, transformed and processed. Our measurements show that the horizontal MTF's of both displays have a larger negative slope than that of the vertical MTF's. This behavior indicates that the horizontal MTF's are poorer than the vertical MTF's. However the modulations at the Nyquist frequency seem lower for the color LCD than for the monochrome LCD. The spatial noise of the color display in both directions is larger than that of the monochrome display. Attempts were also made to analyze the total noise in terms of spatial and temporal noise by applying subtractions of images taken at exactly the same exposure. Temporal noise seems to be significantly lower than spatial noise.
Color image encryption using random transforms, phase retrieval, chaotic maps, and diffusion
NASA Astrophysics Data System (ADS)
Annaby, M. H.; Rushdi, M. A.; Nehary, E. A.
2018-04-01
The recent tremendous proliferation of color imaging applications has been accompanied by growing research in data encryption to secure color images against adversary attacks. While recent color image encryption techniques perform reasonably well, they still exhibit vulnerabilities and deficiencies in terms of statistical security measures due to image data redundancy and inherent weaknesses. This paper proposes two encryption algorithms that largely treat these deficiencies and boost the security strength through novel integration of the random fractional Fourier transforms, phase retrieval algorithms, as well as chaotic scrambling and diffusion. We show through detailed experiments and statistical analysis that the proposed enhancements significantly improve security measures and immunity to attacks.
NASA Astrophysics Data System (ADS)
Hirose, Misa; Toyota, Saori; Tsumura, Norimichi
2018-02-01
In this research, we evaluate the visibility of age spots and freckles while changing the blood volume, based on simulated spectral reflectance distributions and actual facial color images, and compare the results. First, we generate three types of spatial distributions of age spots and freckles in patch-like images based on the simulated spectral reflectance. The spectral reflectance is simulated using Monte Carlo simulation of light transport in multi-layered tissue. Next, we reconstruct the facial color image with varied blood volume. We acquire the concentration distributions of melanin, hemoglobin, and shading components by applying independent component analysis to a facial color image. We then reproduce images using the obtained melanin and shading concentrations and the modified hemoglobin concentration. Finally, we evaluate the visibility of pigmentation using the simulated spectral reflectance distributions and the facial color images. For the simulated spectral reflectance distributions, we found that visibility became lower as blood volume increased. For the actual facial color images, however, only a specific blood volume reduced the visibility of the pigmentations.
Guided color consistency optimization for image mosaicking
NASA Astrophysics Data System (ADS)
Xie, Renping; Xia, Menghan; Yao, Jian; Li, Li
2018-01-01
This paper studies the problem of color consistency correction for sequential images with diverse color characteristics. Existing algorithms try to adjust all images to minimize color differences among them under a unified energy framework; however, the results are prone to presenting a consistent but unnatural appearance when the color difference between images is large and diverse. In our approach, this problem is addressed effectively by providing a guided initial solution for the global consistency optimization, which avoids converging to a meaningless integrated solution. First, to obtain reliable intensity correspondences in the overlapping regions between image pairs, we propose a histogram extreme-point matching algorithm that is robust to image geometric misalignment to some extent. In the absence of extra reference information, the guided initial solution is learned from the major tone of the original images by searching for an image subset to serve as the reference, whose color characteristics are transferred to the others via the paths of a graph analysis. Thus, the final results of the global adjustment take on a consistent color similar to the appearance of the reference image subset. Several groups of convincing experiments on both a synthetic dataset and challenging real ones sufficiently demonstrate that the proposed approach can achieve as good or even better results compared with state-of-the-art approaches.
NASA Astrophysics Data System (ADS)
Catelli, Emilio; Randeberg, Lise Lyngsnes; Alsberg, Bjørn Kåre; Gebremariam, Kidane Fanta; Bracci, Silvano
2017-04-01
Hyperspectral imaging (HSI) is a fast non-invasive imaging technology recently applied in the field of art conservation. With the help of chemometrics, important information about the spectral properties and spatial distribution of pigments can be extracted from HSI data. With the intent of expanding the applications of chemometrics to the interpretation of hyperspectral images of historical documents, and, at the same time, to study the colorants and their spatial distribution on ancient illuminated manuscripts, an explorative chemometric approach is here presented. The method makes use of chemometric tools for spectral de-noising (minimum noise fraction (MNF)) and image analysis (multivariate image analysis (MIA) and iterative key set factor analysis (IKSFA)/spectral angle mapper (SAM)) which have given an efficient separation, classification and mapping of colorants from visible-near-infrared (VNIR) hyperspectral images of an ancient illuminated fragment. The identification of colorants was achieved by extracting and interpreting the VNIR spectra as well as by using a portable X-ray fluorescence (XRF) spectrometer.
Differentiating defects in red oak lumber by discriminant analysis using color, shape, and density
B. H. Bond; D. Earl Kline; Philip A. Araman
2002-01-01
Defect color, shape, and density measures aid in the differentiation of knots, bark pockets, stain/mineral streak, and clearwood in red oak (Quercus rubra). Various color, shape, and density measures were extracted for defects present in color and X-ray images captured using a color line scan camera and an X-ray line scan detector. Analysis of variance was used to...
Color image analysis of contaminants and bacteria transport in porous media
NASA Astrophysics Data System (ADS)
Rashidi, Mehdi; Dehmeshki, Jamshid; Daemi, Mohammad F.; Cole, Larry; Dickenson, Eric
1997-10-01
Transport of contaminants and bacteria in aqueous, heterogeneous, saturated porous systems has been studied experimentally using a novel fluorescent microscopic imaging technique. The approach involves color visualization and quantification of bacterium and contaminant distributions within a transparent porous column. By introducing stained bacteria and an organic dye as a contaminant into the column and illuminating the porous regions with a planar laser sheet, contaminant and bacterial transport processes through the porous medium can be observed and measured microscopically. A computer-controlled color CCD camera is used to record the fluorescent images as a function of time. These images are recorded by a frame-accurate, high-resolution VCR and are then analyzed using a color image analysis code written in our laboratories. The color images are digitized in this way, and simultaneous concentration and velocity distributions of both contaminant and bacterium are evaluated as a function of time and pore characteristics. The approach provides a unique dynamic probe to observe these transport processes microscopically. These results are extremely valuable in in-situ bioremediation problems, since microscopic particle-contaminant-bacterium interactions are the key to understanding and optimizing these processes.
The application of color display techniques for the analysis of Nimbus infrared radiation data
NASA Technical Reports Server (NTRS)
Allison, L. J.; Cherrix, G. T.; Ausfresser, H.
1972-01-01
A color enhancement system designed for the Applications Technology Satellite (ATS) spin scan experiment has been adapted for the analysis of Nimbus infrared radiation measurements. For a given scene recorded on magnetic tape by the Nimbus scanning radiometers, a virtually unlimited number of color images can be produced at the ATS Operations Control Center from a color selector paper tape input. Linear image interpolation has produced radiation analyses in which each brightness-color interval has a smooth boundary without any mosaic effects. An annotated latitude-longitude gridding program makes it possible to precisely locate geophysical parameters, which permits accurate interpretation of pertinent meteorological, geological, hydrological, and oceanographic features.
Color image analysis technique for measuring of fat in meat: an application for the meat industry
NASA Astrophysics Data System (ADS)
Ballerini, Lucia; Hogberg, Anders; Lundstrom, Kerstin; Borgefors, Gunilla
2001-04-01
Intramuscular fat content in meat influences some important meat quality characteristics. The aim of the present study was to develop and apply image processing techniques to quantify intramuscular fat content in beef together with the visual appearance of fat in meat (marbling). Color images of M. longissimus dorsi meat samples with variable intramuscular fat content and marbling were captured. Image analysis software was specially developed for the interpretation of these images. In particular, a segmentation algorithm (i.e., classification of the different substances: fat, muscle, and connective tissue) was optimized in order to obtain a proper classification and perform the subsequent analysis. Segmentation of muscle from fat was achieved based on their characteristics in the 3D color space and on the intrinsic fuzzy nature of these structures. The method is fully automatic and combines a fuzzy clustering algorithm, the fuzzy c-means algorithm, with a genetic algorithm. The percentages of the various colors (i.e., substances) within the sample are then determined, and the number, size distribution, and spatial distribution of the extracted fat flecks are measured. The measurements are correlated with chemical and sensory properties. Results so far show that advanced image analysis is useful for quantifying the visual appearance of meat.
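A bare-bones fuzzy c-means color clustering step is sketched below as a stand-in for the paper's combined fuzzy c-means/genetic-algorithm segmentation. The GA initialization and any post-processing are omitted, and the cluster count of three (fat, muscle, connective tissue) simply follows the abstract.

```python
# Bare-bones fuzzy c-means clustering of pixel colors; a sketch of the FCM step only,
# without the genetic-algorithm initialization described in the paper.
import numpy as np

def fuzzy_cmeans(pixels, c=3, m=2.0, iters=50, seed=0):
    """pixels: (N, 3) array of color values. Returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, pixels.shape[0]))
    u /= u.sum(axis=0, keepdims=True)                  # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        centers = (um @ pixels) / um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(pixels[None, :, :] - centers[:, None, :], axis=2) + 1e-9
        u = 1.0 / (d ** (2.0 / (m - 1.0)))             # inverse-distance memberships
        u /= u.sum(axis=0, keepdims=True)
    return centers, u

# Usage: label each pixel by its highest membership, e.g. for fat/muscle/connective tissue:
# centers, u = fuzzy_cmeans(img.reshape(-1, 3).astype(float)); labels = u.argmax(axis=0)
```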
Color Retinal Image Enhancement Based on Luminosity and Contrast Adjustment.
Zhou, Mei; Jin, Kai; Wang, Shaoze; Ye, Juan; Qian, Dahong
2018-03-01
Many common eye diseases and cardiovascular diseases can be diagnosed through retinal imaging. However, due to uneven illumination, image blurring, and low contrast, retinal images with poor quality are not useful for diagnosis, especially in automated image analyzing systems. Here, we propose a new image enhancement method to improve color retinal image luminosity and contrast. A luminance gain matrix, which is obtained by gamma correction of the value channel in the HSV (hue, saturation, and value) color space, is used to enhance the R, G, and B (red, green and blue) channels, respectively. Contrast is then enhanced in the luminosity channel of L * a * b * color space by CLAHE (contrast-limited adaptive histogram equalization). Image enhancement by the proposed method is compared to other methods by evaluating quality scores of the enhanced images. The performance of the method is mainly validated on a dataset of 961 poor-quality retinal images. Quality assessment (range 0-1) of image enhancement of this poor dataset indicated that our method improved color retinal image quality from an average of 0.0404 (standard deviation 0.0291) up to an average of 0.4565 (standard deviation 0.1000). The proposed method is shown to achieve superior image enhancement compared to contrast enhancement in other color spaces or by other related methods, while simultaneously preserving image naturalness. This method of color retinal image enhancement may be employed to assist ophthalmologists in more efficient screening of retinal diseases and in development of improved automated image analysis for clinical diagnosis.
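A rough sketch of the two stages described above is given below, assuming an 8-bit RGB input. The gamma, clip limit, and tile size are placeholder values, not those tuned in the paper, and OpenCV is used for the color-space conversions and CLAHE.

```python
# Sketch of the described enhancement chain under stated assumptions: a luminance
# gain from gamma correction of the HSV value channel, then CLAHE on the L channel
# of L*a*b*. Parameter values are placeholders.
import cv2
import numpy as np

def enhance_retinal(rgb_u8, gamma=0.8, clip_limit=2.0, tile=(8, 8)):
    """rgb_u8: 8-bit RGB retinal image. Returns an enhanced 8-bit RGB image."""
    # 1) luminance gain matrix from the gamma-corrected V channel
    hsv = cv2.cvtColor(rgb_u8, cv2.COLOR_RGB2HSV)
    v = hsv[..., 2].astype(np.float32) / 255.0
    gain = np.power(v, gamma) / np.clip(v, 1e-4, None)       # boosts dark regions for gamma < 1
    boosted = np.clip(rgb_u8.astype(np.float32) * gain[..., None], 0, 255).astype(np.uint8)
    # 2) CLAHE on the luminosity channel of L*a*b*
    lab = cv2.cvtColor(boosted, cv2.COLOR_RGB2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile).apply(l)
    return cv2.cvtColor(cv2.merge([l, a, b]), cv2.COLOR_LAB2RGB)
```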
NASA Technical Reports Server (NTRS)
Kruse, F. A.
1985-01-01
The causes of color variations in the green areas on Landsat 4/5-4/6-6/7 (red-blue-green) color-ratio-composite (CRC) images, defined as limonitic areas, were investigated by analyzing the CRC images of the Lordsburg, New Mexico area. The red-blue-green additive color system was mathematically transformed into the cylindrical Munsell color coordinates (hue, saturation, and value), and selected areas were digitally analyzed for color variation. The obtained precise color characteristics were then correlated with properties of surface material. The amount of limonite (L) visible to the sensor was found to be the primary cause of the observed color differences. The visible L is, in turn, affected by the amount of L on the material's surface and by within-pixel mixing of limonitic and nonlimonitic materials. The secondary cause of variation was vegetation density, which shifted CRC hues towards yellow-green, decreased saturation, and increased value.
Kather, Jakob Nikolas; Weis, Cleo-Aron; Marx, Alexander; Schuster, Alexander K.; Schad, Lothar R.; Zöllner, Frank Gerrit
2015-01-01
Background Accurate evaluation of immunostained histological images is required for reproducible research in many different areas and forms the basis of many clinical decisions. The quality and efficiency of histopathological evaluation is limited by the information content of a histological image, which is primarily encoded as perceivable contrast differences between objects in the image. However, the colors of chromogen and counterstain used for histological samples are not always optimally distinguishable, even under optimal conditions. Methods and Results In this study, we present a method to extract the bivariate color map inherent in a given histological image and to retrospectively optimize this color map. We use a novel, unsupervised approach based on color deconvolution and principal component analysis to show that the commonly used blue and brown color hues in Hematoxylin—3,3’-Diaminobenzidine (DAB) images are poorly suited for human observers. We then demonstrate that it is possible to construct improved color maps according to objective criteria and that these color maps can be used to digitally re-stain histological images. Validation To validate whether this procedure improves distinguishability of objects and background in histological images, we re-stain phantom images and N = 596 large histological images of immunostained samples of human solid tumors. We show that perceptual contrast is improved by a factor of 2.56 in phantom images and up to a factor of 2.17 in sets of histological tumor images. Context Thus, we provide an objective and reliable approach to measure object distinguishability in a given histological image and to maximize visual information available to a human observer. This method could easily be incorporated in digital pathology image viewing systems to improve accuracy and efficiency in research and diagnostics. PMID:26717571
Kather, Jakob Nikolas; Weis, Cleo-Aron; Marx, Alexander; Schuster, Alexander K; Schad, Lothar R; Zöllner, Frank Gerrit
2015-01-01
Accurate evaluation of immunostained histological images is required for reproducible research in many different areas and forms the basis of many clinical decisions. The quality and efficiency of histopathological evaluation is limited by the information content of a histological image, which is primarily encoded as perceivable contrast differences between objects in the image. However, the colors of chromogen and counterstain used for histological samples are not always optimally distinguishable, even under optimal conditions. In this study, we present a method to extract the bivariate color map inherent in a given histological image and to retrospectively optimize this color map. We use a novel, unsupervised approach based on color deconvolution and principal component analysis to show that the commonly used blue and brown color hues in Hematoxylin-3,3'-Diaminobenzidine (DAB) images are poorly suited for human observers. We then demonstrate that it is possible to construct improved color maps according to objective criteria and that these color maps can be used to digitally re-stain histological images. To validate whether this procedure improves distinguishability of objects and background in histological images, we re-stain phantom images and N = 596 large histological images of immunostained samples of human solid tumors. We show that perceptual contrast is improved by a factor of 2.56 in phantom images and up to a factor of 2.17 in sets of histological tumor images. Thus, we provide an objective and reliable approach to measure object distinguishability in a given histological image and to maximize visual information available to a human observer. This method could easily be incorporated in digital pathology image viewing systems to improve accuracy and efficiency in research and diagnostics.
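The general idea of unmixing and re-displaying stains can be sketched with scikit-image's Ruifrok color deconvolution, as below. The blue/orange display colors are arbitrary placeholders and not the PCA-optimized color map constructed by the authors.

```python
# Hedged sketch of digital re-staining: unmix a Hematoxylin-DAB image into stain
# channels and re-render them with an arbitrary, more distinguishable bivariate map.
import numpy as np
from skimage.color import rgb2hed

def restain(rgb):
    """rgb: float H-DAB image in [0, 1]. Returns a re-colored RGB image."""
    hed = rgb2hed(rgb)
    h = (hed[..., 0] - hed[..., 0].min()) / (np.ptp(hed[..., 0]) + 1e-8)   # hematoxylin
    d = (hed[..., 2] - hed[..., 2].min()) / (np.ptp(hed[..., 2]) + 1e-8)   # DAB
    blue = np.array([0.1, 0.3, 0.9])       # placeholder display color for hematoxylin
    orange = np.array([1.0, 0.6, 0.0])     # placeholder display color for DAB
    out = 1.0 - (h[..., None] * (1 - blue) + d[..., None] * (1 - orange))  # subtractive mix
    return np.clip(out, 0.0, 1.0)
```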
MOVING BEYOND COLOR: THE CASE FOR MULTISPECTRAL IMAGING IN BRIGHTFIELD PATHOLOGY.
Cukierski, William J; Qi, Xin; Foran, David J
2009-01-01
A multispectral camera is capable of imaging a histologic slide at narrow bandwidths over the range of the visible spectrum. While several uses for multispectral imaging (MSI) have been demonstrated in pathology [1, 2], there is no unified consensus over when and how MSI might benefit automated analysis [3, 4]. In this work, we use a linear-algebra framework to investigate the relationship between the spectral image and its standard-image counterpart. The multispectral "cube" is treated as an extension of a traditional image in a high-dimensional color space. The concept of metamers is introduced and used to derive regions of the visible spectrum where MSI may provide an advantage. Furthermore, histological stains which are amenable to analysis by MSI are reported. We show the Commission internationale de l'éclairage (CIE) 1931 transformation from spectrum to color is non-neighborhood preserving. Empirical results are demonstrated on multispectral images of peripheral blood smears.
NASA Technical Reports Server (NTRS)
Poulton, C. E. (Principal Investigator); Welch, R. I.
1975-01-01
The author has identified the following significant results. For interpreting a wide range of natural vegetation analogs, S-190A color infrared and the ERTS-1 color composite were consistently more useful than were conventional color or black and white photos. Color infrared was superior for five vegetation analogs while color was superior for only three. The errors in identification appeared to associate more with black and white single band images than with multiband color. For rice crop analogs, spectral and spatial discriminations both contribute to the usefulness of images for data collection. Tests and subjective analyses conducted in this study indicated that the spectral bands exploited in color infrared film were the most useful for agricultural crop analysis. Accuracy of crop identification on any single date of Skylab images was less than that of multidate analysis due to differences in crop calendar, cultural practices used, rice variety, planting date, planting method, water use, fertilization, disease, or mechanical problems.
Image processing analysis of traditional Gestalt vision experiments
NASA Astrophysics Data System (ADS)
McCann, John J.
2002-06-01
In the late 19th century, Gestalt psychology rebelled against the popular new science of psychophysics. The Gestalt revolution used many fascinating visual examples to illustrate that the whole is greater than the sum of its parts. Color constancy was an important example. The physical interpretation of sensations and their quantification by JNDs and Weber fractions were met with innumerable examples in which two 'identical' physical stimuli did not look the same. The fact that large changes in the color of the illumination failed to change color appearance in real scenes demanded something more than quantifying the psychophysical response of a single pixel. The debate continues today between proponents of physical, pixel-based colorimetry and perceptual, image-based cognitive interpretations. Modern instrumentation has made colorimetric pixel measurement universal. As well, new examples of unconscious inference continue to be reported in the literature. Image processing provides a new way of analyzing familiar Gestalt displays. Since the pioneering experiments by Fergus Campbell and Land, we know that human vision has independent spatial channels and independent color channels. Color matching data from color constancy experiments agree with spatial comparison analysis. In this analysis, simple spatial processes can explain the different appearances of 'identical' stimuli by analyzing the multiresolution spatial properties of their surrounds. Benary's Cross, White's Effect, the Checkerboard Illusion and the Dungeon Illusion can all be understood by the analysis of their low-spatial-frequency components. Just as with color constancy, these Gestalt images are most simply described by the analysis of spatial components. Simple spatial mechanisms account for the appearance of 'identical' stimuli in complex scenes; complex cognitive processes are not required to calculate appearances in familiar Gestalt experiments.
Large-Scale Quantitative Analysis of Painting Arts
Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong
2014-01-01
Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting arts has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of artistic paints to make a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images – the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety of the medieval period. Interestingly, moreover, the increment of roughness exponent as painting techniques such as chiaroscuro and sfumato have advanced is consistent with historical circumstances. PMID:25501877
Evaluation of color grading impact in restoration process of archive films
NASA Astrophysics Data System (ADS)
Fliegel, Karel; Vítek, Stanislav; Páta, Petr; Janout, Petr; Myslík, Jiří; Pecák, Josef; Jícha, Marek
2016-09-01
Color grading of archive films is a very particular task in the process of their restoration. The ultimate goal of color grading here is to achieve the same look of the movie as intended at the time of its first presentation. The role of the expert restorer, expert group and a digital colorist in this complicated process is to find the optimal settings of the digital color grading system so that the resulting image look is as close as possible to the estimate of the original reference release print adjusted by the expert group of cinematographers. A methodology for subjective assessment of perceived differences between the outcomes of color grading is introduced, and results of a subjective study are presented. Techniques for objective assessment of perceived differences are discussed, and their performance is evaluated using ground truth obtained from the subjective experiment. In particular, a solution based on calibrated digital single-lens reflex camera and subsequent analysis of image features captured from the projection screen is described. The system based on our previous work is further developed so that it can be used for the analysis of projected images. It allows assessing color differences in these images and predict their impact on the perceived difference in image look.
NASA Astrophysics Data System (ADS)
Hernandez-Cardoso, G. G.; Alfaro-Gomez, M.; Rojas-Landeros, S. C.; Salas-Gutierrez, I.; Castro-Camus, E.
2018-03-01
In this article, we present a series of hydration mapping images of the foot soles of diabetic and non-diabetic subjects measured by terahertz reflectance. In addition to the hydration images, we present a series of RYG-color-coded (red yellow green) images where pixels are assigned one of the three colors in order to easily identify areas in risk of ulceration. We also present the statistics of the number of pixels with each color as a potential quantitative indicator for diabetic foot-syndrome deterioration.
Perceptual distortion analysis of color image VQ-based coding
NASA Astrophysics Data System (ADS)
Charrier, Christophe; Knoblauch, Kenneth; Cherifi, Hocine
1997-04-01
It is generally accepted that an RGB color image can be easily encoded by using a gray-scale compression technique on each of the three color planes. Such an approach, however, fails to take into account the correlations existing between color planes and perceptual factors. We evaluated several linear and non-linear color spaces, some introduced by the CIE, compressed with the vector quantization technique for minimum perceptual distortion. To study these distortions, we measured the contrast and luminance of the video framebuffer in order to precisely control color. We then obtained psychophysical judgements to measure how well these methods minimize perceptual distortion in a variety of color spaces.
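For reference, the snippet below is a minimal vector-quantization sketch: build a small codebook with k-means in a chosen color space and replace each pixel by its nearest code vector. The perceptual color spaces and psychophysical evaluation discussed above are outside the scope of this snippet.

```python
# Minimal VQ sketch: k-means codebook in an arbitrary color space, then nearest-code
# reconstruction. This is a generic illustration, not the paper's VQ implementation.
import numpy as np
from sklearn.cluster import KMeans

def vq_encode(pixels, codebook_size=64, seed=0):
    """pixels: (N, 3) array in any color space. Returns (codebook, indices)."""
    km = KMeans(n_clusters=codebook_size, n_init=4, random_state=seed).fit(pixels)
    return km.cluster_centers_, km.labels_

def vq_decode(codebook, indices, shape):
    return codebook[indices].reshape(shape)

# Usage (lab_image is a hypothetical float image in a chosen color space):
# code, idx = vq_encode(lab_image.reshape(-1, 3)); recon = vq_decode(code, idx, lab_image.shape)
```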
Raina, Abhay; Hennessy, Ricky; Rains, Michael; Allred, James; Hirshburg, Jason M; Diven, Dayna; Markey, Mia K.
2016-01-01
Background Traditional metrics for evaluating the severity of psoriasis are subjective, which complicates efforts to measure effective treatments in clinical trials. Methods We collected images of psoriasis plaques and calibrated the coloration of the images according to an included color card. Features were extracted from the images and used to train a linear discriminant analysis classifier with cross-validation to automatically classify the degree of erythema. The results were tested against numerical scores obtained by a panel of dermatologists using a standard rating system. Results Quantitative measures of erythema based on the digital color images showed good agreement with subjective assessment of erythema severity (κ = 0.4203). The color calibration process improved the agreement from κ = 0.2364 to κ = 0.4203. Conclusions We propose a method for the objective measurement of the psoriasis severity parameter of erythema and show that the calibration process improved the results. PMID:26517973
NASA Astrophysics Data System (ADS)
Zhou, Jiangying; Lopresti, Daniel P.; Tasdizen, Tolga
1998-04-01
In this paper, we consider the problem of locating and extracting text from WWW images. A previous algorithm based on color clustering and connected-components analysis works well as long as the color of each character is relatively uniform and the typography is fairly simple. It breaks down quickly, however, when these assumptions are violated. In this paper, we describe more robust techniques for dealing with this challenging problem. We present an improved color clustering algorithm that measures similarity based on both RGB values and spatial proximity. Layout analysis is also incorporated to handle more complex typography. These changes significantly enhance the performance of our text detection procedure.
Process simulation in digital camera system
NASA Astrophysics Data System (ADS)
Toadere, Florin
2012-06-01
The goal of this paper is to simulate the functionality of a digital camera system. The simulations cover the conversion from light to numerical signal and the color processing and rendering. We consider the image acquisition system to be linear, shift-invariant, and axial, with light propagation orthogonal to the system. We use a spectral image processing algorithm to simulate the radiometric properties of a digital camera. In the algorithm we take into consideration the transmittances of the light source, lenses, and filters, and the quantum efficiency of a CMOS (complementary metal oxide semiconductor) sensor. The optical part is characterized by a multiple convolution of the different point spread functions of the optical components; we use a Cooke triplet, the aperture, the light fall-off, and the optical part of the CMOS sensor. The electrical part consists of Bayer sampling, interpolation, signal-to-noise ratio, dynamic range, analog-to-digital conversion, and JPG compression. We reconstruct the noisy, blurred image by blending differently exposed images in order to reduce the photon shot noise, and we also filter the fixed-pattern noise and sharpen the image. Then we have the color processing blocks: white balancing, color correction, gamma correction, and conversion from the XYZ color space to the RGB color space. For the reproduction of color we use an OLED (organic light emitting diode) monitor. The analysis can be useful to assist students and engineers in image quality evaluation and imaging system design. Many other configurations of blocks can be used in our analysis.
Camacho-Bello, César; Padilla-Vivanco, Alfonso; Toxqui-Quitl, Carina; Báez-Rojas, José Javier
2016-01-01
A detailed analysis of the quaternion generic Jacobi-Fourier moments (QGJFMs) for color image description is presented. In order to reach numerical stability, a recursive approach is used during the computation of the generic Jacobi radial polynomials. Moreover, a search criterion is performed to establish the best values for the parameters α and β of the radial Jacobi polynomial families. Additionally, a polar pixel approach is taken into account to increase the numerical accuracy in the calculation of the QGJFMs. To prove the mathematical theory, some color images from optical microscopy and human retina are used. Experiments and results about color image reconstruction are presented. PMID:27014716
Measure the color distribution of a cotton sample using image analysis
USDA-ARS?s Scientific Manuscript database
The most commonly used measurement of cotton color is by the colorimeter principle, which reports the sample's color grade. However, the color distribution and variation within the sample are not reported. Obtaining color distributions of cotton samples will enable a more comprehensive evaluation of...
Intra- and inter-rater reliability of digital image analysis for skin color measurement
Sommers, Marilyn; Beacham, Barbara; Baker, Rachel; Fargo, Jamison
2013-01-01
Background We determined the intra- and inter-rater reliability of data from digital image color analysis between an expert and novice analyst. Methods Following training, the expert and novice independently analyzed 210 randomly ordered images. Both analysts used Adobe® Photoshop lasso or color sampler tools based on the type of image file. After color correction with Pictocolor® in camera software, they recorded L*a*b* (L*=light/dark; a*=red/green; b*=yellow/blue) color values for all skin sites. We computed intra-rater and inter-rater agreement within anatomical region, color value (L*, a*, b*), and technique (lasso, color sampler) using a series of one-way intra-class correlation coefficients (ICCs). Results Results of ICCs for intra-rater agreement showed high levels of internal consistency reliability within each rater for the lasso technique (ICC ≥ 0.99) and somewhat lower, yet acceptable, level of agreement for the color sampler technique (ICC = 0.91 for expert, ICC = 0.81 for novice). Skin L*, skin b*, and labia L* values reached the highest level of agreement (ICC ≥ 0.92) and skin a*, labia b*, and vaginal wall b* were the lowest (ICC ≥ 0.64). Conclusion Data from novice analysts can achieve high levels of agreement with data from expert analysts with training and the use of a detailed, standard protocol. PMID:23551208
Intra- and inter-rater reliability of digital image analysis for skin color measurement.
Sommers, Marilyn; Beacham, Barbara; Baker, Rachel; Fargo, Jamison
2013-11-01
We determined the intra- and inter-rater reliability of data from digital image color analysis between an expert and novice analyst. Following training, the expert and novice independently analyzed 210 randomly ordered images. Both analysts used Adobe(®) Photoshop lasso or color sampler tools based on the type of image file. After color correction with Pictocolor(®) in camera software, they recorded L*a*b* (L*=light/dark; a*=red/green; b*=yellow/blue) color values for all skin sites. We computed intra-rater and inter-rater agreement within anatomical region, color value (L*, a*, b*), and technique (lasso, color sampler) using a series of one-way intra-class correlation coefficients (ICCs). Results of ICCs for intra-rater agreement showed high levels of internal consistency reliability within each rater for the lasso technique (ICC ≥ 0.99) and somewhat lower, yet acceptable, level of agreement for the color sampler technique (ICC = 0.91 for expert, ICC = 0.81 for novice). Skin L*, skin b*, and labia L* values reached the highest level of agreement (ICC ≥ 0.92) and skin a*, labia b*, and vaginal wall b* were the lowest (ICC ≥ 0.64). Data from novice analysts can achieve high levels of agreement with data from expert analysts with training and the use of a detailed, standard protocol. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
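As a sketch of the statistic used in both records above, the one-way random-effects ICC(1,1) of Shrout and Fleiss can be computed directly from a ratings matrix. The data below are synthetic stand-ins for L* values, not the study's measurements.

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) (Shrout & Fleiss).

    ratings: (n_targets, k_raters) array, e.g. L* values for the same image
    analyzed by different raters (or repeatedly by one rater).
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand_mean = ratings.mean()
    target_means = ratings.mean(axis=1)

    ss_between = k * np.sum((target_means - grand_mean) ** 2)
    ss_within = np.sum((ratings - target_means[:, None]) ** 2)

    ms_between = ss_between / (n - 1)
    ms_within = ss_within / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_l = rng.uniform(40, 70, size=30)                    # "true" L* per image
    scores = true_l[:, None] + rng.normal(0, 1.0, (30, 2))   # two raters with noise
    print(round(icc_oneway(scores), 3))
```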
An imaging colorimeter for noncontact tissue color mapping.
Balas, C
1997-06-01
There has been a considerable effort in several medical fields toward objective color analysis and characterization of biological tissues. Conventional colorimeters have proved inadequate for this purpose, since they do not provide spatial color information and because the measuring procedure randomly affects the color of the tissue. In this paper an imaging colorimeter is presented, in which the nonimaging optical photodetector of colorimeters is replaced with the charge-coupled device (CCD) sensor of a color video camera, enabling the independent capture of color information for any spatial point within its field of view. Combining imaging and colorimetry methods, the acquired image is calibrated and corrected under several ambient light conditions, providing noncontact, reproducible color measurements and mapping, free of the errors and limitations present in conventional colorimeters. This system was used for monitoring blood supply changes of psoriatic plaques that had undergone psoralen and ultraviolet-A radiation (PUVA) therapy, where reproducible and reliable measurements were demonstrated. These features highlight the potential of imaging colorimeters as clinical and research tools for the standardization of clinical diagnosis and for the objective evaluation of treatment effectiveness.
Remky, A; Arend, O; Beausencourt, E; Elsner, A E; Bertram, B
1996-01-01
Retinal vessel diameter is an important parameter in blood flow analysis. Despite modern digital image technology, most clinical studies investigate diameters subjectively using projected fundus slides or negatives. In the present study we used a technique to examine vessel diameters by digital image analysis of color fundus slides. We retrospectively investigated diameter changes in twenty diabetic patients before and after panretinal laser coagulation. Color fundus slides were digitized by a new high-resolution scanning device. The resulting images consisted of three channels (red, green, blue). Since vessel contrast was highest in the green channel, we assessed grey-value profiles perpendicular to the vessels in the green channel. Diameters were measured at the half-height of the profile. After panretinal laser coagulation, average venous diameter decreased, whereas arterial diameter remained unchanged. There was no significant relation between the diameter change and the number of laser burns or the presence of neovascularization. Splitting digitized images into color planes enables objective measurements of retinal vessel diameters in conventional color slides.
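A minimal sketch of the half-height diameter measurement, assuming a 1-D grey-value profile sampled perpendicular to a dark vessel in the green channel; the sub-pixel interpolation of the half-height crossings is an assumption, since the abstract does not specify how the crossings are located.

```python
import numpy as np

def diameter_at_half_height(profile, pixel_size_um=1.0):
    """Estimate vessel width from a grey-value profile taken perpendicular to
    the vessel (the vessel appears as a dark dip in the green channel).

    Width is measured at half-height between the local background level
    (taken here as the profile maximum) and the dip minimum.
    """
    profile = np.asarray(profile, dtype=float)
    background = profile.max()
    bottom = profile.min()
    half = 0.5 * (background + bottom)

    below = np.where(profile < half)[0]          # samples inside the vessel
    if below.size == 0:
        return 0.0
    left, right = below[0], below[-1]

    def crossing(a, b):
        # Sub-pixel position where the profile crosses the half-height level
        # on the segment between indices a and b (linear interpolation).
        return a + (half - profile[a]) / (profile[b] - profile[a]) * (b - a)

    x_left = crossing(left - 1, left) if left > 0 else float(left)
    x_right = crossing(right, right + 1) if right < profile.size - 1 else float(right)
    return (x_right - x_left) * pixel_size_um
```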
Self-Organizing-Map Program for Analyzing Multivariate Data
NASA Technical Reports Server (NTRS)
Li, P. Peggy; Jacob, Joseph C.; Block, Gary L.; Braverman, Amy J.
2005-01-01
SOM_VIS is a computer program for analysis and display of multidimensional sets of Earth-image data typified by the data acquired by the Multi-angle Imaging Spectro-Radiometer [MISR (a spaceborne instrument)]. In SOM_VIS, an enhanced self-organizing-map (SOM) algorithm is first used to project a multidimensional set of data into a nonuniform three-dimensional lattice structure. The lattice structure is mapped to a color space to obtain a color map for an image. The Voronoi cell-refinement algorithm is used to map the SOM lattice structure to various levels of color resolution. The final result is a false-color image in which similar colors represent similar characteristics across all its data dimensions. SOM_VIS provides a control panel for selection of a subset of suitably preprocessed MISR radiance data, and a control panel for choosing parameters to run SOM training. SOM_VIS also includes a component for displaying the false-color SOM image, a color map for the trained SOM lattice, a plot showing an original input vector in 36 dimensions of a selected pixel from the SOM image, the SOM vector that represents the input vector, and the Euclidean distance between the two vectors.
Comparison and evaluation on image fusion methods for GaoFen-1 imagery
NASA Astrophysics Data System (ADS)
Zhang, Ningyu; Zhao, Junqing; Zhang, Ling
2016-10-01
Currently, many research works focus on finding the fusion method best suited to satellite images from SPOT, QuickBird, Landsat and so on, but only a few discuss the application to GaoFen-1 satellite images. This paper evaluates four fusion methods (principal component analysis transform, Brovey transform, hue-saturation-value transform, and Gram-Schmidt transform) from the perspective of preserving the original spectral information. The experimental results showed that the images fused by the four methods not only retain the high spatial resolution of the panchromatic band but also keep abundant spectral information. Through comparison and evaluation, the Brovey transform integrates the data well, but its color fidelity is not the best. The brightness and color distortion in the hue-saturation-value transformed image is the largest. The principal component analysis transform performs well in color fidelity, but its clarity still needs improvement. The Gram-Schmidt transform works best in color fidelity, renders the edges of vegetation most clearly, and yields a sharper fused image than principal component analysis, making it the most appropriate for distinguishing vegetation from non-vegetation areas in GaoFen-1 satellite images. In brief, different fusion methods have different advantages in image quality and class extraction, and should be chosen according to the actual application and the image fusion algorithm.
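A minimal sketch of the Brovey transform step mentioned above, assuming the multispectral bands have already been resampled to the panchromatic grid; the intensity-scaling convention varies between implementations and is not taken from the paper.

```python
import numpy as np

def brovey_fusion(ms, pan, eps=1e-6):
    """Brovey-transform pan-sharpening.

    ms:  (H, W, B) multispectral bands resampled to the panchromatic grid
         (for GaoFen-1, e.g. blue/green/red/NIR).
    pan: (H, W) panchromatic band.
    Each band is rescaled by its share of the total multispectral intensity
    and modulated by the panchromatic image.
    """
    ms = ms.astype(float)
    pan = pan.astype(float)
    total = ms.sum(axis=2, keepdims=True) + eps   # eps avoids division by zero
    return ms / total * pan[..., None]
```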
Classification of pulmonary airway disease based on mucosal color analysis
NASA Astrophysics Data System (ADS)
Suter, Melissa; Reinhardt, Joseph M.; Riker, David; Ferguson, John Scott; McLennan, Geoffrey
2005-04-01
Airway mucosal color changes occur in response to the development of bronchial diseases including lung cancer, cystic fibrosis, chronic bronchitis, emphysema, and asthma. These associated changes are often visualized using standard macro-optical bronchoscopy techniques. A limitation of this form of assessment is that the subtle changes that indicate early stages of disease development may often be missed as a result of this highly subjective assessment, especially by inexperienced bronchoscopists. Tri-chromatic CCD-chip bronchoscopes allow digital color analysis of the pulmonary airway mucosa. This form of analysis may facilitate a greater understanding of airway disease response. A two-step image classification approach is employed: the first step distinguishes between healthy and diseased bronchoscope images, and the second classifies the detected abnormal images into one of four possible disease categories. A database of airway mucosal color constructed from healthy human volunteers is used as a standard against which statistical comparisons are made for mucosa with known apparent airway abnormalities. This approach demonstrates great promise as an effective detection and diagnosis tool to highlight potentially abnormal airway mucosa, identifying a region possibly suited to further analysis via airway forceps biopsy or newly developed micro-optical biopsy strategies. Following the identification of abnormal airway images, a neural network is used to distinguish between the different disease classes. We have shown that classification of potentially diseased airway mucosa is possible through comparative color analysis of digital bronchoscope images. The combination of the two strategies appears to increase the classification accuracy in addition to greatly decreasing the computational time.
NASA Technical Reports Server (NTRS)
Vu, Duc; Sandor, Michael; Agarwal, Shri
2005-01-01
CSAM Metrology Software Tool (CMeST) is a computer program for analysis of false-color CSAM images of plastic-encapsulated microcircuits. (CSAM signifies C-mode scanning acoustic microscopy.) The colors in the images indicate areas of delamination within the plastic packages. Heretofore, the images have been interpreted by human examiners. Hence, interpretations have not been entirely consistent and objective. CMeST processes the color information in image-data files to detect areas of delamination without incurring inconsistencies of subjective judgement. CMeST can be used to create a database of baseline images of packages acquired at given times for comparison with images of the same packages acquired at later times. Any area within an image can be selected for analysis, which can include examination of different delamination types by location. CMeST can also be used to perform statistical analyses of image data. Results of analyses are available in a spreadsheet format for further processing. The results can be exported to any data-base-processing software.
Separation of specular and diffuse components using tensor voting in color images.
Nguyen, Tam; Vo, Quang Nhat; Yang, Hyung-Jeong; Kim, Soo-Hyung; Lee, Guee-Sang
2014-11-20
Most methods for the detection and removal of specular reflections suffer from nonuniform highlight regions and/or nonconverged artifacts induced by discontinuities in the surface colors, especially when dealing with highly textured, multicolored images. In this paper, a novel noniterative and predefined-constraint-free method based on tensor voting is proposed to detect and remove the highlight components of a single color image. The distribution of diffuse and specular pixels in the original image is determined using the tensors' saliency analysis, instead of comparing color information among neighboring pixels. The obtained diffuse reflectance distribution is then used to remove the specular components. The proposed method is evaluated quantitatively and qualitatively over a dataset of highly textured, multicolored images. The experimental results show that our method outperforms other state-of-the-art techniques.
Analysis of crystalline lens coloration using a black and white charge-coupled device camera.
Sakamoto, Y; Sasaki, K; Kojima, M
1994-01-01
To analyze lens coloration in vivo, we used a new type of Scheimpflug camera equipped with a black-and-white charge-coupled device (CCD) sensor, and a new methodology was proposed. Scheimpflug images of the lens were taken three times, through red (R), green (G), and blue (B) filters, respectively. The three images corresponding to the R, G, and B channels were combined into one image on the cathode-ray tube (CRT) display. The spectral transmittance of the tricolor filters and the spectral sensitivity of the CCD camera were used to correct the scattered-light intensity of each image. Coloration of the lens was expressed on a CIE standard chromaticity diagram. The lens coloration of seven eyes analyzed by this method showed values almost the same as those obtained by the previous method using color film.
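A small sketch of the final chromaticity step, assuming the corrected R, G, B scattering intensities are mapped to XYZ through a linear matrix; the sRGB/D65 matrix below is only a placeholder for the filter- and sensor-specific mapping the authors derived.

```python
import numpy as np

# Placeholder linear RGB -> XYZ matrix (sRGB/D65 primaries). The study derived
# its mapping from the filters' transmittances and the CCD sensitivity instead.
RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])

def chromaticity(rgb):
    """Map corrected R, G, B scattering intensities to CIE xy chromaticity."""
    xyz = RGB_TO_XYZ @ np.asarray(rgb, dtype=float)
    s = xyz.sum()
    return xyz[0] / s, xyz[1] / s          # (x, y) on the chromaticity diagram

print(chromaticity([0.7, 0.6, 0.35]))      # a slightly yellowish sample
```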
Color analysis of the human airway wall
NASA Astrophysics Data System (ADS)
Gopalakrishnan, Deepa; McLennan, Geoffrey; Donnelley, Martin; Delsing, Angela; Suter, Melissa; Flaherty, Dawn; Zabner, Joseph; Hoffman, Eric A.; Reinhardt, Joseph M.
2002-04-01
A bronchoscope can be used to examine the mucosal surface of the airways for abnormalities associated with a variety of lung diseases. The diagnosis of these abnormalities through the process of bronchoscopy is based, in part, on changes in airway wall color. Therefore it is important to characterize the normal color inside the airways. We propose a standardized method to calibrate the bronchoscopic imaging system and to tabulate the normal colors of the airway. Our imaging system consists of a Pentium PC and video frame grabber, coupled with a true color bronchoscope. The calibration procedure uses 24 standard color patches. Images of these color patches at three different distances (1, 1.5, and 2 cm) were acquired using the bronchoscope in a darkened room, to assess repeatability and sensitivity to illumination. The images from the bronchoscope are in a device-dependent Red-Green-Blue (RGB) color space, which was converted to a tri-stimulus image and then into a device-independent color space sRGB image by a fixed polynomial transformation. Images were acquired from five normal human volunteer subjects, two cystic fibrosis (CF) patients and one normal heavy smoker subject. The hue and saturation values of regions within the normal airway were tabulated and these values were compared with the values obtained from regions within the airways of the CF patients and the normal heavy smoker. Repeated measurements of the same region in the airways showed no measurable change in hue or saturation.
Smartphone-based colorimetric analysis for detection of saliva alcohol concentration.
Jung, Youngkee; Kim, Jinhee; Awofeso, Olumide; Kim, Huisung; Regnier, Fred; Bae, Euiwon
2015-11-01
A simple device and associated analytical methods are reported. We provide objective and accurate determination of saliva alcohol concentrations using smartphone-based colorimetric imaging. The device utilizes any smartphone with a miniature attachment that positions the sample and provides constant illumination for sample imaging. Analyses of histograms based on channel imaging of red-green-blue (RGB) and hue-saturation-value (HSV) color space provide unambiguous determination of blood alcohol concentration from color changes on sample pads. A smartphone-based sample analysis by colorimetry was developed and tested with blind samples that matched with the training sets. This technology can be adapted to any smartphone and used to conduct color change assays.
NASA Astrophysics Data System (ADS)
Levay, Z. G.
2004-12-01
A new, freely-available accessory for Adobe's widely-used Photoshop image editing software makes it much more convenient to produce presentable images directly from FITS data. It merges a fully-functional FITS reader with an intuitive user interface and includes fully interactive flexibility in scaling data. Techniques for producing attractive images from astronomy data using the FITS plugin will be presented, including the assembly of full-color images. These techniques have been successfully applied to producing colorful images for public outreach with data from the Hubble Space Telescope and other major observatories. Now it is much less cumbersome for students or anyone not experienced with specialized astronomical analysis software, but reasonably familiar with digital photography, to produce useful and attractive images.
Some spectral and spatial characteristics of LANDSAT data
NASA Technical Reports Server (NTRS)
1982-01-01
Activities are provided for: (1) developing insight into the way in which the LANDSAT MSS produces multispectral data; (2) promoting understanding of what a "pixel" means in a LANDSAT image and the implications of the term "mixed pixel"; (3) explaining the concept of spectral signatures; (4) deriving a simple signature for a class or feature by analysis: of the four band images; (5) understanding the production of false color composites; (6) appreciating the use of color additive techniques; (7) preparing Diazo images; and (8) making quick visual identifications of major land cover types by their characteristic gray tones or colors in LANDSAT images.
PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras.
Zheng, Lei; Lukac, Rastislav; Wu, Xiaolin; Zhang, David
2009-04-01
Single-sensor digital color cameras use a process called color demosaicking to produce full-color images from the data captured by a color filter array (CFA). The quality of demosaicked images is degraded by the sensor noise introduced during the image acquisition process. The conventional solution to combating CFA sensor noise is demosaicking first, followed by a separate denoising process. This strategy generates many noise-caused color artifacts in the demosaicking step, which are hard to remove in the denoising step. Few denoising schemes that work directly on the CFA images have been presented because of the difficulties arising from the red, green and blue interlaced mosaic pattern, yet a well-designed "denoising first and demosaicking later" scheme can have advantages such as fewer noise-caused color artifacts and cost-effective implementation. This paper presents a principal component analysis (PCA)-based spatially adaptive denoising algorithm, which works directly on the CFA data using a supporting window to analyze the local image statistics. By exploiting the spatial and spectral correlations existing in the CFA image, the proposed method can effectively suppress noise while preserving color edges and details. Experiments using both simulated and real CFA images indicate that the proposed scheme outperforms many existing approaches, including sophisticated demosaicking and denoising schemes, in terms of both objective measurement and visual evaluation.
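As a rough, non-spatially-adaptive illustration of the PCA-shrinkage idea (not the paper's CFA algorithm), the sketch below denoises a single color plane by collecting overlapping patches, projecting them onto their principal components, and applying LMMSE-style shrinkage with an assumed noise level.

```python
import numpy as np

def pca_denoise(img, patch=6, sigma=10.0):
    """Simplified global patch-PCA denoising of one 2-D plane (values ~[0, 255]).

    sigma is the assumed standard deviation of additive noise. The paper's
    method is spatially adaptive and works on the interlaced CFA mosaic; this
    sketch only illustrates the shrinkage principle.
    """
    H, W = img.shape
    rows = [img[i:i + patch, j:j + patch].ravel()
            for i in range(H - patch + 1) for j in range(W - patch + 1)]
    X = np.array(rows, dtype=float)
    mean = X.mean(axis=0)
    Xc = X - mean

    # PCA of the patch ensemble.
    cov = Xc.T @ Xc / Xc.shape[0]
    eigval, eigvec = np.linalg.eigh(cov)
    coeffs = Xc @ eigvec

    # LMMSE-style shrinkage of each principal component.
    signal_var = np.maximum(eigval - sigma ** 2, 0.0)
    shrink = signal_var / (signal_var + sigma ** 2)
    Xd = (coeffs * shrink) @ eigvec.T + mean

    # Re-assemble the image by averaging overlapping patch estimates.
    out = np.zeros((H, W))
    weight = np.zeros((H, W))
    k = 0
    for i in range(H - patch + 1):
        for j in range(W - patch + 1):
            out[i:i + patch, j:j + patch] += Xd[k].reshape(patch, patch)
            weight[i:i + patch, j:j + patch] += 1.0
            k += 1
    return out / weight
```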
Combining multiple features for color texture classification
NASA Astrophysics Data System (ADS)
Cusano, Claudio; Napoletano, Paolo; Schettini, Raimondo
2016-11-01
The analysis of color and texture has a long history in image analysis and computer vision. These two properties are often considered independent, even though they are strongly related in images of natural objects and materials. The correlation between color and texture information is especially relevant in the case of variable illumination, a condition that has a crucial impact on the effectiveness of most visual descriptors. We propose an ensemble of hand-crafted image descriptors designed to capture different aspects of color textures. We show that the use of these descriptors in a multiple-classifier framework makes it possible to achieve very high accuracy in classifying texture images acquired under different lighting conditions. A powerful alternative to hand-crafted descriptors is represented by features obtained with deep learning methods. We also show how, under the proposed combining strategy, hand-crafted and convolutional neural network features can be used together to further improve the classification accuracy. Experimental results on a food database (raw food texture) demonstrate the effectiveness of the proposed strategy.
MOVING BEYOND COLOR: THE CASE FOR MULTISPECTRAL IMAGING IN BRIGHTFIELD PATHOLOGY
Cukierski, William J.; Qi, Xin; Foran, David J.
2009-01-01
A multispectral camera is capable of imaging a histologic slide at narrow bandwidths over the range of the visible spectrum. While several uses for multispectral imaging (MSI) have been demonstrated in pathology [1, 2], there is no unified consensus over when and how MSI might benefit automated analysis [3, 4]. In this work, we use a linear-algebra framework to investigate the relationship between the spectral image and its standard-image counterpart. The multispectral “cube” is treated as an extension of a traditional image in a high-dimensional color space. The concept of metamers is introduced and used to derive regions of the visible spectrum where MSI may provide an advantage. Furthermore, histological stains which are amenable to analysis by MSI are reported. We show the Commission internationale de l’éclairage (CIE) 1931 transformation from spectrum to color is non-neighborhood preserving. Empirical results are demonstrated on multispectral images of peripheral blood smears. PMID:19997528
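A short sketch of the spectrum-to-color projection that gives rise to metamers, assuming the CIE 1931 color-matching functions are available as tabulated arrays sampled at the cube's band wavelengths.

```python
import numpy as np

def spectrum_to_xyz(wavelengths, spectrum, cmf_x, cmf_y, cmf_z):
    """Project a measured spectral signal onto CIE 1931 XYZ.

    wavelengths:          sample points in nm (e.g. the multispectral bands)
    spectrum:             spectral signal at those wavelengths
    cmf_x, cmf_y, cmf_z:  CIE 1931 standard-observer color-matching functions
                          sampled at the same wavelengths (assumed available
                          from a tabulated file)
    """
    X = np.trapz(spectrum * cmf_x, wavelengths)
    Y = np.trapz(spectrum * cmf_y, wavelengths)
    Z = np.trapz(spectrum * cmf_z, wavelengths)
    return X, Y, Z

# Two spectra that integrate to the same XYZ triple are metamers: they collapse
# to the same color even though their multispectral "cubes" differ.
```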
Roy R. Rosenberger; Carl J. Houtman
2000-01-01
The USPS Image Analysis (IA) protocol recommends the use of hydrophobic dyes to develop contrast between pressure sensitive adhesive (PSA) particles and cellulosic fibers before using a dirt counter to detect all contaminants that have contrast with the handsheet background. Unless the sample contains no contaminants other than those of interest, two measurement steps...
Content-based quality evaluation of color images: overview and proposals
NASA Astrophysics Data System (ADS)
Tremeau, Alain; Richard, Noel; Colantoni, Philippe; Fernandez-Maloigne, Christine
2003-12-01
The automatic prediction of perceived quality from image data in general, and the assessment of particular image characteristics or attributes that may need improvement in particular, is becoming an increasingly important part of intelligent imaging systems. The purpose of this paper is to propose that the color imaging community develop a software package, available on the internet, to help users select the approach best suited to a given application. The ultimate goal of this project is to propose, and then implement, an open and unified color imaging system that sets up a favourable context for the evaluation and analysis of color imaging processes. Many different methods for measuring the performance of a process have been proposed by different researchers. In this paper, we discuss the advantages and shortcomings of most of the main analysis criteria and performance measures currently used. The aim is not to establish a harsh competition between algorithms or processes, but rather to test and compare the efficiency of methodologies, first to highlight the strengths and weaknesses of a given algorithm or methodology on a given image type, and second to make these results publicly available. This paper is focused on two important unsolved problems. Why is it so difficult to select a color space that gives better results than another? Why is it so difficult to select an image quality metric that agrees better with the judgment of the human visual system than another? Several methods used either in color imaging or in image quality are thus discussed. Proposals for content-based image measures and means of developing a standard test suite are then presented. We advocate an evaluation protocol based on an automated procedure; this is the ultimate goal of our proposal.
False-color representation of electron-density structures of the polar ionosphere
NASA Astrophysics Data System (ADS)
Schlegel, K.
The use of false-color displays to represent EISCAT electron-density measurements for the polar E and F regions is described and demonstrated. Consideration is given to images of a spring sunrise, wavelike structures, the total-electron-content trough, E-region structures, and midnight-sun phenomena. It is suggested that examination of false-color images can facilitate the selection of structures for more detailed analysis.
Change Detection Analysis of Water Pollution in Coimbatore Region using Different Color Models
NASA Astrophysics Data System (ADS)
Jiji, G. Wiselin; Devi, R. Naveena
2017-12-01
The data acquired by remote sensing satellites provide facts about land and water at varying resolutions and have been widely used for several change detection studies. Although many change detection methodologies and techniques already exist, new ones continue to emerge. Existing change detection techniques exploit images that are either in gray scale or in the RGB color model. In this paper we introduce other color models for performing change detection for water pollution. The polluted lakes are classified, post-classification change detection techniques are applied to the RGB images, and the results are analysed to determine whether changes exist. Furthermore, the RGB images obtained after classification, when converted to either of the two color models YCbCr and YIQ, are found to produce the same results as the RGB model images. Thus it can be concluded that other color models such as YCbCr and YIQ can be used as substitutes for the RGB color model when analysing change detection with regard to water pollution.
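For reference, YCbCr (ITU-R BT.601, without chroma offsets) and YIQ are linear transforms of RGB, which is consistent with the study's finding that the change maps match those computed in RGB. A minimal sketch:

```python
import numpy as np

# ITU-R BT.601 luma/chroma and NTSC YIQ conversion matrices for RGB in [0, 1].
RGB_TO_YCBCR = np.array([[ 0.299,  0.587,  0.114],
                         [-0.169, -0.331,  0.500],
                         [ 0.500, -0.419, -0.081]])
RGB_TO_YIQ = np.array([[0.299,  0.587,  0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523,  0.312]])

def convert(image, matrix):
    """image: (H, W, 3) RGB array in [0, 1]; returns the converted planes."""
    return image @ matrix.T

if __name__ == "__main__":
    rgb = np.random.rand(8, 8, 3)
    print(convert(rgb, RGB_TO_YCBCR).shape, convert(rgb, RGB_TO_YIQ).shape)
```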
Raina, A; Hennessy, R; Rains, M; Allred, J; Hirshburg, J M; Diven, D G; Markey, M K
2016-08-01
Traditional metrics for evaluating the severity of psoriasis are subjective, which complicates efforts to measure effective treatments in clinical trials. We collected images of psoriasis plaques and calibrated the coloration of the images according to an included color card. Features were extracted from the images and used to train a linear discriminant analysis classifier with cross-validation to automatically classify the degree of erythema. The results were tested against numerical scores obtained by a panel of dermatologists using a standard rating system. Quantitative measures of erythema based on the digital color images showed good agreement with subjective assessment of erythema severity (κ = 0.4203). The color calibration process improved the agreement from κ = 0.2364 to κ = 0.4203. We propose a method for the objective measurement of the psoriasis severity parameter of erythema and show that the calibration process improved the results. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
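A hedged sketch of the classification step described above, assuming color features have already been extracted from the calibrated plaque images. It uses scikit-learn's linear discriminant analysis with cross-validated predictions and Cohen's kappa against the reference grades; the feature matrix and labels below are synthetic placeholders, not the study's data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import cross_val_predict

# Hypothetical feature matrix: one row per calibrated plaque image, columns
# holding color features (e.g. mean a*, redness ratios); y holds the
# dermatologists' erythema grades (0-3).
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 6))
y = rng.integers(0, 4, size=120)

clf = LinearDiscriminantAnalysis()
pred = cross_val_predict(clf, X, y, cv=5)        # cross-validated class labels
print("kappa vs. reference grades:", cohen_kappa_score(y, pred))
```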
Zapotoczny, Piotr; Kozera, Wojciech; Karpiesiuk, Krzysztof; Pawłowski, Rodian
2014-08-01
The effect of management systems on selected physical properties and chemical composition of m. longissimus dorsi was studied in pigs. Muscle texture parameters were determined by computer-assisted image analysis, and the color of muscle samples was evaluated using a spectrophotometer. Highly significant correlations were observed between chemical composition and selected texture variables in the analyzed images. Chemical composition was not correlated with color or spectral distribution. Depending on the applied classification methods and the groups of variables included in the classification model, the experimental groups were identified correctly in 35-95% of cases. No significant differences in the chemical composition of m. longissimus dorsi were observed between experimental groups. Significant differences were noted in color lightness (L*) and redness (a*). Copyright © 2014 Elsevier Ltd. All rights reserved.
The Global Color of Pluto from New Horizons
NASA Astrophysics Data System (ADS)
Olkin, Catherine B.; Spencer, John R.; Grundy, William M.; Parker, Alex H.; Beyer, Ross A.; Schenk, Paul M.; Howett, Carly J. A.; Stern, S. Alan; Reuter, Dennis C.; Weaver, Harold A.; Young, Leslie A.; Ennico, Kimberly; Binzel, Richard P.; Buie, Marc W.; Cook, Jason C.; Cruikshank, Dale P.; Dalle Ore, Cristina M.; Earle, Alissa M.; Jennings, Donald E.; Singer, Kelsi N.; Linscott, Ivan E.; Lunsford, Allen W.; Protopapa, Silvia; Schmitt, Bernard; Weigle, Eddie; the New Horizons Science Team
2017-12-01
The New Horizons flyby provided the first high-resolution color maps of Pluto. We present here, for the first time, an analysis of the color of the entire sunlit surface of Pluto and the first quantitative analysis of color and elevation on the encounter hemisphere. These maps show the color variation across the surface from the very red terrain in the equatorial region, to the more neutral colors of the volatile ices in Sputnik Planitia, the blue terrain of East Tombaugh Regio, and the yellow hue on Pluto’s North Pole. There are two distinct color mixing lines in the color-color diagrams derived from images of Pluto. Both mixing lines have an apparent starting point in common: the relatively neutral-color volatile-ice covered terrain. One line extends to the dark red terrain exemplified by Cthulhu Regio and the other extends to the yellow hue in the northern latitudes. There is a latitudinal dependence of the predominant color mixing line with the most red terrain located near the equator, less red distributed at mid-latitudes and more neutral terrain at the North Pole. This is consistent with the seasonal cycle controlling the distribution of colors on Pluto. Additionally, the red color is consistent with tholins. The yellow terrain (in the false color images) located at the northern latitudes occurs at higher elevations.
Estimation of melanin content in iris of human eye: prognosis for glaucoma diagnostics
NASA Astrophysics Data System (ADS)
Bashkatov, Alexey N.; Koblova, Ekaterina V.; Genina, Elina A.; Kamenskikh, Tatyana G.; Dolotov, Leonid E.; Sinichkin, Yury P.; Tuchin, Valery V.
2007-02-01
Based on experimental data obtained in vivo from digital analysis of color images of human irises, the mean melanin content in human irises has been estimated. An Olympus C-5060 digital camera was used to register the color images. The images were obtained from the irises of healthy volunteers as well as from the irises of patients with open-angle glaucoma. A computer program has been developed for digital analysis of the images. The results are useful for the development of novel, and the optimization of existing, methods of non-invasive glaucoma diagnostics.
Region of interest extraction based on multiscale visual saliency analysis for remote sensing images
NASA Astrophysics Data System (ADS)
Zhang, Yinggang; Zhang, Libao; Yu, Xianchuan
2015-01-01
Region of interest (ROI) extraction is an important component of remote sensing image processing. However, traditional ROI extraction methods are usually prior-knowledge-based and depend on classification, segmentation, and a global searching solution, which are time-consuming and computationally complex. We propose a more efficient ROI extraction model for remote sensing images based on multiscale visual saliency analysis (MVS), implemented in the CIE L*a*b* color space, which is similar to the visual perception of the human eye. We first extract the intensity, orientation, and color features of the image using different methods: the visual attention mechanism is used to extract the intensity feature using a difference-of-Gaussian template; the integer wavelet transform is used to extract the orientation feature; and color information content analysis is used to obtain the color feature. Then, a new feature-competition method is proposed that addresses the different contributions of each feature map to calculate the weight of each feature image for combining them into the final saliency map. Qualitative and quantitative experimental results of the MVS model as compared with those of other models show that it is more effective and provides more accurate ROI extraction results with fewer holes inside the ROI.
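A minimal sketch of a center-surround (difference-of-Gaussians) intensity response of the kind used in saliency models; the scales are illustrative, and the sketch does not reproduce the paper's feature-competition or wavelet steps.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_intensity_map(intensity, sigma_center=1.0, sigma_surround=4.0):
    """Center-surround (difference-of-Gaussians) response.

    intensity: 2-D array, e.g. the L* plane of a CIE L*a*b* image.
    Returns a conspicuity map normalized to [0, 1].
    """
    center = gaussian_filter(intensity, sigma_center)
    surround = gaussian_filter(intensity, sigma_surround)
    response = np.abs(center - surround)
    return response / (response.max() + 1e-12)
```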
Hyperspectral imaging using a color camera and its application for pathogen detection
USDA-ARS?s Scientific Manuscript database
This paper reports the results of a feasibility study for the development of a hyperspectral image recovery (reconstruction) technique using a RGB color camera and regression analysis in order to detect and classify colonies of foodborne pathogens. The target bacterial pathogens were the six represe...
White blood cell counting analysis of blood smear images using various segmentation strategies
NASA Astrophysics Data System (ADS)
Safuan, Syadia Nabilah Mohd; Tomari, Razali; Zakaria, Wan Nurshazwani Wan; Othman, Nurmiza
2017-09-01
In white blood cell (WBC) diagnosis, the most crucial measurement parameter is the WBC count. Such information is widely used to evaluate the effectiveness of cancer therapy and to diagnose several hidden infections within the human body. The current practice of manual WBC counting is laborious and highly subjective, which has led to the development of computer-aided systems (CAS) with rigorous image processing solutions. In CAS counting work, segmentation is the crucial step to ensure the accuracy of the counted cells. An optimal segmentation strategy that can work under various blood smear image acquisition conditions remains a great challenge. In this paper, a comparison between different segmentation methods based on color space analysis is elaborated to obtain the best counting outcome. Initially, color space correction is applied to the original blood smear image to standardize the image color intensity level. Next, white blood cell segmentation is performed using combinations of several color-space subtractions (RGB, CMYK, and HSV) together with Otsu thresholding. Noise and unwanted regions that remain after the segmentation process are eliminated by applying a combination of morphological and connected component labelling (CCL) filters. Eventually, the circle Hough transform (CHT) is applied to the segmented image to estimate the number of WBCs, including those in clumped regions. From the experiment, it is found that G-S yields the best performance.
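An illustrative OpenCV sketch of this kind of counting pipeline (thresholding, morphology, connected component labelling, circle Hough transform). The saturation channel, area threshold, and radii below are assumptions for illustration, not the G-S combination the paper reports as best.

```python
import cv2
import numpy as np

def count_wbc(bgr, min_radius=15, max_radius=60):
    """Rough WBC counting sketch on a color-corrected blood smear image."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    sat = hsv[:, :, 1]

    # WBC nuclei are strongly stained, so they stand out in saturation.
    _, mask = cv2.threshold(sat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Remove small noise and fill gaps.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # Drop tiny connected components (stain debris, platelets).
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] < 200:
            mask[labels == i] = 0

    # Circle Hough transform estimates the count, including clumped cells.
    circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=2,
                               minDist=2 * min_radius, param1=100, param2=30,
                               minRadius=min_radius, maxRadius=max_radius)
    return 0 if circles is None else circles.shape[1]
```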
Zarella, Mark D; Breen, David E; Plagov, Andrei; Garcia, Fernando U
2015-01-01
Hematoxylin and eosin (H&E) staining is ubiquitous in pathology practice and research. As digital pathology has evolved, the reliance on quantitative methods that make use of H&E images has similarly expanded. For example, cell counting and nuclear morphometry rely on the accurate demarcation of nuclei from other structures and from each other. One of the major obstacles to quantitative analysis of H&E images is the high degree of variability observed between different samples and different laboratories. In an effort to characterize this variability, as well as to provide a substrate that can potentially mitigate this factor in quantitative image analysis, we developed a technique to project H&E images into an optimized space more appropriate for many image analysis procedures. We used a decision-tree-based support vector machine learning algorithm to classify 44 H&E stained whole slide images of resected breast tumors according to the histological structures that are present. This procedure takes an H&E image as input and produces a classification map of the image that predicts the likelihood of a pixel belonging to any one of a set of user-defined structures (e.g., cytoplasm, stroma). By reducing these maps to their constituent pixels in color space, an optimal reference vector is obtained for each structure, which identifies the color attributes that maximally distinguish one structure from other elements in the image. We show that tissue structures can be identified using this semi-automated technique. By comparing structure centroids across different images, we obtained a quantitative depiction of H&E variability for each structure. This measurement can potentially be utilized in the laboratory to help calibrate daily staining or identify troublesome slides. Moreover, by aligning reference vectors derived from this technique, images can be transformed in a way that standardizes their color properties and makes them more amenable to image processing.
Yoshioka, Yosuke; Nakayama, Masayoshi; Noguchi, Yuji; Horie, Hideki
2013-01-01
Strawberry is rich in anthocyanins, which are responsible for the red color, and contains several colorless phenolic compounds. Among the colorless phenolic compounds, some, such as hydroxycinnamic acid derivatives, emit blue-green fluorescence when excited with ultraviolet (UV) light. Here, we investigated the effectiveness of image analyses for estimating the levels of anthocyanins and UV-excited fluorescent phenolic compounds in fruit. The fruit skin and cut surface of 12 cultivars were photographed under visible and UV light conditions; colors were evaluated based on the color components of the images. The levels of anthocyanins and UV-excited fluorescent compounds in each fruit were also evaluated by spectrophotometric and high performance liquid chromatography (HPLC) analyses, respectively, and the relationships between these levels and the image data were investigated. The red depth of the fruits differed greatly among the cultivars, and anthocyanin content was well estimated from the color values of the cut surface images. Strong UV-excited fluorescence was observed on the cut surfaces of several cultivars, and the grayscale values of the UV-excited fluorescence images were markedly correlated with the levels of those fluorescent compounds as evaluated by HPLC analysis. These results indicate that image analyses can be used to select promising genotypes rich in anthocyanins and fluorescent phenolic compounds. PMID:23853516
Han, J Y; Kim, E J; Lee, H K; Kim, M J; Nam, G W
2015-08-01
This study was conducted to define yellowish skin color, which is a major concern of Asian women, and to develop a 3D skin-pigment color model. A total of 22 Korean females were enrolled in this study. These women were asked to use a functional cosmetic product with whitening agents for 8 weeks. We photographed the subsurface reflection of each subject's face using polarized light. The color of the subsurface reflection is a result of diffusive light transport that is attenuated by various skin pigments such as melanin, hemoglobin, and the skin base color. In this subsurface image, we eliminated the color effects of the melanin and hemoglobin distributions by skin color analysis, resulting in the skin base color. Based on the variety of observed skin base colors from which the melanin and hemoglobin pigments had been removed, we defined a standard skin color for the entire subject group, and then obtained a particular yellowish skin color by excluding the standard skin color from the skin base color. After applying the whitening cosmetic products, the amount of melanin and hemoglobin was reduced by 7.3% and 18.6%, respectively. Also, with our new analysis method, yellowish skin color was found to improve by 2.8%. We showed the improvement three-dimensionally on the 3D Skin Chroma Diagram(™). It became possible to diagnose yellowish color on human skin and to analyze the improvement in skin tone both quantitatively and visually. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
How daylight influences high-order chromatic descriptors in natural images.
Ojeda, Juan; Nieves, Juan Luis; Romero, Javier
2017-07-01
Despite the global and local daylight changes that naturally occur in natural scenes, the human visual system usually adapts quite well to those changes, developing a stable color perception. Nevertheless, the influence of daylight in modeling natural image statistics is not fully understood and has received little attention. The aim of this work was to analyze the influence of daylight changes on different high-order chromatic descriptors (i.e., color volume, color gamut, and number of discernible colors) derived from 350 color images, which were rendered under 108 natural illuminants with correlated color temperatures (CCT) from 2735 to 25,889 K. Results suggest that chromatic and luminance information is almost constant and does not depend on the CCT of the illuminant for values above 14,000 K. Nevertheless, differences between the red-green and blue-yellow image components were found below that CCT, with most of the statistical descriptors analyzed showing local extremes in the range 2950 K-6300 K. Uniform regions and areas of the images attracting observers' attention were also considered in this analysis and were characterized by their patchiness index and their saliency maps. The results of the patchiness index do not show a clear dependence on CCT, and it is remarkable that a significant reduction in the number of discernible colors (58% on average) was found when the images were masked with their corresponding saliency maps. Our results suggest that chromatic diversity, as defined in terms of the discernible colors, can be strongly reduced when an observer scans a natural scene. These findings support the idea that a reduction in the number of discernible colors will guide visual saliency and attention. Whatever model mediates the neural representation of natural images, it is clear that natural image statistics should take into account those local maxima and minima depending on the daylight illumination, and the reduction in the number of discernible colors when salient regions are considered.
Facial color processing in the face-selective regions: an fMRI study.
Nakajima, Kae; Minami, Tetsuto; Tanabe, Hiroki C; Sadato, Norihiro; Nakauchi, Shigeki
2014-09-01
Facial color is important information for social communication, as it provides important clues for recognizing a person's emotion and health condition. Our previous EEG study suggested that the N170 at the left occipito-temporal site is related to facial color processing (Nakajima et al., [2012]: Neuropsychologia 50:2499-2505). However, because of the low spatial resolution of the EEG experiment, which brain region is involved in facial color processing remains controversial. In the present study, we examined the neural substrates of facial color processing using functional magnetic resonance imaging (fMRI). We measured brain activity from 25 subjects during the presentation of natural- and bluish-colored faces and their scrambled images. The bilateral fusiform face area (FFA) and occipital face area (OFA) were localized by the contrast of natural-colored faces versus natural-colored scrambled images. Moreover, region of interest (ROI) analysis showed that the left FFA was sensitive to facial color, whereas the right FFA and the right and left OFA were insensitive to facial color. In combination with our previous EEG results, these data suggest that the left FFA may play an important role in facial color processing. Copyright © 2014 Wiley Periodicals, Inc.
QWT: Retrospective and New Applications
NASA Astrophysics Data System (ADS)
Xu, Yi; Yang, Xiaokang; Song, Li; Traversoni, Leonardo; Lu, Wei
Quaternion wavelet transform (QWT) has attracted much attention in recent years as a new image analysis tool. In most cases it extends the real wavelet transform and the complex wavelet transform (CWT) using quaternion algebra and the 2D Hilbert transform of filter theory, where an analytic signal representation is desirable to retrieve a phase-magnitude description of intrinsically 2D geometric structures in a grayscale image. In the context of color image processing, however, it is adapted to analyze the image pattern and color information as a whole unit by mapping sequential color pixels to a quaternion-valued vector signal. This paper provides a retrospective of QWT and investigates its potential use in the domains of image registration, image fusion, and color image recognition. We indicate that it is important for QWT to incorporate a mechanism for adaptive scale representation of geometric features, which is further clarified through two application instances: uncalibrated stereo matching and optical flow estimation. Moreover, a quaternionic phase congruency model is defined based on the analytic signal representation so as to operate as an invariant feature detector for image registration. To achieve better localization of edges and textures in the image fusion task, we incorporate a directional filter bank (DFB) into the quaternion wavelet decomposition scheme to greatly enhance the direction selectivity and anisotropy of QWT. Finally, the strong potential of QWT in color image recognition is demonstrated in a chromatic face recognition system by establishing invariant color features. Extensive experimental results are presented to highlight the exciting properties of QWT.
Park, Young-Jae; Lee, Jin-Moo; Yoo, Seung-Yeon; Park, Young-Bae
2016-04-01
To examine whether color parameters of tongue inspection (TI) using a digital camera are reliable and valid, and to examine which color parameters serve as predictors of symptom patterns in terms of East Asian medicine (EAM). The tongue substance of 200 female subjects was photographed with a mega-pixel digital camera. Together with the photographs, the subjects were asked to complete Yin deficiency, Phlegm pattern, and Cold-Heat pattern questionnaires. Using three sets of digital imaging software, each digital image was exposure- and white-balance-corrected, and the L* (luminance), a* (red-green balance), and b* (yellow-blue balance) values of the tongues were calculated. To examine the intra- and inter-rater reliabilities and criterion validity of the color analysis method, three raters were asked to calculate color parameters for 20 digital image samples. Finally, four hierarchical regression models were formed. The color parameters showed good or excellent reliability (intra-class correlation coefficients of 0.627-0.887) and significant criterion validity (Spearman's correlations of 0.523-0.718). In the hierarchical regression models, age was a significant predictor of Yin deficiency (β = 0.192), and the b* value of the tip of the tongue was a determinant predictor of the Yin deficiency, Phlegm, and Heat patterns (β = -0.212, -0.172, and -0.163). Luminance (L*) was predictive of the Yin deficiency (β = -0.172) and Cold (β = 0.173) patterns. Our results suggest that color analysis of the tongue using the L*a*b* system is reliable and valid, and that color parameters partially serve as symptom pattern predictors in EAM practice.
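A small sketch of the L*a*b* extraction step, assuming the corrected photographs are in sRGB and using scikit-image's rgb2lab; the region mask stands in for whatever tongue area (e.g. the tip) is selected for analysis.

```python
import numpy as np
from skimage import color

def region_lab_means(rgb_image, mask):
    """Mean L*, a*, b* over a region of interest (e.g. the tip of the tongue).

    rgb_image: (H, W, 3) exposure- and white-balance-corrected image in [0, 1]
    mask:      boolean array marking the region selected for analysis
    """
    lab = color.rgb2lab(rgb_image)               # sRGB input assumed, D65 white
    region = lab[mask]
    return region[:, 0].mean(), region[:, 1].mean(), region[:, 2].mean()

if __name__ == "__main__":
    img = np.random.rand(64, 64, 3)
    roi = np.zeros((64, 64), dtype=bool)
    roi[40:60, 20:44] = True                     # stand-in for the tongue tip
    print(region_lab_means(img, roi))
```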
Improving lip wrinkles: lipstick-related image analysis.
Ryu, Jong-Seong; Park, Sun-Gyoo; Kwak, Taek-Jong; Chang, Min-Youl; Park, Moon-Eok; Choi, Khee-Hwan; Sung, Kyung-Hye; Shin, Hyun-Jong; Lee, Cheon-Koo; Kang, Yun-Seok; Yoon, Moung-Seok; Rang, Moon-Jeong; Kim, Seong-Jin
2005-08-01
The appearance of lip wrinkles is problematic when it adversely affects lipstick make-up, causing incomplete color tone, a spread phenomenon, and pigment remnants. It is necessary to develop an objective assessment method for lip wrinkle status by which the potential of wrinkle-improving products for the lips can be screened. The present study aimed to find useful parameters from image analysis of lip wrinkles as affected by lipstick application. Digital photographic images of the lips before and after lipstick application were assessed in 20 female volunteers. Color tone was measured by hue, saturation, and intensity parameters, and time-related pigment spread was calculated as the area over the vermilion border using image-analysis software (Image-Pro). The efficacy of a wrinkle-improving lipstick containing asiaticoside was evaluated in 50 women using subjective and objective methods, including image analysis, in a double-blind, placebo-controlled fashion. Color tone and the spread phenomenon after lipstick make-up were markedly affected by lip wrinkles. The standard deviation of the saturation value in the image-analysis software was found to be a good parameter for lip wrinkles. After use of the lipstick containing asiaticoside for 8 weeks, changes in visual grading scores and replica analysis indicated a wrinkle-improving effect. As the depth and number of wrinkles were reduced, the lipstick make-up appearance assessed by image analysis also improved significantly. The lip wrinkle pattern together with lipstick make-up can be evaluated by the image-analysis system in addition to traditional assessment methods. Thus, this evaluation system is expected to be useful for testing the efficacy of wrinkle-reducing lipsticks, which has not been described in previous dermatologic clinical studies.
Mapping broom snakeweed through image analysis of color-infrared photography and digital imagery.
Everitt, J H; Yang, C
2007-11-01
A study was conducted on a south Texas rangeland area to evaluate aerial color-infrared (CIR) photography and CIR digital imagery combined with unsupervised image analysis techniques to map broom snakeweed [Gutierrezia sarothrae (Pursh.) Britt. and Rusby]. Accuracy assessments performed on computer-classified maps of photographic images from two sites had mean producer's and user's accuracies for broom snakeweed of 98.3 and 88.3%, respectively; whereas, accuracy assessments performed on classified maps from digital images of the same two sites had mean producer's and user's accuracies for broom snakeweed of 98.3 and 92.8%, respectively. These results indicate that CIR photography and CIR digital imagery combined with image analysis techniques can be used successfully to map broom snakeweed infestations on south Texas rangelands.
Utility of Digital Stereo Images for Optic Disc Evaluation
Ying, Gui-shuang; Pearson, Denise J.; Bansal, Mayank; Puri, Manika; Miller, Eydie; Alexander, Judith; Piltz-Seymour, Jody; Nyberg, William; Maguire, Maureen G.; Eledath, Jayan; Sawhney, Harpreet
2010-01-01
Purpose. To assess the suitability of digital stereo images for optic disc evaluations in glaucoma. Methods. Stereo color optic disc images in both digital and 35-mm slide film formats were acquired contemporaneously from 29 subjects with various cup-to-disc ratios (range, 0.26–0.76; median, 0.475). Using a grading scale designed to assess image quality, the ease of visualizing optic disc features important for glaucoma diagnosis, and the comparative diameters of the optic disc cup, experienced observers separately compared the primary digital stereo images to each subject's 35-mm slides, to scanned images of the same 35-mm slides, and to grayscale conversions of the digital images. Statistical analysis accounted for multiple gradings and comparisons and also assessed image formats under monoscopic viewing. Results. Overall, the quality of primary digital color images was judged superior to that of 35-mm slides (P < 0.001), including improved stereo (P < 0.001), but the primary digital color images were mostly equivalent to the scanned digitized images of the same slides. Color seemingly added little to grayscale optic disc images, except that peripapillary atrophy was best seen in color (P < 0.0001); both the nerve fiber layer (P < 0.0001) and the paths of blood vessels on the optic disc (P < 0.0001) were best seen in grayscale. The preference for digital over film images was maintained under monoscopic viewing conditions. Conclusions. Digital stereo optic disc images are useful for evaluating the optic disc in glaucoma and allow the application of advanced image processing applications. Grayscale images, by providing luminance distinct from color, may be informative for assessing certain features. PMID:20505199
NASA Astrophysics Data System (ADS)
Li, Zhenjiang; Wang, Weilan
2018-04-01
Thangka is a treasure of Tibetan culture. In its digital preservation, most current research focuses on the content of Thangka images rather than on the fabrication process. For the silk embroidered Thangka of "Guo Tang", there are two craft methods, namely weave embroidery and pile embroidery. The local texture of weave embroidered Thangka is rough, while that of pile embroidered Thangka is smoother. In order to distinguish these two fabrication processes from images, an effective segmentation algorithm for color blocks is designed first, and the obtained color blocks contain the local texture patterns of the Thangka image; secondly, the local texture features of the color blocks are extracted and screened; finally, the selected features are analyzed experimentally. The experimental analysis shows that the proposed features reflect well the difference between weave embroidery and pile embroidery.
NASA Technical Reports Server (NTRS)
Kruse, F. A.; Knepper, D. H., Jr.; Clark, R. N.
1986-01-01
Techniques using Munsell color transformations were developed for reducing 128 channels (or less) of Airborne Imaging Spectrometer (AIS) data to a single color-composite-image suitable for both visual interpretation and digital analysis. Using AIS data acquired in 1984 and 1985, limestone and dolomite roof pendants and sericite-illite and other clay minerals related to alteration were mapped in a quartz monzonite stock in the northern Grapevine Mountains of California and Nevada. Field studies and laboratory spectral measurements verify the mineralogical distributions mapped from the AIS data.
NASA Astrophysics Data System (ADS)
Ezhova, Kseniia; Fedorenko, Dmitriy; Chuhlamov, Anton
2016-04-01
The article deals with methods of image segmentation based on color space conversion, which allow efficient detection of a single color under complex background and lighting conditions, as well as detection of objects on a homogeneous background. The results of an analysis of segmentation algorithms of this type, and the possibility of implementing them in software, are presented. The implemented algorithm is computationally expensive, which limits its application to video analysis; however, it allows the problem of analyzing objects in an image to be solved when no image dictionary or knowledge base is available, as well as the problem of choosing optimal frame quantization parameters for video analysis.
Extracting the Data From the LCM vk4 Formatted Output File
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendelberger, James G.
These are slides about extracting the data from the LCM vk4 formatted output file. The vk4 file is produced by the Keyence VK software, and there is no off-the-shelf way to read it, so the binary data must be read directly. The slides cover the various offsets (in decimal) within the file, finding the height image data directly in MATLAB, the binary layout at the beginning of the height image data, the color image information and its decimal and binary data, and MATLAB code to read a vk4 file (choose a file, read the file, compute offsets, read the optical image and laser optical image, read and compute the laser intensity image, read the height image, timing, display the height image, display the laser intensity image, display the RGB laser optical and RGB optical images, display the beginning data and save images to the workspace, and a gamma correction subroutine). Also covered are reading intensity from the vk4 file (linear in the low range and linear in the high range), gamma correction for vk4 files, computing the gamma intensity correction, and observations.
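A heavily hedged sketch of reading little-endian fields from a binary file with Python's struct module, analogous to the MATLAB approach the slides describe. The offset and field layout below are placeholders only; the actual vk4 layout must be taken from the slides or from inspection of the file itself.

```python
import struct

# Placeholder offsets only: the real vk4 layout has to be determined from the
# slides or by inspecting the binary, as the presentation describes.
HEIGHT_OFFSET_FIELD = 0x28     # hypothetical position of the height-data pointer

def read_uint32(buf, offset):
    """Little-endian 32-bit unsigned integer at a byte offset."""
    return struct.unpack_from("<I", buf, offset)[0]

def read_height_header(path):
    """Return (width, height, bit_depth) of the height image, assuming the
    hypothetical layout above (offset-table entry -> width, height, depth)."""
    with open(path, "rb") as fh:
        raw = fh.read()
    start = read_uint32(raw, HEIGHT_OFFSET_FIELD)
    return (read_uint32(raw, start),
            read_uint32(raw, start + 4),
            read_uint32(raw, start + 8))

# Usage (requires an actual .vk4 file):
#     print(read_height_header("scan.vk4"))
```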
Automatic lesion boundary detection in dermoscopy images using gradient vector flow snakes
Erkol, Bulent; Moss, Randy H.; Stanley, R. Joe; Stoecker, William V.; Hvatum, Erik
2011-01-01
Background Malignant melanoma has a good prognosis if treated early. Dermoscopy images of pigmented lesions are most commonly taken at × 10 magnification under lighting at a low angle of incidence while the skin is immersed in oil under a glass plate. Accurate skin lesion segmentation from the background skin is important because some of the features anticipated to be used for diagnosis deal with shape of the lesion and others deal with the color of the lesion compared with the color of the surrounding skin. Methods In this research, gradient vector flow (GVF) snakes are investigated to find the border of skin lesions in dermoscopy images. An automatic initialization method is introduced to make the skin lesion border determination process fully automated. Results Skin lesion segmentation results are presented for 70 benign and 30 melanoma skin lesion images for the GVF-based method and a color histogram analysis technique. The average errors obtained by the GVF-based method are lower for both the benign and melanoma image sets than for the color histogram analysis technique based on comparison with manually segmented lesions determined by a dermatologist. Conclusions The experimental results for the GVF-based method demonstrate promise as an automated technique for skin lesion segmentation in dermoscopy images. PMID:15691255
Techniques for using diazo materials in remote sensor data analysis
NASA Technical Reports Server (NTRS)
Whitebay, L. E.; Mount, S.
1978-01-01
The use of data derived from LANDSAT is facilitated when special products or computer-enhanced images can be analyzed. However, the facilities required to produce and analyze such products prevent many users from taking full advantage of the LANDSAT data. A simple, low-cost method is presented by which users can make their own specially enhanced composite images from the four-band black-and-white LANDSAT images by using the diazo process. The diazo process is described and a detailed procedure for making various color composites, such as color infrared, false natural color, and false color, is provided. The advantages and limitations of the diazo process are discussed. A brief discussion of the interpretation of diazo composites for land use mapping, with some typical examples, is included.
Patel, Samir N; Klufas, Michael A; Ryan, Michael C; Jonas, Karyn E; Ostmo, Susan; Martinez-Castellanos, Maria Ana; Berrocal, Audina M; Chiang, Michael F; Chan, R V Paul
2015-05-01
To examine the usefulness of fluorescein angiography (FA) in identifying the macular center and diagnosis of zone in patients with retinopathy of prematurity (ROP). Validity and reliability analysis of diagnostic tools. Thirty-two sets (16 color fundus photographs and 16 color fundus photographs paired with the corresponding FA images) of wide-angle retinal images obtained from 16 eyes of 8 infants with ROP were compiled on a secure web site. Nine ROP experts (3 pediatric ophthalmologists and 6 vitreoretinal surgeons) participated in the study. For each image set, experts identified the macular center and provided a diagnosis of zone. (1) Sensitivity and specificity of zone diagnosis and (2) computer-facilitated diagnosis of zone, based on precise measurement of the macular center, optic disc center, and peripheral ROP. Computer-facilitated diagnosis of zone agreed with the expert's diagnosis of zone in 28 (62%) of 45 cases using color fundus photographs and in 31 (69%) of 45 cases using FA images. Mean (95% confidence interval) sensitivity for detection of zone I by experts compared with a consensus reference standard diagnosis when interpreting the color fundus images alone versus interpreting the color fundus photographs and FA images was 47% (range, 35.3% to 59.3%) and 61.1% (range, 48.9% to 72.4%), respectively (t(9) ≥ (2.063); P = .073). There is a marginally significant difference in zone diagnosis when using color fundus photographs compared with using color fundus photographs and the corresponding FA images. There is inconsistency between traditional zone diagnosis (based on ophthalmoscopic examination and image review) compared with a computer-facilitated diagnosis of zone. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Rodrigo, Ranga P.; Ranaweera, Kamal; Samarabandu, Jagath K.
2004-05-01
Focus of attention is often attributed to the biological vision system, in which the entire field of view is first monitored and then attention is focused on the object of interest. We propose using a similar approach for object recognition in a color image sequence. The intention is to locate an object based on a prior motive and concentrate on the detected object so that the imaging device can be guided toward it. We use the abilities of the intelligent image analysis framework developed in our laboratory to dynamically generate an algorithm that detects the particular type of object based on the user's object description. The proposed method uses color clustering along with segmentation. The segmented image with labeled regions is used to calculate the shape descriptor parameters. These and the color information are matched with the input description. Gaze is then controlled by issuing camera movement commands as appropriate. We present some preliminary results that demonstrate the success of this approach.
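A short sketch of the general pipeline the abstract describes (color clustering, region labeling, and shape descriptors) is given below; the cluster count, minimum region size, and chosen descriptors are illustrative assumptions, not the dynamically generated algorithm of the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from skimage import measure

def describe_color_regions(rgb_image, n_clusters=4, min_area=50):
    """Cluster pixel colors, label the resulting regions, and compute
    simple shape descriptors plus the mean color of each region."""
    h, w, _ = rgb_image.shape
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        rgb_image.reshape(-1, 3).astype(float)).reshape(h, w)

    descriptors = []
    for cluster in range(n_clusters):
        for region in measure.regionprops(measure.label(labels == cluster)):
            if region.area < min_area:        # ignore tiny regions
                continue
            rows, cols = region.coords[:, 0], region.coords[:, 1]
            descriptors.append({
                "centroid": region.centroid,
                "area": region.area,
                "eccentricity": region.eccentricity,
                "solidity": region.solidity,
                "mean_color": rgb_image[rows, cols].mean(axis=0),
            })
    return descriptors
```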
USDA-ARS?s Scientific Manuscript database
Segmentation is the first step in image analysis to subdivide an image into meaningful regions. The segmentation result directly affects the subsequent image analysis. The objective of the research was to develop an automatic adjustable algorithm for segmentation of color images, using linear suppor...
Beyond Correlation: Do Color Features Influence Attention in Rainforest?
Frey, Hans-Peter; Wirz, Kerstin; Willenbockel, Verena; Betz, Torsten; Schreiber, Cornell; Troscianko, Tomasz; König, Peter
2011-01-01
Recent research indicates a direct relationship between low-level color features and visual attention under natural conditions. However, the design of these studies allows only correlational observations and no inference about mechanisms. Here we go a step further to examine the nature of the influence of color features on overt attention in an environment in which trichromatic color vision is advantageous. We recorded eye-movements of color-normal and deuteranope human participants freely viewing original and modified rainforest images. Eliminating red–green color information dramatically alters fixation behavior in color-normal participants. Changes in feature correlations and variability over subjects and conditions provide evidence for a causal effect of red–green color-contrast. The effects of blue–yellow contrast are much smaller. However, globally rotating hue in color space in these images reveals a mechanism analyzing color-contrast invariant of a specific axis in color space. Surprisingly, in deuteranope participants we find significantly elevated red–green contrast at fixation points, comparable to color-normal participants. Temporal analysis indicates that this is due to compensatory mechanisms acting on a slower time scale. Taken together, our results suggest that under natural conditions red–green color information contributes to overt attention at a low-level (bottom-up). Nevertheless, the results of the image modifications and deuteranope participants indicate that evaluation of color information is done in a hue-invariant fashion. PMID:21519395
Jitaree, Sirinapa; Phinyomark, Angkoon; Boonyaphiphat, Pleumjit; Phukpattaranont, Pornchai
2015-01-01
Having a classifier of cell types in a breast cancer microscopic image (BCMI), obtained with immunohistochemical staining, is required as part of a computer-aided system that counts the cancer cells in such BCMI. Such quantitation by cell counting is very useful in supporting decisions and planning of the medical treatment of breast cancer. This study proposes and evaluates features based on texture analysis by fractal dimension (FD), for the classification of histological structures in a BCMI into either cancer cells or non-cancer cells. The cancer cells include positive cells (PC) and negative cells (NC), while the normal cells comprise stromal cells (SC) and lymphocyte cells (LC). The FD feature values were calculated with the box-counting method from binarized images, obtained by automatic thresholding with Otsu's method of the grayscale images for various color channels. A total of 12 color channels from four color spaces (RGB, CIE-L*a*b*, HSV, and YCbCr) were investigated, and the FD feature values from them were used with decision tree classifiers. The BCMI data consisted of 1,400, 1,200, and 800 images with pixel resolutions 128 × 128, 192 × 192, and 256 × 256, respectively. The best cross-validated classification accuracy was 93.87%, for distinguishing between cancer and non-cancer cells, obtained using the Cr color channel with window size 256. The results indicate that the proposed algorithm, based on fractal dimension features extracted from a color channel, performs well in the automatic classification of the histology in a BCMI. This might support accurate automatic cell counting in a computer-assisted system for breast cancer diagnosis. © Wiley Periodicals, Inc.
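The box-counting step on an Otsu-binarized channel can be sketched compactly; the code below is a generic illustration under the assumption of a non-empty foreground, not the authors' implementation or their decision-tree classifier.

```python
import numpy as np
from skimage.filters import threshold_otsu

def box_counting_dimension(gray_channel):
    """Binarize one color channel with Otsu's method and estimate the
    fractal dimension of the foreground by box counting."""
    binary = gray_channel > threshold_otsu(gray_channel)
    n = min(binary.shape)
    sizes = [2 ** k for k in range(1, int(np.log2(n)))]

    counts = []
    for s in sizes:
        # Count boxes of side s that contain at least one foreground pixel.
        trimmed = binary[: (binary.shape[0] // s) * s,
                         : (binary.shape[1] // s) * s]
        blocks = trimmed.reshape(trimmed.shape[0] // s, s,
                                 trimmed.shape[1] // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))

    # Slope of log(count) versus log(1/size) estimates the box-counting dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```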
Iizaka, Shinji; Kaitani, Toshiko; Sugama, Junko; Nakagami, Gojiro; Naito, Ayumi; Koyanagi, Hiroe; Konya, Chizuko; Sanada, Hiromi
2013-01-01
This multicenter prospective cohort study examined the predictive validity of granulation tissue color evaluated by digital image analysis for deep pressure ulcer healing. Ninety-one patients with deep pressure ulcers were followed for 3 weeks. From a wound photograph taken at baseline, an image representing the granulation red index (GRI), in which redder color corresponds to higher values, was computed. We calculated the average GRI over granulation tissue and the proportion of pixels exceeding the threshold intensity of 80 for the granulation tissue surface (%GRI80) and the wound surface (%wound red index 80). In the receiver operating characteristic curve analysis, most GRI parameters had adequate discriminative values for both improvement of the DESIGN-R total score and wound closure. Ulcers were categorized by the obtained cutoff points of the average GRI (≤80, >80), %GRI80 (≤55, >55-80, >80%), and %wound red index 80 (≤25, >25-50, >50%). In the linear mixed model, higher classes for all GRI parameters showed significantly greater relative improvement in overall wound severity during the 3 weeks after adjustment for patient characteristics and wound locations. Assessment of granulation tissue color by digital image analysis will be useful as an objective monitoring tool for granulation tissue quality or as a surrogate outcome of pressure ulcer healing. © 2012 by the Wound Healing Society.
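The abstract does not give the exact GRI formula, so the sketch below substitutes a simple scaled-redness index over a granulation mask; the threshold of 80 follows the abstract, while the redness definition and the mask are assumptions.

```python
import numpy as np

def granulation_red_parameters(rgb_wound, granulation_mask, threshold=80):
    """Compute an average redness index over granulation tissue and the
    proportion of granulation pixels exceeding a threshold (cf. %GRI80).

    The redness index used here (scaled R / (R + G + B)) is only an
    illustrative stand-in for the GRI defined in the paper."""
    rgb = rgb_wound.astype(float)
    redness = 255.0 * rgb[..., 0] / (rgb.sum(axis=-1) + 1e-6)

    gri_values = redness[granulation_mask]
    return {
        "average_GRI": gri_values.mean(),
        "percent_GRI_over_threshold": 100.0 * np.mean(gri_values > threshold),
    }
```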
Milton, N.M.
1983-01-01
Analysis of in situ reflectance spectra of native vegetation was used to interpret airborne MSS data. Representative spectra from three plant species in the E Tintic Mountains, Utah, were used to interpret the color components on a color ratio composite image made from MSS data in the visible and near-infrared regions. A map of plant communities was made from the color ratio composite image and field checked. -from Author
NASA Astrophysics Data System (ADS)
Zharinov, I. O.; Zharinov, O. O.
2017-12-01
The research addresses the quantitative analysis of how technological variation in screen color profile parameters influences the chromaticity coordinates of the displayed image. Mathematical expressions are proposed that approximate the two-dimensional distribution of chromaticity coordinates of an image displayed on a screen based on a three-component color formation principle. The proposed expressions point the way toward correction techniques that improve the reproducibility of the colorimetric characteristics of displays.
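The chromaticity coordinates being modeled can be computed from display RGB with a standard colorimetric transform; the sketch below assumes the sRGB/D65 primaries as a stand-in for the actual screen color profile under study.

```python
import numpy as np

# Linear-RGB-to-XYZ matrix for sRGB primaries and a D65 white point.
RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])

def chromaticity_xy(linear_rgb):
    """Convert linear RGB triplets (shape (..., 3)) to CIE xy chromaticity
    coordinates, the quantities whose spread across display samples is modeled."""
    xyz = linear_rgb @ RGB_TO_XYZ.T
    s = xyz.sum(axis=-1, keepdims=True)
    return xyz[..., :2] / np.where(s == 0, 1.0, s)
```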
Teaching color measurement in graphic arts
NASA Astrophysics Data System (ADS)
Ingram, Samuel T.; Simon, Frederick T.
1997-04-01
The production of color images has grown in recent years due to the impact of digital technology. Access and equipment affordability are now bringing a new generation of color producers into the marketplace. Many traditional questions concerning color attributes are repeatedly asked: color fidelity, quality, measurement, and device characterization pose daily dilemmas. Curriculum components should be offered in an educational environment that builds the color foundations required of knowledgeable managers, researchers, and technicians. The printing industry is adding many of the new digital color technologies to its vocabulary of color production. This paper presents current efforts to integrate color knowledge into a four-year program of undergraduate study. Specific topics include color reproduction, device characterization, material characterization, and the role of measurement as a linking attribute. The paper also details efforts to integrate the color specification, measurement, and analysis procedures used by students and their subsequent application in color image production. A discussion of measurement devices used in the learning environment is also presented. The investigation includes descriptive data on colorants typically used in printing inks.
Identification of the Properties of Gum Arabic Used as a Binder in 7.62-mm Ammunition Primers
2010-06-01
Report contents include solution LCC testing (ATK Task 700), cartridge ballistic testing (ATK Task 800), ATK elemental analysis, moisture loss and friability measurements, SDT summaries for the Hummel and Quadra samples, particle size analysis of the gum arabic samples, SEM images of Colony gum arabic at 230x, gel strengths, and color analyses of the Colony, Hummel, and Brenntag samples after 5.0 hrs.
NASA Technical Reports Server (NTRS)
Masuoka, E.; Rose, J.; Quattromani, M.
1981-01-01
Recent developments related to microprocessor-based personal computers have made low-cost digital image processing systems a reality. Image analysis systems built around these microcomputers provide color image displays for images as large as 256 by 240 pixels in sixteen colors. Descriptive statistics can be computed for portions of an image, and supervised image classification can be obtained. The systems support Basic, Fortran, Pascal, and assembler language. A description is provided of a system which is representative of the new microprocessor-based image processing systems currently on the market. While small systems may never be truly independent of larger mainframes, because they lack 9-track tape drives, the independent processing power of the microcomputers will help alleviate some of the turn-around time problems associated with image analysis and display on the larger multiuser systems.
Accommodating multiple illumination sources in an imaging colorimetry environment
NASA Astrophysics Data System (ADS)
Tobin, Kenneth W., Jr.; Goddard, James S., Jr.; Hunt, Martin A.; Hylton, Kathy W.; Karnowski, Thomas P.; Simpson, Marc L.; Richards, Roger K.; Treece, Dale A.
2000-03-01
Researchers at the Oak Ridge National Laboratory have been developing a method for measuring color quality in textile products using a tri-stimulus color camera system. Initial results of the Imaging Tristimulus Colorimeter (ITC) were reported during 1999. These results showed that the projection onto convex sets (POCS) approach to color estimation could be applied to complex printed patterns on textile products with high accuracy and repeatability. Image-based color sensors used for on-line measurement are not colorimetric by nature and require a non-linear transformation of the component colors based on the spectral properties of the incident illumination, imaging sensor, and the actual textile color. Our earlier work reports these results for a broad-band, smoothly varying D65 standard illuminant. To move the measurement to the on-line environment with continuously manufactured textile webs, the illumination source becomes problematic. The spectral content of these light sources varies substantially from the D65 standard illuminant and can greatly impact the measurement performance of the POCS system. Although absolute color measurements are difficult to make under different illumination, referential measurements to monitor color drift provide a useful indication of product quality. Modifications to the ITC system have been implemented to enable the study of different light sources. These results and the subsequent analysis of relative color measurements will be reported for textile products.
Color constancy in natural scenes explained by global image statistics
Foster, David H.; Amano, Kinjiro; Nascimento, Sérgio M. C.
2007-01-01
To what extent do observers' judgments of surface color with natural scenes depend on global image statistics? To address this question, a psychophysical experiment was performed in which images of natural scenes under two successive daylights were presented on a computer-controlled high-resolution color monitor. Observers reported whether there was a change in reflectance of a test surface in the scene. The scenes were obtained with a hyperspectral imaging system and included variously trees, shrubs, grasses, ferns, flowers, rocks, and buildings. Discrimination performance, quantified on a scale of 0 to 1 with a color-constancy index, varied from 0.69 to 0.97 over 21 scenes and two illuminant changes, from a correlated color temperature of 25,000 K to 6700 K and from 4000 K to 6700 K. The best account of these effects was provided by receptor-based rather than colorimetric properties of the images. Thus, in a linear regression, 43% of the variance in constancy index was explained by the log of the mean relative deviation in spatial cone-excitation ratios evaluated globally across the two images of a scene. A further 20% was explained by including the mean chroma of the first image and its difference from that of the second image and a further 7% by the mean difference in hue. Together, all four global color properties accounted for 70% of the variance and provided a good fit to the effects of scene and of illuminant change on color constancy, and, additionally, of changing test-surface position. By contrast, a spatial-frequency analysis of the images showed that the gradient of the luminance amplitude spectrum accounted for only 5% of the variance. PMID:16961965
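The key global statistic in the abstract, the mean relative deviation in spatial cone-excitation ratios, can be sketched as follows; the cone-excitation arrays are assumed to be precomputed from the hyperspectral data, and the particular definition of "relative deviation" used here is an illustrative choice that may differ from the paper's.

```python
import numpy as np

def mean_relative_ratio_deviation(cones_a, cones_b, n_pairs=10000, seed=0):
    """cones_a, cones_b: (n_points, 3) cone excitations (L, M, S) of the same
    surfaces under illuminants A and B. For random pairs of surfaces, compare
    the spatial cone-excitation ratios under the two illuminants and return
    the mean relative deviation between them."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(cones_a), size=n_pairs)
    j = rng.integers(0, len(cones_a), size=n_pairs)
    keep = i != j

    ratios_a = cones_a[i[keep]] / cones_a[j[keep]]
    ratios_b = cones_b[i[keep]] / cones_b[j[keep]]
    deviation = np.abs(ratios_a - ratios_b) / ((ratios_a + ratios_b) / 2.0)
    return deviation.mean()
```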
A new fringeline-tracking approach for color Doppler ultrasound imaging phase unwrapping
NASA Astrophysics Data System (ADS)
Saad, Ashraf A.; Shapiro, Linda G.
2008-03-01
Color Doppler ultrasound imaging is a powerful non-invasive diagnostic tool for many clinical applications that involve examining the anatomy and hemodynamics of human blood vessels. These clinical applications include cardio-vascular diseases, obstetrics, and abdominal diseases. Since its commercial introduction in the early eighties, color Doppler ultrasound imaging has been used mainly as a qualitative tool with very little attempts to quantify its images. Many imaging artifacts hinder the quantification of the color Doppler images, the most important of which is the aliasing artifact that distorts the blood flow velocities measured by the color Doppler technique. In this work we will address the color Doppler aliasing problem and present a recovery methodology for the true flow velocities from the aliased ones. The problem is formulated as a 2D phase-unwrapping problem, which is a well-defined problem with solid theoretical foundations for other imaging domains, including synthetic aperture radar and magnetic resonance imaging. This paper documents the need for a phase unwrapping algorithm for use in color Doppler ultrasound image analysis. It describes a new phase-unwrapping algorithm that relies on the recently developed cutline detection approaches. The algorithm is novel in its use of heuristic information provided by the ultrasound imaging modality to guide the phase unwrapping process. Experiments have been performed on both in-vitro flow-phantom data and in-vivo human blood flow data. Both data types were acquired under a controlled acquisition protocol developed to minimize the distortion of the color Doppler data and hence to simplify the phase-unwrapping task. In addition to the qualitative assessment of the results, a quantitative assessment approach was developed to measure the success of the results. The results of our new algorithm have been compared on ultrasound data to those from other well-known algorithms, and it outperforms all of them.
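The cutline-based 2D unwrapping of the paper is beyond a short example, but the underlying idea, that aliased Doppler velocities are wrapped phases, can be illustrated in 1D with numpy's standard unwrapping; the parabolic profile and Nyquist velocity below are made-up test values.

```python
import numpy as np

def unwrap_doppler_profile(aliased_velocity, v_nyquist):
    """Recover a 1D velocity profile (e.g., along a vessel cross-section)
    from aliased color Doppler velocities by treating velocity/Nyquist as a
    wrapped phase in (-pi, pi] and unwrapping it."""
    phase = np.pi * np.asarray(aliased_velocity, dtype=float) / v_nyquist
    return v_nyquist * np.unwrap(phase) / np.pi

# Example: a smooth parabolic profile whose peak exceeds the Nyquist limit.
true_v = 1.6 * (1 - np.linspace(-1, 1, 101) ** 2)      # peak 1.6 (arbitrary units)
v_nyq = 1.0
aliased = (true_v + v_nyq) % (2 * v_nyq) - v_nyq        # wrap into (-v_nyq, v_nyq]
recovered = unwrap_doppler_profile(aliased, v_nyq)       # approximately equals true_v
```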
Hyperspectral imaging using a color camera and its application for pathogen detection
NASA Astrophysics Data System (ADS)
Yoon, Seung-Chul; Shin, Tae-Sung; Heitschmidt, Gerald W.; Lawrence, Kurt C.; Park, Bosoon; Gamble, Gary
2015-02-01
This paper reports the results of a feasibility study for the development of a hyperspectral image recovery (reconstruction) technique using an RGB color camera and regression analysis in order to detect and classify colonies of foodborne pathogens. The target bacterial pathogens were the six representative non-O157 Shiga-toxin producing Escherichia coli (STEC) serogroups (O26, O45, O103, O111, O121, and O145) grown in Petri dishes of Rainbow agar. The purpose of the feasibility study was to evaluate whether a DSLR camera (Nikon D700) could be used to predict hyperspectral images in the wavelength range from 400 to 1,000 nm and even to predict the types of pathogens using a hyperspectral STEC classification algorithm that was previously developed. Unlike many other studies using color charts with known and noise-free spectra for training reconstruction models, this work used hyperspectral and color images, separately measured by a hyperspectral imaging spectrometer and the DSLR color camera. The color images were calibrated (i.e. normalized) to relative reflectance, subsampled and spatially registered to match with counterpart pixels in hyperspectral images that were also calibrated to relative reflectance. Polynomial multivariate least-squares regression (PMLR) was previously developed with simulated color images. In this study, partial least squares regression (PLSR) was also evaluated as a spectral recovery technique to minimize multicollinearity and overfitting. The two spectral recovery models (PMLR and PLSR) and their parameters were evaluated by cross-validation. The QR decomposition was used to find a numerically more stable solution of the regression equation. The preliminary results showed that PLSR was more effective than PMLR, especially with higher-order polynomial regressions. The best classification accuracy measured with an independent test set was about 90%. The results suggest the potential of cost-effective color imaging using hyperspectral image classification algorithms for rapidly differentiating pathogens in agar plates.
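A minimal sketch of the PLSR spectral-recovery step with scikit-learn is shown below, assuming registered RGB/hyperspectral pixel pairs calibrated to relative reflectance; the component count is illustrative, and the QR-based solver and polynomial PMLR variant are not reproduced.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def fit_spectral_recovery(rgb_train, spectra_train, n_components=8):
    """Fit a PLSR model mapping camera RGB reflectance triplets to
    hyperspectral reflectance vectors (one value per wavelength band).

    rgb_train: (n_pixels, 3); spectra_train: (n_pixels, n_bands)."""
    model = PLSRegression(n_components=n_components)
    model.fit(rgb_train, spectra_train)
    return model

def reconstruct_hyperspectral(model, rgb_image):
    """Predict a hyperspectral cube (h, w, n_bands) from an RGB image."""
    h, w, _ = rgb_image.shape
    spectra = model.predict(rgb_image.reshape(-1, 3).astype(float))
    return spectra.reshape(h, w, -1)
```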
Light Field Imaging Based Accurate Image Specular Highlight Removal
Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo
2016-01-01
Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios because of their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity by the light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into "unsaturated" and "saturated" categories. Finally, a color variance analysis of multiple views and a local color refinement are individually conducted on the two categories to recover diffuse color information. Experimental evaluation by comparison with existing methods, based on our light field dataset together with the Stanford light field archive, verifies the effectiveness of the proposed algorithm. PMID:27253083
Preparing Colorful Astronomical Images and Illustrations
NASA Astrophysics Data System (ADS)
Levay, Z. G.; Frattare, L. M.
2001-12-01
We present techniques for using mainstream graphics software, specifically Adobe Photoshop and Illustrator, for producing composite color images and illustrations from astronomical data. These techniques have been used with numerous images from the Hubble Space Telescope to produce printed and web-based news, education and public presentation products as well as illustrations for technical publication. While Photoshop is not intended for quantitative analysis of full dynamic range data (as are IRAF or IDL, for example), we have had much success applying Photoshop's numerous, versatile tools to work with scaled images, masks, text and graphics in multiple semi-transparent layers and channels. These features, along with its user-oriented, visual interface, provide convenient tools to produce high-quality, full-color images and graphics for printed and on-line publication and presentation.
Ishii, M; Jones, M; Shiota, T; Yamada, I; Sinclair, B; Heinrich, R S; Yoganathan, A P; Sahn, D J
1998-11-01
The purpose of our study was to determine the temporal variability of regurgitant color Doppler jet areas and the width of the color Doppler imaged vena contracta for evaluating the severity of aortic regurgitation. Twenty-nine hemodynamically different states were obtained pharmacologically in 8 sheep 20 weeks after surgery to produce aortic regurgitation. Aortic regurgitation was quantified by peak and mean regurgitant flow rates, regurgitant stroke volumes, and regurgitant fractions determined using pulmonary and aortic electromagnetic flow probes and meters balanced against each other. The regurgitant jet areas and the widths of color Doppler imaged vena contracta were measured at 4 different times during diastole to determine the temporal variability of this parameter. When measured at 4 different temporal points in diastole, a significant change was observed in the size of the color Doppler imaged regurgitant jet (percent of difference: from 31.1% to 904%; 233% +/- 245%). Simple linear regression analysis between each color jet area at 4 different periods in diastole and flow meter-based severity of the aortic regurgitation showed only weak correlation (0.23 < r < 0.49). In contrast, for most conditions only a slight change was observed in the width of the color Doppler imaged vena contracta during the diastolic regurgitant period (percent of difference, vena contracta: from 2.4% to 12.9%, 5.8% +/- 3.2%). In addition, for each period the width of the color Doppler imaged vena contracta at the 4 different time periods in diastole correlated quite strongly with volumetric measures of the severity of aortic regurgitation (0.81 < r < 0.90) and with the instantaneous flow rate for the corresponding period (0.85 < r < 0.87). Color Doppler imaged vena contracta may provide a simple, practical, and accurate method for quantifying aortic regurgitation, even when using a single frame color Doppler flow mapping image.
Atmospheric ozone and colors of the Antarctic twilight sky.
Lee, Raymond L; Meyer, Wolfgang; Hoeppe, Götz
2011-10-01
Zenith skylight is often distinctly blue during clear civil twilights, and much of this color is due to preferential absorption at longer wavelengths by ozone's Chappuis bands. Because stratospheric ozone is greatly depleted in the austral spring, such decreases could plausibly make Antarctic twilight colors less blue then, including at the zenith. So for several months in 2005, we took digital images of twilight zenith and antisolar skies at Antarctica's Georg von Neumayer Station. Our colorimetric analysis of these images shows only weak correlations between ozone concentration and twilight colors. We also used a spectroradiometer at a midlatitude site to measure zenith twilight spectra and colors. At both locations, spectral extinction by aerosols seems as important as ozone absorption in explaining colors seen throughout the twilight sky.
Face Hallucination with Linear Regression Model in Semi-Orthogonal Multilinear PCA Method
NASA Astrophysics Data System (ADS)
Asavaskulkiet, Krissada
2018-04-01
In this paper, we propose a new face hallucination technique: face image reconstruction in HSV color space with a semi-orthogonal multilinear principal component analysis method. This novel hallucination technique can operate directly on tensors via tensor-to-vector projection by imposing the orthogonality constraint in only one mode. In our experiments, we use facial images from the FERET database to test our hallucination approach, which is demonstrated by extensive experiments with high-quality hallucinated color faces. The experimental results clearly demonstrate that we can generate photorealistic color face images by using the SO-MPCA subspace with a linear regression model.
Mesoscale and severe storms (Mass) data management and analysis system
NASA Technical Reports Server (NTRS)
Hickey, J. S.; Karitani, S.; Dickerson, M.
1984-01-01
Progress on the Mesoscale and Severe Storms (MASS) data management and analysis system is described. An interactive atmospheric data base management software package to convert four types of data (Sounding, Single Level, Grid, Image) into standard random access formats is implemented and integrated with the MASS AVE80 Series general purpose plotting and graphics display data analysis software package. An interactive analysis and display graphics software package (AVE80) for analyzing large volumes of conventional and satellite derived meteorological data is enhanced to provide imaging/color graphics display utilizing color video hardware integrated into the MASS computer system. Local and remote smart-terminal capability is provided by installing APPLE III computer systems within individual scientists' offices and integrating them with the MASS system, thus providing color video display, graphics, and character display of the four data types.
CALIPSO: an interactive image analysis software package for desktop PACS workstations
NASA Astrophysics Data System (ADS)
Ratib, Osman M.; Huang, H. K.
1990-07-01
The purpose of this project is to develop a low cost workstation for quantitative analysis of multimodality images using a Macintosh II personal computer. In the current configuration the Macintosh operates as a stand alone workstation where images are imported either from a central PACS server through a standard Ethernet network or recorded through a video digitizer board. The CALIPSO software developed contains a large variety of basic image display and manipulation tools. We focused our effort however on the design and implementation of quantitative analysis methods that can be applied to images from different imaging modalities. Analysis modules currently implemented include: geometric and densitometric volumes and ejection fraction calculation from radionuclide and cine-angiograms; Fourier analysis of cardiac wall motion; vascular stenosis measurement; color coded parametric display of regional flow distribution from dynamic coronary angiograms; and automatic analysis of myocardial distribution of radiolabelled tracers from tomoscintigraphic images. Several of these analysis tools were selected because they use similar color coded and parametric display methods to communicate quantitative data extracted from the images. 1. Rationale and objectives of the project: Developments of Picture Archiving and Communication Systems (PACS) in the clinical environment allow physicians and radiologists to assess radiographic images directly through imaging workstations. This convenient access to the images is often limited by the number of workstations available, due in part to their high cost. There is also an increasing need for quantitative analysis of the images. During the past decade
Face detection in color images using skin color, Laplacian of Gaussian, and Euler number
NASA Astrophysics Data System (ADS)
Saligrama Sundara Raman, Shylaja; Kannanedhi Narasimha Sastry, Balasubramanya Murthy; Subramanyam, Natarajan; Senkutuvan, Ramya; Srikanth, Radhika; John, Nikita; Rao, Prateek
2010-02-01
In this paper, a feature-based approach to face detection has been proposed using an ensemble of algorithms. The method uses chrominance values and edge features to classify the image into skin and nonskin regions. The edge detector used for this purpose is the Laplacian of Gaussian (LoG), which is found to be appropriate for images containing multiple faces and noise. Eight-connectivity analysis of these regions segregates them as probable face or nonface. The procedure is made more robust by identifying local features within these skin regions, which include the number of holes, the percentage of skin, and the golden ratio. The proposed method has been tested on color face images of various races obtained from different sources, and its performance is found to be encouraging, as the color segmentation cleans up almost all the complex facial features. The result obtained has a calculated accuracy of 86.5% on a test set of 230 images.
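A compact sketch of this kind of pipeline (chrominance-based skin segmentation, an LoG edge check, and connected-component analysis) follows; the Cb/Cr ranges and the LoG threshold are common heuristics assumed for illustration, not necessarily the paper's values, and the hole-count and golden-ratio checks are omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, label
from skimage.color import rgb2ycbcr

def candidate_face_regions(rgb_image, min_area=500):
    """Segment skin-colored regions with Cb/Cr chrominance thresholds,
    require some Laplacian-of-Gaussian edge response inside each region,
    and return connected components as probable face candidates."""
    ycbcr = rgb2ycbcr(rgb_image)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    # Commonly cited skin chrominance ranges (assumed, not from the paper).
    skin = (cb > 77) & (cb < 127) & (cr > 133) & (cr < 173)

    # LoG response on luminance; flat skin-like background gives little response.
    log_edges = np.abs(gaussian_laplace(ycbcr[..., 0], sigma=2.0)) > 1.0

    labeled, n = label(skin)
    candidates = []
    for region_id in range(1, n + 1):
        mask = labeled == region_id
        if mask.sum() >= min_area and log_edges[mask].any():
            candidates.append(mask)
    return candidates
```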
Automated rice leaf disease detection using color image analysis
NASA Astrophysics Data System (ADS)
Pugoy, Reinald Adrian D. L.; Mariano, Vladimir Y.
2011-06-01
In rice-related institutions such as the International Rice Research Institute, assessing the health condition of a rice plant through its leaves, which is usually done as a manual eyeball exercise, is important to come up with good nutrient and disease management strategies. In this paper, an automated system that can detect diseases present in a rice leaf using color image analysis is presented. In the system, the outlier region is first obtained from a rice leaf image to be tested using histogram intersection between the test and healthy rice leaf images. Upon obtaining the outlier, it is then subjected to a threshold-based K-means clustering algorithm to group related regions into clusters. Then, these clusters are subjected to further analysis to finally determine the suspected diseases of the rice leaf.
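One possible reading of the histogram-based outlier step followed by K-means clustering is sketched below; treating bins where the test histogram exceeds the intersection min(h_test, h_healthy) as outlier colors is an assumption, not necessarily the authors' exact rule.

```python
import numpy as np
from sklearn.cluster import KMeans

def disease_clusters(test_leaf, healthy_leaf, n_clusters=3, bins=32):
    """Mark 'outlier' pixels whose colors fall in histogram bins where the test
    leaf exceeds the intersection with the healthy reference, then group them
    by color and position with K-means."""
    outlier_mask = np.zeros(test_leaf.shape[:2], dtype=bool)
    for c in range(3):
        h_test, edges = np.histogram(test_leaf[..., c], bins=bins,
                                     range=(0, 256), density=True)
        h_healthy, _ = np.histogram(healthy_leaf[..., c], bins=bins,
                                    range=(0, 256), density=True)
        outlier_bins = h_test > h_healthy          # mass not covered by min(h_test, h_healthy)
        idx = np.clip(np.digitize(test_leaf[..., c], edges) - 1, 0, bins - 1)
        outlier_mask |= outlier_bins[idx]

    ys, xs = np.nonzero(outlier_mask)
    features = np.column_stack([test_leaf[ys, xs].astype(float), xs, ys])
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    return outlier_mask, labels
```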
Digital image processing of bone - Problems and potentials
NASA Technical Reports Server (NTRS)
Morey, E. R.; Wronski, T. J.
1980-01-01
The development of a digital image processing system for bone histomorphometry and fluorescent marker monitoring is discussed. The system in question is capable of making measurements of UV or light microscope features on a video screen with either video or computer-generated images, and comprises a microscope, low-light-level video camera, video digitizer and display terminal, color monitor, and PDP 11/34 computer. Capabilities demonstrated in the analysis of an undecalcified rat tibia include the measurement of perimeter and total bone area, and the generation of microscope images, false color images, digitized images and contoured images for further analysis. Software development will be based on an existing software library, specifically the mini-VICAR system developed at JPL. It is noted that the potentials of the system in terms of speed and reliability far exceed any problems associated with hardware and software development.
Quantum image encryption based on restricted geometric and color transformations
NASA Astrophysics Data System (ADS)
Song, Xian-Hua; Wang, Shen; Abd El-Latif, Ahmed A.; Niu, Xia-Mu
2014-08-01
A novel encryption scheme for quantum images based on restricted geometric and color transformations is proposed. The new strategy comprises efficient permutation and diffusion properties for quantum image encryption. The core idea of the permutation stage is to scramble the codes of the pixel positions through restricted geometric transformations. Then, a new quantum diffusion operation is implemented on the permutated quantum image based on restricted color transformations. The encryption keys of the two stages are generated by two sensitive chaotic maps, which can ensure the security of the scheme. The final step, measurement, is built by the probabilistic model. Experiments conducted on statistical analysis demonstrate that significant improvements in the results are in favor of the proposed approach.
Visualization and Analysis of Microtubule Dynamics Using Dual Color-Coded Display of Plus-End Labels
Garrison, Amy K.; Xia, Caihong; Wang, Zheng; Ma, Le
2012-01-01
Investigating spatial and temporal control of microtubule dynamics in live cells is critical to understanding cell morphogenesis in development and disease. Tracking fluorescently labeled plus-end-tracking proteins over time has become a widely used method to study microtubule assembly. Here, we report a complementary approach that uses only two images of these labels to visualize and analyze microtubule dynamics at any given time. Using a simple color-coding scheme, labeled plus-ends from two sequential images are pseudocolored with different colors and then merged to display color-coded ends. Based on object recognition algorithms, these colored ends can be identified and segregated into dynamic groups corresponding to four events, including growth, rescue, catastrophe, and pause. Further analysis yields not only their spatial distribution throughout the cell but also provides measurements such as growth rate and direction for each labeled end. We have validated the method by comparing our results with ground-truth data derived from manual analysis as well as with data obtained using the tracking method. In addition, we have confirmed color-coded representation of different dynamic events by analyzing their history and fate. Finally, we have demonstrated the use of the method to investigate microtubule assembly in cells and provided guidance in selecting optimal image acquisition conditions. Thus, this simple computer vision method offers a unique and quantitative approach to study spatial regulation of microtubule dynamics in cells. PMID:23226282
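The two-image color-coding scheme itself is simple enough to sketch directly: the earlier frame of plus-end labels goes to the red channel and the later frame to the green channel, so paired ends read as red/green doublets; the normalization is an assumption, and the event classification and tracking comparison are not reproduced.

```python
import numpy as np

def color_coded_plus_ends(frame_t0, frame_t1):
    """Merge two sequential plus-end label images into one RGB display:
    the earlier frame is shown in red and the later frame in green, so
    co-located ends appear yellow while displaced ends appear as separated
    red/green pairs whose offset reflects growth between the two frames."""
    def norm(a):
        a = a.astype(float)
        return (a - a.min()) / (np.ptp(a) + 1e-9)

    rgb = np.zeros(frame_t0.shape + (3,))
    rgb[..., 0] = norm(frame_t0)   # earlier time point -> red channel
    rgb[..., 1] = norm(frame_t1)   # later time point  -> green channel
    return rgb
```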
ERIC Educational Resources Information Center
El-Gazzar, Abdel-Latif I.
The relative effectiveness of digital versus photographic images was examined with 96 college students as subjects. A 2x2 balanced factorial design was employed to test eight hypotheses. The four groups were (1) digitized black and white; (2) digitized pseudocolor; (3) photographic black and white; and (4) photographic realistic color. Findings…
NASA Technical Reports Server (NTRS)
Acker, J. G.; Leptoukh, G.; Kempler, S.; Gregg, W.; Berrick, S.; Zhu, T.; Liu, Z.; Rui, H.; Shen, S.
2004-01-01
The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) has taken a major step addressing the challenge of using archived Earth Observing System (EOS) data for regional or global studies by developing an infrastructure with a World Wide Web interface which allows online, interactive, data analysis: the GES DISC Interactive Online Visualization and ANalysis Infrastructure, or "Giovanni." Giovanni provides a data analysis environment that is largely independent of underlying data file format. The Ocean Color Time-Series Project has created an initial implementation of Giovanni using monthly Standard Mapped Image (SMI) data products from the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) mission. Giovanni users select geophysical parameters, and the geographical region and time period of interest. The system rapidly generates a graphical or ASCII numerical data output. Currently available output options are: Area plot (averaged or accumulated over any available data period for any rectangular area); Time plot (time series averaged over any rectangular area); Hovmöller plots (image view of any longitude-time and latitude-time cross sections); ASCII output for all plot types; and area plot animations. Future plans include correlation plots, output formats compatible with Geographical Information Systems (GIS), and higher temporal resolution data. The Ocean Color Time-Series Project will produce sensor-independent ocean color data beginning with the Coastal Zone Color Scanner (CZCS) mission and extending through SeaWiFS and Moderate Resolution Imaging Spectroradiometer (MODIS) data sets, and will enable incorporation of Visible/Infrared Imaging Radiometer Suite (VIIRS) data, which will be added to Giovanni. The first phase of Giovanni will also include tutorials demonstrating the use of Giovanni and collaborative assistance in the development of research projects using the SeaWiFS and Ocean Color Time-Series Project data in the online Laboratory for Ocean Color Users (LOCUS). The synergy of Giovanni with high-quality ocean color data provides users with the ability to investigate a variety of important oceanic phenomena, such as coastal primary productivity related to pelagic fisheries, seasonal patterns and interannual variability, interdependence of atmospheric dust aerosols and harmful algal blooms, and the potential effects of climate change on oceanic productivity.
Color image quality in projection displays: a case study
NASA Astrophysics Data System (ADS)
Strand, Monica; Hardeberg, Jon Y.; Nussbaum, Peter
2005-01-01
Recently the use of projection displays has increased dramatically in different applications such as digital cinema, home theatre, and business and educational presentations. Even if the color image quality of these devices has improved significantly over the years, it is still a common situation for users of projection displays that the projected colors differ significantly from the intended ones. The study presented in this paper attempts to analyze the color image quality of a large set of projection display devices, particularly investigating the variations in color reproduction. As a case study, a set of 14 projectors (LCD and DLP technology) at Gjøvik University College were tested under four different conditions: dark and light room, with and without using an ICC profile. To find out more about the importance of the illumination conditions in a room, and the degree of improvement when using an ICC profile, the results from the measurements were processed and analyzed. Eye-One Beamer from GretagMacbeth was used to make the profiles. The color image quality was evaluated both visually and by color difference calculations. The results from the analysis indicated large visual and colorimetric differences between the projectors. The DLP projectors generally have smaller color gamuts than the LCD projectors, and the color gamuts of older projectors are significantly smaller than those of newer ones. The amount of ambient light reaching the screen is of great importance for the visual impression: if too much reflection or other ambient light reaches the screen, the projected image becomes pale and has low contrast. When using a profile, the differences in color between the projectors get smaller and the colors appear more correct. For one device, the average ΔE*ab color difference compared to a relative white reference was reduced from 22 to 11, for another from 13 to 6. Blue colors show the largest variations among the projection displays and are therefore harder to predict.
Image quality evaluation of color displays using a Fovean color camera
NASA Astrophysics Data System (ADS)
Roehrig, Hans; Dallas, William J.; Fan, Jiahua; Krupinski, Elizabeth A.; Redford, Gary R.; Yoneda, Takahiro
2007-03-01
This paper presents preliminary data on the use of a color camera for the evaluation of Quality Control (QC) and Quality Analysis (QA) of a color LCD in comparison with that of a monochrome LCD. The color camera is a C-MOS camera with a pixel size of 9 µm and a pixel matrix of 2268 × 1512 × 3. The camera uses a sensor that has co-located pixels for all three primary colors. The imaging geometry used was mostly 12 × 12 camera pixels per display pixel, even though it appears that an imaging geometry of 17.6 might provide more accurate results. The color camera is used as an imaging colorimeter, where each camera pixel is calibrated to serve as a colorimeter. This capability permits the camera to determine the chromaticity of the color LCD at different sections of the display. After color calibration with a CS-200 colorimeter, the color coordinates of the display's primaries determined from the camera's luminance response are very close to those found from the CS-200; only the color coordinates of the display's white point were in error. The Modulation Transfer Function (MTF) as well as the noise in terms of the Noise Power Spectrum (NPS) of both LCDs were evaluated. The horizontal MTFs of both displays have a larger negative slope than the vertical MTFs, indicating that the horizontal MTFs are poorer than the vertical MTFs. However, the modulation at the Nyquist frequency appears lower for the color LCD than for the monochrome LCD. These results contradict simulations regarding MTFs in the vertical direction. The spatial noise of the color display in both directions is larger than that of the monochrome display. Attempts were also made to separate the total noise into spatial and temporal components by subtracting images taken at exactly the same exposure. Temporal noise appears to be significantly lower than spatial noise.
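A standard single-component NPS estimate from flat-field ROIs can be sketched as below; this uses the textbook normalization (pixel area over ROI size, averaged over ROIs), not necessarily the exact measurement procedure used by the authors, and the spatial/temporal separation by image subtraction is omitted.

```python
import numpy as np

def noise_power_spectrum(flat_field_rois, pixel_pitch_mm):
    """Estimate a 2D noise power spectrum from a stack of uniform (flat-field)
    ROIs captured at the same display and camera settings.

    flat_field_rois: (n_rois, ny, nx) array; pixel_pitch_mm: sampling pitch."""
    rois = flat_field_rois.astype(float)
    n, ny, nx = rois.shape
    nps = np.zeros((ny, nx))
    for roi in rois:
        noise = roi - roi.mean()                      # remove the mean (DC) level
        nps += np.abs(np.fft.fft2(noise)) ** 2
    # Normalization: pixel area / (Nx * Ny), averaged over the ROIs.
    nps *= (pixel_pitch_mm ** 2) / (nx * ny * n)
    return np.fft.fftshift(nps)
```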
Investigation of Terrain Analysis and Classification Methods for Ground Vehicles
2012-08-27
exteroceptive terrain classifier takes exteroceptive sensor data (here, color stereo images of the terrain) as its input and returns terrain class...Mishkin & Laubach, 2006), the rover cannot safely travel beyond the distance it can image with its cameras, which has been as little as 15 meters or...field of view roughly 44°×30°, capturing pairs of color images at 640×480 pixels each (Videre Design, 2001). Range data were extracted from the stereo
Near-Infrared Coloring via a Contrast-Preserving Mapping Model.
Chang-Hwan Son; Xiao-Ping Zhang
2017-11-01
Near-infrared gray images captured along with corresponding visible color images have recently proven useful for image restoration and classification. This paper introduces a new coloring method to add colors to near-infrared gray images based on a contrast-preserving mapping model. A naive coloring method directly adds the colors from the visible color image to the near-infrared gray image. However, this method results in an unrealistic image because of the discrepancies in the brightness and image structure between the captured near-infrared gray image and the visible color image. To solve the discrepancy problem, first, we present a new contrast-preserving mapping model to create a new near-infrared gray image with a similar appearance in the luminance plane to the visible color image, while preserving the contrast and details of the captured near-infrared gray image. Then, we develop a method to derive realistic colors that can be added to the newly created near-infrared gray image based on the proposed contrast-preserving mapping model. Experimental results show that the proposed new method not only preserves the local contrast and details of the captured near-infrared gray image, but also transfers the realistic colors from the visible color image to the newly created near-infrared gray image. It is also shown that the proposed near-infrared coloring can be used effectively for noise and haze removal, as well as local contrast enhancement.
Effect of multi-wavelength irradiation on color characterization with light-emitting diodes (LEDs)
NASA Astrophysics Data System (ADS)
Park, Hyeong Ju; Song, Woosub; Lee, Byeong-Il; Kim, Hyejin; Kang, Hyun Wook
2017-06-01
In the current study, a multi-wavelength light-emitting diode (LED)-integrated CMOS imaging device was developed to investigate the effect of various wavelengths on multiple color characterization. Various color pigments (black, red, green, and blue) were applied on both white paper and skin phantom surfaces for quantitative analysis. The artificial skin phantoms were made of polydimethylsiloxane (PDMS) mixed with coffee and TiO2 powder to emulate the optical properties of the human dermis. The customized LED-integrated imaging device acquired images of the applied pigments by sequentially irradiating with the LED lights in the order of white, red, green, and blue. Each color pigment induced a lower contrast during illumination by the light with the equivalent color. However, the illumination by light with the complementary (opposite) color increased the signal-to-noise ratio by up to 11-fold due to the formation of a strong contrast (i.e., red LED = 1.6 ± 0.3 vs. green LED = 19.0 ± 0.6 for red pigment). Detection of color pigments in conjunction with multi-wavelength LEDs can be a simple and reliable technique to estimate variations in the color pigments quantitatively.
A New Color Image Encryption Scheme Using CML and a Fractional-Order Chaotic System
Wu, Xiangjun; Li, Yang; Kurths, Jürgen
2015-01-01
The chaos-based image cryptosystems have been widely investigated in recent years to provide real-time encryption and transmission. In this paper, a novel color image encryption algorithm by using coupled-map lattices (CML) and a fractional-order chaotic system is proposed to enhance the security and robustness of the encryption algorithms with a permutation-diffusion structure. To make the encryption procedure more confusing and complex, an image division-shuffling process is put forward, where the plain-image is first divided into four sub-images, and then the position of the pixels in the whole image is shuffled. In order to generate initial conditions and parameters of two chaotic systems, a 280-bit long external secret key is employed. The key space analysis, various statistical analysis, information entropy analysis, differential analysis and key sensitivity analysis are introduced to test the security of the new image encryption algorithm. The cryptosystem speed is analyzed and tested as well. Experimental results confirm that, in comparison to other image encryption schemes, the new algorithm has higher security and is fast for practical image encryption. Moreover, an extensive tolerance analysis of some common image processing operations such as noise adding, cropping, JPEG compression, rotation, brightening and darkening, has been performed on the proposed image encryption technique. Corresponding results reveal that the proposed image encryption method has good robustness against some image processing operations and geometric attacks. PMID:25826602
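The CML and fractional-order components are too involved for a short example, but the permutation-diffusion structure the scheme builds on can be illustrated with a plain logistic map; this is a generic, non-vetted demonstration, not the proposed cryptosystem, and the "diffusion" stage here is a simple keystream XOR without the ciphertext chaining a real scheme would use.

```python
import numpy as np

def logistic_sequence(x0, r, n):
    """Generate n values of the logistic map x <- r * x * (1 - x)."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def permute_diffuse_encrypt(image, key=(0.3456, 3.99, 0.7891, 3.97)):
    """Generic permutation-diffusion encryption of a uint8 color image:
    scramble pixel positions with one chaotic sequence, then XOR the pixel
    bytes with a keystream derived from a second sequence."""
    x0p, rp, x0d, rd = key
    flat = image.reshape(-1, image.shape[-1])
    perm = np.argsort(logistic_sequence(x0p, rp, flat.shape[0]))   # permutation stage
    keystream = (logistic_sequence(x0d, rd, flat.size) * 256).astype(np.uint8)
    cipher = flat[perm].reshape(-1) ^ keystream                    # keystream masking stage
    return cipher.reshape(image.shape)

def permute_diffuse_decrypt(cipher, key=(0.3456, 3.99, 0.7891, 3.97)):
    x0p, rp, x0d, rd = key
    n = cipher.size // cipher.shape[-1]
    perm = np.argsort(logistic_sequence(x0p, rp, n))
    keystream = (logistic_sequence(x0d, rd, cipher.size) * 256).astype(np.uint8)
    flat = (cipher.reshape(-1) ^ keystream).reshape(-1, cipher.shape[-1])
    recovered = np.empty_like(flat)
    recovered[perm] = flat              # undo the permutation
    return recovered.reshape(cipher.shape)
```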
Shortcomings of low-cost imaging systems for viewing computed radiographs.
Ricke, J; Hänninen, E L; Zielinski, C; Amthauer, H; Stroszczynski, C; Liebig, T; Wolf, M; Hosten, N
2000-01-01
To assess potential advantages of a new PC-based viewing tool featuring image post-processing for viewing computed radiographs on low-cost hardware (PC) with a common display card and color monitor, and to evaluate the effect of using color versus monochrome monitors. Computed radiographs of a statistical phantom were viewed on a PC, with and without post-processing (spatial frequency and contrast processing), employing a monochrome or a color monitor. Findings were compared with viewing on a radiological workstation and evaluated with ROC analysis. Image post-processing improved the perception of low-contrast details significantly, irrespective of the monitor used. No significant difference in perception was observed between monochrome and color monitors. Review at the radiological workstation was superior to review on the PC with image processing. Lower-quality hardware (graphics card and monitor) used in low-cost PCs negatively affects the perception of low-contrast details in computed radiographs. In this situation, it is highly recommended to use spatial frequency and contrast processing. No significant quality gain was observed for the high-end monochrome monitor compared with the color display; however, the color monitor was more strongly affected by high ambient illumination.
NASA Astrophysics Data System (ADS)
Özkan, Mutlu; Çelik, Ömer Faruk; Özyavaş, Aziz
2018-02-01
One of the most appropriate approaches to better understand and interpret the geologic evolution of an accretionary complex is to make a detailed geologic map. The fact that ophiolite sequences consist of various rock types may require a unique image processing method to map each ophiolite body. The accretionary complex in the study area is composed mainly of ophiolitic and metamorphic rocks along with epi-ophiolitic sedimentary rocks. This paper attempts to map the Late Cretaceous accretionary complex in detail in northern Sivas (within the İzmir-Ankara-Erzincan Suture Zone in Turkey) by the analysis of all of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) bands and field study. The two new hybrid color composite images yield satisfactory results in delineating peridotite, gabbro, basalt, and epi-ophiolitic sedimentary rocks of the accretionary complex in the study area. While the first hybrid color composite image consists of one principal component (PC) and two band ratios (PC1, 3/4, 4/6 in the RGB), the PC5, the original ASTER band 4, and the 3/4 band ratio images were assigned to the RGB colors to generate the second hybrid color composite image. In addition, the spectral indices derived from the ASTER thermal infrared (TIR) bands clearly discriminate ultramafic, siliceous, and carbonate rocks from adjacent lithologies at a regional scale. Peridotites with varying degrees of serpentinization, displayed as a single color, were best identified in the spectral indices map. Furthermore, the boundaries of ophiolitic rocks based on fieldwork were outlined in detail in some parts of the study area by superimposing the ASTER maps on Google Earth images of finer spatial resolution. Eventually, the encouraging geologic map generated by the image analysis of ASTER data correlates strongly with lithological boundaries from the field survey.
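Building such hybrid composites is straightforward once the bands are co-registered; the sketch below forms an RGB composite from the first principal component and the 3/4 and 4/6 band ratios, mirroring the first composite described, with the band dictionary keys and the percentile stretch being assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def hybrid_color_composite(bands):
    """Build an RGB hybrid composite from a dict of co-registered ASTER-like
    bands: R = first principal component of the band stack, G = band3/band4
    ratio, B = band4/band6 ratio (cf. the PC1, 3/4, 4/6 composite)."""
    def stretch(a):
        lo, hi = np.percentile(a, (2, 98))            # 2-98% linear stretch
        return np.clip((a - lo) / (hi - lo + 1e-9), 0, 1)

    stack = np.stack([bands[k].astype(float) for k in sorted(bands)], axis=-1)
    h, w, n = stack.shape
    pc1 = PCA(n_components=1).fit_transform(stack.reshape(-1, n)).reshape(h, w)

    ratio_34 = bands["b3"].astype(float) / (bands["b4"] + 1e-9)
    ratio_46 = bands["b4"].astype(float) / (bands["b6"] + 1e-9)
    return np.dstack([stretch(pc1), stretch(ratio_34), stretch(ratio_46)])
```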
Yi, Chucai; Tian, Yingli
2012-09-01
In this paper, we propose a novel framework to extract text regions from scene images with complex backgrounds and multiple text appearances. This framework consists of three main steps: boundary clustering (BC), stroke segmentation, and string fragment classification. In BC, we propose a new bigram-color-uniformity-based method to model both text and attachment surface, and cluster edge pixels based on color pairs and spatial positions into boundary layers. Then, stroke segmentation is performed at each boundary layer by color assignment to extract character candidates. We propose two algorithms to combine the structural analysis of text stroke with color assignment and filter out background interferences. Further, we design a robust string fragment classification based on Gabor-based text features. The features are obtained from feature maps of gradient, stroke distribution, and stroke width. The proposed framework of text localization is evaluated on scene images, born-digital images, broadcast video images, and images of handheld objects captured by blind persons. Experimental results on respective datasets demonstrate that the framework outperforms state-of-the-art localization algorithms.
Malware analysis using visualized image matrices.
Han, KyoungSoo; Kang, BooJoong; Im, Eul Gyu
2014-01-01
This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities between the image matrices. In particular, the proposed methods can also handle packed malware samples by applying them to the execution traces extracted through dynamic analysis. When the images are generated, the overhead can be reduced by extracting opcode sequences only from the blocks that include instructions related to staple behaviors such as functions and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons needed to classify unknown samples; the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically, with accuracies of 0.9896 and 0.9732, respectively.
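A heavily hedged sketch of the general idea follows. The hashing of opcode n-grams to matrix cells and RGB values is a hypothetical stand-in for the paper's mapping, which is not specified in the abstract, and the similarity measure shown is a simple pixel-overlap score rather than the authors' metric.

```python
# Illustrative sketch only: map an opcode sequence onto an RGB image matrix and
# compare two such matrices by normalized pixel difference.
import hashlib
import numpy as np

def opcode_image(opcodes, size=64, n=2):
    img = np.zeros((size, size, 3), dtype=np.uint32)
    for i in range(len(opcodes) - n + 1):
        gram = " ".join(opcodes[i:i + n]).encode()
        digest = hashlib.md5(gram).digest()            # stable hash of the opcode n-gram
        x, y = digest[0] % size, digest[1] % size      # matrix coordinates (hypothetical rule)
        rgb = np.frombuffer(digest[2:5], dtype=np.uint8)
        img[y, x] += rgb                               # accumulate colored hits
    return np.clip(img, 0, 255).astype(np.uint8)

def image_similarity(a, b):
    # crude similarity: 1 minus the mean absolute pixel difference, scaled to [0, 1]
    return 1.0 - np.abs(a.astype(int) - b.astype(int)).mean() / 255.0
```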
Farah, Ra'fat I
2016-01-01
The objectives of this in vitro study were: 1) to test the agreement between color coordinate differences and total color difference (ΔL*, ΔC*, Δh°, and ΔE) measurements obtained by digital image analysis (DIA) and by spectrophotometer, and 2) to test the reliability of each method for obtaining color differences. A digital camera was used to record standardized images of each of the 15 shade tabs from the IPS e.max shade guide placed edge-to-edge in a phantom head with a reference shade tab. The images were analyzed using image-editing software (Adobe Photoshop) to obtain the color differences between the middle area of each test shade tab and the corresponding area of the reference tab. The color differences for the same shade tab areas were also measured using a spectrophotometer. To assess reliability, measurements for the 15 shade tabs were repeated twice using the two methods. The intraclass correlation coefficient (ICC) and the Dahlberg index were used to calculate agreement and reliability. The total agreement of the two methods for measuring ΔL*, ΔC*, Δh°, and ΔE, according to the ICC, exceeded 0.82. The Dahlberg indices for ΔL* and ΔE were 2.18 and 2.98, respectively. For the reliability calculation, the ICCs for the DIA and spectrophotometer ΔE were 0.91 and 0.94, respectively. High agreement was obtained between the DIA and spectrophotometer results for the ΔL*, ΔC*, Δh°, and ΔE measurements. Further, the reliability of the spectrophotometer measurements was slightly higher than that of the DIA measurements.
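For readers who want to reproduce the color-difference arithmetic, a minimal sketch is given below, assuming CIELAB coordinates are already available for each tab. It uses the CIE76 ΔE*ab formula; for brevity the hue difference ignores the 360° wrap-around.

```python
# Minimal sketch: ΔL*, ΔC*, Δh°, and CIE76 ΔE*ab between a test and a reference CIELAB color.
import numpy as np

def color_differences(lab_test, lab_ref):
    L1, a1, b1 = lab_test
    L2, a2, b2 = lab_ref
    dL = L1 - L2
    C1, C2 = np.hypot(a1, b1), np.hypot(a2, b2)        # chroma of each color
    dC = C1 - C2
    # hue-angle difference in degrees (wrap-around at 360° ignored for brevity)
    dh = np.degrees(np.arctan2(b1, a1)) - np.degrees(np.arctan2(b2, a2))
    dE = np.sqrt(dL**2 + (a1 - a2)**2 + (b1 - b2)**2)  # CIE76 total color difference
    return dL, dC, dh, dE

print(color_differences((72.1, 1.5, 18.0), (70.3, 2.0, 16.4)))
```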
Uniform color space analysis of LACIE image products
NASA Technical Reports Server (NTRS)
Nalepka, R. F. (Principal Investigator); Balon, R. J.; Cicone, R. C.
1979-01-01
The author has identified the following significant results. Analysis and comparison of image products generated by different algorithms show that the scaling and biasing of data channels for control of PFC primaries lead to loss of information (in a probability-of-misclassification sense) by two major processes. In order of importance, they are: neglecting the input of one channel of data in any one image, and failing to provide sufficient color resolution of the data. The scaling and biasing approach tends to distort distance relationships in data space and provides less than desirable resolution when the data variation is typical of a developed, nonhazy agricultural scene.
Introduction to Color Imaging Science
NASA Astrophysics Data System (ADS)
Lee, Hsien-Che
2005-04-01
Color imaging technology has become almost ubiquitous in modern life in the form of monitors, liquid crystal screens, color printers, scanners, and digital cameras. This book is a comprehensive guide to the scientific and engineering principles of color imaging. It covers the physics of light and color, how the eye and physical devices capture color images, how color is measured and calibrated, and how images are processed. It stresses physical principles and includes a wealth of real-world examples. The book will be of value to scientists and engineers in the color imaging industry and, with homework problems, can also be used as a text for graduate courses on color imaging.
A real-time error-free color-correction facility for digital consumers
NASA Astrophysics Data System (ADS)
Shaw, Rodney
2008-01-01
It has been well known since the earliest days of color photography that color balance in general, and facial reproduction (flesh tones) in particular, are of dominant interest to the consumer, and significant research resources have been expended in satisfying this need. The general problem is a difficult one, spanning the factors that govern perception and personal preference, the physics and chemistry of color reproduction, and the wide field of color measurement, specification, and analysis. However, with the advent of digital photography and its widespread acceptance in the consumer market, and with the possibility of a much greater degree of individual control over color reproduction, the field is taking on a new consumer-driven impetus, and the provision of user facilities for preferred color choice now constitutes an intense field of research. In addition, owing to the conveniences of digital technology, the collection of large databases and statistics relating to individual color preferences has become a relatively straightforward operation. Using a consumer-preference approach of this type, we have developed a user-friendly facility whereby unskilled consumers may manipulate the color of their personal digital images according to their preferences. By virtue of its ease of operation and the real-time nature of the color-correction transforms, this facility can readily be inserted anywhere a consumer interacts with a digital image, from camera, printer, or scanner to web or photo kiosk. Here the underlying scientific principles are explored in detail and related to practical color-preference outcomes. Examples are given of the application to the correction of images with unsatisfactory color balance, especially flesh tones and faces, and the nature of the consumer controls and their corresponding image transformations is explored.
Automated feature extraction in color retinal images by a model based approach.
Li, Huiqi; Chutatape, Opas
2004-02-01
Color retinal photography is an important tool for detecting the evidence of various eye diseases. Novel methods to extract the main features in color retinal images are developed in this paper. Principal component analysis is employed to locate the optic disk; a modified active shape model is proposed for detecting the shape of the optic disk; a fundus coordinate system is established to provide a better description of the features in retinal images; and an approach combining region growing and edge detection is proposed to detect exudates. The success rates of disk localization, disk boundary detection, and fovea localization are 99%, 94%, and 100%, respectively. The sensitivity and specificity of exudate detection are 100% and 71%, respectively. The success of the proposed algorithms can be attributed to the use of model-based methods. The detection and analysis could be applied to automatic mass screening and diagnosis of retinal diseases.
NASA Astrophysics Data System (ADS)
Ojima, Nobutoshi; Okiyama, Natsuko; Okaguchi, Saya; Tsumura, Norimichi; Nakaguchi, Toshiya; Hori, Kimihiko; Miyake, Yoichi
2005-04-01
In the cosmetics industry, skin color is very important because skin color gives a direct impression of the face. In particular, many people suffer from melanin pigmentation such as liver spots and freckles. However, it is very difficult to evaluate melanin pigmentation using conventional colorimetric values, because these values contain information on various skin chromophores simultaneously. It is therefore necessary to extract the information of each skin chromophore independently, as density information. The isolation of the melanin component image from a single skin image based on independent component analysis (ICA) was reported in 2003; however, a quantification method for melanin pigmentation had not been developed from this technique. This paper introduces a quantification method based on the ICA of a skin color image to isolate melanin pigmentation. The image acquisition system consists of commercially available equipment such as digital cameras and lighting sources with polarized light. The acquired images were analyzed using ICA to extract the melanin component images, and a Laplacian of Gaussian (LoG) filter was applied to extract the pigmented area. The method worked well for skin images showing melanin pigmentation and acne. Finally, the total extracted area corresponded strongly to the subjective rating values for the appearance of pigmentation. Further analysis is needed to relate the appearance of pigmentation to the size of the pigmented area and its spatial gradation.
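The sketch below illustrates an assumed version of this pipeline: FastICA applied to per-channel optical densities of a polarized RGB skin image to obtain two chromophore maps, followed by a LoG filter to pick out blob-like pigmented spots. The optical-density preprocessing, the component count, and the threshold rule are assumptions, and deciding which component corresponds to melanin is left to inspection, as it would be in practice.

```python
# Illustrative sketch only (not the authors' implementation).
import numpy as np
from sklearn.decomposition import FastICA
from scipy.ndimage import gaussian_laplace

def chromophore_maps(rgb):
    # rgb: float image of shape (H, W, 3) with values in (0, 1]
    od = -np.log(np.clip(rgb, 1e-4, 1.0))          # optical density per channel
    h, w, _ = od.shape
    sources = FastICA(n_components=2, random_state=0).fit_transform(od.reshape(-1, 3))
    return sources.reshape(h, w, 2)                # two independent components

def pigmented_area(melanin_map, sigma=3.0, k=2.0):
    log = gaussian_laplace(melanin_map, sigma=sigma)   # Laplacian of Gaussian response
    return log < -k * log.std()                        # strong blob responses = pigmented spots
```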
Digital color analysis of color-ratio composite LANDSAT scenes. [Nevada
NASA Technical Reports Server (NTRS)
Raines, G. L.
1977-01-01
A method is presented that can be used to calculate approximate Munsell coordinates of the colors produced by making a color composite from three registered images. Applied to the LANDSAT MSS data of the Goldfield, Nevada, area, this method permits precise and quantitative definition of the limonitic areas originally observed in a LANDSAT color ratio composite. In addition, areas of transported limonite can be discriminated from the limonite in the hydrothermally altered areas of the Goldfield mining district. From the analysis, the numerical distinction between limonitic and nonlimonitic ground is generally less than 3% using the LANDSAT bands and as much as 8% in ratios of LANDSAT MSS bands.
Automated biodosimetry using digital image analysis of fluorescence in situ hybridization specimens.
Castleman, K R; Schulze, M; Wu, Q
1997-11-01
Fluorescence in situ hybridization (FISH) of metaphase chromosome spreads is valuable for monitoring the radiation dose to circulating lymphocytes. At low dose levels, the number of cells that must be examined to estimate aberration frequencies is quite large. An automated microscope that can perform this analysis autonomously on suitably prepared specimens promises to make practical the large-scale studies that will be required for biodosimetry in the future. This paper describes such an instrument that is currently under development. We use metaphase specimens in which the five largest chromosomes have been hybridized with different-colored whole-chromosome painting probes. An automated multiband fluorescence microscope locates the spreads and counts the number of chromosome components of each color. Digital image analysis is used to locate and isolate the cells, count chromosome components, and estimate the proportions of abnormal cells. Cells exhibiting more than two chromosomal fragments in any color correspond to a clastogenic event. These automatically derived counts are corrected for statistical bias and used to estimate the overall rate of chromosome breakage. Overlap of fluorophore emission spectra prohibits isolation of the different chromosomes into separate color channels. Image processing effectively isolates each fluorophore to a single monochrome image, simplifying the task of counting chromosome fragments and reducing the error in the algorithm. Using proportion estimation, we remove the bias introduced by counting errors, leaving accuracy restricted by sample size considerations alone.
Pseudo color ghost coding imaging with pseudo thermal light
NASA Astrophysics Data System (ADS)
Duan, De-yang; Xia, Yun-jie
2018-04-01
We present a new pseudo color imaging scheme, named pseudo color ghost coding imaging, based on ghost imaging but with a multiwavelength source modulated by a spatial light modulator. In contrast to conventional pseudo color imaging, where the absence of nondegenerate-wavelength spatial correlations yields only extra monochromatic images, the degenerate-wavelength and nondegenerate-wavelength spatial correlations between the idler beam and the signal beam can be obtained simultaneously. This scheme yields a more colorful image with higher quality than conventional pseudo color coding techniques. More importantly, and in contrast to conventional pseudo color coding imaging techniques, images with different colors can be obtained without changing the light source or the spatial filter.
Effect of color coding and subtraction on the accuracy of contrast echocardiography
NASA Technical Reports Server (NTRS)
Pasquet, A.; Greenberg, N.; Brunken, R.; Thomas, J. D.; Marwick, T. H.
1999-01-01
BACKGROUND: Contrast echocardiography may be used to assess myocardial perfusion. However, gray scale assessment of myocardial contrast echocardiography (MCE) is difficult because of variations in regional backscatter intensity, difficulties in distinguishing varying shades of gray, and artifacts or attenuation. We sought to determine whether the assessment of rest myocardial perfusion by MCE could be improved with subtraction and color coding. METHODS AND RESULTS: MCE was performed in 31 patients with previous myocardial infarction using a second-generation agent (NC100100, Nycomed AS) with harmonic triggered or continuous imaging; gain settings were kept constant throughout the study. Digitized images were post-processed by subtracting the baseline from the contrast data and colorized to reflect the intensity of myocardial contrast. Gray scale MCE alone, MCE combined with baseline images, and subtracted colorized images were scored independently using a 16-segment model. The presence and severity of myocardial contrast abnormalities were compared with perfusion defined by rest MIBI-SPECT. Segments that were not visualized by continuous (17%) or triggered imaging (14%) after color processing were excluded from further analysis. The specificity of gray scale MCE alone (56%) or MCE combined with baseline 2D (47%) was significantly enhanced by subtraction and color coding (76%, p<0.001) of triggered images. The accuracy of the gray scale approaches (52% and 47%, respectively) was increased to 70% (p<0.001). Similarly, for continuous images, the specificity of gray scale MCE with and without baseline comparison was 23% and 42%, respectively, compared with 60% after post-processing (p<0.001). The accuracy of colorized images (59%) was also significantly greater than that of gray scale MCE (43% and 29%, p<0.001). The sensitivity of MCE for both acquisitions was not altered by subtraction. CONCLUSION: Post-processing with subtraction and color coding significantly improves the accuracy and specificity of MCE for the detection of perfusion defects.
Image indexing using color correlograms
Huang, Jing; Kumar, Shanmugasundaram Ravi; Mitra, Mandar; Zhu, Wei-Jing
2001-01-01
A color correlogram is a three-dimensional table indexed by color and distance between pixels which expresses how the spatial correlation of color changes with distance in a stored image. The color correlogram may be used to distinguish an image from other images in a database. To create a color correlogram, the colors in the image are quantized into m color values, c_1, …, c_m. Also, the distance values k ∈ [d] to be used in the correlogram are determined, where [d] is the set of distances between pixels in the image and d_max is the maximum distance between pixels in the image. Each entry (i, j, k) in the table is the probability of finding a pixel of color c_j at a selected distance k from a pixel of color c_i. A color autocorrelogram, which is a restricted version of the color correlogram that considers only color pairs of the form (i, i), may also be used to identify an image.
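A rough, unoptimized sketch of the autocorrelogram computation is given below, assuming a small color-quantized image and the L-infinity (Chebyshev) pixel distance. It is written for clarity rather than speed; efficient implementations use dynamic programming over the image.

```python
# Brute-force color autocorrelogram: for each color c and distance k, the probability
# that a pixel at Chebyshev distance k from a pixel of color c also has color c.
import numpy as np

def autocorrelogram(quantized, m, distances):
    # quantized: 2D int array of color indices in [0, m); distances: list of k values
    h, w = quantized.shape
    result = np.zeros((m, len(distances)))
    for ki, k in enumerate(distances):
        for c in range(m):
            ys, xs = np.nonzero(quantized == c)
            hits = total = 0
            for y, x in zip(ys, xs):
                for dy in range(-k, k + 1):            # walk the ring at distance exactly k
                    for dx in range(-k, k + 1):
                        if max(abs(dy), abs(dx)) != k:
                            continue
                        yy, xx = y + dy, x + dx
                        if 0 <= yy < h and 0 <= xx < w:
                            total += 1
                            hits += quantized[yy, xx] == c
            result[c, ki] = hits / total if total else 0.0
    return result
```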
Overview of geostationary ocean color imager (GOCI) and GOCI data processing system (GDPS)
NASA Astrophysics Data System (ADS)
Ryu, Joo-Hyung; Han, Hee-Jeong; Cho, Seongick; Park, Young-Je; Ahn, Yu-Hwan
2012-09-01
GOCI, the world's first geostationary ocean color satellite, provides images with a spatial resolution of 500 m at hourly intervals up to 8 times a day, allowing observations of short-term changes in the Northeast Asian region. The GOCI Data Processing System (GDPS), a specialized data processing software for GOCI, was developed for real-time generation of various products. This paper describes GOCI characteristics and GDPS workflow/products, so as to enable the efficient utilization of GOCI. To provide quality images and data, atmospheric correction and data analysis algorithms must be improved through continuous Cal/Val. GOCI-II will be developed by 2018 to facilitate in-depth studies on geostationary ocean color satellites.
Shrivastava, Vimal K; Londhe, Narendra D; Sonawane, Rajendra S; Suri, Jasjit S
2015-10-01
A large percentage of the dermatologist's decision in psoriasis disease assessment is based on color. Current computer-aided diagnosis systems for psoriasis risk stratification and classification lack a rigorous color paradigm. This paper presents an automated psoriasis computer-aided diagnosis (pCAD) system for the classification of psoriasis skin images into psoriatic lesion and healthy skin, which solves two major challenges: (i) it fulfills the color feature requirements and (ii) it selects the powerful dominant color features while retaining high classification accuracy. Fourteen color spaces are explored for psoriasis disease analysis, leading to 86 color features. The pCAD system is implemented in a support-vector-based machine learning framework in which an offline image data set is used to compute the offline color machine learning parameters; these are then used to transform the online color features and predict the class labels for healthy vs. diseased cases. This paradigm uses principal component analysis to select the dominant color features, keeping the original color features unaltered. Using a cross-validation protocol, the above machine learning protocol is compared against standalone grayscale features (60 features) and against the combined grayscale and color feature set (146 features). Using a fixed data size of 540 images with equal numbers of healthy and diseased cases, a 10-fold cross-validation protocol, and an SVM with a second-order polynomial kernel, the pCAD system shows an accuracy of 99.94% with sensitivity and specificity of 99.93% and 99.96%. Using a varying data size protocol, the mean classification accuracies for the color, grayscale, and combined scenarios are 92.85%, 93.83%, and 93.99%, respectively, and the reliability of the system in these three scenarios is 94.42%, 97.39%, and 96.00%, respectively. We conclude that the pCAD system using color space alone is comparable to the grayscale space or the combined color and grayscale spaces. We validated the pCAD system against facial color databases, and the results are consistent in accuracy and reliability. Copyright © 2015 Elsevier Ltd. All rights reserved.
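A hedged sketch of the overall classification protocol is shown below. Synthetic random features stand in for the 86 color features, the PCA variance threshold and SVM hyperparameters are assumptions, and the point is only to show the scaler-PCA-polynomial-SVM pipeline under 10-fold cross-validation.

```python
# Illustrative sketch of a PCA + polynomial-kernel SVM protocol with 10-fold CV (synthetic data).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(540, 86))           # 540 images x 86 color features (synthetic stand-in)
y = np.repeat([0, 1], 270)               # healthy vs. psoriatic labels
X[y == 1] += 0.4                         # inject a weak class difference so CV is meaningful

pcad = make_pipeline(StandardScaler(),
                     PCA(n_components=0.95),          # keep the dominant color components
                     SVC(kernel="poly", degree=2))    # second-order polynomial kernel
print(cross_val_score(pcad, X, y, cv=10).mean())      # 10-fold cross-validation accuracy
```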
Stoecker, William V.; Gupta, Kapil; Stanley, R. Joe; Moss, Randy H.; Shrestha, Bijaya
2011-01-01
Background Dermoscopy, also known as dermatoscopy or epiluminescence microscopy (ELM), is a non-invasive, in vivo technique which permits visualization of features of pigmented melanocytic neoplasms that are not discernible by examination with the naked eye. One prominent feature useful for melanoma detection in dermoscopy images is the asymmetric blotch (asymmetric structureless area). Method Using both relative and absolute colors, blotches are detected automatically in this research by thresholding in the red and green color planes. Several blotch indices are computed, including the scaled distance between the largest blotch centroid and the lesion centroid, the ratio of total blotch area to lesion area, the ratio of largest blotch area to lesion area, the total number of blotches, the size of the largest blotch, and the irregularity of the largest blotch. Results The effectiveness of the absolute and relative color blotch features was examined for melanoma/benign lesion discrimination over a dermoscopy image set containing 165 melanomas (151 invasive melanomas and 14 melanomas in situ) and 347 benign lesions (124 nevocellular nevi without dysplasia and 223 dysplastic nevi) using a leave-one-out neural network approach. Receiver operating characteristic curve results are shown, highlighting the sensitivity and specificity of melanoma detection. Statistical analysis of the blotch features is also presented. Conclusion Neural network and statistical analysis showed that the blotch detection method was somewhat more effective using relative color than using absolute color. The relative-color blotch detection method gave a diagnostic accuracy of about 77%. PMID:15998328
New regularization scheme for blind color image deconvolution
NASA Astrophysics Data System (ADS)
Chen, Li; He, Yu; Yap, Kim-Hui
2011-01-01
This paper proposes a new regularization scheme to address blind color image deconvolution. Color images generally have significant correlation among the red, green, and blue channels. Conventional blind monochromatic deconvolution algorithms handle each color channel independently, thereby ignoring the interchannel correlation present in color images. In view of this, a unified regularization scheme for images is developed to recover the edges of color images and reduce color artifacts. In addition, by using color image properties, a spectral-based regularization operator is adopted to impose constraints on the blurs. Further, this paper proposes a reinforcement regularization framework that integrates a soft parametric learning term to address blind color image deconvolution. A blur modeling scheme is developed to evaluate the relevance of multiple parametric blur structures, and this information is integrated into the deconvolution scheme. An alternating-minimization optimization procedure is then employed to iteratively minimize the image- and blur-domain cost functions. Experimental results show that the method is able to achieve satisfactory restored color images under different blurring conditions.
NASA Astrophysics Data System (ADS)
Gong, Rui; Xu, Haisong; Wang, Binyu; Luo, Ming Ronnier
2012-08-01
The image quality of two active matrix organic light emitting diode (AMOLED) smart-phone displays and two in-plane switching (IPS) displays was visually assessed at two levels of ambient lighting corresponding to indoor and outdoor applications, respectively. Naturalness, colorfulness, brightness, contrast, sharpness, and overall image quality were evaluated via a psychophysical experiment using the categorical judgment method with test images selected from different application categories. The experimental results show that the AMOLED displays perform better on colorfulness because of their wide color gamut, while the high pixel resolution and high peak luminance of the IPS panels help the perception of brightness, contrast, and sharpness. Further statistical analysis using ANOVA indicates that ambient lighting levels have a significant influence on the attributes of brightness and contrast.
Study on Mosaic and Uniform Color Method of Satellite Image Fusion in Large Area
NASA Astrophysics Data System (ADS)
Liu, S.; Li, H.; Wang, X.; Guo, L.; Wang, R.
2018-04-01
Due to improvements in satellite radiometric resolution, the color differences among multi-temporal satellite remote sensing images, and the large volume of satellite image data, completing the mosaic and color-balancing process for satellite images remains an important problem in image processing. First, using the bundle uniform color method, the least-squares mosaic method of GXL, and the dodging function, a uniform transition of color and brightness can be achieved across large-area, multi-temporal satellite images. Second, Color Mapping software is used to convert 16-bit mosaic images to 8-bit mosaic images based on a uniform color method with low-resolution reference images. Finally, qualitative and quantitative analytical methods are used to analyze and evaluate the satellite imagery after mosaicking and color balancing. The tests show that the correlation between mosaic images before and after coloring is higher than 95%, while image information entropy increases and texture features are enhanced, as confirmed by quantitative indexes such as the correlation coefficient and information entropy. Satellite image mosaicking and color processing over large areas have thus been successfully implemented.
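The two quantitative indexes mentioned above are standard measures; a minimal sketch of both follows, assuming 8-bit grayscale mosaics as input.

```python
# Correlation coefficient between two mosaics and Shannon information entropy of an image.
import numpy as np

def correlation(img_a, img_b):
    # Pearson correlation between corresponding pixels of two equally sized images
    return np.corrcoef(img_a.ravel(), img_b.ravel())[0, 1]

def entropy(img, bins=256):
    # Shannon entropy (bits) of the gray-level histogram of an 8-bit image
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```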
A dual-channel fusion system of visual and infrared images based on color transfer
NASA Astrophysics Data System (ADS)
Pei, Chuang; Jiang, Xiao-yu; Zhang, Peng-wei; Liang, Hao-cong
2013-09-01
The increasing availability and deployment of imaging sensors operating in multiple spectral bands has led to a large research effort in image fusion, resulting in a plethora of pixel-level image fusion algorithms. However, most of these algorithms produce gray or false-color fusion results that are not well adapted to human vision. Transferring color from a daytime reference image to obtain a natural-color fusion result is an effective way to solve this problem, but the computational cost of color transfer is high and cannot meet the requirements of real-time image processing. We developed a dual-channel infrared and visual image fusion system based on the TMS320DM642 digital signal processing chip. The system is divided into an image acquisition and registration unit, an image fusion processing unit, a system control unit, and an image fusion output unit. The image registration of the dual-channel images is realized by combining hardware and software methods. A false-color image fusion algorithm in RGB color space is used to obtain an R-G fused image, and the system then chooses a reference image to transfer color to the fusion result. A color lookup table based on the statistical properties of the images is proposed to reduce the computational complexity of color transfer; the mapping between the standard lookup table and the improved color lookup table is simple and needs to be computed only once for a fixed scene. Real-time fusion and natural colorization of infrared and visual images are realized by this system. The experimental results show that the color-transferred images have a natural color appearance to human eyes and highlight targets effectively with clear background details. Human observers using this system will be able to interpret the image better and faster, thereby improving situational awareness and reducing target detection time.
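The lookup-table approach described above accelerates a statistics-based color transfer; a minimal sketch of that underlying idea, namely per-channel mean and standard deviation matching, is shown below. CIELAB is used here in place of the lαβ space of the classic Reinhard formulation, and this is not the authors' DSP implementation.

```python
# Reinhard-style statistical color transfer: match channel means and standard deviations
# of the source image to those of a daytime reference image in a perceptual color space.
import numpy as np
from skimage import color

def color_transfer(source_rgb, reference_rgb):
    # both inputs: float RGB images in [0, 1]
    src = color.rgb2lab(source_rgb)
    ref = color.rgb2lab(reference_rgb)
    out = np.empty_like(src)
    for c in range(3):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std() + 1e-6
        r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (src[..., c] - s_mu) * (r_sd / s_sd) + r_mu
    return np.clip(color.lab2rgb(out), 0, 1)
```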
NASA Astrophysics Data System (ADS)
Neutze, Donna Lee
Educators, students, and parents are among those who have stereotypical, preconceived ideas about science and scientists. This study reports on a content analysis of graphic images in 303 of the "Outstanding Science Trade Books for Students K-12" from the years 1973 through 2005. Using quantitative and qualitative content analysis, all of the images in these books were analyzed according to the presence of humans, the characteristics of those humans (gender, race, age), the style of the graphics, the setting of the images, and the actions performed in the images. The results reveal that Caucasian males are still presented most frequently as scientists. Males appear in more total illustrations than do females (66% to 44%); the main characters are more often male than female (48 to 24); and biographies are more often written about males than females (75% to 25%). Images of Caucasians appear in more books than do people of color (54.5% to 45.5%); Caucasians appear in more total images than do people of color (84.3% to 15.7%); more main characters are Caucasian than people of color (87.5% to 12.5%); and more Caucasians are the subject of biographies than are people of color (72 to 7). Children appear in fewer than half of the total images, although they make up over 50% of the main characters in the sample. The images found in the sampled texts are wide-ranging as far as the settings in which science takes place; they definitely dispel the stereotype of science occurring only in a laboratory. Moreover, as a body of images, there are illustrations or photographs that capture people engaged in active scientific processes such as making observations, measuring, gathering data and samples, experimenting, and recording information.
Abrupt skin lesion border cutoff measurement for malignancy detection in dermoscopy images.
Kaya, Sertan; Bayraktar, Mustafa; Kockara, Sinan; Mete, Mutlu; Halic, Tansel; Field, Halle E; Wong, Henry K
2016-10-06
Automated skin lesion border examination and analysis techniques have become an important field of research for distinguishing malignant pigmented lesions from benign lesions. An abrupt pigment pattern cutoff at the periphery of a skin lesion is one of the most important dermoscopic features for the detection of neoplastic behavior. In the current clinical setting, the lesion is divided into a virtual pie with eight sections; each section is examined by a dermatologist for abrupt cutoff and scored accordingly, which can be tedious and subjective. This study introduces a novel approach to objectively quantify the abruptness of pigment patterns along the lesion periphery. In the proposed approach, the skin lesion border is first detected by a density-based lesion border detection method. Second, the detected border is gradually scaled through vector operations. Then, along the gradually scaled borders, pigment pattern homogeneities are calculated at different scales, and statistical texture features are extracted through this process. Moreover, different color spaces are examined for the efficacy of texture analysis. The proposed method has been tested and validated on 100 (31 melanoma, 69 benign) dermoscopy images. The results indicate that the proposed method is effective for malignancy detection; specifically, we obtained a specificity of 0.96 and a sensitivity of 0.86 for malignancy detection in a certain color space. The F-measure (the harmonic mean of recall and precision) of the framework is 0.87. The use of texture homogeneity along the periphery of the lesion border is an effective method for detecting malignancy of skin lesions in dermoscopy images. Among the color spaces tested, the blue channel of the RGB color space is the most informative channel for detecting malignancy, followed by the Cr channel of the YCbCr color space and, closely, by the green channel of RGB.
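One common way to quantify pigment-pattern homogeneity, used here purely as an assumed illustration rather than the authors' exact texture features, is the homogeneity statistic of a gray-level co-occurrence matrix computed over a strip sampled along a scaled border.

```python
# GLCM homogeneity of an 8-bit grayscale strip sampled along the lesion border
# (skimage >= 0.19 naming: graycomatrix / graycoprops).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def border_homogeneity(patch):
    # patch: uint8 2D array (e.g., one color channel along a scaled border)
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return graycoprops(glcm, "homogeneity").mean()
```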
Evaluation of color mapping algorithms in different color spaces
NASA Astrophysics Data System (ADS)
Bronner, Timothée.-Florian; Boitard, Ronan; Pourazad, Mahsa T.; Nasiopoulos, Panos; Ebrahimi, Touradj
2016-09-01
The color gamut supported by current commercial displays is only a subset of the full spectrum of colors visible by the human eye. In High-Definition (HD) television technology, the scope of the supported colors covers 35.9% of the full visible gamut. For comparison, Ultra High-Definition (UHD) television, which is currently being deployed on the market, extends this range to 75.8%. However, when reproducing content with a wider color gamut than that of a television, typically UHD content on HD television, some original color information may lie outside the reproduction capabilities of the television. Efficient gamut mapping techniques are required in order to fit the colors of any source content into the gamut of a given display. The goal of gamut mapping is to minimize the distortion, in terms of perceptual quality, when converting video from one color gamut to another. It is assumed that the efficiency of gamut mapping depends on the color space in which it is computed. In this article, we evaluate 14 gamut mapping techniques, 12 combinations of two projection methods across six color spaces as well as R'G'B' Clipping and wrong gamut interpretation. Objective results, using the CIEDE2000 metric, show that the R'G'B' Clipping is slightly outperformed by only one combination of color space and projection method. However, analysis of images shows that R'G'B' Clipping can result in loss of contrast in highly saturated images, greatly impairing the quality of the mapped image.
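A minimal sketch of the simplest mapping evaluated above, per-channel R'G'B' clipping scored with the CIEDE2000 metric, is shown below. It assumes the original colors are already available in CIELAB and that the display-space RGB values may fall outside [0, 1] when out of gamut.

```python
# Hard per-channel clipping as a gamut mapping, scored against the original colors with CIEDE2000.
import numpy as np
from skimage import color

def clip_and_score(lab_original, rgb_mapped):
    # lab_original: (H, W, 3) CIELAB reference; rgb_mapped: (H, W, 3) display RGB, possibly out of range
    clipped = np.clip(rgb_mapped, 0.0, 1.0)               # R'G'B' clipping
    lab_clipped = color.rgb2lab(clipped)
    mean_de2000 = color.deltaE_ciede2000(lab_original, lab_clipped).mean()
    return clipped, mean_de2000
```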
2000-07-01
Internet Color Imaging
Lee, Hsien-Che (Imaging Science and Technology Laboratory, Eastman Kodak Company, Rochester, New York)
The sharing and exchange of color images over the Internet pose very challenging problems to color science and technology. Emerging color standards …
Zabala-Travers, Silvina; Choi, Mina; Cheng, Wei-Chung
2015-01-01
Purpose: Even though the use of color in the interpretation of medical images has increased significantly in recent years, the ad hoc manner in which color is handled and the lack of standard approaches have been associated with suboptimal and inconsistent diagnostic decisions with a negative impact on patient treatment and prognosis. The purpose of this study is to determine if the choice of color scale and display device hardware affects the visual assessment of patterns that have the characteristics of functional medical images. Methods: Perfusion magnetic resonance imaging (MRI) was the basis for designing and performing experiments. Synthetic images resembling brain dynamic-contrast enhanced MRI consisting of scaled mixtures of white, lumpy, and clustered backgrounds were used to assess the performance of a rainbow (“jet”), a heated black-body (“hot”), and a gray (“gray”) color scale with display devices of different quality on the detection of small changes in color intensity. The authors used a two-alternative, forced-choice design where readers were presented with 600 pairs of images. Each pair consisted of two images of the same pattern flipped along the vertical axis with a small difference in intensity. Readers were asked to select the image with the highest intensity. Three differences in intensity were tested on four display devices: a medical-grade three-million-pixel display, a consumer-grade monitor, a tablet device, and a phone. Results: The estimates of percent correct show that jet outperformed hot and gray in the high and low range of the color scales for all devices with a maximum difference in performance of 18% (confidence intervals: 6%, 30%). Performance with hot was different for high and low intensity, comparable to jet for the high range, and worse than gray for lower intensity values. Similar performance was seen between devices using jet and hot, while gray performance was better for handheld devices. Time of performance was shorter with jet. Conclusions: Our findings demonstrate that the choice of color scale and display hardware affects the visual comparative analysis of pseudocolor images. Follow-up studies in clinical settings are being considered to confirm the results with patient images. PMID:26127048
Pixel-based image fusion with false color mapping
NASA Astrophysics Data System (ADS)
Zhao, Wei; Mao, Shiyi
2003-06-01
In this paper, we propose a pixel-based image fusion algorithm that combines a gray-level image fusion method with false color mapping. The algorithm integrates two gray-level images representing different sensor modalities or different frequencies and produces a fused false-color image. The resulting image has higher information content than either of the original images, and the objects in the fused color image are easy to recognize. The algorithm has three steps: first, obtaining the fused gray-level image of the two original images; second, computing generalized high-boost filtered images between the fused gray-level image and each of the two source images; and third, generating the fused false-color image. We use a hybrid averaging-and-selection fusion method to obtain the fused gray-level image, which provides better detail than the two original images while reducing noise. However, the fused gray-level image cannot contain all of the detail information in the two source images, and details in a gray-level image cannot be discerned as easily as in a color image, so a color fused image is necessary. To create color variation and enhance details in the final fused image, we produce three generalized high-boost filtered images, which are displayed through the red, green, and blue channels respectively to produce the final fused color image. The method is used to fuse two SAR images acquired over the San Francisco area (California, USA). The results show that the fused false-color image enhances the visibility of certain details. The resolution of the final false-color image is the same as the resolution of the input images.
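A hedged sketch of the three-step idea follows. The selection threshold, the boost factor, the Gaussian blur, and the particular channel assignment are my assumptions for illustration; the paper's exact hybrid rule and high-boost parameters are not reproduced here.

```python
# Illustrative hybrid average/select fusion, generalized high-boost images, and RGB assignment.
import numpy as np
from scipy.ndimage import gaussian_filter

def high_boost(base, detail_source, a=1.5, sigma=2.0):
    # generalized high-boost: amplified base minus the low-pass of a source image
    return a * base - gaussian_filter(detail_source, sigma)

def false_color_fusion(img1, img2):
    # img1, img2: co-registered float images in [0, 1]
    fused = np.where(np.abs(img1 - img2) > 0.2,
                     np.maximum(img1, img2),      # select where the sources disagree strongly
                     0.5 * (img1 + img2))         # average elsewhere (hybrid rule)
    r = high_boost(fused, img1)                   # one plausible channel assignment among many
    g = high_boost(fused, fused)
    b = high_boost(fused, img2)
    norm = lambda x: np.clip((x - x.min()) / (np.ptp(x) + 1e-12), 0, 1)
    return np.dstack([norm(r), norm(g), norm(b)])
```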
NASA Astrophysics Data System (ADS)
Gong, Rui; Wang, Qing; Shao, Xiaopeng; Zhou, Conghao
2016-12-01
This study aims to expand the application of color appearance models to representing the perceptual attributes of digital images, providing more accurate methods for predicting image brightness and image colorfulness. Two typical models, the CIELAB model and CIECAM02, were used to develop algorithms that predict brightness and colorfulness for various images, with three methods designed to handle pixels of different color content. Moreover, extensive visual data were collected from psychophysical experiments on two mobile displays under three lighting conditions to analyze the characteristics of visual perception of these two attributes and to test the prediction accuracy of each algorithm. Detailed analyses revealed that image brightness and image colorfulness were predicted well by the CIECAM02 parameters of lightness and chroma; thus, suitable methods for dealing with different color pixels were determined for image brightness and image colorfulness, respectively. This study provides an example of extending color appearance models to describe image perception.
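The abstract identifies CIECAM02 lightness and chroma as the better predictors; as a simpler, hedged stand-in, the sketch below computes the CIELAB analogues (mean lightness L* as image brightness, mean chroma C*ab as image colorfulness) for a single image.

```python
# CIELAB-based stand-ins for image brightness and colorfulness.
import numpy as np
from skimage import color

def cielab_brightness_colorfulness(rgb):
    # rgb: float image in [0, 1], shape (H, W, 3)
    lab = color.rgb2lab(rgb)
    brightness = lab[..., 0].mean()                       # mean lightness L*
    colorfulness = np.hypot(lab[..., 1], lab[..., 2]).mean()  # mean chroma C*ab
    return brightness, colorfulness
```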
Color reproduction and processing algorithm based on real-time mapping for endoscopic images.
Khan, Tareq H; Mohammed, Shahed K; Imtiaz, Mohammad S; Wahid, Khan A
2016-01-01
In this paper, we present a real-time preprocessing algorithm for the enhancement of endoscopic images. A novel dictionary-based color mapping algorithm is used to reproduce the color information from a theme image, which is selected from a nearby anatomical location; a database of color endoscopy images for different locations is prepared for this purpose. The color map is dynamic, as its contents change with the theme image. The method is used on low-contrast grayscale white light images and raw narrow band images to highlight the vascular and mucosa structures and to colorize the images; it can also be applied to enhance the tone of color images. The statistical visual representation and universal image quality measures show that the proposed method can highlight the mucosa structure better than other methods. The color similarity has been verified using the Delta E color difference, structural similarity index, mean structural similarity index, and structure and hue similarity. The color enhancement was measured using a color enhancement factor that shows considerable improvement. The proposed algorithm has low, linear time complexity, which results in higher execution speed than other related works.
Color segmentation in the HSI color space using the K-means algorithm
NASA Astrophysics Data System (ADS)
Weeks, Arthur R.; Hague, G. Eric
1997-04-01
Segmentation of images is an important aspect of image recognition. While grayscale image segmentation has become quite a mature field, much less work has been done on color image segmentation. Until recently, this was predominantly due to the lack of the computing power and color display hardware required to manipulate true-color (24-bit) images. Today, it is not uncommon to find a standard desktop computer system with a true-color 24-bit display, at least 8 megabytes of memory, and 2 gigabytes of hard disk storage. Segmentation of color images is not as simple as segmenting each of the three RGB color components separately. The difficulty of using the RGB color space is that it does not closely model the psychological understanding of color. A better color model, which closely follows human visual perception, is the hue, saturation, intensity (HSI) model. This color model separates the color components in terms of chromatic and achromatic information. Strickland et al. were able to show the importance of color in the extraction of edge features from an image; their method enhances the edges that are detectable in the luminance image with information from the saturation image. Segmentation of both the saturation and intensity components is easily accomplished with any grayscale segmentation algorithm, since these spaces are linear. The modulo-2π nature of the hue component, however, makes its segmentation difficult; for example, hues of 0 and 2π yield the same color tint. Instead of applying separate segmentation to each of the hue, saturation, and intensity components, a better method is to segment the chromatic component separately from the intensity component, because of the important role that chromatic information plays in the segmentation of color images. This paper presents a method of using the grayscale K-means algorithm to segment 24-bit color images. Additionally, this paper shows the important role the hue component plays in the segmentation of color images.
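A hedged sketch of chromatic K-means clustering that respects the circular hue is given below. It maps hue to (cos H, sin H) and weights the pair by saturation so that nearly achromatic pixels do not cast an arbitrary hue vote; this is one common way to handle the modulo-2π problem, not necessarily the paper's exact formulation.

```python
# K-means clustering of the chromatic (hue, saturation) information with circular hue handling.
import numpy as np
from skimage import color
from sklearn.cluster import KMeans

def chromatic_kmeans(rgb, k=4):
    # rgb: float image in [0, 1], shape (H, W, 3)
    hsv = color.rgb2hsv(rgb)                      # H in [0, 1), S and V in [0, 1]
    h, s = hsv[..., 0] * 2 * np.pi, hsv[..., 1]
    feats = np.stack([s * np.cos(h), s * np.sin(h)], axis=-1).reshape(-1, 2)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats)
    return labels.reshape(rgb.shape[:2])          # per-pixel cluster labels
```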
Linked color imaging application for improving the endoscopic diagnosis accuracy: a pilot study.
Sun, Xiaotian; Dong, Tenghui; Bi, Yiliang; Min, Min; Shen, Wei; Xu, Yang; Liu, Yan
2016-09-19
Endoscopy has been widely used in diagnosing gastrointestinal mucosal lesions. However, there is still a lack of objective endoscopic criteria. Linked color imaging (LCI) is a newly developed endoscopic technique that enhances color contrast. Here, we investigated the clinical application of LCI and further analyzed pixel brightness in the RGB color model. All lesions were observed by white light endoscopy (WLE), LCI, and blue laser imaging (BLI). Matlab software was used to calculate pixel brightness for the red (R), green (G), and blue (B) channels. For the endoscopic images of lesions, LCI had significantly higher R compared with BLI but higher G compared with WLE (all P < 0.05). R/(G + B) was significantly different among the three techniques and qualified as a composite LCI marker. Our correlation analysis of endoscopic diagnosis with pathology revealed that LCI was quite consistent with the pathological diagnosis (P = 0.000) and that the color could predict certain kinds of lesions. The ROC curve demonstrated that, at a cutoff of R/(G + B) = 0.646, the area under the curve was 0.646, and the sensitivity and specificity were 0.514 and 0.773. Taken together, LCI could improve the efficiency and accuracy of diagnosing gastrointestinal mucosal lesions and benefit targeted biopsy. R/(G + B) based on pixel brightness may be introduced as an objective criterion for evaluating endoscopic images.
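A minimal sketch of the composite marker follows, computed from the mean channel brightness of an image region. The direction of the decision rule (higher values flagged as suspicious) and its use on a cropped region are illustrative assumptions.

```python
# R/(G+B) composite marker from mean per-channel brightness of an endoscopic image region.
import numpy as np

def lci_marker(rgb_region):
    # rgb_region: uint8 or float array of shape (H, W, 3)
    r, g, b = (rgb_region[..., c].mean() for c in range(3))
    return r / (g + b + 1e-9)

# Whether higher marker values indicate lesions is an assumption for illustration only.
is_suspicious = lambda region, cutoff=0.646: lci_marker(region) > cutoff
```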
Gray-world-assumption-based illuminant color estimation using color gamuts with high and low chroma
NASA Astrophysics Data System (ADS)
Kawamura, Harumi; Yonemura, Shunichi; Ohya, Jun; Kojima, Akira
2013-02-01
A new approach is proposed for estimating illuminant colors from color images under an unknown scene illuminant. The approach combines a gray-world-assumption-based illuminant color estimation method with a method using color gamuts. The former method, which we had previously proposed, improved on the original method, which hypothesizes that the average of all the object colors in a scene is achromatic. Since the original method estimates scene illuminant colors by averaging all the image pixel values, its estimates are incorrect when certain image colors are dominant. Our previous method improves on it by choosing several colors on the basis of an opponent-color property, namely that the average of opponent colors is achromatic, instead of using all colors. However, it cannot estimate illuminant colors when there are only a few image colors or when the image colors are unevenly distributed in local areas of the color space. The approach proposed in this paper combines our previous method with one using high-chroma and low-chroma gamuts, which makes it possible to find colors that satisfy the gray world assumption: high-chroma gamuts are used to add appropriate colors to the original image, and low-chroma gamuts are used to narrow down the illuminant color possibilities. Experimental results obtained using actual images show that even if the image colors are localized in a certain area of the color space, the illuminant colors are accurately estimated, with a smaller average estimation error than the conventional method.
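For context, the baseline gray-world estimate that the proposed method builds on can be written in a few lines; the gamut-based refinements described above are not reproduced in this sketch.

```python
# Baseline gray-world illuminant estimation and von Kries-style correction.
import numpy as np

def gray_world_illuminant(rgb):
    # rgb: float image in [0, 1]; the scene average is assumed achromatic
    means = rgb.reshape(-1, 3).mean(axis=0)
    return means / means.mean()                    # normalized illuminant color

def correct(rgb, illuminant):
    return np.clip(rgb / illuminant, 0, 1)         # per-channel scaling toward a neutral scene
```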
Influence of imaging resolution on color fidelity in digital archiving.
Zhang, Pengchang; Toque, Jay Arre; Ide-Ektessabi, Ari
2015-11-01
Color fidelity is of paramount importance in digital archiving. In this paper, the relationship between color fidelity and imaging resolution was explored by calculating the color difference of an IT8.7/2 color chart with a CIELAB color difference formula for scanning and simulation images. Microscopic spatial sampling was used in selecting the image pixels for the calculations to highlight the loss of color information. A ratio, called the relative imaging definition (RID), was defined to express the correlation between image resolution and color fidelity. The results show that in order for color differences to remain unrecognizable, the imaging resolution should be at least 10 times higher than the physical dimension of the smallest feature in the object being studied.
NASA Astrophysics Data System (ADS)
Fan, Yang-Tung; Peng, Chiou-Shian; Chu, Cheng-Yu
2000-12-01
New markets are emerging for digital electronic image devices, especially in visual communications, PC cameras, mobile/cell phones, security systems, toys, vehicle imaging systems, and computer peripherals for document capture. A one-chip image system, in which the image sensor has a fully digital interface, can bring image capture devices into our daily lives. Adding a color filter to such an image sensor, in a pattern of pixel mosaics or wide stripes, makes the image more realistic and colorful. We can say that the color filter makes life more colorful. What is a color filter? A color filter transmits only the light whose wavelength and transmittance match the filter's own color, blocking the rest of the imaged light. The color filter process consists of coating and patterning green, red, and blue (or cyan, magenta, and yellow) mosaic resists onto the matching pixels of the image sensing array. From the signal captured at each pixel, the image of the scene can be reconstructed. The wide use of digital electronic cameras and multimedia applications today makes the future of color filters bright. Although it poses challenges, developing the color filter process is well worthwhile. We provide shorter cycle times, excellent color quality, and high, stable yield. The key issues that an advanced color process must solve and implement are planarization and micro-lens technology. Many key points of color filter process technology that must be considered are also described in this paper.
Image subregion querying using color correlograms
Huang, Jing; Kumar, Shanmugasundaram Ravi; Mitra, Mandar; Zhu, Wei-Jing
2002-01-01
A color correlogram (10) is a representation expressing the spatial correlation of color and distance between pixels in a stored image. The color correlogram (10) may be used to distinguish objects in an image as well as between images in a plurality of images. By intersecting a color correlogram of an image object with correlograms of images to be searched, those images which contain the objects are identified by the intersection correlogram.
Multiple Auto-Adapting Color Balancing for Large Number of Images
NASA Astrophysics Data System (ADS)
Zhou, X.
2015-04-01
This paper presents a powerful technology for color balancing between images. It works not only for small numbers of images but also for very large numbers of images. Multiple adaptive methods are used. To obtain a color-seamless mosaic dataset, local color is adjusted adaptively toward the target color. Local statistics of the source images are computed based on the so-called adaptive dodging window. The adaptive target colors are statistically computed according to multiple target models. A gamma function is derived from the adaptive target and the adaptive local statistics of the source, and applied to the source images to obtain the color-balanced output images. Five target color surface models are proposed: color point (single color), color grid, and first-, second-, and third-order 2D polynomials. Least-squares fitting is used to obtain the polynomial target color surfaces. Target color surfaces are computed automatically based on all source images or on an external target image. Some special objects such as water and snow are filtered by a percentage cut or a given mask. Excellent results are achieved. The performance is extremely fast, supporting on-the-fly color balancing for large numbers of images (possibly hundreds of thousands of images). The detailed algorithm and formulae are described, and rich examples including large mosaic datasets (e.g., one containing 36,006 images) are given. The results show that this technology can be successfully used on various imagery to obtain color-seamless mosaics. The algorithm has been used successfully in Esri ArcGIS.
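A hedged sketch of one way such a gamma can be derived is shown below: the exponent is chosen so that the normalized local source mean is pushed onto the adaptive target value. The exact formula used by the paper is not given in the abstract, so this is an assumption for illustration.

```python
# Per-window gamma derived from a local source mean and an adaptive target value, both in (0, 1].
import numpy as np

def dodge_gamma(source_mean, target_mean):
    # choose gamma so that source_mean ** gamma == target_mean
    return np.log(target_mean) / np.log(source_mean)

def apply_gamma(window, gamma):
    # window: float array of normalized pixel values in (0, 1]
    return np.clip(window, 1e-6, 1.0) ** gamma

# Example: a dark local window (mean 0.30) pushed toward a brighter target (0.45).
g = dodge_gamma(0.30, 0.45)       # ~0.66, since 0.30 ** 0.66 ≈ 0.45
```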
Karulin, Alexey Y; Megyesi, Zoltán; Caspell, Richard; Hanson, Jodi; Lehmann, Paul V
2018-01-01
Over the past decade, ELISPOT has become a highly implemented mainstream assay in immunological research, immune monitoring, and vaccine development. Unique single-cell resolution along with high-throughput potential sets ELISPOT apart from flow cytometry, ELISA, and microarray- and bead-based multiplex assays. The necessity to unambiguously identify individual T and B cells that do, or do not, co-express certain analytes, including polyfunctional cytokine-producing T cells, has stimulated the development of multi-color ELISPOT assays. The success of these assays has also been driven by limited sample/cell availability and resource constraints on reagents and labor. Few test kits and instruments are commercially available at present for multi-color FLUOROSPOT. Beyond commercial descriptions of competing systems, little is known about their accuracy in experimental settings at detecting individual cells that secrete multiple analytes vs. random overlays of spots. Here, we present a theoretical and experimental validation study for three- and four-color T- and B-cell FLUOROSPOT data analysis. The ImmunoSpot® Fluoro-X™ analysis system we used includes an automatic image acquisition unit that generates individual color images free of spectral overlap and multi-color spot counting software based on the maximal allowed distance between centers of spots of different colors, or Center of Mass Distance (COMD). Using four-color B-cell FLUOROSPOT for IgM, IgA, IgG1, and IgG3, and three/four-color T-cell FLUOROSPOT for IL-2, IFN-γ, TNF-α, and GzB, in serial dilution experiments, we demonstrate the validity and accuracy of the Fluoro-X™ multi-color spot counting algorithms. Statistical predictions based on the Poisson spatial distribution, coupled with scrambled image counting, permit objective correction of true multi-color spot counts to exclude randomly overlaid spots.
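The statistical idea can be sketched to first order as follows; this is a heavily hedged illustration, not the Fluoro-X™ algorithm. With spots of two colors scattered independently and uniformly over a well, the expected number of chance pairings within a given center-of-mass distance follows from the Poisson spatial model and can be subtracted from the observed double-positive count.

```python
# First-order estimate of randomly overlaid two-color spots (assumed uniform spot placement).
import math

def expected_random_overlays(n1, n2, comd, well_area):
    # n1, n2: spot counts of the two colors; comd: allowed center distance; well_area: same units^2
    return n1 * n2 * math.pi * comd**2 / well_area

def corrected_double_positives(observed, n1, n2, comd, well_area):
    return max(0.0, observed - expected_random_overlays(n1, n2, comd, well_area))
```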
POLYSITE - An interactive package for the selection and refinement of Landsat image training sites
NASA Technical Reports Server (NTRS)
Mack, Marilyn J. P.
1986-01-01
A versatile multifunction package, POLYSITE, developed for Goddard's Land Analysis System, is described; it simplifies the process of interactively selecting and correcting the sites used to study Landsat TM and MSS images. Image switching between the zoomed and nonzoomed image, color and shape cursor changes and location display, and bit-plane erase or color change are global functions that are active at all times. Local functions include manipulation of intensive study areas, new site definition, mensuration, and new image copying. The program is illustrated with the example of a full TM master scene of metropolitan Washington, DC.
5-ALA induced fluorescent image analysis of actinic keratosis
NASA Astrophysics Data System (ADS)
Cho, Yong-Jin; Bae, Youngwoo; Choi, Eung-Ho; Jung, Byungjo
2010-02-01
In this study, we quantitatively analyzed 5-ALA induced fluorescent images of actinic keratosis using digital fluorescent color and hyperspectral imaging modalities. UV-A was utilized to induce the fluorescent images, and actinic keratosis (AK) lesions were demarcated from the surrounding normal region with different methods. Eight subjects with AK lesions participated in this study. In the hyperspectral imaging modality, a spectral analysis method was applied to the hyperspectral cube image and AK lesions were demarcated from the normal region. Before image acquisition, we designated the biopsy positions for histopathology of the AK lesion and the surrounding normal region. Erythema index (E.I.) values for both regions were calculated from the spectral cube data. Image analysis of the subjects resulted in two groups: the first with higher fluorescence signal and E.I. on the AK lesion than on the normal region, and the second with lower fluorescence signal and no large difference in E.I. between the two regions. In the fluorescent color image analysis of facial AK, E.I. images were calculated for both normal and AK lesions and compared with the results of the hyperspectral imaging modality. The results suggest that the different fluorescence intensities and E.I. values among the subjects with AK may be interpreted as different phases of morphological and metabolic change in AK lesions.
Beef quality parameters estimation using ultrasound and color images
2015-01-01
Background Beef quality measurement is a complex task with high economic impact. There is high interest in obtaining automatic quality parameter estimation in live cattle or post mortem. In this paper we set out to obtain beef quality estimates from the analysis of ultrasound (in vivo) and color images (post mortem), with the measurement of various parameters related to tenderness and amount of meat: rib eye area, percentage of intramuscular fat, and backfat thickness (subcutaneous fat). Proposal An algorithm based on curve evolution is implemented to calculate the rib eye area. The backfat thickness is estimated from the profile of distances between two previously detected curves that delimit the steak and the rib eye. A model based on Support Vector Regression (SVR) is trained to estimate the intramuscular fat percentage, using a set of features extracted from a region of interest previously detected in both ultrasound and color images. In all cases, a complete evaluation was performed with different databases including color and ultrasound images acquired by a beef industry expert, intramuscular fat estimates obtained by an expert using commercial software, and chemical analysis. Conclusions The proposed algorithms show good results for calculating the rib eye area and the backfat thickness measure and profile. They are also promising in predicting the percentage of intramuscular fat. PMID:25734452
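A generic, hedged sketch of the regression step is shown below with synthetic data standing in for the region-of-interest features; the feature set, kernel, and hyperparameters are assumptions, not the paper's.

```python
# Support Vector Regression to predict intramuscular fat percentage from ROI features (synthetic).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 20))                                   # 120 samples x 20 ROI features
y = 3.0 + 0.8 * X[:, 0] + rng.normal(scale=0.3, size=120)        # synthetic fat percentage

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())   # cross-validated R^2
```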
NASA Astrophysics Data System (ADS)
Choi, Yong-Seok; Cho, Jae-Hwan; Namgung, Jang-Sun; Kim, Hyo-Jin; Yoon, Dae-Young; Lee, Han-Joo
2013-05-01
This study performed a comparative analysis of cerebral blood volume (CBV), cerebral blood flow (CBF), mean transit time (MTT), and time-to-peak (TTP) obtained by changing the anatomical position of the region of interest (ROI) during CT brain perfusion. We acquired axial source images of perfusion CT from 20 patients undergoing CT perfusion exams due to brain trauma. The CBV, CBF, MTT, and TTP values were then calculated through data processing of the perfusion CT images, and color scales for the CBV, CBF, MTT, and TTP maps were obtained from the image data. The anterior cerebral artery (ACA) was taken as the standard ROI for the calculation of the perfusion values. Differences in the average hemodynamic values were compared in a quantitative analysis by placing ROIs and dividing the axial images anatomically into proximal, middle, and distal segments. In the qualitative analysis, a blind test was used to assess perceptual changes in the color scales of the CBV, CBF, and MTT maps across the proximal, middle, and distal segments. No differences were found in the CBV, CBF, MTT, and TTP values of the proximal, middle, and distal segments, and no changes were detected in the color scales of the CBV, CBF, MTT, and TTP maps across these segments. We anticipate that the results of this study will be useful in assessing brain trauma patients with perfusion imaging.
Color standardization in whole slide imaging using a color calibration slide
Bautista, Pinky A.; Hashimoto, Noriaki; Yagi, Yukako
2014-01-01
Background: Color consistency in histology images is still an issue in digital pathology; different imaging systems reproduce the colors of a histological slide differently. Materials and Methods: Color correction was implemented using the color information of the nine color patches of a color calibration slide. The inherent spectral colors of these patches, along with their scanned colors, were used to derive a color correction matrix whose coefficients were used to convert the pixels' colors to their target colors. Results: There was a significant reduction of 3.42 units in the CIELAB color difference between images of the same H&E histological slide produced by two different whole slide scanners (P < 0.001 at the 95% confidence level). Conclusion: Color variations in histological images brought about by whole slide scanning can be effectively normalized with the use of the color calibration slide. PMID:24672739
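The abstract's core operation, deriving a color correction matrix from the scanned and target colors of the nine calibration patches, can be sketched as a least-squares fit; the patch values and function names below are illustrative, not taken from the paper.

```python
import numpy as np

# Scanned RGB values of the nine calibration patches (rows) -- illustrative data.
scanned = np.array([
    [200, 30, 40], [40, 180, 60], [50, 60, 190],
    [210, 200, 40], [60, 190, 200], [200, 70, 190],
    [240, 240, 240], [128, 128, 128], [20, 20, 20],
], dtype=float)

# Target (reference) RGB values of the same patches -- illustrative data.
target = np.array([
    [190, 25, 35], [35, 170, 55], [45, 55, 180],
    [200, 195, 35], [55, 185, 195], [195, 65, 185],
    [235, 235, 235], [120, 120, 120], [15, 15, 15],
], dtype=float)

# Least-squares fit of a 3x3 correction matrix M so that scanned @ M ~ target.
M, *_ = np.linalg.lstsq(scanned, target, rcond=None)

def correct(image):
    """Apply the correction matrix to an H x W x 3 uint8 image."""
    flat = image.reshape(-1, 3).astype(float)
    out = flat @ M
    return np.clip(out, 0, 255).reshape(image.shape).astype(np.uint8)
```

In practice an offset term can be added by augmenting the scanned values with a constant column, but the purely linear form above already illustrates the idea.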
Thermal imaging of Al-CuO thermites
NASA Astrophysics Data System (ADS)
Densmore, John; Sullivan, Kyle; Kuntz, Joshua; Gash, Alex
2013-06-01
We have performed spatial in-situ temperature measurements of aluminum-copper oxide thermite reactions using high-speed color pyrometry. Electrophoretic deposition was used to create thermite microstructures. Tests were performed with micron- and nano-sized particles at different stoichiometries. The color pyrometry was performed using a high-speed color camera whose color filter array collects light within three spectral bands. Assuming a gray-body emission spectrum, a multi-wavelength ratio analysis allows a temperature to be calculated. An advantage of using a two-dimensional image sensor is that it allows heterogeneous flames to be measured with high spatial resolution. Light from the initial combustion of the Al-CuO can be differentiated from the light created by the late-time oxidation with the atmosphere. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
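The gray-body ratio analysis mentioned above can be illustrated with a two-band example: if the emissivity is the same in both bands, the ratio of Planck radiances depends only on temperature and can be inverted numerically. A minimal sketch with placeholder wavelengths and intensities:

```python
import numpy as np
from scipy.optimize import brentq

h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s
kB = 1.381e-23  # Boltzmann constant, J/K

def planck(wavelength, T):
    """Blackbody spectral radiance (common scale factors cancel in the ratio)."""
    return (2 * h * c**2 / wavelength**5) / (np.exp(h * c / (wavelength * kB * T)) - 1.0)

def ratio_temperature(I1, I2, lam1, lam2, T_lo=500.0, T_hi=6000.0):
    """Solve for T such that planck(lam1, T) / planck(lam2, T) equals the measured ratio.
    Assumes gray-body emission, so the emissivity cancels in the ratio."""
    measured = I1 / I2
    f = lambda T: planck(lam1, T) / planck(lam2, T) - measured
    return brentq(f, T_lo, T_hi)

# Example: intensities in the red and green bands of a color sensor (placeholder values).
T = ratio_temperature(I1=1.00, I2=0.55, lam1=620e-9, lam2=540e-9)
print(f"Estimated temperature: {T:.0f} K")
```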
Automatic color preference correction for color reproduction
NASA Astrophysics Data System (ADS)
Tsukada, Masato; Funayama, Chisato; Tajima, Johji
2000-12-01
The reproduction of natural objects in color images has attracted a great deal of attention. Reproducing more pleasing colors of natural objects is one method of improving image quality. We developed an automatic color correction method that maintains preferred color reproduction for three significant categories: facial skin color, green grass, and blue sky. In this method, a representative color in an object area to be corrected is automatically extracted from an input image, and a set of color correction parameters is selected depending on the representative color. The improvement in image quality for reproductions of natural images was more than 93 percent in subjective experiments. These results show the usefulness of our automatic color correction method for the reproduction of preferred colors.
NASA Astrophysics Data System (ADS)
Akiyama, Akira; Mutoh, Eiichiro; Kumagai, Hideo
2014-09-01
We have developed stereo matching image processing based on synthesized color and corresponding synthesized-color areas for ranging objects and image recognition. Typical images from a pair of stereo imagers may disagree with each other because of size changes, displaced positions, appearance changes, and deformation of characteristic areas. We constructed the synthesized color and the corresponding color areas sharing the same synthesized color to make the stereo matching distinct, using three steps. The first step makes a binary edge image by differentiating the focused image from each imager and verifying that the differentiated image has a normal frequency distribution in order to find the threshold level for binarization; we used the Daubechies wavelet transform for the differentiation in this study. The second step derives the synthesized color by averaging color brightness between binary edge points, alternately in the horizontal and vertical directions; the averaging is repeated until the fluctuation of the averaged color becomes negligible with respect to the 256 brightness levels. The third step extracts areas of the same synthesized color by collecting pixels of the same synthesized color and grouping these pixels by 4-directional connectivity. The matching areas for stereo matching are determined from the synthesized color areas, and the matching point is the center of gravity of each area, from which the parallax between the pair of images is easily derived. An experiment on this stereo matching was carried out on a toy soccer ball and showed that stereo matching by the synthesized color technique is simple and effective.
The Visi-Chroma VC-100: a new imaging colorimeter for dermatocosmetic research.
Barel, A O; Clarys, P; Alewaeters, K; Duez, C; Hubinon, J L; Mommaerts, M
2001-02-01
It was the aim of this study to carry out a comparative evaluation in vitro on standardized color charts and in vivo on healthy subjects using the Visi-Chroma VC-100, a new imaging tristimulus colorimeter, and the Minolta Chromameter CR-200 as a reference instrument. The Visi-Chroma combines tristimulus color analysis with full color visualization of the skin area measured. The technical performances of both instruments were compared with the purpose of validating the use of this new imaging colorimeter in dermatocosmetic research. In vitro L*a*b* color parameters were taken with both instruments on standardized color charts (Macbeth and RAL charts) in order to evaluate accuracy, sensitivity range and repeatability. These measurements were completed by in vivo studies on different sites of human skin and studies of color changes induced by topical chemical agents on forearm skin. The accuracy, sensitivity range and repeatability of measurements of selected distances and surfaces in the measuring zone considered and specific color determinations of specific skin zones were also determined. The technical performance of this imaging colorimeter was rather good, with low coefficients of variation for repeatability of in vitro and in vivo color measurements. High positive correlations were established in vitro and in vivo over a wide range of color measurements. The imaging colorimeter was able to measure the L*a*b* color parameters of specific chosen parts of the skin area considered and to measure accurately selected distances and surfaces in the same skin site considered. These comparative measurements show that both instruments have very similar technical performances and that high levels of correlation were obtained in vitro and in vivo using the L*a*b* color parameters. In addition, the Visi-Chroma presents the following improvements: 1) direct visualization and recording of the skin area considered with concomitant color measurements; 2) determination of the specific color parameters of skin areas chosen in the total measuring area; and 3) accurate determination of selected distances and surfaces in the same skin areas chosen.
Gatos, Ilias; Tsantis, Stavros; Spiliopoulos, Stavros; Karnabatidis, Dimitris; Theotokas, Ioannis; Zoumpoulis, Pavlos; Loupas, Thanasis; Hazle, John D; Kagadis, George C
2017-09-01
The purpose of the present study was to employ a computer-aided diagnosis system that classifies chronic liver disease (CLD) using ultrasound shear wave elastography (SWE) imaging, with a stiffness value-clustering and machine-learning algorithm. A clinical data set of 126 patients (56 healthy controls, 70 with CLD) was analyzed. First, an RGB-to-stiffness inverse mapping technique was employed. A five-cluster segmentation was then performed associating corresponding different-color regions with certain stiffness value ranges acquired from the SWE manufacturer-provided color bar. Subsequently, 35 features (7 for each cluster), indicative of physical characteristics existing within the SWE image, were extracted. A stepwise regression analysis toward feature reduction was used to derive a reduced feature subset that was fed into the support vector machine classification algorithm to classify CLD from healthy cases. The highest accuracy in classification of healthy to CLD subject discrimination from the support vector machine model was 87.3% with sensitivity and specificity values of 93.5% and 81.2%, respectively. Receiver operating characteristic curve analysis gave an area under the curve value of 0.87 (confidence interval: 0.77-0.92). A machine-learning algorithm that quantifies color information in terms of stiffness values from SWE images and discriminates CLD from healthy cases is introduced. New objective parameters and criteria for CLD diagnosis employing SWE images provided by the present study can be considered an important step toward color-based interpretation, and could assist radiologists' diagnostic performance on a daily basis after being installed in a PC and employed retrospectively, immediately after the examination. Copyright © 2017 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
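A rough sketch of the classification stage described here (feature reduction followed by an SVM); the paper's stepwise regression is approximated by univariate selection, and the feature matrix is a random placeholder.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder data: 126 subjects x 35 cluster features (7 features per stiffness cluster).
rng = np.random.default_rng(0)
X = rng.normal(size=(126, 35))
y = np.array([0] * 56 + [1] * 70)  # 0 = healthy control, 1 = chronic liver disease

clf = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=10)),  # crude stand-in for stepwise regression
    ("svm", SVC(kernel="rbf", C=1.0)),
])

scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print("Cross-validated accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```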
Visual perception enhancement for detection of cancerous oral tissue by multi-spectral imaging
NASA Astrophysics Data System (ADS)
Wang, Hsiang-Chen; Tsai, Meng-Tsan; Chiang, Chun-Ping
2013-05-01
Color reproduction systems based on the multi-spectral imaging technique (MSI) for both directly estimating reflection spectra and direct visualization of oral tissues using various light sources are proposed. Images from three oral cancer patients were taken as the experimental samples, and spectral differences between pre-cancerous and normal oral mucosal tissues were calculated at three time points during 5-aminolevulinic acid photodynamic therapy (ALA-PDT) to analyze whether they were consistent with disease processes. To check the successful treatment of oral cancer with ALA-PDT, oral cavity images by swept source optical coherence tomography (SS-OCT) are demonstrated. This system can also reproduce images under different light sources. For pre-cancerous detection, the oral images after the second ALA-PDT are assigned as the target samples. By using RGB LEDs with various correlated color temperatures (CCTs) for color difference comparison, the light source with a CCT of about 4500 K was found to have the best ability to enhance the color difference between pre-cancerous and normal oral mucosal tissues in the oral cavity. Compared with the fluorescent lighting commonly used today, the color difference can be improved by 39.2% from 16.5270 to 23.0023. Hence, this light source and spectral analysis increase the efficiency of the medical diagnosis of oral cancer and aid patients in receiving early treatment.
Automatic gang graffiti recognition and interpretation
NASA Astrophysics Data System (ADS)
Parra, Albert; Boutin, Mireille; Delp, Edward J.
2017-09-01
One of the roles of emergency first responders (e.g., police and fire departments) is to prevent and protect against events that can jeopardize the safety and well-being of a community. In the case of criminal gang activity, tools are needed for finding, documenting, and taking the necessary actions to mitigate the problem or issue. We describe an integrated mobile-based system capable of using location-based services, combined with image analysis, to track and analyze gang activity through the acquisition, indexing, and recognition of gang graffiti images. This approach uses image analysis methods for color recognition, image segmentation, and image retrieval and classification. A database of gang graffiti images is described that includes not only the images but also metadata related to the images, such as date and time, geoposition, gang, gang member, colors, and symbols. The user can then query the data in a useful manner. We have implemented these features both as applications for Android and iOS hand-held devices and as a web-based interface.
Internet (WWW) based system of ultrasonic image processing tools for remote image analysis.
Zeng, Hong; Fei, Ding-Yu; Fu, Cai-Ting; Kraft, Kenneth A
2003-07-01
Ultrasonic Doppler color imaging can provide anatomic information and simultaneously render flow information within blood vessels for diagnostic purpose. Many researchers are currently developing ultrasound image processing algorithms in order to provide physicians with accurate clinical parameters from the images. Because researchers use a variety of computer languages and work on different computer platforms to implement their algorithms, it is difficult for other researchers and physicians to access those programs. A system has been developed using World Wide Web (WWW) technologies and HTTP communication protocols to publish our ultrasonic Angle Independent Doppler Color Image (AIDCI) processing algorithm and several general measurement tools on the Internet, where authorized researchers and physicians can easily access the program using web browsers to carry out remote analysis of their local ultrasonic images or images provided from the database. In order to overcome potential incompatibility between programs and users' computer platforms, ActiveX technology was used in this project. The technique developed may also be used for other research fields.
Malware Analysis Using Visualized Image Matrices
Im, Eul Gyu
2014-01-01
This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities for the image matrices. In particular, our proposed methods are applicable to packed malware samples by applying them to the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as functions and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples; the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically with accuracy of 0.9896 and 0.9732, respectively. PMID:25133202
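A toy sketch of the visualization idea, packing an opcode byte sequence into an RGB pixel matrix and comparing two matrices with a simple pixel-wise similarity; the byte source and the similarity measure are illustrative, not the paper's exact formulation.

```python
import numpy as np

def opcode_bytes_to_image(byte_seq, width=64):
    """Pack a byte sequence into a width x width x 3 RGB matrix (truncate or repeat to fill)."""
    data = np.frombuffer(bytes(byte_seq), dtype=np.uint8)
    n = width * width * 3
    data = np.resize(data, n)            # truncates or repeats the sequence to fill the matrix
    return data.reshape(width, width, 3)

def image_similarity(img_a, img_b):
    """Normalized similarity in [0, 1]: 1 means identical pixel matrices."""
    diff = np.abs(img_a.astype(int) - img_b.astype(int))
    return 1.0 - diff.mean() / 255.0

# Illustrative "opcode sequences" from two samples.
sample_a = bytes(range(256)) * 48
sample_b = bytes(reversed(range(256))) * 48
img_a = opcode_bytes_to_image(sample_a)
img_b = opcode_bytes_to_image(sample_b)
print("similarity:", round(image_similarity(img_a, img_b), 3))
```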
Computational efficiency improvements for image colorization
NASA Astrophysics Data System (ADS)
Yu, Chao; Sharma, Gaurav; Aly, Hussein
2013-03-01
We propose an efficient algorithm for colorization of greyscale images. As in prior work, colorization is posed as an optimization problem: a user specifies the color for a few scribbles drawn on the greyscale image and the color image is obtained by propagating color information from the scribbles to surrounding regions, while maximizing the local smoothness of colors. In this formulation, colorization is obtained by solving a large sparse linear system, which normally requires substantial computation and memory resources. Our algorithm improves the computational performance through three innovations over prior colorization implementations. First, the linear system is solved iteratively without explicitly constructing the sparse matrix, which significantly reduces the required memory. Second, we formulate each iteration in terms of integral images obtained by dynamic programming, reducing repetitive computation. Third, we use a coarse-to-fine framework, where a lower resolution subsampled image is first colorized and this low resolution color image is upsampled to initialize the colorization process for the fine level. The improvements we develop provide significant speedup and memory savings compared to the conventional approach of solving the linear system directly using off-the-shelf sparse solvers, and allow us to colorize images with typical sizes encountered in realistic applications on typical commodity computing platforms.
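The first innovation, iterating without explicitly forming the sparse matrix, can be sketched as a Jacobi-style update in which each unscribbled pixel repeatedly takes a luminance-weighted average of its neighbors' chrominance. This is a simplified stand-in for the paper's formulation, with made-up parameters and wrap-around border handling.

```python
import numpy as np

def colorize_jacobi(gray, scribble_uv, scribble_mask, iters=500, sigma=0.05):
    """gray: H x W luminance in [0, 1]; scribble_uv: H x W x 2 chrominance at scribbles;
    scribble_mask: H x W bool. Returns the propagated H x W x 2 chrominance."""
    H, W = gray.shape
    uv = np.where(scribble_mask[..., None], scribble_uv, 0.0).astype(float)
    shifts = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # 4-connected neighborhood
    for _ in range(iters):
        num = np.zeros_like(uv)
        den = np.zeros((H, W, 1))
        for dy, dx in shifts:
            # np.roll wraps around at the border; a real implementation would treat borders explicitly
            g_n = np.roll(gray, (dy, dx), axis=(0, 1))
            uv_n = np.roll(uv, (dy, dx), axis=(0, 1))
            w = np.exp(-((gray - g_n) ** 2) / (2 * sigma ** 2))[..., None]
            num += w * uv_n
            den += w
        new_uv = num / den
        # keep scribbled pixels fixed at their user-given chrominance
        uv = np.where(scribble_mask[..., None], scribble_uv, new_uv)
    return uv
```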
Development of a novel 2D color map for interactive segmentation of histological images.
Chaudry, Qaiser; Sharma, Yachna; Raza, Syed H; Wang, May D
2012-05-01
We present a color segmentation approach based on a two-dimensional color map derived from the input image. Pathologists stain tissue biopsies with various colored dyes to see the expression of biomarkers. In these images, because of color variation due to inconsistencies in experimental procedures and lighting conditions, the segmentation used to analyze biological features is usually ad hoc. Many algorithms such as K-means use a single metric to segment the image into different color classes and rarely provide users with powerful color control. Our 2D color map interactive segmentation technique, based on human color perception information and the color distribution of the input image, enables user control without noticeable delay. Our methodology works for different staining types and different types of cancer tissue images. The proposed method's results show good accuracy with low response and computational times, making it a feasible method for user-interactive applications involving segmentation of histological images.
Image Transform Based on the Distribution of Representative Colors for Color Deficient
NASA Astrophysics Data System (ADS)
Ohata, Fukashi; Kudo, Hiroaki; Matsumoto, Tetsuya; Takeuchi, Yoshinori; Ohnishi, Noboru
This paper proposes a method to convert digital images containing sets of colors that are difficult to distinguish into images with higher visibility. We set up four criteria: automatic processing by a computer, retaining continuity in color space, not reducing visibility for people with normal color vision, and not reducing the visibility of images that do not originally contain difficult-to-distinguish color sets. We conducted a psychological experiment and obtained the result that the visibility of the converted image improved for 60% of 40 images, and we confirmed that the main criterion, continuity in color space, was maintained.
Color filter array pattern identification using variance of color difference image
NASA Astrophysics Data System (ADS)
Shin, Hyun Jun; Jeon, Jong Ju; Eom, Il Kyu
2017-07-01
A color filter array is placed on the image sensor of a digital camera to acquire color images. Each pixel uses only one color, since the image sensor can measure only one color per pixel. Therefore, empty pixels are filled using an interpolation process called demosaicing. The original and the interpolated pixels have different statistical characteristics. If the image is modified by manipulation or forgery, the color filter array pattern is altered. This pattern change can be a clue for image forgery detection. However, most forgery detection algorithms have the disadvantage of assuming the color filter array pattern. We present an identification method of the color filter array pattern. Initially, the local mean is eliminated to remove the background effect. Subsequently, the color difference block is constructed to emphasize the difference between the original pixel and the interpolated pixel. The variance measure of the color difference image is proposed as a means of estimating the color filter array configuration. The experimental results show that the proposed method is effective in identifying the color filter array pattern. Compared with conventional methods, our method provides superior performance.
Graphical Methods for Quantifying Macromolecules through Bright Field Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Hang; DeFilippis, Rosa Anna; Tlsty, Thea D.
Bright field imaging of biological samples stained with antibodies and/or special stains provides a rapid protocol for visualizing various macromolecules. However, this method of sample staining and imaging is rarely employed for direct quantitative analysis due to variations in sample fixation, ambiguities introduced by color composition, and the limited dynamic range of imaging instruments. We demonstrate that, through the decomposition of color signals, staining can be scored on a cell-by-cell basis. We have applied our method to fibroblasts grown from histologically normal breast tissue biopsies obtained from two distinct populations. Initially, nuclear regions are segmented through conversion of color images into gray scale and detection of dark elliptic features. Subsequently, the strength of staining is quantified by a color decomposition model that is optimized by a graph cut algorithm. In rare cases where the nuclear signal is significantly altered as a result of sample preparation, nuclear segmentation can be validated and corrected. Finally, segmented stained patterns are associated with each nuclear region following region-based tessellation. Compared to classical non-negative matrix factorization, the proposed method (i) improves color decomposition, (ii) has better noise immunity, (iii) is more invariant to initial conditions, and (iv) has superior computing performance.
NASA Astrophysics Data System (ADS)
Li, Na; Gong, Xingyu; Li, Hongan; Jia, Pengtao
2018-01-01
For faded relics such as the Terracotta Army, 2D-3D registration between an optical camera and a point cloud model is an important part of color texture reconstruction and further applications. This paper proposes a nonuniform multiview color texture mapping for an image sequence and the three-dimensional (3D) point cloud model collected by a Handyscan3D scanner. We first introduce nonuniform multiview calibration, explaining its algorithm principle and analyzing its advantages. We then establish transformation equations based on SIFT feature points for the multiview image sequence and describe in detail the selection of nonuniform multiview SIFT feature points. Finally, the solution of the collinear equations based on multiview perspective projection is given in three steps with a flowchart. In the experiment, this method is applied to the color reconstruction of the kneeling figurine, a Tangsancai lady, and a general figurine. The results demonstrate that the proposed method provides effective support for the color reconstruction of faded cultural relics and is able to improve the accuracy of 2D-3D registration between the image sequence and the point cloud model.
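The 2D-3D registration step this method depends on, matching image features to points of the cloud and solving the projection equations, is commonly implemented with SIFT correspondences and a PnP solver. A hedged OpenCV sketch, in which the correspondence arrays `pts3d`/`pts2d` and the intrinsic matrix `K` are assumed to be available rather than taken from the paper:

```python
import cv2
import numpy as np

def match_sift(img_a, img_b, ratio=0.75):
    """SIFT keypoint matching between two views with Lowe's ratio test."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des_a, des_b, k=2)
    good = [m for m, n in pairs if m.distance < ratio * n.distance]
    src = np.float32([kp_a[m.queryIdx].pt for m in good])
    dst = np.float32([kp_b[m.trainIdx].pt for m in good])
    return src, dst

def estimate_view_pose(pts3d, pts2d, K):
    """Estimate a camera pose from 2D-3D correspondences (pts3d: Nx3, pts2d: Nx2)."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float32), pts2d.astype(np.float32), K, None)
    if not ok:
        raise RuntimeError("pose estimation failed")
    return rvec, tvec, inliers
```

With the pose known, colors can be projected onto the cloud, for example with cv2.projectPoints, which is one common way to realize this kind of texture mapping.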
NASA Astrophysics Data System (ADS)
Pezoa, Raquel; Salinas, Luis; Torres, Claudio; Härtel, Steffen; Maureira-Fredes, Cristián; Arce, Paola
2016-10-01
Breast cancer is one of the most common cancers in women worldwide. Patient therapy is widely supported by analysis of immunohistochemically (IHC) stained tissue sections. In particular, the analysis of HER2 overexpression by immunohistochemistry helps to determine when patients are suitable for HER2-targeted treatment. Computational HER2 overexpression analysis is still an open problem and a challenging task, principally because of the variability of immunohistochemistry tissue samples and the subjectivity of the specialists assessing them. In addition, the immunohistochemistry process can produce diverse artifacts that complicate the HER2 overexpression assessment. In this paper we study the segmentation of HER2 overexpression in IHC stained breast cancer tissue images using a support vector machine (SVM) classifier. We assess the SVM performance using diverse color and texture pixel-level features, including the RGB, CMYK, HSV, and CIE L*a*b* color spaces, a color deconvolution filter, and Haralick features. We measure classification performance for three datasets containing a total of 153 IHC images that were previously labeled by a pathologist.
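One of the listed pixel-level features, the color deconvolution filter, is available in scikit-image as an H&E/DAB separation; below is a small sketch stacking per-pixel color features (the stain basis and the exact feature set are assumptions, since the paper does not spell out its implementation). The resulting rows could then be fed to an SVM such as sklearn.svm.SVC.

```python
import numpy as np
from skimage.color import rgb2hed, rgb2hsv, rgb2lab

def pixel_features(rgb):
    """Stack per-pixel color features for an H x W x 3 RGB image in [0, 1]."""
    hed = rgb2hed(rgb)        # Hematoxylin / Eosin / DAB channels via color deconvolution
    hsv = rgb2hsv(rgb)
    lab = rgb2lab(rgb)
    feats = np.dstack([rgb, hed, hsv, lab])      # H x W x 12 feature volume
    return feats.reshape(-1, feats.shape[-1])    # one row of features per pixel
```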
Digital Image Display Control System, DIDCS. [for astronomical analysis
NASA Technical Reports Server (NTRS)
Fischel, D.; Klinglesmith, D. A., III
1979-01-01
DIDCS is an interactive image display and manipulation system that is used for a variety of astronomical image reduction and analysis operations. The hardware system consists of a PDP 11/40 main frame with 32K of 16-bit core memory; 96K of 16-bit MOS memory; two 9 track 800 BPI tape drives; eight 2.5 million byte RKO5 type disk packs, three user terminals, and a COMTAL 8000-S display system which has sufficient memory to store and display three 512 x 512 x 8 bit images along with an overlay plane and function table for each image, a pseudo color table and the capability for displaying true color. The software system is based around the language FORTH, which will permit an open ended dictionary of user level words for image analyses and display. A description of the hardware and software systems will be presented along with examples of the types of astronomical research that are being performed. Also a short discussion of the commonality and exchange of this type of image analysis system will be given.
Measurement of meat color using a computer vision system.
Girolami, Antonio; Napolitano, Fabio; Faraone, Daniela; Braghieri, Ada
2013-01-01
The limits of the colorimeter and of an image analysis technique in evaluating the color of beef, pork, and chicken were investigated. The Minolta CR-400 colorimeter and a computer vision system (CVS) were employed to measure colorimetric characteristics. To evaluate the chromatic fidelity of the image of the sample displayed on the monitor, similarity tests were carried out using a trained panel. In the first test the panelists observed the actual meat sample and the sample image on the monitor at the same time in order to evaluate the similarity between them (test A). The panelists were also asked to evaluate the similarity between two colors, both generated with Adobe Photoshop CS3, one using the L, a, and b values read by the colorimeter and the other those obtained using the CVS (test B); which of the two colors was more similar to the sample visualized on the monitor was also assessed (test C). The panelists found the digital images very similar to the actual samples (P<0.001). As to the similarity between the CVS- and colorimeter-based colors (test B), the panelists found significant differences between them (P<0.001). Test C showed that the color of the sample on the monitor was more similar to the CVS-generated color than to the colorimeter-generated color. The differences between the values of L, a, b, hue angle, and chroma obtained with the CVS and the colorimeter were statistically significant (P<0.05-0.001). These results showed that the colorimeter did not generate coordinates corresponding to the true color of meat, whereas the CVS method seemed to give valid measurements that reproduced a color very similar to the real one. Copyright © 2012 Elsevier Ltd. All rights reserved.
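A small illustration of comparing a CVS color against a colorimeter reading by converting an image ROI to CIELAB and computing a simple CIE76 color difference; the file name, ROI coordinates, and colorimeter values are placeholders.

```python
import numpy as np
from skimage import io, color

image = io.imread("steak.jpg") / 255.0          # placeholder file name
roi = image[100:200, 150:250, :3]               # placeholder region on the meat surface

lab_roi = color.rgb2lab(roi)
L, a, b = lab_roi.reshape(-1, 3).mean(axis=0)   # average CVS color of the ROI

colorimeter = np.array([38.5, 18.2, 9.7])       # placeholder L*, a*, b* from the instrument
delta_e = np.linalg.norm(np.array([L, a, b]) - colorimeter)   # simple CIE76 difference
print(f"CVS Lab: {L:.1f}, {a:.1f}, {b:.1f}  |  dE76 vs colorimeter: {delta_e:.1f}")
```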
Computer-aided diagnostic approach of dermoscopy images acquiring relevant features
NASA Astrophysics Data System (ADS)
Castillejos-Fernández, H.; Franco-Arcega, A.; López-Ortega, O.
2016-09-01
In skin cancer detection, automated analysis of the borders, colors, and structures of a lesion relies upon an accurate segmentation process, which is an important first step in any Computer-Aided Diagnosis (CAD) system. However, irregular and disperse lesion borders, low contrast, artifacts in the images, and the variety of colors within the region of interest make the problem difficult. In this paper, we propose an efficient approach for automatic classification that considers specific lesion features. First, for the selection of the lesion skin we employ the segmentation algorithm W-FCM.1 Then, in the feature extraction stage we consider several aspects: the area of the lesion, which is calculated by correlating axes, and the asymmetry value along both axes. For color analysis we employ an ensemble of clusterers including K-means, fuzzy K-means, and Kohonen maps, which estimate the presence of one or more of the colors defined in the ABCD rule and the values for each of the segmented colors. Another aspect considered is the type of structures that appear in the lesion; these are defined using the well-known GLCM method. During the classification stage we compare several methods in order to determine whether the lesion is benign or malignant. An important contribution of the current approach to the segmentation-classification problem resides in the use of information from all color channels together, as well as the measure of each color in the lesion and the axes correlation. The segmentation and classification performance was measured using sensitivity, specificity, accuracy, and the AUC metric over a set of dermoscopy images from the ISDIS data set.
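The GLCM structure features mentioned in the abstract can be computed with scikit-image (graycomatrix in recent releases, spelled greycomatrix in older ones); a minimal sketch over a grayscale lesion patch with illustrative parameters:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_patch, levels=64):
    """Contrast, homogeneity, energy and correlation from a quantized grayscale patch."""
    q = (gray_patch.astype(float) / 256 * levels).astype(np.uint8)   # quantize to 0..levels-1
    glcm = graycomatrix(q, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

# Example on a random 8-bit patch (stand-in for a segmented lesion region).
patch = np.random.default_rng(1).integers(0, 256, size=(128, 128), dtype=np.uint8)
print(glcm_features(patch))
```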
Raines, Gary L.; Bretz, R.F.; Shurr, George W.
1979-01-01
From analysis of a color-coded Landsat 5/6 ratio image, a map of the vegetation density distribution covering 25,000 sq km of western South Dakota has been produced by Raines. This 5/6 ratio image is produced by digitally calculating the ratios of bands 5 and 6 of the Landsat data and then color coding these ratios in an image. Bretz and Shurr compared this vegetation density map with published and unpublished data, primarily of the U.S. Geological Survey and the South Dakota Geological Survey; good correspondence is seen between this map and existing geologic maps, especially the soils map. We believe that this Landsat ratio image can be used as a tool to refine existing maps of surficial geology and bedrock, where bedrock is exposed, and to improve mapping accuracy in areas of poor exposure common in South Dakota. In addition, this type of image could be a useful additional tool in mapping areas that are unmapped.
Objective research on tongue manifestation of patients with eczema.
Yu, Zhifeng; Zhang, Haifang; Fu, Linjie; Lu, Xiaozuo
2017-07-20
Tongue observation often depends on subjective judgment, so it is necessary to establish an objective and quantifiable standard for it. The aims of this study were to discuss the features of tongue manifestation in patients suffering from different types of eczema and to reveal the clinical significance of the tongue images. Two hundred patients with eczema were recruited and divided into three groups according to the diagnostic criteria: the acute group had 47 patients, the subacute group 82 patients, and the chronic group 71 patients. A computerized tongue image digital analysis device was used to detect tongue parameters, and the L*a*b* color model was applied to classify tongue parameters quantitatively. For parameters such as tongue color, tongue shape, color of the tongue coating, and thickness or thinness of the tongue coating, there was a significant difference among the acute, subacute, and chronic groups (P < 0.05). For the Lab values of both the tongue and the tongue coating, there was statistical significance among the above types of eczema (P < 0.05). Tongue images can reflect some features of eczema, and different types of eczema may be related to changes in tongue images. The computerized tongue image digital analysis device can objectively reflect the tongue characteristics of patients with eczema.
Qualitative evaluations and comparisons of six night-vision colorization methods
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Reese, Kristopher; Blasch, Erik; McManamon, Paul
2013-05-01
Current multispectral night vision (NV) colorization techniques can manipulate images to produce colorized images that closely resemble natural scenes. The colorized NV images can enhance human perception by improving observer object classification and reaction times, especially in low light conditions. This paper focuses on qualitative (subjective) evaluations and comparisons of six NV colorization methods. The multispectral images include visible (Red-Green-Blue), near infrared (NIR), and long wave infrared (LWIR) images. The six colorization methods are channel-based color fusion (CBCF), statistic matching (SM), histogram matching (HM), joint-histogram matching (JHM), statistic matching then joint-histogram matching (SM-JHM), and the lookup table (LUT). Four categories of quality measurements are used for the qualitative evaluations: contrast, detail, colorfulness, and overall quality. Each measurement is rated on a scale of 1 to 3, representing low, average, and high quality, respectively. Specifically, high contrast (a rated score of 3) means an adequate level of brightness and contrast; high detail represents high clarity of detailed content while maintaining low artifacts; and high colorfulness preserves more natural colors (i.e., closely resembling the daylight image). Overall quality is determined from the NV image compared to the reference image. Nine sets of multispectral NV images were used in our experiments. For each set, the six colorized NV images (produced from NIR and LWIR images) were concurrently presented to users along with the reference color (RGB) image (taken in daytime). A total of 67 subjects passed a screening test (the Ishihara Color Blindness Test) and were asked to evaluate the nine sets of colorized images. The experimental results showed the quality ranking of the colorization methods, from best to worst, to be: CBCF, SM, SM-JHM, LUT, JHM, HM. It is anticipated that this work will provide a benchmark for NV colorization and for quantitative evaluation using an objective metric such as the objective evaluation index (OEI).
White-Light Optical Information Processing and Holography.
1983-05-03
This report covers white-light optical information processing and holography, including white-light holography, image subtraction, image deblurring, coherence requirements, the apparent transfer function, and source encoding. Work in this period also demonstrated several color image processing capabilities, among them broadband color image deblurring and color image subtraction, and examined rainbow holographic aberrations.
Distance preservation in color image transforms
NASA Astrophysics Data System (ADS)
Santini, Simone
1999-12-01
Most current image processing systems work on color images, and color is a precious perceptual clue for determining image similarity. Working with color images, however, is not the same thing as working with images taking values in a 3D Euclidean space. Not only are color spaces bounded, but the characteristics of the observer endow the space with a 'perceptual' metric that in general does not correspond to the metric naturally inherited from R3. This paper studies the problem of filtering color images abstractly. It begins by determining the properties of the color sum and color product operations such that the desirable properties of orthonormal bases will be preserved. The paper then defines a general scheme, based on the action of the additive group on the color space, by which operations that satisfy the required properties can be defined.
NASA Astrophysics Data System (ADS)
Yan, Dan; Bai, Lianfa; Zhang, Yi; Han, Jing
2018-02-01
To address the missing details and the performance problems of colorization based on sparse representation, we propose a conceptual model framework for colorizing gray-scale images, and then a multi-sparse-dictionary colorization algorithm based on feature classification and detail enhancement (CEMDC) built on this framework. The algorithm achieves a natural colorized effect for a gray-scale image that is consistent with human vision. First, the algorithm establishes a multi-sparse-dictionary classification colorization model. Then, to improve the accuracy of the classification, a corresponding local constraint algorithm is proposed. Finally, we propose a detail enhancement method based on the Laplacian pyramid, which is effective in solving the problem of missing details and improving the speed of image colorization. In addition, the algorithm not only realizes the colorization of visual gray-scale images, but can also be applied to other areas, such as color transfer between color images, colorizing gray fusion images, and infrared images.
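The Laplacian-pyramid detail enhancement step can be sketched as boosting the band-pass layers before reconstruction; the number of levels and the gain below are assumptions, not the paper's settings.

```python
import cv2
import numpy as np

def enhance_details(gray, levels=4, gain=1.5):
    """Boost the Laplacian pyramid layers of a grayscale image and reconstruct it."""
    img = gray.astype(np.float32)
    gaussian = [img]
    for _ in range(levels):
        gaussian.append(cv2.pyrDown(gaussian[-1]))
    # Band-pass (Laplacian) layers: each level minus the upsampled next coarser level.
    laplacian = [gaussian[i] - cv2.pyrUp(gaussian[i + 1], dstsize=gaussian[i].shape[::-1])
                 for i in range(levels)]
    out = gaussian[-1]
    for lap in reversed(laplacian):
        out = cv2.pyrUp(out, dstsize=lap.shape[::-1]) + gain * lap   # amplify details
    return np.clip(out, 0, 255).astype(np.uint8)
```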
Matsunaga, Tomoko M; Ogawa, Daisuke; Taguchi-Shiobara, Fumio; Ishimoto, Masao; Matsunaga, Sachihiro; Habu, Yoshiki
2017-06-01
Leaf color is an important indicator when evaluating plant growth and responses to biotic/abiotic stress. Acquisition of images by digital cameras allows analysis and long-term storage of the acquired images. However, under field conditions, where light intensity can fluctuate and other factors (shade, reflection, and background, etc.) vary, stable and reproducible measurement and quantification of leaf color are hard to achieve. Digital scanners provide fixed conditions for obtaining image data, allowing stable and reliable comparison among samples, but require detached plant materials to capture images, and the destructive processes involved often induce deformation of plant materials (curled leaves and faded colors, etc.). In this study, by using a lightweight digital scanner connected to a mobile computer, we obtained digital image data from intact plant leaves grown in natural-light greenhouses without detaching the targets. We took images of soybean leaves infected by Xanthomonas campestris pv. glycines , and distinctively quantified two disease symptoms (brown lesions and yellow halos) using freely available image processing software. The image data were amenable to quantitative and statistical analyses, allowing precise and objective evaluation of disease resistance.
Oh, Paul; Lee, Sukho; Kang, Moon Gi
2017-01-01
Recently, several RGB-White (RGBW) color filter arrays (CFAs) have been proposed, which have extra white (W) pixels in the filter array that are highly sensitive. Due to their high sensitivity, the W pixels have better SNR (Signal to Noise Ratio) characteristics than the other color pixels in the filter array, especially in low light conditions. However, most RGBW CFAs are designed so that the acquired RGBW pattern image can be converted into a conventional Bayer pattern image, which is then converted into the final color image by conventional demosaicing methods, i.e., color interpolation techniques. In this paper, we propose a new RGBW color filter array based on a totally different color interpolation technique, the colorization algorithm. The colorization algorithm was initially proposed for colorizing a gray image into a color image using a small number of color seeds. Here, we adopt this algorithm as a color interpolation technique, so that the RGBW color filter array can be designed with a very large number of W pixels to make the most of the highly sensitive W channel. The resulting RGBW color filter array has a pattern with a large proportion of W pixels, while the small number of RGB pixels is randomly distributed over the array. The colorization algorithm makes it possible to reconstruct the colors from such a small number of RGB values. Due to the large proportion of W pixels, the reconstructed color image has a high SNR value, higher in particular than those of conventional CFAs in low light conditions. Experimental results show that much important information that is not perceived in color images reconstructed with conventional CFAs is perceived in the images reconstructed with the proposed method. PMID:28657602
Enriching text with images and colored light
NASA Astrophysics Data System (ADS)
Sekulovski, Dragan; Geleijnse, Gijs; Kater, Bram; Korst, Jan; Pauws, Steffen; Clout, Ramon
2008-01-01
We present an unsupervised method to enrich textual applications with relevant images and colors. The images are collected by querying large image repositories, and the colors are subsequently computed using image processing. A prototype system based on this method is presented in which the method is applied to song lyrics; in combination with a lyrics synchronization algorithm, the system produces a rich multimedia experience. In order to identify terms within the text that may be associated with images and colors, we select noun phrases using a part-of-speech tagger. Large image repositories are queried with these terms, and representative colors are extracted per term using the collected images. To this end, we use either a histogram-based or a mean-shift-based algorithm. The representative color extraction uses the non-uniform distribution of the colors found in the large repositories. The images that are ranked best by the search engine are displayed on a screen, while the extracted representative colors are rendered on controllable lighting devices in the living room. We evaluate our method by comparing the computed colors to standard color representations of a set of English color terms. A second evaluation focuses on the distance in color between a queried term in English and its translation in a foreign language. Based on results from three sets of terms, a measure of the suitability of a term for color extraction based on KL divergence is proposed. Finally, we compare the performance of the algorithm using either the automatically indexed repository of Google Images or the manually annotated Flickr.com. Based on the results of these experiments, we conclude that using the presented method we can compute the relevant color for a term using a large image repository and image processing.
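A minimal version of the histogram-based representative color extraction could bin the pixels of the collected images in RGB and take the center of the most populated bin; the bin count is an assumption, not the paper's setting.

```python
import numpy as np

def representative_color(images, bins=16):
    """Return the center of the most populated RGB histogram bin across a list of
    H x W x 3 uint8 images (a crude stand-in for the paper's extraction step)."""
    pixels = np.concatenate([img.reshape(-1, 3) for img in images], axis=0)
    hist, edges = np.histogramdd(pixels, bins=(bins, bins, bins), range=[(0, 256)] * 3)
    r, g, b = np.unravel_index(np.argmax(hist), hist.shape)
    centers = [(e[i] + e[i + 1]) / 2 for e, i in zip(edges, (r, g, b))]
    return tuple(int(c) for c in centers)
```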
Automated retinal vessel type classification in color fundus images
NASA Astrophysics Data System (ADS)
Yu, H.; Barriga, S.; Agurto, C.; Nemeth, S.; Bauman, W.; Soliz, P.
2013-02-01
Automated retinal vessel type classification is an essential first step toward machine-based quantitative measurement of various vessel topological parameters and identifying vessel abnormalities and alterations in cardiovascular disease risk analysis. This paper presents a new and accurate automatic artery and vein classification method developed for arteriolar-to-venular width ratio (AVR) and artery and vein tortuosity measurements in regions of interest (ROI) of 1.5 and 2.5 optic disc diameters from the disc center, respectively. This method includes illumination normalization, automatic optic disc detection and retinal vessel segmentation, feature extraction, and a partial least squares (PLS) classification. Normalized multi-color information, color variation, and multi-scale morphological features are extracted for each vessel segment. We trained the algorithm on a set of 51 color fundus images using manually marked arteries and veins, and tested the proposed method on a previously unseen data set consisting of 42 images. We obtained an area under the ROC curve (AUC) of 93.7% in the ROI of the AVR measurement and an AUC of 91.5% in the ROI of the tortuosity measurement. The proposed AV classification method has the potential to assist automatic early detection and risk analysis of cardiovascular disease.
A standardised protocol for texture feature analysis of endoscopic images in gynaecological cancer.
Neofytou, Marios S; Tanos, Vasilis; Pattichis, Marios S; Pattichis, Constantinos S; Kyriacou, Efthyvoulos C; Koutsouris, Dimitris D
2007-11-29
In the development of tissue classification methods, classifiers rely on significant differences between texture features extracted from normal and abnormal regions. Yet significant differences can arise due to variations in the image acquisition method. For endoscopic imaging of the endometrium, we propose a standardized image acquisition protocol to eliminate significant statistical differences due to variations in: (i) the distance from the tissue (panoramic vs close up), (ii) differences in viewing angle, and (iii) color correction. We investigate texture feature variability for a variety of targets encountered in clinical endoscopy. All images were captured at clinically optimum illumination and focus, using 720 x 576 pixels and 24-bit color, for: (i) a variety of test targets from a color palette with a known color distribution, (ii) different viewing angles, and (iii) two different distances from a calf endometrium and from a chicken cavity. Human images from the endometrium were also captured and analysed. For the texture feature analysis, three different sets were considered: (i) Statistical Features (SF), (ii) Spatial Gray Level Dependence Matrices (SGLDM), and (iii) Gray Level Difference Statistics (GLDS). All images were gamma corrected, and the extracted texture feature values were compared against those extracted from the uncorrected images. Statistical tests were applied to compare images from different viewing conditions so as to determine any significant differences. For the proposed acquisition procedure, the results indicate that there is no significant difference in texture features between the panoramic and close up views or between angles. For a calibrated target image, gamma correction provided an acquired image that was a significantly better approximation to the original target image; in turn, this implies that the texture features extracted from the corrected images provided better approximations to the original images. Within the proposed protocol, for human ROIs, we found a large number of texture features that showed significant differences between normal and abnormal endometrium. This study provides a standardized protocol for avoiding significant texture feature differences that may arise due to variability in the acquisition procedure or the lack of color correction. After applying the protocol, we found that significant differences in texture features are due only to the fact that the features were extracted from different types of tissue (normal vs abnormal).
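The gamma correction applied before feature extraction can be implemented with a simple lookup table; the gamma value below is only an example.

```python
import numpy as np

def gamma_correct(image, gamma=2.2):
    """Apply gamma correction to an 8-bit image via a 256-entry lookup table."""
    lut = (255.0 * (np.arange(256) / 255.0) ** (1.0 / gamma)).astype(np.uint8)
    return lut[image]
```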
An analysis of absorbing image on the Indonesian text by using color matching
NASA Astrophysics Data System (ADS)
Hutagalung, G. A.; Tulus; Iryanto; Lubis, Y. F. A.; Khairani, M.; Suriati
2018-03-01
Messages are inserted in an image by embedding each character of the message into certain pixels. One way of inserting a message into an image is to insert the ASCII decimal value of a character into the decimal value of a primary color of the image. Messages use characters such as letters, numbers, or symbols; the number and frequency of the letters used differ from word to word, and the use of letters also varies across messages within each language. In Indonesian, the letter A is the most widely used, and the use of the other letters greatly affects the clarity of a message or text presented in the language. This study aims to determine the capacity of an image to absorb an Indonesian-language message and the factors that cause differences in this capacity. The data used in this study consist of several images in JPG or JPEG format, obtained from image-drawing software or image-capture hardware, at different image sizes. Results were obtained by testing four samples of a color image with a size of 1200 x 1920.
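A toy illustration of the insertion scheme described (one ASCII code written into a primary color channel of successive pixels); the channel choice and pixel ordering are assumptions, not the paper's exact scheme.

```python
import numpy as np

def embed_message(image, message, channel=2):
    """Write the ASCII code of each character into successive pixels of one color channel."""
    out = image.copy()
    h, w, _ = out.shape
    codes = np.frombuffer(message.encode("ascii"), dtype=np.uint8)
    if codes.size > h * w:
        raise ValueError("message does not fit in the image")
    idx = np.arange(codes.size)
    out[idx // w, idx % w, channel] = codes    # one character per pixel, row-major order
    return out

def extract_message(image, length, channel=2):
    """Read back the first `length` ASCII codes from the same channel."""
    h, w, _ = image.shape
    idx = np.arange(length)
    return bytes(image[idx // w, idx % w, channel]).decode("ascii")
```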
Estimation of Fine-Scale Histologic Features at Low Magnification.
Zarella, Mark D; Quaschnick, Matthew R; Breen, David E; Garcia, Fernando U
2018-06-18
Whole-slide imaging has ushered in a new era of technology that has fostered the use of computational image analysis for diagnostic support and has begun to transfer the act of analyzing a slide to computer monitors. Due to the overwhelming amount of detail available in whole-slide images, analytic procedures, whether computational or visual, often operate at magnifications lower than the magnification at which the image was acquired. As a result, a corresponding reduction in image resolution occurs. It is unclear how much information is lost when magnification is reduced, and whether the rich color attributes of histologic slides can aid in reconstructing some of that information. To examine the correspondence between the color and spatial properties of whole-slide images and to elucidate the impact of resolution reduction on the histologic attributes of the slide, we simulated image resolution reduction and modeled its effect on classification of the underlying histologic structure. By harnessing measured histologic features and the intrinsic spatial relationships between histologic structures, we developed a predictive model to estimate the histologic composition of tissue in a manner that exceeds the resolution of the image. Reduction in resolution resulted in a significant loss of the ability to accurately characterize histologic components at magnifications less than ×10. By utilizing pixel color, this ability was improved at all magnifications. Multiscale analysis of histologic images requires an adequate understanding of the limitations imposed by image resolution. Our findings suggest that some of these limitations may be overcome with computational modeling.
Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor.
Kim, Heegwang; Park, Jinho; Park, Hasil; Paik, Joonki
2017-12-09
Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system.
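The final reconstruction step follows the standard atmospheric scattering model I = J*t + A*(1 - t); a minimal sketch of inverting it once the transmission map and atmospheric light are known (the lower clamp on t is an assumption).

```python
import numpy as np

def defog(foggy, transmission, atmospheric_light, t_min=0.1):
    """Recover the scene radiance J from the model I = J*t + A*(1 - t)."""
    I = foggy.astype(np.float32) / 255.0
    t = np.clip(transmission, t_min, 1.0)[..., None]      # avoid division by a tiny t
    A = np.asarray(atmospheric_light, dtype=np.float32)   # per-channel airlight in [0, 1]
    J = (I - A) / t + A
    return np.clip(J * 255.0, 0, 255).astype(np.uint8)
```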
Color transfer between high-dynamic-range images
NASA Astrophysics Data System (ADS)
Hristova, Hristina; Cozot, Rémi; Le Meur, Olivier; Bouatouch, Kadi
2015-09-01
Color transfer methods alter the look of a source image with regard to a reference image. So far, the proposed color transfer methods have been limited to low-dynamic-range (LDR) images. Unlike LDR images, which are display-dependent, high-dynamic-range (HDR) images contain real physical values of the world luminance and are able to capture high luminance variations and the finest details of real world scenes. Therefore, there exists a strong discrepancy between the two types of images. In this paper, we bridge the gap between the color transfer domain and HDR imagery by introducing HDR extensions to LDR color transfer methods. We tackle the main issues of applying a color transfer between two HDR images. First, to address the nature of light and color distributions in the context of HDR imagery, we carry out modifications of traditional color spaces. Furthermore, we ensure high precision in the quantization of the dynamic range for histogram computations. As image clustering (based on light and colors) proved to be an important aspect of color transfer, we analyze it and adapt it to the HDR domain. Our framework has been applied to several state-of-the-art color transfer methods. Qualitative experiments have shown that results obtained with the proposed adaptation approach exhibit fewer artifacts and are visually more pleasing than results obtained when straightforwardly applying existing color transfer methods to HDR images.
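For reference, the classic LDR color transfer that the paper extends matches per-channel means and standard deviations in a decorrelated color space; a compact sketch using CIELAB as that space (Reinhard et al. originally used lαβ, and the HDR case additionally requires the paper's modified color spaces and histogram quantization).

```python
import numpy as np
from skimage import color

def reinhard_transfer(source_rgb, reference_rgb):
    """Match the per-channel mean/std of the source to the reference in CIELAB (LDR case)."""
    src = color.rgb2lab(source_rgb)
    ref = color.rgb2lab(reference_rgb)
    out = np.empty_like(src)
    for c in range(3):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std() + 1e-6
        r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (src[..., c] - s_mu) * (r_sd / s_sd) + r_mu
    return np.clip(color.lab2rgb(out), 0.0, 1.0)
```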
Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy
NASA Technical Reports Server (NTRS)
Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)
2011-01-01
Computed tomography imaging spectrometers ("CTIS"s) having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3 digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.
NASA Technical Reports Server (NTRS)
2005-01-01
The THEMIS VIS camera is capable of capturing color images of the Martian surface using five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from using multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. This false color image shows the wind eroded deposit in Pollack Crater called 'White Rock'. This image was collected during the Southern Fall Season. Image information: VIS instrument. Latitude -8, Longitude 25.2 East (334.8 West). 0 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically or geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
Sripathi, Smiti; Mahajan, Abhishek
Sripathi, Smiti; Mahajan, Abhishek
2013-09-01
To analyze qualitative and quantitative parameters of lung tumors by color Doppler sonography, determine the role of color Doppler sonography in predicting chest wall invasion by lung tumors using spectral waveform analysis, and compare color Doppler sonography and computed tomography (CT) for predicting chest wall invasion by lung tumors. Between March and September 2007, 55 patients with pleuropulmonary lesions on chest radiography were assessed by grayscale and color Doppler sonography for chest wall invasion. Four patients were excluded from the study because of poor acoustic windows. Quantitative and qualitative sonographic examinations of the lesions were performed using grayscale and color Doppler imaging. The correlation between the color Doppler and CT findings was determined, and the final outcomes were correlated with the histopathologic findings. Of a total of 51 lesions, 32 were malignant. Vascularity was present on color Doppler sonography in 28 lesions, and chest wall invasion was documented in 22 cases. Computed tomography was performed in 24 of 28 evaluable malignant lesions, and the findings were correlated with the color Doppler findings for chest wall invasion. Of the 24 patients who underwent CT, 19 showed chest wall invasion. The correlation between the color Doppler and CT findings revealed that color Doppler sonography had sensitivity of 95.6% and specificity of 100% for assessing chest wall invasion, whereas CT had sensitivity of 85.7% and specificity of 66.7%. Combined qualitative and quantitative color Doppler sonography can predict chest wall invasion by lung tumors with better sensitivity and specificity than CT. Although surgery is the reference standard, color Doppler sonography is a readily available, affordable, and noninvasive in vivo diagnostic imaging modality that is complementary to CT and magnetic resonance imaging for lung cancer staging.
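For reference, the sensitivity and specificity figures quoted above follow the standard definitions. The 2x2 counts in the snippet are hypothetical and serve only to show the arithmetic; they are not the study's data.

```python
# Hypothetical counts; only the formulas follow the standard definitions.
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sensitivity_specificity(tp=20, fn=2, tn=5, fp=1)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```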
Langlois, Neil E I
2010-03-01
Carbon monoxide is a component of motor vehicle exhaust fumes, provided a functional catalytic converter is not present. This gas binds avidly to the hemoglobin molecule in red blood cells, preventing its oxygen transport function and effectively poisoning the body by starving it of oxygen. In binding to hemoglobin, carbon monoxide forms carboxyhemoglobin, which has a characteristic bright pink color. It has been remarked that the fingernails of victims of carbon monoxide poisoning tend to exhibit a pink color, whereas the fingernails of deceased bodies otherwise tend towards a dark red to blue color. This study used digital image analysis to objectively determine whether a color difference occurred between the fingernails of a group of cadavers with carbon monoxide poisoning and those of a group of controls. The fingernails of the carbon monoxide group did tend to be more red than those of the controls, but, owing to overlap between the two groups, assessment of fingernail color cannot be recommended as a rapid screening test.
Feasibility of digital image colorimetry--application for water calcium hardness determination.
Lopez-Molinero, Angel; Tejedor Cubero, Valle; Domingo Irigoyen, Rosa; Sipiera Piazuelo, Daniel
2013-01-15
The interpretation and relevance of basic RGB colors in Digital Image-Based Colorimetry are treated in this paper. The studies were carried out using the chromogenic model formed by the reaction between Ca(II) ions and glyoxal bis(2-hydroxyanil), which produces orange-red colored solutions in alkaline media. Individual basic color data (RGB) and the total color intensity, I(tot), were the original variables treated by Factorial Analysis. The evaluation showed that the highest variance of the system and the highest analytical sensitivity were associated with the G color. However, after Fourier transform analysis, the basic R color was recognized as an important informative feature; it manifested as an intrinsic characteristic differentiated at low frequencies in the Fourier transform. The Principal Components Analysis study showed that the variance of the system could be mostly retained in the first principal component, but depended on all basic colors. The colored complex was also applied and validated as a Digital Image Colorimetric method for the determination of Ca(II) ions. RGB intensities were linearly correlated with Ca(II) in the range 0.2-2.0 mg L(-1). Under the best conditions, using the green color, a simple and reliable method for Ca determination could be developed. Its detection limit was established (3s criterion) as 0.07 mg L(-1), and the reproducibility was better than 6% for 1.0 mg L(-1) Ca. Other chromatic parameters were evaluated as dependent calibration variables; their representativeness, variance, and sensitivity were discussed in order to select the best analytical variable. The potential of the procedure as a field-ready method, applicable 'in situ' with a minimum of experimental needs, was demonstrated. The method was applied to the analysis of Ca in different real water samples: municipal tap water, bottled mineral water, and natural river water were analyzed, and the results were compared and evaluated statistically. Validity was assessed against the alternative techniques of flame atomic absorption spectroscopy and titrimetry; differences were observed but were consistent with the applied methods. Copyright © 2012 Elsevier B.V. All rights reserved.
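The calibration and 3s detection-limit procedure described above can be sketched as follows. All concentration, intensity, and blank values in the snippet are hypothetical placeholders; only the workflow (linear fit of a mean channel intensity versus Ca(II) concentration, and a detection limit of three blank standard deviations over the slope) follows the text.

```python
# Sketch of digital-image colorimetry calibration; all numbers are hypothetical.
import numpy as np

conc = np.array([0.2, 0.5, 1.0, 1.5, 2.0])                    # mg/L Ca(II)
g_intensity = np.array([210.0, 195.0, 171.0, 148.0, 124.0])   # mean G channel

slope, intercept = np.polyfit(conc, g_intensity, 1)

# 3s detection limit: three standard deviations of replicate blank
# measurements divided by the absolute calibration slope.
blank_replicates = np.array([231.2, 230.5, 231.8, 230.9, 231.4])
lod = 3 * blank_replicates.std(ddof=1) / abs(slope)
print(f"slope = {slope:.1f} counts per mg/L, LOD ~ {lod:.2f} mg/L")
```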
GOIATO, Marcelo Coelho; dos SANTOS, Daniela Micheline; MORENO, Amália; GENNARI-FILHO, Humberto; PELLIZZER, Eduardo Piza
2011-01-01
The use of ocular prostheses for ophthalmic patients aims to rebuild facial aesthetics and provide an artificial substitute for the visual organ. Natural weathering conditions promote discoloration of artificial irides, and many studies have attempted to produce irides with greater chromatic paint durability using different paint materials. Objectives: The present study evaluated the color stability of artificial irides obtained with two techniques (oil painting and digital imaging) and submitted to microwave polymerization. Material and Methods: Forty samples simulating ocular prostheses were fabricated. Each sample consisted of one disc of N1 acrylic resin and one disc of colorless acrylic resin, with the iris interposed between the discs. Brown and blue irides were obtained by oil painting or digital imaging. Color stability was determined with a reflection spectrophotometer, and measurements were taken before and after microwave polymerization. Statistical analysis of the techniques for reproducing artificial irides was performed by applying a normality test followed by 2-way ANOVA and the Tukey HSD test (α=.05). Results: Chromatic alterations occurred in all specimens, and statistically significant differences were observed between the oil-painted samples and those obtained by digital imaging. There was no statistical difference between the brown and blue colors. Independently of technique, all samples suffered color alterations after microwave polymerization. Conclusion: The digital imaging technique for reproducing irides presented better color stability after microwave polymerization. PMID:21625733
An interactive tool for gamut masking
NASA Astrophysics Data System (ADS)
Song, Ying; Lau, Cheryl; Süsstrunk, Sabine
2014-02-01
Artists often want to change the colors of an image to achieve a particular aesthetic goal. For example, they might limit colors to a warm or cool color scheme to create an image with a certain mood or feeling. Gamut masking is a technique that artists use to limit the set of colors they can paint with. They draw a mask over a color wheel and only use the hues within the mask. However, creating the color palette from the mask and applying the colors to the image requires skill. We propose an interactive tool for gamut masking that allows amateur artists to create an image with a desired mood or feeling. Our system extracts a 3D color gamut from the 2D user-drawn mask and maps the image to this gamut. The user can draw a different gamut mask or locally refine the image colors. Our voxel grid gamut representation allows us to represent gamuts of any shape, and our cluster-based image representation allows the user to change colors locally.
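A voxel-grid gamut and a nearest-voxel mapping of the kind mentioned above can be sketched as follows. The grid resolution, the brute-force nearest-voxel search, and the way the grid is populated from a list of allowed colors are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a voxel-grid gamut representation; colors are assumed in [0, 1].
import numpy as np

def build_voxel_gamut(allowed_colors, bins=32):
    """Mark occupied voxels of a quantized RGB cube from an (M, 3) color list."""
    grid = np.zeros((bins, bins, bins), dtype=bool)
    idx = np.clip((allowed_colors * bins).astype(int), 0, bins - 1)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

def map_to_gamut(image, grid, bins=32):
    """Snap each pixel to the nearest occupied voxel center (brute force)."""
    occupied = np.argwhere(grid)                 # (M, 3) voxel indices
    centers = (occupied + 0.5) / bins            # voxel centers in [0, 1]
    flat = image.reshape(-1, 3)
    d = np.linalg.norm(flat[:, None, :] - centers[None, :, :], axis=2)
    return centers[d.argmin(axis=1)].reshape(image.shape)
```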
True color scanning laser ophthalmoscopy and optical coherence tomography handheld probe
LaRocca, Francesco; Nankivil, Derek; Farsiu, Sina; Izatt, Joseph A.
2014-01-01
Scanning laser ophthalmoscopes (SLOs) are able to achieve superior contrast and axial sectioning capability compared to fundus photography. However, SLOs typically use monochromatic illumination and are thus unable to extract color information of the retina. Previous color SLO imaging techniques utilized multiple lasers or narrow band sources for illumination, which allowed for multiple color but not “true color” imaging as done in fundus photography. We describe the first “true color” SLO, handheld color SLO, and combined color SLO integrated with a spectral domain optical coherence tomography (OCT) system. To achieve accurate color imaging, the SLO was calibrated with a color test target and utilized an achromatizing lens when imaging the retina to correct for the eye’s longitudinal chromatic aberration. Color SLO and OCT images from volunteers were then acquired simultaneously with a combined power under the ANSI limit. Images from this system were then compared with those from commercially available SLOs featuring multiple narrow-band color imaging. PMID:25401032
NASA Astrophysics Data System (ADS)
Lang, Jun
2015-03-01
In this paper, we propose a novel color image encryption method using Color Blend (CB) and Chaos Permutation (CP) operations in the reality-preserving multiple-parameter fractional Fourier transform (RPMPFRFT) domain. The original color image is first exchanged and mixed randomly from the standard red-green-blue (RGB) color space to an R′G′B′ color space by rotating the color cube with a random angle matrix. RPMPFRFT is then employed to change the pixel values of the color image: the three components of the scrambled RGB color space are transformed by RPMPFRFT with three different transform pairs, respectively. Compared to transforms with complex-valued output, the RPMPFRFT ensures that the output is real, which saves image storage space and is convenient for transmission in practical applications. To further enhance the security of the encryption system, the output of the previous steps is scrambled by juxtaposing sections of the image in the reality-preserving multiple-parameter fractional Fourier domains, with the alignment of sections determined by two coupled chaotic logistic maps. The parameters of the Color Blend, Chaos Permutation, and RPMPFRFT operations serve as the key of the encryption algorithm. The proposed color image encryption can also be applied to encrypt three gray images by treating them as the three RGB color components of a specially constructed color image. Numerical simulations demonstrate that the proposed algorithm is feasible, secure, sensitive to keys, and robust to noise attack and data loss.
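The chaos-permutation idea can be illustrated on its own. The block size, the logistic-map key values, and the block-level granularity below are assumptions made for the sketch; the RPMPFRFT and color-blend stages are not shown.

```python
# Sketch of a logistic-map-driven block permutation (one channel only).
import numpy as np

def logistic_sequence(x0, r, n, burn_in=100):
    """Iterate the logistic map x -> r*x*(1-x) and return n values."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1.0 - x)
    seq = np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def chaos_permute_blocks(image, key=(0.3731, 3.9999), block=8):
    """Permute non-overlapping blocks of a 2-D array in a chaotic order.
    Assumes the image dimensions are multiples of the block size."""
    h, w = image.shape
    bh, bw = h // block, w // block
    blocks = [image[i*block:(i+1)*block, j*block:(j+1)*block]
              for i in range(bh) for j in range(bw)]
    order = np.argsort(logistic_sequence(*key, n=len(blocks)))
    out = np.zeros_like(image)
    for dst, src in enumerate(order):
        i, j = divmod(dst, bw)
        out[i*block:(i+1)*block, j*block:(j+1)*block] = blocks[src]
    return out, order                 # keep 'order' to invert at decryption
```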
Simultaneous dual-color fluorescence microscope: a characterization study.
Li, Zheng; Chen, Xiaodong; Ren, Liqiang; Song, Jie; Li, Yuhua; Zheng, Bin; Liu, Hong
2013-01-01
High spatial resolution and geometric accuracy are crucial for chromosomal analysis in clinical cytogenetic applications. High-resolution, rapid, simultaneous acquisition of multiple fluorescent wavelengths can be achieved by concurrent imaging with multiple detectors. However, this class of microscopic systems functions differently from traditional fluorescence microscopes. The aim of this work was to develop a practical characterization framework to assess and optimize the performance of a high-resolution, dual-color fluorescence microscope designed for clinical chromosomal analysis. The dual-band microscopic imaging system uses a dichroic mirror, two sets of specially selected optical filters, and two detectors to simultaneously acquire two fluorescent wavelengths. The system's geometric distortion, linearity, modulation transfer function, and dual-detector alignment were characterized. Experimental results show that the geometric distortion at the lens periphery is less than 1%. Both fluorescent channels show linear signal responses, but there is a discrepancy between the two due to the detectors' non-uniform response ratios to different wavelengths. In terms of spatial resolution, the two contrast transfer function curves trend consistently with spatial frequency. The alignment measurement allows the cameras' alignment to be quantitatively assessed, and an image with adjusted alignment demonstrates the reduced discrepancy obtained using the alignment measurement method. In this paper, we present a system characterization study and its methods for a specially designed imaging system for clinical cytogenetic applications. The presented characterization methods are not only suited to this dual-color imaging system but also applicable to the evaluation and optimization of other, similar multi-color microscopic imaging systems, improving their clinical utility for future cytogenetic applications.
Portable real-time color night vision
NASA Astrophysics Data System (ADS)
Toet, Alexander; Hogervorst, Maarten A.
2008-03-01
We developed a simple and fast lookup-table based method to derive and apply natural daylight colors to multi-band night-time images. The method deploys an optimal color transformation derived from a set of samples taken from a daytime color reference image. The colors in the resulting colorized multiband night-time images closely resemble the colors in the daytime color reference image. Also, object colors remain invariant under panning operations and are independent of the scene content. Here we describe the implementation of this method in two prototype portable dual band realtime night vision systems. One system provides co-aligned visual and near-infrared bands of two image intensifiers, the other provides co-aligned images from a digital image intensifier and an uncooled longwave infrared microbolometer. The co-aligned images from both systems are further processed by a notebook computer. The color mapping is implemented as a realtime lookup table transform. The resulting colorised video streams can be displayed in realtime on head mounted displays and stored on the hard disk of the notebook computer. Preliminary field trials demonstrate the potential of these systems for applications like surveillance, navigation and target detection.
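A lookup-table color transform of the kind described above amounts to a single indexing operation per pixel. The sketch below assumes 8-bit, co-registered bands and fills the table with a placeholder mapping, since the daytime-reference sampling step that actually populates the table is not shown.

```python
# Minimal sketch of a 2-D lookup-table color transform for a dual-band sensor.
import numpy as np

def build_identity_lut(levels=256):
    """Placeholder LUT: maps (band1, band2) index pairs to an RGB triple."""
    lut = np.zeros((levels, levels, 3), dtype=np.uint8)
    lut[..., 0] = np.arange(levels)[:, None]     # band 1 drives red
    lut[..., 1] = np.arange(levels)[None, :]     # band 2 drives green
    return lut

def colorize(band1, band2, lut):
    """Apply the LUT per pixel; this is a single fancy-indexing operation."""
    return lut[band1, band2]

# usage: band1 and band2 are uint8 images of the same size
# rgb = colorize(band1, band2, build_identity_lut())
```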
Quality assessment of color images based on the measure of just noticeable color difference
NASA Astrophysics Data System (ADS)
Chou, Chun-Hsien; Hsu, Yun-Hsiang
2014-01-01
Accurate assessment of the quality of color images is an important step in many image processing systems that convey visual information through reproduced images. An accurate objective image quality assessment (IQA) method is expected to give assessment results that agree closely with subjective assessment. To assess the quality of color images, many approaches simply apply a metric designed for grayscale images to each of the three color channels, neglecting the correlation among the channels. In this paper, a metric for assessing the quality of color images is proposed, in which a model of variable just-noticeable color difference (VJNCD) is employed to estimate the visibility threshold of distortion at each color pixel. With these estimated visibility thresholds, the proposed metric measures the average perceptible distortion in terms of quantized distortion, according to a perceptual error map similar to that defined by the National Bureau of Standards (NBS) for converting the color difference computed by CIEDE2000 into an objective perceptual quality score. The perceptual error map in this case is designed per pixel according to the visibility threshold estimated by the VJNCD model. The performance of the proposed metric is verified by assessing the test images in the LIVE database and is compared with those of many well-known IQA metrics. Experimental results indicate that the proposed metric is an effective IQA method that can accurately predict the quality of color images in terms of the correlation between objective scores and subjective evaluation.
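The idea of pooling only distortion that exceeds a per-pixel visibility threshold can be sketched in a few lines. The snippet assumes a threshold map from a VJNCD-like model is already available and illustrates only threshold-gated pooling, not the paper's NBS-based error map or CIEDE2000 computation.

```python
# Sketch of threshold-gated distortion pooling; 'jnd' is an assumed
# per-pixel visibility-threshold map from a VJNCD-like model.
import numpy as np

def perceptible_distortion(reference, distorted, jnd):
    """Average distortion measured in multiples of the local JND threshold."""
    diff = np.abs(reference.astype(float) - distorted.astype(float))
    visible = np.maximum(diff - jnd, 0.0)        # sub-threshold errors ignored
    return (visible / np.maximum(jnd, 1e-6)).mean()
```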
Miyake, Masahiro; Yamashiro, Kenji; Akagi-Kurashige, Yumiko; Oishi, Akio; Tsujikawa, Akitaka; Hangai, Masanori; Yoshimura, Nagahisa
2014-01-01
Purpose To evaluate fundus shape in highly myopic eyes using color maps created through optical coherence tomography (OCT) image analysis. Methods We retrospectively evaluated 182 highly myopic eyes from 113 patients. After obtaining 12 lines of 9-mm radial OCT scans with the fovea at the center, the Bruch’s membrane line was plotted and its curvature was measured at 1-µm intervals in each image, which was reflected as a color topography map. For the quantitative analysis of the eye shape, mean absolute curvature and variance of curvature were calculated. Results The color maps allowed staphyloma visualization as a ring of green color at the edge and as that of orange-red color at the bottom. Analyses of mean and variance of curvature revealed that eyes with myopic choroidal neovascularization tended to have relatively flat posterior poles with smooth surfaces, while eyes with chorioretinal atrophy exhibited a steep, curved shape with an undulated surface (P<0.001). Furthermore, eyes with staphylomas and those without clearly differed in terms of mean curvature and the variance of curvature: 98.4% of eyes with staphylomas had mean curvature ≥7.8×10−5 [1/µm] and variance of curvature ≥0.26×10−8 [1/µm]. Conclusions We established a novel method to analyze posterior pole shape by using OCT images to construct curvature maps. Our quantitative analysis revealed that fundus shape is associated with myopic complications. These values were also effective in distinguishing eyes with staphylomas from those without. This tool for the quantitative evaluation of eye shape should facilitate future research of myopic complications. PMID:25259853
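The quantities reported above, mean absolute curvature and variance of curvature along the segmented Bruch's membrane, can be computed from a segmented depth profile. The discrete-derivative formula below is the standard signed curvature of a sampled curve, used here as a plausible stand-in rather than the authors' exact implementation.

```python
# Sketch: curvature summary of one segmented B-scan (z = depth vs. lateral x).
import numpy as np

def signed_curvature(z, dx=1.0):
    """kappa = z'' / (1 + z'^2)^(3/2) for a curve z(x) sampled every dx."""
    dz = np.gradient(z, dx)
    d2z = np.gradient(dz, dx)
    return d2z / (1.0 + dz**2) ** 1.5

def curvature_summary(z, dx=1.0):
    """Mean absolute curvature and variance of curvature for one scan line."""
    k = signed_curvature(z, dx)
    return np.mean(np.abs(k)), np.var(k)
```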
NASA Technical Reports Server (NTRS)
2005-01-01
[figure removed for brevity, see original site]
The THEMIS VIS camera is capable of capturing color images of the Martian surface using five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from using multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. This false color image continues the northward trend through the Iani Chaos region. Compare this image to Monday's and Tuesday's. This image was collected during the Southern Fall season. Image information: VIS instrument. Latitude -0.1 Longitude 342.6 East (17.4 West). 19 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
NASA Technical Reports Server (NTRS)
2005-01-01
[figure removed for brevity, see original site]
The THEMIS VIS camera is capable of capturing color images of the Martian surface using five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from using multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. This false color image is located in a different part of Aureum Chaos. Compare the surface textures with yesterday's image. This image was collected during the Southern Fall season. Image information: VIS instrument. Latitude -4.1, Longitude 333.9 East (26.1 West). 35 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
Joint sparse coding based spatial pyramid matching for classification of color medical image.
Shi, Jun; Li, Yi; Zhu, Jie; Sun, Haojie; Cai, Yin
2015-04-01
Although color medical images are important in clinical practice, they are usually converted to grayscale for further processing in pattern recognition, resulting in loss of rich color information. The sparse coding based linear spatial pyramid matching (ScSPM) and its variants are popular for grayscale image classification, but cannot extract color information. In this paper, we propose a joint sparse coding based SPM (JScSPM) method for the classification of color medical images. A joint dictionary can represent both the color information in each color channel and the correlation between channels. Consequently, the joint sparse codes calculated from a joint dictionary can carry color information, and therefore this method can easily transform a feature descriptor originally designed for grayscale images to a color descriptor. A color hepatocellular carcinoma histological image dataset was used to evaluate the performance of the proposed JScSPM algorithm. Experimental results show that JScSPM provides significant improvements as compared with the majority voting based ScSPM and the original ScSPM for color medical image classification. Copyright © 2014 Elsevier Ltd. All rights reserved.
Example-Based Image Colorization Using Locality Consistent Sparse Representation.
Bo Li; Fuchen Zhao; Zhuo Su; Xiangguo Liang; Yu-Kun Lai; Rosin, Paul L
2017-11-01
Image colorization aims to produce a natural looking color image from a given gray-scale image, which remains a challenging problem. In this paper, we propose a novel example-based image colorization method exploiting a new locality consistent sparse representation. Given a single reference color image, our method automatically colorizes the target gray-scale image by sparse pursuit. For efficiency and robustness, our method operates at the superpixel level. We extract low-level intensity features, mid-level texture features, and high-level semantic features for each superpixel, which are then concatenated to form its descriptor. The collection of feature vectors for all the superpixels from the reference image composes the dictionary. We formulate colorization of target superpixels as a dictionary-based sparse reconstruction problem. Inspired by the observation that superpixels with similar spatial location and/or feature representation are likely to match spatially close regions from the reference image, we further introduce a locality promoting regularization term into the energy formulation, which substantially improves the matching consistency and subsequent colorization results. Target superpixels are colorized based on the chrominance information from the dominant reference superpixels. Finally, to further improve coherence while preserving sharpness, we develop a new edge-preserving filter for chrominance channels with the guidance from the target gray-scale image. To the best of our knowledge, this is the first work on sparse pursuit image colorization from single reference images. Experimental results demonstrate that our colorization method outperforms the state-of-the-art methods, both visually and quantitatively using a user study.
Tongue Color Analysis for Medical Application
Wang, Xingzheng; You, Jane
2013-01-01
An in-depth, systematic tongue color analysis system for medical applications is proposed. Using the tongue color gamut, tongue foreground pixels are first extracted and assigned to one of 12 colors representing this gamut. The ratio of each color over the entire image is calculated and forms a tongue color feature vector. In experiments on a large dataset consisting of 143 Healthy and 902 Disease samples (13 groups of more than 10 samples each and one miscellaneous group), a given tongue sample can be classified into one of these two classes with an average accuracy of 91.99%. Further testing showed that Disease samples can be split into three clusters, and within each cluster most if not all of the illnesses are distinguished from one another. In total, 11 illnesses have a classification rate greater than 70%. This demonstrates a relationship between the state of the human body and its tongue color. PMID:23737824
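The color-ratio feature described above reduces to a nearest-color assignment followed by a normalized count. In the sketch below, the 12 reference colors are arbitrary placeholders rather than the paper's tongue color gamut; only the assignment and ratio computation follow the text.

```python
# Sketch of a 12-color ratio feature vector from tongue foreground pixels.
import numpy as np

def color_ratio_features(pixels, reference_colors):
    """Assign each foreground pixel to its nearest reference color (RGB,
    Euclidean distance) and return the fraction of pixels per color."""
    d = np.linalg.norm(pixels[:, None, :] - reference_colors[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    counts = np.bincount(labels, minlength=len(reference_colors))
    return counts / counts.sum()

# usage: pixels is an (N, 3) array of tongue foreground RGB values,
# reference_colors a (12, 3) array representing the tongue color gamut.
```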
Memory for color reactivates color processing region.
Slotnick, Scott D
2009-11-25
Memory is thought to be constructive in nature, where features processed in different cortical regions are synthesized during retrieval. In an effort to support this constructive memory framework, the present functional magnetic resonance imaging study assessed whether memory for color reactivated color processing regions. During encoding, participants were presented with colored and gray abstract shapes. During retrieval, old and new shapes were presented in gray and participants responded 'old-colored', 'old-gray', or 'new'. Within color perception regions, color memory related activity was observed in the left fusiform gyrus, adjacent to the collateral sulcus. A retinotopic mapping analysis indicated this activity occurred within color processing region V8. The present feature specific evidence provides compelling support for a constructive view of memory.
Colorful Structure at Fine Scales
2017-09-07
These are the highest-resolution color images of any part of Saturn's rings, to date, showing a portion of the inner-central part of the planet's B Ring. The view is a mosaic of two images that show a region that lies between 61,300 and 65,600 miles (98,600 and 105,500 kilometers) from Saturn's center. This image is a natural color composite, created using images taken with red, green and blue spectral filters. The pale tan color is generally not perceptible with the naked eye in telescope views, especially given that Saturn has a similar hue. The material responsible for bestowing this color on the rings -- which are mostly water ice and would otherwise appear white -- is a matter of intense debate among ring scientists that will hopefully be settled by new in-situ observations before the end of Cassini's mission. The different ringlets seen here are part of what is called the "irregular structure" of the B ring. Cassini radio occultations of the rings have shown that these features have extremely sharp boundaries on even smaller scales (radially, or along the direction outward from Saturn) than the camera can resolve here. Closer to Saturn, the irregular structures become fuzzier and more rounded, less opaque, and their color contrast diminishes. The narrow ringlets in the middle of this scene are each about 25 miles (40 kilometers) wide, and the broader bands at right are about 200 to 300 miles (300 to 500 kilometers) across. It remains unclear exactly what causes the variable brightness of these ringlets and bands -- the basic brightness of the ring particles themselves, shadowing on their surfaces, their absolute abundance, and how densely the particles are packed, may all play a role. The second image (Figure 1) is a color-enhanced version. Blue colors represent areas where the spectrum at visible wavelengths is less reddish (meaning the spectrum is flatter toward red wavelengths), while red colors represent areas that are spectrally redder (meaning the spectrum has a steeper slope toward red wavelengths). Observations from the Voyager mission and Cassini's visual and infrared mapping spectrometer previously showed these color variations at lower resolution, but it was not known that such color contrasts would be this sharply defined down to the radial scale of a couple of miles or kilometers, as seen here. Analysis of additional images from this observation, taken using infrared spectral filters sensitive to absorption of light by water ice, indicates that the areas that appear more visibly reddish in the color-enhanced version are also richer in water ice. The third image (Figure 2) is a composite of the "true" and "enhanced" color images for easy comparison. This image was taken on July 6, 2017, with the Cassini spacecraft narrow-angle camera. The image was acquired on the sunlit side of the rings from a distance of 47,000 miles (76,000 kilometers) away from the area pictured. The image scale is about 2 miles (3 kilometers) per pixel. The phase angle, or sun-ring-spacecraft angle, is 90 degrees. https://photojournal.jpl.nasa.gov/catalog/PIA21628
Regression analysis for LED color detection of visual-MIMO system
NASA Astrophysics Data System (ADS)
Banik, Partha Pratim; Saha, Rappy; Kim, Ki-Doo
2018-04-01
Color detection from a light-emitting diode (LED) array using a smartphone camera is very difficult in a visual multiple-input multiple-output (visual-MIMO) system. In this paper, we propose a method to determine the LED color from a smartphone camera image by applying regression analysis. We employ a multivariate regression model to identify the LED color. After taking a picture of an LED array, we select the LED array region and detect each LED using an image processing algorithm. We then apply the k-means clustering algorithm to determine the number of potential colors for feature extraction of each LED. Finally, we apply the multivariate regression model to predict the color of the transmitted LEDs. We show results for three environmental light conditions: room light, low light (560 lux), and strong light (2450 lux). We compare the results of the proposed algorithm in terms of training and test R-square (%) values and the percentage closeness of transmitted and predicted colors, and we also report the number of distorted test data points based on a distortion bar graph in the CIE 1931 color space.
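The clustering and regression steps described above can be sketched with standard library calls. The feature construction below (sorted cluster-center colors concatenated per detected LED) is a simplification assumed for illustration, and all arrays are placeholders; scikit-learn is assumed to be available.

```python
# Sketch of k-means feature extraction plus multivariate linear regression.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

def led_features(led_pixels, n_colors=3):
    """Cluster one LED's pixels and use sorted cluster centers as features."""
    km = KMeans(n_clusters=n_colors, n_init=10, random_state=0).fit(led_pixels)
    centers = km.cluster_centers_[np.argsort(km.cluster_centers_[:, 0])]
    return centers.ravel()

# Training: X rows are per-LED features, Y rows are the transmitted RGB values.
# X_train = np.vstack([led_features(p) for p in training_led_pixel_sets])
# model = LinearRegression().fit(X_train, Y_train)
# predicted_rgb = model.predict(led_features(new_led_pixels).reshape(1, -1))
```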
NASA Technical Reports Server (NTRS)
2004-01-01
[figure removed for brevity, see original site]
Released 7 May 2004 This daytime visible color image was collected on May 30, 2002 during the Southern Fall season in Atlantis Chaos. The THEMIS VIS camera is capable of capturing color images of the martian surface using its five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from the use of multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. Image information: VIS instrument. Latitude -34.5, Longitude 183.6 East (176.4 West). 38 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
NASA Technical Reports Server (NTRS)
2005-01-01
[figure removed for brevity, see original site]
The THEMIS VIS camera is capable of capturing color images of the Martian surface using five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from using multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. This false color image of a portion of the Iani Chaos region was collected during the Southern Fall season. Image information: VIS instrument. Latitude -2.6 Longitude 342.4 East (17.6 West). 36 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
NASA Technical Reports Server (NTRS)
2004-01-01
[figure removed for brevity, see original site]
Released 12 May 2004 This daytime visible color image was collected on June 6, 2003 during the Southern Spring season near the South Polar Cap Edge. The THEMIS VIS camera is capable of capturing color images of the martian surface using its five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from the use of multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. Image information: VIS instrument. Latitude -77.8, Longitude 195 East (165 West). 38 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
Davis, Philip A.; Turner, Kenzie J.
2007-01-01
This map is a natural-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The natural colors were generated using calibrated red-, green-, and blue-wavelength Landsat image data, which were correlated with red, green, and blue values of corresponding picture elements in MODIS (Moderate Resolution Imaging Spectrometer) 'true color' mosaics of Afghanistan. These mosaics have been published on http://www.truecolorearth.com and modified to match more closely the Munsell colors of sampled surfaces. Peak elevations are derived from Shuttle Radar Topography Mission (SRTM) digital data, averaged over a pixel representing an area of 85 m2, and they are slightly lower than the highest corresponding local point. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
Natural-Color-Image Map of Quadrangle 3266, Ourzgan (519) and Moqur (520) Quadrangles, Afghanistan
Davis, Philip A.; Turner, Kenzie J.
2007-01-01
This map is a natural-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The natural colors were generated using calibrated red-, green-, and blue-wavelength Landsat image data, which were correlated with red, green, and blue values of corresponding picture elements in MODIS (Moderate Resolution Imaging Spectroradiometer) 'true color' mosaics of Afghanistan. These mosaics have been published on http://www.truecolorearth.com and modified to match more closely the Munsell colors of sampled surfaces. Peak elevations are derived from Shuttle Radar Topography Mission (SRTM) digital data, averaged over a pixel representing an area of 85 m2, and they are slightly lower than the highest corresponding local point. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
Natural-Color-Image Map of Quadrangle 3464, Shahrak (411) and Kasi (412) Quadrangles, Afghanistan
Davis, Philip A.; Turner, Kenzie J.
2007-01-01
This map is a natural-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The natural colors were generated using calibrated red-, green-, and blue-wavelength Landsat image data, which were correlated with red, green, and blue values of corresponding picture elements in MODIS (Moderate Resolution Imaging Spectroradiometer) 'true color' mosaics of Afghanistan. These mosaics have been published on http://www.truecolorearth.com and modified to match more closely the Munsell colors of sampled surfaces. Peak elevations are derived from Shuttle Radar Topography Mission (SRTM) digital data, averaged over a pixel representing an area of 85 m2, and they are slightly lower than the highest corresponding local point. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
Natural-Color-Image Map of Quadrangle 3362, Shin-Dand (415) and Tulak (416) Quadrangles, Afghanistan
Davis, Philip A.; Turner, Kenzie J.
2007-01-01
This map is a natural-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The natural colors were generated using calibrated red-, green-, and blue-wavelength Landsat image data, which were correlated with red, green, and blue values of corresponding picture elements in MODIS (Moderate Resolution Imaging Spectroradiometer) 'true color' mosaics of Afghanistan. These mosaics have been published on http://www.truecolorearth.com and modified to match more closely the Munsell colors of sampled surfaces. Peak elevations are derived from Shuttle Radar Topography Mission (SRTM) digital data, averaged over a pixel representing an area of 85 m2, and they are slightly lower than the highest corresponding local point. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
Natural-Color-Image Map of Quadrangle 3366, Gizab (513) and Nawer (514) Quadrangles, Afghanistan
Davis, Philip A.; Turner, Kenzie J.
2007-01-01
This map is a natural-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The natural colors were generated using calibrated red-, green-, and blue-wavelength Landsat image data, which were correlated with red, green, and blue values of corresponding picture elements in MODIS (Moderate Resolution Imaging Spectroradiometer) 'true color' mosaics of Afghanistan. These mosaics have been published on http://www.truecolorearth.com and modified to match more closely the Munsell colors of sampled surfaces. Peak elevations are derived from Shuttle Radar Topography Mission (SRTM) digital data, averaged over a pixel representing an area of 85 m2, and they are slightly lower than the highest corresponding local point. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
Selective document image data compression technique
Fu, C.Y.; Petrich, L.I.
1998-05-19
A method of storing information from filled-in form documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. Color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. A second two-color image of the filled-edge file is then generated by converting all pixels darker than a second threshold color value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a Huffman coding table unique to that image. The image file is also decimated to create a decimated-image file which can later be interpolated back, using a bilinear interpolation kernel, to produce a reconstructed image file. 10 figs.
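The two-threshold binarization and combination step can be illustrated with a short Python/NumPy sketch; the threshold values, array names, and the rule for merging the two binary images are assumptions for illustration, not the patented implementation.

```python
import numpy as np

def two_level_binarize(gray, threshold):
    """Return a two-color (0/255) image: pixels darker than the threshold
    become black, all other pixels become white."""
    return np.where(gray < threshold, 0, 255).astype(np.uint8)

def combine_two_color(scanned_gray, filled_edge_gray, t1=128, t2=160):
    """Binarize the scanned image and the filled-edge image with their own
    thresholds, then combine them so that a pixel that is black in either
    image stays black.  t1 and t2 are illustrative values."""
    first = two_level_binarize(scanned_gray, t1)
    second = two_level_binarize(filled_edge_gray, t2)
    return np.minimum(first, second)  # black (0) wins
```

Taking the element-wise minimum is one simple way to merge the scanned-image and filled-edge binarizations before the smoothing and compression stages described above.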
NASA Astrophysics Data System (ADS)
Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan
2016-06-01
Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed “digital color fusion microscopy” (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available.
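A minimal sketch of a wavelet-based luminance fusion in this spirit, using the PyWavelets package: the coarse (approximation) coefficients come from the luminance of the upsampled color image and the detail coefficients from the high-resolution holographic image, after which the color channels are reattached. The wavelet choice, decomposition level, and YCbCr conversion are illustrative assumptions; the published DCFM method is more involved.

```python
import numpy as np
import pywt

def rgb_to_ycbcr(rgb):
    # rgb scaled to [0, 1]; approximate ITU-R BT.601 conversion
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.169 * r - 0.331 * g + 0.500 * b
    cr =  0.500 * r - 0.419 * g - 0.081 * b
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * cr
    g = y - 0.344 * cb - 0.714 * cr
    b = y + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

def wavelet_color_fusion(holo_gray, lowres_rgb_upsampled, wavelet="db4", level=3):
    """Fuse the fine detail of the high-resolution holographic image with the
    coarse brightness and color of the lens-based color image."""
    y, cb, cr = rgb_to_ycbcr(lowres_rgb_upsampled)
    c_holo = pywt.wavedec2(holo_gray, wavelet, level=level)
    c_y = pywt.wavedec2(y, wavelet, level=level)
    fused = [c_y[0]] + list(c_holo[1:])   # low-pass from color, details from hologram
    y_fused = pywt.waverec2(fused, wavelet)
    y_fused = y_fused[:holo_gray.shape[0], :holo_gray.shape[1]]
    return ycbcr_to_rgb(y_fused, cb, cr)
```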
NASA Astrophysics Data System (ADS)
Grigoryan, Artyom M.; John, Aparna; Agaian, Sos S.
2017-03-01
The 2-D quaternion discrete Fourier transform (2-D QDFT) is the Fourier transform applied to color images when the images are represented in the quaternion space. Quaternions are four-dimensional hypercomplex numbers, so the quaternion representation treats the color of each pixel as a single unit: each color is a vector, which captures the merging effect of combining the primary colors. By contrast, color images are conventionally processed by applying an algorithm to each channel separately and then recomposing the color image from the processed channels. In this article, the alpha-rooting and zonal alpha-rooting methods are used with the 2-D QDFT. In alpha-rooting, the alpha-root of the transformed frequency values of the 2-D QDFT is taken before the inverse transform. In zonal alpha-rooting, the frequency spectrum of the 2-D QDFT is divided into zones and alpha-rooting is applied with a different alpha value in each zone. The choice of alpha values is optimized with a genetic algorithm. The visual perception of 3-D medical images is also improved by changing the reference gray line.
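For reference, a minimal sketch of conventional channel-by-channel alpha-rooting with the ordinary 2-D DFT — the baseline that the quaternion (2-D QDFT) approach improves upon — in Python/NumPy; the value of alpha and the input scaling to [0, 1] are illustrative assumptions.

```python
import numpy as np

def alpha_rooting(channel, alpha=0.92):
    """Alpha-rooting enhancement of one channel: scale every 2-D DFT
    coefficient by |F(u, v)|**(alpha - 1), then invert the transform.
    An alpha slightly below 1 boosts high-frequency content."""
    F = np.fft.fft2(channel)
    mag = np.abs(F)
    F_enh = F * np.power(mag + 1e-12, alpha - 1.0)
    return np.clip(np.real(np.fft.ifft2(F_enh)), 0.0, 1.0)

def enhance_rgb(image, alpha=0.92):
    """Channel-by-channel stand-in for the quaternion (2-D QDFT) version;
    image is a (rows, cols, 3) float array in [0, 1]."""
    return np.stack([alpha_rooting(image[..., c], alpha) for c in range(3)], axis=-1)
```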
NASA Technical Reports Server (NTRS)
2005-01-01
The THEMIS VIS camera is capable of capturing color images of the Martian surface using five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from using multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. This false color image of an old channel floor and surrounding highlands is located in the lower reach of Mawrth Valles. This image was collected during the Northern Spring season. Image information: VIS instrument. Latitude 25.7, Longitude 341.2 East (18.8 West). 35 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
NASA Astrophysics Data System (ADS)
Yu, Xuelian; Chen, Qian; Gu, Guohua; Ren, Jianle; Sui, Xiubao
2015-02-01
Designing an objective quality assessment for color-fused images is a demanding and challenging task. We propose four no-reference metrics based on human visual system characteristics for objectively evaluating the quality of false-color fused images. The perceived edge metric (PEM) is defined from a visual perception model and the color-image gradient similarity between the fused image and the source images. The perceptual contrast metric (PCM) is established by associating multi-scale contrast and a varying contrast sensitivity filter (CSF) with the color components. The image colorfulness metric (ICM) is constructed as a linear combination of the standard deviation and mean value over the fused image. The color comfort metric (CCM) is designed from the average saturation and the ratio of pixels with high and low saturation. Qualitative and quantitative experimental results demonstrate that the proposed metrics agree well with subjective perception.
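Of the four metrics, the image colorfulness metric is the simplest to sketch. The snippet below computes a colorfulness score as a weighted sum of the spread and mean of two opponent-color components; the opponent representation and the weights are illustrative assumptions, since the exact coefficients are not specified here.

import numpy as np

def image_colorfulness_metric(rgb, w_std=1.0, w_mean=0.3):
    """Illustrative colorfulness score: weighted sum of chroma spread and mean.

    Uses the opponent components rg = R - G and yb = 0.5*(R + G) - B; the
    weights w_std and w_mean are placeholders, not the paper's coefficients.
    """
    r, g, b = (rgb[..., c].astype(float) for c in range(3))
    rg = r - g
    yb = 0.5 * (r + g) - b
    spread = np.sqrt(np.var(rg) + np.var(yb))
    mean = np.sqrt(np.mean(rg) ** 2 + np.mean(yb) ** 2)
    return w_std * spread + w_mean * mean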
NASA Astrophysics Data System (ADS)
Beltrame, Francesco; Diaspro, Alberto; Fato, Marco; Martin, I.; Ramoino, Paola; Sobel, Irwin E.
1995-03-01
Confocal microscopy systems can be linked to 3D data-oriented devices for interactive navigation of the operator through a 3D object space. Such environments are sometimes called 'virtual reality' or 'augmented reality' systems. We consider optical confocal laser scanning microscopy images, in fluorescence with various excitations and emissions, and versus time. The aim of our study has been the quantitative spatial analysis of confocal data using the false-color composition technique. Starting from three 2D confocal fluorescence images at the same slice location in a given biological specimen, a single image representing all three parameters is generated by the false-color technique on an HP 9000/735 workstation connected to the confocal microscope. The color composite resulting from the mapping of the three parameters is displayed at a resolution of 24 bits per pixel. The operator may independently vary the mix of each of the three components in the false-color composite via three (R, G, B) mixing sliders. Furthermore, by using the pixel data in the three fluorescence component images, a 3D space containing the density distribution of these three parameters has been constructed. The histogram is displayed in stereo and can be used by the operator for clustering, through an original thresholding algorithm.
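The false-color composition itself reduces to mapping three grayscale fluorescence channels onto R, G and B with per-channel mixing weights, as in the sketch below (a generic re-creation of the slider-controlled composite, not the original workstation code).

import numpy as np

def false_color_composite(ch_r, ch_g, ch_b, mix=(1.0, 1.0, 1.0)):
    """Map three fluorescence channels onto R, G and B with adjustable mixing.

    `mix` plays the role of the three mixing sliders: each grayscale channel
    is scaled to [0, 1], weighted, and stacked into one color image.
    """
    def norm(c):
        c = c.astype(float)
        return (c - c.min()) / (np.ptp(c) + 1e-12)
    stack = np.dstack([m * norm(c) for c, m in zip((ch_r, ch_g, ch_b), mix)])
    return np.clip(stack, 0.0, 1.0)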
Multispectral analysis tools can increase utility of RGB color images in histology
NASA Astrophysics Data System (ADS)
Fereidouni, Farzad; Griffin, Croix; Todd, Austin; Levenson, Richard
2018-04-01
Multispectral imaging (MSI) is increasingly finding application in the study and characterization of biological specimens. However, the methods typically used come with challenges on both the acquisition and the analysis front. MSI can be slow and photon-inefficient, leading to long imaging times and possible phototoxicity and photobleaching. The resulting datasets can be large and complex, prompting the development of a number of mathematical approaches for segmentation and signal unmixing. We show that under certain circumstances, just three spectral channels provided by standard color cameras, coupled with multispectral analysis tools, including a more recent spectral phasor approach, can efficiently provide useful insights. These findings are supported with a mathematical model relating spectral bandwidth and spectral channel number to achievable spectral accuracy. The utility of 3-band RGB and MSI analysis tools is demonstrated on images acquired using brightfield and fluorescence techniques, as well as a novel microscopy approach employing UV-surface excitation. Supervised linear unmixing, automated non-negative matrix factorization and phasor analysis tools all provide useful results, with phasors generating particularly helpful spectral display plots for sample exploration.
NASA Astrophysics Data System (ADS)
Dunckel, Anne E.; Cardenas, M. Bayani; Sawyer, Audrey H.; Bennett, Philip C.
2009-12-01
Microbial mats have spatially heterogeneous structured communities that manifest visually through vibrant color zonation often associated with environmental gradients. We report the first use of high-resolution thermal infrared imaging to map temperature at four hot springs within the El Tatio Geyser Field, Chile. Thermal images with millimeter resolution show drastic variability and pronounced patterning in temperature, with changes on the order of 30°C within a square decimeter. Paired temperature and visual images show that zones with specific coloration occur within distinct temperature ranges. Unlike previous studies where maximum, minimum, and optimal temperatures for microorganisms are based on isothermally-controlled laboratory cultures, thermal imaging allows for mapping thousands of temperature values in a natural setting. This allows for efficiently constraining natural temperature bounds for visually distinct mat zones. This approach expands current understanding of thermophilic microbial communities and opens doors for detailed analysis of biophysical controls on microbial ecology.
Sun, X; Chen, K J; Berg, E P; Newman, D J; Schwartz, C A; Keller, W L; Maddock Carlin, K R
2014-02-01
The objective was to use digital color image texture features to predict troponin-T degradation in beef. Image texture features, including 88 gray level co-occurrence texture features, 81 two-dimensional fast Fourier transformation texture features, and 48 Gabor wavelet filter texture features, were extracted from color images of beef strip steaks (longissimus dorsi, n = 102) aged for 10 d, obtained using a digital camera and additional lighting. Steaks were designated degraded or not-degraded based on troponin-T degradation determined on d 3 and d 10 postmortem by immunoblotting. Statistical analysis (STEPWISE regression model) and artificial neural network (support vector machine model, SVM) methods were designed to classify protein degradation. The d 3 and d 10 STEPWISE models were 94% and 86% accurate, respectively, while the d 3 and d 10 SVM models were 63% and 71%, respectively, in predicting protein degradation in aged meat. STEPWISE and SVM models based on image texture features show potential to predict troponin-T degradation in meat.
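The gray-level co-occurrence portion of such a feature set can be computed with scikit-image as sketched below; the distances, angles and property list are illustrative choices, and the FFT and Gabor features used in the study are not shown.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_image, distances=(1, 2),
                  angles=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Extract a small set of gray-level co-occurrence texture features.

    A sketch of the co-occurrence part of the feature set; the study also
    used FFT and Gabor texture features before STEPWISE/SVM classification.
    """
    img = np.asarray(gray_image, dtype=np.uint8)
    glcm = graycomatrix(img, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    feats = {}
    for prop in ("contrast", "dissimilarity", "homogeneity", "energy", "correlation"):
        feats[prop] = graycoprops(glcm, prop).ravel()
    return feats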
Method for radiometric calibration of an endoscope's camera and light source
NASA Astrophysics Data System (ADS)
Rai, Lav; Higgins, William E.
2008-03-01
An endoscope is a commonly used instrument for performing minimally invasive visual examination of the tissues inside the body. A physician uses the endoscopic video images to identify tissue abnormalities. The images, however, are highly dependent on the optical properties of the endoscope and its orientation and location with respect to the tissue structure. The analysis of endoscopic video images is, therefore, purely subjective. Studies suggest that the fusion of endoscopic video images (providing color and texture information) with virtual endoscopic views (providing structural information) can be useful for assessing various pathologies for several applications: (1) surgical simulation, training, and pedagogy; (2) the creation of a database for pathologies; and (3) the building of patient-specific models. Such fusion requires both geometric and radiometric alignment of endoscopic video images in the texture space. Inconsistent estimates of texture/color of the tissue surface result in seams when multiple endoscopic video images are combined together. This paper (1) identifies the endoscope-dependent variables to be calibrated for objective and consistent estimation of surface texture/color and (2) presents an integrated set of methods to measure them. Results show that the calibration method can be successfully used to estimate objective color/texture values for simple planar scenes, whereas uncalibrated endoscopes performed very poorly for the same tests.
Color correction with blind image restoration based on multiple images using a low-rank model
NASA Astrophysics Data System (ADS)
Li, Dong; Xie, Xudong; Lam, Kin-Man
2014-03-01
We present a method that can handle the color correction of multiple photographs with blind image restoration simultaneously and automatically. We prove that the local colors of a set of images of the same scene exhibit the low-rank property locally both before and after a color-correction operation. This property allows us to correct all kinds of errors in an image under a low-rank matrix model without particular priors or assumptions. The possible errors may be caused by changes of viewpoint, large illumination variations, gross pixel corruptions, partial occlusions, etc. Furthermore, a new iterative soft-segmentation method is proposed for local color transfer using color influence maps. Due to the fact that the correct color information and the spatial information of images can be recovered using the low-rank model, more precise color correction and many other image-restoration tasks-including image denoising, image deblurring, and gray-scale image colorizing-can be performed simultaneously. Experiments have verified that our method can achieve consistent and promising results on uncontrolled real photographs acquired from the Internet and that it outperforms current state-of-the-art methods.
Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition.
Park, Chulhee; Kang, Moon Gi
2016-05-18
A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications with the advantages that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component of each RGB color channel. To overcome color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in NIR band and recovers hue and color saturation by decomposing the visible band component and the NIR band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors.
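The decomposition idea can be illustrated with a much simpler linear model: subtract a scaled copy of the N channel from each RGB channel. The leakage coefficients below are hypothetical placeholders; the paper instead estimates the visible and NIR components from the measured spectral characteristics of the MSFA sensor.

import numpy as np

def remove_nir_contribution(rgbn, k=(0.30, 0.25, 0.20)):
    """Subtract an estimated NIR component from each RGB channel.

    `rgbn` is an H x W x 4 array (R, G, B, N). The coefficients `k` are
    hypothetical per-channel NIR leakage factors, not values from the paper.
    """
    rgbn = rgbn.astype(float)
    nir = rgbn[..., 3]
    visible = np.empty_like(rgbn[..., :3])
    for c in range(3):
        visible[..., c] = np.clip(rgbn[..., c] - k[c] * nir, 0, None)
    return visible, nir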
Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition
Park, Chulhee; Kang, Moon Gi
2016-01-01
A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications with the advantages that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component of each RGB color channel. To overcome color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in NIR band and recovers hue and color saturation by decomposing the visible band component and the NIR band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors. PMID:27213381
NASA Astrophysics Data System (ADS)
Gong, Li-Hua; He, Xiang-Tao; Tan, Ru-Chao; Zhou, Zhi-Hong
2018-01-01
In order to obtain high-quality color images, it is important to keep the hue component unchanged while emphasize the intensity or saturation component. As a public color model, Hue-Saturation Intensity (HSI) model is commonly used in image processing. A new single channel quantum color image encryption algorithm based on HSI model and quantum Fourier transform (QFT) is investigated, where the color components of the original color image are converted to HSI and the logistic map is employed to diffuse the relationship of pixels in color components. Subsequently, quantum Fourier transform is exploited to fulfill the encryption. The cipher-text is a combination of a gray image and a phase matrix. Simulations and theoretical analyses demonstrate that the proposed single channel quantum color image encryption scheme based on the HSI model and quantum Fourier transform is secure and effective.
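Two classical ingredients of the scheme, the RGB-to-HSI conversion and the logistic-map sequence used for diffusion, can be sketched as below; the quantum Fourier transform stage is omitted, and the map parameters are illustrative, not the paper's keys.

import numpy as np

def rgb_to_hsi(rgb):
    """Convert an RGB image in [0, 1] to the HSI model (H in radians)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - np.minimum(np.minimum(r, g), b) / (i + 1e-12)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b <= g, theta, 2 * np.pi - theta)
    return h, s, i

def logistic_sequence(n, x0=0.3567, mu=3.99):
    """Chaotic logistic-map sequence used to diffuse pixel relationships."""
    x = np.empty(n)
    x[0] = x0
    for k in range(1, n):
        x[k] = mu * x[k - 1] * (1.0 - x[k - 1])
    return x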
Optimal chroma-like channel design for passive color image splicing detection
NASA Astrophysics Data System (ADS)
Zhao, Xudong; Li, Shenghong; Wang, Shilin; Li, Jianhua; Yang, Kongjin
2012-12-01
Image splicing is one of the most common image forgeries in our daily life and due to the powerful image manipulation tools, image splicing is becoming easier and easier. Several methods have been proposed for image splicing detection and all of them worked on certain existing color channels. However, the splicing artifacts vary in different color channels and the selection of color model is important for image splicing detection. In this article, instead of finding an existing color model, we propose a color channel design method to find the most discriminative channel which is referred to as optimal chroma-like channel for a given feature extraction method. Experimental results show that both spatial and frequency features extracted from the designed channel achieve higher detection rate than those extracted from traditional color channels.
Color image guided depth image super resolution using fusion filter
NASA Astrophysics Data System (ADS)
He, Jin; Liang, Bin; He, Ying; Yang, Jun
2018-04-01
Depth cameras are currently playing an important role in many areas. However, most of them can only obtain low-resolution (LR) depth images, whereas color cameras can easily provide high-resolution (HR) color images. Using a color image as a guide is an efficient way to obtain an HR depth image. In this paper, we propose a depth image super resolution (SR) algorithm, which uses an HR color image as a guide image and an LR depth image as input. We use a fusion of the guided filter and an edge-based joint bilateral filter to obtain the HR depth image. Our experimental results on the Middlebury 2005 datasets show that our method provides better quality in HR depth images both numerically and visually.
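A basic joint bilateral upsampling routine conveys the idea of guiding depth super-resolution with a high-resolution color image; it is a plain (and deliberately unoptimized) reference sketch rather than the paper's combined guided/edge-based fusion filter, and it assumes the guide image is normalized to [0, 1].

import numpy as np

def joint_bilateral_upsample(depth_lr, guide_hr, scale, sigma_s=2.0, sigma_r=0.1, radius=4):
    """Upsample a low-res depth map using a high-res guide image.

    For each high-res pixel, neighboring low-res depth samples are averaged
    with weights combining spatial distance and guide-image similarity.
    """
    H, W = guide_hr.shape[:2]
    h, w = depth_lr.shape
    guide = guide_hr.astype(float)
    if guide.ndim == 3:
        guide = guide.mean(axis=2)  # use the guide's luminance for simplicity
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            cy, cx = y / scale, x / scale            # center in LR coordinates
            y0, x0 = int(round(cy)), int(round(cx))
            wsum = vsum = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y0 + dy, x0 + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        gy = min(int(yy * scale), H - 1)
                        gx = min(int(xx * scale), W - 1)
                        ws = np.exp(-((cy - yy) ** 2 + (cx - xx) ** 2) / (2 * sigma_s ** 2))
                        wr = np.exp(-((guide[y, x] - guide[gy, gx]) ** 2) / (2 * sigma_r ** 2))
                        wsum += ws * wr
                        vsum += ws * wr * depth_lr[yy, xx]
            out[y, x] = vsum / (wsum + 1e-12)
    return out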
NASA Technical Reports Server (NTRS)
2005-01-01
The THEMIS VIS camera is capable of capturing color images of the Martian surface using five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from using multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. This false color image was collected during Southern Fall and shows part of the Aureum Chaos. Image information: VIS instrument. Latitude -3.6, Longitude 332.9 East (27.1 West). 35 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
NASA Technical Reports Server (NTRS)
2004-01-01
Released 13 May 2004. This nighttime visible color image was collected on November 26, 2002 during the Northern Summer season near the North Polar Cap Edge. The THEMIS VIS camera is capable of capturing color images of the martian surface using its five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from the use of multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. Image information: VIS instrument. Latitude 80, Longitude 43.2 East (316.8 West). 38 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
Edson, D.; Colvocoresses, Alden P.
1973-01-01
Remote-sensor images, including aerial and space photographs, are generally recorded on film, where the differences in density create the image of the scene. With panchromatic and multiband systems the density differences are recorded in shades of gray. On color or color infrared film, with the emulsion containing dyes sensitive to different wavelengths, a color image is created by a combination of color densities. The colors, however, can be separated by filtering or other techniques, and the color image reduced to monochromatic images in which each of the separated bands is recorded as a function of the gray scale.
Brain MR image segmentation using NAMS in pseudo-color.
Li, Hua; Chen, Chuanbo; Fang, Shaohong; Zhao, Shengrong
2017-12-01
Image segmentation plays a crucial role in various biomedical applications. In general, segmentation of brain Magnetic Resonance (MR) images is mainly used to represent the image with several homogeneous regions instead of pixels for surgical analysis and planning. This paper proposes a new approach for segmenting brain MR images by using pseudo-color-based segmentation with the Non-symmetry and Anti-packing Model with Squares (NAMS). First, the NAMS model is presented. The model represents the image with sub-patterns that preserve the image content while largely reducing data redundancy. Second, the key idea is to convert the original gray-scale brain MR image into a pseudo-colored image and then segment the pseudo-colored image with the NAMS model. The pseudo-colored image enhances the color contrast between different tissues in brain MR images, which improves both segmentation precision and direct visual distinction. Experimental results indicate that, compared with other brain MR image segmentation methods, the proposed NAMS-based pseudo-color segmentation method performs better in both segmentation precision and storage savings.
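The pseudo-coloring step on its own can be reproduced with a small piecewise-linear colormap, as sketched below; the specific blue-to-red ramp is an illustrative choice, not the mapping used in the paper, and the NAMS segmentation itself is not shown.

import numpy as np

def gray_to_pseudo_color(gray):
    """Map a grayscale MR slice to a simple blue-to-red pseudo-color image.

    Pseudo-coloring increases the apparent contrast between tissues before a
    segmentation step; this piecewise-linear colormap is illustrative only.
    """
    g = gray.astype(float)
    g = (g - g.min()) / (np.ptp(g) + 1e-12)
    # Control points of a small blue -> cyan -> green -> yellow -> red ramp.
    xp = np.array([0.00, 0.25, 0.50, 0.75, 1.00])
    red = np.interp(g, xp, [0.0, 0.0, 0.0, 1.0, 1.0])
    grn = np.interp(g, xp, [0.0, 1.0, 1.0, 1.0, 0.0])
    blu = np.interp(g, xp, [1.0, 1.0, 0.0, 0.0, 0.0])
    return np.dstack([red, grn, blu])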
Blood flow estimation in gastroscopic true-color images
NASA Astrophysics Data System (ADS)
Jacoby, Raffael S.; Herpers, Rainer; Zwiebel, Franz M.; Englmeier, Karl-Hans
1995-05-01
The assessment of blood flow in the gastrointestinal mucosa might be an important factor for the diagnosis and treatment of several diseases such as ulcers, gastritis, colitis, or early cancer. The quantity of blood flow is roughly estimated by computing the spatial hemoglobin distribution in the mucosa. The presented method enables a practical realization by calculating approximately the hemoglobin concentration based on a spectrophotometric analysis of endoscopic true-color images, which are recorded during routine examinations. A system model based on the reflectance spectroscopic law of Kubelka-Munk is derived which enables an estimation of the hemoglobin concentration by means of the color values of the images. Additionally, a transformation of the color values is developed in order to improve the luminance independence. Applying this transformation and estimating the hemoglobin concentration for each pixel of interest, the hemoglobin distribution can be computed. The obtained results are mostly independent of luminance. An initial validation of the presented method is performed by a quantitative estimation of the reproducibility.
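As a rough stand-in for the Kubelka-Munk-based estimate, a simple per-pixel hemoglobin index can be computed from the red/green ratio of the endoscopic image, as below; this log-ratio index is a commonly used approximation and is not the luminance-corrected model of the paper.

import numpy as np

def hemoglobin_index(rgb):
    """Rough per-pixel hemoglobin index from an endoscopic true-color image.

    Uses a widely cited IHb-style log ratio of red to green reflectance as a
    stand-in; the paper instead derives concentration from a Kubelka-Munk
    reflectance model with a luminance-independent color transform.
    """
    r = rgb[..., 0].astype(float) + 1.0   # +1 avoids log(0) on dark pixels
    g = rgb[..., 1].astype(float) + 1.0
    return 32.0 * np.log2(r / g)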
NASA Astrophysics Data System (ADS)
Perner, Petra
2017-03-01
Molecular image-based techniques are widely used in medicine to detect specific diseases. Visual inspection of the face is an important diagnostic issue, and analysis of the eye also plays an important role in detecting specific diseases. These are important topics in medicine, and their standardization by an automatic system is a new and challenging field for machine vision. Compared with iris recognition, iris diagnosis places much higher demands on image acquisition and interpretation of the iris. Iris diagnosis (iridology) is the investigation and analysis of the colored part of the eye, the iris, to discover factors that play an important role in the prevention and treatment of illness, as well as in the preservation of optimum health. An automatic system would pave the way for a much wider use of iris diagnosis for diagnosing illness and for individual health protection. In this paper, we describe our work toward an automatic iris diagnosis system. We describe image acquisition and the problems associated with it, and we explain different approaches to image acquisition and image preprocessing. We describe the image analysis method for detecting the iris and give the meta-model for image interpretation. Based on this model we show the many tasks for image analysis, which range from image-object feature analysis and spatial image analysis to color image analysis. Our first results for recognition of the iris are given. We describe how to detect the pupil and unwanted lamp spots, and we explain how to recognize orange and blue spots in the iris and match them against the topological map of the iris. Finally, we give an outlook on further work.
Automated thermal mapping techniques using chromatic image analysis
NASA Technical Reports Server (NTRS)
Buck, Gregory M.
1989-01-01
Thermal imaging techniques are introduced using a chromatic image analysis system and temperature sensitive coatings. These techniques are used for thermal mapping and surface heat transfer measurements on aerothermodynamic test models in hypersonic wind tunnels. Measurements are made on complex vehicle configurations in a timely manner and at minimal expense. The image analysis system uses separate wavelength filtered images to analyze surface spectral intensity data. The system was initially developed for quantitative surface temperature mapping using two-color thermographic phosphors but was found useful in interpreting phase change paint and liquid crystal data as well.
A database system to support image algorithm evaluation
NASA Technical Reports Server (NTRS)
Lien, Y. E.
1977-01-01
The design is given of an interactive image database system IMDB, which allows the user to create, retrieve, store, display, and manipulate images through the facility of a high-level, interactive image query (IQ) language. The query language IQ permits the user to define false color functions, pixel value transformations, overlay functions, zoom functions, and windows. The user manipulates the images through generic functions. The user can direct images to display devices for visual and qualitative analysis. Image histograms and pixel value distributions can also be computed to obtain a quantitative analysis of images.
Adaptive color demosaicing and false color removal
NASA Astrophysics Data System (ADS)
Guarnera, Mirko; Messina, Giuseppe; Tomaselli, Valeria
2010-04-01
Color interpolation solutions drastically influence the quality of the whole image generation pipeline, so they must guarantee the rendering of high quality pictures by avoiding typical artifacts such as blurring, zipper effects, and false colors. Moreover, demosaicing should avoid emphasizing typical artifacts of real sensors data, such as noise and green imbalance effect, which would be further accentuated by the subsequent steps of the processing pipeline. We propose a new adaptive algorithm that decides the interpolation technique to apply to each pixel, according to its neighborhood analysis. Edges are effectively interpolated through a directional filtering approach that interpolates the missing colors, selecting the suitable filter depending on edge orientation. Regions close to edges are interpolated through a simpler demosaicing approach. Thus flat regions are identified and low-pass filtered to eliminate some residual noise and to minimize the annoying green imbalance effect. Finally, an effective false color removal algorithm is used as a postprocessing step to eliminate residual color errors. The experimental results show how sharp edges are preserved, whereas undesired zipper effects are reduced, improving the edge resolution itself and obtaining superior image quality.
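The directional part of such an adaptive demosaicing scheme can be illustrated by the green interpolation at a red (or blue) Bayer site: horizontal and vertical gradients decide which neighbor pair is averaged. The sketch below is a simplified fragment that assumes an interior pixel, not the full pipeline with green-imbalance handling and false-color removal.

def interpolate_green_at_red(cfa, y, x):
    """Directionally interpolate the missing G value at a red Bayer site.

    Horizontal and vertical gradients of the neighboring green samples decide
    which pair of neighbors is averaged, preserving edges better than a plain
    bilinear estimate (a simplified form of the adaptive scheme described).
    `cfa` is the raw mosaic; (y, x) must be an interior red location.
    """
    left, right = float(cfa[y, x - 1]), float(cfa[y, x + 1])
    up, down = float(cfa[y - 1, x]), float(cfa[y + 1, x])
    dh, dv = abs(left - right), abs(up - down)
    if dh < dv:        # smoother horizontally: average the left/right greens
        return 0.5 * (left + right)
    if dv < dh:        # smoother vertically: average the up/down greens
        return 0.5 * (up + down)
    return 0.25 * (left + right + up + down)   # no dominant direction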
Attique, Muhammad; Gilanie, Ghulam; Hafeez-Ullah; Mehmood, Malik S.; Naweed, Muhammad S.; Ikram, Masroor; Kamran, Javed A.; Vitkin, Alex
2012-01-01
Characterization of tissues such as brain by using magnetic resonance (MR) images, and colorization of the gray-scale image, have been reported in the literature, along with their advantages and drawbacks. Here, we present two independent methods: (i) a novel colorization method to underscore the variability in brain MR images, indicative of the underlying physical density of bio tissue, and (ii) a segmentation method (both hard and soft segmentation) to characterize gray brain MR images. The segmented images are then transformed into color using the above-mentioned colorization method, yielding promising results for manual tracing. Our color transformation incorporates the voxel classification by matching the luminance of voxels of the source MR image and the provided color image and measuring the distance between them. The segmentation method is based on single-phase clustering for 2D and 3D image segmentation with a new auto centroid selection method, which divides the image into three distinct regions (gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF)) using prior anatomical knowledge. Results have been successfully validated on human T2-weighted (T2) brain MR images. The proposed method can potentially be applied to gray-scale images from other imaging modalities, bringing out additional diagnostic tissue information through the colorized image processing approach described. PMID:22479421
Demosaiced pixel super-resolution for multiplexed holographic color imaging
Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan
2016-01-01
To synthesize a holographic color image, one can sequentially take three holograms at different wavelengths, e.g., at red (R), green (G) and blue (B) parts of the spectrum, and digitally merge them. To speed up the imaging process by a factor of three, a Bayer color sensor-chip can also be used to demultiplex three wavelengths that simultaneously illuminate the sample and digitally retrieve individual sets of holograms using the known transmission spectra of the Bayer color filters. However, because the pixels of different channels (R, G, B) on a Bayer color sensor are not at the same physical location, conventional demosaicing techniques generate color artifacts in holographic imaging using simultaneous multi-wavelength illumination. Here we demonstrate that pixel super-resolution can be merged into the color de-multiplexing process to significantly suppress the artifacts in wavelength-multiplexed holographic color imaging. This new approach, termed Demosaiced Pixel Super-Resolution (D-PSR), generates color images that are similar in performance to sequential illumination at three wavelengths, and therefore improves the speed of holographic color imaging by 3-fold. The D-PSR method is broadly applicable to holographic microscopy applications where high-resolution imaging and multi-wavelength illumination are desired. PMID:27353242
Color Analysis of Periimplant Soft Tissues Focusing on Implant System: A Case Series.
Varoni, Elena M; Moltrasio, Giuseppe; Gargano, Marco; Ludwig, Nicola; Lodi, Giovanni; Scaringi, Riccardo
2017-04-01
To assess the impact of implant system on color harmonization of periimplant mucosa. In this case series, the color of periimplant mucosa was compared with the color of natural tooth gingiva. Seventeen intercanine implants were analyzed (11 bone level [BL], 6 tissue level [TL] implants). Colorimetric data, at 2, 4, and 6 mm from the gingival margin, were collected through fiber-optic reflectance spectroscopy, and color differences were calculated as ΔE. Dentists, dental students, and lay people performed an additional blinded visual color analysis on clinical images. Independently of implant system, the color of the periimplant mucosa was significantly different from gingiva (ΔE = 8.2 ± 0.7), being darker in the L* comparison (P ≤ 0.05). TL periimplant mucosa showed higher ΔE than BL (9.0 ± 1.0 vs 6.6 ± 0.8, respectively; P ≤ 0.05). Observers correctly identified where the implant was placed in about half of the cases, with no significant difference between implant systems. Within the limitations of this study, the color of periimplant soft tissues appears different from gingiva on spectroscopic analysis. The color discrepancy is higher with TL implants than with BL implants, although the difference may not be clinically significant.
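The reported color differences correspond to a CIELAB ΔE computation of the form sketched below (shown here as the Euclidean ΔE*ab; the example readings are hypothetical, and values above roughly 3.3 are often treated as clinically perceptible).

import numpy as np

def delta_e_ab(lab_1, lab_2):
    """CIELAB color difference ΔE*ab between two (L*, a*, b*) measurements."""
    lab_1 = np.asarray(lab_1, dtype=float)
    lab_2 = np.asarray(lab_2, dtype=float)
    return float(np.sqrt(np.sum((lab_1 - lab_2) ** 2)))

# Example: periimplant mucosa vs. reference gingiva (hypothetical values).
print(delta_e_ab((62.0, 18.5, 12.0), (68.0, 14.0, 10.5)))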
Comparison of lossless compression techniques for prepress color images
NASA Astrophysics Data System (ADS)
Van Assche, Steven; Denecker, Koen N.; Philips, Wilfried R.; Lemahieu, Ignace L.
1998-12-01
In the pre-press industry color images have both a high spatial and a high color resolution. Such images require a considerable amount of storage space and impose long transmission times. Data compression is desired to reduce these storage and transmission problems. Because of the high quality requirements in the pre-press industry only lossless compression is acceptable. Most existing lossless compression schemes operate on gray-scale images. In this case the color components of color images must be compressed independently. However, higher compression ratios can be achieved by exploiting inter-color redundancies. In this paper we present a comparison of three state-of-the-art lossless compression techniques which exploit such color redundancies: IEP (Inter-color Error Prediction) and a KLT-based technique, which are both linear color decorrelation techniques, and Interframe CALIC, which uses a non-linear approach to color decorrelation. It is shown that these techniques are able to exploit color redundancies and that color decorrelation can be done effectively and efficiently. The linear color decorrelators provide a considerable coding gain (about 2 bpp) on some typical prepress images. The non-linear interframe CALIC predictor does not yield better results, but the full interframe CALIC technique does.
Hyperspectral image reconstruction using RGB color for foodborne pathogen detection on agar plates
NASA Astrophysics Data System (ADS)
Yoon, Seung-Chul; Shin, Tae-Sung; Park, Bosoon; Lawrence, Kurt C.; Heitschmidt, Gerald W.
2014-03-01
This paper reports the latest development of a color vision technique for detecting colonies of foodborne pathogens grown on agar plates with a hyperspectral image classification model that was developed using full hyperspectral data. The hyperspectral classification model depended on reflectance spectra measured in the visible and near-infrared spectral range from 400 to 1,000 nm (473 narrow spectral bands). Multivariate regression methods were used to estimate and predict hyperspectral data from RGB color values. The six representative non-O157 Shiga-toxin producing Escherichia coli (STEC) serogroups (O26, O45, O103, O111, O121, and O145) were grown on Rainbow agar plates. A line-scan pushbroom hyperspectral image sensor was used to scan 36 agar plates grown with pure STEC colonies on each plate. The 36 hyperspectral images of the agar plates were divided in half to create training and test sets. The mean R-squared value for hyperspectral image estimation was about 0.98 in the spectral range between 400 and 700 nm for linear, quadratic and cubic polynomial regression models, and the detection accuracy of the hyperspectral image classification model with principal component analysis and k-nearest neighbors for the test set was up to 92% (99% with the original hyperspectral images). Thus, the results of the study suggest that color-based detection may be viable as a multispectral imaging solution without much loss of prediction accuracy compared to hyperspectral imaging.
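The RGB-to-spectrum estimation step amounts to multivariate polynomial regression, which can be sketched with scikit-learn as follows; the random arrays stand in for the actual calibration pixels and the 473-band spectra described above.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical training data: RGB triplets and their measured reflectance
# spectra (one row per pixel, 473 narrow bands as in the setup above).
rgb_train = np.random.rand(500, 3)
spectra_train = np.random.rand(500, 473)

# Quadratic polynomial regression from RGB to the full spectrum, one of the
# linear/quadratic/cubic models compared in the study.
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(rgb_train, spectra_train)

rgb_test = np.random.rand(10, 3)
spectra_pred = model.predict(rgb_test)   # shape (10, 473)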
Pires-de-Souza, Fernanda de Carvalho Panzeri; Garcia, Lucas da Fonseca Roberti; Roselino, Lourenço de Moraes Rego; Naves, Lucas Zago
2011-07-01
To assess the in situ color stability and the surface and tooth/restoration interface degradation of a silorane-based composite (P90, 3M ESPE) after accelerated artificial ageing (AAA), in comparison with other dimethacrylate monomer-based composites (Z250/Z350, 3M ESPE and Esthet-X, Dentsply). Class V cavities (25 mm(2) × 2 mm deep) were prepared in 48 bovine incisors, which were randomly allocated into 4 groups of 12 specimens each, according to the type of restorative material used. After polishing, 10 specimens were submitted to initial color readings (Easyshade, Vita) and 2 to analysis by scanning electron microscopy (SEM). Afterwards, the teeth were submitted to AAA for 384 h, which corresponds to 1 year of clinical use, after which new color readings and microscopic images were obtained. The values obtained for the color analysis were submitted to statistical analysis (1-way ANOVA, Tukey, p<0.05). With regard to color stability, it was verified that all the composites showed color alteration above the clinically acceptable level (ΔE ≥ 3.3), and that the silorane-based composite showed higher ΔE (18.6), with a statistically significant difference in comparison with the other composites (p<0.05). The SEM images showed small alterations for the dimethacrylate-based composites after AAA and extensive degradation for the silorane-based composite, with a rupture at the matrix/particle interface. It may be concluded that the silorane-based composite underwent greater alteration with regard to color stability and greater surface and tooth/restoration interface degradation after AAA.
Color enhancement of landsat agricultural imagery: JPL LACIE image processing support task
NASA Technical Reports Server (NTRS)
Madura, D. P.; Soha, J. M.; Green, W. B.; Wherry, D. B.; Lewis, S. D.
1978-01-01
Color enhancement techniques were applied to LACIE LANDSAT segments to determine if such enhancement can assist analysis in crop identification. The procedure involved increasing the color range by removing correlation between components. First, a principal component transformation was performed, followed by contrast enhancement to equalize component variances, followed by an inverse transformation to restore familiar color relationships. Filtering was applied to lower order components to reduce color speckle in the enhanced products. Use of single acquisition and multiple acquisition statistics to control the enhancement were compared, and the effects of normalization investigated. Evaluation is left to LACIE personnel.
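The enhancement procedure described (principal component transform, variance equalization, inverse transform) is essentially a decorrelation stretch, which can be sketched as follows; the target standard deviation is an arbitrary display-oriented choice, and the component filtering step is omitted.

import numpy as np

def decorrelation_stretch(image, target_sigma=50.0):
    """Decorrelation stretch of a multiband image (principal-component route).

    The bands are decorrelated with a principal component transform, each
    component is rescaled to a common variance, and the inverse transform
    restores familiar color relationships.
    """
    h, w, nb = image.shape
    X = image.reshape(-1, nb).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Forward PCA, equalize component variances, then invert the rotation.
    stretch = target_sigma / np.sqrt(np.maximum(eigvals, 1e-12))
    T = eigvecs @ np.diag(stretch) @ eigvecs.T
    out = Xc @ T + mean
    return out.reshape(h, w, nb)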
A study of glasses-type color CGH using a color filter considering reduction of blurring
NASA Astrophysics Data System (ADS)
Iwami, Saki; Sakamoto, Yuji
2009-02-01
We have developed a glasses-type color computer-generated hologram (CGH) using a color filter. The proposed glasses consist of two "lenses" made of overlapping holograms and color filters. The holograms, which are calculated to reconstruct images in each primary color, are divided into small areas, which we call cells, and superimposed on one hologram. In the same way, the colors of the filter correspond to the hologram cells. The device can be configured very simply without a complex optical system, and the configuration yields a small, lightweight system suitable for glasses. When the cells are small enough, the colors mix and reconstructed color images are observed; in addition, the color expression of the reconstructed images improves. However, using small cells blurs the reconstructed images for two reasons: (1) interference between cells due to correlation among them, and (2) reduced resolution caused by the size of each cell hologram. We are investigating how to make a hologram that yields high-resolution reconstructed color images without ghost images. In this paper, we discuss (1) the details of the proposed glasses-type color CGH, (2) the appropriate cell size for an eye system, (3) the effects of cell shape on the reconstructed images, and (4) a new method to reduce the blurring of the images.
A robust color image fusion for low light level and infrared images
NASA Astrophysics Data System (ADS)
Liu, Chao; Zhang, Xiao-hui; Hu, Qing-ping; Chen, Yong-kang
2016-09-01
Low-light-level and infrared color fusion technology has achieved great success in the field of night vision. The technology is designed to make hot targets in the fused image pop out with more intense colors, to render background details with a near-natural color appearance, and to improve target discovery, detection, and identification. Low-light-level images are very noisy under low illumination, and existing color fusion methods are easily affected by noise in the low-light-level channel: when the low-light-level image noise is large, the quality of the fused image decreases significantly, and targets in the infrared image can even be submerged by the noise. This paper proposes an adaptive color night-vision technique in which noise-evaluation parameters of the low-light-level image are introduced into the fusion process, improving the robustness of the color fusion. The color fusion results remain good in low-light situations, showing that this method can effectively improve the quality of fused low-light-level and infrared images under low-illumination conditions.
Luminance contours can gate afterimage colors and "real" colors.
Anstis, Stuart; Vergeer, Mark; Van Lier, Rob
2012-09-06
It has long been known that colored images may elicit afterimages in complementary colors. We have already shown (Van Lier, Vergeer, & Anstis, 2009) that one and the same adapting image may result in different afterimage colors, depending on the test contours presented after the colored image. The color of the afterimage depends on two adapting colors, those both inside and outside the test. Here, we further explore this phenomenon and show that the color-contour interactions shown for afterimage colors also occur for "real" colors. We argue that similar mechanisms apply for both types of stimulation.
Colorimetric consideration of transparencies for a typical LACIE scene
NASA Technical Reports Server (NTRS)
Juday, R. D. (Principal Investigator)
1979-01-01
The production film converter used to produce LACIE imagery is described as well as schemes designed to provide the analyst with operational film products. Two of these products are discussed from the standpoint of color theory. Colorimetric terminology is defined and the mathematical calculations are given. Topics covered include (1) history of product 1 and 3 algorithm development; (2) colorimetric assumptions for product 1 and 3 algorithms; (3) qualitative results from a colorimetric analysis of a typical LACIE scene; and (4) image-to-image color stability.
A robust human face detection algorithm
NASA Astrophysics Data System (ADS)
Raviteja, Thaluru; Karanam, Srikrishna; Yeduguru, Dinesh Reddy V.
2012-01-01
Human face detection plays a vital role in many applications such as video surveillance, managing a face image database, and human-computer interfaces, among others. This paper proposes a robust algorithm for face detection in still color images that works well even in a crowded environment. The algorithm uses a combination of skin-color histogram analysis, morphological processing, and geometrical analysis to detect human faces. To reinforce the accuracy of face detection, we further identify mouth and eye regions to establish the presence or absence of a face in a particular region of interest.
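The skin-color stage of such a detector is commonly implemented as chrominance thresholding followed by morphological cleanup, as in the OpenCV sketch below; the YCrCb bounds are generic starting values rather than the histogram-derived thresholds of the paper, and the geometrical eye/mouth verification is not shown.

import cv2
import numpy as np

def skin_mask(bgr_image):
    """Candidate skin regions via YCrCb thresholds plus morphological cleanup.

    The Cr/Cb bounds below are commonly used starting values, not the
    histogram-derived thresholds of the paper; geometrical analysis of the
    resulting blobs (eyes, mouth) would follow as a verification step.
    """
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=2)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel, iterations=2)
    return mask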
NASA Technical Reports Server (NTRS)
Giddings, L.; Boston, S.
1976-01-01
A method for digitizing zone maps is presented, starting with colored images and producing a final one-channel digitized tape. This method automates the work previously done interactively on the Image-100 and Data Analysis System computers of the Johnson Space Center (JSC) Earth Observations Division (EOD). A color-coded map was digitized through color filters on a scanner to form a digital tape in LARSYS-2 or JSC Universal format. The taped image was classified by the EOD LARSYS program on the basis of training fields included in the image. Numerical values were assigned to all pixels in a given class, and the resulting coded zone map was written on a LARSYS or Universal tape. A unique spatial filter option permitted zones to be made homogeneous and edges of zones to be abrupt transitions from one zone to the next. A zoom option allowed the output image to have arbitrary dimensions in terms of number of lines and number of samples on a line. Printouts of the computer program are given and the images that were digitized are shown.
Ganymede in Visible and Infrared Light
NASA Technical Reports Server (NTRS)
2007-01-01
This montage compares New Horizons' best views of Ganymede, Jupiter's largest moon, gathered with the spacecraft's Long Range Reconnaissance Imager (LORRI) and its infrared spectrometer, the Linear Etalon Imaging Spectral Array (LEISA). LEISA observes its targets in more than 200 separate wavelengths of infrared light, allowing detailed analysis of their surface composition. The LEISA image shown here combines just three of these wavelengths -- 1.3, 1.8 and 2.0 micrometers -- to highlight differences in composition across Ganymede's surface. Blue colors represent relatively clean water ice, while brown colors show regions contaminated by dark material. The right panel combines the high-resolution grayscale LORRI image with the color-coded compositional information from the LEISA image, producing a picture that combines the best of both data sets. The LEISA and LORRI images were taken at 9:48 and 10:01 Universal Time, respectively, on February 27, 2007, from a range of 3.5 million kilometers (2.2 million miles). The longitude of the disk center is 38 degrees west. With a diameter of 5,268 kilometers (3,273 miles), Ganymede is the largest satellite in the solar system.
NASA Astrophysics Data System (ADS)
El-Saba, A. M.; Alam, M. S.; Surpanani, A.
2006-05-01
Important aspects of automatic pattern recognition systems are their ability to efficiently discriminate and detect proper targets with low false alarms. In this paper we extend the application of passive imaging polarimetry to effectively discriminate and detect different-color targets of identical shape using a color-blind imaging sensor. For this case study we demonstrate that traditional color-blind, polarization-insensitive imaging sensors that rely only on the spatial distribution of targets suffer from high false detection rates, especially in scenarios where multiple identically shaped targets are present. On the other hand, we show that color-blind polarization-sensitive imaging sensors can successfully and efficiently discriminate and detect true targets based on their color only. We highlight the main advantages of using our proposed polarization-encoded imaging sensor.
Lightness modification of color image for protanopia and deuteranopia
NASA Astrophysics Data System (ADS)
Tanaka, Go; Suetake, Noriaki; Uchino, Eiji
2010-01-01
In multimedia content, colors play important roles in conveying visual information. However, color information cannot always be perceived uniformly by all people. People with a color vision deficiency, such as dichromacy, cannot recognize and distinguish certain color combinations. In this paper, an effective lightness modification method, which enables barrier-free color vision for people with dichromacy, especially protanopia or deuteranopia, while preserving the color information in the original image for people with standard color vision, is proposed. In the proposed method, an optimization problem concerning lightness components is first defined by considering color differences in an input image. Then a perceptible and comprehensible color image for both protanopes and viewers with no color vision deficiency or both deuteranopes and viewers with no color vision deficiency is obtained by solving the optimization problem. Through experiments, the effectiveness of the proposed method is illustrated.
Patel, Samir N.; Klufas, Michael A.; Ryan, Michael C.; Jonas, Karyn E.; Ostmo, Susan; Martinez-Castellanos, Maria Ana; Berrocal, Audina M.; Chiang, Michael F.; Chan, R.V. Paul
2016-01-01
Purpose: To examine the utility of fluorescein angiography (FA) in identification of the macular center and the diagnosis of zone in patients with retinopathy of prematurity (ROP). Design: Validity and reliability analysis of diagnostic tools. Methods: 32 sets (16 color fundus photographs; 16 color fundus photographs paired with the corresponding FA) of wide-angle retinal images obtained from 16 eyes of eight infants with ROP were compiled on a secure web site. Nine ROP experts (3 pediatric ophthalmologists; 6 vitreoretinal surgeons) participated in the study. For each image set, experts identified the macular center and provided a diagnosis of zone. Main Outcome Measures: (1) sensitivity and specificity of zone diagnosis; (2) "computer-facilitated diagnosis of zone," based on precise measurement of the macular center, optic disc center, and peripheral ROP. Results: Computer-facilitated diagnosis of zone agreed with the expert's diagnosis of zone in 28/45 (62%) cases using color fundus photographs and in 31/45 (69%) cases using FA. Mean (95% CI) sensitivity for detection of zone I by experts, compared to a consensus reference standard diagnosis, when interpreting the color fundus images alone versus the color fundus photographs and FA was 47% (35.3%-59.3%) and 61.1% (48.9%-72.4%), respectively (t(9) ≥ 2.063, p = 0.073). Conclusions: There is a marginally significant difference in zone diagnosis when using color fundus photographs compared to using color fundus photographs and the corresponding fluorescein angiograms. There is inconsistency between traditional zone diagnosis (based on ophthalmoscopic exam and image review) and computer-facilitated diagnosis of zone. PMID:25637180
Revisiting measuring colour gamut of the color-reproducing system: interpretation aspects
NASA Astrophysics Data System (ADS)
Sysuev, I. A.; Varepo, L. G.; Trapeznikova, O. V.
2018-04-01
According to the ISO standard, the volume of the color gamut body is used to evaluate color reproduction quality. This volume describes the number of colors that lie within a certain region of the color space. There are methods for evaluating the reproduction quality of a multi-colour image using numerical integration, but this approach does not provide high accuracy of analysis. The task of increasing the accuracy of color reproduction evaluation therefore remains relevant. To determine the color mass of a region of the color space, it is suggested to select the required color density values from a map corresponding to a given degree of sampling, avoiding explicit mathematical integration; this reflects the practical significance and novelty of the solution.
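One practical way to compute a gamut body volume from measured patches, shown here purely as an illustration of the quantity being discussed rather than the authors' map-based procedure, is the convex hull of the CIELAB samples:

import numpy as np
from scipy.spatial import ConvexHull

def gamut_volume(lab_samples):
    """Volume of the color gamut body spanned by measured CIELAB samples.

    `lab_samples` is an N x 3 array of (L*, a*, b*) patch measurements; the
    convex hull volume is one practical alternative to numerical integration
    over a sampled color-density map.
    """
    pts = np.asarray(lab_samples, dtype=float)
    return ConvexHull(pts).volume

# Example with hypothetical patch measurements (L* in [0, 100], a*/b* in [-100, 100]).
samples = np.random.rand(200, 3) * np.array([100.0, 200.0, 200.0]) - np.array([0.0, 100.0, 100.0])
print(gamut_volume(samples))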
Development of an adaptive bilateral filter for evaluating color image difference
NASA Astrophysics Data System (ADS)
Wang, Zhaohui; Hardeberg, Jon Yngve
2012-04-01
Spatial filtering, which aims to mimic the contrast sensitivity function (CSF) of the human visual system (HVS), has previously been combined with color difference formulae for measuring color image reproduction errors. These spatial filters attenuate imperceptible information in images, unfortunately including high frequency edges, which are believed to be crucial in the process of scene analysis by the HVS. The adaptive bilateral filter represents a novel approach, which avoids the undesirable loss of edge information introduced by CSF-based filtering. The bilateral filter employs two Gaussian smoothing filters in different domains, i.e., spatial domain and intensity domain. We propose a method to decide the parameters, which are designed to be adaptive to the corresponding viewing conditions, and the quantity and homogeneity of information contained in an image. Experiments and discussions are given to support the proposal. A series of perceptual experiments were conducted to evaluate the performance of our approach. The experimental sample images were reproduced with variations in six image attributes: lightness, chroma, hue, compression, noise, and sharpness/blurriness. The Pearson's correlation values between the model-predicted image difference and the observed difference were employed to evaluate the performance, and compare it with that of spatial CIELAB and image appearance model.
Multi-color electron microscopy by element-guided identification of cells, organelles and molecules.
Scotuzzi, Marijke; Kuipers, Jeroen; Wensveen, Dasha I; de Boer, Pascal; Hagen, Kees C W; Hoogenboom, Jacob P; Giepmans, Ben N G
2017-04-07
Cellular complexity is unraveled at nanometer resolution using electron microscopy (EM), but interpretation of macromolecular functionality is hampered by the difficulty in interpreting grey-scale images and the unidentified molecular content. We perform large-scale EM on mammalian tissue complemented with energy-dispersive X-ray analysis (EDX) to allow EM-data analysis based on elemental composition. Endogenous elements, labels (gold and cadmium-based nanoparticles) as well as stains are analyzed at ultrastructural resolution. This provides a wide palette of colors to paint the traditional grey-scale EM images for composition-based interpretation. Our proof-of-principle application of EM-EDX reveals that endocrine and exocrine vesicles exist in single cells in Islets of Langerhans. This highlights how elemental mapping reveals unbiased biomedical relevant information. Broad application of EM-EDX will further allow experimental analysis on large-scale tissue using endogenous elements, multiple stains, and multiple markers and thus brings nanometer-scale 'color-EM' as a promising tool to unravel molecular (de)regulation in biomedicine.
Multi-color electron microscopy by element-guided identification of cells, organelles and molecules
Scotuzzi, Marijke; Kuipers, Jeroen; Wensveen, Dasha I.; de Boer, Pascal; Hagen, Kees (C.) W.; Hoogenboom, Jacob P.; Giepmans, Ben N. G.
2017-01-01
Cellular complexity is unraveled at nanometer resolution using electron microscopy (EM), but interpretation of macromolecular functionality is hampered by the difficulty in interpreting grey-scale images and the unidentified molecular content. We perform large-scale EM on mammalian tissue complemented with energy-dispersive X-ray analysis (EDX) to allow EM-data analysis based on elemental composition. Endogenous elements, labels (gold and cadmium-based nanoparticles) as well as stains are analyzed at ultrastructural resolution. This provides a wide palette of colors to paint the traditional grey-scale EM images for composition-based interpretation. Our proof-of-principle application of EM-EDX reveals that endocrine and exocrine vesicles exist in single cells in Islets of Langerhans. This highlights how elemental mapping reveals unbiased biomedical relevant information. Broad application of EM-EDX will further allow experimental analysis on large-scale tissue using endogenous elements, multiple stains, and multiple markers and thus brings nanometer-scale ‘color-EM’ as a promising tool to unravel molecular (de)regulation in biomedicine. PMID:28387351
NASA Technical Reports Server (NTRS)
1982-01-01
Model II Multispectral Camera is an advanced aerial camera that provides optimum enhancement of a scene by recording spectral signatures of ground objects only in narrow, preselected bands of the electromagnetic spectrum. Its photos have applications in such areas as agriculture, forestry, water pollution investigations, soil analysis, geologic exploration, water depth studies and camouflage detection. The target scene is simultaneously photographed in four separate spectral bands. Using a multispectral viewer such as its Model 75, Spectral Data creates a color image from the black-and-white positives taken by the camera. With this optical image analysis unit, all four bands are superimposed in accurate registration and illuminated with combinations of blue, green, red, and white light. The best color combination for displaying the target object is selected and printed. Spectral Data Corporation produces several types of remote sensing equipment and also provides aerial surveys, image processing and analysis, and a number of other remote sensing services.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kimpe, T; Marchessoux, C; Rostang, J
Purpose: Use of color images in medical imaging has increased significantly over the last few years. As of today there is no agreed standard on how color information should be visualized on medical color displays, resulting in large variability of color appearance and making consistency and quality assurance a challenge. This paper presents a proposal for an extension of DICOM GSDF towards color. Methods: Visualization needs for several color modalities (multimodality imaging, nuclear medicine, digital pathology, quantitative imaging applications…) have been studied. On this basis, a proposal was made for the desired color behavior of medical color display systems, and its behavior and effect on color medical images were analyzed. Results: Several medical color modalities could benefit from perceptually linear color visualization for reasons similar to those that motivated GSDF for greyscale medical images. An extension of the GSDF (Greyscale Standard Display Function) to color is proposed: CSDF (Color Standard Display Function). CSDF is based on deltaE2000 and offers perceptually linear color behavior. CSDF uses GSDF as its neutral grey behavior. A comparison between sRGB/GSDF and CSDF confirms that CSDF significantly improves perceptual color linearity. Furthermore, results also indicate that, because of the improved perceptual linearity, CSDF has the potential to increase the perceived contrast of clinically relevant color features. Conclusion: There is a need for an extension of GSDF towards color visualization in order to guarantee consistency and quality. A first proposal (CSDF) for such an extension has been made. The behavior of a CSDF-calibrated display has been characterized and compared with sRGB/GSDF behavior. First results indicate that CSDF could have a positive influence on the perceived contrast of clinically relevant color features and could offer benefits for quantitative imaging applications. The authors are employees of Barco Healthcare.
Color in graphic design: an analysis of meaning and trends
NASA Astrophysics Data System (ADS)
Martinson, Barbara; Waldron, Carol C.
2002-06-01
Graphic design is visual communication through the selection, arrangement, and presentation of words and images, most often for the printed page, which offers the designer almost limitless options for color use. The objective of this project is to identify patterns of color use. Ethnographic content analysis was used to document color use in annual reports represented in two publications, Print and Communication Arts, 1993-2000. The analysis focuses on the selection, combination, and contrast of hues and on their use with achromatic values. An analysis of the entire sample indicates that one-third of the annual reports used a palette that included black, white, and a hue from quadrant one (red to yellow). Nearly one-fifth of the designs used black, white, and colors from quadrants one and three (cyan to blue). The large samples for Technology, Health Sciences, Financial, and Civic organizations follow the first pattern. The Food Service, Business Products and Services, and Transportation industries favor the second pattern.
Yoon, Bora; Park, In Sung; Shin, Hyora; Park, Hye Jin; Lee, Chan Woo; Kim, Jong-Man
2013-05-14
Inkjet-printed paper-based volatile organic compound (VOC) sensor strips imaged with polydiacetylenes (PDAs) are developed. A microemulsion ink containing bisurethane-substituted diacetylene (DA) monomers, 4BCMU, was inkjet printed onto paper using a conventional inkjet office printer. UV irradiation of the printed image allowed fabrication of blue-colored poly-4BCMU on the paper and the polymer was found to display colorimetric responses to VOCs. Interestingly, a blue-to-yellow color change was observed when the strip was exposed to chloroform vapor, which was accompanied by the generation of green fluorescence. The principal component analysis plot of the color and fluorescence images of the VOC-exposed polymers allowed a more precise discrimination of VOC vapors. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Advanced imaging techniques in brain tumors
2009-01-01
Abstract Perfusion, permeability and magnetic resonance spectroscopy (MRS) are now widely used in the research and clinical settings. In the clinical setting, qualitative, semi-quantitative and quantitative approaches such as review of color-coded maps to region of interest analysis and analysis of signal intensity curves are being applied in practice. There are several pitfalls with all of these approaches. Some of these shortcomings are reviewed, such as the relative low sensitivity of metabolite ratios from MRS and the effect of leakage on the appearance of color-coded maps from dynamic susceptibility contrast (DSC) magnetic resonance (MR) perfusion imaging and what correction and normalization methods can be applied. Combining and applying these different imaging techniques in a multi-parametric algorithmic fashion in the clinical setting can be shown to increase diagnostic specificity and confidence. PMID:19965287
Color difference threshold of chromostereopsis induced by flat display emission.
Ozolinsh, Maris; Muizniece, Kristine
2015-01-01
The study of chromostereopsis has gained attention against the backdrop of the everyday use of computer displays. In this context, we analyze the illusory depth sense using planar color images presented on a computer screen. We psychometrically determine the color difference threshold required to induce an illusory sense of depth using a constant stimuli paradigm. Isoluminant stimuli, aligned along the blue-red line of the computer display's CIE xyY color space, are presented on the screen. Stereo disparity is generated by increasing the color difference between the central and surrounding areas of the stimuli, with both areas consisting of random dots on a black background. The observed change in the illusory depth sense, and thus in stereo disparity, is validated using the "center-of-gravity" model. The induced illusory depth effect undergoes color reversal upon varying the binocular lateral eye pupil covering conditions (lateral or medial). Analysis of the retinal image point spread function for the display's red and blue pixel radiation validates the change in chromostereopsis retinal disparity achieved by increasing the color difference, as well as the chromostereopsis color reversal caused by varying the eye pupil covering conditions.
Color transfer method preserving perceived lightness
NASA Astrophysics Data System (ADS)
Ueda, Chiaki; Azetsu, Tadahiro; Suetake, Noriaki; Uchino, Eiji
2016-06-01
Color transfer originally proposed by Reinhard et al. is a method to change the color appearance of an input image by using the color information of a reference image. The purpose of this study is to modify color transfer so that it works well even when the scenes of the input and reference images are not similar. Concretely, a color transfer method with lightness correction and color gamut adjustment is proposed. The lightness correction is applied to preserve the perceived lightness which is explained by the Helmholtz-Kohlrausch (H-K) effect. This effect is the phenomenon that vivid colors are perceived as brighter than dull colors with the same lightness. Hence, when the chroma is changed by image processing, the perceived lightness is also changed even if the physical lightness is preserved after the image processing. In the proposed method, by considering the H-K effect, color transfer that preserves the perceived lightness after processing is realized. Furthermore, color gamut adjustment is introduced to address the color gamut problem, which is caused by color space conversion. The effectiveness of the proposed method is verified by performing some experiments.
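For reference, the baseline Reinhard-style statistics transfer that the paper builds on can be sketched as follows. This assumes float RGB images in [0, 1] and uses CIELAB (via scikit-image) in place of the lαβ space of the original method, and it omits the proposed H-K lightness correction and gamut adjustment.

```python
import numpy as np
from skimage import color

def reinhard_transfer(input_rgb, reference_rgb):
    """Match per-channel mean and std of the input to the reference in CIELAB.

    Both images are float RGB arrays in [0, 1]. This is only the baseline
    statistics transfer; the H-K lightness correction and gamut adjustment
    proposed in the paper are not included.
    """
    src = color.rgb2lab(input_rgb)
    ref = color.rgb2lab(reference_rgb)
    out = np.empty_like(src)
    for c in range(3):
        s_mean, s_std = src[..., c].mean(), src[..., c].std()
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
        # Shift and scale each channel's statistics toward the reference.
        out[..., c] = (src[..., c] - s_mean) * (r_std / (s_std + 1e-8)) + r_mean
    return np.clip(color.lab2rgb(out), 0.0, 1.0)
```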
NASA Astrophysics Data System (ADS)
Liu, Changjiang; Cheng, Irene; Zhang, Yi; Basu, Anup
2017-06-01
This paper presents an improved multi-scale Retinex (MSR) based enhancement for aerial images under low visibility. In traditional multi-scale Retinex, three scales are commonly employed, which limits its application scenarios. We extend our research to a general-purpose enhancement method and design an MSR with more than three scales. Based on mathematical analysis and deduction, an explicit multi-scale representation is proposed that balances image contrast and color consistency. In addition, a histogram truncation technique is introduced as a post-processing step to remap the multi-scale Retinex output to the dynamic range of the display. Analysis of experimental results and comparisons with existing algorithms demonstrate the effectiveness and generality of the proposed method. Image quality assessment results confirm the accuracy of the proposed method with respect to both objective and subjective criteria.
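A minimal sketch of a multi-scale Retinex with an arbitrary number of scales, followed by a simple percentile-based histogram truncation to remap the output to the display range, is given below; the scale values and clipping percentiles are illustrative, not those derived in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(channel, sigmas=(15, 80, 250), eps=1e-6):
    """MSR for one channel: mean of log(I) - log(Gaussian_sigma * I) over scales."""
    channel = channel.astype(np.float64) + eps
    msr = np.zeros_like(channel)
    for sigma in sigmas:                      # any number of scales is allowed
        surround = gaussian_filter(channel, sigma)
        msr += np.log(channel) - np.log(surround + eps)
    return msr / len(sigmas)

def truncate_histogram(x, low_pct=1.0, high_pct=99.0):
    """Clip the Retinex output at the given percentiles and remap to [0, 255]."""
    lo, hi = np.percentile(x, [low_pct, high_pct])
    x = np.clip(x, lo, hi)
    return (255.0 * (x - lo) / (hi - lo + 1e-8)).astype(np.uint8)
```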
Optimizing morphology through blood cell image analysis.
Merino, A; Puigví, L; Boldú, L; Alférez, S; Rodellar, J
2018-05-01
Morphological review of the peripheral blood smear is still a crucial diagnostic aid as it provides relevant information related to the diagnosis and is important for selection of additional techniques. Nevertheless, the distinctive cytological characteristics of the blood cells are subjective and influenced by the reviewer's interpretation and, because of that, translating subjective morphological examination into objective parameters is a challenge. The use of digital microscopy systems has been extended in the clinical laboratories. As automatic analyzers have some limitations for abnormal or neoplastic cell detection, it is interesting to identify quantitative features through digital image analysis for morphological characteristics of different cells. Three main classes of features are used as follows: geometric, color, and texture. Geometric parameters (nucleus/cytoplasmic ratio, cellular area, nucleus perimeter, cytoplasmic profile, RBC proximity, and others) are familiar to pathologists, as they are related to the visual cell patterns. Different color spaces can be used to investigate the rich amount of information that color may offer to describe abnormal lymphoid or blast cells. Texture is related to spatial patterns of color or intensities, which can be visually detected and quantitatively represented using statistical tools. This study reviews current and new quantitative features, which can contribute to optimize morphology through blood cell digital image processing techniques. © 2018 John Wiley & Sons Ltd.
The Constancy of Colored After-Images
Zeki, Semir; Cheadle, Samuel; Pepper, Joshua; Mylonas, Dimitris
2017-01-01
We undertook psychophysical experiments to determine whether the color of the after-image produced by viewing a colored patch which is part of a complex multi-colored scene depends on the wavelength-energy composition of the light reflected from that patch. Our results show that it does not. The after-image, just like the color itself, depends on the ratio of light of different wavebands reflected from it and its surrounds. Hence, traditional accounts of after-images as being the result of retinal adaptation or the perceptual result of physiological opponency, are inadequate. We propose instead that the color of after-images is generated after colors themselves are generated in the visual brain. PMID:28539878
Using aerial photography and image analysis to measure changes in giant reed populations
USDA-ARS?s Scientific Manuscript database
A study was conducted along the Rio Grande in southwest Texas to evaluate color-infrared aerial photography combined with supervised image analysis to quantify changes in giant reed (Arundo donax L.) populations over a 6-year period. Aerial photographs from 2002 and 2008 of the same seven study site...
Estimation of color modification in digital images by CFA pattern change.
Choi, Chang-Hee; Lee, Hae-Yeoun; Lee, Heung-Kyu
2013-03-10
Extensive studies have been carried out for detecting image forgery such as copy-move, re-sampling, blurring, and contrast enhancement. Although color modification is a common forgery technique, there is no reported forensic method for detecting this type of manipulation. In this paper, we propose a novel algorithm for estimating color modification in images acquired from digital cameras when the images are modified. Most commercial digital cameras are equipped with a color filter array (CFA) for acquiring the color information of each pixel. As a result, the images acquired from such digital cameras include a trace from the CFA pattern. This pattern is composed of the basic red green blue (RGB) colors, and it is changed when color modification is carried out on the image. We designed an advanced intermediate value counting method for measuring the change in the CFA pattern and estimating the extent of color modification. The proposed method is verified experimentally by using 10,366 test images. The results confirmed the ability of the proposed method to estimate color modification with high accuracy. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
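The abstract does not spell out the "advanced intermediate value counting" method, but the underlying idea can be illustrated: demosaiced (interpolated) pixels tend to lie between their measured neighbors, so counting such intermediate pixels for each candidate 2x2 CFA offset indicates which lattice positions were interpolated, and a shift of that pattern after editing suggests color modification. The sketch below is a deliberately simplified, hypothetical version of that idea, not the authors' algorithm.

```python
import numpy as np

def intermediate_counts(green):
    """Count 'intermediate' pixels for each candidate 2x2 CFA offset.

    A pixel is counted when its value lies within the range of its two
    horizontal neighbors, which is typical of interpolated positions.
    The offset with the highest count is the most likely interpolated
    lattice; this is a simplified, hypothetical stand-in for the paper's
    advanced intermediate value counting method.
    """
    g = green.astype(np.int64)
    mid = np.zeros(g.shape, dtype=bool)
    mid[:, 1:-1] = ((g[:, 1:-1] >= np.minimum(g[:, :-2], g[:, 2:])) &
                    (g[:, 1:-1] <= np.maximum(g[:, :-2], g[:, 2:])))
    counts = {}
    for dy in (0, 1):
        for dx in (0, 1):
            # Restrict the mask to the lattice positions of this offset.
            counts[(dy, dx)] = int(mid[dy::2, dx::2].sum())
    return counts
```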
Sensor fusion of range and reflectance data for outdoor scene analysis
NASA Technical Reports Server (NTRS)
Kweon, In SO; Hebvert, Martial; Kanade, Takeo
1988-01-01
In recognizing objects in an outdoor scene, range and reflectance (or color) data provide complementary information. Results of experiments in recognizing outdoor scenes containing roads, trees, and cars are presented. The recognition program uses range and reflectance data obtained by a scanning laser range finder, as well as color data from a color TV camera. After segmentation of each image into primitive regions, models of objects are matched using various properties.
The Rotated Speeded-Up Robust Features Algorithm (R-SURF)
2014-06-01
Brédart, Serge; Cornet, Alyssa; Rakic, Jean-Marie
2014-01-01
Color-deficient (dichromat) and normal observers' recognition memory for colored and black-and-white natural scenes was evaluated through several parameters: the rate of recognition, discrimination (A'), response bias (B"D), response confidence, and the proportion of conscious recollections (Remember responses) among hits. At the encoding phase, 36 images of natural scenes were each presented for 1 sec. Half of the images were shown in color and half in black-and-white. At the recognition phase, these 36 pictures were intermixed with 36 new images. The participants' task was to indicate whether an image had been presented at the encoding phase or not, to rate their level of confidence in their response, and, in the case of a positive response, to classify the response as a Remember, a Know or a Guess response. Results indicated that accuracy, response discrimination, response bias and confidence ratings were higher for colored than for black-and-white images; this advantage for colored images was similar in both groups of participants. Rates of Remember responses were not higher for colored images than for black-and-white ones in either group. However, interestingly, Remember responses were significantly more often based on color information for colored than for black-and-white images in normal observers only, not in dichromats.
NASA Technical Reports Server (NTRS)
2004-01-01
Released 28 May 2004. This image was collected February 29, 2004, near the end of the southern summer season. The local time at the location of the image was about 2 pm. The image shows an area in the South Polar region. The THEMIS VIS camera is capable of capturing color images of the martian surface using its five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from the use of multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. Image information: VIS instrument. Latitude -84.7, Longitude 9.3 East (350.7 West). 38 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically or geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
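The compositing procedure described in the caption (select three single-filter grayscale images, contrast-enhance each, and assign them to the red, green, and blue planes) can be sketched as follows; the percentile stretch stands in for whatever contrast enhancement the THEMIS team actually applies.

```python
import numpy as np

def stretch(band, low_pct=2.0, high_pct=98.0):
    """Simple percentile contrast stretch of one grayscale filter image."""
    lo, hi = np.percentile(band, [low_pct, high_pct])
    return np.clip((band - lo) / (hi - lo + 1e-8), 0.0, 1.0)

def false_color_composite(band_r, band_g, band_b):
    """Assign three contrast-enhanced filter images to the R, G and B planes."""
    return np.dstack([stretch(band_r), stretch(band_g), stretch(band_b)])
```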
A novel false color mapping model-based fusion method of visual and infrared images
NASA Astrophysics Data System (ADS)
Qi, Bin; Kun, Gao; Tian, Yue-xin; Zhu, Zhen-yu
2013-12-01
A fast and efficient image fusion method is presented to generate near-natural colors from panchromatic visual and thermal imaging sensors. First, a set of daytime color reference images is analyzed and a false color mapping principle is proposed according to human visual and emotional habits: object colors should remain invariant after color mapping operations, differences between infrared and visual images should be enhanced, and the background color should be consistent with the main scene content. Then a novel nonlinear color mapping model is given by introducing the geometric mean of the gray values of the input visual and infrared images together with a weighted-average algorithm. To determine the control parameters of the mapping model, boundary conditions are listed according to the mapping principle above. Fusion experiments show that the new fusion method achieves a near-natural appearance of the fused image, enhancing color contrasts and highlighting bright infrared objects when compared with the traditional TNO algorithm. Moreover, it has low complexity and is easy to implement in real time, so it is well suited to nighttime imaging apparatus.
Ramezani, Alireza; Ahmadieh, Hamid; Azarmina, Mohsen; Soheilian, Masoud; Dehghan, Mohammad H; Mohebbi, Mohammad R
2009-12-01
To evaluate the validity of a new method for the quantitative analysis of fundus or angiographic images using Photoshop 7.0 (Adobe, USA) software by comparing it with clinical evaluation. Four hundred and eighteen fundus and angiographic images of diabetic patients were evaluated by three retina specialists and then computationally using Photoshop 7.0 software. Four variables were selected for comparison: the amount of hard exudates (HE) on color pictures, the amount of HE on red-free pictures, the severity of leakage, and the size of the foveal avascular zone (FAZ). The coefficients of agreement (kappa) between the two methods for the amount of HE on color and red-free photographs were 85% (0.69) and 79% (0.59), respectively. The agreement for severity of leakage was 72% (0.46). For the evaluation of the FAZ size using the magic wand and lasso tools, the agreement was 54% (0.09) and 89% (0.77), respectively. Agreement in the estimation of the FAZ size by the magnetic lasso tool was excellent and was almost as good in the quantification of HE on color and on red-free images. Considering the agreement of this new technique for the measurement of variables in fundus images using Photoshop software with the clinical evaluation, this method seems to have sufficient validity to be used for the quantitative analysis of HE, leakage, and FAZ size on the angiograms of diabetic patients.
Color Sparse Representations for Image Processing: Review, Models, and Prospects.
Barthélemy, Quentin; Larue, Anthony; Mars, Jérôme I
2015-11-01
Sparse representations have been extended to deal with color images composed of three channels. A review of dictionary-learning-based sparse representations for color images is given here, detailing the differences between the models and comparing their results on real and simulated data. These models are considered in a unifying framework based on the degrees of freedom of the linear filtering/transformation of the color channels. Moreover, this allows it to be shown that the scalar quaternionic linear model is equivalent to constrained matrix-based color filtering, which highlights the filtering implicitly applied through this model. Based on this reformulation, a new color filtering model is introduced that uses unconstrained filters. In this model, the spatial morphologies of color images are encoded by atoms, and colors are encoded by color filters. Color variability is no longer captured by increasing the dictionary size but by the color filters, which gives an efficient color representation.
Specialized Color Targets for Spectral Reflectance Reconstruction of Magnified Images
NASA Astrophysics Data System (ADS)
Kruschwitz, Jennifer D. T.
Digital images are used almost exclusively instead of film to capture visual information across many scientific fields. The colorimetric color representation within these digital images can be relayed from the digital counts produced by the camera with the use of a known color target. In image capture of magnified images, there is currently no reliable color target that can be used at multiple magnifications and give the user a solid understanding of the color ground truth within those images. The first part of this dissertation included the design, fabrication, and testing of a color target produced with optical interference coated microlenses for use in an off-axis illumination, compound microscope. An ideal target was designed to increase the color gamut for colorimetric imaging and provide the necessary "Block Dye" spectral reflectance profiles across the visible spectrum to reduce the number of color patches necessary for multiple filter imaging systems that rely on statistical models for spectral reflectance reconstruction. There are other scientific disciplines that can benefit from a specialized color target to determine the color ground truth in their magnified images and perform spectral estimation. Not every discipline has the luxury of having a multi-filter imaging system. The second part of this dissertation developed two unique ways of using an interference coated color mirror target: one that relies on multiple light-source angles, and one that leverages a dynamic color change with time. The source multi-angle technique would be used for the microelectronic discipline where the reconstructed spectral reflectance would be used to determine a dielectric film thickness on a silicon substrate, and the time varying technique would be used for a biomedical example to determine the thickness of human tear film.
Detailed mapping of surface units on Mars with HRSC color data
NASA Astrophysics Data System (ADS)
Combe, J.-Ph.; Wendt, L.; McCord, T. B.; Neukum, G.
2008-09-01
Introduction: making use of HRSC color data. Mapping outcrops of clays, sulfates and ferric oxides provides basic information for deriving the climatic, tectonic and volcanic evolution of Mars, especially the episodes related to the presence of liquid water. The challenge is to resolve the outcrops spatially and to distinguish these components from globally driven deposits such as the iron oxide-rich bright red dust and the dark basaltic sands. The High Resolution Stereo Camera (HRSC) onboard Mars-Express has five color filters in the visible and near infrared that are designed for visual interpretation and for mapping various surface units [1]. It also provides information on topography at scales smaller than a pixel (roughness), thanks to the different observation geometry of each color channel. The HRSC dataset is the only one that combines global coverage, a spatial resolution of 200 m/pixel or better, and color filtering of the light. The present abstract is a work in progress (to be submitted to Planetary and Space Science) that shows the potential and limitations of HRSC color data as visual support and as multispectral images. Various methods are described, from the simplest to more complex ones, in order to demonstrate how to make use of the spectra and the specific processing steps they require [2-4]. The objective is to broaden the popularity of HRSC color data, so that they may be used more widely by the scientific community. Results prove that imaging spectrometry and HRSC color data complement each other for mapping outcrop types. Example regions of interest. HRSC is theoretically sensitive to materials with absorption features in the visible and near-infrared up to 1 μm. Therefore, oxide-rich red dust and basalts (pyroxenes) can be mapped, as well as very bright components such as water ice [5, 6]. Possible detection of other materials still has to be demonstrated. We first explore regions where unusual mineralogy appears clearly from spectral data. Hematite at Aram Chaos or Terra Meridiani [7-9] is a candidate. Bright deposits potentially have spectral signatures different from those of the red dust in the visible: sulfates in Juventae Chasma or Aram Chaos [9, 10] and phyllosilicates in Mawrth Vallis [11] or Nili Fossae [12] are of interest. This abstract is focused on Mawrth Vallis only. HRSC spectral data: geometry and color filters. The spectral data are image mosaics of five broadband spectral channels centered at 440, 530, 650 and 750 nm, covering the visible range of wavelengths, and at 970 nm for sensitivity to the electronic absorptions of minerals (pyroxenes, olivine). The third channel (nadir image) has a typical pixel size of 12.5 m, 25 m or 50 m. The other channels have a usual pixel size of 50 m, 100 m or 200 m, which determines the spatial sampling of the spectral dataset. These data are acquired by five individual cameras oriented at specific angles to the normal to the surface (-3°, +3°, 0° (nadir), -16° and +16°, respectively). Those tilts optimize the use of a single telescope for all cameras within the available space. Thus, a given spectrum results from different proportions of shade at each wavelength. Indeed, subpixel topographic slopes oriented toward the instrument contribute a higher proportion of the signal. This implies that shade affects the shape of HRSC spectra in a different way from pixel to pixel. This contribution has to be considered when performing spectral analysis.
Level-4 color images in Digital Numbers (DNs) are registered adequately and are available to the public through the HRSCview website (http://hrscview.fu-berlin.de). A linear function converts the DNs into radiance factor (I/F). Visual interpretation: color composites. Red-Green-Blue (RGB) color composites of DN images contain usable geological information. Dark basaltic sands and bright red dust always appear obvious. Materials generated from interaction with liquid water, such as sulfates and phyllosilicates, generally form bright outcrops with complex contour lines that allow visual discrimination, even if this bright color is similar to well-illuminated bright red dust. When the surface is spectrally diverse, as at Mawrth Vallis, contrast enhancement may be sufficient to reveal subtle color differences that correspond to different types of materials (Fig. 1a). However, those remain faint color variations, as all the bands are highly correlated. Principal Component Analysis (PCA). PCA is a tool for decorrelation and noise removal that maximizes color unit differences. At Mawrth Vallis, PCA highlights the diversity of the surface in a spectacular way (Fig. 1b). Those images may be compared to the maps of mineral composition obtained by [11] from spectral analysis of imaging spectrometer data. Part of the information in Fig. 1b is likely related to surface roughness because of the complex observation geometry of the instrument. Furthermore, only an extremely clear atmosphere and low-compressed datasets allow such sharp results to be obtained. Consequently, the meaning of the colors varies from image to image and is qualitative only. More quantitative and comparable results require spectral analysis, either to remove or to normalize atmospheric and geometric effects. Spectral analysis on HRSC data. For this application, the surface units to be distinguished have to possess linearly independent color vectors in the five-dimensional color space of HRSC data. It has been shown by [2-5] that, on the global scale, only four spectral endmembers (red, iron oxide-rich material; dark, basaltic material; ice; and a shade component containing effects of observation and illumination geometry) are sufficient to explain most of the colors present in HRSC color imagery. We assess this at our test areas, which contain a maximum of surface mineralogical diversity, by applying refined methods to model (and remove) the shade contribution in order to test whether a further surface component can be unambiguously detected in the HRSC color dataset. Spectral Mixing Analysis (SMA) performed by the Multiple-Endmember Linear Spectral Unmixing Model (MELSUM) [9] is shown to be able to separate bright red dust from the bright outcrops known to be hydrous materials. Root-Mean-Square (RMS) model residuals mostly contain effects due to topography. Perspectives. We will continue to investigate HRSC color data to map surface units, considering material diversity, atmospheric opacity, illumination and observation geometry, and calibration. Coming results will determine in which cases visual interpretation is sufficient, how spectral analysis can be performed to map surface units, and how to take advantage of imaging spectrometry. References: [1] Neukum G. et al. (2004), ESA-SP 1240. [2] Combe J.-Ph. et al. (2007) 38th LPSC 2367. [3] Combe J.-Ph. et al. (2008) 39th LPSC 2381. [4] Wendt L. et al. (2008) 39th LPSC 1242. [5] McCord T. B. et al. (2007) JGR 112. [6] McCord T. B. et al. (2006) LPSC 1757. [7] Christensen P. 
et al. (2001), JGR 106, E10. [8] Glotch T. D. et al. (2005), JGR 110, E9. [9] Combe J.-Ph. et al. (2008), PSS 56. [10] Gendrin A. et al. (2005), Science 307. [11] Loizeau D. et al. (2007), JGR 112. [12] Mangold N. et al. (2007), JGR 112, E08S04. Acknowledgements: the first and third authors acknowledge NASA for the contract with the Mars-Express mission; the second and fourth authors acknowledge the German Space Agency (DLR Bonn) for its financial support of this study.
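The PCA decorrelation step mentioned in the abstract can be illustrated with a few lines of NumPy operating on a (rows, cols, bands) cube such as a five-band HRSC mosaic; displaying the first three components as an RGB composite is the usual way to visualize the result. This is a generic sketch, not the authors' processing chain.

```python
import numpy as np

def pca_bands(cube):
    """Principal components of a (rows, cols, bands) image cube.

    Returns component images ordered by decreasing variance; displaying the
    first three as an RGB composite maximizes color differences between
    surface units.
    """
    rows, cols, bands = cube.shape
    x = cube.reshape(-1, bands).astype(np.float64)
    x -= x.mean(axis=0)                      # center each band
    cov = np.cov(x, rowvar=False)            # bands x bands covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]        # re-order to descending variance
    pcs = x @ eigvecs[:, order]              # project pixels onto components
    return pcs.reshape(rows, cols, bands)
```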
The Artist, the Color Copier, and Digital Imaging.
ERIC Educational Resources Information Center
Witte, Mary Stieglitz
The impact that color-copying technology and digital imaging have had on art, photography, and design is explored. Color copiers have provided new opportunities for direct and spontaneous image making and the potential for new transformations in art. The current generation of digital color copiers permits new directions in imaging, but the…
Chemistry of the Konica Dry Color System
NASA Astrophysics Data System (ADS)
Suda, Yoshihiko; Ohbayashi, Keiji; Onodera, Kaoru
1991-08-01
While silver halide photosensitive materials offer superiority in image quality -- both in color and black-and-white -- they require chemical solutions for processing, and this can be a drawback. To overcome this, researchers turned to the thermal development of silver halide photographic materials, and met their first success with black-and-white images. Later, with the development of the Konica Dry Color System, color images were finally obtained from a completely dry thermal development system, without the use of water or chemical solutions. The dry color system is characterized by a novel chromogenic color image-forming technology and comprises four processes. (1) With the application of heat, a color developer precursor (CDP) decomposes to generate a p-phenylenediamine color developer (CD). (2) The CD then develops silver salts. (3) Oxidized CD then reacts with couplers to generate color image dyes. (4) Finally, the dyes diffuse from the system's photosensitive sheet to its image-receiving sheet. The authors have analyzed the kinetics of each of the system's four processes. In this paper, they report the kinetics of the system's first process, color developer (CD) generation.
Computerized image analysis for acetic acid induced intraepithelial lesions
NASA Astrophysics Data System (ADS)
Li, Wenjing; Ferris, Daron G.; Lieberman, Rich W.
2008-03-01
Cervical Intraepithelial Neoplasia (CIN) exhibits certain morphologic features that can be identified during a visual inspection exam. Immature and dysplastic cervical squamous epithelium turns white after application of acetic acid during the exam. The whitening process occurs visually over several minutes and subjectively discriminates between dysplastic and normal tissue. Digital imaging technologies allow us to assist the physician in analyzing the acetic acid induced lesions (acetowhite regions) in a fully automatic way. This paper reports a study designed to measure multiple parameters of the acetowhitening process from two images captured with a digital colposcope. One image is captured before the acetic acid application, and the other is captured after the acetic acid application. The spatial change of the acetowhitening is extracted using color and texture information in the post-acetic-acid image; the temporal change is extracted from the intensity and color changes between the post-acetic-acid and pre-acetic-acid images after automatic alignment. The imaging and data analysis system has been evaluated with a total of 99 human subjects and demonstrates its potential for screening underserved women where access to skilled colposcopists is limited.
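As a rough illustration of the temporal-change measurement described above, the sketch below compares registered pre- and post-acetic-acid images in CIELAB and flags pixels that brighten while losing chroma; the alignment step and the thresholds (hypothetical values) are not those of the paper.

```python
import numpy as np
from skimage import color

def acetowhitening_map(pre_rgb, post_rgb, d_light=10.0, d_chroma=5.0):
    """Crude acetowhitening mask from registered pre/post RGB images in [0, 1].

    Whitened tissue brightens (L* rises) and loses chroma; the two thresholds
    are illustrative placeholders, not clinically validated values.
    """
    pre = color.rgb2lab(pre_rgb)
    post = color.rgb2lab(post_rgb)
    delta_l = post[..., 0] - pre[..., 0]                   # lightness increase
    chroma_pre = np.hypot(pre[..., 1], pre[..., 2])
    chroma_post = np.hypot(post[..., 1], post[..., 2])
    delta_c = chroma_pre - chroma_post                     # chroma decrease
    return (delta_l > d_light) & (delta_c > d_chroma)
```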
Color management in textile application
NASA Astrophysics Data System (ADS)
De Lucia, Maurizio; Vannucci, Massimiliano; Buonopane, Massimo; Fabroni, Cosimo; Fabrini, Francesco
2002-03-01
The aim of this research was to study a system for the acquisition and processing of images capable of comparing colored wool with a reference specimen, in order to define conformity using objective parameters. The first step of the research was to understand and analyze the problem in depth: there are numerous implications of a technical, physical, cultural, biological and also psychological character that stem from the attempt to give a quantitative appraisal of color. Within national and international scientific and technological research, little has been done regarding the measurement of color through digital processing of images with linear CCDs. The reason is fundamentally technological: only in the last few years has low-cost equipment capable of acquiring and processing images with adequate performance and quality appeared on the market. The work described has permitted the creation of a first prototype system for color measurement using linear CCD devices: identification of hardware to carry out a series of tests and experiments in the laboratory; verification of such a device in a textile facility; statistical analysis of the collected data and of the models employed.
Chromatic Modulator for High Resolution CCD or APS Devices
NASA Technical Reports Server (NTRS)
Hartley, Frank T. (Inventor); Hull, Anthony B. (Inventor)
2003-01-01
A system for providing high-resolution color separation in electronic imaging. Comb drives controllably oscillate a red-green-blue (RGB) color strip filter system (or otherwise) over an electronic imaging system such as a charge-coupled device (CCD) or active pixel sensor (APS). The color filter is modulated over the imaging array at a rate three or more times the frame rate of the imaging array. In so doing, the underlying active imaging elements are then able to detect separate color-separated images, which are then combined to provide a color-accurate frame which is then recorded as the representation of the recorded image. High pixel resolution is maintained. Registration is obtained between the color strip filter and the underlying imaging array through the use of electrostatic comb drives in conjunction with a spring suspension system.
Keene, Douglas R
2015-04-01
"Color blindness" is a variable trait, including individuals with just slight color vision deficiency to those rare individuals with a complete lack of color perception. Approximately 75% of those with color impairment are green diminished; most of those remaining are red diminished. Red-Green color impairment is sex linked with the vast majority being male. The deficiency results in reds and greens being perceived as shades of yellow; therefore red-green images presented to the public will not illustrate regions of distinction to these individuals. Tools are available to authors wishing to accommodate those with color vision deficiency; most notable are components in FIJI (an extension of ImageJ) and Adobe Photoshop. Using these tools, hues of magenta may be substituted for red in red-green images resulting in striking definition for both the color sighted and color impaired. Web-based tools may be used (importantly) by color challenged individuals to convert red-green images archived in web-accessible journal articles into two-color images, which they may then discern.
Opportunity Examines Cracks and Coatings on Mars Rocks
NASA Technical Reports Server (NTRS)
2005-01-01
This false-color panoramic image, taken on martian day, or sol, 561 (Aug. 22, 2005) by NASA's Opportunity rover, shows the nature of the outcrop rocks that the rover is encountering on its southward journey across the martian plains to 'Erebus Crater.' The rocks, similar in make-up to those encountered earlier in the mission, display a clear pattern of cracks as well as rind-like features (identifiable as a light shade of blue to olive in the image) coating the outcrop surface. Prominent in the image are two holes (one on the rock, one on the rind) drilled with the rover's rock abrasion tool to facilitate chemical analysis of the underlying material. The reddish color around the holes is from iron-rich dust produced during the grinding operation. The rind, nicknamed 'Lemon Rind,' and the underlying rock, nicknamed 'Strawberry,' have turned out to be similar in overall chemistry and texture. Science team members are working to understand the nature of the relationship between these kinds of rocks and rinds on the Meridiani plains. This false-color composite was generated from a combination of 750-, 530-, and 430-nanometer filter images taken by the Opportunity panoramic camera, an instrument that has acquired more than 36,000 color filter images to date of martian terrain at Meridiani Planum.
Graphics-Printing Program For The HP Paintjet Printer
NASA Technical Reports Server (NTRS)
Atkins, Victor R.
1993-01-01
IMPRINT utility computer program developed to print graphics specified in raster files by use of Hewlett-Packard Paintjet(TM) color printer. Reads bit-mapped images from files on UNIX-based graphics workstation and prints out three different types of images: wire-frame images, solid-color images, and gray-scale images. Wire-frame images are in continuous tone or, in case of low resolution, in random gray scale. In case of color images, IMPRINT also prints by use of default palette of solid colors. Written in C language.
Earth and Moon as viewed from Mars
NASA Technical Reports Server (NTRS)
2003-01-01
MGS MOC Release No. MOC2-368, 22 May 2003
Globe diagram illustrates the Earth's orientation as viewed from Mars (North and South America were in view). Earth/Moon: This is the first image of Earth ever taken from another planet that actually shows our home as a planetary disk. Because Earth and the Moon are closer to the Sun than Mars, they exhibit phases, just as the Moon, Venus, and Mercury do when viewed from Earth. As seen from Mars by MGS on 8 May 2003 at 13:00 GMT (6:00 AM PDT), Earth and the Moon appeared in the evening sky. The MOC Earth/Moon image has been specially processed to allow both Earth (with an apparent magnitude of -2.5) and the much darker Moon (with an apparent magnitude of +0.9) to be visible together. The bright area at the top of the image of Earth is cloud cover over central and eastern North America. Below that, a darker area includes Central America and the Gulf of Mexico. The bright feature near the center-right of the crescent Earth consists of clouds over northern South America. The image also shows the Earth-facing hemisphere of the Moon, since the Moon was on the far side of Earth as viewed from Mars. The slightly lighter tone of the lower portion of the image of the Moon results from the large and conspicuous ray system associated with the crater Tycho.
A note about the coloring process: The MGS MOC high resolution camera only takes grayscale (black-and-white) images. To 'colorize' the image, a Mariner 10 Earth/Moon image taken in 1973 was used to color the MOC Earth and Moon picture. The procedure used was as follows: the Mariner 10 image was converted from 24-bit color to 8-bit color using a JPEG to GIF conversion program. The 8-bit color image was converted to 8-bit grayscale and an associated lookup table mapping each gray value of the image to a red-green-blue color triplet (RGB). Each color triplet was root-sum-squared (RSS), and sorted in increasing RSS value. These sorted lists were brightness-to-color maps for the images. Each brightness-to-color map was then used to convert the 8-bit grayscale MOC image to an 8-bit color image. This 8-bit color image was then converted to a 24-bit color image. The color image was edited to return the background to black.
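The brightness-to-color mapping described in the caption can be approximated in a few lines: rank the reference image's RGB triplets by their root-sum-square value, resample the sorted list to 256 entries, and index the table with each gray value of the monochrome image. The GIF-palette intermediate step is condensed away in this sketch.

```python
import numpy as np

def brightness_to_color_map(reference_rgb):
    """Build a 256-entry gray-to-RGB lookup table from an 8-bit color image.

    Reference triplets are ranked by their root-sum-square (RSS) brightness
    and the sorted list is resampled to 256 entries, so a gray value maps to
    a reference color of comparable brightness.
    """
    pix = reference_rgb.reshape(-1, 3).astype(np.float64)
    rss = np.sqrt((pix ** 2).sum(axis=1))
    order = np.argsort(rss)                      # increasing RSS value
    idx = np.linspace(0, len(order) - 1, 256).astype(int)
    return pix[order[idx]].astype(np.uint8)      # lookup table, shape (256, 3)

def colorize(gray_8bit, lut):
    """Apply the brightness-to-color map to an 8-bit grayscale image."""
    return lut[gray_8bit]
```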
A multispectral photon-counting double random phase encoding scheme for image authentication.
Yi, Faliu; Moon, Inkyu; Lee, Yeon H
2014-05-20
In this paper, we propose a new method for color image-based authentication that combines multispectral photon-counting imaging (MPCI) and double random phase encoding (DRPE) schemes. The sparsely distributed information from MPCI and the stationary white noise signal from DRPE make intruder attacks difficult. In this authentication method, the original multispectral RGB color image is down-sampled into a Bayer image. The three types of color samples (red, green and blue color) in the Bayer image are encrypted with DRPE and the amplitude part of the resulting image is photon counted. The corresponding phase information that has nonzero amplitude after photon counting is then kept for decryption. Experimental results show that the retrieved images from the proposed method do not visually resemble their original counterparts. Nevertheless, the original color image can be efficiently verified with statistical nonlinear correlations. Our experimental results also show that different interpolation algorithms applied to Bayer images result in different verification effects for multispectral RGB color images.
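The double random phase encoding component of the scheme is classical and can be sketched on a single channel as below; the Bayer down-sampling, photon counting, and nonlinear-correlation verification steps of the paper are omitted.

```python
import numpy as np

def drpe_encrypt(img, seed=0):
    """Double random phase encoding of one image channel (float in [0, 1]).

    Returns the complex ciphertext and the two phase masks needed to decrypt.
    """
    rng = np.random.default_rng(seed)
    h, w = img.shape
    m1 = np.exp(2j * np.pi * rng.random((h, w)))   # input-plane phase mask
    m2 = np.exp(2j * np.pi * rng.random((h, w)))   # Fourier-plane phase mask
    cipher = np.fft.ifft2(np.fft.fft2(img * m1) * m2)
    return cipher, m1, m2

def drpe_decrypt(cipher, m1, m2):
    """Invert the encoding; the amplitude recovers the original channel."""
    return np.abs(np.fft.ifft2(np.fft.fft2(cipher) * np.conj(m2)) * np.conj(m1))
```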
Image analysis for the detection of Barré
USDA-ARS?s Scientific Manuscript database
Barré is a major problem for the textile industry. Barré is detectable after fabric is dyed and the detection of barré can depend upon the color of the dyed fabric, lighting conditions, fabric pattern, and/or the color perception of the person viewing the fabric. The standard method for measuring ...
2015-02-25
This mosaic of Caloris basin is an enhanced-color composite overlain on a monochrome mosaic featured in a previous post. The color mosaic is made up of WAC images obtained when both the spacecraft and the Sun were overhead, conditions best for discerning variations in albedo, or brightness. The monochrome mosaic is made up of WAC and NAC images obtained at off-vertical Sun angles (i.e., high incidence angles) and with visible shadows so as to reveal clearly the topographic form of geologic features. The combination of the two datasets allows the correlation of geologic features with their color properties. In portions of the scene, color differences from image to image are apparent. Ongoing calibration efforts by the MESSENGER team strive to minimize these differences. Caloris basin has been flooded by lavas that appear orange in this mosaic. Post-flooding craters have excavated material from beneath the surface. The larger of these craters have exposed low-reflectance material (blue in this mosaic) from beneath the surface lavas, likely giving a glimpse of the original basin floor material. Analysis of these craters yields an estimate of the thickness of the volcanic layer: 2.5-3.5 km (1.6-2.2 mi.). http://photojournal.jpl.nasa.gov/catalog/PIA19216
Balaban, Murat O; Stewart, Kelsie; Fletcher, Graham C; Alçiçek, Zayde
2014-12-01
Ten gurnard and ten snapper were stored on ice. One side always contacted the ice; the other side was always exposed to air. At different intervals for up to 12 d, the fish were placed in a light box, and images of both sides were taken using polarized and nonpolarized illumination. Image analysis yielded average L*, a*, and b* values of the skin and average L* values of the eyes. The skin L* value of gurnard changed significantly over time, while that of snapper was substantially constant. The a* and b* values of both fish decreased over time. The L* values of the eyes were significantly lower for polarized images, and significantly lower for the side of the fish exposed only to air. This may be a concern in quality evaluation methods such as QIM. The color difference between the polarized and nonpolarized images was calculated to quantify the reflection off the surface of the fish. For accurate measurement of surface color and eye color, the use of polarized light is recommended. © 2014 Institute of Food Technologists®
The relationship between ambient illumination and psychological factors in viewing of display Images
NASA Astrophysics Data System (ADS)
Iwanami, Takuya; Kikuchi, Ayano; Kaneko, Takashi; Hirai, Keita; Yano, Natsumi; Nakaguchi, Toshiya; Tsumura, Norimichi; Yoshida, Yasuhiro; Miyake, Yoichi
2009-01-01
In this paper, we clarify the relationship between ambient illumination and psychological factors in the viewing of display images. Psychological factors were obtained by factor analysis of the results of the semantic differential (SD) method. In the psychological experiments, subjects evaluated the impressions of displayed images under changing ambient illumination conditions. The illumination was controlled by a fluorescent ceiling light and a color LED illumination located behind the display. We experimented under two kinds of conditions: one experiment varied the brightness of the ambient illumination, and the other varied the color of the background illumination. In the results, two factors, "realistic sensation, dynamism" and "comfortable," were extracted under different brightness levels of the display surroundings. It was shown that "comfortable" improved with the brightness of the display surroundings. On the other hand, when the illumination color of the surroundings was changed, three factors, "comfortable," "realistic sensation, dynamism" and "activity," were extracted. It was also shown that the values of "comfortable" and "realistic sensation, dynamism" increased when the display surroundings were illuminated with the average color of the image content.
Color enhancement and image defogging in HSI based on Retinex model
NASA Astrophysics Data System (ADS)
Gao, Han; Wei, Ping; Ke, Jun
2015-08-01
Retinex is a luminance perception algorithm based on color constancy. It has a good performance in color enhancement. But in some cases, the traditional Retinex algorithms, both Single-Scale Retinex (SSR) and Multi-Scale Retinex (MSR) in RGB color space, do not work well and cause color deviation. To solve this problem, we present improved SSR and MSR algorithms. In contrast to other Retinex algorithms, we implement the Retinex algorithms in HSI (Hue, Saturation, Intensity) color space and use a parameter α to improve the quality of the image. Moreover, the algorithms presented in this paper have a good performance in image defogging. Unlike traditional Retinex algorithms, we use the intensity channel to obtain the reflection information of an image. The intensity channel is processed with a Gaussian center-surround image filter to get the light information, which should be removed from the intensity channel. After that, we subtract the light information from the intensity channel to obtain the reflection image, which includes only the attributes of the objects in the image. Using the reflection image and a parameter α, which is an arbitrary scale factor set manually, we improve the intensity channel and complete the color enhancement. Our experiments show that this approach works well compared with existing methods for color enhancement. Besides better performance on the color deviation problem and in image defogging, a visible improvement in image quality for human contrast perception is also observed.
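A minimal sketch of the intensity-channel processing described above: the intensity is approximated as the mean of R, G and B, a Gaussian center-surround estimate of the light information is subtracted to obtain the reflection image, and the manual factor α rescales it before the channels are rebuilt; the Gaussian scale and the value of α are illustrative, not those of the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_intensity(rgb, sigma=80.0, alpha=1.5, eps=1e-6):
    """Single-scale Retinex applied to the intensity channel only.

    Intensity is approximated as the mean of the RGB channels; the Gaussian
    center-surround output is treated as the light information and removed,
    alpha rescales the resulting reflection image, and the channels are
    rebuilt in proportion to the new intensity to keep hue and saturation.
    """
    rgb = rgb.astype(np.float64)
    intensity = rgb.mean(axis=2) + eps
    light = gaussian_filter(intensity, sigma)           # center-surround estimate
    reflection = intensity - light                      # remove light information
    new_i = alpha * reflection
    new_i = (new_i - new_i.min()) / (new_i.max() - new_i.min() + eps)
    return np.clip(rgb * (new_i / intensity)[..., None], 0.0, 1.0)
```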
Characterizing pigments with hyperspectral imaging variable false-color composites
NASA Astrophysics Data System (ADS)
Hayem-Ghez, Anita; Ravaud, Elisabeth; Boust, Clotilde; Bastian, Gilles; Menu, Michel; Brodie-Linder, Nancy
2015-11-01
Hyperspectral imaging has been used for pigment characterization on paintings for the last 10 years. It is a noninvasive technique that combines the power of spectrophotometry with that of imaging technologies. We have access to a visible and near-infrared hyperspectral camera, ranging from 400 to 1000 nm in 80-160 spectral bands. In order to treat the large amount of data that this imaging technique generates, one can use statistical tools such as principal component analysis (PCA). To characterize pigments, researchers mostly use PCA, convex geometry algorithms and the comparison of the resulting clusters to database spectra within a specific tolerance (such as the Spectral Angle Mapper tool in the dedicated software ENVI). Our approach originates from false-color photography and aims at providing a simple tool to identify pigments using imaging spectroscopy. It can be considered a quick first analysis to see the principal pigments of a painting before using a more complete multivariate statistical tool. We study pigment spectra for each kind of hue (blue, green, red and yellow) to identify the wavelengths that maximize spectral differences. The case of red pigments is the most interesting because our methodology can discriminate the red pigments very well, even red lakes, which are always difficult to identify. As for the yellow and blue categories, the method represents a clear advance over infrared false-color (IRFC) photography for pigment discrimination. We apply our methodology to study the pigments of a painting by Eustache Le Sueur, a French painter of the seventeenth century. We compare the results to other noninvasive analyses such as X-ray fluorescence and optical microscopy. Finally, we draw conclusions about the advantages and limits of the variable false-color image method using hyperspectral imaging.
2015-10-08
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows part of the floor of Melas Chasma. The dark blue region in this false color image is sand dunes. Orbit Number: 12061 Latitude: -12.2215 Longitude: 289.105 Instrument: VIS Captured: 2004-09-02 10:11 http://photojournal.jpl.nasa.gov/catalog/PIA19793
Calibration Image of Earth by Mars Color Imager
NASA Technical Reports Server (NTRS)
2005-01-01
Three days after the Mars Reconnaissance Orbiter's Aug. 12, 2005, launch, the NASA spacecraft was pointed toward Earth and the Mars Color Imager camera was powered up to acquire a suite of color and ultraviolet images of Earth and the Moon. When it gets to Mars, the Mars Color Imager's main objective will be to obtain daily global color and ultraviolet images of the planet to observe martian meteorology by documenting the occurrence of dust storms, clouds, and ozone. This camera will also observe how the martian surface changes over time, including changes in frost patterns and surface brightness caused by dust storms and dust devils. The purpose of acquiring an image of Earth and the Moon just three days after launch was to help the Mars Color Imager science team obtain a measure, in space, of the instrument's sensitivity, as well as to check that no contamination occurred on the camera during launch. Prior to launch, the team determined that, three days out from Earth, the planet would only be about 4.77 pixels across, and the Moon would be less than one pixel in size, as seen from the Mars Color Imager's wide-angle perspective. If the team waited any longer than three days to test the camera's performance in space, Earth would be too small to obtain meaningful results. The images were acquired by turning Mars Reconnaissance Orbiter toward Earth, then slewing the spacecraft so that the Earth and Moon would pass before each of the five color and two ultraviolet filters of the Mars Color Imager. The distance to Earth was about 1,170,000 kilometers (about 727,000 miles). This image shows a color composite view of Mars Color Imager's image of Earth. As expected, it covers only five pixels. This color view has been enlarged five times. The Sun was illuminating our planet from the left, thus only one quarter of Earth is seen from this perspective. North America was in daylight and facing toward the camera at the time the picture was taken; the data from the camera were being transmitted in real time to the Deep Space Network antennas in Goldstone, California.
NASA Astrophysics Data System (ADS)
Toet, Alexander; Walraven, Jan
1996-03-01
A pixel-based color-mapping algorithm is presented that produces a fused false color rendering of two gray-level images representing different sensor modalities. The resulting images have a higher information content than each of the original images and retain sensor-specific image information. The unique component of each image modality is enhanced in the resulting fused color image representation. First, the common component of the two original input images is determined. Second, the common component is subtracted from the original images to obtain the unique component of each image. Third, the unique component of each image modality is subtracted from the image of the other modality. This step serves to enhance the representation of sensor-specific details in the final fused result. Finally, a fused color image is produced by displaying the images resulting from the last step through, respectively, the red and green channels of a color display. The method is applied to fuse thermal and visual images. The results show that the color mapping enhances the visibility of certain details and preserves the specificity of the sensor information. The fused images also have a fairly natural appearance. The fusion scheme involves only operations on corresponding pixels. The resolution of a fused image is therefore directly related to the resolution of the input images. Before fusing, the contrast of the images can be enhanced and their noise can be reduced by standard image-processing techniques. The color mapping algorithm is computationally simple. This implies that the investigated approaches can eventually be applied in real time and that the hardware needed is not too complicated or too voluminous (an important consideration when it has to fit in an airplane, for instance).
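The fusion steps listed above map almost directly onto array operations. The sketch below assumes the two registered gray-level images are floats in [0, 1] and takes the common component as their pixel-wise minimum, which is one plausible choice; the abstract itself does not specify how the common component is computed.

```python
# Minimal sketch of the pixel-based false-color fusion described above.
import numpy as np

def fuse_false_color(visual, thermal):
    """Fuse two registered float images in [0, 1] into an RGB rendering."""
    common = np.minimum(visual, thermal)          # shared detail (assumed pixel-wise minimum)
    unique_vis = visual - common                  # sensor-specific detail of the visual image
    unique_thr = thermal - common                 # sensor-specific detail of the thermal image
    red = np.clip(visual - unique_thr, 0.0, 1.0)  # visual image with thermal-specific detail removed
    green = np.clip(thermal - unique_vis, 0.0, 1.0)
    blue = np.zeros_like(common)                  # only the red and green channels are used
    return np.dstack([red, green, blue])
```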
Multi-pulse shadowgraphic RGB illumination and detection for flow tracking
NASA Astrophysics Data System (ADS)
Menser, Jan; Schneider, Florian; Dreier, Thomas; Kaiser, Sebastian A.
2018-06-01
This work demonstrates the application of a multi-color LED and a consumer color camera for visualizing phase boundaries in two-phase flows, in particular for particle tracking velocimetry. The LED emits a sequence of short light pulses (red, green, then blue), and through its color-filter array the camera captures all three pulses on a single RGB frame. In a backlit configuration, liquid droplets appear as shadows in each color channel. Color reversal and color cross-talk correction yield a series of three frozen-flow images that can be used for further analysis, e.g., determining the droplet velocity by particle tracking. Three example flows are presented: solid particles suspended in water, the penetrating front of a gasoline direct-injection spray, and the liquid break-up region of an "air-assisted" nozzle. Because of the shadowgraphic arrangement, long path lengths through scattering media lower image contrast, while visualization of phase boundaries with high resolution is a strength of this method. Apart from a pulse-and-delay generator, the overall system cost is very low.
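The channel unpacking and cross-talk correction can be sketched as a per-pixel matrix operation. The correction matrix below is a placeholder; in practice its entries would come from a calibration in which each LED pulse is recorded separately through the camera's color-filter array.

```python
# Sketch: unpack one RGB frame into three time-ordered shadow images and
# remove color cross-talk with a calibration matrix (placeholder values).
import numpy as np

# Row i: response of camera channel i (R, G, B) to the red, green, blue pulses.
CROSSTALK = np.array([[1.00, 0.12, 0.02],
                      [0.10, 1.00, 0.15],
                      [0.02, 0.20, 1.00]])
CORRECTION = np.linalg.inv(CROSSTALK)

def unpack_rgb_sequence(rgb_frame):
    """rgb_frame: (H, W, 3) float array; returns three frozen-flow images in time order."""
    h, w, _ = rgb_frame.shape
    corrected = rgb_frame.reshape(-1, 3) @ CORRECTION.T   # per-pixel cross-talk removal
    corrected = corrected.reshape(h, w, 3)
    # Color reversal: droplets appear as dark shadows on a bright background.
    frames = [1.0 - np.clip(corrected[..., c], 0.0, 1.0) for c in range(3)]
    return frames  # [red-pulse, green-pulse, blue-pulse] = earliest to latest
```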
Quantifying the effect of colorization enhancement on mammogram images
NASA Astrophysics Data System (ADS)
Wojnicki, Paul J.; Uyeda, Elizabeth; Micheli-Tzanakou, Evangelia
2002-04-01
Current methods of radiological display provide only grayscale images of mammograms. The limitation of the image space to grayscale provides only luminance differences and textures as cues for object recognition within the image. However, color can be an important and significant cue in the detection of shapes and objects. Increasing detection ability allows the radiologist to interpret the images in more detail, improving object recognition and diagnostic accuracy. Color detection experiments using our stimulus system have demonstrated that an observer can only detect an average of 140 levels of grayscale. An optimally colorized image can allow a user to distinguish 250-1000 different levels, hence increasing potential image feature detection by 2-7 times. By implementing a colorization map, which follows the luminance map of the original grayscale images, the luminance profile is preserved and color is isolated as the enhancement mechanism. The effect of this enhancement mechanism on the shape, frequency composition and statistical characteristics of the Visual Evoked Potential (VEP) is analyzed and presented. Thus, the effectiveness of the image colorization is measured quantitatively using the VEP.
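One way to realize a colorization map that follows the luminance of the original grayscale image is to index a colormap whose lightness increases monotonically. The sketch below uses matplotlib's 'viridis' map purely to illustrate that constraint; it is not the specific map used in the study.

```python
# Sketch of a luminance-preserving pseudo-coloring: grayscale values index
# into a colormap with monotonically increasing lightness, so the luminance
# ordering is kept and hue is the added cue.
import numpy as np
import matplotlib.pyplot as plt

def colorize(gray):
    """gray: (H, W) uint8 or float image; returns an (H, W, 3) RGB float image."""
    g = gray.astype(np.float64)
    g = (g - g.min()) / (np.ptp(g) + 1e-12)       # normalize to [0, 1]
    cmap = plt.get_cmap("viridis")                # lightness increases with value
    return cmap(g)[..., :3]                       # drop the alpha channel
```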
True color blood flow imaging using a high-speed laser photography system
NASA Astrophysics Data System (ADS)
Liu, Chien-Sheng; Lin, Cheng-Hsien; Sun, Yung-Nien; Ho, Chung-Liang; Hsu, Chung-Chi
2012-10-01
Physiological changes in the retinal vasculature are commonly indicative of such disorders as diabetic retinopathy, glaucoma, and age-related macular degeneration. Thus, various methods have been developed for noninvasive clinical evaluation of ocular hemodynamics. However, to the best of our knowledge, current ophthalmic instruments do not provide a true color blood flow imaging capability. Accordingly, we propose a new method for the true color imaging of blood flow using a high-speed pulsed laser photography system. In the proposed approach, monochromatic images of the blood flow are acquired using a system of three cameras and three color lasers (red, green, and blue). A high-quality true color image of the blood flow is obtained by assembling the monochromatic images by means of image realignment and color calibration processes. The effectiveness of the proposed approach is demonstrated by imaging the flow of mouse blood within a microfluidic channel device. The experimental results confirm the proposed system provides a high-quality true color blood flow imaging capability, and therefore has potential for noninvasive clinical evaluation of ocular hemodynamics.
Achromatic synesthesias - a functional magnetic resonance imaging study.
Melero, H; Ríos-Lago, M; Peña-Melián, A; Álvarez-Linera, J
2014-09-01
Grapheme-color synesthetes experience consistent, automatic and idiosyncratic colors associated with specific letters and numbers. Frequently, these specific associations exhibit achromatic synesthetic qualities (e.g. white, black or gray). In this study, we have investigated for the first time the neural basis of achromatic synesthesias, their relationship to chromatic synesthesias and the achromatic congruency effect in order to understand not only synesthetic color but also other components of the synesthetic experience. To achieve this aim, functional magnetic resonance imaging experiments were performed in a group of associator grapheme-color synesthetes and matched controls who were stimulated with real chromatic and achromatic stimuli (Mondrians), and with letters and numbers that elicited different types of grapheme-color synesthesias (i.e. chromatic and achromatic inducers which elicited chromatic but also achromatic synesthesias, as well as congruent and incongruent ones). The information derived from the analysis of Mondrians and chromatic/achromatic synesthesias suggests that real and synesthetic colors/achromaticity do not fully share neural mechanisms. The whole-brain analysis of BOLD signals in response to the complete set of synesthetic inducers revealed that the functional peculiarities of the synesthetic brain are distributed, and reflect different components of the synesthetic experience: a perceptual component, an (attentional) feature binding component, and an emotional component. Additionally, the inclusion of achromatic experiences has provided new evidence in favor of the emotional binding theory, a line of interpretation which constitutes a bridge between grapheme-color synesthesia and other developmental modalities of the phenomenon. Copyright © 2014 Elsevier Inc. All rights reserved.
Lo, T Y; Sim, K S; Tso, C P; Nia, M E
2014-01-01
An improvement to the previously proposed adaptive Canny optimization technique for scanning electron microscope image colorization is reported. The additional feature, called the pseudo-mapping technique, temporarily maps grayscale markings to a set of pre-defined pseudo-colors as a means of instilling color information for grayscale colors in the chrominance channels. This allows the presence of grayscale markings to be identified; hence, optimized colorization of grayscale colors is made possible. This additional feature enhances the flexibility of scanning electron microscope image colorization by providing a wider range of possible color enhancement. Furthermore, the nature of this technique also allows users to adjust the luminance intensities of a selected region of the original image to a certain extent. © 2014 Wiley Periodicals, Inc.
Luma-chroma space filter design for subpixel-based monochrome image downsampling.
Fang, Lu; Au, Oscar C; Cheung, Ngai-Man; Katsaggelos, Aggelos K; Li, Houqiang; Zou, Feng
2013-10-01
In general, subpixel-based downsampling can achieve higher apparent resolution of the down-sampled images on LCD or OLED displays than pixel-based downsampling. With the frequency domain analysis of subpixel-based downsampling, we discover special characteristics of the luma-chroma color transform choice for monochrome images. With these, we model the anti-aliasing filter design for subpixel-based monochrome image downsampling as a human visual system-based optimization problem with a two-term cost function and obtain a closed-form solution. One cost term measures the luminance distortion and the other term measures the chrominance aliasing in our chosen luma-chroma space. Simulation results suggest that the proposed method can achieve sharper down-sampled gray/font images compared with conventional pixel and subpixel-based methods, without noticeable color fringing artifacts.
Progressive low-bitrate digital color/monochrome image coding by neuro-fuzzy clustering
NASA Astrophysics Data System (ADS)
Mitra, Sunanda; Meadows, Steven
1997-10-01
Color image coding at low bit rates is an area of research that is only now being addressed in the recent literature, since the problems of storage and transmission of color images are becoming more prominent in many applications. Current trends in image coding exploit the advantage of subband/wavelet decompositions in reducing the complexity of optimal scalar/vector quantizer (SQ/VQ) design. Compression ratios (CRs) on the order of 10:1 to 20:1 with high visual quality have been achieved by using vector quantization of subband-decomposed color images in perceptually weighted color spaces. We report the performance of a recently developed adaptive vector quantizer, namely AFLC-VQ, for effective reduction in bit rates while maintaining high visual quality of reconstructed color as well as monochrome images. For 24-bit color images, excellent visual quality is maintained up to a bit-rate reduction of approximately 0.48 bpp (0.16 bpp for each color plane or monochrome image; CR 50:1) using the RGB color space. Further tuning of the AFLC-VQ and the addition of an entropy coder module after the VQ stage result in extremely low bit rates (CR 80:1) for good-quality reconstructed images. Our recent study also reveals that, for similar visual quality, the RGB color space requires fewer bits per pixel than either the YIQ or HSI color space for storing the same information when entropy coding is applied. AFLC-VQ outperforms other standard VQ and adaptive SQ techniques in retaining visual fidelity at similar bit-rate reductions.
Full-color high-definition CGH reconstructing hybrid scenes of physical and virtual objects
NASA Astrophysics Data System (ADS)
Tsuchiyama, Yasuhiro; Matsushima, Kyoji; Nakahara, Sumio; Yamaguchi, Masahiro; Sakamoto, Yuji
2017-03-01
High-definition CGHs can reconstruct high-quality 3D images that are comparable to those of conventional optical holography. However, it has been difficult to exhibit full-color images reconstructed by these high-definition CGHs, because three CGHs for the RGB colors and a bulky image combiner were needed to produce full-color images. Recently, we reported a novel technique for full-color reconstruction using RGB color filters, similar to those used in liquid-crystal panels. This technique allows us to produce full-color high-definition CGHs composed of a single plate and to place them on exhibition. In this paper, using this technique, we demonstrate full-color CGHs that reconstruct hybrid scenes composed of real physical objects and CG-modeled virtual objects. Here, the wave field of the physical object is obtained from dense multi-viewpoint images by employing the ray-sampling (RS) plane technique. In addition to the technique for full-color capturing and reconstruction of real object fields, the principle and simulation technique for full-color CGHs using RGB color filters are presented.
Color image lossy compression based on blind evaluation and prediction of noise characteristics
NASA Astrophysics Data System (ADS)
Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena
2011-03-01
The paper deals with adaptive JPEG lossy compression of color images formed by digital cameras. Adaptation to the noise characteristics and blur estimated for each given image is carried out. The dominant factor degrading image quality is determined in a blind manner, and the characteristics of this dominant factor are then estimated. Finally, a scaling factor that determines the quantization steps for the default JPEG table is adaptively selected. Within this general framework, two possible strategies are considered. The first presumes blind estimation for an image after all operations in the digital image processing chain, just before compressing the given raster image. The second strategy is based on predicting the noise and blur parameters from analysis of the RAW image, under quite general assumptions concerning the parameters of the transformations the image will be subjected to at further processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger benefit in image compression ratio (CR) compared to the super-high quality (SHQ) mode; however, it is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on many real-life color images acquired by digital cameras and are shown to provide more than a two-fold increase in average CR compared to the SHQ mode, without introducing visible distortions with respect to SHQ-compressed images.
NASA Astrophysics Data System (ADS)
Tan, Ru-Chao; Lei, Tong; Zhao, Qing-Min; Gong, Li-Hua; Zhou, Zhi-Hong
2016-12-01
To improve the slow processing speed of classical image encryption algorithms and to enhance the security of private color images, a new quantum color image encryption algorithm based on a hyper-chaotic system is proposed, in which the three components of the original color image are scrambled and diffused with sequences generated by Chen's hyper-chaotic system. Subsequently, the quantum Fourier transform is exploited to complete the encryption. Numerical simulations show that the presented quantum color image encryption algorithm possesses a large key space to resist illegal attacks, sensitive dependence on the initial keys, a uniform distribution of gray values in the encrypted image, and weak correlation between adjacent pixels in the cipher-image.
A natural-color mapping for single-band night-time image based on FPGA
NASA Astrophysics Data System (ADS)
Wang, Yilun; Qian, Yunsheng
2018-01-01
An FPGA-based natural-color mapping method for single-band night-time images transfers the colors of a reference image to the single-band night-time image, producing results that are consistent with human visual habits and that help observers identify targets. This paper introduces the processing flow of the FPGA-based natural-color mapping algorithm. First, the image is transformed by histogram equalization, and the intensity and standard deviation features of the reference image are stored in SRAM. Then, the intensity and standard deviation features of the real-time digital images are calculated by the FPGA. Finally, the FPGA completes the color mapping by matching pixels between the images using the features in the luminance channel.
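The feature-matching step in the luminance channel can be illustrated with a simple mean and standard-deviation transfer from the reference image. The sketch below assumes both images are available as float grayscale arrays; the FPGA pipeline and the chromatic transfer are omitted.

```python
# Sketch of statistics matching in the luminance channel: the night-time
# image is shifted and scaled so its mean and standard deviation match the
# reference image's. Chromatic channels would be transferred separately.
import numpy as np

def match_luminance_stats(night, reference):
    """Both inputs are float grayscale arrays; returns the remapped night image."""
    n_mean, n_std = night.mean(), night.std() + 1e-12
    r_mean, r_std = reference.mean(), reference.std()
    return (night - n_mean) * (r_std / n_std) + r_mean
```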
Circular Mixture Modeling of Color Distribution for Blind Stain Separation in Pathology Images.
Li, Xingyu; Plataniotis, Konstantinos N
2017-01-01
In digital pathology, to address color variation and histological component colocalization in pathology images, stain decomposition is usually performed preceding spectral normalization and tissue component segmentation. This paper examines the problem of stain decomposition, which is naturally a nonnegative matrix factorization (NMF) problem in algebra, and introduces a systematic and analytical solution consisting of a circular color analysis module and an NMF-based computation module. Unlike the paradigm of existing stain decomposition algorithms, where stain proportions are computed directly from estimated stain spectra using a matrix inverse operation, the introduced solution estimates stain spectra and stain depths individually via probabilistic reasoning. Since the proposed method pays extra attention to achromatic pixels in color analysis and to stain co-occurrence in pixel clustering, it achieves consistent and reliable stain decomposition with minimum decomposition residue. In particular, aware of the periodic and angular nature of hue, we propose the use of a circular von Mises mixture model to analyze the hue distribution, and provide a complete color-based pixel soft-clustering solution to address color mixing introduced by stain overlap. This innovation, combined with saturation-weighted computation, makes our study effective for weak stains and broad-spectrum stains. Extensive experimentation on multiple public pathology datasets suggests that our approach outperforms state-of-the-art blind stain separation methods in terms of decomposition effectiveness.
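The circular soft-clustering idea can be illustrated with SciPy's von Mises distribution. The sketch below evaluates component responsibilities for a two-component mixture with fixed, hypothetical parameters; the paper estimates these parameters from the data, and the saturation weighting is omitted here.

```python
# Illustrative sketch of circular soft clustering of hue values with a
# two-component von Mises mixture (fixed parameters for demonstration).
import numpy as np
from scipy.stats import vonmises

def hue_responsibilities(hue_rad, mus, kappas, weights):
    """Soft-assign each hue angle (radians) to the mixture components."""
    dens = np.stack([w * vonmises.pdf(hue_rad, kappa=k, loc=m)
                     for m, k, w in zip(mus, kappas, weights)], axis=-1)
    return dens / dens.sum(axis=-1, keepdims=True)

# Two hypothetical stains: a bluish (hematoxylin-like) and a pinkish (eosin-like) hue mode.
mus = [np.deg2rad(240.0), np.deg2rad(330.0)]
kappas = [8.0, 8.0]
weights = [0.5, 0.5]
hues = np.deg2rad(np.array([235.0, 250.0, 320.0, 340.0]))
print(hue_responsibilities(hues, mus, kappas, weights))
```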
Introduction of A New Toolbox for Processing Digital Images From Multiple Camera Networks: FMIPROT
NASA Astrophysics Data System (ADS)
Melih Tanis, Cemal; Nadir Arslan, Ali
2017-04-01
Webcam networks intended for scientific monitoring of ecosystems provide digital images and other environmental data for various studies. Other types of camera networks can also be used for scientific purposes, e.g., traffic webcams for phenological studies, or camera networks for ski tracks and avalanche monitoring over mountains for hydrological studies. To efficiently harness the potential of these camera networks, easy-to-use software that can obtain and handle images from different networks with different protocols and standards is necessary. Numerous software packages for the analysis of images from webcam networks are freely available. These packages have different strengths, not only for analyzing but also for post-processing digital images. However, specifically for ease of use, applicability and scalability, a different set of features could be added. Thus, a more customized approach would be of high value, not only for analyzing images from comprehensive camera networks, but also for creating operational data extraction and processing with an easy-to-use toolbox. In this paper, we introduce a new toolbox, entitled the Finnish Meteorological Institute Image PROcessing Tool (FMIPROT), in which such a customized approach is followed. FMIPROT currently has the following features: straightforward installation; no software dependencies that require extra installations; communication with multiple camera networks; automatic downloading and handling of images; a user-friendly and simple user interface; data filtering; visualization of results on customizable plots; and plugins, which allow users to add their own algorithms. Current image analyses in FMIPROT include "Color Fraction Extraction" and "Vegetation Indices". The color fraction extraction analysis calculates the fractions of the red, green and blue colors in a region of interest, along with brightness and luminance parameters. The vegetation indices analysis is a collection of indices used in vegetation phenology and includes the "Green Fraction" (green chromatic coordinate), the "Green-Red Vegetation Index" and the "Green Excess Index". A "Snow Cover Fraction" analysis, which detects snow-covered pixels in the images and georeferences them on a geospatial plane to calculate the snow cover fraction, is currently being implemented. FMIPROT is being developed within the EU Life+ MONIMET project. Altogether, we mounted 28 cameras at 14 different sites in Finland as the MONIMET camera network. In this paper, we present details of FMIPROT and analysis results from the MONIMET camera network. We also discuss planned future developments of FMIPROT.
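The color-fraction and vegetation-index computations named above reduce to simple channel ratios over a region of interest. The sketch below assumes an RGB image with values in [0, 1]; the index definitions follow the standard phenology formulas (GCC, GRVI, excess green) and may differ in detail from FMIPROT's implementation.

```python
# Sketch of color fractions and vegetation indices for a rectangular ROI.
import numpy as np

def vegetation_indices(image, roi):
    """image: (H, W, 3) float array; roi: (row0, row1, col0, col1) slice bounds."""
    r0, r1, c0, c1 = roi
    patch = image[r0:r1, c0:c1].reshape(-1, 3)
    r, g, b = patch.mean(axis=0)
    total = r + g + b + 1e-12
    return {
        "red_fraction": r / total,
        "green_fraction": g / total,          # green chromatic coordinate (GCC)
        "blue_fraction": b / total,
        "grvi": (g - r) / (g + r + 1e-12),    # Green-Red Vegetation Index
        "excess_green": 2 * g - r - b,        # Green Excess Index
        "brightness": total / 3.0,
    }
```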
'Rosy Red' Soil in Phoenix's Scoop
NASA Technical Reports Server (NTRS)
2008-01-01
This image shows fine-grained material inside the Robotic Arm scoop as seen by the Robotic Arm Camera (RAC) aboard NASA's Phoenix Mars Lander on June 25, 2008, the 30th Martian day, or sol, of the mission. The image shows fine, fluffy, red soil particles collected in a sample called 'Rosy Red.' The sample was dug from the trench named 'Snow White' in the area called 'Wonderland.' Some of the Rosy Red sample was delivered to Phoenix's Optical Microscope and Wet Chemistry Laboratory for analysis. The RAC provides its own illumination, so the color seen in RAC images is color as seen on Earth, not color as it would appear on Mars. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.
An Astronaut's View of Jewel-toned Lakes
NASA Technical Reports Server (NTRS)
2002-01-01
Astronauts onboard the International Space Station often observe small, otherwise unnoticed water bodies on the ground due to their unusual colors. For example, the Little Blue Run Dam and reservoir is located in western Pennsylvania, just south of the Ohio River. It is owned by Pennsylvania Power Company and used for industrial sludge impoundment. The materials suspended in the water give it a striking, turquoise color. Another lake whose color is linked to commercial activity is Lake Gribben, just southeast of Palmer in Michigan's Upper Peninsula. Iron ore is extracted from the New Richmond Mine, visible just north of the lake. Images ISS004-E-10472 (Little Blue Run, April 4, 2002) and ISS004-E-10319 (Gribben, April 22, 2002) were provided by the Earth Sciences and Image Analysis Laboratory at Johnson Space Center. Additional images taken by astronauts and cosmonauts can be viewed at the NASA-JSC Gateway to Astronaut Photography of Earth.
NASA Astrophysics Data System (ADS)
Ma, Long; Zhao, Deping
2011-12-01
Spectral imaging technology has been used mostly in remote sensing, but has recently been extended to new areas requiring high-fidelity color reproduction, such as telemedicine and e-commerce. These spectral imaging systems are important because they offer improved color reproduction quality not only for a standard observer under a particular illumination, but also for any other individual exhibiting normal color vision under another illumination. A means of browsing the resulting archives is needed. In this paper, the authors present a new spectral image browsing architecture. The architecture for browsing is as follows: (1) The spectral domain of the spectral image is reduced with the PCA transform; as a result, the eigenvectors and the eigenimages are obtained. (2) The eigenimages are quantized with the original bit depth of the spectral image (e.g., if the spectral image is originally 8-bit, the eigenimages are quantized to 8-bit), and 32-bit floating-point numbers are used for the eigenvectors. (3) The first eigenimage is losslessly compressed by JPEG-LS, while the other eigenimages are lossily compressed by the wavelet-based SPIHT algorithm. For experimental evaluation, the following measures were used: PSNR as the measurement of spectral accuracy, and ΔE for the evaluation of color reproducibility, with standard illuminant D65 used as the light source. To test the proposed method, we used the FOREST and CORAL spectral image databases, containing 12 and 10 spectral images, respectively. The images were acquired in the range of 403-696 nm, the image size was 128×128, the number of bands was 40, and the resolution was 8 bits per sample. Our experiments show that the proposed compression method is suitable for browsing, i.e., for visual purposes.
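Step (1) of the browsing architecture, the PCA reduction of the spectral cube, can be sketched with a plain SVD as below; the quantization and JPEG-LS/SPIHT coding stages of steps (2) and (3) are omitted, and the function names are illustrative.

```python
# Minimal sketch of the PCA step: the spectral cube is unfolded to
# (pixels x bands), decomposed, and a few eigenimages plus eigenvectors
# are kept for browsing or approximate reconstruction.
import numpy as np

def pca_reduce(cube, n_components=5):
    """cube: (H, W, B) float array; returns eigenvectors, eigenimages, band means."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b)
    mean = flat.mean(axis=0)
    _, _, vt = np.linalg.svd(flat - mean, full_matrices=False)
    eigvecs = vt[:n_components]                      # (k, B) spectral eigenvectors
    eigimgs = (flat - mean) @ eigvecs.T              # (H*W, k) eigenimage coefficients
    return eigvecs, eigimgs.reshape(h, w, n_components), mean

def pca_reconstruct(eigvecs, eigimgs, mean):
    h, w, k = eigimgs.shape
    return (eigimgs.reshape(-1, k) @ eigvecs + mean).reshape(h, w, -1)
```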
Astronomy with the color blind
NASA Astrophysics Data System (ADS)
Smith, Donald A.; Melrose, Justyn
2014-12-01
The standard method to create dramatic color images in astrophotography is to record multiple black and white images, each with a different color filter in the optical path, and then tint each frame with a color appropriate to the corresponding filter. When combined, the resulting image conveys information about the sources of emission in the field, although one should be cautious in assuming that such an image shows what the subject would "really look like" if a person could see it without the aid of a telescope. The details of how the eye processes light have a significant impact on how such images should be understood, and the step from perception to interpretation is even more problematic when the viewer is color blind. We report here on an approach to manipulating stacked tricolor images that, while abandoning attempts to portray the color distribution "realistically," enables those suffering from deuteranomaly (the most common form of color blindness) to perceive color distinctions they would otherwise not be able to see.
Accurate color synthesis of three-dimensional objects in an image
NASA Astrophysics Data System (ADS)
Xin, John H.; Shen, Hui-Liang
2004-05-01
Our study deals with color synthesis of a three-dimensional object in an image; i.e., given a single image, a target color can be accurately mapped onto the object such that the color appearance of the synthesized object closely resembles that of the actual one. As it is almost impossible to acquire the complete geometric description of the surfaces of an object in an image, this study attempted to recover the implicit description of geometry for the color synthesis. The description was obtained from either a series of spectral reflectances or the RGB signals at different surface positions on the basis of the dichromatic reflection model. The experimental results showed that this implicit image-based representation is related to the object geometry and is sufficient for accurate color synthesis of three-dimensional objects in an image. The method established is applicable to the color synthesis of both rigid and deformable objects and should contribute to color fidelity in virtual design, manufacturing, and retailing.
NASA Astrophysics Data System (ADS)
Srinivasan, Yeshwanth; Hernes, Dana; Tulpule, Bhakti; Yang, Shuyu; Guo, Jiangling; Mitra, Sunanda; Yagneswaran, Sriraja; Nutter, Brian; Jeronimo, Jose; Phillips, Benny; Long, Rodney; Ferris, Daron
2005-04-01
Automated segmentation and classification of diagnostic markers in medical imagery are challenging tasks. Numerous algorithms for segmentation and classification based on statistical approaches of varying complexity are found in the literature. However, the design of an efficient and automated algorithm for precise classification of desired diagnostic markers is extremely image-specific. The National Library of Medicine (NLM), in collaboration with the National Cancer Institute (NCI), is creating an archive of 60,000 digitized color images of the uterine cervix. NLM is developing tools for the analysis and dissemination of these images over the Web for the study of visual features correlated with precancerous neoplasia and cancer. To enable indexing of images of the cervix, it is essential to develop algorithms for the segmentation of regions of interest, such as acetowhitened regions, and automatic identification and classification of regions exhibiting mosaicism and punctation. The success of such algorithms depends primarily on the selection of relevant features representing the region of interest. We present statistical classification and segmentation algorithms based on color and geometric features, yielding excellent identification of the regions of interest. The distinct classification of mosaic regions from non-mosaic ones has been obtained by clustering multiple geometric and color features of the segmented sections using various morphological and statistical approaches. Such automated classification methodologies will facilitate content-based image retrieval from the digital archive of the uterine cervix and have the potential to develop into an image-based screening tool for cervical cancer.
Hepatitis Diagnosis Using Facial Color Image
NASA Astrophysics Data System (ADS)
Liu, Mingjia; Guo, Zhenhua
Facial color diagnosis is an important diagnostic method in traditional Chinese medicine (TCM). However, due to its qualitative, subjective and experience-based nature, traditional facial color diagnosis has very limited application in clinical medicine. To circumvent the subjective and qualitative problems of facial color diagnosis in TCM, in this paper we present a novel computer-aided facial color diagnosis method (CAFCDM). The method has three parts: a face image database, an image preprocessing module and a diagnosis engine. The face image database is built from a group of 116 patients affected by two kinds of liver disease and 29 healthy volunteers. Quantitative color features are extracted from the facial images using popular digital image processing techniques. Then, a KNN classifier is employed to model the relationship between the quantitative color features and the diseases. The results show that the method can properly identify three groups (healthy, severe hepatitis with jaundice, and severe hepatitis without jaundice) with accuracy higher than 73%.
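The classification stage can be sketched with scikit-learn's KNN classifier. The feature matrix below is a random placeholder standing in for the quantitative color features; only the modeling step is illustrated, not the image preprocessing.

```python
# Sketch of the KNN classification stage on pre-extracted color features.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
features = rng.normal(size=(145, 6))      # placeholder color-feature vectors (116 + 29 subjects)
labels = rng.integers(0, 3, size=145)     # healthy / hepatitis with jaundice / without jaundice

knn = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(knn, features, labels, cv=5)
print("cross-validated accuracy:", scores.mean())
```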
Quantum red-green-blue image steganography
NASA Astrophysics Data System (ADS)
Heidari, Shahrokh; Pourarian, Mohammad Rasoul; Gheibi, Reza; Naseri, Mosayeb; Houshmand, Monireh
One of the most important topics in the field of quantum information processing is quantum data hiding, including quantum steganography and quantum watermarking. This field provides an efficient tool for protecting any kind of digital data. In this paper, three quantum color image steganography algorithms based on the Least Significant Bit (LSB) are investigated. The first algorithm employs only one of the image's channels to cover the secret data. The second procedure is based on an LSB XORing technique, and the last algorithm utilizes two channels of the color image for hiding the secret quantum data. The performances of the proposed schemes are analyzed using software simulations in the MATLAB environment. The analysis of PSNR, BER and histogram graphs indicates that the presented schemes exhibit acceptable performance, and theoretical analysis demonstrates that the network complexity of the approaches scales quadratically.
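A classical (non-quantum) analogue of the first scheme, hiding secret bits in the least significant bit of a single color channel, is sketched below; the quantum-circuit realization analyzed in the paper is not reproduced.

```python
# Classical LSB embedding/extraction in one channel of an RGB image.
import numpy as np

def embed_lsb(cover, bits, channel=2):
    """cover: (H, W, 3) uint8 image; bits: iterable of 0/1; returns the stego image."""
    stego = cover.copy()
    bits = np.asarray(list(bits), dtype=np.uint8)
    chan = stego[..., channel].ravel()                     # copy of the channel values
    chan[: bits.size] = (chan[: bits.size] & 0xFE) | bits  # overwrite the least significant bit
    stego[..., channel] = chan.reshape(stego.shape[:2])    # write the modified channel back
    return stego

def extract_lsb(stego, n_bits, channel=2):
    return stego[..., channel].ravel()[:n_bits] & 1
```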
Barbosa, Daniel J C; Ramos, Jaime; Lima, Carlos S
2008-01-01
Capsule endoscopy is an important tool to diagnose tumor lesions in the small bowel. The capsule endoscopic images possess vital information expressed by color and texture. This paper presents an approach based on the textural analysis of the different color channels, using the wavelet transform to select the bands with the most significant texture information. A new image is then synthesized from the selected wavelet bands through the inverse wavelet transform. The features of each image are based on second-order textural information, and they are used in a classification scheme based on a multilayer perceptron neural network. The proposed methodology has been applied to real data taken from capsule endoscopy exams and reached 98.7% sensitivity and 96.6% specificity. These results support the feasibility of the proposed algorithm.
Color engineering in the age of digital convergence
NASA Astrophysics Data System (ADS)
MacDonald, Lindsay W.
1998-09-01
Digital color imaging has developed over the past twenty years from specialized scientific applications into the mainstream of computing. In addition to the phenomenal growth of computer processing power and storage capacity, great advances have been made in the capabilities and cost-effectiveness of color imaging peripherals. The majority of imaging applications, including the graphic arts, video and film have made the transition from analogue to digital production methods. Digital convergence of computing, communications and television now heralds new possibilities for multimedia publishing and mobile lifestyles. Color engineering, the application of color science to the design of imaging products, is an emerging discipline that poses exciting challenges to the international color imaging community for training, research and standards.
Preparing Colorful Astronomical Images II
NASA Astrophysics Data System (ADS)
Levay, Z. G.; Frattare, L. M.
2002-12-01
We present additional techniques for using mainstream graphics software (Adobe Photoshop and Illustrator) to produce composite color images and illustrations from astronomical data. These techniques have been used on numerous images from the Hubble Space Telescope to produce photographic, print and web-based products for news, education and public presentation as well as illustrations for technical publication. We expand on a previous paper to present more detail and additional techniques, taking advantage of new or improved features available in the latest software versions. While Photoshop is not intended for quantitative analysis of full dynamic range data (as are IRAF or IDL, for example), we have had much success applying Photoshop's numerous, versatile tools to work with scaled images, masks, text and graphics in multiple semi-transparent layers and channels.
Efficient color correction method for smartphone camera-based health monitoring application.
Duc Dang; Chae Ho Cho; Daeik Kim; Oh Seok Kwon; Jo Woon Chong
2017-07-01
Smartphone health monitoring applications have recently been highlighted due to the rapid development of smartphone hardware and software performance. However, the color characteristics of images captured by different smartphone models are dissimilar to each other, and this difference may give non-identical health monitoring results when such applications monitor physiological information using the embedded smartphone cameras. In this paper, we investigate the differences in the color properties of images captured by different smartphone models and apply a color correction method to adjust the dissimilar color values obtained from different smartphone cameras. Experimental results show that the color-corrected images provide much smaller color intensity errors compared to the images without correction. These results can be applied to enhance the consistency of smartphone camera-based health monitoring applications by reducing color intensity errors among the images obtained from different smartphones.
Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking †
Kiku, Daisuke; Okutomi, Masatoshi
2017-01-01
Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI) that improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These are performed based on a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI for a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI can achieve state-of-the-art performance also for the task of multispectral image demosaicking. PMID:29194407
Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking.
Monno, Yusuke; Kiku, Daisuke; Tanaka, Masayuki; Okutomi, Masatoshi
2017-12-01
Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI) that improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These are performed based on a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI for a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI can achieve state-of-the-art performance also for the task of multispectral image demosaicking.
Spatial imaging in color and HDR: prometheus unchained
NASA Astrophysics Data System (ADS)
McCann, John J.
2013-03-01
The Human Vision and Electronic Imaging Conferences (HVEI) at the IS&T/SPIE Electronic Imaging meetings have brought together research in the fundamentals of both vision and digital technology. This conference has incorporated many color disciplines that have contributed to the theory and practice of today's imaging: color constancy, models of vision, digital output, high-dynamic-range imaging, and the understanding of perceptual mechanisms. Before digital imaging, silver halide color was a pixel-based mechanism. Color films are closely tied to colorimetry, the science of matching pixels in a black surround. The quanta catch of the sensitized silver salts determines the amount of colored dyes in the final print. The rapid expansion of digital imaging over the past 25 years has eliminated the limitations of using small local regions in forming images. Spatial interactions can now generate images more like vision. Since the 1950's, neurophysiology has shown that post-receptor neural processing is based on spatial interactions. These results reinforced the findings of 19th century experimental psychology. This paper reviews the role of HVEI in color, emphasizing the interaction of research on vision and the new algorithms and processes made possible by electronic imaging.
USDA-ARS?s Scientific Manuscript database
Vegetative cover can be quantified quickly and consistently and often at lower cost with image analysis of color digital images than with visual assessments. Image-based mapping of vegetative cover for large-scale research and management decisions can now be considered with the accuracy of these met...
High-chroma visual cryptography using interference color of high-order retarder films
NASA Astrophysics Data System (ADS)
Sugawara, Shiori; Harada, Kenji; Sakai, Daisuke
2015-08-01
Visual cryptography can be used as a method of sharing a secret image through several encrypted images. Conventional visual cryptography can display only monochrome images. We have developed a high-chroma color visual encryption technique using the interference color of high-order retarder films. The encrypted films are composed of a polarizing film and retarder films. The retarder films exhibit interference color when they are sandwiched between two polarizing films. We propose a stacking technique for displaying high-chroma interference color images. A prototype visual cryptography device using high-chroma interference color is developed.
Shi, Peng; Zhong, Jing; Hong, Jinsheng; Huang, Rongfang; Wang, Kaijun; Chen, Yunbin
2016-08-26
Nasopharyngeal carcinoma (NPC) is a malignant neoplasm with a high incidence in China and south-east Asia. The Ki-67 protein is strictly associated with cell proliferation and degree of malignancy. Cells with higher Ki-67 expression are generally sensitive to chemotherapy and radiotherapy, so its assessment is beneficial to NPC treatment. It is still challenging to automatically analyze immunohistochemical Ki-67-stained nasopharyngeal carcinoma images due to the uneven color distributions in different cell types. To solve this problem, an automated image processing pipeline based on clustering of local correlation features is proposed in this paper. Unlike traditional morphology-based methods, our algorithm segments cells by classifying image pixels on the basis of local pixel correlations in specially selected color spaces, then characterizes cells with a set of grading criteria as a reference for pathological analysis. Experimental results showed high accuracy and robustness in nucleus segmentation despite image data variance. The quantitative indicators obtained in this study provide reliable evidence for the analysis of Ki-67-stained nasopharyngeal carcinoma microscopic images, which would be helpful in related histopathological research.
Identification, definition and mapping of terrestrial ecosystems in interior Alaska
NASA Technical Reports Server (NTRS)
Anderson, J. H. (Principal Investigator)
1972-01-01
The author has identified the following significant results. A reconstituted color infrared image covering the western Seward Peninsula was used for identifying vegetation types by simple visual examination. The image was taken by ERTS-1 at approximately 1120 hours on August 1, 1972. Seven major colors were identified. Four of these were matched with four units on existing vegetation maps: bright red - shrub thicket; light gray-red - upland tundra; medium gray-red - coastal wet tundra; gray - alpine barrens. In the bright red color, two phases, violet and orange, were recognized and tentatively ascribed to differences in species composition in the shrub thicket type. The three colors which had no map unit equivalents were interpreted as follows: pink - grassland tundra; dark gray-red - burn scars; light orange-red - senescent vegetation. It was concluded that the image provides a considerable amount of information regarding the distribution of vegetation types, even at so simple a level of analysis. It was also concluded that sequential imagery of this type could provide useful information on vegetation fires and phenologic events.
Clustering document fragments using background color and texture information
NASA Astrophysics Data System (ADS)
Chanda, Sukalpa; Franke, Katrin; Pal, Umapada
2012-01-01
Forensic analysis of questioned documents can sometimes be extremely data intensive. A forensic expert might need to analyze a heap of document fragments, and in such cases, to ensure reliability, he or she should focus only on the relevant evidence hidden in those document fragments. Relevant document retrieval requires finding similar document fragments. One way of obtaining such similar documents is to use a document fragment's physical characteristics, such as color and texture. In this article we propose an automatic scheme to retrieve similar document fragments based on the visual appearance of the document paper and its texture. Multispectral color characteristics using biologically inspired color differentiation techniques are implemented here, by projecting document color characteristics into the Lab color space. Gabor filter-based texture analysis is used to identify document texture. Document fragments from the same source are expected to have similar color and texture. For clustering similar document fragments in our test dataset, we use a Self-Organizing Map (SOM) of dimension 5×5, where the document color and texture information are used as features. We obtained an encouraging accuracy of 97.17% on 1063 test images.
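The color and texture features can be sketched with scikit-image. The example below uses the mean Lab color and a small bank of Gabor-filter energies; k-means stands in for the 5×5 self-organizing map used in the paper, purely to keep the sketch dependency-light, and the filter parameters are illustrative.

```python
# Sketch of Lab color + Gabor texture features for clustering document fragments.
import numpy as np
from skimage.color import rgb2lab, rgb2gray
from skimage.filters import gabor
from sklearn.cluster import KMeans

def fragment_features(rgb_image):
    lab = rgb2lab(rgb_image)
    color_feat = lab.reshape(-1, 3).mean(axis=0)             # mean L, a, b of the fragment
    gray = rgb2gray(rgb_image)
    texture_feat = []
    for theta in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):  # four filter orientations
        real, imag = gabor(gray, frequency=0.2, theta=theta)
        texture_feat.append(np.sqrt(real ** 2 + imag ** 2).mean())
    return np.concatenate([color_feat, texture_feat])

def cluster_fragments(images, n_clusters=5):
    feats = np.array([fragment_features(im) for im in images])
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
```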
Fukuda, Hiroyuki; Numata, Kazushi; Nozaki, Akito; Kondo, Masaaki; Morimoto, Manabu; Maeda, Shin; Tanaka, Katsuaki; Ohto, Masao; Ito, Ryu; Ishibashi, Yoshiharu; Oshima, Noriyoshi; Ito, Ayao; Zhu, Hui; Wang, Zhi-Biao
2013-12-01
We evaluated the usefulness of color Doppler flow imaging to compensate for the inadequate resolution of the ultrasound (US) monitoring during high-intensity focused ultrasound (HIFU) for the treatment of hepatocellular carcinoma (HCC). US-guided HIFU ablation assisted by color Doppler flow imaging was performed in 11 patients with small HCC (<3 lesions, <3 cm in diameter). The HIFU system (Chongqing Haifu Tech) was used under US guidance. Color Doppler sonographic studies were performed using an HIFU 6150S US imaging unit system and a 2.7-MHz electronic convex probe. The color Doppler images were used to compensate for the influence of multi-reflections and the emergence of hyperechoes. In 1 of the 11 patients, multi-reflections were responsible for the poor visualization of the tumor. In 10 cases, the tumor was poorly visualized because of the emergence of a hyperecho. In these cases, the ability to identify the original tumor location on the monitor by referencing the color Doppler images of the portal vein and the hepatic vein was very useful. HIFU treatments were successfully performed in all 11 patients with the assistance of color Doppler imaging. Color Doppler imaging is useful for the treatment of HCC using HIFU, compensating for the occasionally poor visualization provided by conventional B-mode US imaging.
Calibration Image of Earth by Mars Color Imager
2005-08-22
Three days after the Mars Reconnaissance Orbiter Aug. 12, 2005, launch, the NASA spacecraft was pointed toward Earth and the Mars Color Imager camera was powered up to acquire a suite of color and ultraviolet images of Earth and the Moon.
2017-02-15
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows part of Gale Crater. Basaltic sands are dark blue in this type of false color combination. The Curiosity Rover is located in another portion of Gale Crater, far southwest of this image. Orbit Number: 51803 Latitude: -4.39948 Longitude: 138.116 Instrument: VIS Captured: 2013-08-18 09:04 http://photojournal.jpl.nasa.gov/catalog/PIA21312
Adaptive enhancement for nonuniform illumination images via nonlinear mapping
NASA Astrophysics Data System (ADS)
Wang, Yanfang; Huang, Qian; Hu, Jing
2017-09-01
Nonuniformly illuminated images suffer from degraded details because of underexposure, overexposure, or a combination of both. To improve the visual quality of color images, underexposed regions should be lightened, whereas overexposed areas need to be dimmed properly. However, discriminating between underexposure and overexposure is troublesome. Compared with traditional methods that produce a fixed demarcation value throughout an image, the proposed demarcation changes as the local luminance varies and is thus suitable for handling complicated illumination. Based on this locally adaptive demarcation, a nonlinear modification is applied to the image luminance. Further, with the modified luminance, we propose a nonlinear process to reconstruct a luminance-enhanced color image. For every pixel, this nonlinear process takes the luminance change and the original chromaticity into account, thus avoiding exaggerated colors in dark areas and depressed colors in very bright regions. Finally, to improve image contrast, a local, image-dependent exponential technique is designed and applied to the RGB channels of the obtained color image. Experimental results demonstrate that our method produces good contrast and vivid color for both nonuniformly illuminated images and images with normal illumination.
Object knowledge changes visual appearance: semantic effects on color afterimages.
Lupyan, Gary
2015-10-01
According to predictive coding models of perception, what we see is determined jointly by the current input and the priors established by previous experience, expectations, and other contextual factors. The same input can thus be perceived differently depending on the priors that are brought to bear during viewing. Here, I show that expected (diagnostic) colors are perceived more vividly than arbitrary or unexpected colors, particularly when color input is unreliable. Participants were tested on a version of the 'Spanish Castle Illusion' in which viewing a hue-inverted image renders a subsequently shown achromatic version of the image in vivid color. Adapting to objects with intrinsic colors (e.g., a pumpkin) led to stronger afterimages than adapting to arbitrarily colored objects (e.g., a pumpkin-colored car). Considerably stronger afterimages were also produced by scenes containing intrinsically colored elements (grass, sky) compared to scenes with arbitrarily colored objects (books). The differences between images with diagnostic and arbitrary colors disappeared when the association between the image and color priors was weakened by, e.g., presenting the image upside-down, consistent with the prediction that color appearance is being modulated by color knowledge. Visual inputs that conflict with prior knowledge appear to be phenomenologically discounted, but this discounting is moderated by input certainty, as shown by the final study which uses conventional images rather than afterimages. As input certainty is increased, unexpected colors can become easier to detect than expected ones, a result consistent with predictive-coding models. Copyright © 2015 Elsevier B.V. All rights reserved.
Small-Scale Spectral and Color Analysis of Ritchey Crater Impact Materials
NASA Astrophysics Data System (ADS)
Bray, Veronica; Chojnacki, Matthew; McEwen, Alfred; Heyd, Rodney
2014-11-01
Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) analysis of Ritchey crater on Mars has allowed identification of the minerals uplifted from depth within its central peak as well as the dominant spectral signature of the crater fill materials which surround it. However, the 18 m/px resolution of CRISM prevents full analysis of the nature of small-scale dykes, megabreccia blocks and finer-scale crater-fill units. We extend our existing CRISM-based compositional mapping of the Ritchey crater interior to sub-CRISM pixel scales with the use of High Resolution Imaging Science Experiment (HiRISE) Color Ratio Products (CRPs). These CRPs are then compared to CRISM images; the correlation between color ratio and CRISM spectral signature for a large bedrock unit is defined and used to suggest a similar composition for a smaller unit with the same color ratio. Megabreccia deposits, angular fragments of rock in excess of 1 meter in diameter within a finer-grained matrix, are common at Ritchey. The dominant spectral signature from each megabreccia unit varies with location around Ritchey and appears to reflect the matrix composition (based on texture and albedo similarities to surrounding rocks) rather than clast composition. In cases where the breccia block size is large enough for CRISM analysis, many different mineral compositions are noted (low-calcium pyroxene (LCP), olivine (OL), alteration products) depending on the location. All block compositions (as inferred from CRPs) are observed down to the limit of HiRISE resolution. We have found a variety of dyke compositions within our mapping area. Correlation between CRP color and CRISM spectra in this area suggests that large (10 m wide) dykes within LCP-bearing bedrock close to the crater center tend to have a composition similar to that of the host rock. Smaller dykes running non-parallel to the larger dykes are inferred to be OL-rich, suggesting multiple phases of dyke formation within the Ritchey crater and its bedrock.
Association of red coloration with senescence of sugar maple leaves in autumn
P.G. Schaberg; P.F. Murakami; M.R. Turner; H.K. Heitz; G.J. Hawley
2008-01-01
We evaluated the association of red coloration with senescence in sugar maple (Acer saccharum Marsh.) leaves by assessing differences in leaf retention strength and the progression of the abscission layer through the vascular bundle of green, yellow, and red leaves of 14 mature open-grown trees in October 2002. Computer image analysis confirmed...
78 FR 18611 - Summit on Color in Medical Imaging; Cosponsored Public Workshop; Request for Comments
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-27
...] Summit on Color in Medical Imaging; Cosponsored Public Workshop; Request for Comments AGENCY: Food and...: The Food and Drug Administration (FDA) and cosponsor International Color Consortium (ICC) are announcing the following public workshop entitled ``Summit on Color in Medical Imaging: An International...
NASA Astrophysics Data System (ADS)
Kanamori, Katsuhiro
2016-07-01
An endoscopic image processing technique for enhancing the appearance of microstructures on translucent mucosae is described. This technique employs two pairs of co- and cross-polarization images under two different linearly polarized lights, from which the averaged subtracted polarization image (AVSPI) is calculated. Experiments were then conducted using an acrylic phantom and excised porcine stomach tissue using a manual experimental setup with ring-type lighting, two rotating polarizers, and a color camera; better results were achieved with the proposed method than with conventional color intensity image processing. An objective evaluation method that uses texture analysis was developed and used to evaluate the enhanced microstructure images. This paper introduces two types of online, rigid-type, polarimetric endoscopic implementations using a polarized ring-shaped LED and a polarimetric camera. The first type uses a beam-splitter-type color polarimetric camera, and the second uses a single-chip monochrome polarimetric camera. Microstructures on the mucosa surface were enhanced robustly with these online endoscopes regardless of the difference in the extinction ratio of each device. These results show that polarimetric endoscopy using AVSPI is both effective and practical for hardware implementation.
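The AVSPI itself reduces to a simple combination of the four registered images. The sketch below assumes float images and equal weighting of the two illumination states; any additional normalization used in the actual implementation is omitted.

```python
# Sketch of the averaged subtracted polarization image (AVSPI): two co/cross
# image pairs acquired under two orthogonal linear polarization states of the
# illumination, with their difference images averaged.
import numpy as np

def avspi(co_0, cross_0, co_90, cross_90):
    """All inputs are registered float images under 0- and 90-degree illumination."""
    sub_0 = co_0 - cross_0       # subtracted polarization image for illumination state 0
    sub_90 = co_90 - cross_90    # ... for the orthogonal illumination state
    return 0.5 * (sub_0 + sub_90)
```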
Luo, Qiang; Yan, Zhuangzhi; Gu, Dongxing; Cao, Lei
This paper proposes an FPGA implementation of an image interpolation algorithm based on bilinear interpolation and a color correction algorithm based on polynomial regression, addressing the limited number of imaging pixels and the color distortion of an ultra-thin electronic endoscope. Simulation results showed that the proposed algorithm realized real-time display of 1280 x 720 @ 60 Hz HD video and, using the X-rite color checker as the standard colors, reduced the average color difference by about 30% compared with that before color correction.
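The polynomial-regression color correction can be sketched as a least-squares fit on corresponding color-chart patches. The second-order polynomial expansion below is an assumption for illustration; the paper's exact polynomial order and FPGA implementation details are not reproduced.

```python
# Sketch of fitting a polynomial color correction that maps measured RGB
# patch values (e.g. from a color checker) onto their reference values.
import numpy as np

def poly_features(rgb):
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    ones = np.ones_like(r)
    # Second-order terms: constant, linear, cross and squared channel products.
    return np.column_stack([ones, r, g, b, r * g, r * b, g * b, r * r, g * g, b * b])

def fit_color_correction(measured, reference):
    """measured, reference: (N, 3) arrays of corresponding patch colors."""
    coeffs, *_ = np.linalg.lstsq(poly_features(measured), reference, rcond=None)
    return coeffs                                    # (10, 3) coefficient matrix

def apply_color_correction(rgb_pixels, coeffs):
    return poly_features(rgb_pixels) @ coeffs
```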
Mortensen, Kim I; Tassone, Chiara; Ehrlich, Nicky; Andresen, Thomas L; Flyvbjerg, Henrik
2018-05-09
Nanosize lipid vesicles are used extensively at the interface between nanotechnology and biology, e.g., as containers for chemical reactions at minute concentrations and vehicles for targeted delivery of pharmaceuticals. Typically, vesicle samples are heterogeneous as regards vesicle size and structural properties. Consequently, vesicles must be characterized individually to ensure correct interpretation of experimental results. Here we do that using dual-color fluorescence labeling of vesicles, labeling their lipid bilayers and lumens separately. A vesicle then images as two spots, one in each color channel. A simple image analysis determines the total intensity and width of each spot. These four data all depend on the vesicle radius in a simple manner for vesicles that are spherical, unilamellar, and optimal encapsulators of molecular cargo. This permits identification of such ideal vesicles. They in turn enable calibration of the dual-color fluorescence microscopy images they appear in. Since this calibration is not a separate experiment but an analysis of images of vesicles to be characterized, it eliminates the potential source of error that a separate calibration experiment would have been. Nonideal vesicles in the same images were characterized by how their four data violate the calibrated relationship established for ideal vesicles. In this way, our method yields size, shape, lamellarity, and encapsulation efficiency of each imaged vesicle. Applying this procedure to extruded samples of vesicles, we found that, contrary to common assumptions, only a fraction of vesicles are ideal.
What's color got to do with it? The influence of color on visual attention in different categories.
Frey, Hans-Peter; Honey, Christian; König, Peter
2008-10-23
Certain locations attract human gaze in natural visual scenes. Are there measurable features that distinguish these locations from others? While there has been extensive research on luminance-defined features, only a few studies have examined the influence of color on overt attention. In this study, we addressed this question by presenting color-calibrated stimuli and analyzing color features that are known to be relevant for the responses of LGN neurons. We recorded eye movements of 15 human subjects freely viewing colored and grayscale images of seven different categories. All images were also analyzed by the saliency map model (L. Itti, C. Koch, & E. Niebur, 1998). We find that human fixation locations differ between colored and grayscale versions of the same image much more than predicted by the saliency map. Examining the influence of various color features on overt attention, we find two extreme categories: while in rainforest images all color features are salient, none is salient in fractals. In all other categories, color features are selectively salient. This shows that the influence of color on overt attention depends on the type of image. Also, it is crucial to analyze neurophysiologically relevant color features when quantifying the influence of color on attention.
Selection of optimal spectral sensitivity functions for color filter arrays.
Parmar, Manu; Reeves, Stanley J
2010-12-01
A color image meant for human consumption can be appropriately displayed only if at least three distinct color channels are present. Typical digital cameras acquire three-color images with only one sensor. A color filter array (CFA) is placed on the sensor such that only one color is sampled at a particular spatial location. This sparsely sampled signal is then reconstructed to form a color image with information about all three colors at each location. In this paper, we show that the wavelength sensitivity functions of the CFA color filters affect both the color reproduction ability and the spatial reconstruction quality of recovered images. We present a method to select perceptually optimal color filter sensitivity functions based upon a unified spatial-chromatic sampling framework. A cost function independent of particular scenes is defined that expresses the error between a scene viewed by the human visual system and the reconstructed image that represents the scene. A constrained minimization of the cost function is used to obtain optimal values of color-filter sensitivity functions for several periodic CFAs. The sensitivity functions are shown to perform better than typical RGB and CMY color filters in terms of both the s-CIELAB ∆E error metric and a qualitative assessment.
Effective method for detecting regions of given colors and the features of the region surfaces
NASA Astrophysics Data System (ADS)
Gong, Yihong; Zhang, HongJiang
1994-03-01
Color can be used as a very important cue for image recognition. In industrial and commercial areas, color is widely used as a trademark or identifying feature of objects, such as packaged goods, advertising signs, etc. In image database systems, one may retrieve an image of interest by specifying prominent colors and their locations in the image (image retrieval by content). These facts enable us to detect or identify a target object using colors. However, this task depends mainly on how effectively we can identify a color and detect regions of the given color under possibly non-uniform illumination conditions such as shade, highlight, and strong contrast. In this paper, we present an effective method to detect regions matching given colors, along with the features of the region surfaces. We adopt the HVC color coordinates in the method because of its ability to completely separate the luminance and chromatic components of colors. Three basis functions, serving respectively as low-pass, high-pass, and band-pass filters, are introduced.
Shear Wave Imaging of Breast Tissue by Color Doppler Shear Wave Elastography.
Yamakoshi, Yoshiki; Nakajima, Takahito; Kasahara, Toshihiro; Yamazaki, Mayuko; Koda, Ren; Sunaguchi, Naoki
2017-02-01
Shear wave elastography is a distinctive method for assessing the viscoelastic characteristics of soft tissue, which are difficult to obtain with other imaging modalities. This paper proposes a novel shear wave elastography, color Doppler shear wave imaging (CD SWI), for breast tissue. A continuous shear wave is produced by a small lightweight actuator attached to the tissue surface. The shear wave wavefront propagating in tissue is reconstructed as a binary pattern consisting of zero and maximum flow velocities on the color flow image (CFI). Neither modification of the ultrasound color flow imaging instrument nor a high-frame-rate ultrasound imaging instrument is required to obtain the shear wave wavefront map, although two conditions on shear wave displacement amplitude and shear wave frequency must be met to obtain it. These conditions are not severe restrictions in breast imaging, because the minimum displacement amplitude is [Formula: see text] for an ultrasonic wave frequency of 12 MHz and the shear wave frequency can be chosen from several frequencies suited for breast imaging. Fourier analysis along the time axis suppresses clutter noise in the CFI. A directional filter extracts the shear wave propagating in the forward direction. Several maps, such as shear wave phase, velocity, and propagation maps, are reconstructed by CD SWI. The accuracy of shear wave velocity measurement is evaluated for a homogeneous agar gel phantom by comparison with the acoustic radiation force impulse method. Experimental results for breast tissue are shown for a shear wave frequency of 296.6 Hz.
#TheDress: Categorical perception of an ambiguous color image.
Lafer-Sousa, Rosa; Conway, Bevil R
2017-10-01
We present a full analysis of data from our preliminary report (Lafer-Sousa, Hermann, & Conway, 2015) and test whether #TheDress image is multistable. A multistable image must give rise to more than one mutually exclusive percept, typically within single individuals. Clustering algorithms of color-matching data showed that the dress was seen categorically, as white/gold (W/G) or blue/black (B/K), with a blue/brown transition state. Multinomial regression predicted categorical labels. Consistent with our prior hypothesis, W/G observers inferred a cool illuminant, whereas B/K observers inferred a warm illuminant; moreover, subjects could use skin color alone to infer the illuminant. The data provide some, albeit weak, support for our hypothesis that day larks see the dress as W/G and night owls see it as B/K. About half of observers who were previously familiar with the image reported switching categories at least once. Switching probability increased with professional art experience. Priming with an image that disambiguated the dress as B/K biased reports toward B/K (priming with W/G had negligible impact); furthermore, knowledge of the dress's true colors and any prior exposure to the image shifted the population toward B/K. These results show that some people have switched their perception of the dress. Finally, consistent with a role of attention and local image statistics in determining how multistable images are seen, we found that observers tended to discount as achromatic the dress component that they did not attend to: B/K reporters focused on a blue region, whereas W/G reporters focused on a golden region.
Color standardization and optimization in whole slide imaging.
Yagi, Yukako
2011-03-30
Standardization and validation of the color displayed by digital slides is an important aspect of digital pathology implementation. While the most common reason for color variation is the variance in the protocols and practices in the histology lab, the displayed color can also be affected by variation in capture parameters (for example, illumination and filters), image processing, and display factors in the digital systems themselves. We have been developing techniques for color validation and optimization along two paths. The first is based on two standard slides that are scanned and displayed by the imaging system in question. In this approach, one slide is embedded with nine filters whose colors were selected especially for H&E stained slides (resembling a tiny Macbeth color chart); the specific colors of the nine filters were determined in our previous study and modified for whole slide imaging (WSI). The other slide is an H&E stained mouse embryo. Both of these slides were scanned and the displayed images were compared to a standard. The second approach is based on our previous multispectral imaging research. As a first step, the two-slide method (above) was used to identify inaccurate color display and its causes, and to understand the importance of accurate color in digital pathology. We have also improved the multispectral-based algorithm for more consistent results in stain standardization. In the near future, the results of the two-slide and multispectral techniques can be combined and will be widely available. We have been conducting a series of research and development projects to improve image quality and establish image quality standardization. This paper discusses one of the most important aspects of image quality: color.
Robust crop and weed segmentation under uncontrolled outdoor illumination
USDA-ARS's Scientific Manuscript database
A new machine vision algorithm for weed detection was developed from RGB color model images. Processes included in the detection algorithm were excessive green conversion, threshold value computation by statistical analysis, adaptive image segmentation by adjusting the threshold value, median filtering, ...
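A minimal sketch of this kind of pipeline is given below, assuming the widely used excess-green index 2G - R - B for the green conversion and a mean-plus-k-standard-deviations rule for the statistical threshold; both are stand-ins, since the abstract does not give the exact formulas.

```python
import numpy as np
from scipy.ndimage import median_filter

def segment_vegetation(rgb, k=1.0, filt_size=5):
    """Rough vegetation mask from an RGB image (illustrative sketch)."""
    r, g, b = [rgb[..., i].astype(np.float64) for i in range(3)]
    exg = 2.0 * g - r - b                       # excess-green index (assumed form)
    thr = exg.mean() + k * exg.std()            # statistical threshold (assumed rule)
    mask = exg > thr                            # adaptive segmentation
    # Median filtering removes salt-and-pepper noise from the binary mask.
    return median_filter(mask.astype(np.uint8), size=filt_size).astype(bool)
```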
Image Decoding of Photonic Crystal Beads Array in the Microfluidic Chip for Multiplex Assays
Yuan, Junjie; Zhao, Xiangwei; Wang, Xiaoxia; Gu, Zhongze
2014-01-01
Along with the miniaturization and intellectualization of biomedical instruments, the increasing demand for health monitoring anywhere and anytime elevates the need for the development of point-of-care testing (POCT). Photonic crystal beads (PCBs), as one kind of effective encoded microcarrier, can be integrated with microfluidic chips in order to realize cost-effective and highly sensitive multiplex bioassays. However, automated analysis is difficult because of the characteristics of the PCBs and the unique detection manner. In this paper, we propose a strategy that takes advantage of automated image processing for the color decoding of a PCB array in a microfluidic chip for multiplex assays. By processing and aligning two modal images, epi-fluorescence and epi-white light, every intact bead in the image is accurately extracted and decoded by its PC color, which stands for the target species. This method, which shows high robustness and accuracy under various configurations, eliminates the high hardware requirements of spectroscopic analysis and user-interactive software, and provides adequate support for the general automated analysis of POCT based on PCB arrays. PMID:25341876
Shallow sea-floor reflectance and water depth derived by unmixing multispectral imagery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bierwirth, P.N.; Lee, T.J.; Burne, R.V.
1993-03-01
A major problem for mapping shallow water zones by the analysis of remotely sensed data is that contrast effects due to water depth obscure and distort the spectral nature of the substrate. This paper outlines a new method which unmixes the exponential influence of depth in each pixel by employing a mathematical constraint. This leaves a multispectral residual which represents relative substrate reflectance. Inputs to the process are the raw multispectral data and water attenuation coefficients derived by the co-analysis of known bathymetry and remotely sensed data. Outputs are substrate-reflectance images corresponding to the input bands and a greyscale depth image. The method has been applied in the analysis of Landsat TM data at Hamelin Pool in Shark Bay, Western Australia. Algorithm-derived substrate reflectance images for Landsat TM bands 1, 2, and 3 combined in color represent the optimum enhancement for mapping or classifying substrate types. As a result, this color image successfully delineated features which were obscured in the raw data, such as the distributions of sea-grasses, microbial mats, and sandy areas. 19 refs.
NASA Astrophysics Data System (ADS)
Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan
2017-03-01
Digital pathology and telepathology require imaging tools with high-throughput, high-resolution and accurate color reproduction. Lens-free on-chip microscopy based on digital in-line holography is a promising technique towards these needs, as it offers a wide field of view (FOV >20 mm2) and high resolution with a compact, low-cost and portable setup. Color imaging has been previously demonstrated by combining reconstructed images at three discrete wavelengths in the red, green and blue parts of the visible spectrum, i.e., the RGB combination method. However, this RGB combination method is subject to color distortions. To improve the color performance of lens-free microscopy for pathology imaging, here we present a wavelet-based color fusion imaging framework, termed "digital color fusion microscopy" (DCFM), which digitally fuses together a grayscale lens-free microscope image taken at a single wavelength and a low-resolution and low-magnification color-calibrated image taken by a lens-based microscope, which can simply be a mobile phone based cost-effective microscope. We show that the imaging results of an H&E stained breast cancer tissue slide with the DCFM technique come very close to a color-calibrated microscope using a 40x objective lens with 0.75 NA. Quantitative comparison showed 2-fold reduction in the mean color distance using the DCFM method compared to the RGB combination method, while also preserving the high-resolution features of the lens-free microscope. Due to the cost-effective and field-portable nature of both lens-free and mobile-phone microscopy techniques, their combination through the DCFM framework could be useful for digital pathology and telepathology applications, in low-resource and point-of-care settings.
NASA Astrophysics Data System (ADS)
Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan
2017-03-01
Digital holographic on-chip microscopy achieves large space-bandwidth-products (e.g., >1 billion) by making use of pixel super-resolution techniques. To synthesize a digital holographic color image, one can take three sets of holograms representing the red (R), green (G) and blue (B) parts of the spectrum and digitally combine them to synthesize a color image. The data acquisition efficiency of this sequential illumination process can be improved by 3-fold using wavelength-multiplexed R, G and B illumination that simultaneously illuminates the sample, and using a Bayer color image sensor with known or calibrated transmission spectra to digitally demultiplex these three wavelength channels. This demultiplexing step is conventionally used with interpolation-based Bayer demosaicing methods. However, because the pixels of different color channels on a Bayer image sensor chip are not at the same physical location, conventional interpolation-based demosaicing process generates strong color artifacts, especially at rapidly oscillating hologram fringes, which become even more pronounced through digital wave propagation and phase retrieval processes. Here, we demonstrate that by merging the pixel super-resolution framework into the demultiplexing process, such color artifacts can be greatly suppressed. This novel technique, termed demosaiced pixel super-resolution (D-PSR) for digital holographic imaging, achieves very similar color imaging performance compared to conventional sequential R,G,B illumination, with 3-fold improvement in image acquisition time and data-efficiency. We successfully demonstrated the color imaging performance of this approach by imaging stained Pap smears. The D-PSR technique is broadly applicable to high-throughput, high-resolution digital holographic color microscopy techniques that can be used in resource-limited-settings and point-of-care offices.
NASA Astrophysics Data System (ADS)
Shimobaba, Tomoyoshi; Kakue, Takashi; Ito, Tomoyoshi
2014-06-01
We propose acceleration of color computer-generated holograms (CGHs) from three-dimensional (3D) scenes that are expressed as texture (RGB) and depth (D) images. These images are obtained by 3D graphics libraries and RGB-D cameras: for example, OpenGL and Kinect, respectively. We can regard them as two-dimensional (2D) cross-sectional images along the depth direction. The generation of CGHs from the 2D cross-sectional images requires multiple diffraction calculations. If we use convolution-based diffraction such as the angular spectrum method, the diffraction calculation takes a long time and requires a large amount of memory, because the convolution diffraction calculation requires expansion of the 2D cross-sectional images to avoid wraparound noise. In this paper, we first describe the acceleration of the diffraction calculation using "Band-limited double-step Fresnel diffraction," which does not require the expansion. Next, we describe color CGH acceleration using color space conversion. In general, color CGHs are generated in RGB color space; however, the same calculation must be repeated for each color component, so the computational burden of color CGH generation is three times that of monochrome CGH generation. We can reduce the computational burden by using the YCbCr color space, because the 2D cross-sectional images in YCbCr color space can be down-sampled without impairing image quality.
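The chroma down-sampling idea can be illustrated with a short sketch that converts RGB to YCbCr (using the ITU-R BT.601 coefficients as an assumed choice) and block-averages the chroma planes; the sampling factor and averaging scheme are illustrative, not the paper's exact settings.

```python
import numpy as np

# ITU-R BT.601 RGB -> YCbCr matrix (one common choice; the paper does not
# specify which YCbCr definition it uses). Offsets are omitted in this sketch.
_M = np.array([[ 0.299,     0.587,     0.114   ],
               [-0.168736, -0.331264,  0.5     ],
               [ 0.5,      -0.418688, -0.081312]])

def rgb_to_ycbcr_downsampled(rgb, factor=2):
    """Return full-resolution Y and chroma planes downsampled by `factor`.

    Assumes the image height and width are divisible by `factor`. Only the
    full-resolution Y plane then needs the expensive full-size diffraction
    calculation, while Cb/Cr can be processed at reduced resolution.
    """
    ycbcr = rgb.astype(np.float64) @ _M.T
    y = ycbcr[..., 0]
    h, w = y.shape
    cb = ycbcr[..., 1].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    cr = ycbcr[..., 2].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return y, cb, cr
```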
In situ spectroradiometric quantification of ERTS data
NASA Technical Reports Server (NTRS)
Yost, E. (Principal Investigator)
1972-01-01
The author has identified the following significant results. Additive color photographic analysis of ERTS-1 multispectral imagery indicates that the presence of soil moisture in playas (desert dry lakes) can be readily detected from space. Time-sequence additive color presentations, in which 600-700 nm bands were taken at three successive 18-day cycles, show that changes in soil moisture of playas with time can be detected as unique color signatures and can probably be quantitatively measured using photographic images of multispectral scanner data.
Color normalization for robust evaluation of microscopy images
NASA Astrophysics Data System (ADS)
Švihlík, Jan; Kybic, Jan; Habart, David
2015-09-01
This paper deals with color normalization of microscopy images of Langerhans islets in order to increase robustness of the islet segmentation to illumination changes. The main application is automatic quantitative evaluation of the islet parameters, useful for determining the feasibility of islet transplantation in diabetes. First, background illumination inhomogeneity is compensated and a preliminary foreground/background segmentation is performed. The color normalization itself is done in either lαβ or logarithmic RGB color spaces, by comparison with a reference image. The color-normalized images are segmented using color-based features and pixel-wise logistic regression, trained on manually labeled images. Finally, relevant statistics such as the total islet area are evaluated in order to determine the success likelihood of the transplantation.
Color constancy using bright-neutral pixels
NASA Astrophysics Data System (ADS)
Wang, Yanfang; Luo, Yupin
2014-03-01
An effective illuminant-estimation approach for color constancy is proposed. Bright and near-neutral pixels are selected to jointly represent the illuminant color and are used for illuminant estimation. To assess how well pixels represent the illuminant, a bright-neutral strength (BNS) measure is proposed that combines pixel chroma and brightness. A certain percentage of pixels with the largest BNS is then selected as the representative set. For every input image, a proper percentage value is determined via an iterative strategy that seeks the optimal color-corrected image. To compare the various color-corrected versions of an input image, an image color-cast degree (ICCD) is devised using the means and standard deviations of the RGB channels. Experimental evaluation on standard real-world datasets validates the effectiveness of the proposed approach.
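The abstract does not give the BNS formula, so the sketch below stands in with a simple brightness-minus-chroma score and a fixed top percentage; it is meant only to illustrate the idea of estimating the illuminant from bright, near-neutral pixels.

```python
import numpy as np

def estimate_illuminant(rgb, top_percent=3.0):
    """Illuminant estimate from bright, near-neutral pixels (illustrative sketch)."""
    img = rgb.reshape(-1, 3).astype(np.float64)
    brightness = img.mean(axis=1)
    chroma = img.max(axis=1) - img.min(axis=1)   # crude neutrality measure
    bns = brightness - chroma                    # assumed stand-in for the paper's BNS
    n = max(1, int(len(img) * top_percent / 100.0))
    idx = np.argsort(bns)[-n:]                   # pixels with the largest BNS
    illuminant = img[idx].mean(axis=0)
    return illuminant / np.linalg.norm(illuminant)
```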
Selection of embryogenic sugarcane callus by image analysis.
Honda, H; Ito, T; Yamada, J; Hanai, T; Matsuoka, M; Kobayashi, T
1999-01-01
In the cultivation of plant calli on solid media, two kinds of calli, compact and friable, which appear as bright yellow and whitish clumps, respectively, are often obtained. Distinguishing these calli is very important in the regeneration step. An image analysis system with a charge-coupled device (CCD) camera and a microscope was used to distinguish sugarcane calli. The original images of compact and friable calli were input to a computer via an image analysis board. First, the brightnesses of the trichromatic colors red (R), green (G) and blue (B) of each pixel were extracted and the average brightness value for each color was calculated. From these values of the trichromatic colors, compact and friable calli could not be clearly distinguished. Next, the brightness of yellow, Br(Y), and of white, Br(W), were defined using Br(R), Br(G) and Br(B), and the difference between Br(Y) and Br(W), Br(Y-W), which can be used to express the yellowish grade, was calculated. When Br(Y-W) was determined from all pixels of the original images of both calli, the compact calli could be clearly distinguished from the friable calli by the frequency distributions of Br(Y-W). An average brightness center value, Av(C(Y-W)), was calculated from the frequency distributions. It was found that calli with less than 10 units of Av(C(Y-W)) were never regenerated, and a proportional relationship between Av(C(Y-W)) and the regeneration frequency of the callus line was obtained.
Duval, Joseph S.
1995-01-01
This CD-ROM contains images generated from geophysical data, software for displaying and analyzing the images and software for displaying and examining profile data from aerial surveys flown as part of the National Uranium Resource Evaluation (NURE) Program of the U.S. Department of Energy. The images included are of gamma-ray data (uranium, thorium, and potassium channels), Bouguer gravity data, isostatic residual gravity data, aeromagnetic anomalies, topography, and topography with bathymetry. This publication contains image data for the conterminous United States and profile data for the conterminous United States within the area longitude 108 to 126 degrees W. and latitude 34 to 49 degrees N. The profile data include apparent surface concentrations of potassium, uranium, and thorium, the residual magnetic field, and the height above the ground. The images on this CD-ROM include graytone and color images of each data set, color shaded-relief images of the potential-field and topographic data, and color composite images of the gamma-ray data. The image display and analysis software can register images with geographic and geologic overlays. The profile display software permits the user to view the profiles as well as obtain data listings and export ASCII versions of data for selected flight lines.
Color transfer algorithm in medical images
NASA Astrophysics Data System (ADS)
Wang, Weihong; Xu, Yangfa
2007-12-01
In digital virtual human projects, image data are acquired from frozen slices of human body specimens. The color and brightness within a group of images of a given organ can differ considerably, which creates great difficulty in edge extraction, segmentation, and 3D reconstruction. It is therefore necessary to unify the color of the images. The color transfer algorithm is well suited to this kind of problem. This paper introduces the principle of the algorithm and applies it to medical image processing.
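The best-known algorithm of this kind is statistical color transfer in the style of Reinhard et al.; the simplified sketch below matches per-channel means and standard deviations directly in RGB, whereas the original method performs the same matching in the decorrelated lαβ space.

```python
import numpy as np

def transfer_color_stats(source, reference):
    """Match per-channel mean and standard deviation of `source` to `reference`.

    Simplified statistical color transfer applied directly in RGB; 8-bit value
    range is assumed for the final clipping.
    """
    src = source.astype(np.float64)
    ref = reference.astype(np.float64)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        s_mu, s_sigma = src[..., c].mean(), src[..., c].std() + 1e-8
        r_mu, r_sigma = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (src[..., c] - s_mu) * (r_sigma / s_sigma) + r_mu
    return np.clip(out, 0, 255)
```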
Global Binary Continuity for Color Face Detection With Complex Background
NASA Astrophysics Data System (ADS)
Belavadi, Bhaskar; Mahendra Prashanth, K. V.; Joshi, Sujay S.; Suprathik, N.
2017-08-01
In this paper, we propose a method to detect human faces in color images with complex backgrounds. The proposed algorithm makes use of two color space models, HSV and YCgCr. The color-segmented image is filled uniformly with a single color (binary), and all unwanted discontinuous lines are then removed to produce the final image. Experimental results on the Caltech database show that the proposed model achieves far better segmentation for faces of varying orientations, skin colors, and background environments.
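As an illustration of color-space skin segmentation of this kind, the sketch below thresholds only the HSV half of the model with commonly cited, assumed ranges (the YCgCr test would be analogous); the threshold values and the morphological cleanup are not the authors' tuned parameters.

```python
import cv2
import numpy as np

def skin_mask_hsv(rgb):
    """Binary skin-color mask via HSV thresholding (illustrative sketch).

    Expects an 8-bit RGB image. Threshold ranges below are common
    illustrative choices, not values from the paper.
    """
    hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV)          # H in [0,179], S,V in [0,255]
    lower = np.array([0, 40, 60], dtype=np.uint8)
    upper = np.array([25, 180, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    # Morphological opening removes small discontinuous false detections,
    # loosely mirroring the paper's removal of unwanted discontinuous lines.
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```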
Superresolution with the focused plenoptic camera
NASA Astrophysics Data System (ADS)
Georgiev, Todor; Chunev, Georgi; Lumsdaine, Andrew
2011-03-01
Digital images from a CCD or CMOS sensor with a color filter array must undergo a demosaicing process to combine the separate color samples into a single color image. This interpolation process can interfere with the subsequent superresolution process. Plenoptic superresolution, which relies on precise sub-pixel sampling across captured microimages, is particularly sensitive to such resampling of the raw data. In this paper we present an approach for superresolving plenoptic images that takes place at the time of demosaicing the raw color image data. Our approach exploits the interleaving provided by typical color filter arrays (e.g., Bayer filter) to further refine plenoptic sub-pixel sampling. Our rendering algorithm treats the color channels in a plenoptic image separately, which improves final superresolution by a factor of two. With appropriate plenoptic capture we show the theoretical possibility for rendering final images at full sensor resolution.
Adaptive Morphological Feature-Based Object Classifier for a Color Imaging System
NASA Technical Reports Server (NTRS)
McDowell, Mark; Gray, Elizabeth
2009-01-01
Utilizing a Compact Color Microscope Imaging System (CCMIS), a unique algorithm has been developed that combines human intelligence with machine vision techniques to produce an autonomous microscope tool for biomedical, industrial, and space applications. This technique is based on an adaptive, morphological, feature-based mapping function comprising 24 mutually inclusive feature metrics used to characterize complex cells/objects derived from color image analysis. The features include: Area (total number of non-background pixels inside and including the perimeter); Bounding Box (smallest rectangle that bounds an object); centerX and centerY (x- and y-coordinates of the intensity-weighted center of mass of an entire object or multi-object blob); Circumference (a measure of circumference that accounts for whether neighboring pixels are joined diagonally, a longer distance than horizontally or vertically joined pixels); Elongation (a measure of particle elongation given as a number between 0 and 1: if equal to 1, the particle bounding box is square, and as the value decreases from 1 the particle becomes more elongated); Ext_vector (extremal vector); Major Axis and Minor Axis (the lengths of the major and minor axes of the smallest ellipse encompassing an object); Partial (indicates whether the particle extends beyond the field of view); Perimeter Points (points that make up a particle perimeter); Roundness ((4π × area)/perimeter², a measure of object roundness, or compactness, given as a value between 0 and 1; the greater the ratio, the rounder the object); Thin in Center (determines whether an object becomes thin in the center, i.e., figure-eight-shaped); Theta (orientation of the major axis); Smoothness; and color metrics for each component (red, green, blue), for which the minimum, maximum, average, and standard deviation within the particle are tracked. These metrics can be used for autonomous analysis of color images from a microscope, video camera, or digital still image. The algorithm can also automatically identify tumor morphology in stained images and has been used to detect stained cell phenomena.
Color correction optimization with hue regularization
NASA Astrophysics Data System (ADS)
Zhang, Heng; Liu, Huaping; Quan, Shuxue
2011-01-01
Previous work has suggested that observers are capable of judging the quality of an image without any knowledge of the original scene. When no reference is available, observers can extract the apparent objects in an image and compare them with the typical colors of similar objects recalled from their memories. Some generally agreed upon research results indicate that although perfect colorimetric rendering is not conspicuous and color errors can be well tolerated, the appropriate rendition of certain memory colors such as skin, grass, and sky is an important factor in the overall perceived image quality. These colors are appreciated in a fairly consistent manner and are memorized with slightly different hues and higher color saturation. The aim of color correction for a digital color pipeline is to transform the image data from a device dependent color space to a target color space, usually through a color correction matrix which in its most basic form is optimized through linear regressions between the two sets of data in two color spaces in the sense of minimized Euclidean color error. Unfortunately, this method could result in objectionable distortions if the color error biased certain colors undesirably. In this paper, we propose a color correction optimization method with preferred color reproduction in mind through hue regularization and present some experimental results.
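The baseline described above, a color correction matrix fitted by linear regression, can be sketched as follows; the hue-regularization term that the paper adds on top of this fit is not reproduced here.

```python
import numpy as np

def fit_ccm(measured, target):
    """Fit a 3x3 color correction matrix by least squares.

    `measured` and `target` are N x 3 arrays of patch colors in the device
    and target color spaces. This is only the unregularized baseline.
    """
    X, _, _, _ = np.linalg.lstsq(measured, target, rcond=None)
    return X.T   # so that corrected = M @ rgb for a column vector rgb

def apply_ccm(image, M):
    """Apply the fitted matrix to an H x W x 3 image."""
    return image.astype(np.float64) @ M.T
```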
NASA Astrophysics Data System (ADS)
Ou-Yang, Mang; Jeng, Wei-De; Lai, Chien-Cheng; Wu, Hsien-Ming; Lin, Jyh-Hung
2016-01-01
The types of illumination systems and color filters used typically generate varying levels of color difference in capsule endoscopes, which can influence medical diagnoses. In order to calibrate the color difference caused by the optical system, this study applied a radial imaging capsule endoscope (RICE) to photograph standard color charts, which were then employed to calculate the color gamut of the RICE. The color gamut was also measured using a spectrometer to obtain high-precision color information, and the results obtained using both methods were compared. Subsequently, color-correction methods, namely polynomial transform and conformal mapping, were used to reduce the color difference. Before color calibration, the color difference value caused by the optical system of the RICE was 21.45±1.09. Through the proposed polynomial transformation, the color difference could be reduced effectively to 1.53±0.07. With the proposed conformal mapping, the color difference was further reduced to 1.32±0.11; this difference is imperceptible to the human eye because it is <1.5. Real-time color correction was then achieved by combining this algorithm with a field-programmable gate array, and the results of the color correction can be viewed in real-time images.
Frequency division multiplexed multi-color fluorescence microscope system
NASA Astrophysics Data System (ADS)
Le, Vu Nam; Yang, Huai Dong; Zhang, Si Chun; Zhang, Xin Rong; Jin, Guo Fan
2017-10-01
A grayscale camera can only obtain a grayscale image of an object, whereas multicolor imaging can obtain the color information needed to distinguish sample structures that have the same shape but different colors. In fluorescence microscopy, current methods of multicolor imaging are flawed: they reduce the efficiency of fluorescence imaging, lower the effective sampling rate of the CCD, and so on. In this paper, we propose a novel multicolor fluorescence microscopy imaging method based on frequency division multiplexing (FDM), which modulates the excitation lights and demodulates the fluorescence signal in the frequency domain. The method uses periodic functions with different frequencies to modulate the amplitude of each excitation light and then combines these beams for illumination in a fluorescence microscopy imaging system. The imaging system detects a multicolor fluorescence image with a grayscale camera. During data processing, the signal obtained by each pixel of the camera is processed with the discrete Fourier transform, decomposed by color in the frequency domain, and then transformed back with the inverse discrete Fourier transform. After applying this process to the signals from all pixels, monochrome images of each color on the image plane are obtained and a multicolor image is acquired. Based on this method, we constructed a two-color fluorescence microscope system with excitation wavelengths of 488 nm and 639 nm. By using this system to observe the linear movement of two kinds of fluorescent microspheres, we obtained, after data processing, a two-color fluorescence video consistent with the original image. This experiment shows that dynamic phenomena in multicolor fluorescent biological samples can be observed by this method. Compared with current methods, this method obtains the image signals of each color at the same time, and the color video's frame rate is consistent with the frame rate of the camera. The optical system is simpler and does not need an extra color separation element. In addition, this method has a good filtering effect on ambient light or other light signals that are not affected by the modulation process.
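The frequency-domain demultiplexing step can be sketched as below for a stack of camera frames; the frame rate and modulation frequencies are placeholders chosen by the user, since the abstract does not state the values used in the experiment.

```python
import numpy as np

def demultiplex_colors(frames, fps, mod_freqs):
    """Separate color channels multiplexed at different modulation frequencies.

    `frames` is a T x H x W stack from the grayscale camera, `fps` the frame
    rate, and `mod_freqs` the modulation frequencies of the excitation lights
    (placeholder values supplied by the user). Returns one demodulated image
    per frequency.
    """
    t = frames.shape[0]
    spectrum = np.fft.rfft(frames, axis=0)              # per-pixel DFT in time
    freqs = np.fft.rfftfreq(t, d=1.0 / fps)
    images = []
    for f in mod_freqs:
        k = np.argmin(np.abs(freqs - f))                # nearest frequency bin
        images.append(np.abs(spectrum[k]) * 2.0 / t)    # amplitude at that bin
    return images
```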
CFA-aware features for steganalysis of color images
NASA Astrophysics Data System (ADS)
Goljan, Miroslav; Fridrich, Jessica
2015-03-01
Color interpolation is a form of upsampling, which introduces constraints on the relationship between neighboring pixels in a color image. These constraints can be utilized to substantially boost the accuracy of steganography detectors. In this paper, we introduce a rich model formed by 3D co-occurrences of color noise residuals split according to the structure of the Bayer color filter array to further improve detection. Some color interpolation algorithms, AHD and PPG, impose pixel constraints so tight that extremely accurate detection becomes possible with merely eight features eliminating the need for model richification. We carry out experiments on non-adaptive LSB matching and the content-adaptive algorithm WOW on five different color interpolation algorithms. In contrast to grayscale images, in color images that exhibit traces of color interpolation the security of WOW is significantly lower and, depending on the interpolation algorithm, may even be lower than non-adaptive LSB matching.
An improved quantum watermarking scheme using small-scale quantum circuits and color scrambling
NASA Astrophysics Data System (ADS)
Li, Panchi; Zhao, Ya; Xiao, Hong; Cao, Maojun
2017-05-01
In order to solve the problem of embedding a watermark into a quantum color image, an improved scheme using small-scale quantum circuits and color scrambling is proposed in this paper. Both the color carrier image and the color watermark image are represented using the novel enhanced quantum representation. The image sizes for the carrier and watermark are assumed to be 2^{n+1}× 2^{n+2} and 2^n× 2^n, respectively. First, the colors of the pixels in the watermark image are scrambled using controlled rotation gates; then the scrambled watermark with 2^n× 2^n image size and 24-qubit gray scale is expanded to an image with 2^{n+1}× 2^{n+2} image size and 3-qubit gray scale. Finally, the expanded watermark image is embedded into the carrier image by controlled-NOT gates. The extraction of the watermark is the reverse process of embedding it into the carrier image, achieved by applying the operations in reverse order. Simulation-based experimental results show that the proposed scheme is superior to other similar algorithms in terms of three criteria: visual quality, the scrambling effect of the watermark image, and noise resistance.
Astronomy with the Color Blind
ERIC Educational Resources Information Center
Smith, Donald A.; Melrose, Justyn
2014-01-01
The standard method to create dramatic color images in astrophotography is to record multiple black and white images, each with a different color filter in the optical path, and then tint each frame with a color appropriate to the corresponding filter. When combined, the resulting image conveys information about the sources of emission in the…
Applied learning-based color tone mapping for face recognition in video surveillance system
NASA Astrophysics Data System (ADS)
Yew, Chuu Tian; Suandi, Shahrel Azmin
2012-04-01
In this paper, we present an applied learning-based color tone mapping technique for video surveillance systems. This technique can be applied to both color and grayscale surveillance images. The basic idea is to learn the color or intensity statistics from a training dataset of photorealistic images of the candidates appearing in the surveillance images, and to remap the color or intensity of the input image so that its statistics match those of the training dataset. It is well known that differences in commercial surveillance camera models and in the signal processing chipsets used by different manufacturers cause the color and intensity of the images to differ from one another, creating additional challenges for face recognition in video surveillance systems. Using multi-class support vector machines as the classifier on a publicly available video surveillance camera database, namely the SCface database, this approach is validated and compared to the results of using a holistic approach on grayscale images. The results show that this technique is suitable for improving the color or intensity quality of video surveillance images for face recognition.
Color TV: total variation methods for restoration of vector-valued images.
Blomgren, P; Chan, T F
1998-01-01
We propose a new definition of the total variation (TV) norm for vector-valued functions that can be applied to restore color and other vector-valued images. The new TV norm has the desirable properties of 1) not penalizing discontinuities (edges) in the image, 2) being rotationally invariant in the image space, and 3) reducing to the usual TV norm in the scalar case. Some numerical experiments on denoising simple color images in red-green-blue (RGB) color space are presented.
Image analysis and green tea color change kinetics during thin-layer drying.
Shahabi, Mohammad; Rafiee, Shahin; Mohtasebi, Seyed Saeid; Hosseinpour, Soleiman
2014-09-01
This study was conducted to investigate the effect of air temperature and air flow velocity on the kinetics of color parameter changes during hot-air drying of green tea, to obtain the best model for hot-air drying of green tea, to apply a computer vision system, and to study the color changes during drying. In the proposed computer vision system, RGB values of the images were first converted into XYZ values and then to Commission Internationale de l'Eclairage L*a*b* color coordinates. The obtained color parameters L*, a* and b* were calibrated against a Hunter-Lab colorimeter. These values were also used for calculation of the color difference, chroma, hue angle and browning index. The values of L* and b* decreased, while the values of a* and the color difference (ΔE*ab) increased during hot-air drying. Drying data were fitted to three kinetic models: zero-order, first-order and fractional conversion models were utilized to describe the color changes of green tea. The suitability of fit was determined using the coefficient of determination (R²) and the root-mean-square error. Results showed that the fractional conversion model fit better than the other two models for most of the color parameters.
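A short sketch of the color-difference and kinetic-model calculations is given below, assuming the simple CIE76 form of ΔE*ab and a fractional-conversion model of the form C(t) = Ce + (C0 - Ce)·exp(-kt); the drying-time data in the example are made up for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def delta_e(lab1, lab2):
    """CIE76 color difference ΔE*ab between two L*a*b* triplets."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

def fractional_conversion(t, k, c0, ce):
    """Fractional conversion kinetic model: C(t) = Ce + (C0 - Ce) * exp(-k t)."""
    return ce + (c0 - ce) * np.exp(-k * t)

# Hypothetical drying-time series for one color parameter (e.g., L*).
t = np.array([0, 10, 20, 30, 45, 60], dtype=float)   # minutes (made-up values)
L = np.array([55.0, 50.2, 46.9, 44.8, 43.1, 42.3])   # made-up measurements
(k, c0, ce), _ = curve_fit(fractional_conversion, t, L, p0=(0.05, L[0], L[-1]))
```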
Color and Morphology of Lava Flows on Io
NASA Astrophysics Data System (ADS)
Piatek, Jennifer L.; McElfresh, Sarah B. Z.; Byrnes, Jeffrey M.; Hale, Amy Snyder; Crown, David A.
2000-12-01
Analyses of color and morphologic changes in Voyager images of lava flows on Io were conducted to extend previous flow studies to additional volcanoes in preparation for comparison to Galileo data. Blue and orange filter images of Atar, Daedalus, and Ra Paterae were examined to identify systematic downflow decreases in blue/orange reflectivity suggested in earlier studies as diagnostic of color changes in cooled sulfur flows. Analyses of the color and morphology of 21 lava flows were conducted at these volcanoes, with additional morphologic analysis of lava flows at Agni, Masaaw, Mbali, Shoshu, and Talos Paterae. A total of 66 lava flows of up to 245 km in length were mapped to identify morphologic changes consistent with the rheologic changes expected to occur in sulfur flows. Although downflow color changes are observed, the trends are not consistent, even at the same edifice. Individual flows exhibit a statistically significant increase in blue/orange ratio, decrease in blue/orange ratio, or a lack of progressive downflow color variation. Color changes have similar magnitudes downflow and across flow, and the color ranges observed are similar from volcano to volcano, suggesting that similar processes are controlling color ratios at these edifices. In addition, using flow widening and branching as an indicator of the low viscosity exhibited by sulfur cooling from high temperatures, these flows do not exhibit morphologic changes consistent with the systematic behavior expected from the simple progressive cooling of sulfur.
White, James M.; Faber, Vance; Saltzman, Jeffrey S.
1992-01-01
An image population having a large number of attributes is processed to form a display population with a predetermined smaller number of attributes which represent the larger number of attributes. In a particular application, the color values in an image are compressed for storage in a discrete lookup table (LUT) where an 8-bit data signal is enabled to form a display of 24-bit color values. The LUT is formed in a sampling and averaging process from the image color values with no requirement to define discrete Voronoi regions for color compression. Image color values are assigned 8-bit pointers to their closest LUT value whereby data processing requires only the 8-bit pointer value to provide 24-bit color values from the LUT.
Color preservation for tone reproduction and image enhancement
NASA Astrophysics Data System (ADS)
Hsin, Chengho; Lee, Zong Wei; Lee, Zheng Zhan; Shin, Shaw-Jyh
2014-01-01
Applications based on luminance processing often face the problem of recovering the original chrominance in the output color image. A common approach to reconstruct a color image from the luminance output is by preserving the original hue and saturation. However, this approach often produces a highly colorful image which is undesirable. We develop a color preservation method that not only retains the ratios of the input tri-chromatic values but also adjusts the output chroma in an appropriate way. Linearizing the output luminance is the key idea to realize this method. In addition, a lightness difference metric together with a colorfulness difference metric are proposed to evaluate the performance of the color preservation methods. It shows that the proposed method performs consistently better than the existing approaches.
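The ratio-preserving baseline that this kind of method starts from can be sketched as follows; the Rec. 601 luma weights and the 8-bit clipping are assumptions, and the paper's additional chroma adjustment is not reproduced.

```python
import numpy as np

def recolor_from_luminance(rgb_in, y_out, eps=1e-6):
    """Reconstruct a color image from a processed luminance channel.

    Every channel is scaled by the ratio of output to input luminance, so the
    RGB ratios (hue and saturation) of the input are retained; the paper's
    method additionally moderates the resulting chroma.
    """
    rgb = rgb_in.astype(np.float64)
    # Rec. 601 luma weights, assumed here; any consistent luminance works.
    y_in = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    ratio = y_out / (y_in + eps)
    return np.clip(rgb * ratio[..., None], 0, 255)
```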
NASA Astrophysics Data System (ADS)
Wang, Zhun; Cheng, Feiyan; Shi, Junsheng; Huang, Xiaoqiao
2018-01-01
In a low-light scene, capturing color images requires a high-gain setting or a long-exposure setting to avoid a visible flash. However, such settings lead to color images with serious noise or motion blur. Several methods have been proposed to improve a noisy color image using an invisible near-infrared (NIR) flash image. In one recently proposed method, the luminance and chroma components of the improved color image are estimated from different image sources [1]: the luminance component is estimated mainly from the NIR image via spectral estimation, and the chroma component is estimated from the noisy color image by denoising. However, estimating the luminance component is challenging: that method requires generating learning data pairs, and the process and algorithm are complex, which makes practical application difficult. To reduce the complexity of luminance estimation, this paper presents an improved luminance estimation algorithm that weights the NIR image and the denoised color image, with weighting coefficients based on the mean value and standard deviation of both images. Experimental results show that the proposed method achieves the same fusion quality in terms of color fidelity and texture as the earlier method, while the algorithm is simpler and more practical.
Color dithering methods for LEGO-like 3D printing
NASA Astrophysics Data System (ADS)
Sun, Pei-Li; Sie, Yuping
2015-01-01
Color dithering methods for LEGO-like 3D printing are proposed in this study. The first method works for opaque color brick building. It is a modification of classic error diffusion. Many color primaries can be chosen; however, RGBYKW is recommended because its image quality is good and the number of color primaries is limited. For translucent color bricks, multi-layer color building can enhance the image quality significantly. A LUT-based method is proposed to speed up the dithering process and make the color distribution even smoother. Simulation results show that the proposed multi-layer dithering method can substantially improve the image quality of LEGO-like 3D printing.
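For context, the classic error-diffusion algorithm that the first method modifies is sketched below with an RGBYKW-like palette; the palette values and the plain Floyd-Steinberg weights are illustrative, not the paper's modified scheme.

```python
import numpy as np

# Example opaque-brick palette (RGBYKW); the values are illustrative.
PALETTE = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255],
                    [255, 255, 0], [0, 0, 0], [255, 255, 255]], dtype=float)

def dither_to_palette(img):
    """Classic Floyd-Steinberg error diffusion onto a fixed brick palette."""
    out = img.astype(float).copy()
    h, w, _ = out.shape
    idx = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            i = np.argmin(((PALETTE - old) ** 2).sum(axis=1))  # nearest brick color
            idx[y, x] = i
            err = old - PALETTE[i]
            # Push the quantization error to not-yet-processed neighbors.
            if x + 1 < w:               out[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     out[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               out[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: out[y + 1, x + 1] += err * 1 / 16
    return idx  # per-pixel palette indices, i.e., which brick to place
```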
Research of image retrieval technology based on color feature
NASA Astrophysics Data System (ADS)
Fu, Yanjun; Jiang, Guangyu; Chen, Fengying
2009-10-01
Recently, with the development of communication and computer technology and the improvement of storage technology and digital imaging equipment, more image resources are available to us than ever before, and a way to locate the desired image quickly and accurately is needed. The early approach was to search the database by keywords, but this becomes impractical as the number of images grows. To overcome the limitations of traditional search methods, content-based image retrieval was introduced and is now an active research topic. Color image retrieval is an important part of it, and color is the most important feature for color image retrieval. Three key questions on how to use color are discussed in the paper: the representation of color, the extraction of color features, and the measurement of similarity based on color. On this basis, extraction of the color histogram feature is discussed in particular. Considering the advantages and disadvantages of the overall histogram and the partition histogram, a new method based on the partition-overall histogram is proposed. The basic idea is to divide the image space according to a certain strategy and then calculate the color histogram of each block as the color feature of that block. Users choose the blocks that contain important spatial information and assign them weights. The system calculates the distance between the corresponding chosen blocks; the remaining blocks are merged into partial overall histograms, and their distance is also calculated. All distances are then accumulated into the overall distance between two images. The partition-overall histogram combines the advantages of both methods: choosing blocks makes the feature contain more spatial information, which improves performance, and the distances between partition-overall histograms are invariant to rotation and translation. The HSV color space, which matches human visual characteristics, is used to represent image color; taking advantage of human color perception, the color space is quantized with unequal intervals to obtain the feature vector. Finally, image similarity is computed with the histogram intersection algorithm on the partition-overall histogram. Users can choose an example image to express the query and can adjust the weights through relevance feedback to obtain the best search result. An image retrieval system based on these approaches is presented. Experimental results show that image retrieval based on the partition-overall histogram preserves spatial distribution information while extracting color features efficiently, and it is superior to ordinary color histograms in retrieval precision, with a query precision above 95%. In addition, the block representation lowers the complexity of the images to be searched and thus increases search efficiency. The image retrieval algorithm based on the partition-overall histogram proposed in the paper is efficient and effective.
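A minimal sketch of the block-histogram and histogram-intersection machinery is shown below; the grid size, the uniform HSV quantization, and the [0, 1] channel scaling are assumptions standing in for the paper's unequal-interval quantization and weighting scheme.

```python
import numpy as np

def block_histograms(hsv, grid=(4, 4), bins=(8, 3, 3)):
    """Normalized HSV histogram for each block of a grid partition.

    Assumes the HSV channels are scaled to [0, 1]; the grid size and uniform
    bins are illustrative choices.
    """
    h, w, _ = hsv.shape
    hists = []
    for gy in range(grid[0]):
        for gx in range(grid[1]):
            block = hsv[gy * h // grid[0]:(gy + 1) * h // grid[0],
                        gx * w // grid[1]:(gx + 1) * w // grid[1]]
            hist, _ = np.histogramdd(block.reshape(-1, 3), bins=bins,
                                     range=((0, 1), (0, 1), (0, 1)))
            hists.append(hist.ravel() / hist.sum())
    return np.array(hists)

def histogram_intersection(h1, h2):
    """Similarity of two normalized histograms (1 = identical)."""
    return np.minimum(h1, h2).sum()
```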
Reconstruction of color images via Haar wavelet based on digital micromirror device
NASA Astrophysics Data System (ADS)
Liu, Xingjiong; He, Weiji; Gu, Guohua
2015-10-01
A digital micromirror device (DMD) is introduced to form the Haar wavelet basis, which is projected onto the color target image using structured illumination with red, green, and blue light. The light intensity signals reflected from the target image are received synchronously by a bucket detector with no spatial resolution, converted into voltage signals, and then transferred to a PC [1]. To achieve synchronization, several synchronization steps are added during data acquisition. In the data collection process, according to the wavelet tree structure, the locations of significant coefficients at the finer scale are predicted by comparing the coefficients sampled at the coarsest scale with a threshold. Monochrome grayscale images are obtained under red, green, and blue structured illumination, respectively, using the inverse Haar wavelet transform algorithm. A color fusion algorithm is then applied to the three monochrome grayscale images to obtain the final color image. Based on this imaging principle, an experimental demonstration device was assembled. The letter "K" and the X-rite Color Checker Passport were projected and reconstructed as target images, and the final reconstructed color images have good quality. The Haar wavelet reconstruction method reduces the sampling rate considerably and provides color information without compromising the resolution of the final image.
NASA Astrophysics Data System (ADS)
Heller, Andrew Roland
The Fort Clark State Historic Site (32ME2) is a well known site on the upper Missouri River, North Dakota. The site was the location of two Euroamerican trading posts and a large Mandan-Arikara earthlodge village. In 2004, Dr. Kenneth L. Kvamme and Dr. Tommy Hailey surveyed the site using aerial color and thermal infrared imagery collected from a powered parachute. Individual images were stitched together into large image mosaics and registered to Wood's 1993 interpretive map of the site using Adobe Photoshop. The analysis of those image mosaics resulted in the identification of more than 1,500 archaeological features, including as many as 124 earthlodges.
NASA Astrophysics Data System (ADS)
Seo, Hokuto; Aihara, Satoshi; Watabe, Toshihisa; Ohtake, Hiroshi; Sakai, Toshikatsu; Kubota, Misao; Egami, Norifumi; Hiramatsu, Takahiro; Matsuda, Tokiyoshi; Furuta, Mamoru; Hirao, Takashi
2011-02-01
A color image was produced by a vertically stacked image sensor with blue (B)-, green (G)-, and red (R)-sensitive organic photoconductive films, each having a thin-film transistor (TFT) array that uses a zinc oxide (ZnO) channel to read out the signal generated in each organic film. The number of the pixels of the fabricated image sensor is 128×96 for each color, and the pixel size is 100×100 µm2. The current on/off ratio of the ZnO TFT is over 106, and the B-, G-, and R-sensitive organic photoconductive films show excellent wavelength selectivity. The stacked image sensor can produce a color image at 10 frames per second with a resolution corresponding to the pixel number. This result clearly shows that color separation is achieved without using any conventional color separation optical system such as a color filter array or a prism.
Dehazed Image Quality Assessment by Haze-Line Theory
NASA Astrophysics Data System (ADS)
Song, Yingchao; Luo, Haibo; Lu, Rongrong; Ma, Junkai
2017-06-01
Images captured in bad weather suffer from low contrast and faint color. Recently, plenty of dehazing algorithms have been proposed to enhance visibility and restore color. However, there is a lack of evaluation metrics to assess the performance of these algorithms or rate them. In this paper, an indicator of contrast enhancement is proposed based on the newly proposed haze-line theory. The theory assumes that the colors of a haze-free image are well approximated by a few hundred distinct colors, which form tight clusters in RGB space. The presence of haze makes each color cluster form a line, which is named a haze-line. By using these haze-lines, we assess the performance of dehazing algorithms designed to enhance contrast by measuring the inter-cluster deviations between different colors of the dehazed image. Experimental results demonstrate that the proposed Color Contrast (CC) index correlates well with human judgments of image contrast collected in a subjective test on various scenes of dehazed images and performs better than state-of-the-art metrics.
Jiang, Hao; Kaminska, Bozena
2018-04-24
To enable customized manufacturing of structural colors for commercial applications, up-scalable, low-cost, rapid, and versatile printing techniques are highly demanded. In this paper, we introduce a viable strategy for scaling up production of custom-input images by patterning individual structural colors on separate layers, which are then vertically stacked and recombined into full-color images. By applying this strategy on molded-ink-on-nanostructured-surface printing, we present an industry-applicable inkjet structural color printing technique termed multilayer molded-ink-on-nanostructured-surface (M-MIONS) printing, in which structural color pixels are molded on multiple layers of nanostructured surfaces. Transparent colorless titanium dioxide nanoparticles were inkjet-printed onto three separate transparent polymer substrates, and each substrate surface has one specific subwavelength grating pattern for molding the deposited nanoparticles into structural color pixels of red, green, or blue primary color. After index-matching lamination, the three layers were vertically stacked and bonded to display a color image. Each primary color can be printed into a range of different shades controlled through a half-tone process, and full colors were achieved by mixing primary colors from three layers. In our experiments, an image size as big as 10 cm by 10 cm was effortlessly achieved, and even larger images can potentially be printed on recombined grating surfaces. In one application example, the M-MIONS technique was used for printing customizable transparent color optical variable devices for protecting personalized security documents. In another example, a transparent diffractive color image printed with the M-MIONS technique was pasted onto a transparent panel for overlaying colorful information onto one's view of reality.
Structure-Preserving Color Normalization and Sparse Stain Separation for Histological Images.
Vahadane, Abhishek; Peng, Tingying; Sethi, Amit; Albarqouni, Shadi; Wang, Lichao; Baust, Maximilian; Steiger, Katja; Schlitter, Anna Melissa; Esposito, Irene; Navab, Nassir
2016-08-01
Staining and scanning of tissue samples for microscopic examination are fraught with undesirable color variations arising from differences in raw materials and manufacturing techniques of stain vendors, staining protocols of labs, and color responses of digital scanners. When comparing tissue samples, color normalization and stain separation of the tissue images can be helpful for both pathologists and software. Techniques used for natural images fail to exploit the structural properties of stained tissue samples and produce undesirable color distortions. Stain concentrations cannot be negative, tissue samples are stained with only a few stains, and most tissue regions are characterized by at most one effective stain. We model these physical phenomena, which define the tissue structure, by first decomposing images in an unsupervised manner into stain density maps that are sparse and non-negative. For a given image, we combine its stain density maps with the stain color basis of a pathologist-preferred target image, thus altering only its color while preserving its structure as described by the maps. Stain density correlation with ground truth and preference by pathologists were higher for images normalized using our method than for the alternatives. We also propose a computationally faster extension of this technique for large whole-slide images that selects an appropriate patch sample, instead of using the entire image, to compute the stain color basis.
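As a rough illustration of the unsupervised, non-negative decomposition described above (not the authors' exact sparse formulation), the sketch below factorizes optical-density pixels into two stain density maps and a stain color basis using scikit-learn's NMF. The stain count, file name, and lack of an explicit sparsity penalty are assumptions.

```python
# Sketch: unsupervised stain separation via non-negative factorization of optical density.
# Assumptions: an RGB H&E image, two stains, plain NMF standing in for sparse NMF.
import numpy as np
from skimage import io
from sklearn.decomposition import NMF

rgb = io.imread("tissue_patch.png")[..., :3].astype(float)   # hypothetical input patch
od = -np.log((rgb + 1.0) / 256.0)                            # optical density (Beer-Lambert)
X = od.reshape(-1, 3)                                        # pixels x channels

# W: per-pixel stain densities (non-negative), H: stain color basis in OD space.
nmf = NMF(n_components=2, init="random", max_iter=500, random_state=0)
W = nmf.fit_transform(X)
H = nmf.components_

density_maps = W.reshape(rgb.shape[0], rgb.shape[1], 2)      # one density map per stain

# Normalization idea: keep W but replace H with the stain basis of a target image,
# then map back: normalized_rgb = 256 * exp(-(W @ H_target)) - 1
```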
Hiroyasu, Tomoyuki; Hayashinuma, Katsutoshi; Ichikawa, Hiroshi; Yagi, Nobuaki
2015-08-01
A preprocessing method for endoscopy image analysis using texture analysis is proposed. In a previous study, we proposed a feature value that combines a co-occurrence matrix and a run-length matrix to analyze the extent of early gastric cancer in images taken with narrow-band imaging endoscopy. However, the obtained feature value does not identify lesion zones correctly because of the influence of noise and halation. Therefore, we propose a new preprocessing method that applies a non-local means filter for de-noising and contrast-limited adaptive histogram equalization. We have confirmed that the pattern of gastric mucosa in the images is improved by the proposed method, and the lesion zone is shown more correctly in the resulting color map.
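A minimal sketch of that preprocessing chain (non-local means de-noising followed by CLAHE) is given below, using OpenCV with default-style parameters rather than the paper's settings, which are not stated here.

```python
# Sketch: non-local means de-noising + CLAHE contrast enhancement (parameter values are assumptions).
import cv2

bgr = cv2.imread("endoscopy_frame.png")                     # hypothetical input frame
denoised = cv2.fastNlMeansDenoisingColored(bgr, None, 10, 10, 7, 21)

# Apply CLAHE to the lightness channel only, so chromaticity is largely preserved.
lab = cv2.cvtColor(denoised, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
lab = cv2.merge((clahe.apply(l), a, b))
enhanced = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
cv2.imwrite("endoscopy_preprocessed.png", enhanced)
```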
Securing Color Fidelity in 3D Architectural Heritage Scenarios.
Gaiani, Marco; Apollonio, Fabrizio Ivan; Ballabeni, Andrea; Remondino, Fabio
2017-10-25
Ensuring color fidelity in image-based 3D modeling of heritage scenarios is still an open research matter. Image colors matter during data processing because they affect algorithm outcomes, so their correct treatment, reduction, and enhancement is fundamental. In this contribution, we present an automated solution developed to improve the radiometric quality of image datasets and the performance of two main steps of the photogrammetric pipeline (camera orientation and dense image matching). The suggested solution aims to achieve robust automatic color balance and exposure equalization, stability of the RGB-to-gray image conversion, and faithful color appearance of a digitized artifact. The innovative aspects of the article are: complete automation, better color target detection, a MATLAB implementation of the ACR scripts created by Fraser, and the use of a specific weighted polynomial regression. A series of tests are presented to demonstrate the efficiency of the developed methodology and to evaluate color accuracy ('color characterization').
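For orientation, the sketch below shows a generic automatic color balance and exposure equalization step (gray-world scaling plus a luminance target). It is a stand-in illustration only, not the authors' MATLAB pipeline or weighted polynomial regression; the file names and the luminance target are assumptions.

```python
# Sketch: generic gray-world color balance and exposure equalization for a dataset image
# (a stand-in illustration, not the authors' pipeline).
import numpy as np
from skimage import io

img = io.imread("facade_0001.jpg")[..., :3].astype(float)   # hypothetical dataset image

# Gray-world balance: scale each channel so its mean matches the global mean.
channel_means = img.reshape(-1, 3).mean(axis=0)
balanced = img * (channel_means.mean() / channel_means)

# Exposure equalization: rescale so the mean luminance matches an assumed dataset-wide target.
target_luminance = 118.0                                    # assumed mid-gray target in 8-bit
luminance = balanced @ np.array([0.2126, 0.7152, 0.0722])
equalized = np.clip(balanced * (target_luminance / luminance.mean()), 0, 255)

io.imsave("facade_0001_balanced.jpg", equalized.astype(np.uint8))
```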
Correlation based efficient face recognition and color change detection
NASA Astrophysics Data System (ADS)
Elbouz, M.; Alfalou, A.; Brosseau, C.; Alam, M. S.; Qasmi, S.
2013-01-01
Identifying the human face via correlation is a topic attracting widespread interest. At the heart of this technique lies the comparison of an unknown target image to a known reference database of images. However, the color information in the target image remains notoriously difficult to interpret. In this paper, we report a new technique which: (i) is robust against illumination change, (ii) offers the discrimination ability needed to detect color changes between faces having similar shapes, and (iii) is specifically designed to detect red-colored stains (i.e., facial bleeding). We adopt the Vanderlugt correlator (VLC) architecture with a segmented phase filter and decompose the color target image using normalized red, green, and blue (RGB) and hue, saturation, and value (HSV) scales. We propose a new strategy to effectively utilize color information in signatures to further increase the discrimination ability. The proposed algorithm has been found to be very efficient for discriminating face subjects with different skin colors and those having color stains in different areas of the facial image.
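A minimal sketch of the color decomposition stage described above (normalized RGB plus HSV channels) follows; the correlation itself and the segmented phase filter are outside the scope of this sketch, and the file name and channel scaling are assumptions.

```python
# Sketch: decompose a color face image into normalized RGB and HSV channels
# (decomposition stage only; the VLC correlation step is not shown).
import cv2
import numpy as np

bgr = cv2.imread("face_target.png")                            # hypothetical target image
b, g, r = cv2.split(bgr.astype(np.float32))
total = b + g + r + 1e-6
normalized_rgb = np.dstack((r / total, g / total, b / total))  # chromaticity, illumination-robust

hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)

# Example signature channels for a correlator: chromaticity planes plus scaled hue and saturation.
signature = np.dstack((normalized_rgb,
                       h.astype(np.float32) / 180.0,
                       s.astype(np.float32) / 255.0))
```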
New Windows based Color Morphological Operators for Biomedical Image Processing
NASA Astrophysics Data System (ADS)
Pastore, Juan; Bouchet, Agustina; Brun, Marcel; Ballarin, Virginia
2016-04-01
Morphological image processing is well known as an efficient methodology for image processing and computer vision. With the wide use of color in many areas, interest in color perception and processing has been growing rapidly. Many models have been proposed to extend morphological operators to color images, dealing with new problems not present in the binary and gray-level contexts. These solutions usually rely on the lattice structure of the color space, or provide it with a total order, so that basic operators with the required properties can be defined. In this work we propose a new locally defined ordering, in the context of window-based morphological operators, for the definition of erosion-like and dilation-like operators, which provides the properties expected from color morphology while avoiding some of the drawbacks of prior approaches. Experimental results show that the proposed color operators can be used efficiently for color image processing.
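To make the window-based idea concrete, the sketch below implements a color erosion that, in each window, keeps the pixel that is minimal under a simple luminance ordering. This is a generic illustration of window-based color morphology, not the locally defined ordering the authors propose; a dilation is obtained analogously with argmax.

```python
# Sketch: window-based color erosion using a luminance ordering
# (generic illustration, not the authors' locally defined ordering).
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def color_erosion(img, win=3):
    """img: HxWx3 float array; returns the eroded image of the same size."""
    pad = win // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    windows = sliding_window_view(padded, (win, win), axis=(0, 1))   # (H, W, 3, win, win)
    weights = np.array([0.2126, 0.7152, 0.0722])
    lum = (windows * weights[None, None, :, None, None]).sum(axis=2) # (H, W, win, win)

    H, W = img.shape[:2]
    k = win * win
    idx = lum.reshape(H, W, k).argmin(axis=-1)                       # minimal pixel per window
    win_flat = windows.reshape(H, W, 3, k)
    return np.take_along_axis(win_flat, idx[:, :, None, None], axis=3)[..., 0]
```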
NASA Astrophysics Data System (ADS)
Saleheen, Firdous; Badano, Aldo; Cheng, Wei-Chung
2017-03-01
The color reproducibility of two whole-slide imaging (WSI) devices was evaluated with biological tissue slides. Three tissue slides (human colon, skin, and kidney) were used to test a modern and a legacy WSI device. The color truth of the tissue slides was obtained using a multispectral imaging system. The output WSI images were compared with the color truth to calculate the color difference for each pixel. A psychophysical experiment was also conducted to measure the perceptual color reproducibility (PCR) of the same slides with four subjects. The experimental results show that the mean color differences of the modern, legacy, and monochrome WSI devices are 10.94±4.19, 22.35±8.99, and 42.74±2.96 ΔE00, while their mean PCRs are 70.35±7.64%, 23.06±14.68%, and 0.91±1.01%, respectively.
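The per-pixel color difference described above can be computed with a standard CIEDE2000 implementation; a minimal sketch using scikit-image is shown below (the file names and the sRGB assumption for both images are mine, not from the study).

```python
# Sketch: per-pixel CIEDE2000 color difference of a WSI image against a "color truth" image
# (file names and sRGB assumption are illustrative).
import numpy as np
from skimage import io
from skimage.color import rgb2lab, deltaE_ciede2000

truth = rgb2lab(io.imread("slide_truth.png")[..., :3] / 255.0)   # multispectral-derived truth
wsi = rgb2lab(io.imread("slide_wsi.png")[..., :3] / 255.0)       # WSI device output

de00 = deltaE_ciede2000(truth, wsi)                              # per-pixel delta E_00 map
print(f"mean dE00 = {de00.mean():.2f} +/- {de00.std():.2f}")
```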
Physics and psychophysics of color reproduction
NASA Astrophysics Data System (ADS)
Giorgianni, Edward J.
1991-08-01
The successful design of a color-imaging system requires knowledge of the factors used to produce and control color. This knowledge can be derived, in part, from measurements of the physical properties of the imaging system. Color itself, however, is a perceptual response and cannot be directly measured. Though the visual process begins with physics, as radiant energy reaching the eyes, it is in the mind of the observer that the stimuli produced from this radiant energy are interpreted and organized to form meaningful perceptions, including the perception of color. A comprehensive understanding of color reproduction, therefore, requires not only a knowledge of the physical properties of color-imaging systems but also an understanding of the physics, psychophysics, and psychology of the human observer. The human visual process is quite complex; in many ways the physical properties of color-imaging systems are easier to understand.
Data Visualization and Animation Lab (DVAL) overview
NASA Technical Reports Server (NTRS)
Stacy, Kathy; Vonofenheim, Bill
1994-01-01
The general capabilities of the Langley Research Center Data Visualization and Animation Laboratory are described. These capabilities include digital image processing, 3-D interactive computer graphics, data visualization and analysis, video-rate acquisition and processing of video images, photo-realistic modeling and animation, video report generation, and color hardcopies. A specialized video image processing system is also discussed.
Ogawa, Shinpei; Kimata, Masafumi
2017-01-01
Wavelength- or polarization-selective thermal infrared (IR) detectors are promising for various novel applications such as fire detection, gas analysis, multi-color imaging, multi-channel detectors, recognition of artificial objects in a natural environment, and facial recognition. However, these functions require additional filters or polarizers, which leads to high cost and technical difficulties related to integration of many different pixels in an array format. Plasmonic metamaterial absorbers (PMAs) can impart wavelength or polarization selectivity to conventional thermal IR detectors simply by controlling the surface geometry of the absorbers to produce surface plasmon resonances at designed wavelengths or polarizations. This enables integration of many different pixels in an array format without any filters or polarizers. We review our recent advances in wavelength- and polarization-selective thermal IR sensors using PMAs for multi-color or polarimetric imaging. The absorption mechanism defined by the surface structures is discussed for three types of PMAs—periodic crystals, metal-insulator-metal and mushroom-type PMAs—to demonstrate appropriate applications. Our wavelength- or polarization-selective uncooled IR sensors using various PMAs and multi-color image sensors are then described. Finally, high-performance mushroom-type PMAs are investigated. These advanced functional thermal IR detectors with wavelength or polarization selectivity will provide great benefits for a wide range of applications. PMID:28772855
NASA Astrophysics Data System (ADS)
Chen, Xuanze; Liu, Yujia; Yang, Xusan; Wang, Tingting; Alonas, Eric; Santangelo, Philip J.; Ren, Qiushi; Xi, Peng
2013-02-01
Fluorescence microscopy has become an essential tool for studying biological molecules, pathways, and events in living cells, tissues, and animals. However, even the most advanced confocal microscopy can only yield an optical resolution approaching the Abbe diffraction limit of about 200 nm, which is still larger than many subcellular structures that are too small to be resolved in detail. These limitations have driven the development of super-resolution optical imaging methodologies over the past decade. In stimulated emission depletion (STED) microscopy, the excitation focus is overlapped by an intense doughnut-shaped spot that instantly de-excites markers from their fluorescent state to the ground state by stimulated emission. This effectively eliminates the periphery of the point spread function (PSF), resulting in a narrower focal region, or super-resolution. Scanning the sharpened spot through the specimen renders images with sub-diffraction resolution. Multi-color STED imaging can provide important structural and functional information on protein-protein interactions. In this work, we present a two-color, synchronization-free STED microscope based on a Ti:Sapphire oscillator. The excitation wavelengths were 532 nm and 635 nm. With a pump power of 4.6 W and a sample irradiance of 310 mW, we achieved super-resolution as high as 71 nm. Human respiratory syncytial virus (hRSV) proteins were imaged with our two-color CW STED for co-localization analysis.
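As a pointer to what such a co-localization analysis can look like computationally, the sketch below computes a Pearson colocalization coefficient between the two channels; the channel file names are assumptions, and background subtraction and thresholding (which a rigorous analysis would include) are omitted.

```python
# Sketch: Pearson colocalization coefficient between two STED channels
# (file names are assumptions; background handling and thresholding omitted).
import numpy as np
from skimage import io

ch1 = io.imread("sted_channel_532.tif").astype(float).ravel()
ch2 = io.imread("sted_channel_635.tif").astype(float).ravel()

pearson_r = np.corrcoef(ch1, ch2)[0, 1]   # 1.0 would mean perfectly co-varying intensities
print(f"Pearson colocalization coefficient: {pearson_r:.3f}")
```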
Devadhasan, Jasmine Pramila; Kim, Sanghyo
2015-02-09
CMOS sensors are becoming a powerful tool in the biological and chemical fields. In this work, we introduce a new approach to quantifying various pH solutions with a CMOS image sensor. The CMOS image sensor based pH measurement produces high-accuracy analysis, making it a truly portable and user-friendly system. A pH-indicator-blended hydrogel matrix was fabricated as a thin film for accurate color development. A distinct red, green, and blue (RGB) color change develops in the hydrogel film on applying various pH solutions (pH 1-14). A semi-quantitative pH evaluation can be obtained by visual readout. Further, the CMOS image sensor captures the RGB color intensity of the film, and the hue value is converted into digital numbers with the aid of an analog-to-digital converter (ADC) to determine the pH range of a solution. A chromaticity diagram and the Euclidean distance represent the RGB color space and the differentiation of pH ranges, respectively. This technique is applicable to sensing various toxic chemicals and chemical vapors in situ. Ultimately, the entire approach can be integrated into a smartphone and operated in a user-friendly manner.
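A compact sketch of the RGB-to-hue-to-pH mapping idea is given below. The reference hue table is purely illustrative and is not calibration data from the paper; a real system would use measured hues for each buffer.

```python
# Sketch: map a sensed RGB patch to a pH estimate via hue and a nearest reference value.
# The reference hue table is hypothetical, not calibration data from the paper.
import colorsys

REFERENCE_HUES = {1: 0.02, 4: 0.08, 7: 0.33, 10: 0.55, 14: 0.75}   # assumed hue per pH

def estimate_ph(r, g, b):
    """r, g, b in 0..255, e.g. the mean RGB of the hydrogel film region."""
    hue, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return min(REFERENCE_HUES, key=lambda ph: abs(REFERENCE_HUES[ph] - hue))

print(estimate_ph(40, 180, 90))   # nearest reference pH for a greenish patch
```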
A method and results of color calibration for the Chang'e-3 terrain camera and panoramic camera
NASA Astrophysics Data System (ADS)
Ren, Xin; Li, Chun-Lai; Liu, Jian-Jun; Wang, Fen-Fei; Yang, Jian-Feng; Liu, En-Hai; Xue, Bin; Zhao, Ru-Jin
2014-12-01
The terrain camera (TCAM) and panoramic camera (PCAM) are two of the major scientific payloads installed on the lander and rover of the Chang'e 3 mission, respectively. Both use a CMOS sensor covered with a Bayer color filter array to capture color images of the Moon's surface. The RGB values of the original images are specific to these two kinds of cameras, and there is an obvious color difference compared with human visual perception. This paper follows standards published by the International Commission on Illumination to establish a color correction model, designs the ground calibration experiment, and obtains the color correction coefficients. The image quality is significantly improved, and there is no obvious color difference in the corrected images. Ground experimental results show that: (1) compared with uncorrected images, the average color difference of TCAM is 4.30, a reduction of 62.1%; (2) the average color differences of the left and right cameras of PCAM are 4.14 and 4.16, reductions of 68.3% and 67.6%, respectively.
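The general shape of such a chart-based correction can be illustrated with a linear least-squares fit from measured camera RGB to reference values; the sketch below is a generic approach and does not reproduce the mission's actual CIE-based model, chart, or coefficients (file names are assumptions).

```python
# Sketch: fit a linear color correction matrix from color-chart measurements
# (generic least-squares illustration, not the mission's calibration).
import numpy as np

camera_rgb = np.loadtxt("chart_camera_rgb.csv", delimiter=",")        # N x 3 measured patches
reference_rgb = np.loadtxt("chart_reference_rgb.csv", delimiter=",")  # N x 3 known patch colors

# Solve reference ~= camera @ M for the 3x3 correction matrix M.
M, *_ = np.linalg.lstsq(camera_rgb, reference_rgb, rcond=None)

def correct(image_rgb):
    """Apply the fitted correction to an HxWx3 image."""
    return np.clip(image_rgb.reshape(-1, 3) @ M, 0, 255).reshape(image_rgb.shape)
```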
2015-01-01
Color is one of the most prominent features of an image and is used in many skin and face detection applications. Color space transformation is widely used by researchers to improve face and skin detection performance. Despite the substantial research effort in this area, choosing a proper color space for skin and face classification that can address issues like illumination variation, varying camera characteristics, and the diversity of skin color tones has remained an open problem. This research proposes a new three-dimensional hybrid color space, termed SKN, obtained by employing a Genetic Algorithm heuristic and Principal Component Analysis to find the optimal representation of human skin color across over seventeen existing color spaces. The Genetic Algorithm heuristic finds the optimal color-component combination in terms of skin detection accuracy, while Principal Component Analysis projects the optimal Genetic Algorithm solution onto a lower-dimensional space. Pixel-wise skin detection was used to evaluate the performance of the proposed color space. We employed four classifiers, Random Forest, Naïve Bayes, Support Vector Machine, and Multilayer Perceptron, to generate the human skin color predictive model. The proposed color space was compared with existing color spaces and shows superior results in terms of pixel-wise skin detection accuracy. Experimental results show that, using the Random Forest classifier, the proposed SKN color space obtained an average F-score and True Positive Rate of 0.953 and a False Positive Rate of 0.0482, outperforming the existing color spaces in pixel-wise skin detection accuracy. The results also indicate that, among the classifiers used in this study, Random Forest is the most suitable for pixel-wise skin detection applications. PMID:26267377
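For context, a bare-bones pixel-wise skin detector of the kind evaluated above can be set up as follows; this is a generic illustration using stock color spaces and a Random Forest, not the SKN color space or the datasets from the paper, and the file names are hypothetical.

```python
# Sketch: pixel-wise skin detection with a Random Forest on per-pixel color features
# (generic setup; the SKN components and the paper's data are not reproduced).
import numpy as np
import cv2
from sklearn.ensemble import RandomForestClassifier

def pixel_features(bgr_img):
    """Stack a few color representations per pixel as a 9-dimensional feature vector."""
    hsv = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2HSV)
    ycrcb = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2YCrCb)
    return np.dstack((bgr_img, hsv, ycrcb)).reshape(-1, 9).astype(float)

train_img = cv2.imread("train_image.png")                     # hypothetical training image
train_mask = cv2.imread("train_mask.png", cv2.IMREAD_GRAYSCALE) > 0  # binary skin mask

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
clf.fit(pixel_features(train_img), train_mask.ravel())

test_img = cv2.imread("test_image.png")
skin_map = clf.predict(pixel_features(test_img)).reshape(test_img.shape[:2])
```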
NASA Astrophysics Data System (ADS)
Sakamoto, Takashi
2015-01-01
This study describes a color enhancement method that uses a color palette especially designed for protan and deutan defects, commonly known as red-green color blindness. The proposed color reduction method is based on a simple color mapping; no complicated computation or image processing is required, and the method can replace protan and deutan confusion (p/d-confusion) colors with protan and deutan safe (p/d-safe) colors. Color palettes for protan and deutan defects proposed in previous studies contain only a few p/d-safe colors, so they are insufficient for replacing the colors in photographs. Recently, Ito et al. proposed a p/d-safe color palette composed of 20 particular colors. The author demonstrated that their p/d-safe color palette can be applied to color reduction in photographs as a means of replacing p/d-confusion colors. This study presents the results of the proposed color reduction on photographs that contain typical p/d-confusion colors. After the reduction process is completed, color-defective observers can distinguish the formerly confusing colors.
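The core of such a color reduction is a nearest-color mapping onto the safe palette, as sketched below. The three palette entries are placeholders for illustration, not the 20 colors of Ito et al., and the file names are assumptions.

```python
# Sketch: reduce a photograph to a p/d-safe palette by nearest-color mapping.
# The palette entries are placeholders, not the 20-color palette of Ito et al.
import numpy as np
from skimage import io

PALETTE = np.array([[0, 114, 178],     # blue   (placeholder p/d-safe colors)
                    [230, 159, 0],     # orange
                    [240, 228, 66]],   # yellow
                   dtype=float)

img = io.imread("photo.png")[..., :3].astype(float)
pixels = img.reshape(-1, 1, 3)
distances = np.linalg.norm(pixels - PALETTE[None, :, :], axis=2)  # distance to each palette color
mapped = PALETTE[distances.argmin(axis=1)].reshape(img.shape).astype(np.uint8)
io.imsave("photo_pd_safe.png", mapped)
```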
NASA Technical Reports Server (NTRS)
Gordon, H. R.; Evans, R. H.
1993-01-01
In a recent paper Eckstein and Simpson describe what they believe to be serious difficulties and/or errors with the CZCS (Coastal Zone Color Scanner) processing algorithms based on their analysis of seven images. Here we point out that portions of their analysis, particularly those dealing with multiple scattered Rayleigh radiance, are incorrect. We also argue that other problems they discuss have already been addressed in the literature. Finally, we suggest that many apparent artifacts in CZCS-derived pigment fields are likely to be due to inadequacies in the sensor band set or to poor radiometric stability, both of which will be remedied with the next generation of ocean color sensors.
2017-07-13
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows part of Melas Chasma. Orbit Number: 59750 Latitude: -10.5452 Longitude: 290.307 Instrument: VIS Captured: 2015-06-03 12:33 https://photojournal.jpl.nasa.gov/catalog/PIA21705
2015-08-21
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows part of Melas Chasma. Orbit Number: 10289 Latitude: -9.9472 Longitude: 285.933 Instrument: VIS Captured: 2004-04-09 12:43 http://photojournal.jpl.nasa.gov/catalog/PIA19756
NASA Astrophysics Data System (ADS)
Funamizu, Hideki; Onodera, Yusei; Aizu, Yoshihisa
2018-05-01
In this study, we report color-quality improvement of reconstructed images in color digital holography using the speckle method and spectral estimation. In this technique, an object is illuminated by a speckle field to produce an object wave, while a plane wave is used as the reference wave. For three wavelengths, the interference patterns of the two coherent waves are recorded as digital holograms on an image sensor. The speckle fields are changed by moving a ground glass plate in the in-plane direction, and a number of holograms are acquired so that the reconstructed images can be averaged. After averaging the images reconstructed from the multiple holograms, we use the Wiener estimation method to obtain spectral transmittance curves in the reconstructed images. The color reproducibility of this method is demonstrated and evaluated using a Macbeth color chart film and stained onion cells.
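A compact sketch of Wiener estimation as used to recover a spectral transmittance curve from a small number of channel measurements follows; the system matrix, training spectra, noise-free assumption, and wavelength sampling are all placeholders for illustration, not values from the paper.

```python
# Sketch: Wiener estimation of a spectral transmittance curve from 3-channel measurements.
# H (system matrix) and the training spectra are placeholders; noise term omitted.
import numpy as np

wavelengths = np.arange(400, 701, 10)                    # 31 spectral samples (assumed)
rng = np.random.default_rng(0)

H = rng.random((3, wavelengths.size))                    # assumed sensitivities of the 3 channels
training_spectra = rng.random((100, wavelengths.size))   # prior spectra for the correlation matrix

# Wiener estimation matrix: W = R_ss H^T (H R_ss H^T)^-1  (noise covariance omitted here)
R_ss = training_spectra.T @ training_spectra / len(training_spectra)
W = R_ss @ H.T @ np.linalg.inv(H @ R_ss @ H.T)

measured = H @ rng.random(wavelengths.size)              # simulated 3-channel measurement
estimated_spectrum = W @ measured                        # estimated transmittance vs. wavelength
```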