Science.gov

Sample records for 8-bit color image

  1. 8-Bit Gray Scale Images of Fingerprint Image Groups

    National Institute of Standards and Technology Data Gateway

    NIST 8-Bit Gray Scale Images of Fingerprint Image Groups (PC database for purchase)   The NIST database of fingerprint images contains 2000 8-bit gray scale fingerprint image pairs. A newer version of the compression/decompression software on the CDROM can be found at the website http://www.nist.gov/itl/iad/ig/nigos.cfm as part of the NBIS package.

  2. DICOM part 14: GSDF-calibrated medical grade monitor vs a DICOM part 14: GSDF-calibrated “commercial off-the-shelf” (COTS) monitor for viewing 8-bit dental images

    PubMed Central

    McNulty, J P

    2015-01-01

    Objectives: To investigate whether there is any difference in the presented image quality between a medical grade monitor and a “commercial off-the-shelf” (COTS) monitor when displaying an 8-bit dental image. Methods: The digital imaging and communications in medicine (DICOM) part 14: greyscale standard display function (GSDF) was verified for both monitors. A visual grading characteristics (VGC) curve was constructed to measure the difference in image quality between the two monitors by comparing radiological structures displayed on each monitor with a DICOM part 14: GSDF-calibrated laptop monitor as reference. Results: All of the monitors conformed to within the American Association of Physicists in Medicine Task Group 18 10% tolerance levels for the assessment of the DICOM part 14: GSDF. There was no difference in the preferred perceived visual sensation for the displayed image between the two tested monitors with the area under the VGC curve = 0.53 and 95% confidence interval = 0.47–0.59. Conclusions: A DICOM part 14: GSDF COTS monitor is capable of displaying an image quality that is equally preferred to a DICOM part 14: GSDF medical grade monitor for an 8-bit image file. PMID:25421807

  3. NSC 800, 8-bit CMOS microprocessor

    NASA Technical Reports Server (NTRS)

    Suszko, S. F.

    1984-01-01

    The NSC 800 is an 8-bit CMOS microprocessor manufactured by National Semiconductor Corp., Santa Clara, California. The 8-bit microprocessor chip with 40-pad pin-terminals has eight address buffers (A8-A15), eight data/address I/O buffers (AD0-AD7), six interrupt controls and sixteen timing controls with a chip clock generator and an 8-bit dynamic RAM refresh circuit. The 22 internal registers have the capability of addressing 64K bytes of memory and 256 I/O devices. The chip is fabricated on N-type (100) silicon using self-aligned polysilicon gates and local oxidation process technology. The chip interconnect consists of four levels: aluminum, Polysi 2, Polysi 1, and P+ and N+ diffusions. The four levels, except for contact interface, are isolated by interlevel oxide. The chip is packaged in a 40-pin dual-in-line (DIP), side brazed, hermetically sealed, ceramic package with a metal lid. The operating voltage for the device is 5 V. It is available in three operating temperature ranges: 0 to +70 C, -40 to +85 C, and -55 to +125 C. Two devices were submitted for product evaluation by F. Stott, MTS, JPL Microprocessor Specialist. The devices were pencil-marked and photographed for identification.

  4. Visual color image processing

    NASA Astrophysics Data System (ADS)

    Qiu, Guoping; Schaefer, Gerald

    1999-12-01

    In this paper, we propose a color image processing method that combines modern signal processing techniques with knowledge of the properties of the human color vision system. Color signals are processed differently according to their visual importance. The emphasis of the technique is on preserving the overall visual quality of the image while taking computational efficiency into account. A specific color image enhancement technique, termed Hybrid Vector Median Filtering, is presented. Computer simulations have been performed to demonstrate that the new approach is technically sound and that its results are comparable to or better than those of traditional methods.

  5. Digital color representation

    DOEpatents

    White, James M.; Faber, Vance; Saltzman, Jeffrey S.

    1992-01-01

    An image population having a large number of attributes is processed to form a display population with a predetermined smaller number of attributes that represent the larger number of attributes. In a particular application, the color values in an image are compressed for storage in a discrete lookup table (LUT) so that an 8-bit data signal can drive a display of 24-bit color values. The LUT is formed by sampling and averaging the image color values, with no requirement to define discrete Voronoi regions for color compression. Each image color value is assigned an 8-bit pointer to its closest LUT entry, so that data processing requires only the 8-bit pointer value to provide 24-bit color values from the LUT.
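
    To make the indexed-color idea concrete, the sketch below builds a 256-entry LUT by sampling and averaging image colors, assigns each pixel an 8-bit pointer to its nearest entry, and expands the pointers back to 24-bit values. The sampling-and-averaging step is a deliberately crude stand-in for the patented construction, whose details are not given in the abstract.

```python
import numpy as np

def build_lut(image_rgb, n_entries=256, n_samples=20000, seed=0):
    # Sample pixels and average them into n_entries bins; a simplified
    # stand-in for the patent's sampling-and-averaging LUT construction.
    rng = np.random.default_rng(seed)
    pixels = image_rgb.reshape(-1, 3).astype(np.float32)
    sample = pixels[rng.choice(len(pixels), min(n_samples, len(pixels)), replace=False)]
    seeds = sample[:n_entries]
    labels = np.linalg.norm(sample[:, None, :] - seeds[None, :, :], axis=2).argmin(axis=1)
    lut = np.stack([sample[labels == i].mean(axis=0) if np.any(labels == i) else seeds[i]
                    for i in range(n_entries)])
    return lut.astype(np.uint8)

def index_image(image_rgb, lut):
    # Assign each pixel an 8-bit pointer to its nearest LUT entry
    # (row by row to keep the distance matrix small).
    h, w, _ = image_rgb.shape
    lut_f = lut.astype(np.float32)
    indexed = np.empty((h, w), dtype=np.uint8)
    for y in range(h):
        row = image_rgb[y].astype(np.float32)
        indexed[y] = np.linalg.norm(row[:, None, :] - lut_f[None, :, :], axis=2).argmin(axis=1)
    return indexed

def reconstruct(indexed, lut):
    # The 8-bit pointers expand back to 24-bit color values through the LUT.
    return lut[indexed]
```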

  6. Color harmonization for images

    NASA Astrophysics Data System (ADS)

    Tang, Zhen; Miao, Zhenjiang; Wan, Yanli; Wang, Zhifei

    2011-04-01

    Color harmonization is an artistic technique to adjust a set of colors in order to enhance their visual harmony so that they are aesthetically pleasing in terms of human visual perception. We present a new color harmonization method that treats the harmonization as a function optimization. For a given image, we derive a cost function based on the observation that pixels in a small window that have similar unharmonic hues should be harmonized with similar harmonic hues. By minimizing the cost function, we get a harmonized image in which the spatial coherence is preserved. A new matching function is proposed to select the best matching harmonic schemes, and a new component-based preharmonization strategy is proposed to preserve the hue distribution of the harmonized images. Our approach overcomes several shortcomings of the existing color harmonization methods. We test our algorithm with a variety of images to demonstrate the effectiveness of our approach.

  7. Color Doppler flow imaging.

    PubMed

    Foley, W D; Erickson, S J

    1991-01-01

    The performance requirements and operational parameters of a color Doppler system are outlined. The ability of an operator to recognize normal and abnormal variations in physiologic flow and artifacts caused by noise and aliasing is emphasized. The use of color Doppler flow imaging is described for the vessels of the neck and extremities, upper abdomen and abdominal transplants, obstetrics and gynecology, dialysis fistulas, and testicular and penile flow imaging. PMID:1898567

  8. Color image segmentation

    NASA Astrophysics Data System (ADS)

    McCrae, Kimberley A.; Ruck, Dennis W.; Rogers, Steven K.; Oxley, Mark E.

    1994-03-01

    The most difficult stage of automated target recognition is segmentation. Current segmentation problems include faces and tactical targets; previous efforts to segment these objects have used intensity and motion cues. This paper develops a color preprocessing scheme to be used with the other segmentation techniques. A neural network is trained to identify the color of a desired object, eliminating all but that color from the scene. Gabor correlations and 2D wavelet transformations will be performed on stationary images, and 3D wavelet transforms on multispectral data will incorporate color and motion detection into the machine vision system. The paper demonstrates that color and motion cues can enhance a computer segmentation system. Results from segmenting faces, both from the AFIT database and from videotaped television, are presented; results from tactical targets such as tanks and airplanes are also given. Color preprocessing is shown to greatly improve the segmentation in most cases.

  9. An innovative lossless compression method for discrete-color images.

    PubMed

    Alzahir, Saif; Borici, Arber

    2015-01-01

    In this paper, we present an innovative method for lossless compression of discrete-color images, such as map images, graphics, and GIS images, as well as binary images. This method comprises two main components. The first is a fixed-size codebook encompassing 8×8 bit blocks of two-tone data along with their corresponding Huffman codes and their relative probabilities of occurrence. The probabilities were obtained from a very large set of discrete-color images, which are also used for arithmetic coding. The second component is row-column reduction coding, which encodes those blocks that are not in the codebook. The proposed method has been successfully applied to two major image categories: 1) images with a predetermined number of discrete colors, such as digital maps, graphs, and GIS images, and 2) binary images. The results show that our method compresses images from both categories by more than 90% in most cases, and outperforms JBIG-2 by 5%-20% for binary images and by 2%-6.3% on average for discrete-color images. PMID:25330487
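
    The codebook component can be illustrated with a short sketch: the binary image is tiled into 8×8 blocks, each block is looked up in a hypothetical codebook mapping block patterns to Huffman codewords, and blocks not found fall back to an escape code followed by the raw 64 bits. The escape fallback stands in for the paper's row-column reduction coding, which is not reproduced here.

```python
import numpy as np

def encode_blocks(binary_image, codebook, escape="11111111"):
    # binary_image: 2-D array of 0/1 values whose dimensions are multiples of 8.
    # codebook: dict mapping an 8x8 block (packed to 8 bytes) to a Huffman codeword
    # string; in a real coder the escape symbol must be part of the prefix-free code.
    h, w = binary_image.shape
    assert h % 8 == 0 and w % 8 == 0, "pad the image to a multiple of 8 first"
    out = []
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            block = binary_image[y:y + 8, x:x + 8]
            key = np.packbits(block).tobytes()
            code = codebook.get(key)
            if code is not None:
                out.append(code)  # block found in the fixed codebook
            else:
                # Fallback: escape code plus the raw 64 bits (stands in for
                # the paper's row-column reduction coding).
                out.append(escape + "".join(str(int(b)) for b in block.flat))
    return "".join(out)
```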

  10. A VLSI single chip 8-bit finite field multiplier

    NASA Technical Reports Server (NTRS)

    Deutsch, L. J.; Shao, H. M.; Hsu, I. S.; Truong, T. K.

    1985-01-01

    A Very Large Scale Integration (VLSI) architecture and layout for an 8-bit finite field multiplier is described. The algorithm used in this design was developed by Massey and Omura. A normal basis representation of finite field elements is used to reduce the multiplication complexity. It is shown that a drastic improvement was achieved in this design. This multiplier will be used intensively in the implementation of an 8-bit Reed-Solomon decoder and in many other related projects.
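
    For readers unfamiliar with GF(2^8) arithmetic, the routine below multiplies two field elements in software. Note the hedge: the hardware design described uses a normal-basis Massey-Omura multiplier, whereas this sketch uses the more common polynomial basis (with the x^8 + x^4 + x^3 + x^2 + 1 reduction polynomial often chosen for Reed-Solomon codes) purely to illustrate the arithmetic being accelerated.

```python
def gf256_mul(a, b, reduction_poly=0x11D):
    # Multiply two elements of GF(2^8) in a polynomial basis.
    # 0x11D encodes x^8 + x^4 + x^3 + x^2 + 1; the paper's normal-basis
    # Massey-Omura hardware computes the same field product differently.
    result = 0
    while b:
        if b & 1:
            result ^= a          # addition in GF(2^8) is bitwise XOR
        b >>= 1
        a <<= 1
        if a & 0x100:            # reduce modulo the field polynomial
            a ^= reduction_poly
    return result
```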

  11. Image indexing using color correlograms

    DOEpatents

    Huang, Jing; Kumar, Shanmugasundaram Ravi; Mitra, Mandar; Zhu, Wei-Jing

    2001-01-01

    A color correlogram is a three-dimensional table indexed by color and distance between pixels which expresses how the spatial correlation of color changes with distance in a stored image. The color correlogram may be used to distinguish an image from other images in a database. To create a color correlogram, the colors in the image are quantized into m color values, c_1, ..., c_m. Also, the distance values k ∈ [d] to be used in the correlogram are determined, where [d] is the set of distances between pixels in the image and d_max is the maximum distance between pixels in the image. Each entry (i, j, k) in the table is the probability of finding a pixel of color c_j at a selected distance k from a pixel of color c_i. A color autocorrelogram, which is a restricted version of the color correlogram that considers color pairs of the form (i,i) only, may also be used to identify an image.
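
    A minimal sketch of the restricted (autocorrelogram) form: for each quantized color c and each distance k, it estimates the probability that a pixel at distance k from a pixel of color c also has color c. For brevity only the four axis-aligned offsets at each distance are sampled, rather than the full L-infinity neighborhood used in the patent.

```python
import numpy as np

def _shifted_pairs(img, dy, dx):
    # Color indices of all pixel pairs separated by the offset (dy, dx).
    h, w = img.shape
    y0, y1 = max(0, -dy), h - max(0, dy)
    x0, x1 = max(0, -dx), w - max(0, dx)
    a = img[y0:y1, x0:x1]
    b = img[y0 + dy:y1 + dy, x0 + dx:x1 + dx]
    return a.ravel(), b.ravel()

def color_autocorrelogram(indexed_image, distances=(1, 3, 5, 7), n_colors=64):
    # indexed_image holds quantized color indices in [0, n_colors).
    # Entry (c, k) estimates Pr[pixel at distance k from a c-colored pixel is also c].
    acg = np.zeros((n_colors, len(distances)))
    for di, k in enumerate(distances):
        hits = np.zeros(n_colors)
        totals = np.zeros(n_colors)
        for dy, dx in ((0, k), (0, -k), (k, 0), (-k, 0)):
            a, b = _shifted_pairs(indexed_image, dy, dx)
            totals += np.bincount(a, minlength=n_colors)
            hits += np.bincount(a[a == b], minlength=n_colors)
        acg[:, di] = np.divide(hits, totals, out=np.zeros(n_colors), where=totals > 0)
    return acg
```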

  12. Adaptive color image watermarking algorithm

    NASA Astrophysics Data System (ADS)

    Feng, Gui; Lin, Qiwei

    2008-03-01

    As a major method for protecting intellectual property rights, digital watermarking techniques have been widely studied and used. However, because of the larger data volume and the problem of color shift, watermarking of color images has been studied far less, even though color images are the principal content in multimedia applications. Considering the characteristics of the Human Visual System (HVS), an adaptive color image watermarking algorithm is proposed in this paper. In this algorithm, the HSI color model is adopted for both the host and the watermark image; the DCT coefficients of the intensity component (I) of the host color image are used for embedding the watermark data, and the number of embedded bits is adaptively changed with the complexity of the host image. The watermark image is first preprocessed by decomposing it with a two-level wavelet transform. At the same time, to enhance the anti-attack ability and security of the watermarking algorithm, the watermark image is scrambled. According to their significance, some watermark bits are selected and others are discarded to form the actual embedding data. The experimental results show that the proposed watermarking algorithm is robust to several common attacks and at the same time has good perceptual quality.

  13. Natural-color and color-infrared image mosaics of the Colorado River corridor in Arizona derived from the May 2009 airborne image collection

    USGS Publications Warehouse

    Davis, Philip A.

    2013-01-01

    The Grand Canyon Monitoring and Research Center (GCMRC) of the U.S. Geological Survey (USGS) periodically collects airborne image data for the Colorado River corridor within Arizona (fig. 1) to allow scientists to study the impacts of Glen Canyon Dam water release on the corridor’s natural and cultural resources. These data are collected from just above Glen Canyon Dam (in Lake Powell) down to the entrance of Lake Mead, for a total distance of 450 kilometers (km) and within a 500-meter (m) swath centered on the river’s mainstem and its seven main tributaries (fig. 1). The most recent airborne data collection in 2009 acquired image data in four wavelength bands (blue, green, red, and near infrared) at a spatial resolution of 20 centimeters (cm). The image collection used the latest model of the Leica ADS40 airborne digital sensor (the SH52), which uses a single optic for all four bands and collects and stores band radiance in 12 bits. Davis (2012) reported on the performance of the SH52 sensor and on the processing steps required to produce the nearly flawless four-band image mosaic (sectioned into map tiles) for the river corridor. The final image mosaic has a total of only 3 km of surface defects in addition to some areas of cloud shadow because of persistent inclement weather during data collection. The 2009 four-band image mosaic is perhaps the best image dataset that exists for the entire Arizona part of the Colorado River. Some analyses of these image mosaics do not require the full 12-bit dynamic range or all four bands of the calibrated image database, in which atmospheric scattering (or haze) had not been removed from the four bands. To provide scientists and the general public with image products that are more useful for visual interpretation, the 12-bit image data were converted to 8-bit natural-color and color-infrared images, which also removed atmospheric scattering within each wavelength-band image. The conversion required an evaluation of the
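
    As an illustration of the kind of conversion described (not the USGS procedure, whose details are not given here), the sketch below maps one 12-bit band to 8 bits with a dark-object subtraction standing in for the atmospheric-scattering offset, followed by a percentile stretch.

```python
import numpy as np

def to_8bit_band(band12, low_pct=0.5, high_pct=99.5):
    # Dark-object subtraction (the low percentile stands in for the additive
    # haze estimate) followed by a percentile stretch to the 8-bit range.
    band = band12.astype(np.float64)
    band -= np.percentile(band, low_pct)
    hi = np.percentile(band, high_pct)
    band = np.clip(band / max(hi, 1e-9), 0.0, 1.0)
    return (band * 255 + 0.5).astype(np.uint8)

# Natural-color composite from the red, green and blue 12-bit bands:
#   rgb8 = np.dstack([to_8bit_band(b) for b in (red12, green12, blue12)])
# A color-infrared composite would instead stack (nir12, red12, green12).
```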

  14. Precise color images a high-speed color video camera system with three intensified sensors

    NASA Astrophysics Data System (ADS)

    Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.

    1999-06-01

    High speed imaging systems have been used in a large field of science and engineering. Although high speed camera systems have improved to high performance, most of their applications are only to obtain high speed motion pictures. However, in some fields of science and technology, it is useful to obtain other information as well, such as the temperature of combustion flames, thermal plasma and molten materials. Recent digital high speed video imaging technology should be able to obtain such information from those objects. For this purpose, we have already developed a high speed video camera system with three intensified sensors and a cubic prism image splitter. The maximum frame rate is 40,500 pps (pictures per second) at 64 X 64 pixels and 4,500 pps at 256 X 256 pixels with 256 (8 bit) intensity resolution for each pixel. The camera system can store more than 1,000 pictures continuously in solid state memory. In order to obtain precise color images from this camera system, we need to develop a digital technique, which consists of a computer program and ancillary instruments, to adjust the displacement of images taken from two or three image sensors and to calibrate the relationship between incident light intensity and the corresponding digital output signals. In this paper, the digital technique for pixel-based displacement adjustment is proposed. Although the displacement of the corresponding circle was more than 8 pixels in the original image, the displacement was adjusted to within 0.2 pixels at most by this method.

  15. Color image segmentation considering human sensitivity for color pattern variations

    NASA Astrophysics Data System (ADS)

    Yoon, Kuk-Jin; Kweon, In-So

    2001-10-01

    Color image segmentation plays an important role in the computer vision and image processing area. In this paper, we propose a novel color image segmentation algorithm that takes human visual sensitivity to color pattern variations into account by generalizing K-means clustering. The human visual system has different color perception sensitivity depending on the spatial color pattern variation. To reflect this effect, we define the CCM (Color Complexity Measure) by calculating the absolute deviation with Gaussian weighting within the local mask and assign a weight value to each color vector using the CCM values.
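
    One plausible reading of the CCM, sketched below under stated assumptions: for each channel, compute the Gaussian-weighted absolute deviation from the local Gaussian mean and sum over channels; pixels with a large CCM (busy color texture) can then be down-weighted when the color vectors are clustered with K-means. The exact mask size and normalization used in the paper may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def color_complexity_measure(image_rgb, sigma=2.0):
    # Gaussian-weighted absolute deviation from the local mean, summed over
    # channels; an assumed reading of the paper's CCM, not its exact formula.
    img = image_rgb.astype(np.float64)
    ccm = np.zeros(img.shape[:2])
    for c in range(img.shape[2]):
        local_mean = gaussian_filter(img[..., c], sigma)
        ccm += gaussian_filter(np.abs(img[..., c] - local_mean), sigma)
    return ccm
```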

  16. Image subregion querying using color correlograms

    DOEpatents

    Huang, Jing; Kumar, Shanmugasundaram Ravi; Mitra, Mandar; Zhu, Wei-Jing

    2002-01-01

    A color correlogram (10) is a representation expressing the spatial correlation of color and distance between pixels in a stored image. The color correlogram (10) may be used to distinguish objects in an image as well as between images in a plurality of images. By intersecting a color correlogram of an image object with correlograms of images to be searched, those images which contain the objects are identified by the intersection correlogram.

  17. Color image processing for date quality evaluation

    NASA Astrophysics Data System (ADS)

    Lee, Dah Jye; Archibald, James K.

    2010-01-01

    Many agricultural non-contact visual inspection applications use color image processing techniques because color is often a good indicator of product quality. Color evaluation is an essential step in the processing and inventory control of fruits and vegetables that directly affects profitability. Most color spaces such as RGB and HSV represent colors with three-dimensional data, which makes color image processing a challenging task. Since most agricultural applications only require analysis on a predefined set or range of colors, mapping these relevant colors to a small number of indexes allows simple and efficient color image processing for quality evaluation. This paper presents a simple but efficient color mapping and image processing technique that is designed specifically for real-time quality evaluation of Medjool dates. In contrast with more complex color image processing techniques, the proposed color mapping method makes it easy for a human operator to specify and adjust color-preference settings for different color groups representing distinct quality levels. Using this color mapping technique, the color image is first converted to a color map in which a single color index represents the color value of each pixel. Fruit maturity level is evaluated based on these color indices. A skin lamination threshold is then determined based on the fruit surface characteristics. This adaptive threshold is used to detect delaminated fruit skin and hence determine the fruit quality. This robust color grading technique has been used for real-time Medjool date grading.

  18. Temperature-compensated 8-bit column driver for AMLCD

    NASA Astrophysics Data System (ADS)

    Dingwall, Andrew G. F.; Lin, Mark L.

    1995-06-01

    An all-digital, 5 V input, 50 MHz bandwidth, 10-bit resolution, 128-column, AMLCD column driver IC has been designed and tested. The 10-bit design can enhance display definition over 6-bit and 8-bit column drivers. Precision is realized with on-chip, switched-capacitor DACs plus transparently auto-offset-calibrated opamp outputs. Increased resolution permits multiple 10-bit digital gamma remappings in EPROMs over temperature. Driver IC features include an externally programmable number of output columns, bi-directional digital data shifting, user-defined row/column/pixel/frame inversion, power management, timing control for daisy-chained column drivers, and digital bit inversion. The architecture uses fewer reference power supplies.

  19. Transfer color to night vision images

    NASA Astrophysics Data System (ADS)

    Sun, Shaoyuan; Jing, Zhongliang; Liu, Gang; Li, Zhenhua

    2005-08-01

    Natural color appearance is the key problem in the color night vision field. In this paper, the color mood of a daytime color image is transferred to a monochromatic night vision image. This method gives the night image a natural color appearance. For each pixel in the night vision image, the best matching pixel in the color image is found based on a texture similarity measure. Entropy, energy, contrast, homogeneity, and correlation features based on the co-occurrence matrix are combined as the texture similarity measure to find corresponding pixels between the two images. We use a genetic algorithm (GA) to find the optimal weighting factors assigned to the five different features. The GA is also employed in searching for the matching pixels to make the color transfer algorithm faster. When the best matching pixel in the color image is found, its chromaticity values are transferred to the corresponding pixel of the night vision image. The experimental results demonstrate the efficiency of this natural color transfer technique.

  20. Snapshot colored compressive spectral imager.

    PubMed

    Correa, Claudia V; Arguello, Henry; Arce, Gonzalo R

    2015-10-01

    Traditional spectral imaging approaches require sensing all the voxels of a scene. Colored mosaic FPA detector-based architectures can acquire sets of the scene's spectral components, but the number of spectral planes depends directly on the number of available filters used on the FPA, which leads to reduced spatiospectral resolutions. Instead of sensing all the voxels of the scene, compressive spectral imaging (CSI) captures coded and dispersed projections of the spatiospectral source. This approach mitigates the resolution issues by exploiting optical phenomena in lenses and other elements, which, in turn, compromise the portability of the devices. This paper presents a compact snapshot colored compressive spectral imager (SCCSI) that exploits the benefits of the colored mosaic FPA detectors and the compression capabilities of CSI sensing techniques. The proposed optical architecture has no moving parts and can capture the spatiospectral information of a scene in a single snapshot by using a dispersive element and a color-patterned detector. The optical and the mathematical models of SCCSI are presented along with a testbed implementation of the system. Simulations and real experiments show the accuracy of SCCSI and compare the reconstructions with those of similar CSI optical architectures, such as the CASSI and SSCSI systems, resulting in improvements of up to 6 dB and 1 dB of PSNR, respectively. PMID:26479928

  1. Color (RGB) imaging laser radar

    NASA Astrophysics Data System (ADS)

    Ferri De Collibus, M.; Bartolini, L.; Fornetti, G.; Francucci, M.; Guarneri, M.; Nuvoli, M.; Paglia, E.; Ricci, R.

    2008-03-01

    We present a new color (RGB) imaging 3D laser scanner prototype recently developed at ENEA (Italy). The sensor is based on the AM range finding technique and uses three distinct beams (650nm, 532nm and 450nm respectively) in monostatic configuration. During a scan the laser beams are simultaneously swept over the target, yielding range and three separated channels (R, G and B) of reflectance information for each sampled point. This information, organized in range and reflectance images, is then elaborated to produce very high definition color pictures and faithful, natively colored 3D models. Notable characteristics of the system are the absence of shadows in the acquired reflectance images - due to the system's monostatic setup and intrinsic self-illumination capability - and high noise rejection, achieved by using a narrow field of view and interferential filters. The system is also very accurate in range determination (accuracy better than 10^-4) at distances up to several meters. These unprecedented features make the system particularly suited to applications in the domain of cultural heritage preservation, where it could be used by conservators for examining in detail the status of degradation of frescoed walls, monuments and paintings, even at several meters of distance and in hardly accessible locations. After providing some theoretical background, we describe the general architecture and operation modes of the color 3D laser scanner, reporting and discussing first experimental results and comparing high-definition color images produced by the instrument with photographs of the same subjects taken with a Nikon D70 digital camera.

  2. Color image simulation for underwater optics.

    PubMed

    Boffety, Matthieu; Galland, Frédéric; Allais, Anne-Gaëlle

    2012-08-10

    Underwater optical image simulation is a valuable tool for oceanic science, especially for the characterization of image processing techniques such as color restoration. In this context, simulating images with a correct color rendering is crucial. This paper presents an extension of existing image simulation models to RGB imaging. The influence of the spectral discretization of the model parameters on the color rendering of the simulated images is studied. It is especially shown that, if only RGB data of the scene chosen for simulations are available, a spectral reconstruction step prior to the simulations improves the image color rendering. PMID:22885575

  3. Computational efficiency improvements for image colorization

    NASA Astrophysics Data System (ADS)

    Yu, Chao; Sharma, Gaurav; Aly, Hussein

    2013-03-01

    We propose an efficient algorithm for colorization of greyscale images. As in prior work, colorization is posed as an optimization problem: a user specifies the color for a few scribbles drawn on the greyscale image and the color image is obtained by propagating color information from the scribbles to surrounding regions, while maximizing the local smoothness of colors. In this formulation, colorization is obtained by solving a large sparse linear system, which normally requires substantial computation and memory resources. Our algorithm improves the computational performance through three innovations over prior colorization implementations. First, the linear system is solved iteratively without explicitly constructing the sparse matrix, which significantly reduces the required memory. Second, we formulate each iteration in terms of integral images obtained by dynamic programming, reducing repetitive computation. Third, we use a coarse-to-fine framework, where a lower resolution subsampled image is first colorized and this low resolution color image is upsampled to initialize the colorization process for the fine level. The improvements we develop provide significant speedup and memory savings compared to the conventional approach of solving the linear system directly using off-the-shelf sparse solvers, and allow us to colorize images with typical sizes encountered in realistic applications on typical commodity computing platforms.
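
    The matrix-free idea can be sketched as a simple fixed-point iteration: each free pixel is repeatedly replaced by an intensity-weighted average of its four neighbours while scribbled pixels stay fixed. This Jacobi-style sweep with exp(-(ΔY)^2 / 2σ^2) weights is a stand-in for the authors' scheme (which additionally uses integral images and a coarse-to-fine pyramid), not their exact algorithm.

```python
import numpy as np

def propagate_channel(gray, scribble, mask, n_iters=500, sigma=0.05):
    # gray: luminance in [0, 1]; scribble: chrominance values where mask is True.
    # Each sweep replaces a free pixel by an intensity-weighted average of its
    # four neighbours; np.roll wraps at the borders, acceptable for a sketch.
    u = np.where(mask, scribble, 0.0).astype(np.float64)
    for _ in range(n_iters):
        num = np.zeros_like(u)
        den = np.zeros_like(u)
        for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            ng = np.roll(gray, (dy, dx), axis=(0, 1))
            nu = np.roll(u, (dy, dx), axis=(0, 1))
            w = np.exp(-((gray - ng) ** 2) / (2.0 * sigma ** 2))
            num += w * nu
            den += w
        u = np.where(mask, scribble, num / np.maximum(den, 1e-12))
    return u
```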

  4. Color space selection for JPEG image compression

    NASA Astrophysics Data System (ADS)

    Moroney, Nathan; Fairchild, Mark D.

    1995-10-01

    The Joint Photographic Experts Group's image compression algorithm has been shown to provide a very efficient and powerful method of compressing images. However, there is little substantive information about which color space should be utilized when implementing the JPEG algorithm. Currently, the JPEG algorithm is set up for use with any three-component color space. The objective of this research is to determine whether or not the color space selected will significantly improve the image compression. The RGB, XYZ, YIQ, CIELAB, CIELUV, and CIELAB LCh color spaces were examined and compared. Both numerical measures and psychophysical techniques were used to assess the results. The final results indicate that the device space, RGB, is the worst color space to compress images. In comparison, the nonlinear transforms of the device space, CIELAB and CIELUV, are the best color spaces to compress images. The XYZ, YIQ, and CIELAB LCh color spaces resulted in intermediate levels of compression.

  5. High-performance VGA-resolution digital color CMOS imager

    NASA Astrophysics Data System (ADS)

    Agwani, Suhail; Domer, Steve; Rubacha, Ray; Stanley, Scott

    1999-04-01

    This paper discusses the performance of a new VGA resolution color CMOS imager developed by Motorola on a 0.5 μm/3.3V CMOS process. This fully integrated, high performance imager has an on-chip timing, control, and analog signal processing chain for digital imaging applications. The picture elements are based on 7.8 μm active CMOS pixels that use pinned photodiodes for higher quantum efficiency and low noise performance. The image processing engine includes a bank of programmable gain amplifiers, line rate clamping for dark offset removal, real time auto white balancing, per column gain and offset calibration, and a 10 bit pipelined RSD analog to digital converter with a programmable input range. Post ADC signal processing includes features such as bad pixel replacement based on user defined threshold levels, 10 to 8 bit companding and 5 tap FIR filtering. The sensor can be programmed via a standard I2C interface that runs on 3.3V clocks. Programmable features include variable frame rates using a constant frequency master clock, electronic exposure control, continuous or single frame capture, and progressive or interlace scanning modes. Each pixel is individually addressable, allowing region of interest imaging and image subsampling. The sensor operates with master clock frequencies of up to 13.5MHz resulting in 30FPS. A total programmable gain of 27dB is available. The sensor power dissipation is 400mW at full speed of operation. The low noise design yields a measured 'system on a chip' dynamic range of 50dB, thus giving over 8 true bits of resolution. Extremely high conversion gain results in an excellent peak sensitivity of 22V/μJ/cm2 or 3.3V/lux-sec. This monolithic image capture and processing engine represents a complete imaging solution, making it a true 'camera on a chip'. Yet in its operation it remains extremely easy to use, requiring only one clock and a 3.3V power supply. Given the available features and performance levels, this sensor will be

  6. Image color reduction method for color-defective observers using a color palette composed of 20 particular colors

    NASA Astrophysics Data System (ADS)

    Sakamoto, Takashi

    2015-01-01

    This study describes a color enhancement method that uses a color palette especially designed for protan and deutan defects, commonly known as red-green color blindness. The proposed color reduction method is based on a simple color mapping. Complicated computation and image processing are not required by the proposed method, which can replace protan and deutan confusion (p/d-confusion) colors with protan and deutan safe (p/d-safe) colors. Color palettes for protan and deutan defects proposed by previous studies are composed of few p/d-safe colors; thus, the colors contained in these palettes are insufficient for replacing colors in photographs. Recently, Ito et al. proposed a p/d-safe color palette composed of 20 particular colors. The author demonstrated that their p/d-safe color palette could be applied to image color reduction in photographs as a means to replace p/d-confusion colors. This study describes the results of the proposed color reduction in photographs that include typical p/d-confusion colors, which can be replaced. After the reduction process is completed, color-defective observers can distinguish these confusion colors.

  7. Enriching text with images and colored light

    NASA Astrophysics Data System (ADS)

    Sekulovski, Dragan; Geleijnse, Gijs; Kater, Bram; Korst, Jan; Pauws, Steffen; Clout, Ramon

    2008-01-01

    We present an unsupervised method to enrich textual applications with relevant images and colors. The images are collected by querying large image repositories and subsequently the colors are computed using image processing. A prototype system based on this method is presented where the method is applied to song lyrics. In combination with a lyrics synchronization algorithm the system produces a rich multimedia experience. In order to identify terms within the text that may be associated with images and colors, we select noun phrases using a part of speech tagger. Large image repositories are queried with these terms. Per term representative colors are extracted using the collected images. Hereto, we either use a histogram-based or a mean shift-based algorithm. The representative color extraction uses the non-uniform distribution of the colors found in the large repositories. The images that are ranked best by the search engine are displayed on a screen, while the extracted representative colors are rendered on controllable lighting devices in the living room. We evaluate our method by comparing the computed colors to standard color representations of a set of English color terms. A second evaluation focuses on the distance in color between a queried term in English and its translation in a foreign language. Based on results from three sets of terms, a measure of suitability of a term for color extraction based on KL Divergence is proposed. Finally, we compare the performance of the algorithm using either the automatically indexed repository of Google Images and the manually annotated Flickr.com. Based on the results of these experiments, we conclude that using the presented method we can compute the relevant color for a term using a large image repository and image processing.

  8. Digital CODEC for real-time processing of broadcast quality video signals at 1.8 bits/pixel

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary JO; Whyte, Wayne A.

    1991-01-01

    Advances in very large scale integration and recent work in the field of bandwidth efficient digital modulation techniques have combined to make digital video processing technically feasible and potentially cost competitive for broadcast quality television transmission. A hardware implementation was developed for a DPCM (differential pulse code modulation)-based digital television bandwidth compression algorithm which processes standard NTSC composite color television signals and produces broadcast quality video in real time at an average of 1.8 bits/pixel. The data compression algorithm and the hardware implementation of the codec are described, and performance results are provided.

  9. Digital CODEC for real-time processing of broadcast quality video signals at 1.8 bits/pixel

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary JO; Whyte, Wayne A., Jr.

    1989-01-01

    Advances in very large-scale integration and recent work in the field of bandwidth efficient digital modulation techniques have combined to make digital video processing technically feasible and potentially cost competitive for broadcast quality television transmission. A hardware implementation was developed for a DPCM-based digital television bandwidth compression algorithm which processes standard NTSC composite color television signals and produces broadcast quality video in real time at an average of 1.8 bits/pixel. The data compression algorithm and the hardware implementation of the CODEC are described, and performance results are provided.

  10. Hepatitis Diagnosis Using Facial Color Image

    NASA Astrophysics Data System (ADS)

    Liu, Mingjia; Guo, Zhenhua

    Facial color diagnosis is an important diagnostic method in traditional Chinese medicine (TCM). However, due to its qualitative, subjective and experience-based nature, traditional facial color diagnosis has a very limited application in clinical medicine. To circumvent the subjective and qualitative problems of facial color diagnosis in Traditional Chinese Medicine, in this paper we present a novel computer-aided facial color diagnosis method (CAFCDM). The method has three parts: a face image database, an image preprocessing module and a diagnosis engine. The face image database was built from a group of 116 patients affected by two kinds of liver disease and 29 healthy volunteers. The quantitative color feature is extracted from facial images by using popular digital image processing techniques. Then, a KNN classifier is employed to model the relationship between the quantitative color feature and diseases. The results show that the method can properly identify three groups: healthy, severe hepatitis with jaundice, and severe hepatitis without jaundice, with accuracy higher than 73%.

  11. Digital image colorization based on distance transformation

    NASA Astrophysics Data System (ADS)

    Lagodzinski, Przemyslaw; Smolka, Bogdan

    2008-01-01

    Colorization is a term introduced by W. Markle to describe a computerized process for adding color to black and white pictures, movies or TV programs. The task involves replacing a scalar value stored at each pixel of the gray scale image by a vector in a three dimensional color space with luminance, saturation and hue or simply RGB. Since different colors may carry the same luminance value but vary in hue and/or saturation, the problem of colorization has no inherently "correct" solution. Due to these ambiguities, human interaction usually plays a large role. In this paper we present a novel colorization method that takes advantage of the morphological distance transformation, changes of neighboring pixel intensities and gradients to propagate the color within the gray scale image. The proposed method frees the user of segmenting the image, as color is provided simply by scribbles which are next automatically propagated within the image. The effectiveness of the algorithm allows the user to work interactively and to obtain the desired results promptly after providing the color scribbles. In the paper we show that the proposed method allows for high quality colorization results for still images.

  12. An Underwater Color Image Quality Evaluation Metric.

    PubMed

    Yang, Miao; Sowmya, Arcot

    2015-12-01

    Quality evaluation of underwater images is a key goal of underwater video image retrieval and intelligent processing. To date, no metric has been proposed for underwater color image quality evaluation (UCIQE). The special absorption and scattering characteristics of the water medium do not allow direct application of natural color image quality metrics, especially to different underwater environments. In this paper, subjective testing for underwater image quality has been organized. The statistical distribution of the underwater image pixels in the CIELab color space related to subjective evaluation indicates that the sharpness and colorfulness factors correlate well with subjective image quality perception. Based on these, a new UCIQE metric, which is a linear combination of chroma, saturation, and contrast, is proposed to quantify the non-uniform color cast, blurring, and low contrast that characterize underwater engineering and monitoring images. Experiments are conducted to illustrate the performance of the proposed UCIQE metric and its capability to measure the underwater image enhancement results. They show that the proposed metric has comparable performance to the leading natural color image quality metrics and the underwater grayscale image quality metrics available in the literature, and can predict with higher accuracy the relative amount of degradation with similar image content in underwater environments. Importantly, UCIQE is a simple and fast solution for real-time underwater video processing. The effectiveness of the presented measure is also demonstrated by subjective evaluation. The results show better correlation between the UCIQE and the subjective mean opinion score. PMID:26513783
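
    A sketch of a UCIQE-style score in the spirit of the abstract: a weighted sum of the chroma standard deviation, a percentile-based luminance contrast, and the mean saturation computed in CIELab. The weights are the values commonly reported for UCIQE in the literature, and the exact attribute definitions here are assumptions to be checked against the original paper.

```python
import numpy as np
from skimage import color

def uciqe_like(image_rgb, weights=(0.4680, 0.2745, 0.2576)):
    # weights: values commonly quoted for UCIQE; treat them as assumptions.
    rgb = image_rgb.astype(np.float64)
    if rgb.max() > 1.0:
        rgb = rgb / 255.0
    lab = color.rgb2lab(rgb)
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    chroma = np.sqrt(a ** 2 + b ** 2)
    sigma_c = chroma.std()                              # spread of chroma
    con_l = np.percentile(L, 99) - np.percentile(L, 1)  # luminance contrast
    mu_s = (chroma / np.maximum(L, 1e-6)).mean()        # mean saturation
    return weights[0] * sigma_c + weights[1] * con_l + weights[2] * mu_s
```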

  13. Variational exemplar-based image colorization.

    PubMed

    Bugeau, Aurélie; Ta, Vinh-Thong; Papadakis, Nicolas

    2014-01-01

    In this paper, we address the problem of recovering a color image from a grayscale one. The input color data comes from a source image considered as a reference image. Reconstructing the missing color of a grayscale pixel is here viewed as the problem of automatically selecting the best color among a set of color candidates while simultaneously ensuring the local spatial coherency of the reconstructed color information. To solve this problem, we propose a variational approach where a specific energy is designed to model the color selection and the spatial constraint problems simultaneously. The contributions of this paper are twofold. First, we introduce a variational formulation modeling the color selection problem under spatial constraints and propose a minimization scheme, which computes a local minimum of the defined nonconvex energy. Second, we combine different patch-based features and distances in order to construct a consistent set of possible color candidates. This set is used as input data and our energy minimization automatically selects the best color to transfer for each pixel of the grayscale image. Finally, the experiments illustrate the potential of our simple methodology and show that our results are very competitive with respect to the state-of-the-art methods. PMID:24235307

  14. Image-based color ink diffusion rendering.

    PubMed

    Wang, Chung-Ming; Wang, Ren-Jie

    2007-01-01

    This paper proposes an image-based painterly rendering algorithm for automatically synthesizing an image with color ink diffusion. We suggest a mathematical model with a physical base to simulate the phenomenon of color colloidal ink diffusing into absorbent paper. Our algorithm contains three main parts: a feature extraction phase, a Kubelka-Munk (KM) color mixing phase, and a color ink diffusion synthesis phase. In the feature extraction phase, the information of the reference image is simplified by luminance division and color segmentation. In the color mixing phase, the KM theory is employed to approximate the result when one pigment is painted upon another pigment layer. Then, in the color ink diffusion synthesis phase, the physically-based model that we propose is employed to simulate the result of color ink diffusion in absorbent paper using a texture synthesis technique. Our image-based color ink diffusion rendering (IBCIDR) algorithm eliminates the drawback of conventional Chinese ink simulations, which are limited to the black ink domain, and our approach demonstrates that, without using any strokes, a color image can be automatically converted to the diffused ink style with a visually pleasing appearance. PMID:17218741

  15. Mobile image based color correction using deblurring

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Xu, Chang; Boushey, Carol; Zhu, Fengqing; Delp, Edward J.

    2015-03-01

    Dietary intake, the process of determining what someone eats during the course of a day, provides valuable insights for mounting intervention programs for prevention of many chronic diseases such as obesity and cancer. The goal of the Technology Assisted Dietary Assessment (TADA) System, developed at Purdue University, is to automatically identify and quantify foods and beverages consumed by utilizing food images acquired with a mobile device. Color correction serves as a critical step to ensure accurate food identification and volume estimation. We make use of a specifically designed color checkerboard (i.e. a fiducial marker) to calibrate the imaging system so that the variations of food appearance under different lighting conditions can be determined. In this paper, we propose an image quality enhancement technique that combines image de-blurring and color correction. The contribution consists of introducing an automatic camera shake removal method using a saliency map and improving the polynomial color correction model using the LMS color space.

  16. Improving dermoscopy image classification using color constancy.

    PubMed

    Barata, Catarina; Celebi, M Emre; Marques, Jorge S

    2015-05-01

    Robustness is one of the most important characteristics of computer-aided diagnosis systems designed for dermoscopy images. However, it is difficult to ensure this characteristic if the systems operate with multisource images acquired under different setups. Changes in the illumination and acquisition devices alter the color of images and often reduce the performance of the systems. Thus, it is important to normalize the colors of dermoscopy images before training and testing any system. In this paper, we investigate four color constancy algorithms: Gray World, max-RGB, Shades of Gray, and General Gray World. Our results show that color constancy improves the classification of multisource images, increasing the sensitivity of a bag-of-features system from 71.0% to 79.7% and the specificity from 55.2% to 76% using only 1-D RGB histograms as features. PMID:25073179
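
    Of the four algorithms compared, Shades of Gray is the easiest to sketch: the Minkowski p-norm of each channel estimates the illuminant, and dividing the channels by that estimate normalizes the colors. Setting p = 1 recovers Gray World and a large p approaches max-RGB; General Gray World adds local smoothing and is not reproduced here. The brightness rescaling below is one common choice, not necessarily the paper's.

```python
import numpy as np

def shades_of_gray(image_rgb, p=6):
    # Estimate the illuminant with a Minkowski p-norm per channel, then apply
    # a von Kries style correction. p=1 is Gray World; large p approaches max-RGB.
    img = image_rgb.astype(np.float64)
    if img.max() > 1.0:
        img = img / 255.0
    illuminant = np.power(np.mean(np.power(img, p), axis=(0, 1)), 1.0 / p)
    corrected = img / np.maximum(illuminant, 1e-12)
    corrected *= illuminant.mean()   # keep overall brightness roughly unchanged
    return np.clip(corrected, 0.0, 1.0)
```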

  17. Statistical pressure snakes based on color images.

    SciTech Connect

    Schaub, Hanspeter

    2004-05-01

    The traditional mono-color statistical pressure snake was modified to function on a color image with target errors defined in HSV color space. Large variations in target lighting and shading are permitted if the target color is only specified in terms of hue. This method works well with custom targets where the target is surrounded by a color of a very different hue. A significant robustness increase is achieved in the computer vision capability to track a specific target in an unstructured, outdoor environment. By specifying the target color to contain hue, saturation and intensity values, it is possible to establish a reasonably robust method to track general image features of a single color. This method is convenient to allow the operator to select arbitrary targets, or sections of a target, which have a common color. Further, a modification to the standard pixel averaging routine is introduced which allows the target to be specified not only in terms of a single color, but also using a list of colors. These algorithms were tested and verified by using a web camera attached to a personal computer.

  18. Compression of color-mapped images

    NASA Technical Reports Server (NTRS)

    Hadenfeldt, A. C.; Sayood, Khalid

    1992-01-01

    In a standard image coding scenario, pixel-to-pixel correlation nearly always exists in the data, especially if the image is a natural scene. This correlation is what allows predictive coding schemes (e.g., DPCM) to perform efficient compression. In a color-mapped image, the values stored in the pixel array are no longer directly related to the pixel intensity. Two color indices which are numerically adjacent (close) may point to two very different colors. The correlation still exists, but only via the colormap. This fact can be exploited by sorting the color map to reintroduce the structure. The sorting of colormaps is studied and it is shown how the resulting structure can be used in both lossless and lossy compression of images.
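
    A minimal sketch of the sorting idea: reorder the palette so that numerically close indices point to perceptually similar colors (here, simply by luminance; the paper studies orderings in more depth), remap the index array, and hand the remapped indices to a predictive coder.

```python
import numpy as np

def sort_palette_by_luminance(palette_rgb, indexed_image):
    # Reorder the color map by luminance and remap the pixel indices so that
    # numerically adjacent indices again point to similar colors, restoring
    # the spatial correlation that predictive (e.g. DPCM) coders exploit.
    luminance = palette_rgb.astype(np.float64) @ np.array([0.299, 0.587, 0.114])
    order = np.argsort(luminance)            # old index of the entry at each new position
    inverse = np.empty_like(order)
    inverse[order] = np.arange(len(order))   # old index -> new index
    return palette_rgb[order], inverse[indexed_image]
```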

  19. Embedding color watermarks in color images based on Schur decomposition

    NASA Astrophysics Data System (ADS)

    Su, Qingtang; Niu, Yugang; Liu, Xianxi; Zhu, Yu

    2012-04-01

    In this paper, a blind dual color image watermarking scheme based on Schur decomposition is introduced. This is the first time Schur decomposition has been used to embed a color image watermark in a color host image, which is different from using a binary image as the watermark. By analyzing the 4 × 4 unitary matrix U obtained via Schur decomposition, we find that there is a strong correlation between the second row, first column element and the third row, first column element. This property can be exploited for embedding and extracting the watermark in a blind manner. Since Schur decomposition is an intermediate step in SVD decomposition, the proposed method requires fewer computations. Experimental results show that the proposed scheme is robust against most common attacks including JPEG lossy compression, JPEG 2000 compression, low-pass filtering, cropping, noise addition, blurring, rotation, scaling and sharpening. Moreover, the proposed algorithm outperforms the closely related SVD-based algorithm and the spatial-domain algorithm.

  20. Color standardization in whole slide imaging using a color calibration slide

    PubMed Central

    Bautista, Pinky A.; Hashimoto, Noriaki; Yagi, Yukako

    2014-01-01

    Background: Color consistency in histology images is still an issue in digital pathology. Different imaging systems reproduced the colors of a histological slide differently. Materials and Methods: Color correction was implemented using the color information of the nine color patches of a color calibration slide. The inherent spectral colors of these patches along with their scanned colors were used to derive a color correction matrix whose coefficients were used to convert the pixels’ colors to their target colors. Results: There was a significant reduction in the CIELAB color difference, between images of the same H & E histological slide produced by two different whole slide scanners by 3.42 units, P < 0.001 at 95% confidence level. Conclusion: Color variations in histological images brought about by whole slide scanning can be effectively normalized with the use of the color calibration slide. PMID:24672739
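
    The correction step can be sketched as fitting a 3×3 matrix that maps the scanner's mean patch colors to their target colors and applying it to every pixel. Plain least squares is used here for illustration; the paper derives the matrix from the patches' spectral data, which may differ in detail.

```python
import numpy as np

def fit_color_correction_matrix(scanned_patches, target_patches):
    # scanned_patches, target_patches: (N, 3) mean RGB values of the N
    # calibration patches (N = 9 for the slide described). Least-squares fit
    # of a 3x3 matrix M such that corrected = pixel @ M.
    M, _, _, _ = np.linalg.lstsq(scanned_patches.astype(np.float64),
                                 target_patches.astype(np.float64), rcond=None)
    return M

def apply_color_correction(image_rgb, M):
    # Apply the matrix to every pixel and clip back to the displayable range.
    h, w, _ = image_rgb.shape
    flat = image_rgb.reshape(-1, 3).astype(np.float64) @ M
    return np.clip(flat, 0, 255).reshape(h, w, 3).astype(np.uint8)
```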

  1. How Phoenix Creates Color Images (Animation)

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This simple animation shows how a color image is made from images taken by Phoenix.

    The Surface Stereo Imager captures the same scene with three different filters. The images are sent to Earth in black and white and the color is added by mission scientists.

    By contrast, consumer digital cameras and cell phones have filters built in and do all of the color processing within the camera itself.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  2. Beyond Color Difference: Residual Interpolation for Color Image Demosaicking.

    PubMed

    Kiku, Daisuke; Monno, Yusuke; Tanaka, Masayuki; Okutomi, Masatoshi

    2016-03-01

    In this paper, we propose residual interpolation (RI) as an alternative to color difference interpolation, which is a widely accepted technique for color image demosaicking. Our proposed RI performs the interpolation in a residual domain, where the residuals are differences between observed and tentatively estimated pixel values. Our hypothesis for the RI is that if image interpolation is performed in a domain with a smaller Laplacian energy, its accuracy is improved. Based on the hypothesis, we estimate the tentative pixel values to minimize the Laplacian energy of the residuals. We incorporate the RI into the gradient-based threshold free algorithm, which is one of the state-of-the-art Bayer demosaicking algorithms. Experimental results demonstrate that our proposed demosaicking algorithm using the RI surpasses the state-of-the-art algorithms for the Kodak, the IMAX, and the beyond Kodak data sets. PMID:26780794

  3. Color image attribute and quality measurements

    NASA Astrophysics Data System (ADS)

    Gao, Chen; Panetta, Karen; Agaian, Sos

    2014-05-01

    Color image quality measures have been used for many computer vision tasks. In practical applications, no-reference (NR) measures are desirable because reference images are not always accessible. However, only limited success has been achieved. Most existing NR quality assessments require that the type of image distortion be known a priori. In this paper, three NR color image attributes: colorfulness, sharpness and contrast are quantified by new metrics. Using these metrics, a new Color Quality Measure (CQM), which is based on a linear combination of these three color image attributes, is presented. We evaluated the performance of several state-of-the-art no-reference measures for comparison purposes. Experimental results demonstrate that the CQM correlates well with evaluations obtained from human observers and that it operates in real time. The results also show that the presented CQM outperforms previous works with respect to ranking image quality among images containing the same or different contents. Finally, the performance of CQM is independent of distortion type, which is demonstrated in the experimental results.
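
    The three attributes can each be approximated with standard no-reference proxies, sketched below: an opponent-channel colorfulness statistic, the variance of the Laplacian of the luminance for sharpness, and RMS luminance contrast. The combination weights and the paper's exact attribute definitions are not given in the abstract, so the ones below are placeholders.

```python
import numpy as np
from scipy.ndimage import laplace

_LUMA = np.array([0.299, 0.587, 0.114])

def colorfulness(image_rgb):
    # Opponent-channel statistic (Hasler & Suesstrunk style).
    img = image_rgb.astype(np.float64)
    rg = img[..., 0] - img[..., 1]
    yb = 0.5 * (img[..., 0] + img[..., 1]) - img[..., 2]
    return np.hypot(rg.std(), yb.std()) + 0.3 * np.hypot(rg.mean(), yb.mean())

def sharpness(image_rgb):
    # Variance of the Laplacian of the luminance: a common no-reference proxy.
    return laplace(image_rgb.astype(np.float64) @ _LUMA).var()

def contrast(image_rgb):
    # RMS contrast of the luminance.
    return (image_rgb.astype(np.float64) @ _LUMA).std()

def cqm(image_rgb, weights=(1.0, 1.0, 1.0)):
    # Linear combination of the three attributes; the weights are placeholders,
    # not the paper's values.
    w1, w2, w3 = weights
    return w1 * colorfulness(image_rgb) + w2 * sharpness(image_rgb) + w3 * contrast(image_rgb)
```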

  4. Color image fusion for concealed weapon detection

    NASA Astrophysics Data System (ADS)

    Toet, Alexander

    2003-09-01

    Recent advances in passive and active imaging sensor technology offer the potential to detect weapons that are concealed underneath a person's clothing or carried along in bags. Although the concealed weapons can sometimes easily be detected, it can be difficult to perceive their context, due to the non-literal nature of these images. Especially for dynamic crowd surveillance purposes it may be impossible to rapidly assess with certainty which individual in the crowd is the one carrying the observed weapon. Sensor fusion is an enabling technology that may be used to solve this problem. Through fusion the signal of the sensor that depicts the weapon can be displayed in the context provided by a sensor of a different modality. We propose an image fusion scheme in which non-literal imagery can be fused with standard color images such that the result clearly displays the observed weapons in the context of the original color image. The procedure is such that the relevant contrast details from the non-literal image are transferred to the color image without altering the original color distribution of this image. The result is a natural looking color image that fluently combines all details from both input sources. When an observer who performs a dynamic crowd surveillance task detects a weapon in the scene, he will also be able to quickly determine which person in the crowd is actually carrying the observed weapon (e.g. "the man with the red T-shirt and blue jeans"). The method is illustrated by the fusion of thermal 8-12 μm imagery with standard RGB color images.

  5. Color structured light imaging of skin

    NASA Astrophysics Data System (ADS)

    Yang, Bin; Lesicko, John; Moy, Austin; Reichenberg, Jason; Sacks, Michael; Tunnell, James W.

    2016-05-01

    We illustrate wide-field imaging of skin using a structured light (SL) approach that highlights the contrast from superficial tissue scattering. Setting the spatial frequency of the SL in a regime that limits the penetration depth effectively gates the image for photons that originate from the skin surface. Further, rendering the SL images in a color format provides an intuitive format for viewing skin pathologies. We demonstrate this approach in skin pathologies using a custom-built handheld SL imaging system.

  6. High capacity image barcodes using color separability

    NASA Astrophysics Data System (ADS)

    Bulan, Orhan; Oztan, Basak; Sharma, Gaurav

    2011-01-01

    Two-dimensional barcodes are widely used for encoding data in printed documents. In a number of applications, the visual appearance of the barcode constitutes a fundamental restriction. In this paper, we propose high capacity color image barcodes that encode data in an image while preserving its basic appearance. Our method aims at high embedding rates and sacrifices image fidelity in favor of embedding robustness in regions where these two goals conflict with each other. The method operates by utilizing cyan, magenta, and yellow printing channels with elongated dots whose orientations are modulated in order to encode the data. At the receiver, by using the complementary sensor channels to estimate the colorant channels, data is extracted in each individual colorant channel. In order to recover from errors introduced in the channel, error correction coding is employed. Our simulation and experimental results indicate that the proposed method can achieve high encoding rates while preserving the appearance of the base image.

  7. Paper roughness and the color gamut of color laser images

    NASA Astrophysics Data System (ADS)

    Arney, J. S.; Spampata, Michelle; Farnand, Susan; Oswald, Tom; Chauvin, Jim

    2007-01-01

    Common experience indicates the quality of a printed image depends on the choice of the paper used in the printing process. In the current report, we have used a recently developed device called a micro-goniophotometer to examine toner on a variety of substrates fused to varying degrees. The results indicate that the relationship between the printed color gamut and the topography of the substrate paper is a simple one for a color electrophotographic process. If the toner is fused completely to an equilibrium state with the substrate paper, then the toner conforms to the overall topographic features of the substrate. For rougher papers, the steeper topographic features are smoothed out by the toner. The maximum achievable color gamut is limited by the topographic smoothness of the resulting fused surface. Of course, achieving a fully fused surface at a competitive printing rate with a minimum of power consumption is not always feasible. However, the only significant factor found to limit the maximum state of fusing and the ultimate achievable color gamut is the smoothness of the paper.

  8. The synthesis and analysis of color images

    NASA Technical Reports Server (NTRS)

    Wandell, Brian A.

    1987-01-01

    A method is described for performing the synthesis and analysis of digital color images. The method is based on two principles. First, image data are represented with respect to the separate physical factors, surface reflectance and the spectral power distribution of the ambient light, that give rise to the perceived color of an object. Second, the encoding is made efficient by using a basis expansion for the surface spectral reflectance and spectral power distribution of the ambient light that takes advantage of the high degree of correlation across the visible wavelengths normally found in such functions. Within this framework, the same basic methods can be used to synthesize image data for color display monitors and printed materials, and to analyze image data into estimates of the spectral power distribution and surface spectral reflectances. The method can be applied to a variety of tasks. Examples of applications include the color balancing of color images, and the identification of material surface spectral reflectance when the lighting cannot be completely controlled.
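
    An illustrative sketch of the low-dimensional basis idea, using PCA (via the SVD) over synthetic smooth spectra; the data and the choice of three basis functions are assumptions, not the paper's procedure.

    import numpy as np

    rng = np.random.default_rng(0)
    wavelengths = np.linspace(400, 700, 31)                 # nm, visible range
    # Synthetic smooth "reflectance" spectra standing in for measured surfaces.
    spectra = np.clip(0.5 + 0.3 * np.sin(wavelengths[None, :] / 50.0
                                         + rng.uniform(0, 6, (100, 1))), 0, 1)

    mean = spectra.mean(axis=0)
    _, _, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
    basis = Vt[:3]                                          # three principal spectra
    coeffs = (spectra - mean) @ basis.T                     # three numbers per surface
    reconstruction = mean + coeffs @ basis
    print("max reconstruction error:", np.abs(reconstruction - spectra).max())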

  9. The synthesis and analysis of color images

    NASA Technical Reports Server (NTRS)

    Wandell, B. A.

    1985-01-01

    A method is described for performing the synthesis and analysis of digital color images. The method is based on two principles. First, image data are represented with respect to the separate physical factors, surface reflectance and the spectral power distribution of the ambient light, that give rise to the perceived color of an object. Second, the encoding is made efficient by using a basis expansion for the surface spectral reflectance and spectral power distribution of the ambient light that takes advantage of the high degree of correlation across the visible wavelengths normally found in such functions. Within this framework, the same basic methods can be used to synthesize image data for color display monitors and printed materials, and to analyze image data into estimates of the spectral power distribution and surface spectral reflectances. The method can be applied to a variety of tasks. Examples of applications include the color balancing of color images, and the identification of material surface spectral reflectance when the lighting cannot be completely controlled.

  10. Color constancy and the natural image

    NASA Technical Reports Server (NTRS)

    Wandall, Brian A.

    1989-01-01

    Color vision is useful only if it is possible to identify an object's color across many viewing contexts. Here, consideration is given to recent results on how to estimate the surface reflectance function of an object from image data, despite (1) uncertainty in the spectral power distribution of the ambient lighting, and (2) uncertainty about what other surfaces will be in the field of view.

  11. Sparse representation for color image restoration.

    PubMed

    Mairal, Julien; Elad, Michael; Sapiro, Guillermo

    2008-01-01

    Sparse representations of signals have drawn considerable interest in recent years. The assumption that natural signals, such as images, admit a sparse decomposition over a redundant dictionary leads to efficient algorithms for handling such sources of data. In particular, the design of well adapted dictionaries for images has been a major challenge. The K-SVD has been recently proposed for this task and shown to perform very well for various grayscale image processing tasks. In this paper, we address the problem of learning dictionaries for color images and extend the K-SVD-based grayscale image denoising algorithm that appears in prior work. This work puts forward ways for handling nonhomogeneous noise and missing information, paving the way to state-of-the-art results in applications such as color image denoising, demosaicing, and inpainting, as demonstrated in this paper. PMID:18229804
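
    A minimal sparse-coding sketch (orthogonal matching pursuit over a random unit-norm dictionary), illustrating the kind of sparse decomposition that K-SVD-based methods build on; it is not the authors' color-adapted algorithm, and the dictionary and patch are synthetic.

    import numpy as np

    def omp(D, x, n_nonzero):
        # Orthogonal matching pursuit: approximate x with a few columns (atoms) of D.
        residual = x.copy()
        support = []
        coeffs = np.zeros(D.shape[1])
        for _ in range(n_nonzero):
            idx = int(np.argmax(np.abs(D.T @ residual)))    # most correlated atom
            support.append(idx)
            sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
            residual = x - D[:, support] @ sol
        coeffs[support] = sol
        return coeffs

    rng = np.random.default_rng(1)
    D = rng.normal(size=(64, 256))
    D /= np.linalg.norm(D, axis=0)                          # unit-norm atoms (e.g., 8x8 patches)
    patch = D[:, [3, 50, 200]] @ np.array([1.0, -0.5, 2.0]) + 0.01 * rng.normal(size=64)
    a = omp(D, patch, n_nonzero=3)
    print("selected atoms:", np.nonzero(a)[0])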

  12. Color Image Magnification: Geometrical Pattern Classification Approach

    NASA Astrophysics Data System (ADS)

    Yong, Tien Fui; Choo, Wou Onn; Meian Kok, Hui

    In an era where technology keeps advancing, it is vital that high-resolution images are available to produce high-quality displayed images and fine-quality prints. The problem is that it is quite impossible to produce high-resolution images with acceptable clarity even with the latest digital cameras. Therefore, there is a need to enlarge the original images using an effective and efficient algorithm. The main contribution of this paper is to produce an enlarged color image with high visual quality, up to four times the original size of a 100x100-pixel image. In the classification phase, the basic idea is to separate the interpolation region in the form of a geometrical shape. Then, in the intensity determination phase, the interpolator assigns a proper color intensity value to the undefined pixel inside the interpolation region. This paper discusses the problem statement, literature review, research methodology, research outcome, initial results, and finally, the conclusion.

  13. Color gradient background-oriented schlieren imaging

    NASA Astrophysics Data System (ADS)

    Mier, Frank Austin; Hargather, Michael J.

    2016-06-01

    Background-oriented schlieren is a method of visualizing refractive disturbances by comparing digital images with and without a refractive disturbance distorting a background pattern. Traditionally, backgrounds consist of random distributions of high-contrast color transitions or speckle patterns. To image a refractive disturbance, a digital image correlation algorithm is used to identify the location and magnitude of apparent pixel shifts in the background pattern between the two images. Here, a novel method of using color gradient backgrounds is explored as an alternative that eliminates the need to perform a complex image correlation between the digital images. A simple image subtraction can be used instead to identify the location, magnitude, and direction of the image distortions. Gradient backgrounds are demonstrated to provide quantitative data only limited by the camera's pixel resolution, whereas speckle backgrounds limit resolution to the size of the random pattern features and image correlation window size. Quantitative measurement of density in a thermal boundary layer is presented. Two-dimensional gradient backgrounds using multiple colors are demonstrated to allow measurement of two-dimensional refractions. A computer screen is used as the background, which allows for rapid modification of the gradient to tune sensitivity for a particular application.
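
    A one-dimensional toy illustration of why a gradient background permits subtraction instead of correlation: with a linear intensity ramp, an apparent shift of d pixels changes the recorded intensity by slope × d, so dividing the image difference by the slope recovers the shift. The ramp, slope, and shift values below are invented for the example.

    import numpy as np

    width = 640
    slope = 1.0 / width                       # background intensity ramps from 0 to 1
    x = np.arange(width)
    reference = slope * x                     # undisturbed background image (1-D row)
    shift = 2.3                               # apparent displacement (pixels) from refraction
    distorted = slope * (x - shift)           # background seen through the disturbance
    estimated_shift = (reference - distorted) / slope
    print("recovered shift:", estimated_shift.mean())   # ~2.3 px everywhere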

  14. Matching image color from different cameras

    NASA Astrophysics Data System (ADS)

    Fairchild, Mark D.; Wyble, David R.; Johnson, Garrett M.

    2008-01-01

    Can images from professional digital SLR cameras be made equivalent in color using simple colorimetric characterization? Two cameras were characterized, these characterizations were implemented on a variety of images, and the results were evaluated both colorimetrically and psychophysically. A Nikon D2x and a Canon 5D were used. The colorimetric analyses indicated that accurate reproductions were obtained. The median CIELAB color differences between the measured ColorChecker SG and the reproduced image were 4.0 and 6.1 for the Canon (chart and spectral respectively) and 5.9 and 6.9 for the Nikon. The median differences between cameras were 2.8 and 3.4 for the chart and spectral characterizations, near the expected threshold for reliable image difference perception. Eight scenes were evaluated psychophysically in three forced-choice experiments in which a reference image from one of the cameras was shown to observers in comparison with a pair of images, one from each camera. The three experiments were (1) a comparison of the two cameras with the chart-based characterizations, (2) a comparison with the spectral characterizations, and (3) a comparison of chart vs. spectral characterization within and across cameras. The results for the three experiments are 64%, 64%, and 55% correct respectively. Careful and simple colorimetric characterization of digital SLR cameras can result in visually equivalent color reproduction.
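
    A hedged sketch of the kind of simple colorimetric characterization referred to above: a 3x3 matrix fitted by least squares over chart patches. The matrix, patch values, and noise level are synthetic stand-ins, not data from this study.

    import numpy as np

    rng = np.random.default_rng(2)
    true_M = np.array([[0.41, 0.36, 0.18],    # illustrative device-to-XYZ matrix
                       [0.21, 0.72, 0.07],
                       [0.02, 0.12, 0.95]])
    camera_rgb = rng.uniform(0, 1, size=(140, 3))          # e.g., 140 chart patches
    measured_xyz = camera_rgb @ true_M.T + 0.005 * rng.normal(size=(140, 3))

    M, *_ = np.linalg.lstsq(camera_rgb, measured_xyz, rcond=None)   # solves RGB @ M ≈ XYZ
    rms = np.sqrt(np.mean((camera_rgb @ M - measured_xyz) ** 2))
    print("RMS XYZ residual of the fitted characterization:", rms)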

  15. Color night vision method based on the correlation between natural color and dual band night image

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Bai, Lian-fa; Zhang, Chuang; Chen, Qian; Gu, Guo-hua

    2009-07-01

    Color night vision technology can effectively improve detection and identification probability. Current color night vision methods based on gray-scale modulation fusion, spectral-domain fusion, and special-component fusion, as well as the well-known NRL and TNO methods, introduce serious color distortion, and observers become visually fatigued after long periods of observation. Alexander Toet of TNO Human Factors presented a method to give fused multiband night imagery a natural daytime color appearance, but it requires a true-color image of the scene to be observed. In this paper we put forward a color night vision method based on the correlation between a natural color image and a dual-band night image. Color display is attained through dual-band low-light-level (LLL) images and their fusion image. An actual color image of a similar scene is needed to obtain the color night vision image. The actual color image is decomposed into three gray-scale images corresponding to the R, G, and B channels, and the short-wave LLL image, long-wave LLL image, and their fusion image are compared with them through a gray-scale spatial correlation method; the color space mapping scheme is confirmed by this correlation. The gray-scale LLL images and their fusion image are adjusted through variation of the HSI color space coefficients, and a coefficient matrix is built. The color display coefficient matrix of the LLL night vision system is obtained by multiplying the above coefficient matrix and the RGB color space mapping matrix. Emulation experiments on general-scene dual-band color night vision indicate that the color rendering is satisfactory. The method was tested on a dual-channel, dual-spectrum LLL color night vision experimental apparatus based on the Texas Instruments digital video processing device DM642.

  16. Edge and color preserving single image superresolution

    NASA Astrophysics Data System (ADS)

    Tang, Songze; Xiao, Liang; Liu, Pengfei; Zhang, Jun; Huang, Lili

    2014-05-01

    Most existing superresolution (SR) techniques focus primarily on improving the quality in the luminance component of SR images, while paying less attention to the chrominance component. We present an edge and color preserving image SR approach. First, for the luminance channel, a heavy-tailed gradient distribution of natural images is investigated as an image prior. Then, an efficient optimization algorithm is developed to recover the latent high-resolution (HR) luminance component. Second, for the chrominance channels, we propose a two-stage framework for luminance-guided chrominance SR. In the first stage, since most of the shape and structural information is contained in the luminance channel, a simple Markov random field formulation is introduced to search the optimal direction for color local interpolation guided by HR luminance components. To further improve the quality of the chrominance channels, in the second stage, a nonlocal auto regression model is utilized to refine the initial HR chrominance. Finally, we combine the SR reconstructed luminance components with the generated HR chrominance maps to get the final SR color image. Systematic experimental results demonstrated that our method outperforms some state-of-the-art methods in terms of the peak signal-to-noise ratio, structural similarity, feature similarity, and the mean color errors.

  17. Textured surface identification in noisy color images

    NASA Astrophysics Data System (ADS)

    Celenk, Mehmet

    1996-06-01

    Automatic identification of textured surfaces is essential in many imaging applications such as image data compression and scene recognition. In these applications, a vision system is required to detect and identify irregular textures in the noisy color images. This work proposes a method for texture field characterization based on the local textural features. We first divide a given color image into n × n local windows and extract textural features in each window independently. In this step, the size of a window should be small enough so that each window can include only two texture fields. Separation of texture areas in a local window is first carried out by the Otsu or Kullback threshold selection technique on three color components separately. The 3-D class separation is then performed using the Fisher discriminant. The result of local texture classification is combined by the K-means clustering algorithm. The texture fields detected in a window are characterized by their mean vectors and an element-to-set membership relation. We have experimented with the local feature extraction part of the method using a color image of irregular textures. Results show that the method is effective for capturing the local textural features.

  18. Color gradient background oriented schlieren imaging

    NASA Astrophysics Data System (ADS)

    Mier, Frank Austin; Hargather, Michael

    2015-11-01

    Background oriented schlieren (BOS) imaging is a method of visualizing refractive disturbances through the comparison of digital images. By comparing images with and without a refractive disturbance visualizations can be achieved via a range of image processing methods. Traditionally, backgrounds consist of random distributions of high contrast speckle patterns. To image a refractive disturbance, a digital image correlation algorithm is used to identify the location and magnitude of apparent pixel shifts in the background pattern. Here a novel method of using color gradient backgrounds is explored as an alternative. The gradient background eliminates the need to perform an image correlation between the two digital images, as simple image subtraction can be used to identify the location, magnitude, and direction of the image distortions. This allows for quicker processing. Two-dimensional gradient backgrounds using multiple colors are shown. The gradient backgrounds are demonstrated to provide quantitative data limited only by the camera's pixel resolution, whereas speckle backgrounds limit resolution to the size of the random pattern features and image correlation window size. Additional results include the use of a computer screen as a background.

  19. Calibration Image of Earth by Mars Color Imager

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Three days after the Mars Reconnaissance Orbiter's Aug. 12, 2005, launch, the NASA spacecraft was pointed toward Earth and the Mars Color Imager camera was powered up to acquire a suite of color and ultraviolet images of Earth and the Moon. When it gets to Mars, the Mars Color Imager's main objective will be to obtain daily global color and ultraviolet images of the planet to observe martian meteorology by documenting the occurrence of dust storms, clouds, and ozone. This camera will also observe how the martian surface changes over time, including changes in frost patterns and surface brightness caused by dust storms and dust devils.

    The purpose of acquiring an image of Earth and the Moon just three days after launch was to help the Mars Color Imager science team obtain a measure, in space, of the instrument's sensitivity, as well as to check that no contamination occurred on the camera during launch. Prior to launch, the team determined that, three days out from Earth, the planet would only be about 4.77 pixels across, and the Moon would be less than one pixel in size, as seen from the Mars Color Imager's wide-angle perspective. If the team waited any longer than three days to test the camera's performance in space, Earth would be too small to obtain meaningful results.

    The images were acquired by turning Mars Reconnaissance Orbiter toward Earth, then slewing the spacecraft so that the Earth and Moon would pass before each of the five color and two ultraviolet filters of the Mars Color Imager. The distance to Earth was about 1,170,000 kilometers (about 727,000 miles).

    This image shows a color composite view of Mars Color Imager's image of Earth. As expected, it covers only five pixels. This color view has been enlarged five times. The Sun was illuminating our planet from the left, thus only one quarter of Earth is seen from this perspective. North America was in daylight and facing toward the camera at the time the picture was taken; the data

  20. Color Histogram Diffusion for Image Enhancement

    NASA Technical Reports Server (NTRS)

    Kim, Taemin

    2011-01-01

    Various color histogram equalization (CHE) methods have been proposed to extend grayscale histogram equalization (GHE) for color images. In this paper a new method called histogram diffusion that extends the GHE method to arbitrary dimensions is proposed. Ranges in a histogram are specified as overlapping bars of uniform heights and variable widths which are proportional to their frequencies. This diagram is called the vistogram. As an alternative approach to GHE, the squared error of the vistogram from the uniform distribution is minimized. Each bar in the vistogram is approximated by a Gaussian function. Gaussian particles in the vistogram diffuse as a nonlinear autonomous system of ordinary differential equations. CHE results of color images showed that the approach is effective.

  1. Improved colorization for night vision system based on image splitting

    NASA Astrophysics Data System (ADS)

    Ali, E.; Kozaitis, S. P.

    2015-03-01

    The success of a color night navigation system often depends on the accuracy of the colors in the resulting image. Small regions can incorrectly adopt the color of large regions simply because of the relative sizes of the regions. We present a method to improve the color accuracy of a night navigation system by splitting a fused image into two distinct sections, generally road and sky regions, before colorization and processing them separately to obtain improved color accuracy in each region. Using this approach, small regions were colored correctly when compared with the result obtained without separating regions.

  2. Image Transform Based on the Distribution of Representative Colors for Color Deficient

    NASA Astrophysics Data System (ADS)

    Ohata, Fukashi; Kudo, Hiroaki; Matsumoto, Tetsuya; Takeuchi, Yoshinori; Ohnishi, Noboru

    This paper proposes a method to convert digital images that contain sets of colors which are difficult to distinguish into images with high visibility. We set up four criteria: the conversion must be performed automatically by a computer; it must retain continuity in color space; it must not reduce visibility for people with normal color vision; and it must not reduce the visibility of images that do not originally contain hard-to-distinguish color sets. We conducted a psychological experiment and found that the visibility of the converted images improved for 60% of the 40 images, and we confirmed that the main criterion, continuity in color space, was maintained.

  3. Color Imaging management in film processing

    NASA Astrophysics Data System (ADS)

    Tremeau, Alain; Konik, Hubert; Colantoni, Philippe

    2003-12-01

    The latest research projects in the LIGIV laboratory concern the capture, processing, archiving, and display of color images, considering the trichromatic nature of the human visual system (HVS). Among these projects, one addresses digital cinematographic film sequences of high resolution and dynamic range. This project aims to optimize the use of content for post-production operators and for the end user. The studies presented in this paper address the use of metadata to optimize the consumption of video content on a device of the user's choice, independent of the nature of the equipment that captured the content. Optimizing consumption includes enhancing the quality of image reconstruction on a display. Another part of this project addresses the content-based adaptation of image display, with the main focus on Region of Interest (ROI) operations based on the ROI concepts of MPEG-7. The aim of this second part is to characterize and ensure the conditions of display even if the display device or display medium changes. This requires, first, the definition of a reference color space and of bidirectional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and the ambient illumination under which they are viewed. To match the displayed image to the intended appearance, all kinds of production metadata (camera specification, camera color primaries, lighting conditions) should be associated with the film material; metadata and content together build rich content. The author is assumed to specify conditions as known from the digital graphic arts. To control image pre-processing and post-processing, these specifications should be contained in the film's metadata. The specifications are related to ICC profiles but additionally need to consider mesopic viewing conditions.

  4. Scannerless loss modulated flash color range imaging

    DOEpatents

    Sandusky, John V.; Pitts, Todd Alan

    2008-09-02

    Scannerless loss modulated flash color range imaging methods and apparatus are disclosed for producing three dimensional (3D) images of a target within a scene. Apparatus and methods according to the present invention comprise a light source providing at least three wavelengths (passbands) of illumination that are each loss modulated, phase delayed and simultaneously directed to illuminate the target. Phase delayed light backscattered from the target is spectrally filtered, demodulated and imaged by a planar detector array. Images of the intensity distributions for the selected wavelengths are obtained under modulated and unmodulated (dc) illumination of the target, and the information contained in the images combined to produce a 3D image of the target.

  5. Scannerless loss modulated flash color range imaging

    DOEpatents

    Sandusky, John V.; Pitts, Todd Alan

    2009-02-24

    Scannerless loss modulated flash color range imaging methods and apparatus are disclosed for producing three dimensional (3D) images of a target within a scene. Apparatus and methods according to the present invention comprise a light source providing at least three wavelengths (passbands) of illumination that are each loss modulated, phase delayed and simultaneously directed to illuminate the target. Phase delayed light backscattered from the target is spectrally filtered, demodulated and imaged by a planar detector array. Images of the intensity distributions for the selected wavelengths are obtained under modulated and unmodulated (dc) illumination of the target, and the information contained in the images combined to produce a 3D image of the target.

  6. Stereo matching image processing by synthesized color and the characteristic area by the synthesized color

    NASA Astrophysics Data System (ADS)

    Akiyama, Akira; Mutoh, Eiichiro; Kumagai, Hideo

    2014-09-01

    We have developed stereo matching image processing based on synthesized color and the corresponding areas defined by that synthesized color, for ranging objects and for image recognition. Images from a pair of stereo imagers typically disagree with each other because of size changes, displaced positions, appearance changes, and deformation of characteristic areas. We construct the synthesized color and the corresponding areas of identical synthesized color to make the stereo matching distinct, in three steps. The first step makes a binary edge image by differentiating the focused image from each imager and verifying that the differentiated image has a normal frequency-density distribution, which is used to find the threshold level for binarization; in this study we used the Daubechies wavelet transform for the differentiation. The second step derives the synthesized color by averaging color brightness between binary edge points, alternately in the horizontal and vertical directions; the averaging is repeated until the fluctuation of the averaged color becomes negligible with respect to the 256 brightness levels. The third step extracts areas of the same synthesized color by collecting pixels with the same synthesized color and grouping them by 4-connectivity. The matching areas for stereo matching are determined from these synthesized color areas, and the matching point is the center of gravity of each area. The parallax between a pair of images is then derived easily from the centers of gravity of corresponding synthesized color areas. An experiment with this stereo matching was performed on a toy soccer ball, and it showed that stereo matching by the synthesized color technique is simple and effective.

  7. Vector sparse representation of color image using quaternion matrix analysis.

    PubMed

    Xu, Yi; Yu, Licheng; Xu, Hongteng; Zhang, Hao; Nguyen, Truong

    2015-04-01

    Traditional sparse image models treat color image pixel as a scalar, which represents color channels separately or concatenate color channels as a monochrome image. In this paper, we propose a vector sparse representation model for color images using quaternion matrix analysis. As a new tool for color image representation, its potential applications in several image-processing tasks are presented, including color image reconstruction, denoising, inpainting, and super-resolution. The proposed model represents the color image as a quaternion matrix, where a quaternion-based dictionary learning algorithm is presented using the K-quaternion singular value decomposition (QSVD) (generalized K-means clustering for QSVD) method. It conducts the sparse basis selection in quaternion space, which uniformly transforms the channel images to an orthogonal color space. In this new color space, it is significant that the inherent color structures can be completely preserved during vector reconstruction. Moreover, the proposed sparse model is more efficient comparing with the current sparse models for image restoration tasks due to lower redundancy between the atoms of different color channels. The experimental results demonstrate that the proposed sparse image model avoids the hue bias issue successfully and shows its potential as a general and powerful tool in color image analysis and processing domain. PMID:25643407

  8. The Artist, the Color Copier, and Digital Imaging.

    ERIC Educational Resources Information Center

    Witte, Mary Stieglitz

    The impact that color-copying technology and digital imaging have had on art, photography, and design is explored. Color copiers have provided new opportunities for direct and spontaneous image making and the potential for new transformations in art. The current generation of digital color copiers permits new directions in imaging, but the…

  9. Autonomous color theme extraction from images using saliency

    NASA Astrophysics Data System (ADS)

    Jahanian, Ali; Vishwanathan, S. V. N.; Allebach, Jan P.

    2015-03-01

    Color theme (palette) is a collection of color swatches for representing or describing colors in a visual design or an image. Color palettes have broad applications such as serving as means in automatic/semi-automatic design of visual media, as measures in quantifying aesthetics of visual design, and as metrics in image retrieval, image enhancement, and color semantics. In this paper, we suggest an autonomous mechanism for extracting color palettes from an image. Our method is simple and fast, and it works on the notion of visual saliency. By using visual saliency, we extract the fine colors appearing in the foreground along with the various colors in the background regions of an image. Our method accounts for defining different numbers of colors in the palette as well as presenting the proportion of each color according to its visual conspicuity in a given image. This flexibility supports an interactive color palette which may facilitate the designer's color design task. As an application, we present how our extracted color palettes can be utilized as a color similarity metric to enhance the current color semantic based image retrieval techniques.
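
    A rough sketch of one way to realize saliency-driven palette extraction: weighted k-means over pixels, with weights taken from a saliency map, so salient colors dominate the swatches and their proportions. The clustering routine and the random stand-in image and saliency map are assumptions, not the authors' algorithm.

    import numpy as np

    def weighted_kmeans(pixels, weights, k=5, iters=20, seed=0):
        # Toy weighted k-means: centers are weighted means of their assigned pixels.
        rng = np.random.default_rng(seed)
        centers = pixels[rng.choice(len(pixels), k, replace=False)]
        for _ in range(iters):
            d = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            labels = d.argmin(1)
            for j in range(k):
                m = labels == j
                if m.any():
                    centers[j] = np.average(pixels[m], axis=0, weights=weights[m])
        return centers, labels

    rng = np.random.default_rng(3)
    img = rng.uniform(0, 1, (120, 160, 3))            # stand-in RGB image
    saliency = rng.uniform(0, 1, (120, 160))          # stand-in saliency map
    palette, labels = weighted_kmeans(img.reshape(-1, 3), saliency.reshape(-1), k=5)
    proportions = np.bincount(labels, weights=saliency.reshape(-1), minlength=5)
    print(np.round(palette, 3), proportions / proportions.sum())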

  10. Color image registration based on quaternion Fourier transformation

    NASA Astrophysics Data System (ADS)

    Wang, Qiang; Wang, Zhengzhi

    2012-05-01

    The traditional Fourier Mellin transform is applied to quaternion algebra in order to investigate quaternion Fourier transformation properties useful for color image registration in frequency domain. Combining with the quaternion phase correlation, we propose a method for color image registration based on the quaternion Fourier transform. The registration method, which processes color image in a holistic manner, is convenient to realign color images differing in translation, rotation, and scaling. Experimental results on different types of color images indicate that the proposed method not only obtains high accuracy in similarity transform in the image plane but also is computationally efficient.

  11. Bio-inspired color image enhancement

    NASA Astrophysics Data System (ADS)

    Meylan, Laurence; Susstrunk, Sabine

    2004-06-01

    Capturing and rendering an image that fulfills the observer's expectations is a difficult task. This is due to the fact that the signal reaching the eye is processed by a complex mechanism before forming a percept, whereas a capturing device only retains the physical value of light intensities. It is especially difficult to render complex scenes with highly varying luminances. For example, a picture taken inside a room where objects are visible through the windows will not be rendered correctly by a global technique. Either details in the dim room will be hidden in shadow or the objects viewed through the window will be too bright. The image has to be treated locally to resemble more closely to what the observer remembers. The purpose of this work is to develop a technique for rendering images based on human local adaptation. We take inspiration from a model of color vision called Retinex. This model determines the perceived color given spatial relationships of the captured signals. Retinex has been used as a computational model for image rendering. In this article, we propose a new solution inspired by Retinex that is based on a single filter applied to the luminance channel. All parameters are image-dependent so that the process requires no parameter tuning. That makes the method more flexible than other existing ones. The presented results show that our method suitably enhances high dynamic range images.
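
    An illustrative single-scale Retinex-style operation applied to the luminance channel only, with chromaticity re-applied afterwards; the surround scale and output rescaling are arbitrary choices, and this is not the paper's image-dependent, parameter-free filter.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def retinex_luminance(rgb, sigma=40.0, eps=1e-6):
        lum = rgb.mean(axis=2) + eps                        # simple luminance estimate
        surround = gaussian_filter(lum, sigma) + eps        # local adaptation level
        r = np.log(lum) - np.log(surround)                  # single-scale Retinex response
        r = (r - r.min()) / (r.max() - r.min() + eps)       # stretch to [0, 1] for display
        # Re-apply the original chromaticity so only lightness is modified.
        return np.clip(rgb / lum[..., None] * r[..., None], 0.0, 1.0)

    rng = np.random.default_rng(4)
    out = retinex_luminance(rng.uniform(0, 1, (64, 64, 3)))
    print(out.shape, float(out.min()), float(out.max()))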

  12. TECHNIQUE FOR ENHANCING DIGITAL COLOR IMAGES BY CONTRAST STRETCHING IN MUNSELL COLOR SPACE.

    USGS Publications Warehouse

    Kruse, Fred A.; Raines, Gary L.

    1984-01-01

    The Munsell color system can be used to further enhance the appearance of high-quality digital color-composite images. A color-balanced 'standard' color-composite image is first produced using any desired contrast stretching algorithm. The stretched digital data are then transformed into the cylindrical Munsell color space. An enhanced version of a color-composite image is produced by stretching the saturation parameter over the full digital range and inverting the modified Munsell coordinates to red-blue-green (tristimulus) data space. The resulting image has greater color-saturation contrast than the original image, without hue change. Contrast stretching in Munsell color space reduces the correlation between individual bands or ratios and is similar to decorrelation processing based on principal-components transforms. However, principal components are based on data variance, with less variance being explained by each higher order component.
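
    A rough analog of the described enhancement, using the HSV cylinder as a stand-in for the cylindrical Munsell space: saturation is stretched over its full range while hue (and value) are left alone. The HSV substitution is an assumption for illustration; the USGS method works in Munsell coordinates.

    import numpy as np
    from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

    def stretch_saturation(rgb):
        hsv = rgb_to_hsv(np.clip(rgb, 0, 1))
        s = hsv[..., 1]
        hsv[..., 1] = (s - s.min()) / (s.max() - s.min() + 1e-6)   # full-range saturation
        return hsv_to_rgb(hsv)

    rng = np.random.default_rng(5)
    composite = 0.3 + 0.2 * rng.uniform(size=(128, 128, 3))   # low-saturation color composite
    enhanced = stretch_saturation(composite)
    print("saturation range before/after:",
          np.ptp(rgb_to_hsv(composite)[..., 1]), np.ptp(rgb_to_hsv(enhanced)[..., 1]))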

  13. Extremely simple holographic projection of color images

    NASA Astrophysics Data System (ADS)

    Makowski, Michal; Ducin, Izabela; Kakarenko, Karol; Suszek, Jaroslaw; Kolodziejczyk, Andrzej; Sypek, Maciej

    2012-03-01

    A very simple scheme of holographic projection is presented with some experimental results showing good quality image projection without any imaging lens. This technique can be regarded as an alternative to classic projection methods. It is based on the reconstruction of real images from three phase-iterated Fourier holograms. The illumination is performed with three laser beams of primary colors. A divergent wavefront geometry is used to achieve an increased throw angle of the projection, compared to plane wave illumination. Light fibers are used as light guides in order to keep the setup as simple as possible and to provide point-like sources of high quality divergent wave-fronts at an optimized position against the light modulator. Absorbing spectral filters are implemented to multiplex three holograms on a single phase-only spatial light modulator. Hence color mixing occurs without any time-division methods, which cause rainbow effects and color flicker. The zero diffractive order with divergent illumination is practically invisible and the speckle field is effectively suppressed with phase optimization and time averaging techniques. The main advantages of the proposed concept are: a very simple and highly miniaturizable configuration; lack of any imaging lens; a single LCoS (Liquid Crystal on Silicon) modulator; a strong resistance to imperfections and obstructions of the spatial light modulator like dead pixels, dust, mud, fingerprints etc.; and simple calculations based on the Fast Fourier Transform (FFT), easily processed in real time with a GPU (graphics processing unit).

  14. Implementation of high-resolution time-to-digital converter in 8-bit microcontrollers.

    PubMed

    Bengtsson, Lars E

    2012-04-01

    This paper will demonstrate how a time-to-digital converter (TDC) with sub-nanosecond resolution can be implemented into an 8-bit microcontroller using so called "direct" methods. This means that a TDC is created using only five bidirectional digital input-output-pins of a microcontroller and a few passive components (two resistors, a capacitor, and a diode). We will demonstrate how a TDC for the range 1-10 μs is implemented with 0.17 ns resolution. This work will also show how to linearize the output by combining look-up tables and interpolation. PMID:22559576
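
    A sketch of the linearization idea only (a look-up table combined with piecewise-linear interpolation); the calibration points below are invented stand-ins, and the paper's implementation of course runs on the microcontroller itself rather than in Python.

    import numpy as np

    raw_codes = np.array([0.0, 40.0, 95.0, 160.0, 230.0, 255.0])   # hypothetical TDC readings
    true_times = np.array([1.0, 2.5, 4.0, 6.0, 8.5, 10.0])         # corresponding times in µs
    measurement = 120.0                                            # a new raw reading
    linearized = np.interp(measurement, raw_codes, true_times)     # LUT + linear interpolation
    print("linearized time:", linearized, "µs")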

  15. Implementation of high-resolution time-to-digital converter in 8-bit microcontrollers

    NASA Astrophysics Data System (ADS)

    Bengtsson, Lars E.

    2012-04-01

    This paper will demonstrate how a time-to-digital converter (TDC) with sub-nanosecond resolution can be implemented into an 8-bit microcontroller using so called "direct" methods. This means that a TDC is created using only five bidirectional digital input-output-pins of a microcontroller and a few passive components (two resistors, a capacitor, and a diode). We will demonstrate how a TDC for the range 1-10 μs is implemented with 0.17 ns resolution. This work will also show how to linearize the output by combining look-up tables and interpolation.

  16. Performance of concatenated codes using 8-bit and 10-bit Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Pollara, F.; Cheung, K.-M.

    1989-01-01

    The performance improvement of concatenated coding systems using 10-bit instead of 8-bit Reed-Solomon codes is measured by simulation. Three inner convolutional codes are considered: (7,1/2), (15,1/4), and (15,1/6). It is shown that approximately 0.2 dB can be gained at a bit error rate of 10(-6). The loss due to nonideal interleaving is also evaluated. Performance comparisons at very low bit error rates may be relevant for systems using data compression.

  17. Rectangular pixels for efficient color image sampling

    NASA Astrophysics Data System (ADS)

    Singh, Tripurari; Singh, Mritunjay

    2011-01-01

    We present CFA designs that faithfully capture images with specified luminance and chrominance bandwidths. Previous academic research has mostly been concerned with maximizing PSNR of reconstructed images without regard to chrominance bandwidth and cross-talk. Commercial systems, on the other hand, pay close attention to both these parameters as well as to the visual quality of reconstructed images. They commonly sacrifice resolution by using a sufficiently aggressive OLPF to achieve low cross-talk and artifact free images. In this paper, we present the so called Chrominance Bandwidth Ratio, r, model in an attempt to capture both the chrominance bandwidth and the cross-talk between the various signals. Next, we examine the effect of tuning photosite aspect ratio, a hitherto neglected design parameter, and show the benefit of setting it at a different value than the pixel aspect ratio of the display. We derive panchromatic CFA patterns that provably minimize the photo-site count for all values of r. An interesting outcome is a CFA design that captures full chrominance bandwidth, yet uses fewer photosites than the venerable color-stripe design. Another interesting outcome is a low cost practical CFA design that captures chrominance at half the resolution of luminance using only 4 unique filter colors, that lends itself to efficient linear demosaicking, and yet vastly outperforms the Bayer CFA with identical number of photosites demosaicked with state of the art compute-intensive nonlinear algorithms.

  18. Structure preserving color deconvolution for immunohistochemistry images

    NASA Astrophysics Data System (ADS)

    Chen, Ting; Srinivas, Chukka

    2015-03-01

    Immunohistochemistry (IHC) staining is an important technique for the detection of one or more biomarkers within a single tissue section. In digital pathology applications, the correct unmixing of the tissue image into its individual constituent dyes for each biomarker is a prerequisite for accurate detection and identification of the underlying cellular structures. A popular technique thus far is the color deconvolution method proposed by Ruifrok et al. However, Ruifrok's method independently estimates the individual dye contributions at each pixel, which potentially leads to "holes and cracks" in the cells in the unmixed images. This is clearly inadequate since strong spatial dependencies exist in the tissue images, which contain rich cellular structures. In this paper, we formulate the unmixing algorithm into a least-square framework of image patches, and propose a novel color deconvolution method which explicitly incorporates the spatial smoothness and structure continuity constraint into a neighborhood graph regularizer. An analytical closed-form solution to the cost function is derived for this algorithm for fast implementation. The algorithm is evaluated on a clinical data set containing a number of 3,3-Diaminobenzidine (DAB) and hematoxylin (HTX) stained IHC slides and demonstrates better unmixing results than the existing strategy.
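
    A compact sketch of the per-pixel (Ruifrok-style) deconvolution that this paper improves on: convert RGB to optical density via the Beer-Lambert relation and unmix with the pseudo-inverse of the stain matrix. The HTX and DAB stain vectors below are approximate illustrative values.

    import numpy as np

    def color_deconvolve(rgb, stains):
        # rgb: (H, W, 3) float array in (0, 1]; stains: rows are unit optical-density vectors.
        od = -np.log10(np.clip(rgb, 1e-6, 1.0))                  # Beer-Lambert optical density
        conc = od.reshape(-1, 3) @ np.linalg.pinv(stains)        # per-pixel dye concentrations
        return conc.reshape(rgb.shape[0], rgb.shape[1], stains.shape[0])

    stains = np.array([[0.65, 0.70, 0.29],      # hematoxylin (approximate)
                       [0.27, 0.57, 0.78]])     # DAB (approximate)
    stains = stains / np.linalg.norm(stains, axis=1, keepdims=True)
    rng = np.random.default_rng(6)
    tile = rng.uniform(0.2, 1.0, (32, 32, 3))   # stand-in image tile
    maps = color_deconvolve(tile, stains)
    print(maps.shape)                           # (32, 32, 2): one concentration map per stain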

  19. Invariant quaternion radial harmonic Fourier moments for color image retrieval

    NASA Astrophysics Data System (ADS)

    Xiang-yang, Wang; Wei-yi, Li; Hong-ying, Yang; Pan-pan, Niu; Yong-wei, Li

    2015-03-01

    Moments and moment invariants have become a powerful tool in image processing owing to their image description capability and invariance properties. However, conventional methods were mainly introduced for binary or gray-scale images, and existing approaches for color images tend to have poor color image description capability. Based on radial harmonic Fourier moments (RHFMs) and quaternions, we introduce the quaternion radial harmonic Fourier moments (QRHFMs) for representing color images in this paper, which can be seen as a generalization of the RHFMs of gray-level images. It is shown that the QRHFMs can be obtained from the RHFMs of each color channel. We derive and analyze the rotation, scaling, and translation (RST) invariance properties of QRHFMs. We also discuss the problem of color image retrieval using invariant QRHFMs. Experimental results are provided to illustrate the efficiency of the proposed color image representation.

  20. A dendritic lattice neural network for color image segmentation

    NASA Astrophysics Data System (ADS)

    Urcid, Gonzalo; Lara-Rodríguez, Luis David; López-Meléndez, Elizabeth

    2015-09-01

    A two-layer dendritic lattice neural network is proposed to segment color images in the Red-Green-Blue (RGB) color space. The two-layer neural network is a fully interconnected feed-forward net consisting of an input layer that receives color pixel values, an intermediate layer that computes pixel interdistances, and an output layer used to classify colors by hetero-association. The two-layer net is first initialized with a finite small subset of the colors present in the input image. These colors are obtained by means of an automatic clustering procedure such as k-means or fuzzy c-means. In the second stage, the color image is scanned on a pixel-by-pixel basis where each picture element is treated as a vector and fed into the network. For illustration purposes we use public domain color images to show the performance of our proposed image segmentation technique.

  1. RGB calibration for color image analysis in machine vision.

    PubMed

    Chang, Y C; Reid, J F

    1996-01-01

    A color calibration method for correcting the variations in RGB color values caused by vision system components was developed and tested in this study. The calibration scheme concentrated on comprehensively estimating and removing the RGB errors without specifying error sources and their effects. The algorithm for color calibration was based upon the use of a standardized color chart and developed as a preprocessing tool for color image analysis. According to the theory of image formation, RGB errors in color images were categorized into multiplicative and additive errors. Multiplicative and additive errors contained various error sources: gray-level shift; variation in amplification and quantization in the camera electronics or frame grabber; the change of color temperature of the illumination with time; and related factors. The RGB errors of arbitrary colors in an image were estimated from the RGB errors of standard colors contained in the image. The color calibration method also contained an algorithm for correcting the nonuniformity of illumination in the scene. The algorithm was tested under two different conditions: uniform and nonuniform illumination in the scene. The RGB errors of arbitrary colors in test images were almost completely removed after color calibration. The maximum residual error was seven gray levels under uniform illumination and 12 gray levels under nonuniform illumination. Most residual RGB errors were caused by residual nonuniformity of illumination in the images. The test results showed that the developed method was effective in correcting the variations in RGB color values caused by vision system components. PMID:18290059
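
    A minimal sketch of the multiplicative/additive error model described above: fit a gain and offset mapping observed chart values back to their reference values by least squares, then apply the same correction to any image. The chart values and simulated errors are stand-ins, not data from the study.

    import numpy as np

    rng = np.random.default_rng(7)
    reference = rng.uniform(20, 235, size=(24, 3))        # known chart RGB values
    gain, offset = 0.92, 6.0                              # simulated (unknown) system errors
    observed = gain * reference + offset + rng.normal(0, 1.0, reference.shape)

    A = np.stack([observed.ravel(), np.ones(observed.size)], axis=1)
    (inv_gain, bias), *_ = np.linalg.lstsq(A, reference.ravel(), rcond=None)

    corrected = inv_gain * observed + bias                # same correction works for any pixel
    print("max residual after calibration:", np.abs(corrected - reference).max())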

  2. Mosaicking of NEAR MSI Color Image Sequences

    NASA Astrophysics Data System (ADS)

    Digilio, J. G.; Robinson, M. S.

    2004-05-01

    Of the over 160,000 frames of 433 Eros captured by the NEAR-Shoemaker spacecraft, 21,936 frames are components of 226 multi-spectral image sequences. As part of the ongoing NEAR Data Analysis Program, we are mosaicking (and delivering via a web interface) all color sequences in two versions: I/F and photometrically normalized I/F (30° incidence, 0° emission). Multi-spectral sets were acquired with varying bandpasses depending on mission constraints, and all sets include 550-nm, 760-nm, and 950-nm (32% of the sequences are all wavelengths except 700-nm clear filter). Resolutions range from 20 m/pixel down to 3.5 m/pixel. To support color analysis and interpretation we are co-registering the highest resolution black and white images to match each of the color mosaics. Due to Eros's highly irregular shape, the scale of a pixel can vary by almost a factor of 2 within a single frame acquired in the 35-km orbit. Thus, map-projecting requires a pixel-by-pixel correction for local topography [1]. Scattered light problems with the NEAR Multi-Spectral Imager (MSI) required the acquisition of ride along zero exposure calibration frames. Without correction, scattered light artifacts within the MSI were larger than the subtle color differences found on Eros [see details in 2]. Successful correction requires that the same region of the surface (within a few pixels) be in the field-of-view of the zero-exposure frame as when the normal frame was acquired. Due to engineering constraints the timing of frame acquisition was not always optimal for the scattered light correction. During the co-registration process we are tracking apparent ground motion during a sequence to estimate the efficacy of the correction, and thus integrity of the color information. Currently several web-based search and browse tools allow interested users to locate individual MSI frames from any spot on the asteroid using various search criteria (cps.earth.northwestern.edu). Final color and BW map products

  3. Color enhancement in multispectral image of human skin

    NASA Astrophysics Data System (ADS)

    Mitsui, Masanori; Murakami, Yuri; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki

    2003-07-01

    Multispectral imaging is receiving attention in medical color imaging, as high-fidelity color information can be acquired by the multispectral image capturing. On the other hand, as color enhancement in medical color image is effective for distinguishing lesion from normal part, we apply a new technique for color enhancement using multispectral image to enhance the features contained in a certain spectral band, without changing the average color distribution of original image. In this method, to keep the average color distribution, KL transform is applied to spectral data, and only high-order KL coefficients are amplified in the enhancement. Multispectral images of human skin of bruised arm are captured by 16-band multispectral camera, and the proposed color enhancement is applied. The resultant images are compared with the color images reproduced assuming CIE D65 illuminant (obtained by natural color reproduction technique). As a result, the proposed technique successfully visualizes unclear bruised lesions, which are almost invisible in natural color images. The proposed technique will provide support tool for the diagnosis in dermatology, visual examination in internal medicine, nursing care for preventing bedsore, and so on.
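
    An illustrative sketch of the KL-transform (PCA) enhancement step: amplify only the higher-order coefficients, so the dominant components, and hence the average color distribution, stay nearly unchanged. The synthetic spectra and the choice of which coefficients to amplify are assumptions.

    import numpy as np

    rng = np.random.default_rng(8)
    spectra = rng.uniform(0, 1, (5000, 16))               # pixels x 16 spectral bands
    mean = spectra.mean(axis=0)
    _, _, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
    coeffs = (spectra - mean) @ Vt.T                      # KL (PCA) coefficients per pixel
    gains = np.ones(16)
    gains[3:] = 4.0                                       # boost only high-order coefficients
    enhanced = mean + (coeffs * gains) @ Vt               # back to the spectral domain
    print(enhanced.shape)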

  4. Computer Program Helps Enhance Images

    NASA Technical Reports Server (NTRS)

    Stanfill, Daniel F., IV

    1994-01-01

    Pixel Pusher is a Macintosh application program for viewing and performing minor enhancements on imagery. It works with color images digitized to 8 bits and reads image files in JPL's two primary image formats, VICAR and PDS, as well as in Macintosh PICT format. VICAR (NPO-18076) handles an array of image-processing capabilities used for a variety of applications, including processing of biomedical images, cartography, imaging of Earth resources, and geological exploration. Pixel Pusher also imports color lookup tables in VICAR format for viewing images in pseudocolor (256 colors). Written in Symantec's Think C.

  5. Color contrast enhancement method of infrared polarization fused image

    NASA Astrophysics Data System (ADS)

    Yang, Fan; Xie, Chen

    2015-10-01

    Traditional color fusion methods based on color transfer algorithms suffer from the colors of the target and the background being similar, so an infrared polarization image color fusion method based on color contrast enhancement is proposed. First, the infrared radiation intensity image and the polarization image are color fused, and then color transfer is applied between the color reference image and the initial fused image in the YCbCr color space. Second, the Otsu segmentation method is used to extract the target-area image from the infrared polarization image. Finally, the H, S, and I components of the color fusion image obtained by color transfer are adjusted using the target area in the HSI space to obtain the final fused image. Experimental results show that the fused result obtained by the proposed method is rich in detail and makes the contrast between target and background more pronounced, so that the ability to detect and identify targets can be improved.

  6. Color Sequence of Triton Approach Images

    NASA Technical Reports Server (NTRS)

    1989-01-01

    Triton Voyager 2 approach sequence with latitude-longitude grid superposed. The color image was reconstructed by making a computer composite of three black and white images taken through red, green, and blue filters. Details on Triton's surface unfold dramatically in this sequence of approach images. The South Pole is near the bottom of the images, at the convergence of the lines of longitude. Resolution changes from about 60 km/pixel (37 mi/pixel) in the image at upper left, taken from a distance of 500,000 km (311,000 mi), to about 5 km/pixel (3.1 mi/pixel) for the image at lower right. Global and regional albedo features are visible in all of the images. The albedo features can be tracked in successive images and show that Triton has undergone about 3/4 of a rotation during the 4.3-day interval over which these images were obtained. A southern polar cap of bright pink, yellow, and white materials covers nearly all of the southern hemisphere; these materials consist of nitrogen ice with traces of other substances, including frozen methane and carbon monoxide. Feeble ultraviolet radiation from the sun is thought to act on methane to cause chemical reactions that produce the pinkish and yellowish substances. At the time of the Voyager 2 flyby (August 1989) Triton's southern hemisphere was starting the summer season and the South Pole was canted toward the sun day and night, such that the polar cap was sublimating under the relatively 'hot' summer sun (surface temperature about 38 K, about -391 degrees F). Numerous dark streaks on the southern polar nitrogen-ice cap are thought to consist of dark dust deposited by prevailing winds in Triton's tenuous nitrogen atmosphere. A bluish band, seen in all of the images, nearly circumscribes Triton's equator; this band is thought to consist of fairly fresh nitrogen frost, perhaps deposited in the decade prior to Voyager 2's flyby.

  7. Color Enhancement in Endoscopic Images Using Adaptive Sigmoid Function and Space Variant Color Reproduction.

    PubMed

    Imtiaz, Mohammad S; Wahid, Khan A

    2015-01-01

    Modern endoscopes play an important role in diagnosing various gastrointestinal (GI) tract related diseases. The improved visual quality of endoscopic images can provide better diagnosis. This paper presents an efficient color image enhancement method for endoscopic images. It is achieved in two stages: image enhancement at gray level followed by space variant chrominance mapping color reproduction. Image enhancement is achieved by performing adaptive sigmoid function and uniform distribution of sigmoid pixels. Secondly, a space variant chrominance mapping color reproduction is used to generate new chrominance components. The proposed method is used on low contrast color white light images (WLI) to enhance and highlight the vascular and mucosa structures of the GI tract. The method is also used to colorize grayscale narrow band images (NBI) and video frames. The focus value and color enhancement factor show that the enhancement level in the processed image is greatly increased compared to the original endoscopic image. The overall contrast level of the processed image is higher than the original image. The color similarity test has proved that the proposed method does not add any additional color which is not present in the original image. The algorithm has low complexity with an execution speed faster than other related methods. PMID:26089969
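
    A hedged sketch of sigmoid-based gray-level enhancement, centering the curve on the image mean so the stretch adapts to the content; this shows the general idea only, not the paper's adaptive sigmoid and uniform-distribution steps or its chrominance mapping.

    import numpy as np

    def sigmoid_enhance(gray, gain=8.0):
        center = gray.mean()                                  # adapt the curve to the image
        out = 1.0 / (1.0 + np.exp(-gain * (gray - center)))   # sigmoid contrast stretch
        return (out - out.min()) / (out.max() - out.min() + 1e-6)

    rng = np.random.default_rng(9)
    frame = np.clip(0.45 + 0.05 * rng.normal(size=(64, 64)), 0, 1)   # low-contrast luminance
    print("contrast (std) before/after:", frame.std(), sigmoid_enhance(frame).std())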

  8. Hyperspectral image analysis using artificial color

    NASA Astrophysics Data System (ADS)

    Fu, Jian; Caulfield, H. John; Wu, Dongsheng; Tadesse, Wubishet

    2010-03-01

    By definition, HSC (HyperSpectral Camera) images are much richer in spectral data than, say, a COTS (Commercial-Off-The-Shelf) color camera. But data are not information. If we do the task right, useful information can be derived from the data in HSC images. Nature faced essentially the identical problem. The incident light is so complex spectrally that measuring it with high resolution would provide far more data than animals can handle in real time. Nature's solution was to do irreversible POCS (Projections Onto Convex Sets) to achieve huge reductions in data with minimal reduction in information. Thus we can arrange for our manmade systems to do what nature did - project the HSC image onto two or more broad, overlapping curves. The task we have undertaken in the last few years is to develop this idea that we call Artificial Color. What we report here is the use of the measured HSC image data projected onto two or three convex, overlapping, broad curves in analogy with the sensitivity curves of human cone cells. Testing two quite different HSC images in that manner produced the desired result: good discrimination or segmentation that can be done very simply and hence are likely to be doable in real time with specialized computers. Using POCS on the HSC data to reduce the processing complexity produced excellent discrimination in those two cases. For technical reasons discussed here, the figures of merit for the kind of pattern recognition we use is incommensurate with the figures of merit of conventional pattern recognition. We used some force fitting to make a comparison nevertheless, because it shows what is also obvious qualitatively. In our tasks our method works better.
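
    A minimal sketch of the projection idea: integrate each pixel's spectrum against a few broad, overlapping response curves, in analogy with cone sensitivities. The Gaussian curves, band range, and random cube are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(10)
    wavelengths = np.linspace(400, 1000, 120)              # nm
    cube = rng.uniform(0, 1, (50, 60, 120))                # stand-in hyperspectral cube (H, W, bands)
    centers, width = [450.0, 550.0, 650.0], 80.0
    curves = np.stack([np.exp(-0.5 * ((wavelengths - c) / width) ** 2) for c in centers])
    curves /= curves.sum(axis=1, keepdims=True)            # normalize each broad response curve
    projected = cube @ curves.T                            # (H, W, 3) "artificial color" image
    print(projected.shape)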

  9. Color-invariant three-dimensional feature descriptor for color-shift-model-based image processing

    NASA Astrophysics Data System (ADS)

    Lim, Joohyun; Paik, Joonki

    2011-11-01

    We present a novel color-invariant depth feature descriptor for color-shift-model (CSM)-based image processing. Color images acquired by a single camera equipped with multiple color-filter aperture (MCA) contain depth-dependent color misalignment. The amount and direction of the misalignment provides object's distance from the camera. The CSM-based image processing, which represents the combined image-acquisition and depth-estimation framework, requires a color-invariant feature descriptor that can convey depth information. For improving depth-estimation performance, color boosting is performed on a color image acquired by the MCA camera, and CSM-based channel-shifting descriptor vectors, or channel-shifting vectors (CSVs), are generated by using the feasibility test. Color-invariant features are also extracted in the luminance image. The proposed color-invariant three-dimensional (3-D) feature descriptor is finally obtained by combining the CSVs and luminance features. We present experimental analysis of the proposed feature descriptor and show that the descriptors are proportional to the depth of an object. The proposed descriptor can be used for feature-based image matching in various applications, including 3-D scene modeling, 3-D object recognition, 3-D video tracking, and multifocusing, to name a few.

  10. Acceleration of color computer-generated hologram from RGB-D images using color space conversion

    NASA Astrophysics Data System (ADS)

    Hiyama, Daisuke; Shimobaba, Tomoyoshi; Kakue, Takashi; Ito, Tomoyoshi

    2015-04-01

    We report acceleration of color computer-generated holograms (CGHs) from three-dimensional (3D) scenes that are expressed as RGB and depth (D) images. These images are captured by a depth camera and the depth buffer of a 3D graphics library. The RGB and depth images preserve the color and depth information of the 3D scene, respectively. We can then regard them as two-dimensional (2D) section images along the depth direction. In general, convolution-based diffraction such as the angular spectrum method is used in calculating CGHs from the 2D section images. However, it takes an enormous amount of time because of the multiple diffraction calculations. In this paper, we first describe 'band-limited double-step Fresnel diffraction (BL-DSF),' which accelerates the diffraction calculation compared with convolution-based diffraction. Next, we describe acceleration of color CGH calculation using color space conversion. Color CGHs are generally calculated in RGB color space; however, the same calculations must be repeated for each color component, so the computational cost of a color CGH is three times that of a monochrome CGH. Instead, we use YCbCr color space, because the 2D section images in YCbCr color space can be down-sampled without deterioration of the image quality.
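
    A short Python sketch of the color-space step only: converting an RGB section image to YCbCr (ITU-R BT.601 coefficients) and subsampling the chroma planes before further processing. The subsampling factor is an assumption, and the diffraction calculation itself (BL-DSF) is not reproduced here.

      import numpy as np

      def rgb_to_ycbcr(rgb):
          """ITU-R BT.601 RGB -> YCbCr for inputs in [0, 1]."""
          m = np.array([[ 0.299,     0.587,     0.114    ],
                        [-0.168736, -0.331264,  0.5      ],
                        [ 0.5,      -0.418688, -0.081312 ]])
          ycbcr = rgb @ m.T
          ycbcr[..., 1:] += 0.5  # center the chroma channels
          return ycbcr

      def split_and_downsample(rgb, factor=2):
          """Keep luma at full resolution; subsample Cb and Cr by 'factor' (assumed value)."""
          ycbcr = rgb_to_ycbcr(rgb)
          return ycbcr[..., 0], ycbcr[::factor, ::factor, 1], ycbcr[::factor, ::factor, 2]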

  11. Multimodal digital color imaging system for facial skin lesion analysis

    NASA Astrophysics Data System (ADS)

    Bae, Youngwoo; Lee, Youn-Heum; Jung, Byungjo

    2008-02-01

    In dermatology, various digital imaging modalities have been used as important tools to quantitatively evaluate the treatment effect on skin lesions. Cross-polarization color images have been used to evaluate skin chromophore (melanin and hemoglobin) information and parallel-polarization images to evaluate skin texture information. In addition, UV-A induced fluorescent images have been widely used to evaluate various skin conditions such as sebum, keratosis, sun damage, and vitiligo. In order to maximize the evaluation efficacy for various skin lesions, it is necessary to integrate various imaging modalities into one imaging system. In this study, we propose a multimodal digital color imaging system that provides four different digital color images: a standard color image, parallel- and cross-polarization color images, and a UV-A induced fluorescent color image. Herein, we describe the imaging system and present examples of image analysis. By analyzing the color information and morphological features of facial skin lesions, we are able to evaluate various skin lesions comparably and simultaneously. In conclusion, the multimodal color imaging system can be utilized as an important assistant tool in dermatology.

  12. Color appearance for photorealistic image synthesis

    NASA Astrophysics Data System (ADS)

    Marini, Daniele; Rizzi, Alessandro; Rossi, Maurizio

    2000-12-01

    Photorealistic Image Synthesis is a relevant research and application field in computer graphics, whose aim is to produce synthetic images that are indistinguishable from real ones. Photorealism is based upon accurate computational models of light-material interaction that allow us to compute the spectral intensity light field of a geometrically described scene. The fundamental methods are ray tracing and radiosity. While radiosity allows us to compute the diffuse component of the emitted and reflected light, applying ray tracing in a two-pass solution lets us also cope with the non-diffuse properties of the model surfaces. Both methods can be implemented to generate an accurate photometric distribution of light in the simulated environment. A still open problem is the visualization phase, whose purpose is to display the final result of the simulated model on a monitor screen or on printed paper. The tone reproduction problem consists of finding the best way to compress the extended dynamic range of the computed light field into the limited range of the displayable colors. Recently, some scholars have addressed this problem by considering the perception stage of image formation, thus including a model of the human visual system in the visualization process. In this paper we present a working hypothesis to solve the tone reproduction problem of synthetic image generation, integrating the Retinex perception model into the photorealistic image synthesis context.

  13. Color reproductivity improvement with additional virtual color filters for WRGB image sensor

    NASA Astrophysics Data System (ADS)

    Kawada, Shun; Kuroda, Rihito; Sugawa, Shigetoshi

    2013-02-01

    We have developed a high-accuracy color reproduction method based on an estimated spectral reflectance of objects using additional virtual color filters for a wide dynamic range WRGB color filter CMOS image sensor. The four virtual color filters are created by multiplying the spectral sensitivity of the White pixel by Gaussian functions with different central wavelengths and standard deviations, and the virtual sensor outputs of those virtual filters are estimated from the four real output signals of the WRGB image sensor. The accuracy of color reproduction was evaluated with a Macbeth Color Checker (MCC), and the averaged color difference ΔEab over the 24 colors was 1.88 with our approach.
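
    A brief Python sketch of how such virtual filter sensitivities could be formed: the White-pixel spectral sensitivity is multiplied by Gaussians of different centers and widths. The center wavelengths and sigma below are illustrative assumptions, not the paper's values.

      import numpy as np

      def virtual_filters(white_sensitivity, wavelengths,
                          centers=(470.0, 530.0, 590.0, 650.0), sigma=40.0):
          """Return an array (n_filters x n_wavelengths) of virtual color-filter
          sensitivities: the White-pixel sensitivity modulated by Gaussians."""
          wl = np.asarray(wavelengths, dtype=float)
          c = np.asarray(centers, dtype=float)
          gauss = np.exp(-0.5 * ((wl[None, :] - c[:, None]) / sigma) ** 2)
          return gauss * np.asarray(white_sensitivity, dtype=float)[None, :]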

  14. Binarization of color document images via luminance and saturation color features.

    PubMed

    Tsai, Chun-Ming; Lee, Hsi-Jian

    2002-01-01

    This paper presents a novel binarization algorithm for color document images. Conventional thresholding methods do not produce satisfactory binarization results for documents with close or mixed foreground colors and background colors. Initially, statistical image features are extracted from the luminance distribution. Then, a decision-tree based binarization method is proposed, which selects various color features to binarize color document images. First, if the document image colors are concentrated within a limited range, saturation is employed. Second, if the image foreground colors are significant, luminance is adopted. Third, if the image background colors are concentrated within a limited range, luminance is also applied. Fourth, if the total number of pixels with low luminance (less than 60) is limited, saturation is applied; else both luminance and saturation are employed. Our experiments include 519 color images, most of which are uniform invoice and name-card document images. The proposed binarization method generates better results than other available methods in shape and connected-component measurements. Also, the binarization method obtains higher recognition accuracy in a commercial OCR system than other comparable methods. PMID:18244645
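
    A Python sketch of the rule structure outlined above, choosing between luminance and saturation before thresholding. The predicates and threshold values here (range, fraction, and dark-pixel limits) are illustrative assumptions standing in for the paper's statistical features, and a simple global threshold stands in for its binarization step.

      import numpy as np

      def choose_feature(lum, sat, fg_mask,
                         range_limit=0.25, fg_frac=0.30, dark_level=60 / 255, dark_frac=0.05):
          """Pick the channel to binarize, loosely following the four rules above.
          lum, sat are float arrays in [0, 1]; fg_mask is a boolean foreground estimate."""
          if np.ptp(lum) < range_limit:              # colors concentrated in a limited range
              return sat
          if fg_mask.mean() > fg_frac:               # foreground colors are significant
              return lum
          if np.ptp(lum[~fg_mask]) < range_limit:    # background concentrated in a limited range
              return lum
          if (lum < dark_level).mean() < dark_frac:  # few low-luminance pixels
              return sat
          return 0.5 * (lum + sat)                   # otherwise combine both features

      def binarize(channel):
          """Global threshold at the channel mean (a stand-in for the paper's method)."""
          return channel > channel.mean()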

  15. CFA-aware features for steganalysis of color images

    NASA Astrophysics Data System (ADS)

    Goljan, Miroslav; Fridrich, Jessica

    2015-03-01

    Color interpolation is a form of upsampling, which introduces constraints on the relationship between neighboring pixels in a color image. These constraints can be utilized to substantially boost the accuracy of steganography detectors. In this paper, we introduce a rich model formed by 3D co-occurrences of color noise residuals split according to the structure of the Bayer color filter array to further improve detection. Some color interpolation algorithms, such as AHD and PPG, impose pixel constraints so tight that extremely accurate detection becomes possible with merely eight features, eliminating the need for model richification. We carry out experiments on non-adaptive LSB matching and the content-adaptive algorithm WOW on five different color interpolation algorithms. In contrast to grayscale images, in color images that exhibit traces of color interpolation the security of WOW is significantly lower and, depending on the interpolation algorithm, may even be lower than that of non-adaptive LSB matching.
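
    A small Python sketch of the grouping idea behind such features: a simple noise residual (pixel minus the average of its four neighbors) is computed per color channel and split into the four Bayer positions. This illustrates only the CFA-aware splitting, not the full 3D co-occurrence model from the paper.

      import numpy as np

      def bayer_split_residuals(channel):
          """Return residual sub-arrays keyed by their (row, col) parity in a 2x2 Bayer cell."""
          x = channel.astype(np.float64)
          # Predictor: average of the four direct neighbors (wrap-around at the borders).
          pred = 0.25 * (np.roll(x, 1, axis=0) + np.roll(x, -1, axis=0) +
                         np.roll(x, 1, axis=1) + np.roll(x, -1, axis=1))
          resid = x - pred
          return {(i, j): resid[i::2, j::2] for i in range(2) for j in range(2)}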

  16. High Quality Color Imaging on the Mead Microencapsulated Imaging System Using a Fiber Optic CRT

    NASA Astrophysics Data System (ADS)

    Duke, Ronald J.

    1989-07-01

    Mead Imaging's unique microencapsulated color imaging system (CYCOLOR) has many applications. Mead Imaging and Hughes have combined CYCOLOR and Fiber Optic Cathode Ray Tubes (FOCRT) to develop digital color printers.

  17. The Airborne Ocean Color Imager - System description and image processing

    NASA Technical Reports Server (NTRS)

    Wrigley, Robert C.; Slye, Robert E.; Klooster, Steven A.; Freedman, Richard S.; Carle, Mark; Mcgregor, Lloyd F.

    1992-01-01

    The Airborne Ocean Color Imager was developed as an aircraft instrument to simulate the spectral and radiometric characteristics of the next generation of satellite ocean color instrumentation. Data processing programs have been developed as extensions of the Coastal Zone Color Scanner algorithms for atmospheric correction and bio-optical output products. The latter include several bio-optical algorithms for estimating phytoplankton pigment concentration, as well as one for the diffuse attenuation coefficient of the water. Additional programs have been developed to geolocate these products and remap them into a georeferenced data base, using data from the aircraft's inertial navigation system. Examples illustrate the sequential data products generated by the processing system, using data from flightlines near the mouth of the Mississippi River: from raw data to atmospherically corrected data, to bio-optical data, to geolocated data, and, finally, to georeferenced data.

  18. Mississippi Delta, Radar Image with Colored Height

    NASA Technical Reports Server (NTRS)

    2005-01-01


    About the animation: This simulated view of the potential effects of storm surge flooding on Lake Pontchartrain and the New Orleans area was generated with data from the Shuttle Radar Topography Mission. Although it is protected by levees and sea walls against storm surges of 18 to 20 feet, much of the city is below sea level, and flooding due to storm surges caused by major hurricanes is a concern. The animation shows regions that, if unprotected, would be inundated with water. The animation depicts flooding in one-meter increments.

    About the image: The geography of the New Orleans and Mississippi delta region is well shown in this radar image from the Shuttle Radar Topography Mission. In this image, bright areas show regions of high radar reflectivity, such as from urban areas, and elevations have been coded in color using height data also from the mission. Dark green colors indicate low elevations, rising through yellow and tan, to white at the highest elevations.

    New Orleans is situated along the southern shore of Lake Pontchartrain, the large, roughly circular lake near the center of the image. The line spanning the lake is the Lake Pontchartrain Causeway, the world's longest over water highway bridge. Major portions of the city of New Orleans are below sea level, and although it is protected by levees and sea walls, flooding during storm surges associated with major hurricanes is a significant concern.

    Data used in this image were acquired by the Shuttle Radar Topography Mission aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. The mission used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar that flew twice on the Space Shuttle Endeavour in 1994. The Shuttle Radar Topography Mission was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data

  19. New Windows based Color Morphological Operators for Biomedical Image Processing

    NASA Astrophysics Data System (ADS)

    Pastore, Juan; Bouchet, Agustina; Brun, Marcel; Ballarin, Virginia

    2016-04-01

    Morphological image processing is well known as an efficient methodology for image processing and computer vision. With the wide use of color in many areas, interest in color perception and processing has been growing rapidly. Many models have been proposed to extend morphological operators to the field of color images, dealing with new problems not present in the binary and gray-level contexts. These solutions usually deal with the lattice structure of the color space, or provide it with total orders, in order to define basic operators with the required properties. In this work we propose a new locally defined ordering, in the context of window-based morphological operators, for the definition of erosion-like and dilation-like operators, which provides the same desired properties expected from color morphology while avoiding some of the drawbacks of the prior approaches. Experimental results show that the proposed color operators can be efficiently used for color image processing.

  20. Appearance can be deceiving: using appearance models in color imaging

    NASA Astrophysics Data System (ADS)

    Johnson, Garrett M.

    2007-01-01

    As color imaging has evolved through the years, our toolset for understanding it has similarly evolved. Research in color difference equations and uniform color spaces spawned tools such as CIELAB, which has had tremendous success over the years. Research on chromatic adaptation and other appearance phenomena then extended CIELAB to form the basis of color appearance models, such as CIECAM02. Color difference equations such as CIEDE2000 evolved to reconcile weaknesses in areas of the CIELAB space. Similarly, models such as S-CIELAB were developed to predict more spatially complex color difference calculations between images. Research in all of these fields is still going strong, and there seems to be a trend towards unification of some of the tools, such as calculating color differences in a color appearance space. Along such lines, image appearance models have been developed that attempt to combine all of the above models and metrics into one common framework. The goal is to allow color imaging researchers to pick and choose the appropriate modeling toolset for their needs. Along these lines, the iCAM image appearance model framework was developed to study a variety of color imaging problems. These include image difference and image quality evaluations as well as gamut mapping and high-dynamic-range (HDR) rendering. It is important to stress that iCAM was not designed to be a complete color imaging solution, but rather a starting point for unifying models of color appearance, color difference, and spatial vision. As such, the choice of model components is highly dependent on the problem being addressed. For example, with CIELAB it is clearly evident that it is not necessary to use the associated color difference equations to have great success as a device-independent color space. Likewise, it may not be necessary to use the spatial filtering components of an image appearance model when performing image rendering. This paper attempts to shed some light on some of the

  1. Color Voyager 2 Image Showing Crescent Uranus

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This image shows a crescent Uranus, a view that Earthlings never witnessed until Voyager 2 flew near and then beyond Uranus on January 24, 1986. This planet's natural blue-green color is due to the absorption of redder wavelengths in the atmosphere by traces of methane gas. Uranus' diameter is 32,500 miles, a little over four times that of Earth. The hazy blue-green atmosphere probably extends to a depth of around 5,400 miles, where it rests above what is believed to be an icy or liquid mixture (an 'ocean') of water, ammonia, methane, and other volatiles, which in turn surrounds a rocky core perhaps a little smaller than Earth.

  2. Color Doppler imaging of retinal diseases.

    PubMed

    Dimitrova, Galina; Kato, Satoshi

    2010-01-01

    Color Doppler imaging (CDI) is a widely used method for evaluating ocular circulation that has been used in a number of studies on retinal diseases. CDI assesses blood velocity parameters by using ultrasound waves. In ophthalmology, these assessments are mainly performed on the retrobulbar blood vessels: the ophthalmic, the central retinal, and the short posterior ciliary arteries. In this review, we discuss CDI use for the assessment of retinal diseases classified into the following: vascular diseases, degenerations, dystrophies, and detachment. The retinal vascular diseases that have been investigated by CDI include diabetic retinopathy, retinal vein occlusions, retinal artery occlusions, ocular ischemic conditions, and retinopathy of prematurity. Degenerations and dystrophies included in this review are age-related macular degeneration, myopia, and retinitis pigmentosa. CDI has been used for the differential diagnosis of retinal detachment, as well as the evaluation of retrobulbar circulation in this condition. CDI is valuable for research and is a potentially useful diagnostic tool in the clinical setting. PMID:20385332

  3. Tiny Devices Project Sharp, Colorful Images

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Displaytech Inc., based in Longmont, Colorado and recently acquired by Micron Technology Inc. of Boise, Idaho, first received a Small Business Innovation Research contract in 1993 from Johnson Space Center to develop tiny, electronic, color displays, called microdisplays. Displaytech has since sold over 20 million microdisplays and was ranked one of the fastest growing technology companies by Deloitte and Touche in 2005. Customers currently incorporate the microdisplays in tiny pico-projectors, which weigh only a few ounces and attach to media players, cell phones, and other devices. The projectors can convert a digital image from the typical postage stamp size into a bright, clear, four-foot projection. The company believes sales of this type of pico-projector may exceed $1.1 billion within 5 years.

  4. Natural and seamless image composition with color control.

    PubMed

    Yang, Wenxian; Zheng, Jianmin; Cai, Jianfei; Rahardja, Susanto; Chen, Chang Wen

    2009-11-01

    While state-of-the-art image composition algorithms subtly handle the object boundary to achieve seamless image copy-and-paste, it is observed that they are unable to preserve the color fidelity of the source object, often require a considerable amount of user interaction, and often fail to achieve realism when there is salient discrepancy between the background textures in the source and destination images. These observations motivate our research towards color-controlled, natural and seamless image composition with minimal user interaction. In particular, based on the Poisson image editing framework, we first propose a variational model that considers both the gradient constraint and the color fidelity. The proposed model allows users to control the coloring effect caused by gradient-domain fusion. Second, to require less user interaction, we propose a distance-enhanced random walks algorithm, through which we avoid the necessity of accurate image segmentation while still being able to highlight the foreground object. Third, we propose a multiresolution framework to perform image composition at different subbands so as to separate the texture and color components and simultaneously achieve smooth texture transition and the desired color control. The experimental results demonstrate that our proposed framework achieves better and more realistic results for images with salient background color or texture differences, while providing results comparable to the state-of-the-art algorithms for images that do not require preserving the object color fidelity and have no significant background texture discrepancy. PMID:19596637

  5. A probabilistic approach for color correction in image mosaicking applications.

    PubMed

    Oliveira, Miguel; Sappa, Angel Domingo; Santos, Vitor

    2015-02-01

    Image mosaicking applications require both geometrical and photometrical registrations between the images that compose the mosaic. This paper proposes a probabilistic color correction algorithm for correcting the photometrical disparities. First, the image to be color corrected is segmented into several regions using mean shift. Then, connected regions are extracted using a region fusion algorithm. Local joint image histograms of each region are modeled as collections of truncated Gaussians using a maximum likelihood estimation procedure. Then, local color palette mapping functions are computed using these sets of Gaussians. The color correction is performed by applying those functions to all the regions of the image. An extensive comparison with ten other state of the art color correction algorithms is presented, using two different image pair data sets. Results show that the proposed approach obtains the best average scores in both data sets and evaluation metrics and is also the most robust to failures. PMID:25438315

  6. Demosaiced pixel super-resolution for multiplexed holographic color imaging

    PubMed Central

    Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan

    2016-01-01

    To synthesize a holographic color image, one can sequentially take three holograms at different wavelengths, e.g., at red (R), green (G) and blue (B) parts of the spectrum, and digitally merge them. To speed up the imaging process by a factor of three, a Bayer color sensor-chip can also be used to demultiplex three wavelengths that simultaneously illuminate the sample and digitally retrieve individual set of holograms using the known transmission spectra of the Bayer color filters. However, because the pixels of different channels (R, G, B) on a Bayer color sensor are not at the same physical location, conventional demosaicing techniques generate color artifacts in holographic imaging using simultaneous multi-wavelength illumination. Here we demonstrate that pixel super-resolution can be merged into the color de-multiplexing process to significantly suppress the artifacts in wavelength-multiplexed holographic color imaging. This new approach, termed Demosaiced Pixel Super-Resolution (D-PSR), generates color images that are similar in performance to sequential illumination at three wavelengths, and therefore improves the speed of holographic color imaging by 3-fold. D-PSR method is broadly applicable to holographic microscopy applications, where high-resolution imaging and multi-wavelength illumination are desired. PMID:27353242

  7. Demosaiced pixel super-resolution for multiplexed holographic color imaging

    NASA Astrophysics Data System (ADS)

    Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan

    2016-06-01

    To synthesize a holographic color image, one can sequentially take three holograms at different wavelengths, e.g., at red (R), green (G) and blue (B) parts of the spectrum, and digitally merge them. To speed up the imaging process by a factor of three, a Bayer color sensor-chip can also be used to demultiplex three wavelengths that simultaneously illuminate the sample and digitally retrieve individual set of holograms using the known transmission spectra of the Bayer color filters. However, because the pixels of different channels (R, G, B) on a Bayer color sensor are not at the same physical location, conventional demosaicing techniques generate color artifacts in holographic imaging using simultaneous multi-wavelength illumination. Here we demonstrate that pixel super-resolution can be merged into the color de-multiplexing process to significantly suppress the artifacts in wavelength-multiplexed holographic color imaging. This new approach, termed Demosaiced Pixel Super-Resolution (D-PSR), generates color images that are similar in performance to sequential illumination at three wavelengths, and therefore improves the speed of holographic color imaging by 3-fold. D-PSR method is broadly applicable to holographic microscopy applications, where high-resolution imaging and multi-wavelength illumination are desired.

  8. Demosaiced pixel super-resolution for multiplexed holographic color imaging.

    PubMed

    Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan

    2016-01-01

    To synthesize a holographic color image, one can sequentially take three holograms at different wavelengths, e.g., at red (R), green (G) and blue (B) parts of the spectrum, and digitally merge them. To speed up the imaging process by a factor of three, a Bayer color sensor-chip can also be used to demultiplex three wavelengths that simultaneously illuminate the sample and digitally retrieve individual set of holograms using the known transmission spectra of the Bayer color filters. However, because the pixels of different channels (R, G, B) on a Bayer color sensor are not at the same physical location, conventional demosaicing techniques generate color artifacts in holographic imaging using simultaneous multi-wavelength illumination. Here we demonstrate that pixel super-resolution can be merged into the color de-multiplexing process to significantly suppress the artifacts in wavelength-multiplexed holographic color imaging. This new approach, termed Demosaiced Pixel Super-Resolution (D-PSR), generates color images that are similar in performance to sequential illumination at three wavelengths, and therefore improves the speed of holographic color imaging by 3-fold. D-PSR method is broadly applicable to holographic microscopy applications, where high-resolution imaging and multi-wavelength illumination are desired. PMID:27353242

  9. EVALUATION OF COLOR ALTERATION ON FABRICS BY IMAGE ANALYSIS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Evaluation of color changes is usually done manually and is often inconsistent. Image analysis provides a method in which to evaluate color-related testing that is not only simple, but also consistent. Image analysis can also be used to measure areas that were considered too large for the colorimet...

  10. Specialized Color Targets for Spectral Reflectance Reconstruction of Magnified Images

    NASA Astrophysics Data System (ADS)

    Kruschwitz, Jennifer D. T.

    Digital images are used almost exclusively instead of film to capture visual information across many scientific fields. The colorimetric color representation within these digital images can be relayed from the digital counts produced by the camera with the use of a known color target. In image capture of magnified images, there is currently no reliable color target that can be used at multiple magnifications and give the user a solid understanding of the color ground truth within those images. The first part of this dissertation included the design, fabrication, and testing of a color target produced with optical interference coated microlenses for use in an off-axis illumination, compound microscope. An ideal target was designed to increase the color gamut for colorimetric imaging and provide the necessary "Block Dye" spectral reflectance profiles across the visible spectrum to reduce the number of color patches necessary for multiple filter imaging systems that rely on statistical models for spectral reflectance reconstruction. There are other scientific disciplines that can benefit from a specialized color target to determine the color ground truth in their magnified images and perform spectral estimation. Not every discipline has the luxury of having a multi-filter imaging system. The second part of this dissertation developed two unique ways of using an interference coated color mirror target: one that relies on multiple light-source angles, and one that leverages a dynamic color change with time. The source multi-angle technique would be used for the microelectronic discipline where the reconstructed spectral reflectance would be used to determine a dielectric film thickness on a silicon substrate, and the time varying technique would be used for a biomedical example to determine the thickness of human tear film.

  11. Color image encryption scheme using CML and DNA sequence operations.

    PubMed

    Wang, Xing-Yuan; Zhang, Hui-Li; Bao, Xue-Mei

    2016-06-01

    In this paper, an encryption algorithm for color images using a chaotic system and DNA (Deoxyribonucleic acid) sequence operations is proposed. The three components of the color plain image are used to construct a matrix, and a confusion operation is then performed on the pixel matrix using sequences generated by the spatiotemporal chaos system, i.e., a CML (coupled map lattice). DNA encoding and decoding rules are introduced in the permutation phase. An extended Hamming distance is proposed to generate new initial values for the CML iteration, combining information from the color plain image. The rows and columns of the DNA matrix are permuted, and the color cipher image is obtained from this matrix. Theoretical analysis and experimental results show that the cryptosystem is secure and practical, and it is suitable for encrypting color images of any size. PMID:27026385
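
    A minimal Python sketch of one ingredient named above, a coupled map lattice of logistic maps used as a chaotic sequence generator. The lattice size, coupling strength, map parameter, and seeding scheme are illustrative assumptions, and the DNA encoding and permutation stages of the paper are not reproduced.

      import numpy as np

      def cml_keystream(n_values, lattice_size=8, mu=3.99, eps=0.1, seed=0.37, burn_in=200):
          """x_{t+1}(i) = (1 - eps)*f(x_t(i)) + (eps/2)*(f(x_t(i-1)) + f(x_t(i+1))),
          with the logistic map f(x) = mu*x*(1 - x) and periodic boundary conditions."""
          def f(v):
              return mu * v * (1.0 - v)
          # Deterministic initial lattice state derived from a scalar seed (an assumption).
          x = (seed + np.arange(lattice_size) / (7.0 * lattice_size)) % 1.0
          out = []
          steps = burn_in + n_values // lattice_size + 1
          for t in range(steps):
              fx = f(x)
              x = (1.0 - eps) * fx + (eps / 2.0) * (np.roll(fx, 1) + np.roll(fx, -1))
              if t >= burn_in:
                  out.append(x.copy())
          return np.concatenate(out)[:n_values]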

  12. Exploring the use of memory colors for image enhancement

    NASA Astrophysics Data System (ADS)

    Xue, Su; Tan, Minghui; McNamara, Ann; Dorsey, Julie; Rushmeier, Holly

    2014-02-01

    Memory colors refer to those colors recalled in association with familiar objects. While some previous work introduces this concept to assist digital image enhancement, its basis, i.e., on-screen memory colors, is not appropriately investigated. In addition, the resulting adjustment methods are not evaluated from a perceptual point of view. In this paper, we first perform a context-free perceptual experiment to establish the overall distributions of on-screen memory colors for three pervasive objects. Then, we use a context-based experiment to locate the most representative memory colors; at the same time, we investigate the interactions of memory colors between different objects. Finally, we show a simple yet effective application using representative memory colors to enhance digital images. A user study is performed to evaluate the performance of our technique.

  13. Processing halftone color images by vector space methods.

    PubMed

    Liu, Li; Yang, Yongyi; Stark, Henry

    2006-02-01

    The reproduction of color images by color halftoning can be characterized by the Neugebauer model/equation. However, the Neugebauer equation is not easy to solve because of the highly nonlinear relationship between the underlying Neugebauer primaries and the colorants. We attempt to solve the Neugebauer equation by vector space methods. The proposed method of solution is applicable to any number of colorants, although our experimental results are confined to the CMY and CMYK cases. Among the constraints we consider are those related to a bound on the permissible amount of total ink and a bound on the total cost of applying colorants to achieve a satisfactory level of color reproduction. Our results demonstrate that the vector space method is a feasible approach for solving for the required amounts of colorants in the constrained color halftoning problem. PMID:16477829
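
    For context, a short Python sketch of the spectral Neugebauer prediction the abstract refers to: the reflectance of a CMY halftone is modeled as the area-weighted sum of the reflectances of the eight Neugebauer primaries, with Demichel coverages computed from the nominal dot areas. This is the standard forward model only; the paper's vector-space inversion and ink/cost constraints are not reproduced here.

      import numpy as np

      def demichel_weights(c, m, y):
          """Area coverages of the 8 Neugebauer primaries for CMY dot areas in [0, 1]
          (white, C, M, Y, C+M, C+Y, M+Y, C+M+Y), assuming random dot overlap."""
          return np.array([
              (1 - c) * (1 - m) * (1 - y),
              c * (1 - m) * (1 - y),
              (1 - c) * m * (1 - y),
              (1 - c) * (1 - m) * y,
              c * m * (1 - y),
              c * (1 - m) * y,
              (1 - c) * m * y,
              c * m * y,
          ])

      def neugebauer_reflectance(c, m, y, primary_reflectances):
          """Spectral Neugebauer forward model: weighted sum of the 8 primary spectra
          (primary_reflectances has shape 8 x n_wavelengths)."""
          return demichel_weights(c, m, y) @ np.asarray(primary_reflectances)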

  14. Evaluation of color error and noise on simulated images

    NASA Astrophysics Data System (ADS)

    Mornet, Clémence; Vaillant, Jérôme; Decroux, Thomas; Hérault, Didier; Schanen, Isabelle

    2010-01-01

    The evaluation of CMOS sensor performance in terms of color accuracy and noise is a big challenge for camera phone manufacturers. In this paper, we present a tool developed with Matlab at STMicroelectronics which allows quality parameters to be evaluated on simulated images. These images are computed based on measured or predicted Quantum Efficiency (QE) curves and a noise model. By setting the parameters of integration time and illumination, the tool optimizes the color correction matrix (CCM) and calculates the color error, color saturation and signal-to-noise ratio (SNR). After this color correction optimization step, a Graphical User Interface (GUI) has been designed to display a simulated image at a chosen illumination level, with all the characteristics of a real image taken by the sensor with the previous color correction. Simulated images can be a synthetic Macbeth ColorChecker, for which the reflectance of each patch is known, a multi-spectral image described by the reflectance spectrum of each pixel, or an image taken at a high light level. A validation of the results has been performed with ST sensors under development. Finally, we present two applications: one based on the trade-offs between color saturation and noise when optimizing the CCM, and the other based on demosaicking SNR trade-offs.

  15. Color separation in forensic image processing using interactive differential evolution.

    PubMed

    Mushtaq, Harris; Rahnamayan, Shahryar; Siddiqi, Areeb

    2015-01-01

    Color separation is an image processing technique that has often been used in forensic applications to differentiate among variant colors and to remove unwanted image interference. This process can reveal important information such as covered text or fingerprints in forensic investigation procedures. However, several limitations prevent users from selecting the appropriate parameters pertaining to the desired and undesired colors. This study proposes the hybridization of interactive differential evolution (IDE) with a color separation technique, so that users no longer need to guess the required control parameters. The IDE algorithm optimizes these parameters in an interactive manner by utilizing human visual judgment to uncover desired objects. A comprehensive experimental verification has been conducted on various sample test images, including heavily obscured texts, texts with subtle color variations, and fingerprint smudges. The advantage of IDE is apparent as it effectively optimizes the color separation parameters at a level indiscernible to the naked eye. PMID:25400037

  16. Color preservation for tone reproduction and image enhancement

    NASA Astrophysics Data System (ADS)

    Hsin, Chengho; Lee, Zong Wei; Lee, Zheng Zhan; Shin, Shaw-Jyh

    2014-01-01

    Applications based on luminance processing often face the problem of recovering the original chrominance in the output color image. A common approach to reconstructing a color image from the luminance output is to preserve the original hue and saturation. However, this approach often produces an excessively colorful image, which is undesirable. We develop a color preservation method that not only retains the ratios of the input tri-chromatic values but also adjusts the output chroma in an appropriate way. Linearizing the output luminance is the key idea in realizing this method. In addition, a lightness difference metric together with a colorfulness difference metric is proposed to evaluate the performance of color preservation methods. The results show that the proposed method performs consistently better than the existing approaches.
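
    A Python sketch of the ratio-preserving baseline the abstract contrasts with: each channel is scaled by the ratio of processed to original luminance, with an optional exponent to temper the resulting chroma. The exponent is an illustrative knob, not the paper's chroma adjustment.

      import numpy as np

      def recolor_from_luminance(rgb_in, lum_in, lum_out, s=1.0, eps=1e-6):
          """rgb_out = rgb_in * (lum_out / lum_in)**s; s = 1 preserves channel ratios
          exactly, while s < 1 reduces the over-colorful look (value is an assumption)."""
          ratio = (lum_out + eps) / (lum_in + eps)
          return np.clip(rgb_in * (ratio[..., None] ** s), 0.0, 1.0)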

  17. Unsupervised color image segmentation using a lattice algebra clustering technique

    NASA Astrophysics Data System (ADS)

    Urcid, Gonzalo; Ritter, Gerhard X.

    2011-08-01

    In this paper we introduce a lattice algebra clustering technique for segmenting digital images in the Red-Green-Blue (RGB) color space. The proposed technique is a two-step procedure. Given an input color image, the first step determines the finite set of its extreme pixel vectors within the color cube by means of the scaled min-W and max-M lattice auto-associative memory matrices, including the minimum and maximum vector bounds. In the second step, maximal rectangular boxes enclosing each extreme color pixel are found using the Chebyshev distance between color pixels; afterwards, clustering is performed by assigning each image pixel to its corresponding maximal box. The two steps in our proposed method are completely unsupervised or autonomous. Illustrative examples are provided to demonstrate the color segmentation results, including a brief numerical comparison with two other non-maximal variations of the same clustering technique.
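
    A small Python sketch of the assignment idea in the second step: once a set of extreme color vectors is known, each pixel is assigned to its nearest extreme under the Chebyshev (maximum absolute difference) distance. Deriving the extremes with the min-W/max-M memories is not reproduced here.

      import numpy as np

      def assign_to_extremes(pixels, extremes):
          """pixels: N x 3 RGB vectors; extremes: K x 3 extreme color vectors.
          Returns the index of the nearest extreme for each pixel (Chebyshev distance)."""
          d = np.abs(pixels[:, None, :] - extremes[None, :, :]).max(axis=2)  # N x K
          return d.argmin(axis=1)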

  18. Identification of canonical neural events during continuous gameplay of an 8-bit style video game.

    PubMed

    Cavanagh, James F; Castellanos, Joel

    2016-06-01

    Cognitive neuroscience suffers from a unique and pervasive problem of generalizability. Since neural findings are often interpreted in the context of a specific manipulation during a carefully controlled task, it is hard to transfer knowledge from one task to another. In this report we address problems of generalizability with two methodological advancements. First, we aimed to transcend status quo experimental procedures with a continuous, engaging task environment. To this end, we created a novel 8-bit style continuous space shooter video game that elicits a multitude of goal-oriented events, such as crashing into a wall or blowing up an enemy with a missile. Second, we aimed to objectively define the psychological significance of these events. To achieve this aim, we used pattern classification of EEG data to derive predictive weights from carefully controlled pre-game exemplar events (oddball target detection and gambling wins and losses) and transferred those weights to EEG activities during video game events. All major goal-oriented events (crashes into the wall, crashes into an enemy, missile hit on an enemy) had a significant between-task transfer bias towards oddball target weights in the time range of the canonical P3, indicating the presence of similar salience detection processes. Missile hits on an enemy were specifically identified as gambling wins, confirming the hypothesis that this goal-oriented event was appetitive. These findings suggest that it is possible to identify the contribution of canonical neural activities during otherwise ambiguous and uncontrolled task performance. PMID:26952196

  19. New Colors for Histology: Optimized Bivariate Color Maps Increase Perceptual Contrast in Histological Images

    PubMed Central

    Kather, Jakob Nikolas; Weis, Cleo-Aron; Marx, Alexander; Schuster, Alexander K.; Schad, Lothar R.; Zöllner, Frank Gerrit

    2015-01-01

    Background: Accurate evaluation of immunostained histological images is required for reproducible research in many different areas and forms the basis of many clinical decisions. The quality and efficiency of histopathological evaluation is limited by the information content of a histological image, which is primarily encoded as perceivable contrast differences between objects in the image. However, the colors of chromogen and counterstain used for histological samples are not always optimally distinguishable, even under optimal conditions. Methods and Results: In this study, we present a method to extract the bivariate color map inherent in a given histological image and to retrospectively optimize this color map. We use a novel, unsupervised approach based on color deconvolution and principal component analysis to show that the commonly used blue and brown color hues in Hematoxylin-3,3'-Diaminobenzidine (DAB) images are poorly suited for human observers. We then demonstrate that it is possible to construct improved color maps according to objective criteria and that these color maps can be used to digitally re-stain histological images. Validation: To validate whether this procedure improves distinguishability of objects and background in histological images, we re-stain phantom images and N = 596 large histological images of immunostained samples of human solid tumors. We show that perceptual contrast is improved by a factor of 2.56 in phantom images and up to a factor of 2.17 in sets of histological tumor images. Context: Thus, we provide an objective and reliable approach to measure object distinguishability in a given histological image and to maximize the visual information available to a human observer. This method could easily be incorporated in digital pathology image viewing systems to improve accuracy and efficiency in research and diagnostics. PMID:26717571

  20. Color normalization for robust evaluation of microscopy images

    NASA Astrophysics Data System (ADS)

    Švihlík, Jan; Kybic, Jan; Habart, David

    2015-09-01

    This paper deals with color normalization of microscopy images of Langerhans islets in order to increase robustness of the islet segmentation to illumination changes. The main application is automatic quantitative evaluation of the islet parameters, useful for determining the feasibility of islet transplantation in diabetes. First, background illumination inhomogeneity is compensated and a preliminary foreground/background segmentation is performed. The color normalization itself is done in either lαβ or logarithmic RGB color spaces, by comparison with a reference image. The color-normalized images are segmented using color-based features and pixel-wise logistic regression, trained on manually labeled images. Finally, relevant statistics such as the total islet area are evaluated in order to determine the success likelihood of the transplantation.
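
    A Python sketch of one possible normalization step in the spirit described above: the per-channel mean and standard deviation of the image are matched to a reference image in logarithmic RGB space (one of the two spaces mentioned). This is a Reinhard-style statistic transfer offered as a sketch under those assumptions, not the authors' pipeline.

      import numpy as np

      def normalize_to_reference(img, ref, eps=1e-6):
          """img, ref: H x W x 3 float arrays in [0, 1]. Returns img with each log-RGB
          channel shifted and scaled to match the reference's mean and std."""
          log_img, log_ref = np.log(img + eps), np.log(ref + eps)
          out = np.empty_like(log_img)
          for ch in range(3):
              a, b = log_img[..., ch], log_ref[..., ch]
              out[..., ch] = (a - a.mean()) / (a.std() + eps) * b.std() + b.mean()
          return np.clip(np.exp(out) - eps, 0.0, 1.0)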

  1. Colored adaptive compressed imaging with a single photodiode.

    PubMed

    Yan, Yiyun; Dai, Huidong; Liu, Xingjiong; He, Weiji; Chen, Qian; Gu, Guohua

    2016-05-10

    Computational ghost imaging is commonly used to reconstruct grayscale images. Currently, however, there is little research aimed at reconstructing color images. In this paper, we theoretically and experimentally demonstrate a colored adaptive compressed imaging method. Benefiting from imaging in YUV color space, the proposed method adequately exploits the sparsity of the U, V components in the wavelet domain, the interdependence between luminance and chrominance, and human visual characteristics. The simulation and experimental results show that our method greatly reduces the measurements required and offers better image quality compared to recovering the red (R), green (G), and blue (B) components separately in RGB color space. As the application of a single photodiode increases, our method shows great potential in many fields. PMID:27168280

  2. Refinement of Colored Mobile Mapping Data Using Intensity Images

    NASA Astrophysics Data System (ADS)

    Yamakawa, T.; Fukano, K.; Onodera, R.; Masuda, H.

    2016-06-01

    Mobile mapping systems (MMS) can capture dense point-clouds of urban scenes. For visualizing realistic scenes using point-clouds, RGB colors have to be added to the point-clouds. To generate colored point-clouds in a post-process, each point is projected onto the camera images and an RGB color is copied to the point at the projected position. However, incorrect colors are often added to point-clouds because of the misalignment of laser scanners, the calibration errors of cameras and laser scanners, or failures of GPS acquisition. In this paper, we propose a new method to correct the RGB colors of point-clouds captured by an MMS. In our method, the RGB colors of a point-cloud are corrected by comparing intensity images and RGB images. However, since an MMS outputs sparse and anisotropic point-clouds, regular images cannot be obtained from the intensities of points. Therefore, we convert a point-cloud into a mesh model and project triangle faces onto image space, on which regular lattices are defined. Then we extract edge features from the intensity images and RGB images, and detect their correspondences. In our experiments, our method worked very well for correcting the RGB colors of point-clouds captured by an MMS.

  3. Spatial imaging in color and HDR: prometheus unchained

    NASA Astrophysics Data System (ADS)

    McCann, John J.

    2013-03-01

    The Human Vision and Electronic Imaging Conferences (HVEI) at the IS&T/SPIE Electronic Imaging meetings have brought together research in the fundamentals of both vision and digital technology. This conference has incorporated many color disciplines that have contributed to the theory and practice of today's imaging: color constancy, models of vision, digital output, high-dynamic-range imaging, and the understanding of perceptual mechanisms. Before digital imaging, silver halide color was a pixel-based mechanism. Color films are closely tied to colorimetry, the science of matching pixels in a black surround. The quanta catch of the sensitized silver salts determines the amount of colored dyes in the final print. The rapid expansion of digital imaging over the past 25 years has eliminated the limitations of using small local regions in forming images. Spatial interactions can now generate images more like vision. Since the 1950s, neurophysiology has shown that post-receptor neural processing is based on spatial interactions. These results reinforced the findings of 19th-century experimental psychology. This paper reviews the role of HVEI in color, emphasizing the interaction of research on vision and the new algorithms and processes made possible by electronic imaging.

  4. Nonlocal Mumford-Shah regularizers for color image restoration.

    PubMed

    Jung, Miyoun; Bresson, Xavier; Chan, Tony F; Vese, Luminita A

    2011-06-01

    We propose here a class of restoration algorithms for color images, based upon the Mumford-Shah (MS) model and nonlocal image information. The Ambrosio-Tortorelli and Shah elliptic approximations are defined to work in a small local neighborhood, which is sufficient to denoise smooth regions with sharp boundaries. However, texture is nonlocal in nature and requires semilocal/non-local information for efficient image denoising and restoration. Inspired by recent works (nonlocal means of Buades, Coll, and Morel, and nonlocal total variation of Gilboa and Osher), we extend the local Ambrosio-Tortorelli and Shah approximations of the MS functional to novel nonlocal formulations, for better restoration of fine structures and texture. We present several applications of the proposed nonlocal MS regularizers in image processing, such as color image denoising, color image deblurring in the presence of Gaussian or impulse noise, color image inpainting, color image super-resolution, and color filter array demosaicing. In all the applications, the proposed nonlocal regularizers produce superior results over the local ones, especially in image inpainting with large missing regions. We also prove several characterizations of minimizers based upon dual norm formulations. PMID:21078579

  5. Objective color classification of ecstasy tablets by hyperspectral imaging.

    PubMed

    Edelman, Gerda; Lopatka, Martin; Aalders, Maurice

    2013-07-01

    The general procedure followed in the examination of ecstasy tablets for profiling purposes includes a color description, which depends highly on the observers' perception. This study aims to provide objective quantitative color information using visible hyperspectral imaging. Both self-manufactured and illicit tablets, created with different amounts of known colorants were analyzed. We derived reflectance spectra from hyperspectral images of these tablets, and successfully determined the most likely colorant used in the production of all self-manufactured tablets and four of five illicit tablets studied. Upon classification, the concentration of the colorant was estimated using a photon propagation model and a single reference measurement of a tablet of known concentration. The estimated concentrations showed a high correlation with the actual values (R(2) = 0.9374). The achieved color information, combined with other physical and chemical characteristics, can provide a powerful tool for the comparison of tablet seizures, which may reveal their origin. PMID:23683098

  6. A 256 channel 8-Bit current digitizer ASIC for the Belle-II PXD

    NASA Astrophysics Data System (ADS)

    Knopf, J.; Fischer, P.; Kreidl, C.; Peric, I.

    2011-01-01

    The international DEPFET collaboration is developing a silicon pixel vertex detector (PXD), based on monolithic arrays of DEPFET transistors, for the future physics experiment Belle-II at the SuperKEKB particle accelerator in Japan. The matrix elements are read out in a 'rolling shutter mode', i.e., rows are selected consecutively and all columns are read out in each cycle of < 100 ns. One of the major parts of the front-end electronics chain is the DEPFET Current Digitizer ASIC (DCDB), which is now in a close-to-final state. The chip provides 256 channels of analog-to-digital converters with a resolution of six to eight bits. Each converter features an individual dynamic offset correction circuit as well as programmable gain and bandwidth. Several operation modes using single sampling or correlated double sampling are possible. A large synthesized digital block is used for decoding and derandomization of the conversion results. The data are put out on eight 8-bit links operating at a speed of 400 MHz. Additionally, a JTAG-compatible interface is implemented for configuration and debugging purposes. Significant effort was made to reduce the power consumption of the DCDB, since both the voltage drop on the internal power buses and heat sources in the Belle-II experiment are a concern. The chip was realized on a 3.2 mm × 5 mm die using the UMC 180 nm CMOS technology in a multi-project wafer run provided by EuroPractice. An extra redistribution metal layer with bump bond pads is used, allowing the chip to be flipped onto the final all-silicon DEPFET sensor module. Several tests have been performed in order to prove the chip's operation and its quality in terms of noise. The results are presented.

  7. Color image quality in projection displays: a case study

    NASA Astrophysics Data System (ADS)

    Strand, Monica; Hardeberg, Jon Y.; Nussbaum, Peter

    2005-01-01

    Recently the use of projection displays has increased dramatically in different applications such as digital cinema, home theatre, and business and educational presentations. Even if the color image quality of these devices has improved significantly over the years, it is still a common situation for users of projection displays that the projected colors differ significantly from the intended ones. The study presented in this paper attempts to analyze the color image quality of a large set of projection display devices, particularly investigating the variations in color reproduction. As a case study, a set of 14 projectors (LCD and DLP technology) at Gjovik University College have been tested under four different conditions: dark and light room, with and without using an ICC profile. To find out more about the importance of the illumination conditions in a room, and the degree of improvement when using an ICC profile, the results from the measurements were processed and analyzed. Eye-One Beamer from GretagMacbeth was used to make the profiles. The color image quality was evaluated both visually and by color difference calculations. The results from the analysis indicated large visual and colorimetric differences between the projectors. Our DLP projectors generally have smaller color gamuts than the LCD projectors. The color gamuts of older projectors are significantly smaller than those of newer ones. The amount of ambient light reaching the screen is of great importance for the visual impression. If too much reflection and other ambient light reaches the screen, the projected image becomes pale and has low contrast. When using a profile, the differences in colors between the projectors get smaller and the colors appear more correct. For one device, the average ΔE*ab color difference when compared to a relative white reference was reduced from 22 to 11, for another from 13 to 6. Blue colors have the largest variations among the projection displays and makes them

  8. Color image quality in projection displays: a case study

    NASA Astrophysics Data System (ADS)

    Strand, Monica; Hardeberg, Jon Y.; Nussbaum, Peter

    2004-10-01

    Recently the use of projection displays has increased dramatically in different applications such as digital cinema, home theatre, and business and educational presentations. Even if the color image quality of these devices has improved significantly over the years, it is still a common situation for users of projection displays that the projected colors differ significantly from the intended ones. The study presented in this paper attempts to analyze the color image quality of a large set of projection display devices, particularly investigating the variations in color reproduction. As a case study, a set of 14 projectors (LCD and DLP technology) at Gjøvik University College have been tested under four different conditions: dark and light room, with and without using an ICC profile. To find out more about the importance of the illumination conditions in a room, and the degree of improvement when using an ICC profile, the results from the measurements were processed and analyzed. Eye-One Beamer from GretagMacbeth was used to make the profiles. The color image quality was evaluated both visually and by color difference calculations. The results from the analysis indicated large visual and colorimetric differences between the projectors. Our DLP projectors generally have smaller color gamuts than the LCD projectors. The color gamuts of older projectors are significantly smaller than those of newer ones. The amount of ambient light reaching the screen is of great importance for the visual impression. If too much reflection and other ambient light reaches the screen, the projected image becomes pale and has low contrast. When using a profile, the differences in colors between the projectors get smaller and the colors appear more correct. For one device, the average ΔE*ab color difference when compared to a relative white reference was reduced from 22 to 11, for another from 13 to 6. Blue colors have the largest variations among the projection displays and makes them

  9. Color image reproduction: the evolution from print to multimedia

    NASA Astrophysics Data System (ADS)

    MacDonald, Lindsay W.

    1997-02-01

    The electronic pre-press industry has undergone a very rapid evolution over the past decade, driven by the accelerating performance of desktop computers and affordable application software for image manipulation, page layout and color separation. These have been supported by the steady development of color scanners, digital cameras, proof printers, RIPs and image setters, all of which make the process of reproducing color images in print easier than ever before. But is color print itself in decline as a medium? New channels of delivery for digital color images include CD-ROM, wideband networks and the Internet, with soft-copy screen display competing with hard-copy print for applications ranging from corporate brochures to home shopping. Present indications are that the most enduring of the graphic arts skills in the new multimedia world will be image rendering and production control rather than those related to photographic film and ink on paper.

  10. Colorful holographic imaging reconstruction based on one thin phase plate

    NASA Astrophysics Data System (ADS)

    Zhu, Jing; Song, Qiang; Wang, Jian; Yue, Weirui; Zhang, Fang; Huang, Huijie

    2014-11-01

    A method of realizing color holographic imaging using a single thin diffractive optical element (DOE) is proposed. This method can reconstruct a two-dimensional color image with one phase plate at a user-defined distance from the DOE. To improve the resolution of the reproduced color images, the DOE is optimized by combining the Gerchberg-Saxton algorithm with a compensation algorithm. To accelerate the computational process, a Graphics Processing Unit (GPU) is used. Finally, simulation results were analyzed to verify the validity of this method.

  11. Improving color saturation for color managed images rendered using the perceptual intent

    NASA Astrophysics Data System (ADS)

    Marcu, Gabriel G.

    2008-01-01

    In many cases, rendering images using a color management approach may result in unsatisfactory color, particularly when the gamut mismatch is large and the source/destination profile pair does not lead to a satisfactory color. This is more often the case when images on laptop computer screens with limited color gamut are transferred to print and color management is used. For those cases, we present a method of improving image quality by manipulating the display profile such that the color quality of the printouts is not compromised by the small gamut of the portable display and color management. The basic idea consists of using, in the color management pipeline, a virtual gamut that plays the role of either the source or the destination, depending on the type of transformation and the gamut sizes of the source and destination in the color management pipeline. If the mismatch between the source and destination gamuts is under a threshold, the virtual gamut is not used. This virtual gamut is constructed directly in the CIE 1931 chromaticity diagram, although other color spaces may be used. A procedure to derive a constant hue line from two adjacent lines is presented. The chromaticities of the virtual gamut are computed based on the replaced gamut chromaticities and a weighting factor computed automatically at the time of rendering. The method gives very pleasing results in prints, for example, and the boost in saturation closely approximates the color enhancement achieved in silver halide photographic prints, even for relatively modest print media.

  12. Reflectance model for recto-verso color halftone images

    NASA Astrophysics Data System (ADS)

    Tian, Dongwen; Wang, Qingjuan; Zhang, Yixin

    2012-01-01

    In the color reproduction process, accurately predicting the color of recto-verso images and establishing a spectral reflectance model for halftone images are major concerns in imaging quality control. The scattering of light within the paper and the penetration of ink into the substrate are the key factors affecting color reproduction. A reflectance model for recto-verso color halftone prints that considers these factors is introduced in this paper. The model is based on the assumptions that the colorant is non-scattering and that the paper is a strongly scattering substrate. By accounting for the multiple internal reflections of light between the paper substrate and the print-air interface, and for light along oblique paths as in the Williams-Clapper model, we propose a precise spectral reflectance prediction model for recto-verso halftone images. In this study, the model also takes into account ink spreading, a phenomenon that occurs when printing an ink halftone in superposition with one or several solid inks. The ink-spreading model includes nominal-to-effective dot area coverage functions for each of the different ink overprint conditions, obtained by least-squares curve fitting, so the physical dot-gain functions of the various overprint halftones are given. This model provides a theoretical foundation for the color prediction analysis of recto-verso halftone images and the development of image quality detection systems.
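
    To make the least-squares fitting step concrete, the following sketch estimates a nominal-to-effective dot area coverage function from hypothetical reflectance measurements. It uses the simpler Murray-Davies relation rather than the recto-verso model itself, and all numeric values are placeholders.

```python
import numpy as np

# Illustrative data: nominal coverages and measured halftone reflectances.
nominal = np.array([0.0, 0.25, 0.50, 0.75, 1.0])
r_meas  = np.array([0.85, 0.62, 0.43, 0.28, 0.18])
r_paper, r_solid = 0.85, 0.18   # unprinted paper and solid-ink reflectance

# Murray-Davies effective coverage of each halftone patch.
effective = (r_paper - r_meas) / (r_paper - r_solid)

# Least-squares fit of a low-order polynomial mapping nominal -> effective coverage.
coeffs = np.polyfit(nominal, effective, deg=3)
effective_of = np.poly1d(coeffs)
print(effective_of(0.4))  # estimated effective coverage at 40% nominal coverage
```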

  13. Colored three-dimensional reconstruction of vehicular thermal infrared images

    NASA Astrophysics Data System (ADS)

    Sun, Shaoyuan; Leung, Henry; Shen, Zhenyi

    2015-06-01

    Enhancement of vehicular night vision thermal infrared images is an important problem in intelligent vehicles. We propose to create a colorful three-dimensional (3-D) display of infrared images for the vehicular night vision assistant driving system. We combine the plane parameter Markov random field (PP-MRF) model-based depth estimation with classification-based infrared image colorization to perform colored 3-D reconstruction of vehicular thermal infrared images. We first train the PP-MRF model to learn the relationship between superpixel features and plane parameters. The infrared images are then colorized and we perform superpixel segmentation and feature extraction on the colorized images. The PP-MRF model is used to estimate the superpixel plane parameter and to analyze the structure of the superpixels according to the characteristics of vehicular thermal infrared images. Finally, we estimate the depth of each pixel to perform 3-D reconstruction. Experimental results demonstrate that the proposed method can give a visually pleasing and daytime-like colorful 3-D display from a monochromatic vehicular thermal infrared image, which can help drivers to have a better understanding of the environment.

  14. A New Color Image of the Crab Nebula

    NASA Astrophysics Data System (ADS)

    Wainscoat, R. J.; Kormendy, K.

    1997-03-01

    A new color image of the Crab Nebula is presented. This is a 2782 × 1904 pixel mosaic of CCD frames taken through B (blue), V (green), and R (red) filters; it was carefully color balanced so that the Sun would appear white. The resolution of the final image is approximately 0.8 arcsec FWHM. The technique by which this image was constructed is described, and some aspects of the structure of the Crab Nebula revealed by the image are discussed. We also discuss the weaknesses of this technique for producing "true-color" images, and describe how our image would differ from what the human eye might see in a very large wide-field telescope. The structure of the inner part of the synchrotron nebula is compared with recent high-resolution images from the Hubble Space Telescope and from the Canada-France-Hawaii Telescope. (SECTION: Interstellar Medium and Nebulae)

  15. Ultrasound, color - normal umbilical cord (image)

    MedlinePlus

    ... is a normal color Doppler ultrasound of the umbilical cord performed at 30 weeks gestation. The cord ... the cord, two arteries and one vein. The umbilical cord is connected to the placenta, located in ...

  16. Four-Channel, 8 x 8 Bit, Two-Dimensional Parallel Transmission by use of Space-Code-Division Multiple-Access Encoder and Decoder Modules.

    PubMed

    Nakamura, M; Kitayama, K; Igasaki, Y; Kaneda, K

    1998-07-10

    We experimentally demonstrate four-channel multiplexing of 64-bit (8 x 8) two-dimensional (2-D) parallel data links on the basis of optical space-code-division multiple access (CDMA) by using new modules of optical spatial encoders and a decoder with a new high-contrast 9-m-long image fiber with 3 × 10^4 cores. Each 8 x 8 bit plane (64-bit parallel data) is optically encoded with an 8 x 8, 2-D optical orthogonal signature pattern. The encoded bit planes are spatially multiplexed and transmitted through an image fiber. A receiver can recover the intended input bit plane by means of an optical decoding process. This result should encourage the application of optical space-CDMA to future high-throughput 2-D parallel data links connecting massively parallel processors. PMID:18285889

  17. Semi-Automated Segmentation of Microbes in Color Images

    NASA Astrophysics Data System (ADS)

    Reddy, Chandankumar K.; Liu, Feng-I.; Dazzo, Frank B.

    2003-01-01

    The goal of this work is to develop a system that can semi-automate the detection of multicolored foreground objects in digitized color images that also contain complex and very noisy backgrounds. Although considered a general problem of color image segmentation, our application is microbiology where various colored stains are used to reveal information on the microbes without cultivation. Instead of providing a simple threshold, the proposed system offers an interactive environment whereby the user chooses multiple sample points to define the range of color pixels comprising the foreground microbes of interest. The system then uses the color and spatial distances of these target points to segment the microbes from the confusing background of pixels whose RGB values lie outside the newly defined range and finally finds each cell's boundary using region-growing and mathematical morphology. Some other image processing methods are also applied to enhance the resultant image containing the colored microbes against a noise-free background. The prototype performs with 98% accuracy on a test set compared to ground truth data. The system described here will have many applications in image processing and analysis where one needs to segment typical pixel regions of similar but non-identical colors.
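
    A minimal sketch of the interactive segmentation idea follows: user-selected sample points define the target color range, pixels are kept according to their color distance from those samples, and morphology cleans the mask. The function names, tolerances, and cleanup steps are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from scipy import ndimage

def segment_by_sample_colors(rgb, sample_points, color_tol=30.0, min_size=50):
    """rgb: (H, W, 3) image; sample_points: list of (row, col) picked by the user."""
    samples = np.array([rgb[y, x].astype(float) for y, x in sample_points])
    # Distance of every pixel to its nearest sample color (Euclidean in RGB).
    dists = np.min(
        np.linalg.norm(rgb[..., None, :].astype(float) - samples[None, None], axis=-1),
        axis=-1,
    )
    mask = dists < color_tol
    # Morphological cleanup: close small gaps, then drop tiny regions.
    mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_size))
    return keep
```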

  18. Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy

    NASA Technical Reports Server (NTRS)

    Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers ("CTIS"s) having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3.RTM. digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.

  19. Skin image reconstruction using Monte Carlo based color generation

    NASA Astrophysics Data System (ADS)

    Aizu, Yoshihisa; Maeda, Takaaki; Kuwahara, Tomohiro; Hirao, Tetsuji

    2010-11-01

    We propose a novel method of skin image reconstruction based on color generation using Monte Carlo simulation of spectral reflectance in the nine-layered skin tissue model. The RGB image and spectral reflectance of human skin are obtained by RGB camera and spectrophotometer, respectively. The skin image is separated into the color component and texture component. The measured spectral reflectance is used to evaluate scattering and absorption coefficients in each of the nine layers which are necessary for Monte Carlo simulation. Various skin colors are generated by Monte Carlo simulation of spectral reflectance in given conditions for the nine-layered skin tissue model. The new color component is synthesized to the original texture component to reconstruct the skin image. The method is promising for applications in the fields of dermatology and cosmetics.

  20. MUNSELL COLOR ANALYSIS OF LANDSAT COLOR-RATIO-COMPOSITE IMAGES OF LIMONITIC AREAS IN SOUTHWEST NEW MEXICO.

    USGS Publications Warehouse

    Kruse, Fred A.

    1984-01-01

    Green areas on Landsat 4/5 - 4/6 - 6/7 (red - blue - green) color-ratio-composite (CRC) images represent limonite on the ground. Color variation on such images was analyzed to determine the causes of the color differences within and between the green areas. Digital transformation of the CRC data into the modified cylindrical Munsell color coordinates - hue, value, and saturation - was used to correlate image color characteristics with properties of surficial materials. The amount of limonite visible to the sensor is the primary cause of color differences in green areas on the CRCs. Vegetation density is a secondary cause of color variation of green areas on Landsat CRC images. Digital color analysis of Landsat CRC images can be used to map unknown areas. Color variations of green pixels allow discrimination among limonitic bedrock, nonlimonitic bedrock, nonlimonitic alluvium, and limonitic alluvium.
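
    The transformation from ratio-composite values to cylindrical color coordinates can be sketched with a standard HSV conversion, which only approximates the modified Munsell coordinates used in the study; the input array below is a random placeholder.

```python
import colorsys
import numpy as np

crc = np.random.rand(4, 4, 3)  # placeholder for normalized CRC image data

# Convert every CRC pixel to hue/saturation/value coordinates.
hsv = np.empty_like(crc)
for i in range(crc.shape[0]):
    for j in range(crc.shape[1]):
        r, g, b = crc[i, j]
        hsv[i, j] = colorsys.rgb_to_hsv(r, g, b)

# Green hues (candidate limonitic areas) can then be selected and ranked
# by their value and saturation components.
green_mask = (hsv[..., 0] > 0.2) & (hsv[..., 0] < 0.45)
```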

  1. Color image digitization and analysis for drum inspection

    SciTech Connect

    Muller, R.C.; Armstrong, G.A.; Burks, B.L.; Kress, R.L.; Heckendorn, F.M.; Ward, C.R.

    1993-05-01

    A rust inspection system that uses color analysis to find rust spots on drums has been developed. The system is composed of high-resolution color video equipment that permits the inspection of rust spots on the order of 0.25 cm (0.1 in.) in diameter. Because of the modular nature of the system design and the use of open systems software (X11, etc.), the inspection system can be easily integrated into other environmental restoration and waste management programs. The inspection system represents an excellent platform for the integration of other color inspection and color image processing algorithms.

  2. Pixel classification based color image segmentation using quaternion exponent moments.

    PubMed

    Wang, Xiang-Yang; Wu, Zhi-Fang; Chen, Liang; Zheng, Hong-Liang; Yang, Hong-Ying

    2016-02-01

    Image segmentation remains an important, but hard-to-solve, problem since it appears to be application dependent, with usually no a priori information available regarding the image structure. In recent years, many image segmentation algorithms have been developed, but they are often very complex and some undesired results occur frequently. In this paper, we propose a pixel classification based color image segmentation using quaternion exponent moments. Firstly, the pixel-level image feature is extracted based on quaternion exponent moments (QEMs), which can effectively capture the image pixel content by considering the correlation between different color channels. Then, the pixel-level image feature is used as input to a twin support vector machines (TSVM) classifier, and the TSVM model is trained by selecting the training samples with Arimoto entropy thresholding. Finally, the color image is segmented with the trained TSVM model. The proposed scheme has the following advantages: (1) the effective QEMs are introduced to describe color image pixel content, which consider the correlation between different color channels, and (2) the excellent TSVM classifier is utilized, which has lower computation time and higher classification accuracy. Experimental results show that our proposed method has very promising segmentation performance compared with the state-of-the-art segmentation approaches recently proposed in the literature. PMID:26618250
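
    The pixel-classification stage can be sketched as below. A standard SVM stands in for the twin SVM, and a simple local-mean feature stands in for the quaternion exponent moments, so this is only a structural illustration of the pipeline, not the authors' method.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.svm import SVC

def pixel_features(rgb, window=3):
    """Placeholder per-pixel feature: local mean of each color channel (not QEMs)."""
    smoothed = uniform_filter(rgb.astype(float), size=(window, window, 1))
    return smoothed.reshape(-1, 3)

def segment(rgb, train_idx, train_labels):
    """train_idx: flat indices of labeled pixels; train_labels: their class labels."""
    feats = pixel_features(rgb)
    clf = SVC(kernel="rbf").fit(feats[train_idx], train_labels)
    return clf.predict(feats).reshape(rgb.shape[:2])
```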

  3. Color Image Restoration Using Nonlocal Mumford-Shah Regularizers

    NASA Astrophysics Data System (ADS)

    Jung, Miyoun; Bresson, Xavier; Chan, Tony F.; Vese, Luminita A.

    We introduce several color image restoration algorithms based on the Mumford-Shah model and nonlocal image information. The standard Ambrosio-Tortorelli and Shah models are defined to work in a small local neighborhood, which is sufficient to denoise smooth regions with sharp boundaries. However, textures are not local in nature and require semi-local/non-local information to be denoised efficiently. Inspired by recent work (NL-means of Buades, Coll, Morel and NL-TV of Gilboa, Osher), we extend the standard Ambrosio-Tortorelli and Shah approximations to Mumford-Shah functionals to work with nonlocal information, for better restoration of fine structures and textures. We present several applications of the proposed nonlocal MS regularizers in image processing such as color image denoising, color image deblurring in the presence of Gaussian or impulse noise, color image inpainting, and color image super-resolution. In the formulation of nonlocal variational models for image deblurring with impulse noise, we propose an efficient preprocessing step for the computation of the weight function w. In all the applications, the proposed nonlocal regularizers produce superior results over the local ones, especially in image inpainting with large missing regions. Experimental results and comparisons between the proposed nonlocal methods and the local ones are shown.
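
    For reference, a commonly used form of the local Ambrosio-Tortorelli approximation that these nonlocal regularizers extend can be written as follows; the notation is generic and not necessarily that of the paper.

```latex
E_\varepsilon(u, v) \;=\; \beta \int_\Omega (u - f)^2 \, dx
\;+\; \int_\Omega v^2 \, |\nabla u|^2 \, dx
\;+\; \alpha \int_\Omega \left( \varepsilon \, |\nabla v|^2 + \frac{(1 - v)^2}{4\varepsilon} \right) dx ,
```

    where f is the observed image, u the restored image, and v an edge-indicator function that approaches 0 near discontinuities; the nonlocal variants replace the local gradient with one built from patch-similarity weights.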

  4. Color image based sorter for separating red and white wheat

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A simple imaging system was developed to inspect and sort wheat samples and other grains at moderate feed-rates (30 kernels/s or 3.5 kg/h). A single camera captured color images of three sides of each kernel by using mirrors, and the images were processed using a personal computer (PC). The camer...

  5. Photographic copy of computer enhanced color photographic image. Photographer and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Photographic copy of computer enhanced color photographic image. Photographer and computer draftsman unknown. Original photographic image located in the office of Modjeski and Masters, Consulting Engineers at 1055 St. Charles Avenue, New Orleans, LA 70130. COMPUTER ENHANCED COLOR PHOTOGRAPH SHOWING THE PROPOSED HUEY P. LONG BRIDGE WIDENING LOOKING FROM THE WEST BANK TOWARD THE EAST BANK. - Huey P. Long Bridge, Spanning Mississippi River approximately midway between nine & twelve mile points upstream from & west of New Orleans, Jefferson, Jefferson Parish, LA

  6. A New Color Correction Method for Underwater Imaging

    NASA Astrophysics Data System (ADS)

    Bianco, G.; Muzzupappa, M.; Bruno, F.; Garcia, R.; Neumann, L.

    2015-04-01

    Recovering correct or at least realistic colors of underwater scenes is a very challenging issue for imaging techniques, since illumination conditions in a refractive and turbid medium such as the sea are seriously altered. The need to correct the colors of underwater images or videos is an important task required in all image-based applications like 3D imaging, navigation, documentation, etc. Many image enhancement methods have been proposed in the literature for these purposes. The advantage of these methods is that they do not require knowledge of the medium's physical parameters, while some image adjustments can be performed manually (such as histogram stretching) or automatically by algorithms based on criteria suggested by computational color constancy methods. One of the most popular criteria is based on the gray-world hypothesis, which assumes that the average of the captured image should be gray. An interesting application of this assumption is performed in the Ruderman opponent color space lαβ, used in a previous work for hue correction of images captured under colored light sources, which allows the luminance component of the scene to be separated from its chromatic components. In this work, we present the first proposal for color correction of underwater images using the lαβ color space. In particular, the chromatic components are changed by moving their distributions around the white point (white balancing), and histogram cutoff and stretching of the luminance component is performed to improve image contrast. The experimental results demonstrate the effectiveness of this method under the gray-world assumption and supposing uniform illumination of the scene. Moreover, due to its low computational cost it is suitable for real-time implementation.
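
    A minimal sketch of this kind of correction is given below: the image is taken into the Ruderman lαβ space, the chromatic channels are centered (gray-world white balancing), and the luminance channel is stretched. The RGB-LMS matrix follows Reinhard et al.'s color-transfer formulation, and the percentile cutoff is an illustrative choice rather than the paper's exact procedure.

```python
import numpy as np

RGB2LMS = np.array([[0.3811, 0.5783, 0.0402],
                    [0.1967, 0.7244, 0.0782],
                    [0.0241, 0.1288, 0.8444]])
LMS2LAB = np.diag([1/np.sqrt(3), 1/np.sqrt(6), 1/np.sqrt(2)]) @ \
          np.array([[1, 1, 1], [1, 1, -2], [1, -1, 0]], dtype=float)

def correct_underwater(rgb):
    rgb = np.clip(rgb.astype(float), 1e-6, None)       # avoid log(0)
    lms = rgb @ RGB2LMS.T
    lab = np.log10(lms) @ LMS2LAB.T
    lab[..., 1] -= lab[..., 1].mean()                   # gray-world: center alpha
    lab[..., 2] -= lab[..., 2].mean()                   # gray-world: center beta
    l = lab[..., 0]                                      # luminance channel
    lo, hi = np.percentile(l, [2, 98])                   # 2%-98% cutoff
    lab[..., 0] = (l - lo) / (hi - lo) * (l.max() - l.min()) + l.min()
    lms = 10 ** (lab @ np.linalg.inv(LMS2LAB).T)
    return lms @ np.linalg.inv(RGB2LMS).T
```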

  7. Comparison of perceptual color spaces for natural image segmentation tasks

    NASA Astrophysics Data System (ADS)

    Correa-Tome, Fernando E.; Sanchez-Yanez, Raul E.; Ayala-Ramirez, Victor

    2011-11-01

    Color image segmentation largely depends on the color space chosen. Furthermore, spaces that show perceptual uniformity seem to outperform others due to their emulation of the human perception of color. We evaluate three perceptual color spaces, CIELAB, CIELUV, and RLAB, in order to determine their contribution to natural image segmentation and to identify the space that obtains the best results over a test set of images. The nonperceptual color space RGB is also included for reference purposes. In order to quantify the quality of resulting segmentations, an empirical discrepancy evaluation methodology is discussed. The Berkeley Segmentation Dataset and Benchmark is used in test series, and two approaches are taken to perform the experiments: supervised pixelwise classification using reference colors, and unsupervised clustering using k-means. A majority filter is used as a postprocessing stage, in order to determine its contribution to the result. Furthermore, a comparison of elapsed times taken by the required transformations is included. The main finding of our study is that the CIELUV color space outperforms the other color spaces in both discriminatory performance and computational speed, for the average case.
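
    A minimal sketch of the unsupervised approach described above: pixels are converted to a perceptual color space (CIELUV here) and clustered with k-means, after which a majority filter could be applied as post-processing. The number of clusters is an illustrative choice.

```python
import numpy as np
from skimage import color
from sklearn.cluster import KMeans

def kmeans_segmentation(rgb, n_clusters=5):
    """rgb: (H, W, 3) image with values in [0, 1]."""
    luv = color.rgb2luv(rgb)                          # RGB -> CIELUV
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        luv.reshape(-1, 3)
    )
    return labels.reshape(rgb.shape[:2])
```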

  8. Dominant color correlogram descriptor for content-based image retrieval

    NASA Astrophysics Data System (ADS)

    Fierro-Radilla, Atoany; Perez-Daniel, Karina; Nakano-Miyatake, Mariko; Benois, Jenny

    2015-03-01

    Content-based image retrieval (CBIR) has become an interesting and urgent research topic due to the increasing need for indexing and classification of multimedia content in large databases. Low-level visual descriptors, such as color-based, texture-based and shape-based descriptors, have been used for the CBIR task. In this paper we propose a color-based descriptor which describes image contents well, integrating both the global features provided by the dominant colors and the local features provided by the color correlogram. The performance of the proposed descriptor, called the Dominant Color Correlogram descriptor (DCCD), is evaluated by comparing it with some MPEG-7 visual descriptors and other color-based descriptors reported in the literature, using two image datasets with different sizes and contents. The performance of the proposed descriptor is assessed using three different metrics commonly used in the image retrieval task: ARP (Average Retrieval Precision), ARR (Average Retrieval Rate) and ANMRR (Average Normalized Modified Retrieval Rank). Also, precision-recall curves are provided to show the better performance of the proposed descriptor compared with other color-based descriptors.

  9. Stereoscopic high-speed imaging using additive colors

    NASA Astrophysics Data System (ADS)

    Sankin, Georgy N.; Piech, David; Zhong, Pei

    2012-04-01

    An experimental system for digital stereoscopic imaging produced by using a high-speed color camera is described. Two bright-field image projections of a three-dimensional object are captured utilizing additive-color backlighting (blue and red). The two images are simultaneously combined on a two-dimensional image sensor using a set of dichromatic mirrors, and stored for off-line separation of each projection. This method has been demonstrated in analyzing cavitation bubble dynamics near boundaries. This technique may be useful for flow visualization and in machine vision applications.
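
    Separating the two additively combined projections amounts to reading out the sensor's red and blue channels; a minimal sketch is shown below, ignoring any crosstalk between the channels or losses in the dichromatic mirrors.

```python
import numpy as np

def split_projections(rgb_frame):
    """rgb_frame: (H, W, 3) frame from the high-speed color camera."""
    view_red = rgb_frame[..., 0]   # projection illuminated by the red backlight
    view_blue = rgb_frame[..., 2]  # projection illuminated by the blue backlight
    return view_red, view_blue
```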

  10. Perceived assessment metrics for visible and infrared color fused image quality without reference image

    NASA Astrophysics Data System (ADS)

    Yu, Xuelian; Chen, Qian; Gu, Guohua; Ren, Jianle; Sui, Xiubao

    2015-02-01

    Designing an objective quality assessment for color-fused images is a very demanding and challenging task. We propose four no-reference metrics based on human visual system characteristics for objectively evaluating the quality of false-color fusion images. The perceived edge metric (PEM) is defined based on a visual perception model and the color image gradient similarity between the fused image and the source images. The perceptual contrast metric (PCM) is established by associating multi-scale contrast and a varying contrast sensitivity filter (CSF) with the color components. A linear combination of the standard deviation and mean value over the fused image constructs the image colorfulness metric (ICM). The color comfort metric (CCM) is designed from the average saturation and the ratio of pixels with high and low saturation. The qualitative and quantitative experimental results demonstrate that the proposed metrics agree well with subjective perception.

  11. Color calibration of swine gastrointestinal tract images acquired by radial imaging capsule endoscope

    NASA Astrophysics Data System (ADS)

    Ou-Yang, Mang; Jeng, Wei-De; Lai, Chien-Cheng; Wu, Hsien-Ming; Lin, Jyh-Hung

    2016-01-01

    The types of illumination systems and color filters used typically generate varying levels of color difference in capsule endoscopes, which influences medical diagnoses. In order to calibrate the color difference caused by the optical system, this study applied a radial imaging capsule endoscope (RICE) to photograph standard color charts, which were then employed to calculate the color gamut of the RICE. The color gamut was also measured using a spectrometer in order to get high-precision color information, and the results obtained using both methods were compared. Subsequently, color-correction methods, namely polynomial transform and conformal mapping, were used to reduce the color difference. Before color calibration, the color difference value caused by the optical system in the RICE was 21.45±1.09. Through the proposed polynomial transformation, the color difference could be reduced effectively to 1.53±0.07. With the alternative conformal mapping, the color difference value was further reduced to 1.32±0.11, which is imperceptible to the human eye because it is <1.5. Real-time color correction was then achieved using this algorithm combined with a field-programmable gate array, and the results of the color correction can be viewed in real-time images.
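
    A least-squares polynomial color correction of the kind mentioned above can be sketched as follows; the set of second-order terms and the chart data are illustrative assumptions, not the calibration actually used for the RICE.

```python
import numpy as np

def poly_terms(rgb):
    """Expand RGB values into a second-order polynomial term set."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    ones = np.ones_like(r)
    return np.stack([ones, r, g, b, r*g, r*b, g*b, r*r, g*g, b*b], axis=-1)

def fit_correction(measured, reference):
    """measured, reference: (N, 3) arrays of color-chart patch values."""
    A = poly_terms(measured)                        # (N, 10) design matrix
    M, *_ = np.linalg.lstsq(A, reference, rcond=None)
    return M                                        # (10, 3) correction matrix

def apply_correction(rgb, M):
    return poly_terms(rgb) @ M
```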

  12. Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This DS consists of the locally enhanced ALOS image mosaics for each of the 24 mineral project areas (referred to herein as areas of interest), whose locality names, locations, and main mineral occurrences are shown on the index map of Afghanistan (fig. 1). ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency, but the image processing has altered the original pixel structure and all image values of the JAXA

  13. Minimized-Laplacian residual interpolation for color image demosaicking

    NASA Astrophysics Data System (ADS)

    Kiku, Daisuke; Monno, Yusuke; Tanaka, Masayuki; Okutomi, Masatoshi

    2014-03-01

    A color difference interpolation technique is widely used for color image demosaicking. In this paper, we propose a minimized-Laplacian residual interpolation (MLRI) as an alternative to the color difference interpolation, where the residuals are differences between observed and tentatively estimated pixel values. In the MLRI, we estimate the tentative pixel values by minimizing the Laplacian energies of the residuals. This residual image transformation allows us to interpolate more easily than the standard color difference transformation. We incorporate the proposed MLRI into the gradient based threshold free (GBTF) algorithm, which is one of the current state-of-the-art demosaicking algorithms. Experimental results demonstrate that our proposed demosaicking algorithm can outperform the state-of-the-art algorithms for the 30 images of the IMAX and Kodak datasets.

  14. Lensfree color imaging on a nanostructured chip using compressive decoding

    PubMed Central

    Khademhosseinieh, Bahar; Biener, Gabriel; Sencan, Ikbal; Ozcan, Aydogan

    2010-01-01

    We demonstrate subpixel level color imaging capability on a lensfree incoherent on-chip microscopy platform. By using a nanostructured substrate, the incoherent emission from the object plane is modulated to create a unique far-field diffraction pattern corresponding to each point at the object plane. These lensfree diffraction patterns are then sampled in the far-field using a color sensor-array, where the pixels have three different types of color filters at red, green, and blue (RGB) wavelengths. The recorded RGB diffraction patterns (for each point on the structured substrate) form a basis that can be used to rapidly reconstruct any arbitrary multicolor incoherent object distribution at subpixel resolution, using a compressive sampling algorithm. This lensfree computational imaging platform could be quite useful to create a compact fluorescent on-chip microscope that has color imaging capability. PMID:21173866

  15. Full-color holographic 3D imaging system using color optical scanning holography

    NASA Astrophysics Data System (ADS)

    Kim, Hayan; Kim, You Seok; Kim, Taegeun

    2016-06-01

    We propose a full-color holographic three-dimensional imaging system that comprises a recording stage, a transmission and processing stage, and a reconstruction stage. In the recording stage, color optical scanning holography (OSH) records the complex RGB holograms of an object. In the transmission and processing stage, the recorded complex RGB holograms are transmitted to the reconstruction stage after conversion to off-axis RGB holograms. In the reconstruction stage, the off-axis RGB holograms are reconstructed optically.

  16. Color calculations for and perceptual assessment of computer graphic images

    SciTech Connect

    Meyer, G.W.

    1986-01-01

    Realistic image synthesis involves the modelling of an environment in accordance with the laws of physics and the production of a final simulation that is perceptually acceptable. To be considered a scientific endeavor, synthetic image generation should also include the final step of experimental verification. This thesis concentrates on the color calculations that are inherent in the production of the final simulation and on the perceptual assessment of the computer graphic images that result. The fundamental spectral sensitivity functions that are active in the human visual system are introduced and are used to address color-blindness issues in computer graphics. A digitally controlled color television monitor is employed to successfully implement both the Farnsworth-Munsell 100 hue test and a new color vision test that yields more accurate diagnoses. Images that simulate color-blind vision are synthesized and are used to evaluate color scales for data display. Gaussian quadrature is used with a set of opponent fundamentals to select the wavelengths at which to perform synthetic image generation.

  17. Color transformation for the compression of CMYK images

    NASA Astrophysics Data System (ADS)

    de Queiroz, Ricardo L.

    1999-12-01

    A CMYK image is often viewed as a large amount of device- dependent data ready to be printed. In several circumstances, CMYK data needs to be compressed, but the conversion to and from device-independent spaces is imprecise at best. In this paper, with the goal of compressing CMYK images, color space transformations were studied. In order to have a practical importance we developed a new transformation to a YYCC color space, which is device-independent and image-independent, i.e. a simple linear transformation between device-dependent color spaces. The transformation from CMYK to YYCC was studied extensively in image compression. For that a distortion measure that would account for both device-dependence and spatial visual sensitivity has been developed. It is shown that transformation to YYCC consistently outperforms the transformation to other device-dependent 4D color spaces such as YCbCrK, while being competitive with the image- dependent KLT-based approach. Other interesting conclusions were also drawn from the experiments, among them the fact that color transformations are not always advantageous over independent compression of CMYK color planes and the fact that chrominance subsampling is rarely advantageous.

  18. Weighted color and texture sample selection for image matting.

    PubMed

    Varnousfaderani, Ehsan Shahrian; Rajan, Deepu

    2013-11-01

    Color sampling based matting methods find the best known samples for foreground and background colors of unknown pixels. Such methods do not perform well if there is an overlap in the color distribution of foreground and background regions because color cannot distinguish between these regions and hence, the selected samples cannot reliably estimate the matte. Furthermore, current sampling based matting methods choose samples that are located around the boundaries of foreground and background regions. In this paper, we overcome these two problems. First, we propose texture as a feature that can complement color to improve matting by discriminating between known regions with similar colors. The contribution of texture and color is automatically estimated by analyzing the content of the image. Second, we combine local sampling with a global sampling scheme that prevents true foreground or background samples to be missed during the sample collection stage. An objective function containing color and texture components is optimized to choose the best foreground and background pair among a set of candidate pairs. Experiments are carried out on a benchmark data set and an independent evaluation of the results shows that the proposed method is ranked first among all other image matting methods. PMID:23807448

  19. Multiple color-image authentication system using HSI color space and QR decomposition in gyrator domains

    NASA Astrophysics Data System (ADS)

    Rafiq Abuturab, Muhammad

    2016-06-01

    A new multiple color-image authentication system based on the HSI (Hue-Saturation-Intensity) color space and QR decomposition in gyrator domains is proposed. In this scheme, the original color images are converted from RGB (Red-Green-Blue) color space to HSI color space and divided into their H, S, and I components, from which the corresponding phase-encoded components are obtained. All the phase-encoded H, S, and I components are individually multiplied and then modulated by random phase functions. The modulated H, S, and I components are convolved into a single gray image with an asymmetric cryptosystem. The resulting image is separated into Q and R parts by QR decomposition. Finally, they are independently gyrator transformed to obtain their encoded parts. The encoded Q and R parts must both be gathered for decryption. The angles of the gyrator transform provide sensitive keys. The protocol, based on QR decomposition of the encoded matrix and recovery of the decoded matrix by multiplying the matrices Q and R, enhances the security level. The random phase keys, individual phase keys, and asymmetric phase keys provide high robustness to the cryptosystem. Numerical simulation results demonstrate that this scheme is superior to the existing techniques.
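
    The QR-decomposition step can be sketched as below: the encoded gray image is factored into Q and R, and only their product restores the matrix. The gyrator transforms and phase keys of the full scheme are omitted, and the input is a random placeholder.

```python
import numpy as np

encoded = np.random.rand(256, 256)   # placeholder for the encoded gray image

Q, R = np.linalg.qr(encoded)         # decomposition into the two parts
recovered = Q @ R                    # both parts are needed for recovery
assert np.allclose(recovered, encoded)
```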

  20. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the North Bamyan mineral district in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the North Bamyan mineral district, which has copper deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such

  1. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Ahankashan mineral district in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Ahankashan mineral district, which has copper and gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008, 2009, 2010),but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this

  2. False-color composite image of Raco, Michigan

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This image is a false color composite of Raco, Michigan, centered at 46.39 north latitude and 84.88 west longitude. This image was acquired by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) on the 20th orbit of the Shuttle Endeavour. The area shown is approximately 20 kilometers by 50 kilometers. Raco is located at the eastern end of Michigan's upper peninsula, west of Sault Ste. Marie and south of Whitefish Bay on Lake Superior. In this color representation, darker areas in the image are smooth surfaces such as frozen lakes and other non-forested areas. The colors are related to the types of trees, and the brightness is related to the amount of plant material covering the surface, called forest biomass. The Jet Propulsion Laboratory alternative photo number is P-43882.

  3. Color image encoding in DOST domain using DWT and SVD

    NASA Astrophysics Data System (ADS)

    Kumar, Manoj; Agrawal, Smita

    2015-12-01

    In this paper, a new color image encoding and decoding technique based on Discrete Orthonormal Stockwell Transform (DOST) using Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD) is proposed. The images are encrypted using bands of DOST and wavelets along with singular values of wavelet subbands. We have used the number of bands of DOST, values and arrangement of some predefined parameters using singular values of all wavelet subbands and arrangement of wavelet subbands as encoding and decoding keys in all three color planes. To ensure the correct decoding of the encoded image, it is necessary to use all the keys in correct order along with their exact values. The comparison of our technique with one of the recently proposed techniques and experimental results is used to analyze the effectiveness of the proposed technique. The proposed technique can be used for transmitting a color image more securely and efficiently through both secured and unsecured communication network.

  4. Color image enhancement based on HVS and MSRCR

    NASA Astrophysics Data System (ADS)

    Xue, Rong kun; Li, Yu feng

    2015-10-01

    Frequent inclement weather such as clouds, fog, and rain sharply reduces the light intensity on illuminated objects, making the captured scenes unclear, of poor visual quality, and of low contrast. To improve the overall quality of such images, especially badly illuminated ones, this paper proposes a new color image enhancement algorithm based on multi-scale Retinex with a color restoration factor (MSRCR) and the human visual system (HVS). It can effectively solve the color balance problem of digital images by removing the influence of illumination and obtaining component images that reflect the reflectance of the object surface, while reducing the impact of non-artificial factors and overcoming ringing effects and human interference. Through experimental comparisons that evaluate parameters of the enhanced image, such as variance, average gradient, and sharpness, against traditional image enhancement methods such as histogram equalization and adaptive histogram equalization, the MSRCR algorithm is shown to be effective in improving image contrast, detail, and color fidelity.
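
    A minimal sketch of the MSRCR core is given below: several Gaussian-blurred estimates of the illumination are subtracted in the log domain, and a color restoration factor counteracts the desaturation of plain multi-scale Retinex. The scales and gain/offset constants are common illustrative values, not necessarily those used in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def msrcr(rgb, sigmas=(15, 80, 250), alpha=125.0, beta=46.0, gain=1.0, offset=0.0):
    img = rgb.astype(float) + 1.0                      # avoid log(0)
    # Multi-scale Retinex: average of log(image) - log(Gaussian-blurred image).
    msr = np.zeros_like(img)
    for sigma in sigmas:
        blurred = np.stack(
            [gaussian_filter(img[..., c], sigma) for c in range(3)], axis=-1
        )
        msr += np.log(img) - np.log(blurred)
    msr /= len(sigmas)
    # Color restoration factor compensates the desaturation of plain MSR.
    crf = beta * (np.log(alpha * img) - np.log(img.sum(axis=-1, keepdims=True)))
    out = gain * crf * msr + offset
    # Rescale to [0, 255] for display.
    out = (out - out.min()) / (out.max() - out.min() + 1e-12)
    return (255 * out).astype(np.uint8)
```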

  5. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the South Bamyan mineral district in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the South Bamyan mineral district, which has areas with a spectral reflectance anomaly that require field investigation. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2007, 2008),but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that

  6. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Nuristan mineral district in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Nuristan mineral district, which has gem, lithium, and cesium deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008,2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS

  7. Color image reproduction based on multispectral and multiprimary imaging: experimental evaluation

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Masahiro; Teraji, Taishi; Ohsawa, Kenro; Uchiyama, Toshio; Motomura, Hideto; Murakami, Yuri; Ohyama, Nagaaki

    2001-12-01

    Multispectral imaging is a significant technology for the acquisition and display of accurate color information. Natural color reproduction under arbitrary illumination becomes possible using spectral information of both the image and the illumination light. In addition, multiprimary color displays, i.e., displays using more than three primary colors, have also been developed for the reproduction of an expanded color gamut and for discounting observer metamerism. In this paper, we present the concept of multispectral data interchange for natural color reproduction, and experimental results using a 16-band multispectral camera and a 6-primary color display. In the experiment, the accuracy of color reproduction is evaluated in CIE ΔE*ab for both the image capture and display systems. The average and maximum ΔE*ab are 1.0 and 2.1 for the 16-band multispectral camera system, using the Macbeth 24 color patches. For the six-primary color projection display, the average and maximum ΔE*ab are 1.3 and 2.7 with 30 test colors inside the display gamut. Moreover, color reproduction results with different spectral distributions but the same CIE tristimulus values are visually compared, and it is confirmed that the 6-primary display gives improved agreement between the original and reproduced colors.
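
    The ΔE*ab values quoted above are Euclidean distances in CIELAB; a minimal sketch of the CIE 1976 formula follows, with made-up Lab triplets as the example.

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    """CIE 1976 color difference between two CIELAB triplets (or arrays of them)."""
    lab1, lab2 = np.asarray(lab1, float), np.asarray(lab2, float)
    return np.linalg.norm(lab1 - lab2, axis=-1)

# A just-noticeable difference is often quoted as roughly 1 unit.
print(delta_e_ab([52.0, 10.0, -5.0], [53.0, 11.0, -4.0]))  # ~1.73
```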

  8. Visual cryptography for JPEG color images

    NASA Astrophysics Data System (ADS)

    Sudharsanan, Subramania I.

    2004-10-01

    There have been a large number of methods proposed for encrypting images by shared key encryption mechanisms. All the existing techniques are applicable to primarily non-compressed images. However, most imaging applications including digital photography, archiving, and internet communications nowadays use images in the JPEG domain. Application of the existing shared key cryptographic schemes for these images requires conversion back into spatial domain. In this paper we propose a shared key algorithm that works directly in the JPEG domain, thus enabling shared key image encryption for a variety of applications. The scheme directly works on the quantized DCT coefficient domain and the resulting noise-like shares are also stored in the JPEG format. The decryption process is lossless. Our experiments indicate that each share image is approximately the same size as the original JPEG retaining the storage advantage provided by JPEG.

  9. Color impact in visual attention deployment considering emotional images

    NASA Astrophysics Data System (ADS)

    Chamaret, C.

    2012-03-01

    Color is a predominant factor in the human visual attention system. Even if it is not sufficient for a global or complete understanding of a scene, it may impact visual attention deployment. We propose to study the impact of color, as well as the emotional aspect of pictures, on visual attention deployment. An eye-tracking campaign was conducted involving twenty people, each viewing half of the database pictures in full color and the other half in grayscale. The eye fixations on color and black-and-white images were highly correlated, raising the question of how such cues should be integrated in the design of visual attention models. Indeed, the predictions of two state-of-the-art computational models show similar results for the two color categories. Similarly, the study of saccade amplitude and fixation duration versus viewing time did not reveal any significant differences between the two categories. In addition, the spatial coordinates of eye fixations provide an interesting indicator for investigating differences in visual attention deployment over time and fixation number. The second factor, related to emotion categories, shows evidence of inter-category differences between color and grayscale eye fixations for passive and positive emotions. The particular aspect associated with this category induces a specific behavior, based rather on high frequencies, where the color components influence visual attention deployment.

  10. Perceptual quality metric of color quantization errors on still images

    NASA Astrophysics Data System (ADS)

    Pefferkorn, Stephane; Blin, Jean-Louis

    1998-07-01

    A new metric for the assessment of color image coding quality is presented in this paper. Two models of chromatic and achromatic error visibility have been investigated, incorporating many aspects of human vision and color perception. The achromatic model accounts for both retinal and cortical phenomena such as visual sensitivity to spatial contrast and orientation. The chromatic metric is based on a multi-channel model of human color vision that is parameterized for video coding applications using psychophysical experiments, assuming that the perception of color quantization errors can be assimilated to the perception of supra-threshold local color differences. The final metric is a merging of the chromatic model and the achromatic model, which accounts for phenomena such as masking. The metric is tested on 6 real images at 5 quality levels using subjective assessments. The high correlation between objective and subjective scores shows that the described metric accurately rates the rendition of important features of the image such as color contours and textures.

  11. Image visualization based on MPEG-7 color descriptors

    NASA Astrophysics Data System (ADS)

    Meiers, Thomas; Czernoch-Peters, H.; Ihlenburg, L.; Sikora, Thomas

    2000-05-01

    In this paper we address the user-navigation through large volumes of image data. A similarity-measure based on MPEG-7 color histograms is introduced and Multidimensional Scaling concepts are employed to display images in two dimensions according to their mutual similarities. With such a view the user can easily see relations and color similarity between images and understand the structure of the data base. In order to cope with large volumes of images a modified version of k-means clustering technique is introduced which identifies representative image samples for each cluster. Representative images (up to 100) are then displayed in two dimensions using MDS structuring. The modified clustering technique proposed produces a hierarchical structure of clusters--similar to street maps with various resolutions of details. The user can zoom into various cluster levels to obtain more or less details if required. The results obtained verify the attractiveness of the approach for navigation and retrieval applications.
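
    A minimal sketch of the map-layout step is shown below: pairwise distances between color histograms are fed to multidimensional scaling to obtain 2-D display coordinates. The histogram representation and the L1 distance are illustrative stand-ins for the MPEG-7 color descriptors used in the paper.

```python
import numpy as np
from sklearn.manifold import MDS

def layout_from_histograms(histograms):
    """histograms: (n_images, n_bins) array of normalized color histograms."""
    h = np.asarray(histograms, float)
    # Pairwise L1 distances between histograms.
    dist = np.abs(h[:, None, :] - h[None, :, :]).sum(axis=-1)
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    return mds.fit_transform(dist)      # (n_images, 2) screen coordinates
```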

  12. Weighted MinMax Algorithm for Color Image Quantization

    NASA Technical Reports Server (NTRS)

    Reitan, Paula J.

    1999-01-01

    The maximum intercluster distance and the maximum quantization error that are minimized by the MinMax algorithm are shown to be inappropriate error measures for color image quantization. A fast and effective (image-quality-improving) method for generalizing activity weighting to any histogram-based color quantization algorithm is presented. A new non-hierarchical color quantization technique called weighted MinMax, which is a hybrid between the MinMax and Linde-Buzo-Gray (LBG) algorithms, is also described. The weighted MinMax algorithm incorporates activity weighting and seeks to minimize the WRMSE, thereby obtaining high-quality quantized images with significantly less visual distortion than the MinMax algorithm.

  13. An Effective and Fast Hybrid Framework for Color Image Retrieval

    NASA Astrophysics Data System (ADS)

    Walia, Ekta; Vesal, Sulaiman; Pal, Aman

    2014-11-01

    This paper presents a novel, fast and effective hybrid framework for color image retrieval that combines all the low-level features, giving higher retrieval accuracy than other such systems. The color moments (CMs), angular radial transform descriptor and edge histogram descriptor (EHD) features are exploited to capture color, shape and texture information, respectively. A multistage framework is designed to imitate human perception so that, in the first stage, images are retrieved based on their CMs, and then the shape and texture descriptors are utilized to identify the closest matches in the second stage. The scheme employs division of images into non-overlapping regions for effective computation of the CM and EHD features. To demonstrate the efficacy of this framework, experiments are conducted on the Wang, VisTex and OT-Scene databases. In spite of its multistage design, the system is observed to be faster than other hybrid approaches.
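
    The first-stage color-moment feature can be sketched as follows: the mean, standard deviation, and skewness of each channel are computed over non-overlapping regions. The grid size and the use of RGB (rather than any particular color space) are illustrative assumptions.

```python
import numpy as np
from scipy.stats import skew

def color_moments(rgb, grid=(2, 2)):
    """rgb: (H, W, 3) image; returns concatenated per-region color moments."""
    h, w, _ = rgb.shape
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = rgb[i*h//grid[0]:(i+1)*h//grid[0],
                        j*w//grid[1]:(j+1)*w//grid[1]].reshape(-1, 3).astype(float)
            feats.extend(block.mean(axis=0))   # first moment
            feats.extend(block.std(axis=0))    # second moment
            feats.extend(skew(block, axis=0))  # third moment
    return np.array(feats)
```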

  14. Digital watermarking for color images in hue-saturation-value color space

    NASA Astrophysics Data System (ADS)

    Tachaphetpiboon, Suwat; Thongkor, Kharittha; Amornraksa, Thumrongrat; Delp, Edward J.

    2014-05-01

    This paper proposes a new watermarking scheme for color images, in which all pixels of the image are used for embedding watermark bits in order to achieve the highest amount of embedding. For watermark embedding, the S component in the hue-saturation-value (HSV) color space is used to carry the watermark bits, while the V component is used in accordance with a human visual system model to determine the proper watermark strength. In the proposed scheme, the number of watermark bits equals the number of pixels in the host image. Watermark extraction is accomplished blindly based on the use of a 3×3 spatial domain Wiener filter. The efficiency of our proposed image watermarking scheme depends mainly on the accuracy of the estimate of the original S component. The experimental results show that the performance of the proposed scheme, under no attacks and against various types of attacks, was superior to the previous existing watermarking schemes.

  15. Web Services for Dynamic Coloring of UAVSAR Images

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Pierce, Marlon; Donnellan, Andrea; Parker, Jay

    2015-08-01

    QuakeSim has implemented a service-based Geographic Information System to enable users to access large amounts of Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) data through an online interface. The QuakeSim Interferometric Synthetic Aperture Radar (InSAR) profile tool calculates radar-observed displacement (from an unwrapped interferogram product) along user-specified lines. Pre-rendered thumbnails with InSAR fringe patterns are used to display interferogram and unwrapped phase images on a Google Map in the InSAR profile tool. One challenge with this tool lies in the user visually identifying regions of interest when drawing the profile line. This requires that the user correctly interpret the InSAR imagery, which currently uses fringe patterns. The mapping between pixel color and pixel value is not a one-to-one relationship from the InSAR fringe pattern, and it causes difficulty in understanding general displacement information for QuakeSim users. The goal of this work is to generate color maps that directly reflect the pixel values (displacement) as an addition to the pre-rendered images. Because of an extremely uneven distribution of pixel values on an InSAR image, a histogram-based, nonlinear color template generation algorithm is currently under development. A web service enables on-the-fly coloring of UAVSAR images with dynamically generated color templates.

  16. SCID: full reference spatial color image quality metric

    NASA Astrophysics Data System (ADS)

    Ouni, S.; Chambah, M.; Herbin, M.; Zagrouba, E.

    2009-01-01

    The most commonly used full-reference image quality assessments are error-based methods, computed with pixel-based difference metrics such as Delta E (ΔE), MSE, and PSNR, which define only a local fidelity of color. These metrics do not correlate well with perceived image quality because they omit the properties of the human visual system (HVS), so they cannot be reliable predictors of perceived visual quality. All of these metrics compute differences pixel by pixel, whereas the human visual system is more sensitive to global quality. In this paper, we present a novel full-reference color metric based on characteristics of the human visual system that considers the notion of adjacency. This metric, called SCID for Spatial Color Image Difference, is more perceptually correlated than other color differences such as Delta E. The suggested full-reference metric is generic and independent of image distortion type, and it can be used in different applications such as compression and restoration.

  17. Restoration of cloud contaminated ocean color images using numerical simulation

    NASA Astrophysics Data System (ADS)

    Yang, Xuefei; Mao, Zhihua; Chen, Jianyu; Huang, Haiqing

    2015-10-01

    Cloud-free remote sensing data are difficult to obtain, especially for ocean color images. A cloud removal approach for ocean color satellite images based on numerical modeling is introduced. The approach removes cloud-contaminated portions and then reconstructs the missing data using model-simulated values. The basic idea is to create a relationship between cloud-free patches and cloud-contaminated patches under the assumption that both are influenced by the same marine hydrodynamic conditions. First, we select cloud-free GOCI (Geostationary Ocean Color Imager) retrievals of suspended sediment concentration (SSC) in the East China Sea before and after the time of the cloudy images, which serve as the initial field and the validation data for the numerical model, respectively. Second, a sediment transport model based on COHERENS, a coupled hydrodynamic-ecological ocean model for regional and shelf seas, is configured. The comparison between simulated results and validation images shows that the sediment transport model can reproduce the actual sediment distribution and transport in the East China Sea. The simulated SSCs corresponding to the cloudy portions are then used to remove the cloud and fill in the missing values. Finally, the accuracy of the results is assessed by visual and statistical analysis. The experimental results demonstrate that the proposed method can effectively remove cloud from GOCI images and reconstruct the missing data, offering a new and practically significant way to enhance the effectiveness and availability of ocean color data.
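
    The core reconstruction step, replacing cloud-masked SSC pixels with the model-simulated values for the same time, can be sketched as follows; the array names are placeholders and the model run itself is not shown.

    ```python
    import numpy as np

    def fill_clouds(observed_ssc, simulated_ssc, cloud_mask):
        """Replace cloud-contaminated pixels of a GOCI-retrieved SSC map with
        values simulated by the sediment transport model for the same time."""
        filled = observed_ssc.copy()
        filled[cloud_mask] = simulated_ssc[cloud_mask]
        return filled
    ```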

  18. Implementation of a multi-spectral color imaging device without color filter array

    NASA Astrophysics Data System (ADS)

    Langfelder, G.; Longoni, A. F.; Zaraga, F.

    2011-01-01

    In this work the use of the Transverse Field Detector (TFD) as a device for multispectral image acquisition is proposed. The TFD is a color imaging pixel capable of color reconstruction without color filters. Its working principle is the generation of a suitable electric field configuration inside a silicon depleted region by means of biasing voltages applied to surface contacts. With respect to previously proposed methods for multispectral capture, the TFD has the unique characteristic of electrically tunable spectral responses. This feature allows capturing an image with different sets of spectral responses (RGB, R'G'B', and so on) simply by tuning the device biasing voltages across multiple captures. In this way no hardware complexity (no external filter wheels or varying sources) is added with respect to a colorimetric device. In this work, the spectral reflectance of the area imaged by a TFD pixel is estimated as a linear combination of six eigenfunctions. It is shown that a spectral reconstruction can be obtained either (1) using two subsequent image captures that generate six TFD spectral responses or (2) using a new asymmetric biasing scheme, which implements five spectral responses for each TFD pixel site in a single configuration, thus allowing one-shot multispectral imaging.
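
    A minimal sketch of the reflectance estimate, assuming the six channel spectral responses and the six eigenfunctions are available as matrices over a common wavelength grid (neither is published in the abstract, so the inputs here are placeholders); the eigenfunction coefficients are recovered by least squares.

    ```python
    import numpy as np

    def reconstruct_reflectance(measurements, sensitivities, eigenfunctions):
        """Estimate a spectral reflectance as a linear combination of six
        eigenfunctions from six TFD channel responses.

        measurements   : (6,)   channel values from the two captures (or the
                                asymmetric-bias single shot)
        sensitivities  : (6, L) assumed spectral responses of the six channels
        eigenfunctions : (6, L) basis functions over L wavelength samples
        """
        # Each measurement m_i = s_i . (E^T c), so m = (S E^T) c.
        A = sensitivities @ eigenfunctions.T            # (6, 6) system matrix
        coeffs, *_ = np.linalg.lstsq(A, measurements, rcond=None)
        return eigenfunctions.T @ coeffs                # reflectance over L samples
    ```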

  19. Color correction with blind image restoration based on multiple images using a low-rank model

    NASA Astrophysics Data System (ADS)

    Li, Dong; Xie, Xudong; Lam, Kin-Man

    2014-03-01

    We present a method that can handle the color correction of multiple photographs with blind image restoration simultaneously and automatically. We prove that the local colors of a set of images of the same scene exhibit the low-rank property locally, both before and after a color-correction operation. This property allows us to correct all kinds of errors in an image under a low-rank matrix model without particular priors or assumptions. The possible errors may be caused by changes of viewpoint, large illumination variations, gross pixel corruptions, partial occlusions, etc. Furthermore, a new iterative soft-segmentation method is proposed for local color transfer using color influence maps. Because the correct color information and the spatial information of the images can be recovered using the low-rank model, more precise color correction and many other image-restoration tasks, including image denoising, image deblurring, and gray-scale image colorization, can be performed simultaneously. Experiments have verified that our method achieves consistent and promising results on uncontrolled real photographs acquired from the Internet and that it outperforms current state-of-the-art methods.
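
    The low-rank property can be illustrated with a truncated SVD over corresponding local patches from several images of the same scene. This is only a stand-in for the robust low-rank recovery the authors use (their optimization also separates sparse errors); it shows the structure of the model, not the paper's algorithm.

    ```python
    import numpy as np

    def low_rank_denoise(patches, rank=2):
        """Keep only the leading singular components of a stack of
        corresponding local color patches.

        patches : (n_images, n_pixels) matrix, one vectorized patch per row
        """
        U, s, Vt = np.linalg.svd(patches, full_matrices=False)
        s[rank:] = 0
        return (U * s) @ Vt   # low-rank approximation of the patch stack
    ```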

  20. Two-color ghost imaging with enhanced angular resolving power

    SciTech Connect

    Karmakar, Sanjit; Shih, Yanhua

    2010-03-15

    This article reports an experimental demonstration on nondegenerate, two-color, biphoton ghost imaging which reproduced a ghost image with enhanced angular resolving power by means of a greater field of view compared with that of classical imaging. With the same imaging magnification, the enhanced angular resolving power and field of view compared with those of classical imaging are 1.25:1 and 1.16:1, respectively. The enhancement of angular resolving power depends on the ratio between the idler and the signal photon frequencies, and the enhancement of the field of view depends mainly on the same ratio and also on the distances of the object plane and the imaging lens from the two-photon source. This article also reports the possibility of reproducing a ghost image with the enhancement of the angular resolving power by means of a greater imaging amplification compared with that of classical imaging.

  1. Digital images for eternity: color microfilm as archival medium

    NASA Astrophysics Data System (ADS)

    Normand, C.; Gschwind, R.; Fornaro, P.

    2007-01-01

    In the archiving and museum communities, the long-term preservation of artworks has traditionally been guaranteed by making duplicates of the original. For photographic reproductions, digital imaging devices have now become standard, providing better quality control and lower costs than film photography. However, due to the very short life cycle of digital data, losses are unavoidable without repetitive data migrations to new file formats and storage media. We present a solution for the long-term archiving of digital images on color microfilm (Ilfochrome® Micrographic). This extremely stable and high-resolution medium, combined with the use of a novel laser film recorder is particularly well suited for this task. Due to intrinsic limitations of the film, colorimetric reproductions of the originals are not always achievable. The microfilm must be first considered as an information carrier and not primarily as an imaging medium. Color transformations taking into account the film characteristics and possible degradations of the medium due to aging are investigated. An approach making use of readily available color management tools is presented which assures the recovery of the original colors after re-digitization. An extension of this project considering the direct recording of digital information as color bit-code on the film is also introduced.

  2. Electrical Characterization of Hughes HCMP 1852D and RCA CDP1852D 8-bit, CMOS, I/O Ports

    NASA Technical Reports Server (NTRS)

    Stokes, R. L.

    1979-01-01

    Twenty-five Hughes HCMP 1852D and 25 RCA CDP1852D 8-bit, CMOS, I/O port microcircuits underwent electrical characterization tests. All electrical measurements were performed on a Tektronix S-3260 Test System. Before electrical testing, the devices were subjected to a 168-hour burn-in at 125 C with the inputs biased at 10V. Four of the Hughes parts became inoperable during testing. They exhibited functional failures and out-of-range parametric measurements after a few runs of the test program.

  3. Color image processing and object tracking workstation

    NASA Technical Reports Server (NTRS)

    Klimek, Robert B.; Paulick, Michael J.

    1992-01-01

    A system is described for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the microgravity combustion and fluid science experiments at NASA Lewis. The system consists of individual hardware parts working under computer control to achieve a high degree of automation. The most important hardware parts include a 16-mm film projector, a lens system, a video camera, an S-VHS tapedeck, a frame grabber, and some storage and output devices. Both the projector and the tapedeck have a computer interface enabling remote control. Tracking software was developed to control the overall operation. In the automatic mode, the main tracking program controls the frame incrementation of the projector or the tapedeck, grabs a frame, processes it, locates the edge of the objects being tracked, and stores the coordinates in a file. This process is performed repeatedly until the last frame is reached. Three representative applications are described. These applications represent typical uses and include tracking the propagation of a flame front, tracking the movement of a liquid-gas interface with extremely poor visibility, and characterizing a diffusion flame according to color and shape.

  4. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Tourmaline mineral district in Afghanistan: Chapter J in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Tourmaline mineral district, which has tin deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products

  5. Predicting beef tenderness using color and multispectral image texture features.

    PubMed

    Sun, X; Chen, K J; Maddock-Carlin, K R; Anderson, V L; Lepper, A N; Schwartz, C A; Keller, W L; Ilse, B R; Magolski, J D; Berg, E P

    2012-12-01

    The objective of this study was to investigate the usefulness of raw meat surface characteristics (texture) in predicting cooked beef tenderness. Color and multispectral texture features, including 4 different wavelengths and 217 image texture features, were extracted from 2 laboratory-based multispectral camera imaging systems. Steaks were segregated into tough and tender classification groups based on Warner-Bratzler shear force. The texture features were submitted to STEPWISE multiple regression and support vector machine (SVM) analyses to establish prediction models for beef tenderness. A subsample (80%) of tender- or tough-classified steaks was used to train the models, which were then validated on the remaining (20%) test steaks. For color images, the SVM model identified tender steaks with 100% accuracy, while the STEPWISE equation identified 94.9% of the tender steaks correctly. For multispectral images, the SVM model achieved 91% and STEPWISE 87% average accuracy in predicting beef tenderness. PMID:22647652
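
    A hedged sketch of the classification protocol described above (80/20 split, SVM on texture features) using scikit-learn; the feature arrays here are random placeholders, and the kernel and preprocessing choices are assumptions rather than the study's settings.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    # Placeholder data: texture features per steak and tender (1) / tough (0)
    # labels derived from Warner-Bratzler shear force.
    X = np.random.rand(100, 217)
    y = np.random.randint(0, 2, 100)

    # 80/20 train/validation split, as in the study.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)

    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    model.fit(X_train, y_train)
    print("validation accuracy:", model.score(X_test, y_test))
    ```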

  6. Outstanding-objects-oriented color image segmentation using fuzzy logic

    NASA Astrophysics Data System (ADS)

    Hayasaka, Rina; Zhao, Jiying; Matsushita, Yutaka

    1997-10-01

    This paper presents a novel fuzzy-logic-based color image segmentation scheme focusing on objects that stand out to human eyes. The scheme first segments the image into rough fuzzy regions, chooses visually significant regions, and conducts fine segmentation on the chosen regions. It not only reduces the computational load but also makes contour detection easier, because the rough object outlines have been determined beforehand. The scheme reflects human visual perception and can be used efficiently in automatic extraction of image-retrieval keys, robot vision, and region-adaptive image compression.

  7. Color Image Classification Using Block Matching and Learning

    NASA Astrophysics Data System (ADS)

    Kondo, Kazuki; Hotta, Seiji

    In this paper, we propose block matching and learning for color image classification. In our method, training images are partitioned into small blocks. Given a test image, it is also partitioned into small blocks, and mean-blocks corresponding to each test block are calculated with neighbor training blocks. Our method classifies a test image into the class that has the shortest total sum of distances between mean blocks and test ones. We also propose a learning method for reducing memory requirement. Experimental results show that our classification outperforms other classifiers such as support vector machine with bag of keypoints.
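
    A minimal sketch of the block matching rule described above, assuming non-overlapping 8x8 blocks and a mean block formed from the k nearest training blocks of each class; the block size, k, and the distance measure are assumptions, and the memory-reducing learning step is omitted.

    ```python
    import numpy as np

    def to_blocks(img, b=8):
        """Partition an image into non-overlapping b x b color blocks (flattened)."""
        h, w, c = img.shape
        img = img[:h - h % b, :w - w % b]
        blocks = img.reshape(h // b, b, w // b, b, c).swapaxes(1, 2)
        return blocks.reshape(-1, b * b * c).astype(float)

    def classify(test_img, train_imgs, train_labels, b=8, k=3):
        """Assign the class whose mean blocks are closest, in total, to the
        test image's blocks."""
        test_blocks = to_blocks(test_img, b)
        scores = {}
        for label in set(train_labels):
            class_blocks = np.vstack([to_blocks(im, b)
                                      for im, l in zip(train_imgs, train_labels)
                                      if l == label])
            total = 0.0
            for tb in test_blocks:
                d = np.linalg.norm(class_blocks - tb, axis=1)
                mean_block = class_blocks[np.argsort(d)[:k]].mean(axis=0)
                total += np.linalg.norm(tb - mean_block)
            scores[label] = total
        return min(scores, key=scores.get)
    ```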

  8. Multi-color magnetic particle imaging for cardiovascular interventions

    NASA Astrophysics Data System (ADS)

    Haegele, Julian; Vaalma, Sarah; Panagiotopoulos, Nikolaos; Barkhausen, Jörg; Vogt, Florian M.; Borgert, Jörn; Rahmer, Jürgen

    2016-08-01

    Magnetic particle imaging (MPI) uses magnetic fields to visualize the spatial distribution of superparamagnetic iron oxide nanoparticles (SPIOs). Guidance of cardiovascular interventions is seen as one possible application of MPI. To safely guide interventions, the vessel lumen as well as all required interventional devices have to be visualized and be discernible from each other. Until now, different tracer concentrations were used for discerning devices from blood in MPI, because only one type of SPIO could be imaged at a time. Recently, it was shown for 3D MPI that it is possible to separate different signal sources in one volume of interest, i.e. to visualize and discern different SPIOs or different binding states of the same SPIO. The approach was termed multi-color MPI. In this work, the use of multi-color MPI for differentiation of a SPIO coated guide wire (Terumo Radifocus 0.035″) from the lumen of a vessel phantom filled with diluted Resovist is demonstrated. This is achieved by recording dedicated system functions of the coating material containing solid Resovist and of liquid Resovist, which allows separation of their respective signal in the image reconstruction process. Assigning a color to the different signal sources results in a differentiation of guide wire and vessel phantom lumen into colored images.

  9. Multi-color magnetic particle imaging for cardiovascular interventions.

    PubMed

    Haegele, Julian; Vaalma, Sarah; Panagiotopoulos, Nikolaos; Barkhausen, Jörg; Vogt, Florian M; Borgert, Jörn; Rahmer, Jürgen

    2016-08-21

    Magnetic particle imaging (MPI) uses magnetic fields to visualize the spatial distribution of superparamagnetic iron oxide nanoparticles (SPIOs). Guidance of cardiovascular interventions is seen as one possible application of MPI. To safely guide interventions, the vessel lumen as well as all required interventional devices have to be visualized and be discernible from each other. Until now, different tracer concentrations were used for discerning devices from blood in MPI, because only one type of SPIO could be imaged at a time. Recently, it was shown for 3D MPI that it is possible to separate different signal sources in one volume of interest, i.e. to visualize and discern different SPIOs or different binding states of the same SPIO. The approach was termed multi-color MPI. In this work, the use of multi-color MPI for differentiation of a SPIO coated guide wire (Terumo Radifocus 0.035″) from the lumen of a vessel phantom filled with diluted Resovist is demonstrated. This is achieved by recording dedicated system functions of the coating material containing solid Resovist and of liquid Resovist, which allows separation of their respective signal in the image reconstruction process. Assigning a color to the different signal sources results in a differentiation of guide wire and vessel phantom lumen into colored images. PMID:27476675

  10. Hyperspectral imaging using RGB color for foodborne pathogen detection

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper reports the latest development of a color vision technique for detecting colonies of foodborne pathogens grown on agar plates with a hyperspectral image classification model that was developed using full hyperspectral data. The hyperspectral classification model depended on reflectance sp...

  11. Color Image of Phoenix Heat Shield and Bounce Mark

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This color image, acquired by the Mars Reconnaissance Orbiter's High Resolution Imaging Science Experiment camera, shows the Phoenix heat shield and its bounce mark on the Martian surface.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  12. Improved Calibration Shows Images True Colors

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Innovative Imaging and Research, located at Stennis Space Center, used a single SBIR contract with the center to build a large-scale integrating sphere, capable of calibrating a whole array of cameras simultaneously, at a fraction of the usual cost for such a device. Through the use of LEDs, the company also made the sphere far more efficient than existing products and able to mimic sunlight.

  13. Color Image Processing and Object Tracking System

    NASA Technical Reports Server (NTRS)

    Klimek, Robert B.; Wright, Ted W.; Sielken, Robert S.

    1996-01-01

    This report describes a personal computer based system for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the Microgravity Combustion and Fluids Science Research Programs at the NASA Lewis Research Center. The system consists of individual hardware components working under computer control to achieve a high degree of automation. The most important hardware components include 16-mm and 35-mm film transports, a high-resolution digital camera mounted on an x-y-z micro-positioning stage, an S-VHS tapedeck, a Hi8 tapedeck, a video laserdisk, and a framegrabber. All of the image input devices are remotely controlled by a computer. Software was developed to integrate the overall operation of the system, including device frame incrementation, grabbing of image frames, image processing of the object's neighborhood, locating the position of the object being tracked, and storing the coordinates in a file. This process is performed repeatedly until the last frame is reached. Several different tracking methods are supported. To illustrate the process, two representative applications of the system are described. These applications represent typical uses of the system and include tracking the propagation of a flame front and tracking the movement of a liquid-gas interface with extremely poor visibility.

  14. Luminosity and contrast normalization in color retinal images based on standard reference image

    NASA Astrophysics Data System (ADS)

    S. Varnousfaderani, Ehsan; Yousefi, Siamak; Belghith, Akram; Goldbaum, Michael H.

    2016-03-01

    Color retinal images are used, manually or automatically, for diagnosis and for monitoring the progression of retinal diseases. Color retinal images show large luminosity and contrast variability within and across images due to large natural variations in retinal pigmentation and complex imaging setups. Because the quality of retinal images may affect the performance of automatic screening tools, different normalization methods have been developed to standardize the data before any further analysis or processing. In this paper we propose a new, reliable method to remove non-uniform illumination in retinal images and to improve their contrast based on the contrast of a reference image. The non-uniform illumination is removed by normalizing the luminance image using the local mean and standard deviation. The contrast is then enhanced by shifting the histograms of the uniformly illuminated retinal image toward the histograms of the reference image so that their peaks align. This process improves the contrast without changing the inter-correlation of pixels across the different color channels. In compliance with the way humans perceive color, the uniform LUV color space is used for normalization. The proposed method is extensively tested on a large dataset of retinal images with different pathologies, such as exudates, lesions, hemorrhages, and cotton-wool spots, and under different illumination conditions and imaging setups. Results show that the proposed method successfully equalizes illumination and enhances the contrast of retinal images without adding any extra artifacts.
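
    A hedged sketch of the two normalization steps on a single luminance plane (the paper works on the L channel of LUV; the color-space conversion and the full histogram-peak alignment are omitted here): local mean/standard-deviation normalization followed by a simple match of mean and spread to the reference image. The window size is an assumption.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def normalize_illumination(lum, win=65, eps=1e-6):
        """Remove slowly varying illumination by normalizing the luminance
        plane with its local mean and local standard deviation."""
        mean = uniform_filter(lum, win)
        sq_mean = uniform_filter(lum ** 2, win)
        std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0)) + eps
        return (lum - mean) / std

    def match_to_reference(lum, ref_lum):
        """Align the image's global mean and spread with the reference image,
        a simplified stand-in for the histogram-peak alignment above."""
        return (lum - lum.mean()) / (lum.std() + 1e-6) * ref_lum.std() + ref_lum.mean()
    ```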

  15. Clinical skin imaging using color spatial frequency domain imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Yang, Bin; Lesicko, John; Moy, Austin J.; Reichenberg, Jason; Tunnell, James W.

    2016-02-01

    Skin diseases are typically associated with underlying biochemical and structural changes relative to normal tissue, which alter the optical properties of the skin lesions, such as tissue absorption and scattering. Although widely used in dermatology clinics, conventional dermatoscopes do not have the ability to image tissue absorption and scattering selectively, which may limit their diagnostic power. Here we report a novel clinical skin imaging technique called color spatial frequency domain imaging (cSFDI), which enhances contrast by rendering a color spatial frequency domain (SFD) image at high spatial frequency. Moreover, by tuning the spatial frequency, we can obtain both absorption-weighted and scattering-weighted images. We developed a handheld imaging system specifically for clinical skin imaging. The flexible configuration of the system allows better access to skin lesions in hard-to-reach regions. A total of 48 lesions from 31 patients were imaged under 470 nm, 530 nm, and 655 nm illumination at a spatial frequency of 0.6 mm^-1. The SFD reflectance images at 470 nm, 530 nm, and 655 nm were assigned to the blue (B), green (G), and red (R) channels to render a color SFD image. Our results indicated that color SFD images at f = 0.6 mm^-1 revealed properties that were not seen in standard color images. Structural features were enhanced and absorption features were reduced, which helped to identify the sources of the contrast. This imaging technique provides additional insights into skin lesions and may better assist clinical diagnosis.
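
    The color-rendering step is straightforward to sketch: the three single-wavelength SFD reflectance maps are stacked into the R, G, and B channels. Input scaling to [0, 1] is assumed; demodulation of the SFD reflectance itself is not shown.

    ```python
    import numpy as np

    def render_color_sfd(r470, r530, r655):
        """Assemble single-wavelength SFD reflectance maps into one color
        image: 655 nm -> R, 530 nm -> G, 470 nm -> B. Inputs are assumed to be
        2-D arrays on a common [0, 1] reflectance scale."""
        rgb = np.stack([r655, r530, r470], axis=-1)
        return (np.clip(rgb, 0, 1) * 255).astype(np.uint8)
    ```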

  16. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Farah mineral district in Afghanistan: Chapter FF in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.

    2014-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Farah mineral district, which has spectral reflectance anomalies indicative of copper, zinc, lead, silver, and gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA, 2007, 2008, 2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that

  17. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Khanneshin mineral district in Afghanistan: Chapter A in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Khanneshin mineral district, which has uranium, thorium, rare-earth-element, and apatite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008,2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be

  18. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Nalbandon mineral district in Afghanistan: Chapter L in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Nalbandon mineral district, which has lead and zinc deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2007, 2008, 2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As

  19. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Panjsher Valley mineral district in Afghanistan: Chapter M in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Panjsher Valley mineral district, which has emerald and silver-iron deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2009, 2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from

  20. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Balkhab mineral district in Afghanistan: Chapter B in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Balkhab mineral district, which has copper deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match

  1. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Kandahar mineral district in Afghanistan: Chapter Z in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Kandahar mineral district, which has bauxite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA,2006,2007,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS

  2. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Zarkashan mineral district in Afghanistan: Chapter G in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Zarkashan mineral district, which has copper and gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As

  3. Use of discrete chromatic space to tune the image tone in a color image mosaic

    NASA Astrophysics Data System (ADS)

    Zhang, Zuxun; Li, Zhijiang; Zhang, Jianqing; Zheng, Li

    2003-09-01

    Color image processing is an important problem. The main approach at present is to transfer the RGB color space into another color space, such as HIS (hue, intensity, and saturation), YIQ, or LUV. In practice, however, processing a color airborne image in just one color space may not be valid, because the electromagnetic wave is physically altered in every wave band while the color image is perceived on the basis of psychological vision. It is therefore necessary to propose an approach that accords with both the physical transformation and psychological perception. An analysis of how to use the relevant color spaces to process color airborne photographs is then discussed, and an application to tuning the image tone in a color airborne image mosaic is introduced. As a practical example, a complete approach to performing the mosaic on color airborne images by taking full advantage of the relevant color spaces is presented.

  4. Adaptation and the color statistics of natural images.

    PubMed

    Webster, M A; Mollon, J D

    1997-12-01

    Color perception depends profoundly on adaptation processes that adjust sensitivity in response to the prevailing pattern of stimulation. We examined how color sensitivity and appearance might be influenced by adaptation to the color distributions characteristic of natural images. Color distributions were measured for natural scenes by sampling an array of locations within each scene with a spectroradiometer, or by recording each scene with a digital camera successively through 31 interference filters. The images were used to reconstruct the L, M and S cone excitation at each spatial location, and the contrasts along three post-receptoral axes [L + M, L - M or S - (L + M)]. Individual scenes varied substantially in their mean chromaticity and luminance, in the principal color-luminance axes of their distributions, and in the range of contrasts in their distributions. Chromatic contrasts were biased along a relatively narrow range of bluish to yellowish-green angles, lying roughly between the S - (L + M) axis (which was more characteristic of scenes with lush vegetation and little sky) and a unique blue-yellow axis (which was more typical of arid scenes). For many scenes L - M and S - (L + M) signals were highly correlated, with weaker correlations between luminance and chromaticity. We use a two-stage model (von Kries scaling followed by decorrelation) to show how the appearance of colors may be altered by light adaptation to the mean of the distributions and by contrast adaptation to the contrast range and principal axes of the distributions; and we show that such adjustments are qualitatively consistent with empirical measurements of asymmetric color matches obtained after adaptation to successive random samples drawn from natural distributions of chromaticities and lightnesses. Such adaptation effects define the natural range of operating states of the visual system. PMID:9425544
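
    A minimal sketch of the two-stage model named above, von Kries scaling to the scene mean followed by decorrelation of the resulting cone contrasts; the whitening transform here is a generic eigen-decomposition stand-in for the paper's contrast-adaptation stage, not the authors' exact formulation.

    ```python
    import numpy as np

    def von_kries_and_decorrelate(cone_lms):
        """Two-stage adaptation sketch.

        cone_lms : (n_pixels, 3) array of L, M, S cone excitations
        Stage 1: von Kries scaling divides each cone signal by the scene mean,
        giving cone contrasts. Stage 2: the contrasts are decorrelated
        (whitened) using the scene's own covariance.
        """
        scaled = cone_lms / cone_lms.mean(axis=0)      # von Kries adaptation
        contrasts = scaled - 1.0
        cov = np.cov(contrasts, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)
        whiten = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + 1e-12)) @ eigvecs.T
        return contrasts @ whiten
    ```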

  5. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Haji-Gak mineral district in Afghanistan: Chapter C in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Haji-Gak mineral district, which has iron ore deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA,2006,2007), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products

  6. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Herat mineral district in Afghanistan: Chapter T in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Herat mineral district, which has barium and limestone deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008,2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As

  7. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Dusar-Shaida mineral district in Afghanistan: Chapter I in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Dusar-Shaida mineral district, which has copper and tin deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the

  8. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Katawas mineral district in Afghanistan: Chapter N in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Katawas mineral district, which has gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA

  9. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Kundalyan mineral district in Afghanistan: Chapter H in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Kundalyan mineral district, which has porphyry copper and gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As

  10. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Ghunday-Achin mineral district in Afghanistan, in Davis, P.A, compiler, Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Ghunday-Achin mineral district, which has magnesite and talc deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008,2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As

  11. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Aynak mineral district in Afghanistan: Chapter E in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Aynak mineral district, which has copper deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2008, 2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS

  12. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Kharnak-Kanjar mineral district in Afghanistan: Chapter K in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Kharnak-Kanjar mineral district, which has mercury deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008,2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such

  13. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Badakhshan mineral district in Afghanistan: Chapter F in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Badakhshan mineral district, which has gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products

  14. Restoration of color images by multichannel Kalman filtering

    NASA Technical Reports Server (NTRS)

    Galatsanos, Nikolas P.; Chin, Roland T.

    1991-01-01

    A Kalman filter for optimal restoration of multichannel images is presented. This filter is derived using a multichannel semicausal image model that includes between-channel degradation. Both stationary and nonstationary image models are developed. This filter is implemented in the Fourier domain and computation is reduced from O(Λ³N³M⁴) to O(Λ³N³M²) for an M x M N-channel image with degradation length Λ. Color (red, green, and blue (RGB)) images are used as examples of multichannel images, and restoration in the RGB and YIQ domains is investigated. Simulations are presented in which the effectiveness of this filter is tested for different types of degradation and different image model estimates.
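
    To illustrate why a Fourier-domain formulation is cheap, the sketch below restores each channel independently with a frequency-domain Wiener-style deconvolution. This is only a simplified stand-in for the multichannel Kalman filter described above; the function names, PSF, and noise-to-signal ratio are assumptions for illustration, not values from the paper.

```python
import numpy as np

def wiener_restore(channel, psf, noise_to_signal=1e-2):
    """Frequency-domain Wiener deconvolution of a single channel.

    Each spatial frequency is restored independently, which is what makes
    Fourier-domain implementations inexpensive for an M x M image beyond
    the FFT cost itself.
    """
    H = np.fft.fft2(psf, s=channel.shape)              # blur transfer function
    G = np.fft.fft2(channel)                           # degraded image spectrum
    W = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
    return np.real(np.fft.ifft2(W * G))

def restore_rgb(image, psf, nsr=1e-2):
    # Apply the per-channel restoration to an RGB (or YIQ) image.
    return np.dstack([wiener_restore(image[..., c], psf, nsr)
                      for c in range(image.shape[-1])])
```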

  15. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Baghlan mineral district in Afghanistan: Chapter P in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Baghlan mineral district, which has industrial clay and gypsum deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2006, 2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from

  16. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Bakhud mineral district in Afghanistan: Chapter U in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Bakhud mineral district, which has industrial fluorite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As

  17. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Uruzgan mineral district in Afghanistan: Chapter V in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Uruzgan mineral district, which has tin and tungsten deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2008, 2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such

  18. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the South Helmand mineral district in Afghanistan: Chapter O in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the South Helmand mineral district, which has travertine deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2008, 2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such

  19. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the North Takhar mineral district in Afghanistan: Chapter D in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the North Takhar mineral district, which has placer gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such

  20. Multi-clues image retrieval based on improved color invariants

    NASA Astrophysics Data System (ADS)

    Liu, Liu; Li, Jian-Xun

    2012-05-01

    At present, image retrieval has made great progress in indexing efficiency and memory usage, which mainly benefits from the use of text-retrieval technology such as the bag-of-features (BOF) model and the inverted-file structure. Meanwhile, because robust local feature invariants are selected to build the BOF model, its retrieval precision is enhanced, especially when it is applied to a large-scale database. However, these local feature invariants mainly capture the geometric variance of the objects in the images, so the color information of the objects is not exploited. With the development of information technology and the Internet, the majority of retrieval objects are color images, so retrieval performance can be further improved through proper use of the color information. We propose an improved method based on an analysis of the flaws of the shadow-shading quasi-invariant, enhancing its response and performance at object edges under varying lighting. The color descriptors of the invariant regions are extracted and integrated into the BOF model based on the local features. The robustness of the algorithm and the improvement in performance are verified in the final experiments.

  1. Image recognition of diseased rice seeds based on color feature

    NASA Astrophysics Data System (ADS)

    Cheng, Fang; Ying, Yibin

    2004-11-01

    The objective of this research is to develop a digital image analysis algorithm for detection of diseased rice seeds based on color features. The rice seeds used for this study involved five varieties: Jinyou402, Shanyou10, Zhongyou207, Jiayou99, and IIyou3207. Images of rice seeds were acquired with a color machine vision system. Each original RGB image was converted to HSV color space and preprocessed so that the seed region was represented by its hue values while the background pixel values were set to zero. The hue values were scaled to vary from 0.0 to 1.0. Six color features were then extracted and evaluated for their contributions to seed classification. Evaluated using the Blocks method, the mean hue value showed the strongest classification ability. A Parzen window function was used to estimate the probability density distribution, and a mean-hue threshold was derived to separate normal seeds from diseased seeds. The average accuracy on the test data set is 95% for Jinyou402. The hue-histogram feature was then extracted for diseased seeds and partitioned into two clusters: spot-diseased seeds and severely diseased seeds. Desired results were achieved when the two centroid locations were used to discriminate the degree of disease. Combining the two features of mean hue and hue histogram, all seeds could be classified as normal, spot-diseased, or severely diseased. Finally, the algorithm was applied to all five varieties to test its adaptability.
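
    A minimal sketch of the mean-hue classification step described above, assuming an RGB seed image and a binary seed-region mask are already available; the threshold value and the comparison direction are illustrative placeholders, not the values derived in the paper.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def mean_hue(rgb_image, seed_mask):
    """Mean hue (scaled 0.0-1.0) computed over the seed region only."""
    hsv = rgb_to_hsv(rgb_image.astype(float) / 255.0)
    return hsv[..., 0][seed_mask].mean()

def classify_seed(rgb_image, seed_mask, hue_threshold=0.12):
    # hue_threshold and the comparison direction are hypothetical; the paper
    # derives its threshold from Parzen-window density estimates.
    return "diseased" if mean_hue(rgb_image, seed_mask) < hue_threshold else "normal"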

  2. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Dudkash mineral district in Afghanistan: Chapter R in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Dudkash mineral district, which has industrial mineral deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2007,2008,2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS

  3. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Ghazni2 mineral district in Afghanistan: Chapter EE in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.

    2014-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Ghazni2 mineral district, which has spectral reflectance anomalies indicative of gold, mercury, and sulfur deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2008, 2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image

  4. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Takhar mineral district in Afghanistan: Chapter Q in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Takhar mineral district, which has industrial evaporite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such

  5. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Kunduz mineral district in Afghanistan: Chapter S in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Kunduz mineral district, which has celestite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008,2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the

  6. Color image encryption based on joint fractional Fourier transform correlator

    NASA Astrophysics Data System (ADS)

    Lu, Ding; Jin, Weimin

    2011-06-01

    In this paper, an optical color image encryption/decryption technology based on a joint fractional Fourier transform correlator and double random phase encoding (DRPE) is developed. In this method, the joint fractional power spectrum of the image to be encrypted and the key codes is recorded as the encrypted data. Different from classical DRPE, the same key code is used in both encryption and decryption. The security of the system is enhanced because the fractional order serves as an additional key. This method takes full advantage of the parallel processing features of the optical system, and could optically realize single-channel color image encryption. The experimental results indicate that the new method is feasible.
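
    The sketch below shows classical double random phase encoding with the ordinary Fourier transform, which conveys the structure of the scheme; the paper's joint fractional Fourier transform correlator and its fractional-order key are not reproduced here, and the per-channel usage is an assumption.

```python
import numpy as np

def drpe_encrypt(channel, phase1, phase2):
    """Classical DRPE with the ordinary (not fractional) Fourier transform."""
    x = channel * np.exp(2j * np.pi * phase1)          # input-plane phase mask
    X = np.fft.fft2(x) * np.exp(2j * np.pi * phase2)   # Fourier-plane phase mask
    return np.fft.ifft2(X)                             # complex-valued ciphertext

def drpe_decrypt(cipher, phase1, phase2):
    X = np.fft.fft2(cipher) * np.exp(-2j * np.pi * phase2)
    x = np.fft.ifft2(X) * np.exp(-2j * np.pi * phase1)
    return np.abs(x)

# Usage sketch for one channel of an RGB image:
# rng = np.random.default_rng(0)
# phase1, phase2 = rng.random(channel.shape), rng.random(channel.shape)
# cipher = drpe_encrypt(channel, phase1, phase2)
# recovered = drpe_decrypt(cipher, phase1, phase2)
```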

  7. Power Doppler imaging: clinical experience and correlation with color Doppler US and other imaging modalities.

    PubMed

    Hamper, U M; DeJong, M R; Caskey, C I; Sheth, S

    1997-01-01

    Power Doppler imaging has recently gained attention as an additional color flow imaging technique that overcomes some of the limitations of conventional color Doppler ultrasound (US). Limitations of conventional color Doppler US include angle dependence, aliasing, and difficulty in separating background noise from true flow in slow-flow states. Owing to its increased sensitivity to flow, power Doppler sonography is valuable in low-flow states and when optimal Doppler angles cannot be obtained. Longer segments of vessels and more individual vessels can be visualized with power Doppler US than with conventional color Doppler sonography. Power Doppler sonography increases diagnostic confidence when verifying or excluding testicular or ovarian torsion and confirming thrombosis or occlusion of vessels. Power Doppler sonography also improves evaluation of parenchymal flow and decreases examination times in technically challenging cases. Power Doppler US is a useful adjunct to mean-frequency color Doppler sonography, especially when color Doppler US cannot adequately obtain or display diagnostic information. PMID:9084086

  8. Multifocus color image fusion based on quaternion curvelet transform.

    PubMed

    Guo, Liqiang; Dai, Ming; Zhu, Ming

    2012-08-13

    Multifocus color image fusion is an active research area in image processing, and many fusion algorithms have been developed. However, the existing techniques can hardly deal with the problem of image blur. This study presents a novel fusion approach that integrates the quaternion with the traditional curvelet transform to overcome the above disadvantage. The proposed method uses a multiresolution analysis procedure based on the quaternion curvelet transform. Experimental results show that the proposed method is promising, and it does significantly improve the fusion quality compared to the existing fusion methods. PMID:23038524

  9. Microscale halftone color image analysis: perspective of spectral color prediction modeling

    NASA Astrophysics Data System (ADS)

    Rahaman, G. M. Atiqur; Norberg, Ole; Edström, Per

    2014-01-01

    A method has been proposed whereby a k-means clustering technique is applied to segment a microscale single-color halftone image into three components: solid ink, ink/paper mixed area, and unprinted paper. The method has been evaluated using impact (offset) and non-impact (electro-photography) based single-color prints halftoned by amplitude modulation (AM) and frequency modulation (FM) techniques. The print samples also included a range of variations in paper substrates. The colors of the segmented regions have been analyzed in CIELAB color space to reveal the variations, in particular those present in the mixed regions. The statistics of the intensity distribution in the segmented areas have been utilized to derive expressions that can be used to calculate simple thresholds. The segmentation results have also been employed to study dot gain in comparison with the traditional estimation technique using the Murray-Davies formula. The performance of halftone reflectance prediction by the spectral Murray-Davies model has been reported using estimated and measured parameters. Finally, a general idea has been proposed to expand the classical Murray-Davies model based on experimental observations. Hence, the present study primarily presents the outcome of experimental efforts to characterize halftone print-media interactions with respect to the color prediction models. Currently, most regression-based color prediction models rely on mathematical optimization to estimate the parameters using the measured average reflectance of an area that is large compared to the dot size. While this general approach has been accepted as a useful tool, experimental investigations can enhance understanding of the physical processes and facilitate exploration of new modeling strategies. Furthermore, the reported findings may help reduce the required number of samples that are printed and measured in the process of multichannel printer characterization and calibration.
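
    A minimal sketch of the two building blocks named above, assuming a grayscale micro-image of a single-color halftone patch: k-means segmentation into three clusters, and the Murray-Davies effective dot area computed from measured reflectances. Function names and parameters are illustrative only.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_halftone(gray_patch):
    """Cluster a single-color halftone micro-image into three intensity
    classes (solid ink, ink/paper mixed area, unprinted paper)."""
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(gray_patch.reshape(-1, 1))
    return labels.reshape(gray_patch.shape)

def murray_davies_dot_area(R_halftone, R_paper, R_ink):
    """Effective dot area a from the Murray-Davies relation
    R = a * R_ink + (1 - a) * R_paper, solved for a."""
    return (R_paper - R_halftone) / (R_paper - R_ink)
```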

  10. Munsell color analysis of Landsat color-ratio-composite images of limonitic areas in southwest New Mexico

    NASA Technical Reports Server (NTRS)

    Kruse, F. A.

    1985-01-01

    The causes of color variations in the green areas on Landsat 4/5-4/6-6/7 (red-blue-green) color-ratio-composite (CRC) images, defined as limonitic areas, were investigated by analyzing the CRC images of the Lordsburg, New Mexico area. The red-blue-green additive color system was mathematically transformed into the cylindrical Munsell color coordinates (hue, saturation, and value), and selected areas were digitally analyzed for color variation. The obtained precise color characteristics were then correlated with properties of the surface material. The amount of limonite (L) visible to the sensor was found to be the primary cause of the observed color differences. The visible L is, in turn, affected by the amount of L on the material's surface and by within-pixel mixing of limonitic and nonlimonitic materials. The secondary cause of variation was vegetation density, which shifted CRC hues towards yellow-green, decreased saturation, and increased value.

  11. Color calibration of a CMOS digital camera for mobile imaging

    NASA Astrophysics Data System (ADS)

    Eliasson, Henrik

    2010-01-01

    As white balance algorithms employed in mobile phone cameras become increasingly sophisticated by using, e.g., elaborate white-point estimation methods, a proper color calibration is necessary. Without such a calibration, the estimation of the light source for a given situation may go wrong, giving rise to large color errors. At the same time, the demands for efficiency in the production environment require the calibration to be as simple as possible. Thus it is important to find the correct balance between image quality and production efficiency requirements. The purpose of this work is to investigate camera color variations using a simple model where the sensor and IR filter are specified in detail. As input to the model, spectral data of the 24-color Macbeth Colorchecker was used. This data was combined with the spectral irradiance of mainly three different light sources: CIE A, D65 and F11. The sensor variations were determined from a very large population from which 6 corner samples were picked out for further analysis. Furthermore, a set of 100 IR filters were picked out and measured. The resulting images generated by the model were then analyzed in the CIELAB space and color errors were calculated using the ΔE94 metric. The results of the analysis show that the maximum deviations from the typical values are small enough to suggest that a white balance calibration is sufficient. Furthermore, it is also demonstrated that the color temperature dependence is small enough to justify the use of only one light source in a production environment.
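
    For reference, the ΔE94 color difference used in the study above can be computed from two CIELAB triplets as follows; the graphic-arts weighting constants (K1 = 0.045, K2 = 0.015, kL = kC = kH = 1) are assumed.

```python
import numpy as np

def delta_e94(lab1, lab2, kL=1.0, K1=0.045, K2=0.015):
    """CIE94 color difference between two (L*, a*, b*) triplets."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    dL = L1 - L2
    C1 = np.hypot(a1, b1)
    C2 = np.hypot(a2, b2)
    dC = C1 - C2
    # Hue difference squared, clamped to avoid tiny negative round-off.
    dH2 = max((a1 - a2) ** 2 + (b1 - b2) ** 2 - dC ** 2, 0.0)
    SC = 1.0 + K1 * C1
    SH = 1.0 + K2 * C1
    return np.sqrt((dL / kL) ** 2 + (dC / SC) ** 2 + dH2 / SH ** 2)
```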

  12. Effects of chromatic image statistics on illumination induced color differences.

    PubMed

    Lucassen, Marcel P; Gevers, Theo; Gijsenij, Arjan; Dekker, Niels

    2013-09-01

    We measure the color fidelity of visual scenes that are rendered under different (simulated) illuminants and shown on a calibrated LCD display. Observers make triad illuminant comparisons involving the renderings from two chromatic test illuminants and one achromatic reference illuminant shown simultaneously. Four chromatic test illuminants are used: two along the daylight locus (yellow and blue), and two perpendicular to it (red and green). The observers select the rendering having the best color fidelity, thereby indirectly judging which of the two test illuminants induces the smallest color differences compared to the reference. Both multicolor test scenes and natural scenes are studied. The multicolor scenes are synthesized and represent ellipsoidal distributions in CIELAB chromaticity space having the same mean chromaticity but different chromatic orientations. We show that, for those distributions, color fidelity is best when the vector of the illuminant change (pointing from neutral to chromatic) is parallel to the major axis of the scene's chromatic distribution. For our selection of natural scenes, which generally have much broader chromatic distributions, we measure a higher color fidelity for the yellow and blue illuminants than for red and green. Scrambled versions of the natural images are also studied to exclude possible semantic effects. We quantitatively predict the average observer response (i.e., the illuminant probability) with four types of models, differing in the extent to which they incorporate information processing by the visual system. Results show different levels of performance for the models, and different levels for the multicolor scenes and the natural scenes. Overall, models based on the scene averaged color difference have the best performance. We discuss how color constancy algorithms may be improved by exploiting knowledge of the chromatic distribution of the visual scene. PMID:24323269

  13. Availability of color calibration for consistent color display in medical images and optimization of reference brightness for clinical use

    NASA Astrophysics Data System (ADS)

    Iwai, Daiki; Suganami, Haruka; Hosoba, Minoru; Ohno, Kazuko; Emoto, Yutaka; Tabata, Yoshito; Matsui, Norihisa

    2013-03-01

    Color image consistency has not yet been accomplished, except for the Digital Imaging and Communications in Medicine (DICOM) Supplement 100, which implements a color reproduction pipeline and device-independent color spaces. Thus, most healthcare enterprises cannot check monitor degradation routinely. To ensure color consistency in medical color imaging, monitor color calibration should be introduced. Using a simple color calibration device, the chromaticities of typical colors (Red, Green, Blue, and White) are measured as device-independent profile connection space values, called u'v', before and after calibration. In addition, clinical color images are displayed and visual differences are observed. In color calibration, the monitor brightness level has to be set to the rather low value of 80 cd/m2 according to the sRGB standard. Because most color monitors currently available for medical use have a maximum brightness much higher than 80 cd/m2, it does not seem appropriate to use the 80 cd/m2 level for calibration. We therefore propose that a new brightness standard should be introduced while maintaining the color representation in clinical use. To evaluate the effect of brightness on chromaticity experimentally, the brightness level of two monitors was changed from 80 to 270 cd/m2 and the chromaticity values were compared at each brightness level. As a result, there are no significant differences in the chromaticity diagram when the brightness level is changed. In conclusion, chromaticity is close to the theoretical value after color calibration. Moreover, chromaticity does not shift when brightness is changed. The results indicate that an optimized reference brightness level for clinical use could be set at high brightness on current monitors.
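
    The u'v' chromaticity coordinates mentioned above follow directly from tristimulus values; a small sketch assuming CIE XYZ input.

```python
def xyz_to_uv_prime(X, Y, Z):
    """CIE 1976 u'v' chromaticity coordinates from tristimulus values."""
    denom = X + 15.0 * Y + 3.0 * Z
    return 4.0 * X / denom, 9.0 * Y / denom

# Example: the D65 white point (X=95.047, Y=100.0, Z=108.883)
# gives approximately u' = 0.198, v' = 0.468.
```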

  14. Autonomous ship classification using synthetic and real color images

    NASA Astrophysics Data System (ADS)

    Kumlu, Deniz; Jenkins, B. Keith

    2013-03-01

    This work classifies color images of ships attained using cameras mounted on ships and in harbors. Our data-sets contain 9 different types of ship with 18 different perspectives for our training set, development set and testing set. The training data-set contains modeled synthetic images; development and testing data-sets contain real images. The database of real images was gathered from the internet, and 3D models for synthetic images were imported from Google 3D Warehouse. A key goal in this work is to use synthetic images to increase overall classification accuracy. We present a novel approach for autonomous segmentation and feature extraction for this problem. Support vector machine is used for multi-class classification. This work reports three experimental results for multi-class ship classification problem. First experiment trains on a synthetic image data-set and tests on a real image data-set, and obtained accuracy is 87.8%. Second experiment trains on a real image data-set and tests on a separate real image data-set, and obtained accuracy is 87.8%. Last experiment trains on real + synthetic image data-sets (combined data-set) and tests on a separate real image data-set, and obtained accuracy is 93.3%.

  15. Colored coded-apertures for spectral image unmixing

    NASA Astrophysics Data System (ADS)

    Vargas, Hector M.; Arguello Fuentes, Henry

    2015-10-01

    Hyperspectral remote sensing technology provides detailed spectral information from every pixel in an image. Due to the low spatial resolution of hyperspectral image sensors, and the presence of multiple materials in a scene, each pixel can contain more than one spectral signature. Therefore, endmember extraction is used to determine the pure spectral signatures of the mixed materials and their corresponding abundance maps in a remotely sensed hyperspectral scene. Advanced endmember extraction algorithms have been proposed to solve this linear problem, called spectral unmixing. However, such techniques require the acquisition of the complete hyperspectral data cube to perform the unmixing procedure. Researchers show that using colored coded-apertures improves the quality of reconstruction in compressive spectral imaging (CSI) systems under compressive sensing (CS) theory. This work aims at developing a compressive supervised spectral unmixing scheme to estimate the endmembers and the abundance map from compressive measurements. The compressive measurements are acquired by using colored coded-apertures in a compressive spectral imaging system. Then a numerical procedure estimates the sparse vector representation in a 3D dictionary by solving a constrained sparse optimization problem. The 3D dictionary is formed by a 2-D wavelet basis and a known endmember spectral library, where the wavelet basis is used to exploit the spatial information. The colored coded-apertures are designed such that the sensing matrix satisfies the restricted isometry property with high probability. Simulations show that the proposed scheme attains comparable results to the full data cube unmixing technique, but using fewer measurements.
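
    As a simplified point of reference, the underlying fully sampled linear unmixing problem can be solved per pixel with non-negative least squares, given a known endmember library; the paper's compressive, coded-aperture variant with a wavelet/endmember dictionary is not reproduced here, and the names below are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(spectrum, endmembers):
    """Non-negative least-squares abundance estimate for one pixel.

    `endmembers` is a (n_bands, n_endmembers) matrix whose columns are the
    library spectra; `spectrum` is the measured (n_bands,) pixel spectrum.
    """
    abundances, _ = nnls(endmembers, spectrum)
    # Optional sum-to-one normalization of the abundance vector.
    return abundances / max(abundances.sum(), 1e-12)
```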

  16. Online monitoring of red meat color using hyperspectral imaging.

    PubMed

    Kamruzzaman, Mohammed; Makino, Yoshio; Oshita, Seiichi

    2016-06-01

    A hyperspectral imaging system in the spectral range of 400-1000 nm was tested to develop an online monitoring system for red meat (beef, lamb, and pork) color in the meat industry. Instead of selecting different sets of important wavelengths for beef, lamb, and pork, a set of feature wavelengths was selected using the successive projection algorithm for the red meat colors (L*, a*, b*) for convenient industrial application. Only six wavelengths (450, 460, 600, 620, 820, and 980 nm) were further chosen as predictive feature wavelengths for predicting L*, a*, and b* in red meat. Multiple linear regression models were then developed and predicted L*, a*, and b* with coefficients of determination (R²p) of 0.97, 0.84, and 0.82, and root mean square errors of prediction of 1.72, 1.73, and 1.35, respectively. Finally, distribution maps of meat surface color were generated. The results indicated that hyperspectral imaging has the potential to be used for rapid assessment of meat color. PMID:26874594
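
    A minimal sketch of the prediction step, assuming arrays of reflectances at the six selected wavelengths and measured CIELAB values are available; the actual calibration data and model coefficients from the study are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Assumed shapes: X is (n_samples, 6) reflectance at 450, 460, 600, 620,
# 820, and 980 nm; Y is (n_samples, 3) measured (L*, a*, b*).
def fit_color_models(X, Y):
    """One multiple-linear-regression model per color attribute."""
    return [LinearRegression().fit(X, Y[:, i]) for i in range(3)]

def predict_lab(models, X_new):
    """Predict (L*, a*, b*) for new spectra at the six wavelengths."""
    return np.column_stack([m.predict(X_new) for m in models])
```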

  17. Three-dimensional color image processing procedures using DSP

    NASA Astrophysics Data System (ADS)

    Rosales, Alberto J.; Ponomaryov, Volodymyr I.; Gallegos-Funes, Francisco

    2007-02-01

    Processing of vector image information is very important because multichannel sensors are used in many different applications. We introduce novel algorithms to process color images that are based on order statistics and vectorial processing techniques: the Video Adaptive Vector Directional (VAVDF) and the Vector Median M-type K-Nearest Neighbour (VMMKNN) filters presented in this paper. It has been demonstrated that the novel algorithms effectively suppress impulsive noise in 3D color video sequences in comparison with several other methods. Simulation results have been obtained using the video sequences "Miss America" and "Flowers", which were corrupted by noise. The filters KNNF, VGVDF, VMMKNN, and, finally, the proposed VAVDATM have been investigated. The criteria PSNR, MAE, and NCD demonstrate that the VAVDATM filter shows the best performance for each criterion when the noise intensity is more than 7-10%. An attempt to realize real-time processing of the median-type algorithms on the DSP is presented.
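
    For orientation, a plain vector median filter (the common ancestor of the order-statistics filters compared above) can be sketched as follows; this is a reference implementation in Python, not the authors' VAVDF or VMMKNN algorithms and not the DSP code.

```python
import numpy as np

def vector_median_filter(image, radius=1):
    """Vector median filter for color images: each pixel is replaced by the
    window sample that minimizes the sum of L2 distances to all other
    samples in the window."""
    h, w, c = image.shape
    out = image.copy()
    padded = np.pad(image, ((radius, radius), (radius, radius), (0, 0)), mode="edge")
    for y in range(h):
        for x in range(w):
            win = padded[y:y + 2 * radius + 1,
                         x:x + 2 * radius + 1].reshape(-1, c).astype(float)
            # Pairwise distances; pick the sample with the smallest total distance.
            dists = np.linalg.norm(win[:, None, :] - win[None, :, :], axis=2).sum(axis=1)
            out[y, x] = win[np.argmin(dists)]
    return out
```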

  18. False color image of Safsaf Oasis in southern Egypt

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a false color image of the uninhabited Safsaf Oasis in southern Egypt near the Egypt/Sudan border. It was produced from data obtained from the L-band and C-band radars that are part of the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar onboard the Shuttle Endeavour on April 9, 1994. The image is centered at 22 degrees North latitude, 29 degrees East longitude. It shows detailed structures of bedrock, and the dark blue sinuous lines are braided channels that occupy part of an old broad river valley. Virtually everything visible on this radar composite image cannot be seen either when standing on the ground or when viewing photographs or satellite images such as Landsat. The Jet Propulsion Laboratory alternative photo number is P-43920.

  19. Client-side Medical Image Colorization in a Collaborative Environment.

    PubMed

    Virag, Ioan; Stoicu-Tivadar, Lăcrămioara; Crişan-Vida, Mihaela

    2015-01-01

    The paper presents an application related to collaborative medicine using a browser-based medical visualization system, with focus on the medical image colorization process and the underlying open-source web development technologies involved. Browser-based systems allow physicians to share medical data with their remotely located counterparts or medical students, assisting them during patient diagnosis, treatment monitoring, and surgery planning, or for educational purposes. This approach brings forth the advantage of ubiquity. The system can be accessed from any device to process the images, ensuring independence from any specific proprietary operating system. The current work starts with the processing of DICOM (Digital Imaging and Communications in Medicine) files and ends with the rendering of the resulting bitmap images on an HTML5 (fifth revision of the HyperText Markup Language) canvas element. The application improves the image visualization by emphasizing different tissue densities. PMID:25991287

  20. Joint high dynamic range imaging and color demosaicing

    NASA Astrophysics Data System (ADS)

    Herwig, Johannes; Pauli, Josef

    2011-11-01

    A non-parametric high dynamic range (HDR) fusion approach is proposed that works on raw images of single-sensor color imaging devices which incorporate the Bayer pattern. Thereby the non-linear opto-electronic conversion function (OECF) is recovered before color demosaicing, so that interpolation artifacts do not affect the photometric calibration. Graph-based segmentation greedily clusters the exposure set into regions of roughly constant radiance in order to regularize the OECF estimation. The segmentation works on Gaussian-blurred sensor images, whereby the artificial gray value edges caused by the Bayer pattern are smoothed away. With the OECF known the 32-bit HDR radiance map is reconstructed by weighted summation from the differently exposed raw sensor images. Because the radiance map contains lower sensor noise than the individual images, it is finally demosaiced by weighted bilinear interpolation which prevents the interpolation across edges. Here, the previous segmentation results from the photometric calibration are utilized. After demosaicing, tone mapping is applied, whereby remaining interpolation artifacts are further damped due to the coarser tonal quantization of the resulting image.
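
    A minimal sketch of the weighted-summation step described above, assuming the inverse OECF has already been recovered and ignoring the Bayer-pattern handling; the hat-shaped weighting function and 8-bit input range are assumptions, not the paper's choices.

```python
import numpy as np

def merge_hdr(raw_frames, exposure_times, inverse_oecf, weight=None):
    """Weighted-sum HDR radiance map from differently exposed raw frames.

    `inverse_oecf` maps raw digital values back to linear exposure; dividing
    by the exposure time then yields per-frame radiance estimates.
    """
    if weight is None:
        # Simple hat weighting that trusts mid-range 8-bit values most.
        weight = lambda z: 1.0 - np.abs(z / 255.0 - 0.5) * 2.0
    num = np.zeros(raw_frames[0].shape, dtype=float)
    den = np.zeros_like(num)
    for z, t in zip(raw_frames, exposure_times):
        w = weight(z.astype(float))
        num += w * inverse_oecf(z) / t    # back to linear radiance per unit time
        den += w
    return num / np.maximum(den, 1e-6)
```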

  1. A dual-channel fusion system of visual and infrared images based on color transfer

    NASA Astrophysics Data System (ADS)

    Pei, Chuang; Jiang, Xiao-yu; Zhang, Peng-wei; Liang, Hao-cong

    2013-09-01

    The increasing availability and deployment of imaging sensors operating in multiple spectral bands has led to a large research effort in image fusion, resulting in a plethora of pixel-level image fusion algorithms. However, most of these algorithms produce gray or false-color fusion results that are not well adapted to human vision. Transferring color from a daytime reference image to obtain a natural-color fusion result is an effective way to solve this problem, but the computational cost of color transfer is high and cannot meet the requirements of real-time image processing. We developed a dual-channel infrared and visual image fusion system based on the TMS320DM642 digital signal processing chip. The system is divided into an image acquisition and registration unit, an image fusion processing unit, a system control unit, and an image fusion output unit. The registration of the dual-channel images is realized by combining hardware and software methods in the system. A false-color image fusion algorithm in RGB color space is used to obtain an R-G fused image, and the system then chooses a reference image from which to transfer color to the fusion result. A color lookup table based on the statistical properties of the images is proposed to solve the computational complexity of color transfer. The mapping calculation between the standard lookup table and the improved color lookup table is simple and needs to be performed only once for a fixed scene. Real-time fusion and natural colorization of infrared and visual images are realized by this system. The experimental results show that the color-transferred images have a natural color appearance to human eyes and can highlight targets effectively with clear background details. Human observers using this system will be able to interpret the image better and faster, thereby improving situational awareness and reducing target detection time.
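
    A simplified stand-in for the color-transfer step is global per-channel statistics matching (Reinhard-style mean/variance transfer, here applied directly in RGB rather than in a decorrelated color space); like the paper's lookup table, the mapping only needs to be computed once per scene.

```python
import numpy as np

def color_transfer(source, reference):
    """Match the per-channel mean and standard deviation of `source`
    to those of `reference` (global statistical color transfer)."""
    src = source.astype(float)
    ref = reference.astype(float)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        s_mu, s_sigma = src[..., c].mean(), src[..., c].std() + 1e-6
        r_mu, r_sigma = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (src[..., c] - s_mu) * (r_sigma / s_sigma) + r_mu
    return np.clip(out, 0, 255).astype(source.dtype)
```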

  2. Quaternion structural similarity: a new quality index for color images.

    PubMed

    Kolaman, Amir; Yadid-Pecht, Orly

    2012-04-01

    One of the most important issues for researchers developing image processing algorithms is image quality. Methodical quality evaluation, by showing images to several human observers, is slow, expensive, and highly subjective. On the other hand, a visual quality metric (VQM) is a fast, cheap, and objective tool for evaluating image quality. Although most VQMs are good at predicting the quality of an image degraded by a single degradation, they perform poorly for a combination of two degradations. An example of such a degradation is the color crosstalk (CTK) effect, which introduces blur with desaturation. CTK is expected to become a bigger issue in image quality as the industry moves toward smaller sensors. In this paper, we develop a VQM that better evaluates the quality of an image degraded by a combined blur/desaturation degradation and performs as well as other VQMs on single degradations such as blur, compression, and noise. We show why standard scalar techniques are insufficient to measure a combined blur/desaturation degradation and explain why a vectorial approach is better suited. We introduce quaternion image processing (QIP), which is a true vectorial approach and has many uses in the fields of physics and engineering. Our new VQM is a vectorial expansion of structural similarity using QIP, which gives it its name: Quaternion Structural SIMilarity (QSSIM). We built a new database of a combined blur/desaturation degradation and conducted a quality survey with human subjects. An extensive comparison between QSSIM and other VQMs on several image quality databases, including our new database, shows the superiority of this new approach in predicting visual quality of color images. PMID:22203713

  3. Digital image fusion systems: color imaging and low-light targets

    NASA Astrophysics Data System (ADS)

    Estrera, Joseph P.

    2009-05-01

    This paper presents digital image fusion (enhanced A+B) systems in color imaging and low-light target applications. The paper first discusses the digital sensors utilized in the noted image fusion applications: a 1900x1086 (high-definition format) CMOS imager coupled to a Generation III image intensifier for the visible/near-infrared (NIR) digital sensor, and a 320x240 or 640x480 uncooled microbolometer thermal imager for the long-wavelength infrared (LWIR) digital sensor. Performance metrics for these digital imaging sensors are presented. The digital image fusion (enhanced A+B) process is presented in the context of early fused night vision systems such as the digital image fused system (DIFS) and the digital enhanced night vision goggle and, later, the long-range digitally fused night vision sighting system. Next, the paper discusses the effects of user display color in a dual-color digital image fusion system. Dual-color image fusion schemes such as Green/Red, Cyan/Yellow, and White/Blue for image intensifier and thermal infrared sensor color representation, respectively, are discussed. Finally, the paper presents digitally fused imagery and image analysis of long-distance targets in low light from these digital fused systems. The result of this image analysis with enhanced A+B digital image fusion systems is that maximum contrast and spatial resolution are achieved in a digital fusion mode as compared to individual sensor modalities in low-light, long-distance imaging applications. Paper has been cleared by DoD/OSR for Public Release under Ref: 08-S-2183 on August 8, 2008.
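
    The core idea of an additive (A+B) fusion scheme with a dual-color display can be sketched roughly as follows; the weights and the actual "enhanced" processing used in the systems above are not specified in the abstract, so this Python fragment is only an illustrative approximation.

        import numpy as np

        def a_plus_b_fusion(i2_img, lwir_img, alpha=0.6, beta=0.4):
            """Toy additive fusion of an image-intensifier (I2) frame and a
            thermal (LWIR) frame, plus a Green/Red dual-color rendering in
            which green carries the I2 channel and red carries LWIR."""
            a = i2_img.astype(np.float64) / 255.0
            b = lwir_img.astype(np.float64) / 255.0
            mono = np.clip(alpha * a + beta * b, 0.0, 1.0)       # fused luminance
            rgb = np.stack([b, a, np.zeros_like(a)], axis=-1)    # R=LWIR, G=I2, B=0
            return (mono * 255).astype(np.uint8), (rgb * 255).astype(np.uint8)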

  4. Beef quality parameters estimation using ultrasound and color images

    PubMed Central

    2015-01-01

    Background Beef quality measurement is a complex task with high economic impact. There is high interest in obtaining automatic quality parameter estimation in live cattle or post mortem. In this paper we set out to obtain beef quality estimates from the analysis of ultrasound (in vivo) and color images (post mortem), with the measurement of various parameters related to tenderness and amount of meat: rib eye area, percentage of intramuscular fat, and backfat thickness or subcutaneous fat. Proposal An algorithm based on curve evolution is implemented to calculate the rib eye area. The backfat thickness is estimated from the profile of distances between two curves that limit the steak and the rib eye, previously detected. A model based on Support Vector Regression (SVR) is trained to estimate the intramuscular fat percentage. A set of features extracted from a region of interest, previously detected in both ultrasound and color images, is proposed. In all cases, a complete evaluation was performed with different databases including: color and ultrasound images acquired by a beef industry expert, intramuscular fat estimation obtained by an expert using a commercial software, and chemical analysis. Conclusions The proposed algorithms show good results for calculating the rib eye area and the backfat thickness measure and profile. They are also promising in predicting the percentage of intramuscular fat. PMID:25734452
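
    As a rough illustration of the regression step, the fragment below trains an RBF-kernel SVR on region-of-interest features against chemically measured fat percentages; the feature set, file names and hyperparameters are hypothetical placeholders, not taken from the paper.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Hypothetical inputs: X holds ROI feature vectors, y the intramuscular
        # fat percentage obtained from chemical analysis (ground truth).
        X = np.load("roi_features.npy")
        y = np.load("fat_percentage.npy")

        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
        model.fit(X, y)
        print(model.predict(X[:5]))   # estimated fat percentage for five samples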

  5. Optimized mean shift algorithm for color segmentation in image sequences

    NASA Astrophysics Data System (ADS)

    Bailer, Werner; Schallauer, Peter; Haraldsson, Harald B.; Rehatschek, Herwig

    2005-03-01

    The application of the mean shift algorithm to color image segmentation was proposed in 1997 by Comaniciu and Meer. We apply mean shift color segmentation to image sequences as the first step of a moving object segmentation algorithm. Previous work has shown that it is well suited for this task because it provides better temporal stability of the segmentation result than other approaches. The drawback is higher computational cost. To speed up processing of image sequences we exploit the fact that subsequent frames are similar and use the cluster centers of previous frames as initial estimates, which also enhances spatial segmentation continuity. In contrast to other implementations we use the originally proposed CIE LUV color space to ensure high-quality segmentation results. We show that moderate quantization of the input data before conversion to CIE LUV has little influence on the segmentation quality but results in significant speed-up. We also propose changes in the post-processing step to increase the temporal stability of border pixels. We perform an objective evaluation of the segmentation results to compare the original algorithm with our modified version. We show that our optimized algorithm reduces processing time and increases the temporal stability of the segmentation.
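
    The frame-to-frame seeding idea can be sketched with scikit-learn's MeanShift, whose seeds argument accepts the cluster centers found in the previous frame; this is only a rough stand-in for the authors' implementation and ignores their quantization and post-processing steps.

        import numpy as np
        from sklearn.cluster import MeanShift

        def segment_frame(pixels_luv, bandwidth, prev_centers=None):
            """Mean shift clustering of (N, 3) CIE LUV pixel values; centers
            from the previous frame are reused as seeds to speed up
            convergence and stabilise the segmentation over time."""
            ms = MeanShift(bandwidth=bandwidth,
                           seeds=prev_centers,
                           bin_seeding=prev_centers is None)
            labels = ms.fit_predict(pixels_luv)
            return labels, ms.cluster_centers_

        # Usage across a sequence: feed each frame's centers into the next call.
        # labels, centers = segment_frame(frame0_luv, bandwidth=8.0)
        # labels, centers = segment_frame(frame1_luv, bandwidth=8.0, prev_centers=centers)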

  6. Block-based embedded color image and video coding

    NASA Astrophysics Data System (ADS)

    Nagaraj, Nithin; Pearlman, William A.; Islam, Asad

    2004-01-01

    The Set Partitioned Embedded bloCK coder (SPECK) has been found to perform comparably to the best-known grayscale still image coders such as EZW, SPIHT and JPEG2000. In this paper, we first propose Color-SPECK (CSPECK), a natural extension of SPECK to handle color still images in the YUV 4:2:0 format. Extensions to other YUV formats are also possible. PSNR results indicate that CSPECK is among the best known color coders, while the perceptual quality of reconstruction is superior to that of SPIHT and JPEG2000. We then propose a moving picture based coding system called Motion-SPECK with CSPECK as the core algorithm in an intra-based setting. Specifically, we demonstrate two modes of operation of Motion-SPECK, namely the constant-rate mode, where every frame is coded at the same bit-rate, and the constant-distortion mode, where we ensure the same quality for each frame. Results on well-known CIF sequences indicate that Motion-SPECK performs comparably to Motion-JPEG2000 while the visual quality of the sequence is in general superior. Both CSPECK and Motion-SPECK automatically inherit all the desirable features of SPECK such as embeddedness, low computational complexity, highly efficient performance, fast decoding and low dynamic memory requirements. The intended applications of Motion-SPECK would be high-end and emerging video applications such as high-quality digital video recording systems, Internet video and medical imaging.

  7. Validation of tablet-based evaluation of color fundus images

    PubMed Central

    Christopher, Mark; Moga, Daniela C.; Russell, Stephen R.; Folk, James C.; Scheetz, Todd; Abràmoff, Michael D.

    2012-01-01

    Purpose To compare diabetic retinopathy (DR) referral recommendations made by viewing fundus images using a tablet computer to recommendations made using a standard desktop display. Methods A tablet computer (iPad) and a desktop PC with a high-definition color display were compared. For each platform, two retinal specialists independently rated 1200 color fundus images from patients at risk for DR using an annotation program, Truthseeker. The specialists determined whether each image had referable DR, and also how urgently each patient should be referred for medical examination. Graders viewed and rated the randomly presented images independently and were masked to their ratings on the alternative platform. Tablet- and desktop display-based referral ratings were compared using cross-platform, intra-observer kappa as the primary outcome measure. Additionally, inter-observer kappa, sensitivity, specificity, and area under ROC (AUC) were determined. Results A high level of cross-platform, intra-observer agreement was found for the DR referral ratings between the platforms (κ=0.778), and for the two graders, (κ=0.812). Inter-observer agreement was similar for the two platforms (κ=0.544 and κ=0.625 for tablet and desktop, respectively). The tablet-based ratings achieved a sensitivity of 0.848, a specificity of 0.987, and an AUC of 0.950 compared to desktop display-based ratings. Conclusions In this pilot study, tablet-based rating of color fundus images for subjects at risk for DR was consistent with desktop display-based rating. These results indicate that tablet computers can be reliably used for clinical evaluation of fundus images for DR. PMID:22495326

  8. Local Skin Warming Enhances Color Duplex Imaging of Cutaneous Perforators.

    PubMed

    Li, Haizhou; Du, Zijing; Xie, Feng; Zan, Tao; Li, QingFeng

    2015-07-01

    The perforator flap is one of the most useful techniques in reconstructive surgery. The operative procedure for these flaps would be greatly simplified if accurate localization of the course of the perforator could be confirmed preoperatively. However, small vessels with diameters less than 0.5 mm cannot be readily traced with conventional imaging techniques. Local skin warming temporarily increases cutaneous blood flow and vasodilation. In this study, we established a local skin warming procedure and performed it before color duplex imaging to improve preoperative perforator mapping and enable precise flap design. PMID:23903089

  9. PCIF: An Algorithm for Lossless True Color Image Compression

    NASA Astrophysics Data System (ADS)

    Barcucci, Elena; Brlek, Srecko; Brocchi, Stefano

    An efficient algorithm for compressing true color images is proposed. The technique uses a combination of simple and computationally cheap operations. The three main steps consist of predictive image filtering, decomposition of data, and data compression through the use of run length encoding, Huffman coding and grouping the values into polyominoes. The result is a practical scheme that achieves good compression while providing fast decompression. The approach has performance comparable to, and often better than, competing standards such as JPEG 2000 and JPEG-LS.

  10. Uniform color space analysis of LACIE image products

    NASA Technical Reports Server (NTRS)

    Nalepka, R. F. (Principal Investigator); Balon, R. J.; Cicone, R. C.

    1979-01-01

    The author has identified the following significant results. Analysis and comparison of image products generated by different algorithms show that the scaling and biasing of data channels for control of PFC primaries lead to loss of information (in a probability-of-misclassification sense) by two major processes. In order of importance they are: neglecting the input of one channel of data in any one image, and failing to provide sufficient color resolution of the data. The scaling and biasing approach tends to distort distance relationships in data space and provides less than desirable resolution when the data variation is typical of a developed, nonhazy agricultural scene.

  11. The Athena Pancam and Color Microscopic Imager (CMI)

    NASA Technical Reports Server (NTRS)

    Bell, J. F., III; Herkenhoff, K. E.; Schwochert, M.; Morris, R. V.; Sullivan, R.

    2000-01-01

    The Athena Mars rover payload includes two primary science-grade imagers: Pancam, a multispectral, stereo, panoramic camera system, and the Color Microscopic Imager (CMI), a multispectral and variable depth-of-field microscope. Both of these instruments will help to achieve the primary Athena science goals by providing information on the geology, mineralogy, and climate history of the landing site. In addition, Pancam provides important support for rover navigation and target selection for Athena in situ investigations. Here we describe the science goals, instrument designs, and instrument performance of the Pancam and CMI investigations.

  12. Shear wave transmissivity measurement by color Doppler shear wave imaging

    NASA Astrophysics Data System (ADS)

    Yamakoshi, Yoshiki; Yamazaki, Mayuko; Kasahara, Toshihiro; Sunaguchi, Naoki; Yuminaka, Yasushi

    2016-07-01

    Shear wave elastography is a useful method for evaluating tissue stiffness. We have proposed a novel shear wave imaging method (color Doppler shear wave imaging: CD SWI), which utilizes the signal processing unit of ultrasound color flow imaging to detect the shear wave wavefront in real time. Shear wave velocity is adopted to characterize tissue stiffness; however, it is difficult to measure tissue stiffness with high spatial resolution because of the artifact produced by shear wave diffraction. Spatial averaging in the image reconstruction method also degrades the spatial resolution. In this paper, we propose a novel measurement method for the shear wave transmissivity of a tissue boundary. Shear wave wavefront maps are acquired while changing the displacement amplitude of the shear wave, and the transmissivity of the shear wave across the boundary, which reflects the difference in shear wave velocity between the two media separated by the boundary, is measured from the ratio of the two threshold voltages required to form the shear wave wavefronts in the two media. Based on this method, a high-resolution shear wave amplitude imaging method that reconstructs a tissue boundary is proposed.

  13. Content-Based Image Retrieval Using a Composite Color-Shape Approach.

    ERIC Educational Resources Information Center

    Mehtre, Babu M.; Kankanhalli, Mohan S.; Lee, Wing Foon

    1998-01-01

    Proposes a composite feature measure which combines the shape and color features of an image based on a clustering technique. A similarity measure computes the degree of match between a given pair of images; this technique can be used for content-based image retrieval of images using shape and/or color. Tests the technique on two image databases;…

  14. Data Hiding Scheme on Medical Image using Graph Coloring

    NASA Astrophysics Data System (ADS)

    Astuti, Widi; Adiwijaya; Novia Wisety, Untari

    2015-06-01

    The utilization of digital medical images is now widespread [4]. Medical images require protection since they may pass through insecure networks. Several watermarking techniques have been developed so that the originality of digital medical images can be guaranteed. In watermarking, the medical image is the protected object. Nevertheless, the medical image can also serve as a medium for hiding secret data such as the patient's medical record. The data hiding is done by inserting data into the image, usually called steganography in images. Because modifications to the medical image could influence the diagnosis, steganography is applied only to regions of non-interest. Vector Quantization (VQ) is a prominent and frequently used lossy data compression technique. Generally, VQ-based steganography schemes are still limited in the amount of data that can be inserted. This research aims to develop a steganography scheme based on Vector Quantization and graph coloring. The test results show that the scheme can embed 28768 bytes of data, equal to 10077 characters, for an image area of 3696 pixels.

  15. Automatic assessment of macular edema from color retinal images.

    PubMed

    Deepak, K Sai; Sivaswamy, Jayanthi

    2012-03-01

    Diabetic macular edema (DME) is an advanced symptom of diabetic retinopathy and can lead to irreversible vision loss. In this paper, a two-stage methodology for the detection and classification of DME severity from color fundus images is proposed. DME detection is carried out via a supervised learning approach using normal fundus images. A feature extraction technique is introduced to capture the global characteristics of the fundus images and discriminate the normal from DME images. Disease severity is assessed using a rotational asymmetry metric by examining the symmetry of the macular region. The performance of the proposed methodology and features is evaluated against several publicly available datasets. The detection performance has a sensitivity of 100% with specificity between 74% and 90%. Cases needing immediate referral are detected with a sensitivity of 100% and specificity of 97%. The severity classification accuracy is 81% for moderate cases and 100% for severe cases. These results establish the effectiveness of the proposed solution. PMID:22167598

  16. A perceptually tuned watermarking scheme for color images.

    PubMed

    Chou, Chun-Hsien; Liu, Kuo-Cheng

    2010-11-01

    Transparency and robustness are two conflicting requirements demanded by digital image watermarking for copyright protection and many other purposes. A feasible way to simultaneously satisfy the two conflicting requirements is to embed high-strength watermark signals in the host signals that can accommodate the distortion due to watermark insertion as part of perceptual redundancy. The search for distortion-tolerable host signals for watermark insertion and the determination of watermark strength are hence crucial to the realization of a transparent yet robust watermark. This paper presents a color image watermarking scheme that hides watermark signals in the most distortion-tolerable signals within the three color channels of the host image without resulting in perceivable distortion. The distortion-tolerable host signals, or the signals that possess high perceptual redundancy, are sought in the wavelet domain for watermark insertion. A visual model based on the CIEDE2000 color difference equation is used to measure the perceptual redundancy inherent in each wavelet coefficient of the host image. By means of quantization index modulation, binary watermark signals are embedded in qualified wavelet coefficients. To reinforce the robustness, the watermark signals are repeated and permuted before embedding, and restored by a majority-vote decision making process in watermark extraction. Original images are not required in watermark extraction. Only a small amount of information, including the locations of qualified coefficients and the data associated with coefficient quantization, is needed for watermark extraction. Experimental results show that the embedded watermark is transparent and quite robust in the face of various attacks such as cropping, low-pass filtering, scaling, median filtering, white-noise addition as well as JPEG and JPEG2000 coding at high compression ratios. PMID:20529748
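
    Quantization index modulation in its common dither-modulation form can be sketched as below: a coefficient is quantized onto one of two interleaved lattices selected by the watermark bit, and the bit is recovered as the nearer lattice. The step size delta would, in the scheme above, be derived from the CIEDE2000-based perceptual model, which is not reproduced here.

        import numpy as np

        def qim_embed(coeff, bit, delta):
            """Embed one bit into a wavelet coefficient by quantizing it onto
            the lattice selected by the bit (dither-modulation QIM)."""
            d = 0.0 if bit == 0 else delta / 2.0
            return delta * np.round((coeff - d) / delta) + d

        def qim_extract(coeff, delta):
            """Recover the bit as the lattice whose reconstruction is closest."""
            errs = []
            for bit in (0, 1):
                d = 0.0 if bit == 0 else delta / 2.0
                rec = delta * np.round((coeff - d) / delta) + d
                errs.append(abs(coeff - rec))
            return int(np.argmin(errs))

        # Round trip: qim_extract(qim_embed(13.7, 1, delta=4.0), delta=4.0) == 1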

  17. Blood flow estimation in gastroscopic true-color images

    NASA Astrophysics Data System (ADS)

    Jacoby, Raffael S.; Herpers, Rainer; Zwiebel, Franz M.; Englmeier, Karl-Hans

    1995-05-01

    The assessment of blood flow in the gastrointestinal mucosa might be an important factor for the diagnosis and treatment of several diseases such as ulcers, gastritis, colitis, or early cancer. The quantity of blood flow is roughly estimated by computing the spatial hemoglobin distribution in the mucosa. The presented method enables a practical realization by approximately calculating the hemoglobin concentration based on a spectrophotometric analysis of endoscopic true-color images, which are recorded during routine examinations. A system model based on the Kubelka-Munk law of reflectance spectroscopy is derived which enables an estimation of the hemoglobin concentration from the color values of the images. Additionally, a transformation of the color values is developed in order to improve luminance independence. Applying this transformation and estimating the hemoglobin concentration for each pixel of interest, the hemoglobin distribution can be computed. The obtained results are mostly independent of luminance. An initial validation of the presented method is performed by a quantitative estimation of the reproducibility.
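
    For orientation, the Kubelka-Munk function and a commonly used log-ratio hemoglobin index can be written as below; the paper derives its own model from the color values, so this Python fragment is only an illustrative stand-in, with the IHb-style red/green ratio as an assumed simplification.

        import numpy as np

        def kubelka_munk(reflectance):
            """Kubelka-Munk function K/S = (1 - R)^2 / (2 R) for R in (0, 1]."""
            r = np.clip(reflectance, 1e-4, 1.0)
            return (1.0 - r) ** 2 / (2.0 * r)

        def hemoglobin_index(rgb):
            """Rough per-pixel hemoglobin index from a true-color endoscopic
            frame using the widely quoted IHb log ratio of red to green."""
            rgb = rgb.astype(np.float64) + 1.0          # avoid division by zero
            return 32.0 * np.log2(rgb[..., 0] / rgb[..., 1])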

  18. Automated retinal vessel type classification in color fundus images

    NASA Astrophysics Data System (ADS)

    Yu, H.; Barriga, S.; Agurto, C.; Nemeth, S.; Bauman, W.; Soliz, P.

    2013-02-01

    Automated retinal vessel type classification is an essential first step toward machine-based quantitative measurement of various vessel topological parameters and identifying vessel abnormalities and alterations in cardiovascular disease risk analysis. This paper presents a new and accurate automatic artery and vein classification method developed for arteriolar-to-venular width ratio (AVR) and artery and vein tortuosity measurements in regions of interest (ROI) of 1.5 and 2.5 optic disc diameters from the disc center, respectively. This method includes illumination normalization, automatic optic disc detection and retinal vessel segmentation, feature extraction, and a partial least squares (PLS) classification. Normalized multi-color information, color variation, and multi-scale morphological features are extracted for each vessel segment. We trained the algorithm on a set of 51 color fundus images using manually marked arteries and veins. We tested the proposed method on a previously unseen test data set consisting of 42 images. We obtained an area under the ROC curve (AUC) of 93.7% in the ROI of the AVR measurement and an AUC of 91.5% in the ROI of the tortuosity measurement. The proposed AV classification method has the potential to assist automatic early detection of cardiovascular disease and risk analysis.
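
    A PLS-based artery/vein classifier can be sketched with scikit-learn's PLSRegression by regressing a binary label on the per-segment features and thresholding the continuous score; the feature files, component count and threshold below are hypothetical placeholders rather than values from the paper.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        # Hypothetical inputs: per-segment feature vectors and manual labels
        # (1 = artery, 0 = vein).
        X = np.load("vessel_segment_features.npy")
        y = np.load("vessel_segment_labels.npy").astype(float)

        pls = PLSRegression(n_components=10)
        pls.fit(X, y)
        scores = pls.predict(X[:20]).ravel()   # continuous PLS score per segment
        is_artery = scores > 0.5               # simple threshold on the score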

  19. Color imaging the magnetic field distribution in superconductors

    SciTech Connect

    Batalla, E.; Zwartz, E.G.; Goudreault, R.; Wright, L.S.

    1990-08-01

    A magneto-optically active glass was used to image the magnetic field distribution in superconductors using the Faraday effect. Polarized white light illumination of the glass resulted in various colors depending on the setting of the analyzing polaroid. These colors are shown to be consistent with the known dependence of the Faraday rotation angle on the applied magnetic field, the temperature of the glass, and the wavelength of the light. This technique was used to observe field distributions in polycrystalline and single-crystal YBa2Cu3O7 samples. In the ceramic sample, the field was uniform within the resolution (50 µm) of this technique and field magnitudes were measured with a 10% accuracy. In the single crystal, the magnetic field distribution was not uniform, showing field gradients imaged as color gradients on the pictures of the glass. Contours of constant magnetic field were drawn from these photographs and from these, a critical current density of 10^9 A/m^2 was deduced in an external field of 136 mT.

  20. False-color composite image of Prince Albert, Canada

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a false color composite of Prince Albert, Canada, centered at 53.91 north latitude and 104.69 west longitude. This image was acquired by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) on the 20th orbit of the Shuttle Endeavour. The area is located 40 km north and 30 km east of the town of Prince Albert in the Saskatchewan province of Canada. The image covers the area east of Candle Lake, between gravel surface highways 120 and 106 and west of 106. The area in the middle of the image covers the entire Nipawin (Narrow Hills) provincial park. The look angle of the radar is 30 degrees and the size of the image is approximately 20 kilometers by 50 kilometers (12 by 30 miles). Most of the dark areas in the image are the ice-covered lakes in the region. The dark area on the top right corner of the image is the White Gull Lake north of the intersection of Highway 120 and 913. The right middle part of the image shows Lake Ispuchaw and Lower Fishing Lake

  1. Three frequency false-color image of Prince Albert, Canada

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a three-frequency, false color image of Prince Albert, Canada, centered at 53.91 north latitude and 104.69 west longitude. It was produced using data from the X-band, C-band and L-band radars that comprise the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR). SIR-C/X-SAR acquired this image on the 20th orbit of the Shuttle Endeavour. The area is located 40 km north and 30 km east of the town of Prince Albert in the Saskatchewan province of Canada. The image covers the area east of Candle Lake, between gravel surface highways 120 and 106 and west of 106. The area in the middle of the image covers the entire Nipawin (Narrow Hills) provincial park. Most of the dark blue areas in the image are the ice-covered lakes. The dark area on the top right corner of the image is the White Gull Lake north of the intersection of highway 120 and 913. The right middle part of the image shows Lake Ispuchaw and Lower Fishing Lake. The deforested areas are shown by light

  2. Edge-suppressed color clustering for image thresholding

    NASA Astrophysics Data System (ADS)

    Celenk, Mehmet; Uijt de Haag, Maarten

    2000-03-01

    This paper discusses the development of an iterative algorithm for fully automatic (gross or fine) segmentation of color images. The basic idea is to automate segmentation for on-line operations. This is needed for such critical applications as internet communication, video indexing, target tracking, visual guidance, remote control, and motion detection. The method is composed of an edge-suppressed clustering (learning) step and a principal component thresholding (classification) step. In the learning phase, image clusters are formed in the (R,G,B) space by considering only the non-edge points. The unknown number (N) of mutually exclusive image segments is learned in an unsupervised operation mode developed on the basis of a cluster fidelity measure and the K-means algorithm. The classification phase is a correlation-based segmentation strategy that operates in the K-L transform domain using the Otsu thresholding principle. It is demonstrated experimentally that the method is effective and efficient for color images of natural scenes with irregular textures and objects of varying sizes and dimensions.
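
    The edge-suppressed learning step can be approximated as follows: estimate an edge map, fit the color clusters on non-edge pixels only, and then label every pixel by its nearest learned center. The sketch assumes a fixed number of clusters and plain K-means; the paper additionally learns N via a cluster fidelity measure and classifies with Otsu thresholding in the K-L transform domain, which is not reproduced here.

        import numpy as np
        from scipy import ndimage
        from sklearn.cluster import KMeans

        def edge_suppressed_kmeans(rgb, n_clusters=4, edge_thresh=30.0):
            """Cluster only non-edge pixels in RGB space, then assign every
            pixel (edges included) to its nearest learned cluster center."""
            gray = rgb.astype(np.float64).mean(axis=-1)
            grad = np.hypot(ndimage.sobel(gray, axis=0), ndimage.sobel(gray, axis=1))
            non_edge = grad < edge_thresh
            km = KMeans(n_clusters=n_clusters, n_init=10)
            km.fit(rgb[non_edge].astype(np.float64))
            labels = km.predict(rgb.reshape(-1, 3).astype(np.float64))
            return labels.reshape(rgb.shape[:2]), km.cluster_centers_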

  3. Extending the depth-of-field for microscopic imaging by means of multifocus color image fusion

    NASA Astrophysics Data System (ADS)

    Hurtado-Pérez, R.; Toxqui-Quitl, C.; Padilla-Vivanco, A.; Ortega-Mendoza, G.

    2015-09-01

    In microscopy, the depth of field (DOF) is limited by the physical characteristics of the imaging system. Imaging a scene with the entire field of view in focus can be an impossible task. In this paper, metal samples are inspected on multiple focal planes by moving the microscope stage along the z-axis, and for each z plane an image is digitized. Through digital image processing, an image with all regions in focus is generated from a set of multifocus images. The proposed fusion algorithm gives a single sharp image. The merging scheme is simple, fast and virtually free of artifacts or false color. Experimental fusion results are shown.
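
    A simple multifocus fusion rule, given here only as an illustrative baseline and not as the paper's algorithm, keeps for each pixel the value from the z-slice with the highest local Laplacian energy (a common sharpness measure).

        import numpy as np
        from scipy import ndimage

        def multifocus_fuse(stack):
            """All-in-focus composite from a z-stack of color images of shape
            (Z, H, W, 3): per pixel, take the slice with the sharpest
            neighborhood according to local Laplacian energy."""
            stack = np.asarray(stack, dtype=np.float64)
            sharp = []
            for img in stack:
                lap = ndimage.laplace(img.mean(axis=-1))          # focus measure
                sharp.append(ndimage.uniform_filter(lap ** 2, size=9))
            best = np.argmax(np.asarray(sharp), axis=0)           # (H, W) slice index
            rows, cols = np.mgrid[0:best.shape[0], 0:best.shape[1]]
            return stack[best, rows, cols].astype(np.uint8)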

  4. Butterfly wing coloration studied with a novel imaging scatterometer

    NASA Astrophysics Data System (ADS)

    Stavenga, Doekele

    2010-03-01

    Animal coloration functions for display or camouflage. Insects in particular provide numerous examples of a rich variety of optical mechanisms. For instance, many butterflies feature a distinct dichromatism, that is, the wing coloration of the male and the female differ substantially. The male Brimstone, Gonepteryx rhamni, has yellow wings that are strongly UV iridescent, but the female has white wings with low reflectance in the UV and a high reflectance in the visible wavelength range. In the Small White cabbage butterfly, Pieris rapae crucivora, the wing reflectance of the male is low in the UV and high at visible wavelengths, whereas the wing reflectance of the female is higher in the UV and lower in the visible. Pierid butterflies apply nanosized, strongly scattering beads to achieve their bright coloration. The male Pipevine Swallowtail butterfly, Battus philenor, has dorsal wings with scales functioning as thin film gratings that exhibit polarized iridescence; the dorsal wings of the female are matte black. The polarized iridescence probably functions in intraspecific, sexual signaling, as has been demonstrated in Heliconius butterflies. An example of camouflage is the Green Hairstreak butterfly, Callophrys rubi, where photonic crystal domains exist in the ventral wing scales, resulting in a matte green color that matches the color of plant leaves well. The spectral reflection and polarization characteristics of biological tissues can be assessed rapidly and in unprecedented detail with a novel imaging scatterometer-spectrophotometer built around an elliptical mirror [1]. Examples of butterfly and damselfly wings, bird feathers, and beetle cuticle will be presented. [1] D.G. Stavenga, H.L. Leertouwer, P. Pirih, M.F. Wehling, Optics Express 17, 193-202 (2009)

  5. An adaptive algorithm for motion compensated color image coding

    NASA Technical Reports Server (NTRS)

    Kwatra, Subhash C.; Whyte, Wayne A.; Lin, Chow-Ming

    1987-01-01

    This paper presents an adaptive algorithm for motion compensated color image coding. The algorithm can be used for video teleconferencing or broadcast signals. Activity segmentation is used to reduce the bit rate and a variable stage search is conducted to save computations. The adaptive algorithm is compared with the nonadaptive algorithm and it is shown that with approximately 60 percent savings in computing the motion vector and 33 percent additional compression, the performance of the adaptive algorithm is similar to the nonadaptive algorithm. The adaptive algorithm results also show improvement of up to 1 bit/pel over interframe DPCM coding with nonuniform quantization. The test pictures used for this study were recorded directly from broadcast video in color.

  6. Color binarization for complex camera-based images

    NASA Astrophysics Data System (ADS)

    Thillou, Céline; Gosselin, Bernard

    2005-01-01

    This paper describes a new automatic color thresholding based on wavelet denoising and color clustering with K-means in order to segment text information in a camera-based image. Several parameters bring different information and this paper tries to explain how to use this complementarity. It is mainly based on the discrimination between two kinds of backgrounds: clean or complex. On one hand, this separation is useful to apply a particular algorithm on each of these cases and on the other hand to decrease the computation time for clean cases for which a faster method could be considered. Finally, several experiments were done to discuss results and to conclude that the use of a discrimination between kinds of backgrounds gives better results in terms of Precision and Recall.

  7. Color binarization for complex camera-based images

    NASA Astrophysics Data System (ADS)

    Thillou, Céline; Gosselin, Bernard

    2004-12-01

    This paper describes a new automatic color thresholding based on wavelet denoising and color clustering with K-means in order to segment text information in a camera-based image. Several parameters bring different information and this paper tries to explain how to use this complementarity. It is mainly based on the discrimination between two kinds of backgrounds: clean or complex. On one hand, this separation is useful to apply a particular algorithm on each of these cases and on the other hand to decrease the computation time for clean cases for which a faster method could be considered. Finally, several experiments were done to discuss results and to conclude that the use of a discrimination between kinds of backgrounds gives better results in terms of Precision and Recall.

  8. Automated rice leaf disease detection using color image analysis

    NASA Astrophysics Data System (ADS)

    Pugoy, Reinald Adrian D. L.; Mariano, Vladimir Y.

    2011-06-01

    In rice-related institutions such as the International Rice Research Institute, assessing the health condition of a rice plant through its leaves, which is usually done as a manual eyeball exercise, is important to come up with good nutrient and disease management strategies. In this paper, an automated system that can detect diseases present in a rice leaf using color image analysis is presented. In the system, the outlier region is first obtained from a rice leaf image to be tested using histogram intersection between the test and healthy rice leaf images. Upon obtaining the outlier, it is then subjected to a threshold-based K-means clustering algorithm to group related regions into clusters. Then, these clusters are subjected to further analysis to finally determine the suspected diseases of the rice leaf.
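
    The histogram-intersection step used to flag outlier colors can be sketched as below; the bin count and the per-channel averaging are assumptions for illustration, and the subsequent threshold-based K-means stage is not shown.

        import numpy as np

        def histogram_intersection(test_rgb, healthy_rgb, bins=32):
            """Per-channel normalized histogram intersection between a test
            leaf image and a healthy reference; a low score suggests the
            presence of outlier (possibly diseased) colors."""
            score = 0.0
            for c in range(3):
                h1, _ = np.histogram(test_rgb[..., c], bins=bins, range=(0, 256))
                h2, _ = np.histogram(healthy_rgb[..., c], bins=bins, range=(0, 256))
                h1 = h1 / max(h1.sum(), 1)
                h2 = h2 / max(h2.sum(), 1)
                score += np.minimum(h1, h2).sum() / 3.0
            return score   # 1.0 means identical color distributions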

  9. From printed color to image appearance: tool for advertising assessment

    NASA Astrophysics Data System (ADS)

    Bonanomi, Cristian; Marini, Daniele; Rizzi, Alessandro

    2012-07-01

    We present a methodology to calculate the color appearance of advertising billboards set in indoor and outdoor environments, printed on different types of paper support and viewed under different illuminations. The aim is to simulate the visual appearance of an image printed on a specific support, observed in a certain context and illuminated with a specific source of light. Knowing in advance the visual rendering of an image in different conditions can avoid problems related to its visualization. The proposed method applies a sequence of transformations to convert a four-channel (CMYK) image into a spectral one, considering the paper support; it then simulates the chosen illumination and finally computes an estimate of the appearance.

  10. Automatic Microaneurysm Detection and Characterization Through Digital Color Fundus Images

    SciTech Connect

    Martins, Charles; Veras, Rodrigo; Ramalho, Geraldo; Medeiros, Fatima; Ushizima, Daniela

    2008-08-29

    Ocular fundus images can provide information about retinal, ophthalmic, and even systemic diseases such as diabetes. Microaneurysms (MAs) are the earliest sign of Diabetic Retinopathy, a frequently observed complication in both type 1 and type 2 diabetes. Robust detection of MAs in digital color fundus images is critical in the development of automated screening systems for this kind of disease. Automatic grading of these images is being considered by health boards so that the human grading task is reduced. In this paper we describe the segmentation and feature extraction methods for candidate MA detection. We show that the candidate MAs detected with this methodology have been successfully classified by an MLP neural network (correct classification of 84 percent).

  11. Characterizing pigments with hyperspectral imaging variable false-color composites

    NASA Astrophysics Data System (ADS)

    Hayem-Ghez, Anita; Ravaud, Elisabeth; Boust, Clotilde; Bastian, Gilles; Menu, Michel; Brodie-Linder, Nancy

    2015-11-01

    Hyperspectral imaging has been used for pigment characterization on paintings for the last 10 years. It is a noninvasive technique which combines the power of spectrophotometry with that of imaging technologies. We have access to a visible and near-infrared hyperspectral camera, ranging from 400 to 1000 nm in 80-160 spectral bands. In order to treat the large amount of data that this imaging technique generates, one can use statistical tools such as principal component analysis (PCA). To characterize pigments, researchers mostly use PCA, convex geometry algorithms and the comparison of resulting clusters to database spectra with a specific tolerance (like the Spectral Angle Mapper tool in the dedicated software ENVI). Our approach originates from false-color photography and aims at providing a simple tool to identify pigments through imaging spectroscopy. It can be considered a quick first analysis to see the principal pigments of a painting, before using a more complete multivariate statistical tool. We study pigment spectra for each kind of hue (blue, green, red and yellow) to identify the wavelengths that maximize spectral differences. The case of red pigments is the most interesting because our methodology can discriminate the red pigments very well, even red lakes, which are always difficult to identify. For the yellow and blue categories, the method represents clear progress over infrared false-color (IRFC) photography for pigment discrimination. We apply our methodology to study the pigments of a painting by Eustache Le Sueur, a French painter of the seventeenth century. We compare the results to other noninvasive analyses such as X-ray fluorescence and optical microscopy. Finally, we draw conclusions about the advantages and limits of the variable false-color image method using hyperspectral imaging.

  12. Structure of mouse spleen investigated by 7-color fluorescence imaging

    NASA Astrophysics Data System (ADS)

    Tsurui, Hiromichi; Niwa, Shinichirou; Hirose, Sachiko; Okumura, Ko; Shirai, Toshikazu

    2001-07-01

    Multi-color fluorescence imaging of tissue samples has been an urgent requirement in current biology. As long as fluorescence signals must be isolated by optical bandpass filter sets, the rarity of chromophore combinations with little spectral overlap has hampered satisfying this demand. The additivity of signals in a fluorescence image permits linear unmixing of superposed spectra based on singular value decomposition, and hence complete separation of fluorescence signals that overlap considerably. We have developed 7-color fluorescence imaging based on this principle and applied the method to the investigation of mouse spleen. Not only rough structural features of the spleen such as the red pulp, marginal zone, and white pulp, but also their fine structures, the periarteriolar lymphocyte sheath (PALS), follicles, and germinal centers, were clearly pictured simultaneously. The distributions of dendritic cell (DC) and macrophage markers such as BM8, F4/80, MOMA2 and Mac3 around the marginal zone were imaged simultaneously. Their inhomogeneous expression was clearly demonstrated. These results show the usefulness of the method in the study of structures that consist of many kinds of cells and in the identification of cells characterized by multiple markers.
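
    Linear spectral unmixing of the multichannel signal can be sketched as a per-pixel least-squares problem (NumPy's lstsq solves it via SVD); the reference spectra, channel count and non-negativity clipping below are illustrative assumptions, not details from the paper.

        import numpy as np

        def unmix(image, spectra):
            """Least-squares unmixing of a multichannel fluorescence image.

            image   : (H, W, C) measured intensities in C detection channels
            spectra : (C, F) reference emission spectrum of each of F fluorophores
            returns : (H, W, F) estimated abundance of each fluorophore
            """
            h, w, c = image.shape
            pixels = image.reshape(-1, c).T                    # (C, N)
            abund, *_ = np.linalg.lstsq(spectra, pixels, rcond=None)
            return np.clip(abund.T.reshape(h, w, -1), 0.0, None)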

  13. Color reproduction and processing algorithm based on real-time mapping for endoscopic images.

    PubMed

    Khan, Tareq H; Mohammed, Shahed K; Imtiaz, Mohammad S; Wahid, Khan A

    2016-01-01

    In this paper, we present a real-time preprocessing algorithm for image enhancement of endoscopic images. A novel dictionary-based color mapping algorithm is used for reproducing the color information from a theme image. The theme image is selected from a nearby anatomical location. A database of color endoscopy images for different locations is prepared for this purpose. The color map is dynamic, as its contents change with the change of the theme image. This method is used on low-contrast grayscale white-light images and raw narrow-band images to highlight the vascular and mucosa structures and to colorize the images. It can also be applied to enhance the tone of color images. The statistical visual representation and universal image quality measures show that the proposed method can highlight the mucosa structure compared to other methods. The color similarity has been verified using the Delta E color difference, the structural similarity index, the mean structural similarity index and the structure and hue similarity. The color enhancement was measured using a color enhancement factor that shows considerable improvement. The proposed algorithm has low and linear time complexity, which results in higher execution speed than other related works. PMID:26759756

  14. SRTM Radar Image with Color as Height: Kachchh, Gujarat, India

    NASA Technical Reports Server (NTRS)

    2001-01-01

    This image shows the area around the January 26, 2001, earthquake in western India, the deadliest in the country's history with some 20,000 fatalities. The epicenter of the magnitude 7.6 earthquake was just to the left of the center of the image. The Gulf of Kachchh (or Kutch) is the black area running from the lower left corner towards the center of the image. The city of Bhuj is in the yellow-toned area among the brown hills left of the image center and is the historical capital of the Kachchh region. Bhuj and many other towns and cities nearby were almost completely destroyed by the shaking of the earthquake. These hills reach up to 500 meters (1,500 feet) elevation. The city of Ahmedabad, capital of Gujarat state, is the radar-bright area next to the right side of the image. Several buildings in Ahmedabad were also destroyed by the earthquake. The dark blue areas around the center of the image and extending to the left side are low-lying salt flats called the Rann of Kachchh with the Little Rann just to the right of the image center. The bumpy area north of the Rann (green and yellow colors) is a large area of sand dunes in Pakistan. A branch of the Indus River used to flow through the area on the left side of this image, but it was diverted by a previous large earthquake that struck this area in 1819.

    The annotated version of the image includes a 'beachball' that shows the location and slip direction of the January 26, 2001, earthquake from the Harvard Quick CMT catalog: http://www.seismology.harvard.edu/CMTsearch.html. [figure removed for brevity, see original site]

    This image combines two types of data from the Shuttle Radar Topography Mission (SRTM). The image brightness corresponds to the strength of the radar signal reflected from the ground, while colors show the elevation as measured by SRTM. Colors range from blue at the lowest elevations to brown and white at the highest elevations. This image is a mosaic of four SRTM swaths.

    This image

  15. Los Angeles, California, Radar Image, Wrapped Color as Height

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This topographic radar image shows the relationships of the dense urban development of Los Angeles and the natural contours of the land. The image includes the Pacific Ocean on the left, the flat Los Angeles Basin across the center, and the steep ranges of the Santa Monica and Verdugo mountains along the top. The two dark strips near the coast at lower left are the runways of Los Angeles International Airport. Downtown Los Angeles is the bright yellow and pink area at lower center. Pasadena, including the Rose Bowl, is seen halfway down the right edge of the image. The communities of Glendale and Burbank, including the Burbank Airport, are seen at the center of the top edge of the image. Hazards from earthquakes, floods and fires are intimately related to the topography in this area. Topographic data and other remote sensing images provide valuable information for assessing and mitigating the natural hazards for cities such as Los Angeles.

    This image combines two types of data from the Shuttle Radar Topography Mission. The image brightness corresponds to the strength of the radar signal reflected from the ground, while colors show the elevation as measured by SRTM. Each cycle of colors (from pink through blue back to pink) represents an equal amount of elevation difference (400 meters, or 1300 feet) similar to contour lines on a standard topographic map. This image contains about 2400 meters (8000 feet) of total relief.

    The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission is designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between

  16. Quaternionic Local Ranking Binary Pattern: A Local Descriptor of Color Images.

    PubMed

    Lan, Rushi; Zhou, Yicong; Tang, Yuan Yan

    2016-02-01

    This paper proposes a local descriptor called quaternionic local ranking binary pattern (QLRBP) for color images. Different from traditional descriptors that are extracted from each color channel separately or from vector representations, QLRBP works on the quaternionic representation (QR) of the color image that encodes a color pixel using a quaternion. QLRBP is able to handle all color channels directly in the quaternionic domain and include their relations simultaneously. Applying a Clifford translation to QR of the color image, QLRBP uses a reference quaternion to rank QRs of two color pixels, and performs a local binary coding on the phase of the transformed result to generate local descriptors of the color image. Experiments demonstrate that the QLRBP outperforms several state-of-the-art methods. PMID:26672041

  17. A 10 MS/s 8-bit charge-redistribution ADC for hybrid pixel applications in 65 nm CMOS

    NASA Astrophysics Data System (ADS)

    Kishishita, Tetsuichi; Hemperek, Tomasz; Krüger, Hans; Koch, Manuel; Germic, Leonard; Wermes, Norbert

    2013-12-01

    The design and measurement results of an 8-bit SAR ADC, based on a charge-redistribution DAC, are presented. This ADC is characterized by superior power efficiency and small area, realized by employing a lateral metal-metal capacitor array and a dynamic two-stage comparator. To avoid the need for a high-speed clock and its associated power consumption, asynchronous logic was implemented in a logic control cell. A test chip has been developed in a 65 nm CMOS technology, including eight ADC channels with different layout flavors of the capacitor array, a transimpedance amplifier as a signal input structure, a serializer, and a custom-made LVDS driver for data transmission. The integral (INL) and differential (DNL) nonlinearities are measured to be below 0.5 LSB and 0.8 LSB, respectively, for the best channel operating at a sampling frequency of 10 MS/s. One ADC channel occupies an area of 40 μm × 70 μm. The power consumption is estimated as 4 μW at 1 MS/s and 38 μW at 10 MS/s with a supply rail of 1.2 V. These excellent performance features and the natural radiation hardness of the design, due to the thin gate oxide of the transistors, are very interesting for front-end electronics ICs of future hybrid-pixel detector systems.
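
    The successive-approximation search at the heart of such a converter can be modeled in a few lines; this behavioral Python sketch assumes an ideal, noiseless DAC and comparator and says nothing about the charge-redistribution circuit or the asynchronous logic.

        def sar_convert(vin, vref=1.2, bits=8):
            """Binary search for the digital code whose DAC level best
            matches the sampled input, one bit per comparison."""
            code = 0
            for i in reversed(range(bits)):
                trial = code | (1 << i)                  # tentatively set next bit
                if vin >= trial * vref / (1 << bits):    # comparator vs. DAC level
                    code = trial                         # keep the bit
            return code

        print(sar_convert(0.6))   # mid-scale input -> code 128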

  18. Low Temperature Testing of a Radiation Hardened CMOS 8-Bit Flash Analog-to-Digital (A/D) Converter

    NASA Technical Reports Server (NTRS)

    Gerber, Scott S.; Hammond, Ahmad; Elbuluk, Malik E.; Patterson, Richard L.; Overton, Eric; Ghaffarian, Reza; Ramesham, Rajeshuni; Agarwal, Shri G.

    2001-01-01

    Power processing electronic systems, data acquisition probes, and signal conditioning circuits are required to operate reliably under harsh environments in many of NASA's missions. The environment of the space mission as well as the operational requirements of some of the electronic systems, such as infrared-based satellite or telescopic observation stations where cryogenics are involved, dictate the utilization of electronics that can operate efficiently and reliably at low temperatures. In this work, radiation-hard CMOS 8-bit flash A/D converters were characterized in terms of voltage conversion and offset in the temperature range of +25 to -190 C. Static and dynamic supply currents, ladder resistance, and gain and offset errors were also obtained in the temperature range of +125 to -190 C. The effect of thermal cycling on these properties for a total of ten cycles between +80 and -150 C was also determined. The experimental procedure along with the data obtained are reported and discussed in this paper.

  19. A 5 Giga Samples Per Second 8-Bit Analog to Digital Printed Circuit Board for Radio Astronomy

    NASA Astrophysics Data System (ADS)

    Jiang, Homin; Liu, Howard; Guzzino, Kim; Kubo, Derek; Li, Chao-Te; Chang, Ray; Chen, Ming-Tang

    2014-08-01

    We have designed, manufactured, and characterized an 8-bit 5 Giga samples per second (Gsps) ADC printed circuit board assembly (PCBA). An e2v EV8AQ160 ADC chip was used in the design and the board is plug compatible with the field programmable gate array (FPGA) board developed by the Collaboration for Astronomy Signal Processing and Electronics Research (CASPER) community. Astronomical interference fringes were demonstrated across a single baseline pair of antennas using two ADC boards on the Yuan Tseh Lee Array for Microwave Background Anisotropy (AMiBA) telescope. Several radio interferometers are using this board for bandwidth expansion, such as Submillimeter Array; also, several experimental telescopes are building new spectrometers using the same board. The ADC boards were attached directly to the Reconfigurable Open Architecture Computing Hardware (ROACH-2) FPGA board for processing of the digital output signals. This ADC board provides the capability of digitizing radio frequency signals from DC to 2 GHz (3 dB bandwidth), and to an extended bandwidth of 2.5 GHz (5 dB) with derated performance. The following worst-case performance parameters were obtained over 2 GHz: spur free dynamic range (SFDR) of 44 dB, signal-to-noise and distortion (SINAD) of 35 dB, and effective number of bits (ENOB) of 5.5.
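
    As a check on the quoted dynamic performance, the standard relation ENOB = (SINAD - 1.76 dB) / 6.02 dB gives (35 - 1.76) / 6.02 ≈ 5.5 bits, consistent with the effective number of bits reported above.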

  20. Three frequency false color image of Flevoland, the Netherlands

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a three-frequency false color image of Flevoland, the Netherlands, centered at 52.4 degrees north latitude, 5.4 degrees east longitude. This image was acquired by the Spaceborne Imaging Radar-C and X-Band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the Shuttle Endeavour. The area shown covers an area approximately 25 kilometers by 28 kilometers. Flevoland, which fills the lower two-thirds of the image, is a very flat area that is made up of reclaimed land that is used for agriculture and forestry. At the top of the image, across the canal from Flevoland, is an older forest shown in red; the city of Harderwijk is shown in white on the shore of the canal. At this time of the year, the agricultural fields are bare soil, and they show up in this image in blue. The dark blue areas are water and the small dots in the canal are boats. The Jet Propulsion Laboratory alternative photo number is P-43941.

  1. Radar Image with Color as Height, Ancharn Kuy, Cambodia

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This image of Ancharn Kuy, Cambodia, was taken by NASA's Airborne Synthetic Aperture Radar (AIRSAR). The image depicts an area northwest of Angkor Wat. The radar has highlighted a number of circular village mounds in this region, many of which have a circular pattern of rice fields surrounding the slightly elevated site. Most of them have evidence of what seems to be pre-Angkor occupation, such as stone tools and potsherds. Most of them also have a group of five spirit posts, a pattern not found in other parts of Cambodia. The shape of the mound, the location in the midst of a ring of rice fields, the stone tools and the current practice of spirit veneration have revealed themselves through a unique 'marriage' of radar imaging, archaeological investigation, and anthropology.

    Ancharn Kuy is a small village adjacent to the road, with just this combination of features. The region gets slowly higher in elevation, something seen in the shift of color from yellow to blue as you move to the top of the image.

    The small dark rectangles are typical of the smaller water control devices employed in this area. While many of these in the center of Angkor are linked to temples of the 9th to 14th Century A.D., we cannot be sure of the construction date of these small village tanks. They may pre-date the temple complex, or they may have just been dug ten years ago!

    The image dimensions are approximately 4.75 by 4.3 kilometers (3 by 2.7 miles) with a pixel spacing of 5 meters (16.4 feet). North is at top. Image brightness is from the C-band (5.6 centimeters, or 2.2 inches) wavelength radar backscatter, which is a measure of how much energy the surface reflects back toward the radar. Color is used to represent elevation contours. One cycle of color, going from blue to red to yellow to green and back to blue again, corresponds to 10 meters (32.8 feet) of elevation change.

    AIRSAR flies aboard a NASA DC-8 based at NASA's Dryden Flight Research Center, Edwards, Calif

  2. Estimation of spectral transmittance curves from RGB images in color digital holographic microscopy using speckle illuminations

    NASA Astrophysics Data System (ADS)

    Funamizu, Hideki; Tokuno, Yuta; Aizu, Yoshihisa

    2016-06-01

    We investigate the estimation of spectral transmittance curves in color digital holographic microscopy using speckle illuminations. Color digital holography has the disadvantage that the color-composite image gives poor color information due to the use of lasers with only two or three wavelengths. To overcome this disadvantage, the Wiener estimation method and an averaging process using multiple holograms are applied to color digital holographic microscopy. Estimated spectral transmittance and color-composite images are shown to indicate the usefulness of the proposed method.
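
    Wiener estimation recovers a spectrum from a few channel responses by means of a precomputed matrix built from training spectra and the system matrix; the Python sketch below is a textbook formulation under an assumed additive-noise model, and the training spectra, noise level and system matrix are placeholders rather than values from the paper.

        import numpy as np

        def wiener_matrix(A, train_spectra, noise_var=1e-4):
            """Wiener estimation matrix W such that s_hat = W @ v recovers a
            spectral transmittance curve from a channel response vector v.

            A             : (C, L) system matrix (channel sensitivities x illumination)
            train_spectra : (N, L) representative transmittance spectra
            """
            Rss = train_spectra.T @ train_spectra / len(train_spectra)  # (L, L)
            Rnn = noise_var * np.eye(A.shape[0])                        # (C, C)
            return Rss @ A.T @ np.linalg.inv(A @ Rss @ A.T + Rnn)

        # s_hat = wiener_matrix(A, train_spectra) @ rgb_response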

  3. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition.

    PubMed

    Park, Chulhee; Kang, Moon Gi

    2016-01-01

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component in each RGB color channel. To overcome color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible band component and the NIR band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors. PMID:27213381

  4. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition

    PubMed Central

    Park, Chulhee; Kang, Moon Gi

    2016-01-01

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component of each RGB color channel. To overcome color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible band component and the NIR band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors. PMID:27213381

  5. Automatic sputum color image segmentation for tuberculosis diagnosis

    NASA Astrophysics Data System (ADS)

    Forero-Vargas, Manuel G.; Sierra-Ballen, Eduard L.; Alvarez-Borrego, Josue; Pech-Pacheco, Jose L.; Cristobal-Perez, Gabriel; Alcala, Luis; Desco, Manuel

    2001-11-01

    Tuberculosis (TB) and other mycobacterioses are serious illnesses whose control is mainly based on presumptive diagnosis. Besides clinical suspicion, the diagnosis of mycobacteriosis must be made through genus-specific smears of clinical specimens. However, these techniques lack sensitivity, and consequently clinicians must wait for culture results for as long as two months. Computer analysis of digital images from these smears could improve the sensitivity of the test and, moreover, decrease the workload of the mycobacteriologist. Segmentation of bacteria of particular species entails a complex process. Bacterial shape is not enough as a discriminant feature, because many species share the same shape. Therefore the segmentation procedure must be improved using color image information. In this paper we present two segmentation procedures, based on fuzzy rules and phase-only correlation techniques respectively, that will provide the basis of a future automatic particle screening.

  6. Color image segmentation using vector angle-based region growing

    NASA Astrophysics Data System (ADS)

    Wesolkowski, Slawo; Fieguth, Paul W.

    2002-06-01

    A new region growing color image segmentation algorithm is presented in this paper. This algorithm is invariant to highlights and shading. This is accomplished in two steps. First, the average pixel intensity is removed from each RGB coordinate. This transformation mitigates the effects of highlights. Next, region seeds are obtained using the Mixture of Principal Components algorithm. Each region is characterized using two parameters. The first is the distance between the region prototype and the candidate pixel. The second is the distance between the candidate pixel and its nearest neighbor in the region. The inner vector product or vector angle is used as the similarity measure which makes both of these measures shading invariant. Results on a real image illustrate the effectiveness of the method.
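
    A minimal sketch of the two invariance ingredients described above: subtracting the per-pixel mean intensity from the RGB coordinates, and measuring similarity by the vector angle between color vectors; function names and any threshold are illustrative.

      import numpy as np

      def remove_mean_intensity(img):
          """img: H x W x 3 float array; subtract the mean of R, G, B at each pixel."""
          return img - img.mean(axis=2, keepdims=True)

      def vector_angle(a, b, eps=1e-8):
          """Angle (radians) between two color vectors; a small angle means similar color."""
          cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
          return np.arccos(np.clip(cos, -1.0, 1.0))

      # A candidate pixel would join a growing region when its angle to the region
      # prototype (and to its nearest in-region neighbor) falls below a threshold.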

  7. Optical color-image encryption in the diffractive-imaging scheme

    NASA Astrophysics Data System (ADS)

    Qin, Yi; Wang, Zhipeng; Pan, Qunna; Gong, Qiong

    2016-02-01

    By introducing the theta modulation technique into the diffractive-imaging-based optical scheme, we propose a novel approach for color image encryption. For encryption, a color image is divided into three channels, i.e., red, green and blue, and thereafter these components are appended with redundant data before being sent to the encryption scheme. The carefully designed optical setup, which comprises three 4f optical architectures and a diffractive-imaging-based optical scheme, could encode the three plaintexts into a single noise-like intensity pattern. For the decryption, an iterative phase retrieval algorithm, together with a filter operation, is applied to extract the primary color images from the diffraction intensity map. Compared with previous methods, our proposal has successfully encrypted a color rather than grayscale image into a single intensity pattern, as a result of which the capacity and practicability have been remarkably enhanced. In addition, its performance and security are also investigated. The validity as well as feasibility of the proposed method is supported by numerical simulations.

  8. Honolulu, Hawaii Radar Image, Wrapped Color as Height

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This topographic radar image shows the city of Honolulu, Hawaii and adjacent areas on the island of Oahu. Honolulu lies on the south shore of the island, right of center of the image. Just below the center is Pearl Harbor, marked by several inlets and bays. Runways of the airport can be seen to the right of Pearl Harbor. Diamond Head, an extinct volcanic crater, is a blue circle along the coast right of center. The Koolau mountain range runs through the center of the image. The steep cliffs on the north side of the range are thought to be remnants of massive landslides that ripped apart the volcanic mountains that built the island thousands of years ago. On the north shore of the island are the Mokapu Peninsula and Kaneohe Bay. High resolution topographic data allow ecologists and planners to assess the effects of urban development on the sensitive ecosystems in tropical regions.

    This image combines two types of data from the Shuttle Radar Topography Mission. The image brightness corresponds to the strength of the radar signal reflected from the ground, while colors show the elevation as measured by SRTM. Each cycle of colors (from pink through blue back to pink) represents an equal amount of elevation difference (400 meters, or 1300 feet) similar to contour lines on a standard topographic map. This image contains about 2400 meters (8000 feet) of total relief.

    The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission is designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA

  9. San Gabriel Mountains, California, Radar image, color as height

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This topographic radar image shows the relationship of the urban area of Pasadena, California to the natural contours of the land. The image includes the alluvial plain on which Pasadena and the Jet Propulsion Laboratory sit, and the steep range of the San Gabriel Mountains. The mountain front and the arcuate valley running from upper left to the lower right are active fault zones, along which the mountains are rising. The chaparral-covered slopes above Pasadena are also a prime area for wildfires and mudslides. Hazards from earthquakes, floods and fires are intimately related to the topography in this area. Topographic data and other remote sensing images provide valuable information for assessing and mitigating the natural hazards for cities along the front of active mountain ranges.

    This image combines two types of data from the Shuttle Radar Topography Mission. The image brightness corresponds to the strength of the radar signal reflected from the ground, while colors show the elevation as measured by SRTM. Colors range from blue at the lowest elevations to white at the highest elevations. This image contains about 2300 meters (7500 feet) of total relief. White speckles on the face of some of the mountains are holes in the data caused by steep terrain. These will be filled using coverage from an intersecting pass.

    The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission is designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency

  10. Radar image with color as height, Bahia State, Brazil

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This radar image is the first to show the full 240-kilometer-wide (150-mile) swath collected by the Shuttle Radar Topography Mission (SRTM). The area shown is in the state of Bahia in Brazil. The semi-circular mountains along the left side of the image are the Serra Da Jacobin, which rise to 1100 meters (3600 feet) above sea level. The total relief shown is approximately 800 meters (2600 feet). The top part of the image is the Sertao, a semi-arid region that is subject to severe droughts during El Nino events. A small portion of the San Francisco River, the longest river (1609 kilometers or 1000 miles) entirely within Brazil, cuts across the upper right corner of the image. This river is a major source of water for irrigation and hydroelectric power. Mapping such regions will allow scientists to better understand the relationships between flooding cycles, drought and human influences on ecosystems.

    This image combines two types of data from the Shuttle Radar Topography Mission. The image brightness corresponds to the strength of the radar signal reflected from the ground, while colors show the elevation as measured by SRTM. The three dark vertical stripes show the boundaries where four segments of the swath are merged to form the full scanned swath. These will be removed in later processing. Colors range from green at the lowest elevations to reddish at the highest elevations.

    The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission is designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space

  11. 3D pulmonary airway color image reconstruction via shape from shading and virtual bronchoscopy imaging techniques

    NASA Astrophysics Data System (ADS)

    Suter, Melissa; Reinhardt, Joseph M.; Hoffman, Eric A.; McLennan, Geoffrey

    2005-04-01

    The dependence on macro-optical imaging of the human body in the assessment of possible disease is rapidly increasing concurrent with, and as a direct result of, advancements made in medical imaging technologies. Assessing the pulmonary airways through bronchoscopy is performed extensively in clinical practice; however, it remains highly subjective due to limited visualization techniques and the lack of quantitative analyses. The representation of 3D structures in 2D visualization modes, although providing insight into the structural content of the scene, may in fact skew the perception of the structural form. We have developed two methods for visualizing the optically derived airway mucosal features whilst preserving the structural scene integrity. Shape from shading (SFS) techniques can be used to extract 3D structural information from 2D optical images. The SFS technique presented addresses many limitations previously encountered in conventional techniques, resulting in high-resolution 3D color images. The second method presented to combine both color and structural information relies on combined CT and bronchoscopy imaging modalities. External imaging techniques such as CT provide a means of determining the gross structural anatomy of the pulmonary airways, but lack the important optically derived mucosal color. Virtual bronchoscopy is used to provide a direct link between the CT derived structural anatomy and the macro-optically derived mucosal color. Through utilization of a virtual and true bronchoscopy matching technique we are able to directly extract combined structurally sound 3D color segments of the pulmonary airways. Various pulmonary airway diseases are assessed and the resulting combined color and texture results are presented, demonstrating the effectiveness of the presented techniques.

  12. A visible/infrared gray image fusion algorithm based on the YUV color transformation

    NASA Astrophysics Data System (ADS)

    Zhu, Jin; Jin, Weiqi; Li, Jiakun; Li, Li

    2012-11-01

    Color image fusion, in which multiband images are fused into a single color image, has received wide attention, and several effective visible/thermal-infrared color fusion algorithms have been proposed. We have previously implemented a real-time, natural-color visible/infrared fusion algorithm on DSP and FPGA hardware processing platforms. However, gray image fusion has its own unique applications. Building on the natural-color fusion algorithm for visible and infrared imagery, we propose a visible/infrared gray image fusion algorithm: the images are first fused in the YUV color space, and the luminance (Y) channel of the fused result is then output as the gray fusion image. The algorithm is compared with typical fusion algorithms (weighted averaging, the Laplacian pyramid and the Haar wavelet) using several objective evaluation indicators. The objective and subjective comparisons show that the algorithm has clear advantages and that multiband gray image fusion in a color space is feasible. The algorithm is implemented in real time on a DSP hardware image processing platform with a TI chip as the kernel processor, integrating natural-color fusion and gray fusion for visible (low-light-level) and thermal imaging, so that users can conveniently choose between the natural-color fusion mode and the gray fusion mode for real-time video output.
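
    A minimal sketch of the idea, assuming a simple luminance/chrominance fusion rule; the exact natural-color fusion rule used by the authors is not given in this record, so the weights below are illustrative. vis and ir are registered H x W float images in [0, 1].

      import numpy as np

      def yuv_gray_fusion(vis, ir, w_vis=0.6):
          y = w_vis * vis + (1.0 - w_vis) * ir        # fused luminance (Y)
          u = 0.5 + 0.25 * (ir - vis)                 # illustrative chrominance (U)
          v = 0.5 + 0.25 * (vis - ir)                 # illustrative chrominance (V)
          color_fusion = np.stack([y, u, v], axis=2)  # YUV color-fused image
          gray_fusion = y                             # gray fusion = the Y channel alone
          return color_fusion, gray_fusion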

  13. Color constancy using 3D scene geometry derived from a single image.

    PubMed

    Elfiky, Noha; Gevers, Theo; Gijsenij, Arjan; Gonzalez, Jordi

    2014-09-01

    The aim of color constancy is to remove the effect of the color of the light source. As color constancy is inherently an ill-posed problem, most of the existing color constancy algorithms are based on specific imaging assumptions (e.g., gray-world and white patch assumptions). In this paper, 3D geometry models are used to determine which color constancy method to use for the different geometrical regions (depth/layer) found in images. The aim is to classify images into stages (rough 3D geometry models). According to stage models, images are divided into stage regions using hard and soft segmentation. After that, the best color constancy methods are selected for each geometry depth. To this end, we propose a method to combine color constancy algorithms by investigating the relation between depth, local image statistics, and color constancy. Image statistics are then exploited per depth to select the proper color constancy method. Our approach opens the possibility to estimate multiple illuminations by distinguishing nearby light sources from distant illuminations. Experiments on state-of-the-art data sets show that the proposed algorithm outperforms state-of-the-art single color constancy algorithms with an improvement of almost 50% of median angular error. When using a perfect classifier (i.e., all of the test images are correctly classified into stages), the performance of the proposed method achieves an improvement of 52% of the median angular error compared with the best-performing single color constancy algorithm. PMID:25051548

  14. Comparison of Color Model in Cotton Image Under Conditions of Natural Light

    NASA Astrophysics Data System (ADS)

    Zhang, J. H.; Kong, F. T.; Wu, J. Z.; Wang, S. W.; Liu, J. J.; Zhao, P.

    Although color images contain a large amount of information reflecting species characteristics, different color models convey different information, and the selection of a color model is the key to separating crops from the background effectively and rapidly. Taking cotton images collected under natural light as the object, we convert them into the color components of the RGB, HSL and YIQ color models and evaluate the nine resulting color components with subjective and objective methods. The subjective evaluation concludes that the gray values of the Q component in the soil, straw and plastic-film regions remain stable without large fluctuations. In the objective evaluation, the variance method, the average-gradient method, the gray-prediction error-statistics method and the information-entropy method are used, and they identify the Q color component as suitable for background segmentation.
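
    Extracting the Q component discussed above with the standard RGB-to-YIQ (NTSC) transform; whether the authors used exactly these coefficients is an assumption. img is an H x W x 3 float RGB array in [0, 1].

      import numpy as np

      RGB_TO_YIQ = np.array([[0.299,  0.587,  0.114],
                             [0.596, -0.274, -0.322],
                             [0.211, -0.523,  0.312]])

      def q_component(img):
          yiq = img @ RGB_TO_YIQ.T      # per-pixel matrix multiply
          return yiq[..., 2]            # Q channel, candidate input for background thresholding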

  15. A quaternion-based spectral clustering method for color image segmentation

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Jin, Lianghai; Liu, Hong; He, Zeng

    2011-11-01

    Spectral clustering has been widely used in image segmentation. A key issue in spectral clustering is how to build the affinity matrix. When it is applied to color image segmentation, most of the existing methods either use the Euclidean metric to define the affinity matrix, or first convert color images into gray-level images and then use the gray-level images to construct the affinity matrix (component-wise method). However, it is known that Euclidean distances cannot represent color differences well and that the component-wise method does not consider the correlation between color channels. In this paper, we propose a new method to produce the affinity matrix, in which the color images are first represented in quaternion form and then the similarities between color pixels are measured by a quaternion rotation (QR) mechanism. The experimental results show the superiority of the new method.

  16. Combining color and shape information for content-based image retrieval on the Internet

    NASA Astrophysics Data System (ADS)

    Diplaros, Aristeidis; Gevers, Theo; Patras, Ioannis

    2003-12-01

    We propose a new image feature that merges color and shape information. This global feature, which we call color shape context, is a histogram that combines the spatial (shape) and color information of the image in one compact representation. This histogram codes the locality of color transitions in an image. Illumination invariant derivatives are first computed and provide the edges of the image, which is the shape information of our feature. These edges are used to obtain similarity (rigid) invariant shape descriptors. The color transitions that take place on the edges are coded in an illumination invariant way and are used as the color information. The color and shape information are combined in one multidimensional vector. The matching function of this feature is a metric and allows for existing indexing methods such as R-trees to be used for fast and efficient retrieval.

  17. The effect of different standard illumination conditions on color balance failure in offset printed images on glossy coated paper expressed by color difference

    NASA Astrophysics Data System (ADS)

    Spiridonov, I.; Shopova, M.; Boeva, R.; Nikolov, M.

    2012-05-01

    One of the biggest problems in color reproduction processes is the color shift that occurs when images are viewed under different illuminants. Process ink colors and their combinations that match under one light source will often appear different under another light source. This problem is referred to as color balance failure or color inconstancy. The main goals of the present study are to investigate and determine the color balance failure (color inconstancy) of offset printed images, expressed by color difference and color gamut changes, for three of the illuminants most commonly used in practice: CIE D50, CIE F2 and CIE A. The results obtained are important from both a scientific and a practical point of view. For the first time, a methodology is suggested and implemented for the examination and estimation of color shifts by studying a large number of color and gamut changes in various ink combinations for different illuminants.

  18. Color Mosaics and Multispectral Analyses of Mars Reconnaissance Orbiter Mars Color Imager (MARCI) Observations

    NASA Astrophysics Data System (ADS)

    Bell, J. F.; Anderson, R. B.; Kressler, K.; Wolff, M. J.; Cantor, B.; Science; Operations Teams, M.

    2008-12-01

    The Mars Color Imager (MARCI) on the Mars Reconnaissance Orbiter (MRO) spacecraft is a wide-angle, multispectral Charge-Coupled Device (CCD) "push-frame" imaging camera designed to provide frequent, synoptic-scale imaging of Martian atmospheric and surface features and phenomena. MARCI uses a 1024x1024 pixel interline transfer CCD detector that has seven narrowband interference filters bonded directly to the CCD. Five of the filters are in the visible to short-wave near-IR wavelength range (MARCI-VIS: 437, 546, 604, 653, and 718 nm) and two are in the UV (MARCI-UV: 258 and 320 nm). During the MRO primary mission (November 2006 through November 2008), the instrument has acquired data swaths on the dayside of the planet, at an equator-crossing local solar time of about 3:00 p.m. We are analyzing the MARCI-VIS multispectral imaging data from the MRO primary mission in order to investigate (a) color variations in the surface and their potential relationship to variations in iron mineralogy; and (b) the time variability of surface albedo features at the approx. 1 km/pixel scale typical of MARCI nadir-pointed observations. Raw MARCI images were calibrated to radiance factor (I/F) using pre-flight and in-flight calibration files and a pipeline calibration process developed by the science team. We are using these calibrated MARCI files to generate map-projected mosaics of each of the 30 USGS standard quadrangles on Mars in each of the five MARCI-VIS bands. Our mosaicking software searches the MARCI data set to identify files that match a user-defined set of limits such as latitude, longitude, Ls, incidence angle, emission angle, and year. Each of the files matching the desired criteria is then map-projected and inserted in series into an output mosaic covering the desired lat/lon range. In cases of redundant coverage of the same pixels by different files, the user can set the program to use the pixel with the lowest I/F value for each individual MARCI-VIS band, thus

  19. Landsat ETM+ False-Color Image Mosaics of Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.

    2007-01-01

    In 2005, the U.S. Agency for International Development and the U.S. Trade and Development Agency contracted with the U.S. Geological Survey to perform assessments of the natural resources within Afghanistan. The assessments concentrate on the resources that are related to the economic development of that country. Therefore, assessments were initiated in oil and gas, coal, mineral resources, water resources, and earthquake hazards. All of these assessments require geologic, structural, and topographic information throughout the country at a finer scale and better accuracy than that provided by the existing maps, which were published in the 1970's by the Russians and Germans. The very rugged terrain in Afghanistan, the large scale of these assessments, and the terrorist threat in Afghanistan indicated that the best approach to provide the preliminary assessments was to use remotely sensed, satellite image data, although this may also apply to subsequent phases of the assessments. Therefore, the first step in the assessment process was to produce satellite image mosaics of Afghanistan that would be useful for these assessments. This report discusses the production of the Landsat false-color image database produced for these assessments, which was produced from the calibrated Landsat ETM+ image mosaics described by Davis (2006).

  20. Colors of Alien Worlds from Direct Imaging Exoplanet Missions

    NASA Astrophysics Data System (ADS)

    Hu, Renyu

    2015-08-01

    Future direct-imaging exoplanet missions such as WFIRST/AFTA, Exo-C, and Exo-S will measure the reflectivity of exoplanets at visible wavelengths. Most of the exoplanets to be observed will be located further away from their parent stars than is Earth from the Sun. These “cold” exoplanets have atmospheric environments conducive to the formation of water and/or ammonia clouds, like Jupiter in the Solar System. I find the mixing ratio of methane and the pressure level of the uppermost cloud deck on these planets can be uniquely determined from their reflection spectra, with moderate spectral resolution, if the cloud deck is between 0.6 and 1.5 bars. The existence of this unique solution is useful for exoplanet direct imaging missions for several reasons. First, the weak bands and strong bands of methane enable the measurement of the methane mixing ratio and the cloud pressure, although an overlying haze layer can bias the estimate of the latter. Second, the cloud pressure, once derived, yields an important constraint on the internal heat flux from the planet, thus indicating its thermal evolution. Third, water worlds having H2O-dominated atmospheres are likely to have water clouds located higher than the 10^-3 bar pressure level, and muted spectral absorption features. These planets would occupy a confined phase space in the color-color diagrams, likely distinguishable from H2-rich giant exoplanets by broadband observations. Therefore, direct-imaging exoplanet missions may offer the capability to broadly distinguish H2-rich giant exoplanets versus H2O-rich super-Earth exoplanets, and to detect ammonia and/or water clouds and methane gas in their atmospheres.

  1. Colors of Alien Worlds from Direct Imaging Exoplanet Missions

    NASA Astrophysics Data System (ADS)

    Hu, Renyu

    2016-01-01

    Future direct-imaging exoplanet missions such as WFIRST will measure the reflectivity of exoplanets at visible wavelengths. Most of the exoplanets to be observed will be located further away from their parent stars than is Earth from the Sun. These "cold" exoplanets have atmospheric environments conducive to the formation of water and/or ammonia clouds, like Jupiter in the Solar System. I find the mixing ratio of methane and the pressure level of the uppermost cloud deck on these planets can be uniquely determined from their reflection spectra, with moderate spectral resolution, if the cloud deck is between 0.6 and 1.5 bars. The existence of this unique solution is useful for exoplanet direct imaging missions for several reasons. First, the weak bands and strong bands of methane enable the measurement of the methane mixing ratio and the cloud pressure, although an overlying haze layer can bias the estimate of the latter. Second, the cloud pressure, once derived, yields an important constraint on the internal heat flux from the planet, thus indicating its thermal evolution. Third, water worlds having H2O-dominated atmospheres are likely to have water clouds located higher than the 10^-3 bar pressure level, and muted spectral absorption features. These planets would occupy a confined phase space in the color-color diagrams, likely distinguishable from H2-rich giant exoplanets by broadband observations. Therefore, direct-imaging exoplanet missions may offer the capability to broadly distinguish H2-rich giant exoplanets versus H2O-rich super-Earth exoplanets, and to detect ammonia and/or water clouds and methane gas in their atmospheres.

  2. Radar Image with Color as Height, Old Khmer Road, Cambodia

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This image shows the Old Khmer Road (Inrdratataka-Bakheng causeway) in Cambodia extending from the 9th Century A.D. capital city of Hariharalaya in the lower right portion of the image to the later 10th Century A.D. capital of Yasodharapura. This was located in the vicinity of Phnom Bakheng (not shown in image). The Old Road is believed to be more than 1000 years old. Its precise role and destination within the 'new' city at Angkor is still being studied by archeologists. But wherever it ended, it not only offered an immense processional way for the King to move between old and new capitals, it also linked the two areas, widening the territorial base of the Khmer King. Finally, in the past and today, the Old Road managed the waters of the floodplain. It acted as a long barrage or dam not only for the natural streams of the area but also for the changes brought to the local hydrology by Khmer population growth.

    The image was acquired by NASA's Airborne Synthetic Aperture Radar (AIRSAR). Image brightness is from the P-band (68 cm wavelength) radar backscatter, which is a measure of how much energy the surface reflects back towards the radar. Color is used to represent elevation contours. One cycle of color, that is, going from blue to red to yellow to green and back to blue again, corresponds to 20 meters of elevation change. Image dimensions are approximately 3.4 km by 3.5 km with a pixel spacing of 5 m. North is at top.

    AIRSAR flies aboard a NASA DC-8 based at NASA's Dryden Flight Research Center, Edwards, Calif. In the TOPSAR mode, AIRSAR collects radar interferometry data from two spatially separated antennas (2.6 meters, or 8.5 feet). Information from the two antennas is used to form radar backscatter imagery and to generate highly accurate elevation data. Built, operated and managed by JPL, AIRSAR is part of NASA's Earth Science Enterprise program. JPL is a division of the California Institute of Technology in Pasadena.

  3. New Orleans Topography, Radar Image with Colored Height

    NASA Technical Reports Server (NTRS)

    2005-01-01

    About the animation: This simulated view of the potential effects of storm surge flooding on Lake Pontchartrain and the New Orleans area was generated with data from the Shuttle Radar Topography Mission. Although it is protected by levees and sea walls against storm surges of 18 to 20 feet, much of the city is below sea level, and flooding due to storm surges caused by major hurricanes is a concern. The animation shows regions that, if unprotected, would be inundated with water. The animation depicts flooding in one-meter increments.

    About the image: The city of New Orleans, situated on the southern shore of Lake Pontchartrain, is shown in this radar image from the Shuttle Radar Topography Mission (SRTM). In this image bright areas show regions of high radar reflectivity, such as from urban areas, and elevations have been coded in color using height data also from the SRTM mission. Dark green colors indicate low elevations, rising through yellow and tan, to white at the highest elevations.

    New Orleans is near the center of this scene, between the lake and the Mississippi River. The line spanning the lake is the Lake Pontchartrain Causeway, the world's longest overwater highway bridge. Major portions of the city of New Orleans are actually below sea level, and although it is protected by levees and sea walls that are designed to protect against storm surges of 18 to 20 feet, flooding during storm surges associated with major hurricanes is a significant concern.

    Data used in this image were acquired by the Shuttle Radar Topography Mission aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface

  4. Filter-free image sensor pixels comprising silicon nanowires with selective color absorption.

    PubMed

    Park, Hyunsung; Dan, Yaping; Seo, Kwanyong; Yu, Young J; Duane, Peter K; Wober, Munib; Crozier, Kenneth B

    2014-01-01

    The organic dye filters of conventional color image sensors achieve the red/green/blue response needed for color imaging, but have disadvantages related to durability, low absorption coefficient, and fabrication complexity. Here, we report a new paradigm for color imaging based on all-silicon nanowire devices and no filters. We fabricate pixels consisting of vertical silicon nanowires with integrated photodetectors, demonstrate that their spectral sensitivities are governed by nanowire radius, and perform color imaging. Our approach is conceptually different from filter-based methods, as absorbed light is converted to photocurrent, ultimately presenting the opportunity for very high photon efficiency. PMID:24588103

  5. Vicarious calibration of the Geostationary Ocean Color Imager.

    PubMed

    Ahn, Jae-Hyun; Park, Young-Je; Kim, Wonkook; Lee, Boram; Oh, Im Sang

    2015-09-01

    Measurements of ocean color from Geostationary Ocean Color Imager (GOCI) with a moderate spatial resolution and a high temporal frequency demonstrate high value for a number of oceanographic applications. This study aims to propose and evaluate the calibration of GOCI as needed to achieve the level of radiometric accuracy desired for ocean color studies. Previous studies reported that the GOCI retrievals of normalized water-leaving radiances (nLw) are biased high for all visible bands due to the lack of vicarious calibration. The vicarious calibration approach described here relies on the assumed constant aerosol characteristics over the open-ocean sites to accurately estimate atmospheric radiances for the two near-infrared (NIR) bands. The vicarious calibration of visible bands is performed using in situ nLw measurements and the satellite-estimated atmospheric radiance using two NIR bands over the case-1 waters. Prior to this analysis, the in situ nLw spectra in the NIR are corrected by the spectrum optimization technique based on the NIR similarity spectrum assumption. The vicarious calibration gain factors derived for all GOCI bands (except 865nm) significantly improve agreement in retrieved remote-sensing reflectance (Rrs) relative to in situ measurements. These gain factors are independent of angular geometry and possible temporal variability. To further increase the confidence in the calibration gain factors, a large data set from shipboard measurements and AERONET-OC is used in the validation process. It is shown that the absolute percentage difference of the atmospheric correction results from the vicariously calibrated GOCI system is reduced by ~6.8%. PMID:26368426

  6. Using Color and Grayscale Images to Teach Histology to Color-Deficient Medical Students

    ERIC Educational Resources Information Center

    Rubin, Lindsay R.; Lackey, Wendy L.; Kennedy, Frances A.; Stephenson, Robert B.

    2009-01-01

    Examination of histologic and histopathologic microscopic sections relies upon differential colors provided by staining techniques, such as hematoxylin and eosin, to delineate normal tissue components and to identify pathologic alterations in these components. Given the prevalence of color deficiency (commonly called "color blindness") in the…

  7. Survey of contemporary trends in color image segmentation

    NASA Astrophysics Data System (ADS)

    Vantaram, Sreenath Rao; Saber, Eli

    2012-10-01

    In recent years, the acquisition of image and video information for processing, analysis, understanding, and exploitation of the underlying content in various applications, ranging from remote sensing to biomedical imaging, has grown at an unprecedented rate. Analysis by human observers is quite laborious, tiresome, and time consuming, if not infeasible, given the large and continuously rising volume of data. Hence the need for systems capable of automatically and effectively analyzing the aforementioned imagery for a variety of uses that span the spectrum from homeland security to elderly care. In order to achieve the above, tools such as image segmentation provide the appropriate foundation for expediting and improving the effectiveness of subsequent high-level tasks by providing a condensed and pertinent representation of image information. We provide a comprehensive survey of color image segmentation strategies adopted over the last decade, though notable contributions in the gray scale domain will also be discussed. Our taxonomy of segmentation techniques is sampled from a wide spectrum of spatially blind (or feature-based) approaches such as clustering and histogram thresholding as well as spatially guided (or spatial domain-based) methods such as region growing/splitting/merging, energy-driven parametric/geometric active contours, supervised/unsupervised graph cuts, and watersheds, to name a few. In addition, qualitative and quantitative results of prominent algorithms on several images from the Berkeley segmentation dataset are shown in order to furnish a fair indication of the current quality of the state of the art. Finally, we provide a brief discussion on our current perspective of the field as well as its associated future trends.

  8. [Color processing of ultrasonographic images in extracorporeal lithotripsy].

    PubMed

    Lardennois, B; Ziade, A; Walter, K

    1991-02-01

    A number of technical difficulties are encountered in the ultrasonographic detection of renal stones, which unfortunately limit its performance. The margin of error of firing in extracorporeal shock-wave lithotripsy (ESWL) must be reduced to a minimum. The role of ultrasonographic monitoring during lithotripsy is also essential: continuous control of the focusing of the shock-wave beam and assessment of the quality of fragmentation. The authors propose to improve ultrasonographic imaging in ESWL by means of intraoperative color processing of the stone. Each shot must be directed to its target with an economy of vision, avoiding excessive fatigue. The principle of the technique consists of digitizing the ultrasound video images using a Macintosh Mac 2 computer. The Graphis Paint II program is interfaced directly with the Quick Capture card and recovers the images on its work surface in real time. The program is then able to attribute to each of the 256 shades of gray any one of the 16.6 million colors of the Macintosh universe, with specific intensity and saturation. During fragmentation, using the principle of a palette, the stone changes color from green to red, indicating complete fragmentation. A Color Space card converts the digital image obtained into an analogue video source which is visualized on the monitor. It can be superimposed and/or juxtaposed with the source image by means of a multi-standard mixing table. Color processing of ultrasonographic images in extracorporeal shock-wave lithotripsy allows better visualization of the stones and better follow-up of fragmentation, and allows the shock-wave treatment to be stopped earlier. It increases the stone-free rate at 6 months. This configuration could eventually be integrated into the ultrasound apparatus itself. PMID:1364639
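
    A minimal sketch of the palette idea: map each of the 256 gray levels of a digitized ultrasound frame to a color through a look-up table, here a green-to-red ramp; the clinical palette actually used by the authors is not specified in this record.

      import numpy as np

      def green_to_red_lut():
          lut = np.zeros((256, 3), dtype=np.uint8)   # RGB entries
          lut[:, 0] = np.linspace(0, 255, 256)       # red rises with gray level
          lut[:, 1] = np.linspace(255, 0, 256)       # green fades out
          return lut

      def colorize(frame):
          """frame: H x W uint8 ultrasound image -> H x W x 3 pseudo-color image."""
          return green_to_red_lut()[frame]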

  9. Color image encryption based on color blend and chaos permutation in the reality-preserving multiple-parameter fractional Fourier transform domain

    NASA Astrophysics Data System (ADS)

    Lang, Jun

    2015-03-01

    In this paper, we propose a novel color image encryption method by using Color Blend (CB) and Chaos Permutation (CP) operations in the reality-preserving multiple-parameter fractional Fourier transform (RPMPFRFT) domain. The original color image is first exchanged and mixed randomly from the standard red-green-blue (RGB) color space to an R‧G‧B‧ color space by rotating the color cube with a random angle matrix. Then RPMPFRFT is employed to change the pixel values of the color image: the three components of the scrambled RGB color space are converted by RPMPFRFT with three different transform pairs, respectively. Compared with transforms whose output is complex-valued, the RPMPFRFT ensures that the output is real, which saves image storage space and is convenient for transmission in practical applications. To further enhance the security of the encryption system, the output of the former steps is scrambled by juxtaposition of sections of the image in the reality-preserving multiple-parameter fractional Fourier domains, and the alignment of sections is determined by two coupled chaotic logistic maps. The parameters in the Color Blend, Chaos Permutation and the RPMPFRFT transform are regarded as the key in the encryption algorithm. The proposed color image encryption can also be applied to encrypt three gray images by transforming the gray images into three RGB color components of a specially constructed color image. Numerical simulations are performed to demonstrate that the proposed algorithm is feasible, secure, sensitive to keys and robust to noise attack and data loss.

  10. Images as embedding maps and minimal surfaces: Movies, color, and volumetric medical images

    SciTech Connect

    Kimmel, R.; Malladi, R.; Sochen, N.

    1997-02-01

    A general geometrical framework for image processing is presented. The authors consider intensity images as surfaces in the (x,I) space. The image is thereby a two dimensional surface in three dimensional space for gray level images. The new formulation unifies many classical schemes, algorithms, and measures via choices of parameters in a "master" geometrical measure. More important, it is a simple and efficient tool for the design of natural schemes for image enhancement, segmentation, and scale space. Here the authors give the basic motivation and apply the scheme to enhance images. They present the concept of an image as a surface in dimensions higher than the three dimensional intuitive space. This will help them handle movies, color, and volumetric medical images.
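
    A small illustration of the surface view of a gray-level image I(x, y): with the embedding (x, y, beta*I), the induced metric gives a local area element sqrt(1 + beta^2*(Ix^2 + Iy^2)), which is large at edges. The scale factor beta and this particular measure are one simple choice; the "master" measure of the paper generalizes it.

      import numpy as np

      def surface_area_element(img, beta=1.0):
          """img: H x W float gray-level image; returns sqrt(det g) at each pixel."""
          iy, ix = np.gradient(beta * img.astype(float))   # gradients along rows, columns
          return np.sqrt(1.0 + ix**2 + iy**2)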

  11. Single camera imaging system for color and near-infrared fluorescence image guided surgery

    PubMed Central

    Chen, Zhenyue; Zhu, Nan; Pacheco, Shaun; Wang, Xia; Liang, Rongguang

    2014-01-01

    Near-infrared (NIR) fluorescence imaging systems have been developed for image guided surgery in recent years. However, current systems are typically bulky and work only when surgical light in the operating room (OR) is off. We propose a single camera imaging system that is capable of capturing NIR fluorescence and color images under normal surgical lighting illumination. Using a new RGB-NIR sensor and synchronized NIR excitation illumination, we have demonstrated that the system can acquire both color information and fluorescence signal with high sensitivity under normal surgical lighting illumination. The experimental results show that ICG sample with concentration of 0.13 μM can be detected when the excitation irradiance is 3.92 mW/cm2 at an exposure time of 10 ms. PMID:25136502

  12. Toward a unified color space for perception-based image processing.

    PubMed

    Lissner, Ingmar; Urban, Philipp

    2012-03-01

    Image processing methods that utilize characteristics of the human visual system require color spaces with certain properties to operate effectively. After analyzing different types of perception-based image processing problems, we present a list of properties that a unified color space should have. Due to contradictory perceptual phenomena and geometric issues, a color space cannot incorporate all these properties. We therefore identify the most important properties and focus on creating opponent color spaces without cross contamination between color attributes (i.e., lightness, chroma, and hue) and with maximum perceptual uniformity induced by color-difference formulas. Color lookup tables define simple transformations from an initial color space to the new spaces. We calculate such tables using multigrid optimization considering the Hung and Berns data of constant perceived hue and the CMC, CIE94, and CIEDE2000 color-difference formulas. The resulting color spaces exhibit low cross contamination between color attributes and are only slightly less perceptually uniform than spaces optimized exclusively for perceptual uniformity. We compare the CIEDE2000-based space with commonly used color spaces in two examples of perception-based image processing. In both cases, standard methods show improved results if the new space is used. All color-space transformations and examples are provided as MATLAB codes on our website. PMID:21824846

  13. An investigation on the intra-sample distribution of cotton color by using image analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The colorimeter principle is widely used to measure cotton color. This method provides the sample’s color grade; but the result does not include information about the color distribution and any variation within the sample. We conducted an investigation that used image analysis method to study the ...

  14. Comparison of color image segmentations for lane following

    NASA Astrophysics Data System (ADS)

    Sandt, Frederic; Aubert, Didier

    1993-05-01

    For ten years, unstructured road following has been the subject of many studies. Road following must support the automatic navigation, at reasonable speed, of mobile robots on irregular paths and roads, with inhomogeneous surfaces and under variable lighting conditions. Civil and military applications of this technology include transportation, logistics, security and engineering. The definition of our lane following system requires an evaluation of the existing technologies. Although the various operational systems converge on a color perception and a region segmentation optimizing discrimination and stability respectively, the treatments and performances vary. In this paper, the robustness of four operational systems and two connected techniques is compared according to common evaluation criteria. We identify typical situations which constitute a basis for the realization of an image database. We describe the process of experimentation conceived for the comparative analysis of performances. The analytical results are useful in order to infer a few optimal combinations of techniques driven by the situations, and to define the present limits of the validity of color perception.

  15. Colorization and Automated Segmentation of Human T2 MR Brain Images for Characterization of Soft Tissues

    PubMed Central

    Attique, Muhammad; Gilanie, Ghulam; Hafeez-Ullah; Mehmood, Malik S.; Naweed, Muhammad S.; Ikram, Masroor; Kamran, Javed A.; Vitkin, Alex

    2012-01-01

    Characterization of tissues like brain by using magnetic resonance (MR) images and colorization of the gray scale image has been reported in the literature, along with the advantages and drawbacks. Here, we present two independent methods: (i) a novel colorization method to underscore the variability in brain MR images, indicative of the underlying physical density of bio tissue, and (ii) a segmentation method (both hard and soft segmentation) to characterize gray brain MR images. The segmented images are then transformed into color using the above-mentioned colorization method, yielding promising results for manual tracing. Our color transformation incorporates the voxel classification by matching the luminance of voxels of the source MR image and of the provided color image by measuring the distance between them. The segmentation method is based on single-phase clustering for 2D and 3D image segmentation with a new auto centroid selection method, which divides the image into three distinct regions (gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF)) using prior anatomical knowledge. Results have been successfully validated on human T2-weighted (T2) brain MR images. The proposed method can be potentially applied to gray-scale images from other imaging modalities, bringing out additional diagnostic tissue information contained in the colorized image processing approach as described. PMID:22479421
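
    A simplified sketch of clustering a T2-weighted slice into three tissue classes; plain k-means is substituted here for the paper's single-phase clustering with automatic centroid selection, so it is only a generic stand-in. slice_2d is an H x W float array of MR intensities.

      import numpy as np
      from sklearn.cluster import KMeans

      def segment_three_tissues(slice_2d):
          x = slice_2d.reshape(-1, 1)
          labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(x)
          return labels.reshape(slice_2d.shape)   # 0/1/2 label map; order by mean intensity to name CSF/GM/WM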

  16. Color-coded LED microscopy for multi-contrast and quantitative phase-gradient imaging.

    PubMed

    Lee, Donghak; Ryu, Suho; Kim, Uihan; Jung, Daeseong; Joo, Chulmin

    2015-12-01

    We present a multi-contrast microscope based on color-coded illumination and computation. A programmable three-color light-emitting diode (LED) array illuminates a specimen, in which each color corresponds to a different illumination angle. A single color image sensor records light transmitted through the specimen, and images at each color channel are then separated and utilized to obtain bright-field, dark-field, and differential phase contrast (DPC) images simultaneously. Quantitative phase imaging is also achieved based on DPC images acquired with two different LED illumination patterns. The multi-contrast and quantitative phase imaging capabilities of our method are demonstrated by presenting images of various transparent biological samples. PMID:26713205

  17. Color-coded LED microscopy for multi-contrast and quantitative phase-gradient imaging

    PubMed Central

    Lee, Donghak; Ryu, Suho; Kim, Uihan; Jung, Daeseong; Joo, Chulmin

    2015-01-01

    We present a multi-contrast microscope based on color-coded illumination and computation. A programmable three-color light-emitting diode (LED) array illuminates a specimen, in which each color corresponds to a different illumination angle. A single color image sensor records light transmitted through the specimen, and images at each color channel are then separated and utilized to obtain bright-field, dark-field, and differential phase contrast (DPC) images simultaneously. Quantitative phase imaging is also achieved based on DPC images acquired with two different LED illumination patterns. The multi-contrast and quantitative phase imaging capabilities of our method are demonstrated by presenting images of various transparent biological samples. PMID:26713205

  18. [Image Feature Extraction and Discriminant Analysis of Xinjiang Uygur Medicine Based on Color Histogram].

    PubMed

    Hamit, Murat; Yun, Weikang; Yan, Chuanbo; Kutluk, Abdugheni; Fang, Yang; Alip, Elzat

    2015-06-01

    Image feature extraction is an important part of image processing and an important field of research and application of image processing technology. Uygur medicine is a branch of traditional Chinese medicine that is receiving increasing attention from researchers, but large amounts of Uygur medicine data have not been fully utilized. In this study, we extracted color histogram features from images of Xinjiang Uygur herbal and zooid medicines. First, we performed preprocessing, including image color enhancement, size normalization and color space transformation. Then we extracted color histogram features and analyzed them with statistical methods. Finally, we evaluated the classification ability of the features by Bayes discriminant analysis. Experimental results showed that high accuracy for Uygur medicine image classification was obtained by using the color histogram feature. This study should be helpful for content-based medical image retrieval of Xinjiang Uygur medicine. PMID:26485983
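
    A minimal sketch of the pipeline described above: per-channel color histograms as the feature vector, followed by a Bayes classifier; the bin count and the use of a Gaussian naive-Bayes model are assumptions, since the record does not give these details.

      import numpy as np
      from sklearn.naive_bayes import GaussianNB

      def color_histogram_feature(img, bins=16):
          """img: H x W x 3 uint8 RGB image -> concatenated, normalized histogram."""
          feats = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
          feat = np.concatenate(feats).astype(float)
          return feat / feat.sum()

      # With hypothetical training images and labels:
      # X = np.array([color_histogram_feature(im) for im in train_images])
      # clf = GaussianNB().fit(X, train_labels)
      # pred = clf.predict([color_histogram_feature(test_image)])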

  19. Color Index Imaging of the Stellar Stream Around NGC 5907

    NASA Astrophysics Data System (ADS)

    Laine, Seppo; Grillmair, Carl J.; Martinez-Delgado, David; Romanowsky, Aaron J.; Capak, Peter; Arendt, Richard G.; Ashby, Matthew; Davies, James E.; Majewski, Steven R.; GaBany, R. Jay

    2015-01-01

    We have obtained deep g, r, and i-band Subaru and ultra-deep 3.6 micron Spitzer/IRAC images of parts of the stellar stream around the nearby edge-on disk galaxy NGC 5907. We report on the color index distribution of the resolved emission along the stream, and indicators of recent star formation associated with the stream. We present scenarios regarding the nature of the disrupted satellite galaxy, based on our data. This work is based in part on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. This work is based in part on data collected with the Subaru Telescope, which is operated by the National Astronomical Observatory of Japan. Support for this work was provided by NASA through an award issued by JPL/Caltech.

  20. Color Index Imaging of the Stellar Stream Around NGC 5907

    NASA Astrophysics Data System (ADS)

    Laine, Seppo; Grillmair, Carl J.; Martinez-Delgado, David; Romanowsky, Aaron; Capak, Peter; Arendt, Richard G.; Ashby, M. L. N.; Davies, James; Majewski, Steven; GaBany, R. Jay

    2015-08-01

    We have obtained deep g, r, and i-band Subaru and ultra-deep 3.6 micron Spitzer/IRAC images of parts of the spectacular, multiply-looped stellar stream around the nearby edge-on disk galaxy NGC 5907. We report on the color index distribution of the integrated starlight and the derived stellar populations along the stream. We present scenarios regarding the nature of the disrupted satellite galaxy, based on our data. This work is based in part on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. This work is based in part on data collected with the Subaru Telescope, which is operated by the National Astronomical Observatory of Japan. Support for this work was provided by NASA through an award issued by JPL/Caltech.

  1. Cloud screening Coastal Zone Color Scanner images using channel 5

    NASA Technical Reports Server (NTRS)

    Eckstein, B. A.; Simpson, J. J.

    1991-01-01

    Clouds are removed from Coastal Zone Color Scanner (CZCS) data using channel 5. Instrumentation problems require pre-processing of channel 5 before an intelligent cloud-screening algorithm can be used. For example, at intervals of about 16 lines, the sensor records anomalously low radiances. Moreover, the calibration equation yields negative radiances when the sensor records zero counts, and pixels corrupted by electronic overshoot must also be excluded. The remaining pixels may then be used in conjunction with the procedure of Simpson and Humphrey to determine the CZCS cloud mask. These results plus in situ observations of phytoplankton pigment concentration show that pre-processing and proper cloud-screening of CZCS data are necessary for accurate satellite-derived pigment concentrations. This is especially true in the coastal margins, where pigment content is high and image distortion associated with electronic overshoot is also present. The pre-processing algorithm is critical to obtaining accurate global estimates of pigment from spacecraft data.

  2. Lifting-based reversible color transformations for image compression

    NASA Astrophysics Data System (ADS)

    Malvar, Henrique S.; Sullivan, Gary J.; Srinivasan, Sridhar

    2008-08-01

    This paper reviews a set of color spaces that allow reversible mapping between red-green-blue and luma-chroma representations in integer arithmetic. The YCoCg transform and its reversible form YCoCg-R can improve coding gain by over 0.5 dB with respect to the popular YCrCb transform, while achieving much lower computational complexity. We also present extensions of the YCoCg transform for four-channel CMYK pixel data. Thanks to their reversibility under integer arithmetic, these transforms are useful for both lossy and lossless compression. Versions of these transforms are used in the HD Photo image coding technology (which is the basis for the upcoming JPEG XR standard) and in recent editions of the H.264/MPEG-4 AVC video coding standard.
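
    A sketch of the lifting-based, integer-reversible YCoCg-R transform discussed above (forward and inverse), following the commonly published lifting steps; R, G, B are integers.

      def rgb_to_ycocg_r(r, g, b):
          co = r - b
          t = b + (co >> 1)
          cg = g - t
          y = t + (cg >> 1)
          return y, co, cg

      def ycocg_r_to_rgb(y, co, cg):
          t = y - (cg >> 1)
          g = cg + t
          b = t - (co >> 1)
          r = b + co
          return r, g, b

      # A round trip on one pixel recovers the input exactly:
      assert ycocg_r_to_rgb(*rgb_to_ycocg_r(200, 120, 45)) == (200, 120, 45)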

  3. Visualization of multivariate image data using image fusion and perceptually optimized color scales based on sRGB

    NASA Astrophysics Data System (ADS)

    Saalbach, Axel; Twellmann, Thorsten; Nattkemper, Tim; White, Mark; Khazen, Michael; Leach, Martin O.

    2004-05-01

    Due to the rapid progress in medical imaging technology, analysis of multivariate image data is receiving increased interest. However, their visual exploration is a challenging task since it requires the integration of information from many different sources which usually cannot be perceived at once by an observer. Image fusion techniques are commonly used to obtain information from multivariate image data, while psychophysical aspects of data visualization are usually not considered. Visualization is typically achieved by means of device-derived color scales. With respect to psychophysical aspects of visualization, more sophisticated color mapping techniques based on device-independent (and perceptually uniform) color spaces like CIELUV have been proposed. Nevertheless, the benefit of these techniques is limited by the fact that they require complex color space transformations to account for device characteristics and viewing conditions. In this paper we present a new framework for the visualization of multivariate image data using image fusion and color mapping techniques. In order to overcome problems of consistent image presentations and color space transformations, we propose perceptually optimized color scales based on CIELUV in combination with sRGB (IEC 61966-2-1) color specification. In contrast to color definitions based purely on CIELUV, sRGB data can be used directly under reasonable conditions, without complex transformations and additional information. In the experimental section we demonstrate the advantages of our approach in an application of these techniques to the visualization of DCE-MRI images from breast cancer research.
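
    As a rough illustration of the kind of color scale the paper advocates, the sketch below builds a lightness ramp at fixed chroma and hue in CIELUV and renders it to sRGB with scikit-image. The lightness range, chroma, and hue values are arbitrary choices of mine, and out-of-gamut values are simply clipped; this is not the authors' framework.

```python
import numpy as np
from skimage.color import luv2rgb

def perceptual_scale(n=256, chroma=50.0, hue_deg=30.0):
    """Color scale with uniform steps in CIELUV lightness, rendered to sRGB.

    Lightness L* increases linearly so equal data increments map to roughly
    equal perceived lightness differences; u*, v* keep a fixed chroma and
    hue so the scale stays ordered.
    """
    L = np.linspace(5, 95, n)                       # avoid the extremes
    u = chroma * np.cos(np.deg2rad(hue_deg)) * np.ones(n)
    v = chroma * np.sin(np.deg2rad(hue_deg)) * np.ones(n)
    luv = np.stack([L, u, v], axis=-1).reshape(1, n, 3)
    rgb = luv2rgb(luv)                              # sRGB (D65), values near [0, 1]
    return np.clip(rgb[0], 0, 1)                    # clip any out-of-gamut colors

lut = (perceptual_scale() * 255).astype(np.uint8)   # 256-entry 8-bit lookup table
```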

  4. Image mosaicking based on feature points using color-invariant values

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Chang; Kwon, Oh-Seol; Ko, Kyung-Woo; Lee, Ho-Young; Ha, Yeong-Ho

    2008-02-01

    In the field of computer vision, image mosaicking is achieved using image features, such as textures, colors, and shapes between corresponding images, or local descriptors representing neighborhoods of feature points extracted from corresponding images. However, image mosaicking based on feature points has attracted more recent attention due to the simplicity of the geometric transformation, regardless of distortion and differences in intensity generated by camera motion in consecutive images. Yet, since most feature-point matching algorithms extract feature points using gray values, identifying corresponding points becomes difficult in the case of changing illumination and images with a similar intensity. Accordingly, to solve these problems, this paper proposes a method of image mosaicking based on feature points using color information of images. Essentially, the digital values acquired from a real digital color camera are converted to values of a virtual camera with distinct narrow bands. Values based on the surface reflectance and invariant to the chromaticity of various illuminations are then derived from the virtual camera values and defined as color-invariant values invariant to changing illuminations. The validity of these color-invariant values is verified in a test using a Macbeth Color-Checker under simulated illuminations. The test also compares the proposed method using the color-invariant values with the conventional SIFT algorithm. The accuracy of the matching between the feature points extracted using the proposed method is increased, while image mosaicking using color information is also achieved.

  5. Application of digital color image analysis for colorimetric quality evaluation of surface defects on paint coatings

    NASA Astrophysics Data System (ADS)

    Steckert, Carsten; Witt, Klaus

    2000-12-01

    A method for the quality management of paint producers was developed that allows for an objective description of inhomogeneous fading of paint coatings after outdoor weathering, using relevant metric quantities such as color contrast, gradient of color contrast, and geometric features of the inhomogeneous structures. These quantities may be measured with digital color image analysis. The first step in applying this technique is a systematic investigation of the color transformation properties specific to the selected input/output devices used for digital imaging. To build a color management system, mathematical models of the color transformation processes were optimized and embedded in commercial color image analysis software. The required metric parameters, which evaluate the damage on the coated surfaces, must be derived so that they agree as closely as possible with the visual judgements of experts on the categorization of the damage. A set of 150 weathered paint-coating samples was selected to investigate this correlation.

  6. Joint sparse coding based spatial pyramid matching for classification of color medical image.

    PubMed

    Shi, Jun; Li, Yi; Zhu, Jie; Sun, Haojie; Cai, Yin

    2015-04-01

    Although color medical images are important in clinical practice, they are usually converted to grayscale for further processing in pattern recognition, resulting in loss of rich color information. The sparse coding based linear spatial pyramid matching (ScSPM) and its variants are popular for grayscale image classification, but cannot extract color information. In this paper, we propose a joint sparse coding based SPM (JScSPM) method for the classification of color medical images. A joint dictionary can represent both the color information in each color channel and the correlation between channels. Consequently, the joint sparse codes calculated from a joint dictionary can carry color information, and therefore this method can easily transform a feature descriptor originally designed for grayscale images to a color descriptor. A color hepatocellular carcinoma histological image dataset was used to evaluate the performance of the proposed JScSPM algorithm. Experimental results show that JScSPM provides significant improvements as compared with the majority voting based ScSPM and the original ScSPM for color medical image classification. PMID:24976104

  7. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Parwan mineral district in Afghanistan: Chapter CC in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Parwan mineral district, which has gold and copper deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006, 2007), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such
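
    The Data Series combines the 10-m AVNIR multispectral bands with the 2.5-m PRISM panchromatic band to reach 2.5-m natural-color mosaics. The record above does not spell out the sharpening algorithm, so the sketch below shows only a generic Brovey-style pan-sharpen as an illustration of how such a product can be formed; the function name and parameters are mine, not the USGS procedure.

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-6):
    """Generic Brovey-style pan-sharpening sketch.

    ms  : (H, W, 3) multispectral bands already resampled to the pan grid
    pan : (H, W)    panchromatic band at full resolution
    Each band is rescaled so that the per-pixel band mean matches the pan
    intensity, injecting the 2.5-m spatial detail into the 10-m colors.
    """
    ms = ms.astype(np.float64)
    pan = pan.astype(np.float64)
    intensity = ms.mean(axis=-1) + eps
    sharpened = ms * (pan / intensity)[..., None]
    return np.clip(sharpened, 0, 255).astype(np.uint8)   # back to 8-bit range
```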

  8. Edge detection, color quantization, segmentation, texture removal, and noise reduction of color image using quaternion iterative filtering

    NASA Astrophysics Data System (ADS)

    Hsiao, Yu-Zhe; Pei, Soo-Chang

    2014-07-01

    Empirical mode decomposition (EMD) is a simple, local, adaptive, and efficient method for nonlinear and nonstationary signal analysis. However, for dealing with multidimensional signals, EMD and its variants such as bidimensional EMD (BEMD) and multidimensional EMD (MEMD) are very slow due to the large number of envelope interpolations required. Recently, a method called iterative filtering has been proposed. This filtering-based method is not as precise as EMD, but it is very fast and achieves results comparable to EMD in many image and signal processing applications. We combine quaternion algebra and iterative filtering to perform edge detection, color quantization, segmentation, texture removal, and noise reduction on color images. Similar results can be obtained by combining quaternions with EMD; however, as mentioned before, EMD is slow and cumbersome. Therefore, we propose quaternion iterative filtering as an alternative to quaternion EMD (QEMD). The edges of color images can be detected using intrinsic mode functions (IMFs), and the color quantization results can be obtained from the residual image. The noise reduction algorithm of our method can be used to deal with Gaussian, salt-and-pepper, and speckle noise, among others. The peak signal-to-noise ratio results are satisfactory and the processing speed is also very fast. Since textures in a color image are high-frequency components, we can also use quaternion iterative filtering to decompose a color image into many high- and low-frequency IMFs and remove textures by eliminating the high-frequency IMFs.
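
    The iterative-filtering idea, replacing EMD's envelope interpolation with repeated subtraction of a low-pass estimate of the local mean, can be shown with a small 1-D sketch. The mask length, number of IMFs, and iteration count below are arbitrary, and the paper's quaternion formulation for color images is not reproduced here.

```python
import numpy as np

def iterative_filtering(signal, mask_len=31, n_imfs=3, n_sift=30):
    """1-D sketch of the iterative filtering idea behind the paper.

    Each IMF is obtained by repeatedly subtracting a moving-average estimate
    of the local mean (a fixed low-pass mask) instead of the envelope
    interpolation used by EMD; the residual is what remains afterwards.
    """
    mask = np.ones(mask_len) / mask_len              # simple low-pass mask
    residual = signal.astype(np.float64).copy()
    imfs = []
    for _ in range(n_imfs):
        component = residual.copy()
        for _ in range(n_sift):
            local_mean = np.convolve(component, mask, mode="same")
            component = component - local_mean       # keep the oscillatory part
        imfs.append(component)
        residual = residual - component
    return np.array(imfs), residual
```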

  9. TinyECCK: Efficient Elliptic Curve Cryptography Implementation over GF(2^m) on 8-Bit Micaz Mote

    NASA Astrophysics Data System (ADS)

    Seo, Seog Chung; Han, Dong-Guk; Kim, Hyung Chan; Hong, Seokhie

    In this paper, we revisit a generally accepted opinion: implementing an Elliptic Curve Cryptosystem (ECC) over GF(2^m) on sensor motes with a small word size is not appropriate, because XOR multiplication over GF(2^m) is not efficiently supported by current low-powered microprocessors. Although there are some implementations over GF(2^m) on sensor motes, their performance is not satisfactory enough for use in wireless sensor networks (WSNs). We have found that field multiplication over GF(2^m) involves a number of redundant memory accesses and that its inefficiency originates from this problem. Moreover, the field reduction process also requires many redundant memory accesses. Therefore, we propose techniques for reducing unnecessary memory accesses. With the proposed strategies, the running times of field multiplication and reduction over GF(2^163) can be decreased by 21.1% and 24.7%, respectively. These savings noticeably decrease the execution times of Elliptic Curve Digital Signature Algorithm (ECDSA) operations (signing and verification) by around 15-19%. We present TinyECCK (Tiny Elliptic Curve Cryptosystem with Koblitz curve), a TinyOS package supporting elliptic curve operations, which is, as far as we know, the first implementation of a Koblitz curve on sensor motes. Through comparisons with existing software implementations of ECC on sensor motes, built in C or in a hybrid of C and inline assembly, we show that TinyECCK outperforms them in terms of running time, code size, and supported services. Furthermore, we show that field multiplication over GF(2^m) can be faster than that over GF(p) on an 8-bit ATmega128 processor by comparing TinyECCK with TinyECC, a well-known ECC implementation over GF(p). TinyECCK with sect163k1 can generate a signature and verify it in 1.37 and 2.32 s, respectively, on a Micaz mote with 13,748 bytes of ROM and 1,004 bytes of RAM.
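
    The performance issue the authors describe arises in the shift-and-XOR ("XOR multiplication") field arithmetic itself. The Python sketch below shows the basic polynomial-basis multiply-and-reduce loop for GF(2^m); the reduction polynomial is the one commonly quoted for sect163k1, and this is only an illustration of the arithmetic. On an 8-bit microcontroller the same computation is spread over byte arrays, which is where the repeated memory accesses the paper targets come in.

```python
def gf2m_mul(a, b, m=163, r=(1 << 7) | (1 << 6) | (1 << 3) | 1):
    """Shift-and-XOR multiplication in GF(2^m), polynomial basis.

    Operands are integers whose bits are polynomial coefficients. The field
    is defined by x^m + r(x); for sect163k1 the reduction polynomial is
    commonly given as x^163 + x^7 + x^6 + x^3 + 1.
    """
    result = 0
    while b:
        if b & 1:
            result ^= a          # "XOR multiplication": add (XOR) shifted copies of a
        b >>= 1
        a <<= 1
        if a >> m:               # reduce as soon as the degree reaches m
            a = (a ^ (1 << m)) ^ r
    return result
```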

  10. Best Color Image of Jupiter's Little Red Spot

    NASA Technical Reports Server (NTRS)

    2007-01-01

    This amazing color portrait of Jupiter's 'Little Red Spot' (LRS) combines high-resolution images from the New Horizons Long Range Reconnaissance Imager (LORRI), taken at 03:12 UT on February 27, 2007, with color images taken nearly simultaneously by the Wide Field Planetary Camera 2 (WFPC2) on the Hubble Space Telescope. The LORRI images provide details as fine as 9 miles across (15 kilometers), which is approximately 10 times better than Hubble can provide on its own. The improved resolution is possible because New Horizons was only 1.9 million miles (3 million kilometers) away from Jupiter when LORRI snapped its pictures, while Hubble was more than 500 million miles (800 million kilometers) away from the Gas Giant planet.

    The Little Red Spot is the second largest storm on Jupiter, roughly 70% the size of the Earth, and it started turning red in late-2005. The clouds in the Little Red Spot rotate counterclockwise, or in the anticyclonic direction, because it is a high-pressure region. In that sense, the Little Red Spot is the opposite of a hurricane on Earth, which is a low-pressure region - and, of course, the Little Red Spot is far larger than any hurricane on Earth.

    Scientists don't know exactly how or why the Little Red Spot turned red, though they speculate that the change could stem from a surge of exotic compounds from deep within Jupiter, caused by an intensification of the storm system. In particular, sulfur-bearing cloud droplets might have been propelled about 50 kilometers into the upper level of ammonia clouds, where brighter sunlight bathing the cloud tops released the red-hued sulfur embedded in the droplets, causing the storm to turn red. A similar mechanism has been proposed for the Little Red Spot's 'older brother,' the Great Red Spot, a massive energetic storm system that has persisted for over a century.

    New Horizons is providing an opportunity to examine an 'infant' red storm system in detail, which may help scientists

  11. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Ghazni1 mineral district in Afghanistan: Chapter DD in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.

    2014-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Ghazni1 mineral district, which has spectral reflectance anomalies indicative of clay, aluminum, gold, silver, mercury, and sulfur deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA, 2008, 2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such

  12. Color enhancement and image defogging in HSI based on Retinex model

    NASA Astrophysics Data System (ADS)

    Gao, Han; Wei, Ping; Ke, Jun

    2015-08-01

    Retinex is a perceptual luminance algorithm based on color constancy. It performs well for color enhancement. In some cases, however, the traditional Retinex algorithms, both Single-Scale Retinex (SSR) and Multi-Scale Retinex (MSR) in RGB color space, do not work well and cause color deviation. To solve this problem, we present improved SSR and MSR algorithms. Unlike other Retinex algorithms, we implement Retinex in the HSI (Hue, Saturation, Intensity) color space and use a parameter α to improve image quality. Moreover, the algorithms presented in this paper also perform well for image defogging. In contrast with traditional Retinex algorithms, we use the intensity channel to obtain the reflection information of an image. The intensity channel is processed with a Gaussian center-surround filter to estimate the illumination, which should be removed from the intensity channel. We then subtract the illumination from the intensity channel to obtain the reflection image, which contains only the attributes of the objects in the image. Using the reflection image and the parameter α, an arbitrary scale factor set manually, we improve the intensity channel and complete the color enhancement. Our experiments show that this approach works well compared with existing methods for color enhancement. Besides better handling of the color deviation problem and image defogging, a visible improvement in image quality for human contrast perception is also observed.
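
    A minimal single-scale version of the intensity-channel processing described above can be sketched as follows, assuming an 8-bit RGB input. The Gaussian surround width and the use of a simple gamma-style gain for the α factor are my choices, not the paper's exact formulation. Because every channel is rescaled by the same ratio, hue and saturation in the HSI sense are preserved.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ssr_intensity(rgb, sigma=80.0, alpha=1.0):
    """Single-scale Retinex on the HSI intensity channel (minimal sketch).

    The intensity channel I = (R+G+B)/3 is divided, in log space, by a
    Gaussian center-surround estimate of the illumination; hue and
    saturation are preserved by rescaling R, G, B with the same ratio.
    """
    rgb = rgb.astype(np.float64) + 1.0                    # avoid log(0)
    intensity = rgb.mean(axis=-1)
    illumination = gaussian_filter(intensity, sigma)      # center-surround estimate
    reflectance = np.log(intensity) - np.log(illumination)
    stretched = (reflectance - reflectance.min()) / (np.ptp(reflectance) + 1e-9)
    # alpha stands in for the paper's manual scale factor, applied here as a
    # gamma-style gain on the enhanced intensity.
    new_intensity = 255.0 * stretched ** (1.0 / alpha)
    out = rgb * (new_intensity / intensity)[..., None]    # same ratio on R, G, B
    return np.clip(out, 0, 255).astype(np.uint8)
```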

  13. On independent color space transformations for the compression of CMYK images.

    PubMed

    de Queiroz, R L

    1999-01-01

    Device and image-independent color space transformations for the compression of CMYK images were studied. A new transformation (to a YYCC color space) was developed and compared to known ones. Several tests were conducted leading to interesting conclusions. Among them, color transformations are not always advantageous over independent compression of CMYK color planes. Another interesting conclusion is that chrominance subsampling is rarely advantageous in this context. Also, it is shown that transformation to YYCC consistently outperforms the transformation to YCbCrK, while being competitive with the image-dependent KLT-based approach. PMID:18267416

  14. Quantitative Evaluation of Surface Color of Tomato Fruits Cultivated in Remote Farm Using Digital Camera Images

    NASA Astrophysics Data System (ADS)

    Hashimoto, Atsushi; Suehara, Ken-Ichiro; Kameoka, Takaharu

    To measure the quantitative surface color information of agricultural products with the ambient information during cultivation, a color calibration method for digital camera images and a remote monitoring system of color imaging using the Web were developed. Single-lens reflex and web digital cameras were used for the image acquisitions. The tomato images through the post-ripening process were taken by the digital camera in both the standard image acquisition system and in the field conditions from the morning to evening. Several kinds of images were acquired with the standard RGB color chart set up just behind the tomato fruit on a black matte, and a color calibration was carried out. The influence of the sunlight could be experimentally eliminated, and the calibrated color information consistently agreed with the standard ones acquired in the system through the post-ripening process. Furthermore, the surface color change of the tomato on the tree in a greenhouse was remotely monitored during maturation using the digital cameras equipped with the Field Server. The acquired digital color images were sent from the Farm Station to the BIFE Laboratory of Mie University via VPN. The time behavior of the tomato surface color change during the maturing process could be measured using the color parameter calculated based on the obtained and calibrated color images along with the ambient atmospheric record. This study is a very important step in developing the surface color analysis for both the simple and rapid evaluation of the crop vigor in the field and to construct an ambient and networked remote monitoring system for food security, precision agriculture, and agricultural research.
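
    One common way to realize the chart-based calibration described above is to fit a linear (affine) color-correction matrix to the chart patches by least squares and then apply it to the whole frame. The sketch below assumes that per-patch mean RGB values have already been extracted; the 3x4 affine model and the function names are illustrative, not the authors' exact procedure.

```python
import numpy as np

def fit_color_correction(measured, reference):
    """Fit an affine color-correction matrix from color-chart patches.

    measured  : (N, 3) mean RGB of each chart patch in the camera image
    reference : (N, 3) known RGB values of the same patches
    Solves reference ~= [measured, 1] @ M in the least-squares sense, so the
    correction can later be applied to every pixel of the scene.
    """
    design = np.hstack([measured, np.ones((measured.shape[0], 1))])
    matrix, *_ = np.linalg.lstsq(design, reference, rcond=None)
    return matrix                                         # shape (4, 3)

def apply_color_correction(image, matrix):
    """Apply the fitted correction to an 8-bit RGB image."""
    h, w, _ = image.shape
    flat = image.reshape(-1, 3).astype(np.float64)
    flat = np.hstack([flat, np.ones((flat.shape[0], 1))]) @ matrix
    return np.clip(flat, 0, 255).reshape(h, w, 3).astype(np.uint8)
```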

  15. Radar Image with Color as Height, Hariharalaya, Cambodia

    NASA Technical Reports Server (NTRS)

    2002-01-01

    top. Image brightness is from the C-band (5.6 centimeters, or 2.2 inches wavelength) radar backscatter, which is a measure of how much energy the surface reflects back toward the radar. Color is used to represent elevation contours. One cycle of color--from blue to red to yellow to green and back to blue again--represents 10 meters (32.8 feet) of elevation change.

    AIRSAR flies aboard a NASA DC-8 based at NASA's Dryden Flight Research Center, Edwards, Calif. In the TOPSAR mode, AIRSAR collects radar interferometry data from two spatially separated antennas (2.6 meters, or 8.5 feet). Information from the two antennas is used to form radar backscatter imagery and to generate highly accurate elevation data. Built, operated and managed by JPL, AIRSAR is part of NASA's Earth Science Enterprise program. JPL is a division of the California Institute of Technology in Pasadena.

  16. Radar Image with Color as Height, Nokor Pheas Trapeng, Cambodia

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Nokor Pheas Trapeng is the name of the large black rectangular feature in the center-bottom of this image, acquired by NASA's Airborne Synthetic Aperture Radar (AIRSAR). Its Khmer name translates as 'Tank of the City of Refuge'. The immense tank is a typical structure built by the Khmer for water storage and control, but its size is unusually large. This suggests, as does 'city' in its name, that in ancient times this area was far more prosperous than today.

    A visit to this remote, inaccessible site was made in December 1998. The huge water tank was hardly visible. From the radar data we knew that the tank stretched some 500 meters (1,640 feet) from east to west. However, between all the plants growing on the surface of the water and the trees and other vegetation in the area, the water tank blended with the surrounding topography. Among the vegetation, on the northeast of the tank, were remains of an ancient temple and a spirit shrine. So although far from the temples of Angkor, to the southeast, the ancient water structure is still venerated by the local people.

    The image covers an area approximately 9.5 by 8.7 kilometers (5.9 by 5.4 miles) with a pixel spacing of 5 meters (16.4 feet). North is at top. Image brightness is from the C-band (5.6 centimeters, or 2.2 inches) wavelength radar backscatter, which is a measure of how much energy the surface reflects back toward the radar. Color is used to represent elevation contours. One cycle of color represents 20 meters (65.6 feet) of elevation change; that is, going from blue to red to yellow to green and back to blue again corresponds to 20 meters (65.6 feet) of elevation change.

    AIRSAR flies aboard a NASA DC-8 based at NASA's Dryden Flight Research Center, Edwards, Calif. In the TOPSAR mode, AIRSAR collects radar interferometry data from two spatially separated antennas (2.6 meters, or 8.5 feet). Information from the two antennas is used to form radar backscatter imagery and to generate highly accurate

  17. Radar Image with Color as Height, Sman Teng, Temple, Cambodia

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This image of Cambodia's Angkor region, taken by NASA's Airborne Synthetic Aperture Radar (AIRSAR), reveals a temple (upper-right) not depicted on early 19th Century French archeological survey maps and American topographic maps. The temple, known as 'Sman Teng,' was known to the local Khmer people, but had remained unknown to historians due to the remoteness of its location. The temple is thought to date to the 11th Century: the heyday of Angkor. It is an important indicator of the strategic and natural resource contributions of the area northwest of the capital to the urban center of Angkor. Sman Teng, the name designating one of the many types of rice enjoyed by the Khmer, was 'discovered' by a scientist at NASA's Jet Propulsion Laboratory, Pasadena, Calif., working in collaboration with an archaeological expert on the Angkor region. Analysis of this remote area was a true collaboration of archaeology and technology. Locating the temple of Sman Teng required the skills of scientists trained to spot the types of topographic anomalies that only radar can reveal.

    This image, with a pixel spacing of 5 meters (16.4 feet), depicts an area of approximately 5 by 4.7 kilometers (3.1 by 2.9 miles). North is at top. Image brightness is from the P-band (68 centimeters, or 26.8 inches) wavelength radar backscatter, a measure of how much energy the surface reflects back toward the radar. Color is used to represent elevation contours. One cycle of color represents 25 meters (82 feet) of elevation change, so going from blue to red to yellow to green and back to blue again corresponds to 25 meters (82 feet) of elevation change.

    AIRSAR flies aboard a NASA DC-8 based at NASA's Dryden Flight Research Center, Edwards, Calif. In the TOPSAR mode, AIRSAR collects radar interferometry data from two spatially separated antennas (2.6 meters, or 8.5 feet). Information from the two antennas is used to form radar backscatter imagery and to generate highly accurate elevation data

  18. 32-megapixel dual-color CCD imaging system

    NASA Astrophysics Data System (ADS)

    Stubbs, Christopher W.; Marshall, Stuart; Cook, Kenneth H.; Hills, Robert F.; Noonan, Joseph; Akerlof, Carl W.; Alcock, Charles R.; Axelrod, Timothy S.; Bennett, D.; Dagley, K.; Freeman, K. C.; Griest, Kim; Park, Hye-Sook; Perlmutter, Saul; Peterson, Bruce A.; Quinn, Peter J.; Rodgers, A. W.; Sosin, C.; Sutherland, W. J.

    1993-07-01

    We have developed an astronomical imaging system that incorporates a total of eight 2048 X 2048 pixel CCDs into two focal planes, to allow simultaneous imaging in two colors. Each focal plane comprises four 'edge-buttable' detector arrays, on custom Kovar mounts. The clocking and bias voltage levels for each CCD are independently adjustable, but all the CCDs are operated synchronously. The sixteen analog outputs (two per chip) are measured at 16 bits with commercially available correlated double sampling A/D converters. The resulting 74 MBytes of data per frame are transferred over fiber optic links into dual-ported VME memory. The total readout time is just over one minute. We obtain read noise ranging from 6.5 e- to 10 e- for the various channels when digitizing at 34 Kpixels/sec, with full well depths (MPP mode) of approximately 100,000 e- per 15 micrometers X 15 micrometers pixel. This instrument is currently being used in a search for gravitational microlensing by compact objects in our Galactic halo, using the newly refurbished 1.3 m telescope at the Mt. Stromlo Observatory, Australia.

  19. Hue-preserving local contrast enhancement and illumination compensation for outdoor color images

    NASA Astrophysics Data System (ADS)

    Tektonidis, Marco; Monnin, David; Christnacher, Frank

    2015-10-01

    Real-time applications in the field of security and defense use dynamic color camera systems to gain a better understanding of outdoor scenes. Enhancing details and improving visibility in images requires local image processing, and reducing lightness and color inconsistencies between images acquired under different illumination conditions requires compensating for illumination effects. We introduce an automatic hue-preserving local contrast enhancement and illumination compensation approach for outdoor color images. Our approach is based on a shadow-weighted, intensity-based Retinex model that enhances details and compensates for the effect of illumination on the lightness of an image. The Retinex model exploits information from a shadow detection approach to reduce lightness halo artifacts at shadow boundaries. We employ a hue-preserving color transformation to obtain a color image based on the original color information. To reduce color inconsistencies between images acquired under different illumination conditions, we process the saturation using a scaling function. The approach has been successfully applied to static and dynamic color image sequences of outdoor scenes, and an experimental comparison with previous Retinex-based approaches has been carried out.
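
    The hue-preserving step itself is easy to illustrate: enhance the intensity channel only, then scale R, G, and B by the same per-pixel ratio so the channel ratios (and hence the hue) are unchanged. The sketch below uses CLAHE as a stand-in local contrast operator; it is not the authors' shadow-weighted Retinex model.

```python
import numpy as np
from skimage.exposure import equalize_adapthist

def hue_preserving_clahe(rgb, clip_limit=0.02):
    """Hue-preserving local contrast enhancement (generic sketch).

    Local contrast (CLAHE) is applied to the intensity channel only; each
    pixel's R, G, B values are then scaled by the same enhancement ratio,
    which leaves the hue (the ratios between channels) unchanged. Clipping
    to the 8-bit range can still shift saturated pixels slightly.
    """
    rgb = rgb.astype(np.float64)
    intensity = rgb.mean(axis=-1)
    enhanced = equalize_adapthist(intensity / 255.0, clip_limit=clip_limit) * 255.0
    ratio = enhanced / (intensity + 1e-6)
    out = rgb * ratio[..., None]
    return np.clip(out, 0, 255).astype(np.uint8)
```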

  20. ATR/OTR-SY Tank Camera Purge System and in Tank Color Video Imaging System

    SciTech Connect

    Werry, S.M.

    1995-06-06

    This procedure will document the satisfactory operation of the 101-SY tank Camera Purge System (CPS) and 101-SY in tank Color Camera Video Imaging System (CCVIS). Included in the CPS is the nitrogen purging system safety interlock which shuts down all the color video imaging system electronics within the 101-SY tank vapor space during loss of nitrogen purge pressure.

  1. Going Beyond RGB: How to Create Color Composite Images that Convey the Science

    NASA Astrophysics Data System (ADS)

    Rector, Travis A.; Levay, Z. G.; Frattare, L. M.; English, J.; Pu'uohau-Pummill, K.

    2010-01-01

    The quality of modern astronomical data and the agility of current image-processing software enable new ways to visualize data as images. Two developments in particular have led to a fundamental change in how astronomical images may be assembled. First, the availability of high-quality multiwavelength and narrowband data allows for images that do not correspond to the wavelength sensitivity of the human eye, thereby introducing ambiguity in the usage and interpretation of color. Second, many image-processing software packages now use a layering metaphor that allows for any number of astronomical datasets to be combined into a color composite image. Furthermore, any color may be assigned to each dataset, not just red, green or blue. With this technique, images with as many as eight datasets have been produced. Each dataset is intensity scaled and colorized independently, creating an immense parameter space that may be used to assemble the image. Since such images are intended for data visualization, scaling and color schemes must be chosen that best illustrate the science. I will demonstrate how color composite images can be assembled in Photoshop and GIMP. I will also give examples of how color can be effectively used to convey the science of interest.
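
    The layering metaphor described above can be mimicked outside of Photoshop or GIMP in a few lines of NumPy: each dataset is intensity-scaled independently, tinted with an arbitrary color, and blended. The percentile cuts, the screen blend, and the example narrowband channel names are assumptions for illustration only.

```python
import numpy as np

def colorize_layer(data, color, black=None, white=None):
    """Intensity-scale one dataset and tint it with an arbitrary RGB color."""
    black = np.percentile(data, 1) if black is None else black
    white = np.percentile(data, 99) if white is None else white
    scaled = np.clip((data - black) / (white - black + 1e-9), 0, 1)
    return scaled[..., None] * np.asarray(color, dtype=np.float64)

def screen_blend(layers):
    """Combine colorized layers with a 'screen' blend, as image editors do."""
    out = np.zeros_like(layers[0])
    for layer in layers:
        out = 1.0 - (1.0 - out) * (1.0 - layer)
    return np.clip(out, 0, 1)

# Example (hypothetical narrowband arrays mapped to non-RGB colors):
# h_alpha, oiii, sii = ...  # 2-D arrays from the reduced data
# composite = screen_blend([
#     colorize_layer(h_alpha, (1.0, 0.3, 0.3)),
#     colorize_layer(oiii,    (0.3, 0.8, 1.0)),
#     colorize_layer(sii,     (1.0, 0.8, 0.2)),
# ])
```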

  2. Radar Image with Color as Height, Lovea, Cambodia

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This image of Lovea, Cambodia, was acquired by NASA's Airborne Synthetic Aperture Radar (AIRSAR). Lovea, the roughly circular feature in the middle-right of the image, rises some 5 meters (16.4 feet) above the surrounding terrain. Lovea is larger than many of the other mound sites with a diameter of greater than 300 meters (984.3 feet). However, it is one of a number highlighted by the radar imagery. The present-day village of Lovea does not occupy all of the elevated area. However, at the center of the mound is an ancient spirit post honoring the legendary founder of the village. The mound is surrounded by earthworks and has vestiges of additional curvilinear features. Today, as in the past, these harnessed water during the rainy season, and conserved it during the long dry months of the year.

    The village of Lovea located on the mound was established in pre-Khmer times, probably before 500 A.D. In the lower left portion of the image is a large trapeng and square moat. These are good examples of construction during the historical 9th to 14th Century A.D. Khmer period; construction that honored and protected earlier circular villages. This suggests a cultural and technical continuity between prehistoric circular villages and the immense urban site of Angkor. This connection is one of the significant finds generated by NASA's radar imaging of Angkor. It shows that the city of Angkor was a particularly Khmer construction. The temple forms and water management structures of Angkor were the result of pre-existing Khmer beliefs and methods of water management.

    Image dimensions are approximately 6.3 by 4.7 kilometers (3.9 by 2.9 miles). North is at top. Image brightness is from the C-band (5.6 centimeters, or 2.2 inches wavelength) radar backscatter, which is a measure of how much energy the surface reflects back toward the radar. Color is used to represent elevation contours. One cycle of color represents 20 meters (65.6 feet) of elevation change; that is, going

  3. Color imaging of Mars by the High Resolution Imaging Science Experiment (HiRISE)

    USGS Publications Warehouse

    Delamere, W.A.; Tornabene, L.L.; McEwen, A.S.; Becker, K.; Bergstrom, J.W.; Bridges, N.T.; Eliason, E.M.; Gallagher, D.; Herkenhoff, K. E.; Keszthelyi, L.; Mattson, S.; McArthur, G.K.; Mellon, M.T.; Milazzo, M.; Russell, P.S.; Thomas, N.

    2010-01-01

    HiRISE has been producing a large number of scientifically useful color products of Mars and other planetary objects. The three broad spectral bands, coupled with the highly sensitive 14-bit detectors and time delay integration, enable detection of subtle color differences. The very high spatial resolution of HiRISE can augment the mineralogic interpretations based on multispectral (THEMIS) and hyperspectral datasets (TES, OMEGA and CRISM) and thereby enable detailed geologic and stratigraphic interpretations at meter scales. In addition to providing some examples of color images and their interpretation, we describe the processing techniques used to produce them and note some of the minor artifacts in the output. We also provide an example of how HiRISE color products can be effectively used to expand mineral and lithologic mapping provided by CRISM data products that are backed by other spectral datasets. The utility of high-quality color data for understanding geologic processes on Mars has been one of the major successes of HiRISE. © 2009 Elsevier Inc.

  4. Color Doppler imaging of the retrobulbar vessels in diabetic retinopathy

    PubMed Central

    Walasik-Szemplińska, Dorota

    2014-01-01

    Diabetes is a metabolic disease characterized by elevated blood glucose level due to impaired insulin secretion and activity. Chronic hyperglycemia leads to functional disorders of numerous organs and to their damage. Vascular lesions belong to the most common late complications of diabetes. Microangiopathic lesions can be found in the eyeball, kidneys and nervous system. Macroangiopathy is associated with coronary and peripheral vessels. Diabetic retinopathy is the most common microangiopathic complication characterized by closure of slight retinal blood vessels and their permeability. Despite intensive research, the pathomechanism that leads to the development and progression of diabetic retinopathy is not fully understood. The examinations used in assessing diabetic retinopathy usually involve imaging of the vessels in the eyeball and the retina. Therefore, the examinations include: fluorescein angiography, optical coherence tomography of the retina, B-mode ultrasound imaging, perimetry and digital retinal photography. There are many papers that discuss the correlations between retrobulbar circulation alterations and progression of diabetic retinopathy based on Doppler sonography. Color Doppler imaging is a non-invasive method enabling measurements of blood flow velocities in small vessels of the eyeball. The most frequently assessed vessels include: the ophthalmic artery, which is the first branch of the internal carotid artery, as well as the central retinal vein and artery, and the posterior ciliary arteries. The analysis of hemodynamic alterations in the retrobulbar vessels may deliver important information concerning circulation in diabetes and help to answer the question whether there is a relation between the progression of diabetic retinopathy and the changes observed in blood flow in the vessels of the eyeball. This paper presents the overview of literature regarding studies on blood flow in the vessels of the eyeball in patients with diabetic

  5. Hyperspectral imaging of cuttlefish camouflage indicates good color match in the eyes of fish predators

    PubMed Central

    Chiao, Chuan-Chin; Wickiser, J. Kenneth; Allen, Justine J.; Genter, Brock; Hanlon, Roger T.

    2011-01-01

    Camouflage is a widespread phenomenon throughout nature and an important antipredator tactic in natural selection. Many visual predators have keen color perception, and thus camouflage patterns should provide some degree of color matching in addition to other visual factors such as pattern, contrast, and texture. Quantifying camouflage effectiveness in the eyes of the predator is a challenge from the perspectives of both biology and optical imaging technology. Here we take advantage of hyperspectral imaging (HSI), which records full-spectrum light data, to simultaneously visualize color match and pattern match in the spectral and the spatial domains, respectively. Cuttlefish can dynamically camouflage themselves on any natural substrate and, despite their colorblindness, produce body patterns that appear to have high-fidelity color matches to the substrate when viewed directly by humans or with RGB images. Live camouflaged cuttlefish on natural backgrounds were imaged using HSI, and subsequent spectral analysis revealed that most reflectance spectra of individual cuttlefish and substrates were similar, rendering the color match possible. Modeling color vision of potential di- and trichromatic fish predators of cuttlefish corroborated the spectral match analysis and demonstrated that camouflaged cuttlefish show good color match as well as pattern match in the eyes of fish predators. These findings (i) indicate the strong potential of HSI technology to enhance studies of biological coloration and (ii) provide supporting evidence that cuttlefish can produce color-coordinated camouflage on natural substrates despite lacking color vision. PMID:21576487

  6. Hyperspectral imaging of cuttlefish camouflage indicates good color match in the eyes of fish predators.

    PubMed

    Chiao, Chuan-Chin; Wickiser, J Kenneth; Allen, Justine J; Genter, Brock; Hanlon, Roger T

    2011-05-31

    Camouflage is a widespread phenomenon throughout nature and an important antipredator tactic in natural selection. Many visual predators have keen color perception, and thus camouflage patterns should provide some degree of color matching in addition to other visual factors such as pattern, contrast, and texture. Quantifying camouflage effectiveness in the eyes of the predator is a challenge from the perspectives of both biology and optical imaging technology. Here we take advantage of hyperspectral imaging (HSI), which records full-spectrum light data, to simultaneously visualize color match and pattern match in the spectral and the spatial domains, respectively. Cuttlefish can dynamically camouflage themselves on any natural substrate and, despite their colorblindness, produce body patterns that appear to have high-fidelity color matches to the substrate when viewed directly by humans or with RGB images. Live camouflaged cuttlefish on natural backgrounds were imaged using HSI, and subsequent spectral analysis revealed that most reflectance spectra of individual cuttlefish and substrates were similar, rendering the color match possible. Modeling color vision of potential di- and trichromatic fish predators of cuttlefish corroborated the spectral match analysis and demonstrated that camouflaged cuttlefish show good color match as well as pattern match in the eyes of fish predators. These findings (i) indicate the strong potential of HSI technology to enhance studies of biological coloration and (ii) provide supporting evidence that cuttlefish can produce color-coordinated camouflage on natural substrates despite lacking color vision. PMID:21576487

  7. Brightness, lightness, and specifying color in high-dynamic-range scenes and images

    NASA Astrophysics Data System (ADS)

    Fairchild, Mark D.; Chen, Ping-Hsu

    2011-01-01

    Traditional color spaces have been widely used in a variety of applications including digital color imaging, color image quality, and color management. These spaces, however, were designed for the domain of color stimuli typically encountered with reflecting objects and image displays of such objects. This means the domain of stimuli with luminance levels from slightly above zero to that of a perfect diffuse white (or display white point). This limits the applicability of such spaces to color problems in HDR imaging. This is caused by their hard intercepts at zero luminance/lightness and by their uncertain applicability for colors brighter than diffuse white. To address HDR applications, two new color spaces were recently proposed, hdr-CIELAB and hdr-IPT. They are based on replacing the power-function nonlinearities in CIELAB and IPT with more physiologically plausible hyperbolic functions optimized to most closely simulate the original color spaces in the diffuse reflecting color domain. This paper presents the formulation of the new models, evaluations using Munsell data in comparison with CIELAB, IPT, and CIECAM02, two sets of lightness-scaling data above diffuse white, and various possible formulations of hdr-CIELAB and hdr-IPT to predict the visual results.

  8. A novel false color mapping model-based fusion method of visual and infrared images

    NASA Astrophysics Data System (ADS)

    Qi, Bin; Kun, Gao; Tian, Yue-xin; Zhu, Zhen-yu

    2013-12-01

    A fast and efficient image fusion method is presented to generate near-natural colors from panchromatic visual and thermal imaging sensors. First, a set of daytime color reference images is analyzed and a false color mapping principle is proposed according to human visual and emotional habits. That is, object colors should remain invariant after color mapping operations, differences between infrared and visual images should be enhanced, and the background color should be consistent with the main scene content. A novel nonlinear color mapping model is then given by introducing the geometric average of the gray values of the input visual and infrared images together with a weighted-average algorithm. To determine the control parameters of the mapping model, boundary conditions are listed according to the mapping principle above. Fusion experiments show that the new fusion method achieves a near-natural appearance of the fused image and, compared with the traditional TNO algorithm, enhances color contrast and highlights bright infrared objects. Moreover, it has low complexity and is easy to realize in real-time processing, so it is well suited to nighttime imaging apparatus.
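
    As a purely illustrative sketch of this kind of mapping (not the paper's model or its boundary conditions), the code below blends a geometric and a weighted average of the two inputs into luminance and lets the signed visual/infrared difference drive the red-blue balance; all constants are arbitrary.

```python
import numpy as np

def fuse_vis_ir(vis, ir, w=0.5):
    """Toy false-color fusion of a visual and a thermal image (8-bit inputs).

    Luminance comes from a blend of the geometric and weighted averages of
    the two inputs; the signed difference between IR and visual drives the
    red-blue balance so thermally bright objects stand out.
    """
    vis = vis.astype(np.float64) / 255.0
    ir = ir.astype(np.float64) / 255.0
    geo = np.sqrt(vis * ir)                        # geometric average term
    lum = w * geo + (1 - w) * 0.5 * (vis + ir)     # weighted average term
    diff = ir - vis                                # positive where IR is brighter
    r = np.clip(lum + 0.5 * diff, 0, 1)
    g = np.clip(lum, 0, 1)
    b = np.clip(lum - 0.5 * diff, 0, 1)
    return (np.dstack([r, g, b]) * 255).astype(np.uint8)
```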

  9. Plasmonics-Based Multifunctional Electrodes for Low-Power-Consumption Compact Color-Image Sensors.

    PubMed

    Lin, Keng-Te; Chen, Hsuen-Li; Lai, Yu-Sheng; Chi, Yi-Min; Chu, Ting-Wei

    2016-03-01

    High pixel density, efficient color splitting, a compact structure, superior quantum efficiency, and low power consumption are all important features for contemporary color-image sensors. In this study, we developed a surface plasmonics-based color-image sensor displaying a high photoelectric response, a microlens-free structure, and a zero-bias working voltage. Our compact sensor comprised only (i) a multifunctional electrode based on a single-layer structured aluminum (Al) film and (ii) an underlying silicon (Si) substrate. This approach significantly simplifies the device structure and fabrication processes; for example, the red, green, and blue color pixels can be prepared simultaneously in a single lithography step. Moreover, such Schottky-based plasmonic electrodes perform multiple functions, including color splitting, optical-to-electrical signal conversion, and photogenerated carrier collection for color-image detection. Our multifunctional, electrode-based device could also avoid the interference phenomenon that degrades the color-splitting spectra found in conventional color-image sensors. Furthermore, the device took advantage of the near-field surface plasmonic effect around the Al-Si junction to enhance the optical absorption of Si, resulting in a significant photoelectric current output even under low-light surroundings and zero bias voltage. These plasmonic Schottky-based color-image devices could convert a photocurrent directly into a photovoltage and provided sufficient voltage output for color-image detection even under a light intensity of only several femtowatts per square micrometer. Unlike conventional color image devices, using voltage as the output signal decreases the area of the periphery read-out circuit because it does not require a current-to-voltage conversion capacitor or its related circuit. Therefore, this strategy has great potential for direct integration with complementary metal-oxide-semiconductor (CMOS)-compatible circuit

  10. Color Image of Death Valley, California from SIR-C

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This radar image shows the area of Death Valley, California and the different surface types in the area. Radar is sensitive to surface roughness with rough areas showing up brighter than smooth areas, which appear dark. This is seen in the contrast between the bright mountains that surround the dark, smooth basins and valleys of Death Valley. The image shows Furnace Creek alluvial fan (green crescent feature) at the far right, and the sand dunes near Stove Pipe Wells at the center. Alluvial fans are gravel deposits that wash down from the mountains over time. Several other alluvial fans (semicircular features) can be seen along the mountain fronts in this image. The dark wrench-shaped feature between Furnace Creek fan and the dunes is a smooth flood-plain which encloses Cottonball Basin. Elevations in the valley range from 70 meters (230 feet) below sea level, the lowest in the United States, to more than 3,300 meters (10,800 feet) above sea level. Scientists are using these radar data to help answer a number of different questions about Earth's geology including how alluvial fans form and change through time in response to climatic changes and earthquakes. The image is centered at 36.629 degrees north latitude, 117.069 degrees west longitude. Colors in the image represent different radar channels as follows: red = L-band horizontally polarized transmitted, horizontally polarized received (LHH); green = L-band horizontally transmitted, vertically received (LHV); and blue = C-band horizontally transmitted, vertically received (CHV).

    SIR-C/X-SAR is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground

  11. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction

    PubMed Central

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-01-01

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed “digital color fusion microscopy” (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available. PMID:27283459
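
    A crude way to convey the idea of fusing a single-wavelength high-resolution image with a color-calibrated low-resolution image is sketched below with PyWavelets: luminance detail comes from the monochrome image, while chroma and coarse luminance come from the color image. The wavelet choice, decomposition level, and luminance matching are my assumptions; the published DCFM pipeline is more involved than this.

```python
import numpy as np
import pywt
from skimage.color import rgb2ycbcr, ycbcr2rgb
from skimage.transform import resize

def fuse_mono_with_color(mono_highres, color_lowres, wavelet="db4", level=3):
    """Crude wavelet fusion of a high-res mono image with a low-res color image.

    The color image supplies the chroma and the coarse luminance; the
    monochrome image supplies the fine luminance detail by swapping in its
    wavelet detail coefficients.
    """
    h, w = mono_highres.shape
    color = resize(color_lowres, (h, w, 3), anti_aliasing=True)  # upsample color
    ycbcr = rgb2ycbcr(color)
    y_color = ycbcr[..., 0]
    mono = mono_highres.astype(np.float64)
    # Match the mono image to the luminance channel's scale before mixing.
    mono = (mono - mono.mean()) / (mono.std() + 1e-9) * y_color.std() + y_color.mean()
    coeffs_mono = pywt.wavedec2(mono, wavelet, level=level)
    coeffs_color = pywt.wavedec2(y_color, wavelet, level=level)
    fused = [coeffs_color[0]] + list(coeffs_mono[1:])  # coarse from color, detail from mono
    ycbcr[..., 0] = pywt.waverec2(fused, wavelet)[:h, :w]
    return np.clip(ycbcr2rgb(ycbcr), 0, 1)             # float sRGB image in [0, 1]
```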

  12. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction.

    PubMed

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-01-01

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed "digital color fusion microscopy" (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available. PMID:27283459

  13. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction

    NASA Astrophysics Data System (ADS)

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-06-01

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed “digital color fusion microscopy” (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available.

  14. A novel color image encryption scheme using alternate chaotic mapping structure

    NASA Astrophysics Data System (ADS)

    Wang, Xingyuan; Zhao, Yuanyuan; Zhang, Huili; Guo, Kang

    2016-07-01

    This paper proposes a color image encryption algorithm using an alternate chaotic mapping structure. Initially, the R, G, and B components are used to form a matrix. One-dimensional and two-dimensional logistic maps are then used to generate a chaotic matrix, and the two chaotic maps are iterated alternately to permute the matrix. At every iteration, an XOR operation is applied to encrypt the plain-image matrix, followed by a further transformation to diffuse the matrix. Finally, the encrypted color image is obtained from the confused matrix. Theoretical analysis and experimental results show that the cryptosystem is secure and practical, and that it is suitable for encrypting color images.
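
    The confusion/diffusion structure described above can be illustrated with a toy single-channel version, shown below. A single 1-D logistic map drives both the pixel permutation and an XOR keystream; the published scheme alternates 1-D and 2-D maps per iteration, so this is only a structural sketch, and the seed and control parameter are arbitrary.

```python
import numpy as np

def logistic_sequence(x0, r, n):
    """Iterate the 1-D logistic map x -> r*x*(1-x) and return n values."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def encrypt_channel(channel, x0=0.3456, r=3.99):
    """Toy confusion/diffusion for one 8-bit color channel.

    A logistic-map sequence drives a pixel permutation (confusion) and a
    keystream that is XORed with the permuted pixels (diffusion).
    """
    flat = channel.ravel()
    chaos = logistic_sequence(x0, r, flat.size)
    perm = np.argsort(chaos)                              # permutation from chaotic order
    keystream = ((chaos * 1e6).astype(np.int64) % 256).astype(np.uint8)
    cipher = flat[perm] ^ keystream
    return cipher.reshape(channel.shape), perm

def decrypt_channel(cipher, perm, x0=0.3456, r=3.99):
    """Invert the toy cipher using the same seed, parameter, and permutation."""
    chaos = logistic_sequence(x0, r, cipher.size)
    keystream = ((chaos * 1e6).astype(np.int64) % 256).astype(np.uint8)
    permuted_plain = cipher.ravel() ^ keystream
    plain = np.empty_like(permuted_plain)
    plain[perm] = permuted_plain                          # undo the permutation
    return plain.reshape(cipher.shape)
```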

  15. Structure-Preserving Color Normalization and Sparse Stain Separation for Histological Images.

    PubMed

    Vahadane, Abhishek; Peng, Tingying; Sethi, Amit; Albarqouni, Shadi; Wang, Lichao; Baust, Maximilian; Steiger, Katja; Schlitter, Anna Melissa; Esposito, Irene; Navab, Nassir

    2016-08-01

    Staining and scanning of tissue samples for microscopic examination is fraught with undesirable color variations arising from differences in raw materials and manufacturing techniques of stain vendors, staining protocols of labs, and color responses of digital scanners. When comparing tissue samples, color normalization and stain separation of the tissue images can be helpful for both pathologists and software. Techniques that are used for natural images fail to utilize structural properties of stained tissue samples and produce undesirable color distortions. The stain concentration cannot be negative. Tissue samples are stained with only a few stains and most tissue regions are characterized by at most one effective stain. We model these physical phenomena that define the tissue structure by first decomposing images in an unsupervised manner into stain density maps that are sparse and non-negative. For a given image, we combine its stain density maps with stain color basis of a pathologist-preferred target image, thus altering only its color while preserving its structure described by the maps. Stain density correlation with ground truth and preference by pathologists were higher for images normalized using our method when compared to other alternatives. We also propose a computationally faster extension of this technique for large whole-slide images that selects an appropriate patch sample instead of using the entire image to compute the stain color basis. PMID:27164577
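
    The non-negative decomposition into stain density maps and a stain color basis can be approximated with an off-the-shelf NMF on optical-density values, as sketched below; the published method adds an explicit sparsity penalty and a structure-preserving normalization step, so this only shows the basic idea, and the background level and component count are assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

def stain_separation(rgb, n_stains=2):
    """Rough sparse stain separation via NMF in optical-density space.

    RGB is converted to optical density (Beer-Lambert), then factored into
    non-negative stain density maps and a stain color basis.
    """
    od = -np.log((rgb.astype(np.float64) + 1.0) / 256.0)  # optical density, >= 0
    X = od.reshape(-1, 3)
    model = NMF(n_components=n_stains, init="nndsvda", max_iter=500)
    densities = model.fit_transform(X)                    # (pixels, n_stains)
    color_basis = model.components_                       # (n_stains, 3)
    maps = densities.reshape(rgb.shape[0], rgb.shape[1], n_stains)
    return maps, color_basis

def recolor(maps, target_basis, background=255.0):
    """Recombine density maps with a target stain color basis (normalization step)."""
    od = maps.reshape(-1, maps.shape[-1]) @ target_basis
    rgb = background * np.exp(-od)
    return np.clip(rgb, 0, 255).reshape(maps.shape[0], maps.shape[1], 3).astype(np.uint8)
```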

  16. STIFF: Converting Scientific FITS Images to TIFF

    NASA Astrophysics Data System (ADS)

    Bertin, Emmanuel

    2011-10-01

    STIFF is a program that converts scientific FITS images to the more popular TIFF format for illustration purposes. Most FITS readers and converters do not do a proper job of converting FITS image data to 8 bits. 8-bit images stored in JPEG, PNG or TIFF files have their intensities implicitly stored in a non-linear way, yet most current FITS image viewers and converters give the user an incorrect translation of the FITS image content by simply rescaling input pixel values linearly. A first consequence is that people working on astronomical images usually have to apply narrow intensity cuts, or square-root or logarithmic intensity transformations, to actually see something in their deep-sky images. A less obvious consequence is that colors obtained by combining images processed this way are not consistent across such a large range of surface brightnesses. Although other software generally offers the user a choice of nonlinear transformations to make faint features stand out more clearly, the limited selection provided means that colors will not be accurately rendered and some manual tweaking will be necessary. The purpose of STIFF is to produce beautiful pictures in an automatic and consistent way.
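
    A hedged illustration (not STIFF itself) of the kind of nonlinear transfer this record describes: linear FITS pixel values are clipped between chosen intensity cuts and passed through a square-root stretch before 8-bit quantization, so faint structure survives the conversion. The percentile cuts are arbitrary placeholders.

    ```python
    import numpy as np

    def to_8bit(data, low=None, high=None, stretch=np.sqrt):
        """Map linear floating-point pixel values to uint8 with a nonlinear stretch."""
        low = np.percentile(data, 0.5) if low is None else low
        high = np.percentile(data, 99.5) if high is None else high
        clipped = np.clip((data - low) / (high - low), 0.0, 1.0)
        return (255.0 * stretch(clipped)).astype(np.uint8)
    ```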

  17. Multi-color imaging of selected southern interacting galaxies

    NASA Technical Reports Server (NTRS)

    Smith, Eric P.; Hintzen, Paul

    1990-01-01

    The authors present preliminary results from a study of selected Arp-Madore Southern Hemisphere peculiar galaxies. Broadband charge coupled device (CCD) images (BVRI) of a subset of these galaxies allow us to study each galaxy's optical morphology, color, and (in a crude manner) degree of nuclear activity, and to compare them with similar data we possess on other active galaxies. Many of these galaxies have optical morphologies closely resembling those of powerful radio galaxies (Smith and Heckman 1989), yet their radio emission is unremarkable. Accurate positions for subsequent spectroscopic studies have been determined along with broad band photometry and morphology studies. Detailed observations of these comparatively bright, low-redshift, well-resolved interacting systems should aid our understanding of the role interactions play in triggering galaxy activity. This work is the initial effort in a long term project to study the role played by the dynamics of the interaction in the production and manifestations of activity in galaxies, and the frequency of galaxy mergers.

  18. Private anonymous fingerprinting for color images in the wavelet domain

    NASA Astrophysics Data System (ADS)

    Abdul, W.; Gaborit, P.; Carré, P.

    2010-01-01

    An online buyer of multimedia content does not want to reveal his identity or his choice of multimedia content whereas the seller or owner of the content does not want the buyer to further distribute the content illegally. To address these issues we present a new private anonymous fingerprinting protocol. It is based on superposed sending for communication security, group signature for anonymity and traceability and single database private information retrieval (PIR) to allow the user to get an element of the database without giving any information about the acquired element. In the presence of a semi-honest model, the protocol is implemented using a blind, wavelet based color image watermarking scheme. The main advantage of the proposed protocol is that both the user identity and the acquired database element are unknown to any third party and in the case of piracy, the pirate can be identified using the group signature scheme. The robustness of the watermarking scheme against Additive White Gaussian Noise is also shown.

  19. Application of the airborne ocean color imager for commercial fishing

    NASA Technical Reports Server (NTRS)

    Wrigley, Robert C.

    1993-01-01

    The objective of the investigation was to develop a commercial remote sensing system for providing near-real-time data (within one day) in support of commercial fishing operations. The Airborne Ocean Color Imager (AOCI) had been built for NASA by Daedalus Enterprises, Inc., but it needed certain improvements, data processing software, and a delivery system to make it into a commercial system for fisheries. Two products were developed to support this effort: the AOCI with its associated processing system and an information service for both commercial and recreational fisheries to be created by Spectro Scan, Inc. The investigation achieved all technical objectives: improving the AOCI, creating software for atmospheric correction and bio-optical output products, georeferencing the output products, and creating a delivery system to get those products into the hands of commercial and recreational fishermen in near-real-time. The first set of business objectives involved Daedalus Enterprises and also were achieved: they have an improved AOCI and new data processing software with a set of example data products for fisheries applications to show their customers. Daedalus' marketing activities showed the need for simplification of the product for fisheries, but they successfully marketed the current version to an Italian consortium. The second set of business objectives tasked Spectro Scan to provide an information service and they could not be achieved because Spectro Scan was unable to obtain necessary venture capital to start up operations.

  20. Voyager 2 Color Image of Enceladus, Almost Full Disk

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This color Voyager 2 image mosaic shows the water-ice-covered surface of Enceladus, one of Saturn's icy moons. Enceladus' diameter of just 500 km would fit across the state of Arizona, yet despite its small size Enceladus exhibits one of the most interesting surfaces of all the icy satellites. Enceladus reflects about 90% of the incident sunlight (about like fresh-fallen snow), placing it among the most reflective objects in the Solar System. Several geologic terrains have superposed crater densities that span a factor of at least 500, thereby indicating huge differences in the ages of these terrains. It is possible that the high reflectivity of Enceladus' surface results from continuous deposition of icy particles from Saturn's E-ring, which in fact may originate from icy volcanoes on Enceladus' surface. Some terrains are dominated by sinuous mountain ridges from 1 to 2 km high (3300 to 6600 feet), whereas other terrains are scarred by linear cracks, some of which show evidence for possible sideways fault motion such as that of California's infamous San Andreas fault. Some terrains appear to have formed by separation of icy plates along cracks, and other terrains are exceedingly smooth at the resolution of this image. The implication carried by Enceladus' surface is that this tiny ice ball has been geologically active and perhaps partially liquid in its interior for much of its history. The heat engine that powers geologic activity here is thought to be elastic deformation caused by tides induced by Enceladus' orbital motion around Saturn and the motion of another moon, Dione.

  1. Mars Color Imager (MARCI) on the Mars Climate Orbiter

    USGS Publications Warehouse

    Malin, M.C.; Bell, J.F., III; Calvin, W.; Clancy, R.T.; Haberle, R.M.; James, P.B.; Lee, S.W.; Thomas, P.C.; Caplinger, M.A.

    2001-01-01

    The Mars Color Imager, or MARCI, experiment on the Mars Climate Orbiter (MCO) consists of two cameras with unique optics and identical focal plane assemblies (FPAs), Data Acquisition System (DAS) electronics, and power supplies. Each camera is characterized by small physical size and mass (~6 x 6 x 12 cm, including baffle; <500 g), low power requirements (<2.5 W, including power supply losses), and high science performance (1000 x 1000 pixel, low noise). The Wide Angle (WA) camera will have the capability to map Mars in five visible and two ultraviolet spectral bands at a resolution of better than 8 km/pixel under the worst case downlink data rate. Under better downlink conditions the WA will provide kilometer-scale global maps of atmospheric phenomena such as clouds, hazes, dust storms, and the polar hood. Limb observations will provide additional detail on atmospheric structure at 1/3 scale-height resolution. The Medium Angle (MA) camera is designed to study selected areas of Mars at regional scale. From 400 km altitude its 6° FOV, which covers ~40 km at 40 m/pixel, will permit all locations on the planet except the poles to be accessible for image acquisitions every two mapping cycles (roughly 52 sols). Eight spectral channels between 425 and 1000 nm provide the ability to discriminate both atmospheric and surface features on the basis of composition. The primary science objectives of MARCI are to (1) observe Martian atmospheric processes at synoptic scales and mesoscales, (2) study details of the interaction of the atmosphere with the surface at a variety of scales in both space and time, and (3) examine surface features characteristic of the evolution of the Martian climate over time. MARCI will directly address two of the three high-level goals of the Mars Surveyor Program: Climate and Resources. Life, the third goal, will be addressed indirectly through the environmental factors associated with the other two goals. Copyright 2001 by the American

  2. A blind dual color images watermarking based on IWT and state coding

    NASA Astrophysics Data System (ADS)

    Su, Qingtang; Niu, Yugang; Liu, Xianxi; Zhu, Yu

    2012-04-01

    In this paper, a state-coding-based blind watermarking algorithm is proposed to embed a color image watermark into a color host image. The technique of state coding, which makes the state code of a data set equal to the watermark information to be hidden, is introduced. During embedding, using the Integer Wavelet Transform (IWT) and the rules of state coding, the R, G and B components of the color image watermark are embedded into the Y, Cr and Cb components of the color host image. The rules of state coding are also used to extract the watermark from the watermarked image without resorting to the original watermark or the original host image. Experimental results show that the proposed watermarking algorithm not only meets the demands of watermark invisibility and robustness, but also performs well compared with the other methods considered in this work.

  3. Spectral images browsing using principal component analysis and set partitioning in hierarchical tree

    NASA Astrophysics Data System (ADS)

    Ma, Long; Zhao, Deping

    2011-12-01

    Spectral imaging technology has been used mostly in remote sensing, but has recently been extended to new areas requiring high-fidelity color reproduction, such as telemedicine and e-commerce. These spectral imaging systems are important because they offer improved color reproduction quality not only for a standard observer under a particular illumination, but for any individual with normal color vision under any other illumination. A means of browsing the resulting archives is therefore needed. In this paper, the authors present a new spectral image browsing architecture, which proceeds as follows: (1) the spectral domain of the spectral image is reduced with the PCA transform, yielding the eigenvectors and the eigenimages; (2) the eigenimages are quantized to the original bit depth of the spectral image (e.g., if the spectral image is originally 8-bit, the eigenimages are quantized to 8 bits), while 32-bit floating-point numbers are used for the eigenvectors; (3) the first eigenimage is losslessly compressed with JPEG-LS, and the remaining eigenimages are lossy-compressed with the wavelet-based SPIHT algorithm. For experimental evaluation, PSNR was used as the measure of spectral accuracy, and ΔE under a standard D65 illuminant was used to evaluate color reproducibility. To test the proposed method, the FOREST and CORAL spectral image databases, containing 12 and 10 spectral images respectively, were used. The images were acquired in the range 403-696 nm, their size was 128x128, the number of bands was 40, and the resolution was 8 bits per sample. The experiments show that the proposed compression method is suitable for browsing, i.e., for visual purposes.
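
    A minimal sketch of steps (1) and (2) above: PCA over the spectral axis of a cube of shape rows x cols x bands, followed by 8-bit quantization of the leading eigenimages. The component count is a placeholder; the JPEG-LS/SPIHT coding of step (3) is not shown.

    ```python
    import numpy as np

    def pca_eigenimages(cube, n_components=8):
        rows, cols, bands = cube.shape
        X = cube.reshape(-1, bands).astype(np.float64)
        mean = X.mean(axis=0)
        Xc = X - mean
        cov = Xc.T @ Xc / (Xc.shape[0] - 1)
        eigvals, eigvecs = np.linalg.eigh(cov)           # ascending eigenvalue order
        order = np.argsort(eigvals)[::-1][:n_components]
        eigvecs = eigvecs[:, order]                      # spectral eigenvectors (kept at float precision)
        scores = Xc @ eigvecs                            # one eigenimage per component
        eigenimages = scores.reshape(rows, cols, n_components)
        # quantize each eigenimage to the original 8-bit depth
        lo = eigenimages.min(axis=(0, 1), keepdims=True)
        hi = eigenimages.max(axis=(0, 1), keepdims=True)
        quantized = np.round(255 * (eigenimages - lo) / (hi - lo + 1e-12)).astype(np.uint8)
        return quantized, eigvecs.astype(np.float32), mean
    ```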

  4. Iterative color constancy with temporal filtering for an image sequence with no relative motion between the camera and the scene.

    PubMed

    Simão, Josemar; Jörg Andreas Schneebeli, Hans; Vassallo, Raquel Frizera

    2015-11-01

    Color constancy is the ability to perceive the color of a surface as invariant even under changing illumination. In outdoor applications, such as mobile robot navigation or surveillance, the lack of this ability harms the segmentation, tracking, and object recognition tasks. The main approaches for color constancy are generally targeted to static images and intend to estimate the scene illuminant color from the images. We present an iterative color constancy method with temporal filtering applied to image sequences in which reference colors are estimated from previous corrected images. Furthermore, two strategies to sample colors from the images are tested. The proposed method has been tested using image sequences with no relative movement between the scene and the camera. It also has been compared with known color constancy algorithms such as gray-world, max-RGB, and gray-edge. In most cases, the iterative color constancy method achieved better results than the other approaches. PMID:26560917
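
    For reference, a minimal sketch of the gray-world baseline mentioned above: each channel is scaled so that the image mean becomes neutral. The iterative, temporally filtered method of the paper would instead estimate reference colors from previously corrected frames; that part is not shown.

    ```python
    import numpy as np

    def gray_world(image):
        """Apply gray-world color constancy to an 8-bit RGB image."""
        img = image.astype(np.float64)
        channel_means = img.reshape(-1, 3).mean(axis=0)
        gains = channel_means.mean() / channel_means   # per-channel correction gains
        corrected = np.clip(img * gains, 0, 255)
        return corrected.astype(np.uint8)
    ```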

  5. Analyzing visual enjoyment of color: using female nude digital Image as example

    NASA Astrophysics Data System (ADS)

    Chin, Sin-Ho

    2014-04-01

    This research adopts three primary colors and their three mixed colors as the main hue variations by changing the background of a female nude digital image. The saturation levels are taken from the PCCS scale, with 9S as high saturation and 3S as low saturation, and the tone (brightness) levels are set at 3.5 for low brightness, 5.5 for medium brightness (primary color), and 7.5 for high brightness. A water-color brush stroke is applied to two female body digital images, one visually pleasant with an elegant posture and one unpleasant with stiff body language, to add visual intimacy. Results show that brightness is the main factor affecting visual enjoyment, followed by saturation. Explicitly, high brightness with high saturation gains the highest enjoyment rating, high saturation with medium brightness (primary color) is second, high brightness with low saturation is third, and low brightness with low saturation is rated the least enjoyable.

  6. Improving the image discontinuous problem by using color temperature mapping method

    NASA Astrophysics Data System (ADS)

    Jeng, Wei-De; Mang, Ou-Yang; Lai, Chien-Cheng; Wu, Hsien-Ming

    2011-09-01

    This article focuses on image processing for the radial imaging capsule endoscope (RICE). RICE was first used to capture images of porcine intestine obtained in an animal experiment, but the captured images were blurred because RICE suffers from aberration in the image center and from low illumination uniformity, both of which degrade image quality. Image processing can be used to address these problems: images captured at different times are connected using the Pearson correlation coefficient, and a color temperature mapping method is used to reduce the discontinuity in the connection regions.

  7. Reconstruction of color images via Haar wavelet based on digital micromirror device

    NASA Astrophysics Data System (ADS)

    Liu, Xingjiong; He, Weiji; Gu, Guohua

    2015-10-01

    A digital micromirror device (DMD) is introduced to form a Haar wavelet basis, which is projected onto the color target image by means of structured illumination with red, green and blue light. The light intensity signals reflected from the target image are received synchronously by a bucket detector with no spatial resolution, converted into voltage signals and then transferred to a PC [1]. To achieve synchronization, several synchronization steps are added during data acquisition. During data collection, following the wavelet tree structure, the locations of significant coefficients at the finer scale are predicted by comparing the coefficients sampled at the coarsest scale with a threshold. Monochrome grayscale images are obtained under red, green and blue structured illumination using the inverse Haar wavelet transform. A color fusion algorithm is then applied to the three monochrome grayscale images to obtain the final color image. Following this imaging principle, an experimental demonstration device was assembled. The letter "K" and the X-rite ColorChecker Passport were projected and reconstructed as target images, and the final reconstructed color images are of good quality. The Haar wavelet reconstruction used here reduces the sampling rate considerably and provides color information without compromising the resolution of the final image.

  8. Development of an image capturing system for the reproduction of high-fidelity color

    NASA Astrophysics Data System (ADS)

    Ejaz, Tahseen; Shoichi, Yokoi; Horiuchi, Tomohiro; Yokota, Tetsuya; Takaya, Masanori; Ohashi, Gosuke; Shimodaira, Yoshifumi

    2005-01-01

    An image capturing system for the reproduction of high-fidelity color was developed, and a set of three optical filters was designed for this purpose. Simulation was performed on the SOCS database, which contains spectral reflectance data of various objects in the wavelength range 400 nm to 700 nm, in order to calculate the CIELAB color difference ΔEab; the average color difference was found to be 1.049. The camera was fitted with the filters and color photographs of all 24 color patches of the Macbeth chart were taken. The measured tristimulus values of the patches were compared with those of the digital images captured by the camera, and the average ΔEab was found to be 5.916.

  9. Development of an image capturing system for the reproduction of high-fidelity color

    NASA Astrophysics Data System (ADS)

    Ejaz, Tahseen; Shoichi, Yokoi; Horiuchi, Tomohiro; Yokota, Tetsuya; Takaya, Masanori; Ohashi, Gosuke; Shimodaira, Yoshifumi

    2004-12-01

    An image capturing system for the reproduction of high-fidelity color was developed, and a set of three optical filters was designed for this purpose. Simulation was performed on the SOCS database, which contains spectral reflectance data of various objects in the wavelength range 400 nm to 700 nm, in order to calculate the CIELAB color difference ΔEab; the average color difference was found to be 1.049. The camera was fitted with the filters and color photographs of all 24 color patches of the Macbeth chart were taken. The measured tristimulus values of the patches were compared with those of the digital images captured by the camera, and the average ΔEab was found to be 5.916.

  10. Color filter array patterns for small-pixel image sensors with substantial cross talk.

    PubMed

    Anzagira, Leo; Fossum, Eric R

    2015-01-01

    Digital image sensor outputs usually must be transformed to suit the human visual system. This color correction amplifies noise, thus reducing the signal-to-noise ratio (SNR) of the image. In subdiffraction-limit (SDL) pixels, where optical and carrier cross talk can be substantial, this problem can become significant when conventional color filter arrays (CFAs) such as the Bayer patterns (RGB and CMY) are used. We present the design and analysis of new color filter array patterns for improving the color error and SNR deterioration caused by cross talk in these SDL pixels. We demonstrate an improvement in the color reproduction accuracy and SNR in high cross-talk conditions. Finally, we investigate the trade-off between color accuracy and SNR for the different CFA patterns. PMID:26366487

  11. Color images of Kansas subsurface geology from well logs

    USGS Publications Warehouse

    Collins, D.R.; Doveton, J.H.

    1986-01-01

    Modern wireline log combinations give highly diagnostic information that goes beyond the basic shale content, pore volume, and fluid saturation of older logs. Pattern recognition of geology from logs is made conventionally through either the examination of log overlays or log crossplots. Both methods can be combined through the use of color as a medium of information by setting the three color primaries of blue, green, and red light as axes of three dimensional color space. Multiple log readings of zones are rendered as composite color mixtures which, when plotted sequentially with depth, show lithological successions in a striking manner. The method is extremely simple to program and display on a color monitor. Illustrative examples are described from the Kansas subsurface. © 1986.
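
    A minimal sketch of the idea, with hypothetical log names: three normalized well-log curves are assigned to the blue, green and red primaries, and each depth sample becomes one row of a color strip plotted against depth.

    ```python
    import numpy as np

    def logs_to_color_strip(gamma_ray, neutron_porosity, density, width=64):
        """Map three well-log curves to R, G, B and return a depth x width x 3 color strip."""
        def normalize(curve):
            c = np.asarray(curve, dtype=np.float64)
            return (c - c.min()) / (c.max() - c.min() + 1e-12)

        rgb = np.stack([normalize(density),            # red axis
                        normalize(neutron_porosity),   # green axis
                        normalize(gamma_ray)], axis=1) # blue axis
        strip = (255 * rgb).astype(np.uint8)
        return np.repeat(strip[:, None, :], width, axis=1)
    ```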

  12. Rapid production of structural color images with optical data storage capabilities

    NASA Astrophysics Data System (ADS)

    Rezaei, Mohamad; Jiang, Hao; Qarehbaghi, Reza; Naghshineh, Mohammad; Kaminska, Bozena

    2015-03-01

    In this paper, we present novel methods to produce structural color image for any given color picture using a pixelated generic stamp named nanosubstrate. The nanosubstrate is composed of prefabricated arrays of red, green and blue subpixels. Each subpixel has nano-gratings and/or sub-wavelength structures which give structural colors through light diffraction. Micro-patterning techniques were implemented to produce the color images from the nanosubstrate by selective activation of subpixels. The nano-grating structures can be nanohole arrays, which after replication are converted to nanopillar arrays or vice versa. It has been demonstrated that visible and invisible data can be easily stored using these fabrication methods and the information can be easily read. Therefore the techniques can be employed to produce personalized and customized color images for applications in optical document security and publicity, and can also be complemented by combined optical data storage capabilities.

  13. Modeling human performance with low light sparse color imagers

    NASA Astrophysics Data System (ADS)

    Haefner, David P.; Reynolds, Joseph P.; Cha, Jae; Hodgkin, Van

    2011-05-01

    Reflective band sensors are often signal to noise limited in low light conditions. Any additional filtering to obtain spectral information further reduces the signal to noise, greatly affecting range performance. Modern sensors, such as the sparse color filter CCD, circumvent this additional degradation through reducing the number of pixels affected by filters and distributing the color information. As color sensors become more prevalent in the warfighter arsenal, the performance of the sensor-soldier system must be quantified. While field performance testing ultimately validates the success of a sensor, accurately modeling sensor performance greatly reduces the development time and cost, allowing the best technology to reach the soldier the fastest. Modeling of sensors requires accounting for how the signal is affected through the modulation transfer function (MTF) and noise of the system. For the modeling of these new sensors, the MTF and noise for each color band must be characterized, and the appropriate sampling and blur must be applied. We show how sparse array color filter sensors may be modeled and how a soldier's performance with such a sensor may be predicted. This general approach to modeling color sensors can be extended to incorporate all types of low light color sensors.

  14. True color blood flow imaging using a high-speed laser photography system

    NASA Astrophysics Data System (ADS)

    Liu, Chien-Sheng; Lin, Cheng-Hsien; Sun, Yung-Nien; Ho, Chung-Liang; Hsu, Chung-Chi

    2012-10-01

    Physiological changes in the retinal vasculature are commonly indicative of such disorders as diabetic retinopathy, glaucoma, and age-related macular degeneration. Thus, various methods have been developed for noninvasive clinical evaluation of ocular hemodynamics. However, to the best of our knowledge, current ophthalmic instruments do not provide a true color blood flow imaging capability. Accordingly, we propose a new method for the true color imaging of blood flow using a high-speed pulsed laser photography system. In the proposed approach, monochromatic images of the blood flow are acquired using a system of three cameras and three color lasers (red, green, and blue). A high-quality true color image of the blood flow is obtained by assembling the monochromatic images by means of image realignment and color calibration processes. The effectiveness of the proposed approach is demonstrated by imaging the flow of mouse blood within a microfluidic channel device. The experimental results confirm the proposed system provides a high-quality true color blood flow imaging capability, and therefore has potential for noninvasive clinical evaluation of ocular hemodynamics.

  15. Use of ultrasound, color Doppler imaging and radiography to monitor periapical healing after endodontic surgery.

    PubMed

    Tikku, Aseem P; Kumar, Sunil; Loomba, Kapil; Chandra, Anil; Verma, Promila; Aggarwal, Renu

    2010-09-01

    This study evaluated the effectiveness of ultrasound, color Doppler imaging and conventional radiography in monitoring the post-surgical healing of periapical lesions of endodontic origin. Fifteen patients who underwent periapical surgery for endodontic pathology were randomly selected. In all patients, periapical lesions were evaluated preoperatively using ultrasound, color Doppler imaging and conventional radiography, to analyze characteristics such as size, shape and dimensions. On radiographic evaluation, dimensions were measured in the superoinferior and mesiodistal direction using image-analysis software. Ultrasound evaluation was used to measure the changes in shape and dimensions on the anteroposterior, superoinferior, and mesiodistal planes. Color Doppler imaging was used to detect the blood-flow velocity. Postoperative healing was monitored in all patients at 1 week and 6 months by using ultrasound and color Doppler imaging, together with conventional radiography. The findings were then analyzed to evaluate the effectiveness of the 3 imaging techniques. At 6 months, ultrasound and color Doppler imaging were significantly better than conventional radiography in detecting changes in the healing of hard tissue at the surgical site (P < 0.004). This study demonstrates that ultrasound and color Doppler imaging have the potential to supplement conventional radiography in monitoring the post-surgical healing of periapical lesions of endodontic origin. PMID:20881334

  16. A color image quality assessment using a reduced-reference image machine learning expert

    NASA Astrophysics Data System (ADS)

    Charrier, Christophe; Lebrun, Gilles; Lezoray, Olivier

    2008-01-01

    A quality metric based on a classification process is introduced. The main idea of the proposed method is to avoid the error pooling step over many factors (in the frequency and spatial domains) commonly applied to obtain a final quality score. Instead, a classification process assigns each image a final quality class with respect to the standard quality scale provided by the ITU. Thus, for each degraded color image, a feature vector is computed that includes several human visual system characteristics, such as the contrast masking effect and color correlation. The selected features are of two kinds: 1) full-reference features and 2) no-reference characteristics. In this way, a machine learning expert providing a final class number is designed.

  17. Numerical Demultiplexing of Color Image Sensor Measurements via Non-linear Random Forest Modeling

    PubMed Central

    Deglint, Jason; Kazemzadeh, Farnoud; Cho, Daniel; Clausi, David A.; Wong, Alexander

    2016-01-01

    The simultaneous capture of imaging data at multiple wavelengths across the electromagnetic spectrum is highly challenging, requiring complex and costly multispectral image devices. In this study, we investigate the feasibility of simultaneous multispectral imaging using conventional image sensors with color filter arrays via a novel comprehensive framework for numerical demultiplexing of the color image sensor measurements. A numerical forward model characterizing the formation of sensor measurements from light spectra hitting the sensor is constructed based on a comprehensive spectral characterization of the sensor. A numerical demultiplexer is then learned via non-linear random forest modeling based on the forward model. Given the learned numerical demultiplexer, one can then demultiplex simultaneously-acquired measurements made by the color image sensor into reflectance intensities at discrete selectable wavelengths, resulting in a higher resolution reflectance spectrum. Experimental results demonstrate the feasibility of such a method for the purpose of simultaneous multispectral imaging. PMID:27346434
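
    A hedged sketch of the learning step, assuming scikit-learn and the availability of training pairs (RGB sensor triplets and reference reflectance spectra, e.g. generated from a forward model): a random forest regressor learns the non-linear demultiplexing mapping and is then applied per pixel.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def train_demultiplexer(rgb_samples, reflectance_samples, n_trees=200):
        # rgb_samples: (N, 3) sensor measurements; reflectance_samples: (N, n_wavelengths)
        model = RandomForestRegressor(n_estimators=n_trees, random_state=0)
        model.fit(rgb_samples, reflectance_samples)
        return model

    def demultiplex_image(model, rgb_image):
        """Predict a per-pixel reflectance spectrum from a color image."""
        h, w, _ = rgb_image.shape
        spectra = model.predict(rgb_image.reshape(-1, 3).astype(np.float64))
        return spectra.reshape(h, w, -1)
    ```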

  18. Joint demosaicking and integer-ratio downsampling algorithm for color filter array image

    NASA Astrophysics Data System (ADS)

    Lee, Sangyoon; Kang, Moon Gi

    2015-03-01

    This paper presents a joint demosaicking and integer-ratio downsampling algorithm for color filter array (CFA) images. Color demosaicking is a necessary part of image signal processing to obtain a full-color image in single-sensor digital imaging systems. In addition, on devices such as mobile phones, the image obtained from the sensor has to be downsampled for display because the display resolution is smaller than that of the image. The conventional approach is to demosaick first and downsample later, but this procedure requires significant hardware resources and computational cost. In this paper, we propose a method in which demosaicking and downsampling work simultaneously: the Bayer CFA image is analyzed in the frequency domain, and joint demosaicking and downsampling with an integer-ratio scheme is performed based on the decomposition of the signal into luma and chrominance components. Experimental results show that the proposed method achieves high quality with much lower computational cost and fewer hardware resources.
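
    A hedged illustration of the simplest possible joint scheme, not the paper's frequency-domain method: each 2x2 Bayer quad (RGGB layout and even image dimensions assumed) is collapsed directly into one RGB pixel, so demosaicking and 2:1 downsampling happen in a single pass over the CFA data.

    ```python
    import numpy as np

    def bayer_rggb_to_half_res_rgb(cfa):
        """Collapse an RGGB Bayer mosaic into a half-resolution RGB image."""
        r  = cfa[0::2, 0::2].astype(np.float64)
        g1 = cfa[0::2, 1::2].astype(np.float64)
        g2 = cfa[1::2, 0::2].astype(np.float64)
        b  = cfa[1::2, 1::2].astype(np.float64)
        rgb = np.stack([r, (g1 + g2) / 2.0, b], axis=2)   # average the two greens
        return np.clip(rgb, 0, 255).astype(np.uint8)
    ```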

  19. Numerical Demultiplexing of Color Image Sensor Measurements via Non-linear Random Forest Modeling.

    PubMed

    Deglint, Jason; Kazemzadeh, Farnoud; Cho, Daniel; Clausi, David A; Wong, Alexander

    2016-01-01

    The simultaneous capture of imaging data at multiple wavelengths across the electromagnetic spectrum is highly challenging, requiring complex and costly multispectral image devices. In this study, we investigate the feasibility of simultaneous multispectral imaging using conventional image sensors with color filter arrays via a novel comprehensive framework for numerical demultiplexing of the color image sensor measurements. A numerical forward model characterizing the formation of sensor measurements from light spectra hitting the sensor is constructed based on a comprehensive spectral characterization of the sensor. A numerical demultiplexer is then learned via non-linear random forest modeling based on the forward model. Given the learned numerical demultiplexer, one can then demultiplex simultaneously-acquired measurements made by the color image sensor into reflectance intensities at discrete selectable wavelengths, resulting in a higher resolution reflectance spectrum. Experimental results demonstrate the feasibility of such a method for the purpose of simultaneous multispectral imaging. PMID:27346434

  20. Note: In vivo pH imaging system using luminescent indicator and color camera

    NASA Astrophysics Data System (ADS)

    Sakaue, Hirotaka; Dan, Risako; Shimizu, Megumi; Kazama, Haruko

    2012-07-01

    A microscopic in vivo pH imaging system is developed that can capture both luminescent and color images. The former gives a quantitative measurement of the pH distribution in vivo; the latter captures structural information that can be overlaid on the pH distribution to correlate the structure of a specimen with its pH distribution. Using a digital color camera, a luminescent image as well as a color image is obtained. The system uses HPTS (8-hydroxypyrene-1,3,6-trisulfonate) as the luminescent pH indicator. Filter units mounted in the microscope extract two luminescent images for the excitation-ratio method, and the ratio of the two images is converted to a pH distribution through an a priori pH calibration. An application of the system to epidermal cells of Lactuca sativa L. is shown.
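
    A minimal sketch of the excitation-ratio step, assuming two registered luminescent images (one per excitation filter) and an a priori calibration curve; the linear polynomial used here is only a placeholder for that calibration.

    ```python
    import numpy as np

    def ratio_to_ph(image_ex1, image_ex2, calibration=np.poly1d([2.0, 5.5])):
        """Convert a pair of excitation images into a per-pixel pH map via a calibration curve."""
        ratio = image_ex1.astype(np.float64) / np.maximum(image_ex2.astype(np.float64), 1e-6)
        return calibration(ratio)   # placeholder calibration: pH = 2.0 * ratio + 5.5
    ```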

  1. Floating full-color image with computer-generated alcove rainbow hologram

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Takeshi; Yoshikawa, Hiroshi

    2014-02-01

    We have investigated a floating full-color image display based on a computer-generated hologram (CGH). A floating image, when used as a 3D display, makes a strong impression on the viewer. In our previous study, by changing the CGH shape from a flat type to a half-cylindrical type, the floating image from the output CGH achieved a viewing angle of nearly 180 degrees. However, since that CGH had no wavelength selectivity, the reconstructed image had only a single color, and the huge amount of computation needed for the fringe pattern was a major problem. We therefore now propose a rainbow-type computer-generated alcove hologram. To reduce the amount of computation, the rainbow hologram sacrifices vertical parallax; it can also reconstruct an image with white light. Compared with the Fresnel-type hologram of our previous study, the calculation becomes 165 times faster. After calculation, we print the hologram with a fringe printer and evaluate the reconstructed floating full-color images. In this study, we introduce the computer-generated rainbow hologram into the floating image display. The rainbow hologram can reconstruct a full-color image under white light illumination and can be recorded by using a horizontal slit to limit the vertical parallax. By changing this slit into a half-cylindrical slit, the wide-viewing-angle floating image display can reconstruct a full-color image.

  2. Empirical comparison of color normalization methods for epithelial-stromal classification in H and E images

    PubMed Central

    Sethi, Amit; Sha, Lingdao; Vahadane, Abhishek Ramnath; Deaton, Ryan J.; Kumar, Neeraj; Macias, Virgilia; Gann, Peter H.

    2016-01-01

    Context: Color normalization techniques for histology have not been empirically tested for their utility for computational pathology pipelines. Aims: We compared two contemporary techniques for achieving a common intermediate goal – epithelial-stromal classification. Settings and Design: Expert-annotated regions of epithelium and stroma were treated as ground truth for comparing classifiers on original and color-normalized images. Materials and Methods: Epithelial and stromal regions were annotated on thirty diverse-appearing H and E stained prostate cancer tissue microarray cores. Corresponding sets of thirty images each were generated using the two color normalization techniques. Color metrics were compared for original and color-normalized images. Separate epithelial-stromal classifiers were trained and compared on test images. Main analyses were conducted using a multiresolution segmentation (MRS) approach; comparative analyses using two other classification approaches (convolutional neural network [CNN], Wndchrm) were also performed. Statistical Analysis: For the main MRS method, which relied on classification of super-pixels, the number of variables used was reduced using backward elimination without compromising accuracy, and test - area under the curves (AUCs) were compared for original and normalized images. For CNN and Wndchrm, pixel classification test-AUCs were compared. Results: Khan method reduced color saturation while Vahadane reduced hue variance. Super-pixel-level test-AUC for MRS was 0.010–0.025 (95% confidence interval limits ± 0.004) higher for the two normalized image sets compared to the original in the 10–80 variable range. Improvement in pixel classification accuracy was also observed for CNN and Wndchrm for color-normalized images. Conclusions: Color normalization can give a small incremental benefit when a super-pixel-based classification method is used with features that perform implicit color normalization while the gain is

  3. Wide-field computational color imaging using pixel super-resolved on-chip microscopy

    PubMed Central

    Greenbaum, Alon; Feizi, Alborz; Akbari, Najva; Ozcan, Aydogan

    2013-01-01

    Lens-free holographic on-chip imaging is an emerging approach that offers both wide field-of-view (FOV) and high spatial resolution in a cost-effective and compact design using source shifting based pixel super-resolution. However, color imaging has remained relatively immature for lens-free on-chip imaging, since a ‘rainbow’ like color artifact appears in reconstructed holographic images. To provide a solution for pixel super-resolved color imaging on a chip, here we introduce and compare the performances of two computational methods based on (1) YUV color space averaging, and (2) Dijkstra’s shortest path, both of which eliminate color artifacts in reconstructed images, without compromising the spatial resolution or the wide FOV of lens-free on-chip microscopes. To demonstrate the potential of this lens-free color microscope we imaged stained Papanicolaou (Pap) smears over a wide FOV of ~14 mm2 with sub-micron spatial resolution. PMID:23736466

  4. HST Imaging of the Globular Clusters in the Fornax Cluster: Color and Luminosity Distributions

    NASA Technical Reports Server (NTRS)

    Grillmair, C. J.; Forbes, D. A.; Brodie, J.; Elson, R.

    1998-01-01

    We examine the luminosity and B - I color distribution of globular clusters for three early-type galaxies in the Fornax cluster using imaging data from the Wide Field/Planetary Camera 2 on the Hubble Space Telescope.

  5. A novel color image compression algorithm using the human visual contrast sensitivity characteristics

    NASA Astrophysics Data System (ADS)

    Yao, Juncai; Liu, Guizhong

    2016-07-01

    In order to achieve a higher image compression ratio and improve visual perception of the decompressed image, a novel color image compression scheme based on the contrast sensitivity characteristics of the human visual system (HVS) is proposed. In the proposed scheme, the image is first converted into the YCrCb color space and divided into sub-blocks. The discrete cosine transform is then applied to each sub-block, and three quantization matrices are built to quantize the frequency spectrum coefficients of the images by incorporating the contrast sensitivity characteristics of the HVS. The Huffman algorithm is used to encode the quantized data. The inverse process involves decompression and matching to reconstruct the decompressed color image. Simulations were carried out for two color images. The results show that, at approximately the same compression ratio, the average structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) could be increased by 2.78% and 5.48%, respectively, compared with JPEG compression. This indicates that the proposed compression algorithm is feasible and effective in achieving a higher compression ratio while preserving encoding and image quality, and can fully meet the needs of storage and transmission of color images in daily life.
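
    A hedged sketch of the block-DCT quantization step, assuming SciPy is available; the flat quantization matrix below is a placeholder, not one of the paper's three HVS-derived matrices, and the Huffman coding stage is omitted.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def quantize_block(block, q_matrix):
        """Forward DCT of one 8x8 block followed by quantization."""
        coeffs = dctn(block.astype(np.float64) - 128, norm='ortho')
        return np.round(coeffs / q_matrix)

    def dequantize_block(quantized, q_matrix):
        """Dequantize and inverse-DCT back to 8-bit pixel values."""
        coeffs = quantized * q_matrix
        return np.clip(idctn(coeffs, norm='ortho') + 128, 0, 255).astype(np.uint8)

    # example round trip on one block with a flat placeholder quantization matrix
    q = np.full((8, 8), 16.0)
    block = np.random.randint(0, 256, (8, 8))
    restored = dequantize_block(quantize_block(block, q), q)
    ```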

  6. False color image of a portion of the Hammersley Mountains in Australia

    NASA Technical Reports Server (NTRS)

    1981-01-01

    False color image of a portion of the Hammersley Mountains in Western Australia was processed from data acquired by JPL's Shuttle Imaging Radar-A (SIR-A) when it flew aboard STS-2. Color processing of SIR-A data is used to separate variations in topography. Red areas represent very rough mountain terrain; pink is less rugged; yellow is textured; green is desert like territory, and blue represents smooth areas, like a dry lakebed. Finer details appear as thin lines.

  7. MRO Mars Color Imager (MARCI) Investigation Primary Mission Results

    NASA Astrophysics Data System (ADS)

    Edgett, K. S.; Cantor, B. A.; Malin, M. C.; Science; Operations Teams, M.

    2008-12-01

    The Mars Reconnaissance Orbiter (MRO) Mars Color Imager (MARCI) investigation was designed to recover the wide angle camera science objectives of the Mars Climate Orbiter MARCI which was destroyed upon arrival at Mars in 1999 and extend the daily meteorological coverage of the Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) wide angle investigation that was systematically conducted from March 1999 to October 2006. MARCI consists of two wide angle cameras, each with a 180° field of view. The first acquires data in 5 visible wavelength channels (420, 550, 600, 650, 720 nm), the second in 2 UV channels (260, 320 nm). Data have been acquired daily, except during spacecraft upsets, since 24 September 2006. From the MRO 250 to 315 km altitude orbit, inclined 93 degrees, visible wavelength images usually have a pixel scale of about 1 km at nadir and the UV data are at about 8 km per pixel. Data are obtained during every orbit on the day side of the planet from terminator to terminator. These provide a nearly continuous record of meteorological events and changes in surface frost and albedo patterns that span more than 1 martian year and extend the daily global record of such events documented by the MGS MOC. For a few weeks in September and October 2006, both camera systems operated simultaneously, providing views of weather events at about 1400 local time (MOC) and an hour later at about 1500 (MARCI). The continuous meteorological record, now spanning more than 5 Mars years, shows very repeatable weather from year to year with cloud and dust-raising events occurring in the same regions within about 2 weeks of their prior occurrence in previous years. This provides a measure of predictability ideal for assessing future landing sites, orbiter aerobraking plans, and conditions to be encountered by the current landed spacecraft on Mars. However, less predictable are planet-encircling dust events. MOC observed one in 2001, the next was observed by MARCI in 2007. These

  8. SWT voting-based color reduction for text detection in natural scene images

    NASA Astrophysics Data System (ADS)

    Ikica, Andrej; Peer, Peter

    2013-12-01

    In this article, we propose a novel stroke width transform (SWT) voting-based color reduction method for detecting text in natural scene images. Unlike other text detection approaches that mostly rely on either text structure or color, the proposed method combines both by supervising text-oriented color reduction process with additional SWT information. SWT pixels mapped to color space vote in favor of the color they correspond to. Colors receiving high SWT vote most likely belong to text areas and are blocked from being mean-shifted away. Literature does not explicitly address SWT search direction issue; thus, we propose an adaptive sub-block method for determining correct SWT direction. Both SWT voting-based color reduction and SWT direction determination methods are evaluated on binary (text/non-text) images obtained from a challenging Computer Vision Lab optical character recognition database. SWT voting-based color reduction method outperforms the state-of-the-art text-oriented color reduction approach.

  9. Color enhancement of highly correlated images. I - Decorrelation and HSI contrast stretches. [hue saturation intensity

    NASA Technical Reports Server (NTRS)

    Gillespie, Alan R.; Kahle, Anne B.; Walker, Richard E.

    1986-01-01

    Conventional enhancements for the color display of multispectral images are based on independent contrast modifications or 'stretches' of three input images. This approach is not effective if the image channels are highly correlated or if the image histograms are strongly bimodal or more complex. Any of several procedures that tend to 'stretch' color saturation while leaving hue unchanged may better utilize the full range of colors for the display of image information. Two conceptually different enhancements are discussed: the 'decorrelation stretch', based on principal-component (PC) analysis, and the 'stretch' of 'hue' - 'saturation' - intensity (HSI) transformed data. The PC transformation is scene-dependent, but the HSI transformation is invariant. Examples of images enhanced by conventional linear stretches, decorrelation stretch, and by stretches of HSI transformed data are compared. Schematic variation diagrams or two- and three-dimensional histograms are used to illustrate the 'decorrelation stretch' method and the effect of the different enhancements.
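
    A minimal sketch of a decorrelation stretch on a three-band image: rotate the data into principal components, equalize the component variances, and rotate back, which stretches saturation while largely preserving hue. The target standard deviation is an arbitrary display parameter.

    ```python
    import numpy as np

    def decorrelation_stretch(image, target_sigma=50.0):
        """Apply a PCA-based decorrelation stretch to a (h, w, 3) image with 0-255 values."""
        h, w, bands = image.shape
        X = image.reshape(-1, bands).astype(np.float64)
        mean = X.mean(axis=0)
        cov = np.cov(X - mean, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)
        # scale each principal component to a common standard deviation
        scale = target_sigma / np.sqrt(np.maximum(eigvals, 1e-12))
        transform = eigvecs @ np.diag(scale) @ eigvecs.T
        stretched = (X - mean) @ transform + mean
        return np.clip(stretched, 0, 255).reshape(h, w, bands).astype(np.uint8)
    ```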

  10. Color filters including infrared cut-off integrated on CMOS image sensor.

    PubMed

    Frey, Laurent; Parrein, Pascale; Raby, Jacques; Pellé, Catherine; Hérault, Didier; Marty, Michel; Michailos, Jean

    2011-07-01

    A color image was taken with a CMOS image sensor without any infrared cut-off filter, using red, green and blue metal/dielectric filters arranged in Bayer pattern with 1.75 µm pixel pitch. The three colors were obtained by a thickness variation of only two layers in the 7-layer stack, with a technological process including four photolithography levels. The thickness of the filter stack was only half of the traditional color resists, potentially enabling a reduction of optical crosstalk for smaller pixels. Both color errors and signal to noise ratio derived from optimized spectral responses are expected to be similar to color resists associated with infrared filter. PMID:21747459

  11. Spatial distribution of jovian clouds, hazes and colors from Cassini ISS multi-spectral images

    NASA Astrophysics Data System (ADS)

    Ordonez-Etxeberria, I.; Hueso, R.; Sánchez-Lavega, A.; Pérez-Hoyos, S.

    2016-03-01

    The Cassini spacecraft made a gravity assist flyby of Jupiter in December 2000. The Imaging Science Subsystem (ISS) acquired images of the planet that covered the visual range with filters sensitive to the distribution of clouds and hazes, their altitudes and color. We use a selection of these images to build high-resolution cylindrical maps of the planet in 9 wavelengths. We explore the spatial distribution of the planet reflectivity examining the distribution of color and altitudes of hazes as well as their relation. A variety of analyses is presented: (a) Principal Component Analysis (PCA); (b) color-altitude indices; and (c) chromaticity diagrams (for a quantitative characterization of Jupiter "true" colors as they would be perceived by a human observer). PCA of the full dataset indicates that six components are required to explain the data. These components are likely related to the distribution of cloud opacity at the main cloud, the distribution of two types of hazes, two chromophores or coloring processes and the distribution of convective storms. While the distribution of a single chromophore can explain most of the color variations in the atmosphere, a second coloring agent is required to explain the brownish cyclones in the North Equatorial Belt (NEB). This second colorant could be caused by a different chromophore or by the same chromophore located in structures deeper in the atmosphere. Color indices separate different dynamical regions where cloud color and altitude are correlated from those where they are not. The Great Red Spot (GRS) appears as a well separated region in terms of its position in a global color-altitude scatter diagram and different families of vortices are examined, including the red cyclones which are located deeper in the atmosphere. Finally, a chromaticity diagram of Jupiter nearly true color images quantifies the color variations in Jupiter's clouds from the perspective of a visual observer and helps to quantify how different

  12. Dual-tree complex wavelet transform applied on color descriptors for remote-sensed images retrieval

    NASA Astrophysics Data System (ADS)

    Sebai, Houria; Kourgli, Assia; Serir, Amina

    2015-01-01

    This paper highlights color component features that improve high-resolution satellite (HRS) image retrieval. Color component correlation across image lines and columns is used to define a revised color space, designed to take both color and neighborhood information into account simultaneously. From this space, color descriptors, namely the rotation-invariant uniform local binary pattern, the histogram of gradients, and a modified version of local variance, are derived through the dual-tree complex wavelet transform (DT-CWT). A new color descriptor called smoothed local variance (SLV), which uses an edge-preserving smoothing filter, is introduced. It is intended to offer an efficient, rotation-invariant way to represent texture/structure information, and takes advantage of the DT-CWT representation to enhance the retrieval performance for HRS images. We report an evaluation of the SLV descriptor associated with the new color space using different similarity distances in our content-based image retrieval scheme, and we also compare it with some standard features. Experimental results show that the SLV descriptor allied to the DT-CWT representation outperforms the other approaches.

  13. Seed viability detection using computerized false-color radiographic image enhancement

    NASA Technical Reports Server (NTRS)

    Vozzo, J. A.; Marko, Michael

    1994-01-01

    Seed radiographs are divided into density zones which are related to seed germination. The seeds which germinate have densities relating to false-color red. In turn, a seed sorter may be designed which rejects those seeds not having sufficient red to activate a gate along a moving belt containing the seed source. This results in separating only seeds with the preselected densities representing biological viability leading to germination. These selected seeds command a higher market value. Actual false-coloring isn't required for a computer to distinguish the significant gray-zone range. This range can be predetermined and screened without the necessity of red imaging. Applying false-color enhancement is a means of emphasizing differences in densities of gray within any subject from photographic, radiographic, or video imaging. Within the 0-255 range of gray levels, colors can be assigned to any single level or group of gray levels. Densitometric values then become easily recognized colors which relate to the image density. Choosing a color to identify any given density allows separation by morphology or composition (form or function). Additionally, relative areas of each color are readily available for determining distribution of that density by comparison with other densities within the image.

  14. A new fringeline-tracking approach for color Doppler ultrasound imaging phase unwrapping

    NASA Astrophysics Data System (ADS)

    Saad, Ashraf A.; Shapiro, Linda G.

    2008-03-01

    Color Doppler ultrasound imaging is a powerful non-invasive diagnostic tool for many clinical applications that involve examining the anatomy and hemodynamics of human blood vessels. These clinical applications include cardio-vascular diseases, obstetrics, and abdominal diseases. Since its commercial introduction in the early eighties, color Doppler ultrasound imaging has been used mainly as a qualitative tool with very little attempts to quantify its images. Many imaging artifacts hinder the quantification of the color Doppler images, the most important of which is the aliasing artifact that distorts the blood flow velocities measured by the color Doppler technique. In this work we will address the color Doppler aliasing problem and present a recovery methodology for the true flow velocities from the aliased ones. The problem is formulated as a 2D phase-unwrapping problem, which is a well-defined problem with solid theoretical foundations for other imaging domains, including synthetic aperture radar and magnetic resonance imaging. This paper documents the need for a phase unwrapping algorithm for use in color Doppler ultrasound image analysis. It describes a new phase-unwrapping algorithm that relies on the recently developed cutline detection approaches. The algorithm is novel in its use of heuristic information provided by the ultrasound imaging modality to guide the phase unwrapping process. Experiments have been performed on both in-vitro flow-phantom data and in-vivo human blood flow data. Both data types were acquired under a controlled acquisition protocol developed to minimize the distortion of the color Doppler data and hence to simplify the phase-unwrapping task. In addition to the qualitative assessment of the results, a quantitative assessment approach was developed to measure the success of the results. The results of our new algorithm have been compared on ultrasound data to those from other well-known algorithms, and it outperforms all of them.

  15. Hyperspectral imaging using a color camera and its application for pathogen detection

    NASA Astrophysics Data System (ADS)

    Yoon, Seung-Chul; Shin, Tae-Sung; Heitschmidt, Gerald W.; Lawrence, Kurt C.; Park, Bosoon; Gamble, Gary

    2015-02-01

    This paper reports the results of a feasibility study for the development of a hyperspectral image recovery (reconstruction) technique using a RGB color camera and regression analysis in order to detect and classify colonies of foodborne pathogens. The target bacterial pathogens were the six representative non-O157 Shiga-toxin producing Escherichia coli (STEC) serogroups (O26, O45, O103, O111, O121, and O145) grown in Petri dishes of Rainbow agar. The purpose of the feasibility study was to evaluate whether a DSLR camera (Nikon D700) could be used to predict hyperspectral images in the wavelength range from 400 to 1,000 nm and even to predict the types of pathogens using a hyperspectral STEC classification algorithm that was previously developed. Unlike many other studies using color charts with known and noise-free spectra for training reconstruction models, this work used hyperspectral and color images, separately measured by a hyperspectral imaging spectrometer and the DSLR color camera. The color images were calibrated (i.e. normalized) to relative reflectance, subsampled and spatially registered to match with counterpart pixels in hyperspectral images that were also calibrated to relative reflectance. Polynomial multivariate least-squares regression (PMLR) was previously developed with simulated color images. In this study, partial least squares regression (PLSR) was also evaluated as a spectral recovery technique to minimize multicollinearity and overfitting. The two spectral recovery models (PMLR and PLSR) and their parameters were evaluated by cross-validation. The QR decomposition was used to find a numerically more stable solution of the regression equation. The preliminary results showed that PLSR was more effective especially with higher order polynomial regressions than PMLR. The best classification accuracy measured with an independent test set was about 90%. The results suggest the potential of cost-effective color imaging using hyperspectral image
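
    A hedged sketch of the PLSR recovery step, assuming scikit-learn and co-registered training pairs of calibrated RGB pixels and hyperspectral spectra, as described above; the component count is a placeholder and the pathogen classification stage is not shown.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    def train_spectral_recovery(rgb_pixels, spectra, n_components=3):
        # rgb_pixels: (N, 3) relative reflectance; spectra: (N, n_bands) reference spectra
        model = PLSRegression(n_components=n_components)
        model.fit(rgb_pixels, spectra)
        return model

    def recover_hyperspectral(model, rgb_image):
        """Predict a hyperspectral cube from a calibrated RGB image, pixel by pixel."""
        h, w, _ = rgb_image.shape
        pred = model.predict(rgb_image.reshape(-1, 3).astype(np.float64))
        return pred.reshape(h, w, -1)
    ```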

  16. Double color image encryption using iterative phase retrieval algorithm in quaternion gyrator domain.

    PubMed

    Shao, Zhuhong; Shu, Huazhong; Wu, Jiasong; Dong, Zhifang; Coatrieux, Gouenou; Coatrieux, Jean Louis

    2014-03-10

    This paper describes a novel algorithm to encrypt double color images into a single undistinguishable image in quaternion gyrator domain. By using an iterative phase retrieval algorithm, the phase masks used for encryption are obtained. Subsequently, the encrypted image is generated via cascaded quaternion gyrator transforms with different rotation angles. The parameters in quaternion gyrator transforms and phases serve as encryption keys. By knowing these keys, the original color images can be fully restituted. Numerical simulations have demonstrated the validity of the proposed encryption system as well as its robustness against loss of data and additive Gaussian noise. PMID:24663832

  17. Color-to-Grayscale: Does the Method Matter in Image Recognition?

    PubMed Central

    Kanan, Christopher; Cottrell, Garrison W.

    2012-01-01

    In image recognition it is often assumed the method used to convert color images to grayscale has little impact on recognition performance. We compare thirteen different grayscale algorithms with four types of image descriptors and demonstrate that this assumption is wrong: not all color-to-grayscale algorithms work equally well, even when using descriptors that are robust to changes in illumination. These methods are tested using a modern descriptor-based image recognition framework, on face, object, and texture datasets, with relatively few training instances. We identify a simple method that generally works best for face and object recognition, and two that work well for recognizing textures. PMID:22253768
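
    For reference, a few of the simplest conversion families compared in such studies can be sketched as follows; the exact thirteen algorithms evaluated in the paper are not reproduced here.

```python
# Sketch of three common color-to-grayscale conversions.
import numpy as np

def to_gray(rgb, method="luminance"):
    """rgb: float array in [0, 1] with shape (H, W, 3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    if method == "average":        # simple channel mean
        return (r + g + b) / 3.0
    if method == "luminance":      # Rec. 601 luma weights
        return 0.299 * r + 0.587 * g + 0.114 * b
    if method == "lightness":      # mean of the max and min channels
        return (np.maximum(np.maximum(r, g), b) + np.minimum(np.minimum(r, g), b)) / 2.0
    raise ValueError(method)

img = np.random.rand(4, 4, 3)
for m in ("average", "luminance", "lightness"):
    print(m, to_gray(img, m).mean())
```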

  18. Digital discrimination of neutrons and γ rays with organic scintillation detectors in an 8-bit sampling system using frequency gradient analysis

    NASA Astrophysics Data System (ADS)

    Yang, Jun; Luo, Xiao-Liang; Liu, Guo-Fu; Lin, Cun-Bao; Wang, Yan-Ling; Hu, Qing-Qing; Peng, Jin-Xian

    2012-06-01

    The feasibility of using frequency gradient analysis (FGA), a digital method based on the Fourier transform, to discriminate neutrons and γ rays in an 8-bit sampling system has been investigated. The performance of most pulse shape discrimination methods that rely on time-domain features of the photomultiplier tube anode signal in a scintillation detection system degrades or becomes ineffective in such a low-resolution sampling system. However, the FGA method, which uses frequency-domain features of the anode signal, is strongly insensitive to noise and can be used to discriminate neutrons and γ rays in this sampling system. A detailed study of the quality of the FGA method in BC501A liquid scintillators is presented using a 5 Gsamples/s 8-bit oscilloscope and a 14.1 MeV neutron generator. A comparison of the discrimination results with those of the time-of-flight and conventional charge comparison (CC) methods proves the applicability of this technique. Moreover, FGA has the potential to be implemented in current embedded electronics systems to provide real-time discrimination in standalone instruments.
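
    One common formulation of the FGA discrimination parameter is the gradient of the pulse's amplitude spectrum between the zero-frequency component and the first frequency bin. The sketch below uses synthetic double-exponential pulses rather than measured BC501A signals, and the exact parameter definition should be taken from the original FGA literature.

```python
# Sketch of a frequency-gradient discrimination parameter for digitized pulses.
import numpy as np

def fga_parameter(pulse, fs):
    """Magnitude-spectrum gradient between DC and the first frequency bin."""
    spec = np.abs(np.fft.rfft(pulse))
    df = fs / len(pulse)
    return (spec[0] - spec[1]) / df

fs = 5e9                                     # 5 Gsamples/s digitizer
t = np.arange(256) / fs
gamma_like = np.exp(-t / 5e-9) - np.exp(-t / 1e-9)                      # fast decay only
neutron_like = 0.7 * (np.exp(-t / 5e-9) - np.exp(-t / 1e-9)) + 0.3 * np.exp(-t / 50e-9)

print("gamma-like   FGA:", fga_parameter(gamma_like, fs))
print("neutron-like FGA:", fga_parameter(neutron_like, fs))
```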

  19. Quaternion higher-order spectra and their invariants for color image recognition

    NASA Astrophysics Data System (ADS)

    Jia, Xiaoning; Yang, Hang; Ma, Siliang; Song, Dongzhe

    2014-06-01

    This paper describes an invariants generation method for color images, which could be a useful tool in color object recognition tasks. First, by using the algebra of quaternions, we introduce the definition of quaternion higher-order spectra (QHOS) in the spatial domain and derive its equivalent form in the frequency domain. Then, QHOS invariants with respect to rotation, translation, and scaling transformations for color images are constructed using the central slice theorem and quaternion bispectral analysis. The feature data are further reduced to a smaller set using quaternion principal component analysis. The proposed method can deal with color images in a holistic manner, and the constructed QHOS invariants are highly immune to background noise. Experimental results show that the extracted QHOS invariants form compact and isolated clusters, and that a simple minimum distance classifier can yield high recognition accuracy.

  20. Unsupervised color image segmentation using graph cuts with multi-components

    NASA Astrophysics Data System (ADS)

    Li, Lei; Jin, Lianghai; Song, Enmin; Dong, Zhuoli

    2013-10-01

    A novel unsupervised color image segmentation method based on graph cuts with multi-components is proposed, which finds an optimal segmentation of an image by regarding segmentation as an energy minimization problem. First, the L*a*b* color space is chosen as the color feature, and a multi-scale quaternion Gabor filter is employed to extract texture features of the given image. Then, the segmentation is formulated in terms of energy minimization with an iterative process based on graph cuts, and the connected regions in each segment are considered as the components of that segment in each iteration. In addition, a Canny edge detector combined with a color gradient is used to remove weak edges from the segmentation results of the proposed algorithm. In contrast to previous algorithms, our method greatly reduces the computational complexity of the graph-cut inference procedure. Experimental results demonstrate the promising performance of the proposed method.

  1. Fresnel domain double-phase encoding encryption of color image via ptychography

    NASA Astrophysics Data System (ADS)

    Qiao, Liang; Wang, Yali; Li, Tuo; Shi, Yishi

    2015-10-01

    In this paper, color image encryption combined with ptychography is investigated. Ptychographic imaging possesses the remarkable advantage of a simple optical architecture, and the complex amplitude of an object can be reconstructed from a series of diffraction intensity patterns recorded as an aperture is moved. The traditional technique of three-primary-color synthesis is applied for encrypting the color image. In order to reduce physical limitations, the encryption algorithm operates in the Fresnel transform domain. It is shown that the proposed optical encryption scheme recovers the encrypted color plaintext well and offers enhanced security thanks to the introduction of ptychography, since the light probe acts as a key factor that enlarges the key space. Finally, the encryption's immunity to noise and the impact of lateral probe offset on reconstruction are investigated.

  2. PROCEDURES FOR ACCURATE PRODUCTION OF COLOR IMAGES FROM SATELLITE OR AIRCRAFT MULTISPECTRAL DIGITAL DATA.

    USGS Publications Warehouse

    Duval, Joseph S.

    1985-01-01

    Because the display and interpretation of satellite and aircraft remote-sensing data make extensive use of color film products, accurate reproduction of the color images is important. To achieve accurate color reproduction, the exposure and chemical processing of the film must be monitored and controlled. By using a combination of sensitometry, densitometry, and transfer functions that control film response curves, all of the different steps in the making of film images can be monitored and controlled. Because a sensitometer produces a calibrated exposure, the resulting step wedge can be used to monitor the chemical processing of the film. Step wedges put on film by image recording machines provide a means of monitoring the film exposure and color balance of the machines.

  3. Fusion framework for color image retrieval based on bag-of-words model and color local Haar binary patterns

    NASA Astrophysics Data System (ADS)

    Li, Li; Feng, Lin; Yu, Laihang; Wu, Jun; Liu, Shenglan

    2016-03-01

    Recently, global and local features have demonstrated excellent performance in image retrieval. However, there are some problems with both of them: (1) Local features particularly describe local textures or patterns. However, similar textures may confuse these local feature extraction methods and yield irrelevant retrieval results. (2) Global features delineate overall feature distributions in images, and the retrieved results often appear alike but may be irrelevant. To address the problems above, we propose a fusion framework that combines local and global features, and thus obtains higher retrieval precision for color image retrieval. Color local Haar binary patterns (CLHBP) and the bag-of-words (BoW) model of local features are exploited to capture global and local information of images. The proposed fusion framework combines the ranking results of BoW and CLHBP through a graph-based fusion method. The average retrieval precision of the proposed fusion framework is 83.6% on the Corel-1000 database, and its average precision is 9.9% and 6.4% higher than that of BoW and CLHBP, respectively. Extensive experiments on different databases validate the feasibility of the proposed framework.

  4. The effect of exposure on MaxRGB color constancy

    NASA Astrophysics Data System (ADS)

    Funt, Brian; Shi, Lilong

    2010-02-01

    The performance of the MaxRGB illumination-estimation method for color constancy and automatic white balancing has been reported in the literature as being mediocre at best; however, MaxRGB has usually been tested on images of only 8 bits per channel. The question arises as to whether the method itself is inadequate, or whether it has simply been tested on data of inadequate dynamic range. To address this question, a database of sets of exposure-bracketed images was created. The image sets include exposures ranging from very underexposed to slightly overexposed. The color of the scene illumination was determined by taking an extra image of the scene containing four GretagMacbeth mini ColorCheckers placed at an angle to one another. MaxRGB was then run on the images of increasing exposure. The results clearly show that its performance drops dramatically when the 14-bit exposure range of the Nikon D700 camera is exceeded, thereby resulting in clipping of high values. For those images exposed such that no clipping occurs, the median error in MaxRGB's estimate of the color of the scene illumination is found to be relatively small.
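
    The MaxRGB estimator itself is simple enough to sketch: the maximum response in each channel is taken as proportional to the illuminant color, which is only meaningful when no channel is clipped, the paper's central point about dynamic range. The clipping check and normalization below are illustrative choices.

```python
# Sketch of the MaxRGB illuminant estimate.
import numpy as np

def maxrgb_illuminant(img, clip_value=None):
    """img: float array (H, W, 3); returns a unit-norm illuminant estimate."""
    if clip_value is not None and (img >= clip_value).any():
        raise ValueError("clipped pixels present; estimate would be biased")
    est = img.reshape(-1, 3).max(axis=0)     # brightest response per channel
    return est / np.linalg.norm(est)

img = np.random.rand(64, 64, 3) * np.array([0.9, 0.7, 0.5])   # synthetic warm illuminant
print(maxrgb_illuminant(img))
```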

  5. Progressive transmission of pseudo-color images. Appendix 1: Item 4. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Hadenfeldt, Andrew C.

    1991-01-01

    The transmission of digital images can require considerable channel bandwidth. The cost of obtaining such a channel can be prohibitive, or the channel might simply not be available. In this case, progressive transmission (PT) can be useful. PT presents the user with a coarse initial image approximation and then proceeds to refine it. In this way, the user tends to receive information about the content of the image sooner than if a sequential transmission method is used. PT finds use in image database browsing, teleconferencing, medical imaging, and other applications. A PT scheme is developed for use with a particular type of image data, the pseudo-color or color-mapped image. Such images consist of a table of colors called a colormap, plus a 2-D array of index values which indicate which colormap entry is to be used to display a given pixel. This type of image presents some unique problems for a PT coder, and techniques for overcoming these problems are developed. A computer simulation of the color-mapped PT scheme is developed to evaluate its performance. Results of simulations using several test images are presented.

  6. Calibration View of Earth and the Moon by Mars Color Imager

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Three days after the Mars Reconnaissance Orbiter's Aug. 12, 2005, launch, the spacecraft was pointed toward Earth and the Mars Color Imager camera was powered up to acquire a suite of images of Earth and the Moon. When it gets to Mars, the Mars Color Imager's main objective will be to obtain daily global color and ultraviolet images of the planet to observe martian meteorology by documenting the occurrence of dust storms, clouds, and ozone. This camera will also observe how the martian surface changes over time, including changes in frost patterns and surface brightness caused by dust storms and dust devils.

    The purpose of acquiring an image of Earth and the Moon just three days after launch was to help the Mars Color Imager science team obtain a measure, in space, of the instrument's sensitivity, as well as to check that no contamination occurred on the camera during launch. Prior to launch, the team determined that, three days out from Earth, the planet would only be about 4.77 pixels across, and the Moon would be less than one pixel in size, as seen from the Mars Color Imager's wide-angle perspective. If the team waited any longer than three days to test the camera's performance in space, Earth would be too small to obtain meaningful results.

    The Earth and Moon images were acquired by turning Mars Reconnaissance Orbiter toward Earth, then slewing the spacecraft so that the Earth and Moon would pass before each of the five color and two ultraviolet filters of the Mars Color Imager. The distance to the Moon was about 1,440,000 kilometers (about 895,000 miles); the range to Earth was about 1,170,000 kilometers (about 727,000 miles).

    This view combines a sequence of frames showing the passage of Earth and the Moon across the field of view of a single color band of the Mars Color Imager. As the spacecraft slewed to view the two objects, they passed through the camera's field of view. Earth has been saturated white in this image so that both Earth

  7. Comparative color space analysis of difference images from adjacent visible human slices for lossless compression

    NASA Astrophysics Data System (ADS)

    Thoma, George R.; Pipkin, Ryan; Mitra, Sunanda

    1997-10-01

    This paper reports the compression ratio performance of the RGB, YIQ, and HSV color plane models for the lossless coding of the National Library of Medicine's Visible Human (VH) color data set. In a previous study the correlation between adjacent VH slices was exploited using the RGB color plane model. The results of that study suggested an investigation into possible improvements using the other two color planes, and alternative differencing methods. YIQ and HSV, also known as HSI, both represent the image by separating the intensity from the color information, and we anticipated higher correlation between the intensity components of adjacent VH slices. However, the compression ratio did not improve with the transformation from RGB to the other color plane models, since, in order to maintain lossless performance, YIQ and HSV both require more bits to store each pixel. This increase in file size is not offset by the increase in compression due to the higher correlation of the intensity values, so the best performance was achieved with the RGB color plane model. This study also explored three methods of differencing: average reference image, alternating reference image, and cascaded difference from a single reference. The best method proved to be the first iteration of the cascaded difference from a single reference. In this method, a single reference image is chosen, and the difference between it and its neighbor is calculated. Then the difference between the neighbor and its next neighbor is calculated. This method requires that all preceding images up to the reference image be reconstructed before the target image is available. The compression ratios obtained from this method are significantly better than those of the competing methods.
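
    A minimal sketch of the cascaded-difference idea as described (a reference slice plus a chain of neighbor differences, decoded sequentially) is shown below; the entropy coding stage that produces the actual compression is omitted.

```python
# Sketch of "cascaded difference from a single reference" encoding/decoding.
import numpy as np

def encode_cascaded(slices):
    """slices: int16 array (N, H, W); returns the reference slice plus neighbor differences."""
    diffs = np.diff(slices, axis=0)                 # slice[i+1] - slice[i]
    return slices[0], diffs

def decode_cascaded(reference, diffs, index):
    """Reconstruct slice `index` by accumulating differences from the reference."""
    return reference + diffs[:index].sum(axis=0)

volume = np.random.randint(0, 256, size=(5, 8, 8)).astype(np.int16)
ref, diffs = encode_cascaded(volume)
print(np.array_equal(decode_cascaded(ref, diffs, 3), volume[3]))   # True
```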

  8. Improving the visualization and detection of tissue folds in whole slide images through color enhancement

    PubMed Central

    Bautista, Pinky A.; Yagi, Yukako

    2010-01-01

    Objective: The objective of this paper is to improve the visualization and detection of tissue folds, which are common in tissue slides, from the pre-scan image of a whole slide image by introducing a color enhancement method that enables the differentiation between fold and non-fold image pixels. Method: The weighted difference between the color saturation and luminance of the image pixels is used as a shifting factor for the original RGB color of the image. Results: Application of the enhancement method to hematoxylin and eosin (H&E) stained images improves the visualization of tissue folds regardless of the colorimetric variations in the images. Detection of tissue folds after application of the enhancement also improves, but the presence of nuclei, which are also stained dark like the folds, was found to sometimes affect the detection accuracy. Conclusion: The presence of tissue artifacts could affect the quality of whole slide images, especially since whole slide scanners select focus points from the pre-scan image, in which the artifacts are indistinguishable from real tissue areas. We have presented in this paper an enhancement scheme that improves the visualization and detection of tissue folds from pre-scan images. Since the method works on simulated pre-scan images, its integration into the actual whole slide imaging process should also be possible. PMID:21221170
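
    A hedged sketch of the described enhancement follows: each pixel's RGB value is shifted by a weighted difference between its saturation and luminance, which tends to separate dark, saturated fold pixels from normal tissue. The weight, color-space choices, and sign convention are assumptions rather than the authors' exact parameters.

```python
# Sketch of a saturation-minus-luminance color shift for highlighting tissue folds.
import numpy as np
from skimage import color

def enhance_folds(rgb, weight=1.0):
    """rgb: float array (H, W, 3) in [0, 1]."""
    saturation = color.rgb2hsv(rgb)[..., 1]
    luminance = color.rgb2gray(rgb)
    shift = weight * (saturation - luminance)        # large for dark, saturated fold pixels
    return np.clip(rgb + shift[..., None], 0.0, 1.0)

enhanced = enhance_folds(np.random.rand(32, 32, 3))
print(enhanced.shape)
```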

  9. DTV color and image processing: past, present, and future

    NASA Astrophysics Data System (ADS)

    Kim, Chang-Yeong; Lee, SeongDeok; Park, Du-Sik; Kwak, Youngshin

    2006-01-01

    The image processor in digital TV has started to play an important role due to customers' growing desire for higher image quality. Customers want more vivid and natural images without any visual artifacts. Image processing techniques aim to meet customers' needs in spite of the physical limitations of the panel. In this paper, developments in image processing techniques for DTV, in conjunction with developments in display technologies at Samsung R&D, are reviewed. The introduced algorithms cover techniques required to solve the problems caused by the characteristics of the panel itself and techniques for enhancing the image quality of input signals, optimized for the panel and human visual characteristics.

  10. Quantifying the Onset and Progression of Plant Senescence by Color Image Analysis for High Throughput Applications.

    PubMed

    Cai, Jinhai; Okamoto, Mamoru; Atieno, Judith; Sutton, Tim; Li, Yongle; Miklavcic, Stanley J

    2016-01-01

    Leaf senescence, an indicator of plant age and ill health, is an important phenotypic trait for the assessment of a plant's response to stress. Manual inspection of senescence, however, is time consuming, inaccurate and subjective. In this paper we propose an objective evaluation of plant senescence by color image analysis for use in a high throughput plant phenotyping pipeline. As high throughput phenotyping platforms are designed to capture whole-of-plant features, camera lenses and camera settings are inappropriate for the capture of fine detail. Specifically, plant colors in images may not represent true plant colors, leading to errors in senescence estimation. Our algorithm features a color distortion correction and image restoration step prior to a senescence analysis. We apply our algorithm to two time series of images of wheat and chickpea plants to quantify the onset and progression of senescence. We compare our results with senescence scores resulting from manual inspection. We demonstrate that our procedure is able to process images in an automated way for an accurate estimation of plant senescence even from color distorted and blurred images obtained under high throughput conditions. PMID:27348807

  11. Quantifying the Onset and Progression of Plant Senescence by Color Image Analysis for High Throughput Applications

    PubMed Central

    Cai, Jinhai; Okamoto, Mamoru; Atieno, Judith; Sutton, Tim; Li, Yongle; Miklavcic, Stanley J.

    2016-01-01

    Leaf senescence, an indicator of plant age and ill health, is an important phenotypic trait for the assessment of a plant’s response to stress. Manual inspection of senescence, however, is time consuming, inaccurate and subjective. In this paper we propose an objective evaluation of plant senescence by color image analysis for use in a high throughput plant phenotyping pipeline. As high throughput phenotyping platforms are designed to capture whole-of-plant features, camera lenses and camera settings are inappropriate for the capture of fine detail. Specifically, plant colors in images may not represent true plant colors, leading to errors in senescence estimation. Our algorithm features a color distortion correction and image restoration step prior to a senescence analysis. We apply our algorithm to two time series of images of wheat and chickpea plants to quantify the onset and progression of senescence. We compare our results with senescence scores resulting from manual inspection. We demonstrate that our procedure is able to process images in an automated way for an accurate estimation of plant senescence even from color distorted and blurred images obtained under high throughput conditions. PMID:27348807

  12. Single Lens Dual-Aperture 3D Imaging System: Color Modeling

    NASA Technical Reports Server (NTRS)

    Bae, Sam Y.; Korniski, Ronald; Ream, Allen; Fritz, Eric; Shearn, Michael

    2012-01-01

    In an effort to miniaturize a 3D imaging system, we created two viewpoints in a single objective lens camera. This was accomplished by placing a pair of Complementary Multi-band Bandpass Filters (CMBFs) in the aperture area. Two key characteristics of the CMBFs are that the passbands are staggered, so only one viewpoint is opened at a time when a light band matched to that passband is illuminated, and that the passbands are positioned throughout the visible spectrum, so each viewpoint can render color by taking RGB spectral images. Each viewpoint takes a different spectral image from the other viewpoint, hence yielding a different color image relative to the other. This color mismatch between the two viewpoints could lead to color rivalry, where the human vision system fails to resolve two different colors. The mismatch decreases as the number of passbands in a CMBF increases. (However, the number of passbands is constrained by cost and fabrication technique.) In this paper, a simulation predicting the color mismatch is reported.

  13. Single-snapshot 2D color measurement by plenoptic imaging system

    NASA Astrophysics Data System (ADS)

    Masuda, Kensuke; Yamanaka, Yuji; Maruyama, Go; Nagai, Sho; Hirai, Hideaki; Meng, Lingfei; Tosic, Ivana

    2014-03-01

    Plenoptic cameras enable capture of directional light ray information, thus allowing applications such as digital refocusing, depth estimation, or multiband imaging. One of the most common plenoptic camera architectures contains a microlens array at the conventional image plane and a sensor at the back focal plane of the microlens array. We leverage the multiband imaging (MBI) function of this camera and develop a single-snapshot, single-sensor, high color fidelity camera. Our camera is based on a plenoptic system with XYZ filters inserted in the pupil plane of the main lens. To achieve high color measurement precision with this system, we perform an end-to-end optimization of the system model that includes light source information, object information, optical system information, plenoptic image processing and color estimation processing. Optimized system characteristics are exploited to build an XYZ plenoptic colorimetric camera prototype that achieves high color measurement precision. We describe an application of our colorimetric camera to the color shading evaluation of displays and show that it achieves a color accuracy of ΔE<0.01.

  14. Use of Sonicated Albumin (Infoson) to Enhance Arterial Spectral and Color Doppler Imaging

    SciTech Connect

    Abildgaard, Andreas; Egge, Tor S.; Klow, Nils-Einar; Jakobsen, Jarl A.

    1996-04-15

    Purpose: To examine the effect of an ultrasound contrast medium (UCM), Infoson, on Doppler examination of stenotic arteries. Methods: Stenoses were created in the common carotid artery of six piglets, and examined with spectral Doppler and color Doppler imaging during UCM infusion in the left ventricle. Results: UCM caused a mean increase in recorded maximal systolic and end-diastolic velocities of 5% and 6%, respectively, while blood flow remained constant. Increased spectral intensity with UCM was accompanied by spectral broadening. Reduction of spectral intensity by adjustment of Doppler gain counteracted the velocity effects and the spectral broadening. With color Doppler, UCM caused dose-dependent color artifacts outside the artery. Flow in narrow stenoses could be visualized with UCM. Conclusion: The effects of UCM on velocity measurements were slight, and were related to changes in spectral intensity. With color Doppler, UCM may facilitate flow detection, but color artifacts may interfere.

  15. Vehicle detection from high-resolution aerial images based on superpixel and color name features

    NASA Astrophysics Data System (ADS)

    Chen, Ziyi; Cao, Liujuan; Yu, Zang; Chen, Yiping; Wang, Cheng; Li, Jonathan

    2016-03-01

    Automatic vehicle detection from aerial images is emerging due to the strong demand for large-area traffic monitoring. In this paper, we present a novel framework for automatic vehicle detection from aerial images. Through superpixel segmentation, we first segment the aerial images into homogeneous patches, which serve as the basic units of detection to improve efficiency. By introducing sparse representation into our method, powerful classification ability is achieved after dictionary training. To effectively describe a patch, the Histogram of Oriented Gradient (HOG) is used. We further propose to integrate color information to enrich the feature representation by using the color name feature. The final feature consists of both the HOG and the color-name-based histogram, which together give a strong descriptor of a patch. Experimental results demonstrate the effectiveness and robust performance of the proposed algorithm for vehicle detection from aerial images.

  16. Color image encryption using iterative phase retrieve process in quaternion Fourier transform domain

    NASA Astrophysics Data System (ADS)

    Sui, Liansheng; Duan, Kuaikuai

    2015-02-01

    A single-channel color image encryption method is proposed based on an iterative phase retrieval process in the quaternion Fourier transform domain. First, the three components of the plain color image are each scrambled using the cat map. Second, the scrambled components are combined into a pure quaternion image, which is encoded into a phase-only function by using an iterative phase retrieval process. Finally, the phase-only function is encrypted into a gray-scale ciphertext with a stationary white noise distribution based on chaotic diffusion, which provides a degree of camouflage. The corresponding plain color image can be recovered from the ciphertext only with the correct keys in the decryption process. Simulation results verify the feasibility and effectiveness of the proposed method.

  17. Restoration of color in a remote sensing image and its quality evaluation

    NASA Astrophysics Data System (ADS)

    Zhang, Zuxun; Li, Zhijiang; Zhang, Jianqing; Wang, Zhihe

    2003-09-01

    This paper focuses on the restoration of color remote sensing images (including airborne photos) and recommends a complete approach. It proposes that two main aspects should be addressed in restoring a remote sensing image: restoration of spatial information and restoration of photometric information. In this proposal, the restoration of spatial information is performed by using the modulation transfer function (MTF) as the degradation function, where the MTF is obtained by measuring the edge curve of the original image. The restoration of photometric information is performed by an improved local maximum entropy algorithm. Moreover, a practical approach to processing color remote sensing images is recommended: the color image is split into three monochromatic images corresponding to the three visible light bands, and the three images are synthesized after being processed separately under psychological color vision constraints. Finally, three novel evaluation variables based on image restoration are derived to assess the restoration quality in terms of both spatial and photometric restoration. An evaluation is provided at the end.

  18. Toward fast color-shaded images of CAD/CAM geometry

    NASA Astrophysics Data System (ADS)

    Sabella, P.; Wozny, M. J.

    1983-11-01

    It is pointed out that the growing demand for advanced three-dimensional (3-D) geometric modeling capabilities in mechanical CAD/CAM (Computer-Aided Design/Computer-Aided Manufacturing) systems is related to the need to attack complex design problems more directly and automatically. The advantages of having an effective human-computer interface for handling complex 3-D geometry, with a color-shaded image capability in addition to a highly interactive line-drawing capability, are being recognized. A description is given of a software processor for rendering high-quality, color-shaded 'snapshots' directly and rapidly from commercial 3-D CAD/CAM systems. Attention is given to the importance of color-shaded images, the software processor architecture, color and illumination attributes, geometry-defining data, display type associativities, view definition, polygonal approximation, scan conversion, the Raster 4K rendering algorithm, and Raster 4K performance.

  19. A fast color image enhancement algorithm based on Max Intensity Channel.

    PubMed

    Sun, Wei; Han, Long; Guo, Baolong; Jia, Wenyan; Sun, Mingui

    2014-03-30

    In this paper, we extend image enhancement techniques based on the retinex theory, imitating human visual perception of scenes containing high illumination variations. This extension achieves simultaneous dynamic range modification, color consistency, and lightness rendition without multi-scale Gaussian filtering, which has a certain halo effect. The reflection component is analyzed based on the illumination and reflection imaging model. A new prior named the Max Intensity Channel (MIC) is introduced, assuming that the reflections of some points in the scene are very high in at least one color channel. Using this prior, the illumination of the scene is obtained directly by performing a gray-scale closing operation and a fast cross-bilateral filtering on the MIC of the input color image. Consequently, the reflection component of each RGB color channel can be determined from the illumination and reflection imaging model. The proposed algorithm estimates an illumination component that is relatively smooth and maintains the edge details in different regions. A satisfactory color rendition is achieved for a class of images that do not satisfy the gray-world assumption implicit in the theoretical foundation of the retinex. Experiments are carried out to compare the new method with several spatial and transform domain methods. Our results indicate that the new method is superior in enhancement applications, improves computation speed, and performs better than other methods for images with high illumination variations. Further comparisons using images from the National Aeronautics and Space Administration and a wearable camera, eButton, have shown the high performance of the new method, with better color restoration and preservation of image details. PMID:25110395
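
    The illumination-estimation step can be sketched roughly as follows: take the per-pixel maximum over the RGB channels (the MIC), apply a gray-scale closing, and smooth the result edge-preservingly. An ordinary bilateral filter stands in here for the paper's fast cross-bilateral filter, and the kernel sizes are guesses.

```python
# Sketch of illumination estimation from the Max Intensity Channel (MIC).
import numpy as np
import cv2
from scipy.ndimage import grey_closing

def estimate_illumination(rgb):
    """rgb: uint8 array (H, W, 3); returns a float32 illumination map in [0, 1]."""
    mic = rgb.max(axis=2).astype(np.float32) / 255.0       # Max Intensity Channel
    closed = grey_closing(mic, size=(7, 7))                # fill small dark structures
    return cv2.bilateralFilter(closed.astype(np.float32), d=9,
                               sigmaColor=0.1, sigmaSpace=15)

img = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
L = estimate_illumination(img)
reflectance = img.astype(np.float32) / 255.0 / (L[..., None] + 1e-6)   # per-channel reflection
print(L.shape, reflectance.shape)
```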

  20. A fast color image enhancement algorithm based on Max Intensity Channel

    PubMed Central

    Sun, Wei; Han, Long; Guo, Baolong; Jia, Wenyan; Sun, Mingui

    2014-01-01

    In this paper, we extend image enhancement techniques based on the retinex theory, imitating human visual perception of scenes containing high illumination variations. This extension achieves simultaneous dynamic range modification, color consistency, and lightness rendition without multi-scale Gaussian filtering, which has a certain halo effect. The reflection component is analyzed based on the illumination and reflection imaging model. A new prior named the Max Intensity Channel (MIC) is introduced, assuming that the reflections of some points in the scene are very high in at least one color channel. Using this prior, the illumination of the scene is obtained directly by performing a gray-scale closing operation and a fast cross-bilateral filtering on the MIC of the input color image. Consequently, the reflection component of each RGB color channel can be determined from the illumination and reflection imaging model. The proposed algorithm estimates an illumination component that is relatively smooth and maintains the edge details in different regions. A satisfactory color rendition is achieved for a class of images that do not satisfy the gray-world assumption implicit in the theoretical foundation of the retinex. Experiments are carried out to compare the new method with several spatial and transform domain methods. Our results indicate that the new method is superior in enhancement applications, improves computation speed, and performs better than other methods for images with high illumination variations. Further comparisons using images from the National Aeronautics and Space Administration and a wearable camera, eButton, have shown the high performance of the new method, with better color restoration and preservation of image details. PMID:25110395

  1. Design and performance of charge multiplying color FIT CCD image sensor

    NASA Astrophysics Data System (ADS)

    Kobayashi, Izumi; Shibuya, Hiroaki; Tachibana, Toshio; Nishiwaki, Takahiro; Kashima, Shunji; Hynecek, Jaroslav

    2005-03-01

    The paper describes important design features and the resulting performance of a color VGA-format Frame Interline Transfer CCD image sensor that utilizes charge carrier multiplication for increased sensitivity and low noise. The description includes the details of the photo-site design, which is formed by a pinned photodiode with a lateral anti-blooming drain. The design details of the photo-site transfer gate region are also given, together with the design details of the vertical CCD register that result in a fast charge transfer into the memory and thus low smear. Since the device does not use the vertical overflow anti-blooming drain for blooming control, the near-IR performance is not reduced. The color sensing capability is achieved by employing either RGB or complementary color filters. The remaining focus of the article is on typical characterization results such as the CTE, image lag, and low dark current. The NIR and color imaging performance at low light levels is investigated in detail and characterized. In conclusion, several typical scene color images taken by the camera that uses the developed charge-multiplying FIT CCD image sensor are shown.

  2. Natural-Color-Image Map of Quadrangle 3568, Polekhomri (503) and Charikar (504) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a natural-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The natural colors were generated using calibrated red-, green-, and blue-wavelength Landsat image data, which were correlated with red, green, and blue values of corresponding picture elements in MODIS (Moderate Resolution Imaging Spectrometer) 'true color' mosaics of Afghanistan. These mosaics have been published on http://www.truecolorearth.com and modified to match more closely the Munsell colors of sampled surfaces. Peak elevations are derived from Shuttle Radar Topography Mission (SRTM) digital data, averaged over a pixel representing an area of 85 m2, and they are slightly lower than the highest corresponding local point. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  3. Natural-Color-Image Map of Quadrangle 3366, Gizab (513) and Nawer (514) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a natural-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The natural colors were generated using calibrated red-, green-, and blue-wavelength Landsat image data, which were correlated with red, green, and blue values of corresponding picture elements in MODIS (Moderate Resolution Imaging Spectrometer) 'true color' mosaics of Afghanistan. These mosaics have been published on http://www.truecolorearth.com and modified to match more closely the Munsell colors of sampled surfaces. Peak elevations are derived from Shuttle Radar Topography Mission (SRTM) digital data, averaged over a pixel representing an area of 85 m2, and they are slightly lower than the highest corresponding local point. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  4. Classification and quantification of suspended dust from steel plants by using color and transmission image analysis

    NASA Astrophysics Data System (ADS)

    Umegaki, Yoshiyuki; Kazama, Akira; Fukuda, Yoshinori

    2014-09-01

    Various kinds of dust can arise from ironmaking and steelmaking processes in steel works. In JFE Steel's steel plants, various measures have been taken to prevent suspended dust from scattering to the surrounding area. To take effective preventive measures against dust scattering, it is important to identify dust sources and scattering routes through extensive observation and analysis of the dust particles. Conventionally, dust particles were sampled at many observation points in and around JFE's plants and the amount of particles of each kind was measured visually through a microscope. This manual approach, however, is inefficient for measuring many dust samples, and the accuracy of the results depends on the operator. To achieve efficient, operator-independent measurement, a system that can classify and quantify the dust particles automatically has been developed [1]. The system extracts particles from color images of the dust and classifies the particles into three color types - black particles (coke, coal), red particles (iron ore, sintered ore) and white particles (slag, lime). These processes are done mainly in the YCrCb color space, where colors are represented by luminance (Y) and chrominance (Cr and Cb). The YCrCb color space is more manageable than the RGB color space for distinguishing the three color types. The thresholds for the classification are automatically set on the basis of the mean values of the luminance and chrominance in each image, so there is no need to tune the thresholds for each image manually. This scheme makes the results independent of operators. Quick analysis is also realized because all the operators have to do is capture the images of the dust; the analysis is fully automated. Classification results of the sampled particles by the developed system and the obtained statistics in terms of color type, approach direction and diameter are shown.
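
    The classification idea can be sketched as below: particle pixels are converted to YCrCb and split into black, red, and white types using thresholds derived from the per-image means of luminance and chrominance. The specific threshold rules are illustrative assumptions, not the developed system's exact logic.

```python
# Sketch of image-adaptive YCrCb thresholding for dust-particle color types.
import numpy as np
import cv2

def classify_particles(rgb_pixels):
    """rgb_pixels: uint8 array (N, 3) of pixels belonging to detected particles."""
    ycrcb = cv2.cvtColor(rgb_pixels.reshape(-1, 1, 3), cv2.COLOR_RGB2YCrCb).reshape(-1, 3)
    y, cr = ycrcb[:, 0].astype(float), ycrcb[:, 1].astype(float)
    y_thr, cr_thr = y.mean(), cr.mean()            # thresholds from per-image means
    labels = np.full(len(y), "white (slag, lime)", dtype=object)
    labels[y < y_thr] = "black (coke, coal)"
    labels[(y >= y_thr) & (cr > cr_thr)] = "red (iron ore, sintered ore)"
    return labels

pixels = (np.random.rand(10, 3) * 255).astype(np.uint8)
print(classify_particles(pixels))
```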

  5. Color vision test

    MedlinePlus

    Patient information on congenital color vision problems, including achromatopsia (complete color blindness, seeing only shades of gray) and deuteranopia; also known as the Ishihara color vision test.

  6. High Dynamic Range Image rendering of color in chameleons' camouflage using optical thin films

    NASA Astrophysics Data System (ADS)

    Prusten, Mark

    2008-08-01

    High Dynamic Range Image (HDRI) rendering and animation of color in the camouflage of chameleons is developed utilizing thin-film optics. Chameleons are lizards that have the ability to change their skin color. This change in color is an expression of the physical and physiological condition of the lizard, and plays a part in communication. The different colors that can be produced, depending on the species, include pink, blue, red, orange, green, black, brown and yellow. The modeling, simulation, and rendering of these colors incorporate thin-film optical stacks representing the skin. The skin of a chameleon has four layers, which together produce various colors. The outer transparent layer contains chromatophore cells of two colors, yellow and red. Next there are two more layers that reflect light: one blue and the other white. The innermost layer contains dark pigment granules, or melanophore cells, that influence the amount of reflected light. All of these pigment cells can rapidly relocate their pigments, thereby influencing the color of the chameleon. Techniques like subsurface scattering, the simulation of volumetric scattering of light underneath the object's surface, and final gathering are defined in custom shaders and material phenomena for the renderer. The workflow developed to model the chameleon's skin is also applied to the simulation and rendering of hair and fur camouflage, which does not exist in nature.

  7. Content based Image Retrieval based on Different Global and Local Color Histogram Methods: A Survey

    NASA Astrophysics Data System (ADS)

    Suhasini, Pallikonda Sarah; Sri Rama Krishna, K.; Murali Krishna, I. V.

    2016-06-01

    Different global and local color histogram methods for content based image retrieval (CBIR) are investigated in this paper. The color histogram is a widely used descriptor for CBIR. The conventional method of extracting a color histogram is global, which misses the spatial content, is less invariant to deformation and viewpoint changes, and results in a very large three-dimensional histogram corresponding to the color space used. To address these deficiencies, different global and local histogram methods have been proposed in recent research. Recent papers describe different ways of extracting local histograms to provide spatial correspondence, invariant color histograms to add deformation and viewpoint invariance, and fuzzy linking methods to reduce the size of the histogram. The color space and the distance metric used are vital in obtaining a color histogram. In this paper, the performance of CBIR based on different global and local color histograms is surveyed in three different color spaces, namely RGB, HSV, and L*a*b*, and with three distance measures, Euclidean, quadratic, and histogram intersection, in order to choose an appropriate method for future research.
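
    A minimal sketch of one of the surveyed combinations, a grid-based local HSV histogram compared with the histogram intersection measure, is given below; the grid size, bin counts, and color space are arbitrary choices for illustration.

```python
# Sketch: block-wise HSV color histograms plus histogram intersection similarity.
import numpy as np
from skimage import color

def local_hsv_histograms(rgb, grid=(2, 2), bins=(8, 4, 4)):
    """rgb: float array (H, W, 3) in [0, 1]; returns a concatenated, normalized descriptor."""
    hsv = color.rgb2hsv(rgb)
    h, w = hsv.shape[:2]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = hsv[i * h // grid[0]:(i + 1) * h // grid[0],
                        j * w // grid[1]:(j + 1) * w // grid[1]].reshape(-1, 3)
            hist, _ = np.histogramdd(block, bins=bins, range=((0, 1), (0, 1), (0, 1)))
            feats.append(hist.ravel() / hist.sum())
    return np.concatenate(feats)

def histogram_intersection(h1, h2):
    return np.minimum(h1, h2).sum()

a = local_hsv_histograms(np.random.rand(64, 64, 3))
b = local_hsv_histograms(np.random.rand(64, 64, 3))
print(histogram_intersection(a, b))
```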

  8. Image analysis and green tea color change kinetics during thin-layer drying.

    PubMed

    Shahabi, Mohammad; Rafiee, Shahin; Mohtasebi, Seyed Saeid; Hosseinpour, Soleiman

    2014-09-01

    This study was conducted to investigate the effect of air temperature and air flow velocity on the kinetics of color parameter changes during hot-air drying of green tea, to obtain the best model for hot-air drying of green tea, to apply a computer vision system, and to study the color changes during drying. In the proposed computer vision system, the RGB values of the images were first converted into XYZ values and then to Commission Internationale de l'Eclairage L*a*b* color coordinates. The obtained color parameters L*, a* and b* were calibrated against a Hunter-Lab colorimeter. These values were also used for calculation of the color difference, chroma, hue angle and browning index. The values of L* and b* decreased, while the values of a* and the color difference (ΔE*ab) increased during hot-air drying. Drying data were fitted to three kinetic models: zero-order, first-order and fractional conversion models were used to describe the color changes of green tea. The goodness of fit was determined using the coefficient of determination (R²) and the root-mean-square error. Results showed that the fractional conversion model fitted most color parameters better than the other two models. PMID:23751546
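
    The color pipeline and the kinetic fitting can be sketched as follows, using synthetic data in place of the drying measurements; the fractional conversion form and the parameter values are illustrative, and skimage is used as a stand-in for the calibrated colorimetric conversion.

```python
# Sketch: fit a fractional conversion model to a color parameter and compute deltaE in Lab.
import numpy as np
from scipy.optimize import curve_fit
from skimage import color

# Fractional conversion kinetics: C(t) = C_inf + (C_0 - C_inf) * exp(-k t)
def fractional_conversion(t, c0, cinf, k):
    return cinf + (c0 - cinf) * np.exp(-k * t)

# Synthetic drying run: lightness L* decays from 55 to 35 with k = 0.04 1/min plus noise.
t = np.linspace(0, 90, 19)
L_star = fractional_conversion(t, 55.0, 35.0, 0.04) + np.random.normal(0, 0.3, t.size)

params, _ = curve_fit(fractional_conversion, t, L_star, p0=(50.0, 30.0, 0.1))
print("fitted (L0, Linf, k):", params)

# Color difference between two RGB samples via CIE L*a*b* (as used for deltaE*ab).
rgb_pair = np.random.rand(1, 2, 3)
lab = color.rgb2lab(rgb_pair)[0]
print("deltaE:", np.sqrt(((lab[0] - lab[1]) ** 2).sum()))
```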

  9. The characteristics of three-dimensional skin imaging system by full-colored optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Yang, Bor-Wen; Chan, Li-Ming; Wang, Kai-Cheng

    2009-05-01

    In the present cosmetic market, the skin image obtained from a hand-held camera is two-dimensional (2-D). Due to insufficient penetration, only the skin surface can be detected, and thus phenomena in the dermis cannot be observed. To take the place of the conventional 2D camera, a new hand-held imaging system is proposed for three-dimensional (3-D) skin imaging. Featuring non-invasiveness, optical coherence tomography (OCT) has become one of the popular medical imaging techniques. The dermal images shown in OCT-related reports were mainly single-colored because of the use of a monotonic light source. With three original-colored beams applied in OCT, a full-colored image can be derived for dermatology. The penetration depth of the system ranges from 0.43 to 0.78 mm, sufficient for imaging of main tissues in the dermis. Colorful and non-invasive perspectives of deep dermal structure help to advance skin science, dermatology and cosmetology.

  10. Estimating information from image colors: an application to digital cameras and natural scenes.

    PubMed

    Marín-Franch, Iván; Foster, David H

    2013-01-01

    The colors present in an image of a scene provide information about its constituent elements. But the amount of information depends on the imaging conditions and on how information is calculated. This work had two aims. The first was to derive explicitly estimators of the information available and the information retrieved from the color values at each point in images of a scene under different illuminations. The second was to apply these estimators to simulations of images obtained with five sets of sensors used in digital cameras and with the cone photoreceptors of the human eye. Estimates were obtained for 50 hyperspectral images of natural scenes under daylight illuminants with correlated color temperatures 4,000, 6,500, and 25,000 K. Depending on the sensor set, the mean estimated information available across images with the largest illumination difference varied from 15.5 to 18.0 bits and the mean estimated information retrieved after optimal linear processing varied from 13.2 to 15.5 bits (each about 85 percent of the corresponding information available). With the best sensor set, 390 percent more points could be identified per scene than with the worst. Capturing scene information from image colors depends crucially on the choice of camera sensors. PMID:22450817

  11. The implementation of thermal image visualization by HDL based on pseudo-color

    NASA Astrophysics Data System (ADS)

    Zhu, Yong; Zhang, JiangLing

    2004-11-01

    The pseudo-color method, which maps sampled data to intuitively perceived colors, is a powerful visualization technique. This paper elaborates a complete pseudo-color visualization system for thermal images, covering the basic principle, the model, and an HDL (Hardware Description Language) implementation. Thermal images, whose signal is modulated as video, reflect the temperature distribution of the measured object, so they involve large data volumes and real-time constraints. The solution to this intractable problem is as follows: First, a suitable system combining global pseudo-color visualization with accurate measurement of local regions of interest must be adopted. Then, HDL pseudo-color algorithms implemented in an SoC (System on Chip) realize the system and ensure real-time operation. Finally, the key HDL algorithms for direct gray-level connection coding, proportional gray-level map coding and enhanced gray-level map coding are presented, together with their simulation results. The HDL-implemented pseudo-color visualization of thermal images has effective applications in electric power equipment testing and medical diagnosis.
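
    In software terms (rather than HDL), the proportional gray-level mapping amounts to indexing a precomputed color lookup table with each 8-bit gray (temperature) level; the blue-green-red ramp below is an illustrative choice, not the paper's coding scheme.

```python
# Sketch of pseudo-color visualization via an 8-bit lookup table.
import numpy as np

def build_lut(levels=256):
    """Blue -> green -> red ramp as a (levels, 3) uint8 lookup table."""
    x = np.linspace(0.0, 1.0, levels)
    r = np.clip(2.0 * x - 1.0, 0, 1)
    g = 1.0 - np.abs(2.0 * x - 1.0)
    b = np.clip(1.0 - 2.0 * x, 0, 1)
    return (np.stack([r, g, b], axis=1) * 255).astype(np.uint8)

lut = build_lut()
thermal = (np.random.rand(120, 160) * 255).astype(np.uint8)   # stand-in thermal frame
pseudo_color = lut[thermal]                                    # (120, 160, 3) RGB frame
print(pseudo_color.shape)
```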

  12. Adaptive Wiener filter super-resolution of color filter array images.

    PubMed

    Karch, Barry K; Hardie, Russell C

    2013-08-12

    Digital color cameras using a single detector array with a Bayer color filter array (CFA) require interpolation or demosaicing to estimate missing color information and provide full-color images. However, demosaicing does not specifically address fundamental undersampling and aliasing inherent in typical camera designs. Fast non-uniform interpolation based super-resolution (SR) is an attractive approach to reduce or eliminate aliasing and its relatively low computational load is amenable to real-time applications. The adaptive Wiener filter (AWF) SR algorithm was initially developed for grayscale imaging and has not previously been applied to color SR demosaicing. Here, we develop a novel fast SR method for CFA cameras that is based on the AWF SR algorithm and uses global channel-to-channel statistical models. We apply this new method as a stand-alone algorithm and also as an initialization image for a variational SR algorithm. This paper presents the theoretical development of the color AWF SR approach and applies it in performance comparisons to other SR techniques for both simulated and real data. PMID:23938797

  13. Comparison of Drusen Area Detected by Spectral Domain Optical Coherence Tomography and Color Fundus Imaging

    PubMed Central

    Yehoshua, Zohar; Gregori, Giovanni; Sadda, SriniVas R.; Penha, Fernando M.; Goldhardt, Raquel; Nittala, Muneeswar G.; Konduru, Ranjith K.; Feuer, William J.; Gupta, Pooja; Li, Ying; Rosenfeld, Philip J.

    2013-01-01

    Purpose. To compare the measurements of drusen area from manual segmentation of color fundus photographs with those generated by an automated algorithm designed to detect elevations of the retinal pigment epithelium (RPE) on spectral domain optical coherence tomography (SD-OCT) images. Methods. Fifty eyes with drusen secondary to nonexudative age-related macular degeneration were enrolled. All eyes were imaged with a high-definition OCT instrument using a 200 × 200 A-scan raster pattern covering a 6 mm × 6 mm area centered on the fovea. Digital color fundus images were taken on the same day. Drusen were traced manually on the fundus photos by graders at the Doheny Image Reading Center, whereas quantitative OCT measurements of drusen were obtained by using a fully automated algorithm. The color fundus images were registered to the OCT data set and measurements within corresponding 3- and 5-mm circles centered at the fovea were compared. Results. The mean areas (±SD [range]) were: 3-mm SD-OCT = 1.57 (±1.08 [0.03–4.44]); 3-mm color fundus = 1.92 (±1.08 [0.20–3.95]); 5-mm SD-OCT = 2.12 (±1.55 [0.03–5.40]); and 5-mm color fundus = 3.38 (±1.90 [0.39–7.49]). The mean differences between color images and SD-OCT (color − SD-OCT) were 0.36 (±0.93) (P = 0.008) for the 3-mm circle and 1.26 (±1.38) (P < 0.001) for the 5-mm circle measurements. Intraclass correlation coefficients of agreement for the 3- and 5-mm measurements were 0.599 and 0.540, respectively. Conclusions. There was only fair agreement between drusen area measurements obtained from SD-OCT images and color fundus photos. Drusen area measurements on color fundus images were larger than those with SD-OCT scans. This difference can be attributed to the fact that the OCT algorithm defines drusen in terms of RPE deformations above a certain threshold, and will not include small, flat drusen and subretinal drusenoid deposits. The two approaches provide complementary information about

  14. Digital separation of diaminobenzidine-stained tissues via an automatic color-filtering for immunohistochemical quantification.

    PubMed

    Fu, Rong; Ma, Xiaomian; Bian, Zhaoying; Ma, Jianhua

    2015-02-01

    The digital separation of diaminobenzidine (DAB)-stained tissues from the hematoxylin background is an important pre-processing step in analyzing immunostains. In most stain separation methods, specific color channels (for example RGB, HSI, CMYK) or color deconvolution matrices are used to obtain different tissue contrasts between DAB- and hematoxylin-stained areas. However, these methods can produce incomplete separation or color changes because the color spectra of stains and co-localized stains overlap in histological images. Therefore, we proposed an automatic color-filtering method to separate hematoxylin- and DAB-stained tissues. In implementation, the RGB images of DAB-labeled immunostains are first converted to 8-bit BN images by a mathematical translation to produce the largest contrast between brown DAB-stained tissues and blue hematoxylin-stained tissues. The first valley in the histogram, revised by nonuniform quantization, is set as the cut-off point to obtain a brown filter. DAB-stained tissues are accurately delineated from the background counterstain, resulting in a DAB-only image and a De-DAB image. Subsequently, a blue filter is designed in the CIE-Lab color space to further delineate the hematoxylin-stained tissues from the De-DAB image. Finally, the average values of the remaining pixels of the De-DAB image are set as the background color of the DAB-only image to manage uneven dyeing and provide a DAB-stained image for adaptive immunohistochemistry quantitation. Extensive experimental results demonstrated that the proposed method has significant advantages compared with existing methods in terms of complete stain separation without changing the color in DAB-stained areas. PMID:25780744

  15. An algorithm for image clusters detection and identification based on color for an autonomous mobile robot

    SciTech Connect

    Uy, D.L.

    1996-02-01

    An algorithm for detection and identification of image clusters or "blobs" based on color information for an autonomous mobile robot is developed. The input image data are first processed using a crisp color fuzzifier, a binary smoothing filter, and a median filter. The processed image data are then input to the image cluster detection and identification program. The program employs the concept of an "elastic rectangle" that stretches in such a way that the whole blob is finally enclosed in a rectangle. A C program was developed to test the algorithm. The algorithm was tested only on image data of 8x8 size with different numbers of blobs in them. The algorithm works very well in detecting and identifying image clusters.

  16. The design and implementation of image query system based on color feature

    NASA Astrophysics Data System (ADS)

    Yao, Xu-Dong; Jia, Da-Chun; Li, Lin

    2013-07-01

    ASP.NET technology was used to construct a B/S-mode image query system. The theory and technology of database design, color feature extraction from images, and indexing and retrieval in the construction of the image repository were researched. The campus LAN and WAN environments were used to test the system. The test results show that the system architecture design meets users' needs for querying the related image resources.

  17. Analysis on unevenness of skin color using the melanin and hemoglobin components separated by independent component analysis of skin color image

    NASA Astrophysics Data System (ADS)

    Ojima, Nobutoshi; Fujiwara, Izumi; Inoue, Yayoi; Tsumura, Norimichi; Nakaguchi, Toshiya; Iwata, Kayoko

    2011-03-01

    Uneven distribution of skin color is one of the biggest concerns about facial skin appearance. Recently several techniques to analyze skin color have been introduced that separate skin color information into chromophore components, such as melanin and hemoglobin. However, there are not many reports on quantitative analysis of unevenness of skin color that consider the type of chromophore, clusters of different sizes, and the concentration of each chromophore. We propose a new image analysis and simulation method based on chromophore analysis and spatial frequency analysis. This method is mainly composed of three techniques: independent component analysis (ICA) to extract hemoglobin and melanin chromophores from a single skin color image; an image pyramid technique that decomposes each chromophore into multi-resolution images, which can be used for identifying different sizes of clusters or spatial frequencies; and analysis of the histogram obtained from each multi-resolution image to extract unevenness parameters. As an application of the method, we also introduce an image processing technique to change the unevenness of the melanin component. As a result, the method showed a high capability to analyze the unevenness of each skin chromophore: 1) vague unevenness on skin could be discriminated from noticeable pigmentation such as freckles or acne; 2) by analyzing the unevenness parameters obtained from each multi-resolution image for Japanese women, age-related changes were observed in the parameters of the middle spatial frequencies; 3) an image processing system modulating the parameters was proposed to change the unevenness of skin images along the axis of the obtained age-related change in real time.
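
    A much-simplified sketch of the first two steps, assuming scikit-learn is available: ICA is run on log-transformed RGB pixels to obtain two chromophore-like component maps (the published method operates on a 2-D plane in optical-density space, so this is only an approximation), and a crude box-averaging pyramid plus histogram statistics stand in for the multi-resolution unevenness parameters.

      import numpy as np
      from sklearn.decomposition import FastICA

      def chromophore_maps(rgb):
          """Run ICA on log-RGB skin pixels to obtain two independent component maps
          playing the role of melanin- and hemoglobin-like densities (simplified)."""
          h, w, _ = rgb.shape
          od = -np.log1p(rgb.reshape(-1, 3).astype(np.float64))   # optical-density-like values
          ica = FastICA(n_components=2, random_state=0, max_iter=1000)
          comps = ica.fit_transform(od)                           # (h*w, 2)
          return comps.reshape(h, w, 2)

      def box_pyramid(channel, levels=4):
          """Crude multi-resolution pyramid by repeated 2x2 box averaging; each level
          isolates progressively coarser spatial frequencies of one chromophore map."""
          pyramid = [channel]
          for _ in range(levels - 1):
              c = pyramid[-1]
              h, w = (c.shape[0] // 2) * 2, (c.shape[1] // 2) * 2
              c = c[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
              pyramid.append(c)
          return pyramid

      def unevenness_parameters(channel):
          """Histogram-spread statistics per pyramid level, as simple stand-ins for the
          paper's unevenness parameters."""
          return [(lvl.std(), np.percentile(lvl, 95) - np.percentile(lvl, 5))
                  for lvl in box_pyramid(channel)]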

  18. The World's first Geostationary Ocean Color Imager: Changing the Ocean Color Paradigm

    NASA Astrophysics Data System (ADS)

    Ryu, J.; Choi, J.; Ahn, Y.

    2011-12-01

    GOCI is the world's first ocean color observation satellite positioned in geostationary orbit. It was launched in June 2010 and is planned for use in real-time monitoring of the ocean environment around East Asia for seven years, through daily measurements of ocean environment parameters such as chlorophyll concentration, dissolved organic matter (DOM), and suspended sediment. Unlike existing polar-orbiting satellites, GOCI can acquire data every hour from 9:15 am to 4:15 pm over the sea areas around Korea. The high temporal resolution and performance of GOCI make it very efficient for ocean environmental monitoring as well as for disasters such as red tides, sea ice, tsunamis, oil spills, volcanic eruptions, macro-algae blooms, yellow dust, forest fires, typhoons, and ship movements at dumping sites. GOCI primary data can support ocean environment monitoring, operational oceanographic systems, fishery information services, and climate change research. An operational oceanographic system provides data and information on changes in ocean and coastal states to various users.

  19. Image-based separation of reflective and fluorescent components using illumination variant and invariant color.

    PubMed

    Zhang, Cherry; Sato, Imari

    2013-12-01

    Traditionally, researchers tend to exclude fluorescence from color appearance algorithms in computer vision and image processing because of its complexity. In reality, fluorescence is a very common phenomenon observed in many objects, from gems and corals to different kinds of writing paper and our clothes. In this paper, we provide detailed theories of the fluorescence phenomenon. In particular, we show that the color appearance of fluorescence is unaffected by illumination, which distinguishes it from ordinary reflectance. Moreover, we show that the color appearance of objects with reflective and fluorescent components can be represented as a linear combination of the two components. This linear model allows us to separate the two components from images taken under unknown illuminants using independent component analysis (ICA). The effectiveness of the proposed method is demonstrated using digital images of various fluorescent objects. PMID:24136427

  20. Compression of color facial images using feature correction two-stage vector quantization.

    PubMed

    Huang, J; Wang, Y

    1999-01-01

    A feature correction two-stage vector quantization (FC2VQ) algorithm was previously developed to compress gray-scale photo identification (ID) pictures. This algorithm is extended to color images in this work. Three options are compared, which apply the FC2VQ algorithm in the RGB, YCbCr, and Karhunen-Loeve transform (KLT) color spaces, respectively. The RGB-FC2VQ algorithm is found to yield better image quality than KLT-FC2VQ or YCbCr-FC2VQ at similar bit rates. With the RGB-FC2VQ algorithm, a 128 x 128 24-bit color ID image (49,152 bytes) can be compressed down to about 500 bytes with satisfactory quality. When the codeword indices are additionally compressed losslessly with a first-order Huffman coder, the size is reduced to about 450 bytes. PMID:18262869
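
    The feature-correction stage of FC2VQ is not described in enough detail here to reproduce; the sketch below only illustrates the underlying two-stage vector quantization of RGB blocks with scikit-learn's k-means, with block and codebook sizes chosen arbitrarily. The resulting index arrays are what a first-order Huffman coder would then compress.

      import numpy as np
      from sklearn.cluster import KMeans

      def two_stage_vq(image, block=4, k1=256, k2=256, seed=0):
          """Minimal two-stage vector quantizer on RGB blocks: stage 1 quantizes the
          blocks, stage 2 quantizes the stage-1 residuals. Returns the reconstruction
          and the two index arrays (which would then be entropy coded)."""
          h, w, _ = image.shape
          h, w = (h // block) * block, (w // block) * block
          vecs = (image[:h, :w].reshape(h // block, block, w // block, block, 3)
                               .transpose(0, 2, 1, 3, 4)
                               .reshape(-1, block * block * 3)
                               .astype(np.float64))
          vq1 = KMeans(n_clusters=k1, n_init=4, random_state=seed).fit(vecs)
          stage1 = vq1.cluster_centers_[vq1.labels_]
          vq2 = KMeans(n_clusters=k2, n_init=4, random_state=seed).fit(vecs - stage1)
          recon = stage1 + vq2.cluster_centers_[vq2.labels_]
          recon = (recon.reshape(h // block, w // block, block, block, 3)
                        .transpose(0, 2, 1, 3, 4)
                        .reshape(h, w, 3))
          return np.clip(recon, 0, 255).astype(np.uint8), vq1.labels_, vq2.labels_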

  1. Steganalysis for GIF images based on colors-gradient co-occurrence matrix

    NASA Astrophysics Data System (ADS)

    Gong, Rui; Wang, Hongxia

    2012-11-01

    A steganalysis algorithm based on the colors-gradient co-occurrence matrix (CGCM) is proposed in this paper. The CGCM is constructed from the color matrix and the gradient matrix of a GIF image, and 27-dimensional statistical features of the CGCM, which are sensitive to the color correlation between adjacent pixels and to the breaking of image texture, are extracted. A support vector machine (SVM) takes the 27-dimensional statistical features to detect hidden messages in GIF images. Experimental results indicate that the proposed algorithm is more effective than Zhao's algorithm for several existing GIF steganographic algorithms and steganography tools, especially for multibit assignment (MBA) steganography and EzStego. Furthermore, the time efficiency of the proposed algorithm is much higher than that of Zhao's algorithm.
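
    The exact construction of the CGCM and its 27 features is defined in the paper; the sketch below is only one plausible reading, pairing each pixel's palette index with the gradient magnitude of its right neighbour and extracting a handful of co-occurrence statistics for an SVM, assuming scikit-learn.

      import numpy as np
      from sklearn.svm import SVC

      def cgcm(indexed, levels=256):
          """Joint co-occurrence of a pixel's palette index (color) and the gradient
          magnitude toward its right neighbour, normalised to a probability matrix."""
          grad = np.abs(np.diff(indexed.astype(np.int32), axis=1))     # horizontal gradient
          color = indexed[:, :-1].astype(np.int32)
          m = np.zeros((levels, levels), dtype=np.float64)
          np.add.at(m, (color.ravel(), np.clip(grad, 0, levels - 1).ravel()), 1.0)
          return m / max(m.sum(), 1.0)

      def cgcm_features(indexed):
          """A few co-occurrence statistics standing in for the paper's 27-dimensional
          feature vector."""
          m = cgcm(indexed)
          i, j = np.indices(m.shape)
          nz = m[m > 0]
          return np.array([
              (m ** 2).sum(),                 # energy
              -(nz * np.log2(nz)).sum(),      # entropy
              (m * (i - j) ** 2).sum(),       # contrast
              (m * np.abs(i - j)).sum(),      # mean absolute difference
              m.max(),                        # maximum probability
          ])

      def train_detector(cover, stego):
          """cover / stego: lists of palette-indexed (2-D uint8) images."""
          X = np.array([cgcm_features(im) for im in cover + stego])
          y = np.array([0] * len(cover) + [1] * len(stego))
          return SVC(kernel="rbf", gamma="scale").fit(X, y)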

  2. Color image quality assessment with biologically inspired feature and machine learning

    NASA Astrophysics Data System (ADS)

    Deng, Cheng; Tao, Dacheng

    2010-07-01

    In this paper, we present a new no-reference quality assessment metric for color images using biologically inspired features (BIFs) and machine learning. In this metric, we first adopt a biologically inspired model to mimic the visual cortex and represent a color image by BIFs that unify color units, intensity units, and C1 units. Then, in order to reduce complexity and benefit classification, the high-dimensional features are projected to a low-dimensional representation with manifold learning. Finally, a multiclass classification is performed on this new low-dimensional representation of the image, and the quality assessment is based on the learned classification result so as to respect that of human observers. Instead of computing a final score, our method classifies the quality according to the quality scale recommended by the ITU. Preliminary results show that the developed metric can achieve good quality-evaluation performance.

  3. Fluorescent image classification by major color histograms and a neural network

    NASA Astrophysics Data System (ADS)

    Soriano, M.; Garcia, L.; Saloma, Caesar A.

    2001-02-01

    Efficient image classification of microscopic fluorescent spheres is demonstrated with a supervised backpropagation neural network (NN) that takes as input the major-color histogram representation of the fluorescent image to be classified. Two techniques are tested for the major color search: (1) cluster mean (CM) and (2) Kohonen's self-organizing feature map (SOFM). The method is shown to have higher recognition rates than Swain and Ballard's color indexing by histogram intersection. Classification with SOFM-generated histograms as inputs to the classifier NN achieved the best recognition rate (90%) for cases of normal, scaled, defocused, photobleached, and combined images of AMCA (7-Amino-4-Methylcoumarin-3-Acetic Acid)- and FITC (Fluorescein Isothiocyanate)-stained microspheres.
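
    A minimal sketch of the cluster-mean (CM) variant, assuming scikit-learn: the major colors of an image are found by k-means in RGB space, the normalised histogram of pixel assignments becomes the feature vector, and a small backpropagation network (an MLP here) is trained on those histograms. Bin ordering and network size are assumptions.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.neural_network import MLPClassifier

      def major_color_histogram(rgb, n_colors=8, seed=0):
          """Cluster-mean (CM) variant: find the image's major colors by k-means in RGB
          and return the normalised histogram of pixel-to-major-color assignments."""
          pixels = rgb.reshape(-1, 3).astype(np.float64)
          km = KMeans(n_clusters=n_colors, n_init=4, random_state=seed).fit(pixels)
          hist = np.bincount(km.labels_, minlength=n_colors).astype(np.float64)
          # sort bins by the luminance of their major color so histograms are comparable
          order = np.argsort(km.cluster_centers_ @ np.array([0.299, 0.587, 0.114]))
          return (hist / hist.sum())[order]

      def train_classifier(images, labels, n_colors=8):
          """Supervised backpropagation network taking major-color histograms as input."""
          X = np.array([major_color_histogram(im, n_colors) for im in images])
          return MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                               random_state=0).fit(X, labels)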

  4. Partitioning histopathological images: an integrated framework for supervised color-texture segmentation and cell splitting.

    PubMed

    Kong, Hui; Gurcan, Metin; Belkacem-Boussaid, Kamel

    2011-09-01

    For quantitative analysis of histopathological images, such as the lymphoma grading systems, quantification of features is usually carried out on single cells before categorizing them by classification algorithms. To this end, we propose an integrated framework consisting of a novel supervised cell-image segmentation algorithm and a new touching-cell splitting method. For the segmentation part, we segment the cell regions from the other areas by classifying the image pixels into either cell or extra-cellular category. Instead of using pixel color intensities, the color-texture extracted at the local neighborhood of each pixel is utilized as the input to our classification algorithm. The color-texture at each pixel is extracted by local Fourier transform (LFT) from a new color space, the most discriminant color space (MDC). The MDC color space is optimized to be a linear combination of the original RGB color space so that the extracted LFT texture features in the MDC color space can achieve most discrimination in terms of classification (segmentation) performance. To speed up the texture feature extraction process, we develop an efficient LFT extraction algorithm based on image shifting and image integral. For the splitting part, given a connected component of the segmentation map, we initially differentiate whether it is a touching-cell clump or a single nontouching cell. The differentiation is mainly based on the distance between the most likely radial-symmetry center and the geometrical center of the connected component. The boundaries of touching-cell clumps are smoothed out by Fourier shape descriptor before carrying out an iterative, concave-point and radial-symmetry based splitting algorithm. To test the validity, effectiveness and efficiency of the framework, it is applied to follicular lymphoma pathological images, which exhibit complex background and extracellular texture with nonuniform illumination condition. For comparison purposes, the results of the

  5. Optimizing Imaging Conditions for Demanding Multi-Color Super Resolution Localization Microscopy

    PubMed Central

    Nahidiazar, Leila; Agronskaia, Alexandra V.; Broertjes, Jorrit; van den Broek, Bram; Jalink, Kees

    2016-01-01

    Single Molecule Localization super-resolution Microscopy (SMLM) has become a powerful tool to study cellular architecture at the nanometer scale. In SMLM, single fluorophore labels are made to repeatedly switch on and off (“blink”), and their exact locations are determined by mathematically finding the centers of individual blinks. The image quality obtainable by SMLM critically depends on efficacy of blinking (brightness, fraction of molecules in the on-state) and on preparation longevity and labeling density. Recent work has identified several combinations of bright dyes and imaging buffers that work well together. Unfortunately, different dyes blink optimally in different imaging buffers, and acquisition of good quality 2- and 3-color images has therefore remained challenging. In this study we describe a new imaging buffer, OxEA, that supports 3-color imaging of the popular Alexa dyes. We also describe incremental improvements in preparation technique that significantly decrease lateral- and axial drift, as well as increase preparation longevity. We show that these improvements allow us to collect very large series of images from the same cell, enabling image stitching, extended 3D imaging as well as multi-color recording. PMID:27391487

  6. Optimizing Imaging Conditions for Demanding Multi-Color Super Resolution Localization Microscopy.

    PubMed

    Nahidiazar, Leila; Agronskaia, Alexandra V; Broertjes, Jorrit; van den Broek, Bram; Jalink, Kees

    2016-01-01

    Single Molecule Localization super-resolution Microscopy (SMLM) has become a powerful tool to study cellular architecture at the nanometer scale. In SMLM, single fluorophore labels are made to repeatedly switch on and off ("blink"), and their exact locations are determined by mathematically finding the centers of individual blinks. The image quality obtainable by SMLM critically depends on efficacy of blinking (brightness, fraction of molecules in the on-state) and on preparation longevity and labeling density. Recent work has identified several combinations of bright dyes and imaging buffers that work well together. Unfortunately, different dyes blink optimally in different imaging buffers, and acquisition of good quality 2- and 3-color images has therefore remained challenging. In this study we describe a new imaging buffer, OxEA, that supports 3-color imaging of the popular Alexa dyes. We also describe incremental improvements in preparation technique that significantly decrease lateral- and axial drift, as well as increase preparation longevity. We show that these improvements allow us to collect very large series of images from the same cell, enabling image stitching, extended 3D imaging as well as multi-color recording. PMID:27391487

  7. Evaluation of Color Settings in Aerial Images with the Use of Eye-Tracking User Study

    NASA Astrophysics Data System (ADS)

    Mirijovsky, J.; Popelka, S.

    2016-06-01

    The main aim of the presented paper is to find the most realistic and preferred color settings for different types of surfaces in aerial images. This is achieved through a user study with eye-movement recording. Aerial images taken by an unmanned aerial system were used as stimuli. From each image, a squared crop area containing one of the studied types of surfaces (asphalt, concrete, water, soil, and grass) was selected. For each type of surface, the real reflectance value was found with a precise ASD HandHeld 2 spectroradiometer. The device was used at the same time as the aerial images were captured, so lighting conditions and the state of vegetation were equal. The spectral resolution of the ASD device is better than 3.0 nm. For defining the RGB values of a selected type of surface, the spectral reflectance values recorded by the device were merged into wider groups, finally giving three groups corresponding to the RGB color system. The captured images were edited with the graphic editor Photoshop CS6. Contrast, clarity, and brightness were edited for all surface types in the images, producing a set of 12 images of the same area with different color settings. These images were put into a grid and used as stimuli for the eye-tracking experiment. Eye tracking is one of the methods of usability studies and is considered relatively objective. An SMI RED 250 eye tracker with a sampling frequency of 250 Hz was used in the study. The respondents were a group of 24 students of Geoinformatics and Geography. Their task was to select which image in the grid has the best color settings, and then to select which color settings they prefer. Respondents' answers were evaluated and the most realistic and most preferred color settings were found. The advantage of the eye-tracking evaluation was that the process of selecting the answers was also analyzed. Areas of Interest were marked around each image in the

  8. 3D Point Cloud Model Colorization by Dense Registration of Digital Images

    NASA Astrophysics Data System (ADS)

    Crombez, N.; Caron, G.; Mouaddib, E.

    2015-02-01

    Architectural heritage is a historic and artistic property which has to be protected, preserved, restored and shown to the public. Modern tools like 3D laser scanners are more and more used in heritage documentation. Most of the time, the 3D laser scanner is complemented by a digital camera which is used to enrich the accurate geometric information with the colors of the scanned objects. However, the photometric quality of the acquired point clouds is generally rather low because of several problems presented below. We propose an accurate method for registering digital images acquired from any viewpoint onto point clouds, which is a crucial step for good colorization by color projection. We express this image-to-geometry registration as a pose estimation problem. The camera pose is computed using the entire image's intensities under a photometric visual and virtual servoing (VVS) framework. The camera extrinsic and intrinsic parameters are automatically estimated. Because we estimate the intrinsic parameters, we do not need any information about the camera which took the digital image. Finally, when the point cloud model and the digital image are correctly registered, we project the 3D model into the digital image frame and assign new colors to the visible points. The performance of the approach is proven in simulation and in real experiments on indoor and outdoor datasets of the cathedral of Amiens, which highlight the success of our method, leading to point clouds with better photometric quality and resolution.
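
    The VVS pose estimation itself is not sketched here; assuming the pose and intrinsics have already been estimated, the final colorization step reduces to projecting the 3-D points into the registered image and copying pixel colors, as in the hypothetical helper below (occlusion handling omitted).

      import numpy as np

      def colorize_points(points, image, K, R, t):
          """Assign RGB colors from a registered image to 3-D points.
          points : (N, 3) world coordinates; image : (H, W, 3) uint8
          K : (3, 3) camera intrinsics; R, t : world-to-camera rotation/translation.
          Only points projecting inside the frame and in front of the camera get a color."""
          cam = points @ R.T + t                      # world -> camera coordinates
          proj = cam @ K.T
          z = np.where(np.abs(proj[:, 2:3]) < 1e-9, 1e-9, proj[:, 2:3])
          uv = proj[:, :2] / z                        # perspective division
          u = np.round(uv[:, 0]).astype(int)
          v = np.round(uv[:, 1]).astype(int)
          h, w, _ = image.shape
          valid = (cam[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
          colors = np.full((points.shape[0], 3), 128, dtype=np.uint8)   # grey for unseen points
          colors[valid] = image[v[valid], u[valid]]
          return colors, valid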

  9. A Multi-Color Simultaneous Imager Instrument Concept for the IRTF

    NASA Astrophysics Data System (ADS)

    Connelley, Michael; Tokunaga, A.; Bus, S.

    2013-10-01

    We present a concept for a multi-channel imaging camera optimized for the rapid characterization of small planetary bodies. This instrument will be a seeing limited imager with a 2' field of view that will simultaneously observe in 8 color channels from Sloan g’ through K band. This very broad simultaneous wavelength coverage enables several key science goals, with a strong emphasis on time critical and variable observations. First among these is the taxonomic classification of solar system minor bodies, such as main belt and near-Earth asteroids, as well as trans-Neptunian objects. Asteroid taxonomy is key to understanding the history of the asteroid belt, characterizing the NEA population, and connecting the NEA population to its origins in the Main Belt. Giant planet monitoring will be made significantly more efficient with this instrument by doing simultaneously what observers now do in series. This instrument will be a powerful tool for the characterization of the atmospheres of transiting exo-planets by providing relative photometry in several optical and near-IR bands simultaneously. The multicolor imaging of this instrument will also have broad astrophysical applications. These include disentangling newly discovered brown dwarf candidates from quasars, monitoring color variability of young stars, and the rapid follow-up of gamma ray bursts. Although this instrument has the potential to be very powerful, it will also be very simple. Similar instruments use a separate detector for each channel requiring a ‘dichroic tree’. Although we will observe in 8 color channels simultaneously, this concept will only use two detectors. We will project four color channels onto each detector; 4 visible light images onto a CCD and 4 near-IR images onto an IR-array. Narrowband imaging is possible by placing a filter array in the color channels. Optically mapping multiple color channels onto a single detector reduces instrument size, cost and risk.

  10. High-definition color image in dye thermal transfer printing by laser heating

    NASA Astrophysics Data System (ADS)

    Kitamura, Takashi

    1999-12-01

    In laser thermal transfer printing using a dye sublimation type medium, a high-definition, continuous-tone image can be obtained easily because the laser beam is focused to a small spot and the heat energy can be controlled by pulse-width modulation of the laser light. The donor ink sheet is composed of a laser absorbing layer and a sublimation dye layer. The tone reproduction depends on the mixture ratio of dye to binder and the thickness of the ink layer. Four color ink sheets (cyan, magenta, yellow, and black) were prepared to print color images with high resolution and good continuous-tone reproduction using sublimation dye transfer printing by laser heating.

  11. Hyperspectral image reconstruction using RGB color for foodborne pathogen detection on agar plates

    NASA Astrophysics Data System (ADS)

    Yoon, Seung-Chul; Shin, Tae-Sung; Park, Bosoon; Lawrence, Kurt C.; Heitschmidt, Gerald W.

    2014-03-01

    This paper reports the latest development of a color vision technique for detecting colonies of foodborne pathogens grown on agar plates with a hyperspectral image classification model that was developed using full hyperspectral data. The hyperspectral classification model depended on reflectance spectra measured in the visible and near-infrared spectral range from 400 to 1,000 nm (473 narrow spectral bands). Multivariate regression methods were used to estimate and predict hyperspectral data from RGB color values. Six representative non-O157 Shiga-toxin-producing Escherichia coli (STEC) serogroups (O26, O45, O103, O111, O121, and O145) were grown on Rainbow agar plates. A line-scan pushbroom hyperspectral image sensor was used to scan 36 agar plates, each grown with pure STEC colonies. The 36 hyperspectral images of the agar plates were divided in half to create training and test sets. The mean R-squared value for hyperspectral image estimation was about 0.98 in the spectral range between 400 and 700 nm for the linear, quadratic, and cubic polynomial regression models, and the detection accuracy of the hyperspectral image classification model with principal component analysis and k-nearest neighbors for the test set was up to 92% (99% with the original hyperspectral images). Thus, the results of the study suggested that color-based detection may be viable as a multispectral imaging solution without much loss of prediction accuracy compared to hyperspectral imaging.
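
    The multivariate regression step can be pictured as a polynomial least-squares map from RGB triplets to reflectance spectra, as in the sketch below; the term set and degree are illustrative, not the exact design used in the study.

      import numpy as np

      def poly_terms(rgb, degree=2):
          """Expand normalised RGB values (N, 3) into polynomial regression terms:
          degree=1 gives [1, R, G, B]; degree=2 adds squares and cross terms."""
          r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
          cols = [np.ones_like(r), r, g, b]
          if degree >= 2:
              cols += [r * r, g * g, b * b, r * g, r * b, g * b]
          if degree >= 3:
              cols += [r ** 3, g ** 3, b ** 3, r * g * b]
          return np.stack(cols, axis=1)

      def fit_rgb_to_spectra(rgb_train, spectra_train, degree=2):
          """Least-squares map from RGB terms to reflectance spectra (N, n_bands)."""
          X = poly_terms(rgb_train, degree)
          W, *_ = np.linalg.lstsq(X, spectra_train, rcond=None)
          return W                                    # (n_terms, n_bands)

      def predict_spectra(rgb, W, degree=2):
          """Estimate a reflectance spectrum for each RGB triplet."""
          return poly_terms(rgb, degree) @ W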

  12. Natural-Color Image Mosaics of Afghanistan: Digital Databases and Maps

    USGS Publications Warehouse

    Davis, Philip A.; Hare, Trent M.

    2007-01-01

    Explanation: The 50 tiled images in this dataset are natural-color renditions of the calibrated six-band Landsat mosaics created from Landsat Enhanced Thematic Mapper Plus (ETM+) data. Natural-color images depict the surface as seen by the human eye. The Landsat ETM+ maps produced by Davis (2006) are calibrated to relative reflectance and need to be grounded with ground-reflectance data, but the difficulties in performing fieldwork in Afghanistan precluded ground-reflectance surveys. For natural-color calibration, which involves only the blue, green, and red color bands of Landsat, we could use ground photographs, Munsell color readings of ground surfaces, or another image base that accurately depicts the surface color. Each map quadrangle is 1° of latitude by 1° of longitude. The numbers assigned to each map quadrangle refer to the latitude and longitude coordinates of the lower left corner of the quadrangle. For example, quadrangle Q2960 has its lower left corner at lat 29° N., long 60° E. Each quadrangle overlaps adjacent quadrangles by 100 pixels (2.85 km). Only the 14.25-m-spatial-resolution UTM and 28.5-m-spatial-resolution WGS84 geographic geotiff datasets are available in this report, to decrease the amount of space needed. The images are (three-band, eight-bit) geotiffs with embedded georeferencing, so most software will not require the associated world files. An index of all available images in geographic coordinates is provided in Index_Geo_DD.pdf. The country of Afghanistan spans three UTM zones (41-43). Maps are stored as geoTIFFs in their respective UTM zone projection. Indexes of all available topographic map sheets in their respective UTM zones are provided in Index_UTM_Z41.pdf, Index_UTM_Z42.pdf, and Index_UTM_Z43.pdf.

  13. Color-difference evaluation for digital images using a categorical judgment method.

    PubMed

    Liu, Haoxue; Huang, Min; Cui, Guihua; Luo, M Ronnier; Melgosa, Manuel

    2013-04-01

    The CIELAB lightness and chroma values of pixels in five of the eight ISO SCID natural images were modified to produce sample images. Pairs of images were displayed on a calibrated monitor and assessed by a panel of 12 observers with normal color vision using a categorical judgment method. The experimental results showed that, assuming the lightness parametric factor k(L)=1 to predict color differences in images, CIELAB performed better than CIEDE2000, CIE94, or CMC, which is a different result from the one found in the color-difference literature for homogeneous color pairs. However, observers perceived CIELAB lightness and chroma differences in images in different ways. To fit the current experimental data, a specific methodology is proposed to optimize k(L) in the color-difference formulas CIELAB, CIEDE2000, CIE94, and CMC. From the standardized residual sum of squares (STRESS) index, it was found that the optimized formulas, CIEDE2000(2.3:1), CIE94(3.0:1), and CMC(3.4:1), performed significantly better than their corresponding original forms with lightness parametric factor k(L)=1. Specifically, CIEDE2000(2.3:1) performed best, with a satisfactory average STRESS value of 25.8, which is very similar to the value of 27.5 found for the CIEDE2000(1:1) formula on the combined weighted dataset of homogeneous color samples employed in the development of that formula [J. Opt. Soc. Am. A 25, 1828 (2008), Table 2]. However, fitting our experimental data, none of the four optimized formulas CIELAB(1.5:1), CIEDE2000(2.3:1), CIE94(3.0:1), and CMC(3.4:1) is significantly better than the others. The current results roughly agree with the recent CIE recommendation that color differences in images can be predicted by simply adopting a lightness parametric factor k(L)=2 in CIELAB or CIEDE2000 [CIE Publication 199:2011]. It was also found that the different contents of the five images have a considerable influence on the performance of the tested color-difference formulas. PMID:23595320
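
    The notation CIE94(3.0:1) above denotes a lightness parametric factor k(L)=3.0 with k(C)=1. As a reference point, a parametric CIE94 color difference can be written as in the sketch below (graphic-arts weighting functions assumed); the same k(L) idea applies to CIELAB, CMC, and CIEDE2000.

      import numpy as np

      def delta_e94(lab1, lab2, kL=1.0, kC=1.0, kH=1.0):
          """CIE94 color difference with a parametric lightness factor kL, so that
          e.g. kL=3.0 reproduces the 'CIE94(3.0:1)' setting quoted in the abstract.
          lab1, lab2 are arrays with CIELAB triplets along the last axis."""
          L1, a1, b1 = np.moveaxis(np.asarray(lab1, dtype=float), -1, 0)
          L2, a2, b2 = np.moveaxis(np.asarray(lab2, dtype=float), -1, 0)
          C1, C2 = np.hypot(a1, b1), np.hypot(a2, b2)
          dL, dC = L1 - L2, C1 - C2
          dH_sq = np.maximum((a1 - a2) ** 2 + (b1 - b2) ** 2 - dC ** 2, 0.0)
          SL, SC, SH = 1.0, 1.0 + 0.045 * C1, 1.0 + 0.015 * C1   # graphic-arts weights
          return np.sqrt((dL / (kL * SL)) ** 2 +
                         (dC / (kC * SC)) ** 2 +
                         dH_sq / (kH * SH) ** 2)

      # e.g. a per-pixel difference map between an original and a modified CIELAB image:
      # de_map = delta_e94(lab_original, lab_modified, kL=3.0)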

  14. Novel Method for Border Irregularity Assessment in Dermoscopic Color Images

    PubMed Central

    Jaworek-Korjakowska, Joanna

    2015-01-01

    Background. One of the most important lesion features predicting malignancy is border irregularity. Accurate assessment of irregular borders is clinically important due to their significantly different occurrence in benign and malignant skin lesions. Method. In this research, we present a new approach for the detection of border irregularities, one of the major parameters in a widely used diagnostic algorithm, the ABCD rule of dermoscopy. The proposed work is focused on designing an efficient automatic algorithm containing the following steps: image enhancement, lesion segmentation, borderline calculation, and irregularity detection. The challenge lies in determining the exact borderline. To solve this problem we have implemented a new method based on lesion rotation and borderline division. Results. The algorithm has been tested on 350 dermoscopic images and achieved an accuracy of 92%, indicating that the proposed computational approach captures most of the irregularities and provides reliable information for effective skin mole examination. Compared to the state of the art, we obtained improved classification results. Conclusions. The current study suggests that a computer-aided system is a practical tool for dermoscopic image assessment and could be recommended for both research and clinical applications. The proposed algorithm can be applied in different fields of medical image analysis including, for example, CT and MRI images. PMID:26604980

  15. The precise prediction model of spectral reflectance for color halftone images

    NASA Astrophysics Data System (ADS)

    Tian, Dongwen; Tian, Fengwen

    2015-01-01

    In order to predict the spectral reflectance of color halftone images, we considered the scattering of light within the paper and the penetration of ink into the substrate, and proposed a precise spectral reflectance prediction model for color halftone images. The model is based on the assumptions that the colorant is non-scattering and that the paper is a strongly scattering substrate. By accounting for multiple internal reflections of light between the paper substrate and the print-air interface, and for light traveling along oblique paths as in the Williams-Clapper model, the proposed model takes into account ink spreading, a phenomenon that occurs when an ink halftone is printed in superposition with one or several solid inks. The ink-spreading model includes nominal-to-effective dot area coverage functions for each of the different ink overprint conditions, obtained by least-squares curve fitting, together with a network structure of multiple reflections. The modeled and measured colors agree very well, confirming the validity of the model. The new model provides a theoretical foundation for color prediction analysis of halftone images and for the development of print quality inspection systems.

  16. Color-direction patch-sparsity-based image inpainting using multidirection features.

    PubMed

    Li, Zhidan; He, Hongjie; Tai, Heng-Ming; Yin, Zhongke; Chen, Fan

    2015-03-01

    This paper proposes a color-direction patch-sparsity-based image inpainting method to better maintain structure coherence, texture clarity, and neighborhood consistency of the inpainted region of an image. The method uses a super-wavelet transform to estimate the multi-direction features of a degraded image and combines them with color information to construct the weighted color-direction distance (WCDD) to measure the difference between two patches. Based on the WCDD, the color-direction structure sparsity is defined to obtain a more robust filling order, and more suitable multiple candidate patches are searched. Then, the target patches are sparsely represented by the multiple candidate patches under neighborhood consistency constraints in both the color and the multi-direction spaces. Experimental results are presented to demonstrate the effectiveness of the proposed approach on tasks such as scratch removal, text removal, block removal, and object removal. The effects of super-wavelet transforms and direction features are also investigated. PMID:25532180

  17. Motion robust PPG-imaging through color channel mapping.

    PubMed

    Moço, Andreia V; Stuijk, Sander; de Haan, Gerard

    2016-05-01

    Photoplethysmography (PPG)-imaging is an emerging noninvasive technique that maps spatial blood-volume variations in living tissue with a video camera. In this paper, we clarify how cardiac-related (i.e., ballistocardiographic; BCG) artifacts occur in this imaging modality and address these using algorithms from the remote-PPG literature. Performance is assessed under stationary conditions at the immobilized hand. Our proposal outperforms the state-of-the-art, blood pulsation imaging [Biomed. Opt. Express 5, 3123 (2014)], even in our best attempt to create diffused illumination. BCG-artifacts are suppressed to an order of magnitude below PPG-signal strength, which is sufficient to prevent interpretation errors. PMID:27231618

  18. Motion robust PPG-imaging through color channel mapping

    PubMed Central

    Moço, Andreia V.; Stuijk, Sander; de Haan, Gerard

    2016-01-01

    Photoplethysmography (PPG)-imaging is an emerging noninvasive technique that maps spatial blood-volume variations in living tissue with a video camera. In this paper, we clarify how cardiac-related (i.e., ballistocardiographic; BCG) artifacts occur in this imaging modality and address these using algorithms from the remote-PPG literature. Performance is assessed under stationary conditions at the immobilized hand. Our proposal outperforms the state-of-the-art, blood pulsation imaging [Biomed. Opt. Express 5, 3123 (2014)], even in our best attempt to create diffused illumination. BCG-artifacts are suppressed to an order of magnitude below PPG-signal strength, which is sufficient to prevent interpretation errors. PMID:27231618

  19. QBIC project: querying images by content, using color, texture, and shape

    NASA Astrophysics Data System (ADS)

    Niblack, Carlton W.; Barber, Ron; Equitz, Will; Flickner, Myron D.; Glasman, Eduardo H.; Petkovic, Dragutin; Yanker, Peter; Faloutsos, Christos; Taubin, Gabriel

    1993-04-01

    In the query by image content (QBIC) project we are studying methods to query large on-line image databases using the images' content as the basis of the queries. Examples of the content we use include color, texture, and shape of image objects and regions. Potential applications include medical ('Give me other images that contain a tumor with a texture like this one'), photo-journalism ('Give me images that have blue at the top and red at the bottom'), and many others in art, fashion, cataloging, retailing, and industry. Key issues include derivation and computation of attributes of images and objects that provide useful query functionality, retrieval methods based on similarity as opposed to exact match, query by image example or user-drawn image, the user interfaces, query refinement and navigation, high-dimensional database indexing, and automatic and semi-automatic database population. We currently have a prototype system written in X/Motif and C running on an RS/6000 that allows a variety of queries, and a test database of over 1000 images and 1000 objects populated from commercially available photo clip art images. In this paper we present the main algorithms for color, texture, shape, and sketch queries that we use, show example query results, and discuss future directions.
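
    The simplest form of the color query described above can be pictured as comparing coarse color histograms with histogram intersection, as in the sketch below; this is a generic illustration, not the QBIC implementation.

      import numpy as np

      def color_histogram(rgb, bins=4):
          """Coarse joint RGB histogram (bins**3 cells), normalised to sum to 1."""
          q = (rgb.astype(np.uint16) * bins) // 256                # quantise each channel
          idx = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]
          hist = np.bincount(idx.ravel(), minlength=bins ** 3).astype(np.float64)
          return hist / hist.sum()

      def histogram_intersection(h1, h2):
          """Similarity in [0, 1]; 1.0 means identical color distributions."""
          return np.minimum(h1, h2).sum()

      def query_by_color(query_img, database_imgs, top_k=5):
          """Rank database images by color similarity to the query image."""
          hq = color_histogram(query_img)
          scores = [histogram_intersection(hq, color_histogram(im)) for im in database_imgs]
          return np.argsort(scores)[::-1][:top_k]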

  20. Toward a No-Reference Image Quality Assessment Using Statistics of Perceptual Color Descriptors.

    PubMed

    Lee, Dohyoung; Plataniotis, Konstantinos N

    2016-08-01

    Analysis of the statistical properties of natural images has played a vital role in the design of no-reference (NR) image quality assessment (IQA) techniques. In this paper, we propose parametric models describing the general characteristics of chromatic data in natural images. They provide informative cues for quantifying visual discomfort caused by the presence of chromatic image distortions. The established models capture the correlation of chromatic data between spatially adjacent pixels by means of color invariance descriptors. The use of color invariance descriptors is inspired by their relevance to visual perception, since they provide less sensitive descriptions of image scenes against viewing geometry and illumination variations than luminances. In order to approximate the visual quality perception of chromatic distortions, we devise four parametric models derived from invariance descriptors representing independent aspects of color perception: 1) hue; 2) saturation; 3) opponent angle; and 4) spherical angle. The practical utility of the proposed models is examined by deploying them in our new general-purpose NR IQA metric. The metric initially estimates the parameters of the proposed chromatic models from an input image to constitute a collection of quality-aware features (QAF). Thereafter, a machine learning technique is applied to predict visual quality given a set of extracted QAFs. Experimentation performed on large-scale image databases demonstrates that the proposed metric correlates well with the provided subjective ratings of image quality over commonly encountered achromatic and chromatic distortions, indicating that it can be deployed on a wide variety of color image processing problems as a generalized IQA solution. PMID:27305678

  1. A color fusion method of infrared and low-light-level images based on visual perception

    NASA Astrophysics Data System (ADS)

    Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa

    2014-11-01

    Color fusion images can be obtained through the fusion of infrared and low-light-level images and contain the information of both. The fusion images help observers to understand the multichannel images comprehensively. However, simple fusion may lose target information because targets are inconspicuous in long-distance infrared and low-light-level images; and if target extraction is applied blindly, the perception of the scene information is seriously affected. To solve this problem, a new fusion method based on visual perception is proposed in this paper. The extraction of visual targets ("what" information) and a parallel processing mechanism are applied to traditional color fusion methods. The infrared and low-light-level color fusion images are obtained based on efficient learning of typical targets. Experimental results show the effectiveness of the proposed method. The fusion images produced by our algorithm not only improve the detection rate of targets but also retain rich natural information about the scenes.
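
    For contrast, the kind of "simple fusion" the abstract improves upon can be as basic as mapping the two co-registered bands onto different color channels, as in the hypothetical baseline below; the proposed target-learning fusion is not reproduced here.

      import numpy as np

      def naive_color_fusion(ir, lll):
          """Baseline false-color fusion of co-registered infrared and low-light-level
          images (both 2-D uint8): IR drives red, LLL drives green, and their mean the
          blue channel. This is a naive baseline, not the proposed method."""
          ir = ir.astype(np.float32)
          lll = lll.astype(np.float32)
          fused = np.stack([ir, lll, 0.5 * (ir + lll)], axis=-1)
          return np.clip(fused, 0, 255).astype(np.uint8)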

  2. Pixel-wise orthogonal decomposition for color illumination invariant and shadow-free image.

    PubMed

    Qu, Liangqiong; Tian, Jiandong; Han, Zhi; Tang, Yandong

    2015-02-01

    In this paper, we propose a novel, effective and fast method to obtain a color illumination invariant and shadow-free image from a single outdoor image. Different from state-of-the-art methods for shadow-free images, which need either shadow detection or statistical learning, we set up a linear equation set for each pixel value vector based on physically-based shadow invariants, deduce a pixel-wise orthogonal decomposition for its solutions, and then obtain an illumination invariant vector for each pixel value vector of the image. The illumination invariant vector is the unique particular solution of the linear equation set, which is orthogonal to its free solutions. With this illumination invariant vector and the Lab color space, we propose an algorithm to generate a shadow-free image that well preserves the texture and color information of the original image. A series of experiments on a diverse set of outdoor images and comparisons with state-of-the-art methods validate our method. PMID:25836092

  3. Charon's Color: A view from the New Horizons Ralph/Multispectral Visible Imaging Camera

    NASA Astrophysics Data System (ADS)

    Olkin, C.; Howett, C.; Grundy, W. M.; Parker, A. H.; Ennico Smith, K.; Stern, S. A.; Binzel, R. P.; Cook, J. C.; Cruikshank, D. P.; Dalle Ore, C.; Earle, A. M.; Jennings, D. E.; Linscott, I.; Lunsford, A.; Parker, J. W.; Protopapa, S.; Reuter, D.; Singer, K. N.; Spencer, J. R.; Tsang, C.; Verbiscer, A.; Weaver, H. A., Jr.; Young, L. A.

    2015-12-01

    The Multispectral Visible Imaging Camera (MVIC; Reuter et al., 2008) is part of Ralph, an instrument on NASA's New Horizons spacecraft. MVIC is the color 'eyes' of New Horizons, observing objects in five bands from blue to infrared wavelengths. MVIC's images of Charon show it to be an intriguing place, a far cry from the grey, heavily cratered world once postulated. Rather, Charon is observed to have large surface areas free of craters, and a northern polar region that is much redder than its surroundings. This talk will describe these initial results in more detail, along with Charon's global geological color variations, to put these results into their wider context. Finally, possible surface coloration mechanisms due to global processes and/or seasonal cycles will be discussed.

  4. Near-infrared imaging of Comet Halley: Discovery of a color gradient in the inner coma

    NASA Technical Reports Server (NTRS)

    Rieke, M. J.; Campins, H.

    1987-01-01

    Near-infrared images of Comet Halley were obtained in the standard J, H, and K bandpasses on 3.5 Nov. 1985 with an HgCdTe camera at a 1.54 m telescope. Each image covers 38.4 arcsec on a side. A well defined gradient in the J-H and H-K colors within 5000 km of the nucleus is discovered, with the bluest colors at the photocenter. Surface brightness profiles steeper than the canonical 1/rho are observed in the same region. Analysis indicates that the color gradient and the brightness profiles can both be explained by the presence of volatile (dirty ice) grains in the inner coma. An outburst of very small (Rayleigh-scattering) dust particles could also account for the observations; however, this model is not supported by the spacecraft measurements. No obvious jets or other structures are observed.

  5. Unsupervised color normalisation for H and E stained histopathology image analysis

    NASA Astrophysics Data System (ADS)

    Celis, Raúl; Romero, Eduardo

    2015-12-01

    In histology, each dye component attempts to specifically characterise different microscopic structures. In the case of the Hematoxylin-Eosin (H&E) stain, universally used for routine examination, quantitative analysis may often require the inspection of different morphological signatures related mainly to nuclei patterns, but also to stroma distribution. Nevertheless, computer systems for automatic diagnosis are often hampered by color variations, ranging from those introduced by the capturing device to those caused by the laboratory-specific staining protocol and stains. This paper presents a novel colour normalisation method for H&E stained histopathology images. The method is based upon the opponent process theory and blindly estimates the best color basis for the Hematoxylin and Eosin stains without relying on prior knowledge. Stain normalisation and color separation in this sense are transversal to any framework of histopathology image analysis.

  6. An optimized color transformation for the analysis of digital images of hematoxylin & eosin stained slides

    PubMed Central

    Zarella, Mark D.; Breen, David E.; Plagov, Andrei; Garcia, Fernando U.

    2015-01-01

    Hematoxylin and eosin (H&E) staining is ubiquitous in pathology practice and research. As digital pathology has evolved, the reliance of quantitative methods that make use of H&E images has similarly expanded. For example, cell counting and nuclear morphometry rely on the accurate demarcation of nuclei from other structures and each other. One of the major obstacles to quantitative analysis of H&E images is the high degree of variability observed between different samples and different laboratories. In an effort to characterize this variability, as well as to provide a substrate that can potentially mitigate this factor in quantitative image analysis, we developed a technique to project H&E images into an optimized space more appropriate for many image analysis procedures. We used a decision tree-based support vector machine learning algorithm to classify 44 H&E stained whole slide images of resected breast tumors according to the histological structures that are present. This procedure takes an H&E image as an input and produces a classification map of the image that predicts the likelihood of a pixel belonging to any one of a set of user-defined structures (e.g., cytoplasm, stroma). By reducing these maps into their constituent pixels in color space, an optimal reference vector is obtained for each structure, which identifies the color attributes that maximally distinguish one structure from other elements in the image. We show that tissue structures can be identified using this semi-automated technique. By comparing structure centroids across different images, we obtained a quantitative depiction of H&E variability for each structure. This measurement can potentially be utilized in the laboratory to help calibrate daily staining or identify troublesome slides. Moreover, by aligning reference vectors derived from this technique, images can be transformed in a way that standardizes their color properties and makes them more amenable to image processing. PMID

  7. An optimized color transformation for the analysis of digital images of hematoxylin & eosin stained slides.

    PubMed

    Zarella, Mark D; Breen, David E; Plagov, Andrei; Garcia, Fernando U

    2015-01-01

    Hematoxylin and eosin (H&E) staining is ubiquitous in pathology practice and research. As digital pathology has evolved, the reliance of quantitative methods that make use of H&E images has similarly expanded. For example, cell counting and nuclear morphometry rely on the accurate demarcation of nuclei from other structures and each other. One of the major obstacles to quantitative analysis of H&E images is the high degree of variability observed between different samples and different laboratories. In an effort to characterize this variability, as well as to provide a substrate that can potentially mitigate this factor in quantitative image analysis, we developed a technique to project H&E images into an optimized space more appropriate for many image analysis procedures. We used a decision tree-based support vector machine learning algorithm to classify 44 H&E stained whole slide images of resected breast tumors according to the histological structures that are present. This procedure takes an H&E image as an input and produces a classification map of the image that predicts the likelihood of a pixel belonging to any one of a set of user-defined structur