High-speed imaging using 3CCD camera and multi-color LED flashes
NASA Astrophysics Data System (ADS)
Hijazi, Ala; Friedl, Alexander; Cierpka, Christian; Kähler, Christian; Madhavan, Vis
2017-11-01
This paper demonstrates the possibility of capturing full-resolution, high-speed image sequences using a regular 3CCD color camera in conjunction with high-power light-emitting diodes of three different colors. This is achieved using a novel approach, referred to as spectral-shuttering, where a high-speed image sequence is captured using short-duration light pulses of different colors that are sent consecutively in very close succession. The work presented in this paper demonstrates the feasibility of configuring a high-speed camera system using low-cost and readily available off-the-shelf components. This camera can be used for recording six-frame sequences at frame rates up to 20 kHz or three-frame sequences at even higher frame rates. Both color crosstalk and spatial matching between the different channels of the camera are found to be within acceptable limits. A small amount of magnification difference between the different channels is found, and a simple calibration procedure for correcting the images is introduced. The images captured using the approach described here are of sufficiently high quality to be used for obtaining full-field quantitative information using techniques such as digital image correlation and particle image velocimetry. A sequence of six high-speed images of a bubble splash recorded at 400 Hz is presented as a demonstration.
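As a rough illustration of the spectral-shuttering idea, the sketch below unpacks one demosaicked 3CCD color frame into a time-ordered grayscale sequence, assuming each channel recorded exactly one LED pulse; the function name and pulse order are illustrative, not the authors' implementation.

```python
# Sketch of spectral shuttering: if red, green and blue LED pulses fire
# consecutively within one 3CCD exposure, each CCD channel records a
# different instant, so one color frame unpacks into three grayscale frames.
import numpy as np

def unpack_spectral_shutter(rgb_frame: np.ndarray, pulse_order=("R", "G", "B")):
    """Split one HxWx3 color frame into a time-ordered list of grayscale frames."""
    channel_index = {"R": 0, "G": 1, "B": 2}
    return [rgb_frame[:, :, channel_index[c]] for c in pulse_order]

# Two consecutive color frames therefore yield a six-frame sequence:
frame_a = np.random.rand(480, 640, 3)   # stand-ins for captured 3CCD frames
frame_b = np.random.rand(480, 640, 3)
sequence = unpack_spectral_shutter(frame_a) + unpack_spectral_shutter(frame_b)
print(len(sequence), sequence[0].shape)  # 6 frames of 480x640
```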
NASA Astrophysics Data System (ADS)
Li, Na; Gong, Xingyu; Li, Hongan; Jia, Pengtao
2018-01-01
For faded relics, such as the Terracotta Army, the 2D-3D registration between an optical camera and a point cloud model is an important part of color texture reconstruction and further applications. This paper proposes a nonuniform multiview color texture mapping for an image sequence and the three-dimensional (3D) point cloud model collected by Handyscan3D. We first introduce nonuniform multiview calibration, including an explanation of its algorithm principle and an analysis of its advantages. We then establish transformation equations based on SIFT feature points for the multiview image sequence. At the same time, the selection of nonuniform multiview SIFT feature points is introduced in detail. Finally, the solving process of the collinear equations based on multiview perspective projection is given in three steps with a flowchart. In the experiment, this method is applied to the color reconstruction of the kneeling figurine, the Tangsancai lady, and the general figurine. These results demonstrate that the proposed method provides effective support for the color reconstruction of faded cultural relics and is able to improve the accuracy of 2D-3D registration between the image sequence and the point cloud model.
Merkel, Daniel; Brinkmann, Eckard; Kämmer, Joerg C; Köhler, Miriam; Wiens, Daniel; Derwahl, Karl-Michael
2015-09-01
The electronic colorization of grayscale B-mode sonograms using various color schemes aims to enhance the adaptability and practicability of B-mode sonography in daylight conditions. The purpose of this study was to determine the diagnostic effectiveness and importance of colorized B-mode sonography. Fifty-three video sequences of sonographic examinations of the liver were digitized and subsequently colorized in 2 different color combinations (yellow-brown and blue-white). The set of 53 images consisted of 33 with isoechoic masses, 8 with obvious lesions of the liver (hypoechoic or hyperechoic), and 12 with inconspicuous reference images of the liver. The video sequences were combined in a random order and edited into half-hour video clips. Isoechoic liver lesions were successfully detected in 58% of the yellow-brown video sequences and in 57% of the grayscale video sequences (P = .74, not significant). Fifty percent of the isoechoic liver lesions were successfully detected in the blue-white video sequences, as opposed to a 55% detection rate in the corresponding grayscale video sequences (P = .11, not significant). In 2 subgroups, significantly more liver lesions were detected with grayscale sonography than with blue-white sonography. Yellow-brown-colorized B-mode sonography appears to be as effective as traditional grayscale sonography for the detection of isoechoic parenchymal liver lesions. Blue-white colorization in B-mode sonography is probably not as effective as grayscale sonography, although a statistically significant disadvantage was shown only in the subgroup of hyperechoic liver lesions. © 2015 by the American Institute of Ultrasound in Medicine.
2017-09-28
This sequence of color-enhanced images shows how quickly the viewing geometry changes for NASA's Juno spacecraft as it swoops by Jupiter. The images were obtained by JunoCam. Once every 53 days, Juno swings close to Jupiter, speeding over its clouds. In just two hours, the spacecraft travels from a perch over Jupiter's north pole through its closest approach (perijove), then passes over the south pole on its way back out. This sequence shows 11 color-enhanced images from Perijove 8 (Sept. 1, 2017) with the south pole on the left (11th image in the sequence) and the north pole on the right (first image in the sequence). The first image on the right shows a half-lit globe of Jupiter, with the north pole approximately at the upper center of the image close to the terminator -- the dividing line between night and day. As the spacecraft gets closer to Jupiter, the horizon moves in and the range of visible latitudes shrinks. The second and third images in this sequence show the north polar region rotating away from the spacecraft's field of view while the first of Jupiter's lighter-colored bands comes into view. The fourth through the eighth images display a blue-colored vortex in the mid-southern latitudes near Points of Interest "Collision of Colours," "Sharp Edge," "Caltech, by Halka," and "Structure01." The Points of Interest are locations in Jupiter's atmosphere that were identified and named by members of the general public. Additionally, a darker, dynamic band can be seen just south of the vortex. In the ninth and tenth images, the south polar region rotates into view. The final image on the left displays Jupiter's south pole in the center. From the start of this sequence of images to the end, roughly 1 hour and 35 minutes elapsed. https://photojournal.jpl.nasa.gov/catalog/PIA21967
Edge enhancement of color images using a digital micromirror device.
Di Martino, J Matías; Flores, Jorge L; Ayubi, Gastón A; Alonso, Julia R; Fernández, Ariel; Ferrari, José A
2012-06-01
A method for orientation-selective enhancement of edges in color images is proposed. The method utilizes the capacity of digital micromirror devices to generate a positive and a negative color replica of the image used as input. When both images are slightly displaced and imaged together, one obtains an image with enhanced edges. The proposed technique does not require a coherent light source or precise alignment. The proposed method could be potentially useful for processing large image sequences in real time. Validation experiments are presented.
NASA Astrophysics Data System (ADS)
Tan, Ru-Chao; Lei, Tong; Zhao, Qing-Min; Gong, Li-Hua; Zhou, Zhi-Hong
2016-12-01
To improve the slow processing speed of the classical image encryption algorithms and enhance the security of the private color images, a new quantum color image encryption algorithm based on a hyper-chaotic system is proposed, in which the sequences generated by the Chen's hyper-chaotic system are scrambled and diffused with three components of the original color image. Sequentially, the quantum Fourier transform is exploited to fulfill the encryption. Numerical simulations show that the presented quantum color image encryption algorithm possesses large key space to resist illegal attacks, sensitive dependence on initial keys, uniform distribution of gray values for the encrypted image and weak correlation between two adjacent pixels in the cipher-image.
Data Images and Other Graphical Displays for Directional Data
NASA Technical Reports Server (NTRS)
Morphet, Bill; Symanzik, Juergen
2005-01-01
Vectors, axes, and periodic phenomena have direction. Directional variation can be expressed as points on a unit circle and is the subject of circular statistics, a relatively new application of statistics. An overview of existing methods for the display of directional data is given. The data image for linear variables is reviewed, then extended to directional variables by displaying direction using a color scale composed of a sequence of four or more color gradients with continuity between sequences, ordered intuitively in a color wheel such that the color of the 0° angle is the same as the color of the 360° angle. Crossover, which arose in automating the summarization of historical wind data, and the color discontinuity resulting from the use of a single color gradient in computational fluid dynamics visualization are eliminated. The new method provides for simultaneous resolution of detail on a small scale and overall structure on a large scale. Example circular data images are given of a global view of average wind direction during El Niño periods, computed rocket motor internal combustion flow, a global view of the direction of the horizontal component of Earth's main magnetic field on 9/15/2004, and Space Shuttle solid rocket motor nozzle vectoring.
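The key property described here, that the color of 0° equals the color of 360°, is what a cyclic colormap provides. Below is a minimal sketch of such a directional data image, using matplotlib's cyclic 'hsv' colormap on synthetic wind directions rather than the authors' four-gradient scale.

```python
# Illustrative cyclic color-wheel mapping for directional data: direction is
# encoded as hue so that 0 deg and 360 deg receive the same color and no
# artificial "crossover" discontinuity appears.
import numpy as np
import matplotlib.pyplot as plt

directions = np.random.uniform(0.0, 360.0, size=(64, 64))  # synthetic data

fig, ax = plt.subplots()
# The 'hsv' colormap is cyclic: its first and last colors coincide.
im = ax.imshow(directions, cmap="hsv", vmin=0.0, vmax=360.0)
fig.colorbar(im, ax=ax, label="direction (deg)")
ax.set_title("Data image of directional data with a cyclic color scale")
plt.show()
```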
Image enhancement and color constancy for a vehicle-mounted change detection system
NASA Astrophysics Data System (ADS)
Tektonidis, Marco; Monnin, David
2016-10-01
Vehicle-mounted change detection systems help improve situational awareness on outdoor itineraries of interest. Since the visibility of acquired images is often affected by illumination effects (e.g., shadows), it is important to enhance local contrast. For the analysis and comparison of color images depicting the same scene at different time points, it is necessary to compensate for the color and lightness inconsistencies caused by the different illumination conditions. We have developed an approach for image enhancement and color constancy based on the center/surround Retinex model and the Gray World hypothesis. The combination of the two methods using a color processing function improves color rendition compared to either method alone. The use of stacked integral images (SII) allows local image processing to be performed efficiently. Our combined Retinex/Gray World approach has been successfully applied to image sequences acquired on outdoor itineraries at different time points, and a comparison with previous Retinex-based approaches has been carried out.
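A minimal sketch of the combined Retinex/Gray World idea is given below, assuming a Gaussian surround in place of the stacked-integral-image implementation; the function and parameter choices are illustrative only, not the authors' color processing function.

```python
# Center/surround Retinex per channel followed by a Gray World gain that
# equalizes the channel means to remove color casts.
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_gray_world(img: np.ndarray, sigma: float = 30.0) -> np.ndarray:
    """img: HxWx3 float array in (0, 1]. Returns an enhanced image in [0, 1]."""
    img = np.clip(img, 1e-6, 1.0)
    out = np.empty_like(img)
    for c in range(3):  # center/surround Retinex: log(center) - log(surround)
        surround = gaussian_filter(img[:, :, c], sigma)
        out[:, :, c] = np.log(img[:, :, c]) - np.log(np.clip(surround, 1e-6, None))
    out -= out.min()                        # shift log ratios to be non-negative
    # Gray World step: scale each channel so all channel means become equal.
    means = out.reshape(-1, 3).mean(axis=0)
    out *= means.mean() / np.clip(means, 1e-6, None)
    return np.clip(out / max(out.max(), 1e-6), 0.0, 1.0)
```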
Stereo sequence transmission via conventional transmission channel
NASA Astrophysics Data System (ADS)
Lee, Ho-Keun; Kim, Chul-Hwan; Han, Kyu-Phil; Ha, Yeong-Ho
2003-05-01
This paper proposes a new stereo sequence transmission technique that uses digital watermarking to maintain compatibility with conventional 2D digital TV. Stereo sequences are generally compressed and transmitted by exploiting the temporal-spatial redundancy between the stereo images. Because 3D image compression methods vary widely, users with conventional digital TV sets find it difficult to watch the transmitted 3D image sequences. To solve this problem, we exploit the information-hiding capability of digital watermarking and conceal the information of one stereo image within the three color channels of the reference image. The main goal of the presented technique is to let viewers with conventional DTV sets watch stereo movies at the same time. This goal is reached by considering the response of the human eye to color information and by using digital watermarking. To hide the right images within the left images effectively, bit changes in the three color channels are performed according to the estimated disparity. The proposed method assigns the displacement information of the right image to the YCbCr channels in the DCT domain: the LSB of each YCbCr channel is changed according to the bits of the disparity information. The performance of the presented method is confirmed by several computer experiments.
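The embedding rule can be illustrated with a toy pixel-domain sketch: disparity bits replace the LSBs of the reference image's YCbCr channels. The paper embeds in the DCT domain; that step is omitted here for brevity, and all names are illustrative.

```python
# Hide the right image's disparity bits in the LSBs of the left (reference)
# image's Y, Cb and Cr channels, and recover them from the stego image.
import numpy as np

def embed_disparity(ycbcr: np.ndarray, disparity_bits: np.ndarray) -> np.ndarray:
    """ycbcr: HxWx3 uint8; disparity_bits: HxWx3 array of 0/1 bits."""
    return (ycbcr & ~np.uint8(1)) | disparity_bits.astype(np.uint8)

def extract_disparity(stego: np.ndarray) -> np.ndarray:
    return stego & np.uint8(1)             # recover the hidden bit planes

left = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
bits = np.random.randint(0, 2, (4, 4, 3), dtype=np.uint8)
stego = embed_disparity(left, bits)
assert np.array_equal(extract_disparity(stego), bits)
```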
LSB-based Steganography Using Reflected Gray Code for Color Quantum Images
NASA Astrophysics Data System (ADS)
Li, Panchi; Lu, Aiping
2018-02-01
At present, the classical least-significant-bit (LSB) based image steganography has been extended to quantum image processing. For the existing LSB-based quantum image steganography schemes, the embedding capacity is no more than 3 bits per pixel. Therefore, it is meaningful to study how to improve the embedding capacity of quantum image steganography. This work presents a novel LSB-based steganography using reflected Gray code for color quantum images, with an embedding capacity of up to 4 bits per pixel. In the proposed scheme, the secret qubit sequence is considered as a sequence of 4-bit segments. For the four bits in each segment, the first bit is embedded in the second LSB of the B channel of the cover image, and the remaining three bits are embedded in the LSBs of the RGB channels of each color pixel simultaneously, using reflected Gray code to determine the embedded bit from the secret information. Following the transforming rule, the LSBs of the stego-image are not always the same as the secret bits, and the differences are up to almost 50%. Experimental results confirm that the proposed scheme shows good performance and outperforms the previous ones currently found in the literature in terms of embedding capacity.
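A classical (non-quantum) sketch of the per-pixel bookkeeping described above: one bit of each 4-bit secret segment goes to the second LSB of the blue channel, and the remaining three bits are Gray-coded into the RGB LSBs. The abstract does not spell out the exact transforming rule, so the mapping below is an assumption for illustration.

```python
# Embed one 4-bit secret segment into one RGB pixel using reflected Gray code.
import numpy as np

def gray(n: int) -> int:
    """Reflected Gray code of a 3-bit value."""
    return n ^ (n >> 1)

def embed_segment(pixel: np.ndarray, seg: int) -> np.ndarray:
    """pixel: length-3 uint8 array (R, G, B); seg: 4-bit secret segment."""
    r, g, b = int(pixel[0]), int(pixel[1]), int(pixel[2])
    b = (b & ~0b10) | (((seg >> 3) & 1) << 1)   # bit 1 -> second LSB of B
    code = gray(seg & 0b111)                    # bits 2-4 via Gray code
    r = (r & ~1) | ((code >> 2) & 1)            # Gray bits -> RGB LSBs
    g = (g & ~1) | ((code >> 1) & 1)
    b = (b & ~1) | (code & 1)
    return np.array([r, g, b], dtype=np.uint8)

print(embed_segment(np.array([200, 100, 50], dtype=np.uint8), 0b1011))
```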
High dynamic range image acquisition based on multiplex cameras
NASA Astrophysics Data System (ADS)
Zeng, Hairui; Sun, Huayan; Zhang, Tinghua
2018-03-01
High dynamic range imaging is an important technology for photoelectric information acquisition, providing higher dynamic range and more image detail, and it can better reflect the real environment, light and color information. Currently, methods of high dynamic range image synthesis based on different-exposure image sequences cannot adapt to dynamic scenes: they fail to overcome the effects of moving targets, resulting in ghosting artifacts. Therefore, a new high dynamic range image acquisition method based on a multiplex camera system was proposed. Firstly, different-exposure image sequences were captured with the camera array, and a derivative optical flow method based on color gradient was used to obtain the deviation between images and align them. Then, a high dynamic range image fusion weighting function was established by combining the inverse camera response function with the deviation between images, and was applied to generate a high dynamic range image. The experiments show that the proposed method can effectively obtain high dynamic range images in dynamic scenes and achieves good results.
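A compact sketch of the fusion step, assuming the frames are already aligned by optical flow and using a simple gamma curve as a stand-in for the calibrated inverse camera response; the hat-shaped weight favoring mid-tone pixels is a common choice, not necessarily the paper's weighting function.

```python
# Fuse aligned multi-exposure frames into a radiance map with a mid-tone weight.
import numpy as np

def fuse_hdr(frames, exposure_times, gamma=2.2):
    """frames: list of aligned HxWx3 arrays in [0,1]; returns HxWx3 radiance."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(frames, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)      # hat weight: trust mid-tones
        radiance = np.power(img, gamma) / t    # inverse response / exposure time
        acc += w * radiance
        wsum += w
    return acc / np.clip(wsum, 1e-8, None)

frames = [np.clip(np.random.rand(8, 8, 3) * s, 0, 1) for s in (0.5, 1.0, 2.0)]
hdr = fuse_hdr(frames, exposure_times=[1 / 30, 1 / 120, 1 / 480])
print(hdr.shape)
```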
Temporal enhancement of two-dimensional color doppler echocardiography
NASA Astrophysics Data System (ADS)
Terentjev, Alexey B.; Settlemier, Scott H.; Perrin, Douglas P.; del Nido, Pedro J.; Shturts, Igor V.; Vasilyev, Nikolay V.
2016-03-01
Two-dimensional color Doppler echocardiography is widely used for assessing blood flow inside the heart and blood vessels. Currently, frame acquisition time for this method varies from tens to hundreds of milliseconds, depending on Doppler sector parameters. This leads to low frame rates of the resulting video sequences, equal to tens of Hz, which is insufficient for some diagnostic purposes, especially in pediatrics. In this paper, we present a new approach for the reconstruction of 2D color Doppler cardiac images, which increases the frame rate to hundreds of Hz. This approach relies on a modified method of frame reordering originally applied to real-time 3D echocardiography. There are no previous publications describing the application of this method to 2D color Doppler data. The approach has been tested on several in-vivo cardiac 2D color Doppler datasets with an approximate duration of 30 s and a native frame rate of 15 Hz. The resulting image sequences had frame rates equivalent to 500 Hz.
Video enhancement method with color-protection post-processing
NASA Astrophysics Data System (ADS)
Kim, Youn Jin; Kwak, Youngshin
2015-01-01
This study proposes a post-processing method for video enhancement that adopts a color-protection technique. Color-protection attenuates perceptible artifacts due to over-enhancement in visually sensitive image regions such as low-chroma colors, including skin and gray objects. In addition, reducing the loss in color texture caused by out-of-color-gamut signals is also taken into account. Consequently, the color reproducibility of video sequences can be remarkably enhanced while undesirable visual exaggerations are minimized.
Imaging system design and image interpolation based on CMOS image sensor
NASA Astrophysics Data System (ADS)
Li, Yu-feng; Liang, Fei; Guo, Rui
2009-11-01
An image acquisition system is introduced, which consists of a color CMOS image sensor (OV9620), SRAM (CY62148), a CPLD (EPM7128AE) and a DSP (TMS320VC5509A). The CPLD implements the logic and timing control for the system. The SRAM stores the image data, and the DSP controls the image acquisition system through the SCCB (OmniVision Serial Camera Control Bus). The timing sequence of the CMOS image sensor OV9620 is analyzed. The imaging part and the high-speed image data memory unit are designed. The hardware and software design of the image acquisition and processing system is given. CMOS digital cameras use color filter arrays to sample different spectral components, such as red, green, and blue. At the location of each pixel only one color sample is taken, and the other colors must be interpolated from neighboring samples. We use an edge-oriented adaptive interpolation algorithm for the edge pixels and a bilinear interpolation algorithm for the non-edge pixels to improve the visual quality of the interpolated images. This method achieves high processing speed, decreases computational complexity, and effectively preserves image edges.
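The hybrid interpolation rule can be sketched as follows for a missing green sample: interpolate along the weaker gradient at edge pixels, otherwise average all four green neighbors bilinearly. The Bayer indexing and threshold below are assumptions for illustration.

```python
# Edge-oriented vs. bilinear green interpolation at a red/blue Bayer site.
import numpy as np

def green_at(raw: np.ndarray, y: int, x: int, edge_thresh: float = 10.0) -> float:
    """Interpolate the green value at a red/blue site of a Bayer mosaic."""
    dh = abs(raw[y, x - 1] - raw[y, x + 1])   # horizontal gradient
    dv = abs(raw[y - 1, x] - raw[y + 1, x])   # vertical gradient
    if abs(dh - dv) > edge_thresh:            # edge pixel: follow the edge
        if dh < dv:
            return (raw[y, x - 1] + raw[y, x + 1]) / 2.0
        return (raw[y - 1, x] + raw[y + 1, x]) / 2.0
    # non-edge pixel: simple bilinear average of the four green neighbors
    return (raw[y, x - 1] + raw[y, x + 1] + raw[y - 1, x] + raw[y + 1, x]) / 4.0

raw = np.random.rand(16, 16) * 255.0          # stand-in for OV9620 raw data
print(green_at(raw, 5, 5))
```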
The Faintest WISE Debris Disks: Enhanced Methods for Detection and Verification
NASA Astrophysics Data System (ADS)
Patel, Rahul I.; Metchev, Stanimir A.; Heinze, Aren; Trollo, Joseph
2017-02-01
In an earlier study, we reported nearly 100 previously unknown dusty debris disks around Hipparcos main-sequence stars within 75 pc by selecting stars with excesses in individual WISE colors. Here, we further scrutinize the Hipparcos 75 pc sample to (1) gain sensitivity to previously undetected, fainter mid-IR excesses and (2) remove spurious excesses contaminated by previously unidentified blended sources. We improve on our previous method by adopting a more accurate measure of the confidence threshold for excess detection and by adding an optimally weighted color average that incorporates all shorter-wavelength WISE photometry, rather than using only individual WISE colors. The latter is equivalent to spectral energy distribution fitting, but only over WISE bandpasses. In addition, we leverage the higher-resolution WISE images available through the unWISE.me image service to identify contaminated WISE excesses based on photocenter offsets among the W3- and W4-band images. Altogether, we identify 19 previously unreported candidate debris disks. Combined with the results from our earlier study, we have found a total of 107 new debris disks around 75 pc Hipparcos main-sequence stars using precisely calibrated WISE photometry. This expands the 75 pc debris disk sample by 22% around Hipparcos main-sequence stars and by 20% overall (including non-main-sequence and non-Hipparcos stars).
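The optimally weighted color average amounts to an inverse-variance mean of the individual WISE colors; below is a small numerical sketch with hypothetical magnitudes and errors.

```python
# Inverse-variance weighted average of W1-W4, W2-W4, W3-W4 colors, boosting
# sensitivity to a faint W4 excess. All values are hypothetical.
import numpy as np

mags = {"W1": 7.10, "W2": 7.08, "W3": 7.05, "W4": 6.80}  # hypothetical star
errs = {"W1": 0.03, "W2": 0.03, "W3": 0.04, "W4": 0.06}

colors = np.array([mags[b] - mags["W4"] for b in ("W1", "W2", "W3")])
sigmas = np.array([np.hypot(errs[b], errs["W4"]) for b in ("W1", "W2", "W3")])
weights = 1.0 / sigmas**2
avg = np.sum(weights * colors) / weights.sum()
avg_err = 1.0 / np.sqrt(weights.sum())
print(f"weighted color excess = {avg:.3f} +/- {avg_err:.3f} mag")
```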
Fire detection system using random forest classification for image sequences of complex background
NASA Astrophysics Data System (ADS)
Kim, Onecue; Kang, Dong-Joong
2013-06-01
We present a fire alarm system based on image processing that detects fire accidents in various environments. To reduce the false alarms that frequently appeared in earlier systems, we combined image features including color, motion, and blinking information. We specifically define the color conditions of fires in both the HSV (hue, saturation, value) and RGB color spaces. Fire features are represented as intensity variation, color mean and variance, motion, and image differences. Moreover, blinking fire features are modeled by using crossing patches. We propose an algorithm that classifies patches into fire or nonfire areas by using random forest supervised learning. We design an embedded surveillance device made with acrylonitrile butadiene styrene housing for stable fire detection in outdoor environments. The experimental results show that our algorithm works robustly in complex environments and is able to detect fires in real time.
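A schematic of the patch-classification stage using scikit-learn's random forest; the seven-element feature vector mirrors the features named above, but the training data here is synthetic and purely illustrative.

```python
# Classify image patches as fire / non-fire from hand-crafted features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Each row: [mean_H, mean_S, mean_V, color_var, intensity_var, motion, diff]
X_fire = rng.normal(loc=[0.05, 0.8, 0.9, 0.2, 0.5, 0.6, 0.4], scale=0.1, size=(200, 7))
X_bg = rng.normal(loc=[0.5, 0.3, 0.4, 0.05, 0.1, 0.1, 0.05], scale=0.1, size=(200, 7))
X = np.vstack([X_fire, X_bg])
y = np.array([1] * 200 + [0] * 200)          # 1 = fire patch, 0 = non-fire

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
patch = rng.normal(loc=[0.06, 0.75, 0.85, 0.18, 0.45, 0.55, 0.35],
                   scale=0.05, size=(1, 7))
print("fire" if clf.predict(patch)[0] == 1 else "non-fire")
```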
Mars-Flyby Comet in False Color
2014-11-07
This frame from a movie sequence of images from NASA's Mars Reconnaissance Orbiter (MRO) shows comet C/2013 A1 Siding Spring before and after its close pass by Mars in October 2014. False color enhances subtle variations in brightness in the comet's coma.
Martian soil stratigraphy and rock coatings observed in color-enhanced Viking Lander images
NASA Technical Reports Server (NTRS)
Strickland, E. L., III
1979-01-01
Subtle color variations of martian surface materials were enhanced in eight Viking Lander (VL) color images. Well-defined soil units recognized at each site (six at VL-1 and four at VL-2), are identified on the basis of color, texture, morphology, and contact relations. The soil units at the Viking 2 site form a well-defined stratigraphic sequence, whereas the sequence at the Viking 1 site is only partially defined. The same relative soil colors occur at the two sites, suggesting that similar soil units are widespread on Mars. Several types of rock surface materials can be recognized at the two sites; dark, relatively 'blue' rock surfaces are probably minimally weathered igneous rock, whereas bright rock surfaces, with a green/(blue + red) ratio higher than that of any other surface material, are interpreted as a weathering product formed in situ on the rock. These rock surface types are common at both sites. Soil adhering to rocks is common at VL-2, but rare at VL-1. The mechanism that produces the weathering coating on rocks probably operates planet-wide.
Malware analysis using visualized image matrices.
Han, KyoungSoo; Kang, BooJoong; Im, Eul Gyu
2014-01-01
This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities for the image matrices. In particular, our proposed methods are applicable to packed malware samples by applying them to the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as functions and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples; the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically, with accuracies of 0.9896 and 0.9732, respectively.
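One simple way to realize the opcode-to-pixel mapping is to hash consecutive opcode pairs to a coordinate and an RGB value; the hash choice and matrix size below are assumptions, not the paper's exact construction.

```python
# Build an RGB image matrix from an opcode sequence via pairwise hashing.
import hashlib
import numpy as np

def opcodes_to_matrix(opcodes, size=64):
    img = np.zeros((size, size, 3), dtype=np.uint8)
    for a, b in zip(opcodes, opcodes[1:]):
        digest = hashlib.md5(f"{a},{b}".encode()).digest()
        x, y = digest[0] % size, digest[1] % size   # coordinate from the hash
        img[y, x] = list(digest[2:5])               # RGB from the hash bytes
    return img

trace = ["push", "mov", "call", "add", "mov", "call", "ret"]
matrix = opcodes_to_matrix(trace)
print(matrix.shape, matrix.sum())
```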
Sunset Sequence in Mars Gale Crater Animation
2015-05-08
NASA's Curiosity Mars rover recorded this sequence of views of the sun setting at the close of the mission's 956th Martian day, or sol (April 15, 2015), from the rover's location in Gale Crater. The four images shown in sequence here were taken over a span of 6 minutes, 51 seconds. This was the first sunset observed in color by Curiosity. The images come from the left-eye camera of the rover's Mast Camera (Mastcam). The color has been calibrated and white-balanced to remove camera artifacts. Mastcam sees color very similarly to what human eyes see, although it is actually a little less sensitive to blue than people are. Dust in the Martian atmosphere has fine particles that permit blue light to penetrate the atmosphere more efficiently than longer-wavelength colors. That causes the blue colors in the mixed light coming from the sun to stay closer to the sun's part of the sky, compared to the wider scattering of yellow and red colors. The effect is most pronounced near sunset, when light from the sun passes through a longer path in the atmosphere than it does at mid-day. Malin Space Science Systems, San Diego, built and operates the rover's Mastcam. NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology, Pasadena, manages the Mars Science Laboratory Project for NASA's Science Mission Directorate, Washington. JPL designed and built the project's Curiosity rover. http://photojournal.jpl.nasa.gov/catalog/PIA19401
2017-05-25
This sequence of enhanced-color images shows how quickly the viewing geometry changes for NASA's Juno spacecraft as it swoops by Jupiter. The images were obtained by JunoCam. Once every 53 days the Juno spacecraft swings close to Jupiter, speeding over its clouds. In just two hours, the spacecraft travels from a perch over Jupiter's north pole through its closest approach (perijove), then passes over the south pole on its way back out. This sequence shows 14 enhanced-color images. The first image on the left shows the entire half-lit globe of Jupiter, with the north pole approximately in the center. As the spacecraft gets closer to Jupiter, the horizon moves in and the range of visible latitudes shrinks. The third and fourth images in this sequence show the north polar region rotating away from our view while a band of wavy clouds at northern mid-latitudes comes into view. By the fifth image of the sequence the band of turbulent clouds is nicely centered in the image. The seventh and eighth images were taken just before the spacecraft was at its closest point to Jupiter, near Jupiter's equator. Even though these two pictures were taken just four minutes apart, the view is changing quickly. As the spacecraft crossed into the southern hemisphere, the bright "south tropical zone" dominates the ninth, 10th and 11th images. The white ovals in a feature nicknamed Jupiter's "String of Pearls" are visible in the 12th and 13th images. In the 14th image Juno views Jupiter's south pole. https://photojournal.jpl.nasa.gov/catalog/PIA21645
Diffusion Tensor Magnetic Resonance Imaging Strategies for Color Mapping of Human Brain Anatomy
Boujraf, Saïd
2018-01-01
Background: A color mapping of fiber tract orientation using diffusion tensor imaging (DTI) can be prominent in clinical practice. The goal of this paper is to perform a comparative study of visualized diffusion anisotropy in human brain anatomical entities using three different color-mapping techniques based on diffusion-weighted imaging (DWI) and DTI. Methods: The first technique is based on calculating a color map from DWIs measured in three perpendicular directions. The second technique is based on eigenvalues derived from the diffusion tensor. The last technique is based on the three eigenvectors corresponding to the sorted eigenvalues derived from the diffusion tensor. All magnetic resonance imaging measurements were performed using a 1.5 Tesla Siemens Vision whole-body imaging system. A single-shot DW echo-planar imaging sequence with a Stejskal–Tanner approach and trapezoidal diffusion gradients was used. The slice orientation was transverse. The basic measurement yielded a set of 13 images: each series consists of a single image without diffusion weighting, plus two DWIs for each of six noncollinear magnetic field gradient directions. Results: The three types of color maps were calculated from the DWIs obtained and the DTI. We established an excellent similarity between the image data in the color maps and the fiber directions of known anatomical structures (e.g., corpus callosum and gray matter). Conclusions: Rotationally invariant quantities such as the eigenvectors of the diffusion tensor better reflected the real orientation found in the studied tissue. PMID:29928631
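The third (eigenvector-based) technique can be sketched as mapping each voxel's principal eigenvector to RGB, weighted by fractional anisotropy; the synthetic tensor field below stands in for tensors fitted from the 13-image DWI series.

```python
# Directionally encoded color map: |v1_x|, |v1_y|, |v1_z| -> R, G, B, scaled
# by fractional anisotropy (FA) so isotropic voxels appear dark.
import numpy as np

def dti_color_map(tensors: np.ndarray) -> np.ndarray:
    """tensors: HxWx3x3 symmetric tensors. Returns an HxWx3 RGB map in [0, 1]."""
    evals, evecs = np.linalg.eigh(tensors)           # eigenvalues ascending
    v1 = evecs[..., :, -1]                           # principal eigenvector
    md = evals.mean(axis=-1, keepdims=True)          # mean diffusivity
    fa = np.sqrt(1.5 * ((evals - md) ** 2).sum(-1) /
                 np.clip((evals ** 2).sum(-1), 1e-12, None))
    return np.abs(v1) * fa[..., None]

field = np.random.rand(4, 4, 3, 3)
field = (field + field.swapaxes(-1, -2)) / 2 + 2 * np.eye(3)  # SPD-ish tensors
print(dti_color_map(field).shape)
```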
Calibrated Color and Albedo Maps of Mercury
NASA Astrophysics Data System (ADS)
Robinson, M. S.; Lucey, P. G.
1996-03-01
In order to determine the albedo and color of the mercurian surface, we are completing calibrated mosaics of Mariner 10 image data. A set of clear filter mosaics is being compiled in such a way as to maximize the signal-to-noise ratio of the data and to allow for a quantitative measure of the precision of the data on a pixel-by-pixel basis. Three major imaging sequences of Mercury were acquired by Mariner 10: incoming first encounter (centered at 20S, 2E), outgoing first encounter (centered at 20N, 175E), and southern hemisphere second encounter (centered at 40S, 100E). For each sequence we are making separate mosaics for each camera (A and B) in order to have independent measurements. For each mosaic, regions of overlap from frame-to-frame are being averaged and the attendant standard deviations are being calculated. Due to the highly redundant nature of the data, each pixel in each mosaic will be an average calculated from 1-10 images. Each mosaic will have a corresponding standard deviation and n (number of measurements) map. A final mosaic will be created by averaging the six independent mosaics. This procedure lessens the effects of random noise and calibration residuals. From these data an albedo map will be produced using an improved photometric function for the Moon. A similar procedure is being followed for the lower resolution color sequences (ultraviolet, blue, orange, ultraviolet polarized). These data will be calibrated to absolute units through comparison of Mariner 10 images acquired of the Moon and Jupiter. Spectral interpretation of these new color and albedo maps will be presented with an emphasis on comparison with the Moon.
NASA Astrophysics Data System (ADS)
Falcoff, Daniel E.; Canali, Luis R.
1999-08-01
This work presents a method for the detection and recognition of road signs on highways and city streets. It is based fundamentally on identifying a road sign, located at the edge of a highway or city street, by its color and shape, and then recognizing it. To do so, the acquired RGB image is processed by applying various filters to the input image sequence or by intensifying its colors, recognizing the sign's silhouette, and then segmenting the sign and comparing its symbology with a previously stored and classified database.
Toet, Alexander; Hogervorst, Maarten A; Pinkus, Alan R
2016-01-01
The fusion and enhancement of multiband nighttime imagery for surveillance and navigation has been the subject of extensive research for over two decades. Despite the ongoing efforts in this area there is still only a small number of static multiband test images available for the development and evaluation of new image fusion and enhancement methods. Moreover, dynamic multiband imagery is also currently lacking. To fill this gap we present the TRICLOBS dynamic multi-band image data set containing sixteen registered visual (0.4-0.7μm), near-infrared (NIR, 0.7-1.0μm) and long-wave infrared (LWIR, 8-14μm) motion sequences. They represent different military and civilian surveillance scenarios registered in three different scenes. Scenes include (military and civilian) people that are stationary, walking or running, or carrying various objects. Vehicles, foliage, and buildings or other man-made structures are also included in the scenes. This data set is primarily intended for the development and evaluation of image fusion, enhancement and color mapping algorithms for short-range surveillance applications. The imagery was collected during several field trials with our newly developed TRICLOBS (TRI-band Color Low-light OBServation) all-day all-weather surveillance system. This system registers a scene in the Visual, NIR and LWIR part of the electromagnetic spectrum using three optically aligned sensors (two digital image intensifiers and an uncooled long-wave infrared microbolometer). The three sensor signals are mapped to three individual RGB color channels, digitized, and stored as uncompressed RGB (false) color frames. The TRICLOBS data set enables the development and evaluation of (both static and dynamic) image fusion, enhancement and color mapping algorithms. To allow the development of realistic color remapping procedures, the data set also contains color photographs of each of the three scenes. The color statistics derived from these photographs can be used to define color mappings that give the multi-band imagery a realistic color appearance.
NASA Astrophysics Data System (ADS)
Sajjadi, Seyed; Buelna, Xavier; Eloranta, Jussi
2018-01-01
Application of inexpensive light emitting diodes as backlight sources for time-resolved shadowgraph imaging is demonstrated. The two light sources tested are able to produce light pulse sequences in the nanosecond and microsecond time regimes. After determining their time response characteristics, the diodes were applied to study the gas bubble formation around laser-heated copper nanoparticles in superfluid helium at 1.7 K and to determine the local cavitation bubble dynamics around fast moving metal micro-particles in the liquid. A convolutional neural network algorithm for analyzing the shadowgraph images by a computer is presented and the method is validated against the results from manual image analysis. The second application employed the red-green-blue light emitting diode source that produces light pulse sequences of the individual colors such that three separate shadowgraph frames can be recorded onto the color pixels of a charge-coupled device camera. Such an image sequence can be used to determine the moving object geometry, local velocity, and acceleration/deceleration. These data can be used to calculate, for example, the instantaneous Reynolds number for the liquid flow around the particle. Although specifically demonstrated for superfluid helium, the technique can be used to study the dynamic response of any medium that exhibits spatial variations in the index of refraction.
From printed color to image appearance: tool for advertising assessment
NASA Astrophysics Data System (ADS)
Bonanomi, Cristian; Marini, Daniele; Rizzi, Alessandro
2012-07-01
We present a methodology to calculate the color appearance of advertising billboards set in indoor and outdoor environments, printed on different types of paper support and viewed under different illuminations. The aim is to simulate the visual appearance of an image printed on a specific support, observed in a certain context and illuminated with a specific light source. Knowing in advance the visual rendering of an image under different conditions can avoid problems related to its visualization. The proposed method applies a sequence of transformations to convert a four-channel (CMYK) image into a spectral one, taking the paper support into account; it then simulates the chosen illumination, and finally computes an estimate of the appearance.
Creating photorealistic virtual model with polarization-based vision system
NASA Astrophysics Data System (ADS)
Shibata, Takushi; Takahashi, Toru; Miyazaki, Daisuke; Sato, Yoichi; Ikeuchi, Katsushi
2005-08-01
Recently, 3D models have been used in many fields such as education, medical services, entertainment, art, and digital archiving, owing to advances in computing power, and the demand for photorealistic virtual models is increasing. In the computer vision field, a number of techniques have been developed for creating a virtual model by observing the real object. In this paper, we propose a method for creating a photorealistic virtual model by using a laser range sensor and a polarization-based image capture system. We capture the range and color images of the object, which is rotated on a rotary table. Using the reconstructed object shape and the sequence of color images of the object, the parameters of a reflection model are estimated in a robust manner. As a result, we can build a photorealistic 3D model that takes surface reflection into account. The key point of the proposed method is that, first, the diffuse and specular reflection components are separated from the color image sequence, and then the reflectance parameters of each reflection component are estimated separately. In the separation of reflection components, we use a polarization filter. This approach enables estimation of the reflectance properties of real objects whose surfaces show specularity as well as diffuse reflection. The recovered object shape and reflectance properties are then used for synthesizing object images with realistic shading effects under arbitrary illumination conditions.
ACS Imaging of beta Pic: Searching for the origin of rings and asymmetry in planetesimal disks
NASA Astrophysics Data System (ADS)
Kalas, Paul
2003-07-01
The emerging picture for planetesimal disks around main sequence stars is that their radial and azimuthal symmetries are significantly deformed by the dynamical effects of either planets interior to the disk, or stellar objects exterior to the disk. The cause of these structures, such as the 50 AU cutoff of our Kuiper Belt, remains mysterious. Structure in the beta Pic planetesimal disk could be due to dynamics controlled by an extrasolar planet, or by the tidal influence of a more massive object exterior to the disk. The hypothesis of an extrasolar planet causing the vertical deformation in the disk predicts a blue color to the disk perpendicular to the disk midplane. The hypothesis that a stellar perturber deforms the disk predicts a globally uniform color and the existence of ring-like structure beyond 800 AU radius. We propose to obtain deep, multi-color images of the beta Pic disk ansae in the region 15"-220" {200-4000 AU} radius with the ACS WFC. The unparalleled stability of the HST PSF means that these data are uniquely capable of delivering the color sensitivity that can distinguish between the two theories of beta Pic's disk structure. Ascertaining the cause of such structure provide a meaningful context for understanding the dynamical history of our early solar system, as well as other planetesimal systems imaged around main sequence stars.
Color-coded visualization of magnetic resonance imaging multiparametric maps
NASA Astrophysics Data System (ADS)
Kather, Jakob Nikolas; Weidner, Anja; Attenberger, Ulrike; Bukschat, Yannick; Weis, Cleo-Aron; Weis, Meike; Schad, Lothar R.; Zöllner, Frank Gerrit
2017-01-01
Multiparametric magnetic resonance imaging (mpMRI) data are increasingly used in the clinic, e.g., for the diagnosis of prostate cancer. In contrast to conventional MR imaging data, multiparametric data typically include functional measurements such as diffusion and perfusion imaging sequences. Conventionally, these measurements are visualized with a one-dimensional color scale, allowing only one dimension of information to be encoded. Yet human perception places visual information in a three-dimensional color space. In theory, each dimension of this space can be utilized to encode visual information. We addressed this issue and developed a new method for tri-variate color-coded visualization of mpMRI data sets. We showed the usefulness of our method in a preclinical and in a clinical setting: in imaging data of a rat model of acute kidney injury, the method yielded characteristic visual patterns, and in a clinical data set of N = 13 prostate cancer mpMRI scans, we assessed diagnostic performance in a blinded study with N = 5 observers. Compared to conventional radiological evaluation, color-coded visualization was comparable in terms of positive and negative predictive values. Thus, we showed that human observers can successfully make use of the novel method. This method can be broadly applied to visualize different types of multivariate MRI data.
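A minimal sketch of tri-variate color coding: three co-registered parameter maps are normalized and assigned to the R, G and B channels, so each voxel jointly encodes three measurements. The map names and normalization ranges are illustrative assumptions.

```python
# Map three co-registered parameter maps into one RGB image.
import numpy as np

def trivariate_rgb(maps, ranges):
    """maps: three HxW arrays; ranges: three (lo, hi) tuples. Returns HxWx3."""
    channels = []
    for m, (lo, hi) in zip(maps, ranges):
        channels.append(np.clip((m - lo) / (hi - lo), 0.0, 1.0))
    return np.stack(channels, axis=-1)

# Synthetic stand-ins for, e.g., a T2-like map, ADC, and a perfusion measure.
t2, adc, perf = (np.random.rand(128, 128) for _ in range(3))
rgb = trivariate_rgb([t2, adc, perf], [(0, 1), (0, 1), (0, 1)])
print(rgb.shape)
```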
X-ray and Optical Observations of NGC 1788
NASA Astrophysics Data System (ADS)
Alcalá, J. M.; Covino, E.; Wachter, S.; Hoard, D. W.; Sterzik, M. F.; Durisen, R. H.; Freyberg, M.; Cooksey, K.
We report on the results of ROSAT High Resolution Imager (HRI) X-ray observations and optical wide-field spectroscopy and imaging in the star forming region NGC 1788. Several new low-mass pre-main sequence (PMS) stars have been found based on intermediate-resolution spectroscopy. Many new PMS candidate members of NGC 1788 are selected using the spectroscopically confirmed PMS stars to define the PMS locus in color-magnitude diagrams. Some objects with very red colors, detected just above the limiting magnitude of our images, are good candidates for young brown dwarfs (BDs). The BD nature of these objects needs to be confirmed with subsequent IR observations.
In situ spectroradiometric quantification of ERTS data
NASA Technical Reports Server (NTRS)
Yost, E. (Principal Investigator)
1972-01-01
The author has identified the following significant results. Additive color photographic analysis of ERTS-1 multispectral imagery indicates that the presence of soil moisture in playas (desert dry lakes) can be readily detected from space. Time-sequence additive color presentations, in which images from the 600-700 nm band taken at three successive 18-day cycles are combined, show that changes in the soil moisture of playas over time can be detected as unique color signatures and can probably be quantitatively measured using photographic images of multispectral scanner data.
NASA Astrophysics Data System (ADS)
Rodrigo, Ranga P.; Ranaweera, Kamal; Samarabandu, Jagath K.
2004-05-01
Focus of attention is often attributed to biological vision system where the entire field of view is first monitored and then the attention is focused to the object of interest. We propose using a similar approach for object recognition in a color image sequence. The intention is to locate an object based on a prior motive, concentrate on the detected object so that the imaging device can be guided toward it. We use the abilities of the intelligent image analysis framework developed in our laboratory to generate an algorithm dynamically to detect the particular type of object based on the user's object description. The proposed method uses color clustering along with segmentation. The segmented image with labeled regions is used to calculate the shape descriptor parameters. These and the color information are matched with the input description. Gaze is then controlled by issuing camera movement commands as appropriate. We present some preliminary results that demonstrate the success of this approach.
VizieR Online Data Catalog: NGC 7129 pre-main sequence stars (Stelzer+, 2009)
NASA Astrophysics Data System (ADS)
Stelzer, B.; Scholz, A.
2010-09-01
We make use of X-ray and IR imaging observations to identify the pre-main sequence stars in NGC 7129. We define a sample of young stellar objects based on color-color diagrams composed from IR photometry between 1.6 and 8um, from 2MASS and Spitzer, and based on X-ray detected sources from a Chandra observation. A 22ks long Chandra observation targeting the Herbig star SVS 12 was carried out on Mar 11, 2006 (start of observation UT 14h29m18s). (5 data files).
Forming a Bose-Einstein Condensate
2014-09-26
This sequence of false-color images shows the formation of a Bose-Einstein condensate in the Cold Atom Laboratory prototype at NASA's Jet Propulsion Laboratory as the temperature gets progressively closer to absolute zero.
Disentangling perceptual from motor implicit sequence learning with a serial color-matching task.
Gheysen, Freja; Gevers, Wim; De Schutter, Erik; Van Waelvelde, Hilde; Fias, Wim
2009-08-01
This paper contributes to the domain of implicit sequence learning by presenting a new version of the serial reaction time (SRT) task that allows perceptual learning to be unambiguously separated from motor learning. Participants matched the colors of three small squares with the color of a subsequently presented large target square. An identical sequential structure was tied to the colors of the target square (perceptual version, Experiment 1) or to the manual responses (motor version, Experiment 2). Short blocks of sequenced and randomized trials alternated, providing continuous monitoring of the learning process. Reaction time measurements demonstrated clear evidence of independent learning of perceptual and motor serial information, though with different time courses for the two learning processes. No explicit awareness of the serial structure was needed for either type of learning to occur. The paradigm introduced in this paper shows that perceptual learning can occur with SRT measurements and opens important perspectives for future imaging studies addressing the ongoing question of which brain areas are involved in the implicit learning of modality-specific (motor vs. perceptual) or general serial order.
NASA Astrophysics Data System (ADS)
Sui, Liansheng; Liu, Benqing; Wang, Qiang; Li, Ye; Liang, Junli
2015-12-01
A color image encryption scheme is proposed based on the Yang-Gu mixture amplitude-phase retrieval algorithm and a two-coupled logistic map in the gyrator transform domain. First, the color plaintext image is decomposed into red, green and blue components, which are scrambled individually by three random sequences generated using the two-dimensional Sine logistic modulation map. Second, each scrambled component is encrypted into a real-valued function with a stationary white noise distribution through the iterative amplitude-phase retrieval process in the gyrator transform domain, and the three resulting functions are treated as the red, green and blue channels of the color ciphertext image. The ciphertext image is thus a real-valued function and is more convenient to store and transmit. In the encryption and decryption processes, a chaotic random phase mask generated from the logistic map is employed as the phase key, which means that only the initial values are used as the private key and the cryptosystem is highly convenient for key management. Meanwhile, the security of the cryptosystem is greatly enhanced by the high sensitivity of the private keys. Simulation results are presented to demonstrate the security and robustness of the proposed scheme.
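The scrambling stage can be sketched classically with a plain logistic map (the paper uses the two-dimensional Sine logistic modulation map): the chaotic sequence's sort order defines a key-dependent pixel permutation.

```python
# Key-dependent pixel permutation derived from a logistic-map sequence.
import numpy as np

def logistic_permutation(n: int, x0: float, mu: float = 3.99) -> np.ndarray:
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = mu * x * (1.0 - x)             # logistic map iteration
        seq[i] = x
    return np.argsort(seq)                 # permutation derived from the chaos

def scramble(channel: np.ndarray, x0: float) -> np.ndarray:
    perm = logistic_permutation(channel.size, x0)
    return channel.ravel()[perm].reshape(channel.shape)

red = np.arange(16, dtype=np.uint8).reshape(4, 4)
scrambled = scramble(red, x0=0.3141592)    # x0 acts as the private key
print(scrambled)                           # invert with argsort(perm) to decrypt
```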
Saturn's Hexagon as Summer Solstice Approaches
2017-05-24
These natural color views from NASA's Cassini spacecraft compare the appearance of Saturn's north-polar region in June 2013 and April 2017. In both views, Saturn's polar hexagon dominates the scene. The comparison shows how clearly the color of the region changed in the interval between the two views, which represents the latter half of Saturn's northern hemisphere spring. In 2013, the entire interior of the hexagon appeared blue. By 2017, most of the hexagon's interior was covered in yellowish haze, and only the center of the polar vortex retained the blue color. The seasonal arrival of the sun's ultraviolet light triggers the formation of photochemical aerosols, leading to haze formation. The general yellowing of the polar region is believed to be caused by smog particles produced by increasing solar radiation shining on the polar region as Saturn approached the northern summer solstice on May 24, 2017. Scientists are considering several ideas to explain why the center of the polar vortex remains blue while the rest of the polar region has turned yellow. One idea is that, because the atmosphere in the vortex's interior is the last place in the northern hemisphere to be exposed to spring and summer sunlight, smog particles have not yet changed the color of the region. A second explanation hypothesizes that the polar vortex may have an internal circulation similar to hurricanes on Earth. If the Saturnian polar vortex indeed has an analogous structure to terrestrial hurricanes, the circulation should be downward in the eye of the vortex. The downward circulation should keep the atmosphere clear of the photochemical smog particles, and may explain the blue color. Images captured with Cassini's wide-angle camera using red, green and blue spectral filters were combined to create these natural-color views. The 2013 view (left in the combined view), was captured on June 25, 2013, when the spacecraft was about 430,000 miles (700,000 kilometers) away from Saturn. The original versions of these images, as sent by the spacecraft, have a size of 512 by 512 pixels and an image scale of about 52 miles (80 kilometers) per pixel; the images have been mapped in polar stereographic projection to the resolution of approximately 16 miles (25 kilometers) per pixel. The second and third frames in the animation were taken approximately 130 and 260 minutes after the first image. The 2017 sequence (right in the combined view) was captured on April 25, 2017, just before Cassini made its first dive between Saturn and its rings. During the imaging sequence, the spacecraft's distance from the center of the planet changed from 450,000 miles (725,000 kilometers) to 143,000 miles (230,000 kilometers). The original versions of these images, as sent by the spacecraft, have a size of 512 by 512 pixels. The resolution of the original images changed from about 52 miles (80 kilometers) per pixel at the beginning to about 9 miles (14 kilometers) per pixel at the end. The images have been mapped in polar stereographic projection to the resolution of approximately 16 miles (25 kilometers) per pixel. The average interval between the frames in the movie sequence is 230 minutes. Corresponding animated movie sequences are available at https://photojournal.jpl.nasa.gov/catalog/PIA21611
The MESSENGER Earth Flyby: Results from the Mercury Dual Imaging System
NASA Astrophysics Data System (ADS)
Prockter, L. M.; Murchie, S. L.; Hawkins, S. E.; Robinson, M. S.; Shelton, R. G.; Vaughan, R. M.; Solomon, S. C.
2005-12-01
The MESSENGER (MErcury Surface, Space ENvironment, Geochemistry, and Ranging) spacecraft was launched from Cape Canaveral Air Force Station, Fla., on 3 August 2004. It returned to Earth for a gravity assist on 2 August 2005, providing an exceptional opportunity for the Science Team to perform instrument calibrations and to test some of the data acquisition sequences that will be used to meet Mercury science goals. The Mercury Dual Imaging System (MDIS), one of seven science instruments on MESSENGER, consists of a wide-angle and a narrow-angle imager that together can map landforms, track variations in surface color, and carry out stereogrammetry. The two imagers are mounted on a pivot platform that enables the instrument to point in a different direction from the spacecraft boresight, allowing great flexibility and increased imaging coverage. During the week prior to the closest approach to Earth, MDIS acquired a number of images of the Moon for radiometric calibration and to test optical navigation sequences that will be used to target planetary flybys. Twenty-four hours before closest approach, images of the Earth were acquired with 11 filters of the wide-angle camera. After MDIS flew over the nightside of the Earth, additional color images centered on South America were obtained at sufficiently high resolution to discriminate small-scale features such as the Amazon River and Lake Titicaca. During its departure from Earth, MDIS acquired a sequence of images taken in three filters every 4 minutes over a period of 24 hours. These images have been assembled into a movie of a crescent Earth that begins as South America slides across the terminator into darkness and continues for one full Earth rotation. This movie and the other images have provided a successful test of the sequences that will be used during the MESSENGER Mercury flybys in 2008 and 2009 and have demonstrated the high quality of the MDIS wide-angle camera.
Dactyl Alphabet Gesture Recognition in a Video Sequence Using Microsoft Kinect
NASA Astrophysics Data System (ADS)
Artyukhin, S. G.; Mestetskiy, L. M.
2015-05-01
This paper presents an efficient framework for static gesture recognition based on data obtained from a web camera and the Kinect depth sensor (RGB-D data). Each gesture is given by a pair of images: a color image and a depth map. The database stores a per-frame feature description of each gesture of the alphabet. The recognition algorithm takes a video sequence (a sequence of frames) as input for marking, and either matches each frame to a gesture from the database or decides that no suitable gesture exists in the database. First, each frame of the video sequence is classified separately, without interframe information. Then, a run of successfully marked frames carrying the same label is grouped into a single static gesture. We propose a combined segmentation of each frame using the depth map and the RGB image. The primary segmentation is based on the depth map; it locates the hand and yields a rough hand border. The border is then refined using the color image, and the shape of the hand is analyzed. A continuous-skeleton method is used to generate features. We propose a method based on the terminal branches of the skeleton, which makes it possible to determine the positions of the fingers and the wrist; the classification features for a gesture describe the positions of the fingers relative to the wrist. Experiments with the developed algorithm were carried out on the American Sign Language alphabet. An American Sign Language gesture has several components, including the shape of the hand, its orientation in space, and the type of movement. The accuracy of the proposed method is evaluated on a collected gesture database consisting of 2700 frames.
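The two-stage marking procedure described above (frame-wise classification followed by grouping of identically labeled frames) is easy to sketch. The Python below is an illustration, not the authors' implementation: the per-frame labels are assumed to come from the skeleton-feature classifier, and the min_run threshold is a hypothetical noise filter.

from itertools import groupby

def group_static_gestures(frame_labels, min_run=5):
    """frame_labels: one label per frame ('A'..'Z', or None for "no match").
    Runs of at least `min_run` identical labels are merged into one static
    gesture; shorter runs and unmatched frames are discarded as noise."""
    gestures, pos = [], 0
    for label, run in groupby(frame_labels):
        n = sum(1 for _ in run)
        if label is not None and n >= min_run:
            gestures.append((label, pos, pos + n))   # (gesture, first, last+1)
        pos += n
    return gestures

# A noisy marking of 14 frames collapses into two static gestures:
labels = ['A'] * 6 + [None] * 2 + ['B'] * 6
print(group_static_gestures(labels))   # [('A', 0, 6), ('B', 8, 14)]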
Software for Analyzing Sequences of Flow-Related Images
NASA Technical Reports Server (NTRS)
Klimek, Robert; Wright, Ted
2004-01-01
Spotlight is a computer program for analysis of sequences of images generated in combustion and fluid physics experiments. Spotlight can perform analysis of a single image in an interactive mode or a sequence of images in an automated fashion. The primary type of analysis is tracking of positions of objects over sequences of frames. Features and objects that are typically tracked include flame fronts, particles, droplets, and fluid interfaces. Spotlight automates the analysis of object parameters, such as centroid position, velocity, acceleration, size, shape, intensity, and color. Images can be processed to enhance them before statistical and measurement operations are performed. An unlimited number of objects can be analyzed simultaneously. Spotlight saves results of analyses in a text file that can be exported to other programs for graphing or further analysis. Spotlight is a graphical-user-interface-based program that at present can be executed on Microsoft Windows and Linux operating systems. A version that runs on Macintosh computers is being considered.
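As an illustration of the kind of per-frame measurement Spotlight automates, the sketch below computes an intensity-weighted centroid for each frame and differences consecutive centroids to estimate velocity. This is an assumed minimal reimplementation of two of the listed object parameters (centroid position and velocity), not Spotlight's actual code; the threshold value is a placeholder.

import numpy as np

def centroid(frame, threshold=128):
    """Intensity-weighted centroid (x, y) of pixels at or above `threshold`."""
    ys, xs = np.nonzero(frame >= threshold)
    weights = frame[ys, xs].astype(float)
    return (float(np.average(xs, weights=weights)),
            float(np.average(ys, weights=weights)))

def track(frames, dt=1.0):
    """Centroid per frame, plus finite-difference velocity between frames."""
    positions = np.array([centroid(f) for f in frames])
    velocities = np.diff(positions, axis=0) / dt   # pixels per frame interval
    return positions, velocities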
Quantum color image watermarking based on Arnold transformation and LSB steganography
NASA Astrophysics Data System (ADS)
Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping; Luo, Gaofeng
In this paper, a quantum color image watermarking scheme is proposed that combines a twofold Arnold-transform scrambling with least-significant-bit (LSB) steganography. Both the carrier image and the watermark image are represented by the novel quantum representation of color digital images (NCQI) model. The image sizes for the carrier and watermark are assumed to be 2^n×2^n and 2^(n-1)×2^(n-1), respectively. First, the watermark is scrambled into a disordered form through an image-preprocessing technique that simultaneously exchanges pixel positions and alters color information based on Arnold transforms. Then, the scrambled watermark, of size 2^(n-1)×2^(n-1) with 24-qubit grayscale, is expanded to an image of size 2^n×2^n with 6-qubit grayscale using nearest-neighbor interpolation. Finally, the scrambled and expanded watermark is embedded into the carrier by LSB steganography, and a key image of size 2^n×2^n carrying 3 qubits of information is generated at the same time; the original watermark can be retrieved only with this key image. Extraction of the watermark is the reverse of embedding and is achieved by applying the sequence of operations in reverse order. Experiments involving different (i.e., conventional or non-quantum) carrier and watermark images are simulated on a classical computer in MATLAB 2014b; the results illustrate that the present method performs well in three respects: visual quality, robustness, and steganographic capacity.
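A classical (non-quantum) sketch of the scheme's two building blocks may help fix ideas: Arnold scrambling of pixel positions, and LSB embedding and extraction. The NumPy code below is an illustration on ordinary arrays; it does not model the NCQI encoding, the simultaneous color-information alteration, or the key-image generation.

import numpy as np

def arnold(img, iterations=1):
    """Arnold cat map on a square image: (x, y) -> (x + y, x + 2y) mod N.
    Iterating the map scrambles positions; it is invertible, so the
    original layout can be recovered by applying the inverse map."""
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

def embed_lsb(carrier, watermark_bits):
    """Replace the least significant bit of each uint8 carrier pixel."""
    return (carrier & ~np.uint8(1)) | (watermark_bits & np.uint8(1))

def extract_lsb(stego):
    """Read the embedded bit plane back out of the stego image."""
    return stego & np.uint8(1)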
Multi-pulse shadowgraphic RGB illumination and detection for flow tracking
NASA Astrophysics Data System (ADS)
Menser, Jan; Schneider, Florian; Dreier, Thomas; Kaiser, Sebastian A.
2018-06-01
This work demonstrates the application of a multi-color LED and a consumer color camera for visualizing phase boundaries in two-phase flows, in particular for particle tracking velocimetry. The LED emits a sequence of short light pulses, red, green, then blue (RGB), and through its color-filter array the camera captures all three pulses on a single RGB frame. In a backlit configuration, liquid droplets appear as shadows in each color channel. Color reversal and color cross-talk correction yield a series of three frozen-flow images that can be used for further analysis, e.g., determining droplet velocity by particle tracking. Three example flows are presented: solid particles suspended in water, the penetrating front of a gasoline direct-injection spray, and the liquid break-up region of an "air-assisted" nozzle. Because of the shadowgraphic arrangement, long path lengths through scattering media lower image contrast; high-resolution visualization of phase boundaries, however, is a strength of this method. Apart from a pulse-and-delay generator, the overall system cost is very low.
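Conceptually, recovering the three frozen-flow images from one camera frame amounts to unmixing the color channels and inverting the shadow contrast. The sketch below is a minimal Python illustration under assumed conditions; the 3×3 cross-talk matrix is invented for the example and would in practice come from a calibration measurement.

import numpy as np

# Illustrative cross-talk matrix: row i is the response of sensor channel i
# to the red, green, and blue LED pulses (values are assumptions).
M = np.array([[0.92, 0.06, 0.02],
              [0.05, 0.90, 0.05],
              [0.02, 0.08, 0.90]])
M_inv = np.linalg.inv(M)

def split_rgb_frame(frame):
    """frame: HxWx3 float array in [0, 1]. Returns three frozen-flow images
    in temporal order (red pulse first, then green, then blue)."""
    h, w, _ = frame.shape
    unmixed = frame.reshape(-1, 3) @ M_inv.T       # remove channel cross-talk
    unmixed = unmixed.reshape(h, w, 3).clip(0, 1)
    return [1.0 - unmixed[..., c] for c in range(3)]   # invert the shadows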
Full-color stereoscopic single-pixel camera based on DMD technology
NASA Astrophysics Data System (ADS)
Salvador-Balaguer, Eva; Clemente, Pere; Tajahuerce, Enrique; Pla, Filiberto; Lancis, Jesús
2017-02-01
Imaging systems based on microstructured illumination and single-pixel detection offer several advantages over conventional imaging techniques. They are an effective method for imaging through scattering media, even in the dynamic case. They work efficiently under low light levels, and the simplicity of the detector makes it easy to design imaging systems that work outside the visible spectrum and acquire multidimensional information. In particular, several approaches have been proposed to record 3D information. The technique is based on sampling the object with a sequence of microstructured light patterns codified onto a programmable spatial light modulator while the light intensity is measured with a single-pixel detector. The image is retrieved computationally from the photocurrent fluctuations provided by the detector. In this contribution we describe an optical system able to produce full-color stereoscopic images using a few simple optoelectronic components. In our setup we use an off-the-shelf digital light projector (DLP) based on a digital micromirror device (DMD) to generate the light patterns. To capture the color of the scene we take advantage of the codification procedure used by the DLP for color video projection. To record stereoscopic views we use a 90° beam splitter and two mirrors, allowing us to project the patterns from two different viewpoints. By using a single monochromatic photodiode we obtain a pair of color images that can be used as input to a 3-D display. To reduce the time needed to project the patterns we use a compressive sampling algorithm. Experimental results are shown.
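A minimal numerical sketch of the underlying single-pixel principle, ignoring the color multiplexing, the stereo arrangement, and the compressive solver described above: project a complete set of orthogonal patterns (here Hadamard patterns, via SciPy's helper), record one detector value per pattern, and invert the measurement. All parameters are illustrative.

import numpy as np
from scipy.linalg import hadamard

n = 32                                   # reconstructed image is n x n
H = hadamard(n * n)                      # one +/-1 pattern per row

def measure(scene):
    """Single-pixel detector model: one inner product per projected pattern."""
    return H @ scene.ravel()

def reconstruct(measurements):
    """Hadamard matrices satisfy H @ H.T = N * I, so inversion is a product."""
    return (H.T @ measurements / (n * n)).reshape(n, n)

scene = np.random.rand(n, n)
assert np.allclose(reconstruct(measure(scene)), scene)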
A Coded Structured Light System Based on Primary Color Stripe Projection and Monochrome Imaging
Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano
2013-01-01
Coded Structured Light techniques represent one of the most attractive research areas within the field of optical metrology. The coding procedures are typically based on projecting either a single pattern or a temporal sequence of patterns to provide 3D surface data. In this context, multi-slit or stripe colored patterns may be used with the aim of reducing the number of projected images. However, color imaging sensors require the use of calibration procedures to address crosstalk effects between different channels and to reduce the chromatic aberrations. In this paper, a Coded Structured Light system has been developed by integrating a color stripe projector and a monochrome camera. A discrete coding method, which combines spatial and temporal information, is generated by sequentially projecting and acquiring a small set of fringe patterns. The method allows the concurrent measurement of geometrical and chromatic data by exploiting the benefits of using a monochrome camera. The proposed methodology has been validated by measuring nominal primitive geometries and free-form shapes. The experimental results have been compared with those obtained by using a time-multiplexing gray code strategy. PMID:24129018
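For reference, the time-multiplexing Gray-code strategy used as the comparison baseline can be sketched in a few lines: successive binary stripe patterns encode each projector column, and the per-pixel bit sequence decodes to a unique stripe index. The Python below is a generic textbook formulation, assumed rather than taken from the paper.

import numpy as np

def graycode_patterns(width, bits):
    """Return `bits` binary stripe patterns of length `width`, MSB first."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                       # binary-reflected Gray code
    return [((gray >> (bits - 1 - b)) & 1).astype(np.uint8)
            for b in range(bits)]

def decode(pixel_bits):
    """Invert the Gray code: recover a column index from a pixel's bit list."""
    value = 0
    for bit in pixel_bits:
        value = (value << 1) | (bit ^ (value & 1))
    return value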
Natural-Color-Image Map of Quadrangle 3266, Ourzgan (519) and Moqur (520) Quadrangles, Afghanistan
Davis, Philip A.; Turner, Kenzie J.
2007-01-01
This map is a natural-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The natural colors were generated using calibrated red-, green-, and blue-wavelength Landsat image data, which were correlated with red, green, and blue values of corresponding picture elements in MODIS (Moderate Resolution Imaging Spectrometer) 'true color' mosaics of Afghanistan. These mosaics have been published on http://www.truecolorearth.com and modified to match more closely the Munsell colors of sampled surfaces. Peak elevations are derived from Shuttle Radar Topography Mission (SRTM) digital data, averaged over a pixel representing an area of 85 m², and they are slightly lower than the highest corresponding local point. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
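The correlation step described above, mapping calibrated Landsat red, green, and blue values to the corresponding MODIS 'true color' values at matched picture elements, could plausibly be realized as a per-channel linear fit. The sketch below assumes ordinary least squares, which the report does not specify; it illustrates the idea, not the USGS procedure.

import numpy as np

def fit_color_map(landsat_rgb, modis_rgb):
    """landsat_rgb, modis_rgb: (N, 3) samples at matched picture elements.
    Returns a (3, 4) matrix of per-channel coefficients (a_r, a_g, a_b, offset)."""
    A = np.column_stack([landsat_rgb, np.ones(len(landsat_rgb))])
    coeffs, *_ = np.linalg.lstsq(A, modis_rgb, rcond=None)   # shape (4, 3)
    return coeffs.T

def apply_color_map(landsat_rgb, coeff_matrix):
    """Render natural color for a whole image flattened to (N, 3)."""
    A = np.column_stack([landsat_rgb, np.ones(len(landsat_rgb))])
    return np.clip(A @ coeff_matrix.T, 0, 255)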
Natural-Color-Image Map of Quadrangle 3464, Shahrak (411) and Kasi (412) Quadrangles, Afghanistan
Davis, Philip A.; Turner, Kenzie J.
2007-01-01
This map is a natural-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The natural colors were generated using calibrated red-, green-, and blue-wavelength Landsat image data, which were correlated with red, green, and blue values of corresponding picture elements in MODIS (Moderate Resolution Imaging Spectrometer) 'true color' mosaics of Afghanistan. These mosaics have been published on http://www.truecolorearth.com and modified to match more closely the Munsell colors of sampled surfaces. Peak elevations are derived from Shuttle Radar Topography Mission (SRTM) digital data, averaged over a pixel representing an area of 85 m², and they are slightly lower than the highest corresponding local point. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
Natural-Color-Image Map of Quadrangle 3362, Shin-Dand (415) and Tulak (416) Quadrangles, Afghanistan
Davis, Philip A.; Turner, Kenzie J.
2007-01-01
This map is a natural-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The natural colors were generated using calibrated red-, green-, and blue-wavelength Landsat image data, which were correlated with red, green, and blue values of corresponding picture elements in MODIS (Moderate Resolution Imaging Spectrometer) 'true color' mosaics of Afghanistan. These mosaics have been published on http://www.truecolorearth.com and modified to match more closely the Munsell colors of sampled surfaces. Peak elevations are derived from Shuttle Radar Topography Mission (SRTM) digital data, averaged over a pixel representing an area of 85 m², and they are slightly lower than the highest corresponding local point. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
Natural-Color-Image Map of Quadrangle 3366, Gizab (513) and Nawer (514) Quadrangles, Afghanistan
Davis, Philip A.; Turner, Kenzie J.
2007-01-01
This map is a natural-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The natural colors were generated using calibrated red-, green-, and blue-wavelength Landsat image data, which were correlated with red, green, and blue values of corresponding picture elements in MODIS (Moderate Resolution Imaging Spectroradiometer) 'true color' mosaics of Afghanistan. These mosaics have been published on http://www.truecolorearth.com and modified to match more closely the Munsell colors of sampled surfaces. Peak elevations are derived from Shuttle Radar Topography Mission (SRTM) digital data, averaged over a pixel representing an area of 85 m², and they are slightly lower than the highest corresponding local point. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
Hippo in Super Resolution from Super Panorama
1998-07-03
This view of the "Hippo," 25 meters to the west of the lander, was produced by combining the "Super Panorama" frames from the IMP camera. Super resolution was applied to help address questions about the texture of this rock and what it might tell us about its mode of origin. The composite color frames that make up this anaglyph were produced for both the right and left eye of the IMP. These composites consist of more than 15 frames per eye (because multiple sequences covered the same area), taken with different color filters; the frames were enlarged by 500% and then co-added using Adobe Photoshop to produce, in effect, a super-resolution panchromatic frame sharper than any individual frame. These panchromatic frames were then colorized with the red, green, and blue filtered images from the same sequence. The color balance was adjusted to approximate the true color of Mars. The anaglyph view was produced by combining the left- and right-eye color composite frames, assigning the left-eye composite to the red color plane and the right-eye composite to the green and blue color planes (cyan), to produce a stereo anaglyph mosaic. This mosaic can be viewed in 3-D on your computer monitor or in color print form by wearing red-blue 3-D glasses. http://photojournal.jpl.nasa.gov/catalog/PIA01421
The research on multi-projection correction based on color coding grid array
NASA Astrophysics Data System (ADS)
Yang, Fan; Han, Cheng; Bai, Baoxing; Zhang, Chao; Zhao, Yunxiu
2017-10-01
Multi-channel projection systems suffer from drawbacks such as poor timeliness and extensive manual intervention. To address these problems, this paper proposes a multi-projector correction technique based on a color-coded grid array. First, a color structured-light stripe pattern is generated using De Bruijn sequences, and the feature information of the stripe image is meshed into a grid. A white solid circle is constructed at each grid intersection, and these circles form the feature sample set of the projected images; the resulting set offers both precise localization and good noise immunity. Second, the sub-pixel geometric mapping between the projection screen and each projector is established through structured-light encoding and decoding based on the color array, and this mapping is used to solve each projector's homography matrix. Finally, because luminance inconsistency in the overlap regions of multi-channel projection prevents the corrected image from meeting viewers' visual expectations, a luminance-fusion correction algorithm is applied to obtain a visually consistent projected image. Experimental results show that the method not only effectively corrects the distortion of multi-projection displays and the luminance interference in overlapping regions, but also improves the calibration efficiency of multi-channel projection systems and reduces the maintenance cost of intelligent multi-projection systems.
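For readers unfamiliar with De Bruijn coding, the sketch below shows how such a stripe code can be generated. It is a minimal illustration under assumed parameters (a six-color palette and window length 3), not the paper's implementation. A De Bruijn sequence of order n over k symbols has the property that every window of n consecutive symbols occurs exactly once in the cyclic sequence, which is what lets a decoder localize any small group of stripes.

```python
def de_bruijn(k, n):
    """Generate a De Bruijn sequence over alphabet {0..k-1} with unique windows of length n."""
    a = [0] * k * n
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

# Illustrative palette (hypothetical; the paper's actual colors are not given here).
COLORS = ["red", "green", "blue", "yellow", "cyan", "magenta"]
code = de_bruijn(len(COLORS), 3)        # 6^3 = 216 stripes
stripes = [COLORS[c] for c in code]     # any 3 adjacent stripes are a unique signature
```

With six colors and a window of three, any three adjacent stripes identify their position uniquely among the 216 possibilities (windows that cross the wrap-around need the cyclic extension of the sequence).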
A Novel Color Image Encryption Algorithm Based on Quantum Chaos Sequence
NASA Astrophysics Data System (ADS)
Liu, Hui; Jin, Cong
2017-03-01
In this paper, a novel image encryption algorithm based on quantum chaos is proposed. Keystreams are generated by a two-dimensional logistic map from given initial conditions and parameters. A keyed generalized Arnold scrambling algorithm then permutes the pixels of the color components. In the diffusion stage, a novel folding algorithm modifies the values of the diffused pixels. To achieve high randomness and complexity, the two-dimensional logistic map and a quantum chaotic map are coupled with nearest-neighboring coupled-map lattices. Theoretical analyses and computer simulations confirm that the proposed algorithm offers a high level of security.
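As a rough illustration of the permutation-plus-diffusion structure described above, the sketch below pairs an Arnold cat-map pixel scramble with a keystream XOR. It deliberately simplifies the paper's design: it uses a classic one-dimensional logistic map and omits the two-dimensional map, the quantum chaotic map, the coupled-map lattices, and the folding algorithm, so all parameter values and names are illustrative only.

```python
import numpy as np

def logistic_keystream(x0, r, length, burn_in=1000):
    """Byte keystream from the classic 1D logistic map x <- r*x*(1-x)."""
    x = x0
    for _ in range(burn_in):              # discard the transient
        x = r * x * (1 - x)
    out = np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1 - x)
        out[i] = int(x * 256) % 256       # quantize the chaotic orbit to a byte
    return out

def arnold_scramble(img, iterations=1):
    """Arnold cat map on a square N x N image: (x, y) -> (x + y, x + 2y) mod N."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "cat map needs a square image"
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img
    for _ in range(iterations):
        out = out[(x + y) % n, (x + 2 * y) % n]
    return out

# Demo: permute, then diffuse by XOR with the keystream.
img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in color image
ks = logistic_keystream(0.3141, 3.9999, img.size).reshape(img.shape)
cipher = arnold_scramble(img, iterations=5) ^ ks
```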
Voyager: Neptune Encounter Highlights
NASA Technical Reports Server (NTRS)
1989-01-01
Voyager encounter data are presented in computer animation (CA) and real (R) animation. The highlights include a view of 2 full rotations of Neptune. It shows spacecraft trajectory 'diving' over Neptune and intercepting Triton's orbit, depicting radiation and occultation zones. Also shown are a renegade orbit of Triton and Voyager's encounter with Neptune's Magnetopause. A model of the spacecraft's complex maneuvers during close encounters of Neptune and Triton is presented. A view from Earth of Neptune's occultation experiment is shown as well as a recreation of Voyager's final pass. There is detail of Voyager's Image Compensation technique which produces Voyager images. Eighteen images were produced on June 22 - 23, 1989, from 57 million miles away. A 68-day sequence provides a stroboscopic view; colorization approximates what would be seen by the human eye. Real-time images recorded live from Voyager on 8/24/89 are presented. Photoclinometry produced the topography of Triton. Three images are used to create a sequence of Neptune's rings. The globe of Neptune and 2 views of the south pole are shown as well as Neptune rotating. The rotation of a scooter is frozen in images showing differential motion. There is a view of rotation of the Great Dark Spot about its own axis. Photoclinometry provides a 3-dimensional perspective using a color mosaic of Triton images. The globe is used to indicate the orientation of Neptune's crescent. The east and west plumes on Triton are shown.
Digital visual communications using a Perceptual Components Architecture
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
1991-01-01
The next era of space exploration will generate extraordinary volumes of image data, and management of this image data is beyond current technical capabilities. We propose a strategy for coding visual information that exploits the known properties of early human vision. This Perceptual Components Architecture codes images and image sequences in terms of discrete samples from limited bands of color, spatial frequency, orientation, and temporal frequency. This spatiotemporal pyramid offers efficiency (low bit rate), variable resolution, device independence, error-tolerance, and extensibility.
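As a loose illustration of coding an image into limited spatial-frequency bands, the sketch below builds a simple Laplacian pyramid. The Perceptual Components Architecture also samples color, orientation, and temporal-frequency bands, none of which are reproduced here; this is background only, not the architecture itself.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def laplacian_pyramid(img, levels=4):
    """Decompose a grayscale image into spatial-frequency bands plus a lowpass residual."""
    bands = []
    cur = img.astype(float)
    for _ in range(levels - 1):
        low = gaussian_filter(cur, sigma=1.0)
        down = low[::2, ::2]                                   # decimate the lowpass
        up = zoom(down, 2, order=1)[:cur.shape[0], :cur.shape[1]]
        bands.append(cur - up)                                 # bandpass residual
        cur = down
    bands.append(cur)                                          # final lowpass band
    return bands
```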
Calibration View of Earth and the Moon by Mars Color Imager
NASA Technical Reports Server (NTRS)
2005-01-01
Three days after the Mars Reconnaissance Orbiter's Aug. 12, 2005, launch, the spacecraft was pointed toward Earth and the Mars Color Imager camera was powered up to acquire a suite of images of Earth and the Moon. When it gets to Mars, the Mars Color Imager's main objective will be to obtain daily global color and ultraviolet images of the planet to observe martian meteorology by documenting the occurrence of dust storms, clouds, and ozone. This camera will also observe how the martian surface changes over time, including changes in frost patterns and surface brightness caused by dust storms and dust devils. The purpose of acquiring an image of Earth and the Moon just three days after launch was to help the Mars Color Imager science team obtain a measure, in space, of the instrument's sensitivity, as well as to check that no contamination occurred on the camera during launch. Prior to launch, the team determined that, three days out from Earth, the planet would only be about 4.77 pixels across, and the Moon would be less than one pixel in size, as seen from the Mars Color Imager's wide-angle perspective. If the team waited any longer than three days to test the camera's performance in space, Earth would be too small to obtain meaningful results. The Earth and Moon images were acquired by turning Mars Reconnaissance Orbiter toward Earth, then slewing the spacecraft so that the Earth and Moon would pass before each of the five color and two ultraviolet filters of the Mars Color Imager. The distance to the Moon was about 1,440,000 kilometers (about 895,000 miles); the range to Earth was about 1,170,000 kilometers (about 727,000 miles). This view combines a sequence of frames showing the passage of Earth and the Moon across the field of view of a single color band of the Mars Color Imager. As the spacecraft slewed to view the two objects, they passed through the camera's field of view. Earth has been saturated white in this image so that both Earth and the Moon can be seen in the same frame. The Sun was coming from the left, so Earth and the Moon are seen in a quarter phase. Earth is on the left. The Moon appears briefly on the right. The Moon fades in and out; the Moon is only one pixel in size, and its fading is an artifact of the size and configuration of the light-sensitive pixels of the camera's charge-coupled device (CCD) detector.
Color Imaging management in film processing
NASA Astrophysics Data System (ADS)
Tremeau, Alain; Konik, Hubert; Colantoni, Philippe
2003-12-01
The latest research projects in the LIGIV laboratory concern the capture, processing, archiving, and display of color images, taking into account the trichromatic nature of the human visual system (HVS). One of these projects addresses digital cinematographic film sequences of high resolution and dynamic range, and aims to optimize the use of content for post-production operators and for the end user. The studies presented in this paper address the use of metadata to optimize the consumption of video content on a device of the user's choice, independent of the equipment that captured the content. Optimizing consumption includes enhancing the quality of image reconstruction on a display. Another part of this project addresses content-based adaptation of the image display, focusing on Region-of-Interest (ROI) operations based on the ROI concepts of MPEG-7. The aim of this second part is to characterize and preserve the display conditions even if the display device or display medium changes. This requires, first, the definition of a reference color space and of bidirectional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and the ambient illumination under which they are viewed. To match the displayed image to the intended appearance, all kinds of production metadata (camera specification, camera color primaries, lighting conditions) should be associated with the film material; metadata and content together constitute rich content. The author is assumed to specify conditions as known from the digital graphic arts. To control image pre-processing and post-processing, these specifications should be contained in the film's metadata. The specifications are related to ICC profiles but must additionally consider mesopic viewing conditions.
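As a small, concrete example of a fixed transformation into a device-independent reference color space, the sketch below converts sRGB values to CIE XYZ under the D65 white point. This is a generic illustration: a production pipeline of the kind described above would instead derive per-device transforms from each device's ICC profile, and mesopic corrections are beyond this sketch.

```python
import numpy as np

# Standard linear-sRGB (D65) to CIE XYZ matrix.
SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])

def srgb_to_xyz(rgb):
    """Convert sRGB in [0, 1]: undo the sRGB transfer curve, then apply the matrix."""
    rgb = np.asarray(rgb, dtype=float)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    return lin @ SRGB_TO_XYZ.T

print(srgb_to_xyz([1.0, 1.0, 1.0]))  # ~ D65 white point (0.9505, 1.0000, 1.0890)
```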
Infrared Photometry of the Hourglass Nebula in M8
NASA Astrophysics Data System (ADS)
Arias, J.; Barbá, R.; Morrell, N.; Rubio, M.
We present sub-arcsecond resolution JHKs imaging of the Hourglass Nebula in Messier 8, obtained with the 2.5-m du Pont telescope at Las Campanas Observatory (LCO), Chile. Near-infrared colors have been measured for numerous infrared sources around the O-type star Herschel 36 (O7 V), the brightest source in the field and the one mainly responsible for the ionization of the nebula. Several of those IR sources are identified as Hα emission stars from narrow-band Hubble Space Telescope images, and some of them display a knotty shape, characteristic of proplyd-like objects. Based on the NIR color-color and color-magnitude diagrams, we also identified dozens of NIR-excess sources, which are prime candidates to be intermediate- and low-mass pre-main-sequence stars. Additionally, we present preliminary results of the spectroscopic confirmation of some T Tauri stars among these objects, based on spectra recently obtained with the 6.5-m Magellan telescope at LCO.
Comparing Stellar Populations Across the Hubble Sequence
NASA Astrophysics Data System (ADS)
Loeffler, Shane; Kaleida, Catherine C.; Parkash, Vaishali
2015-01-01
Previous work (Jansen et al., 2000, Taylor et al., 2005) has revealed trends in the optical wavelength radial profiles of galaxies across the Hubble Sequence. Radial profiles offer insight into stellar populations, metallicity, and dust concentrations, aspects which are deeply tied to the individual evolution of a galaxy. The Nearby Field Galaxy Survey (NFGS) provides a sampling of nearby galaxies that spans the range of morphological types, luminosities, and masses. Currently available NFGS data includes optical radial surface profiles and spectra of 196 nearby galaxies. We aim to look for trends in the infrared portion of the spectrum for these galaxies, but find that existing 2MASS data is not sufficiently deep. Herein, we expand the available data for the NFGS galaxy IC1639 deeper into the infrared using new data taken with the Infrared Sideport Imager (ISPI) on the 4-m Blanco Telescope at the Cerro Tololo Inter-American Observatory (CTIO) in Chile. Images taken in J, H, and Ks were reduced using standard IRAF and IDL procedures. Photometric calibrations were completed by using the highest quality (AAA) 2MASS stars in the field. Aperture photometry was then performed on the galaxy and radial profiles of surface brightness, J-H color, and H-Ks color were produced. For IC1639, the new ISPI data reveals flat color gradients and surface brightness gradients that decrease with radius. These trends reveal an archetypal elliptical galaxy, with a relatively homogeneous stellar population, stellar density decreasing with radius, and little-to-no obscuration by dust. We have obtained ISPI images for an additional 8 galaxies, and further reduction and analysis of these data will allow for investigation of radial trends in the infrared for galaxies across the Hubble Sequence.
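A radial surface-brightness profile of the kind described here can be computed by annular aperture photometry. The sketch below is a minimal version that assumes a sky-subtracted image; the zero point and pixel scale are hypothetical placeholders, not the calibrations used in the study.

```python
import numpy as np

def radial_profile(image, center, bin_width=2.0):
    """Mean flux in concentric annuli around `center` (row, col), in pixel units."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(yy - center[0], xx - center[1])
    bins = np.arange(0.0, r.max(), bin_width)
    idx = np.digitize(r.ravel(), bins)                 # annulus index per pixel
    flux = np.bincount(idx, weights=image.ravel(), minlength=len(bins) + 1)
    npix = np.bincount(idx, minlength=len(bins) + 1)
    with np.errstate(invalid="ignore", divide="ignore"):
        mean = flux / npix                             # NaN where an annulus is empty
    return bins, mean[1:len(bins) + 1]

def surface_brightness(mean_flux, zeropoint=21.0, pixscale=0.3):
    """Convert mean flux per pixel to mag/arcsec^2 (illustrative calibration values)."""
    return zeropoint - 2.5 * np.log10(mean_flux / pixscale**2)
```

A J-H color profile then follows by differencing the surface-brightness profiles of the two bands at matching radii.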
Integrating Depth and Image Sequences for Planetary Rover Mapping Using Rgb-D Sensor
NASA Astrophysics Data System (ADS)
Peng, M.; Wan, W.; Xing, Y.; Wang, Y.; Liu, Z.; Di, K.; Zhao, Q.; Teng, B.; Mao, X.
2018-04-01
An RGB-D camera captures depth and color information at high data rates, which makes it possible and beneficial to integrate depth and image sequences for planetary rover mapping. The proposed mapping method consists of three steps. First, the strict projection relationship among 3D space, depth data, and visual texture data is established based on the imaging principle of the RGB-D camera; then an extended bundle adjustment (BA) based SLAM method with integrated 2D and 3D measurements is applied to the image network for high-precision pose estimation. Next, once the interior and exterior orientation elements of the RGB image sequence are available, dense matching is performed with the CMPMVS tool. Finally, using the registration parameters obtained from ICP, the 3D scene reconstructed from the RGB images is registered to the 3D scene from the depth images, and the fused point cloud is obtained. An experiment was performed in an outdoor field simulating the lunar surface. The experimental results demonstrate the feasibility of the proposed method.
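The final registration step relies on ICP. The sketch below is a minimal point-to-point ICP with a Kabsch (SVD) rotation estimate, offered as a generic illustration rather than the authors' implementation; the extended bundle adjustment of the pipeline is not reproduced.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=20):
    """Rigidly align `source` (N,3) to `target` (M,3); returns total (R, t)."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iterations):
        _, nn = tree.query(src)                 # nearest-neighbor correspondences
        matched = target[nn]
        mu_s, mu_t = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_t)   # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T                 # Kabsch rotation, reflection-safe
        t_step = mu_t - R_step @ mu_s
        src = src @ R_step.T + t_step           # apply the incremental transform
        R, t = R_step @ R, R_step @ t + t_step  # compose with the running estimate
    return R, t
```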
NASA Astrophysics Data System (ADS)
Özkan, Mutlu; Çelik, Ömer Faruk; Özyavaş, Aziz
2018-02-01
One of the most appropriate approaches to better understanding and interpreting the geologic evolution of an accretionary complex is to make a detailed geologic map. Because ophiolite sequences consist of various rock types, a dedicated image processing method may be required to map each ophiolite body. The accretionary complex in the study area is composed mainly of ophiolitic and metamorphic rocks along with epi-ophiolitic sedimentary rocks. This paper maps the Late Cretaceous accretionary complex in northern Sivas (within the İzmir-Ankara-Erzincan Suture Zone in Turkey) in detail through the analysis of all Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) bands together with field study. Two new hybrid color composite images yield satisfactory results in delineating the peridotite, gabbro, basalt, and epi-ophiolitic sedimentary rocks of the accretionary complex in the study area. The first hybrid color composite assigns one principal component (PC) and two band ratios (PC1, 3/4, and 4/6) to the RGB channels, while the second assigns PC5, the original ASTER band 4, and the 3/4 band-ratio image to the RGB colors. In addition, spectral indices derived from the ASTER thermal infrared (TIR) bands clearly discriminate ultramafic, siliceous, and carbonate rocks from adjacent lithologies at a regional scale. Peridotites with varying degrees of serpentinization, which appear as a single color, were best identified in the spectral-indices map. Furthermore, the boundaries of ophiolitic rocks established by fieldwork were outlined in detail in some parts of the study area by superimposing the resulting ASTER maps on Google Earth images of finer spatial resolution. Ultimately, the geologic map generated by the image analysis of ASTER data correlates strongly with the lithological boundaries from the field survey.
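The first hybrid composite described above (PC1 with the 3/4 and 4/6 band ratios in RGB) can be assembled along these lines. The percentile stretch and epsilon guards are illustrative choices, and PC1 is assumed to have been precomputed from the ASTER bands; this is a sketch of the compositing step only.

```python
import numpy as np

def normalize(band):
    """Percentile stretch of a band to [0, 1] for display."""
    lo, hi = np.percentile(band, (2, 98))
    return np.clip((band - lo) / (hi - lo + 1e-9), 0, 1)

def hybrid_composite(pc1, b3, b4, b6):
    """Hybrid color composite: PC1 -> R, band ratio 3/4 -> G, band ratio 4/6 -> B.
    pc1 is a precomputed first principal component; b3, b4, b6 are ASTER bands
    as float arrays of identical shape."""
    r = normalize(pc1)
    g = normalize(b3 / (b4 + 1e-9))
    b = normalize(b4 / (b6 + 1e-9))
    return np.dstack([r, g, b])   # H x W x 3 image ready for display
```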
2016-09-15
NASA's Cassini spacecraft stared at Saturn for nearly 44 hours on April 25 to 27, 2016, to obtain this movie showing just over four Saturn days. With Cassini's orbit being moved closer to the planet in preparation for the mission's 2017 finale, scientists took this final opportunity to capture a long movie in which the planet's full disk fit into a single wide-angle camera frame. Visible at top is the giant hexagon-shaped jet stream that surrounds the planet's north pole. Each side of this huge shape is slightly wider than Earth. The resolution of the 250 natural color wide-angle camera frames comprising this movie is 512x512 pixels, rather than the camera's full resolution of 1024x1024 pixels. Cassini's imaging cameras have the ability to take reduced-size images like these in order to decrease the amount of data storage space required for an observation. The spacecraft began acquiring this sequence of images just after it obtained the images to make a three-panel color mosaic. When it began taking images for this movie sequence, Cassini was 1,847,000 miles (2,973,000 kilometers) from Saturn, with an image scale of 221 miles (355 kilometers) per pixel. When it finished gathering the images, the spacecraft had moved 171,000 miles (275,000 kilometers) closer to the planet, with an image scale of 200 miles (322 kilometers) per pixel. A movie is available at http://photojournal.jpl.nasa.gov/catalog/PIA21047
New adaptive color quantization method based on self-organizing maps.
Chang, Chip-Hong; Xu, Pengfei; Xiao, Rui; Srikanthan, Thambipillai
2005-01-01
Color quantization (CQ) is an image processing task popularly used to convert true color images to palletized images for limited color display devices. To minimize the contouring artifacts introduced by the reduction of colors, a new competitive learning (CL) based scheme called the frequency sensitive self-organizing maps (FS-SOMs) is proposed to optimize the color palette design for CQ. FS-SOM harmonically blends the neighborhood adaptation of the well-known self-organizing maps (SOMs) with the neuron-dependent frequency sensitive learning model, the global butterfly permutation sequence for input randomization, and the reinitialization of dead neurons to ensure effective utilization of neurons. The net effect is an improvement in adaptation, a well-ordered color palette, and the alleviation of the underutilization problem, which is the main cause of visually perceivable artifacts of CQ. Extensive simulations have been performed to analyze and compare the learning behavior and performance of FS-SOM against other vector quantization (VQ) algorithms. The results show that the proposed FS-SOM outperforms classical CL, Linde, Buzo, and Gray (LBG), and SOM algorithms. More importantly, FS-SOM achieves its superiority in reconstruction quality and topological ordering with a much greater robustness against variations in network parameters than the current state-of-the-art SOM algorithm for CQ. A most significant bit (MSB) biased encoding scheme is also introduced to reduce the number of parallel processing units. By mapping the pixel values as sign-magnitude numbers and biasing the magnitudes according to their sign bits, eight lattice points in the color space are condensed into one common point density function. Consequently, the same processing element can be used to map several color clusters and the entire FS-SOM network can be substantially scaled down without severely sacrificing the quality of the displayed image. The drawback of this encoding scheme is the additional storage overhead, which can be cut down by leveraging on an existing encoder in an overall lossy compression scheme.
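A plain SOM-based palette learner illustrates the core of such schemes. This sketch omits the frequency-sensitive terms, butterfly permutation, dead-neuron reinitialization, and MSB-biased encoding that distinguish FS-SOM, so it should be read as background, not as the proposed algorithm; all parameter values are illustrative.

```python
import numpy as np

def som_palette(pixels, n_colors=16, epochs=5, lr0=0.5, sigma0=None, seed=0):
    """Learn an RGB palette with a 1D self-organizing map. `pixels` is (N, 3) float."""
    rng = np.random.default_rng(seed)
    palette = rng.uniform(0, 255, (n_colors, 3))
    sigma0 = sigma0 or n_colors / 4
    idx = np.arange(n_colors)
    n_steps, step = epochs * len(pixels), 0
    for _ in range(epochs):
        for p in pixels[rng.permutation(len(pixels))]:
            frac = step / n_steps
            lr = lr0 * (1 - frac)                      # decaying learning rate
            sigma = max(sigma0 * (1 - frac), 0.5)      # shrinking neighborhood
            winner = np.argmin(((palette - p) ** 2).sum(1))
            h = np.exp(-((idx - winner) ** 2) / (2 * sigma ** 2))
            palette += lr * h[:, None] * (p - palette)  # neighborhood update
            step += 1
    return palette.astype(np.uint8)

def quantize(img, palette):
    """Map every pixel of `img` (H, W, 3) to its nearest palette color."""
    flat = img.reshape(-1, 3).astype(float)
    d = ((flat[:, None, :] - palette[None].astype(float)) ** 2).sum(-1)
    return palette[np.argmin(d, 1)].reshape(img.shape)
```

The 1D neighborhood is what produces the well-ordered palette the abstract refers to: neurons adjacent in index converge to adjacent colors.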
Identification of simple objects in image sequences
NASA Astrophysics Data System (ADS)
Geiselmann, Christoph; Hahn, Michael
1994-08-01
We present an investigation into the identification and location of simple objects in color image sequences, using the identification of traffic signs as an example. Three aspects are of special interest. First, regions that may contain the object have to be detected; their separation from the background can be based on color, motion, or contours, and all three possibilities are investigated in the experiments. The second aspect is the extraction of suitable features for identifying the objects, for which the border line of the region of interest is used. For planar objects, affine mapping is a sufficient approximation of perspective projection, so it is natural to extract affine-invariant features from the border line. The investigation includes invariant features based on Fourier descriptors and on moments. Finally, the object is identified by maximum-likelihood classification. In the experiments all three basic object types are correctly identified, and the probabilities of misclassification are found to be below 1%.
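Fourier descriptors of a closed border line can be made invariant to translation, scale, rotation, and starting point as in the sketch below. Full affine invariance, as considered in the paper, requires further normalization that is not shown; the coefficient count is an assumed parameter.

```python
import numpy as np

def fourier_descriptors(contour, n_coeffs=10):
    """Invariant descriptors of a closed contour given as an (N, 2) array of border points."""
    z = contour[:, 0] + 1j * contour[:, 1]    # boundary as a complex signal
    F = np.fft.fft(z - z.mean())              # subtracting the mean removes translation
    mags = np.abs(F)                          # magnitudes drop rotation and start point
    return mags[1:n_coeffs + 1] / mags[1]     # dividing by |F_1| removes scale
```

Feeding such descriptor vectors (possibly together with moment invariants) into a maximum-likelihood classifier reflects the pipeline the abstract outlines.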
Morphology of the UV aurorae of Jupiter during Juno's first perijove observations
NASA Astrophysics Data System (ADS)
Bonfond, B.; Gladstone, G. R.; Grodent, D.; Greathouse, T. K.; Versteeg, M. H.; Hue, V.; Davis, M. W.; Vogt, M. F.; Gérard, J.-C.; Radioti, A.; Bolton, S.; Levin, S. M.; Connerney, J. E. P.; Mauk, B. H.; Valek, P.; Adriani, A.; Kurth, W. S.
2017-05-01
On 27 August 2016, the NASA Juno spacecraft performed its first close-up observations of Jupiter during its perijove. Here we present the UV images and color ratio maps from the Juno-UVS UV imaging spectrograph acquired at that time. Data were acquired during four sequences (three in the north, one in the south) from 5:00 UT to 13:00 UT. From these observations, we produced complete maps of the Jovian aurorae, including the nightside. The sequence shows the development of intense outer emission outside the main oval, first in a localized region (255°-295° System III longitude) and then all around the pole, followed by a large nightside protrusion of auroral emissions from the main emission into the polar region. Some localized features show signs of differential drift with energy, typical of plasma injections in the middle magnetosphere. Finally, the color-ratio map in the north shows a well-defined area in the polar region possibly linked to the polar cap.
Jolliff, B.; Knoll, A.; Morris, R.V.; Moersch, J.; McSween, H.; Gilmore, M.; Arvidson, R.; Greeley, R.; Herkenhoff, K.; Squyres, S.
2002-01-01
Blind field tests of the Field Integration Design and Operations (FIDO) prototype Mars rover were carried out 7-16 May 2000. A Core Operations Team (COT), sequestered at the Jet Propulsion Laboratory without knowledge of test site location, prepared command sequences and interpreted data acquired by the rover. Instrument sensors included a stereo panoramic camera, navigational and hazard-avoidance cameras, a color microscopic imager, an infrared point spectrometer, and a rock coring drill. The COT designed command sequences, which were relayed by satellite uplink to the rover, and evaluated instrument data. Using aerial photos and Airborne Visible and Infrared Imaging Spectrometer (AVIRIS) data, and information from the rover sensors, the COT inferred the geology of the landing site during the 18 sol mission, including lithologic diversity, stratigraphic relationships, environments of deposition, and weathering characteristics. Prominent lithologic units were interpreted to be dolomite-bearing rocks, kaolinite-bearing altered felsic volcanic materials, and basalt. The color panoramic camera revealed sedimentary layering and rock textures, and geologic relationships seen in rock exposures. The infrared point spectrometer permitted identification of prominent carbonate and kaolinite spectral features and permitted correlations to outcrops that could not be reached by the rover. The color microscopic imager revealed fine-scale rock textures, soil components, and results of coring experiments. Test results show that close-up interrogation of rocks is essential to investigations of geologic environments and that observations must include scales ranging from individual boulders and outcrops (microscopic, macroscopic) to orbital remote sensing, with sufficient intermediate steps (descent images) to connect in situ and remote observations.
The Galaxy Color-Magnitude Diagram in the Local Universe from GALEX and SDSS Data
NASA Astrophysics Data System (ADS)
Wyder, T. K.; GALEX Science Team
2005-12-01
We present the relative density of galaxies in the local universe as a function of their r-band absolute magnitudes and ultraviolet minus r-band colors. The Sloan Digital Sky Survey (SDSS) main galaxy sample selected in the r-band was matched with a sample of galaxies from the Galaxy Evolution Explorer (GALEX) Medium Imaging Survey in both the far-UV (FUV) and near-UV (NUV) bands. Similar to previous optical studies, the distribution of galaxies in (FUV-r) and (NUV-r) is bimodal with well-defined blue and red sequences. We compare the distribution of galaxies in these colors with both the D4000 index measured from the SDSS spectra as well as the SDSS (u-r) color.
2015-12-10
This enhanced color mosaic combines some of the sharpest views of Pluto that NASA's New Horizons spacecraft obtained during its July 14 flyby. The pictures are part of a sequence taken near New Horizons' closest approach to Pluto, with resolutions of about 250-280 feet (77-85 meters) per pixel -- revealing features smaller than half a city block on Pluto's surface. Lower resolution color data (at about 2,066 feet, or 630 meters, per pixel) were added to create this new image. The images form a strip 50 miles (80 kilometers) wide, trending (top to bottom) from the edge of "badlands" northwest of the informally named Sputnik Planum, across the al-Idrisi mountains, onto the shoreline of Pluto's "heart" feature, and just into its icy plains. They combine pictures from the telescopic Long Range Reconnaissance Imager (LORRI), taken approximately 15 minutes before New Horizons' closest approach to Pluto from a range of only 10,000 miles (17,000 kilometers), with color data (in near-infrared, red and blue) gathered by the Ralph/Multispectral Visible Imaging Camera (MVIC) 25 minutes before the LORRI pictures. The wide variety of cratered, mountainous and glacial terrains seen here gives scientists and the public alike a breathtaking, super-high-resolution color window into Pluto's geology. Also visible is the border between the relatively smooth Sputnik Planum ice sheet and the pitted area, with a series of hills forming slightly inside this unusual "shoreline." http://photojournal.jpl.nasa.gov/catalog/PIA20213
Kolanko, C J; Pyle, M D; Nath, J; Prasanna, P G; Loats, H; Blakely, W F
2000-03-01
We report a low-cost and efficient method for synthesizing a human pancentromeric DNA probe by the polymerase chain reaction (PCR) and an optimized protocol for in situ detection using color pigment immunostaining. The DNA template used in the PCR was a 2.4 kb insert containing human alphoid repeated sequences of pancentromeric DNA subcloned into pUC9 (Miller et al. 1988), and the primers hybridized to internal sequences of the 172 bp consensus tandem repeat associated with human centromeres. PCR was performed in the presence of biotin-11-dUTP, and the product was used for in situ hybridization to detect the pancentromeric region of human chromosomes in metaphase spreads. Detection of the pancentromeric probe was achieved by immunoenzymatic color pigment painting to yield a permanent image detected at high resolution by bright field microscopy. The ability to synthesize the centromeric probe rapidly and to detect it with color pigment immunostaining will lead to enhanced identification and eventually to automation of various chromosome aberration assays.
Viking Imaging of Phobos and Deimos: An Overview of the Primary Mission
NASA Technical Reports Server (NTRS)
Duxbury, T. C.; Veverka, J.
1977-01-01
During the Viking primary mission the cameras on the two orbiters acquired about 50 pictures of the two Martian moons. The Viking images of the satellites have a higher surface resolution than those obtained by Mariner 9. The typical surface resolution achieved was 100-200 m, although detail as small as 40 m was imaged on Phobos during a particularly close passage. Attention is given to color sequences obtained for each satellite, aspects of phase angle coverage, and pictures for ephemeris improvement.
Teichmann, A Lina; Nieuwenstein, Mark R; Rich, Anina N
2015-01-01
Digit-color synesthetes report experiencing colors when perceiving letters and digits. The conscious experience is typically unidirectional (e.g., digits elicit colors but not vice versa) but recent evidence shows subtle bidirectional effects. We examined whether short-term memory for colors could be affected by the order of presentation reflecting more or less structure in the associated digits. We presented a stream of colored squares and asked participants to report the colors in order. The colors matched each synesthete's colors for digits 1-9 and the order of the colors corresponded either to a sequence of numbers (e.g., [red, green, blue] if 1 = red, 2 = green, 3 = blue) or no systematic sequence. The results showed that synesthetes recalled sequential color sequences more accurately than pseudo-randomized colors, whereas no such effect was found for the non-synesthetic controls. Synesthetes did not differ from non-synesthetic controls in recall of color sequences overall, providing no evidence of a general advantage in memory for serial recall of colors.
Joint denoising, demosaicing, and chromatic aberration correction for UHD video
NASA Astrophysics Data System (ADS)
Jovanov, Ljubomir; Philips, Wilfried; Damstra, Klaas Jan; Ellenbroek, Frank
2017-09-01
High-resolution video capture is crucial for numerous applications such as surveillance, security, industrial inspection, medical imaging and digital entertainment. In the last two decades, we have witnessed a dramatic increase in the spatial resolution and the maximal frame rate of video capturing devices, and further resolution increases bring several challenges. As pixel size shrinks, the amount of light collected per pixel decreases, which raises the noise level. Moreover, smaller pixels make lens imperfections more pronounced, especially chromatic aberrations; even with high-quality lenses, some chromatic aberration artefacts remain. Noise also increases further at higher frame rates. To reduce camera complexity and cost, a single sensor with a Color Filter Array captures all three colors; to obtain a full-resolution color image, the missing color components must be interpolated, i.e. demosaicked, which is more challenging at high resolution because of the increased noise and aberrations. In this paper, we propose a new method which jointly performs chromatic aberration correction, denoising and demosaicking. By reducing all artefacts jointly, we reduce the overall complexity of the system and the introduction of new artefacts. To suppress possible flicker we also perform temporal video enhancement. We evaluate the proposed method on a number of publicly available UHD sequences and on sequences recorded in our studio.
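For context, the sketch below shows plain bilinear demosaicking of a Bayer mosaic, the baseline that joint methods improve upon. An RGGB layout is assumed, and the paper's joint denoising and aberration correction are not reproduced here.

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """Bilinear demosaicking of an RGGB Bayer mosaic `raw` (H, W) -> (H, W, 3)."""
    h, w = raw.shape
    r = np.zeros((h, w)); g = np.zeros((h, w)); b = np.zeros((h, w))
    r[0::2, 0::2] = raw[0::2, 0::2]   # R on even rows/cols (RGGB layout assumed)
    g[0::2, 1::2] = raw[0::2, 1::2]   # G on the two checkerboard positions
    g[1::2, 0::2] = raw[1::2, 0::2]
    b[1::2, 1::2] = raw[1::2, 1::2]   # B on odd rows/cols
    # Standard bilinear kernels: quarter-sampled R/B vs. checkerboard-sampled G.
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    return np.dstack([convolve(r, k_rb), convolve(g, k_g), convolve(b, k_rb)])
```

Because each channel is interpolated independently, this baseline amplifies noise and color fringing near edges, which is exactly the coupling a joint method exploits.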
Method and Apparatus for Evaluating the Visual Quality of Processed Digital Video Sequences
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
2002-01-01
A Digital Video Quality (DVQ) apparatus and method that incorporate a model of human visual sensitivity to predict the visibility of artifacts. The DVQ method and apparatus are used for the evaluation of the visual quality of processed digital video sequences and for adaptively controlling the bit rate of the processed digital video sequences without compromising the visual quality. The DVQ apparatus minimizes the required amount of memory and computation. The input to the DVQ apparatus is a pair of color image sequences: an original (R) non-compressed sequence, and a processed (T) sequence. Both sequences (R) and (T) are sampled, cropped, and subjected to color transformations. The sequences are then subjected to blocking and discrete cosine transformation, and the results are transformed to local contrast. The next step is a time filtering operation which implements the human sensitivity to different time frequencies. The results are converted to threshold units by dividing each discrete cosine transform coefficient by its respective visual threshold. At the next stage the two sequences are subtracted to produce an error sequence. The error sequence is subjected to a contrast masking operation, which also depends upon the reference sequence (R). The masked errors can be pooled in various ways to illustrate the perceptual error over various dimensions, and the pooled error can be converted to a visual quality measure.
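A heavily simplified sketch of this style of metric follows: blockwise DCT, division by per-coefficient visual thresholds, differencing, and Minkowski pooling. The threshold matrix and pooling exponent here are illustrative stand-ins, and the patent's sampling, cropping, color transforms, temporal filtering, local-contrast conversion, and contrast masking stages are all omitted.

```python
import numpy as np
from scipy.fft import dctn

def dvq_error(ref, test, thresholds, block=8):
    """Perceptually weighted error between two grayscale frames of equal shape.
    `thresholds` is an 8x8 matrix of visual thresholds (a real system would take
    these from a human-vision model)."""
    h, w = ref.shape
    err = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            cr = dctn(ref[y:y + block, x:x + block], norm="ortho") / thresholds
            ct = dctn(test[y:y + block, x:x + block], norm="ortho") / thresholds
            err.append(ct - cr)               # error in threshold units (JNDs)
    err = np.abs(np.array(err))
    return (err ** 4).mean() ** 0.25          # Minkowski pooling (beta = 4, assumed)
```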
Six-color solid state illuminator for cinema projector
NASA Astrophysics Data System (ADS)
Huang, Junejei; Wang, Yuchang
2014-09-01
A light source for a cinema projector must offer reliability, high brightness, good color, and 3D capability without silver screens. To meet these requirements, a laser-phosphor-based solid-state illuminator with six primary colors is proposed. The six primary colors are divided into two groups: R1, R2, G1, G2, B1, and B2. Colors B1, B2, and R2 come from lasers with wavelengths of 440 nm, 465 nm, and 639 nm. Color G1 comes from a green phosphor pumped by the B2 laser, while colors G2 and R1 come from a yellow phosphor pumped by the B1 laser. The two groups of colors are combined by a multiband filter and produced by alternately switching the B1 and B2 lasers. The two combined three-color sequences are sent to the 3-chip cinema projector and synchronized with a frame rate of 120 Hz. In 2D mode, the resulting six primary colors provide a very wide color gamut. In 3D mode, the two groups of red, green, and blue primaries provide two sets of images that are received by the left and right eyes.
Quality and noise measurements in mobile phone video capture
NASA Astrophysics Data System (ADS)
Petrescu, Doina; Pincenti, John
2011-02-01
The quality of videos captured with mobile phones has become increasingly important particularly since resolutions and formats have reached a level that rivals the capabilities available in the digital camcorder market, and since many mobile phones now allow direct playback on large HDTVs. The video quality is determined by the combined quality of the individual parts of the imaging system including the image sensor, the digital color processing, and the video compression, each of which has been studied independently. In this work, we study the combined effect of these elements on the overall video quality. We do this by evaluating the capture under various lighting, color processing, and video compression conditions. First, we measure full reference quality metrics between encoder input and the reconstructed sequence, where the encoder input changes with light and color processing modifications. Second, we introduce a system model which includes all elements that affect video quality, including a low light additive noise model, ISP color processing, as well as the video encoder. Our experiments show that in low light conditions and for certain choices of color processing the system level visual quality may not improve when the encoder becomes more capable or the compression ratio is reduced.
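Full-reference measurement between the encoder input and the reconstructed sequence can be as simple as PSNR, sketched below for same-shape arrays. The study's actual metrics are not specified in the abstract, so this is only a representative example of the first measurement step described above.

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    """Full-reference PSNR in dB between encoder input `ref` and decoded `rec`
    (arrays of identical shape, e.g. stacked Y-planes of a sequence)."""
    mse = np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```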
Anazawa, Takashi; Yamazaki, Motohiro
2017-12-05
Although multi-point, multi-color fluorescence-detection systems are widely used in various sciences, they would find wider application if miniaturized. Accordingly, an ultra-small four-emission-point, four-color fluorescence-detection system was developed. Its size (the space between the emission points and the detection plane) is 15 × 10 × 12 mm, three orders of magnitude smaller than that of a conventional system. Fluorescence from four emission points spaced 1 mm apart on the same plane was collimated by four lenses and split into four color fluxes by four dichroic mirrors. A total of sixteen parallel color fluxes were then input directly onto an image sensor and detected simultaneously. The emission-point plane and the detection plane (the image-sensor surface) were parallel and separated by only 12 mm. The developed system was applied to four-capillary array electrophoresis and successfully achieved Sanger DNA sequencing. Moreover, compared with a conventional system, the developed system had equivalent high fluorescence-detection sensitivity (lower detection limit of 17 pM dROX) and a dynamic range higher by 1.6 orders of magnitude (4.3 orders of magnitude).
Dark and Bright Terrains of Pluto
2015-07-10
These circular maps show the distribution of Pluto's dark and bright terrains as revealed by NASA's New Horizons mission prior to July 4, 2015. Each map is an azimuthal equidistant projection centered on the north pole, with latitude and longitude indicated. Both a grayscale and a color version are shown. The grayscale version is based on 7 days of panchromatic imaging from the Long Range Reconnaissance Imager (LORRI), whereas the color version uses the grayscale base and incorporates lower-resolution color information from the Multi-spectral Visible Imaging Camera (MVIC), part of the Ralph instrument. The color version is also shown in a simple cylindrical projection in PIA19700. In these maps, the polar bright terrain is surrounded by a somewhat darker polar fringe whose latitudinal position varies strongly with longitude. Especially striking are the much darker regions along the equator. A broad dark swath ("the whale") stretches along the equator from approximately 20 to 160 degrees of longitude. Several dark patches appear in a regular sequence centered near 345 degrees of longitude. A spectacular bright region occupies Pluto's mid-latitudes near 180 degrees of longitude and stretches southward over the equator. New Horizons' closest approach to Pluto will occur near this longitude, which will permit high-resolution visible imaging and compositional mapping of these various regions. http://photojournal.jpl.nasa.gov/catalog/PIA19706
Accurate multiplex polony sequencing of an evolved bacterial genome.
Shendure, Jay; Porreca, Gregory J; Reppas, Nikos B; Lin, Xiaoxia; McCutcheon, John P; Rosenbaum, Abraham M; Wang, Michael D; Zhang, Kun; Mitra, Robi D; Church, George M
2005-09-09
We describe a DNA sequencing technology in which a commonly available, inexpensive epifluorescence microscope is converted to rapid nonelectrophoretic DNA sequencing automation. We apply this technology to resequence an evolved strain of Escherichia coli at less than one error per million consensus bases. A cell-free, mate-paired library provided single DNA molecules that were amplified in parallel to 1-micrometer beads by emulsion polymerase chain reaction. Millions of beads were immobilized in a polyacrylamide gel and subjected to automated cycles of sequencing by ligation and four-color imaging. Cost per base was roughly one-ninth as much as that of conventional sequencing. Our protocols were implemented with off-the-shelf instrumentation and reagents.
NASA Technical Reports Server (NTRS)
Paradella, W. R. (Principal Investigator); Vitorello, I.; Monteiro, M. D.
1984-01-01
Enhancement techniques and thematic classifications were applied to the metasediments of the Bambui Super Group (Upper Proterozoic) in the region of Serra do Ramalho, in the southwest of the state of Bahia. Linear contrast stretch, band ratios with contrast stretch, and color composites allow lithological discrimination. The effects of human activities and of vegetation cover mask and limit, in several ways, the lithological discrimination achievable with digital MSS data. Principal-component images, and color composites of linear contrast stretches of these products, show lithological discrimination through tonal gradations. This set of products allows the delineation of several metasedimentary sequences at a level beyond reconnaissance mapping. Supervised (maximum likelihood) and unsupervised (K-means) classification of the limestone sequence, host to fluorite mineralization, shows satisfactory results.
Removing flicker based on sparse color correspondences in old film restoration
NASA Astrophysics Data System (ADS)
Huang, Xi; Ding, Youdong; Yu, Bing; Xia, Tianran
2018-04-01
Archived film is an indispensable part of the record of human civilization, and digital restoration of damaged film is now a mainstream practice. In this paper, we propose a technique based on sparse color correspondences to remove fading flicker from old films. Our approach combines multiple frames to establish a simple correction model and includes three key steps. First, we recover sparse color correspondences in the input frames to build a matrix with many missing entries. Second, we present a low-rank matrix factorization approach to estimate the unknown parameters of this model. Finally, we adopt a two-step strategy that divides the estimated parameters into reference-frame parameters for color recovery correction and other-frame parameters for color consistency correction to remove flicker. By combining multiple frames, our method takes the continuity of the input sequence into account, and experimental results show that it removes fading flicker efficiently.
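The paper's exact parameterization is not given in the abstract. As one hedged illustration, a low-rank factorization of a correspondence matrix with missing entries can be estimated by alternating least squares over only the observed cells, roughly as follows (function names, the rank, and the regularization are our assumptions):

```python
import numpy as np

def lowrank_complete(M, mask, rank=2, iters=100, lam=1e-3, seed=0):
    """Alternating least squares for M ~ U @ V.T using observed entries only.
    M: (frames x correspondences) matrix; mask: True where an entry is observed."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    reg = lam * np.eye(rank)
    for _ in range(iters):
        for i in range(m):                       # update row (frame) factors
            o = mask[i]
            if o.any():
                Vo = V[o]
                U[i] = np.linalg.solve(Vo.T @ Vo + reg, Vo.T @ M[i, o])
        for j in range(n):                       # update column factors
            o = mask[:, j]
            if o.any():
                Uo = U[o]
                V[j] = np.linalg.solve(Uo.T @ Uo + reg, Uo.T @ M[o, j])
    return U, V
```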
Teichmann, A Lina; Nieuwenstein, Mark R; Rich, Anina N
2017-08-01
For digit-color synaesthetes, digits elicit vivid experiences of color that are highly consistent for each individual. The conscious experience of synaesthesia is typically unidirectional: digits evoke colors but not vice versa. There is an ongoing debate about whether synaesthetes have a memory advantage over non-synaesthetes. One key question in this debate is whether synaesthetes have a general superiority or whether any benefit is specific to a certain type of material. Here, we focus on immediate serial recall and ask digit-color synaesthetes and controls to memorize digit and color sequences. We developed a sensitive staircase method manipulating presentation duration to measure participants' serial recall of both overlearned and novel sequences. Our results show that synaesthetes can activate digit information to enhance serial memory for color sequences. When color sequences corresponded to ascending or descending digit sequences, synaesthetes encoded these sequences at a faster rate than their non-synaesthete counterparts and faster than non-structured color sequences. However, encoding color sequences was approximately 200 ms slower than encoding digit sequences directly, independent of group and condition, which shows that the translation process is time consuming. These results suggest memory advantages in synaesthesia require a modified dual-coding account, in which secondary (synaesthetically linked) information is useful only if it is more memorable than the primary information to be recalled. Our study further shows that duration thresholds are a sensitive method to measure subtle differences in serial recall performance. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Viewing zone duplication of multi-projection 3D display system using uniaxial crystal.
Lee, Chang-Kun; Park, Soon-Gi; Moon, Seokil; Lee, Byoungho
2016-04-18
We propose a novel multiplexing technique for increasing the viewing zone of a multi-view, multi-projection 3D display system by employing double refraction in a uniaxial crystal. When linearly polarized images from a projector pass through the uniaxial crystal, two possible optical paths exist according to the polarization state of the image. The optical path of the image can therefore be changed, shifting the viewing zone laterally. Polarization modulation of the image from a single projection unit enables us to generate two viewing zones at different positions. To realize full-color images at each viewing zone, a polarization-based temporal multiplexing technique is adopted with a conventional liquid crystal (LC) polarization switching device. Through experiments, a prototype ten-view multi-projection 3D display system presenting full-color view images is implemented by combining five laser scanning projectors, an optically clear calcite (CaCO3) crystal, and an LC polarization rotator. For each time sequence of the temporal multiplexing, the luminance distribution of the proposed system is measured and analyzed.
Barnacle Bill in Super Resolution from Insurance Panorama
NASA Technical Reports Server (NTRS)
1998-01-01
Barnacle Bill is a small rock immediately west-northwest of the Mars Pathfinder lander and was the first rock visited by the Sojourner Rover's alpha proton X-ray spectrometer (APXS) instrument. This image shows super resolution techniques applied to the first APXS target rock, which was never imaged with the rover's forward cameras. Super resolution was applied to help to address questions about the texture of this rock and what it might tell us about its mode of origin.
This view of Barnacle Bill was produced by combining the 'Insurance Pan' frames taken while the IMP camera was still in its stowed position on sol 2. The composite color frames that make up this anaglyph were produced for both the right and left eye of the IMP. The right-eye composite consists of 5 frames taken with different color filters; the left-eye composite consists of only 1 frame. The resultant image from each eye was enlarged by 500% and then co-added using Adobe Photoshop to produce, in effect, a super-resolution panchromatic frame that is sharper than an individual frame would be. These panchromatic frames were then colorized with the red, green, and blue filtered images from the same sequence. The color balance was adjusted to approximate the true color of Mars. The anaglyph view was produced by combining the left and right eye color composite frames, assigning the left-eye composite view to the red color plane and the right-eye composite view to the green and blue color planes (cyan), to produce a stereo anaglyph mosaic. This mosaic can be viewed in 3-D on your computer monitor or in color print form by wearing red-blue 3-D glasses. Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals.
NASA Technical Reports Server (NTRS)
1997-01-01
The images used to create this color composite of Io were acquired by Galileo during its ninth orbit (C9) of Jupiter and are part of a sequence of images designed to map the topography or relief on Io and to monitor changes in the surface color due to volcanic activity. Obtaining images at low illumination angles is like taking a picture from a high altitude around sunrise or sunset. Such lighting conditions emphasize the topography of the volcanic satellite. Several mountains up to a few miles high can be seen in this view, especially near the upper right. Some of these mountains appear to be tilted crustal blocks. Most of the dark spots correspond to active volcanic centers.
North is toward the top of the picture, which merges images obtained with the clear, red, green, and violet filters of the solid state imaging (CCD) system on NASA's Galileo spacecraft. The resolution is 8.3 kilometers per picture element. The image was taken on June 27, 1997 at a range of 817,000 kilometers. The Jet Propulsion Laboratory, Pasadena, CA manages the Galileo mission for NASA's Office of Space Science, Washington, DC. JPL is an operating division of the California Institute of Technology (Caltech). This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo
Color preference and familiarity in performance on brand logo recall.
Huang, Kuo-Chen; Lin, Chin-Chiuan; Chiang, Shu-Ying
2008-10-01
Two experiments assessed the effects of color preference and brand-logo familiarity on recall performance. Exp. 1 explored color preferences, using a forced-choice technique, among 189 women and 63 men, Taiwanese college students ages 18 to 20 years (M = 19.4, SD = 1.5). The three most preferred colors were white, light blue, and black; the three least preferred were light orange, dark violet, and dark brown. Exp. 2 investigated the effects on recall of color preference, based on the results of Exp. 1, and of brand-logo familiarity. A total of 27 women and 21 men, Taiwanese college students ages 18 to 20 years (M = 19.2, SD = 1.2), participated. They memorized a list of 24 logos (four logos shown in six colors) and then performed sequential recall. Analyses showed that color preference significantly affected recall accuracy: accuracy for highly preferred colors was significantly greater than for less preferred ones. Results showed no significant effects of brand-logo familiarity or sex on accuracy. The interaction of color preference and brand-logo familiarity on accuracy was, however, significant. These results have implications for the design of brand logos to create and sustain memory of brand images.
Barnacle Bill in Super Resolution from Super Panorama
1998-07-03
"Barnacle Bill" is a small rock immediately west-northwest of the Mars Pathfinder lander and was the first rock visited by the Sojourner Rover's alpha proton X-ray spectrometer (APXS) instrument. This image shows super resolution techniques applied to the first APXS target rock, which was never imaged with the rover's forward cameras. Super resolution was applied to help to address questions about the texture of this rock and what it might tell us about its mode of origin. This view of Barnacle Bill was produced by combining the "Super Panorama" frames from the IMP camera. Super resolution was applied to help to address questions about the texture of these rocks and what it might tell us about their mode of origin. The composite color frames that make up this anaglyph were produced for both the right and left eye of the IMP. The composites consist of 7 frames in the right eye and 8 frames in the left eye, taken with different color filters that were enlarged by 500% and then co-added using Adobe Photoshop to produce, in effect, a super-resolution panchromatic frame that is sharper than an individual frame would be. These panchromatic frames were then colorized with the red, green, and blue filtered images from the same sequence. The color balance was adjusted to approximate the true color of Mars. The anaglyph view was produced by combining the left with the right eye color composite frames by assigning the left eye composite view to the red color plane and the right eye composite view to the green and blue color planes (cyan), to produce a stereo anaglyph mosaic. This mosaic can be viewed in 3-D on your computer monitor or in color print form by wearing red-blue 3-D glasses. http://photojournal.jpl.nasa.gov/catalog/PIA01409
Incidental orthographic learning during a color detection task.
Protopapas, Athanassios; Mitsi, Anna; Koustoumbardis, Miltiadis; Tsitsopoulou, Sofia M; Leventi, Marianna; Seitz, Aaron R
2017-09-01
Orthographic learning refers to the acquisition of knowledge about specific spelling patterns forming words and about general biases and constraints on letter sequences. It is thought to occur by strengthening simultaneously activated visual and phonological representations during reading. Here we demonstrate that a visual perceptual learning procedure that leaves no time for articulation can result in orthographic learning evidenced in improved reading and spelling performance. We employed task-irrelevant perceptual learning (TIPL), in which the stimuli to be learned are paired with an easy task target. Assorted line drawings and difficult-to-spell words were presented in red color among sequences of other black-colored words and images presented in rapid succession, constituting a fast-TIPL procedure with color detection being the explicit task. In five experiments, Greek children in Grades 4-5 showed increased recognition of words and images that had appeared in red, both during and after the training procedure, regardless of within-training testing, and also when targets appeared in blue instead of red. Significant transfer to reading and spelling emerged only after increased training intensity. In a sixth experiment, children in Grades 2-3 showed generalization to words not presented during training that carried the same derivational affixes as in the training set. We suggest that reinforcement signals related to detection of the target stimuli contribute to the strengthening of orthography-phonology connections beyond earlier levels of visually-based orthographic representation learning. These results highlight the potential of perceptual learning procedures for the reinforcement of higher-level orthographic representations. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Spread-Spectrum Beamforming and Clutter Filtering for Plane-Wave Color Doppler Imaging.
Mansour, Omar; Poepping, Tamie L; Lacefield, James C
2016-07-21
Plane-wave imaging is desirable for its ability to achieve high frame rates, allowing the capture of fast dynamic events and continuous Doppler data. In most implementations of plane-wave imaging, multiple low-resolution images from different plane wave tilt angles are compounded to form a single high-resolution image, thereby reducing the frame rate. Compounding improves the lateral beam profile in the high-resolution image, but it also acts as a low-pass filter in slow time that causes attenuation and aliasing of signals with high Doppler shifts. This paper introduces a spread-spectrum color Doppler imaging method that produces high-resolution images without the use of compounding, thereby eliminating the tradeoff between beam quality, maximum unaliased Doppler frequency, and frame rate. The method uses a long, random sequence of transmit angles rather than a linear sweep of plane wave directions. The random angle sequence randomizes the phase of off-focus (clutter) signals, thereby spreading the clutter power in the Doppler spectrum, while keeping the spectrum of the in-focus signal intact. The ensemble of randomly tilted low-resolution frames also acts as the Doppler ensemble, so it can be much longer than a conventional linear sweep, thereby improving beam formation while also making the slow-time Doppler sampling frequency equal to the pulse repetition frequency. Experiments performed using a carotid artery phantom with constant flow demonstrate that the spread-spectrum method more accurately measures the parabolic flow profile of the vessel and outperforms conventional plane-wave Doppler in both contrast resolution and estimation of high flow velocities. The spread-spectrum method is expected to be valuable for Doppler applications that require measurement of high velocities at high frame rates.
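As a hedged illustration of why a random angle ordering spreads clutter while leaving the in-focus Doppler tone intact, the toy slow-time simulation below compares a repeated linear sweep with a random sequence of the same tilt angles. All signal parameters here are invented for the demonstration and are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
angles = np.linspace(-5, 5, 11)        # 11 plane-wave tilt angles (degrees)
ensemble, fd = 512, 0.125              # slow-time samples, normalized flow Doppler
t = np.arange(ensemble)

def slow_time_signal(seq):
    flow = np.exp(2j * np.pi * fd * t)               # in-focus signal: clean tone
    clutter = 2.0 * np.exp(2j * np.pi * 0.07 * seq)  # off-focus phase tracks tilt
    return flow + clutter

linear = angles[t % len(angles)]                  # conventional repeated sweep
random_seq = rng.choice(angles, size=ensemble)    # spread-spectrum ordering

for name, seq in [("linear sweep", linear), ("random angles", random_seq)]:
    spec = np.abs(np.fft.fft(slow_time_signal(seq))) / ensemble
    spec[int(fd * ensemble)] = 0   # mask the flow tone
    spec[0] = 0                    # zero-Doppler clutter goes to the wall filter
    print(f"{name}: strongest residual clutter line = {spec.max():.3f}")
```

With the periodic sweep, the clutter energy piles into a few slow-time spectral lines that can masquerade as Doppler shifts; the random ordering spreads the same energy across the whole spectrum as a low-level noise floor.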
Soccer player recognition by pixel classification in a hybrid color space
NASA Astrophysics Data System (ADS)
Vandenbroucke, Nicolas; Macaire, Ludovic; Postaire, Jack-Gerard
1997-08-01
Soccer is a very popular sport all over the world. Coaches and sports commentators need accurate information about soccer games, especially about player behavior. This information can be gathered by inspectors who watch the match and manually report the actions of the players involved in the principal phases of the game. Generally, these inspectors focus their attention on the few players standing near the ball and do not report the motion of the other players, so it is desirable to design a system which automatically tracks all the players in real time. We therefore propose to automatically track each player through the successive color images of sequences acquired by a fixed color camera. Each player present in the image is modeled by an active contour model, or snake. When a player is hidden by another during the match, the snakes tracking these two players merge, and it becomes impossible to track the players unless the snakes are interactively re-initialized. Fortunately, in most cases the two players do not belong to the same team. We therefore present an algorithm which recognizes the teams of the players by pixel classification. Pixels representing the soccer ground must be withdrawn before considering the players themselves; to eliminate these pixels, the color characteristics of the ground are determined interactively. In a second step, dealing with windows containing only one player of one team, the color features which yield the best discrimination between the two teams are selected. Thanks to these color features, the pixels associated with the players of the two teams form two separate clusters in a color space. In fact, there are many color representation systems, and it is worthwhile to evaluate the features which provide the best separation between the two classes of pixels according to the players' soccer suits. Finally, the classification process for image segmentation is based on the three most discriminating color features, which define the coordinates of each pixel in a 'hybrid color space.' Thanks to this hybrid color representation, each pixel can be assigned to one of the two classes by minimum-distance classification.
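The final minimum-distance step can be sketched as a nearest-centroid classifier over the three selected color features. The function names and the use of per-team training pixels below are our assumptions, not the authors' code:

```python
import numpy as np

def team_centroids(team1_pixels: np.ndarray, team2_pixels: np.ndarray):
    """team*_pixels: (N, 3) arrays of the three most discriminating color
    features (the hybrid color space coordinates) from training windows."""
    return np.stack([team1_pixels.mean(axis=0), team2_pixels.mean(axis=0)])

def classify_pixels(pixels: np.ndarray, centroids: np.ndarray) -> np.ndarray:
    """Minimum-distance classification: assign each pixel (row) to the team
    whose centroid is nearest in the hybrid color space."""
    d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)   # 0 = team 1, 1 = team 2
```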
Time-lapse Sequence of Jupiter's South Pole
2018-02-22
This series of images captures cloud patterns near Jupiter's south pole, looking up toward the planet's equator. NASA's Juno spacecraft took the color-enhanced time-lapse sequence of images during its eleventh close flyby of the gas giant on Feb. 7 between 7:21 a.m. and 8:01 a.m. PST (10:21 a.m. and 11:01 a.m. EST). At the time, the spacecraft was between 85,292 and 124,856 miles (137,264 to 200,937 kilometers) above the planet's cloud tops, with the images centered on latitudes from 84.1 to 75.5 degrees south. At first glance, the series might appear to be the same image repeated. But closer inspection reveals slight changes, which are most easily noticed by comparing the far-left image with the far-right image. Directly, the images show Jupiter. But, through slight variations in the images, they indirectly capture the motion of the Juno spacecraft itself, once again swinging around a giant planet hundreds of millions of miles from Earth. https://photojournal.jpl.nasa.gov/catalog/PIA21979
NASA Astrophysics Data System (ADS)
Kostal, Hubert; Kreysar, Douglas; Rykowski, Ronald
2009-08-01
The color and luminance distributions of large light sources are difficult to measure because of the size of the source and the physical space required for the measurement. We describe a method for the measurement of large light sources in a limited space that efficiently overcomes the physical limitations of traditional far-field measurement techniques. This method uses a calibrated, high dynamic range imaging colorimeter and a goniometric system to move the light source through an automated measurement sequence in the imaging colorimeter's field-of-view. The measurement is performed from within the near-field of the light source, enabling a compact measurement set-up. This method generates a detailed near-field color and luminance distribution model that can be directly converted to ray sets for optical design and that can be extrapolated to far-field distributions for illumination design. The measurements obtained show excellent correlation to traditional imaging colorimeter and photogoniometer measurement methods. The near-field goniometer approach that we describe is broadly applicable to general lighting systems, can be deployed in a compact laboratory space, and provides full near-field data for optical design and simulation.
Dynamics of distribution and density of phreatophytes and other arid land plant communities
NASA Technical Reports Server (NTRS)
Turner, R. M. (Principal Investigator)
1973-01-01
The author has identified the following significant results. Ground truth measurements of plant coverage on six satellite overflight dates reveal unique trends in coverage for the five desert or semi-desert communities selected. Densitometry and multispectral additive color viewing were used in a preliminary analysis of imagery using the electronic satellite image analyzer console at Stanford Research Institute. The densitometric analysis shows promise for mapping boundaries between plant communities. Color additive viewing of a chronologic sequence of the same scene shown in rapid order will provide a method for mapping phreatophyte communities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hachisu, Izumi; Kato, Mariko, E-mail: hachisu@ea.c.u-tokyo.ac.jp, E-mail: mariko@educ.cc.keio.ac.jp
We identified a general course of classical nova outbursts in the B – V versus U – B color-color diagram. Novae are reported to show spectra similar to those of A-F supergiants near optical light maximum. However, they do not follow the supergiant sequence in the color-color diagram, nor the blackbody or main-sequence loci. Instead, we found that novae evolve along a new sequence in the pre-maximum and near-maximum phases, which we call 'the nova-giant sequence'. This sequence is parallel to, but Δ(U – B) ≈ –0.2 mag bluer than, the supergiant sequence. This is because the mass of a nova envelope is much (∼10^–4 times) less than that of a normal supergiant. After optical maximum, the color quickly evolves back blueward along the same nova-giant sequence and reaches the point of free-free emission (B – V = –0.03, U – B = –0.97), which coincides with the intersection of the blackbody sequence and the nova-giant sequence, and remains there for a while. Then the color evolves leftward (blueward in B – V but almost constant in U – B), owing mainly to the development of strong emission lines. This is the general course of nova outbursts in the color-color diagram, deduced from eight well-observed novae in various speed classes. For a nova with unknown extinction, we can determine a reliable value of the color excess by matching the observed track of the target nova with this general course. This is a new and convenient method for obtaining the color excesses of classical novae. Using this method, we redetermined the color excesses of 20 well-observed novae. The obtained color excesses are in reasonable agreement with previous results, which in turn supports the idea of our general track of nova outbursts. Additionally, we estimated the absolute V magnitudes of about 30 novae using a method for time-stretching nova light curves to analyze the distance-reddening relations of the novae.
NASA Astrophysics Data System (ADS)
Powalka, Mathieu; Lançon, Ariane; Puzia, Thomas H.; Peng, Eric W.; Liu, Chengze; Muñoz, Roberto P.; Blakeslee, John P.; Côté, Patrick; Ferrarese, Laura; Roediger, Joel; Sánchez-Janssen, Rúben; Zhang, Hongxin; Durrell, Patrick R.; Cuillandre, Jean-Charles; Duc, Pierre-Alain; Guhathakurta, Puragra; Gwyn, S. D. J.; Hudelot, Patrick; Mei, Simona; Toloba, Elisa
2016-11-01
The central region of the Virgo Cluster of galaxies contains thousands of globular clusters (GCs), an order of magnitude more than the number of clusters found in the Local Group. Relics of early star formation epochs in the universe, these GCs also provide ideal targets to test our understanding of the spectral energy distributions (SEDs) of old stellar populations. Based on photometric data from the Next Generation Virgo Cluster Survey (NGVS) and its near-infrared counterpart NGVS-IR, we select a robust sample of ≈2000 GCs with excellent photometry that span the full range of colors present in the Virgo core. The selection exploits the well-defined locus of GCs in the uiK diagram and the fact that the GCs are marginally resolved in the images. We show that the GCs define a narrow sequence in five-dimensional color space, with limited but real dispersion around the mean sequence. The comparison of these SEDs with the predictions of 11 widely used population synthesis models highlights differences between the models and also shows that no single model adequately matches the data in all colors. We discuss possible causes for some of these discrepancies. Forthcoming papers of this series will examine how best to estimate photometric metallicities in this context, and compare the Virgo GC colors with those in other environments.
Detection of microRNAs in color space.
Marco, Antonio; Griffiths-Jones, Sam
2012-02-01
Deep sequencing provides inexpensive opportunities to characterize the transcriptional diversity of known genomes. The AB SOLiD technology generates millions of short sequencing reads in color space; that is, the raw data is a sequence of colors, where each color represents 2 nt and each nucleotide is represented by two consecutive colors. This strategy is purported to have several advantages, including an increased ability to distinguish sequencing errors from polymorphisms. Several programs have been developed to map short reads to genomes in color space. However, a number of previously unexplored technical issues arise when using SOLiD technology to characterize microRNAs. Here we explore these technical difficulties. First, since the sequenced reads are longer than the biological sequences, every read is expected to contain linker fragments. The color-calling error rate increases toward the 3′ end of the read, such that recognizing the linker sequence for removal becomes problematic. Second, mapping in color space may lead to the loss of the first nucleotide of each read. We propose a sequential trimming and mapping approach to map small RNAs. Using our strategy, we reanalyze three published insect small RNA deep sequencing datasets and characterize 22 new microRNAs. A bash shell script to perform the sequential trimming and mapping procedure, called SeqTrimMap, is available at http://www.mirbase.org/tools/seqtrimmap/. Contact: antonio.marco@manchester.ac.uk. Supplementary data are available at Bioinformatics online.
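The two-base color encoding referred to above can be written compactly: with bases indexed A=0, C=1, G=2, T=3, the color of each adjacent base pair is the XOR of the two indices, and the read is anchored by its leading base. A minimal sketch of this widely documented SOLiD encoding (the helper names are ours):

```python
IDX = {"A": 0, "C": 1, "G": 2, "T": 3}

def to_color_space(seq: str) -> str:
    """Encode a nucleotide sequence as SOLiD colors: each color is the XOR of
    adjacent base indices, so every base is covered by two consecutive colors."""
    return seq[0] + "".join(str(IDX[a] ^ IDX[b]) for a, b in zip(seq, seq[1:]))

print(to_color_space("ATGGCA"))   # 'A31031': leading base anchors decoding
```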
Near-Infrared Coloring via a Contrast-Preserving Mapping Model.
Chang-Hwan Son; Xiao-Ping Zhang
2017-11-01
Near-infrared gray images captured along with corresponding visible color images have recently proven useful for image restoration and classification. This paper introduces a new coloring method to add colors to near-infrared gray images based on a contrast-preserving mapping model. A naive coloring method directly adds the colors from the visible color image to the near-infrared gray image. However, this method results in an unrealistic image because of the discrepancies in the brightness and image structure between the captured near-infrared gray image and the visible color image. To solve the discrepancy problem, first, we present a new contrast-preserving mapping model to create a new near-infrared gray image with a similar appearance in the luminance plane to the visible color image, while preserving the contrast and details of the captured near-infrared gray image. Then, we develop a method to derive realistic colors that can be added to the newly created near-infrared gray image based on the proposed contrast-preserving mapping model. Experimental results show that the proposed new method not only preserves the local contrast and details of the captured near-infrared gray image, but also transfers the realistic colors from the visible color image to the newly created near-infrared gray image. It is also shown that the proposed near-infrared coloring can be used effectively for noise and haze removal, as well as local contrast enhancement.
Computer aided detection of tumor and edema in brain FLAIR magnetic resonance image using ANN
NASA Astrophysics Data System (ADS)
Pradhan, Nandita; Sinha, A. K.
2008-03-01
This paper presents an efficient region-based segmentation technique for detecting pathological tissues (tumor and edema) of the brain using fluid-attenuated inversion recovery (FLAIR) magnetic resonance (MR) images. This work segments FLAIR brain images into normal and pathological tissues based on statistical features and wavelet transform coefficients using the k-means algorithm. The image is divided into small blocks of 4×4 pixels, and the k-means algorithm clusters the image based on the feature vectors of the blocks, forming classes that represent different regions of the whole image. With the knowledge of the feature vectors of the different segmented regions, a supervised technique is used to train an artificial neural network using the fuzzy backpropagation algorithm (FBPA). Segmentation for detecting healthy tissues and tumors has been reported by several researchers using conventional MRI sequences such as T1-, T2- and PD-weighted sequences. This work successfully presents segmentation of healthy and pathological tissues (both tumors and edema) using FLAIR images. Finally, pseudo-coloring of the segmented and classified regions is performed for better human visualization.
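A hedged sketch of the clustering stage only follows. The paper's actual feature vector also includes wavelet coefficients; here, block mean and standard deviation stand in for it:

```python
import numpy as np

def block_features(img: np.ndarray, bs: int = 4) -> np.ndarray:
    """Split a 2D image into bs x bs blocks and compute simple per-block
    statistics (mean, std) as a stand-in feature vector."""
    h, w = (img.shape[0] // bs) * bs, (img.shape[1] // bs) * bs
    blocks = img[:h, :w].reshape(h // bs, bs, w // bs, bs).swapaxes(1, 2)
    blocks = blocks.reshape(-1, bs * bs).astype(np.float64)
    return np.stack([blocks.mean(axis=1), blocks.std(axis=1)], axis=1)

def kmeans(X: np.ndarray, k: int = 4, iters: int = 50, seed: int = 0):
    """Plain k-means over the block feature vectors."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - C[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(axis=0)
    return labels, C
```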
Investigating Open Clusters Melotte 111 and NGC 6811
NASA Astrophysics Data System (ADS)
Gunshefski, Linda; Paust, Nathaniel E. Q.; van Belle, Gerard
2018-01-01
We present photometry and color-magnitude diagrams for the open clusters Melotte 111 (Coma Berenices) and NGC 6811. These clusters were observed with Lowell Observatory's Discovery Channel Telescope Large Monolithic Imager in the V and I bands. The images were reduced with IRAF, and photometry was performed with DAOPHOT/ALLSTAR. The resulting photometry extends many magnitudes below the main-sequence turnoff. Both clusters are located nearby (Melotte 111 at d = 86 pc and NGC 6811 at d = 1,107 pc) and are evolutionarily young (Melotte 111, age = 450 Myr; NGC 6811, age = 1,000 Myr). This work marks the first step of a project to determine the cluster main-sequence mass functions and examine how the mass functions evolve in young stellar populations.
NASA Astrophysics Data System (ADS)
Gao, Bin; Liu, Wanyu; Wang, Liang; Liu, Zhengjun; Croisille, Pierre; Delachartre, Philippe; Clarysse, Patrick
2016-12-01
Cine-MRI is widely used for the analysis of cardiac function in clinical routine because of its high soft-tissue contrast and relatively short acquisition time in comparison with other cardiac MRI techniques. The gray-level distribution in cardiac cine-MRI is relatively homogeneous within the myocardium, which can make motion quantification difficult. To ensure that the motion estimation problem is well posed, more image features have to be considered. This work is inspired by a method previously developed for color image processing. The monogenic signal provides a framework to estimate the local phase, orientation, and amplitude of an image, three features which locally characterize the 2D intensity profile. The independent monogenic features are combined into a 3D matrix for motion estimation. To improve motion estimation accuracy, we chose the zero-mean normalized cross-correlation as a matching measure and implemented a bilateral filter for denoising and edge preservation. The monogenic feature distance is used in lieu of the color-space distance in the bilateral filter. Results obtained from four realistic simulated sequences outperformed two other state-of-the-art methods, even in the presence of noise. The motion estimation errors (end-point error) using our proposed method were reduced by about 20% in comparison with those obtained by the other tested methods. The new methodology was evaluated on four clinical sequences from patients presenting with cardiac motion dysfunction and one healthy volunteer. The derived strain fields compared favorably in their ability to identify myocardial regions with impaired motion.
Introduction to Color Imaging Science
NASA Astrophysics Data System (ADS)
Lee, Hsien-Che
2005-04-01
Color imaging technology has become almost ubiquitous in modern life in the form of monitors, liquid crystal screens, color printers, scanners, and digital cameras. This book is a comprehensive guide to the scientific and engineering principles of color imaging. It covers the physics of light and color, how the eye and physical devices capture color images, how color is measured and calibrated, and how images are processed. It stresses physical principles and includes a wealth of real-world examples. The book will be of value to scientists and engineers in the color imaging industry and, with homework problems, can also be used as a text for graduate courses on color imaging.
Scherer, N M; Basso, D M
2008-09-16
DNATagger is a web-based tool for coloring and editing DNA, RNA and protein sequences and alignments. It is dedicated to the visualization of protein coding sequences and also protein sequence alignments to facilitate the comprehension of evolutionary processes in sequence analysis. The distinctive feature of DNATagger is the use of codons as informative units for coloring DNA and RNA sequences. The codons are colored according to their corresponding amino acids. It is the first program that colors codons in DNA sequences without being affected by "out-of-frame" gaps of alignments. It can handle single gaps and gaps inside the triplets. The program also provides the possibility to edit the alignments and change color patterns and translation tables. DNATagger is a JavaScript application, following the W3C guidelines, designed to work on standards-compliant web browsers. It therefore requires no installation and is platform independent. The web-based DNATagger is available as free and open source software at http://www.inf.ufrgs.br/~dmbasso/dnatagger/.
Pseudo color ghost coding imaging with pseudo thermal light
NASA Astrophysics Data System (ADS)
Duan, De-yang; Xia, Yun-jie
2018-04-01
We present a new pseudo color imaging scheme, named pseudo color ghost coding imaging, based on ghost imaging but with a multiwavelength source modulated by a spatial light modulator. Unlike conventional pseudo color imaging, which lacks nondegenerate-wavelength spatial correlations and therefore yields only extra monochromatic images, this scheme obtains the degenerate-wavelength and nondegenerate-wavelength spatial correlations between the idler and signal beams simultaneously. The scheme can obtain more colorful images with higher quality than conventional pseudo color coding techniques. More importantly, a significant advantage of the scheme over conventional pseudo color coding imaging techniques is that images with different colors can be obtained without changing the light source or spatial filter.
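For context, a hedged sketch of the correlation reconstruction that ghost imaging schemes of this kind rely on, computed independently for each wavelength channel (the array shapes and the per-channel bucket signals are our assumptions, not the authors' implementation):

```python
import numpy as np

def ghost_image(patterns: np.ndarray, buckets: np.ndarray) -> np.ndarray:
    """Correlation ghost-imaging estimate for one wavelength channel.
    patterns: (n, H, W) modulation patterns shown on the SLM;
    buckets:  (n,) single-pixel (bucket) detector readings for that channel."""
    b = buckets - buckets.mean()             # removes the <B><I> background term
    return np.tensordot(b, patterns, axes=1) / len(buckets)

# A pseudo-color image could then be assembled channel by channel, e.g.:
# rgb = np.stack([ghost_image(P, B_r), ghost_image(P, B_g), ghost_image(P, B_b)],
#                axis=-1)
```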
Panoramic Views of the Landing site from Sagan Memorial Station
NASA Technical Reports Server (NTRS)
1997-01-01
Each of these panoramic views is a controlled mosaic of approximately 300 IMP images covering 360 degrees of azimuth and elevations from approximately 4 degrees above the horizon to 45 degrees below it. Simultaneous adjustment of orientations of all images has been performed to minimize discontinuities between images. Mosaics have been highpass-filtered and contrast-enhanced to improve discrimination of details without distorting relative colors overall.
TOP IMAGE: Enhanced true-color image created from the 'Gallery Pan' sequence, acquired on sols 8-10 so that local solar time increases nearly continuously from about 10:00 at the right edge to about 12:00 at the left. Mosaics of images obtained by the right camera through 670 nm, 530 nm, and 440 nm filters were used as red, green and blue channels. Grid ticks indicate azimuth clockwise from north in 30 degree increments and elevation in 15 degree increments.
BOTTOM IMAGE: Anaglyphic stereo image created from the 'monster pan' sequence, acquired in four sections between about 8:30 and 15:00 local solar time on sol 3. Mosaics of images obtained through the 670 nm filter (left camera) and 530 and 440 nm filters (right camera) were used where available. At the top and bottom, left- and right-camera 670 nm images were used. Part of the northern horizon was not imaged because of the tilt of the lander. This image may be viewed stereoscopically through glasses with a red filter for the left eye and a cyan filter for the right eye.
NOTE: original caption as published in Science Magazine. Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is a division of the California Institute of Technology (Caltech).
Image indexing using color correlograms
Huang, Jing; Kumar, Shanmugasundaram Ravi; Mitra, Mandar; Zhu, Wei-Jing
2001-01-01
A color correlogram is a three-dimensional table, indexed by color and by distance between pixels, which expresses how the spatial correlation of color changes with distance in a stored image. The color correlogram may be used to distinguish an image from other images in a database. To create a color correlogram, the colors in the image are quantized into m color values, c_1, ..., c_m. Also, the distance values k ∈ [d] to be used in the correlogram are determined, where [d] is the set of distances between pixels in the image and d_max is the maximum distance between pixels in the image. Each entry (i, j, k) in the table is the probability of finding a pixel of color c_j at a selected distance k from a pixel of color c_i. A color autocorrelogram, a restricted version of the color correlogram that considers color pairs of the form (i, i) only, may also be used to identify an image.
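A compact sketch of the autocorrelogram described above, using the chessboard (L-infinity) distance as one common choice; the distance norm and the quantization step are assumptions about details this summary leaves open:

```python
import numpy as np

def autocorrelogram(q: np.ndarray, m: int, dists) -> np.ndarray:
    """Color autocorrelogram of a color-quantized image q (values 0..m-1):
    entry (i, k) ~ P(pixel at L-inf distance k has color c_i | start color c_i)."""
    h, w = q.shape
    ys, xs = np.mgrid[0:h, 0:w]
    out = np.zeros((m, len(dists)))
    for di, k in enumerate(dists):
        hits, total = np.zeros(m), np.zeros(m)
        # all offsets at chessboard distance exactly k (a square "ring")
        ring = [(dy, dx) for dy in range(-k, k + 1) for dx in range(-k, k + 1)
                if max(abs(dy), abs(dx)) == k]
        for dy, dx in ring:
            y2, x2 = ys + dy, xs + dx
            ok = (y2 >= 0) & (y2 < h) & (x2 >= 0) & (x2 < w)
            c1, c2 = q[ok], q[y2[ok], x2[ok]]
            np.add.at(total, c1, 1)             # pairs examined per start color
            np.add.at(hits, c1[c1 == c2], 1)    # same-color pairs per start color
        out[:, di] = np.divide(hits, total, out=np.zeros(m), where=total > 0)
    return out
```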
2015-07-27
[Only search-snippet fragments of this record survive: color remapping to obtain a dynamic image sequence with a natural color appearance (Hogervorst et al., 2006; Hogervorst & Toet, 2010); enhancing the visibility of low-amplitude temporal (color or location) changes in standard video sequences (Wadhwa et al., 2013); and a multiresolution filter scheme well suited to fusing multispectral imagery (Koren et al., 1995; Liu et al., 2001).]
New regularization scheme for blind color image deconvolution
NASA Astrophysics Data System (ADS)
Chen, Li; He, Yu; Yap, Kim-Hui
2011-01-01
This paper proposes a new regularization scheme to address blind color image deconvolution. Color images generally have significant correlation among the red, green, and blue channels. Conventional blind monochromatic deconvolution algorithms handle each color channel independently, thereby ignoring the interchannel correlation present in color images. In view of this, a unified regularization scheme for color images is developed to recover the edges of color images and reduce color artifacts. In addition, using color image properties, a spectral-based regularization operator is adopted to impose constraints on the blurs. Further, this paper proposes a reinforcement regularization framework that integrates a soft parametric learning term into blind color image deconvolution. A blur modeling scheme is developed to evaluate the relevance of manifold parametric blur structures, and this information is integrated into the deconvolution scheme. An optimization procedure called alternating minimization is then employed to iteratively minimize the image-domain and blur-domain cost functions. Experimental results show that the method achieves satisfactory restored color images under different blurring conditions.
Design of multi-mode compatible image acquisition system for HD area array CCD
NASA Astrophysics Data System (ADS)
Wang, Chen; Sui, Xiubao
2014-11-01
In line with the current trends toward digitization and high definition in video surveillance, a multi-mode compatible image acquisition system for HD area-array CCDs is designed. The hardware and software designs of a color video capture system for the HD area-array CCD KAI-02150 from Truesense Imaging are analyzed, and the structural parameters of the HD area-array CCD and the color video acquisition principle of the system are introduced. The CCD control sequence and the timing logic of the whole capture system are then realized. The noise in the video signal (kTC noise and 1/f noise) is filtered using the correlated double sampling (CDS) technique to enhance the signal-to-noise ratio of the system. Compatible hardware and software designs are put forward for two other image sensors of the same series, KAI-04050 and KAI-08050, which have four million and eight million effective pixels, respectively. A field-programmable gate array (FPGA) is adopted as the key controller of the system to perform a top-down modular design, which realizes the hardware design in software and improves development efficiency. Finally, the required timing drive signals are simulated accurately using the Quartus II 12.1 development platform with VHDL. The simulation results indicate that the driving circuit features a simple architecture, low power consumption, and strong anti-interference ability, meeting current demands for miniaturization and high definition.
Study on Mosaic and Uniform Color Method of Satellite Image Fusion in Large Area
NASA Astrophysics Data System (ADS)
Liu, S.; Li, H.; Wang, X.; Guo, L.; Wang, R.
2018-04-01
With improving satellite radiometric resolution, color differences among multi-temporal satellite remote sensing images, and the large volume of satellite image data, completing the mosaic and color-balancing process for satellite images is an important problem in image processing. First, using the bundle uniform-color and least-squares mosaic methods of GXL together with the dodging function, uniform transitions of color and brightness can be realized across large-area, multi-temporal satellite images. Second, Color Mapping software converts 16-bit mosaic images to 8-bit mosaic images, balancing colors against low-resolution reference images. Finally, qualitative and quantitative analytical methods are used to evaluate the satellite imagery after mosaicking and color balancing. The tests show that the correlation of mosaic images before and after coloring is higher than 95%, image information entropy increases, and texture features are enhanced, as verified by quantitative indexes such as the correlation coefficient and information entropy. Satellite image mosaicking and color processing over large areas has thus been successfully implemented.
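The two quantitative indexes named above are straightforward to compute; a minimal sketch, assuming 8-bit imagery for the bin count and value range:

```python
import numpy as np

def correlation_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between two images, e.g. a mosaic before and after
    color balancing (reported above as higher than 95%)."""
    return float(np.corrcoef(a.ravel().astype(np.float64),
                             b.ravel().astype(np.float64))[0, 1])

def information_entropy(img: np.ndarray, levels: int = 256) -> float:
    """Shannon entropy (bits) of the gray-level histogram of an 8-bit image."""
    p, _ = np.histogram(img, bins=levels, range=(0, levels), density=True)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```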
A dual-channel fusion system of visual and infrared images based on color transfer
NASA Astrophysics Data System (ADS)
Pei, Chuang; Jiang, Xiao-yu; Zhang, Peng-wei; Liang, Hao-cong
2013-09-01
The increasing availability and deployment of imaging sensors operating in multiple spectra has led to a large research effort in image fusion, resulting in a plethora of pixel-level image fusion algorithms. However, most of these algorithms produce grayscale or false-color fusion results that are not well adapted to human vision. Transferring color from a daytime reference image to obtain a natural-color fusion result is an effective way to solve this problem, but the computational cost of color transfer is high and cannot meet the requirements of real-time image processing. We developed a dual-channel infrared and visual image fusion system based on the TMS320DM642 digital signal processing chip. The system is divided into an image acquisition and registration unit, an image fusion processing unit, a system control unit, and an output unit for the fusion result. Image registration of the dual-channel images is realized by combining hardware and software methods. A false-color image fusion algorithm in RGB color space is used to obtain an R-G fused image, and the system then chooses a reference image from which to transfer color to the fusion result. A color lookup table based on the statistical properties of the images is proposed to solve the computational complexity of color transfer: the mapping between the standard lookup table and the improved color lookup table is simple and is computed only once for a fixed scene. Real-time fusion and natural colorization of infrared and visual images are realized by this system. Experimental results show that the color-transferred images have a natural color appearance to human eyes and can highlight targets effectively with clear background details. Human observers using this system will be able to interpret the image better and faster, thereby improving situational awareness and reducing target detection time.
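Statistics-based color transfer of the kind the lookup table precomputes can be sketched as per-channel mean and standard-deviation matching. Working directly in RGB is a simplification (published methods often use a decorrelated color space), and all names here are ours:

```python
import numpy as np

def stats_color_transfer(src: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Match each channel of the fused image to the mean/std of a daytime
    reference; for a fixed scene this mapping can be cached as a lookup table."""
    out = np.empty_like(src, dtype=np.float64)
    for c in range(src.shape[-1]):
        s = src[..., c].astype(np.float64)
        r = ref[..., c].astype(np.float64)
        out[..., c] = (s - s.mean()) * (r.std() / (s.std() + 1e-9)) + r.mean()
    return np.clip(out, 0, 255).astype(np.uint8)
```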
Fabrication of a New Lineage of Artificial Luciferases from Natural Luciferase Pools.
Kim, Sung Bae; Nishihara, Ryo; Citterio, Daniel; Suzuki, Koji
2017-09-11
The fabrication of artificial luciferases (ALucs) with unique optical properties has a fundamental impact on bioassays and molecular imaging. In this study, we developed a new lineage of ALucs with unique substrate preferences by extracting consensus amino acids from the alignment of 25 copepod luciferase sequences available in natural luciferase pools. The primary sequence was first created with a sequence logo generator, resulting in a total of 11 sibling sequences. Phylogenetic analysis shows that the newly fabricated ALucs form an independent branch, genetically isolated from the natural luciferases and from a prior series of ALucs produced by our laboratory using a smaller basis set. The new lineage of ALucs was strongly luminescent in living mammalian cells, with specific substrate selectivity for native coelenterazine. A single-residue-level comparison of the C-terminal sequences of the new ALucs reveals that some amino acids at the C-terminal ends greatly influence optical intensities but contribute little to color variance. The success of this approach offers guidance on how to engineer and functionalize marine luciferases for bioluminescence imaging and assays.
Comparing the white dwarf cooling sequences in 47 Tuc and NGC 6397
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richer, Harvey B.; Goldsbury, Ryan; Heyl, Jeremy
2013-12-01
Using deep Hubble Space Telescope imaging, color-magnitude diagrams are constructed for the globular clusters 47 Tuc and NGC 6397. As expected, because of its lower metal abundance, the main sequence of NGC 6397 lies well to the blue of that of 47 Tuc. A comparison of the white dwarf cooling sequences of the two clusters, however, demonstrates that these sequences are indistinguishable over most of their loci, a consequence of the settling out of heavy elements in the dense white dwarf atmosphere and the near equality of their masses. Lower-quality data on M4 continue this trend to a third cluster whose metallicity is intermediate between these two. While the path of the white dwarfs in the color-magnitude diagram is nearly identical in 47 Tuc and NGC 6397, the numbers of white dwarfs along the path are not. This results from the relatively rapid relaxation in NGC 6397 compared to 47 Tuc, and provides a cautionary note that simply counting objects in star clusters at random locations as a method of testing stellar evolutionary theory is likely dangerous unless dynamical considerations are included.
Compressive Coded-Aperture Multimodal Imaging Systems
NASA Astrophysics Data System (ADS)
Rueda-Chacon, Hoover F.
Multimodal imaging refers to the framework of capturing images that span different physical domains such as space, spectrum, depth, time, polarization, and others. For instance, spectral images are modeled as 3D cubes with two spatial coordinates and one spectral coordinate. Three-dimensional cubes spanning just the space domain are referred to as depth volumes. Imaging cubes varying in time, spectrum, or depth are referred to as 4D images. Nature itself spans different physical domains, so imaging our real world demands capturing information in at least six different domains simultaneously, giving rise to 3D-spatial+spectral+polarized dynamic sequences. Conventional imaging devices, however, can capture dynamic sequences with up to three spectral channels in real time through the use of color sensors. Capturing more spectral channels requires scanning methodologies, which demand long acquisition times. In general, to date, multimodal imaging requires a sequence of different imaging sensors, placed in tandem, to simultaneously capture the different physical properties of a scene; different fusion techniques are then employed to merge the individual information into a single image. Therefore, new ways to efficiently capture more than three spectral channels of 3D time-varying spatial information, in a single sensor or a few sensors, are of high interest. Compressive spectral imaging (CSI) is an imaging framework that seeks to optimally capture spectral imagery (tens of spectral channels of 2D spatial information) using fewer measurements than required by traditional sensing procedures, which follow Shannon-Nyquist sampling. Instead of capturing direct one-to-one representations of natural scenes, CSI systems acquire linear random projections of the scene and then solve an optimization problem to estimate the 3D spatio-spectral data cube, exploiting the theory of compressive sensing (CS). To date, the coding procedure in CSI has been realized through the use of "block-unblock" coded apertures, commonly implemented as chrome-on-quartz photomasks. These apertures block or pass the entire spectrum from the scene at given spatial locations, thus modulating the spatial characteristics of the scene. In the first part, this thesis aims to expand the framework of CSI by replacing the traditional block-unblock coded apertures with patterned optical filter arrays, referred to as "color" coded apertures. These apertures are formed by tiny pixelated optical filters, which in turn allow the input image to be modulated not only spatially but also spectrally, enabling more powerful coding strategies. The proposed colored coded apertures are either synthesized through linear combinations of low-pass, high-pass, and band-pass filters, paired with binary pattern ensembles realized by a digital micromirror device (DMD), or experimentally realized through thin-film color-patterned filter arrays. The optical forward model of the proposed CSI architectures is presented along with the design and proof-of-concept implementations, which achieve noticeable improvements in reconstruction quality compared with conventional block-unblock coded-aperture CSI architectures.
On another front, due to the rich information contained in the infrared spectrum as well as the depth domain, this thesis explores multimodal imaging by extending the range sensitivity of current CSI systems to a dual-band visible+near-infrared spectral domain. It also proposes, for the first time, a new imaging device that simultaneously captures 4D data cubes (2D spatial + 1D spectral + depth) in as few as a single snapshot. Because of this snapshot advantage, video sequences are possible, enabling the joint capture of 5D imagery. The aim is to create super-human sensing that enables the perception of our world in new and exciting ways, advancing the state of the art in compressive sensing systems to extract depth while accurately capturing spatial and spectral material properties. The applications of such a sensor are self-evident in fields such as computer and robotic vision, because they would allow an artificial intelligence to make informed decisions about not only the locations of objects within a scene but also their material properties.
Internet Color Imaging
Lee, Hsien-Che (Imaging Science and Technology Laboratory, Eastman Kodak Company, Rochester, New York)
2000-07-01
The sharing and exchange of color images over the Internet pose very challenging problems to color science and technology. Emerging color standards ... [only a DTIC compilation-notice fragment of this record survives]
Multimodal digital color imaging system for facial skin lesion analysis
NASA Astrophysics Data System (ADS)
Bae, Youngwoo; Lee, Youn-Heum; Jung, Byungjo
2008-02-01
In dermatology, various digital imaging modalities have been used as important tools to quantitatively evaluate the treatment of skin lesions. Cross-polarization color imaging has been used to evaluate skin chromophore (melanin and hemoglobin) information, and parallel-polarization imaging to evaluate skin texture information. In addition, UV-A-induced fluorescence imaging has been widely used to evaluate various skin conditions such as sebum, keratosis, sun damage, and vitiligo. To maximize the evaluation efficacy for various skin lesions, it is necessary to integrate these imaging modalities into a single imaging system. In this study, we propose a multimodal digital color imaging system that provides four different digital color images: a standard color image, parallel- and cross-polarization color images, and a UV-A-induced fluorescent color image. Herein, we describe the imaging system and present examples of image analysis. By analyzing the color information and morphological features of facial skin lesions, we are able to evaluate various skin lesions comparably and simultaneously. In conclusion, the multimodal color imaging system can be utilized as an important assistant tool in dermatology.
Pixel-based image fusion with false color mapping
NASA Astrophysics Data System (ADS)
Zhao, Wei; Mao, Shiyi
2003-06-01
In this paper, we propose a pixel-based image fusion algorithm that combines gray-level image fusion with false color mapping. The algorithm integrates two gray-level images from different sensor modalities, or acquired at different frequencies, and produces a fused false-color image with higher information content than either original; objects in the fused color image are easier to recognize. The algorithm has three steps: first, obtaining the fused gray-level image of the two originals; second, computing generalized high-boost filtered images between the fused gray-level image and each of the two source images; third, generating the fused false-color image. We use a hybrid averaging-and-selection fusion method to obtain the fused gray-level image, which provides better detail than the two originals while reducing noise. However, the fused gray-level image cannot contain all of the detail present in the two source images, and details in a gray-level image cannot be discerned as easily as in a color image, so a fused color image is necessary. In order to create color variation and enhance detail in the final fused image, we produce three generalized high-boost filtered images and display them through the red, green, and blue channels, respectively, yielding the final fused color image. The method is used to fuse two SAR images acquired over the San Francisco area (California, USA). The results show that the fused false-color image enhances the visibility of certain details, and its resolution is the same as that of the input images.
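A hedged sketch of the three-step pipeline above: the exact form of the "generalized high-boost" combination is not specified in the abstract, so the particular formula below (boosted fused image minus a local mean of each source) is one plausible reading, not the authors' method:

```python
# One plausible reading of the three-step false-color fusion; the
# high-boost formula is an assumption, not the authors' expression.
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_false_color(img_a, img_b, boost=1.5, size=5):
    # Step 1: fused gray-level image (the paper's hybrid averaging/
    # selection fusion is simplified here to a plain average).
    fused = 0.5 * (img_a + img_b)
    # Step 2: generalized high-boost images between the fused image and
    # each source image (assumed: boosted fused minus local source mean).
    hb_a = boost * fused - uniform_filter(img_a, size)
    hb_b = boost * fused - uniform_filter(img_b, size)
    hb_f = boost * fused - uniform_filter(fused, size)
    # Step 3: display the three images through the R, G, B channels.
    return np.clip(np.stack([hb_a, hb_f, hb_b], axis=-1), 0.0, 1.0)
```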
Spreadsheet macros for coloring sequence alignments.
Haygood, M G
1993-12-01
This article describes a set of Microsoft Excel macros designed to color amino acid and nucleotide sequence alignments for review and preparation of visual aids. The colored alignments can then be modified to emphasize features of interest. Procedures for importing and coloring sequences are described. The macro file adds a new menu to the menu bar containing sequence-related commands to enable users unfamiliar with Excel to use the macros more readily. The macros were designed for use with Macintosh computers but will also run with the DOS version of Excel.
Color line scan camera technology and machine vision: requirements to consider
NASA Astrophysics Data System (ADS)
Paernaenen, Pekka H. T.
1997-08-01
Color machine vision has shown a dynamic uptrend in use within the past few years, as the introduction of new camera and scanner technologies underscores. In the future, the movement from monochrome imaging to color will accelerate as machine vision users demand more knowledge about their product stream. As color comes to machine vision, certain requirements are placed on the equipment used to digitize color images. Color machine vision needs not only good color separation but also a high dynamic range and a good linear response from the camera. The importance of these features becomes even greater when the image is converted to another color space, since some information is always lost when converting integer data to another form. Traditionally, color image processing has been much slower than gray-level image processing because of the three times greater data volume per image, and the three times more memory needed. Advances in computers, memory, and processing units have made it possible to handle even large color images cost-efficiently today. In some cases, image analysis can in fact be easier and faster with a color image than with a comparable gray-level image because of the greater information content per pixel. Color machine vision sets new requirements for lighting, too: high-intensity white light is required in order to acquire good images for further processing or analysis. New developments in lighting technology are gradually bringing solutions for color imaging.
NASA Astrophysics Data System (ADS)
Gong, Rui; Wang, Qing; Shao, Xiaopeng; Zhou, Conghao
2016-12-01
This study aims to expand the application of color appearance models to representing the perceptual attributes of digital images, supplying more accurate methods for predicting image brightness and image colorfulness. Two typical models, the CIELAB model and CIECAM02, were used to develop algorithms that predict brightness and colorfulness for various images, with three methods designed to handle pixels of different color content. Moreover, massive visual data were collected from psychophysical experiments on two mobile displays under three lighting conditions, both to analyze the characteristics of visual perception of these two attributes and to test the prediction accuracy of each algorithm. Detailed analyses revealed that image brightness and image colorfulness were predicted well by calculating the CIECAM02 parameters of lightness and chroma; thus, suitable methods for dealing with different color pixels were determined for image brightness and image colorfulness, respectively. This study provides an example of extending color appearance models to describe image perception.
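As a rough companion to the lightness- and chroma-based predictors discussed above, the sketch below pools per-pixel CIELAB lightness L* and chroma C*ab into single brightness and colorfulness scores. The sRGB (D65) to Lab conversion uses the standard published formulas; the plain averaging is only one possible pooling rule (the paper compares three ways of handling pixels of different color content):

```python
# Pool per-pixel CIELAB lightness and chroma into image-level scores.
import numpy as np

def srgb_to_lab(rgb):                 # rgb in [0, 1], shape (..., 3)
    rgb = np.where(rgb <= 0.04045, rgb / 12.92,
                   ((rgb + 0.055) / 1.055) ** 2.4)
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = rgb @ M.T / np.array([0.9505, 1.0, 1.089])  # D65 white point
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16.0 / 116.0)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return L, a, b

def brightness_colorfulness(rgb):
    L, a, b = srgb_to_lab(rgb)
    return L.mean(), np.hypot(a, b).mean()   # mean L*, mean chroma C*ab
```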
Color reproduction and processing algorithm based on real-time mapping for endoscopic images.
Khan, Tareq H; Mohammed, Shahed K; Imtiaz, Mohammad S; Wahid, Khan A
2016-01-01
In this paper, we present a real-time preprocessing algorithm for enhancing endoscopic images. A novel dictionary-based color mapping algorithm is used to reproduce color information from a theme image, which is selected from a nearby anatomical location; a database of color endoscopy images for different locations is prepared for this purpose. The color map is dynamic, as its contents change with the theme image. The method is applied to low-contrast grayscale white-light images and raw narrow-band images to highlight vascular and mucosa structures and to colorize the images; it can also be applied to enhance the tone of color images. Statistical visual representation and universal image quality measures show that the proposed method highlights mucosa structure better than other methods. Color similarity has been verified using the Delta E color difference, the structural similarity index, the mean structural similarity index, and structure and hue similarity. Color enhancement was measured using a color enhancement factor that shows considerable improvement. The proposed algorithm has low, linear time complexity, resulting in higher execution speed than related works.
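The dictionary-based mapping itself is not detailed in the abstract, so the following is only a stand-in illustration of the general idea of borrowing color statistics from a theme image, using a simple Reinhard-style mean/variance transfer rather than the authors' dictionary method:

```python
# Stand-in for theme-based color reproduction: Reinhard-style statistics
# transfer, NOT the paper's dictionary-based mapping.
import numpy as np

def transfer_color_stats(src, theme):
    # src, theme: float arrays of shape (H, W, 3) in a roughly
    # decorrelated color space (e.g., Lab); per-channel mean/std match.
    normalized = (src - src.mean((0, 1))) / (src.std((0, 1)) + 1e-6)
    return normalized * theme.std((0, 1)) + theme.mean((0, 1))
```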
Color segmentation in the HSI color space using the K-means algorithm
NASA Astrophysics Data System (ADS)
Weeks, Arthur R.; Hague, G. Eric
1997-04-01
Segmentation of images is an important aspect of image recognition. While grayscale image segmentation has become quite a mature field, much less work has been done on color image segmentation. Until recently, this was predominantly due to the lack of the computing power and color display hardware required to manipulate true-color (24-bit) images. Today, it is not uncommon to find a standard desktop computer with a true-color 24-bit display, at least 8 megabytes of memory, and 2 gigabytes of hard disk storage. Segmentation of color images is not as simple as segmenting each of the three RGB color components separately: the difficulty with the RGB color space is that it does not closely model the psychological understanding of color. A better color model, which closely follows human visual perception, is the hue, saturation, intensity (HSI) model. This model separates the color components into chromatic and achromatic information. Strickland et al. showed the importance of color in the extraction of edge features from an image; their method enhances the edges detectable in the luminance image with information from the saturation image. Segmentation of both the saturation and intensity components is easily accomplished with any grayscale segmentation algorithm, since these spaces are linear. The modulo-2π nature of the hue component, however, makes its segmentation difficult: for example, hues of 0 and 2π yield the same color tint. Instead of applying separate segmentation to each of the hue, saturation, and intensity components, a better approach is to segment the chromatic component separately from the intensity component, given the importance of chromatic information in the segmentation of color images. This paper presents a method of using the grayscale K-means algorithm to segment 24-bit color images and shows the importance of the hue component in that segmentation.
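The hue-wraparound problem described above has a standard workaround: embed hue on the unit circle so that 0 and 2π coincide before clustering. The sketch below uses scikit-learn's KMeans for brevity, whereas the paper applies a grayscale K-means formulation; treat it as an illustration of the idea only:

```python
# Handle the modulo-2*pi hue by clustering on the unit circle, where
# hue 0 and hue 2*pi map to the same point.
import numpy as np
from sklearn.cluster import KMeans

def segment_hue(hue, k=4):
    # hue: array of angles in radians, any shape
    pts = np.column_stack([np.cos(hue.ravel()), np.sin(hue.ravel())])
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(pts)
    return labels.reshape(hue.shape)
```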
Gray-world-assumption-based illuminant color estimation using color gamuts with high and low chroma
NASA Astrophysics Data System (ADS)
Kawamura, Harumi; Yonemura, Shunichi; Ohya, Jun; Kojima, Akira
2013-02-01
A new approach is proposed for estimating illuminant colors from color images under an unknown scene illuminant. The approach combines a gray-world-assumption-based illuminant color estimation method with a method using color gamuts. The former method, which we previously proposed, improves on the original gray-world method, which hypothesizes that the average of all object colors in a scene is achromatic. Since the original method estimates scene illuminant colors by averaging all image pixel values, its estimates are incorrect when certain image colors are dominant. Our previous method improves on it by choosing several colors on the basis of an opponent-color property, namely that the average of opponent colors is achromatic, instead of using all colors. However, it cannot estimate illuminant colors when there are only a few image colors or when the image colors are unevenly distributed in local areas of the color space. The approach proposed in this paper combines our previous method with one using high-chroma and low-chroma gamuts, which makes it possible to find colors that satisfy the gray world assumption. High-chroma gamuts are used to add appropriate colors to the original image, and low-chroma gamuts are used to narrow down the illuminant color possibilities. Experimental results on actual images show that even if the image colors are localized in a certain area of the color space, the illuminant colors are accurately estimated, with a smaller average estimation error than the conventional method.
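For reference, the gray-world baseline that this work refines can be stated in a few lines. The sketch below reads the illuminant direction off the channel means and applies a von Kries-style correction; it does not attempt the paper's opponent-color selection or chroma-gamut steps:

```python
# Gray-world baseline: the average scene color is assumed achromatic,
# so the channel means point along the illuminant color.
import numpy as np

def gray_world_illuminant(rgb):           # rgb: float array (H, W, 3)
    means = rgb.reshape(-1, 3).mean(axis=0)
    return means / np.linalg.norm(means)  # unit illuminant direction

def gray_world_correct(rgb):
    ill = gray_world_illuminant(rgb)
    # Von Kries-style per-channel scaling; a neutral illuminant
    # (1,1,1)/sqrt(3) leaves the image unchanged.
    return np.clip(rgb / (ill * np.sqrt(3.0)), 0.0, 1.0)
```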
Influence of imaging resolution on color fidelity in digital archiving.
Zhang, Pengchang; Toque, Jay Arre; Ide-Ektessabi, Ari
2015-11-01
Color fidelity is of paramount importance in digital archiving. In this paper, the relationship between color fidelity and imaging resolution was explored by calculating the color difference of an IT8.7/2 color chart with a CIELAB color difference formula for scanning and simulation images. Microscopic spatial sampling was used in selecting the image pixels for the calculations to highlight the loss of color information. A ratio, called the relative imaging definition (RID), was defined to express the correlation between image resolution and color fidelity. The results show that in order for color differences to remain unrecognizable, the imaging resolution should be at least 10 times higher than the physical dimension of the smallest feature in the object being studied.
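The color-difference computation underlying this analysis is the CIELAB Delta-E; a minimal version is shown below. The thresholding comment reflects the common ~2.3 just-noticeable-difference rule of thumb for Delta-E 1976, not a value taken from this paper:

```python
# CIELAB Delta-E (1976): Euclidean distance between Lab triples.
import numpy as np

def delta_e76(lab_ref, lab_test):
    return np.linalg.norm(np.asarray(lab_ref) - np.asarray(lab_test),
                          axis=-1)

# Hypothetical use: average delta_e76 over the IT8.7/2 patches at each
# tested resolution, then keep resolutions whose mean stays below the
# ~2.3 just-noticeable-difference rule of thumb.
```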
NASA Astrophysics Data System (ADS)
Fan, Yang-Tung; Peng, Chiou-Shian; Chu, Cheng-Yu
2000-12-01
New markets are emerging for digital electronic image devices, especially in visual communications, PC cameras, mobile/cell phones, security systems, toys, vehicle imaging systems, and computer peripherals for document capture. Enabling a one-chip imaging system, in which the image sensor has a fully digital interface, can bring image-capture devices into our daily lives. Adding a color filter to such an image sensor, in a pattern of pixel mosaics or wide stripes, makes images more realistic and colorful; one could say that color filters make life more colorful. What is a color filter? A color filter transmits only image light within a specific wavelength band, with a transmittance determined by the filter itself, and blocks the rest. The color filter process consists of coating and patterning green, red, and blue (or cyan, magenta, and yellow) mosaic resists onto matched pixels in the image-sensing array; from the signal captured at each pixel, the scene image can then be reconstructed. The wide use of digital electronic cameras and multimedia applications today makes the prospects of color filters bright. Although it poses challenges, developing the color filter process is very worthwhile. We provide the best service in terms of shorter cycle time, excellent color quality, and high, stable yield. The key issues of an advanced color process that must be solved and implemented are planarization and micro-lens technology. Many key points of color filter process technology that must be considered are also described in this paper.
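For context on how the per-pixel filtered signals mentioned above are recombined into a full-color picture, here is a minimal bilinear demosaicing sketch for an assumed RGGB Bayer mosaic; the paper itself concerns filter fabrication, so this reconstruction step is standard background rather than the authors' process:

```python
# Standard bilinear demosaicing for an assumed RGGB Bayer mosaic;
# background for how mosaic-filtered pixel signals become full color.
import numpy as np
from scipy.ndimage import convolve

def demosaic_rggb(raw):                   # raw: float array (H, W)
    r = np.zeros_like(raw); g = np.zeros_like(raw); b = np.zeros_like(raw)
    r[0::2, 0::2] = raw[0::2, 0::2]       # red on even rows/cols
    g[0::2, 1::2] = raw[0::2, 1::2]       # green, two sites per 2x2 block
    g[1::2, 0::2] = raw[1::2, 0::2]
    b[1::2, 1::2] = raw[1::2, 1::2]       # blue on odd rows/cols
    kg = np.array([[0, .25, 0], [.25, 1, .25], [0, .25, 0]])
    krb = np.array([[.25, .5, .25], [.5, 1, .5], [.25, .5, .25]])
    return np.dstack([convolve(r, krb), convolve(g, kg), convolve(b, krb)])
```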
False-Color-Image Map of Quadrangle 3362, Shin-Dand (415) and Tulak (416) Quadrangles, Afghanistan
Davis, Philip A.; Turner, Kenzie J.
2007-01-01
This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
False-Color-Image Map of Quadrangle 3670, Jarm-Keshem (223) and Zebak (224) Quadrangles, Afghanistan
Davis, Philip A.; Turner, Kenzie J.
2007-01-01
This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
False-Color-Image Map of Quadrangle 3166, Jaldak (701) and Maruf-Nawa (702) Quadrangles, Afghanistan
Davis, Philip A.; Turner, Kenzie J.
2007-01-01
This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
False-Color-Image Map of Quadrangle 3366, Gizab (513) and Nawer (514) Quadrangles, Afghanistan
Davis, Philip A.; Turner, Kenzie J.
2007-01-01
This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
False-Color-Image Map of Quadrangle 3364, Pasa-Band (417) and Kejran (418) Quadrangles, Afghanistan
Davis, Philip A.; Turner, Kenzie J.
2007-01-01
This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
Davis, Philip A.; Turner, Kenzie J.
2007-01-01
This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
Davis, Philip A.; Turner, Kenzie J.
2007-01-01
This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
Davis, Philip A.; Turner, Kenzie J.
2007-01-01
This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
Davis, Philip A.; Turner, Kenzie J.
2007-01-01
This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
Davis, Philip A.; Turner, Kenzie J.
2007-01-01
This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
Davis, Philip A.; Turner, Kenzie J.
2007-01-01
This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
Davis, Philip A.; Turner, Kenzie J.
2007-01-01
This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
Davis, Philip A.; Turner, Kenzie J.
2007-01-01
This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
Davis, Philip A.; Turner, Kenzie J.
2007-01-01
This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
False-Color-Image Map of Quadrangle 3464, Shahrak (411) and Kasi (412) Quadrangles, Afghanistan
Davis, Philip A.; Turner, Kenzie J.
2007-01-01
This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
False-Color-Image Map of Quadrangle 3266, Ourzgan (519) and Moqur (520) Quadrangles, Afghanistan
Davis, Philip A.; Turner, Kenzie J.
2007-01-01
This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
Image subregion querying using color correlograms
Huang, Jing; Kumar, Shanmugasundaram Ravi; Mitra, Mandar; Zhu, Wei-Jing
2002-01-01
A color correlogram is a representation expressing the spatial correlation of color and distance between pixels in a stored image. It may be used to distinguish objects within an image as well as to distinguish between images in a collection. By intersecting the color correlogram of an image object with the correlograms of the images to be searched, the images that contain the object are identified from the intersection correlogram.
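The construction can be made concrete in code. The sketch below computes the simplified auto-correlogram (the probability that a pixel at distance d from a pixel of color c also has color c) for a quantized RGB image; it samples only four axis-aligned offsets per distance as a cheap approximation, so it illustrates the general idea rather than the patent's exact procedure.

```python
import numpy as np

def autocorrelogram(img, n_colors=8, distances=(1, 3, 5, 7)):
    """Banded color autocorrelogram: for each quantized color c and
    distance d, the fraction of pixel pairs at that offset that share
    color c (a common simplification of the full correlogram)."""
    # Quantize each RGB channel, then combine into one color index.
    q = (img // (256 // n_colors)).astype(np.int32)
    idx = q[..., 0] * n_colors**2 + q[..., 1] * n_colors + q[..., 2]
    h, w = idx.shape
    K = n_colors**3
    corr = np.zeros((K, len(distances)))
    for di, d in enumerate(distances):
        hits = np.zeros(K)
        total = np.zeros(K)
        # Four axis-aligned offsets approximate the ring at distance d.
        for dy, dx in ((d, 0), (-d, 0), (0, d), (0, -d)):
            src = idx[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
            dst = idx[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
            np.add.at(total, src.ravel(), 1)
            np.add.at(hits, src.ravel(), (src == dst).ravel())
        corr[:, di] = hits / np.maximum(total, 1)
    return corr
```

Two correlograms can then be compared (or, as in the patent, intersected element-wise) to test whether an object's color structure occurs in a candidate image.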
Multiple Auto-Adapting Color Balancing for Large Number of Images
NASA Astrophysics Data System (ADS)
Zhou, X.
2015-04-01
This paper presents a powerful technology for color balancing between images. It works not only for small numbers of images but also for arbitrarily large collections. Multiple adaptive methods are used. To obtain a color-seamless mosaic dataset, local color is adjusted adaptively toward a target color. Local statistics of the source images are computed over a so-called adaptive dodging window, and adaptive target colors are computed statistically according to multiple target models. A gamma function is derived from the adaptive target and the adaptive source local statistics and applied to the source images to obtain the color-balanced output images. Five target color surface models are proposed: color point (single color), color grid, and first-, second-, and third-order 2D polynomials; least-squares fitting is used to obtain the polynomial target color surfaces. Target color surfaces are computed automatically, based either on all source images or on an external target image. Special objects such as water and snow are filtered out by a percentage cut or a given mask. The performance is fast enough to support on-the-fly color balancing of very large image sets (possibly hundreds of thousands of images). The detailed algorithm and formulae are described, and rich examples, including large mosaic datasets (one containing 36,006 images), are given. The results show that this technology can be used on various imagery to obtain color-seamless mosaics; the algorithm has been used successfully in Esri ArcGIS.
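The gamma step lends itself to a compact illustration. The sketch below makes two simplifying assumptions not in the abstract: the adaptive dodging window is the whole image, and the target model is a single color point; the function and parameter names are illustrative, not the paper's.

```python
import numpy as np

def gamma_balance(src, target_mean):
    """Per-channel gamma that maps the source mean brightness to the
    target mean: solve (m_src)**g = m_tgt in normalized [0,1] units."""
    out = np.empty(src.shape, dtype=np.float64)
    for c in range(src.shape[-1]):
        m_src = np.clip(src[..., c].mean() / 255.0, 1e-3, 1 - 1e-3)
        m_tgt = np.clip(target_mean[c] / 255.0, 1e-3, 1 - 1e-3)
        g = np.log(m_tgt) / np.log(m_src)   # derived gamma exponent
        out[..., c] = 255.0 * (src[..., c] / 255.0) ** g
    return out.clip(0, 255).astype(np.uint8)
```

A gamma curve (rather than a linear gain) keeps black at black and white at white while shifting the midtones, which is why it suits dodging-style local adjustment.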
Color standardization in whole slide imaging using a color calibration slide
Bautista, Pinky A.; Hashimoto, Noriaki; Yagi, Yukako
2014-01-01
Background: Color consistency in histology images is still an issue in digital pathology; different imaging systems reproduce the colors of a histological slide differently. Materials and Methods: Color correction was implemented using the color information of the nine color patches of a color calibration slide. The inherent spectral colors of these patches, along with their scanned colors, were used to derive a color correction matrix whose coefficients convert the pixels' colors to their target colors. Results: There was a significant reduction, by 3.42 units, in the CIELAB color difference between images of the same H&E histological slide produced by two different whole-slide scanners (P < 0.001 at the 95% confidence level). Conclusion: Color variations in histological images brought about by whole-slide scanning can be effectively normalized with the use of the color calibration slide. PMID:24672739
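Deriving such a matrix is a small least-squares problem. A minimal sketch, assuming the scanned and target patch colors are available as (N, 3) arrays (N = 9 for the slide); the affine 3x4 form and the function names are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def color_correction_matrix(scanned, target):
    """Fit an affine color correction (3x4: 3x3 matrix plus bias) from
    scanned patch colors to their known target colors by least squares."""
    A = np.hstack([scanned, np.ones((scanned.shape[0], 1))])  # (N, 4)
    M, *_ = np.linalg.lstsq(A, target, rcond=None)            # (4, 3)
    return M

def apply_correction(img, M):
    """Apply the fitted correction to every pixel of an RGB image."""
    flat = img.reshape(-1, 3).astype(np.float64)
    flat = np.hstack([flat, np.ones((flat.shape[0], 1))]) @ M
    return flat.clip(0, 255).reshape(img.shape).astype(np.uint8)
```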
Automatic color preference correction for color reproduction
NASA Astrophysics Data System (ADS)
Tsukada, Masato; Funayama, Chisato; Tajima, Johji
2000-12-01
The reproduction of natural objects in color images has attracted a great deal of attention. Reproducing more pleasing colors of natural objects is one way to improve image quality. We developed an automatic color correction method that maintains preferred color reproduction for three significant categories: facial skin color, green grass, and blue sky. In this method, a representative color in an object area to be corrected is automatically extracted from the input image, and a set of color correction parameters is selected depending on that representative color. In subjective experiments, image quality improved for more than 93 percent of the reproductions of natural images. These results show the usefulness of our automatic color correction method for the reproduction of preferred colors.
Analysis of ERTS imagery using special electronic viewing/measuring equipment
NASA Technical Reports Server (NTRS)
Evans, W. E.; Serebreny, S. M.
1973-01-01
An electronic satellite image analysis console (ESIAC) is being employed to process imagery for use by USGS investigators in several different disciplines studying dynamic hydrologic conditions. The ESIAC provides facilities for storing registered image sequences in a magnetic video disc memory for subsequent recall, enhancement, and animated display in monochrome or color. Quantitative measurements of distances, areas, and brightness profiles can be extracted digitally under operator supervision. Initial results are presented for the display and measurement of snowfield extent, glacier development, sediment plumes from estuary discharge, playa inventory, phreatophyte and other vegetative changes.
NASA Astrophysics Data System (ADS)
Akiyama, Akira; Mutoh, Eiichiro; Kumagai, Hideo
2014-09-01
We have developed stereo matching image processing based on synthesized colors, and on the corresponding areas sharing the same synthesized color, for ranging objects and for image recognition. The two images from a pair of stereo imagers may disagree with each other owing to size change, displacement, appearance change, and deformation of characteristic areas. We construct the synthesized colors, and the corresponding areas with the same synthesized color, in three steps to make the stereo matching distinct. The first step makes a binary edge image by differentiating the focused image from each imager and verifying that the differentiated image has a normal frequency distribution, which yields the threshold level for binarization; we used the Daubechies wavelet transform for the differentiation in this study. The second step derives the synthesized color by averaging color brightness between binary edge points, alternately in the horizontal and vertical directions; the averaging is repeated until the fluctuation of the averaged color becomes negligible with respect to the 256 brightness levels. The third step extracts areas of the same synthesized color by collecting the pixels of that color and grouping them by 4-connectivity. The matching areas for stereo matching are determined from the synthesized color areas, with the matching point taken as the center of gravity of each area, so the parallax between a pair of images is derived easily from these centers of gravity. An experiment on a toy soccer ball showed that stereo matching by the synthesized color technique is simple and effective.
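The third step and the parallax computation can be sketched with standard connected-component tools. This is an illustration of the grouping-and-centroid idea only; the synthesized-color construction itself (steps one and two) is omitted, and the names are hypothetical.

```python
import numpy as np
from scipy.ndimage import label, center_of_mass

def region_centroids(synth_color_img, color_value):
    """Centers of gravity of the 4-connected areas sharing one
    synthesized color; scipy's default labeling structure is the
    4-connected cross, matching the paper's connectivity."""
    mask = synth_color_img == color_value
    labels, n = label(mask)
    return center_of_mass(mask, labels, range(1, n + 1))

# For a matched pair of areas, the parallax is simply the difference of
# the centroid x-coordinates between the left and right images:
#   disparity = x_left - x_right
# from which range follows via the usual baseline * focal / disparity.
```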
1998-06-08
A color image of the Tyrrhena Patera Region of Mars; north toward top. The scene shows a central circular depression surrounded by circular fractures and highly dissected horizontal sheets. A patera (Latin for shallow dish or saucer) is a volcano of broad areal extent with little vertical relief. This image is a composite of Viking medium-resolution images in black and white and low-resolution images in color. The image extends from latitude 17 degrees S. to 25 degrees S. and from longitude 250 degrees to 260 degrees; Mercator projection. Tyrrhena Patera has a 12-km-diameter caldera at its center surrounded by a 45-km-diameter fracture ring. Around the fracture ring, the terrain is highly eroded forming ragged outward-facing cliffs, as though successive flat-lying layers had been eroded back. Cut into the sequence are several flat-floored channels that extend outward as far as 200 km from the center of the volcano. The structure may be composed of highly erodible ash layers and the channels may be fluvial, with the release of water being triggered by volcanic activity (Carr, 1981, The surface of Mars, Yale Univ. Press, New Haven, 232 p.). http://photojournal.jpl.nasa.gov/catalog/PIA00421
Computational efficiency improvements for image colorization
NASA Astrophysics Data System (ADS)
Yu, Chao; Sharma, Gaurav; Aly, Hussein
2013-03-01
We propose an efficient algorithm for colorization of greyscale images. As in prior work, colorization is posed as an optimization problem: a user specifies the color for a few scribbles drawn on the greyscale image and the color image is obtained by propagating color information from the scribbles to surrounding regions, while maximizing the local smoothness of colors. In this formulation, colorization is obtained by solving a large sparse linear system, which normally requires substantial computation and memory resources. Our algorithm improves the computational performance through three innovations over prior colorization implementations. First, the linear system is solved iteratively without explicitly constructing the sparse matrix, which significantly reduces the required memory. Second, we formulate each iteration in terms of integral images obtained by dynamic programming, reducing repetitive computation. Third, we use a coarse-to-fine framework, where a lower resolution subsampled image is first colorized and this low resolution color image is upsampled to initialize the colorization process for the fine level. The improvements we develop provide significant speedup and memory savings compared to the conventional approach of solving the linear system directly using off-the-shelf sparse solvers, and allow us to colorize images with typical sizes encountered in realistic applications on typical commodity computing platforms.
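The matrix-free idea can be seen in a few lines. The sketch below is a simplified Jacobi-style stand-in, not the authors' integral-image formulation: chrominance is repeatedly averaged from luminance-similar neighbors while scribbled pixels stay fixed, which iterates the colorization linear system without ever building a sparse matrix.

```python
import numpy as np

def colorize(gray, scribble_uv, mask, n_iter=500):
    """Matrix-free scribble colorization sketch. gray: (H,W) in [0,1];
    scribble_uv: (H,W,2) chrominance, valid where mask is True."""
    uv = scribble_uv.copy()

    def w(a, b):
        # Affinity falls off with luminance difference, so color stops
        # propagating across strong edges (sigma^2 = 0.01 is arbitrary).
        return np.exp(-((a - b) ** 2) / (2 * 0.01))

    for _ in range(n_iter):
        num = np.zeros_like(uv)
        den = np.zeros(gray.shape)
        for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
            g = np.roll(gray, shift, axis=axis)
            c = np.roll(uv, shift, axis=axis)
            wt = w(gray, g)
            num += wt[..., None] * c
            den += wt
        new = num / den[..., None]
        uv = np.where(mask[..., None], scribble_uv, new)  # pin scribbles
    return uv
```

Running this first on a subsampled image and upsampling the result to initialize the full-resolution pass reproduces the coarse-to-fine strategy described above.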
Development of a novel 2D color map for interactive segmentation of histological images.
Chaudry, Qaiser; Sharma, Yachna; Raza, Syed H; Wang, May D
2012-05-01
We present a color segmentation approach based on a two-dimensional color map derived from the input image. Pathologists stain tissue biopsies with various colored dyes to see the expression of biomarkers. In these images, because of color variation due to inconsistencies in experimental procedures and lighting conditions, the segmentation used to analyze biological features is usually ad hoc. Many algorithms, such as K-means, use a single metric to segment the image into different color classes and rarely give users fine color control. Our interactive segmentation technique, based on a 2D color map built from human color perception information and the color distribution of the input image, enables user control without noticeable delay. The methodology works for different staining types and for images of different cancer tissues. Its results show good accuracy with low response and computational times, making it a feasible method for user-interactive applications involving segmentation of histological images.
Image Transform Based on the Distribution of Representative Colors for Color Deficient
NASA Astrophysics Data System (ADS)
Ohata, Fukashi; Kudo, Hiroaki; Matsumoto, Tetsuya; Takeuchi, Yoshinori; Ohnishi, Noboru
This paper proposes a method to convert digital images containing sets of colors that are difficult to distinguish into images with high visibility. We set up four criteria: the conversion must be processed automatically by a computer; it must retain continuity in color space; it must not reduce visibility for people with normal color vision; and it must not reduce the visibility of images that do not originally contain hard-to-distinguish color sets. We conducted a psychological experiment and found that visibility was improved in 60% of cases over 40 converted images, and we confirmed that the main criterion, continuity in color space, was maintained.
Color filter array pattern identification using variance of color difference image
NASA Astrophysics Data System (ADS)
Shin, Hyun Jun; Jeon, Jong Ju; Eom, Il Kyu
2017-07-01
A color filter array is placed on the image sensor of a digital camera to acquire color images. Each pixel uses only one color, since the image sensor can measure only one color per pixel; the empty pixels are therefore filled by an interpolation process called demosaicing. The original and the interpolated pixels have different statistical characteristics, and if the image is modified by manipulation or forgery, the color filter array pattern is altered. This pattern change can be a clue for image forgery detection; most forgery detection algorithms, however, have the disadvantage of assuming a known color filter array pattern. We present a method for identifying the color filter array pattern. First, the local mean is eliminated to remove the background effect. Then a color difference block is constructed to emphasize the difference between original and interpolated pixels. A variance measure of the color difference image is proposed as a means of estimating the color filter array configuration. The experimental results show that the proposed method is effective in identifying the color filter array pattern and provides superior performance compared with conventional methods.
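The underlying intuition can be demonstrated with a toy estimator (not the authors' exact method): after removing a local mean, sensed pixels keep more residual variance than interpolated ones, so each candidate Bayer layout can be scored by the variance at its claimed sample positions.

```python
import numpy as np
from scipy.ndimage import convolve

PATTERNS = {  # 2x2 layouts: channel sensed at (row % 2, col % 2)
    "RGGB": {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 2},
    "GRBG": {(0, 0): 1, (0, 1): 0, (1, 0): 2, (1, 1): 1},
    "GBRG": {(0, 0): 1, (0, 1): 2, (1, 0): 0, (1, 1): 1},
    "BGGR": {(0, 0): 2, (0, 1): 1, (1, 0): 1, (1, 1): 0},
}

def identify_cfa(img):
    """Toy CFA-pattern identifier: score each candidate layout by the
    high-pass residual variance at its claimed sensed positions."""
    img = img.astype(np.float64)
    k = np.ones((3, 3)) / 9.0  # 3x3 local mean removal
    resid = np.stack(
        [img[..., c] - convolve(img[..., c], k) for c in range(3)], axis=-1)
    scores = {}
    for name, layout in PATTERNS.items():
        scores[name] = sum(resid[pr::2, pc::2, ch].var()
                           for (pr, pc), ch in layout.items())
    return max(scores, key=scores.get), scores
```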
Qualitative evaluations and comparisons of six night-vision colorization methods
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Reese, Kristopher; Blasch, Erik; McManamon, Paul
2013-05-01
Current multispectral night vision (NV) colorization techniques can manipulate images to produce colorized images that closely resemble natural scenes. The colorized NV images can enhance human perception by improving observer object classification and reaction times, especially in low-light conditions. This paper focuses on the qualitative (subjective) evaluation and comparison of six NV colorization methods. The multispectral images include visible (red-green-blue), near-infrared (NIR), and long-wave infrared (LWIR) images. The six colorization methods are channel-based color fusion (CBCF), statistic matching (SM), histogram matching (HM), joint-histogram matching (JHM), statistic matching then joint-histogram matching (SM-JHM), and the lookup table (LUT). Four categories of quality measurements are used for the qualitative evaluations: contrast, detail, colorfulness, and overall quality. Each measurement is rated on a 1-to-3 scale representing low, average, and high quality, respectively. Specifically, high contrast (rated 3) means an adequate level of brightness and contrast; high detail represents high clarity of detailed content with low artifacts; high colorfulness preserves more natural colors (i.e., closely resembles the daylight image); and overall quality is determined by comparing the NV image to the reference image. Nine sets of multispectral NV images were used in our experiments. For each set, the six colorized NV images (produced from NIR and LWIR images) were presented concurrently to users along with the reference color (RGB) image taken in daytime. A total of 67 subjects passed a screening test (the Ishihara color blindness test) and were asked to evaluate the nine sets of colorized images. The experimental results showed the quality order of the colorization methods, from best to worst: CBCF, SM, SM-JHM, LUT, JHM, HM. It is anticipated that this work will provide a benchmark for NV colorization and for quantitative evaluation using an objective metric such as the objective evaluation index (OEI).
White-Light Optical Information Processing and Holography.
1983-05-03
Keywords: white-light processing, white-light holography, image subtraction, image deblurring, coherence requirement, apparent transfer function, source encoding, signal processing. Work in this period also demonstrated several color image processing capabilities, among them broadband color image deblurring, color image subtraction, and rainbow holographic aberration analysis.
Distance preservation in color image transforms
NASA Astrophysics Data System (ADS)
Santini, Simone
1999-12-01
Most current image processing systems work on color images, and color is a precious perceptual cue for determining image similarity. Working with color images, however, is not the same thing as working with images taking values in a 3D Euclidean space. Not only are color spaces bounded, but the characteristics of the observer endow the space with a 'perceptual' metric that in general does not correspond to the metric naturally inherited from R^3. This paper studies the problem of filtering color images abstractly. It begins by determining the properties that the color sum and color product operations must satisfy for the desirable properties of orthonormal bases to be preserved. The paper then defines a general scheme, based on the action of the additive group on the color space, by which operations that satisfy the required properties can be defined.
NASA Astrophysics Data System (ADS)
Yan, Dan; Bai, Lianfa; Zhang, Yi; Han, Jing
2018-02-01
To address the missing details and the performance limitations of colorization based on sparse representation, we propose a conceptual model framework for colorizing gray-scale images, and on top of it a multi-sparse-dictionary colorization algorithm based on feature classification and detail enhancement (CEMDC). The algorithm achieves a natural colorized effect for a gray-scale image, consistent with human vision. First, the algorithm establishes a multi-sparse-dictionary classification colorization model. Then, to improve the accuracy of the classification, a corresponding local constraint algorithm is proposed. Finally, we propose a detail enhancement based on the Laplacian pyramid, which is effective in restoring missing details and in improving the speed of image colorization. In addition, the algorithm not only colorizes visible gray-scale images but can also be applied in other areas, such as color transfer between color images and the colorization of gray-scale fusion images and infrared images.
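The Laplacian-pyramid detail step is generic enough to sketch. The code below builds a pyramid for a single-channel image and boosts the band-pass levels on reconstruction; it illustrates the standard detail-enhancement mechanism, not the full CEMDC pipeline, and the gain value is an arbitrary assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def laplacian_pyramid(img, levels=4):
    """Decompose a 2D image into band-pass detail levels plus a coarse
    residual (each level is current minus an upsampled blur)."""
    pyr, cur = [], img.astype(np.float64)
    for _ in range(levels - 1):
        down = gaussian_filter(cur, 1.0)[::2, ::2]
        up = zoom(down, 2, order=1)[:cur.shape[0], :cur.shape[1]]
        pyr.append(cur - up)   # band-pass detail
        cur = down
    pyr.append(cur)            # coarse residual
    return pyr

def enhance(img, gain=1.5, levels=4):
    """Reconstruct with amplified detail levels to restore fine detail."""
    pyr = laplacian_pyramid(img, levels)
    out = pyr[-1]
    for detail in reversed(pyr[:-1]):
        out = zoom(out, 2, order=1)[:detail.shape[0], :detail.shape[1]] \
              + gain * detail
    return out
```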
Kruse, Fred A.
1984-01-01
Green areas on Landsat 4/5 - 4/6 - 6/7 (red - blue - green) color-ratio-composite (CRC) images represent limonite on the ground. Color variation on such images was analyzed to determine the causes of the color differences within and between the green areas. Digital transformation of the CRC data into modified cylindrical Munsell color coordinates (hue, value, and saturation) was used to correlate image color characteristics with properties of surficial materials. The amount of limonite visible to the sensor is the primary cause of color differences in green areas on the CRCs; vegetation density is a secondary cause. Digital color analysis of Landsat CRC images can be used to map unknown areas: the color variation of green pixels allows discrimination among limonitic bedrock, nonlimonitic bedrock, nonlimonitic alluvium, and limonitic alluvium.
Roca, Alberto I
2014-01-01
The 2013 BioVis Contest provided an opportunity to evaluate different paradigms for visualizing protein multiple sequence alignments. Such data sets are becoming extremely large and thus taxing current visualization paradigms. Sequence Logos represent consensus sequences but have limitations for protein alignments. As an alternative, ProfileGrids are a new protein sequence alignment visualization paradigm that represents an alignment as a color-coded matrix of the residue frequency occurring at every homologous position in the aligned protein family. The JProfileGrid software program was used to analyze the BioVis contest data sets to generate figures for comparison with the Sequence Logo reference images. The ProfileGrid representation allows for the clear and effective analysis of protein multiple sequence alignments. This includes both a general overview of the conservation and diversity sequence patterns as well as the interactive ability to query the details of the protein residue distributions in the alignment. The JProfileGrid software is free and available from http://www.ProfileGrid.org.
A CRISPR/molecular beacon hybrid system for live-cell genomic imaging.
Wu, Xiaotian; Mao, Shiqi; Yang, Yantao; Rushdi, Muaz N; Krueger, Christopher J; Chen, Antony K
2018-04-30
The clustered regularly interspaced short palindromic repeat (CRISPR) gene-editing system has been repurposed for live-cell genomic imaging, but existing approaches rely on fluorescent protein reporters, making sensitive and continuous imaging difficult. Here, we present a fluorophore-based live-cell genomic imaging system that consists of a nuclease-deactivated mutant of the Cas9 protein (dCas9), a molecular beacon (MB), and an engineered single-guide RNA (sgRNA) harboring a unique MB target sequence (sgRNA-MTS), termed CRISPR/MB. Specifically, dCas9 and sgRNA-MTS are first co-expressed to target a specific locus in cells, followed by delivery of MBs that can then hybridize to MTS to illuminate the target locus. We demonstrated the feasibility of this approach for quantifying genomic loci, for monitoring chromatin dynamics, and for dual-color imaging when using two orthogonal MB/MTS pairs. With flexibility in selecting different combinations of fluorophore/quencher pairs and MB/MTS sequences, our CRISPR/MB hybrid system could be a promising platform for investigating chromatin activities.
e-phenology: monitoring leaf phenology and tracking climate changes in the tropics
NASA Astrophysics Data System (ADS)
Morellato, Patrícia; Alberton, Bruna; Almeida, Jurandy; Alex, Jefersson; Mariano, Greice; Torres, Ricardo
2014-05-01
e-phenology is a multidisciplinary project combining research in computer science and phenology. Its goal is to attack theoretical and practical problems in the use of new technologies for remote phenological observation, aiming to detect local environmental changes. It has three objectives: (a) the use of new environmental monitoring technologies based on remote phenology monitoring systems; (b) the creation of a protocol for a Brazilian long-term phenology monitoring program and for integration across disciplines, advancing our knowledge of seasonal responses to climate change within the tropics; and (c) the provision of models, methods, and algorithms to support the management, integration, and analysis of data from remote phenology systems. The research team is composed of computer scientists and phenology researchers. Our first results include the following. Phenology towers: we set up the first phenology tower in our core cerrado-savanna study site (site 1) at Itirapina, São Paulo, Brazil; the tower received a complete climatic station and a digital camera set up to take a daily sequence of images (five images per hour, from 6:00 to 18:00 h). We set up similar phenology towers with climatic stations and cameras at five more sites: cerrado-savanna (site 2, Pé de Gigante, SP), cerrado grassland (site 3, Itirapina, SP), rupestrian fields (site 4, Serra do Cipó, MG), seasonal forest (site 5, Angatuba, SP), and Atlantic rainforest (site 6, Santa Virgínia, SP). Phenology database: we finished modeling and validating a phenology database that stores ground phenology and near-remote phenology, and we are carrying out the implementation with data ingestion. Remote phenology and image processing: we performed the first analyses of the phenology of cerrado sites 1 to 4 derived from digital images. The analyses were conducted by extracting color information (red, green, and blue channels, with emphasis on the green channel) from selected parts of each image, named regions of interest (ROIs), over the daily image sequences (6:00 to 18:00 h). Our results are innovative and indicate great variation in the color-change response of tropical trees. We validated the camera phenology against our on-the-ground direct observations at the core cerrado site 1. We are developing image-processing software to process the digital images automatically and to generate time series for further analyses. New techniques and image features, such as machine learning and visual rhythms, have been used to extract seasonal features from the data and for data processing: machine learning was successfully applied to identify similar species within an image, and visual rhythms show promise as a new analytic tool for phenological interpretation. Next steps include analyzing longer data series, correlating them with local climatic data, analyzing and comparing patterns among the different vegetation sites, preparing a comprehensive protocol for digital-camera phenology, and developing new technologies to assess vegetation change using digital cameras. Support: FAPESP-Microsoft Research, CNPq, CAPES.
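The per-ROI color extraction described above is commonly reduced to a single relative-greenness index in phenocam work. A minimal sketch of that conventional computation (gcc = G / (R + G + B)); the abstract does not state that this exact index is used, and the roi argument and function name are hypothetical.

```python
import numpy as np

def green_chromatic_coordinate(img, roi):
    """Relative greenness of a region of interest in one phenocam frame.
    roi = (row0, row1, col0, col1). Applying this to the daily image
    sequence yields a leafing time series for the ROI."""
    r0, r1, c0, c1 = roi
    patch = img[r0:r1, c0:c1].astype(np.float64)
    totals = patch.reshape(-1, 3).sum(axis=0)   # summed R, G, B
    return totals[1] / totals.sum()
```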
Oh, Paul; Lee, Sukho; Kang, Moon Gi
2017-01-01
Recently, several RGB-White (RGBW) color filter arrays (CFAs) have been proposed; these have extra, highly sensitive white (W) pixels in the filter array. Owing to their high sensitivity, the W pixels have better SNR (signal-to-noise ratio) characteristics than the other color pixels, especially in low-light conditions. However, most RGBW CFAs are designed so that the acquired RGBW pattern image can be converted into a conventional Bayer pattern image, which is then converted into the final color image by conventional demosaicing, i.e., color interpolation. In this paper, we propose a new RGBW color filter array based on a totally different color interpolation technique: the colorization algorithm. The colorization algorithm was initially proposed for colorizing a gray image into a color image using a small number of color seeds. Here, we adopt it as a color interpolation technique, so that the RGBW color filter array can be designed with a very large number of W pixels to make the most of the highly sensitive W channel. The resulting color filter array has a large proportion of W pixels, while the few RGB pixels are randomly distributed over the array; the colorization algorithm makes it possible to reconstruct the colors from such a small number of RGB values. Owing to the large proportion of W pixels, the reconstructed color image has a high SNR, higher than those of conventional CFAs especially in low-light conditions. Experimental results show that much important information that is not perceived in color images reconstructed with conventional CFAs is perceived in the images reconstructed with the proposed method. PMID:28657602
An adaptive, object oriented strategy for base calling in DNA sequence analysis.
Giddings, M C; Brumley, R L; Haker, M; Smith, L M
1993-01-01
An algorithm has been developed for the determination of nucleotide sequence from data produced in fluorescence-based automated DNA sequencing instruments employing the four-color strategy. This algorithm takes advantage of object oriented programming techniques for modularity and extensibility. The algorithm is adaptive in that data sets from a wide variety of instruments and sequencing conditions can be used with good results. Confidence values are provided for the base calls as an estimate of accuracy. The algorithm iteratively employs confidence determinations from several different modules, each of which examines a different feature of the data for accurate peak identification. Modules within this system can be added or removed for increased performance or for application to a different task. In comparisons with commercial software, the algorithm performed well. PMID:8233787
NASA Captures First Color Image of Mercury from Orbit
2011-03-30
NASA image acquired: March 29, 2011 The first image acquired by MESSENGER from orbit around Mercury was actually part of an eight-image sequence, for which images were acquired through eight of the WAC’s eleven filters. Here we see a color version of that first imaged terrain; in this view the images obtained through the filters with central wavelengths of 1000 nm, 750 nm, and 430 nm are displayed in red, green, and blue, respectively. One of MESSENGER’s measurement objectives is to create an eight-color global base map at a resolution of 1 km/pixel (0.6 miles/pixel) to help understand the variations of composition across Mercury’s surface. On March 17, 2011 (March 18, 2011, UTC), MESSENGER became the first spacecraft ever to orbit the planet Mercury. The mission is currently in its commissioning phase, during which spacecraft and instrument performance are verified through a series of specially designed checkout activities. In the course of the one-year primary mission, the spacecraft's seven scientific instruments and radio science investigation will unravel the history and evolution of the Solar System's innermost planet. Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington
Enriching text with images and colored light
NASA Astrophysics Data System (ADS)
Sekulovski, Dragan; Geleijnse, Gijs; Kater, Bram; Korst, Jan; Pauws, Steffen; Clout, Ramon
2008-01-01
We present an unsupervised method to enrich textual applications with relevant images and colors. The images are collected by querying large image repositories, and the colors are subsequently computed using image processing. A prototype system based on this method is presented in which the method is applied to song lyrics; in combination with a lyrics synchronization algorithm, the system produces a rich multimedia experience. In order to identify terms within the text that may be associated with images and colors, we select noun phrases using a part-of-speech tagger. Large image repositories are queried with these terms, and representative colors are extracted per term from the collected images. To this end, we use either a histogram-based or a mean-shift-based algorithm; the representative color extraction exploits the non-uniform distribution of the colors found in the large repositories. The images ranked best by the search engine are displayed on a screen, while the extracted representative colors are rendered on controllable lighting devices in the living room. We evaluate our method by comparing the computed colors to standard color representations of a set of English color terms. A second evaluation focuses on the distance in color between a queried term in English and its translation in a foreign language. Based on results from three sets of terms, a measure of the suitability of a term for color extraction, based on KL divergence, is proposed. Finally, we compare the performance of the algorithm using either the automatically indexed repository of Google Images or the manually annotated Flickr.com. Based on the results of these experiments, we conclude that the presented method can compute the relevant color for a term using a large image repository and image processing.
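The histogram-based variant of the representative-color step admits a short sketch: pool the pixels of all images retrieved for a term, histogram them in quantized RGB, and take the most populated bin. The bin count and function name below are assumptions, not values from the paper.

```python
import numpy as np

def representative_color(images, bins=16):
    """Return the center of the most populated RGB histogram bin over
    all pixels of the images retrieved for one term."""
    hist = np.zeros((bins, bins, bins))
    for img in images:
        q = (img.reshape(-1, 3).astype(int) * bins // 256).clip(0, bins - 1)
        np.add.at(hist, (q[:, 0], q[:, 1], q[:, 2]), 1)
    r, g, b = np.unravel_index(hist.argmax(), hist.shape)
    return tuple(int((v + 0.5) * 256 / bins) for v in (r, g, b))
```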
Color transfer between high-dynamic-range images
NASA Astrophysics Data System (ADS)
Hristova, Hristina; Cozot, Rémi; Le Meur, Olivier; Bouatouch, Kadi
2015-09-01
Color transfer methods alter the look of a source image to match a reference image. So far, proposed color transfer methods have been limited to low-dynamic-range (LDR) images. Unlike LDR images, which are display-dependent, high-dynamic-range (HDR) images contain real physical values of world luminance and are able to capture the high luminance variations and finest details of real-world scenes; there is therefore a strong discrepancy between the two types of images. In this paper, we bridge the gap between the color transfer domain and HDR imagery by introducing HDR extensions of LDR color transfer methods. We tackle the main issues in applying a color transfer between two HDR images. First, to address the nature of light and color distributions in HDR imagery, we modify traditional color spaces. Furthermore, we ensure high precision in the quantization of the dynamic range for histogram computations. As image clustering (based on light and colors) proved to be an important aspect of color transfer, we analyze it and adapt it to the HDR domain. Our framework has been applied to several state-of-the-art color transfer methods. Qualitative experiments show that results obtained with the proposed adaptation exhibit fewer artifacts and are visually more pleasing than those obtained by straightforwardly applying existing color transfer methods to HDR images.
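One concrete adaptation in this spirit is to run classic mean/std statistics matching in a log domain, since HDR pixel values span orders of magnitude. The sketch below is a minimal illustration of that idea, omitting the paper's color-space modifications and clustering step.

```python
import numpy as np

def hdr_color_transfer(src, ref, eps=1e-6):
    """Reinhard-style per-channel mean/std matching applied to log
    values, so that the alignment is not dominated by the brightest
    HDR pixels. src, ref: float arrays of linear radiance, shape (H,W,3)."""
    ls, lr = np.log(src + eps), np.log(ref + eps)
    out = np.empty_like(ls)
    for c in range(3):
        z = (ls[..., c] - ls[..., c].mean()) / (ls[..., c].std() + eps)
        out[..., c] = z * lr[..., c].std() + lr[..., c].mean()
    return np.exp(out) - eps
```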
Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy
NASA Technical Reports Server (NTRS)
Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)
2011-01-01
Computed tomography imaging spectrometers (CTISs) having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3® digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.
NASA Astrophysics Data System (ADS)
Lecoeur, Jérémy; Ferré, Jean-Christophe; Collins, D. Louis; Morrisey, Sean P.; Barillot, Christian
2009-02-01
A new segmentation framework is presented that takes advantage of the multimodal image signature of the different brain tissues (healthy and/or pathological). This is achieved by merging three different gray-level MRI sequences into a single RGB-like MRI, creating a unique 3-dimensional signature for each tissue by utilising the complementary information of the MRI sequences. Using the scale-space spectral gradient operator, we obtain a spatial gradient robust to intensity inhomogeneity. Even though this operator is based on psycho-visual color theory, it can be applied very efficiently to the RGB-colored images; moreover, it is not influenced by the channel assignment of each MRI. Its optimisation by the graph cuts paradigm provides a powerful and accurate tool to segment either healthy or pathological tissues in a short time (on average about ninety seconds for a brain-tissue classification). As it is a semi-automatic method, we ran experiments to quantify the amount of seeds needed to perform a correct segmentation (Dice similarity score above 0.85). Depending on the set of MRI sequences used, the required amount of seeds (expressed as a percentage of the number of voxels of the ground truth) is between 6 and 16%. We tested this algorithm on BrainWeb for validation purposes (healthy tissue classification and MS lesion segmentation) and also on clinical data for tumour and MS lesion detection and tissue classification.
NASA Technical Reports Server (NTRS)
2005-01-01
The THEMIS VIS camera is capable of capturing color images of the Martian surface using five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from using multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. This false color image shows the wind eroded deposit in Pollack Crater called 'White Rock'. This image was collected during the Southern Fall Season. Image information: VIS instrument. Latitude -8, Longitude 25.2 East (334.8 West). 0 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
An interactive tool for gamut masking
NASA Astrophysics Data System (ADS)
Song, Ying; Lau, Cheryl; Süsstrunk, Sabine
2014-02-01
Artists often want to change the colors of an image to achieve a particular aesthetic goal. For example, they might limit colors to a warm or cool color scheme to create an image with a certain mood or feeling. Gamut masking is a technique that artists use to limit the set of colors they can paint with. They draw a mask over a color wheel and only use the hues within the mask. However, creating the color palette from the mask and applying the colors to the image requires skill. We propose an interactive tool for gamut masking that allows amateur artists to create an image with a desired mood or feeling. Our system extracts a 3D color gamut from the 2D user-drawn mask and maps the image to this gamut. The user can draw a different gamut mask or locally refine the image colors. Our voxel grid gamut representation allows us to represent gamuts of any shape, and our cluster-based image representation allows the user to change colors locally.
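A toy version of the hue-limiting idea can be written directly. The full tool extracts a 3D gamut from the 2D mask; the sketch below only snaps out-of-mask hues to the nearest allowed hue while keeping saturation and value, and all names and the tolerance are illustrative assumptions.

```python
import numpy as np
import colorsys

def apply_gamut_mask(img, allowed_hues, tol=1 / 24):
    """Pull pixels whose hue lies outside the user's mask to the nearest
    allowed hue. allowed_hues: list of hues in [0, 1) inside the mask."""
    out = img.astype(np.float64) / 255.0
    hues = np.asarray(allowed_hues)
    h, w, _ = out.shape
    for i in range(h):
        for j in range(w):
            hh, ss, vv = colorsys.rgb_to_hsv(*out[i, j])
            # Circular hue distance to every allowed hue.
            d = np.abs((hues - hh + 0.5) % 1.0 - 0.5)
            if d.min() > tol:
                hh = float(hues[d.argmin()])   # snap into the mask
            out[i, j] = colorsys.hsv_to_rgb(hh, ss, vv)
    return (out * 255).astype(np.uint8)
```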
True color scanning laser ophthalmoscopy and optical coherence tomography handheld probe
LaRocca, Francesco; Nankivil, Derek; Farsiu, Sina; Izatt, Joseph A.
2014-01-01
Scanning laser ophthalmoscopes (SLOs) are able to achieve superior contrast and axial sectioning capability compared to fundus photography. However, SLOs typically use monochromatic illumination and are thus unable to extract color information of the retina. Previous color SLO imaging techniques utilized multiple lasers or narrow band sources for illumination, which allowed for multiple color but not “true color” imaging as done in fundus photography. We describe the first “true color” SLO, handheld color SLO, and combined color SLO integrated with a spectral domain optical coherence tomography (OCT) system. To achieve accurate color imaging, the SLO was calibrated with a color test target and utilized an achromatizing lens when imaging the retina to correct for the eye’s longitudinal chromatic aberration. Color SLO and OCT images from volunteers were then acquired simultaneously with a combined power under the ANSI limit. Images from this system were then compared with those from commercially available SLOs featuring multiple narrow-band color imaging. PMID:25401032
NASA Astrophysics Data System (ADS)
Lang, Jun
2015-03-01
In this paper, we propose a novel color image encryption method using Color Blend (CB) and Chaos Permutation (CP) operations in the reality-preserving multiple-parameter fractional Fourier transform (RPMPFRFT) domain. The original color image is first exchanged and mixed randomly from the standard red-green-blue (RGB) color space to an R′G′B′ color space by rotating the color cube with a random angle matrix. The RPMPFRFT is then employed to change the pixel values of the color image: the three components of the scrambled RGB color space are transformed with three different transform pairs, respectively. Compared to transforms with complex-valued output, the RPMPFRFT ensures that the output is real, which saves image storage space and is convenient for transmission in practical applications. To further enhance the security of the encryption system, the output of the former steps is scrambled by juxtaposing sections of the image in the reality-preserving multiple-parameter fractional Fourier domains, with the alignment of sections determined by two coupled chaotic logistic maps. The parameters of the Color Blend, Chaos Permutation, and RPMPFRFT operations serve as the key of the encryption algorithm. The proposed color image encryption can also be applied to encrypt three gray images by treating them as the three RGB components of a specially constructed color image. Numerical simulations demonstrate that the proposed algorithm is feasible, secure, sensitive to keys, and robust to noise attack and data loss.
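As a rough illustration of the chaos-permutation building block, the sketch below scrambles pixel positions using an ordering derived from a single logistic map. This is a simplification under stated assumptions: the paper couples two logistic maps and permutes image sections in the RPMPFRFT domains, and all names and parameter values here are hypothetical.

```python
import numpy as np

def logistic_sequence(x0, mu, n, burn=1000):
    """Iterate the logistic map x -> mu*x*(1-x), discarding transients."""
    x = x0
    for _ in range(burn):
        x = mu * x * (1 - x)
    seq = np.empty(n)
    for i in range(n):
        x = mu * x * (1 - x)
        seq[i] = x
    return seq

def chaos_permute(img):
    """Scramble pixel positions with a logistic-map-driven permutation."""
    flat = img.reshape(-1, img.shape[-1])
    order = np.argsort(logistic_sequence(0.3567, 3.99, len(flat)))
    return flat[order].reshape(img.shape), order

def chaos_unpermute(scrambled, order):
    """Invert the permutation (a receiver regenerates 'order' from the key)."""
    flat = scrambled.reshape(-1, scrambled.shape[-1])
    out = np.empty_like(flat)
    out[order] = flat
    return out.reshape(scrambled.shape)
```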
Portable real-time color night vision
NASA Astrophysics Data System (ADS)
Toet, Alexander; Hogervorst, Maarten A.
2008-03-01
We developed a simple and fast lookup-table-based method to derive and apply natural daylight colors to multi-band night-time images. The method deploys an optimal color transformation derived from a set of samples taken from a daytime color reference image. The colors in the resulting colorized multi-band night-time images closely resemble the colors in the daytime color reference image. Also, object colors remain invariant under panning operations and are independent of the scene content. Here we describe the implementation of this method in two prototype portable dual-band real-time night vision systems. One system provides co-aligned visual and near-infrared bands from two image intensifiers; the other provides co-aligned images from a digital image intensifier and an uncooled longwave infrared microbolometer. The co-aligned images from both systems are further processed by a notebook computer. The color mapping is implemented as a real-time lookup-table transform. The resulting colorized video streams can be displayed in real time on head-mounted displays and stored on the hard disk of the notebook computer. Preliminary field trials demonstrate the potential of these systems for applications like surveillance, navigation and target detection.
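A minimal sketch of the lookup-table colorization, assuming both night-vision bands are normalized to [0, 1]: the table is built offline by averaging daytime reference colors over a grid of two-band signatures, then applied per pixel in real time. The binning scheme and helper names are illustrative rather than the authors' implementation.

```python
import numpy as np

def build_lut(samples_2band, samples_rgb, bins=64):
    """Average the daytime reference RGB for every (band1, band2) bin.
    samples_2band: (N, 2) night-band pairs; samples_rgb: (N, 3) colors."""
    lut = np.zeros((bins, bins, 3))
    cnt = np.zeros((bins, bins, 1))
    idx = np.clip((samples_2band * bins).astype(int), 0, bins - 1)
    for (i, j), rgb in zip(idx, samples_rgb):
        lut[i, j] += rgb
        cnt[i, j] += 1
    return lut / np.maximum(cnt, 1)

def colorize(night_2band, lut):
    """Per-pixel table lookup: (H, W, 2) night image -> (H, W, 3) color."""
    bins = lut.shape[0]
    idx = np.clip((night_2band * bins).astype(int), 0, bins - 1)
    return lut[idx[..., 0], idx[..., 1]]
```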
Quality assessment of color images based on the measure of just noticeable color difference
NASA Astrophysics Data System (ADS)
Chou, Chun-Hsien; Hsu, Yun-Hsiang
2014-01-01
Accurate assessment of the quality of color images is an important step in many image processing systems that convey visual information through reproduced images. An accurate objective image quality assessment (IQA) method is expected to give results that agree closely with subjective assessment. To assess the quality of color images, many approaches simply apply a metric designed for grayscale images to each of the three color channels, neglecting the correlation among channels. In this paper, a metric for assessing the quality of color images is proposed, in which the model of variable just-noticeable color difference (VJNCD) is employed to estimate the visibility thresholds of distortion inherent in each color pixel. With the estimated visibility thresholds, the proposed metric measures the average perceptible distortion in terms of the quantized distortion according to a perceptual error map similar to that defined by the National Bureau of Standards (NBS) for converting the color difference enumerated by CIEDE2000 into an objective score of perceptual quality. The perceptual error map in this case is designed for each pixel according to the visibility threshold estimated by the VJNCD model. The performance of the proposed metric is verified by assessing the test images in the LIVE database and is compared with those of many well-known IQA metrics. Experimental results indicate that the proposed metric is an effective IQA method that can accurately predict the quality of color images in terms of the correlation between objective scores and subjective evaluation.
High Speed Intensified Video Observations of TLEs in Support of PhOCAL
NASA Technical Reports Server (NTRS)
Lyons, Walter A.; Nelson, Thomas E.; Cummer, Steven A.; Lang, Timothy; Miller, Steven; Beavis, Nick; Yue, Jia; Samaras, Tim; Warner, Tom A.
2013-01-01
The third observing season of PhOCAL (Physical Origins of Coupling to the upper Atmosphere by Lightning) was conducted over the U.S. High Plains during the late spring and summer of 2013. The goal was to capture, using an intensified high-speed camera, a transient luminous event (TLE), especially a sprite, together with its parent cloud-to-ground (SP+CG) lightning discharge, preferably within the domain of a 3-D lightning mapping array (LMA). The co-capture of a sprite and its SP+CG was achieved within useful range of an interferometer operating near Rapid City. Other high-speed sprite video sequences were captured above the West Texas LMA. On several occasions the large mesoscale convective complexes (MCSs) producing the TLE-class lightning were also generating vertically propagating convectively generated gravity waves (CGGWs) at the mesopause, which were easily visible using NIR-sensitive color cameras. These were captured concurrently with sprites. These observations were follow-ons to a case on 15 April 2012 in which CGGWs were also imaged by the new Day/Night Band on the Suomi NPP satellite system. The relationship between CGGWs and sprite initiation is being investigated. The past year was notable for a large number of elve+halo+sprite sequences generated by the same parent CG. On several occasions there appeared to be prominent banded modulations of the elves' luminosity imaged at >3000 ips. These stripes appear coincident with the banded CGGW structure, and presumably its density variations. Several elves and a sprite from negative CGs were also noted. New color imaging systems have been tested and found capable of capturing sprites. Two cases of sprites with an aurora as a backdrop were also recorded. High-speed imaging was also provided in support of the UPLIGHTS program near Rapid City, SD and the USAFA SPRITES II airborne campaign over the Great Plains.
NASA Technical Reports Server (NTRS)
2005-01-01
[figure removed for brevity, see original site]
The THEMIS VIS camera is capable of capturing color images of the Martian surface using five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from using multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. This false color image continues the northward trend through the Iani Chaos region. Compare this image to Monday's and Tuesday's. This image was collected during the Southern Fall season. Image information: VIS instrument. Latitude -0.1, Longitude 342.6 East (17.4 West). 19 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
NASA Technical Reports Server (NTRS)
2005-01-01
[figure removed for brevity, see original site]
The THEMIS VIS camera is capable of capturing color images of the Martian surface using five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from using multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. This false color image is located in a different part of Aureum Chaos. Compare the surface textures with yesterday's image. This image was collected during the Southern Fall season. Image information: VIS instrument. Latitude -4.1, Longitude 333.9 East (26.1 West). 35 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
Joint sparse coding based spatial pyramid matching for classification of color medical image.
Shi, Jun; Li, Yi; Zhu, Jie; Sun, Haojie; Cai, Yin
2015-04-01
Although color medical images are important in clinical practice, they are usually converted to grayscale for further processing in pattern recognition, resulting in loss of rich color information. The sparse coding based linear spatial pyramid matching (ScSPM) and its variants are popular for grayscale image classification, but cannot extract color information. In this paper, we propose a joint sparse coding based SPM (JScSPM) method for the classification of color medical images. A joint dictionary can represent both the color information in each color channel and the correlation between channels. Consequently, the joint sparse codes calculated from a joint dictionary can carry color information, and therefore this method can easily transform a feature descriptor originally designed for grayscale images to a color descriptor. A color hepatocellular carcinoma histological image dataset was used to evaluate the performance of the proposed JScSPM algorithm. Experimental results show that JScSPM provides significant improvements as compared with the majority voting based ScSPM and the original ScSPM for color medical image classification. Copyright © 2014 Elsevier Ltd. All rights reserved.
Example-Based Image Colorization Using Locality Consistent Sparse Representation.
Bo Li; Fuchen Zhao; Zhuo Su; Xiangguo Liang; Yu-Kun Lai; Rosin, Paul L
2017-11-01
Image colorization aims to produce a natural looking color image from a given gray-scale image, which remains a challenging problem. In this paper, we propose a novel example-based image colorization method exploiting a new locality consistent sparse representation. Given a single reference color image, our method automatically colorizes the target gray-scale image by sparse pursuit. For efficiency and robustness, our method operates at the superpixel level. We extract low-level intensity features, mid-level texture features, and high-level semantic features for each superpixel, which are then concatenated to form its descriptor. The collection of feature vectors for all the superpixels from the reference image composes the dictionary. We formulate colorization of target superpixels as a dictionary-based sparse reconstruction problem. Inspired by the observation that superpixels with similar spatial location and/or feature representation are likely to match spatially close regions from the reference image, we further introduce a locality promoting regularization term into the energy formulation, which substantially improves the matching consistency and subsequent colorization results. Target superpixels are colorized based on the chrominance information from the dominant reference superpixels. Finally, to further improve coherence while preserving sharpness, we develop a new edge-preserving filter for chrominance channels with the guidance from the target gray-scale image. To the best of our knowledge, this is the first work on sparse pursuit image colorization from single reference images. Experimental results demonstrate that our colorization method outperforms the state-of-the-art methods, both visually and quantitatively using a user study.
NASA Technical Reports Server (NTRS)
2004-01-01
[figure removed for brevity, see original site]
Released 7 May 2004. This daytime visible color image was collected on May 30, 2002 during the Southern Fall season in Atlantis Chaos. The THEMIS VIS camera is capable of capturing color images of the martian surface using its five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from the use of multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. Image information: VIS instrument. Latitude -34.5, Longitude 183.6 East (176.4 West). 38 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
NASA Technical Reports Server (NTRS)
2005-01-01
[figure removed for brevity, see original site]
The THEMIS VIS camera is capable of capturing color images of the Martian surface using five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from using multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. This false color image of a portion of the Iani Chaos region was collected during the Southern Fall season. Image information: VIS instrument. Latitude -2.6, Longitude 342.4 East (17.6 West). 36 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
NASA Technical Reports Server (NTRS)
2004-01-01
[figure removed for brevity, see original site]
Released 12 May 2004. This daytime visible color image was collected on June 6, 2003 during the Southern Spring season near the South Polar Cap Edge. The THEMIS VIS camera is capable of capturing color images of the martian surface using its five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from the use of multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. Image information: VIS instrument. Latitude -77.8, Longitude 195 East (165 West). 38 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
Selective document image data compression technique
Fu, C.Y.; Petrich, L.I.
1998-05-19
A method of storing information from filled-in form documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. A second two-color image of the filled-edge file is then generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file, which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel. 10 figs.
Selective document image data compression technique
Fu, Chi-Yung; Petrich, Loren I.
1998-01-01
A method of storing information from filled-in form documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. A second two-color image of the filled-edge file is then generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file, which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel.
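The gamma-correction and two-color conversion steps that both patent records describe are easy to sketch; the remaining stages (edge filling, combination, Huffman coding, decimation) are omitted, and the gamma and threshold values below are arbitrary placeholders.

```python
import numpy as np

def gamma_correct(img, gamma=0.7):
    """Contrast enhancement via gamma correction; img in [0, 1]."""
    return np.power(img, gamma)

def to_two_color(gray, threshold=0.5):
    """Pixels darker than the threshold -> black (0), lighter -> white (1)."""
    return np.where(gray < threshold, 0.0, 1.0)

# toy usage on a random stand-in for a scanned form
page = np.random.rand(64, 64)
binary = to_two_color(gamma_correct(page), threshold=0.5)
```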
NASA Astrophysics Data System (ADS)
Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan
2016-06-01
Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed “digital color fusion microscopy” (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available.
Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan
2016-01-01
Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed “digital color fusion microscopy” (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available. PMID:27283459
Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan
2016-06-10
Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed "digital color fusion microscopy" (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available.
NASA Astrophysics Data System (ADS)
Froehlich, Jan; Grandinetti, Stefan; Eberhardt, Bernd; Walter, Simon; Schilling, Andreas; Brendel, Harald
2014-03-01
High quality video sequences are required for the evaluation of tone mapping operators and high dynamic range (HDR) displays. We provide scenic and documentary scenes with a dynamic range of up to 18 stops. The scenes are staged using professional film lighting, make-up and set design to enable the evaluation of image and material appearance. To address challenges for HDR-displays and temporal tone mapping operators, the sequences include highlights entering and leaving the image, brightness changing over time, high contrast skin tones, specular highlights and bright, saturated colors. HDR-capture is carried out using two cameras mounted on a mirror-rig. To achieve a cinematic depth of field, digital motion picture cameras with Super-35mm size sensors are used. We provide HDR-video sequences to serve as a common ground for the evaluation of temporal tone mapping operators and HDR-displays. They are available to the scientific community for further research.
NASA Astrophysics Data System (ADS)
Grigoryan, Artyom M.; John, Aparna; Agaian, Sos S.
2017-03-01
2-D quaternion discrete Fourier transform (2-D QDFT) is the Fourier transform applied to color images when the color images are considered in the quaternion space. Quaternion numbers are four-dimensional hyper-complex numbers. The quaternion representation of a color image allows us to treat the color of each pixel as a single unit. In the quaternion approach to color image enhancement, each color is seen as a vector, which lets us capture the merging effect of the combination of the primary colors. Conventionally, color images are processed by applying the respective algorithm to each channel separately and then composing the color image from the processed channels. In this article, the alpha-rooting and zonal alpha-rooting methods are used with the 2-D QDFT. In the alpha-rooting method, the alpha-root of each transformed frequency value of the 2-D QDFT is taken before the inverse transform. In the zonal alpha-rooting method, the frequency spectrum of the 2-D QDFT is divided into different zones and alpha-rooting is applied with different alpha values for different zones. The choice of alpha values is optimized with a genetic algorithm. The visual perception of 3-D medical images is improved by changing the reference gray line.
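For reference, the sketch below shows ordinary single-channel alpha-rooting with the standard 2-D DFT: the magnitude spectrum is raised to a power alpha while the phase is preserved, which relatively boosts high-frequency detail for 0 < alpha < 1. The paper applies the same idea in the quaternion (2-D QDFT) domain, optionally zone by zone, which is not reproduced here.

```python
import numpy as np

def alpha_rooting(gray, alpha=0.92):
    """Enhance a grayscale image by raising |F(u, v)| to the power alpha
    while keeping the phase spectrum unchanged."""
    F = np.fft.fft2(gray)
    mag, phase = np.abs(F), np.angle(F)
    enhanced = (mag ** alpha) * np.exp(1j * phase)
    return np.clip(np.real(np.fft.ifft2(enhanced)), 0, None)
```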
NASA Technical Reports Server (NTRS)
2005-01-01
[figure removed for brevity, see original site]
The THEMIS VIS camera is capable of capturing color images of the Martian surface using five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from using multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. This false color image of an old channel floor and surrounding highlands is located in the lower reach of Mawrth Valles. This image was collected during the Northern Spring season. Image information: VIS instrument. Latitude 25.7, Longitude 341.2 East (18.8 West). 35 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
NASA Astrophysics Data System (ADS)
Yu, Xuelian; Chen, Qian; Gu, Guohua; Ren, Jianle; Sui, Xiubao
2015-02-01
Designing an objective quality assessment for color-fused images is a demanding and challenging task. We propose four no-reference metrics based on human visual system characteristics for objectively evaluating the quality of false-color fusion images. The perceived edge metric (PEM) is defined based on a visual perception model and the color image gradient similarity between the fused image and the source images. The perceptual contrast metric (PCM) is established by associating multi-scale contrast with a contrast sensitivity filter (CSF) that varies with the color components. A linear combination of the standard deviation and mean value over the fused image constructs the image colorfulness metric (ICM). The color comfort metric (CCM) is designed from the average saturation and the ratio of pixels with high and low saturation. The qualitative and quantitative experimental results demonstrate that the proposed metrics agree well with subjective perception.
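The ICM and CCM are simple enough to sketch directly from the definitions in the abstract; the weights and saturation thresholds below are assumptions, since the record does not state them.

```python
import numpy as np

def image_colorfulness_metric(fused_rgb, w_std=1.0, w_mean=0.3):
    """ICM-style score: a linear combination of the standard deviation
    and the mean value over the fused image (weights are illustrative)."""
    return w_std * fused_rgb.std() + w_mean * fused_rgb.mean()

def color_comfort_metric(saturation, hi=0.8, lo=0.2):
    """CCM-style ingredients: average saturation plus the ratio of
    high-saturation to low-saturation pixels (thresholds are assumed)."""
    ratio = (saturation > hi).sum() / max((saturation < lo).sum(), 1)
    return saturation.mean(), ratio
```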
NASA Astrophysics Data System (ADS)
Jiang, Yan; Harrison, Tyler; Forbrich, Alex; Zemp, Roger J.
2011-03-01
The metabolic rate of oxygen consumption (MRO2) quantifies tissue metabolism, which is important for the diagnosis of many diseases. For a single-vessel model, MRO2 can be estimated in terms of the mean flow velocity, the vessel cross-sectional area, the total concentration of hemoglobin (CHB), and the difference between the oxygen saturation (sO2) of blood flowing into and out of the tissue region. In this work, we show the feasibility of estimating MRO2 with our combined photoacoustic and high-frequency ultrasound imaging system. This system uses a swept-scan 25-MHz ultrasound transducer with confocal dark-field laser illumination optics. A pulse sequencer enables ultrasonic and laser pulses to be interlaced so that photoacoustic and Doppler ultrasound images are co-registered. Since the mean flow velocity can be measured by color Doppler ultrasound, the vessel cross-sectional area can be measured by power Doppler or photoacoustic imaging, and multi-wavelength photoacoustic methods can be used to estimate sO2 and CHB, all of the parameters necessary for MRO2 estimation can be provided by our system. Experiments have been performed on flow phantoms to generate co-registered color Doppler and photoacoustic images. To verify the sO2 estimation, two ink samples (red and blue) were mixed in various concentration ratios to mimic different levels of sO2, and the results show a good match between the calculated concentration ratios and the actual values.
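Under the single-vessel model described above, the MRO2 estimate reduces to a product of the measured quantities; the sketch below states that relation with unit-conversion constants omitted.

```python
def mro2_single_vessel(v_mean, area, c_hb, so2_in, so2_out):
    """Single-vessel MRO2 up to a constant factor: oxygen consumed per
    unit time ~ flow (mean velocity x cross-sectional area) x hemoglobin
    concentration x the inflow/outflow sO2 difference."""
    return v_mean * area * c_hb * (so2_in - so2_out)
```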
Color correction with blind image restoration based on multiple images using a low-rank model
NASA Astrophysics Data System (ADS)
Li, Dong; Xie, Xudong; Lam, Kin-Man
2014-03-01
We present a method that can handle the color correction of multiple photographs with blind image restoration simultaneously and automatically. We prove that the local colors of a set of images of the same scene exhibit the low-rank property locally both before and after a color-correction operation. This property allows us to correct all kinds of errors in an image under a low-rank matrix model without particular priors or assumptions. The possible errors may be caused by changes of viewpoint, large illumination variations, gross pixel corruptions, partial occlusions, etc. Furthermore, a new iterative soft-segmentation method is proposed for local color transfer using color influence maps. Due to the fact that the correct color information and the spatial information of images can be recovered using the low-rank model, more precise color correction and many other image-restoration tasks (including image denoising, image deblurring, and grayscale image colorizing) can be performed simultaneously. Experiments have verified that our method can achieve consistent and promising results on uncontrolled real photographs acquired from the Internet and that it outperforms current state-of-the-art methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thilker, David A.; Bianchi, Luciana; Schiminovich, David
We have discovered recent star formation in the outermost portion ((1-4) x R25) of the nearby lenticular (S0) galaxy NGC 404 using Galaxy Evolution Explorer UV imaging. FUV-bright sources are strongly concentrated within the galaxy's H I ring (formed by a merger event according to del Río et al.), even though the average gas density is dynamically subcritical. Archival Hubble Space Telescope imaging reveals resolved upper main-sequence stars and conclusively demonstrates that the UV light originates from recent star formation activity. We present FUV, NUV radial surface brightness profiles, and integrated magnitudes for NGC 404. Within the ring, the average star formation rate (SFR) surface density (Sigma_SFR) is ~2.2 x 10^-5 M_sun yr^-1 kpc^-2. Of the total FUV flux, 70% comes from the H I ring, which is forming stars at a rate of 2.5 x 10^-3 M_sun yr^-1. The gas consumption timescale, assuming a constant SFR and no gas recycling, is several times the age of the universe. In the context of the UV-optical galaxy color-magnitude diagram, the presence of the star-forming H I ring places NGC 404 in the green valley separating the red and blue sequences. The rejuvenated lenticular galaxy has experienced a merger-induced, disk-building excursion away from the red sequence toward bluer colors, where it may evolve quiescently or (if appropriately triggered) experience a burst capable of placing it on the blue/star-forming sequence for up to ~1 Gyr. The green valley galaxy population is heterogeneous, with most systems transitioning from blue to red but others evolving in the opposite sense due to acquisition of fresh gas through various channels.
The Mid-infrared View of Red Sequence Galaxies in Abell 2218 with AKARI
NASA Astrophysics Data System (ADS)
Ko, Jongwan; Im, Myungshin; Lee, Hyung Mok; Lee, Myung Gyoon; Hopwood, Ros H.; Serjeant, Stephen; Smail, Ian; Hwang, Ho Seong; Hwang, Narae; Shim, Hyunjin; Kim, Seong Jin; Lee, Jong Chul; Lim, Sungsoon; Seo, Hyunjong; Goto, Tomotsugu; Hanami, Hitoshi; Matsuhara, Hideo; Takagi, Toshinobu; Wada, Takehiko
2009-04-01
We present AKARI Infrared Camera (IRC) imaging observations of early-type galaxies (ETGs) in A2218 at z ≈ 0.175. Mid-infrared (MIR) emission from ETGs traces circumstellar dust emission from asymptotic giant branch (AGB) stars and/or residual star formation. Including the unique imaging capability at 11 and 15 μm, our AKARI data provide an effective way to investigate the MIR properties of ETGs in the cluster environment. Among our flux-limited sample of 22 red sequence ETGs with precise dynamical and line strength measurements (less than 18 mag at 3 μm), we find that at least 41% have MIR-excess emission. The N3 - S11 versus N3 (3 and 11 μm) color-magnitude relation shows the expected blue sequence, but the MIR-excess galaxies add a red wing to the relation, especially at the fainter end. A spectral energy distribution analysis reveals that dust emission from AGB stars is the most likely cause of the MIR excess, with a low level of star formation being the next possible explanation. The MIR-excess galaxies show a wide spread of N3 - S11 colors, implying a significant spread (2-11 Gyr) in the estimated mean ages of their stellar populations. We study the environmental dependence of MIR-excess ETGs over an area out to half the virial radius (~1 Mpc). We find that the MIR-excess ETGs are preferentially located in the outer region. From this evidence, we suggest that the fainter, MIR-excess ETGs have just joined the red sequence, possibly due to infall and subsequent morphological/spectral transformation induced by the cluster environment.
Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition.
Park, Chulhee; Kang, Moon Gi
2016-05-18
A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications, with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible-band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component in each RGB color channel. To overcome this color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible-band component and the NIR-band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors.
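A minimal sketch of the decomposition idea: subtract a weighted copy of the N channel from each RGB channel. The per-channel weights here are placeholders; in the paper they follow from spectral estimation based on the measured sensitivities of the MSFA sensor.

```python
import numpy as np

def restore_visible(rgbn, w=(0.3, 0.3, 0.3)):
    """rgbn: (H, W, 4) float image in [0, 1] with channels R, G, B, N.
    Removes the estimated NIR contribution from each color channel."""
    rgb, nir = rgbn[..., :3], rgbn[..., 3:4]
    return np.clip(rgb - np.asarray(w) * nir, 0.0, 1.0)
```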
Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition
Park, Chulhee; Kang, Moon Gi
2016-01-01
A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications, with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible-band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component in each RGB color channel. To overcome this color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible-band component and the NIR-band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors. PMID:27213381
Ns-scaled time-gated fluorescence lifetime imaging for forensic document examination
NASA Astrophysics Data System (ADS)
Zhong, Xin; Wang, Xinwei; Zhou, Yan
2018-01-01
A method of ns-scaled time-gated fluorescence lifetime imaging (TFLI) is proposed to distinguish different fluorescent substances in forensic document examination. Compared with a video spectral comparator (VSC), which can examine only fluorescence intensity images, TFLI can reveal falsification or alteration in questioned documents. The TFLI system can enhance weak signals by an accumulation method. Two fluorescence intensity images separated by a gate delay tg are acquired by an ICCD and fitted into a fluorescence lifetime image. The lifetimes of fluorescent substances are represented by different colors, which makes it easy to detect the fluorescent substances and the sequence of handwriting. This shows that TFLI is a powerful tool for forensic document examination. Furthermore, the advantages of the TFLI system are nanosecond-scale timing precision and strong capture capability.
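The record does not spell out the fitting model, but a common two-gate ("rapid lifetime determination") estimator matches the described acquisition of two gated intensity images separated by a delay: assuming a single-exponential decay, tau = t_gate / ln(I1 / I2). A sketch:

```python
import numpy as np

def two_gate_lifetime(i1, i2, t_gate):
    """Per-pixel lifetime from two gated intensity images i1, i2 whose
    gates are separated by t_gate, assuming single-exponential decay."""
    ratio = np.clip(i1 / np.maximum(i2, 1e-12), 1.0 + 1e-6, None)
    return t_gate / np.log(ratio)
```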
NASA Astrophysics Data System (ADS)
Gong, Li-Hua; He, Xiang-Tao; Tan, Ru-Chao; Zhou, Zhi-Hong
2018-01-01
In order to obtain high-quality color images, it is important to keep the hue component unchanged while emphasizing the intensity or saturation component. As a common color model, the Hue-Saturation-Intensity (HSI) model is widely used in image processing. A new single-channel quantum color image encryption algorithm based on the HSI model and the quantum Fourier transform (QFT) is investigated, in which the color components of the original color image are converted to HSI and a logistic map is employed to diffuse the relationship between pixels in the color components. Subsequently, the quantum Fourier transform is exploited to fulfill the encryption. The cipher-text is a combination of a gray image and a phase matrix. Simulations and theoretical analyses demonstrate that the proposed single-channel quantum color image encryption scheme based on the HSI model and quantum Fourier transform is secure and effective.
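The RGB-to-HSI conversion used as the first step is standard and can be sketched directly; the logistic-map diffusion and quantum Fourier transform stages are not reproduced here.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Standard RGB -> HSI conversion for an (H, W, 3) image in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - np.minimum(np.minimum(r, g), b) / np.maximum(i, 1e-12)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    h = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b > g, 2.0 * np.pi - h, h)   # hue in [0, 2*pi)
    return np.stack([h, s, i], axis=-1)
```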
Optimal chroma-like channel design for passive color image splicing detection
NASA Astrophysics Data System (ADS)
Zhao, Xudong; Li, Shenghong; Wang, Shilin; Li, Jianhua; Yang, Kongjin
2012-12-01
Image splicing is one of the most common image forgeries in daily life, and with powerful image manipulation tools it is becoming ever easier. Several methods have been proposed for image splicing detection, all of which operate on existing color channels. However, splicing artifacts vary across color channels, and the selection of the color model is important for splicing detection. In this article, instead of choosing among existing color models, we propose a color channel design method that finds the most discriminative channel, referred to as the optimal chroma-like channel, for a given feature extraction method. Experimental results show that both spatial and frequency features extracted from the designed channel achieve higher detection rates than those extracted from traditional color channels.
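One way to read "channel design" is as a search for RGB weights maximizing the separability of spliced versus authentic samples. The sketch below uses a Fisher criterion with random search as stand-ins; the paper optimizes the channel for a given feature extraction method, so its actual objective and optimizer may differ.

```python
import numpy as np

def fisher_score(w, x_pos, x_neg):
    """Separability of the linear channel w[0]*R + w[1]*G + w[2]*B
    between spliced (x_pos) and authentic (x_neg) samples, shape (N, 3)."""
    p, n = x_pos @ w, x_neg @ w
    return (p.mean() - n.mean()) ** 2 / (p.var() + n.var() + 1e-12)

def best_channel(x_pos, x_neg, trials=5000, seed=0):
    """Random search over unit-norm RGB weight vectors."""
    rng = np.random.default_rng(seed)
    ws = rng.normal(size=(trials, 3))
    ws /= np.linalg.norm(ws, axis=1, keepdims=True)
    scores = [fisher_score(w, x_pos, x_neg) for w in ws]
    return ws[int(np.argmax(scores))]
```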
NASA Technical Reports Server (NTRS)
2007-01-01
This beautiful image of the crescents of volcanic Io and more sedate Europa is a combination of two New Horizons images taken March 2, 2007, about two days after New Horizons made its closest approach to Jupiter. A lower-resolution color image snapped by the Multispectral Visual Imaging Camera (MVIC) at 10:34 universal time (UT) has been merged with a higher-resolution black-and-white image taken by the Long Range Reconnaissance Imager (LORRI) at 10:23 UT. The composite image shows the relative positions of Io and Europa, which were moving past each other during the image sequence, as they were at the time the LORRI image was taken. This image was taken from a range of 4.6 million kilometers (2.8 million miles) from Io and 3.8 million kilometers (2.4 million miles) from Europa. Although the moons appear close together in this view, a gulf of 790,000 kilometers (490,000 miles) separates them. Io's night side is lit up by light reflected from Jupiter, which is off the frame to the right. Europa's night side is dark, in contrast to Io, because this side of Europa faces away from Jupiter. Here Io steals the show with its beautiful display of volcanic activity. Three volcanic plumes are visible. Most conspicuous is the enormous 300-kilometer (190-mile) high plume from the Tvashtar volcano at the 11 o'clock position on Io's disk. Two much smaller plumes are also visible: that from the volcano Prometheus, at the 9 o'clock position on the edge of Io's disk, and from the volcano Amirani, seen between Prometheus and Tvashtar along Io's terminator (the line dividing day and night). The Tvashtar plume appears blue because of the scattering of light by tiny dust particles ejected by the volcanoes, similar to the blue appearance of smoke. In addition, the contrasting red glow of hot lava can be seen at the source of the Tvashtar plume. The images are centered at 1 degree North, 60 degrees West on Io, and 0 degrees North, 149 degrees West on Europa. The color in this image was generated using individual MVIC images at wavelengths of 480, 620 and 850 nanometers. The human eye is sensitive to slightly shorter wavelengths, from 400 to 700 nanometers, and thus would see the scene slightly differently. For instance, while the eye would notice the difference between the yellow and reddish brown colors of Io's surface and the paler color of Europa, the two worlds appear very similar in color to MVIC's longer-wavelength vision. The night side of Io appears greenish compared to the day side, because methane in Jupiter's atmosphere absorbs 850 nanometer light and makes Jupiter-light green to MVIC's eyes.
Color image guided depth image super resolution using fusion filter
NASA Astrophysics Data System (ADS)
He, Jin; Liang, Bin; He, Ying; Yang, Jun
2018-04-01
Depth cameras currently play an important role in many areas. However, most of them can only obtain low-resolution (LR) depth images, while color cameras can easily provide high-resolution (HR) color images. Using a color image as a guide is an efficient way to obtain an HR depth image. In this paper, we propose a depth image super-resolution (SR) algorithm that takes an HR color image as a guide and an LR depth image as input. We use a fusion of the guided filter and an edge-based joint bilateral filter to obtain the HR depth image. Our experimental results on the Middlebury 2005 datasets show that our method provides better-quality HR depth images both numerically and visually.
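As a sketch of the joint-bilateral half of the fusion filter, the deliberately naive routine below upsamples an LR depth map using the HR color image as the range guide; the guided-filter component and the paper's edge-based weighting are omitted, and all parameter values are illustrative.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, color_hr, scale, sigma_s=2.0, sigma_r=0.1, rad=2):
    """Upsample depth_lr (h, w) to the size of color_hr (H, W, 3), weighting
    LR neighbors by spatial distance and HR color similarity."""
    H, W = color_hr.shape[:2]
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            acc = wsum = 0.0
            for dy in range(-rad, rad + 1):
                for dx in range(-rad, rad + 1):
                    ylr = min(max((y + dy * scale) // scale, 0), depth_lr.shape[0] - 1)
                    xlr = min(max((x + dx * scale) // scale, 0), depth_lr.shape[1] - 1)
                    yh = min(max(y + dy * scale, 0), H - 1)
                    xh = min(max(x + dx * scale, 0), W - 1)
                    w_s = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                    diff = color_hr[y, x] - color_hr[yh, xh]
                    w_r = np.exp(-float(diff @ diff) / (2 * sigma_r ** 2))
                    acc += w_s * w_r * depth_lr[ylr, xlr]
                    wsum += w_s * w_r
            out[y, x] = acc / wsum
    return out
```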
NASA Technical Reports Server (NTRS)
2005-01-01
[figure removed for brevity, see original site]
The THEMIS VIS camera is capable of capturing color images of the Martian surface using five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from using multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. This false color image was collected during Southern Fall and shows part of the Aureum Chaos. Image information: VIS instrument. Latitude -3.6, Longitude 332.9 East (27.1 West). 35 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
NASA Technical Reports Server (NTRS)
NASA Technical Reports Server (NTRS)
2004-01-01
[figure removed for brevity, see original site]
Released 13 May 2004 This nighttime visible color image was collected on November 26, 2002 during the Northern Summer season near the North Polar Cap Edge. The THEMIS VIS camera is capable of capturing color images of the martian surface using its five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from the use of multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. Image information: VIS instrument. Latitude 80, Longitude 43.2 East (316.8 West). 38 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically or geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
NASA Astrophysics Data System (ADS)
Li, Chuang; Min, Fuhong; Jin, Qiusen; Ma, Hanyuan
2017-12-01
An active charge-controlled memristive Chua's circuit is implemented, and its basic properties are analyzed. Firstly, with the system trajectory starting from an equilibrium point, the dynamic behavior of multiple coexisting attractors depending on the memristor initial value and the system parameter is studied, which reveals coexisting behaviors of point, periodic, chaotic, and quasi-periodic attractors. Secondly, with the system motion starting from a non-equilibrium point, the dynamics of extreme multistability in a wide initial value domain are easily confirmed by new analytical methods. Furthermore, the simulation results indicate that strange chaotic attractors of multi-wing and multi-scroll type are observed when the observed signals are extended from voltage and current to power and energy, respectively. Specifically, when different initial conditions are taken, coexisting strange chaotic attractors between the power and energy signals are exhibited. Finally, the chaotic sequences of the new system are used for encrypting a color image to protect image information security. The encryption performance is analyzed in terms of statistical histograms, correlation, key space, and key sensitivity. Simulation results show that the new memristive chaotic system has high security in color image encryption.
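To illustrate the general idea of driving an image cipher with a chaotic sequence, here is a minimal sketch that XORs image bytes with a keystream generated by a logistic map. The paper derives its sequences from the memristive Chua system, so the map, seed, and parameter below are stand-in assumptions.

    import numpy as np

    def logistic_keystream(n, x0=0.3141592, r=3.99):
        # Iterate the logistic map x <- r*x*(1-x); (x0, r) play the role of the key.
        ks = np.empty(n)
        x = x0
        for i in range(n):
            x = r * x * (1.0 - x)
            ks[i] = x
        return (ks * 255).astype(np.uint8)

    def xor_cipher(img_u8):
        # XOR every byte of the color image with the chaotic keystream;
        # applying the same function twice decrypts.
        ks = logistic_keystream(img_u8.size)
        return np.bitwise_xor(img_u8.reshape(-1), ks).reshape(img_u8.shape)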
Edson, D.; Colvocoresses, Alden P.
1973-01-01
Remote-sensor images, including aerial and space photographs, are generally recorded on film, where the differences in density create the image of the scene. With panchromatic and multiband systems the density differences are recorded in shades of gray. On color or color infrared film, with the emulsion containing dyes sensitive to different wavelengths, a color image is created by a combination of color densities. The colors, however, can be separated by filtering or other techniques, and the color image reduced to monochromatic images in which each of the separated bands is recorded as a function of the gray scale.
Brain MR image segmentation using NAMS in pseudo-color.
Li, Hua; Chen, Chuanbo; Fang, Shaohong; Zhao, Shengrong
2017-12-01
Image segmentation plays a crucial role in various biomedical applications. In general, the segmentation of brain Magnetic Resonance (MR) images is mainly used to represent the image with several homogeneous regions instead of pixels for surgical analysis and planning. This paper proposes a new approach for segmenting MR brain images by using pseudo-color based segmentation with the Non-symmetry and Anti-packing Model with Squares (NAMS). First, the NAMS model is presented; the model represents the image with sub-patterns to preserve the image content while largely reducing data redundancy. Second, the key idea is to convert the original gray-scale brain MR image into a pseudo-colored image and then segment the pseudo-colored image with the NAMS model. The pseudo-colored image enhances the color contrast between different tissues in brain MR images, which improves both the precision of segmentation and the direct visual perceptual distinction. Experimental results indicate that, compared with other brain MR image segmentation methods, the proposed NAMS-based pseudo-color segmentation method performs better in terms of both segmentation precision and storage savings.
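The gray-to-pseudo-color step can be illustrated with a standard colormap. The paper's own transformation and the NAMS segmentation are more involved, so treat this as an assumed stand-in for the coloring stage only.

    import numpy as np
    from matplotlib import cm

    def pseudo_color(gray_slice):
        # Normalize a grayscale MR slice to [0, 1] and map it through a
        # colormap to increase apparent contrast between tissues.
        rng = gray_slice.max() - gray_slice.min()
        g = (gray_slice - gray_slice.min()) / (rng + 1e-9)
        return (cm.viridis(g)[..., :3] * 255).astype(np.uint8)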
Human visual system-based smoking event detection
NASA Astrophysics Data System (ADS)
Odetallah, Amjad D.; Agaian, Sos S.
2012-06-01
Human action (e.g. smoking, eating, and phoning) analysis is an important task in various application domains like video surveillance, video retrieval, human-computer interaction systems, and so on. Smoke detection is a crucial task in many video surveillance applications and could have a great impact on raising the level of safety of urban areas, public parks, airplanes, hospitals, schools, and others. The detection task is challenging since there is no prior knowledge about the object's shape, texture, and color. In addition, its visual features change under different lighting and weather conditions. This paper presents a new scheme of a system for detecting human smoking events, or small smoke, in a sequence of images. In the developed system, motion detection and background subtraction are combined with motion-region saving, skin-based image segmentation, and smoke-based image segmentation to capture potential smoke regions, which are further analyzed to decide on the occurrence of smoking events. Experimental results show the effectiveness of the proposed approach. The developed method is also capable of detecting small smoking events involving uncertain actions and cigarettes of various sizes, colors, and shapes.
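A minimal OpenCV sketch of the front end of such a pipeline follows: background subtraction supplies motion candidates, and a loose HSV gate keeps only grayish, smoke-like pixels. The thresholds and morphological kernel are illustrative assumptions; the paper's skin- and smoke-based segmentation stages are more elaborate.

    import cv2
    import numpy as np

    bg = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25)

    def candidate_smoke_mask(frame_bgr):
        # 1) Motion mask via background subtraction, cleaned with an opening.
        motion = bg.apply(frame_bgr)
        motion = cv2.morphologyEx(motion, cv2.MORPH_OPEN,
                                  np.ones((3, 3), np.uint8))
        # 2) Smoke-like pixels: low saturation, mid-to-high brightness.
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        smoke_like = cv2.inRange(hsv, (0, 0, 80), (180, 60, 255))
        return cv2.bitwise_and(motion, smoke_like)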
Attique, Muhammad; Gilanie, Ghulam; Hafeez-Ullah; Mehmood, Malik S.; Naweed, Muhammad S.; Ikram, Masroor; Kamran, Javed A.; Vitkin, Alex
2012-01-01
Characterization of tissues like brain by using magnetic resonance (MR) images and colorization of the gray scale image has been reported in the literature, along with the advantages and drawbacks. Here, we present two independent methods: (i) a novel colorization method to underscore the variability in brain MR images, indicative of the underlying physical density of biological tissue, and (ii) a segmentation method (both hard and soft segmentation) to characterize gray brain MR images. The segmented images are then transformed into color using the above-mentioned colorization method, yielding promising results for manual tracing. Our color transformation incorporates voxel classification by matching the luminance of voxels in the source MR image to that of the provided color image, measuring the distance between them. The segmentation method is based on single-phase clustering for 2D and 3D image segmentation with a new auto centroid selection method, which divides the image into three distinct regions (gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF)) using prior anatomical knowledge. Results have been successfully validated on human T2-weighted (T2) brain MR images. The proposed method can potentially be applied to gray-scale images from other imaging modalities, bringing out additional diagnostic tissue information contained in the colorized image processing approach as described. PMID:22479421
Demosaiced pixel super-resolution for multiplexed holographic color imaging
Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan
2016-01-01
To synthesize a holographic color image, one can sequentially take three holograms at different wavelengths, e.g., at red (R), green (G) and blue (B) parts of the spectrum, and digitally merge them. To speed up the imaging process by a factor of three, a Bayer color sensor-chip can also be used to demultiplex three wavelengths that simultaneously illuminate the sample and to digitally retrieve individual sets of holograms using the known transmission spectra of the Bayer color filters. However, because the pixels of different channels (R, G, B) on a Bayer color sensor are not at the same physical location, conventional demosaicing techniques generate color artifacts in holographic imaging using simultaneous multi-wavelength illumination. Here we demonstrate that pixel super-resolution can be merged into the color de-multiplexing process to significantly suppress the artifacts in wavelength-multiplexed holographic color imaging. This new approach, termed Demosaiced Pixel Super-Resolution (D-PSR), generates color images that are similar in performance to sequential illumination at three wavelengths, and therefore improves the speed of holographic color imaging by 3-fold. The D-PSR method is broadly applicable to holographic microscopy applications where high-resolution imaging and multi-wavelength illumination are desired. PMID:27353242
Comparison of lossless compression techniques for prepress color images
NASA Astrophysics Data System (ADS)
Van Assche, Steven; Denecker, Koen N.; Philips, Wilfried R.; Lemahieu, Ignace L.
1998-12-01
In the pre-press industry color images have both a high spatial and a high color resolution. Such images require a considerable amount of storage space and impose long transmission times. Data compression is desired to reduce these storage and transmission problems. Because of the high quality requirements in the pre-press industry only lossless compression is acceptable. Most existing lossless compression schemes operate on gray-scale images. In this case the color components of color images must be compressed independently. However, higher compression ratios can be achieved by exploiting inter-color redundancies. In this paper we present a comparison of three state-of-the-art lossless compression techniques which exploit such color redundancies: IEP (Inter-color Error Prediction) and a KLT-based technique, which are both linear color decorrelation techniques, and Interframe CALIC, which uses a non-linear approach to color decorrelation. It is shown that these techniques are able to exploit color redundancies and that color decorrelation can be done effectively and efficiently. The linear color decorrelators provide a considerable coding gain (about 2 bpp) on some typical prepress images. The non-linear interframe CALIC predictor does not yield better results, but the full interframe CALIC technique does.
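As a concrete example of a linear, lossless color decorrelator of the kind compared above, the JPEG 2000 reversible color transform (RCT) maps RGB to one luma and two integer chroma channels; comparing zeroth-order entropies before and after gives a rough sense of the coding gain. The RCT stands in here for the IEP and KLT techniques the paper actually evaluates.

    import numpy as np

    def rct(rgb_u8):
        # JPEG 2000 reversible color transform: integer and exactly invertible,
        # so it is suitable for lossless coding.
        r, g, b = (rgb_u8[..., i].astype(np.int32) for i in range(3))
        y = (r + 2 * g + b) >> 2
        u = r - g
        v = b - g
        return y, u, v

    def entropy_bits(channel):
        # Zeroth-order entropy in bits per sample, a crude compressibility proxy.
        _, counts = np.unique(channel, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())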
Hanson, Erik A; Lundervold, Arvid
2013-11-01
Multispectral, multichannel, or time series image segmentation is important for image analysis in a wide range of applications. Regularization of the segmentation is commonly performed using local image information, causing the segmented image to be locally smooth or piecewise constant. A new spatial regularization method, incorporating non-local information, was developed and tested. Our spatial regularization method applies to feature space classification in multichannel images such as color images and MR image sequences. The spatial regularization involves local edge properties, region boundary minimization, as well as non-local similarities. The method is implemented in a discrete graph-cut setting allowing fast computations. The method was tested on multidimensional MRI recordings from human kidney and brain in addition to simulated MRI volumes. The proposed method successfully segments regions with both smooth and complex non-smooth shapes with a minimum of user interaction.
Heinke, Florian; Bittrich, Sebastian; Kaiser, Florian; Labudde, Dirk
2016-01-01
To understand the molecular function of biopolymers, studying their structural characteristics is of central importance. Graphics programs are often utilized to visualize these properties, but with the increasing number of structures available in databases, or of structure models produced by automated modeling frameworks, this process requires assistance from tools that allow automated structure visualization. In this paper a web server and its underlying method for generating graphical sequence representations of molecular structures are presented. The method, called SequenceCEROSENE (color encoding of residues obtained by spatial neighborhood embedding), retrieves the sequence of each amino acid or nucleotide chain in a given structure and produces a color coding for each residue based on three-dimensional structure information. From this, color-highlighted sequences are obtained, where residue coloring represents the three-dimensional residue location in the structure. This color encoding thus provides a one-dimensional representation, from which spatial interactions, proximity, and relations between residues or entire chains can be deduced quickly and solely from color similarity. Furthermore, additional heteroatoms and chemical compounds bound to the structure, like ligands or coenzymes, are processed and reported as well. To provide free access to SequenceCEROSENE, a web server has been implemented that allows generating color codings for structures deposited in the Protein Data Bank or structure models uploaded by the user. Besides retrieving visualizations in popular graphic formats, the underlying raw data can be downloaded as well. In addition, the server provides user interactivity with generated visualizations and the three-dimensional structure in question. Color-encoded sequences generated by SequenceCEROSENE can help the researcher quickly perceive the general characteristics of a structure of interest (or entire sets of complexes), supporting the initial phase of structure-based studies. In this respect, the web server can be a valuable tool, as users are allowed to process multiple structures, quickly switch between results, and interact with generated visualizations in an intuitive manner. The SequenceCEROSENE web server is available at https://biosciences.hs-mittweida.de/seqcerosene.
Curiosity's Mars Hand Lens Imager (MAHLI) Investigation
Edgett, Kenneth S.; Yingst, R. Aileen; Ravine, Michael A.; Caplinger, Michael A.; Maki, Justin N.; Ghaemi, F. Tony; Schaffner, Jacob A.; Bell, James F.; Edwards, Laurence J.; Herkenhoff, Kenneth E.; Heydari, Ezat; Kah, Linda C.; Lemmon, Mark T.; Minitti, Michelle E.; Olson, Timothy S.; Parker, Timothy J.; Rowland, Scott K.; Schieber, Juergen; Sullivan, Robert J.; Sumner, Dawn Y.; Thomas, Peter C.; Jensen, Elsa H.; Simmonds, John J.; Sengstacken, Aaron J.; Wilson, Reg G.; Goetz, Walter
2012-01-01
The Mars Science Laboratory (MSL) Mars Hand Lens Imager (MAHLI) investigation will use a 2-megapixel color camera with a focusable macro lens aboard the rover, Curiosity, to investigate the stratigraphy and grain-scale texture, structure, mineralogy, and morphology of geologic materials in northwestern Gale crater. Of particular interest is the stratigraphic record of a ~5 km thick layered rock sequence exposed on the slopes of Aeolis Mons (also known as Mount Sharp). The instrument consists of three parts: a camera head mounted on the turret at the end of a robotic arm, an electronics and data storage assembly located inside the rover body, and a calibration target mounted on the robotic arm shoulder azimuth actuator housing. MAHLI can acquire in-focus images at working distances from ~2.1 cm to infinity. At the minimum working distance, image pixel scale is ~14 μm per pixel and very coarse silt grains can be resolved. At the working distance of the Mars Exploration Rover Microscopic Imager cameras aboard Spirit and Opportunity, MAHLI's resolution is comparable at ~30 μm per pixel. Onboard capabilities include autofocus, auto-exposure, sub-framing, video imaging, Bayer pattern color interpolation, lossy and lossless compression, focus merging of up to 8 focus stack images, white light and longwave ultraviolet (365 nm) illumination of nearby subjects, and 8 gigabytes of non-volatile memory data storage.
A study of glasses-type color CGH using a color filter considering reduction of blurring
NASA Astrophysics Data System (ADS)
Iwami, Saki; Sakamoto, Yuji
2009-02-01
We have developed a glasses-type color computer generated hologram (CGH) using a color filter. The proposed glasses consist of two "lenses" made of overlapping holograms and color filters. The holograms, which are calculated to reconstruct images in each primary color, are divided into small areas, which we call cells, and superimposed on one hologram. In the same way, the colors of the filter correspond to the hologram cells. The device can be configured very simply without a complex optical system, and the configuration yields a small, lightweight system suitable for glasses. When the cells are small enough, the colors are mixed and reconstructed color images are observed; the color expression of the reconstructed images also improves. However, using small cells blurs the reconstructed images for the following reasons: (1) interference between cells due to correlation among them, and (2) reduction of resolution caused by the size of the cell hologram. We are investigating how to make a hologram that yields high-resolution reconstructed color images without ghost images. In this paper, we discuss (1) the details of the proposed glasses-type color CGH, (2) the appropriate cell size for the eye, (3) the effects of cell shape on the reconstructed images, and (4) a new method to reduce the blurring of the images.
2014-01-01
Background: Ambiscript is a graphically-designed nucleic acid notation that uses symbol symmetries to support sequence complementation, highlight biologically-relevant palindromes, and facilitate the analysis of consensus sequences. Although the original Ambiscript notation was designed to easily represent consensus sequences for multiple sequence alignments, the notation's black-on-white ambiguity characters are unable to reflect the statistical distribution of nucleotides found at each position. We now propose a color-augmented ambigraphic notation to encode the frequency of positional polymorphisms in these consensus sequences. Results: We have implemented this color-coding approach by creating an Adobe Flash® application (http://www.ambiscript.org) that shades and colors modified Ambiscript characters according to the prevalence of the encoded nucleotide at each position in the alignment. The resulting graphic helps viewers perceive biologically-relevant patterns in multiple sequence alignments by uniquely combining color, shading, and character symmetries to highlight palindromes and inverted repeats in conserved DNA motifs. Conclusion: Juxtaposing an intuitive color scheme over the deliberate character symmetries of an ambigraphic nucleic acid notation yields a highly-functional nucleic acid notation that maximizes information content and successfully embodies key principles of graphic excellence put forth by the statistician and graphic design theorist, Edward Tufte. PMID:24447494
NASA Technical Reports Server (NTRS)
Serebreny, S. M.; Evans, W. E.; Wiegman, E. J.
1974-01-01
The usefulness of dynamic display techniques in exploiting the repetitive nature of ERTS imagery was investigated. A specially designed Electronic Satellite Image Analysis Console (ESIAC) was developed and employed to process data for seven ERTS principal investigators studying dynamic hydrological conditions for diverse applications. These applications include measurement of snowfield extent and sediment plumes from estuary discharge, Playa Lake inventory, and monitoring of phreatophyte and other vegetation changes. The ESIAC provides facilities for storing registered image sequences in a magnetic video disc memory for subsequent recall, enhancement, and animated display in monochrome or color. The most unique feature of the system is the capability to time lapse the imagery and analytic displays of the imagery. Data products included quantitative measurements of distances and areas, binary thematic maps based on monospectral or multispectral decisions, radiance profiles, and movie loops. Applications of animation for uses other than creating time-lapse sequences are identified. Input to the ESIAC can be either digital or via photographic transparencies.
Jobke, B.; Bolbos, R.; Saadat, E.; Cheng, J.; Li, X.; Majumdar, S.
2012-01-01
The application of biomolecular magnetic resonance imaging becomes increasingly important in the context of early cartilage changes in degenerative and inflammatory joint disease before gross morphological changes become apparent. In this limited technical report, we investigate the correlation of MRI T1, T2 and T1
A robust color image fusion for low light level and infrared images
NASA Astrophysics Data System (ADS)
Liu, Chao; Zhang, Xiao-hui; Hu, Qing-ping; Chen, Yong-kang
2016-09-01
The low light level and infrared color fusion technology has achieved great success in the field of night vision. The technology is designed to make hot targets in the fused image pop out in more intense colors, render background details with a color appearance close to nature, and improve target discovery, detection, and identification. Low light level images are very noisy under low illumination, and existing color fusion methods are easily degraded by noise in the low light level channel. To be explicit, when the low light level image noise is large, the quality of the fused image decreases significantly, and targets in the infrared image can even be submerged by the noise. This paper proposes an adaptive color night vision technique in which noise evaluation parameters of the low light level image are introduced into the fusion process, improving the robustness of the color fusion. The color fusion results remain good in low-light situations, which shows that this method can effectively improve the quality of the fused low light level and infrared image under low illumination conditions.
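One way to make the fusion noise-adaptive is to estimate the low light level channel's noise and shrink its weight accordingly before a false-color assignment. The sketch below is written under assumptions (the noise-to-weight mapping, gain, and channel assignment are invented for illustration); the paper does not publish this exact formula.

    import numpy as np
    from scipy.ndimage import laplace

    def noise_sigma(img):
        # Median absolute deviation of the Laplacian as a crude noise estimate.
        d = laplace(img.astype(np.float64))
        return 1.4826 * np.median(np.abs(d - np.median(d)))

    def adaptive_fuse(lll, ir):
        # lll, ir: registered float images in [0, 1]. Noisier LLL -> lower weight.
        w = 1.0 / (1.0 + 20.0 * noise_sigma(lll))   # 20.0 is an assumed gain
        base = w * lll + (1.0 - w) * ir
        # Simple false-color assignment: IR drives red so hot targets pop out.
        return np.dstack([ir, base, lll])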
Luminance contours can gate afterimage colors and "real" colors.
Anstis, Stuart; Vergeer, Mark; Van Lier, Rob
2012-09-06
It has long been known that colored images may elicit afterimages in complementary colors. We have already shown (Van Lier, Vergeer, & Anstis, 2009) that one and the same adapting image may result in different afterimage colors, depending on the test contours presented after the colored image. The color of the afterimage depends on two adapting colors, those both inside and outside the test. Here, we further explore this phenomenon and show that the color-contour interactions shown for afterimage colors also occur for "real" colors. We argue that similar mechanisms apply for both types of stimulation.
NASA Astrophysics Data System (ADS)
El-Saba, A. M.; Alam, M. S.; Surpanani, A.
2006-05-01
Important aspects of automatic pattern recognition systems are their ability to efficiently discriminate and detect proper targets with low false alarm rates. In this paper we extend the applications of passive imaging polarimetry to effectively discriminate and detect different color targets of identical shapes using a color-blind imaging sensor. For this case study we demonstrate that traditional color-blind polarization-insensitive imaging sensors that rely only on the spatial distribution of targets suffer from high false detection rates, especially in scenarios where multiple identical-shape targets are present. On the other hand, we show that color-blind polarization-sensitive imaging sensors can successfully and efficiently discriminate and detect true targets based on their color only. We highlight the main advantages of using our proposed polarization-encoded imaging sensor.
Lightness modification of color image for protanopia and deuteranopia
NASA Astrophysics Data System (ADS)
Tanaka, Go; Suetake, Noriaki; Uchino, Eiji
2010-01-01
In multimedia content, colors play important roles in conveying visual information. However, color information cannot always be perceived uniformly by all people. People with a color vision deficiency, such as dichromacy, cannot recognize and distinguish certain color combinations. In this paper, an effective lightness modification method is proposed that enables barrier-free color vision for people with dichromacy, especially protanopia or deuteranopia, while preserving the color information of the original image for people with standard color vision. In the proposed method, an optimization problem concerning lightness components is first defined by considering color differences in an input image. A color image that is perceptible and comprehensible both to protanopes or deuteranopes and to viewers with no color vision deficiency is then obtained by solving the optimization problem. The effectiveness of the proposed method is illustrated through experiments.
Quantifying nonhomogeneous colors in agricultural materials part I: method development.
Balaban, M O
2008-11-01
Measuring the color of food and agricultural materials using machine vision (MV) has advantages not available with other measurement methods such as subjective tests or color meters. The perception of consumers may be affected by the nonuniformity of colors. For relatively uniform colors, average color values similar to those given by color meters can be obtained by MV. For nonuniform colors, various image analysis methods (color blocks, contours, and "color change index" [CCI]) can be applied to images obtained by MV. The degree of nonuniformity can be quantified, depending on the level of detail desired. In this article, the development of the CCI concept is presented. For images with a wide range of hue values, the color blocks method quantifies the nonhomogeneity of colors well. For images with a narrow hue range, the CCI method is a better indicator of color nonhomogeneity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kimpe, T; Marchessoux, C; Rostang, J
Purpose: Use of color images in medical imaging has increased significantly over the last few years. As of today there is no agreed standard on how color information should be visualized on medical color displays, resulting in large variability of color appearance and making consistency and quality assurance a challenge. This paper presents a proposal for an extension of the DICOM GSDF towards color. Methods: Visualization needs for several color modalities (multimodality imaging, nuclear medicine, digital pathology, quantitative imaging applications…) have been studied. On this basis a proposal was made for the desired color behavior of medical color display systems, and its behavior and effect on color medical images was analyzed. Results: Several medical color modalities could benefit from perceptually linear color visualization for reasons similar to those for which the GSDF was put in place for grayscale medical images. An extension of the GSDF (Grayscale Standard Display Function) to color is proposed: CSDF (Color Standard Display Function). CSDF is based on deltaE2000 and offers perceptually linear color behavior. CSDF uses the GSDF as its neutral gray behavior. A comparison between sRGB/GSDF and CSDF confirms that CSDF significantly improves perceptual color linearity. Furthermore, results also indicate that because of the improved perceptual linearity, CSDF has the potential to increase the perceived contrast of clinically relevant color features. Conclusion: There is a need for an extension of the GSDF towards color visualization in order to guarantee consistency and quality. A first proposal (CSDF) for such an extension has been made. The behavior of a CSDF-calibrated display has been characterized and compared with sRGB/GSDF behavior. First results indicate that CSDF could have a positive influence on the perceived contrast of clinically relevant color features and could offer benefits for quantitative imaging applications. The authors are employees of Barco Healthcare.
Color transfer method preserving perceived lightness
NASA Astrophysics Data System (ADS)
Ueda, Chiaki; Azetsu, Tadahiro; Suetake, Noriaki; Uchino, Eiji
2016-06-01
Color transfer originally proposed by Reinhard et al. is a method to change the color appearance of an input image by using the color information of a reference image. The purpose of this study is to modify color transfer so that it works well even when the scenes of the input and reference images are not similar. Concretely, a color transfer method with lightness correction and color gamut adjustment is proposed. The lightness correction is applied to preserve the perceived lightness which is explained by the Helmholtz-Kohlrausch (H-K) effect. This effect is the phenomenon that vivid colors are perceived as brighter than dull colors with the same lightness. Hence, when the chroma is changed by image processing, the perceived lightness is also changed even if the physical lightness is preserved after the image processing. In the proposed method, by considering the H-K effect, color transfer that preserves the perceived lightness after processing is realized. Furthermore, color gamut adjustment is introduced to address the color gamut problem, which is caused by color space conversion. The effectiveness of the proposed method is verified by performing some experiments.
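For reference, the baseline Reinhard-style transfer that the paper modifies can be sketched as per-channel mean and standard deviation matching in a perceptual color space. The original work used the lαβ space; CIELAB is substituted here for convenience, and the H-K lightness correction and gamut adjustment of the proposed method are not included.

    import numpy as np
    import cv2

    def reinhard_transfer(src_bgr, ref_bgr):
        # Match each channel's mean and std to the reference image's statistics.
        src = cv2.cvtColor(src_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
        ref = cv2.cvtColor(ref_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
        ms, ss = src.reshape(-1, 3).mean(0), src.reshape(-1, 3).std(0)
        mr, sr = ref.reshape(-1, 3).mean(0), ref.reshape(-1, 3).std(0)
        out = (src - ms) * (sr / (ss + 1e-6)) + mr
        out = np.clip(out, 0, 255).astype(np.uint8)
        return cv2.cvtColor(out, cv2.COLOR_LAB2BGR)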
The Constancy of Colored After-Images
Zeki, Semir; Cheadle, Samuel; Pepper, Joshua; Mylonas, Dimitris
2017-01-01
We undertook psychophysical experiments to determine whether the color of the after-image produced by viewing a colored patch which is part of a complex multi-colored scene depends on the wavelength-energy composition of the light reflected from that patch. Our results show that it does not. The after-image, just like the color itself, depends on the ratio of light of different wavebands reflected from it and its surrounds. Hence, traditional accounts of after-images as being the result of retinal adaptation or the perceptual result of physiological opponency, are inadequate. We propose instead that the color of after-images is generated after colors themselves are generated in the visual brain. PMID:28539878
Parallel human genome analysis: microarray-based expression monitoring of 1000 genes.
Schena, M; Shalon, D; Heller, R; Chai, A; Brown, P O; Davis, R W
1996-01-01
Microarrays containing 1046 human cDNAs of unknown sequence were printed on glass with high-speed robotics. These 1.0-cm2 DNA "chips" were used to quantitatively monitor differential expression of the cognate human genes using a highly sensitive two-color hybridization assay. Array elements that displayed differential expression patterns under given experimental conditions were characterized by sequencing. The identification of known and novel heat shock and phorbol ester-regulated genes in human T cells demonstrates the sensitivity of the assay. Parallel gene analysis with microarrays provides a rapid and efficient method for large-scale human gene discovery. PMID:8855227
Estimation of color modification in digital images by CFA pattern change.
Choi, Chang-Hee; Lee, Hae-Yeoun; Lee, Heung-Kyu
2013-03-10
Extensive studies have been carried out for detecting image forgery such as copy-move, re-sampling, blurring, and contrast enhancement. Although color modification is a common forgery technique, there is no reported forensic method for detecting this type of manipulation. In this paper, we propose a novel algorithm for estimating color modification in images acquired from digital cameras. Most commercial digital cameras are equipped with a color filter array (CFA) for acquiring the color information of each pixel. As a result, the images acquired from such digital cameras include a trace of the CFA pattern. This pattern is composed of the basic red, green, and blue (RGB) colors, and it is changed when color modification is carried out on the image. We designed an advanced intermediate-value counting method for measuring the change in the CFA pattern and estimating the extent of color modification. The proposed method is verified experimentally by using 10,366 test images. The results confirmed the ability of the proposed method to estimate color modification with high accuracy. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Brédart, Serge; Cornet, Alyssa; Rakic, Jean-Marie
2014-01-01
Color-deficient (dichromat) and normal observers' recognition memory for colored and black-and-white natural scenes was evaluated through several parameters: the rate of recognition, discrimination (A'), response bias (B"D), response confidence, and the proportion of conscious recollections (Remember responses) among hits. At the encoding phase, 36 images of natural scenes were each presented for 1 sec. Half of the images were shown in color and half in black-and-white. At the recognition phase, these 36 pictures were intermixed with 36 new images. The participants' task was to indicate whether an image had been presented at the encoding phase, to rate their level of confidence in their response, and, in the case of a positive response, to classify the response as a Remember, a Know, or a Guess response. Results indicated that accuracy, response discrimination, response bias, and confidence ratings were higher for colored than for black-and-white images; this advantage for colored images was similar in both groups of participants. Rates of Remember responses were not higher for colored images than for black-and-white ones in either group. Interestingly, however, Remember responses were significantly more often based on color information for colored than for black-and-white images in normal observers only, not in dichromats.
Enceladus Setting Behind Saturn (Image & Movie)
2017-09-15
Saturn's active, ocean-bearing moon Enceladus sinks behind the giant planet in a farewell portrait from NASA's Cassini spacecraft. This view of Enceladus was taken by NASA's Cassini spacecraft on Sept. 13, 2017. It is among the last images Cassini sent back. The view is part of a movie sequence of images taken over a period of 40 minutes as the icy moon passed behind Saturn from the spacecraft's point of view. Images taken using red, green and blue spectral filters were assembled to create the natural color view. (A monochrome version of the image, taken using a clear spectral filter, is also available.) The images were taken using Cassini's narrow-angle camera at a distance of 810,000 miles (1.3 million kilometers) from Enceladus and about 620,000 miles (1 million kilometers) from Saturn. Image scale on Enceladus is 5 miles (8 kilometers) per pixel. A movie is available at https://photojournal.jpl.nasa.gov/catalog/PIA21889
NASA Technical Reports Server (NTRS)
2004-01-01
[figure removed for brevity, see original site]
Released 28 May 2004 This image was collected February 29, 2004 during the end of the southern summer season. The local time at the location of the image was about 2 pm. The image shows an area in the South Polar region. The THEMIS VIS camera is capable of capturing color images of the martian surface using its five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from the use of multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. Image information: VIS instrument. Latitude -84.7, Longitude 9.3 East (350.7 West). 38 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically or geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
A novel false color mapping model-based fusion method of visual and infrared images
NASA Astrophysics Data System (ADS)
Qi, Bin; Kun, Gao; Tian, Yue-xin; Zhu, Zhen-yu
2013-12-01
A fast and efficient image fusion method is presented to generate near-natural colors from panchromatic visual and thermal imaging sensors. First, a set of daytime color reference images is analyzed and a false color mapping principle is proposed according to human visual and emotional habits: object colors should remain invariant after color mapping operations, differences between infrared and visual images should be enhanced, and the background color should be consistent with the main scene content. Then a novel nonlinear color mapping model is given by introducing the geometric mean of the gray values of the input visual and infrared images together with a weighted-average algorithm. To determine the control parameters of the mapping model, boundary conditions are listed according to the mapping principle above. Fusion experiments show that the new fusion method achieves a near-natural appearance of the fused image and, compared with the traditional TNO algorithm, enhances color contrast and highlights bright infrared objects. Moreover, it has low complexity and is easy to realize in real-time processing, so it is well suited for nighttime imaging apparatus.
Visual wetness perception based on image color statistics.
Sawayama, Masataka; Adelson, Edward H; Nishida, Shin'ya
2017-05-01
Color vision provides humans and animals with the abilities to discriminate colors based on the wavelength composition of light and to determine the location and identity of objects of interest in cluttered scenes (e.g., ripe fruit among foliage). However, we argue that color vision can inform us about much more than color alone. Since a trichromatic image carries more information about the optical properties of a scene than a monochromatic image does, color can help us recognize complex material qualities. Here we show that human vision uses color statistics of an image for the perception of an ecologically important surface condition (i.e., wetness). Psychophysical experiments showed that overall enhancement of chromatic saturation, combined with a luminance tone change that increases the darkness and glossiness of the image, tended to make dry scenes look wetter. Theoretical analysis along with image analysis of real objects indicated that our image transformation, which we call the wetness enhancing transformation, is consistent with actual optical changes produced by surface wetting. Furthermore, we found that the wetness enhancing transformation operator was more effective for the images with many colors (large hue entropy) than for those with few colors (small hue entropy). The hue entropy may be used to separate surface wetness from other surface states having similar optical properties. While surface wetness and surface color might seem to be independent, there are higher order color statistics that can influence wetness judgments, in accord with the ecological statistics. The present findings indicate that the visual system uses color image statistics in an elegant way to help estimate the complex physical status of a scene.
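The wetness enhancing transformation is described as a saturation boost combined with a darkening, gloss-increasing tone change; a minimal HSV approximation is sketched below. The gain and gamma values are assumptions, not the parameters fitted in the paper.

    import numpy as np
    import cv2

    def wetness_enhance(bgr_u8, sat_gain=1.6, tone_gamma=1.8):
        # Boost chroma and darken midtones to mimic the optical effects of wetting.
        hsv = cv2.cvtColor(bgr_u8, cv2.COLOR_BGR2HSV).astype(np.float32)
        hsv[..., 1] = np.clip(hsv[..., 1] * sat_gain, 0, 255)        # saturation up
        hsv[..., 2] = 255.0 * (hsv[..., 2] / 255.0) ** tone_gamma    # luminance down
        return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)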
Color Sparse Representations for Image Processing: Review, Models, and Prospects.
Barthélemy, Quentin; Larue, Anthony; Mars, Jérôme I
2015-11-01
Sparse representations have been extended to deal with color images composed of three channels. A review of dictionary-learning-based sparse representations for color images is given here, detailing the differences between the models and comparing their results on real and simulated data. These models are considered in a unifying framework based on the degrees of freedom of the linear filtering/transformation of the color channels. Moreover, this framework shows that the scalar quaternionic linear model is equivalent to constrained matrix-based color filtering, which highlights the filtering implicitly applied through this model. Based on this reformulation, a new color filtering model is introduced, using unconstrained filters. In this model, spatial morphologies of color images are encoded by atoms, and colors are encoded by color filters. Color variability is no longer captured by increasing the dictionary size but by the color filters, which gives an efficient color representation.
Specialized Color Targets for Spectral Reflectance Reconstruction of Magnified Images
NASA Astrophysics Data System (ADS)
Kruschwitz, Jennifer D. T.
Digital images are used almost exclusively instead of film to capture visual information across many scientific fields. The colorimetric color representation within these digital images can be relayed from the digital counts produced by the camera with the use of a known color target. In image capture of magnified images, there is currently no reliable color target that can be used at multiple magnifications and give the user a solid understanding of the color ground truth within those images. The first part of this dissertation included the design, fabrication, and testing of a color target produced with optical interference coated microlenses for use in an off-axis illumination, compound microscope. An ideal target was designed to increase the color gamut for colorimetric imaging and provide the necessary "Block Dye" spectral reflectance profiles across the visible spectrum to reduce the number of color patches necessary for multiple filter imaging systems that rely on statistical models for spectral reflectance reconstruction. There are other scientific disciplines that can benefit from a specialized color target to determine the color ground truth in their magnified images and perform spectral estimation. Not every discipline has the luxury of having a multi-filter imaging system. The second part of this dissertation developed two unique ways of using an interference coated color mirror target: one that relies on multiple light-source angles, and one that leverages a dynamic color change with time. The source multi-angle technique would be used for the microelectronic discipline where the reconstructed spectral reflectance would be used to determine a dielectric film thickness on a silicon substrate, and the time varying technique would be used for a biomedical example to determine the thickness of human tear film.
NASA Astrophysics Data System (ADS)
Hashimoto, Atsushi; Suehara, Ken-Ichiro; Kameoka, Takaharu
To measure quantitative surface color information of agricultural products together with ambient information during cultivation, a color calibration method for digital camera images and a Web-based remote color-imaging monitoring system were developed. Single-lens reflex and web digital cameras were used for image acquisition. Tomato images through the post-ripening process were taken by digital camera both in the standard image acquisition system and under field conditions from morning to evening. Several kinds of images were acquired with a standard RGB color chart set up just behind the tomato fruit on a black matte, and a color calibration was carried out. The influence of sunlight could be experimentally eliminated, and the calibrated color information consistently agreed with the standard values acquired in the system through the post-ripening process. Furthermore, the surface color change of the tomato on the tree in a greenhouse was remotely monitored during maturation using the digital cameras equipped with the Field Server. The acquired digital color images were sent from the Farm Station to the BIFE Laboratory of Mie University via VPN. The time behavior of the tomato surface color change during the maturing process could be measured using the color parameter calculated from the obtained and calibrated color images along with the ambient atmospheric record. This study is an important step in developing surface color analysis for simple and rapid evaluation of crop vigor in the field and in constructing an ambient, networked remote monitoring system for food security, precision agriculture, and agricultural research.
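The chart-based calibration step, in which image RGB values of the chart patches are mapped to their known reference values, is typically solved as a least-squares color correction matrix. The affine form below is a common choice and is offered as a sketch, not the authors' exact procedure.

    import numpy as np

    def fit_ccm(measured, reference):
        # measured, reference: N x 3 RGB values for the N chart patches.
        # Solve [R G B 1] @ M ~= reference in the least-squares sense (M is 4 x 3).
        A = np.hstack([measured, np.ones((len(measured), 1))])
        M, *_ = np.linalg.lstsq(A, reference, rcond=None)
        return M

    def apply_ccm(img, M):
        # Apply the fitted matrix to every pixel of an H x W x 3 image.
        h, w, _ = img.shape
        A = np.hstack([img.reshape(-1, 3).astype(np.float64),
                       np.ones((h * w, 1))])
        return np.clip(A @ M, 0, 255).reshape(h, w, 3)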
The Artist, the Color Copier, and Digital Imaging.
ERIC Educational Resources Information Center
Witte, Mary Stieglitz
The impact that color-copying technology and digital imaging have had on art, photography, and design is explored. Color copiers have provided new opportunities for direct and spontaneous image making and the potential for new transformations in art. The current generation of digital color copiers permits new directions in imaging, but the…
Chemistry of the Konica Dry Color System
NASA Astrophysics Data System (ADS)
Suda, Yoshihiko; Ohbayashi, Keiji; Onodera, Kaoru
1991-08-01
While silver halide photosensitive materials offer superiority in image quality -- both in color and black-and-white -- they require chemical solutions for processing, and this can be a drawback. To overcome this, researchers turned to the thermal development of silver halide photographic materials, and met their first success with black-and-white images. Later, with the development of the Konica Dry Color System, color images were finally obtained from a completely dry thermal development system, without the use of water or chemical solutions. The dry color system is characterized by a novel chromogenic color image-forming technology and comprises four processes. (1) With the application of heat, a color developer precursor (CDP) decomposes to generate a p-phenylenediamine color developer (CD). (2) The CD then develops silver salts. (3) Oxidized CD then reacts with couplers to generate color image dyes. (4) Finally, the dyes diffuse from the system's photosensitive sheet to its image-receiving sheet. The authors have analyzed the kinetics of each of the system's four processes. In this paper, they report the kinetics of the system's first process, color developer (CD) generation.
Image domain propeller fast spin echo
Skare, Stefan; Holdsworth, Samantha J.; Lilja, Anders; Bammer, Roland
2013-01-01
A new pulse sequence for high-resolution T2-weighted (T2-w) imaging is proposed: image domain propeller fast spin echo (iProp-FSE). Similar to the T2-w PROPELLER sequence, iProp-FSE acquires data in a segmented fashion, as blades that are acquired in multiple TRs. However, the iProp-FSE blades are formed in the image domain instead of in the k-space domain. Each iProp-FSE blade resembles a single-shot fast spin echo (SSFSE) sequence with a very narrow phase-encoding field of view (FOV), after which N rotated blade replicas yield the final full circular FOV. Our method of combining the image domain blade data into a full-FOV image is detailed, and optimal choices of phase-encoding FOVs and receiver bandwidths were evaluated on phantoms and volunteers. The results suggest that a phase FOV of 15-20%, a receiver bandwidth of ±32-63 kHz, and a subsequent readout time of about 300 ms provide a good tradeoff between signal-to-noise ratio (SNR) efficiency and T2 blurring. Comparisons between iProp-FSE, Cartesian FSE, and PROPELLER were made on single-slice axial brain data, showing similar T2-w tissue contrast and SNR with great anatomical conspicuity at similar scan times, without colored noise or streaks from motion. A new slice interleaving order is also proposed to improve the multislice capabilities of iProp-FSE. PMID:23200683
THE RED-SEQUENCE CLUSTER SURVEY-2 (RCS-2): SURVEY DETAILS AND PHOTOMETRIC CATALOG CONSTRUCTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbank, David G.; Gladders, M. D.; Yee, H. K. C.
2011-03-15
The second Red-sequence Cluster Survey (RCS-2) is a ~1000 deg², multi-color imaging survey using the square-degree imager, MegaCam, on the Canada-France-Hawaii Telescope. It is designed to detect clusters of galaxies over the redshift range 0.1 ≲ z ≲ 1. The primary aim is to build a statistically complete, large (~10⁴) sample of clusters, covering a sufficiently long redshift baseline to be able to place constraints on cosmological parameters via the evolution of the cluster mass function. Other main science goals include building a large sample of high surface brightness, strongly gravitationally lensed arcs associated with these clusters, and an unprecedented sample of several tens of thousands of galaxy clusters and groups, spanning a large range of halo mass, with which to study the properties and evolution of their member galaxies. This paper describes the design of the survey and the methodology for acquiring, reducing, and calibrating the data for the production of high-precision photometric catalogs. We describe the method for calibrating our griz imaging data using the colors of the stellar locus and overlapping Two Micron All Sky Survey photometry. This yields an absolute accuracy of <0.03 mag on any color and ~0.05 mag in the r-band magnitude, verified with respect to the Sloan Digital Sky Survey (SDSS). Our astrometric calibration is accurate to ≪0.3″ from comparison with SDSS positions. RCS-2 reaches average 5σ point-source limiting magnitudes of griz = [24.4, 24.3, 23.7, 22.8], approximately 1-2 mag deeper than the SDSS. Due to the queue-scheduled nature of the observations, the data are highly uniform and taken in excellent seeing, mostly FWHM ≲ 0.7″ in the r band. In addition to the main science goals just described, these data form the basis for a number of other planned and ongoing projects (including the WiggleZ survey), making RCS-2 an important next-generation imaging survey.
Chromatic Modulator for High Resolution CCD or APS Devices
NASA Technical Reports Server (NTRS)
Hartley, Frank T. (Inventor); Hull, Anthony B. (Inventor)
2003-01-01
A system for providing high-resolution color separation in electronic imaging. Comb drives controllably oscillate a red-green-blue (RGB) color strip filter system (or otherwise) over an electronic imaging system such as a charge-coupled device (CCD) or active pixel sensor (APS). The color filter is modulated over the imaging array at a rate three or more times the frame rate of the imaging array. In so doing, the underlying active imaging elements are then able to detect separate color-separated images, which are then combined to provide a color-accurate frame which is then recorded as the representation of the recorded image. High pixel resolution is maintained. Registration is obtained between the color strip filter and the underlying imaging array through the use of electrostatic comb drives in conjunction with a spring suspension system.
Keene, Douglas R
2015-04-01
"Color blindness" is a variable trait, including individuals with just slight color vision deficiency to those rare individuals with a complete lack of color perception. Approximately 75% of those with color impairment are green diminished; most of those remaining are red diminished. Red-Green color impairment is sex linked with the vast majority being male. The deficiency results in reds and greens being perceived as shades of yellow; therefore red-green images presented to the public will not illustrate regions of distinction to these individuals. Tools are available to authors wishing to accommodate those with color vision deficiency; most notable are components in FIJI (an extension of ImageJ) and Adobe Photoshop. Using these tools, hues of magenta may be substituted for red in red-green images resulting in striking definition for both the color sighted and color impaired. Web-based tools may be used (importantly) by color challenged individuals to convert red-green images archived in web-accessible journal articles into two-color images, which they may then discern.
Li, Xingyu; Plataniotis, Konstantinos N
2015-07-01
In digital histopathology, tasks of segmentation and disease diagnosis are achieved by quantitative analysis of image content. However, color variation in image samples makes it challenging to produce reliable results. This paper introduces a complete normalization scheme to address the problem of color variation in histopathology images jointly caused by inconsistent biopsy staining and non-standard imaging conditions. Method: Different from existing normalization methods that either address a partial cause of color variation or lump the causes together, our method identifies the causes of color variation based on a microscopic imaging model and addresses inconsistency in biopsy imaging and staining by an illuminant normalization module and a spectral normalization module, respectively. In evaluation, we use two public datasets that are representative of histopathology images commonly received in clinics to examine the proposed method in terms of robustness to system settings, performance consistency against achromatic pixels, and normalization effectiveness with respect to preservation of histological information. As the saturation-weighted statistics proposed in this study generate stable and reliable color cues for stain normalization, our scheme is robust to system parameters and insensitive to image content and achromatic colors. Extensive experimentation suggests that our approach outperforms state-of-the-art normalization methods, being the only approach that succeeds in preserving histological information after normalization. The proposed color normalization solution should be useful for mitigating the effects of color variation in pathology images on subsequent quantitative analysis.
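As a toy stand-in for the illuminant normalization module (the paper's saturation-weighted statistics are more sophisticated), a gray-world correction rescales each channel so that the image mean becomes neutral, removing a constant color cast from non-standard imaging conditions:

    import numpy as np

    def gray_world(img_u8):
        # Scale each channel so the global mean is neutral gray; a crude
        # illuminant normalization for an H x W x 3 uint8 image.
        means = img_u8.reshape(-1, 3).mean(axis=0)
        gains = means.mean() / (means + 1e-6)
        return np.clip(img_u8 * gains, 0, 255).astype(np.uint8)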
2014-01-01
Background The 2013 BioVis Contest provided an opportunity to evaluate different paradigms for visualizing protein multiple sequence alignments. Such data sets are becoming extremely large and thus taxing current visualization paradigms. Sequence Logos represent consensus sequences but have limitations for protein alignments. As an alternative, ProfileGrids are a new protein sequence alignment visualization paradigm that represents an alignment as a color-coded matrix of the residue frequency occurring at every homologous position in the aligned protein family. Results The JProfileGrid software program was used to analyze the BioVis contest data sets to generate figures for comparison with the Sequence Logo reference images. Conclusions The ProfileGrid representation allows for the clear and effective analysis of protein multiple sequence alignments. This includes both a general overview of the conservation and diversity sequence patterns as well as the interactive ability to query the details of the protein residue distributions in the alignment. The JProfileGrid software is free and available from http://www.ProfileGrid.org. PMID:25237393
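The data structure at the core of a ProfileGrid is a residue-frequency matrix over alignment columns. A language-agnostic sketch in Python follows (JProfileGrid itself is a Java application; names here are illustrative, and the alignment is assumed to be a list of equal-length strings):

```python
from collections import Counter

def profile_grid(alignment, alphabet="ACDEFGHIKLMNPQRSTVWY-"):
    """Residue-frequency matrix: one row per alphabet symbol,
    one column per homologous alignment position."""
    n_seq = len(alignment)
    n_col = len(alignment[0])
    index = {aa: i for i, aa in enumerate(alphabet)}
    grid = [[0.0] * n_col for _ in alphabet]
    for col in range(n_col):
        counts = Counter(seq[col] for seq in alignment)
        for aa, c in counts.items():
            if aa in index:
                grid[index[aa]][col] = c / n_seq   # fraction of sequences
    return grid

# Each cell can then be color-coded by frequency to render the matrix.
print(profile_grid(["ACD-", "ACDE", "AGDE"])[0])  # row for 'A': [1.0, 0, 0, 0]
```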
Graphics-Printing Program For The HP Paintjet Printer
NASA Technical Reports Server (NTRS)
Atkins, Victor R.
1993-01-01
IMPRINT utility computer program developed to print graphics specified in raster files by use of Hewlett-Packard Paintjet(TM) color printer. Reads bit-mapped images from files on UNIX-based graphics workstation and prints out three different types of images: wire-frame images, solid-color images, and gray-scale images. Wire-frame images are in continuous tone or, in case of low resolution, in random gray scale. In case of color images, IMPRINT also prints by use of default palette of solid colors. Written in C language.
Earth and Moon as viewed from Mars
NASA Technical Reports Server (NTRS)
2003-01-01
MGS MOC Release No. MOC2-368, 22 May 2003
[figure removed for brevity, see original site] Globe diagram illustrates the Earth's orientation as viewed from Mars (North and South America were in view). Earth/Moon: This is the first image of Earth ever taken from another planet that actually shows our home as a planetary disk. Because Earth and the Moon are closer to the Sun than Mars, they exhibit phases, just as the Moon, Venus, and Mercury do when viewed from Earth. As seen from Mars by MGS on 8 May 2003 at 13:00 GMT (6:00 AM PDT), Earth and the Moon appeared in the evening sky. The MOC Earth/Moon image has been specially processed to allow both Earth (with an apparent magnitude of -2.5) and the much darker Moon (with an apparent magnitude of +0.9) to be visible together. The bright area at the top of the image of Earth is cloud cover over central and eastern North America. Below that, a darker area includes Central America and the Gulf of Mexico. The bright feature near the center-right of the crescent Earth consists of clouds over northern South America. The image also shows the Earth-facing hemisphere of the Moon, since the Moon was on the far side of Earth as viewed from Mars. The slightly lighter tone of the lower portion of the image of the Moon results from the large and conspicuous ray system associated with the crater Tycho. A note about the coloring process: The MGS MOC high resolution camera only takes grayscale (black-and-white) images. To 'colorize' the image, a Mariner 10 Earth/Moon image taken in 1973 was used to color the MOC Earth and Moon picture. The procedure used was as follows: the Mariner 10 image was converted from 24-bit color to 8-bit color using a JPEG to GIF conversion program. The 8-bit color image was converted to 8-bit grayscale and an associated lookup table mapping each gray value of the image to a red-green-blue color triplet (RGB). Each color triplet was root-sum-squared (RSS), and sorted in increasing RSS value. These sorted lists were brightness-to-color maps for the images. Each brightness-to-color map was then used to convert the 8-bit grayscale MOC image to an 8-bit color image. This 8-bit color image was then converted to a 24-bit color image. The color image was edited to return the background to black.
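The brightness-to-color-map procedure described in this caption can be sketched in a few lines of Python. This is an approximation of the described workflow: gray levels index proportionally into the RSS-sorted palette rather than passing through the original GIF lookup table:

```python
import numpy as np

def colorize_from_reference(gray, ref_rgb):
    """Colorize an 8-bit grayscale image from a reference color image.

    Distinct reference colors are ranked by root-sum-squared (RSS)
    brightness; gray levels 0..255 are then mapped onto the sorted ramp."""
    colors = np.unique(ref_rgb.reshape(-1, 3), axis=0).astype(np.float64)
    rss = np.sqrt((colors ** 2).sum(axis=1))
    colors = colors[np.argsort(rss)]               # dark-to-bright color ramp
    idx = (gray.astype(np.float64) / 255.0 * (len(colors) - 1)).round().astype(int)
    return colors[idx].astype(np.uint8)            # (H, W) -> (H, W, 3)
```

A multispectral photon-counting double random phase encoding scheme for image authentication.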
Yi, Faliu; Moon, Inkyu; Lee, Yeon H
2014-05-20
In this paper, we propose a new method for color image-based authentication that combines multispectral photon-counting imaging (MPCI) and double random phase encoding (DRPE) schemes. The sparsely distributed information from MPCI and the stationary white noise signal from DRPE make intruder attacks difficult. In this authentication method, the original multispectral RGB color image is down-sampled into a Bayer image. The three types of color samples (red, green and blue color) in the Bayer image are encrypted with DRPE and the amplitude part of the resulting image is photon counted. The corresponding phase information that has nonzero amplitude after photon counting is then kept for decryption. Experimental results show that the retrieved images from the proposed method do not visually resemble their original counterparts. Nevertheless, the original color image can be efficiently verified with statistical nonlinear correlations. Our experimental results also show that different interpolation algorithms applied to Bayer images result in different verification effects for multispectral RGB color images.
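The DRPE backbone of this scheme is compact enough to sketch. Below is a minimal single-plane NumPy version with a simplified Poisson photon-counting step; in the paper, DRPE is applied per Bayer color sample and only the phase at nonzero photon-counted pixels is retained, which this sketch does not reproduce:

```python
import numpy as np

rng = np.random.default_rng(0)

def drpe_encrypt(img, p1, p2):
    # One random phase mask in the spatial domain (p1), one in the
    # Fourier domain (p2); both are unit-magnitude complex arrays.
    return np.fft.ifft2(np.fft.fft2(img * p1) * p2)

def drpe_decrypt(cipher, p1, p2):
    # Undo the two phase modulations in reverse order.
    return np.abs(np.fft.ifft2(np.fft.fft2(cipher) * np.conj(p2)) * np.conj(p1))

def photon_count(cipher, n_photons=1e4):
    # Photon-limited recording of the cipher amplitude: Poisson sampling
    # of the normalized intensity (a simplified stand-in for MPCI).
    intensity = np.abs(cipher) ** 2
    return rng.poisson(n_photons * intensity / intensity.sum())

img = np.zeros((64, 64)); img[20:40, 20:40] = 1.0   # toy single color plane
p1 = np.exp(2j * np.pi * rng.random(img.shape))
p2 = np.exp(2j * np.pi * rng.random(img.shape))
cipher = drpe_encrypt(img, p1, p2)
print(np.allclose(drpe_decrypt(cipher, p1, p2), img))  # True without photon loss
```

With photon counting applied, the retrieved image no longer resembles the original, which is why the scheme verifies identity through statistical nonlinear correlation rather than visual inspection.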
FIDUCIAL STELLAR POPULATION SEQUENCES FOR THE VJK_S PHOTOMETRIC SYSTEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brasseur, Crystal M.; VandenBerg, Don A.; Stetson, Peter B.
2010-12-15
We have obtained broadband near-infrared photometry for seven Galactic star clusters (M 92, M 15, M 13, M 5, NGC 1851, M 71, and NGC 6791) using the WIRCam wide-field imager on the Canada-France-Hawaii Telescope, supplemented by images of NGC 1851 taken with HAWK-I on the Very Large Telescope. In addition, Two Micron All Sky Survey (2MASS) observations of the [Fe/H] ≈ 0.0 open cluster M 67 were added to the cluster database. From the resultant (V - J) - V and (V - K_S) - V color-magnitude diagrams (CMDs), fiducial sequences spanning the range in metallicity, -2.4 ≈
Color enhancement and image defogging in HSI based on Retinex model
NASA Astrophysics Data System (ADS)
Gao, Han; Wei, Ping; Ke, Jun
2015-08-01
Retinex is a luminance perceptual algorithm based on color consistency, and it performs well in color enhancement. In some cases, however, the traditional Retinex algorithms, both Single-Scale Retinex (SSR) and Multi-Scale Retinex (MSR) in RGB color space, do not work well and cause color deviation. To solve this problem, we present improved SSR and MSR algorithms. Unlike other Retinex algorithms, we implement the Retinex algorithms in the HSI (Hue, Saturation, Intensity) color space and use a parameter α to improve the quality of the image. Moreover, the algorithms presented in this paper perform well in image defogging. In contrast to traditional Retinex algorithms, we use the intensity channel to obtain the reflection information of an image. The intensity channel is processed with a Gaussian center-surround image filter to estimate the light information, which should be removed from the intensity channel. After that, we subtract the light information from the intensity channel to obtain the reflection image, which includes only the attributes of the objects in the image. Using the reflection image and the parameter α, an arbitrary scale factor set manually, we improve the intensity channel and complete the color enhancement. Our experiments show that this approach works well compared with existing methods for color enhancement. Besides better handling of the color deviation problem and image defogging, a visible improvement in image quality for human contrast perception is also observed.
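A minimal single-scale Retinex on the intensity channel, following the description above, can be sketched as below (the handling of α and the final display stretch are assumptions; the paper sets α manually):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ssr_intensity(intensity, sigma=80, alpha=1.0):
    """Single-scale Retinex on the HSI intensity channel.

    The Gaussian center-surround blur estimates the illumination, which
    is removed in the log domain; alpha scales the reflectance."""
    I = intensity.astype(np.float64) + 1.0        # avoid log(0)
    illumination = gaussian_filter(I, sigma)      # center-surround estimate
    reflectance = np.log(I) - np.log(illumination)
    out = alpha * reflectance
    out = (out - out.min()) / (out.max() - out.min() + 1e-9)  # stretch for display
    return (out * 255).astype(np.uint8)
```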
VARiD: a variation detection framework for color-space and letter-space platforms.
Dalca, Adrian V; Rumble, Stephen M; Levy, Samuel; Brudno, Michael
2010-06-15
High-throughput sequencing (HTS) technologies are transforming the study of genomic variation. The various HTS technologies have different sequencing biases and error rates, and while most HTS technologies sequence the residues of the genome directly, generating base calls for each position, Applied Biosystems' SOLiD platform generates dibase-coded (color space) sequences. While combining data from the various platforms should increase the accuracy of variation detection, to date there are only a few tools that can identify variants from color space data, and none that can analyze color space and regular (letter space) data together. We present VARiD--a probabilistic method for variation detection from both letter- and color-space reads simultaneously. VARiD is based on a hidden Markov model and uses the forward-backward algorithm to accurately identify heterozygous, homozygous and tri-allelic SNPs, as well as micro-indels. Our analysis shows that VARiD performs better than the AB SOLiD toolset at detecting variants from color-space data alone, and improves the calls dramatically when letter- and color-space reads are combined. The toolset is freely available at http://compbio.cs.utoronto.ca/varid.
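The dibase (color-space) code itself is easy to state: with A, C, G, T numbered 0-3, the color for each adjacent base pair is the XOR of the two base codes. A small sketch:

```python
# SOLiD dibase ("color space") encoding: each color encodes a pair of
# adjacent bases; identical-base pairs map to color 0.
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}

def to_color_space(seq):
    """Encode a letter-space read as its leading base plus a color list."""
    colors = [CODE[a] ^ CODE[b] for a, b in zip(seq, seq[1:])]
    return seq[0], colors

# A single-base change in letter space alters two adjacent colors, which
# is why color-space reads need dedicated variant callers such as VARiD.
print(to_color_space("ATGGCA"))  # ('A', [3, 1, 0, 3, 1])
```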
Stand-off detection of explosive particles by imaging Raman spectroscopy
NASA Astrophysics Data System (ADS)
Nordberg, Markus; Åkeson, Madeleine; Östmark, Henric; Carlsson, Torgny E.
2011-06-01
A multispectral imaging technique has been developed to detect and identify explosive particles, e.g. from a fingerprint, at stand-off distances using Raman spectroscopy. When handling IEDs as well as other explosive devices, residues can easily be transferred via fingerprints onto other surfaces, e.g. car handles, gear sticks and suitcases. By imaging the surface using the multispectral imaging Raman technique, the explosive particles can be identified and displayed using color-coding. The technique has been demonstrated by detecting fingerprints containing significant amounts of 2,4-dinitrotoluene (DNT), 2,4,6-trinitrotoluene (TNT) and ammonium nitrate at a distance of 12 m in less than 90 seconds (22 images × 4 seconds). For each measurement, a sequence of images, one image for each wavenumber, is recorded. The spectral data from each pixel is compared with reference spectra of the substances to be detected. The pixels are marked with different colors corresponding to the detected substances in the fingerprint. The system has now been further developed to become less complex and thereby less sensitive to the environment, such as temperature fluctuations. The optical resolution has been improved to less than 70 μm measured at 546 nm wavelength. The total detection time ranges from less than one minute to around five minutes, depending on the size of the particles and how confident the identification should be. The results indicate a great potential for multispectral imaging Raman spectroscopy as a stand-off technique for detection of single explosive particles.
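The per-pixel matching step described above amounts to comparing each pixel's spectrum against a reference library. A minimal sketch under assumed shapes (the paper's actual matching criterion and confidence handling are not specified in this abstract):

```python
import numpy as np

def classify_pixels(cube, references, threshold=0.9):
    """Label each pixel of a Raman image stack by its best-matching
    reference spectrum.

    cube: (H, W, B) intensities, one band per wavenumber.
    references: (N, B) library spectra. Pixels whose best cosine
    similarity falls below threshold stay unlabeled (-1)."""
    H, W, B = cube.shape
    spectra = cube.reshape(-1, B).astype(np.float64)
    spectra /= np.linalg.norm(spectra, axis=1, keepdims=True) + 1e-12
    refs = references.astype(np.float64)
    refs /= np.linalg.norm(refs, axis=1, keepdims=True) + 1e-12
    corr = spectra @ refs.T                       # cosine similarity per pixel
    labels = corr.argmax(axis=1)
    labels[corr.max(axis=1) < threshold] = -1     # unidentified pixels
    return labels.reshape(H, W)                   # drives the color-coded overlay
```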
Image Analysis of DNA Fiber and Nucleus in Plants.
Ohmido, Nobuko; Wako, Toshiyuki; Kato, Seiji; Fukui, Kiichi
2016-01-01
Advances in cytology have led to the application of a wide range of visualization methods in plant genome studies. Image analysis methods are indispensable tools where morphology, density, and color play important roles in biological systems. Visualization and image analysis are useful techniques in the analysis of the detailed structure and function of extended DNA fibers (EDFs) and interphase nuclei. The EDF offers the highest spatial resolving power for revealing genome structure, and it can be used for physical mapping, especially for closely located genes and tandemly repeated sequences. On the other hand, analyzing nuclear DNA and proteins reveals nuclear structure and function. In this chapter, we describe the image analysis protocol for quantitatively analyzing two types of plant genome material: EDFs and interphase nuclei.
2015-10-08
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows part of the floor of Melas Chasma. The dark blue region in this false color image is sand dunes. Orbit Number: 12061 Latitude: -12.2215 Longitude: 289.105 Instrument: VIS Captured: 2004-09-02 10:11 http://photojournal.jpl.nasa.gov/catalog/PIA19793
Calibration Image of Earth by Mars Color Imager
NASA Technical Reports Server (NTRS)
2005-01-01
Three days after the Mars Reconnaissance Orbiter's Aug. 12, 2005, launch, the NASA spacecraft was pointed toward Earth and the Mars Color Imager camera was powered up to acquire a suite of color and ultraviolet images of Earth and the Moon. When it gets to Mars, the Mars Color Imager's main objective will be to obtain daily global color and ultraviolet images of the planet to observe martian meteorology by documenting the occurrence of dust storms, clouds, and ozone. This camera will also observe how the martian surface changes over time, including changes in frost patterns and surface brightness caused by dust storms and dust devils. The purpose of acquiring an image of Earth and the Moon just three days after launch was to help the Mars Color Imager science team obtain a measure, in space, of the instrument's sensitivity, as well as to check that no contamination occurred on the camera during launch. Prior to launch, the team determined that, three days out from Earth, the planet would only be about 4.77 pixels across, and the Moon would be less than one pixel in size, as seen from the Mars Color Imager's wide-angle perspective. If the team waited any longer than three days to test the camera's performance in space, Earth would be too small to obtain meaningful results. The images were acquired by turning Mars Reconnaissance Orbiter toward Earth, then slewing the spacecraft so that the Earth and Moon would pass before each of the five color and two ultraviolet filters of the Mars Color Imager. The distance to Earth was about 1,170,000 kilometers (about 727,000 miles). This image shows a color composite view of Mars Color Imager's image of Earth. As expected, it covers only five pixels. This color view has been enlarged five times. The Sun was illuminating our planet from the left, thus only one quarter of Earth is seen from this perspective. North America was in daylight and facing toward the camera at the time the picture was taken; the data from the camera were being transmitted in real time to the Deep Space Network antennas in Goldstone, California.
New false color mapping for image fusion
NASA Astrophysics Data System (ADS)
Toet, Alexander; Walraven, Jan
1996-03-01
A pixel-based color-mapping algorithm is presented that produces a fused false color rendering of two gray-level images representing different sensor modalities. The resulting images have a higher information content than each of the original images and retain sensor-specific image information. The unique component of each image modality is enhanced in the resulting fused color image representation. First, the common component of the two original input images is determined. Second, the common component is subtracted from the original images to obtain the unique component of each image. Third, the unique component of each image modality is subtracted from the image of the other modality. This step serves to enhance the representation of sensor-specific details in the final fused result. Finally, a fused color image is produced by displaying the images resulting from the last step through, respectively, the red and green channels of a color display. The method is applied to fuse thermal and visual images. The results show that the color mapping enhances the visibility of certain details and preserves the specificity of the sensor information. The fused images also have a fairly natural appearance. The fusion scheme involves only operations on corresponding pixels. The resolution of a fused image is therefore directly related to the resolution of the input images. Before fusing, the contrast of the images can be enhanced and their noise can be reduced by standard image-processing techniques. The color mapping algorithm is computationally simple. This implies that the investigated approaches can eventually be applied in real time and that the hardware needed is not too complicated or too voluminous (an important consideration when it has to fit in an airplane, for instance).
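The four steps above translate directly into per-pixel array operations. A minimal sketch, taking the pointwise minimum as the common component (a usual choice; the abstract does not spell it out):

```python
import numpy as np

def fuse_false_color(a, b):
    """Fuse two registered gray-level images (floats in [0, 1]) into a
    red-green false color image, following the steps described above."""
    common = np.minimum(a, b)              # step 1: common component
    ua, ub = a - common, b - common        # step 2: sensor-specific details
    red = np.clip(a - ub, 0.0, 1.0)        # step 3: A minus B's unique details
    green = np.clip(b - ua, 0.0, 1.0)      #         B minus A's unique details
    blue = np.zeros_like(a)
    return np.dstack([red, green, blue])   # step 4: display via R and G channels
```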
Quantifying the effect of colorization enhancement on mammogram images
NASA Astrophysics Data System (ADS)
Wojnicki, Paul J.; Uyeda, Elizabeth; Micheli-Tzanakou, Evangelia
2002-04-01
Current methods of radiological display provide only grayscale images of mammograms. The limitation of the image space to grayscale provides only luminance differences and textures as cues for object recognition within the image. However, color can be an important and significant cue in the detection of shapes and objects. Increasing detection ability allows the radiologist to interpret the images in more detail, improving object recognition and diagnostic accuracy. Color detection experiments using our stimulus system have demonstrated that an observer can only detect an average of 140 levels of grayscale. An optimally colorized image can allow a user to distinguish 250 - 1000 different levels, hence increasing potential image feature detection by 2-7 times. By implementing a colorization map which follows the luminance map of the original grayscale images, the luminance profile is preserved and color is isolated as the enhancement mechanism. The effect of this enhancement mechanism on the shape, frequency composition, and statistical characteristics of the Visual Evoked Potential (VEP) is analyzed and presented. Thus, the effectiveness of the image colorization is measured quantitatively using the Visual Evoked Potential (VEP).
True color blood flow imaging using a high-speed laser photography system
NASA Astrophysics Data System (ADS)
Liu, Chien-Sheng; Lin, Cheng-Hsien; Sun, Yung-Nien; Ho, Chung-Liang; Hsu, Chung-Chi
2012-10-01
Physiological changes in the retinal vasculature are commonly indicative of such disorders as diabetic retinopathy, glaucoma, and age-related macular degeneration. Thus, various methods have been developed for noninvasive clinical evaluation of ocular hemodynamics. However, to the best of our knowledge, current ophthalmic instruments do not provide a true color blood flow imaging capability. Accordingly, we propose a new method for the true color imaging of blood flow using a high-speed pulsed laser photography system. In the proposed approach, monochromatic images of the blood flow are acquired using a system of three cameras and three color lasers (red, green, and blue). A high-quality true color image of the blood flow is obtained by assembling the monochromatic images by means of image realignment and color calibration processes. The effectiveness of the proposed approach is demonstrated by imaging the flow of mouse blood within a microfluidic channel device. The experimental results confirm that the proposed system provides a high-quality true color blood flow imaging capability, and therefore has potential for noninvasive clinical evaluation of ocular hemodynamics.
Lo, T Y; Sim, K S; Tso, C P; Nia, M E
2014-01-01
An improvement to the previously proposed adaptive Canny optimization technique for scanning electron microscope image colorization is reported. The additional feature, called the pseudo-mapping technique, temporarily maps the grayscale markings to a set of pre-defined pseudo-colors as a means of instilling color information for grayscale colors in the chrominance channels. This allows the presence of grayscale markings to be identified; hence optimized colorization of grayscale colors is made possible. This additional feature enhances the flexibility of scanning electron microscope image colorization by providing a wider range of possible color enhancement. Furthermore, the nature of this technique also allows users to adjust the luminance intensities of selected regions of the original image to a certain extent. © 2014 Wiley Periodicals, Inc.
Hamit, Murat; Yun, Weikang; Yan, Chuanbo; Kutluk, Abdugheni; Fang, Yang; Alip, Elzat
2015-06-01
Image feature extraction is an important part of image processing and an active field of research and application in image processing technology. Uygur medicine is a branch of traditional Chinese medicine that is attracting increasing research attention, yet large amounts of Uygur medicine data have not been fully utilized. In this study, we extracted the color histogram feature of images of herbal and zooid medicines of Xinjiang Uygur. First, we performed preprocessing, including image color enhancement, size normalization and color space transformation. Then we extracted the color histogram feature and analyzed it with statistical methods. Finally, we evaluated the classification ability of the features by Bayes discriminant analysis. Experimental results showed that high accuracy for Uygur medicine image classification was obtained by using the color histogram feature. This study should be helpful for content-based medical image retrieval for Xinjiang Uygur medicine.
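A joint RGB color histogram of the kind used here is a few lines of NumPy; the sketch below assumes 8-bit RGB input already preprocessed as described above (the bin count is an illustrative choice):

```python
import numpy as np

def color_histogram(rgb, bins=8):
    """Joint RGB histogram feature: bins**3 values, L1-normalized,
    suitable as input to a Bayes discriminant or similar classifier."""
    q = (rgb.astype(np.uint32) * bins) // 256            # quantize each channel
    idx = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    hist = np.bincount(idx.ravel().astype(np.int64),
                       minlength=bins ** 3).astype(np.float64)
    return hist / hist.sum()                             # normalize to frequencies
```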
Progressive low-bitrate digital color/monochrome image coding by neuro-fuzzy clustering
NASA Astrophysics Data System (ADS)
Mitra, Sunanda; Meadows, Steven
1997-10-01
Color image coding at low bit rates is an area of research that has only recently been addressed in the literature, since the problems of storage and transmission of color images are becoming more prominent in many applications. Current trends in image coding exploit the advantage of subband/wavelet decompositions in reducing the complexity of optimal scalar/vector quantizer (SQ/VQ) design. Compression ratios (CRs) of the order of 10:1 to 20:1 with high visual quality have been achieved by using vector quantization of subband-decomposed color images in perceptually weighted color spaces. We report the performance of a recently developed adaptive vector quantizer, namely AFLC-VQ, for effective reduction in bit rates while maintaining high visual quality of reconstructed color as well as monochrome images. For 24-bit color images, excellent visual quality is maintained up to a bit rate reduction to approximately 0.48 bpp (0.16 bpp per color plane, or 0.16 bpp for monochrome; CR 50:1) using the RGB color space. Further tuning of the AFLC-VQ, and the addition of an entropy coder module after the VQ stage, results in extremely low bit rates (CR 80:1) for good-quality reconstructed images. Our recent study also reveals that for similar visual quality, the RGB color space requires fewer bits per pixel than either the YIQ or HSI color space for storing the same information when entropy coding is applied. AFLC-VQ outperforms other standard VQ and adaptive SQ techniques in retaining visual fidelity at similar bit rate reductions.
Full-color high-definition CGH reconstructing hybrid scenes of physical and virtual objects
NASA Astrophysics Data System (ADS)
Tsuchiyama, Yasuhiro; Matsushima, Kyoji; Nakahara, Sumio; Yamaguchi, Masahiro; Sakamoto, Yuji
2017-03-01
High-definition CGHs can reconstruct high-quality 3D images comparable to those of conventional optical holography. However, it has been difficult to exhibit full-color images reconstructed by these high-definition CGHs, because three CGHs for the RGB colors and a bulky image combiner were needed to produce full-color images. Recently, we reported a novel technique for full-color reconstruction using RGB color filters similar to those used in liquid-crystal panels. This technique allows us to produce full-color high-definition CGHs composed of a single plate and to place them on exhibition. Using this technique, we demonstrate in this paper full-color CGHs that reconstruct hybrid scenes comprising real physical objects and CG-modeled virtual objects. Here, the wave field of the physical object is obtained from dense multi-viewpoint images by employing the ray-sampling (RS) plane technique. In addition to the technique for full-color capturing and reconstruction of real object fields, the principle and simulation technique for full-color CGHs using RGB color filters are presented.
A natural-color mapping for single-band night-time image based on FPGA
NASA Astrophysics Data System (ADS)
Wang, Yilun; Qian, Yunsheng
2018-01-01
An FPGA-based natural-color mapping method for single-band night-time images transfers the color of a reference image to the night-time image, producing results that are consistent with human visual habits and can help observers identify targets. This paper introduces the processing flow of the natural-color mapping algorithm on the FPGA. First, the image is transformed by histogram equalization, and the intensity features and standard deviation features of the reference image are stored in SRAM. Then, the intensity and standard deviation features of the real-time digital images are calculated by the FPGA. Finally, the FPGA completes the color mapping by matching pixels between the images using the features in the luminance channel.
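Matching pixels by local luminance mean and standard deviation is the core of Welsh-style color transfer; a CPU-side sketch of that matching step follows (function names and window/sample sizes are illustrative, and the FPGA pipeline and SRAM layout are naturally out of scope here):

```python
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.spatial import cKDTree

def local_stats(lum, win=5):
    # Per-pixel mean and standard deviation of luminance in a win x win window.
    m = uniform_filter(lum.astype(np.float64), win)
    sq = uniform_filter(lum.astype(np.float64) ** 2, win)
    return m, np.sqrt(np.maximum(sq - m ** 2, 0.0))

def colorize(gray, ref_lum, ref_chroma, n=512, seed=0):
    """Give each night-image pixel the chromaticity of the reference
    sample whose (local mean, local std) luminance features are closest."""
    rng = np.random.default_rng(seed)
    gm, gs = local_stats(gray)
    rm, rs = local_stats(ref_lum)
    ys = rng.integers(0, ref_lum.shape[0], n)            # random reference samples
    xs = rng.integers(0, ref_lum.shape[1], n)
    feats = np.column_stack([rm[ys, xs], rs[ys, xs]])    # (n, 2) sample features
    px = np.column_stack([gm.ravel(), gs.ravel()])       # (H*W, 2) pixel features
    nearest = cKDTree(feats).query(px)[1]                # nearest-sample index
    return ref_chroma[ys, xs][nearest].reshape(gray.shape + (2,))
    # combine the returned chroma with the original luminance for display
```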
Iakovidis, Dimitris K; Koulaouzidis, Anastasios
2014-11-01
The advent of wireless capsule endoscopy (WCE) has revolutionized the diagnostic approach to small-bowel disease. However, the task of reviewing WCE video sequences is laborious and time-consuming; software tools offering automated video analysis would enable a timelier and potentially a more accurate diagnosis. To assess the validity of innovative, automatic lesion-detection software in WCE. A color feature-based pattern recognition methodology was devised and applied to the aforementioned image group. This study was performed at the Royal Infirmary of Edinburgh, United Kingdom, and the Technological Educational Institute of Central Greece, Lamia, Greece. A total of 137 deidentified WCE single images, 77 showing pathology and 60 normal images. The proposed methodology, unlike state-of-the-art approaches, is capable of detecting several different types of lesions. The average performance, in terms of the area under the receiver-operating characteristic curve, reached 89.2 ± 0.9%. The best average performance was obtained for angiectasias (97.5 ± 2.4%) and nodular lymphangiectasias (96.3 ± 3.6%). Single expert for annotation of pathologies, single type of WCE model, use of single images instead of entire WCE videos. A simple, yet effective, approach allowing automatic detection of all types of abnormalities in capsule endoscopy is presented. Based on color pattern recognition, it outperforms previous state-of-the-art approaches. Moreover, it is robust in the presence of luminal contents and is capable of detecting even very small lesions. Crown Copyright © 2014. Published by Elsevier Inc. All rights reserved.
Astronomy with the color blind
NASA Astrophysics Data System (ADS)
Smith, Donald A.; Melrose, Justyn
2014-12-01
The standard method to create dramatic color images in astrophotography is to record multiple black and white images, each with a different color filter in the optical path, and then tint each frame with a color appropriate to the corresponding filter. When combined, the resulting image conveys information about the sources of emission in the field, although one should be cautious in assuming that such an image shows what the subject would "really look like" if a person could see it without the aid of a telescope. The details of how the eye processes light have a significant impact on how such images should be understood, and the step from perception to interpretation is even more problematic when the viewer is color blind. We report here on an approach to manipulating stacked tricolor images that, while abandoning attempts to portray the color distribution "realistically," enables those suffering from deuteranomaly (the most common form of color blindness) to perceive color distinctions they would otherwise not be able to see.
Accurate color synthesis of three-dimensional objects in an image
NASA Astrophysics Data System (ADS)
Xin, John H.; Shen, Hui-Liang
2004-05-01
Our study deals with color synthesis of a three-dimensional object in an image; i.e., given a single image, a target color can be accurately mapped onto the object such that the color appearance of the synthesized object closely resembles that of the actual one. As it is almost impossible to acquire the complete geometric description of the surfaces of an object in an image, this study attempted to recover the implicit description of geometry for the color synthesis. The description was obtained from either a series of spectral reflectances or the RGB signals at different surface positions on the basis of the dichromatic reflection model. The experimental results showed that this implicit image-based representation is related to the object geometry and is sufficient for accurate color synthesis of three-dimensional objects in an image. The method established is applicable to the color synthesis of both rigid and deformable objects and should contribute to color fidelity in virtual design, manufacturing, and retailing.
Kather, Jakob Nikolas; Weis, Cleo-Aron; Marx, Alexander; Schuster, Alexander K.; Schad, Lothar R.; Zöllner, Frank Gerrit
2015-01-01
Background Accurate evaluation of immunostained histological images is required for reproducible research in many different areas and forms the basis of many clinical decisions. The quality and efficiency of histopathological evaluation is limited by the information content of a histological image, which is primarily encoded as perceivable contrast differences between objects in the image. However, the colors of chromogen and counterstain used for histological samples are not always optimally distinguishable, even under optimal conditions. Methods and Results In this study, we present a method to extract the bivariate color map inherent in a given histological image and to retrospectively optimize this color map. We use a novel, unsupervised approach based on color deconvolution and principal component analysis to show that the commonly used blue and brown color hues in Hematoxylin—3,3’-Diaminobenzidine (DAB) images are poorly suited for human observers. We then demonstrate that it is possible to construct improved color maps according to objective criteria and that these color maps can be used to digitally re-stain histological images. Validation To validate whether this procedure improves distinguishability of objects and background in histological images, we re-stain phantom images and N = 596 large histological images of immunostained samples of human solid tumors. We show that perceptual contrast is improved by a factor of 2.56 in phantom images and up to a factor of 2.17 in sets of histological tumor images. Context Thus, we provide an objective and reliable approach to measure object distinguishability in a given histological image and to maximize visual information available to a human observer. This method could easily be incorporated in digital pathology image viewing systems to improve accuracy and efficiency in research and diagnostics. PMID:26717571
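The color-deconvolution step underlying this method is standard and compact. A minimal sketch using commonly cited H-DAB stain vectors (after Ruifrok and Johnston) follows; the paper's PCA-based optimization of the bivariate color map is not reproduced here:

```python
import numpy as np

# Commonly cited unit optical-density vectors (RGB) for hematoxylin and
# DAB; the third basis vector is their cross product, capturing residual
# absorbance not explained by either stain.
HEM = np.array([0.650, 0.704, 0.286])
DAB = np.array([0.269, 0.568, 0.778])
RES = np.cross(HEM, DAB)
M = np.stack([HEM, DAB, RES / np.linalg.norm(RES)])   # rows = stain vectors

def stain_densities(rgb):
    """Unmix an H-DAB image into per-stain density maps.

    By Beer-Lambert, optical density is linear in stain concentration,
    so densities = OD @ inv(M) when OD = densities @ M."""
    od = -np.log10((rgb.astype(np.float64) + 1.0) / 256.0)
    c = od.reshape(-1, 3) @ np.linalg.inv(M)
    return c.reshape(rgb.shape)  # channels: hematoxylin, DAB, residual
```

Once the densities are separated, digitally re-staining the image amounts to rendering the two stain channels through any bivariate color map, including the perceptually optimized maps the paper constructs.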
Color model comparative analysis for breast cancer diagnosis using H and E stained images
NASA Astrophysics Data System (ADS)
Li, Xingyu; Plataniotis, Konstantinos N.
2015-03-01
Digital cancer diagnosis is a research realm where signal processing techniques are used to analyze and classify color histopathology images. Different from grayscale image analysis in magnetic resonance imaging or X-ray, colors in histopathology images convey a large amount of histological information and thus play a significant role in cancer diagnosis. Though color information is widely used in histopathology work, there are, as of today, few studies on color model selection for feature extraction in cancer diagnosis schemes. This paper addresses the problem of color space selection for digital cancer classification using H and E stained images, and investigates the effectiveness of various color models (RGB, HSV, CIE L*a*b*, and a stain-dependent H and E decomposition model) in breast cancer diagnosis. In particular, we build a diagnosis framework as a comparison benchmark and take specific concerns of medical decision systems into account in the evaluation. The evaluation methodologies include feature discriminative power evaluation and final diagnosis performance comparison. Experimentation on a publicly accessible histopathology image set suggests that the H and E decomposition model outperforms the other assessed color spaces. To explain the varying performance of the color spaces, our analysis via mutual information estimation demonstrates that the color components in the H and E model are less dependent, and thus most of the feature discriminative power is concentrated in one channel instead of spreading out among channels as in other color spaces.
Hepatitis Diagnosis Using Facial Color Image
NASA Astrophysics Data System (ADS)
Liu, Mingjia; Guo, Zhenhua
Facial color diagnosis is an important diagnostic method in traditional Chinese medicine (TCM). However, due to its qualitative, subjective and experience-based nature, traditional facial color diagnosis has a very limited application in clinical medicine. To circumvent the subjective and qualitative problems of facial color diagnosis in TCM, in this paper we present a novel computer-aided facial color diagnosis method (CAFCDM). The method has three parts: a face image database, an image preprocessing module and a diagnosis engine. The face image database was built from a group of 116 patients affected by 2 kinds of liver disease and 29 healthy volunteers. Quantitative color features are extracted from the facial images using popular digital image processing techniques. Then, a KNN classifier is employed to model the relationship between the quantitative color features and the diseases. The results show that the method can properly identify three groups: healthy, severe hepatitis with jaundice, and severe hepatitis without jaundice, with accuracy higher than 73%.
2013-01-01
Background Genetic variation at the melanocortin-1 receptor (MC1R) gene is correlated with melanin color variation in many birds. Feral pigeons (Columba livia) show two major melanin-based colorations: a red coloration due to pheomelanic pigment and a black coloration due to eumelanic pigment. Furthermore, within each color type, feral pigeons display continuous variation in the amount of melanin pigment present in the feathers, with individuals varying from pure white to a full dark melanic color. Coloration is highly heritable and it has been suggested that it is under natural or sexual selection, or both. Our objective was to investigate whether MC1R allelic variants are associated with plumage color in feral pigeons. Findings We sequenced 888 bp of the coding sequence of MC1R among pigeons varying both in the type, eumelanin or pheomelanin, and the amount of melanin in their feathers. We detected 10 non-synonymous substitutions and 2 synonymous substitutions, but none of them was associated with a plumage type. It remains possible that non-synonymous substitutions that influence coloration are present in the short MC1R fragment that we did not sequence, but this seems unlikely because we analyzed the entire functionally important region of the gene. Conclusions Our results show that color differences among feral pigeons are probably not attributable to amino acid variation at the MC1R locus. Therefore, variation in regulatory regions of MC1R or variation in other genes may be responsible for the color polymorphism of feral pigeons. PMID:23915680
Spatial transform coding of color images.
NASA Technical Reports Server (NTRS)
Pratt, W. K.
1971-01-01
The application of the transform-coding concept to the coding of color images represented by three primary color planes of data is discussed. The principles of spatial transform coding are reviewed and the merits of various methods of color-image representation are examined. A performance analysis is presented for the color-image transform-coding system. Results of a computer simulation of the coding system are also given. It is shown that, by transform coding, the chrominance content of a color image can be coded with an average of 1.0 bits per element or less without serious degradation. If luminance coding is also employed, the average rate reduces to about 2.0 bits per element or less.
Color engineering in the age of digital convergence
NASA Astrophysics Data System (ADS)
MacDonald, Lindsay W.
1998-09-01
Digital color imaging has developed over the past twenty years from specialized scientific applications into the mainstream of computing. In addition to the phenomenal growth of computer processing power and storage capacity, great advances have been made in the capabilities and cost-effectiveness of color imaging peripherals. The majority of imaging applications, including the graphic arts, video and film have made the transition from analogue to digital production methods. Digital convergence of computing, communications and television now heralds new possibilities for multimedia publishing and mobile lifestyles. Color engineering, the application of color science to the design of imaging products, is an emerging discipline that poses exciting challenges to the international color imaging community for training, research and standards.
Efficient color correction method for smartphone camera-based health monitoring application.
Duc Dang; Chae Ho Cho; Daeik Kim; Oh Seok Kwon; Jo Woon Chong
2017-07-01
Smartphone health monitoring applications have recently been highlighted due to the rapid development of the hardware and software performance of smartphones. However, the color characteristics of images captured by different smartphone models differ from one another, and this difference may yield inconsistent health monitoring results when such applications derive physiological information from their embedded cameras. In this paper, we investigate the differences in the color properties of images captured by different smartphone models and apply a color correction method to adjust the dissimilar color values obtained from different smartphone cameras. Experimental results show that the color-corrected images produced by the correction method have much smaller color intensity errors than the uncorrected images. These results can be applied to enhance the consistency of smartphone camera-based health monitoring applications by reducing color intensity errors among images obtained from different smartphones.
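The abstract does not spell out the correction model; a common choice is an affine transform fitted from a color chart by least squares, sketched below (patch extraction from the chart photo is assumed done elsewhere, and the affine form is an assumption):

```python
import numpy as np

def fit_color_correction(measured, reference):
    """Fit an affine color correction (3x3 matrix plus offset) mapping a
    phone camera's measured chart-patch colors to reference values.

    measured, reference: (N, 3) RGB arrays from N chart patches."""
    X = np.hstack([measured, np.ones((len(measured), 1))])   # (N, 4) augmented
    A, *_ = np.linalg.lstsq(X, reference, rcond=None)        # (4, 3) solution
    return A

def apply_correction(rgb, A):
    """Apply the fitted correction to a full image."""
    flat = rgb.reshape(-1, 3).astype(np.float64)
    out = np.hstack([flat, np.ones((len(flat), 1))]) @ A
    return np.clip(out, 0, 255).astype(np.uint8).reshape(rgb.shape)
```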
Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking.
Monno, Yusuke; Kiku, Daisuke; Tanaka, Masayuki; Okutomi, Masatoshi
2017-12-01
Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI) that improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These are performed based on a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI for a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI can achieve state-of-the-art performance also for the task of multispectral image demosaicking.
Spatial imaging in color and HDR: prometheus unchained
NASA Astrophysics Data System (ADS)
McCann, John J.
2013-03-01
The Human Vision and Electronic Imaging (HVEI) conferences at the IS&T/SPIE Electronic Imaging meetings have brought together research in the fundamentals of both vision and digital technology. This conference has incorporated many color disciplines that have contributed to the theory and practice of today's imaging: color constancy, models of vision, digital output, high-dynamic-range imaging, and the understanding of perceptual mechanisms. Before digital imaging, silver halide color was a pixel-based mechanism. Color films are closely tied to colorimetry, the science of matching pixels in a black surround. The quanta catch of the sensitized silver salts determines the amount of colored dyes in the final print. The rapid expansion of digital imaging over the past 25 years has eliminated the limitations of using small local regions in forming images. Spatial interactions can now generate images more like vision. Since the 1950s, neurophysiology has shown that post-receptor neural processing is based on spatial interactions. These results reinforced the findings of 19th-century experimental psychology. This paper reviews the role of HVEI in color, emphasizing the interaction of research on vision and the new algorithms and processes made possible by electronic imaging.
Kikuchi, Kumiko; Masuda, Yuji; Yamashita, Toyonobu; Kawai, Eriko; Hirao, Tetsuji
2015-05-01
Heterogeneity with respect to skin color tone is one of the key factors in visual perception of facial attractiveness and age. However, there have been few studies on quantitative analyses of the color heterogeneity of facial skin. The purpose of this study was to develop image evaluation methods for skin color heterogeneity focusing on skin chromophores and then characterize ethnic differences and age-related changes. A facial imaging system equipped with an illumination unit and a high-resolution digital camera was used to develop image evaluation methods for skin color heterogeneity. First, melanin and/or hemoglobin images were obtained using pigment-specific image-processing techniques, which involved conversion from Commission Internationale de l'Eclairage XYZ color values to melanin and/or hemoglobin indexes as measures of their contents. Second, a spatial frequency analysis with threshold settings was applied to the individual images. Cheek skin images of 194 healthy Asian and Caucasian female subjects were acquired using the imaging system. Applying this methodology, the skin color heterogeneity of Asian and Caucasian faces was characterized. The proposed pigment-specific image-processing techniques allowed visual discrimination of skin redness from skin pigmentation. In the heterogeneity analyses of cheek skin color, age-related changes in melanin were clearly detected in Asian and Caucasian skin. Furthermore, it was found that the heterogeneity indexes of hemoglobin were significantly higher in Caucasian skin than in Asian skin. We have developed evaluation methods for skin color heterogeneity by image analyses based on the major chromophores, melanin and hemoglobin, with special reference to their size. This methodology focusing on skin color heterogeneity should be useful for better understanding of aging and ethnic differences. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
High-chroma visual cryptography using interference color of high-order retarder films
NASA Astrophysics Data System (ADS)
Sugawara, Shiori; Harada, Kenji; Sakai, Daisuke
2015-08-01
Visual cryptography can be used as a method of sharing a secret image through several encrypted images. Conventional visual cryptography can display only monochrome images. We have developed a high-chroma color visual encryption technique using the interference color of high-order retarder films. The encrypted films are composed of a polarizing film and retarder films. The retarder films exhibit interference color when they are sandwiched between two polarizing films. We propose a stacking technique for displaying high-chroma interference color images. A prototype visual cryptography device using high-chroma interference color is developed.
Fukuda, Hiroyuki; Numata, Kazushi; Nozaki, Akito; Kondo, Masaaki; Morimoto, Manabu; Maeda, Shin; Tanaka, Katsuaki; Ohto, Masao; Ito, Ryu; Ishibashi, Yoshiharu; Oshima, Noriyoshi; Ito, Ayao; Zhu, Hui; Wang, Zhi-Biao
2013-12-01
We evaluated the usefulness of color Doppler flow imaging to compensate for the inadequate resolution of ultrasound (US) monitoring during high-intensity focused ultrasound (HIFU) treatment of hepatocellular carcinoma (HCC). US-guided HIFU ablation assisted by color Doppler flow imaging was performed in 11 patients with small HCC (<3 lesions, <3 cm in diameter). The HIFU system (Chongqing Haifu Tech) was used under US guidance. Color Doppler sonographic studies were performed using an HIFU 6150S US imaging unit and a 2.7-MHz electronic convex probe. The color Doppler images were used to overcome the influence of multi-reflections and the emergence of hyperechoes. In 1 of the 11 patients, multi-reflections were responsible for poor visualization of the tumor; in the other 10 cases, the tumor was poorly visualized because of the emergence of a hyperecho. In these cases, the ability to identify the original tumor location on the monitor by referencing the color Doppler images of the portal vein and the hepatic vein was very useful. HIFU treatments were successfully performed in all 11 patients with the assistance of color Doppler imaging. Color Doppler imaging is useful for the treatment of HCC using HIFU, compensating for the occasionally poor visualization provided by conventional B-mode US imaging.
2017-02-15
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows part of Gale Crater. Basaltic sands are dark blue in this type of false color combination. The Curiosity Rover is located in another portion of Gale Crater, far southwest of this image. Orbit Number: 51803 Latitude: -4.39948 Longitude: 138.116 Instrument: VIS Captured: 2013-08-18 09:04 http://photojournal.jpl.nasa.gov/catalog/PIA21312
Adaptive enhancement for nonuniform illumination images via nonlinear mapping
NASA Astrophysics Data System (ADS)
Wang, Yanfang; Huang, Qian; Hu, Jing
2017-09-01
Nonuniform illumination images suffer from degraded details because of underexposure, overexposure, or a combination of both. To improve the visual quality of color images, underexposed regions should be lightened, whereas overexposed areas need to be dimmed properly. However, discriminating between underexposure and overexposure is troublesome. Compared with traditional methods that use a fixed demarcation value throughout an image, the proposed demarcation changes as the local luminance varies and is thus suitable for handling complicated illumination. Based on this locally adaptive demarcation, a nonlinear modification is applied to the image luminance. Further, with the modified luminance, we propose a nonlinear process to reconstruct a luminance-enhanced color image. For every pixel, this nonlinear process takes the luminance change and the original chromaticity into account, thereby avoiding exaggerated colors in dark areas and depressed colors in very bright regions. Finally, to improve image contrast, a local, image-dependent exponential technique is designed and applied to the RGB channels of the obtained color image. Experimental results demonstrate that our method produces good contrast and vivid color for both nonuniform illumination images and images with normal illumination.
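A generic sketch of the locally adaptive idea, using the blurred luminance as the per-pixel demarcation and a spatially varying gamma (this illustrates the principle, not the paper's exact mapping; sigma and strength are illustrative parameters):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def adaptive_luminance(lum, sigma=30, strength=0.6):
    """Locally adaptive luminance enhancement.

    The blurred luminance serves as the adaptive demarcation between
    under- and overexposure; the per-pixel gamma lightens regions that
    are locally dark (gamma < 1) and dims locally bright ones (gamma > 1)."""
    L = lum.astype(np.float64) / 255.0
    local = gaussian_filter(L, sigma)                 # adaptive demarcation
    gamma = 2.0 ** (strength * (local - 0.5) * 2.0)   # <1 in dark, >1 in bright
    out = L ** gamma
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)
```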
Object knowledge changes visual appearance: semantic effects on color afterimages.
Lupyan, Gary
2015-10-01
According to predictive coding models of perception, what we see is determined jointly by the current input and the priors established by previous experience, expectations, and other contextual factors. The same input can thus be perceived differently depending on the priors that are brought to bear during viewing. Here, I show that expected (diagnostic) colors are perceived more vividly than arbitrary or unexpected colors, particularly when color input is unreliable. Participants were tested on a version of the 'Spanish Castle Illusion' in which viewing a hue-inverted image renders a subsequently shown achromatic version of the image in vivid color. Adapting to objects with intrinsic colors (e.g., a pumpkin) led to stronger afterimages than adapting to arbitrarily colored objects (e.g., a pumpkin-colored car). Considerably stronger afterimages were also produced by scenes containing intrinsically colored elements (grass, sky) compared to scenes with arbitrarily colored objects (books). The differences between images with diagnostic and arbitrary colors disappeared when the association between the image and color priors was weakened by, e.g., presenting the image upside-down, consistent with the prediction that color appearance is being modulated by color knowledge. Visual inputs that conflict with prior knowledge appear to be phenomenologically discounted, but this discounting is moderated by input certainty, as shown by the final study which uses conventional images rather than afterimages. As input certainty is increased, unexpected colors can become easier to detect than expected ones, a result consistent with predictive-coding models. Copyright © 2015 Elsevier B.V. All rights reserved.
Research on inosculation between master of ceremonies or players and virtual scene in virtual studio
NASA Astrophysics Data System (ADS)
Li, Zili; Zhu, Guangxi; Zhu, Yaoting
2003-04-01
A technical approach to constructing a virtual studio is proposed in which an orientation tracker and a telemeter augment a conventional BETACAM pickup camera and connect to a software module on the host. A virtual camera model named the Camera & Post-camera Coupling Pair is put forward, which differs from the common model in computer graphics and is bound to the real BETACAM pickup camera during shooting. A formula is derived to compute the foreground and background frame-buffer images of the virtual scene, whose boundary is based on the depth of the target point along the real camera's projective ray. Real-time consistency is achieved between the video image sequences of the master of ceremonies or players and the CG video image sequences of the virtual scene in spatial position, perspective relationship, and image object masking. Experimental results show that the proposed scheme for constructing a virtual studio is feasible, and more practical and effective than existing virtual-studio technology based on color keying and background image synthesis using non-linear video editing.
A Definitive Optical Detection of a Supercluster at Z ~ 0.91
NASA Astrophysics Data System (ADS)
Lubin, Lori M.; Brunner, Robert; Metzger, Mark R.; Postman, Marc; Oke, J. B.
2000-03-01
We present the results from a multiband optical imaging program that has definitively confirmed the existence of a supercluster at z~0.91. Two massive clusters of galaxies, Cl 1604+4304 at z=0.897 and Cl 1604+4321 at z=0.924, were originally observed in the high-redshift cluster survey of Oke, Postman, & Lubin. They are separated by 4300 km s-1 in radial velocity and 17' on the plane of the sky. Their physical and redshift proximity suggested a promising supercluster candidate. Deep BRi imaging of the region between the two clusters indicates a large population of red galaxies. This population forms a tight, red sequence in the color-magnitude diagram at (R-i)~1.4. The characteristic color is identical to that of the spectroscopically confirmed early-type galaxies in the two member clusters. The red galaxies are spread throughout the 5 h-1 Mpc region between Cl 1604+4304 and Cl 1604+4321. Their spatial distribution delineates the entire large-scale structure with high concentrations at the cluster centers. In addition, we detect a significant overdensity of red galaxies directly between Cl 1604+4304 and Cl 1604+4321 which is the signature of a third, rich cluster associated with this system. The strong sequence of red galaxies and their spatial distribution clearly indicate that we have discovered a supercluster at z~0.91.
78 FR 18611 - Summit on Color in Medical Imaging; Cosponsored Public Workshop; Request for Comments
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-27
[...] Summit on Color in Medical Imaging; Cosponsored Public Workshop; Request for Comments. AGENCY: Food and Drug Administration [...]. The Food and Drug Administration (FDA) and cosponsor International Color Consortium (ICC) are announcing the following public workshop entitled "Summit on Color in Medical Imaging: An International [...]".
The Role of Color and Morphologic Characteristics in Dermoscopic Diagnosis.
Bajaj, Shirin; Marchetti, Michael A; Navarrete-Dechent, Cristian; Dusza, Stephen W; Kose, Kivanc; Marghoob, Ashfaq A
2016-06-01
Both colors and structures are considered important in the dermoscopic evaluation of skin lesions but their relative significance is unknown. To determine if diagnostic accuracy for common skin lesions differs between gray-scale and color dermoscopic images. A convenience sample of 40 skin lesions (8 nevi, 8 seborrheic keratoses, 7 basal cell carcinomas, 7 melanomas, 4 hemangiomas, 4 dermatofibromas, 2 squamous cell carcinomas [SCCs]) was selected and shown to attendees of a dermoscopy course (2014 Memorial Sloan Kettering Cancer Center dermoscopy course). Twenty lesions were shown only once, either in gray-scale (n = 10) or color (n = 10) (nonpaired). Twenty lesions were shown twice, once in gray-scale (n = 20) and once in color (n = 20) (paired). Participants provided their diagnosis and confidence level for each of the 60 images. Of the 261 attendees, 158 participated (60.5%) in the study. Most were attending physicians (n = 76 [48.1%]). Most participants were practicing or training in dermatology (n = 144 [91.1%]). The median (interquartile range) experience evaluating skin lesions and using dermoscopy of participants was 6 (13.5) and 2 (4.0) years, respectively. Diagnostic accuracy and confidence level of participants evaluating gray-scale and color images. Two separate analyses were performed: (1) an unpaired evaluation comparing gray-scale and color images shown either once or for the first time, and (2) a paired evaluation comparing pairs of gray-scale and color images of the same lesion. In univariate analysis of unpaired images, color images were less likely to be diagnosed correctly compared with gray-scale images (odds ratio [OR], 0.8; P < .001). Using gray-scale images as the reference, multivariate analyses of both unpaired and paired images found no association between correct lesion diagnosis and use of color images (OR, 1.0; P = .99, and OR, 1.2; P = .82, respectively). Stratified analysis of paired images using a color by diagnosis interaction term showed that participants were more likely to make a correct diagnosis of SCC and hemangioma in color (P < .001 for both comparisons) and dermatofibroma in gray-scale (P < .001). Morphologic characteristics (ie, structures and patterns), not color, provide the primary diagnostic clue in dermoscopy. Use of gray-scale images may improve teaching of dermoscopy to novices by emphasizing the evaluation of morphology.
BLUE STRAGGLERS IN GLOBULAR CLUSTER 47 TUCANAE
NASA Technical Reports Server (NTRS)
2002-01-01
The core of globular cluster 47 Tucanae is home to many blue stragglers, rejuvenated stars that glow with the blue light of young stars. A ground-based telescope image (on the left) shows the entire crowded core of 47 Tucanae, located 15,000 light-years away in the constellation Tucana. Peering into the heart of the globular cluster's bright core, the Hubble Space Telescope's Wide Field and Planetary Camera 2 separated the dense clump of stars into many individual stars (image on right). Some of these stars shine with the light of old stars; others with the blue light of blue stragglers. The yellow circles in the Hubble telescope image highlight several of the cluster's blue stragglers. Analysis for this observation centered on one massive blue straggler. Astronomers theorize that blue stragglers are formed either by the slow merger of stars in a double-star system or by the collision of two unrelated stars. For the blue straggler in 47 Tucanae, astronomers favor the slow merger scenario. This image is a 3-color composite of archival Hubble Wide Field and Planetary Camera 2 images in the ultraviolet (blue), blue (green), and violet (red) filters. Color tables were assigned and scaled so that the red giant stars appear orange, main-sequence stars are white/green, and blue stragglers are appropriately blue. The ultraviolet images were taken on Oct. 25, 1995, and the blue and violet images were taken on Sept. 1, 1995. Credit: Rex Saffer (Villanova University) and Dave Zurek (STScI), and NASA
Luo, Qiang; Yan, Zhuangzhi; Gu, Dongxing; Cao, Lei
This paper proposes an image interpolation algorithm based on bilinear interpolation and a color correction algorithm based on polynomial regression, implemented on an FPGA, addressing the limited number of imaging pixels and the color distortion of an ultra-thin electronic endoscope. Simulation results showed that the proposed algorithms realize real-time display of 1280 x 720@60Hz HD video and, using the X-rite color checker as the color standard, reduce the average color difference by about 30% compared with the uncorrected output.
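As a rough illustration of the two processing stages named above, the following sketch upscales an image with bilinear interpolation and fits a second-order polynomial color correction from color-checker patch measurements; the polynomial order and the patch data are assumptions rather than details taken from the paper, and numpy stands in for the FPGA datapath.

```python
# Minimal sketch (not the authors' FPGA implementation): bilinear upscaling
# plus polynomial-regression color correction fitted on color-checker patches.
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Upscale an HxWx3 float image in [0,1] by bilinear interpolation."""
    h, w = img.shape[:2]
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None, None]; wx = (xs - x0)[None, :, None]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def poly_features(rgb):
    """Second-order polynomial expansion of Nx3 RGB values."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([np.ones_like(r), r, g, b, r*g, r*b, g*b,
                     r*r, g*g, b*b], axis=1)

def fit_color_correction(measured, reference):
    """Least-squares fit mapping measured patch RGBs to reference RGBs."""
    A = poly_features(measured)
    coeffs, *_ = np.linalg.lstsq(A, reference, rcond=None)
    return coeffs  # shape (10, 3)

def correct(img, coeffs):
    flat = poly_features(img.reshape(-1, 3))
    return np.clip(flat @ coeffs, 0, 1).reshape(img.shape)
```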
What's color got to do with it? The influence of color on visual attention in different categories.
Frey, Hans-Peter; Honey, Christian; König, Peter
2008-10-23
Certain locations attract human gaze in natural visual scenes. Are there measurable features, which distinguish these locations from others? While there has been extensive research on luminance-defined features, only few studies have examined the influence of color on overt attention. In this study, we addressed this question by presenting color-calibrated stimuli and analyzing color features that are known to be relevant for the responses of LGN neurons. We recorded eye movements of 15 human subjects freely viewing colored and grayscale images of seven different categories. All images were also analyzed by the saliency map model (L. Itti, C. Koch, & E. Niebur, 1998). We find that human fixation locations differ between colored and grayscale versions of the same image much more than predicted by the saliency map. Examining the influence of various color features on overt attention, we find two extreme categories: while in rainforest images all color features are salient, none is salient in fractals. In all other categories, color features are selectively salient. This shows that the influence of color on overt attention depends on the type of image. Also, it is crucial to analyze neurophysiologically relevant color features for quantifying the influence of color on attention.
Selection of optimal spectral sensitivity functions for color filter arrays.
Parmar, Manu; Reeves, Stanley J
2010-12-01
A color image meant for human consumption can be appropriately displayed only if at least three distinct color channels are present. Typical digital cameras acquire three-color images with only one sensor. A color filter array (CFA) is placed on the sensor such that only one color is sampled at a particular spatial location. This sparsely sampled signal is then reconstructed to form a color image with information about all three colors at each location. In this paper, we show that the wavelength sensitivity functions of the CFA color filters affect both the color reproduction ability and the spatial reconstruction quality of recovered images. We present a method to select perceptually optimal color filter sensitivity functions based upon a unified spatial-chromatic sampling framework. A cost function independent of particular scenes is defined that expresses the error between a scene viewed by the human visual system and the reconstructed image that represents the scene. A constrained minimization of the cost function is used to obtain optimal values of color-filter sensitivity functions for several periodic CFAs. The sensitivity functions are shown to perform better than typical RGB and CMY color filters in terms of both the s-CIELAB ∆E error metric and a qualitative assessment.
Effective method for detecting regions of given colors and the features of the region surfaces
NASA Astrophysics Data System (ADS)
Gong, Yihong; Zhang, HongJiang
1994-03-01
Color can be used as a very important cue for image recognition. In industrial and commercial areas, color is widely used as a trademark or identifying feature of objects, such as packaged goods, advertising signs, etc. In image database systems, one may retrieve an image of interest by specifying prominent colors and their locations in the image (image retrieval by content). These facts enable us to detect or identify a target object using colors. However, this task depends mainly on how effectively we can identify a color and detect regions of the given color under possibly non-uniform illumination conditions such as shade, highlight, and strong contrast. In this paper, we present an effective method to detect regions matching given colors, along with the features of the region surfaces. We adopt the HVC color coordinates in the method because they completely separate the luminance and chromatic components of colors. Three basis functions, serving as low-pass, high-pass, and band-pass filters, respectively, are introduced.
Color standardization and optimization in whole slide imaging.
Yagi, Yukako
2011-03-30
Standardization and validation of the color displayed by digital slides is an important aspect of digital pathology implementation. While the most common reason for color variation is the variance in the protocols and practices in the histology lab, the color displayed can also be affected by variation in capture parameters (for example, illumination and filters), image processing, and display factors in the digital systems themselves. We have been developing techniques for color validation and optimization along two paths. The first is based on two standard slides that are scanned and displayed by the imaging system in question. In this approach, one slide is embedded with nine filters with colors selected especially for H&E stained slides (resembling a tiny Macbeth color chart); the specific colors of the nine filters were determined in our previous study and modified for whole slide imaging (WSI). The other slide is an H&E stained mouse embryo. Both of these slides were scanned and the displayed images were compared to a standard. The second approach is based on our previous multispectral imaging research. As a first step, the two-slide method (above) was used to identify inaccurate display of color and its cause, and to understand the importance of accurate color in digital pathology. We have also improved the multispectral-based algorithm for more consistent results in stain standardization. In the near future, the results of the two-slide and multispectral techniques can be combined and will be widely available. We have been conducting a series of research and development projects to improve image quality and establish image quality standardization. This paper discusses one of the most important aspects of image quality - color.
Research on image complexity evaluation method based on color information
NASA Astrophysics Data System (ADS)
Wang, Hao; Duan, Jin; Han, Xue-hui; Xiao, Bo
2017-11-01
In order to evaluate the complexity of a color image more effectively and to find the connection between image complexity and image information, this paper presents a method to compute image complexity based on color information. A theoretical analysis first divides complexity at the subjective level into three grades: low, medium, and high. Image features are then extracted, and finally a function relating the complexity value to the color characteristic model is established. The experimental results show that this evaluation method can objectively reconstruct the complexity of an image from its features, and that the values obtained agree well with the complexity perceived by human vision, so the color-based image complexity measure has a certain reference value.
A NEAR-INFRARED STUDY OF THE STAR-FORMING REGION RCW 34
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van der Walt, D. J.; De Villiers, H. M.; Czanik, R. J.
2012-07-15
We report the results of a near-infrared imaging study of a 7.8 × 7.8 arcmin² region centered on the 6.7 GHz methanol maser associated with the RCW 34 star-forming region using the 1.4 m IRSF telescope at Sutherland. A total of 1283 objects were detected simultaneously in J, H, and K for an exposure time of 10,800 s. The J - H, H - K two-color diagram revealed a strong concentration of more than 700 objects with colors similar to what is expected of reddened classical T Tauri stars. The distribution of the objects on the K versus J - K color-magnitude diagram is also suggestive that a significant fraction of the 1283 objects is made up of lower mass pre-main-sequence stars. We also present the luminosity function for the subset of about 700 pre-main-sequence stars and show that it suggests ongoing star formation activity for about 10^7 years. An examination of the spatial distribution of the pre-main-sequence stars shows that the fainter (older) part of the population is more dispersed over the observed region and the brighter (younger) subset is more concentrated around the position of the O8.5V star. This suggests that the physical effects of the O8.5V star and the two early B-type stars on the remainder of the cloud out of which they formed could have played a role in the onset of the more recent episode of star formation in RCW 34.
Color Retinal Image Enhancement Based on Luminosity and Contrast Adjustment.
Zhou, Mei; Jin, Kai; Wang, Shaoze; Ye, Juan; Qian, Dahong
2018-03-01
Many common eye diseases and cardiovascular diseases can be diagnosed through retinal imaging. However, due to uneven illumination, image blurring, and low contrast, retinal images with poor quality are not useful for diagnosis, especially in automated image analyzing systems. Here, we propose a new image enhancement method to improve color retinal image luminosity and contrast. A luminance gain matrix, which is obtained by gamma correction of the value channel in the HSV (hue, saturation, and value) color space, is used to enhance the R, G, and B (red, green and blue) channels, respectively. Contrast is then enhanced in the luminosity channel of L * a * b * color space by CLAHE (contrast-limited adaptive histogram equalization). Image enhancement by the proposed method is compared to other methods by evaluating quality scores of the enhanced images. The performance of the method is mainly validated on a dataset of 961 poor-quality retinal images. Quality assessment (range 0-1) of image enhancement of this poor dataset indicated that our method improved color retinal image quality from an average of 0.0404 (standard deviation 0.0291) up to an average of 0.4565 (standard deviation 0.1000). The proposed method is shown to achieve superior image enhancement compared to contrast enhancement in other color spaces or by other related methods, while simultaneously preserving image naturalness. This method of color retinal image enhancement may be employed to assist ophthalmologists in more efficient screening of retinal diseases and in development of improved automated image analysis for clinical diagnosis.
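A compact sketch of the two-stage pipeline just described, assuming OpenCV conventions; the gamma value and CLAHE parameters are illustrative assumptions rather than the paper's tuned settings.

```python
# Sketch: a luminance gain from gamma correction of the HSV value channel
# scales R, G, and B alike; CLAHE then enhances contrast in the L channel
# of L*a*b*. Gamma and CLAHE settings are assumptions.
import cv2
import numpy as np

def enhance_retinal(bgr, gamma=2.2):
    img = bgr.astype(np.float32) / 255.0
    v = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:, :, 2]
    v_gamma = np.power(v, 1.0 / gamma)            # gamma-corrected value channel
    gain = v_gamma / np.maximum(v, 1e-6)          # per-pixel luminance gain
    img = np.clip(img * gain[:, :, None], 0, 1)   # apply gain to all channels

    lab = cv2.cvtColor((img * 255).astype(np.uint8), cv2.COLOR_BGR2LAB)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab[:, :, 0] = clahe.apply(lab[:, :, 0])      # contrast-limited equalization on L
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
```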
A simple approach to a vision-guided unmanned vehicle
NASA Astrophysics Data System (ADS)
Archibald, Christopher; Millar, Evan; Anderson, Jon D.; Archibald, James K.; Lee, Dah-Jye
2005-10-01
This paper describes the design and implementation of a vision-guided autonomous vehicle that represented BYU in the 2005 Intelligent Ground Vehicle Competition (IGVC), in which autonomous vehicles navigate a course marked with white lines while avoiding obstacles consisting of orange construction barrels, white buckets, and potholes. Our project began in the context of a senior capstone course in which multi-disciplinary teams of five students were responsible for the design, construction, and programming of their own robots. Each team received a computer motherboard, a camera, and a small budget for the purchase of additional hardware, including a chassis and motors. The resource constraints resulted in a simple vision-based design that processes the sequence of images from the single camera to determine motor controls. Color segmentation separates white and orange from each image, and the segmented image is then examined using a 10x10 grid system, effectively creating a low-resolution picture for each of the two colors. Depending on its position, each filled grid square influences the selection of an appropriate turn magnitude. Motor commands determined from the white and orange images are then combined to yield the final motion command for each video frame. We describe the complete algorithm and the robot hardware, and we present results that show the overall effectiveness of our control approach.
Quantitative characterization of color Doppler images: reproducibility, accuracy, and limitations.
Delorme, S; Weisser, G; Zuna, I; Fein, M; Lorenz, A; van Kaick, G
1995-01-01
A computer-based quantitative analysis for color Doppler images of complex vascular formations is presented. The red-green-blue-signal from an Acuson XP10 is frame-grabbed and digitized. By matching each image pixel with the color bar, color pixels are identified and assigned to the corresponding flow velocity (color value). Data analysis consists of delineation of a region of interest and calculation of the relative number of color pixels in this region (color pixel density) as well as the mean color value. The mean color value was compared to flow velocities in a flow phantom. The thyroid and carotid artery in a volunteer were repeatedly examined by a single examiner to assess intra-observer variability. The thyroids in five healthy controls were examined by three experienced physicians to assess the extent of inter-observer variability and observer bias. The correlation between the mean color value and flow velocity ranged from 0.94 to 0.96 for a range of velocities determined by pulse repetition frequency. The average deviation of the mean color value from the flow velocity was 22% to 41%, depending on the selected pulse repetition frequency (range of deviations, -46% to +66%). Flow velocity was underestimated with inadequately low pulse repetition frequency, or inadequately high reject threshold. An overestimation occurred with inadequately high pulse repetition frequency. The highest intra-observer variability was 22% (relative standard deviation) for the color pixel density, and 9.1% for the mean color value. The inter-observer variation was approximately 30% for the color pixel density, and 20% for the mean color value. In conclusion, computer assisted image analysis permits an objective description of color Doppler images. However, the user must be aware that image acquisition under in vivo conditions as well as physical and instrumental factors may considerably influence the results.
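As a rough sketch of the analysis chain just described (not the authors' implementation), the following matches each pixel against the digitized color bar and reports the two statistics; the RGB matching tolerance is an assumption, since the paper does not state one.

```python
# Match each ROI pixel of a color Doppler frame to the nearest color-bar
# entry, then report color pixel density and mean color value (velocity).
import numpy as np

def doppler_statistics(frame_rgb, colorbar_rgb, colorbar_velocity,
                       roi_mask, tol=20.0):
    """frame_rgb: HxWx3; colorbar_rgb: Nx3; colorbar_velocity: N velocities."""
    px = frame_rgb[roi_mask].astype(np.float32)              # pixels inside ROI
    # distance of every ROI pixel to every color-bar entry
    d = np.linalg.norm(px[:, None, :] - colorbar_rgb[None, :, :].astype(np.float32),
                       axis=2)
    nearest = d.argmin(axis=1)
    is_color = d.min(axis=1) < tol                           # reject gray B-mode pixels
    density = is_color.mean()                                # color pixel density
    mean_value = (colorbar_velocity[nearest[is_color]].mean()
                  if is_color.any() else 0.0)                # mean color value
    return density, mean_value
```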
NASA Astrophysics Data System (ADS)
Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan
2017-03-01
Digital pathology and telepathology require imaging tools with high-throughput, high-resolution and accurate color reproduction. Lens-free on-chip microscopy based on digital in-line holography is a promising technique towards these needs, as it offers a wide field of view (FOV >20 mm2) and high resolution with a compact, low-cost and portable setup. Color imaging has been previously demonstrated by combining reconstructed images at three discrete wavelengths in the red, green and blue parts of the visible spectrum, i.e., the RGB combination method. However, this RGB combination method is subject to color distortions. To improve the color performance of lens-free microscopy for pathology imaging, here we present a wavelet-based color fusion imaging framework, termed "digital color fusion microscopy" (DCFM), which digitally fuses together a grayscale lens-free microscope image taken at a single wavelength and a low-resolution and low-magnification color-calibrated image taken by a lens-based microscope, which can simply be a mobile phone based cost-effective microscope. We show that the imaging results of an H&E stained breast cancer tissue slide with the DCFM technique come very close to a color-calibrated microscope using a 40x objective lens with 0.75 NA. Quantitative comparison showed 2-fold reduction in the mean color distance using the DCFM method compared to the RGB combination method, while also preserving the high-resolution features of the lens-free microscope. Due to the cost-effective and field-portable nature of both lens-free and mobile-phone microscopy techniques, their combination through the DCFM framework could be useful for digital pathology and telepathology applications, in low-resource and point-of-care settings.
NASA Astrophysics Data System (ADS)
Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan
2017-03-01
Digital holographic on-chip microscopy achieves large space-bandwidth-products (e.g., >1 billion) by making use of pixel super-resolution techniques. To synthesize a digital holographic color image, one can take three sets of holograms representing the red (R), green (G) and blue (B) parts of the spectrum and digitally combine them to synthesize a color image. The data acquisition efficiency of this sequential illumination process can be improved by 3-fold using wavelength-multiplexed R, G and B illumination that simultaneously illuminates the sample, and using a Bayer color image sensor with known or calibrated transmission spectra to digitally demultiplex these three wavelength channels. This demultiplexing step is conventionally used with interpolation-based Bayer demosaicing methods. However, because the pixels of different color channels on a Bayer image sensor chip are not at the same physical location, conventional interpolation-based demosaicing process generates strong color artifacts, especially at rapidly oscillating hologram fringes, which become even more pronounced through digital wave propagation and phase retrieval processes. Here, we demonstrate that by merging the pixel super-resolution framework into the demultiplexing process, such color artifacts can be greatly suppressed. This novel technique, termed demosaiced pixel super-resolution (D-PSR) for digital holographic imaging, achieves very similar color imaging performance compared to conventional sequential R,G,B illumination, with 3-fold improvement in image acquisition time and data-efficiency. We successfully demonstrated the color imaging performance of this approach by imaging stained Pap smears. The D-PSR technique is broadly applicable to high-throughput, high-resolution digital holographic color microscopy techniques that can be used in resource-limited-settings and point-of-care offices.
Enhancement of low light level images using color-plus-mono dual camera.
Jung, Yong Ju
2017-05-15
In digital photography, the improvement of imaging quality in low light shooting is one of the users' needs. Unfortunately, conventional smartphone cameras that use a single, small image sensor cannot provide satisfactory quality in low light level images. A color-plus-mono dual camera that consists of two horizontally separate image sensors, which simultaneously captures both a color and mono image pair of the same scene, could be useful for improving the quality of low light level images. However, an incorrect image fusion between the color and mono image pair could also have negative effects, such as the introduction of severe visual artifacts in the fused images. This paper proposes a selective image fusion technique that applies an adaptive guided filter-based denoising and selective detail transfer to only those pixels deemed reliable with respect to binocular image fusion. We employ a dissimilarity measure and binocular just-noticeable-difference (BJND) analysis to identify unreliable pixels that are likely to cause visual artifacts during image fusion via joint color image denoising and detail transfer from the mono image. By constructing an experimental system of color-plus-mono camera, we demonstrate that the BJND-aware denoising and selective detail transfer is helpful in improving the image quality during low light shooting.
NASA Astrophysics Data System (ADS)
Shimobaba, Tomoyoshi; Kakue, Takashi; Ito, Tomoyoshi
2014-06-01
We propose acceleration of color computer-generated holograms (CGHs) from three-dimensional (3D) scenes that are expressed as texture (RGB) and depth (D) images. These images are obtained by 3D graphics libraries and RGB-D cameras: for example, OpenGL and Kinect, respectively. We can regard them as two-dimensional (2D) cross-sectional images along the depth direction. The generation of CGHs from the 2D cross-sectional images requires multiple diffraction calculations. If we use convolution-based diffraction such as the angular spectrum method, the diffraction calculation takes a long time and requires large memory usage, because the convolution calculation requires expanding the 2D cross-sectional images to avoid wraparound noise. In this paper, we first describe acceleration of the diffraction calculation using "band-limited double-step Fresnel diffraction," which does not require the expansion. Next, we describe color CGH acceleration using color space conversion. In general, color CGHs are generated in RGB color space; however, the same calculation must be repeated for each color component, so the computational burden of color CGH generation increases three-fold compared with monochrome CGH generation. We can reduce the computational burden by using YCbCr color space, because the 2D cross-sectional images in YCbCr color space can be down-sampled without impairing the image quality.
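A minimal sketch of the color-space step, assuming the common BT.601 RGB-to-YCbCr conversion (the paper does not specify the matrix): luma stays at full resolution while the chroma cross-sections are down-sampled before the per-channel diffraction calculations.

```python
# Convert RGB cross-sections to YCbCr and down-sample the chroma planes,
# reducing the work of the subsequent per-channel diffraction calculations.
import numpy as np

RGB2YCBCR = np.array([[ 0.299,  0.587,  0.114],   # BT.601 (assumed)
                      [-0.169, -0.331,  0.500],
                      [ 0.500, -0.419, -0.081]])

def rgb_to_ycbcr(rgb):                     # rgb: HxWx3 in [0, 1]
    return rgb @ RGB2YCBCR.T

def downsample2(plane):                    # 2x2 box averaging of one plane
    h, w = plane.shape
    return plane[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).mean(axis=(1, 3))

def split_for_cgh(rgb):
    ycc = rgb_to_ycbcr(rgb)
    y = ycc[:, :, 0]                       # keep luma at full resolution
    cb = downsample2(ycc[:, :, 1])         # chroma tolerates down-sampling
    cr = downsample2(ycc[:, :, 2])
    return y, cb, cr
```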
Pet fur color and texture classification
NASA Astrophysics Data System (ADS)
Yen, Jonathan; Mukherjee, Debarghar; Lim, SukHwan; Tretter, Daniel
2007-01-01
Object segmentation is important in image analysis for imaging tasks such as image rendering and image retrieval. Pet owners have been known to be quite vocal about how important it is to render their pets perfectly. We present here an algorithm for pet (mammal) fur color classification and an algorithm for pet (animal) fur texture classification. Pet fur color classification can be applied as a necessary condition for identifying the regions in an image that may contain pets, much like skin tone classification for human flesh detection. As a result of evolution, fur coloration of all mammals is produced by a natural organic pigment called melanin, which has only a very limited color range. We have conducted a statistical analysis and concluded that mammal fur colors can only be in levels of gray or in two colors after proper color quantization. This pet fur color classification algorithm has been applied to pet-eye detection. We also present an algorithm for animal fur texture classification using the recently developed multi-resolution directional sub-band Contourlet transform. The experimental results are very promising, as these transforms can identify regions of an image that may contain the fur of mammals, the scales of reptiles, the feathers of birds, etc. Combining the color and texture classification, one can build a set of strong classifiers for identifying possible animals in an image.
Color normalization for robust evaluation of microscopy images
NASA Astrophysics Data System (ADS)
Švihlík, Jan; Kybic, Jan; Habart, David
2015-09-01
This paper deals with color normalization of microscopy images of Langerhans islets in order to increase robustness of the islet segmentation to illumination changes. The main application is automatic quantitative evaluation of the islet parameters, useful for determining the feasibility of islet transplantation in diabetes. First, background illumination inhomogeneity is compensated and a preliminary foreground/background segmentation is performed. The color normalization itself is done in either lαβ or logarithmic RGB color spaces, by comparison with a reference image. The color-normalized images are segmented using color-based features and pixel-wise logistic regression, trained on manually labeled images. Finally, relevant statistics such as the total islet area are evaluated in order to determine the success likelihood of the transplantation.
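The final classification step can be sketched briefly; this assumes scikit-learn and raw RGB pixel features, whereas the paper first color-normalizes the images in lαβ or logarithmic RGB against a reference image and compensates background illumination.

```python
# Pixel-wise logistic regression on color features, trained on manually
# labeled images, as in the segmentation stage described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_pixel_classifier(images, label_masks):
    """images: list of HxWx3 float arrays; label_masks: matching HxW {0,1} arrays."""
    X = np.concatenate([im.reshape(-1, 3) for im in images])
    y = np.concatenate([m.reshape(-1) for m in label_masks])
    clf = LogisticRegression(max_iter=1000)
    return clf.fit(X, y)

def segment(clf, image):
    probs = clf.predict_proba(image.reshape(-1, 3))[:, 1]
    return probs.reshape(image.shape[:2]) > 0.5   # islet foreground mask
```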
Color constancy using bright-neutral pixels
NASA Astrophysics Data System (ADS)
Wang, Yanfang; Luo, Yupin
2014-03-01
An effective illuminant-estimation approach for color constancy is proposed. Bright and near-neutral pixels are selected to jointly represent the illuminant color and utilized for illuminant estimation. To assess the representing capability of pixels, bright-neutral strength (BNS) is proposed by combining pixel chroma and brightness. Accordingly, a certain percentage of pixels with the largest BNS is selected to be the representative set. For every input image, a proper percentage value is determined via an iterative strategy by seeking the optimal color-corrected image. To compare various color-corrected images of an input image, image color-cast degree (ICCD) is devised using means and standard deviations of RGB channels. Experimental evaluation on standard real-world datasets validates the effectiveness of the proposed approach.
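A hedged sketch of the selection idea follows. The exact bright-neutral strength (BNS) formula below (brightness minus deviation from gray) and the fixed percentage are assumptions for illustration; the paper combines chroma and brightness in its own way and chooses the percentage per image by iterating on the ICCD criterion.

```python
# Score pixels by an assumed bright-neutral strength, keep the top p%,
# average them as the illuminant estimate, and apply a von Kries correction.
import numpy as np

def estimate_illuminant(rgb, top_percent=2.0):
    flat = rgb.reshape(-1, 3).astype(np.float64)
    brightness = flat.sum(axis=1)
    mu = flat.mean(axis=1, keepdims=True)
    chroma = np.abs(flat - mu).sum(axis=1)          # deviation from neutral gray
    bns = brightness - chroma                        # assumed scoring function
    k = max(1, int(len(flat) * top_percent / 100))
    idx = np.argpartition(bns, -k)[-k:]              # top-k bright-neutral pixels
    illum = flat[idx].mean(axis=0)
    return illum / illum.sum()

def correct_von_kries(rgb, illum):
    gains = illum.mean() / np.maximum(illum, 1e-9)   # per-channel diagonal gains
    return np.clip(rgb * gains, 0, 1)
```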
Image Reconstruction for Hybrid True-Color Micro-CT
Xu, Qiong; Yu, Hengyong; Bennett, James; He, Peng; Zainon, Rafidah; Doesburg, Robert; Opie, Alex; Walsh, Mike; Shen, Haiou; Butler, Anthony; Butler, Phillip; Mou, Xuanqin; Wang, Ge
2013-01-01
X-ray micro-CT is an important imaging tool for biomedical researchers. Our group has recently proposed a hybrid “true-color” micro-CT system to improve contrast resolution with lower system cost and radiation dose. The system incorporates an energy-resolved photon-counting true-color detector into a conventional micro-CT configuration, and can be used for material decomposition. In this paper, we demonstrate an interior color-CT image reconstruction algorithm developed for this hybrid true-color micro-CT system. A compressive sensing-based statistical interior tomography method is employed to reconstruct each channel in the local spectral imaging chain, where the reconstructed global gray-scale image from the conventional imaging chain served as the initial guess. Principal component analysis was used to map the spectral reconstructions into the color space. The proposed algorithm was evaluated by numerical simulations, physical phantom experiments, and animal studies. The results confirm the merits of the proposed algorithm, and demonstrate the feasibility of the hybrid true-color micro-CT system. Additionally, a “color diffusion” phenomenon was observed whereby high-quality true-color images are produced not only inside the region of interest, but also in neighboring regions. It appears that harnessing this phenomenon could potentially reduce the color detector size needed for a given ROI, further reducing system cost and radiation dose. PMID:22481806
Color transfer algorithm in medical images
NASA Astrophysics Data System (ADS)
Wang, Weihong; Xu, Yangfa
2007-12-01
In the digital virtual human project, image data are acquired from frozen slices of a human body specimen. The color and brightness within a group of images of a certain organ can differ considerably, and this quality variation causes great difficulty in edge extraction, segmentation, and 3D reconstruction. It is therefore necessary to unify the color of the images. The color transfer algorithm is well suited to this kind of problem. This paper introduces the principle of the algorithm and applies it to medical image processing.
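The principle can be illustrated with the classic statistics-matching form of color transfer (Reinhard et al.): shift and scale each channel of the source to match the reference's mean and standard deviation. Working directly in RGB here is a simplification; the original algorithm operates in the decorrelated lαβ space.

```python
# Minimal statistics-based color transfer: match per-channel mean and
# standard deviation of the source image to those of a reference image.
import numpy as np

def color_transfer(source, reference):
    """source, reference: HxWx3 float arrays in [0, 1]."""
    src = source.reshape(-1, 3)
    ref = reference.reshape(-1, 3)
    s_mu, s_std = src.mean(0), src.std(0) + 1e-6
    r_mu, r_std = ref.mean(0), ref.std(0)
    out = (src - s_mu) / s_std * r_std + r_mu   # shift/scale channel statistics
    return np.clip(out, 0, 1).reshape(source.shape)
```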
Global Binary Continuity for Color Face Detection With Complex Background
NASA Astrophysics Data System (ADS)
Belavadi, Bhaskar; Mahendra Prashanth, K. V.; Joshi, Sujay S.; Suprathik, N.
2017-08-01
In this paper, we propose a method to detect human faces in color images with complex backgrounds. The proposed algorithm makes use of two color space models, specifically HSV and YCgCr. The color-segmented image is filled uniformly with a single (binary) color, and all unwanted discontinuous lines are then removed to obtain the final image. Experimental results on the Caltech database show that the proposed model accomplishes far better segmentation for faces of varying orientation, skin color, and background environment.
Superresolution with the focused plenoptic camera
NASA Astrophysics Data System (ADS)
Georgiev, Todor; Chunev, Georgi; Lumsdaine, Andrew
2011-03-01
Digital images from a CCD or CMOS sensor with a color filter array must undergo a demosaicing process to combine the separate color samples into a single color image. This interpolation process can interfere with the subsequent superresolution process. Plenoptic superresolution, which relies on precise sub-pixel sampling across captured microimages, is particularly sensitive to such resampling of the raw data. In this paper we present an approach for superresolving plenoptic images that takes place at the time of demosaicing the raw color image data. Our approach exploits the interleaving provided by typical color filter arrays (e.g., Bayer filter) to further refine plenoptic sub-pixel sampling. Our rendering algorithm treats the color channels in a plenoptic image separately, which improves final superresolution by a factor of two. With appropriate plenoptic capture we show the theoretical possibility for rendering final images at full sensor resolution.
Single underwater image enhancement based on color cast removal and visibility restoration
NASA Astrophysics Data System (ADS)
Li, Chongyi; Guo, Jichang; Wang, Bo; Cong, Runmin; Zhang, Yan; Wang, Jian
2016-05-01
Images taken under underwater condition usually have color cast and serious loss of contrast and visibility. Degraded underwater images are inconvenient for observation and analysis. In order to address these problems, an underwater image-enhancement method is proposed. A simple yet effective underwater image color cast removal algorithm is first presented based on the optimization theory. Then, based on the minimum information loss principle and inherent relationship of medium transmission maps of three color channels in an underwater image, an effective visibility restoration algorithm is proposed to recover visibility, contrast, and natural appearance of degraded underwater images. To evaluate the performance of the proposed method, qualitative comparison, quantitative comparison, and color accuracy test are conducted. Experimental results demonstrate that the proposed method can effectively remove color cast, improve contrast and visibility, and recover natural appearance of degraded underwater images. Additionally, the proposed method is comparable to and even better than several state-of-the-art methods.
NASA Astrophysics Data System (ADS)
Saur, Günter; Krüger, Wolfgang
2016-06-01
Change detection is an important task when using unmanned aerial vehicles (UAV) for video surveillance. We address changes of short time scale using observations in time distances of a few hours. Each observation (previous and current) is a short video sequence acquired by UAV in near-Nadir view. Relevant changes are, e.g., recently parked or moved vehicles. Examples for non-relevant changes are parallaxes caused by 3D structures of the scene, shadow and illumination changes, and compression or transmission artifacts. In this paper we present (1) a new feature based approach to change detection, (2) a combination with extended image differencing (Saur et al., 2014), and (3) the application to video sequences using temporal filtering. In the feature based approach, information about local image features, e.g., corners, is extracted in both images. The label "new object" is generated at image points, where features occur in the current image and no or weaker features are present in the previous image. The label "vanished object" corresponds to missing or weaker features in the current image and present features in the previous image. This leads to two "directed" change masks and differs from image differencing where only one "undirected" change mask is extracted which combines both label types to the single label "changed object". The combination of both algorithms is performed by merging the change masks of both approaches. A color mask showing the different contributions is used for visual inspection by a human image interpreter.
Color correction optimization with hue regularization
NASA Astrophysics Data System (ADS)
Zhang, Heng; Liu, Huaping; Quan, Shuxue
2011-01-01
Previous work has suggested that observers are capable of judging the quality of an image without any knowledge of the original scene. When no reference is available, observers can extract the apparent objects in an image and compare them with the typical colors of similar objects recalled from their memories. Some generally agreed upon research results indicate that although perfect colorimetric rendering is not conspicuous and color errors can be well tolerated, the appropriate rendition of certain memory colors such as skin, grass, and sky is an important factor in the overall perceived image quality. These colors are appreciated in a fairly consistent manner and are memorized with slightly different hues and higher color saturation. The aim of color correction for a digital color pipeline is to transform the image data from a device dependent color space to a target color space, usually through a color correction matrix which in its most basic form is optimized through linear regressions between the two sets of data in two color spaces in the sense of minimized Euclidean color error. Unfortunately, this method could result in objectionable distortions if the color error biased certain colors undesirably. In this paper, we propose a color correction optimization method with preferred color reproduction in mind through hue regularization and present some experimental results.
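The baseline the paper builds on, a least-squares color correction matrix, can be sketched as follows; the hue-regularized objective the paper proposes is only indicated in a comment, since its exact form is not given here.

```python
# Fit a 3x3 color correction matrix by least squares between device RGB
# and target color values measured on a set of patches.
import numpy as np

def fit_ccm(device_rgb, target_rgb):
    """device_rgb, target_rgb: Nx3 patch measurements in corresponding spaces."""
    M, *_ = np.linalg.lstsq(device_rgb, target_rgb, rcond=None)
    return M                                  # 3x3 matrix, applied as rgb @ M

def apply_ccm(img, M):
    return np.clip(img.reshape(-1, 3) @ M, 0, 1).reshape(img.shape)

# Hypothetical regularized objective (sketch only, not the paper's formula):
#   E(M) = sum_i ||x_i M - t_i||^2 + lambda * sum_j hue_error(x_j M, t_j)^2
# where j indexes memory-color patches (skin, grass, sky).
```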
NASA Astrophysics Data System (ADS)
Ou-Yang, Mang; Jeng, Wei-De; Lai, Chien-Cheng; Wu, Hsien-Ming; Lin, Jyh-Hung
2016-01-01
The type of illumination system and color filters used typically generates varying levels of color difference in capsule endoscopes, which influences medical diagnoses. In order to calibrate the color difference caused by the optical system, this study applied a radial imaging capsule endoscope (RICE) to photograph standard color charts, which were then employed to calculate the color gamut of RICE. The color gamut was also measured using a spectrometer in order to obtain high-precision color information, and the results obtained using both methods were compared. Subsequently, two color-correction methods, polynomial transform and conformal mapping, were used to reduce the color difference. Before color calibration, the color difference caused by the optical system in RICE was 21.45±1.09. Through the proposed polynomial transformation, the color difference was effectively reduced to 1.53±0.07. With the other proposed method, conformal mapping, the color difference was further reduced to 1.32±0.11, which is imperceptible to the human eye because it is <1.5. Real-time color correction was then achieved by combining this algorithm with a field-programmable gate array, and the results of the color correction can be viewed in real-time images.
Frequency division multiplexed multi-color fluorescence microscope system
NASA Astrophysics Data System (ADS)
Le, Vu Nam; Yang, Huai Dong; Zhang, Si Chun; Zhang, Xin Rong; Jin, Guo Fan
2017-10-01
A grayscale camera can only obtain a gray-scale image of an object, whereas multicolor imaging can obtain color information to distinguish sample structures that have the same shape but different colors. In fluorescence microscopy, current methods of multicolor imaging are flawed: they reduce the efficiency of fluorescence imaging, lower the effective sampling rate of the CCD, etc. In this paper, we propose a novel multicolor fluorescence microscopy imaging method based on frequency division multiplexing (FDM), which modulates the excitation lights and demodulates the fluorescence signal in the frequency domain. The method uses periodic functions of different frequencies to modulate the amplitude of each excitation light and then combines these beams for illumination in a fluorescence microscopy imaging system. The system records a multicolor fluorescence image with a grayscale camera. During data processing, the signal obtained by each pixel of the camera is processed with a discrete Fourier transform, decomposed by color in the frequency domain, and then inverse-transformed. After applying this process to the signals from all pixels, monochrome images of each color on the image plane are obtained and a multicolor image is acquired. Based on this method, we constructed a two-color fluorescence microscope system with excitation wavelengths of 488 nm and 639 nm. Using this system to observe the linear movement of two kinds of fluorescent microspheres, we obtained, after data processing, a two-color fluorescence video consistent with the original scene. This experiment shows that dynamic phenomena in multicolor fluorescent biological samples can be observed with this method. Compared with current methods, this method obtains the image signals of each color at the same time, and the color video's frame rate matches the frame rate of the camera. The optical system is simpler and needs no extra color-separation element. In addition, the method effectively filters out ambient light and other light signals that are not affected by the modulation.
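The per-pixel demodulation step can be sketched as follows, assuming known modulation frequencies and a grayscale frame stack; selecting the nearest DFT bin per modulation tone is an illustrative simplification.

```python
# Demultiplex a frequency-division-multiplexed fluorescence recording:
# each excitation light is amplitude-modulated at its own frequency, so
# the time series at every pixel has one spectral peak per fluorophore.
import numpy as np

def demultiplex(stack, frame_rate, mod_freqs):
    """stack: T x H x W grayscale frames; mod_freqs: Hz, one per color."""
    T = stack.shape[0]
    spectrum = np.fft.rfft(stack, axis=0)            # per-pixel DFT over time
    freqs = np.fft.rfftfreq(T, d=1.0 / frame_rate)
    channels = []
    for f in mod_freqs:
        k = np.argmin(np.abs(freqs - f))             # bin of this modulation tone
        channels.append(np.abs(spectrum[k]) * 2 / T) # demodulated amplitude image
    return channels                                   # one monochrome image per color
```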
SYNMAG PHOTOMETRY: A FAST TOOL FOR CATALOG-LEVEL MATCHED COLORS OF EXTENDED SOURCES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bundy, Kevin; Yasuda, Naoki; Hogg, David W.
2012-12-01
Obtaining reliable, matched photometry for galaxies imaged by different observatories represents a key challenge in the era of wide-field surveys spanning more than several hundred square degrees. Methods such as flux fitting, profile fitting, and PSF homogenization followed by matched-aperture photometry are all computationally expensive. We present an alternative solution called 'synthetic aperture photometry' that exploits galaxy profile fits in one band to efficiently model the observed, point-spread-function-convolved light profile in other bands and predict the flux in arbitrarily sized apertures. Because aperture magnitudes are the most widely tabulated flux measurements in survey catalogs, producing synthetic aperture magnitudes (SYNMAGs) enables very fast matched photometry at the catalog level, without reprocessing imaging data. We make our code public and apply it to obtain matched photometry between Sloan Digital Sky Survey ugriz and UKIDSS YJHK imaging, recovering red-sequence colors and photometric redshifts with a scatter and accuracy as good as if not better than FWHM-homogenized photometry from the GAMA Survey. Finally, we list some specific measurements that upcoming surveys could make available to facilitate and ease the use of SYNMAGs.
Road sign recognition with fuzzy adaptive pre-processing models.
Lin, Chien-Chuan; Wang, Ming-Shi
2012-01-01
A road sign recognition system based on adaptive image pre-processing models using two fuzzy inference schemes has been proposed. The first fuzzy inference scheme is to check the changes of the light illumination and rich red color of a frame image by the checking areas. The other is to check the variance of vehicle's speed and angle of steering wheel to select an adaptive size and position of the detection area. The Adaboost classifier was employed to detect the road sign candidates from an image and the support vector machine technique was employed to recognize the content of the road sign candidates. The prohibitory and warning road traffic signs are the processing targets in this research. The detection rate in the detection phase is 97.42%. In the recognition phase, the recognition rate is 93.04%. The total accuracy rate of the system is 92.47%. For video sequences, the best accuracy rate is 90.54%, and the average accuracy rate is 80.17%. The average computing time is 51.86 milliseconds per frame. The proposed system can not only overcome low illumination and rich red color around the road sign problems but also offer high detection rates and high computing performance.
CFA-aware features for steganalysis of color images
NASA Astrophysics Data System (ADS)
Goljan, Miroslav; Fridrich, Jessica
2015-03-01
Color interpolation is a form of upsampling, which introduces constraints on the relationship between neighboring pixels in a color image. These constraints can be utilized to substantially boost the accuracy of steganography detectors. In this paper, we introduce a rich model formed by 3D co-occurrences of color noise residuals split according to the structure of the Bayer color filter array to further improve detection. Some color interpolation algorithms, AHD and PPG, impose pixel constraints so tight that extremely accurate detection becomes possible with merely eight features eliminating the need for model richification. We carry out experiments on non-adaptive LSB matching and the content-adaptive algorithm WOW on five different color interpolation algorithms. In contrast to grayscale images, in color images that exhibit traces of color interpolation the security of WOW is significantly lower and, depending on the interpolation algorithm, may even be lower than non-adaptive LSB matching.
Citrus fruit recognition using color image analysis
NASA Astrophysics Data System (ADS)
Xu, Huirong; Ying, Yibin
2004-10-01
An algorithm for the automatic recognition of citrus fruit on the tree was developed. Citrus fruits differ in color from the leaf and branch portions. Fifty-three color images of natural citrus-grove scenes were digitized and analyzed for red, green, and blue (RGB) color content. The color characteristics of target surfaces (fruits, leaves, or branches) were extracted using a region of interest (ROI) tool. Several types of contrast color indices were designed and tested. In this study, the fruit image was enhanced using the (R-B) contrast color index, because results show that the fruit has the highest color difference among the objects in the image. A dynamic threshold function was derived from this color model and used to distinguish citrus fruit from the background. The results show that the algorithm worked well under frontlighting or backlighting conditions. However, misclassifications occur when the fruit or the background is under brighter sunlight.
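A short sketch of the (R-B) index idea follows; Otsu's method stands in for the paper's dynamic threshold function, whose exact form is not given in the abstract.

```python
# Segment citrus fruit by thresholding the red-minus-blue contrast index:
# fruit pixels have large (R-B) values relative to leaves and branches.
import numpy as np

def segment_citrus(rgb):
    """rgb: HxWx3 uint8 image; returns a boolean fruit mask."""
    idx = rgb[:, :, 0].astype(np.int32) - rgb[:, :, 2].astype(np.int32)
    idx = idx - idx.min()                          # shift to non-negative values
    hist = np.bincount(idx.ravel(), minlength=idx.max() + 1)
    # Otsu threshold: maximize between-class variance over all cut points
    p = hist / hist.sum()
    omega = np.cumsum(p)                           # class-0 probability
    mu = np.cumsum(p * np.arange(len(p)))          # class-0 cumulative mean
    mu_t = mu[-1]                                  # global mean
    sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega) + 1e-12)
    t = np.argmax(sigma_b)
    return idx > t
```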
An improved quantum watermarking scheme using small-scale quantum circuits and color scrambling
NASA Astrophysics Data System (ADS)
Li, Panchi; Zhao, Ya; Xiao, Hong; Cao, Maojun
2017-05-01
In order to solve the problem of embedding a watermark into a quantum color image, an improved scheme using small-scale quantum circuits and color scrambling is proposed in this paper. Both the color carrier image and the color watermark image are represented using the novel enhanced quantum representation. The image sizes for the carrier and the watermark are assumed to be 2^{n+1} × 2^{n+2} and 2^n × 2^n, respectively. First, the color of the pixels in the watermark image is scrambled using controlled rotation gates; then the scrambled watermark, of size 2^n × 2^n with 24-qubit gray scale, is expanded to an image of size 2^{n+1} × 2^{n+2} with 3-qubit gray scale. Finally, the expanded watermark image is embedded into the carrier image by controlled-NOT gates. The extraction of the watermark is the reverse of the embedding process, achieved by applying the operations in reverse order. Simulation-based experimental results show that the proposed scheme is superior to other similar algorithms in terms of three criteria: visual quality, the scrambling effect of the watermark image, and noise resistance.
Astronomy with the Color Blind
ERIC Educational Resources Information Center
Smith, Donald A.; Melrose, Justyn
2014-01-01
The standard method to create dramatic color images in astrophotography is to record multiple black and white images, each with a different color filter in the optical path, and then tint each frame with a color appropriate to the corresponding filter. When combined, the resulting image conveys information about the sources of emission in the…
Barbier, Paolo; Alimento, Marina; Berna, Giovanni; Celeste, Fabrizio; Gentile, Francesco; Mantero, Antonio; Montericcio, Vincenzo; Muratori, Manuela
2007-05-01
Large files produced by standard compression algorithms slow the spread of digital and tele-echocardiography. We validated high-grade compression of echocardiographic video with the new Moving Picture Experts Group (MPEG)-4 algorithms in a multicenter study. Seven expert cardiologists blindly scored (5-point scale) 165 uncompressed and compressed 2-dimensional and color Doppler video clips, based on combined diagnostic content and image quality (uncompressed files as references). One digital video and 3 MPEG-4 algorithms (WM9, MV2, and DivX) were used, the latter at 3 compression levels (0%, 35%, and 60%). Compressed file sizes decreased from 12-83 MB to 0.03-2.3 MB (1:1051 to 1:26 reduction ratios). The mean SD of differences was 0.81 for intraobserver variability (uncompressed and digital video files). Compared with uncompressed files, only the DivX mean score at 35% (P = .04) and 60% (P = .001) compression was significantly reduced. At subcategory analysis, these differences were still significant for gray-scale and fundamental imaging but not for color or second harmonic tissue imaging. Original image quality, session sequence, compression grade, and bitrate were all independent determinants of the mean score. Our study supports the use of MPEG-4 algorithms to greatly reduce echocardiographic file sizes, thus facilitating archiving and transmission. Quality evaluation studies should account for the many independent variables that affect image quality grading.
Applied learning-based color tone mapping for face recognition in video surveillance system
NASA Astrophysics Data System (ADS)
Yew, Chuu Tian; Suandi, Shahrel Azmin
2012-04-01
In this paper, we present an applied learning-based color tone mapping technique for video surveillance systems. The technique can be applied to both color and grayscale surveillance images. The basic idea is to learn the color or intensity statistics from a training dataset of photorealistic images of the candidates appearing in the surveillance images, and to remap the color or intensity of the input image so that its statistics match those of the training dataset. It is well known that differences in commercial surveillance camera models and in the signal processing chipsets used by different manufacturers cause the color and intensity of images to differ from one another, creating additional challenges for face recognition in video surveillance systems. Using multi-class support vector machines as the classifier on a publicly available video surveillance camera database, namely the SCface database, this approach is validated and compared to the results of using a holistic approach on grayscale images. The results show that this technique is suitable for improving the color or intensity quality of video surveillance systems for face recognition.
Color TV: total variation methods for restoration of vector-valued images.
Blomgren, P; Chan, T F
1998-01-01
We propose a new definition of the total variation (TV) norm for vector-valued functions that can be applied to restore color and other vector-valued images. The new TV norm has the desirable properties of 1) not penalizing discontinuities (edges) in the image, 2) being rotationally invariant in the image space, and 3) reducing to the usual TV norm in the scalar case. Some numerical experiments on denoising simple color images in red-green-blue (RGB) color space are presented.
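For reference, a channel-coupled vectorial TV norm in the spirit of this work can be written as below; this is a sketch under my own notation (an m-channel image u on a domain Ω), not a verbatim restatement of the paper's definition.

```latex
% Sketch of a channel-coupled vectorial TV norm in the spirit of this work;
% u_i denotes the i-th channel of u : \Omega \to \mathbb{R}^m.
\mathrm{TV}_{n,m}(u) = \sqrt{\sum_{i=1}^{m} \left[\mathrm{TV}(u_i)\right]^{2}},
\qquad
\mathrm{TV}(u_i) = \int_{\Omega} \lvert \nabla u_i \rvert \, dx
```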
White, James M.; Faber, Vance; Saltzman, Jeffrey S.
1992-01-01
An image population having a large number of attributes is processed to form a display population with a predetermined smaller number of attributes that represent the larger number of attributes. In a particular application, the color values in an image are compressed for storage in a discrete lookup table (LUT), whereby an 8-bit data signal enables the display of 24-bit color values. The LUT is formed by a sampling and averaging process over the image color values, with no requirement to define discrete Voronoi regions for color compression. Image color values are assigned 8-bit pointers to their closest LUT value, so that data processing requires only the 8-bit pointer value to provide 24-bit color values from the LUT.
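A minimal sketch of the scheme: build a 256-entry LUT by sampling and averaging image colors (a simple Lloyd/k-means loop stands in for the patent's sampling-and-averaging process, which is an assumption), then store one 8-bit pointer per pixel indexing the nearest 24-bit LUT entry.

```python
# Build a 256-color LUT, encode pixels as 8-bit pointers, decode to 24-bit.
import numpy as np

def build_lut(image, n_entries=256, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    pixels = image.reshape(-1, 3).astype(np.float64)
    lut = pixels[rng.choice(len(pixels), n_entries, replace=False)]
    for _ in range(iters):                          # Lloyd iterations
        d = ((pixels[:, None, :] - lut[None, :, :]) ** 2).sum(axis=2)
        assign = d.argmin(axis=1)
        for k in range(n_entries):
            sel = pixels[assign == k]
            if len(sel):
                lut[k] = sel.mean(axis=0)           # average of assigned colors
    return lut.astype(np.uint8)

def encode(image, lut):
    pixels = image.reshape(-1, 3).astype(np.float64)
    d = ((pixels[:, None, :] - lut[None, :, :].astype(np.float64)) ** 2).sum(axis=2)
    return d.argmin(axis=1).astype(np.uint8).reshape(image.shape[:2])

def decode(indices, lut):
    return lut[indices]                             # 8-bit pointers -> 24-bit colors
```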
Color preservation for tone reproduction and image enhancement
NASA Astrophysics Data System (ADS)
Hsin, Chengho; Lee, Zong Wei; Lee, Zheng Zhan; Shin, Shaw-Jyh
2014-01-01
Applications based on luminance processing often face the problem of recovering the original chrominance in the output color image. A common approach to reconstruct a color image from the luminance output is by preserving the original hue and saturation. However, this approach often produces a highly colorful image which is undesirable. We develop a color preservation method that not only retains the ratios of the input tri-chromatic values but also adjusts the output chroma in an appropriate way. Linearizing the output luminance is the key idea to realize this method. In addition, a lightness difference metric together with a colorfulness difference metric are proposed to evaluate the performance of the color preservation methods. It shows that the proposed method performs consistently better than the existing approaches.
NASA Astrophysics Data System (ADS)
Wang, Zhun; Cheng, Feiyan; Shi, Junsheng; Huang, Xiaoqiao
2018-01-01
In a low-light scene, capturing color images requires either a high-gain setting or a long-exposure setting if a visible flash is to be avoided. However, such settings lead to color images with serious noise or motion blur. Several methods have been proposed to improve a noisy color image using an invisible near-infrared flash image. In one such method, the luminance and chroma components of the improved color image are estimated from different image sources [1]: the luminance component is estimated mainly from the NIR image via spectral estimation, and the chroma component is estimated from the noisy color image by denoising. However, estimating the luminance component is challenging: this method needs to generate learning data pairs, and its process and algorithm are complex, making practical application difficult. In order to reduce the complexity of the luminance estimation, an improved luminance estimation algorithm is presented in this paper, which weights the NIR image and the denoised color image with coefficients based on the mean value and standard deviation of both images. Experimental results show that the proposed method achieves the same fusion quality in terms of color fidelity and texture as the earlier method, while the algorithm is simpler and more practical.
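A hedged sketch of the weighted fusion follows; the specific weighting rule (contrast-over-brightness scores built from each image's standard deviation and mean) is an assumption for illustration, as the abstract only states that both statistics are used.

```python
# Fuse luminance as a weighted average of the NIR frame and the luminance
# of the denoised color frame, with weights derived from image statistics.
import numpy as np

def fuse_luminance(nir, denoised_luma):
    """nir, denoised_luma: HxW float arrays in [0, 1]."""
    # score each source by its contrast (std) normalized by brightness (mean)
    w_nir = nir.std() / (nir.mean() + 1e-6)
    w_col = denoised_luma.std() / (denoised_luma.mean() + 1e-6)
    alpha = w_nir / (w_nir + w_col)                 # assumed weighting rule
    return alpha * nir + (1 - alpha) * denoised_luma
```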
Color dithering methods for LEGO-like 3D printing
NASA Astrophysics Data System (ADS)
Sun, Pei-Li; Sie, Yuping
2015-01-01
Color dithering methods for LEGO-like 3D printing are proposed in this study. The first method works for opaque color brick building. It is a modification of classic error diffusion; many color primaries can be chosen, but RGBYKW is recommended because its image quality is good while the number of color primaries stays limited. For translucent color bricks, multi-layer color building can enhance the image quality significantly. A LUT-based method is proposed to speed up the dithering process and make the color distribution even smoother. Simulation results show the proposed multi-layer dithering method clearly improves the image quality of LEGO-like 3D printing.
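As a reference point, here is the classic Floyd-Steinberg error diffusion onto a fixed palette that the paper modifies; the RGBYKW brick colors used below are illustrative stand-ins for real brick colors:

```python
import numpy as np

# Illustrative RGBYKW primaries (assumed brick colors).
PALETTE = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255],
                    [255, 255, 0], [0, 0, 0], [255, 255, 255]], float)

def error_diffuse(img):
    # Classic Floyd-Steinberg: quantize each pixel to the nearest
    # palette color and push the residual onto unvisited neighbors.
    work = img.astype(float).copy()
    h, w = work.shape[:2]
    out = np.zeros((h, w), int)
    for y in range(h):
        for x in range(w):
            i = np.argmin(((PALETTE - work[y, x])**2).sum(1))
            out[y, x] = i
            err = work[y, x] - PALETTE[i]
            if x + 1 < w:
                work[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    work[y + 1, x - 1] += err * 3 / 16
                work[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    work[y + 1, x + 1] += err * 1 / 16
    return out  # per-pixel brick/color indices
```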
Research of image retrieval technology based on color feature
NASA Astrophysics Data System (ADS)
Fu, Yanjun; Jiang, Guangyu; Chen, Fengying
2009-10-01
Recently, with the development of communication and computer technology and the improvement of storage technology and digital imaging equipment, more image resources are available than ever, so a way to locate the desired image quickly and accurately is needed. The early method was keyword search in a database, but this becomes impractical as the number of images grows. To overcome the limitations of traditional searching, content-based image retrieval was developed and is now an active research subject, of which color image retrieval is an important part. Color is the most important feature for color image retrieval. Three key questions on how to use the color characteristic are discussed in this paper: the representation of color, the extraction of color features, and the measurement of similarity based on color. On this basis, the extraction of color histogram features is discussed in detail. Considering the advantages and disadvantages of the overall histogram and the partition histogram, a new method based on a partition-overall histogram is proposed. Its basic idea is to divide the image space according to a certain strategy and to compute the color histogram of each block as that block's color feature. Users choose the blocks that contain important spatial information and confirm their weights; the system calculates the distance between the corresponding chosen blocks, merges the remaining blocks into partial overall histograms whose distances are also calculated, and accumulates all these distances as the real distance between two images. The partition-overall histogram combines the advantages of the two methods above: choosing blocks makes the feature carry more spatial information, which improves performance, while the distances between partial overall histograms remain invariant to rotation and translation. The HSV color space, which matches the visual characteristics of humans, is used to represent the color features; exploiting human color perception, the color sectors are quantified with unequal intervals to obtain the feature vector. Finally, image similarity is matched with the histogram-intersection algorithm on the partition-overall histogram. Users can submit an example image to express the query, and can also adjust the weights through relevance feedback to obtain the best search result. An image retrieval system based on these approaches is presented. Experimental results show that retrieval based on the partition-overall histogram preserves spatial distribution information while efficiently extracting color features and is superior to ordinary color histograms in precision, with a query precision above 95%. In addition, the efficient block representation lowers the complexity of the images to be searched and thus increases search efficiency. The image retrieval algorithm based on the partition-overall histogram proposed in this paper is efficient and effective.
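The core similarity machinery described above can be sketched in a few lines: per-block HSV histograms with unequal quantization, compared by histogram intersection and accumulated over an equal grid. The per-block user weights and the merging into partial overall histograms are omitted here for brevity:

```python
import numpy as np

def hsv_hist(block, bins=(16, 4, 4)):
    # Unequal-interval HSV quantization: finer in hue than in S/V.
    # Expects H in degrees [0, 360) and S, V in [0, 1].
    h, _ = np.histogramdd(block.reshape(-1, 3), bins=bins,
                          range=((0, 360), (0, 1), (0, 1)))
    return h.ravel() / max(h.sum(), 1)

def block_distance(img1_hsv, img2_hsv, rows=3, cols=3):
    # Accumulate histogram-intersection distances over an equal grid.
    d = 0.0
    for bi in np.array_split(np.arange(img1_hsv.shape[0]), rows):
        for bj in np.array_split(np.arange(img1_hsv.shape[1]), cols):
            h1 = hsv_hist(img1_hsv[np.ix_(bi, bj)])
            h2 = hsv_hist(img2_hsv[np.ix_(bi, bj)])
            d += 1.0 - np.minimum(h1, h2).sum()  # intersection distance
    return d
```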
Guided color consistency optimization for image mosaicking
NASA Astrophysics Data System (ADS)
Xie, Renping; Xia, Menghan; Yao, Jian; Li, Li
2018-01-01
This paper studies the problem of color consistency correction for sequential images with diverse color characteristics. Existing algorithms adjust all images to minimize color differences under a unified energy framework; however, the results tend to present a consistent but unnatural appearance when the color differences between images are large and diverse. In our approach, this problem is addressed by providing a guided initial solution for the global consistency optimization, which avoids converging to a meaningless integrated solution. First, to obtain reliable intensity correspondences in the overlapping regions between image pairs, we propose a histogram extreme point matching algorithm that is robust to image geometrical misalignment to some extent. In the absence of extra reference information, the guided initial solution is learned from the major tone of the original images by selecting an image subset as the reference, whose color characteristics are transferred to the other images along paths found by graph analysis. The final results of the global adjustment thus take on a consistent color similar to the appearance of the reference image subset. Several groups of experiments on both synthetic datasets and challenging real ones demonstrate that the proposed approach achieves results as good as or better than state-of-the-art approaches.
Reconstruction of color images via Haar wavelet based on digital micromirror device
NASA Astrophysics Data System (ADS)
Liu, Xingjiong; He, Weiji; Gu, Guohua
2015-10-01
A digital micromirror device (DMD) is introduced to form a Haar wavelet basis that is projected onto the color target image as structured illumination in red, green, and blue light. The light intensity signals reflected from the target image are received synchronously by a bucket detector with no spatial resolution, converted into voltage signals, and then transferred to a PC [1]. Several synchronization steps are added during data acquisition to keep the system synchronized. In the data collection process, following the wavelet tree structure, the locations of significant coefficients at the finer scale are predicted by comparing the coefficients sampled at the coarsest scale against a threshold. Monochrome grayscale images are obtained under red, green, and blue structured illumination using the inverse Haar wavelet transform, and a color fusion algorithm combines the three monochrome grayscale images into the final color image. An experimental demonstration device was assembled according to this imaging principle. The letter "K" and the X-Rite ColorChecker Passport were projected and reconstructed as target images, and the final reconstructed color images are of good quality. The Haar wavelet reconstruction used here reduces the sampling rate considerably and provides color information without compromising the resolution of the final image.
NASA Technical Reports Server (NTRS)
Sargent, Jeff Scott
1988-01-01
A new row-based parallel algorithm for standard-cell placement targeted for execution on a hypercube multiprocessor is presented. Key features of this implementation include a dynamic simulated-annealing schedule, row-partitioning of the VLSI chip image, and two novel approaches to controlling error in parallel cell-placement algorithms: Heuristic Cell-Coloring and Adaptive (Parallel Move) Sequence Control. Heuristic Cell-Coloring identifies sets of noninteracting cells that can be moved repeatedly, and in parallel, with no buildup of error in the placement cost. Adaptive Sequence Control allows multiple parallel cell moves to take place between global cell-position updates. This feedback mechanism is based on an error bound derived analytically from the traditional annealing move-acceptance profile. Placement results are presented for real industry circuits, and the performance of an implementation on the Intel iPSC/2 Hypercube is summarized. The runtime of this algorithm is 5 to 16 times faster than a previous program developed for the Hypercube, while producing placements of equivalent quality. An integrated place and route program for the Intel iPSC/2 Hypercube is currently being developed.
NASA Astrophysics Data System (ADS)
Seo, Hokuto; Aihara, Satoshi; Watabe, Toshihisa; Ohtake, Hiroshi; Sakai, Toshikatsu; Kubota, Misao; Egami, Norifumi; Hiramatsu, Takahiro; Matsuda, Tokiyoshi; Furuta, Mamoru; Hirao, Takashi
2011-02-01
A color image was produced by a vertically stacked image sensor with blue (B)-, green (G)-, and red (R)-sensitive organic photoconductive films, each having a thin-film transistor (TFT) array that uses a zinc oxide (ZnO) channel to read out the signal generated in each organic film. The fabricated image sensor has 128×96 pixels for each color, and the pixel size is 100×100 µm². The current on/off ratio of the ZnO TFT is over 10⁶, and the B-, G-, and R-sensitive organic photoconductive films show excellent wavelength selectivity. The stacked image sensor can produce a color image at 10 frames per second with a resolution corresponding to the pixel number. This result clearly shows that color separation is achieved without using any conventional color separation optical system such as a color filter array or a prism.
Dehazed Image Quality Assessment by Haze-Line Theory
NASA Astrophysics Data System (ADS)
Song, Yingchao; Luo, Haibo; Lu, Rongrong; Ma, Junkai
2017-06-01
Images captured in bad weather suffer from low contrast and faint color. Recently, many dehazing algorithms have been proposed to enhance visibility and restore color; however, there is a lack of evaluation metrics to assess or rank their performance. In this paper, an indicator of contrast enhancement is proposed based on the recently introduced haze-line theory. The theory assumes that the colors of a haze-free image are well approximated by a few hundred distinct colors that form tight clusters in RGB space; the presence of haze makes each color cluster form a line, named a haze-line. Using these haze-lines, we assess the performance of contrast-enhancing dehazing algorithms by measuring the inter-cluster deviations between different colors of the dehazed image. Experimental results demonstrate that the proposed Color Contrast (CC) index correlates well with human judgments of image contrast collected in a subjective test on various scenes of dehazed images, and that it performs better than state-of-the-art metrics.
Jiang, Hao; Kaminska, Bozena
2018-04-24
To enable customized manufacturing of structural colors for commercial applications, up-scalable, low-cost, rapid, and versatile printing techniques are highly demanded. In this paper, we introduce a viable strategy for scaling up production of custom-input images by patterning individual structural colors on separate layers, which are then vertically stacked and recombined into full-color images. By applying this strategy on molded-ink-on-nanostructured-surface printing, we present an industry-applicable inkjet structural color printing technique termed multilayer molded-ink-on-nanostructured-surface (M-MIONS) printing, in which structural color pixels are molded on multiple layers of nanostructured surfaces. Transparent colorless titanium dioxide nanoparticles were inkjet-printed onto three separate transparent polymer substrates, and each substrate surface has one specific subwavelength grating pattern for molding the deposited nanoparticles into structural color pixels of red, green, or blue primary color. After index-matching lamination, the three layers were vertically stacked and bonded to display a color image. Each primary color can be printed into a range of different shades controlled through a half-tone process, and full colors were achieved by mixing primary colors from three layers. In our experiments, an image size as big as 10 cm by 10 cm was effortlessly achieved, and even larger images can potentially be printed on recombined grating surfaces. In one application example, the M-MIONS technique was used for printing customizable transparent color optical variable devices for protecting personalized security documents. In another example, a transparent diffractive color image printed with the M-MIONS technique was pasted onto a transparent panel for overlaying colorful information onto one's view of reality.
Structure-Preserving Color Normalization and Sparse Stain Separation for Histological Images.
Vahadane, Abhishek; Peng, Tingying; Sethi, Amit; Albarqouni, Shadi; Wang, Lichao; Baust, Maximilian; Steiger, Katja; Schlitter, Anna Melissa; Esposito, Irene; Navab, Nassir
2016-08-01
Staining and scanning of tissue samples for microscopic examination is fraught with undesirable color variations arising from differences in raw materials and manufacturing techniques of stain vendors, staining protocols of labs, and color responses of digital scanners. When comparing tissue samples, color normalization and stain separation of the tissue images can be helpful for both pathologists and software. Techniques that are used for natural images fail to utilize structural properties of stained tissue samples and produce undesirable color distortions. The stain concentration cannot be negative. Tissue samples are stained with only a few stains and most tissue regions are characterized by at most one effective stain. We model these physical phenomena that define the tissue structure by first decomposing images in an unsupervised manner into stain density maps that are sparse and non-negative. For a given image, we combine its stain density maps with stain color basis of a pathologist-preferred target image, thus altering only its color while preserving its structure described by the maps. Stain density correlation with ground truth and preference by pathologists were higher for images normalized using our method when compared to other alternatives. We also propose a computationally faster extension of this technique for large whole-slide images that selects an appropriate patch sample instead of using the entire image to compute the stain color basis.
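A compact sketch of the decomposition and normalization steps is given below. It works in optical-density space (Beer-Lambert law: OD is linear and non-negative in stain concentration) and uses plain NMF from scikit-learn; the paper's method additionally imposes an explicit sparsity penalty on the density maps, which is omitted here:

```python
import numpy as np
from sklearn.decomposition import NMF

def stain_maps(rgb, n_stains=2):
    # Unsupervised decomposition into non-negative stain density maps
    # and a stain color basis, in optical-density space.
    od = -np.log(np.clip(rgb.reshape(-1, 3) / 255.0, 1e-6, 1.0))
    nmf = NMF(n_components=n_stains, init='nndsvd', max_iter=500)
    conc = nmf.fit_transform(od)      # stain density maps (pixels x stains)
    basis = nmf.components_           # stain color basis (stains x 3)
    return conc.reshape(rgb.shape[:2] + (n_stains,)), basis

def normalize_to(conc, target_basis):
    # Swap in a target image's stain color basis while keeping the
    # source structure described by its density maps.
    od = conc.reshape(-1, conc.shape[-1]) @ target_basis
    out = np.clip(255.0 * np.exp(-od), 0, 255)
    return out.reshape(conc.shape[:2] + (3,)).astype(np.uint8)
```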
An effective method on pornographic images realtime recognition
NASA Astrophysics Data System (ADS)
Wang, Baosong; Lv, Xueqiang; Wang, Tao; Wang, Chengrui
2013-03-01
In this paper, skin detection, texture filtering, and face detection are used to extract features from an image library; the features are used to train a decision-tree classifier whose rules distinguish unknown images. In experiments based on more than twenty thousand images, the precision reaches 76.21% when testing on 13,025 pornographic images, with an elapsed time of less than 0.2 s, showing good generality. Among the steps mentioned above, a new skin detection model, called the irregular polygon region skin detection model and based on the YCbCr color space, is proposed; it lowers the false detection rate of skin detection. A new method of sequence region labeling on binary connected areas computes features of each connected area faster and with less memory than recursive methods.
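For orientation, a minimal YCbCr skin gate is sketched below using the classic rectangular Cb/Cr bounds; the paper tightens this rectangle to an irregular polygon region in the Cb-Cr plane, which is not reproduced here:

```python
import numpy as np

def skin_mask(rgb):
    # Rectangular Cb/Cr gate (classic 77<=Cb<=127, 133<=Cr<=173 bounds).
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
```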
NASA Technical Reports Server (NTRS)
Denman, Kenneth L.; Abbott, Mark R.
1988-01-01
The rate of decorrelation of surface chlorophyll patterns as a function of the time separation between pairs of images was determined from two sequences of CZCS images of the Pacific Ocean area adjacent to Vancouver Island, Canada; cloud-free subareas were selected that were common to several images separated in time by 1-17 days. Image pairs were subjected to two-dimensional autospectrum and cross-spectrum analysis in an array processor, and squared coherence estimates found for several wave bands were plotted against time separation, in analogy with a time-lagged cross correlation function. It was found that, for wavelengths of 50-150 km, significant coherence was lost after 7-10 days, while for wavelengths of 25-50 km, significant coherence was lost after only 5-7 days. In both cases, offshore regions maintained coherence longer than coastal regions.
Juno's Eighth Close Approach to Jupiter
2017-09-08
This series of enhanced-color images shows Jupiter up close and personal, as NASA's Juno spacecraft performed its eighth flyby of the gas giant planet. The images were obtained by JunoCam. From left to right, the sequence of images was taken on Sept. 1, 2017 from 3:03 p.m. to 3:11 p.m. PDT (6:03 p.m. to 6:11 p.m. EDT). At the times the images were taken, the spacecraft ranged from 7,545 to 14,234 miles (12,143 to 22,908 kilometers) from the tops of the clouds of the planet, at a latitude range of -28.5406 to -44.4912 degrees. Points of interest include "Dalmatian Zone/Eye of Odin," "Dark Eye/STB Ghost East End," "Coolest Place on Jupiter," and "Renslow/Hurricane Rachel." The final image in the series, on the right, shows Jupiter's south pole coming into view. https://photojournal.jpl.nasa.gov/catalog/PIA21780
Securing Color Fidelity in 3D Architectural Heritage Scenarios.
Gaiani, Marco; Apollonio, Fabrizio Ivan; Ballabeni, Andrea; Remondino, Fabio
2017-10-25
Ensuring color fidelity in image-based 3D modeling of heritage scenarios is still an open research matter. Image colors are important during data processing as they affect algorithm outcomes; therefore their correct treatment, reduction, and enhancement is fundamental. In this contribution, we present an automated solution developed to improve the radiometric quality of image datasets and the performance of two main steps of the photogrammetric pipeline (camera orientation and dense image matching). The suggested solution aims to achieve a robust automatic color balance and exposure equalization, stability of the RGB-to-gray image conversion, and faithful color appearance of a digitized artifact. The innovative aspects of the article are: complete automation, better color target detection, a MATLAB implementation of the ACR scripts created by Fraser, and the use of a specific weighted polynomial regression. A series of tests is presented to demonstrate the efficiency of the developed methodology and to evaluate color accuracy ('color characterization').
Correlation based efficient face recognition and color change detection
NASA Astrophysics Data System (ADS)
Elbouz, M.; Alfalou, A.; Brosseau, C.; Alam, M. S.; Qasmi, S.
2013-01-01
Identifying the human face via correlation is a topic attracting widespread interest. At the heart of this technique lies the comparison of an unknown target image to a known reference database of images. However, the color information in the target image remains notoriously difficult to interpret. In this paper, we report a new technique which: (i) is robust against illumination change, (ii) offers discrimination ability to detect color change between faces having similar shape, and (iii) is specifically designed to detect red colored stains (i.e. facial bleeding). We adopt the Vanderlugt correlator (VLC) architecture with a segmented phase filter and we decompose the color target image using normalized red, green, and blue (RGB), and hue, saturation, and value (HSV) scales. We propose a new strategy to effectively utilize color information in signatures for further increasing the discrimination ability. The proposed algorithm has been found to be very efficient for discriminating face subjects with different skin colors, and those having color stains in different areas of the facial image.
New Windows based Color Morphological Operators for Biomedical Image Processing
NASA Astrophysics Data System (ADS)
Pastore, Juan; Bouchet, Agustina; Brun, Marcel; Ballarin, Virginia
2016-04-01
Morphological image processing is well known as an efficient methodology for image processing and computer vision. With the wide use of color in many areas, interest in color perception and processing has been growing rapidly. Many models have been proposed to extend morphological operators to the field of color images, dealing with new problems not present in the binary and gray-level contexts. These solutions usually deal with the lattice structure of the color space, or provide it with total orders, to be able to define basic operators with the required properties. In this work we propose a new locally defined ordering, in the context of window-based morphological operators, for the definition of erosion-like and dilation-like operators, which provides the same desired properties expected from color morphology while avoiding some of the drawbacks of prior approaches. Experimental results show that the proposed color operators can be used efficiently for color image processing.
NASA Astrophysics Data System (ADS)
Saleheen, Firdous; Badano, Aldo; Cheng, Wei-Chung
2017-03-01
The color reproducibility of two whole-slide imaging (WSI) devices was evaluated with biological tissue slides. Three tissue slides (human colon, skin, and kidney) were used to test a modern and a legacy WSI device. The color truth of the tissue slides was obtained using a multispectral imaging system. The output WSI images were compared with the color truth to calculate the color difference for each pixel. A psychophysical experiment was also conducted with four subjects to measure the perceptual color reproducibility (PCR) of the same slides. The experimental results show that the mean color differences of the modern, legacy, and monochrome WSI devices are 10.94 ± 4.19, 22.35 ± 8.99, and 42.74 ± 2.96 ΔE00, while their mean PCRs are 70.35 ± 7.64%, 23.06 ± 14.68%, and 0.91 ± 1.01%, respectively.
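A per-pixel ΔE00 comparison of this kind can be sketched with scikit-image, assuming registered uint8 RGB renderings of the device output and the multispectral color truth:

```python
import numpy as np
from skimage import color

def mean_delta_e00(device_rgb, truth_rgb):
    # Per-pixel CIEDE2000 between a WSI output and the color truth;
    # returns the mean and standard deviation over all pixels.
    lab1 = color.rgb2lab(device_rgb / 255.0)
    lab2 = color.rgb2lab(truth_rgb / 255.0)
    de = color.deltaE_ciede2000(lab1, lab2)
    return de.mean(), de.std()
```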
Physics and psychophysics of color reproduction
NASA Astrophysics Data System (ADS)
Giorgianni, Edward J.
1991-08-01
The successful design of a color-imaging system requires knowledge of the factors used to produce and control color. This knowledge can be derived, in part, from measurements of the physical properties of the imaging system. Color itself, however, is a perceptual response and cannot be directly measured. Though the visual process begins with physics, as radiant energy reaching the eyes, it is in the mind of the observer that the stimuli produced from this radiant energy are interpreted and organized to form meaningful perceptions, including the perception of color. A comprehensive understanding of color reproduction, therefore, requires not only a knowledge of the physical properties of color-imaging systems but also an understanding of the physics, psychophysics, and psychology of the human observer. The human visual process is quite complex; in many ways the physical properties of color-imaging systems are easier to understand.
Real life identification of partially occluded weapons in video frames
NASA Astrophysics Data System (ADS)
Hempelmann, Christian F.; Arslan, Abdullah N.; Attardo, Salvatore; Blount, Grady P.; Sirakov, Nikolay M.
2016-05-01
We empirically test the capacity of an improved system to identify not just images of individual guns, but partially occluded guns and their parts appearing in a video frame. This approach combines low-level geometrical information gleaned from the visual images with high-level semantic information stored in an ontology enriched with meronymic part-whole relations. The main improvements of the system are occlusion handling, new algorithms, and an emerging meronomy. While ontologies are well known and commonly deployed, actual meronomies need to be engineered and populated with unique solutions; here this includes the adjacency of weapon parts and the essentiality of parts to the threat of, and their diagnosticity for, a weapon. In this study video sequences are processed frame by frame. The extraction method separates colors and removes the background; image subtraction of the next frame then determines moving targets, before morphological closing is applied to the current frame to clean up noise and fill gaps. Next, the method calculates the boundary coordinates of each object and uses them to create a finite numerical sequence as a descriptor. Parts are identified by cyclic sequence alignment and matching against the nodes of the weapons ontology. From the identified parts, the most likely weapon is determined using the weapon ontology.
A method and results of color calibration for the Chang'e-3 terrain camera and panoramic camera
NASA Astrophysics Data System (ADS)
Ren, Xin; Li, Chun-Lai; Liu, Jian-Jun; Wang, Fen-Fei; Yang, Jian-Feng; Liu, En-Hai; Xue, Bin; Zhao, Ru-Jin
2014-12-01
The terrain camera (TCAM) and panoramic camera (PCAM) are two of the major scientific payloads installed on the lander and rover of the Chang'e 3 mission respectively. They both use a Bayer color filter array covering CMOS sensor to capture color images of the Moon's surface. RGB values of the original images are related to these two kinds of cameras. There is an obvious color difference compared with human visual perception. This paper follows standards published by the International Commission on Illumination to establish a color correction model, designs the ground calibration experiment and obtains the color correction coefficient. The image quality has been significantly improved and there is no obvious color difference in the corrected images. Ground experimental results show that: (1) Compared with uncorrected images, the average color difference of TCAM is 4.30, which has been reduced by 62.1%. (2) The average color differences of the left and right cameras in PCAM are 4.14 and 4.16, which have been reduced by 68.3% and 67.6% respectively.
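The simplest version of such a ground-calibrated correction is a linear color correction matrix fitted on chart patches, sketched below; the CIE-based model in the paper may include nonlinear or offset terms not shown here:

```python
import numpy as np

def fit_ccm(camera_rgb, target_rgb):
    # Least-squares 3x3 color correction matrix fitted on color-chart
    # patches (rows are patch colors); apply as corrected = raw @ M.
    M, *_ = np.linalg.lstsq(camera_rgb, target_rgb, rcond=None)
    return M
```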
NASA Astrophysics Data System (ADS)
Sakamoto, Takashi
2015-01-01
This study describes a color enhancement method that uses a color palette especially designed for protan and deutan defects, commonly known as red-green color blindness. The proposed color reduction method is based on a simple color mapping; complicated computation and image processing are not required, and the method can replace protan and deutan confusion (p/d-confusion) colors with protan and deutan safe (p/d-safe) colors. Color palettes for protan and deutan defects proposed by previous studies contain few p/d-safe colors, insufficient for replacing the colors in photographs. Recently, Ito et al. proposed a p/d-safe color palette composed of 20 particular colors, and the author has demonstrated that this palette can be applied to color reduction in photographs as a means of replacing p/d-confusion colors. This study presents the results of the proposed color reduction on photographs containing typical p/d-confusion colors; after the reduction process, color-defective observers can distinguish these formerly confusing colors.
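The "simple color mapping" can be sketched as a nearest-neighbor palette replacement; the CIELAB distance used below and the palette values (the 20 Ito et al. colors would be supplied by the user) are assumptions:

```python
import numpy as np

def map_to_safe_palette(img_lab, palette_lab):
    # Replace every pixel with the nearest color of a p/d-safe palette
    # (nearest neighbor in CIELAB).
    flat = img_lab.reshape(-1, 3)
    d = ((flat[:, None, :] - palette_lab[None, :, :]) ** 2).sum(-1)
    return palette_lab[d.argmin(1)].reshape(img_lab.shape)
```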
High-Frame-Rate Doppler Ultrasound Using a Repeated Transmit Sequence
Podkowa, Anthony S.; Oelze, Michael L.; Ketterling, Jeffrey A.
2018-01-01
The maximum detectable velocity of high-frame-rate color flow Doppler ultrasound is limited by the imaging frame rate when using coherent compounding techniques. Traditionally, high-quality ultrasonic images are produced at a high frame rate via coherent compounding of steered plane wave reconstructions. However, this compounding operation results in an effective downsampling of the slow-time signal, thereby artificially reducing the frame rate. To alleviate this effect, a new transmit sequence is introduced where each transmit angle is repeated in succession. This transmit sequence allows for direct comparison between low-resolution, pre-compounded frames at a short time interval in ways that are resistant to sidelobe motion. Use of this transmit sequence increases the maximum detectable velocity by a scale factor of the transmit sequence length. The performance of this new transmit sequence was evaluated using a rotating cylindrical phantom and compared with traditional methods using a 15-MHz linear array transducer. Axial velocity estimates were recorded for a range of ±300 mm/s and compared to the known ground truth. Using these new techniques, the root mean square error was reduced from over 400 mm/s to below 50 mm/s in the high-velocity regime compared to traditional techniques. The standard deviation of the velocity estimate in the same velocity range was reduced from 250 mm/s to 30 mm/s. This result demonstrates the viability of the repeated transmit sequence methods in detecting and quantifying high-velocity flow. PMID:29910966
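The frame-rate argument can be made concrete with the standard Nyquist velocity relation; the numbers below are illustrative, not taken from the paper:

```python
def nyquist_velocity(prf, f0, n_angles=1, c=1540.0):
    # Maximum unaliased axial velocity (m/s). Compounding N steered
    # angles divides the effective slow-time rate by N; repeating each
    # angle back-to-back restores same-angle frame pairs at the full PRF.
    return c * (prf / n_angles) / (4.0 * f0)

# e.g. nyquist_velocity(10e3, 15e6)    -> ~0.257 m/s at the full PRF
#      nyquist_velocity(10e3, 15e6, 8) -> ~0.032 m/s after 8-angle compounding
```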
2017-07-13
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows part of Melas Chasma. Orbit Number: 59750 Latitude: -10.5452 Longitude: 290.307 Instrument: VIS Captured: 2015-06-03 12:33 https://photojournal.jpl.nasa.gov/catalog/PIA21705
2015-08-21
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows part of Melas Chasma. Orbit Number: 10289 Latitude: -9.9472 Longitude: 285.933 Instrument: VIS Captured: 2004-04-09 12:43 http://photojournal.jpl.nasa.gov/catalog/PIA19756
NASA Astrophysics Data System (ADS)
Funamizu, Hideki; Onodera, Yusei; Aizu, Yoshihisa
2018-05-01
In this study, we report color quality improvement of reconstructed images in color digital holography using the speckle method and spectral estimation. In this technique, an object is illuminated by a speckle field and an object wave is produced, while a plane wave is used as a reference wave. For three wavelengths, the interference patterns of the two coherent waves are recorded as digital holograms on an image sensor. The speckle fields are changed by moving a ground glass plate in an in-plane direction, and a number of holograms are acquired to average the reconstructed images. After the averaging process over images reconstructed from multiple holograms, we use the Wiener estimation method to obtain spectral transmittance curves for the reconstructed images. The color reproducibility of this method is demonstrated and evaluated using a Macbeth color chart film and stained onion cells.
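The Wiener estimation step can be sketched as follows; the prior and noise covariances are user-supplied assumptions (a first-order Markov correlation matrix is a common choice for R_s):

```python
import numpy as np

def wiener_matrix(A, R_s, R_n):
    # Wiener estimation matrix W = R_s A^T (A R_s A^T + R_n)^(-1):
    # A (3 x n_bands) maps transmittance spectra to the three recorded
    # responses, R_s is the spectral prior, R_n the noise covariance.
    return R_s @ A.T @ np.linalg.inv(A @ R_s @ A.T + R_n)

def estimate_spectra(responses, W):
    # responses: (..., 3) averaged values -> (..., n_bands) transmittance.
    return responses @ W.T
```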
Two-stage color palettization for error diffusion
NASA Astrophysics Data System (ADS)
Mitra, Niloy J.; Gupta, Maya R.
2002-06-01
Image-adaptive color palettization chooses a reduced number of colors to represent an image. Palettization is one way to decrease storage and memory requirements for low-end displays. It is generally approached as a clustering problem, where one attempts to find the k palette colors that minimize the average distortion over all the colors in an image. This would be the optimal approach if the image were displayed with each pixel quantized to the closest palette color; however, to improve image quality the palettization may be followed by error diffusion. In this work, we propose a two-stage palettization where the first stage finds some m << k clusters, and the second stage chooses palette points that cover the spread of each of the m clusters. After error diffusion, this method leads to better image quality at less computational cost and with faster display speed than full k-means palettization.
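A minimal sketch of the two-stage idea follows, using a nested k-means to spread palette points within each coarse cluster; the paper's exact covering rule for stage two may differ:

```python
import numpy as np
from sklearn.cluster import KMeans

def two_stage_palette(pixels, m=4, k=64):
    # Stage 1: m coarse clusters (m << k).
    coarse = KMeans(n_clusters=m, n_init=4).fit(pixels)
    palette = []
    for i in range(m):
        members = pixels[coarse.labels_ == i]
        # Stage 2: spread roughly k/m palette points over each cluster's
        # extent (nested k-means as a stand-in for the covering step).
        sub_k = max(1, min(k // m, len(members)))
        sub = KMeans(n_clusters=sub_k, n_init=2).fit(members)
        palette.append(sub.cluster_centers_)
    return np.vstack(palette)
```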
Tu, Li-ping; Chen, Jing-bo; Hu, Xiao-juan; Zhang, Zhi-feng
2016-01-01
Background and Goal. The application of digital image processing techniques and machine learning methods to tongue image classification in Traditional Chinese Medicine (TCM) has been widely studied. However, the outcomes are difficult to generalize because of the lack of color reproducibility and image standardization. Our study explores tongue color classification with a standardized tongue image acquisition process and color correction. Methods. Three traditional Chinese medical experts identify the selected tongue pictures, taken by the TDA-1 tongue imaging device in TIFF format with ICC profile correction. We then compare the mean L*a*b* values of the different tongue colors and evaluate tongue color classification by machine learning methods. Results. The L*a*b* values of the five tongue colors are statistically different. The random forest method performs better than SVM in classification. The SMOTE algorithm increases classification accuracy by addressing the imbalance among the color samples. Conclusions. Given standardized tongue acquisition and color reproduction, preliminary objectification of tongue color classification in TCM is feasible. PMID:28050555
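The classification pipeline can be sketched as below; the feature array here is random stand-in data (real inputs would be per-image mean L*a*b* values with expert labels), and SMOTE is applied to the training split only to avoid leaking synthetic samples into the test set:

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))   # stand-in for per-image mean L*, a*, b*
y = rng.integers(0, 5, 200)     # stand-in for the five expert color labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
print(clf.score(X_te, y_te))
```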
Kim, Bum Joon; Kim, Yong-Hwan; Kim, Yeon-Jung; Ahn, Sung Ho; Lee, Deok Hee; Kwon, Sun U; Kim, Sang Joon; Kim, Jong S; Kang, Dong-Wha
2014-09-01
Diffusion-weighted image fluid-attenuated inversion recovery (FLAIR) mismatch has been considered to represent ischemic lesion age. However, the inter-rater agreement of diffusion-weighted image FLAIR mismatch is low. We hypothesized that color-coded images would increase its inter-rater agreement. Patients with ischemic stroke <24 hours from a clear onset were retrospectively studied. FLAIR signal change was rated as negative, subtle, or obvious on conventional and color-coded FLAIR images based on visual inspection. Inter-rater agreement was evaluated using κ and percent agreement. The predictive value of diffusion-weighted image FLAIR mismatch for identifying patients <4.5 hours from symptom onset was evaluated. One hundred and thirteen patients were enrolled. The inter-rater agreement of FLAIR signal change improved from 69.9% (κ = 0.538) with conventional images to 85.8% (κ = 0.754) with color-coded images (P = 0.004). Patients rated discrepantly on conventional, but not on color-coded, images had a higher prevalence of cardioembolic stroke (P = 0.02) and cortical infarction (P = 0.04). The positive predictive value for patients <4.5 hours from onset was 85.3% and 71.9% with conventional and 95.7% and 82.1% with color-coded images, by each rater. Color-coded FLAIR images increased the inter-rater agreement of diffusion-weighted image FLAIR mismatch and may ultimately help identify unknown-onset stroke patients appropriate for thrombolysis. © 2014 American Heart Association, Inc.
An Underwater Color Image Quality Evaluation Metric.
Yang, Miao; Sowmya, Arcot
2015-12-01
Quality evaluation of underwater images is a key goal of underwater video image retrieval and intelligent processing. To date, no metric has been proposed for underwater color image quality evaluation (UCIQE). The special absorption and scattering characteristics of the water medium do not allow direct application of natural color image quality metrics, especially across different underwater environments. In this paper, subjective testing for underwater image quality was organized. The statistical distribution of underwater image pixels in the CIELab color space, related to the subjective evaluation, indicates that sharpness and colorfulness factors correlate well with subjective image quality perception. Based on these, a new UCIQE metric, which is a linear combination of chroma, saturation, and contrast, is proposed to quantify the non-uniform color cast, blurring, and low contrast that characterize underwater engineering and monitoring images. Experiments are conducted to illustrate the performance of the proposed UCIQE metric and its capability to measure underwater image enhancement results. They show that the proposed metric has performance comparable to the leading natural color image quality metrics and the underwater grayscale image quality metrics in the literature, and can predict with higher accuracy the relative amount of degradation for similar image content in underwater environments. Importantly, UCIQE is a simple and fast solution for real-time underwater video processing. The effectiveness of the presented measure is also demonstrated by subjective evaluation, which shows good correlation between UCIQE and the subjective mean opinion score.
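A sketch of such a linear-combination metric is given below. The coefficients and the saturation definition (chroma relative to lightness) follow commonly seen implementations of UCIQE and should be treated as assumptions rather than the paper's definitive values:

```python
import numpy as np
from skimage import color

def uciqe(rgb, c=(0.4680, 0.2745, 0.2576)):
    # rgb: uint8 image. Linear combination of chroma std, luminance
    # contrast (spread between the brightest and darkest percentiles),
    # and mean saturation, all computed in CIELab.
    lab = color.rgb2lab(rgb / 255.0)
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    chroma = np.sqrt(a**2 + b**2)
    sat = chroma / np.maximum(np.sqrt(chroma**2 + L**2), 1e-6)
    con_l = np.percentile(L, 99) - np.percentile(L, 1)
    return c[0] * chroma.std() + c[1] * con_l + c[2] * sat.mean()
```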
Photometric analysis of the open cluster NGC 6611 [Análisis fotométrico del cúmulo abierto NGC 6611]
NASA Astrophysics Data System (ADS)
Suarez Nunez, Johanna
2007-08-01
Matlab programs were designed to apply differential aperture photometry. Two images were taken with a charge-coupled device (CCD) in the visible (V) and blue (B) filters to calculate physical parameters of each studied star in the open cluster NGC 6611: the flux f, the apparent magnitude m_V and its reddening-corrected value V_0, the color index (B-V) and its dereddened value (B-V)_0, the logarithm of the effective temperature log T_eff, the absolute magnitude M_V, the bolometric magnitude M_B, and log(L_*/L_⊙). With these parameters, the color-magnitude diagram was plotted, and by fitting to the main sequence the distance modulus, and thus the distance to the cluster, was found. The stars were assumed to be at the same distance and born at approximately the same time.
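The final step is a direct application of the distance-modulus relation, sketched here for reference:

```python
def distance_from_modulus(m_v0, M_v):
    # Distance modulus: m - M = 5 log10(d/pc) - 5, using the
    # reddening-corrected apparent magnitude.
    return 10.0 ** ((m_v0 - M_v + 5.0) / 5.0)  # distance in parsecs
```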
Effects of Colored Noise on Periodic Orbits in a One-Dimensional Map
NASA Astrophysics Data System (ADS)
Li, Feng-Guo; Ai, Bao-Quan
2011-06-01
Noise can induce an inverse period-doubling transition and chaos. The effects of colored noise on the periodic orbits of different periodic sequences in the logistic map are investigated. It is found that the dynamical behaviors of the orbits induced by an exponentially correlated colored noise differ in the emergence of the transition, and that the effects of the noise intensity on their dynamical behaviors differ from the effects of the correlation time of the noise. Remarkably, the noise can induce new periodic orbits: two new orbits emerge in the period-four sequence at the bifurcation parameter value μ = 3.5, four new orbits in the period-eight sequence at μ = 3.55, and three new orbits in the period-six sequence at μ = 3.846, respectively. Moreover, the dynamical behaviors of the new orbits clearly show a resonance-like response to the colored noise.
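A minimal simulation of the setup can be sketched as a logistic map driven by an AR(1) process, which has the exponential autocorrelation the paper assumes; the parameter values below are illustrative:

```python
import numpy as np

def logistic_colored(mu=3.55, tau=5.0, D=1e-4, n=5000, dt=1.0, seed=0):
    # Exponentially correlated noise as an AR(1) process with
    # correlation time tau and intensity D, added at each iterate.
    rng = np.random.default_rng(seed)
    rho = np.exp(-dt / tau)
    x, eps, orbit = 0.5, 0.0, []
    for _ in range(n):
        eps = rho * eps + np.sqrt(D * (1 - rho**2)) * rng.standard_normal()
        x = mu * x * (1 - x) + eps
        orbit.append(x)
    return np.array(orbit)
```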
A new Watermarking System based on Discrete Cosine Transform (DCT) in color biometric images.
Dogan, Sengul; Tuncer, Turker; Avci, Engin; Gulten, Arif
2012-08-01
This paper proposes a biometric color image hiding approach, a watermarking system based on the Discrete Cosine Transform (DCT), which is used to protect the security and integrity of transmitted biometric color images. Watermarking is an important information hiding technique for digital objects (audio, video, color images, gray images) and has become common with developing technology in recent years. One of the common methods for hiding information in image files is the DCT method, which operates in the frequency domain. In this study, DCT methods are used to embed watermark data into face images without corrupting their features.
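For illustration, here is a textbook-style DCT watermark on 8×8 blocks; the coefficient position and embedding strength are illustrative choices, not the paper's parameters:

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(b):
    return dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(b):
    return idct(idct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

def embed_bit(block8, bit, alpha=12.0):
    # Force the sign of one mid-band DCT coefficient of an 8x8 block.
    c = dct2(block8.astype(float))
    c[4, 3] = alpha if bit else -alpha
    return idct2(c)

def extract_bit(block8):
    # Recover the bit from the sign of the same coefficient.
    return dct2(block8.astype(float))[4, 3] > 0.0
```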
[True color accuracy in digital forensic photography].
Ramsthaler, Frank; Birngruber, Christoph G; Kröll, Ann-Katrin; Kettner, Mattias; Verhoff, Marcel A
2016-01-01
Forensic photographs not only need to be unaltered and authentic and capture context-relevant images, along with certain minimum requirements for image sharpness and information density, but color accuracy also plays an important role, for instance, in the assessment of injuries or taphonomic stages, or in the identification and evaluation of traces from photos. The perception of color not only varies subjectively from person to person, but as a discrete property of an image, color in digital photos is also to a considerable extent influenced by technical factors such as lighting, acquisition settings, camera, and output medium (print, monitor). For these reasons, consistent color accuracy has so far been limited in digital photography. Because images usually contain a wealth of color information, especially for complex or composite colors or shades of color, and the wavelength-dependent sensitivity to factors such as light and shadow may vary between cameras, the usefulness of issuing general recommendations for camera capture settings is limited. Our results indicate that true image colors can best and most realistically be captured with the SpyderCheckr technical calibration tool for digital cameras tested in this study. Apart from aspects such as the simplicity and quickness of the calibration procedure, a further advantage of the tool is that the results are independent of the camera used and can also be used for the color management of output devices such as monitors and printers. The SpyderCheckr color-code patches allow true colors to be captured more realistically than with a manual white balance tool or an automatic flash. We therefore recommend that the use of a color management tool should be considered for the acquisition of all images that demand high true color accuracy (in particular in the setting of injury documentation).
Three Fresh Exposures, Enhanced Color
NASA Technical Reports Server (NTRS)
2004-01-01
This enhanced-color panoramic camera image from the Mars Exploration Rover Opportunity features three holes created by the rock abrasion tool between sols 143 and 148 (June 18 and June 23, 2004) inside 'Endurance Crater.' The enhanced image makes the red colors a little redder and blue colors a little bluer, allowing viewers to see differences too subtle to be seen without the exaggeration. When compared with an approximately true color image, the tailings from the rock abrasion tool and the interior of the abraded holes are more prominent in this view. Being able to discriminate color variations helps scientists determine rocks' compositional differences and texture variations. This image was created using the 753-, 535- and 432-nanometer filters.
Security of Color Image Data Designed by Public-Key Cryptosystem Associated with 2D-DWT
NASA Astrophysics Data System (ADS)
Mishra, D. C.; Sharma, R. K.; Kumar, Manish; Kumar, Kuldeep
2014-08-01
The security of image data is a major issue today, so we propose a novel technique for securing color image data with a public-key (asymmetric) cryptosystem. In this technique, color image data are secured using the RSA (Rivest-Shamir-Adleman) cryptosystem together with the two-dimensional discrete wavelet transform (2D-DWT). Earlier schemes for the security of color images were designed on the basis of keys alone, whereas this approach provides security through both the keys and the correct arrangement of the RSA parameters: if an attacker knows the exact keys but has no information about the exact arrangement of the RSA parameters, the original information cannot be recovered from the encrypted data. Computer simulations based on a standard example critically examine the behavior of the proposed technique. A security analysis and a detailed comparison between earlier schemes and the proposed technique are also given to establish the robustness of the cryptosystem.
Single-exposure quantitative phase imaging in color-coded LED microscopy.
Lee, Wonchan; Jung, Daeseong; Ryu, Suho; Joo, Chulmin
2017-04-03
We demonstrate single-shot quantitative phase imaging (QPI) in a platform of color-coded LED microscopy (cLEDscope). The light source in a conventional microscope is replaced by a circular LED pattern that is trisected into subregions with equal area, assigned to red, green, and blue colors. Image acquisition with a color image sensor and subsequent computation based on weak object transfer functions allow for the QPI of a transparent specimen. We also provide a correction method for color-leakage, which may be encountered in implementing our method with consumer-grade LEDs and image sensors. Most commercially available LEDs and image sensors do not provide spectrally isolated emissions and pixel responses, generating significant error in phase estimation in our method. We describe the correction scheme for this color-leakage issue, and demonstrate improved phase measurement accuracy. The computational model and single-exposure QPI capability of our method are presented by showing images of calibrated phase samples and cellular specimens.
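The color-leakage correction can be sketched as a linear unmixing problem, assuming a calibration step in which each LED subregion is lit alone to measure the cross-channel responses:

```python
import numpy as np

def correct_leakage(raw_rgb, M):
    # M[i, j]: response of sensor channel i to LED j, measured by
    # lighting one LED at a time; inverting M separates the three
    # illumination angles mixed into the color sensor's channels.
    return np.einsum('ij,...j->...i', np.linalg.inv(M), raw_rgb)
```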
Video and thermal imaging system for monitoring interiors of high temperature reaction vessels
Saveliev, Alexei V [Chicago, IL; Zelepouga, Serguei A [Hoffman Estates, IL; Rue, David M [Chicago, IL
2012-01-10
A system and method for real-time monitoring of the interior of a combustor or gasifier wherein light emitted by the interior surface of a refractory wall of the combustor or gasifier is collected using an imaging fiber optic bundle having a light receiving end and a light output end. Color information in the light is captured with primary color (RGB) filters or complementary color (GMCY) filters placed over individual pixels of color sensors disposed within a digital color camera in a Bayer mosaic layout, producing RGB signal outputs or GMCY signal outputs. The signal outputs are processed using intensity ratios of the primary color filters or the complementary color filters, producing video images and/or thermal images of the interior of the combustor or gasifier.
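Intensity-ratio thermal imaging of this kind is commonly realized as two-color (ratio) pyrometry; a Wien-approximation sketch is given below, under the gray-body assumption of equal emissivity in both bands (the patent's exact processing may differ):

```python
import numpy as np

C2 = 1.4388e-2  # second radiation constant (m*K)

def two_color_temperature(i1, i2, lam1, lam2):
    # Gray-body ratio pyrometry in the Wien approximation: the ratio of
    # intensities in two filter bands (effective wavelengths lam1, lam2
    # in meters) yields temperature independent of emissivity.
    lhs = np.log(np.maximum(i1, 1e-9) / np.maximum(i2, 1e-9)) \
        + 5.0 * np.log(lam1 / lam2)
    return C2 * (1.0 / lam2 - 1.0 / lam1) / lhs
```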
Pediconi, Federica; Catalano, Carlo; Venditti, Fiammetta; Ercolani, Mauro; Carotenuto, Luigi; Padula, Simona; Moriconi, Enrica; Roselli, Antonella; Giacomelli, Laura; Kirchin, Miles A; Passariello, Roberto
2005-07-01
The objective of this study was to evaluate the value of a color-coded automated signal intensity curve software package for contrast-enhanced magnetic resonance mammography (CE-MRM) in patients with suspected breast cancer. Thirty-six women with suspected breast cancer based on mammographic and sonographic examinations were preoperatively evaluated on CE-MRM. CE-MRM was performed on a 1.5-T magnet using a 2D Flash dynamic T1-weighted sequence. A dosage of 0.1 mmol/kg of Gd-BOPTA was administered at a flow rate of 2 mL/s followed by 10 mL of saline. Images were analyzed with the new software package and separately with a standard display method. Statistical comparison was performed of the confidence for lesion detection and characterization with the 2 methods and of the diagnostic accuracy for characterization compared with histopathologic findings. At pathology, 54 malignant lesions and 14 benign lesions were evaluated. All 68 (100%) lesions were detected with both methods and good correlation with histopathologic specimens was obtained. Confidence for both detection and characterization was significantly (P < or = 0.025) better with the color-coded method, although no difference (P > 0.05) between the methods was noted in terms of the sensitivity, specificity, and overall accuracy for lesion characterization. Excellent agreement between the 2 methods was noted for both the determination of lesion size (kappa = 0.77) and determination of SI/T curves (kappa = 0.85). The novel color-coded signal intensity curve software allows lesions to be visualized as false color maps that correspond to conventional signal intensity time curves. Detection and characterization of breast lesions with this method is quick and easily interpretable.
Super Resolution Image of Yogi
NASA Technical Reports Server (NTRS)
1997-01-01
Yogi is a meter-size rock about 5 meters northwest of the Mars Pathfinder lander and was the second rock visited by the Sojourner Rover's alpha proton X-ray spectrometer (APXS) instrument. This mosaic shows super resolution techniques applied to the second APXS target rock, which was poorly illuminated in the rover's forward camera view taken before the instrument was deployed. Super resolution was applied to help to address questions about the texture of this rock and what it might tell us about its mode of origin.
This mosaic of Yogi was produced by combining four 'Super Pan' frames taken with the IMP camera. This composite color mosaic consists of 7 frames from the right eye, taken with different color filters, that were enlarged by 500% and then co-added using Adobe Photoshop to produce, in effect, a super-resolution panchromatic frame that is sharper than an individual frame would be. This panchromatic frame was then colorized with the red, green, and blue filtered images from the same sequence. The color balance was adjusted to approximate the true color of Mars. Shadows were processed separately from the rest of the rock and combined with the rest of the scene to bring out details in the shadow of Yogi that would be too dark to view at the same time as the sunlit surfaces.

Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is a division of the California Institute of Technology (Caltech).

Use of discrete chromatic space to tune the image tone in a color image mosaic
NASA Astrophysics Data System (ADS)
Zhang, Zuxun; Li, Zhijiang; Zhang, Jianqing; Zheng, Li
2003-09-01
Color image processing is a very important problem. The main current approach is to transform the RGB color space into another color space, such as HIS (hue, intensity, and saturation), YIQ, LUV, and so on. In fact, processing a color airborne image in just one color space may not be valid, because the electromagnetic wave is physically recorded band by band, while the color image is perceived through psychological vision. Therefore, it is necessary to propose an approach consistent with both the physical transformation and psychological perception. An analysis of how to use the relative color spaces to process color airborne photos is presented, together with an application to tuning the image tone in a color airborne image mosaic. As a practical matter, a complete approach to performing the mosaic of color airborne images by taking full advantage of the relative color spaces is discussed in the application.
Computerized simulation of color appearance for anomalous trichromats using the multispectral image.
Yaguchi, Hirohisa; Luo, Junyan; Kato, Miharu; Mizokami, Yoko
2018-04-01
Most color simulators for color deficiencies are based on the tristimulus values and are intended to simulate the appearance of an image for dichromats. Statistics show that there are more anomalous trichromats than dichromats. Furthermore, the spectral sensitivities of anomalous cones are different from those of normal cones. Clinically, the types of color defects are characterized through Rayleigh color matching, where the observer matches a spectral yellow to a mixture of spectral red and green. The midpoints of the red/green ratios deviate from a normal trichromat. This means that any simulation based on the tristimulus values defined by a normal trichromat cannot predict the color appearance of anomalous Rayleigh matches. We propose a computerized simulation of the color appearance for anomalous trichromats using multispectral images. First, we assume that anomalous trichromats possess a protanomalous (green shifted) or deuteranomalous (red shifted) pigment instead of a normal (L or M) one. Second, we assume that the luminance will be given by L+M, and red/green and yellow/blue opponent color stimulus values are defined through L-M and (L+M)-S, respectively. Third, equal-energy white will look white for all observers. The spectral sensitivities of the luminance and the two opponent color channels are multiplied by the spectral radiance of each pixel of a multispectral image to give the luminance and opponent color stimulus values of the entire image. In the next stage of color reproduction for normal observers, the luminance and two opponent color channels are transformed into XYZ tristimulus values and then transformed into sRGB to reproduce a final image for anomalous trichromats. The proposed simulation can be used to predict the Rayleigh color matches for anomalous trichromats. We also conducted experiments to evaluate the appearance of simulated images by color deficient observers and verified the reliability of the simulation.
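The opponent-channel stage described above can be sketched directly on multispectral pixels; the cone sensitivity curves are user-supplied inputs (an anomalous observer is modeled by substituting a spectrally shifted L or M curve), and normalizing so equal-energy white gives zero opponent responses is left to the caller:

```python
import numpy as np

def opponent_channels(radiance, L_sens, M_sens, S_sens):
    # radiance: (..., n_bands) multispectral pixel spectra; *_sens are
    # cone sensitivities sampled on the same wavelength bands.
    L = radiance @ L_sens
    M = radiance @ M_sens
    S = radiance @ S_sens
    return L + M, L - M, (L + M) - S  # luminance, red/green, yellow/blue
```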
The Rich Color Variations of Pluto
2015-09-24
NASA's New Horizons spacecraft captured this high-resolution enhanced color view of Pluto on July 14, 2015. The image combines blue, red and infrared images taken by the Ralph/Multispectral Visual Imaging Camera (MVIC). Pluto's surface sports a remarkable range of subtle colors, enhanced in this view to a rainbow of pale blues, yellows, oranges, and deep reds. Many landforms have their own distinct colors, telling a complex geological and climatological story that scientists have only just begun to decode. The image resolves details and colors on scales as small as 0.8 miles (1.3 kilometers). http://photojournal.jpl.nasa.gov/catalog/PIA19952
NPS assessment of color medical image displays using a monochromatic CCD camera
NASA Astrophysics Data System (ADS)
Roehrig, Hans; Gu, Xiliang; Fan, Jiahua
2012-10-01
This paper presents an approach to noise power spectrum (NPS) assessment of color medical displays that does not require an expensive imaging colorimeter. Uniform R, G, and B patterns were shown on the display under study, and the images were taken using a high-resolution monochromatic camera; a colorimeter was used to calibrate the camera images. Synthetic intensity images were formed as a weighted sum of the R, G, B, and dark-screen images, and the NPS analysis was conducted on the synthetic images. The proposed method replaces an expensive imaging colorimeter for NPS evaluation, which also suggests a potential solution for routine color medical display QA/QC in the clinical area, especially when imaging of display devices is desired.
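The NPS estimate itself is a standard computation, sketched below for one synthetic intensity image; in practice one averages over many ROIs and removes low-order trends before the FFT:

```python
import numpy as np

def nps_2d(flat_image, pixel_pitch):
    # 2D noise power spectrum of a uniform-pattern image: subtract the
    # mean, FFT, normalize by pixel area over the ROI size.
    n = flat_image - flat_image.mean()
    h, w = n.shape
    return np.abs(np.fft.fftshift(np.fft.fft2(n)))**2 * pixel_pitch**2 / (h * w)
```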
2016-10-11
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows dust devil tracks (dark blue linear feature) in Terra Cimmeria. Orbit Number: 43463 Latitude: -53.1551 Longitude: 125.069 Instrument: VIS Captured: 2011-10-01 23:55 http://photojournal.jpl.nasa.gov/catalog/PIA21009
2017-06-01
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows part of Russell Crater in Noachis Terra. Orbit Number: 59591 Latitude: -54.471 Longitude: 13.1288 Instrument: VIS Captured: 2015-05-21 10:57 https://photojournal.jpl.nasa.gov/catalog/PIA21674
Kamei, Ryotaro; Watanabe, Yuji; Sagiyama, Koji; Isoda, Takuro; Togao, Osamu; Honda, Hiroshi
2018-05-23
To investigate the optimal monochromatic color combination for fusion imaging of FDG-PET and diffusion-weighted MR (DW) images with regard to the lesion conspicuity of each image, six linear monochromatic color maps (red, blue, green, cyan, magenta, and yellow) were assigned to each of the FDG-PET and DW images. Total perceptual color differences of the lesions were calculated from the lightness and chromaticity measured with a photometer. Visual lesion conspicuity was also compared among the PET-only, DW-only, and PET-DW double-positive portions using mean conspicuity scores. Statistical analysis was performed with a one-way analysis of variance and Spearman's rank correlation coefficient. Among all 12 possible monochromatic color-map combinations, the three combinations of red/cyan, magenta/green, and red/green produced the highest conspicuity scores. Total color differences between PET-positive and double-positive portions correlated with conspicuity scores (ρ = 0.2933, p < 0.005). Lightness differences showed a significant negative correlation with conspicuity scores between the PET-only and DW-only positive portions. Chromaticity differences showed a marginally significant correlation with conspicuity scores between DW-positive and double-positive portions. Monochromatic color combinations can facilitate the visual evaluation of FDG uptake and diffusivity, as well as registration accuracy, on FDG-PET/DW fusion images when red- and green-colored elements are assigned to the FDG-PET and DW images, respectively.
On-screen-display (OSD) menu detection for proper stereo content reproduction for 3D TV
NASA Astrophysics Data System (ADS)
Tolstaya, Ekaterina V.; Bucha, Victor V.; Rychagov, Michael N.
2011-03-01
Modern consumer 3D TV sets can show video content in two different modes: 2D and 3D. In 3D mode, the stereo pair comes from an external device such as a Blu-ray player or satellite receiver. The stereo pair is split into left and right images that are shown one after another, and the viewer sees a different image with each eye through shutter glasses properly synchronized with the 3D TV. In addition, some devices that supply the TV with stereo content can display additional information by imposing an overlay picture on the video content, an On-Screen-Display (OSD) menu. Some OSDs are not 3D compatible and lead to incorrect 3D reproduction. In this case, the TV set must recognize whether the OSD is 3D compatible and visualize it correctly, either by switching off stereo mode or by continuing to show stereo content. We propose a new, stable method for detecting 3D-incompatible OSD menus on stereo content. A conventional OSD is a rectangular area with letters and pictograms, and an OSD menu can have different transparency levels and colors. To be 3D compatible, an OSD must be overlaid separately on both images of a stereo pair. The main problem in detecting an OSD is distinguishing whether a color difference is due to the presence of an OSD or to stereo parallax. We applied special techniques to find a reliable image difference and additionally used the cue that an OSD usually has distinctive geometrical features: straight parallel lines. The developed algorithm was tested on our database of video sequences, with several types of OSD of different colors and transparency levels overlaid on the video content. Detection quality exceeded 99% correct answers.
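One of the cues mentioned, straight parallel lines in the left/right difference image, can be sketched with standard OpenCV calls; the thresholds below are illustrative, and this fragment ignores the parallax handling that the full method requires:

```python
import cv2
import numpy as np

def osd_line_cue(left, right, diff_thr=25):
    """Crude OSD cue: threshold the left/right difference, then look for long
    straight lines (typical OSD borders). Stereo parallax also produces
    differences, which the paper handles with additional techniques."""
    d = cv2.absdiff(left, right)
    mask = (cv2.cvtColor(d, cv2.COLOR_BGR2GRAY) > diff_thr).astype(np.uint8) * 255
    edges = cv2.Canny(mask, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=5)
    return lines is not None and len(lines) > 4   # crude "OSD present" vote
```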
NASA Astrophysics Data System (ADS)
Hirose, Misa; Toyota, Saori; Tsumura, Norimichi
2018-02-01
In this research, we evaluate the visibility of age spots and freckles while changing the blood volume, based on simulated spectral reflectance distributions and actual facial color images, and compare the results. First, we generate three types of spatial distribution of age spots and freckles in patch-like images based on the simulated spectral reflectance. The spectral reflectance is simulated using a Monte Carlo simulation of light transport in multi-layered tissue. Next, we reconstruct the facial color image while changing the blood volume. We acquire the concentration distributions of the melanin, hemoglobin, and shading components by applying independent component analysis to a facial color image. We reproduce images using the obtained melanin and shading concentrations and the changed hemoglobin concentration. Finally, we evaluate the visibility of the pigmentations using the simulated spectral reflectance distributions and the facial color images. For the simulated spectral reflectance distributions, we found that the visibility became lower as the blood volume increased. However, the facial color images showed that only a specific blood volume reduces the visibility of the actual pigmentations.
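The ICA decomposition step is in the spirit of the well-known skin-pigment separation on optical densities; a hedged sketch using scikit-learn's FastICA on the negative log of the RGB channels (the two-component setup and normalization are assumptions, and the shading component is ignored here):

```python
import numpy as np
from sklearn.decomposition import FastICA

def separate_pigments(rgb):
    """Separate melanin- and hemoglobin-like components from a facial image:
    ICA on the optical density (negative log) of the RGB channels."""
    od = -np.log(np.clip(rgb.reshape(-1, 3).astype(float) / 255.0, 1e-3, 1.0))
    ica = FastICA(n_components=2, random_state=0)
    comps = ica.fit_transform(od)                 # (H*W, 2) pigment densities
    return comps.reshape(rgb.shape[0], rgb.shape[1], 2), ica

# To change the blood volume, one could scale the hemoglobin-like component,
# remix with ica.inverse_transform(...), and exponentiate back to RGB.
```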
Modeling a color-rendering operator for high dynamic range images using a cone-response function
NASA Astrophysics Data System (ADS)
Choi, Ho-Hyoung; Kim, Gi-Seok; Yun, Byoung-Ju
2015-09-01
Tone-mapping operators are the typical algorithms designed to produce visibility and the overall impression of brightness, contrast, and color of high dynamic range (HDR) images on low dynamic range (LDR) display devices. Although several new tone-mapping operators have been proposed in recent years, their results have not matched those of psychophysical experiments based on the human visual system. A color-rendering model that combines tone-mapping and cone-response functions in an XYZ tristimulus color space is presented. In the proposed method, the tone-mapping operator produces visibility and the overall impression of brightness, contrast, and color in HDR images when mapped onto relatively low dynamic range devices. The tone-mapped image is obtained using chromatic and achromatic colors to avoid the well-known color distortions of conventional methods. The resulting image is then processed with a cone-response function in which emphasis is placed on human visual perception (HVP). The proposed method addresses the mismatch between the actual scene and the rendered image based on HVP. The experimental results show that the proposed method yields improved color-rendering performance compared to conventional methods.
Image Retrieval by Color Semantics with Incomplete Knowledge.
ERIC Educational Resources Information Center
Corridoni, Jacopo M.; Del Bimbo, Alberto; Vicario, Enrico
1998-01-01
Presents a system which supports image retrieval by high-level chromatic contents, the sensations that color accordances generate on the observer. Surveys Itten's theory of color semantics and discusses image description and query specification. Presents examples of visual querying. (AEF)
1995-02-01
Modification of existing JPEG compression and decompression software, available from the Independent JPEG Users Group, to process CIELAB color images and to use...externally specified Huffman tables. In addition, a conversion program was written to convert CIELAB color space images to red, green, blue color space.
Balaban, M O; Aparicio, J; Zotarelli, M; Sims, C
2008-11-01
The average colors of mangos and apples were measured using machine vision. A method to quantify the perception of nonhomogeneous colors by sensory panelists was developed. Untrained panelists selected three colors out of several reference colors, along with the perceived percentage of the total sample area covered by each. Differences between the average colors perceived by panelists and those from the machine vision were reported as DeltaE values (color difference error). The effects on DeltaE of color nonhomogeneity, and of using real samples versus their images in the sensory panels, were evaluated. In general, samples with more nonuniform colors had higher DeltaE values, suggesting that panelists had more difficulty in evaluating more nonhomogeneous colors. There was no significant difference in DeltaE values between the real fruits and their screen images; therefore, images can be used to evaluate color instead of the real samples.
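The DeltaE comparison can be illustrated in a few lines: form the area-weighted average of the reference colors a panelist selected, then take the CIE76 difference against the machine-vision average. The numeric values below are made up for illustration:

```python
import numpy as np

def perceived_average(colors, fractions):
    """Area-weighted average of the reference colors a panelist selected."""
    colors = np.asarray(colors, float)
    w = np.asarray(fractions, float)
    return (colors * w[:, None]).sum(0) / w.sum()

def delta_e_ab(lab1, lab2):
    """CIE76 color difference between two CIELAB colors."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

panel = perceived_average([(62, 20, 52), (58, 24, 46)], [0.7, 0.3])
print(delta_e_ab(panel, (60.0, 21.0, 50.3)))   # DeltaE vs machine-vision average
```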
False Color Mosaic Great Red Spot
NASA Technical Reports Server (NTRS)
1996-01-01
False color representation of Jupiter's Great Red Spot (GRS) taken through three different near-infrared filters of the Galileo imaging system and processed to reveal cloud top height. Images taken through Galileo's near-infrared filters record sunlight beyond the visible range that penetrates to different depths in Jupiter's atmosphere before being reflected by clouds. The Great Red Spot appears pink and the surrounding region blue because of the particular color coding used in this representation. Light reflected by Jupiter at a wavelength (886 nm) where methane strongly absorbs is shown in red. Due to this absorption, only high clouds can reflect sunlight in this wavelength. Reflected light at a wavelength (732 nm) where methane absorbs less strongly is shown in green. Lower clouds can reflect sunlight in this wavelength. Reflected light at a wavelength (757 nm) where there are essentially no absorbers in the Jovian atmosphere is shown in blue: This light is reflected from the deepest clouds. Thus, the color of a cloud in this image indicates its height. Blue or black areas are deep clouds; pink areas are high, thin hazes; white areas are high, thick clouds. This image shows the Great Red Spot to be relatively high, as are some smaller clouds to the northeast and northwest that are surprisingly like towering thunderstorms found on Earth. The deepest clouds are in the collar surrounding the Great Red Spot, and also just to the northwest of the high (bright) cloud in the northwest corner of the image. Preliminary modeling shows these cloud heights vary over 30 km in altitude. This mosaic, of eighteen images (6 in each filter) taken over a 6 minute interval during the second GRS observing sequence on June 26, 1996, has been map-projected to a uniform grid of latitude and longitude. North is at the top.
Launched in October 1989, Galileo entered orbit around Jupiter on December 7, 1995. The spacecraft's mission is to conduct detailed studies of the giant planet, its largest moons and the Jovian magnetic environment. The Jet Propulsion Laboratory, Pasadena, CA manages the mission for NASA's Office of Space Science, Washington, DC. This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo
Restoration of color in a remote sensing image and its quality evaluation
NASA Astrophysics Data System (ADS)
Zhang, Zuxun; Li, Zhijiang; Zhang, Jianqing; Wang, Zhihe
2003-09-01
This paper focuses on the restoration of color remote sensing images (including airborne photographs). A complete approach is recommended, in which two main aspects are addressed: restoration of spatial information and restoration of photometric information. The restoration of spatial information is performed by using the modulation transfer function (MTF) as the degradation function, where the MTF is obtained by measuring the edge curve of the original image. The restoration of photometric information is performed by an improved local maximum entropy algorithm. Moreover, a practical approach to processing color remote sensing images is recommended: the color image is split into three monochromatic images corresponding to the three visible-light bands, and the three images are processed separately and then synthesized under a psychological color vision restriction. Finally, three novel evaluation variables based on image restoration are introduced to evaluate the quality of the spatial and photometric restoration. An evaluation is provided at the end.
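For the spatial-information step, using the measured MTF as the degradation function amounts to a deconvolution; a sketch with a Wiener filter, where the noise-to-signal ratio and the assumption that the MTF is sampled on the FFT frequency grid are illustrative (the paper's photometric step, local maximum entropy, is not shown):

```python
import numpy as np

def wiener_restore(img, mtf, nsr=0.01):
    """Restore spatial detail using the measured MTF as degradation function.

    img : 2D grayscale band of the remote sensing image
    mtf : 2D array, same shape, the MTF sampled on the FFT frequency grid
          (e.g. estimated from the edge curve of the original image)
    nsr : assumed noise-to-signal power ratio of the Wiener filter
    """
    F = np.fft.fft2(img)
    H = mtf.astype(complex)
    G = np.conj(H) / (np.abs(H)**2 + nsr)   # Wiener deconvolution: H*/(|H|^2+NSR)
    return np.real(np.fft.ifft2(F * G))
```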
Population attribute compression
White, James M.; Faber, Vance; Saltzman, Jeffrey S.
1995-01-01
An image population having a large number of attributes is processed to form a display population with a predetermined smaller number of attributes that represent the larger number of attributes. In a particular application, the color values in an image are compressed for storage in a discrete look-up table (LUT). Color space containing the LUT color values is successively subdivided into smaller volumes until a plurality of volumes are formed, each having no more than a preselected maximum number of color values. Image pixel color values can then be rapidly placed in a volume with only a relatively few LUT values from which a nearest neighbor is selected. Image color values are assigned 8 bit pointers to their closest LUT value whereby data processing requires only the 8 bit pointer value to provide 24 bit color values from the LUT.
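A hedged sketch of the recursive subdivision: split color space on the median of alternating axes until each box holds no more than the preselected number of LUT colors. The split rule and parameter names are assumptions for illustration, not the patent's exact procedure:

```python
import numpy as np

def subdivide(colors, idx, max_pts=8, depth=0):
    """Recursively split color space until each box holds <= max_pts LUT colors.

    Returns a list of (box_min, box_max, indices) leaves; a pixel is matched by
    locating its leaf and doing nearest-neighbor search over that short list.
    """
    if len(idx) <= max_pts:
        return [(colors[idx].min(0), colors[idx].max(0), idx)]
    axis = depth % 3                           # cycle through R, G, B
    med = np.median(colors[idx, axis])
    lo = idx[colors[idx, axis] <= med]
    hi = idx[colors[idx, axis] > med]
    if len(lo) == 0 or len(hi) == 0:           # degenerate split; stop here
        return [(colors[idx].min(0), colors[idx].max(0), idx)]
    return (subdivide(colors, lo, max_pts, depth + 1)
            + subdivide(colors, hi, max_pts, depth + 1))

lut = np.random.randint(0, 256, (256, 3))      # a hypothetical 256-entry LUT
leaves = subdivide(lut, np.arange(256))
```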
NASA Astrophysics Data System (ADS)
Spiridonov, I.; Shopova, M.; Boeva, R.; Nikolov, M.
2012-05-01
One of the biggest problems in color reproduction processes is the color shift that occurs when images are viewed under different illuminants. Process ink colors and their combinations that match under one light source will often appear different under another light source. This problem is referred to as color balance failure or color inconstancy. The main goals of the present study are to investigate and quantify the color balance failure (color inconstancy) of offset printed images, expressed by color difference and color gamut changes, for three of the illuminants most commonly used in practice: CIE D50, CIE F2, and CIE A. The results obtained are important from both a scientific and a practical point of view. For the first time, a methodology is suggested and implemented for the examination and estimation of color shifts by studying a large number of color and gamut changes in various ink combinations under different illuminants.
Russell Crater Dunes - False Color
2017-07-07
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows part of the large dune form on the floor of Russell Crater. Orbit Number: 59672 Latitude: -54.337 Longitude: 13.1087 Instrument: VIS Captured: 2015-05-28 02:39 https://photojournal.jpl.nasa.gov/catalog/PIA21701
NASA Astrophysics Data System (ADS)
Iwai, Daiki; Suganami, Haruka; Hosoba, Minoru; Ohno, Kazuko; Emoto, Yutaka; Tabata, Yoshito; Matsui, Norihisa
2013-03-01
Color image consistency has not yet been achieved, except for the Digital Imaging and Communications in Medicine (DICOM) Supplement 100, which implements a color reproduction pipeline and device-independent color spaces. Thus, most healthcare enterprises cannot routinely check monitor degradation. To ensure color consistency in medical color imaging, monitor color calibration should be introduced. Using a simple color calibration device, the chromaticities of typical colors (red, green, blue, and white) are measured as device-independent profile connection space values, called u'v', before and after calibration. In addition, clinical color images are displayed and visual differences are observed. In color calibration, the monitor brightness level has to be set to the rather low value of 80 cd/m2 according to the sRGB standard. As the maximum brightness of most color monitors currently available for medical use is much higher than 80 cd/m2, it does not seem appropriate to use the 80 cd/m2 level for calibration. Therefore, we propose that a new brightness standard be introduced while maintaining the color representation in clinical use. To evaluate the effect of brightness on chromaticity experimentally, the brightness level of two monitors was varied from 80 to 270 cd/m2 and the chromaticity values were compared across brightness levels. As a result, there were no significant differences in the chromaticity diagram when the brightness level was changed. In conclusion, chromaticity is close to the theoretical value after color calibration, and chromaticity does not shift when brightness is changed. The results indicate that an optimized reference brightness level for clinical use could be set at high brightness on current monitors.
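The u'v' coordinates used here follow from XYZ tristimulus values by the standard CIE 1976 UCS formulas; a small sketch (the D65 example values are only illustrative):

```python
def uv_prime(X, Y, Z):
    """CIE 1976 u'v' chromaticity from XYZ tristimulus values."""
    d = X + 15.0 * Y + 3.0 * Z
    return 4.0 * X / d, 9.0 * Y / d

# e.g. chromaticity of a measured white point before/after calibration
print(uv_prime(95.047, 100.0, 108.883))   # D65 white: ~ (0.1978, 0.4683)
```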
Dual-color 3D superresolution microscopy by combined spectral-demixing and biplane imaging.
Winterflood, Christian M; Platonova, Evgenia; Albrecht, David; Ewers, Helge
2015-07-07
Multicolor three-dimensional (3D) superresolution techniques allow important insight into the relative organization of cellular structures. While a number of innovative solutions have emerged, multicolor 3D techniques still face significant technical challenges. In this Letter we provide a straightforward approach to single-molecule localization microscopy imaging in three dimensions and two colors. We combine biplane imaging and spectral-demixing, which eliminates a number of problems, including color cross-talk, chromatic aberration effects, and problems with color registration. We present 3D dual-color images of nanoscopic structures in hippocampal neurons with a 3D compound resolution routinely achieved only in a single color. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Spectral edge: gradient-preserving spectral mapping for image fusion.
Connah, David; Drew, Mark S; Finlayson, Graham D
2015-12-01
This paper describes a novel approach to image fusion for color display. Our goal is to generate an output image whose gradient matches that of the input as closely as possible. We achieve this using a constrained contrast mapping paradigm in the gradient domain, where the structure tensor of a high-dimensional gradient representation is mapped exactly to that of a low-dimensional gradient field which is then reintegrated to form an output. Constraints on output colors are provided by an initial RGB rendering. Initially, we motivate our solution with a simple "ansatz" (educated guess) for projecting higher-D contrast onto color gradients, which we expand to a more rigorous theorem to incorporate color constraints. The solution to these constrained optimizations is closed-form, allowing for simple and hence fast and efficient algorithms. The approach can map any N-D image data to any M-D output and can be used in a variety of applications using the same basic algorithm. In this paper, we focus on the problem of mapping N-D inputs to 3D color outputs. We present results in five applications: hyperspectral remote sensing, fusion of color and near-infrared or clear-filter images, multilighting imaging, dark flash, and color visualization of magnetic resonance imaging diffusion-tensor imaging.
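The first step, forming the structure tensor of the high-dimensional gradient, can be sketched as Z = J^T J from the per-channel image gradients; the paper's exact mapping and reintegration stages are not shown here:

```python
import numpy as np

def structure_tensor(img):
    """Per-pixel 2x2 structure tensor of an N-channel image.

    img : (H, W, N) array; returns (H, W, 2, 2) with Z = J^T J, where J is the
    N x 2 Jacobian of the channels with respect to x and y.
    """
    gy, gx = np.gradient(img.astype(float), axis=(0, 1))
    Z = np.empty(img.shape[:2] + (2, 2))
    Z[..., 0, 0] = (gx * gx).sum(-1)
    Z[..., 0, 1] = Z[..., 1, 0] = (gx * gy).sum(-1)
    Z[..., 1, 1] = (gy * gy).sum(-1)
    return Z
```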
Nonlinear Fusion of Multispectral Citrus Fruit Image Data with Information Contents.
Li, Peilin; Lee, Sang-Heon; Hsu, Hung-Yao; Park, Jae-Sam
2017-01-13
The main issue of vision-based automatic harvesting manipulators is the difficulty of correct fruit identification in images under natural lighting conditions. Mostly, the solution has been based on a linear combination of color components in the multispectral images. However, the results have not reached a satisfactory level. To overcome this issue, this paper proposes a robust nonlinear fusion method to augment the original color image with the synchronized near infrared image. The two images are fused with the Daubechies wavelet transform (DWT) in a multiscale decomposition approach. With DWT, the background noise is reduced and the necessary image features are enhanced by fusing the color contrast of the color components and the homogeneity of the near infrared (NIR) component. The resulting fused color image is classified with a C-means algorithm for reconstruction. The performance of the proposed approach is evaluated with the statistical F measure in comparison to some existing methods using linear combinations of color components. The results show that the fusion of information in different spectral components has the advantage of enhancing image quality, thereby improving classification accuracy in citrus fruit identification under natural lighting conditions.
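A sketch of a DWT fusion of one color band with the NIR image, using PyWavelets; the fusion rule below (average the approximation coefficients, keep the larger-magnitude detail coefficients) is a common choice assumed here, not necessarily the paper's exact rule:

```python
import numpy as np
import pywt

def fuse_dwt(color_band, nir, wavelet="db2", levels=2):
    """Fuse one color band with the synchronized NIR image via Daubechies DWT."""
    ca = pywt.wavedec2(color_band.astype(float), wavelet, level=levels)
    cb = pywt.wavedec2(nir.astype(float), wavelet, level=levels)
    fused = [(ca[0] + cb[0]) / 2.0]                    # approximation: average
    for da, db in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))   # details: max-abs select
    return pywt.waverec2(fused, wavelet)
```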
Quasar Host Galaxies/Neptune Rotation/Galaxy Building Blocks/Hubble Deep Field/Saturn Storm
NASA Technical Reports Server (NTRS)
2001-01-01
Computerized animations simulate a quasar erupting in the core of a normal spiral galaxy, the collision of two interacting galaxies, and the evolution of the universe. Hubble Space Telescope (HST) images show six quasars' host galaxies (including spirals, ellipticals, and colliding galaxies) and six clumps of galaxies approximately 11 billion light years away. A false color time lapse movie of Neptune displays the planet's 16-hour rotation, and the evolution of a storm on Saturn is seen though a video of the planet's rotation. A zoom sequence starts with a ground-based image of the constellation Ursa major and ends with the Hubble Deep Field through progressively narrower and deeper views.
GREAT: a gradient-based color-sampling scheme for Retinex.
Lecca, Michela; Rizzi, Alessandro; Serapioni, Raul Paolo
2017-04-01
Modeling the local color spatial distribution is a crucial step for the algorithms of the Milano Retinex family. Here we present GREAT, a novel, noise-free Milano Retinex implementation based on an image-aware spatial color sampling. For each channel of a color input image, GREAT computes a 2D set of edges whose magnitude exceeds a pre-defined threshold. Then GREAT re-scales the channel intensity of each image pixel, called target, by the average of the intensities of the selected edges weighted by a function of their positions, gradient magnitudes, and intensities relative to the target. In this way, GREAT enhances the input image, adjusting its brightness, contrast and dynamic range. The use of the edges as pixels relevant to color filtering is justified by the importance that edges play in human color sensation. The name GREAT comes from the expression "Gradient RElevAnce for ReTinex," which refers to the threshold-based definition of a gradient relevance map for edge selection and thus for image color filtering.
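A rough sketch of the per-channel GREAT rescaling; the weight used below (gradient magnitude over squared distance) is an assumed stand-in for the paper's weighting of position, gradient magnitude, and relative intensity:

```python
import numpy as np

def great_channel(ch, thr=0.05, eps=1e-6):
    """Sketch of GREAT-style rescaling for one color channel in [0, 1]."""
    gy, gx = np.gradient(ch.astype(float))
    mag = np.hypot(gx, gy)
    ey, ex = np.nonzero(mag > thr)          # relevant edge pixels
    if ey.size == 0:
        return ch.astype(float)             # no edges above threshold
    ei, ew = ch[ey, ex], mag[ey, ex]
    out = np.empty_like(ch, dtype=float)
    for y in range(ch.shape[0]):
        for x in range(ch.shape[1]):        # each pixel is a "target"
            d2 = (ey - y)**2 + (ex - x)**2 + 1.0
            w = ew / d2                     # nearer, stronger edges weigh more
            ref = (w * ei).sum() / (w.sum() + eps)
            out[y, x] = min(ch[y, x] / (ref + eps), 1.0)
    return out
```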
Single Lens Dual-Aperture 3D Imaging System: Color Modeling
NASA Technical Reports Server (NTRS)
Bae, Sam Y.; Korniski, Ronald; Ream, Allen; Fritz, Eric; Shearn, Michael
2012-01-01
In an effort to miniaturize a 3D imaging system, we created two viewpoints in a single objective lens camera. This was accomplished by placing a pair of Complementary Multi-band Bandpass Filters (CMBFs) in the aperture area. Two key characteristics of the CMBFs are that the passbands are staggered, so only one viewpoint is opened at a time when a light band matched to that passband is illuminated, and that the passbands are positioned throughout the visible spectrum, so each viewpoint can render color by taking RGB spectral images. Each viewpoint takes a different spectral image from the other viewpoint, hence yielding a different color image relative to the other. This color mismatch in the two viewpoints could lead to color rivalry, where the human vision system fails to resolve two different colors. The mismatch becomes smaller as the number of passbands in a CMBF increases. (However, the number of passbands is constrained by cost and fabrication technique.) In this paper, a simulation predicting the color mismatch is reported.
A blind dual color images watermarking based on IWT and state coding
NASA Astrophysics Data System (ADS)
Su, Qingtang; Niu, Yugang; Liu, Xianxi; Zhu, Yu
2012-04-01
In this paper, a state-coding-based blind watermarking algorithm is proposed to embed a color image watermark into a color host image. The technique of state coding, which makes the state code of a data set equal to the watermark information to be hidden, is introduced. When embedding the watermark, the R, G, and B components of the color image watermark are embedded into the Y, Cr, and Cb components of the color host image using the Integer Wavelet Transform (IWT) and the rules of state coding. Moreover, the rules of state coding are also used to extract the watermark from the watermarked image without resorting to the original watermark or the original host image. Experimental results show that the proposed watermarking algorithm not only meets the demands of invisibility and robustness of the watermark, but also performs well compared with the other methods considered in this work.
NASA Astrophysics Data System (ADS)
Bachche, Shivaji; Oka, Koichi
2013-06-01
This paper presents a comparative study of various color space models to determine the most suitable one for the detection of green sweet peppers. The images were captured using CCD cameras and infrared cameras and processed using Halcon image processing software. An LED ring around the camera neck was used as artificial lighting to enhance the feature parameters. For color images, the CIELAB, YIQ, YUV, HSI, and HSV color space models were selected for image processing; for infrared images, grayscale was used. Among the color images, the HSV color space model gave the highest percentage of green sweet pepper detection, followed by the HSI model, as both provide information in terms of hue/lightness/chroma or hue/lightness/saturation, which is often more relevant for discriminating the fruit from the image at a specific threshold value. Overlapped fruits and fruits covered by leaves were detected better using the HSV color space model, since the reflection from fruits produced a higher histogram response than the reflection from leaves. The IR 80 optical filter failed to distinguish fruits in the images, as the filter blocks useful feature information. Computation of the 3D coordinates of the recognized green sweet peppers was also conducted, in which the Halcon software provided the location and orientation of the fruits accurately. The depth accuracy along the Z axis was examined: a camera-to-fruit distance of 500 to 600 mm was found to give precise depth estimates when the distance between the two cameras was maintained at 100 mm.
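HSV thresholding of this kind is easy to sketch with OpenCV; the hue/saturation/value bounds, file name, and area cutoff below are hypothetical, and the study's actual thresholds would be tuned to its LED lighting:

```python
import cv2
import numpy as np

LOWER = np.array([35, 60, 60])     # assumed lower HSV bound for "green pepper"
UPPER = np.array([85, 255, 255])   # assumed upper HSV bound

img = cv2.imread("pepper.png")                       # BGR image from the camera
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, LOWER, UPPER)                # pixels inside the green range
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
# Keep blobs large enough to be fruit; centroids feed the 3D localization step.
fruit = [tuple(centroids[i]) for i in range(1, n)
         if stats[i, cv2.CC_STAT_AREA] > 500]
```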
a New Color Correction Method for Underwater Imaging
NASA Astrophysics Data System (ADS)
Bianco, G.; Muzzupappa, M.; Bruno, F.; Garcia, R.; Neumann, L.
2015-04-01
Recovering correct, or at least realistic, colors of underwater scenes is a very challenging issue for imaging techniques, since illumination conditions in a refractive and turbid medium such as the sea are seriously altered. The need to correct the colors of underwater images or videos is an important task required in all image-based applications such as 3D imaging, navigation, documentation, etc. Many image enhancement methods have been proposed in the literature for these purposes. The advantage of these methods is that they do not require knowledge of the medium's physical parameters, while some image adjustments can be performed manually (such as histogram stretching) or automatically, by algorithms based on criteria suggested by computational color constancy methods. One of the most popular criteria is based on the gray-world hypothesis, which assumes that the average of the captured image should be gray. An interesting application of this assumption is performed in the Ruderman opponent color space lαβ, used in a previous work for hue correction of images captured under colored light sources, which allows the luminance component of the scene to be separated from its chromatic components. In this work, we present the first proposal for color correction of underwater images using the lαβ color space. In particular, the chromatic components are changed by moving their distributions around the white point (white balancing), and histogram cutoff and stretching of the luminance component are performed to improve image contrast. The experimental results demonstrate the effectiveness of this method under the gray-world assumption and supposing uniform illumination of the scene. Moreover, due to its low computational cost, it is suitable for real-time implementation.
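A hedged sketch of the correction: convert to lαβ through the usual RGB→LMS→log pipeline, center the chromatic channels on the white point (white balancing), and cutoff-stretch the luminance. The matrices are the standard Ruderman-space ones; the percentile cutoffs are assumptions:

```python
import numpy as np

RGB2LMS = np.array([[0.3811, 0.5783, 0.0402],
                    [0.1967, 0.7244, 0.0782],
                    [0.0241, 0.1288, 0.8444]])
A = np.array([[1/np.sqrt(3), 0, 0],
              [0, 1/np.sqrt(6), 0],
              [0, 0, 1/np.sqrt(2)]]) @ np.array([[1, 1, 1],
                                                 [1, 1, -2],
                                                 [1, -1, 0]])

def correct_lab_space(rgb):
    """Gray-world correction in Ruderman l-alpha-beta space; rgb in (0, 1]."""
    lms = np.log10(np.clip(rgb @ RGB2LMS.T, 1e-6, None))
    lab = lms @ A.T
    lab[..., 1:] -= lab[..., 1:].mean(axis=(0, 1))    # white-balance alpha, beta
    l = lab[..., 0]
    lo, hi = np.percentile(l, (1, 99))                 # histogram cutoff
    lab[..., 0] = (np.clip(l, lo, hi) - lo) / (hi - lo) \
                  * (l.max() - l.min()) + l.min()      # stretch luminance
    lms_back = lab @ np.linalg.inv(A).T
    return np.clip(10.0 ** lms_back @ np.linalg.inv(RGB2LMS).T, 0, 1)
```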
Denoising Algorithm for CFA Image Sensors Considering Inter-Channel Correlation.
Lee, Min Seok; Park, Sang Wook; Kang, Moon Gi
2017-05-28
In this paper, a spatio-spectral-temporal filter considering an inter-channel correlation is proposed for the denoising of a color filter array (CFA) sequence acquired by CCD/CMOS image sensors. Owing to the alternating under-sampled grid of the CFA pattern, the inter-channel correlation must be considered in the direct denoising process. The proposed filter is applied in the spatial, spectral, and temporal domains, considering the spatio-spectral-temporal correlation. First, nonlocal means (NLM) spatial filtering with patch-based difference (PBD) refinement is performed by considering both the intra-channel correlation and inter-channel correlation to overcome the spatial resolution degradation occurring with the alternating under-sampled pattern. Second, a motion-compensated temporal filter that employs inter-channel correlated motion estimation and compensation is proposed to remove the noise in the temporal domain. Then, a motion adaptive detection value controls the ratio of the spatial filter and the temporal filter. The denoised CFA sequence can thus be obtained without motion artifacts. Experimental results for both simulated and real CFA sequences are presented with visual and numerical comparisons to several state-of-the-art denoising methods combined with a demosaicing method. Experimental results confirmed that the proposed framework outperformed the other techniques in terms of the objective criteria and subjective visual perception in CFA sequences.
An instructional guide for leaf color analysis using digital imaging software
Paula F. Murakami; Michelle R. Turner; Abby K. van den Berg; Paul G. Schaberg
2005-01-01
Digital color analysis has become an increasingly popular and cost-effective method utilized by resource managers and scientists for evaluating foliar nutrition and health in response to environmental stresses. We developed and tested a new method of digital image analysis that uses Scion Image or NIH image public domain software to quantify leaf color. This...
Fuzzy Logic-Based Filter for Removing Additive and Impulsive Noise from Color Images
NASA Astrophysics Data System (ADS)
Zhu, Yuhong; Li, Hongyang; Jiang, Huageng
2017-12-01
This paper presents an efficient filter method based on fuzzy logic for adaptively removing additive and impulsive noise from color images. The proposed filter comprises two parts: noise detection and noise removal filtering. In the detection part, the fuzzy peer group concept is applied to determine what type of noise has been added to each pixel of the corrupted image. In the filtering part, impulse noise is removed by a vector median filter in the CIELAB color space, and an optimal fuzzy filter is introduced to reduce Gaussian noise; together they remove mixed Gaussian-impulse noise from color images. Experimental results on several color images prove the efficacy of the proposed fuzzy filter.
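The vector median filter used for the impulse part picks, within a window, the member pixel minimizing the summed distance to all others; a minimal sketch, with distances taken as Euclidean in whatever space the window is given in (here CIELAB, per the paper):

```python
import numpy as np

def vector_median(window):
    """Vector median of a window of color vectors, shape (k, k, 3)."""
    pts = window.reshape(-1, 3).astype(float)
    # Summed pairwise distances; the minimizer is the vector median.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1).sum(axis=1)
    return pts[np.argmin(d)]
```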
Compression of color-mapped images
NASA Technical Reports Server (NTRS)
Hadenfeldt, A. C.; Sayood, Khalid
1992-01-01
In a standard image coding scenario, pixel-to-pixel correlation nearly always exists in the data, especially if the image is a natural scene. This correlation is what allows predictive coding schemes (e.g., DPCM) to perform efficient compression. In a color-mapped image, the values stored in the pixel array are no longer directly related to the pixel intensity. Two color indices which are numerically adjacent (close) may point to two very different colors. The correlation still exists, but only via the colormap. This fact can be exploited by sorting the color map to reintroduce the structure. The sorting of colormaps is studied and it is shown how the resulting structure can be used in both lossless and lossy compression of images.
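A minimal sketch of the idea, assuming a simple luminance key for the sort (the paper studies sorting methods in more depth): reorder the palette and remap the index image so that numerically close indices point to perceptually similar colors, which makes DPCM-style prediction on the index array effective again:

```python
import numpy as np

def sort_colormap(palette, indexed):
    """Reorder a palette by luminance and remap the index image accordingly.

    palette : (K, 3) RGB colormap; indexed : 2D array of palette indices.
    """
    luma = palette @ np.array([0.299, 0.587, 0.114])   # simple luminance key
    order = np.argsort(luma)                            # new palette order
    inverse = np.empty_like(order)
    inverse[order] = np.arange(len(order))              # old index -> new index
    return palette[order], inverse[indexed]
```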
Snow Storm Blankets Southeastern U.S.
NASA Technical Reports Server (NTRS)
2002-01-01
A new year's storm brought heavy snow to portions of the southeastern United States, with some regions receiving more than a foot in less than two days. By Friday, January 4, 2002, the skies had cleared, and MODIS captured this false-color image showing the extent of the snowfall. Snow cover is red, and extends all the way from Alabama (lower left), up through Georgia, South Carolina, North Carolina, Virginia, and Maryland, including the southern reaches of the Delmarva Peninsula (upper right). Beneath some clouds in West Virginia (top center), snow is also visible on the Allegheny Mountains and the Appalachian Plateau, although it did not come from the same storm. Though red isn't the color we associate with snow, scientists often find 'false-color' images more useful than 'true-color' images in certain situations. True-color images are images in which the satellite data are made to look like what our eyes would see, using a combination of red, green, and blue. In a true-color image of this scene, cloud and snow would appear almost identical: both would be very bright white, and would be hard to distinguish from each other. However, at near-infrared wavelengths of light, snow cover absorbs sunlight and therefore appears much darker than clouds. So a false-color image in which one visible wavelength of the data is colored red, and different near-infrared wavelengths are colored green and blue, helps show the snow cover most clearly.
Color analysis of the human airway wall
NASA Astrophysics Data System (ADS)
Gopalakrishnan, Deepa; McLennan, Geoffrey; Donnelley, Martin; Delsing, Angela; Suter, Melissa; Flaherty, Dawn; Zabner, Joseph; Hoffman, Eric A.; Reinhardt, Joseph M.
2002-04-01
A bronchoscope can be used to examine the mucosal surface of the airways for abnormalities associated with a variety of lung diseases. The diagnosis of these abnormalities through the process of bronchoscopy is based, in part, on changes in airway wall color. Therefore it is important to characterize the normal color inside the airways. We propose a standardized method to calibrate the bronchoscopic imaging system and to tabulate the normal colors of the airway. Our imaging system consists of a Pentium PC and video frame grabber, coupled with a true color bronchoscope. The calibration procedure uses 24 standard color patches. Images of these color patches at three different distances (1, 1.5, and 2 cm) were acquired using the bronchoscope in a darkened room, to assess repeatability and sensitivity to illumination. The images from the bronchoscope are in a device-dependent Red-Green-Blue (RGB) color space, which was converted to a tri-stimulus image and then into a device-independent color space sRGB image by a fixed polynomial transformation. Images were acquired from five normal human volunteer subjects, two cystic fibrosis (CF) patients and one normal heavy smoker subject. The hue and saturation values of regions within the normal airway were tabulated and these values were compared with the values obtained from regions within the airways of the CF patients and the normal heavy smoker. Repeated measurements of the same region in the airways showed no measurable change in hue or saturation.
Super-resolution imaging applied to moving object tracking
NASA Astrophysics Data System (ADS)
Swalaganata, Galandaru; Ratna Sulistyaningrum, Dwi; Setiyono, Budi
2017-10-01
Moving object tracking in a video is a method used to detect and analyze changes that occur in an object being observed. High visual quality and precise localization of the tracked target are desired in a modern tracking system. The tracked object does not always appear clearly, which makes the tracking result less precise; the causes include low-quality video, system noise, small object size, and other factors. In order to improve the precision of the tracked object, especially for small objects, we propose a two-step solution that integrates a super-resolution technique into the tracking approach. The first step is super-resolution imaging applied to the frame sequence, performed on cropped regions of several frames or of every frame. The second step is tracking on the resulting super-resolution images. Super-resolution is a technique for obtaining high-resolution images from low-resolution images. In this research, a single-frame super-resolution technique is proposed for the tracking approach, because single-frame super-resolution has the advantage of fast computation. The method used for tracking is Camshift. The advantage of Camshift is its simple calculation based on an HSV color histogram, which remains usable when the color of the object varies. The computational complexity and large memory requirements of super-resolution and tracking were reduced, and the precision of the tracked target was good. Experiments showed that integrating super-resolution imaging into the tracking technique can track the object precisely under various backgrounds, shape changes of the object, and good lighting conditions.
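Camshift itself is available in OpenCV; a sketch of the tracking loop, with the file name, initial window, and histogram settings as placeholders, and the super-resolution step indicated only by a comment:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("video.mp4")                 # hypothetical input clip
ok, frame = cap.read()
track = (200, 150, 40, 40)                          # initial (x, y, w, h) of target
x, y, w, h = track
roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([roi], [0], None, [16], [0, 180])   # hue histogram of target
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # (A single-frame super-resolution step would upscale `frame` here.)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    prob = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    box, track = cv2.CamShift(prob, track, crit)    # adapts window size and angle
```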
Wang, Jian; Kang, Chunsong; Feng, Tinghua; Xue, Jiping; Shi, Kailing; Li, Tingting; Liu, Xiaofang; Wang, Yu
2013-05-01
The purpose of this study was to investigate the effects of ultrasonic instrument gain, transducer frequency, and depth on the color variety and color filling of radiofrequency ultrasonic local estimators (RULES) images, which provide a specific physical representation of liquid-containing lesions, in order to find the optimal settings for the clinical application of RULES to such lesions. Changing the instrument gain, transducer frequency, and depth affected the color filling and color variety of 21 pathologically confirmed liquid-containing lesion images analyzed by RULES. Blue fill dominated the RULES images representing the liquid-containing lesions. A frequency of 12.5 MHz led to red and green colors along the inner edges of the liquid-containing lesions. Changing the gain resulted in significantly different blue color filling, which was highest at a gain of 90 to 100. Changing the frequency also significantly changed the blue color filling, with the highest filling occurring at 12.5 MHz. Changing the depth did not affect the blue color filling. The liquid components of the lesions may thus be identified by their characteristic manifestations in RULES, where color variety is affected by transducer frequency, and the blue color filling that represents liquid-containing lesions is affected by frequency and gain. Copyright © 2012. Published by Elsevier GmbH.
NASA Astrophysics Data System (ADS)
Unaldi, Numan; Asari, Vijayan K.; Rahman, Zia-ur
2009-05-01
Recently we proposed a wavelet-based dynamic range compression algorithm to improve the visual quality of digital images captured from high dynamic range scenes with non-uniform lighting conditions. The fast image enhancement algorithm that provides dynamic range compression, while preserving the local contrast and tonal rendition, is also a good candidate for real time video processing applications. Although the colors of the enhanced images produced by the proposed algorithm are consistent with the colors of the original image, the proposed algorithm fails to produce color constant results for some "pathological" scenes that have very strong spectral characteristics in a single band. The linear color restoration process is the main reason for this drawback. Hence, a different approach is required for the final color restoration process. In this paper the latest version of the proposed algorithm, which deals with this issue is presented. The results obtained by applying the algorithm to numerous natural images show strong robustness and high image quality.
Mobile image based color correction using deblurring
NASA Astrophysics Data System (ADS)
Wang, Yu; Xu, Chang; Boushey, Carol; Zhu, Fengqing; Delp, Edward J.
2015-03-01
Dietary intake, the process of determining what someone eats during the course of a day, provides valuable insights for mounting intervention programs for the prevention of many chronic diseases such as obesity and cancer. The goal of the Technology Assisted Dietary Assessment (TADA) System, developed at Purdue University, is to automatically identify and quantify foods and beverages consumed, using food images acquired with a mobile device. Color correction serves as a critical step to ensure accurate food identification and volume estimation. We make use of a specifically designed color checkerboard (i.e., a fiducial marker) to calibrate the imaging system so that the variations in food appearance under different lighting conditions can be determined. In this paper, we propose an image quality enhancement technique that combines image deblurring and color correction. The contribution consists of introducing an automatic camera-shake removal method using a saliency map and improving the polynomial color correction model using the LMS color space.
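A polynomial color correction of the kind mentioned can be fitted by least squares from the fiducial patches; this sketch works directly in RGB with a second-order expansion, whereas the paper applies its model in the LMS color space:

```python
import numpy as np

def fit_poly_cc(measured, reference):
    """Fit a second-order polynomial color correction from fiducial patches.

    measured, reference : (N, 3) patch colors as captured and as known.
    Returns a matrix M such that expand(rgb) @ M approximates the reference.
    """
    def expand(c):
        r, g, b = c[:, 0], c[:, 1], c[:, 2]
        return np.stack([np.ones_like(r), r, g, b,
                         r*g, r*b, g*b, r*r, g*g, b*b], axis=1)
    M, *_ = np.linalg.lstsq(expand(measured), reference, rcond=None)
    return M, expand

# usage: M, expand = fit_poly_cc(patches_cam, patches_truth)
#        corrected = expand(pixels) @ M
```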
NASA Technical Reports Server (NTRS)
Denman, Kenneth L.; Abbott, Mark R.
1994-01-01
We have selected square subareas (110 km on a side) from coastal zone color scanner (CZCS) and advanced very high resolution radiometer (AVHRR) images for 1981 in the California Current region off northern California for which we could identify sequences of cloud-free data over periods of days to weeks. We applied a two-dimensional fast Fourier transformation to images after median filtering, (x, y) plane removal, and cosine tapering. We formed autospectra and coherence spectra as functions of a scalar wavenumber. Coherence estimates between pairs of images were plotted against time separation between images for several wide wavenumber bands to provide a temporal lagged coherence function. The temporal rate of loss of correlation (decorrelation time scale) in surface patterns provides a measure of the rate of pattern change or evolution as a function of spatial dimension. We found that patterns evolved (or lost correlation) approximately twice as rapidly in upwelling jets as in the 'quieter' regions between jets. The rapid evolution of pigment patterns (lifetime of about 1 week or less for scales of 50-100 km) ought to hinder biomass transfer to zooplankton predators compared with phytoplankton patches that persist for longer times. We found no significant differences between the statistics of CZCS and AVHRR images (spectral shape or rate of decorrelation). In addition, in two of the three areas studied, the peak correlation between AVHRR and CZCS images from the same area occurred at zero lag, indicating that the patterns evolved simultaneously. In the third area, maximum coherence between thermal and pigment patterns occurred when pigment images lagged thermal images by 1-2 days, mirroring the expected lag of high pigment behind low temperatures (and high nutrients) in recently upwelled water. We conclude that in dynamic areas such as coastal upwelling systems, the phytoplankton cells (identified by pigment color patterns) behave largely as passive scalars at the mesoscale and that growth, death, and sinking of phytoplankton collectively play at most a marginal role in determining the spectral statistics of the pigment patterns.
SUBARU near-infrared multi-color images of Class II Young Stellar Object, RNO91
NASA Astrophysics Data System (ADS)
Mayama, Satoshi; Tamura, Motohide; Hayashi, Masahiko
RNO91 is a Class II source currently in a transition phase between a protostar and a main-sequence star. It is known as a source of complex molecular outflows. Previous studies suggested that RNO91 is associated with a reflection nebula, a CO outflow, shock-excited H2 emission, and a disk-type structure. However, the geometry of RNO91, especially its inner region, has not been well established. High-resolution imaging is needed to understand the nature of RNO91 and its interaction with the outflow. Furthermore, RNO91 is an important candidate for studying YSOs in a transition phase. We therefore conducted near-infrared imaging observations of RNO91 with the infrared camera CIAO mounted on the Subaru 8.2 m Telescope. We present JHK-band and optical images that resolve a complex asymmetrical circumstellar structure. We examined the color of the RNO91 nebula and compared the geometry of the system suggested by our data with that already proposed on the basis of other studies. Our main results are as follows. 1. In the J band and the optical, several bluer clumps are detected, aligned nearly perpendicular to the outflow axis. 2. The NIR images show significant halo emission within 2" of the peak position, while less halo emission is seen in the optical image. The nebula appears to become more circular and more diffuse with increasing wavelength. The power-law dependence of the radial surface brightness profile is shallower than that of normal stars, indicating that RNO91 is still an optically thick object. We suggest that the halo emission is NIR light scattered by an optically thick disk or envelope surrounding RNO91. 3. In the shorter-wavelength images, the nebula appears more extended (2.3" long) to the southwest. This extended emission might trace the bottom of an outflow emanating toward the southwest. 4. A color composite image of RNO91 reveals that the emission extending to the north and to the east through RNO91 can be interpreted as part of a cavity wall seen relatively edge-on. The northern ridge is 11" long and the eastern ridge is 7" long.
Animal Detection in Natural Images: Effects of Color and Image Database
Zhu, Weina; Drewes, Jan; Gegenfurtner, Karl R.
2013-01-01
The visual system has a remarkable ability to extract categorical information from complex natural scenes. In order to elucidate the role of low-level image features in the recognition of objects in natural scenes, we recorded saccadic eye movements and event-related potentials (ERPs) in two experiments in which human subjects had to detect animals in previously unseen natural images. We used a new natural image database (ANID) that is free of some of the potential artifacts that have plagued the widely used COREL images. Color and grayscale images picked from the ANID and COREL databases were used. In all experiments, color images induced a greater N1 EEG component at earlier time points than grayscale images. We suggest that this influence of color in animal detection may be masked by later processes when measuring reaction times. The ERP results of the go/nogo and forced choice tasks were similar to those reported earlier. The non-animal stimuli induced a bigger N1 than animal stimuli in both the COREL and ANID databases. This result indicates that ultra-fast processing of animal images is possible irrespective of the particular database. With the ANID images, the difference between color and grayscale images is more pronounced than with the COREL images. The earlier use of the COREL images might have led to an underestimation of the contribution of color. Therefore, we conclude that the ANID image database is better suited for the investigation of the processing of natural scenes than other databases commonly used. PMID:24130744
Clinical skin imaging using color spatial frequency domain imaging (Conference Presentation)
NASA Astrophysics Data System (ADS)
Yang, Bin; Lesicko, John; Moy, Austin J.; Reichenberg, Jason; Tunnell, James W.
2016-02-01
Skin diseases are typically associated with underlying biochemical and structural changes relative to normal tissue, which alter the optical properties of skin lesions, such as tissue absorption and scattering. Although widely used in dermatology clinics, conventional dermatoscopes do not have the ability to selectively image tissue absorption and scattering, which may limit their diagnostic power. Here we report a novel clinical skin imaging technique called color spatial frequency domain imaging (cSFDI), which enhances contrast by rendering color spatial frequency domain (SFD) images at high spatial frequency. Moreover, by tuning the spatial frequency, we can obtain both absorption-weighted and scattering-weighted images. We developed a handheld imaging system specifically for clinical skin imaging; the flexible configuration of the system allows better access to skin lesions in hard-to-reach regions. A total of 48 lesions from 31 patients were imaged under 470 nm, 530 nm, and 655 nm illumination at a spatial frequency of 0.6 mm^-1. The SFD reflectance images at 470 nm, 530 nm, and 655 nm were assigned to the blue (B), green (G), and red (R) channels to render a color SFD image. Our results indicated that color SFD images at f = 0.6 mm^-1 revealed properties that are not seen in standard color images: structural features were enhanced and absorption features were reduced, which helped to identify the sources of the contrast. This imaging technique provides additional insight into skin lesions and may better assist clinical diagnosis.
Data augmentation-assisted deep learning of hand-drawn partially colored sketches for visual search
Muhammad, Khan; Baik, Sung Wook
2017-01-01
In recent years, image databases have been growing at exponential rates, making their management, indexing, and retrieval very challenging. Typical image retrieval systems rely on sample images as queries. However, in the absence of sample query images, hand-drawn sketches are also used. The recent adoption of touch screen input devices makes it very convenient to quickly draw shaded sketches of objects to be used for querying image databases. This paper presents a mechanism to provide access to visual information based on users’ hand-drawn partially colored sketches using touch screen devices. A key challenge for sketch-based image retrieval systems is to cope with the inherent ambiguity in sketches due to the lack of colors, textures, shading, and drawing imperfections. To cope with these issues, we propose to fine-tune a deep convolutional neural network (CNN) using an augmented dataset to extract features from partially colored hand-drawn sketches for query specification in a sketch-based image retrieval framework. The large augmented dataset contains natural images, edge maps, hand-drawn sketches, de-colorized, and de-texturized images, which allow the CNN to effectively model visual contents presented to it in a variety of forms. The deep features extracted from the CNN allow retrieval of images using both sketches and full color images as queries. We also evaluated the role of partial coloring or shading in sketches in improving retrieval performance. The proposed method was tested on two large datasets for sketch recognition and sketch-based image retrieval and achieved better classification and retrieval performance than many existing methods. PMID:28859140
NASA Astrophysics Data System (ADS)
Jiang, Hongquan; Zhao, Yalin; Gao, Jianmin; Gao, Zhiyong
2017-06-01
The radiographic testing (RT) images of a steam turbine manufacturing enterprise have the characteristics of low gray level, low contrast, and blurriness, which lead to substandard image quality and make it difficult for human eyes to detect and evaluate defects. This study proposes an adaptive pseudo-color enhancement method for weld radiographic images based on the hue, saturation, and intensity (HSI) color space and the self-transformation of pixels to solve these problems. First, the pixel self-transformation is applied to the pixel values of the original RT image. The function values after the self-transformation are assigned to the HSI components in the HSI color space. Thereafter, the average intensity of the enhanced image is adaptively adjusted to 0.5 according to the intensity of the original image; the hue range and interval can also be adjusted according to personal preference. Finally, the adjusted HSI components are transformed into the red, green, and blue color space for display. Numerous weld radiographic images from a steam turbine manufacturing enterprise were used to validate the proposed method. The experimental results show that the proposed pseudo-color enhancement method improves image definition and makes the target and background areas distinct in weld radiographic images, so the enhanced images are more conducive to defect recognition. Moreover, images enhanced with the proposed method conform to the visual properties of the human eye, and the effectiveness of defect recognition and evaluation can be ensured.
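A hedged sketch of such a pseudo-color mapping, using HSV as a stand-in for HSI and a simple normalization as the pixel self-transformation (both assumptions, not the paper's exact functions):

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb

def pseudo_color(rt, hue_range=(0.0, 0.66)):
    """Pseudo-color a grayscale weld RT image via an HSV-style mapping."""
    g = rt.astype(float)
    g = (g - g.min()) / (g.max() - g.min() + 1e-9)   # assumed self-transformation
    v = g + (0.5 - g.mean())                          # adapt mean intensity to 0.5
    h0, h1 = hue_range                                # adjustable hue range
    hsv = np.stack([h0 + (h1 - h0) * g,               # hue from gray level
                    np.ones_like(g),                  # full saturation
                    np.clip(v, 0, 1)], axis=-1)
    return hsv_to_rgb(hsv)                            # back to RGB for display
```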
UltraColor: a new gamut-mapping strategy
NASA Astrophysics Data System (ADS)
Spaulding, Kevin E.; Ellson, Richard N.; Sullivan, James R.
1995-04-01
Many color calibration and enhancement strategies exist for digital systems. Typically, these approaches are optimized to work well with one class of images but may produce unsatisfactory results for other types of images. For example, a colorimetric strategy may work well when printing photographic scenes but may give inferior results for business graphics because of device color gamut limitations. On the other hand, a color enhancement strategy that works well for business graphics may distort the color reproduction of skin tones and other important photographic colors. This paper describes a method for specifying different color mapping strategies in various regions of color space, while providing a mechanism for smooth transitions between the different regions. The method involves a two-step process: (1) constraints are applied to some subset of the points in the input color space, explicitly specifying the color mapping function at those points; (2) the color mapping for the remainder of the color values is then determined using an interpolation algorithm that preserves continuity and smoothness. The interpolation algorithm that was developed is based on a computer graphics morphing technique. This method was used to develop the UltraColor gamut mapping strategy, which combines a colorimetric mapping for colors with low saturation levels with a color enhancement technique for colors with high saturation levels. The result is a single color transformation that produces superior quality for all classes of imagery. UltraColor has been incorporated in several models of Kodak printers, including the Kodak ColorEase PS and the Kodak XLS 8600 PS thermal dye sublimation printers.
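The core idea, blending two mappings with a smooth weight so the transition between color-space regions is continuous, can be sketched as below. The sigmoid weight and saturation measure are illustrative assumptions, not the morphing-based interpolation the paper actually develops.

```python
# Hedged sketch: smoothly combine a colorimetric mapping (low saturation)
# with an enhancement mapping (high saturation). UltraColor's actual
# morphing-based interpolation is not reproduced.
import numpy as np

def blend_mappings(rgb, colorimetric, enhanced, s0=0.3, k=10.0):
    """rgb: (..., 3) in [0, 1]; colorimetric/enhanced: callables on rgb."""
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    sat = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-9), 0.0)
    w = 1.0 / (1.0 + np.exp(-k * (sat - s0)))     # smooth 0 -> 1 around s0
    w = w[..., None]
    return (1.0 - w) * colorimetric(rgb) + w * enhanced(rgb)
```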
NASA Astrophysics Data System (ADS)
Bell, J. F.; Fraeman, A. A.; Grossman, L.; Herkenhoff, K. E.; Sullivan, R. J.; Mer/Athena Science Team
2010-12-01
The Mars Exploration Rovers Spirit and Opportunity have enabled more than six and a half years of detailed, in situ field study of two specific landing sites and traverse paths within Gusev crater and Meridiani Planum, respectively. Much of the study has relied on high-resolution, multispectral imaging of fine-grained regolith components--the dust, sand, cobbles, clasts, and other components collectively referred to as "soil"--at both sites using the rovers' Panoramic Camera (Pancam) and Microscopic Imager (MI) imaging systems. As of early September 2010, the Pancam systems have acquired more than 1300 and 1000 "13 filter" multispectral imaging sequences of surfaces in Gusev and Meridiani, respectively, with each sequence consisting of co-located images at 11 unique narrowband wavelengths between 430 nm and 1009 nm and having a maximum spatial resolution of about 500 microns per pixel. The MI systems have acquired more than 5900 and 6500 monochromatic images, respectively, at about 31 microns per pixel scale. Pancam multispectral image cubes are calibrated to radiance factor (I/F, where I is the measured radiance and π*F is the incident solar irradiance) using observations of the onboard calibration targets, and then corrected to relative reflectance (assuming Lambertian photometric behavior) for comparison with laboratory rock and mineral measurements. Specifically, Pancam spectra can be used to detect the possible presence of some iron-bearing minerals (e.g., some ferric oxides/oxyhydroxides and pyroxenes) as well as structural water or OH in some hydrated alteration products, providing important inputs on the choice of targets for more quantitative compositional and mineralogic follow-up using the rover's other in situ and remote sensing analysis tools. Pancam 11-band spectra are being analyzed using a variety of standard as well as specifically-tailored analysis methods, including color ratio and band depth parameterizations, spectral similarity and principal components clustering, and simple visual inspection based on correlations with false color unit boundaries and textural variations seen in both Pancam and MI imaging. Approximately 20 distinct spectral classes of fine-grained surface components were identified at each site based on these methods. In this presentation we describe these spectral classes, their geologic and textural context and distribution based on supporting high-res MI and other Pancam imaging, and their potential compositional/mineralogic interpretations based on a variety of rover data sets.
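The radiance-factor definition quoted in the abstract is compact enough to restate in code. The snippet below only rearranges that definition (I/F, with π·F equal to the incident solar irradiance); the onboard-calibration-target correction and the Lambertian relative-reflectance step are not shown.

```python
# I/F, where I is the measured radiance and pi*F is the incident solar
# irradiance, rearranges to I/F = pi * I / E_sun. Units must match
# (e.g., W m^-2 sr^-1 nm^-1 radiance against W m^-2 nm^-1 irradiance).
import numpy as np

def radiance_factor(I, E_sun):
    return np.pi * np.asarray(I) / np.asarray(E_sun)
```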
NASA Astrophysics Data System (ADS)
Solli, Martin; Lenz, Reiner
In this paper we describe how to include high-level semantic information, such as aesthetics and emotions, into Content Based Image Retrieval. We present a color-based, emotion-related image descriptor that can be used for describing the emotional content of images. The color emotion metric used is derived from psychophysical experiments and based on three variables: activity, weight, and heat. It was originally designed for single colors, but recent research has shown that the same emotion estimates can be applied in the retrieval of multi-colored images. Here we describe a new approach, based on the assumption that perceived color emotions in images are mainly affected by homogeneous regions, defined by the emotion metric, and by transitions between regions. RGB coordinates are converted to emotion coordinates, and for each emotion channel, statistical measurements of gradient magnitudes within a stack of low-pass filtered images are used for finding interest points corresponding to homogeneous regions and transitions between regions. Emotion characteristics are derived for patches surrounding each interest point and saved in a bag-of-emotions that, for instance, can be used for retrieving images based on emotional content.
2015-07-23
This image from NASA's New Horizons highlights the contrasting appearance of the two worlds. Pluto and Charon are shown in enhanced color in this image, which is the highest-resolution color image of the pair so far returned to Earth by New Horizons. It was taken at 06:49 UT on July 14, 2015, five hours before Pluto's closest approach, from a range of 150,000 miles (250,000 kilometers), with the spacecraft's Ralph instrument. Charon is mostly gray, with a dark reddish polar cap, while Pluto shows a wide variety of subtle color variations, including yellowish patches on the north polar cap and subtly contrasting colors for the two halves of Pluto's "heart," informally named Tombaugh Regio, seen in the upper right quadrant of the image. In order to fit Pluto and Charon in the same frame in their correct relative positions, the image has been rotated so the north pole of each body points toward the upper left. The image was made with the blue, red, and near-infrared color filters of Ralph's Multispectral Visible Imaging Camera, and shows colors that are similar, but not identical, to what would be seen by the human eye, which is sensitive to a narrower range of wavelengths. http://photojournal.jpl.nasa.gov/catalog/PIA19856
NASA Astrophysics Data System (ADS)
Chesterman, Frédérique; Manssens, Hannah; Morel, Céline; Serrell, Guillaume; Piepers, Bastian; Kimpe, Tom
2017-03-01
Medical displays for primary diagnosis are calibrated to the DICOM GSDF, but there is no accepted standard today that describes how display systems for medical modalities involving color should be calibrated. Recently the Color Standard Display Function (CSDF), a calibration that uses the CIEDE2000 color difference metric to make a display as perceptually linear as possible, has been proposed. In this work we present the results of a first observer study set up to investigate the interpretation accuracy of a rainbow color scale when a medical display is calibrated to CSDF versus DICOM GSDF, and a second observer study set up to investigate the detectability of color differences when a medical display is calibrated to CSDF, DICOM GSDF, and sRGB. The results of the first study indicate that the error when interpreting a rainbow color scale is lower for CSDF than for DICOM GSDF, with a statistically significant difference (Mann-Whitney U test) for eight out of twelve observers. The results correspond to what is expected based on CIEDE2000 color differences between consecutive colors along the rainbow color scale for both calibrations. The results of the second study indicate a statistically significant improvement in detecting color differences when a display is calibrated to CSDF compared to DICOM GSDF, and a (non-significant) trend indicating improved detection for CSDF compared to sRGB. To our knowledge this is the first work that shows the added value of a perceptual color calibration method (CSDF) in interpreting medical color images using the rainbow color scale. Improved interpretation of the rainbow color scale may be beneficial in the area of quantitative medical imaging (e.g., PET SUV, quantitative MRI and CT, and Doppler US), where a medical specialist needs to interpret quantitative medical data based on a color scale and/or detect subtle color differences, and where improved interpretation accuracy and improved detection of color differences may contribute to a better diagnosis. Our results indicate that for diagnostic applications involving both grayscale and color images, CSDF should be chosen over DICOM GSDF and sRGB, as it assures excellent detection for color images while maintaining DICOM GSDF behavior for grayscale images.
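The link the study draws between CIEDE2000 step sizes along a rainbow scale and interpretation error can be probed in a few lines. The jet colormap below is a generic rainbow stand-in, not the scale used in the study.

```python
# Hedged sketch: CIEDE2000 differences between consecutive entries of a
# rainbow color scale. Uneven steps predict uneven interpretability,
# which is what a perceptual (CSDF-style) calibration aims to mitigate.
import numpy as np
from matplotlib import cm
from skimage.color import rgb2lab, deltaE_ciede2000

scale = cm.jet(np.linspace(0, 1, 256))[:, :3]          # generic rainbow scale
lab = rgb2lab(scale.reshape(1, -1, 3)).reshape(-1, 3)  # sRGB -> CIELAB
steps = deltaE_ciede2000(lab[:-1], lab[1:])            # consecutive differences
print(f"min step {steps.min():.2f}, max step {steps.max():.2f}")
```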
Halftoning and Image Processing Algorithms
1999-02-01
screening techniques with the quality advantages of error diffusion in the halftoning of color maps, and on color image enhancement for halftone ... image quality. Our goals in this research were to advance the understanding in image science for our new halftone algorithm and to contribute to ... image retrieval and noise theory for such imagery. In the field of color halftone printing, research was conducted on deriving a theoretical model of our
Color Image Restoration Using Nonlocal Mumford-Shah Regularizers
NASA Astrophysics Data System (ADS)
Jung, Miyoun; Bresson, Xavier; Chan, Tony F.; Vese, Luminita A.
We introduce several color image restoration algorithms based on the Mumford-Shah model and nonlocal image information. The standard Ambrosio-Tortorelli and Shah models are defined to work in a small local neighborhood, which is sufficient for denoising smooth regions with sharp boundaries. However, textures are not local in nature and require semi-local/non-local information to be denoised efficiently. Inspired by recent work (the NL-means of Buades, Coll, and Morel and the NL-TV of Gilboa and Osher), we extend the standard Ambrosio-Tortorelli and Shah approximations of the Mumford-Shah functional to work with nonlocal information, for better restoration of fine structures and textures. We present several applications of the proposed nonlocal MS regularizers in image processing, such as color image denoising, color image deblurring in the presence of Gaussian or impulse noise, color image inpainting, and color image super-resolution. In the formulation of nonlocal variational models for image deblurring with impulse noise, we propose an efficient preprocessing step for the computation of the weight function w. In all the applications, the proposed nonlocal regularizers produce superior results over the local ones, especially in image inpainting with large missing regions. Experimental results and comparisons between the proposed nonlocal methods and the local ones are shown.
NASA Astrophysics Data System (ADS)
Yao, Juncai; Liu, Guizhong
2017-03-01
In order to achieve a higher image compression ratio and improve the visual perception of the decompressed image, a novel color image compression scheme based on the contrast sensitivity characteristics of the human visual system (HVS) is proposed. In the proposed scheme, the image is first converted into the YCrCb color space and divided into sub-blocks. Afterwards, the discrete cosine transform is carried out for each sub-block, and three quantization matrices, built by combining the contrast sensitivity characteristics of the HVS, are used to quantize the frequency spectrum coefficients. The Huffman algorithm is used to encode the quantized data. The inverse process involves decompression and matching to reconstruct the decompressed color image. Simulations are carried out for two color images. The results show that the average structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) at comparable compression ratios could be increased by 2.78% and 5.48%, respectively, compared with joint photographic experts group (JPEG) compression. These results indicate that the proposed compression algorithm is feasible and effective, achieving a higher compression ratio while maintaining encoding and image quality, and can fully meet the needs of storage and transmission of color images in daily life.
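The quantization stage at the heart of such a scheme can be sketched as follows. The standard JPEG luminance table stands in for the paper's three HVS-derived matrices, which the abstract does not reproduce.

```python
# Hedged sketch: 8x8 block DCT plus quantization, the stage where the
# paper's contrast-sensitivity-weighted matrices act. The table below is
# the stock JPEG luminance matrix, used here only as a placeholder.
import numpy as np
from scipy.fft import dctn, idctn

Q = np.array([[16,11,10,16,24,40,51,61],[12,12,14,19,26,58,60,55],
              [14,13,16,24,40,57,69,56],[14,17,22,29,51,87,80,62],
              [18,22,37,56,68,109,103,77],[24,35,55,64,81,104,113,92],
              [49,64,78,87,103,121,120,101],[72,92,95,98,112,100,103,99]])

def quantize_block(block):            # block: 8x8 floats in 0..255
    c = dctn(block - 128.0, norm="ortho")
    return np.round(c / Q)            # integers passed on to Huffman coding

def dequantize_block(q):
    return idctn(q * Q, norm="ortho") + 128.0
```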
NASA Astrophysics Data System (ADS)
Savino, Michael; Comins, Neil Francis
2015-01-01
The aim of this study is to develop a mathematical algorithm for quantifying the perceived colors of stars as viewed from the surface of the Earth across a wide range of possible atmospheric conditions. These results are then used to generate color-corrected stellar images. As a first step, optics corrections are calculated to adjust for the CCD bias and the transmission curves of any filters used during image collection. Next, corrections for atmospheric scattering and absorption are determined for the atmospheric conditions during imaging by utilizing the Simple Model of the Atmospheric Radiative Transfer of Sunshine (SMARTS). These two sets of corrections are then applied to a series of reference spectra, which are then weighted against the CIE 1931 XYZ color matching functions before being mapped onto the sRGB color space, in order to determine a series of reference colors against which the original image will be compared. Each pixel of the image is then re-colored based upon its closest corresponding reference spectrum so that the final image output closely matches, in color, what would be seen by the human eye above the Earth's atmosphere. By comparing against the reference spectrum, the stellar classification for each star in the image can also be determined. An observational experiment is underway to test the accuracy of these calculations.
NASA Technical Reports Server (NTRS)
1986-01-01
This false-color view of the rings of Uranus was made from images taken by Voyager 2 on Jan. 21, 1986, from a distance of 4.17 million kilometers (2.59 million miles). All nine known rings are visible here; the somewhat fainter, pastel lines seen between them are contributed by the computer enhancement. Six 15-second narrow-angle images were used to extract color information from the extremely dark and faint rings. Two images each in the green, clear and violet filters were added together and averaged to find the proper color differences between the rings. The final image was made from these three color averages and represents an enhanced, false-color view. The image shows that the brightest, or epsilon, ring at top is neutral in color, with the fainter eight other rings showing color differences between them. Moving down, toward Uranus, we see the delta, gamma and eta rings in shades of blue and green; the beta and alpha rings in somewhat lighter tones; and then a final set of three, known simply as the 4, 5 and 6 rings, in faint off-white tones. Scientists will use this color information to try to understand the nature and origin of the ring material. The resolution of this image is approximately 40 km (25 mi). The Voyager project is managed for NASA by the Jet Propulsion Laboratory.
How daylight influences high-order chromatic descriptors in natural images.
Ojeda, Juan; Nieves, Juan Luis; Romero, Javier
2017-07-01
Despite the global and local daylight changes naturally occurring in natural scenes, the human visual system usually adapts quite well to those changes, developing a stable color perception. Nevertheless, the influence of daylight in modeling natural image statistics is not fully understood and has received little attention. The aim of this work was to analyze the influence of daylight changes on different high-order chromatic descriptors (i.e., color volume, color gamut, and number of discernible colors) derived from 350 color images, which were rendered under 108 natural illuminants with correlated color temperatures (CCT) from 2735 to 25,889 K. Results suggest that chromatic and luminance information is almost constant and does not depend on the CCT of the illuminant for values above 14,000 K. Nevertheless, differences between the red-green and blue-yellow image components were found below that CCT, with most of the statistical descriptors analyzed showing local extremes in the range 2950-6300 K. Uniform regions and areas of the images attracting observers' attention were also considered in this analysis and were characterized by their patchiness index and their saliency maps. The patchiness index does not show a clear dependence on CCT, and it is remarkable that a significant reduction in the number of discernible colors (58% on average) was found when the images were masked with their corresponding saliency maps. Our results suggest that chromatic diversity, as defined in terms of discernible colors, can be strongly reduced when an observer scans a natural scene. These findings support the idea that a reduction in the number of discernible colors will guide visual saliency and attention. Whatever model mediates the neural representation of natural images, it is clear that natural image statistics should take into account these local maxima and minima, which depend on the daylight illumination, as well as the reduction in the number of discernible colors when salient regions are considered.
2014-11-21
The puzzling, fascinating surface of Jupiter's icy moon Europa looms large in this newly reprocessed color view, made from images taken by NASA's Galileo spacecraft in the late 1990s. This is the color view of Europa from Galileo that shows the largest portion of the moon's surface at the highest resolution. The view was previously released as a mosaic with lower resolution and strongly enhanced color (see PIA02590). To create this new version, the images were assembled into a realistic color view of the surface that approximates how Europa would appear to the human eye. The scene shows the stunning diversity of Europa's surface geology. Long, linear cracks and ridges crisscross the surface, interrupted by regions of disrupted terrain where the surface ice crust has been broken up and re-frozen into new patterns. Color variations across the surface are associated with differences in geologic feature type and location. For example, areas that appear blue or white contain relatively pure water ice, while reddish and brownish areas include non-ice components in higher concentrations. The polar regions, visible at the left and right of this view, are noticeably bluer than the more equatorial latitudes, which look more white. This color variation is thought to be due to differences in ice grain size in the two locations. Images taken through near-infrared, green, and violet filters have been combined to produce this view. The images have been corrected for light scattered outside of the image, to provide a color correction that is calibrated by wavelength. Gaps in the images have been filled with simulated color based on the color of nearby surface areas with similar terrain types. This global color view consists of images acquired by the Galileo Solid-State Imaging (SSI) experiment on the spacecraft's first and fourteenth orbits through the Jupiter system, in 1995 and 1998, respectively. Image scale is 1 mile (1.6 kilometers) per pixel. North on Europa is at right. http://photojournal.jpl.nasa.gov/catalog/PIA19048
NASA Technical Reports Server (NTRS)
Carneggie, D. M.; Degloria, S. D.; Colwell, R. N.
1975-01-01
A network of sampling sites throughout the annual grassland region of California was established to correlate plant growth stages and forage production with climatic and other environmental factors. Plant growth and range conditions were further related to geographic location and seasonal variations. A sequence of LANDSAT data was obtained covering critical periods in the growth cycle and analyzed by both photointerpretation and computer-aided techniques. Image characteristics and spectral reflectance data were then related to forage production, range condition, range site, and changing growth conditions. It was determined that repeat sequences of LANDSAT color composite images do provide a means for monitoring changes in range condition. Spectral radiance data obtained from magnetic tape can be used to determine quantitatively the critical stages in the forage growth cycle. A computer ratioing technique provided a sensitive indicator of changes in growth stages and an indication of the relative differences in forage production between range sites.
Multiple sequence alignment in HTML: colored, possibly hyperlinked, compact representations.
Campagne, F; Maigret, B
1998-02-01
Protein sequence alignments are widely used in protein structure prediction, protein engineering, modeling of proteins, etc. This type of representation is useful at different stages of scientific activity: looking at previous results, working on a research project, and presenting results. There is a need to make it available through a network (intranet or WWW) in a way that allows biologists, chemists, and noncomputer specialists to look at the data and carry on research, possibly collaboratively. Previous methods (text-based, Java-based) are reported and their advantages discussed. We have developed two novel approaches to represent alignments as colored, hyperlinked HTML pages. The first method creates an HTML page that efficiently uses the image cache mechanism of a WWW browser, thereby allowing the user to browse different alignments without waiting for the images to be loaded through the network, except for the first viewed alignment. The generated pages can be browsed with any HTML 2.0-compliant browser. The second method that we propose uses W3C CSS1 style sheets to render alignments. This new method generates pages that require recent browsers to be viewed. We implemented these methods in the Viseur program and made a WWW service available that allows a user to convert an MSF alignment file into HTML for WWW publishing. The latter service is available at http://www.lctn.u-nancy.fr/viseur/services.html.
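A modern, minimal analogue of the style-sheet method is easy to sketch. The residue grouping and colors below are illustrative, and the original Viseur output format is not reproduced.

```python
# Hedged sketch: render an alignment as compact colored HTML with one CSS
# class per residue group, in the spirit of the CSS1-based method.
COLORS = {"h": "#ffd27f", "p": "#9fd8ff", "c": "#ff9f9f"}       # illustrative
GROUPS = {**{r: "h" for r in "AVLIMFWPG"},
          **{r: "p" for r in "STCYNQ"},
          **{r: "c" for r in "DEKRH"}}

def alignment_to_html(seqs):
    css = "".join(f".{k}{{background:{v}}}" for k, v in COLORS.items())
    rows = []
    for name, seq in seqs:
        cells = "".join(
            f'<span class="{GROUPS[ch]}">{ch}</span>' if ch in GROUPS else ch
            for ch in seq)
        rows.append(f"<div><b>{name}</b> <code>{cells}</code></div>")
    return (f"<html><head><style>{css}</style></head>"
            f"<body>{''.join(rows)}</body></html>")

print(alignment_to_html([("seq1", "MKTAYIAKQR"), ("seq2", "MKTA-IAKQR")]))
```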
Asymmetric color image encryption based on singular value decomposition
NASA Astrophysics Data System (ADS)
Yao, Lili; Yuan, Caojin; Qiang, Junjie; Feng, Shaotong; Nie, Shouping
2017-02-01
A novel asymmetric color image encryption approach using singular value decomposition (SVD) is proposed. The original color image is encrypted into a ciphertext presented as an indexed image. The red, green, and blue components of the color image are first encoded into a complex function, which is then separated into U, S, and V parts by SVD. The data matrix of the ciphertext is obtained by multiplying the orthogonal matrices U and V while applying phase truncation. The diagonal entries of the three diagonal matrices from the SVD results are extracted, scrambled, and combined to construct the colormap of the ciphertext. Thus, the encrypted indexed image occupies less space than the original image. For decryption, the original color image cannot be recovered without the private keys, which are obtained from the phase truncation and the orthogonality of V. Computer simulations are presented to evaluate the performance of the proposed algorithm, and the security of the proposed system is analyzed.
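The SVD step itself is easy to make concrete. The sketch below shows only the factorization and a phase-truncated product; the authors' full encoding, scrambling, and colormap construction are not reproduced.

```python
# Hedged, schematic sketch of the SVD stage only: the singular values act
# as a private key, and a phase-truncated product of the orthogonal
# factors plays the role of ciphertext data.
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))                        # stand-in encoded channel
phase = np.exp(2j * np.pi * rng.random(img.shape))
U, S, Vh = np.linalg.svd(img * phase)             # complex-valued SVD

cipher = np.abs(U @ Vh)                           # phase-truncated product
key = S                                           # private key: singular values

recovered = np.abs(U @ np.diag(key) @ Vh)         # needs the private key
assert np.allclose(recovered, img)
```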
Contrast enhancement of bite mark images using the grayscale mixer in ACR in Photoshop®.
Evans, Sam; Noorbhai, Suzanne; Lawson, Zoe; Stacey-Jones, Seren; Carabott, Romina
2013-05-01
Enhanced images may improve bite mark edge definition, assisting forensic analysis. Current contrast enhancement involves color extraction, viewing layered images by channel. A novel technique producing a single enhanced image using the grayscale mix panel within Adobe Camera Raw® has been developed and assessed here, allowing adjustment of multiple color channels simultaneously. Stage 1 measured RGB values in 72 versions of a color chart image; the eight sliders in Photoshop® were adjusted at 25% intervals, affecting all corresponding colors. Stage 2 used a bite mark image and found that only the red, orange, and yellow sliders had discernible effects. Stage 3 assessed modality preference among color, grayscale, and enhanced images; on average, the 22 survey participants chose the enhanced image as better defined for nine out of 10 bite marks. The study has shown potential benefits for this new technique. However, further research is needed before use in the analysis of bite marks. © 2013 American Academy of Forensic Sciences.
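For intuition, the effect of a grayscale mixer can be emulated with plain channel weights. Adobe Camera Raw's eight hue-band sliders are approximated here by a single RGB weight vector, which is an oversimplification.

```python
# Hedged sketch: one enhanced monochrome image from a weighted mix of
# color channels, tuned to change bite-mark/skin contrast. ACR's per-hue
# sliders are richer than this single RGB weighting.
import numpy as np

def grayscale_mix(rgb, w=(1.2, -0.1, -0.1)):
    """rgb: (H, W, 3) floats in [0, 1]; w: channel weights."""
    return np.clip(rgb @ np.asarray(w, dtype=float), 0.0, 1.0)

# Raising the red weight lightens reddish marks relative to the
# surrounding skin; negative green/blue weights deepen other tones.
```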
Significance of perceptually relevant image decolorization for scene classification
NASA Astrophysics Data System (ADS)
Viswanathan, Sowmya; Divakaran, Govind; Soman, Kutti Padanyl
2017-11-01
Color images contain luminance and chrominance components representing the intensity and color information, respectively. The objective of this paper is to show the significance of incorporating chrominance information into the task of scene classification. An improved color-to-grayscale image conversion algorithm that effectively incorporates chrominance information is proposed, using the color-to-gray structure similarity index and singular value decomposition to improve the perceptual quality of the converted grayscale images. The experimental results, based on an image quality assessment for image decolorization and its success rate (using the Cadik and COLOR250 datasets), show that the proposed image decolorization technique performs better than eight existing benchmark algorithms. In the second part of the paper, the effectiveness of incorporating the chrominance component into scene classification tasks is demonstrated using a deep belief network-based image classification system built on dense scale-invariant feature transforms. The amount of chrominance information incorporated by the proposed decolorization technique is confirmed by the improvement in overall scene classification accuracy. Moreover, the overall scene classification performance improved further when combining the models obtained using the proposed method and conventional decolorization methods.
Color separation in forensic image processing using interactive differential evolution.
Mushtaq, Harris; Rahnamayan, Shahryar; Siddiqi, Areeb
2015-01-01
Color separation is an image processing technique that has often been used in forensic applications to differentiate among variant colors and to remove unwanted image interference. This process can reveal important information such as covered text or fingerprints in forensic investigation procedures. However, several limitations prevent users from selecting the appropriate parameters pertaining to the desired and undesired colors. This study proposes the hybridization of interactive differential evolution (IDE) with a color separation technique, so that users no longer need to guess the required control parameters. The IDE algorithm optimizes these parameters in an interactive manner by utilizing human visual judgment to uncover desired objects. A comprehensive experimental verification has been conducted on various sample test images, including heavily obscured texts, texts with subtle color variations, and fingerprint smudges. The advantage of IDE is apparent, as it effectively optimizes the color separation parameters at a level indiscernible to the naked eye. © 2014 American Academy of Forensic Sciences.
Unsupervised color image segmentation using a lattice algebra clustering technique
NASA Astrophysics Data System (ADS)
Urcid, Gonzalo; Ritter, Gerhard X.
2011-08-01
In this paper we introduce a lattice algebra clustering technique for segmenting digital images in the Red-Green-Blue (RGB) color space. The proposed technique is a two-step procedure. Given an input color image, the first step determines the finite set of its extreme pixel vectors within the color cube by means of the scaled min-W and max-M lattice auto-associative memory matrices, including the minimum and maximum vector bounds. In the second step, maximal rectangular boxes enclosing each extreme color pixel are found using the Chebyshev distance between color pixels; afterwards, clustering is performed by assigning each image pixel to its corresponding maximal box. The two steps in our proposed method are completely unsupervised, i.e., autonomous. Illustrative examples are provided to demonstrate the color segmentation results, including a brief numerical comparison with two other non-maximal variations of the same clustering technique.
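The second step's assignment rule is simple enough to state directly; extraction of the extreme pixel vectors via the min-W and max-M lattice memories is not reproduced here.

```python
# Hedged sketch: assign each RGB pixel to its nearest extreme color under
# the Chebyshev (L-infinity) distance, the metric used to build the
# maximal enclosing boxes.
import numpy as np

def assign_to_extremes(pixels, extremes):
    """pixels: (N, 3) RGB; extremes: (K, 3) extreme color vectors."""
    d = np.abs(pixels[:, None, :] - extremes[None, :, :]).max(axis=2)
    return d.argmin(axis=1)   # cluster label per pixel
```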
Pluto and Charon in False Color Show Compositional Diversity
2015-07-14
This July 13, 2015, image of Pluto and Charon is presented in false colors to make differences in surface material and features easy to see. It was obtained by the Ralph instrument on NASA's New Horizons spacecraft, using three filters to obtain color information, which is exaggerated in the image. These are not the actual colors of Pluto and Charon, and the apparent distance between the two bodies has been reduced for this side-by-side view. The image reveals that the bright heart-shaped region of Pluto includes areas that differ in color characteristics. The western lobe, shaped like an ice-cream cone, appears peach-colored in this image. A mottled area on the right (east) appears bluish. Even within Pluto's northern polar cap, in the upper part of the image, various shades of yellow-orange indicate subtle compositional differences. The surface of Charon is viewed using the same exaggerated color. The red on the dark northern polar cap of Charon is attributed to hydrocarbon materials, including a class of chemical compounds called tholins. The mottled colors at lower latitudes point to the diversity of terrains on Charon. This image was taken at 3:38 a.m. EDT on July 13, one day before New Horizons' closest approach to Pluto. http://photojournal.jpl.nasa.gov/catalog/PIA19707
Semi-automatic mapping for identifying complex geobodies in seismic images
NASA Astrophysics Data System (ADS)
Domínguez-C, Raymundo; Romero-Salcedo, Manuel; Velasquillo-Martínez, Luis G.; Shemeretov, Leonid
2017-03-01
Seismic images are composed of positive and negative seismic wave traces with different amplitudes (Robein 2010 Seismic Imaging: A Review of the Techniques, their Principles, Merits and Limitations (Houten: EAGE)). The association of these amplitudes together with a color palette forms complex visual patterns. The color intensity of such patterns is directly related to impedance contrasts: the higher the contrast, the higher the color intensity. Generally speaking, low impedance contrasts are depicted with low-tone colors, creating zones with different patterns whose features are not evident to the automated 3D mapping options available in commercial software. In this work, a workflow for semi-automatic mapping of seismic images, focused on low-intensity colored zones that may be associated with geobodies of petroleum interest, is proposed. The CIE L*a*b* color space was used to perform the seismic image processing, which helped find small but significant differences between pixel tones. This process generated binary masks that bound color regions to low-intensity colors. The three-dimensional mask projection allowed the construction of 3D structures for such zones (geobodies). The proposed method was applied to a set of digital images from a seismic cube and tested on four representative study cases. The obtained results are encouraging because interesting geobodies are obtained with a minimum of information.
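The masking idea translates directly into a few lines. The lightness and chroma thresholds below are illustrative guesses, not the authors' values.

```python
# Hedged sketch: convert a seismic image to CIE L*a*b* and keep only
# low-intensity (mid-lightness, low-chroma) pixels as a binary mask for
# candidate geobody zones.
import numpy as np
from skimage.color import rgb2lab

def low_intensity_mask(rgb, l_band=(35.0, 65.0), chroma_max=20.0):
    lab = rgb2lab(rgb)                        # rgb: (H, W, 3) floats in [0, 1]
    L = lab[..., 0]
    chroma = np.hypot(lab[..., 1], lab[..., 2])
    return (L >= l_band[0]) & (L <= l_band[1]) & (chroma <= chroma_max)
```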
Align and conquer: moving toward plug-and-play color imaging
NASA Astrophysics Data System (ADS)
Lee, Ho J.
1996-03-01
The rapid evolution of the low-cost color printing and image capture markets has precipitated a huge increase in the use of color imagery by casual end users on desktop systems, as opposed to traditional professional color users working with specialized equipment. While the cost of color equipment and software has decreased dramatically, the underlying system-level problems associated with color reproduction have remained the same, and in many cases are more difficult to address in a casual environment than in a professional setting. The proliferation of color imaging technologies so far has resulted in a wide availability of component solutions which work together poorly. A similar situation in the desktop computing market has led to the various `Plug-and-Play' standards, which provide a degree of interoperability between a range of products on disparate computing platforms. This presentation will discuss some of the underlying issues and emerging trends in the desktop and consumer digital color imaging markets.
Quantifying Human Visible Color Variation from High Definition Digital Images of Orb Web Spiders.
Tapia-McClung, Horacio; Ajuria Ibarra, Helena; Rao, Dinesh
2016-01-01
Digital processing and analysis of high resolution images of 30 individuals of the orb web spider Verrucosa arenata were performed to extract and quantify human visible colors present on the dorsal abdomen of this species. Color extraction was performed with minimal user intervention using an unsupervised algorithm to determine groups of colors on each individual spider, which was then analyzed in order to quantify and classify the colors obtained, both spatially and using energy and entropy measures of the digital images. Analysis shows that the colors cover a small region of the visible spectrum, are not spatially homogeneously distributed over the patterns and from an entropic point of view, colors that cover a smaller region on the whole pattern carry more information than colors covering a larger region. This study demonstrates the use of processing tools to create automatic systems to extract valuable information from digital images that are precise, efficient and helpful for the understanding of the underlying biology.
24/7 security system: 60-FPS color EMCCD camera with integral human recognition
NASA Astrophysics Data System (ADS)
Vogelsong, T. L.; Boult, T. E.; Gardner, D. W.; Woodworth, R.; Johnson, R. C.; Heflin, B.
2007-04-01
An advanced surveillance/security system is being developed for unattended 24/7 image acquisition and automated detection, discrimination, and tracking of humans and vehicles. The low-light video camera incorporates an electron-multiplying CCD sensor with a programmable on-chip gain of up to 1000:1, providing effective noise levels of less than 1 electron. The EMCCD camera operates in full color mode under sunlit and moonlit conditions, and in monochrome from quarter-moonlight down to overcast starlight illumination. Sixty-frame-per-second operation and progressive scanning minimize motion artifacts. The acquired image sequences are processed with FPGA-compatible real-time algorithms to detect, localize, and track targets and to reject non-targets due to clutter under a broad range of illumination conditions and viewing angles. The object detectors that are used are trained from actual image data. Detectors have been developed and demonstrated for faces, upright humans, crawling humans, large animals, cars, and trucks. Detection and tracking of targets too small for template-based detection are also achieved. For face and vehicle targets, the results of the detection are passed to secondary processing to extract recognition templates, which are then compared with a database for identification. When combined with pan-tilt-zoom (PTZ) optics, the resulting system provides a reliable wide-area 24/7 surveillance system that avoids the high life-cycle cost of infrared cameras and image intensifiers.
Low illumination color image enhancement based on improved Retinex
NASA Astrophysics Data System (ADS)
Liao, Shujing; Piao, Yan; Li, Bing
2017-11-01
Low-illumination color images usually have low brightness, low contrast, blurred detail, and heavy salt-and-pepper noise, which greatly affect later image recognition and information extraction. Therefore, in view of the degradation of night images, an improved version of the traditional Retinex algorithm is proposed. The specific approach is as follows: first, the original low-illumination RGB image is converted to the YUV color space (Y represents brightness, UV represents color), and the background light is estimated from the Y component using a sampling-accelerated guided filter; then, the reflection component is computed with the classical Retinex formula and the brightness enhancement ratio between the original and enhanced images is calculated; finally, the color space is converted back from YUV to RGB and a feedback enhancement of the UV color components is carried out.
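A single-scale version of this pipeline can be sketched in a few lines. A Gaussian filter stands in for the sampling-accelerated guided filter, and the feedback enhancement is approximated by scaling all three channels with the brightness gain.

```python
# Hedged sketch: estimate background light on the luma channel, apply the
# classical Retinex formula, and feed the brightness gain back to the
# color image. Gaussian smoothing replaces the paper's guided filter.
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_enhance(rgb, sigma=30.0, eps=1e-6):
    rgb = rgb.astype(float) / 255.0
    y = rgb @ np.array([0.299, 0.587, 0.114])    # luma (Y of YUV)
    illum = gaussian_filter(y, sigma) + eps      # background light estimate
    r = np.log(y + eps) - np.log(illum)          # classical Retinex reflectance
    r = (r - r.min()) / max(np.ptp(r), eps)      # normalize to [0, 1]
    gain = r / (y + eps)                         # brightness enhancement ratio
    return np.clip(rgb * gain[..., None], 0, 1)  # feedback to color channels
```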
Visualization of dyed NAPL concentration in transparent porous media using color space components.
Kashuk, Sina; Mercurio, Sophia R; Iskander, Magued
2014-07-01
Finding a correlation between image pixel information and non-aqueous phase liquid (NAPL) saturation is an important issue in bench-scale geo-environmental model studies that employ optical imaging techniques. Another concern is determining the best dye color and its optimum concentration as a tracer for use in mapping NAPL zones. Most bench-scale flow studies employ monochromatic grayscale imaging to analyze the concentration of mostly red-dyed NAPL tracers in porous media. However, grayscale uses only a third of the available information in color images, which typically contain three color-space components. In this study, eight color spaces comprising 24 color-space components were calibrated against dye concentration for three color dyes. Additionally, multiple color-space components were combined to increase the correlation between color-space data and dyed NAPL concentration. This work supports imaging of NAPL migration in transparent synthetic soils representing the macroscopic behavior of natural soils. The transparent soil used in this study consists of fused quartz and a matched-refractive-index mineral-oil solution that represents the natural aquifer. The objective is to determine the best color dye concentration and the ideal color-space components for rendering dyed sucrose-saturated fused quartz that represents contamination of the natural aquifer by a dense NAPL (DNAPL). Calibration was achieved for six NAPL zone lengths using 3456 images (24 color-space components × 3 dyes × 48 NAPL combinations) of contaminants within defined criteria expressed as peak signal-to-noise ratio. The effect of data filtering was also considered, and a convolution average filter is recommended for image conditioning. The technology presented in this paper is a fast, accurate, non-intrusive, and inexpensive method for quantifying contamination zones using transparent soil models. Copyright © 2014 Elsevier B.V. All rights reserved.
2D to 3D conversion implemented in different hardware
NASA Astrophysics Data System (ADS)
Ramos-Diaz, Eduardo; Gonzalez-Huitron, Victor; Ponomaryov, Volodymyr I.; Hernandez-Fragoso, Araceli
2015-02-01
Conversion of available 2D data for release as 3D content is a hot topic for providers and for the success of 3D applications in general. It relies entirely on virtual view synthesis of a second view given the original 2D video. Disparity map (DM) estimation is a central task in 3D generation, but rendering novel images precisely remains a very difficult problem. Different approaches to DM reconstruction exist; among them, manual and semiautomatic methods can produce high-quality DMs, but they are time consuming and computationally expensive. In this paper, several hardware implementations of designed frameworks for automatic 3D color video generation based on 2D real video sequences are proposed. The novel framework includes simultaneous processing of stereo pairs using the following blocks: CIE L*a*b* color space conversion; stereo matching via a pyramidal scheme; color segmentation by k-means on the a*b* color plane; DM estimation using stereo matching between left and right images (or neighboring frames in a video); adaptive post-filtering; and finally, anaglyph 3D scene generation. The technique has been implemented on a DSP TMS320DM648, in Matlab's Simulink module on a PC with Windows 7, and on a graphics card (NVIDIA Quadro K2000), demonstrating that the proposed approach can be applied in real-time processing mode. The processing times and the mean Structural Similarity Index Measure (SSIM) and Bad Matching Pixels (B) values for the different hardware implementations (GPU, single CPU, and DSP) are reported in this paper.
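The last block of the framework, anaglyph generation, is the simplest to sketch; disparity estimation, segmentation, and view synthesis are omitted here.

```python
# Hedged sketch: red/cyan anaglyph from a rectified stereo pair, as in
# the framework's final 3D scene generation stage.
import numpy as np

def anaglyph(left, right):
    """left, right: (H, W, 3) uint8 RGB views of the same size."""
    out = right.copy()
    out[..., 0] = left[..., 0]   # red channel from the left view
    return out                   # green/blue (cyan) from the right view
```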
Patterson, Emily J; Wilk, Melissa; Langlo, Christopher S; Kasilian, Melissa; Ring, Michael; Hufnagel, Robert B; Dubis, Adam M; Tee, James J; Kalitzeos, Angelos; Gardner, Jessica C; Ahmed, Zubair M; Sisk, Robert A; Larsen, Michael; Sjoberg, Stacy; Connor, Thomas B; Dubra, Alfredo; Neitz, Jay; Hardcastle, Alison J; Neitz, Maureen; Michaelides, Michel; Carroll, Joseph
2016-07-01
Mutations in the coding sequence of the L and M opsin genes are often associated with X-linked cone dysfunction (such as Bornholm Eye Disease, BED), though the exact color vision phenotype associated with these disorders is variable. We examined individuals with L/M opsin gene mutations to clarify the link between color vision deficiency and cone dysfunction. We recruited 17 males for imaging. The thickness and integrity of the photoreceptor layers were evaluated using spectral-domain optical coherence tomography. Cone density was measured using high-resolution images of the cone mosaic obtained with adaptive optics scanning light ophthalmoscopy. The L/M opsin gene array was characterized in 16 subjects, including at least one subject from each family. There were six subjects with the LVAVA haplotype encoded by exon 3, seven with LIAVA, two with the Cys203Arg mutation encoded by exon 4, and two with a novel insertion in exon 2. Foveal cone structure and retinal thickness was disrupted to a variable degree, even among related individuals with the same L/M array. Our findings provide a direct link between disruption of the cone mosaic and L/M opsin variants. We hypothesize that, in addition to large phenotypic differences between different L/M opsin variants, the ratio of expression of first versus downstream genes in the L/M array contributes to phenotypic diversity. While the L/M opsin mutations underlie the cone dysfunction in all of the subjects tested, the color vision defect can be caused either by the same mutation or a gene rearrangement at the same locus.
Exploring the use of memory colors for image enhancement
NASA Astrophysics Data System (ADS)
Xue, Su; Tan, Minghui; McNamara, Ann; Dorsey, Julie; Rushmeier, Holly
2014-02-01
Memory colors refer to those colors recalled in association with familiar objects. While some previous work introduces this concept to assist digital image enhancement, its basis, i.e., on-screen memory colors, has not been appropriately investigated. In addition, the resulting adjustment methods developed are not evaluated from a perceptual point of view. In this paper, we first perform a context-free perceptual experiment to establish the overall distributions of screen memory colors for three pervasive objects. Then, we use a context-based experiment to locate the most representative memory colors; at the same time, we investigate the interactions of memory colors between different objects. Finally, we show a simple yet effective application using representative memory colors to enhance digital images. A user study is performed to evaluate the performance of our technique.
Physics-based approach to color image enhancement in poor visibility conditions.
Tan, K K; Oakley, J P
2001-10-01
Degradation of images by the atmosphere is a familiar problem. For example, when terrain is imaged from a forward-looking airborne camera, atmospheric degradation causes a loss of both contrast and color information. Enhancement of such images is a difficult task because of the complexity of restoring both the luminance and the chrominance while maintaining good color fidelity. One particular problem is the fact that the level of contrast loss depends strongly on wavelength. A novel method is presented for the enhancement of color images. This method is based on the underlying physics of the degradation process, and the parameters required for enhancement are estimated from the image itself.
A new efficient method for color image compression based on visual attention mechanism
NASA Astrophysics Data System (ADS)
Shao, Xiaoguang; Gao, Kun; Lv, Lily; Ni, Guoqiang
2010-11-01
One of the key procedures in color image compression is to extract regions of interest (ROIs) and assign them different compression ratios. A new non-uniform color image compression algorithm with high efficiency is proposed in this paper, using a biologically motivated selective attention model for the effective extraction of ROIs in natural images. Once the ROIs have been extracted and labeled in the image, the subsequent step is to encode the ROIs and the remaining regions with different compression ratios via the popular JPEG algorithm. Experimental results and quantitative and qualitative analysis show that the method performs well in comparison with traditional color image compression approaches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jester, Sebastian; Schneider, Donald P.; Richards, Gordon T.
The author investigates the extent to which the Palomar-Green (PG) Bright Quasar Survey (BQS) is complete and representative of the general quasar population by comparing with imaging and spectroscopy from the Sloan Digital Sky Survey (SDSS). A comparison of SDSS and PG photometry of both stars and quasars reveals the need to apply a color and magnitude recalibration to the PG data. Using the SDSS photometric catalog, they define the PG's parent sample of objects that are not main-sequence stars and simulate the selection of objects from this parent sample using the PG photometric criteria and errors. This simulation shows that the effective U-B cut in the PG survey is U-B < -0.71, implying a color-related incompleteness. As the color distribution of bright quasars peaks near U-B = -0.7 and the 2σ error in U-B is comparable to the full width of the color distribution of quasars, the color incompleteness of the BQS is approximately 50% and essentially random with respect to U-B color for z < 0.5. There is, however, a bias against bright quasars at 0.5 < z < 1, which is induced by the color-redshift relation of quasars (although quasars at z > 0.5 are inherently rare in bright surveys in any case). They find no evidence for any other systematic incompleteness when comparing the distributions in color, redshift, and FIRST radio properties of the BQS and a BQS-like subsample of the SDSS quasar sample. However, the application of a bright magnitude limit biases the BQS toward the inclusion of objects which are blue in g-i, in particular compared to the full range of g-i colors found among the i-band-limited SDSS quasars, even at i-band magnitudes comparable to those of the BQS objects.
Color image fusion for concealed weapon detection
NASA Astrophysics Data System (ADS)
Toet, Alexander
2003-09-01
Recent advances in passive and active imaging sensor technology offer the potential to detect weapons that are concealed underneath a person's clothing or carried along in bags. Although the concealed weapons can sometimes easily be detected, it can be difficult to perceive their context, due to the non-literal nature of these images. Especially for dynamic crowd surveillance purposes it may be impossible to rapidly assess with certainty which individual in the crowd is the one carrying the observed weapon. Sensor fusion is an enabling technology that may be used to solve this problem. Through fusion, the signal of the sensor that depicts the weapon can be displayed in the context provided by a sensor of a different modality. We propose an image fusion scheme in which non-literal imagery is fused with standard color images such that the result clearly displays the observed weapons in the context of the original color image. The procedure is such that the relevant contrast details from the non-literal image are transferred to the color image without altering the original color distribution of that image. The result is a natural-looking color image that fluently combines all details from both input sources. When an observer performing a dynamic crowd surveillance task detects a weapon in the scene, he will also be able to quickly determine which person in the crowd is actually carrying it (e.g., "the man with the red T-shirt and blue jeans"). The method is illustrated by the fusion of thermal 8-12 μm imagery with standard RGB color images.
Color constancy in dermatoscopy with smartphone
NASA Astrophysics Data System (ADS)
Cugmas, Blaž; Pernuš, Franjo; Likar, Boštjan
2017-12-01
The recent spread of cheap dermatoscopes for smartphones can empower patients to acquire images of skin lesions on their own and send them to dermatologists. Since images are acquired by different smartphone cameras under unique illumination conditions, variability in colors is expected. Therefore, mobile dermatoscopic systems should be calibrated in order to ensure color constancy in skin images. In this study, we tested a DermLite DL1 Basic dermatoscope attached to a Samsung Galaxy S4 smartphone. Under controlled conditions, JPEG images of standard color patches were acquired and a model between the unknown device-dependent RGB color space and a device-independent Lab color space was built. Results showed that the median and best color errors were 7.77 and 3.94, respectively. These values are in the range of human-eye detection capability (color error ≈ 4) and of video and printing industry standards (where color errors between 5 and 6 are expected). It can be concluded that a calibrated smartphone dermatoscope can provide sufficient color constancy and can serve as an interesting opportunity to bring dermatologists closer to patients.
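The calibration model can be sketched as an affine least-squares fit from device RGB to Lab over the chart patches. The patch values below are placeholders, and the CIE76 ΔE used here may differ from the error metric of the study.

```python
# Hedged sketch: fit device RGB -> Lab on color-chart patches and report
# the residual color error. Placeholder data; an affine model stands in
# for whatever model the study actually fitted.
import numpy as np

rng = np.random.default_rng(1)
rgb = rng.random((24, 3))                 # measured patch RGB (placeholder)
lab_ref = rng.random((24, 3)) * 100.0     # reference Lab values (placeholder)

X = np.hstack([rgb, np.ones((len(rgb), 1))])       # affine design matrix
M, *_ = np.linalg.lstsq(X, lab_ref, rcond=None)

delta_e = np.linalg.norm(X @ M - lab_ref, axis=1)  # CIE76 color error
print(f"median dE {np.median(delta_e):.2f}, best dE {delta_e.min():.2f}")
```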
Gross, Colin A; Reddy, Chandan K; Dazzo, Frank B
2010-02-01
Quantitative microscopy and digital image analysis are underutilized in microbial ecology, largely because of the laborious task of segmenting foreground object pixels from background, especially in complex color micrographs of environmental samples. In this paper, we describe an improved computing technology developed to alleviate this limitation. The system's uniqueness is its ability to edit digital images accurately when presented with the difficult yet commonplace challenge of removing background pixels whose three-dimensional color space overlaps the range that defines foreground objects. Image segmentation is accomplished by utilizing algorithms that address color and spatial relationships of user-selected foreground object pixels. The performance of the color segmentation algorithm, evaluated on 26 complex micrographs at single-pixel resolution, had an overall pixel classification accuracy of over 99%. Several applications illustrate how this improved computing technology can successfully resolve numerous challenges of complex color segmentation in order to produce images from which quantitative information can be accurately extracted, thereby gaining new perspectives on the in situ ecology of microorganisms. Examples include improvements in the quantitative analysis of (1) microbial abundance and phylotype diversity of single cells classified by their discriminating color within heterogeneous communities, (2) cell viability, (3) spatial relationships and intensity of bacterial gene expression involved in cellular communication between individual cells within rhizoplane biofilms, and (4) biofilm ecophysiology based on ribotype-differentiated radioactive substrate utilization. The stand-alone executable file plus user manual and tutorial images for this color segmentation computing application are freely available at http://cme.msu.edu/cmeias/. This improved computing technology opens new opportunities for imaging applications where discriminating colors really matter most, thereby strengthening quantitative microscopy-based approaches to advance microbial ecology in situ at individual single-cell resolution.
77 FR 25082 - Picture Permit Imprint Indicia
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-27
... customers to include business-related color images, such as corporate logos, company brand or trademarks, in... including a business-related color image within the permit imprint indicia. When tested, indicia placed in the upper right corner of the mailpiece that contained color images did not impede the Postal Service...
A novel weighted-direction color interpolation
NASA Astrophysics Data System (ADS)
Tao, Jin-you; Yang, Jianfeng; Xue, Bin; Liang, Xiaofen; Qi, Yong-hong; Wang, Feng
2013-08-01
A digital camera captures images by covering the sensor surface with a color filter array (CFA), obtaining only one color sample at each pixel location. Demosaicking is the process of estimating the missing color components of each pixel to obtain a full-resolution color image. In this paper, a new algorithm based on edge adaptivity and direction-dependent weighting factors is proposed. Our method effectively suppresses undesirable artifacts. Experimental results on the Kodak image set show that the proposed algorithm obtains higher-quality images than other methods in both numerical and visual terms.
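A representative weighted-direction rule for the green channel shows the flavor of such algorithms. The inverse-gradient weights below are a common textbook choice, not necessarily the weighting factors this paper proposes.

```python
# Hedged sketch: edge-adaptive, weighted-direction interpolation of green
# at a red/blue site of a Bayer mosaic. Inverse-gradient weights favor
# interpolating along, rather than across, edges.
import numpy as np

def green_at(cfa, i, j, eps=1e-6):
    """cfa: 2-D Bayer mosaic array; (i, j): a red or blue site, not on the border."""
    dh = abs(float(cfa[i, j-1]) - float(cfa[i, j+1]))   # horizontal gradient
    dv = abs(float(cfa[i-1, j]) - float(cfa[i+1, j]))   # vertical gradient
    wh, wv = 1.0 / (dh + eps), 1.0 / (dv + eps)
    gh = (float(cfa[i, j-1]) + float(cfa[i, j+1])) / 2.0
    gv = (float(cfa[i-1, j]) + float(cfa[i+1, j])) / 2.0
    return (wh * gh + wv * gv) / (wh + wv)
```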
Color image processing and vision system for an automated laser paint-stripping system
NASA Astrophysics Data System (ADS)
Hickey, John M., III; Hise, Lawson
1994-10-01
Color image processing in machine vision systems has not gained general acceptance; most machine vision systems use grayscale images. The Laser Automated Decoating System (LADS) required a vision system that could discriminate between substrates of various colors and textures and paints ranging from semi-gloss grays to high-gloss red, white, and blue (Air Force Thunderbirds). The changing lighting levels produced by the pulsed CO2 laser mandated a vision system that did not require constant color-temperature lighting for reliable image analysis.
Surface-Plasmon Holography with White-Light Illumination
NASA Astrophysics Data System (ADS)
Ozaki, Miyu; Kato, Jun-ichi; Kawata, Satoshi
2011-04-01
The three-dimensional (3D) displays recently appearing in electronics shops create an illusion of depth by presenting two parallax 2D images, either through polarized glasses that viewers must wear or through lenticular lenses fixed directly on the display. Holography, on the other hand, provides real 3D imaging, although it usually limits colors to monochrome. The so-called rainbow holograms (mounted, for example, on credit cards) are also produced from parallax images and change color with viewing angle. We report on a holographic technique based on surface plasmons that can reconstruct true 3D color images, where the colors are reconstructed by satisfying the resonance conditions of surface plasmon polaritons for the individual wavelengths. Such real 3D color images can be viewed from any angle, just like the original object.