Sample records for color image coding

  1. Spatial transform coding of color images.

    NASA Technical Reports Server (NTRS)

    Pratt, W. K.

    1971-01-01

    The application of the transform-coding concept to the coding of color images represented by three primary color planes of data is discussed. The principles of spatial transform coding are reviewed and the merits of various methods of color-image representation are examined. A performance analysis is presented for the color-image transform-coding system. Results of a computer simulation of the coding system are also given. It is shown that, by transform coding, the chrominance content of a color image can be coded with an average of 1.0 bits per element or less without serious degradation. If luminance coding is also employed, the average rate reduces to about 2.0 bits per element or less.
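A minimal sketch can illustrate the energy-compaction property behind such low chrominance rates. Here a DCT stands in for the spatial transforms studied in the paper, and the block values are hypothetical:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis; stands in for the spatial transforms
    # (e.g., Fourier, Hadamard) compared in early transform-coding work.
    k = np.arange(n)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] /= np.sqrt(2.0)
    return m

n = 8
D = dct_matrix(n)
# A smooth 8x8 chrominance block: a horizontal ramp, typical of
# slowly varying chroma planes (hypothetical data).
block = np.tile(np.linspace(-20.0, 20.0, n), (n, 1))
coef = D @ block @ D.T                # forward 2D transform
kept = np.zeros_like(coef)
kept[:2, :2] = coef[:2, :2]           # keep only 4 of 64 coefficients
recon = D.T @ kept @ D                # inverse 2D transform
```

Because nearly all of the block's energy lands in the retained coefficients, the reconstruction error stays small, which is why chrominance can survive coding at around 1 bit per element.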

  2. Pseudo color ghost coding imaging with pseudo thermal light

    NASA Astrophysics Data System (ADS)

    Duan, De-yang; Xia, Yun-jie

    2018-04-01

    We present a new pseudo-color imaging scheme, pseudo-color ghost coding imaging, based on ghost imaging but with a multiwavelength source modulated by a spatial light modulator. Unlike conventional pseudo-color imaging, where the absence of nondegenerate-wavelength spatial correlations yields only extra monochromatic images, here the degenerate-wavelength and nondegenerate-wavelength spatial correlations between the idler beam and signal beam are obtained simultaneously. This scheme produces a more colorful image with higher quality than conventional pseudo-color coding techniques. More importantly, a significant advantage over conventional pseudo-color coding imaging techniques is that images with different colors can be obtained without changing the light source or spatial filter.

  3. A blind dual color images watermarking based on IWT and state coding

    NASA Astrophysics Data System (ADS)

    Su, Qingtang; Niu, Yugang; Liu, Xianxi; Zhu, Yu

    2012-04-01

    In this paper, a state-coding based blind watermarking algorithm is proposed to embed a color image watermark into a color host image. The technique of state coding, which makes the state code of a data set equal to the watermark information to be hidden, is introduced. When embedding the watermark, the R, G, and B components of the color watermark are embedded into the Y, Cr, and Cb components of the color host image using the Integer Wavelet Transform (IWT) and the rules of state coding. The same rules are used to extract the watermark from the watermarked image without resorting to the original watermark or original host image. Experimental results show that the proposed watermarking algorithm not only meets the demands of invisibility and robustness of the watermark, but also performs well compared with the other methods considered in this work.

  4. Joint sparse coding based spatial pyramid matching for classification of color medical image.

    PubMed

    Shi, Jun; Li, Yi; Zhu, Jie; Sun, Haojie; Cai, Yin

    2015-04-01

    Although color medical images are important in clinical practice, they are usually converted to grayscale for further processing in pattern recognition, resulting in loss of rich color information. The sparse coding based linear spatial pyramid matching (ScSPM) and its variants are popular for grayscale image classification, but cannot extract color information. In this paper, we propose a joint sparse coding based SPM (JScSPM) method for the classification of color medical images. A joint dictionary can represent both the color information in each color channel and the correlation between channels. Consequently, the joint sparse codes calculated from a joint dictionary can carry color information, and therefore this method can easily transform a feature descriptor originally designed for grayscale images to a color descriptor. A color hepatocellular carcinoma histological image dataset was used to evaluate the performance of the proposed JScSPM algorithm. Experimental results show that JScSPM provides significant improvements as compared with the majority voting based ScSPM and the original ScSPM for color medical image classification. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Color-coded fluid-attenuated inversion recovery images improve inter-rater reliability of fluid-attenuated inversion recovery signal changes within acute diffusion-weighted image lesions.

    PubMed

    Kim, Bum Joon; Kim, Yong-Hwan; Kim, Yeon-Jung; Ahn, Sung Ho; Lee, Deok Hee; Kwon, Sun U; Kim, Sang Joon; Kim, Jong S; Kang, Dong-Wha

    2014-09-01

    Diffusion-weighted image fluid-attenuated inversion recovery (FLAIR) mismatch has been considered to represent ischemic lesion age. However, the inter-rater agreement of diffusion-weighted image FLAIR mismatch is low. We hypothesized that color-coded images would increase its inter-rater agreement. Patients with ischemic stroke <24 hours from a clear onset were retrospectively studied. FLAIR signal change was rated as negative, subtle, or obvious on conventional and color-coded FLAIR images based on visual inspection. Inter-rater agreement was evaluated using κ and percent agreement. The predictive value of diffusion-weighted image FLAIR mismatch for identification of patients <4.5 hours from symptom onset was evaluated. One hundred thirteen patients were enrolled. The inter-rater agreement of FLAIR signal change improved from 69.9% (κ=0.538) with conventional images to 85.8% (κ=0.754) with color-coded images (P=0.004). Discrepantly rated patients on conventional, but not on color-coded, images had a higher prevalence of cardioembolic stroke (P=0.02) and cortical infarction (P=0.04). The positive predictive value for patients <4.5 hours from onset was 85.3% and 71.9% with conventional and 95.7% and 82.1% with color-coded images, by each rater. Color-coded FLAIR images increased the inter-rater agreement of diffusion-weighted image FLAIR mismatch and may ultimately help identify unknown-onset stroke patients appropriate for thrombolysis. © 2014 American Heart Association, Inc.

  6. Use of fluorescent proteins and color-coded imaging to visualize cancer cells with different genetic properties.

    PubMed

    Hoffman, Robert M

    2016-03-01

    Fluorescent proteins are very bright and available in spectrally distinct colors, enabling the imaging of color-coded cancer cells growing in vivo and thereby the distinction of cancer cells with different genetic properties. Non-invasive and intravital imaging of cancer cells with fluorescent proteins allows the visualization of distinct genetic variants of cancer cells down to the cellular level in vivo. Cancer cells with increased or decreased ability to metastasize can be distinguished in vivo. Gene exchange in vivo, which enables low-metastatic cancer cells to convert to high-metastatic ones, can be followed by color-coded imaging in vivo. Cancer stem-like and non-stem cells can also be distinguished in vivo by color-coded imaging. These properties demonstrate the vast superiority of imaging cancer cells in vivo with fluorescent proteins over photon counting of luciferase-labeled cancer cells.

  7. Progressive low-bitrate digital color/monochrome image coding by neuro-fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Mitra, Sunanda; Meadows, Steven

    1997-10-01

    Color image coding at low bit rates is an area of research that has only recently begun to be addressed in the literature, as the problems of storing and transmitting color images become more prominent in many applications. Current trends in image coding exploit the advantage of subband/wavelet decompositions in reducing the complexity of optimal scalar/vector quantizer (SQ/VQ) design. Compression ratios (CRs) on the order of 10:1 to 20:1 with high visual quality have been achieved by vector quantization of subband-decomposed color images in perceptually weighted color spaces. We report the performance of a recently developed adaptive vector quantizer, AFLC-VQ, for effective bit-rate reduction while maintaining high visual quality of reconstructed color as well as monochrome images. For 24-bit color images, excellent visual quality is maintained down to approximately 0.48 bpp (0.16 bpp for each color plane or monochrome image; CR 50:1) using the RGB color space. Further tuning of the AFLC-VQ and the addition of an entropy coder module after the VQ stage yields extremely low bit rates (CR 80:1) for good-quality reconstructed images. Our recent study also reveals that, for similar visual quality, the RGB color space requires fewer bits per pixel than either the YIQ or HSI color space for storing the same information when entropy coding is applied. AFLC-VQ outperforms other standard VQ and adaptive SQ techniques in retaining visual fidelity at similar bit-rate reductions.

  8. Compress compound images in H.264/MPEG-4 AVC by exploiting spatial correlation.

    PubMed

    Lan, Cuiling; Shi, Guangming; Wu, Feng

    2010-04-01

    Compound images are a combination of text, graphics, and natural images. They present strong anisotropic features, especially in the text and graphics parts, which often render conventional compression inefficient. Thus, this paper proposes a novel coding scheme based on H.264 intraframe coding, in which two new intra modes are developed to better exploit spatial correlation in compound images. The first is the residual scalar quantization (RSQ) mode, where intra-predicted residues are directly quantized and coded without a transform. The second is the base colors and index map (BCIM) mode, which can be viewed as an adaptive color quantization: an image block is represented by several representative colors, referred to as base colors, and an index map. Every block selects its coding mode from the two new modes and the existing H.264 intra modes by rate-distortion optimization (RDO). Experimental results show that the proposed scheme improves coding efficiency by more than 10 dB at most bit rates for compound images while keeping performance comparable to H.264 for natural images.

  9. Parametric color coding of digital subtraction angiography.

    PubMed

    Strother, C M; Bender, F; Deuerling-Zheng, Y; Royalty, K; Pulfer, K A; Baumgart, J; Zellerhoff, M; Aagaard-Kienitz, B; Niemann, D B; Lindstrom, M L

    2010-05-01

    Color has been shown to facilitate both visual search and recognition tasks. It was our purpose to examine the impact of a color-coding algorithm on the interpretation of 2D-DSA acquisitions by experienced and inexperienced observers. Twenty-six 2D-DSA acquisitions obtained as part of routine clinical care from subjects with a variety of cerebrovascular disease processes were selected from an internal database so as to include a variety of disease states (aneurysms, AVMs, fistulas, stenosis, occlusions, dissections, and tumors). Three experienced and 3 less experienced observers were each shown the acquisitions on a prerelease version of a commercially available double-monitor workstation (XWP, Siemens Healthcare). Acquisitions were presented first as a subtracted image series and then as a single composite color-coded image of the entire acquisition. Observers were then asked a series of questions designed to assess the value of the color-coded images for the following purposes: 1) to enhance their ability to make a diagnosis, 2) to have confidence in their diagnosis, 3) to plan a treatment, and 4) to judge the effect of a treatment. The results were analyzed by using 1-sample Wilcoxon tests. Color-coded images enhanced the ease of evaluating treatment success in >40% of cases (P < .0001). They also had a statistically significant impact on treatment planning, making planning easier in >20% of the cases (P = .0069). In >20% of the examples, color-coding made diagnosis and treatment planning easier for all readers (P < .0001). Color-coding also increased the confidence of diagnosis compared with the use of DSA alone (P = .056). The impact of this was greater for the naïve readers than for the expert readers. At no additional cost in x-ray dose or contrast medium, color-coding of DSA enhanced the conspicuity of findings on DSA images.
It was particularly useful in situations in which there was a complex flow pattern and in evaluation of pre- and posttreatment acquisitions. Its full potential remains to be defined.

  10. Accessible and informative sectioned images, color-coded images, and surface models of the ear.

    PubMed

    Park, Hyo Seok; Chung, Min Suk; Shin, Dong Sun; Jung, Yong Wook; Park, Jin Seo

    2013-08-01

    In our previous research, we created state-of-the-art sectioned images, color-coded images, and surface models of the human ear. These ear data would be more beneficial and informative if they were more easily accessible. Therefore, the purpose of this study was to distribute browsing software and a PDF file in which the ear images can be readily obtained and freely explored, and to inform other researchers of our methods for establishing them. To achieve this, sectioned images and color-coded images of the ear were prepared (voxel size 0.1 mm). In the color-coded images, structures related to hearing and equilibrium, as well as structures originating from the first and second pharyngeal arches, were additionally segmented. The sectioned and color-coded images of the right ear were loaded into the browsing software, which displays the images serially along with structure names. The surface models were reconstructed and combined into the PDF file, where they can be freely manipulated. Using the browsing software and PDF file, the sectional and three-dimensional shapes of ear structures can be comprehended in detail. Furthermore, using the PDF file, clinical knowledge can be gained through virtual otoscopy. The presented educational tools should therefore be helpful to medical students and otologists by improving their knowledge of ear anatomy. The browsing software and PDF file can be downloaded without charge or registration at our homepage (http://anatomy.dongguk.ac.kr/ear/). Copyright © 2013 Wiley Periodicals, Inc.

  11. Pseudo-color coding method for high-dynamic single-polarization SAR images

    NASA Astrophysics Data System (ADS)

    Feng, Zicheng; Liu, Xiaolin; Pei, Bingzhi

    2018-04-01

    A raw synthetic aperture radar (SAR) image usually has a 16-bit or higher bit depth and therefore cannot be directly visualized on 8-bit displays. In this study, we propose a pseudo-color coding method for high-dynamic single-polarization SAR images. The method considers the characteristics of both SAR images and human perception. In HSI (hue, saturation, and intensity) color space, it carries out high-dynamic-range tone mapping and pseudo-color processing simultaneously in order to avoid loss of detail and to improve object identifiability. It is a highly efficient global algorithm.
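A minimal software sketch of this kind of pipeline might look as follows (our construction, not the paper's exact algorithm: the log tone-mapping curve and the blue-to-red hue ramp are assumptions):

```python
import numpy as np

def tone_map_pseudocolor(img16):
    # Global log tone mapping of a 16-bit SAR amplitude image to [0, 1],
    # then a piecewise-linear blue -> green -> red hue ramp with the
    # mapped intensity preserved (illustrative, not the published method).
    t = np.log1p(img16.astype(np.float64))
    t = t / t.max()                     # assumes a nonzero image
    r = np.clip(2.0 * t - 0.5, 0.0, 1.0)
    g = 1.0 - np.abs(2.0 * t - 1.0)
    b = np.clip(1.5 - 2.0 * t, 0.0, 1.0)
    rgb = np.stack([r, g, b], axis=-1) * t[..., None]
    return (rgb * 255).astype(np.uint8)
```

The log curve compresses the wide SAR dynamic range before quantization to 8 bits, while the hue ramp spreads the surviving detail across a second perceptual dimension.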

  12. Compression of color-mapped images

    NASA Technical Reports Server (NTRS)

    Hadenfeldt, A. C.; Sayood, Khalid

    1992-01-01

    In a standard image coding scenario, pixel-to-pixel correlation nearly always exists in the data, especially if the image is a natural scene. This correlation is what allows predictive coding schemes (e.g., DPCM) to perform efficient compression. In a color-mapped image, the values stored in the pixel array are no longer directly related to the pixel intensity. Two color indices which are numerically adjacent (close) may point to two very different colors. The correlation still exists, but only via the colormap. This fact can be exploited by sorting the color map to reintroduce the structure. The sorting of colormaps is studied and it is shown how the resulting structure can be used in both lossless and lossy compression of images.
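The palette-sorting idea can be sketched in a few lines. This is a hedged illustration: sorting by Rec. 601 luma is one plausible criterion, and the toy palette is made up; the paper studies sorting methods in more depth:

```python
import numpy as np

def sort_colormap(palette):
    # Reorder a palette so that numerically adjacent indices point to
    # perceptually similar colors, restoring pixel-to-pixel correlation
    # in the index array for predictive coders such as DPCM.
    luma = palette @ np.array([0.299, 0.587, 0.114])  # Rec. 601 weights
    order = np.argsort(luma)
    remap = np.argsort(order)          # old index -> new index
    return palette[order], remap

# Toy palette: white and near-white are far apart as indices.
palette = np.array([[255, 255, 255],   # 0: white
                    [255,   0,   0],   # 1: red
                    [  0,   0,   0],   # 2: black
                    [250, 250, 250]])  # 3: near-white
sorted_pal, remap = sort_colormap(palette)
indices = np.array([[0, 3], [3, 0]])   # white pixels next to near-white
new_indices = remap[indices]           # now numerically adjacent as well
```

After remapping, similar colors get similar index values, so a simple predictor on the index array once again produces small residuals.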

  13. The implementation of thermal image visualization by HDL based on pseudo-color

    NASA Astrophysics Data System (ADS)

    Zhu, Yong; Zhang, JiangLing

    2004-11-01

    The pseudo-color method, which maps sampled data to intuitively perceived colors, is a powerful visualization technique. This paper describes a complete pseudo-color visualization system for thermal images, including its underlying principles, model, and HDL (Hardware Description Language) implementation. Thermal images, whose signal is modulated as video, reflect the temperature distribution of the measured object, and therefore involve large data volumes and real-time constraints. The solution is as follows: first, a suitable architecture combining global pseudo-color visualization with accurate measurement of local areas of interest must be adopted; then, the pseudo-color algorithms are implemented in HDL on a SoC (System on Chip) to ensure real-time operation. Finally, the key HDL algorithms for direct gray-level connection coding, proportional gray-level map coding, and enhanced gray-level map coding are presented together with their simulation results. The HDL-based pseudo-color visualization of thermal images described here has been applied effectively to electric power equipment testing and medical diagnosis.
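In software, a proportional gray-level map reduces to a lookup table, which is also how an HDL design would typically realize it (a ROM indexed by the 8-bit gray level). The ramp below is illustrative, not the paper's exact coding:

```python
import numpy as np

# 256-entry lookup table: black -> red -> yellow -> white, a common
# proportional map for thermal imagery (illustrative values).
levels = np.arange(256) / 255.0
LUT = np.stack([np.clip(3.0 * levels, 0.0, 1.0),        # red rises first
                np.clip(3.0 * levels - 1.0, 0.0, 1.0),  # then green
                np.clip(3.0 * levels - 2.0, 0.0, 1.0)], # then blue
               axis=1)
LUT = (LUT * 255).astype(np.uint8)

def colorize(frame):
    # Map an 8-bit thermal frame to RGB with a single table lookup,
    # cheap enough for per-pixel real-time video processing.
    return LUT[frame]
```

A table lookup per pixel has constant cost regardless of the map's shape, which is what makes the real-time requirement tractable in hardware.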

  14. Single-exposure quantitative phase imaging in color-coded LED microscopy.

    PubMed

    Lee, Wonchan; Jung, Daeseong; Ryu, Suho; Joo, Chulmin

    2017-04-03

    We demonstrate single-shot quantitative phase imaging (QPI) in a platform of color-coded LED microscopy (cLEDscope). The light source in a conventional microscope is replaced by a circular LED pattern that is trisected into subregions with equal area, assigned to red, green, and blue colors. Image acquisition with a color image sensor and subsequent computation based on weak object transfer functions allow for the QPI of a transparent specimen. We also provide a correction method for color-leakage, which may be encountered in implementing our method with consumer-grade LEDs and image sensors. Most commercially available LEDs and image sensors do not provide spectrally isolated emissions and pixel responses, generating significant error in phase estimation in our method. We describe the correction scheme for this color-leakage issue, and demonstrate improved phase measurement accuracy. The computational model and single-exposure QPI capability of our method are presented by showing images of calibrated phase samples and cellular specimens.

  15. A Coded Structured Light System Based on Primary Color Stripe Projection and Monochrome Imaging

    PubMed Central

    Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano

    2013-01-01

    Coded Structured Light techniques represent one of the most attractive research areas within the field of optical metrology. The coding procedures are typically based on projecting either a single pattern or a temporal sequence of patterns to provide 3D surface data. In this context, multi-slit or stripe colored patterns may be used with the aim of reducing the number of projected images. However, color imaging sensors require the use of calibration procedures to address crosstalk effects between different channels and to reduce the chromatic aberrations. In this paper, a Coded Structured Light system has been developed by integrating a color stripe projector and a monochrome camera. A discrete coding method, which combines spatial and temporal information, is generated by sequentially projecting and acquiring a small set of fringe patterns. The method allows the concurrent measurement of geometrical and chromatic data by exploiting the benefits of using a monochrome camera. The proposed methodology has been validated by measuring nominal primitive geometries and free-form shapes. The experimental results have been compared with those obtained by using a time-multiplexing gray code strategy. PMID:24129018

  16. A coded structured light system based on primary color stripe projection and monochrome imaging.

    PubMed

    Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano

    2013-10-14

    Coded Structured Light techniques represent one of the most attractive research areas within the field of optical metrology. The coding procedures are typically based on projecting either a single pattern or a temporal sequence of patterns to provide 3D surface data. In this context, multi-slit or stripe colored patterns may be used with the aim of reducing the number of projected images. However, color imaging sensors require the use of calibration procedures to address crosstalk effects between different channels and to reduce the chromatic aberrations. In this paper, a Coded Structured Light system has been developed by integrating a color stripe projector and a monochrome camera. A discrete coding method, which combines spatial and temporal information, is generated by sequentially projecting and acquiring a small set of fringe patterns. The method allows the concurrent measurement of geometrical and chromatic data by exploiting the benefits of using a monochrome camera. The proposed methodology has been validated by measuring nominal primitive geometries and free-form shapes. The experimental results have been compared with those obtained by using a time-multiplexing gray code strategy.

  17. Effect of color coding and subtraction on the accuracy of contrast echocardiography

    NASA Technical Reports Server (NTRS)

    Pasquet, A.; Greenberg, N.; Brunken, R.; Thomas, J. D.; Marwick, T. H.

    1999-01-01

    BACKGROUND: Contrast echocardiography may be used to assess myocardial perfusion. However, gray-scale assessment of myocardial contrast echocardiography (MCE) is difficult because of variations in regional backscatter intensity, difficulty in distinguishing varying shades of gray, and artifacts or attenuation. We sought to determine whether the assessment of rest myocardial perfusion by MCE could be improved with subtraction and color coding. METHODS AND RESULTS: MCE was performed in 31 patients with previous myocardial infarction using a second-generation agent (NC100100, Nycomed AS) with harmonic triggered or continuous imaging; gain settings were kept constant throughout the study. Digitized images were postprocessed by subtracting baseline from contrast data and colorized to reflect the intensity of myocardial contrast. Gray-scale MCE alone, MCE combined with baseline images, and subtracted colorized images were scored independently using a 16-segment model. The presence and severity of myocardial contrast abnormalities were compared with perfusion defined by rest MIBI-SPECT. Segments that were not visualized by continuous (17%) or triggered imaging (14%) after color processing were excluded from further analysis. The specificity of gray-scale MCE alone (56%) or MCE combined with baseline 2D (47%) was significantly enhanced by subtraction and color coding (76%, p<0.001) of triggered images. The accuracy of the gray-scale approaches (52% and 47%, respectively) was increased to 70% (p<0.001). Similarly, for continuous images, the specificity of gray-scale MCE with and without baseline comparison was 23% and 42%, respectively, compared with 60% after postprocessing (p<0.001). The accuracy of colorized images (59%) was also significantly greater than gray-scale MCE (43% and 29%, p<0.001). The sensitivity of MCE for both acquisitions was not altered by subtraction.
CONCLUSION: Post-processing with subtraction and color coding significantly improves the accuracy and specificity of MCE for detection of perfusion defects.

  18. Semiannual status report

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The work performed in the previous six months can be divided into three main areas: (1) transmission of images over local area networks (LANs); (2) coding of color-mapped (pseudo-color) images; and (3) low-rate video coding. A brief overview of the work in the first two areas is presented; the third is reported in somewhat more detail.

  19. A color-coded vision scheme for robotics

    NASA Technical Reports Server (NTRS)

    Johnson, Kelley Tina

    1991-01-01

    Most vision systems for robotic applications rely entirely on the extraction of information from gray-level images. Humans, however, regularly depend on color to discriminate between objects. Therefore, the inclusion of color in a robot vision system seems a natural extension of the existing gray-level capabilities. A method for robot object recognition using a color-coding classification scheme is discussed. The scheme is based on an algebraic system in which a two-dimensional color image is represented as a polynomial of two variables. The system is then used to find the color contour of objects. In a controlled environment, such as that of the in-orbit space station, a particular class of objects can thus be quickly recognized by its color.

  20. Evaluation of a color-coded Landsat 5/6 ratio image for mapping lithologic differences in western South Dakota

    USGS Publications Warehouse

    Raines, Gary L.; Bretz, R.F.; Shurr, George W.

    1979-01-01

    From analysis of a color-coded Landsat 5/6 ratio image, Raines produced a map of the vegetation density distribution over 25,000 sq km of western South Dakota. The 5/6 ratio image is produced by digitally calculating the ratio of bands 5 and 6 of the Landsat data and then color-coding these ratios in an image. Bretz and Shurr compared this vegetation density map with published and unpublished data, primarily from the U.S. Geological Survey and the South Dakota Geological Survey; good correspondence is seen between this map and existing geologic maps, especially the soils map. We believe this Landsat ratio image can be used as a tool to refine existing maps of surficial geology and bedrock, where bedrock is exposed, and to improve mapping accuracy in areas of poor exposure, which are common in South Dakota. In addition, this type of image could be a useful additional tool for mapping areas that are unmapped.
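The underlying computation is a per-pixel band ratio quantized into discrete classes, each of which is then assigned a display color. A sketch, in which the class count and the quantile-based binning are our assumptions rather than the authors' procedure:

```python
import numpy as np

def ratio_classes(band5, band6, n_classes=8):
    # Compute the Landsat band 5 / band 6 ratio per pixel and quantize
    # it into n_classes bins; each bin would be given its own display
    # color. Vegetation is bright in band 6 (near-IR) and dark in
    # band 5 (red), so dense vegetation falls in the low-ratio classes.
    ratio = band5.astype(np.float64) / np.maximum(band6, 1)
    edges = np.quantile(ratio, np.linspace(0, 1, n_classes + 1)[1:-1])
    return np.digitize(ratio, edges)   # class labels 0 .. n_classes-1
```

Quantile edges spread the classes evenly over the scene's ratio distribution, so each display color covers a comparable fraction of pixels.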

  1. Object knowledge changes visual appearance: semantic effects on color afterimages.

    PubMed

    Lupyan, Gary

    2015-10-01

    According to predictive coding models of perception, what we see is determined jointly by the current input and the priors established by previous experience, expectations, and other contextual factors. The same input can thus be perceived differently depending on the priors that are brought to bear during viewing. Here, I show that expected (diagnostic) colors are perceived more vividly than arbitrary or unexpected colors, particularly when color input is unreliable. Participants were tested on a version of the 'Spanish Castle Illusion' in which viewing a hue-inverted image renders a subsequently shown achromatic version of the image in vivid color. Adapting to objects with intrinsic colors (e.g., a pumpkin) led to stronger afterimages than adapting to arbitrarily colored objects (e.g., a pumpkin-colored car). Considerably stronger afterimages were also produced by scenes containing intrinsically colored elements (grass, sky) compared to scenes with arbitrarily colored objects (books). The differences between images with diagnostic and arbitrary colors disappeared when the association between the image and color priors was weakened by, e.g., presenting the image upside-down, consistent with the prediction that color appearance is being modulated by color knowledge. Visual inputs that conflict with prior knowledge appear to be phenomenologically discounted, but this discounting is moderated by input certainty, as shown by the final study which uses conventional images rather than afterimages. As input certainty is increased, unexpected colors can become easier to detect than expected ones, a result consistent with predictive-coding models. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Color-coded visualization of magnetic resonance imaging multiparametric maps

    NASA Astrophysics Data System (ADS)

    Kather, Jakob Nikolas; Weidner, Anja; Attenberger, Ulrike; Bukschat, Yannick; Weis, Cleo-Aron; Weis, Meike; Schad, Lothar R.; Zöllner, Frank Gerrit

    2017-01-01

    Multiparametric magnetic resonance imaging (mpMRI) data are increasingly used in the clinic, e.g. for the diagnosis of prostate cancer. In contrast to conventional MR imaging data, multiparametric data typically include functional measurements such as diffusion and perfusion imaging sequences. Conventionally, these measurements are visualized with a one-dimensional color scale, allowing only one-dimensional information to be encoded. Yet human perception places visual information in a three-dimensional color space, and in theory each dimension of this space can be utilized to encode visual information. We addressed this issue and developed a new method for tri-variate color-coded visualization of mpMRI data sets. We showed the usefulness of our method in a preclinical and in a clinical setting: in imaging data of a rat model of acute kidney injury, the method yielded characteristic visual patterns; in a clinical data set of N = 13 prostate cancer mpMRI examinations, we assessed diagnostic performance in a blinded study with N = 5 observers. Compared to conventional radiological evaluation, color-coded visualization was comparable in terms of positive and negative predictive values. Thus, we showed that human observers can successfully make use of the novel method, which can be broadly applied to visualize different types of multivariate MRI data.

  3. Information retrieval based on single-pixel optical imaging with quick-response code

    NASA Astrophysics Data System (ADS)

    Xiao, Yin; Chen, Wen

    2018-04-01

    The quick-response (QR) code technique is combined with ghost imaging (GI) to recover the original information with high quality. An image is first transformed into a QR code, and the QR code is then used as the input image in the object plane of a ghost imaging setup. After the measurements, the traditional correlation algorithm of ghost imaging is used to reconstruct a low-quality image (in QR code form). With this low-quality image as an initial guess, a Gerchberg-Saxton-like algorithm is used to improve its contrast as a postprocessing step. Taking advantage of the high error-correction capability of QR codes, the original information can be recovered with high quality. Compared to the previous method, ours obtains a high-quality image with comparatively few measurements, which means the time-consuming postprocessing procedure can be avoided to some extent. In addition, in conventional ghost imaging, the larger the image, the more measurements are needed; in our method, images of different sizes can be converted into QR codes of the same small size by a QR generator. Hence, for larger images, the time required to recover the original information with high quality is dramatically reduced. Our method also makes it easy to recover a color image in a ghost imaging setup, because it is not necessary to split the color image into three channels and recover each separately.
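The correlation-reconstruction step that produces the low-quality initial guess can be sketched with a minimal simulation. Random speckle patterns and a toy binary object stand in for the modulated source and the QR code, and the Gerchberg-Saxton-like refinement is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary object standing in for the QR code in the object plane.
obj = np.zeros((8, 8))
obj[2:6, 2:6] = 1.0

# Random speckle patterns play the role of the modulated source;
# each yields one bucket (single-pixel) measurement.
M = 4000
patterns = rng.random((M, 8, 8))
bucket = (patterns * obj).sum(axis=(1, 2))

# Classical GI correlation: <B * I> - <B><I>, a noisy estimate of the
# object that would seed the iterative contrast-enhancement step.
recon = (bucket[:, None, None] * patterns).mean(axis=0) \
        - bucket.mean() * patterns.mean(axis=0)
```

Pixels inside the object correlate with the bucket signal and stand out above the background, even though each individual measurement integrates the whole scene.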

  4. Genetic Recombination Between Stromal and Cancer Cells Results in Highly Malignant Cells Identified by Color-Coded Imaging in a Mouse Lymphoma Model.

    PubMed

    Nakamura, Miki; Suetsugu, Atsushi; Hasegawa, Kousuke; Matsumoto, Takuro; Aoki, Hitomi; Kunisada, Takahiro; Shimizu, Masahito; Saji, Shigetoyo; Moriwaki, Hisataka; Hoffman, Robert M

    2017-12-01

    The tumor microenvironment (TME) promotes tumor growth and metastasis. We previously established a color-coded EL4 lymphoma TME model with red fluorescent protein (RFP)-expressing EL4 cells implanted in transgenic C57BL/6 green fluorescent protein (GFP) mice. Color-coded imaging of the lymphoma TME suggested an important role of stromal cells in lymphoma progression and metastasis. In the present study, we used color-coded imaging of RFP lymphoma cells and GFP stromal cells to identify yellow-fluorescent genetically recombinant cells appearing only during metastasis. The EL4-RFP lymphoma cells were injected subcutaneously in C57BL/6-GFP transgenic mice and formed subcutaneous tumors 14 days after cell transplantation. The subcutaneous tumors were harvested and transplanted to the abdominal cavity of nude mice. Metastases to the liver, perigastric lymph node, ascites, bone marrow, and primary tumor were imaged. In addition to EL4-RFP cells and GFP host cells, genetically recombinant yellow-fluorescent cells were observed only in the ascites and bone marrow. These results indicate genetic exchange between the stromal and cancer cells. Possible mechanisms of genetic exchange are discussed, as well as its ramifications for metastasis. J. Cell. Biochem. 118: 4216-4221, 2017. © 2017 Wiley Periodicals, Inc.

  5. LSB-based Steganography Using Reflected Gray Code for Color Quantum Images

    NASA Astrophysics Data System (ADS)

    Li, Panchi; Lu, Aiping

    2018-02-01

    At present, the classical least-significant-bit (LSB) based image steganography has been extended to quantum image processing. For existing LSB-based quantum image steganography schemes, the embedding capacity is no more than 3 bits per pixel, so it is meaningful to study how to improve it. This work presents a novel LSB-based steganography using reflected Gray code for color quantum images, with an embedding capacity of up to 4 bits per pixel. In the proposed scheme, the secret qubit sequence is treated as a sequence of 4-bit segments. For the four bits in each segment, the first bit is embedded in the second LSB of the B channel of the cover image, and the remaining three bits are embedded in the LSBs of the R, G and B channels of each color pixel simultaneously, using the reflected Gray code to determine the embedded bit from the secret information. Following this transforming rule, the LSBs of the stego-image are not always the same as the secret bits, and the differences reach almost 50%. Experimental results confirm that the proposed scheme shows good performance and outperforms the previous ones in the literature in terms of embedding capacity.
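    The reflected-Gray-code routing of secret bits into pixel LSBs can be illustrated classically. This sketch is a simplified analogue, not the paper's quantum circuit: it embeds one 3-bit segment per pixel, and the helper names are hypothetical.

```python
def gray_encode(n: int) -> int:
    """Reflected binary Gray code of n."""
    return n ^ (n >> 1)

def gray_decode(g: int) -> int:
    """Inverse of gray_encode."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def embed_pixel(pixel, secret3):
    """Embed a 3-bit secret value into the LSBs of one (R, G, B) pixel,
    routing it through the reflected Gray code first."""
    g = gray_encode(secret3)                      # 3-bit Gray codeword
    bits = ((g >> 2) & 1, (g >> 1) & 1, g & 1)
    return tuple((c & ~1) | b for c, b in zip(pixel, bits))

def extract_pixel(pixel):
    """Recover the 3-bit secret value from a stego pixel."""
    g = ((pixel[0] & 1) << 2) | ((pixel[1] & 1) << 1) | (pixel[2] & 1)
    return gray_decode(g)
```

    Because the stored LSBs are the Gray codeword rather than the raw secret bits, they frequently differ from the secret bits themselves, which is the roughly 50% difference the abstract refers to.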

  6. Study and simulation of low rate video coding schemes

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Chen, Yun-Chung; Kipp, G.

    1992-01-01

    The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color mapped images, robust coding scheme for packet video, recursively indexed differential pulse code modulation, image compression technique for use on token ring networks, and joint source/channel coder design.

  7. Identification of the inflow zone of unruptured cerebral aneurysms: comparison of 4D flow MRI and 3D TOF MRA data.

    PubMed

    Futami, K; Sano, H; Misaki, K; Nakada, M; Ueda, F; Hamada, J

    2014-07-01

    The hemodynamics of the inflow zone of cerebral aneurysms may be a key factor in coil compaction and recanalization after endovascular coil embolization. We performed 4D flow MR imaging in conjunction with 3D TOF MRA and compared their ability to identify the inflow zone of unruptured cerebral aneurysms. This series comprised 50 unruptured saccular cerebral aneurysms in 44 patients. Transluminal color-coded 3D MRA images were created by selecting the signal-intensity ranges on 3D TOF MRA images that corresponded with both the luminal margin and the putative inflow. 4D flow MR imaging demonstrated the inflow zone and yielded inflow velocity profiles for all 50 aneurysms. In 18 of 24 lateral-projection aneurysms (75%), the inflow zone was located distally on the aneurysmal neck. The maximum inflow velocity ranged from 285 to 922 mm/s. On 4D flow MR imaging and transluminal color-coded 3D MRA studies, the inflow zone of 32 aneurysms (64%) was at a similar location. In 91% of aneurysms whose neck section plane angle was <30° with respect to the imaging section direction on 3D TOF MRA, depiction of the inflow zone was similar on transluminal color-coded 3D MRA and 4D flow MR images. 4D flow MR imaging can demonstrate the inflow zone and provide inflow velocity profiles. In aneurysms whose angle of the neck-section plane is obtuse vis-a-vis the imaging section on 3D TOF MRA scans, transluminal color-coded 3D MRA may depict the inflow zone reliably. © 2014 by American Journal of Neuroradiology.

  8. False Color Terrain Model of Phoenix Workspace

    NASA Image and Video Library

    2008-05-28

    This is a terrain model of the Phoenix Robotic Arm workspace. It has been color-coded by depth, with a lander model included for context. The model was derived using stereo images from Phoenix's Surface Stereo Imager (SSI).

  9. Visualization and Analysis of Microtubule Dynamics Using Dual Color-Coded Display of Plus-End Labels

    PubMed Central

    Garrison, Amy K.; Xia, Caihong; Wang, Zheng; Ma, Le

    2012-01-01

    Investigating spatial and temporal control of microtubule dynamics in live cells is critical to understanding cell morphogenesis in development and disease. Tracking fluorescently labeled plus-end-tracking proteins over time has become a widely used method to study microtubule assembly. Here, we report a complementary approach that uses only two images of these labels to visualize and analyze microtubule dynamics at any given time. Using a simple color-coding scheme, labeled plus-ends from two sequential images are pseudocolored with different colors and then merged to display color-coded ends. Based on object recognition algorithms, these colored ends can be identified and segregated into dynamic groups corresponding to four events, including growth, rescue, catastrophe, and pause. Further analysis yields not only their spatial distribution throughout the cell but also provides measurements such as growth rate and direction for each labeled end. We have validated the method by comparing our results with ground-truth data derived from manual analysis as well as with data obtained using the tracking method. In addition, we have confirmed color-coded representation of different dynamic events by analyzing their history and fate. Finally, we have demonstrated the use of the method to investigate microtubule assembly in cells and provided guidance in selecting optimal image acquisition conditions. Thus, this simple computer vision method offers a unique and quantitative approach to study spatial regulation of microtubule dynamics in cells. PMID:23226282
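    The two-frame color-coding idea can be sketched directly. A minimal illustration assuming binary label images as input; the paper's object-recognition step and per-end measurements are reduced here to pixel counts.

```python
import numpy as np

def color_code_ends(frame1, frame2):
    """Merge two sequential binary plus-end label images into one RGB image:
    frame1 is pseudocolored red, frame2 green, so ends present in both
    frames appear yellow in the merged display."""
    rgb = np.zeros(frame1.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = frame1.astype(np.uint8) * 255   # red channel: ends at time t
    rgb[..., 1] = frame2.astype(np.uint8) * 255   # green channel: ends at t + dt
    return rgb

def classify_ends(frame1, frame2):
    """Count red-only, green-only and yellow pixels, a crude stand-in for
    the paper's object-recognition step that groups ends into dynamic
    events (growth, rescue, catastrophe, pause)."""
    red_only = int(np.sum((frame1 == 1) & (frame2 == 0)))
    green_only = int(np.sum((frame1 == 0) & (frame2 == 1)))
    yellow = int(np.sum((frame1 == 1) & (frame2 == 1)))
    return red_only, green_only, yellow
```

    Red-only pixels mark labels present only in the first frame, green-only pixels mark newly appeared labels, and yellow pixels mark labels present in both images, loosely corresponding to the dynamic groups the paper classifies.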

  10. Color-Coded Clues to Composition Superimposed on Martian Seasonal-Flow Image

    NASA Image and Video Library

    2014-02-10

    This image from NASA's Mars Reconnaissance Orbiter combines a photograph of seasonal dark flows on a Martian slope at Palikir Crater with a grid of colors based on data collected by a mineral-mapping spectrometer observing the same area.

  11. Color-coded duplex sonography for diagnosis of testicular torsion.

    PubMed

    Zoeller, G; Ringert, R H

    1991-11-01

    With color-coded duplex sonography, moving structures are visualized in red or blue within a normal gray-scale B-mode ultrasound image. Blood flow, even within small vessels, can thus be visualized clearly. Color-coded duplex sonographic examination was performed in 11 patients who presented with scrotal pain. The method proved reliable in differentiating testicular torsion from testicular inflammation. By clearly demonstrating the absence of intratesticular blood flow in testicular torsion, while avoiding misinterpretation of flow in scrotal skin vessels as intratesticular blood flow, this method significantly decreases the number of patients in whom surgical exploration is necessary to exclude testicular torsion.

  12. Coastal Zone Color Scanner (CZCS): Imagery of near-surface phytoplankton pigment concentrations from the first coastal ocean dynamics experiment (CODE-1), March - July 1981

    NASA Technical Reports Server (NTRS)

    Abbott, M. R.; Zion, P. M.

    1984-01-01

    As part of the first Coastal Ocean Dynamics Experiment, images of ocean color were collected from late March until late July, 1981, by the Coastal Zone Color Scanner aboard Nimbus-7. Images that had sufficient cloud-free area to be of interest were processed to yield near-surface phytoplankton pigment concentrations. These images were then remapped to a fixed equal-area grid. This report contains photographs of the digital images and a brief description of the processing methods.

  13. Pixel Statistical Analysis of Diabetic vs. Non-diabetic Foot-Sole Spectral Terahertz Reflection Images

    NASA Astrophysics Data System (ADS)

    Hernandez-Cardoso, G. G.; Alfaro-Gomez, M.; Rojas-Landeros, S. C.; Salas-Gutierrez, I.; Castro-Camus, E.

    2018-03-01

    In this article, we present a series of hydration mapping images of the foot soles of diabetic and non-diabetic subjects measured by terahertz reflectance. In addition to the hydration images, we present a series of RYG-color-coded (red yellow green) images in which each pixel is assigned one of the three colors in order to easily identify areas at risk of ulceration. We also present the statistics of the number of pixels of each color as a potential quantitative indicator of diabetic foot syndrome deterioration.
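    The RYG coding step amounts to thresholding the hydration map; the cutoff values below are illustrative assumptions, since the abstract does not give the clinical thresholds.

```python
import numpy as np

# Illustrative risk thresholds only; the paper's clinical hydration
# cutoffs are not given in this abstract.
LOW, HIGH = 0.3, 0.6

def ryg_code(hydration):
    """Color-code a hydration map: red = at risk, yellow = borderline,
    green = well hydrated."""
    colors = np.empty(hydration.shape + (3,), dtype=np.uint8)
    colors[hydration < LOW] = (255, 0, 0)
    colors[(hydration >= LOW) & (hydration < HIGH)] = (255, 255, 0)
    colors[hydration >= HIGH] = (0, 255, 0)
    return colors

def pixel_statistics(hydration):
    """Pixel counts per color band, the kind of quantitative indicator
    the article proposes."""
    return {
        "red": int(np.sum(hydration < LOW)),
        "yellow": int(np.sum((hydration >= LOW) & (hydration < HIGH))),
        "green": int(np.sum(hydration >= HIGH)),
    }
```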

  14. Comparison of lossless compression techniques for prepress color images

    NASA Astrophysics Data System (ADS)

    Van Assche, Steven; Denecker, Koen N.; Philips, Wilfried R.; Lemahieu, Ignace L.

    1998-12-01

    In the pre-press industry color images have both a high spatial and a high color resolution. Such images require a considerable amount of storage space and impose long transmission times. Data compression is desired to reduce these storage and transmission problems. Because of the high quality requirements in the pre-press industry only lossless compression is acceptable. Most existing lossless compression schemes operate on gray-scale images, in which case the color components of color images must be compressed independently. However, higher compression ratios can be achieved by exploiting inter-color redundancies. In this paper we present a comparison of three state-of-the-art lossless compression techniques which exploit such color redundancies: IEP (Inter-color Error Prediction) and a KLT-based technique, which are both linear color decorrelation techniques, and Interframe CALIC, which uses a non-linear approach to color decorrelation. It is shown that these techniques are able to exploit color redundancies and that color decorrelation can be done effectively and efficiently. The linear color decorrelators provide a considerable coding gain (about 2 bpp) on some typical prepress images. The non-linear interframe CALIC predictor does not yield better results, but the full interframe CALIC technique does.
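    The principle behind linear inter-color decorrelation such as IEP can be sketched with a generic least-squares predictor on synthetic planes (not the exact IEP predictor): the residual after predicting one plane from another has far less energy than the plane itself, so it entropy-codes more compactly.

```python
import numpy as np

def intercolor_residual(ref_plane, target_plane):
    """Fit target = a * ref + b by least squares and return the prediction
    residual, which is what the entropy coder would then compress."""
    a, b = np.polyfit(ref_plane.ravel(), target_plane.ravel(), 1)
    return target_plane - (a * ref_plane + b)

# Synthetic correlated planes standing in for two color components.
rng = np.random.default_rng(1)
red = rng.random((64, 64))
green = 0.8 * red + 0.05 * rng.random((64, 64))  # strongly correlated with red

residual = intercolor_residual(red, green)
```

    On these synthetic planes the residual variance is far below the variance of the green plane itself, illustrating where the coding gain comes from.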

  15. Quantum image pseudocolor coding based on the density-stratified method

    NASA Astrophysics Data System (ADS)

    Jiang, Nan; Wu, Wenya; Wang, Luo; Zhao, Na

    2015-05-01

    Pseudocolor processing is a branch of image enhancement. It dyes grayscale images with color to make them more visually appealing or to highlight particular regions. This paper proposes a quantum image pseudocolor coding scheme based on the density-stratified method, which defines a colormap and maps gray density values to colors in parallel according to that colormap. First, two data structures are reviewed or proposed: the quantum image representation GQIR and the quantum colormap QCR. Then, the quantum density-stratified algorithm is presented. Based on these, the quantum realization in the form of circuits is given. The main advantages of the quantum version of pseudocolor processing over the classical approach are that it needs less memory and can speed up the computation. Two examples help to describe the scheme further. Finally, future work is discussed.
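    A classical analogue of the density-stratified mapping, with a hypothetical four-entry colormap (the paper's GQIR image and QCR colormap are quantum data structures; no circuit is sketched here):

```python
import numpy as np

# Hypothetical 4-entry colormap; the paper defines its colormap (QCR) as a
# quantum state, here we just stratify the 0-255 gray range into 4 bands.
COLORMAP = [(0, 0, 128), (0, 128, 0), (255, 165, 0), (255, 0, 0)]

def pseudocolor(gray):
    """Classical analogue of density-stratified pseudocolor coding: every
    gray value in a stratum is dyed with that stratum's colormap entry."""
    strata = np.minimum(gray // 64, 3)              # 4 equal-width strata
    out = np.zeros(gray.shape + (3,), dtype=np.uint8)
    for band, color in enumerate(COLORMAP):
        out[strata == band] = color
    return out
```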

  16. Perceptual distortion analysis of color image VQ-based coding

    NASA Astrophysics Data System (ADS)

    Charrier, Christophe; Knoblauch, Kenneth; Cherifi, Hocine

    1997-04-01

    It is generally accepted that an RGB color image can be easily encoded by using a gray-scale compression technique on each of the three color planes. Such an approach, however, fails to take into account correlations existing between color planes and perceptual factors. We evaluated several linear and non-linear color spaces, some introduced by the CIE, compressed with the vector quantization technique for minimum perceptual distortion. To study these distortions, we measured the contrast and luminance of the video framebuffer to precisely control color. We then obtained psychophysical judgements to measure how well these methods minimize perceptual distortion in a variety of color spaces.

  17. Image Coding Based on Address Vector Quantization.

    NASA Astrophysics Data System (ADS)

    Feng, Yushu

    Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of the Vector Quantization technique to the Address Vector Quantization method have been investigated. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; this index is sent to the channel. Reconstruction of the image is done by a table-lookup technique, where the label is simply used as an address for a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the use of the Kohonen neural network for codebook design. During the encoding process, the correlation between addresses is considered, and Address Vector Quantization is developed for color-image and monochrome-image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. To overcome the problems in Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme but at about 1/2 to 1/3 the bit rate. In chapter 5, a Dynamic Finite State VQ, based on a probability transition matrix for selecting the best subcodebook to encode the image, is developed. In chapter 6, a new adaptive vector quantization scheme suitable for color video coding, called "A Self-Organizing Adaptive VQ Technique," is presented. In addition to chapters 2 through 6, which report on new work, this dissertation includes one chapter (chapter 1) and part of chapter 2 which review previous work on VQ and image coding, respectively. Finally, a short discussion of directions for further research is presented in conclusion.
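    The basic VQ encode/decode cycle described in the abstract (nearest-codeword search, index transmission, table-lookup reconstruction) can be sketched as:

```python
import numpy as np

def vq_encode(vectors, codebook):
    """Nearest-codeword search: each input vector is replaced by the index
    of its best-matching codeword; only the indices go to the channel."""
    dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
    return dists.argmin(axis=1)

def vq_decode(indices, codebook):
    """Reconstruction is a table lookup: the index addresses the codebook."""
    return codebook[indices]
```

    Address VQ then goes further by also exploiting the correlation between the indices (addresses) of neighboring blocks, which is where the additional bit-rate savings come from.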

  18. A novel quantum steganography scheme for color images

    NASA Astrophysics Data System (ADS)

    Li, Panchi; Liu, Xiande

    In quantum image steganography, embedding capacity and security are two important issues. This paper presents a novel quantum steganography scheme using color images as cover images. First, the secret information is divided into 3-bit segments, and then each segment is embedded into the LSBs of one color pixel in the cover image according to its own value, using Gray-code mapping rules. Extraction is the inverse of embedding. We designed the quantum circuits that implement the embedding and extraction processes. Simulation results on a classical computer show that the proposed scheme outperforms several existing schemes in terms of embedding capacity and security.

  19. Selective document image data compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1998-05-19

    A method of storing information from filled-in form-documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. Color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. A second two-color image of the filled-edge file is then generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first and second two-color images are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel. 10 figs.

  20. Selective document image data compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1998-01-01

    A method of storing information from filled-in form-documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. A second two-color image of the filled-edge file is then generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first and second two-color images are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel.

  1. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The choice of coder for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.

  2. Digitizing zone maps, using modified LARSYS program. [computer graphics and computer techniques for mapping

    NASA Technical Reports Server (NTRS)

    Giddings, L.; Boston, S.

    1976-01-01

    A method for digitizing zone maps is presented, starting with colored images and producing a final one-channel digitized tape. This method automates the work previously done interactively on the Image-100 and Data Analysis System computers of the Johnson Space Center (JSC) Earth Observations Division (EOD). A color-coded map was digitized through color filters on a scanner to form a digital tape in LARSYS-2 or JSC Universal format. The taped image was classified by the EOD LARSYS program on the basis of training fields included in the image. Numerical values were assigned to all pixels in a given class, and the resulting coded zone map was written on a LARSYS or Universal tape. A unique spatial filter option permitted zones to be made homogeneous and edges of zones to be abrupt transitions from one zone to the next. A zoom option allowed the output image to have arbitrary dimensions in terms of number of lines and number of samples on a line. Printouts of the computer program are given and the images that were digitized are shown.

  3. Advanced Mail Systems Scanner Technology. Executive Summary and Appendixes A-E.

    DTIC Science & Technology

    1980-10-01

    data base. 6. Perform color acquisition studies. 7. Investigate address and bar code reading. MASS MEMORY TECHNOLOGY 1. Collect performance data on...area of the 1728-by-2200 ICAS image memory and to transmit the data to any of the three color memories of the Comtal. Function table information can...for printing color images. The software allows the transmission of data from the ICAS frame-store memory via the MCU to the Dicomed. Software test

  4. Optimal color coding for compression of true color images

    NASA Astrophysics Data System (ADS)

    Musatenko, Yurij S.; Kurashov, Vitalij N.

    1998-11-01

    In the paper we present a method that improves lossy compression of true color or other multispectral images. The essence of the method is to project the initial color planes into the Karhunen-Loeve (KL) basis, which gives a completely decorrelated representation of the image, and to compress the basis functions instead of the planes. To do that, a new fast algorithm for constructing the true KL basis with low memory consumption is suggested, and our recently proposed scheme for finding the optimal losses of KL functions during compression is used. Compared to standard JPEG compression of CMYK images, the method provides a PSNR gain from 0.2 to 2 dB at typical compression ratios. Experimental results are obtained for high-resolution CMYK images. It is demonstrated that the presented scheme can work on common hardware.
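    The decorrelation at the heart of the method can be sketched with a plain eigendecomposition of the inter-plane covariance; this ignores the paper's fast low-memory basis construction and optimal-loss allocation.

```python
import numpy as np

def klt_decorrelate(planes):
    """Project color planes onto the Karhunen-Loeve basis (eigenvectors of
    their covariance matrix) so the transformed planes are decorrelated
    and can be compressed independently."""
    X = planes.reshape(planes.shape[0], -1).astype(float)
    X = X - X.mean(axis=1, keepdims=True)
    cov = X @ X.T / X.shape[1]
    _, eigvecs = np.linalg.eigh(cov)
    return eigvecs.T @ X, eigvecs        # KL planes + basis (for inversion)

# Synthetic correlated planes standing in for color components.
rng = np.random.default_rng(2)
base = rng.random((1, 4096))
planes = np.vstack([base + 0.1 * rng.random((1, 4096)) for _ in range(3)])

kl_planes, basis = klt_decorrelate(planes)
out_cov = kl_planes @ kl_planes.T / kl_planes.shape[1]
```

    The covariance of the transformed planes is diagonal, and because the basis is orthogonal the centered planes are recovered exactly by `basis @ kl_planes`.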

  5. Apparent Brightness and Topography Images of Vibidia Crater

    NASA Image and Video Library

    2012-03-09

    The left-hand image from NASA's Dawn spacecraft shows the apparent brightness of asteroid Vesta's surface. The right-hand image is based on this apparent brightness image, with a color-coded height representation of the topography overlain onto it.

  6. World Globes, Shaded Relief and Colored Height

    NASA Image and Video Library

    2003-08-21

    These images of the world were generated with data from the Shuttle Radar Topography Mission (SRTM). The SRTM Project has recently released a new global data set called SRTM30, where the original one-arcsecond resolution in latitude and longitude (about 30 meters, or 98 feet, at the equator) was reduced to 30 arcseconds (about 928 meters, or 3,045 feet). These images were created from that data set and show the Earth as it would be viewed from a point in space centered over the Americas, Africa and the western Pacific. Two visualization methods were combined to produce the image: shading and color coding of topographic height. The shade image was derived by computing topographic slope in the northwest-southeast direction, so that northwest slopes appear bright and southeast slopes appear dark. Color coding is directly related to topographic height, with green at the lower elevations, rising through yellow and tan, to white at the highest elevations. Elevation data used in this image were acquired by the Shuttle Radar Topography Mission aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. http://photojournal.jpl.nasa.gov/catalog/PIA03394

  7. A flower image retrieval method based on ROI feature.

    PubMed

    Hong, An-Xiang; Chen, Gang; Li, Jun-Li; Chi, Zhe-Ru; Zhang, Dan

    2004-07-01

    Flower image retrieval is a very important step for computer-aided plant species recognition. In this paper, we propose an efficient segmentation method based on color clustering and domain knowledge to extract flower regions from flower images. For flower retrieval, we use the color histogram of a flower region to characterize the color features of a flower and two shape-based feature sets, Centroid-Contour Distance (CCD) and Angle Code Histogram (ACH), to characterize the shape features of a flower contour. Experimental results showed that our flower region extraction method based on color clustering and domain knowledge can produce accurate flower regions. Flower retrieval results on a database of 885 flower images collected from 14 plant species showed that our Region-of-Interest (ROI) based retrieval approach using both color and shape features can perform better than a method based on the global color histogram proposed by Swain and Ballard (1991) and a method based on domain knowledge-driven segmentation and color names proposed by Das et al. (1999).

  8. Color inference in visual communication: the meaning of colors in recycling.

    PubMed

    Schloss, Karen B; Lessard, Laurent; Walmsley, Charlotte S; Foley, Kathleen

    2018-01-01

    People interpret abstract meanings from colors, which makes color a useful perceptual feature for visual communication. This process is complicated, however, because there is seldom a one-to-one correspondence between colors and meanings. One color can be associated with many different concepts (one-to-many mapping) and many colors can be associated with the same concept (many-to-one mapping). We propose that to interpret color-coding systems, people perform assignment inference to determine how colors map onto concepts. We studied assignment inference in the domain of recycling. Participants saw images of colored but unlabeled bins and were asked to indicate which bins they would use to discard different kinds of recyclables and trash. In Experiment 1, we tested two hypotheses for how people perform assignment inference. The local assignment hypothesis predicts that people simply match objects with their most strongly associated color. The global assignment hypothesis predicts that people also account for the association strengths between all other objects and colors within the scope of the color-coding system. Participants discarded objects in bins that optimized the color-object associations of the entire set, which is consistent with the global assignment hypothesis. This sometimes resulted in discarding objects in bins whose colors were weakly associated with the object, even when there was a stronger associated option available. In Experiment 2, we tested different methods for encoding color-coding systems and found that people were better at assignment inference when color sets simultaneously maximized the association strength between assigned color-object pairings while minimizing associations between unassigned pairings. Our study provides an approach for designing intuitive color-coding systems that facilitate communication through visual media such as graphs, maps, signs, and artifacts.

  9. A distributed code for color in natural scenes derived from center-surround filtered cone signals

    PubMed Central

    Kellner, Christian J.; Wachtler, Thomas

    2013-01-01

    In the retina of trichromatic primates, chromatic information is encoded in an opponent fashion and transmitted to the lateral geniculate nucleus (LGN) and visual cortex via parallel pathways. Chromatic selectivities of neurons in the LGN form two separate clusters, corresponding to two classes of cone opponency. In the visual cortex, however, the chromatic selectivities are more distributed, which is in accordance with a population code for color. Previous studies of cone signals in natural scenes typically found opponent codes with chromatic selectivities corresponding to two directions in color space. Here we investigated how the non-linear spatio-chromatic filtering in the retina influences the encoding of color signals. Cone signals were derived from hyper-spectral images of natural scenes and preprocessed by center-surround filtering and rectification, resulting in parallel ON and OFF channels. Independent Component Analysis (ICA) on these signals yielded a highly sparse code with basis functions that showed spatio-chromatic selectivities. In contrast to previous analyses of linear transformations of cone signals, chromatic selectivities were not restricted to two main chromatic axes, but were more continuously distributed in color space, similar to the population code of color in the early visual cortex. Our results indicate that spatio-chromatic processing in the retina leads to a more distributed and more efficient code for natural scenes. PMID:24098289

  10. The research on multi-projection correction based on color coding grid array

    NASA Astrophysics Data System (ADS)

    Yang, Fan; Han, Cheng; Bai, Baoxing; Zhang, Chao; Zhao, Yunxiu

    2017-10-01

    Multi-channel projection systems suffer from drawbacks such as poor timeliness and heavy manual intervention. To address these problems, this paper proposes a multi-projector correction technique based on a color-coded grid array. First, a color structured-light stripe pattern is generated using De Bruijn sequences, and the feature information of the stripe image is meshed into a grid. White solid circles centered on the colored grid intersections are constructed as the feature sample set of the projected images; the resulting set is both perceptually localizable and robust to noise. Second, a subpixel geometric mapping between the projection screen and each projector is established through structured-light encoding and decoding based on the color array, and this mapping is used to solve each projector's homography matrix. Finally, because brightness inconsistency in the overlap region of multi-channel projection severely degrades the corrected image with respect to the observer's visual needs, a luminance fusion correction algorithm is applied to obtain a visually consistent display. Experimental results show that this method not only corrects distortion of the multi-projection screen and luminance interference in the overlapping region effectively, but also improves the calibration efficiency of the multi-channel projection system and reduces the maintenance cost of intelligent multi-projection systems.
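    De Bruijn sequences are what make the stripe colors locally unique: over a k-color alphabet, every window of n consecutive colors occurs exactly once, so a camera seeing any n stripes can localize them in the pattern. A standard construction (the FKM algorithm), independent of this paper's specific encoding:

```python
def de_bruijn(k: int, n: int):
    """De Bruijn sequence B(k, n) via the Lyndon-word (FKM) algorithm:
    every length-n window over a k-symbol alphabet appears exactly once
    (cyclically) in the returned sequence of length k**n."""
    a = [0] * k * n
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq
```

    For example, `de_bruijn(2, 3)` returns `[0, 0, 0, 1, 0, 1, 1, 1]`, in which each of the eight 3-bit windows appears exactly once when read cyclically; in a stripe pattern the symbols would index stripe colors.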

  11. An interactive method for digitizing zone maps

    NASA Technical Reports Server (NTRS)

    Giddings, L. E.; Thompson, E. J.

    1975-01-01

    A method is presented for digitizing maps that consist of zones, such as contour or climatic zone maps. A color-coded map is prepared by any convenient process. The map is then read into memory of an Image 100 computer by means of its table scanner, using colored filters. Zones are separated and stored in themes, using standard classification procedures. Thematic data are written on magnetic tape and these data, appropriately coded, are combined to make a digitized image on tape. Step-by-step procedures are given for digitization of crop moisture index maps with this procedure. In addition, a complete example of the digitization of a climatic zone map is given.

  12. Digital visual communications using a Perceptual Components Architecture

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1991-01-01

    The next era of space exploration will generate extraordinary volumes of image data, and management of this image data is beyond current technical capabilities. We propose a strategy for coding visual information that exploits the known properties of early human vision. This Perceptual Components Architecture codes images and image sequences in terms of discrete samples from limited bands of color, spatial frequency, orientation, and temporal frequency. This spatiotemporal pyramid offers efficiency (low bit rate), variable resolution, device independence, error-tolerance, and extensibility.

  13. Laser-based volumetric flow visualization by digital color imaging of a spectrally coded volume.

    PubMed

    McGregor, T J; Spence, D J; Coutts, D W

    2008-01-01

    We present the framework for volumetric laser-based flow visualization instrumentation using a spectrally coded volume to achieve three-component three-dimensional particle velocimetry. By delivering light from a frequency doubled Nd:YAG laser with an optical fiber, we exploit stimulated Raman scattering within the fiber to generate a continuum spanning the visible spectrum from 500 to 850 nm. We shape and disperse the continuum light to illuminate a measurement volume of 20 x 10 x 4 mm(3), in which light sheets of differing spectral properties overlap to form an unambiguous color variation along the depth direction. Using a digital color camera we obtain images of particle fields in this volume. We extract the full spatial distribution of particles with depth inferred from particle color. This paper provides a proof of principle of this instrument, examining the spatial distribution of a static field and a spray field of water droplets ejected by the nozzle of an airbrush.

  14. Water Around a Carbon Star

    NASA Image and Video Library

    2010-09-01

    This ESA Herschel image shows IRC+10216, also known as CW Leonis, a star rich in carbon where astronomers were surprised to find water. This color-coded image shows the star, surrounded by a clumpy envelope of dust.

  15. A Valentine from Vesta

    NASA Image and Video Library

    2012-02-14

    This image from NASA's Dawn spacecraft is based on a framing camera image overlain by a color-coded height representation of topography. This heart-shaped hollow is roughly 10 kilometers (6 miles) across at its widest point.

  16. Advanced imaging techniques in brain tumors

    PubMed Central

    2009-01-01

    Abstract Perfusion, permeability and magnetic resonance spectroscopy (MRS) are now widely used in the research and clinical settings. In the clinical setting, qualitative, semi-quantitative and quantitative approaches, ranging from review of color-coded maps and region-of-interest analysis to analysis of signal intensity curves, are being applied in practice. There are several pitfalls with all of these approaches. Some of these shortcomings are reviewed, such as the relatively low sensitivity of metabolite ratios from MRS and the effect of leakage on the appearance of color-coded maps from dynamic susceptibility contrast (DSC) magnetic resonance (MR) perfusion imaging, along with the correction and normalization methods that can be applied. Combining and applying these different imaging techniques in a multi-parametric, algorithmic fashion in the clinical setting can be shown to increase diagnostic specificity and confidence. PMID:19965287

  17. Mars Atmospheric Temperature and Dust Storm Tracking

    NASA Image and Video Library

    2016-06-09

    This graphic overlays Martian atmospheric temperature data as curtains over an image of Mars taken during a regional dust storm. The temperature profiles extend from the surface to about 50 miles (80 kilometers) up. Temperatures are color coded, ranging from minus 243 degrees Fahrenheit (minus 153 degrees Celsius) where coded purple to minus 9 F (minus 23 C) where coded red. The temperature data and global image were both recorded on Oct. 18, 2014, by instruments on NASA's Mars Reconnaissance Orbiter: Mars Climate Sounder and Mars Color Imager. On that day a regional dust storm was active in the Acidalia Planitia region of northern Mars, at the upper center of this image. A storm from this area typically travels south and grows into a large regional storm in the southern hemisphere during southern spring. That type of southern-spring storm and two other large regional dust storms repeat as a three-storm series most Martian years. The pattern has been identified from the storms' effects on atmospheric temperature in a layer about 16 miles (25 kilometers) above the surface. http://photojournal.jpl.nasa.gov/catalog/PIA20747

  18. Quantum image encryption based on restricted geometric and color transformations

    NASA Astrophysics Data System (ADS)

    Song, Xian-Hua; Wang, Shen; Abd El-Latif, Ahmed A.; Niu, Xia-Mu

    2014-08-01

    A novel encryption scheme for quantum images based on restricted geometric and color transformations is proposed. The new strategy comprises efficient permutation and diffusion properties for quantum image encryption. The core idea of the permutation stage is to scramble the codes of the pixel positions through restricted geometric transformations. Then, a new quantum diffusion operation is implemented on the permutated quantum image based on restricted color transformations. The encryption keys of the two stages are generated by two sensitive chaotic maps, which ensures the security of the scheme. The final step, measurement, is built on a probabilistic model. Statistical analyses of the experimental results demonstrate significant improvements in favor of the proposed approach.

  19. Wavelet-based compression of M-FISH images.

    PubMed

    Hua, Jianping; Xiong, Zixiang; Wu, Qiang; Castleman, Kenneth R

    2005-05-01

    Multiplex fluorescence in situ hybridization (M-FISH) is a recently developed technology that enables multi-color chromosome karyotyping for molecular cytogenetic analysis. Each M-FISH image set consists of a number of aligned images of the same chromosome specimen captured at different optical wavelength. This paper presents embedded M-FISH image coding (EMIC), where the foreground objects/chromosomes and the background objects/images are coded separately. We first apply critically sampled integer wavelet transforms to both the foreground and the background. We then use object-based bit-plane coding to compress each object and generate separate embedded bitstreams that allow continuous lossy-to-lossless compression of the foreground and the background. For efficient arithmetic coding of bit planes, we propose a method of designing an optimal context model that specifically exploits the statistical characteristics of M-FISH images in the wavelet domain. Our experiments show that EMIC achieves nearly twice as much compression as Lempel-Ziv-Welch coding. EMIC also performs much better than JPEG-LS and JPEG-2000 for lossless coding. The lossy performance of EMIC is significantly better than that of coding each M-FISH image with JPEG-2000.
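    The critically sampled integer wavelet transform underlying EMIC can be illustrated with its simplest instance, the one-level integer Haar (S) transform: it maps integers to integers and is exactly invertible, the property that permits continuous lossy-to-lossless coding. This is a one-dimensional sketch only; EMIC's 2-D transforms, object-based bit-plane coding, and context modeling are not reproduced here:

    ```python
    def s_transform(row):
        """One level of the integer Haar (S) transform on an even-length
        list of integers: returns (lowpass averages, highpass differences)."""
        low = [(a + b) // 2 for a, b in zip(row[::2], row[1::2])]
        high = [a - b for a, b in zip(row[::2], row[1::2])]
        return low, high

    def s_inverse(low, high):
        """Exact integer inverse of s_transform (lossless reconstruction)."""
        out = []
        for l, h in zip(low, high):
            a = l + (h + 1) // 2
            out.extend([a, a - h])
        return out

    row = [12, 200, 7, 7, 90, 3, 255, 0]
    low, high = s_transform(row)
    restored = s_inverse(low, high)      # identical to row, bit for bit
    ```

    Because the rounding in the forward transform is undone exactly by the inverse, truncating bit planes of `high` gives lossy coding while keeping all of them gives lossless coding, which is the embedded behavior the abstract describes.
    
    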

  20. Extracting the Data From the LCM vk4 Formatted Output File

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendelberger, James G.

    These are slides about extracting the data from the LCM vk4 formatted output file. The following is covered: the vk4 file produced by the Keyence VK software; custom analysis (there is no off-the-shelf way to read the file); reading the binary data in a vk4 file; the various offsets, in decimal; finding the height image data directly in MATLAB; the binary output at the beginning of the height image data; color image information; color image binary data; color image decimal and binary data; MATLAB code to read a vk4 file (choose a file, read the file, compute offsets, read the optical image and the laser optical image, read and compute the laser intensity image, read the height image, timing, display the height image, display the laser intensity image, display the RGB laser optical images, display the RGB optical images, display the beginning data and save images to the workspace, gamma correction subroutine); reading intensity from the vk4 file (linear in the low range, linear in the high range); gamma correction for vk4 files; computing the gamma intensity correction; and observations.
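    The slides' central point, that there is no off-the-shelf reader and the data must be pulled from known byte offsets, amounts to an offset-seek-unpack loop. The layout below (magic bytes, a single uint32 offset, then width, height, and uint32 height values, all little-endian) is entirely hypothetical and far simpler than the real vk4 format, whose offsets are documented only in the slides; the sketch is also in Python rather than the slides' MATLAB:

    ```python
    import io
    import struct

    def read_height_image(buf):
        """Parse a toy binary layout: 4 magic bytes, a uint32 offset to the
        height block, and at that offset width, height, then w*h uint32s."""
        f = io.BytesIO(buf)
        if f.read(4) != b"VK4\x00":          # hypothetical magic, not the real one
            raise ValueError("not a recognized file")
        (height_off,) = struct.unpack("<I", f.read(4))
        f.seek(height_off)                    # jump to the height-image block
        w, h = struct.unpack("<II", f.read(8))
        data = struct.unpack("<%dI" % (w * h), f.read(4 * w * h))
        return [list(data[r * w:(r + 1) * w]) for r in range(h)]

    # Build a synthetic "file" in memory to demonstrate the pattern
    payload = struct.pack("<II", 3, 2) + struct.pack("<6I", *range(6))
    blob = b"VK4\x00" + struct.pack("<I", 8) + payload   # header is 8 bytes
    img = read_height_image(blob)            # a 2-row, 3-column height image
    ```

    The same seek/unpack pattern applies to the optical, laser intensity, and color image blocks once their offsets are known.
    
    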

  1. Topographic Ceres Map With Crater Names

    NASA Image and Video Library

    2015-07-28

    This color-coded map from NASA's Dawn mission shows the highs and lows of topography on the surface of dwarf planet Ceres. It is labeled with names of features approved by the International Astronomical Union. Occator, the crater containing Ceres' mysterious bright spots, is named after the Roman agricultural deity of harrowing, a method of leveling soil. The bright spots retain their bright appearance in this map, although they are color-coded with the same green elevation as the crater floor in which they sit. The color scale extends from about 5 miles (7.5 kilometers) below the surface, in indigo, to 5 miles (7.5 kilometers) above the surface, in white. The topographic map was constructed by analyzing images from Dawn's framing camera taken at varying sun and viewing angles. The map was combined with an image mosaic of Ceres and rendered as a simple cylindrical projection. http://photojournal.jpl.nasa.gov/catalog/PIA19606

  2. Multiple Instruments Used for Mars Carbon Estimate

    NASA Image and Video Library

    2015-09-02

    Researchers estimating the amount of carbon held in the ground at the largest known carbonate-containing deposit on Mars utilized data from three different NASA Mars orbiters. Each image in this pair covers the same area about 36 miles (58 kilometers) wide in the Nili Fossae plains region of Mars' northern hemisphere. The tally of carbon content in the rocks of this region is a key piece in solving a puzzle of how the Martian atmosphere has changed over time. Carbon dioxide from the atmosphere on early Mars reacted with surface rocks to form carbonate, thinning the atmosphere. The image on the left presents data from the Thermal Emission Imaging System (THEMIS) instrument on NASA's Mars Odyssey orbiter. The color coding indicates thermal inertia -- the property of how quickly a surface material heats up or cools off. Sand, for example (blue hues), cools off quicker after sundown than bedrock (red hues) does. The color coding in the image on the right presents data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) instrument on NASA's Mars Reconnaissance Orbiter. From the brightness at many different wavelengths, CRISM data can indicate what minerals are present on the surface. In the color coding used here, green hues are consistent with carbonate-bearing materials, while brown or yellow hues are olivine-bearing sands and locations with purple hues are basaltic in composition. The gray scale base map is a mosaic of daytime THEMIS infrared images. Annotations point to areas with different surface compositions. The scale bar indicates 20 kilometers (12.4 miles). http://photojournal.jpl.nasa.gov/catalog/PIA19816

  3. QR images: optimized image embedding in QR codes.

    PubMed

    Garateguy, Gonzalo J; Arce, Gonzalo R; Lau, Daniel L; Villarreal, Ofelia P

    2014-07-01

    This paper introduces the concept of QR images, an automatic method to embed QR codes into color images with a bounded probability of detection error. These embeddings are compatible with standard decoding applications and can be applied to any color image with full area coverage. The QR information bits are encoded into the luminance values of the image, taking advantage of the immunity of QR readers against local luminance disturbances. To mitigate the visual distortion of the QR image, the algorithm utilizes halftoning masks for the selection of modified pixels and nonlinear programming techniques to locally optimize luminance levels. A tractable model for the probability of error is developed, and models of the human visual system are considered in the quality metric used to optimize the luminance levels of the QR image. To minimize the processing time, the proposed optimization techniques consider the mechanics of a common binarization method and are designed to be amenable to parallel implementation. Experimental results show the graceful degradation of the decoding rate and the perceptual quality as a function of the embedding parameters. A visual comparison between the proposed and existing methods is presented.
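    The core mechanism, encoding QR bits in the luminance of a small set of pixels while leaving the rest of the image for visual content, can be sketched in a toy form. This version modifies only one center pixel per module by a bounded amount and decodes with a single global threshold; the halftoning masks, nonlinear programming, and HVS-based quality metric of the paper are not reproduced:

    ```python
    def embed_qr(image, qr, delta=40):
        """Push the center pixel of each QR module toward dark (bit = 1) or
        bright (bit = 0) by at most `delta` gray levels; all other pixels
        keep their original visual content."""
        s = len(image) // len(qr)            # pixels per module side
        c = s // 2                           # center offset within a module
        out = [row[:] for row in image]
        for i, qrow in enumerate(qr):
            for j, bit in enumerate(qrow):
                y, x = i * s + c, j * s + c
                out[y][x] = (max(0, image[y][x] - delta) if bit
                             else min(255, image[y][x] + delta))
        return out

    def decode_qr(image, n, s, thresh=128):
        """Re-read modules from their center pixels with one global threshold.
        (Real QR readers binarize locally, which is the luminance immunity
        the paper exploits; delta must carry the pixel across the threshold.)"""
        c = s // 2
        return [[1 if image[i * s + c][j * s + c] < thresh else 0
                 for j in range(n)] for i in range(n)]

    n, s = 4, 5
    host = [[128] * (n * s) for _ in range(n * s)]     # flat mid-gray host image
    qr = [[(i + j) % 2 for j in range(n)] for i in range(n)]
    marked = embed_qr(host, qr)
    ```

    Only n*n pixels change, each by at most `delta`, which is why the embedding stays visually unobtrusive while remaining machine-readable.
    
    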

  4. [True color accuracy in digital forensic photography].

    PubMed

    Ramsthaler, Frank; Birngruber, Christoph G; Kröll, Ann-Katrin; Kettner, Mattias; Verhoff, Marcel A

    2016-01-01

    Forensic photographs not only need to be unaltered and authentic and capture context-relevant images, along with certain minimum requirements for image sharpness and information density, but color accuracy also plays an important role, for instance, in the assessment of injuries or taphonomic stages, or in the identification and evaluation of traces from photos. The perception of color not only varies subjectively from person to person, but as a discrete property of an image, color in digital photos is also to a considerable extent influenced by technical factors such as lighting, acquisition settings, camera, and output medium (print, monitor). For these reasons, consistent color accuracy has so far been limited in digital photography. Because images usually contain a wealth of color information, especially for complex or composite colors or shades of color, and the wavelength-dependent sensitivity to factors such as light and shadow may vary between cameras, the usefulness of issuing general recommendations for camera capture settings is limited. Our results indicate that true image colors can best and most realistically be captured with the SpyderCheckr technical calibration tool for digital cameras tested in this study. Apart from aspects such as the simplicity and quickness of the calibration procedure, a further advantage of the tool is that the results are independent of the camera used and can also be used for the color management of output devices such as monitors and printers. The SpyderCheckr color-code patches allow true colors to be captured more realistically than with a manual white balance tool or an automatic flash. We therefore recommend that the use of a color management tool should be considered for the acquisition of all images that demand high true color accuracy (in particular in the setting of injury documentation).

  5. Secure information display with limited viewing zone by use of multi-color visual cryptography.

    PubMed

    Yamamoto, Hirotsugu; Hayasaki, Yoshio; Nishida, Nobuo

    2004-04-05

    We propose a display technique that ensures security of visual information by use of visual cryptography. A displayed image appears as a completely random pattern unless viewed through a decoding mask. The display has a limited viewing zone with the decoding mask. We have developed a multi-color encryption code set. Eight colors are represented in combinations of a displayed image composed of red, green, blue, and black subpixels and a decoding mask composed of transparent and opaque subpixels. Furthermore, we have demonstrated secure information display by use of an LCD panel.
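    The eight-color code set builds on classic visual cryptography, whose principle is easiest to see in the standard binary (2,2) scheme sketched below: each secret pixel expands into a 2x2 subpixel block, either share alone is uniformly random, and stacking the two shares (a logical OR of opaque subpixels) reveals the secret as a contrast difference. The paper's multi-color RGB/black subpixel combinations are not reproduced here:

    ```python
    import random

    def make_shares(secret):
        """(2,2) visual cryptography for a binary image (1 = opaque/black).
        Each secret pixel expands to a 2x2 block of subpixels in each share."""
        patterns = [(1, 0, 0, 1), (0, 1, 1, 0)]   # two complementary 2x2 layouts
        h, w = len(secret), len(secret[0])
        s1 = [[0] * (2 * w) for _ in range(2 * h)]
        s2 = [[0] * (2 * w) for _ in range(2 * h)]
        for y in range(h):
            for x in range(w):
                p = random.choice(patterns)        # share 1 is always random
                # white pixel: identical blocks; black pixel: complementary blocks
                q = p if secret[y][x] == 0 else tuple(1 - v for v in p)
                for i in range(2):
                    for j in range(2):
                        s1[2 * y + i][2 * x + j] = p[2 * i + j]
                        s2[2 * y + i][2 * x + j] = q[2 * i + j]
        return s1, s2

    secret = [[1, 0], [0, 1]]
    s1, s2 = make_shares(secret)
    # "Stacking" the display and the decoding mask: opaque wins (logical OR)
    stacked = [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(s1, s2)]
    ```

    A black secret pixel stacks to 4 opaque subpixels, a white one to 2, so the secret appears as a contrast pattern only when the mask overlays the display, matching the limited-viewing-zone behavior the abstract describes.
    
    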

  6. Format preferences of district attorneys for post-mortem medical imaging reports: understandability, cost effectiveness, and suitability for the courtroom: a questionnaire based study.

    PubMed

    Ampanozi, Garyfalia; Zimmermann, David; Hatch, Gary M; Ruder, Thomas D; Ross, Steffen; Flach, Patricia M; Thali, Michael J; Ebert, Lars C

    2012-05-01

    The objective of this study was to explore the perception of the legal authorities regarding different report types and visualization techniques for post-mortem radiological findings. A standardized digital questionnaire was developed, and the district attorneys in the catchment area of the affiliated Forensic Institute were requested to evaluate four different types of forensic imaging reports based on four case examples. Each case was described in four different report types (short written report only, gray-scale CT image with figure caption, color-coded CT image with figure caption, 3D reconstruction with figure caption). The survey participants were asked to evaluate those types of reports regarding understandability, cost effectiveness and overall appropriateness for the courtroom. 3D reconstructions and color-coded CT images accompanied by a written report were preferred with regard to understandability and cost effectiveness. 3D reconstructions of the forensic findings were rated as most appropriate for the courtroom. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  7. Visual communications and image processing '92; Proceedings of the Meeting, Boston, MA, Nov. 18-20, 1992

    NASA Astrophysics Data System (ADS)

    Maragos, Petros

    The topics discussed at the conference include hierarchical image coding, motion analysis, feature extraction and image restoration, video coding, and morphological and related nonlinear filtering. Attention is also given to vector quantization, morphological image processing, fractals and wavelets, architectures for image and video processing, image segmentation, biomedical image processing, and model-based analysis. Papers are presented on affine models for motion and shape recovery, filters for directly detecting surface orientation in an image, tracking of unresolved targets in infrared imagery using a projection-based method, adaptive-neighborhood image processing, and regularized multichannel restoration of color images using cross-validation. (For individual items see A93-20945 to A93-20951)

  8. Kepler's Supernova Remnant: A View from Chandra X-Ray Observatory

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site] Figure 1

    Each top panel in the composite above shows the entire remnant. Each color in the composite represents a different region of the electromagnetic spectrum, from X-rays to infrared light. The X-ray and infrared data cannot be seen with the human eye. Astronomers have color-coded those data so they can be seen in these images.

    The bottom panels are close-up views of the remnant. In the bottom, center image, Hubble sees fine details in the brightest, densest areas of gas. The region seen in these images is outlined in the top, center panel.

    The images indicate that the bubble of gas that makes up the supernova remnant appears different in various types of light. Chandra reveals the hottest gas [colored blue and colored green], which radiates in X-rays. The blue color represents the higher-energy gas; the green, the lower-energy gas. Hubble shows the brightest, densest gas [colored yellow], which appears in visible light. Spitzer unveils heated dust [colored red], which radiates in infrared light.

  9. Color-coded automated signal intensity curves for detection and characterization of breast lesions: preliminary evaluation of a new software package for integrated magnetic resonance-based breast imaging.

    PubMed

    Pediconi, Federica; Catalano, Carlo; Venditti, Fiammetta; Ercolani, Mauro; Carotenuto, Luigi; Padula, Simona; Moriconi, Enrica; Roselli, Antonella; Giacomelli, Laura; Kirchin, Miles A; Passariello, Roberto

    2005-07-01

    The objective of this study was to evaluate the value of a color-coded automated signal intensity curve software package for contrast-enhanced magnetic resonance mammography (CE-MRM) in patients with suspected breast cancer. Thirty-six women with suspected breast cancer based on mammographic and sonographic examinations were preoperatively evaluated on CE-MRM. CE-MRM was performed on a 1.5-T magnet using a 2D Flash dynamic T1-weighted sequence. A dosage of 0.1 mmol/kg of Gd-BOPTA was administered at a flow rate of 2 mL/s followed by 10 mL of saline. Images were analyzed with the new software package and separately with a standard display method. Statistical comparison was performed of the confidence for lesion detection and characterization with the 2 methods and of the diagnostic accuracy for characterization compared with histopathologic findings. At pathology, 54 malignant lesions and 14 benign lesions were evaluated. All 68 (100%) lesions were detected with both methods and good correlation with histopathologic specimens was obtained. Confidence for both detection and characterization was significantly (P < or = 0.025) better with the color-coded method, although no difference (P > 0.05) between the methods was noted in terms of the sensitivity, specificity, and overall accuracy for lesion characterization. Excellent agreement between the 2 methods was noted for both the determination of lesion size (kappa = 0.77) and determination of SI/T curves (kappa = 0.85). The novel color-coded signal intensity curve software allows lesions to be visualized as false color maps that correspond to conventional signal intensity time curves. Detection and characterization of breast lesions with this method is quick and easily interpretable.

  10. Digital watermarking algorithm research of color images based on quaternion Fourier transform

    NASA Astrophysics Data System (ADS)

    An, Mali; Wang, Weijiang; Zhao, Zhen

    2013-10-01

    A watermarking algorithm of color images based on the quaternion Fourier Transform (QFFT) and improved quantization index algorithm (QIM) is proposed in this paper. The original image is transformed by QFFT, the watermark image is processed by compression and quantization coding, and then the processed watermark image is embedded into the components of the transformed original image. It achieves embedding and blind extraction of the watermark image. The experimental results show that the watermarking algorithm based on the improved QIM algorithm with distortion compensation achieves a good tradeoff between invisibility and robustness, and better robustness for the attacks of Gaussian noises, salt and pepper noises, JPEG compression, cropping, filtering and image enhancement than the traditional QIM algorithm.

  11. Color image analysis of contaminants and bacteria transport in porous media

    NASA Astrophysics Data System (ADS)

    Rashidi, Mehdi; Dehmeshki, Jamshid; Daemi, Mohammad F.; Cole, Larry; Dickenson, Eric

    1997-10-01

    Transport of contaminants and bacteria in aqueous, heterogeneous, saturated porous systems has been studied experimentally using a novel fluorescent microscopic imaging technique. The approach involves color visualization and quantification of bacterium and contaminant distributions within a transparent porous column. By introducing stained bacteria and an organic dye as a contaminant into the column and illuminating the porous regions with a planar laser sheet, contaminant and bacterial transport processes through the porous medium can be observed and measured microscopically. A computer-controlled color CCD camera is used to record the fluorescent images as a function of time. These images are recorded by a frame-accurate high-resolution VCR and are then analyzed using a color image analysis code written in our laboratories. The color images are digitized in this way, and simultaneous concentration and velocity distributions of both contaminant and bacterium are evaluated as a function of time and pore characteristics. The approach provides a unique dynamic probe for observing these transport processes microscopically. These results are extremely valuable in in-situ bioremediation problems, since microscopic particle-contaminant-bacterium interactions are the key to understanding and optimizing these processes.

  12. Just Noticeable Distortion Model and Its Application in Color Image Watermarking

    NASA Astrophysics Data System (ADS)

    Liu, Kuo-Cheng

    In this paper, a perceptually adaptive watermarking scheme for color images is proposed in order to achieve robustness and transparency. A new just noticeable distortion (JND) estimator for color images is first designed in the wavelet domain. The key issue of the JND model is to effectively integrate visual masking effects. The estimator extends the perceptual model used in image coding for grayscale images. In addition to the visual masking effects evaluated coefficient by coefficient from the luminance content and the texture of grayscale images, the crossed masking effect given by the interaction between luminance and chrominance components and the effect given by the variance within the local region of the target coefficient are investigated, so that the visibility threshold for the human visual system (HVS) can be evaluated. In a locally adaptive fashion based on the wavelet decomposition, the estimator applies to all subbands of the luminance and chrominance components of color images and is used to measure the visibility of wavelet quantization errors. The subband JND profiles are then incorporated into the proposed color image watermarking scheme. Robustness and transparency are achieved by embedding the maximum-strength watermark that still maintains the perceptually lossless quality of the watermarked color image. Simulation results show that the proposed scheme, which inserts watermarks into both luminance and chrominance components, is more robust than the existing scheme while retaining the watermark transparency.
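    The scheme's principle, keeping every embedding change at or below a visibility threshold computed from local image content, can be sketched with a deliberately crude pixel-domain, luminance-only JND. The threshold formula below is an invented placeholder, whereas the paper estimates thresholds for wavelet coefficients of both luminance and chrominance components with several interacting masking effects:

    ```python
    def luminance_jnd(p):
        """Invented placeholder threshold: mid-gray tolerates the least
        change (a rough luminance-masking shape), never less than 3 levels."""
        return 3 + abs(p - 128) // 16

    def embed(image, bits):
        """Add or subtract exactly one JND per pixel to carry one bit each,
        so no change ever exceeds its local visibility threshold."""
        flat = [p for row in image for p in row]
        marked = [p + (luminance_jnd(p) if b else -luminance_jnd(p))
                  for p, b in zip(flat, bits)]
        w = len(image[0])
        return [marked[i:i + w] for i in range(0, len(marked), w)]

    def extract(original, marked):
        """Non-blind extraction: compare marked pixels against the original."""
        return [1 if m > o else 0
                for ro, rm in zip(original, marked) for o, m in zip(ro, rm)]

    image = [[18, 130, 240], [128, 64, 200]]
    bits = [1, 0, 1, 1, 0, 0]
    marked = embed(image, bits)
    ```

    The design point this illustrates is that watermark strength is maximized per location rather than globally: each pixel carries as large a change as its own threshold allows, and no more.
    
    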

  13. Internal Carotid Artery Hypoplasia: Role of Color-Coded Carotid Duplex Sonography.

    PubMed

    Chen, Pei-Ya; Liu, Hung-Yu; Lim, Kun-Eng; Lin, Shinn-Kuang

    2015-10-01

    The purpose of this study was to determine the role of color-coded carotid duplex sonography for diagnosis of internal carotid artery hypoplasia. We retrospectively reviewed 25,000 color-coded carotid duplex sonograms in our neurosonographic database to establish more diagnostic criteria for internal carotid artery hypoplasia. A definitive diagnosis of internal carotid artery hypoplasia was made in 9 patients. Diagnostic findings on color-coded carotid duplex imaging include a long segmental small-caliber lumen (52% diameter) with markedly decreased flow (13% flow volume) in the affected internal carotid artery relative to the contralateral side but without intraluminal lesions. Indirect findings included markedly increased total flow volume (an increase of 133%) in both vertebral arteries, antegrade ipsilateral ophthalmic arterial flow, and a reduced vessel diameter with increased flow resistance in the ipsilateral common carotid artery. Ten patients with distal internal carotid artery dissection showed a similar color-coded duplex pattern, but the reductions in the internal and common carotid artery diameters and increase in collateral flow from the vertebral artery were less prominent than those in hypoplasia. The ipsilateral ophthalmic arterial flow was retrograde in 40% of patients with distal internal carotid artery dissection. In addition, thin-section axial and sagittal computed tomograms of the skull base could show the small diameter of the carotid canal in internal carotid artery hypoplasia and help distinguish hypoplasia from distal internal carotid artery dissection. Color-coded carotid duplex sonography provides important clues for establishing a diagnosis of internal carotid artery hypoplasia. A hypoplastic carotid canal can be shown by thin-section axial and sagittal skull base computed tomography to confirm the final diagnosis. © 2015 by the American Institute of Ultrasound in Medicine.

  14. Novel approach to multispectral image compression on the Internet

    NASA Astrophysics Data System (ADS)

    Zhu, Yanqiu; Jin, Jesse S.

    2000-10-01

    Still-image coding techniques such as JPEG have traditionally been applied to intra-plane images, and coding fidelity is the usual measure of the performance of intra-plane coding methods. In many imaging applications it is increasingly necessary to deal with multi-spectral images, such as color images. In this paper, a novel approach to multi-spectral image compression is proposed that uses transformations among planes for further compression of the spectral planes. Moreover, a mechanism for incorporating the human visual system into the transformation is provided to exploit psychovisual redundancy. The new technique for multi-spectral image compression, which is designed to be compatible with the JPEG standard, is demonstrated by extracting correlation among planes based on the human visual system, and achieves a high degree of compactness in the data representation and compression.

  15. Color-coded depth information in volume-rendered magnetic resonance angiography

    NASA Astrophysics Data System (ADS)

    Smedby, Orjan; Edsborg, Karin; Henriksson, John

    2004-05-01

    Magnetic Resonance Angiography (MRA) and Computed Tomography Angiography (CTA) data are usually presented using Maximum Intensity Projection (MIP) or Volume Rendering Technique (VRT), but these often fail to demonstrate a stenosis if the projection angle is not suitably chosen. In order to make vascular stenoses visible in projection images independent of the choice of viewing angle, a method is proposed to supplement these images with colors representing the local caliber of the vessel. After preprocessing the volume image with a median filter, segmentation is performed by thresholding, and a Euclidean distance transform is applied. The distance to the background from each voxel in the vessel is mapped to a color. These colors can either be rendered directly using MIP or be presented together with opacity information based on the original image using VRT. The method was tested in a synthetic dataset containing a cylindrical vessel with stenoses in varying angles. The results suggest that the visibility of stenoses is enhanced by the color information. In clinical feasibility experiments, the technique was applied to clinical MRA data. The results are encouraging and indicate that the technique can be used with clinical images.
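    The pipeline described (segment by thresholding, apply a Euclidean distance transform, map the local vessel caliber to a color) can be sketched in 2-D on a synthetic vessel with a stenosis. The brute-force transform and the two-color caliber map are illustrative simplifications of the paper's 3-D, median-filtered, volume-rendered processing; the radius values and color threshold are assumptions:

    ```python
    import math

    def distance_transform(mask):
        """Brute-force Euclidean distance transform: for each foreground
        pixel, the distance to the nearest background pixel."""
        h, w = len(mask), len(mask[0])
        bg = [(y, x) for y in range(h) for x in range(w) if not mask[y][x]]
        return [[min(math.hypot(y - by, x - bx) for by, bx in bg)
                 if mask[y][x] else 0.0
                 for x in range(w)] for y in range(h)]

    # Synthetic vessel along row 5: radius 3, narrowed to 1 at columns 9-11
    radius = lambda x: 1 if 9 <= x <= 11 else 3
    mask = [[1 if abs(y - 5) <= radius(x) else 0 for x in range(21)]
            for y in range(11)]
    dist = distance_transform(mask)

    # Local caliber per column = distance at the centerline; color-code it
    caliber = [dist[5][x] for x in range(21)]
    colors = ["red" if c < 3 else "green" for c in caliber]   # red flags the stenosis
    ```

    Because the caliber color rides along with the vessel in every projection, the stenosis stays visible regardless of viewing angle, which is the motivation given in the abstract.
    
    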

  16. Algorithm Science to Operations for the National Polar-orbiting Operational Environmental Satellite System (NPOESS) Visible/Infrared Imager/Radiometer Suite (VIIRS)

    NASA Technical Reports Server (NTRS)

    Duda, James L.; Barth, Suzanna C

    2005-01-01

    The VIIRS sensor provides measurements for 22 Environmental Data Records (EDRs) addressing the atmosphere, ocean surface temperature, ocean color, land parameters, aerosols, imaging for clouds and ice, and more. That is, the VIIRS collects visible and infrared radiometric data of the Earth's atmosphere, ocean, and land surfaces. Data types include atmospheric, clouds, Earth radiation budget, land/water and sea surface temperature, ocean color, and low-light imagery. This wide scope of measurements calls for the preparation of a multiplicity of Algorithm Theoretical Basis Documents (ATBDs) and, additionally, for intermediate products such as the cloud mask. Furthermore, the VIIRS interacts with three or more other sensors. This paper addresses selected and crucial elements of the process being used to convert and test an immense volume of maturing and changing science code into the initial operational source code in preparation for the launch of NPP. The integrity of the original science code is maintained and enhanced via baseline comparisons when re-hosted, in addition to multiple planned code performance reviews.

  17. Single-shot quantitative phase microscopy with color-multiplexed differential phase contrast (cDPC).

    PubMed

    Phillips, Zachary F; Chen, Michael; Waller, Laura

    2017-01-01

    We present a new technique for quantitative phase and amplitude microscopy from a single color image with coded illumination. Our system consists of a commercial brightfield microscope with one hardware modification: an inexpensive 3D-printed condenser insert. The method, color-multiplexed Differential Phase Contrast (cDPC), is a single-shot variant of Differential Phase Contrast (DPC), which recovers the phase of a sample from images with asymmetric illumination. We employ partially coherent illumination to achieve resolution corresponding to 2× the objective NA. Quantitative phase can then be used to synthesize DIC and phase contrast images or to extract shape and density. We demonstrate amplitude and phase recovery at camera-limited frame rates (50 fps) for various in vitro cell samples and C. elegans in a micro-fluidic channel.

  18. The location and recognition of anti-counterfeiting code image with complex background

    NASA Astrophysics Data System (ADS)

    Ni, Jing; Liu, Quan; Lou, Ping; Han, Ping

    2017-07-01

    The order of the cigarette market is a key issue in the tobacco business system. The anti-counterfeiting code, as an effective anti-counterfeiting technology, can identify counterfeit goods and effectively maintain the normal order of the market and consumers' rights and interests. Anti-counterfeiting code images obtained by the tobacco recognizer suffer from complex backgrounds, light interference, and other problems. To solve these problems, this paper proposes a locating method based on the SUSAN operator, combined with a sliding window and line scanning. To reduce the interference of background and noise, we extract the red component of the image and convert the color image into a gray image. For easily confused characters, a recognition-result correction step based on template matching is adopted to improve the recognition rate. With this method, the anti-counterfeiting code can be located and recognized correctly in images with complex backgrounds. Experimental results show the effectiveness and feasibility of the approach.

  19. An evaluation of the effect of JPEG, JPEG2000, and H.264/AVC on CQR codes decoding process

    NASA Astrophysics Data System (ADS)

    Vizcarra Melgar, Max E.; Farias, Mylène C. Q.; Zaghetto, Alexandre

    2015-02-01

    This paper presents a binary matrix code based on the QR Code (Quick Response Code), denoted CQR Code (Colored Quick Response Code), and evaluates the effect of JPEG, JPEG2000, and H.264/AVC compression on the decoding process. The proposed CQR Code has three additional colors (red, green, and blue), which enables twice as much storage capacity as the traditional black-and-white QR Code. Using a Reed-Solomon error-correcting code, the CQR Code model has a theoretical correction capability of 38.41%. The goal of this paper is to evaluate the effect that degradations introduced by common image compression algorithms have on the decoding process. Results show that a successful decoding process can be achieved for compression rates up to 0.3877 bits/pixel, 0.1093 bits/pixel, and 0.3808 bits/pixel for the JPEG, JPEG2000, and H.264/AVC formats, respectively. The algorithm that presents the best performance is H.264/AVC, followed by JPEG2000 and JPEG.
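    As an illustrative sketch of why three extra module colors double per-module capacity, the following maps 2-bit symbols onto four colors (black plus red, green, and blue). The palette and bit packing are assumptions for demonstration only, not the CQR Code specification.

    ```python
    # Illustrative sketch (not the authors' implementation): four module
    # colors carry 2 bits each, twice the 1 bit of a black/white module.

    # Hypothetical palette: black plus the three added colors.
    PALETTE = {
        (0, 0): (0, 0, 0),        # black
        (0, 1): (255, 0, 0),      # red
        (1, 0): (0, 255, 0),      # green
        (1, 1): (0, 0, 255),      # blue
    }
    INVERSE = {rgb: bits for bits, rgb in PALETTE.items()}

    def encode_modules(bits):
        """Pack a flat bit list (even length) into module colors."""
        assert len(bits) % 2 == 0
        return [PALETTE[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

    def decode_modules(colors):
        """Recover the flat bit list from module colors."""
        out = []
        for rgb in colors:
            out.extend(INVERSE[rgb])
        return out

    data = [1, 0, 1, 1, 0, 0, 0, 1]
    modules = encode_modules(data)   # 8 bits -> 4 colored modules
    assert decode_modules(modules) == data
    ```

    In a real CQR Code the Reed-Solomon parity symbols would be appended to `data` before this color mapping; that stage is omitted here.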

  20. Interactive QR code beautification with full background image embedding

    NASA Astrophysics Data System (ADS)

    Lin, Lijian; Wu, Song; Liu, Sijiang; Jiang, Bo

    2017-06-01

    QR (Quick Response) code is a kind of two-dimensional barcode first developed in the automotive industry. Nowadays, QR codes are widely used in commercial applications such as product promotion, mobile payment, and product information management. Traditional QR codes that conform to the international standard are reliable and fast to decode, but lack an aesthetic appearance for conveying visual information to customers. In this work, we present a novel interactive method to generate aesthetic QR codes. Given the information to be encoded and an image to be used as the full QR code background, our method accepts a user's interactive strokes as hints to remove undesired parts of QR code modules, relying on the QR code error correction mechanism and background color thresholds. Compared to previous approaches, our method follows the intention of the QR code designer and can thus achieve a more pleasing result while keeping high machine readability.

  1. Mississippi Delta, Radar Image with Colored Height

    NASA Image and Video Library

    2005-08-29

    The geography of the New Orleans and Mississippi delta region is well shown in this radar image from the Shuttle Radar Topography Mission. In this image, bright areas show regions of high radar reflectivity, such as from urban areas, and elevations have been coded in color using height data also from the mission. Dark green colors indicate low elevations, rising through yellow and tan, to white at the highest elevations. New Orleans is situated along the southern shore of Lake Pontchartrain, the large, roughly circular lake near the center of the image. The line spanning the lake is the Lake Pontchartrain Causeway, the world's longest over-water highway bridge. Major portions of the city of New Orleans are below sea level, and although it is protected by levees and sea walls, flooding during storm surges associated with major hurricanes is a significant concern. http://photojournal.jpl.nasa.gov/catalog/PIA04175

  2. American Samoa, Shaded Relief and Colored Height

    NASA Image and Video Library

    2009-10-01

    The topography of Tutuila, largest of the islands of American Samoa, is well shown in this color-coded perspective view generated with digital elevation data from the Shuttle Radar Topography Mission (SRTM). The total area of Tutuila is about 141.8 square kilometers (54.8 square miles), slightly larger than San Francisco. The large bay near the center in this view is Pago Pago Harbor, actually a submerged volcanic crater whose south wall collapsed millions of years ago. Adjacent to the harbor is Pago Pago, the capital of American Samoa, and to the left (west) of the harbor in this view is Matafao Peak, Tutuila's highest point at 653 meters (2,142 feet). On September 29, 2009, a tsunami generated by a major undersea earthquake located about 200 kilometers (120 miles) southwest of Tutuila inundated the more heavily populated southern coast of the island with an ocean surge more than 3 meters (10 feet) deep, causing scores of casualties. Digital topographic data such as those produced by SRTM aid researchers and planners in predicting which coastal regions are at most risk from such waves, as well as from the more common storm surges caused by tropical storms and even sea level rise. Two visualization methods were combined to produce the image: shading and color coding of topographic height. The shaded image was derived by computing topographic slope in the northeast-southwest direction, so that northeast slopes appear bright and southwest slopes appear dark. Color coding is directly related to topographic height, with green at the lower elevations, rising through yellow and tan, to white at the highest elevations. The image was then projected using the elevation data to produce this perspective view, with the topography exaggerated by a factor of two. http://photojournal.jpl.nasa.gov/catalog/PIA11965
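    The two visualization methods described above (northeast-southwest slope shading and height-based color coding) can be sketched as follows; the shading gain, color-ramp breakpoints, and toy elevation grid are illustrative assumptions, not SRTM's exact processing parameters.

    ```python
    # Illustrative sketch: (1) shading from topographic slope in the
    # northeast-southwest direction, so NE-facing slopes appear bright and
    # SW-facing slopes dark; (2) color coding of height from green through
    # yellow/tan to white. Gain and ramp breakpoints are assumed values.

    def ne_sw_shade(elev, gain=0.005):
        """Brightness map from the NE-SW elevation difference."""
        rows, cols = len(elev), len(elev[0])
        shade = [[0.5] * cols for _ in range(rows)]
        for r in range(1, rows):
            for c in range(cols - 1):
                # elevation difference along the NE -> SW diagonal
                slope = elev[r - 1][c + 1] - elev[r][c]
                shade[r][c] = max(0.0, min(1.0, 0.5 + gain * slope))
        return shade

    def height_color(h, h_max):
        """Piecewise color bands approximating a green -> tan -> white ramp."""
        t = max(0.0, min(1.0, h / h_max))
        if t < 0.33:
            return (0.2, 0.6, 0.2)   # green lowlands
        if t < 0.66:
            return (0.8, 0.7, 0.3)   # yellow/tan mid-elevations
        return (1.0, 1.0, 1.0)       # white peaks

    # Toy 3x3 elevation grid topping out at Matafao Peak's 653 m.
    elev = [[0, 100, 200], [100, 300, 400], [200, 400, 653]]
    shade = ne_sw_shade(elev)
    # Combine: modulate the height color by the shading term per pixel.
    rgb = [[tuple(s * ch for ch in height_color(h, 653))
            for h, s in zip(erow, srow)]
           for erow, srow in zip(elev, shade)]
    ```

    A production hillshade would use a full illumination model (azimuth, altitude, cell size) and a continuous color ramp; the banded version above only conveys the idea.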

  3. Occator Topography

    NASA Image and Video Library

    2015-09-30

    This view, made using images taken by NASA Dawn spacecraft, is a color-coded topographic map of Occator crater on Ceres. Blue is the lowest elevation, and brown is the highest. http://photojournal.jpl.nasa.gov/catalog/PIA19975

  4. 3D-shape of objects with straight line-motion by simultaneous projection of color coded patterns

    NASA Astrophysics Data System (ADS)

    Flores, Jorge L.; Ayubi, Gaston A.; Di Martino, J. Matías; Castillo, Oscar E.; Ferrari, Jose A.

    2018-05-01

    In this work, we propose a novel technique to retrieve the 3D shape of dynamic objects by the simultaneous projection of a fringe pattern and a homogeneous light pattern, coded in two of the color channels of an RGB image. The fringe pattern (red channel) is used to retrieve the phase via phase-shift algorithms with arbitrary phase step, while the homogeneous pattern (blue channel) is used to match pixels of the test object in consecutive images acquired at different positions, and thus to determine the speed of the object. The proposed method successfully removes the standard requirement of projecting fringes of two different frequencies: one frequency to extract object information and the other to retrieve the phase. Validation experiments are presented.
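    A minimal sketch of the channel-coding idea described above, using a synthetic one-row frame: the red channel carries a sinusoidal fringe destined for phase-shift analysis, while the blue channel carries a homogeneous pattern for matching pixels between consecutive frames. The row width, fringe period, and intensity levels are assumptions.

    ```python
    import math

    # Illustrative sketch of the color-channel coding described above: one
    # RGB frame carries a sinusoidal fringe in the red channel (input to
    # phase-shift algorithms) and a homogeneous level in the blue channel
    # (used to match pixels between consecutive frames). All values are
    # synthetic.

    W = 64          # row width in pixels (assumed)
    PERIOD = 16     # assumed fringe period in pixels

    def make_frame(phase_step):
        """One projected RGB row: fringe in red, constant level in blue."""
        red = [0.5 + 0.5 * math.cos(2 * math.pi * x / PERIOD + phase_step)
               for x in range(W)]
        green = [0.0] * W     # unused channel in this sketch
        blue = [0.8] * W      # homogeneous pattern
        return list(zip(red, green, blue))

    frame = make_frame(phase_step=math.pi / 2)
    red_channel = [px[0] for px in frame]   # fed to phase retrieval
    blue_channel = [px[2] for px in frame]  # fed to pixel matching / speed
    ```

    Because the two patterns live in separate channels of one image, a single capture per position suffices, which is the point of the method.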

  5. Single-shot quantitative phase microscopy with color-multiplexed differential phase contrast (cDPC)

    PubMed Central

    2017-01-01

    We present a new technique for quantitative phase and amplitude microscopy from a single color image with coded illumination. Our system consists of a commercial brightfield microscope with one hardware modification: an inexpensive 3D-printed condenser insert. The method, color-multiplexed Differential Phase Contrast (cDPC), is a single-shot variant of Differential Phase Contrast (DPC), which recovers the phase of a sample from images with asymmetric illumination. We employ partially coherent illumination to achieve resolution corresponding to 2× the objective NA. Quantitative phase can then be used to synthesize DIC and phase contrast images or to extract shape and density. We demonstrate amplitude and phase recovery at camera-limited frame rates (50 fps) for various in vitro cell samples and C. elegans in a microfluidic channel. PMID:28152023

  6. Compressive Coded-Aperture Multimodal Imaging Systems

    NASA Astrophysics Data System (ADS)

    Rueda-Chacon, Hoover F.

    Multimodal imaging refers to the framework of capturing images that span different physical domains such as space, spectrum, depth, time, and polarization. For instance, spectral images are modeled as 3D cubes with two spatial and one spectral coordinate. Three-dimensional cubes spanning just the space domain are referred to as depth volumes. Imaging cubes varying in time, spectrum, or depth are referred to as 4D images. Nature itself spans different physical domains; thus imaging our real world demands capturing information in at least 6 different domains simultaneously, giving rise to 3D-spatial+spectral+polarized dynamic sequences. Conventional imaging devices, however, can capture dynamic sequences with up to 3 spectral channels in real time through the use of color sensors. Capturing more spectral channels requires scanning methodologies, which demand long acquisition times. In general, to date, multimodal imaging requires a sequence of different imaging sensors, placed in tandem, to simultaneously capture the different physical properties of a scene; different fusion techniques are then employed to merge all the individual information into a single image. Therefore, new ways to efficiently capture more than 3 spectral channels of 3D time-varying spatial information, with a single sensor or a few sensors, are of high interest. Compressive spectral imaging (CSI) is an imaging framework that seeks to optimally capture spectral imagery (tens of spectral channels of 2D spatial information) using fewer measurements than required by traditional sensing procedures that follow Shannon-Nyquist sampling. Instead of capturing direct one-to-one representations of natural scenes, CSI systems acquire linear random projections of the scene and then solve an optimization problem to estimate the 3D spatio-spectral data cube by exploiting the theory of compressive sensing (CS).
To date, the coding procedure in CSI has been realized through the use of "block-unblock" coded apertures, commonly implemented as chrome-on-quartz photomasks. These apertures block or pass the entire spectrum from the scene at given spatial locations, thus modulating the spatial characteristics of the scene. In its first part, this thesis aims to expand the framework of CSI by replacing the traditional block-unblock coded apertures with patterned optical filter arrays, referred to as "color" coded apertures. These apertures are formed by tiny pixelated optical filters, which in turn allow the input image to be modulated not only spatially but also spectrally, enabling more powerful coding strategies. The proposed colored coded apertures are either synthesized through linear combinations of low-pass, high-pass, and band-pass filters, paired with binary pattern ensembles realized by a digital micromirror device (DMD), or experimentally realized through thin-film color-patterned filter arrays. The optical forward model of the proposed CSI architectures is presented along with the design and proof-of-concept implementations, which achieve noticeable improvements in reconstruction quality compared with conventional block-unblock coded-aperture CSI architectures. On another front, given the rich information contained in the infrared spectrum and in the depth domain, this thesis also explores multimodal imaging by extending the spectral sensitivity of current CSI systems to a dual-band visible+near-infrared domain, and it proposes, for the first time, an imaging device that captures 4D data cubes (2D spatial + 1D spectral + depth) with as few as a single snapshot. Owing to the snapshot advantage of this camera, video sequences are possible, thus enabling the joint capture of 5D imagery. The aim is to create super-human sensing that will enable the perception of our world in new and exciting ways.
With this, we intend to advance the state of the art in compressive sensing systems to extract depth while accurately capturing spatial and spectral material properties. The applications of such a sensor are self-evident in fields such as computer and robotic vision, because it would allow an artificial intelligence to make informed decisions about not only the location of objects within a scene but also their material properties.

  7. Determination of Three-Dimensional Left Ventricle Motion to Analyze Ventricular Dyssyncrony in SPECT Images

    NASA Astrophysics Data System (ADS)

    Rebelo, Marina de Sá; Aarre, Ann Kirstine Hummelgaard; Clemmesen, Karen-Louise; Brandão, Simone Cristina Soares; Giorgi, Maria Clementina; Meneghetti, José Cláudio; Gutierrez, Marco Antonio

    2009-12-01

    A method to compute three-dimensional (3D) left ventricle (LV) motion, together with a color-coded visualization scheme for qualitative analysis in SPECT images, is proposed and used to investigate aspects of Cardiac Resynchronization Therapy (CRT). The method was applied to 3D gated-SPECT image sets from normal subjects and from patients with severe idiopathic heart failure, before and after CRT. Color-coded visualization maps representing LV regional motion showed significant differences between patients and normal subjects, and also indicated a difference between the two groups. Numerical results are presented for regional mean values representing the intensity and direction of movement in the radial direction. A difference of one order of magnitude in movement intensity was observed between patients and normal subjects. The quantitative and qualitative parameters give good indications of the technique's potential application to the diagnosis and follow-up of patients undergoing CRT.

  8. CALIPSO: an interactive image analysis software package for desktop PACS workstations

    NASA Astrophysics Data System (ADS)

    Ratib, Osman M.; Huang, H. K.

    1990-07-01

    The purpose of this project is to develop a low-cost workstation for quantitative analysis of multimodality images using a Macintosh II personal computer. In the current configuration the Macintosh operates as a stand-alone workstation, where images are imported either from a central PACS server through a standard Ethernet network or recorded through a video digitizer board. The CALIPSO software developed contains a large variety of basic image display and manipulation tools. We focused our effort, however, on the design and implementation of quantitative analysis methods that can be applied to images from different imaging modalities. Analysis modules currently implemented include: geometric and densitometric volume and ejection fraction calculation from radionuclide and cine-angiograms; Fourier analysis of cardiac wall motion; vascular stenosis measurement; color-coded parametric display of regional flow distribution from dynamic coronary angiograms; and automatic analysis of myocardial distribution of radiolabelled tracers from tomoscintigraphic images. Several of these analysis tools were selected because they use similar color-coded and parametric display methods to communicate quantitative data extracted from the images. 1. Rationale and objectives of the project: Developments of Picture Archiving and Communication Systems (PACS) in the clinical environment allow physicians and radiologists to assess radiographic images directly through imaging workstations. This convenient access to the images is often limited by the number of workstations available, due in part to their high cost. There is also an increasing need for quantitative analysis of the images. During the past decade

  9. Mapping of hydrothermally altered rocks using airborne multispectral scanner data, Marysvale, Utah, mining district

    USGS Publications Warehouse

    Podwysocki, M.H.; Segal, D.B.; Jones, O.D.

    1983-01-01

    Multispectral data covering an area near Marysvale, Utah, collected with the airborne National Aeronautics and Space Administration (NASA) 24-channel Bendix multispectral scanner, were analyzed to detect areas of hydrothermally altered, potentially mineralized rocks. Spectral bands were selected for analysis that approximate those of the Landsat 4 Thematic Mapper and that are diagnostic of the presence of hydrothermally derived products. Hydrothermally altered rocks, particularly volcanic rocks affected by solutions rich in sulfuric acid, are commonly characterized by concentrations of argillic minerals such as alunite and kaolinite. These minerals are important for identifying hydrothermally altered rocks in multispectral images because they have intense absorption bands centered near a wavelength of 2.2 µm. Unaltered volcanic rocks commonly do not contain these minerals and hence do not have the absorption bands. A color-composite image was constructed using the following spectral band ratios: 1.6 µm/2.2 µm, 1.6 µm/0.48 µm, and 0.67 µm/1.0 µm. The particular bands were chosen to emphasize the spectral contrasts that exist for argillic versus non-argillic rocks, limonitic versus nonlimonitic rocks, and rocks versus vegetation, respectively. The color-ratio composite successfully distinguished most types of altered rocks from unaltered rocks. Some previously unrecognized areas of hydrothermal alteration were mapped. The altered rocks included those having high alunite and/or kaolinite content, siliceous rocks containing some kaolinite, and ash-fall tuffs containing zeolitic minerals. The color-ratio-composite image allowed further division of these rocks into limonitic and nonlimonitic phases. The image did not allow separation of highly siliceous or hematitically altered rocks containing no clays or alunite from unaltered rocks. A color-coded density slice image of the 1.6 µm/2.2 µm band ratio allowed further discrimination among the altered units.
Areas containing zeolites and some ash-fall tuffs containing montmorillonite were readily recognized on the color-coded density slice as having less intense 2.2-µm absorption than areas of highly altered rocks. The areas of most intense absorption, as depicted in the color-coded density slice, are dominated by highly altered rocks containing large amounts of alunite and kaolinite. These areas form an annulus, approximately 10 km in diameter, which surrounds a quartz monzonite intrusive body of Miocene age. The patterns of most intense alteration are interpreted as the remnants of paleohydrothermal convective cells set into motion during the emplacement of the central intrusive body. © 1983.
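The color-ratio-composite construction described above can be sketched as follows; the reflectance values are synthetic and the min-max display stretch is an assumed choice, not the authors' exact processing.

```python
# Illustrative sketch of building the color-ratio composite described
# above: three band ratios (1.6/2.2, 1.6/0.48, 0.67/1.0 um) are computed
# per pixel and assigned to the R, G, B display channels. The 2x2
# reflectance grids below are synthetic.

def ratio(a, b, eps=1e-6):
    """Per-pixel band ratio, guarded against division by zero."""
    return [[x / (y + eps) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def stretch(img):
    """Linear min-max stretch of one ratio image to [0, 1] for display."""
    flat = [v for row in img for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0
    return [[(v - lo) / span for v in row] for row in img]

# Synthetic reflectances: pixel (0,0) mimics an argillic (altered)
# surface whose strong 2.2-um absorption raises its 1.6/2.2 ratio.
b048 = [[0.10, 0.12], [0.11, 0.13]]
b067 = [[0.15, 0.20], [0.18, 0.21]]
b100 = [[0.25, 0.24], [0.26, 0.25]]
b160 = [[0.30, 0.28], [0.29, 0.27]]
b220 = [[0.12, 0.26], [0.25, 0.26]]   # (0,0): deep 2.2-um absorption

r = stretch(ratio(b160, b220))   # red: argillic vs non-argillic
g = stretch(ratio(b160, b048))   # green: limonitic vs non-limonitic
b = stretch(ratio(b067, b100))   # blue: rocks vs vegetation
composite = [[(r[i][j], g[i][j], b[i][j]) for j in range(2)]
             for i in range(2)]
assert r[0][0] == 1.0            # the altered pixel is brightest in red
```

Ratioing cancels much of the illumination and albedo variation, which is why the absorption-band contrast dominates each display channel.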

  10. Black Hills

    Atmospheric Science Data Center

    2014-05-15

    ... 2004. The color-coded maps (along the bottom) provide a quantitative measurement of the sunlight reflected from these surfaces, and the ... MD. The MISR data were obtained from the NASA Langley Research Center Atmospheric Science Data Center in Hampton, VA. Image ...

  11. Detecting spatial defects in colored patterns using self-oscillating gels

    NASA Astrophysics Data System (ADS)

    Fang, Yan; Yashin, Victor V.; Dickerson, Samuel J.; Balazs, Anna C.

    2018-06-01

    With the growing demand for wearable computers, there is a need for material systems that can perform computational tasks without relying on external electrical power. Using theory and simulation, we design a material system that "computes" by integrating the inherent behavior of self-oscillating gels undergoing the Belousov-Zhabotinsky (BZ) reaction and piezoelectric (PZ) plates. These "BZ-PZ" units are connected electrically to form a coupled oscillator network, which displays specific modes of synchronization. We exploit this attribute in employing multiple BZ-PZ networks to perform pattern matching on complex multi-dimensional data, such as colored images. By decomposing a colored image into sets of binary vectors, we use each BZ-PZ network, or "channel," to store distinct information about the color and the shape of the image and perform the pattern matching operation. Our simulation results indicate that the multi-channel BZ-PZ device can detect subtle differences between the input and stored patterns, such as the color variation of one pixel or a small change in the shape of an object. To demonstrate a practical application, we utilize our system to process a colored Quick Response code and show its potential in cryptography and steganography.
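    The image-decomposition step described above can be sketched as follows. The binarization threshold and the plain mismatch count standing in for the oscillator networks' synchronization behavior are simplifying assumptions; the BZ-PZ dynamics themselves are not modeled.

    ```python
    # Illustrative sketch of the decomposition described above: a small
    # colored image is split into per-channel binary vectors (one per
    # "channel" network), and a stored pattern is matched by counting bit
    # mismatches across channels.

    def to_binary_vectors(image, threshold=128):
        """Split an RGB pixel list into three binary vectors (R, G, B)."""
        return [tuple(1 if px[ch] >= threshold else 0 for px in image)
                for ch in range(3)]

    def mismatch(stored, pattern):
        """Total bit mismatches across the three channel vectors."""
        return sum(sum(a != b for a, b in zip(s, p))
                   for s, p in zip(stored, pattern))

    stored_img = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 0)]
    probe_img  = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 0, 0)]

    stored = to_binary_vectors(stored_img)
    probe = to_binary_vectors(probe_img)
    assert mismatch(stored, stored) == 0
    assert mismatch(stored, probe) == 1   # one-pixel color change detected
    ```

    The nonzero mismatch on a single changed pixel mirrors the paper's claim that the multi-channel device detects a color variation of one pixel.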

  12. Image tools for UNIX

    NASA Technical Reports Server (NTRS)

    Banks, David C.

    1994-01-01

    This talk features two simple and useful tools for digital image processing in the UNIX environment: xv and pbmplus. The xv image viewer, which runs under the X Window System, reads images in a number of different file formats and writes them out in different formats. The view area supports a pop-up control panel. The 'algorithms' menu lets you blur an image. The xv control panel also activates the color editor, which displays the image's color map (if one exists). The xv image viewer is available through the internet. The pbmplus package is a set of tools designed to perform image processing from within a UNIX shell. The acronym 'pbm' stands for portable bit map. Like xv, the pbmplus tools can convert images from and to many different file formats. The source code and manual pages for pbmplus are also available through the internet. This software is in the public domain.

  13. Hybrid 3D visualization of the chest and virtual endoscopy of the tracheobronchial system: possibilities and limitations of clinical application.

    PubMed

    Seemann, M D; Claussen, C D

    2001-06-01

    A hybrid rendering method that combines a color-coded surface rendering method with a volume rendering method is described, enabling virtual endoscopic examinations using different representation models. 14 patients with malignancies of the lung and mediastinum (n=11) and lung transplantation (n=3) underwent thin-section spiral computed tomography. The tracheobronchial system and anatomical and pathological features of the chest were segmented using an interactive threshold-interval volume-growing segmentation algorithm and visualized with a color-coded surface rendering method. The structures of interest were then superimposed on a volume rendering of the other thoracic structures. For the virtual endoscopy of the tracheobronchial system, a shaded-surface model without color coding, a transparent color-coded shaded-surface model, and a triangle-surface model were tested and compared. The hybrid rendering technique exploits the advantages of both rendering methods, provides an excellent overview of the tracheobronchial system, and allows a clear depiction of the complex spatial relationships of anatomical and pathological features. Virtual bronchoscopy with a transparent color-coded shaded-surface model allows both simultaneous visualization of an airway, an airway lesion, and mediastinal structures, and quantitative assessment of the spatial relationship between these structures, thus improving confidence in the diagnosis of endotracheal and endobronchial diseases. Hybrid rendering and virtual endoscopy obviate the need for time-consuming detailed analysis and presentation of axial source images. Virtual bronchoscopy with a transparent color-coded shaded-surface model offers a practical alternative to fiberoptic bronchoscopy and is particularly promising for patients in whom fiberoptic bronchoscopy is not feasible, is contraindicated, or is refused.
Furthermore, it can be used as a complementary procedure to fiberoptic bronchoscopy in evaluating airway stenosis and guiding bronchoscopic biopsy, surgical intervention and palliative therapy and is likely to be increasingly accepted as a screening method for people with suspected endobronchial malignancy and as control examination in the aftercare of patients with malignant diseases.

  14. A new definition for an old entity: improved definition of mitral valve prolapse using three-dimensional echocardiography and color-coded parametric models.

    PubMed

    Addetia, Karima; Mor-Avi, Victor; Weinert, Lynn; Salgo, Ivan S; Lang, Roberto M

    2014-01-01

    Differentiating between mitral valve (MV) prolapse (MVP) and MV billowing (MVB) on two-dimensional echocardiography is challenging. The aim of this study was to test the hypothesis that color-coded models of maximal leaflet displacement from the annular plane into the atrium derived from three-dimensional transesophageal echocardiography would allow discrimination between these lesions. Three-dimensional transesophageal echocardiographic imaging of the MV was performed in 50 patients with (n = 38) and without (n = 12) degenerative MV disease. Definitive diagnosis of MVP versus MVB was made using inspection of dynamic three-dimensional renderings and multiple two-dimensional cut planes extracted from three-dimensional data sets. This was used as a reference standard to test an alternative approach, wherein the color-coded parametric models were inspected for integrity of the coaptation line and location of the maximally displaced portion of the leaflet. Diagnostic interpretations of these models by two independent readers were compared with the reference standard. In all cases of MVP, the color-coded models depicted loss of integrity of the coaptation line and maximal leaflet displacement extending to the coaptation line. MVB was depicted by preserved leaflet apposition with maximal displacement away from the coaptation line. Interpretation of the 50 color-coded models by novice readers took 5 to 10 min and resulted in good agreement with the reference technique (κ = 0.81 and κ = 0.73 for the two readers). Three-dimensional color-coded models provide a static display of MV leaflet displacement, allowing differentiation between MVP and MVB, without the need to inspect multiple planes and while taking into account the saddle shape of the mitral annulus. Copyright © 2014 American Society of Echocardiography. Published by Mosby, Inc. All rights reserved.

  15. Human V4 Activity Patterns Predict Behavioral Performance in Imagery of Object Color.

    PubMed

    Bannert, Michael M; Bartels, Andreas

    2018-04-11

    Color is special among basic visual features in that it can form a defining part of objects that are engrained in our memory. Whereas most neuroimaging research on human color vision has focused on responses related to external stimulation, the present study investigated how sensory-driven color vision is linked to subjective color perception induced by object imagery. We recorded fMRI activity in male and female volunteers during viewing of abstract color stimuli that were red, green, or yellow in half of the runs. In the other half we asked them to produce mental images of colored, meaningful objects (such as tomato, grapes, banana) corresponding to the same three color categories. Although physically presented color could be decoded from all retinotopically mapped visual areas, only hV4 allowed predicting colors of imagined objects when classifiers were trained on responses to physical colors. Importantly, only neural signal in hV4 was predictive of behavioral performance in the color judgment task on a trial-by-trial basis. The commonality between neural representations of sensory-driven and imagined object color and the behavioral link to neural representations in hV4 identifies area hV4 as a perceptual hub linking externally triggered color vision with color in self-generated object imagery. SIGNIFICANCE STATEMENT Humans experience color not only when visually exploring the outside world, but also in the absence of visual input, for example when remembering, dreaming, and during imagery. It is not known where neural codes for sensory-driven and internally generated hue converge. In the current study we evoked matching subjective color percepts, one driven by physically presented color stimuli, the other by internally generated color imagery. This allowed us to identify area hV4 as the only site where neural codes of corresponding subjective color perception converged regardless of its origin. 
Color codes in hV4 also predicted behavioral performance in an imagery task, suggesting it forms a perceptual hub for color perception. Copyright © 2018 the authors 0270-6474/18/383657-12$15.00/0.

  16. Hydration Map, Based on Mastcam Spectra, for Knorr Rock Target

    NASA Image and Video Library

    2013-03-18

    On this image of the rock target Knorr, color coding maps the amount of mineral hydration indicated by a ratio of near-infrared reflectance intensities measured by the Mastcam on NASA Mars rover Curiosity.

  17. Hydration Map, Based on Mastcam Spectra, for broken rock Tintina

    NASA Image and Video Library

    2013-03-18

    On this image of the broken rock called Tintina, color coding maps the amount of mineral hydration indicated by a ratio of near-infrared reflectance intensities measured by the Mastcam on NASA Mars rover Curiosity.

  18. False Color Terrain Model of Phoenix Workspace

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This is a terrain model of Phoenix's Robotic Arm workspace. It has been color coded by depth with a lander model for context. The model has been derived using images from the depth perception feature from Phoenix's Surface Stereo Imager (SSI). Red indicates low-lying areas that appear to be troughs. Blue indicates higher areas that appear to be polygons.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  19. Ganymede in Visible and Infrared Light

    NASA Technical Reports Server (NTRS)

    2007-01-01

    This montage compares New Horizons' best views of Ganymede, Jupiter's largest moon, gathered with the spacecraft's Long Range Reconnaissance Imager (LORRI) and its infrared spectrometer, the Linear Etalon Imaging Spectral Array (LEISA).

    LEISA observes its targets in more than 200 separate wavelengths of infrared light, allowing detailed analysis of their surface composition. The LEISA image shown here combines just three of these wavelengths -- 1.3, 1.8 and 2.0 micrometers -- to highlight differences in composition across Ganymede's surface. Blue colors represent relatively clean water ice, while brown colors show regions contaminated by dark material.

    The right panel combines the high-resolution grayscale LORRI image with the color-coded compositional information from the LEISA image, producing a picture that combines the best of both data sets.

    The LEISA and LORRI images were taken at 9:48 and 10:01 Universal Time, respectively, on February 27, 2007, from a range of 3.5 million kilometers (2.2 million miles). The longitude of the disk center is 38 degrees west. With a diameter of 5,268 kilometers (3,273 miles), Ganymede is the largest satellite in the solar system.

  20. Convolutional Sparse Coding for RGB+NIR Imaging.

    PubMed

    Hu, Xuemei; Heide, Felix; Dai, Qionghai; Wetzstein, Gordon

    2018-04-01

    Emerging sensor designs increasingly rely on novel color filter arrays (CFAs) to sample the incident spectrum in unconventional ways. In particular, capturing a near-infrared (NIR) channel along with conventional RGB color is an exciting new imaging modality. RGB+NIR sensing has broad applications in computational photography, such as low-light denoising; in computer vision, such as facial recognition and tracking; and it paves the way toward low-cost single-sensor RGB and depth imaging using structured illumination. However, cost-effective commercial CFAs suffer from severe spectral cross talk. This cross talk represents a major challenge in high-quality RGB+NIR imaging, rendering existing spatially multiplexed sensor designs impractical. In this work, we introduce a new approach to RGB+NIR image reconstruction using learned convolutional sparse priors. We demonstrate high-quality color and NIR imaging for challenging scenes, even including high-frequency structured NIR illumination. The effectiveness of the proposed method is validated on a large data set of experimental captures and on simulated benchmark results, which demonstrate that this work achieves unprecedented reconstruction quality.
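    The learned convolutional sparse prior at the heart of this reconstruction can be illustrated with a single iteration of ISTA (iterative shrinkage-thresholding). The following is a generic 1D sketch of convolutional sparse coding, not the authors' implementation; the function name, filter sizes, and step size are all illustrative assumptions:

```python
import numpy as np

def ista_step(signal, filters, maps, lam, step):
    """One ISTA iteration for 1D convolutional sparse coding:
    minimize 0.5*||signal - sum_k d_k * z_k||^2 + lam * ||z||_1,
    where * denotes convolution."""
    # Current reconstruction: sum of filters convolved with coefficient maps.
    recon = sum(np.convolve(z, d, mode="same") for d, z in zip(filters, maps))
    residual = signal - recon
    new_maps = []
    for d, z in zip(filters, maps):
        # Gradient of the data term: correlate the residual with the filter.
        grad = np.convolve(residual, d[::-1], mode="same")
        u = z + step * grad
        # Soft-thresholding promotes sparse coefficient maps.
        new_maps.append(np.sign(u) * np.maximum(np.abs(u) - step * lam, 0.0))
    return new_maps
```

    In the paper's setting the filters are learned from data and the maps live on 2D sensor coordinates, but the alternation between a gradient step on the reconstruction error and a sparsity-enforcing shrinkage is the same.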

  1. A novel quantum LSB-based steganography method using the Gray code for colored quantum images

    NASA Astrophysics Data System (ADS)

    Heidari, Shahrokh; Farzadnia, Ehsan

    2017-10-01

    As one of the prevalent data-hiding techniques, steganography is defined as the act of imperceptibly concealing secret information in a cover multimedia encompassing text, image, video and audio, in order to perform interaction between the sender and the receiver in which nobody except the receiver can figure out the secret data. In this paper, a quantum LSB-based steganography method utilizing the Gray code for quantum RGB images is investigated. This method uses the Gray code to accommodate two secret qubits in 3 LSBs of each pixel simultaneously according to reference tables. Experimental results, analyzed in the MATLAB environment, show that the present scheme performs well and is more secure and applicable than previous schemes found in the literature.
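    The embedding relies on the reflected Gray code. Its classical analogue is easy to sketch; the snippet below is a purely classical illustration, and the simplification of placing two Gray-encoded bits into the two lowest bits (rather than the paper's reference-table mapping of two qubits into 3 LSBs) is an assumption:

```python
def to_gray(n):
    """Reflected binary Gray code: adjacent values differ in one bit."""
    return n ^ (n >> 1)

def from_gray(g):
    """Invert the Gray code by cascading XORs of right-shifts."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def embed(pixel, two_bits):
    """Hide two secret bits (an int in 0..3), Gray-encoded, in the two
    least significant bits of an 8-bit pixel value."""
    return (pixel & ~0b11) | to_gray(two_bits)

def extract(pixel):
    """Recover the two hidden bits from a stego pixel."""
    return from_gray(pixel & 0b11)
```

    Because only the two lowest bits change, each stego pixel differs from its cover pixel by at most 3 gray levels, which keeps the hiding imperceptible.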

  2. Sinai Peninsula, Shaded Relief and Colored Height

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The Sinai Peninsula, located between Africa and Asia, is a result of those two continents pulling apart from each other. Earth's crust is cracking, stretching, and lowering along the two northern branches of the Red Sea, namely the Gulf of Suez, seen here on the west (left), and the Gulf of Aqaba, seen to the east (right). This color-coded shaded relief image shows the triangular nature of the peninsula, with the coast of the Mediterranean Sea forming the northern side of the triangle. The Suez Canal can be seen as the narrow vertical blue line in the upper left connecting the Red Sea to the Mediterranean.

    The peninsula is divided into three distinct parts: the northern region, consisting chiefly of sandstone, plains and hills; the central area, dominated by the Tih Plateau; and the mountainous southern region, where towering peaks abound. Much of the Sinai is deeply dissected by river valleys, or wadis, that eroded during an earlier geologic period and break the surface of the plateau into a series of detached massifs with a few scattered oases.

    Two visualization methods were combined to produce the image: shading and color coding of topographic height. The shade image was derived by computing topographic slope in the northwest-southeast direction, so that northwest slopes appear bright and southeast slopes appear dark. Color coding is directly related to topographic height, with green at the lower elevations, rising through yellow and tan, to white at the highest elevations.
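    The two visualization methods can be sketched directly from this description: a shade image from the northwest-southeast slope, plus a color ramp tied to height. A minimal version follows; the exact slope normalization and the green-to-white ramp values are assumptions standing in for the SRTM team's rendering:

```python
import numpy as np

def shade_and_color(dem):
    """Shade a DEM by its slope along the NW-SE diagonal and color-code
    height with a rough green -> yellow-green -> white ramp."""
    # NW-SE slope: difference between each cell and its southeast neighbor.
    diag = np.zeros_like(dem, dtype=float)
    diag[:-1, :-1] = dem[:-1, :-1] - dem[1:, 1:]
    # Normalize to [0, 1]: northwest slopes bright, southeast slopes dark.
    shade = 0.5 + 0.5 * diag / (np.abs(diag).max() + 1e-12)
    # Height ramp: t=0 green, t=0.5 yellowish green, t=1 white.
    t = (dem - dem.min()) / (np.ptp(dem) + 1e-12)
    rgb = np.stack([t, 0.6 + 0.4 * t, np.clip(2.0 * t - 1.0, 0.0, 1.0)], axis=-1)
    return shade, rgb
```

    A production renderer would multiply the shade into the color image and use a calibrated hypsometric palette, but the separation into a slope-derived shade layer and a height-derived color layer is exactly the combination the caption describes.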

    Elevation data used in this image were acquired by the Shuttle Radar Topography Mission aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Geospatial-Intelligence Agency (NGA) of the U.S. Department of Defense and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise,Washington, D.C.

    Location: 30 degrees north latitude, 34 degrees east longitude
    Orientation: North toward the top, Mercator projection
    Size: 289 by 445 kilometers (180 by 277 miles)
    Image Data: shaded and colored SRTM elevation model
    Date Acquired: February 2000

  3. An Eye-Tracking Study of How Color Coding Affects Multimedia Learning

    ERIC Educational Resources Information Center

    Ozcelik, Erol; Karakus, Turkan; Kursun, Engin; Cagiltay, Kursat

    2009-01-01

    Color coding has been proposed to promote more effective learning. However, insufficient evidence currently exists to show how color coding leads to better learning. The goal of this study was to investigate the underlying cause of the color coding effect by utilizing eye movement data. Fifty-two participants studied either a color-coded or…

  4. Color-coded Live Imaging of Heterokaryon Formation and Nuclear Fusion of Hybridizing Cancer Cells.

    PubMed

    Suetsugu, Atsushi; Matsumoto, Takuro; Hasegawa, Kosuke; Nakamura, Miki; Kunisada, Takahiro; Shimizu, Masahito; Saji, Shigetoyo; Moriwaki, Hisataka; Bouvet, Michael; Hoffman, Robert M

    2016-08-01

    Fusion of cancer cells has been studied for over half a century. However, the steps involved after initial fusion between cells, such as heterokaryon formation and nuclear fusion, have been difficult to observe in real time. In order to be able to visualize these steps, we have established cancer-cell sublines from the human HT-1080 fibrosarcoma, one expressing green fluorescent protein (GFP) linked to histone H2B in the nucleus and a red fluorescent protein (RFP) in the cytoplasm and the other subline expressing RFP in the nucleus (mCherry) linked to histone H2B and GFP in the cytoplasm. The two reciprocal color-coded sublines of HT-1080 cells were fused using the Sendai virus. The fused cells were cultured on plastic and observed using an Olympus FV1000 confocal microscope. Multi-nucleate (heterokaryotic) cancer cells, in addition to hybrid cancer cells with single-or multiple-fused nuclei, including fused mitotic nuclei, were observed among the fused cells. Heterokaryons with red, green, orange and yellow nuclei were observed by confocal imaging, even in single hybrid cells. The orange and yellow nuclei indicate nuclear fusion. Red and green nuclei remained unfused. Cell fusion with heterokaryon formation and subsequent nuclear fusion resulting in hybridization may be an important natural phenomenon between cancer cells that may make them more malignant. The ability to image the complex processes following cell fusion using reciprocal color-coded cancer cells will allow greater understanding of the genetic basis of malignancy. Copyright© 2016 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.

  5. Details of MESSENGER Impact Location

    NASA Image and Video Library

    2015-04-29

    These graphics show the current best prediction of the location and time of the NASA MESSENGER spacecraft's impact on Mercury's surface. The current best estimates are: Date: 30 April 2015; Time: 3:26:02 p.m. EDT (19:26:02 UTC); Latitude: 54.4° N; Longitude: 210.1° E. Traveling at 3.91 kilometers per second (over 8,700 miles per hour), the MESSENGER spacecraft will collide with Mercury's surface, creating a crater estimated to be 16 meters (52 feet) in diameter. View this image to learn about the named features and geology of this region on Mercury.

    Instruments: Mercury Dual Imaging System (MDIS) and Mercury Laser Altimeter (MLA)
    Top Image Latitude Range: 49°-59° N
    Top Image Longitude Range: 204°-217° E
    Topography in Top Image: Exaggerated by a factor of 5.5
    Colors in Top Image: Coded by topography. The tallest regions are colored red and are roughly 3 kilometers (1.9 miles) higher than low-lying areas such as the floors of impact craters, colored blue.
    Scale in Top Image: The large crater on the left side of the image is Janacek, with a diameter of 48 kilometers (30 miles).

    http://photojournal.jpl.nasa.gov/catalog/PIA19443

  6. Topography of Gale Crater

    NASA Image and Video Library

    2011-11-21

    Color coding in this image of Gale Crater on Mars represents differences in elevation, ranging from a low point inside the landing ellipse for NASA's Mars Science Laboratory (yellow dot) to a high point on the mountain inside the crater.

  7. The use of computer-generated color graphic images for transient thermal analysis. [for hypersonic aircraft

    NASA Technical Reports Server (NTRS)

    Edwards, C. L. W.; Meissner, F. T.; Hall, J. B.

    1979-01-01

    Color computer graphics techniques were investigated as a means of rapidly scanning and interpreting large sets of transient heating data. The data presented were generated to support the conceptual design of a heat-sink thermal protection system (TPS) for a hypersonic research airplane. Color-coded vector and raster displays of the numerical geometry used in the heating calculations were employed to analyze skin thicknesses and surface temperatures of the heat-sink TPS under a variety of trajectory flight profiles. Both vector and raster displays proved to be effective means for rapidly identifying heat-sink mass concentrations, regions of high heating, and potentially adverse thermal gradients. The color-coded (raster) surface displays are a very efficient means for displaying surface-temperature and heating histories, and thereby the more stringent design requirements can quickly be identified. The related hardware and software developments required to implement both the vector and the raster displays for this application are also discussed.

  8. Color-coded perfusion analysis of CEUS for pre-interventional diagnosis of microvascularisation in cases of vascular malformations.

    PubMed

    Teusch, V I; Wohlgemuth, W A; Piehler, A P; Jung, E M

    2014-01-01

    The aim of our pilot study was the application of a contrast-enhanced color-coded ultrasound perfusion analysis in patients with vascular malformations to quantify microcirculatory alterations. Twenty-eight patients (16 female, 12 male, mean age 24.9 years) with high-flow (n = 6) or slow-flow (n = 22) malformations were analyzed before intervention. An experienced examiner performed a color-coded Doppler sonography (CCDS) and a Power Doppler as well as a contrast-enhanced ultrasound after intravenous bolus injection of 1-2.4 ml of a second-generation ultrasound contrast medium (SonoVue®, Bracco, Milan). The contrast-enhanced examination was documented as a cine sequence over 60 s. The quantitative analysis based on color-coded contrast-enhanced ultrasound (CEUS) images included percentage peak enhancement (%peak), time to peak (TTP), area under the curve (AUC), and mean transit time (MTT). No side effects occurred after intravenous contrast injection. The mean %peak in arteriovenous malformations was almost twice as high as in slow-flow malformations. The area under the curve was 4 times higher in arteriovenous malformations compared to the mean value of the other malformations. The mean transit time was 1.4 times higher in high-flow malformations compared to slow-flow malformations. There was no difference regarding the time to peak between the different malformation types. The comparison between all vascular malformations and surrounding tissue showed statistically significant differences for all analyzed data (%peak, TTP, AUC, MTT; p < 0.01). High-flow and slow-flow vascular malformations had statistically significant differences in %peak (p < 0.01), AUC analysis (p < 0.01), and MTT (p < 0.05). Color-coded perfusion analysis of CEUS seems to be a promising technique for the dynamic assessment of microvasculature in vascular malformations.
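    The four curve metrics named in the abstract can all be computed from a single time-intensity curve. This is a generic sketch: the baseline handling and the first-moment definition of MTT are common conventions, not necessarily the authors' exact formulas:

```python
import numpy as np

def perfusion_metrics(t, intensity, baseline=0.0):
    """Time-intensity-curve metrics for a CEUS cine sequence: peak
    enhancement, time to peak (TTP), area under the curve (AUC), and
    mean transit time (MTT, here the intensity-weighted mean time)."""
    t = np.asarray(t, dtype=float)
    c = np.asarray(intensity, dtype=float) - baseline
    peak = c.max()
    ttp = t[np.argmax(c)]
    # Trapezoidal integration of the enhancement curve and its first moment.
    auc = np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(t))
    tc = t * c
    first_moment = np.sum(0.5 * (tc[1:] + tc[:-1]) * np.diff(t))
    mtt = first_moment / auc
    return {"peak": peak, "TTP": ttp, "AUC": auc, "MTT": mtt}
```

    Applied to a bolus-shaped curve, TTP marks the arrival of the peak, AUC summarizes total enhancement, and MTT shifts later than TTP because the washout tail weights later times.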

  9. Incorporating 3-dimensional models in online articles.

    PubMed

    Cevidanes, Lucia H S; Ruellas, Antonio C O; Jomier, Julien; Nguyen, Tung; Pieper, Steve; Budin, Francois; Styner, Martin; Paniagua, Beatriz

    2015-05-01

    The aims of this article are to introduce the capability to view and interact with 3-dimensional (3D) surface models in online publications, and to describe how to prepare surface models for such online 3D visualizations. Three-dimensional image analysis methods include image acquisition, construction of surface models, registration in a common coordinate system, visualization of overlays, and quantification of changes. Cone-beam computed tomography scans were acquired as volumetric images that can be visualized as 3D projected images or used to construct polygonal meshes or surfaces of specific anatomic structures of interest. The anatomic structures of interest in the scans can be labeled with color (3D volumetric label maps), and then the scans are registered in a common coordinate system using a target region as the reference. The registered 3D volumetric label maps can be saved in .obj, .ply, .stl, or .vtk file formats and used for overlays, quantification of differences in each of the 3 planes of space, or color-coded graphic displays of 3D surface distances. All registered 3D surface models in this study were saved in .vtk file format and loaded in the Elsevier 3D viewer. In this study, we describe possible ways to visualize the surface models constructed from cone-beam computed tomography images using 2D and 3D figures. The 3D surface models are available in the article's online version for viewing and downloading using the reader's software of choice. These 3D graphic displays are represented in the print version as 2D snapshots. Overlays and color-coded distance maps can be displayed using the reader's software of choice, allowing graphic assessment of the location and direction of changes or morphologic differences relative to the structure of reference. The interpretation of 3D overlays and quantitative color-coded maps requires basic knowledge of 3D image analysis. 
When submitting manuscripts, authors can now upload 3D models that will allow readers to interact with or download them. Such interaction with 3D models in online articles now will give readers and authors better understanding and visualization of the results. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  10. Tumor-targeting Salmonella typhimurium A1-R Inhibits Osteosarcoma Angiogenesis in the In Vivo Gelfoam® Assay Visualized by Color-coded Imaging.

    PubMed

    Kiyuna, Tasuku; Tome, Yasunori; Uehara, Fuminari; Murakami, Takashi; Zhang, Yong; Zhao, Ming; Kanaya, Fuminori; Hoffman, Robert M

    2018-01-01

    We previously developed a color-coded imaging model that can quantify the length of nascent blood vessels using Gelfoam® implanted in nestin-driven green fluorescent protein (ND-GFP) nude mice. In this model, nascent blood vessels selectively express GFP. We also previously showed that osteosarcoma cells promote angiogenesis in this assay. We have also previously demonstrated that the tumor-targeting bacteria Salmonella typhimurium A1-R (S. typhimurium A1-R) can inhibit or regress all tested tumor types in mouse models. The aim of the present study was to determine if S. typhimurium A1-R could inhibit osteosarcoma angiogenesis in the in vivo Gelfoam® color-coded imaging assay. Gelfoam® was implanted subcutaneously in ND-GFP nude mice. Skin flaps were made 7 days after implantation and 143B-RFP human osteosarcoma cells expressing red fluorescent protein (RFP) were injected into the implanted Gelfoam®. After establishment of tumors in the Gelfoam®, control-group mice were treated with phosphate-buffered saline via tail-vein injection (i.v.) and the experimental group was treated with S. typhimurium A1-R i.v. Skin flaps were made on days 7, 14, 21, and 28 after implantation of the Gelfoam® to allow imaging of vascularization in the Gelfoam® using a variable-magnification small-animal imaging system and confocal fluorescence microscopy. Nascent blood vessels expressing ND-GFP extended into the Gelfoam® over time in both groups. However, the extent of nascent blood-vessel growth was significantly inhibited by S. typhimurium A1-R treatment by day 28. The present results indicate S. typhimurium A1-R has potential for anti-angiogenic targeted therapy of osteosarcoma. Copyright© 2018, International Institute of Anticancer Research (Dr. George J. Delinasios), All rights reserved.

  11. Development of a Mars Surface Imager

    NASA Technical Reports Server (NTRS)

    Squyres, Steve W.

    1994-01-01

    The Mars Surface Imager (MSI) is a multispectral, stereoscopic, panoramic imager that allows imaging of the full scene around a Mars lander from the lander body to the zenith. It has two functional components: panoramic imaging and sky imaging. In the most recent version of the MSI, called PIDDP-cam, a very long multi-line color CCD, an innovative high-performance drive system, and a state-of-the-art wavelet image compression code have been integrated into a single package. The requirements for the flight version of the MSI and the current design are presented.

  12. Steganography: LSB Methodology

    DTIC Science & Technology

    2012-08-02

    images; LSB Embedding Angel Sierra, Dr. Alfredo Cruz (Advisor) Polytechnic University of Puerto Rico 377 Ponce De Leon Hato Rey San Juan, PR 00918...notepad document as the message input. - Reviewed the battlesteg algorithm java code. POLYTECHNIC UNIVERSITY OF PUERTO RICO Steganography: LSB...of LSB steganography in grayscale and color images. In J. Dittmann, K. Nahrstedt, and P. Wohlmacher, editors, Proceedings of the ACM, Special

  13. Analysis and Compensation for Lateral Chromatic Aberration in a Color Coding Structured Light 3D Measurement System.

    PubMed

    Huang, Junhui; Xue, Qi; Wang, Zhao; Gao, Jianmin

    2016-09-03

    While color-coding methods have improved the measuring efficiency of structured light three-dimensional (3D) measurement systems, they decrease measuring accuracy significantly due to lateral chromatic aberration (LCA). In this study, the LCA in a structured light measurement system is analyzed, and a method is proposed to compensate for the error caused by LCA. First, based on the projective transformation, a 3D error map of LCA is constructed in the projector images by using a flat board and comparing the image coordinates of red, green and blue circles with the coordinates of white circles at preselected sample points within the measurement volume. The 3D map consists of the equivalent errors caused by the LCA of the camera and projector. Then, during measurements, the LCA error values are calculated and compensated to correct the projector image coordinates through the 3D error map and a tri-linear interpolation method. Finally, 3D coordinates with higher accuracy are re-calculated according to the compensated image coordinates. The effectiveness of the proposed method is verified in experiments.
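    The compensation step, looking the equivalent LCA error up in the 3D map and blending the eight surrounding samples, is standard trilinear interpolation. A minimal sketch follows; unit grid spacing and the boundary-clamping policy are assumptions:

```python
import numpy as np

def trilinear(grid, x, y, z):
    """Trilinear interpolation in a unit-spaced 3D lookup grid, e.g. an
    LCA error map sampled at integer (x, y, z) grid coordinates."""
    # Clamp the base corner so the 2x2x2 neighborhood stays in bounds.
    x0 = min(int(x), grid.shape[0] - 2)
    y0 = min(int(y), grid.shape[1] - 2)
    z0 = min(int(z), grid.shape[2] - 2)
    dx, dy, dz = x - x0, y - y0, z - z0
    value = 0.0
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                # Each corner is weighted by its distance to the query point.
                w = (dx if i else 1.0 - dx) * (dy if j else 1.0 - dy) \
                    * (dz if k else 1.0 - dz)
                value += w * grid[x0 + i, y0 + j, z0 + k]
    return value
```

    Because the interpolant is linear along each axis, it reproduces any linear error field exactly, which is why a coarsely sampled map can still correct smoothly varying aberration.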

  14. Analysis and Compensation for Lateral Chromatic Aberration in a Color Coding Structured Light 3D Measurement System

    PubMed Central

    Huang, Junhui; Xue, Qi; Wang, Zhao; Gao, Jianmin

    2016-01-01

    While color-coding methods have improved the measuring efficiency of structured light three-dimensional (3D) measurement systems, they decrease measuring accuracy significantly due to lateral chromatic aberration (LCA). In this study, the LCA in a structured light measurement system is analyzed, and a method is proposed to compensate for the error caused by LCA. First, based on the projective transformation, a 3D error map of LCA is constructed in the projector images by using a flat board and comparing the image coordinates of red, green and blue circles with the coordinates of white circles at preselected sample points within the measurement volume. The 3D map consists of the equivalent errors caused by the LCA of the camera and projector. Then, during measurements, the LCA error values are calculated and compensated to correct the projector image coordinates through the 3D error map and a tri-linear interpolation method. Finally, 3D coordinates with higher accuracy are re-calculated according to the compensated image coordinates. The effectiveness of the proposed method is verified in experiments. PMID:27598174

  15. Computational multispectral video imaging [Invited].

    PubMed

    Wang, Peng; Menon, Rajesh

    2018-01-01

    Multispectral imagers reveal information unperceivable to humans and conventional cameras. Here, we demonstrate a compact single-shot multispectral video-imaging camera by placing a micro-structured diffractive filter in close proximity to the image sensor. The diffractive filter converts spectral information to a spatial code on the sensor pixels. Following a calibration step, this code can be inverted via regularization-based linear algebra to compute the multispectral image. We experimentally demonstrated spectral resolution of 9.6 nm within the visible band (430-718 nm). We further show that the spatial resolution is enhanced by over 30% compared with the case without the diffractive filter. We also demonstrate Vis-IR imaging with the same sensor. Because no absorptive color filters are utilized, sensitivity is preserved as well. Finally, the diffractive filters can be easily manufactured using optical lithography and replication techniques.
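    The reconstruction step described in this abstract, inverting the spatial code via "regularization-based linear algebra", can be sketched as ridge-regularized (Tikhonov) least squares. This is a generic solver, not the authors' calibrated pipeline; the calibration matrix A and the regularization weight are illustrative assumptions:

```python
import numpy as np

def reconstruct_spectrum(A, y, lam=1e-3):
    """Recover a multispectral signal x from coded sensor measurements
    y = A @ x by solving the normal equations of
    argmin ||A x - y||^2 + lam * ||x||^2."""
    n = A.shape[1]
    # (A^T A + lam I) x = A^T y  -- the Tikhonov-regularized solution.
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
```

    In the actual camera, A is measured once per sensor in a calibration step (it maps each spectral band to the diffractive code on the pixels), and lam trades noise suppression against spectral resolution.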

  16. STARL -- a Program to Correct CCD Image Defects

    NASA Astrophysics Data System (ADS)

    Narbutis, D.; Vanagas, R.; Vansevičius, V.

    We present a program tool, STARL, designed for automatic detection and correction of various defects in CCD images. It uses a genetic algorithm for deblending and restoring overlapping saturated stars in crowded stellar fields. Using Subaru Telescope Suprime-Cam images, we demonstrate that the program can be implemented in wide-field survey data-processing pipelines for the production of high-quality color mosaics. The source code and examples are available at the STARL website.

  17. Dwarfs in Coma Cluster

    NASA Technical Reports Server (NTRS)

    2007-01-01

    This false-color mosaic of the central region of the Coma cluster combines infrared and visible-light images to reveal thousands of faint objects (green). Follow-up observations showed that many of these objects, which appear here as faint green smudges, are dwarf galaxies belonging to the cluster. Two large elliptical galaxies, NGC 4889 and NGC 4874, dominate the cluster's center. The mosaic combines visible-light data from the Sloan Digital Sky Survey (color coded blue) with long- and short-wavelength infrared views (red and green, respectively) from NASA's Spitzer Space Telescope.

  18. Thermal Protection System Imagery Inspection Management System -TIIMS

    NASA Technical Reports Server (NTRS)

    Goza, Sharon; Melendrez, David L.; Henningan, Marsha; LaBasse, Daniel; Smith, Daniel J.

    2011-01-01

    TIIMS is used during the inspection phases of every mission to provide quick visual feedback, detailed inspection data, and determination to the mission management team. The system consists of a visual Web page interface, an SQL database, and a graphical image generator. These combine to allow a user to ascertain quickly the status of the inspection process and the current determination of any problem zones. The TIIMS system allows inspection engineers to enter their determinations into a database and to link pertinent images and video to those database entries. The database then assigns criteria to each zone and tile and, via query, sends the information to a graphical image generation program. Using the official TIPS database tile positions and sizes, the graphical image generation program creates images of the current status of the orbiter, coloring zones and tiles based on a predefined key code. These images are then displayed on a Web page that uses customized JavaScript to display the appropriate zone of the orbiter based on the location of the user's cursor. The close-up graphic and database entry for a particular zone can then be seen by selecting that zone. This page contains links into the database to access the images used by the inspection engineers when they made the determinations entered into the database. The status of the inspection zones changes as determinations are refined and is shown by the appropriate color code.

  19. Color Vision and Performance on Color-Coded Cockpit Displays.

    PubMed

    Gaska, James P; Wright, Steven T; Winterbottom, Marc D; Hadley, Steven C

    Although there are numerous studies that demonstrate that color vision deficient (CVD) individuals perform less well than color vision normal (CVN) individuals in tasks that require discrimination or identification of colored stimuli, there remains a need to quantify the relationship between the type and severity of CVD and performance on operationally relevant tasks. Participants were classified as CVN (N = 45) or CVD (N = 49) using the Rabin cone contrast test (CCT), which is the standard color vision screening test used by the United States Air Force. In the color condition, test images that were representative of the size, shape, and color of symbols and lines used on fifth-generation fighter aircraft displays were used to measure operational performance. In the achromatic condition, all symbols and lines had the same chromaticity but differed in luminance. Subjects were asked to locate and discriminate between friend vs. foe symbols (red vs. green, or brighter vs. dimmer) while speed and accuracy were recorded. Increasing color deficiency was associated with decreasing speed and accuracy for the color condition (R² > 0.2), but not for the achromatic condition. Mean differences between CVN and CVD individuals showed the same pattern. Although lower CCT scores are clearly associated with lower performance in color-related tasks, the magnitude of the performance loss was relatively small and there were multiple examples of high-performing CVD individuals who had higher operational scores than low-performing CVN individuals. Gaska JP, Wright ST, Winterbottom MD, Hadley SC. Color vision and performance on color-coded cockpit displays. Aerosp Med Hum Perform. 2016; 87(11):921-927.

  20. Context of Carbonate Rocks in Heavily Eroded Martian Terrain

    NASA Image and Video Library

    2008-12-18

    The color coding on this CRISM composite image of an area on Mars is based on infrared spectral information interpreted as evidence of various minerals present. Carbonate, which is indicative of a wet and non-acidic history, occurs in very small patches.

  1. Toward unsupervised outbreak detection through visual perception of new patterns

    PubMed Central

    Lévy, Pierre P; Valleron, Alain-Jacques

    2009-01-01

    Background Statistical algorithms are routinely used to detect outbreaks of well-defined syndromes, such as influenza-like illness. These methods cannot be applied to the detection of emerging diseases for which no preexisting information is available. This paper presents a method aimed at facilitating the detection of outbreaks, when there is no a priori knowledge of the clinical presentation of cases. Methods The method uses a visual representation of the symptoms and diseases coded during a patient consultation according to the International Classification of Primary Care 2nd version (ICPC-2). The surveillance data are transformed into color-coded cells, ranging from white to red, reflecting the increasing frequency of observed signs. They are placed in a graphic reference frame mimicking body anatomy. Simple visual observation of color-change patterns over time, concerning a single code or a combination of codes, enables detection in the setting of interest. Results The method is demonstrated through retrospective analyses of two data sets: description of the patients referred to the hospital by their general practitioners (GPs) participating in the French Sentinel Network and description of patients directly consulting at a hospital emergency department (HED). Informative image color-change alert patterns emerged in both cases: the health consequences of the August 2003 heat wave were visualized with GPs' data (but passed unnoticed with conventional surveillance systems), and the flu epidemics, which are routinely detected by standard statistical techniques, were recognized visually with HED data. Conclusion Using human visual pattern-recognition capacities to detect the onset of unexpected health events implies a convenient image representation of epidemiological surveillance and well-trained "epidemiology watchers". 
Once these two conditions are met, one could imagine that the epidemiology watchers could signal epidemiological alerts, based on "image walls" presenting the local, regional and/or national surveillance patterns, with specialized field epidemiologists assigned to validate the signals detected. PMID:19515246
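    The white-to-red color scale described above amounts to a linear fade of the green and blue channels as the observed frequency of a sign rises. A minimal sketch (the linear scaling and rounding rules are assumptions; the paper's exact mapping is not given in the abstract):

```python
def count_to_color(count, max_count):
    """Map an observed sign frequency to a white-to-red RGB cell color:
    count 0 gives white (255, 255, 255); max_count gives red (255, 0, 0)."""
    frac = 0.0 if max_count == 0 else min(count / max_count, 1.0)
    # Red stays saturated; green and blue fade together as frequency grows.
    fade = int(round(255 * (1.0 - frac)))
    return (255, fade, fade)
```

    Rendering one such cell per ICPC-2 code in an anatomy-shaped reference frame yields the "image wall" the authors describe, where an emerging cluster of codes shows up as a reddening region.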

  2. Advances in image compression and automatic target recognition; Proceedings of the Meeting, Orlando, FL, Mar. 30, 31, 1989

    NASA Technical Reports Server (NTRS)

    Tescher, Andrew G. (Editor)

    1989-01-01

    Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.

  3. Joint Chroma Subsampling and Distortion-Minimization-Based Luma Modification for RGB Color Images With Application.

    PubMed

    Chung, Kuo-Liang; Hsu, Tsu-Chun; Huang, Chi-Chao

    2017-10-01

    In this paper, we propose a novel and effective hybrid method, which joins conventional chroma subsampling and distortion-minimization-based luma modification, to improve the quality of the reconstructed RGB full-color image. Assume the input RGB full-color image has been transformed to a YUV image prior to compression. For each 2×2 UV block, 4:2:0 subsampling is applied to determine the subsampled U and V components, U_s and V_s. Based on U_s, V_s, and the corresponding 2×2 original RGB block, a main theorem is provided to determine the ideally modified 2×2 luma block in constant time such that the color peak signal-to-noise ratio (CPSNR) quality distortion between the original 2×2 RGB block and the reconstructed 2×2 RGB block can be minimized in a globally optimal sense. Furthermore, the proposed hybrid method and the delivered theorem are adjusted to tackle digital time delay integration images and Bayer mosaic images, whose Bayer CFA structure has been widely used in modern commercial digital cameras. Based on the IMAX, Kodak, and screen content test image sets, the experimental results demonstrate that in high efficiency video coding, the proposed hybrid method achieves substantial quality improvement, in terms of CPSNR quality, visual effect, CPSNR-bitrate trade-off, and Bjøntegaard delta PSNR performance, of the reconstructed RGB images when compared with existing chroma subsampling schemes.
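    The CPSNR criterion the hybrid method optimizes is easy to state in code. This sketch pairs a plain averaging 4:2:0 subsampler with the CPSNR metric; the averaging choice and the array shapes are assumptions, and the paper's method additionally modifies the luma block to minimize exactly this distortion:

```python
import numpy as np

def subsample_420(chroma_block):
    """Collapse a 2x2 chroma block to a single value by averaging (one
    common 4:2:0 choice; the paper instead optimizes this value)."""
    return float(np.mean(chroma_block))

def cpsnr(original, reconstructed, peak=255.0):
    """Color PSNR: a single MSE pooled over all three RGB channels."""
    diff = original.astype(float) - reconstructed.astype(float)
    mse = np.mean(diff ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```

    Evaluating CPSNR on the round-tripped RGB image (rather than per-channel PSNR on YUV) is what couples the chroma-subsampling choice to the luma modification in the paper's formulation.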

  4. Mars Odyssey Observes Deimos

    NASA Image and Video Library

    2018-02-22

    Colors in this image of the Martian moon Deimos indicate a range of surface temperatures detected by observing the moon on February 15, 2018, with the Thermal Emission Imaging System (THEMIS) camera on NASA's Mars Odyssey orbiter. The left edge of the small moon is in darkness, and the right edge in sunlight. Temperature information was derived from thermal-infrared imaging such as the grayscale image shown smaller at lower left with the moon in the same orientation. The color-coding merges information from THEMIS observations made in 10 thermal-infrared wavelength bands. This was the first observation of Deimos by Mars Odyssey; the spacecraft first imaged Mars' other moon, Phobos, on September 29, 2017. Researchers have been using THEMIS to examine Mars since early 2002, but the maneuver turning the orbiter around to point the camera at Phobos was developed only recently. https://photojournal.jpl.nasa.gov/catalog/PIA22250

  5. Thermal Transfer Compared To The Fourteen Other Imaging Technologies

    NASA Astrophysics Data System (ADS)

    O'Leary, John W.

    1989-07-01

A quiet revolution in the world of imaging has been underway for the past few years. The older technologies of dot matrix, daisy wheel, thermal paper and pen plotters have been increasingly displaced by laser, ink jet and thermal transfer. The net result of this revolution is improved technologies that afford superior imaging, quiet operation, plain paper usage, instant operation, and solid state components. Thermal transfer is one of the processes that incorporates these benefits. Among the imaging applications for thermal transfer are: 1. Bar code labeling and scanning. 2. New systems for airline ticketing, boarding passes, reservations, etc. 3. Color computer graphics and imaging. 4. Copying machines that copy in color. 5. Fast growing communications media such as facsimile. 6. Low cost word processors and computer printers. 7. New devices that print pictures from video cameras or television sets. 8. Cameras utilizing computer chips in place of film.

  6. Watermarking spot colors in packaging

    NASA Astrophysics Data System (ADS)

Reed, Alastair; Filler, Tomáš; Falkenstern, Kristyn; Bai, Yang

    2015-03-01

In January 2014, Digimarc announced Digimarc® Barcode for the packaging industry to improve the check-out efficiency and customer experience for retailers. Digimarc Barcode is a machine readable code that carries the same information as a traditional Universal Product Code (UPC) and is introduced by adding a robust digital watermark to the package design. It is imperceptible to the human eye but can be read by a modern barcode scanner at the Point of Sale (POS) station. Compared to a traditional linear barcode, Digimarc Barcode covers the whole package with minimal impact on the graphic design. This significantly improves the Items per Minute (IPM) metric, which retailers use to track checkout efficiency since it closely relates to their profitability. Increasing IPM by a few percent could lead to potential savings of millions of dollars for retailers, giving them a strong incentive to add the Digimarc Barcode to their packages. Testing performed by Digimarc showed increases in IPM of at least 33% using the Digimarc Barcode, compared to using a traditional barcode. A method of watermarking print-ready image data used in the commercial packaging industry is described. A significant proportion of packages are printed using spot colors; therefore, spot colors need to be supported by an embedder for Digimarc Barcode. Digimarc Barcode supports the PANTONE spot color system, which is commonly used in the packaging industry. The Digimarc Barcode embedder allows a user to insert the UPC code in an image while minimizing perceptibility to the Human Visual System (HVS). The Digimarc Barcode is inserted in the printing ink domain, using an Adobe Photoshop plug-in as the last step before printing. Since Photoshop is an industry standard widely used by pre-press shops in the packaging industry, a Digimarc Barcode can be easily inserted and proofed.

  7. Color-coded Imaging Enables Fluorescence-guided Surgery to Resect the Tumor Along with the Tumor Microenvironment in a Syngeneic Mouse Model of EL-4 Lymphoma.

    PubMed

    Hasegawa, Kosuke; Suetsugu, Atsushi; Nakamura, Miki; Matsumoto, Takuro; Kunisada, Takahiro; Shimizu, Masahito; Saji, Shigetoyo; Moriwaki, Hisataka; Bouvet, Michael; Hoffman, Robert M

    2016-09-01

Fluorescence-guided surgery (FGS) of cancer is an emerging technology. We have previously shown the importance of resecting both the tumor and the tumor microenvironment (TME) for curative FGS. We also previously developed a syngeneic model using the mouse lymphoma cell line EL-4, expressing red fluorescent protein (EL-4-RFP), growing in green fluorescent protein (GFP) transgenic mice, which we have used in the present report to develop FGS of the tumor microenvironment. EL-4-RFP lymphoma cells were injected subcutaneously into C57/BL6 GFP transgenic mice. EL-4-RFP cells subsequently formed tumors by 35 days after cell transplantation. Using the portable hand-held Dino-Lite digital imaging system, subcutaneous tumors were resected by FGS. Resected tumor tissues were visualized with the Olympus FV1000 confocal microscope. Using the Dino-Lite, subcutaneous tumors and the tumor microenvironment were clearly visualized and resected. In the resected tumor, host stromal cells, including adipocyte-like cells and blood vessels with lymphocytes, were observed along with the cancer cells by color-coded confocal imaging. The cancer cells and stromal cells in the TME were deeply intermingled in a highly complex pattern. Color-coded FGS is an effective method to completely resect cancer cells along with the stromal cells in the TME, which interact in a highly complex pattern. Microscopically, cancer cells invade the TME and vice versa. To prevent tumor recurrence, it is necessary to resect the TME along with the tumor. Copyright© 2016 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.

  8. Neuroimaging findings in Joubert syndrome with C5orf42 gene mutations: A milder form of molar tooth sign and vermian hypoplasia.

    PubMed

    Enokizono, Mikako; Aida, Noriko; Niwa, Tetsu; Osaka, Hitoshi; Naruto, Takuya; Kurosawa, Kenji; Ohba, Chihiro; Suzuki, Toshifumi; Saitsu, Hirotomo; Goto, Tomohide; Matsumoto, Naomichi

    2017-05-15

Little is known regarding neuroimaging-genotype correlations in Joubert syndrome (JBTS). To elucidate one of these correlations, we investigated the neuroimaging findings of JBTS patients with C5orf42 mutations. Neuroimaging findings in five JBTS patients with C5orf42 mutations were retrospectively assessed with regard to the infratentorial and supratentorial structures on T1-magnetization prepared rapid gradient echo (MPRAGE), T2-weighted images, and color-coded fractional anisotropy (FA) maps; the findings were compared to those in four JBTS patients with mutations in other genes (including three with AHI1 and one with TMEM67 mutations). In C5orf42-mutant patients, the infratentorial magnetic resonance (MR) images showed normal or minimally thickened and minimally elongated superior cerebellar peduncles (SCP), normal or minimally deepened interpeduncular fossa (IF), and mild vermian hypoplasia (VH). In the other patients, however, all had severe abnormalities in the SCP and IF, and moderate to marked VH. Supratentorial abnormalities were found in one of the other JBTS patients. In JBTS with all mutations, color-coded FA maps showed the absence of decussation of the SCP (DSCP). The morphological neuroimaging findings in C5orf42-mutant JBTS were distinctly mild and made diagnosis difficult. However, the absence of DSCP on color-coded FA maps may facilitate the diagnosis of JBTS. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. New Orleans Topography, Radar Image with Colored Height

    NASA Image and Video Library

    2005-08-29

The city of New Orleans, situated on the southern shore of Lake Pontchartrain, is shown in this radar image from the Shuttle Radar Topography Mission (SRTM). In this image bright areas show regions of high radar reflectivity, such as from urban areas, and elevations have been coded in color using height data also from the SRTM mission. Dark green colors indicate low elevations, rising through yellow and tan, to white at the highest elevations. New Orleans is near the center of this scene, between the lake and the Mississippi River. The line spanning the lake is the Lake Pontchartrain Causeway, the world’s longest overwater highway bridge. Major portions of the city of New Orleans are actually below sea level, and although it is protected by levees and sea walls designed to withstand storm surges of 18 to 20 feet, flooding during storm surges associated with major hurricanes is a significant concern. http://photojournal.jpl.nasa.gov/catalog/PIA04174

  10. Color-Coded Imaging of Syngeneic Orthotopic Malignant Lymphoma Interacting with Host Stromal Cells During Metastasis.

    PubMed

    Matsumoto, Takuro; Suetsugu, Atsushi; Hasegawa, Kosuke; Nakamura, Miki; Aoki, Hitomi; Kunisada, Takahiro; Tsurumi, Hisashi; Shimizu, Masahito; Hoffman, Robert M

    2016-04-01

The EL4 cell line was previously derived from a lymphoma induced in a C57/BL6 mouse by 9,10-dimethyl-1,2-benzanthracene. In a previous study, EL4 lymphoma cells expressing red fluorescent protein (EL4-RFP) were established and injected into the tail vein of C57/BL6 green fluorescent protein (GFP) transgenic mice. Metastasis was observed at multiple sites which were also enriched with host GFP-expressing stromal cells. In the present study, our aim was to establish an orthotopic model of EL4-RFP. EL4-RFP lymphoma cells were injected into the spleen of C57/BL6 GFP transgenic mice as an orthotopic model of lymphoma. The resultant primary tumor and metastases were imaged with the Olympus FV1000 scanning laser confocal microscope. EL4-RFP metastasis was observed 21 days later. EL4-RFP tumors in the spleen (primary injection site), liver, supra-mediastinum lymph nodes, abdominal lymph nodes, bone marrow, and lung were visualized by color-coded imaging. EL4-RFP metastases in the liver, lymph nodes, and bone marrow in C57/BL6 GFP mice were rich in GFP stromal cells such as macrophages, fibroblasts, dendritic cells, and normal lymphocytes derived from the host animal. Small tumors were observed in the spleen, which were rich in host stromal cells. In the lung, no mass formation of lymphoma cells occurred, but lymphoma cells circulated in lung peripheral blood vessels. Phagocytosis of EL4-RFP lymphoma cells by macrophages, as well as by dendritic cells and fibroblasts, was observed in culture. Color-coded imaging of the lymphoma microenvironment suggests an important role of stromal cells in lymphoma progression and metastasis. Copyright© 2016 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.

  11. Airborne monitoring of crop canopy temperatures for irrigation scheduling and yield prediction

    NASA Technical Reports Server (NTRS)

    Millard, J. P.; Jackson, R. D.; Goettelman, R. C.; Reginato, R. J.; Idso, S. B.; Lapado, R. L.

    1977-01-01

    Airborne and ground measurements were made on April 1 and 29, 1976, over a USDA test site consisting mostly of wheat in various stages of water stress, but also including alfalfa and bare soil. These measurements were made to evaluate the feasibility of measuring crop temperatures from aircraft so that a parameter termed stress degree day, SDD, could be computed. Ground studies have shown that SDD is a valuable indicator of a crop's water needs, and that it can be related to irrigation scheduling and yield. The aircraft measurement program required predawn and afternoon flights coincident with minimum and maximum crop temperatures. Airborne measurements were made with an infrared line scanner and with color IR photography. The scanner data were registered, subtracted, and color-coded to yield pseudo-colored temperature-difference images. Pseudo-colored images reading directly in daily SDD increments were also produced. These maps enable a user to assess plant water status and thus determine irrigation needs and crop yield potentials.

  12. Effect of Color-Coded Notation on Music Achievement of Elementary Instrumental Students.

    ERIC Educational Resources Information Center

    Rogers, George L.

    1991-01-01

    Presents results of a study of color-coded notation to teach music reading to instrumental students. Finds no clear evidence that color-coded notation enhances achievement on performing by memory, sight-reading, or note naming. Suggests that some students depended on the color-coding and were unable to read uncolored notation well. (DK)

  13. Martian Moon Phobos in Thermal Infrared Image

    NASA Image and Video Library

    2017-10-04

    Colors in this image of the Martian moon Phobos indicate a range of surface temperatures detected by observing the moon on Sept. 29, 2017, with the Thermal Emission Imaging System (THEMIS) camera on NASA's Mars Odyssey orbiter. The left edge of the small moon was in darkness, and the right edge in morning sunlight. Phobos has an oblong shape with average diameter of about 14 miles (22 kilometers). Temperature information was derived from thermal-infrared imaging such as the grayscale image shown smaller at lower left with the moon in the same orientation. The color-coding merges information from THEMIS observations made in four thermal-infrared wavelength bands, centered from 11.04 microns to 14.88 microns. The scale bar correlates color-coding to the temperature range on the Kelvin scale, from 130 K (minus 226 degrees Fahrenheit) for dark purple to 270 K (26 degrees F) for red. Researchers will analyze the surface-temperature information from this observation and possible future THEMIS observations to learn how quickly the surface warms after sunup or cools after sundown. That could provide information about surface materials, because larger rocks heat or cool more slowly than smaller particles do. Researchers have been using THEMIS to examine Mars since early 2002, but the maneuver turning the orbiter around to point the camera at Phobos was developed only recently. Odyssey orbits Mars at an altitude of about 250 miles (400 kilometers), much closer to the planet than to Phobos, which orbits about 3,700 miles (6,000 kilometers) above the surface of Mars. The distance to Phobos from Odyssey during the observation was about 3,424 miles (5,511 kilometers). https://photojournal.jpl.nasa.gov/catalog/PIA21858
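The color-coding step described in this record — mapping a Kelvin temperature range (130 K dark purple to 270 K red) onto a palette — can be sketched as a simple linear color ramp. The intermediate stop colors and breakpoints below are invented for illustration; actual THEMIS products use their own calibrated palette.

```python
def temp_to_color(t_kelvin, t_min=130.0, t_max=270.0):
    """Map a temperature to an (R, G, B) triple on a purple->blue->green->red
    ramp. Stop colors/positions are illustrative assumptions, not THEMIS's."""
    stops = [(0.0, (64, 0, 96)),    # dark purple (coldest)
             (0.4, (0, 64, 255)),   # blue
             (0.7, (0, 200, 64)),   # green
             (1.0, (255, 32, 0))]   # red (warmest)
    # Normalize temperature into [0, 1], clamping out-of-range values.
    f = min(max((t_kelvin - t_min) / (t_max - t_min), 0.0), 1.0)
    # Linearly interpolate between the two stops bracketing f.
    for (f0, c0), (f1, c1) in zip(stops, stops[1:]):
        if f <= f1:
            w = (f - f0) / (f1 - f0)
            return tuple(round(a + w * (b - a)) for a, b in zip(c0, c1))
```

Applying this per pixel to a calibrated brightness-temperature map yields a false-color image like the one described, with a scale bar generated from the same ramp.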

  14. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 5 2013-07-01 2013-07-01 false Safety color code for marking physical hazards. 1910.144... § 1910.144 Safety color code for marking physical hazards. (a) Color identification—(1) Red. Red shall be the basic color for the identification of: (i) Fire protection equipment and apparatus. [Reserved] (ii...

  15. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 5 2014-07-01 2014-07-01 false Safety color code for marking physical hazards. 1910.144... § 1910.144 Safety color code for marking physical hazards. (a) Color identification—(1) Red. Red shall be the basic color for the identification of: (i) Fire protection equipment and apparatus. [Reserved] (ii...

  16. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 5 2012-07-01 2012-07-01 false Safety color code for marking physical hazards. 1910.144... § 1910.144 Safety color code for marking physical hazards. (a) Color identification—(1) Red. Red shall be the basic color for the identification of: (i) Fire protection equipment and apparatus. [Reserved] (ii...

  17. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 5 2010-07-01 2010-07-01 false Safety color code for marking physical hazards. 1910.144... § 1910.144 Safety color code for marking physical hazards. (a) Color identification—(1) Red. Red shall be... basic color for designating caution and for marking physical hazards such as: Striking against...

  18. Clinical applications of modern imaging technology: stereo image formation and location of brain cancer

    NASA Astrophysics Data System (ADS)

    Wang, Dezong; Wang, Jinxiang

    1994-05-01

It is very important to locate the tumor for a patient who has cancer in the brain. From X-CT or MRI pictures alone, the doctor cannot tell the size, shape, and location of the tumor or the relation between the tumor and other organs. This paper presents the formation of stereo images of the cancer on the basis of color coding and color 3D reconstruction. The stereo images of the tumor, brain, and encephalic truncus are formed. The stereo image of the cancer can be rotated about the X, Y, and Z axes to show its shape from different directions. In order to show the location of the tumor, stereo images of the tumor and encephalic truncus are provided at different angles. Cross-section pictures are also offered to indicate the relation of the brain, tumor, and encephalic truncus on cross sections. The calculation of areas, volume, and the distance between the cancer and the surface of the brain is also described.

  19. Mars Odyssey Observes Phobos

    NASA Image and Video Library

    2018-02-22

    Colors in this image of the Martian moon Phobos indicate a range of surface temperatures detected by observing the moon on February 15, 2018, with the Thermal Emission Imaging System (THEMIS) camera on NASA's Mars Odyssey orbiter. The left edge of the small moon is in darkness, and the right edge in sunlight. Phobos has an oblong shape with average diameter of about 14 miles (22 kilometers). Temperature information was derived from thermal-infrared imaging such as the grayscale image shown smaller at lower left with the moon in the same orientation. The color-coding merges information from THEMIS observations made in 10 thermal-infrared wavelength bands. This was the second observation of Phobos by Mars Odyssey; the first was on September 29, 2017. Researchers have been using THEMIS to examine Mars since early 2002, but the maneuver turning the orbiter around to point the camera at Phobos was developed only recently. https://photojournal.jpl.nasa.gov/catalog/PIA22249

  20. Fast encryption of RGB color digital images using a tweakable cellular automaton based schema

    NASA Astrophysics Data System (ADS)

    Faraoun, Kamel Mohamed

    2014-12-01

We propose a new tweakable construction of block ciphers using second-order reversible cellular automata, and we apply it to encipher RGB color images. The proposed construction permits parallel encryption of the image content by extending the standard definition of a block cipher to take into account a supplementary parameter used as a tweak (nonce) to control the behavior of the cipher from one region of the image to another, and hence avoids the need for slow sequential encryption operating modes. The construction defines a flexible pseudorandom permutation that can be used effectively to solve the electronic code book problem without requiring a specific sequential mode. Results from various experiments show that the proposed scheme achieves high security and execution performance, and enables an interesting mode of selective-area decryption due to the parallel character of the approach.
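The reversibility trick behind second-order cellular automata — the next configuration is a local rule applied to the current one, XORed with the previous one, so the recurrence can be run backwards regardless of whether the rule itself is invertible — can be sketched as a toy byte-level cipher. The local rule, round count, and the way the tweak is mixed into the rule are illustrative assumptions, not Faraoun's actual construction.

```python
def ca_step(state, key):
    """One pass of a toy local rule over a ring of byte values (an invented
    rule; any deterministic local rule preserves second-order reversibility)."""
    n = len(state)
    return [(state[(i - 1) % n] + 3 * state[i] + state[(i + 1) % n] + key) & 0xFF
            for i in range(n)]

def encrypt(block, tweak, key, rounds=8):
    """Second-order reversible CA: split the block into two configurations
    and evolve next = ca_step(current) XOR previous. The tweak perturbs the
    rule, so different image regions can be enciphered independently."""
    h = len(block) // 2
    prev, cur = list(block[:h]), list(block[h:])
    for _ in range(rounds):
        prev, cur = cur, [a ^ b for a, b in zip(ca_step(cur, key ^ tweak), prev)]
    return prev + cur

def decrypt(ct, tweak, key, rounds=8):
    """Run the same recurrence backwards: previous = ca_step(current) XOR next."""
    h = len(ct) // 2
    prev, cur = list(ct[:h]), list(ct[h:])
    for _ in range(rounds):
        cur, prev = prev, [a ^ b for a, b in zip(ca_step(prev, key ^ tweak), cur)]
    return prev + cur
```

Because each region's ciphertext depends only on its own bytes, key, and tweak, regions can be processed in parallel and decrypted selectively, which is the property the abstract highlights.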

  1. Classification of endoscopic capsule images by using color wavelet features, higher order statistics and radial basis functions.

    PubMed

    Lima, C S; Barbosa, D; Ramos, J; Tavares, A; Monteiro, L; Carvalho, L

    2008-01-01

This paper presents a system to support medical diagnosis and the detection of abnormal lesions by processing capsule endoscopic images. Endoscopic images possess rich information expressed by texture. Texture information can be efficiently extracted from the medium scales of the wavelet transform. The set of features proposed in this paper to code textural information is named color wavelet covariance (CWC). CWC coefficients are based on the covariances of second-order textural measures, and an optimal subset of them is proposed. Third- and fourth-order moments are added to cope with distributions that tend to become non-Gaussian, especially in some pathological cases. The proposed approach is supported by a classifier based on a radial basis function procedure for the characterization of image regions along the video frames. The whole methodology has been applied to real data containing 6 full endoscopic exams and reached 95% specificity and 93% sensitivity.
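A rough sketch of covariance-of-wavelet-subband features in the spirit of CWC, with third- and fourth-order moments appended, is shown below: one-level Haar detail subbands per color channel, cross-channel covariances, then skewness- and kurtosis-like moments. The exact second-order textural measures and the paper's optimal-subset selection are not reproduced; this is an assumed stand-in.

```python
import numpy as np

def haar_details(img):
    """One-level 2D Haar transform of a single channel; returns the three
    detail subbands (horizontal, vertical, diagonal)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    lh = (a - b + c - d) / 4.0
    hl = (a + b - c - d) / 4.0
    hh = (a - b - c + d) / 4.0
    return lh, hl, hh

def cwc_features(rgb):
    """Cross-channel covariances of each detail subband, plus standardized
    3rd/4th moments per channel (12 features x 3 subbands = 36 features)."""
    feats = []
    bands = [haar_details(rgb[..., ch]) for ch in range(3)]  # per channel
    for k in range(3):                       # LH, HL, HH subbands
        e = np.array([bands[ch][k].ravel() for ch in range(3)])
        feats.extend(np.cov(e)[np.triu_indices(3)])   # 6 covariance entries
        for row in e:                                 # skewness & kurtosis
            m, s = row.mean(), row.std() + 1e-12
            z = (row - m) / s
            feats.extend([np.mean(z ** 3), np.mean(z ** 4)])
    return np.array(feats)
```

Such a feature vector would then feed the radial basis function classifier mentioned in the abstract.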

  2. Research on lossless compression of true color RGB image with low time and space complexity

    NASA Astrophysics Data System (ADS)

    Pan, ShuLin; Xie, ChengJun; Xu, Lin

    2008-12-01

This paper eliminates correlated spatial and spectral redundancy by using a DWT lifting scheme and reduces the complexity of the image by using an algebraic transform among the RGB components. An improved Rice coding algorithm is proposed, which presents an enumerating DWT lifting scheme that fits images of any size through image renormalization. The algorithm codes and decodes the pixels of an image without backtracking. It supports LOCO-I and can also be applied to a coder/decoder. Simulation analysis indicates that the proposed method achieves high image compression. Compared with Lossless-JPG, PNG (Microsoft), PNG (Rene), PNG (Photoshop), PNG (Anix PicViewer), PNG (ACDSee), PNG (Ulead Photo Explorer), JPEG2000, PNG (KoDa Inc), SPIHT, and JPEG-LS, the lossless compression ratio improved by 45%, 29%, 25%, 21%, 19%, 17%, 16%, 15%, 11%, 10.5%, and 10%, respectively, on 24 RGB images provided by KoDa Inc. Running from main memory on a Pentium IV at 2.20 GHz with 256 MB RAM, the proposed coder encodes about 21 times faster than SPIHT with an efficiency gain of about 166%, and decodes about 17 times faster than SPIHT with an efficiency gain of about 128%.
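The Rice coding component can be illustrated independently of the lifting scheme. Below is a generic Rice (Golomb power-of-two) encoder/decoder for nonnegative integers — the quotient in unary, the remainder in k bits; the paper's "improved" variant and its enumeration scheme are not reproduced here.

```python
def rice_encode(values, k):
    """Rice code each nonnegative integer: quotient v >> k in unary
    (ones terminated by a zero), remainder in k fixed bits, MSB first."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits += [1] * q + [0]                          # unary quotient
        bits += [(r >> i) & 1 for i in range(k - 1, -1, -1)]
    return bits

def rice_decode(bits, k, count):
    """Invert rice_encode for `count` values."""
    values, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i]:                                 # count unary ones
            q += 1; i += 1
        i += 1                                         # skip the 0 terminator
        r = 0
        for _ in range(k):                             # k remainder bits
            r = (r << 1) | bits[i]; i += 1
        values.append((q << k) | r)
    return values
```

In practice k is chosen per context from the data statistics (as in LOCO-I/JPEG-LS); small residuals then cost only k + 1 bits each.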

  3. Braiding by Majorana tracking and long-range CNOT gates with color codes

    NASA Astrophysics Data System (ADS)

    Litinski, Daniel; von Oppen, Felix

    2017-11-01

    Color-code quantum computation seamlessly combines Majorana-based hardware with topological error correction. Specifically, as Clifford gates are transversal in two-dimensional color codes, they enable the use of the Majoranas' non-Abelian statistics for gate operations at the code level. Here, we discuss the implementation of color codes in arrays of Majorana nanowires that avoid branched networks such as T junctions, thereby simplifying their realization. We show that, in such implementations, non-Abelian statistics can be exploited without ever performing physical braiding operations. Physical braiding operations are replaced by Majorana tracking, an entirely software-based protocol which appropriately updates the Majoranas involved in the color-code stabilizer measurements. This approach minimizes the required hardware operations for single-qubit Clifford gates. For Clifford completeness, we combine color codes with surface codes, and use color-to-surface-code lattice surgery for long-range multitarget CNOT gates which have a time overhead that grows only logarithmically with the physical distance separating control and target qubits. With the addition of magic state distillation, our architecture describes a fault-tolerant universal quantum computer in systems such as networks of tetrons, hexons, or Majorana box qubits, but can also be applied to nontopological qubit platforms.
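The "tracking" idea — replacing physical operations by software updates of which operators enter the stabilizer measurements — is closely analogous to Pauli-frame tracking through Clifford gates, which can be sketched as follows. This is a generic Pauli-frame analogue for illustration, not the Majorana-specific protocol of the paper, and phases are ignored throughout.

```python
# A Pauli frame is a dict: qubit -> (x, z) bits; (1,0)=X, (0,1)=Z, (1,1)=Y.
def track_h(p, q):
    """Hadamard conjugation: X <-> Z (phases ignored)."""
    x, z = p.get(q, (0, 0))
    p[q] = (z, x)

def track_s(p, q):
    """Phase-gate conjugation: X -> Y, Z -> Z (phases ignored)."""
    x, z = p.get(q, (0, 0))
    p[q] = (x, z ^ x)

def track_cnot(p, c, t):
    """CNOT conjugation: X_c -> X_c X_t and Z_t -> Z_c Z_t."""
    xc, zc = p.get(c, (0, 0))
    xt, zt = p.get(t, (0, 0))
    p[c] = (xc, zc ^ zt)   # control picks up the target's Z part
    p[t] = (xt ^ xc, zt)   # target picks up the control's X part
```

Because these updates are pure bookkeeping, no physical braiding or correction needs to be applied until a measurement is interpreted — the same economy the abstract claims for Majorana tracking.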

  4. Image demosaicing: a systematic survey

    NASA Astrophysics Data System (ADS)

    Li, Xin; Gunturk, Bahadir; Zhang, Lei

    2008-01-01

Image demosaicing is a problem of interpolating full-resolution color images from so-called color-filter-array (CFA) samples. Among various CFA patterns, the Bayer pattern has been the most popular choice, and demosaicing of the Bayer pattern has attracted renewed interest in recent years, partially due to the increased availability of source codes/executables in response to the principle of "reproducible research". In this article, we provide a systematic survey of over seventy published works in this field since 1999 (complementary to previous reviews [22, 67]). Our review attempts to address important issues in demosaicing and identify fundamental differences among competing approaches. Our findings suggest most existing works belong to the class of sequential demosaicing, i.e., the luminance channel is interpolated first and then the chrominance channels are reconstructed based on the recovered luminance information. We report our comparative study results with a collection of eleven competing algorithms whose source codes or executables are provided by the authors. Our comparison is performed on two data sets: Kodak PhotoCD (popular choice) and IMAX high-quality images (more challenging). While most existing demosaicing algorithms achieve good performance on the Kodak data set, their performance on the IMAX one (images with varying-hue and high-saturation edges) degrades significantly. This observation suggests the importance of properly addressing the mismatch between the assumed model and the observation data in demosaicing, which calls for further investigation of issues such as model validation, test data selection, and performance evaluation.
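The baseline against which demosaicing algorithms are typically compared — plain per-channel interpolation of the Bayer CFA — can be sketched with a normalized 3×3 gather, a simple stand-in for true bilinear interpolation. The RGGB site layout is assumed, and edges wrap around via `np.roll` for brevity.

```python
import numpy as np

def bilinear_demosaic(cfa):
    """Interpolate an RGGB Bayer mosaic (H, W both even) to full RGB by
    averaging each channel's sampled sites within a 3x3 neighborhood.
    Edge handling wraps (np.roll), an assumption made to keep this short."""
    H, W = cfa.shape
    masks = np.zeros((3, H, W), bool)
    masks[0, 0::2, 0::2] = True          # R sites
    masks[1, 0::2, 1::2] = True          # G sites (even rows)
    masks[1, 1::2, 0::2] = True          # G sites (odd rows)
    masks[2, 1::2, 1::2] = True          # B sites
    offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    out = np.zeros((H, W, 3))
    for ch in range(3):
        samp = np.where(masks[ch], cfa, 0.0)
        acc = np.zeros((H, W)); cnt = np.zeros((H, W))
        for dy, dx in offsets:           # 3x3 box gather of known samples
            acc += np.roll(np.roll(samp, dy, 0), dx, 1)
            cnt += np.roll(np.roll(masks[ch].astype(float), dy, 0), dx, 1)
        out[..., ch] = acc / np.maximum(cnt, 1)
    return out
```

The sequential methods the survey describes differ from this baseline mainly in interpolating luminance first and exploiting cross-channel correlation, rather than treating each channel independently as done here.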

  5. Photoplus: auxiliary information for printed images based on distributed source coding

    NASA Astrophysics Data System (ADS)

    Samadani, Ramin; Mukherjee, Debargha

    2008-01-01

    A printed photograph is difficult to reuse because the digital information that generated the print may no longer be available. This paper describes a mechanism for approximating the original digital image by combining a scan of the printed photograph with small amounts of digital auxiliary information kept together with the print. The auxiliary information consists of a small amount of digital data to enable accurate registration and color-reproduction, followed by a larger amount of digital data to recover residual errors and lost frequencies by distributed Wyner-Ziv coding techniques. Approximating the original digital image enables many uses, including making good quality reprints from the original print, even when they are faded many years later. In essence, the print itself becomes the currency for archiving and repurposing digital images, without requiring computer infrastructure.

  6. New procedures to evaluate visually lossless compression for display systems

    NASA Astrophysics Data System (ADS)

    Stolitzka, Dale F.; Schelkens, Peter; Bruylants, Tim

    2017-09-01

    Visually lossless image coding in isochronous display streaming or plesiochronous networks reduces link complexity and power consumption and increases available link bandwidth. A new set of codecs developed within the last four years promise a new level of coding quality, but require new techniques that are sufficiently sensitive to the small artifacts or color variations induced by this new breed of codecs. This paper begins with a summary of the new ISO/IEC 29170-2, a procedure for evaluation of lossless coding and reports the new work by JPEG to extend the procedure in two important ways, for HDR content and for evaluating the differences between still images, panning images and image sequences. ISO/IEC 29170-2 relies on processing test images through a well-defined process chain for subjective, forced-choice psychophysical experiments. The procedure sets an acceptable quality level equal to one just noticeable difference. Traditional image and video coding evaluation techniques, such as, those used for television evaluation have not proven sufficiently sensitive to the small artifacts that may be induced by this breed of codecs. In 2015, JPEG received new requirements to expand evaluation of visually lossless coding for high dynamic range images, slowly moving images, i.e., panning, and image sequences. These requirements are the basis for new amendments of the ISO/IEC 29170-2 procedures described in this paper. These amendments promise to be highly useful for the new content in television and cinema mezzanine networks. The amendments passed the final ballot in April 2017 and are on track to be published in 2018.

  7. The Use of Color-Coded Genograms in Family Therapy.

    ERIC Educational Resources Information Center

    Lewis, Karen Gail

    1989-01-01

    Describes a variable color-coding system which has been added to the standard family genogram in which characteristics or issues associated with a particular presenting problem or for a particular family are arbitrarily assigned a color. Presents advantages of color-coding, followed by clinical examples. (Author/ABL)

  8. Hybrid rendering of the chest and virtual bronchoscopy [corrected].

    PubMed

    Seemann, M D; Seemann, O; Luboldt, W; Gebicke, K; Prime, G; Claussen, C D

    2000-10-30

Thin-section spiral computed tomography was used to acquire volume data sets of the thorax. The tracheobronchial system and pathological changes of the chest were visualized using a color-coded surface rendering method. The structures of interest were then superimposed on a volume rendering of the other thoracic structures, producing a hybrid rendering. The hybrid rendering technique exploits the advantages of both rendering methods and enables virtual bronchoscopic examinations using different representation models. Virtual bronchoscopic examination with a transparent color-coded shaded-surface model enables the simultaneous visualization of both the airways and the adjacent structures behind the tracheobronchial wall, and therefore offers a practical alternative to fiberoptic bronchoscopy. Hybrid rendering and virtual endoscopy obviate the need for time-consuming detailed analysis and presentation of axial source images.

  9. Video System Highlights Hydrogen Fires

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C.; Gleman, Stuart M.; Moerk, John S.

    1992-01-01

    Video system combines images from visible spectrum and from three bands in infrared spectrum to produce color-coded display in which hydrogen fires distinguished from other sources of heat. Includes linear array of 64 discrete lead selenide mid-infrared detectors operating at room temperature. Images overlaid on black and white image of same scene from standard commercial video camera. In final image, hydrogen fires appear red; carbon-based fires, blue; and other hot objects, mainly green and combinations of green and red. Where no thermal source present, image remains in black and white. System enables high degree of discrimination between hydrogen flames and other thermal emitters.

  10. Color coding of control room displays: the psychocartography of visual layering effects.

    PubMed

    Van Laar, Darren; Deshe, Ofer

    2007-06-01

    To evaluate which of three color coding methods (monochrome, maximally discriminable, and visual layering) used to code four types of control room display format (bars, tables, trend, mimic) was superior in two classes of task (search, compare). It has recently been shown that color coding of visual layers, as used in cartography, may be used to color code any type of information display, but this has yet to be fully evaluated. Twenty-four people took part in a 2 (task) x 3 (coding method) x 4 (format) wholly repeated measures design. The dependent variables assessed were target location reaction time, error rates, workload, and subjective feedback. Overall, the visual layers coding method produced significantly faster reaction times than did the maximally discriminable and the monochrome methods for both the search and compare tasks. No significant difference in errors was observed between conditions for either task type. Significantly less perceived workload was experienced with the visual layers coding method, which was also rated more highly than the other coding methods on a 14-item visual display quality questionnaire. The visual layers coding method is superior to other color coding methods for control room displays when the method supports the user's task. The visual layers color coding method has wide applicability to the design of all complex information displays utilizing color coding, from the most maplike (e.g., air traffic control) to the most abstract (e.g., abstracted ecological display).

  11. Adding Composition Data About Mars Gullies

    NASA Image and Video Library

    2016-07-29

    The highly incised Martian gullies seen in the top image resemble gullies on Earth that are carved by liquid water. However, when the gullies are observed with the addition of mineralogical information (bottom), no evidence for alteration by water appears. The pictured area spans about 2 miles (3 kilometers) on the eastern rim of Hale Crater. The High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter took the visible-light image. Color-coded compositional information added in the lower version comes from the same orbiter's Compact Reconnaissance Imaging Spectrometer for Mars (CRISM). Color coding in light blue corresponds to surface composition of unaltered mafic material, of volcanic origin. Mafic material from the crater rim is carved and transported downslope along the gully channels. No hydrated minerals are observed within the gullies, in the data from CRISM, indicating limited interaction or no interaction of the mafic material with liquid water. These findings and related observations at about 100 other gully sites on Mars suggest that a mechanism not requiring liquid water may be responsible for carving these gullies on Mars. (Gullies on Mars are a different type of feature than seasonal dark streaks called recurring slope lineae or RSL; water in the form of hydrated salt has been identified at RSL sites.) The HiRISE image is a portion of HiRISE observation PSP_002932_1445. The lower image is from the same HiRISE observation, with a CRISM mineral map overlaid. http://photojournal.jpl.nasa.gov/catalog/PIA20763

  12. Topological color codes on Union Jack lattices: a stable implementation of the whole Clifford group

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katzgraber, Helmut G.; Theoretische Physik, ETH Zurich, CH-8093 Zurich; Bombin, H.

    We study the error threshold of topological color codes on Union Jack lattices that allow for the full implementation of the whole Clifford group of quantum gates. After mapping the error-correction process onto a statistical mechanical random three-body Ising model on a Union Jack lattice, we compute its phase diagram in the temperature-disorder plane using Monte Carlo simulations. Surprisingly, topological color codes on Union Jack lattices have a similar error stability to color codes on triangular lattices, as well as to the Kitaev toric code. The enhanced computational capabilities of the topological color codes on Union Jack lattices with respect to triangular lattices and the toric code, combined with the inherent robustness of this implementation, show good prospects for future stable quantum computer implementations.
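
    The statistical-mechanical mapping can be illustrated with a toy Metropolis update for a three-body Ising model on triangular plaquettes; this is a minimal sketch under assumed conventions (spins ±1, a coupling J_t per plaquette), not the authors' simulation code:

```python
import math
import random

def three_body_energy(spins, triangles, couplings):
    # E = -sum over plaquettes t of J_t * s_i * s_j * s_k
    return -sum(J * spins[i] * spins[j] * spins[k]
                for (i, j, k), J in zip(triangles, couplings))

def metropolis_step(spins, triangles, couplings, beta, rng):
    # flip a random spin; a flip negates every plaquette term containing it,
    # so dE = 2 * sum of the affected terms
    site = rng.randrange(len(spins))
    dE = 2 * sum(J * spins[i] * spins[j] * spins[k]
                 for (i, j, k), J in zip(triangles, couplings)
                 if site in (i, j, k))
    if dE <= 0 or rng.random() < math.exp(-beta * dE):
        spins[site] = -spins[site]
```

    Disorder enters by drawing each coupling negative with some probability, which is what produces the temperature-disorder phase diagram mentioned in the abstract.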

  13. A panoramic coded aperture gamma camera for radioactive hotspots localization

    NASA Astrophysics Data System (ADS)

    Paradiso, V.; Amgarou, K.; Blanc De Lanaute, N.; Schoepff, V.; Amoyal, G.; Mahe, C.; Beltramello, O.; Liénard, E.

    2017-11-01

    A known disadvantage of the coded aperture imaging approach is its limited field-of-view (FOV), which often proves insufficient when analysing complex dismantling scenes such as post-accidental scenarios, where multiple measurements are needed to fully characterize the scene. In order to overcome this limitation, a panoramic coded aperture γ-camera prototype has been developed. The system is based on a 1 mm thick CdTe detector directly bump-bonded to a Timepix readout chip, developed by the Medipix2 collaboration (256 × 256 pixels, 55 μm pitch, 14.08 × 14.08 mm2 sensitive area). A MURA pattern coded aperture is used, allowing for background subtraction without the use of heavy shielding. This system is then combined with a USB color camera. The output of each measurement is a semi-spherical image covering a FOV of 360 degrees horizontally and 80 degrees vertically, rendered in spherical coordinates (θ, φ). The geometrical shapes of the radiation-emitting objects are preserved by first registering and stitching the optical images captured by the prototype and subsequently applying the same transformations to their corresponding radiation images. Panoramic gamma images generated using the technique proposed in this paper are described and discussed, along with the main experimental results obtained in laboratory campaigns.
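
    For illustration, a MURA aperture pattern of the kind mentioned above can be generated from quadratic residues; this is the standard Gottesman-Fenimore construction sketched here, not the prototype's actual mask code:

```python
def quadratic_residues(p):
    # set of nonzero quadratic residues modulo a prime p
    return {(x * x) % p for x in range(1, p)}

def mura_mask(p):
    # MURA aperture of prime order p: row 0 closed; column 0 open
    # (except the corner); an interior cell (i, j) is open exactly
    # when the Legendre symbols of i and j agree
    qr = quadratic_residues(p)
    C = [1 if i in qr else -1 for i in range(p)]
    mask = [[0] * p for _ in range(p)]
    for i in range(p):
        for j in range(p):
            if i == 0:
                mask[i][j] = 0
            elif j == 0:
                mask[i][j] = 1
            elif C[i] * C[j] == 1:
                mask[i][j] = 1
    return mask
```

    A MURA of order p has (p² - 1)/2 open elements, i.e. roughly 50% throughput, which is what makes background subtraction with a mask/anti-mask pair practical without heavy shielding.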

  14. Wavelet domain textual coding of Ottoman script images

    NASA Astrophysics Data System (ADS)

    Gerek, Oemer N.; Cetin, Enis A.; Tewfik, Ahmed H.

    1996-02-01

    Image coding using the wavelet transform, DCT, and similar transform techniques is well established. On the other hand, these coding methods neither take into account the special characteristics of the images in a database nor are they suitable for fast database search. In this paper, the digital archiving of Ottoman printings is considered. Ottoman documents are printed in Arabic letters. Witten et al. describe a scheme based on finding the characters in binary document images and encoding the positions of the repeated characters. This method efficiently compresses document images and is suitable for database search, but it cannot be applied to Ottoman or Arabic documents, as the concept of a character is different in Ottoman or Arabic. Typically, one has to deal with compound structures consisting of a group of letters. Therefore, the matching criterion is defined according to those compound structures. Furthermore, the text images are gray-tone or color images for Ottoman scripts, for reasons that are described in the paper. In our method, the compound structure matching is carried out in the wavelet domain, which reduces the search space and increases the compression ratio. In addition to the wavelet transformation, which corresponds to linear subband decomposition, we also used nonlinear subband decomposition. The filters in the nonlinear subband decomposition have the property of preserving edges in the low-resolution subband image.
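
    The linear subband decomposition mentioned above can be illustrated with a one-level 2-D Haar transform, which splits an image into a low-resolution (LL) subband plus detail subbands; this sketch uses plain Python lists and is not the authors' implementation:

```python
def haar_rows(m):
    # one-level Haar split of each row into averages (low-pass)
    # followed by differences (high-pass)
    out = []
    for row in m:
        lo = [(row[2 * i] + row[2 * i + 1]) / 2 for i in range(len(row) // 2)]
        hi = [(row[2 * i] - row[2 * i + 1]) / 2 for i in range(len(row) // 2)]
        out.append(lo + hi)
    return out

def haar2d(img):
    # separable 2-D transform: filter rows, then columns (via transpose);
    # img must have even dimensions
    t = haar_rows(img)
    t = [list(c) for c in zip(*t)]
    t = haar_rows(t)
    return [list(c) for c in zip(*t)]
```

    Matching compound structures against the small LL subband instead of the full image is what shrinks the search space, since the detail subbands can be ignored during the coarse match.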

  15. Interconnecting smartphone, image analysis server, and case report forms in clinical trials for automatic skin lesion tracking

    NASA Astrophysics Data System (ADS)

    Haak, Daniel; Doma, Aliaa; Gombert, Alexander; Deserno, Thomas M.

    2016-03-01

    Today, subjects' medical data in controlled clinical trials is captured digitally in electronic case report forms (eCRFs). However, eCRFs only insufficiently support integration of subjects' image data, although medical imaging is looming large in studies today. For bed-side image integration, we present a mobile application (App) that utilizes the smartphone-integrated camera. To ensure high image quality with this inexpensive consumer hardware, color reference cards are placed in the camera's field of view next to the lesion. The cards are used for automatic calibration of geometry, color, and contrast. In addition, a personalized code is read from the cards that allows subject identification. For data integration, the App is connected to a communication and image analysis server that also holds the code-study-subject relation. In a second system interconnection, web services are used to connect the smartphone with OpenClinica, an open-source, Food and Drug Administration (FDA)-approved electronic data capture (EDC) system for clinical trials. Once the photographs have been securely stored on the server, they are released automatically from the mobile device. The workflow of the system is demonstrated by an ongoing clinical trial, in which photographic documentation is frequently performed to measure the effect of wound incision management systems. All 205 images that have been collected in the study so far have been correctly identified and successfully integrated into the corresponding subjects' eCRFs. Using this system, manual steps for the study personnel are reduced and, therefore, errors, latency, and costs are decreased. Our approach also increases data security and privacy.
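
    The color-calibration step can be sketched as a per-channel least-squares fit between patch values measured from the reference card and their known target values; the function names here are hypothetical, not the App's API:

```python
def fit_gain_offset(measured, reference):
    # least-squares fit of reference ≈ gain * measured + offset
    # for one color channel, from the card's calibration patches
    n = len(measured)
    mx = sum(measured) / n
    my = sum(reference) / n
    sxx = sum((x - mx) ** 2 for x in measured)
    sxy = sum((x - mx) * (y - my) for x, y in zip(measured, reference))
    gain = sxy / sxx
    offset = my - gain * mx
    return gain, offset

def calibrate_channel(pixels, gain, offset):
    # apply the fitted correction to one channel, clamped to 8 bits
    return [min(255, max(0, round(gain * v + offset))) for v in pixels]
```

    Fitting each of R, G, and B separately corrects both illumination color cast and contrast, which is why inexpensive smartphone cameras can still yield comparable lesion photographs across visits.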

  16. Choroidal OCT

    NASA Astrophysics Data System (ADS)

    Esmaeelpour, Marieh; Drexler, Wolfgang

    Novel imaging devices, imaging strategies and automated image analysis with optical coherence tomography have improved our understanding of the choroid in health and pathology. Non-invasive in-vivo high resolution choroidal imaging has had its highest impact in the investigation of macular diseases such as diabetic macular edema and age-related macular degeneration. Choroidal thickness may provide a clinically feasible measure of disease stage and treatment success. It will even support disease diagnosis and phenotyping, as is demonstrated in this chapter. Utilizing color-coded thickness mapping of the choroid and its Sattler's and Haller's layers may further strengthen the sensitivity of the investigation findings.

  17. Rocks Here Sequester Some of Mars' Early Atmosphere

    NASA Image and Video Library

    2015-09-02

    This view combines information from two instruments on NASA's Mars Reconnaissance Orbiter to map color-coded composition over the shape of the ground in a small portion of the Nili Fossae plains region of Mars' northern hemisphere. This site is part of the largest known carbonate-rich deposit on Mars. In the color coding used for this map, green indicates a carbonate-rich composition, brown indicates olivine-rich sands, and purple indicates basaltic composition. Carbon dioxide from the atmosphere on early Mars reacted with surface rocks to form carbonate, thinning the atmosphere by sequestering the carbon in the rocks. An analysis of the amount of carbon contained in Nili Fossae plains estimated the total at no more than twice the amount of carbon in the modern atmosphere of Mars, which is mostly carbon dioxide. That is much more than in all other known carbonate on Mars, but far short of enough to explain how Mars could have had a thick enough atmosphere to keep surface water from freezing during a period when rivers were cutting extensive valley networks on the Red Planet. Other possible explanations for the change from an era with rivers to dry modern Mars are being investigated. This image covers an area approximately 1.4 miles (2.3 kilometers) wide. A scale bar indicates 500 meters (1,640 feet). The full extent of the carbonate-containing deposit in the region is at least as large as Delaware and perhaps as large as Arizona. The color coding is from data acquired by the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM), in observation FRT0000C968 made on Sept. 19, 2008. The base map showing land shapes is from the High Resolution Imaging Science Experiment (HiRISE) camera. It is one product from HiRISE observation ESP_010351_2020, made July 20, 2013. http://photojournal.jpl.nasa.gov/catalog/PIA19817

  18. Color Coded Cards for Student Behavior Management in Higher Education Environments

    ERIC Educational Resources Information Center

    Alhalabi, Wadee; Alhalabi, Mobeen

    2017-01-01

    The Color Coded Cards system as a possibly effective class management tool is the focus of this research. The Color Coded Cards system involves each student being given a card with a specific color based on his or her behavior. The main objective of the research is to find out whether this system effectively improves students' behavior, thus…

  19. Neural dynamics of image representation in the primary visual cortex

    PubMed Central

    Yan, Xiaogang; Khambhati, Ankit; Liu, Lei; Lee, Tai Sing

    2013-01-01

    Horizontal connections in the primary visual cortex have been hypothesized to play a number of computational roles: association field for contour completion, surface interpolation, surround suppression, and saliency computation. Here, we argue that horizontal connections might also serve a critical role of computing the appropriate codes for image representation. That the early visual cortex or V1 explicitly represents the image we perceive has been a common assumption in computational theories of efficient coding (Olshausen and Field 1996), yet such a framework for understanding the circuitry in V1 has not been seriously entertained in the neurophysiological community. In fact, a number of recent fMRI and neurophysiological studies cast doubt on the neural validity of such an isomorphic representation (Cornelissen et al. 2006, von der Heydt et al. 2003). In this study, we investigated, neurophysiologically, how V1 neurons respond to uniform color surfaces and show that spiking activities of neurons can be decomposed into three components: a bottom-up feedforward input, an articulation of color tuning and a contextual modulation signal that is inversely proportional to the distance away from the bounding contrast border. We demonstrate through computational simulations that the behaviors of a model for image representation are consistent with many aspects of our neural observations. We conclude that the hypothesis of isomorphic representation of images in V1 remains viable, and this hypothesis suggests an additional new interpretation of the functional roles of horizontal connections in the primary visual cortex. PMID:22944076

  20. Satellite Image Mosaic Engine

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2006-01-01

    A computer program automatically builds large, full-resolution mosaics of multispectral images of Earth landmasses from images acquired by Landsat 7, complete with matching of colors and blending between adjacent scenes. While the code has been used extensively for Landsat, it could also be used for other data sources. A single mosaic of as many as 8,000 scenes, represented by more than 5 terabytes of data and the largest set produced in this work, demonstrated what the code could do to provide global coverage. The program first statistically analyzes input images to determine areas of coverage and data-value distributions. It then transforms the input images from their original universal transverse Mercator coordinates to other geographical coordinates, with scaling. It applies a first-order polynomial brightness correction to each band in each scene. It uses a data-mask image for selecting data and blending of input scenes. Under control by a user, the program can be made to operate on small parts of the output image space, with check-point and restart capabilities. The program runs on SGI IRIX computers. It is capable of parallel processing using shared-memory code, large memories, and tens of central processing units. It can retrieve input data and store output data at locations remote from the processors on which it is executed.
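
    As a rough sketch of the blending step described above (illustrative code, not the program's SGI implementation), co-registered overlapping scenes can be combined using their data-mask values as weights:

```python
def blend_scenes(s1, s2, w1, w2):
    # data-mask-weighted blend of two overlapping, co-registered scenes;
    # a weight of 0 marks a pixel outside that scene's valid-data mask
    rows, cols = len(s1), len(s1[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            tw = w1[r][c] + w2[r][c]
            if tw:
                out[r][c] = (s1[r][c] * w1[r][c] + s2[r][c] * w2[r][c]) / tw
    return out
```

    Feathering the mask weights down toward scene edges (rather than using hard 0/1 masks) is what hides seams between adjacent Landsat scenes after the per-band brightness correction.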

  1. Measurement of myocardial perfusion and infarction size using computer-aided diagnosis system for myocardial contrast echocardiography.

    PubMed

    Du, Guo-Qing; Xue, Jing-Yi; Guo, Yanhui; Chen, Shuang; Du, Pei; Wu, Yan; Wang, Yu-Hang; Zong, Li-Qiu; Tian, Jia-Wei

    2015-09-01

    Proper evaluation of myocardial microvascular perfusion and assessment of infarct size is critical for clinicians. We have developed a novel computer-aided diagnosis (CAD) approach for myocardial contrast echocardiography (MCE) to measure myocardial perfusion and infarct size. Rabbits underwent 15 min of coronary occlusion followed by reperfusion (group I, n = 15) or 60 min of coronary occlusion followed by reperfusion (group II, n = 15). Myocardial contrast echocardiography was performed before and 7 d after ischemia/reperfusion, and images were analyzed with the CAD system on the basis of eliminating particle swarm optimization clustering analysis. The myocardium was quickly and accurately detected using contrast-enhanced images, myocardial perfusion was quantitatively calibrated and a color-coded map calibrated by contrast intensity and automatically produced by the CAD system was used to outline the infarction region. Calibrated contrast intensity was significantly lower in infarct regions than in non-infarct regions, allowing differentiation of abnormal and normal myocardial perfusion. Receiver operating characteristic curve analysis documented that -54-pixel contrast intensity was an optimal cutoff point for the identification of infarcted myocardium with a sensitivity of 95.45% and specificity of 87.50%. Infarct sizes obtained using myocardial perfusion defect analysis of original contrast images and the contrast intensity-based color-coded map in computerized images were compared with infarct sizes measured using triphenyltetrazolium chloride staining. Use of the proposed CAD approach provided observers with more information. 
The infarct sizes obtained with myocardial perfusion defect analysis, the contrast intensity-based color-coded map and triphenyltetrazolium chloride staining were 23.72 ± 8.41%, 21.77 ± 7.8% and 18.21 ± 4.40% (% left ventricle) respectively (p > 0.05), indicating that computerized myocardial contrast echocardiography can accurately measure infarct size. On the basis of the results, we believe the CAD method can quickly and automatically measure myocardial perfusion and infarct size and will, it is hoped, be very helpful in clinical therapeutics. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
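
    The cutoff selection reported above can be illustrated with a Youden-index search over candidate thresholds; this is a hedged sketch of the general ROC technique, not the CAD system's code (here infarcted regions are assumed to have lower calibrated intensity, as in the abstract):

```python
def best_cutoff(infarct, normal, candidates):
    # choose the cutoff maximizing Youden's J = sensitivity + specificity - 1;
    # a pixel is called "infarct" when its calibrated intensity is <= cutoff
    best_c, best_j = None, -1.0
    for c in candidates:
        sens = sum(1 for v in infarct if v <= c) / len(infarct)
        spec = sum(1 for v in normal if v > c) / len(normal)
        j = sens + spec - 1.0
        if j > best_j:
            best_c, best_j = c, j
    return best_c
```

    Sweeping every observed intensity as a candidate traces out the ROC curve; the reported sensitivity of 95.45% and specificity of 87.50% correspond to the single point chosen by this criterion.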

  2. MicroV Technology to Improve Transcranial Color Coded Doppler Examinations.

    PubMed

    Malferrari, Giovanni; Pulito, Giuseppe; Pizzini, Attilia Maria; Carraro, Nicola; Meneghetti, Giorgio; Sanzaro, Enzo; Prati, Patrizio; Siniscalchi, Antonio; Monaco, Daniela

    2018-05-04

    The purpose of this review is to provide an update on technology related to Transcranial Color Coded Doppler examinations. Microvascularization (MicroV) is an emerging Power Doppler technology that allows visualization of low and weak blood flows even at high depths, thus providing a suitable technique for transcranial ultrasound analysis. With MicroV, reconstruction of the vessel shape can be improved, without any overestimation. Furthermore, by analyzing the Doppler signal, MicroV allows a global image of the Circle of Willis. Transcranial Doppler was originally developed for the velocimetric analysis of intracranial vessels, in particular to detect stenoses and to assess collateral circulation. Doppler velocimetric analysis was then compared to other neuroimaging techniques, thus providing cut-off thresholds. Transcranial Color Coded Doppler sonography allowed the characterization of vessel morphology. In both Color Doppler and Power Doppler, the signal overestimated the shape of the intracranial vessels, mostly in the presence of thin vessels and high depths of study. In further neurosonology development efforts, attempts have been made to address morphology issues and overcome technical limitations. The use of contrast agents has helped in this regard by introducing harmonics and subtraction software, which allowed better morphological studies of vessels due to their increased signal-to-noise ratio. With no limitations in terms of learning curve, examination time, or contrast-agent techniques, and owing to its high signal-to-noise ratio, MicroV has shown great potential to obtain the best morphological definition. Copyright © 2018 by the American Society of Neuroimaging.

  3. Color-Coded Imaging of Breast Cancer Metastatic Niche Formation in Nude Mice.

    PubMed

    Suetsugu, Atsushi; Momiyama, Masashi; Hiroshima, Yukihiko; Shimizu, Masahito; Saji, Shigetoyo; Moriwaki, Hisataka; Bouvet, Michael; Hoffman, Robert M

    2015-12-01

    We report here a color-coded imaging model in which metastatic niches in the lung and liver of breast cancer can be identified. The transgenic green fluorescent protein (GFP)-expressing nude mouse was used as the host. The GFP nude mouse expresses GFP in all organs. However, GFP expression is dim in the liver parenchymal cells. Mouse mammary tumor cells (MMT 060562) (MMT), expressing red fluorescent protein (RFP), were injected in the tail vein of GFP nude mice to produce experimental lung metastasis and in the spleen of GFP nude mice to establish a liver metastasis model. Niche formation in the lung and liver metastasis was observed using very high resolution imaging systems. In the lung, GFP host-mouse cells accumulated around as few as a single MMT-RFP cell. In addition, GFP host cells were observed to form circle-shaped niches in the lung even without RFP cancer cells, which was possibly a niche in which future metastasis could be formed. In the liver, as with the lung, GFP host cells could form circle-shaped niches. Liver and lung metastases were removed surgically and cultured in vitro. MMT-RFP cells and GFP host cells resembling cancer-associated fibroblasts (CAFs) were observed interacting, suggesting that CAFs could serve as a metastatic niche. © 2015 Wiley Periodicals, Inc.

  4. Color in the Cortex—single- and double-opponent cells

    PubMed Central

    Shapley, Robert; Hawken, Michael

    2011-01-01

    This is a review of the research during the past 25 years on cortical processing of color signals. At the beginning of the period the modular view of cortical processing predominated. However, at present an alternative view, that color and form are linked inextricably in visual cortical processing, is more persuasive than it seemed in 1985. Also, the role of the primary visual cortex, V1, in color processing now seems much larger than it did in 1985. The re-evaluation of the important role of V1 in color vision was caused in part by investigations of human V1 responses to color, measured with functional magnetic resonance imaging, fMRI, and in part by the results of numerous studies of single-unit neurophysiology in non-human primates. The neurophysiological results have highlighted the importance of double-opponent cells in V1. Another new concept is population coding of hue, saturation, and brightness in cortical neuronal population activity. PMID:21333672

  5. Ceres Topographic Globe Animation

    NASA Image and Video Library

    2015-07-28

    This frame from an animation shows a color-coded map from NASA's Dawn mission revealing the highs and lows of topography on the surface of dwarf planet Ceres. The color scale extends from 3.7 miles (6 kilometers) below the surface, in purple, to 3.7 miles (6 kilometers) above the surface, in brown. The brightest features (those appearing nearly white) -- including the well-known bright spots within a crater in the northern hemisphere -- are simply reflective areas and do not represent elevation. The topographic map was constructed by analyzing images from Dawn's framing camera taken from varying sun and viewing angles. The map was combined with an image mosaic of Ceres and projected onto a 3-D shape model of the dwarf planet to create the animation. http://photojournal.jpl.nasa.gov/catalog/PIA19605

  6. SRTM Colored Height and Shaded Relief: Sredinnyy Khrebet, Kamchatka Peninsula, Russia

    NASA Technical Reports Server (NTRS)

    2001-01-01

    The Kamchatka Peninsula in eastern Russia is shown in this scene created from a preliminary elevation model derived from the first data collected during the Shuttle Radar Topography Mission (SRTM) on February 12, 2000. Sredinnyy Khrebet, the mountain range that makes up the spine of the peninsula, is a chain of active volcanic peaks. Pleistocene and recent glaciers have carved the broad valleys and jagged ridges that are common here. The relative youth of the volcanism is revealed by the topography as infilling and smoothing of the otherwise rugged terrain by lava, ash, and pyroclastic flows, particularly surrounding the high peaks in the south central part of the image. Elevations here range from near sea level up to 2,618 meters (8,590 feet).

    Two visualization methods were combined to produce this image: shading and color coding of topographic height. The shade image was derived by computing topographic slope in the north-south direction. Northern slopes appear bright and southern slopes appear dark. Color coding is directly related to topographic height, with green at the lower elevations, rising through yellow, red, and magenta, to white at the highest elevations.
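
    The two-step method described above can be sketched in a few lines; the color stops follow the green-yellow-red-magenta-white ramp named in the text, while the function names and stop positions are illustrative:

```python
def ns_shade(dem):
    # dem[row][col] with row 0 at north; a positive value means the surface
    # rises toward the south, i.e. a north-facing slope, rendered bright
    return [[dem[r + 1][c] - dem[r][c] for c in range(len(dem[0]))]
            for r in range(len(dem) - 1)]

STOPS = [(0.00, (0, 128, 0)),      # green at the lowest elevations
         (0.25, (255, 255, 0)),    # yellow
         (0.50, (255, 0, 0)),      # red
         (0.75, (255, 0, 255)),    # magenta
         (1.00, (255, 255, 255))]  # white at the highest peaks

def height_to_rgb(h, hmin, hmax):
    # linear interpolation between adjacent color stops
    t = (h - hmin) / (hmax - hmin)
    for (t0, c0), (t1, c1) in zip(STOPS, STOPS[1:]):
        if t <= t1:
            f = (t - t0) / (t1 - t0)
            return tuple(round(a + f * (b - a)) for a, b in zip(c0, c1))
    return STOPS[-1][1]
```

    Modulating the hue from height_to_rgb by the shade value then gives the combined picture: color carries absolute elevation while brightness carries local slope.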

    Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on Space Shuttle Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense, and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise, Washington, DC.

    Size: 93.0 x 105.7 kilometers (57.7 x 65.6 miles) Location: 58.3 deg. North lat., 160.9 deg. East lon. Orientation: North toward the top Image Data: Shaded and colored SRTM elevation model Date Acquired: February 12, 2000

  7. 29 CFR 1915.90 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 7 2013-07-01 2013-07-01 false Safety color code for marking physical hazards. 1915.90 Section 1915.90 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH... General Working Conditions § 1915.90 Safety color code for marking physical hazards. The requirements...

  8. 29 CFR 1915.90 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 7 2014-07-01 2014-07-01 false Safety color code for marking physical hazards. 1915.90 Section 1915.90 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH... General Working Conditions § 1915.90 Safety color code for marking physical hazards. The requirements...

  9. 29 CFR 1915.90 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 7 2012-07-01 2012-07-01 false Safety color code for marking physical hazards. 1915.90 Section 1915.90 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH... General Working Conditions § 1915.90 Safety color code for marking physical hazards. The requirements...

  10. Olduvai Gorge, Shaded Relief and Colored Height

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Three striking and important areas of Tanzania in eastern Africa are shown in this color-coded shaded relief image from the Shuttle Radar Topography Mission. The largest circular feature in the center right is the caldera, or central crater, of the extinct volcano Ngorongoro. It is surrounded by a number of smaller volcanoes, all associated with the Great Rift Valley, a geologic fault system that extends for about 4,830 kilometers (2,995 miles) from Syria to central Mozambique.

    Ngorongoro's caldera is 22.5 kilometers (14 miles) across at its widest point and is 610 meters (2,000 feet) deep. Its floor is very level, holding a lake fed by streams running down the caldera wall. It is part of the Ngorongoro Conservation Area and is home to over 75,000 animals. The lakes south of the crater are Lake Eyasi and Lake Manyara, also part of the conservation area.

    The relatively smooth region in the upper left of the image is the Serengeti National Park, the largest in Tanzania. The park encompasses the main part of the Serengeti ecosystem, supporting the greatest remaining concentration of plains game in Africa including more than 3,000,000 large mammals. The animals roam the park freely and in the spectacular migrations, huge herds of wild animals move to other areas of the park in search of greener grazing grounds (requiring over 4,000 tons of grass each day) and water.

    The faint, nearly horizontal line near the center of the image is Olduvai Gorge, made famous by the discovery of remains of the earliest humans to exist. Between 1.9 and 1.2 million years ago a salt lake occupied this area, followed by the appearance of fresh water streams and small ponds. Exposed deposits show rich fossil fauna, many hominid remains and items belonging to one of the oldest stone tool technologies, called Olduwan. The time span of the objects recovered dates from 2,100,000 to 15,000 years ago.

    Two visualization methods were combined to produce the image: shading and color coding of topographic height. The shade image was derived by computing topographic slope in the northwest-southeast direction, so that northwest slopes appear bright and southeast slopes appear dark. Color coding is directly related to topographic height, with green at the lower elevations, rising through yellow and tan, to white at the highest elevations.

    Elevation data used in this image were acquired by the Shuttle Radar Topography Mission aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Geospatial-Intelligence Agency (NGA) of the U.S. Department of Defense and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C.

    Location: 3 degrees south latitude, 35 degrees east longitude Orientation: North toward the top, Mercator projection Size: 223 by 223 kilometers (138 by 138 miles) Image Data: shaded and colored SRTM elevation model Date Acquired: February 2000

  11. Development of a vision-based pH reading system

    NASA Astrophysics Data System (ADS)

    Hur, Min Goo; Kong, Young Bae; Lee, Eun Je; Park, Jeong Hoon; Yang, Seung Dae; Moon, Ha Jung; Lee, Dong Hoon

    2015-10-01

    pH paper is generally used for pH interpretation in the QC (quality control) process of radiopharmaceuticals. pH paper is easy to handle and useful for small samples such as radio-isotopes and radioisotope (RI)-labeled compounds for positron emission tomography (PET). However, pH-paper-based detection methods may introduce errors due to limitations of eyesight and inaccurate readings. In this paper, we report a new device for pH reading and related software. The proposed pH reading system is developed with a vision algorithm based on an RGB library. The pH reading system is divided into two parts. The first is the reading device, which consists of a light source, a CCD camera, and a data acquisition (DAQ) board. To improve the accuracy of the sensitivity, we utilize the three primary colors of the LED (light-emitting diode) in the reading device. The use of three colors is better than the use of a single white LED because of their distinct wavelengths. The other part is a graphical user interface (GUI) program for the vision interface and report generation. The GUI program inserts the color codes of the pH paper into the database; then, the CCD camera captures the pH paper and compares its color with the RGB database image in the reading mode. The software captures and reports information on the samples, such as pH results, captured images, and library images, and saves them as Excel files.
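
    The reading mode described above amounts to a nearest-neighbor match of the captured patch color against the stored RGB library; a minimal sketch with illustrative names and made-up library values, not the actual GUI code:

```python
def classify_ph(rgb, library):
    # library maps pH value -> reference (r, g, b); return the pH whose
    # reference color is closest in squared Euclidean RGB distance
    return min(library,
               key=lambda ph: sum((a - b) ** 2
                                  for a, b in zip(rgb, library[ph])))
```

    In practice the captured color would first be averaged over the patch and white-balanced against the LED illumination before the comparison.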

  12. Color visualization for fluid flow prediction

    NASA Technical Reports Server (NTRS)

    Smith, R. E.; Speray, D. E.

    1982-01-01

    High-resolution raster scan color graphics allow variables to be presented as a continuum, in a color-coded picture that is referenced to a geometry such as a flow field grid or a boundary surface. Software is used to map a scalar variable such as pressure or temperature, defined on a two-dimensional slice of a flow field. The geometric shape is preserved in the resulting picture, and the relative magnitude of the variable is color-coded onto the geometric shape. The primary numerical process for color coding is an efficient search along a raster scan line to locate the quadrilateral block in the grid that bounds each pixel on the line. Tension spline interpolation is performed relative to the grid for specific values of the scalar variable, which is then color coded. When all pixels for the field of view are color-defined, a picture is played back from a memory device onto a television screen.
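    The color-coding step, mapping a scalar such as pressure onto a color for each pixel, can be sketched as follows. The blue-to-green-to-red ramp is an assumed example palette, not the one used in the original software:

    ```python
    def scalar_to_rgb(value, vmin, vmax):
        """Map a scalar (e.g. pressure) to an RGB color: blue at vmin,
        green at the midpoint, red at vmax. A minimal stand-in for the
        color-coding step described above."""
        t = (value - vmin) / (vmax - vmin)
        t = min(max(t, 0.0), 1.0)          # clamp out-of-range values
        if t < 0.5:                        # blue -> green
            f = t / 0.5
            return (0, int(255 * f), int(255 * (1 - f)))
        f = (t - 0.5) / 0.5                # green -> red
        return (int(255 * f), int(255 * (1 - f)), 0)

    print(scalar_to_rgb(0.0, 0.0, 1.0))  # (0, 0, 255): blue at the minimum
    print(scalar_to_rgb(1.0, 0.0, 1.0))  # (255, 0, 0): red at the maximum
    ```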

  13. Teaching Speech Organization and Outlining Using a Color-Coded Approach.

    ERIC Educational Resources Information Center

    Hearn, Ralene

    The organization/outlining unit in the basic Public Speaking course can be made more interesting by using a color-coded instructional method that captivates students, facilitates understanding, and provides the opportunity for interesting reinforcement activities. The two part lesson includes a mini-lecture with a color-coded outline and a two…

  14. Imitation Learning Errors Are Affected by Visual Cues in Both Performance and Observation Phases.

    PubMed

    Mizuguchi, Takashi; Sugimura, Ryoko; Shimada, Hideaki; Hasegawa, Takehiro

    2017-08-01

    Mechanisms of action imitation were examined. Previous studies have suggested that success or failure of imitation is determined at the point of observing an action; in other words, cognitive processing after observation is not related to the success of imitation. Twenty university students participated in each of three experiments in which they observed a series of object manipulations consisting of four elements (hands, tools, object, and end points) and then imitated the manipulations. In Experiment 1, a specific initially observed element was color coded, and the specific manipulated object at the imitation stage was identically color coded; participants accurately imitated the color-coded element. In Experiment 2, a specific element was color coded at the observation but not at the imitation stage, and there were no effects of color coding on imitation. In Experiment 3, participants were verbally instructed to attend to a specific element at the imitation stage, but the verbal instructions had no effect. Thus, the success of imitation may not be determined at the stage of observing an action, and color coding can provide a clue for imitation at the imitation stage.

  15. Overview of the Impact Region

    NASA Image and Video Library

    2015-04-29

    On April 30th, this region of Mercury's surface will have a new crater! Traveling at 3.91 kilometers per second (over 8,700 miles per hour), the MESSENGER spacecraft will collide with Mercury's surface, creating a crater estimated to be 16 meters (52 feet) in diameter. The large, 400-kilometer-diameter (250-mile-diameter), impact basin Shakespeare occupies the bottom left quarter of this image. Shakespeare is filled with smooth plains material, likely due to extensive lava flooding the basin in the past. As of 24 hours before the impact, the current best estimates predict that the spacecraft will strike a ridge slightly to the northeast of Shakespeare. View this image to see more details of the predicted impact site and time. Instrument: Mercury Dual Imaging System (MDIS) and Mercury Laser Altimeter (MLA) Latitude Range: 49°-59° N Longitude Range: 204°-217° E Topography: Exaggerated by a factor of 5.5. Colors: Coded by topography. The tallest regions are colored red and are roughly 3 kilometers (1.9 miles) higher than low-lying areas such as the floors of impact craters, colored blue. Scale: The large crater on the left side of the image is Janacek, with a diameter of 48 kilometers (30 miles) http://photojournal.jpl.nasa.gov/catalog/PIA19444

  16. Temperature Map, "Bonneville Crater" (1:35 p.m.)

    NASA Image and Video Library

    2004-05-17

    Rates of change in surface temperatures during a martian day indicate differences in particle size in and near "Bonneville Crater." This image is the third in a series of five with color-coded temperature information from different times of day. This one is from 1:35 p.m. local solar time at the site where NASA's Mars Exploration Rover Spirit is exploring Mars. Temperature information from Spirit's miniature thermal emission spectrometer is overlaid onto a view of the site from Spirit's panoramic camera. In this color-coded map, quicker reddening during the day suggests sand or dust. (Red is about 270 Kelvin or 27 degrees Fahrenheit.) An example of this is in the shallow depression in the right foreground. Areas that stay blue longer into the day have larger rocks. (Blue indicates about 230 Kelvin or minus 45 Degrees F.) An example is the rock in the left foreground. http://photojournal.jpl.nasa.gov/catalog/PIA05930
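    The Kelvin-to-Fahrenheit correspondence quoted above can be checked directly:

    ```python
    def kelvin_to_fahrenheit(k):
        """Convert a temperature in Kelvin to degrees Fahrenheit."""
        return (k - 273.15) * 9 / 5 + 32

    print(round(kelvin_to_fahrenheit(270)))  # 26, i.e. "about 27 F" as in the text
    print(round(kelvin_to_fahrenheit(230)))  # -46, i.e. about minus 45 F
    ```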

  17. The First Six Months of the LLNL-CfPA-MSSSO Search for Baryonic Dark Matter in the Galaxy's Halo via its Gravitational Microlensing Signature

    NASA Astrophysics Data System (ADS)

    Cook, K.; Alcock, C.; Allsman, R.; Axelrod, T.; Bennett, D.; Marshall, S.; Stubbs, C.; Griest, K.; Perlmutter, S.; Sutherland, W.; Freeman, K.; Peterson, B.; Quinn, P.; Rodgers, A.

    1992-12-01

    This collaboration, dubbed the MACHO Project (an acronym for MAssive Compact Halo Objects), has refurbished the 1.27-m Great Melbourne Telescope at Mt. Stromlo and equipped it with a corrected 1-degree field of view. The prime focus corrector yields a red and blue beam for simultaneous imaging in two passbands, 4500-6100 Angstroms and 6100-7900 Angstroms. Each beam is imaged by a 2x2 array of 2048x2048 pixel CCDs which are simultaneously read out from two amplifiers on each CCD. A 32-megapixel dual-color image of 0.5 square degree is clocked directly into computer memory in less than 70 seconds. We are using this system to monitor more than 10^7 stars in the Magellanic Clouds for gravitational microlensing events and will soon monitor an additional 10^7 stars in the bulge of our galaxy. Image data go directly into a reduction pipeline where photometry for stars in an image is determined and stored in a database. An early version of this pipeline has used a simple aperture photometry code and results from this will be presented. A more sophisticated PSF-fitting photometry code is currently being installed in the pipeline and results should also be available at the meeting. The PSF-fitting code has also been used to produce ~10^7 photometric measurements outside of the pipeline. This poster will present details of the instrumentation, data pipeline, observing conditions (weather and seeing), reductions and analyses for the first six months of dual-color observing. Eventually, we expect to be able to determine whether MACHOs are a significant component of the galactic halo in the mass range of 10^-6 M_sun < M ≲ 100 M_sun.

  18. Quantum-dot-tagged microbeads for multiplexed optical coding of biomolecules.

    PubMed

    Han, M; Gao, X; Su, J Z; Nie, S

    2001-07-01

    Multicolor optical coding for biological assays has been achieved by embedding different-sized quantum dots (zinc sulfide-capped cadmium selenide nanocrystals) into polymeric microbeads at precisely controlled ratios. Their novel optical properties (e.g., size-tunable emission and simultaneous excitation) render these highly luminescent quantum dots (QDs) ideal fluorophores for wavelength-and-intensity multiplexing. The use of 10 intensity levels and 6 colors could theoretically code one million nucleic acid or protein sequences. Imaging and spectroscopic measurements indicate that the QD-tagged beads are highly uniform and reproducible, yielding bead identification accuracies as high as 99.99% under favorable conditions. DNA hybridization studies demonstrate that the coding and target signals can be simultaneously read at the single-bead level. This spectral coding technology is expected to open new opportunities in gene expression studies, high-throughput screening, and medical diagnostics.
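    The quoted coding capacity follows from simple combinatorics: with 6 quantum-dot colors and 10 intensity levels per color, the number of distinct codes is 10^6. A minimal check (ignoring practical exclusions such as an all-off code):

    ```python
    def coding_capacity(intensity_levels, colors):
        """Number of distinct optical codes when each of `colors`
        quantum-dot colors can independently take one of
        `intensity_levels` intensity levels."""
        return intensity_levels ** colors

    print(coding_capacity(10, 6))  # 1000000 -- the "one million" in the abstract
    ```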

  19. Career Education: An Idea Book.

    ERIC Educational Resources Information Center

    Portland Public Schools, OR.

    The career education idea book is aimed at the K-6 level and is designed to give useful ideas to teachers making career education a part of their program. The main thrust is developing an awareness of one's self and of the world of work. The book is color-coded for convenience; approximately half is comprised of activities in 11 areas: self-image;…

  20. On-chip frame memory reduction using a high-compression-ratio codec in the overdrives of liquid-crystal displays

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Min, Kyeong-Yuk; Chong, Jong-Wha

    2010-11-01

    Overdrive is commonly used to reduce the liquid-crystal response time and motion blur in liquid-crystal displays (LCDs). However, overdrive requires a large frame memory in order to store the previous frame for reference. In this paper, a high-compression-ratio codec is presented to compress the image data stored in the on-chip frame memory so that only 1 Mbit of on-chip memory is required in the LCD overdrives of mobile devices. The proposed algorithm further compresses the color bitmaps and representative values (RVs) resulting from the block truncation coding (BTC). The color bitmaps are represented by a luminance bitmap, which is further reduced and reconstructed using median filter interpolation in the decoder, while the RVs are compressed using adaptive quantization coding (AQC). Interpolation and AQC can provide three-level compression, which leads to 16 combinations. Using a rate-distortion analysis, we select the three optimal schemes to compress the image data for video graphics array (VGA), wide-VGA LCD, and standard-definition TV applications. Our simulation results demonstrate that the proposed schemes outperform interpolation BTC both in PSNR (by 1.479 to 2.205 dB) and in subjective visual quality.
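    The block truncation coding step can be illustrated with a simplified sketch in the spirit of absolute-moment BTC: each block is reduced to a bitmap plus two representative values (the means of the below-mean and above-mean pixel groups). This is a generic illustration, not the paper's exact codec:

    ```python
    def btc_encode(block):
        """Basic block truncation coding: reduce a block of pixels to a
        1-bit bitmap plus two representative values (group means)."""
        mean = sum(block) / len(block)
        bitmap = [1 if p >= mean else 0 for p in block]
        hi = [p for p, bit in zip(block, bitmap) if bit]
        lo = [p for p, bit in zip(block, bitmap) if not bit]
        a = round(sum(lo) / len(lo)) if lo else 0   # low representative value
        b = round(sum(hi) / len(hi)) if hi else 0   # high representative value
        return bitmap, a, b

    def btc_decode(bitmap, a, b):
        """Reconstruct the block from the bitmap and the two RVs."""
        return [b if bit else a for bit in bitmap]

    # A 4x4 block (flattened) with two clearly separated pixel populations.
    block = [10, 12, 200, 210, 11, 13, 205, 198,
             9, 12, 201, 207, 10, 14, 199, 208]
    bitmap, a, b = btc_encode(block)
    print(a, b)  # 11 204
    print(btc_decode(bitmap, a, b))
    ```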

  1. Comparison of memory thresholds for planar qudit geometries

    NASA Astrophysics Data System (ADS)

    Marks, Jacob; Jochym-O'Connor, Tomas; Gheorghiu, Vlad

    2017-11-01

    We introduce and analyze a new type of decoding algorithm called general color clustering, based on renormalization group methods, to be used in qudit color codes. The performance of this decoder is analyzed under a generalized bit-flip error model, and is used to obtain the first memory threshold estimates for qudit 6-6-6 color codes. The proposed decoder is compared with similar decoding schemes for qudit surface codes as well as the current leading qubit decoders for both sets of codes. We find that, as with surface codes, clustering performs sub-optimally for qubit color codes, giving a threshold of 5.6% compared to the 8.0% obtained through surface projection decoding methods. However, the threshold rate increases by up to 112% for large qudit dimensions, plateauing around 11.9%. All the analysis is performed using QTop, a new open-source software for simulating and visualizing topological quantum error correcting codes.

  2. Bali, Shaded Relief and Colored Height

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The volcanic nature of the island of Bali is evident in this shaded relief image generated with data from the Shuttle Radar Topography Mission (SRTM).

    Bali, along with several smaller islands, makes up one of the 27 Provinces of Indonesia. It lies over a major subduction zone where the Indo-Australian tectonic plate collides with the Sunda plate, creating one of the most volcanically active regions on the planet.

    The most significant feature on Bali is Gunung Agung, the symmetric, conical mountain at the right-center of the image. This 'stratovolcano,' 3,148 meters (10,308 feet) high, is held sacred in Balinese culture, and last erupted in 1963 after being dormant and thought inactive for 120 years. This violent event resulted in over 1,000 deaths, and coincided with a purification ceremony called Eka Dasa Rudra, meant to restore the balance between nature and man. This most important Balinese rite is held only once per century, and the almost exact correspondence between the beginning of the ceremony and the eruption is thought to have great religious significance.

    Two visualization methods were combined to produce the image: shading and color coding of topographic height. The shade image was derived by computing topographic slope in the northwest-southeast direction, so that northwest slopes appear bright and southeast slopes appear dark. Color coding is directly related to topographic height, with green at the lower elevations, rising through yellow and tan, to white at the highest elevations.
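    The two visualization methods can be sketched on a toy elevation grid. The shading sign convention and the green-to-white color ramp here are simplified assumptions, not the SRTM processing chain:

    ```python
    def shade_and_color(dem):
        """For each interior cell of a small elevation grid, compute a
        northwest-southeast slope shade (brightness proportional to the
        NW-minus-SE elevation difference) and a height color running
        from green (low) to white (high). A toy version of the two
        visualization methods described above."""
        lo = min(min(row) for row in dem)
        hi = max(max(row) for row in dem)
        out = []
        for r in range(1, len(dem) - 1):
            row = []
            for c in range(1, len(dem[0]) - 1):
                slope = dem[r - 1][c - 1] - dem[r + 1][c + 1]  # NW minus SE
                shade = 128 + max(-127, min(127, slope))       # clamp to 8 bits
                t = (dem[r][c] - lo) / (hi - lo) if hi > lo else 0.0
                color = (int(255 * t), 255, int(255 * t))      # green -> white
                row.append((shade, color))
            out.append(row)
        return out

    dem = [[0, 10, 20], [10, 50, 30], [20, 30, 100]]
    result = shade_and_color(dem)
    print(result[0][0])  # (28, (127, 255, 127)): a SE-rising, mid-height cell
    ```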

    Elevation data used in this image were acquired by the Shuttle Radar Topography Mission aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Geospatial-Intelligence Agency (NGA) of the U.S. Department of Defense and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C.

    Location: 8.33 degrees South latitude, 115.17 degrees East longitude Orientation: North toward the top, Mercator projection Size: 153 by 112 kilometers (95 by 69 miles) Image Data: shaded and colored SRTM elevation model Date Acquired: February 2000

  3. SRTM Colored Height and Shaded Relief: Las Bayas, Argentina

    NASA Technical Reports Server (NTRS)

    2001-01-01

    The interplay of volcanism, stream erosion and landslides is evident in this Shuttle Radar Topography Mission view of the eastern flank of the Andes Mountains, southeast of San Carlos de Bariloche, Argentina. Older lava flows emanating from the Andes once covered much of this area. Younger, local volcanoes (seen here as small peaks) then covered parts of the area with fresh, erosion-resistant flows (seen here as very smooth surfaces). Subsequent erosion has created fine patterns on the older surfaces (bottom of the image) and bolder, irregular patterns through and around the younger surfaces (upper center and right center). Meanwhile, where a large stream immediately borders the resistant plateau (center of the image), lateral erosion has undercut the resistant plateau causing slivers of it to fall into the stream channel. This scene illustrates well how topographic data alone can reveal some aspects of recent geologic history.

    Two visualization methods were combined to produce this image: shading and color coding of topographic height. The shade image was derived by computing topographic slope in the north-south direction. Northern slopes appear bright and southern slopes appear dark, as would be the case at noon at this latitude in the southern hemisphere. Color coding is directly related to topographic height, with green at the lower elevations, rising through yellow, red, and magenta, to white at the highest elevations.

    Elevation data used in this image were acquired by the Shuttle Radar Topography Mission aboard the Space Shuttle Endeavour, launched on February 11, 2000. The mission used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar that flew twice on the Space Shuttle Endeavour in 1994. The Shuttle Radar Topography Mission was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Imagery and Mapping Agency of the U.S. Department of Defense, and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise, Washington, DC.

    Size: 54.3 x 36.4 kilometers ( 33.7 x 22.6 miles) Location: 41.4 deg. South lat., 70.8 deg. West lon. Orientation: North toward the top Image Data: Shaded and colored SRTM elevation model Date Acquired: February 2000

  4. Grid point extraction and coding for structured light system

    NASA Astrophysics Data System (ADS)

    Song, Zhan; Chung, Ronald

    2011-09-01

    A structured light system simplifies three-dimensional reconstruction by illuminating a specially designed pattern onto the target object, thereby generating a distinct texture on it for imaging and further processing. Success of the system hinges upon what features are to be coded in the projected pattern, extracted in the captured image, and matched between the projector's display panel and the camera's image plane. The codes have to be such that they are largely preserved in the image data upon illumination from the projector, reflection from the target object, and projective distortion in the imaging process. The features also need to be reliably extracted in the image domain. In this article, a two-dimensional pseudorandom pattern consisting of rhombic color elements is proposed, and the grid points between the pattern elements are chosen as the feature points. We describe how a type classification of the grid points plus the pseudorandomness of the projected pattern can equip each grid point with a unique label that is preserved in the captured image. We also present a grid point detector that extracts the grid points without the need of segmenting the pattern elements, and that localizes the grid points with subpixel accuracy. Extensive experiments are presented to illustrate that, with the proposed pattern feature definition and feature detector, more feature points can be reconstructed with higher accuracy in comparison with the existing pseudorandomly encoded structured light systems.

  5. Reading color barcodes using visual snakes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaub, Hanspeter

    2004-05-01

    Statistical pressure snakes are used to track a mono-color target in an unstructured environment using a video camera. The report discusses an algorithm to extract a bar code signal that is embedded within the target. The target is assumed to be rectangular in shape, with the bar code printed in a slightly different saturation and value in HSV color space. Thus, the visual snake, which primarily weighs hue tracking errors, will not be deterred by the presence of the color bar codes in the target. The bar code is generated with the standard 3 of 9 method. Using this method, the numeric bar codes reveal whether the target is right-side up or upside down.

  6. Ireland, Shaded Relief and Colored Height

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The island of Ireland comprises a large central lowland of limestone with a relief of hills surrounded by a discontinuous border of coastal mountains which vary greatly in geological structure. The mountain ridges of the south are composed of old red sandstone separated by limestone river valleys. Granite predominates in the mountains of Galway, Mayo and Donegal in the west and north-west and in Counties Down and Wicklow on the east coast, while a basalt plateau covers much of the north-east of the country. The central plain, which is broken in places by low hills, is extensively covered with glacial deposits of clay and sand. It has considerable areas of bog and numerous lakes. The island has seen at least two general glaciations and everywhere ice-smoothed rock, mountain lakes, glacial valleys and deposits of glacial sand, gravel and clay mark the passage of the ice.

    Two visualization methods were combined to produce this image: shading and color coding of topographic height. The shade image was derived by computing topographic slope in the northwest-southeast direction, so that northwest slopes appear bright and southeast slopes appear dark. Color coding is directly related to topographic height, with green at the lower elevations, rising through yellow and tan, to white at the highest elevations.

    Elevation data used in this image were acquired by the Shuttle Radar Topography Mission aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Geospatial-Intelligence Agency (NGA) of the U.S. Department of Defense and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C.

    Location: 53.5 degrees North latitude, 8 degrees West longitude Orientation: North toward the top, Mercator projection Image Data: shaded and colored SRTM elevation model Date Acquired: February 2000

  7. Color-coding cancer and stromal cells with genetic reporters in a patient-derived orthotopic xenograft (PDOX) model of pancreatic cancer enhances fluorescence-guided surgery

    PubMed Central

    Yano, Shuya; Hiroshima, Yukihiko; Maawy, Ali; Kishimoto, Hiroyuki; Suetsugu, Atsushi; Miwa, Shinji; Toneri, Makoto; Yamamoto, Mako; Katz, Matthew H.G.; Fleming, Jason B.; Urata, Yasuo; Tazawa, Hiroshi; Kagawa, Shunsuke; Bouvet, Michael; Fujiwara, Toshiyoshi; Hoffman, Robert M.

    2015-01-01

    Precise fluorescence-guided surgery (FGS) for pancreatic cancer has the potential to greatly improve the outcome in this recalcitrant disease. In order to achieve this goal, we have used genetic reporters to color code cancer and stroma cells in a patient-derived orthotopic xenograft (PDOX) model. The telomerase-dependent green fluorescent protein (GFP) containing adenovirus OBP401 was used to label the cancer cells of the pancreatic cancer PDOX. The PDOX was previously grown in a red fluorescent protein (RFP) transgenic mouse that stably labeled the PDOX stroma cells bright red. The color-coded PDOX model enabled FGS to completely resect the pancreatic tumors including stroma. Dual-colored FGS significantly prevented local recurrence, which bright-light surgery (BLS) or single color could not. FGS, with color-coded cancer and stroma cells has important potential for improving the outcome of recalcitrant cancer. PMID:26088297

  8. How Color Coding Formulaic Writing Enhances Organization: A Qualitative Approach for Measuring Student Affect

    ERIC Educational Resources Information Center

    Geigle, Bryce A.

    2014-01-01

    The aim of this thesis is to investigate and present the status of student synthesis with color coded formula writing for grade level six through twelve, and to make recommendations for educators to teach writing structure through a color coded formula system in order to increase classroom engagement and lower students' affect. The thesis first…

  9. MUSIC - Multifunctional stereo imaging camera system for wide angle and high resolution stereo and color observations on the Mars-94 mission

    NASA Astrophysics Data System (ADS)

    Oertel, D.; Jahn, H.; Sandau, R.; Walter, I.; Driescher, H.

    1990-10-01

    Objectives of the multifunctional stereo imaging camera (MUSIC) system to be deployed on the Soviet Mars-94 mission are outlined. A high-resolution stereo camera (HRSC) and wide-angle opto-electronic stereo scanner (WAOSS) are combined in terms of hardware, software, technology aspects, and solutions. Both HRSC and WAOSS are pushbroom instruments containing a single optical system and focal plates with several parallel CCD line sensors. Emphasis is placed on the MUSIC system's stereo capability, its design, mass memory, and data compression. A 1-Gbit memory is divided into two parts: 80 percent for HRSC and 20 percent for WAOSS, while the selected on-line compression strategy is based on macropixel coding and real-time transform coding.

  10. Color pictorial serpentine halftone for secure embedded data

    NASA Astrophysics Data System (ADS)

    Curry, Douglas N.

    1998-04-01

    This paper introduces a new rotatable glyph shape for trusted printing applications that has excellent image rendering, data storage and counterfeit deterrence properties. Referred to as a serpentine because it tiles into a meandering line screen, it can produce high quality images independent of its ability to embed data. The halftone cell is constructed with hyperbolic curves to enhance its dynamic range, and generates low distortion because of rotational tone invariance with its neighbors. An extension to the process allows the data to be formatted into human readable text patterns, viewable with a magnifying glass, and therefore not requiring input scanning. The resultant embedded halftone patterns can be recognized as simple numbers (0 - 9) or alphanumerics (a - z). The pattern intensity can be offset from the surrounding image field intensity, producing a watermarking effect. We have been able to embed words such as 'original' or license numbers into the background halftone pattern of images which can be readily observed in the original image, and which conveniently disappear upon copying. We have also embedded data blocks with self-clocking codes and error correction data which are machine-readable. Finally, we have successfully printed full color images with both the embedded data and text, simulating a trusted printing application.

  11. Sri Lanka, Colored Height

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The topography of the island nation of Sri Lanka is well shown in this color-coded shaded relief map generated with digital elevation data from the Shuttle Radar Topography Mission (SRTM).

    Two visualization methods were combined to produce the image: shading and color coding of topographic height. The shade image was derived by computing topographic slope in the northwest-southeast direction, so that northwest slopes appear bright and southeast slopes appear dark. Color coding is directly related to topographic height, with green at the lower elevations, rising through yellow and tan, to white at the highest elevations.

    For this special view heights below 10 meters (33 feet) above sea level have been colored red. These low coastal elevations extend 5 to 10 km (3.1 to 6.2 mi) inland on Sri Lanka and are especially vulnerable to flooding associated with storm surges, rising sea level, or, as in the aftermath of the earthquake of December 26, 2004, tsunami. These so-called tidal waves have occurred numerous times in history and can be especially destructive, but with the advent of the near-global SRTM elevation data planners can better predict which areas are in the most danger and help develop mitigation plans in the event of particular flood events.

    Sri Lanka is shaped like a giant teardrop falling from the southern tip of the vast Indian subcontinent. It is separated from India by the 50km (31mi) wide Palk Strait, although there is a series of stepping-stone coral islets known as Adam's Bridge that almost form a land bridge between the two countries. The island is just 350km (217mi) long and only 180km (112mi) wide at its broadest, and is about the same size as Ireland, West Virginia or Tasmania.

    The southern half of the island is dominated by beautiful and rugged hill country, and includes Mt Pidurutalagala, the island's highest point at 2524 meters (8281 ft). The entire northern half comprises a large plain extending from the edge of the hill country to the Jaffna peninsula.

    Elevation data used in this image were acquired by the Shuttle Radar Topography Mission aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Geospatial-Intelligence Agency (NGA) of the U.S. Department of Defense and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C.

    Location: 8.0 degrees North latitude, 80.7 degrees East longitude Orientation: North toward the top, Mercator projection Size: 275.6 by 482.4 kilometers (165.4 by 299.0 miles) Image Data: shaded and colored SRTM elevation model Date Acquired: February 2000

  12. Colorful Revision: Color-Coded Comments Connected to Instruction

    ERIC Educational Resources Information Center

    Mack, Nancy

    2013-01-01

    Many teachers have had a favorable response to their experimentation with digital feedback on students' writing. Students much preferred a simpler system of highlighting and commenting in color. After experimentation the author found that this color-coded system was more effective for them and less time-consuming for her. Of course, any system…

  13. A conflict-based model of color categorical perception: evidence from a priming study.

    PubMed

    Hu, Zhonghua; Hanley, J Richard; Zhang, Ruiling; Liu, Qiang; Roberson, Debi

    2014-10-01

    Categorical perception (CP) of color manifests as faster or more accurate discrimination of two shades of color that straddle a category boundary (e.g., one blue and one green) than of two shades from within the same category (e.g., two different shades of green), even when the differences between the pairs of colors are equated according to some objective metric. The results of two experiments provide new evidence for a conflict-based account of this effect, in which CP is caused by competition between visual and verbal/categorical codes on within-category trials. According to this view, conflict arises because the verbal code indicates that the two colors are the same, whereas the visual code indicates that they are different. In Experiment 1, two shades from the same color category were discriminated significantly faster when the previous trial also comprised a pair of within-category colors than when the previous trial comprised a pair from two different color categories. Under the former circumstances, the CP effect disappeared. According to the conflict-based model, response conflict between visual and categorical codes during discrimination of within-category pairs produced an adjustment of cognitive control that reduced the weight given to the categorical code relative to the visual code on the subsequent trial. Consequently, responses on within-category trials were facilitated, and CP effects were reduced. The effectiveness of this conflict-based account was evaluated in comparison with an alternative view that CP reflects temporary warping of perceptual space at the boundaries between color categories.

  14. Color and Grey Scale in Sonar Displays

    NASA Technical Reports Server (NTRS)

    Kraiss, K. F.; Kuettelwesch, K. H.

    1984-01-01

    In spite of numerous publications, it is still rather unclear whether color is of any help in sonar displays. The work presented here deals with a particular type of sonar data, i.e., LOFAR-grams (low frequency analysing and recording), where acoustic sensor data are continuously written as a time-frequency plot. The question to be answered quantitatively is whether color coding improves target detection when compared with a grey scale code. The data show significant differences in receiver-operating-characteristic performance for the selected codes. In addition, it turned out that the background noise level affects performance dramatically for some color codes, while others remain stable or even improve. Generally valid rules are presented on how to generate useful color scales for this particular application.

  15. A video coding scheme based on joint spatiotemporal and adaptive prediction.

    PubMed

    Jiang, Wenfei; Latecki, Longin Jan; Liu, Wenyu; Liang, Hui; Gorman, Ken

    2009-05-01

    We propose a video coding scheme that departs from traditional Motion Estimation/DCT frameworks and instead uses a Karhunen-Loeve Transform (KLT)/joint spatiotemporal prediction framework. In particular, a novel approach that performs joint spatial and temporal prediction simultaneously is introduced. It bypasses the complex H.26x interframe techniques and is less computationally intensive. Because of the effective joint prediction and the image-dependent color space transformation (KLT), the proposed approach is demonstrated experimentally to consistently lead to improved video quality, and in many cases to better compression rates and improved computational speed.
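    The image-dependent color-space transformation mentioned above can be illustrated with a small sketch. This is not the paper's code; the function name and the eigen-decomposition route are illustrative assumptions. The idea is that the KLT basis is computed from the RGB covariance of the frame itself, so the transformed channels come out decorrelated and energy-compacted for coding:

```python
import numpy as np

def klt_color_transform(img):
    """Image-dependent color-space transform via the Karhunen-Loeve Transform.

    img: (H, W, 3) RGB array. Returns the transformed image and the KLT
    basis (eigenvectors of the frame's own RGB covariance matrix), so the
    output channels are decorrelated, with variance concentrated in the
    first channel.
    """
    pixels = img.reshape(-1, 3).astype(float)
    centered = pixels - pixels.mean(axis=0)
    cov = centered.T @ centered / len(centered)
    # Eigenvectors sorted by decreasing eigenvalue: principal channel first.
    w, v = np.linalg.eigh(cov)
    basis = v[:, np.argsort(w)[::-1]]
    return (centered @ basis).reshape(img.shape), basis
```

    Unlike a fixed RGB-to-YCbCr matrix, the basis here adapts to each image's actual channel correlations, which is the property the abstract credits for part of the compression gain.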

  16. Colored Height and Shaded Relief, Central America

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Panama, Costa Rica, Nicaragua, El Salvador, Honduras, Guatemala, Belize, southern Mexico and parts of Cuba and Jamaica are all seen in this image from NASA's Shuttle Radar Topography Mission. The dominant feature of the northern part of Central America is the Sierra Madre Range, spreading east from Mexico between the narrow Pacific coastal plain and the limestone lowland of the Yucatan Peninsula. Parallel hill ranges sweep across Honduras and extend south, past the Caribbean Mosquito Coast to lakes Managua and Nicaragua. The Cordillera Central rises to the south, gradually descending to Lake Gatun and the Isthmus of Panama. A highly active volcanic belt runs along the Pacific seaboard from Mexico to Costa Rica.

    High-quality satellite imagery of Central America has, until now, been difficult to obtain due to persistent cloud cover in this region of the world. The ability of SRTM to penetrate clouds and make three-dimensional measurements has allowed the generation of the first complete high-resolution topographic map of the entire region. This map was used to generate the image.

    Two visualization methods were combined to produce the image: shading and color coding of topographic height. The shade image was derived by computing topographic slope in the north-south direction. Color coding is directly related to topographic height, with green at the lower elevations, rising through yellow, red, and magenta, to white at the highest elevations.
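    The two-step visualization described above (north-south slope shading combined with a green-to-white height ramp) can be sketched as follows. The function name, the exact ramp stop colors, and the 50/50 blend are illustrative assumptions, since the caption does not give the SRTM processing parameters:

```python
import numpy as np

def shade_and_color(dem):
    """Combine north-south slope shading with height color coding (sketch).

    dem: (H, W) elevation array. Returns an (H, W, 3) RGB image in [0, 1].
    """
    # Shade image: topographic slope in the north-south (row) direction.
    shade = np.gradient(dem.astype(float), axis=0)
    shade = (shade - shade.min()) / (np.ptp(shade) + 1e-12)
    # Color coding directly related to height: green -> yellow -> red -> magenta -> white.
    h = (dem - dem.min()) / (np.ptp(dem) + 1e-12)
    stops = np.array([
        [0.0, 0.6, 0.0],  # green at the lowest elevations
        [1.0, 1.0, 0.0],  # yellow
        [1.0, 0.0, 0.0],  # red
        [1.0, 0.0, 1.0],  # magenta
        [1.0, 1.0, 1.0],  # white at the highest elevations
    ])
    idx = h * (len(stops) - 1)
    lo = np.floor(idx).astype(int)
    hi = np.clip(lo + 1, 0, len(stops) - 1)
    frac = (idx - lo)[..., None]
    rgb = stops[lo] * (1 - frac) + stops[hi] * frac
    # Modulate the height colors by the shading to combine the two methods.
    return rgb * (0.5 + 0.5 * shade)[..., None]
```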

    For an annotated version of this image, please select Figure 1, below: [figure removed for brevity, see original site] (Large image: 9 MB JPEG)

    Elevation data used in this image were acquired by the Shuttle Radar Topography Mission aboard the Space Shuttle Endeavour, launched on February 11, 2000. The mission used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar that flew twice on the Space Shuttle Endeavour in 1994. The Shuttle Radar Topography Mission was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (200-foot)-long mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Imagery and Mapping Agency of the U.S. Department of Defense, and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C.

    Size: 1720 by 1670 kilometers (1068 by 1036 miles)
    Location: 14.5 degrees North latitude, 85.0 degrees West longitude
    Orientation: North toward the top
    Image Data: Shaded and colored SRTM elevation model
    Date Acquired: February 2000

  17. Hierarchical content-based image retrieval by dynamic indexing and guided search

    NASA Astrophysics Data System (ADS)

    You, Jane; Cheung, King H.; Liu, James; Guo, Linong

    2003-12-01

    This paper presents a new approach to content-based image retrieval that uses dynamic indexing and guided search in a hierarchical structure, extending data mining and data warehousing techniques. The proposed algorithms include: a wavelet-based scheme for multiple image feature extraction, the extension of a conventional data warehouse and an image database to an image data warehouse for dynamic image indexing, an image data schema for hierarchical image representation and dynamic image indexing, a statistically based feature selection scheme to achieve flexible similarity measures, and a feature component code to facilitate query processing and guide the search for the best match. A series of case studies are reported, which include a wavelet-based image color hierarchy, classification of satellite images, tropical cyclone pattern recognition, and personal identification using multi-level palmprint and face features.

  18. ASTER Images San Francisco Bay Area

    NASA Image and Video Library

    2000-04-26

    These images of the San Francisco Bay region were acquired on March 3, 2000 by the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) on NASA's Terra satellite. Each covers an area 60 kilometers (37 miles) wide and 75 kilometers (47 miles) long. With its 14 spectral bands from the visible to the thermal infrared wavelength region, and its high spatial resolution of 15 to 90 meters (about 50 to 300 feet), ASTER will image the Earth for the next 6 years to map and monitor the changing surface of our planet. Upper Left: The color infrared composite uses bands in the visible and reflected infrared. Vegetation is red, urban areas are gray; sediment in the bays shows up as lighter shades of blue. Thanks to the 15 meter (50-foot) spatial resolution, shadows of the towers along the Bay Bridge can be seen. Upper right: A composite of bands in the short wave infrared displays differences in soils and rocks in the mountainous areas. Even though these regions appear entirely vegetated in the visible, enough surface shows through openings in the vegetation to allow the ground to be imaged. Lower left: This composite of multispectral thermal bands shows differences in urban materials in varying colors. Separation of materials is due to differences in thermal emission properties, analogous to colors in the visible. Lower right: This is a color coded temperature image of water temperature, derived from the thermal bands. Warm waters are in white and yellow, colder waters are blue. Suisun Bay in the upper right is fed directly from the cold Sacramento River. As the water flows through San Pablo and San Francisco Bays on the way to the Pacific, the waters warm up. http://photojournal.jpl.nasa.gov/catalog/PIA02605

  19. Transversal Clifford gates on folded surface codes

    DOE PAGES

    Moussa, Jonathan E.

    2016-10-12

    Surface and color codes are two forms of topological quantum error correction in two spatial dimensions with complementary properties. Surface codes have lower-depth error detection circuits and well-developed decoders to interpret and correct errors, while color codes have transversal Clifford gates and better code efficiency in the number of physical qubits needed to achieve a given code distance. A formal equivalence exists between color codes and folded surface codes, but it does not guarantee the transferability of any of these favorable properties. However, the equivalence does imply the existence of constant-depth circuit implementations of logical Clifford gates on folded surface codes. We achieve and improve this result by constructing two families of folded surface codes with transversal Clifford gates. This construction is presented generally for qudits of any dimension. Lastly, the specific application of these codes to universal quantum computation based on qubit fusion is also discussed.

  20. Multifocus microscopy with precise color multi-phase diffractive optics applied in functional neuronal imaging.

    PubMed

    Abrahamsson, Sara; Ilic, Rob; Wisniewski, Jan; Mehl, Brian; Yu, Liya; Chen, Lei; Davanco, Marcelo; Oudjedi, Laura; Fiche, Jean-Bernard; Hajj, Bassam; Jin, Xin; Pulupa, Joan; Cho, Christine; Mir, Mustafa; El Beheiry, Mohamed; Darzacq, Xavier; Nollmann, Marcelo; Dahan, Maxime; Wu, Carl; Lionnet, Timothée; Liddle, J Alexander; Bargmann, Cornelia I

    2016-03-01

    Multifocus microscopy (MFM) allows high-resolution instantaneous three-dimensional (3D) imaging and has been applied to study biological specimens ranging from single molecules inside cell nuclei to entire embryos. We here describe pattern designs and nanofabrication methods for diffractive optics that optimize the light-efficiency of the central optical component of MFM: the diffractive multifocus grating (MFG). We also implement a "precise color" MFM layout with MFGs tailored to individual fluorophores in separate optical arms. The reported advancements enable faster and brighter volumetric time-lapse imaging of biological samples. In live microscopy applications, photon budget is a critical parameter and light-efficiency must be optimized to obtain the fastest possible frame rate while minimizing photodamage. We provide comprehensive descriptions and code for designing diffractive optical devices, and a detailed methods description for nanofabrication of devices. Theoretical efficiencies of the reported designs are ≈90%, and we have obtained efficiencies of >80% in MFGs of our own manufacture. We demonstrate the performance of a multi-phase MFG in 3D functional neuronal imaging in living C. elegans.

  1. DMD-based implementation of patterned optical filter arrays for compressive spectral imaging.

    PubMed

    Rueda, Hoover; Arguello, Henry; Arce, Gonzalo R

    2015-01-01

    Compressive spectral imaging (CSI) captures multispectral imagery using fewer measurements than those required by traditional Shannon-Nyquist theory-based sensing procedures. CSI systems acquire coded and dispersed random projections of the scene rather than direct measurements of the voxels. To date, the coding procedure in CSI has been realized through block-unblock coded apertures (CAs), commonly implemented as chrome-on-quartz photomasks. These apertures either block or pass the entire spectrum from the scene at given spatial locations, thus modulating only the spatial characteristics of the scene. This paper extends the framework of CSI by replacing the traditional block-unblock photomasks with patterned optical filter arrays, referred to as colored coded apertures (CCAs). These, in turn, allow the source to be modulated not only spatially but also spectrally, enabling more powerful coding strategies. The proposed CCAs are synthesized through linear combinations of low-pass, high-pass, and bandpass filters, paired with binary pattern ensembles realized by a digital micromirror device. The optical forward model of the proposed CSI architecture is presented along with a proof-of-concept implementation, which achieves noticeable improvements in the quality of the reconstruction.
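    The distinction the abstract draws (spatial-only block-unblock masks versus joint spatio-spectral modulation) can be sketched numerically. The array shapes, function names, and einsum-based synthesis below are illustrative assumptions rather than the paper's formulation:

```python
import numpy as np

def build_colored_code(binary_patterns, filters):
    """Synthesize a colored coded aperture from DMD binary patterns and filters.

    binary_patterns: (F, H, W) arrays in {0, 1} realized on the micromirror
    device; filters: (F, L) spectral responses (e.g., low-, high-, band-pass).
    Each pixel's spectral transmittance is the pattern-weighted filter sum,
    so the code varies spatially AND spectrally.
    """
    return np.einsum('fhw,fl->hwl', binary_patterns, filters)

def csi_measurement(cube, code):
    """Single detector snapshot: filter the datacube, integrate over wavelength.

    cube: (H, W, L) spatio-spectral scene; code: (H, W, L) transmittance.
    A block-unblock mask is the special case where code is constant in L.
    """
    return np.sum(cube * code, axis=2)
```

    A block-unblock photomask can only zero out whole columns of the spectrum at a pixel; the colored code above can pass different bands at different pixels, which is what enables the richer sensing matrices the paper describes.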

  2. Redundant Coding in Visual Search Displays: Effects of Shape and Colour.

    DTIC Science & Technology

    1997-02-01

    results for refining color selection algorithms and for color coding in situations where the gamut of available colors is limited. In a secondary set of analyses, we note large performance differences as a function of target shape.

  3. Rapid Equipping Force (REF) Analytical Support

    DTIC Science & Technology

    2007-06-01

    This document contains color images. … interface and performs actions via Excel formulae, ActiveX controls, and VBA code. – Plan to provide both simple and complex weighting and scoring methods … Requirements Quad Chart. – Solution Set Information Worksheet: A spreadsheet containing detailed information concerning every potential solution considered

  4. A Spatiotemporal Clustering Approach to Maritime Domain Awareness

    DTIC Science & Technology

    2013-09-01

    Spatiotemporal clustering is the process of grouping …

  5. Real-time detection of natural objects using AM-coded spectral matching imager

    NASA Astrophysics Data System (ADS)

    Kimachi, Akira

    2004-12-01

    This paper describes application of the amplitude-modulation (AM)-coded spectral matching imager (SMI) to real-time detection of natural objects such as human beings, animals, vegetables, or geological objects or phenomena, which are much more liable to change with time than artificial products while often exhibiting characteristic spectral functions associated with specific activity states. The AM-SMI produces correlation between spectral functions of the object and a reference at each pixel of the correlation image sensor (CIS) in every frame, based on orthogonal amplitude modulation of each spectral channel and simultaneous demodulation of all channels on the CIS. This principle makes the SMI suitable for monitoring the dynamic behavior of natural objects in real time by looking at a particular spectral reflectance or transmittance function. A twelve-channel multispectral light source was developed with improved spatial uniformity of spectral irradiance compared to a previous one. Experimental results of spectral matching imaging of human skin and vegetable leaves are demonstrated, as well as a preliminary feasibility test of imaging a reflective object using a test color chart.
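    A software analogue of the on-chip correlation can be sketched as follows. The CIS performs this in hardware; the modulation waveforms, array shapes, and scaling below are assumptions for illustration. Each spectral channel is amplitude-modulated with an orthogonal waveform, and per-pixel lock-in demodulation followed by a dot product with the reference spectrum yields the correlation image:

```python
import numpy as np

def am_spectral_correlation(frames, mod, reference):
    """Per-pixel spectral matching by orthogonal AM demodulation (sketch).

    frames: (T, H, W) intensity sequence under K light channels whose
    amplitudes follow the orthogonal waveforms mod (K, T).
    reference: (K,) reference spectral function to match against.
    """
    T = frames.shape[0]
    # Lock-in demodulation recovers each channel's reflectance per pixel.
    spectra = np.einsum('kt,thw->khw', mod, frames) * (2.0 / T)
    # Correlate the recovered spectra with the reference at every pixel.
    return np.einsum('k,khw->hw', reference, spectra)
```

    With sinusoidal waveforms at distinct integer frequencies over a full period, the cross-channel demodulation terms vanish exactly, which is the orthogonality property the AM-SMI principle relies on.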

  6. Real-time detection of natural objects using AM-coded spectral matching imager

    NASA Astrophysics Data System (ADS)

    Kimachi, Akira

    2005-01-01

    This paper describes application of the amplitude-modulation (AM)-coded spectral matching imager (SMI) to real-time detection of natural objects such as human beings, animals, vegetables, or geological objects or phenomena, which are much more liable to change with time than artificial products while often exhibiting characteristic spectral functions associated with specific activity states. The AM-SMI produces correlation between spectral functions of the object and a reference at each pixel of the correlation image sensor (CIS) in every frame, based on orthogonal amplitude modulation of each spectral channel and simultaneous demodulation of all channels on the CIS. This principle makes the SMI suitable for monitoring the dynamic behavior of natural objects in real time by looking at a particular spectral reflectance or transmittance function. A twelve-channel multispectral light source was developed with improved spatial uniformity of spectral irradiance compared to a previous one. Experimental results of spectral matching imaging of human skin and vegetable leaves are demonstrated, as well as a preliminary feasibility test of imaging a reflective object using a test color chart.

  7. An Upgrade of the Imaging for Hypersonic Experimental Aeroheating Testing (IHEAT) Software

    NASA Technical Reports Server (NTRS)

    Mason, Michelle L.; Rufer, Shann J.

    2015-01-01

    The Imaging for Hypersonic Experimental Aeroheating Testing (IHEAT) code is used at NASA Langley Research Center to analyze global aeroheating data on wind tunnel models tested in the Langley Aerothermodynamics Laboratory. One-dimensional, semi-infinite heating data derived from IHEAT are used to design thermal protection systems to mitigate the risks due to the aeroheating loads on hypersonic vehicles, such as re-entry vehicles during descent and landing procedures. This code was originally written in the PV-WAVE programming language to analyze phosphor thermography data from the two-color, relative-intensity system developed at Langley. To increase the efficiency, functionality, and reliability of IHEAT, the code was migrated to MATLAB syntax and compiled as a stand-alone executable file labeled version 4.0. New features of IHEAT 4.0 include the options to batch process all of the data from a wind tunnel run, to map the two-dimensional heating distribution to a three-dimensional computer-aided design model of the vehicle to be viewed in Tecplot, and to extract data from a segmented line that follows an interesting feature in the data. Results from IHEAT 4.0 were compared on a pixel level to the output images from the legacy code to validate the program. The differences between the two codes were on the order of 10^-5 to 10^-7. IHEAT 4.0 replaces the PV-WAVE version as the production code for aeroheating experiments conducted in the hypersonic facilities at NASA Langley.

  8. Visual search asymmetries within color-coded and intensity-coded displays.

    PubMed

    Yamani, Yusuke; McCarley, Jason S

    2010-06-01

    Color and intensity coding provide perceptual cues to segregate categories of objects within a visual display, allowing operators to search more efficiently for needed information. Even within a perceptually distinct subset of display elements, however, it may often be useful to prioritize items representing urgent or task-critical information. The design of symbology to produce search asymmetries (Treisman & Souther, 1985) offers a potential technique for doing this, but it is not obvious from existing models of search that an asymmetry observed in the absence of extraneous visual stimuli will persist within a complex color- or intensity-coded display. To address this issue, in the current study we measured the strength of a visual search asymmetry within displays containing color- or intensity-coded extraneous items. The asymmetry persisted strongly in the presence of extraneous items that were drawn in a different color (Experiment 1) or a lower contrast (Experiment 2) than the search-relevant items, with the targets favored by the search asymmetry producing highly efficient search. The asymmetry was attenuated but not eliminated when extraneous items were drawn in a higher contrast than search-relevant items (Experiment 3). Results imply that the coding of symbology to exploit visual search asymmetries can facilitate visual search for high-priority items even within color- or intensity-coded displays. PsycINFO Database Record (c) 2010 APA, all rights reserved.

  9. The JPEG XT suite of standards: status and future plans

    NASA Astrophysics Data System (ADS)

    Richter, Thomas; Bruylants, Tim; Schelkens, Peter; Ebrahimi, Touradj

    2015-09-01

    The JPEG standard has seen enormous market adoption. Daily, billions of pictures are created, stored, and exchanged in this format. The JPEG committee acknowledges this success and continues its efforts in maintaining and expanding the standard's specifications. JPEG XT is a standardization effort targeting the extension of JPEG features by enabling support for high dynamic range imaging, lossless and near-lossless coding, and alpha channel coding, while also guaranteeing backward and forward compatibility with the JPEG legacy format. This paper gives an overview of the current status of the JPEG XT standards suite. It discusses the JPEG legacy specification, and details how higher dynamic range support is facilitated both for integer and floating-point color representations. The paper shows how JPEG XT's support for lossless and near-lossless coding of low and high dynamic range images is achieved in combination with backward compatibility to JPEG legacy. In addition, the extensible box-based JPEG XT file format, on which all following and future extensions of JPEG will be based, is introduced. This paper also details how lossy and lossless representations of alpha channels are supported to allow coding of transparency information and arbitrarily shaped images. Finally, we conclude by giving prospects on the upcoming JPEG standardization initiative JPEG Privacy & Security, and a number of other possible extensions in JPEG XT.

  10. Mts. Agung and Batur, Bali, Shaded Relief and Colored Height

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This perspective view shows the major volcanic group of Bali, one of the 13,000 islands comprising the nation of Indonesia. The conical mountain to the left is Gunung Agung, at 3,148 meters (10,308 feet) the highest point on Bali and an object of great significance in Balinese religion and culture. Agung underwent a major eruption in 1963 after more than 100 years of dormancy, resulting in the loss of over 1,000 lives.

    In the center is the complex structure of Batur volcano, showing a caldera (volcanic crater) left over from a massive catastrophic eruption about 30,000 years ago. Judging from the total volume of the outer crater and of the volcano that once lay above it, approximately 140 cubic kilometers (33.4 cubic miles) of material must have been produced by this eruption, making it one of the largest known volcanic events on Earth. Batur is still active and has erupted at least 22 times since the 1800s.

    Two visualization methods were combined to produce the image: shading and color coding of topographic height. The shade image was derived by computing topographic slope in the northwest-southeast direction, so that northwest slopes appear bright and southeast slopes appear dark. Color coding is directly related to topographic height, with green at the lower elevations, rising through yellow and tan, to white at the highest elevations.

    Elevation data used in this image were acquired by the Shuttle Radar Topography Mission aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Geospatial-Intelligence Agency (NGA) of the U.S. Department of Defense and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C.

    Location: 8.33 degrees South latitude, 115.17 degrees East longitude
    Orientation: Looking southwest
    Size: scale varies in this perspective image
    Image Data: shaded and colored SRTM elevation model
    Date Acquired: February 2000

  11. Quantitative Assessment of Neovascularization after Indirect Bypass Surgery: Color-Coded Digital Subtraction Angiography in Pediatric Moyamoya Disease.

    PubMed

    Cho, H-H; Cheon, J-E; Kim, S-K; Choi, Y H; Kim, I-O; Kim, W S; Lee, S-M; You, S K; Shin, S-M

    2016-05-01

    For postoperative follow-up in pediatric patients with Moyamoya disease, it is essential to evaluate the degree of neovascularization. Our aim was to quantitatively assess the neovascularization status after bypass surgery in pediatric Moyamoya disease by using color-coded digital subtraction angiography. Time-attenuation intensity curves were generated at ROIs corresponding to surgical flap sites from color-coded DSA images of the common carotid artery, internal carotid artery, and external carotid artery angiograms obtained pre- and postoperatively in 32 children with Moyamoya disease. Time-to-peak and area under the curve values were obtained. Postoperative changes in adjusted time-to-peak (ΔTTP) and ratios of adjusted area under the curve changes (ΔAUC ratio) of common carotid artery, ICA, and external carotid artery angiograms were compared across clinical and angiographic outcome groups. To analyze diagnostic performance, we categorized clinical outcomes into favorable and unfavorable groups. The ΔTTP at the common carotid artery increased among clinical and angiographic outcomes, in that order, with significant differences (P = .003 and .005, respectively). The ΔAUC ratio at the common carotid artery and external carotid artery also increased, in that order, among clinical and angiographic outcomes with a significant difference (all, P = .000). The ΔAUC ratio of the ICA showed no significant difference among clinical and angiographic outcomes (P = .418 and .424, respectively). A ΔTTP of >1.27 seconds for the common carotid artery, and ΔAUC ratios of >33.5% for the common carotid artery and 504% for the external carotid artery, were identified as optimal cutoff values between the favorable and unfavorable groups.
    Postoperative changes in quantitative values obtained with color-coded DSA software showed a significant correlation with outcome scores and can be used as objective parameters for predicting the outcome in pediatric Moyamoya disease, with an additional cutoff value calculated through the receiver operating characteristic curve. © 2016 by American Journal of Neuroradiology.
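    The two curve parameters the study derives from each time-attenuation intensity curve are straightforward to compute. This is a generic sketch; the commercial color-coded DSA software's baseline handling and the "adjusted" normalization steps are not described in the abstract, and the function name is my own:

```python
import numpy as np

def ttp_and_auc(t, intensity):
    """Time-to-peak and area under the curve for a time-attenuation curve.

    t: sample times in seconds; intensity: contrast enhancement at an ROI.
    Returns (TTP, AUC): the time of maximum enhancement and the
    trapezoidal area under the intensity curve.
    """
    ttp = float(t[np.argmax(intensity)])  # time of maximum enhancement
    # Trapezoidal integration of intensity over time.
    auc = float(np.sum((intensity[1:] + intensity[:-1]) * np.diff(t)) / 2.0)
    return ttp, auc
```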

  12. A volcanic activity alert-level system for aviation: Review of its development and application in Alaska

    USGS Publications Warehouse

    Guffanti, Marianne C.; Miller, Thomas

    2013-01-01

    An alert-level system for communicating volcano hazard information to the aviation industry was devised by the Alaska Volcano Observatory (AVO) during the 1989–1990 eruption of Redoubt Volcano. The system uses a simple, color-coded ranking that focuses on volcanic ash emissions: Green—normal background; Yellow—signs of unrest; Orange—precursory unrest or minor ash eruption; Red—major ash eruption imminent or underway. The color code has been successfully applied on a regional scale in Alaska for a sustained period. During 2002–2011, elevated color codes were assigned by AVO to 13 volcanoes, eight of which erupted; for that decade, one or more Alaskan volcanoes were at Yellow on 67 % of days and at Orange or Red on 12 % of days. As evidence of its utility, the color code system is integrated into procedures of agencies responsible for air-traffic management and aviation meteorology in Alaska. Furthermore, it is endorsed as a key part of globally coordinated protocols established by the International Civil Aviation Organization to provide warnings of ash hazards to aviation worldwide. The color code and accompanying structured message (called a Volcano Observatory Notice for Aviation) comprise an effective early-warning message system according to the United Nations International Strategy for Disaster Reduction. The aviation color code system currently is used in the United States, Russia, New Zealand, Iceland, and partially in the Philippines, Papua New Guinea, and Indonesia. Although there are some barriers to implementation, with continued education and outreach to Volcano Observatories worldwide, greater use of the aviation color code system is achievable.

  13. A volcanic activity alert-level system for aviation: review of its development and application in Alaska

    USGS Publications Warehouse

    Guffanti, Marianne; Miller, Thomas P.

    2013-01-01

    An alert-level system for communicating volcano hazard information to the aviation industry was devised by the Alaska Volcano Observatory (AVO) during the 1989–1990 eruption of Redoubt Volcano. The system uses a simple, color-coded ranking that focuses on volcanic ash emissions: Green—normal background; Yellow—signs of unrest; Orange—precursory unrest or minor ash eruption; Red—major ash eruption imminent or underway. The color code has been successfully applied on a regional scale in Alaska for a sustained period. During 2002–2011, elevated color codes were assigned by AVO to 13 volcanoes, eight of which erupted; for that decade, one or more Alaskan volcanoes were at Yellow on 67 % of days and at Orange or Red on 12 % of days. As evidence of its utility, the color code system is integrated into procedures of agencies responsible for air-traffic management and aviation meteorology in Alaska. Furthermore, it is endorsed as a key part of globally coordinated protocols established by the International Civil Aviation Organization to provide warnings of ash hazards to aviation worldwide. The color code and accompanying structured message (called a Volcano Observatory Notice for Aviation) comprise an effective early-warning message system according to the United Nations International Strategy for Disaster Reduction. The aviation color code system currently is used in the United States, Russia, New Zealand, Iceland, and partially in the Philippines, Papua New Guinea, and Indonesia. Although there are some barriers to implementation, with continued education and outreach to Volcano Observatories worldwide, greater use of the aviation color code system is achievable.

  14. Shaded Relief and Radar Image with Color as Height, Madrid, Spain

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The white, mottled area in the right-center of this image from NASA's Shuttle Radar Topography Mission (SRTM) is Madrid, the capital of Spain. Located on the Meseta Central, a vast plateau covering about 40 percent of the country, this city of 3 million is very near the exact geographic center of the Iberian Peninsula. The Meseta is rimmed by mountains and slopes gently to the west and to the series of rivers that form the boundary with Portugal. The plateau is mostly covered with dry grasslands, olive groves and forested hills.

    Madrid is situated in the middle of the Meseta, and at an elevation of 646 meters (2,119 feet) above sea level is the highest capital city in Europe. To the northwest of Madrid, and visible in the upper left of the image, is the Sistema Central mountain chain that forms the 'dorsal spine' of the Meseta and divides it into northern and southern subregions. Rising to about 2,500 meters (8,200 feet), these mountains display some glacial features and are snow-capped for most of the year. Offering almost year-round winter sports, the mountains are also important to the climate of Madrid.

    Three visualization methods were combined to produce this image: shading and color coding of topographic height and radar image intensity. The shade image was derived by computing topographic slope in the northwest-southeast direction. North-facing slopes appear bright and south-facing slopes appear dark. Color coding is directly related to topographic height, with green at the lower elevations, rising through yellow and brown to white at the highest elevations. The shade image was combined with the radar intensity image in the flat areas.

    Elevation data used in this image was acquired by the SRTM aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C.

    Size: 172 by 138 kilometers (107 by 86 miles) Location: 40.43 degrees North latitude, 3.70 degrees West longitude Orientation: North toward the top Image Data: shaded and colored SRTM elevation model, with SRTM radar intensity added Original Data Resolution: SRTM 1 arcsecond (about 30 meters or 98 feet) Date Acquired: February 2000

  15. Single-exposure super-resolved interferometric microscopy by RGB multiplexing in lensless configuration

    NASA Astrophysics Data System (ADS)

    Granero, Luis; Ferreira, Carlos; Zalevsky, Zeev; García, Javier; Micó, Vicente

    2016-07-01

    Single-Exposure Super-Resolved Interferometric Microscopy (SESRIM) achieves one-dimensional (1-D) superresolved imaging in digital holographic microscopy (DHM) with a single illumination shot and a single digital recording. SESRIM provides color-coded angular multiplexing of the accessible range of the sample's spatial frequencies and allows their recording in a single CCD (color or monochrome) snapshot by adding three RGB coherent reference beams at the output plane. In this manuscript, we extend the applicability of SESRIM to the field of digital in-line holographic microscopy (DIHM), that is, working without lenses. As a consequence of the in-line configuration, an additional restriction concerning the object field of view (FOV) must be imposed on the technique. Experimental results are reported for both a synthetic object (a USAF resolution test target) and a biological sample (a swine sperm sample), validating this new superresolution imaging method, named lensless SESRIM (L-SESRIM).

  16. Kepler Supernova Remnant: A View from Spitzer Space Telescope

    NASA Image and Video Library

    2004-10-06

    This Spitzer false-color image is a composite of data from the 24 micron channel of Spitzer's multiband imaging photometer (red), and three channels of its infrared array camera: 8 micron (yellow), 5.6 micron (blue), and 4.8 micron (green). Stars are most prominent in the two shorter wavelengths, causing them to show up as turquoise. The supernova remnant is most prominent at 24 microns, arising from dust that has been heated by the supernova shock wave, and re-radiated in the infrared. The 8 micron data shows infrared emission from regions closely associated with the optically emitting regions. These are the densest regions being encountered by the shock wave, and probably arose from condensations in the surrounding material that was lost by the supernova star before it exploded. The composite above (PIA06908, PIA06909, and PIA06910) represents views of Kepler's supernova remnant taken in X-rays, visible light, and infrared radiation. Each top panel in the composite shows the entire remnant. Each color in the composite represents a different region of the electromagnetic spectrum, from X-rays to infrared light. The X-ray and infrared data cannot be seen with the human eye. Astronomers have color-coded those data so they can be seen in these images. http://photojournal.jpl.nasa.gov/catalog/PIA06910

  17. Sredinnyy Khrebet, Kamchatka Peninsula, Russia

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The Kamchatka Peninsula in eastern Russia is shown in this scene created from a preliminary elevation model derived from the first data collected during the Shuttle Radar Topography Mission (SRTM) on February 12, 2000. Sredinnyy Khrebet, the mountain range that makes up the spine of the peninsula, is a chain of active volcanic peaks. Pleistocene and recent glaciers have carved the broad valleys and jagged ridges that are common here. The relative youth of the volcanism is revealed by the topography as infilling and smoothing of the otherwise rugged terrain by lava, ash, and pyroclastic flows, particularly surrounding the high peaks in the south central part of the image. Elevations here range from near sea level up to 2,618 meters (8,590 feet). Two visualization methods were combined to produce this image: shading and color coding of topographic height. The shade image was derived by computing topographic slope in the north-south direction. Northern slopes appear bright and southern slopes appear dark. Color coding is directly related to topographic height, with green at the lower elevations, rising through yellow, red, and magenta, to white at the highest elevations. Elevation data used in this image were acquired by the Shuttle Radar Topography Mission (SRTM) aboard Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on Space Shuttle Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency (NIMA) of the U.S. 
Department of Defense, and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise, Washington, DC. Size: 93.0 x 105.7 kilometers (57.7 x 65.6 miles) Location: 58.3 deg. North lat., 160.9 deg. East lon. Orientation: North toward the top Image Data: Shaded and colored SRTM elevation model Date Acquired: February 12, 2000 Image courtesy NASA/JPL/NIMA

  18. Performance comparison of leading image codecs: H.264/AVC Intra, JPEG2000, and Microsoft HD Photo

    NASA Astrophysics Data System (ADS)

    Tran, Trac D.; Liu, Lijie; Topiwala, Pankaj

    2007-09-01

    This paper provides a detailed rate-distortion performance comparison between JPEG2000, Microsoft HD Photo, and H.264/AVC High Profile 4:4:4 I-frame coding for high-resolution still images and high-definition (HD) 1080p video sequences. This work extends our previous comparative studies presented at earlier SPIE conferences [1, 2]. Here we further optimize all three codecs for compression performance. Coding simulations are performed on a set of large-format color images captured from mainstream digital cameras and 1080p HD video sequences commonly used for H.264/AVC standardization work. Overall, our experimental results show that all three codecs offer very similar coding performance at the high-quality, high-resolution setting. Differences tend to be data-dependent: JPEG2000, with its wavelet technology, tends to be the best performer on smooth spatial data; H.264/AVC High Profile, with its advanced spatial prediction modes, tends to cope best with more complex visual content; Microsoft HD Photo tends to be the most consistent across the board. For the still-image data sets, JPEG2000 offers the best R-D performance gains (around 0.2 to 1 dB in peak signal-to-noise ratio) over H.264/AVC High Profile intra coding and Microsoft HD Photo. For the 1080p video data set, all three codecs offer very similar coding performance. As in [1, 2], we consider neither scalability nor complexity in this study (JPEG2000 operates in its non-scalable, best-performance mode).
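The R-D gains quoted in this abstract are expressed in PSNR. As a reminder of the metric (a generic sketch, not the paper's evaluation harness), peak signal-to-noise ratio for 8-bit images is computed from the mean squared error against a peak value of 255:

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-shape images."""
    err = reference.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(err ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: a flat 16x16 image with one pixel off by 10 gray levels.
ref = np.full((16, 16), 128, dtype=np.uint8)
rec = ref.copy()
rec[0, 0] = 138
print(round(psnr(ref, rec), 2))  # 52.21
```

A gain of "0.2 to 1 dB" as reported above means the winning codec reaches the same PSNR at a lower bitrate, or a higher PSNR at the same bitrate, along the rate-distortion curve.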

  19. Quantum computing with Majorana fermion codes

    NASA Astrophysics Data System (ADS)

    Litinski, Daniel; von Oppen, Felix

    2018-05-01

    We establish a unified framework for Majorana-based fault-tolerant quantum computation with Majorana surface codes and Majorana color codes. All logical Clifford gates are implemented with zero-time overhead. This is done by introducing a protocol for Pauli product measurements with tetrons and hexons which only requires local 4-Majorana parity measurements. An analogous protocol is used in the fault-tolerant setting, where tetrons and hexons are replaced by Majorana surface code patches, and parity measurements are replaced by lattice surgery, still only requiring local few-Majorana parity measurements. To this end, we discuss twist defects in Majorana fermion surface codes and adapt the technique of twist-based lattice surgery to fermionic codes. Moreover, we propose a family of codes that we refer to as Majorana color codes, which are obtained by concatenating Majorana surface codes with small Majorana fermion codes. Majorana surface and color codes can be used to decrease the space overhead and stabilizer weight compared to their bosonic counterparts.

  20. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the basic color for the identification of: (i) Fire protection equipment and apparatus. [Reserved] (ii... 29 Labor 5 2011-07-01 2011-07-01 false Safety color code for marking physical hazards. 1910.144 Section 1910.144 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH...

  1. [Hybrid 3-D rendering of the thorax and surface-based virtual bronchoscopy in surgical and interventional therapy control].

    PubMed

    Seemann, M D; Gebicke, K; Luboldt, W; Albes, J M; Vollmar, J; Schäfer, J F; Beinert, T; Englmeier, K H; Bitzer, M; Claussen, C D

    2001-07-01

    The aim of this study was to demonstrate the possibilities of a hybrid rendering method, the combination of a color-coded surface and a volume rendering method, and the feasibility of performing surface-based virtual endoscopy with different representation models in the operative and interventional therapy control of the chest. In 6 consecutive patients with partial lung resection (n = 2) and lung transplantation (n = 4), thin-section spiral computed tomography of the chest was performed. The tracheobronchial system and the introduced metallic stents were visualized using a color-coded surface rendering method. The remaining thoracic structures were visualized using a volume rendering method. For virtual bronchoscopy, the tracheobronchial system was visualized using a triangle surface model, a shaded-surface model and a transparent shaded-surface model. The hybrid 3D visualization combines the advantages of the color-coded surface and volume rendering methods and facilitates a clear representation of the tracheobronchial system and of the complex topographical relationship of morphological and pathological changes without loss of diagnostic information. Performing virtual bronchoscopy with the transparent shaded-surface model facilitates a reasonable to optimal simultaneous visualization and assessment of the surface structure of the tracheobronchial system and the surrounding mediastinal structures and lesions. Hybrid rendering eases the morphological assessment of anatomical and pathological changes without the need for time-consuming detailed analysis and presentation of source images. Performing virtual bronchoscopy with a transparent shaded-surface model offers a promising alternative to flexible fiberoptic bronchoscopy.

  2. Influence of Interpretation Aids on Attentional Capture, Visual Processing, and Understanding of Front-of-Package Nutrition Labels.

    PubMed

    Antúnez, Lucía; Giménez, Ana; Maiche, Alejandro; Ares, Gastón

    2015-01-01

    To study the influence of 2 interpretational aids of front-of-package (FOP) nutrition labels (color code and text descriptors) on attentional capture and consumers' understanding of nutritional information. A full factorial design was used to assess the influence of color code and text descriptors using visual search and eye tracking. Ten trained assessors participated in the visual search study and 54 consumers completed the eye-tracking study. In the visual search study, assessors were asked to indicate whether there was a label high in fat within sets of mayonnaise labels with different FOP labels. In the eye-tracking study, assessors answered a set of questions about the nutritional content of labels. The researchers used logistic regression to evaluate the influence of interpretational aids of FOP nutrition labels on the percentage of correct answers. Analyses of variance were used to evaluate the influence of the studied variables on attentional measures and participants' response times. Response times were significantly higher for monochromatic FOP labels compared with color-coded ones (3,225 vs 964 ms; P < .001), which suggests that color codes increase attentional capture. The highest number and duration of fixations and visits were recorded on labels that did not include color codes or text descriptors (P < .05). The lowest percentage of incorrect answers was observed when the nutrient level was indicated using color code and text descriptors (P < .05). The combination of color codes and text descriptors seems to be the most effective alternative to increase attentional capture and understanding of nutritional information. Copyright © 2015 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  3. Normative Database and Color-code Agreement of Peripapillary Retinal Nerve Fiber Layer and Macular Ganglion Cell-inner Plexiform Layer Thickness in a Vietnamese Population.

    PubMed

    Perez, Claudio I; Chansangpetch, Sunee; Thai, Andy; Nguyen, Anh-Hien; Nguyen, Anwell; Mora, Marta; Nguyen, Ngoc; Lin, Shan C

    2018-06-05

    To evaluate the distribution and the color probability codes of peripapillary retinal nerve fiber layer (RNFL) and macular ganglion cell-inner plexiform layer (GCIPL) thickness in a healthy Vietnamese population and compare them with the original color codes provided by the Cirrus spectral domain OCT. Cross-sectional study. We recruited non-glaucomatous Vietnamese subjects and constructed a normative database for peripapillary RNFL and macular GCIPL thickness. The probability color codes for each decade of age were calculated. We evaluated the agreement, with the Kappa coefficient (κ), between the OCT color probability codes of the Cirrus built-in original normative database and those of the Vietnamese normative database. 149 eyes of 149 subjects were included. The mean age of enrollees was 60.77 (±11.09) years, with a mean spherical equivalent of +0.65 (±1.58) D and mean axial length of 23.4 (±0.87) mm. Average RNFL thickness was 97.86 (±9.19) microns and average macular GCIPL thickness was 82.49 (±6.09) microns. Agreement between the original and adjusted normative databases for RNFL was fair for the average and inferior quadrant (κ=0.25 and 0.2, respectively) and good for the other quadrants (range: κ=0.63-0.73). For macular GCIPL, κ agreement ranged between 0.39 and 0.69. After adjusting with the normative Vietnamese database, the percentage of yellow and red color codes increased significantly for peripapillary RNFL thickness. The Vietnamese population has a thicker RNFL than the Cirrus normative database, which leads to poor color-code agreement in the average and inferior quadrant between the original and adjusted databases. These findings should encourage the creation of a peripapillary RNFL normative database for each ethnicity.

  4. Near-Infrared Coloring via a Contrast-Preserving Mapping Model.

    PubMed

    Chang-Hwan Son; Xiao-Ping Zhang

    2017-11-01

    Near-infrared gray images captured along with corresponding visible color images have recently proven useful for image restoration and classification. This paper introduces a new coloring method to add colors to near-infrared gray images based on a contrast-preserving mapping model. A naive coloring method directly adds the colors from the visible color image to the near-infrared gray image. However, this method results in an unrealistic image because of the discrepancies in the brightness and image structure between the captured near-infrared gray image and the visible color image. To solve the discrepancy problem, first, we present a new contrast-preserving mapping model to create a new near-infrared gray image with a similar appearance in the luminance plane to the visible color image, while preserving the contrast and details of the captured near-infrared gray image. Then, we develop a method to derive realistic colors that can be added to the newly created near-infrared gray image based on the proposed contrast-preserving mapping model. Experimental results show that the proposed new method not only preserves the local contrast and details of the captured near-infrared gray image, but also transfers the realistic colors from the visible color image to the newly created near-infrared gray image. It is also shown that the proposed near-infrared coloring can be used effectively for noise and haze removal, as well as local contrast enhancement.

  5. Visualizing spatiotemporal pulse propagation: first-order spatiotemporal couplings in laser pulses.

    PubMed

    Rhodes, Michelle; Guang, Zhe; Pease, Jerrold; Trebino, Rick

    2017-04-10

    Even though a general theory of first-order spatiotemporal couplings exists in the literature, it is often difficult to visualize how these distortions affect laser pulses. In particular, it is difficult to show the spatiotemporal phase of pulses in a meaningful way. Here, we propose a general solution to plotting the electric fields of pulses in three-dimensional space that intuitively shows the effects of spatiotemporal phases. The temporal phase information is color-coded using spectrograms and color response functions, and the beam is propagated to show the spatial phase evolution. Using this plotting technique, we generate two- and three-dimensional images and movies that show the effects of spatiotemporal couplings.

  6. Visualizing spatiotemporal pulse propagation: first-order spatiotemporal couplings in laser pulses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rhodes, Michelle; Guang, Zhe; Pease, Jerrold

    2017-04-06

    Even though a general theory of first-order spatiotemporal couplings exists in the literature, it is often difficult to visualize how these distortions affect laser pulses. In particular, it is difficult to show the spatiotemporal phase of pulses in a meaningful way. We propose a general solution to plotting the electric fields of pulses in three-dimensional space that intuitively shows the effects of spatiotemporal phases. The temporal phase information is color-coded using spectrograms and color response functions, and the beam is propagated to show the spatial phase evolution. Using this plotting technique, we generate two- and three-dimensional images and movies that show the effects of spatiotemporal couplings.

  7. Context of Carbonate Rocks in Heavily Eroded Martian Terrain

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The color coding on this composite image of an area about 20 kilometers (12 miles) wide on Mars is based on infrared spectral information interpreted as evidence of various minerals present. Carbonate, which is indicative of a wet and non-acidic history, occurs in very small patches of exposed rock appearing green in this color representation, such as near the lower right corner.

    The scene is heavily eroded terrain to the west of a small canyon in the Nili Fossae region of Mars. It was one of the first areas where researchers on the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) science team detected carbonate in Mars rocks. The spectral information comes from infrared imaging by CRISM, one of six science instruments on NASA's Mars Reconnaissance Orbiter. That coloring is overlaid on a grayscale image from the same orbiter's Context Camera.

    The uppermost capping rock unit (purple) is underlain successively by banded olivine-bearing rocks (yellow) and rocks bearing iron-magnesium smectite clay (blue). Where the olivine is a greenish hue, it has been partially altered by interaction with water. The carbonate and olivine occupy the same level in the stratigraphy, and it is thought that the carbonate formed by aqueous alteration of olivine. The channel running from upper left to lower right through the image and eroding into the layers of bedrock testifies to the past presence of water in this region. That some of the channels are closely associated with carbonate (lower right) indicates that waters interacting with the carbonate were neutral to alkaline because acidic waters would have dissolved the carbonate.

    Information for the color coding came from CRISM images catalogued as FRT0000B438, FRT0000A4FC, and FRT00003E12. This composite was made using 2.38-micrometer-wavelength data as red, 1.80 micrometer as green, and 1.15 micrometer as blue.
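Assigning each spectral band to a display color channel, as described for this CRISM composite, amounts to stacking three single-band images into the RGB planes of one image. A minimal sketch with synthetic bands (not actual CRISM data; the per-band contrast stretch is an illustrative choice):

```python
import numpy as np

def false_color(band_red, band_green, band_blue):
    """Stack three single-band images into a false-color RGB composite,
    stretching each band independently to the full [0, 1] display range."""
    def stretch(band):
        b = band.astype(np.float64)
        lo, hi = b.min(), b.max()
        return (b - lo) / (hi - lo) if hi > lo else np.zeros_like(b)
    return np.dstack([stretch(band_red), stretch(band_green), stretch(band_blue)])

# Synthetic stand-ins for the 2.38, 1.80, and 1.15 micrometer bands.
rng = np.random.default_rng(0)
b238, b180, b115 = (rng.integers(0, 4096, (32, 32)) for _ in range(3))
composite = false_color(b238, b180, b115)  # b238 -> red, b180 -> green, b115 -> blue
print(composite.shape)  # (32, 32, 3)
```

In such a composite a pixel appears, say, green wherever the band mapped to green is bright relative to the other two; that is why specific minerals show up as distinct colors.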

    The base black-and-white image, acquired at a resolution of 5 meters (16 feet) per pixel, is catalogued as CTX P03_002176_2024_XI_22N283W_070113 by the Context Camera science team.

    NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology, Pasadena, manages the Mars Reconnaissance Orbiter for the NASA Science Mission Directorate, Washington. Lockheed Martin Space Systems, Denver, is the prime contractor for the project and built the spacecraft. The Johns Hopkins University Applied Physics Laboratory led the effort to build the CRISM instrument and operates CRISM in coordination with an international team of researchers from universities, government and the private sector. Malin Space Science Systems, San Diego, provided and operates the Context Camera.

  8. SRTM Colored Height and Shaded Relief: Pinon Canyon region, Colorado

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Erosional features are prominent in this view of southern Colorado taken by the Shuttle Radar Topography Mission (SRTM). The area covers about 20,000 square kilometers and is located about 50 kilometers south of Pueblo, Colorado. The prominent mountains near the left edge of the image are the Spanish Peaks, remnants of a 20 million year old volcano. Rising 2,100 meters (7,000 ft) above the plains to the east, these igneous rock formations with intrusions of eroded sedimentary rock historically served as guiding landmarks for travelers on the Mountain Branch of the Santa Fe Trail.

    Near the center of the image is the Pinon Canyon Maneuver Site, a training area for soldiers of the U.S. Army from nearby Fort Carson. The site supports a diverse ecosystem with large numbers of big and small game, fisheries, non-game wildlife, forest, range land and mineral resources. It is bounded on the east by the dramatic topography of the Purgatoire River Canyon, a 100 meter (328 foot) deep scenic red canyon with flowing streams, sandstone formations, and exposed geologic processes.

    Two visualization methods were combined to produce this image: shading and color coding of topographic height. The shade image was derived by computing topographic slope in the northwest-southeast direction. Southern slopes appear bright and northern slopes appear dark. Color coding is directly related to topographic height, with blue and green at the lower elevations, rising through yellow and brown to white at the highest elevations.

    Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense, and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise, Washington, DC.

    Size: 177.8 x 111.3 kilometers (110.5 x 69.2 miles) Location: 37.5 deg. North lat., 104 deg. West lon. Orientation: North toward the top Image Data: Shaded and colored SRTM elevation model Original Data Resolution: SRTM 1 arcsecond (30 meters or 98 feet) Date Acquired: February 2000

  9. Modes of Visual Recognition and Perceptually Relevant Sketch-based Coding for Images

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.

    1991-01-01

    A review of visual recognition studies is used to define two levels of information requirements. These two levels are related to two primary subdivisions of the spatial frequency domain of images and reflect two distinct physical properties of arbitrary scenes. In particular, pathologies in recognition due to cerebral dysfunction point to a more complete split into two major types of processing: high spatial frequency edge-based recognition vs. low spatial frequency lightness (and color) based recognition. The former is more central and general, while the latter is more specific and is necessary for certain special tasks. The two modes of recognition can also be distinguished on the basis of physical scene properties: the highly localized edges associated with reflectance and sharp topographic transitions vs. smooth topographic undulation. The extreme case of heavily abstracted images is pursued to gain an understanding of the minimal information required to support both modes of recognition. Here the intention is to define the semantic core of transmission. This central core of processing can then be fleshed out with additional image information and coding and rendering techniques.

  10. 7 CFR 1755.200 - RUS standard for splicing copper and fiber optic cables.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Color coded plastic tie wraps shall be placed loosely around each binder group of cables before splicing... conform to the same color designations as the binder ribbons. Twisted wire pigtails shall not be used to identify binder groups due to potential transmission degradation. (ii) The standard insulation color code...

  11. Anticounterfeiting Quick Response Code with Emission Color of Invisible Metal-Organic Frameworks as Encoding Information.

    PubMed

    Wang, Yong-Mei; Tian, Xue-Tao; Zhang, Hui; Yang, Zhong-Rui; Yin, Xue-Bo

    2018-06-21

    Counterfeiting is a global epidemic that is compelling the development of new anticounterfeiting strategies. Herein, we report a novel multiple-anticounterfeiting encoding strategy of invisible fluorescent quick response (QR) codes with emission color as the information storage unit. The strategy requires red, green, and blue (RGB) light-emitting materials for different emission colors as encrypted information, single excitation of all of the emissions for practicability, and ultraviolet (UV) excitation for invisibility under daylight. Therefore, RGB light-emitting nanoscale metal-organic frameworks (NMOFs) are designed as inks to construct the colorful light-emitting boxes for information encryption, while three black vertex boxes are used for positioning. Full-color emissions are obtained by mixing the trichromatic NMOF inks through an inkjet printer. The encoding capacity is easily adjusted through the number of light-emitting boxes and the effectively unlimited range of emission colors. The information is decoded with specific excitation light at 275 nm, making the QR codes invisible under daylight. The composition of the inks, the invisibility, the inkjet printing, and the abundant encrypted information all contribute to multiple anticounterfeiting. The proposed QR code pattern holds great potential for advanced anticounterfeiting.

  12. Introduction to Color Imaging Science

    NASA Astrophysics Data System (ADS)

    Lee, Hsien-Che

    2005-04-01

    Color imaging technology has become almost ubiquitous in modern life in the form of monitors, liquid crystal screens, color printers, scanners, and digital cameras. This book is a comprehensive guide to the scientific and engineering principles of color imaging. It covers the physics of light and color, how the eye and physical devices capture color images, how color is measured and calibrated, and how images are processed. It stresses physical principles and includes a wealth of real-world examples. The book will be of value to scientists and engineers in the color imaging industry and, with homework problems, can also be used as a text for graduate courses on color imaging.

  13. Gulf Coast, Shaded Relief and Colored Height

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [Figures 1 and 2 removed for brevity, see original site]

    The topography of the Gulf Coast states is well shown in this color-coded shaded relief map generated with data from the Shuttle Radar Topography Mission. The image on the top (see Figure 1) is a standard view showing southern Louisiana, Mississippi, Alabama and the panhandle of Florida. Green colors indicate low elevations, rising through yellow and tan, to white at the highest elevations.

    For the view on the bottom (see Figure 2), elevations below 10 meters (33 feet) above sea level have been colored light blue. These low coastal elevations are especially vulnerable to flooding associated with storm surges. Planners can use data like these to predict which areas are in the most danger and help develop mitigation plans in the event of particular flood events.

    Elevation data used in this image were acquired by the Shuttle Radar Topography Mission aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Geospatial-Intelligence Agency (NGA) of the U.S. Department of Defense and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Science Mission Directorate, Washington, D.C.

    Location: 31 degrees north latitude, 88 degrees west longitude Orientation: North toward the top, Mercator projection Size: 702 by 433 kilometers (435 by 268 miles) Image Data: shaded and colored SRTM elevation model Date Acquired: February 2000

  14. Southern Florida, Shaded Relief and Colored Height

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The very low topography of southern Florida is evident in this color-coded shaded relief map generated with data from the Shuttle Radar Topography Mission. The image on the left is a standard view, with the green colors indicating low elevations, rising through yellow and tan, to white at the highest elevations. In this exaggerated view even those highest elevations are only about 60 meters (197 feet) above sea level.

    For the view on the right, elevations below 5 meters (16 feet) above sea level have been colored dark blue, and lighter blue indicates elevations below 10 meters (33 feet). This is a dramatic demonstration of how Florida's low topography, especially along the coastline, makes it especially vulnerable to flooding associated with storm surges. Planners can use data like these to predict which areas are in the most danger and help develop mitigation plans in the event of particular flood events.

    Elevation data used in this image were acquired by the Shuttle Radar Topography Mission aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Geospatial-Intelligence Agency (NGA) of the U.S. Department of Defense and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Science Mission Directorate, Washington, D.C.

    Location: 27 degrees north latitude, 81 degrees west longitude Orientation: North toward the top, Mercator projection Size: 397 by 445 kilometers (246 by 276 miles) Image Data: shaded and colored SRTM elevation model Date Acquired: February 2000

  15. Effects of target and distractor saturations on the cognitive performance of an integrated display interface

    NASA Astrophysics Data System (ADS)

    Xue, Chengqi; Li, Jing; Wang, Haiyan; Niu, Yafeng

    2015-01-01

    Color coding is often used to enhance decision quality in the complex man-machine interfaces of integrated display systems. However, people are easily distracted by irrelevant colors and by the numerous data points and complex structures in the interface. Although a growing number of studies address the problem of achieving efficient color coding, few have determined the effects of target and distractor saturations on cognitive performance. To study the performance of target colors among distractors, a systematic experiment was conducted to assess the influence of high- and low-saturation targets on cognitive performance, and the extent to which distractors of different saturations in homogeneous colors affect the targets. From an analysis of reaction times using non-parametric statistics, a method for calculating the cognitive performance of each color is proposed. Based on the calculated color differences and the accumulated reaction times, it is shown that, among distractors of different saturations in homogeneous colors, high-saturation yellow targets perform better than low-saturation ones, while green and blue targets perform moderately. When searching for a singleton target on a black background, the color difference between target and distractor should exceed 20 ΔE*ab units for yellow saturation coding, whereas it should exceed 40 ΔE*ab units for blue and green saturation coding. In addition, for saturation coding, the color difference between target and background influences cognitive performance more strongly than the color difference between target and distractor. The hue attribute appears to determine whether the saturation difference between target and distractor affects cognitive performance. Based on the experimental results, a simulation design of the instrument dials in a flight situation-awareness interface was completed and tested. The simulation results show the feasibility of the method for choosing target and distractor colors, and the research provides guidance for the color saturation design of such interfaces.

  16. Shenandoah National Park, Virginia, Shaded Relief with Height as Color

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Shenandoah National Park lies astride part of the Blue Ridge Mountains, which form the southeastern range of the greater Appalachian Mountains in Virginia. The park is well framed by this one-degree of latitude (38-39 north) by one-degree of longitude (78-79 west) cell of Shuttle Radar Topography Mission data, and it appears here as the most prominent ridge trending diagonally across the scene. Skyline Drive, a 169-kilometer (105-mile) road that winds along the crest of the mountains through the length of the park, provides vistas of the surrounding landscape. The Shenandoah River flows through the valley to the west, with Massanutten Mountain standing between the river's north and south forks. Unusually pronounced meanders of both river forks are very evident near the top center of this scene. Massanutten Mountain is itself a distinctive landform, consisting of highly elongated looping folds of sedimentary rock. The rolling Piedmont country lies to the southeast of the park, with Charlottesville located at the bottom center of the scene.

    Two visualization methods were combined to produce this image: shading and color coding of topographic height. The shade image was derived by computing topographic slope in the north-south direction. Northern slopes appear bright and southern slopes appear dark. Color coding is directly related to topographic height, with green at the lower elevations, rising through yellow, red, and magenta, to bluish-white at the highest elevations.
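The two visualization steps described here, shading from the north-south slope and color coding of height, can be sketched per pixel. This is a minimal illustration, not the SRTM production code; the ramp bins, slope gain, and function names are choices made here.

```python
def ns_shade(dem, y, x):
    """Brightness from the north-south slope: terrain rising toward the
    south faces north and renders bright, matching the description above."""
    south = dem[min(y + 1, len(dem) - 1)][x]
    north = dem[max(y - 1, 0)][x]
    slope = south - north              # positive => north-facing slope
    # squash into a 0..1 brightness; the 0.1 gain is an arbitrary choice
    return max(0.0, min(1.0, 0.5 + 0.1 * slope))

RAMP = ["green", "yellow", "red", "magenta", "white"]  # low -> high

def height_color(elev, lo, hi):
    """Pick a ramp color by linearly binning elevation between lo and hi."""
    t = (elev - lo) / (hi - lo) if hi > lo else 0.0
    i = max(0, min(int(t * len(RAMP)), len(RAMP) - 1))
    return RAMP[i]
```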

    Elevation data used in this image were acquired by the Shuttle Radar Topography Mission aboard the Space Shuttle Endeavour, launched on February 11, 2000. The mission used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar that flew twice on the Space Shuttle Endeavour in 1994. The Shuttle Radar Topography Mission was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Imagery and Mapping Agency of the U.S. Department of Defense, and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, DC.

    Size: 111 by 87 kilometers (69 by 54 miles) Location: 38-39 degrees North latitude, 78-79 degrees West longitude Orientation: North toward the top Image Data: Shaded and colored SRTM elevation model Date Acquired: February 2000

  17. SRTM Colored Height and Shaded Relief: Near Zapala, Argentina

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Topographic data provided by the Shuttle Radar Topography Mission can provide many clues to geologic history and processes. This view of an area southwest of Zapala, Argentina, shows a wide diversity of geologic features. The highest peaks (left) appear to be massive (un-layered) crystalline rocks, perhaps granites. To their right (eastward) are tilted and eroded layered rocks, perhaps old lava flows, forming prominent ridges. Farther east and south, more subtle and curvilinear ridges show that the rock layers have not only been tilted but also folded. At the upper right, plateaus that cap the underlying geologic complexities are more recent lava flows - younger than the folding, but older than the current erosional pattern. Landforms in the southeast (lower right) and south-central areas appear partially wind-sculpted.

    Two visualization methods were combined to produce this image: shading and color coding of topographic height. The shade image was derived by computing topographic slope in the north-south direction. Northern slopes appear bright and southern slopes appear dark, as would be the case at noon at this latitude in the southern hemisphere. Color-coding is directly related to topographic height, with green at the lower elevations, rising through yellow, red, and magenta, to white at the highest elevations.

    Elevation data used in this image were acquired by the Shuttle Radar Topography Mission aboard Space Shuttle Endeavour, launched on February 11, 2000. The mission used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar that flew twice on Space Shuttle Endeavour in 1994. The Shuttle Radar Topography Mission was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration, the National Imagery and Mapping Agency of the U.S. Department of Defense, and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise, Washington, DC.

    Size: 45.9 by 36.0 kilometers (28.5 by 22.3 miles) Location: 39.4 deg. South lat., 70.3 deg. West lon. Orientation: North toward the top Image Data: Shaded and colored Shuttle Radar Topography Mission elevation model Date Acquired: February 2000

  18. SRTM Perspective of Colored Height and Shaded Relief Laguna Mellquina, Andes Mountains, Argentina

    NASA Technical Reports Server (NTRS)

    2001-01-01

    This depiction of an area south of San Martin de Los Andes, Argentina, is the first Shuttle Radar Topography Mission (SRTM) view of the Andes Mountains, the tallest mountain chain in the western hemisphere. This particular site does not include the higher Andes peaks, but it does include steep-sided valleys and other distinctive landforms carved by Pleistocene glaciers. Elevations here range from about 700 to 2,440 meters (2,300 to 8,000 feet). This region is very active tectonically and volcanically, and the landforms provide a record of the changes that have occurred over many thousands of years. Large lakes fill the broad mountain valleys, and the spectacular scenery here makes this area a popular resort destination for Argentinians.

    Three visualization methods were combined to produce this image: shading, color coding of topographic height and a perspective view. The shade image was derived by computing topographic slope in the north-south direction. Northern slopes appear bright and southern slopes appear dark, as would be the case at noon at this latitude in the southern hemisphere. Color coding is directly related to topographic height, with green at the lower elevations, rising through yellow, red, and magenta, to white at the highest elevations. The perspective is toward the west, 20 degrees off horizontal with 2X vertical exaggeration. The back (west) edge of the data set forms a false skyline within the Andes Range.

    Elevation data used in this image were acquired by the Shuttle Radar Topography Mission aboard Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on Space Shuttle Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense, and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C.

    Size: 55.0 x 37.2 kilometers (34.1 x 23.1 miles) Location: 40.4 deg. South lat., 71.3 deg. West lon. Orientation: West toward the top Image Data: Shaded and colored SRTM elevation model Date Acquired: February 2000

  19. 2D bifurcations and Newtonian properties of memristive Chua's circuits

    NASA Astrophysics Data System (ADS)

    Marszalek, W.; Podhaisky, H.

    2016-01-01

    Two interesting properties of Chua's circuits are presented. First, two-parameter bifurcation diagrams of Chua's oscillatory circuits with memristors are presented. Obtaining the various 2D bifurcation images requires a substantial numerical effort, possibly with parallel computation. The numerical algorithm is described first, and its code for creating 2D bifurcation images is freely available for download. Several color 2D images and the corresponding 1D greyscale bifurcation diagrams are included. Second, Chua's circuits are linked to Newton's law φ″ = F(t, φ, φ′)/m with φ = flux, constant m > 0, and a force term F(t, φ, φ′) containing memory terms. Finally, the jounce scalar equations for Chua's circuits are also discussed.
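A 2D bifurcation image is built by sweeping two parameters and coloring each cell by the detected dynamics. Integrating the actual memristive Chua ODEs is beyond a short sketch, so the idea is illustrated below with the discrete Hénon map as a stand-in; the map, tolerances, and iteration counts are choices made here, not the paper's.

```python
def detect_period(a, b, warmup=500, probe=64, tol=1e-6):
    """Iterate the Henon map x' = 1 - a*x^2 + y, y' = b*x and report the
    attractor's period (0 = divergent, or aperiodic within `probe` steps)."""
    x, y = 0.1, 0.1
    for _ in range(warmup):
        x, y = 1 - a * x * x + y, b * x
        if abs(x) > 1e6:
            return 0                 # orbit escaped to infinity
    x0, y0 = x, y
    for k in range(1, probe + 1):
        x, y = 1 - a * x * x + y, b * x
        if abs(x - x0) < tol and abs(y - y0) < tol:
            return k                 # orbit returned: period-k cycle
    return 0                         # no short period: likely chaotic

# One row of a 2D bifurcation image: sweep `a`, hold b fixed,
# and color each cell by the detected period.
periods = [detect_period(0.1 + 0.01 * i, 0.3) for i in range(5)]
```

A full image repeats this over a grid of (a, b) cells, which is embarrassingly parallel, hence the paper's note about parallel computation.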

  20. Compression of CCD raw images for digital still cameras

    NASA Astrophysics Data System (ADS)

    Sriram, Parthasarathy; Sudharsanan, Subramania

    2005-03-01

    Lossless compression of raw CCD images captured using color filter arrays has several benefits. The benefits include improved storage capacity, reduced memory bandwidth, and lower power consumption for digital still camera processors. The paper discusses the benefits in detail and proposes the use of a computationally efficient block adaptive scheme for lossless compression. Experimental results are provided that indicate that the scheme performs well for CCD raw images attaining compression factors of more than two. The block adaptive method also compares favorably with JPEG-LS. A discussion is provided indicating how the proposed lossless coding scheme can be incorporated into digital still camera processors enabling lower memory bandwidth and storage requirements.
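The abstract does not detail the block-adaptive scheme, so the following is a generic sketch of the idea under stated assumptions: each block of raw CFA samples is predicted by simple DPCM and the cheapest Rice parameter is chosen per block. The predictor, block size, and use of Rice codes are assumptions here, not the paper's exact method.

```python
def rice_bits(residual, k):
    """Bits needed to Rice-code one residual with parameter k."""
    # zig-zag map signed residuals to non-negative integers
    mapped = 2 * residual if residual >= 0 else -2 * residual - 1
    return (mapped >> k) + 1 + k     # unary quotient + stop bit + k remainder bits

def block_cost(samples, k):
    """Cost of DPCM residuals (previous-sample predictor) under parameter k."""
    bits, prev = 0, 0
    for s in samples:
        bits += rice_bits(s - prev, k)
        prev = s
    return bits

def encode_block_adaptive(raw, block=8, kmax=8):
    """Choose the cheapest Rice parameter independently per block."""
    total = 0
    for i in range(0, len(raw), block):
        chunk = raw[i:i + block]
        total += min(block_cost(chunk, k) for k in range(kmax + 1))
    return total
```

Adapting k per block is what lets the coder track the locally varying statistics of CFA data at low computational cost.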

  1. Image indexing using color correlograms

    DOEpatents

    Huang, Jing; Kumar, Shanmugasundaram Ravi; Mitra, Mandar; Zhu, Wei-Jing

    2001-01-01

    A color correlogram is a three-dimensional table, indexed by color and by distance between pixels, that expresses how the spatial correlation of color changes with distance in a stored image. The color correlogram may be used to distinguish an image from other images in a database. To create a color correlogram, the colors in the image are quantized into m color values, c_1, …, c_m. Also, the distance values k ∈ [d] to be used in the correlogram are determined, where [d] is the set of distances between pixels in the image, and d_max is the maximum distance between pixels. Each entry (i, j, k) in the table is the probability of finding a pixel of color c_j at a selected distance k from a pixel of color c_i. A color autocorrelogram, a restricted version of the color correlogram that considers only color pairs of the form (i, i), may also be used to identify an image.
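The definition above translates directly into code. Below is a minimal sketch of the restricted autocorrelogram, using L∞ (chessboard) distance, a common choice for this structure; the distance metric and the border-clipped normalization are assumptions made here.

```python
from collections import Counter

def autocorrelogram(img, colors, k):
    """Estimate P(neighbor at chessboard distance k has color c | center
    pixel has color c) for each quantized color c: the (i, i) entries."""
    h, w = len(img), len(img[0])
    hits, totals = Counter(), Counter()
    for y in range(h):
        for x in range(w):
            c = img[y][x]
            # visit pixels at exactly chessboard distance k (clipped at borders)
            for dy in range(-k, k + 1):
                for dx in range(-k, k + 1):
                    if max(abs(dy), abs(dx)) != k:
                        continue
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        totals[c] += 1
                        hits[c] += img[ny][nx] == c
    return {c: hits[c] / totals[c] for c in colors if totals[c]}

# 2x2 checkerboard of two quantized colors: only the diagonal neighbor
# at distance 1 shares the center pixel's color.
board = [[0, 1], [1, 0]]
ac = autocorrelogram(board, [0, 1], 1)
```

Computed over several k values, these probabilities form the distance-indexed signature used to match images in a database.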

  2. New regularization scheme for blind color image deconvolution

    NASA Astrophysics Data System (ADS)

    Chen, Li; He, Yu; Yap, Kim-Hui

    2011-01-01

    This paper proposes a new regularization scheme to address blind color image deconvolution. Color images generally have significant correlation among the red, green, and blue channels. Conventional blind monochromatic deconvolution algorithms handle each color channel independently, thereby ignoring the interchannel correlation present in color images. In view of this, a unified regularization scheme for color images is developed to recover the edges of color images and reduce color artifacts. In addition, exploiting color image properties, a spectral-based regularization operator is adopted to impose constraints on the blurs. Further, this paper proposes a reinforcement regularization framework that integrates a soft parametric learning term into blind color image deconvolution. A blur modeling scheme is developed to evaluate the relevance of various parametric blur structures, and this information is integrated into the deconvolution scheme. An alternating-minimization optimization procedure is then employed to iteratively minimize the image- and blur-domain cost functions. Experimental results show that the method achieves satisfactory restored color images under different blurring conditions.

  3. Railroad Valley, Nevada

    NASA Image and Video Library

    2002-02-01

    Information from images of Railroad Valley, Nevada captured on August 17, 2001 by the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) may provide a powerful tool for monitoring crop health and maintenance procedures. These images cover an area of north central Nevada. The top image shows irrigated fields, with healthy vegetation in red. The middle image highlights the amount of vegetation. The color code shows highest vegetation content in red, orange, yellow, green, blue, and purple and the lowest in black. The final image is a thermal infrared channel, with warmer temperatures in white and colder in black. In the thermal image, the northernmost and westernmost fields are markedly colder on their northwest areas, even though no differences are seen in the visible image or the second, Vegetation Index image. This can be attributed to the presence of excess water, which can lead to crop damage. http://photojournal.jpl.nasa.gov/catalog/PIA03463
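The middle image's vegetation measure is consistent with a standard vegetation index; the exact ASTER product is not specified, so the sketch below uses NDVI as a representative example.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from near-infrared and red
    reflectances; healthy vegetation pushes the value toward +1."""
    denom = nir + red
    return (nir - red) / denom if denom else 0.0

# Irrigated field (strong NIR reflectance) vs. bare soil (NIR ~ red)
field = ndvi(0.50, 0.08)
soil = ndvi(0.30, 0.25)
```

Color-coding such an index from red (high) down through the spectrum to black (lowest) yields a vegetation-content map like the one described.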

  4. Objective breast tissue image classification using Quantitative Transmission ultrasound tomography

    NASA Astrophysics Data System (ADS)

    Malik, Bilal; Klock, John; Wiskin, James; Lenox, Mark

    2016-12-01

    Quantitative Transmission Ultrasound (QT) is a powerful and emerging imaging paradigm which has the potential to perform true three-dimensional image reconstruction of biological tissue. Breast imaging is an important application of QT and allows non-invasive, non-ionizing imaging of whole breasts in vivo. Here, we report the first demonstration of breast tissue image classification in QT imaging. We systematically assess the ability of the QT images’ features to differentiate between normal breast tissue types. The three QT features were used in Support Vector Machines (SVM) classifiers, and classification of breast tissue as either skin, fat, glands, ducts or connective tissue was demonstrated with an overall accuracy of greater than 90%. Finally, the classifier was validated on whole breast image volumes to provide a color-coded breast tissue volume. This study serves as a first step towards a computer-aided detection/diagnosis platform for QT.

  5. Ambae Island, Vanuatu (South Pacific)

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The recently active volcano Mt. Manaro is the dominant feature in this shaded relief image of Ambae Island, part of the Vanuatu archipelago located 1400 miles northeast of Sydney, Australia. About 5000 inhabitants, half the island's population, were evacuated in early December from the path of a possible lahar, or mud flow, when the volcano started spewing clouds of steam and toxic gases 10,000 feet into the atmosphere.

    Last active in 1996, the 1496 meter (4908 ft.) high Hawaiian-style basaltic shield volcano features two lakes within its summit caldera, or crater. The ash and gas plume is actually emerging from a vent at the center of Lake Voui (at left), which was formed approximately 425 years ago after an explosive eruption.

    Two visualization methods were combined to produce the image: shading and color coding of topographic height. The shade image was derived by computing topographic slope in the northwest-southeast direction, so that northwest slopes appear bright and southeast slopes appear dark. Color coding is directly related to topographic height, with green at the lower elevations, rising through yellow and tan, to white at the highest elevations.

    Elevation data used in this image were acquired by the Shuttle Radar Topography Mission aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Geospatial-Intelligence Agency (NGA) of the U.S. Department of Defense and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise,Washington, D.C.

    Location: 15.4 degree south latitude, 167.9 degrees east longitude Orientation: North toward the top, Mercator projection Size: 36.8 by 27.8 kilometers (22.9 by 17.3 miles) Image Data: shaded and colored SRTM elevation model Date Acquired: February 2000

  6. Study on Mosaic and Uniform Color Method of Satellite Image Fusion in Large Area

    NASA Astrophysics Data System (ADS)

    Liu, S.; Li, H.; Wang, X.; Guo, L.; Wang, R.

    2018-04-01

    Because of improvements in satellite radiometric resolution, the color differences among multi-temporal satellite remote sensing images, and the large volume of satellite image data, mosaicking and uniform color processing of satellite images remain important problems in image processing. First, using the bundle uniform color method, the least-squares mosaic method of GXL, and the dodging function, smooth transitions of color and brightness can be achieved across large areas of multi-temporal satellite imagery. Second, Color Mapping software converts the 16-bit mosaic images to 8-bit mosaics, applying the uniform color method with low-resolution reference images. Finally, qualitative and quantitative methods are used to analyze and evaluate the satellite imagery after mosaicking and uniform coloring. Tests show that the correlation between mosaic images before and after coloring is higher than 95%, and that image information entropy increases and texture features are enhanced, as confirmed by quantitative indexes such as the correlation coefficient and information entropy. Satellite image mosaicking and color processing over large areas have thus been well implemented.

  7. A dual-channel fusion system of visual and infrared images based on color transfer

    NASA Astrophysics Data System (ADS)

    Pei, Chuang; Jiang, Xiao-yu; Zhang, Peng-wei; Liang, Hao-cong

    2013-09-01

    The increasing availability and deployment of imaging sensors operating in multiple spectra has led to a large research effort in image fusion, resulting in a plethora of pixel-level image fusion algorithms. However, most of these algorithms produce gray or false-color fusion results that are not well adapted to human vision. Transferring color from a daytime reference image to obtain a naturally colored fusion result is an effective way to solve this problem, but the computational cost of color transfer is high and cannot meet the requirements of real-time image processing. We developed a dual-channel infrared and visible image fusion system based on the TMS320DM642 digital signal processing chip. The system is divided into an image acquisition and registration unit, an image fusion processing unit, a system control unit, and an image output unit. Registration of the dual-channel images is realized by a combination of hardware and software methods. A false-color fusion algorithm in RGB color space produces an R-G fused image, and the system then transfers color from a chosen reference image to the fusion result. A color lookup table based on the statistical properties of the images is proposed to reduce the computational complexity of color transfer. The mapping between the standard lookup table and the improved color lookup table is simple and needs to be computed only once for a fixed scene. The system thus achieves real-time fusion and natural colorization of infrared and visible images. Experimental results show that the color-transferred images have a natural color appearance to human eyes and highlight targets effectively with clear background details. Observers using this system can interpret the image better and faster, improving situational awareness and reducing target detection time.
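A statistics-based color lookup table can be sketched as a per-channel mean/standard-deviation transfer precomputed into a 256-entry LUT, so each pixel later costs only one table lookup. The mean/std formulation below is an assumption; the paper does not give its exact statistics.

```python
def stats(channel):
    """Mean and standard deviation of one 8-bit channel (list of ints)."""
    n = len(channel)
    mean = sum(channel) / n
    var = sum((v - mean) ** 2 for v in channel) / n
    return mean, (var ** 0.5 or 1.0)   # guard against zero spread

def build_lut(src_channel, ref_channel):
    """Precompute a 256-entry table shifting the source channel's
    mean/std to the reference's: the once-per-scene mapping noted above."""
    ms, ss = stats(src_channel)
    mr, sr = stats(ref_channel)
    lut = []
    for v in range(256):
        mapped = (v - ms) * (sr / ss) + mr
        lut.append(max(0, min(255, round(mapped))))
    return lut

def apply_lut(channel, lut):
    """Per-pixel color transfer reduced to a single table lookup."""
    return [lut[v] for v in channel]
```

Because the LUT is built once for a fixed scene, the per-frame cost is a plain table indexing pass, which is what makes real-time operation feasible on a DSP.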

  8. Structure of the midcontinent basement. Topography, gravity, seismic, and remote sensing

    NASA Technical Reports Server (NTRS)

    Guinness, E. A.; Strebeck, J. W.; Arvidson, R. E.; Scholz, K.; Davies, G. F.

    1981-01-01

    Some 600,000 discrete Bouguer gravity estimates of the continental United States were spatially filtered to produce a continuous tone image. The filtered data were also digitally painted in color coded form onto a shaded relief map. The resultant image is a colored shaded relief map where the hue and saturation of a given image element is controlled by the value of the Bouguer anomaly. Major structural features (e.g., midcontinent gravity high) are readily discernible in these data, as are a number of subtle and previously unrecognized features. A linear gravity low that is approximately 120 to 150 km wide extends from southeastern Nebraska, at a break in the midcontinent gravity high, through the Ozark Plateau, and across the Mississippi embayment. The low is also aligned with the Lewis and Clark lineament (Montana to Washington), forming a linear feature of approximately 2800 km in length. In southeastern Missouri the gravity low has an amplitude of 30 milligals, a value that is too high to be explained by simple valley fill by sedimentary rocks.

  9. Color-Biased Regions of the Ventral Visual Pathway Lie between Face- and Place-Selective Regions in Humans, as in Macaques

    PubMed Central

    Conway, Bevil R.; Kanwisher, Nancy G.

    2016-01-01

    The existence of color-processing regions in the human ventral visual pathway (VVP) has long been known from patient and imaging studies, but their location in the cortex relative to other regions, their selectivity for color compared with other properties (shape and object category), and their relationship to color-processing regions found in nonhuman primates remain unclear. We addressed these questions by scanning 13 subjects with fMRI while they viewed two versions of movie clips (colored, achromatic) of five different object classes (faces, scenes, bodies, objects, scrambled objects). We identified regions in each subject that were selective for color, faces, places, and object shape, and measured responses within these regions to the 10 conditions in independently acquired data. We report two key findings. First, the three previously reported color-biased regions (located within a band running posterior–anterior along the VVP, present in most of our subjects) were sandwiched between face-selective cortex and place-selective cortex, forming parallel bands of face, color, and place selectivity that tracked the fusiform gyrus/collateral sulcus. Second, the posterior color-biased regions showed little or no selectivity for object shape or for particular stimulus categories and showed no interaction of color preference with stimulus category, suggesting that they code color independently of shape or stimulus category; moreover, the shape-biased lateral occipital region showed no significant color bias. These observations mirror results in macaque inferior temporal cortex (Lafer-Sousa and Conway, 2013), and taken together, these results suggest a homology in which the entire tripartite face/color/place system of primates migrated onto the ventral surface in humans over the course of evolution. 
SIGNIFICANCE STATEMENT Here we report that color-biased cortex is sandwiched between face-selective and place-selective cortex on the bottom surface of the brain in humans. This face/color/place organization mirrors that seen on the lateral surface of the temporal lobe in macaques, suggesting that the entire tripartite system is homologous between species. This result validates the use of macaques as a model for human vision, making possible more powerful investigations into the connectivity, precise neural codes, and development of this part of the brain. In addition, we find substantial segregation of color from shape selectivity in posterior regions, as observed in macaques, indicating a considerable dissociation of the processing of shape and color in both species. PMID:26843649

  10. Internet Color Imaging

    DTIC Science & Technology

    2000-07-01

    Hsien-Che Lee, Imaging Science and Technology Laboratory, Eastman Kodak Company, Rochester, New York 14650-1816 USA. ABSTRACT: The sharing and exchange of color images over the Internet pose very challenging problems to color science and technology. Emerging color standards…

  11. The Vortex Formerly Known as White Oval BA: Temperature Structure, Cloud Properties and Dynamical Simulation

    NASA Astrophysics Data System (ADS)

    Orton, Glenn S.; Yanamandra-Fisher, P. A.; Parrish, P. D.; Mousis, O.; Pantin, E.; Fuse, T.; Fujiyoshi, T.; Simon-Miller, A.; Morales-Juberias, R.; Tollestrup, E.; Connelley, M.; Trujillo, C.; Hora, J.; Irwin, P.; Fletcher, L.; Hill, D.; Kollmansberger, S.

    2006-09-01

    White Oval BA, constituted from 3 predecessor vortices (known as Jupiter's "classical" White Ovals) after successive mergers in 1998 and 2000, became the second-largest vortex in the atmosphere of Jupiter (and possibly the solar system) at the time of its formation. While it continues in this distinction, it required a name change after a 2005 December through 2006 February transformation which made it appear visually the same color as the Great Red Spot. Our campaign to understand the changes involved examination of the detailed color and wind field using Hubble Space Telescope instrumentation on several orbits in April. The field of temperatures, ammonia distribution and clouds were also examined using the mid-infrared VISIR camera/spectrometer on ESO's 8.2-m Very Large Telescope, the NASA Infrared Telescope Facility with the mid-infrared MIRSI instrument and the refurbished near-infrared facility camera NSFCam2. High-resolution images of the Oval were made before the color change with the COMICS mid-infrared facility on the 8.2-m Subaru telescope. We are using these images, together with images acquired at the IRTF and with the Gemini/North NIRI near-infrared camera between January, 2005, and August, 2006, to characterize the extent to which changes in storm strength (vorticity, positive vertical motion) influenced (i) the depth from which colored cloud particles may have been "dredged up" from depth or (ii) the altitude to which particles may have been lofted and subjected to high-energy UV radiation which caused a color change, as alternative explanations for the phenomenon. The answer will provide clues to the chemistry of Jupiter's cloud system and its well-known colors in general. The behavior of Oval BA, and its interaction with the Great Red Spot in particular, are also being compared with dynamical models run with the EPIC code.

  12. Reds, Greens, Yellows Ease the Spelling Blues.

    ERIC Educational Resources Information Center

    Irwin, Virginia

    1971-01-01

    This document reports on a color-coding innovation designed to improve the spelling ability of high school seniors. This color-coded system is based on two assumptions: that color will appeal to the students and that there are three principal reasons for misspelling. Two groups were chosen for the experiments. A basic list of spelling demons was…

  13. Multimodal digital color imaging system for facial skin lesion analysis

    NASA Astrophysics Data System (ADS)

    Bae, Youngwoo; Lee, Youn-Heum; Jung, Byungjo

    2008-02-01

    In dermatology, various digital imaging modalities are used as important tools to quantitatively evaluate the treatment of skin lesions. Cross-polarization color imaging has been used to evaluate skin chromophore (melanin and hemoglobin) information, and parallel-polarization imaging to evaluate skin texture. In addition, UV-A-induced fluorescence imaging is widely used to evaluate skin conditions such as sebum, keratosis, sun damage, and vitiligo. To maximize the efficacy of evaluating various skin lesions, it is necessary to integrate these modalities into a single imaging system. In this study, we propose a multimodal digital color imaging system that provides four different color images: a standard color image, parallel- and cross-polarization color images, and a UV-A-induced fluorescence color image. Herein, we describe the imaging system and present examples of image analysis. By analyzing the color information and morphological features of facial skin lesions, we can evaluate various skin lesions simultaneously and comparably. We conclude that the multimodal color imaging system can serve as an important assistive tool in dermatology.

  14. Pixel-based image fusion with false color mapping

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Mao, Shiyi

    2003-06-01

    In this paper, we propose a pixel-based image fusion algorithm that combines gray-level image fusion with false color mapping. The algorithm integrates two gray-level images from different sensor modalities or at different frequencies and produces a fused false-color image with higher information content than either of the original images, in which objects are easier to recognize. The algorithm has three steps: first, obtain the fused gray-level image of the two original images; second, compute the generalized high-boost filtered images between the fused gray-level image and each of the two source images; third, generate the fused false-color image. We use a hybrid averaging-and-selection fusion method to obtain the fused gray-level image, which provides better detail than either original image while reducing noise. However, the fused gray-level image cannot contain all the detail information of the two source images, and details in a gray-level image cannot be discerned as easily as in a color image, so a color fused image is necessary. In order to create color variation and enhance details in the final fused image, we produce three generalized high-boost filtered images and display them through the red, green and blue channels respectively, producing the final fused color image. The method is used to fuse two SAR images acquired over the San Francisco area (California, USA). The results show that the fused false-color image enhances the visibility of certain details, and its resolution is the same as that of the input images.
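
    The three steps above can be sketched in a few lines of NumPy. The hybrid averaging/selection rule and the exact high-boost formulation are not specified in the abstract, so this sketch substitutes plain averaging and a simple boost of each source against the fused image; `k` is an illustrative boost factor.

```python
import numpy as np

def fuse_false_color(a, b, k=1.5):
    """Fuse two gray-level images (float arrays in [0, 1]) into a
    false-color RGB image.  The high-boost definition used here is an
    assumption: each source is boosted against the fused average."""
    a = a.astype(float)
    b = b.astype(float)
    # Step 1: fused gray-level image (plain averaging stands in for the
    # paper's hybrid averaging-and-selection rule).
    f = 0.5 * (a + b)
    # Step 2: generalized high-boost images of each source against f.
    h_r = np.clip(f + k * (a - f), 0.0, 1.0)
    h_g = np.clip(f + k * (b - f), 0.0, 1.0)
    h_b = f
    # Step 3: display the three images through the R, G, B channels.
    return np.dstack([h_r, h_g, h_b])
```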

  15. Color line scan camera technology and machine vision: requirements to consider

    NASA Astrophysics Data System (ADS)

    Paernaenen, Pekka H. T.

    1997-08-01

    Color machine vision has shown a dynamic uptrend in use within the past few years, as the introduction of new camera and scanner technologies itself underscores. In the future, the movement from monochrome imaging to color will hasten, as machine vision system users demand more knowledge about their product stream. As color has come to machine vision, the equipment used to digitize color images must meet certain requirements. Color machine vision needs not only good color separation but also a high dynamic range and a good linear response from the camera used. These features become even more important when the image is converted to another color space, since some information is always lost when converting integer data to another form. Traditionally, color image processing has been much slower than gray-level image processing because of the threefold greater amount of data per image; the same applies to the threefold memory requirement. Advances in computers, memory and processing units have made it possible to handle even large color images cost-efficiently today. In some cases, analysis of a color image can in fact be easier and faster than of a comparable gray-level image because of the additional information per pixel. Color machine vision sets new requirements for lighting, too: high-intensity white light is required in order to acquire good images for further image processing or analysis. New developments in lighting technology will eventually bring solutions for color imaging.

  16. Use of Color-Coded Food Photographs for Meal Planning by Adults with Mental Retardation.

    ERIC Educational Resources Information Center

    Gines, Deon J.; And Others

    1990-01-01

    Ten adults with mild mental retardation used color-coded food photographs and meal code cards to plan nutritionally balanced meals. Subjects spent an average of nine minutes to plan three meals. Errors, which were primarily omissions, occurred mostly in food groups requiring four servings daily. (JDD)

  17. Visual Search Asymmetries within Color-Coded and Intensity-Coded Displays

    ERIC Educational Resources Information Center

    Yamani, Yusuke; McCarley, Jason S.

    2010-01-01

    Color and intensity coding provide perceptual cues to segregate categories of objects within a visual display, allowing operators to search more efficiently for needed information. Even within a perceptually distinct subset of display elements, however, it may often be useful to prioritize items representing urgent or task-critical information.…

  18. 78 FR 59265 - FD&C Yellow No. 5; Exemption From the Requirement of a Tolerance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-26

    ... (NAICS code 112). Food manufacturing (NAICS code 311). Pesticide manufacturing (NAICS code 32532). B. How.... 5 is a FDA permanently listed color additive used in food, drugs and cosmetics, including drugs and cosmetics for the eye area. FDA's color additive evaluation included the consideration of an extensive set...

  19. Utilizing typical color appearance models to represent perceptual brightness and colorfulness for digital images

    NASA Astrophysics Data System (ADS)

    Gong, Rui; Wang, Qing; Shao, Xiaopeng; Zhou, Conghao

    2016-12-01

    This study aims to expand the applications of color appearance models to representing the perceptual attributes of digital images, which supplies more accurate methods for predicting image brightness and image colorfulness. Two typical models, the CIELAB model and CIECAM02, were involved in developing algorithms to predict brightness and colorfulness for various images, in which three methods were designed to handle pixels of different color contents. Moreover, massive visual data were collected from psychophysical experiments on two mobile displays under three lighting conditions to analyze the characteristics of visual perception of these two attributes and to test the prediction accuracy of each algorithm. Detailed analyses revealed that image brightness and image colorfulness were predicted well by calculating the CIECAM02 parameters of lightness and chroma; thus, suitable methods for dealing with different color pixels were determined for image brightness and image colorfulness, respectively. This study supplies an example of extending color appearance models to describe image perception.
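
    As a minimal illustration of the CIELAB half of this idea, the following sketch predicts image brightness and colorfulness as the pixel averages of CIELAB lightness L* and chroma C*ab. The paper's actual predictors use CIECAM02 lightness and chroma with special handling of pixels of different color contents, which this sketch omits.

```python
import numpy as np

def image_brightness_colorfulness(lab):
    """Predict image brightness and colorfulness from a CIELAB image
    (an H x W x 3 array of L*, a*, b* values) by averaging per-pixel
    lightness and chroma -- a CIELAB simplification of the paper's
    CIECAM02-based method."""
    l_star = lab[..., 0]
    chroma = np.hypot(lab[..., 1], lab[..., 2])  # C*ab = sqrt(a*^2 + b*^2)
    return float(l_star.mean()), float(chroma.mean())
```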

  20. Color reproduction and processing algorithm based on real-time mapping for endoscopic images.

    PubMed

    Khan, Tareq H; Mohammed, Shahed K; Imtiaz, Mohammad S; Wahid, Khan A

    2016-01-01

    In this paper, we present a real-time preprocessing algorithm for enhancement of endoscopic images. A novel dictionary-based color mapping algorithm is used for reproducing the color information from a theme image, which is selected from a nearby anatomical location. A database of color endoscopy images for different locations is prepared for this purpose. The color map is dynamic, as its contents change with the theme image. The method is used on low-contrast grayscale white-light images and raw narrow-band images to highlight the vascular and mucosa structures and to colorize the images; it can also be applied to enhance the tone of color images. The statistical visual representation and universal image quality measures show that the proposed method highlights the mucosa structure better than other methods. The color similarity has been verified using the Delta E color difference, the structural similarity index, the mean structural similarity index, and structure and hue similarity. Color enhancement was measured using the color enhancement factor, which shows considerable improvement. The proposed algorithm has low, linear time complexity, resulting in higher execution speed than other related works.

  1. Color segmentation in the HSI color space using the K-means algorithm

    NASA Astrophysics Data System (ADS)

    Weeks, Arthur R.; Hague, G. Eric

    1997-04-01

    Segmentation of images is an important aspect of image recognition. While grayscale image segmentation has become quite a mature field, much less work has been done on color image segmentation. Until recently, this was predominantly due to the lack of the computing power and color display hardware required to manipulate true-color (24-bit) images. Today, it is not uncommon to find a standard desktop computer system with a true-color 24-bit display, at least 8 million bytes of memory, and 2 gigabytes of hard disk storage. Segmenting a color image is not as simple as segmenting each of the three RGB color components separately: the difficulty with the RGB color space is that it does not closely model the psychological understanding of color. A better color model, which closely follows human visual perception, is the hue, saturation, intensity (HSI) model. This model separates the color components into chromatic and achromatic information. Strickland et al. showed the importance of color in the extraction of edge features from an image; their method enhances the edges detectable in the luminance image with information from the saturation image. Segmentation of both the saturation and intensity components is easily accomplished with any grayscale segmentation algorithm, since these spaces are linear. The modulo-2π nature of the hue component, however, makes its segmentation difficult: hues of 0 and 2π yield the same color tint. Instead of applying separate segmentation to each of the hue, saturation, and intensity components, a better method is to segment the chromatic component separately from the intensity component, because of the important role chromatic information plays in the segmentation of color images. This paper presents a method of using the grayscale K-means algorithm to segment 24-bit color images. Additionally, this paper shows the importance the hue component plays in the segmentation of color images.
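
    A minimal sketch of one standard workaround for the wraparound problem described above: mapping each hue angle onto the unit circle as (cos h, sin h) before clustering removes the modulo-2π discontinuity, so an ordinary K-means can be applied. The tiny K-means implementation here is illustrative, not the paper's.

```python
import numpy as np

def kmeans(x, k, iters=20, seed=0):
    """Plain k-means on an (n, d) array; returns labels and centers."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return labels, centers

def segment_hue(hue, k):
    """Segment a hue image (angles in radians) with k-means.  Mapping
    each hue to (cos h, sin h) puts hues of 0 and 2*pi at the same
    point, eliminating the wraparound."""
    pts = np.column_stack([np.cos(hue.ravel()), np.sin(hue.ravel())])
    labels, _ = kmeans(pts, k)
    return labels.reshape(hue.shape)
```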

  2. Shaded Relief with Height as Color and Landsat, Yucatan Peninsula, Mexico

    NASA Technical Reports Server (NTRS)

    2003-01-01

    The top picture is a shaded relief image of the northwest corner of Mexico's Yucatan Peninsula generated from Shuttle Radar Topography Mission (SRTM) data, and shows a subtle, but unmistakable, indication of the Chicxulub impact crater. Most scientists now agree that this impact was the cause of the Cretaceous-Tertiary Extinction, the event 65 million years ago that marked the sudden extinction of the dinosaurs as well as the majority of life on Earth. The pattern of the crater's rim is marked by a trough, the darker green semicircular line near the center of the picture. This trough is only about 3 to 5 meters (10 to 15 feet) deep and is about 5 km (3 miles) wide, so subtle that if you walked across it you probably would not notice it. It is the surface expression of the buried crater's outer boundary. Scientists believe the impact, which was centered just off the coast, altered the subsurface rocks such that the overlying limestone sediments, which formed later and erode very easily, would preferentially erode along the crater rim. This formed the trough as well as numerous sinkholes (called cenotes), which are visible as small circular depressions.

    The bottom picture is the same area viewed by the Landsat satellite, and was made by displaying the Thematic Mapper's Band 7 (mid-infrared), Band 4 (near-infrared) and Band 2 (green) as red, green and blue. These colors were chosen to maximize the contrast between different vegetation and land cover types, with native vegetation and cultivated land showing as green, yellow and magenta, and urban areas as white. The circular white area near the center of the image is Merida, a city of about 720,000 population. Notice that in the SRTM image, which shows only topography, the city is not visible, while in the Landsat image, which does not show elevations, the trough is not visible.

    Two visualization methods were combined to produce the SRTM image: shading and color coding of topographic height. The shade image was derived by computing topographic slope in the northwest-southeast direction, so that northwest slopes appear bright and southeast slopes appear dark. Color coding is directly related to topographic height, with green at the lower elevations, rising through yellow and tan, to white at the highest elevations.
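
    The two visualization steps can be sketched as follows. The exact slope-to-shade mapping and color-ramp values used for the released SRTM products are not given here, so both are illustrative choices.

```python
import numpy as np

def shaded_colored_relief(elev):
    """Combine directional slope shading with height color coding for
    an elevation array; shade mapping and ramp colors are illustrative."""
    # Directional slope: sum of gradients along the two image axes as a
    # proxy for the northwest-southeast diagonal derivative, so slopes
    # facing one diagonal brighten and the other darken.
    slope = np.gradient(elev, axis=0) + np.gradient(elev, axis=1)
    shade = 0.5 + 0.5 * np.tanh(-slope)
    # Height color ramp: green -> yellow -> tan -> white.
    stops = np.linspace(elev.min(), elev.max() + 1e-9, 4)
    ramp = np.array([[0.2, 0.6, 0.2],   # green  (lowest elevations)
                     [0.9, 0.9, 0.3],   # yellow
                     [0.8, 0.7, 0.5],   # tan
                     [1.0, 1.0, 1.0]])  # white  (highest elevations)
    rgb = np.stack([np.interp(elev, stops, ramp[:, c]) for c in range(3)],
                   axis=-1)
    return rgb * shade[..., None]       # modulate color by shading
```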

    Elevation data used in this image were acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C.

    Size: 261 by 162 kilometers (162 by 100 miles) Location: 20.8 degrees North latitude, 89.3 degrees West longitude Orientation: North toward the top, Mercator projection Image Data: shaded and colored SRTM elevation model Original Data Resolution: SRTM 1 arcsecond (about 30 meters or 98 feet) Date Acquired: February 2000

  3. Gray-world-assumption-based illuminant color estimation using color gamuts with high and low chroma

    NASA Astrophysics Data System (ADS)

    Kawamura, Harumi; Yonemura, Shunichi; Ohya, Jun; Kojima, Akira

    2013-02-01

    A new approach is proposed for estimating illuminant colors from color images under an unknown scene illuminant. The approach is based on a combination of a gray-world-assumption-based illuminant color estimation method and a method using color gamuts. The former, which we previously proposed, improves on the original method, which hypothesizes that the average of all the object colors in a scene is achromatic. Since the original method estimates scene illuminant colors by calculating the average of all the image pixel values, its estimates are incorrect when certain image colors are dominant. Our previous method improves on it by choosing several colors on the basis of an opponent-color property, which is that the average color of opponent colors is achromatic, instead of using all colors. However, it cannot estimate illuminant colors when there are only a few image colors or when the image colors are unevenly distributed in local areas of the color space. The approach we propose in this paper combines our previous method with one using high-chroma and low-chroma gamuts, which makes it possible to find colors that satisfy the gray world assumption. High-chroma gamuts are used for adding appropriate colors to the original image, and low-chroma gamuts are used for narrowing down the illuminant color possibilities. Experimental results obtained using actual images show that even if the image colors are localized in a certain area of the color space, the illuminant colors are accurately estimated, with a smaller average estimation error than the conventional method.
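
    For reference, the original gray-world estimate that this work builds on is essentially a one-liner: the illuminant chromaticity is taken to be the normalized mean of the image's RGB channels.

```python
import numpy as np

def gray_world_illuminant(rgb):
    """Estimate the scene illuminant from an (H, W, 3) RGB image under
    the gray-world assumption: if the average of all object colors is
    achromatic, the channel means reveal the illuminant color.  This is
    the original method the paper improves on, not the paper's own."""
    means = rgb.reshape(-1, 3).mean(axis=0)
    return means / means.sum()  # normalized illuminant chromaticity
```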

  4. Influence of imaging resolution on color fidelity in digital archiving.

    PubMed

    Zhang, Pengchang; Toque, Jay Arre; Ide-Ektessabi, Ari

    2015-11-01

    Color fidelity is of paramount importance in digital archiving. In this paper, the relationship between color fidelity and imaging resolution was explored by calculating the color difference of an IT8.7/2 color chart with a CIELAB color difference formula for scanning and simulation images. Microscopic spatial sampling was used in selecting the image pixels for the calculations to highlight the loss of color information. A ratio, called the relative imaging definition (RID), was defined to express the correlation between image resolution and color fidelity. The results show that in order for color differences to remain unrecognizable, the imaging resolution should be at least 10 times higher than the physical dimension of the smallest feature in the object being studied.
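
    The CIELAB color difference at the heart of this analysis, assuming the simple CIE76 formula (the abstract does not say which variant was used), is just the Euclidean distance between two L*a*b* triples.

```python
import numpy as np

def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB space.
    Later formulas (CIE94, CIEDE2000) add perceptual weightings that
    this simple sketch omits."""
    lab1 = np.asarray(lab1, dtype=float)
    lab2 = np.asarray(lab2, dtype=float)
    return np.linalg.norm(lab1 - lab2, axis=-1)
```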

  5. Advanced microlens and color filter process technology for the high-efficiency CMOS and CCD image sensors

    NASA Astrophysics Data System (ADS)

    Fan, Yang-Tung; Peng, Chiou-Shian; Chu, Cheng-Yu

    2000-12-01

    New markets are emerging for digital electronic image devices, especially in visual communications, PC cameras, mobile/cell phones, security systems, toys, vehicle imaging systems and computer peripherals for document capture. A one-chip image system, in which the image sensor has a full digital interface, can bring image capture devices into our daily lives. Adding a color filter to such an image sensor, in a pattern of mosaic pixels or wide stripes, makes the image more real and colorful. We can say that the color filter makes life more colorful. What, then, is a color filter? A color filter blocks all of the incoming light except the color whose wavelength and transmittance match those of the filter itself. The color filter process coats and patterns green, red and blue (or cyan, magenta and yellow) mosaic resists onto the matching pixels of the image sensing array. From the signal captured at each pixel, the scene image can be reconstructed. The wide use of digital electronic cameras and multimedia applications today makes the future of the color filter bright. Although challenging, the color filter process is well worth developing. We provide the best service on shorter cycle time, excellent color quality, and high and stable yield. The key issues that an advanced color process has to solve and implement are planarization and micro-lens technology. Many key points of color filter process technology that must be considered are also described in this paper.

  6. Image subregion querying using color correlograms

    DOEpatents

    Huang, Jing; Kumar, Shanmugasundaram Ravi; Mitra, Mandar; Zhu, Wei-Jing

    2002-01-01

    A color correlogram (10) is a representation expressing the spatial correlation of color and distance between pixels in a stored image. The color correlogram (10) may be used to distinguish objects in an image as well as between images in a plurality of images. By intersecting a color correlogram of an image object with correlograms of images to be searched, those images which contain the objects are identified by the intersection correlogram.
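
    A simplified sketch of the idea: for a color-quantized image, an autocorrelogram entry gives the probability that a pixel at a given distance from a color-c pixel also has color c. This version considers horizontal offsets only, whereas the full correlogram of the patent aggregates all pixel pairs at each distance and over all color pairs.

```python
import numpy as np

def color_autocorrelogram(img, n_colors, distances=(1, 3, 5)):
    """Banded color autocorrelogram of a quantized image (a 2-D array
    of color indices in [0, n_colors)).  Entry [c, i] estimates the
    probability that a pixel distances[i] to the right of a color-c
    pixel also has color c -- a horizontal-only simplification."""
    img = np.asarray(img)
    out = np.zeros((n_colors, len(distances)))
    for i, d in enumerate(distances):
        a, b = img[:, :-d], img[:, d:]  # all pixel pairs at offset d
        for c in range(n_colors):
            mask = a == c
            if mask.any():
                out[c, i] = np.mean(b[mask] == c)
    return out
```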

  7. Filling in the retinal image

    NASA Technical Reports Server (NTRS)

    Larimer, James; Piantanida, Thomas

    1990-01-01

    The optics of the eye form an image on a surface at the back of the eyeball called the retina. The retina contains the photoreceptors that sample the image and convert it into a neural signal. The spacing of the photoreceptors in the retina is not uniform and varies with retinal locus. The central retinal field, called the macula, is densely packed with photoreceptors. The packing density falls off rapidly as a function of retinal eccentricity with respect to the macular region, and there are regions in which there are no photoreceptors at all. The retinal regions without photoreceptors are called blind spots or scotomas. The neural transformations that convert retinal image signals into percepts fill in the gaps and regularize the inhomogeneities of the retinal photoreceptor sampling mosaic. This filling-in mechanism plays an important role in understanding visual performance, but it is not well understood. A systematic collaborative research program at the Ames Research Center and SRI in Menlo Park, California, was designed to explore this mechanism. It was shown that the perceived fields, which in fact differ from the image on the retina due to filling-in, control some aspects of performance and not others. Researchers have linked these mechanisms to putative mechanisms of color coding and color constancy.

  8. Multiple Auto-Adapting Color Balancing for Large Number of Images

    NASA Astrophysics Data System (ADS)

    Zhou, X.

    2015-04-01

    This paper presents a powerful technology for color balance between images. It works not only for small numbers of images but also for an unlimited number of images. Multiple adaptive methods are used. To obtain a color-seamless mosaic dataset, local color is adjusted adaptively toward the target color. Local statistics of the source images are computed based on a so-called adaptive dodging window, and the adaptive target colors are statistically computed according to multiple target models. A gamma function is derived from the adaptive target and the adaptive source local statistics, and is applied to the source images to obtain the color-balanced output images. Five target color surface models are proposed: color point (single color), color grid, and first-, second- and third-order 2D polynomials. Least-squares fitting is used to obtain the polynomial target color surfaces. Target color surfaces are computed automatically based on all source images or on an external target image. Some special objects, such as water and snow, are filtered by a percentage cut or a given mask. Excellent results are achieved, and the performance is fast enough to support on-the-fly color balancing for large numbers of images (possibly hundreds of thousands). Detailed algorithms and formulae are described, and rich examples, including big mosaic datasets (e.g., one containing 36,006 images), are given. The results show that this technology can be successfully used on various imagery to obtain color-seamless mosaics. The algorithm has been used successfully in Esri ArcGIS.
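
    One plausible way to derive a gamma from source and target statistics (the abstract does not give the paper's exact derivation) is to choose the exponent that maps the source mean value onto the target mean for data normalized to (0, 1]:

```python
import numpy as np

def gamma_to_target(src, target_mean):
    """Color-balance one channel toward a target mean with a gamma
    curve.  For values in (0, 1], gamma = log(target_mean) /
    log(source_mean) maps the source mean *value* exactly onto the
    target mean.  This derivation is an illustrative assumption, not
    necessarily the paper's formula."""
    src = np.clip(src.astype(float), 1e-6, 1.0)
    gamma = np.log(target_mean) / np.log(src.mean())
    return src ** gamma
```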

  9. Laser-Sharp Jet Splits Water

    NASA Technical Reports Server (NTRS)

    2008-01-01

    A jet of gas firing out of a very young star can be seen ramming into a wall of material in this infrared image from NASA's Spitzer Space Telescope.

    The young star, called HH 211-mm, is cloaked in dust and can't be seen. But streaming away from the star are bipolar jets, color-coded blue in this view. The pink blob at the end of the jet to the lower left shows where the jet is hitting a wall of material. The jet is hitting the wall so hard that shock waves are being generated, which causes ice to vaporize off dust grains. The shock waves are also heating material up, producing energetic ultraviolet radiation. The ultraviolet radiation then breaks the water vapor molecules apart.

    The red color at the end of the lower jet represents shock-heated iron, sulfur and dust, while the blue color in both jets denotes shock-heated hydrogen molecules.

    HH 211-mm is part of a cluster of about 300 stars, called IC 348, located 1,000 light-years away in the constellation Perseus.

    This image is a composite of infrared data from Spitzer's infrared array camera and its multiband imaging photometer. Light with wavelengths of 3.6 and 4.5 microns is blue; 8-micron-light is green; and 24-micron light is red.

  10. France, Shaded Relief and Colored Height

    NASA Technical Reports Server (NTRS)

    2003-01-01

    This image of France was generated with data from the Shuttle Radar Topography Mission (SRTM). For this broad view the resolution of the data was reduced to 6 arcseconds (about 185 meters north-south and 127 meters east-west), resampled to a Mercator projection, and the French border outlined. Even at this decreased resolution the variety of landforms comprising the country is readily apparent.

    The upper central part of this scene is dominated by the Paris Basin, which consists of a layered sequence of sedimentary rocks. Fertile soils over much of the area make good agricultural land. The Normandie coast to the upper left is characterized by high, chalk cliffs, while the Brittany coast (the peninsula to the left) is highly indented where deep valleys were drowned by the sea, and the Biscay coast to the southwest is marked by flat, sandy beaches.

    To the south, the Pyrenees form a natural border between France and Spain, and the south-central part of the country is dominated by the ancient Massif Central. Subject to volcanism that has only subsided in the last 10,000 years, these central mountains are separated from the Alps by the north-south trending Rhone River Basin.

    Two visualization methods were combined to produce the image: shading and color coding of topographic height. The shade image was derived by computing topographic slope in the northwest-southeast direction, so that northwest slopes appear bright and southeast slopes appear dark. Color coding is directly related to topographic height, with green at the lower elevations, rising through yellow and tan, to white at the highest elevations.

    Elevation data used in this image were acquired by the Shuttle Radar Topography Mission aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C.

    Location: 42 to 51.5 degrees North latitude, 5.5 West to 8 degrees East longitude Orientation: North toward the top, Mercator projection Image Data: shaded and colored SRTM elevation model Original Data Resolution: SRTM 1 arcsecond (about 30 meters or 98 feet) Date Acquired: February 2000

  11. Alpine Fault, New Zealand, SRTM Shaded Relief and Colored Height

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The Alpine fault runs parallel to, and just inland of, much of the west coast of New Zealand's South Island. This view was created from the near-global digital elevation model produced by the Shuttle Radar Topography Mission (SRTM) and is almost 500 kilometers (just over 300 miles) wide. Northwest is toward the top. The fault is extremely distinct in the topographic pattern, nearly slicing this scene in half lengthwise.

    In a regional context, the Alpine fault is part of a system of faults that connects a west dipping subduction zone to the northeast with an east dipping subduction zone to the southwest, both of which occur along the juncture of the Indo-Australian and Pacific tectonic plates. Thus, the fault itself constitutes the major surface manifestation of the plate boundary here. Offsets of streams and ridges evident in the field, and in this view of SRTM data, indicate right-lateral fault motion. But convergence also occurs across the fault, and this causes the continued uplift of the Southern Alps, New Zealand's largest mountain range, along the southeast side of the fault.

    Two visualization methods were combined to produce this image: shading and color coding of topographic height. The shade image was derived by computing topographic slope in the northwest-southeast (image top to bottom) direction, so that northwest slopes appear bright and southeast slopes appear dark. Color coding is directly related to topographic height, with green at the lower elevations, rising through yellow and tan, to white at the highest elevations.

    Elevation data used in this image were acquired by the Shuttle Radar Topography Mission aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Geospatial-Intelligence Agency (NGA) of the U.S. Department of Defense and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C.

    Size: 495 kilometers (307 miles) by 162 kilometers (100 miles) Location: 43.2 degrees South latitude, 170.5 degrees East longitude Orientation: Northwest toward the top Image Data: Shaded and colored SRTM elevation model Date Acquired: February 2000

  12. Davenport Ranges, Northern Territory, Australia, SRTM Shaded Relief and Colored Height

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The Davenport Ranges of central Australia have been inferred to be among the oldest persisting landforms on Earth, founded on the belief that the interior of Australia has been tectonically stable for at least 700 million years. New rock age dating techniques indicate that substantial erosion has probably occurred over that time period and that the landforms are not nearly that old, but landscape evolution certainly occurs much slower here (at least now) than is typical across Earth's surface.

    Regardless of their antiquity, the Davenport Ranges exhibit a striking landform pattern as shown in this display of elevation data from the Shuttle Radar Topography Mission (SRTM). Quartzites and other erosion resistant strata form ridges within anticlinal (arched up) and synclinal (arched down) ovals and zigzags. These structures, if not the landforms, likely date back at least hundreds of millions of years, to a time when tectonic forces were active. Maximum local relief is only about 60 meters (about 200 feet), which is enough to contrast greatly with the extremely low relief surrounding terrain.

    Two visualization methods were combined to produce this image: shading and color coding of topographic height. The shade image was derived by computing topographic slope in the northeast-southwest (image top to bottom) direction, so that northeast slopes appear bright and southwest slopes appear dark. Color coding is directly related to topographic height, with green at the lower elevations, rising through yellow and tan, to white at the highest elevations.

    Elevation data used in this image were acquired by the Shuttle Radar Topography Mission aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Geospatial-Intelligence Agency (NGA) of the U.S. Department of Defense and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C.

    Size: 270 kilometers (168 miles) by 145 kilometers (90 miles) Location: 20.9 degrees South latitude, 134.9 degrees East longitude Orientation: Northeast toward the top Image Data: Shaded and colored SRTM elevation model Date Acquired: February 2000

  13. Color standardization in whole slide imaging using a color calibration slide

    PubMed Central

    Bautista, Pinky A.; Hashimoto, Noriaki; Yagi, Yukako

    2014-01-01

    Background: Color consistency in histology images is still an issue in digital pathology. Different imaging systems reproduce the colors of a histological slide differently. Materials and Methods: Color correction was implemented using the color information of the nine color patches of a color calibration slide. The inherent spectral colors of these patches, along with their scanned colors, were used to derive a color correction matrix whose coefficients were used to convert the pixels’ colors to their target colors. Results: There was a significant reduction of 3.42 units in the CIELAB color difference between images of the same H & E histological slide produced by two different whole slide scanners (P < 0.001 at the 95% confidence level). Conclusion: Color variations in histological images brought about by whole slide scanning can be effectively normalized with the use of the color calibration slide. PMID:24672739
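
    The patch-based correction described above reduces to a small least-squares fit: nine scanned patch colors against nine known target colors give an overdetermined system for a 3×3 matrix applied to every pixel. A minimal sketch with made-up patch values and a hypothetical linear scanner distortion (real calibration data would come from the slide's measured spectra):

```python
import numpy as np

# Nine made-up scanned patch colors (rows are RGB triples).
scanned = np.array([[200,  60,  60], [ 60, 200,  60], [ 60,  60, 200],
                    [180, 180,  60], [ 60, 180, 180], [180,  60, 180],
                    [ 40,  40,  40], [128, 128, 128], [220, 220, 220]],
                   dtype=float)

# Hypothetical linear distortion standing in for scanner behavior:
# the "true" patch colors are a known linear map of the scanned ones.
M_true = np.array([[1.05, 0.02, 0.00],
                   [0.03, 0.95, 0.01],
                   [0.00, 0.04, 1.10]])
target = scanned @ M_true.T

# Least-squares fit of the 3x3 correction matrix: scanned @ M ~= target.
M, *_ = np.linalg.lstsq(scanned, target, rcond=None)

# Applying M to the scanned colors recovers the targets (exactly here,
# because the toy distortion was linear; real data would leave residuals).
corrected = scanned @ M
assert np.allclose(corrected, target, atol=1e-6)
```

In practice the fitted matrix is applied to every pixel of the whole-slide image, and the residual CIELAB difference measures how well the correction worked.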

  14. Automatic color preference correction for color reproduction

    NASA Astrophysics Data System (ADS)

    Tsukada, Masato; Funayama, Chisato; Tajima, Johji

    2000-12-01

    The reproduction of natural objects in color images has attracted a great deal of attention. Reproducing more pleasing colors of natural objects is one way to improve image quality. We developed an automatic color correction method to maintain preferred color reproduction for three significant categories: facial skin color, green grass, and blue sky. In this method, a representative color in an object area to be corrected is automatically extracted from an input image, and a set of color correction parameters is selected depending on the representative color. The improvement in image quality for reproductions of natural images was more than 93 percent in subjective experiments. These results show the usefulness of our automatic color correction method for the reproduction of preferred colors.

  15. Stereo matching image processing by synthesized color and the characteristic area by the synthesized color

    NASA Astrophysics Data System (ADS)

    Akiyama, Akira; Mutoh, Eiichiro; Kumagai, Hideo

    2014-09-01

    We have developed stereo matching image processing based on synthesized color and the corresponding area of that synthesized color, for ranging objects and for image recognition. The typical images from a pair of stereo imagers may disagree with each other due to size change, displacement, appearance change, and deformation of the characteristic area. We constructed the synthesized color and the corresponding color area with the same synthesized color to make the stereo matching distinct. The construction proceeds in three steps. The first step makes a binary edge image by differentiating the focused image from each imager and verifying that the differentiated image has a normal frequency-density distribution, in order to find the threshold level for the binary procedure. We used the Daubechies wavelet transformation for the differentiation in this study. The second step derives the synthesized color by averaging color brightness between binary edge points, alternately in the horizontal and vertical directions. The color-averaging procedure is repeated until the fluctuation of the averaged color becomes negligible with respect to the 256 brightness levels. The third step extracts areas of the same synthesized color by collecting pixels of the same synthesized color and grouping these pixels by 4-directional connectivity. The matching areas for stereo matching are determined using the synthesized color areas. The matching point is the center of gravity of each synthesized color area, so the parallax between a pair of images is derived easily from these centers of gravity. A stereo matching experiment was performed on a toy soccer ball. The experiment showed that stereo matching by the synthesized color technique is simple and effective.
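
    The final matching step above (parallax from the centers of gravity of matched synthesized-color areas) is simple to sketch. The masks and the shift below are made-up toy data, and the depth relation Z = f·B/d uses assumed symbols (focal length f, baseline B), not values from the paper:

```python
import numpy as np

def centroid(mask):
    """Center of gravity of a binary area (x, y in pixel coordinates)."""
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()

# Toy matched areas: the same synthesized-color region, shifted 3 px
# horizontally between the left and right images.
left = np.zeros((10, 10), bool);  left[4:7, 2:5] = True
right = np.zeros((10, 10), bool); right[4:7, 5:8] = True

d = centroid(left)[0] - centroid(right)[0]   # horizontal parallax (pixels)
# Depth would then follow as Z = f * B / abs(d) for known f and B.
assert abs(d) == 3.0
```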

  16. Transgenic nude mice ubiquitously expressing fluorescent proteins for color-coded imaging of the tumor microenvironment.

    PubMed

    Hoffman, Robert M

    2014-01-01

    We have developed a transgenic green fluorescent protein (GFP) nude mouse with ubiquitous GFP expression. The GFP nude mouse was obtained by crossing nontransgenic nude mice with the transgenic C57/B6 mouse in which the β-actin promoter drives GFP expression in essentially all tissues. In the adult mice, many organs brightly expressed GFP, including the heart, lungs, spleen, pancreas, esophagus, stomach, and duodenum, as well as the circulatory system. The liver expressed GFP at a lesser level. The red fluorescent protein (RFP) transgenic nude mouse was obtained by crossing nontransgenic nude mice with the transgenic C57/B6 mouse in which the β-actin promoter drives RFP (DsRed2) expression in essentially all tissues. In the RFP nude mouse, the organs all brightly expressed RFP, including the heart, lungs, spleen, pancreas, esophagus, stomach, liver, and duodenum; the male and female reproductive systems; the brain and spinal cord; and the circulatory system, including the heart and major arteries and veins. The skinned skeleton highly expressed RFP. The bone marrow and spleen cells were also RFP positive. The cyan fluorescent protein (CFP) nude mouse was developed by crossing nontransgenic nude mice with the transgenic CK/ECFP mouse in which the β-actin promoter drives expression of CFP in almost all tissues. In the CFP nude mice, the pancreas and reproductive organs displayed the strongest fluorescence signals of all internal organs, which vary in intensity. The GFP, RFP, and CFP nude mice, when transplanted with cancer cells of another color, are powerful models for color-coded imaging of the tumor microenvironment (TME) at the cellular level.

  17. Can color-coded parametric maps improve dynamic enhancement pattern analysis in MR mammography?

    PubMed

    Baltzer, P A; Dietzel, M; Vag, T; Beger, S; Freiberg, C; Herzog, A B; Gajda, M; Camara, O; Kaiser, W A

    2010-03-01

    Post-contrast enhancement characteristics (PEC) are a major criterion for differential diagnosis in MR mammography (MRM). Manual placement of regions of interest (ROIs) to obtain time/signal intensity curves (TSIC) is the standard approach to assess dynamic enhancement data. Computers can automatically calculate the TSIC in every lesion voxel and combine this data to form one color-coded parametric map (CCPM). Thus, the TSIC of the whole lesion can be assessed. This investigation was conducted to compare the diagnostic accuracy (DA) of CCPM with TSIC for the assessment of PEC. 329 consecutive patients with 469 histologically verified lesions were examined. MRM was performed according to a standard protocol (1.5 T, 0.1 mmol/kg bw Gd-DTPA). ROIs were drawn manually within any lesion to calculate the TSIC. CCPMs were created in all patients using dedicated software (CAD Sciences). Both methods were rated by two observers in consensus on an ordinal scale. Receiver operating characteristics (ROC) analysis was used to compare both methods. The area under the curve (AUC) was significantly (p=0.026) higher for CCPM (0.829) than TSIC (0.749). The sensitivity was 88.5% (CCPM) vs. 82.8% (TSIC), whereas equal specificity levels were found (CCPM: 63.7%, TSIC: 63.0%). The color-coded parametric maps (CCPMs) showed a significantly higher DA compared with TSIC; in particular, sensitivity was increased. Therefore, the CCPM method is a feasible approach to assessing dynamic data in MRM and condenses several imaging series into one parametric map. © Georg Thieme Verlag KG Stuttgart · New York.

  18. New Skeletal-Space-Filling Models

    ERIC Educational Resources Information Center

    Clarke, Frank H.

    1977-01-01

    Describes plastic, skeletal molecular models that are color-coded and can illustrate both the conformation and overall shape of small molecules. They can also be converted to space-filling counterparts by the additions of color-coded polystyrene spheres. (MLH)

  19. Computational efficiency improvements for image colorization

    NASA Astrophysics Data System (ADS)

    Yu, Chao; Sharma, Gaurav; Aly, Hussein

    2013-03-01

    We propose an efficient algorithm for colorization of greyscale images. As in prior work, colorization is posed as an optimization problem: a user specifies the color for a few scribbles drawn on the greyscale image and the color image is obtained by propagating color information from the scribbles to surrounding regions, while maximizing the local smoothness of colors. In this formulation, colorization is obtained by solving a large sparse linear system, which normally requires substantial computation and memory resources. Our algorithm improves the computational performance through three innovations over prior colorization implementations. First, the linear system is solved iteratively without explicitly constructing the sparse matrix, which significantly reduces the required memory. Second, we formulate each iteration in terms of integral images obtained by dynamic programming, reducing repetitive computation. Third, we use a coarse-to-fine framework, where a lower resolution subsampled image is first colorized and this low resolution color image is upsampled to initialize the colorization process for the fine level. The improvements we develop provide significant speedup and memory savings compared to the conventional approach of solving the linear system directly using off-the-shelf sparse solvers, and allow us to colorize images with typical sizes encountered in realistic applications on typical commodity computing platforms.
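
    The first innovation (solving the sparse system iteratively without ever forming the matrix) can be illustrated with a Jacobi-style update in which each pixel repeatedly becomes the luminance-weighted average of its neighbors, with scribble pixels held fixed. This is a toy sketch assuming a Gaussian luminance weight and wrap-around neighbors via np.roll; it is not the paper's exact solver, weights, or integral-image acceleration:

```python
import numpy as np

def colorize_channel(gray, seeds, mask, iters=3000, sigma=0.05):
    """Matrix-free scribble colorization of one chroma channel.
    gray:  HxW luminance in [0, 1]
    seeds: HxW chroma values (meaningful only where mask is True)
    mask:  HxW bool, True where a scribble fixes the chroma
    """
    c = seeds.astype(float).copy()
    for _ in range(iters):
        new = np.zeros_like(c)
        wsum = np.zeros_like(c)
        # Accumulate luminance-weighted neighbor contributions by shifting
        # whole arrays -- the sparse matrix is never constructed.
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ng = np.roll(gray, (dy, dx), axis=(0, 1))
            nc = np.roll(c, (dy, dx), axis=(0, 1))
            w = np.exp(-((gray - ng) ** 2) / (2 * sigma ** 2))
            new += w * nc
            wsum += w
        c = np.where(mask, seeds, new / wsum)   # scribbles stay fixed
    return c

# Two flat-luminance regions; one chroma scribble in each.
gray = np.zeros((8, 8)); gray[:, 4:] = 0.6
seeds = np.zeros((8, 8)); seeds[4, 1] = 1.0          # left scribble: chroma 1
mask = np.zeros((8, 8), bool); mask[4, 1] = True; mask[4, 6] = True  # right: 0
c = colorize_channel(gray, seeds, mask)
assert c[2, 2] > 0.9 and c[2, 6] < 0.1   # chroma propagated within regions
```

Because the luminance weight is tiny across the region boundary, the scribbled chroma fills its own region but barely leaks into the other, which is the smoothness behavior the optimization formalizes.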

  20. Development of a novel 2D color map for interactive segmentation of histological images.

    PubMed

    Chaudry, Qaiser; Sharma, Yachna; Raza, Syed H; Wang, May D

    2012-05-01

    We present a color segmentation approach based on a two-dimensional color map derived from the input image. Pathologists stain tissue biopsies with various colored dyes to see the expression of biomarkers. In these images, because of color variation due to inconsistencies in experimental procedures and lighting conditions, the segmentation used to analyze biological features is usually ad-hoc. Many algorithms like K-means use a single metric to segment the image into different color classes and rarely provide users with powerful color control. Our 2D color map interactive segmentation technique based on human color perception information and the color distribution of the input image, enables user control without noticeable delay. Our methodology works for different staining types and different types of cancer tissue images. Our proposed method's results show good accuracy with low response and computational time making it a feasible method for user interactive applications involving segmentation of histological images.

  1. Image Transform Based on the Distribution of Representative Colors for Color Deficient

    NASA Astrophysics Data System (ADS)

    Ohata, Fukashi; Kudo, Hiroaki; Matsumoto, Tetsuya; Takeuchi, Yoshinori; Ohnishi, Noboru

    This paper proposes a method for converting a digital image containing sets of colors that are difficult to distinguish into an image with high visibility. We set four criteria: the conversion must be processed automatically by a computer; it must retain continuity in color space; it must not reduce visibility for people with normal color vision; and it must not reduce the visibility of images that did not originally contain hard-to-distinguish color sets. We conducted a psychological experiment and found that the visibility of the converted images improved in 60% of 40 cases, and we confirmed that the main criterion, continuity in color space, was maintained.

  2. How to identify up to 30 colors without training: color concept retrieval by free color naming

    NASA Astrophysics Data System (ADS)

    Derefeldt, Gunilla A. M.; Swartling, Tiina

    1994-05-01

    Used as a redundant code, color is shown to be advantageous in visual search tasks. It enhances attention, detection, and recall of information. Neuropsychological and neurophysiological findings have shown color and spatial perception to be interrelated functions. Studies on eye movements show that colored symbols are easier to detect and that eye fixations are more correctly directed to color-coded symbols. Usually between 5 and 15 colors have been found useful in classification tasks, but this number can be increased to between 20 and 30 by careful selection of colors, and by a subject's practice with the identification task and familiarity with the particular colors. Recent neurophysiological findings concerning the language-concept connection in color suggest that color concept retrieval would be enhanced by free color naming or by the use of natural associations between color concepts and color words. To test this hypothesis, we had subjects give their own free associations to a set of 35 colors presented on a display. They were able to identify as many as 30 colors without training.

  3. Color filter array pattern identification using variance of color difference image

    NASA Astrophysics Data System (ADS)

    Shin, Hyun Jun; Jeon, Jong Ju; Eom, Il Kyu

    2017-07-01

    A color filter array is placed on the image sensor of a digital camera to acquire color images. Each pixel uses only one color, since the image sensor can measure only one color per pixel. Therefore, empty pixels are filled using an interpolation process called demosaicing. The original and the interpolated pixels have different statistical characteristics. If the image is modified by manipulation or forgery, the color filter array pattern is altered. This pattern change can be a clue for image forgery detection. However, most forgery detection algorithms have the disadvantage of assuming the color filter array pattern. We present an identification method of the color filter array pattern. Initially, the local mean is eliminated to remove the background effect. Subsequently, the color difference block is constructed to emphasize the difference between the original pixel and the interpolated pixel. The variance measure of the color difference image is proposed as a means of estimating the color filter array configuration. The experimental results show that the proposed method is effective in identifying the color filter array pattern. Compared with conventional methods, our method provides superior performance.
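
    The core idea (interpolated pixels are locally smoother than originally sampled ones, so per-phase variance of a difference signal reveals the sampling lattice) can be demonstrated on a toy green channel. The bilinear demosaic and the high-pass residual below are illustrative stand-ins for the paper's color-difference variance measure:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = rng.random((64, 64))

# Simulate a green channel sampled on a Bayer lattice (original samples
# at 2x2 phases (0,0) and (1,1)) with bilinear interpolation elsewhere.
g = truth.copy()
mask = np.zeros((64, 64), bool)
mask[0::2, 0::2] = True
mask[1::2, 1::2] = True
interp = 0.25 * (np.roll(g, 1, 0) + np.roll(g, -1, 0)
                 + np.roll(g, 1, 1) + np.roll(g, -1, 1))
g = np.where(mask, g, interp)

# High-pass residual: deviation from the 4-neighbor mean.
resid = g - 0.25 * (np.roll(g, 1, 0) + np.roll(g, -1, 0)
                    + np.roll(g, 1, 1) + np.roll(g, -1, 1))

# Residual variance per 2x2 phase: interpolated phases are the smoothest,
# which identifies where the original samples were NOT taken.
var = {(i, j): resid[i::2, j::2].var() for i in (0, 1) for j in (0, 1)}
interp_phases = sorted(var, key=var.get)[:2]
assert set(interp_phases) == {(0, 1), (1, 0)}
```

A real detector would repeat this per color channel and over local blocks, since forgery may alter the pattern only in tampered regions.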

  4. Age discrimination among eruptives of Menengai Caldera, Kenya, using vegetation parameters from satellite imagery

    NASA Technical Reports Server (NTRS)

    Blodget, Herbert W.; Heirtzler, James R.

    1993-01-01

    Results are presented of an investigation to determine the degree to which digitally processed Landsat TM imagery can be used to discriminate among vegetated lava flows of different ages in the Menengai Caldera, Kenya. A selective series of five images, consisting of a color-coded Landsat 5 classification and four color composites, are compared with geologic maps. The most recent of more than 70 postcaldera flows within the caldera are trachytes, which are variably covered by shrubs and subsidiary grasses. Soil development evolves as a function of time, and as such supports a changing plant community. Progressively older flows exhibit the increasing dominance of grasses over bushes. The Landsat images correlated well with geologic maps, but the two mapped age classes could be further subdivided on the basis of different vegetation communities. It is concluded that field maps can be modified, and in some cases corrected by use of such imagery, and that digitally enhanced Landsat imagery can be a useful aid to field mapping in similar terrains.

  5. Improved Polyurethane Storage Tank Performance

    DTIC Science & Technology

    2014-06-30

    condition occurred if water overflowed from the tank vent prior to reaching 45 gallons. A spline curve was drawn around the perimeter of each image so...estimated footprint and height envelope was added for spatial reference. A spline color code key was developed, so that the progression of the tanks...Table 4.5.3). A standard flat plate platen was used for the double butt seams and some closing seams by one fabricator. The other utilized a “Slinky

  6. [Precision of digital impressions with TRIOS under simulated intraoral impression taking conditions].

    PubMed

    Yang, Xin; Sun, Yi-fei; Tian, Lei; Si, Wen-jie; Feng, Hai-lan; Liu, Yi-hong

    2015-02-18

    To evaluate the precision of digital impressions taken with TRIOS under simulated clinical impression-taking conditions and to compare it with the precision of extraoral digitalizations. Six epoxy resin dentitions (#14-#17), each embedding an extracted and prepared #16 tooth, were made. For each artificial dentition, (1) a silicone rubber impression was taken with an individual tray, poured with type IV plaster, and digitalized 10 times with a 3Shape D700 model scanner; (2) with the dentition fastened to a dental simulator, 10 digital impressions were taken with a 3Shape TRIOS intraoral scanner. To assess the precision, a best-fit algorithm and 3D comparison were conducted pairwise between repeated scan models in Geomagic Qualify 12.0, exported as averaged errors (AE) and color-coded diagrams. Non-parametric analysis was performed to compare the precisions of the digital impressions and the model images. The color-coded diagrams were used to show the deviation distributions. The mean AE for the digital impressions was 7.058281 μm, which was greater than the 4.092363 μm for the model images (P<0.05). However, the means and medians of AE for the digital impressions were no more than 10 μm, which means that the consistency between the digital impressions was good. The deviation distribution was uniform in the model images, while nonuniform in the digital impressions, with greater deviations lying mainly around the shoulders and interproximal surfaces. Digital impressions with TRIOS are of good precision and up to the clinical standard. Shoulders and interproximal surfaces are more difficult to scan.

  7. Qualitative evaluations and comparisons of six night-vision colorization methods

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Reese, Kristopher; Blasch, Erik; McManamon, Paul

    2013-05-01

    Current multispectral night vision (NV) colorization techniques can manipulate images to produce colorized images that closely resemble natural scenes. The colorized NV images can enhance human perception by improving observer object classification and reaction times, especially in low light conditions. This paper focuses on the qualitative (subjective) evaluations and comparisons of six NV colorization methods. The multispectral images include visible (Red-Green-Blue), near infrared (NIR), and long wave infrared (LWIR) images. The six colorization methods are channel-based color fusion (CBCF), statistic matching (SM), histogram matching (HM), joint-histogram matching (JHM), statistic matching then joint-histogram matching (SM-JHM), and the lookup table (LUT). Four categories of quality measurements are used for the qualitative evaluations: contrast, detail, colorfulness, and overall quality. Each measurement is rated on a 1-to-3 scale representing low, average, and high quality, respectively. Specifically, high contrast (a rated score of 3) means an adequate level of brightness and contrast. High detail represents high clarity of detailed contents while maintaining low artifacts. High colorfulness preserves more natural colors (i.e., closely resembles the daylight image). Overall quality is determined from the NV image compared to the reference image. Nine sets of multispectral NV images were used in our experiments. For each set, the six colorized NV images (produced from NIR and LWIR images) were concurrently presented to users along with the reference color (RGB) image (taken in daytime). A total of 67 subjects passed a screening test ("Ishihara Color Blindness Test") and were asked to evaluate the nine sets of colorized images. The experimental results showed the quality order of the colorization methods, from best to worst: CBCF, SM, SM-JHM, LUT, JHM, HM. It is anticipated that this work will provide a benchmark for NV colorization and for quantitative evaluation using an objective metric such as the objective evaluation index (OEI).

  8. White-Light Optical Information Processing and Holography.

    DTIC Science & Technology

    1983-05-03

    Processing, White-Light Holography, Image Subtraction, Image Deblurring, Coherence Requirement, Apparent Transfer Function, Source Encoding, Signal...in this period, also demonstrated several color image processing capabilities. Among those are broadband color image deblurring and color image...Broadband Image Deblurring...Color Image Subtraction...Rainbow Holographic Aberrations...

  9. Distance preservation in color image transforms

    NASA Astrophysics Data System (ADS)

    Santini, Simone

    1999-12-01

    Most current image processing systems work on color images, and color is a precious perceptual clue for determining image similarity. Working with color images, however, is not the same thing as working with images taking values in a 3D Euclidean space. Not only are color spaces bounded, but the characteristics of the observer endow the space with a 'perceptual' metric that in general does not correspond to the metric naturally inherited from R3. This paper studies the problem of filtering color images abstractly. It begins by determining the properties of the color sum and color product operations such that the desirable properties of orthonormal bases will be preserved. The paper then defines a general scheme, based on the action of the additive group on the color space, by which operations that satisfy the required properties can be defined.

  10. Multi-sparse dictionary colorization algorithm based on the feature classification and detail enhancement

    NASA Astrophysics Data System (ADS)

    Yan, Dan; Bai, Lianfa; Zhang, Yi; Han, Jing

    2018-02-01

    To address the problems of missing details and limited performance in colorization based on sparse representation, we propose a conceptual model framework for colorizing gray-scale images, and, based on this framework, a multi-sparse dictionary colorization algorithm using feature classification and detail enhancement (CEMDC). The algorithm achieves a natural colorized effect for a gray-scale image that is consistent with human vision. First, the algorithm establishes a multi-sparse dictionary classification colorization model. Then, to improve the accuracy of the classification, a corresponding local constraint algorithm is proposed. Finally, we propose a detail enhancement based on the Laplacian pyramid, which is effective in solving the problem of missing details and improves the speed of image colorization. In addition, the algorithm not only realizes the colorization of visual gray-scale images, but can also be applied to other areas, such as color transfer between color images, colorizing gray fusion images, and infrared images.
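
    The Laplacian-pyramid detail enhancement mentioned in the final step can be sketched with a single pyramid level: split the image into a low-pass layer and a detail (Laplacian) layer, then recombine with amplified detail. The box blur and gain value below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def enhance(img, gain=1.5):
    """One-level Laplacian-pyramid detail enhancement (sketch).
    A separable 3-tap box blur stands in for the usual Gaussian."""
    blur = img.copy()
    for ax in (0, 1):
        blur = (np.roll(blur, 1, ax) + blur + np.roll(blur, -1, ax)) / 3.0
    detail = img - blur           # Laplacian layer: high-frequency detail
    return blur + gain * detail   # recombine with amplified detail

rng = np.random.default_rng(1)
img = rng.random((32, 32))
out = enhance(img)
assert out.var() > img.var()      # amplified detail raises variance
```

A full pyramid repeats the split at successively downsampled scales, so detail can be boosted selectively per frequency band.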

  11. Digital image processing of Seabeam bathymetric data for structural studies of seamounts near the East Pacific Rise

    NASA Technical Reports Server (NTRS)

    Edwards, M. H.; Arvidson, R. E.; Guinness, E. A.

    1984-01-01

    The problem of displaying information on seafloor morphology is addressed by utilizing digital image processing techniques to generate images for Seabeam data covering three young seamounts on the eastern flank of the East Pacific Rise. Errors in locations between crossing tracks are corrected by interactively identifying features and translating tracks relative to a control track. Spatial interpolation techniques using moving averages are used to interpolate between gridded depth values to produce images in shaded relief and color-coded forms. The digitally processed images clarify the structural control on seamount growth and clearly show the lateral extent of volcanic materials, including the distribution and fault control of subsidiary volcanic constructional features. The image presentations also clearly show artifacts related to both residual navigational errors and to depth or location differences that depend on ship heading relative to slope orientation in regions with steep slopes.

  12. MUNSELL COLOR ANALYSIS OF LANDSAT COLOR-RATIO-COMPOSITE IMAGES OF LIMONITIC AREAS IN SOUTHWEST NEW MEXICO.

    USGS Publications Warehouse

    Kruse, Fred A.

    1984-01-01

    Green areas on Landsat 4/5 - 4/6 - 6/7 (red - blue - green) color-ratio-composite (CRC) images represent limonite on the ground. Color variation on such images was analyzed to determine the causes of the color differences within and between the green areas. Digital transformation of the CRC data into the modified cylindrical Munsell color coordinates - hue, value, and saturation - was used to correlate image color characteristics with properties of surficial materials. The amount of limonite visible to the sensor is the primary cause of color differences in green areas on the CRCs. Vegetation density is a secondary cause of color variation of green areas on Landsat CRC images. Digital color analysis of Landsat CRC images can be used to map unknown areas. Color variations of green pixels allow discrimination among limonitic bedrock, nonlimonitic bedrock, nonlimonitic alluvium, and limonitic alluvium.
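
    The paper's modified cylindrical Munsell transform is not reproduced here, but any cylindrical hue/value/saturation decomposition illustrates the approach. The sketch below uses Python's HSV transform as a rough stand-in, with made-up CRC pixel values, to show how a strongly limonitic green pixel can separate from a weaker vegetated green by saturation and value rather than hue:

```python
import colorsys

# Made-up CRC pixel colors (RGB in [0, 1]); both are "green" but differ
# in strength -- the kind of within-green variation the paper analyzes.
limonitic = (0.20, 0.80, 0.25)   # strong, saturated green
vegetated = (0.35, 0.55, 0.40)   # weaker, less saturated green

for name, rgb in (("limonitic", limonitic), ("vegetated", vegetated)):
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    # Hue stays in the green range; saturation and value discriminate.
    print(name, round(h * 360), round(s, 2), round(v, 2))
```

Thresholding in such cylindrical coordinates, rather than raw RGB, is what lets the green pixels be subdivided into limonitic/nonlimonitic classes.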

  13. Effect of the atmosphere on the color coordinates of sunlit surfaces

    NASA Astrophysics Data System (ADS)

    Willers, Cornelius J.; Viljoen, Johan W.

    2016-02-01

    Aerosol attenuation in the atmosphere has a relatively weak spectral variation compared to molecular absorption. However, the solar spectral irradiance differs considerably for the sun at high zenith angles versus the sun at low zenith angles. The perceived color of a sunlit object depends on the object's spectral reflectivity as well as the irradiance spectrum. The color coordinates of the sunlit object, and hence also the color balance in a scene, shift with changes in the solar zenith angle. The work reported here does not claim accurate color measurement. With proper calibration, mobile phones may provide reasonably accurate color measurement, but the mobile phones used for taking these pictures and videos are not scientific instruments and were not calibrated. The focus here is on the relative shift of the observed colors, rather than absolute color. The work in this paper entails the theoretical analysis of the color coordinates of surfaces and how they change for different colored surfaces. Three separate investigations then follow: (1) analysis of a number of detailed atmospheric radiative transfer code (Modtran) runs to show from theory how color coordinates should change; (2) analysis of a still image showing how the colors of two sample surfaces vary between sunlit and shaded areas; (3) time-lapse video recordings showing how the color coordinates of a few surfaces change as a function of time of day. Both the theoretical and the experimental work show distinct shifts in color as a function of atmospheric conditions. The Modtran simulations demonstrate the effect from clear atmospheric conditions (no aerosol) to low-visibility conditions (5 km visibility). Even under moderate atmospheric conditions the effect was surprisingly large. The experimental work indicated significant shifts during the diurnal cycle.
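
    The color-shift mechanism described above can be reduced to a toy tristimulus calculation: perceived color coordinates come from integrating irradiance × reflectance × matching function, so reddening the irradiance spectrum shifts the normalized coordinates. All spectra and matching functions below are coarse made-up samples, not CIE data or Modtran output:

```python
import numpy as np

wl      = np.array([450., 550., 650.])   # nm: blue/green/red samples
noon    = np.array([1.00, 1.00, 0.95])   # high-sun irradiance (made up)
low_sun = np.array([0.40, 0.75, 1.00])   # reddened low-sun irradiance
refl    = np.array([0.20, 0.60, 0.30])   # a greenish surface reflectance

# Crude "matching functions": each channel responds to one band only.
M = np.eye(3)

def chromaticity(irradiance):
    """Normalized color coordinates of the sunlit surface (toy XYZ)."""
    XYZ = M @ (irradiance * refl)
    return XYZ / XYZ.sum()

x_noon, x_low = chromaticity(noon), chromaticity(low_sun)
# As the sun drops, the red share rises and the blue share falls:
assert x_low[2] > x_noon[2] and x_low[0] < x_noon[0]
```

The real analysis replaces the three samples with full spectra, CIE matching functions, and Modtran-computed irradiance, but the direction of the shift follows from the same integral.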

  14. Multiplex coherent anti-Stokes Raman scattering microspectroscopy of brain tissue with higher ranking data classification for biomedical imaging

    NASA Astrophysics Data System (ADS)

    Pohling, Christoph; Bocklitz, Thomas; Duarte, Alex S.; Emmanuello, Cinzia; Ishikawa, Mariana S.; Dietzeck, Benjamin; Buckup, Tiago; Uckermann, Ortrud; Schackert, Gabriele; Kirsch, Matthias; Schmitt, Michael; Popp, Jürgen; Motzkus, Marcus

    2017-06-01

    Multiplex coherent anti-Stokes Raman scattering (MCARS) microscopy was carried out to map a solid tumor in mouse brain tissue. The border between normal and tumor tissue was visualized using support vector machines (SVM) as a higher ranking type of data classification. Training data were collected separately in both tissue types, and the image contrast is based on class affiliation of the single spectra. Color coding in the image generated by SVM is then related to pathological information instead of single spectral intensities or spectral differences within the data set. The results show good agreement with the H&E stained reference and spontaneous Raman microscopy, proving the validity of the MCARS approach in combination with SVM.

  15. Simple Criteria to Determine the Set of Key Parameters of the DRPE Method by a Brute-force Attack

    NASA Astrophysics Data System (ADS)

    Nalegaev, S. S.; Petrov, N. V.

    Known techniques for breaking Double Random Phase Encoding (DRPE), which bypass the resource-intensive brute-force method, require at least two conditions: the attacker knows the encryption algorithm, and there is access to pairs of source and encoded images. Our numerical results show that for accurate recovery by a numerical brute-force attack, one needs only some a priori information about the source images, which can be quite general. From the results of our numerical experiments with optical data encryption by DRPE with digital holography, we have proposed four simple criteria for guaranteed and accurate data recovery. These criteria can be applied if grayscale, binary (including QR codes), or color images are used as the source.

  16. Colorization-Based RGB-White Color Interpolation using Color Filter Array with Randomly Sampled Pattern

    PubMed Central

    Oh, Paul; Lee, Sukho; Kang, Moon Gi

    2017-01-01

    Recently, several RGB-White (RGBW) color filter arrays (CFAs) have been proposed, which have extra white (W) pixels in the filter array that are highly sensitive. Due to the high sensitivity, the W pixels have better SNR (Signal to Noise Ratio) characteristics than the other color pixels in the filter array, especially in low light conditions. However, most of the RGBW CFAs are designed so that the acquired RGBW pattern image can be converted into the conventional Bayer pattern image, which is then again converted into the final color image by using conventional demosaicing methods, i.e., color interpolation techniques. In this paper, we propose a new RGBW color filter array based on a totally different color interpolation technique, the colorization algorithm. The colorization algorithm was initially proposed for colorizing a gray image into a color image using a small number of color seeds. Here, we adopt this algorithm as a color interpolation technique, so that the RGBW color filter array can be designed with a very large number of W pixels to make the most of the highly sensitive characteristics of the W channel. The resulting RGBW color filter array has a pattern with a large proportion of W pixels, while the small number of RGB pixels is randomly distributed over the array. The colorization algorithm makes it possible to reconstruct the colors from such a small number of RGB values. Due to the large proportion of W pixels, the reconstructed color image has a high SNR value, higher than those of conventional CFAs especially in low light conditions. Experimental results show that much important information that is not perceived in color images reconstructed with conventional CFAs is perceived in the images reconstructed with the proposed method. PMID:28657602

  17. Colorization-Based RGB-White Color Interpolation using Color Filter Array with Randomly Sampled Pattern.

    PubMed

    Oh, Paul; Lee, Sukho; Kang, Moon Gi

    2017-06-28

    Recently, several RGB-White (RGBW) color filter arrays (CFAs) have been proposed that have extra, highly sensitive white (W) pixels in the filter array. Due to this high sensitivity, the W pixels have better SNR (signal-to-noise ratio) characteristics than the other color pixels in the array, especially in low-light conditions. However, most RGBW CFAs are designed so that the acquired RGBW pattern image can be converted into a conventional Bayer pattern image, which is then converted into the final color image using conventional demosaicing methods, i.e., color interpolation techniques. In this paper, we propose a new RGBW color filter array based on a totally different color interpolation technique, the colorization algorithm. The colorization algorithm was initially proposed for colorizing a gray image into a color image using a small number of color seeds. Here, we adopt this algorithm as a color interpolation technique, so that the RGBW color filter array can be designed with a very large number of W pixels to make the most of the highly sensitive characteristics of the W channel. The resulting RGBW color filter array has a pattern with a large proportion of W pixels, while the few RGB pixels are randomly distributed over the array. The colorization algorithm makes it possible to reconstruct the colors from such a small number of RGB values. Due to the large proportion of W pixels, the reconstructed color image has a high SNR value, higher in particular than those of conventional CFAs in low-light conditions. Experimental results show that much important information that is not perceptible in color images reconstructed with conventional CFAs becomes perceptible in images reconstructed with the proposed method.

  18. Stand-off detection of explosive particles by imaging Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Nordberg, Markus; Åkeson, Madeleine; Östmark, Henric; Carlsson, Torgny E.

    2011-06-01

    A multispectral imaging technique has been developed to detect and identify explosive particles, e.g. from a fingerprint, at stand-off distances using Raman spectroscopy. When handling IEDs as well as other explosive devices, residues can easily be transferred via fingerprints onto other surfaces, e.g. car handles, gear sticks and suitcases. By imaging the surface with the multispectral Raman technique, the explosive particles can be identified and displayed using color coding. The technique has been demonstrated by detecting fingerprints containing significant amounts of 2,4-dinitrotoluene (DNT), 2,4,6-trinitrotoluene (TNT) and ammonium nitrate at a distance of 12 m in less than 90 seconds (22 images × 4 seconds). For each measurement, a sequence of images, one image per wavenumber, is recorded. The spectral data from each pixel are compared with reference spectra of the substances to be detected. The pixels are marked with different colors corresponding to the substances detected in the fingerprint. The system has now been further developed to be less complex and thereby less sensitive to environmental factors such as temperature fluctuations. The optical resolution has been improved to less than 70 μm, measured at a 546 nm wavelength. The total detection time ranges from less than one minute to around five minutes, depending on the size of the particles and the required confidence of identification. The results indicate a great potential for multispectral imaging Raman spectroscopy as a stand-off technique for detection of single explosive particles.
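    The per-pixel identification step, comparing each pixel's spectrum against reference spectra and labeling the matches for color coding, can be sketched as follows. The synthetic spectra, cube sizes and correlation threshold are illustrative assumptions, not the system's actual values:

```python
import numpy as np

def classify_pixels(cube, refs, threshold=0.9):
    """Compare each pixel's Raman spectrum with reference spectra using
    normalized correlation; return the index of the best-matching substance,
    or -1 where no reference exceeds the threshold."""
    h, w, n = cube.shape
    spectra = cube.reshape(-1, n)
    spectra = spectra - spectra.mean(axis=1, keepdims=True)
    spectra /= np.linalg.norm(spectra, axis=1, keepdims=True).clip(1e-12)
    r = refs - refs.mean(axis=1, keepdims=True)
    r /= np.linalg.norm(r, axis=1, keepdims=True).clip(1e-12)
    corr = spectra @ r.T                       # (pixels, substances)
    best = corr.argmax(axis=1)
    best[corr.max(axis=1) < threshold] = -1    # unidentified pixels
    return best.reshape(h, w)

# hypothetical reference spectra for two substances over 22 wavenumber images
rng = np.random.default_rng(2)
refs = rng.random((2, 22))
cube = np.empty((4, 4, 22))
cube[...] = refs[0] * 5.0          # whole scene matches substance 0
cube[0, 0] = refs[1] * 3.0         # one particle matches substance 1
labels = classify_pixels(cube, refs)
print(labels[0, 0], labels[1, 1])
```

    Each label index would then be mapped to a display color to produce the color-coded overlay described above.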

  19. Enriching text with images and colored light

    NASA Astrophysics Data System (ADS)

    Sekulovski, Dragan; Geleijnse, Gijs; Kater, Bram; Korst, Jan; Pauws, Steffen; Clout, Ramon

    2008-01-01

    We present an unsupervised method to enrich textual applications with relevant images and colors. The images are collected by querying large image repositories, and the colors are subsequently computed using image processing. A prototype system based on this method is presented, in which the method is applied to song lyrics. In combination with a lyrics synchronization algorithm, the system produces a rich multimedia experience. To identify terms within the text that may be associated with images and colors, we select noun phrases using a part-of-speech tagger. Large image repositories are queried with these terms. Per-term representative colors are extracted from the collected images, using either a histogram-based or a mean-shift-based algorithm. The representative color extraction exploits the non-uniform distribution of the colors found in the large repositories. The images ranked best by the search engine are displayed on a screen, while the extracted representative colors are rendered on controllable lighting devices in the living room. We evaluate our method by comparing the computed colors to standard color representations of a set of English color terms. A second evaluation focuses on the distance in color between a queried term in English and its translation in a foreign language. Based on results from three sets of terms, a measure of a term's suitability for color extraction based on KL divergence is proposed. Finally, we compare the performance of the algorithm using either the automatically indexed repository of Google Images or the manually annotated Flickr.com. Based on the results of these experiments, we conclude that the presented method can compute the relevant color for a term using a large image repository and image processing.
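    The histogram-based variant of representative color extraction can be sketched as follows; the bin count, the synthetic pixel pool and the example term "grass" are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def representative_color(pixels, bins=8):
    """Histogram-based representative color: quantize RGB into a
    bins x bins x bins grid, pick the most populated cell, and return the
    mean color of the pixels falling into it."""
    idx = np.minimum((pixels * bins).astype(int), bins - 1)   # per-channel bin
    flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    counts = np.bincount(flat, minlength=bins ** 3)
    top = counts.argmax()
    return pixels[flat == top].mean(axis=0)

# pixels pooled from images returned for the term "grass": mostly green,
# with some unrelated background colors mixed in
rng = np.random.default_rng(3)
green = np.clip(rng.normal([0.2, 0.7, 0.2], 0.05, (900, 3)), 0, 1)
noise = rng.random((100, 3))
color = representative_color(np.vstack([green, noise]))
print(color.round(2))
```

    Using the mode of the histogram rather than the overall mean is what lets the non-uniform color distribution of the repository dominate over background clutter.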

  20. Color identification testing device

    NASA Technical Reports Server (NTRS)

    Brawner, E. L.; Martin, R.; Pate, W.

    1970-01-01

    The testing device, which determines the ability of a technician to identify color-coded electric wires, is superior to standard color blindness tests. It tests the speed of wire selection, detects partial color blindness, allows rapid testing, and may be administered by a color-blind person.

  1. Major Breeding Plumage Color Differences of Male Ruffs (Philomachus pugnax) Are Not Associated With Coding Sequence Variation in the MC1R Gene

    PubMed Central

    Küpper, Clemens; Burke, Terry; Lank, David B.

    2015-01-01

    Sequence variation in the melanocortin-1 receptor (MC1R) gene explains color morph variation in several species of birds and mammals. Ruffs (Philomachus pugnax) exhibit major dark/light color differences in melanin-based male breeding plumage, which are closely associated with alternative reproductive behaviors. A previous study identified a microsatellite marker (Ppu020) near the MC1R locus associated with the presence/absence of ornamental plumage. We investigated whether coding sequence variation in the MC1R gene explains major dark/light plumage color variation and/or the presence/absence of ornamental plumage in ruffs. Among 821 bp of the MC1R coding region from 44 male ruffs we found 3 single nucleotide polymorphisms, comprising 1 nonsynonymous and 2 synonymous substitutions. None were associated with major dark/light color differences or the presence/absence of ornamental plumage. At all amino acid sites known to be functionally important in other avian species with dark/light plumage color variation, ruffs were either monomorphic or the shared polymorphism did not coincide with color morph. Neither ornamental plumage color differences nor the presence/absence of ornamental plumage in ruffs is likely to be caused entirely by amino acid variation within the coding regions of the MC1R locus. Regulatory elements and structural variation at other loci may be involved in melanin expression and contribute to the extreme plumage polymorphism observed in this species. PMID:25534935

  2. Solar active region display system

    NASA Astrophysics Data System (ADS)

    Golightly, M.; Raben, V.; Weyland, M.

    2003-04-01

    The Solar Active Region Display System (SARDS) is a client-server application that automatically collects a wide range of solar data and displays it in a format easy for users to assimilate and interpret. Users can rapidly identify active regions of interest or concern from color-coded indicators that visually summarize each region's size, magnetic configuration, recent growth history, and recent flare and CME production. The active region information can be overlaid onto solar maps, multiple solar images, and solar difference images in orthographic, Mercator or cylindrical equidistant projections. Near real-time graphs display the GOES soft and hard x-ray flux, flare events, and daily F10.7 value as a function of time; color-coded indicators show current trends in soft x-ray flux, flare temperature, daily F10.7 flux, and x-ray flare occurrence. Through a separate window up to 4 real-time or static graphs can simultaneously display values of KP, AP, daily F10.7 flux, GOES soft and hard x-ray flux, GOES >10 and >100 MeV proton flux, and Thule neutron monitor count rate. Climatologic displays use color-valued cells to show F10.7 and AP values as a function of Carrington/Bartel's rotation sequences - this format allows users to detect recurrent patterns in solar and geomagnetic activity as well as variations in activity levels over multiple solar cycles. Users can customize many of the display and graph features; all displays can be printed or copied to the system's clipboard for "pasting" into other applications. The system obtains and stores space weather data and images from sources such as the NOAA Space Environment Center, NOAA National Geophysical Data Center, the joint ESA/NASA SOHO spacecraft, and the Kitt Peak National Solar Observatory, and can be extended to include other data series and image sources. 
Data and images retrieved from the system's database are converted to XML and transported from a central server using HTTP and SOAP protocols, allowing operation through network firewalls; data is compressed to enhance performance over limited bandwidth connections. All applications and services are written in the JAVA program language for platform independence. Several versions of SARDS have been in operational use by the NASA Space Radiation Analysis Group, NOAA Space Weather Operations, and U.S. Air Force Weather Agency since 1999.

  3. The photo-colorimetric space as a medium for the representation of spatial data

    NASA Technical Reports Server (NTRS)

    Kraiss, K. Friedrich; Widdel, Heino

    1989-01-01

    Spatial displays and instruments are usually used in the context of vehicle guidance, but applicable spatial formats are hard to find in information retrieval and interaction systems. Human interaction with spatial data structures and the applicability of the CIE color space to improving dialogue transparency are discussed. A proposal is made to use the color space to code spatially represented data. The semantic distances between the categories of dialogue structures or, more generally, of database structures are determined empirically. The distances are then transformed and mapped into the color space. The concept is demonstrated for a car diagnosis system, where the category cooling system could, e.g., be coded in blue and the category ignition system in red. A correspondence between color and semantic distances is thereby achieved. Subcategories can be coded as luminance differences within the color space.

  4. Color transfer between high-dynamic-range images

    NASA Astrophysics Data System (ADS)

    Hristova, Hristina; Cozot, Rémi; Le Meur, Olivier; Bouatouch, Kadi

    2015-09-01

    Color transfer methods alter the look of a source image with regard to a reference image. So far, proposed color transfer methods have been limited to low-dynamic-range (LDR) images. Unlike LDR images, which are display-dependent, high-dynamic-range (HDR) images contain real physical values of world luminance and are able to capture high luminance variations and the finest details of real-world scenes. There is therefore a strong discrepancy between the two types of images. In this paper, we bridge the gap between the color transfer domain and HDR imagery by introducing HDR extensions to LDR color transfer methods. We tackle the main issues of applying a color transfer between two HDR images. First, to address the nature of light and color distributions in the context of HDR imagery, we carry out modifications of traditional color spaces. Furthermore, we ensure high precision in the quantization of the dynamic range for histogram computations. As image clustering (based on light and colors) proved to be an important aspect of color transfer, we analyze it and adapt it to the HDR domain. Our framework has been applied to several state-of-the-art color transfer methods. Qualitative experiments have shown that results obtained with the proposed adaptation approach exhibit fewer artifacts and are visually more pleasing than results obtained when straightforwardly applying existing color transfer methods to HDR images.
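    As a point of reference, the classic LDR-style statistics transfer that such methods extend can be written in a few lines; applying it in a log domain below is a simple stand-in for the paper's HDR-specific color space modifications, not its actual method:

```python
import numpy as np

def stat_transfer(src, ref):
    """Per-channel statistics transfer: shift and scale each source channel
    so its mean and standard deviation match the reference. Working on
    log-compressed values is a crude accommodation of HDR ranges."""
    s, r = np.log1p(src), np.log1p(ref)
    out = (s - s.mean((0, 1))) / s.std((0, 1)).clip(1e-6) * r.std((0, 1)) + r.mean((0, 1))
    return np.expm1(out).clip(0, None)

rng = np.random.default_rng(4)
src = rng.random((32, 32, 3)) * 10.0      # HDR-ish radiance values
ref = rng.random((32, 32, 3)) * 1000.0    # much brighter reference scene
out = stat_transfer(src, ref)
print(out.shape)
```

    The direct scale mismatch between the two images is exactly the kind of discrepancy that makes naive LDR transfer fail on HDR data and motivates the paper's adaptations.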

  5. Landing Hazard Avoidance Display

    NASA Technical Reports Server (NTRS)

    Abernathy, Michael Franklin (Inventor); Hirsh, Robert L. (Inventor)

    2016-01-01

    Landing hazard avoidance displays can provide rapidly understood visual indications of where it is and is not safe to land a vehicle. Color-coded maps can indicate zones, in two dimensions relative to the vehicle's position, where it is safe to land. The map can consist simply of green (safe) and red (unsafe) areas with an indication of scale, or it can be a color coding of another map, such as a surface map. The color coding can be determined in real time based on topological measurements and safety criteria, thereby adapting to dynamic, unknown, or partially known environments.
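    A minimal sketch of such real-time color coding, using local slope as the only safety criterion (an illustrative simplification of the patent's topological measurements and criteria):

```python
import numpy as np

def hazard_map(elev, cell_m=1.0, max_slope_deg=10.0):
    """Color-code terrain as safe (green) or unsafe (red) from a toy safety
    criterion: local slope below a threshold. A real system would also
    consider roughness, rocks, and other hazards."""
    gy, gx = np.gradient(elev, cell_m)
    slope = np.degrees(np.arctan(np.hypot(gx, gy)))
    rgb = np.zeros(elev.shape + (3,), dtype=np.uint8)
    safe = slope <= max_slope_deg
    rgb[safe] = (0, 255, 0)
    rgb[~safe] = (255, 0, 0)
    return rgb

elev = np.zeros((8, 8))
elev[:, 4:] += np.arange(4) * 2.0   # steep ramp on the right half
m = hazard_map(elev)
print(m[0, 0], m[0, 6])             # flat cell is green, ramp cell is red
```

    Recomputing the map each time new topography arrives is what lets the display adapt to partially known environments.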

  6. Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy

    NASA Technical Reports Server (NTRS)

    Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers (CTISs) having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3® digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.

  7. White Rock in False Color

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site]

    The THEMIS VIS camera is capable of capturing color images of the Martian surface using five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from using multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation.
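    The three-filter compositing process described above can be sketched as follows; the percentile stretch stands in for whatever contrast enhancement the actual THEMIS processing uses, and the band data here are synthetic:

```python
import numpy as np

def false_color(band_r, band_g, band_b):
    """Build a THEMIS-style false color composite: contrast-stretch each
    grayscale filter image to its own 2nd-98th percentile range, then stack
    the results as red, green and blue intensities."""
    def stretch(b):
        lo, hi = np.percentile(b, (2, 98))
        return np.clip((b - lo) / max(hi - lo, 1e-9), 0, 1)
    return np.dstack([stretch(band_r), stretch(band_g), stretch(band_b)])

rng = np.random.default_rng(5)
# three filter images with very different dynamic ranges
bands = rng.random((3, 64, 64)) * [[[0.1]], [[0.5]], [[1.0]]]
img = false_color(*bands)
print(img.shape)
```

    Because each band is stretched independently, relative color differences are exaggerated, which is exactly why the composite is "false color" rather than true color.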

    This false color image shows the wind eroded deposit in Pollack Crater called 'White Rock'. This image was collected during the Southern Fall Season.

    Image information: VIS instrument. Latitude -8, Longitude 25.2 East (334.8 West). 0 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  8. An interactive tool for gamut masking

    NASA Astrophysics Data System (ADS)

    Song, Ying; Lau, Cheryl; Süsstrunk, Sabine

    2014-02-01

    Artists often want to change the colors of an image to achieve a particular aesthetic goal. For example, they might limit colors to a warm or cool color scheme to create an image with a certain mood or feeling. Gamut masking is a technique that artists use to limit the set of colors they can paint with. They draw a mask over a color wheel and only use the hues within the mask. However, creating the color palette from the mask and applying the colors to the image requires skill. We propose an interactive tool for gamut masking that allows amateur artists to create an image with a desired mood or feeling. Our system extracts a 3D color gamut from the 2D user-drawn mask and maps the image to this gamut. The user can draw a different gamut mask or locally refine the image colors. Our voxel grid gamut representation allows us to represent gamuts of any shape, and our cluster-based image representation allows the user to change colors locally.

  9. True color scanning laser ophthalmoscopy and optical coherence tomography handheld probe

    PubMed Central

    LaRocca, Francesco; Nankivil, Derek; Farsiu, Sina; Izatt, Joseph A.

    2014-01-01

    Scanning laser ophthalmoscopes (SLOs) are able to achieve superior contrast and axial sectioning capability compared to fundus photography. However, SLOs typically use monochromatic illumination and are thus unable to extract color information of the retina. Previous color SLO imaging techniques utilized multiple lasers or narrow band sources for illumination, which allowed for multiple color but not “true color” imaging as done in fundus photography. We describe the first “true color” SLO, handheld color SLO, and combined color SLO integrated with a spectral domain optical coherence tomography (OCT) system. To achieve accurate color imaging, the SLO was calibrated with a color test target and utilized an achromatizing lens when imaging the retina to correct for the eye’s longitudinal chromatic aberration. Color SLO and OCT images from volunteers were then acquired simultaneously with a combined power under the ANSI limit. Images from this system were then compared with those from commercially available SLOs featuring multiple narrow-band color imaging. PMID:25401032

  10. Color image encryption based on color blend and chaos permutation in the reality-preserving multiple-parameter fractional Fourier transform domain

    NASA Astrophysics Data System (ADS)

    Lang, Jun

    2015-03-01

    In this paper, we propose a novel color image encryption method using Color Blend (CB) and Chaos Permutation (CP) operations in the reality-preserving multiple-parameter fractional Fourier transform (RPMPFRFT) domain. The original color image is first exchanged and mixed randomly from the standard red-green-blue (RGB) color space to an R'G'B' color space by rotating the color cube with a random angle matrix. The RPMPFRFT is then employed to change the pixel values of the color image: the three components of the scrambled RGB color space are transformed by the RPMPFRFT with three different transform pairs, respectively. In contrast to transforms with complex-valued output, the RPMPFRFT ensures that the output is real, which saves image storage space and is convenient for transmission in practical applications. To further enhance the security of the encryption system, the output of the former steps is scrambled by juxtaposing sections of the image in the reality-preserving multiple-parameter fractional Fourier domains, with the alignment of sections determined by two coupled chaotic logistic maps. The parameters of the Color Blend, Chaos Permutation and RPMPFRFT operations serve as the key of the encryption algorithm. The proposed color image encryption can also be applied to encrypt three gray images by transforming them into the three RGB color components of a specially constructed color image. Numerical simulations demonstrate that the proposed algorithm is feasible, secure, sensitive to keys and robust to noise attack and data loss.
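    The chaotic permutation idea can be illustrated with a single logistic map; deriving a permutation by sorting the chaotic sequence is a common construction and is shown here as a sketch, not the paper's exact coupled-map algorithm:

```python
import numpy as np

def logistic_permutation(n, x0=0.3, mu=3.99, burn=100):
    """Derive a permutation of n elements from the logistic map
    x -> mu * x * (1 - x): iterate the map, then argsort the chaotic
    sequence. The initial value x0 acts as the key; a tiny change in x0
    yields a completely different permutation."""
    x = x0
    for _ in range(burn):            # discard the transient
        x = mu * x * (1 - x)
    seq = np.empty(n)
    for i in range(n):
        x = mu * x * (1 - x)
        seq[i] = x
    return np.argsort(seq)

data = np.arange(16)
perm = logistic_permutation(16, x0=0.3)
scrambled = data[perm]               # encryption-side scrambling
inv = np.argsort(perm)               # inverse permutation = descrambling
print(np.array_equal(scrambled[inv], data))
```

    In the scheme above, section alignment rather than single pixels is permuted, and two coupled maps drive the ordering, but the key sensitivity comes from the same mechanism.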

  11. Portable real-time color night vision

    NASA Astrophysics Data System (ADS)

    Toet, Alexander; Hogervorst, Maarten A.

    2008-03-01

    We developed a simple and fast lookup-table-based method to derive and apply natural daylight colors to multi-band night-time images. The method deploys an optimal color transformation derived from a set of samples taken from a daytime color reference image. The colors in the resulting colorized multi-band night-time images closely resemble the colors in the daytime reference image. Also, object colors remain invariant under panning operations and are independent of the scene content. Here we describe the implementation of this method in two prototype portable dual-band real-time night vision systems. One system provides co-aligned visual and near-infrared bands from two image intensifiers; the other provides co-aligned images from a digital image intensifier and an uncooled long-wave infrared microbolometer. The co-aligned images from both systems are further processed by a notebook computer. The color mapping is implemented as a real-time lookup table transform. The resulting colorized video streams can be displayed in real time on head-mounted displays and stored on the hard disk of the notebook computer. Preliminary field trials demonstrate the potential of these systems for applications such as surveillance, navigation and target detection.
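    The lookup-table color transform can be sketched as follows: build a table from paired two-band/daytime-RGB samples, then colorize each frame with one table lookup per pixel. The table size, sample counts and the synthetic reference mapping are illustrative assumptions:

```python
import numpy as np

def build_lut(samples_2band, samples_rgb, levels=32):
    """Map quantized (visual, infrared) intensity pairs to daytime RGB by
    averaging the reference colors observed for each pair of levels."""
    lut = np.zeros((levels, levels, 3))
    count = np.zeros((levels, levels, 1))
    q = np.minimum((samples_2band * levels).astype(int), levels - 1)
    for (a, b), rgb in zip(q, samples_rgb):
        lut[a, b] += rgb
        count[a, b, 0] += 1
    return lut / count.clip(1)

def apply_lut(lut, band_a, band_b):
    """Real-time application: one table lookup per pixel."""
    levels = lut.shape[0]
    qa = np.minimum((band_a * levels).astype(int), levels - 1)
    qb = np.minimum((band_b * levels).astype(int), levels - 1)
    return lut[qa, qb]

rng = np.random.default_rng(6)
two_band = rng.random((500, 2))                  # training samples (vis, IR)
# hypothetical daytime reference colors paired with those samples
ref_rgb = np.c_[two_band[:, :1], two_band[:, 1:], two_band.mean(1, keepdims=True)]
lut = build_lut(two_band, ref_rgb)
night = rng.random((2, 40, 40))                  # co-aligned vis + IR frames
colorized = apply_lut(lut, night[0], night[1])
print(colorized.shape)
```

    Because the table is fixed after training, the per-frame cost is indexing only, which is what makes the transform fast enough for real-time video on a notebook computer.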

  12. Blending the art and science of color

    NASA Astrophysics Data System (ADS)

    Bourges, Jean

    1998-07-01

    An often neglected factor of cognition is color, the natural and created hues that are an integral part of the image. This is because the information is divided between two very different languages that do not relate well to each other. One belongs to science, which deals in facts and uses numbers to transmit the message. The other relates to intangibles and is used in the arts. One speaks to reality, the other to emotions. Both are valid and correctly address the needs of their professions. Not very long ago the digital world was born. The computer came and put it all together in one little box that probably sits on your desk and is accessible to almost everyone. Now there is a need to communicate, and because the box only understands numbers, artists have to shift and identify colors by a simple numerical code. Conversely, science can learn a lot from the arts, where for centuries painters have been studying the mysteries of creating light on canvas. While color needs a responsive set of eyes, it has always been here. Let us start now by using the same words to identify a particular color, its tint and shade; how artists use it; the essential structure that lets one move from one color to another; and the colorimetric measurements that make an exact color match a practical reality.

  13. Analysis and Recognition of Curve Type as The Basis of Object Recognition in Image

    NASA Astrophysics Data System (ADS)

    Nugraha, Nurma; Madenda, Sarifuddin; Indarti, Dina; Dewi Agushinta, R.; Ernastuti

    2016-06-01

    An object in an image, when analyzed further, shows characteristics that distinguish it from other objects in the image. The characteristics used for object recognition in an image can be color, shape, pattern, texture and spatial information that represent objects in the digital image. A method recently developed for image feature extraction analyzes object characteristics with simple-curve analysis and a chain-code feature search. This study develops an algorithm for analyzing and recognizing curve types as the basis for object recognition in images, proposing the addition of complex-curve characteristics with a maximum of four branches to be used in the object recognition process. A complex curve is defined as a curve that has a point of intersection. Using several edge-detected images, the algorithm was able to analyze and recognize complex curve shapes well.

  14. Theoretical research on color indirect effects

    NASA Astrophysics Data System (ADS)

    Liu, T. C.; Liao, Changjun; Liu, Songhao

    1995-05-01

    Color indirect effects (CIE) are the physiological and psychological effects of color resulting from color vision. In this paper, we study CIE from the viewpoints of integrated Western and traditional Chinese medicine and of the time quantum theory established by C. Y. Liu et al., and put forward a color-autonomic-nervous-subsystem model in which cold colors excite the parasympathetic subsystem and hot colors excite the sympathetic subsystem. Our theory agrees with modern color vision theory and, moreover, resolves the conflict between the color code theory and the time code theory of color vision. For the latitude phenomena concerning the number of star athletes and the average lifespan, we also discuss the possibility of UV vision. Applying our theory, we succeed in explaining a number of physiological and psychological effects of color, the effects of age on color vision, and Chinese chromophototherapy. We also discuss its application to neuroimmunology. This research provides a foundation for the clinical applications of chromophototherapy.

  15. Quality assessment of color images based on the measure of just noticeable color difference

    NASA Astrophysics Data System (ADS)

    Chou, Chun-Hsien; Hsu, Yun-Hsiang

    2014-01-01

    Accurate assessment of the quality of color images is an important step in many image processing systems that convey visual information about the reproduced images. An accurate objective image quality assessment (IQA) method is expected to give results that agree closely with subjective assessment. To assess the quality of color images, many approaches simply apply a metric for assessing the quality of grayscale images to each of the three color channels of the color image, neglecting the correlation among the channels. In this paper, a metric for assessing the quality of color images is proposed, in which a model of variable just-noticeable color difference (VJNCD) is employed to estimate the visibility threshold of the distortion inherent in each color pixel. With the estimated visibility thresholds, the proposed metric measures the average perceptible distortion in terms of the quantized distortion according to a perceptual error map similar to that defined by the National Bureau of Standards (NBS) for converting the color difference enumerated by CIEDE2000 into an objective score of perceptual quality. The perceptual error map in this case is designed for each pixel according to the visibility threshold estimated by the VJNCD model. The performance of the proposed metric is verified by assessing the test images in the LIVE database and is compared with those of many well-known IQA metrics. Experimental results indicate that the proposed metric is an effective IQA method that can accurately predict the quality of color images in terms of the correlation between objective scores and subjective evaluation.
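    The threshold-gated pooling idea can be sketched as follows; the Euclidean color difference and the uniform per-pixel threshold map below are deliberate simplifications of CIEDE2000 and the VJNCD model, respectively:

```python
import numpy as np

def perceptible_distortion(ref, dist, jnd):
    """JNCD-style quality measure sketch: per-pixel color error counts only
    where it exceeds that pixel's visibility threshold; the supra-threshold
    amounts are then averaged over the image."""
    err = np.linalg.norm(ref - dist, axis=-1)   # per-pixel color difference
    visible = np.clip(err - jnd, 0, None)       # below threshold = imperceptible
    return visible.mean()

rng = np.random.default_rng(7)
ref = rng.random((16, 16, 3))
noise = rng.normal(0, 0.01, ref.shape)          # tiny distortion
jnd = np.full((16, 16), 0.05)                   # toy uniform threshold map
low = perceptible_distortion(ref, ref + noise, jnd)
high = perceptible_distortion(ref, ref + noise * 20, jnd)
print(low < high)
```

    Small distortions below the threshold contribute nothing to the score, which is what aligns such a metric with subjective visibility rather than raw pixel error.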

  16. Continuing Through Iani Chaos

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site]

    This false color image continues the northward trend through the Iani Chaos region. Compare this image to Monday's and Tuesday's. This image was collected during the Southern Fall season.

    Image information: VIS instrument. Latitude -0.1 Longitude 342.6 East (17.4 West). 19 meter/pixel resolution.


  17. Aureum Chaos: Another View

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site]

    This false color image is located in a different part of Aureum Chaos. Compare the surface textures with yesterday's image. This image was collected during the Southern Fall season.

    Image information: VIS instrument. Latitude -4.1, Longitude 333.9 East (26.1 West). 35 meter/pixel resolution.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayashi, A.; Hashimoto, T.; Horibe, M.

    The quantum color coding scheme proposed by Korff and Kempe [e-print quant-ph/0405086] is easily extended so that the color coding quantum system is allowed to be entangled with an extra auxiliary quantum system. It is shown that in the extended scheme only ≈2√N quantum colors are needed to order N objects in the large-N limit, whereas ≈N/e quantum colors are required in the original nonextended version. The maximum success probability has asymptotics expressed by the Tracy-Widom distribution of the largest eigenvalue of a random matrix drawn from the Gaussian unitary ensemble (GUE).

  19. Example-Based Image Colorization Using Locality Consistent Sparse Representation.

    PubMed

    Li, Bo; Zhao, Fuchen; Su, Zhuo; Liang, Xiangguo; Lai, Yu-Kun; Rosin, Paul L.

    2017-11-01

    Image colorization aims to produce a natural-looking color image from a given gray-scale image, which remains a challenging problem. In this paper, we propose a novel example-based image colorization method exploiting a new locality consistent sparse representation. Given a single reference color image, our method automatically colorizes the target gray-scale image by sparse pursuit. For efficiency and robustness, our method operates at the superpixel level. We extract low-level intensity features, mid-level texture features, and high-level semantic features for each superpixel, which are then concatenated to form its descriptor. The collection of feature vectors for all the superpixels from the reference image composes the dictionary. We formulate colorization of target superpixels as a dictionary-based sparse reconstruction problem. Inspired by the observation that superpixels with similar spatial location and/or feature representation are likely to match spatially close regions from the reference image, we further introduce a locality promoting regularization term into the energy formulation, which substantially improves the matching consistency and subsequent colorization results. Target superpixels are colorized based on the chrominance information from the dominant reference superpixels. Finally, to further improve coherence while preserving sharpness, we develop a new edge-preserving filter for chrominance channels with the guidance from the target gray-scale image. To the best of our knowledge, this is the first work on sparse pursuit image colorization from single reference images. Experimental results demonstrate that our colorization method outperforms state-of-the-art methods both visually and quantitatively, as confirmed by a user study.
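
    The dictionary-based reconstruction idea can be illustrated with a plain orthogonal matching pursuit standing in for the paper's locality-consistent sparse coding; the dictionary, descriptors, and chrominance values below are toy data, not the paper's features:

```python
import numpy as np

def omp(D, y, k=3):
    """Greedy sparse pursuit: select up to k dictionary atoms (columns of D)
    that best reconstruct the descriptor y. A basic orthogonal matching
    pursuit, simplified from the paper's locality-consistent formulation."""
    residual, idx = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j in idx:
            break
        idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    return idx, coef

# Toy reference image: 5 superpixels with 8-D descriptors (here, canonical
# basis vectors) and known 2-D chrominance values -- all illustrative.
D = np.eye(8)[:, :5]
chroma = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [-1.0, 0.0], [0.0, -1.0]])

# Target superpixel: mostly like reference 2, slightly like reference 0.
target = 0.9 * D[:, 2] + 0.1 * D[:, 0]
idx, coef = omp(D, target, k=2)

# Colorize from the dominant reference superpixels, weighted by coefficients.
w = np.abs(coef) / np.abs(coef).sum()
target_chroma = w @ chroma[idx]
print(idx, np.round(target_chroma, 2))  # [2, 0] [0.55 0.45]
```

    The target superpixel inherits a coefficient-weighted blend of the chrominance of the reference superpixels selected by the sparse pursuit, mirroring the "dominant reference superpixels" step in the abstract.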

  20. Colored Chaos

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 7 May 2004. This daytime visible color image was collected on May 30, 2002, during the Southern Fall season in Atlantis Chaos.

    Image information: VIS instrument. Latitude -34.5, Longitude 183.6 East (176.4 West). 38 meter/pixel resolution.

  1. Iani Chaos in False Color

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site]

    This false color image of a portion of the Iani Chaos region was collected during the Southern Fall season.

    Image information: VIS instrument. Latitude -2.6, Longitude 342.4 East (17.6 West). 36 meter/pixel resolution.

  2. Polar Cap Colors

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 12 May 2004. This daytime visible color image was collected on June 6, 2003, during the Southern Spring season near the South Polar Cap Edge.

    Image information: VIS instrument. Latitude -77.8, Longitude 195 East (165 West). 38 meter/pixel resolution.

  3. Extended Colour--Some Methods and Applications.

    ERIC Educational Resources Information Center

    Dean, P. J.; Murkett, A. J.

    1985-01-01

    Describes how color graphics are built up on microcomputer displays and how a range of colors can be produced. Discusses the logic of color formation, noting that adding/subtracting color can be conveniently demonstrated. Color generating techniques in physics (resistor color coding and continuous spectrum production) are given with program…

  4. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction

    NASA Astrophysics Data System (ADS)

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-06-01

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed “digital color fusion microscopy” (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available.
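
    The fusion step can be illustrated with a one-level Haar wavelet transform: the color-calibrated mobile-phone image supplies the low-frequency (approximation) band, while the high-resolution holographic image supplies the detail bands. This is a sketch of the general wavelet-fusion idea only; the paper's actual transform and fusion rules are not reproduced here:

```python
import numpy as np

def haar2(x):
    """One level of the 2-D Haar wavelet transform: approximation a plus
    horizontal/vertical/diagonal detail bands h, v, d."""
    a = (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4
    h = (x[0::2, 0::2] - x[0::2, 1::2] + x[1::2, 0::2] - x[1::2, 1::2]) / 4
    v = (x[0::2, 0::2] + x[0::2, 1::2] - x[1::2, 0::2] - x[1::2, 1::2]) / 4
    d = (x[0::2, 0::2] - x[0::2, 1::2] - x[1::2, 0::2] + x[1::2, 1::2]) / 4
    return a, h, v, d

def ihaar2(a, h, v, d):
    """Exact inverse of haar2."""
    x = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    x[0::2, 0::2] = a + h + v + d
    x[0::2, 1::2] = a - h + v - d
    x[1::2, 0::2] = a + h - v - d
    x[1::2, 1::2] = a - h - v + d
    return x

def fuse_luminance(holo_gray, color_luma):
    """Keep the low-frequency content of the color-calibrated image and the
    high-frequency detail of the high-resolution holographic image."""
    _, hh, hv, hd = haar2(holo_gray)
    ca, _, _, _ = haar2(color_luma)
    return ihaar2(ca, hh, hv, hd)

# Sanity check on toy data: the transform round-trips exactly.
g = np.arange(64, dtype=float).reshape(8, 8)
print(np.allclose(ihaar2(*haar2(g)), g))  # True
```

    In DCFM the chrominance comes from the color-calibrated image as well; only the fine luminance structure needs the holographic channel, which is what the band swap above captures.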

  5. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction

    PubMed Central

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-01-01

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed “digital color fusion microscopy” (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available. PMID:27283459

  6. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction.

    PubMed

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-06-10

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed "digital color fusion microscopy" (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available.

  7. Assessment of Optical Coherence Tomography Color Probability Codes in Myopic Glaucoma Eyes After Applying a Myopic Normative Database.

    PubMed

    Seol, Bo Ram; Kim, Dong Myung; Park, Ki Ho; Jeoung, Jin Wook

    2017-11-01

    To evaluate the optical coherence tomography (OCT) color probability codes based on a myopic normative database and to investigate whether the implementation of the myopic normative database can improve the OCT diagnostic ability in myopic glaucoma. Comparative validity study. In this study, 305 eyes (154 myopic healthy eyes and 151 myopic glaucoma eyes) were included. A myopic normative database was obtained based on myopic healthy eyes. We evaluated the agreement between OCT color probability codes after applying the built-in and myopic normative databases, respectively. Another 120 eyes (60 myopic healthy eyes and 60 myopic glaucoma eyes) were included and the diagnostic performance of OCT color codes using a myopic normative database was investigated. The mean weighted kappa (Kw) coefficients for quadrant retinal nerve fiber layer (RNFL) thickness, clock-hour RNFL thickness, and ganglion cell-inner plexiform layer (GCIPL) thickness were 0.636, 0.627, and 0.564, respectively. The myopic normative database showed a higher specificity than did the built-in normative database in quadrant RNFL thickness, clock-hour RNFL thickness, and GCIPL thickness (P < .001, P < .001, and P < .001, respectively). The receiver operating characteristic curve values increased when using the myopic normative database in quadrant RNFL thickness, clock-hour RNFL thickness, and GCIPL thickness (P = .011, P = .004, P < .001, respectively). The diagnostic ability of OCT color codes for detection of myopic glaucoma significantly improved after application of the myopic normative database. The implementation of a myopic normative database is needed to allow more precise interpretation of OCT color probability codes when used in myopic eyes. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Color-Coded Prefilled Medication Syringes Decrease Time to Delivery and Dosing Error in Simulated Emergency Department Pediatric Resuscitations.

    PubMed

    Moreira, Maria E; Hernandez, Caleb; Stevens, Allen D; Jones, Seth; Sande, Margaret; Blumen, Jason R; Hopkins, Emily; Bakes, Katherine; Haukoos, Jason S

    2015-08-01

    The Institute of Medicine has called on the US health care system to identify and reduce medical errors. Unfortunately, medication dosing errors remain commonplace and may result in potentially life-threatening outcomes, particularly for pediatric patients when dosing requires weight-based calculations. Novel medication delivery systems that may reduce dosing errors resonate with national health care priorities. Our goal was to evaluate novel, prefilled medication syringes labeled with color-coded volumes corresponding to the weight-based dosing of the Broselow Tape, compared with conventional medication administration, in simulated pediatric emergency department (ED) resuscitation scenarios. We performed a prospective, block-randomized, crossover study in which 10 emergency physician and nurse teams managed 2 simulated pediatric arrest scenarios in situ, using either prefilled, color-coded syringes (intervention) or conventional drug administration methods (control). The ED resuscitation room and the intravenous medication port were video recorded during the simulations. Data were extracted from video review by blinded, independent reviewers. Median time to delivery of all doses for the conventional and color-coded delivery groups was 47 seconds (95% confidence interval [CI] 40 to 53 seconds) and 19 seconds (95% CI 18 to 20 seconds), respectively (difference=27 seconds; 95% CI 21 to 33 seconds). With the conventional method, 118 doses were administered, with 20 critical dosing errors (17%); with the color-coded method, 123 doses were administered, with 0 critical dosing errors (difference=17%; 95% CI 4% to 30%). A novel color-coded, prefilled syringe decreased time to medication administration and significantly reduced critical dosing errors by emergency physician and nurse teams during simulated pediatric ED resuscitations. Copyright © 2015 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.

  9. Color image enhancement of medical images using alpha-rooting and zonal alpha-rooting methods on 2D QDFT

    NASA Astrophysics Data System (ADS)

    Grigoryan, Artyom M.; John, Aparna; Agaian, Sos S.

    2017-03-01

    2-D quaternion discrete Fourier transform (2-D QDFT) is the Fourier transform applied to color images when the color images are considered in the quaternion space. Quaternions are four-dimensional hyper-complex numbers, so a quaternion representation allows the color of the image to be treated as a single unit: each color is seen as a vector, which captures the merging effect of combining the primary colors. Conventionally, color images are processed by applying the algorithm to each channel separately and then composing the color image from the processed channels. In this article, the alpha-rooting and zonal alpha-rooting methods are used with the 2-D QDFT. In the alpha-rooting method, the alpha-root of the transformed frequency values of the 2-D QDFT is taken before the inverse transform. In the zonal alpha-rooting method, the frequency spectrum of the 2-D QDFT is divided into zones, and alpha-rooting is applied with a different alpha value for each zone. The choice of alpha values is optimized with a genetic algorithm. The visual perception of 3-D medical images is improved by changing the reference gray line.
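
    Classical alpha-rooting can be sketched as follows. Note that this sketch applies an ordinary 2-D DFT to a single channel, whereas the paper's point is to use the quaternion transform so the color image is processed as a single unit:

```python
import numpy as np

def alpha_rooting(channel, alpha=0.9):
    """Alpha-rooting on one channel: keep each Fourier phase but replace the
    magnitude |F| by |F|**alpha (0 < alpha < 1), boosting high frequencies
    relative to low ones; alpha = 1 leaves the image unchanged."""
    F = np.fft.fft2(channel)
    mag = np.abs(F)
    phase = np.divide(F, mag, out=np.zeros_like(F), where=mag > 0)
    return np.real(np.fft.ifft2((mag ** alpha) * phase))

img = np.arange(16, dtype=float).reshape(4, 4) + 1.0
print(np.allclose(alpha_rooting(img, alpha=1.0), img))  # True: identity at alpha = 1
enhanced = alpha_rooting(img, alpha=0.9)
```

    The zonal variant of the paper would partition the frequency plane and apply a different alpha per zone; here a single alpha is used for the whole spectrum.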

  10. Mawrth Valles

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site]

    This false color image of an old channel floor and surrounding highlands is located in the lower reach of Mawrth Valles. This image was collected during the Northern Spring season.

    Image information: VIS instrument. Latitude 25.7, Longitude 341.2 East (18.8 West). 35 meter/pixel resolution.

  11. A New Approach for the Semi-Automatic Texture Generation of the Building Facades, from Terrestrial Laser Scanner Data

    NASA Astrophysics Data System (ADS)

    Oniga, E.

    2012-07-01

    The result of terrestrial laser scanning is an impressive number of spatial points, each characterized by its X, Y, and Z coordinates, the value of the laser reflectance, and its real color, expressed as RGB (Red, Green, Blue) values. The color code for each LIDAR point is taken from georeferenced digital images acquired with a high-resolution panoramic camera incorporated in the scanner system. In this article I propose a new algorithm for semi-automatic texture generation, using the color information (the RGB values of every point acquired by terrestrial laser scanning) and the 3D surfaces defining the building facades, generated with the Leica Cyclone software. In the first step, the operator defines a limiting value, i.e. the minimum distance between a point and the closest surface. The second step consists of calculating the distances, i.e. the perpendiculars drawn from each point to the closest surface. In the third step, the points whose 3D coordinates are known are associated with each surface, depending on the limiting value. The fourth step consists of computing the Voronoi diagram of the points that belong to a surface. The final step automatically associates the RGB value of the color code with the corresponding polygon of the Voronoi diagram. The advantage of this algorithm is that a photorealistic 3D model of the building can be obtained in a semi-automatic manner.
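
    The thresholded point-to-surface assignment (steps one through three) and the per-cell coloring implied by the Voronoi diagram (steps four and five) can be sketched as follows; the single-plane facade and the brute-force nearest-neighbor search are simplifications for illustration:

```python
import numpy as np

def assign_points_to_plane(points, plane, limit):
    """Steps 1-3: keep the points whose perpendicular distance to the facade
    plane (n . p + d, with n a unit normal) is within the operator's limit."""
    n, d = plane
    dist = np.abs(points @ n + d)
    return points[dist <= limit], dist

def voronoi_color(texels, pts2d, rgb):
    """Steps 4-5: each texture sample takes the RGB of the nearest scanned
    point, i.e. the point whose Voronoi cell contains it (brute force here)."""
    diff = texels[:, None, :] - pts2d[None, :, :]
    nearest = np.argmin((diff ** 2).sum(axis=2), axis=1)
    return rgb[nearest]

# Toy facade: the plane z = 0, two scanned points, one of them too far away.
points = np.array([[0.0, 0.0, 0.01], [1.0, 1.0, 0.5]])
kept, dist = assign_points_to_plane(points, (np.array([0.0, 0.0, 1.0]), 0.0), limit=0.1)
print(len(kept))  # 1: only the first point lies within 0.1 of the plane

# Color two texture samples from the Voronoi cells of two 2-D points.
pts2d = np.array([[0.0, 0.0], [1.0, 1.0]])
rgb = np.array([[255, 0, 0], [0, 0, 255]])
colors = voronoi_color(np.array([[0.1, 0.2], [0.9, 0.8]]), pts2d, rgb)
print(colors.tolist())  # [[255, 0, 0], [0, 0, 255]]
```

    Filling each Voronoi polygon with its point's RGB value is equivalent to this nearest-point lookup evaluated over the texture grid.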

  12. Conjugate gradient method for phase retrieval based on the Wirtinger derivative.

    PubMed

    Wei, Zhun; Chen, Wen; Qiu, Cheng-Wei; Chen, Xudong

    2017-05-01

    A conjugate gradient Wirtinger flow (CG-WF) algorithm for phase retrieval is proposed in this paper. It is shown that, compared with recently reported Wirtinger flow and its modified methods, the proposed CG-WF algorithm is able to dramatically accelerate the convergence rate while keeping the dominant computational cost of each iteration unchanged. We numerically illustrate the effectiveness of our method in recovering 1D Gaussian signals and 2D natural color images under both Gaussian and coded diffraction pattern models.
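
    A sketch of plain (gradient-descent) Wirtinger flow for the intensity-only measurement model; the CG-WF of the paper replaces the gradient step with a conjugate-gradient search direction, which is not reproduced here:

```python
import numpy as np

def loss(z, A, y):
    """Intensity least-squares objective: f(z) = mean((|Az|^2 - y)^2) / 2."""
    return np.mean((np.abs(A @ z) ** 2 - y) ** 2) / 2

def wf_gradient(z, A, y):
    """Wirtinger derivative of f with respect to conj(z)."""
    Az = A @ z
    return A.conj().T @ ((np.abs(Az) ** 2 - y) * Az) / len(y)

def wirtinger_flow(A, y, z0, step=0.005, iters=200):
    """Plain gradient-descent Wirtinger flow. The paper's CG-WF keeps this
    per-iteration cost but follows conjugate-gradient directions instead,
    which is what accelerates convergence."""
    z = z0.astype(complex)
    for _ in range(iters):
        z = z - step * wf_gradient(z, A, y)
    return z

# Phaseless measurements y = |Ax|^2 of an unknown complex signal x.
rng = np.random.default_rng(0)
x = rng.normal(size=4) + 1j * rng.normal(size=4)
A = rng.normal(size=(32, 4)) + 1j * rng.normal(size=(32, 4))
y = np.abs(A @ x) ** 2

print(np.linalg.norm(wf_gradient(x, A, y)))  # 0.0: the true signal is a stationary point
# A practical run would use a spectral initialization; a perturbed start is
# used here only to keep the demo short.
z = wirtinger_flow(A, y, z0=x + 0.05 * (rng.normal(size=4) + 1j * rng.normal(size=4)))
```

    The signal is recoverable only up to a global phase, which is why convergence is normally measured by a phase-invariant distance rather than by z - x directly.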

  13. Enzymatic Fuel Cells: Integrating Flow-Through Anode and Air-Breathing Cathode into a Membrane-Less Biofuel Cell Design (Postprint)

    DTIC Science & Technology

    2011-06-01

    AFRL-RX-TY-TP-2011-0081. Enzymatic Fuel Cells: Integrating Flow-Through Anode and Air-Breathing Cathode into a Membrane-Less Biofuel Cell Design. Journal article (postprint), 01-JUN-2011, covering the period 01-JAN-2010 to 31-JAN-2011. Approved for public release; distribution unlimited (Ref Public Affairs Case # 88ABW-2011-2228, 14 Apr 11). Document contains color images. One of the key goals of enzymatic biofuel cells…

  14. Perceived assessment metrics for visible and infrared color fused image quality without reference image

    NASA Astrophysics Data System (ADS)

    Yu, Xuelian; Chen, Qian; Gu, Guohua; Ren, Jianle; Sui, Xiubao

    2015-02-01

    Designing an objective quality assessment for color-fused images is a very demanding and challenging task. We propose four no-reference metrics based on human visual system characteristics for objectively evaluating the quality of false-color fusion images. The perceived edge metric (PEM) is defined based on a visual perception model and the color image gradient similarity between the fused image and the source images. The perceptual contrast metric (PCM) is established by associating multi-scale contrast and a contrast sensitivity filter (CSF) that varies with the color components. A linear combination of the standard deviation and mean value over the fused image constructs the image colorfulness metric (ICM). The color comfort metric (CCM) is designed from the average saturation and the ratio of pixels with high and low saturation. The qualitative and quantitative experimental results demonstrate that the proposed metrics agree well with subjective perception.
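
    The ICM is described only as a linear combination of the standard deviation and mean over the fused image; one well-known metric of exactly that form is the Hasler-Susstrunk colorfulness measure, sketched below with its usual coefficients (the paper's exact definition may differ):

```python
import numpy as np

def image_colorfulness(rgb):
    """Colorfulness from the mean and standard deviation of opponent-color
    components (Hasler-Susstrunk form). The abstract specifies only 'a linear
    combination of the standard deviation and mean value', so the 0.3
    coefficient here is an assumption."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    rg = r - g                  # red-green opponent component
    yb = 0.5 * (r + g) - b      # yellow-blue opponent component
    sigma = np.hypot(rg.std(), yb.std())
    mu = np.hypot(rg.mean(), yb.mean())
    return sigma + 0.3 * mu

gray = np.full((8, 8, 3), 128)  # a uniform gray image: zero colorfulness
print(image_colorfulness(gray))  # 0.0
```

    A saturated two-color image scores well above zero under the same formula, which is the sanity check one would expect from a colorfulness metric.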

  15. Color correction with blind image restoration based on multiple images using a low-rank model

    NASA Astrophysics Data System (ADS)

    Li, Dong; Xie, Xudong; Lam, Kin-Man

    2014-03-01

    We present a method that can handle the color correction of multiple photographs, together with blind image restoration, simultaneously and automatically. We prove that the local colors of a set of images of the same scene exhibit the low-rank property locally, both before and after a color-correction operation. This property allows us to correct all kinds of errors in an image under a low-rank matrix model without particular priors or assumptions. The possible errors may be caused by changes of viewpoint, large illumination variations, gross pixel corruptions, partial occlusions, etc. Furthermore, a new iterative soft-segmentation method is proposed for local color transfer using color influence maps. Because the correct color information and the spatial information of images can be recovered using the low-rank model, more precise color correction and many other image-restoration tasks, including image denoising, image deblurring, and gray-scale image colorization, can be performed simultaneously. Experiments have verified that our method achieves consistent and promising results on uncontrolled real photographs acquired from the Internet and that it outperforms current state-of-the-art methods.

  16. Topographic View of Ceres Mountain

    NASA Image and Video Library

    2015-09-30

    This view, made using images taken by NASA's Dawn spacecraft, features a tall conical mountain on Ceres. Elevations span a range of about 5 miles (8 kilometers) from the lowest places in this region to the highest terrains. Blue represents the lowest elevation, and brown is the highest. The white streaks seen running down the side of the mountain are especially bright parts of the surface. The image was generated using two components: images of the surface taken during Dawn's High Altitude Mapping Orbit (HAMO) phase, where it viewed the surface at a resolution of about 450 feet (140 meters) per pixel, and a shape model generated using images taken at varying sun and viewing angles during Dawn's lower-resolution Survey phase. The image of the region is color-coded according to elevation, and then draped over the shape model to give this view. http://photojournal.jpl.nasa.gov/catalog/PIA19976
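
    The elevation color-coding described above can be sketched as a simple colormap lookup draped over a shading layer. The blue-to-brown ramp endpoints and the multiplicative draping below are illustrative assumptions, not the Dawn team's actual rendering pipeline.

```python
import numpy as np

def colorize_elevation(dem, low_color=(0.2, 0.3, 0.8), high_color=(0.5, 0.35, 0.2)):
    """Map elevations to colors by linear interpolation between a 'blue'
    lowland color and a 'brown' highland color (ramp endpoints assumed)."""
    t = (dem - dem.min()) / (dem.max() - dem.min())        # normalize to [0, 1]
    lo = np.asarray(low_color)
    hi = np.asarray(high_color)
    return t[..., None] * hi + (1.0 - t[..., None]) * lo   # H x W x 3

def drape(color, shaded_relief):
    """Drape the color-coded layer over a shaded-relief shape model by
    modulating brightness with the shading (simple multiplicative mix)."""
    return color * shaded_relief[..., None]
```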

  17. Compressive spectral testbed imaging system based on thin-film color-patterned filter arrays.

    PubMed

    Rueda, Hoover; Arguello, Henry; Arce, Gonzalo R

    2016-11-20

    Compressive spectral imaging systems can reliably capture multispectral data using far fewer measurements than traditional scanning techniques. In this paper, a thin-film patterned filter array-based compressive spectral imager is demonstrated, including its optical design and implementation. The use of a patterned filter array entails a single-step, three-dimensional spatial-spectral coding of the input data cube, which provides greater flexibility in selecting the voxels multiplexed on the sensor. The patterned filter array is designed and fabricated from micrometer-pitch thin films, referred to as pixelated filters, for three different wavelengths. The performance of the system is evaluated against reference measurements from a commercially available spectrometer and in terms of the visual quality of the reconstructed images. Different distributions of the pixelated filters, including random and optimized structures, are explored.
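
    The single-step spatial-spectral coding can be sketched as follows: each sensor pixel sits behind one pixelated filter, so its measurement is the data cube weighted by that filter's transmission and summed over wavelength. The random filter assignment and idealized band-pass transmissions below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

H, W, L = 8, 8, 3                      # spatial size and 3 spectral bands
cube = rng.random((H, W, L))           # input spatial-spectral data cube

# Patterned filter array: each pixel carries one of three band-pass filters,
# assigned at random here (an 'optimized' layout is one design variable).
assignment = rng.integers(0, L, size=(H, W))
T = np.eye(L)[assignment]              # H x W x L per-pixel transmission

# Single-step 3D coding: the sensor measures the filtered cube summed
# over wavelength -- one coded measurement per pixel.
y = np.sum(T * cube, axis=-1)          # H x W compressive measurement
```

    Reconstruction then amounts to inverting this coded projection under a sparsity prior, which is where the choice of filter distribution matters.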

  18. SYNMAG PHOTOMETRY: A FAST TOOL FOR CATALOG-LEVEL MATCHED COLORS OF EXTENDED SOURCES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bundy, Kevin; Yasuda, Naoki; Hogg, David W.

    2012-12-01

    Obtaining reliable, matched photometry for galaxies imaged by different observatories represents a key challenge in the era of wide-field surveys spanning more than several hundred square degrees. Methods such as flux fitting, profile fitting, and PSF homogenization followed by matched-aperture photometry are all computationally expensive. We present an alternative solution called 'synthetic aperture photometry' that exploits galaxy profile fits in one band to efficiently model the observed, point-spread-function-convolved light profile in other bands and predict the flux in arbitrarily sized apertures. Because aperture magnitudes are the most widely tabulated flux measurements in survey catalogs, producing synthetic aperture magnitudes (SYNMAGs) enables very fast matched photometry at the catalog level, without reprocessing the imaging data. We make our code public and apply it to obtain matched photometry between Sloan Digital Sky Survey ugriz and UKIDSS YJHK imaging, recovering red-sequence colors and photometric redshifts with a scatter and accuracy as good as, if not better than, FWHM-homogenized photometry from the GAMA Survey. Finally, we list some specific measurements that upcoming surveys could make available to facilitate and ease the use of SYNMAGs.
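
    The core trick, convolving a profile fit from one band with another band's PSF to predict an aperture flux, has a closed form when both the galaxy profile and the PSF are modeled as circular Gaussians, since the convolution is again a Gaussian with the variances added. The sketch below illustrates only that special case; SYNMAG itself supports more general profile fits.

```python
import numpy as np

def synmag_aperture_flux(total_flux, sigma_gal, sigma_psf, aperture_radius):
    """Flux of a circular-Gaussian galaxy profile, convolved with a circular-
    Gaussian PSF, inside a circular aperture. A Gaussian convolved with a
    Gaussian is Gaussian with variances added, so the enclosed flux is
    analytic: F(R) = F_tot * (1 - exp(-R^2 / (2 sigma^2)))."""
    sigma = np.hypot(sigma_gal, sigma_psf)
    return total_flux * (1.0 - np.exp(-aperture_radius**2 / (2.0 * sigma**2)))
```

    A matched color then follows as -2.5 log10 of the ratio of two such fluxes, each computed with its own band's PSF width, without touching the pixel data.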

  19. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition.

    PubMed

    Park, Chulhee; Kang, Moon Gi

    2016-05-18

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications, with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible-band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component in each RGB color channel. To overcome this color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels, while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible-band component and the NIR-band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors.
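
    The decomposition idea can be sketched as a per-channel subtraction: model each observed color channel as its visible component plus a leakage term proportional to the NIR measured in the N channel, then subtract the estimated leakage. The leakage coefficients below are placeholders; in the actual method they follow from the measured spectral characteristics of the MSFA sensor.

```python
import numpy as np

def restore_visible(rgbn, leak=(0.6, 0.5, 0.4)):
    """Sketch of NIR decomposition for an RGBN sensor: each observed color
    channel is modeled as visible + k_c * NIR, and the estimated NIR
    contribution is subtracted using the N channel (coefficients k_c are
    assumptions standing in for the sensor's spectral responses)."""
    rgb_obs = rgbn[..., :3]
    nir = rgbn[..., 3:4]
    k = np.asarray(leak)
    return np.clip(rgb_obs - k * nir, 0.0, None)
```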

  20. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition

    PubMed Central

    Park, Chulhee; Kang, Moon Gi

    2016-01-01

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications, with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible-band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component in each RGB color channel. To overcome this color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels, while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible-band component and the NIR-band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors. PMID:27213381

  1. The effect of a redundant color code on an overlearned identification task

    NASA Technical Reports Server (NTRS)

    Obrien, Kevin

    1992-01-01

    The possibility of finding redundancy gains with overlearned tasks was examined using a paradigm varying familiarity with the stimulus set. Redundant coding in a multidimensional stimulus was demonstrated to result in increased identification accuracy and decreased latency of identification when compared to stimuli varying on only one dimension. The advantages attributable to redundant coding are referred to as redundancy gain and were found for a variety of stimulus dimension combinations, including the use of hue or color as one of the dimensions. Factors that have affected redundancy gain include the discriminability of the levels of one stimulus dimension and the level of stimulus-to-response association. The results demonstrated that response time is in part a function of familiarity, but no effect of redundant color coding was demonstrated. Implications of research on coding in identification tasks for display design are discussed.

  2. Single Channel Quantum Color Image Encryption Algorithm Based on HSI Model and Quantum Fourier Transform

    NASA Astrophysics Data System (ADS)

    Gong, Li-Hua; He, Xiang-Tao; Tan, Ru-Chao; Zhou, Zhi-Hong

    2018-01-01

    In order to obtain high-quality color images, it is important to keep the hue component unchanged while emphasizing the intensity or saturation component. As a popular color model, the Hue-Saturation-Intensity (HSI) model is commonly used in image processing. A new single-channel quantum color image encryption algorithm based on the HSI model and the quantum Fourier transform (QFT) is investigated, in which the color components of the original color image are converted to HSI and the logistic map is employed to diffuse the relationships between pixels in the color components. Subsequently, the quantum Fourier transform is exploited to complete the encryption. The cipher-text is a combination of a gray image and a phase matrix. Simulations and theoretical analyses demonstrate that the proposed single-channel quantum color image encryption scheme based on the HSI model and quantum Fourier transform is secure and effective.
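
    For reference, the standard RGB-to-HSI conversion that underlies the scheme can be sketched for a single pixel as follows: intensity is the channel mean, saturation measures the distance from gray, and hue is an angle in the color plane computed via arccos.

```python
import numpy as np

def rgb_to_hsi(r, g, b, eps=1e-12):
    """Convert one RGB pixel (components in [0, 1]) to the HSI model
    using the standard formulas; hue is returned in degrees."""
    i = (r + g + b) / 3.0
    s = 1.0 - 3.0 * min(r, g, b) / (r + g + b + eps)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = theta if b <= g else 360.0 - theta
    return h, s, i
```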

  3. Optimal chroma-like channel design for passive color image splicing detection

    NASA Astrophysics Data System (ADS)

    Zhao, Xudong; Li, Shenghong; Wang, Shilin; Li, Jianhua; Yang, Kongjin

    2012-12-01

    Image splicing is one of the most common image forgeries in daily life, and powerful image-manipulation tools are making it easier and easier to perform. Several methods have been proposed for image splicing detection, and all of them operate on existing color channels. However, the splicing artifacts vary across color channels, so the selection of the color model is important for image splicing detection. In this article, instead of choosing an existing color model, we propose a color channel design method that finds the most discriminative channel, referred to as the optimal chroma-like channel, for a given feature extraction method. Experimental results show that both spatial and frequency features extracted from the designed channel achieve a higher detection rate than those extracted from traditional color channels.

  4. Color image guided depth image super resolution using fusion filter

    NASA Astrophysics Data System (ADS)

    He, Jin; Liang, Bin; He, Ying; Yang, Jun

    2018-04-01

    Depth cameras currently play an important role in many areas. However, most of them can only obtain low-resolution (LR) depth images, whereas color cameras can easily provide high-resolution (HR) color images. Using a color image as a guide is an efficient way to obtain an HR depth image. In this paper, we propose a depth image super-resolution (SR) algorithm that takes an HR color image as the guide and an LR depth image as input. We use a fusion of the guided filter and an edge-based joint bilateral filter to produce the HR depth image. Our experimental results on the Middlebury 2005 datasets show that our method provides better-quality HR depth images, both numerically and visually.
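
    One common form of color-guided depth upsampling, a joint bilateral filter applied to a nearest-neighbor-upsampled depth map, is sketched below. It is a simplified stand-in for the paper's guided-filter plus edge-based joint-bilateral fusion; the filter parameters are assumptions.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, guide_hr, scale, sigma_s=2.0, sigma_r=0.1):
    """Each HR depth value is a weighted average of the nearest-neighbor-
    upsampled LR depth, with weights combining spatial distance and
    similarity in the HR color/intensity guide (joint bilateral filter)."""
    H, W = guide_hr.shape[:2]
    depth_nn = np.kron(depth_lr, np.ones((scale, scale)))[:H, :W]  # NN upsample
    rad = int(2 * sigma_s)
    out = np.empty((H, W))
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - rad), min(H, y + rad + 1)
            x0, x1 = max(0, x - rad), min(W, x + rad + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            w_s = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            diff = guide_hr[y0:y1, x0:x1] - guide_hr[y, x]
            w_r = np.exp(-np.sum(np.atleast_3d(diff) ** 2, axis=-1) / (2 * sigma_r ** 2))
            w = w_s * w_r
            out[y, x] = np.sum(w * depth_nn[y0:y1, x0:x1]) / np.sum(w)
    return out
```

    The range weight `w_r` is what transfers the color image's edges into the depth map: across a guide edge the weights drop, so depth values do not bleed between surfaces.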

  5. Auream Chaos

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site]

    The THEMIS VIS camera is capable of capturing color images of the Martian surface using five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from using multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation.
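
    The false-color composition process described above can be sketched directly: stretch each selected filter image independently, then assign the results to the red, green, and blue intensity planes. The percentile stretch below is an illustrative stand-in for whatever contrast enhancement the THEMIS pipeline actually applies.

```python
import numpy as np

def stretch(band, p_lo=2, p_hi=98):
    """Contrast-enhance one grayscale filter image with a percentile
    stretch (percentile limits are assumptions)."""
    lo, hi = np.percentile(band, [p_lo, p_hi])
    return np.clip((band - lo) / (hi - lo + 1e-12), 0.0, 1.0)

def three_filter_composite(band_r, band_g, band_b):
    """Combine three of the five filter images into a false-color composite:
    each band is stretched independently, then assigned to the red, green,
    or blue plane -- which is why the apparent color variation is
    exaggerated relative to the true scene."""
    return np.dstack([stretch(band_r), stretch(band_g), stretch(band_b)])
```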

    This false color image was collected during Southern Fall and shows part of the Aureum Chaos.

    Image information: VIS instrument. Latitude -3.6, Longitude 332.9 East (27.1 West). 35 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  6. Northern Polar Cap

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 13 May 2004 This nighttime visible color image was collected on November 26, 2002 during the Northern Summer season near the North Polar Cap Edge.

    The THEMIS VIS camera is capable of capturing color images of the martian surface using its five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from the use of multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation.

    Image information: VIS instrument. Latitude 80, Longitude 43.2 East (316.8 West). 38 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  7. Autographic theme extraction

    USGS Publications Warehouse

    Edson, D.; Colvocoresses, Alden P.

    1973-01-01

    Remote-sensor images, including aerial and space photographs, are generally recorded on film, where the differences in density create the image of the scene. With panchromatic and multiband systems the density differences are recorded in shades of gray. On color or color infrared film, with the emulsion containing dyes sensitive to different wavelengths, a color image is created by a combination of color densities. The colors, however, can be separated by filtering or other techniques, and the color image reduced to monochromatic images in which each of the separated bands is recorded as a function of the gray scale.

  8. Neural classification of the selected family of butterflies

    NASA Astrophysics Data System (ADS)

    Zaborowicz, M.; Boniecki, P.; Piekarska-Boniecka, H.; Koszela, K.; Mueller, W.; Górna, K.; Okoń, P.

    2017-07-01

    There has been growing interest among researchers in drawing conclusions from information coded in graphic form. Neural identification of pictorial data, with special emphasis on both quantitative and qualitative analysis, is increasingly used to acquire and deepen empirical knowledge of the data. Extracting and then classifying selected picture features, such as color or surface structure, makes it possible to create computer tools for identifying objects presented as, for example, digital pictures. This work presents the original computer system "Processing the image v.1.0", designed to digitize pictures on the basis of a color criterion. The system has been applied to generate a reference learning file for training an Artificial Neural Network (ANN) to identify selected kinds of butterflies from the Papilionidae family.

  9. Analysis of the Tanana River Basin using LANDSAT data

    NASA Technical Reports Server (NTRS)

    Morrissey, L. A.; Ambrosia, V. G.; Carson-Henry, C.

    1981-01-01

    Digital image classification techniques were used to classify land cover/resource information in the Tanana River Basin of Alaska. Portions of four scenes of LANDSAT digital data were analyzed using computer systems at Ames Research Center in an unsupervised approach to derive cluster statistics. The spectral classes were identified using the IDIMS display and color infrared photography. Classification errors were corrected using stratification procedures. The classification scheme resulted in the following eleven categories: sedimented/shallow water, clear/deep water, coniferous forest, mixed forest, deciduous forest, shrub and grass, bog, alpine tundra, barrens, snow and ice, and cultural features. Color-coded maps and acreage summaries of the major land cover categories were generated for selected USGS quadrangles (1:250,000) which lie within the drainage basin. The project was completed within six months.

  10. Three-dimensional quick response code based on inkjet printing of upconversion fluorescent nanoparticles for drug anti-counterfeiting

    NASA Astrophysics Data System (ADS)

    You, Minli; Lin, Min; Wang, Shurui; Wang, Xuemin; Zhang, Ge; Hong, Yuan; Dong, Yuqing; Jin, Guorui; Xu, Feng

    2016-05-01

    Medicine counterfeiting is a serious issue worldwide, involving potentially devastating health repercussions. Advanced anti-counterfeit technology for drugs has therefore aroused intensive interest. However, existing anti-counterfeit technologies are associated with drawbacks such as the high cost, complex fabrication process, sophisticated operation and incapability in authenticating drug ingredients. In this contribution, we developed a smart phone recognition based upconversion fluorescent three-dimensional (3D) quick response (QR) code for tracking and anti-counterfeiting of drugs. We firstly formulated three colored inks incorporating upconversion nanoparticles with RGB (i.e., red, green and blue) emission colors. Using a modified inkjet printer, we printed a series of colors by precisely regulating the overlap of these three inks. Meanwhile, we developed a multilayer printing and splitting technology, which significantly increases the information storage capacity per unit area. As an example, we directly printed the upconversion fluorescent 3D QR code on the surface of drug capsules. The 3D QR code consisted of three different color layers with each layer encoded by information of different aspects of the drug. A smart phone APP was designed to decode the multicolor 3D QR code, providing the authenticity and related information of drugs. The developed technology possesses merits in terms of low cost, ease of operation, high throughput and high information capacity, thus holds great potential for drug anti-counterfeiting. Electronic supplementary information (ESI) available: Calculating details of UCNP content per 3D QR code and decoding process of the 3D QR code. See DOI: 10.1039/c6nr01353h

  11. Brain MR image segmentation using NAMS in pseudo-color.

    PubMed

    Li, Hua; Chen, Chuanbo; Fang, Shaohong; Zhao, Shengrong

    2017-12-01

    Image segmentation plays a crucial role in various biomedical applications. In general, the segmentation of brain Magnetic Resonance (MR) images is mainly used to represent the image with several homogeneous regions, instead of pixels, for surgical analysis and planning. This paper proposes a new approach for segmenting MR brain images by using pseudo-color-based segmentation with the Non-symmetry and Anti-packing Model with Squares (NAMS). First, the NAMS model is presented. The model can represent the image with sub-patterns that preserve the image content while largely reducing data redundancy. Second, the key idea is to convert the original gray-scale brain MR image into a pseudo-colored image and then segment the pseudo-colored image with the NAMS model. The pseudo-colored image enhances the color contrast between different tissues in brain MR images, which improves both the precision of segmentation and direct visual perceptual distinction. Experimental results indicate that, compared with other brain MR image segmentation methods, the proposed NAMS-based pseudo-color segmentation method excels in both segmentation precision and storage savings.

  12. Colorization and Automated Segmentation of Human T2 MR Brain Images for Characterization of Soft Tissues

    PubMed Central

    Attique, Muhammad; Gilanie, Ghulam; Hafeez-Ullah; Mehmood, Malik S.; Naweed, Muhammad S.; Ikram, Masroor; Kamran, Javed A.; Vitkin, Alex

    2012-01-01

    Characterization of tissues such as brain using magnetic resonance (MR) images, and colorization of the gray-scale image, have been reported in the literature, along with their advantages and drawbacks. Here, we present two independent methods: (i) a novel colorization method to underscore the variability in brain MR images, indicative of the underlying physical density of biological tissue, and (ii) a segmentation method (both hard and soft segmentation) to characterize gray brain MR images. The segmented images are then transformed into color using the above-mentioned colorization method, yielding promising results for manual tracing. Our color transformation incorporates voxel classification by matching the luminance of voxels of the source MR image and the provided color image, measuring the distance between them. The segmentation method is based on single-phase clustering for 2D and 3D image segmentation with a new automatic centroid selection method, which divides the image into three distinct regions (gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF)) using prior anatomical knowledge. Results have been successfully validated on human T2-weighted (T2) brain MR images. The proposed method can potentially be applied to gray-scale images from other imaging modalities, bringing out additional diagnostic tissue information contained in the colorized image as described. PMID:22479421

  13. Grand average ERP-image plotting and statistics: A method for comparing variability in event-related single-trial EEG activities across subjects and conditions

    PubMed Central

    Delorme, Arnaud; Miyakoshi, Makoto; Jung, Tzyy-Ping; Makeig, Scott

    2014-01-01

    With the advent of modern computing methods, modeling trial-to-trial variability in biophysical recordings, including electroencephalography (EEG), has become of increasing interest. Yet no widely used method exists for comparing variability in ordered collections of single-trial data epochs across conditions and subjects. We have developed a method based on an ERP-image visualization tool in which potential, spectral power, or some other measure at each time point in a set of event-related single-trial data epochs is represented as a color-coded horizontal line; the lines are then stacked to form a 2-D colored image. Moving-window smoothing across trial epochs can make otherwise hidden event-related features in the data more perceptible. Stacking trials in different orders, for example ordered by subject reaction time, by context-related information such as inter-stimulus interval, or by some other characteristic of the data (e.g., latency-window mean power or phase of some EEG source), can reveal aspects of the multifold complexities of trial-to-trial EEG data variability. This study demonstrates new methods for computing and visualizing grand ERP-image plots across subjects and for performing robust statistical testing on the resulting images. These methods have been implemented and made freely available in the EEGLAB signal-processing environment that we maintain and distribute. PMID:25447029
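
    The construction described above, stacking single-trial epochs as rows, re-ordering them by a trial characteristic such as reaction time, and smoothing with a moving window across trials, can be sketched in a few lines. This is a conceptual sketch, not EEGLAB's actual implementation; the window size is an assumption.

```python
import numpy as np

def erp_image(epochs, order=None, smooth=10):
    """Sketch of an ERP-image: stack single-trial epochs (trials x times)
    as rows, optionally re-ordered by a per-trial value (e.g., reaction
    time), then apply a moving-window average across the trial axis to
    bring out features hidden by trial-to-trial variability."""
    data = epochs[np.argsort(order)] if order is not None else epochs
    if smooth > 1:
        kernel = np.ones(smooth) / smooth
        # convolve each time column across the trial axis (same length)
        data = np.apply_along_axis(
            lambda col: np.convolve(col, kernel, mode="same"), 0, data)
    return data  # 2-D array ready to render as a color-coded image
```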

  14. Demosaiced pixel super-resolution for multiplexed holographic color imaging

    PubMed Central

    Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan

    2016-01-01

    To synthesize a holographic color image, one can sequentially take three holograms at different wavelengths, e.g., in the red (R), green (G) and blue (B) parts of the spectrum, and digitally merge them. To speed up the imaging process by a factor of three, a Bayer color sensor chip can instead be used to demultiplex three wavelengths that simultaneously illuminate the sample, digitally retrieving an individual set of holograms using the known transmission spectra of the Bayer color filters. However, because the pixels of the different channels (R, G, B) on a Bayer color sensor are not at the same physical location, conventional demosaicing techniques generate color artifacts in holographic imaging with simultaneous multi-wavelength illumination. Here we demonstrate that pixel super-resolution can be merged into the color de-multiplexing process to significantly suppress these artifacts in wavelength-multiplexed holographic color imaging. This new approach, termed Demosaiced Pixel Super-Resolution (D-PSR), generates color images that are similar in performance to sequential illumination at three wavelengths, and therefore improves the speed of holographic color imaging by 3-fold. The D-PSR method is broadly applicable to holographic microscopy applications where high-resolution imaging and multi-wavelength illumination are desired. PMID:27353242

  15. Effects of Gender Color-Coding on Toddlers' Gender-Typical Toy Play.

    PubMed

    Wong, Wang I; Hines, Melissa

    2015-07-01

    Gender color-coding of children's toys may make certain toys more appealing or less appealing to a given gender. We observed toddlers playing with two gender-typical toys (a train, a doll), once in gender-typical colors and once in gender-atypical colors. Assessments occurred twice, at 20-40 months of age and at 26-47 months of age. A Sex × Time × Toy × Color ANOVA showed expected interactions between Sex and Toy and Sex and Color. Boys played more with the train than girls did and girls played more with the doll and with pink toys than boys did. The Sex × Toy × Color interaction was not significant, but, at both time points, boys and girls combined played more with the gender-atypical toy when its color was typical for their sex than when it was not. This effect appeared to be caused largely by boys' preference for, or avoidance of, the doll and by the use of pink. Also, at both time points, gender differences in toy preferences were larger in the gender-typical than in the gender-atypical color condition. At Time 2, these gender differences were present only in the gender-typical color condition. Overall, the results suggest that, once acquired, gender-typical color preferences begin to influence toy preferences, especially those for gender-atypical toys and particularly in boys. They thus could enlarge differences between boys' and girls' toy preferences. Because boys' and girls' toys elicit different activities, removing the gender color-coding of toys could encourage more equal learning opportunities.

  16. A study of glasses-type color CGH using a color filter considering reduction of blurring

    NASA Astrophysics Data System (ADS)

    Iwami, Saki; Sakamoto, Yuji

    2009-02-01

    We have developed a glasses-type color computer-generated hologram (CGH) using a color filter. The proposed glasses consist of two "lenses" made of overlapping holograms and color filters. The holograms, which are calculated to reconstruct images in each primary color, are divided into small areas, which we call cells, and superimposed on one hologram. In the same way, the colors of the filter correspond to the hologram cells. The device can be configured very simply, without a complex optical system, and the configuration yields a small, lightweight system suitable for glasses. When the cells are small enough, the colors are mixed and reconstructed color images are observed; the color expression of the reconstructed images also improves. However, using small cells blurs the reconstructed images for two reasons: (1) interference between cells caused by the correlation among cells, and (2) reduced resolution caused by the size of the cell hologram. We are investigating how to make a hologram that reconstructs high-resolution color images without ghost images. In this paper, we discuss (1) the details of the proposed glasses-type color CGH, (2) an appropriate cell size for the eye, (3) the effects of cell shape on the reconstructed images, and (4) a new method to reduce the blurring of the images.

  17. Combining Topological Hardware and Topological Software: Color-Code Quantum Computing with Topological Superconductor Networks

    NASA Astrophysics Data System (ADS)

    Litinski, Daniel; Kesselring, Markus S.; Eisert, Jens; von Oppen, Felix

    2017-07-01

    We present a scalable architecture for fault-tolerant topological quantum computation using networks of voltage-controlled Majorana Cooper pair boxes and topological color codes for error correction. Color codes have a set of transversal gates which coincides with the set of topologically protected gates in Majorana-based systems, namely, the Clifford gates. In this way, we establish color codes as providing a natural setting in which advantages offered by topological hardware can be combined with those arising from topological error-correcting software for full-fledged fault-tolerant quantum computing. We provide a complete description of our architecture, including the underlying physical ingredients. We start by showing that in topological superconductor networks, hexagonal cells can be employed to serve as physical qubits for universal quantum computation, and we present protocols for realizing topologically protected Clifford gates. These hexagonal-cell qubits allow for a direct implementation of open-boundary color codes with ancilla-free syndrome read-out and logical T gates via magic-state distillation. For concreteness, we describe how the necessary operations can be implemented using networks of Majorana Cooper pair boxes, and we give a feasibility estimate for error correction in this architecture. Our approach is motivated by nanowire-based networks of topological superconductors, but it could also be realized in alternative settings such as quantum-Hall-superconductor hybrids.

  18. A robust color image fusion for low light level and infrared images

    NASA Astrophysics Data System (ADS)

    Liu, Chao; Zhang, Xiao-hui; Hu, Qing-ping; Chen, Yong-kang

    2016-09-01

    Low light level and infrared color fusion technology has achieved great success in the field of night vision. The technology is designed to make hot targets in the fused image pop out in vivid colors, to render background details in colors close to their natural appearance, and to improve target discovery, detection, and identification. However, low light level images are very noisy under low illumination, and existing color fusion methods are easily degraded by noise in the low light level channel: when that noise is large, the quality of the fused image decreases significantly, and targets in the infrared image can even be submerged by the noise. This paper proposes an adaptive color night vision technique in which noise evaluation parameters of the low light level image are introduced into the fusion process, improving the robustness of the color fusion. The fusion results remain good in low-light situations, which shows that this method can effectively improve the quality of fused low light level and infrared images under low illumination conditions.

  19. Luminance contours can gate afterimage colors and "real" colors.

    PubMed

    Anstis, Stuart; Vergeer, Mark; Van Lier, Rob

    2012-09-06

    It has long been known that colored images may elicit afterimages in complementary colors. We have already shown (Van Lier, Vergeer, & Anstis, 2009) that one and the same adapting image may result in different afterimage colors, depending on the test contours presented after the colored image. The color of the afterimage depends on two adapting colors, those both inside and outside the test. Here, we further explore this phenomenon and show that the color-contour interactions shown for afterimage colors also occur for "real" colors. We argue that similar mechanisms apply for both types of stimulation.

  20. Application of passive imaging polarimetry in the discrimination and detection of different color targets of identical shapes using color-blind imaging sensors

    NASA Astrophysics Data System (ADS)

    El-Saba, A. M.; Alam, M. S.; Surpanani, A.

    2006-05-01

    Important aspects of automatic pattern recognition systems are their ability to efficiently discriminate and detect proper targets with low false alarm rates. In this paper we extend the applications of passive imaging polarimetry to effectively discriminate and detect different color targets of identical shapes using a color-blind imaging sensor. For this case study we demonstrate that traditional color-blind polarization-insensitive imaging sensors, which rely only on the spatial distribution of targets, suffer from high false detection rates, especially in scenarios where multiple targets of identical shape are present. On the other hand, we show that color-blind polarization-sensitive imaging sensors can successfully and efficiently discriminate and detect true targets based on their color alone. We highlight the main advantages of using our proposed polarization-encoded imaging sensor.

  1. Landscape of Former Lakes and Streams on Northern Mars

    NASA Image and Video Library

    2016-09-15

    Valleys younger than better-known ancient valley networks on Mars are evident on the landscape in the northern Arabia Terra region of Mars, particularly in the area mapped here with color-coded topographical information overlaid onto a photo mosaic. The area includes a basin informally named "Heart Lake" at upper left (northwest). Data from the Mars Orbiter Laser Altimeter (MOLA) on NASA's Mars Global Surveyor orbiter are coded here as white and purple for lower elevations, yellow for higher elevation. The elevation information is combined with a mosaic of images from the Thermal Emission Imaging System (THEMIS) camera on NASA's Mars Odyssey orbiter, covering an area about 120 miles (about 190 kilometers) wide. The mapped area is centered near 35.91 degrees north latitude, 1 degree east longitude on Mars. These lakes and streams held water several hundred million years after better-known ancient lake environments on Mars, according to 2016 findings. http://photojournal.jpl.nasa.gov/catalog/PIA20838

  2. Lightness modification of color image for protanopia and deuteranopia

    NASA Astrophysics Data System (ADS)

    Tanaka, Go; Suetake, Noriaki; Uchino, Eiji

    2010-01-01

    In multimedia content, colors play important roles in conveying visual information. However, color information cannot always be perceived uniformly by all people. People with a color vision deficiency, such as dichromacy, cannot recognize and distinguish certain color combinations. In this paper, an effective lightness modification method is proposed that enables barrier-free color vision for people with dichromacy, especially protanopia or deuteranopia, while preserving the color information in the original image for people with standard color vision. In the proposed method, an optimization problem concerning lightness components is first defined by considering the color differences in an input image. Then, by solving the optimization problem, a color image is obtained that is perceptible and comprehensible both to protanopes or deuteranopes and to viewers with no color vision deficiency. Experiments illustrate the effectiveness of the proposed method.

  3. Applications of LANDSAT data to the integrated economic development of Mindoro, Philippines

    NASA Technical Reports Server (NTRS)

    Wagner, T. W.; Fernandez, J. C.

    1977-01-01

    LANDSAT data is seen as providing essential up-to-date resource information for the planning process. LANDSAT data of Mindoro Island in the Philippines was processed to provide thematic maps showing patterns of agriculture, forest cover, terrain, wetlands, and water turbidity. A hybrid approach using both supervised and unsupervised classification techniques resulted in 30 different scene classes, which were subsequently color-coded and mapped at a scale of 1:250,000. In addition, intensive image analysis is being carried out to evaluate the images. The images, maps, and areal statistics are being used to provide data to seven technical departments in planning the economic development of Mindoro. Multispectral aircraft imagery was collected to complement the application of LANDSAT data and validate the classification results.

  4. TiConverter: A training image converting tool for multiple-point geostatistics

    NASA Astrophysics Data System (ADS)

    Fadlelmula F., Mohamed M.; Killough, John; Fraim, Michael

    2016-11-01

    TiConverter is a tool developed to ease the application of multiple-point geostatistics whether by the open source Stanford Geostatistical Modeling Software (SGeMS) or other available commercial software. TiConverter has a user-friendly interface and it allows the conversion of 2D training images into numerical representations in four different file formats without the need for additional code writing. These are the ASCII (.txt), the geostatistical software library (GSLIB) (.txt), the Isatis (.dat), and the VTK formats. It performs the conversion based on the RGB color system. In addition, TiConverter offers several useful tools including image resizing, smoothing, and segmenting tools. The purpose of this study is to introduce the TiConverter, and to demonstrate its application and advantages with several examples from the literature.
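
    The RGB-based conversion described above can be sketched as follows: each pixel is assigned the integer code of the nearest reference color. This is an illustrative reconstruction, not TiConverter's actual algorithm; the palette and the toy image are assumptions.

```python
import numpy as np

def image_to_codes(rgb_image, palette):
    """Map an H x W x 3 RGB array to integer codes by nearest palette color."""
    pixels = rgb_image.reshape(-1, 3).astype(float)
    pal = np.asarray(palette, dtype=float)
    # Squared Euclidean distance from every pixel to every palette entry.
    d2 = ((pixels[:, None, :] - pal[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1).reshape(rgb_image.shape[:2])

# Two-facies toy training image: a dark "channel" region in a white background.
palette = [(255, 255, 255), (0, 0, 0)]        # 0 = background, 1 = channel
img = np.full((4, 4, 3), 255, dtype=np.uint8)
img[1:3, 1:4] = (10, 10, 10)                  # near-black region, snaps to code 1
codes = image_to_codes(img, palette)
```

    The resulting integer grid is the kind of numerical representation that can then be written out in ASCII, GSLIB, Isatis, or VTK layouts.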

  5. SRTM Colored Height and Shaded Relief: Corral de Piedra, Argentina

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Volcanism and erosion are prominently seen in this view of the eastern flank of the Andes Mountains taken by the Shuttle Radar Topography Mission (SRTM). The area is southeast of San Martin de Los Andes, Argentina. Eroded peaks up to 2,210 meters (7,260 feet) high are seen in the west (left), but much of the scene consists of lava plateaus that slope gently eastward. These lava flows were most likely derived from volcanic sources in the high mountains. However, younger and more localized volcanic activity is evident in the topographic data as a cone with a surrounding oval-shaped flow near the center of the scene.

    The plateaus are extensively eroded by the Rio Limay (bottom of the image) and the Rio Collon Cura and its tributaries (upper half). The larger stream channels have reached a stable level and are now cutting broad valleys. A few terraces between the levels of the high plateaus and the lower valleys (bottom center, and upper right of the volcanic cone) indicate that stream erosion once temporarily reached a higher stable level before cutting down to the current level. In general, depositional surfaces like lava flows are progressively younger with increasing elevation, while erosional surfaces are progressively younger with decreasing elevation.

    Two visualization methods were combined to produce this image: shading and color coding of topographic height. The shade image was derived by computing topographic slope in the north-south direction. Northern slopes appear bright and southern slopes appear dark, as would be the case at noon at this latitude in the southern hemisphere. Color coding is directly related to topographic height, with green at the lower elevations, rising through yellow, red and magenta to white at the highest elevations.
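
    The two-step visualization described above (north-south slope shading modulating an elevation color ramp) can be sketched as follows. The tiny DEM and the crude three-stop green-yellow-white ramp are illustrative assumptions, not the SRTM processing chain.

```python
import numpy as np

def ns_shade(dem):
    """Brightness derived from the north-south topographic slope."""
    slope = np.gradient(dem.astype(float), axis=0)   # d(height)/d(row)
    return (slope - slope.min()) / (np.ptp(slope) + 1e-12)

def height_color(dem):
    """Crude green -> yellow -> white ramp with increasing elevation."""
    t = (dem - dem.min()) / (np.ptp(dem) + 1e-12)    # normalized height
    r = np.clip(2 * t, 0, 1)                         # red ramps in first
    g = np.ones_like(t)                              # green always full
    b = np.clip(2 * t - 1, 0, 1)                     # blue ramps in last
    return np.dstack([r, g, b])

dem = np.array([[0., 10., 20.], [5., 15., 30.], [0., 5., 10.]])
# Multiply the color-coded height by the shade image to combine the two.
image = height_color(dem) * ns_shade(dem)[..., None]
```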

    Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense, and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise, Washington, DC.

    Size: 57.6 x 40.5 kilometers (35.7 x 25.1 miles) Location: 40.4 deg. South lat., 70.8 deg. West lon. Orientation: North toward the top Image Data: Shaded and colored SRTM elevation model Date Acquired: February 2000

  6. Quantifying nonhomogeneous colors in agricultural materials part I: method development.

    PubMed

    Balaban, M O

    2008-11-01

    Measuring the color of food and agricultural materials using machine vision (MV) has advantages not available by other measurement methods such as subjective tests or use of color meters. The perception of consumers may be affected by the nonuniformity of colors. For relatively uniform colors, average color values similar to those given by color meters can be obtained by MV. For nonuniform colors, various image analysis methods (color blocks, contours, and "color change index"[CCI]) can be applied to images obtained by MV. The degree of nonuniformity can be quantified, depending on the level of detail desired. In this article, the development of the CCI concept is presented. For images with a wide range of hue values, the color blocks method quantifies well the nonhomogeneity of colors. For images with a narrow hue range, the CCI method is a better indicator of color nonhomogeneity.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kimpe, T; Marchessoux, C; Rostang, J

    Purpose: Use of color images in medical imaging has increased significantly in the last few years. As of today there is no agreed standard on how color information should be visualized on medical color displays, resulting in large variability of color appearance and making consistency and quality assurance a challenge. This paper presents a proposal for an extension of the DICOM GSDF towards color. Methods: Visualization needs for several color modalities (multimodality imaging, nuclear medicine, digital pathology, quantitative imaging applications, ...) were studied. On this basis a proposal was made for the desired color behavior of medical color display systems, and its behavior and effect on color medical images was analyzed. Results: Several medical color modalities could benefit from perceptually linear color visualization, for reasons similar to those that motivated the GSDF for greyscale medical images. An extension of the GSDF (Greyscale Standard Display Function) to color is proposed: CSDF (Color Standard Display Function). CSDF is based on deltaE2000 and offers perceptually linear color behavior; it uses the GSDF as its neutral grey behavior. A comparison between sRGB/GSDF and CSDF confirms that CSDF significantly improves perceptual color linearity. Furthermore, the results indicate that, because of the improved perceptual linearity, CSDF has the potential to increase the perceived contrast of clinically relevant color features. Conclusion: There is a need for an extension of the GSDF towards color visualization in order to guarantee consistency and quality. A first proposal (CSDF) for such an extension has been made. The behavior of a CSDF-calibrated display has been characterized and compared with sRGB/GSDF behavior. First results indicate that CSDF could have a positive influence on the perceived contrast of clinically relevant color features and could offer benefits for quantitative imaging applications. The authors are employees of Barco Healthcare.

  8. IMAGE EXPLORER: Astronomical Image Analysis on an HTML5-based Web Application

    NASA Astrophysics Data System (ADS)

    Gopu, A.; Hayashi, S.; Young, M. D.

    2014-05-01

    Large datasets produced by recent astronomical imagers mean that the traditional paradigm for basic visual analysis - typically downloading one's entire image dataset and using desktop clients like DS9, Aladin, etc. - no longer scales, despite advances in desktop computing power and storage. This paper describes Image Explorer, a web framework that offers, in any HTML5-capable web browser on various platforms, several of the basic visualization and analysis functions commonly provided by tools like DS9. It uses a combination of the modern HTML5 canvas, JavaScript, and several layers of lossless PNG tiles produced from the FITS image data. Astronomers can rapidly and simultaneously open several images in their web browser; adjust the intensity min/max cutoffs, the scaling function, and the zoom level; apply color maps; view position and FITS header information; execute commonly used data reduction codes on the corresponding FITS data using the FRIAA framework; and overlay tiles for source catalog objects.

  9. Color transfer method preserving perceived lightness

    NASA Astrophysics Data System (ADS)

    Ueda, Chiaki; Azetsu, Tadahiro; Suetake, Noriaki; Uchino, Eiji

    2016-06-01

    Color transfer originally proposed by Reinhard et al. is a method to change the color appearance of an input image by using the color information of a reference image. The purpose of this study is to modify color transfer so that it works well even when the scenes of the input and reference images are not similar. Concretely, a color transfer method with lightness correction and color gamut adjustment is proposed. The lightness correction is applied to preserve the perceived lightness which is explained by the Helmholtz-Kohlrausch (H-K) effect. This effect is the phenomenon that vivid colors are perceived as brighter than dull colors with the same lightness. Hence, when the chroma is changed by image processing, the perceived lightness is also changed even if the physical lightness is preserved after the image processing. In the proposed method, by considering the H-K effect, color transfer that preserves the perceived lightness after processing is realized. Furthermore, color gamut adjustment is introduced to address the color gamut problem, which is caused by color space conversion. The effectiveness of the proposed method is verified by performing some experiments.
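
    The Reinhard-style baseline that the method above builds on can be sketched as global per-channel mean/standard-deviation matching. Note this simplification is an assumption for brevity: Reinhard et al. perform the matching in an opponent (l-alpha-beta) color space rather than directly on RGB channels as done here, and the proposed method adds lightness correction and gamut adjustment on top.

```python
import numpy as np

def transfer(source, reference):
    """Shift and scale each channel of `source` to match `reference` stats."""
    out = np.empty_like(source, dtype=float)
    for c in range(source.shape[2]):
        s = source[..., c].astype(float)
        r = reference[..., c].astype(float)
        scale = r.std() / (s.std() + 1e-12)          # match spread
        out[..., c] = (s - s.mean()) * scale + r.mean()  # match mean
    return out

rng = np.random.default_rng(0)
src = rng.uniform(0, 100, (8, 8, 3))     # stand-in input image
ref = rng.uniform(100, 200, (8, 8, 3))   # stand-in reference image
result = transfer(src, ref)
```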

  10. Keep It Simple. Teaching Tips for Special Olympic Athletes.

    ERIC Educational Resources Information Center

    Johnston, Judith E.; And Others

    1996-01-01

    Physical educators can help Special Olympics athletes learn cross-lateral delivery techniques for bowling or throwing softballs by color coding the throwing arm and opposing foot. The article explains color coding, presenting teaching tips for both sports. A series of workshops on modifying exercise principles for individuals with physical…

  11. Use of color-coded sleeve shutters accelerates oscillograph channel selection

    NASA Technical Reports Server (NTRS)

    Bouchlas, T.; Bowden, F. W.

    1967-01-01

    Sleeve-type shutters mechanically adjust individual galvanometer light beams onto or away from selected channels on oscillograph papers. In complex test setups, the sleeve-type shutters are color coded to separately identify each oscillograph channel. This technique could be used on any equipment using tubular galvanometer light sources.

  12. Color Coding of Circuit Quantities in Introductory Circuit Analysis Instruction

    ERIC Educational Resources Information Center

    Reisslein, Jana; Johnson, Amy M.; Reisslein, Martin

    2015-01-01

    Learning the analysis of electrical circuits represented by circuit diagrams is often challenging for novice students. An open research question in electrical circuit analysis instruction is whether color coding of the mathematical symbols (variables) that denote electrical quantities can improve circuit analysis learning. The present study…

  13. An evaluation of acquired data as a tool for management of wildlife habitat in Alaska

    NASA Technical Reports Server (NTRS)

    Vantries, B. J. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. Density sliced and digitized imagery of the Kuskokwim/Yukon Delta were analyzed. Color coded images of the isodensity displays were compared with existing vegetation maps of the ERTS-1 frames for the Yukon/Kuskokwim area. A high degree of positive correlation was found to exist between the ERTS-1 image and the conventionally prepared maps. Hydrologic phenomena were also analyzed. Digitization on the South Dakota State Remote Sensing Center's SADE system provided some discrimination among several large lakes in the subject area. However, interpretation must await ground observations and depth measurements. An attempt will be made to classify large water bodies by depth classes.

  14. The Constancy of Colored After-Images

    PubMed Central

    Zeki, Semir; Cheadle, Samuel; Pepper, Joshua; Mylonas, Dimitris

    2017-01-01

    We undertook psychophysical experiments to determine whether the color of the after-image produced by viewing a colored patch which is part of a complex multi-colored scene depends on the wavelength-energy composition of the light reflected from that patch. Our results show that it does not. The after-image, just like the color itself, depends on the ratio of light of different wavebands reflected from it and its surrounds. Hence, traditional accounts of after-images as being the result of retinal adaptation or the perceptual result of physiological opponency, are inadequate. We propose instead that the color of after-images is generated after colors themselves are generated in the visual brain. PMID:28539878

  15. Error threshold for color codes and random three-body Ising models.

    PubMed

    Katzgraber, Helmut G; Bombin, H; Martin-Delgado, M A

    2009-08-28

    We study the error threshold of color codes, a class of topological quantum codes that allow a direct implementation of quantum Clifford gates suitable for entanglement distillation, teleportation, and fault-tolerant quantum computation. We map the error-correction process onto a statistical mechanical random three-body Ising model and study its phase diagram via Monte Carlo simulations. The obtained error threshold of p(c) = 0.109(2) is very close to that of Kitaev's toric code, showing that enhanced computational capabilities do not necessarily imply lower resistance to noise.

  16. RATIO_TOOL - SOFTWARE FOR COMPUTING IMAGE RATIOS

    NASA Technical Reports Server (NTRS)

    Yates, G. L.

    1994-01-01

    Geological studies analyze spectral data in order to gain information on surface materials. RATIO_TOOL is an interactive program for viewing and analyzing large multispectral image data sets that have been created by an imaging spectrometer. While the standard approach to classification of multispectral data is to match the spectrum for each input pixel against a library of known mineral spectra, RATIO_TOOL uses ratios of spectral bands in order to spot significant areas of interest within a multispectral image. Each image band can be viewed iteratively, or a selected image band of the data set can be requested and displayed. When the image ratios are computed, the result is displayed as a gray scale image. At this point a histogram option helps in viewing the distribution of values. A thresholding option can then be used to segment the ratio image result into two to four classes. The segmented image is then color coded to indicate threshold classes and displayed alongside the gray scale image. RATIO_TOOL is written in C language for Sun series computers running SunOS 4.0 and later. It requires the XView toolkit and the OpenWindows window manager (version 2.0 or 3.0). The XView toolkit is distributed with Open Windows. A color monitor is also required. The standard distribution medium for RATIO_TOOL is a .25 inch streaming magnetic tape cartridge in UNIX tar format. An electronic copy of the documentation is included on the program media. RATIO_TOOL was developed in 1992 and is a copyrighted work with all copyright vested in NASA. Sun, SunOS, and OpenWindows are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.
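
    The ratio-threshold-color-code pipeline described above can be sketched in a few lines; the band values, thresholds, and class palette below are illustrative assumptions, not RATIO_TOOL's defaults.

```python
import numpy as np

# Two stand-in spectral bands from a multispectral data set.
band_a = np.array([[10., 20.], [30., 40.]])
band_b = np.array([[10., 10.], [10., 10.]])

ratio = band_a / np.maximum(band_b, 1e-12)   # band ratio, guarding divide-by-zero

# Two thresholds segment the ratio image into three classes (codes 0, 1, 2).
classes = np.digitize(ratio, bins=[1.5, 3.5])

# Color-code the segmented result: blue, green, red per class.
palette = np.array([[0, 0, 255], [0, 255, 0], [255, 0, 0]])
color_coded = palette[classes]               # H x W x 3 class-color image
```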

  17. Estimation of color modification in digital images by CFA pattern change.

    PubMed

    Choi, Chang-Hee; Lee, Hae-Yeoun; Lee, Heung-Kyu

    2013-03-10

    Extensive studies have been carried out on detecting image forgeries such as copy-move, re-sampling, blurring, and contrast enhancement. Although color modification is a common forgery technique, no forensic method for detecting this type of manipulation has been reported. In this paper, we propose a novel algorithm for estimating color modification in images acquired from digital cameras. Most commercial digital cameras are equipped with a color filter array (CFA) for acquiring the color information of each pixel; as a result, the images they produce include a trace of the CFA pattern. This pattern is composed of the basic red, green, and blue (RGB) colors, and it is changed when color modification is carried out on the image. We designed an advanced intermediate value counting method for measuring the change in the CFA pattern and estimating the extent of color modification. The proposed method was verified experimentally using 10,366 test images. The results confirmed the ability of the proposed method to estimate color modification with high accuracy.

  18. Color-Blindness Study: Color Discrimination on the TICCIT System.

    ERIC Educational Resources Information Center

    Asay, Calvin S.; Schneider, Edward W.

    The question studied whether the specific seven TICCIT system colors used within color coding schemes can be a source of confusion, or not seen at all, by the color-blind segment of target populations. Subjects were 11 color-blind and three normally sighted students at Brigham Young University. After a preliminary training exercise to acquaint the…

  19. Recognition memory for colored and black-and-white scenes in normal and color deficient observers (dichromats).

    PubMed

    Brédart, Serge; Cornet, Alyssa; Rakic, Jean-Marie

    2014-01-01

    Color deficient (dichromat) and normal observers' recognition memory for colored and black-and-white natural scenes was evaluated through several parameters: the rate of recognition, discrimination (A'), response bias (B"D), response confidence, and the proportion of conscious recollections (Remember responses) among hits. At the encoding phase, 36 images of natural scenes were each presented for 1 sec. Half of the images were shown in color and half in black-and-white. At the recognition phase, these 36 pictures were intermixed with 36 new images. The participants' task was to indicate whether an image had been presented at the encoding phase or not, to rate their level of confidence in the response, and, in the case of a positive response, to classify it as a Remember, a Know, or a Guess response. Results indicated that accuracy, response discrimination, response bias, and confidence ratings were higher for colored than for black-and-white images; this advantage for colored images was similar in both groups of participants. Rates of Remember responses were not higher for colored images than for black-and-white ones in either group. Interestingly, however, Remember responses were significantly more often based on color information for colored than for black-and-white images in normal observers only, not in dichromats.

  20. South Polar Cap

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 28 May 2004. This image was collected on February 29, 2004, near the end of the southern summer season. The local time at the location of the image was about 2 pm. The image shows an area in the South Polar region.

    The THEMIS VIS camera is capable of capturing color images of the martian surface using its five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from the use of multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation.
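
    The compositing procedure described above can be sketched as follows: three single-filter grayscale images are each contrast enhanced, then combined as the red, green, and blue planes of one image. The tiny arrays stand in for filter images, and the simple linear stretch is an assumption in place of the mission's actual contrast enhancement.

```python
import numpy as np

def stretch(band):
    """Linear contrast stretch of one grayscale band to [0, 1]."""
    b = band.astype(float)
    return (b - b.min()) / (np.ptp(b) + 1e-12)

filt1 = np.array([[100., 150.], [200., 250.]])  # filter assigned to red
filt2 = np.array([[40., 60.], [80., 100.]])     # filter assigned to green
filt3 = np.array([[5., 10.], [15., 20.]])       # filter assigned to blue

# Each band is stretched independently, which is why the apparent color
# variation of the combined scene is exaggerated.
composite = np.dstack([stretch(f) for f in (filt1, filt2, filt3)])
```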

    Image information: VIS instrument. Latitude -84.7, Longitude 9.3 East (350.7 West). 38 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically or geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track directions to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  1. Detection of intracardiac shunt flow in atrial septal defect using a real-time two-dimensional color-coded Doppler flow imaging system and comparison with contrast two-dimensional echocardiography.

    PubMed

    Suzuki, Y; Kambara, H; Kadota, K; Tamaki, S; Yamazato, A; Nohara, R; Osakada, G; Kawai, C

    1985-08-01

    To evaluate the noninvasive detection of shunt flow using a newly developed real-time 2-dimensional color-coded Doppler flow imaging system (D-2DE), 20 patients were examined, including 10 with secundum atrial septal defect (ASD) and 10 control subjects. These results were compared with contrast 2-dimensional echocardiography (C-2DE). Doppler 2DE displayed the blood flow toward the transducer as red and the blood flow away from the transducer as blue in 8 shades, each shade adding green according to the degree of variance in Doppler frequency. In the patients with ASD, D-2DE clearly visualized left-to-right shunt flow in 7 of 10 patients. In 5 of these 7 patients, C-2DE showed a negative contrast effect in the same area of the right atrium. Thus, D-2DE increased the sensitivity over C-2DE for detecting left-to-right shunt flow (from 50% to 70%). However, the specificity was slightly less in D-2DE (90%) than C-2DE (100%). Doppler 2DE could not visualize right-to-left shunt flow in all patients with ASD, though C-2DE showed a positive contrast effect in the left-sided heart in 9 of 10 patients with ASD. Thus, D-2DE is clinically useful for detecting left-to-right shunt flow in patients with ASD.

  2. A novel false color mapping model-based fusion method of visual and infrared images

    NASA Astrophysics Data System (ADS)

    Qi, Bin; Kun, Gao; Tian, Yue-xin; Zhu, Zhen-yu

    2013-12-01

    A fast and efficient image fusion method is presented to generate near-natural colors from panchromatic visual and thermal imaging sensors. First, a set of daytime color reference images is analyzed and a false color mapping principle is proposed according to human visual and emotional habits: object colors should remain invariant after color mapping, differences between the infrared and visual images should be enhanced, and the background color should be consistent with the main scene content. A novel nonlinear color mapping model is then given by introducing the geometric average of the gray levels of the input visual and infrared images together with a weighted-average algorithm. To determine the control parameters of the mapping model, boundary conditions are listed according to the mapping principle above. Fusion experiments show that the new method achieves a near-natural appearance in the fused image and, compared with the traditional TNO algorithm, enhances color contrast and highlights bright infrared objects. Moreover, its low complexity makes real-time processing easy to realize, so it is well suited to nighttime imaging apparatus.
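
    The abstract names two ingredients of the mapping, the geometric average of the visual and infrared gray levels and a weighted average, but does not give the model itself. The sketch below is therefore a hypothetical illustration of how those two ingredients could drive the output channels, not the paper's actual mapping:

```python
import numpy as np

def false_color_fuse(vis, ir, w=0.5):
    """Hypothetical geometric-average false color mapping.

    Only the two ingredients named in the abstract are used: the
    geometric mean of the visual and infrared gray levels, and a
    weighted average controlling each output channel. The channel
    assignments and the weight w are illustrative assumptions.
    vis, ir: float arrays in [0, 1] of the same shape.
    """
    g = np.sqrt(vis * ir)        # geometric average of the two inputs
    r = w * vis + (1 - w) * ir   # mixed channel highlights IR-bright objects
    b = w * vis + (1 - w) * g    # channel biased toward the common content
    return np.clip(np.dstack([r, g, b]), 0.0, 1.0)
```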

  3. Visual wetness perception based on image color statistics.

    PubMed

    Sawayama, Masataka; Adelson, Edward H; Nishida, Shin'ya

    2017-05-01

    Color vision provides humans and animals with the abilities to discriminate colors based on the wavelength composition of light and to determine the location and identity of objects of interest in cluttered scenes (e.g., ripe fruit among foliage). However, we argue that color vision can inform us about much more than color alone. Since a trichromatic image carries more information about the optical properties of a scene than a monochromatic image does, color can help us recognize complex material qualities. Here we show that human vision uses color statistics of an image for the perception of an ecologically important surface condition (i.e., wetness). Psychophysical experiments showed that overall enhancement of chromatic saturation, combined with a luminance tone change that increases the darkness and glossiness of the image, tended to make dry scenes look wetter. Theoretical analysis along with image analysis of real objects indicated that our image transformation, which we call the wetness enhancing transformation, is consistent with actual optical changes produced by surface wetting. Furthermore, we found that the wetness enhancing transformation operator was more effective for the images with many colors (large hue entropy) than for those with few colors (small hue entropy). The hue entropy may be used to separate surface wetness from other surface states having similar optical properties. While surface wetness and surface color might seem to be independent, there are higher order color statistics that can influence wetness judgments, in accord with the ecological statistics. The present findings indicate that the visual system uses color image statistics in an elegant way to help estimate the complex physical status of a scene.
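
    The "wetness enhancing transformation" described above combines a chromatic saturation boost with a darkening luminance tone change. A minimal per-pixel sketch, assuming HSV as the working color space and illustrative gain and gamma values (the paper's exact transformation is not reproduced here):

```python
import colorsys

def wet_transform(rgb, sat_gain=1.5, gamma=1.8):
    """Sketch of a wetness-enhancing transformation: boost chromatic
    saturation and darken the luminance with a tone curve, as the
    abstract describes. sat_gain and gamma are illustrative assumptions.

    rgb: (r, g, b) floats in [0, 1]; returns a transformed triplet.
    """
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    s = min(s * sat_gain, 1.0)   # enhance saturation
    v = v ** gamma               # gamma > 1 darkens, mimicking surface wetting
    return colorsys.hsv_to_rgb(h, s, v)
```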

  4. Color Sparse Representations for Image Processing: Review, Models, and Prospects.

    PubMed

    Barthélemy, Quentin; Larue, Anthony; Mars, Jérôme I

    2015-11-01

    Sparse representations have been extended to deal with color images composed of three channels. A review of dictionary-learning-based sparse representations for color images is made here, detailing the differences between the models and comparing their results on real and simulated data. These models are considered in a unifying framework based on the degrees of freedom of the linear filtering/transformation of the color channels. This framework also shows that the scalar quaternionic linear model is equivalent to constrained matrix-based color filtering, which highlights the filtering implicitly applied through that model. Based on this reformulation, a new color filtering model using unconstrained filters is introduced. In this model, the spatial morphologies of color images are encoded by atoms, and colors are encoded by color filters. Color variability is no longer captured by increasing the dictionary size but by the color filters, giving an efficient color representation.

  5. Specialized Color Targets for Spectral Reflectance Reconstruction of Magnified Images

    NASA Astrophysics Data System (ADS)

    Kruschwitz, Jennifer D. T.

    Digital images are used almost exclusively instead of film to capture visual information across many scientific fields. The colorimetric color representation within these digital images can be relayed from the digital counts produced by the camera with the use of a known color target. In image capture of magnified images, there is currently no reliable color target that can be used at multiple magnifications and give the user a solid understanding of the color ground truth within those images. The first part of this dissertation included the design, fabrication, and testing of a color target produced with optical interference coated microlenses for use in an off-axis illumination, compound microscope. An ideal target was designed to increase the color gamut for colorimetric imaging and provide the necessary "Block Dye" spectral reflectance profiles across the visible spectrum to reduce the number of color patches necessary for multiple filter imaging systems that rely on statistical models for spectral reflectance reconstruction. There are other scientific disciplines that can benefit from a specialized color target to determine the color ground truth in their magnified images and perform spectral estimation. Not every discipline has the luxury of having a multi-filter imaging system. The second part of this dissertation developed two unique ways of using an interference coated color mirror target: one that relies on multiple light-source angles, and one that leverages a dynamic color change with time. The source multi-angle technique would be used for the microelectronic discipline where the reconstructed spectral reflectance would be used to determine a dielectric film thickness on a silicon substrate, and the time varying technique would be used for a biomedical example to determine the thickness of human tear film.

  6. Quantitative Evaluation of Surface Color of Tomato Fruits Cultivated in Remote Farm Using Digital Camera Images

    NASA Astrophysics Data System (ADS)

    Hashimoto, Atsushi; Suehara, Ken-Ichiro; Kameoka, Takaharu

    To measure quantitative surface color information of agricultural products together with ambient information during cultivation, a color calibration method for digital camera images and a Web-based remote color-imaging monitoring system were developed. Single-lens reflex and web digital cameras were used for image acquisition. Tomato images through the post-ripening process were taken by the digital camera both in the standard image acquisition system and under field conditions from morning to evening. Several kinds of images were acquired with the standard RGB color chart set up just behind the tomato fruit on a black matte, and a color calibration was carried out. The influence of sunlight could be experimentally eliminated, and the calibrated color information consistently agreed with the standard values acquired in the system through the post-ripening process. Furthermore, the surface color change of tomatoes on the tree in a greenhouse was remotely monitored during maturation using digital cameras equipped with the Field Server. The acquired digital color images were sent from the Farm Station to the BIFE Laboratory of Mie University via VPN. The time behavior of the tomato surface color change during the maturing process could be measured using a color parameter calculated from the calibrated color images along with the ambient atmospheric record. This study is an important step both in developing surface color analysis for simple and rapid evaluation of crop vigor in the field and in constructing an ambient, networked remote monitoring system for food security, precision agriculture, and agricultural research.

  7. The Artist, the Color Copier, and Digital Imaging.

    ERIC Educational Resources Information Center

    Witte, Mary Stieglitz

    The impact that color-copying technology and digital imaging have had on art, photography, and design is explored. Color copiers have provided new opportunities for direct and spontaneous image making and the potential for new transformations in art. The current generation of digital color copiers permits new directions in imaging, but the…

  8. Chemistry of the Konica Dry Color System

    NASA Astrophysics Data System (ADS)

    Suda, Yoshihiko; Ohbayashi, Keiji; Onodera, Kaoru

    1991-08-01

    While silver halide photosensitive materials offer superiority in image quality -- both in color and black-and-white -- they require chemical solutions for processing, and this can be a drawback. To overcome this, researchers turned to the thermal development of silver halide photographic materials, and met their first success with black-and-white images. Later, with the development of the Konica Dry Color System, color images were finally obtained from a completely dry thermal development system, without the use of water or chemical solutions. The dry color system is characterized by a novel chromogenic color image-forming technology and comprises four processes. (1) With the application of heat, a color developer precursor (CDP) decomposes to generate a p-phenylenediamine color developer (CD). (2) The CD then develops silver salts. (3) Oxidized CD then reacts with couplers to generate color image dyes. (4) Finally, the dyes diffuse from the system's photosensitive sheet to its image-receiving sheet. The authors have analyzed the kinetics of each of the system's four processes. In this paper, they report the kinetics of the system's first process, color developer (CD) generation.

  9. Chromatic Modulator for High Resolution CCD or APS Devices

    NASA Technical Reports Server (NTRS)

    Hartley, Frank T. (Inventor); Hull, Anthony B. (Inventor)

    2003-01-01

    A system for providing high-resolution color separation in electronic imaging. Comb drives controllably oscillate a red-green-blue (RGB) color strip filter system (or otherwise) over an electronic imaging system such as a charge-coupled device (CCD) or active pixel sensor (APS). The color filter is modulated over the imaging array at a rate three or more times the frame rate of the imaging array. In so doing, the underlying active imaging elements are then able to detect separate color-separated images, which are then combined to provide a color-accurate frame which is then recorded as the representation of the recorded image. High pixel resolution is maintained. Registration is obtained between the color strip filter and the underlying imaging array through the use of electrostatic comb drives in conjunction with a spring suspension system.

  10. A review of color blindness for microscopists: guidelines and tools for accommodating and coping with color vision deficiency.

    PubMed

    Keene, Douglas R

    2015-04-01

    "Color blindness" is a variable trait, including individuals with just slight color vision deficiency to those rare individuals with a complete lack of color perception. Approximately 75% of those with color impairment are green diminished; most of those remaining are red diminished. Red-Green color impairment is sex linked with the vast majority being male. The deficiency results in reds and greens being perceived as shades of yellow; therefore red-green images presented to the public will not illustrate regions of distinction to these individuals. Tools are available to authors wishing to accommodate those with color vision deficiency; most notable are components in FIJI (an extension of ImageJ) and Adobe Photoshop. Using these tools, hues of magenta may be substituted for red in red-green images resulting in striking definition for both the color sighted and color impaired. Web-based tools may be used (importantly) by color challenged individuals to convert red-green images archived in web-accessible journal articles into two-color images, which they may then discern.
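
    The magenta-for-red substitution recommended above can be approximated by copying the red channel into the blue channel, so that formerly pure-red pixels render as magenta while green pixels are unchanged. A minimal sketch (the FIJI and Photoshop tools cited in the abstract implement more careful variants):

```python
import numpy as np

def red_to_magenta(img):
    """Convert a red-green image to magenta-green for red-green
    color vision deficiency accommodation.

    Copying the red channel into the blue channel renders formerly
    pure-red pixels as magenta, which stays distinct from green for
    both color-sighted and red-green color-impaired viewers.
    img: array of shape (H, W, 3), RGB channel order.
    """
    out = img.copy()
    out[..., 2] = np.maximum(out[..., 2], out[..., 0])  # blue |= red
    return out
```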

  11. A Complete Color Normalization Approach to Histopathology Images Using Color Cues Computed From Saturation-Weighted Statistics.

    PubMed

    Li, Xingyu; Plataniotis, Konstantinos N

    2015-07-01

    In digital histopathology, tasks of segmentation and disease diagnosis are achieved by quantitative analysis of image content. However, color variation in image samples makes it challenging to produce reliable results. This paper introduces a complete normalization scheme to address the problem of color variation in histopathology images jointly caused by inconsistent biopsy staining and nonstandard imaging conditions. Method: Different from existing normalization methods that either address a partial cause of color variation or lump the causes together, our method identifies the causes of color variation based on a microscopic imaging model and addresses inconsistency in biopsy imaging and staining with an illuminant normalization module and a spectral normalization module, respectively. In evaluation, we use two public datasets that are representative of histopathology images commonly received in clinics to examine the proposed method in terms of robustness to system settings, performance consistency against achromatic pixels, and normalization effectiveness with respect to histological information preservation. As the saturation-weighted statistics proposed in this study generate stable and reliable color cues for stain normalization, our scheme is robust to system parameters and insensitive to image content and achromatic colors. Extensive experimentation suggests that our approach outperforms state-of-the-art normalization methods, being the only approach that succeeds in preserving histological information after normalization. The proposed color normalization solution should be useful in mitigating the effects of color variation in pathology images on subsequent quantitative analysis.

  12. Graphics-Printing Program For The HP Paintjet Printer

    NASA Technical Reports Server (NTRS)

    Atkins, Victor R.

    1993-01-01

    IMPRINT utility computer program developed to print graphics specified in raster files by use of Hewlett-Packard Paintjet(TM) color printer. Reads bit-mapped images from files on UNIX-based graphics workstation and prints out three different types of images: wire-frame images, solid-color images, and gray-scale images. Wire-frame images are in continuous tone or, in case of low resolution, in random gray scale. In case of color images, IMPRINT also prints by use of default palette of solid colors. Written in C language.

  13. Earth and Moon as viewed from Mars

    NASA Technical Reports Server (NTRS)

    2003-01-01

    MGS MOC Release No. MOC2-368, 22 May 2003

    [figure removed for brevity, see original site] Globe diagram illustrates the Earth's orientation as viewed from Mars (North and South America were in view).

    Earth/Moon: This is the first image of Earth ever taken from another planet that actually shows our home as a planetary disk. Because Earth and the Moon are closer to the Sun than Mars, they exhibit phases, just as the Moon, Venus, and Mercury do when viewed from Earth. As seen from Mars by MGS on 8 May 2003 at 13:00 GMT (6:00 AM PDT), Earth and the Moon appeared in the evening sky. The MOC Earth/Moon image has been specially processed to allow both Earth (with an apparent magnitude of -2.5) and the much darker Moon (with an apparent magnitude of +0.9) to be visible together. The bright area at the top of the image of Earth is cloud cover over central and eastern North America. Below that, a darker area includes Central America and the Gulf of Mexico. The bright feature near the center-right of the crescent Earth consists of clouds over northern South America. The image also shows the Earth-facing hemisphere of the Moon, since the Moon was on the far side of Earth as viewed from Mars. The slightly lighter tone of the lower portion of the image of the Moon results from the large and conspicuous ray system associated with the crater Tycho.

    A note about the coloring process: The MGS MOC high resolution camera only takes grayscale (black-and-white) images. To 'colorize' the image, a Mariner 10 Earth/Moon image taken in 1973 was used to color the MOC Earth and Moon picture. The procedure used was as follows: the Mariner 10 image was converted from 24-bit color to 8-bit color using a JPEG to GIF conversion program. The 8-bit color image was converted to 8-bit grayscale and an associated lookup table mapping each gray value of the image to a red-green-blue color triplet (RGB). Each color triplet was root-sum-squared (RSS), and sorted in increasing RSS value. These sorted lists were brightness-to-color maps for the images. Each brightness-to-color map was then used to convert the 8-bit grayscale MOC image to an 8-bit color image. This 8-bit color image was then converted to a 24-bit color image. The color image was edited to return the background to black.
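
    The brightness-to-color mapping described in this caption can be sketched directly: sort the palette's RGB triplets by their root-sum-squared (RSS) value, then index the sorted list with each grayscale value, so darker grays receive the palette's darker colors. The palette and index normalization below are illustrative:

```python
import numpy as np

def colorize(gray, palette):
    """Colorize a grayscale image with a brightness-to-color map.

    palette: (N, 3) array of RGB triplets (e.g. colors drawn from a
    reference color image). Triplets are sorted by root-sum-squared
    (RSS) brightness; each grayscale value then indexes into the
    sorted list, as described in the caption above.
    gray: uint8 array of shape (H, W); returns an (H, W, 3) image.
    """
    rss = np.sqrt((palette.astype(float) ** 2).sum(axis=1))
    lut = palette[np.argsort(rss)]  # brightness-to-color map
    idx = (gray.astype(float) / 255 * (len(lut) - 1)).round().astype(int)
    return lut[idx]
```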

  14. Color Comprehension and Color Categories among Blind Students: A Multi-Sensory Approach in Implementing Concrete Language to Include All Students in Advanced Writing Classes

    ERIC Educational Resources Information Center

    Antarasena, Salinee

    2009-01-01

    This study investigates teaching methods regarding color comprehension and color categorization among blind students, as compared to their non-blind peers and whether they understand and represent the same color comprehension and color categories. Then after digit codes for color comprehension teaching and assistive technology for the blind had…

  15. A multispectral photon-counting double random phase encoding scheme for image authentication.

    PubMed

    Yi, Faliu; Moon, Inkyu; Lee, Yeon H

    2014-05-20

    In this paper, we propose a new method for color image-based authentication that combines multispectral photon-counting imaging (MPCI) and double random phase encoding (DRPE) schemes. The sparsely distributed information from MPCI and the stationary white noise signal from DRPE make intruder attacks difficult. In this authentication method, the original multispectral RGB color image is down-sampled into a Bayer image. The three types of color samples (red, green and blue color) in the Bayer image are encrypted with DRPE and the amplitude part of the resulting image is photon counted. The corresponding phase information that has nonzero amplitude after photon counting is then kept for decryption. Experimental results show that the retrieved images from the proposed method do not visually resemble their original counterparts. Nevertheless, the original color image can be efficiently verified with statistical nonlinear correlations. Our experimental results also show that different interpolation algorithms applied to Bayer images result in different verification effects for multispectral RGB color images.
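
    The DRPE stage of the scheme above applies two statistically independent random phase masks, one in the spatial domain and one in the Fourier domain, producing a stationary-white-noise-like field. A minimal grayscale sketch (the Bayer sampling and photon-counting steps of the paper are omitted):

```python
import numpy as np

def drpe_encrypt(img, seed=0):
    """Double random phase encoding of a 2D real-valued image.

    One random phase mask is applied in the input plane and a second
    in the Fourier plane; the result is a complex noise-like field.
    Returns the ciphertext and the masks needed for decryption.
    """
    rng = np.random.default_rng(seed)
    p1 = np.exp(2j * np.pi * rng.random(img.shape))  # input-plane mask
    p2 = np.exp(2j * np.pi * rng.random(img.shape))  # Fourier-plane mask
    return np.fft.ifft2(np.fft.fft2(img * p1) * p2), (p1, p2)

def drpe_decrypt(enc, masks):
    """Invert DRPE by undoing each phase mask with its conjugate."""
    p1, p2 = masks
    return np.fft.ifft2(np.fft.fft2(enc) * np.conj(p2)) * np.conj(p1)
```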

  16. Color enhancement and image defogging in HSI based on Retinex model

    NASA Astrophysics Data System (ADS)

    Gao, Han; Wei, Ping; Ke, Jun

    2015-08-01

    Retinex is a luminance perceptual algorithm based on color constancy, and it performs well for color enhancement. In some cases, however, the traditional Retinex algorithms, both Single-Scale Retinex (SSR) and Multi-Scale Retinex (MSR) in RGB color space, do not work well and cause color deviation. To solve this problem, we present improved SSR and MSR algorithms. Unlike other Retinex algorithms, we implement Retinex in the HSI (Hue, Saturation, Intensity) color space and use a parameter α to improve image quality. Moreover, the algorithms presented in this paper perform well for image defogging. In contrast with traditional Retinex algorithms, we use the intensity channel to obtain the reflection information of an image. The intensity channel is processed with a Gaussian center-surround image filter to estimate the light information, which should be removed from the intensity channel. We then subtract the light information from the intensity channel to obtain the reflection image, which includes only the attributes of the objects in the scene. Using the reflection image and the parameter α, an arbitrary scale factor set manually, we improve the intensity channel and complete the color enhancement. Our experiments show that this approach works well compared with existing methods for color enhancement. Besides better handling of the color deviation problem and image defogging, a visible improvement in image quality for human contrast perception is also observed.
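
    The single-scale Retinex step described above (a Gaussian center-surround estimate of the illumination, subtracted from the intensity channel) can be sketched as follows. The log-domain subtraction and the Fourier-domain Gaussian are standard SSR choices assumed here for illustration, not taken from the paper:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Gaussian center-surround filter, applied in the Fourier domain
    (the Fourier transform of a Gaussian is again a Gaussian)."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    g = np.exp(-2 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * g))

def ssr_intensity(intensity, sigma=15.0, alpha=1.0, eps=1e-6):
    """Single-scale Retinex on the HSI intensity channel: estimate the
    illumination with a Gaussian surround and subtract it (here in the
    log domain, the usual SSR formulation) to leave the reflection
    component. alpha is the manually set scale factor from the abstract.
    """
    light = gaussian_blur(intensity, sigma)
    reflection = np.log(intensity + eps) - np.log(light + eps)
    return alpha * reflection
```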

  17. Application of computer generated color graphic techniques to the processing and display of three dimensional fluid dynamic data

    NASA Technical Reports Server (NTRS)

    Anderson, B. H.; Putt, C. W.; Giamati, C. C.

    1981-01-01

    Color coding techniques used in the processing of remote sensing imagery were adapted and applied to the fluid dynamics problems associated with turbofan mixer nozzles. The computer generated color graphics were found to be useful in reconstructing the measured flow field from low resolution experimental data to give more physical meaning to this information and in scanning and interpreting the large volume of computer generated data from the three dimensional viscous computer code used in the analysis.

  18. Solar physics applications of computer graphics and image processing

    NASA Technical Reports Server (NTRS)

    Altschuler, M. D.

    1985-01-01

    Computer graphics devices coupled with computers and carefully developed software provide new opportunities to achieve insight into the geometry and time evolution of scalar, vector, and tensor fields and to extract more information quickly and cheaply from the same image data. Two or more different fields which overlay in space can be calculated from the data (and the physics), then displayed from any perspective, and compared visually. The maximum regions of one field can be compared with the gradients of another. Time changing fields can also be compared. Images can be added, subtracted, transformed, noise filtered, frequency filtered, contrast enhanced, color coded, enlarged, compressed, parameterized, and histogrammed, in whole or section by section. Today it is possible to process multiple digital images to reveal spatial and temporal correlations and cross correlations. Data from different observatories taken at different times can be processed, interpolated, and transformed to a common coordinate system.

  19. HDRK-Man: a whole-body voxel model based on high-resolution color slice images of a Korean adult male cadaver

    NASA Astrophysics Data System (ADS)

    Kim, Chan Hyeong; Hyoun Choi, Sang; Jeong, Jong Hwi; Lee, Choonsik; Chung, Min Suk

    2008-08-01

    A Korean voxel model, named 'High-Definition Reference Korean-Man (HDRK-Man)', was constructed using high-resolution color photographic images obtained by serially sectioning the cadaver of a 33-year-old Korean adult male. The body height and weight, the skeletal mass and the dimensions of the individual organs and tissues were adjusted to the reference Korean data. The resulting model was then implemented in a Monte Carlo particle transport code, MCNPX, to calculate the dose conversion coefficients for the internal organs and tissues. The calculated values, overall, were reasonable in comparison with the values from other adult voxel models. HDRK-Man showed higher dose conversion coefficients than the other models because it has a smaller torso and its arms are shifted backward. The developed model is believed to adequately represent average Korean radiation workers and thus can be used for more accurate calculation of dose conversion coefficients for Korean radiation workers in the future.

  20. Melas Chasma - False Color

    NASA Image and Video Library

    2015-10-08

    The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows part of the floor of Melas Chasma. The dark blue region in this false color image is sand dunes. Orbit Number: 12061 Latitude: -12.2215 Longitude: 289.105 Instrument: VIS Captured: 2004-09-02 10:11 http://photojournal.jpl.nasa.gov/catalog/PIA19793

  1. Calibration Image of Earth by Mars Color Imager

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Three days after the Mars Reconnaissance Orbiter's Aug. 12, 2005, launch, the NASA spacecraft was pointed toward Earth and the Mars Color Imager camera was powered up to acquire a suite of color and ultraviolet images of Earth and the Moon. When it gets to Mars, the Mars Color Imager's main objective will be to obtain daily global color and ultraviolet images of the planet to observe martian meteorology by documenting the occurrence of dust storms, clouds, and ozone. This camera will also observe how the martian surface changes over time, including changes in frost patterns and surface brightness caused by dust storms and dust devils.

    The purpose of acquiring an image of Earth and the Moon just three days after launch was to help the Mars Color Imager science team obtain a measure, in space, of the instrument's sensitivity, as well as to check that no contamination occurred on the camera during launch. Prior to launch, the team determined that, three days out from Earth, the planet would only be about 4.77 pixels across, and the Moon would be less than one pixel in size, as seen from the Mars Color Imager's wide-angle perspective. If the team waited any longer than three days to test the camera's performance in space, Earth would be too small to obtain meaningful results.

    The images were acquired by turning Mars Reconnaissance Orbiter toward Earth, then slewing the spacecraft so that the Earth and Moon would pass before each of the five color and two ultraviolet filters of the Mars Color Imager. The distance to Earth was about 1,170,000 kilometers (about 727,000 miles).

    This image shows a color composite view of Mars Color Imager's image of Earth. As expected, it covers only five pixels. This color view has been enlarged five times. The Sun was illuminating our planet from the left, thus only one quarter of Earth is seen from this perspective. North America was in daylight and facing toward the camera at the time the picture was taken; the data from the camera were being transmitted in real time to the Deep Space Network antennas in Goldstone, California.

  2. A Striking Perspective

    NASA Image and Video Library

    2015-04-16

    This image from NASA's MESSENGER spacecraft provides a perspective view of the central portion of Carnegie Rupes, a large tectonic landform that cuts through Duccio crater. The image shows the terrain (variations in topography) as measured by the MLA instrument and the surface as mapped by the MDIS instrument. The image is color-coded to highlight variations in topography (red = high-standing terrain, blue = low-lying terrain). Tectonic landforms such as Carnegie Rupes form on Mercury in response to interior planetary cooling, which results in the overall shrinking of the planet. To make this graphic, 48 individual MDIS images were used in the mosaic. Instruments: Mercury Dual Imaging System (MDIS) and Mercury Laser Altimeter (MLA) Latitude: 57.1° Longitude: 304.0° E Scale: Duccio crater has a diameter of roughly 105 kilometers (65 miles) Height: Portions of Carnegie Rupes are nearly 2 kilometers (1.2 miles) in height Orientation: North is roughly to the left of the image http://photojournal.jpl.nasa.gov/catalog/PIA19422

  3. Shaded Relief with Height as Color, Yucatan Peninsula, Mexico

    NASA Technical Reports Server (NTRS)

    2003-01-01

    This shaded relief image of Mexico's Yucatan Peninsula shows a subtle, but unmistakable, indication of the Chicxulub impact crater. Most scientists now agree that this impact was the cause of the Cretaceous-Tertiary extinction, the event 65 million years ago that marked the sudden extinction of the dinosaurs as well as the majority of life then on Earth.

    Most of the peninsula is visible here, along with the island of Cozumel off the east coast. The Yucatan is a plateau composed mostly of limestone and is an area of very low relief, with elevations varying by less than a few hundred meters (about 500 feet). In this computer-enhanced image the topography has been greatly exaggerated to highlight a semicircular trough, the darker green arcing line at the upper left corner of the peninsula. This trough is only about 3 to 5 meters (10 to 15 feet) deep and about 5 kilometers (3 miles) wide, so subtle that if you walked across it you probably would not notice it; it is a surface expression of the crater's outer boundary. Scientists believe the impact, which was centered just off the coast in the Caribbean, altered the subsurface rocks such that the overlying limestone sediments, which formed later and erode very easily, would preferentially erode in the vicinity of the crater rim. This formed the trough as well as numerous sinkholes (called cenotes), which are visible as small circular depressions.

    Two visualization methods were combined to produce the image: shading and color coding of topographic height. The shade image was derived by computing topographic slope in the northwest-southeast direction, so that northwestern slopes appear bright and southeastern slopes appear dark. Color coding is directly related to topographic height, with green at the lower elevations, rising through yellow and tan, to white at the highest elevations.
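
    The two visualization steps described above, NW-SE slope shading and color coding of topographic height, can be sketched as follows. The green-to-white color ramp is an illustrative stand-in for the one used in the SRTM product:

```python
import numpy as np

def shaded_relief(elev):
    """Shaded relief with height as color: shade from the NW-SE slope
    (NW-facing slopes bright, SE-facing dark), then modulate a
    height-coded green -> yellow -> white ramp by that shading.
    The ramp endpoints are illustrative assumptions.

    elev: 2D elevation array with row 0 at the north, column 0 at the west.
    """
    dy, dx = np.gradient(elev.astype(float))  # south- and east-pointing slopes
    shade = -(dx + dy)                        # bright on northwest-facing slopes
    shade = (shade - shade.min()) / (np.ptp(shade) + 1e-9)

    t = (elev - elev.min()) / (np.ptp(elev) + 1e-9)  # normalized height
    color = np.dstack([
        np.clip(2 * t, 0, 1),      # red ramps in first: green -> yellow
        np.full_like(t, 0.9),      # green stays high throughout
        np.clip(2 * t - 1, 0, 1),  # blue ramps in last: yellow -> white
    ])
    return color * shade[..., None]  # modulate the height ramp by the shading
```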

    For a smaller, annotated version of this image, please select Figure 1, below: [figure removed for brevity, see original site] (Large image: 1.5 mB jpeg)

    Elevation data used in this image were acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C.

    Size: 261 by 162 kilometers (162 by 100 miles) Location: 20.8 degrees North latitude, 89.3 degrees West longitude Orientation: North toward the top, Mercator projection Image Data: shaded and colored SRTM elevation model Original Data Resolution: SRTM 1 arcsecond (about 30 meters or 98 feet) Date Acquired: February 2000

  4. New false color mapping for image fusion

    NASA Astrophysics Data System (ADS)

    Toet, Alexander; Walraven, Jan

    1996-03-01

    A pixel-based color-mapping algorithm is presented that produces a fused false color rendering of two gray-level images representing different sensor modalities. The resulting images have a higher information content than each of the original images and retain sensor-specific image information. The unique component of each image modality is enhanced in the resulting fused color image representation. First, the common component of the two original input images is determined. Second, the common component is subtracted from the original images to obtain the unique component of each image. Third, the unique component of each image modality is subtracted from the image of the other modality. This step serves to enhance the representation of sensor-specific details in the final fused result. Finally, a fused color image is produced by displaying the images resulting from the last step through, respectively, the red and green channels of a color display. The method is applied to fuse thermal and visual images. The results show that the color mapping enhances the visibility of certain details and preserves the specificity of the sensor information. The fused images also have a fairly natural appearance. The fusion scheme involves only operations on corresponding pixels. The resolution of a fused image is therefore directly related to the resolution of the input images. Before fusing, the contrast of the images can be enhanced and their noise can be reduced by standard image-processing techniques. The color mapping algorithm is computationally simple. This implies that the investigated approaches can eventually be applied in real time and that the hardware needed is not too complicated or too voluminous (an important consideration when it has to fit in an airplane, for instance).
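
    The four steps of this fusion scheme can be sketched per pixel as follows, assuming the pixel-wise minimum as the common component (the abstract does not specify how the common component is computed):

```python
import numpy as np

def false_color_fusion(vis, ir):
    """Pixel-based false color fusion of two gray-level modalities,
    following the four steps described above. The pixel-wise minimum
    is assumed as the common component.

    vis, ir: gray-level arrays in [0, 1]; returns an RGB image with the
    enhanced visual image in red and the enhanced IR image in green.
    """
    common = np.minimum(vis, ir)   # 1. common component
    uniq_vis = vis - common        # 2. unique component of each image
    uniq_ir = ir - common
    red = vis - uniq_ir            # 3. subtract the other modality's
    green = ir - uniq_vis          #    unique component
    rgb = np.dstack([red, green, np.zeros_like(red)])  # 4. R/G display
    return np.clip(rgb, 0.0, 1.0)
```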

  5. Quantifying the effect of colorization enhancement on mammogram images

    NASA Astrophysics Data System (ADS)

    Wojnicki, Paul J.; Uyeda, Elizabeth; Micheli-Tzanakou, Evangelia

    2002-04-01

    Current methods of radiological display provide only grayscale images of mammograms. The limitation of the image space to grayscale provides only luminance differences and textures as cues for object recognition within the image. However, color can be an important and significant cue in the detection of shapes and objects. Increasing detection ability allows the radiologist to interpret the images in more detail, improving object recognition and diagnostic accuracy. Color detection experiments using our stimulus system have demonstrated that an observer can only detect an average of 140 levels of grayscale. An optimally colorized image can allow a user to distinguish 250 - 1000 different levels, hence increasing potential image feature detection by 2-7 times. By implementing a colorization map that follows the luminance map of the original grayscale images, the luminance profile is preserved and color is isolated as the enhancement mechanism. The effect of this enhancement mechanism on the shape, frequency composition and statistical characteristics of the Visual Evoked Potential (VEP) are analyzed and presented. Thus, the effectiveness of the image colorization is measured quantitatively using the Visual Evoked Potential (VEP).

  6. True color blood flow imaging using a high-speed laser photography system

    NASA Astrophysics Data System (ADS)

    Liu, Chien-Sheng; Lin, Cheng-Hsien; Sun, Yung-Nien; Ho, Chung-Liang; Hsu, Chung-Chi

    2012-10-01

    Physiological changes in the retinal vasculature are commonly indicative of such disorders as diabetic retinopathy, glaucoma, and age-related macular degeneration. Thus, various methods have been developed for noninvasive clinical evaluation of ocular hemodynamics. However, to the best of our knowledge, current ophthalmic instruments do not provide a true color blood flow imaging capability. Accordingly, we propose a new method for the true color imaging of blood flow using a high-speed pulsed laser photography system. In the proposed approach, monochromatic images of the blood flow are acquired using a system of three cameras and three color lasers (red, green, and blue). A high-quality true color image of the blood flow is obtained by assembling the monochromatic images by means of image realignment and color calibration processes. The effectiveness of the proposed approach is demonstrated by imaging the flow of mouse blood within a microfluidic channel device. The experimental results confirm that the proposed system provides a high-quality true color blood flow imaging capability and therefore has potential for noninvasive clinical evaluation of ocular hemodynamics.

  7. Improvement to the scanning electron microscope image adaptive Canny optimization colorization by pseudo-mapping.

    PubMed

    Lo, T Y; Sim, K S; Tso, C P; Nia, M E

    2014-01-01

    An improvement to the previously proposed adaptive Canny optimization technique for scanning electron microscope image colorization is reported. The additional feature, called the pseudo-mapping technique, is that the grayscale markings are temporarily mapped to a pre-defined pseudo-color map as a means to instill color information for grayscale colors in the chrominance channels. This allows the presence of grayscale markings to be identified; hence optimization colorization of grayscale colors is made possible. This additional feature enhances the flexibility of scanning electron microscope image colorization by providing a wider range of possible color enhancement. Furthermore, the nature of this technique also allows users to adjust the luminance intensities of a selected region of the original image to a certain extent. © 2014 Wiley Periodicals, Inc.

  8. [Image Feature Extraction and Discriminant Analysis of Xinjiang Uygur Medicine Based on Color Histogram].

    PubMed

    Hamit, Murat; Yun, Weikang; Yan, Chuanbo; Kutluk, Abdugheni; Fang, Yang; Alip, Elzat

    2015-06-01

    Image feature extraction is an important part of image processing and an important field of research and application of image processing technology. Uygur medicine is a branch of traditional Chinese medicine and is receiving increasing attention from researchers, but large amounts of Uygur medicine data have not been fully utilized. In this study, we extracted color histogram features from images of herbal and zooid medicines of Xinjiang Uygur. First, we performed preprocessing, including image color enhancement, size normalization and color space transformation. Then we extracted color histogram features and analyzed them with statistical methods. Finally, we evaluated the classification ability of the features by Bayes discriminant analysis. Experimental results showed that high accuracy in Uygur medicine image classification was obtained by using color histogram features. This study should be of help for content-based medical image retrieval of Xinjiang Uygur medicine.
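    A minimal sketch of the kind of color histogram feature described above; the bin count and normalization are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def color_histogram(img, bins=8):
    """Per-channel color histogram feature for an RGB uint8 image.
    Returns a concatenated, normalized 3 * bins feature vector."""
    feats = []
    for c in range(3):
        h, _ = np.histogram(img[..., c], bins=bins, range=(0, 256))
        feats.append(h / h.sum())   # normalize counts to a distribution
    return np.concatenate(feats)
```

    Such fixed-length vectors can then be fed to a statistical classifier (the study uses Bayes discriminant analysis).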

  9. Green disease in optical coherence tomography diagnosis of glaucoma.

    PubMed

    Sayed, Mohamed S; Margolis, Michael; Lee, Richard K

    2017-03-01

    Optical coherence tomography (OCT) has become an integral component of modern glaucoma practice. Utilizing color codes, OCT analysis has rendered glaucoma diagnosis and follow-up simpler and faster for the busy clinician. However, green labeling of OCT parameters suggesting normal values may confer a false sense of security, potentially leading to missed diagnoses of glaucoma and/or glaucoma progression. Conditions in which OCT color coding may be falsely negative (i.e., green disease) are identified. Early glaucoma in which retinal nerve fiber layer (RNFL) thickness and optic disc parameters, albeit labeled green, are asymmetric in both eyes may result in glaucoma being undetected. Progressively decreasing RNFL thickness may reveal the presence of progressive glaucoma that, because of green labeling, can be missed by the clinician. Other ocular conditions that can increase RNFL thickness can make the diagnosis of coexisting glaucoma difficult. Recently introduced progression analysis features of OCT may help detect green disease. Recognition of green disease is of paramount importance in diagnosing and treating glaucoma. Understanding the limitations of imaging technologies coupled with evaluation of serial OCT analyses, prompt clinical examination, and structure-function correlation is important to avoid missing real glaucoma requiring treatment.

  10. Full-color high-definition CGH reconstructing hybrid scenes of physical and virtual objects

    NASA Astrophysics Data System (ADS)

    Tsuchiyama, Yasuhiro; Matsushima, Kyoji; Nakahara, Sumio; Yamaguchi, Masahiro; Sakamoto, Yuji

    2017-03-01

    High-definition CGHs can reconstruct high-quality 3D images comparable to those in conventional optical holography. However, it was difficult to exhibit full-color images reconstructed by these high-definition CGHs, because three CGHs for the RGB colors and a bulky image combiner were needed to produce full-color images. Recently, we reported a novel technique for full-color reconstruction using RGB color filters similar to those used in liquid-crystal panels. This technique allows us to produce full-color high-definition CGHs composed of a single plate and to place them on exhibition. In this paper, using this technique, we demonstrate full-color CGHs that reconstruct hybrid scenes comprised of real-existing physical objects and CG-modeled virtual objects. Here, the wave field of a physical object is obtained from dense multi-viewpoint images by employing the ray-sampling (RS) plane technique. In addition to the technique for full-color capturing and reconstruction of real object fields, the principle and simulation technique for full-color CGHs using RGB color filters are presented.

  11. Chromatic aberration and the roles of double-opponent and color-luminance neurons in color vision.

    PubMed

    Vladusich, Tony

    2007-03-01

    How does the visual cortex encode color? I summarize a theory in which cortical double-opponent color neurons perform a role in color constancy and a complementary set of color-luminance neurons function to selectively correct for color fringes induced by chromatic aberration in the eye. The theory may help to resolve an ongoing debate concerning the functional properties of cortical receptive fields involved in color coding.

  12. Quantum Color Image Encryption Algorithm Based on A Hyper-Chaotic System and Quantum Fourier Transform

    NASA Astrophysics Data System (ADS)

    Tan, Ru-Chao; Lei, Tong; Zhao, Qing-Min; Gong, Li-Hua; Zhou, Zhi-Hong

    2016-12-01

    To improve the slow processing speed of the classical image encryption algorithms and enhance the security of the private color images, a new quantum color image encryption algorithm based on a hyper-chaotic system is proposed, in which the sequences generated by the Chen's hyper-chaotic system are scrambled and diffused with three components of the original color image. Sequentially, the quantum Fourier transform is exploited to fulfill the encryption. Numerical simulations show that the presented quantum color image encryption algorithm possesses large key space to resist illegal attacks, sensitive dependence on initial keys, uniform distribution of gray values for the encrypted image and weak correlation between two adjacent pixels in the cipher-image.

  13. A natural-color mapping for single-band night-time image based on FPGA

    NASA Astrophysics Data System (ADS)

    Wang, Yilun; Qian, Yunsheng

    2018-01-01

    A natural-color mapping method for single-band night-time images based on FPGA can transfer the colors of a reference image to a single-band night-time image, which is consistent with human visual habits and can help observers identify targets. This paper introduces the processing flow of the natural-color mapping algorithm on FPGA. Firstly, the image is transformed by histogram equalization, and the intensity and standard-deviation features of the reference image are stored in SRAM. Then, the intensity and standard-deviation features of the real-time digital images are calculated by the FPGA. At last, the FPGA completes the color mapping by matching pixels between images using the features in the luminance channel.
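    The mean/standard-deviation features the pipeline above stores in SRAM amount to first-order statistics matching per channel; a software sketch of that step (not the FPGA arithmetic itself):

```python
import numpy as np

def match_statistics(target, reference):
    """Transfer the global mean and standard deviation of a reference
    channel onto a target channel (values assumed in [0, 1])."""
    t_mean, t_std = target.mean(), target.std()
    r_mean, r_std = reference.mean(), reference.std()
    out = (target - t_mean) / (t_std + 1e-12) * r_std + r_mean
    return np.clip(out, 0.0, 1.0)
```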

  14. Anodizing color coded anodized Ti6Al4V medical devices for increasing bone cell functions

    PubMed Central

    Ross, Alexandra P; Webster, Thomas J

    2013-01-01

    Current titanium-based implants are often anodized in sulfuric acid (H2SO4) for color coding purposes. However, a crucial parameter in selecting the material for an orthopedic implant is the degree to which it will integrate into the surrounding bone. Loosening at the bone–implant interface can cause catastrophic failure when motion occurs between the implant and the surrounding bone. Recently, a different anodization process using hydrofluoric acid has been shown to increase bone growth on commercially pure titanium and titanium alloys through the creation of nanotubes. The objective of this study was to compare, for the first time, the influence of anodizing a titanium alloy medical device in sulfuric acid for color coding purposes, as is done in the orthopedic implant industry, followed by anodizing the device in hydrofluoric acid to implement nanotubes. Specifically, Ti6Al4V model implant samples were anodized first with sulfuric acid to create color-coding features, and then with hydrofluoric acid to implement surface features to enhance osteoblast functions. The material surfaces were characterized by visual inspection, scanning electron microscopy, contact angle measurements, and energy dispersive spectroscopy. Human osteoblasts were seeded onto the samples for a series of time points and were measured for adhesion and proliferation. After 1 and 2 weeks, the levels of alkaline phosphatase activity and calcium deposition were measured to assess the long-term differentiation of osteoblasts into the calcium depositing cells. The results showed that anodizing in hydrofluoric acid after anodizing in sulfuric acid partially retains color coding and creates unique surface features to increase osteoblast adhesion, proliferation, alkaline phosphatase activity, and calcium deposition. In this manner, this study provides a viable method to anodize an already color coded, anodized titanium alloy to potentially increase bone growth for numerous implant applications. 
PMID:23319862

  15. Anodizing color coded anodized Ti6Al4V medical devices for increasing bone cell functions.

    PubMed

    Ross, Alexandra P; Webster, Thomas J

    2013-01-01

    Current titanium-based implants are often anodized in sulfuric acid (H2SO4) for color coding purposes. However, a crucial parameter in selecting the material for an orthopedic implant is the degree to which it will integrate into the surrounding bone. Loosening at the bone-implant interface can cause catastrophic failure when motion occurs between the implant and the surrounding bone. Recently, a different anodization process using hydrofluoric acid has been shown to increase bone growth on commercially pure titanium and titanium alloys through the creation of nanotubes. The objective of this study was to compare, for the first time, the influence of anodizing a titanium alloy medical device in sulfuric acid for color coding purposes, as is done in the orthopedic implant industry, followed by anodizing the device in hydrofluoric acid to implement nanotubes. Specifically, Ti6Al4V model implant samples were anodized first with sulfuric acid to create color-coding features, and then with hydrofluoric acid to implement surface features to enhance osteoblast functions. The material surfaces were characterized by visual inspection, scanning electron microscopy, contact angle measurements, and energy dispersive spectroscopy. Human osteoblasts were seeded onto the samples for a series of time points and were measured for adhesion and proliferation. After 1 and 2 weeks, the levels of alkaline phosphatase activity and calcium deposition were measured to assess the long-term differentiation of osteoblasts into the calcium depositing cells. The results showed that anodizing in hydrofluoric acid after anodizing in sulfuric acid partially retains color coding and creates unique surface features to increase osteoblast adhesion, proliferation, alkaline phosphatase activity, and calcium deposition. In this manner, this study provides a viable method to anodize an already color coded, anodized titanium alloy to potentially increase bone growth for numerous implant applications.

  16. Astronomy with the color blind

    NASA Astrophysics Data System (ADS)

    Smith, Donald A.; Melrose, Justyn

    2014-12-01

    The standard method to create dramatic color images in astrophotography is to record multiple black and white images, each with a different color filter in the optical path, and then tint each frame with a color appropriate to the corresponding filter. When combined, the resulting image conveys information about the sources of emission in the field, although one should be cautious in assuming that such an image shows what the subject would "really look like" if a person could see it without the aid of a telescope. The details of how the eye processes light have a significant impact on how such images should be understood, and the step from perception to interpretation is even more problematic when the viewer is color blind. We report here on an approach to manipulating stacked tricolor images that, while abandoning attempts to portray the color distribution "realistically," enables those suffering from deuteranomaly (the most common form of color blindness) to perceive color distinctions they would otherwise not be able to see.

  17. Accurate color synthesis of three-dimensional objects in an image

    NASA Astrophysics Data System (ADS)

    Xin, John H.; Shen, Hui-Liang

    2004-05-01

    Our study deals with color synthesis of a three-dimensional object in an image; i.e., given a single image, a target color can be accurately mapped onto the object such that the color appearance of the synthesized object closely resembles that of the actual one. As it is almost impossible to acquire the complete geometric description of the surfaces of an object in an image, this study attempted to recover the implicit description of geometry for the color synthesis. The description was obtained from either a series of spectral reflectances or the RGB signals at different surface positions on the basis of the dichromatic reflection model. The experimental results showed that this implicit image-based representation is related to the object geometry and is sufficient for accurate color synthesis of three-dimensional objects in an image. The method established is applicable to the color synthesis of both rigid and deformable objects and should contribute to color fidelity in virtual design, manufacturing, and retailing.
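    The dichromatic reflection model the study relies on states that each observed color is a geometry-weighted mix of a body (diffuse) term and an interface (specular) term that carries the illuminant color. A minimal RGB sketch, with illustrative names (the paper works with spectral reflectances as well):

```python
import numpy as np

def dichromatic_rgb(body_color, illuminant, m_body, m_surf):
    """Dichromatic reflection model in RGB: the scale factors m_body and
    m_surf depend only on surface geometry, while the two color vectors
    carry the spectral information."""
    return m_body * np.asarray(body_color) + m_surf * np.asarray(illuminant)
```

    Because only the scalar weights vary with surface position, observations at different positions implicitly encode the object geometry, which is what the method recovers for color synthesis.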

  18. New Colors for Histology: Optimized Bivariate Color Maps Increase Perceptual Contrast in Histological Images

    PubMed Central

    Kather, Jakob Nikolas; Weis, Cleo-Aron; Marx, Alexander; Schuster, Alexander K.; Schad, Lothar R.; Zöllner, Frank Gerrit

    2015-01-01

    Background: Accurate evaluation of immunostained histological images is required for reproducible research in many different areas and forms the basis of many clinical decisions. The quality and efficiency of histopathological evaluation is limited by the information content of a histological image, which is primarily encoded as perceivable contrast differences between objects in the image. However, the colors of chromogen and counterstain used for histological samples are not always optimally distinguishable, even under optimal conditions. Methods and Results: In this study, we present a method to extract the bivariate color map inherent in a given histological image and to retrospectively optimize this color map. We use a novel, unsupervised approach based on color deconvolution and principal component analysis to show that the commonly used blue and brown color hues in Hematoxylin-3,3'-Diaminobenzidine (DAB) images are poorly suited for human observers. We then demonstrate that it is possible to construct improved color maps according to objective criteria and that these color maps can be used to digitally re-stain histological images. Validation: To validate whether this procedure improves distinguishability of objects and background in histological images, we re-stain phantom images and N = 596 large histological images of immunostained samples of human solid tumors. We show that perceptual contrast is improved by a factor of 2.56 in phantom images and up to a factor of 2.17 in sets of histological tumor images. Context: Thus, we provide an objective and reliable approach to measure object distinguishability in a given histological image and to maximize visual information available to a human observer. This method could easily be incorporated in digital pathology image viewing systems to improve accuracy and efficiency in research and diagnostics. PMID:26717571

  19. New Colors for Histology: Optimized Bivariate Color Maps Increase Perceptual Contrast in Histological Images.

    PubMed

    Kather, Jakob Nikolas; Weis, Cleo-Aron; Marx, Alexander; Schuster, Alexander K; Schad, Lothar R; Zöllner, Frank Gerrit

    2015-01-01

    Accurate evaluation of immunostained histological images is required for reproducible research in many different areas and forms the basis of many clinical decisions. The quality and efficiency of histopathological evaluation is limited by the information content of a histological image, which is primarily encoded as perceivable contrast differences between objects in the image. However, the colors of chromogen and counterstain used for histological samples are not always optimally distinguishable, even under optimal conditions. In this study, we present a method to extract the bivariate color map inherent in a given histological image and to retrospectively optimize this color map. We use a novel, unsupervised approach based on color deconvolution and principal component analysis to show that the commonly used blue and brown color hues in Hematoxylin-3,3'-Diaminobenzidine (DAB) images are poorly suited for human observers. We then demonstrate that it is possible to construct improved color maps according to objective criteria and that these color maps can be used to digitally re-stain histological images. To validate whether this procedure improves distinguishability of objects and background in histological images, we re-stain phantom images and N = 596 large histological images of immunostained samples of human solid tumors. We show that perceptual contrast is improved by a factor of 2.56 in phantom images and up to a factor of 2.17 in sets of histological tumor images. Thus, we provide an objective and reliable approach to measure object distinguishability in a given histological image and to maximize visual information available to a human observer. This method could easily be incorporated in digital pathology image viewing systems to improve accuracy and efficiency in research and diagnostics.
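    The color deconvolution step the method builds on can be sketched as Ruifrok-Johnston-style optical-density unmixing; the stain matrix here is a user-supplied assumption, not the paper's fitted vectors:

```python
import numpy as np

def color_deconvolution(rgb, stain_matrix):
    """Separate per-pixel stain amounts from an RGB image via
    optical-density unmixing. rgb: floats in (0, 1]; stain_matrix:
    rows are unit optical-density vectors, one per stain."""
    od = -np.log10(np.clip(rgb, 1e-6, 1.0))     # Beer-Lambert optical density
    flat = od.reshape(-1, 3)
    conc = flat @ np.linalg.pinv(stain_matrix)  # least-squares stain amounts
    return conc.reshape(rgb.shape[:-1] + (stain_matrix.shape[0],))
```

    Re-staining then amounts to mapping the recovered stain amounts through a new, perceptually optimized color map instead of the original stain colors.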

  20. Color model comparative analysis for breast cancer diagnosis using H and E stained images

    NASA Astrophysics Data System (ADS)

    Li, Xingyu; Plataniotis, Konstantinos N.

    2015-03-01

    Digital cancer diagnosis is a research realm where signal processing techniques are used to analyze and classify color histopathology images. Unlike grayscale image analysis of magnetic resonance imaging or X-ray, colors in histopathology images convey a large amount of histological information and thus play a significant role in cancer diagnosis. Though color information is widely used in histopathology work, to date there are few studies on color model selection for feature extraction in cancer diagnosis schemes. This paper addresses the problem of color space selection for digital cancer classification using H and E stained images, and investigates the effectiveness of various color models (RGB, HSV, CIE L*a*b*, and the stain-dependent H and E decomposition model) in breast cancer diagnosis. In particular, we build a diagnosis framework as a comparison benchmark and take specific concerns of medical decision systems into account in the evaluation. The evaluation methodologies include feature discriminative power evaluation and final diagnosis performance comparison. Experimentation on a publicly accessible histopathology image set suggests that the H and E decomposition model outperforms the other assessed color spaces. As to the reasons behind the varying performance of the color spaces, our analysis via mutual information estimation demonstrates that the color components in the H and E model are less dependent, and thus most feature discriminative power is collected in one channel instead of spreading out among channels as in other color spaces.

  1. Hepatitis Diagnosis Using Facial Color Image

    NASA Astrophysics Data System (ADS)

    Liu, Mingjia; Guo, Zhenhua

    Facial color diagnosis is an important diagnostic method in traditional Chinese medicine (TCM). However, due to its qualitative, subjective and experience-based nature, traditional facial color diagnosis has a very limited application in clinical medicine. To circumvent the subjective and qualitative problems of facial color diagnosis in Traditional Chinese Medicine, in this paper we present a novel computer-aided facial color diagnosis method (CAFCDM). The method has three parts: a face image database, an image preprocessing module and a diagnosis engine. The face image database is built from a group of 116 patients affected by 2 kinds of liver diseases and 29 healthy volunteers. Quantitative color features are extracted from the facial images by using popular digital image processing techniques. Then, a KNN classifier is employed to model the relationship between the quantitative color features and the diseases. The results show that the method can properly identify three groups: healthy, severe hepatitis with jaundice, and severe hepatitis without jaundice, with accuracy higher than 73%.

  2. Java-based remote viewing and processing of nuclear medicine images: toward "the imaging department without walls".

    PubMed

    Slomka, P J; Elliott, E; Driedger, A A

    2000-01-01

    In nuclear medicine practice, images often need to be reviewed and reports prepared from locations outside the department, usually in the form of hard copy. Although hard-copy images are simple and portable, they do not offer electronic data search and image manipulation capabilities. On the other hand, picture archiving and communication systems or dedicated workstations cannot be easily deployed at numerous locations. To solve this problem, we propose a Java-based remote viewing station (JaRViS) for the reading and reporting of nuclear medicine images using Internet browser technology. JaRViS interfaces to the clinical patient database of a nuclear medicine workstation. All JaRViS software resides on a nuclear medicine department server. The contents of the clinical database can be searched by a browser interface after providing a password. Compressed images with the Java applet and color lookup tables are downloaded on the client side. This paradigm does not require nuclear medicine software to reside on remote computers, which simplifies support and deployment of such a system. To enable versatile reporting of the images, color tables and thresholds can be interactively manipulated and images can be displayed in a variety of layouts. Image filtering, frame grouping (adding frames), and movie display are available. Tomographic mode displays are supported, including gated SPECT. The time to display 14 lung perfusion images in 128 x 128 matrix together with the Java applet and color lookup tables over a V.90 modem is <1 min. SPECT and PET slice reorientation is interactive (<1 s). JaRViS can run on a Windows 95/98/NT or a Macintosh platform with Netscape Communicator or Microsoft Internet Explorer. The performance of Java code for bilinear interpolation, cine display, and filtering approaches that of a standard imaging workstation. It is feasible to set up a remote nuclear medicine viewing station using Java and an Internet or intranet browser. Images can be made easily and cost-effectively available to referring physicians and ambulatory clinics within and outside of the hospital, providing a convenient alternative to film media. We also find this system useful in home reporting of emergency procedures such as lung ventilation-perfusion scans or dynamic studies.

  3. Color engineering in the age of digital convergence

    NASA Astrophysics Data System (ADS)

    MacDonald, Lindsay W.

    1998-09-01

    Digital color imaging has developed over the past twenty years from specialized scientific applications into the mainstream of computing. In addition to the phenomenal growth of computer processing power and storage capacity, great advances have been made in the capabilities and cost-effectiveness of color imaging peripherals. The majority of imaging applications, including the graphic arts, video and film have made the transition from analogue to digital production methods. Digital convergence of computing, communications and television now heralds new possibilities for multimedia publishing and mobile lifestyles. Color engineering, the application of color science to the design of imaging products, is an emerging discipline that poses exciting challenges to the international color imaging community for training, research and standards.

  4. Efficient color correction method for smartphone camera-based health monitoring application.

    PubMed

    Duc Dang; Chae Ho Cho; Daeik Kim; Oh Seok Kwon; Jo Woon Chong

    2017-07-01

    Smartphone health monitoring applications have recently been highlighted due to the rapid development of the hardware and software performance of smartphones. However, the color characteristics of images captured by different smartphone models are dissimilar to each other, and this difference may give non-identical health monitoring results when smartphone health monitoring applications monitor physiological information using their embedded cameras. In this paper, we investigate the differences in the color properties of images captured by different smartphone models and apply a color correction method to adjust the dissimilar color values obtained from different smartphone cameras. Experimental results show that the color-corrected images provide much smaller color intensity errors compared to the uncorrected images. These results can be applied to enhance the consistency of smartphone camera-based health monitoring applications by reducing color intensity errors among the images obtained from different smartphones.
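    One common way to realize such a correction is a least-squares affine map fitted between corresponding colors from two cameras; this is a generic sketch under that assumption, not the paper's exact method:

```python
import numpy as np

def fit_color_correction(measured, reference):
    """Fit a 3x3 linear color-correction matrix plus offset that maps
    colors measured by one camera onto reference colors, by least squares.
    measured, reference: (N, 3) arrays of corresponding RGB values."""
    A = np.hstack([measured, np.ones((measured.shape[0], 1))])  # affine term
    M, *_ = np.linalg.lstsq(A, reference, rcond=None)           # (4, 3)
    return M

def apply_color_correction(M, rgb):
    """Apply a fitted correction to an (N, 3) array of RGB values."""
    A = np.hstack([rgb, np.ones((rgb.shape[0], 1))])
    return A @ M
```

    In practice the corresponding colors would come from both phones photographing the same color chart under the same illumination.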

  5. False Color Mosaic Great Red Spot

    NASA Technical Reports Server (NTRS)

    1996-01-01

    False color representation of Jupiter's Great Red Spot (GRS) taken through three different near-infrared filters of the Galileo imaging system and processed to reveal cloud top height. Images taken through Galileo's near-infrared filters record sunlight beyond the visible range that penetrates to different depths in Jupiter's atmosphere before being reflected by clouds. The Great Red Spot appears pink and the surrounding region blue because of the particular color coding used in this representation. Light reflected by Jupiter at a wavelength (886 nm) where methane strongly absorbs is shown in red. Due to this absorption, only high clouds can reflect sunlight in this wavelength. Reflected light at a wavelength (732 nm) where methane absorbs less strongly is shown in green. Lower clouds can reflect sunlight in this wavelength. Reflected light at a wavelength (757 nm) where there are essentially no absorbers in the Jovian atmosphere is shown in blue: This light is reflected from the deepest clouds. Thus, the color of a cloud in this image indicates its height. Blue or black areas are deep clouds; pink areas are high, thin hazes; white areas are high, thick clouds. This image shows the Great Red Spot to be relatively high, as are some smaller clouds to the northeast and northwest that are surprisingly like towering thunderstorms found on Earth. The deepest clouds are in the collar surrounding the Great Red Spot, and also just to the northwest of the high (bright) cloud in the northwest corner of the image. Preliminary modeling shows these cloud heights vary over 30 km in altitude. This mosaic, of eighteen images (6 in each filter) taken over a 6 minute interval during the second GRS observing sequence on June 26, 1996, has been map-projected to a uniform grid of latitude and longitude. North is at the top.

    Launched in October 1989, Galileo entered orbit around Jupiter on December 7, 1995. The spacecraft's mission is to conduct detailed studies of the giant planet, its largest moons and the Jovian magnetic environment. The Jet Propulsion Laboratory, Pasadena, CA manages the mission for NASA's Office of Space Science, Washington, DC.

    This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo

  6. Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking †

    PubMed Central

    Kiku, Daisuke; Okutomi, Masatoshi

    2017-01-01

    Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI) that improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These are performed based on a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI for a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI can achieve state-of-the-art performance also for the task of multispectral image demosaicking. PMID:29194407
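    The core residual-interpolation idea, interpolating target-minus-guide residuals rather than the sparse target itself, can be sketched in 1-D. The actual ARI algorithm works in 2-D with guided filters and adaptive iteration counts, none of which is modeled here:

```python
import numpy as np

def interp_sparse(vals, mask):
    """Linearly interpolate a 1-D signal known only where mask is True."""
    known = np.flatnonzero(mask)
    return np.interp(np.arange(vals.size), known, vals[known])

def residual_interpolation(target, mask, guide):
    """One RI step: form residuals against a fully sampled guide channel at
    the sampled sites, interpolate the residuals, then add the guide back.
    Residuals are typically smoother than the raw channel, so they
    interpolate with less error."""
    residual = np.where(mask, target - guide, 0.0)
    return guide + interp_sparse(residual, mask)
```

    In the degenerate case where the target is the guide plus a constant offset, RI recovers the target exactly, while direct interpolation of the sparse samples does not.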

  7. Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking.

    PubMed

    Monno, Yusuke; Kiku, Daisuke; Tanaka, Masayuki; Okutomi, Masatoshi

    2017-12-01

    Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI) that improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These are performed based on a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI for a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI can achieve state-of-the-art performance also for the task of multispectral image demosaicking.

  8. Spatial imaging in color and HDR: prometheus unchained

    NASA Astrophysics Data System (ADS)

    McCann, John J.

    2013-03-01

    The Human Vision and Electronic Imaging Conferences (HVEI) at the IS&T/SPIE Electronic Imaging meetings have brought together research in the fundamentals of both vision and digital technology. This conference has incorporated many color disciplines that have contributed to the theory and practice of today's imaging: color constancy, models of vision, digital output, high-dynamic-range imaging, and the understanding of perceptual mechanisms. Before digital imaging, silver halide color was a pixel-based mechanism. Color films are closely tied to colorimetry, the science of matching pixels in a black surround. The quanta catch of the sensitized silver salts determines the amount of colored dyes in the final print. The rapid expansion of digital imaging over the past 25 years has eliminated the limitations of using small local regions in forming images. Spatial interactions can now generate images more like vision. Since the 1950s, neurophysiology has shown that post-receptor neural processing is based on spatial interactions. These results reinforced the findings of 19th-century experimental psychology. This paper reviews the role of HVEI in color, emphasizing the interaction of research on vision and the new algorithms and processes made possible by electronic imaging.

  9. Perceptions of drug color among drug sellers and consumers in rural southwestern Nigeria.

    PubMed

    Brieger, William R; Salami, Kabiru K; Oshiname, Frederick O

    2007-09-01

    Color is commonly used for branding and coding consumer products including medications. People associate certain colors in tablets and capsules with the effect of the drug and the illness for which it is meant. Color coding was introduced in age-specific prepacked antimalarial drugs for preschool aged children in Nigeria by the National Malaria Control Committee. Yellow was designated for the younger ages and blue for the older. The National Malaria Control Committee did not perform market research to learn how their color codes would be perceived by consumers. The study aimed at determining perceptions of both consumers and sellers of medicines at the community level to learn about color likes and dislikes that might influence acceptance of new color-coded child prepacks of antimalarial drugs. Qualitative methods were used to determine perceptions of drug colors. A series of focus group interviews were conducted with male and female community members, and in-depth interviews were held with medicine sellers in the Igbo-Ora community in southwestern Nigeria. Respondents clearly associated medicines with their effects and purpose, for example white drugs for pain relief, red for building blood, blue to aid sleep, and yellow for malaria treatment. Medicine vendors had a low opinion of white colored medicines, but community members were ultimately more concerned about efficacy. The perceived association between yellow and malaria, because of local symptom perceptions of eyes turning yellowish during malaria, yielded a favorable response when consumers were shown the yellow prepacks. The response to blue was noncommittal but consumers indicated that if they were properly educated on the efficacy and function of the new drugs they would likely buy them. Community members will accept yellow as an antimalarial drug but health education will be needed for promoting the idea of blue for malaria and the notion of age-specific packets. Therefore, the strong medicine vendor-training component that accompanied roll out of these prepacks in the pilot states needs to be replicated nationally.

  10. Image analysis of skin color heterogeneity focusing on skin chromophores and the age-related changes in facial skin.

    PubMed

    Kikuchi, Kumiko; Masuda, Yuji; Yamashita, Toyonobu; Kawai, Eriko; Hirao, Tetsuji

    2015-05-01

    Heterogeneity with respect to skin color tone is one of the key factors in visual perception of facial attractiveness and age. However, there have been few studies on quantitative analyses of the color heterogeneity of facial skin. The purpose of this study was to develop image evaluation methods for skin color heterogeneity focusing on skin chromophores and then characterize ethnic differences and age-related changes. A facial imaging system equipped with an illumination unit and a high-resolution digital camera was used to develop image evaluation methods for skin color heterogeneity. First, melanin and/or hemoglobin images were obtained using pigment-specific image-processing techniques, which involved conversion from Commission Internationale de l'Eclairage XYZ color values to melanin and/or hemoglobin indexes as measures of their contents. Second, a spatial frequency analysis with threshold settings was applied to the individual images. Cheek skin images of 194 healthy Asian and Caucasian female subjects were acquired using the imaging system. Applying this methodology, the skin color heterogeneity of Asian and Caucasian faces was characterized. The proposed pigment-specific image-processing techniques allowed visual discrimination of skin redness from skin pigmentation. In the heterogeneity analyses of cheek skin color, age-related changes in melanin were clearly detected in Asian and Caucasian skin. Furthermore, it was found that the heterogeneity indexes of hemoglobin were significantly higher in Caucasian skin than in Asian skin. We have developed evaluation methods for skin color heterogeneity by image analyses based on the major chromophores, melanin and hemoglobin, with special reference to their size. This methodology focusing on skin color heterogeneity should be useful for better understanding of aging and ethnic differences. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  11. High-chroma visual cryptography using interference color of high-order retarder films

    NASA Astrophysics Data System (ADS)

    Sugawara, Shiori; Harada, Kenji; Sakai, Daisuke

    2015-08-01

    Visual cryptography can be used as a method of sharing a secret image through several encrypted images. Conventional visual cryptography can display only monochrome images. We have developed a high-chroma color visual encryption technique using the interference color of high-order retarder films. The encrypted films are composed of a polarizing film and retarder films. The retarder films exhibit interference color when they are sandwiched between two polarizing films. We propose a stacking technique for displaying high-chroma interference color images. A prototype visual cryptography device using high-chroma interference color is developed.
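    For contrast with the film-based color scheme above, the classic (2,2) monochrome visual cryptography construction that it generalizes can be sketched as:

```python
import numpy as np

def make_shares(secret, seed=0):
    """Classic (2,2) visual cryptography: each secret pixel expands to a
    1x2 subpixel pair on two shares. A white (0) pixel gets identical pairs,
    so stacking leaves one subpixel dark; a black (1) pixel gets
    complementary pairs, so stacking turns both dark. 1 = opaque, 0 = clear."""
    rng = np.random.default_rng(seed)
    H, W = secret.shape
    s1 = np.zeros((H, 2 * W), dtype=int)
    s2 = np.zeros((H, 2 * W), dtype=int)
    for y in range(H):
        for x in range(W):
            pair = np.array([1, 0]) if rng.integers(2) else np.array([0, 1])
            s1[y, 2 * x:2 * x + 2] = pair
            s2[y, 2 * x:2 * x + 2] = pair if secret[y, x] == 0 else 1 - pair
    return s1, s2

def stack(s1, s2):
    """Physically overlaying transparencies is a pixelwise OR of opacity."""
    return s1 | s2
```

    Each share in isolation contains one dark subpixel per pair regardless of the secret, which is why a single share leaks nothing.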

  12. South San Francisco Bay, California

    USGS Publications Warehouse

    Dartnell, Peter; Gibbons, Helen

    2007-01-01

    View eastward. Elevations in mapped area color coded: purple (approx 15 m below sea level) to red-orange (approx 90 m above sea level). South San Francisco Bay is very shallow, with a mean water depth of 2.7 m (8.9 ft). Trapezoidal depression near San Mateo Bridge is where sediment has been extracted for use in cement production and as bay fill. Land from USGS digital orthophotographs (DOQs) overlaid on USGS digital elevation models (DEMs). Distance across bottom of image approx 11 km (7 mi); vertical exaggeration 1.5X.
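    The elevation color coding described in this caption is, in essence, a color ramp over the -15 m to 90 m range. A minimal sketch with a hypothetical two-stop ramp (the published figure presumably uses a multi-stop palette; the RGB endpoints here are assumptions):

```python
import numpy as np

def elevation_to_rgb(elev_m, lo=-15.0, hi=90.0,
                     c_lo=(0.5, 0.0, 0.5),    # purple at ~15 m below sea level
                     c_hi=(1.0, 0.33, 0.0)):  # red-orange at ~90 m
    """Map DEM elevations (meters) onto a linear two-color ramp,
    clipping values outside [lo, hi]."""
    t = np.clip((np.asarray(elev_m, dtype=float) - lo) / (hi - lo), 0.0, 1.0)
    c_lo = np.asarray(c_lo, dtype=float)
    c_hi = np.asarray(c_hi, dtype=float)
    return (1.0 - t)[..., None] * c_lo + t[..., None] * c_hi
```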

  13. ImageX: new and improved image explorer for astronomical images and beyond

    NASA Astrophysics Data System (ADS)

    Hayashi, Soichi; Gopu, Arvind; Kotulla, Ralf; Young, Michael D.

    2016-08-01

    The One Degree Imager - Portal, Pipeline, and Archive (ODI-PPA) has included the Image Explorer interactive image visualization tool since it went operational. Portal users were able to quickly open up several ODI images within any HTML5 capable web browser, adjust the scaling, apply color maps, and perform other basic image visualization steps typically done on a desktop client like DS9. However, the original design of the Image Explorer required lossless PNG tiles to be generated and stored for all raw and reduced ODI images thereby taking up tens of TB of spinning disk space even though a small fraction of those images were being accessed by portal users at any given time. It also caused significant overhead on the portal web application and the Apache webserver used by ODI-PPA. We found it hard to merge in improvements made to a similar deployment in another project's portal. To address these concerns, we re-architected Image Explorer from scratch and came up with ImageX, a set of microservices that are part of the IU Trident project software suite, with rapid interactive visualization capabilities useful for ODI data and beyond. We generate a full resolution JPEG image for each raw and reduced ODI FITS image before producing a JPEG tileset, one that can be rendered using the ImageX frontend code at various locations as appropriate within a web portal (for example: on tabular image listings, views allowing quick perusal of a set of thumbnails or other image sifting activities). The new design has decreased spinning disk requirements, uses AngularJS for the client side Model/View code (instead of depending on backend PHP Model/View/Controller code previously used), OpenSeaDragon to render the tile images, and uses nginx and a lightweight NodeJS application to serve tile images thereby significantly decreasing the Time To First Byte latency by a few orders of magnitude. We plan to extend ImageX for non-FITS images including electron microscopy and radiology scan images, and its featureset to include basic functions like image overlay and colormaps. Users needing more advanced visualization and analysis capabilities could use a desktop tool like DS9+IRAF on another IU Trident project called StarDock, without having to download Gigabytes of FITS image data.

  14. High-intensity focused ultrasound ablation assisted using color Doppler imaging for the treatment of hepatocellular carcinomas.

    PubMed

    Fukuda, Hiroyuki; Numata, Kazushi; Nozaki, Akito; Kondo, Masaaki; Morimoto, Manabu; Maeda, Shin; Tanaka, Katsuaki; Ohto, Masao; Ito, Ryu; Ishibashi, Yoshiharu; Oshima, Noriyoshi; Ito, Ayao; Zhu, Hui; Wang, Zhi-Biao

    2013-12-01

    We evaluated the usefulness of color Doppler flow imaging to compensate for the inadequate resolution of ultrasound (US) monitoring during high-intensity focused ultrasound (HIFU) treatment of hepatocellular carcinoma (HCC). US-guided HIFU ablation assisted by color Doppler flow imaging was performed in 11 patients with small HCC (<3 lesions, <3 cm in diameter). The HIFU system (Chongqing Haifu Tech) was used under US guidance. Color Doppler sonographic studies were performed using an HIFU 6150S US imaging unit system and a 2.7-MHz electronic convex probe. Color Doppler images were used because multi-reflections and the emergence of a hyperecho degraded B-mode visualization. In 1 of the 11 patients, multi-reflections were responsible for the poor visualization of the tumor. In 10 cases, the tumor was poorly visualized because of the emergence of a hyperecho. In these cases, the ability to identify the original tumor location on the monitor by referencing the color Doppler images of the portal vein and the hepatic vein was very useful. HIFU treatments were successfully performed in all 11 patients with the assistance of color Doppler imaging. Color Doppler imaging is useful for the treatment of HCC using HIFU, compensating for the occasionally poor visualization provided by conventional B-mode US imaging.

  15. Determination of tire cross-sectional geometric characteristics from a digitally scanned image

    NASA Astrophysics Data System (ADS)

    Danielson, Kent T.

    1995-08-01

    A semi-automated procedure is described for the accurate determination of geometrical characteristics using a scanned image of the tire cross-section. The procedure can be useful for cases when CAD drawings are not available or when a description of the actual cured tire is desired. Curves representing the perimeter of the tire cross-section are determined by an edge tracing scheme, and the plyline and cord-end positions are determined by locations of color intensities. The procedure provides an accurate description of the perimeter of the tire cross-section and the locations of plylines and cord-ends. The position, normals, and curvatures of the cross-sectional surface are included in this description. The locations of the plylines provide the necessary information for determining the ply thicknesses and relative position to a reference surface. Finally, the locations of the cord-ends provide a means to calculate the cord-ends per inch (epi). Menu driven software has been developed to facilitate the procedure using the commercial code, PV-Wave by Visual Numerics, Inc., to display the images. From a single user interface, separate modules are executed for image enhancement, curve fitting the edge trace of the cross-sectional perimeter, and determining the plyline and cord-end locations. The code can run on SUN or SGI workstations and requires the use of a mouse to specify options or identify items on the scanned image.

  16. Determination of tire cross-sectional geometric characteristics from a digitally scanned image

    NASA Technical Reports Server (NTRS)

    Danielson, Kent T.

    1995-01-01

    A semi-automated procedure is described for the accurate determination of geometrical characteristics using a scanned image of the tire cross-section. The procedure can be useful for cases when CAD drawings are not available or when a description of the actual cured tire is desired. Curves representing the perimeter of the tire cross-section are determined by an edge tracing scheme, and the plyline and cord-end positions are determined by locations of color intensities. The procedure provides an accurate description of the perimeter of the tire cross-section and the locations of plylines and cord-ends. The position, normals, and curvatures of the cross-sectional surface are included in this description. The locations of the plylines provide the necessary information for determining the ply thicknesses and relative position to a reference surface. Finally, the locations of the cord-ends provide a means to calculate the cord-ends per inch (epi). Menu driven software has been developed to facilitate the procedure using the commercial code, PV-Wave by Visual Numerics, Inc., to display the images. From a single user interface, separate modules are executed for image enhancement, curve fitting the edge trace of the cross-sectional perimeter, and determining the plyline and cord-end locations. The code can run on SUN or SGI workstations and requires the use of a mouse to specify options or identify items on the scanned image.

  17. Calibration Image of Earth by Mars Color Imager

    NASA Image and Video Library

    2005-08-22

    Three days after the Mars Reconnaissance Orbiter's Aug. 12, 2005, launch, the NASA spacecraft was pointed toward Earth and the Mars Color Imager camera was powered up to acquire a suite of color and ultraviolet images of Earth and the Moon.

  18. Gale Crater - False Color

    NASA Image and Video Library

    2017-02-15

    The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows part of Gale Crater. Basaltic sands are dark blue in this type of false color combination. The Curiosity Rover is located in another portion of Gale Crater, far southwest of this image. Orbit Number: 51803 Latitude: -4.39948 Longitude: 138.116 Instrument: VIS Captured: 2013-08-18 09:04 http://photojournal.jpl.nasa.gov/catalog/PIA21312

  19. Adaptive enhancement for nonuniform illumination images via nonlinear mapping

    NASA Astrophysics Data System (ADS)

    Wang, Yanfang; Huang, Qian; Hu, Jing

    2017-09-01

    Nonuniform illumination images suffer from degraded details because of underexposure, overexposure, or a combination of both. To improve the visual quality of color images, underexposed regions should be lightened, whereas overexposed areas need to be dimmed properly. However, discriminating between underexposure and overexposure is difficult. Unlike traditional methods that use a fixed demarcation value throughout an image, the proposed demarcation changes as local luminance varies and is thus suited to complicated illumination. Based on this locally adaptive demarcation, a nonlinear modification is applied to the image luminance. Further, with the modified luminance, we propose a nonlinear process to reconstruct a luminance-enhanced color image. For every pixel, this nonlinear process takes the luminance change and the original chromaticity into account, thus avoiding exaggerated colors in dark areas and muted colors in very bright regions. Finally, to improve image contrast, a local and image-dependent exponential technique is designed and applied to the RGB channels of the obtained color image. Experimental results demonstrate that our method produces good contrast and vivid color for both nonuniform illumination images and images with normal illumination.
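    A minimal sketch of a locally adaptive demarcation: take the local mean luminance as the per-pixel boundary between under- and overexposure, then lift or dim via a per-pixel gamma. The box-filter demarcation and the gamma schedule below are assumptions, not the paper's actual mapping:

```python
import numpy as np

def local_mean(lum, r=8):
    """Local mean over a (2r+1)^2 window via an integral image; this acts
    as the locally adaptive demarcation value."""
    pad = np.pad(lum, r, mode='edge')
    ii = np.zeros((pad.shape[0] + 1, pad.shape[1] + 1))
    ii[1:, 1:] = pad.cumsum(0).cumsum(1)
    k = 2 * r + 1
    H, W = lum.shape
    s = ii[k:k + H, k:k + W] - ii[:H, k:k + W] - ii[k:k + H, :W] + ii[:H, :W]
    return s / (k * k)

def enhance_luminance(lum, r=8, strength=0.6):
    """Brighten pixels below their local-mean demarcation (gamma < 1) and
    dim pixels above it (gamma > 1). Input luminance is assumed in [0, 1]."""
    lum = np.clip(lum, 0.0, 1.0)
    m = local_mean(lum, r)
    gamma = np.where(lum < m,
                     1.0 - strength * (m - lum),   # lift shadows
                     1.0 + strength * (lum - m))   # tame highlights
    return lum ** gamma
```

    Because the demarcation follows local luminance, the same pixel value can be lifted in a bright neighborhood yet dimmed in a dark one.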

  20. 7 CFR 28.525 - Symbols and code numbers.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... designations in inches. (a) Symbols and Code numbers used for Color Grades of American Upland Cotton. Color... 41 Low Middling LM 51 Strict Good Ordinary SGO 61 Good Ordinary GO 71 Good Middling Light Spotted GM Lt SP 12 Strict Middling Light Spotted SM Lt Sp 22 Middling Light Spotted Mid Lt Sp 32 Strict Low...

  1. 7 CFR 28.525 - Symbols and code numbers.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... designations in inches. (a) Symbols and Code numbers used for Color Grades of American Upland Cotton. Color... 41 Low Middling LM 51 Strict Good Ordinary SGO 61 Good Ordinary GO 71 Good Middling Light Spotted GM Lt SP 12 Strict Middling Light Spotted SM Lt Sp 22 Middling Light Spotted Mid Lt Sp 32 Strict Low...

  2. 7 CFR 28.525 - Symbols and code numbers.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... designations in inches. (a) Symbols and Code numbers used for Color Grades of American Upland Cotton. Color... 41 Low Middling LM 51 Strict Good Ordinary SGO 61 Good Ordinary GO 71 Good Middling Light Spotted GM Lt SP 12 Strict Middling Light Spotted SM Lt Sp 22 Middling Light Spotted Mid Lt Sp 32 Strict Low...

  3. 7 CFR 28.525 - Symbols and code numbers.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... designations in inches. (a) Symbols and Code numbers used for Color Grades of American Upland Cotton. Color... 41 Low Middling LM 51 Strict Good Ordinary SGO 61 Good Ordinary GO 71 Good Middling Light Spotted GM Lt SP 12 Strict Middling Light Spotted SM Lt Sp 22 Middling Light Spotted Mid Lt Sp 32 Strict Low...

  4. 7 CFR 28.525 - Symbols and code numbers.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... designations in inches. (a) Symbols and Code numbers used for Color Grades of American Upland Cotton. Color... 41 Low Middling LM 51 Strict Good Ordinary SGO 61 Good Ordinary GO 71 Good Middling Light Spotted GM Lt SP 12 Strict Middling Light Spotted SM Lt Sp 22 Middling Light Spotted Mid Lt Sp 32 Strict Low...

  5. Learning about Probability from Text and Tables: Do Color Coding and Labeling through an Interactive-User Interface Help?

    ERIC Educational Resources Information Center

    Clinton, Virginia; Morsanyi, Kinga; Alibali, Martha W.; Nathan, Mitchell J.

    2016-01-01

    Learning from visual representations is enhanced when learners appropriately integrate corresponding visual and verbal information. This study examined the effects of two methods of promoting integration, color coding and labeling, on learning about probabilistic reasoning from a table and text. Undergraduate students (N = 98) were randomly…

  6. 78 FR 18611 - Summit on Color in Medical Imaging; Cosponsored Public Workshop; Request for Comments

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-27

    ...] Summit on Color in Medical Imaging; Cosponsored Public Workshop; Request for Comments AGENCY: Food and...: The Food and Drug Administration (FDA) and cosponsor International Color Consortium (ICC) are announcing the following public workshop entitled ``Summit on Color in Medical Imaging: An International...

  7. Cone Photoreceptor Structure in Patients With X-Linked Cone Dysfunction and Red-Green Color Vision Deficiency.

    PubMed

    Patterson, Emily J; Wilk, Melissa; Langlo, Christopher S; Kasilian, Melissa; Ring, Michael; Hufnagel, Robert B; Dubis, Adam M; Tee, James J; Kalitzeos, Angelos; Gardner, Jessica C; Ahmed, Zubair M; Sisk, Robert A; Larsen, Michael; Sjoberg, Stacy; Connor, Thomas B; Dubra, Alfredo; Neitz, Jay; Hardcastle, Alison J; Neitz, Maureen; Michaelides, Michel; Carroll, Joseph

    2016-07-01

    Mutations in the coding sequence of the L and M opsin genes are often associated with X-linked cone dysfunction (such as Bornholm Eye Disease, BED), though the exact color vision phenotype associated with these disorders is variable. We examined individuals with L/M opsin gene mutations to clarify the link between color vision deficiency and cone dysfunction. We recruited 17 males for imaging. The thickness and integrity of the photoreceptor layers were evaluated using spectral-domain optical coherence tomography. Cone density was measured using high-resolution images of the cone mosaic obtained with adaptive optics scanning light ophthalmoscopy. The L/M opsin gene array was characterized in 16 subjects, including at least one subject from each family. There were six subjects with the LVAVA haplotype encoded by exon 3, seven with LIAVA, two with the Cys203Arg mutation encoded by exon 4, and two with a novel insertion in exon 2. Foveal cone structure and retinal thickness were disrupted to a variable degree, even among related individuals with the same L/M array. Our findings provide a direct link between disruption of the cone mosaic and L/M opsin variants. We hypothesize that, in addition to large phenotypic differences between different L/M opsin variants, the ratio of expression of first versus downstream genes in the L/M array contributes to phenotypic diversity. While the L/M opsin mutations underlie the cone dysfunction in all of the subjects tested, the color vision defect can be caused by either the same mutation or a gene rearrangement at the same locus.

  8. The Role of Color and Morphologic Characteristics in Dermoscopic Diagnosis.

    PubMed

    Bajaj, Shirin; Marchetti, Michael A; Navarrete-Dechent, Cristian; Dusza, Stephen W; Kose, Kivanc; Marghoob, Ashfaq A

    2016-06-01

    Both colors and structures are considered important in the dermoscopic evaluation of skin lesions but their relative significance is unknown. To determine if diagnostic accuracy for common skin lesions differs between gray-scale and color dermoscopic images. A convenience sample of 40 skin lesions (8 nevi, 8 seborrheic keratoses, 7 basal cell carcinomas, 7 melanomas, 4 hemangiomas, 4 dermatofibromas, 2 squamous cell carcinomas [SCCs]) was selected and shown to attendees of a dermoscopy course (2014 Memorial Sloan Kettering Cancer Center dermoscopy course). Twenty lesions were shown only once, either in gray-scale (n = 10) or color (n = 10) (nonpaired). Twenty lesions were shown twice, once in gray-scale (n = 20) and once in color (n = 20) (paired). Participants provided their diagnosis and confidence level for each of the 60 images. Of the 261 attendees, 158 participated (60.5%) in the study. Most were attending physicians (n = 76 [48.1%]). Most participants were practicing or training in dermatology (n = 144 [91.1%]). The median (interquartile range) experience evaluating skin lesions and using dermoscopy of participants was 6 (13.5) and 2 (4.0) years, respectively. Diagnostic accuracy and confidence level of participants evaluating gray-scale and color images. Two separate analyses were performed: (1) an unpaired evaluation comparing gray-scale and color images shown either once or for the first time, and (2) a paired evaluation comparing pairs of gray-scale and color images of the same lesion. In univariate analysis of unpaired images, color images were less likely to be diagnosed correctly compared with gray-scale images (odds ratio [OR], 0.8; P < .001). Using gray-scale images as the reference, multivariate analyses of both unpaired and paired images found no association between correct lesion diagnosis and use of color images (OR, 1.0; P = .99, and OR, 1.2; P = .82, respectively). Stratified analysis of paired images using a color by diagnosis interaction term showed that participants were more likely to make a correct diagnosis of SCC and hemangioma in color (P < .001 for both comparisons) and dermatofibroma in gray-scale (P < .001). Morphologic characteristics (ie, structures and patterns), not color, provide the primary diagnostic clue in dermoscopy. Use of gray-scale images may improve teaching of dermoscopy to novices by emphasizing the evaluation of morphology.

  9. [Design and Implementation of Image Interpolation and Color Correction for Ultra-thin Electronic Endoscope on FPGA].

    PubMed

    Luo, Qiang; Yan, Zhuangzhi; Gu, Dongxing; Cao, Lei

    This paper proposes an FPGA implementation of an image interpolation algorithm based on bilinear interpolation and a color correction algorithm based on polynomial regression, addressing the limited number of imaging pixels and the color distortion of the ultra-thin electronic endoscope. Simulation results showed that the proposed design achieves real-time display of 1280 x 720@60 Hz HD video and, using the X-rite color checker as the standard colors, reduces the average color difference by about 30% compared with that before color correction.
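    Polynomial-regression color correction of this kind can be prototyped offline before committing coefficients to hardware. A sketch assuming a degree-2 polynomial basis (the abstract does not give the paper's exact terms) and a set of measured/reference patch pairs such as a 24-patch color checker:

```python
import numpy as np

def poly_basis(rgb):
    """Degree-2 polynomial expansion of RGB values (an assumed basis)."""
    r, g, b = np.asarray(rgb, dtype=float).T
    return np.stack([r, g, b, r * g, g * b, r * b,
                     r ** 2, g ** 2, b ** 2, np.ones_like(r)], axis=1)

def fit_color_correction(measured, reference):
    """Least-squares fit of a matrix M mapping expanded measured colors
    onto reference colors: poly_basis(measured) @ M ~= reference."""
    M, *_ = np.linalg.lstsq(poly_basis(measured), reference, rcond=None)
    return M

def apply_color_correction(M, rgb):
    """Correct an (N, 3) array of RGB samples with the fitted matrix."""
    return poly_basis(rgb) @ M
```

    Once M is fitted, applying it is a handful of multiply-accumulates per pixel, which is what makes this style of correction hardware-friendly.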

  10. What's color got to do with it? The influence of color on visual attention in different categories.

    PubMed

    Frey, Hans-Peter; Honey, Christian; König, Peter

    2008-10-23

    Certain locations attract human gaze in natural visual scenes. Are there measurable features which distinguish these locations from others? While there has been extensive research on luminance-defined features, only a few studies have examined the influence of color on overt attention. In this study, we addressed this question by presenting color-calibrated stimuli and analyzing color features that are known to be relevant for the responses of LGN neurons. We recorded eye movements of 15 human subjects freely viewing colored and grayscale images of seven different categories. All images were also analyzed by the saliency map model (L. Itti, C. Koch, & E. Niebur, 1998). We find that human fixation locations differ between colored and grayscale versions of the same image much more than predicted by the saliency map. Examining the influence of various color features on overt attention, we find two extreme categories: while in rainforest images all color features are salient, none is salient in fractals. In all other categories, color features are selectively salient. This shows that the influence of color on overt attention depends on the type of image. Also, it is crucial to analyze neurophysiologically relevant color features when quantifying the influence of color on attention.

  11. Selection of optimal spectral sensitivity functions for color filter arrays.

    PubMed

    Parmar, Manu; Reeves, Stanley J

    2010-12-01

    A color image meant for human consumption can be appropriately displayed only if at least three distinct color channels are present. Typical digital cameras acquire three-color images with only one sensor. A color filter array (CFA) is placed on the sensor such that only one color is sampled at a particular spatial location. This sparsely sampled signal is then reconstructed to form a color image with information about all three colors at each location. In this paper, we show that the wavelength sensitivity functions of the CFA color filters affect both the color reproduction ability and the spatial reconstruction quality of recovered images. We present a method to select perceptually optimal color filter sensitivity functions based upon a unified spatial-chromatic sampling framework. A cost function independent of particular scenes is defined that expresses the error between a scene viewed by the human visual system and the reconstructed image that represents the scene. A constrained minimization of the cost function is used to obtain optimal values of color-filter sensitivity functions for several periodic CFAs. The sensitivity functions are shown to perform better than typical RGB and CMY color filters in terms of both the s-CIELAB ∆E error metric and a qualitative assessment.
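    The sparse sampling that motivates this work can be illustrated with a standard RGGB Bayer pattern; the paper's contribution, optimizing the filters' spectral sensitivity functions, is not modeled here:

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample a full (H, W, 3) RGB image through an RGGB Bayer pattern:
    each output pixel retains exactly one of the three color samples."""
    H, W, _ = rgb.shape
    mosaic = np.zeros((H, W))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R sites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G sites (even rows)
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G sites (odd rows)
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B sites
    return mosaic
```

    Demosaicking is the inverse problem: reconstructing all three channels at every site from this one-sample-per-pixel signal.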

  12. Effective method for detecting regions of given colors and the features of the region surfaces

    NASA Astrophysics Data System (ADS)

    Gong, Yihong; Zhang, HongJiang

    1994-03-01

    Color can be used as a very important cue for image recognition. In industrial and commercial areas, color is widely used as a trademark or identifying feature in objects such as packaged goods and advertising signs. In image database systems, one may retrieve an image of interest by specifying prominent colors and their locations in the image (image retrieval by content). These facts enable us to detect or identify a target object using colors. However, this task depends mainly on how effectively we can identify a color and detect regions of the given color under possibly non-uniform illumination conditions such as shade, highlights, and strong contrast. In this paper, we present an effective method to detect regions matching given colors, along with the features of the region surfaces. We adopt the HVC color coordinates because they completely separate the luminant and chromatic components of colors. Three basis functions, serving as low-pass, high-pass, and band-pass filters respectively, are introduced.
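    A minimal sketch of luminance-invariant color-region detection, using HSV hue and saturation as a stand-in for the paper's HVC coordinates (an assumption), so that shading changes value but not the detected region:

```python
import colorsys
import numpy as np

def detect_color_region(rgb_img, target_hue, hue_tol=0.05, min_chroma=0.2):
    """Flag pixels of an (H, W, 3) float image whose hue matches target_hue
    (in [0, 1), wrapping around) with sufficient saturation, ignoring the
    value/luminance channel so shade and highlight do not split a region."""
    H, W, _ = rgb_img.shape
    mask = np.zeros((H, W), dtype=bool)
    for y in range(H):
        for x in range(W):
            h, s, _v = colorsys.rgb_to_hsv(*rgb_img[y, x])
            dh = min(abs(h - target_hue), 1.0 - abs(h - target_hue))
            mask[y, x] = (dh <= hue_tol) and (s >= min_chroma)
    return mask
```

    The chroma floor rejects near-gray and near-black pixels, whose hue is numerically meaningless.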

  13. Color standardization and optimization in whole slide imaging.

    PubMed

    Yagi, Yukako

    2011-03-30

    Standardization and validation of the color displayed by digital slides is an important aspect of digital pathology implementation. While the most common reason for color variation is the variance in the protocols and practices in the histology lab, the color displayed can also be affected by variation in capture parameters (for example, illumination and filters), image processing and display factors in the digital systems themselves. We have been developing techniques for color validation and optimization along two paths. The first was based on two standard slides that are scanned and displayed by the imaging system in question. In this approach, one slide is embedded with nine filters with colors selected especially for H&E stained slides (resembling a tiny Macbeth color chart); the specific colors of the nine filters were determined in our previous study and modified for whole slide imaging (WSI). The other slide is an H&E stained mouse embryo. Both of these slides were scanned and the displayed images were compared to a standard. The second approach was based on our previous multispectral imaging research. As a first step, the two-slide method (above) was used to identify inaccurate display of color and its cause, and to understand the importance of accurate color in digital pathology. We have also improved the multispectral-based algorithm for more consistent results in stain standardization. In the near future, the results of the two-slide and multispectral techniques can be combined and will be widely available. We have been conducting a series of research and development projects to improve image quality and to establish image quality standardization. This paper discusses one of the most important aspects of image quality: color.

  14. Research on image complexity evaluation method based on color information

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Duan, Jin; Han, Xue-hui; Xiao, Bo

    2017-11-01

    In order to evaluate the complexity of a color image more effectively and to find the connection between image complexity and image information, this paper presents a method to compute image complexity based on color information. The theoretical analysis first divides complexity at the subjective level into three classes: low complexity, medium complexity, and high complexity. Image features are then extracted, and finally a function relating the complexity value to the color-feature model is established. The experimental results show that this evaluation method can objectively recover the complexity of an image from its features, and that the values obtained by the proposed method agree well with human visual perception of complexity, giving the color-based image complexity measure practical reference value.

  15. Spontaneous Recanalization After Carotid Artery Dissection: The Case for an Ultrasound-Only Monitoring Strategy

    PubMed Central

    Lumsden, Sarah; Rosta, Gabor; Bismuth, Jean; Lumsden, Alan B.; Garami, Zsolt

    2017-01-01

    Dissection of the internal carotid artery (ICA) accounts for 5% to 25% of ischemic strokes in young adults. We report a case of spontaneous recanalization of a traumatic ICA dissection in which carotid duplex (CDU) and transcranial color-coded duplex ultrasound (TCCD) were used. A 47-year-old male presented with intermittent episodes of headache, blurry vision, anisocoria, and loss of taste sensation following a whiplash injury while body surfing. Magnetic resonance angiogram (MRA) of the neck revealed absent flow in the cavernous ICA and a clot at the skull base. Carotid duplex, used to further evaluate flow, demonstrated reverberating color Doppler and spectrum signal. A TCCD showed ICA occlusion and smaller-caliber intracranial ICA. The patient reported for follow-up after 1 month on anticoagulation therapy. Upon his return, CDU and TCCD were normal and the ICA showed normal color and spectrum signals. Computed tomography angiogram confirmed ultrasound findings of a dramatic improvement of ICA patency. Additionally, the patient reported that his headaches had resolved. Extracranial CDU and TCCD are useful for monitoring patient progress in cases of spontaneous recanalization following carotid artery dissection. These inexpensive and noninvasive imaging modalities proved to be critical in the initial and follow-up evaluations of the extracranial and intracranial vascular system, providing a strong alternative to expensive magnetic resonance imaging and invasive angiograms and offering more hemodynamic information than “static” MRA. PMID:29744017

  16. Asteroid Euphrosyne as Seen by WISE

    NASA Image and Video Library

    2015-08-03

    The asteroid Euphrosyne glides across a field of background stars in this time-lapse view from NASA's WISE spacecraft. WISE obtained the images used to create this view over a period of about a day around May 17, 2010, during which it observed the asteroid four times. Because WISE (renamed NEOWISE in 2013) is an infrared telescope, it senses heat from asteroids. Euphrosyne is quite dark in visible light, but glows brightly at infrared wavelengths. This view is a composite of images taken at four different infrared wavelengths: 3.4 microns (color-coded blue), 4.6 microns (cyan), 12 microns (green) and 22 microns (red). The moving asteroid appears as a string of red dots because it is much cooler than the distant background stars. Stars have temperatures in the thousands of degrees, but the asteroid is cooler than room temperature. Thus the stars are represented by shorter wavelength (hotter) blue colors in this view, while the asteroid is shown in longer wavelength (cooler) reddish colors. The WISE spacecraft was put into hibernation in 2011 upon completing its goal of surveying the entire sky in infrared light. WISE cataloged three quarters of a billion objects, including asteroids, stars and galaxies. In August 2013, NASA decided to reinstate the spacecraft on a mission to find and characterize more asteroids. http://photojournal.jpl.nasa.gov/catalog/PIA19645

  17. Color Retinal Image Enhancement Based on Luminosity and Contrast Adjustment.

    PubMed

    Zhou, Mei; Jin, Kai; Wang, Shaoze; Ye, Juan; Qian, Dahong

    2018-03-01

    Many common eye diseases and cardiovascular diseases can be diagnosed through retinal imaging. However, due to uneven illumination, image blurring, and low contrast, retinal images with poor quality are not useful for diagnosis, especially in automated image analysis systems. Here, we propose a new image enhancement method to improve color retinal image luminosity and contrast. A luminance gain matrix, which is obtained by gamma correction of the value channel in the HSV (hue, saturation, and value) color space, is used to enhance each of the R, G, and B (red, green, and blue) channels. Contrast is then enhanced in the luminosity channel of L * a * b * color space by CLAHE (contrast-limited adaptive histogram equalization). Image enhancement by the proposed method is compared to other methods by evaluating quality scores of the enhanced images. The performance of the method is mainly validated on a dataset of 961 poor-quality retinal images. Quality assessment (range 0-1) of image enhancement of this poor dataset indicated that our method improved color retinal image quality from an average of 0.0404 (standard deviation 0.0291) up to an average of 0.4565 (standard deviation 0.1000). The proposed method is shown to achieve superior image enhancement compared to contrast enhancement in other color spaces or by other related methods, while simultaneously preserving image naturalness. This method of color retinal image enhancement may be employed to assist ophthalmologists in more efficient screening of retinal diseases and in development of improved automated image analysis for clinical diagnosis.
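    The luminosity step can be sketched per pixel as follows (pure Python; the gamma value of 2.2 is an illustrative assumption, and the subsequent CLAHE contrast step in L*a*b* space is omitted):

    ```python
    # Luminance-gain enhancement: gamma-correct the HSV value channel
    # (V = max(R, G, B)) and scale all three RGB channels by the same
    # per-pixel gain, which preserves hue and saturation.

    GAMMA = 2.2  # illustrative choice, not the paper's tuned value

    def enhance_pixel(r, g, b, gamma=GAMMA):
        """r, g, b in [0, 1]; returns luminosity-enhanced RGB."""
        v = max(r, g, b)
        if v == 0.0:
            return (0.0, 0.0, 0.0)
        v_corrected = v ** (1.0 / gamma)   # brightens dark values
        gain = v_corrected / v             # one gain for all channels
        return (min(r * gain, 1.0), min(g * gain, 1.0), min(b * gain, 1.0))

    # A dark reddish pixel is brightened without changing its hue:
    out = enhance_pixel(0.2, 0.1, 0.05)
    ```

    Because the same gain multiplies all three channels, the channel ratios (and hence the hue) are unchanged unless clipping occurs.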

  18. Wide spectral-range imaging spectroscopy of photonic crystal microbeads for multiplex biomolecular assay applications

    NASA Astrophysics Data System (ADS)

    Li, Jianping

    2014-05-01

    Suspension assay using optically color-encoded microbeads is a novel way to increase the reaction speed and multiplexing of biomolecular detection and analysis. To boost the detection speed, a hyperspectral imaging (HSI) system is of great interest for quickly decoding the color codes of the microcarriers. The imaging Fourier transform spectrometer (IFTS) is a potential candidate for this task due to its advantages in HSI measurement. However, conventional IFTS is only popular in IR spectral bands because it is easier to track its scanning mirror position at longer wavelengths so that the fundamental Nyquist criterion can be satisfied when sampling the interferograms; the sampling mechanisms for shorter-wavelength IFTS used to be very sophisticated, high-cost and bulky. In order to overcome this handicap and make better use of its advantages for HSI applications, a new wide-spectral-range IFTS platform is proposed based on an optical beam-folding position-tracking technique. This simple technique has successfully extended the spectral range of an IFTS to cover 350-1000 nm. Test results prove that the system has achieved good spectral and spatial resolving performance with instrumentation flexibility. Accurate and fast measurement results on novel colloidal photonic crystal microbeads also demonstrate its practical potential for high-throughput and multiplexed suspension molecular assays.

  19. Passive ranging redundancy reduction in diurnal weather conditions

    NASA Astrophysics Data System (ADS)

    Cha, Jae H.; Abbott, A. Lynn; Szu, Harold H.

    2013-05-01

    Ambiguity in binocular ranging (David Marr's paradox) may be resolved by using two eyes moving from side to side behind an optical bench while integrating multiple views. Moving the head from left to right with one eye closed can also help resolve foreground and background range uncertainty. That empirical experiment implies redundancy in image data, which may be reduced by adopting a 3-D camera imaging model to perform compressive sensing. Here, the compressive sensing concept is examined from the perspective of redundancy reduction in images subject to diurnal and weather variations, for the purpose of resolving range uncertainty under all conditions, such as dawn or dusk, daytime at different light levels, or nighttime in different spectral bands. As an example, a scenario at an intersection on a country road at dawn/dusk is discussed, where the location of a traffic sign needs to be resolved by passive ranging to answer whether it is located on the same side of the road or the opposite side, under the influence of temporal light/color level variation. A spectral band extrapolation via application of the Lagrange Constrained Neural Network (LCNN) learning algorithm is discussed to address lost color restoration at dawn/dusk. A numerical simulation is illustrated along with a code example.

  20. Quantitative characterization of color Doppler images: reproducibility, accuracy, and limitations.

    PubMed

    Delorme, S; Weisser, G; Zuna, I; Fein, M; Lorenz, A; van Kaick, G

    1995-01-01

    A computer-based quantitative analysis for color Doppler images of complex vascular formations is presented. The red-green-blue-signal from an Acuson XP10 is frame-grabbed and digitized. By matching each image pixel with the color bar, color pixels are identified and assigned to the corresponding flow velocity (color value). Data analysis consists of delineation of a region of interest and calculation of the relative number of color pixels in this region (color pixel density) as well as the mean color value. The mean color value was compared to flow velocities in a flow phantom. The thyroid and carotid artery in a volunteer were repeatedly examined by a single examiner to assess intra-observer variability. The thyroids in five healthy controls were examined by three experienced physicians to assess the extent of inter-observer variability and observer bias. The correlation between the mean color value and flow velocity ranged from 0.94 to 0.96 for a range of velocities determined by pulse repetition frequency. The average deviation of the mean color value from the flow velocity was 22% to 41%, depending on the selected pulse repetition frequency (range of deviations, -46% to +66%). Flow velocity was underestimated with inadequately low pulse repetition frequency, or inadequately high reject threshold. An overestimation occurred with inadequately high pulse repetition frequency. The highest intra-observer variability was 22% (relative standard deviation) for the color pixel density, and 9.1% for the mean color value. The inter-observer variation was approximately 30% for the color pixel density, and 20% for the mean color value. In conclusion, computer assisted image analysis permits an objective description of color Doppler images. However, the user must be aware that image acquisition under in vivo conditions as well as physical and instrumental factors may considerably influence the results.
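    The described analysis (matching each pixel against the color bar, then computing color pixel density and mean color value over a region of interest) can be sketched as follows; the color-bar entries, velocities, and match threshold are hypothetical illustration values:

    ```python
    # Sketch of quantitative color Doppler analysis: each ROI pixel is
    # matched against the color-bar calibration entries; matched pixels
    # are counted (color pixel density) and their assigned velocities
    # averaged (mean color value).

    def sq_dist(c1, c2):
        return sum((a - b) ** 2 for a, b in zip(c1, c2))

    def analyze_roi(pixels, color_bar, match_threshold=900):
        """pixels: list of RGB tuples in the ROI.
        color_bar: list of (RGB, velocity_cm_per_s) calibration pairs."""
        matched_velocities = []
        for px in pixels:
            rgb, vel = min(color_bar, key=lambda e: sq_dist(px, e[0]))
            if sq_dist(px, rgb) <= match_threshold:  # reject gray-scale pixels
                matched_velocities.append(vel)
        density = len(matched_velocities) / len(pixels)   # color pixel density
        mean_value = (sum(matched_velocities) / len(matched_velocities)
                      if matched_velocities else 0.0)     # mean color value
        return density, mean_value

    # Hypothetical 3-entry color bar (red shades encode increasing velocity):
    bar = [((120, 0, 0), 10.0), ((190, 40, 0), 30.0), ((255, 120, 0), 60.0)]
    roi = [(118, 2, 1), (250, 118, 5), (80, 80, 80)]  # two flow pixels, one gray
    density, mean_vel = analyze_roi(roi, bar)
    ```

    As the abstract cautions, the numbers such a procedure returns depend strongly on acquisition settings (pulse repetition frequency, reject threshold), not only on the image analysis itself.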

  1. Shaded Relief and Radar Image with Color as Height, Bosporus Strait and Istanbul, Turkey

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The Bosporus (also spelled Bosphorus) is a strait that connects the Black Sea with the Sea of Marmara in the center of this view of northwest Turkey, taken during the Shuttle Radar Topography Mission. The water of the Black Sea at the top of the image and Sea of Marmara below the center are colored blue in this image, along with several large lakes. The largest lake, to the lower right of the Sea of Marmara, is Iznik Lake. The Bosporus (Turkish Bogazici) Strait is considered to be the boundary between Europe and Asia, and the large city of Istanbul, Turkey is located on both sides of the southern end of the strait, visible as a brighter (light green to white) area on the image due to its stronger reflection of radar. Istanbul is the modern name for a city with a long history, previously called Constantinople and Byzantium. It was rebuilt as the capital of the Roman Empire in 330 A.D. by Constantine on the site of an earlier Greek city, and it was later the capital of the Byzantine and Ottoman empires until 1922.

    The Gulf of Izmit is the narrow gulf extending to the east (right) from the Sea of Marmara. The city of Izmit at the end of the gulf was heavily damaged by a large magnitude 7.4 earthquake on August 17, 1999, often called the Izmit earthquake (also known as the Kocaeli, Turkey, earthquake), that killed at least 17,000 people. A previous earthquake under the Gulf of Izmit in 1754 killed at least 2,000 people. The Izmit earthquake ruptured a long section of the North Anatolian Fault system from off the right side of this image continuing under the Gulf of Izmit. Another strand of the North Anatolian Fault system is visible as a sharp linear feature in the topography south of Iznik Lake. Bathymetric surveys show that the North Anatolian Fault system extends beneath and has formed the Sea of Marmara, in addition to the Gulf of Izmit and Iznik Lake. Scientists are studying the North Anatolian Fault system to determine the risk of a large earthquake on the faults close to Istanbul that could kill many more than the 1999 event.

    Three visualization methods were combined to produce this image: shading and color coding of topographic height and radar image intensity. The shade image was derived by computing topographic slope in the northwest-southeast direction. Northwest-facing slopes appear dark and southeast-facing slopes appear bright. Color coding is directly related to topographic height, with green at the lower elevations, rising through yellow and brown to white at the highest elevations. The shade image was combined with the radar intensity image to add detail, especially in the flat areas.
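    The three-layer combination described above (slope shading, height color coding, and radar intensity blending) can be sketched as follows; the color-ramp breakpoints, shading formula, and 80/20 blend weights are illustrative assumptions, not the values used for the actual product:

    ```python
    # Combine shaded relief (NW-SE slope), height color coding, and
    # radar image intensity into one visualization.

    def lerp(a, b, t):
        return tuple(x + (y - x) * t for x, y in zip(a, b))

    # Illustrative green -> yellow -> brown -> white ramp over height in [0, 1].
    RAMP = [(0.0, (0.1, 0.5, 0.1)), (0.4, (0.8, 0.8, 0.2)),
            (0.7, (0.5, 0.35, 0.2)), (1.0, (1.0, 1.0, 1.0))]

    def height_color(h):
        for (h0, c0), (h1, c1) in zip(RAMP, RAMP[1:]):
            if h <= h1:
                return lerp(c0, c1, (h - h0) / (h1 - h0))
        return RAMP[-1][1]

    def render(height, radar):
        """height, radar: 2-D lists of values in [0, 1]; returns RGB image."""
        rows, cols = len(height), len(height[0])
        img = [[None] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                # NW-SE slope via diagonal difference (clamped at edges);
                # SE-facing slopes (NW neighbor higher) render brighter.
                nw = height[max(r - 1, 0)][max(c - 1, 0)]
                se = height[min(r + 1, rows - 1)][min(c + 1, cols - 1)]
                shade = max(0.0, min(1.0, 0.5 + (nw - se)))
                base = height_color(height[r][c])
                # modulate color by shade, then blend in radar detail
                img[r][c] = tuple(0.8 * ch * shade + 0.2 * radar[r][c]
                                  for ch in base)
        return img

    demo = render([[0.0, 0.5], [0.5, 1.0]], [[0.5] * 2] * 2)
    ```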

    Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C.

    Size: 2x2 degrees (168 by 222 kilometers; 104 by 138 miles) Location: 40-42 degrees North latitude, 28-30 degrees East longitude Orientation: North toward the top Image Data: shaded and colored SRTM elevation model, with SRTM radar intensity added Original Data Resolution: SRTM 1 arcsecond (about 30 meters or 98 feet) Date Acquired: February 2000 (SRTM)

  2. Fusion of lens-free microscopy and mobile-phone microscopy images for high-color-accuracy and high-resolution pathology imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2017-03-01

    Digital pathology and telepathology require imaging tools with high-throughput, high-resolution and accurate color reproduction. Lens-free on-chip microscopy based on digital in-line holography is a promising technique towards these needs, as it offers a wide field of view (FOV >20 mm2) and high resolution with a compact, low-cost and portable setup. Color imaging has been previously demonstrated by combining reconstructed images at three discrete wavelengths in the red, green and blue parts of the visible spectrum, i.e., the RGB combination method. However, this RGB combination method is subject to color distortions. To improve the color performance of lens-free microscopy for pathology imaging, here we present a wavelet-based color fusion imaging framework, termed "digital color fusion microscopy" (DCFM), which digitally fuses together a grayscale lens-free microscope image taken at a single wavelength and a low-resolution and low-magnification color-calibrated image taken by a lens-based microscope, which can simply be a mobile phone based cost-effective microscope. We show that the imaging results of an H&E stained breast cancer tissue slide with the DCFM technique come very close to a color-calibrated microscope using a 40x objective lens with 0.75 NA. Quantitative comparison showed 2-fold reduction in the mean color distance using the DCFM method compared to the RGB combination method, while also preserving the high-resolution features of the lens-free microscope. Due to the cost-effective and field-portable nature of both lens-free and mobile-phone microscopy techniques, their combination through the DCFM framework could be useful for digital pathology and telepathology applications, in low-resource and point-of-care settings.

  3. Demosaiced pixel super-resolution in digital holography for multiplexed computational color imaging on-a-chip (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan

    2017-03-01

    Digital holographic on-chip microscopy achieves large space-bandwidth-products (e.g., >1 billion) by making use of pixel super-resolution techniques. To synthesize a digital holographic color image, one can take three sets of holograms representing the red (R), green (G) and blue (B) parts of the spectrum and digitally combine them to synthesize a color image. The data acquisition efficiency of this sequential illumination process can be improved by 3-fold using wavelength-multiplexed R, G and B illumination that simultaneously illuminates the sample, and using a Bayer color image sensor with known or calibrated transmission spectra to digitally demultiplex these three wavelength channels. This demultiplexing step is conventionally used with interpolation-based Bayer demosaicing methods. However, because the pixels of different color channels on a Bayer image sensor chip are not at the same physical location, the conventional interpolation-based demosaicing process generates strong color artifacts, especially at rapidly oscillating hologram fringes, which become even more pronounced through digital wave propagation and phase retrieval processes. Here, we demonstrate that by merging the pixel super-resolution framework into the demultiplexing process, such color artifacts can be greatly suppressed. This novel technique, termed demosaiced pixel super-resolution (D-PSR) for digital holographic imaging, achieves very similar color imaging performance compared to conventional sequential R,G,B illumination, with a 3-fold improvement in image acquisition time and data efficiency. We successfully demonstrated the color imaging performance of this approach by imaging stained Pap smears. The D-PSR technique is broadly applicable to high-throughput, high-resolution digital holographic color microscopy techniques that can be used in resource-limited settings and point-of-care offices.

  4. Enhancement of low light level images using color-plus-mono dual camera.

    PubMed

    Jung, Yong Ju

    2017-05-15

    In digital photography, improving image quality in low-light shooting is an important user need. Unfortunately, conventional smartphone cameras that use a single, small image sensor cannot provide satisfactory quality in low light level images. A color-plus-mono dual camera that consists of two horizontally separated image sensors, which simultaneously captures both a color and mono image pair of the same scene, could be useful for improving the quality of low light level images. However, an incorrect image fusion between the color and mono image pair could also have negative effects, such as the introduction of severe visual artifacts in the fused images. This paper proposes a selective image fusion technique that applies an adaptive guided filter-based denoising and selective detail transfer to only those pixels deemed reliable with respect to binocular image fusion. We employ a dissimilarity measure and binocular just-noticeable-difference (BJND) analysis to identify unreliable pixels that are likely to cause visual artifacts during image fusion via joint color image denoising and detail transfer from the mono image. By constructing an experimental system of a color-plus-mono camera, we demonstrate that the BJND-aware denoising and selective detail transfer is helpful in improving the image quality during low light shooting.
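    The gating idea (fuse mono detail only at pixels deemed reliable) can be sketched in 1-D; the scalar dissimilarity threshold below stands in for the paper's BJND analysis, and a box blur stands in for the adaptive guided filter:

    ```python
    # Selective detail transfer: detail extracted from the mono signal is
    # added to the (denoised) color luminance only where the two sensors
    # agree; disagreeing pixels keep the original color value to avoid
    # fusion artifacts.

    def box_blur(sig, radius=1):
        out = []
        for i in range(len(sig)):
            window = sig[max(0, i - radius): i + radius + 1]
            out.append(sum(window) / len(window))
        return out

    def selective_transfer(color_lum, mono, max_dissimilarity=0.1):
        base = box_blur(color_lum)                 # denoised color luminance
        detail = [m - s for m, s in zip(mono, box_blur(mono))]
        out = []
        for cl, b, m, d in zip(color_lum, base, mono, detail):
            if abs(cl - m) <= max_dissimilarity:   # reliable pixel: fuse
                out.append(b + d)
            else:                                  # unreliable: keep color
                out.append(cl)
        return out

    color_lum = [0.30, 0.31, 0.30, 0.90, 0.30]   # disagreement at index 3
    mono      = [0.30, 0.35, 0.30, 0.30, 0.30]   # sharper, cleaner detail
    fused = selective_transfer(color_lum, mono)
    ```

    At index 3 the large color/mono disagreement exceeds the threshold, so the color value passes through unfused, which is the behavior the paper's reliability gating is designed to produce.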

  5. Acceleration of color computer-generated hologram from three-dimensional scenes with texture and depth information

    NASA Astrophysics Data System (ADS)

    Shimobaba, Tomoyoshi; Kakue, Takashi; Ito, Tomoyoshi

    2014-06-01

    We propose acceleration of color computer-generated holograms (CGHs) from three-dimensional (3D) scenes that are expressed as texture (RGB) and depth (D) images. These images are obtained by 3D graphics libraries and RGB-D cameras: for example, OpenGL and Kinect, respectively. We can regard them as two-dimensional (2D) cross-sectional images along the depth direction. The generation of CGHs from the 2D cross-sectional images requires multiple diffraction calculations. If we use convolution-based diffraction such as the angular spectrum method, the diffraction calculation takes a long time and requires large memory usage because the convolution diffraction calculation requires the expansion of the 2D cross-sectional images to avoid wraparound noise. In this paper, we first describe the acceleration of the diffraction calculation using "band-limited double-step Fresnel diffraction," which does not require the expansion. Next, we describe color CGH acceleration using color space conversion. In general, color CGHs are generated in RGB color space; however, we need to repeat the same calculation for each color component, so the computational burden of color CGH generation increases three-fold compared with monochrome CGH generation. We can reduce the computational burden by using YCbCr color space, because the 2D cross-sectional images in YCbCr color space can be down-sampled without impairing the image quality.
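    The color-space trick can be sketched as follows (BT.601 full-range coefficients are assumed here, since the abstract does not specify the exact conversion matrix; the 2x chroma down-sampling factor is likewise illustrative):

    ```python
    # RGB -> YCbCr conversion (BT.601, full range) followed by 2x
    # down-sampling of a chroma plane. The diffraction calculation then
    # runs on smaller Cb/Cr planes while the perceptually dominant luma
    # plane Y stays at full resolution.

    def rgb_to_ycbcr(r, g, b):
        y  =  0.299 * r + 0.587 * g + 0.114 * b
        cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
        cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
        return y, cb, cr

    def downsample2(plane):
        """Average 2x2 blocks of a 2-D list (2x chroma down-sampling)."""
        return [[(plane[r][c] + plane[r][c + 1] +
                  plane[r + 1][c] + plane[r + 1][c + 1]) / 4.0
                 for c in range(0, len(plane[0]), 2)]
                for r in range(0, len(plane), 2)]

    # 2x2 uniform mid-gray image: luma is 128 and chroma is neutral (128).
    rgb = [[(128, 128, 128), (128, 128, 128)],
           [(128, 128, 128), (128, 128, 128)]]
    y  = [[rgb_to_ycbcr(*px)[0] for px in row] for row in rgb]
    cb = [[rgb_to_ycbcr(*px)[1] for px in row] for row in rgb]
    cb_small = downsample2(cb)  # 1x1 plane after 2x down-sampling
    ```

    Since the chroma planes shrink by 4x in pixel count, two of the three per-channel diffraction calculations become substantially cheaper, which is the source of the speed-up the abstract describes.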

  6. Pet fur color and texture classification

    NASA Astrophysics Data System (ADS)

    Yen, Jonathan; Mukherjee, Debarghar; Lim, SukHwan; Tretter, Daniel

    2007-01-01

    Object segmentation is important in image analysis for imaging tasks such as image rendering and image retrieval. Pet owners have been known to be quite vocal about how important it is to render their pets perfectly. We present here an algorithm for pet (mammal) fur color classification and an algorithm for pet (animal) fur texture classification. Pet fur color classification can be applied as a necessary condition for identifying the regions in an image that may contain pets, much like skin tone classification for human flesh detection. Fur coloration of all mammals is produced by a natural organic pigment called melanin, which has only a very limited color range. We have conducted a statistical analysis and concluded that mammal fur colors can only be levels of gray or one of two colors after proper color quantization. This pet fur color classification algorithm has been applied for peteye detection. We also present here an algorithm for animal fur texture classification using the recently developed multi-resolution directional sub-band Contourlet transform. The experimental results are very promising, as these transforms can identify regions of an image that may contain fur of mammals, scales of reptiles, feathers of birds, etc. Combining the color and texture classification, one can have a set of strong classifiers for identifying possible animals in an image.

  7. Color normalization for robust evaluation of microscopy images

    NASA Astrophysics Data System (ADS)

    Švihlík, Jan; Kybic, Jan; Habart, David

    2015-09-01

    This paper deals with color normalization of microscopy images of Langerhans islets in order to increase robustness of the islet segmentation to illumination changes. The main application is automatic quantitative evaluation of the islet parameters, useful for determining the feasibility of islet transplantation in diabetes. First, background illumination inhomogeneity is compensated and a preliminary foreground/background segmentation is performed. The color normalization itself is done in either lαβ or logarithmic RGB color spaces, by comparison with a reference image. The color-normalized images are segmented using color-based features and pixel-wise logistic regression, trained on manually labeled images. Finally, relevant statistics such as the total islet area are evaluated in order to determine the success likelihood of the transplantation.

  8. Color constancy using bright-neutral pixels

    NASA Astrophysics Data System (ADS)

    Wang, Yanfang; Luo, Yupin

    2014-03-01

    An effective illuminant-estimation approach for color constancy is proposed. Bright and near-neutral pixels are selected to jointly represent the illuminant color and utilized for illuminant estimation. To assess the representing capability of pixels, bright-neutral strength (BNS) is proposed by combining pixel chroma and brightness. Accordingly, a certain percentage of pixels with the largest BNS is selected to be the representative set. For every input image, a proper percentage value is determined via an iterative strategy by seeking the optimal color-corrected image. To compare various color-corrected images of an input image, image color-cast degree (ICCD) is devised using means and standard deviations of RGB channels. Experimental evaluation on standard real-world datasets validates the effectiveness of the proposed approach.
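    The selection-and-correction pipeline can be sketched as follows; the brightness-minus-chroma score below is a simplified stand-in for the paper's BNS measure, and the fixed percentage replaces the paper's iterative ICCD-based search for the optimal percentage:

    ```python
    # Simplified bright-neutral illuminant estimation: score pixels by
    # brightness minus chroma, average the top fraction as the illuminant
    # estimate, then apply a von Kries (diagonal) correction.

    def bns(px):
        r, g, b = px
        brightness = (r + g + b) / 3.0
        chroma = max(px) - min(px)          # low for near-neutral pixels
        return brightness - chroma

    def estimate_illuminant(pixels, top_fraction=0.2):
        ranked = sorted(pixels, key=bns, reverse=True)
        top = ranked[:max(1, int(len(ranked) * top_fraction))]
        n = len(top)
        return tuple(sum(px[k] for px in top) / n for k in range(3))

    def correct(pixels, illum):
        # scale each channel so the estimated illuminant maps to neutral gray
        mean = sum(illum) / 3.0
        gains = tuple(mean / c for c in illum)
        return [tuple(v * g for v, g in zip(px, gains)) for px in pixels]

    # Scene under a yellowish illuminant: bright pixels carry its color.
    scene = [(200, 200, 100), (190, 195, 95), (60, 40, 20), (30, 30, 30)]
    illum = estimate_illuminant(scene, top_fraction=0.5)
    balanced = correct(scene, illum)
    ```

    After correction, the bright pixels that carried the illuminant color become nearly neutral, which is the intended effect of the diagonal correction.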

  9. Image Reconstruction for Hybrid True-Color Micro-CT

    PubMed Central

    Xu, Qiong; Yu, Hengyong; Bennett, James; He, Peng; Zainon, Rafidah; Doesburg, Robert; Opie, Alex; Walsh, Mike; Shen, Haiou; Butler, Anthony; Butler, Phillip; Mou, Xuanqin; Wang, Ge

    2013-01-01

    X-ray micro-CT is an important imaging tool for biomedical researchers. Our group has recently proposed a hybrid “true-color” micro-CT system to improve contrast resolution with lower system cost and radiation dose. The system incorporates an energy-resolved photon-counting true-color detector into a conventional micro-CT configuration, and can be used for material decomposition. In this paper, we demonstrate an interior color-CT image reconstruction algorithm developed for this hybrid true-color micro-CT system. A compressive sensing-based statistical interior tomography method is employed to reconstruct each channel in the local spectral imaging chain, where the reconstructed global gray-scale image from the conventional imaging chain served as the initial guess. Principal component analysis was used to map the spectral reconstructions into the color space. The proposed algorithm was evaluated by numerical simulations, physical phantom experiments, and animal studies. The results confirm the merits of the proposed algorithm, and demonstrate the feasibility of the hybrid true-color micro-CT system. Additionally, a “color diffusion” phenomenon was observed whereby high-quality true-color images are produced not only inside the region of interest, but also in neighboring regions. It appears that harnessing this phenomenon could potentially reduce the color detector size for a given ROI, further reducing system cost and radiation dose. PMID:22481806

  10. Color transfer algorithm in medical images

    NASA Astrophysics Data System (ADS)

    Wang, Weihong; Xu, Yangfa

    2007-12-01

    In the digital virtual human project, image data are acquired from frozen slices of a human body specimen. The color and brightness can differ considerably across a group of images of the same organ. This variation causes great difficulty in edge extraction, segmentation, and the 3D reconstruction process. Thus it is necessary to unify the color of the images. The color transfer algorithm is well suited to this kind of problem. This paper introduces the principle of the algorithm and applies it to medical image processing.
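    The statistics-matching idea behind such color transfer can be sketched minimally as follows (Reinhard-style transfer operates per channel in the decorrelated lαβ space; a single channel is shown here for brevity):

    ```python
    # Per-channel statistics transfer: shift and scale each channel of
    # the source slice so its mean and standard deviation match those of
    # a reference slice, unifying color across a slice series.

    def mean_std(values):
        n = len(values)
        mu = sum(values) / n
        var = sum((v - mu) ** 2 for v in values) / n
        return mu, var ** 0.5

    def transfer_channel(src, ref):
        mu_s, sd_s = mean_std(src)
        mu_r, sd_r = mean_std(ref)
        scale = sd_r / sd_s if sd_s > 0 else 1.0
        return [(v - mu_s) * scale + mu_r for v in src]

    # A dim, low-contrast slice matched to a brighter reference channel;
    # for this contrived 3-point case the result lands approximately on
    # [120, 160, 200], i.e. the reference statistics are reproduced.
    source    = [90.0, 100.0, 110.0]
    reference = [120.0, 160.0, 200.0]
    matched = transfer_channel(source, reference)
    ```

    Applying the same matching to all channels of every slice in a series, against one chosen reference slice, removes the slice-to-slice color and brightness drift described above.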

  11. Displaying the results of three-dimensional analysis using GRAPE. Part one: vector graphics. [In FORTRAN for CDC 7600

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, B.E.

    1979-10-01

    GRAPE is a display program for three-dimensional polygon and polyhedral models. It can produce line-drawing and continuous-tone black and white or color images in still frame or movie mode. The code was written specifically to be a post-processor for finite element and finite difference analyses. It runs on the CDC 7600 computer, and is compiled with the LLL FTN system. The allocation of storage is dynamic. There are presently three data paths into the code. The first is the binary interface from the analysis codes; this, along with the other databases, is described. The second data path is the SAMPP format, and the last is the MOVIE format. The code structure is described first; then the commands are discussed in general terms to try to give the user some feel for what they do. The next section deals with the exact format of the commands by overlay. Then examples are given and discussed. Next, the various output options are covered. 57 figures. (RWR)

  12. Global Binary Continuity for Color Face Detection With Complex Background

    NASA Astrophysics Data System (ADS)

    Belavadi, Bhaskar; Mahendra Prashanth, K. V.; Joshi, Sujay S.; Suprathik, N.

    2017-08-01

    In this paper, we propose a method to detect human faces in color images with complex backgrounds. The proposed algorithm makes use of two color space models, HSV and YCgCr. The color-segmented image is filled uniformly with a single color (binary), and then all unwanted discontinuous lines are removed to get the final image. Experimental results on the Caltech database show that the proposed model accomplishes far better segmentation for faces of varying orientation, skin color, and background environment.
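
    The paper's exact thresholds are not given in the abstract; the sketch below illustrates the general color-space segmentation idea with an HSV range check using common skin-detection literature values (the YCgCr stage would be analogous):

```python
import numpy as np

def rgb_to_hsv(img):
    """Vectorized RGB->HSV for float images in [0, 1]; H in degrees."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mx, mn = img.max(axis=-1), img.min(axis=-1)
    d = mx - mn
    h = np.zeros_like(mx)
    nz = d > 0
    safe = np.where(d == 0, 1, d)
    idx = nz & (mx == r); h[idx] = (60 * (g - b) / safe)[idx] % 360
    idx = nz & (mx == g); h[idx] = (60 * (b - r) / safe + 120)[idx]
    idx = nz & (mx == b); h[idx] = (60 * (r - g) / safe + 240)[idx]
    s = np.where(mx > 0, d / np.where(mx == 0, 1, mx), 0)
    return h, s, mx

def skin_mask(img):
    """Binary skin mask from an HSV range check -- a stand-in for the
    paper's two-color-space rule; the thresholds below are common
    literature values, not the authors' own."""
    h, s, v = rgb_to_hsv(img)
    return (h < 50) & (s > 0.1) & (s < 0.7) & (v > 0.3)

skin = np.full((4, 4, 3), [0.9, 0.6, 0.5])   # skin-like tone
leaf = np.full((4, 4, 3), [0.1, 0.6, 0.2])   # green background
```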

  13. Superresolution with the focused plenoptic camera

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Chunev, Georgi; Lumsdaine, Andrew

    2011-03-01

    Digital images from a CCD or CMOS sensor with a color filter array must undergo a demosaicing process to combine the separate color samples into a single color image. This interpolation process can interfere with the subsequent superresolution process. Plenoptic superresolution, which relies on precise sub-pixel sampling across captured microimages, is particularly sensitive to such resampling of the raw data. In this paper we present an approach for superresolving plenoptic images that takes place at the time of demosaicing the raw color image data. Our approach exploits the interleaving provided by typical color filter arrays (e.g., Bayer filter) to further refine plenoptic sub-pixel sampling. Our rendering algorithm treats the color channels in a plenoptic image separately, which improves final superresolution by a factor of two. With appropriate plenoptic capture we show the theoretical possibility for rendering final images at full sensor resolution.

  14. Practical color vision tests for air traffic control applicants: en route center and terminal facilities.

    PubMed

    Mertens, H W; Milburn, N J; Collins, W E

    2000-12-01

    Two practical color vision tests were developed and validated for use in screening Air Traffic Control Specialist (ATCS) applicants for work at en route center or terminal facilities. The development of the tests involved careful reproduction/simulation of color-coded materials from the most demanding, safety-critical color task performed in each type of facility. The tests were evaluated using 106 subjects with normal color vision and 85 with color vision deficiency. The en route center test, named the Flight Progress Strips Test (FPST), required the identification of critical red/black coding in computer printing and handwriting on flight progress strips. The terminal option test, named the Aviation Lights Test (ALT), simulated red/green/white aircraft lights that must be identified in night ATC tower operations. Color-coding is a non-redundant source of safety-critical information in both tasks. The FPST was validated by direct comparison of responses to strip reproductions with responses to the original flight progress strips and a set of strips selected independently. Validity was high; Kappa = 0.91 with original strips as the validation criterion and 0.86 with different strips. The light point stimuli of the ALT were validated physically with a spectroradiometer. The reliabilities of the FPST and ALT were estimated with Cronbach's alpha as 0.93 and 0.98, respectively. The high job-relevance, validity, and reliability of these tests increase the effectiveness and fairness of ATCS color vision testing.

  15. Single underwater image enhancement based on color cast removal and visibility restoration

    NASA Astrophysics Data System (ADS)

    Li, Chongyi; Guo, Jichang; Wang, Bo; Cong, Runmin; Zhang, Yan; Wang, Jian

    2016-05-01

    Images taken under underwater conditions usually exhibit color cast and serious loss of contrast and visibility, and such degraded underwater images are inconvenient for observation and analysis. In order to address these problems, an underwater image-enhancement method is proposed. A simple yet effective underwater image color cast removal algorithm is first presented based on optimization theory. Then, based on the minimum information loss principle and the inherent relationship among the medium transmission maps of the three color channels in an underwater image, an effective visibility restoration algorithm is proposed to recover the visibility, contrast, and natural appearance of degraded underwater images. To evaluate the performance of the proposed method, qualitative comparison, quantitative comparison, and a color accuracy test are conducted. Experimental results demonstrate that the proposed method can effectively remove color cast, improve contrast and visibility, and recover the natural appearance of degraded underwater images. Additionally, the proposed method is comparable to and even better than several state-of-the-art methods.

  16. Color correction optimization with hue regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Heng; Liu, Huaping; Quan, Shuxue

    2011-01-01

    Previous work has suggested that observers are capable of judging the quality of an image without any knowledge of the original scene. When no reference is available, observers can extract the apparent objects in an image and compare them with the typical colors of similar objects recalled from their memories. Some generally agreed upon research results indicate that although perfect colorimetric rendering is not conspicuous and color errors can be well tolerated, the appropriate rendition of certain memory colors such as skin, grass, and sky is an important factor in the overall perceived image quality. These colors are appreciated in a fairly consistent manner and are memorized with slightly different hues and higher color saturation. The aim of color correction for a digital color pipeline is to transform the image data from a device dependent color space to a target color space, usually through a color correction matrix which in its most basic form is optimized through linear regressions between the two sets of data in two color spaces in the sense of minimized Euclidean color error. Unfortunately, this method could result in objectionable distortions if the color error biased certain colors undesirably. In this paper, we propose a color correction optimization method with preferred color reproduction in mind through hue regularization and present some experimental results.
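
    The baseline the paper builds on, a 3×3 color-correction matrix fit by least-squares regression between device and target colors, can be sketched as follows. The calibration data here are simulated, and the hue-regularization term itself is not shown:

```python
import numpy as np

def fit_ccm(device_rgb, target_rgb):
    """Fit a 3x3 color-correction matrix M minimizing the Euclidean error
    ||device_rgb @ M - target_rgb||^2 -- the basic linear-regression CCM
    the paper starts from, before any hue regularization."""
    M, *_ = np.linalg.lstsq(device_rgb, target_rgb, rcond=None)
    return M

# hypothetical calibration data: a known, invertible color distortion
rng = np.random.default_rng(2)
true_M = np.array([[0.9, 0.05, 0.05],
                   [0.1, 0.8,  0.1 ],
                   [0.0, 0.1,  0.9 ]])
patches = rng.random((24, 3))                # e.g. 24 color-checker patches
measured = patches @ np.linalg.inv(true_M)   # simulated device response
M = fit_ccm(measured, patches)
corrected = measured @ M
```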

  17. Color calibration of swine gastrointestinal tract images acquired by radial imaging capsule endoscope

    NASA Astrophysics Data System (ADS)

    Ou-Yang, Mang; Jeng, Wei-De; Lai, Chien-Cheng; Wu, Hsien-Ming; Lin, Jyh-Hung

    2016-01-01

    The type of illumination system and the color filters used typically generate varying levels of color difference in capsule endoscopes, which influences medical diagnoses. In order to calibrate the color difference caused by the optical system, this study applied a radial imaging capsule endoscope (RICE) to photograph standard color charts, which were then employed to calculate the color gamut of RICE. The color gamut was also measured using a spectrometer to obtain high-precision color information, and the results of both methods were compared. Subsequently, two color-correction methods, polynomial transform and conformal mapping, were used to reduce the color difference. Before color calibration, the color difference value caused by the optical system in RICE was 21.45±1.09. Through the proposed polynomial transformation, the color difference was reduced effectively to 1.53±0.07. With the proposed conformal mapping, the color difference value was further reduced to 1.32±0.11, which is imperceptible to the human eye because it is <1.5. Real-time color correction was then achieved by combining this algorithm with a field-programmable gate array, and the results of the color correction can be viewed in real-time images.

  18. Frequency division multiplexed multi-color fluorescence microscope system

    NASA Astrophysics Data System (ADS)

    Le, Vu Nam; Yang, Huai Dong; Zhang, Si Chun; Zhang, Xin Rong; Jin, Guo Fan

    2017-10-01

    A grayscale camera can only capture a gray-scale image of an object, whereas multicolor imaging captures the color information needed to distinguish sample structures that have the same shape but different colors. In fluorescence microscopy, current multicolor imaging methods are flawed: they reduce the efficiency of fluorescence imaging, lower the effective sampling rate of the CCD, and so on. In this paper, we propose a novel multicolor fluorescence microscopy imaging method based on frequency division multiplexing (FDM), which modulates the excitation lights and demodulates the fluorescence signal in the frequency domain. The method uses periodic functions of different frequencies to modulate the amplitude of each excitation light and then combines these beams for illumination in a fluorescence microscopy imaging system. The imaging system detects a multicolor fluorescence image with a grayscale camera. During data processing, the signal obtained at each pixel of the camera is processed with a discrete Fourier transform, decomposed by color in the frequency domain, and then inverse transformed. After applying this process to the signals from all pixels, monochrome images of each color on the image plane are obtained and the multicolor image is assembled. Based on this method, we constructed a two-color fluorescence microscope system with excitation wavelengths of 488 nm and 639 nm. Using this system to observe the linear movement of two kinds of fluorescent microspheres, we obtained, after data processing, a two-color fluorescence video consistent with the original scene. This experiment shows that dynamic multicolor fluorescent biological samples can be observed with this method. Compared with current methods, this method obtains the image signals of each color at the same time, and the color video's frame rate equals the frame rate of the camera. The optical system is simpler and needs no extra color-separation element. In addition, the method effectively filters out ambient light and other light signals that are not affected by the modulation.
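
    The demodulation step can be illustrated for a single pixel: each excitation light is amplitude-modulated at its own frequency, and the per-color fluorescence strengths are read off the corresponding DFT bins. The frequencies and amplitudes below are made up for the demonstration:

```python
import numpy as np

# One pixel's time trace under two excitation lights amplitude-modulated at
# distinct frequencies; the per-color fluorescence strengths are recovered
# from the corresponding DFT bins.
n, fs = 256, 256.0                     # samples and sampling rate (Hz)
t = np.arange(n) / fs
f1, f2 = 16.0, 40.0                    # modulation frequency per color
a_true, b_true = 3.0, 1.5              # fluorescence strengths to recover
trace = (a_true * (1 + np.cos(2 * np.pi * f1 * t)) / 2
         + b_true * (1 + np.cos(2 * np.pi * f2 * t)) / 2)

spectrum = np.fft.rfft(trace)
freqs = np.fft.rfftfreq(n, d=1 / fs)
# a cosine of amplitude A/2 contributes an rfft bin of magnitude (A/2)*(n/2)
a_rec = np.abs(spectrum[freqs == f1][0]) * 4 / n
b_rec = np.abs(spectrum[freqs == f2][0]) * 4 / n
```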

  19. CFA-aware features for steganalysis of color images

    NASA Astrophysics Data System (ADS)

    Goljan, Miroslav; Fridrich, Jessica

    2015-03-01

    Color interpolation is a form of upsampling, which introduces constraints on the relationship between neighboring pixels in a color image. These constraints can be utilized to substantially boost the accuracy of steganography detectors. In this paper, we introduce a rich model formed by 3D co-occurrences of color noise residuals split according to the structure of the Bayer color filter array to further improve detection. Some color interpolation algorithms, such as AHD and PPG, impose pixel constraints so tight that extremely accurate detection becomes possible with merely eight features, eliminating the need for model richification. We carry out experiments on non-adaptive LSB matching and the content-adaptive algorithm WOW on five different color interpolation algorithms. In contrast to grayscale images, in color images that exhibit traces of color interpolation the security of WOW is significantly lower and, depending on the interpolation algorithm, may even be lower than that of non-adaptive LSB matching.

  20. Citrus fruit recognition using color image analysis

    NASA Astrophysics Data System (ADS)

    Xu, Huirong; Ying, Yibin

    2004-10-01

    An algorithm for the automatic recognition of citrus fruit on the tree was developed. Citrus fruit differs in color from the leaf and branch portions. Fifty-three color images of natural citrus-grove scenes were digitized and analyzed for red, green, and blue (RGB) color content. The color characteristics of target surfaces (fruit, leaves, or branches) were extracted using a region-of-interest (ROI) tool. Several types of contrast color indices were designed and tested. In this study, the fruit image was enhanced using the (R-B) contrast color index, because the fruit showed the highest color difference among the objects in the image. A dynamic threshold function was derived from this color model and used to distinguish citrus fruit from the background. The results show that the algorithm works well under frontlighting or backlighting conditions; however, misclassifications occur when the fruit or the background is under brighter sunlight.
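
    The (R-B) index and threshold step can be sketched as follows; the fixed threshold stands in for the paper's dynamic threshold function, whose exact form the abstract does not give:

```python
import numpy as np

def citrus_mask(img, threshold=40):
    """Segment fruit by the (R-B) contrast color index: pixels whose
    red-minus-blue difference exceeds the threshold are labeled fruit.
    The fixed threshold here is an illustrative stand-in."""
    diff = img[..., 0].astype(int) - img[..., 2].astype(int)
    return diff > threshold

fruit = np.full((4, 4, 3), [220, 140, 60], dtype=np.uint8)   # orange-ish patch
leaves = np.full((4, 4, 3), [40, 120, 50], dtype=np.uint8)   # green-ish patch
```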

  1. An improved quantum watermarking scheme using small-scale quantum circuits and color scrambling

    NASA Astrophysics Data System (ADS)

    Li, Panchi; Zhao, Ya; Xiao, Hong; Cao, Maojun

    2017-05-01

    In order to solve the problem of embedding a watermark into a quantum color image, an improved scheme using small-scale quantum circuits and color scrambling is proposed in this paper. Both the color carrier image and the color watermark image are represented using the novel enhanced quantum representation. The image sizes for the carrier and watermark are assumed to be 2^{n+1}× 2^{n+2} and 2^n× 2^n, respectively. First, the color of the pixels in the watermark image is scrambled using controlled rotation gates; then the scrambled watermark with 2^n× 2^n image size and 24-qubit gray scale is expanded to an image with 2^{n+1}× 2^{n+2} image size and 3-qubit gray scale. Finally, the expanded watermark image is embedded into the carrier image by controlled-NOT gates. The extraction of the watermark is the reverse of the embedding process, achieved by applying the operations in reverse order. Simulation-based experimental results show that the proposed scheme is superior to other similar algorithms in terms of three criteria: visual quality, scrambling effect of the watermark image, and noise resistibility.

  2. Astronomy with the Color Blind

    ERIC Educational Resources Information Center

    Smith, Donald A.; Melrose, Justyn

    2014-01-01

    The standard method to create dramatic color images in astrophotography is to record multiple black and white images, each with a different color filter in the optical path, and then tint each frame with a color appropriate to the corresponding filter. When combined, the resulting image conveys information about the sources of emission in the…

  3. Applied learning-based color tone mapping for face recognition in video surveillance system

    NASA Astrophysics Data System (ADS)

    Yew, Chuu Tian; Suandi, Shahrel Azmin

    2012-04-01

    In this paper, we present an applied learning-based color tone mapping technique for video surveillance systems. This technique can be applied to both color and grayscale surveillance images. The basic idea is to learn the color or intensity statistics from a training dataset of photorealistic images of the candidates appearing in the surveillance images, and to remap the color or intensity of the input image so that its statistics match those of the training dataset. It is well known that differences in commercial surveillance camera models and the signal-processing chipsets used by different manufacturers cause the color and intensity of images to differ from one another, creating additional challenges for face recognition in video surveillance systems. Using Multi-Class Support Vector Machines as the classifier on a publicly available video surveillance camera database, namely the SCface database, this approach is validated and compared to the results of using a holistic approach on grayscale images. The results show that this technique is suitable for improving the color or intensity quality of video surveillance systems for face recognition.

  4. Color TV: total variation methods for restoration of vector-valued images.

    PubMed

    Blomgren, P; Chan, T F

    1998-01-01

    We propose a new definition of the total variation (TV) norm for vector-valued functions that can be applied to restore color and other vector-valued images. The new TV norm has the desirable properties of 1) not penalizing discontinuities (edges) in the image, 2) being rotationally invariant in the image space, and 3) reducing to the usual TV norm in the scalar case. Some numerical experiments on denoising simple color images in red-green-blue (RGB) color space are presented.
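
    Writing out the coupled definition for an m-channel image u = (u_1, …, u_m), the vector TV norm combines the channel-wise TV norms in Euclidean fashion (a sketch of the commonly stated Blomgren-Chan form; see the paper for the precise formulation):

```latex
\mathrm{TV}[u_i] = \int_\Omega \lvert \nabla u_i \rvert \, dx,
\qquad
\mathrm{TV}_{n,m}[u] = \sqrt{\sum_{i=1}^{m} \bigl(\mathrm{TV}[u_i]\bigr)^{2}}
```

    For m = 1 this reduces to the usual scalar TV norm, and because each channel's TV term is itself rotationally invariant, so is the combined norm.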

  5. Digital color representation

    DOEpatents

    White, James M.; Faber, Vance; Saltzman, Jeffrey S.

    1992-01-01

    An image population having a large number of attributes is processed to form a display population with a predetermined smaller number of attributes which represent the larger number of attributes. In a particular application, the color values in an image are compressed for storage in a discrete lookup table (LUT) where an 8-bit data signal is enabled to form a display of 24-bit color values. The LUT is formed in a sampling and averaging process from the image color values with no requirement to define discrete Voronoi regions for color compression. Image color values are assigned 8-bit pointers to their closest LUT value whereby data processing requires only the 8-bit pointer value to provide 24-bit color values from the LUT.
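
    The pointer-and-LUT representation can be sketched as follows; the LUT here is built by simple random subsampling of the image's own colors, an illustrative stand-in for the patent's sampling-and-averaging procedure:

```python
import numpy as np

# 24-bit colors represented by 8-bit pointers into a 256-entry LUT.
rng = np.random.default_rng(3)
image = rng.integers(0, 256, (32, 32, 3))            # 24-bit color image
pixels = image.reshape(-1, 3).astype(float)
# 256 LUT entries sampled from the image's own colors (stand-in for the
# patent's sampling-and-averaging construction)
lut = pixels[rng.choice(len(pixels), 256, replace=False)]

# each pixel gets the 8-bit index of its closest LUT color
d2 = ((pixels[:, None, :] - lut[None, :, :]) ** 2).sum(axis=-1)
pointers = d2.argmin(axis=1).astype(np.uint8)
decoded = lut[pointers].reshape(image.shape)         # display-side expansion
```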

  6. Color preservation for tone reproduction and image enhancement

    NASA Astrophysics Data System (ADS)

    Hsin, Chengho; Lee, Zong Wei; Lee, Zheng Zhan; Shin, Shaw-Jyh

    2014-01-01

    Applications based on luminance processing often face the problem of recovering the original chrominance in the output color image. A common approach to reconstructing a color image from the luminance output is to preserve the original hue and saturation. However, this approach often produces an overly colorful image, which is undesirable. We develop a color preservation method that not only retains the ratios of the input tri-chromatic values but also adjusts the output chroma in an appropriate way. Linearizing the output luminance is the key idea in realizing this method. In addition, a lightness difference metric and a colorfulness difference metric are proposed to evaluate the performance of color preservation methods. Results show that the proposed method performs consistently better than the existing approaches.

  7. Fusion of visible and near-infrared images based on luminance estimation by weighted luminance algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Zhun; Cheng, Feiyan; Shi, Junsheng; Huang, Xiaoqiao

    2018-01-01

    In a low-light scene, capturing color images requires a high-gain or long-exposure setting to avoid a visible flash; however, such settings lead to color images with serious noise or motion blur. Several methods have been proposed to improve a noisy color image using an invisible near-infrared (NIR) flash image. In one novel method, the luminance and chroma components of the improved color image are estimated from different image sources [1]: the luminance component mainly from the NIR image via spectral estimation, and the chroma component from the noisy color image by denoising. Estimating the luminance component is challenging, however; that method requires generating learning data pairs, and its processing is complex, making practical application difficult. To reduce the complexity of the luminance estimation, an improved luminance-estimation algorithm is presented in this paper, which weights the NIR image and the denoised color image with coefficients based on the mean value and standard deviation of both images. Experimental results show that the proposed method achieves the same fusion quality in color fidelity and texture as the earlier method, while the algorithm is simpler and more practical.
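
    The weighting idea can be sketched as follows. The abstract states only that the coefficients derive from the mean and standard deviation of both images; the std-proportional weight below is a plausible stand-in, not the authors' formula:

```python
import numpy as np

def fuse_luminance(nir, lum, eps=1e-6):
    """Weighted fusion of a NIR image and the luminance of a denoised
    color image. The std-proportional weight favors the image carrying
    more local detail -- an assumed form, not the paper's exact one."""
    w_nir = nir.std() / (nir.std() + lum.std() + eps)
    return w_nir * nir + (1 - w_nir) * lum

rng = np.random.default_rng(4)
nir = rng.random((16, 16)) * 0.8 + 0.2     # detail-rich NIR frame (made up)
lum = np.full((16, 16), 0.4)               # flat, heavily denoised luminance
fused = fuse_luminance(nir, lum)
```

    With a textureless luminance channel, the weight tends to 1 and the fused result follows the NIR detail, which is the intended behavior.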

  8. Color dithering methods for LEGO-like 3D printing

    NASA Astrophysics Data System (ADS)

    Sun, Pei-Li; Sie, Yuping

    2015-01-01

    Color dithering methods for LEGO-like 3D printing are proposed in this study. The first method works for opaque color bricks. It is a modification of classic error diffusion; many color primaries can be chosen, but RGBYKW is recommended because its image quality is good and the number of primaries is limited. For translucent color bricks, multi-layer color building can enhance image quality significantly. A LUT-based method is proposed to speed up the dithering process and make the color distribution even smoother. Simulation results show the proposed multi-layer dithering method markedly improves the image quality of LEGO-like 3D printing.
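
    The opaque-brick case, classic error diffusion restricted to a small brick palette, can be sketched with Floyd-Steinberg weights and the recommended RGBYKW primaries (the paper's actual modification is not detailed in the abstract):

```python
import numpy as np

# Floyd-Steinberg error diffusion restricted to the six-color RGBYKW palette.
PALETTE = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255],       # R, G, B
                    [255, 255, 0], [0, 0, 0], [255, 255, 255]],  # Y, K, W
                   dtype=float)

def dither(img):
    work = img.astype(float).copy()
    out = np.zeros(img.shape[:2], dtype=int)    # palette index per brick
    h, w = img.shape[:2]
    for y in range(h):
        for x in range(w):
            old = work[y, x]
            idx = ((PALETTE - old) ** 2).sum(axis=1).argmin()
            out[y, x] = idx
            err = old - PALETTE[idx]            # diffuse quantization error
            if x + 1 < w:
                work[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    work[y + 1, x - 1] += err * 3 / 16
                work[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    work[y + 1, x + 1] += err * 1 / 16
    return out

rng = np.random.default_rng(5)
img = rng.integers(0, 256, (12, 12, 3))
bricks = dither(img)
```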

  9. Research of image retrieval technology based on color feature

    NASA Astrophysics Data System (ADS)

    Fu, Yanjun; Jiang, Guangyu; Chen, Fengying

    2009-10-01

    Recently, with the development of communication and computer technology and improvements in storage technology and digital imaging equipment, more image resources are available than ever, so a means of locating the desired image quickly and accurately is needed. The early approach was keyword-based database search, but it becomes impractical as the number of images grows. To overcome the limitations of traditional search, content-based image retrieval was developed and is now an active research subject. Color image retrieval is an important part of it, and color is its most important feature. Three key questions on exploiting the color characteristic are discussed in this paper: the representation of color, the extraction of color features, and the measurement of similarity based on color. On this basis, the extraction of color histogram features is discussed in detail. Considering the advantages and disadvantages of the overall histogram and the partition histogram, a new method based on the partition-overall histogram is proposed. The basic idea is to divide the image space according to a certain strategy and to calculate the color histogram of each block as that block's color feature. Users choose the blocks that contain important spatial information and assign them weights. The system calculates the distance between the corresponding chosen blocks; the remaining blocks are merged into partial overall histograms, whose distances are also calculated. All the distances are then accumulated as the overall distance between the two images.
    The partition-overall histogram combines the advantages of the two methods above: choosing blocks gives the feature more spatial information, which improves performance, while the distances between partial overall histograms are invariant to rotation and translation. The HSV color space is used to represent the color characteristics of an image, as it matches human visual perception; exploiting human color perception, the color sectors are quantized with unequal intervals to obtain the feature vector. Finally, image similarity is matched with the histogram-intersection algorithm over the partition-overall histogram. Users can supply an example image to express a visual query and can adjust the weights through relevance feedback to obtain the best search results. An image retrieval system based on these approaches is presented. Experimental results show that retrieval based on the partition-overall histogram preserves spatial distribution information while efficiently extracting color features, and it is superior to normal color histograms in retrieval precision; the query precision rate exceeds 95%. In addition, efficient block representation lowers the complexity of the images to be searched, increasing search efficiency. The image retrieval algorithm based on the partition-overall histogram proposed in this paper is efficient and effective.
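
    The two measurement pieces the retrieval rests on, a per-block color histogram and the histogram-intersection similarity, can be sketched as follows (uniform RGB quantization stands in for the paper's unequal-interval HSV quantization):

```python
import numpy as np

def hist_intersection(h1, h2):
    """Histogram-intersection similarity: sum of bin-wise minima over
    normalized histograms (1.0 = identical)."""
    return np.minimum(h1, h2).sum()

def color_hist(img, bins=8):
    """Normalized color histogram over uniformly quantized channel values --
    a simplified stand-in for the paper's unequal-interval HSV quantization."""
    q = (img // (256 // bins)).reshape(-1, 3)
    flat = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    h = np.bincount(flat, minlength=bins ** 3).astype(float)
    return h / h.sum()

rng = np.random.default_rng(6)
a = rng.integers(0, 256, (16, 16, 3))
b = 255 - a                                  # color-inverted version of a
sim_self = hist_intersection(color_hist(a), color_hist(a))
sim_other = hist_intersection(color_hist(a), color_hist(b))
```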

  10. Guided color consistency optimization for image mosaicking

    NASA Astrophysics Data System (ADS)

    Xie, Renping; Xia, Menghan; Yao, Jian; Li, Li

    2018-01-01

    This paper studies the problem of color consistency correction for sequential images with diverse color characteristics. Existing algorithms try to adjust all images to minimize color differences among them under a unified energy framework; however, the results are prone to presenting a consistent but unnatural appearance when the color difference between images is large and diverse. In our approach, this problem is addressed effectively by providing a guided initial solution for the global consistency optimization, which avoids converging to a meaningless integrated solution. First, to obtain reliable intensity correspondences in overlapping regions between image pairs, we propose a histogram extreme point matching algorithm which is robust to image geometric misalignment to some extent. In the absence of extra reference information, the guided initial solution is learned from the major tone of the original images by selecting an image subset as the reference, whose color characteristics are transferred to the others via the paths of graph analysis. Thus, the final results of the global adjustment take on a consistent color similar to the appearance of the reference image subset. Several groups of convincing experiments on both a synthetic dataset and challenging real ones sufficiently demonstrate that the proposed approach achieves results as good as or better than state-of-the-art approaches.

  11. Reconstruction of color images via Haar wavelet based on digital micromirror device

    NASA Astrophysics Data System (ADS)

    Liu, Xingjiong; He, Weiji; Gu, Guohua

    2015-10-01

    A digital micromirror device (DMD) is introduced to form the Haar wavelet basis, projected onto the color target image using structured illumination with red, green, and blue light. The light intensity signals reflected from the target image are received synchronously by a bucket detector with no spatial resolution, converted into voltage signals, and transferred to a PC [1]. To achieve synchronization, several synchronization steps are added during data acquisition. During data collection, according to the wavelet tree structure, the locations of significant coefficients at the finer scale are predicted by comparing the coefficients sampled at the coarsest scale with a threshold. Monochrome grayscale images are obtained under red, green, and blue structured illumination, respectively, using the inverse Haar wavelet transform. A color fusion algorithm is applied to the three monochrome grayscale images to obtain the final color image. Following this imaging principle, an experimental demonstration device was assembled. The letter "K" and the X-rite Color Checker Passport were projected and reconstructed as target images, and the final reconstructed color images have good quality. This method of Haar wavelet reconstruction reduces the sampling rate considerably and provides color information without compromising the resolution of the final image.

  12. Color-coded prefilled medication syringes decrease time to delivery and dosing errors in simulated prehospital pediatric resuscitations: A randomized crossover trial

    PubMed Central

    Stevens, Allen D.; Hernandez, Caleb; Jones, Seth; Moreira, Maria E.; Blumen, Jason R.; Hopkins, Emily; Sande, Margaret; Bakes, Katherine; Haukoos, Jason S.

    2016-01-01

    Background Medication dosing errors remain commonplace and may result in potentially life-threatening outcomes, particularly for pediatric patients where dosing often requires weight-based calculations. Novel medication delivery systems that may reduce dosing errors resonate with national healthcare priorities. Our goal was to evaluate novel, prefilled medication syringes labeled with color-coded volumes corresponding to the weight-based dosing of the Broselow Tape, compared to conventional medication administration, in simulated prehospital pediatric resuscitation scenarios. Methods We performed a prospective, block-randomized, cross-over study, where 10 full-time paramedics each managed two simulated pediatric arrests in situ using either prefilled, color-coded-syringes (intervention) or their own medication kits stocked with conventional ampoules (control). Each paramedic was paired with two emergency medical technicians to provide ventilations and compressions as directed. The ambulance patient compartment and the intravenous medication port were video recorded. Data were extracted from video review by blinded, independent reviewers. Results Median time to delivery of all doses for the intervention and control groups was 34 (95% CI: 28–39) seconds and 42 (95% CI: 36–51) seconds, respectively (difference = 9 [95% CI: 4–14] seconds). Using the conventional method, 62 doses were administered with 24 (39%) critical dosing errors; using the prefilled, color-coded syringe method, 59 doses were administered with 0 (0%) critical dosing errors (difference = 39%, 95% CI: 13–61%). Conclusions A novel color-coded, prefilled syringe decreased time to medication administration and significantly reduced critical dosing errors by paramedics during simulated prehospital pediatric resuscitations. PMID:26247145

  13. Color-coded prefilled medication syringes decrease time to delivery and dosing errors in simulated prehospital pediatric resuscitations: A randomized crossover trial.

    PubMed

    Stevens, Allen D; Hernandez, Caleb; Jones, Seth; Moreira, Maria E; Blumen, Jason R; Hopkins, Emily; Sande, Margaret; Bakes, Katherine; Haukoos, Jason S

    2015-11-01

    Medication dosing errors remain commonplace and may result in potentially life-threatening outcomes, particularly for pediatric patients where dosing often requires weight-based calculations. Novel medication delivery systems that may reduce dosing errors resonate with national healthcare priorities. Our goal was to evaluate novel, prefilled medication syringes labeled with color-coded volumes corresponding to the weight-based dosing of the Broselow Tape, compared to conventional medication administration, in simulated prehospital pediatric resuscitation scenarios. We performed a prospective, block-randomized, cross-over study, where 10 full-time paramedics each managed two simulated pediatric arrests in situ using either prefilled, color-coded syringes (intervention) or their own medication kits stocked with conventional ampoules (control). Each paramedic was paired with two emergency medical technicians to provide ventilations and compressions as directed. The ambulance patient compartment and the intravenous medication port were video recorded. Data were extracted from video review by blinded, independent reviewers. Median time to delivery of all doses for the intervention and control groups was 34 (95% CI: 28-39) seconds and 42 (95% CI: 36-51) seconds, respectively (difference=9 [95% CI: 4-14] seconds). Using the conventional method, 62 doses were administered with 24 (39%) critical dosing errors; using the prefilled, color-coded syringe method, 59 doses were administered with 0 (0%) critical dosing errors (difference=39%, 95% CI: 13-61%). A novel color-coded, prefilled syringe decreased time to medication administration and significantly reduced critical dosing errors by paramedics during simulated prehospital pediatric resuscitations. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  14. A 128×96 Pixel Stack-Type Color Image Sensor: Stack of Individual Blue-, Green-, and Red-Sensitive Organic Photoconductive Films Integrated with a ZnO Thin Film Transistor Readout Circuit

    NASA Astrophysics Data System (ADS)

    Seo, Hokuto; Aihara, Satoshi; Watabe, Toshihisa; Ohtake, Hiroshi; Sakai, Toshikatsu; Kubota, Misao; Egami, Norifumi; Hiramatsu, Takahiro; Matsuda, Tokiyoshi; Furuta, Mamoru; Hirao, Takashi

    2011-02-01

    A color image was produced by a vertically stacked image sensor with blue (B)-, green (G)-, and red (R)-sensitive organic photoconductive films, each having a thin-film transistor (TFT) array that uses a zinc oxide (ZnO) channel to read out the signal generated in each organic film. The fabricated image sensor has 128×96 pixels for each color, with a pixel size of 100×100 µm². The current on/off ratio of the ZnO TFT is over 10⁶, and the B-, G-, and R-sensitive organic photoconductive films show excellent wavelength selectivity. The stacked image sensor can produce a color image at 10 frames per second with a resolution corresponding to the pixel count. This result clearly shows that color separation is achieved without using any conventional color separation optics such as a color filter array or a prism.

  15. Dehazed Image Quality Assessment by Haze-Line Theory

    NASA Astrophysics Data System (ADS)

    Song, Yingchao; Luo, Haibo; Lu, Rongrong; Ma, Junkai

    2017-06-01

    Images captured in bad weather suffer from low contrast and faded color. Recently, many dehazing algorithms have been proposed to enhance visibility and restore color, but evaluation metrics for assessing and ranking their performance are lacking. In this paper, an indicator of contrast enhancement is proposed based on the recently introduced haze-line theory. The theory assumes that the colors of a haze-free image are well approximated by a few hundred distinct colors, which form tight clusters in RGB space. The presence of haze stretches each color cluster into a line, named a haze-line. Using these haze-lines, we assess the performance of dehazing algorithms designed to enhance contrast by measuring the inter-cluster deviations between the different colors of the dehazed image. Experimental results demonstrate that the proposed Color Contrast (CC) index correlates well with human judgments of image contrast in a subjective test on various scenes of dehazed images and performs better than state-of-the-art metrics.
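    The inter-cluster deviation idea above can be sketched in a few lines: cluster the pixel colors of a dehazed image in RGB space and use the mean distance between cluster centers as a contrast proxy. This is a minimal illustration, not the paper's exact CC formula; the function name, the use of plain k-means, and the number of clusters are all assumptions.

```python
import numpy as np

def color_contrast_index(pixels, n_clusters=8, n_iter=20, seed=0):
    """Rough contrast proxy: mean pairwise distance between RGB cluster
    centers. `pixels` is an (N, 3) float array of colors in [0, 1]."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), n_clusters, replace=False)]
    for _ in range(n_iter):  # plain k-means in RGB space
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = pixels[labels == k].mean(axis=0)
    # inter-cluster deviation: average distance between distinct centers
    dist = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    return dist[np.triu_indices(n_clusters, k=1)].mean()
```

    A well-dehazed image, whose colors have re-separated into tight, distinct clusters, should score higher on this index than its low-contrast hazy counterpart.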

  16. Scalable Inkjet-Based Structural Color Printing by Molding Transparent Gratings on Multilayer Nanostructured Surfaces.

    PubMed

    Jiang, Hao; Kaminska, Bozena

    2018-04-24

    To enable customized manufacturing of structural colors for commercial applications, up-scalable, low-cost, rapid, and versatile printing techniques are in high demand. In this paper, we introduce a viable strategy for scaling up production of custom-input images by patterning individual structural colors on separate layers, which are then vertically stacked and recombined into full-color images. By applying this strategy to molded-ink-on-nanostructured-surface printing, we present an industry-applicable inkjet structural color printing technique termed multilayer molded-ink-on-nanostructured-surface (M-MIONS) printing, in which structural color pixels are molded on multiple layers of nanostructured surfaces. Transparent colorless titanium dioxide nanoparticles were inkjet-printed onto three separate transparent polymer substrates, each of whose surfaces carries one specific subwavelength grating pattern for molding the deposited nanoparticles into structural color pixels of the red, green, or blue primary color. After index-matching lamination, the three layers were vertically stacked and bonded to display a color image. Each primary color can be printed in a range of shades controlled through a half-tone process, and full colors were achieved by mixing primary colors from the three layers. In our experiments, image sizes up to 10 cm × 10 cm were readily achieved, and even larger images can potentially be printed on recombined grating surfaces. In one application example, the M-MIONS technique was used for printing customizable transparent color optical variable devices for protecting personalized security documents. In another, a transparent diffractive color image printed with the M-MIONS technique was pasted onto a transparent panel for overlaying colorful information onto one's view of reality.

  17. Structure-Preserving Color Normalization and Sparse Stain Separation for Histological Images.

    PubMed

    Vahadane, Abhishek; Peng, Tingying; Sethi, Amit; Albarqouni, Shadi; Wang, Lichao; Baust, Maximilian; Steiger, Katja; Schlitter, Anna Melissa; Esposito, Irene; Navab, Nassir

    2016-08-01

    Staining and scanning of tissue samples for microscopic examination is fraught with undesirable color variations arising from differences in raw materials and manufacturing techniques of stain vendors, staining protocols of labs, and color responses of digital scanners. When comparing tissue samples, color normalization and stain separation of the tissue images can be helpful for both pathologists and software. Techniques used for natural images fail to exploit the structural properties of stained tissue samples and produce undesirable color distortions. Stained tissue obeys two physical constraints: stain concentrations cannot be negative, and tissue samples are stained with only a few stains, with most tissue regions characterized by at most one effective stain. We model these physical phenomena, which define the tissue structure, by first decomposing images in an unsupervised manner into stain density maps that are sparse and non-negative. For a given image, we combine its stain density maps with the stain color basis of a pathologist-preferred target image, thus altering only its color while preserving the structure described by the maps. Stain density correlation with ground truth and preference by pathologists were higher for images normalized using our method than for the alternatives. We also propose a computationally faster extension of this technique for large whole-slide images that selects an appropriate patch sample instead of using the entire image to compute the stain color basis.
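    The decompose-then-recombine idea can be sketched as a non-negative factorization of optical-density pixels into density maps and a stain color basis, followed by swapping in a target image's basis. This is a simplified stand-in using plain multiplicative-update NMF rather than the paper's sparse dictionary learning; the function names and the two-stain default are assumptions.

```python
import numpy as np

def stain_separate(od, n_stains=2, n_iter=200, seed=0):
    """Factor optical-density pixels od (N, 3) ~ H @ W with non-negative
    density maps H (N, n_stains) and stain color basis W (n_stains, 3),
    via multiplicative-update NMF (no sparsity term in this sketch)."""
    rng = np.random.default_rng(seed)
    H = rng.random((od.shape[0], n_stains)) + 1e-3
    W = rng.random((n_stains, 3)) + 1e-3
    for _ in range(n_iter):
        H *= (od @ W.T) / (H @ W @ W.T + 1e-9)
        W *= (H.T @ od) / (H.T @ H @ W + 1e-9)
    return H, W

def normalize_to_target(od_src, W_target, n_stains=2):
    """Recombine the source's density maps with the target's stain color
    basis: color changes, structure (the maps) is preserved."""
    H_src, _ = stain_separate(od_src, n_stains)
    return H_src @ W_target
```

    Multiplicative updates keep both factors non-negative by construction, matching the physical constraint that stain concentrations cannot be negative.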

  18. Securing Color Fidelity in 3D Architectural Heritage Scenarios.

    PubMed

    Gaiani, Marco; Apollonio, Fabrizio Ivan; Ballabeni, Andrea; Remondino, Fabio

    2017-10-25

    Ensuring color fidelity in image-based 3D modeling of heritage scenarios is still an open research matter. Image colors are important during data processing because they affect algorithm outcomes, so their correct treatment, reduction, and enhancement is fundamental. In this contribution, we present an automated solution developed to improve the radiometric quality of image datasets and the performance of two main steps of the photogrammetric pipeline (camera orientation and dense image matching). The proposed solution aims to achieve robust automatic color balance and exposure equalization, a stable RGB-to-gray image conversion, and faithful color appearance of the digitized artifact. The innovative aspects of the article are: complete automation, improved color target detection, a MATLAB implementation of the ACR scripts created by Fraser, and the use of a specific weighted polynomial regression. A series of tests is presented to demonstrate the efficiency of the developed methodology and to evaluate color accuracy ('color characterization').
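    The weighted polynomial regression mentioned above is the standard workhorse of color characterization: fit a polynomial mapping from camera RGB values of color-target patches to their reference values. The sketch below is a generic second-order example, not the authors' exact term set or weighting scheme; all function names are illustrative.

```python
import numpy as np

def poly_features(rgb):
    """Second-order polynomial expansion of (N, 3) RGB values: a common
    basis for color characterization (the paper's exact terms may differ)."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([np.ones_like(r), r, g, b, r * g, r * b, g * b,
                     r**2, g**2, b**2], axis=1)

def fit_color_correction(measured, reference, weights=None):
    """Weighted least-squares fit of polynomial correction coefficients
    mapping measured (N, 3) camera RGB to reference (N, 3) patch values."""
    X = poly_features(measured)
    w = np.ones(len(X)) if weights is None else np.asarray(weights)
    Xw = X * w[:, None]                      # per-patch weighting
    coef, *_ = np.linalg.lstsq(Xw.T @ X, Xw.T @ reference, rcond=None)
    return coef                              # shape (10, 3)

def apply_color_correction(rgb, coef):
    return poly_features(rgb) @ coef
```

    In practice the weights would emphasize, e.g., neutral patches or perceptually important colors; here they are uniform by default.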

  19. Correlation based efficient face recognition and color change detection

    NASA Astrophysics Data System (ADS)

    Elbouz, M.; Alfalou, A.; Brosseau, C.; Alam, M. S.; Qasmi, S.

    2013-01-01

    Identifying the human face via correlation is a topic attracting widespread interest. At the heart of this technique lies the comparison of an unknown target image to a known reference database of images. However, the color information in the target image remains notoriously difficult to interpret. In this paper, we report a new technique which: (i) is robust against illumination change, (ii) offers the discrimination ability to detect color change between faces of similar shape, and (iii) is specifically designed to detect red-colored stains (i.e., facial bleeding). We adopt the VanderLugt correlator (VLC) architecture with a segmented phase filter and decompose the color target image using normalized red, green, and blue (RGB) and hue, saturation, and value (HSV) scales. We propose a new strategy to effectively utilize color information in signatures to further increase the discrimination ability. The proposed algorithm has been found to be very efficient at discriminating face subjects with different skin colors, as well as those having color stains in different areas of the facial image.
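    The correlation step at the heart of such systems can be sketched with a phase-only correlation computed via the FFT, combined across normalized color channels so that chromatic differences lower the match score. This is a simplified stand-in for the VLC with a segmented phase filter, and the channel-fusion rule shown is an assumption, not the authors' exact scheme.

```python
import numpy as np

def phase_correlation_peak(target, reference):
    """Peak of the phase-only correlation between two grayscale images:
    1.0 for identical images, much lower for unrelated ones."""
    cross = np.fft.fft2(target) * np.conj(np.fft.fft2(reference))
    cross /= np.abs(cross) + 1e-12          # keep phase information only
    return np.fft.ifft2(cross).real.max()

def color_match_score(target_rgb, reference_rgb):
    """Average per-channel phase-correlation peaks over sum-normalized RGB
    (an assumed fusion rule), so color changes reduce the score."""
    t = target_rgb / (target_rgb.sum(axis=2, keepdims=True) + 1e-12)
    r = reference_rgb / (reference_rgb.sum(axis=2, keepdims=True) + 1e-12)
    return np.mean([phase_correlation_peak(t[..., c], r[..., c])
                    for c in range(3)])
```

    Discarding the magnitude spectrum is what makes phase correlation largely insensitive to global illumination changes, echoing property (i) above.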

  20. New Windows based Color Morphological Operators for Biomedical Image Processing

    NASA Astrophysics Data System (ADS)

    Pastore, Juan; Bouchet, Agustina; Brun, Marcel; Ballarin, Virginia

    2016-04-01

    Morphological image processing is well known as an efficient methodology for image processing and computer vision. With the wide use of color in many areas, interest in color perception and processing has grown rapidly. Many models have been proposed to extend morphological operators to color images, dealing with new problems not present in the binary and gray-level contexts. These solutions usually exploit the lattice structure of the color space, or endow it with a total order, in order to define basic operators with the required properties. In this work we propose a new locally defined ordering, in the context of window-based morphological operators, for the definition of erosion-like and dilation-like operators, which provides the properties expected from color morphology while avoiding some of the drawbacks of prior approaches. Experimental results show that the proposed color operators can be used efficiently for color image processing.
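    A window-based color erosion of the kind discussed above can be sketched by ordering the pixels inside each window with a scalar key and returning the full RGB vector of the minimal one, so no "false colors" absent from the input are created. Here luminance serves as a simple stand-in ordering; the paper defines its own locally defined ordering.

```python
import numpy as np

def color_erosion(img, size=3):
    """Window-based color erosion: in each size x size window, select the
    pixel with minimal luminance and return its full RGB vector."""
    h, w, _ = img.shape
    pad = size // 2
    lum = img @ np.array([0.299, 0.587, 0.114])   # scalar ordering key
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode='edge')
    plum = np.pad(lum, pad, mode='edge')
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            win = plum[i:i + size, j:j + size]
            k = np.unravel_index(win.argmin(), win.shape)
            out[i, j] = padded[i + k[0], j + k[1]]   # whole RGB vector
    return out
```

    A dilation-like operator follows by replacing `argmin` with `argmax`; because each output pixel is copied verbatim from the input, the operator never introduces colors outside the original palette.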
