Sample records for color image sensor

  1. Application of passive imaging polarimetry in the discrimination and detection of different color targets of identical shapes using color-blind imaging sensors

    NASA Astrophysics Data System (ADS)

    El-Saba, A. M.; Alam, M. S.; Surpanani, A.

    2006-05-01

    Important aspects of automatic pattern recognition systems are their ability to efficiently discriminate and detect proper targets with low false alarm rates. In this paper we extend the applications of passive imaging polarimetry to effectively discriminate and detect different color targets of identical shapes using a color-blind imaging sensor. For this case study we demonstrate that traditional color-blind polarization-insensitive imaging sensors that rely only on the spatial distribution of targets suffer from high false detection rates, especially in scenarios where multiple identical-shape targets are present. On the other hand, we show that color-blind polarization-sensitive imaging sensors can successfully and efficiently discriminate and detect true targets based on their color only. We highlight the main advantages of using our proposed polarization-encoded imaging sensor.

  2. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition.

    PubMed

    Park, Chulhee; Kang, Moon Gi

    2016-05-18

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications with the advantages that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component in each RGB color channel. To overcome color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible band component and the NIR band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors.
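
The core decomposition idea — removing the unwanted NIR contribution from each RGB channel while the N channel is kept intact — can be illustrated with a minimal linear-subtraction sketch. The per-channel leakage coefficients below are illustrative assumptions, not the spectrally estimated values of the paper:

```python
import numpy as np

def restore_rgb(raw_rgb, nir, k=(0.6, 0.5, 0.4)):
    """Subtract an estimated NIR contribution from each RGB channel.

    raw_rgb : H x W x 3 array captured without an IR cut-off filter
    nir     : H x W array from the N channel of the RGBN sensor
    k       : assumed per-channel NIR leakage coefficients (illustrative)
    """
    visible = raw_rgb.astype(float).copy()
    for c, kc in enumerate(k):
        visible[..., c] -= kc * nir          # remove NIR leakage per channel
    return np.clip(visible, 0, 255)
```

In the paper the coefficients are not fixed constants but follow from a per-pixel spectral estimation; this sketch only shows the decomposition step itself.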

  4. High-content analysis of single cells directly assembled on CMOS sensor based on color imaging.

    PubMed

    Tanaka, Tsuyoshi; Saeki, Tatsuya; Sunaga, Yoshihiko; Matsunaga, Tadashi

    2010-12-15

    A complementary metal oxide semiconductor (CMOS) image sensor was applied to high-content analysis of single cells which were assembled closely or directly onto the CMOS sensor surface. The direct assembly of cell groups on the CMOS sensor surface allows large-field (6.66 mm×5.32 mm, the entire active area of the CMOS sensor) imaging within a second. Trypan blue-stained and non-stained cells in the same field area on the CMOS sensor were successfully distinguished as white- and blue-colored images under white LED light irradiation. Furthermore, the chemiluminescent signals of each cell were successfully visualized as blue-colored images on the CMOS sensor only when HeLa cells were placed directly on the micro-lens array of the CMOS sensor. Our proposed approach will be a promising technique for real-time and high-content analysis of single cells in a large-field area based on color imaging.

  5. A 128×96 Pixel Stack-Type Color Image Sensor: Stack of Individual Blue-, Green-, and Red-Sensitive Organic Photoconductive Films Integrated with a ZnO Thin Film Transistor Readout Circuit

    NASA Astrophysics Data System (ADS)

    Seo, Hokuto; Aihara, Satoshi; Watabe, Toshihisa; Ohtake, Hiroshi; Sakai, Toshikatsu; Kubota, Misao; Egami, Norifumi; Hiramatsu, Takahiro; Matsuda, Tokiyoshi; Furuta, Mamoru; Hirao, Takashi

    2011-02-01

    A color image was produced by a vertically stacked image sensor with blue (B)-, green (G)-, and red (R)-sensitive organic photoconductive films, each having a thin-film transistor (TFT) array that uses a zinc oxide (ZnO) channel to read out the signal generated in each organic film. The number of pixels of the fabricated image sensor is 128×96 for each color, and the pixel size is 100×100 µm². The current on/off ratio of the ZnO TFT is over 10⁶, and the B-, G-, and R-sensitive organic photoconductive films show excellent wavelength selectivity. The stacked image sensor can produce a color image at 10 frames per second with a resolution corresponding to the pixel number. This result clearly shows that color separation is achieved without using any conventional color separation optical system such as a color filter array or a prism.

  6. Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy

    NASA Technical Reports Server (NTRS)

    Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers ("CTISs") having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3® digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.

  7. Validation Test Report for the Automated Optical Processing System (AOPS) Version 4.12

    DTIC Science & Technology

    2015-09-03

    NPP) with the VIIRS sensor package as well as data from the Geostationary Ocean Color Imager (GOCI) sensor, aboard the Communication Ocean and...capability • Prepare the NRT Geostationary Ocean Color Imager (GOCI) data stream for integration into operations. • Improvements in sensor...Navy (DON) Environmental Data Records (EDRs) Expeditionary Warfare (EXW) Geostationary Ocean Color Imager (GOCI) Gulf of Mexico (GOM) Hierarchical

  8. Illumination adaptation with rapid-response color sensors

    NASA Astrophysics Data System (ADS)

    Zhang, Xinchi; Wang, Quan; Boyer, Kim L.

    2014-09-01

    Smart lighting solutions based on imaging sensors such as webcams or time-of-flight sensors suffer from rising privacy concerns. In this work, we use low-cost non-imaging color sensors to measure the local luminous flux of different colors in an indoor space. These sensors have a much higher data acquisition rate and are much cheaper than many off-the-shelf commercial products. We have developed several applications with these sensors, including illumination feedback control and occupancy-driven lighting.

  9. Advanced microlens and color filter process technology for the high-efficiency CMOS and CCD image sensors

    NASA Astrophysics Data System (ADS)

    Fan, Yang-Tung; Peng, Chiou-Shian; Chu, Cheng-Yu

    2000-12-01

    New markets are emerging for digital electronic image devices, especially in visual communications, PC cameras, mobile/cell phones, security systems, toys, vehicle imaging systems and computer peripherals for document capture. One-chip image systems, in which the image sensor has a full digital interface, can bring image capture devices into our daily lives. Adding a color filter to such an image sensor, in a pattern of pixel mosaics or wide stripes, makes the captured image more realistic and colorful. A color filter transmits only light of the specific wavelength band matching the filter itself, blocking the rest of the image light source. The color filter process coats and patterns green, red and blue (or cyan, magenta and yellow) mosaic resists onto the matching pixels of the image sensing array. From the signal captured at each pixel, the scene image can then be reconstructed. The wide use of digital electronic cameras and multimedia applications today makes color filter technology increasingly important. Although it poses challenges, developing the color filter process is well worthwhile, offering shorter cycle times, excellent color quality, and high and stable yield. The key issues of the advanced color process that have to be solved and implemented are planarization and micro-lens technology. Many key points of color filter process technology that must be considered are also described in this paper.

  10. Active pixel sensors with substantially planarized color filtering elements

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Kemeny, Sabrina E. (Inventor)

    1999-01-01

    A semiconductor imaging system preferably having an active pixel sensor array compatible with a CMOS fabrication process. Color-filtering elements such as polymer filters and wavelength-converting phosphors can be integrated with the image sensor.

  11. Experimental single-chip color HDTV image acquisition system with 8M-pixel CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Shimamoto, Hiroshi; Yamashita, Takayuki; Funatsu, Ryohei; Mitani, Kohji; Nojiri, Yuji

    2006-02-01

    We have developed an experimental single-chip color HDTV image acquisition system using an 8M-pixel CMOS image sensor. The sensor has 3840 × 2160 effective pixels and is progressively scanned at 60 frames per second. We describe the color filter array and interpolation method used to improve image quality with a high-pixel-count single-chip sensor. We also describe an experimental image acquisition system used to measure spatial frequency characteristics in the horizontal direction. The results indicate good prospects for achieving a high-quality single-chip HDTV camera that reduces pseudo signals and maintains high spatial frequency characteristics within the frequency band for HDTV.

  12. Single sensor processing to obtain high resolution color component signals

    NASA Technical Reports Server (NTRS)

    Glenn, William E. (Inventor)

    2010-01-01

    A method for generating color video signals representative of color images of a scene includes the following steps: focusing light from the scene on an electronic image sensor via a filter having a tri-color filter pattern; producing, from outputs of the sensor, first and second relatively low resolution luminance signals; producing, from outputs of the sensor, a relatively high resolution luminance signal; producing, from a ratio of the relatively high resolution luminance signal to the first relatively low resolution luminance signal, a high band luminance component signal; producing, from outputs of the sensor, relatively low resolution color component signals; and combining each of the relatively low resolution color component signals with the high band luminance component signal to obtain relatively high resolution color component signals.
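
The claimed steps can be sketched as a ratio-based detail injection: each low-resolution color component is scaled by the ratio of high- to low-resolution luminance. This is a minimal reading of the claim; the small epsilon guarding against division by zero is an added assumption:

```python
import numpy as np

def sharpen_color_components(low_res_color, low_res_luma, high_res_luma, eps=1e-6):
    """Combine low-resolution color components with a high-band luminance
    signal: scale each color component by the ratio of the high-resolution
    luminance to the low-resolution luminance.

    low_res_color : H x W x 3 array of low-resolution color components
    low_res_luma  : H x W low-resolution luminance
    high_res_luma : H x W high-resolution luminance
    """
    ratio = high_res_luma / (low_res_luma + eps)   # high-band luminance factor
    return low_res_color * ratio[..., None]        # broadcast over channels
```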

  13. New false color mapping for image fusion

    NASA Astrophysics Data System (ADS)

    Toet, Alexander; Walraven, Jan

    1996-03-01

    A pixel-based color-mapping algorithm is presented that produces a fused false color rendering of two gray-level images representing different sensor modalities. The resulting images have a higher information content than each of the original images and retain sensor-specific image information. The unique component of each image modality is enhanced in the resulting fused color image representation. First, the common component of the two original input images is determined. Second, the common component is subtracted from the original images to obtain the unique component of each image. Third, the unique component of each image modality is subtracted from the image of the other modality. This step serves to enhance the representation of sensor-specific details in the final fused result. Finally, a fused color image is produced by displaying the images resulting from the last step through, respectively, the red and green channels of a color display. The method is applied to fuse thermal and visual images. The results show that the color mapping enhances the visibility of certain details and preserves the specificity of the sensor information. The fused images also have a fairly natural appearance. The fusion scheme involves only operations on corresponding pixels. The resolution of a fused image is therefore directly related to the resolution of the input images. Before fusing, the contrast of the images can be enhanced and their noise can be reduced by standard image-processing techniques. The color mapping algorithm is computationally simple. This implies that the investigated approaches can eventually be applied in real time and that the hardware needed is not too complicated or too voluminous (an important consideration when it has to fit in an airplane, for instance).
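
The four steps above can be sketched directly on pixel arrays. The pixelwise minimum is assumed here as the common-component operator, which the abstract does not fix:

```python
import numpy as np

def fuse_false_color(img_a, img_b):
    """Pixel-based false-color fusion of two coregistered gray-level images.

    Assumes the pixelwise minimum as the common component (one possible
    choice). Returns an H x W x 3 RGB image with the two enhanced
    modalities mapped to the red and green channels.
    """
    a = img_a.astype(float)
    b = img_b.astype(float)
    common = np.minimum(a, b)               # step 1: common component
    unique_a = a - common                   # step 2: modality-unique parts
    unique_b = b - common
    red = np.clip(a - unique_b, 0, 255)     # step 3: cross-subtraction
    green = np.clip(b - unique_a, 0, 255)
    blue = np.zeros_like(a)                 # step 4: display via R and G channels
    return np.dstack([red, green, blue])
```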

  14. Single-exposure quantitative phase imaging in color-coded LED microscopy.

    PubMed

    Lee, Wonchan; Jung, Daeseong; Ryu, Suho; Joo, Chulmin

    2017-04-03

    We demonstrate single-shot quantitative phase imaging (QPI) in a platform of color-coded LED microscopy (cLEDscope). The light source in a conventional microscope is replaced by a circular LED pattern that is trisected into subregions with equal area, assigned to red, green, and blue colors. Image acquisition with a color image sensor and subsequent computation based on weak object transfer functions allow for the QPI of a transparent specimen. We also provide a correction method for color-leakage, which may be encountered in implementing our method with consumer-grade LEDs and image sensors. Most commercially available LEDs and image sensors do not provide spectrally isolated emissions and pixel responses, generating significant error in phase estimation in our method. We describe the correction scheme for this color-leakage issue, and demonstrate improved phase measurement accuracy. The computational model and single-exposure QPI capability of our method are presented by showing images of calibrated phase samples and cellular specimens.

  15. CMOS image sensors as an efficient platform for glucose monitoring.

    PubMed

    Devadhasan, Jasmine Pramila; Kim, Sanghyo; Choi, Cheol Soo

    2013-10-07

    Complementary metal oxide semiconductor (CMOS) image sensors have been used previously in the analysis of biological samples. In the present study, a CMOS image sensor was used to monitor the concentration of oxidized mouse plasma glucose (86-322 mg dL⁻¹) based on photon count variation. Measurement of the concentration of oxidized glucose was dependent on changes in color intensity; color intensity increased with increasing glucose concentration. The high color density at high glucose concentrations strongly prevented photons from passing through the polydimethylsiloxane (PDMS) chip, which indicates that the photon count was governed by color intensity. Photons were detected by a photodiode in the CMOS image sensor and converted to digital numbers by an analog-to-digital converter (ADC). Additionally, UV-spectral analysis and time-dependent photon analysis proved the efficiency of the detection system. This simple, effective, and consistent method for glucose measurement shows that CMOS image sensors are efficient devices for monitoring glucose in point-of-care applications.

  16. Giga-pixel lensfree holographic microscopy and tomography using color image sensors.

    PubMed

    Isikman, Serhan O; Greenbaum, Alon; Luo, Wei; Coskun, Ahmet F; Ozcan, Aydogan

    2012-01-01

    We report Giga-pixel lensfree holographic microscopy and tomography using color sensor-arrays such as CMOS imagers that exhibit Bayer color filter patterns. Without physically removing these color filters coated on the sensor chip, we synthesize pixel super-resolved lensfree holograms, which are then reconstructed to achieve ~350 nm lateral resolution, corresponding to a numerical aperture of ~0.8, across a field-of-view of ~20.5 mm². This constitutes a digital image with ~0.7 Billion effective pixels in both amplitude and phase channels (i.e., ~1.4 Giga-pixels total). Furthermore, by changing the illumination angle (e.g., ± 50°) and scanning a partially-coherent light source across two orthogonal axes, super-resolved images of the same specimen from different viewing angles are created, which are then digitally combined to synthesize tomographic images of the object. Using this dual-axis lensfree tomographic imager running on a color sensor-chip, we achieve a 3D spatial resolution of ~0.35 µm × 0.35 µm × ~2 µm, in x, y and z, respectively, creating an effective voxel size of ~0.03 µm³ across a sample volume of ~5 mm³, which is equivalent to >150 Billion voxels. We demonstrate the proof-of-concept of this lensfree optical tomographic microscopy platform on a color CMOS image sensor by creating tomograms of micro-particles as well as a wild-type C. elegans nematode.

  17. Toward CMOS image sensor based glucose monitoring.

    PubMed

    Devadhasan, Jasmine Pramila; Kim, Sanghyo

    2012-09-07

    The complementary metal oxide semiconductor (CMOS) image sensor is a powerful tool for biosensing applications. In the present study, a CMOS image sensor has been exploited for detecting glucose levels with high sensitivity through simple photon count variation. Various concentrations of glucose (100 mg dL⁻¹ to 1000 mg dL⁻¹) were added onto a simple poly-dimethylsiloxane (PDMS) chip and the oxidation of glucose was catalyzed with the aid of an enzymatic reaction. Oxidized glucose produces a brown color with the help of a chromogen during the enzymatic reaction, and the color density varies with the glucose concentration. Photons pass through the PDMS chip with varying color density and hit the sensor surface. The photon count was registered by the CMOS image sensor depending on the color density, and hence on the glucose concentration, and was converted into digital form. By correlating the obtained digital results with glucose concentration it is possible to measure a wide range of blood glucose levels with good linearity based on the CMOS image sensor, and this technique will therefore promote convenient point-of-care diagnosis.
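
The correlation step — mapping measured photon counts back to concentration — amounts to fitting a calibration curve. A minimal sketch assuming a simple least-squares linear calibration; the numbers in the test below are invented, not the paper's measurements:

```python
import numpy as np

def fit_calibration(concentrations, photon_counts):
    """Least-squares line relating photon count to glucose concentration.
    Denser color passes fewer photons, so the fitted slope is negative.
    """
    slope, intercept = np.polyfit(photon_counts, concentrations, 1)
    return slope, intercept

def estimate_glucose(count, slope, intercept):
    """Convert a photon count to a concentration via the calibration line."""
    return slope * count + intercept
```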

  18. PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras.

    PubMed

    Zheng, Lei; Lukac, Rastislav; Wu, Xiaolin; Zhang, David

    2009-04-01

    Single-sensor digital color cameras use a process called color demosaicking to produce full color images from the data captured by a color filter array (CFA). The quality of demosaicked images is degraded by the sensor noise introduced during the image acquisition process. The conventional solution to combating CFA sensor noise is demosaicking first, followed by a separate denoising step. This strategy generates many noise-caused color artifacts in the demosaicking process, which are hard to remove in the denoising process. Few denoising schemes that work directly on the CFA images have been presented because of the difficulties arising from the red, green and blue interlaced mosaic pattern, yet a well-designed "denoising first and demosaicking later" scheme can have advantages such as fewer noise-caused color artifacts and cost-effective implementation. This paper presents a principal component analysis (PCA)-based spatially adaptive denoising algorithm, which works directly on the CFA data using a supporting window to analyze the local image statistics. By exploiting the spatial and spectral correlations existing in the CFA image, the proposed method can effectively suppress noise while preserving color edges and details. Experiments using both simulated and real CFA images indicate that the proposed scheme outperforms many existing approaches, including those sophisticated demosaicking and denoising schemes, in terms of both objective measurement and visual evaluation.
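
As a much-simplified stand-in for the paper's method, the core PCA step — project noisy patches onto the leading principal components and reconstruct — can be sketched as follows. The global (non-adaptive) treatment and the energy-based truncation are simplifications of the paper's local supporting-window scheme:

```python
import numpy as np

def pca_denoise_patches(patches, keep=0.9):
    """Denoise flattened image patches by projecting onto the leading
    principal components and reconstructing.

    patches : N x D array, each row one flattened patch
    keep    : fraction of total variance to retain (an assumed threshold)
    """
    mean = patches.mean(axis=0)
    centered = patches - mean
    cov = centered.T @ centered / max(len(patches) - 1, 1)
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    order = np.argsort(vals)[::-1]             # sort descending
    vals, vecs = vals[order], vecs[:, order]
    energy = np.cumsum(vals) / vals.sum()
    k = int(np.searchsorted(energy, keep)) + 1 # smallest basis reaching `keep`
    basis = vecs[:, :k]                        # retained principal components
    coeffs = centered @ basis                  # project onto retained basis
    return coeffs @ basis.T + mean             # reconstruct denoised patches
```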

  19. CMOS image sensor with organic photoconductive layer having narrow absorption band and proposal of stack type solid-state image sensors

    NASA Astrophysics Data System (ADS)

    Takada, Shunji; Ihama, Mikio; Inuiya, Masafumi

    2006-02-01

    Digital still cameras overtook film cameras in the Japanese market in 2000 in terms of sales volume, owing to their versatile functions. However, the image-capturing capabilities of color films, such as sensitivity and latitude, are still superior to those of digital image sensors. In this paper, we attribute the high performance of color films to their multi-layered structure, and propose solid-state image sensors with stacked organic photoconductive layers having narrow absorption bands on CMOS read-out circuits.

  20. Validation Test Report for the Automated Optical Processing System (AOPS) Version 4.10

    DTIC Science & Technology

    2015-08-25

    Geostationary Ocean Color Imager (GOCI) sensors. AOPS enables exploitation of multiple space-borne ocean color satellite sensors to provide optical...package as well as from the Geostationary Ocean Color Imager (GOCI) sensor aboard the Communication Ocean and Meteorological Satellite (COMS) satellite... GEOstationary Coastal and Air Pollution Events (GEO-CAPE) mission and provided to NRL courtesy of Mike Ondrusek and Zhongping Lee. AOP and IOP data were

  1. Stacked color image sensor using wavelength-selective organic photoconductive films with zinc-oxide thin film transistors as a signal readout circuit

    NASA Astrophysics Data System (ADS)

    Seo, Hokuto; Aihara, Satoshi; Namba, Masakazu; Watabe, Toshihisa; Ohtake, Hiroshi; Kubota, Misao; Egami, Norifumi; Hiramatsu, Takahiro; Matsuda, Tokiyoshi; Furuta, Mamoru; Nitta, Hiroshi; Hirao, Takashi

    2010-01-01

    Our group has been developing a new type of image sensor overlaid with three organic photoconductive films, each sensitive to only one of the primary color components (blue (B), green (G), or red (R) light), with the aim of developing a compact, high-resolution color camera without any color separation optical system. In this paper, we first present the unique characteristics of organic photoconductive films. The photoconductive properties of a film, in particular its wavelength selectivity, can be tuned through the choice of organic materials alone, well enough to divide the incident light into the three primary colors. Color separation with vertically stacked organic films is also shown. In addition, the resolution of organic photoconductive films, sufficient for high-definition television (HDTV), was confirmed in a shooting experiment using a camera tube. Second, as a step toward our goal, we fabricated a stacked organic image sensor with G- and R-sensitive organic photoconductive films, each of which had a zinc oxide (ZnO) thin film transistor (TFT) readout circuit, and demonstrated image pickup at a TV frame rate. A color image with a resolution corresponding to the pixel number of the ZnO TFT readout circuit was obtained from the stacked image sensor. These results show the potential for the development of high-resolution prism-less color cameras with stacked organic photoconductive films.

  2. Color filter array pattern identification using variance of color difference image

    NASA Astrophysics Data System (ADS)

    Shin, Hyun Jun; Jeon, Jong Ju; Eom, Il Kyu

    2017-07-01

    A color filter array is placed on the image sensor of a digital camera to acquire color images. Each pixel records only one color, since the image sensor can measure only one color per pixel. Therefore, the missing values are filled in by an interpolation process called demosaicing. The original and the interpolated pixels have different statistical characteristics. If the image is modified by manipulation or forgery, the color filter array pattern is altered, and this pattern change can be a clue for image forgery detection. However, most forgery detection algorithms have the disadvantage of assuming a known color filter array pattern. We present an identification method for the color filter array pattern. Initially, the local mean is eliminated to remove the background effect. Subsequently, a color difference block is constructed to emphasize the difference between the original and the interpolated pixels. The variance of the color difference image is proposed as a means of estimating the color filter array configuration. The experimental results show that the proposed method is effective in identifying the color filter array pattern. Compared with conventional methods, our method provides superior performance.
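
The underlying statistical cue — original samples and interpolated samples behave differently — can be illustrated with a much-simplified proxy for the paper's variance statistic, limited here to deciding which diagonal lattice of a demosaiced green channel carried the original samples. The residual-based statistic and the "even"/"odd" labels are assumptions of this sketch, not the paper's exact measure:

```python
import numpy as np

def classify_green_lattice(green):
    """Guess which diagonal 2x2 lattice carried the original green samples
    of a demosaicked image. Interpolated pixels are smoother, so the
    residual of a 4-neighbor average prediction has lower variance on the
    interpolated lattice than on the original one.
    """
    g = green.astype(float)
    # residual between each interior pixel and the mean of its 4 neighbors
    resid = g[1:-1, 1:-1] - 0.25 * (g[:-2, 1:-1] + g[2:, 1:-1]
                                    + g[1:-1, :-2] + g[1:-1, 2:])
    rows, cols = np.indices(resid.shape)
    # (row + col) parity is preserved by the 1-pixel crop
    even = (rows + cols) % 2 == 0
    var_even = resid[even].var()
    var_odd = resid[~even].var()
    # the lattice with the larger residual variance held the original samples
    return "even" if var_even > var_odd else "odd"
```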

  3. Color image fusion for concealed weapon detection

    NASA Astrophysics Data System (ADS)

    Toet, Alexander

    2003-09-01

    Recent advances in passive and active imaging sensor technology offer the potential to detect weapons that are concealed underneath a person's clothing or carried along in bags. Although the concealed weapons can sometimes easily be detected, it can be difficult to perceive their context, due to the non-literal nature of these images. Especially for dynamic crowd surveillance purposes it may be impossible to rapidly assess with certainty which individual in the crowd is the one carrying the observed weapon. Sensor fusion is an enabling technology that may be used to solve this problem. Through fusion the signal of the sensor that depicts the weapon can be displayed in the context provided by a sensor of a different modality. We propose an image fusion scheme in which non-literal imagery can be fused with standard color images such that the result clearly displays the observed weapons in the context of the original color image. The procedure is such that the relevant contrast details from the non-literal image are transferred to the color image without altering the original color distribution of this image. The result is a natural looking color image that fluently combines all details from both input sources. When an observer who performs a dynamic crowd surveillance task detects a weapon in the scene, he will also be able to quickly determine which person in the crowd is actually carrying the observed weapon (e.g. "the man with the red T-shirt and blue jeans"). The method is illustrated by the fusion of thermal 8-12 μm imagery with standard RGB color images.

  4. Ocean color imagery: Coastal zone color scanner

    NASA Technical Reports Server (NTRS)

    Hovis, W. A.

    1975-01-01

    Investigations into the feasibility of sensing ocean color from high altitude for determination of chlorophyll and sediment distributions were carried out using sensors on NASA aircraft, coordinated with surface measurements carried out by oceanographic vessels. Spectrometer measurements in 1971 and 1972 led to development of an imaging sensor now flying on a NASA U-2 and the Coastal Zone Color Scanner to fly on Nimbus G in 1978. Results of the U-2 effort show the imaging sensor to be of great value in sensing pollutants in the ocean.

  5. Superresolution with the focused plenoptic camera

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Chunev, Georgi; Lumsdaine, Andrew

    2011-03-01

    Digital images from a CCD or CMOS sensor with a color filter array must undergo a demosaicing process to combine the separate color samples into a single color image. This interpolation process can interfere with the subsequent superresolution process. Plenoptic superresolution, which relies on precise sub-pixel sampling across captured microimages, is particularly sensitive to such resampling of the raw data. In this paper we present an approach for superresolving plenoptic images that takes place at the time of demosaicing the raw color image data. Our approach exploits the interleaving provided by typical color filter arrays (e.g., Bayer filter) to further refine plenoptic sub-pixel sampling. Our rendering algorithm treats the color channels in a plenoptic image separately, which improves final superresolution by a factor of two. With appropriate plenoptic capture we show the theoretical possibility for rendering final images at full sensor resolution.

  6. Imaging system design and image interpolation based on CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Li, Yu-feng; Liang, Fei; Guo, Rui

    2009-11-01

    An image acquisition system is introduced, which consists of a color CMOS image sensor (OV9620), SRAM (CY62148), a CPLD (EPM7128AE) and a DSP (TMS320VC5509A). The CPLD implements the logic and timing control of the system. The SRAM stores the image data, and the DSP controls the image acquisition system through the SCCB (OmniVision Serial Camera Control Bus). The timing sequence of the CMOS image sensor OV9620 is analyzed. The imaging part and the high-speed image data memory unit are designed. The hardware and software design of the image acquisition and processing system is given. CMOS digital cameras use color filter arrays to sample different spectral components, such as red, green, and blue. At the location of each pixel only one color sample is taken, and the other colors must be interpolated from neighboring samples. We use an edge-oriented adaptive interpolation algorithm for the edge pixels and a bilinear interpolation algorithm for the non-edge pixels to improve the visual quality of the interpolated images. This method achieves high processing speed, decreases the computational complexity, and effectively preserves image edges.
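
The hybrid interpolation rule described above can be sketched for the green channel: interpolate along the direction of the smaller gradient at edge pixels, and bilinearly (4-neighbor average) elsewhere. The edge-detection threshold is an assumption, and a real implementation would vectorize the loops:

```python
import numpy as np

def interpolate_green(g, mask, thresh=10.0):
    """Fill missing green samples with an edge-oriented adaptive rule.

    g      : H x W green channel with placeholder values at missing pixels
    mask   : boolean H x W array, True where a real sample exists
    thresh : assumed gradient-difference threshold for declaring an edge
    """
    out = g.astype(float).copy()
    h, w = g.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if mask[i, j]:
                continue
            dh = abs(out[i, j - 1] - out[i, j + 1])   # horizontal gradient
            dv = abs(out[i - 1, j] - out[i + 1, j])   # vertical gradient
            if abs(dh - dv) > thresh:                 # edge pixel
                if dh < dv:                           # smoother horizontally
                    out[i, j] = 0.5 * (out[i, j - 1] + out[i, j + 1])
                else:                                 # smoother vertically
                    out[i, j] = 0.5 * (out[i - 1, j] + out[i + 1, j])
            else:                                     # flat region: bilinear
                out[i, j] = 0.25 * (out[i, j - 1] + out[i, j + 1]
                                    + out[i - 1, j] + out[i + 1, j])
    return out
```

Near a vertical intensity edge this interpolates along the edge instead of across it, avoiding the blurring a plain 4-neighbor average would introduce.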

  7. Demosaiced pixel super-resolution for multiplexed holographic color imaging

    PubMed Central

    Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan

    2016-01-01

    To synthesize a holographic color image, one can sequentially take three holograms at different wavelengths, e.g., at red (R), green (G) and blue (B) parts of the spectrum, and digitally merge them. To speed up the imaging process by a factor of three, a Bayer color sensor-chip can also be used to demultiplex three wavelengths that simultaneously illuminate the sample and digitally retrieve individual sets of holograms using the known transmission spectra of the Bayer color filters. However, because the pixels of different channels (R, G, B) on a Bayer color sensor are not at the same physical location, conventional demosaicing techniques generate color artifacts in holographic imaging using simultaneous multi-wavelength illumination. Here we demonstrate that pixel super-resolution can be merged into the color de-multiplexing process to significantly suppress the artifacts in wavelength-multiplexed holographic color imaging. This new approach, termed Demosaiced Pixel Super-Resolution (D-PSR), generates color images that are similar in performance to sequential illumination at three wavelengths, and therefore improves the speed of holographic color imaging by 3-fold. The D-PSR method is broadly applicable to holographic microscopy applications where high-resolution imaging and multi-wavelength illumination are desired. PMID:27353242

  8. Color filter array design based on a human visual model

    NASA Astrophysics Data System (ADS)

    Parmar, Manu; Reeves, Stanley J.

    2004-05-01

    To reduce cost and complexity associated with registering multiple color sensors, most consumer digital color cameras employ a single sensor. A mosaic of color filters is overlaid on a sensor array such that only one color channel is sampled per pixel location. The missing color values must be reconstructed from available data before the image is displayed. The quality of the reconstructed image depends fundamentally on the array pattern and the reconstruction technique. We present a design method for color filter array patterns that use red, green, and blue color channels in an RGB array. A model of the human visual response for luminance and opponent chrominance channels is used to characterize the perceptual error between a fully sampled and a reconstructed sparsely-sampled image. Demosaicking is accomplished using Wiener reconstruction. To ensure that the error criterion reflects perceptual effects, reconstruction is done in a perceptually uniform color space. A sequential backward selection algorithm is used to optimize the error criterion to obtain the sampling arrangement. Two different types of array patterns are designed: non-periodic and periodic arrays. The resulting array patterns outperform commonly used color filter arrays in terms of the error criterion.

  9. 4K x 2K pixel color video pickup system

    NASA Astrophysics Data System (ADS)

    Sugawara, Masayuki; Mitani, Kohji; Shimamoto, Hiroshi; Fujita, Yoshihiro; Yuyama, Ichiro; Itakura, Keijirou

    1998-12-01

    This paper describes the development of an experimental super-high-definition color video camera system. During the past several years there has been much interest in super-high-definition images as the next-generation image medium. One of the difficulties in implementing a super-high-definition motion imaging system is constructing the image-capturing section (camera). Even state-of-the-art semiconductor technology cannot realize an image sensor with enough pixels and a sufficient output data rate for super-high-definition images. The present study is an attempt to fill this gap. The authors solve the problem with a new imaging method in which four HDTV sensors are attached to a new color-separation optics so that their pixel sampling patterns form a checkerboard pattern. A series of imaging experiments demonstrates that this technique is an effective approach to capturing super-high-definition moving images in the present situation, where no image sensors exist for such images.

  10. Fully wireless pressure sensor based on endoscopy images

    NASA Astrophysics Data System (ADS)

    Maeda, Yusaku; Mori, Hirohito; Nakagawa, Tomoaki; Takao, Hidekuni

    2018-04-01

    In this paper, the result of developing a fully wireless pressure sensor based on endoscopy images for endoscopic surgery is reported for the first time. The sensor device has structural color with a nm-scale narrow gap, and the gap is changed by air pressure. The structural color of the sensor is acquired from camera images, so pressure detection can be realized with existing endoscope configurations only. The inner air pressure of the human body is measured with the sensor during flexible-endoscope operation. Air pressure monitoring has two important purposes. The first is to quantitatively measure tumor size under a constant air pressure for treatment selection. The second is to prevent endangering the patient through over-transmission of air. The developed sensor was evaluated, and the detection principle based only on endoscopy images was successfully demonstrated.

  11. Autographic theme extraction

    USGS Publications Warehouse

    Edson, D.; Colvocoresses, Alden P.

    1973-01-01

    Remote-sensor images, including aerial and space photographs, are generally recorded on film, where the differences in density create the image of the scene. With panchromatic and multiband systems the density differences are recorded in shades of gray. On color or color infrared film, with the emulsion containing dyes sensitive to different wavelengths, a color image is created by a combination of color densities. The colors, however, can be separated by filtering or other techniques, and the color image reduced to monochromatic images in which each of the separated bands is recorded as a function of the gray scale.

  12. Selection of optimal spectral sensitivity functions for color filter arrays.

    PubMed

    Parmar, Manu; Reeves, Stanley J

    2010-12-01

    A color image meant for human consumption can be appropriately displayed only if at least three distinct color channels are present. Typical digital cameras acquire three-color images with only one sensor. A color filter array (CFA) is placed on the sensor such that only one color is sampled at a particular spatial location. This sparsely sampled signal is then reconstructed to form a color image with information about all three colors at each location. In this paper, we show that the wavelength sensitivity functions of the CFA color filters affect both the color reproduction ability and the spatial reconstruction quality of recovered images. We present a method to select perceptually optimal color filter sensitivity functions based upon a unified spatial-chromatic sampling framework. A cost function independent of particular scenes is defined that expresses the error between a scene viewed by the human visual system and the reconstructed image that represents the scene. A constrained minimization of the cost function is used to obtain optimal values of color-filter sensitivity functions for several periodic CFAs. The sensitivity functions are shown to perform better than typical RGB and CMY color filters in terms of both the s-CIELAB ∆E error metric and a qualitative assessment.

  13. Enhancement of low light level images using color-plus-mono dual camera.

    PubMed

    Jung, Yong Ju

    2017-05-15

    In digital photography, improving imaging quality in low-light shooting is a key user need. Unfortunately, conventional smartphone cameras that use a single, small image sensor cannot provide satisfactory quality in low-light-level images. A color-plus-mono dual camera that consists of two horizontally separated image sensors, which simultaneously captures both a color and a mono image pair of the same scene, can be useful for improving the quality of low-light-level images. However, an incorrect image fusion between the color and mono image pair can also have negative effects, such as the introduction of severe visual artifacts in the fused images. This paper proposes a selective image fusion technique that applies adaptive guided-filter-based denoising and selective detail transfer to only those pixels deemed reliable with respect to binocular image fusion. We employ a dissimilarity measure and binocular just-noticeable-difference (BJND) analysis to identify unreliable pixels that are likely to cause visual artifacts during image fusion via joint color image denoising and detail transfer from the mono image. By constructing an experimental color-plus-mono camera system, we demonstrate that the BJND-aware denoising and selective detail transfer is helpful in improving image quality during low-light shooting.

  14. Robust Dehaze Algorithm for Degraded Image of CMOS Image Sensors.

    PubMed

    Qu, Chen; Bi, Du-Yan; Sui, Ping; Chao, Ai-Nong; Wang, Yun-Fei

    2017-09-22

    The CMOS (Complementary Metal-Oxide-Semiconductor) is a new type of solid-state image sensor widely used in object tracking, object recognition, intelligent navigation, and related fields. However, images captured by outdoor CMOS sensor devices are usually affected by suspended atmospheric particles (such as haze), causing reduced image contrast, color distortion, and other degradations. In view of this, we propose a novel dehazing approach based on a locally consistent Markov random field (MRF) framework. The neighboring clique in a traditional MRF is extended to a non-neighboring clique defined on locally consistent blocks, based on two clues: both the atmospheric light and the transmission map satisfy the character of local consistency. In this framework, our model can strengthen the restriction across the whole image while incorporating more sophisticated statistical priors, resulting in more expressive modeling power, thus solving inadequate detail recovery effectively and alleviating color distortion. Moreover, the locally consistent MRF framework obtains detail while maintaining better dehazing results, which effectively improves the quality of images captured by the CMOS image sensor. Experimental results verified that the proposed method has the combined advantages of detail recovery and color preservation.

  15. An ultrasensitive method of real time pH monitoring with complementary metal oxide semiconductor image sensor.

    PubMed

    Devadhasan, Jasmine Pramila; Kim, Sanghyo

    2015-02-09

    CMOS sensors are becoming a powerful tool in the biological and chemical fields. In this work, we introduce a new approach to quantifying various pH solutions with a CMOS image sensor. The CMOS image sensor based pH measurement produces high-accuracy analysis, making it a truly portable and user-friendly system. A pH-indicator-blended hydrogel matrix was fabricated as a thin film for accurate color development. A distinct red, green and blue (RGB) color change develops in the hydrogel film on applying various pH solutions (pH 1-14). A semi-quantitative pH estimate was acquired by visual readout. Further, the CMOS image sensor absorbs the RGB color intensity of the film, and the hue value is converted into digital numbers with the aid of an analog-to-digital converter (ADC) to determine the pH range of a solution. A chromaticity diagram and Euclidean distances represent the RGB color space and the differentiation of pH ranges, respectively. This technique is applicable to sensing various toxic chemicals and chemical vapors in situ. Ultimately, the entire approach can be integrated into a smartphone and operated in a user-friendly manner. Copyright © 2014 Elsevier B.V. All rights reserved.
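    The color-based read-out can be illustrated by extracting a hue value and matching a film reading against reference colors with a Euclidean distance, as the abstract describes; the reference RGB table below is hypothetical, not calibration data from the paper.

```python
import colorsys

def hue_of(rgb):
    """Hue (0-360 degrees) of an 8-bit RGB triple, as a single scalar
    suitable for digitization."""
    h, _, _ = colorsys.rgb_to_hsv(*(c / 255.0 for c in rgb))
    return h * 360.0

def classify_ph(rgb, references):
    """Return the reference pH whose RGB color is nearest (Euclidean
    distance) to the measured film color. `references` maps pH -> (R,G,B);
    the table passed in must come from a calibration run."""
    best_ph, best_dist = None, float("inf")
    for ph, ref in references.items():
        dist = sum((a - b) ** 2 for a, b in zip(rgb, ref)) ** 0.5
        if dist < best_dist:
            best_ph, best_dist = ph, dist
    return best_ph
```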

  16. Evaluation of an innovative color sensor for space application

    NASA Astrophysics Data System (ADS)

    Cessa, Virginie; Beauvivre, Stéphane; Pittet, Jacques; Dougnac, Virgile; Fasano, M.

    2017-11-01

    We present in this paper an evaluation of an innovative image sensor that provides color information without the need for organic filters. The sensor is a CMOS array with more than 4 million pixels that filters the incident photons into R, G, and B channels, delivering full resolution in color. Such a sensor, combining high performance with low power consumption, is of high interest for future space missions. The paper presents the characteristics of the detector as well as the first results of environmental testing.

  17. Single-snapshot 2D color measurement by plenoptic imaging system

    NASA Astrophysics Data System (ADS)

    Masuda, Kensuke; Yamanaka, Yuji; Maruyama, Go; Nagai, Sho; Hirai, Hideaki; Meng, Lingfei; Tosic, Ivana

    2014-03-01

    Plenoptic cameras enable capture of directional light-ray information, thus allowing applications such as digital refocusing, depth estimation, or multiband imaging. One of the most common plenoptic camera architectures contains a microlens array at the conventional image plane and a sensor at the back focal plane of the microlens array. We leverage the multiband imaging (MBI) function of this camera and develop a single-snapshot, single-sensor, high-color-fidelity camera. Our camera is based on a plenoptic system with XYZ filters inserted in the pupil plane of the main lens. To achieve high color measurement precision with this system, we perform an end-to-end optimization of the system model that includes light source information, object information, optical system information, plenoptic image processing and color estimation processing. The optimized system characteristics are exploited to build an XYZ plenoptic colorimetric camera prototype that achieves high color measurement precision. We describe an application of our colorimetric camera to color-shading evaluation of displays and show that it achieves a color accuracy of ΔE<0.01.

  18. Atmospheric correction for hyperspectral ocean color sensors

    NASA Astrophysics Data System (ADS)

    Ibrahim, A.; Ahmad, Z.; Franz, B. A.; Knobelspiesse, K. D.

    2017-12-01

    NASA's heritage Atmospheric Correction (AC) algorithm for multi-spectral ocean color sensors is inadequate for the new generation of spaceborne hyperspectral sensors, such as NASA's first hyperspectral Ocean Color Instrument (OCI) onboard the anticipated Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) satellite mission. The AC process must estimate and remove the atmospheric path radiance contribution due to the Rayleigh scattering by air molecules and by aerosols from the measured top-of-atmosphere (TOA) radiance. Further, it must also compensate for the absorption by atmospheric gases and correct for reflection and refraction of the air-sea interface. We present and evaluate an improved AC for hyperspectral sensors beyond the heritage approach by utilizing the additional spectral information of the hyperspectral sensor. The study encompasses a theoretical radiative transfer sensitivity analysis as well as a practical application of the Hyperspectral Imager for the Coastal Ocean (HICO) and the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensors.

  19. Visible Wavelength Color Filters Using Dielectric Subwavelength Gratings for Backside-Illuminated CMOS Image Sensor Technologies.

    PubMed

    Horie, Yu; Han, Seunghoon; Lee, Jeong-Yub; Kim, Jaekwan; Kim, Yongsung; Arbabi, Amir; Shin, Changgyun; Shi, Lilong; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; Lee, Hong-Seok; Hwang, Sungwoo; Faraon, Andrei

    2017-05-10

    We report transmissive color filters based on subwavelength dielectric gratings that can replace conventional dye-based color filters used in backside-illuminated CMOS image sensor (BSI CIS) technologies. The filters are patterned in an 80 nm-thick polysilicon film on a 115 nm-thick SiO2 spacer layer. They are optimized for operating at the primary RGB colors, exhibit peak transmittance of 60-80%, and have a nearly angle-insensitive response over a ±20° range. This technology enables shrinking of pixel sizes down to near a micrometer.

  20. A CMOS image sensor with stacked photodiodes for lensless observation system of digital enzyme-linked immunosorbent assay

    NASA Astrophysics Data System (ADS)

    Takehara, Hironari; Miyazawa, Kazuya; Noda, Toshihiko; Sasagawa, Kiyotaka; Tokuda, Takashi; Kim, Soo Hyeon; Iino, Ryota; Noji, Hiroyuki; Ohta, Jun

    2014-01-01

    A CMOS image sensor with stacked photodiodes was fabricated using 0.18 µm mixed-signal CMOS process technology. Two photodiodes were stacked at the same position in each pixel of the CMOS image sensor. The stacked photodiodes consist of a shallow high-concentration N-type layer (N+), a P-type well (PW), a deep N-type well (DNW), and the P-type substrate (P-sub). PW and P-sub were shorted to ground. By monitoring the voltages of N+ and DNW individually, we can observe two monochromatic colors simultaneously without using any color filters. The CMOS image sensor is suitable for fluorescence imaging, especially contact imaging such as a lensless observation system for digital enzyme-linked immunosorbent assay (ELISA). Since the fluorescence increases with time in digital ELISA, it is possible to observe fluorescence accurately by calculating the difference from the initial relation between the pixel values for both photodiodes.

  1. Satellite Ocean Color Sensor Design Concepts and Performance Requirements

    NASA Technical Reports Server (NTRS)

    McClain, Charles R.; Meister, Gerhard; Monosmith, Bryan

    2014-01-01

    In late 1978, the National Aeronautics and Space Administration (NASA) launched the Nimbus-7 satellite with the Coastal Zone Color Scanner (CZCS) and several other sensors, all of which provided major advances in Earth remote sensing. The inspiration for the CZCS is usually attributed to an article in Science by Clarke et al. who demonstrated that large changes in open ocean spectral reflectance are correlated to chlorophyll-a concentrations. Chlorophyll-a is the primary photosynthetic pigment in green plants (marine and terrestrial) and is used in estimating primary production, i.e., the amount of carbon fixed into organic matter during photosynthesis. Thus, accurate estimates of global and regional primary production are key to studies of the earth's carbon cycle. Because the investigators used an airborne radiometer, they were able to demonstrate the increased radiance contribution of the atmosphere with altitude that would be a major issue for spaceborne measurements. Since 1978, there has been much progress in satellite ocean color remote sensing such that the technique is well established and is used for climate change science and routine operational environmental monitoring. Also, the science objectives and accompanying methodologies have expanded and evolved through a succession of global missions, e.g., the Ocean Color and Temperature Sensor (OCTS), the Seaviewing Wide Field-of-view Sensor (SeaWiFS), the Moderate Resolution Imaging Spectroradiometer (MODIS), the Medium Resolution Imaging Spectrometer (MERIS), and the Global Imager (GLI). With each advance in science objectives, new and more stringent requirements for sensor capabilities (e.g., spectral coverage) and performance (e.g., signal-to-noise ratio, SNR) are established. The CZCS had four bands for chlorophyll and aerosol corrections. 
The Ocean Color Imager (OCI) recommended for the NASA Pre-Aerosol, Cloud, and Ocean Ecosystems (PACE) mission includes hyperspectral coverage at 5 nm resolution from 350 to 800 nanometers, with three additional discrete near-infrared (NIR) and shortwave infrared (SWIR) bands for ocean aerosol correction. Also, to avoid drift in sensor sensitivity from being interpreted as environmental change, climate change research requires rigorous monitoring of sensor stability. For SeaWiFS, monthly lunar imaging tracked stability to an accuracy of approximately 0.1%, which allowed the data to be used for climate studies [2]. It is now acknowledged by the international community that future missions and sensor designs need to accommodate lunar calibrations. An overview of ocean color remote sensing and a review of the progress made in ocean color remote sensing and the variety of research applications derived from global satellite ocean color data are provided. The purpose of this chapter is to discuss the design options for ocean color satellite radiometers, performance and testing criteria, and the sensor components (optics, detectors, electronics, etc.) that must be integrated into an instrument concept. These ultimately dictate the quality and quantity of data that can be delivered as a trade against mission cost. Historically, science and sensor technology have advanced in a "leap-frog" manner in that sensor design requirements for a mission are defined many years before a sensor is launched, and by the end of the mission, perhaps 15-20 years later, science applications and requirements are well beyond the capabilities of the sensor. Section 3 provides a summary of historical mission science objectives and sensor requirements. This progression is expected to continue in the future as long as sensor costs can be constrained to affordable levels while still allowing the incorporation of new technologies without incurring unacceptable risk to mission success. 
The IOCCG Report Number 13 discusses future ocean biology mission Level-1 requirements in depth.

  2. Narrow-Band Organic Photodiodes for High-Resolution Imaging.

    PubMed

    Han, Moon Gyu; Park, Kyung-Bae; Bulliard, Xavier; Lee, Gae Hwang; Yun, Sungyoung; Leem, Dong-Seok; Heo, Chul-Joon; Yagi, Tadao; Sakurai, Rie; Ro, Takkyun; Lim, Seon-Jeong; Sul, Sangchul; Na, Kyoungwon; Ahn, Jungchak; Jin, Yong Wan; Lee, Sangyoon

    2016-10-05

    There are growing opportunities and demands for image sensors that produce higher-resolution images, even in low-light conditions. Increasing the light-input area through a 3D architecture within the same pixel size can be an effective solution to this issue. Organic photodiodes (OPDs) that possess wavelength selectivity can allow for advancements in this regard. Here, we report on novel push-pull D-π-A dyes specially designed for Gaussian-shaped, narrow-band absorption and high photoelectric conversion. These p-type organic dyes work both as a color filter and as a source of photocurrents with linear and fast light responses, high sensitivity, and excellent stability when combined with C60 to form bulk heterojunctions (BHJs). The effectiveness of the OPD composed of the active color filter was demonstrated by obtaining a full-color image using a camera that contained an organic/Si hybrid complementary metal-oxide-semiconductor (CMOS) color image sensor.

  3. High-speed sorting of grains by color and surface texture

    USDA-ARS?s Scientific Manuscript database

    A high-speed, low-cost, image-based sorting device was developed to detect and separate grains with different colors/textures. The device directly combines a complementary metal–oxide–semiconductor (CMOS) color image sensor with a field-programmable gate array (FPGA) that was programmed to execute ...

  4. Fusion of spectral and panchromatic images using false color mapping and wavelet integrated approach

    NASA Astrophysics Data System (ADS)

    Zhao, Yongqiang; Pan, Quan; Zhang, Hongcai

    2006-01-01

    With the development of sensor technology, new image sensors have been introduced that provide a greater range of information to users. But because of radiation power limitations, there will always be some trade-off between spatial and spectral resolution in the image captured by a specific sensor. Images with high spatial resolution can locate objects with high accuracy, whereas images with high spectral resolution can be used to identify materials. Many applications in remote sensing require fusing low-resolution spectral images with panchromatic images to identify materials at high resolution in clutter. A fusion algorithm integrating pixel-based false color mapping and the wavelet transform is presented in this paper; the resulting images have a higher information content than either of the original images and retain sensor-specific image information. The simulation results show that this algorithm can enhance the visibility of certain details and preserve the differences between materials.
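    A pixel-based false color mapping of the general kind referenced here can be sketched by common-component subtraction (in the style of Toet's method); the channel assignment is an illustrative choice, not necessarily the paper's exact mapping, and the wavelet fusion stage is omitted.

```python
def false_color_map(pan, band_a, band_b):
    """Map a panchromatic value and two spectral-band values to an RGB
    triple. Subtracting the common component leaves each output channel
    emphasizing band-unique detail (sketch only)."""
    common = min(band_a, band_b)   # component shared by both bands
    red = pan                      # spatial detail from the panchromatic image
    green = band_a - common        # detail unique to band A
    blue = band_b - common         # detail unique to band B
    return red, green, blue
```

    The point of subtracting the common component is that material differences between the two bands, rather than shared brightness, drive the output color contrast.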

  5. A robust color signal processing with wide dynamic range WRGB CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Kawada, Shun; Kuroda, Rihito; Sugawa, Shigetoshi

    2011-01-01

    We have developed a robust color reproduction methodology based on a simple calculation with a new color matrix, using the previously developed wide-dynamic-range WRGB lateral overflow integration capacitor (LOFIC) CMOS image sensor. The image sensor was fabricated in a 0.18 μm CMOS technology and has a 45-degree oblique pixel array, a 4.2 μm effective pixel pitch, and W pixels. A W pixel was formed by replacing one of the two G pixels in the Bayer RGB color filter. The W pixel has high sensitivity throughout the visible waveband. An emerald-green and yellow (EGY) signal is generated from the difference between the W signal and the sum of the RGB signals. This EGY signal mainly includes emerald-green and yellow light. These colors are difficult to reproduce accurately with the conventional simple linear matrix because their wavelengths lie in the valleys of the spectral sensitivity characteristics of the RGB pixels. A new linear matrix based on the EGY-RGB signal was developed. Using this simple matrix, highly accurate color processing with a large margin against sensitivity fluctuation and noise has been achieved.
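    The EGY signal defined in the abstract (W minus the RGB sum) and the subsequent linear-matrix step can be sketched as below; the 3×4 matrix coefficients are placeholders, not the tuned matrix from the paper.

```python
def egy_signal(w, r, g, b):
    """EGY component as defined in the abstract: the wide-band W pixel
    response minus the sum of the RGB responses."""
    return w - (r + g + b)

def apply_color_matrix(w, r, g, b, matrix):
    """Color-correct one pixel with a 3x4 linear matrix applied to the
    (EGY, R, G, B) signal vector; `matrix` values are hypothetical and
    would normally be fit to a color chart."""
    signals = [egy_signal(w, r, g, b), r, g, b]
    return [sum(matrix[row][k] * signals[k] for k in range(4))
            for row in range(3)]
```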

  6. Color sensitivity of the multi-exposure HDR imaging process

    NASA Astrophysics Data System (ADS)

    Lenseigne, Boris; Jacobs, Valéry Ann; Withouck, Martijn; Hanselaer, Peter; Jonker, Pieter P.

    2013-04-01

    Multi-exposure high dynamic range (HDR) imaging builds HDR radiance maps by stitching together different views of the same scene taken with varying exposures. Practically, this process involves converting raw sensor data into low dynamic range (LDR) images, estimating the camera response curves, and using them to recover the irradiance for every pixel. During this export, white-balance settings and image stitching are applied, both of which influence the color balance of the final image. In this paper, we use a calibrated quasi-monochromatic light source, an integrating sphere, and a spectrograph to evaluate and compare the average spectral response of the image sensor. We finally draw some conclusions about the color consistency of HDR imaging and the additional steps necessary to use multi-exposure HDR imaging as a tool for measuring physical quantities such as radiance and luminance.
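    Recovering a per-pixel radiance estimate from response-corrected LDR exposures is typically done with a weighted average over exposures (Debevec–Malik style); the sketch below assumes a linear response and 8-bit pixel values, which is a simplification of the calibration step the abstract describes.

```python
import math

def recover_radiance(pixel_values, exposure_times, response=lambda z: z):
    """Weighted log-radiance estimate for one pixel from multiple exposures.
    A triangle weight over [0, 255] trusts mid-range values most, since
    near-saturated and near-black samples carry little information."""
    num, den = 0.0, 0.0
    for z, t in zip(pixel_values, exposure_times):
        w = min(z, 255 - z)  # triangle weight: zero at the extremes
        if w <= 0 or response(z) <= 0:
            continue
        num += w * (math.log(response(z)) - math.log(t))
        den += w
    if den == 0:
        return 0.0  # all samples saturated or black
    return math.exp(num / den)  # radiance estimate
```

    With a linear response, a pixel of true radiance E seen at exposure t reads z = E·t, so every usable exposure contributes the same log-radiance and the weighted average recovers E exactly.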

  7. Demosaiced pixel super-resolution in digital holography for multiplexed computational color imaging on-a-chip (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan

    2017-03-01

    Digital holographic on-chip microscopy achieves large space-bandwidth-products (e.g., >1 billion) by making use of pixel super-resolution techniques. To synthesize a digital holographic color image, one can take three sets of holograms representing the red (R), green (G) and blue (B) parts of the spectrum and digitally combine them to synthesize a color image. The data acquisition efficiency of this sequential illumination process can be improved by 3-fold using wavelength-multiplexed R, G and B illumination that simultaneously illuminates the sample, and using a Bayer color image sensor with known or calibrated transmission spectra to digitally demultiplex these three wavelength channels. This demultiplexing step is conventionally used with interpolation-based Bayer demosaicing methods. However, because the pixels of different color channels on a Bayer image sensor chip are not at the same physical location, conventional interpolation-based demosaicing process generates strong color artifacts, especially at rapidly oscillating hologram fringes, which become even more pronounced through digital wave propagation and phase retrieval processes. Here, we demonstrate that by merging the pixel super-resolution framework into the demultiplexing process, such color artifacts can be greatly suppressed. This novel technique, termed demosaiced pixel super-resolution (D-PSR) for digital holographic imaging, achieves very similar color imaging performance compared to conventional sequential R,G,B illumination, with 3-fold improvement in image acquisition time and data-efficiency. We successfully demonstrated the color imaging performance of this approach by imaging stained Pap smears. The D-PSR technique is broadly applicable to high-throughput, high-resolution digital holographic color microscopy techniques that can be used in resource-limited-settings and point-of-care offices.

  8. MUNSELL COLOR ANALYSIS OF LANDSAT COLOR-RATIO-COMPOSITE IMAGES OF LIMONITIC AREAS IN SOUTHWEST NEW MEXICO.

    USGS Publications Warehouse

    Kruse, Fred A.

    1984-01-01

    Green areas on Landsat 4/5 - 4/6 - 6/7 (red - blue - green) color-ratio-composite (CRC) images represent limonite on the ground. Color variation on such images was analyzed to determine the causes of the color differences within and between the green areas. Digital transformation of the CRC data into the modified cylindrical Munsell color coordinates - hue, value, and saturation - was used to correlate image color characteristics with properties of surficial materials. The amount of limonite visible to the sensor is the primary cause of color differences in green areas on the CRCs; vegetation density is a secondary cause. Digital color analysis of Landsat CRC images can be used to map unknown areas: color variation among green pixels allows discrimination among limonitic bedrock, nonlimonitic bedrock, nonlimonitic alluvium, and limonitic alluvium.
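    The cylindrical color transformation used in the analysis can be illustrated with a standard RGB-to-HSV conversion (Python's `colorsys` stands in for the modified Munsell transform, which differs in detail), followed by a simple hue gate for "green" pixels; the hue bounds and saturation floor are assumptions.

```python
import colorsys

def crc_to_hvs(r, g, b):
    """Map an 8-bit CRC pixel into cylindrical (hue, value, saturation)
    coordinates; HSV is an illustrative stand-in for the modified
    cylindrical Munsell coordinates used in the paper."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, v, s  # hue in degrees, value, saturation

def is_green(hue_deg, sat, min_sat=0.2):
    """Flag a pixel as 'green' (limonitic, per the abstract) by hue range;
    the 90-150 degree window is an assumed threshold."""
    return 90.0 <= hue_deg <= 150.0 and sat >= min_sat
```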

  9. Convolutional Sparse Coding for RGB+NIR Imaging.

    PubMed

    Hu, Xuemei; Heide, Felix; Dai, Qionghai; Wetzstein, Gordon

    2018-04-01

    Emerging sensor designs increasingly rely on novel color filter arrays (CFAs) to sample the incident spectrum in unconventional ways. In particular, capturing a near-infrared (NIR) channel along with conventional RGB color is an exciting new imaging modality. RGB+NIR sensing has broad applications in computational photography, such as low-light denoising; it has applications in computer vision, such as facial recognition and tracking; and it paves the way toward low-cost single-sensor RGB and depth imaging using structured illumination. However, cost-effective commercial CFAs suffer from severe spectral cross talk. This cross talk represents a major challenge in high-quality RGB+NIR imaging, rendering existing spatially multiplexed sensor designs impractical. In this work, we introduce a new approach to RGB+NIR image reconstruction using learned convolutional sparse priors. We demonstrate high-quality color and NIR imaging for challenging scenes, even including high-frequency structured NIR illumination. The effectiveness of the proposed method is validated on a large data set of experimental captures and on simulated benchmarks, which demonstrate that this work achieves unprecedented reconstruction quality.

  10. Use of a color CMOS camera as a colorimeter

    NASA Astrophysics Data System (ADS)

    Dallas, William J.; Roehrig, Hans; Redford, Gary R.

    2006-08-01

    In radiology diagnosis, film is being quickly replaced by computer monitors as the display medium for all imaging modalities. Increasingly, these monitors are color instead of monochrome. It is important to have instruments available to characterize the display devices in order to guarantee reproducible presentation of image material. We are developing an imaging colorimeter based on a commercially available color digital camera. The camera uses a sensor that has co-located pixels in all three primary colors.

  11. Validation Test Report for the Automated Optical Processing System (AOPS) Version 4.12

    DTIC Science & Technology

    2015-09-03

    the Geostationary Ocean Color Imager (GOCI) sensor, aboard the Communication Ocean and Meteorological Satellite (COMS) satellite. Additionally, this capability works in conjunction with AOPS • Improvements to the AOPS mosaicking capability • Prepare the NRT Geostationary Ocean Color Imager … Warfare (EXW), Geostationary Ocean Color Imager (GOCI), Gulf of Mexico (GOM), Hierarchical Data Format (HDF), Integrated Data Processing System (IDPS

  12. Spatial optical crosstalk in CMOS image sensors integrated with plasmonic color filters.

    PubMed

    Yu, Yan; Chen, Qin; Wen, Long; Hu, Xin; Zhang, Hui-Fang

    2015-08-24

    The imaging resolution of complementary metal-oxide-semiconductor (CMOS) image sensors (CIS) keeps increasing, to approximately 7k × 4k. As a result, the pixel size shrinks down to sub-2 μm, which greatly increases the spatial optical crosstalk. Recently, plasmonic color filters were proposed as an alternative to conventional colorant-pigmented ones. However, there is little work on their size effect and the spatial optical crosstalk in a CIS model. By numerical simulation, we investigate the size effect of nanocross-array plasmonic color filters and analyze the spatial optical crosstalk of each pixel in a Bayer array of a CIS with a pixel size of 1 μm. It is found that the small pixel size deteriorates the filtering performance of nanocross color filters and induces substantial spatial color crosstalk. By integrating the plasmonic filters in a low metal layer of the standard CMOS process, the crosstalk is reduced significantly, making the performance comparable to that of pigmented filters in a state-of-the-art backside-illumination CIS.

  13. Color correction pipeline optimization for digital cameras

    NASA Astrophysics Data System (ADS)

    Bianco, Simone; Bruna, Arcangelo R.; Naccari, Filippo; Schettini, Raimondo

    2013-04-01

    The processing pipeline of a digital camera converts the RAW image acquired by the sensor to a representation of the original scene that should be as faithful as possible. There are mainly two modules responsible for the color-rendering accuracy of a digital camera: the former is the illuminant estimation and correction module, and the latter is the color matrix transformation aimed at adapting the color response of the sensor to a standard color space. These two modules together form what may be called the color correction pipeline. We design and test new color correction pipelines that exploit different illuminant estimation and correction algorithms that are tuned and automatically selected on the basis of the image content. Since illuminant estimation is an ill-posed problem, illuminant correction is not error-free. An adaptive color matrix transformation module is optimized, taking into account the behavior of the first module in order to alleviate the amplification of color errors. The proposed pipelines are tested on a publicly available dataset of RAW images. Experimental results show that exploiting the cross-talk between the modules of the pipeline can lead to higher color-rendition accuracy.
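
    A minimal sketch of such a two-module color correction pipeline, assuming a gray-world illuminant estimator and a hypothetical camera-to-target 3×3 matrix (the `ccm` values below are illustrative, not taken from the paper):

```python
import numpy as np

def gray_world_gains(raw):
    """Gray-world illuminant estimate: assume the scene average is achromatic
    and scale each channel to the green-channel mean."""
    means = raw.reshape(-1, 3).mean(axis=0)
    return means[1] / means

def apply_pipeline(raw, ccm):
    """Diagonal (von Kries) illuminant correction followed by a 3x3 color
    matrix transform into the target color space."""
    balanced = raw * gray_world_gains(raw)
    return np.clip(balanced @ ccm.T, 0.0, 1.0)

# Hypothetical camera-to-target matrix; each row sums to 1 so neutrals map to neutrals.
ccm = np.array([[ 1.6, -0.4, -0.2],
                [-0.3,  1.5, -0.2],
                [-0.1, -0.5,  1.6]])
raw = np.random.default_rng(0).random((4, 4, 3)) * np.array([0.9, 1.0, 0.6])
out = apply_pipeline(raw, ccm)
```

    Because each row of the illustrative matrix sums to one, neutral (gray) inputs pass through the transform unchanged, which is the usual sanity check for a color matrix.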

  14. Corrections to the MODIS Aqua Calibration Derived From MODIS Aqua Ocean Color Products

    NASA Technical Reports Server (NTRS)

    Meister, Gerhard; Franz, Bryan Alden

    2013-01-01

    Ocean color products, such as chlorophyll-a concentration, can be derived from the top-of-atmosphere radiances measured by imaging sensors on earth-orbiting satellites. There are currently three National Aeronautics and Space Administration sensors in orbit capable of providing ocean color products. One of these sensors is the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Aqua satellite, whose ocean color products are currently the most widely used of the three. A recent improvement to the MODIS calibration methodology has used land targets to improve the calibration accuracy. This study evaluates the new calibration methodology and describes further calibration improvements that are built upon the new methodology by including ocean measurements in the form of global temporally averaged water-leaving reflectance measurements. The calibration improvements presented here mainly modify the calibration at the scan edges, taking advantage of the good performance of the land target trending in the center of the scan.

  15. Hardware-based image processing for high-speed inspection of grains

    USDA-ARS?s Scientific Manuscript database

    A high-speed, low-cost, image-based sorting device was developed to detect and separate grains with slight color differences and small defects. The device directly combines a complementary metal–oxide–semiconductor (CMOS) color image sensor with a field-programmable gate array (FPGA) which...

  16. A novel weighted-direction color interpolation

    NASA Astrophysics Data System (ADS)

    Tao, Jin-you; Yang, Jianfeng; Xue, Bin; Liang, Xiaofen; Qi, Yong-hong; Wang, Feng

    2013-08-01

    A digital camera captures images by covering the sensor surface with a color filter array (CFA), so only one color sample is obtained at each pixel location. Demosaicking is the process of estimating the missing color components of each pixel to obtain a full-resolution image. In this paper, a new algorithm based on edge adaptivity and different weighting factors is proposed. Our method effectively suppresses undesirable artifacts. Experimental results on the Kodak image set show that the proposed algorithm obtains higher-quality images than other methods in both numerical and visual terms.
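
    The edge-adaptive, weighted-direction idea can be illustrated with the gradient-inverse weighting common to many CFA interpolators (a generic sketch, not the authors' exact weights): estimate the missing green sample along each axis and blend the two estimates with weights that fall off as the local gradient grows, so interpolation follows edges rather than crossing them.

```python
import numpy as np

def green_at_rb(cfa, y, x):
    """Edge-adaptive estimate of the missing green value at a red/blue site
    (y, x) of a Bayer mosaic. Illustrative only; border pixels are not handled."""
    dh = abs(cfa[y, x - 1] - cfa[y, x + 1])   # horizontal gradient
    dv = abs(cfa[y - 1, x] - cfa[y + 1, x])   # vertical gradient
    wh, wv = 1.0 / (1.0 + dh), 1.0 / (1.0 + dv)
    gh = (cfa[y, x - 1] + cfa[y, x + 1]) / 2.0  # horizontal neighbor average
    gv = (cfa[y - 1, x] + cfa[y + 1, x]) / 2.0  # vertical neighbor average
    return (wh * gh + wv * gv) / (wh + wv)
```

    Across a strong horizontal edge the vertical gradient is large, so the vertical estimate is almost entirely suppressed and the result stays close to the in-row neighbors.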

  17. Features Extraction of Flotation Froth Images and BP Neural Network Soft-Sensor Model of Concentrate Grade Optimized by Shuffled Cuckoo Searching Algorithm

    PubMed Central

    Wang, Jie-sheng; Han, Shuang; Shen, Na-na; Li, Shu-xia

    2014-01-01

    For meeting the forecasting target of key technology indicators in the flotation process, a BP neural network soft-sensor model based on feature extraction from flotation froth images and optimized by the shuffled cuckoo search algorithm is proposed. Based on digital image processing techniques, the color features in HSI color space, the visual features based on the gray-level co-occurrence matrix, and the shape characteristics based on the geometric theory of flotation froth images are extracted, respectively, as the input variables of the proposed soft-sensor model. Then the isometric mapping method is used to reduce the input dimension, the network size, and the learning time of the BP neural network. Finally, a shuffled cuckoo search algorithm is adopted to optimize the BP neural network soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy. PMID:25133210

  18. An Illumination-Adaptive Colorimetric Measurement Using Color Image Sensor

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Hak; Lee, Jong-Hyub; Sohng, Kyu-Ik

    An image sensor for use as a colorimeter is characterized based on the CIE standard colorimetric observer. We use the method of least squares to derive a colorimetric characterization matrix between RGB output signals and CIE XYZ tristimulus values. This paper proposes an adaptive measuring method to obtain the chromaticity of colored scenes and illumination through a 3×3 camera transfer matrix under a given illuminant. Camera RGB outputs, sensor status values, and the photoelectric characteristic are used to obtain the chromaticity. Experimental results show that the proposed method achieves valid measurement performance.
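
    The least-squares characterization step described above can be sketched as follows; the training data here are synthetic stand-ins for measured chart patches:

```python
import numpy as np

def fit_characterization_matrix(rgb, xyz):
    """Least-squares fit of the 3x3 matrix M minimizing ||rgb @ M.T - xyz||,
    i.e. a colorimetric characterization from RGB outputs to XYZ tristimulus
    values. rgb, xyz: (N, 3) arrays of corresponding training samples."""
    X, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
    return X.T

# Synthetic check: recover a known matrix from noiseless training data.
rng = np.random.default_rng(1)
M_true = np.array([[0.41, 0.36, 0.18],
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])
rgb = rng.random((24, 3))
xyz = rgb @ M_true.T
M_est = fit_characterization_matrix(rgb, xyz)
```

    With noiseless, full-rank training data the fit is exact; with real patch measurements the same call returns the matrix minimizing the squared XYZ error over the chart.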

  19. Imaging tristimulus colorimeter for the evaluation of color in printed textiles

    NASA Astrophysics Data System (ADS)

    Hunt, Martin A.; Goddard, James S., Jr.; Hylton, Kathy W.; Karnowski, Thomas P.; Richards, Roger K.; Simpson, Marc L.; Tobin, Kenneth W., Jr.; Treece, Dale A.

    1999-03-01

    The high-speed production of textiles with complicated printed patterns presents a difficult problem for a colorimetric measurement system. Accurate assessment of product quality requires a repeatable measurement using a standard color space, such as CIELAB, and the use of a perceptually based color difference formula, e.g., ΔE_CMC. Image-based color sensors used for on-line measurement are not colorimetric by nature and require a non-linear transformation of the component colors based on the spectral properties of the incident illumination, imaging sensor, and the actual textile color. This research and development effort describes a benchtop, proof-of-principle system that implements a projection onto convex sets (POCS) algorithm for mapping component color measurements to standard tristimulus values and incorporates structural and color-based segmentation for improved precision and accuracy. The POCS algorithm consists of determining the closed convex sets that describe the constraints on the reconstruction of the true tristimulus values based on the measured imperfect values. We show that using a simulated D65 standard illuminant, commercial filters and a CCD camera, accurate (under perceptibility limits) per-region ΔE_CMC values can be measured on real textile samples.
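
    A POCS reconstruction of this kind alternates projections onto the constraint sets until a point in their intersection is reached. The toy sketch below uses two generic sets, measurement consistency and a physical-range box, rather than the paper's actual constraint sets:

```python
import numpy as np

def pocs(A, b, iters=100):
    """Alternate projections onto two convex sets: the affine set {x : A x = b}
    (consistency with the imperfect measurements) and the box [0, 1]^n
    (physically valid tristimulus values)."""
    x = np.zeros(A.shape[1])
    pinv = A.T @ np.linalg.inv(A @ A.T)   # ingredient of the affine projection
    for _ in range(iters):
        x = x + pinv @ (b - A @ x)        # project onto {x : A x = b}
        x = np.clip(x, 0.0, 1.0)          # project onto the box
    return x

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.2])
x_hat = pocs(A, b)   # lies in both sets when they intersect
```

    When the sets have a nonempty intersection, alternating projections converge to a point satisfying all constraints simultaneously; the practical system simply uses richer sets derived from the sensor and illuminant models.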

  20. Chromatic Modulator for High Resolution CCD or APS Devices

    NASA Technical Reports Server (NTRS)

    Hartley, Frank T. (Inventor); Hull, Anthony B. (Inventor)

    2003-01-01

    A system for providing high-resolution color separation in electronic imaging. Comb drives controllably oscillate a red-green-blue (RGB) color strip filter system (or otherwise) over an electronic imaging system such as a charge-coupled device (CCD) or active pixel sensor (APS). The color filter is modulated over the imaging array at a rate three or more times the frame rate of the imaging array. In so doing, the underlying active imaging elements are then able to detect separate color-separated images, which are then combined to provide a color-accurate frame which is then recorded as the representation of the recorded image. High pixel resolution is maintained. Registration is obtained between the color strip filter and the underlying imaging array through the use of electrostatic comb drives in conjunction with a spring suspension system.

  1. Nanophotonic Image Sensors

    PubMed Central

    Hu, Xin; Wen, Long; Yu, Yan; Cumming, David R. S.

    2016-01-01

    The increasing miniaturization and resolution of image sensors bring challenges to conventional optical elements such as spectral filters and polarizers, the properties of which are determined mainly by the materials used, including dye polymers. Recent developments in spectral filtering and optical manipulating techniques based on nanophotonics have opened up the possibility of an alternative method to control light spectrally and spatially. By integrating these technologies into image sensors, it will become possible to achieve high compactness, improved process compatibility, robust stability and tunable functionality. In this Review, recent representative achievements on nanophotonic image sensors are presented and analyzed including image sensors with nanophotonic color filters and polarizers, metamaterial‐based THz image sensors, filter‐free nanowire image sensors and nanostructured‐based multispectral image sensors. This novel combination of cutting edge photonics research and well‐developed commercial products may not only lead to an important application of nanophotonics but also offer great potential for next generation image sensors beyond Moore's Law expectations. PMID:27239941

  2. Thermal imaging of Al-CuO thermites

    NASA Astrophysics Data System (ADS)

    Densmore, John; Sullivan, Kyle; Kuntz, Joshua; Gash, Alex

    2013-06-01

    We have performed spatial in-situ temperature measurements of aluminum-copper oxide thermite reactions using high-speed color pyrometry. Electrophoretic deposition was used to create thermite microstructures. Tests were performed with micron- and nano-sized particles at different stoichiometries. The color pyrometry was performed using a high-speed color camera whose color filter array collects light within three spectral bands. Assuming a gray-body emission spectrum, a multi-wavelength ratio analysis allows a temperature to be calculated. An advantage of using a two-dimensional image sensor is that it allows heterogeneous flames to be measured with high spatial resolution. Light from the initial combustion of the Al-CuO can be differentiated from the light created by late-time oxidation with the atmosphere. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
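
    The gray-body ratio analysis can be illustrated with classical two-color pyrometry under the Wien approximation, where the unknown emissivity cancels in the intensity ratio (a generic sketch; the camera described above uses three calibrated spectral bands):

```python
import math

C2 = 1.4388e-2  # second radiation constant h*c/k_B [m*K]

def planck_wien(lam, T):
    """Wien-approximation spectral intensity of a gray body (arbitrary scale)."""
    return lam**-5 * math.exp(-C2 / (lam * T))

def ratio_temperature(i1, i2, lam1, lam2):
    """Two-color pyrometry: recover temperature from the intensity ratio at two
    wavelengths; the gray-body emissivity cancels in i1/i2."""
    return C2 * (1 / lam2 - 1 / lam1) / (math.log(i1 / i2) - 5 * math.log(lam2 / lam1))

# Round trip at 2500 K using blue (450 nm) and red (650 nm) channel centers.
lam_b, lam_r = 450e-9, 650e-9
T = ratio_temperature(planck_wien(lam_b, 2500.0), planck_wien(lam_r, 2500.0),
                      lam_b, lam_r)
```

    A real camera adds the filter and sensor spectral responses inside each band integral, but the ratio-then-invert structure is the same.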

  3. Accommodating multiple illumination sources in an imaging colorimetry environment

    NASA Astrophysics Data System (ADS)

    Tobin, Kenneth W., Jr.; Goddard, James S., Jr.; Hunt, Martin A.; Hylton, Kathy W.; Karnowski, Thomas P.; Simpson, Marc L.; Richards, Roger K.; Treece, Dale A.

    2000-03-01

    Researchers at the Oak Ridge National Laboratory have been developing a method for measuring color quality in textile products using a tri-stimulus color camera system. Initial results of the Imaging Tristimulus Colorimeter (ITC) were reported during 1999. These results showed that the projection onto convex sets (POCS) approach to color estimation could be applied to complex printed patterns on textile products with high accuracy and repeatability. Image-based color sensors used for on-line measurement are not colorimetric by nature and require a non-linear transformation of the component colors based on the spectral properties of the incident illumination, imaging sensor, and the actual textile color. Our earlier work reports these results for a broad-band, smoothly varying D65 standard illuminant. To move the measurement to the on-line environment with continuously manufactured textile webs, the illumination source becomes problematic. The spectral content of these light sources varies substantially from the D65 standard illuminant and can greatly impact the measurement performance of the POCS system. Although absolute color measurements are difficult to make under different illumination, referential measurements to monitor color drift provide a useful indication of product quality. Modifications to the ITC system have been implemented to enable the study of different light sources. These results and the subsequent analysis of relative color measurements will be reported for textile products.

  4. Bayer Demosaicking with Polynomial Interpolation.

    PubMed

    Wu, Jiaji; Anisetti, Marco; Wu, Wei; Damiani, Ernesto; Jeon, Gwanggil

    2016-08-30

    Demosaicking is a digital image process to reconstruct full-color digital images from the incomplete color samples output by an image sensor. It is an unavoidable process for many devices incorporating a camera sensor (e.g., mobile phones and tablets). In this paper, we introduce a new polynomial interpolation-based demosaicking (PID) algorithm. Our method makes three contributions: calculation of error predictors, edge classification based on color differences, and a refinement stage using a weighted-sum strategy. Our new predictors are generated on the basis of polynomial interpolation and can be used as a sound alternative to predictors obtained by bilinear or Laplacian interpolation. In this paper we show how our predictors can be combined according to the proposed edge classifier. After populating the three color channels, a refinement stage is applied to enhance image quality and reduce demosaicking artifacts. Our experimental results show that the proposed method substantially improves over existing demosaicking methods in terms of objective performance (CPSNR, S-CIELAB ΔE, and FSIM) and visual performance.
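
    For reference, the bilinear interpolation that PID's predictors are positioned against can be written as a normalized convolution over each channel's Bayer mask (an illustrative baseline, assuming an RGGB pattern and even image dimensions):

```python
import numpy as np

def bilinear_demosaick(cfa):
    """Bilinear demosaicking of an RGGB Bayer mosaic via normalized
    convolution: each missing sample becomes the average of the nearest
    available neighbors of that channel; measured samples are kept as-is."""
    h, w = cfa.shape
    yy, xx = np.mgrid[0:h, 0:w]
    masks = {"R": (yy % 2 == 0) & (xx % 2 == 0),
             "G": (yy % 2) != (xx % 2),
             "B": (yy % 2 == 1) & (xx % 2 == 1)}
    k = np.array([[0.25, 0.5, 0.25],
                  [0.5,  1.0, 0.5],
                  [0.25, 0.5, 0.25]])

    def conv(img):
        p = np.pad(img, 1)
        acc = np.zeros((h, w))
        for dy in range(3):
            for dx in range(3):
                acc += k[dy, dx] * p[dy:dy + h, dx:dx + w]
        return acc

    out = np.zeros((h, w, 3))
    for c, ch in enumerate("RGB"):
        m = masks[ch]
        out[..., c] = conv(cfa * m) / conv(m.astype(float))
        out[m, c] = cfa[m]   # keep the measured samples exactly
    return out
```

    This baseline blurs across edges, which is exactly the failure mode that edge classifiers and refined predictors such as PID's are designed to avoid.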

  5. Video and thermal imaging system for monitoring interiors of high temperature reaction vessels

    DOEpatents

    Saveliev, Alexei V [Chicago, IL; Zelepouga, Serguei A [Hoffman Estates, IL; Rue, David M [Chicago, IL

    2012-01-10

    A system and method for real-time monitoring of the interior of a combustor or gasifier wherein light emitted by the interior surface of a refractory wall of the combustor or gasifier is collected using an imaging fiber optic bundle having a light receiving end and a light output end. Color information in the light is captured with primary color (RGB) filters or complementary color (GMCY) filters placed over individual pixels of color sensors disposed within a digital color camera in a Bayer mosaic layout, producing RGB signal outputs or GMCY signal outputs. The signal outputs are processed using intensity ratios of the primary color filters or the complementary color filters, producing video images and/or thermal images of the interior of the combustor or gasifier.

  6. Temperature measurement with industrial color camera devices

    NASA Astrophysics Data System (ADS)

    Schmidradler, Dieter J.; Berndorfer, Thomas; van Dyck, Walter; Pretschuh, Juergen

    1999-05-01

    This paper discusses color-camera-based temperature measurement. Usually, visual imaging and infrared image sensing are treated as two separate disciplines. We show that a well-selected color camera device can be a cheaper, more robust and more sophisticated solution for optical temperature measurement in several cases. Herein, only implementation fragments and important restrictions for the sensing element are discussed. Our aim is to draw the reader's attention to the use of visual image sensors for measuring thermal radiation and temperature, and to give reasons for the need for improved technologies for infrared camera devices. With AVL List, our industrial partner, we successfully used the proposed sensor to perform temperature measurements of flames inside the combustion chamber of diesel engines, which finally led to the presented insights.

  7. Calibration Uncertainty in Ocean Color Satellite Sensors and Trends in Long-term Environmental Records

    NASA Technical Reports Server (NTRS)

    Turpie, Kevin R.; Eplee, Robert E., Jr.; Franz, Bryan A.; Del Castillo, Carlos

    2014-01-01

    Launched in late 2011, the Visible Infrared Imaging Radiometer Suite (VIIRS) aboard the Suomi National Polar-orbiting Partnership (NPP) spacecraft is being evaluated by NASA to determine whether this sensor can continue the ocean color data record established through the Sea-Viewing Wide Field-of-view Sensor (SeaWiFS) and the MODerate resolution Imaging Spectroradiometer (MODIS). To this end, Goddard Space Flight Center generated evaluation ocean color data products using calibration techniques and algorithms established by NASA during the SeaWiFS and MODIS missions. The calibration trending was subjected to some initial sensitivity and uncertainty analyses. Here we present an introductory assessment of how the NASA-produced time series of ocean color is influenced by uncertainty in trending instrument response over time. The results help quantify the uncertainty in measuring regional and global biospheric trends in the ocean using satellite remote sensing, which better define the roles of such records in climate research.

  8. Wavelength- or Polarization-Selective Thermal Infrared Detectors for Multi-Color or Polarimetric Imaging Using Plasmonics and Metamaterials

    PubMed Central

    Ogawa, Shinpei; Kimata, Masafumi

    2017-01-01

    Wavelength- or polarization-selective thermal infrared (IR) detectors are promising for various novel applications such as fire detection, gas analysis, multi-color imaging, multi-channel detectors, recognition of artificial objects in a natural environment, and facial recognition. However, these functions require additional filters or polarizers, which leads to high cost and technical difficulties related to integration of many different pixels in an array format. Plasmonic metamaterial absorbers (PMAs) can impart wavelength or polarization selectivity to conventional thermal IR detectors simply by controlling the surface geometry of the absorbers to produce surface plasmon resonances at designed wavelengths or polarizations. This enables integration of many different pixels in an array format without any filters or polarizers. We review our recent advances in wavelength- and polarization-selective thermal IR sensors using PMAs for multi-color or polarimetric imaging. The absorption mechanism defined by the surface structures is discussed for three types of PMAs—periodic crystals, metal-insulator-metal and mushroom-type PMAs—to demonstrate appropriate applications. Our wavelength- or polarization-selective uncooled IR sensors using various PMAs and multi-color image sensors are then described. Finally, high-performance mushroom-type PMAs are investigated. These advanced functional thermal IR detectors with wavelength or polarization selectivity will provide great benefits for a wide range of applications. PMID:28772855

  9. Wavelength- or Polarization-Selective Thermal Infrared Detectors for Multi-Color or Polarimetric Imaging Using Plasmonics and Metamaterials.

    PubMed

    Ogawa, Shinpei; Kimata, Masafumi

    2017-05-04

    Wavelength- or polarization-selective thermal infrared (IR) detectors are promising for various novel applications such as fire detection, gas analysis, multi-color imaging, multi-channel detectors, recognition of artificial objects in a natural environment, and facial recognition. However, these functions require additional filters or polarizers, which leads to high cost and technical difficulties related to integration of many different pixels in an array format. Plasmonic metamaterial absorbers (PMAs) can impart wavelength or polarization selectivity to conventional thermal IR detectors simply by controlling the surface geometry of the absorbers to produce surface plasmon resonances at designed wavelengths or polarizations. This enables integration of many different pixels in an array format without any filters or polarizers. We review our recent advances in wavelength- and polarization-selective thermal IR sensors using PMAs for multi-color or polarimetric imaging. The absorption mechanism defined by the surface structures is discussed for three types of PMAs-periodic crystals, metal-insulator-metal and mushroom-type PMAs-to demonstrate appropriate applications. Our wavelength- or polarization-selective uncooled IR sensors using various PMAs and multi-color image sensors are then described. Finally, high-performance mushroom-type PMAs are investigated. These advanced functional thermal IR detectors with wavelength or polarization selectivity will provide great benefits for a wide range of applications.

  10. Nanophotonic Image Sensors.

    PubMed

    Chen, Qin; Hu, Xin; Wen, Long; Yu, Yan; Cumming, David R S

    2016-09-01

    The increasing miniaturization and resolution of image sensors bring challenges to conventional optical elements such as spectral filters and polarizers, the properties of which are determined mainly by the materials used, including dye polymers. Recent developments in spectral filtering and optical manipulating techniques based on nanophotonics have opened up the possibility of an alternative method to control light spectrally and spatially. By integrating these technologies into image sensors, it will become possible to achieve high compactness, improved process compatibility, robust stability and tunable functionality. In this Review, recent representative achievements on nanophotonic image sensors are presented and analyzed including image sensors with nanophotonic color filters and polarizers, metamaterial-based THz image sensors, filter-free nanowire image sensors and nanostructured-based multispectral image sensors. This novel combination of cutting edge photonics research and well-developed commercial products may not only lead to an important application of nanophotonics but also offer great potential for next generation image sensors beyond Moore's Law expectations. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Identifying Rhodamine Dye Plume Sources in Near-Shore Oceanic Environments by Integration of Chemical and Visual Sensors

    PubMed Central

    Tian, Yu; Kang, Xiaodong; Li, Yunyi; Li, Wei; Zhang, Aiqun; Yu, Jiangchen; Li, Yiping

    2013-01-01

    This article presents a strategy for identifying the source location of a chemical plume in near-shore oceanic environments where the plume develops under the influence of turbulence, tides and waves. This strategy includes two modules, source declaration (or identification) and source verification, embedded in a subsumption architecture. Algorithms for source identification are derived from moth-inspired plume tracing strategies based on a chemical sensor. The in-water test missions, conducted in November 2002 at San Clemente Island (California, USA), in June 2003 at Duck (North Carolina, USA), and in October 2010 at Dalian Bay (China), successfully identified the source locations after autonomous underwater vehicles tracked the rhodamine dye plumes with significant meander over 100 meters. The objective of the verification module is to verify the declared plume source using a visual sensor. Because images taken in near-shore oceanic environments are very vague and colors in the images are not well defined, we adopt a fuzzy color extractor to segment the color components and recognize the chemical plume and its source by measuring color similarity. The source verification module is tested on images taken during the CPT missions. PMID:23507823
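
    Color-similarity matching of the kind used for plume recognition can be sketched as a chromaticity-distance threshold (a crude stand-in for the fuzzy color extractor; the reference color and tolerance below are illustrative):

```python
import numpy as np

def color_similarity_mask(img, ref, tol=0.15):
    """Flag pixels whose normalized RGB (chromaticity) lies within `tol` of a
    reference plume color. Normalizing out intensity gives some robustness to
    the uneven underwater lighting described above."""
    chrom = img / np.clip(img.sum(axis=-1, keepdims=True), 1e-6, None)
    ref_chrom = np.asarray(ref, dtype=float) / np.sum(ref)
    return np.linalg.norm(chrom - ref_chrom, axis=-1) < tol
```

    A fuzzy extractor replaces the hard threshold with membership functions per color component, but the similarity-to-reference structure is the same.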

  12. Process simulation in digital camera system

    NASA Astrophysics Data System (ADS)

    Toadere, Florin

    2012-06-01

    The goal of this paper is to simulate the functionality of a digital camera system. The simulations cover the conversion from light to numerical signal as well as color processing and rendering. We consider the image acquisition system to be linear, shift-invariant and axial; light propagation is orthogonal to the system. We use a spectral image processing algorithm to simulate the radiometric properties of a digital camera. The algorithm takes into consideration the transmittances of the light source, lenses and filters, and the quantum efficiency of a CMOS (complementary metal oxide semiconductor) sensor. The optical part is characterized by a multiple convolution between the point spread functions of the optical components. We use a Cooke triplet, the aperture, the light fall-off and the optical part of the CMOS sensor. The electrical part consists of Bayer sampling, interpolation, signal-to-noise ratio, dynamic range, analog-to-digital conversion and JPEG compression. We reconstruct the noisy, blurred image by blending differently exposed images to reduce photon shot noise; we also filter the fixed-pattern noise and sharpen the image. Then come the color processing blocks: white balancing, color correction, gamma correction, and conversion from the XYZ color space to the RGB color space. For color reproduction we use an OLED (organic light emitting diode) monitor. The analysis can be useful to assist students and engineers in image quality evaluation and imaging system design. Many other configurations of blocks can be used in our analysis.
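
    Two of the rendering blocks named above, XYZ-to-RGB conversion and gamma correction, can be sketched with the standard sRGB definitions (shown as generic examples; the paper's own matrices and curves may differ):

```python
import numpy as np

# Standard D65 XYZ-to-linear-sRGB matrix (IEC 61966-2-1).
M_XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                          [-0.9689,  1.8758,  0.0415],
                          [ 0.0557, -0.2040,  1.0570]])

def srgb_encode(linear):
    """Piecewise sRGB gamma curve applied to linear RGB in [0, 1]."""
    linear = np.clip(linear, 0.0, 1.0)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * linear ** (1 / 2.4) - 0.055)

def xyz_to_srgb(xyz):
    """Convert CIE XYZ (D65-adapted) to display-ready sRGB."""
    return srgb_encode(xyz @ M_XYZ_TO_SRGB.T)
```

    Mapping the D65 white point through the matrix yields (1, 1, 1), the usual check that a conversion matrix and its intended white point agree.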

  13. The TRICLOBS Dynamic Multi-Band Image Data Set for the Development and Evaluation of Image Fusion Methods

    PubMed Central

    Hogervorst, Maarten A.; Pinkus, Alan R.

    2016-01-01

    The fusion and enhancement of multiband nighttime imagery for surveillance and navigation has been the subject of extensive research for over two decades. Despite the ongoing efforts in this area there is still only a small number of static multiband test images available for the development and evaluation of new image fusion and enhancement methods. Moreover, dynamic multiband imagery is also currently lacking. To fill this gap we present the TRICLOBS dynamic multi-band image data set containing sixteen registered visual (0.4–0.7μm), near-infrared (NIR, 0.7–1.0μm) and long-wave infrared (LWIR, 8–14μm) motion sequences. They represent different military and civilian surveillance scenarios registered in three different scenes. Scenes include (military and civilian) people that are stationary, walking or running, or carrying various objects. Vehicles, foliage, and buildings or other man-made structures are also included in the scenes. This data set is primarily intended for the development and evaluation of image fusion, enhancement and color mapping algorithms for short-range surveillance applications. The imagery was collected during several field trials with our newly developed TRICLOBS (TRI-band Color Low-light OBServation) all-day all-weather surveillance system. This system registers a scene in the Visual, NIR and LWIR part of the electromagnetic spectrum using three optically aligned sensors (two digital image intensifiers and an uncooled long-wave infrared microbolometer). The three sensor signals are mapped to three individual RGB color channels, digitized, and stored as uncompressed RGB (false) color frames. The TRICLOBS data set enables the development and evaluation of (both static and dynamic) image fusion, enhancement and color mapping algorithms. To allow the development of realistic color remapping procedures, the data set also contains color photographs of each of the three scenes. 
The color statistics derived from these photographs can be used to define color mappings that give the multi-band imagery a realistic color appearance. PMID:28036328

  14. The TRICLOBS Dynamic Multi-Band Image Data Set for the Development and Evaluation of Image Fusion Methods.

    PubMed

    Toet, Alexander; Hogervorst, Maarten A; Pinkus, Alan R

    2016-01-01

    The fusion and enhancement of multiband nighttime imagery for surveillance and navigation has been the subject of extensive research for over two decades. Despite the ongoing efforts in this area there is still only a small number of static multiband test images available for the development and evaluation of new image fusion and enhancement methods. Moreover, dynamic multiband imagery is also currently lacking. To fill this gap we present the TRICLOBS dynamic multi-band image data set containing sixteen registered visual (0.4-0.7μm), near-infrared (NIR, 0.7-1.0μm) and long-wave infrared (LWIR, 8-14μm) motion sequences. They represent different military and civilian surveillance scenarios registered in three different scenes. Scenes include (military and civilian) people that are stationary, walking or running, or carrying various objects. Vehicles, foliage, and buildings or other man-made structures are also included in the scenes. This data set is primarily intended for the development and evaluation of image fusion, enhancement and color mapping algorithms for short-range surveillance applications. The imagery was collected during several field trials with our newly developed TRICLOBS (TRI-band Color Low-light OBServation) all-day all-weather surveillance system. This system registers a scene in the Visual, NIR and LWIR part of the electromagnetic spectrum using three optically aligned sensors (two digital image intensifiers and an uncooled long-wave infrared microbolometer). The three sensor signals are mapped to three individual RGB color channels, digitized, and stored as uncompressed RGB (false) color frames. The TRICLOBS data set enables the development and evaluation of (both static and dynamic) image fusion, enhancement and color mapping algorithms. To allow the development of realistic color remapping procedures, the data set also contains color photographs of each of the three scenes. 
The color statistics derived from these photographs can be used to define color mappings that give the multi-band imagery a realistic color appearance.

  15. Color quality improvement of reconstructed images in color digital holography using speckle method and spectral estimation

    NASA Astrophysics Data System (ADS)

    Funamizu, Hideki; Onodera, Yusei; Aizu, Yoshihisa

    2018-05-01

    In this study, we report color quality improvement of reconstructed images in color digital holography using the speckle method and spectral estimation. In this technique, an object is illuminated by a speckle field to produce an object wave, while a plane wave is used as a reference wave. For three wavelengths, the interference patterns of the two coherent waves are recorded as digital holograms on an image sensor. The speckle fields are changed by moving a ground glass plate in an in-plane direction, and a number of holograms are acquired to average the reconstructed images. After the averaging process over images reconstructed from multiple holograms, we use the Wiener estimation method to obtain spectral transmittance curves in the reconstructed images. The color reproducibility of this method is demonstrated and evaluated using a Macbeth color chart film and stained onion cells.
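
    The Wiener estimation step can be sketched as the standard linear estimator built from a smoothness prior on the spectrum; the sensor model, prior correlation and noise level below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def wiener_estimator(S, corr=0.9, noise_var=1e-4):
    """Wiener estimation matrix W = R S^T (S R S^T + sigma^2 I)^(-1) mapping
    camera responses back to spectra, using a first-order Markov smoothness
    prior R on the spectral samples. S: (channels, wavelengths) system matrix."""
    n = S.shape[1]
    i, j = np.mgrid[0:n, 0:n]
    R = corr ** np.abs(i - j)          # neighboring wavelengths are correlated
    return R @ S.T @ np.linalg.inv(S @ R @ S.T + noise_var * np.eye(S.shape[0]))

# Toy three-channel sensor over 31 spectral samples (Gaussian sensitivities).
lam = np.linspace(400.0, 700.0, 31)
S = np.stack([np.exp(-0.5 * ((lam - c) / 30.0) ** 2) for c in (450.0, 550.0, 650.0)])
W = wiener_estimator(S)
spectrum = np.exp(-0.5 * ((lam - 560.0) / 50.0) ** 2)  # smooth test transmittance
estimate = W @ (S @ spectrum)                          # reconstruct from 3 responses
```

    With a small noise term, the estimated spectrum reproduces the original camera responses almost exactly while the prior fills in the unobserved spectral detail smoothly.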

  16. Few-photon color imaging using energy-dispersive superconducting transition-edge sensor spectrometry

    NASA Astrophysics Data System (ADS)

    Niwa, Kazuki; Numata, Takayuki; Hattori, Kaori; Fukuda, Daiji

    2017-04-01

    Highly sensitive spectral imaging is increasingly being demanded in bioanalysis research and industry to obtain the maximum information possible from molecules of different colors. We introduce an application of the superconducting transition-edge sensor (TES) technique to highly sensitive spectral imaging. A TES is an energy-dispersive photodetector that can distinguish the wavelength of each incident photon. Its effective spectral range is from the visible to the infrared (IR), up to 2800 nm, which is beyond the capabilities of other photodetectors. TES was employed in this study in a fiber-coupled optical scanning microscopy system, and a test sample of a three-color ink pattern was observed. A red-green-blue (RGB) image and a near-IR image were successfully obtained in the few-incident-photon regime, whereas only a black and white image could be obtained using a photomultiplier tube. Spectral data were also obtained from a selected focal area out of the entire image. The results of this study show that TES is feasible for use as an energy-dispersive photon-counting detector in spectral imaging applications.
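
    An energy-dispersive detector like a TES assigns each photon a wavelength via λ = hc/E, after which photons can be binned into color channels; the RGB band edges below are illustrative choices:

```python
import math

H_C = 6.62607015e-34 * 2.99792458e8   # Planck constant x speed of light [J*m]

def photon_wavelength_nm(energy_joules):
    """Wavelength assigned to a photon from the energy an energy-dispersive
    detector (such as a TES) measures: lambda = h*c / E."""
    return H_C / energy_joules * 1e9

def bin_rgb(wavelength_nm):
    """Coarse, illustrative binning of photons into color channels."""
    if 580.0 <= wavelength_nm < 700.0:
        return "R"
    if 490.0 <= wavelength_nm < 580.0:
        return "G"
    if 400.0 <= wavelength_nm < 490.0:
        return "B"
    return "other"   # e.g. near-IR, which a TES can detect out to ~2800 nm

wl = photon_wavelength_nm(H_C / 550e-9)   # a 550 nm photon (about 3.6e-19 J)
```

    Accumulating such per-photon bin counts per scan position is what turns a single-pixel energy-dispersive detector into the few-photon RGB imager described above.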

  17. Few-photon color imaging using energy-dispersive superconducting transition-edge sensor spectrometry.

    PubMed

    Niwa, Kazuki; Numata, Takayuki; Hattori, Kaori; Fukuda, Daiji

    2017-04-04

    Highly sensitive spectral imaging is increasingly being demanded in bioanalysis research and industry to obtain the maximum information possible from molecules of different colors. We introduce an application of the superconducting transition-edge sensor (TES) technique to highly sensitive spectral imaging. A TES is an energy-dispersive photodetector that can distinguish the wavelength of each incident photon. Its effective spectral range is from the visible to the infrared (IR), up to 2800 nm, which is beyond the capabilities of other photodetectors. TES was employed in this study in a fiber-coupled optical scanning microscopy system, and a test sample of a three-color ink pattern was observed. A red-green-blue (RGB) image and a near-IR image were successfully obtained in the few-incident-photon regime, whereas only a black and white image could be obtained using a photomultiplier tube. Spectral data were also obtained from a selected focal area out of the entire image. The results of this study show that TES is feasible for use as an energy-dispersive photon-counting detector in spectral imaging applications.

  18. Garden City, Kansas

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Center pivot irrigation systems create red circles of healthy vegetation in this image of croplands near Garden City, Kansas. The image was acquired by Landsat 7's Enhanced Thematic Mapper Plus (ETM+) sensor on September 25, 2000. It is a false-color composite made using near-infrared, red, and green wavelengths, and has been sharpened using the sensor's panchromatic band. Image provided by the USGS EROS Data Center Satellite Systems Branch

  19. The HydroColor App: Above Water Measurements of Remote Sensing Reflectance and Turbidity Using a Smartphone Camera.

    PubMed

    Leeuw, Thomas; Boss, Emmanuel

    2018-01-16

    HydroColor is a mobile application that utilizes a smartphone's camera and auxiliary sensors to measure the remote sensing reflectance of natural water bodies. HydroColor uses the smartphone's digital camera as a three-band radiometer. Users are directed by the application to collect a series of three images. These images are used to calculate the remote sensing reflectance in the red, green, and blue broad wavelength bands. As with satellite measurements, the reflectance can be inverted to estimate the concentration of absorbing and scattering substances in the water, which are predominately composed of suspended sediment, chlorophyll, and dissolved organic matter. This publication describes the measurement method and investigates the precision of HydroColor's reflectance and turbidity estimates compared to commercial instruments. It is shown that HydroColor can measure the remote sensing reflectance to within 26% of a precision radiometer and turbidity within 24% of a portable turbidimeter. HydroColor distinguishes itself from other water quality camera methods in that its operation is based on radiometric measurements instead of image color. HydroColor is one of the few mobile applications to use a smartphone as a completely objective sensor, as opposed to subjective user observations or color matching using the human eye. This makes HydroColor a powerful tool for crowdsourcing of aquatic optical data.
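    A minimal sketch of the above-water reflectance calculation that HydroColor-style apps perform from the three images (gray card, sky, water). The sky-glint factor and the 18% card reflectance are standard assumptions in this kind of measurement; the app's exact constants may differ:

```python
import numpy as np

RHO_SKY = 0.028        # assumed sea-surface reflectance factor for sky glint
CARD_REFLECTANCE = 0.18  # standard 18% gray card

def remote_sensing_reflectance(L_water, L_sky, L_card):
    """Rrs per band (1/sr) from relative radiances of the three images."""
    L_water = np.asarray(L_water, float)
    L_sky = np.asarray(L_sky, float)
    L_card = np.asarray(L_card, float)
    # Downwelling irradiance estimated from the gray card image
    downwelling = np.pi * L_card / CARD_REFLECTANCE
    # Water-leaving signal: water radiance minus reflected sky light
    return (L_water - RHO_SKY * L_sky) / downwelling

# Illustrative relative radiances for the R, G, B bands
rrs = remote_sensing_reflectance([0.05, 0.08, 0.06],
                                 [0.5, 0.6, 0.7],
                                 [0.4, 0.45, 0.5])
```

    Because every term is a ratio of radiances measured by the same camera, the absolute radiometric calibration of the phone camera largely cancels out.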

  20. Effects of the source, surface, and sensor couplings and colorimetric of laser speckle pattern on the performance of optical imaging system

    NASA Astrophysics Data System (ADS)

    Darwiesh, M.; El-Sherif, Ashraf F.; El-Ghandour, Hatem; Aly, Hussein A.; Mokhtar, A. M.

    2011-03-01

    Optical imaging systems are widely used in applications including tracking for portable scanners; input pointing devices for laptop computers, cell phones, and cameras; fingerprint-identification scanners; optical navigation for target tracking; and optical computer mice. We present experimental work to measure and analyze the laser speckle pattern (LSP) produced by different optical sources (various color LEDs, a 3 mW diode laser, and a 10 mW He-Ne laser) on different operating surfaces (Gabor hologram diffusers), and how these affect the performance of optical imaging systems in terms of speckle size and signal-to-noise ratio (the signal is represented by the speckle patches that carry information, and the noise by the remaining part of the selected image). Theoretical and experimental studies of colorimetry for the optical sources used are presented to relate the signal-to-noise ratios obtained with different diffusers for each light source. Color correction is applied to the color images captured by the optical imaging system to produce realistic color images: a suitable gray scale containing most of the informative data in the image is selected by calculating accurate red-green-blue (RGB) color components, making use of the measured source spectra and the ITU-R BT.709 color matching functions for CRT phosphors (Trinitron, SONY model). The source/surface coupling is discussed, and we conclude that the performance of the optical imaging system for a given source varies from worst to best depending on the operating surface.
The sensor/surface coupling has been studied for the case of the He-Ne laser. The speckle size ranged from 4.59 to 4.62 μm, approximately the same for all the diffusers produced (consistent with the fact that speckle size is independent of the illuminating surface). However, the calculated signal-to-noise ratio took different values, ranging from 0.71 to 0.92 for the different diffusers. This means that the surface texture affects the performance of the optical sensor, because all images for all diffusers were captured under the same conditions: the same source (He-Ne laser), the same distances in the experimental set-up, and the same sensor (CCD camera).

  1. A litmus-type colorimetric and fluorometric volatile organic compound sensor based on inkjet-printed polydiacetylenes on paper substrates.

    PubMed

    Yoon, Bora; Park, In Sung; Shin, Hyora; Park, Hye Jin; Lee, Chan Woo; Kim, Jong-Man

    2013-05-14

    Inkjet-printed paper-based volatile organic compound (VOC) sensor strips imaged with polydiacetylenes (PDAs) are developed. A microemulsion ink containing bisurethane-substituted diacetylene (DA) monomers, 4BCMU, was inkjet printed onto paper using a conventional inkjet office printer. UV irradiation of the printed image allowed fabrication of blue-colored poly-4BCMU on the paper and the polymer was found to display colorimetric responses to VOCs. Interestingly, a blue-to-yellow color change was observed when the strip was exposed to chloroform vapor, which was accompanied by the generation of green fluorescence. The principal component analysis plot of the color and fluorescence images of the VOC-exposed polymers allowed a more precise discrimination of VOC vapors. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. False-color display of special sensor microwave/imager (SSM/I) data

    NASA Technical Reports Server (NTRS)

    Negri, Andrew J.; Adler, Robert F.; Kummerow, Christian D.

    1989-01-01

    Displays of multifrequency passive microwave data from the Special Sensor Microwave/Imager (SSM/I) flying on the Defense Meteorological Satellite Program (DMSP) spacecraft are presented. Observed brightness temperatures at 85.5 GHz (vertical and horizontal polarizations) and 37 GHz (vertical polarization) are respectively used to 'drive' the red, green, and blue 'guns' of a color monitor. The resultant false-color images can be used to distinguish land from water, highlight precipitation processes and structure over both land and water, and detail variations in other surfaces such as deserts, snow cover, and sea ice. The observations at 85.5 GHz also add a previously unavailable frequency to the problem of rainfall estimation from space. Examples of mesoscale squall lines, tropical and extra-tropical storms, and larger-scale land and atmospheric features as 'viewed' by the SSM/I are shown.
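    The display idea above amounts to stretching each brightness-temperature channel independently and routing it to one gun of the monitor. A minimal sketch, with a simple min-max stretch as an illustrative assumption (the authors' actual scaling may differ):

```python
import numpy as np

def false_color(tb_85v, tb_85h, tb_37v):
    """Stack three brightness-temperature arrays as an RGB display image."""
    def stretch(tb):
        # Linearly stretch each channel to the full 0-255 display range
        tb = np.asarray(tb, float)
        lo, hi = tb.min(), tb.max()
        return np.uint8(np.round(255 * (tb - lo) / (hi - lo)))
    # 85.5 GHz V -> red, 85.5 GHz H -> green, 37 GHz V -> blue
    return np.dstack([stretch(tb_85v), stretch(tb_85h), stretch(tb_37v)])

# Tiny 1x2 scene: one cold pixel, one warm pixel per channel (kelvin)
img = false_color([[180.0, 280.0]], [[200.0, 260.0]], [[150.0, 300.0]])
```

    Surfaces with different polarization and frequency signatures (land, water, rain, sea ice) then separate into distinct hues.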

  3. False-color display of special sensor microwave/imager (SSM/I) data

    NASA Astrophysics Data System (ADS)

    Negri, Andrew J.; Adler, Robert F.; Kummerow, Christian D.

    1989-02-01

    Displays of multifrequency passive microwave data from the Special Sensor Microwave/Imager (SSM/I) flying on the Defense Meteorological Satellite Program (DMSP) spacecraft are presented. Observed brightness temperatures at 85.5 GHz (vertical and horizontal polarizations) and 37 GHz (vertical polarization) are respectively used to 'drive' the red, green, and blue 'guns' of a color monitor. The resultant false-color images can be used to distinguish land from water, highlight precipitation processes and structure over both land and water, and detail variations in other surfaces such as deserts, snow cover, and sea ice. The observations at 85.5 GHz also add a previously unavailable frequency to the problem of rainfall estimation from space. Examples of mesoscale squall lines, tropical and extra-tropical storms, and larger-scale land and atmospheric features as 'viewed' by the SSM/I are shown.

  4. Noise reduction techniques for Bayer-matrix images

    NASA Astrophysics Data System (ADS)

    Kalevo, Ossi; Rantanen, Henry

    2002-04-01

    In this paper, arrangements for applying Noise Reduction (NR) techniques to images captured by a single-sensor digital camera are studied. Usually, the NR filter processes full three-color-component image data, which requires that the raw Bayer-matrix image data available from the image sensor first be interpolated using a Color Filter Array Interpolation (CFAI) method. The other choice is to process the raw Bayer-matrix image data directly. The advantages and disadvantages of both processing orders, before (pre-) CFAI and after (post-) CFAI, are studied with linear, multi-stage median, multistage median hybrid, and median-rational filters. The comparison is based on the quality of the output image, the processing power requirements, and the amount of memory needed. A solution that improves the preservation of details when NR filtering is performed before CFAI is also proposed.
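    For context, the CFAI step mentioned above can be as simple as bilinear interpolation of the missing samples. A toy sketch for the green plane of an RGGB mosaic (zero padding at the borders is a simplifying assumption; real pipelines handle edges more carefully):

```python
import numpy as np

def interpolate_green(bayer):
    """bayer: 2-D raw RGGB mosaic. Returns a full green plane (bilinear CFAI)."""
    h, w = bayer.shape
    green = np.zeros((h, w), float)
    # Green samples sit at (even row, odd col) and (odd row, even col) in RGGB
    gmask = np.zeros((h, w), bool)
    gmask[0::2, 1::2] = True
    gmask[1::2, 0::2] = True
    green[gmask] = bayer[gmask]
    padded = np.pad(green, 1)   # zero padding at the image borders
    for y in range(h):
        for x in range(w):
            if not gmask[y, x]:
                # Average the four green neighbors (up, down, left, right)
                green[y, x] = (padded[y, x + 1] + padded[y + 2, x + 1]
                               + padded[y + 1, x] + padded[y + 1, x + 2]) / 4.0
    return green

raw = np.arange(16, dtype=float).reshape(4, 4)  # stand-in raw mosaic
g = interpolate_green(raw)
```

    A pre-CFAI noise filter would operate on `raw` (per color site), while a post-CFAI filter would operate on the interpolated planes such as `g`.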

  5. A Decade of Satellite Ocean Color Observations

    NASA Technical Reports Server (NTRS)

    McClain, Charles R.

    2009-01-01

    After the successful Coastal Zone Color Scanner (CZCS, 1978-1986) demonstration that quantitative estimates of geophysical variables such as chlorophyll a and the diffuse attenuation coefficient could be derived from top-of-the-atmosphere radiances, a number of international missions with ocean color capabilities were launched beginning in the late 1990s. Most notable were those with global data acquisition capabilities, i.e., the Ocean Color and Temperature Sensor (OCTS, 1996-1997), the Sea-viewing Wide Field-of-view Sensor (SeaWiFS, United States, 1997-present), two Moderate Resolution Imaging Spectroradiometers (MODIS, United States, Terra/2000-present and Aqua/2002-present), the Global Imager (GLI, Japan, 2002-2003), and the Medium Resolution Imaging Spectrometer (MERIS, European Space Agency, 2002-present). These missions have provided data of exceptional quality and continuity, allowing scientific inquiry into a wide variety of marine research topics not possible with the CZCS. This review focuses on the scientific advances made over the past decade using these data sets.

  6. Autonomous chemical and biological miniature wireless-sensor

    NASA Astrophysics Data System (ADS)

    Goldberg, Bar-Giora

    2005-05-01

    The presentation discusses a new concept and a paradigm shift in biological, chemical, and explosive sensor system design and deployment: from large, heavy, centralized, and expensive systems to distributed wireless sensor networks using miniature platforms (nodes) that are lightweight, low cost, and wirelessly connected. These new systems are possible due to the emergence and convergence of innovative radio, imaging, networking, and sensor technologies. Miniature integrated radio-sensor networks are a technology whose time has come. These network systems are based on large numbers of distributed low-cost, short-range wireless platforms that sense and process their environment and communicate data through a network to a command center. The recent emergence of chemical and explosive sensor technology based on silicon nanostructures, coupled with the fast evolution of low-cost CMOS imagers, low-power DSP engines, and integrated radio chips, has created an opportunity to realize the vision of autonomous wireless networks. These threat detection networks will perform sophisticated analysis at the sensor node and convey alarm information up the command chain. Sensor networks of this type are expected to revolutionize the ability to detect and locate biological, chemical, or explosive threats. The ability to distribute large numbers of low-cost sensors over large areas places these devices close to the targeted threats, improving detection efficiency and enabling rapid counter responses. These sensor networks will be used for homeland security, shipping container monitoring, and other applications such as laboratory medical analysis, drug discovery, and automotive, environmental, and/or in-vivo monitoring. Avaak's system concept is to image a chromatic biological, chemical, and/or explosive sensor with a digital imager, analyze the images, and distribute alarm or image data wirelessly through the network. 
All of the imaging, processing, and communications take place within the miniature, low-cost distributed sensor platforms. This concept presents a significant challenge, however, due to the combination and convergence of new technologies required, as mentioned above. Passive biological and chemical sensors with very high sensitivity, requiring no assaying, are in development using a technique to optically and chemically encode silicon wafers with tailored nanostructures. The silicon wafer is patterned with nanostructures designed to change colors and patterns when exposed to the target analytes (TICs, TIMs, VOCs). A small video camera detects the color and pattern changes on the sensor. To determine whether an alarm condition is present, an on-board DSP processor, using specialized image processing algorithms and statistical analysis, determines whether color gradient changes have occurred on the sensor array. These sensors can detect several agents simultaneously. The system is currently under development by Avaak, with funding from DARPA through an SBIR grant.

  7. Visible and infrared imaging radiometers for ocean observations

    NASA Technical Reports Server (NTRS)

    Barnes, W. L.

    1977-01-01

    The current status of visible and infrared sensors designed for the remote monitoring of the oceans is reviewed. Emphasis is placed on multichannel scanning radiometers that are either operational or under development. Present design practices and parameter constraints are discussed. Airborne sensor systems examined include the ocean color scanner and the ocean temperature scanner. The costal zone color scanner and advanced very high resolution radiometer are reviewed with emphasis on design specifications. Recent technological advances and their impact on sensor design are examined.

  8. Image quality analysis of a color LCD as well as a monochrome LCD using a Foveon color CMOS camera

    NASA Astrophysics Data System (ADS)

    Dallas, William J.; Roehrig, Hans; Krupinski, Elizabeth A.

    2007-09-01

    We have combined a CMOS color camera with special software to compose a multi-functional image-quality analysis instrument. It functions as a colorimeter and also measures modulation transfer functions (MTFs) and noise power spectra (NPS); it is presently being expanded to examine fixed-pattern noise and temporal noise. The CMOS camera has 9 μm square pixels and a pixel matrix of 2268 x 1512 x 3, using a sensor with co-located pixels for all three primary colors. We have imaged sections of both a color and a monochrome LCD monitor onto the camera sensor with LCD-pixel-size to camera-pixel-size ratios of 12:1 and 17.6:1. When used as an imaging colorimeter, each camera pixel is calibrated to provide CIE color coordinates and tristimulus values, permitting the camera to determine chromaticity simultaneously at different locations on the LCD display. After color calibration against a CS-200 colorimeter, the color coordinates of the display's primaries determined from the camera's luminance response are very close to those found with the CS-200; only the color coordinates of the display's white point were in error. To calculate the MTF, a vertical or horizontal line is displayed on the monitor; the captured image is color-matrix preprocessed, Fourier transformed, and then post-processed. For the NPS, a uniform image is displayed on the monitor and the image is likewise pre-processed, transformed, and processed. Our measurements show that the horizontal MTFs of both displays fall off more steeply than the vertical MTFs, indicating that the horizontal MTFs are poorer. However, the modulation at the Nyquist frequency appears lower for the color LCD than for the monochrome LCD. The spatial noise of the color display in both directions is larger than that of the monochrome display. 
Attempts were also made to separate the total noise into spatial and temporal components by subtracting images taken at exactly the same exposure. Temporal noise appears to be significantly lower than spatial noise.
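    The MTF measurement described above reduces to a standard computation: the captured line gives a line spread function (LSF), and the normalized magnitude of its Fourier transform is the MTF. A sketch with a synthetic Gaussian LSF standing in for the captured line image:

```python
import numpy as np

# Synthetic line spread function (Gaussian, sigma = 2 camera pixels);
# in the real measurement this profile comes from the imaged line.
x = np.arange(-32, 32, dtype=float)
lsf = np.exp(-0.5 * (x / 2.0) ** 2)

# MTF = |FFT(LSF)|, normalized to 1 at zero spatial frequency
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
```

    A steeper fall-off of `mtf` toward the Nyquist frequency corresponds to the "poorer" direction reported in the measurements above.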

  9. A novel false color mapping model-based fusion method of visual and infrared images

    NASA Astrophysics Data System (ADS)

    Qi, Bin; Kun, Gao; Tian, Yue-xin; Zhu, Zhen-yu

    2013-12-01

    A fast and efficient image fusion method is presented to generate near-natural colors from panchromatic visual and thermal imaging sensors. First, a set of daytime color reference images is analyzed and a false color mapping principle is proposed according to human visual and emotional habits: object colors should remain invariant after the color mapping operation, differences between the infrared and visual images should be enhanced, and the background color should be consistent with the main scene content. A novel nonlinear color mapping model is then given by introducing the geometric average of the gray levels of the input visual and infrared images together with a weighted average algorithm. To determine the control parameters of the mapping model, boundary conditions are listed according to the mapping principle above. Fusion experiments show that the new method achieves a near-natural appearance in the fused image, enhancing color contrast and highlighting bright infrared objects compared with the traditional TNO algorithm. Moreover, it has low complexity and is easy to run in real time, so it is well suited to nighttime imaging apparatus.
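    A toy sketch in the spirit of the mapping described above: the geometric average of the visual and infrared gray levels drives one channel, while weighted combinations drive the others. The weights and channel assignments here are illustrative placeholders, not the paper's fitted model:

```python
import numpy as np

def fuse(vis, ir, w_vis=0.7, w_ir=0.3):
    """Map a panchromatic visual image and an IR image to a false-color RGB."""
    vis = np.asarray(vis, float)
    ir = np.asarray(ir, float)
    g = np.sqrt(vis * ir)                               # geometric average term
    r = np.clip(w_vis * vis + w_ir * ir, 0, 255)        # weighted average term
    b = np.clip(vis - 0.5 * g, 0, 255)                  # suppress IR-bright areas
    return np.dstack([r, g, b]).astype(np.uint8)

# 1x2 test scene: gray levels 0-255 for visual and IR
rgb = fuse([[100.0, 200.0]], [[50.0, 150.0]])
```

    IR-hot objects then acquire a reddish cast (large r, small b), which is the kind of highlighting the experiments above report.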

  10. The HydroColor App: Above Water Measurements of Remote Sensing Reflectance and Turbidity Using a Smartphone Camera

    PubMed Central

    Leeuw, Thomas; Boss, Emmanuel

    2018-01-01

    HydroColor is a mobile application that utilizes a smartphone’s camera and auxiliary sensors to measure the remote sensing reflectance of natural water bodies. HydroColor uses the smartphone’s digital camera as a three-band radiometer. Users are directed by the application to collect a series of three images. These images are used to calculate the remote sensing reflectance in the red, green, and blue broad wavelength bands. As with satellite measurements, the reflectance can be inverted to estimate the concentration of absorbing and scattering substances in the water, which are predominately composed of suspended sediment, chlorophyll, and dissolved organic matter. This publication describes the measurement method and investigates the precision of HydroColor’s reflectance and turbidity estimates compared to commercial instruments. It is shown that HydroColor can measure the remote sensing reflectance to within 26% of a precision radiometer and turbidity within 24% of a portable turbidimeter. HydroColor distinguishes itself from other water quality camera methods in that its operation is based on radiometric measurements instead of image color. HydroColor is one of the few mobile applications to use a smartphone as a completely objective sensor, as opposed to subjective user observations or color matching using the human eye. This makes HydroColor a powerful tool for crowdsourcing of aquatic optical data. PMID:29337917

  11. Fall's Changing Colors

    NASA Technical Reports Server (NTRS)

    2002-01-01

    As the clouds allowed during the past two months, the Sea-viewing Wide field-of-View Sensor (SeaWiFS) recorded the changing colors of eastern U.S. and Canadian vegetation. This series of true-color images from the fall of 2000 shows the deciduous forests of the region change from dark green to bright red and orange, and begin to drop their leaves. Image provided by the SeaWiFS Project, NASA/Goddard Space Flight Center, and ORBIMAGE

  12. Estimating Advective Near-surface Currents from Ocean Color Satellite Images

    DTIC Science & Technology

    2015-01-01

    of surface current information. The present study uses the sequential ocean color products provided by the Geostationary Ocean Color Imager (GOCI) and...on the Suomi National Polar-Orbiting Partnership (S-NPP) satellite. The GOCI is the world’s first geostationary orbit satellite sensor over the...used to extract the near-surface currents by the MCC algorithm. We not only demonstrate the retrieval of currents from the geostationary satellite ocean

  13. 76 FR 8278 - Special Conditions: Gulfstream Model GVI Airplane; Enhanced Flight Vision System

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-14

    ... detected by infrared sensors can be much different from that detected by natural pilot vision. On a dark... by many imaging infrared systems. On the other hand, contrasting colors in visual wavelengths may be... of the EFVS image and the level of EFVS infrared sensor performance could depend significantly on...

  14. The eyes of LITENING

    NASA Astrophysics Data System (ADS)

    Moser, Eric K.

    2016-05-01

    LITENING is an airborne system-of-systems providing long-range imaging, targeting, situational awareness, target tracking, weapon guidance, and damage assessment, incorporating a laser designator and laser range finders, as well as non-thermal and thermal imaging systems, with multi-sensor boresight. Robust operation is at a premium, and subsystems are partitioned to modular, swappable line-replaceable-units (LRUs) and shop-replaceable-units (SRUs). This presentation will explore design concepts for sensing, data storage, and presentation of imagery associated with the LITENING targeting pod. The "eyes" of LITENING are the electro-optic sensors. Since the initial LITENING II introduction to the US market in the late 90s, as the program has evolved and matured, a series of spiral functional improvements and sensor upgrades have been incorporated. These include laser-illuminated imaging, and more recently, color sensing. While aircraft displays are outside of the LITENING system, updates to the available viewing modules have also driven change, and resulted in increasingly effective ways of utilizing the targeting system. One of the latest LITENING spiral upgrades adds a new capability to display and capture visible-band color imagery, using new sensors. This is an augmentation to the system's existing capabilities, which operate over a growing set of visible and invisible colors, infrared bands, and laser line wavelengths. A COTS visible-band camera solution using a CMOS sensor has been adapted to meet the particular needs associated with the airborne targeting use case.

  15. Real-time DNA Amplification and Detection System Based on a CMOS Image Sensor.

    PubMed

    Wang, Tiantian; Devadhasan, Jasmine Pramila; Lee, Do Young; Kim, Sanghyo

    2016-01-01

    In the present study, we developed a polypropylene well-integrated complementary metal oxide semiconductor (CMOS) platform to perform the loop-mediated isothermal amplification (LAMP) technique for simultaneous real-time DNA amplification and detection. An amplification-coupled detection system directly measures changes in photon number based on the generation of magnesium pyrophosphate and on color changes; the photon number decreases during the amplification process. The CMOS image sensor observes the photons and converts them into digital units with the aid of an analog-to-digital converter (ADC). In addition, UV-spectral studies, optical color intensity detection, pH analysis, and electrophoresis were carried out to prove the efficiency of the CMOS sensor-based LAMP system. Moreover, Clostridium perfringens was used as a proof-of-concept target for the new system. We anticipate that this CMOS image sensor-based LAMP method will enable cost-effective, label-free, optical, real-time, and portable molecular diagnostic devices.
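    The detection logic implied above (photon number falls as the reaction proceeds) could be sketched as a simple threshold on the sensor's mean digital output over time. The drop fraction and the series values are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

def amplification_detected(intensity_series, drop_fraction=0.2):
    """True if the mean sensor output falls at least drop_fraction below its start."""
    series = np.asarray(intensity_series, float)
    return bool(series.min() <= series[0] * (1.0 - drop_fraction))

# Hypothetical ADC readings sampled during the LAMP reaction
positive = amplification_detected([1000, 990, 930, 850, 760, 700])  # reaction
negative = amplification_detected([1000, 995, 990, 992, 988, 991])  # no template
```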

  16. Simple Colorimetric Sensor for Trinitrotoluene Testing

    NASA Astrophysics Data System (ADS)

    Samanman, S.; Masoh, N.; Salah, Y.; Srisawat, S.; Wattanayon, R.; Wangsirikul, P.; Phumivanichakit, K.

    2017-02-01

    A simply operated colorimetric sensor for trinitrotoluene (TNT) determination, using a commercial scanner for image capture, was designed. The sensor is based on the chemical reaction between TNT and a sodium hydroxide reagent, which produces a color change within 96-well plates that is finally observed and recorded using the scanner. The intensity of the color change increased with TNT concentration, and the concentration of TNT could easily be quantified by digital image analysis using the free ImageJ software. Under optimum conditions, the sensor provided a linear dynamic range between 0.20 and 1.00 mg mL-1 (r = 0.9921) with a limit of detection of 0.10 ± 0.01 mg mL-1. The relative standard deviation of the sensitivity over eight experiments was 3.8%. When applied to the analysis of TNT in two soil extract samples, the concentrations found ranged from non-detectable to 0.26 ± 0.04 mg mL-1. The recovery values obtained (93-95%) were acceptable for the soil samples tested.
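    The quantification step above is an ordinary calibration-curve fit: mean well intensity versus standard concentration, inverted for unknowns. A sketch with made-up numbers (the intensities and the unknown are illustrative, not the paper's data):

```python
import numpy as np

# Hypothetical calibration standards within the reported linear range
standards = np.array([0.20, 0.40, 0.60, 0.80, 1.00])    # TNT, mg/mL
intensity = np.array([30.0, 55.0, 82.0, 104.0, 131.0])  # mean well intensity

# Linear least-squares fit: intensity = slope * concentration + intercept
slope, intercept = np.polyfit(standards, intensity, 1)

def tnt_concentration(mean_intensity):
    """Invert the calibration line to estimate concentration (mg/mL)."""
    return (mean_intensity - intercept) / slope

conc = tnt_concentration(68.0)   # intensity of an unknown well
```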

  17. Munsell color analysis of Landsat color-ratio-composite images of limonitic areas in southwest New Mexico

    NASA Technical Reports Server (NTRS)

    Kruse, F. A.

    1985-01-01

    The causes of color variations in the green areas on Landsat 4/5-4/6-6/7 (red-blue-green) color-ratio-composite (CRC) images, defined as limonitic areas, were investigated by analyzing CRC images of the Lordsburg, New Mexico area. The red-blue-green additive color system was mathematically transformed into the cylindrical Munsell color coordinates (hue, saturation, and value), and selected areas were digitally analyzed for color variation. The precise color characteristics obtained were then correlated with properties of the surface material. The amount of limonite (L) visible to the sensor was found to be the primary cause of the observed color differences. The visible L is, in turn, affected by the amount of L on the material's surface and by within-pixel mixing of limonitic and nonlimonitic materials. The secondary cause of variation was vegetation density, which shifted CRC hues towards yellow-green, decreased saturation, and increased value.
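    The coordinate transform used above maps additive RGB into cylindrical hue/saturation/value coordinates, the HSV analogue of Munsell hue, chroma, and value. Python's standard library covers this directly:

```python
import colorsys

def rgb_to_hsv255(r, g, b):
    """RGB in 0-255 to (hue in degrees, saturation 0-1, value 0-1)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s, v

# A saturated green pixel, like the limonitic areas on the CRC images
hue, sat, val = rgb_to_hsv255(0, 128, 0)
```

    Shifts of `hue` toward yellow-green, drops in `sat`, and rises in `val` are exactly the vegetation-density effects the analysis reports.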

  18. Pixel-based image fusion with false color mapping

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Mao, Shiyi

    2003-06-01

    In this paper, we propose a pixel-based image fusion algorithm that combines gray-level image fusion with false color mapping. The algorithm integrates two gray-level images from different sensor modalities or at different frequencies and produces a fused false-color image. The resulting image has higher information content than either of the original images, and objects in the fused color image are easier to recognize. The algorithm has three steps: first, obtain the fused gray-level image of the two original images; second, compute the generalized high-boost filtered images between the fused gray-level image and each of the two source images; third, generate the fused false-color image. We use a hybrid averaging-and-selection fusion method to obtain the fused gray-level image, which provides better detail than the two original images while reducing noise. However, the fused gray-level image cannot contain all the detail information in the two source images, and details in a gray-level image cannot be discerned as easily as in a color image, so a fused color image is necessary. To create color variation and enhance details in the final fused image, we produce three generalized high-boost filtered images, which are displayed through the red, green, and blue channels respectively to produce the fused color image. The method is applied to two SAR images acquired over the San Francisco area (California, USA). The result shows that the fused false-color image enhances the visibility of certain details, and the resolution of the final false-color image is the same as that of the input images.
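    The three steps above can be sketched as follows, using plain averaging for step 1 and a simple high-boost form for step 2; the boost factor and the channel routing are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def high_boost_fusion(img_a, img_b, k=1.5):
    """Fuse two gray-level images into a false-color RGB image."""
    a = np.asarray(img_a, float)
    b = np.asarray(img_b, float)
    fused = 0.5 * (a + b)                 # step 1: averaged gray-level fusion
    # step 2: high-boost images between the fused image and each source
    boost_a = np.clip(fused + k * (a - fused), 0, 255)
    boost_b = np.clip(fused + k * (b - fused), 0, 255)
    # step 3: route the three images through the R, G, B channels
    return np.dstack([boost_a, fused, boost_b]).astype(np.uint8)

rgb = high_boost_fusion([[100.0, 200.0]], [[120.0, 160.0]])
```

    Pixels where the two sources disagree are pushed apart in the R and B channels, which is what creates the color variation that makes details easier to discern.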

  19. A multimodal image sensor system for identifying water stress in grapevines

    NASA Astrophysics Data System (ADS)

    Zhao, Yong; Zhang, Qin; Li, Minzan; Shao, Yongni; Zhou, Jianfeng; Sun, Hong

    2012-11-01

    Water is the most limiting resource for crop growth, and water stress is one of the most common limitations of fruit growth. In grapevines, as in other fruit crops, fruit quality benefits from a certain level of water deficit, which helps balance vegetative and reproductive growth and the flow of carbohydrates to reproductive structures. In this paper, a multi-modal sensor system was designed to measure the reflectance signature of grape plant surfaces and identify different water stress levels. The system was equipped with a 3CCD camera (three channels: R, G, and IR) and can capture and analyze the grape canopy from its reflectance features to identify the different water stress levels. The core technology of this multi-modal sensor system could further be used in a decision support system that combines multi-modal sensory data to improve plant stress detection and identify the causes of stress. Images were taken by the multi-modal sensor, which outputs images in near-infrared, green, and red spectral bands. Based on analysis of the acquired images, color features based on color space and reflectance features based on image processing were calculated. The results showed that these parameters have potential as water stress indicators. More experiments and analysis are needed to validate this conclusion.
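    One reflectance feature computable from the camera's red and near-IR channels is an NDVI-style index, a plausible water-stress indicator for canopy imagery (the paper's exact features may differ):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance."""
    nir = np.asarray(nir, float)
    red = np.asarray(red, float)
    # small epsilon guards against division by zero in dark pixels
    return (nir - red) / (nir + red + 1e-9)

# Two canopy pixels: healthy (high NIR) vs stressed (NIR closer to red)
index = ndvi([[0.6, 0.3]], [[0.1, 0.25]])
```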

  20. A method and results of color calibration for the Chang'e-3 terrain camera and panoramic camera

    NASA Astrophysics Data System (ADS)

    Ren, Xin; Li, Chun-Lai; Liu, Jian-Jun; Wang, Fen-Fei; Yang, Jian-Feng; Liu, En-Hai; Xue, Bin; Zhao, Ru-Jin

    2014-12-01

    The terrain camera (TCAM) and panoramic camera (PCAM) are two of the major scientific payloads installed on the lander and rover of the Chang'e 3 mission, respectively. Both use a CMOS sensor covered by a Bayer color filter array to capture color images of the Moon's surface. The RGB values of the original images are specific to these two kinds of cameras, and there is an obvious color difference compared with human visual perception. This paper follows standards published by the International Commission on Illumination to establish a color correction model, designs the ground calibration experiment, and obtains the color correction coefficients. The image quality is significantly improved, and there is no obvious color difference in the corrected images. Ground experimental results show that: (1) compared with uncorrected images, the average color difference of TCAM is 4.30, a reduction of 62.1%; (2) the average color differences of the left and right cameras in PCAM are 4.14 and 4.16, reductions of 68.3% and 67.6% respectively.
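    The ground-calibration idea can be sketched as fitting a 3x3 color correction matrix that maps the camera's raw RGB of chart patches to their reference values in a least-squares sense. The patch values below are synthetic placeholders, not the mission's calibration data:

```python
import numpy as np

# Raw camera RGB of four chart patches (rows) and their reference RGB
raw = np.array([[0.9, 0.1, 0.1],
                [0.1, 0.8, 0.2],
                [0.2, 0.1, 0.9],
                [0.5, 0.5, 0.5]])
ref = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0],
                [0.5, 0.5, 0.5]])

# Least-squares fit of the color correction matrix: raw @ ccm ~= ref
ccm, *_ = np.linalg.lstsq(raw, ref, rcond=None)
corrected = raw @ ccm
```

    In the actual calibration the residual would be evaluated as a CIE color difference rather than a raw RGB distance, but the matrix-fitting step is the same.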

  1. Satellite Ocean Biology: Past, Present, Future

    NASA Technical Reports Server (NTRS)

    McClain, Charles R.

    2012-01-01

    Since 1978, when the first satellite ocean color proof-of-concept sensor, the Nimbus-7 Coastal Zone Color Scanner, was launched, much progress has been made in refining the basic measurement concept and expanding the research applications of global satellite time series of biological and optical properties such as chlorophyll-a concentration. The seminar will review the fundamentals of satellite ocean color measurements (sensor design considerations, on-orbit calibration, atmospheric corrections, and bio-optical algorithms), scientific results from the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) and Moderate Resolution Imaging Spectroradiometer (MODIS) missions, and the goals of future NASA missions such as PACE, the Aerosol, Cloud, Ecology (ACE) mission, and the Geostationary Coastal and Air Pollution Events (GeoCAPE) mission.

  2. Precise color images: a high-speed color video camera system with three intensified sensors

    NASA Astrophysics Data System (ADS)

    Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.

    1999-06-01

    High-speed imaging systems are used across a broad range of science and engineering. Although high-speed camera systems have reached high performance, most applications use them only to obtain high-speed motion pictures. In some fields of science and technology, however, it is useful to obtain other information as well, such as the temperature of combustion flames, thermal plasmas, and molten materials, and recent digital high-speed video imaging technology should be able to extract such information from these objects. For this purpose, we have already developed a high-speed video camera system with three intensified sensors and a cubic-prism image splitter. The maximum frame rate is 40,500 pps (pictures per second) at 64 x 64 pixels and 4,500 pps at 256 x 256 pixels, with 256-level (8-bit) intensity resolution for each pixel. The camera system can store more than 1,000 pictures continuously in solid-state memory. To obtain precise color images from this camera system, we need a digital technique, consisting of a computer program and ancillary instruments, to adjust the displacement between images taken from the two or three image sensors and to calibrate the relationship between incident light intensity and the corresponding digital output signals. In this paper, a digital technique for pixel-based displacement adjustment is proposed. Although the displacement of the corresponding circle was more than 8 pixels in the original image, this method reduced the displacement to at most 0.2 pixels.
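A common way to implement this kind of pixel-based displacement adjustment is phase correlation between the channel images. The sketch below recovers an integer-pixel shift; it is an assumed technique for illustration, and the paper's own method refines such estimates further, to roughly 0.2-pixel accuracy:

```python
import numpy as np

def estimate_shift(ref, img):
    """Integer-pixel displacement of `img` relative to `ref` via phase
    correlation; a starting point for sub-pixel refinement."""
    f = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    corr = np.real(np.fft.ifft2(f / (np.abs(f) + 1e-12)))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:                 # map wrap-around peaks to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
img = np.roll(ref, shift=(3, -5), axis=(0, 1))   # simulate sensor misalignment
shift_est = estimate_shift(ref, img)             # recovers (3, -5)
```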

  3. MTF evaluation of white pixel sensors

    NASA Astrophysics Data System (ADS)

    Lindner, Albrecht; Atanassov, Kalin; Luo, Jiafu; Goma, Sergio

    2015-01-01

    We present a methodology for comparing image sensors with traditional Bayer RGB layouts to sensors with alternative layouts containing white pixels. We focus on the sensors' resolving power, measured in the form of a modulation transfer function for variations in both luma and chroma channels. We present the design of the test chart, the acquisition of images, the image analysis, and an interpretation of results. We demonstrate the approach on two sensors that differ only in their color filter arrays. We confirmed that the sensor with white pixels and the corresponding demosaicing yield a higher resolving power in the luma channel, but a lower resolving power in the chroma channels, compared to the traditional Bayer sensor.
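The quantity behind an MTF curve can be illustrated with a 1-D sinusoidal trace: the Michelson modulation of the captured pattern, normalized by that of the target, gives the MTF at that spatial frequency. The chart values below are synthetic, not from the paper's test chart:

```python
import numpy as np

def modulation(trace):
    """Michelson modulation (Imax - Imin) / (Imax + Imin) of a 1-D trace."""
    return (trace.max() - trace.min()) / (trace.max() + trace.min())

# Synthetic sine target at one spatial frequency, and a "captured"
# version whose contrast the sensor/demosaicing chain has attenuated.
x = np.arange(1000) / 1000.0
target = 0.5 + 0.5 * np.sin(2 * np.pi * 50 * x)    # 50 cycles across the trace
captured = 0.5 + 0.35 * np.sin(2 * np.pi * 50 * x)
mtf_at_f = modulation(captured) / modulation(target)   # contrast ratio 0.35/0.5
```

Repeating this over a range of frequencies, separately for luma and chroma patterns, traces out the per-channel MTF curves the paper compares.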

  4. Colorizing SENTINEL-1 SAR Images Using a Variational Autoencoder Conditioned on SENTINEL-2 Imagery

    NASA Astrophysics Data System (ADS)

    Schmitt, M.; Hughes, L. H.; Körner, M.; Zhu, X. X.

    2018-05-01

    In this paper, we have shown an approach for the automatic colorization of SAR backscatter images, which are usually provided in the form of single-channel gray-scale imagery. Using a deep generative model originally proposed for photograph colorization and a Lab-color-space-based SAR-optical image fusion formulation, we are able to predict artificial color SAR images that disclose much more information to the human interpreter than the original SAR data. Future work will aim at further adaptation of the employed procedure to our special case of multi-sensor remote sensing imagery. Furthermore, we will investigate whether the low-level representations learned intrinsically by the deep network can be used for SAR image interpretation in an end-to-end manner.

  5. Lensless transport-of-intensity phase microscopy and tomography with a color LED matrix

    NASA Astrophysics Data System (ADS)

    Zuo, Chao; Sun, Jiasong; Zhang, Jialin; Hu, Yan; Chen, Qian

    2015-07-01

    We demonstrate lens-less quantitative phase microscopy and diffraction tomography on a compact on-chip platform, using only a CMOS image sensor and a programmable color LED array. Based on multi-wavelength transport-of-intensity phase retrieval and multi-angle illumination diffraction tomography, this platform offers high-quality, depth-resolved images with a lateral resolution of ~3.7 μm and an axial resolution of ~5 μm over a large imaging field of view of 24 mm². The resolution and field of view can be further improved simply by using a larger image sensor with smaller pixels. This compact, low-cost, robust, portable platform with decent imaging performance may offer a cost-effective tool for telemedicine, or for reducing health care costs in point-of-care diagnostics in resource-limited environments.
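For reference, the transport-of-intensity equation (TIE) underlying this class of phase retrieval relates the phase φ to the measured axial intensity derivative (standard form of the TIE; the multi-wavelength variant used here obtains the derivative term from wavelength diversity rather than mechanical defocus):

```latex
\nabla_{\perp} \cdot \bigl( I(x,y)\, \nabla_{\perp} \phi(x,y) \bigr)
  = -\frac{2\pi}{\lambda}\, \frac{\partial I(x,y)}{\partial z}
```

where I is the measured intensity, λ the illumination wavelength, and z the optical axis; solving this elliptic equation for φ yields the quantitative phase map.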

  6. A dual-channel fusion system of visual and infrared images based on color transfer

    NASA Astrophysics Data System (ADS)

    Pei, Chuang; Jiang, Xiao-yu; Zhang, Peng-wei; Liang, Hao-cong

    2013-09-01

    The increasing availability and deployment of imaging sensors operating in multiple spectral bands has led to a large research effort in image fusion, resulting in a plethora of pixel-level image fusion algorithms. However, most of these algorithms produce gray or false-color fusion results that are not well adapted to human vision. Transferring color from a daytime reference image to obtain a natural-color fusion result is an effective way to solve this problem, but color transfer is computationally expensive and cannot meet the demands of real-time image processing. We developed a dual-channel infrared and visual image fusion system based on a TMS320DM642 digital signal processing chip. The system is divided into an image acquisition and registration unit, an image fusion processing unit, a system control unit, and an image fusion output unit. The registration of the dual-channel images is realized by combining hardware and software methods. A false-color fusion algorithm in RGB color space is used to obtain an R-G fused image, and the system then chooses a reference image from which to transfer color to the fusion result. A color lookup table based on the statistical properties of the images is proposed to reduce the computational complexity of color transfer: the mapping between the standard lookup table and the improved color lookup table is simple and is computed only once for a fixed scene. The system thus achieves real-time fusion and natural colorization of infrared and visual images. Experimental results show that the color-transferred images have a natural color appearance to human eyes and highlight targets effectively against clear background details. Human observers using this system can interpret the images better and faster, thereby improving situational awareness and reducing target detection time.
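The statistics-based color transfer that the lookup table accelerates can be sketched as per-channel mean/standard-deviation matching (Reinhard-style transfer, assumed here for illustration; practical systems usually work in a decorrelated space such as lαβ rather than raw RGB):

```python
import numpy as np

def transfer_color_stats(src, ref):
    """Match per-channel mean and standard deviation of `src` (e.g. the
    false-color fused image) to `ref` (a daytime reference image)."""
    src = np.asarray(src, dtype=float)
    ref = np.asarray(ref, dtype=float)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std() + 1e-12
        r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
        # shift and scale so the channel statistics match the reference
        out[..., c] = (src[..., c] - s_mu) * (r_sd / s_sd) + r_mu
    return out

rng = np.random.default_rng(1)
fused = rng.random((32, 32, 3))                          # false-color fusion result
daytime = rng.normal(0.5, 0.1, size=(32, 32, 3))         # reference statistics
recolored = transfer_color_stats(fused, daytime)
```

Because the per-channel mapping is an affine function of intensity, it can be precomputed once per scene as a lookup table, which is the complexity reduction the abstract describes.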

  7. Sierra Madre Oriental in Coahuila, Mexico

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This desolate landscape is part of the Sierra Madre Oriental mountain range, on the border between the Coahuila and Nuevo Leon provinces of Mexico. This image was acquired by Landsat 7's Enhanced Thematic Mapper plus (ETM+) sensor on November 28, 1999. This is a false-color composite image made using shortwave infrared, infrared, and green wavelengths. The image has also been sharpened using the sensor's panchromatic band. Image provided by the USGS EROS Data Center Satellite Systems Branch

  8. Along-Track Reef Imaging System (ATRIS)

    USGS Publications Warehouse

    Brock, John; Zawada, Dave

    2006-01-01

    "Along-Track Reef Imaging System (ATRIS)" describes the U.S. Geological Survey's Along-Track Reef Imaging System, a boat-based sensor package for rapidly mapping shallow water benthic environments. ATRIS acquires high resolution, color digital images that are accurately geo-located in real-time.

  9. An Overview of SIMBIOS Program Activities and Accomplishments. Chapter 1

    NASA Technical Reports Server (NTRS)

    Fargion, Giulietta S.; McClain, Charles R.

    2003-01-01

    The SIMBIOS Program was conceived in 1994 as a result of a NASA management review of the agency's strategy for monitoring the bio-optical properties of the global ocean through space-based ocean color remote sensing. At that time, the NASA ocean color flight manifest included two data buy missions, the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) and Earth Observing System (EOS) Color, and three sensors, two Moderate Resolution Imaging Spectroradiometers (MODIS) and the Multi-angle Imaging Spectro-Radiometer (MISR), scheduled for flight on the EOS-Terra and EOS-Aqua satellites. The review led to a decision that the international assemblage of ocean color satellite systems provided ample redundancy to assure continuous global coverage, with no need for the EOS Color mission. At the same time, it was noted that non-trivial technical difficulties attended the challenge (and opportunity) of combining ocean color data from this array of independent satellite systems to form consistent and accurate global bio-optical time series products. Thus, it was announced at the October 1994 EOS Interdisciplinary Working Group meeting that some of the resources budgeted for EOS Color should be redirected into an intercalibration and validation program (McClain et al., 2002).

  10. Color multiplexing method to capture front and side images with a capsule endoscope.

    PubMed

    Tseng, Yung-Chieh; Hsu, Hsun-Ching; Han, Pin; Tsai, Cheng-Mu

    2015-10-01

    This paper proposes a capsule endoscope (CE), based on color multiplexing, that simultaneously records front and side images. Only one lens, combined with an X-cube prism, is employed to capture the front and side view profiles in the CE. Three color filters and polarizers are placed on three sides of the X-cube prism. When objects are located at one of the X-cube's three sides, front and side view profiles of different colors are captured through the proposed lens and recorded at the color image sensor. The proposed color multiplexing CE (CMCE) is designed with a field of view of up to 210 deg and 180 lp/mm resolution at f-number 2.8 and an overall length of 13.323 mm. A ray-tracing simulation of the CMCE with the color multiplexing mechanism verifies that the CMCE not only records the front and side view profiles at the same time, but also delivers high image quality in a small package.

  11. Spatial super-resolution of colored images by micro mirrors

    NASA Astrophysics Data System (ADS)

    Dahan, Daniel; Yaacobi, Ami; Pinsky, Ephraim; Zalevsky, Zeev

    2018-06-01

    In this paper, we present two methods of dealing with the geometric resolution limit of color imaging sensors. It is possible to overcome the pixel size limit by adding a digital micro-mirror device component on the intermediate image plane of an optical system, and adapting its pattern in a computerized manner before sampling each frame. The full RGB image can be reconstructed from the Bayer camera by building a dedicated optical design, or by adjusting the demosaicing process to the special format of the enhanced image.

  12. Pixel super resolution using wavelength scanning

    DTIC Science & Technology

    2016-04-08

    the light source is adjusted to ~20 μW. The image sensor chip is a color CMOS sensor chip with a pixel size of 1.12 μm manufactured for cellphone...pitch (that is, ~ 1 μm in Figure 3a, using a CMOS sensor that has a 1.12-μm pixel pitch). For the same configuration depicted in Figure 3, utilizing...section). The a Lens-free raw holograms captured by 1.12 μm CMOS image sensor Field of view ≈ 20.5 mm2 Angle change directions for synthetic aperture

  13. The Use of False Color Landsat Imagery with a Fifth Grade Class.

    ERIC Educational Resources Information Center

    Harnapp, Vern R.

    Fifth grade students can become familiar with images of earth generated by space sensor Landsat satellites which sense nearly all surfaces of the earth once every 18 days. Two false color composites in which different colors represent various geographic formations were obtained for the northern Ohio region where the students live. The class had no…

  14. The Landsat Image Mosaic of Antarctica

    USGS Publications Warehouse

    Bindschadler, Robert; Vornberger, P.; Fleming, A.; Fox, A.; Mullins, J.; Binnie, D.; Paulsen, S.J.; Granneman, Brian J.; Gorodetzky, D.

    2008-01-01

    The Landsat Image Mosaic of Antarctica (LIMA) is the first true-color, high-spatial-resolution image of the seventh continent. It is constructed from nearly 1100 individually selected Landsat-7 ETM+ scenes. Each image was orthorectified and adjusted for geometric, sensor and illumination variations to a standardized, almost seamless surface reflectance product. Mosaicing to avoid clouds produced a high quality, nearly cloud-free benchmark data set of Antarctica for the International Polar Year from images collected primarily during 1999-2003. Multiple color composites and enhancements were generated to illustrate additional characteristics of the multispectral data including: the true appearance of the surface; discrimination between snow and bare ice; reflectance variations within bright snow; recovered reflectance values in regions of sensor saturation; and subtle topographic variations associated with ice flow. LIMA is viewable and individual scenes or user defined portions of the mosaic are downloadable at http://lima.usgs.gov. Educational materials associated with LIMA are available at http://lima.nasa.gov.

  15. Investigation of Terrain Analysis and Classification Methods for Ground Vehicles

    DTIC Science & Technology

    2012-08-27

    exteroceptive terrain classifier takes exteroceptive sensor data (here, color stereo images of the terrain) as its input and returns terrain class...Mishkin & Laubach, 2006), the rover cannot safely travel beyond the distance it can image with its cameras, which has been as little as 15 meters or...field of view roughly 44°×30°, capturing pairs of color images at 640×480 pixels each (Videre Design, 2001). Range data were extracted from the stereo

  16. An imaging colorimeter for noncontact tissue color mapping.

    PubMed

    Balas, C

    1997-06-01

    There has been a considerable effort in several medical fields toward objective color analysis and characterization of biological tissues. Conventional colorimeters have proved inadequate for this purpose, since they do not provide spatial color information and because the measuring procedure itself randomly affects the color of the tissue. In this paper an imaging colorimeter is presented, in which the nonimaging optical photodetector of a conventional colorimeter is replaced with the charge-coupled device (CCD) sensor of a color video camera, enabling the color information to be captured independently for any spatial point within its field of view. Combining imaging and colorimetry methods, the acquired image is calibrated and corrected under several ambient light conditions, providing noncontact, reproducible color measurement and mapping, free of the errors and limitations of conventional colorimeters. The system was used to monitor blood supply changes in psoriatic plaques undergoing psoralen and ultraviolet-A (PUVA) therapy, where reproducible and reliable measurements were demonstrated. These features highlight the potential of imaging colorimeters as clinical and research tools for the standardization of clinical diagnosis and the objective evaluation of treatment effectiveness.

  17. Evaluation of VIIRS ocean color products

    NASA Astrophysics Data System (ADS)

    Wang, Menghua; Liu, Xiaoming; Jiang, Lide; Son, SeungHyun; Sun, Junqiang; Shi, Wei; Tan, Liqin; Naik, Puneeta; Mikelsons, Karlis; Wang, Xiaolong; Lance, Veronica

    2014-11-01

    The Suomi National Polar-orbiting Partnership (SNPP) was successfully launched on October 28, 2011. The Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the Suomi NPP, which has 22 spectral bands (from visible to infrared) similar to NASA's Moderate Resolution Imaging Spectroradiometer (MODIS), is a multi-disciplinary sensor providing observations of the Earth's atmosphere, land, and ocean properties. In this paper, we provide some evaluations and assessments of VIIRS ocean color data products, or ocean color Environmental Data Records (EDR), including normalized water-leaving radiance spectra nLw(λ) at the five VIIRS spectral bands, chlorophyll-a (Chl-a) concentration, and the water diffuse attenuation coefficient at 490 nm, Kd(490). Specifically, VIIRS ocean color products derived from the NOAA Multi-Sensor Level-1 to Level-2 (NOAA-MSL12) ocean color data processing system are evaluated and compared with MODIS ocean color products and in situ measurements. MSL12 is now NOAA's official ocean color data processing system for VIIRS. In addition, VIIRS Sensor Data Records (SDR, or Level-1B data) have been evaluated. In particular, VIIRS SDR and ocean color EDR have been compared with a series of in situ data from the Marine Optical Buoy (MOBY) in the waters off Hawaii. A notable discrepancy between global deep-water Chl-a derived from MODIS and from VIIRS between 2012 and 2013 is observed. This discrepancy is attributed to an SDR (Level-1B data) calibration issue, particularly in the VIIRS green band at 551 nm. To resolve this calibration issue, we have worked on our own sensor calibration by combining the lunar calibration effect into the current calibration method. The ocean color products derived from our newly calibrated SDR in the South Pacific Gyre show that the Chl-a differences between 2012 and 2013 are significantly reduced. Although there are still some issues, our results show that VIIRS is capable of providing high-quality global ocean color products in support of science research and operational applications. The VIIRS evaluation and monitoring results can be found at the website: http://www.star.nesdis.noaa.gov/sod/mecb/color/index.html.

  18. NeuroSeek dual-color image processing infrared focal plane array

    NASA Astrophysics Data System (ADS)

    McCarley, Paul L.; Massie, Mark A.; Baxter, Christopher R.; Huynh, Buu L.

    1998-09-01

    Several technologies have been developed in recent years to advance the state of the art of IR sensor systems, including affordable dual-color focal planes, biologically inspired on-focal-plane image and signal processing techniques, and spectral sensing techniques. Pacific Advanced Technology (PAT) and the Air Force Research Lab Munitions Directorate have developed a system which incorporates the best of these capabilities into a single device. The 'NeuroSeek' device integrates these technologies into an IR focal plane array (FPA) which combines multicolor midwave-IR/longwave-IR radiometric response with on-focal-plane 'smart' neuromorphic analog image processing. The readout-and-processing very-large-scale-integration (VLSI) chip developed under this effort will be hybridized to a dual-color detector array to produce the NeuroSeek FPA, which will have the capability to fuse multiple pixel-based sensor inputs directly on the focal plane. Great advantages are afforded by applying massively parallel processing algorithms to image data in the analog domain; the high speed and low power consumption of this device mimic operations performed in the human retina.

  19. A 3D image sensor with adaptable charge subtraction scheme for background light suppression

    NASA Astrophysics Data System (ADS)

    Shin, Jungsoon; Kang, Byongmin; Lee, Keechang; Kim, James D. K.

    2013-02-01

    We present a 3D ToF (time-of-flight) image sensor with an adaptive charge subtraction scheme for background light suppression. The proposed sensor can alternately capture a high-resolution color image and a high-quality depth map in each frame. In depth mode, the sensor requires a sufficiently long integration time for accurate depth acquisition, but under strong background illumination saturation will occur. We propose to divide the integration time into N sub-integration times adaptively. In each sub-integration time, our sensor captures an image without saturation and subtracts the charge to keep the pixel from saturating. The subtraction results are then accumulated over the N sub-integrations, yielding a final image, free of background illumination, at the full integration time. Experimental results with our own ToF sensor show high background suppression performance. We also propose an in-pixel storage and column-level subtraction circuit for chip-level implementation of the proposed method. We believe the proposed scheme will enable 3D sensors to be used in outdoor environments.
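The accumulation logic can be sketched numerically: with the background charge removed after every sub-integration, the accumulated signal equals what a single long exposure would have produced, without ever saturating the pixel. All rates and capacities below are illustrative, not sensor specifications:

```python
def adaptive_subtraction(signal_rate, bg_rate, t_total, n_sub, full_well):
    """Toy model of the charge-subtraction idea: split the integration
    time into n_sub pieces, subtract the (known) background charge after
    each piece so the pixel never saturates, and accumulate the results
    into a background-free total. Rates are in electrons per unit time."""
    dt = t_total / n_sub
    acc = 0.0
    for _ in range(n_sub):
        charge = (signal_rate + bg_rate) * dt
        if charge > full_well:      # a single sub-frame must not saturate
            raise ValueError("sub-integration still saturates; increase n_sub")
        acc += charge - bg_rate * dt    # subtract background, accumulate
    return acc

# One long exposure would collect (200 + 800) * 1.0 = 1000 e-, exceeding
# the 600 e- full well; with N = 4, each sub-frame holds only 250 e-.
signal = adaptive_subtraction(signal_rate=200, bg_rate=800,
                              t_total=1.0, n_sub=4, full_well=600)
```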

  20. Lake Carnegie, Western Australia

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Ephemeral Lake Carnegie, in Western Australia, fills with water only during periods of significant rainfall. In dry years, it is reduced to a muddy marsh. This image was acquired by Landsat 7's Enhanced Thematic Mapper plus (ETM+) sensor on May 19, 1999. This is a false-color composite image made using shortwave infrared, infrared, and red wavelengths. The image has also been sharpened using the sensor's panchromatic band. Image provided by the USGS EROS Data Center Satellite Systems Branch. This image is part of the ongoing Landsat Earth as Art series.

  1. Commercial Sensor Survey Radiation Testing Progress Report

    NASA Technical Reports Server (NTRS)

    Becker, Heidi N.; Dolphic, Michael D.; Thorbourn, Dennis O.; Alexander, James W.; Salomon, Phil M.

    2008-01-01

    The NASA Electronic Parts and Packaging (NEPP) Program Sensor Technology Commercial Sensor Survey task is geared toward benefiting future NASA space missions with low-cost, short-duty-cycle, visible imaging needs. Such applications could include imaging for educational outreach purposes or short surveys of spacecraft, planetary, or lunar surfaces. Under the task, inexpensive commercial grade CMOS sensors were surveyed in fiscal year 2007 (FY07) and three sensors were selected for total ionizing dose (TID) and displacement damage dose (DDD) tolerance testing. The selected sensors had to meet selection criteria chosen to support small, low-mass cameras that produce good resolution color images. These criteria are discussed in detail in [1]. This document discusses the progress of radiation testing on the Micron and OmniVision sensors selected in FY07 for radiation tolerance testing.

  2. Techniques for using diazo materials in remote sensor data analysis

    NASA Technical Reports Server (NTRS)

    Whitebay, L. E.; Mount, S.

    1978-01-01

    The use of data derived from LANDSAT is facilitated when special products or computer-enhanced images can be analyzed. However, the facilities required to produce and analyze such products prevent many users from taking full advantage of LANDSAT data. A simple, low-cost method is presented by which users can make their own specially enhanced composite images from the four-band black-and-white LANDSAT images using the diazo process. The diazo process is described and a detailed procedure for making various color composites, such as color infrared, false natural color, and false color, is provided. The advantages and limitations of the diazo process are discussed, and a brief discussion of the interpretation of diazo composites for land use mapping, with some typical examples, is included.

  3. Tri-linear color multi-linescan sensor with 200 kHz line rate

    NASA Astrophysics Data System (ADS)

    Schrey, Olaf; Brockherde, Werner; Nitta, Christian; Bechen, Benjamin; Bodenstorfer, Ernst; Brodersen, Jörg; Mayer, Konrad J.

    2016-11-01

    In this paper we present a newly developed linear CMOS high-speed line-scan sensor, realized in a 0.35 μm CMOS OPTO process, with a line rate of 200 kHz in true-RGB mode and 600 kHz in monochrome mode. In total, 60 lines are integrated on the sensor, allowing electronic position adjustment. The lines are read out in a rolling-shutter manner, and the high readout speed is achieved by a column-wise organization of the readout chain. At full speed, the sensor provides RGB color images with a spatial resolution down to 50 μm. This enables a variety of applications, such as quality assurance in print inspection, real-time surveillance of railroad tracks, in-line monitoring in flat-panel fabrication lines, and many more. The sensor has a fill factor close to 100%, preventing aliasing and color artefacts; the tri-linear technology is therefore robust against aliasing, ensuring better inspection quality and thus less waste in production lines.

  4. Merging Ocean Color Data From Multiple Missions. Chapter 6

    NASA Technical Reports Server (NTRS)

    Gregg, Watson W.

    2003-01-01

    Oceanic phytoplankton may play an important role in the cycling of carbon on the Earth, through the uptake of carbon dioxide in the process of photosynthesis. Although they are ubiquitous in the global oceans, their abundances and dynamics are difficult to estimate, primarily due to the vast spatial extent of the oceans and the short time scales over which their abundances can change. Consequently, the effects of oceanic phytoplankton on biogeochemical cycling, climate change, and fisheries are not well known. In response to the potential importance of phytoplankton in the global carbon cycle and the lack of comprehensive data, NASA and the international community have established high priority satellite missions designed to acquire and produce high quality ocean color data (Table 6.1). Ten of the missions are routine global observational missions: the Ocean Color and Temperature Sensor (OCTS), the Polarization and Directionality of the Earth's Reflectances sensor (POLDER), Sea-viewing Wide Field-of-view Sensor (SeaWiFS), Moderate Resolution Imaging Spectrometer-AM (MODIS-AM), Medium Resolution Imaging Spectrometer (MERIS), Global Imager (GLI), MODIS-PM, Super-GLI (S-GLI), and the Visible/Infrared Imager and Radiometer Suite (VIIRS) on the NPOESS Preparatory Project (NPP) and the National Polar-orbiting Operational Environmental Satellite System (NPOESS). In addition, there are several other missions capable of providing ocean color data on smaller scales. Most of these missions contain the spectral band complement considered necessary to derive oceanic chlorophyll concentrations and other related parameters. Many contain additional bands that can provide important ancillary information about the optical and biological state of the oceans.

  5. Classification of Hyperspectral or Trichromatic Measurements of Ocean Color Data into Spectral Classes.

    PubMed

    Prasad, Dilip K; Agarwal, Krishna

    2016-03-22

    We propose a method for classifying radiometric oceanic color data measured by hyperspectral satellite sensors into known spectral classes, irrespective of the downwelling irradiance of the particular day, i.e., the illumination conditions. The focus is not on retrieving the inherent optical properties but on classifying the pixels according to the known spectral classes of the reflectances from the ocean. The method compensates for the unknown downwelling irradiance by white balancing the radiometric data at the ocean pixels using the radiometric data of bright pixels (typically from clouds). The white-balanced data are compared with the entries in a pre-calibrated lookup table in which each entry represents the spectral properties of one class. The proposed approach is tested on two datasets of in situ measurements and 26 different daylight illumination spectra for the medium resolution imaging spectrometer (MERIS), moderate-resolution imaging spectroradiometer (MODIS), sea-viewing wide field-of-view sensor (SeaWiFS), coastal zone color scanner (CZCS), ocean and land colour instrument (OLCI), and visible infrared imaging radiometer suite (VIIRS) sensors. Results are also shown for CIMEL's SeaPRISM sun photometer, used on board during field trips. Accuracy of more than 92% is observed on the validation dataset and more than 86% on the other dataset for all satellite sensors. The potential of applying the algorithm to non-satellite and non-multi-spectral sensors mountable on airborne systems is demonstrated by showing classification results for two consumer cameras. Classification on actual MERIS data is also shown, along with additional results comparing remote sensing reflectance spectra with level 2 MERIS data and chlorophyll concentration estimates.
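The white-balancing and lookup-table matching steps can be sketched as follows. The band values, class spectra, and the cosine-similarity matcher are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def classify_spectrum(pixel, bright_pixel, lookup):
    """White-balance an ocean pixel with a bright (cloud) pixel from the
    same scene, then pick the nearest pre-calibrated spectral class."""
    balanced = np.asarray(pixel, float) / (np.asarray(bright_pixel, float) + 1e-12)
    balanced /= np.linalg.norm(balanced) + 1e-12
    best, best_sim = None, -1.0
    for name, spec in lookup.items():
        s = np.asarray(spec, float)
        sim = float(balanced @ (s / (np.linalg.norm(s) + 1e-12)))  # cosine
        if sim > best_sim:
            best, best_sim = name, sim
    return best

# Two toy classes over 4 bands; the unknown illumination scales all bands.
lookup = {"clear": [0.9, 0.6, 0.3, 0.1], "turbid": [0.3, 0.5, 0.8, 0.9]}
illum = np.array([2.0, 1.5, 1.0, 0.8])           # unknown downwelling spectrum
cloud = illum * 1.0                              # bright pixel tracks illumination
ocean = illum * np.array([0.3, 0.5, 0.8, 0.9])   # "turbid" reflectance
label = classify_spectrum(ocean, cloud, lookup)
```

Dividing by the bright pixel cancels the illumination term, which is why the classification is insensitive to the day's irradiance.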

  6. A Multiple Sensor Machine Vision System for Automatic Hardwood Feature Detection

    Treesearch

    D. Earl Kline; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman; Robert L. Brisbin

    1993-01-01

    A multiple sensor machine vision prototype is being developed to scan full-size hardwood lumber at industrial speeds for automatically detecting features such as knots, holes, wane, stain, splits, checks, and color. The prototype integrates a multiple sensor imaging system, a materials handling system, a computer system, and application software. The prototype provides...

  7. New feature of the neutron color image intensifier

    NASA Astrophysics Data System (ADS)

    Nittoh, Koichi; Konagai, Chikara; Noji, Takashi; Miyabe, Keisuke

    2009-06-01

    We developed prototype neutron color image intensifiers with high sensitivity, wide dynamic range and long-life characteristics. In the prototype intensifier (Gd-Type 1), terbium-activated Gd₂O₂S is used as the input-screen phosphor. In the upgraded model (Gd-Type 2), Gd₂O₃ and CsI:Na are vacuum-deposited to form the phosphor layer, which improved the sensitivity and the spatial uniformity. A europium-activated Y₂O₂S multi-color scintillator, emitting red, green and blue photons with different intensities, is utilized as the output screen of the intensifier. By combining this image intensifier with a suitably tuned high-sensitivity color CCD camera, higher sensitivity and wider dynamic range could be simultaneously attained than with the conventional P20-phosphor-type image intensifier. The results of experiments at the JRR-3M neutron radiography irradiation port (flux: 1.5×10⁸ n/cm²/s) showed that these neutron color image intensifiers can clearly image dynamic phenomena in 30 frame/s video. It is expected that the color image intensifier will be used as a new two-dimensional neutron sensor in new application fields.

  8. Automatic parquet block sorting using real-time spectral classification

    NASA Astrophysics Data System (ADS)

    Astrom, Anders; Astrand, Erik; Johansson, Magnus

    1999-03-01

    This paper presents a real-time spectral classification system based on the PGP spectrograph and a smart image sensor. The PGP is a spectrograph which extracts the spectral information from a scene and projects it onto an image sensor, a method often referred to as imaging spectroscopy. The classification is based on linear models and categorizes a number of pixels along a line. Previous systems adopting this method have used standard sensors, which often resulted in poor performance. The new system, however, is based on a patented near-sensor classification method which exploits analogue features on the smart image sensor. The method reduces the enormous amount of data to be processed at an early stage, thus making true real-time spectral classification possible. The system has been evaluated on hardwood parquet boards with very good results. The color defects considered in the experiments were blue stain, white sapwood, yellow decay and red decay; in addition to these four defect classes, a reference class was used to indicate correct surface color. The system calculates a statistical measure for each parquet block, giving the percentage of defective pixels. The patented method makes it possible to run at very high speeds with high spectral discrimination ability: using a powerful illuminator, the system can run at a line frequency exceeding 2000 lines/s. This opens up the possibility of maintaining high production speed while still measuring with good resolution.

  9. Color constancy by characterization of illumination chromaticity

    NASA Astrophysics Data System (ADS)

    Nikkanen, Jarno T.

    2011-05-01

    Computational color constancy algorithms play a key role in achieving the desired color reproduction in digital cameras. Failure to estimate the illumination chromaticity correctly results in an incorrect overall color cast in the image that is easily detected by human observers. A new algorithm is presented for computational color constancy. Low computational complexity and a low memory requirement make the algorithm suitable for resource-limited camera devices, such as consumer digital cameras and camera phones. Operation of the algorithm relies on characterization of the range of possible illumination chromaticities in terms of camera sensor response. The fact that only the illumination chromaticity is characterized, instead of the full color gamut, for example, increases robustness against variations in sensor characteristics and against failure of the diagonal model of illumination change. Multiple databases are used to demonstrate the good performance of the algorithm in comparison with state-of-the-art color constancy algorithms.

  10. Spatio-spectral color filter array design for optimal image recovery.

    PubMed

    Hirakawa, Keigo; Wolfe, Patrick J

    2008-10-01

    In digital imaging applications, data are typically obtained via a spatial subsampling procedure implemented as a color filter array: a physical construction whereby only a single color value is measured at each pixel location. Owing to the growing ubiquity of color imaging and display devices, much recent work has focused on the implications of such arrays for subsequent digital processing, including in particular the canonical demosaicking task of reconstructing a full color image from spatially subsampled and incomplete color data acquired under a particular choice of array pattern. In contrast to the majority of the demosaicking literature, we consider here the problem of color filter array design and its implications for spatial reconstruction quality. We pose this problem formally as one of simultaneously maximizing the spectral radii of luminance and chrominance channels subject to perfect reconstruction, and, after proving the sub-optimality of a wide class of existing array patterns, provide a constructive method for its solution that yields robust, new panchromatic designs implementable as subtractive colors. Empirical evaluations on multiple color image test sets support our theoretical results, and indicate the potential of these patterns to increase spatial resolution for fixed sensor size, and to contribute to improved reconstruction fidelity as well as significantly reduced hardware complexity.

  11. Phase aided 3D imaging and modeling: dedicated systems and case studies

    NASA Astrophysics Data System (ADS)

    Yin, Yongkai; He, Dong; Liu, Zeyi; Liu, Xiaoli; Peng, Xiang

    2014-05-01

    Dedicated prototype systems for 3D imaging and modeling (3DIM) are presented. The 3D imaging systems are based on the principle of phase-aided active stereo, which has been developed in our laboratory over the past few years. The reported 3D imaging prototypes range from a single 3D sensor to an optical measurement network composed of multiple 3D-sensor nodes. To enable these 3D imaging systems, we briefly discuss the corresponding calibration techniques for both the single sensor and the multi-sensor optical measurement network, allowing good performance of the 3DIM prototype systems in terms of measurement accuracy and repeatability. Furthermore, two case studies, the generation of high-quality color models of movable cultural heritage and a photo booth based on body scanning, are presented to demonstrate our approach.

  12. High-performance camera module for fast quality inspection in industrial printing applications

    NASA Astrophysics Data System (ADS)

    Fürtler, Johannes; Bodenstorfer, Ernst; Mayer, Konrad J.; Brodersen, Jörg; Heiss, Dorothea; Penz, Harald; Eckel, Christian; Gravogl, Klaus; Nachtnebel, Herbert

    2007-02-01

    Today, printing products which must meet the highest quality standards, e.g., banknotes, stamps, or vouchers, are automatically checked by optical inspection systems. Typically, the examination of fine details of the print or security features demands images taken from various perspectives, with different spectral sensitivity (visible, infrared, ultraviolet), and with high resolution. Consequently, the inspection system is equipped with several cameras and has to cope with an enormous data rate to be processed in real-time. Hence, it is desirable to move image processing tasks into the camera to reduce the amount of data which has to be transferred to the (central) image processing system. The idea is to transfer relevant information only, i.e., features of the image instead of the raw image data from the sensor. These features are then further processed. In this paper a color line-scan camera for line rates up to 100 kHz is presented. The camera is based on a commercial CMOS (complementary metal oxide semiconductor) area image sensor and a field programmable gate array (FPGA). It implements extraction of image features which are well suited to detect print flaws like blotches of ink, color smears, splashes, spots and scratches. The camera design and several image processing methods implemented on the FPGA are described, including flat field correction, compensation of geometric distortions, color transformation, as well as decimation and neighborhood operations.
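    Of the FPGA-side preprocessing steps listed, flat-field correction is simple enough to sketch. The following is a generic floating-point version (the record does not describe the camera's fixed-point implementation; names are illustrative): subtract the dark frame, then normalize per-pixel gain using a frame of a uniformly lit target.

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Remove fixed-pattern offset (dark frame) and per-pixel gain
    variation (flat frame of a uniform target) from a raw line/image."""
    gain = (flat - dark).mean() / np.maximum(flat - dark, 1e-6)
    return (raw - dark) * gain
```

    After correction, a uniformly illuminated scene yields a uniform response, so genuine print flaws stand out against a flat background.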

  13. Stray-Light Correction of the Marine Optical Buoy

    NASA Technical Reports Server (NTRS)

    Brown, Steven W.; Johnson, B. Carol; Flora, Stephanie J.; Feinholz, Michael E.; Yarbrough, Mark A.; Barnes, Robert A.; Kim, Yong Sung; Lykke, Keith R.; Clark, Dennis K.

    2003-01-01

    In ocean-color remote sensing, approximately 90% of the flux at the sensor originates from atmospheric scattering, with the water-leaving radiance contributing the remaining 10% of the total flux. Consequently, errors in the measured top-of-the-atmosphere radiance are magnified by a factor of 10 in the determination of water-leaving radiance. Proper characterization of the atmosphere is thus a critical part of the analysis of ocean-color remote sensing data. It has always been necessary to calibrate ocean-color satellite sensors vicariously, using in situ, ground-based results, independent of the status of the pre-flight radiometric calibration or the utility of on-board calibration strategies. Because the atmosphere contributes significantly to the measured flux at the instrument sensor, both the instrument and the atmospheric correction algorithm are calibrated vicariously at the same time. The Marine Optical Buoy (MOBY), deployed in support of the Earth Observing System (EOS) since 1996, serves as the primary calibration station for a variety of ocean-color satellite instruments, including the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), the Moderate Resolution Imaging Spectroradiometer (MODIS), the Japanese Ocean Color Temperature Scanner (OCTS), and the French Polarization and Directionality of the Earth's Reflectances (POLDER) sensor. MOBY is located off the coast of Lanai, Hawaii; the site was selected to simplify the application of the atmospheric correction algorithms. Vicarious calibration using MOBY data allows for a thorough comparison and merger of ocean-color data from these multiple sensors.
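    The factor-of-10 error magnification follows directly from the 90/10 flux split; a toy numeric check (the radiance values are illustrative, not from the paper):

```python
# Toy check of the error magnification in ocean-color retrieval.
# Roughly 90% of the top-of-atmosphere (TOA) radiance is atmospheric path
# radiance; the water-leaving radiance is the remaining 10%.
L_toa_true = 100.0
L_path = 90.0                          # atmosphere assumed perfectly known
L_w_true = L_toa_true - L_path         # 10.0

L_toa_measured = L_toa_true * 1.01     # a 1% sensor calibration error...
L_w_retrieved = L_toa_measured - L_path
rel_error = (L_w_retrieved - L_w_true) / L_w_true   # ...becomes a 10% error
```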

  14. Establishing imaging sensor specifications for digital still cameras

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    2007-02-01

    Digital Still Cameras (DSCs) have now displaced conventional still cameras in most markets. The heart of a DSC is its imaging sensor, be it a full-frame CCD, an interline CCD, a CMOS sensor, or one of the newer Foveon buried-photodiode sensors. There is a strong tendency for consumers to consider only the number of megapixels in a camera and not the overall performance of the imaging system, including sharpness, artifact control, noise, color reproduction, exposure latitude, and dynamic range. This paper will provide a systematic method to characterize the physical requirements of an imaging sensor and supporting system components based on the desired usage. The analysis is based on two software programs that determine the "sharpness", potential for artifacts, sensor "photographic speed", dynamic range, and exposure latitude based on the physical nature of the imaging optics and the sensor characteristics (including pixel size, sensor architecture, noise characteristics, surface states that cause dark current, quantum efficiency, effective MTF, and the intrinsic full-well capacity in terms of electrons per square centimeter). Examples will be given for consumer, prosumer, and professional camera systems. Where possible, these results will be compared to imaging systems currently on the market.

  15. Merging Ocean Color Data from Multiple Missions. Chapter 12

    NASA Technical Reports Server (NTRS)

    Gregg, Watson W.

    2001-01-01

    Oceanic phytoplankton may play an important role in the cycling of carbon on the Earth, through the uptake of carbon dioxide in the process of photosynthesis. Although they are ubiquitous in the global oceans, their abundances and dynamics are difficult to estimate, primarily due to the vast spatial extent of the oceans and the short time scales over which their abundances can change. Consequently, the effects of oceanic phytoplankton on biogeochemical cycling, climate change, and fisheries are not well known. In response to the potential importance of phytoplankton in the global carbon cycle and the lack of comprehensive data, the National Aeronautics and Space Administration (NASA) and the international community have established high priority satellite missions designed to acquire and produce high quality ocean color data. Seven of the missions are routine global observational missions: the Ocean Color and Temperature Sensor (OCTS), the Polarization and Directionality of the Earth's Reflectances sensor (POLDER), Sea-viewing Wide Field-of-view Sensor (SeaWiFS), Moderate Resolution Imaging Spectrometer-AM (MODIS-AM), Medium Resolution Imaging Spectrometer (MERIS), Global Imager (GLI), and MODIS-PM. In addition, there are several other missions capable of providing ocean color data on smaller scales. Most of these missions contain the spectral band complement considered necessary to derive oceanic pigment concentrations (i.e., phytoplankton abundance) and other related parameters. Many contain additional bands that can provide important ancillary information about the optical and biological state of the oceans. Any individual ocean color mission is limited in ocean coverage due to sun glint and clouds. For example, one of the first proposed missions, the SeaWiFS, can provide about 45% coverage of the global ocean in four days and only about 15% in one day.

  16. Measurement of beam profiles by terahertz sensor card with cholesteric liquid crystals.

    PubMed

    Tadokoro, Yuzuru; Nishikawa, Tomohiro; Kang, Boyoung; Takano, Keisuke; Hangyo, Masanori; Nakajima, Makoto

    2015-10-01

    We demonstrate a sensor card with cholesteric liquid crystals (CLCs) for terahertz (THz) waves generated from a nonlinear crystal pumped by a table-top laser. A beam profile of the THz waves is successfully visualized as a color change by the sensor card without additional electronic devices, power supplies, or connecting cables. Above a power density of 4.3 mW/cm², the approximate beam diameter of the THz waves is measured using the hue image digitized from a picture of the sensor card. The sensor card is low in cost, portable, and suitable for various situations such as THz imaging and alignment of THz systems.
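    The hue-based diameter estimate can be sketched as follows (a minimal version assuming a known hue range for the color-shifted CLC region and a known pixel scale; the paper's exact procedure and thresholds are not given in this record):

```python
import numpy as np

def beam_diameter_from_hue(hue_img, hue_lo, hue_hi, mm_per_px):
    """Approximate beam diameter from the card's hue image: threshold the
    hue range where the CLC film has shifted color, then report the
    equivalent-circle diameter of that area."""
    area_px = np.count_nonzero((hue_img >= hue_lo) & (hue_img <= hue_hi))
    return 2.0 * np.sqrt(area_px / np.pi) * mm_per_px
```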

  17. An ultra-small, multi-point, and multi-color photo-detection system with high sensitivity and high dynamic range.

    PubMed

    Anazawa, Takashi; Yamazaki, Motohiro

    2017-12-05

    Although multi-point, multi-color fluorescence-detection systems are widely used in various sciences, they would find wider applications if miniaturized. Accordingly, an ultra-small, four-emission-point, four-color fluorescence-detection system was developed. Its size (the space between the emission points and the detection plane) is 15 × 10 × 12 mm, three orders of magnitude smaller than that of a conventional system. Fluorescence from four emission points at an interval of 1 mm on the same plane was collimated by four lenses and split into four color fluxes by four dichroic mirrors. A total of sixteen parallel color fluxes were then input directly into an image sensor and detected simultaneously. The emission-point plane and the detection plane (the image-sensor surface) were parallel and separated by a distance of only 12 mm. The developed system was applied to four-capillary array electrophoresis and successfully achieved Sanger DNA sequencing. Moreover, compared with a conventional system, the developed system had equivalently high fluorescence-detection sensitivity (lower detection limit of 17 pM dROX) and a 1.6-orders-of-magnitude higher dynamic range (4.3 orders of magnitude).

  18. Optically based technique for producing merged spectra of water-leaving radiances from ocean color remote sensing.

    PubMed

    Mélin, Frédéric; Zibordi, Giuseppe

    2007-06-20

    An optically based technique is presented that produces merged spectra of normalized water-leaving radiances L(WN) by combining spectral data provided by independent satellite ocean color missions. The assessment of the merging technique is based on a four-year field data series collected by an autonomous above-water radiometer located on the Acqua Alta Oceanographic Tower in the Adriatic Sea. The uncertainties associated with the merged L(WN) obtained from the Sea-viewing Wide Field-of-view Sensor and the Moderate Resolution Imaging Spectroradiometer are consistent with the validation statistics of the individual sensor products. The merging including the third mission Medium Resolution Imaging Spectrometer is also addressed for a reduced ensemble of matchups.

  19. Comment on 'Aerosol and Rayleigh radiance contributions to Coastal Zone Colour Scanner images' by Eckstein and Simpson

    NASA Technical Reports Server (NTRS)

    Gordon, H. R.; Evans, R. H.

    1993-01-01

    In a recent paper Eckstein and Simpson describe what they believe to be serious difficulties and/or errors with the CZCS (Coastal Zone Color Scanner) processing algorithms based on their analysis of seven images. Here we point out that portions of their analysis, particularly those dealing with multiple scattered Rayleigh radiance, are incorrect. We also argue that other problems they discuss have already been addressed in the literature. Finally, we suggest that many apparent artifacts in CZCS-derived pigment fields are likely to be due to inadequacies in the sensor band set or to poor radiometric stability, both of which will be remedied with the next generation of ocean color sensors.

  20. Multi-Image Registration for an Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn

    2002-01-01

    An Enhanced Vision System (EVS) utilizing multi-sensor image fusion is currently under development at the NASA Langley Research Center. The EVS will provide enhanced images of the flight environment to assist pilots in poor visibility conditions. Multi-spectral images obtained from a short wave infrared (SWIR), a long wave infrared (LWIR), and a color visible band CCD camera, are enhanced and fused using the Retinex algorithm. The images from the different sensors do not have a uniform data structure: the three sensors not only operate at different wavelengths, but they also have different spatial resolutions, optical fields of view (FOV), and bore-sighting inaccuracies. Thus, in order to perform image fusion, the images must first be co-registered. Image registration is the task of aligning images taken at different times, from different sensors, or from different viewpoints, so that all corresponding points in the images match. In this paper, we present two methods for registering multiple multi-spectral images. The first method performs registration using sensor specifications to match the FOVs and resolutions directly through image resampling. In the second method, registration is obtained through geometric correction based on a spatial transformation defined by user selected control points and regression analysis.

  1. Airborne Remote Sensing

    NASA Technical Reports Server (NTRS)

    1992-01-01

    NASA imaging technology has provided the basis for a commercial agricultural reconnaissance service. AG-RECON furnishes information from airborne sensors, aerial photographs and satellite and ground databases to farmers, foresters, geologists, etc. This service produces color "maps" of Earth conditions, which enable clients to detect crop color changes or temperature changes that may indicate fire damage or pest stress problems.

  2. Estimation of saturated pixel values in digital color imaging

    PubMed Central

    Zhang, Xuemei; Brainard, David H.

    2007-01-01

    Pixel saturation, where the incident light at a pixel causes one of the color channels of the camera sensor to respond at its maximum value, can produce undesirable artifacts in digital color images. We present a Bayesian algorithm that estimates what the saturated channel's value would have been in the absence of saturation. The algorithm uses the non-saturated responses from the other color channels, together with a multivariate Normal prior that captures the correlation in response across color channels. The appropriate parameters for the prior may be estimated directly from the image data, since most image pixels are not saturated. Given the prior, the responses of the non-saturated channels, and the fact that the true response of the saturated channel is known to be greater than the saturation level, the algorithm returns the optimal expected mean square estimate for the true response. Extensions of the algorithm to the case where more than one channel is saturated are also discussed. Both simulations and examples with real images are presented to show that the algorithm is effective. PMID:15603065
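    A minimal sketch of the single-saturated-channel case described above, assuming the red channel clips at a known, fixed level (the paper's full algorithm also handles multiple saturated channels; names are illustrative): fit the multivariate Normal prior to the unsaturated pixels, condition on the other two channels, and take the mean of the resulting Gaussian truncated below at the saturation level.

```python
import math
import numpy as np

def estimate_saturated_channel(pixels, sat_level):
    """Estimate the true red response at pixels where red clips at
    `sat_level`, from the unsaturated green/blue responses and a
    multivariate Normal prior fitted to the rest of the image.

    pixels: (N, 3) float RGB responses; returns a corrected copy.
    """
    ok = pixels[:, 0] < sat_level               # unsaturated pixels -> prior
    mu = pixels[ok].mean(axis=0)
    cov = np.cov(pixels[ok].T)
    w = cov[0, 1:] @ np.linalg.inv(cov[1:, 1:])
    cond_var = cov[0, 0] - w @ cov[0, 1:]
    sd = math.sqrt(max(cond_var, 1e-12))
    out = pixels.astype(float).copy()
    for i in np.where(~ok)[0]:
        m = mu[0] + w @ (pixels[i, 1:] - mu[1:])    # conditional mean
        a = (sat_level - m) / sd
        pdf = math.exp(-0.5 * a * a) / math.sqrt(2.0 * math.pi)
        sf = 0.5 * math.erfc(a / math.sqrt(2.0))    # P(x > sat_level)
        # mean of the conditional Gaussian truncated below at sat_level
        out[i, 0] = m + sd * pdf / max(sf, 1e-12)
    return out
```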

  3. Self-calibration for lensless color microscopy.

    PubMed

    Flasseur, Olivier; Fournier, Corinne; Verrier, Nicolas; Denis, Loïc; Jolivet, Frédéric; Cazier, Anthony; Lépine, Thierry

    2017-05-01

    Lensless color microscopy (also called in-line digital color holography) is a recent quantitative 3D imaging method used in several areas including biomedical imaging and microfluidics. By targeting cost-effective and compact designs, the wavelength of the low-end sources used is known only imprecisely, in particular because of their dependence on temperature and power supply voltage. This imprecision is the source of biases during the reconstruction step. An additional source of error is the crosstalk phenomenon, i.e., the mixture in color sensors of signals originating from different color channels. We propose to use a parametric inverse problem approach to achieve self-calibration of a digital color holographic setup. This process provides an estimation of the central wavelengths and crosstalk. We show that taking the crosstalk phenomenon into account in the reconstruction step improves its accuracy.

  4. Determination of Primary Spectral Bands for Remote Sensing of Aquatic Environments.

    PubMed

    Lee, ZhongPing; Carder, Kendall; Arnone, Robert; He, MingXia

    2007-12-20

    About 30 years ago, NASA launched the first ocean-color observing satellite: the Coastal Zone Color Scanner (CZCS). CZCS had 5 bands in the visible-infrared domain with an objective to detect changes of phytoplankton (measured by concentration of chlorophyll) in the oceans. Twenty years later, for the same objective but with advanced technology, the Sea-viewing Wide Field-of-view Sensor (SeaWiFS, 7 bands), the Moderate-Resolution Imaging Spectrometer (MODIS, 8 bands), and the Medium Resolution Imaging Spectrometer (MERIS, 12 bands) were launched. The selection of the number of bands and their positions was based on experimental and theoretical results achieved before the design of these satellite sensors. Recently, Lee and Carder (2002) demonstrated that for adequate derivation of major properties (phytoplankton biomass, colored dissolved organic matter, suspended sediments, and bottom properties) in both oceanic and coastal environments from observation of water color, it is better for a sensor to have ~15 bands in the 400 - 800 nm range. That study, however, did not provide detailed analyses regarding the spectral locations of the 15 bands. Here, from nearly 400 hyperspectral (~3-nm resolution) measurements of remote-sensing reflectance (a measure of water color) taken in both coastal and oceanic waters covering both optically deep and optically shallow waters, first- and second-order derivatives were calculated after interpolating the measurements to 1-nm resolution. From these derivatives, the frequency of zero values at each wavelength was counted, and the distribution spectrum of such frequencies was obtained. Furthermore, the wavelengths with the highest occurrence of zeros were identified. Because these spectral locations indicate extrema (a local maximum or minimum) of the reflectance spectrum or inflections of the spectral curvature, placing the bands of a sensor at these wavelengths maximizes the potential of capturing (and then restoring) the spectral curve, and thus maximizes the potential of accurately deriving properties of the water column and/or bottom of various aquatic environments with a multi-band sensor.
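    The derivative-zero counting step can be sketched as follows (a simplified, first-derivative-only version of the analysis; function and variable names are illustrative):

```python
import numpy as np

def extremum_frequency(spectra, wavelengths):
    """For each wavelength, count how many measured reflectance spectra
    have a first-derivative zero crossing there (a local max or min).

    spectra: (n_spectra, n_wavelengths), interpolated to 1-nm resolution.
    """
    d1 = np.gradient(spectra, wavelengths, axis=1)
    cross = np.signbit(d1[:, :-1]) != np.signbit(d1[:, 1:])
    freq = np.zeros(len(wavelengths), dtype=int)
    freq[:-1] = cross.sum(axis=0)
    return freq
```

    Candidate band centers are then the wavelengths where the frequency spectrum peaks.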

  5. Webcam classification using simple features

    NASA Astrophysics Data System (ADS)

    Pramoun, Thitiporn; Choe, Jeehyun; Li, He; Chen, Qingshuang; Amornraksa, Thumrongrat; Lu, Yung-Hsiang; Delp, Edward J.

    2015-03-01

    Thousands of sensors are connected to the Internet, and many of these sensors are cameras. The "Internet of Things" will contain many "things" that are image sensors. This vast network of distributed cameras (i.e., webcams) will continue to grow exponentially. In this paper we examine simple methods to classify an image from a webcam as "indoor/outdoor" and as having "people/no people" based on simple features. We use four types of image features to classify an image as indoor/outdoor: color, edge, line, and text. To classify an image as having people/no people we use HOG and texture features. The features are weighted based on their significance and combined. A support vector machine is used for classification. Our system with feature weighting and feature combination yields 95.5% accuracy.
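    The weighting-and-combination step can be sketched like this (the synthetic feature blocks and weight values are illustrative stand-ins; the paper's actual descriptors are color, edge, line, and text features):

```python
import numpy as np
from sklearn.svm import SVC

def combine_features(blocks, weights):
    """Scale each feature block by its significance weight, then concatenate."""
    return np.hstack([w * b for b, w in zip(blocks, weights)])

# toy stand-ins for two feature blocks of an indoor/outdoor classifier
rng = np.random.default_rng(1)
color_feats = np.vstack([rng.normal(0.0, 1.0, (50, 8)),
                         rng.normal(2.0, 1.0, (50, 8))])   # informative block
edge_feats = rng.normal(0.0, 1.0, (100, 4))                # uninformative block
labels = np.repeat([0, 1], 50)

X = combine_features([color_feats, edge_feats], weights=[1.0, 0.2])
clf = SVC(kernel="rbf").fit(X, labels)
```

    Down-weighting the uninformative block keeps it from diluting the kernel distances that the SVM relies on.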

  6. Computational multispectral video imaging [Invited].

    PubMed

    Wang, Peng; Menon, Rajesh

    2018-01-01

    Multispectral imagers reveal information unperceivable to humans and conventional cameras. Here, we demonstrate a compact single-shot multispectral video-imaging camera by placing a micro-structured diffractive filter in close proximity to the image sensor. The diffractive filter converts spectral information to a spatial code on the sensor pixels. Following a calibration step, this code can be inverted via regularization-based linear algebra to compute the multispectral image. We experimentally demonstrated spectral resolution of 9.6 nm within the visible band (430-718 nm). We further show that the spatial resolution is enhanced by over 30% compared with the case without the diffractive filter. We also demonstrate Vis-IR imaging with the same sensor. Because no absorptive color filters are utilized, sensitivity is preserved as well. Finally, the diffractive filters can be easily manufactured using optical lithography and replication techniques.
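    The calibration-then-inversion step amounts to regularized linear least squares; a minimal sketch (the code matrix `A` and the Tikhonov regularizer here are placeholders for the paper's actual calibration data and solver):

```python
import numpy as np

def invert_spectral_code(measured, A, lam=1e-3):
    """Recover the spectrum s from sensor measurements m ≈ A s via
    Tikhonov-regularized least squares: argmin ||A s - m||² + lam ||s||²."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ measured)
```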

  7. Sensor fusion of range and reflectance data for outdoor scene analysis

    NASA Technical Reports Server (NTRS)

    Kweon, In So; Hebert, Martial; Kanade, Takeo

    1988-01-01

    In recognizing objects in an outdoor scene, range and reflectance (or color) data provide complementary information. Results of experiments in recognizing outdoor scenes containing roads, trees, and cars are presented. The recognition program uses range and reflectance data obtained by a scanning laser range finder, as well as color data from a color TV camera. After segmentation of each image into primitive regions, models of objects are matched using various properties.

  8. Full-color digitized holography for large-scale holographic 3D imaging of physical and nonphysical objects.

    PubMed

    Matsushima, Kyoji; Sonobe, Noriaki

    2018-01-01

    Digitized holography techniques are used to reconstruct three-dimensional (3D) images of physical objects using large-scale computer-generated holograms (CGHs). The object field is captured at three wavelengths over a wide area at high densities. Synthetic aperture techniques using single sensors are used for image capture in phase-shifting digital holography. The captured object field is incorporated into a virtual 3D scene that includes nonphysical objects, e.g., polygon-meshed CG models. The synthetic object field is optically reconstructed as a large-scale full-color CGH using red-green-blue color filters. The CGH has a wide full-parallax viewing zone and reconstructs a deep 3D scene with natural motion parallax.

  9. Color regeneration from reflective color sensor using an artificial intelligent technique.

    PubMed

    Saracoglu, Ömer Galip; Altural, Hayriye

    2010-01-01

    A low-cost optical sensor based on reflective color sensing is presented. Artificial neural network models are used to improve color regeneration from the sensor signals. Analog voltages of the sensor are successfully converted to RGB colors. The artificial intelligence models presented in this work enable color regeneration from the analog outputs of the color sensor. In addition, inverse modeling supported by an intelligent technique enables the sensor probe to be used as a colorimetric sensor that relates color changes to analog voltages.
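    A minimal numpy sketch of the forward mapping: a tiny one-hidden-layer network trained on synthetic voltage-to-RGB pairs (the paper's network architecture, training method, and device data are not given in this record; everything below is an illustrative stand-in):

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic stand-in for the device: 3 analog voltages -> RGB color
V = rng.uniform(0.0, 1.0, (200, 3))
RGB = np.tanh(V @ rng.normal(size=(3, 3)))

# one hidden layer, trained by full-batch gradient descent on squared error
W1 = rng.normal(0.0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 3)); b2 = np.zeros(3)
lr, mse0 = 0.1, None
for step in range(2000):
    H = np.tanh(V @ W1 + b1)           # hidden activations
    pred = H @ W2 + b2                 # predicted RGB
    err = pred - RGB
    if step == 0:
        mse0 = float((err ** 2).mean())    # error before any training
    gW2 = H.T @ err / len(V); gb2 = err.mean(0)
    gH = (err @ W2.T) * (1.0 - H ** 2)     # backprop through tanh
    gW1 = V.T @ gH / len(V); gb1 = gH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
mse = float((err ** 2).mean())
```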

  10. High-End CMOS Active Pixel Sensors For Space-Borne Imaging Instruments

    DTIC Science & Technology

    2005-07-13

    Approved for public release; distribution unlimited. See also ADM001791, Potentially Disruptive Technologies and Their Impact in Space Programs, held in Marseille, France on 4-6 July 2005. The original document contains color images.

  11. Improving color constancy by discounting the variation of camera spectral sensitivity

    NASA Astrophysics Data System (ADS)

    Gao, Shao-Bing; Zhang, Ming; Li, Chao-Yi; Li, Yong-Jie

    2017-08-01

    It is an ill-posed problem to recover the true scene colors from a color-biased image by discounting the effects of the scene illuminant and the camera spectral sensitivity (CSS) at the same time. Most color constancy (CC) models have been designed to first estimate the illuminant color, which is then removed from the color-biased image to obtain an image taken under white light, without explicit consideration of the CSS effect on CC. This paper first studies the CSS effect on illuminant estimation arising in inter-dataset-based CC (inter-CC), i.e., training a CC model on one dataset and then testing on another dataset captured by a distinct CSS. We show the clear degradation of existing CC models for inter-CC application. We then propose a simple way to overcome this degradation by quickly learning a transform matrix between the two distinct CSSs (CSS-1 and CSS-2). The learned matrix is then used to convert the data (including the illuminant ground truth and the color-biased images) rendered under CSS-1 into CSS-2, and then to train and apply the CC model on the color-biased images under CSS-2, without the burdensome acquisition of a training set under CSS-2. Extensive experiments on synthetic and real images show that our method can clearly improve the inter-CC performance of traditional CC algorithms. We suggest that by taking the CSS effect into account, it is more likely to obtain truly color-constant images invariant to changes of both the illuminant and the camera sensor.
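    The quick transform-learning step can be sketched as a least-squares fit between corresponding sensor responses (a simplified linear version under the assumption that paired responses from the two sensitivities are available; names are illustrative):

```python
import numpy as np

def learn_css_transform(rgb_css1, rgb_css2):
    """Fit a 3x3 matrix M minimizing ||rgb_css1 @ M - rgb_css2||², mapping
    responses rendered under camera spectral sensitivity CSS-1 to CSS-2."""
    M, *_ = np.linalg.lstsq(rgb_css1, rgb_css2, rcond=None)
    return M
```

    The fitted matrix would then be applied to both the color-biased images and the illuminant ground truth before training the color constancy model under CSS-2.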

  12. A Quality Sorting of Fruit Using a New Automatic Image Processing Method

    NASA Astrophysics Data System (ADS)

    Amenomori, Michihiro; Yokomizu, Nobuyuki

    This paper presents an innovative approach for quality sorting of objects, such as apples in an agricultural factory, using an image processing algorithm. The objectives of our approach are, first, to sort the objects precisely by their colors and, second, to detect efficiently any irregularity in the colors on the surface of the apples. An experiment was conducted, and the results were compared with those obtained by a human sorting process and by color-sensor sorting devices. The results demonstrate that our approach is capable of sorting the objects rapidly, with a valid classification rate of 100%.

  13. Global Ocean Phytoplankton

    NASA Technical Reports Server (NTRS)

    Franz, B. A.; Behrenfeld, M. J.; Siegel, D. A.; Werdell, P. J.

    2014-01-01

    Marine phytoplankton are responsible for roughly half the net primary production (NPP) on Earth, fixing atmospheric CO2 into food that fuels global ocean ecosystems and drives the ocean's biogeochemical cycles. Phytoplankton growth is highly sensitive to variations in ocean physical properties, such as upper ocean stratification and light availability within the mixed layer. Satellite ocean color sensors, such as the Sea-viewing Wide Field-of-view Sensor (SeaWiFS; McClain 2009) and Moderate Resolution Imaging Spectroradiometer (MODIS; Esaias 1998), provide observations of sufficient frequency and geographic coverage to globally monitor physically-driven changes in phytoplankton distributions. In practice, ocean color sensors retrieve the spectral distribution of visible solar radiation reflected upward from beneath the ocean surface, which can then be related to changes in the photosynthetic phytoplankton pigment, chlorophyll-a (Chla; measured in mg m⁻³). Here, global Chla data for 2013 are evaluated within the context of the 16-year continuous record provided through the combined observations of SeaWiFS (1997-2010) and MODIS on Aqua (MODISA; 2002-present). Ocean color measurements from the recently launched Visible and Infrared Imaging Radiometer Suite (VIIRS; 2011-present) are also considered, but results suggest that the temporal calibration of the VIIRS sensor is not yet sufficiently stable for quantitative global change studies. All MODISA (version 2013.1), SeaWiFS (version 2010.0), and VIIRS (version 2013.1) data presented here were produced by NASA using consistent Chla algorithms.

  14. Inkjet printing of conjugated polymer precursors on paper substrates for colorimetric sensing and flexible electrothermochromic display.

    PubMed

    Yoon, Bora; Ham, Dae-Young; Yarimaga, Oktay; An, Hyosung; Lee, Chan Woo; Kim, Jong-Man

    2011-12-08

    Inkjet-printable aqueous suspensions of conjugated polymer precursors are developed for fabrication of patterned color images on paper substrates. Printing of a diacetylene (DA)-surfactant composite ink on unmodified paper and photopaper, as well as on a banknote, enables generation of latent images that are transformed to blue-colored polydiacetylene (PDA) structures by UV irradiation. Both irreversible and reversible thermochromism with the PDA printed images are demonstrated and applied to flexible and disposable sensors and to displays. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. High Dynamic Range Spectral Imaging Pipeline For Multispectral Filter Array Cameras.

    PubMed

    Lapray, Pierre-Jean; Thomas, Jean-Baptiste; Gouton, Pierre

    2017-06-03

    Spectral filter array imaging exhibits a strong similarity with color filter array imaging. This permits us to embed this technology in practical vision systems with little adaptation of existing solutions. In this communication, we define an imaging pipeline, extended from color filter arrays, that permits high dynamic range (HDR) spectral imaging. We propose an implementation of this pipeline on a prototype sensor and evaluate the quality of our implementation on real data with objective metrics and visual examples. We demonstrate that we reduce noise and, in particular, solve the problem of noise generated by the lack of energy balance. The data are provided to the community in an image database for further research.

  16. Dynamic Range and Sensitivity Requirements of Satellite Ocean Color Sensors: Learning from the Past

    NASA Technical Reports Server (NTRS)

    Hu, Chuanmin; Feng, Lian; Lee, Zhongping; Davis, Curtiss O.; Mannino, Antonio; McClain, Charles R.; Franz, Bryan A.

    2012-01-01

    Sensor design and mission planning for satellite ocean color measurements require careful consideration of the signal dynamic range and sensitivity (specifically here signal-to-noise ratio or SNR) so that small changes of ocean properties (e.g., surface chlorophyll-a concentrations or Chl) can be quantified while most measurements are not saturated. Past and current sensors used different signal levels, formats, and conventions to specify these critical parameters, making it difficult to make cross-sensor comparisons or to establish standards for future sensor design. The goal of this study is to quantify these parameters under uniform conditions for widely used past and current sensors in order to provide a reference for the design of future ocean color radiometers. Using measurements from the Moderate Resolution Imaging Spectroradiometer onboard the Aqua satellite (MODISA) under various solar zenith angles (SZAs), typical (L(sub typical)) and maximum (L(sub max)) at-sensor radiances from the visible to the shortwave IR were determined. The L(sub typical) values at an SZA of 45 deg were used as constraints to calculate SNRs of 10 multiband sensors at the same L(sub typical) radiance input and 2 hyperspectral sensors at a similar radiance input. The calculations were based on clear-water scenes with an objective method of selecting pixels with minimal cross-pixel variations to assure target homogeneity. Among the widely used ocean color sensors that have routine global coverage, MODISA ocean bands (1 km) showed 2-4 times higher SNRs than the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) (1 km) and comparable SNRs to the Medium Resolution Imaging Spectrometer (MERIS)-RR (reduced resolution, 1.2 km), leading to different levels of precision in the retrieved Chl data product. MERIS-FR (full resolution, 300 m) showed SNRs lower than MODISA and MERIS-RR, a consequence of the gain in spatial resolution. 
SNRs of all MODISA ocean bands and SeaWiFS bands (except the SeaWiFS near-IR bands) exceeded those from prelaunch sensor specifications after adjusting the input radiance to L(sub typical). The tabulated L(sub typical), L(sub max), and SNRs of the various multiband and hyperspectral sensors under the same or similar radiance input provide references to compare sensor performance in product precision and to help design future missions such as the Geostationary Coastal and Air Pollution Events (GEO-CAPE) mission and the Pre-Aerosol-Clouds-Ecosystems (PACE) mission currently being planned by the U.S. National Aeronautics and Space Administration (NASA).
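    The clear-water SNR estimation described above can be sketched as follows; the window size, the homogeneity criterion (keeping only the lowest-variance windows), and the function name are illustrative choices, not the paper's exact procedure:

```python
import numpy as np

# Hedged sketch of estimating sensor SNR from clear-water scenes: tile the
# scene into small windows, keep only the most homogeneous windows (lowest
# std) to reject real geophysical variability, then take SNR = mean / std
# of the radiance within them. Parameters here are illustrative.
def estimate_snr(radiance, win=8, keep_frac=0.1):
    h, w = radiance.shape
    stats = []
    for i in range(0, h - win + 1, win):
        for j in range(0, w - win + 1, win):
            block = radiance[i:i + win, j:j + win]
            stats.append((block.std(), block.mean()))
    stats.sort(key=lambda t: t[0])              # most homogeneous first
    kept = stats[:max(1, int(len(stats) * keep_frac))]
    return float(np.mean([m / s for s, m in kept if s > 0]))

rng = np.random.default_rng(0)
# Flat field with noise sigma = 2 about a mean of 100: true SNR ~ 50.
scene = 100.0 + rng.normal(0.0, 2.0, size=(128, 128))
snr = estimate_snr(scene)
```

    Selecting only the lowest-variance windows slightly biases the noise estimate low, so a real implementation would calibrate this selection step; the sketch only shows the structure of the computation.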

  17. Electrically actuatable temporal tristimulus-color device

    DOEpatents

    Koehler, Dale R.

    1992-01-01

    The electrically actuated light filter operates in a cyclical temporal mode to effect a tristimulus-color light analyzer. Construction is based on a Fabry-Perot interferometer comprising a high-speed movable mirror pair and cyclically powered electrical actuators. When combined with a single vidicon tube or a monochrome solid state image sensor, a temporally operated tristimulus-color video camera is effected. A color generator is realized when the device is constructed with a companion light source, providing a flicker-free colored-light source for transmission-type display systems. Advantages of low cost and small physical size result from photolithographic batch-processing manufacturability.

  18. Ratiometric sensing of fluoride and acetate anions based on a BODIPY-azaindole platform and its application to living cell imaging.

    PubMed

    Mahapatra, Ajit Kumar; Maji, Rajkishor; Maiti, Kalipada; Adhikari, Susanta Sekhar; Das Mukhopadhyay, Chitrangada; Mandal, Debasish

    2014-01-07

    A new BODIPY-azaindole based fluorescent sensor 1 was designed and synthesized as a new colorimetric and ratiometric fluorescent chemosensor for fluoride. The binding and sensing abilities of sensor 1 towards various anions were studied by absorption, emission and (1)H NMR titration spectroscopies. The spectral responses of 1 to fluoride in acetonitrile-water were studied: an approximately 69 nm red shift in absorption and a ratiometric fluorescent response were observed. The striking light yellow to deep brown color change in ambient light and the green to blue emission color change are thought to be due to the deprotonation of the indole moiety of the azaindole fluorophore. From the changes in the absorption, fluorescence, and (1)H NMR titration spectra, proton-transfer mechanisms were deduced. Density functional theory and time-dependent density functional theory calculations were conducted to rationalize the optical response of the sensor. Results were supported by confocal fluorescence imaging and MTT assay of live cells.

  19. Microlens performance limits in sub-2 μm pixel CMOS image sensors.

    PubMed

    Huo, Yijie; Fesenmaier, Christian C; Catrysse, Peter B

    2010-03-15

    CMOS image sensors with smaller pixels are expected to enable digital imaging systems with better resolution. When pixel size scales below 2 μm, however, diffraction affects the optical performance of the pixel and of its microlens in particular. We present a first-principles electromagnetic analysis of microlens behavior during the lateral scaling of CMOS image sensor pixels. We establish for a three-metal-layer pixel that diffraction prevents the microlens from acting as a focusing element when pixels become smaller than 1.4 μm. This severely degrades performance for on- and off-axis pixels in the red, green and blue color channels. We predict that one-metal-layer or backside-illuminated pixels are required to extend the functionality of microlenses beyond the 1.4 μm pixel node.

  20. A Plenoptic Multi-Color Imaging Pyrometer

    NASA Technical Reports Server (NTRS)

    Danehy, Paul M.; Hutchins, William D.; Fahringer, Timothy; Thurow, Brian S.

    2017-01-01

    A three-color pyrometer has been developed based on plenoptic imaging technology. Three bandpass filters placed in front of a camera lens allow separate 2D images to be obtained on a single image sensor at three different and adjustable wavelengths selected by the user. Images were obtained of different black- or grey-bodies including a calibration furnace, a radiation heater, and a luminous sulfur match flame. The images obtained of the calibration furnace and radiation heater were processed to determine 2D temperature distributions. Calibration results in the furnace showed that the instrument can measure temperature with an accuracy and precision of 10 Kelvins between 1100 and 1350 K. Time-resolved 2D temperature measurements of the radiation heater are shown.

  1. HPT: A High Spatial Resolution Multispectral Sensor for Microsatellite Remote Sensing

    PubMed Central

    Takahashi, Yukihiro; Sakamoto, Yuji; Kuwahara, Toshinori

    2018-01-01

    Although nano/microsatellites have great potential as remote sensing platforms, the spatial and spectral resolutions of an optical payload instrument are limited. In this study, a high spatial resolution multispectral sensor, the High-Precision Telescope (HPT), was developed for the RISING-2 microsatellite. The HPT has four image sensors: three in the visible region of the spectrum used for the composition of true color images, and a fourth in the near-infrared region, which employs liquid crystal tunable filter (LCTF) technology for wavelength scanning. Band-to-band image registration methods have also been developed for the HPT and implemented in the image processing procedure. The processed images were compared with other satellite images, and proven to be useful in various remote sensing applications. Thus, LCTF technology can be considered an innovative tool that is suitable for future multi/hyperspectral remote sensing by nano/microsatellites. PMID:29463022

  2. Daylight coloring for monochrome infrared imagery

    NASA Astrophysics Data System (ADS)

    Gabura, James

    2015-05-01

    The effectiveness of infrared imagery in poor visibility situations is well established, and the range of applications is expanding as we enter a new era of inexpensive thermal imagers for mobile phones. However, the counterintuitive reflectance characteristics of various common scene elements can cause slowed reaction times and impaired situational awareness, consequences that can be especially detrimental in emergency situations. While multiband infrared sensors can be used, they are inherently more costly. Here we propose a technique for adding a daylight color appearance to single-band infrared images, using the normally overlooked property of local image texture. The simple method described here is illustrated with colorized images from the visible red and long-wave infrared bands. Our colorizing process not only imparts a natural daylight appearance to infrared images but also enhances the contrast and visibility of otherwise obscure detail. We anticipate that this colorizing method will lead to a better user experience, faster reaction times, and improved situational awareness for a growing community of infrared camera users. A natural extension of our process could expand upon its texture-discerning feature by adding specialized filters for discriminating specific targets.

  3. Achromatic-chromatic colorimetric sensors for on-off type detection of analytes.

    PubMed

    Heo, Jun Hyuk; Cho, Hui Hun; Lee, Jin Woong; Lee, Jung Heon

    2014-12-21

    We report the development of achromatic colorimetric sensors; sensors changing their colors from achromatic black to other chromatic colors. An achromatic colorimetric sensor was prepared by mixing a general colorimetric indicator, whose color changes between chromatic colors, with a complementary colored dye that does not react to the targeted analyte. As the color of an achromatic colorimetric sensor changes from black to a chromatic color, the color change can be recognized with the naked eye much more easily than with general colorimetric sensors. More importantly, achromatic colorimetric sensors enable on-off type recognition of the presence of analytes, which has not been achieved with most colorimetric sensors. In addition, the color changes of some achromatic colorimetric sensors (achromatic Eriochrome Black T and achromatic Benedict's solution) could be recognized with the naked eye at much lower concentration ranges than with normal chromatic colorimetric sensors. These results provide new opportunities in the use of colorimetric sensors for diverse applications, such as harsh industrial, environmental, and biological detection.

  4. Comparison of optics and performance of a distal sensor high definition cystoscope, a distal sensor standard definition cystoscope, and a fiberoptic cystoscope.

    PubMed

    Lusch, Achim; Liss, Michael A; Greene, Peter; Abdelshehid, Corollos; Menhadji, Ashleigh; Bucur, Philip; Alipanah, Reza; McDougall, Elspeth; Landman, Jaime

    2013-12-01

    To evaluate performance characteristics and optics of a new generation high-definition distal sensor (HD-DS) flexible cystoscope, a standard-definition distal sensor (SD-DS) cystoscope, and a standard fiberoptic (FO) cystoscope. Three new cystoscopes (HD-DS, SD-DS, and FO) were compared for active deflection, irrigation flow, and optical characteristics. Each cystoscope was evaluated with an empty working channel and with various accessories. Optical characteristics (resolution, grayscale imaging, color representation, depth of field, and image brightness) were measured using United States Air Force (USAF)/Edmund Optics test targets and illumination meter. We digitally recorded a porcine cystoscopy in both clear and blood fields, with subsequent video analysis by 8 surgeons via questionnaire. The HD-DS had a higher resolution than the SD-DS and the FO at both 20 mm (6.35 vs 4.00 vs 2.24 line pairs/mm) and 10 mm (14.3 vs 7.13 vs 4.00 line pairs/mm) evaluations, respectively (P <.001 and P <.001). Color representation and depth of field (P = .001 and P <.001) were better in the HD-DS. When compared to the FO, the HD-DS and SD-DS demonstrated superior deflection up and irrigant flow with and without accessory present in the working channel, whereas image brightness was superior in the FO (P <.001, P = .001, and P <.001, respectively). Observers deemed the HD-DS cystoscope superior in visualization in clear and bloody fields, as well as for illumination. The new HD-DS provided significantly improved visualization in a clear and a bloody field, resolution, color representation, and depth of field compared to SD-DS and FO. Clinical correlation of these findings is pending. Copyright © 2013 Elsevier Inc. All rights reserved.

  5. Online prediction of organoleptic data for snack food using color images

    NASA Astrophysics Data System (ADS)

    Yu, Honglu; MacGregor, John F.

    2004-11-01

    In this paper, a study of the real-time prediction of organoleptic properties of snack food using RGB color images is presented. The so-called organoleptic properties, which are properties based on texture, taste and sight, are generally measured either by human sensory response or by mechanical devices. Neither of these two methods can be used for on-line feedback control in high-speed production. In this situation, a vision-based soft sensor is very attractive: by taking images of the products, the samples remain untouched and the product properties can be predicted in real time from image data. Four types of organoleptic properties are considered in this study: blister level, toast points, taste and peak break force. Wavelet transforms are applied to the color images, and the averaged absolute value of each filtered image is used as a texture feature variable. To handle the high correlation among the feature variables, Partial Least Squares (PLS) is used to regress the extracted feature variables against the four response variables.
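    The texture-feature step above can be sketched with a one-level Haar wavelet transform, using the mean absolute value of each detail subband as a feature; the transform level, subband choice, and function name are illustrative, and the PLS regression step that would follow is not shown:

```python
import numpy as np

# One-level Haar wavelet decomposition of a single image channel, returning
# the mean absolute value of each detail subband as a texture feature.
# These features would then be regressed onto the organoleptic responses
# (the paper uses PLS for that step; it is omitted here).
def haar_texture_features(img):
    img = img[:img.shape[0] // 2 * 2, :img.shape[1] // 2 * 2].astype(float)
    a = (img[0::2] + img[1::2]) / 2          # row averages
    d = (img[0::2] - img[1::2]) / 2          # row differences
    lh = (a[:, 0::2] - a[:, 1::2]) / 2       # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2       # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2       # diagonal detail
    return np.array([np.abs(lh).mean(), np.abs(hl).mean(), np.abs(hh).mean()])

rng = np.random.default_rng(1)
smooth = rng.normal(100, 1, (64, 64))        # low-texture surface
rough = rng.normal(100, 20, (64, 64))        # high-texture surface
f_smooth = haar_texture_features(smooth)
f_rough = haar_texture_features(rough)
```

    A rougher surface produces larger detail coefficients, so its feature vector dominates; this is the property that lets wavelet features stand in for visual texture.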

  6. Sensor node for remote monitoring of waterborne disease-causing bacteria.

    PubMed

    Kim, Kyukwang; Myung, Hyun

    2015-05-05

    A sensor node for sampling water and checking for the presence of harmful bacteria such as E. coli in water sources was developed in this research. A chromogenic enzyme substrate assay method was used to easily detect coliform bacteria by monitoring the color change of the sampled water mixed with a reagent. Live webcam image streaming to the web browser of the end user with a Wi-Fi connected sensor node shows the water color changes in real time. The liquid can be manipulated on the web-based user interface, and also can be observed by webcam feeds. Image streaming and web console servers run on an embedded processor with an expansion board. The UART channel of the expansion board is connected to an external Arduino board and a motor driver to control self-priming water pumps to sample the water, mix the reagent, and remove the water sample after the test is completed. The sensor node can repeat water testing until the test reagent is depleted. The authors anticipate that the use of the sensor node developed in this research can decrease the cost and required labor for testing samples in a factory environment and checking the water quality of local water sources in developing countries.

  7. Determination of Primary Spectral Bands for Remote Sensing of Aquatic Environments

    PubMed Central

    Lee, ZhongPing; Carder, Kendall; Arnone, Robert; He, MingXia

    2007-01-01

    About 30 years ago, NASA launched the first ocean-color observing satellite: the Coastal Zone Color Scanner (CZCS). CZCS had 5 bands in the visible-infrared domain with an objective to detect changes of phytoplankton (measured by concentration of chlorophyll) in the oceans. Twenty years later, for the same objective but with advanced technology, the Sea-viewing Wide Field-of-view Sensor (SeaWiFS, 7 bands), the Moderate-Resolution Imaging Spectrometer (MODIS, 8 bands), and the Medium Resolution Imaging Spectrometer (MERIS, 12 bands) were launched. The selection of the number of bands and their positions was based on experimental and theoretical results achieved before the design of these satellite sensors. Recently, Lee and Carder (2002) demonstrated that for adequate derivation of major properties (phytoplankton biomass, colored dissolved organic matter, suspended sediments, and bottom properties) in both oceanic and coastal environments from observation of water color, it is better for a sensor to have ∼15 bands in the 400 – 800 nm range. That study, however, did not provide detailed analyses regarding the spectral locations of the 15 bands. Here, from nearly 400 hyperspectral (∼ 3-nm resolution) measurements of remote-sensing reflectance (a measure of water color) taken in both coastal and oceanic waters covering both optically deep and optically shallow waters, first- and second-order derivatives were calculated after interpolating the measurements to 1-nm resolution. From these derivatives, the frequency of zero values at each wavelength was counted, and the distribution spectrum of such frequencies was obtained. Furthermore, the wavelengths that have the highest occurrence of zeros were identified. 
Because these spectral locations indicate extrema (a local maximum or minimum) of the reflectance spectrum or inflections of the spectral curvature, placing the bands of a sensor at these wavelengths maximizes the potential of capturing (and then restoring) the spectral curve, and thus maximizes the potential of accurately deriving properties of the water column and/or bottom of various aquatic environments with a multi-band sensor. PMID:28903303
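    The core of the band-selection idea above can be sketched as follows; the synthetic spectra, the restriction to first-derivative zero crossings (the paper also uses second derivatives for inflections), and the function name are all illustrative:

```python
import numpy as np

# Hedged sketch: for many reflectance spectra, find where the first
# derivative crosses zero (spectral extrema), count the hits per wavelength
# across all spectra, and take the most frequently hit wavelengths as
# candidate sensor band centers.
def candidate_bands(spectra, wavelengths, n_bands=5):
    counts = np.zeros(len(wavelengths), dtype=int)
    for s in spectra:
        d1 = np.gradient(s, wavelengths)
        crossings = np.where(np.diff(np.sign(d1)) != 0)[0]
        counts[crossings] += 1
    return wavelengths[np.argsort(counts)[::-1][:n_bands]], counts

wl = np.arange(400, 801).astype(float)        # 1-nm grid, 400-800 nm
# Synthetic spectra: a reflectance peak jittered around 550 nm.
spectra = [np.exp(-((wl - 550 - k) ** 2) / 2000.0) for k in range(-5, 6)]
bands, counts = candidate_bands(spectra, wl, n_bands=3)
```

    Because every synthetic spectrum peaks near 550 nm, the selected candidate bands cluster there; with real field spectra the count histogram instead reveals the handful of wavelengths that best pin down the spectral curve.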

  8. Smartphone-Based VOC Sensor Using Colorimetric Polydiacetylenes.

    PubMed

    Park, Dong-Hoon; Heo, Jung-Moo; Jeong, Woomin; Yoo, Young Hyuk; Park, Bum Jun; Kim, Jong-Man

    2018-02-07

    Owing to a unique colorimetric (typically blue-to-red) feature upon environmental stimulation, polydiacetylenes (PDAs) have been actively employed in chemosensor systems. We developed a highly accurate and simple volatile organic compound (VOC) sensor system that can be operated using a conventional smartphone. The procedure begins with forming an array of four different PDAs on conventional paper using inkjet printing of four corresponding diacetylenes followed by photopolymerization. A database of color changes (i.e., red and hue values) is then constructed on the basis of different solvatochromic responses of the 4 PDAs to 11 organic solvents. Exposure of the PDA array to an unknown solvent promotes color changes, which are imaged using a smartphone camera and analyzed using the app. A comparison of the color changes to the database promoted by the 11 solvents enables the smartphone app to identify the unknown solvent with 100% accuracy. Additionally, it was demonstrated that the PDA array sensor was sufficiently sensitive to accurately detect the 11 VOC gases.
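    The identification step above amounts to nearest-neighbor matching of the measured color changes against the pre-built database. A minimal sketch; the solvent names, the four-spot (red, hue) response values, and the function name are made-up illustrative data, not measured PDA responses:

```python
# Each database entry holds the (red, hue) color change of the four PDA
# spots for one solvent. An unknown solvent is identified by the smallest
# summed squared distance over the four spots. Values are illustrative.
DATABASE = {
    "ethanol": [(30, 5), (80, 12), (10, 3), (55, 9)],
    "acetone": [(70, 20), (15, 2), (60, 18), (5, 1)],
    "toluene": [(5, 1), (40, 8), (90, 25), (20, 6)],
}

def identify(measured):
    def dist(entry):
        return sum((r1 - r2) ** 2 + (h1 - h2) ** 2
                   for (r1, h1), (r2, h2) in zip(entry, measured))
    return min(DATABASE, key=lambda name: dist(DATABASE[name]))

# A measurement close to the "ethanol" fingerprint is classified as such.
guess = identify([(32, 6), (78, 11), (12, 4), (53, 8)])
```

    Using four spots rather than one is what gives the array its discriminating power: solvents with similar responses on one PDA rarely agree on all four.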

  9. Application of EREP, LANDSAT, and aircraft image data to environmental problems related to coal mining

    NASA Technical Reports Server (NTRS)

    Amato, R. V.; Russell, O. R.; Martin, K. R.; Wier, C. E.

    1975-01-01

    Remote sensing techniques were used to study coal mining sites within the Eastern Interior Coal Basin (Indiana, Illinois, and western Kentucky), the Appalachian Coal Basin (Ohio, West Virginia, and Pennsylvania) and the anthracite coal basins of northeastern Pennsylvania. Remote sensor data evaluated during these studies were acquired by LANDSAT, Skylab and both high- and low-altitude aircraft. Airborne sensors included multispectral scanners, multiband cameras and standard mapping cameras loaded with panchromatic, color and color infrared films. The research conducted in these areas is a useful prerequisite to the development of an operational monitoring system that can be periodically employed to supply state and federal regulatory agencies with supportive data. Further research, however, must be undertaken to systematically examine those mining processes and features that can be monitored cost effectively using remote sensors and to determine what combination of sensors and ground sampling processes provides the optimum combination for an operational system.

  10. Plenoptic camera image simulation for reconstruction algorithm verification

    NASA Astrophysics Data System (ADS)

    Schwiegerling, Jim

    2014-09-01

    Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to give a color and exposure level for that pixel. To speed processing, three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.

  11. Applications of Geostationary Ocean Color Imager (GOCI) observations

    NASA Astrophysics Data System (ADS)

    Park, Y. J.

    2016-02-01

    Ocean color remote sensing opened a new era for biological oceanography by providing the global distribution of phytoplankton biomass every few days. It has proved useful for a variety of applications in coastal as well as oceanic waters. However, most ocean color sensors deliver less than one image per day for low- and middle-latitude areas, and this once-a-day imaging is insufficient to resolve transient or high-frequency processes. The Korean Geostationary Ocean Color Imager (GOCI), the first ocean color instrument operated in geostationary orbit, has been collecting ocean color radiometry (OCR) data (multi-band radiances at visible to NIR wavelengths) since July 2010. GOCI has an unprecedented capability to provide eight OCR images a day at 500 m resolution for the North East Asian seas. Monitoring spatial and temporal variability is important for understanding many processes occurring in open-ocean and coastal environments. With a series of images consecutively acquired by GOCI, we are now able to look into (sub-)diurnal variability of coastal ocean color products such as phytoplankton biomass, suspended particle concentrations, and primary production. The eight images taken per day also provide a way to derive maps of ocean current velocity. Compared to polar orbiters, GOCI delivers more frequent images at a constant viewing angle, which enables better monitoring of, and response to, coastal water issues such as harmful algal blooms and floating green and brown algae. The frequent observation capability for the local area allows timely response to natural disasters and hazards. GOCI images are often useful for identifying sea fog, sea ice, wildfires, volcanic eruptions, transport of dust aerosols, snow-covered areas, etc.

  12. Infrared and visible image fusion with the target marked based on multi-resolution visual attention mechanisms

    NASA Astrophysics Data System (ADS)

    Huang, Yadong; Gao, Kun; Gong, Chen; Han, Lu; Guo, Yue

    2016-03-01

    During traditional multi-resolution infrared and visible image fusion, low-contrast targets may be weakened and become inconspicuous because of opposite DN values in the source images. We therefore propose a novel target pseudo-color enhanced image fusion algorithm based on a modified attention model and the fast discrete curvelet transform. The interesting target regions are extracted from the source images by introducing motion features obtained from the modified attention model, and the source images are fused in grayscale in the curvelet domain via rules based on the physical characteristics of the sensors. The final fused image is obtained by mapping the extracted targets into the grayscale result in an appropriate pseudo-color. Experiments show that the algorithm highlights dim targets effectively and improves the SNR of the fused image.
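    The overall structure of the scheme can be sketched in a toy form; here a simple weighted average stands in for the paper's curvelet-domain fusion rules and attention-based target extraction, and the pseudo-color, weights, and function name are illustrative:

```python
import numpy as np

# Toy version of target-marked fusion: fuse infrared and visible images in
# grayscale (a weighted average replaces the curvelet-domain rules), then
# overlay the detected target regions in a pseudo-color (red here).
def fuse_with_target_marking(ir, vis, target_mask, w_ir=0.5):
    gray = w_ir * ir + (1 - w_ir) * vis          # grayscale fusion
    rgb = np.stack([gray, gray, gray], axis=-1)  # replicate to RGB
    rgb[target_mask] = [255.0, 0.0, 0.0]         # mark targets in red
    return rgb

ir = np.full((8, 8), 60.0)
vis = np.full((8, 8), 120.0)
mask = np.zeros((8, 8), dtype=bool)
mask[2:4, 2:4] = True                            # a small detected target
out = fuse_with_target_marking(ir, vis, mask)
```

    The point of the pseudo-color overlay is exactly what the abstract describes: even if the gray fusion averages a dim target into the background, the marked region remains conspicuous.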

  13. MOBY, A Radiometric Buoy for Performance Monitoring and Vicarious Calibration of Satellite Ocean Color Sensors: Measurement and Data Analysis Protocols. Chapter 2

    NASA Technical Reports Server (NTRS)

    Clark, Dennis K.; Yarbrough, Mark A.; Feinholz, Mike; Flora, Stephanie; Broenkow, William; Kim, Yong Sung; Johnson, B. Carol; Brown, Steven W.; Yuen, Marilyn; Mueller, James L.

    2003-01-01

    The Marine Optical Buoy (MOBY) is the centerpiece of the primary ocean measurement site for calibration of satellite ocean color sensors based on independent in situ measurements. Since late 1996, the time series of normalized water-leaving radiances L(sub WN)(lambda) determined from the array of radiometric sensors attached to MOBY are the primary basis for the on-orbit calibrations of the USA Sea-viewing Wide Field-of-view Sensor (SeaWiFS), the Japanese Ocean Color and Temperature Sensor (OCTS), the French Polarization Detection Environmental Radiometer (POLDER), the German Modular Optoelectronic Scanner on the Indian Research Satellite (IRS1-MOS), and the USA Moderate Resolution Imaging Spectrometer (MODIS). The MOBY vicarious calibration L(sub WN)(lambda) reference is an essential element in the international effort to develop a global, multi-year time series of consistently calibrated ocean color products using data from a wide variety of independent satellite sensors. A longstanding goal of the SeaWiFS and MODIS (Ocean) Science Teams is to determine satellite-derived L(sub WN)(lambda) with a relative combined standard uncertainty of 5 %. Other satellite ocean color projects and the Sensor Intercomparison for Marine Biology and Interdisciplinary Oceanic Studies (SIMBIOS) project have also adopted this goal, at least implicitly. Because water-leaving radiance contributes at most 10 % of the total radiance measured by a satellite sensor above the atmosphere, a 5 % uncertainty in L(sub WN)(lambda) implies a 0.5 % uncertainty in the above-atmosphere radiance measurements. This level of uncertainty can only be approached using vicarious-calibration approaches as described below. In practice, this means that the satellite radiance responsivity is adjusted to achieve the best agreement, in a least-squares sense, for the L(sub WN)(lambda) results determined using the satellite and the independent optical sensors (e.g. MOBY). 
The end result of this approach is to implicitly absorb unquantified, but systematic, errors in the atmospheric correction, incident solar flux, and satellite sensor calibration into a single correction factor to produce consistency with the in situ data.

  14. Real-time look-up table-based color correction for still image stabilization of digital cameras without using frame memory

    NASA Astrophysics Data System (ADS)

    Luo, Lin-Bo; An, Sang-Woo; Wang, Chang-Shuai; Li, Ying-Chun; Chong, Jong-Wha

    2012-09-01

    Digital cameras usually decrease exposure time to capture motion-blur-free images. However, this operation generates an under-exposed image with a low-budget complementary metal-oxide semiconductor image sensor (CIS). Conventional color correction algorithms can efficiently correct under-exposed images; however, they are generally not performed in real time and need at least one frame memory if implemented in hardware. The authors propose a real-time look-up table-based color correction method that corrects under-exposed images in hardware without using frame memory. The method utilizes histogram matching of two preview images, exposed for a long and a short time, respectively, to construct an improved look-up table (ILUT) and then corrects the captured under-exposed image in real time. Because the ILUT is calculated in real time before processing the captured image, this method does not require frame memory to buffer image data, and therefore can greatly reduce the cost of the CIS. The method supports not only single-image capture but also bracketing to capture three images at a time. The proposed method was implemented in a hardware description language and verified on a field-programmable gate array with a 5-megapixel CIS. Simulations show that the system performs in real time at low cost and corrects the color of under-exposed images well.
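    The ILUT construction described above is essentially histogram matching between the two preview exposures. A minimal sketch in software terms; the 8-bit depth, the simulated halved exposure, and the function name are illustrative assumptions, not the paper's hardware design:

```python
import numpy as np

# Build a 256-entry look-up table that histogram-matches the short-exposure
# preview to the long-exposure preview, then correct the captured
# under-exposed frame with a single per-pixel LUT pass (no frame memory:
# each pixel is mapped independently of its neighbors).
def build_lut(short_prev, long_prev):
    cdf_s = np.cumsum(np.bincount(short_prev.ravel(), minlength=256)) / short_prev.size
    cdf_l = np.cumsum(np.bincount(long_prev.ravel(), minlength=256)) / long_prev.size
    # For each short-exposure level, pick the long-exposure level whose
    # cumulative frequency first reaches it (classic histogram matching).
    return np.searchsorted(cdf_l, cdf_s).clip(0, 255).astype(np.uint8)

rng = np.random.default_rng(2)
long_prev = rng.integers(80, 200, (64, 64)).astype(np.uint8)
short_prev = long_prev // 2                   # simulated under-exposure
lut = build_lut(short_prev, long_prev)
corrected = lut[short_prev]                   # real-time per-pixel mapping
```

    Because the LUT is built from the small previews before the full-resolution capture arrives, the correction itself is a streaming table lookup, which is what lets the hardware skip the frame buffer.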

  15. Bayesian model for matching the radiometric measurements of aerospace and field ocean color sensors.

    PubMed

    Salama, Mhd Suhyb; Su, Zhongbo

    2010-01-01

    A Bayesian model is developed to match aerospace ocean color observations to field measurements and derive the spatial variability of match-up sites. The performance of the model is tested against populations of synthesized spectra and full- and reduced-resolution MERIS data. The model derived the scale difference between a synthesized satellite pixel and point measurements with R(2) > 0.88 and relative error < 21% in the spectral range from 400 nm to 695 nm. The sub-pixel variabilities of the reduced-resolution MERIS image are derived with less than 12% relative error in heterogeneous regions. The method is generic and applicable to different sensors.

  16. Statistical Evaluation of VIIRS Ocean Color Products

    NASA Astrophysics Data System (ADS)

    Mikelsons, K.; Wang, M.; Jiang, L.

    2016-02-01

    Evaluation and validation of satellite-derived ocean color products is a complicated task, which often relies on precise in-situ measurements for satellite data quality assessment. However, in-situ measurements are available at only comparatively few locations, are expensive, and do not cover all times. In the open ocean, variability occurs over longer spatial and temporal scales, and the water conditions are generally more stable. We use this fact to perform extensive statistical evaluations of the consistency of ocean color retrievals, based on comparison of data retrieved at different times and under various retrieval parameters. We have used the NOAA Multi-Sensor Level-1 to Level-2 (MSL12) ocean color data processing system for ocean color product data derived from the Visible Infrared Imaging Radiometer Suite (VIIRS). We show results for the statistical dependence of normalized water-leaving radiance spectra on various retrieval-geometry parameters, such as solar- and sensor-zenith angles, as well as physical variables, such as wind speed, air pressure, ozone amount, water vapor, etc. In most cases, the results show consistent retrievals within the relevant range of retrieval parameters, indicating good performance of MSL12 in the open ocean. The results also yield upper bounds on the solar- and sensor-zenith angles for reliable ocean color retrievals, and show a slight increase of VIIRS-derived normalized water-leaving radiances with wind speed and water vapor concentration.

  17. Bayesian demosaicing using Gaussian scale mixture priors with local adaptivity in the dual tree complex wavelet packet transform domain

    NASA Astrophysics Data System (ADS)

    Goossens, Bart; Aelterman, Jan; Luong, Hiep; Pizurica, Aleksandra; Philips, Wilfried

    2013-02-01

    In digital cameras and mobile phones, there is an ongoing trend to increase image resolution, decrease sensor size, and use shorter exposure times. Because smaller sensors inherently lead to more noise and worse spatial resolution, digital post-processing techniques are required to resolve many of the artifacts. Color filter arrays (CFAs), which use alternating patterns of color filters, are very popular for price and power consumption reasons. However, color filter arrays require the use of a post-processing technique such as demosaicing to recover full resolution RGB images. Recently, there has been some interest in techniques that jointly perform demosaicing and denoising. This has the advantage that the demosaicing and denoising can be performed optimally (e.g. in the MSE sense) for the considered noise model, while avoiding artifacts introduced when applying demosaicing and denoising sequentially. In this paper, we continue the research line of wavelet-based demosaicing techniques. These approaches are computationally simple and well suited for combination with denoising. Therefore, we derive Bayesian minimum mean squared error (MMSE) joint demosaicing and denoising rules in the complex wavelet packet domain, taking local adaptivity into account. As an image model, we use Gaussian Scale Mixtures, thereby taking advantage of the directionality of the complex wavelets. Our results show that this technique is well capable of reconstructing fine details in the image, while removing all of the noise, at a relatively low computational cost. In particular, the complete reconstruction (including color correction, white balancing, etc.) of a 12 megapixel RAW image takes 3.5 s on a recent mid-range GPU.
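For a fixed mixing multiplier, the Gaussian Scale Mixture prior reduces to a plain Gaussian, for which the Bayesian MMSE estimate of a noisy wavelet coefficient y = x + n is linear Wiener shrinkage. The sketch below shows only that degenerate special case, not the locally adaptive GSM rule of the paper; `sigma_x` and `sigma_n` are assumed values.

```python
import numpy as np

sigma_x = 2.0   # assumed prior std of the clean wavelet coefficient
sigma_n = 1.0   # assumed noise std

def mmse_shrink(y):
    """MMSE estimate under a Gaussian prior: shrink y toward zero."""
    return (sigma_x**2 / (sigma_x**2 + sigma_n**2)) * y

y = np.array([-3.0, 0.5, 4.0])   # noisy coefficients
x_hat = mmse_shrink(y)           # each coefficient scaled by 0.8
```

The full GSM estimator integrates this linear rule over the posterior of the multiplier, which is what gives the heavier shrinkage of small (noise-dominated) coefficients.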

  18. Deep neural network using color and synthesized three-dimensional shape for face recognition

    NASA Astrophysics Data System (ADS)

    Rhee, Seon-Min; Yoo, ByungIn; Han, Jae-Joon; Hwang, Wonjun

    2017-03-01

    We present an approach for face recognition using synthesized three-dimensional (3-D) shape information together with two-dimensional (2-D) color in a deep convolutional neural network (DCNN). As 3-D facial shape is hardly affected by the extrinsic 2-D texture changes caused by illumination, make-up, and occlusions, it can provide reliable complementary features in harmony with the 2-D color feature in face recognition. Unlike other approaches that use 3-D shape information with the help of an additional depth sensor, our approach generates a personalized 3-D face model by using only face landmarks in the 2-D input image. Using the personalized 3-D face model, we generate a frontalized 2-D color facial image as well as 3-D facial images (e.g., a depth image and a normal image). In our DCNN, we first feed the 2-D and 3-D facial images into independent convolutional layers, where the low-level kernels are learned according to their own characteristics. Then, we merge them and feed the result into higher-level layers under a single deep neural network. Our proposed approach is evaluated on the Labeled Faces in the Wild dataset, and the results show that the verification error rate at a false acceptance rate of 1% is improved by up to 32.1% compared with a baseline using only the 2-D color image.

  19. A high-resolution three-dimensional far-infrared thermal and true-color imaging system for medical applications.

    PubMed

    Cheng, Victor S; Bai, Jinfen; Chen, Yazhu

    2009-11-01

    As the needs for various kinds of body surface information are wide-ranging, we developed an imaging-sensor integrated system that can synchronously acquire high-resolution three-dimensional (3D) far-infrared (FIR) thermal and true-color images of the body surface. The proposed system integrates one FIR camera and one color camera with a 3D structured light binocular profilometer. To avoid disturbing the examined person with intense light projected directly into the eyes by the LCD projector, we developed a gray encoding strategy based on an optimized fringe projection layout. A self-heated checkerboard is employed to calibrate the different types of cameras. The structured light emitted by the LCD projector is then calibrated based on the stereo-vision principle and a least-squares quadric surface-fitting algorithm. Afterwards, the precise 3D surface can be fused with the undistorted thermal and color images. For medical applications, the region of interest (ROI) in the temperature or color image representing the surface area of clinical interest can be located at the corresponding position in the other images through a coordinate system transformation. System evaluation demonstrated a mapping error between FIR and visual images of three pixels or less. Experiments show that this work is useful in certain disease diagnoses.
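The ROI transfer described above relies on standard camera geometry: a pixel in one calibrated camera is back-projected to the 3D surface and reprojected into the other camera. The sketch below illustrates that step with hypothetical intrinsic matrices (`K_fir`, `K_rgb`) and extrinsics (`R`, `t`); none of these values come from the paper's calibration.

```python
import numpy as np

# Assumed intrinsics of the thermal (FIR) and color cameras.
K_fir = np.array([[500.0, 0, 160], [0, 500.0, 120], [0, 0, 1]])
K_rgb = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R = np.eye(3)                   # cameras assumed rotationally aligned
t = np.array([0.05, 0.0, 0.0])  # assumed 5 cm horizontal baseline (m)

def map_pixel(u, v, depth):
    """Back-project a thermal pixel to 3D, then project into the color image."""
    ray = np.linalg.inv(K_fir) @ np.array([u, v, 1.0])
    X = depth * ray                 # 3D point in thermal-camera coordinates
    x = K_rgb @ (R @ X + t)         # transform and project into color camera
    return x[:2] / x[2]             # perspective division

u2, v2 = map_pixel(160, 120, depth=1.0)
```

In the actual system the depth comes from the structured-light profilometer, so every ROI pixel has a measured 3D point rather than an assumed one.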

  20. Cloud screening Coastal Zone Color Scanner images using channel 5

    NASA Technical Reports Server (NTRS)

    Eckstein, B. A.; Simpson, J. J.

    1991-01-01

    Clouds are removed from Coastal Zone Color Scanner (CZCS) data using channel 5. Instrumentation problems require pre-processing of channel 5 before an intelligent cloud-screening algorithm can be used. For example, at intervals of about 16 lines, the sensor records anomalously low radiances. Moreover, the calibration equation yields negative radiances when the sensor records zero counts, and pixels corrupted by electronic overshoot must also be excluded. The remaining pixels may then be used in conjunction with the procedure of Simpson and Humphrey to determine the CZCS cloud mask. These results, together with in situ observations of phytoplankton pigment concentration, show that pre-processing and proper cloud-screening of CZCS data are necessary for accurate satellite-derived pigment concentrations. This is especially true in the coastal margins, where pigment content is high and image distortion associated with electronic overshoot is also present. The pre-processing algorithm is critical to obtaining accurate global estimates of pigment from spacecraft data.
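Two of the pre-processing steps described above (flagging anomalously low scan lines and negative calibrated radiances) can be sketched as simple masks. This is a schematic illustration on synthetic data, not the actual CZCS algorithm; thresholds and array sizes are invented.

```python
import numpy as np

# Synthetic channel-5 radiance image with the two defects described above.
rng = np.random.default_rng(1)
radiance = rng.uniform(5.0, 10.0, size=(64, 64))
radiance[::16, :] = 0.1        # anomalously low lines at ~16-line intervals
radiance[3, 7] = -0.5          # negative radiance from calibrating zero counts

# Flag whole scan lines whose mean radiance is implausibly low,
# and individual pixels with negative radiance.
line_mean = radiance.mean(axis=1, keepdims=True)
bad_line = line_mean < 1.0
bad_pixel = radiance < 0.0
valid = ~(np.broadcast_to(bad_line, radiance.shape) | bad_pixel)
```

Only the `valid` pixels would then be passed to the cloud-screening procedure; overshoot-corrupted pixels would need an additional mask keyed to bright targets.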

  1. Dynamic image fusion and general observer preference

    NASA Astrophysics Data System (ADS)

    Burks, Stephen D.; Doe, Joshua M.

    2010-04-01

    Recent developments in image fusion give the user community many options for ways of presenting the imagery to an end-user. Individuals at the US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate have developed an electronic system that allows users to quickly and efficiently determine optimal image fusion algorithms and color parameters based upon collected imagery and videos from environments that are typical to observers in a military environment. After performing multiple multi-band data collections in a variety of military-like scenarios, different waveband, fusion algorithm, image post-processing, and color choices are presented to observers as an output of the fusion system. The observer preferences can give guidelines as to how specific scenarios should affect the presentation of fused imagery.

  2. The Hyperspectral Imager for the Coastal Ocean (HICO): Sensor and Data Processing Overview

    DTIC Science & Technology

    2010-01-20

    backscattering coefficients, and others. Several of these software modules will be developed within the Automated Processing System (APS), a data... Automated Processing System (APS) NRL developed APS, which processes satellite data into ocean color data products. APS is a collection of methods...used for ocean color processing which provide the tools for the automated processing of satellite imagery [1]. These tools are in the process of

  3. Dragon Lake, Siberia

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Nicknamed 'Dragon Lake,' this body of water is formed by the Bratskove Reservoir, built along the Angara river in southern Siberia, near the city of Bratsk. This image was acquired in winter, when the lake is frozen. This image was acquired by Landsat 7's Enhanced Thematic Mapper plus (ETM+) sensor on December 19, 1999. This is a natural color composite image made using blue, green, and red wavelengths. Image provided by the USGS EROS Data Center Satellite Systems Branch

  4. A machine vision system for high speed sorting of small spots on grains

    USDA-ARS?s Scientific Manuscript database

    A sorting system was developed to detect and remove individual grain kernels with small localized blemishes or defects. The system uses a color VGA sensor to capture images of the kernels at high speed as the grain drops off an inclined chute. The image data are directly input into a field-programma...

  5. Image quality evaluation of color displays using a Foveon color camera

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Dallas, William J.; Fan, Jiahua; Krupinski, Elizabeth A.; Redford, Gary R.; Yoneda, Takahiro

    2007-03-01

    This paper presents preliminary data on the use of a color camera for the Quality Control (QC) and Quality Analysis (QA) of a color LCD in comparison with a monochrome LCD. The color camera is a CMOS camera with a pixel size of 9 µm and a pixel matrix of 2268 × 1512 × 3. The camera uses a sensor that has co-located pixels for all three primary colors. The imaging geometry used for most measurements was 12 × 12 camera pixels per display pixel, although a geometry of 17.6 camera pixels per display pixel might provide more accurate results. The color camera is used as an imaging colorimeter, where each camera pixel is calibrated to serve as a colorimeter. This capability permits the camera to determine the chromaticity of the color LCD at different sections of the display. After color calibration with a CS-200 colorimeter, the color coordinates of the display's primaries determined from the camera's luminance response are very close to those found with the CS-200; only the color coordinates of the display's white point were in error. The Modulation Transfer Function (MTF) as well as the noise, in terms of the Noise Power Spectrum (NPS), of both LCDs were evaluated. The horizontal MTFs of both displays have a larger negative slope than the vertical MTFs, indicating that the horizontal MTFs are poorer than the vertical MTFs. However, the modulations at the Nyquist frequency seem lower for the color LCD than for the monochrome LCD. These results contradict simulations regarding MTFs in the vertical direction. The spatial noise of the color display is larger in both directions than that of the monochrome display. Attempts were also made to separate the total noise into spatial and temporal components by subtracting images taken at exactly the same exposure. Temporal noise seems to be significantly lower than spatial noise.
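The spatial/temporal noise separation mentioned above rests on a standard trick: subtracting two frames taken at identical exposure cancels the fixed (spatial) pattern, leaving √2 times the temporal noise, while a single frame contains both. The sketch below uses synthetic values, not display measurements.

```python
import numpy as np

# Synthetic frames: a fixed spatial pattern shared by both frames,
# plus independent temporal noise in each.
rng = np.random.default_rng(2)
fixed_pattern = 0.05 * rng.standard_normal((256, 256))          # spatial noise
frame_a = 1.0 + fixed_pattern + 0.01 * rng.standard_normal((256, 256))
frame_b = 1.0 + fixed_pattern + 0.01 * rng.standard_normal((256, 256))

# Frame difference cancels the fixed pattern; its std is sqrt(2)*temporal.
temporal_noise = (frame_a - frame_b).std() / np.sqrt(2.0)
total_noise = frame_a.std()
spatial_noise = np.sqrt(max(total_noise**2 - temporal_noise**2, 0.0))
```

With the chosen synthetic levels the estimates recover roughly 0.01 temporal and 0.05 spatial noise, mirroring the paper's finding that temporal noise is much smaller than spatial noise.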

  6. Adaptive color demosaicing and false color removal

    NASA Astrophysics Data System (ADS)

    Guarnera, Mirko; Messina, Giuseppe; Tomaselli, Valeria

    2010-04-01

    Color interpolation solutions drastically influence the quality of the whole image generation pipeline, so they must guarantee the rendering of high quality pictures by avoiding typical artifacts such as blurring, zipper effects, and false colors. Moreover, demosaicing should avoid emphasizing typical artifacts of real sensor data, such as noise and the green imbalance effect, which would be further accentuated by the subsequent steps of the processing pipeline. We propose a new adaptive algorithm that selects the interpolation technique to apply to each pixel according to an analysis of its neighborhood. Edges are effectively interpolated through a directional filtering approach that interpolates the missing colors, selecting the suitable filter depending on edge orientation. Regions close to edges are interpolated through a simpler demosaicing approach. Flat regions are identified and low-pass filtered to eliminate residual noise and to minimize the annoying green imbalance effect. Finally, an effective false color removal algorithm is used as a postprocessing step to eliminate residual color errors. The experimental results show how sharp edges are preserved, whereas undesired zipper effects are reduced, improving the edge resolution itself and obtaining superior image quality.
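The directional-filtering idea described above can be sketched with the classic rule for interpolating the missing green value at a red (or blue) Bayer location: compare horizontal and vertical gradients and average along the smoother direction, so the filter never averages across an edge. This is an illustrative textbook rule, not the authors' exact algorithm.

```python
import numpy as np

def green_at_red(cfa, i, j):
    """Directionally interpolate the missing green at a red Bayer site (i, j).

    In a Bayer pattern the four direct neighbors of a red site are green,
    so the gradients below are computed on green samples.
    """
    dh = abs(cfa[i, j - 1] - cfa[i, j + 1])   # horizontal gradient
    dv = abs(cfa[i - 1, j] - cfa[i + 1, j])   # vertical gradient
    if dh < dv:                               # smoother horizontally
        return (cfa[i, j - 1] + cfa[i, j + 1]) / 2.0
    if dv < dh:                               # smoother vertically
        return (cfa[i - 1, j] + cfa[i + 1, j]) / 2.0
    return (cfa[i, j - 1] + cfa[i, j + 1] + cfa[i - 1, j] + cfa[i + 1, j]) / 4.0

# A vertical edge: dark left column, bright right column.  The rule
# interpolates along the edge (vertically) and avoids a zipper artifact.
cfa = np.array([[10.0, 10.0, 90.0],
                [10.0, 50.0, 90.0],
                [10.0, 10.0, 90.0]])
g = green_at_red(cfa, 1, 1)
```

A non-directional bilinear average here would blend the bright right-hand samples into the estimate, which is exactly the zipper/false-color failure mode the adaptive selection avoids.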

  7. NASA COAST and OCEANIA Airborne Missions in Support of Ecosystem and Water Quality Research in the Coastal Zone

    NASA Technical Reports Server (NTRS)

    Guild, Liane S.; Hooker, Stanford B.; Kudela, Raphael; Morrow, John; Russell, Philip; Myers, Jeffrey; Dunagan, Stephen; Palacios, Sherry; Livingston, John; Negrey, Kendra; hide

    2015-01-01

    Worldwide, coastal marine ecosystems are exposed to land-based sources of pollution and sedimentation from anthropogenic activities including agriculture and coastal development. Ocean color products from satellite sensors provide information on chlorophyll (phytoplankton pigment), sediments, and colored dissolved organic material. Further, ship-based in-water measurements and emerging airborne measurements provide in situ data for the vicarious calibration of current and next generation satellite ocean color sensors and to validate the algorithms that use the remotely sensed observations. Recent NASA airborne missions over Monterey Bay, CA, have demonstrated novel above- and in-water measurement capabilities supporting a combined airborne sensor approach (imaging spectrometer, microradiometers, and a sun photometer). The results characterize coastal atmospheric and aquatic properties through an end-to-end assessment of image acquisition, atmospheric correction, algorithm application, plus sea-truth observations from state-of-the-art instrument systems. The primary goal of the airborne missions was to demonstrate the following in support of calibration and validation exercises for satellite coastal ocean color products: 1) the utility of a multi-sensor airborne instrument suite to assess the bio-optical properties of coastal California, including water quality; and 2) the importance of contemporaneous atmospheric measurements to improve atmospheric correction in the coastal zone. Utilizing an imaging spectrometer optimized in the blue to green spectral domain enables higher signal for detection of the relatively dark radiance measurements from marine and freshwater ecosystem features. 
The novel airborne instrument, Coastal Airborne In-situ Radiometers (C-AIR), provides measurements of apparent optical properties with high dynamic range and fidelity for deriving exact water-leaving radiances at the land-ocean boundary, including radiometrically shallow aquatic ecosystems. Simultaneous measurements supporting empirical atmospheric correction of image data were accomplished using the Ames Airborne Tracking Sunphotometer (AATS-14). Flight operations are presented for the instrument payloads flown on the CIRPAS Twin Otter over Monterey Bay during the seasonal fall algal bloom in 2011 (COAST) and 2013 (OCEANIA) to support bio-optical measurements of phytoplankton for coastal zone research. Further, this airborne capability can be responsive to first-flush rain events that deliver higher concentrations of sediments and pollution to coastal waters via watersheds and overland flow.

  8. New experimental diffractive-optical data on E. Land's Retinex mechanism in human color vision: Part II

    NASA Astrophysics Data System (ADS)

    Lauinger, N.

    2007-09-01

    A better understanding of the color constancy mechanism in human color vision [7] can be reached through analyses of photometric data of all illuminants and patches (Mondrians or other visible objects) involved in visual experiments. In Part I [3] and in [4, 5 and 6] the integration in the human eye of the geometrical-optical imaging hardware and the diffractive-optical hardware has been described and illustrated (Fig.1). This combined hardware represents the main topic of the NAMIROS research project (nano- and micro- 3D gratings for optical sensors) [8] promoted and coordinated by Corrsys 3D Sensors AG. The hardware relevant to (photopic) human color vision can be described as a diffractive or interference-optical correlator transforming incident light into diffractive-optical RGB data and relating local RGB onto global RGB data in the near-field behind the 'inverted' human retina. The relative differences at local/global RGB interference-optical contrasts are available to photoreceptors (cones and rods) only after this optical pre-processing.

  9. Black light - How sensors filter spectral variation of the illuminant

    NASA Technical Reports Server (NTRS)

    Brainard, David H.; Wandell, Brian A.; Cowan, William B.

    1989-01-01

    Visual sensor responses may be used to classify objects on the basis of their surface reflectance functions. In a color image, the image data are represented as a vector of sensor responses at each point in the image. This vector depends both on the surface reflectance functions and on the spectral power distribution of the ambient illumination. Algorithms designed to classify objects on the basis of their surface reflectance functions typically attempt to overcome the dependence of the sensor responses on the illuminant by integrating sensor data collected from multiple surfaces. In machine vision applications, it is shown that it is often possible to design the sensor spectral responsivities so that the vector direction of the sensor responses does not depend upon the illuminant. The conditions under which this is possible are given and an illustrative calculation is performed. In biological systems, where the sensor responsivities are fixed, it is shown that some changes in the illumination cause no change in the sensor responses. Such changes in illuminant are called black illuminants. It is possible to express any illuminant as the sum of two unique components. One component is a black illuminant. The second component is called the visible component. The visible component of an illuminant completely characterizes the effect of the illuminant on the vector of sensor responses.
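The black/visible decomposition described above is linear algebra: with the sensor responsivities stacked as the rows of a matrix S, any illuminant spectrum splits into a component in the row space of S (the visible component) and a component in the null space of S (a black illuminant, producing zero sensor response). The sketch below uses made-up responsivity curves purely to demonstrate the decomposition numerically.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 31                                  # number of wavelength samples
S = rng.random((3, n))                  # 3 assumed sensor responsivity curves
E = rng.random(n)                       # an arbitrary illuminant spectrum

# Orthogonal projector onto the row space of S.
P = S.T @ np.linalg.inv(S @ S.T) @ S
E_visible = P @ E                       # component the sensors can see
E_black = E - E_visible                 # black illuminant: lies in null(S)

response_black = S @ E_black            # ~0: invisible to these sensors
response_full = S @ E
response_vis = S @ E_visible            # carries the entire sensor response
```

This makes the abstract's claim concrete: the visible component alone completely characterizes the illuminant's effect on the sensor response vector, and adding any black illuminant changes nothing the sensors can measure.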

  10. Contact CMOS imaging of gaseous oxygen sensor array

    PubMed Central

    Daivasagaya, Daisy S.; Yao, Lei; Yi Yung, Ka; Hajj-Hassan, Mohamad; Cheung, Maurice C.; Chodavarapu, Vamsy P.; Bright, Frank V.

    2014-01-01

    We describe a compact luminescent gaseous oxygen (O2) sensor microsystem based on the direct integration of sensor elements with a polymeric optical filter and placed on a low power complementary metal-oxide semiconductor (CMOS) imager integrated circuit (IC). The sensor operates on the measurement of excited-state emission intensity of O2-sensitive luminophore molecules tris(4,7-diphenyl-1,10-phenanthroline) ruthenium(II) ([Ru(dpp)3]2+) encapsulated within sol–gel derived xerogel thin films. The polymeric optical filter is made with polydimethylsiloxane (PDMS) that is mixed with a dye (Sudan-II). The PDMS membrane surface is molded to incorporate arrays of trapezoidal microstructures that serve to focus the optical sensor signals on to the imager pixels. The molded PDMS membrane is then attached with the PDMS color filter. The xerogel sensor arrays are contact printed on top of the PDMS trapezoidal lens-like microstructures. The CMOS imager uses a 32 × 32 (1024 elements) array of active pixel sensors and each pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. Correlated double sampling circuit, pixel address, digital control and signal integration circuits are also implemented on-chip. The CMOS imager data is read out as a serial coded signal. The CMOS imager consumes a static power of 320 µW and an average dynamic power of 625 µW when operating at 100 Hz sampling frequency and 1.8 V DC. This CMOS sensor system provides a useful platform for the development of miniaturized optical chemical gas sensors. PMID:24493909

  11. Contact CMOS imaging of gaseous oxygen sensor array.

    PubMed

    Daivasagaya, Daisy S; Yao, Lei; Yi Yung, Ka; Hajj-Hassan, Mohamad; Cheung, Maurice C; Chodavarapu, Vamsy P; Bright, Frank V

    2011-10-01

    We describe a compact luminescent gaseous oxygen (O 2 ) sensor microsystem based on the direct integration of sensor elements with a polymeric optical filter and placed on a low power complementary metal-oxide semiconductor (CMOS) imager integrated circuit (IC). The sensor operates on the measurement of excited-state emission intensity of O 2 -sensitive luminophore molecules tris(4,7-diphenyl-1,10-phenanthroline) ruthenium(II) ([Ru(dpp) 3 ] 2+ ) encapsulated within sol-gel derived xerogel thin films. The polymeric optical filter is made with polydimethylsiloxane (PDMS) that is mixed with a dye (Sudan-II). The PDMS membrane surface is molded to incorporate arrays of trapezoidal microstructures that serve to focus the optical sensor signals on to the imager pixels. The molded PDMS membrane is then attached with the PDMS color filter. The xerogel sensor arrays are contact printed on top of the PDMS trapezoidal lens-like microstructures. The CMOS imager uses a 32 × 32 (1024 elements) array of active pixel sensors and each pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. Correlated double sampling circuit, pixel address, digital control and signal integration circuits are also implemented on-chip. The CMOS imager data is read out as a serial coded signal. The CMOS imager consumes a static power of 320 µW and an average dynamic power of 625 µW when operating at 100 Hz sampling frequency and 1.8 V DC. This CMOS sensor system provides a useful platform for the development of miniaturized optical chemical gas sensors.

  12. Electrochromic Molecular Imprinting Sensor for Visual and Smartphone-Based Detections.

    PubMed

    Capoferri, Denise; Álvarez-Diduk, Ruslan; Del Carlo, Michele; Compagnone, Dario; Merkoçi, Arben

    2018-05-01

    Electrochromic effect and molecularly imprinted technology have been used to develop a sensitive and selective electrochromic sensor. The polymeric matrices obtained using the imprinting technology are robust molecular recognition elements and have the potential to mimic natural recognition entities with very high selectivity. The electrochromic behavior of iridium oxide nanoparticles (IrOx NPs) as physicochemical transducer together with a molecularly imprinted polymer (MIP) as recognition layer resulted in a fast and efficient translation of the detection event. The sensor was fabricated using screen-printing technology with indium tin oxide as a transparent working electrode; IrOx NPs were electrodeposited onto the electrode, followed by thermal polymerization of polypyrrole in the presence of the analyte (chlorpyrifos). Two different approaches were used to detect and quantify the pesticide: direct visual detection and smartphone imaging. Application of different oxidation potentials for 10 s resulted in color changes directly related to the concentration of the analyte. For smartphone imaging at a fixed potential, the color intensity of the electrode depended on the concentration of the analyte. The electrochromic sensor detects a highly toxic compound (chlorpyrifos) over a dynamic range from 100 fM to 1 mM. To the best of our knowledge, this is the first work in which a MIP sensor uses the electrochromic properties of IrOx to detect an analyte with high selectivity and sensitivity.

  13. Joint demosaicking and zooming using moderate spectral correlation and consistent edge map

    NASA Astrophysics Data System (ADS)

    Zhou, Dengwen; Dong, Weiming; Chen, Wengang

    2014-07-01

    The recently published joint demosaicking and zooming algorithms for single-sensor digital cameras all overfit the popular Kodak test images, which have been found to have higher spectral correlation than typical color images. Their performance can therefore degrade significantly on other datasets, such as the McMaster test images, which have weak spectral correlation. A new joint demosaicking and zooming algorithm is proposed for the Bayer color filter array (CFA) pattern, in which the edge direction information (edge map) extracted from the raw CFA data is used consistently in both demosaicking and zooming. It also makes moderate use of the spectral correlation between color planes. The experimental results confirm that the proposed algorithm produces an excellent performance on both the Kodak and McMaster datasets in terms of both subjective and objective measures. Our algorithm also has high computational efficiency. It provides a better tradeoff among adaptability, performance, and computational cost compared to the existing algorithms.

  14. Plant features measurements for robotics

    NASA Technical Reports Server (NTRS)

    Miles, Gaines E.

    1989-01-01

    Initial studies of the technical feasibility of using machine vision and color image processing to measure plant health were performed. Wheat plants were grown in nutrient solutions deficient in nitrogen, potassium, and iron. An additional treatment imposed water stress on wheat plants which received a full complement of nutrients. The results for juvenile (less than 2 weeks old) wheat plants show that imaging technology can be used to detect nutrient deficiencies. The relative amount of green color in a leaf declined with increased water stress. The absolute amount of green was higher for nitrogen-deficient leaves compared to the control plants. Relative greenness was lower for iron-deficient leaves, but the absolute green values were higher. The data showed patterns across the leaf consistent with visual symptoms. The development of additional color image processing routines to recognize these patterns would improve the performance of this sensor of plant health.
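The distinction drawn above between relative and absolute greenness can be made concrete with one plausible definition: the green fraction G/(R+G+B) per leaf pixel versus the raw green channel value. This is an illustrative measure consistent with the description, not the study's actual formula, and the RGB values are invented.

```python
import numpy as np

# Two invented leaf pixels: a healthy one and a water-stressed one.
leaf = np.array([[[40.0, 120.0, 30.0]],    # healthy: strongly green
                 [[90.0, 110.0, 70.0]]])   # stressed: green fraction drops
r, g, b = leaf[..., 0], leaf[..., 1], leaf[..., 2]

relative_green = g / (r + g + b)   # greenness relative to total intensity
absolute_green = g                 # raw green channel value

healthy, stressed = relative_green.ravel()
```

Keeping both measures separate matters here, since the study reports cases (e.g. iron deficiency) where relative greenness falls while absolute green values rise.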

  15. Bayesian Model for Matching the Radiometric Measurements of Aerospace and Field Ocean Color Sensors

    PubMed Central

    Salama, Mhd. Suhyb; Su, Zhongbo

    2010-01-01

    A Bayesian model is developed to match aerospace ocean color observations to field measurements and derive the spatial variability of match-up sites. The performance of the model is tested against populations of synthesized spectra and full- and reduced-resolution MERIS data. The model derived the scale difference between a synthesized satellite pixel and point measurements with R² > 0.88 and relative error < 21% in the spectral range from 400 nm to 695 nm. The sub-pixel variabilities of the reduced-resolution MERIS image are derived with less than 12% relative error in heterogeneous regions. The method is generic and applicable to different sensors. PMID:22163615

  16. Adjustment of multi-CCD-chip-color-camera heads

    NASA Astrophysics Data System (ADS)

    Guyenot, Volker; Tittelbach, Guenther; Palme, Martin

    1999-09-01

    The principle of beam-splitter multi-chip cameras consists in splitting an image into multiple images of different spectral ranges and distributing these onto separate black-and-white CCD sensors. The resulting electrical signals from the chips are recombined to produce a high quality color picture on the monitor. Because this principle guarantees higher resolution and sensitivity in comparison to conventional single-chip camera heads, the greater effort is acceptable. Furthermore, multi-chip cameras obtain the complete spectral information for each individual object point, while single-chip systems must rely on interpolation. In a joint project, Fraunhofer IOF and STRACON GmbH (and, in the future, COBRA electronic GmbH) are developing methods for designing the optics and dichroic mirror system of such prism color beam splitter devices. Additionally, techniques and equipment for the alignment and assembly of color-beam-splitter multi-CCD devices based on gluing with UV-curable adhesives have been developed.

  17. Radiation Hardening of Digital Color CMOS Camera-on-a-Chip Building Blocks for Multi-MGy Total Ionizing Dose Environments

    NASA Astrophysics Data System (ADS)

    Goiffon, Vincent; Rolando, Sébastien; Corbière, Franck; Rizzolo, Serena; Chabane, Aziouz; Girard, Sylvain; Baer, Jérémy; Estribeau, Magali; Magnan, Pierre; Paillet, Philippe; Van Uffelen, Marco; Mont Casellas, Laura; Scott, Robin; Gaillardin, Marc; Marcandella, Claude; Marcelot, Olivier; Allanche, Timothé

    2017-01-01

    The Total Ionizing Dose (TID) hardness of digital color Camera-on-a-Chip (CoC) building blocks is explored in the Multi-MGy range using 60Co gamma-ray irradiations. The performances of the following CoC subcomponents are studied: radiation hardened (RH) pixel and photodiode designs, RH readout chain, Color Filter Arrays (CFA) and column RH Analog-to-Digital Converters (ADC). Several radiation hardness improvements are reported (on the readout chain and on dark current). CFA and ADC degradation appears to be very weak at the maximum TID of 6 MGy(SiO2), i.e. 600 Mrad. In the end, this study demonstrates the feasibility of a MGy rad-hard CMOS color digital camera-on-a-chip, illustrated by a color image captured after 6 MGy(SiO2) with no obvious degradation. An original dark current reduction mechanism in irradiated CMOS Image Sensors is also reported and discussed.

  18. A Sensitive Dynamic and Active Pixel Vision Sensor for Color or Neural Imaging Applications.

    PubMed

    Moeys, Diederik Paul; Corradi, Federico; Li, Chenghan; Bamford, Simeon A; Longinotti, Luca; Voigt, Fabian F; Berry, Stewart; Taverni, Gemma; Helmchen, Fritjof; Delbruck, Tobi

    2018-02-01

    Detecting small visual contrasts requires high sensitivity. Event cameras can provide higher dynamic range (DR) and reduce data rate and latency, but most existing event cameras have limited sensitivity. This paper presents the results of a 180-nm Towerjazz CIS process vision sensor called SDAVIS192. It outputs temporal contrast dynamic vision sensor (DVS) events and conventional active pixel sensor frames. The SDAVIS192 improves on previous DAVIS sensors with higher sensitivity for temporal contrast. The temporal contrast thresholds can be set down to 1% for negative changes in logarithmic intensity (OFF events) and down to 3.5% for positive changes (ON events). The achievement is possible through the adoption of an in-pixel preamplification stage. This preamplifier reduces the effective intrascene DR of the sensor (70 dB for OFF and 50 dB for ON), but an automated operating region control allows up to at least 110-dB DR for OFF events. A second contribution of this paper is the development of a characterization methodology for measuring DVS event detection thresholds by incorporating a measure of signal-to-noise ratio (SNR). At an average SNR of 30 dB, the DVS temporal contrast threshold fixed pattern noise is measured to be 0.3%-0.8% temporal contrast. Results comparing monochrome and RGBW color filter array DVS events are presented. The higher sensitivity of SDAVIS192 makes this sensor potentially useful for calcium imaging, as shown in a recording from cultured neurons expressing the calcium-sensitive green fluorescent protein GCaMP6f.

  19. Multispectral multisensor image fusion using wavelet transforms

    USGS Publications Warehouse

    Lemeshewsky, George P.

    1999-01-01

    Fusion techniques can be applied to multispectral and higher spatial resolution panchromatic images to create a composite image that is easier to interpret than the individual images. Wavelet transform-based multisensor, multiresolution fusion (a type of band sharpening) was applied to Landsat thematic mapper (TM) multispectral and coregistered higher-resolution SPOT panchromatic images. The objective was to obtain increased spatial resolution, false color composite products to support the interpretation of land cover types wherein the spectral characteristics of the imagery are preserved to provide the spectral clues needed for interpretation. Since the fusion process should not introduce artifacts, a shift-invariant implementation of the discrete wavelet transform (SIDWT) was used. These results were compared with those using the shift-variant discrete wavelet transform (DWT). Overall, the process includes a hue, saturation, and value color space transform to minimize color changes, and a previously reported point-wise maximum selection rule to combine transform coefficients. The performance of fusion based on the SIDWT and DWT was evaluated with a simulated TM 30-m spatial resolution test image and a higher-resolution reference. Simulated imagery was made by blurring higher-resolution color-infrared photography with the TM sensors' point spread function. The SIDWT-based technique produced imagery with fewer artifacts and lower error between fused images and the full-resolution reference. Image examples with TM and 10-m SPOT panchromatic data illustrate the reduction in artifacts due to the SIDWT-based fusion.
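The core of the scheme described above is an undecimated (shift-invariant) decomposition combined with a point-wise maximum selection rule on detail coefficients. The sketch below is a heavily simplified stand-in: a circular box-filter low-pass/detail split replaces the SIDWT, and synthetic arrays replace the TM/SPOT bands, so it illustrates only the selection rule and the shift-invariance property.

```python
import numpy as np

def box_blur(img):
    """3x3 undecimated mean filter (circular boundary, shift-invariant)."""
    out = np.zeros_like(img)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += np.roll(np.roll(img, di, axis=0), dj, axis=1)
    return out / 9.0

def fuse(a, b):
    """Fuse two bands: average low-pass, max-absolute rule on details."""
    low_a, low_b = box_blur(a), box_blur(b)
    det_a, det_b = a - low_a, b - low_b
    detail = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    return (low_a + low_b) / 2.0 + detail

rng = np.random.default_rng(4)
pan = rng.random((32, 32))   # stand-in for the sharp panchromatic band
ms = box_blur(pan)           # stand-in for a blurrier multispectral band
fused = fuse(ms, pan)
```

Because every operation here is shift-equivariant, shifting both inputs shifts the output identically; this is the property the decimated DWT lacks, which is why the DWT variant introduces artifacts near edges.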

  20. Color (RGB) imaging laser radar

    NASA Astrophysics Data System (ADS)

    Ferri De Collibus, M.; Bartolini, L.; Fornetti, G.; Francucci, M.; Guarneri, M.; Nuvoli, M.; Paglia, E.; Ricci, R.

    2008-03-01

    We present a new color (RGB) imaging 3D laser scanner prototype recently developed at ENEA (Italy). The sensor is based on the AM range-finding technique and uses three distinct beams (650 nm, 532 nm and 450 nm, respectively) in monostatic configuration. During a scan the laser beams are simultaneously swept over the target, yielding range and three separate channels (R, G and B) of reflectance information for each sampled point. This information, organized in range and reflectance images, is then processed to produce very high definition color pictures and faithful, natively colored 3D models. Notable characteristics of the system are the absence of shadows in the acquired reflectance images - due to the system's monostatic setup and intrinsic self-illumination capability - and high noise rejection, achieved by using a narrow field of view and interferential filters. The system is also very accurate in range determination (relative accuracy better than 10^-4) at distances up to several meters. These unprecedented features make the system particularly suited to applications in the domain of cultural heritage preservation, where it could be used by conservators for examining in detail the state of degradation of frescoed walls, monuments and paintings, even at several meters of distance and in hardly accessible locations. After providing some theoretical background, we describe the general architecture and operation modes of the color 3D laser scanner, report and discuss first experimental results, and compare high-definition color images produced by the instrument with photographs of the same subjects taken with a Nikon D70 digital camera.

  1. A mixture model for robust registration in Kinect sensor

    NASA Astrophysics Data System (ADS)

    Peng, Li; Zhou, Huabing; Zhu, Shengguo

    2018-03-01

    The Microsoft Kinect sensor has been widely used in many applications, but it suffers from low registration precision between the color image and the depth image. In this paper, we present a robust method to improve registration precision using a mixture model that can handle multiple images with a nonparametric model. We impose nonparametric geometrical constraints on the correspondence, as a prior distribution, in a reproducing kernel Hilbert space (RKHS). Estimation is performed by the EM algorithm, which, by also estimating the variance of the prior model, is able to obtain good estimates. We illustrate the proposed method on a publicly available dataset. The experimental results show that our approach outperforms the baseline methods.

  2. Fires in Philippines

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Roughly a dozen fires (red pixels) dotted the landscape on the main Philippine island of Luzon on April 1, 2002. This true-color image was acquired by the Moderate-resolution Imaging Spectroradiometer (MODIS), flying aboard NASA's Terra spacecraft. Please note that the high-resolution scene provided here is 500 meters per pixel. For a copy of this scene at the sensor's fullest resolution, visit the MODIS Rapidfire site.

  3. Project Report: Reducing Color Rivalry in Imagery for Conjugated Multiple Bandpass Filter Based Stereo Endoscopy

    NASA Technical Reports Server (NTRS)

    Ream, Allen

    2011-01-01

    A pair of conjugated multiple bandpass filters (CMBF) can be used to create spatially separated pupils in a traditional lens and imaging sensor system, allowing for the passive capture of stereo video. This method is especially useful for surgical endoscopy, where smaller cameras are needed to provide ample room for manipulating tools while also granting improved visualizations of scene depth. The significant issue in this process is that, due to the complementary nature of the filters, the colors seen through each filter do not match each other, and also differ from colors as seen under a white illumination source. A color correction model was implemented that included optimized filter selection, such that the degree of necessary post-processing correction was minimized, and a chromatic adaptation transformation that attempted to correct the tristimulus values of the imaged colors based on the principle of color constancy. Due to fabrication constraints, only dual bandpass filters were feasible. The theoretical average color error after correction between these filters was still above the fusion limit, meaning that rivalry conditions are possible during viewing. This error can be minimized further by designing the filters for a subset of colors corresponding to specific working environments.

  4. Digital camera auto white balance based on color temperature estimation clustering

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Liu, Peng; Liu, Yuling; Yu, Feihong

    2010-11-01

    Auto white balance (AWB) is an important technique for digital cameras. The human visual system can recognize the original color of an object in a scene illuminated by a light source whose color temperature differs from that of D65, the standard daylight illuminant. Recorded images or video clips, however, can only record the information incident on the sensor, so they will appear different from the real scene observed by a human. Auto white balance is a technique to solve this problem. Traditional methods such as the gray-world assumption and white-point estimation may fail for scenes with large color patches. In this paper, an AWB method based on color temperature estimation clustering is presented and discussed. First, the method defines a list of several lighting conditions common in daily life, represented by their color temperatures, together with thresholds for each color temperature that determine whether a light source belongs to that illumination class. Second, the image to be white balanced is divided into N blocks (N is determined empirically); for each block, the gray-world assumption is used to calculate the color cast, from which the color temperature of that block is estimated. Third, each calculated color temperature is compared with the color temperatures in the given illumination list; if the color temperature of a block is not within any of the thresholds in the list, that block is discarded. Fourth, a majority vote is taken over the remaining blocks, and the color temperature with the most blocks is taken as the color temperature of the light source. Experimental results show that the proposed method works well for most commonly used light sources. The color casts are removed and the final images look natural.
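    The block-wise voting procedure can be sketched as below. The illuminant list, its (R/G, B/G) signatures, and the threshold are illustrative placeholders, not the paper's calibrated values.

```python
import numpy as np

# hypothetical illuminant list: color temperature (K) -> expected (R/G, B/G)
# gray-world averages; the numbers are illustrative only
ILLUMINANTS = {2850: (1.45, 0.55), 4100: (1.15, 0.80),
               5500: (1.00, 1.00), 6500: (0.90, 1.15)}
THRESH = 0.12   # per-ratio tolerance, determined empirically in practice

def estimate_illuminant(img, n=8):
    """Estimate the scene color temperature by block-wise majority vote."""
    h, w, _ = img.shape
    votes = []
    for by in range(n):
        for bx in range(n):
            blk = img[by*h//n:(by+1)*h//n, bx*w//n:(bx+1)*w//n]
            r, g, b = blk.reshape(-1, 3).mean(axis=0)  # gray-world color cast
            if g == 0:
                continue
            rg, bg = r / g, b / g
            # keep the block only if it is near some known illuminant
            for cct, (erg, ebg) in ILLUMINANTS.items():
                if abs(rg - erg) < THRESH and abs(bg - ebg) < THRESH:
                    votes.append(cct)
                    break
    if not votes:
        return None                 # no block matched any illuminant
    return max(set(votes), key=votes.count)   # majority vote
```

    The white-balance gains would then be the reciprocals of the winning illuminant's channel ratios.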

  5. Color calibration and color-managed medical displays: does the calibration method matter?

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Rehm, Kelly; Silverstein, Louis D.; Dallas, William J.; Fan, Jiahua; Krupinski, Elizabeth A.

    2010-02-01

    Our laboratory has investigated the efficacy of a suite of color calibration and monitor profiling packages which employ a variety of color measurement sensors. Each of the methods computes gamma correction tables for the red, green and blue color channels of a monitor that attempt to: a) match a desired luminance range and tone reproduction curve; and b) maintain a target neutral point across the range of grey values. All of the methods examined here produce International Color Consortium (ICC) profiles that describe the color rendering capabilities of the monitor after calibration. Color profiles incorporate a transfer matrix that establishes the relationship between RGB driving levels and the International Commission on Illumination (CIE) XYZ (tristimulus) values of the resulting on-screen color; the matrix is developed by displaying color patches of known RGB values on the monitor and measuring the tristimulus values with a sensor. The number and chromatic distribution of color patches varies across methods and is usually not under user control. In this work we examine the effect of employing differing calibration and profiling methods on the rendition of color images. A series of color patches encoded in sRGB color space were presented on the monitor using color-management software that utilized the ICC profile produced by each method. The patches were displayed on the calibrated monitor and measured with a Minolta CS200 colorimeter. Differences in intended and achieved luminance and chromaticity were computed using the CIE DE2000 color-difference metric, in which a value of ΔE = 1 is generally considered to be approximately one just noticeable difference (JND) in color. We observed between one and 17 JNDs for individual colors, depending on calibration method and target.
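    The transfer matrix mentioned above can be derived from patch measurements by a least-squares fit. This is a minimal sketch assuming the RGB driving levels have already been linearized through the per-channel gamma tables; the function names are illustrative, not taken from any of the profiling packages.

```python
import numpy as np

def fit_transfer_matrix(rgb_lin, xyz_meas):
    """Fit M such that xyz ≈ M @ rgb for each displayed patch.

    rgb_lin:  (N, 3) linearized RGB driving levels of the patches
    xyz_meas: (N, 3) CIE XYZ tristimulus values measured with the sensor
    """
    M, *_ = np.linalg.lstsq(rgb_lin, xyz_meas, rcond=None)
    return M.T                      # rows map to X, Y, Z

def rgb_to_xyz(rgb_lin, M):
    # predict on-screen tristimulus values from driving levels
    return rgb_lin @ M.T
```

    With three pure-primary patches plus white, the fit is exact for an additive display; profiling packages use more patches to average out measurement noise.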

  6. Real Data and Rapid Results: Ocean Color Data Analysis with Giovanni (GES DISC Interactive Online Visualization and ANalysis Infrastructure)

    NASA Technical Reports Server (NTRS)

    Acker, J. G.; Leptoukh, G.; Kempler, S.; Gregg, W.; Berrick, S.; Zhu, T.; Liu, Z.; Rui, H.; Shen, S.

    2004-01-01

    The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) has taken a major step addressing the challenge of using archived Earth Observing System (EOS) data for regional or global studies by developing an infrastructure with a World Wide Web interface which allows online, interactive data analysis: the GES DISC Interactive Online Visualization and ANalysis Infrastructure, or "Giovanni." Giovanni provides a data analysis environment that is largely independent of the underlying data file format. The Ocean Color Time-Series Project has created an initial implementation of Giovanni using monthly Standard Mapped Image (SMI) data products from the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) mission. Giovanni users select geophysical parameters, and the geographical region and time period of interest. The system rapidly generates a graphical or ASCII numerical data output. Currently available output options are: Area plot (averaged or accumulated over any available data period for any rectangular area); Time plot (time series averaged over any rectangular area); Hovmöller plots (image view of any longitude-time and latitude-time cross sections); ASCII output for all plot types; and area plot animations. Future plans include correlation plots, output formats compatible with Geographic Information Systems (GIS), and higher temporal resolution data. The Ocean Color Time-Series Project will produce sensor-independent ocean color data beginning with the Coastal Zone Color Scanner (CZCS) mission and extending through SeaWiFS and Moderate Resolution Imaging Spectroradiometer (MODIS) data sets, and will enable incorporation of Visible/Infrared Imaging Radiometer Suite (VIIRS) data, which will be added to Giovanni.
The first phase of Giovanni will also include tutorials demonstrating the use of Giovanni and collaborative assistance in the development of research projects using the SeaWiFS and Ocean Color Time-Series Project data in the online Laboratory for Ocean Color Users (LOCUS). The synergy of Giovanni with high-quality ocean color data provides users with the ability to investigate a variety of important oceanic phenomena, such as coastal primary productivity related to pelagic fisheries, seasonal patterns and interannual variability, interdependence of atmospheric dust aerosols and harmful algal blooms, and the potential effects of climate change on oceanic productivity.

  7. A Low Power, Parallel Wearable Multi-Sensor System for Human Activity Evaluation.

    PubMed

    Li, Yuecheng; Jia, Wenyan; Yu, Tianjian; Luan, Bo; Mao, Zhi-Hong; Zhang, Hong; Sun, Mingui

    2015-04-01

    In this paper, the design of a low power heterogeneous wearable multi-sensor system, built with Zynq System-on-Chip (SoC), for human activity evaluation is presented. The powerful data processing capability and flexibility of this SoC represent significant improvements over our previous ARM based system designs. The new system captures and compresses multiple color images and sensor data simultaneously. Several strategies are adopted to minimize power consumption. Our wearable system provides a new tool for the evaluation of human activity, including diet, physical activity and lifestyle.

  8. The Use of Color Sensors for Spectrographic Calibration

    NASA Astrophysics Data System (ADS)

    Thomas, Neil B.

    2018-04-01

    The wavelength calibration of spectrographs is an essential but challenging task in many disciplines. Calibration is traditionally accomplished by imaging the spectrum of a light source containing features that are known to appear at certain wavelengths and mapping them to their location on the sensor. This is typically required in conjunction with each scientific observation to account for mechanical and optical variations of the instrument over time, which may span years for certain projects. The method presented here investigates the use of color itself, instead of spectral features, to calibrate a spectrograph. The primary advantage of such a calibration is that any broad-spectrum light source, such as the sky or an incandescent bulb, is suitable. This method allows for calibration using the full optical pathway of the instrument instead of incorporating separate calibration equipment that may introduce errors. This paper focuses on the potential for color calibration in the field of radial velocity astronomy, in which instruments must be finely calibrated for long periods of time to detect tiny Doppler wavelength shifts. This method is not restricted to radial velocity, however, and may find application in any field requiring calibrated spectrometers such as sea water analysis, cellular biology, chemistry, atmospheric studies, and so on. This paper demonstrates that color sensors have the potential to provide calibration with greatly reduced complexity.

  9. A solid colorimetric sensor for the analysis of amphetamine-like street samples.

    PubMed

    Argente-García, A; Jornet-Martínez, N; Herráez-Hernández, R; Campíns-Falcó, P

    2016-11-02

    A solid sensor obtained by embedding 1,2-naphthoquinone-4-sulfonate (NQS) into a polydimethylsiloxane/tetraethylorthosilicate/silicon dioxide nanoparticle composite has been developed to identify and determine amphetamine (AMP), methamphetamine (MAMP), 3,4-methylenedioxymethamphetamine (MDMA) and 3,4-methylenedioxyamphetamine (MDA). The analytes are derivatized inside the composite for 10 min to create a colored product, which can then be quantified by measuring the diffuse reflectance or the color intensity after processing the digitized image. Satisfactory limits of detection (0.002-0.005 g mL⁻¹) and relative standard deviations (<10%) have been achieved. The proposed kit has been successfully validated and applied to the analysis of amphetamine-like street drug samples. The kit allows in-situ screening of the mentioned illicit drugs owing to its simplicity, rapidity and portability, with excellent sensor stability and very low cost.

  10. Advances in image compression and automatic target recognition; Proceedings of the Meeting, Orlando, FL, Mar. 30, 31, 1989

    NASA Technical Reports Server (NTRS)

    Tescher, Andrew G. (Editor)

    1989-01-01

    Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.

  11. Restoration of out-of-focus images based on circle of confusion estimate

    NASA Astrophysics Data System (ADS)

    Vivirito, Paolo; Battiato, Sebastiano; Curti, Salvatore; La Cascia, M.; Pirrone, Roberto

    2002-11-01

    In this paper a new method for fast out-of-focus blur estimation and restoration is proposed. It is suitable for CFA (color filter array) images acquired by typical CCD/CMOS sensors. The method is based on the analysis of a single image and consists of two steps: 1) out-of-focus blur estimation via Bayer pattern analysis; 2) image restoration. Blur estimation is based on a block-wise edge detection technique. This edge detection is carried out on the green pixels of the CFA sensor image, also called the Bayer pattern. Once the blur level has been estimated, the image is restored through the application of a new inverse filtering technique. This algorithm yields sharp images while reducing ringing and crispening artifacts, involving a wider region of frequencies. Experimental results show the effectiveness of the method, both subjectively and numerically, by comparison with other techniques found in the literature.
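    Step 1 (block-wise edge analysis on the green Bayer samples) can be sketched as below. The RGGB phase, the block size, and the simple gradient statistic are assumptions for illustration; the paper's actual estimator maps such edge statistics to a circle-of-confusion radius before inverse filtering.

```python
import numpy as np

def green_channel(cfa):
    """Subsample the green sites of an RGGB Bayer pattern (half horizontal res).

    Assumes green sits at odd columns of even rows and even columns of
    odd rows; the phase is an assumption for illustration.
    """
    g = np.empty((cfa.shape[0], cfa.shape[1] // 2))
    g[0::2] = cfa[0::2, 1::2]
    g[1::2] = cfa[1::2, 0::2]
    return g

def block_edge_strength(cfa, bs=4):
    # block-wise edge measure on the green plane: mean |horizontal gradient|;
    # a low maximum over all blocks suggests stronger out-of-focus blur
    g = green_channel(cfa)
    grad = np.abs(np.diff(g, axis=1))
    h, w = grad.shape
    return [grad[y:y+bs, x:x+bs].mean()
            for y in range(0, h - bs + 1, bs)
            for x in range(0, w - bs + 1, bs)]
```

    A perfectly flat region scores zero, while a sharp step produces a high score; in practice the strongest edges in the image bound the blur estimate.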

  12. G-Channel Restoration for RWB CFA with Double-Exposed W Channel

    PubMed Central

    Park, Chulhee; Song, Ki Sun; Kang, Moon Gi

    2017-01-01

    In this paper, we propose a green (G)-channel restoration for a red–white–blue (RWB) color filter array (CFA) image sensor using the dual sampling technique. By using white (W) pixels instead of G pixels, the RWB CFA provides high-sensitivity imaging and an improved signal-to-noise ratio compared to the Bayer CFA. However, owing to this high sensitivity, the W pixel values become rapidly over-saturated before the red–blue (RB) pixel values reach the appropriate levels. Because the missing G color information included in the W channel cannot be restored with a saturated W, multiple captures with dual sampling are necessary to solve this early W-pixel saturation problem. Each W pixel has a different exposure time when compared to those of the R and B pixels, because the W pixels are double-exposed. Therefore, a RWB-to-RGB color conversion method is required in order to restore the G color information, using a double-exposed W channel. The proposed G-channel restoration algorithm restores G color information from the W channel by considering the energy difference caused by the different exposure times. Using the proposed method, the RGB full-color image can be obtained while maintaining the high-sensitivity characteristic of the W pixels. PMID:28165425

  13. G-Channel Restoration for RWB CFA with Double-Exposed W Channel.

    PubMed

    Park, Chulhee; Song, Ki Sun; Kang, Moon Gi

    2017-02-05

    In this paper, we propose a green (G)-channel restoration for a red-white-blue (RWB) color filter array (CFA) image sensor using the dual sampling technique. By using white (W) pixels instead of G pixels, the RWB CFA provides high-sensitivity imaging and an improved signal-to-noise ratio compared to the Bayer CFA. However, owing to this high sensitivity, the W pixel values become rapidly over-saturated before the red-blue (RB) pixel values reach the appropriate levels. Because the missing G color information included in the W channel cannot be restored with a saturated W, multiple captures with dual sampling are necessary to solve this early W-pixel saturation problem. Each W pixel has a different exposure time when compared to those of the R and B pixels, because the W pixels are double-exposed. Therefore, a RWB-to-RGB color conversion method is required in order to restore the G color information, using a double-exposed W channel. The proposed G-channel restoration algorithm restores G color information from the W channel by considering the energy difference caused by the different exposure times. Using the proposed method, the RGB full-color image can be obtained while maintaining the high-sensitivity characteristic of the W pixels.
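    The color conversion at the core of the method above can be sketched as follows, assuming an idealized linear sensor with W ≈ R + G + B at equal exposure and no saturated W pixels; the paper's algorithm additionally accounts for spectral energy differences and the early W-pixel saturation it describes.

```python
import numpy as np

def restore_green(W, R, B, exposure_ratio=2.0):
    """Sketch: recover the G channel from a double-exposed W channel.

    W, R, B are same-shape arrays; exposure_ratio is the W exposure time
    relative to the R/B exposure (2.0 for a double-exposed W).
    """
    W_eq = W / exposure_ratio        # undo the longer W exposure
    G = W_eq - R - B                 # spectral decomposition W = R + G + B
    return np.clip(G, 0.0, None)     # clamp negatives caused by noise
```

    In this idealized setting the energy difference between exposures reduces to a single scale factor; the real conversion must handle clipped W values, where the subtraction above is no longer valid.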

  14. A Coded Structured Light System Based on Primary Color Stripe Projection and Monochrome Imaging

    PubMed Central

    Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano

    2013-01-01

    Coded Structured Light techniques represent one of the most attractive research areas within the field of optical metrology. The coding procedures are typically based on projecting either a single pattern or a temporal sequence of patterns to provide 3D surface data. In this context, multi-slit or stripe colored patterns may be used with the aim of reducing the number of projected images. However, color imaging sensors require the use of calibration procedures to address crosstalk effects between different channels and to reduce the chromatic aberrations. In this paper, a Coded Structured Light system has been developed by integrating a color stripe projector and a monochrome camera. A discrete coding method, which combines spatial and temporal information, is generated by sequentially projecting and acquiring a small set of fringe patterns. The method allows the concurrent measurement of geometrical and chromatic data by exploiting the benefits of using a monochrome camera. The proposed methodology has been validated by measuring nominal primitive geometries and free-form shapes. The experimental results have been compared with those obtained by using a time-multiplexing gray code strategy. PMID:24129018

  15. A coded structured light system based on primary color stripe projection and monochrome imaging.

    PubMed

    Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano

    2013-10-14

    Coded Structured Light techniques represent one of the most attractive research areas within the field of optical metrology. The coding procedures are typically based on projecting either a single pattern or a temporal sequence of patterns to provide 3D surface data. In this context, multi-slit or stripe colored patterns may be used with the aim of reducing the number of projected images. However, color imaging sensors require the use of calibration procedures to address crosstalk effects between different channels and to reduce the chromatic aberrations. In this paper, a Coded Structured Light system has been developed by integrating a color stripe projector and a monochrome camera. A discrete coding method, which combines spatial and temporal information, is generated by sequentially projecting and acquiring a small set of fringe patterns. The method allows the concurrent measurement of geometrical and chromatic data by exploiting the benefits of using a monochrome camera. The proposed methodology has been validated by measuring nominal primitive geometries and free-form shapes. The experimental results have been compared with those obtained by using a time-multiplexing gray code strategy.

  16. Multi-Feature Classification of Multi-Sensor Satellite Imagery Based on Dual-Polarimetric Sentinel-1A, Landsat-8 OLI, and Hyperion Images for Urban Land-Cover Classification.

    PubMed

    Zhou, Tao; Li, Zhaofu; Pan, Jianjun

    2018-01-27

    This paper focuses on evaluating the ability and contribution of backscatter intensity, texture, coherence, and color features extracted from Sentinel-1A data for urban land cover classification, and on comparing different multi-sensor land cover mapping methods to improve classification accuracy. Both Landsat-8 OLI and Hyperion images were also acquired, in combination with Sentinel-1A data, to explore the potential of different multi-sensor urban land cover mapping methods to improve classification accuracy. The classification was performed using a random forest (RF) method. The results showed that the optimal window size for the combination of all texture features was 9 × 9, and the optimal window size differed for each individual texture feature. Of the four feature types, the texture features contributed the most to the classification, followed by the coherence and backscatter intensity features; the color features had the least impact on the urban land cover classification. Satisfactory classification results can be obtained using only the combination of texture and coherence features, with an overall accuracy up to 91.55% and a kappa coefficient up to 0.8935. Among all combinations of Sentinel-1A-derived features, the combination of all four features had the best classification result. Multi-sensor urban land cover mapping obtained higher classification accuracy. The combination of Sentinel-1A and Hyperion data achieved higher classification accuracy than the combination of Sentinel-1A and Landsat-8 OLI images, with an overall accuracy of up to 99.12% and a kappa coefficient up to 0.9889. When Sentinel-1A data were added to Hyperion images, the overall accuracy and kappa coefficient increased by 4.01% and 0.0519, respectively.

  17. Genetically encoded calcium indicators for multi-color neural activity imaging and combination with optogenetics

    PubMed Central

    Akerboom, Jasper; Carreras Calderón, Nicole; Tian, Lin; Wabnig, Sebastian; Prigge, Matthias; Tolö, Johan; Gordus, Andrew; Orger, Michael B.; Severi, Kristen E.; Macklin, John J.; Patel, Ronak; Pulver, Stefan R.; Wardill, Trevor J.; Fischer, Elisabeth; Schüler, Christina; Chen, Tsai-Wen; Sarkisyan, Karen S.; Marvin, Jonathan S.; Bargmann, Cornelia I.; Kim, Douglas S.; Kügler, Sebastian; Lagnado, Leon; Hegemann, Peter; Gottschalk, Alexander; Schreiter, Eric R.; Looger, Loren L.

    2013-01-01

    Genetically encoded calcium indicators (GECIs) are powerful tools for systems neuroscience. Here we describe red, single-wavelength GECIs, “RCaMPs,” engineered from circular permutation of the thermostable red fluorescent protein mRuby. High-resolution crystal structures of mRuby, the red sensor RCaMP, and the recently published red GECI R-GECO1 give insight into the chromophore environments of the Ca2+-bound state of the sensors and the engineered protein domain interfaces of the different indicators. We characterized the biophysical properties and performance of RCaMP sensors in vitro and in vivo in Caenorhabditis elegans, Drosophila larvae, and larval zebrafish. Further, we demonstrate 2-color calcium imaging both within the same cell (registering mitochondrial and somatic [Ca2+]) and between two populations of cells: neurons and astrocytes. Finally, we perform integrated optogenetics experiments, wherein neural activation via channelrhodopsin-2 (ChR2) or a red-shifted variant, and activity imaging via RCaMP or GCaMP, are conducted simultaneously, with the ChR2/RCaMP pair providing independently addressable spectral channels. Using this paradigm, we measure calcium responses of naturalistic and ChR2-evoked muscle contractions in vivo in crawling C. elegans. We systematically compare the RCaMP sensors to R-GECO1, in terms of action potential-evoked fluorescence increases in neurons, photobleaching, and photoswitching. R-GECO1 displays higher Ca2+ affinity and larger dynamic range than RCaMP, but exhibits significant photoactivation with blue and green light, suggesting that integrated channelrhodopsin-based optogenetics using R-GECO1 may be subject to artifact. Finally, we create and test blue, cyan, and yellow variants engineered from GCaMP by rational design. This engineered set of chromatic variants facilitates new experiments in functional imaging and optogenetics. PMID:23459413

  18. Fusion of 3D laser scanner and depth images for obstacle recognition in mobile applications

    NASA Astrophysics Data System (ADS)

    Budzan, Sebastian; Kasprzyk, Jerzy

    2016-02-01

    The problem of obstacle detection and recognition or, generally, scene mapping is one of the most investigated problems in computer vision, especially in mobile applications. In this paper a fused optical system using depth information with color images gathered from the Microsoft Kinect sensor and 3D laser range scanner data is proposed for obstacle detection and ground estimation in real-time mobile systems. The algorithm consists of feature extraction in the laser range images, processing of the depth information from the Kinect sensor, fusion of the sensor information, and classification of the data into two separate categories: road and obstacle. Exemplary results are presented and it is shown that fusion of information gathered from different sources increases the effectiveness of the obstacle detection in different scenarios, and it can be used successfully for road surface mapping.

  19. Novel sensor for color control in solid state lighting applications

    NASA Astrophysics Data System (ADS)

    Gourevitch, Alex; Thurston, Thomas; Singh, Rajiv; Banachowicz, Bartosz; Korobov, Vladimir; Drowley, Cliff

    2010-02-01

    LED wavelength and luminosity shifts due to temperature, dimming, aging, and binning uncertainty can cause large color errors in open-loop light-mixing illuminators. Multispectral color light sensors combined with feedback circuits can compensate for these LED shifts. Typical color light sensor design variables include the choice of light-sensing material, filter configuration, and read-out circuitry. Cypress Semiconductor has designed and prototyped a color sensor chip that consists of photodiode arrays connected to an I/F (current-to-frequency) converter. This architecture has been chosen to achieve high dynamic range (~100 dB) and provide flexibility for tailoring the sensor response. Several different optical filter configurations were evaluated in this prototype. The color-sensor chip was incorporated into an RGB light color mixing system with closed-loop optical feedback. Color mixing accuracy was determined by calculating the difference between (u',v') set point values and CIE coordinates measured with a reference colorimeter. A typical color precision Δu'v' less than 0.0055 has been demonstrated over a wide range of colors, a temperature range of 50 °C, and light dimming up to 80%.
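    The Δu'v' accuracy figure above is the Euclidean distance between set point and measurement in the CIE 1976 uniform chromaticity scale. A minimal sketch of the computation (standard colorimetric formulas, not code from the paper):

```python
def uv_prime(X, Y, Z):
    # CIE 1976 UCS chromaticity coordinates from tristimulus values
    d = X + 15.0 * Y + 3.0 * Z
    return 4.0 * X / d, 9.0 * Y / d

def delta_uv(set_point, measured):
    # Euclidean distance in (u', v'): the color-mixing error metric
    du = set_point[0] - measured[0]
    dv = set_point[1] - measured[1]
    return (du * du + dv * dv) ** 0.5
```

    For equal-energy white (X = Y = Z), the coordinates reduce to u' = 4/19 and v' = 9/19, a convenient sanity check.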

  20. Hyperspectral image reconstruction using RGB color for foodborne pathogen detection on agar plates

    NASA Astrophysics Data System (ADS)

    Yoon, Seung-Chul; Shin, Tae-Sung; Park, Bosoon; Lawrence, Kurt C.; Heitschmidt, Gerald W.

    2014-03-01

    This paper reports the latest development of a color vision technique for detecting colonies of foodborne pathogens grown on agar plates with a hyperspectral image classification model that was developed using full hyperspectral data. The hyperspectral classification model depended on reflectance spectra measured in the visible and near-infrared spectral range from 400 to 1,000 nm (473 narrow spectral bands). Multivariate regression methods were used to estimate and predict hyperspectral data from RGB color values. The six representative non-O157 Shiga toxin-producing Escherichia coli (STEC) serogroups (O26, O45, O103, O111, O121, and O145) were grown on Rainbow agar plates. A line-scan pushbroom hyperspectral image sensor was used to scan 36 agar plates, each grown with pure STEC colonies. The 36 hyperspectral images of the agar plates were divided in half to create training and test sets. The mean R-squared value for hyperspectral image estimation was about 0.98 in the spectral range between 400 and 700 nm for the linear, quadratic and cubic polynomial regression models, and the detection accuracy of the hyperspectral image classification model, using principal component analysis and k-nearest neighbors, was up to 92% for the test set (99% with the original hyperspectral images). Thus, the results of the study suggested that color-based detection may be viable as a multispectral imaging solution without much loss of prediction accuracy compared to hyperspectral imaging.
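    The RGB-to-spectrum regression can be sketched as a polynomial least-squares fit. The feature expansion and function names below are illustrative of the linear/quadratic models the paper evaluates, not the authors' exact implementation.

```python
import numpy as np

def poly_features(rgb, degree=2):
    # expand (N, 3) RGB values into polynomial terms up to `degree`
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    cols = [np.ones_like(r), r, g, b]
    if degree >= 2:
        cols += [r * r, g * g, b * b, r * g, r * b, g * b]
    return np.stack(cols, axis=1)

def fit_spectral_model(rgb, spectra, degree=2):
    """Least-squares map from expanded RGB features to per-band reflectance.

    rgb:     (N, 3) color values; spectra: (N, n_bands) measured reflectance
    returns: (n_features, n_bands) coefficient matrix
    """
    A = poly_features(rgb, degree)
    W, *_ = np.linalg.lstsq(A, spectra, rcond=None)
    return W

def predict_spectra(rgb, W, degree=2):
    # estimate a full spectrum for each RGB sample
    return poly_features(rgb, degree) @ W
```

    With many more bands than RGB channels the fit is heavily underdetermined per sample, which is why the paper reports accuracy mainly over the visible range where RGB carries most of the information.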

  1. An approach for combining airborne LiDAR and high-resolution aerial color imagery using Gaussian processes

    NASA Astrophysics Data System (ADS)

    Liu, Yansong; Monteiro, Sildomar T.; Saber, Eli

    2015-10-01

    Changes in vegetation cover, building construction, road networks, and traffic conditions caused by urban expansion affect the human habitat as well as the natural environment in rapidly developing cities. It is crucial to assess these changes and respond accordingly by identifying man-made and natural structures with accurate classification algorithms. With the increasing use of multi-sensor remote sensing systems, researchers are able to obtain a more complete description of the scene of interest, and by utilizing multi-sensor data the accuracy of classification algorithms can be improved. In this paper, we propose a method for combining 3D LiDAR point clouds and high-resolution color images to classify urban areas using Gaussian processes (GP). GP classification is a powerful non-parametric classification method that yields probabilistic classification results and makes predictions in a way that addresses the uncertainty of the real world. In this paper, we attempt to identify man-made and natural objects in urban areas, including buildings, roads, trees, grass, water, and vehicles. LiDAR features are derived from the 3D point clouds, and the spatial and color features are extracted from RGB images. For classification, we use the Laplace approximation for GP binary classification on the new combined feature space. Multiclass classification is implemented using a one-vs-all binary classification strategy. Results from support vector machine (SVM) and logistic regression (LR) classifiers are also provided for comparison. Our experiments show a clear improvement in classification results when the two sensors are combined instead of used separately. We also found that the GP approach handles uncertainty in the classification result without compromising accuracy compared to SVM, which is considered a state-of-the-art classification method.
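The one-vs-all strategy named above is generic: train one binary classifier per class and take the highest score at prediction time. The sketch below illustrates it with a simple logistic model on synthetic fused features, standing in for the Laplace-approximated GP classifier; the data and class layout are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def fit_binary(X, y, steps=500, lr=0.1):
    """Logistic regression by gradient descent; returns the weight vector."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Three well-separated synthetic classes in a 2-D "fused feature" space
# (imagine one LiDAR-derived axis and one color-derived axis).
centers = np.array([[0, 0], [5, 0], [0, 5]], float)
X = np.vstack([c + rng.normal(0, 0.5, (30, 2)) for c in centers])
y = np.repeat([0, 1, 2], 30)
Xb = np.column_stack([X, np.ones(len(X))])          # append bias term

# One "class k vs rest" model per class; decide by the highest score.
W = np.array([fit_binary(Xb, (y == k).astype(float)) for k in range(3)])
pred = np.argmax(Xb @ W.T, axis=1)
acc = (pred == y).mean()
print(acc)
```

A GP classifier would replace `fit_binary` with Laplace-approximated GP inference, yielding calibrated class probabilities rather than raw scores.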

  2. Convert a low-cost sensor to a colorimeter using an improved regression method

    NASA Astrophysics Data System (ADS)

    Wu, Yifeng

    2008-01-01

    Closed-loop color calibration is a process for maintaining consistent color reproduction on color printers. To perform closed-loop color calibration, a pre-designed color target is printed and automatically measured by a color-measuring instrument. A low-cost sensor has been embedded in the printer to perform the color measurement, and a series of sensor calibration and color conversion methods have been developed, with the purpose of obtaining accurate colorimetric measurements from the data captured by the low-cost sensor. To achieve high colorimetric accuracy, we need to carefully calibrate the sensor and minimize all possible errors during the color conversion. After comparing several classical color conversion methods, a regression-based color conversion method was selected. Regression is a powerful method for estimating color conversion functions, but the main difficulty in using it is finding an appropriate function to describe the relationship between the input and output data. In this paper, we propose using 1D pre-linearization tables to improve the linearity between the input sensor measurements and the output colorimetric data. This increases the accuracy of the regression method and thereby improves the accuracy of the color conversion.
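A minimal sketch of the two-stage idea described above, assuming a gamma-like sensor nonlinearity: a 1D pre-linearization table (a lookup with linear interpolation) straightens the sensor response, after which a plain linear regression maps to a synthetic colorimetric target. All data here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assume the raw sensor responds nonlinearly (gamma-like) to the stimulus.
raw = np.sort(rng.random(50))
stimulus = raw ** 2.2

def prelinearize(x):
    """1D pre-linearization table: interpolate measured (raw, stimulus) pairs."""
    return np.interp(x, raw, stimulus)

# After linearization, a simple regression maps sensor values to the
# colorimetric quantity (here a made-up linear target).
lin = prelinearize(raw)
target = 3.0 * stimulus + 0.5
A = np.column_stack([lin, np.ones_like(lin)])
coef, *_ = np.linalg.lstsq(A, target, rcond=None)

pred = A @ coef
err = np.abs(pred - target).max()
print(err)
```

Because the table removes the nonlinearity first, a low-order regression suffices; without it, the same linear fit would leave a large systematic residual.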

  3. A one-step colorimetric acid-base titration sensor using a complementary color changing coordination system.

    PubMed

    Cho, Hui Hun; Kim, Si Hyun; Heo, Jun Hyuk; Moon, Young Eel; Choi, Young Hun; Lim, Dong Cheol; Han, Kwon-Hoon; Lee, Jung Heon

    2016-06-21

    We report the development of a colorimetric sensor that allows for the quantitative measurement of acid content via acid-base titration in a single step. To create the sensor, we used a cobalt coordination system (Co-complex sensor) that changes from greenish-blue-colored Co(H2O)4(OH)2 to pink-colored Co(H2O)6(2+) after neutralization. Greenish blue and pink are two complementary colors with a strong contrast. As a certain amount of acid is introduced to the Co-complex sensor, a portion of the greenish-blue Co(H2O)4(OH)2 changes to pink Co(H2O)6(2+), producing a different color. Because the ratio of greenish blue to pink in the Co-complex sensor is determined by the amount of neutralization occurring between Co(H2O)4(OH)2 and an acid, the sensor produced a spectrum of green, yellow-green, brown, orange, and pink colors depending on the acid content. In contrast, in normal acid-base titration the color change appears only beyond the end point. When we mixed this Co-complex sensor with different concentrations of citric acid, tartaric acid, and malic acid, three representative organic acids in fruits, we observed distinct color changes for each sample. This color change could also be observed in real fruit juice. When we treated the Co-complex sensor with real tangerine juice, it generated diverse colors depending on the concentration of citric acid in each sample. These results provide a new angle on simple but quantitative measurements of analytes for on-site use in various applications, such as the food, farming, and drug industries.
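A toy sketch of the mechanism described above: the observed color is a mixture of the greenish-blue and pink complexes in proportion to the neutralized fraction. The RGB endpoints are rough illustrative choices, not measured values.

```python
GREENISH_BLUE = (0, 180, 170)   # assumed color of Co(H2O)4(OH)2
PINK = (255, 105, 180)          # assumed color of Co(H2O)6(2+)

def mixed_color(f):
    """Linear RGB blend for neutralized fraction f in [0, 1]."""
    return tuple(round((1 - f) * a + f * b)
                 for a, b in zip(GREENISH_BLUE, PINK))

print(mixed_color(0.0))   # no acid: greenish blue
print(mixed_color(0.5))   # intermediate hue
print(mixed_color(1.0))   # fully neutralized: pink
```

Reading the acid content then amounts to inverting this blend: matching an observed RGB value back to the fraction f, e.g. against a calibration series of known acid concentrations.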

  4. Remote sensing of the diffuse attenuation coefficient of ocean water. [coastal zone color scanner

    NASA Technical Reports Server (NTRS)

    Austin, R. W.

    1981-01-01

    A technique was devised which uses remotely sensed spectral radiances from the sea to assess the optical diffuse attenuation coefficient, K(lambda), of near-surface ocean water. With spectral image data from a sensor such as the coastal zone color scanner (CZCS) carried on NIMBUS-7, it is possible to rapidly compute the K(lambda) fields for large ocean areas and obtain K "images" which show the synoptic spatial distribution of this attenuation coefficient. The technique utilizes a relationship that has been determined between the value of K and the ratio of the upwelling radiances leaving the sea surface at two wavelengths. The relationship was developed to provide an algorithm for inferring K from the radiance images obtained by the CZCS; thus the wavelengths were selected from those used by this sensor, viz., 443, 520, 550 and 670 nm. The majority of the radiance arriving at the spacecraft is the result of scattering in the atmosphere and is unrelated to the radiance signal generated by the water. A necessary step in processing the data received by the sensor is, therefore, the effective removal of these atmospheric path radiance signals before the K algorithm is applied. Examples of the efficacy of these removal techniques are given together with examples of the spatial distributions of K in several ocean areas.
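The ratio relationship described above typically takes the form of a power law plus a pure-water offset. The sketch below uses coefficients of roughly the form later published for K(490) by Austin and Petzold; the exact values here are illustrative placeholders, not the record's algorithm.

```python
K_WATER = 0.022  # assumed pure-seawater contribution (1/m)

def k_from_ratio(L443, L550, A=0.088, B=-1.49):
    """Estimate K (1/m) from the blue/green upwelling radiance ratio."""
    return K_WATER + A * (L443 / L550) ** B

k_clear = k_from_ratio(2.0, 1.0)    # blue-rich radiances: clear water, low K
k_turbid = k_from_ratio(0.5, 1.0)   # blue-depleted radiances: turbid, high K
print(k_clear < k_turbid)
```

Applied pixel-by-pixel to atmospherically corrected radiance images, this yields the synoptic K "images" the abstract describes.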

  5. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor.

    PubMed

    Kim, Heegwang; Park, Jinho; Park, Hasil; Paik, Joonki

    2017-12-09

    Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system.
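The final reconstruction step described above inverts the standard atmospheric scattering model I = J·t + A·(1 − t), given the estimated transmission map t and atmospheric light A. A minimal sketch on synthetic data:

```python
import numpy as np

def defog(I, t, A, t_min=0.1):
    """Invert the fog model pixel-wise: J = (I - A) / t + A."""
    t = np.clip(t, t_min, 1.0)            # avoid division blow-up where t ~ 0
    return (I - A) / t[..., None] + A

# Synthetic foggy image: known scene J, uniform A, uniform transmission t.
J = np.zeros((4, 4, 3)); J[..., 0] = 0.8   # reddish scene
A = np.array([0.9, 0.9, 0.9])              # bright atmospheric light
t = np.full((4, 4), 0.5)
I = J * t[..., None] + A * (1 - t[..., None])

recovered = defog(I, t, A)
print(np.allclose(recovered, J))
```

In the paper's pipeline t comes from the iteratively refined disparity-based transmission map and A from color-line estimation; here both are given directly to isolate the inversion.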

  6. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor

    PubMed Central

    Park, Jinho; Park, Hasil

    2017-01-01

    Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system. PMID:29232826

  7. Study on real-time images compounded using spatial light modulator

    NASA Astrophysics Data System (ADS)

    Xu, Jin; Chen, Zhebo; Ni, Xuxiang; Lu, Zukang

    2007-01-01

    Image compositing technology is widely used in film production. Conventionally, compositing relies on image-processing algorithms: useful objects, details, or background elements are first extracted from the source images and then combined into one image. With this approach the film system needs a powerful processor, because the processing is complex, and the composite image is obtained only after some delay. In this paper, we introduce a new method for real-time image compositing that produces the composite at the same time the scene is shot. The system consists of two camera lenses, a spatial light modulator array, and an image sensor. The spatial light modulator can be a liquid crystal display (LCD), liquid crystal on silicon (LCoS), thin-film-transistor liquid crystal display (TFT-LCD), deformable micro-mirror device (DMD), or similar. First, one lens, which we call the first imaging lens, images the object onto the panel of the spatial light modulator. Second, an image is output to the panel of the spatial light modulator, so the optical image of the object and the image output by the modulator are spatially combined on the panel. Third, the other lens, which we call the second imaging lens, images the combined image onto the image sensor. After these three steps, the image sensor captures the composite image. Because the spatial light modulator can output images continuously, the compositing is also continuous and proceeds in real time. To place a real object into a virtual background, the virtual background scene is output on the spatial light modulator while the real object is imaged by the first imaging lens; the image sensor then captures the composite in real time. Likewise, to place a virtual object into a real background, the virtual object is output on the spatial light modulator while the real background is imaged by the first imaging lens. Most spatial light modulators can only modulate light intensity, so a single panel without color filters can composite only black-and-white images; a color composite requires a system like a three-panel projection arrangement. The paper presents the framework of the system's optical design; all experiments used an LCoS modulator. Original and composited pictures are given at the end of the paper. Although the system has a few shortcomings, we conclude that, because it requires no mathematical compositing process and introduces no delay, it is a truly real-time image compositing system.

  8. Ocean color products from the Korean Geostationary Ocean Color Imager (GOCI).

    PubMed

    Wang, Menghua; Ahn, Jae-Hyun; Jiang, Lide; Shi, Wei; Son, SeungHyun; Park, Young-Je; Ryu, Joo-Hyung

    2013-02-11

    The first geostationary ocean color satellite sensor, the Geostationary Ocean Color Imager (GOCI), onboard the South Korean Communication, Ocean, and Meteorological Satellite (COMS), was successfully launched in June 2010. GOCI has a local area coverage of the western Pacific region centered at around 36°N and 130°E and covers ~2500 × 2500 km². GOCI has eight spectral bands from 412 to 865 nm with hourly measurements during daytime from 9:00 to 16:00 local time, i.e., eight images per day. In a collaboration between the NOAA Center for Satellite Applications and Research (STAR) and the Korea Institute of Ocean Science and Technology (KIOST), we have been working on deriving and improving GOCI ocean color products, e.g., normalized water-leaving radiance spectra (nLw(λ)), chlorophyll-a concentration, the diffuse attenuation coefficient at a wavelength of 490 nm (Kd(490)), etc. The GOCI-covered ocean region includes one of the world's most turbid and optically complex waters. To improve the GOCI-derived nLw(λ) spectra, a new atmospheric correction algorithm was developed and implemented in the GOCI ocean color data processing. The new algorithm was developed specifically for GOCI-like ocean color data processing for this highly turbid western Pacific region. In this paper, we show GOCI ocean color results from our collaborative effort. From in situ validation analyses, ocean color products derived from the new GOCI ocean color data processing have been significantly improved. Generally, the new GOCI ocean color products have data quality comparable to those from the Moderate Resolution Imaging Spectroradiometer (MODIS) on the satellite Aqua. We show that GOCI-derived ocean color data can provide an effective tool to monitor ocean phenomena in the region such as tide-induced re-suspension of sediments, diurnal variation of ocean optical and biogeochemical properties, and horizontal advection of river discharge. In particular, we show some examples of ocean diurnal variations in the region, which can be provided effectively by geostationary satellite measurements.

  9. A method for evaluating image quality of monochrome and color displays based on luminance by use of a commercially available color digital camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tokurei, Shogo, E-mail: shogo.tokurei@gmail.com; Morishita, Junji, E-mail: junjim@med.kyushu-u.ac.jp

    Purpose: The aim of this study is to propose a method for the quantitative evaluation of image quality of both monochrome and color liquid-crystal displays (LCDs) using a commercially available color digital camera. Methods: The intensities of the unprocessed red (R), green (G), and blue (B) signals of a camera vary depending on the spectral sensitivity of the image sensor used in the camera. For consistent evaluation of image quality for both monochrome and color LCDs, the unprocessed RGB signals of the camera were converted into gray scale signals that corresponded to the luminance of the LCD. Gray scale signals for the monochrome LCD were evaluated by using only the green channel signals of the camera. For the color LCD, the RGB signals of the camera were converted into gray scale signals by employing weighting factors (WFs) for each RGB channel. A line image displayed on the color LCD was simulated on the monochrome LCD by using a software application for subpixel driving in order to verify the WF-based conversion method. Furthermore, the results obtained by different types of commercially available color cameras and a photometric camera were compared to examine the consistency of the authors' method. Finally, image quality for both the monochrome and color LCDs was assessed by measuring modulation transfer functions (MTFs) and Wiener spectra (WS). Results: The authors' results demonstrated that the proposed method for calibrating the spectral sensitivity of the camera resulted in a consistent and reliable evaluation of the luminance of monochrome and color LCDs. The MTFs and WS showed different characteristics for the two LCD types owing to differences in the subpixel structure. The MTF in the vertical direction of the color LCD was superior to that of the monochrome LCD, although the WS in the vertical direction of the color LCD was inferior to that of the monochrome LCD as a result of luminance fluctuations in RGB subpixels. Conclusions: The authors' method based on the use of a commercially available color camera is useful to evaluate and understand the display performance of both monochrome and color LCDs in radiology departments.
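The WF-based conversion described above is a per-channel weighted sum of the camera's RGB signals. The sketch below uses Rec. 709 luma coefficients as illustrative weighting factors; the paper's calibrated factors would differ per camera and display.

```python
import numpy as np

W = np.array([0.2126, 0.7152, 0.0722])   # assumed weighting factors (Rec. 709)

def to_gray(rgb):
    """Weighted sum of RGB channels -> luminance-like gray scale signal."""
    return rgb @ W

patch = np.array([[1.0, 1.0, 1.0],       # white patch
                  [0.0, 1.0, 0.0]])      # pure-green patch
gray = to_gray(patch)
print(gray)
```

Calibrating the weights against a photometric reference, as the authors do, is what makes the gray signal track the actual display luminance rather than the camera's arbitrary channel balance.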

  10. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images

    PubMed Central

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-01-01

    In various unmanned aerial vehicle (UAV) imaging applications, multisensor super-resolution (SR) has remained a chronic problem and has attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to produce a higher-resolution (HR) image and thereby improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm that combines directionally-adaptive constraints with a multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitations of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in terms of objective measures. PMID:26007744
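The IHS fusion step named above can be sketched with the common additive shortcut: replace the intensity component of the upsampled RGB image with a high-resolution intensity channel while leaving the chromatic differences (hue and saturation) untouched. The images here are synthetic.

```python
import numpy as np

def ihs_fuse(rgb_lr, intensity_hr):
    """Swap the intensity component of rgb_lr for intensity_hr (additive IHS)."""
    i_lr = rgb_lr.mean(axis=-1, keepdims=True)
    return rgb_lr + (intensity_hr[..., None] - i_lr)

rgb = np.full((2, 2, 3), 0.3); rgb[..., 0] = 0.6   # dull reddish LR tile
pan = np.full((2, 2), 0.7)                         # sharper HR intensity

fused = ihs_fuse(rgb, pan)
print(np.allclose(fused.mean(axis=-1), pan))       # intensity was replaced
```

In the paper the HR intensity would come from the regularized SR estimate, not a separate panchromatic sensor; the additive form shown here is a standard approximation of a full RGB-to-IHS round trip.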

  11. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images.

    PubMed

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-05-22

    In various unmanned aerial vehicle (UAV) imaging applications, multisensor super-resolution (SR) has remained a chronic problem and has attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to produce a higher-resolution (HR) image and thereby improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm that combines directionally-adaptive constraints with a multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitations of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in terms of objective measures.

  12. Aerosol polarization effects on atmospheric correction and aerosol retrievals in ocean color remote sensing.

    PubMed

    Wang, Menghua

    2006-12-10

    The current ocean color data processing system for the Sea-viewing Wide Field-of-View Sensor (SeaWiFS) and the moderate resolution imaging spectroradiometer (MODIS) uses the Rayleigh lookup tables that were generated using the vector radiative transfer theory with inclusion of the polarization effects. The polarization effects, however, are not accounted for in the aerosol lookup tables for the ocean color data processing. I describe a study of the aerosol polarization effects on the atmospheric correction and aerosol retrieval algorithms in the ocean color remote sensing. Using an efficient method for the multiple vector radiative transfer computations, aerosol lookup tables that include polarization effects are generated. Simulations have been carried out to evaluate the aerosol polarization effects on the derived ocean color and aerosol products for all possible solar-sensor geometries and the various aerosol optical properties. Furthermore, the new aerosol lookup tables have been implemented in the SeaWiFS data processing system and extensively tested and evaluated with SeaWiFS regional and global measurements. Results show that in open oceans (maritime environment), the aerosol polarization effects on the ocean color and aerosol products are usually negligible, while there are some noticeable effects on the derived products in the coastal regions with nonmaritime aerosols.

  13. A complete passive blind image copy-move forensics scheme based on compound statistics features.

    PubMed

    Peng, Fei; Nie, Yun-ying; Long, Min

    2011-10-10

    Since most sensor pattern noise based image copy-move forensics methods require a known reference sensor pattern noise, they generally result in non-blind passive forensics, which significantly confines the application circumstances. In view of this, a novel passive-blind image copy-move forensics scheme is proposed in this paper. First, a color image is transformed into a grayscale one, and a wavelet-transform-based de-noising filter is used to extract the sensor pattern noise. The variance of the pattern noise, the signal-to-noise ratio between the de-noised image and the pattern noise, the information entropy, and the average energy gradient of the original grayscale image are chosen as features, and non-overlapping sliding-window operations divide the images into sub-blocks. Finally, the tampered areas are detected by analyzing the correlation of the features between the sub-blocks and the whole image. Experimental results and analysis show that the proposed scheme is completely passive-blind, has a good detection rate, and is robust against JPEG compression, noise, rotation, scaling, and blurring. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
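The block-feature step described above can be sketched as follows. The feature set here (variance, entropy, average energy gradient) is a reduced stand-in computed on a synthetic image; the full scheme also uses the pattern-noise SNR and then correlates each block's features against the whole image.

```python
import numpy as np

def block_features(img, bs=8):
    """Per-block statistics over non-overlapping bs x bs sub-blocks."""
    feats = []
    for r in range(0, img.shape[0] - bs + 1, bs):
        for c in range(0, img.shape[1] - bs + 1, bs):
            b = img[r:r+bs, c:c+bs]
            # Information entropy from a 16-bin intensity histogram.
            hist, _ = np.histogram(b, bins=16, range=(0, 1))
            p = hist / hist.sum()
            ent = -np.sum(p[p > 0] * np.log2(p[p > 0]))
            # Average energy gradient: mean squared spatial gradient.
            gx, gy = np.gradient(b)
            aeg = np.mean(gx**2 + gy**2)
            feats.append([b.var(), ent, aeg])
    return np.array(feats)

rng = np.random.default_rng(2)
img = rng.random((32, 32))      # synthetic grayscale image in [0, 1)
F = block_features(img)
print(F.shape)                  # one feature row per 8x8 block
```

Copied-and-pasted regions tend to produce blocks whose feature vectors correlate anomalously with each other, which is what the detection stage exploits.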

  14. Feasibility of using a bacteriophage-based structural color sensor for screening the geographical origins of agricultural products

    NASA Astrophysics Data System (ADS)

    Seol, Daun; Moon, Jong-Sik; Lee, Yujin; Han, Jiye; Jang, Daeil; Kang, Dong-Jin; Moon, Jiyoung; Jang, Eunjin; Oh, Jin-Woo; Chung, Hoeil

    2018-05-01

    An M13 bacteriophage-based color sensor, which can change its structural color upon interaction with a gaseous molecule, was evaluated as a screening tool for the discrimination of the geographical origins of three different agricultural products (garlic, onion, and perilla). Exposure of the color sensor to sample odors induced the self-assembled M13 bacteriophage bundles to swell by the interaction of amino acid residues (repeating units of four glutamates) on the bacteriophage with the odor components, resulting in a change in the structural color of the sensor. When the sensor was exposed to the odors of garlic and onion samples, the RGB color changes were considerable because of the strong interactions of the odor components such as disulfides with the glutamate residues on the sensor. Although the patterns of the color variations were generally similar between the domestic and imported samples, some degree of dissimilarity in their intensities was also observed. Although the magnitude of color change decreased for perilla, the color change patterns between the two groups were somewhat different. With the acquired RGB data, a support vector machine was employed to distinguish the domestic and imported samples, and the resulting accuracies in the measurements of garlic, onion, and perilla samples were 94.1, 88.7, and 91.6%, respectively. The differences in the concentrations of the odor components between the two groups and/or the presence of specific components exclusively in the odor of one group allowed the color sensor-based discrimination. The demonstrated color sensor was thus shown to be a potentially versatile and simple on-site screening tool. Strategies to further improve the sensor performance are also discussed.

  15. Feasibility of using a bacteriophage-based structural color sensor for screening the geographical origins of agricultural products.

    PubMed

    Seol, Daun; Moon, Jong-Sik; Lee, Yujin; Han, Jiye; Jang, Daeil; Kang, Dong-Jin; Moon, Jiyoung; Jang, Eunjin; Oh, Jin-Woo; Chung, Hoeil

    2018-05-15

    An M13 bacteriophage-based color sensor, which can change its structural color upon interaction with a gaseous molecule, was evaluated as a screening tool for the discrimination of the geographical origins of three different agricultural products (garlic, onion, and perilla). Exposure of the color sensor to sample odors induced the self-assembled M13 bacteriophage bundles to swell by the interaction of amino acid residues (repeating units of four glutamates) on the bacteriophage with the odor components, resulting in a change in the structural color of the sensor. When the sensor was exposed to the odors of garlic and onion samples, the RGB color changes were considerable because of the strong interactions of the odor components such as disulfides with the glutamate residues on the sensor. Although the patterns of the color variations were generally similar between the domestic and imported samples, some degree of dissimilarity in their intensities was also observed. Although the magnitude of color change decreased for perilla, the color change patterns between the two groups were somewhat different. With the acquired RGB data, a support vector machine was employed to distinguish the domestic and imported samples, and the resulting accuracies in the measurements of garlic, onion, and perilla samples were 94.1, 88.7, and 91.6%, respectively. The differences in the concentrations of the odor components between the two groups and/or the presence of specific components exclusively in the odor of one group allowed the color sensor-based discrimination. The demonstrated color sensor was thus shown to be a potentially versatile and simple on-site screening tool. Strategies to further improve the sensor performance are also discussed. Copyright © 2018. Published by Elsevier B.V.

  16. LEA Detection and Tracking Method for Color-Independent Visual-MIMO

    PubMed Central

    Kim, Jai-Eun; Kim, Ji-Won; Kim, Ki-Doo

    2016-01-01

    Communication performance in the color-independent visual multiple-input multiple-output (visual-MIMO) technique is degraded by light emitting array (LEA) detection and tracking errors in the received image, because the image sensor included in the camera must be used as the receiver in the visual-MIMO system. In this paper, in order to improve detection reliability, we first set up a color-space-based region of interest (ROI) in which an LEA is likely to be placed, and then use the Harris corner detection method. Next, we use Kalman filtering for robust tracking by predicting the most probable location of the LEA when the relative position between the camera and the LEA varies. In the last step of our proposed method, perspective projection is used to correct the distorted image, which can improve the symbol decision accuracy. Finally, through numerical simulation, we show the possibility of robust detection and tracking of the LEA, which results in a symbol error rate (SER) performance improvement. PMID:27384563
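The Kalman tracking step described above can be sketched with a constant-velocity model: predict the LEA's next image position, then update the state with each noisy detection. The matrices, noise levels, and motion below are generic assumptions, not the paper's tuning.

```python
import numpy as np

F = np.array([[1, 0, 1, 0],      # state transition for [x, y, vx, vy]
              [0, 1, 0, 1],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0],      # we only observe position
              [0, 1, 0, 0]], float)
Q = 0.01 * np.eye(4)             # process noise
R = 4.0 * np.eye(2)              # measurement noise (pixels^2)

x = np.zeros(4)                  # initial state estimate
P = 100.0 * np.eye(4)            # initial uncertainty

rng = np.random.default_rng(3)
truth = np.array([10.0, 20.0])
for _ in range(30):
    truth += [2.0, 1.0]                       # LEA drifts at constant velocity
    z = truth + rng.normal(0, 2.0, size=2)    # noisy Harris-style detection
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measurement.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P

err = np.linalg.norm(x[:2] - truth)
print(err)
```

The predicted position also narrows the search ROI for the next frame, which is how tracking and detection reinforce each other in the paper's pipeline.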

  17. LEA Detection and Tracking Method for Color-Independent Visual-MIMO.

    PubMed

    Kim, Jai-Eun; Kim, Ji-Won; Kim, Ki-Doo

    2016-07-02

    Communication performance in the color-independent visual multiple-input multiple-output (visual-MIMO) technique is degraded by light emitting array (LEA) detection and tracking errors in the received image, because the image sensor included in the camera must be used as the receiver in the visual-MIMO system. In this paper, in order to improve detection reliability, we first set up a color-space-based region of interest (ROI) in which an LEA is likely to be placed, and then use the Harris corner detection method. Next, we use Kalman filtering for robust tracking by predicting the most probable location of the LEA when the relative position between the camera and the LEA varies. In the last step of our proposed method, perspective projection is used to correct the distorted image, which can improve the symbol decision accuracy. Finally, through numerical simulation, we show the possibility of robust detection and tracking of the LEA, which results in a symbol error rate (SER) performance improvement.

  18. 3D digital image correlation using single color camera pseudo-stereo system

    NASA Astrophysics Data System (ADS)

    Li, Junrui; Dan, Xizuo; Xu, Wan; Wang, Yonghong; Yang, Guobiao; Yang, Lianxiang

    2017-10-01

    Three-dimensional digital image correlation (3D-DIC) has been widely used by industry to measure 3D contours and whole-field displacement/strain. In this paper, a novel single-color-camera 3D-DIC setup, using a reflection-based pseudo-stereo system, is proposed. Compared to the conventional single-camera pseudo-stereo system, which splits the CCD sensor into two halves to capture the stereo views, the proposed system achieves both views using the whole CCD chip without reducing spatial resolution. In addition, as in a conventional 3D-DIC system, the center of the two views lies at the center of the CCD chip, which minimizes image distortion relative to the conventional pseudo-stereo system. The two overlapped views on the CCD are separated in the color domain, and the standard 3D-DIC algorithm can be utilized directly to perform the evaluation. The system's principle and experimental setup are described in detail, and multiple tests are performed to validate the system.
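The color-domain separation described above can be sketched simply, assuming one view is carried by the red channel and the other by the blue: splitting the channels of the single color image recovers the two stereo views. The combined image here is synthetic.

```python
import numpy as np

def split_views(color_img):
    """Return (view_a, view_b) from the red and blue channels."""
    return color_img[..., 0], color_img[..., 2]

# Synthetic overlap: encode two distinct gray images into R and B.
view_a = np.full((4, 4), 0.25)
view_b = np.full((4, 4), 0.75)
combined = np.stack([view_a, np.zeros((4, 4)), view_b], axis=-1)

a, b = split_views(combined)
print(np.allclose(a, view_a) and np.allclose(b, view_b))
```

Each recovered view keeps the sensor's full spatial resolution, which is the advantage over splitting the chip into two halves; in practice some channel crosstalk would need calibration.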

  19. High-resolution CCD imaging alternatives

    NASA Astrophysics Data System (ADS)

    Brown, D. L.; Acker, D. E.

    1992-08-01

    High resolution CCD color cameras have recently stimulated the interest of a large number of potential end-users for a wide range of practical applications. Real-time High Definition Television (HDTV) systems are now being used, or considered for use, in applications ranging from entertainment program origination through digital image storage to medical and scientific research. HDTV generation of electronic images offers significant cost and time savings over the use of film in such applications. Further, in still-image systems, electronic image capture is faster and more efficient than conventional image scanners: the CCD still camera can capture 3-dimensional objects into the computing environment directly, without having to shoot a picture on film, develop it, and then scan the image into a computer. Extending CCD technology beyond broadcast: most standard production CCD sensor chips are made for broadcast-compatible systems. One popular CCD, the basis for this discussion, offers arrays of roughly 750 x 580 picture elements (pixels), or a total of approximately 435,000 pixels (see Fig. 1). FOR.A has developed a technique to increase the number of available pixels for a given image compared to that produced by the standard CCD itself. Using an interlined CCD with an overall spatial structure several times larger than the photo-sensitive sensor areas, each of the CCD sensors is shifted in two dimensions in order to fill in the spatial gaps between adjacent sensors.
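The pixel-shift idea described above can be sketched by interleaving four shifted captures onto a finer grid. The shifting optics are modeled here by offset subsampling of a synthetic scene; this is an illustration of the principle, not the FOR.A implementation.

```python
import numpy as np

scene = np.arange(64, dtype=float).reshape(8, 8)   # fine-grained "scene"

# Each low-resolution capture samples every other scene pixel, at one of
# the four possible (row, col) sub-pixel offsets.
caps = {(dr, dc): scene[dr::2, dc::2] for dr in (0, 1) for dc in (0, 1)}

# Interleave the four captures back onto the fine grid, doubling the
# sampled resolution in each dimension.
recon = np.empty_like(scene)
for (dr, dc), cap in caps.items():
    recon[dr::2, dc::2] = cap

print(np.array_equal(recon, scene))
```

With a static scene and ideal half-pitch shifts the fine grid is recovered exactly; real systems must also contend with sensor-area gaps, vibration, and motion between captures.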

  20. Remotely sensed geology from lander-based to orbital perspectives: Results of FIDO rover May 2000 field tests

    USGS Publications Warehouse

    Jolliff, B.; Knoll, A.; Morris, R.V.; Moersch, J.; McSween, H.; Gilmore, M.; Arvidson, R.; Greeley, R.; Herkenhoff, K.; Squyres, S.

    2002-01-01

    Blind field tests of the Field Integration Design and Operations (FIDO) prototype Mars rover were carried out 7-16 May 2000. A Core Operations Team (COT), sequestered at the Jet Propulsion Laboratory without knowledge of test site location, prepared command sequences and interpreted data acquired by the rover. Instrument sensors included a stereo panoramic camera, navigational and hazard-avoidance cameras, a color microscopic imager, an infrared point spectrometer, and a rock coring drill. The COT designed command sequences, which were relayed by satellite uplink to the rover, and evaluated instrument data. Using aerial photos and Airborne Visible and Infrared Imaging Spectrometer (AVIRIS) data, and information from the rover sensors, the COT inferred the geology of the landing site during the 18 sol mission, including lithologic diversity, stratigraphic relationships, environments of deposition, and weathering characteristics. Prominent lithologic units were interpreted to be dolomite-bearing rocks, kaolinite-bearing altered felsic volcanic materials, and basalt. The color panoramic camera revealed sedimentary layering and rock textures, and geologic relationships seen in rock exposures. The infrared point spectrometer permitted identification of prominent carbonate and kaolinite spectral features and permitted correlations to outcrops that could not be reached by the rover. The color microscopic imager revealed fine-scale rock textures, soil components, and results of coring experiments. Test results show that close-up interrogation of rocks is essential to investigations of geologic environments and that observations must include scales ranging from individual boulders and outcrops (microscopic, macroscopic) to orbital remote sensing, with sufficient intermediate steps (descent images) to connect in situ and remote observations.

  1. Chromatic Modulator for a High-Resolution CCD or APS

    NASA Technical Reports Server (NTRS)

    Hartley, Frank; Hull, Anthony

    2008-01-01

    A chromatic modulator has been proposed to enable the separate detection of the red, green, and blue (RGB) color components of the same scene by a single charge-coupled device (CCD), active-pixel sensor (APS), or similar electronic image detector. Traditionally, the RGB color-separation problem in an electronic camera has been solved by use of either (1) fixed color filters over three separate image detectors; (2) a filter wheel that repeatedly imposes a red, then a green, then a blue filter over a single image detector; or (3) different fixed color filters over adjacent pixels. The use of separate image detectors necessitates precise registration of the detectors and the use of complicated optics; filter wheels are expensive and add considerably to the bulk of the camera; and fixed pixelated color filters reduce spatial resolution and introduce color-aliasing effects. The proposed chromatic modulator would not exhibit any of these shortcomings. The proposed chromatic modulator would be an electromechanical device fabricated by micromachining. It would include a filter having a spatially periodic pattern of RGB strips at a pitch equal to that of the pixels of the image detector. The filter would be placed in front of the image detector, supported at its periphery by a spring suspension and electrostatic comb drive. The spring suspension would bias the filter toward a middle position in which each filter strip would be registered with a row of pixels of the image detector. Hard stops would limit the excursion of the spring suspension to precisely one pixel row above and one pixel row below the middle position. In operation, the electrostatic comb drive would be actuated to repeatedly snap the filter to the upper extreme, middle, and lower extreme positions. This action would repeatedly place a succession of the differently colored filter strips in front of each pixel of the image detector. 
To simplify the processing, it would be desirable to encode information on the color of the filter strip over each row (or at least over some representative rows) of pixels at a given instant of time in synchronism with the pixel output at that instant.
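
    The three-position modulation cycle described above can be sketched in software: given three exposures taken at filter offsets of -1, 0, and +1 pixel rows, a full RGB image can be reassembled row by row. The repeating R,G,B strip order and offset convention below are illustrative assumptions, not details from the proposal:

```python
import numpy as np

def assemble_rgb(frames):
    """Recombine three exposures taken at filter offsets of -1, 0, +1
    pixel rows into one RGB image. Assumes an R,G,B strip pattern
    repeating every 3 rows (illustrative convention)."""
    h, w = frames[0].shape
    rgb = np.zeros((h, w, 3))
    for p, frame in zip((-1, 0, 1), frames):
        for i in range(h):
            c = (i + p) % 3          # color strip covering row i at offset p
            rgb[i, :, c] = frame[i, :]
    return rgb

# simulate the three exposures of a known scene, then reassemble it
scene = np.zeros((6, 4, 3))
scene[..., 0], scene[..., 1], scene[..., 2] = 1.0, 2.0, 3.0
frames = []
for p in (-1, 0, 1):
    f = np.zeros((6, 4))
    for i in range(6):
        f[i, :] = scene[i, :, (i + p) % 3]
    frames.append(f)
out = assemble_rgb(frames)
```

    Because each row sees all three strip colors across the cycle, every pixel recovers a complete RGB triple with no loss of spatial resolution.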

  2. Analysis of Active Sensor Discrimination Requirements for Various Defense Missile Defense Scenarios Final Report 1999(99-ERD-080)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ledebuhr, A.G.; Ng, L.C.; Gaughan, R.J.

    2000-02-15

    During FY99, we explored and analyzed a combined passive/active sensor concept to support the advanced discrimination requirements for various missile defense scenarios. The idea is to combine multiple IR spectral channels with an imaging LIDAR (Light Detection and Ranging) behind a common optical system. The imaging LIDAR would itself consist of at least two channels: one at the fundamental laser wavelength (e.g., 1.064 μm for Nd:YAG) and one at the frequency-doubled wavelength (532 nm for Nd:YAG). The two-color laser output would, for example, allow the longer wavelength to serve as a direct-detection time-of-flight ranger and the shorter wavelength to serve as an active imaging channel. The LIDAR can produce a high-resolution 2D spatial image either passively or actively with laser illumination. Advances in laser design also offer three-color (frequency-tripled) systems, high rep-rate operation, and better pumping efficiencies that can provide longer-distance acquisition and ranging for enhanced discrimination phenomenology. New detector developments can enhance the performance and operation of both LIDAR channels. A real-time data fusion approach that combines multi-spectral IR phenomenology with LIDAR imagery can improve both discrimination and aim-point selection capability.

  3. Comparison of Landsat MSS and merged MSS/RBV data for analysis of natural vegetation

    NASA Technical Reports Server (NTRS)

    Roller, N. E. G.; Cox, S.

    1980-01-01

    Improved resolution could make satellite remote sensing data more useful for surveys of natural vegetation. Although improved satellite/sensor systems appear to be several years away, one potential interim solution to the problem of achieving greater resolution without sacrificing spectral sensitivity is the merging of Landsat RBV and MSS data. This paper describes the results of a study performed to obtain a preliminary evaluation of the usefulness of two types of products that can be made by merging Landsat RBV and MSS data. The products generated were a false color composite image and a computer recognition map. Of these two products, the false color composite image appears to be the more useful.

  4. Application of EREP imagery to fracture-related mine safety hazards in coal mining and mining-environmental problems in Indiana. [Indiana and Illinois

    NASA Technical Reports Server (NTRS)

    Wier, C. E. (Principal Investigator); Powell, R. L.; Amato, R. V.; Russell, O. R.; Martin, K. R.

    1975-01-01

    The author has identified the following significant results. This investigation evaluated the applicability of a variety of sensor types, formats, and resolution capabilities to the study of both fuel and nonfuel mined lands. The image reinforcement provided by stereo viewing of the EREP images proved useful for identifying lineaments and for mined lands mapping. Skylab S190B color and color infrared transparencies were the most useful EREP imagery. New information on lineament and fracture patterns in the bedrock of Indiana and Illinois extracted from analysis of the Skylab imagery has contributed to furthering the geological understanding of this portion of the Illinois basin.

  5. Image-based tracking system for vibration measurement of a rotating object using a laser scanning vibrometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Dongkyu, E-mail: akein@gist.ac.kr; Khalil, Hossam; Jo, Youngjoon

    2016-06-28

    An image-based tracking system using a laser scanning vibrometer is developed for vibration measurement of a rotating object. Unlike a conventional system, the proposed one can be used where a position or velocity sensor, such as an encoder, cannot be attached to the object. An image processing algorithm is introduced to detect a landmark and the laser beam based on their colors. Then, using a feedback control system, the laser beam can track the rotating object.
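
    A minimal sketch of the color-based detection step described above: locating a landmark (or the laser spot) as the centroid of pixels close to a reference color. The target color and tolerance are illustrative assumptions, not values from the paper:

```python
import numpy as np

def find_color_centroid(rgb, target, tol=0.2):
    """Return the (row, col) centroid of pixels whose RGB value lies
    within Euclidean distance `tol` of `target`, or None if no pixel
    matches. Threshold and color are illustrative."""
    dist = np.linalg.norm(rgb - np.asarray(target, dtype=float), axis=2)
    ys, xs = np.nonzero(dist < tol)
    if len(xs) == 0:
        return None
    return float(ys.mean()), float(xs.mean())

# toy image with a 2x2 red landmark
img = np.zeros((10, 10, 3))
img[4:6, 7:9] = [1.0, 0.0, 0.0]
center = find_color_centroid(img, (1.0, 0.0, 0.0))
```

    In a feedback loop, the offset between the detected centroid and the laser-spot position would drive the scanning mirrors.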

  6. Demosaicking for full motion video 9-band SWIR sensor

    NASA Astrophysics Data System (ADS)

    Kanaev, Andrey V.; Rawhouser, Marjorie; Kutteruf, Mary R.; Yetzbacher, Michael K.; DePrenger, Michael J.; Novak, Kyle M.; Miller, Corey A.; Miller, Christopher W.

    2014-05-01

    Short wave infrared (SWIR) spectral imaging systems are vital for Intelligence, Surveillance, and Reconnaissance (ISR) applications because of their ability to autonomously detect targets and classify materials. Typically, spectral imagers are incapable of providing Full Motion Video (FMV) because of their reliance on line scanning. We enable FMV capability for a SWIR multi-spectral camera by creating a repeating pattern of 3x3 spectral filters on a staring focal plane array (FPA). In this paper we present imagery from an FMV SWIR camera with nine discrete bands and discuss the image processing algorithms necessary for its operation. The main image processing task in this case is demosaicking of the spectral bands, i.e., reconstructing full spectral images at the original FPA resolution from the spatially subsampled and incomplete spectral data acquired with the chosen filter array pattern. To the best of the authors' knowledge, demosaicking algorithms for nine or more equally sampled bands have not been reported before. Moreover, all existing algorithms developed for demosaicking visible color filter arrays with fewer than nine colors assume either certain relationships between the visible colors, which are not valid for SWIR imaging, or the presence of one color band with a higher sampling rate than the rest of the bands, which does not conform to our spectral filter pattern. We discuss and present results for two novel approaches to demosaicking: interpolation using multi-band edge information, and application of multi-frame super-resolution to single-frame resolution enhancement of multi-spectral spatially multiplexed images.
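
    As a baseline for the demosaicking problem described above, each of the nine bands can be extracted from the mosaic at its 3x3 offset and upsampled by sample replication. This is only a nearest-neighbor sketch (assuming image dimensions divisible by the pattern size), not the paper's edge-guided or super-resolution methods:

```python
import numpy as np

def demosaic_nn(mosaic, pattern=3):
    """Nearest-neighbor demosaicking of a pattern x pattern filter array:
    each band is sampled once per tile and replicated to full resolution.
    Assumes image dimensions are divisible by `pattern`."""
    h, w = mosaic.shape
    bands = np.zeros((pattern * pattern, h, w))
    for r in range(pattern):
        for c in range(pattern):
            samples = mosaic[r::pattern, c::pattern]
            bands[r * pattern + c] = np.repeat(
                np.repeat(samples, pattern, axis=0), pattern, axis=1)
    return bands

bands = demosaic_nn(np.arange(36.0).reshape(6, 6))
```

    Each band ends up piecewise constant over its 3x3 tile; the edge-guided interpolation in the paper would replace this replication with direction-aware estimates.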

  7. Research on multi-source image fusion technology in haze environment

    NASA Astrophysics Data System (ADS)

    Ma, GuoDong; Piao, Yan; Li, Bing

    2017-11-01

    In a haze environment, the visible image collected by a single sensor expresses the shape, color, and texture details of the target well, but haze lowers sharpness and parts of the target subject are lost. An infrared image collected by a single sensor, owing to thermal radiation and strong penetration ability, clearly captures the target subject but loses detail information. Therefore, a multi-source image fusion method is proposed to exploit their respective advantages. First, an improved Dark Channel Prior algorithm is used to dehaze the visible image. Second, an improved SURF algorithm is used to register the infrared image and the dehazed visible image. Finally, a weighted fusion algorithm based on information complementarity is used to fuse the images. Experiments show that the proposed method improves the clarity of visible targets and highlights occluded infrared targets for target recognition.
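
    The final weighted-fusion step can be sketched as a pixel-wise blend in which each sensor is weighted by its local gradient magnitude. This is a simple stand-in for the paper's information-complementary weighting, not its actual rule:

```python
import numpy as np

def fuse(vis, ir, eps=1e-6):
    """Pixel-wise weighted fusion: each sensor's weight is proportional
    to its local gradient magnitude, so detail-rich regions dominate."""
    gv = np.hypot(*np.gradient(vis.astype(np.float64)))
    gi = np.hypot(*np.gradient(ir.astype(np.float64)))
    w = (gv + eps) / (gv + gi + 2 * eps)
    return w * vis + (1.0 - w) * ir

# with two featureless inputs the weights fall back to an even 50/50 blend
f = fuse(np.zeros((5, 5)), np.ones((5, 5)))
```

    Wherever the dehazed visible image carries more local structure than the infrared image, its weight rises toward 1, and vice versa.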

  8. Identifying People with Soft-Biometrics at Fleet Week

    DTIC Science & Technology

    2013-03-01

    onboard sensors. This included: Color Camera: Located in the right eye, Octavia stored 640x480 RGB images at ~4 Hz from a Point Grey Firefly camera. A ... Face Detection: The Fleet Week experiments demonstrated the potential of soft biometrics for recognition, but all of the existing algorithms currently ...

  9. Perception-oriented fusion of multi-sensor imagery: visible, IR, and SAR

    NASA Astrophysics Data System (ADS)

    Sidorchuk, D.; Volkov, V.; Gladilin, S.

    2018-04-01

    This paper addresses the problem of fusing optical (visible and thermal domain) data and radar data for the purpose of visualization. These types of images typically contain a great deal of complementary information, and their joint visualization can be more useful and convenient for a human observer than a set of individual images. To solve the image fusion problem, we propose a novel algorithm that exploits peculiarities of human color perception and is based on grey-scale structural visualization. Benefits of the presented algorithm are exemplified by satellite imagery.

  10. The Remote Sensing of Mineral Aerosols and Their Impact on Phytoplankton Productivity using Sea WiFS

    NASA Technical Reports Server (NTRS)

    Stegmann, Petra M.

    1998-01-01

    The main objective of this proposal was to use SeaWiFS data to study the relationship between aerosols found in aeolian dust and photosynthesis of phytoplankton in open ocean surface waters. This project was a collaborative effort between myself and Dr. Neil Tindale at Texas A&M University, and followed on our earlier funded proposal, which had been designed as a proof-of-concept study to determine if ocean color sensors such as the Coastal Zone Color Scanner (CZCS) could be used to detect and map large-scale mineral aerosol plumes. Despite the large spatial and temporal gaps inherent in the CZCS data coverage, our results from this initial study indicated that an ocean color sensor could indeed be used to detect aerosols. These encouraging results led us to propose the use of SeaWiFS data to study mineral aerosol transport and its impact on phytoplankton production. This proposal originally intended to make use of SeaWiFS images, but as the launch delay of SeaWiFS dragged on, we had to make do with other satellite data sets. Thus, the focus of this proposal became the CZCS image archive instead. I detail my results and accomplishments with this data set.

  11. Calibration between Color Camera and 3D LIDAR Instruments with a Polygonal Planar Board

    PubMed Central

    Park, Yoonsu; Yun, Seokmin; Won, Chee Sun; Cho, Kyungeun; Um, Kyhyun; Sim, Sungdae

    2014-01-01

    Calibration between color camera and 3D Light Detection And Ranging (LIDAR) equipment is an essential process for data fusion. The goal of this paper is to improve the calibration accuracy between a camera and a 3D LIDAR. In particular, we are interested in calibrating a low resolution 3D LIDAR with a relatively small number of vertical sensors. Our goal is achieved by employing a new methodology for the calibration board, which exploits 2D-3D correspondences. The 3D corresponding points are estimated from the scanned laser points on the polygonal planar board with adjacent sides. Since the lengths of adjacent sides are known, we can estimate the vertices of the board as a meeting point of two projected sides of the polygonal board. The estimated vertices from the range data and those detected from the color image serve as the corresponding points for the calibration. Experiments using a low-resolution LIDAR with 32 sensors show robust results. PMID:24643005
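
    The vertex-estimation idea described above can be sketched in 2D: fit a line to the scanned points of each adjacent side, then take the intersection of the two fitted lines as the board vertex. This is an illustrative reconstruction, not the authors' implementation:

```python
import numpy as np

def fit_line(pts):
    """Total-least-squares 2D line fit: returns unit normal n and offset
    d such that n @ p = d for points p on the line."""
    pts = np.asarray(pts, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]                      # direction of least variance = normal
    return n, float(n @ centroid)

def vertex(side_a, side_b):
    """Board vertex as the intersection of the two fitted side lines."""
    n1, d1 = fit_line(side_a)
    n2, d2 = fit_line(side_b)
    return np.linalg.solve(np.stack([n1, n2]), np.array([d1, d2]))

# two perpendicular sides whose lines meet at the origin
v = vertex([(0, 0), (1, 0), (2, 0)], [(0, 1), (0, 2), (0, 3)])
```

    Fitting lines to many noisy laser returns before intersecting them is what lets the vertex be located more precisely than any single scanned point.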

  12. The National Map - Orthoimagery

    USGS Publications Warehouse

    Mauck, James; Brown, Kim; Carswell, William J.

    2009-01-01

    Orthorectified digital aerial photographs and satellite images of 1-meter (m) pixel resolution or finer make up the orthoimagery component of The National Map. The process of orthorectification removes feature displacements and scale variations caused by terrain relief and sensor geometry. The result is a combination of the image characteristics of an aerial photograph or satellite image and the geometric qualities of a map. These attributes allow users to measure distance, calculate areas, determine shapes of features, calculate directions, determine accurate coordinates, determine land cover and use, perform change detection, and update maps. The standard digital orthoimage is a 1-m or finer resolution, natural color or color infrared product. Most are now produced as GeoTIFFs and accompanied by a Federal Geographic Data Committee (FGDC)-compliant metadata file. The primary source for 1-m data is the National Agriculture Imagery Program (NAIP) leaf-on imagery. The U.S. Geological Survey (USGS) utilizes NAIP imagery as the image layer on its 'Digital-Map' - a new generation of USGS topographic maps (http://nationalmap.gov/digital_map). However, many Federal, State, and local governments and organizations require finer resolutions to meet a myriad of needs. Most of these images are leaf-off, natural-color products at resolutions of 1 foot (ft) or finer.

  13. Imaging Detonations of Explosives

    DTIC Science & Technology

    2016-04-01

    made using a full-color single-camera pyrometer where wavelength resolution is achieved using the Bayer-type mask covering the sensor chip [17] and a ... many CHNO-based explosives (e.g., TNT [C7H5N3O6], the formulation C-4 [92% RDX, C3H6N6O6]); hot detonation products are mainly soot and permanent ... (unreferenced). Essentially, 2 light sensors (cameras), each filtered over a narrow wavelength region, observe an event over the same line of sight. The

  14. In vitro and in vivo comparison of optics and performance of a distal sensor ureteroscope versus a standard fiberoptic ureteroscope.

    PubMed

    Lusch, Achim; Abdelshehid, Corollos; Hidas, Guy; Osann, Kathryn E; Okhunov, Zhamshid; McDougall, Elspeth; Landman, Jaime

    2013-07-01

    Recent advances in distal sensor technologies have made distal sensor ureteroscopes both commercially and technically feasible. We evaluated the performance characteristics and optics of a new-generation distal sensor ureteroscope, the Flex-X(C) (X(C)), and a standard flexible fiberoptic ureteroscope, the Flex-X(2) (X(2)), both from Karl Storz, Tuttlingen, Germany. The ureteroscopes were compared for active deflection, irrigation flow, and optical characteristics. Each ureteroscope was evaluated with an empty working channel and with various accessories. Optical characteristics (resolution, grayscale imaging, and color representation) were measured using United States Air Force test targets. We digitally recorded a porcine renal ureteroscopy and laser ablation of a stone with the X(2) and with the X(C). Edited footage of the recorded procedure was shown to expert surgeons (n=8) on a high-definition monitor for evaluation by questionnaire for image quality and performance. The X(C) had higher resolution than the X(2) at both 20 mm (3.17 vs 1.41 lines/mm) and 10 mm (10.1 vs 3.56 lines/mm) (P=0.003, P=0.002). Color representation was better in the X(C). There was no difference in contrast quality between the two ureteroscopes. For each individual ureteroscope, the upward deflection was greater than the downward deflection, both with and without accessories. When compared with the X(2), the X(C) manifested superior deflection and flow (P<0.0005, P<0.05) with and without an accessory present in the working channel. Observers deemed the distal sensor ureteroscope superior in visualization in clear and bloody fields, as well as for illumination (P=0.0005, P=0.002, P=0.0125). In this in vitro and porcine evaluation, the distal sensor ureteroscope provided significantly improved resolution, color representation, and visualization in the upper urinary tract compared with a standard fiberoptic ureteroscope. The overall deflection was also better in the X(C), and deflection as well as flow rate were less impaired by the various accessories.

  15. High-resolution Imaging of pH in Alkaline Sediments and Water Based on a New Rapid Response Fluorescent Planar Optode

    NASA Astrophysics Data System (ADS)

    Han, Chao; Yao, Lei; Xu, Di; Xie, Xianchuan; Zhang, Chaosheng

    2016-05-01

    A new dual-lumophore optical sensor combined with a robust RGB referencing method was developed for two-dimensional (2D) pH imaging in alkaline sediments and water. The pH sensor film consisted of a proton-permeable polymer (PVC) in which two dyes with different pH sensitivities and emission colors were entrapped: (1) chloro phenyl imino propenyl aniline (CPIPA) and (2) the coumarin dye Macrolex® fluorescence yellow 10 GN (MFY-10 GN). Calibration experiments revealed the typical sigmoid function and temperature dependencies. The sensor featured high sensitivity and fast response over the alkaline working range from pH 7.5 to pH 10.5. Cross-sensitivity towards ionic strength (IS) was found to be negligible for freshwater when IS <0.1 M. The sensor had a spatial resolution of approximately 22 μm and a response time of <120 s when going from pH 7.0 to 9.0. The feasibility of the sensor was verified against pH microelectrode measurements. An example of a pH image obtained in natural freshwater sediment and water, associated with the photosynthesis of Vallisneria spiralis, was also presented, suggesting that the sensor holds great promise for field applications.
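
    The sigmoid calibration mentioned above can be illustrated by inverting a Boltzmann-type response to map a referenced intensity ratio back to pH. All calibration constants below are made-up illustration values, not the paper's fitted parameters:

```python
import numpy as np

def ratio_to_ph(R, Rmin=0.2, Rmax=1.8, pH0=9.0, b=0.5):
    """Invert a Boltzmann sigmoid calibration
    R(pH) = Rmin + (Rmax - Rmin) / (1 + exp((pH - pH0)/b))
    to recover pH from the referenced intensity ratio R. All constants
    here are made-up illustration values."""
    R = np.clip(np.asarray(R, dtype=float), Rmin + 1e-9, Rmax - 1e-9)
    return pH0 + b * np.log((Rmax - R) / (R - Rmin))

# round trip: simulate the ratio at pH 8.5, then invert it
R = 0.2 + (1.8 - 0.2) / (1 + np.exp((8.5 - 9.0) / 0.5))
ph_est = ratio_to_ph(R)
```

    Applied per pixel to the referenced RGB ratio image, this inversion yields the 2D pH map; the flat tails of the sigmoid are why the useful working range is bounded.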

  16. High-resolution Imaging of pH in Alkaline Sediments and Water Based on a New Rapid Response Fluorescent Planar Optode

    PubMed Central

    Han, Chao; Yao, Lei; Xu, Di; Xie, Xianchuan; Zhang, Chaosheng

    2016-01-01

    A new dual-lumophore optical sensor combined with a robust RGB referencing method was developed for two-dimensional (2D) pH imaging in alkaline sediments and water. The pH sensor film consisted of a proton-permeable polymer (PVC) in which two dyes with different pH sensitivities and emission colors were entrapped: (1) chloro phenyl imino propenyl aniline (CPIPA) and (2) the coumarin dye Macrolex® fluorescence yellow 10 GN (MFY-10 GN). Calibration experiments revealed the typical sigmoid function and temperature dependencies. The sensor featured high sensitivity and fast response over the alkaline working range from pH 7.5 to pH 10.5. Cross-sensitivity towards ionic strength (IS) was found to be negligible for freshwater when IS <0.1 M. The sensor had a spatial resolution of approximately 22 μm and a response time of <120 s when going from pH 7.0 to 9.0. The feasibility of the sensor was verified against pH microelectrode measurements. An example of a pH image obtained in natural freshwater sediment and water, associated with the photosynthesis of Vallisneria spiralis, was also presented, suggesting that the sensor holds great promise for field applications. PMID:27199163

  17. Quantitative analysis and temperature-induced variations of moiré pattern in fiber-coupled imaging sensors.

    PubMed

    Karbasi, Salman; Arianpour, Ashkan; Motamedi, Nojan; Mellette, William M; Ford, Joseph E

    2015-06-10

    Imaging fiber bundles can map the curved image surface formed by some high-performance lenses onto flat focal plane detectors. The relative alignment between the focal plane array pixels and the quasi-periodic fiber-bundle cores can impose an undesirable space variant moiré pattern, but this effect may be greatly reduced by flat-field calibration, provided that the local responsivity is known. Here we demonstrate a stable metric for spatial analysis of the moiré pattern strength, and use it to quantify the effect of relative sensor and fiber-bundle pitch, and that of the Bayer color filter. We measure the thermal dependence of the moiré pattern, and the achievable improvement by flat-field calibration at different operating temperatures. We show that a flat-field calibration image at a desired operating temperature can be generated using linear interpolation between white images at several fixed temperatures, comparing the final image quality with an experimentally acquired image at the same temperature.
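
    The temperature-interpolated flat-field idea can be sketched as a per-pixel linear interpolation between white images captured at fixed temperatures, followed by a standard flat-field correction. The function below is an illustrative sketch, not the authors' code:

```python
import numpy as np

def flat_field_at(T, temps, whites, raw):
    """Synthesize a white (flat-field) frame at temperature T by per-pixel
    linear interpolation between white images captured at the fixed,
    sorted temperatures `temps`, then flat-field-correct `raw`."""
    temps = np.asarray(temps, dtype=float)
    i = int(np.searchsorted(temps, T))
    i = min(max(i, 1), len(temps) - 1)   # clamp to a valid bracket
    t0, t1 = temps[i - 1], temps[i]
    a = (T - t0) / (t1 - t0)
    white = (1 - a) * whites[i - 1] + a * whites[i]
    return raw * white.mean() / white    # divide out the moire pattern

whites = np.stack([np.full((2, 2), 1.0), np.full((2, 2), 3.0)])
corrected = flat_field_at(5.0, [0.0, 10.0], whites, np.ones((2, 2)))
```

    Interpolating the white image, rather than storing one per operating temperature, is what makes the calibration practical over a continuous temperature range.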

  18. Adaptive fusion of infrared and visible images in dynamic scene

    NASA Astrophysics Data System (ADS)

    Yang, Guang; Yin, Yafeng; Man, Hong; Desai, Sachi

    2011-11-01

    Multi-modality sensor fusion has been widely employed in various surveillance and military applications. A variety of image fusion techniques, including PCA, wavelet, curvelet and HSV, have been proposed in recent years to improve human visual perception for object detection. One of the main challenges for visible and infrared image fusion is to automatically determine an optimal fusion strategy for different input scenes at an acceptable computational cost. In this paper, we propose a fast and adaptive feature-selection-based image fusion method to obtain a high-contrast image from visible and infrared sensors for target detection. First, fuzzy c-means clustering is applied to the infrared image to highlight possible hotspot regions, which are considered potential target locations. The region surrounding each target area is then segmented as background. Image fusion is then applied locally to the selected target and background regions by computing different linear combinations of color components from the registered visible and infrared images. After obtaining the different fused images, histogram distributions are computed on these local fusion images as the fusion feature set. A variance ratio based on Linear Discriminant Analysis (LDA) is used to rank the feature set, and the most discriminative feature is selected for whole-image fusion. Because feature selection is performed over time, the process dynamically determines the most suitable feature for image fusion in different scenes. Experiments are conducted on the OSU Color-Thermal database and the TNO Human Factors dataset. The fusion results indicate that our proposed method achieves competitive performance compared with other fusion algorithms at a relatively low computational cost.
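
    The feature-selection step can be sketched as trying several linear visible/infrared combinations and keeping the one with the largest Fisher-style variance ratio between target and background regions. The candidate weights and scoring details below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def best_fusion(vis, ir, tgt_mask, weights=(0.25, 0.5, 0.75)):
    """Evaluate several linear visible/IR combinations and keep the one
    whose target/background separation (Fisher-style variance ratio) is
    largest. The candidate weights are illustrative."""
    best, best_score = None, -np.inf
    for w in weights:
        f = w * vis + (1 - w) * ir
        t, bkg = f[tgt_mask], f[~tgt_mask]
        score = (t.mean() - bkg.mean()) ** 2 / (t.var() + bkg.var() + 1e-9)
        if score > best_score:
            best, best_score = f, score
    return best, best_score

# a hotspot visible only in the IR channel favors the IR-heavy candidate
vis = np.zeros((4, 4))
ir = np.zeros((4, 4))
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
ir[mask] = 1.0
fused, score = best_fusion(vis, ir, mask)
```

    Re-running the selection as scenes change is what makes the overall scheme adaptive rather than a fixed fusion rule.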

  19. Human grasping database for activities of daily living with depth, color and kinematic data streams.

    PubMed

    Saudabayev, Artur; Rysbek, Zhanibek; Khassenova, Raykhan; Varol, Huseyin Atakan

    2018-05-29

    This paper presents a grasping database collected from multiple human subjects for activities of daily living in unstructured environments. The main strength of this database is the use of three different sensing modalities: color images from a head-mounted action camera, distance data from a depth sensor on the dominant arm and upper body kinematic data acquired from an inertial motion capture suit. 3826 grasps were identified in the data collected during 9-hours of experiments. The grasps were grouped according to a hierarchical taxonomy into 35 different grasp types. The database contains information related to each grasp and associated sensor data acquired from the three sensor modalities. We also provide our data annotation software written in Matlab as an open-source tool. The size of the database is 172 GB. We believe this database can be used as a stepping stone to develop big data and machine learning techniques for grasping and manipulation with potential applications in rehabilitation robotics and intelligent automation.

  20. IR CMOS: near infrared enhanced digital imaging (Presentation Recording)

    NASA Astrophysics Data System (ADS)

    Pralle, Martin U.; Carey, James E.; Joy, Thomas; Vineis, Chris J.; Palsule, Chintamani

    2015-08-01

    SiOnyx has demonstrated imaging at light levels below 1 mLux (moonless starlight) at video frame rates with a 720P CMOS image sensor in a compact, low-latency camera. Low-light imaging is enabled by the combination of enhanced quantum efficiency in the near infrared together with state-of-the-art low-noise image sensor design. The quantum efficiency enhancements are achieved by applying Black Silicon, SiOnyx's proprietary ultrafast-laser semiconductor processing technology. In the near infrared, silicon's native indirect bandgap results in low absorption coefficients and long absorption lengths. The Black Silicon nanostructured layer fundamentally disrupts this paradigm by enhancing the absorption of light within a thin pixel layer, making 5 microns of silicon equivalent to over 300 microns of standard silicon. This results in a demonstrated 10-fold improvement in near-infrared sensitivity over incumbent imaging technology while maintaining complete compatibility with standard CMOS image sensor process flows. Applications include surveillance, night vision, and 1064 nm laser see-spot imaging. Imaging performance metrics will be discussed. Demonstrated performance characteristics: pixel size: 5.6 and 10 um; array size: 720P/1.3 Mpix; frame rate: 60 Hz; read noise: 2 ele/pixel; spectral sensitivity: 400 to 1200 nm (with 10x QE at 1064 nm); daytime imaging: color (Bayer pattern); nighttime imaging: moonless starlight conditions; 1064 nm laser imaging: daytime imaging out to 2 km.

  1. Non-destructive Phenotyping of Lettuce Plants in Early Stages of Development with Optical Sensors

    PubMed Central

    Simko, Ivan; Hayes, Ryan J.; Furbank, Robert T.

    2016-01-01

    Rapid development of plants is important for the production of ‘baby-leaf’ lettuce that is harvested when plants reach the four- to eight-leaf stage of growth. However, environmental factors, such as high or low temperature, or elevated concentrations of salt, inhibit lettuce growth. Therefore, non-destructive evaluations of plants can provide valuable information to breeders and growers. The objective of the present study was to test the feasibility of using non-destructive phenotyping with optical sensors for the evaluation of lettuce plants in early stages of development. We performed a series of experiments to determine if hyperspectral imaging and chlorophyll fluorescence imaging can detect phenotypic changes manifested on lettuce plants subjected to extreme temperature and salinity stress treatments. Our results indicate that top-view optical sensors alone can accurately determine plant size to approximately 7 g fresh weight. Hyperspectral imaging analysis was able to detect changes in the total chlorophyll (RCC) and anthocyanin (RAC) content, while chlorophyll fluorescence imaging revealed photoinhibition and reduction of plant growth caused by the extreme growing temperatures (3 and 39°C) and salinity (100 mM NaCl). Though no significant correlation was found between Fv/Fm and the decrease in plant growth due to stress when comparisons were made across multiple accessions, our results indicate that lettuce plants have a high adaptability to both low (3°C) and high (39°C) temperatures, with no permanent damage to the photosynthetic apparatus and fast recovery of plants after moving them to the optimal (21°C) temperature. We have also detected a strong relationship between visual rating of the green- and red-leaf color intensity and RCC and RAC, respectively. Differences in RAC among accessions suggest that selection for intense red color may be easier to perform at somewhat lower than the optimal temperature.
This study serves as a proof of concept that optical sensors can be successfully used as tools for breeders when evaluating young lettuce plants. Moreover, we were able to identify the locus for light green leaf color (qLG4), and position this locus on the molecular linkage map of lettuce, which shows that these techniques have sufficient resolution to be used in a genetic context in lettuce. PMID:28083011

  2. Visual enhancement of unmixed multispectral imagery using adaptive smoothing

    USGS Publications Warehouse

    Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.

    2004-01-01

    Adaptive smoothing (AS) has been previously proposed as a method to smooth uniform regions of an image, retain contrast edges, and enhance edge boundaries. The method is an implementation of the anisotropic diffusion process, which results in a gray-scale image. This paper discusses modifications to the AS method for application to multi-band data, which result in a color-segmented image. The process was used to visually enhance the three most distinct abundance-fraction images produced by the Lagrange constraint neural network learning-based unmixing of Landsat 7 Enhanced Thematic Mapper Plus multispectral sensor data. A mutual information-based method was applied to select the three most distinct fraction images for subsequent visualization as a red, green, and blue composite. A reported image restoration technique (partial restoration) was applied to the multispectral data to reduce unmixing error, although evaluation of the performance of this technique was beyond the scope of this paper. The modified smoothing process resulted in a color-segmented image with homogeneous regions separated by sharpened, coregistered multi-band edges. There was improved class separation with the segmented image, which is important to subsequent operations involving data classification.
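
    The anisotropic diffusion process underlying adaptive smoothing can be sketched with the classic Perona-Malik scheme, which attenuates smoothing across strong gradients. The parameters (and the wrap-around boundary handling via np.roll) are illustrative choices, not the paper's settings:

```python
import numpy as np

def anisotropic_diffusion(img, niter=10, kappa=0.1, step=0.2):
    """Perona-Malik diffusion: smooths uniform regions while preserving
    contrast edges. Boundaries wrap via np.roll; parameters are
    illustrative."""
    u = np.asarray(img, dtype=float).copy()
    for _ in range(niter):
        dn = np.roll(u, -1, axis=0) - u   # differences toward 4-neighbors
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
        u = u + step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# a flat image is a fixed point; low-amplitude noise gets smoothed out
flat = anisotropic_diffusion(np.ones((5, 5)))
rng = np.random.default_rng(0)
noisy = 1.0 + 0.01 * rng.standard_normal((8, 8))
smoothed = anisotropic_diffusion(noisy)
```

    Large differences d make g(d) small, so diffusion nearly stops at contrast edges while small (noise-scale) differences are averaged away; the multi-band extension in the paper couples this edge test across bands.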

  3. New space sensor and mesoscale data analysis

    NASA Technical Reports Server (NTRS)

    Hickey, John S.

    1987-01-01

    The developed Earth Science and Application Division (ESAD) system/software provides the research scientist with the following capabilities: an extensive database management capability to convert various experiment data types into a standard format; an interactive analysis and display package (AVE80); an interactive imaging/color graphics capability utilizing Apple III and IBM PC workstations integrated into the ESAD computer system; and a local and remote smart-terminal capability which provides color video, graphics, and Laserjet output. Recommendations for updating and enhancing the performance of the ESAD computer system are listed.

  4. The Hyperspectral Imager for the Coastal Ocean (HICO (trademark)) Provides a New View of the Coastal Ocean

    DTIC Science & Technology

    2012-02-09

    The calibrated data are then sent to NRL Stennis Space Center (NRL-SSC) for further processing using the NRL SSC Automated Processing System (APS...hyperspectral sensor in space we have not previously developed automated processing for hyperspectral ocean color data. The hyperspectral processing branch

  5. Quality Assessment and Comparison of Smartphone and Leica C10 Laser Scanner Based Point Clouds

    NASA Astrophysics Data System (ADS)

    Sirmacek, Beril; Lindenbergh, Roderik; Wang, Jinhu

    2016-06-01

    3D urban models are valuable for urban map generation, environment monitoring, safety planning and educational purposes. For 3D measurement of urban structures, airborne laser scanning sensors or multi-view satellite images are generally used as a data source. However, close-range sensors (such as terrestrial laser scanners) and low-cost cameras (which can generate point clouds based on photogrammetry) can provide denser sampling of 3D surface geometry. Unfortunately, terrestrial laser scanning sensors are expensive, and trained personnel are needed to use them for point cloud acquisition. A potentially effective alternative is to generate 3D models from a low-cost smartphone sensor. Herein, we show examples of using smartphone camera images to generate 3D models of urban structures. We compare a smartphone-based 3D model of an example structure with a terrestrial laser scanning point cloud of the same structure. This comparison gives us the opportunity to discuss the differences in terms of geometrical correctness, as well as the advantages, disadvantages and limitations in data acquisition and processing. We also discuss how smartphone-based point clouds can help to solve further problems in 3D urban model generation in a practical way. We show that terrestrial laser scanning point clouds which do not have color information can be colored using smartphones. The experiments, discussions and scientific findings might be insightful for future studies in fast, easy and low-cost 3D urban model generation.

  6. Algorithm Science to Operations for the National Polar-orbiting Operational Environmental Satellite System (NPOESS) Visible/Infrared Imager/Radiometer Suite (VIIRS)

    NASA Technical Reports Server (NTRS)

    Duda, James L.; Barth, Suzanna C

    2005-01-01

    The VIIRS sensor provides measurements for 22 Environmental Data Records (EDRs) addressing the atmosphere, ocean surface temperature, ocean color, land parameters, aerosols, imaging for clouds and ice, and more. That is, VIIRS collects visible and infrared radiometric data of the Earth's atmosphere, ocean, and land surfaces. Data types include atmospheric, clouds, Earth radiation budget, land/water and sea surface temperature, ocean color, and low-light imagery. This wide scope of measurements calls for the preparation of a multiplicity of Algorithm Theoretical Basis Documents (ATBDs) and, additionally, for intermediate products such as the cloud mask. Furthermore, VIIRS interacts with three or more other sensors. This paper addresses selected and crucial elements of the process being used to convert and test an immense volume of maturing and changing science code into the initial operational source code in preparation for the launch of NPP. The integrity of the original science code is maintained and enhanced via baseline comparisons when re-hosted, in addition to multiple planned code performance reviews.

  7. Regularization destriping of remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Basnayake, Ranil; Bollt, Erik; Tufillaro, Nicholas; Sun, Jie; Gierach, Michelle

    2017-07-01

    We illustrate the utility of variational destriping for ocean color images from both multispectral and hyperspectral sensors. In particular, we examine data from a filter spectrometer, the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (NPP) orbiter, and an airborne grating spectrometer, the Jet Propulsion Laboratory's (JPL) hyperspectral Portable Remote Imaging Spectrometer (PRISM) sensor. We solve the destriping problem using a variational regularization method, applying spatially varying weights to preserve the other features of the image during the destriping process. The target functional penalizes the neighborhood of stripes (strictly, directionally uniform features) while promoting data fidelity, and the functional is minimized by solving the Euler-Lagrange equations with an explicit finite-difference scheme. We show the accuracy of our method on a benchmark data set representing the sea surface temperature off the coast of Oregon, USA. Technical details, such as how to impose continuity across data gaps using inpainting, are also described.
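
The variational approach can be illustrated in miniature. A hedged sketch (not the authors' weighted functional): penalize variation across horizontal stripes while promoting data fidelity, and minimize by explicit finite-difference descent on the Euler-Lagrange equation:

```python
import numpy as np

def destripe(f, lam=2.0, dt=0.2, n_iter=200):
    """Attenuate horizontal stripes in image f by gradient descent on
    E(u) = 0.5*||u - f||^2 + 0.5*lam*||du/dy||^2, whose Euler-Lagrange
    equation is (u - f) - lam * u_yy = 0 (explicit scheme, periodic rows)."""
    u = f.astype(float).copy()
    for _ in range(n_iter):
        # second difference across rows (the across-stripe direction)
        u_yy = np.roll(u, -1, axis=0) - 2.0 * u + np.roll(u, 1, axis=0)
        u -= dt * ((u - f) - lam * u_yy)
    return u
```

High-frequency row-to-row oscillation (the stripe component) is strongly damped by the steady-state solution, while structure that is constant along the across-stripe direction passes through unchanged; the paper's spatial weighting additionally protects genuine edges.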

  8. Practical holography III; Proceedings of the Meeting, Los Angeles, CA, Jan. 17, 18, 1989

    NASA Astrophysics Data System (ADS)

    Benton, Stephen A.

    Various papers on practical holography are presented. Individual topics addressed include: design of large format commercial display holograms, design of a one-step full-color holographic recording system, color reflection holography, full color rainbow hologram using a photoresist plate, secondary effects in processing holograms, archival properties of holograms, survey of properties of volume holographic materials, image stability of DMP-128 holograms, activation monitor for DMP-128, microwave drying effects on dichromated gelatin holograms, sensitization process of dichromated gelatin, holographic optics for vision systems, holographic fingerprint sensor, cross-talk and cross-coupling in multiplexed holographic gratings, compact illuminators for transmission holograms, solar holoconcentrators in dichromated grains, three-dimensional display of scientific data, holographic liquid crystal displays, in situ swelling for holographic color control.

  9. Improved GSO Optimized ESN Soft-Sensor Model of Flotation Process Based on Multisource Heterogeneous Information Fusion

    PubMed Central

    Wang, Jie-sheng; Han, Shuang; Shen, Na-na

    2014-01-01

    For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, an echo state network (ESN) based fusion soft-sensor model optimized by the improved glowworm swarm optimization (GSO) algorithm is proposed. Firstly, the color features (saturation and brightness) and texture features (angular second moment, sum entropy, inertia moment, etc.) based on the grey-level co-occurrence matrix (GLCM) are adopted to describe the visual characteristics of the flotation froth image. Then the kernel principal component analysis (KPCA) method is used to reduce the dimensionality of the high-dimensional input vector composed of the flotation froth image characteristics and process data, and to extract the nonlinear principal components in order to reduce the ESN dimension and network complexity. The ESN soft-sensor model of the flotation process is optimized by the GSO algorithm with a congestion factor. Simulation results show that the model has better generalization and prediction accuracy, meeting the online soft-sensor requirements of real-time control in the flotation process. PMID:24982935
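
The GLCM texture descriptors named above can be computed in a few lines. A small pure-NumPy sketch for a single horizontal-neighbour offset and three of the features (a simplification of the full Haralick set; real froth analysis would average several offsets and angles):

```python
import numpy as np

def glcm_features(img, levels=8):
    """GLCM for the horizontal-neighbour offset and three Haralick-style
    features: angular second moment, entropy, and inertia (contrast).
    `img` is assumed to hold intensities in [0, 1]."""
    q = np.clip((img * levels).astype(int), 0, levels - 1)  # quantise grey levels
    pairs = np.stack([q[:, :-1].ravel(), q[:, 1:].ravel()], axis=1)
    glcm = np.zeros((levels, levels))
    for a, b in pairs:                      # count co-occurring level pairs
        glcm[a, b] += 1
    p = glcm / glcm.sum()                   # normalise to joint probabilities
    i, j = np.indices(p.shape)
    asm = np.sum(p ** 2)                    # angular second moment (uniformity)
    nz = p[p > 0]
    entropy = -np.sum(nz * np.log2(nz))     # texture randomness
    inertia = np.sum(p * (i - j) ** 2)      # local contrast / inertia moment
    return {"asm": asm, "entropy": entropy, "inertia": inertia}
```

A perfectly uniform image concentrates all co-occurrence mass in one cell (ASM = 1, entropy = 0), while a rough froth image spreads it out, which is what makes these features informative inputs for the soft-sensor model.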

  10. Multi-Feature Classification of Multi-Sensor Satellite Imagery Based on Dual-Polarimetric Sentinel-1A, Landsat-8 OLI, and Hyperion Images for Urban Land-Cover Classification

    PubMed Central

    Pan, Jianjun

    2018-01-01

    This paper focuses on evaluating the ability and contribution of backscatter intensity, texture, coherence, and color features extracted from Sentinel-1A data for urban land cover classification, and on comparing different multi-sensor land cover mapping methods to improve classification accuracy. Both Landsat-8 OLI and Hyperion images were also acquired, in combination with Sentinel-1A data, to explore the potential of different multi-sensor urban land cover mapping methods to improve classification accuracy. The classification was performed using a random forest (RF) method. The results showed that the optimal window size for the combination of all texture features was 9 × 9, while the optimal window size differed for each individual texture feature. Of the four feature types, the texture features contributed the most to the classification, followed by the coherence and backscatter intensity features; the color features had the least impact on the urban land cover classification. Satisfactory classification results can be obtained using only the combination of texture and coherence features, with an overall accuracy up to 91.55% and a kappa coefficient up to 0.8935. Among all combinations of Sentinel-1A-derived features, the combination of all four features gave the best classification result. Multi-sensor urban land cover mapping achieved higher classification accuracy. The combination of Sentinel-1A and Hyperion data achieved higher classification accuracy than the combination of Sentinel-1A and Landsat-8 OLI images, with an overall accuracy of up to 99.12% and a kappa coefficient up to 0.9889. When Sentinel-1A data were added to Hyperion images, the overall accuracy and kappa coefficient increased by 4.01% and 0.0519, respectively. PMID:29382073
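
The feature-stacking-plus-random-forest pipeline can be sketched with scikit-learn on synthetic stand-in data (the feature layout, class count, and accuracy here are illustrative assumptions, not the paper's data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Synthetic stand-in: 500 pixels with a stacked feature vector, e.g.
# backscatter (2) + texture (4) + coherence (1) = 7 features per pixel.
rng = np.random.default_rng(42)
n = 500
labels = rng.integers(0, 3, n)                            # three land-cover classes
features = rng.standard_normal((n, 7)) + labels[:, None]  # class-dependent shift

X_tr, X_te, y_tr, y_te = train_test_split(
    features, labels, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)

oa = accuracy_score(y_te, pred)        # overall accuracy
kappa = cohen_kappa_score(y_te, pred)  # kappa coefficient, as reported in the paper
```

Overall accuracy and the kappa coefficient, the two figures quoted in the abstract, fall out directly from the held-out predictions.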

  11. Micro-optical system based 3D imaging for full HD depth image capturing

    NASA Astrophysics Data System (ADS)

    Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan

    2012-03-01

    A 20 MHz-switching, high-speed image shutter device for 3D image capturing and its application to a system prototype are presented. For 3D image capturing, the system utilizes the Time-of-Flight (TOF) principle by means of a 20MHz high-speed micro-optical image modulator, the so-called 'optical shutter'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure with diffractive mirrors and an optical resonance cavity that maximizes the magnitude of optical modulation. The optical shutter device is specially designed and fabricated with low resistance-capacitance cell structures having a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The suggested novel optical shutter device enables capture of a full HD depth image with mm-scale depth accuracy, the largest depth-image resolution among the state of the art, which had previously been limited to VGA. The 3D camera prototype realizes a color/depth concurrent sensing optical architecture to capture 14Mp color and full HD depth images simultaneously. The resulting high-definition color/depth images and their capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical shutter design, fabrication, characterization, 3D camera system prototype and image test results.
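
Time-of-Flight ranging at a 20 MHz modulation frequency implies a fixed phase-to-depth scaling. A short sketch of the standard TOF relation (this is the textbook formula, not the device's actual signal chain):

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s
F_MOD = 20e6        # 20 MHz shutter modulation frequency

def tof_depth(phase_shift_rad):
    """Depth from the phase shift between emitted and received modulated IR:
    d = c * dphi / (4 * pi * f)."""
    return C * phase_shift_rad / (4 * np.pi * F_MOD)

# the phase wraps at 2*pi, so the unambiguous range is c / (2f) ~ 7.5 m at 20 MHz
unambiguous_range = C / (2 * F_MOD)
```

A higher modulation frequency improves depth resolution per radian of measurable phase but shortens the unambiguous range, which is why 20 MHz is a reasonable choice for room-scale scenes.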

  12. Application of 3-D imaging sensor for tracking minipigs in the open field test.

    PubMed

    Kulikov, Victor A; Khotskin, Nikita V; Nikitin, Sergey V; Lankin, Vasily S; Kulikov, Alexander V; Trapezov, Oleg V

    2014-09-30

    The minipig is a promising model in neurobiology and psychopharmacology. However, automated tracking of minipig behavior is still an unresolved problem. The study was carried out on white, agouti and black (or spotted) minipiglets (n=108) bred in the Institute of Cytology and Genetics. The new method of automated tracking of minipig behavior is based on the Microsoft Kinect 3-D image sensor and 3-D image reconstruction with EthoStudio software. The algorithms for evaluating distance run and time in the center were adapted for 3-D image data, and a new algorithm for quantifying vertical activity was developed. The 3-D imaging system successfully detects white, black, spotted and agouti pigs in the open field test (OFT). No effect of sex or coat color on horizontal activity (distance run), vertical activity, or time in the center was shown. Agouti pigs explored the arena more intensively than white or black animals. The OFT behavioral traits were compared with the fear reaction to the experimenter. Time in the center of the OFT was positively correlated with fear reaction rank (ρ=0.21, p<0.05). Black pigs were significantly more fearful than white or agouti animals. The 3-D imaging system has three advantages over existing automated tracking systems: it avoids perspective distortion, distinguishes animals of any color from any background, and automatically evaluates vertical activity. The 3-D imaging system can be successfully applied for automated measurement of minipig behavior in neurobiological and psychopharmacological experiments. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Depth map occlusion filling and scene reconstruction using modified exemplar-based inpainting

    NASA Astrophysics Data System (ADS)

    Voronin, V. V.; Marchuk, V. I.; Fisunov, A. V.; Tokareva, S. V.; Egiazarian, K. O.

    2015-03-01

    RGB-D sensors are relatively inexpensive and are commercially available off-the-shelf. However, owing to their low complexity, several artifacts are encountered in the depth map, such as holes, misalignment between the depth and color images, and a lack of sharp object boundaries. Depth maps generated by Kinect cameras also contain a significant amount of missing pixels and strong noise, limiting their usability in many computer vision applications. In this paper, we present an efficient hole filling and damaged region restoration method that improves the quality of the depth maps obtained with the Microsoft Kinect device. The proposed approach is based on modified exemplar-based inpainting and LPA-ICI filtering, exploiting the correlation between color and depth values in local image neighborhoods. As a result, edges of the objects are sharpened and aligned with the objects in the color image. Several examples considered in this paper show the effectiveness of the proposed approach for large hole removal as well as recovery of small regions on several test images of depth maps. We perform a comparative study and show that, statistically, the proposed algorithm delivers superior quality results compared to existing algorithms.
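
The core idea of exploiting color/depth correlation can be shown with a much simpler stand-in: a joint-bilateral-style hole fill that averages valid depth neighbours weighted by colour similarity. This sketch assumes a grayscale guide image and is not the paper's exemplar-based inpainting:

```python
import numpy as np

def fill_depth_holes(depth, color, win=3, sigma_c=0.1):
    """Fill zero-valued (missing) depth pixels with a colour-guided
    weighted average of valid neighbours in a (2*win+1) window."""
    d = depth.astype(float).copy()
    h, w = d.shape
    for y, x in np.argwhere(d == 0):
        y0, y1 = max(0, y - win), min(h, y + win + 1)
        x0, x1 = max(0, x - win), min(w, x + win + 1)
        patch_d = d[y0:y1, x0:x1]
        patch_c = color[y0:y1, x0:x1]
        valid = patch_d > 0                     # only measured depth counts
        if not valid.any():
            continue
        # neighbours with similar colour get high weight, so the fill
        # respects object boundaries visible in the colour image
        wgt = np.exp(-((patch_c - color[y, x]) ** 2) / (2 * sigma_c ** 2))
        wgt = wgt * valid
        d[y, x] = np.sum(wgt * patch_d) / np.sum(wgt)
    return d
```

Near a depth discontinuity, a hole pixel whose colour matches the foreground object is filled from foreground depths rather than blended across the edge, which is the same correlation the LPA-ICI step exploits.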

  14. Colorimetric Recognition of Aldehydes and Ketones.

    PubMed

    Li, Zheng; Fang, Ming; LaGasse, Maria K; Askim, Jon R; Suslick, Kenneth S

    2017-08-07

    A colorimetric sensor array has been designed for the identification of and discrimination among aldehydes and ketones in vapor phase. Due to rapid chemical reactions between the solid-state sensor elements and gaseous analytes, distinct color difference patterns were produced and digitally imaged for chemometric analysis. The sensor array was developed from classical spot tests using aniline and phenylhydrazine dyes that enable molecular recognition of a wide variety of aliphatic or aromatic aldehydes and ketones, as demonstrated by hierarchical cluster, principal component, and support vector machine analyses. The aldehyde/ketone-specific sensors were further employed for differentiation among and identification of ten liquor samples (whiskies, brandy, vodka) and ethanol controls, showing its potential applications in the beverage industry. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Quality and noise measurements in mobile phone video capture

    NASA Astrophysics Data System (ADS)

    Petrescu, Doina; Pincenti, John

    2011-02-01

    The quality of videos captured with mobile phones has become increasingly important particularly since resolutions and formats have reached a level that rivals the capabilities available in the digital camcorder market, and since many mobile phones now allow direct playback on large HDTVs. The video quality is determined by the combined quality of the individual parts of the imaging system including the image sensor, the digital color processing, and the video compression, each of which has been studied independently. In this work, we study the combined effect of these elements on the overall video quality. We do this by evaluating the capture under various lighting, color processing, and video compression conditions. First, we measure full reference quality metrics between encoder input and the reconstructed sequence, where the encoder input changes with light and color processing modifications. Second, we introduce a system model which includes all elements that affect video quality, including a low light additive noise model, ISP color processing, as well as the video encoder. Our experiments show that in low light conditions and for certain choices of color processing the system level visual quality may not improve when the encoder becomes more capable or the compression ratio is reduced.
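
A full-reference quality metric of the kind measured between the encoder input and the reconstructed sequence can be as simple as per-frame PSNR. A minimal sketch (the abstract does not specify which metrics the authors used):

```python
import numpy as np

def psnr(ref, rec, max_val=255.0):
    """Full-reference peak signal-to-noise ratio between a reference
    frame and its reconstruction, in dB."""
    mse = np.mean((ref.astype(float) - rec.astype(float)) ** 2)
    if mse == 0:
        return float("inf")      # identical frames
    return 10 * np.log10(max_val ** 2 / mse)
```

Because the encoder input itself changes with lighting and colour processing here, PSNR against that input measures only compression loss; the system-level conclusion in the abstract is precisely that this can stay flat while visual quality does not improve.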

  16. Real-time stereo generation for surgical vision during minimal invasive robotic surgery

    NASA Astrophysics Data System (ADS)

    Laddi, Amit; Bhardwaj, Vijay; Mahapatra, Prasant; Pankaj, Dinesh; Kumar, Amod

    2016-03-01

    This paper proposes a framework for 3D surgical vision for minimally invasive robotic surgery. It presents an approach for generating a three-dimensional view of live in-vivo surgical procedures from two images captured by a very small, full-resolution camera sensor rig. A pre-processing scheme is employed to enhance image quality and equalize the color profiles of the two images. Polarized projection using interlacing of the two images gives a smooth, strain-free three-dimensional view. The algorithm runs in real time at full HD resolution.

  17. Guinea-Bissau

    NASA Image and Video Library

    2017-12-08

    This Landsat 7 image of Guinea-Bissau, a small country in West Africa, shows the complex patterns of the country's shallow coastal waters, where silt carried by the Geba and other rivers washes out into the Atlantic Ocean. This false-color composite image, made using infrared, red and blue wavelengths to bring out details in the silt, was taken by Landsat 7's Enhanced Thematic Mapper Plus (ETM+) sensor on Jan. 12, 2000. Image Credit: NASA/USGS EROS Data Center To learn more about the Landsat satellite go to: landsat.gsfc.nasa.gov/

  18. CubeSat Nighttime Earth Observations

    NASA Astrophysics Data System (ADS)

    Pack, D. W.; Hardy, B. S.; Longcore, T.

    2017-12-01

    Satellite monitoring of visible emissions at night has been established as a useful capability for environmental monitoring and mapping the global human footprint. Pioneering work using Defense Meteorological Satellite Program (DMSP) sensors has been followed by new work using the more capable Visible Infrared Imaging Radiometer Suite (VIIRS). Beginning in 2014, we have been investigating the ability of small visible light cameras on CubeSats to contribute to nighttime Earth science studies via point-and-stare imaging. This paper summarizes our recent research using a common suite of simple visible cameras on several AeroCube satellites to carry out nighttime observations of urban areas and natural gas flares, nighttime weather (including lightning), and fishing fleet lights. Example results include: urban image examples, the utility of color imagery, urban lighting change detection, and multi-frame sequences imaging nighttime weather and large ocean areas with extensive fishing vessel lights. Our results show the potential for CubeSat sensors to improve monitoring of urban growth, light pollution, energy usage, the urban-wildland interface, the improvement of electrical power grids in developing countries, light-induced fisheries, and oil industry flare activity. In addition to orbital results, the nighttime imaging capabilities of new CubeSat sensors scheduled for launch in October 2017 are discussed.

  19. Intelligent Color Vision System for Ripeness Classification of Oil Palm Fresh Fruit Bunch

    PubMed Central

    Fadilah, Norasyikin; Mohamad-Saleh, Junita; Halim, Zaini Abdul; Ibrahim, Haidi; Ali, Syed Salim Syed

    2012-01-01

    Ripeness classification of oil palm fresh fruit bunches (FFBs) during harvesting is important to ensure that they are harvested at the optimum stage for maximum oil production. This paper presents the application of color vision for automated ripeness classification of oil palm FFB. Images of oil palm FFBs of type DxP Yangambi were collected and analyzed using digital image processing techniques. The color features were then extracted from those images and used as inputs for Artificial Neural Network (ANN) learning. The performance of the ANN for ripeness classification of oil palm FFB was investigated using two methods: training the ANN with full features and training the ANN with reduced features based on the Principal Component Analysis (PCA) data reduction technique. Results showed that, compared with using full features, training the ANN with reduced features improved the classification accuracy by 1.66% and is more effective in developing an automated ripeness classifier for oil palm FFB. The developed ripeness classifier can act as a sensor in determining the correct oil palm FFB ripeness category. PMID:23202043
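
The PCA data-reduction step used before ANN training can be sketched in a few lines of NumPy (illustrative only; the ANN itself and the fruit-image features are not reproduced here):

```python
import numpy as np

def pca_reduce(X, k):
    """Project feature vectors (rows of X) onto the top-k principal
    components, as a front-end to a smaller classifier."""
    Xc = X - X.mean(axis=0)                 # centre the features
    cov = np.cov(Xc, rowvar=False)          # feature covariance matrix
    vals, vecs = np.linalg.eigh(cov)        # eigh: ascending eigenvalues
    order = np.argsort(vals)[::-1]          # sort descending by variance
    components = vecs[:, order[:k]]
    return Xc @ components
```

When colour features are strongly correlated (as RGB channel statistics of a fruit bunch typically are), a few components retain nearly all the variance, which is why the reduced-feature ANN can match or beat the full-feature one.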

  20. Intelligent color vision system for ripeness classification of oil palm fresh fruit bunch.

    PubMed

    Fadilah, Norasyikin; Mohamad-Saleh, Junita; Abdul Halim, Zaini; Ibrahim, Haidi; Syed Ali, Syed Salim

    2012-10-22

    Ripeness classification of oil palm fresh fruit bunches (FFBs) during harvesting is important to ensure that they are harvested at the optimum stage for maximum oil production. This paper presents the application of color vision for automated ripeness classification of oil palm FFB. Images of oil palm FFBs of type DxP Yangambi were collected and analyzed using digital image processing techniques. The color features were then extracted from those images and used as inputs for Artificial Neural Network (ANN) learning. The performance of the ANN for ripeness classification of oil palm FFB was investigated using two methods: training the ANN with full features and training the ANN with reduced features based on the Principal Component Analysis (PCA) data reduction technique. Results showed that, compared with using full features, training the ANN with reduced features improved the classification accuracy by 1.66% and is more effective in developing an automated ripeness classifier for oil palm FFB. The developed ripeness classifier can act as a sensor in determining the correct oil palm FFB ripeness category.

  1. Lithographically-generated 3D lamella layers and their structural color

    NASA Astrophysics Data System (ADS)

    Zhang, Sichao; Chen, Yifang; Lu, Bingrui; Liu, Jianpeng; Shao, Jinhai; Xu, Chen

    2016-04-01

    Inspired by the structural color from the multilayer nanophotonic structures in Morpho butterfly wing scales, 3D lamellae layers in dielectric polymers (polymethyl methacrylate, PMMA) with n ~ 1.5 were designed and fabricated by standard top-down electron beam lithography with one-step exposure followed by an alternating development/dissolution process of PMMA/LOR (lift-off resist) multilayers. This work offers direct proof of the structural blue/green color via lithographically-replicated PMMA/air multilayers, analogous to those in real Morpho butterfly wings. The success of nanolithography in this work for the 3D lamellae structures in dielectric polymers not only enables us to gain deeper insight into the mysterious blue color of the Morpho butterfly wings, but also breaks through the bottleneck in technical development toward broad applications in gas/liquid sensors, 3D meta-materials, coloring media, and infrared imaging devices, etc.
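
The blue/green reflection from such PMMA/air lamellae follows the thin-film constructive-interference condition at normal incidence, 2(n1·d1 + n2·d2) = m·λ. A worked sketch with assumed layer thicknesses (the paper does not report these exact values):

```python
# Constructive-interference peak for a periodic PMMA/air multilayer
# at normal incidence: 2 * (n1*d1 + n2*d2) = m * lam.
n_pmma, n_air = 1.5, 1.0          # refractive indices (n ~ 1.5 per the paper)
d_pmma, d_air = 80e-9, 80e-9      # assumed layer thicknesses, metres
m = 1                             # first interference order

lam = 2.0 * (n_pmma * d_pmma + n_air * d_air) / m
# 2 * (1.5*80 + 1.0*80) nm = 400 nm -> violet/blue end of the spectrum
```

Tuning the lamella thicknesses shifts the optical path sum and hence the reflected colour, which is how lithographic control over the layer geometry selects blue versus green.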

  2. A manual for inexpensive methods of analyzing and utilizing remote sensor data

    NASA Technical Reports Server (NTRS)

    Elifrits, C. D.; Barr, D. J.

    1978-01-01

    Instructions are provided for inexpensive methods of using remote sensor data to assist in observing the earth's surface. When possible, relative costs are included. Equipment needed for analysis of remote sensor data is described, and methods for using these equipment items are included, as well as the advantages and disadvantages of individual items. Interpretation and analysis of stereo photos and the interpretation of typical patterns such as tone and texture, land cover, drainage, and erosional form are described. Similar treatment is given to monoscopic image interpretation, including LANDSAT MSS data. Enhancement techniques are detailed with respect to their application, along with simple techniques for creating an enhanced data item. Techniques described include additive and subtractive (Diazo process) color techniques and enlargement of photos or images. Applications of these processes, including mapping of land resources, engineering soils, geology, water resources, environmental conditions, and crops and/or vegetation, are outlined.

  3. NASA COAST and OCEANIA Airborne Missions Support Ecosystem and Water Quality Research in the Coastal Zone

    NASA Technical Reports Server (NTRS)

    Guild, Liane; Kudela, Raphael; Hooker, Stanford; Morrow, John; Russell, Philip; Palacios, Sherry; Livingston, John M.; Negrey, Kendra; Torres-Perez, Juan; Broughton, Jennifer

    2014-01-01

    NASA has a continuing requirement to collect high-quality in situ data for the vicarious calibration of current and next generation ocean color satellite sensors and to validate the algorithms that use the remotely sensed observations. Recent NASA airborne missions over Monterey Bay, CA, have demonstrated novel above- and in-water measurement capabilities supporting a combined airborne sensor approach (imaging spectrometer, microradiometers, and a sun photometer). The results characterize coastal atmospheric and aquatic properties through an end-to-end assessment of image acquisition, atmospheric correction, algorithm application, plus sea-truth observations from state-of-the-art instrument systems. The primary goal is to demonstrate the following in support of calibration and validation exercises for satellite coastal ocean color products: 1) the utility of a multi-sensor airborne instrument suite to assess the bio-optical properties of coastal California, including water quality; and 2) the importance of contemporaneous atmospheric measurements to improve atmospheric correction in the coastal zone. The imaging spectrometer (Headwall) is optimized in the blue spectral domain to emphasize remote sensing of marine and freshwater ecosystems. The novel airborne instrument, Coastal Airborne In-situ Radiometers (C-AIR) provides measurements of apparent optical properties with high dynamic range and fidelity for deriving exact water leaving radiances at the land-ocean boundary, including radiometrically shallow aquatic ecosystems. Simultaneous measurements supporting empirical atmospheric correction of image data are accomplished using the Ames Airborne Tracking Sunphotometer (AATS-14). 
Flight operations are presented for the instrument payloads using the Center for Interdisciplinary Remotely-Piloted Aircraft Studies (CIRPAS) Twin Otter flown over Monterey Bay during the seasonal fall algal bloom in 2011 (COAST) and 2013 (OCEANIA) to support bio-optical measurements of phytoplankton for coastal zone research.

  4. Adjustments to the MODIS Terra Radiometric Calibration and Polarization Sensitivity in the 2010 Reprocessing

    NASA Technical Reports Server (NTRS)

    Meister, Gerhard; Franz, Bryan A.

    2011-01-01

    The Moderate-Resolution Imaging Spectroradiometer (MODIS) on NASA's Earth Observing System (EOS) satellite Terra provides global coverage of top-of-atmosphere (TOA) radiances that have been successfully used for terrestrial and atmospheric research. The MODIS Terra ocean color products, however, have been compromised by an inadequate radiometric calibration at the short wavelengths. The Ocean Biology Processing Group (OBPG) at NASA has derived radiometric corrections using ocean color products from the SeaWiFS sensor as truth fields. In the R2010.0 reprocessing, these corrections have been applied to the whole mission life span of 10 years. This paper presents the corrections to the radiometric gains and to the instrument polarization sensitivity, demonstrates the improvement to the Terra ocean color products, and discusses issues that need further investigation. Although the global averages of MODIS Terra ocean color products are now in excellent agreement with those of SeaWiFS and MODIS Aqua, and image quality has been significantly improved, the large corrections applied to the radiometric calibration and polarization sensitivity require additional caution when using the data.

  5. Interpretation of the coastal zone color scanner signature of the Orinoco River plume

    NASA Technical Reports Server (NTRS)

    Hochman, Herschel T.; Mueller-Karger, F. E.; Walsh, John J.

    1994-01-01

    The Caribbean Sea is an area that traditionally has been considered oligotrophic, even though the Orinoco River contributes large quantities of fresh water, nutrients, and other dissolved material to this region during the wet boreal (fall) season. Little is known about the impact of this seasonal river plume, which extends from Venezuela to Puerto Rico shortly after maximum discharge. Here, we present results from a study of the bio-optical characteristics of the Orinoco River plume during the rainy season. The objective was to determine whether the coastal zone color scanner (CZCS) and the follow-on Sea-viewing Wide Field-of-view Sensor (SeaWiFS) satellite instrument can be used to assess the concentrations of substances in large river plumes. Recent in situ shipboard measurements were compared to values from representative historical CZCS images using established bio-optical models. Our goal was to deconvolve the signatures of colored dissolved organic carbon and phytoplankton pigments within satellite images of the Orinoco River plume. We conclude that the models may be used for case 2 waters and that as much as 50 percent of the remotely sensed chlorophyll biomass within the plume is an artifact due to the presence of dissolved organic carbon. Dissolved organic carbon originates from a number of sources, including decay of dead organisms, humic materials from the soil, and gelbstoff.

  6. 10000 pixels wide CMOS frame imager for earth observation from a HALE UAV

    NASA Astrophysics Data System (ADS)

    Delauré, B.; Livens, S.; Everaerts, J.; Kleihorst, R.; Schippers, Gert; de Wit, Yannick; Compiet, John; Banachowicz, Bartosz

    2009-09-01

    MEDUSA is the lightweight high resolution camera, designed to be operated from a solar-powered Unmanned Aerial Vehicle (UAV) flying at stratospheric altitudes. The instrument is a technology demonstrator within the Pegasus program and targets applications such as crisis management and cartography. A special wide-swath CMOS imager has been developed by Cypress Semiconductor Corporation Belgium to meet the specific sensor requirements of MEDUSA. The CMOS sensor has a stitched design comprising a panchromatic and a color sensor on the same die. Each sensor consists of 10000*1200 square pixels (5.5μm size, novel 6T architecture) with micro-lenses. The exposure is performed by means of a high efficiency snapshot shutter. The sensor is able to operate at a rate of 30fps in full frame readout. Due to a novel pixel design, the sensor has low dark leakage of the memory elements (PSNL) and low parasitic light sensitivity (PLS). Still it maintains a relatively high QE (quantum efficiency) and a FF (fill factor) of over 65%. It features an MTF (Modulation Transfer Function) higher than 60% at the Nyquist frequency in both X and Y directions. The measured optical/electrical crosstalk (expressed as MTF) of this 5.5μm pixel is state-of-the-art. These properties make it possible to acquire sharp images even in low-light conditions.

  7. Advances in biologically inspired on/near sensor processing

    NASA Astrophysics Data System (ADS)

    McCarley, Paul L.

    1999-07-01

    As electro-optic sensors increase in size and frame rate, the data transfer and digital processing resource requirements also increase. In many missions, the spatial area of interest is but a small fraction of the available field of view. Choosing the right region of interest, however, is a challenge and still requires an enormous amount of downstream digital processing resources. In order to filter this ever-increasing amount of data, we look at how nature solves the problem. The Advanced Guidance Division of the Munitions Directorate, Air Force Research Laboratory at Eglin AFB, Florida, has been pursuing research in the area of advanced sensor and image processing concepts based on biologically inspired sensory information processing. A summary of two 'neuromorphic' processing efforts will be presented, along with a seeker system concept utilizing this innovative technology. The Neuroseek program is developing a 256 x 256 2-color dual-band IRFPA coupled to an optimized silicon CMOS readout and processing integrated circuit that provides simultaneous full-frame imaging in the MWIR/LWIR wavebands along with built-in biologically inspired sensor image processing functions. Concepts and requirements for future efforts of this kind will also be discussed.

  8. Redox sensor proteins for highly sensitive direct imaging of intracellular redox state.

    PubMed

    Sugiura, Kazunori; Nagai, Takeharu; Nakano, Masahiro; Ichinose, Hiroshi; Nakabayashi, Takakazu; Ohta, Nobuhiro; Hisabori, Toru

    2015-02-13

    Intracellular redox state is a critical factor for fundamental cellular functions, including regulation of the activities of various metabolic enzymes as well as ROS production and elimination. Genetically encoded fluorescent redox sensors, such as roGFP (Hanson, G. T., et al. (2004)) and Redoxfluor (Yano, T., et al. (2010)), have been developed to investigate the redox state of living cells. However, these sensors are not useful in cells that contain, for example, other colored pigments. We therefore set out to obtain simpler redox sensor proteins, and have developed oxidation-sensitive fluorescent proteins called Oba-Q (oxidation balance sensed quenching) proteins. Our sensor proteins, derived from CFP and Sirius, can be used to monitor the intracellular redox state as their fluorescence is drastically quenched upon oxidation. The blue-shifted spectra of the Oba-Q proteins enable us to monitor various redox states in conjunction with other sensor proteins. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Phytoplankton off the Coast of Washington State

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Clear weather over the Pacific Northwest yesterday gave the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) a good view of this mountainous region of the United States. Several phytoplankton blooms are also visible offshore. The white areas hugging the California coastline toward the bottom of the image are low-level stratus clouds. SeaWiFS acquired this true-color scene on October 3, 2001. Image courtesy the SeaWiFS Project, NASA/Goddard Space Flight Center, and ORBIMAGE

  10. Phytoplankton Bloom Off Portugal

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Turquoise and greenish swirls marked the presence of a large phytoplankton bloom off the coast of Portugal on April 23, 2002. This true-color image was acquired by the Moderate-resolution Imaging Spectroradiometer (MODIS), flying aboard NASA's Terra satellite. There are also several fires burning in northwest Spain, near the port city of A Coruña. Please note that the high-resolution scene provided here is 500 meters per pixel. For a copy of this scene at the sensor's fullest resolution, visit the MODIS Rapidfire site.

  11. Filtered Rayleigh Scattering Measurements in a Buoyant Flowfield

    DTIC Science & Technology

    2007-03-01

    common filter used in FRS applications. Iodine is more attractive than mercury to use in a filter due to its broader range of blocking and transmission... is a 4032x2688 pixel camera with a monochrome or colored CCD imaging sensor. The binning range of the camera is (HxV) 1x1 to 2x8. The manufacturer... center position of the jet of the time-averaged image. The z center position is chosen so that it is the average z value bounding helium

  12. Polar research from satellites

    NASA Technical Reports Server (NTRS)

    Thomas, Robert H.

    1991-01-01

    In the polar regions and climate change section, the topics of ocean/atmosphere heat transfer, trace gases, surface albedo, and response to climate warming are discussed. The satellite instruments section is divided into three parts. Part one is about basic principles and covers the choice of frequencies, algorithms, orbits, and remote sensing techniques. Part two is about passive sensors and covers microwave radiometers, medium-resolution visible and infrared sensors, advanced very high resolution radiometers, optical line scanners, the earth radiation budget experiment, the coastal zone color scanner, high-resolution imagers, and atmospheric sounding. Part three is about active sensors and covers synthetic aperture radar, radar altimeters, scatterometers, and lidar. There is also a next decade section that is followed by a summary and recommendations section.

  13. Spatio-thermal depth correction of RGB-D sensors based on Gaussian processes in real-time

    NASA Astrophysics Data System (ADS)

    Heindl, Christoph; Pönitz, Thomas; Stübl, Gernot; Pichler, Andreas; Scharinger, Josef

    2018-04-01

    Commodity RGB-D sensors capture color images along with dense pixel-wise depth information in real-time. Typical RGB-D sensors are provided with a factory calibration and exhibit erratic depth readings due to coarse calibration values, ageing, and thermal effects. This limits their applicability in computer vision and robotics. We propose a novel method to accurately calibrate depth, considering spatial and thermal influences jointly. Our work is based on Gaussian process regression in a four-dimensional Cartesian and thermal domain. We propose to leverage modern GPUs for dense depth map correction in real-time. For reproducibility we make our dataset and source code publicly available.
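
    The regression at the core of such a correction can be sketched as follows, assuming an RBF kernel over a hypothetical four-dimensional input of pixel position, raw depth, and sensor temperature (the paper's exact kernel and input parameterization may differ):

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.3, variance=1.0):
    # Squared-exponential kernel over the 4-D (x, y, depth, temperature) domain.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale ** 2)

def gp_predict(X_train, y_train, X_test, noise=1e-6):
    # Standard GP regression mean: k(X*, X) (K + noise * I)^-1 y.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)
    return rbf_kernel(X_test, X_train) @ alpha

# Fit a depth-error model on synthetic calibration data, then predict the
# correction for a new reading (here: one of the calibration points).
rng = np.random.default_rng(0)
X = rng.random((10, 4))            # normalized (x, y, depth, temperature)
err = 0.1 * X.sum(axis=1)          # synthetic depth error
pred = gp_predict(X, err, X[:1])
```

The predicted error would then be subtracted from the raw depth reading; the GPU-parallel version evaluates the same kernel products densely over the whole depth map.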

  14. In situ spectroradiometric calibration of EREP imagery and estuarine and coastal oceanography of Block Island sound and adjacent New York coastal waters. [Willcox, Arizona

    NASA Technical Reports Server (NTRS)

    Yost, E. F. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. The first part of the study resulted in photographic procedures for making multispectral positive images which greatly enhance the color differences in land detail using an additive color viewer. An additive color analysis of the geologic features near Willcox, Arizona, using enhanced black and white multispectral positives allowed compilation of a significant number of unmapped geologic units which do not appear on geologic maps of the area. The second part demonstrated the feasibility of utilizing Skylab remote sensor data to monitor and manage the coastal environment by relating physical, chemical, and biological ship-sampled data to S190A, S190B, and S192 image characteristics. Photographic reprocessing techniques were developed which greatly enhanced subtle low-brightness water detail. Using these photographic contrast-stretch techniques, two water masses having an extinction coefficient difference of only 0.07, measured simultaneously with the acquisition of S190A data, were readily differentiated.

  15. Color and Contour Based Identification of Stem of Coconut Bunch

    NASA Astrophysics Data System (ADS)

    Kannan Megalingam, Rajesh; Manoharan, Sakthiprasad K.; Reddy, Rajesh G.; Sriteja, Gone; Kashyap, Ashwin

    2017-08-01

    Vision is a key component of artificial intelligence and automated robotics. Sensors and cameras are the sight organs of a robot: only through them can it locate itself or identify the shape of a regular or an irregular object. This paper presents a method for identifying an object based on color and contour recognition, using a camera and digital image processing techniques for robotic applications. To identify the contour, a shape matching technique is used, which takes input data from the database provided and uses it to identify the contour by checking for a shape match. The shape match is based on the idea of iterating through each contour of the thresholded image. The color is identified on the HSV scale by approximating the desired range of values from the database. HSV data along with iteration is used for identifying a quadrilateral, which is our required contour. This algorithm could also be used in a non-deterministic plane, where it relies exclusively on HSV values.
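
    The HSV range test described above can be sketched with the standard library; the hue window and the saturation/value floors below are illustrative values, not the authors' database thresholds:

```python
import colorsys

def in_hsv_range(rgb, hue_range_deg, s_min=0.3, v_min=0.2):
    # rgb: (r, g, b) in 0..255; hue_range_deg: (low, high) in degrees.
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    h_deg = h * 360.0
    lo, hi = hue_range_deg
    if lo <= hi:
        in_hue = lo <= h_deg <= hi
    else:  # hue window wraps around 360 degrees (e.g. reds)
        in_hue = h_deg >= lo or h_deg <= hi
    return in_hue and s >= s_min and v >= v_min
```

A contour whose pixels predominantly pass this test, and whose shape matches a stored template, would then be accepted as the stem of the bunch.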

  16. 3D FaceCam: a fast and accurate 3D facial imaging device for biometrics applications

    NASA Astrophysics Data System (ADS)

    Geng, Jason; Zhuang, Ping; May, Patrick; Yi, Steven; Tunnell, David

    2004-08-01

    Human faces are fundamentally three-dimensional (3D) objects, and each face has its unique 3D geometric profile. The 3D geometric features of a human face can be used, together with its 2D texture, for rapid and accurate face recognition purposes. Due to the lack of low-cost and robust 3D sensors and effective 3D facial recognition (FR) algorithms, almost all existing FR systems use 2D face images. Genex has developed 3D solutions that overcome the inherent problems in 2D while also addressing limitations in other 3D alternatives. One important aspect of our solution is a unique 3D camera (the 3D FaceCam) that combines multiple imaging sensors within a single compact device to provide instantaneous, ear-to-ear coverage of a human face. This 3D camera uses three high-resolution CCD sensors and a color-encoded pattern projection system. The RGB color information from each pixel is used to compute the range data and generate an accurate 3D surface map. The imaging system uses no moving parts and combines multiple 3D views to provide detailed and complete 3D coverage of the entire face. Images are captured within a fraction of a second and full-frame 3D data is produced within a few seconds. The described method provides much better data coverage and accuracy in areas with sharp features or fine details (such as the nose and eyes). Using this 3D data, we have been able to demonstrate that a 3D approach can significantly improve the performance of facial recognition. We have conducted tests in which we have varied the lighting conditions and angle of image acquisition in the "field." These tests have shown that the matching results are significantly improved when enrolling a 3D image rather than a single 2D image. With its 3D solutions, Genex is working toward unlocking the promise of powerful 3D FR and transferring FR from a lab technology into a real-world biometric solution.

  17. Development of an Inexpensive RGB Color Sensor for the Detection of Hydrogen Cyanide Gas.

    PubMed

    Greenawald, Lee A; Boss, Gerry R; Snyder, Jay L; Reeder, Aaron; Bell, Suzanne

    2017-10-27

    An inexpensive red, green, blue (RGB) color sensor was developed for detecting low-ppm concentrations of hydrogen cyanide gas. A piece of glass fiber filter paper containing monocyanocobinamide [CN(H2O)Cbi] was placed directly above the RGB color sensor and an on-chip LED. Light reflected from the paper was monitored for RGB color change upon exposure to hydrogen cyanide at concentrations of 1.0-10.0 ppm at 25%, 50%, and 85% relative humidity. A rapid color change occurred within 10 s of exposure to 5.0 ppm hydrogen cyanide gas (near the NIOSH recommended exposure limit). A more rapid color change occurred at higher humidity, suggesting a more effective reaction between hydrogen cyanide and CN(H2O)Cbi. The sensor could provide the first real-time respirator end-of-service-life alert for hydrogen cyanide gas.
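
    The readout reduces to tracking how far the reflected color has drifted from a clean-air baseline. A minimal sketch, where the distance metric and the alarm threshold are arbitrary placeholders rather than the authors' calibrated values:

```python
def color_change(baseline, reading):
    # Euclidean distance between two (R, G, B) readings from the sensor.
    return sum((a - b) ** 2 for a, b in zip(baseline, reading)) ** 0.5

def end_of_service_alert(baseline, reading, threshold=12.0):
    # Fire the alert once the reflected color has drifted past the threshold.
    return color_change(baseline, reading) >= threshold
```

In a deployed cartridge, the baseline would be captured at donning time and the threshold tied to the exposure limit via the humidity-dependent response curves reported in the paper.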

  18. A custom multi-modal sensor suite and data analysis pipeline for aerial field phenotyping

    NASA Astrophysics Data System (ADS)

    Bartlett, Paul W.; Coblenz, Lauren; Sherwin, Gary; Stambler, Adam; van der Meer, Andries

    2017-05-01

    Our group has developed a custom, multi-modal sensor suite and data analysis pipeline to phenotype crops in the field using unpiloted aircraft systems (UAS). This approach to high-throughput field phenotyping is part of a research initiative intended to markedly accelerate the breeding process for refined energy sorghum varieties. To date, single-rotor and multirotor helicopters, roughly 14 kg in total weight, are being employed to provide sensor coverage over multiple hectare-sized fields in tens of minutes. The quick, autonomous operations allow for complete field coverage at consistent plant and lighting conditions, with low operating costs. The sensor suite collects data simultaneously from six sensors and registers it for fusion and analysis. High-resolution color imagery, along with lidar measurements, targets color and geometric phenotypes. Long-wave infrared imagery targets temperature phenomena and plant stress. Hyperspectral visible and near-infrared imagery targets phenotypes such as biomass and chlorophyll content, as well as novel, predictive spectral signatures. Onboard spectrometers and careful laboratory and in-field calibration techniques aim to increase the physical validity of the sensor data throughout and across growing seasons. Off-line processing of data creates basic products such as image maps and digital elevation models. Derived data products include phenotype charts, statistics, and trends. The outcome of this work is a set of commercially available phenotyping technologies, including sensor suites, a fully integrated phenotyping UAS, and data analysis software. Effort is also underway to transition these technologies to farm management users by way of streamlined, lower-cost sensor packages and intuitive software interfaces.

  19. A novel CMOS image sensor system for quantitative loop-mediated isothermal amplification assays to detect food-borne pathogens.

    PubMed

    Wang, Tiantian; Kim, Sanghyo; An, Jeong Ho

    2017-02-01

    Loop-mediated isothermal amplification (LAMP) is considered one of the alternatives to conventional PCR, offering an inexpensive, portable diagnostic system with minimal power consumption. The present work describes the application of LAMP in real-time photon detection and quantitative analysis of nucleic acids, integrated with a disposable complementary metal-oxide semiconductor (CMOS) image sensor. This novel system works as an amplification-coupled detection platform, relying on a CMOS image sensor with the aid of a computerized circuitry controller for the temperature and light sources. The CMOS image sensor captures the light passing through the sensor surface and converts it into digital units using an analog-to-digital converter (ADC). The system monitors the real-time photon variation caused by the color changes during amplification. Escherichia coli O157 was used as a proof-of-concept target for quantitative analysis, and compared with the results for Staphylococcus aureus and Salmonella enterica to confirm the efficiency of the system. The system detected various DNA concentrations of E. coli O157 in a short time (45 min), with a detection limit of 10 fg/μL. The low-cost, simple, and compact design, with low power consumption, represents a significant advance in the development of a portable, sensitive, user-friendly, real-time, and quantitative analytical tool for point-of-care diagnosis. Copyright © 2016 Elsevier B.V. All rights reserved.
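
    Real-time quantification of this kind typically reduces to a threshold time: the more template DNA present, the earlier the optical signal crosses a set level. A minimal sketch with a hypothetical log-linear standard curve (slope and intercept are placeholders, not the paper's calibration):

```python
def time_to_threshold(times, intensities, threshold):
    # Return the first time point at which the sensor signal crosses
    # the threshold, or None if amplification never registers.
    for t, v in zip(times, intensities):
        if v >= threshold:
            return t
    return None

def log10_concentration(t_threshold, slope=-0.15, intercept=7.0):
    # Hypothetical standard curve: log10(template concentration) falls
    # linearly with threshold time (more template -> earlier crossing).
    return slope * t_threshold + intercept
```

A negative sample (signal never crossing the threshold) returns None, which maps to "below detection limit" rather than a concentration.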

  20. Diffractive-optical correlators: chances to make optical image preprocessing as intelligent as human vision

    NASA Astrophysics Data System (ADS)

    Lauinger, Norbert

    2004-10-01

    The human eye is a good model for the engineering of optical correlators. Three prominent intelligent functionalities in human vision could be realized in the near future by a new diffractive-optical hardware design for optical imaging sensors: (1) illuminant-adaptive RGB-based color vision, (2) monocular 3D vision based on RGB data processing, and (3) patchwise Fourier-optical object classification and identification. The hardware design of the human eye has specific diffractive-optical elements (DOEs) in aperture and image space and seems to execute the three jobs at, or not far behind, the loci of the images of objects.

  1. Smog Obscures Chinese Coast

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Most of southeastern China has been covered by a thick greyish shroud of aerosol pollution over the last few weeks. The smog is so thick it is difficult to see the surface in some regions of this scene, acquired on January 7, 2002. The city of Hong Kong is the large brown cluster of pixels toward the lower left-hand corner of the image (indicated by the faint black box). The island of Taiwan, due east of mainland China, is also blanketed by the smog. This true-color image was captured by the Moderate-resolution Imaging Spectroradiometer (MODIS) sensor, flying aboard NASA's Terra satellite. Image courtesy Jacques Descloitres, MODIS Land Rapid Response Team at NASA GSFC

  2. Using genetically modified tomato crop plants with purple leaves for absolute weed/crop classification.

    PubMed

    Lati, Ran N; Filin, Sagi; Aly, Radi; Lande, Tal; Levin, Ilan; Eizenberg, Hanan

    2014-07-01

    Weed/crop classification is considered the main problem in developing precise weed-management methodologies, because crops and weeds share similar hues. Great effort has been invested in the development of classification models, most based on expensive sensors and complicated algorithms. However, satisfactory results are not consistently obtained due to imaging conditions in the field. We report on an innovative approach that combines advances in genetic engineering and robust image-processing methods to detect weeds and distinguish them from crop plants by manipulating the crop's leaf color. We demonstrate this on a genetically modified tomato (germplasm AN-113) which expresses a purple leaf color. Autonomous weed/crop classification is performed using an invariant-hue transformation that is applied to images acquired by a standard consumer camera (visible wavelength) and handles variations in illumination intensities. The integration of these methodologies is simple and effective, and classification results were accurate and stable under a wide range of imaging conditions. Using this approach, we simplify the most complicated stage in image-based weed/crop classification models. © 2013 Society of Chemical Industry.

  3. NASA's Earth Observatory and Visible Earth: Imagery and Science on the Internet

    NASA Technical Reports Server (NTRS)

    King, Michael D.; Simmon, Robert B.; Herring, David D.

    2003-01-01

    The purpose of NASA's Earth Observatory and Visible Earth Web sites is to provide freely-accessible locations on the Internet where the public can obtain new satellite imagery (at resolutions up to a given sensor's maximum) and scientific information about our home planet. Climatic and environmental change are the sites' main foci. As such, they both contain ample data visualizations and time-series animations that demonstrate geophysical parameters of particular scientific interest, with emphasis on how and why they vary over time. An Image Composite Editor (ICE) tool will be added to the Earth Observatory in October 2002 that will allow visitors to conduct basic analyses of available image data. For example, users may produce scatter plots to correlate images; or they may probe images to find the precise unit values per pixel of a given data product; or they may build their own true-color and false-color images using multispectral data. In particular, the sites are designed to be useful to the science community, public media, educators, and students.

  4. Ocean-color Satellites and the Phytoplankton-dust Connection

    NASA Technical Reports Server (NTRS)

    Stegmann, P. M.

    2000-01-01

    Results of a time series of satellite measurements of aerosol radiance made with two ocean-color sensors are presented. Data from the Coastal Zone Color Scanner (CZCS) were collected from 1978 to 1986. The follow-on sensor, the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), has been transmitting data since September 1997. Both CZCS and SeaWiFS images successfully depicted regions of well-known, large-scale mineral aerosol plumes, the seasonality of which corresponds to that found by other satellite and land-based platforms. Aerosol radiance extractions were made for two subregions in the North Atlantic, both of which are recipients of regular mineral aerosol deposits originating from northwest Africa. In the almost eight-year time series obtained with CZCS, the annual cycle in both subregions follows a similar pattern each year and agrees well with results from the published literature. However, there is interannual variability and the observed fluctuations may be linked to climatic shifts associated with the North Atlantic Oscillation. The SeaWiFS annual cycle of aerosol radiance in both subregions closely followed that found in the CZCS climatology; SeaWiFS-measured aerosol optical thickness mirrors aerosol radiance to a high degree. The higher temporal resolution offered by the SeaWiFS data demonstrates the sporadic nature of dust events throughout the entire year and not only during the high dust season.

  5. Confocal Microscopy Imaging with an Optical Transition Edge Sensor

    NASA Astrophysics Data System (ADS)

    Fukuda, D.; Niwa, K.; Hattori, K.; Inoue, S.; Kobayashi, R.; Numata, T.

    2018-05-01

    Fluorescence color imaging at an extremely low excitation intensity was performed using an optical transition edge sensor (TES) embedded in a confocal microscope for the first time. An optical TES has the ability to resolve the energy of each incident photon; therefore, the wavelength of each photon can be measured without spectroscopic elements such as diffraction gratings. As target objects, animal cells labeled with two fluorescent dyes were irradiated with an excitation laser at an intensity below 1 μW. In our confocal system, an optical fiber-coupled TES device is used to detect photons instead of the pinhole and photomultiplier tube used in typical confocal microscopes. Photons emitted from the dyes were collected by the objective lens and sent to the optical TES via the fiber. The TES measures the wavelength of each photon arriving within an exposure time of 70 ms, and a fluorescent photon spectrum is constructed. This measurement is repeated by scanning the target sample, and finally a two-dimensional RGB-color image is obtained. The obtained image showed that the photons emitted from the dyes of mitochondria and cytoskeletons were clearly resolved at a detection intensity level of tens of photons. The TES exhibits ideal performance as a photon detector, with a low dark count rate (< 1 Hz) and wavelength resolving power. In the single-mode fiber-coupled system, the confocal microscope can be operated in the super-resolution mode. These features are promising for realizing high-sensitivity, high-resolution spectral photon imaging and would help avoid cell damage and photobleaching of fluorescent dyes.
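
    The energy-to-wavelength step, and the binning of resolved photons into an RGB pixel, can be sketched as below; the three wavelength bins are illustrative, not the instrument's actual channel definitions:

```python
PLANCK = 6.62607015e-34      # Planck constant, J*s
LIGHT_SPEED = 2.99792458e8   # speed of light, m/s

def wavelength_nm(photon_energy_joules):
    # Each TES pulse height gives a photon energy; lambda = h*c / E.
    return PLANCK * LIGHT_SPEED / photon_energy_joules * 1e9

def bin_photons_to_rgb(wavelengths_nm):
    # Count resolved photons into coarse R/G/B bins to build one color pixel.
    rgb = [0, 0, 0]
    for w in wavelengths_nm:
        if 600 <= w < 700:
            rgb[0] += 1
        elif 500 <= w < 600:
            rgb[1] += 1
        elif 400 <= w < 500:
            rgb[2] += 1
    return tuple(rgb)
```

Repeating this per scan position yields the two-dimensional RGB image described in the abstract, with only tens of photons needed per pixel.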

  6. Design of multi-mode compatible image acquisition system for HD area array CCD

    NASA Astrophysics Data System (ADS)

    Wang, Chen; Sui, Xiubao

    2014-11-01

    In line with the current trends in video surveillance toward digitization and high definition, a multi-mode compatible image acquisition system for an HD area array CCD is designed. The hardware and software designs of the color video capture system for the HD area array CCD KAI-02150, produced by the Truesense Imaging company, are analyzed, and the structure parameters of the HD area array CCD and the color video gathering principle of the acquisition system are introduced. Then, the CCD control sequence and the timing logic of the whole capture system are realized. The noises of the video signal (kTC noise and 1/f noise) are filtered using the Correlated Double Sampling (CDS) technique to enhance the signal-to-noise ratio of the system. Compatible designs in both software and hardware are put forward for two other image sensors of the same series, the four-megapixel KAI-04050 and the eight-megapixel KAI-08050. A Field Programmable Gate Array (FPGA) is adopted as the key controller of the system to perform a top-down modular design, which realizes the hardware design in software and improves development efficiency. Finally, the required driving time sequence is simulated accurately using the Quartus II 12.1 development platform together with VHDL. The simulation results indicate that the driving circuit is characterized by a simple framework, low power consumption, and strong anti-interference ability, meeting current demands for miniaturization and high definition.
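
    Correlated double sampling itself is a per-pixel subtraction: the reset level (carrying the kTC offset and slow noise) is sampled first and subtracted from the signal sample, cancelling the components common to both. A minimal sketch:

```python
def correlated_double_sampling(reset_levels, signal_levels):
    # Output = signal sample minus reset sample for each pixel; offsets and
    # slow (kTC, 1/f) noise common to the two samples cancel out.
    return [s - r for r, s in zip(reset_levels, signal_levels)]
```

For example, a per-pixel reset offset of +7 counts present in both samples leaves the subtracted output equal to the true photo-signal.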

  7. Spectral sharpening of color sensors: diagonal color constancy and beyond.

    PubMed

    Vazquez-Corral, Javier; Bertalmío, Marcelo

    2014-02-26

    It has now been 20 years since the seminal work by Finlayson et al. on the use of spectral sharpening of sensors to achieve diagonal color constancy. Spectral sharpening is still used today by numerous researchers for goals unrelated to the original goal of diagonal color constancy, e.g., multispectral processing, shadow removal, and location of unique hues. This paper reviews the idea of spectral sharpening through the lens of what is known today in color constancy, describes the different methods used for obtaining a set of sharpening sensors, and presents an overview of the many different uses that have been found for spectral sharpening over the years.
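
    In its simplest sensor-based form, sharpening finds a 3x3 transform T mapping the camera's broad, overlapping sensitivities onto narrower target curves in a least-squares sense. A sketch with synthetic Gaussian sensitivities (the review covers more sophisticated data-based and spherical-sampling variants):

```python
import numpy as np

def gaussian_curves(centers, width, wavelengths):
    # Synthetic spectral sensitivity curves, one row per channel.
    c = np.array(centers, dtype=float)
    return np.exp(-0.5 * ((wavelengths[None, :] - c[:, None]) / width) ** 2)

def sharpening_transform(sensors, target):
    # Least-squares T with T @ sensors ~= target; rows of `sensors` and
    # `target` are sensitivity curves sampled at the same wavelengths.
    T, *_ = np.linalg.lstsq(sensors.T, target.T, rcond=None)
    return T.T

wl = np.linspace(400, 700, 61)
broad = gaussian_curves([450, 540, 610], 60.0, wl)   # camera-like, overlapping
narrow = gaussian_curves([450, 540, 610], 20.0, wl)  # sharpened targets
T = sharpening_transform(broad, narrow)
sharpened = T @ broad
```

With sensitivities sharpened this way, an illuminant change acts approximately as a diagonal (per-channel) scaling, which is the original motivation for diagonal color constancy.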

  8. Mathematic modeling the relationship of bacteria number in a dairy product and the color difference measured by a CCD image sensor

    NASA Astrophysics Data System (ADS)

    Zhou, Zhen; Zhao, Zhigang; Chen, Dongkui; Liu, Yuping

    2005-01-01

    Although many methods, such as bacteria plate counts, flow cytometry, and the impedance method, have been broadly used in the dairy industry to quantitate bacteria numbers around the world, none of them is quick, low cost, and easy. In this study, we proposed to apply color difference theory in this field to establish a mathematical model that quantitates the bacteria number in fresh milk. Preliminary testing results not only indicate that the application of color difference theory to the new system is practical, but also confirm the theoretical relationship between the number of bacteria, incubation time, and color difference. This proof-of-principle study further suggests that the novel method has the potential to replace the traditional methods of determining bacteria numbers in the food industry.
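
    The mathematical model amounts to a calibration curve: measure the color difference ΔE at known counts, fit a line, then invert it for unknown samples. A sketch using a Euclidean ΔE and ordinary least squares; the calibration pairs below are invented for illustration, since the abstract does not give the fitted model:

```python
def delta_e(color1, color2):
    # Euclidean color difference between two tristimulus readings.
    return sum((a - b) ** 2 for a, b in zip(color1, color2)) ** 0.5

def fit_line(xs, ys):
    # Ordinary least squares for y = a + b*x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

# Hypothetical calibration: log10(bacteria count) against measured ΔE.
delta_es = [2.0, 4.0, 6.0, 8.0]
log_counts = [3.1, 4.1, 5.1, 6.1]
a, b = fit_line(delta_es, log_counts)
```

An unknown sample's ΔE against the reference color would then map to an estimated log10 count via a + b * ΔE.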

  9. A source number estimation method for single optical fiber sensor

    NASA Astrophysics Data System (ADS)

    Hu, Junpeng; Huang, Zhiping; Su, Shaojing; Zhang, Yimeng; Liu, Chunwu

    2015-10-01

    The single-channel blind source separation (SCBSS) technique is of great significance in many fields, such as optical fiber communication, sensor detection, and image processing. Realizing blind source separation (BSS) from data received by a single optical fiber sensor has a wide range of applications. The performance of many BSS algorithms and signal processing methods degrades when the source number is estimated inaccurately. Many excellent algorithms have been proposed to deal with source number estimation in array signal processing, which involves multiple sensors, but they cannot be applied directly to the single-sensor condition. This paper presents a source number estimation method for data received by a single optical fiber sensor. Through a delay process, the single-sensor data are converted to multidimensional form, and the data covariance matrix is constructed. The estimation algorithms used in array signal processing can then be utilized. Information theoretic criteria (ITC) based methods, represented by AIC and MDL, and Gerschgorin's disk estimation (GDE) are introduced to estimate the source number from the single optical fiber sensor's received signal. To improve the performance of these estimation methods at low signal-to-noise ratio (SNR), a smoothing process is applied to the data covariance matrix, reducing the fluctuation and uncertainty of its eigenvalues. Simulation results show that ITC-based methods cannot estimate the source number effectively under colored noise. The GDE method, although it performs poorly at low SNR, is able to estimate the number of sources accurately under colored noise. The experiments also show that the proposed method can be applied to estimate the source number from single-sensor received data.
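
    A sketch of the delay-embedding plus information-criterion pipeline described above, using the standard MDL rule on the eigenvalues of the sample covariance matrix (the smoothing step and the GDE variant are omitted here):

```python
import numpy as np

def delay_embed(x, m):
    # Turn a single-sensor record into m pseudo-channels via delays.
    n = len(x) - m + 1
    return np.stack([x[i:i + n] for i in range(m)])

def mdl_source_count(x, m=8):
    X = delay_embed(np.asarray(x, dtype=float), m)
    n = X.shape[1]
    R = X @ X.T / n                          # sample covariance matrix
    ev = np.sort(np.linalg.eigvalsh(R))[::-1]
    best_k, best_score = 0, np.inf
    for k in range(m):
        tail = ev[k:]
        geo = np.exp(np.mean(np.log(tail)))  # geometric mean of "noise" eigenvalues
        ari = np.mean(tail)                  # arithmetic mean
        score = -n * (m - k) * np.log(geo / ari) + 0.5 * k * (2 * m - k) * np.log(n)
        if score < best_score:
            best_score, best_k = score, k
    return best_k

# One real sinusoid in light noise occupies two covariance eigenvalues.
t = np.arange(4000)
rng = np.random.default_rng(1)
record = np.sin(0.31 * t) + 0.05 * rng.standard_normal(t.size)
k_hat = mdl_source_count(record)
```

Note that each real sinusoid contributes a rank-2 component to the covariance matrix, so the criterion reports 2 for a single tone.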

  10. Measuring Patient Mobility in the ICU Using a Novel Noninvasive Sensor.

    PubMed

    Ma, Andy J; Rawat, Nishi; Reiter, Austin; Shrock, Christine; Zhan, Andong; Stone, Alex; Rabiee, Anahita; Griffin, Stephanie; Needham, Dale M; Saria, Suchi

    2017-04-01

    To develop and validate a noninvasive mobility sensor to automatically and continuously detect and measure patient mobility in the ICU. Prospective, observational study. Surgical ICU at an academic hospital. Three hundred sixty-two hours of sensor color and depth image data were recorded and curated into 109 segments, each containing 1,000 images, from eight patients. None. Three Microsoft Kinect sensors (Microsoft, Beijing, China) were deployed in one ICU room to collect continuous patient mobility data. We developed software that automatically analyzes the sensor data to measure mobility and assign the highest level within a time period. To characterize the highest mobility level, a validated 11-point mobility scale was collapsed into four categories: nothing in bed, in-bed activity, out-of-bed activity, and walking. Of the 109 sensor segments, the noninvasive mobility sensor was developed using 26 of these from three ICU patients and validated on 83 remaining segments from five different patients. Three physicians annotated each segment for the highest mobility level. The weighted Kappa (κ) statistic for agreement between automated noninvasive mobility sensor output versus manual physician annotation was 0.86 (95% CI, 0.72-1.00). Disagreement primarily occurred in the "nothing in bed" versus "in-bed activity" categories because "the sensor assessed movement continuously," which was significantly more sensitive to motion than physician annotations using a discrete manual scale. Noninvasive mobility sensor is a novel and feasible method for automating evaluation of ICU patient mobility.
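
    The agreement statistic reported above is a weighted Cohen's kappa; a minimal sketch with quadratic weights over the four collapsed mobility categories (0 = nothing in bed, 1 = in-bed activity, 2 = out-of-bed activity, 3 = walking), noting the study does not state which weighting scheme it used:

```python
def weighted_kappa(rater_a, rater_b, n_cat=4):
    # Quadratic disagreement weights: w[i][j] grows with (i - j)^2.
    w = [[(i - j) ** 2 / (n_cat - 1) ** 2 for j in range(n_cat)]
         for i in range(n_cat)]
    n = len(rater_a)
    obs = [[0.0] * n_cat for _ in range(n_cat)]
    for x, y in zip(rater_a, rater_b):
        obs[x][y] += 1.0 / n
    pa = [sum(1 for x in rater_a if x == i) / n for i in range(n_cat)]
    pb = [sum(1 for y in rater_b if y == i) / n for i in range(n_cat)]
    disagree_obs = sum(w[i][j] * obs[i][j]
                       for i in range(n_cat) for j in range(n_cat))
    disagree_exp = sum(w[i][j] * pa[i] * pb[j]
                       for i in range(n_cat) for j in range(n_cat))
    return 1.0 - disagree_obs / disagree_exp
```

Adjacent-category confusions (e.g. "nothing in bed" versus "in-bed activity", the study's main source of disagreement) are penalized less than distant ones under this weighting.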

  11. Application of Optical Imaging Techniques for Quantification of pH and O2 Dynamics in Porous Media

    NASA Astrophysics Data System (ADS)

    Li, B.; Seliman, A. F.; Pales, A. R.; Liang, W.; Sams, A.; Darnault, C. J. G.; DeVol, T. A.

    2016-12-01

    Understanding the spatial and temporal distribution of physical and chemical parameters (e.g. pH, O2) is imperative to characterize the behavior of contaminants in a natural environment. The objectives of this research are to calibrate pH and O2 sensor foils, to develop a dual pH/O2 sensor foil, and to apply them in flow and transport experiments, in order to understand the physical and chemical parameters that control contaminant fate and transport in an unsaturated sandy porous medium. In addition, a demonstration of a sensor foil that quantifies aqueous uranium concentration will be presented. Optical imaging techniques will be conducted with 2D tanks to investigate the influence of microbial exudates and plant roots on pH and O2 parameters and radionuclide transport. As a non-invasive method, the optical imaging technique utilizes optical chemical sensor films and either a digital camera or a spectrometer to capture the changes with high temporal and spatial resolutions. Sensor foils are made for different parameters by applying dyes that generate a fluorescence proportional to the parameter of interest. Preliminary results suggested that this method could detect pH ranging from 4.5 to 7.5. The results from the uranium foil test, with concentrations in the range of 2 to 8 ppm, indicated that a higher concentration of uranium resulted in a greater color intensity.

  12. Creating photorealistic virtual model with polarization-based vision system

    NASA Astrophysics Data System (ADS)

    Shibata, Takushi; Takahashi, Toru; Miyazaki, Daisuke; Sato, Yoichi; Ikeuchi, Katsushi

    2005-08-01

    Recently, 3D models have come into use in many fields such as education, medical services, entertainment, art, and digital archiving; as computational power has grown, the demand for photorealistic virtual models has increased. In the computer vision field, a number of techniques have been developed for creating such virtual models by observing real objects. In this paper, we propose a method for creating a photorealistic virtual model using a laser range sensor and a polarization-based image capture system. We capture range and color images of an object rotated on a rotary table. Using the reconstructed object shape and the sequence of color images, the parameters of a reflection model are estimated in a robust manner, allowing us to build a photorealistic 3D model that accounts for surface reflection. The key point of the proposed method is that the diffuse and specular reflection components are first separated from the color image sequence, and the reflectance parameters of each component are then estimated separately. A polarization filter is used to separate the reflection components. This approach enables estimation of the reflectance properties of real objects whose surfaces show specularity as well as diffusely reflected light. The recovered object shape and reflectance properties are then used to synthesize object images with realistic shading effects under arbitrary illumination conditions.

  13. Multi-Sensor Mud Detection

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo L.; Matthies, Larry H.

    2010-01-01

    Robust mud detection is a critical perception requirement for Unmanned Ground Vehicle (UGV) autonomous offroad navigation. A military UGV stuck in a mud body during a mission may have to be sacrificed or rescued, both of which are unattractive options. There are several characteristics of mud that may be detectable with appropriate UGV-mounted sensors. For example, mud only occurs on the ground surface, is cooler than surrounding dry soil during the daytime under nominal weather conditions, is generally darker than surrounding dry soil in visible imagery, and is highly polarized. However, none of these cues are definitive on their own. Dry soil also occurs on the ground surface; shadows, snow, ice, and water can also be cooler than surrounding dry soil; shadows are also darker than surrounding dry soil in visible imagery; and cars, water, and some vegetation are also highly polarized. Shadows, snow, ice, water, cars, and vegetation can all be disambiguated from mud by using a suite of sensors that span multiple bands in the electromagnetic spectrum. Because there are military operations when it is imperative for UGVs to operate without emitting strong, detectable electromagnetic signals, passive sensors are desirable. JPL has developed a daytime mud detection capability using multiple passive imaging sensors. Cues for mud from multiple passive imaging sensors are fused into a single mud detection image using a rule base, and the resultant mud detection is localized in a terrain map using range data generated from a stereo pair of color cameras.

  14. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes

    PubMed Central

    Yebes, J. Javier; Bergasa, Luis M.; García-Garrido, Miguel Ángel

    2015-01-01

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM. PMID:25903553

  15. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes.

    PubMed

    Yebes, J Javier; Bergasa, Luis M; García-Garrido, Miguel Ángel

    2015-04-20

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM.

  16. Development of a driving method suitable for ultrahigh-speed shooting in a 2M-fps 300k-pixel single-chip color camera

    NASA Astrophysics Data System (ADS)

    Yonai, J.; Arai, T.; Hayashida, T.; Ohtake, H.; Namiki, J.; Yoshida, T.; Etoh, T. Goji

    2012-03-01

    We have developed an ultrahigh-speed CCD camera that can capture instantaneous phenomena not visible to the human eye and impossible to capture with a regular video camera. The ultrahigh-speed CCD was specially constructed so that the CCD memory between the photodiode and the vertical transfer path of each pixel can store 144 frames each. For every one-frame shot, the electric charges generated from the photodiodes are transferred in one step to the memory of all the parallel pixels, making ultrahigh-speed shooting possible. Earlier, we experimentally manufactured a 1M-fps ultrahigh-speed camera and tested it for broadcasting applications. Through those tests, we learned that there are cases that require shooting speeds (frame rate) of more than 1M fps; hence we aimed to develop a new ultrahigh-speed camera that will enable much faster shooting speeds than what is currently possible. Since shooting at speeds of more than 200,000 fps results in decreased image quality and abrupt heating of the image sensor and drive circuit board, faster speeds cannot be achieved merely by increasing the drive frequency. We therefore had to improve the image sensor wiring layout and the driving method to develop a new 2M-fps, 300k-pixel ultrahigh-speed single-chip color camera for broadcasting purposes.

  17. Compressive Coded-Aperture Multimodal Imaging Systems

    NASA Astrophysics Data System (ADS)

    Rueda-Chacon, Hoover F.

    Multimodal imaging refers to the framework of capturing images that span different physical domains such as space, spectrum, depth, time, and polarization. For instance, spectral images are modeled as 3D cubes with two spatial coordinates and one spectral coordinate. Three-dimensional cubes spanning just the space domain are referred to as depth volumes, and imaging cubes varying in time, spectrum or depth are referred to as 4D images. Nature itself spans these physical domains, so imaging our real world demands capturing information in at least six domains simultaneously, giving rise to 3D-spatial+spectral+polarized dynamic sequences. Conventional imaging devices, however, can capture dynamic sequences with up to three spectral channels in real time through the use of color sensors. Capturing more spectral channels requires scanning methodologies, which demand long acquisition times. To date, multimodal imaging has generally required a sequence of different imaging sensors, placed in tandem, to simultaneously capture the different physical properties of a scene; different fusion techniques are then employed to merge the individual measurements into a single image. Therefore, new ways to efficiently capture more than three spectral channels of 3D time-varying spatial information, in a single sensor or a few sensors, are of high interest. Compressive spectral imaging (CSI) is an imaging framework that seeks to optimally capture spectral imagery (tens of spectral channels of 2D spatial information) using fewer measurements than required by traditional sensing procedures, which follow Shannon-Nyquist sampling. Instead of capturing direct one-to-one representations of natural scenes, CSI systems acquire linear random projections of the scene and then solve an optimization problem to estimate the 3D spatio-spectral data cube by exploiting the theory of compressive sensing (CS). 
To date, the coding procedure in CSI has been realized through "block-unblock" coded apertures, commonly implemented as chrome-on-quartz photomasks. These apertures either block or pass the entire spectrum of the scene at given spatial locations, thus modulating only the spatial characteristics of the scene. In the first part, this thesis aims to expand the framework of CSI by replacing the traditional block-unblock coded apertures with patterned optical filter arrays, referred to as "color" coded apertures. These apertures are formed by tiny pixelated optical filters, which in turn allow the input image to be modulated not only spatially but also spectrally, enabling more powerful coding strategies. The proposed colored coded apertures are either synthesized through linear combinations of low-pass, high-pass and band-pass filters, paired with binary pattern ensembles realized by a digital micromirror device (DMD), or experimentally realized through thin-film color-patterned filter arrays. The optical forward model of the proposed CSI architectures is presented along with the design and proof-of-concept implementations, which achieve noticeable improvements in reconstruction quality compared with conventional block-unblock coded-aperture CSI architectures. On another front, given the rich information contained in the infrared spectrum as well as the depth domain, this thesis explores multimodal imaging by extending the range sensitivity of current CSI systems to a dual-band visible+near-infrared spectral domain, and it proposes, for the first time, a new imaging device that captures 4D data cubes (2D spatial + 1D spectral + depth) with as few as a single snapshot. Due to the snapshot advantage of this camera, video sequences are possible, enabling the joint capture of 5D imagery and a kind of super-human sensing that may allow the perception of our world in new ways. 
With this, we intend to advance the state of the art in compressive sensing systems that extract depth while accurately capturing spatial and spectral material properties. The applications of such a sensor are self-evident in fields such as computer and robotic vision, because it would allow an artificial intelligence to make informed decisions about not only the location of objects within a scene but also their material properties.
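The "linear random projections plus optimization" recovery that underlies CSI can be sketched on a toy 1D problem. This is a generic compressive-sensing illustration, not the thesis's actual coded-aperture forward model: a random Gaussian matrix stands in for the sensing operator, and ISTA (iterative soft-thresholding) solves the l1-regularized inverse problem:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse ground-truth signal (a stand-in for a vectorized spectral scene)
n, m, k = 128, 64, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)

# Random projection matrix plays the role of the coded-aperture sensing operator
A = rng.normal(0, 1.0 / np.sqrt(m), (m, n))
y = A @ x_true  # m < n compressive measurements

# ISTA: gradient step on ||y - Ax||^2 followed by soft-thresholding (l1 prox)
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(1000):
    x = x + step * (A.T @ (y - A @ x))
    x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)  # small: the sparse signal is recovered from m < n measurements
```

Despite having only half as many measurements as unknowns, the sparsity prior lets the solver pin down the signal, which is the same leverage a color coded aperture provides over full spatio-spectral cubes.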

  18. Optical temperature sensor using thermochromic semiconductors

    DOEpatents

    Kronberg, James W.

    1998-01-01

    An optical temperature measuring device utilizes thermochromic semiconductors which vary in color in response to changes in temperature. The thermochromic material is sealed in a glass matrix which allows the temperature sensor to detect high temperatures without breakdown. Cuprous oxide and cadmium sulfide are among the semiconductor materials which provide the best results. The changes in color may be detected visually using a sensor chip and an accompanying color card.

  19. Optical temperature sensor using thermochromic semiconductors

    DOEpatents

    Kronberg, J.W.

    1998-06-30

    An optical temperature measuring device utilizes thermochromic semiconductors which vary in color in response to changes in temperature. The thermochromic material is sealed in a glass matrix which allows the temperature sensor to detect high temperatures without breakdown. Cuprous oxide and cadmium sulfide are among the semiconductor materials which provide the best results. The changes in color may be detected visually using a sensor chip and an accompanying color card. 8 figs.

  20. Surface-roughness considerations for atmospheric correction of ocean color sensors. II: Error in the retrieved water-leaving radiance.

    PubMed

    Gordon, H R; Wang, M

    1992-07-20

    In the algorithm for the atmospheric correction of coastal zone color scanner (CZCS) imagery, it is assumed that the sea surface is flat. Simulations are carried out to assess the error incurred when the CZCS-type algorithm is applied to a realistic ocean in which the surface is roughened by the wind. In situations where there is no direct Sun glitter (either a large solar zenith angle or the sensor tilted away from the specular image of the Sun), the following conclusions appear justified: (1) the error induced by ignoring the surface roughness is less than or similar to 1 CZCS digital count for wind speeds up to approximately 17 m/s, and therefore can be ignored for this sensor; (2) the roughness-induced error is much more strongly dependent on the wind speed than on the wave shadowing, suggesting that surface effects can be adequately dealt with without precise knowledge of the shadowing; and (3) the error induced by ignoring the Rayleigh-aerosol interaction is usually larger than that caused by ignoring the surface roughness, suggesting that in refining algorithms for future sensors more effort should be placed on dealing with the Rayleigh-aerosol interaction than on the roughness of the sea surface.

  1. Medical color displays and their color calibration: investigations of various calibration methods, tools, and potential improvement in color difference ΔE

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Hashmi, Syed F.; Dallas, William J.; Krupinski, Elizabeth A.; Rehm, Kelly; Fan, Jiahua

    2010-08-01

    Our laboratory has investigated the efficacy of a suite of color calibration and monitor profiling packages which employ a variety of color measurement sensors. Each of the methods computes gamma correction tables for the red, green and blue color channels of a monitor that attempt to: a) match a desired luminance range and tone reproduction curve; and b) maintain a target neutral point across the range of grey values. All of the methods examined here produce International Color Consortium (ICC) profiles that describe the color rendering capabilities of the monitor after calibration. Color profiles incorporate a transfer matrix that establishes the relationship between RGB driving levels and the International Commission on Illumination (CIE) XYZ (tristimulus) values of the resulting on-screen color; the matrix is developed by displaying color patches of known RGB values on the monitor and measuring the tristimulus values with a sensor. The number and chromatic distribution of color patches varies across methods and is usually not under user control. In this work we examine the effect of employing differing calibration and profiling methods on the rendition of color images. A series of color patches encoded in sRGB color space were presented on the monitor using color-management software that utilized the ICC profile produced by each method. The patches were displayed on the calibrated monitor and measured with a Minolta CS200 colorimeter. Differences in intended and achieved luminance and chromaticity were computed using the CIE DE2000 color-difference metric, in which a value of ΔE = 1 is generally considered to be approximately one just noticeable difference (JND) in color. We observed between one and 17 JNDs for individual colors, depending on calibration method and target. 
As an extension of this fundamental work [1], we further improved our calibration method by defining concrete calibration parameters for the display using the NEC wide-gamut puck, and by verifying that those calibration parameters were met with the help of a state-of-the-art spectroradiometer (PR670). With the addition of the PR670, together with an in-house method of profiling and characterization, the color difference ΔE improved substantially.
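The ΔE family of metrics quantifies the distance between an intended and a measured color in CIELAB. The record uses CIEDE2000, which adds lightness, chroma and hue weighting terms; the sketch below implements the much simpler CIE76 formula (plain Euclidean distance in Lab) to illustrate the idea, with hypothetical patch values:

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB space.
    (The study uses the more elaborate CIEDE2000 formula; CIE76 is the
    simplest member of the ΔE family and is shown only for illustration.)"""
    return math.dist(lab1, lab2)

# Hypothetical intended vs. measured CIELAB values for one sRGB patch
intended = (53.2, 80.1, 67.2)
measured = (52.1, 79.0, 68.0)
dE = delta_e_cie76(intended, measured)
print(round(dE, 2))  # → 1.75; ΔE ≈ 1 is roughly one just-noticeable difference
```

On this hypothetical patch the error is just under two JNDs, comparable to the better-performing calibration methods in the study's one-to-17 JND range.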

  2. Assimilation of SeaWiFS Ocean Chlorophyll Data into a Three-Dimensional Global Ocean Model

    NASA Technical Reports Server (NTRS)

    Gregg, Watson W.

    2005-01-01

    Assimilation of satellite ocean color data is a relatively new phenomenon in ocean sciences. However, with routine observations from the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), launched in late 1997, and now with new data from the Moderate Resolution Imaging Spectroradiometer (MODIS) Aqua, there is increasing interest in ocean color data assimilation. Here SeaWiFS chlorophyll data were assimilated with an established three-dimensional global ocean model. The assimilation improved estimates of chlorophyll and primary production relative to a free-run (no assimilation) model. This represents the first attempt at ocean color data assimilation using NASA satellites in a global model. The results suggest the potential of assimilating satellite ocean chlorophyll data for improving models.

  3. Development of on package indicator sensor for real-time monitoring of meat quality

    PubMed Central

    Shukla, Vivek; Kandeepan, G.; Vishnuraj, M. R.

    2015-01-01

    Aim: The aim was to develop an indicator sensor for real-time monitoring of meat quality and to compare the response of the indicator sensor with meat quality parameters at ambient temperature. Materials and Methods: The indicator sensor was prepared using bromophenol blue (1% w/v) as the indicator solution and filter paper as the indicator carrier, and was fabricated by coating the indicator solution onto the carrier by centrifugation. To observe the response of the indicator sensor, buffalo meat was packed in polystyrene foam trays covered with PVC film, and the indicator sensor was attached to the inner side of the packaging film. The pattern of color change in the indicator sensor was monitored and compared with meat quality parameters, viz. total volatile basic nitrogen, D-glucose, standard plate count, and tyrosine value, to assess the suitability of the indicator sensor for predicting meat quality and storage life. Results: The indicator sensor changed its color from yellow to blue, starting from the margins, during the storage period of 24 h at ambient temperature, and this correlated well with changes in meat quality parameters. Conclusions: The indicator sensor can be used for real-time monitoring of meat quality, as its color changed from yellow to blue, starting from the margins, as the meat deteriorated with advancement of the storage period. Thus, by observing the color of the indicator sensor, meat quality and shelf life can be predicted. PMID:27047103

  4. A diamond-based scanning probe spin sensor operating at low temperature in ultra-high vacuum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaefer-Nolte, E.; Wrachtrup, J.; 3rd Institute of Physics and Research Center SCoPE, University Stuttgart, 70569 Stuttgart

    2014-01-15

    We present the design and performance of an ultra-high vacuum (UHV) low temperature scanning probe microscope employing the nitrogen-vacancy color center in diamond as an ultrasensitive magnetic field sensor. Using this center as an atomic-size scanning probe has enabled imaging of nanoscale magnetic fields and single spins under ambient conditions. In this article we describe an experimental setup to operate this sensor in a cryogenic UHV environment. This will extend the applicability to a variety of molecular systems due to the enhanced target spin lifetimes at low temperature and the controlled sample preparation under UHV conditions. The instrument combines a tuning-fork based atomic force microscope (AFM) with a high numeric aperture confocal microscope and the facilities for application of radio-frequency (RF) fields for spin manipulation. We verify a sample temperature of <50 K even for strong laser and RF excitation and demonstrate magnetic resonance imaging with a magnetic AFM tip.

  5. Temperature-Sensitive Coating Sensor Based on Hematite

    NASA Technical Reports Server (NTRS)

    Bencic, Timothy J.

    2011-01-01

    A temperature-sensitive coating, based on hematite (iron III oxide), has been developed to measure surface temperature using spectral techniques. The hematite powder is added to a binder that allows the mixture to be painted on the surface of a test specimen. The coating dynamically changes its relative spectral makeup or color with changes in temperature. The color changes from a reddish-brown appearance at room temperature (25 C) to a black-gray appearance at temperatures around 600 C. The color change is reversible and repeatable with temperature cycling from low to high and back to low temperatures. Detection of the spectral changes can be recorded by different sensors, including spectrometers, photodiodes, and cameras. Using a priori information obtained through calibration experiments in known thermal environments, the color change can then be calibrated to yield accurate quantitative temperature information. Temperature information can be obtained at a point, or over an entire surface, depending on the type of equipment used for data acquisition. Because this innovation uses spectrophotometry principles of operation, rather than the current methods, which use photoluminescence principles, white light can be used for illumination rather than high-intensity short wavelength excitation. The generation of high-intensity white (or potentially filtered long wavelength) light is much easier, and is used more prevalently for photography and video technologies. In outdoor tests, the Sun can be used for short durations as an illumination source as long as the amplitude remains relatively constant. The reflected light is also much higher in intensity than the emitted light from the inefficient current methods. Having a much brighter surface allows a wider array of detection schemes and devices. 
Because color change is the principle of operation, high-quality, lower-cost digital cameras can be used for detection, as opposed to the high-cost imagers needed for intensity measurements with the current methods. Alternative methods of detection are possible to increase the measurement sensitivity. For example, a monochrome camera can be used with an appropriate filter to make a radiometric measurement of normalized intensity change that is proportional to the change in coating temperature. Using different spectral regions yields different sensitivities and calibration curves for converting intensity change to temperature units. Alternatively, using a color camera, a ratio of the standard red, green, and blue outputs can be used as a self-referenced change. The blue region (less than 500 nm) does not change nearly as much as the red region (greater than 575 nm), so a ratio of color intensities will yield a calibrated temperature image. The new temperature sensor coating is easy to apply, is inexpensive, can conform to complex surfaces, and can provide a global surface measurement based on spectrophotometry. The color change, or relative intensity change, in different colors makes optical detection under white light illumination, and the associated interpretation, much easier than in the detection systems of the current methods.
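The self-referenced red/blue ratio described above can be sketched per pixel. The linear-in-log calibration curve and its coefficients below are hypothetical placeholders (a real curve comes from the calibration experiments in known thermal environments); only the direction of the effect follows the record, with the red channel changing while blue stays relatively stable:

```python
import numpy as np

def ratio_to_temperature(rgb_image, a=600.0, b=-250.0):
    """Map the per-pixel red/blue intensity ratio to temperature.
    a, b are HYPOTHETICAL calibration coefficients; redder pixels
    (high R/B ratio) read as cooler, matching the reddish-brown-to-gray
    transition of the hematite coating."""
    r = rgb_image[..., 0].astype(float)
    b_ch = rgb_image[..., 2].astype(float)
    ratio = r / np.maximum(b_ch, 1.0)  # blue channel as the stable reference
    return a + b * np.log(ratio)       # hypothetical calibration curve

# A 2x2 toy image: top-left pixel is strongly red, bottom-right is gray
img = np.array([[[200, 80, 60], [150, 90, 80]],
                [[120, 100, 100], [90, 90, 90]]], dtype=np.uint8)
temps = ratio_to_temperature(img)
print(temps.shape)  # (2, 2) temperature map, one value per pixel
```

Because the ratio is computed within each pixel, the readout is self-referenced: uniform changes in illumination intensity scale both channels and largely cancel out.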

  6. Atmospheric correction using near-infrared bands for satellite ocean color data processing in the turbid western Pacific region.

    PubMed

    Wang, Menghua; Shi, Wei; Jiang, Lide

    2012-01-16

    A regional near-infrared (NIR) ocean normalized water-leaving radiance (nL(w)(λ)) model is proposed for atmospheric correction for ocean color data processing in the western Pacific region, including the Bohai Sea, Yellow Sea, and East China Sea. Our motivation for this work is to derive ocean color products in the highly turbid western Pacific region using the Geostationary Ocean Color Imager (GOCI) onboard South Korean Communication, Ocean, and Meteorological Satellite (COMS). GOCI has eight spectral bands from 412 to 865 nm but does not have shortwave infrared (SWIR) bands that are needed for satellite ocean color remote sensing in the turbid ocean region. Based on a regional empirical relationship between the NIR nL(w)(λ) and diffuse attenuation coefficient at 490 nm (K(d)(490)), which is derived from the long-term measurements with the Moderate-resolution Imaging Spectroradiometer (MODIS) on the Aqua satellite, an iterative scheme with the NIR-based atmospheric correction algorithm has been developed. Results from MODIS-Aqua measurements show that ocean color products in the region derived from the new proposed NIR-corrected atmospheric correction algorithm match well with those from the SWIR atmospheric correction algorithm. Thus, the proposed new atmospheric correction method provides an alternative for ocean color data processing for GOCI (and other ocean color satellite sensors without SWIR bands) in the turbid ocean regions of the Bohai Sea, Yellow Sea, and East China Sea, although the SWIR-based atmospheric correction approach is still much preferred. The proposed atmospheric correction methodology can also be applied to other turbid coastal regions.

  7. A fully automated colorimetric sensing device using smartphone for biomolecular quantification

    NASA Astrophysics Data System (ADS)

    Dutta, Sibasish; Nath, Pabitra

    2017-03-01

    In the present work, the use of a smartphone for colorimetric quantification of biomolecules has been demonstrated. As a proof of concept, BSA protein and carbohydrate have been used as biomolecular samples, treated at different concentrations with Lowry's reagent and Anthrone's reagent, respectively. The change in color of the reagent-treated samples at different concentrations has been recorded with the camera of a smartphone in combination with a custom-designed optomechanical hardware attachment. This change in color has been correlated with the color channels of two different color models, namely RGB (Red, Green, Blue) and HSV (Hue, Saturation, Value). In addition, the change in color intensity has been correlated with the grayscale value for each imaged sample. A custom-designed Android app has been developed to quantify the biomolecular concentration and display the result on the phone itself. The obtained results have been compared with those of a standard spectrophotometer, and highly reliable data have been obtained with the designed sensor. The device is robust, portable and low cost compared to its commercially available counterparts, and the data it produces can be transmitted anywhere in the world through the existing cellular network. It is envisioned that the designed sensing device would find a wide range of applications in analytical and bioanalytical sensing research.
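The three feature sets used for the colorimetric readout (RGB channels, HSV channels, and grayscale) can be extracted from a mean patch color with the standard library alone. A minimal sketch with a hypothetical patch color; the BT.601 luma weights are one common grayscale convention, not necessarily the app's:

```python
import colorsys

def color_features(r, g, b):
    """Extract RGB, HSV and a grayscale value from an 8-bit color (0-255).
    HSV components are returned in [0, 1] as colorsys defines them."""
    rn, gn, bn = r / 255.0, g / 255.0, b / 255.0
    h, s, v = colorsys.rgb_to_hsv(rn, gn, bn)   # stdlib HSV conversion
    gray = 0.299 * r + 0.587 * g + 0.114 * b    # ITU-R BT.601 luma weights
    return {"rgb": (r, g, b), "hsv": (h, s, v), "gray": gray}

# Hypothetical mean color of an imaged reagent-treated sample patch
feat = color_features(120, 60, 180)
print(round(feat["gray"], 1))  # → 91.6
```

Correlating any one of these channels (e.g. hue, or grayscale intensity) against known analyte concentrations yields the calibration curve the app would then invert to report a concentration.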

  8. Potential and Limitations of Low-Cost Unmanned Aerial Systems for Monitoring Altitudinal Vegetation Phenology in the Tropics

    NASA Astrophysics Data System (ADS)

    Silva, T. S. F.; Torres, R. S.; Morellato, P.

    2017-12-01

    Vegetation phenology is a key component of ecosystem function and biogeochemical cycling, and highly susceptible to climatic change. Phenological knowledge in the tropics is limited by a lack of monitoring, traditionally done by laborious direct observation. Ground-based digital cameras can automate daily observations, but offer limited spatial coverage. Imaging by low-cost Unmanned Aerial Systems (UAS) combines the fine resolution of ground-based methods with an unprecedented capacity for spatial coverage, but challenges remain in producing color-consistent multitemporal images. We evaluated the applicability of multitemporal UAS imaging to monitor phenology in tropical altitudinal grasslands and forests, answering: 1) Can very-high-resolution aerial photography from conventional digital cameras be used to reliably monitor vegetative and reproductive phenology? 2) How is UAS monitoring affected by changes in illumination and by sensor physical limitations? We flew imaging missions monthly from Feb-16 to Feb-17, using a UAS equipped with an RGB Canon SX260 camera. Flights were carried out between 10am and 4pm, at 120-150m a.g.l., yielding 5-10cm spatial resolution. To compensate for illumination changes caused by time of day, season and cloud cover, calibration was attempted using reference targets and empirical models, as well as color space transformations. For vegetative phenological monitoring, the multitemporal response was severely affected by changes in illumination conditions, strongly confounding the phenological signal; these variations could not be adequately corrected through calibration due to sensor limitations. For reproductive phenology, the very high resolution of the acquired imagery allowed discrimination of individual reproductive structures for some species, and their stark colorimetric differences from vegetative structures allowed detection of reproductive timing in the HSV color space, despite illumination effects. 
We conclude that reliable vegetative phenology monitoring may exceed the capabilities of consumer cameras, but reproductive phenology can be successfully monitored for species with conspicuous reproductive structures. Further research is being conducted to improve calibration methods and information extraction through machine learning.
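RGB-only vegetation indices of the kind used for this sort of monitoring are simple per-pixel arithmetic. A sketch of NGRDI and one common simplified broadband form of TGI (coefficient conventions for TGI vary across the literature), applied to hypothetical band means:

```python
def ngrdi(r, g):
    """Normalized Green-Red Difference Index: (G - R) / (G + R)."""
    return (g - r) / (g + r)

def tgi(r, g, b):
    """Triangle Greenness Index, simplified broadband form
    G - 0.39*R - 0.61*B (one common formulation; coefficients vary)."""
    return g - 0.39 * r - 0.61 * b

# Hypothetical mean digital numbers for a vegetated plot in an RGB orthomosaic
r, g, b = 90.0, 140.0, 70.0
print(round(ngrdi(r, g), 3), round(tgi(r, g, b), 2))  # → 0.217 62.2
```

NGRDI's normalization by (G + R) gives it some robustness to overall brightness changes, which matters when, as reported above, scene illumination cannot be fully calibrated out.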

  9. Performance measurement of commercial electronic still picture cameras

    NASA Astrophysics Data System (ADS)

    Hsu, Wei-Feng; Tseng, Shinn-Yih; Chiang, Hwang-Cheng; Cheng, Jui-His; Liu, Yuan-Te

    1998-06-01

    Commercial electronic still picture cameras need a low-cost, systematic method for evaluating their performance. In this paper, we present a measurement method for evaluating dynamic range and sensitivity by constructing the opto-electronic conversion function (OECF), fixed pattern noise by the peak S/N ratio (PSNR) and the image shading function (ISF), and spatial resolution by the modulation transfer function (MTF). Evaluation results for the individual color components and the luminance signal from a PC camera using a SONY interlaced CCD array as the image sensor are then presented.
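Of the metrics listed, PSNR is the most compact to state in code. A minimal sketch of the standard 8-bit PSNR definition, exercised on a synthetic flat-field capture with additive noise (a rough stand-in for a fixed-pattern-noise measurement; the paper's exact test charts and procedure are not reproduced here):

```python
import numpy as np

def psnr(reference, test_img, peak=255.0):
    """Peak signal-to-noise ratio in dB between two 8-bit images:
    10 * log10(peak^2 / MSE)."""
    mse = np.mean((reference.astype(float) - test_img.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Synthetic check: a flat mid-gray field vs. a noisy capture of it
rng = np.random.default_rng(42)
flat = np.full((64, 64), 128, dtype=np.uint8)
noisy = np.clip(np.round(flat + rng.normal(0.0, 2.0, flat.shape)),
                0, 255).astype(np.uint8)
print(round(psnr(flat, noisy), 1))  # roughly 42 dB for sigma = 2 noise
```

Higher PSNR on a flat-field target indicates lower fixed-pattern and random noise; a camera evaluation would repeat this per color component, as the paper does for its CCD-based PC camera.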

  10. Microgravity

    NASA Image and Video Library

    2001-01-24

    Image of a soot (smoke) plume made for the Laminar Soot Processes (LSP) experiment during the Microgravity Sciences Lab-1 mission in 1997. LSP-2 will fly in the STS-107 Research 1 mission in 2002. The principal investigator is Dr. Gerard Faeth of the University of Michigan. LSP uses a small jet burner, similar to a classroom butane lighter, that produces flames up to 60 mm (2.3 in) long. Measurements are made with color TV cameras, a temperature sensor, and laser images whose darkness indicates the quantity of soot produced in the flame. Glenn Research Center in Cleveland, OH, manages the project.

  11. Application of Hyperspectal Techniques to Monitoring & Management of Invasive Plant Species Infestation

    DTIC Science & Technology

    2008-01-09

    The image data as acquired from the sensor is a data cloud in multi-dimensional space with each band generating an axis of dimension. When the data... The color of a material is defined by the direction of its unit vector in n-dimensional spectral space. The length of the vector relates only to how...to n-dimensional space. SAM determines the similarity

  12. Open source software and low cost sensors for teaching UAV science

    NASA Astrophysics Data System (ADS)

    Kefauver, S. C.; Sanchez-Bragado, R.; El-Haddad, G.; Araus, J. L.

    2016-12-01

    Drones, also known as UASs (unmanned aerial systems), UAVs (unmanned aerial vehicles) or RPAS (remotely piloted aircraft systems), are both useful advanced scientific platforms and recreational toys that are appealing to younger generations. As such, they can make for excellent education tools as well as low-cost scientific research project alternatives. However, the process of moving from taking pretty pictures to remote sensing science can be daunting if one is presented with only expensive software and sensor options. There are a number of open-source tools and low-cost platform and sensor options available that can provide excellent scientific research results and, by often requiring more user involvement than commercial software and sensors, provide even greater educational benefits. Scale-invariant feature transform (SIFT) algorithm implementations include the Microsoft Image Composite Editor (ICE), which can create quality 2D image mosaics with some motion and terrain adjustments, and VisualSFM (Structure from Motion), which can provide full image mosaicking with movement and orthorectification capacities. RGB image quantification using alternate color space transforms, such as the BreedPix indices, can be calculated via plugins in the open-source software Fiji (http://fiji.sc/Fiji; http://github.com/george-haddad/CIMMYT). Recent analyses of aerial images from UAVs over different vegetation types and environments have shown that RGB metrics can outperform more costly commercial sensors. Specifically, Hue-based pixel counts, the Triangle Greenness Index (TGI), and the Normalized Green Red Difference Index (NGRDI) consistently outperformed NDVI in estimating abiotic and biotic stress impacts on crop health. Also, simple kits are available for NDVI camera conversions. 
Furthermore, suggestions for multivariate analyses of the different RGB indices in the "R program for statistical computing", such as classification and regression trees can allow for a more approachable interpretation of results in the classroom.
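As a concrete classroom example of the RGB indices named above, NGRDI is simply (G - R) / (G + R) computed per pixel; the canopy and soil pixel values below are synthetic, chosen only to illustrate the sign convention.

```python
import numpy as np

def ngrdi(rgb):
    """Normalized Green Red Difference Index, (G - R) / (G + R), per pixel.

    rgb: float array (H, W, 3). A small epsilon guards against division by zero.
    """
    r, g = rgb[..., 0].astype(float), rgb[..., 1].astype(float)
    return (g - r) / (g + r + 1e-9)

# Synthetic check: green canopy pixels give a strongly positive index,
# reddish bare soil gives a negative one.
canopy = np.full((4, 4, 3), (10.0, 200.0, 10.0))
soil = np.full((4, 4, 3), (180.0, 120.0, 80.0))
print(ngrdi(canopy).mean(), ngrdi(soil).mean())
```

Because the index is a normalized ratio of two visible bands, it can be computed from any consumer RGB camera, which is what makes it attractive for low-cost teaching platforms.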

  13. Space Radar Image of Manaus, Brazil

    NASA Image and Video Library

    1999-05-01

    These two false-color images of the Manaus region of Brazil in South America were acquired by the Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar on board the space shuttle Endeavour. The image at left was acquired on April 12, 1994, and the image at right was acquired on October 3, 1994. The area shown is approximately 8 kilometers by 40 kilometers (5 miles by 25 miles). The two large rivers in this image, the Rio Negro (at top) and the Rio Solimoes (at bottom), combine at Manaus (west of the image) to form the Amazon River. The image is centered at about 3 degrees south latitude and 61 degrees west longitude. North is toward the top left of the images. The false colors were created by displaying three L-band polarization channels: red areas correspond to high backscatter, horizontally transmitted and received, while green areas correspond to high backscatter, horizontally transmitted and vertically received. Blue areas show low returns at vertical transmit/receive polarization; hence the bright blue colors of the smooth river surfaces can be seen. Using this color scheme, green areas in the image are heavily forested, while blue areas are either cleared forest or open water. The yellow and red areas are flooded forest or floating meadows. The extent of the flooding is much greater in the April image than in the October image and appears to follow the 10-meter (33-foot) annual rise and fall of the Amazon River. The flooded forest is a vital habitat for fish, and floating meadows are an important source of atmospheric methane. These images demonstrate the capability of SIR-C/X-SAR to study important environmental changes that are impossible to see with optical sensors over regions such as the Amazon, where frequent cloud cover and dense forest canopies block monitoring of flooding. 
Field studies by boat, on foot and in low-flying aircraft by the University of California at Santa Barbara, in collaboration with Brazil's Instituto Nacional de Pesquisas Espaciais, during the first and second flights of the SIR-C/X-SAR system have validated the interpretation of the radar images. http://photojournal.jpl.nasa.gov/catalog/PIA01735

  14. [Present status and trend of heart fluid mechanics research based on medical image analysis].

    PubMed

    Gan, Jianhong; Yin, Lixue; Xie, Shenghua; Li, Wenhua; Lu, Jing; Luo, Anguo

    2014-06-01

    With an introduction of the current main methods for heart fluid mechanics research, we studied the characteristics and weaknesses of three primary analysis methods based on magnetic resonance imaging, color Doppler ultrasound and grayscale ultrasound images, respectively. It is pointed out that particle image velocimetry (PIV), speckle tracking and block matching have the same nature, as all three algorithms adopt block correlation. Further analysis shows that, with the development of information technology and sensors, research on cardiac function and fluid mechanics will focus on the energy transfer process of heart fluid, characteristics of the chamber wall related to blood flow, and fluid-structure interaction.

  15. Garden City, Kansas

    NASA Image and Video Library

    2017-12-08

    Center pivot irrigation systems create red circles of healthy vegetation in this image of croplands near Garden City, Kansas. This image was acquired by Landsat 7’s Enhanced Thematic Mapper Plus (ETM+) sensor on September 25, 2000. This is a false-color composite image made using near-infrared, red, and green wavelengths. The image has also been sharpened using the sensor’s panchromatic band. Credit: NASA/GSFC/Landsat. NASA Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission.

  16. Color constancy: enhancing von Kries adaption via sensor transformations

    NASA Astrophysics Data System (ADS)

    Finlayson, Graham D.; Drew, Mark S.; Funt, Brian V.

    1993-09-01

    Von Kries adaptation has long been considered a reasonable vehicle for color constancy. Since the color constancy performance attainable via the von Kries rule strongly depends on the spectral response characteristics of the human cones, we consider the possibility of enhancing von Kries performance by constructing new `sensors' as linear combinations of the fixed cone sensitivity functions. We show that if surface reflectances are well-modeled by 3 basis functions and illuminants by 2 basis functions then there exists a set of new sensors for which von Kries adaptation can yield perfect color constancy. These new sensors can (like the cones) be described as long-, medium-, and short-wave sensitive; however, both the new long- and medium-wave sensors have sharpened sensitivities -- their support is more concentrated. The new short-wave sensor remains relatively unchanged. A similar sharpening of cone sensitivities has previously been observed in test and field spectral sensitivities measured for the human eye. We present simulation results demonstrating improved von Kries performance using the new sensors even when the restrictions on the illumination and reflectance are relaxed.
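The paper's central claim, that von Kries adaptation becomes exact in a suitably chosen linear combination of the original sensors when reflectances are 3-dimensional and illuminants 2-dimensional, can be checked numerically. The sketch below uses made-up Gaussian bases rather than the paper's measured cone and spectral data; with response matrices Lambda(eps) = eps1*A1 + eps2*A2, diagonalizing M = A2^{-1}A1 yields the new sensor transform.

```python
import numpy as np

rng = np.random.default_rng(0)
wl = np.linspace(400, 700, 301)  # wavelength samples, nm

# Hypothetical smooth spectral bases (illustrative assumptions, not the
# paper's data): 3 reflectance basis functions, 2 illuminant basis
# functions, 3 cone-like sensor sensitivities.
def gauss(mu, sig):
    return np.exp(-0.5 * ((wl - mu) / sig) ** 2)

S = np.stack([gauss(450, 60), gauss(550, 60), gauss(650, 60)])   # reflectance basis
E = np.stack([np.ones_like(wl), (wl - 550.0) / 150.0])           # illuminant basis
R = np.stack([gauss(445, 30), gauss(540, 35), gauss(565, 40)])   # sensor sensitivities

# Linear-model response matrices: responses p = Lambda(eps) @ sigma, where
# Lambda(eps) = eps[0]*A[0] + eps[1]*A[1] and (A[j])[k, i] = sum_l E[j,l] R[k,l] S[i,l].
A = [np.einsum('l,kl,il->ki', E[j], R, S) for j in range(2)]

def Lam(eps):
    return eps[0] * A[0] + eps[1] * A[1]

# New "sharpened" sensors: diagonalize M = A2^{-1} A1. Mapping responses with
# T = V^{-1} A2^{-1} turns every illuminant change into a diagonal scaling.
M = np.linalg.solve(A[1], A[0])
D, V = np.linalg.eig(M)
T = np.linalg.solve(V, np.linalg.inv(A[1]))

for _ in range(5):
    eps = rng.uniform(0.5, 1.5, 2)
    G = T @ Lam(eps) @ V            # algebraically eps[0]*diag(D) + eps[1]*I
    off = G - np.diag(np.diag(G))
    assert np.abs(off).max() < 1e-6 * np.abs(G).max()
print("diagonal (von Kries) model exact in the new sensor basis")
```

Under two illuminants the new-basis responses are related by a diagonal matrix, which is exactly the von Kries rule.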

  17. Color sensor and neural processor on one chip

    NASA Astrophysics Data System (ADS)

    Fiesler, Emile; Campbell, Shannon R.; Kempem, Lother; Duong, Tuan A.

    1998-10-01

    A low-cost, compact, and robust color sensor that can operate in real time under various environmental conditions can benefit many applications, including quality control, chemical sensing, food production, medical diagnostics, energy conservation, monitoring of hazardous waste, and recycling. Unfortunately, existing color sensors are either bulky and expensive or do not provide the required speed and accuracy. In this publication we describe the design of an accurate real-time color classification sensor, together with preprocessing and a subsequent neural network processor integrated on a single complementary metal oxide semiconductor (CMOS) integrated circuit. This one-chip sensor and information processor will be low in cost, robust, and mass-producible using standard commercial CMOS processes. The performance of the chip and the feasibility of its manufacture are demonstrated through computer simulations based on CMOS hardware parameters. Comparisons with competing methodologies show a significantly higher performance for our device.

  18. ASTER First Views of San Francisco River, Brazil - Visible/near Infrared VNIR Image monochrome

    NASA Image and Video Library

    2000-03-11

    This image of the San Francisco River channel, and its surrounding flood zone, in Brazil was acquired by band 3N of ASTER's Visible/Near Infrared sensor. The surrounding area along the river channel, shown in light gray to white, may be covered by dense tropical rain forest. The water surface of the San Francisco River appears rather gray compared with the small lakes and tributaries, which could indicate that the river water is contaminated by suspended material. Image size: approximately 20 km x 20 km; ground resolution: approximately 15 m x 15 m. http://photojournal.jpl.nasa.gov/catalog/PIA02451

  19. Planetary investigation utilizing an imaging spectrometer system based upon charge injection technology

    NASA Technical Reports Server (NTRS)

    Wattson, R. B.; Harvey, P.; Swift, R.

    1975-01-01

    An intrinsic silicon charge injection device (CID) television sensor array has been used in conjunction with a CaMoO4 collinear tunable acousto-optic filter, a 61-inch reflector, a sophisticated computer system, and a digital color TV scan converter/computer to produce near-IR images of Saturn and Jupiter with 10 Å spectral resolution and approximately 3 inch spatial resolution. The CID camera has successfully obtained digitized 100 x 100 array images with 5 minutes of exposure time and slow-scanned readout to a computer. Details of the equipment setup, innovations, problems, experience, data, and final equipment performance limits are given.

  20. Band co-registration modeling of LAPAN-A3/IPB multispectral imager based on satellite attitude

    NASA Astrophysics Data System (ADS)

    Hakim, P. R.; Syafrudin, A. H.; Utama, S.; Jayani, A. P. S.

    2018-05-01

    One significant geometric distortion in images from the LAPAN-A3/IPB multispectral imager is co-registration error between the color channel detectors. Band co-registration distortion can usually be corrected using several approaches: a manual method, an image matching algorithm, or a sensor modeling and calibration approach. This paper develops another approach to minimize band co-registration distortion in LAPAN-A3/IPB multispectral images by using supervised modeling of image matching with respect to satellite attitude. Modeling results show that band co-registration error in the across-track axis is strongly influenced by the yaw angle, while error in the along-track axis is fairly influenced by both the pitch and roll angles. The accuracy of the obtained models is good, with errors of 1-3 pixels for each axis of each pair of band co-registration. This means that the model can be used to correct the distorted images without the need for a slower image matching algorithm, the laborious effort of the manual approach, or sensor calibration. Since the calculation can be executed in seconds, this approach can be used in real-time quick-look image processing in a ground station or even in satellite on-board image processing.

  1. A Simple Method Based on the Application of a CCD Camera as a Sensor to Detect Low Concentrations of Barium Sulfate in Suspension

    PubMed Central

    de Sena, Rodrigo Caciano; Soares, Matheus; Pereira, Maria Luiza Oliveira; da Silva, Rogério Cruz Domingues; do Rosário, Francisca Ferreira; da Silva, Joao Francisco Cajaiba

    2011-01-01

    The development of a simple, rapid and low-cost method based on video image analysis and aimed at the detection of low concentrations of precipitated barium sulfate is described. The proposed system is basically composed of a webcam with a CCD sensor and a conventional dichroic lamp. For this purpose, software for processing and analyzing the digital images based on the RGB (Red, Green and Blue) color system was developed. The proposed method showed very good repeatability and linearity and also presented higher sensitivity than the standard turbidimetric method. The developed method is presented as a simple alternative for future applications in the study of precipitation of inorganic salts and also for detecting the crystallization of organic compounds. PMID:22346607
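The core of a method like this is a linear calibration relating the mean RGB-derived intensity of a camera region of interest to known standard concentrations. The sketch below uses made-up numbers (the paper's own calibration data is not reproduced here) to show the fit-and-invert workflow.

```python
import numpy as np

# Hypothetical calibration data: mean ROI intensity of webcam frames for
# known BaSO4 standards (values are illustrative, not from the paper).
conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0])               # mg/L standards
mean_intensity = np.array([12.1, 24.0, 35.8, 59.7, 107.5])  # scattered-light signal

# Least-squares straight line intensity = a*conc + b over the linear range.
a, b = np.polyfit(conc, mean_intensity, 1)

def concentration_from_intensity(intensity):
    """Invert the calibration for a new frame's mean ROI intensity."""
    return (intensity - b) / a

print(concentration_from_intensity(47.8))   # near the middle of the range
```

In practice the intensity would come from averaging the relevant RGB channel(s) over a fixed region of each frame, with the same lamp and camera settings used for calibration and measurement.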

  2. Robotic vision. [process control applications

    NASA Technical Reports Server (NTRS)

    Williams, D. S.; Wilf, J. M.; Cunningham, R. T.; Eskenazi, R.

    1979-01-01

    Robotic vision, involving the use of a vision system to control a process, is discussed. The design and selection of active sensors, employing radiation of radio waves, sound waves, and laser light, respectively, to light up unobservable features in the scene are considered, as are the design and selection of passive sensors, which rely on external sources of illumination. The segmentation technique, by which an image is separated into different collections of contiguous picture elements having such common characteristics as color, brightness, or texture, is examined, with emphasis on the edge detection technique. The IMFEX (image feature extractor) system, performing edge detection and thresholding at 30 frames/sec television frame rates, is described. Template matching and discrimination approaches to object recognition are noted. Applications of robotic vision in industry, for tasks too monotonous or too dangerous for workers, are mentioned.

  3. Video processing of remote sensor data applied to uranium exploration in Wyoming. [Roll-front U deposits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levinson, R.A.; Marrs, R.W.; Crockell, F.

    1979-06-30

    LANDSAT satellite imagery and aerial photography can be used to map areas of altered sandstone associated with roll-front uranium deposits. Image data must be enhanced so that alteration spectral contrasts can be seen, and video image processing is a fast, low-cost, and efficient tool. For LANDSAT data, the 7/4 ratio produces the best enhancement of altered sandstone. The 6/4 ratio is most effective for color infrared aerial photography. Geochemical and mineralogical associations occur in unaltered, altered, and ore roll-front zones. Samples from Pumpkin Buttes show that iron is the primary coloring agent that makes alteration visually detectable. Eh and pH changes associated with the passage of a roll front cause oxidation of magnetite and pyrite to hematite, goethite, and limonite in the host sandstone, thereby producing the alteration. Statistical analyses show that the detectability of geochemical and color zonation in host sands is weakened by soil-forming processes. Alteration can only be mapped in areas of thin soil cover and moderate to sparse vegetative cover.

  4. Measuring Patient Mobility in the ICU Using a Novel Noninvasive Sensor

    PubMed Central

    Ma, Andy J.; Rawat, Nishi; Reiter, Austin; Shrock, Christine; Zhan, Andong; Stone, Alex; Rabiee, Anahita; Griffin, Stephanie; Needham, Dale M.; Saria, Suchi

    2017-01-01

    Objectives To develop and validate a noninvasive mobility sensor to automatically and continuously detect and measure patient mobility in the ICU. Design Prospective, observational study. Setting Surgical ICU at an academic hospital. Patients Three hundred sixty-two hours of sensor color and depth image data were recorded and curated into 109 segments, each containing 1,000 images, from eight patients. Interventions None. Measurements and Main Results Three Microsoft Kinect sensors (Microsoft, Beijing, China) were deployed in one ICU room to collect continuous patient mobility data. We developed software that automatically analyzes the sensor data to measure mobility and assign the highest level within a time period. To characterize the highest mobility level, a validated 11-point mobility scale was collapsed into four categories: nothing in bed, in-bed activity, out-of-bed activity, and walking. Of the 109 sensor segments, the noninvasive mobility sensor was developed using 26 segments from three ICU patients and validated on the 83 remaining segments from five different patients. Three physicians annotated each segment for the highest mobility level. The weighted kappa (κ) statistic for agreement between automated noninvasive mobility sensor output and manual physician annotation was 0.86 (95% CI, 0.72–1.00). Disagreement primarily occurred in the "nothing in bed" versus "in-bed activity" categories, because the sensor assessed movement continuously and was therefore significantly more sensitive to motion than physician annotation using a discrete manual scale. Conclusions The noninvasive mobility sensor is a novel and feasible method for automating evaluation of ICU patient mobility. PMID:28291092
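The weighted kappa reported above penalizes disagreements between the four ordinal mobility categories in proportion to their distance. A self-contained linear-weighted Cohen's kappa can be written in a few lines; the sensor/physician label sequences below are invented for illustration, not the study's data.

```python
import numpy as np

def weighted_kappa(a, b, k):
    """Linear-weighted Cohen's kappa for two raters over k ordinal categories.

    a, b: integer label sequences in {0..k-1}. Disagreement weight is
    |i - j| / (k - 1), so near-misses are penalized less than far misses.
    """
    a, b = np.asarray(a), np.asarray(b)
    O = np.zeros((k, k))
    for i, j in zip(a, b):
        O[i, j] += 1
    O /= O.sum()                                      # observed proportions
    E = np.outer(O.sum(axis=1), O.sum(axis=0))        # chance-agreement proportions
    W = np.abs(np.subtract.outer(np.arange(k), np.arange(k))) / (k - 1)
    return 1.0 - (W * O).sum() / (W * E).sum()

# Four collapsed mobility levels: 0 nothing-in-bed, 1 in-bed, 2 out-of-bed, 3 walking.
sensor    = [0, 1, 1, 2, 3, 0, 1, 2, 2, 3]
physician = [1, 1, 1, 2, 3, 0, 0, 2, 2, 3]
print(round(weighted_kappa(sensor, physician, 4), 3))
```

Perfect agreement gives kappa of 1, and chance-level agreement gives 0, matching the interpretation of the 0.86 value in the abstract.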

  5. Low SWaP multispectral sensors using dichroic filter arrays

    NASA Astrophysics Data System (ADS)

    Dougherty, John; Varghese, Ron

    2015-06-01

    The benefits of multispectral imaging are well established in a variety of applications including remote sensing, authentication, satellite and aerial surveillance, machine vision, biomedical, and other scientific and industrial uses. However, many of the potential solutions require more compact, robust, and cost-effective cameras to realize these benefits. The next generation of multispectral sensors and cameras needs to deliver improvements in size, weight, power, portability, and spectral band customization to support widespread deployment for a variety of purpose-built aerial, unmanned, and scientific applications. A novel implementation uses micro-patterning of dichroic filters into Bayer and custom mosaics, enabling true real-time multispectral imaging with simultaneous multi-band image acquisition. Consistent with color image processing, individual spectral channels are de-mosaiced with each channel providing an image of the field of view. This approach can be implemented across a variety of wavelength ranges and on a variety of detector types including linear, area, silicon, and InGaAs. This dichroic filter array approach can also reduce payloads and increase range for unmanned systems, with the capability to support both handheld and autonomous systems. Recent examples and results of 4 band RGB + NIR dichroic filter arrays in multispectral cameras are discussed. Benefits and tradeoffs of multispectral sensors using dichroic filter arrays are compared with alternative approaches - including their passivity, spectral range, customization options, and scalable production.
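The de-mosaicing step described above can be illustrated with a minimal sketch: for a 2x2 repeating 4-band mosaic, each channel is a subsampled plane of the raw frame. The R/G/N/B tile layout below is an assumption for illustration; actual dichroic filter array layouts are application-specific.

```python
import numpy as np

# Assumed 2x2 repeating 4-band mosaic layout:
#   R  G
#   N  B
def demosaic_planes(raw):
    """Split a raw 4-band mosaic into per-channel quarter-resolution planes."""
    return {
        "R": raw[0::2, 0::2],
        "G": raw[0::2, 1::2],
        "N": raw[1::2, 0::2],   # near-infrared channel
        "B": raw[1::2, 1::2],
    }

# Synthetic raw frame where each band carries a constant, known value.
tile = np.array([[10, 20], [40, 30]])
raw = np.tile(tile, (4, 4))                # 8x8 sensor
planes = demosaic_planes(raw)
print(planes["R"].mean(), planes["N"].mean(), planes["R"].shape)
```

A full pipeline would then interpolate each plane back to the sensor resolution, exactly as Bayer demosaicing does for color cameras.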

  6. Smoke over Montana and Wyoming

    NASA Technical Reports Server (NTRS)

    2002-01-01

    California was not the only western state affected by fire during the last weekend of July. Parts of Montana and Wyoming were covered by a thick pall of smoke on July 30, 2000. This true-color image was captured by the Sea-viewing Wide Field-of-view Sensor (SeaWiFS). It is much easier to distinguish smoke from cloud in the color SeaWiFS imagery than the black and white Geostationary Operational Environmental Satellite (GOES) imagery. However, GOES provides almost continuous coverage (animation of Sequoia National Forest fire) and has thermal infrared bands (Extensive Fires in the Western U.S.) which detect the heat from fires. On Monday July 31, 2000, eight fires covering 105,000 acres were burning in Montana, and three fires covering 12,000 acres were burning in Wyoming. Image provided by the SeaWiFS Project, NASA/Goddard Space Flight Center, and ORBIMAGE

  7. Incorporating active-learning techniques into the photonics-related teaching in the Erasmus Mundus Master in "Color in Informatics and Media Technology"

    NASA Astrophysics Data System (ADS)

    Pozo, Antonio M.; Rubiño, Manuel; Hernández-Andrés, Javier; Nieves, Juan L.

    2014-07-01

    In this work, we present a teaching methodology using active-learning techniques in the course "Devices and Instrumentation" of the Erasmus Mundus Master's Degree in "Color in Informatics and Media Technology" (CIMET). A part of the course "Devices and Instrumentation" of this Master's is dedicated to the study of image sensors and methods to evaluate their image quality. The teaching methodology that we present consists of incorporating practical activities during the traditional lectures. One of the innovative aspects of this teaching methodology is that students apply the concepts and methods studied in class to real devices. For this, students use their own digital cameras, webcams, or cellphone cameras in class. These activities provide students a better understanding of the theoretical subject given in class and encourage the active participation of students.

  8. Supercontinuum as a light source for miniaturized endoscopes.

    PubMed

    Lu, M K; Lin, H Y; Hsieh, C C; Kao, F J

    2016-09-01

    In this work, we have successfully implemented supercontinuum-based illumination through single fiber coupling. The integration of single fiber illumination with a miniature CMOS sensor forms a very slim and powerful camera module for endoscopic imaging. A set of tests and in vivo animal experiments are conducted accordingly to characterize the corresponding illuminance, spectral profile, intensity distribution, and image quality. The key illumination parameters of the supercontinuum, including color rendering index (CRI: 72%~97%) and correlated color temperature (CCT: 3,100K~5,200K), are modified with external filters and compared with those from an LED light source (CRI~76% & CCT~6,500K). The very high spatial coherence of the supercontinuum allows high luminosity conduction through a single multimode fiber (core size~400μm), whose distal end tip is attached with a diffusion tip to broaden the solid angle of illumination (from less than 10° to more than 80°).
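For reference, CCT values like those quoted above are commonly estimated from CIE 1931 (x, y) chromaticity coordinates. McCamy's cubic approximation is one widely used formula (a generic sketch, not necessarily the measurement method used in this work):

```python
def mccamy_cct(x, y):
    """Correlated color temperature (K) from CIE 1931 (x, y) chromaticity,
    using McCamy's cubic approximation. Reasonable for typical white-light
    sources; accuracy degrades far from the blackbody locus.
    """
    n = (x - 0.3320) / (0.1858 - y)
    # CCT = 449 n^3 + 3525 n^2 + 6823.3 n + 5520.33 (Horner form below)
    return ((449.0 * n + 3525.0) * n + 6823.3) * n + 5520.33

# Sanity checks against standard illuminants:
print(mccamy_cct(0.3127, 0.3290))   # D65, expected near 6504 K
print(mccamy_cct(0.4476, 0.4074))   # illuminant A, expected near 2856 K
```

A colorimeter or spectrometer supplies (x, y); the formula then gives the CCT that filter adjustments such as those in the paper would shift.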

  9. System design for 3D wound imaging using low-cost mobile devices

    NASA Astrophysics Data System (ADS)

    Sirazitdinova, Ekaterina; Deserno, Thomas M.

    2017-03-01

    The state-of-the-art method of wound assessment is a manual, imprecise and time-consuming procedure. Performed by clinicians, it has limited reproducibility and accuracy, large time consumption and high costs. Novel technologies such as laser scanning microscopy, multi-photon microscopy, optical coherence tomography and hyperspectral imaging, as well as devices relying on structured light sensors, make accurate wound assessment possible. However, such methods have limitations due to high costs and may lack portability and availability. In this paper, we present a low-cost wound assessment system and architecture for fast and accurate cutaneous wound assessment using inexpensive consumer smartphone devices. Computer vision techniques are applied either on the device or the server to reconstruct wounds in 3D as dense models, which are generated from images taken with the built-in single camera of a smartphone device. The system architecture includes imaging (smartphone), processing (smartphone or PACS) and storage (PACS) devices. It supports tracking over time by alignment of 3D models, color correction using a reference color card placed into the scene, and automatic segmentation of wound regions. Using our system, we are able to detect and document quantitative characteristics of chronic wounds, including size, depth, volume and rate of healing, as well as qualitative characteristics such as color, presence of necrosis and type of involved tissue.
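Color correction against a reference card, as used above, is typically done by fitting a transform that maps the RGB values measured on the card's patches to their known reference values. A minimal least-squares sketch follows; the patch values and the simulated color cast are invented for illustration, not taken from the paper.

```python
import numpy as np

# Known reference RGB values for six (hypothetical) card patches.
ref = np.array([[52, 52, 52], [200, 200, 200], [175, 54, 60],
                [70, 148, 73], [56, 61, 150], [231, 199, 31]], float)

# Simulate a camera with a color cast: per-channel gains plus an offset.
cast = np.diag([1.15, 0.95, 0.80])
measured = ref @ cast.T + np.array([8.0, -5.0, 12.0])

# Fit an affine transform measured -> ref in the least-squares sense
# (homogeneous column models the offset).
A = np.hstack([measured, np.ones((len(measured), 1))])
M, *_ = np.linalg.lstsq(A, ref, rcond=None)

corrected = A @ M
err = np.abs(corrected - ref).max()
print(err < 1e-8)   # the purely affine cast is recovered exactly
```

Applying the same affine transform M to every pixel of the wound image then normalizes colors across lighting conditions, which is what makes color a usable longitudinal measurement.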

  10. REMOTE SENSING IN OCEANOGRAPHY.

    DTIC Science & Technology

    remote sensing from satellites. Sensing of oceanographic variables from aircraft began with the photographing of waves and ice. Since then, remote measurement of sea surface temperatures and wave heights has become routine. Sensors tested for oceanographic applications include multi-band color cameras, radar scatterometers, infrared spectrometers and scanners, passive microwave radiometers, and radar imagers. Remote sensing has found its greatest application in providing rapid coverage of large oceanographic areas for synoptic analysis and

  11. Novel Noninvasive Brain Disease Detection System Using a Facial Image Sensor

    PubMed Central

    Shu, Ting; Zhang, Bob; Tang, Yuan Yan

    2017-01-01

    Brain disease, including any condition or disability that affects the brain, is fast becoming a leading cause of death. The traditional diagnostic methods for brain disease are time-consuming, inconvenient and non-patient-friendly. As more and more individuals undergo examinations to determine if they suffer from any form of brain disease, developing noninvasive, efficient, and patient-friendly detection systems will be beneficial. Therefore, in this paper, we propose a novel noninvasive brain disease detection system based on the analysis of facial colors. The system consists of four components. A facial image is first captured through a specialized sensor, and four facial key blocks are next located automatically from the various facial regions. Color features are extracted from each block to form a feature vector for classification via the Probabilistic Collaborative based Classifier. To thoroughly test the system and its performance, seven facial key block combinations were experimented with. The best result was achieved using the second facial key block, where it showed that the Probabilistic Collaborative based Classifier is the most suitable. The overall performance of the proposed system achieves an accuracy of 95%, a sensitivity of 94.33%, a specificity of 95.67%, and an average processing time (for one sample) of <1 min at brain disease detection. PMID:29292716

  12. Organic Photodiodes: The Future of Full Color Detection and Image Sensing.

    PubMed

    Jansen-van Vuuren, Ross D; Armin, Ardalan; Pandey, Ajay K; Burn, Paul L; Meredith, Paul

    2016-06-01

    Major growth in the image sensor market is largely a result of the expansion of digital imaging into cameras, whether stand-alone or integrated within smart cellular phones or automotive vehicles. Applications in biomedicine, education, environmental monitoring, optical communications, pharmaceutics and machine vision are also driving the development of imaging technologies. Organic photodiodes (OPDs) are now being investigated for existing imaging technologies, as their properties make them interesting candidates for these applications. OPDs offer cheaper processing methods, devices that are light, flexible and compatible with large (or small) areas, and the ability to tune the photophysical and optoelectronic properties, both at a material and a device level. Although the concept of OPDs has been around for some time, it is only relatively recently that significant progress has been made, with their performance now reaching the point that they are beginning to rival their inorganic counterparts in a number of performance criteria, including the linear dynamic range, detectivity, and color selectivity. This review covers the progress made in the OPD field, describing their development as well as the challenges and opportunities. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Integrated High Resolution Digital Color Light Sensor in 130 nm CMOS Technology.

    PubMed

    Strle, Drago; Nahtigal, Uroš; Batistell, Graciele; Zhang, Vincent Chi; Ofner, Erwin; Fant, Andrea; Sturm, Johannes

    2015-07-22

    This article presents a color light detection system integrated in 130 nm CMOS technology. The sensors and corresponding electronics detect light in the CIE XYZ color luminosity space using on-chip integrated sensors without any additional process steps, a high-resolution analog-to-digital converter, and a dedicated DSP algorithm. The sensor consists of a set of laterally arranged integrated photodiodes that are partly covered by metal, where color separation between the photodiodes is achieved by lateral carrier diffusion together with wavelength-dependent absorption. A high-resolution, hybrid ΣΔ ADC converts each photodiode's current into a 22-bit digital result, canceling the dark current of the photodiodes. The digital results are further processed by the DSP, which calculates normalized XYZ or RGB color and intensity parameters using linear transformations of the three photodiode responses by multiplication of the data with a transformation matrix, where the coefficients are extracted by training in combination with a pseudo-inverse operation and the least-mean-square approximation. The sensor system detects the color light parameters with 22-bit accuracy, consumes less than 60 μA on average at 10 readings per second, and occupies approx. 0.8 mm(2) of silicon area (including three photodiodes and the analog part of the ADC). The DSP is currently implemented on an FPGA.
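The pseudo-inverse training described above amounts to a least-squares fit of a 3x3 matrix mapping the three diode responses onto known XYZ targets. A toy version with synthetic data (the true matrix, response values, and noise level below are invented; the paper derives coefficients from measured diode responses):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ground-truth transform: XYZ = true_T @ diode_response.
true_T = np.array([[0.9, 0.3, 0.1],
                   [0.2, 1.0, 0.1],
                   [0.0, 0.2, 1.1]])

# Training set: diode responses to 50 reference lights (one light per row)
# and the corresponding, slightly noisy, known XYZ values.
D = rng.uniform(0.1, 1.0, (50, 3))
XYZ = D @ true_T.T + rng.normal(0.0, 1e-3, (50, 3))

# Pseudo-inverse (least-squares) estimate of the transformation matrix.
T_hat = (np.linalg.pinv(D) @ XYZ).T

print(np.abs(T_hat - true_T).max() < 0.01)   # coefficients recovered closely
```

With more training lights than unknowns, the pseudo-inverse averages out measurement noise, which is why calibration sets larger than three lights are used.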

  14. High-speed line-scan camera with digital time delay integration

    NASA Astrophysics Data System (ADS)

    Bodenstorfer, Ernst; Fürtler, Johannes; Brodersen, Jörg; Mayer, Konrad J.; Eckel, Christian; Gravogl, Klaus; Nachtnebel, Herbert

    2007-02-01

    In high-speed image acquisition and processing systems, the speed of operation is often limited by the amount of available light, due to short exposure times. Therefore, high-speed applications often use line-scan cameras based on charge-coupled device (CCD) sensors with time delay integration (TDI). Synchronous shift and accumulation of photoelectric charges on the CCD chip, according to the objects' movement, result in a longer effective exposure time without introducing additional motion blur. This paper presents a high-speed color line-scan camera based on a commercial complementary metal oxide semiconductor (CMOS) area image sensor with a Bayer filter matrix and a field programmable gate array (FPGA). The camera implements a digital equivalent of the TDI effect exploited in CCD cameras. The proposed design benefits from the high frame rates of CMOS sensors and from the possibility of arbitrarily addressing the rows of the sensor's pixel array. For the digital TDI, only a small number of rows are read out from the area sensor; these are then shifted and accumulated according to the movement of the inspected objects. This paper gives a detailed description of the digital TDI algorithm implemented on the FPGA. Relevant aspects for practical application are discussed and key features of the camera are listed.
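The shift-and-accumulate idea behind digital TDI can be demonstrated in a few lines: for an object moving down one pixel row per frame, shifting each successive frame back by its index makes the object's signal add coherently while noise adds incoherently. This is a software sketch of the principle only; the camera in the paper performs it on the FPGA with a handful of sensor rows.

```python
import numpy as np

def digital_tdi(frames, n_stages):
    """Accumulate n_stages frames, shifting frame k up by k rows to undo
    the motion of an object that moves down one row per frame."""
    acc = np.zeros_like(frames[0], dtype=float)
    for k in range(n_stages):
        acc += np.roll(frames[k], -k, axis=0)
    return acc

# Synthetic scene: a faint one-row "object" moving down one row per frame,
# buried in zero-mean noise.
rng = np.random.default_rng(0)
H, W, N = 32, 16, 8
frames = []
for k in range(N):
    f = rng.normal(0.0, 1.0, (H, W))
    f[4 + k, :] += 2.0               # object sits at row 4+k in frame k
    frames.append(f)

tdi = digital_tdi(frames, N)
# Signal grows ~N while noise grows ~sqrt(N), so the object's row dominates.
print(int(np.argmax(tdi.mean(axis=1))))
```

This is exactly the longer-effective-exposure-without-motion-blur property the abstract describes, realized with row addressing instead of on-chip charge transfer.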

  15. BreedVision--a multi-sensor platform for non-destructive field-based phenotyping in plant breeding.

    PubMed

    Busemeyer, Lucas; Mentrup, Daniel; Möller, Kim; Wunder, Erik; Alheit, Katharina; Hahn, Volker; Maurer, Hans Peter; Reif, Jochen C; Würschum, Tobias; Müller, Joachim; Rahe, Florian; Ruckelshausen, Arno

    2013-02-27

    Achieving the food and energy security of an increasing world population, likely to exceed nine billion by 2050, represents a major challenge for plant breeding. Our ability to measure traits under field conditions has improved little over the last decades and currently constitutes a major bottleneck in crop improvement. This work describes the development of a tractor-pulled multi-sensor phenotyping platform for small grain cereals with a focus on the technological development of the system. Various optical sensors such as light curtain imaging, 3D time-of-flight cameras, laser distance sensors, hyperspectral imaging as well as color imaging are integrated into the system to collect spectral and morphological information on the plants. The study specifies the mechanical design, the system architecture for data collection and data processing, the phenotyping procedure of the integrated system, results from field trials for data quality evaluation, as well as calibration results for plant height determination as a quantified example of a platform application. Repeated measurements were taken at three developmental stages of the plants in the years 2011 and 2012, employing triticale (×Triticosecale Wittmack L.) as a model species. The technical repeatability of measurement results was high for nearly all sensor types, which confirmed the high suitability of the platform under field conditions. The developed platform constitutes a robust basis for the development and calibration of further sensor and multi-sensor fusion models to measure various agronomic traits such as plant moisture content, lodging, tiller density or biomass yield, and thus represents a major step towards widening the bottleneck of non-destructive phenotyping for crop improvement and plant genetic studies.

  16. BreedVision — A Multi-Sensor Platform for Non-Destructive Field-Based Phenotyping in Plant Breeding

    PubMed Central

    Busemeyer, Lucas; Mentrup, Daniel; Möller, Kim; Wunder, Erik; Alheit, Katharina; Hahn, Volker; Maurer, Hans Peter; Reif, Jochen C.; Würschum, Tobias; Müller, Joachim; Rahe, Florian; Ruckelshausen, Arno

    2013-01-01

    Achieving the food and energy security of an increasing world population, likely to exceed nine billion by 2050, represents a major challenge for plant breeding. Our ability to measure traits under field conditions has improved little over the last decades and currently constitutes a major bottleneck in crop improvement. This work describes the development of a tractor-pulled multi-sensor phenotyping platform for small grain cereals with a focus on the technological development of the system. Various optical sensors such as light curtain imaging, 3D time-of-flight cameras, laser distance sensors, hyperspectral imaging as well as color imaging are integrated into the system to collect spectral and morphological information on the plants. The study specifies the mechanical design, the system architecture for data collection and data processing, the phenotyping procedure of the integrated system, results from field trials for data quality evaluation, as well as calibration results for plant height determination as a quantified example of a platform application. Repeated measurements were taken at three developmental stages of the plants in the years 2011 and 2012, employing triticale (×Triticosecale Wittmack L.) as a model species. The technical repeatability of measurement results was high for nearly all sensor types, which confirmed the high suitability of the platform under field conditions. The developed platform constitutes a robust basis for the development and calibration of further sensor and multi-sensor fusion models to measure various agronomic traits such as plant moisture content, lodging, tiller density or biomass yield, and thus represents a major step towards widening the bottleneck of non-destructive phenotyping for crop improvement and plant genetic studies. PMID:23447014

  17. Common aperture multispectral spotter camera: Spectro XR

    NASA Astrophysics Data System (ADS)

    Petrushevsky, Vladimir; Freiman, Dov; Diamant, Idan; Giladi, Shira; Leibovich, Maor

    2017-10-01

    The Spectro XR™ is an advanced color/NIR/SWIR/MWIR 16'' payload recently developed by Elbit Systems / ELOP. The payload's primary sensor is a spotter camera with a common 7'' aperture. The sensor suite also includes an MWIR zoom, an EO zoom, a laser designator or rangefinder, a laser pointer / illuminator, and a laser spot tracker. A rigid structure, vibration damping, and 4-axis gimbals enable a high level of line-of-sight stabilization. The payload's feature list includes a multi-target video tracker, precise boresight, strap-on IMU, embedded moving map, geodetic calculations suite, and image fusion. The paper describes the main technical characteristics of the spotter camera. A visible-quality, all-metal front catadioptric telescope maintains optical performance over a wide range of environmental conditions. High-efficiency coatings separate the incoming light into EO, SWIR, and MWIR band channels. Both the EO and SWIR bands have dual FOV and 3 spectral filters each. Several variants of focal plane array formats are supported. The common aperture design facilitates superior DRI performance in EO and SWIR in comparison to conventionally configured payloads. Special spectral calibration and color correction extend the effective range of color imaging. An advanced CMOS FPA and the low F-number of the optics facilitate low-light performance. The SWIR band provides further atmospheric penetration, as well as see-spot capability at especially long ranges, due to asynchronous pulse detection. The MWIR band has good sharpness over the entire field of view and (with a full HD FPA) delivers an amount of detail far exceeding that of VGA-equipped FLIRs. The Spectro XR offers a level of performance typically associated with larger and heavier payloads.

  18. Portable equipment for determining ripeness in Hass avocado using a low cost color sensor

    NASA Astrophysics Data System (ADS)

    Toro, Jessica; Daza, Carolina; Vega, Fabio; Diaz, Leonardo; Torres, Cesar

    2015-08-01

    The avocado is a climacteric fruit that does not ripen on the tree because it produces a maturation inhibitor that passes to the fruit through the pedicel; ripening occurs naturally during storage or can be induced as required. In the post-harvest stage, ripeness is basically determined by the experience of the farmer or buyer. In this work we developed portable equipment for determining the ripeness of Hass avocado using a low-cost TC3200 color sensor and an LCD to display the result. The prototype reads the RGB color frequencies from the sensor and estimates the stage of ripeness among four different stages of post-harvest ripening.
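
    A minimal sketch of the kind of nearest-reference classification such a prototype might perform on the sensor's RGB reading. The four stage names and reference colors below are hypothetical, not taken from the paper or the TC3200 datasheet.

```python
# Hypothetical reference RGB readings for four post-harvest ripeness
# stages of Hass avocado (illustrative values only).
STAGE_REFS = {
    "unripe":   (60, 120, 50),
    "breaking": (80, 100, 45),
    "ripe":     (70, 70, 40),
    "overripe": (40, 40, 30),
}

def classify_ripeness(rgb):
    """Assign the stage whose reference color is nearest to the
    measured RGB triple (squared Euclidean distance)."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(STAGE_REFS, key=lambda stage: d2(rgb, STAGE_REFS[stage]))

stage = classify_ripeness((62, 118, 52))
```

    On a microcontroller driving the LCD, the same table lookup and distance comparison fits comfortably in integer arithmetic.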

  19. Use of thermal infrared remote sensing data for fisheries, environmental monitoring, oil and gas exploration, and ship routing.

    NASA Astrophysics Data System (ADS)

    Roffer, M. A.; Gawlikowski, G.; Muller-Karger, F.; Schaudt, K.; Upton, M.; Wall, C.; Westhaver, D.

    2006-12-01

    Thermal infrared (TIR) and ocean color remote sensing data (1.1 - 4.0 km) are being used as the primary data source in decision-making systems for fisheries management, commercial and recreational fishing advisory services, fisheries research, environmental monitoring, oil and gas operations, and ship routing. Experience over the last 30 years suggests that while ocean color and other remote sensing data (e.g., altimetry) are important data sources, TIR presently yields the most useful data for studying ocean surface circulation synoptically on a daily basis. This is due primarily to its greater temporal resolution, but also to a better understanding of the dynamics of sea surface temperature compared with variations in ocean color, and to the spatial limitations of altimeter data. Information derived from commercial operations and research is being used to improve the operational efficiency of fishing vessels (e.g., reduce search time and increase catch rate) and to improve our understanding of the variations in catch distribution and rate needed to properly manage fisheries. This information is also being used by the oil and gas industry to minimize transit time and thus save costs (e.g., tug charter, insurance), and to increase production and revenue by up to 500K dollars a day. The data are also being used to reduce the risk of equipment loss, and of loss of time and revenue, due to sudden and unexpected currents such as eddies. Sequential image analysis integrating TIR and ocean color provided near-real-time, synoptic visualization of the rapid and wide dispersal of coastal waters from the northern Gulf of Mexico following Hurricanes Katrina and Rita in September 2005. The satellite data and analysis techniques have also been used to monitor the effects and movement of other potentially environmentally damaging substances, such as dispersing nutrient-enriched waste water offshore.
    A review of our experience in several commercial applications and research efforts will reinforce the importance and benefits of TIR compared to other remote sensing data. Examples of sequential image analysis and side-by-side image comparisons will demonstrate the utility of TIR for oceanographic applications. This will emphasize that TIR research and development should be continued, as well as implemented on all new research sensor packages. Sea surface temperature, derived from TIR, has the longest history and reliability for synoptic observations of ocean circulation. Thus, any new sensor packages should be fitted with TIR at the same temporal and spatial resolution to facilitate an objective comparison of the utility of the new sensors with the TIR.

  20. Surface currents in the Bohai Sea derived from the Korean Geostationary Ocean Color Imager (GOCI)

    NASA Astrophysics Data System (ADS)

    Jiang, L.; Wang, M.

    2016-02-01

    The first geostationary ocean color satellite sensor, the Geostationary Ocean Color Imager (GOCI) onboard the Korean Communication, Ocean, and Meteorological Satellite, can monitor and measure ocean phenomena over an area of 2500 × 2500 km² around the western Pacific region centered at 36°N and 130°E. Hourly measurements during the day, from around 9:00 to 16:00 local time, are a unique capability of GOCI for monitoring ocean features of higher temporal variability. In this presentation, we show some recent results of GOCI-derived ocean surface currents in the Bohai Sea using the Maximum Cross-Correlation (MCC) feature tracking method and compare the results with altimetry-inverted tidal current observations produced from the Oregon State University (OSU) Tidal Inversion Software (OTIS). The performance of the GOCI-based MCC method is assessed and the discrepancies between the GOCI- and OTIS-derived currents are evaluated. A series of sensitivity studies are conducted with images from various satellite products and of various time differences, MCC adjustable parameters, and influence from other forcings such as wind, to find the best setups for optimal MCC performance. Our results demonstrate that GOCI can effectively provide real-time monitoring of not only water optical, biological, and biogeochemical variability, but also the physical dynamics in the region.
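
    The core of the MCC feature-tracking step can be sketched as a brute-force search for the offset that maximizes the normalized cross-correlation between a template from the first image and candidate windows in the second; operational implementations add filtering, quality control, and subpixel refinement.

```python
import numpy as np

def mcc_displacement(template, image, search=5):
    """Return the (dy, dx) offset maximizing the normalized
    cross-correlation between `template` (a patch from image 1) and
    windows of the later `image`, which must pad the template by
    `search` pixels on every side."""
    h, w = template.shape
    t = template - template.mean()
    best_r, best = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = image[search + dy:search + dy + h,
                        search + dx:search + dx + w]
            c = win - win.mean()
            denom = np.sqrt((t * t).sum() * (c * c).sum())
            if denom > 0:
                r = (t * c).sum() / denom
                if r > best_r:
                    best_r, best = r, (dy, dx)
    return best, best_r

# Synthetic check: embed the template at a known (dy, dx) = (2, 1).
rng = np.random.default_rng(0)
image = rng.random((20, 20))
template = image[7:17, 6:16].copy()
(dy, dx), r = mcc_displacement(template, image, search=5)
```

    Dividing the retrieved pixel offset by the time difference between the two hourly GOCI images, and scaling by the pixel size, yields a surface current vector for that patch.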

  1. Phytoplankton off the West Coast of Africa

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Just off the coast of West Africa, persistent northeasterly trade winds often churn up deep ocean water. When the nutrients in these deep waters reach the ocean's surface, they often give rise to large blooms of phytoplankton. This image of the Mauritanian coast shows swirls of phytoplankton fed by the upwelling of nutrient-rich water. The scene was acquired by the Medium Resolution Imaging Spectrometer (MERIS) aboard the European Space Agency's ENVISAT. MERIS will monitor changes in phytoplankton across Earth's oceans and seas, both for the purpose of managing fisheries and conducting global change research. NASA scientists will use data from this European instrument in the Sensor Intercomparison and Merger for Biological and Interdisciplinary Oceanic Studies (SIMBIOS) program. The mission of SIMBIOS is to construct a consistent long-term dataset of ocean color (phytoplankton abundance) measurements made by multiple satellite instruments, including the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) and the Moderate-Resolution Imaging Spectroradiometer (MODIS). For more information about MERIS and ENVISAT, visit the ENVISAT home page. Image copyright European Space Agency

  2. How the wet side of NOAA (NMFS and NOS) is using JPSS data

    NASA Astrophysics Data System (ADS)

    Wilson, C.

    2016-12-01

    The VIIRS (Visible Infrared Imaging Radiometer Suite) instrument on the JPSS satellite, launched in 2011, is the most recent of a series of US ocean-color satellite measurements. With the launch of VIIRS we now have a nineteen-year continuous time-series of ocean-color measurements, starting with SeaWiFS (Sea-Viewing Wide Field-of-View Sensor), launched in 1997, and followed by MODIS (Moderate Resolution Imaging Spectroradiometer) on the Aqua satellite that was launched in 2002. What is significant about the VIIRS launch is that it represents a transition from ocean-color satellite data being generated from research missions launched by NASA to an operational data-stream that NOAA has responsibility for. In this presentation I will present a broad array of projects that will demonstrate how NOS (National Ocean Service) and NMFS (National Marine Fisheries Service) are using VIIRS data, both ocean-color and sea-surface temperature. Since fisheries and ecosystems studies typically require long time series, on the order of years to decades, the utility of the VIIRS data is that it has been intercalibrated with legacy data-streams to provide a climate data record. The majority of the projects highlighted were developed as part of the NOAA ocean satellite course that has been conducted annually since 2005.

  3. Three-dimensional intraoperative ultrasound of vascular malformations and supratentorial tumors.

    PubMed

    Woydt, Michael; Horowski, Anja; Krauss, Juergen; Krone, Andreas; Soerensen, Niels; Roosen, Klaus

    2002-01-01

    The benefits and limits of a magnetic sensor-based 3-dimensional (3D) intraoperative ultrasound technique during surgery of vascular malformations and supratentorial tumors were evaluated. Twenty patients with 11 vascular malformations and 9 supratentorial tumors undergoing microsurgical resection or clipping were investigated with an interactive magnetic sensor data acquisition system allowing freehand scanning. An ultrasound probe with a mounted sensor was used after craniotomies to localize lesions, outline tumors or malformation margins, and identify supplying vessels. A 3D data set was obtained allowing reformation of multiple slices in all 3 planes and comparison to 2-dimensional (2D) intraoperative ultrasound images. Off-line gray-scale segmentation analysis allowed differentiation between tissue with different echogenicities. Color-coded information about blood flow was extracted from the images with a reconstruction algorithm. This allowed photorealistic surface displays of perfused tissue, tumor, and surrounding vessels. Three-dimensional intraoperative ultrasound data acquisition was obtained within 5 minutes. Off-line analysis and reconstruction time depends on the type of imaging display and can take up to 30 minutes. The spatial relation between aneurysm sac and surrounding vessels or the skull base could be enhanced in 3 out of 6 aneurysms with 3D intraoperative ultrasound. Perforating arteries were visible in 3 cases only by using 3D imaging. 3D ultrasound provides a promising imaging technique, offering the neurosurgeon an intraoperative spatial orientation of the lesion and its vascular relationships. Thereby, it may improve safety of surgery and understanding of 2D ultrasound images.

  4. A Monitoring System for Laying Hens That Uses a Detection Sensor Based on Infrared Technology and Image Pattern Recognition.

    PubMed

    Zaninelli, Mauro; Redaelli, Veronica; Luzi, Fabio; Bontempo, Valentino; Dell'Orto, Vittorio; Savoini, Giovanni

    2017-05-24

    In Italy, organic egg production farms use free-range housing systems with a big outdoor area and a flock of no more than 500 hens. With additional devices and/or farming procedures, the whole flock could be forced to stay in the outdoor area for a limited time of the day. As a consequence, ozone treatments of the housing areas could be performed in order to reduce the levels of atmospheric ammonia and bacterial load without risks, due to its toxicity, both for hens and workers. However, an automatic monitoring system, and a sensor able to detect the presence of animals, would be necessary. For this purpose, a first sensor was developed, but some limits related to the time necessary to detect a hen were observed. In this study, significant improvements for this sensor are proposed. They were reached by an image pattern recognition technique applied to thermographic images acquired from the housing system. An experimental group of seven laying hens was selected for the tests, which were carried out for three weeks. The first week was used to set up the sensor. Different templates to use for the pattern recognition were studied and different floor temperature shifts were investigated. At the end of these evaluations, a template of elliptical shape, with sizes of 135 × 63 pixels, was chosen. Furthermore, a temperature shift of one degree was selected to calculate, for each image, a color background threshold to apply in the following field tests. The obtained results showed an improvement of the sensor detection accuracy, which reached sensitivity and specificity values of 95.1% and 98.7%. In addition, the time necessary to detect a hen, or classify a case, was reduced to two seconds. This result could allow the sensor to control a bigger area of the housing system. Thus, the resulting monitoring system could allow the sanitary treatments to be performed without risks both for animals and humans.
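
    The background-threshold step described above can be sketched as follows. The one-degree shift over the floor temperature comes from the abstract; the fill-factor presence criterion standing in for the full elliptical template matching, and the 50% value, are assumptions for illustration.

```python
import numpy as np

# Pixel count of the chosen elliptical template (~135 x 63 px).
TEMPLATE_AREA = int(np.pi * (135 / 2) * (63 / 2))

def detect_hen(thermal, floor_temp, shift=1.0, fill=0.5):
    """Crude presence test on a thermographic frame: pixels warmer
    than the floor by more than `shift` degrees are candidate hen
    pixels; a hen is declared present when enough of them could fill
    a substantial fraction of the elliptical template."""
    mask = thermal > (floor_temp + shift)
    return bool(mask.sum() >= fill * TEMPLATE_AREA)

# Toy frames: an empty floor, then one with a hen-sized warm blob.
floor_temp = 20.0
scene = np.full((200, 200), floor_temp)
empty = detect_hen(scene, floor_temp)
scene[20:140, 20:120] = 35.0
occupied = detect_hen(scene, floor_temp)
```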

  5. A Monitoring System for Laying Hens That Uses a Detection Sensor Based on Infrared Technology and Image Pattern Recognition

    PubMed Central

    Zaninelli, Mauro; Redaelli, Veronica; Luzi, Fabio; Bontempo, Valentino; Dell’Orto, Vittorio; Savoini, Giovanni

    2017-01-01

    In Italy, organic egg production farms use free-range housing systems with a big outdoor area and a flock of no more than 500 hens. With additional devices and/or farming procedures, the whole flock could be forced to stay in the outdoor area for a limited time of the day. As a consequence, ozone treatments of the housing areas could be performed in order to reduce the levels of atmospheric ammonia and bacterial load without risks, due to its toxicity, both for hens and workers. However, an automatic monitoring system, and a sensor able to detect the presence of animals, would be necessary. For this purpose, a first sensor was developed, but some limits related to the time necessary to detect a hen were observed. In this study, significant improvements for this sensor are proposed. They were reached by an image pattern recognition technique applied to thermographic images acquired from the housing system. An experimental group of seven laying hens was selected for the tests, which were carried out for three weeks. The first week was used to set up the sensor. Different templates to use for the pattern recognition were studied and different floor temperature shifts were investigated. At the end of these evaluations, a template of elliptical shape, with sizes of 135 × 63 pixels, was chosen. Furthermore, a temperature shift of one degree was selected to calculate, for each image, a color background threshold to apply in the following field tests. The obtained results showed an improvement of the sensor detection accuracy, which reached sensitivity and specificity values of 95.1% and 98.7%. In addition, the time necessary to detect a hen, or classify a case, was reduced to two seconds. This result could allow the sensor to control a bigger area of the housing system. Thus, the resulting monitoring system could allow the sanitary treatments to be performed without risks both for animals and humans. PMID:28538654

  6. Natural-color and color-infrared image mosaics of the Colorado River corridor in Arizona derived from the May 2009 airborne image collection

    USGS Publications Warehouse

    Davis, Philip A.

    2013-01-01

    The Grand Canyon Monitoring and Research Center (GCMRC) of the U.S. Geological Survey (USGS) periodically collects airborne image data for the Colorado River corridor within Arizona (fig. 1) to allow scientists to study the impacts of Glen Canyon Dam water release on the corridor’s natural and cultural resources. These data are collected from just above Glen Canyon Dam (in Lake Powell) down to the entrance of Lake Mead, for a total distance of 450 kilometers (km) and within a 500-meter (m) swath centered on the river’s mainstem and its seven main tributaries (fig. 1). The most recent airborne data collection in 2009 acquired image data in four wavelength bands (blue, green, red, and near infrared) at a spatial resolution of 20 centimeters (cm). The image collection used the latest model of the Leica ADS40 airborne digital sensor (the SH52), which uses a single optic for all four bands and collects and stores band radiance in 12-bits. Davis (2012) reported on the performance of the SH52 sensor and on the processing steps required to produce the nearly flawless four-band image mosaic (sectioned into map tiles) for the river corridor. The final image mosaic has a total of only 3 km of surface defects in addition to some areas of cloud shadow because of persistent inclement weather during data collection. The 2009 four-band image mosaic is perhaps the best image dataset that exists for the entire Arizona part of the Colorado River. Some analyses of these image mosaics do not require the full 12-bit dynamic range or all four bands of the calibrated image database, in which atmospheric scattering (or haze) had not been removed from the four bands. To provide scientists and the general public with image products that are more useful for visual interpretation, the 12-bit image data were converted to 8-bit natural-color and color-infrared images, which also removed atmospheric scattering within each wavelength-band image. 
The conversion required an evaluation of the histograms of each band’s digital-number population within each map tile throughout the corridor and the determination of the digital numbers corresponding to the lower and upper one percent of the picture-element population within each map tile. Visual examination of the image tiles that were given a 1-percent stretch (whereby the lower 1-percent 12-bit digital number is assigned an 8-bit value of zero and the upper 1-percent 12-bit digital number is assigned an 8-bit value of 255) indicated that this stretch sufficiently removed atmospheric scattering, which provided improved image clarity and true natural colors for all surface materials. The lower and upper 1-percent, 12-bit digital numbers for each wavelength-band image in the image tiles exhibit erratic variations along the river corridor; the variations exhibited similar trends in both the lower and upper 1-percent digital numbers for all four wavelength-band images (figs. 2–5). The erratic variations are attributed to (1) daily variations in atmospheric water-vapor content due to monsoonal storms, (2) variations in channel water color due to variable sediment input from tributaries, and (3) variations in the amount of topographic shadows within each image tile, in which reflectance is dominated by atmospheric scattering. To make the surface colors of the stretched, 8-bit images consistent among adjacent image tiles, it was necessary to average both the lower and upper 1-percent digital values for each wavelength-band image over 20 river miles to subdue the erratic variations. The average lower and upper 1-percent digital numbers for each image tile (figs. 
2–5) were used to convert the 12-bit image values to 8-bit values and the resulting 8-bit four-band images were stored as natural-color (red, green, and blue wavelength bands) and color-infrared (near-infrared, red, and green wavelength bands) images in embedded geotiff format, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. All image data are projected in the State Plane (SP) map projection using the central Arizona zone (202) and the North American Datum of 1983 (NAD83). The map-tile scheme used to segment the corridor image mosaic followed the standard USGS quarter-quadrangle (QQ) map borders, but the high resolution (20 cm) of the images required further quarter segmentation (QQQ) of the standard QQ tiles, where the image mosaic covered a large fraction of a QQ map tile (segmentation shown in figure 6, where QQ_1 to QQ_4 shows the numbering convention used to designate a quarter of a QQ tile). To minimize the size of each image tile, each image or map tile was subset to only include that part of the tile that had image data. In addition, some QQQ image tiles within a QQ tile were combined when adjacent QQQ map tiles were small. Thus, some image tiles consist of combinations of QQQ map tiles, some consist of an entire QQ map tile, and some consist of two adjoining QQ map tiles. The final image tiles number 143, which is a large number of files to list on the Internet for both the natural-color and color-infrared images. Thus, the image tiles were placed in seven file folders based on the one-half-degree geographic boundaries within the study area (fig. 7). The map tiles in each file folder were compressed to minimize folder size for more efficient downloading. The file folders are sequentially referred to as zone 1 through zone 7, proceeding down river (fig. 7). 
The QQ designations of the image tiles contained within each folder or zone are shown on the index map for each respective zone (figs. 8–14).
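
    The 1-percent stretch described in this record can be sketched for a single band. This is a simplified per-tile version that ignores the 20-river-mile averaging of the cutoff values; mapping the lower 1-percent digital number to zero is what subtracts most of the additive atmospheric-scattering (haze) signal.

```python
import numpy as np

def one_percent_stretch(band12, low=1.0, high=99.0):
    """Convert a 12-bit band to 8 bits with a 1-percent linear
    stretch: the lower 1-percent digital number maps to 0, the upper
    1-percent digital number maps to 255, and the tails are clipped."""
    lo, hi = np.percentile(band12, [low, high])
    scaled = (band12.astype(float) - lo) / max(hi - lo, 1.0)
    return np.clip(scaled * 255.0, 0.0, 255.0).astype(np.uint8)

# Toy 12-bit tile: a uniform ramp over the full 0..4095 range.
band = np.arange(4096, dtype=np.uint16).reshape(64, 64)
out = one_percent_stretch(band)
```

    Stacking three stretched bands (red, green, blue) yields the natural-color product; substituting near-infrared, red, and green yields the color-infrared product.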

  7. Autonomous vision networking: miniature wireless sensor networks with imaging technology

    NASA Astrophysics Data System (ADS)

    Messinger, Gioia; Goldberg, Giora

    2006-09-01

    The recent emergence of integrated PicoRadio technology, the rise of low power, low cost, System-On-Chip (SOC) CMOS imagers, coupled with the fast evolution of networking protocols and digital signal processing (DSP), created a unique opportunity to achieve the goal of deploying large-scale, low cost, intelligent, ultra-low power distributed wireless sensor networks for the visualization of the environment. Of all sensors, vision is the most desired, but its applications in distributed sensor networks have been elusive so far. Not any more. The practicality and viability of ultra-low power vision networking has been proven, and its applications are countless: from security and chemical analysis to industrial monitoring, asset tracking and visual recognition, vision networking represents a truly disruptive technology applicable to many industries. The presentation discusses some of the critical components and technologies necessary to make these networks and products affordable and ubiquitous - specifically PicoRadios, CMOS imagers, imaging DSP, networking and overall wireless sensor network (WSN) system concepts. The paradigm shift, from large, centralized and expensive sensor platforms to small, low cost, distributed sensor networks, is possible due to the emergence and convergence of a few innovative technologies. Avaak has developed a vision network that is aided by other sensors such as motion, acoustic and magnetic, and plans to deploy it for use in military and commercial applications. In comparison to other sensors, imagers produce large data files that require pre-processing and a certain level of compression before they are transmitted to a network server, in order to minimize the load on the network. Some of the most innovative chemical detectors currently in development are based on sensors that change color or pattern in the presence of the desired analytes.
These changes are easily recorded and analyzed by a CMOS imager and an on-board DSP processor. Image processing at the sensor node level may also be required for applications in security, asset management and process control. Due to the data bandwidth requirements posed on the network by video sensors, new networking protocols or video extensions to existing standards (e.g. Zigbee) are required. To this end, Avaak has designed and implemented an ultra-low power networking protocol designed to carry large volumes of data through the network. The low power wireless sensor nodes that will be discussed include a chemical sensor integrated with a CMOS digital camera, a controller, a DSP processor and a radio communication transceiver, which enables relaying of an alarm or image message to a central station. In addition to the communications, identification is very desirable; hence, location awareness will later be incorporated into the system in the form of time-of-arrival triangulation via wide band signaling. While the wireless imaging kernel already exists, specific applications for surveillance and chemical detection are under development by Avaak, as part of a co-funded program from ONR and DARPA. Avaak is also designing vision networks for commercial applications - some of which are undergoing initial field tests.

  8. A skin-integrated transparent and stretchable strain sensor with interactive color-changing electrochromic displays.

    PubMed

    Park, Heun; Kim, Dong Sik; Hong, Soo Yeong; Kim, Chulmin; Yun, Jun Yeong; Oh, Seung Yun; Jin, Sang Woo; Jeong, Yu Ra; Kim, Gyu Tae; Ha, Jeong Sook

    2017-06-08

    In this study, we report on the development of a stretchable, transparent, and skin-attachable strain sensor integrated with a flexible electrochromic device as a human skin-inspired interactive color-changing system. The strain sensor consists of a spin-coated conductive nanocomposite film of poly(vinyl alcohol)/multi-walled carbon nanotube/poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) on a polydimethylsiloxane substrate. The sensor exhibits excellent performance: high sensitivity, high durability, fast response, and high transparency. An electrochromic device (ECD) made of electrochemically synthesized polyaniline nanofibers and V2O5 on an indium-tin-oxide-coated polyethylene terephthalate film changes color from yellow to dark blue on application of voltage. The strain sensor and ECD are integrated on skin via an Arduino circuit for an interactive color change with the variation of the applied strain, which enables a real-time visual display of body motion. This integrated system demonstrates high potential for use in interactive wearable devices, military applications, and smart robots.

  9. Diurnal changes in ocean color sensed in satellite imagery

    NASA Astrophysics Data System (ADS)

    Arnone, Robert; Vandermuelen, Ryan; Soto, Inia; Ladner, Sherwin; Ondrusek, Michael; Yang, Haoping

    2017-07-01

    Measurements of diurnal changes in ocean color in turbid coastal regions of the Gulf of Mexico were characterized using above-water spectral radiometry from a National Aeronautics and Space Administration (Aerosol Robotic Network WaveCIS CSI-06) site that can provide 8 to 10 observations per day. Satellite capability to detect diurnal changes in ocean color was characterized using hourly overlapping afternoon orbits of the Visible Infrared Imaging Radiometer Suite (VIIRS) Suomi National Polar-orbiting Partnership ocean color sensor and validated with in situ observations. The monthly cycle of diurnal changes was investigated for different water masses using VIIRS overlaps. Results showed the capability of satellite observations to monitor hourly color changes in coastal regions, which can be impacted by vertical movement of optical layers in response to tides, resuspension, and river plume dispersion. The spatial variability of VIIRS diurnal changes showed the occurrence and displacement of phytoplankton blooming and decaying processes. The diurnal change in ocean color was above 20%, which represents a 30% change in chlorophyll-a. Seasonal changes in diurnal ocean color for different water masses suggest differences in summer and winter responses to surface processes. The diurnal changes observed using satellite ocean color can be used to define surface processes associated with biological activity, vertical changes in optical depth, and advection of water masses.

  10. Visual optics: an engineering approach

    NASA Astrophysics Data System (ADS)

    Toadere, Florin

    2010-11-01

    The human visual system interprets information from visible light to build a representation of the world surrounding the body. It derives color by comparing the responses to light from the three types of photoreceptor cones in the eye. These long, medium, and short cones are sensitive to the red, green, and blue portions of the visible spectrum, respectively. We simulate color vision for normal eyes and show the effects of dyes, filters, glasses, and windows on color perception when the test image is illuminated with the D65 light source. Beyond color perception, the human eye can suffer from diseases and disorders; the eye can be seen as an optical instrument with its own eye print. We present aspects of current methods and technologies that can capture and correct the human eye's wavefront aberrations, focusing on the Seidel aberration formulas, Zernike polynomials, the Shack-Hartmann sensor, LASIK, interferogram fringe aberrations, and the Talbot effect.
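    The Zernike polynomials mentioned above are the standard basis for describing ocular wavefront aberrations over the unit pupil. A minimal sketch evaluating a few low-order terms (OSA/ANSI ordering, shown here without normalization constants) follows; it is illustrative only and not taken from the paper.

```python
import math

def zernike_low_order(rho, theta):
    """A few low-order Zernike polynomials on the unit pupil
    (0 <= rho <= 1), unnormalized, in polar coordinates."""
    return {
        "piston":        1.0,
        "tilt_x":        rho * math.cos(theta),
        "tilt_y":        rho * math.sin(theta),
        "defocus":       2.0 * rho ** 2 - 1.0,
        "astigmatism_0": rho ** 2 * math.cos(2.0 * theta),
    }

terms = zernike_low_order(0.5, 0.0)
# defocus at rho = 0.5: 2 * 0.25 - 1 = -0.5
```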

  11. Laminar Soot Processes

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Image of a soot (smoke) plume made for the Laminar Soot Processes (LSP) experiment during the Microgravity Sciences Lab-1 mission in 1997. LSP-2 will fly on the STS-107 Research 1 mission in 2002. The principal investigator is Dr. Gerard Faeth of the University of Michigan. LSP uses a small jet burner, similar to a classroom butane lighter, that produces flames up to 60 mm (2.3 in) long. Diagnostics include color TV cameras, a temperature sensor, and laser images whose darkness indicates the quantity of soot produced in the flame. Glenn Research Center in Cleveland, Ohio, manages the project.

  12. Non-iridescent Transmissive Structural Color Filter Featuring Highly Efficient Transmission and High Excitation Purity

    PubMed Central

    Shrestha, Vivek Raj; Lee, Sang-Shin; Kim, Eun-Soo; Choi, Duk-Yong

    2014-01-01

    Nanostructure-based color filtering has been considered an attractive replacement for current colorant pigmentation in display technologies, in view of its increased efficiency, ease of fabrication, and eco-friendliness. For such structural filtering, iridescence (the angular dependency of the color), which poses a detrimental barrier to the practical development of high-performance display and sensing devices, should be mitigated. We report on a non-iridescent transmissive structural color filter, fabricated over a large area of 76.2 × 25.4 mm2, taking advantage of a stack of three etalon resonators in dielectric films based on a high-index cavity in amorphous silicon. The proposed filter features a high transmission above 80%, a high excitation purity of 0.93, and non-iridescence over a range of 160°, exhibiting no significant change in the center wavelength, dominant wavelength, or excitation purity, which implies no change in the hue and saturation of the output color. The proposed structure may find applications in large-scale display and imaging sensor systems. PMID:24815530
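    The etalon resonators underlying this filter follow the classical Fabry-Perot (Airy) transmission function. The sketch below evaluates that textbook formula for a single lossless etalon; the index, thickness, and mirror reflectance used are hypothetical, not the paper's design values.

```python
import math

def etalon_transmission(wavelength_nm, n, d_nm, reflectance, aoi_rad=0.0):
    """Airy transmission of a single lossless Fabry-Perot etalon:
        T = 1 / (1 + F * sin^2(delta / 2)),  F = 4R / (1 - R)^2,
        delta = 4 * pi * n * d * cos(theta) / lambda.
    n, d_nm: cavity refractive index and thickness; reflectance: mirror R."""
    delta = 4.0 * math.pi * n * d_nm * math.cos(aoi_rad) / wavelength_nm
    finesse_coeff = 4.0 * reflectance / (1.0 - reflectance) ** 2
    return 1.0 / (1.0 + finesse_coeff * math.sin(delta / 2.0) ** 2)

# Hypothetical high-index cavity (n ~ 3.6, roughly amorphous silicon):
# a thickness of m * lambda / (2n) puts the resonance at lambda = 550 nm.
t_peak = etalon_transmission(550.0, 3.6, 550.0 / 3.6, 0.3)
```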

  13. Advances in radiometry for ocean color

    USGS Publications Warehouse

    Brown, S.W.; Clark, D.K.; Johnson, B.C.; Yoon, H.; Lykke, K.R.; Flora, S.J.; Feinholz, M.E.; Souaidia, N.; Pietras, C.; Stone, T.C.; Yarbrough, M.A.; Kim, Y.S.; Barnes, R.A.; Mueller, J.L.

    2004-01-01

    We have presented a number of recent developments in radiometry that directly impact the uncertainties achievable in ocean-color research. Specifically, a new (2000) U. S. national irradiance scale, a new LASER-based facility for irradiance and radiance responsivity calibrations, and applications of the LASER facility for the calibration of sun photometers and characterization of spectrographs were discussed. For meaningful long-time-series global chlorophyll-a measurements, all instruments involved in radiometric measurements, including satellite sensors, vicarious calibration sensors, sensors used in the development of bio-optical algorithms and atmospheric characterization need to be fully characterized and corrected for systematic errors, including, but not limited to, stray light. A unique, solid-state calibration source is under development to reduce the radiometric uncertainties in ocean color instruments, in particular below 400 nm. Lunar measurements for trending of on-orbit sensor channel degradation were described. Unprecedented assessments, within 0.1 %, of temporal stability and drift in a satellite sensor's radiance responsivity are achievable with this approach. These developments advance the field of ocean color closer to the desired goal of reducing the uncertainty in the fundamental radiometry to a small component of the overall uncertainty in the derivation of remotely sensed ocean-color data products such as chlorophyll a.

  14. Validation of Leaf Area Index measurements based on the Wireless Sensor Network platform

    NASA Astrophysics Data System (ADS)

    Song, Q.; Li, X.; Liu, Q.

    2017-12-01

    The leaf area index (LAI) is one of the important parameters for estimating plant canopy function, and it is significant for agricultural analyses such as crop yield estimation and disease evaluation. Quick and accurate acquisition of crop LAI is therefore vital. In this study, the LAI of corn crops was measured with three methods: the leaf length and width method (LAILLW), the indirect instrument measurement method (LAII), and the leaf area index sensor method (LAIS). The LAI value obtained from LAILLW can be regarded as an approximate true value. The LAI-2200, the currently widespread LAI canopy analyzer, is used for LAII. LAIS, based on a wireless sensor network, can acquire crop images automatically and simplifies data collection, while the other two methods require field measurements by hand. Comparison of LAIS against the other two methods verified the validity and reliability of the LAIS observation system. LAI trends were similar across the three methods, and LAI increased with time over the first two months of corn growth, during which LAIS cost less manpower, energy, and time. LAI derived from LAIS was more accurate than LAII in the early growth stage, when the blades are small, especially under strong light. In addition, LAI processed from a false-color image with near-infrared information was much closer to the true value than that from a true-color picture once corn growth exceeded about one and a half months.
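    The leaf length and width method used as the reference above estimates per-leaf area as length × width scaled by a shape coefficient, summed over leaves and divided by the ground area sampled. A minimal sketch follows; the coefficient k = 0.75 is a commonly used value for maize but is an assumption here, and the leaf dimensions are hypothetical.

```python
def lai_length_width(leaves_cm, ground_area_cm2, k=0.75):
    """Leaf area index from the leaf length x width method.

    leaves_cm      -- iterable of (length, width) pairs in cm
    ground_area_cm2 -- ground area the plant occupies, in cm^2
    k              -- leaf shape coefficient (assumed ~0.75 for maize)
    """
    leaf_area = sum(k * length * width for length, width in leaves_cm)
    return leaf_area / ground_area_cm2

# Hypothetical plant: three leaves over 1000 cm^2 of ground
lai = lai_length_width([(60, 8), (70, 9), (50, 7)], 1000.0)
```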

  15. Assessment, Validation, and Refinement of the Atmospheric Correction Algorithm for the Ocean Color Sensors. Chapter 19

    NASA Technical Reports Server (NTRS)

    Wang, Menghua

    2003-01-01

    The primary focus of this research is the evaluation and development of the atmospheric correction algorithm and the calibration and characterization of the satellite sensor. It is well known that the atmospheric correction, which removes more than 90% of the sensor-measured signal contributed by the atmosphere in the visible, is the key procedure in ocean color remote sensing (Gordon and Wang, 1994). The accuracy and effectiveness of the atmospheric correction directly affect the remotely retrieved ocean bio-optical products. On the other hand, for ocean color remote sensing, in order to obtain the required accuracy in the water-leaving signals derived from satellite measurements, an on-orbit vicarious calibration of the whole system, i.e., sensor and algorithms, is necessary. In addition, it is important to address (i) cross-calibration of two or more sensors and (ii) in-orbit vicarious calibration of the sensor-atmosphere system. The goal of this research is to develop methods for meaningful comparison and possible merging of data products from multiple ocean color missions. In the past year, much effort has gone into (a) understanding and correcting the artifacts that appeared in the SeaWiFS-derived ocean and atmospheric products; (b) developing an efficient method for generating the SeaWiFS aerosol lookup tables; (c) evaluating the effects of calibration error in the near-infrared (NIR) band on the atmospheric correction of ocean color remote sensors; (d) comparing the aerosol correction algorithm using the single-scattering epsilon (the current SeaWiFS algorithm) versus the multiple-scattering epsilon method; and (e) continuing activities for the International Ocean-Colour Coordinating Group (IOCCG) atmospheric correction working group. In this report, I briefly present and discuss these and some other research activities.

  16. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Balkhab mineral district in Afghanistan: Chapter B in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Balkhab mineral district, which has copper deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. 
As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. 
The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for Balkhab) and the WGS84 datum. The final image mosaics were subdivided into two overlapping tiles or quadrants because of the large size of the target area. The two image tiles (or quadrants) for the Balkhab area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Balkhab study area, one subarea was designated for detailed field investigations (that is, the Balkhab Prospect subarea); this subarea was extracted from the area's image mosaic and is provided as separate embedded geotiff images.
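    The band-reflectance adjustment described above fits a linear correspondence between overlapping images and maps each new image onto the standard image's scale. A minimal sketch of that linear least-squares step, under the assumption of a simple gain/offset model over the overlap pixels, is given below; it is illustrative only and not the USGS implementation.

```python
def fit_gain_offset(standard_px, other_px):
    """Ordinary least-squares fit of other ~ gain * standard + offset
    over corresponding pixels in the overlap of two images."""
    n = len(standard_px)
    mx = sum(standard_px) / n
    my = sum(other_px) / n
    sxx = sum((x - mx) ** 2 for x in standard_px)
    sxy = sum((x - mx) * (y - my) for x, y in zip(standard_px, other_px))
    gain = sxy / sxx
    offset = my - gain * mx
    return gain, offset

def adjust_to_standard(pixels, gain, offset):
    # Invert the fit: map the other image's reflectances onto the
    # standard image's radiometric scale.
    return [(p - offset) / gain for p in pixels]

# Hypothetical overlap reflectances: the second image reads twice as
# bright plus a 0.05 bias relative to the standard image.
standard = [0.10, 0.20, 0.30, 0.40]
other = [0.25, 0.45, 0.65, 0.85]
gain, offset = fit_gain_offset(standard, other)
adjusted = adjust_to_standard(other, gain, offset)
```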

  17. MODIS Views North Pole

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This true-color image over the North Pole was acquired by the MODerate-resolution Imaging Spectroradiometer (MODIS), flying aboard the Terra spacecraft, on May 5, 2000. The scene was received and processed by Norway's MODIS Direct Broadcast data receiving station, located in Svalbard, within seconds of photons hitting the sensor's detectors. In this image, the sea ice appears white and areas of open water, or recently refrozen sea surface, appear black. The irregular whitish shapes toward the bottom of the image are clouds, which are often difficult to distinguish from the white Arctic surface. Notice the considerable number of cracks, or 'leads,' in the ice that appear as dark networks of lines. Throughout the region within the Arctic Circle leads are continually opening and closing due to the direction and intensity of shifting wind and ocean currents. Leads are particularly common during the summer, when temperatures are higher and the ice is thinner. In this image, each pixel is one square kilometer. Such true-color views of the North Pole are quite rare, as most of the time much of the region within the Arctic Circle is cloaked in clouds. Image by Allen Lunsford, NASA GSFC Direct Readout Laboratory; Data courtesy Tromso receiving station, Svalbard, Norway

  18. A fluorescent colorimetric pH sensor and the influences of matrices on sensing performances

    PubMed Central

    Tian, Yanqing; Fuller, Emily; Klug, Summer; Lee, Fred; Su, Fengyu; Zhang, Liqiang; Chao, Shih-hui; Meldrum, Deirdre R.

    2013-01-01

    A fluorescent colorimetric pH sensor was developed by a polymerization of a monomeric fluorescein based green emitter (SM1) with a monomeric 2-dicyanomethylene-3-cyano-4,5,5-trimethyl-2,5-dihydrofuran derived red emitter (SM2) in poly(2-hydroxyethyl methacrylate)-co-polyacrylamide (PHEMA-co-PAM) matrices. Polymerized SM1 (PSM1) in the polymer matrices showed bright emissions at basic conditions and weak emissions at acidic conditions. Polymerized SM2 (PSM2) in the polymer matrices exhibited a vastly different response when compared to PSM1. The emissions of PSM2 are stronger under acidic conditions than those under basic conditions. When SM1 and SM2 were polymerized in the same polymer matrix, a dual emission sensor acting as a ratiometric pH sensor (PSM1,2) was successfully developed. Because the PSM1 and PSM2 exhibited different pH responses and separated emission windows, the changes in the emission colors were clearly observed in their dual color sensor of PSM1,2, which changed emission colors dramatically from green at pH 7 to red at pH 4, which was detected visually and/or by using a color camera under an excitation of 488 nm. In addition to the development of the dual color ratiometric pH sensor, we also studied the effects of different matrix compositions, crosslinkers, and charges on the reporting capabilities of the sensors (sensitivity and pKa). PMID:24078772
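    A ratiometric pH sensor of this kind is typically read out by inverting a two-state calibration curve relating the green/red emission ratio to pH. The sketch below uses the standard two-state ratiometric relation; the calibration constants are hypothetical, since the abstract does not give the sensor's fitted values.

```python
import math

def ph_from_ratio(r, r_acid, r_base, pka):
    """Invert a two-state ratiometric calibration:
        pH = pKa + log10((r - r_acid) / (r_base - r)),
    where r is the measured emission ratio (e.g. green/red intensity)
    and r_acid, r_base are the limiting ratios in fully acidic and
    fully basic conditions. Valid for r_acid < r < r_base."""
    return pka + math.log10((r - r_acid) / (r_base - r))

# Hypothetical calibration: a ratio halfway between the acid and base
# limits recovers pH equal to the pKa.
ph = ph_from_ratio(0.55, 0.1, 1.0, 6.0)
# ph == 6.0
```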

  19. A fluorescent colorimetric pH sensor and the influences of matrices on sensing performances.

    PubMed

    Tian, Yanqing; Fuller, Emily; Klug, Summer; Lee, Fred; Su, Fengyu; Zhang, Liqiang; Chao, Shih-Hui; Meldrum, Deirdre R

    2013-10-01

    A fluorescent colorimetric pH sensor was developed by a polymerization of a monomeric fluorescein based green emitter (SM1) with a monomeric 2-dicyanomethylene-3-cyano-4,5,5-trimethyl-2,5-dihydrofuran derived red emitter (SM2) in poly(2-hydroxyethyl methacrylate)-co-polyacrylamide (PHEMA-co-PAM) matrices. Polymerized SM1 (PSM1) in the polymer matrices showed bright emissions at basic conditions and weak emissions at acidic conditions. Polymerized SM2 (PSM2) in the polymer matrices exhibited a vastly different response when compared to PSM1. The emissions of PSM2 are stronger under acidic conditions than those under basic conditions. When SM1 and SM2 were polymerized in the same polymer matrix, a dual emission sensor acting as a ratiometric pH sensor (PSM1,2) was successfully developed. Because the PSM1 and PSM2 exhibited different pH responses and separated emission windows, the changes in the emission colors were clearly observed in their dual color sensor of PSM1,2, which changed emission colors dramatically from green at pH 7 to red at pH 4, which was detected visually and/or by using a color camera under an excitation of 488 nm. In addition to the development of the dual color ratiometric pH sensor, we also studied the effects of different matrix compositions, crosslinkers, and charges on the reporting capabilities of the sensors (sensitivity and pKa).

  20. Earth Observations taken by the Expedition 13 crew

    NASA Image and Video Library

    2006-05-02

    ISS013-E-13549 (2 May 2006) --- Washington, DC is featured in this image photographed by an Expedition 13 crewmember on the International Space Station. When the image was exposed, the orbital outpost was located over the western border of Maryland and West Virginia. The resolution and extent of the true-color, handheld image is similar to the 15-meter/pixel data obtained by sensors onboard the unmanned Landsat-7 and Terra satellites. This resolution is sufficient to capture the sunglint off the Capitol Building's dome. Other major landmarks that are visible include the Washington Monument, the Pentagon (bottom left, southwest of the Potomac River), and the Lincoln Memorial, along the northwest bank of the Potomac.

  1. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Katawas mineral district in Afghanistan: Chapter N in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Katawas mineral district, which has gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS.
As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. 
The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for Katawas) and the WGS84 datum. The final image mosaics are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Katawas study area, one subarea was designated for detailed field investigation (that is, the Gold subarea); this subarea was extracted from the area's image mosaic and is provided as a separate embedded geotiff image.

  2. Tsunami damage in Aceh Province, Sumatra

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The island of Sumatra suffered from both the rumblings of the submarine earthquake and the tsunamis that were generated on December 26, 2004. Within minutes of the quake, the sea surged ashore, bringing destruction to the coasts of northern Sumatra. This pair of natural-color images from Landsat 7's Enhanced Thematic Mapper Plus (ETM+) instrument shows a small area along the Sumatran coast in Aceh province where the tsunami smashed its way ashore. In this region, the wave cut a swath of near-total destruction 1.5 kilometers (roughly one mile) wide in most places, but penetrating farther in many others. Some of these deeper paths of destruction can be seen especially dramatically in the larger-area ETM+ images linked above. (North is up in these larger images.) ETM+ collects data at roughly 30-meter resolution, complementing sensors like NASA's MODIS (onboard both the Terra and Aqua satellites), which observed this area at 250-meter resolution to give a wide view, and ultra-high-resolution sensors like Space Imaging's IKONOS, which observed the same region at 4-meter resolution to give a detailed, smaller-area view. NASA images created by Jesse Allen, Earth Observatory, using data provided courtesy of the Landsat 7 Science Project Office

  3. Data annotation, recording and mapping system for the US open skies aircraft

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, B.W.; Goede, W.F.; Farmer, R.G.

    1996-11-01

    This paper discusses the system developed by Northrop Grumman for the Defense Nuclear Agency (DNA), US Air Force, and the On-Site Inspection Agency (OSIA) to comply with the data annotation and reporting provisions of the Open Skies Treaty. This system, called the Data Annotation, Recording and Mapping System (DARMS), has been installed on the US OC-135 and meets or exceeds all annotation requirements for the Open Skies Treaty. The Open Skies Treaty, which will enter into force in the near future, allows any of the 26 signatory countries to fly fixed-wing aircraft with imaging sensors over any of the other treaty participants, upon very short notice, and with no restricted flight areas. Sensor types presently allowed by the treaty are: optical framing and panoramic film cameras; video cameras ranging from analog PAL color television cameras to the more sophisticated digital monochrome and color line scanning or framing cameras; infrared line scanners; and synthetic aperture radars. Each sensor type has specific performance parameters which are limited by the treaty, as well as specific annotation requirements which must be achieved upon full entry into force. DARMS supports U.S. compliance with the Open Skies Treaty by means of three subsystems: the Data Annotation Subsystem (DAS), which annotates sensor media with data obtained from sensors and the aircraft's avionics system; the Data Recording System (DRS), which records all sensor and flight events on magnetic media for later use in generating treaty-mandated mission reports; and the Dynamic Sensor Mapping Subsystem (DSMS), which provides observers and sensor operators with a real-time moving-map display of the progress of the mission, complete with instantaneous and cumulative sensor coverages. This paper describes DARMS and its subsystems in greater detail, along with the supporting avionics subsystems. 7 figs.

  4. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the North Takhar mineral district in Afghanistan: Chapter D in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the North Takhar mineral district, which has placer gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. 
As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. 
The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for North Takhar) and the WGS84 datum. The final image mosaics were subdivided into nine overlapping tiles or quadrants because of the large size of the target area. The nine image tiles (or quadrants) for the North Takhar area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
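The sequential band adjustment described above — determining band-reflectance correspondence between overlapping images by linear least squares and mapping each new image onto the standard image — can be sketched as a gain/offset fit. This is an illustrative sketch with synthetic data, not the USGS production code:

```python
import numpy as np

def fit_band_adjustment(overlap_std, overlap_new):
    """Fit gain and offset mapping the new image's overlap pixels onto the
    standard image's overlap pixels by linear least squares."""
    A = np.column_stack([overlap_new, np.ones_like(overlap_new)])
    (gain, offset), *_ = np.linalg.lstsq(A, overlap_std, rcond=None)
    return gain, offset

# Synthetic overlap: the "new" image reads systematically low relative to
# the standard image (hypothetical relative-reflectance values).
rng = np.random.default_rng(0)
std = rng.uniform(0.1, 0.6, 500)
new = (std - 0.02) / 1.1
gain, offset = fit_band_adjustment(std, new)
adjusted = gain * new + offset      # one step of the sequential adjustment
assert np.allclose(adjusted, std)
```

In the actual mosaicking workflow this fit would be repeated image by image, always adjusting toward the already-normalized mosaic rather than toward a single fixed scene.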

  5. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Baghlan mineral district in Afghanistan: Chapter P in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Baghlan mineral district, which has industrial clay and gypsum deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2006, 2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. 
As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. 
The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Baghlan) and the WGS84 datum. The final image mosaics were subdivided into two overlapping tiles or quadrants because of the large size of the target area. The two image tiles (or quadrants) for the Baghlan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
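The local-area histogram stretch described above rescales each picture element against the statistics of its neighborhood (a 315-m radius is 126 picture elements at 2.5-m resolution). A minimal min-max version can be sketched as follows; a square window stands in for the circular neighborhood here, and the actual algorithm (Davis, 2007) is more elaborate:

```python
import numpy as np

def local_area_stretch(band, radius_px, out_min=0.0, out_max=255.0):
    """Min-max stretch of each pixel against the statistics of a square
    neighborhood of half-width radius_px (illustrative sketch only)."""
    h, w = band.shape
    out = np.empty((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - radius_px), min(h, i + radius_px + 1)
            j0, j1 = max(0, j - radius_px), min(w, j + radius_px + 1)
            win = band[i0:i1, j0:j1]
            lo, hi = win.min(), win.max()
            # Flat neighborhoods map to the bottom of the output range.
            out[i, j] = out_min if hi == lo else \
                (band[i, j] - lo) / (hi - lo) * (out_max - out_min) + out_min
    return out

band = np.array([[10, 20, 30],
                 [20, 40, 60],
                 [30, 60, 90]], dtype=float)
stretched = local_area_stretch(band, radius_px=1)
```

Because every pixel is stretched against only its own neighborhood, local contrast is enhanced at the cost of radiometric comparability across the mosaic, which is why the DS provides these products specifically as interpretation aids.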

  6. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Uruzgan mineral district in Afghanistan: Chapter V in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Uruzgan mineral district, which has tin and tungsten deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2008, 2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. 
As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. 
The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Uruzgan) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the Uruzgan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
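The resolution-enhancement step merges the 2.5-m panchromatic detail into the 10-m multispectral mosaic. The SPARKLE logic itself is described in Davis (2006); shown here only to illustrate the general idea is a generic high-pass-filter fusion, which injects the pan band's high-frequency component into an already-upsampled multispectral band:

```python
import numpy as np

def hpf_pansharpen(ms_band_up, pan, kernel_size=3):
    """Add panchromatic high-frequency detail to an upsampled multispectral
    band (generic high-pass-filter fusion; NOT the SPARKLE algorithm)."""
    k, pad = kernel_size, kernel_size // 2
    padded = np.pad(pan, pad, mode='edge')
    # Box-filter low-pass of the pan band.
    low = np.zeros_like(pan, dtype=float)
    for di in range(k):
        for dj in range(k):
            low += padded[di:di + pan.shape[0], dj:dj + pan.shape[1]]
    low /= k * k
    # The residual (pan - low) is the spatial detail the MS band lacks.
    return ms_band_up + (pan - low)

# A flat pan image carries no detail, so the MS band passes through unchanged.
pan = np.full((4, 4), 5.0)
ms = np.arange(16.0).reshape(4, 4)
assert np.allclose(hpf_pansharpen(ms, pan), ms)
```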

  7. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the South Helmand mineral district in Afghanistan: Chapter O in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the South Helmand mineral district, which has travertine deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2008, 2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. 
As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. 
The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (41 for South Helmand) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the South Helmand area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.

  8. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Bakhud mineral district in Afghanistan: Chapter U in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Bakhud mineral district, which has industrial fluorite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2006, 2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS.
As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. 
The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (41 for Bakhud) and the WGS84 datum. The final image mosaics were subdivided into nine overlapping tiles or quadrants because of the large size of the target area. The nine image tiles (or quadrants) for the Bakhud area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.

  9. Integrated High Resolution Digital Color Light Sensor in 130 nm CMOS Technology

    PubMed Central

    Strle, Drago; Nahtigal, Uroš; Batistell, Graciele; Zhang, Vincent Chi; Ofner, Erwin; Fant, Andrea; Sturm, Johannes

    2015-01-01

    This article presents a color light detection system integrated in 130 nm CMOS technology. The sensors and corresponding electronics detect light in a CIE XYZ color luminosity space using on-chip integrated sensors without any additional process steps, a high-resolution analog-to-digital converter, and a dedicated DSP algorithm. The sensor consists of a set of laterally arranged integrated photodiodes that are partly covered by metal, where color separation between the photodiodes is achieved by lateral carrier diffusion together with wavelength-dependent absorption. A high-resolution, hybrid ΣΔ ADC converts each photodiode’s current into a 22-bit digital result, canceling the dark current of the photodiodes. The digital results are further processed by the DSP, which calculates normalized XYZ or RGB color and intensity parameters using linear transformations of the three photodiode responses by multiplication of the data with a transformation matrix, where the coefficients are extracted by training in combination with a pseudo-inverse operation and the least-mean-squares approximation. The sensor system detects the color light parameters with 22-bit accuracy, consumes less than 60 μA on average at 10 readings per second, and occupies approx. 0.8 mm2 of silicon area (including three photodiodes and the analog part of the ADC). The DSP is currently implemented on FPGA. PMID:26205275
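The calibration step described — extracting the transformation-matrix coefficients from the photodiode responses by training with a pseudo-inverse (least-squares) fit against known reference colors — can be illustrated with a minimal sketch. The matrix and response values below are synthetic, chosen only to show the fit recovering a known mapping:

```python
import numpy as np

def train_color_matrix(sensor_responses, reference_xyz):
    """Least-squares (pseudo-inverse) fit of a 3x3 matrix M such that
    M @ sensor ≈ XYZ, trained over a set of calibration lights.
    sensor_responses and reference_xyz are both (N, 3) arrays."""
    M, *_ = np.linalg.lstsq(sensor_responses, reference_xyz, rcond=None)
    return M.T   # transpose so that xyz = M @ sensor per sample

# Hypothetical ground-truth mapping and synthetic calibration set.
true_M = np.array([[0.9, 0.3, 0.1],
                   [0.2, 1.0, 0.1],
                   [0.0, 0.1, 1.2]])
rng = np.random.default_rng(1)
sensor = rng.uniform(0.0, 1.0, (50, 3))   # three photodiode responses
xyz = sensor @ true_M.T                    # reference XYZ for each light
M = train_color_matrix(sensor, xyz)
assert np.allclose(M, true_M, atol=1e-8)
```

With noisy real measurements the fit would not be exact; the least-squares solution then minimizes the residual XYZ error over the training lights.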

  10. Photonic Crystal Structures with Tunable Structure Color as Colorimetric Sensors

    PubMed Central

    Wang, Hui; Zhang, Ke-Qin

    2013-01-01

    Colorimetric sensing, which transduces environmental changes into visible color changes, provides a simple yet powerful detection mechanism that is well-suited to the development of low-cost and low-power sensors. A new approach in colorimetric sensing exploits the structural color of photonic crystals (PCs) to create environmentally-influenced color-changeable materials. PCs are composed of periodic dielectrics or metallo-dielectric nanostructures that affect the propagation of electromagnetic waves (EM) by defining the allowed and forbidden photonic bands. Simultaneously, an amazing variety of naturally occurring biological systems exhibit iridescent color due to the presence of PC structures throughout multi-dimensional space. In particular, some kinds of the structural colors in living organisms can be reversibly changed in reaction to external stimuli. Based on the lessons learned from natural photonic structures, some specific examples of PCs-based colorimetric sensors are presented in detail to demonstrate their unprecedented potential in practical applications, such as the detections of temperature, pH, ionic species, solvents, vapor, humidity, pressure and biomolecules. The combination of the nanofabrication technique, useful design methodologies inspired by biological systems and colorimetric sensing will lead to substantial developments in low-cost, miniaturized and widely deployable optical sensors. PMID:23539027
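The structural color these sensors exploit follows, to first order, the Bragg-Snell relation λ = 2d·√(n_eff² − sin²θ): a stimulus that swells the lattice period d, or alters the effective refractive index n_eff, shifts the reflected wavelength and hence the perceived color. A minimal sketch with illustrative values:

```python
import math

def bragg_reflection_nm(period_nm, n_eff, theta_deg=0.0):
    """First-order Bragg-Snell reflected wavelength (nm) for a photonic
    crystal with lattice period `period_nm` and effective index `n_eff`,
    viewed at incidence angle `theta_deg`."""
    theta = math.radians(theta_deg)
    return 2.0 * period_nm * math.sqrt(n_eff**2 - math.sin(theta)**2)

# An opal-like structure (~180 nm period, n_eff ~ 1.4, hypothetical values)
# reflects green at normal incidence; swelling the lattice red-shifts it.
green = bragg_reflection_nm(180.0, 1.4)        # 504 nm
swollen = bragg_reflection_nm(200.0, 1.4)      # 560 nm
assert swollen > green
```

This period dependence is why, for example, a hydrogel-based PC sensor that swells with pH or humidity reads out directly as a visible color change.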

  11. A color management system for multi-colored LED lighting

    NASA Astrophysics Data System (ADS)

    Chakrabarti, Maumita; Thorseth, Anders; Jepsen, Jørgen; Corell, Dennis D.; Dam-Hansen, Carsten

    2015-09-01

    A new color control system is described and implemented for a five-color LED light engine, covering a wide white gamut. The system combines a new way of using pre-calibrated lookup tables and a rule-based optimization of chromaticity distance from the Planckian locus with a calibrated color sensor. The color sensor monitors the chromaticity of the mixed light, providing the correction factor for the current driver by using the generated lookup table. The long-term stability and accuracy of the system will be experimentally investigated, with a target tolerance within a circle radius of 0.0013 in the uniform chromaticity diagram (CIE 1976).
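The stated tolerance — a circle of radius 0.0013 in the CIE 1976 uniform chromaticity diagram — amounts to a Euclidean distance test in (u′, v′) coordinates. A minimal sketch using the standard x,y → u′v′ conversion, with hypothetical chromaticity points:

```python
import math

def uv_prime(x, y):
    """CIE 1976 u'v' chromaticity from CIE 1931 x,y."""
    d = -2.0 * x + 12.0 * y + 3.0
    return 4.0 * x / d, 9.0 * y / d

def within_tolerance(measured_xy, target_xy, radius=0.0013):
    """Check a sensor-measured chromaticity against the target circle."""
    mu, mv = uv_prime(*measured_xy)
    tu, tv = uv_prime(*target_xy)
    return math.hypot(mu - tu, mv - tv) <= radius

# A tiny drift stays inside the tolerance circle; a gross drift does not
# (both point pairs are hypothetical).
assert within_tolerance((0.3805, 0.3769), (0.3805, 0.3768))
assert not within_tolerance((0.4000, 0.3800), (0.3805, 0.3768))
```

The u′v′ diagram is used precisely because equal distances in it correspond roughly to equal perceived color differences, so a single radius is a meaningful tolerance across the white gamut.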

  12. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Takhar mineral district in Afghanistan: Chapter Q in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Takhar mineral district, which has industrial evaporite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. 
As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. 
The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Takhar) and the WGS84 datum. The final image mosaics for the Takhar area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
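The local-area histogram stretch described in this abstract can be sketched as follows. This is only a hypothetical illustration, not the actual USGS algorithm of Davis (2007): each pixel is rescaled using the minimum and maximum of a square window approximating the stated 315-m radius (126 pixels at the 2.5-m enhanced resolution). The function name, the min/max stretch choice, and the square (rather than circular) window are all assumptions.

```python
# Hypothetical sketch of a local-area histogram stretch. The published
# method (Davis, 2007) is not reproduced here; this only illustrates the
# idea of stretching each pixel against its local neighborhood.
import numpy as np

def local_stretch(band, radius_px):
    """Stretch each pixel to 0-255 using its local window's min/max."""
    h, w = band.shape
    out = np.zeros_like(band, dtype=np.float64)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - radius_px), min(h, i + radius_px + 1)
            j0, j1 = max(0, j - radius_px), min(w, j + radius_px + 1)
            win = band[i0:i1, j0:j1]
            lo, hi = win.min(), win.max()
            out[i, j] = 0.0 if hi == lo else 255.0 * (band[i, j] - lo) / (hi - lo)
    return out.astype(np.uint8)

# Tiny demonstration on a 4x4 gradient (radius of 1 pixel for brevity;
# a 315-m radius at 2.5-m resolution would be radius_px=126)
demo = np.arange(16, dtype=np.float64).reshape(4, 4)
stretched = local_stretch(demo, radius_px=1)
```

A production version would use a sliding-window or tiled implementation rather than this O(n·r²) double loop.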

  13. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Kunduz mineral district in Afghanistan: Chapter S in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Kunduz mineral district, which has celestite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008,2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. 
As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. 
The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Kunduz) and the WGS84 datum. The final image mosaics were subdivided into five overlapping tiles or quadrants because of the large size of the target area. The five image tiles (or quadrants) for the Kunduz area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
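The sequential band-reflectance adjustment described above fits a linear mapping between overlapping images. A minimal sketch of that step, under the assumption that the correspondence is a simple gain/offset fit on co-located overlap pixels (the exact USGS procedure may differ):

```python
# Hedged sketch: adjust an overlapping image's band reflectance to the
# "standard" image via a linear least-squares fit, as the abstract
# describes. Function names and the synthetic data are illustrative.
import numpy as np

def fit_adjustment(overlap_adjust, overlap_standard):
    """Least-squares gain/offset mapping adjust -> standard."""
    gain, offset = np.polyfit(overlap_adjust.ravel(),
                              overlap_standard.ravel(), 1)
    return gain, offset

def apply_adjustment(band, gain, offset):
    return gain * band + offset

# Synthetic overlap where the true relation is standard = 1.1*adjust + 0.02
rng = np.random.default_rng(0)
adj = rng.uniform(0.05, 0.4, size=(50, 50))
std = 1.1 * adj + 0.02
g, b = fit_adjustment(adj, std)
adjusted = apply_adjustment(adj, g, b)
```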

  14. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Tourmaline mineral district in Afghanistan: Chapter J in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Tourmaline mineral district, which has tin deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. 
As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. 
The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (41 for Tourmaline) and the WGS84 datum. The final image mosaics were subdivided into four overlapping tiles or quadrants because of the large size of the target area. The four image tiles (or quadrants) for the Tourmaline area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
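The tiff world files (tfw) mentioned at the end of the abstract hold six affine coefficients mapping pixel column/row to map coordinates; an embedded geotiff carries the same transform internally. A small sketch of reading and applying a world file, with made-up coordinate values for illustration:

```python
# Sketch: a .tfw file lists six lines (A, D, B, E, C, F) defining
# X = A*col + B*row + C and Y = D*col + E*row + F.
# The sample values below are hypothetical, not taken from this DS.
def parse_tfw(text):
    a, d, b, e, c, f = (float(v) for v in text.split())
    return a, d, b, e, c, f

def pixel_to_map(col, row, tfw):
    a, d, b, e, c, f = tfw
    return a * col + b * row + c, d * col + e * row + f

# 2.5-m pixels, north-up image, hypothetical UTM upper-left corner
sample = "2.5\n0.0\n0.0\n-2.5\n400000.0\n4100000.0\n"
tfw = parse_tfw(sample)
x, y = pixel_to_map(100, 200, tfw)
```

Note the negative E term: row numbers increase downward while northing increases upward, so the y pixel size is negative in north-up imagery.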

  15. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Dudkash mineral district in Afghanistan: Chapter R in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Dudkash mineral district, which has industrial mineral deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2007,2008,2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. 
As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. 
The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Dudkash) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the Dudkash area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
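The two three-band composites described above (natural color from the blue, green, and red bands; color-infrared from the green, red, and near-infrared bands) can be assembled from a four-band stack as sketched below. The band ordering and array layout are assumptions for illustration only:

```python
# Sketch of building the natural-color and color-infrared composites
# from a four-band (blue, green, red, NIR) array. In a color-infrared
# display, NIR, red, and green are shown as red, green, and blue.
import numpy as np

BLUE, GREEN, RED, NIR = 0, 1, 2, 3  # assumed band order

def natural_color(stack):
    """stack: (4, rows, cols) -> (rows, cols, 3) RGB display array."""
    return np.dstack([stack[RED], stack[GREEN], stack[BLUE]])

def color_infrared(stack):
    """NIR, red, green displayed as R, G, B (vegetation appears red)."""
    return np.dstack([stack[NIR], stack[RED], stack[GREEN]])

stack = np.zeros((4, 2, 2), dtype=np.uint8)
stack[NIR] = 255                      # strong vegetation-like NIR response
cir = color_infrared(stack)
```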

  16. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Parwan mineral district in Afghanistan: Chapter CC in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Parwan mineral district, which has gold and copper deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006, 2007), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. 
As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. 
The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Parwan) and the WGS84 datum. The final image mosaics were subdivided into two overlapping tiles or quadrants because of the large size of the target area. The two image tiles (or quadrants) for the Parwan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
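The abstracts project each district into its local UTM zone (zone 42 here, zone 41 for districts farther west). A minimal sketch of the standard 6-degree zone formula (omitting the Norway/Svalbard exceptions) shows why Afghan districts near 65–72° E fall in zones 41–42; the longitudes used are approximate:

```python
# Standard UTM zone from longitude; zone 1 starts at 180 deg W.
def utm_zone(lon_deg):
    """6-degree UTM zone for a longitude in [-180, 180)."""
    return int((lon_deg + 180.0) // 6) + 1

# Approximate longitudes: Parwan is near 69 deg E (zone 42); a district
# near 65 deg E falls in zone 41.
zone_parwan = utm_zone(69.0)
zone_west = utm_zone(65.0)
```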

  17. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Ghazni2 mineral district in Afghanistan: Chapter EE in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.

    2014-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Ghazni2 mineral district, which has spectral reflectance anomalies indicative of gold, mercury, and sulfur deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. 
The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2008, 2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). 
Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for Ghazni2) and the WGS84 datum. The images for the Ghazni2 area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
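The radiance and relative-reflectance conversion mentioned above follows Davis (2006), which is not reproduced here. As a generic stand-in, the sketch below shows the usual DN → radiance → top-of-atmosphere reflectance chain; the gain, offset, and solar-irradiance values are placeholders, not published AVNIR-2 calibration coefficients:

```python
# Hedged sketch of a generic DN -> radiance -> TOA reflectance chain.
# All calibration constants below are illustrative placeholders.
import math

def dn_to_radiance(dn, gain, offset):
    """Linear sensor calibration: L = gain * DN + offset."""
    return gain * dn + offset

def radiance_to_toa_reflectance(L, esun, d_au, sun_elev_deg):
    """rho = pi * L * d^2 / (ESUN * cos(solar zenith))."""
    sun_zenith = math.radians(90.0 - sun_elev_deg)
    return math.pi * L * d_au ** 2 / (esun * math.cos(sun_zenith))

L = dn_to_radiance(128, gain=0.5880, offset=0.0)   # placeholder gain
rho = radiance_to_toa_reflectance(L, esun=1843.0, d_au=1.0,
                                  sun_elev_deg=60.0)
```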

  18. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Ghazni1 mineral district in Afghanistan: Chapter DD in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.

    2014-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Ghazni1 mineral district, which has spectral reflectance anomalies indicative of clay, aluminum, gold, silver, mercury, and sulfur deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. 
The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA, 2008, 2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that the original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value-added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). 
Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to those of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture elements based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for Ghazni1) and the WGS84 datum. The images for the Ghazni1 area are provided as embedded GeoTIFF images, which can be read and used by most geographic information system (GIS) and image-processing software. The TIFF world files (.tfw) are provided, even though they are generally not needed for most software to read an embedded GeoTIFF image.
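The sequential band-reflectance adjustment described above can be sketched as a linear least-squares fit over pixel pairs from the overlap region between a source image and the standard image. The function names and synthetic data below are illustrative assumptions, not the USGS implementation:

```python
import numpy as np

def fit_overlap_adjustment(standard_overlap, source_overlap):
    """Least-squares gain/offset mapping a source image's band reflectance
    onto the standard image, from pixel pairs in their overlap region."""
    gain, offset = np.polyfit(source_overlap.ravel(),
                              standard_overlap.ravel(), 1)
    return gain, offset

def adjust_image(source, gain, offset):
    """Apply the fitted radiometric adjustment to the whole source image."""
    return gain * source + offset

# Synthetic check: a source image whose radiometry differs linearly
# from the standard image in the overlap region.
rng = np.random.default_rng(0)
source = rng.uniform(0.05, 0.40, size=(50, 50))   # relative reflectance
standard = 1.1 * source + 0.02                    # "true" correspondence
g, b = fit_overlap_adjustment(standard, source)
adjusted = adjust_image(source, g, b)
print(round(g, 3), round(b, 3))  # 1.1 0.02
```

In a real mosaic the fit would be repeated band by band as each new scene is added, chaining every image back to the standard.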

  19. Early On-Orbit Performance of the Visible Infrared Imaging Radiometer Suite Onboard the Suomi National Polar-Orbiting Partnership (S-NPP) Satellite

    NASA Technical Reports Server (NTRS)

    Cao, Changyong; DeLuccia, Frank J.; Xiong, Xiaoxiong; Wolfe, Robert; Weng, Fuzhong

    2014-01-01

    The Visible Infrared Imaging Radiometer Suite (VIIRS) is one of the key environmental remote-sensing instruments onboard the Suomi National Polar-Orbiting Partnership spacecraft, which was successfully launched on October 28, 2011 from Vandenberg Air Force Base, California. Following a series of spacecraft and sensor activation operations, the VIIRS nadir door was opened on November 21, 2011. The first VIIRS image acquired signifies a new generation of operational moderate-resolution imaging capabilities, following the legacy of the advanced very high-resolution radiometer series on NOAA satellites and the Terra and Aqua Moderate-Resolution Imaging Spectroradiometer of NASA's Earth Observing System. VIIRS provides significant enhancements to operational environmental monitoring and numerical weather forecasting, with 22 imaging and radiometric bands covering wavelengths from 0.41 to 12.5 microns, providing the sensor data records for 23 environmental data records including aerosol, cloud properties, fire, albedo, snow and ice, vegetation, sea surface temperature, ocean color, and night-time visible-light-related applications. Preliminary results from the on-orbit verification in the postlaunch check-out and intensive calibration and validation have shown that VIIRS is performing well and producing high-quality images. This paper provides an overview of the on-orbit performance of VIIRS and the calibration/validation (cal/val) activities and methodologies used. It presents an assessment of the sensor's initial on-orbit calibration and performance based on the efforts of the VIIRS-SDR team. Known anomalies, issues, and future calibration efforts, including long-term monitoring and intercalibration, are also discussed.

  20. Monitoring of hourly variations in coastal water turbidity using the geostationary ocean color imager (GOCI)

    NASA Astrophysics Data System (ADS)

    Choi, J.; Ryu, J.

    2011-12-01

    Temporal variations of suspended sediment concentration (SSC) in coastal waters are key to understanding the pattern of sediment movement within coastal areas, particularly on the west coast of the Korean Peninsula, which is influenced by semi-diurnal tides. Remote sensing techniques can effectively monitor the distribution and dynamic changes of seawater properties across wide areas. Thus, SSC at the sea surface has been investigated using various types of satellite-based sensors. An advantage of the Geostationary Ocean Color Imager (GOCI), the world's first geostationary ocean color observation satellite, over other ocean color satellites is that it can obtain data every hour during the day, making it possible to monitor the ocean in real time. In this study, hourly variations in turbidity in coastal waters were estimated quantitatively using GOCI. Thirty-three water samples were obtained at the coastal water surface in southern Gyeonggi Bay, located on the west coast of Korea. Water samples were filtered using 25-mm glass fiber filters (GF/F) for the estimation of SSC. The radiometric characteristics of the surface water, such as the total water-leaving radiance (LwT, W/m2/nm/sr), the sky radiance (Lsky, W/m2/nm/sr), and the downwelling irradiance, were also measured at each sampling location. In situ optical properties of the surface water were converted into remote sensing reflectance (Rrs) and then used to develop an algorithm to generate SSC images of the study area. GOCI images acquired on the same day as the sample acquisition were used to generate maps of turbidity and to estimate the difference in SSC displayed in each image. The estimation of the time-series variation in SSC in a coastal, shallow-water area affected by tides was successfully achieved using GOCI data acquired at hourly intervals during the daytime.
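Converting above-water field radiometry to remote sensing reflectance is commonly done with the relation Rrs = (LwT − ρ·Lsky) / Ed, which removes surface-reflected skylight. The sketch below assumes a typical low-wind surface-reflectance factor ρ ≈ 0.028 and hypothetical measurement values; it is not necessarily the exact algorithm the authors used:

```python
RHO = 0.028  # sea-surface reflectance factor, a typical low-wind value

def remote_sensing_reflectance(lwt, lsky, ed):
    """Rrs (1/sr) from total water-leaving radiance LwT, sky radiance Lsky,
    and downwelling irradiance Ed, removing surface-reflected skylight."""
    return (lwt - RHO * lsky) / ed

# Example single-band field measurement (values are hypothetical).
lwt, lsky, ed = 0.045, 0.50, 1.2   # W/m2/nm/sr, W/m2/nm/sr, W/m2/nm
rrs = remote_sensing_reflectance(lwt, lsky, ed)
print(round(rrs, 5))  # 0.02583
```

An empirical SSC algorithm would then regress the in situ SSC samples against Rrs in one or more bands before being applied to the GOCI imagery.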

  1. Camera array based light field microscopy

    PubMed Central

    Lin, Xing; Wu, Jiamin; Zheng, Guoan; Dai, Qionghai

    2015-01-01

    This paper proposes a novel approach for high-resolution light field microscopy imaging using a camera array. In this approach, we apply a two-stage relay system to expand the aperture plane of the microscope to the size of an imaging lens array, and utilize a sensor array to acquire the different sub-aperture images formed by the corresponding imaging lenses. By combining the rectified and synchronized images from 5 × 5 viewpoints with our prototype system, we successfully recovered color light field videos for various fast-moving microscopic specimens with a spatial resolution of 0.79 megapixels at 30 frames per second, corresponding to an unprecedented data throughput of 562.5 MB/s for light field microscopy. We also demonstrated the use of the reported platform for different applications, including post-capture refocusing, phase reconstruction, 3D imaging, and optical metrology. PMID:26417490
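Post-capture refocusing from an array of sub-aperture views is often implemented by shift-and-add: each view is translated in proportion to its viewpoint offset and a refocus parameter, then the views are averaged. The toy sketch below makes simplifying assumptions (integer-pixel shifts, a 3 × 3 array of identical views) and is not the authors' pipeline:

```python
import numpy as np

def refocus(views, offsets, alpha):
    """Shift-and-add synthetic refocusing: shift each sub-aperture view by
    alpha times its viewpoint offset (du, dv), then average the stack."""
    acc = np.zeros_like(views[0], dtype=float)
    for view, (du, dv) in zip(views, offsets):
        shift = (int(round(alpha * du)), int(round(alpha * dv)))
        acc += np.roll(view, shift, axis=(0, 1))
    return acc / len(views)

# Toy 3x3 array of identical views: refocusing at alpha = 0 (no parallax
# compensation) must return the view itself.
views = [np.arange(16.0).reshape(4, 4)] * 9
offsets = [(u, v) for u in (-1, 0, 1) for v in (-1, 0, 1)]
out = refocus(views, offsets, alpha=0.0)
print(np.allclose(out, views[0]))  # True
```

Sweeping alpha moves the synthetic focal plane through the specimen; sub-pixel shifts would use interpolation rather than `np.roll`.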

  2. Using Satellite Lightning Data as a Hands-On Activity for a Broad Audience

    NASA Astrophysics Data System (ADS)

    Sinclair, L.; Smith, T.; Smith, D. K.; Weigel, A. M.; Bugbee, K.; Leach, C.

    2017-12-01

    Satellite lightning data archived at the NASA Global Hydrology Resource Center Distributed Active Archive Center (GHRC DAAC) capture the number of lightning flashes occurring within four-by-four-kilometer pixels around the world from January 1998 through October 2014. These data were measured by the Lightning Imaging Sensor (LIS) on the Tropical Rainfall Measuring Mission (TRMM) satellite. As an outreach effort to educate others on the use of lightning measurements, the GHRC DAAC developed an interactive color-by-number poster showing accumulated lightning flashes around the world. As participants color the poster, it reveals regions of maximum lightning flash counts across the Earth, including Lake Maracaibo in Venezuela (site of the Catatumbo lightning) and a region in the Congo Basin in Africa. This hands-on activity is a bright, colorful, and inviting way to bring lightning data to a broad audience and can be used for people of many ages, from elementary-aged audiences up to adults.

  3. Magnetic-graphitic-nanocapsule templated diacetylene assembly and photopolymerization for sensing and multicoded anti-counterfeiting

    NASA Astrophysics Data System (ADS)

    Nie, Xiang-Kun; Xu, Yi-Ting; Song, Zhi-Ling; Ding, Ding; Gao, Feng; Liang, Hao; Chen, Long; Bian, Xia; Chen, Zhuo; Tan, Weihong

    2014-10-01

    Molecular self-assembly, a process to design molecular entities that aggregate into desired structures, represents a promising bottom-up route towards the precise construction of functional systems. Here we report a multifunctional, self-assembled system based on magnetic-graphitic-nanocapsule (MGN) templated diacetylene assembly and photopolymerization. The as-prepared assembly system maintains the unique color and fluorescence change properties of the polydiacetylene (PDA) polymers, while also inheriting the superior Raman, NIR, magnetic and superconducting properties from the MGN template. Based on both fluorescence and magnetic resonance imaging (MRI) T2 relaxivity, the MGN@PDA system can efficiently monitor pH variations, allowing its use as a pH sensor. The MGN@PDA system further demonstrates potential as a unique ink for anti-counterfeiting applications. Reversible color change, strong and unique Raman scattering and fluorescence emission, sensitive NIR thermal response, and distinctive magnetic properties afford this assembly system with multicoded anti-counterfeiting capabilities. Electronic supplementary information (ESI) available. See DOI: 10.1039/c4nr03837a

  4. Spinoff 2009

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Topics covered include: Image-Capture Devices Extend Medicine's Reach; Medical Devices Assess, Treat Balance Disorders; NASA Bioreactors Advance Disease Treatments; Robotics Algorithms Provide Nutritional Guidelines; "Anti-Gravity" Treadmills Speed Rehabilitation; Crew Management Processes Revitalize Patient Care; Hubble Systems Optimize Hospital Schedules; Web-based Programs Assess Cognitive Fitness; Electrolyte Concentrates Treat Dehydration; Tools Lighten Designs, Maintain Structural Integrity; Insulating Foams Save Money, Increase Safety; Polyimide Resins Resist Extreme Temperatures; Sensors Locate Radio Interference; Surface Operations Systems Improve Airport Efficiency; Nontoxic Resins Advance Aerospace Manufacturing; Sensors Provide Early Warning of Biological Threats; Robot Saves Soldiers' Lives Overseas (MarcBot); Apollo-Era Life Raft Saves Hundreds of Sailors; Circuits Enhance Scientific Instruments and Safety Devices; Tough Textiles Protect Payloads and Public Safety Officers; Forecasting Tools Point to Fishing Hotspots; Air Purifiers Eliminate Pathogens, Preserve Food; Fabrics Protect Sensitive Skin from UV Rays; Phase Change Fabrics Control Temperature; Tiny Devices Project Sharp, Colorful Images; Star-Mapping Tools Enable Tracking of Endangered Animals; Nanofiber Filters Eliminate Contaminants; Modeling Innovations Advance Wind Energy Industry; Thermal Insulation Strips Conserve Energy; Satellite Respondent Buoys Identify Ocean Debris; Mobile Instruments Measure Atmospheric Pollutants; Cloud Imagers Offer New Details on Earth's Health; Antennas Lower Cost of Satellite Access; Feature Detection Systems Enhance Satellite Imagery; Chlorophyll Meters Aid Plant Nutrient Management; Telemetry Boards Interpret Rocket, Airplane Engine Data; Programs Automate Complex Operations Monitoring; Software Tools Streamline Project Management; Modeling Languages Refine Vehicle Design; Radio Relays Improve Wireless Products; Advanced Sensors Boost Optical Communication, Imaging; Tensile Fabrics Enhance Architecture Around the World; Robust Light Filters Support Powerful Imaging Devices; Thermoelectric Devices Cool, Power Electronics; Innovative Tools Advance Revolutionary Weld Technique; Methods Reduce Cost, Enhance Quality of Nanotubes; Gauging Systems Monitor Cryogenic Liquids; Voltage Sensors Monitor Harmful Static; and Compact Instruments Measure Heat Potential.

  5. A New Digital Imaging and Analysis System for Plant and Ecosystem Phenological Studies

    NASA Astrophysics Data System (ADS)

    Ramirez, G.; Ramirez, G. A.; Vargas, S. A., Jr.; Luna, N. R.; Tweedie, C. E.

    2015-12-01

    Over the past decade, environmental scientists have increasingly used low-cost sensors and custom software to gather and analyze environmental data. Included in this trend has been the use of imagery from field-mounted static digital cameras. Published literature has highlighted the challenges scientists have encountered with poor and problematic camera performance and power consumption, limited data download and wireless communication options, the general ruggedness of off-the-shelf camera solutions, and time-consuming and hard-to-reproduce digital image analysis options. Data loggers and sensors are typically limited to data storage in situ (requiring manual downloading) and/or expensive data streaming options. Here we highlight the features and functionality of a newly invented camera/data logger system and coupled image analysis software suited to plant and ecosystem phenological studies (patent pending). The camera has resulted from several years of development and prototype testing supported by several grants funded by the US NSF. These inventions have several unique features and functions and have been field tested in desert, arctic, and tropical rainforest ecosystems. The system can be used to acquire imagery/data from static and mobile platforms. Data are collected, preprocessed, and streamed to the cloud without the need for an external computer, and the system can run for extended time periods. The camera module is capable of acquiring RGB, IR, and thermal (LWIR) data and storing it in a variety of formats, including RAW. The system is fully customizable with a wide variety of passive and smart sensors. The camera can be triggered by state conditions detected by sensors and/or at selected time intervals. The device includes USB, Wi-Fi, Bluetooth, serial, GSM, Ethernet, and Iridium connections and can be connected to commercial cloud servers such as Dropbox. The complementary image analysis software is compatible with all popular operating systems. Imagery can be viewed and analyzed in RGB, HSV, and L*a*b* color spaces. Users can select from spectral indices derived from the published literature and/or choose to have analytical output reported as separate channel strengths for a given color space. Results of the analysis can be viewed in a plot and/or saved as a .csv file for additional analysis and visualization.
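As an example of the kind of spectral index such software reports, the green chromatic coordinate (GCC), widely used in the phenology literature, can be computed per pixel from RGB imagery. This is an illustrative sketch, not the patented software itself:

```python
import numpy as np

def green_chromatic_coordinate(rgb):
    """GCC = G / (R + G + B), a widely used canopy-greenness index,
    computed per pixel over a region of interest in an RGB image."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=-1)
    return np.where(total > 0, rgb[..., 1] / np.maximum(total, 1e-9), 0.0)

# Toy ROI: pure-green pixels give GCC = 1, grey pixels give 1/3.
roi = np.zeros((2, 2, 3))
roi[0] = [0, 255, 0]       # green pixels
roi[1] = [80, 80, 80]      # grey pixels
gcc = green_chromatic_coordinate(roi)
print(gcc.mean(axis=1))    # [1.0, 0.333...]
```

In a phenology time series, the mean GCC of a fixed canopy ROI would be tracked image by image to capture green-up and senescence.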

  6. GOCI image enhancement using an MTF compensation technique for coastal water applications.

    PubMed

    Oh, Eunsong; Choi, Jong-Kuk

    2014-11-03

    The Geostationary Ocean Color Imager (GOCI) is the first optical sensor in geostationary orbit for monitoring the ocean environment around the Korean Peninsula. This paper discusses on-orbit modulation transfer function (MTF) estimation with the pulse-source method and its compensation results for the GOCI. Additionally, by analyzing the relationship between the MTF compensation effect and the accuracy of the secondary ocean product, we confirmed the optimal MTF compensation parameter for enhancing image quality without variation in accuracy. In this study, MTF assessment was performed using a natural target because the GOCI system has a spatial resolution of 500 m. For MTF compensation with the Wiener filter, we fitted the point spread function with a Gaussian curve controlled by a standard deviation value (σ). After a parametric analysis to find the optimal degradation model, a σ value of 0.4 was determined to be the optimal indicator. Finally, the MTF value was enhanced from 0.1645 to 0.2152 without degrading the accuracy of the ocean color product. GOCI images enhanced by MTF compensation are expected to resolve small-scale ocean features in coastal areas with sharpened geometric performance.
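Wiener-filter compensation with a Gaussian degradation model can be sketched in the frequency domain as W = H* / (|H|² + NSR), where H is the transfer function of the fitted PSF. The image size, noise-to-signal ratio, and synthetic scene below are assumptions for illustration, not the GOCI processing chain:

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Centered Gaussian point spread function, normalized to unit sum."""
    ys = np.arange(shape[0]) - shape[0] // 2
    xs = np.arange(shape[1]) - shape[1] // 2
    xx, yy = np.meshgrid(xs, ys)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def wiener_restore(image, psf, nsr=1e-4):
    """Frequency-domain Wiener filter: W = H* / (|H|^2 + NSR)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * W))

rng = np.random.default_rng(1)
scene = rng.uniform(size=(32, 32))            # synthetic "sharp" scene
psf = gaussian_psf(scene.shape, sigma=0.4)    # sigma from the parametric fit
H = np.fft.fft2(np.fft.ifftshift(psf))
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * H))
restored = wiener_restore(blurred, psf)
print(np.abs(restored - scene).max() < np.abs(blurred - scene).max())  # True
```

The NSR term regularizes the inversion; too small a value amplifies noise, which is why the paper balances sharpening against the accuracy of the derived ocean products.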

  7. Intersatellite comparisons and evaluations of three ocean color products along the Zhejiang coast, eastern China

    NASA Astrophysics Data System (ADS)

    Cui, Qiyuan; Wang, Difeng; Gong, Fang; Pan, Delu; Hao, Zengzhou; Wang, Tianyu; Zhu, Qiankun

    2017-10-01

    With its broad spatial coverage and fine temporal resolution, ocean color remote sensing data represent an effective tool for monitoring large areas of ocean, and have the potential to provide crucial information in coastal waters where routine monitoring is either lacking or unsatisfactory. The semi-analytical or empirical algorithms that work well in Case 1 waters encounter many problems in offshore areas where the water is often optically complex and presents difficulties for atmospheric correction. Zhejiang is one of the most developed provinces in eastern China, and its adjacent seas have been greatly affected by recent rapid economic development. Various islands and semi-closed bays along the Zhejiang coast promote the formation of muddy tidal flats. Moreover, large quantities of terrestrial material carried by the Yangtze River and other local rivers also have a great impact on the coastal waters of the province. MODIS, VIIRS and GOCI are three commonly used ocean color sensors covering the East China Sea. Several ocean color products of these three sensors on the Zhejiang coast, such as remote-sensing reflectance (Rrs) and the concentrations of chlorophyll a (Chl-a) and total suspended matter (TSM), have been evaluated. Cloud-free satellite images with synchronous field measurements taken between 2012 and 2015 were used for comparison. There is a good correlation between the MODIS and GOCI spectral data, while some outliers were found in the VIIRS images. The low signal-to-noise ratio at short wavelengths in highly turbid waters also reduced the correlation between different sensors. In addition, it was possible to obtain more valid data with GOCI in shallow waters because of the use of an appropriate atmospheric correction algorithm. The standard Chl-a and TSM products of the three satellites were also evaluated, and it was found that the Chl-a and TSM concentrations calculated by the OC3G and Case 2 algorithms, respectively, were more suitable for use in the study area. Moreover, GOCI proved effective for monitoring the diurnal dynamics in coastal waters, and the concentration of TSM had a good negative correlation with water level. Overall, compared with MODIS and VIIRS, GOCI is more effective for monitoring fine-scale changes and diurnal dynamics in the seas adjacent to Zhejiang Province.

  8. Small-Molecule Fluorescent Sensors for Investigating Zinc Metalloneurochemistry

    PubMed Central

    Nolan, Elizabeth M.; Lippard, Stephen J.

    2008-01-01

    Conspectus: Metal ions are involved in many neurobiological processes relevant to human health and disease. The metalloneurochemistry of Zn(II) is of substantial current interest. Zinc is the second most abundant d-block metal ion in the human brain and its distribution varies, with relatively high concentrations found in the hippocampus. Brain zinc is generally divided into two categories: protein-bound and loosely-bound. The latter pool is also referred to as histochemically observable, chelatable, labile, or mobile zinc. The neurophysiological and neuropathological significance of such mobile Zn(II) remains enigmatic. Studies of Zn(II) distribution, translocation, and function in vivo require tools for its detection. Because Zn(II) has a closed-shell d10 configuration and no convenient spectroscopic signature, fluorescence is a suitable method for monitoring Zn(II) in biological contexts. This Account summarizes work by our laboratory addressing the design, preparation, characterization, and use of small-molecule fluorescent sensors for imaging mobile Zn(II) in living cells and samples of brain tissue. These sensors provide “turn-on” or ratiometric Zn(II) detection in aqueous solution at neutral pH. By making alterations to the Zn(II)-binding unit and fluorophore platform, we have devised sensors with varied photophysical and metal-binding properties. We used several of these probes to image Zn(II) distribution, uptake, and mobilization in a variety of cell types, including neuronal cultures. Goals for the future include developing strategies for multi-color imaging, further defining the quenching and turn-on mechanisms of the sensors, and employing the probes to elucidate the functional significance of Zn(II) in neurobiology. PMID:18989940

  9. Earth Observations taken by the Expedition 23 Crew

    NASA Image and Video Library

    2010-05-04

    ISS023-E-032397 (4 May 2010) --- The Gulf of Mexico oil spill is featured in this image photographed by an Expedition 23 crew member on the International Space Station. On April 20, 2010 the oil rig Deepwater Horizon suffered an explosion and sank two days later. Shortly thereafter oil began leaking into the Gulf of Mexico from ruptured pipes as safety cutoff mechanisms failed to operate. Automated nadir-viewing orbital NASA sensors have been tracking the growth of the oil spill as it has spread towards the northern Gulf Coast. This detailed photograph provides a different viewing perspective on the ongoing event. The image is oblique, meaning that it was taken with a sideways viewing angle from the space station, rather than the "straight down" or nadir view typical of automated satellite sensors. The view is towards the west; the ISS was located over the eastern edge of the Gulf of Mexico when the image was taken. The Mississippi River Delta and nearby Louisiana coast (top) appear dark in the sunglint that illuminates most of the image. This phenomenon is caused by sunlight reflecting off the water surface, much like a mirror, directly back towards the astronaut observer onboard the orbital complex. The sunglint improves the identification of the oil spill (colored dark to light gray), which is creating a different water texture, and therefore a contrast, between the smooth and rougher water of the reflective ocean surface (colored silver to white). Wind and water current patterns have modified the oil spill's original shape into streamers and elongated masses. Efforts are ongoing to contain the spill and protect fragile coastal ecosystems and habitats such as the Chandeleur Islands (right center). Other features visible in the image include a solid field of low cloud cover at the lower left corner of the image. A part of one of the ISS solar arrays is visible at lower right. Wave patterns at lower right are most likely caused by tidal effects.

  10. Bio-inspired nano-sensor-enhanced CNN visual computer.

    PubMed

    Porod, Wolfgang; Werblin, Frank; Chua, Leon O; Roska, Tamas; Rodriguez-Vazquez, Angel; Roska, Botond; Fay, Patrick; Bernstein, Gary H; Huang, Yih-Fang; Csurgay, Arpad I

    2004-05-01

    Nanotechnology opens new ways to utilize recent discoveries in biological image processing by translating the underlying functional concepts into the design of CNN (cellular neural/nonlinear network)-based systems incorporating nanoelectronic devices. There is a natural intersection joining studies of retinal processing, spatio-temporal nonlinear dynamics embodied in CNN, and the possibility of miniaturizing the technology through nanotechnology. This intersection serves as the springboard for our multidisciplinary project. Biological feature and motion detectors map directly into the spatio-temporal dynamics of CNN for target recognition, image stabilization, and tracking. The neural interactions underlying color processing will drive the development of nanoscale multispectral sensor arrays for image fusion. Implementing such nanoscale sensors on a CNN platform will allow the implementation of device feedback control, a hallmark of biological sensory systems. These biologically inspired CNN subroutines are incorporated into the new world of analog-and-logic algorithms and software, containing also many other active-wave computing mechanisms, including nature-inspired (physics and chemistry) as well as PDE-based sophisticated spatio-temporal algorithms. Our goal is to design and develop several miniature prototype devices for target detection, navigation, tracking, and robotics. This paper presents an example illustrating the synergies emerging from the convergence of nanotechnology, biotechnology, and information and cognitive science.

  11. Smartphone based visual and quantitative assays on upconversional paper sensor.

    PubMed

    Mei, Qingsong; Jing, Huarong; Li, You; Yisibashaer, Wuerzha; Chen, Jian; Nan Li, Bing; Zhang, Yong

    2016-01-15

    The integration of smartphones with paper sensors has recently gained increasing attention because it enables rapid, quantitative analysis. However, smartphone-based upconversional paper sensors have been restricted by the lack of effective methods to acquire luminescence signals on test paper. Herein, by virtue of 3D printing technology, we exploited an auxiliary reusable device, which assembles a 980 nm mini-laser, an optical filter, and a mini-cavity together, for digitally imaging the luminescence variations on test paper and quantitatively analyzing the pesticide thiram by smartphone. In detail, copper-ion-decorated NaYF4:Yb/Tm upconversion nanoparticles were fixed onto filter paper to form the test paper, and the blue luminescence on it is quenched after addition of thiram through a luminescence resonance energy transfer mechanism. These variations can be monitored by the smartphone camera, and the blue-channel intensities of the captured color images are then calculated to quantify the amount of thiram through a self-written Android program installed on the smartphone, offering a reliable and accurate detection limit of 0.1 μM for the system. This work provides an initial demonstration of integrating upconversion nanosensors with smartphone digital imaging for point-of-care analysis on a paper-based platform. Copyright © 2015 Elsevier B.V. All rights reserved.
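The blue-channel readout described above can be sketched as computing a mean blue intensity over the test spot and a quench ratio against an analyte-free blank. Function names and pixel values below are hypothetical, not taken from the authors' Android program:

```python
import numpy as np

def mean_blue_intensity(rgb_image, roi=None):
    """Average blue-channel value over a region of interest of the
    captured test-paper image (channel order assumed R, G, B)."""
    img = rgb_image if roi is None else rgb_image[roi]
    return float(img[..., 2].mean())

def quench_ratio(blank_blue, sample_blue):
    """Relative luminescence quenching used to read out the analyte level;
    mapping this ratio to concentration requires a calibration curve."""
    return 1.0 - sample_blue / blank_blue

# Hypothetical spots: an unquenched blank and a partially quenched sample.
blank = np.full((10, 10, 3), [10, 20, 200], dtype=float)
sample = np.full((10, 10, 3), [10, 20, 150], dtype=float)
q = quench_ratio(mean_blue_intensity(blank), mean_blue_intensity(sample))
print(round(q, 2))  # 0.25
```

A calibration series of known thiram concentrations would then relate the quench ratio to concentration before unknown samples are measured.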

  12. Visual gas sensors based on dye thin films and resonant waveguide gratings

    NASA Astrophysics Data System (ADS)

    Davoine, L.; Schnieper, M.; Barranco, A.; Aparicio, F. J.

    2011-05-01

    A colorimetric sensor that provides a direct visual indication of chemical contamination was developed. The detection is based on the color change of the reflected light after exposure to a gas or a liquid. The sensor is a combination of a chemically sensitive dye layer and a subwavelength grating structure. To enhance the perception of the color change, a reference area sealed under a non-contaminated atmosphere is placed next to the sensor. The color change is clearly visible to the human eye. The device is based on photonic resonant effects; the visible color is a direct reflection of some of the incoming light, so no additional power supply is needed. This makes it usable as a standalone disposable sensor. The dye thin film is deposited by plasma-enhanced chemical vapor deposition (PECVD) on top of the subwavelength structure. The latter is made by combining a replication process of a sol-gel material and a thin film deposition. Low-cost fabrication and compatibility with environments where electricity cannot be used make this device very attractive for applications in hospitals, in industry, with explosives, and in traffic.

  13. Ocean Color Optical Property Data Derived from OCTS and POLDER: A Comparison Study

    NASA Technical Reports Server (NTRS)

    Wang, Menghua; Isaacman, Alice; Franz, Bryan A.; McClain, Charles R.; Zukor, Dorothy J. (Technical Monitor)

    2001-01-01

    We describe our efforts in studying and comparing the ocean color data derived from the Japanese Ocean Color and Temperature Scanner (OCTS) and the French Polarization and Directionality of the Earth's Reflectances (POLDER). OCTS and POLDER were both on board Japan's Sun-synchronous Advanced Earth Observing Satellite (ADEOS-1) from August 1996 to June 1997, collecting about 10 months of global ocean color data. This provides a unique opportunity for developing methods and strategies for the merging of ocean color data from multiple ocean color sensors. In this paper, we describe our approach in developing consistent data processing algorithms for both OCTS and POLDER and using a common in situ data set to vicariously calibrate the two sensors. Therefore, the OCTS and POLDER-measured radiances are effectively bridged through common in situ measurements. With this approach in processing data from two different sensors, the only differences in the derived products from OCTS and POLDER are the differences inherited from the instrument characteristics. Results show that there are no obvious bias differences between the OCTS and POLDER-derived ocean color products, whereas the differences due to noise, which stem from variations in sensor characteristics, are difficult to correct. It is possible, however, to reduce noise differences with some data averaging schemes. The ocean color data from OCTS and POLDER can therefore be compared and merged in the sense that there is no significant bias between the two.

  14. Fiber optic systems for colorimetry and scattered colorimetry

    NASA Astrophysics Data System (ADS)

    Mignani, Anna G.; Mencaglia, Andrea A.; Ciaccheri, Leonardo

    2005-09-01

    An innovative series of optical fiber sensors based on spectroscopic interrogation is presented. The sensors are custom-designed for a wide range of applications, including gasoline colorimetry, chromium monitoring of sewage, museum lighting control, interrogation of an array of absorption-based chemical sensors, and color and turbidity measurements. Two types of custom-designed instrumentation have been developed, both making use of LED light sources and a low-cost optical fiber spectrometer to perform broadband spectral measurements in the visible spectral range. The first was designed especially to address color-based sensors, while the second assessed the combined color and turbidity of edible liquids such as olive oil. Both are potentially exploitable in other industrial and environmental applications.
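    Colorimetry from a broadband spectrometer of this kind typically integrates the measured spectrum against colour-matching functions. A minimal sketch, using coarse placeholder colour-matching samples (not the CIE 1931 tables) and a made-up spectrum:

```python
def tristimulus(spectrum, cmfs, dl=10.0):
    """Approximate CIE-style tristimulus values from a sampled spectrum:
    X = sum over samples of S * xbar * delta_lambda (likewise Y and Z)."""
    X = sum(s * x for s, (x, _, _) in zip(spectrum, cmfs)) * dl
    Y = sum(s * y for s, (_, y, _) in zip(spectrum, cmfs)) * dl
    Z = sum(s * z for s, (_, _, z) in zip(spectrum, cmfs)) * dl
    return X, Y, Z

# Coarse placeholder colour-matching samples, from the red end to the
# blue end of the visible range; NOT the real CIE tables.
cmfs = [(1.0, 0.6, 0.0),
        (0.3, 1.0, 0.1),
        (0.0, 0.2, 1.2)]

reddish_spectrum = [0.9, 0.4, 0.1]  # made-up reflected-light samples
X, Y, Z = tristimulus(reddish_spectrum, cmfs)
# A red-weighted spectrum yields X well above Z.
```

    In a real instrument the spectrum would come from the fiber spectrometer and the true CIE tables would be used at the instrument's sampling interval.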

  15. Fixation light hue bias revisited: implications for using adaptive optics to study color vision.

    PubMed

    Hofer, H J; Blaschke, J; Patolia, J; Koenig, D E

    2012-03-01

    Current vision science adaptive optics systems use near infrared wavefront sensor 'beacons' that appear as red spots in the visual field. Colored fixation targets are known to influence the perceived color of macroscopic visual stimuli (Jameson, D., & Hurvich, L. M. (1967). Fixation-light bias: An unwanted by-product of fixation control. Vision Research, 7, 805-809.), suggesting that the wavefront sensor beacon may also influence perceived color for stimuli displayed with adaptive optics. Despite its importance for proper interpretation of adaptive optics experiments on the fine scale interaction of the retinal mosaic and spatial and color vision, this potential bias has not yet been quantified or addressed. Here we measure the impact of the wavefront sensor beacon on color appearance for dim, monochromatic point sources in five subjects. The presence of the beacon altered color reports both when used as a fixation target as well as when displaced in the visual field with a chromatically neutral fixation target. This influence must be taken into account when interpreting previous experiments and new methods of adaptive correction should be used in future experiments using adaptive optics to study color. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. Shadow detection and removal in RGB VHR images for land use unsupervised classification

    NASA Astrophysics Data System (ADS)

    Movia, A.; Beinat, A.; Crosilla, F.

    2016-09-01

    Nowadays, high resolution aerial images are widely available thanks to the diffusion of advanced technologies such as UAVs (Unmanned Aerial Vehicles) and new satellite missions. Although these developments offer new opportunities for accurate land use analysis and change detection, cloud and terrain shadows actually limit benefits and possibilities of modern sensors. Focusing on the problem of shadow detection and removal in VHR color images, the paper proposes new solutions and analyses how they can enhance common unsupervised classification procedures for identifying land use classes related to the CO2 absorption. To this aim, an improved fully automatic procedure has been developed for detecting image shadows using exclusively RGB color information, and avoiding user interaction. Results show a significant accuracy enhancement with respect to similar methods using RGB based indexes. Furthermore, novel solutions derived from Procrustes analysis have been applied to remove shadows and restore brightness in the images. In particular, two methods implementing the so called "anisotropic Procrustes" and the "not-centered oblique Procrustes" algorithms have been developed and compared with the linear correlation correction method based on the Cholesky decomposition. To assess how shadow removal can enhance unsupervised classifications, results obtained with classical methods such as k-means, maximum likelihood, and self-organizing maps, have been compared to each other and with a supervised clustering procedure.

  17. Cell phones as imaging sensors

    NASA Astrophysics Data System (ADS)

    Bhatti, Nina; Baker, Harlyn; Marguier, Joanna; Berclaz, Jérôme; Süsstrunk, Sabine

    2010-04-01

    Camera phones are ubiquitous, and consumers have been adopting them faster than any other technology in modern history. When connected to a network, though, they are capable of more than just picture taking: Suddenly, they gain access to the power of the cloud. We exploit this capability by providing a series of image-based personal advisory services. These are designed to work with any handset over any cellular carrier using commonly available Multimedia Messaging Service (MMS) and Short Message Service (SMS) features. Targeted at the unsophisticated consumer, these applications must be quick and easy to use, not requiring download capabilities or preplanning. Thus, all application processing occurs in the back-end system (i.e., as a cloud service) and not on the handset itself. Presenting an image to an advisory service in the cloud, a user receives information that can be acted upon immediately. Two of our examples involve color assessment - selecting cosmetics and home décor paint palettes; the third provides the ability to extract text from a scene. In the case of the color imaging applications, we have shown that our service rivals the advice quality of experts. The result of this capability is a new paradigm for mobile interactions - image-based information services exploiting the ubiquity of camera phones.

  18. Decomposed Photo Response Non-Uniformity for Digital Forensic Analysis

    NASA Astrophysics Data System (ADS)

    Li, Yue; Li, Chang-Tsun

    The last few years have seen the application of Photo Response Non-Uniformity noise (PRNU), a unique stochastic fingerprint of image sensors, to various types of digital forensic investigation, such as source device identification and integrity verification. In this work we propose a new way of extracting the PRNU noise pattern, called Decomposed PRNU (DPRNU), by exploiting the difference between the physical and artificial color components of photos taken by digital cameras that use a Color Filter Array for interpolating artificial components from physical ones. Experimental results presented in this work show the superiority of the proposed DPRNU to the commonly used version. We also propose a new performance metric, Corrected Positive Rate (CPR), to evaluate the performance of the common PRNU and the proposed DPRNU.
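    The common PRNU estimate underlying such work accumulates, over several images, the noise residual W = I - denoise(I) weighted by image intensity. A minimal pure-Python sketch (the crude moving-average denoiser below stands in for the wavelet denoiser normally used, and is an assumption, not this paper's DPRNU method):

```python
def denoise(img):
    """Crude content suppressor: 3-tap moving average along each row.
    (Stands in for the wavelet denoiser normally used in PRNU work.)"""
    out = []
    for row in img:
        n = len(row)
        out.append([(row[max(i - 1, 0)] + row[i] + row[min(i + 1, n - 1)]) / 3.0
                    for i in range(n)])
    return out

def estimate_prnu(images):
    """Common PRNU estimate K = sum(W * I) / sum(I * I), where
    W = I - denoise(I) is each image's noise residual."""
    h, w = len(images[0]), len(images[0][0])
    num = [[0.0] * w for _ in range(h)]
    den = [[0.0] * w for _ in range(h)]
    for img in images:
        smooth = denoise(img)
        for y in range(h):
            for x in range(w):
                res = img[y][x] - smooth[y][x]
                num[y][x] += res * img[y][x]
                den[y][x] += img[y][x] * img[y][x]
    return [[num[y][x] / den[y][x] if den[y][x] else 0.0 for x in range(w)]
            for y in range(h)]

# Two made-up flat-field exposures of the same sensor: the alternating
# intensities mimic a fixed multiplicative sensor pattern.
flat = [[100.0, 110.0, 100.0, 110.0]]
pattern = estimate_prnu([flat, flat])
```

    The DPRNU idea then splits this estimate by color component, separating pixels measured physically from those interpolated by the CFA demosaicing.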

  19. Spectrum slicer for snapshot spectral imaging

    NASA Astrophysics Data System (ADS)

    Tamamitsu, Miu; Kitagawa, Yutaro; Nakagawa, Keiichi; Horisaki, Ryoichi; Oishi, Yu; Morita, Shin-ya; Yamagata, Yutaka; Motohara, Kentaro; Goda, Keisuke

    2015-12-01

    We propose and demonstrate an optical component that overcomes critical limitations in our previously demonstrated high-speed multispectral videography, a method in which an array of periscopes placed in a prism-based spectral shaper is used to achieve snapshot multispectral imaging with the frame rate limited only by that of an image-recording sensor. The demonstrated optical component consists of a slicing mirror incorporated into a 4f-relaying lens system that we refer to as a spectrum slicer (SS). With its simple design, we can easily increase the number of spectral channels without adding fabrication complexity while preserving the capability of high-speed multispectral videography. We present a theoretical framework for the SS and its experimental utility for spectral imaging by showing real-time monitoring of a dynamic colorful event through five different visible windows.

  20. Hyacinths Choke the Rio Grande

    NASA Technical Reports Server (NTRS)

    2002-01-01

    These images acquired by the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), flying aboard NASA's Terra satellite, demonstrate the potential of satellite-based remote sensors to monitor infestations of non-native plant species. These images show the vigorous growth of water hyacinths along a stretch of the Rio Grande River in Texas. The infestation had grown so dense in some places that it was impeding the flow of water and had rendered the river impassable for boats. The hyacinth is an aquatic weed native to South America. The plant is exotic looking and, when it blooms, produces a pretty purple flower, which is why it was introduced into North America. However, it has the capacity to grow and spread at astonishing rates, so that in the wild it can completely clog the flow of rivers and waterways in a matter of days or weeks. The top image was acquired on March 30, 2002, and the bottom image on May 9, 2002. In the near-infrared region of the spectrum, photosynthetically active vegetation is highly reflective. Consequently, vegetation appears bright to the near-infrared sensors aboard ASTER, and water, which absorbs near-infrared radiation, appears dark. In these false-color images produced from the sensor data, healthy vegetation is shown as bright red while water is blue or black. Notice that a water hyacinth infestation is already apparent on March 30 near the center of the image. By May 9, the hyacinth population has exploded to cover more than half the river in the scene. Satellite-based remote sensors can enable scientists to monitor large areas of infestation like this one quickly and efficiently, which is particularly useful for regions that are difficult to reach on the ground. (For more details, see Showdown in the Rio Grande.) Images courtesy Terrametrics; data provided by the ASTER Science Team
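    The contrast described above, high NIR reflectance for vegetation and strong NIR absorption by water, is what the standard NDVI index exploits. The index itself is not part of this record, and the reflectance values below are hypothetical:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel: near +1 for
    healthy vegetation (bright in NIR), near zero or negative for water
    (which absorbs NIR)."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

# Hypothetical surface reflectances, not values from the ASTER scenes.
vegetation = ndvi(nir=0.50, red=0.08)
water = ndvi(nir=0.02, red=0.05)
```

    Thresholding such an index per pixel is one simple way to map the spread of an infestation like this between the two acquisition dates.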

  1. The Blue Marble

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This spectacular Moderate Resolution Imaging Spectroradiometer (MODIS) 'blue marble' image is based on the most detailed collection of true-color imagery of the entire Earth to date. Using a collection of satellite-based observations, scientists and visualizers stitched together months of observations of the land surface, oceans, sea ice, and clouds into a seamless, true-color mosaic of every square kilometer (.386 square mile) of our planet. Most of the information contained in this image came from MODIS, illustrating MODIS' outstanding capacity to act as an integrated tool for observing a variety of terrestrial, oceanic, and atmospheric features of the Earth. The land and coastal ocean portions of this image are based on surface observations collected from June through September 2001 and combined, or composited, every eight days to compensate for clouds that might block the satellite's view on any single day. Global ocean color (or chlorophyll) data were used to simulate the ocean surface. MODIS doesn't measure 3-D features of the Earth, so the surface observations were draped over topographic data provided by the U.S. Geological Survey EROS Data Center. MODIS observations of polar sea ice were combined with observations of Antarctica made by the National Oceanic and Atmospheric Administration's AVHRR sensor, the Advanced Very High Resolution Radiometer. The cloud image is a composite of two days of MODIS imagery collected in visible-light wavelengths and a third day of thermal infrared imagery over the poles. A large collection of imagery based on the blue marble in a variety of sizes and formats, including animations and the full (1 km) resolution imagery, is available at the Blue Marble page. Image by Reto Stockli; rendering by Robert Simmon. Based on data from the MODIS Science Team

  2. An approach of point cloud denoising based on improved bilateral filtering

    NASA Astrophysics Data System (ADS)

    Zheng, Zeling; Jia, Songmin; Zhang, Guoliang; Li, Xiuzhi; Zhang, Xiangyin

    2018-04-01

    An omnidirectional mobile platform is designed for building point clouds based on an improved filtering algorithm employed to handle the depth images. First, the mobile platform can move flexibly and its control interface is convenient to use. Then, because the traditional bilateral filtering algorithm is time-consuming and inefficient, a novel method called local bilateral filtering (LBF) is proposed. LBF is applied to process the depth images obtained by the Kinect sensor, and the results show that the noise-removal effect is improved compared with standard bilateral filtering. In the offline condition, the color images and processed depth images are used to build point clouds. Finally, experimental results demonstrate that our method improves the processing speed of the depth images as well as the quality of the resulting point cloud.
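    For reference, the plain bilateral filter that LBF accelerates weights each neighbour by both spatial distance and depth difference, so depth edges survive while flat-region noise is smoothed. A minimal sketch (not the paper's LBF; the parameters and depth values are illustrative):

```python
import math

def bilateral_filter(img, radius=1, sigma_s=1.0, sigma_r=10.0):
    """Plain bilateral filter on a 2D depth image: each output pixel is a
    mean of its neighbours, weighted by spatial distance (sigma_s) and by
    depth difference (sigma_r)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        ws = math.exp(-(dx * dx + dy * dy)
                                      / (2.0 * sigma_s ** 2))
                        wr = math.exp(-(img[ny][nx] - img[y][x]) ** 2
                                      / (2.0 * sigma_r ** 2))
                        num += ws * wr * img[ny][nx]
                        den += ws * wr
            out[y][x] = num / den
    return out

# A one-row depth map with a sharp step edge; the filter should keep it.
depth = [[10.0, 10.0, 10.0, 100.0, 100.0, 100.0]]
smoothed = bilateral_filter(depth)
```

    The range weight `wr` is what makes the filter edge-preserving: neighbours across the depth step contribute almost nothing, so the step is not blurred.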

  3. Development of high-speed video cameras

    NASA Astrophysics Data System (ADS)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R and D activities on high-speed video cameras that have been conducted at Kinki University for more than ten years and are currently proceeding as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done, (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and hearings, and (2) on the current availability of cameras of this sort, by searches of journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors, using the same sensor as the previous camera, was developed in 1996. Its frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS, an In-situ Storage Image Sensor, was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, the design of a prototype ISIS is underway, and it will hopefully be fabricated in the near future. Epoch-making cameras in the history of high-speed video camera development by other groups are also briefly reviewed.

  4. Third-generation imaging sensor system concepts

    NASA Astrophysics Data System (ADS)

    Reago, Donald A.; Horn, Stuart B.; Campbell, James, Jr.; Vollmerhausen, Richard H.

    1999-07-01

    Second generation forward looking infrared sensors, based on either parallel scanning, long wave (8 - 12 um) time delay and integration HgCdTe detectors or mid wave (3 - 5 um), medium format staring (640 X 480 pixels) InSb detectors, are being fielded. The science and technology community is now turning its attention toward the definition of a future third generation of FLIR sensors, based on emerging research and development efforts. Modeled third generation sensor performance demonstrates a significant improvement over second generation, resulting in enhanced lethality and survivability on the future battlefield. In this paper we present the current thinking on what third generation sensor systems will be and the resulting requirements for third generation focal plane array detectors. Three classes of sensors have been identified. The high performance sensor will contain a megapixel or larger array with at least two colors. Higher operating temperatures will also be a goal here so that power and weight can be reduced. A high performance uncooled sensor is also envisioned that will perform somewhere between first and second generation cooled detectors, but at significantly lower cost, weight, and power. The final third generation sensor is a very low cost micro sensor. This sensor can open up a whole new IR market because of its small size, weight, and cost. Future unattended throwaway sensors, micro UAVs, and helmet mounted IR cameras will be the result of this new class.

  5. Real-time image processing for non-contact monitoring of dynamic displacements using smartphone technologies

    NASA Astrophysics Data System (ADS)

    Min, Jae-Hong; Gelo, Nikolas J.; Jo, Hongki

    2016-04-01

    The smartphone application newly developed in this study, named RINO, allows measuring absolute dynamic displacements and processing them in real time using state-of-the-art smartphone technologies, such as a high-performance graphics processing unit (GPU), in addition to an already powerful CPU and memory, an embedded high-speed, high-resolution camera, and open-source computer vision libraries. A carefully designed color-patterned target and a user-adjustable crop filter enable accurate and fast image processing, allowing up to 240 fps for complete displacement calculation and real-time display. The performance of the developed smartphone application is experimentally validated, showing accuracy comparable with that of a conventional laser displacement sensor.
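    Vision-based displacement sensing of this kind ultimately tracks the target's position across frames and converts the pixel shift to physical units. A minimal sketch of centroid tracking on binary target masks (a hypothetical illustration, not the RINO pipeline; the scale factor is made up):

```python
def centroid(mask):
    """Sub-pixel centroid (x, y) of the 'on' pixels in a binary target mask."""
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

def displacement_mm(mask_before, mask_after, mm_per_pixel):
    """Target displacement between two frames, converted from pixels to
    millimetres via a known scale factor."""
    (x0, y0) = centroid(mask_before)
    (x1, y1) = centroid(mask_after)
    return ((x1 - x0) * mm_per_pixel, (y1 - y0) * mm_per_pixel)

# A 2x2 target blob shifted two pixels to the right between frames.
before = [[0, 1, 1, 0, 0],
          [0, 1, 1, 0, 0]]
after = [[0, 0, 0, 1, 1],
         [0, 0, 0, 1, 1]]
dx, dy = displacement_mm(before, after, mm_per_pixel=0.5)
```

    In practice the scale factor is set by the known size of the color-patterned target, and cropping to a region around the target keeps per-frame processing fast.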

  6. Near-Infrared Coloring via a Contrast-Preserving Mapping Model.

    PubMed

    Chang-Hwan Son; Xiao-Ping Zhang

    2017-11-01

    Near-infrared gray images captured along with corresponding visible color images have recently proven useful for image restoration and classification. This paper introduces a new coloring method to add colors to near-infrared gray images based on a contrast-preserving mapping model. A naive coloring method directly adds the colors from the visible color image to the near-infrared gray image. However, this method results in an unrealistic image because of the discrepancies in the brightness and image structure between the captured near-infrared gray image and the visible color image. To solve the discrepancy problem, first, we present a new contrast-preserving mapping model to create a new near-infrared gray image with a similar appearance in the luminance plane to the visible color image, while preserving the contrast and details of the captured near-infrared gray image. Then, we develop a method to derive realistic colors that can be added to the newly created near-infrared gray image based on the proposed contrast-preserving mapping model. Experimental results show that the proposed new method not only preserves the local contrast and details of the captured near-infrared gray image, but also transfers the realistic colors from the visible color image to the newly created near-infrared gray image. It is also shown that the proposed near-infrared coloring can be used effectively for noise and haze removal, as well as local contrast enhancement.

  7. Airborne Mission Concept for Coastal Ocean Color and Ecosystems Research

    NASA Technical Reports Server (NTRS)

    Guild, Liane S.; Hooker, Stanford B.; Morrow, John H.; Kudela, Raphael M.; Palacios, Sherry L.; Torres Perez, Juan L.; Hayashi, Kendra; Dunagan, Stephen E.

    2016-01-01

    NASA airborne missions in 2011 and 2013 over Monterey Bay, CA, demonstrated novel above- and in-water calibration and validation measurements supporting a combined airborne sensor approach (imaging spectrometer, microradiometers, and a sun photometer). The resultant airborne data characterize contemporaneous coastal atmospheric and aquatic properties plus sea-truth observations from state-of-the-art instrument systems spanning a next-generation spectral domain (320-875 nm). This airborne instrument suite for calibration, validation, and research flew at the lowest safe altitude (ca. 100 ft or 30 m) as well as higher altitudes (e.g., 6,000 ft or 1,800 m) above the sea surface covering a larger area in a single synoptic sortie than ship-based measurements at a few stations during the same sampling period. Data collection of coincident atmospheric and aquatic properties near the sea surface and at altitude allows the input of relevant variables into atmospheric correction schemes to improve the output of corrected imaging spectrometer data. Specific channels support legacy and next-generation satellite capabilities, and flights are planned to within 30 min of satellite overpass. This concept supports calibration and validation activities of ocean color phenomena (e.g., river plumes, algal blooms) and studies of water quality and coastal ecosystems. The 2011 COAST mission flew at 100 and 6,000 ft on a Twin Otter platform with flight plans accommodating the competing requirements of the sensor suite, which included the Coastal-Airborne In-situ Radiometers (C-AIR) for the first time. C-AIR (Biospherical Instruments Inc.) also flew in the 2013 OCEANIA mission at 100 and 1,000 ft on the Twin Otter below the California airborne simulation of the proposed NASA HyspIRI satellite system comprised of an imaging spectrometer and thermal infrared multispectral imager on the ER-2 at 65,000 ft (20,000 m). 
For both missions, the Compact-Optical Profiling System (Biospherical Instruments, Inc.), an in-water system with microradiometers matching C-AIR, was deployed to compare sea-truth measurements and low-altitude Twin Otter flights within Monterey Bay red tide events. This novel airborne and in-water sensor capability advances the science of coastal measurements and enables rapid response for coastal events.

  8. Interactive digital image manipulation system

    NASA Technical Reports Server (NTRS)

    Henze, J.; Dezur, R.

    1975-01-01

    The system is designed for manipulation, analysis, interpretation, and processing of a wide variety of image data. LANDSAT (ERTS) and other data in digital form can be input directly into the system. Photographic prints and transparencies are first converted to digital form with an on-line high-resolution microdensitometer. The system is implemented on a Hewlett-Packard 3000 computer with 128 K bytes of core memory and a 47.5 megabyte disk. It includes a true color display monitor, with processing memories, graphics overlays, and a movable cursor. Image data formats are flexible so that there is no restriction to a given set of remote sensors. Conversion between data types is available to provide a basis for comparison of the various data. Multispectral data is fully supported, and there is no restriction on the number of dimensions. In this way multispectral data collected at more than one point in time may simply be treated as data collected with twice (three times, etc.) the number of sensors. There are various libraries of functions available to the user: processing functions, display functions, system functions, and earth resources applications functions.

  9. Wavelet Analysis of SAR Images for Coastal Monitoring

    NASA Technical Reports Server (NTRS)

    Liu, Antony K.; Wu, Sunny Y.; Tseng, William Y.; Pichel, William G.

    1998-01-01

    The mapping of mesoscale ocean features in the coastal zone is a major potential application for satellite data. The evolution of mesoscale features such as oil slicks, fronts, eddies, and ice edges can be tracked by wavelet analysis using satellite data from repeating paths. The wavelet transform has been applied to satellite images, such as those from Synthetic Aperture Radar (SAR), the Advanced Very High-Resolution Radiometer (AVHRR), and ocean color sensors, for feature extraction. In this paper, algorithms and techniques for automated detection and tracking of mesoscale features from satellite SAR imagery employing wavelet analysis have been developed. Case studies on two major coastal oil spills have been investigated using wavelet analysis for tracking along the coast of Uruguay (February 1997), and near Point Barrow, Alaska (November 1997). Comparison of SAR images with SeaWiFS (Sea-viewing Wide Field-of-view Sensor) data for a coccolithophore bloom in the East Bering Sea during the fall of 1997 shows a good match on the bloom boundary. This paper demonstrates that this technique is a useful and promising tool for monitoring coastal waters.

  10. The results of initial analysis of OSTA-1/Ocean Color Experiment (OCE) imagery

    NASA Technical Reports Server (NTRS)

    Kim, H. H.; Hart, W. D.

    1982-01-01

    Ocean view images from the Ocean Color Experiment (OCE) were produced at three widely separated locations on the Earth. Digital computer enhancement and band ratioing techniques were applied to radiometrically corrected OCE spectral data to emphasize patterns of chlorophyll distribution and, in one shallow, clear water case, bottom topography. The chlorophyll pattern in the Yellow Sea between China and Korea was evident in a scene produced from Shuttle Orbit 24. The effects of the discharge from the Yangtze and other rivers were also observed. Two scenes from orbits 30 and 32 revealed the movement of patches of plankton in the Gulf of Cadiz. Geometrical corrections to these images permitted the existing ocean current velocities in the vicinity to be deduced. The variability in water depth over the Grand Bahama Bank was estimated by using the blue-green OCE channel. The very clear water conditions in the area caused bottom reflected sunlight to produce a sensor signal which was related inversely to the depth of the water.
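    Band ratioing of the kind applied to the OCE data maps a blue/green radiance ratio to chlorophyll concentration, since chlorophyll preferentially absorbs blue light. A minimal sketch with hypothetical coefficients and radiances (not the OCE algorithm or its values):

```python
import math

def chlorophyll_from_ratio(blue, green, a0=0.3, a1=-2.9):
    """Illustrative two-coefficient band-ratio algorithm: chlorophyll-rich
    water absorbs blue light, so a low blue/green ratio maps to a high
    chlorophyll estimate. The coefficients a0, a1 are made up."""
    return 10.0 ** (a0 + a1 * math.log10(blue / green))

# Hypothetical radiances: clear water reflects relatively more blue.
clear_water = chlorophyll_from_ratio(blue=5.0, green=2.0)
bloom_water = chlorophyll_from_ratio(blue=2.0, green=2.5)
```

    Ratioing two bands also cancels much of the shared multiplicative variation (illumination, viewing geometry), which is why it emphasizes the chlorophyll pattern rather than brightness differences.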

  11. Ganges River Delta

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The Ganges River forms an extensive delta where it empties into the Bay of Bengal. The delta is largely covered with a swamp forest known as the Sunderbans, which is home to the Royal Bengal Tiger. It is also home to most of Bangladesh, one of the world's most densely populated countries. Roughly 120 million people live on the Ganges Delta under threat of repeated catastrophic floods due to heavy runoff of meltwater from the Himalayas, and due to the intense rainfall during the monsoon season. This image was acquired by Landsat 7's Enhanced Thematic Mapper plus (ETM+) sensor on February 28, 2000. This is a false-color composite image made using green, infrared, and blue wavelengths. Image provided by the USGS EROS Data Center Satellite Systems Branch

  12. WE-D-17A-02: Evaluation of a Two-Dimensional Optical Dosimeter On Measuring Lateral Profiles of Proton Pencil Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsi, W; Lee, T; Schultz, T

    Purpose: To evaluate the accuracy of a two-dimensional optical dosimeter in measuring lateral profiles for spots and scanned fields of proton pencil beams. Methods: A digital camera with a color image sensor was utilized to image proton-induced scintillations on Gadolinium-oxysulfide phosphor reflected by a stainless-steel mirror. Intensities of three colors were summed for each pixel with proper spatial-resolution calibration. To benchmark this dosimeter, the field size and penumbra for 100 mm square fields of single-energy pencil-scan protons were measured and compared between this optical dosimeter and an ionization-chamber profiler. Sigma widths of proton spots in air were measured and compared between this dosimeter and a commercial optical dosimeter. Clinical proton beams with ranges between 80 mm and 300 mm at the CDH proton center were used for this benchmark. Results: Pixel resolutions vary 1.5% between two perpendicular axes. For a pencil-scan field with 302 mm range, measured field sizes and penumbras between the two detection systems agreed to 0.5 mm and 0.3 mm, respectively. Sigma widths agree to 0.3 mm between the two optical dosimeters for a proton spot with 158 mm range, with widths of 5.76 mm and 5.92 mm for the X and Y axes, respectively. Similar agreements were obtained for other beam ranges. This dosimeter was successfully utilized for mapping the shapes and sizes of proton spots at the technical acceptance of the McLaren proton therapy system. Snowflake spots seen on images indicated that the image sensor had pixels damaged by radiation. Minor variations in intensity between different colors were observed. Conclusions: The accuracy of our dosimeter was in good agreement with that of other established devices in measuring lateral profiles of pencil-scan fields and proton spots. A precise docking mechanism for the camera was designed to keep the optical path aligned while replacing a damaged image sensor. The causes of the minor variations between emitted colors will be investigated.

  13. Nanohole-array-based device for 2D snapshot multispectral imaging

    PubMed Central

    Najiminaini, Mohamadreza; Vasefi, Fartash; Kaminska, Bozena; Carson, Jeffrey J. L.

    2013-01-01

    We present a two-dimensional (2D) snapshot multispectral imager that utilizes the optical transmission characteristics of nanohole arrays (NHAs) in a gold film to resolve a mixture of input colors into multiple spectral bands. The multispectral device consists of blocks of NHAs, wherein each NHA has a unique periodicity that results in transmission resonances and minima in the visible and near-infrared regions. The multispectral device was illuminated over a wide spectral range, and the transmission was spectrally unmixed using a least-squares estimation algorithm. A NHA-based multispectral imaging system was built and tested in both reflection and transmission modes. The NHA-based multispectral imager was capable of extracting 2D multispectral images representative of four independent bands within the spectral range of 662 nm to 832 nm for a variety of targets. The multispectral device can potentially be integrated into a variety of imaging sensor systems. PMID:24005065
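    The least-squares unmixing step described above solves m = T x for the band intensities x given each NHA's transmission coefficients T. A minimal sketch via the normal equations, using a made-up 3x2 transmission matrix (not the device's measured characteristics):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small square system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def unmix(T, measured):
    """Least-squares band intensities x for readings m = T x, via the
    normal equations (T^T T) x = T^T m."""
    n = len(T[0])
    TtT = [[sum(T[k][i] * T[k][j] for k in range(len(T))) for j in range(n)]
           for i in range(n)]
    Ttm = [sum(T[k][i] * measured[k] for k in range(len(T))) for i in range(n)]
    return solve(TtT, Ttm)

# Made-up per-NHA transmissions for two spectral bands, three NHA blocks.
T = [[0.9, 0.2],
     [0.3, 0.8],
     [0.5, 0.5]]
true_bands = [2.0, 1.0]
readings = [sum(T[k][j] * true_bands[j] for j in range(2)) for k in range(3)]
estimated = unmix(T, readings)
```

    With more NHA blocks than bands the system is overdetermined, so the least-squares solution also averages down measurement noise.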

  14. Analysis of airborne imaging spectrometer data for the Ruby Mountains, Montana, by use of absorption-band-depth images

    NASA Technical Reports Server (NTRS)

    Brickey, David W.; Crowley, James K.; Rowan, Lawrence C.

    1987-01-01

    Airborne Imaging Spectrometer-1 (AIS-1) data were obtained for an area of amphibolite-grade metamorphic rocks that have moderate rangeland vegetation cover. Although rock exposures are sparse and patchy at this site, soils are visible through the vegetation and typically comprise 20 to 30 percent of the surface area. Channel-averaged band-depth images were produced for diagnostic soil and rock absorption bands. Sets of three such images were combined to produce color composite band-depth images. This relatively simple approach did not require extensive calibration efforts and was effective for discerning a number of spectrally distinctive rocks and soils, including soils having high talc concentrations. The results show that the high spectral and spatial resolution of AIS-1 and future sensors hold considerable promise for mapping mineral variations in soil, even in moderately vegetated areas.
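    A band-depth image assigns each pixel the continuum-removed depth of an absorption feature, 1 - R_center / R_continuum, with the continuum interpolated between the band shoulders. A minimal sketch for one pixel spectrum (the channel indices and reflectance values are hypothetical):

```python
def band_depth(reflectance, left, center, right):
    """Continuum-removed depth of an absorption band: 1 - R_c / R_cont,
    with the continuum interpolated linearly between the band shoulders
    at channel indices 'left' and 'right'."""
    r_left, r_center, r_right = (reflectance[left], reflectance[center],
                                 reflectance[right])
    frac = (center - left) / (right - left)
    continuum = r_left + frac * (r_right - r_left)
    return 1.0 - r_center / continuum

# Hypothetical pixel spectrum with an absorption feature at channel 2.
spectrum = [0.40, 0.38, 0.25, 0.37, 0.41]
depth = band_depth(spectrum, left=0, center=2, right=4)
```

    Because the depth is taken relative to the local continuum, it is largely insensitive to overall brightness, which is what lets such images discriminate soils through partial vegetation cover.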

  15. Comparison of Landsat-7 Enhanced Thematic Mapper Plus (ETM+) and Earth Observing One (EO-1) Advanced Land Imager

    NASA Technical Reports Server (NTRS)

    Pedelty, Jeffrey A.; Morisette, Jeffrey T.; Smith, James A.

    2004-01-01

    We compare images from the Enhanced Thematic Mapper Plus (ETM+) sensor on Landsat-7 and the Advanced Land Imager (ALI) instrument on Earth Observing One (EO-1) over a test site in Rochester, New York. The site contains a variety of features, ranging from water of varying depths, deciduous/coniferous forest, and grass fields, to urban areas. Nearly coincident cloud-free images were collected one minute apart on 25 August 2001. We also compare images of a forest site near Howland, Maine, that were collected on 7 September 2001. We atmospherically corrected each pair of images with the Second Simulation of the Satellite Signal in the Solar Spectrum (6S) atmosphere model, using aerosol optical thickness and water vapor column density measured by in situ Cimel sun photometers within the Aerosol Robotic Network (AERONET), along with ozone density derived from the Total Ozone Mapping Spectrometer (TOMS) on the Earth Probe satellite. We present true-color composites from each instrument that show excellent qualitative agreement between the multispectral sensors, along with grey-scale images that demonstrate a significantly improved ALI panchromatic band. We quantitatively compare ALI and ETM+ reflectance spectra of a grassy field in Rochester and find differences of 6% or less in the visible/near-infrared and approximately 2% in the shortwave infrared. Spectral comparisons of forest sites in Rochester and Howland yield similar percentage agreement, except for band 1, which has very low reflectance. Principal component analyses and comparison of normalized difference vegetation index histograms for each sensor indicate that the ALI is able to reproduce the information content of the ETM+ but with superior signal-to-noise performance due to its increased 12-bit quantization.

  16. Multifrequency Ultra-High Resolution Miniature Scanning Microscope Using Microchannel And Solid-State Sensor Technologies And Method For Scanning Samples

    NASA Technical Reports Server (NTRS)

    Wang, Yu (Inventor)

    2006-01-01

A miniature, ultra-high-resolution color scanning microscope based on microchannel and solid-state sensor technology that does not require focus adjustment. One embodiment includes a source of collimated radiant energy for illuminating a sample; a plurality of narrow-angle filters comprising a microchannel structure that permits the passage of only unscattered radiant energy through the microchannels, some portion of the radiant energy entering the microchannels from the sample; a solid-state sensor array attached to the microchannel structure, each microchannel being aligned with an element of the solid-state sensor array so that the portion of the radiant energy entering the microchannels parallel to the microchannel walls travels to the sensor element, generating an electrical signal from which an image is reconstructed by an external device; and a moving element for movement of the microchannel structure relative to the sample. Also disclosed is a method for scanning samples whereby the sensor array elements trace parallel paths that are arbitrarily close to the parallel paths traced by other elements of the array.

  17. Color visual simulation applications at the Defense Mapping Agency

    NASA Astrophysics Data System (ADS)

    Simley, J. D.

    1984-09-01

    The Defense Mapping Agency (DMA) produces the Digital Landmass System data base to provide culture and terrain data in support of numerous aircraft simulators. In order to conduct data base and simulation quality control and requirements analysis, DMA has developed the Sensor Image Simulator which can rapidly generate visual and radar static scene digital simulations. The use of color in visual simulation allows the clear portrayal of both landcover and terrain data, whereas the initial black and white capabilities were restricted in this role and thus found limited use. Color visual simulation has many uses in analysis to help determine the applicability of current and prototype data structures to better meet user requirements. Color visual simulation is also significant in quality control since anomalies can be more easily detected in natural appearing forms of the data. The realism and efficiency possible with advanced processing and display technology, along with accurate data, make color visual simulation a highly effective medium in the presentation of geographic information. As a result, digital visual simulation is finding increased potential as a special purpose cartographic product. These applications are discussed and related simulation examples are presented.

  18. Introduction to Color Imaging Science

    NASA Astrophysics Data System (ADS)

    Lee, Hsien-Che

    2005-04-01

    Color imaging technology has become almost ubiquitous in modern life in the form of monitors, liquid crystal screens, color printers, scanners, and digital cameras. This book is a comprehensive guide to the scientific and engineering principles of color imaging. It covers the physics of light and color, how the eye and physical devices capture color images, how color is measured and calibrated, and how images are processed. It stresses physical principles and includes a wealth of real-world examples. The book will be of value to scientists and engineers in the color imaging industry and, with homework problems, can also be used as a text for graduate courses on color imaging.

  19. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Nalbandon mineral district in Afghanistan: Chapter L in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Nalbandon mineral district, which has lead and zinc deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2007, 2008, 2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. 
As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. 
The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (41 for Nalbandon) and the WGS84 datum. The final image mosaics were subdivided into ten overlapping tiles or quadrants because of the large size of the target area. The ten image tiles (or quadrants) for the Nalbandon area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Nalbandon study area, two subareas were designated for detailed field investigations (that is, the Nalbandon District and Gharghananaw-Gawmazar subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
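The radiometric adjustment step described in this record, in which band-reflectance correspondence between overlapping images is determined by linear least squares, can be sketched as follows. The reflectance samples and the simple gain/offset model are assumptions for illustration; the actual USGS processing is described in Davis (2006).

```python
import numpy as np

def fit_adjustment(overlap_std, overlap_new):
    """Fit a gain and offset mapping the new image's reflectance onto
    the standard image's reflectance over their overlap region,
    using a linear least-squares fit."""
    gain, offset = np.polyfit(overlap_new, overlap_std, deg=1)
    return gain, offset

# Hypothetical band-reflectance samples from the overlap between the
# standard image and an image to be adjusted to it.
std = np.array([0.10, 0.20, 0.30, 0.40, 0.50])
new = np.array([0.12, 0.24, 0.33, 0.46, 0.55])

gain, offset = fit_adjustment(std, new)
adjusted = gain * new + offset  # new image radiometry now matches the standard
```

Each subsequent image in the mosaic would be adjusted against the already-adjusted imagery in the same way, which is what "sequentially adjusted" means in the text.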

  20. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Zarkashan mineral district in Afghanistan: Chapter G in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.

    2012-01-01

The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Zarkashan mineral district, which has copper and gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2006, 2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS.
As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. 
The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Zarkashan) and the WGS84 datum. The final image mosaics were subdivided into two overlapping tiles or quadrants because of the large size of the target area. The two image tiles (or quadrants) for the Zarkashan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Zarkashan study area, three subareas were designated for detailed field investigations (that is, the Mine Area, Bolo Gold Prospect, and Luman-Tamaki Gold Prospect subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
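The local-area histogram stretch applied to each band in these mosaics can be sketched roughly as below. This toy version stretches each pixel against the minimum and maximum within a square pixel neighborhood rather than a ground-distance radius (315 m here, 500 m in other districts); the function name and data are illustrative, and the actual algorithm is the one described in Davis (2007).

```python
import numpy as np

def local_area_stretch(band, radius, out_min=0, out_max=255):
    """Stretch each pixel linearly using the min/max of all pixels
    within a square neighborhood of the given pixel radius (a simple
    stand-in for a circular ground-distance window)."""
    h, w = band.shape
    out = np.zeros_like(band, dtype=float)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            win = band[i0:i1, j0:j1]
            lo, hi = win.min(), win.max()
            if hi > lo:
                out[i, j] = (band[i, j] - lo) / (hi - lo) * (out_max - out_min) + out_min
            else:
                out[i, j] = out_min  # flat neighborhood: no contrast to stretch
    return out
```

Because the stretch is computed per neighborhood, local contrast is enhanced even in areas that occupy only a narrow slice of the band's global histogram, which is the point of applying it before compositing the final color images.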

  1. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Kandahar mineral district in Afghanistan: Chapter Z in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.

    2013-01-01

The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Kandahar mineral district, which has bauxite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2006, 2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS.
As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image.
The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for Kandahar) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the Kandahar area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Kandahar study area, two subareas were designated for detailed field investigations (that is, the Obatu-Shela and Sekhab-Zamto Kalay subareas); these subareas were extracted from the area's image mosaic and are provided as separate embedded geotiff images.

  2. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Khanneshin mineral district in Afghanistan: Chapter A in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Khanneshin mineral district, which has uranium, thorium, rare-earth-element, and apatite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. 
The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2007, 2008, 2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006).
Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (41 for Khanneshin) and the WGS84 datum. The final image mosaics were subdivided into nine overlapping tiles or quadrants because of the large size of the target area. The nine image tiles (or quadrants) for the Khanneshin area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. 
Within the Khanneshin study area, one subarea was designated for detailed field investigations (that is, the Khanneshin volcano subarea); this subarea was extracted from the area's image mosaic and is provided as separate embedded geotiff images.

  3. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Panjsher Valley mineral district in Afghanistan: Chapter M in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Panjsher Valley mineral district, which has emerald and silver-iron deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2009, 2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. 
As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. 
The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Panjsher Valley) and the WGS84 datum. The final image mosaics were subdivided into two overlapping tiles or quadrants because of the large size of the target area. The two image tiles (or quadrants) for the Panjsher Valley area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Panjsher Valley study area, two subareas were designated for detailed field investigations (that is, the Emerald and Silver-Iron subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
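
    The overlap-based band-reflectance adjustment described above amounts to fitting a per-band gain and offset by linear least squares over the pixels shared by two images, then applying that fit to the whole incoming image. A minimal numpy sketch under that reading (function and variable names are illustrative, not from the USGS software):

```python
import numpy as np

def match_band_reflectance(standard_overlap, new_overlap, new_image):
    """Fit gain/offset mapping new-image reflectance onto the standard
    image using the overlap region, then apply it to the whole image.
    Illustrative sketch of the linear least-squares adjustment; not
    the actual USGS mosaicking code."""
    x = new_overlap.ravel()
    y = standard_overlap.ravel()
    # Least-squares fit of y ~ gain * x + offset over the overlap pixels.
    A = np.vstack([x, np.ones_like(x)]).T
    (gain, offset), *_ = np.linalg.lstsq(A, y, rcond=None)
    return gain * new_image + offset
```

    In practice each band of each incoming image would be adjusted this way in sequence, starting from the standard (highest sun elevation, least scattering) image.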

  4. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Farah mineral district in Afghanistan: Chapter FF in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.

    2014-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Farah mineral district, which has spectral reflectance anomalies indicative of copper, zinc, lead, silver, and gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. 
The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2007, 2008, 2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). 
Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (41 for Farah) and the WGS84 datum. The final image mosaics were subdivided into four overlapping tiles or quadrants because of the large size of the target area. The four image tiles (or quadrants) for the Farah area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. 
Within the Farah study area, five subareas were designated for detailed field investigations (that is, the FarahA through FarahE subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
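
    The local-area histogram stretch described in these records can be approximated with sliding-window minimum and maximum filters: each picture element is rescaled against the extremes of its neighborhood. The sketch below substitutes a square window in pixels for the circular radius (315-1,000 m depending on district) and is a simplified stand-in, not the Davis (2007) algorithm itself:

```python
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def local_area_stretch(band, radius_px, out_max=255.0):
    """Linear local-area stretch: rescale each pixel by the min/max of
    all pixels within a square neighborhood of half-width radius_px.
    Simplified illustrative sketch, not the USGS implementation."""
    size = 2 * radius_px + 1
    lo = minimum_filter(band, size=size)
    hi = maximum_filter(band, size=size)
    span = np.where(hi > lo, hi - lo, 1.0)  # avoid divide-by-zero in flat areas
    return (band - lo) / span * out_max
```

    Applying this independently to each of the four bands, as the records describe, boosts local contrast at the expense of absolute radiometry, which is why the stretched products are intended for visual interpretation rather than quantitative analysis.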

  5. Pseudo color ghost coding imaging with pseudo thermal light

    NASA Astrophysics Data System (ADS)

    Duan, De-yang; Xia, Yun-jie

    2018-04-01

    We present a new pseudo color imaging scheme, named pseudo color ghost coding imaging, that is based on ghost imaging but uses a multiwavelength source modulated by a spatial light modulator. Unlike conventional pseudo color imaging, in which the absence of nondegenerate-wavelength spatial correlations yields only extra monochromatic images, this scheme obtains the degenerate-wavelength and nondegenerate-wavelength spatial correlations between the idler beam and the signal beam simultaneously. It can therefore produce a more colorful image of higher quality than conventional pseudo color coding techniques. More importantly, a significant advantage of the scheme over conventional pseudo color coding imaging is that images with different colors can be obtained without changing the light source or the spatial filter.
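
    The spatial correlations that ghost imaging relies on are second-order intensity correlations between a spatially resolved reference arm and a single-pixel (bucket) signal arm: G(x) = <B I(x)> - <B><I(x)>. A generic computational-ghost-imaging sketch of that correlation (monochromatic, with illustrative names; this is not the authors' multiwavelength setup):

```python
import numpy as np

def ghost_image(patterns, bucket):
    """Reconstruct an object image from the intensity correlation between
    reference speckle patterns (n_frames, H, W) and the bucket-detector
    signal (n_frames,): G(x) = <B I(x)> - <B><I(x)>.
    Generic illustrative sketch of second-order correlation imaging."""
    patterns = np.asarray(patterns, dtype=float)
    bucket = np.asarray(bucket, dtype=float)
    return (bucket[:, None, None] * patterns).mean(axis=0) \
        - bucket.mean() * patterns.mean(axis=0)
```

    In the pseudo color scheme, computing such correlations at degenerate and nondegenerate wavelength pairs is what supplies the separate color channels.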

  6. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Dusar-Shaida mineral district in Afghanistan: Chapter I in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Dusar-Shaida mineral district, which has copper and tin deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. 
As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. 
The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (41 for Dusar-Shaida) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the Dusar-Shaida area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Dusar-Shaida study area, three subareas were designated for detailed field investigations (that is, the Dahana-Misgaran, Kaftar VMS, and Shaida subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
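
    The tiff world files (tfw) mentioned throughout these records store a six-parameter affine transform from pixel indices to map (here UTM) coordinates, one parameter per line: A, D, B, E, C, F, with x = A·col + B·row + C and y = D·col + E·row + F. A minimal sketch of applying one (illustrative; embedded geotiffs carry the same transform internally, which is why the tfw files are usually redundant):

```python
def pixel_to_map(tfw_lines, col, row):
    """Map a pixel (col, row) to map coordinates using the six affine
    parameters of a tiff world file, read in file order A, D, B, E, C, F.
    Illustrative sketch; GIS software performs this mapping internally."""
    A, D, B, E, C, F = (float(v) for v in tfw_lines)
    x = A * col + B * row + C   # A: x pixel size, B: row rotation term
    y = D * col + E * row + F   # E is negative for north-up images
    return x, y
```

    For the 2.5-m mosaics described here, A would be 2.5 and E would be -2.5, with C and F giving the UTM easting and northing of the upper-left pixel center.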

  7. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Kundalyan mineral district in Afghanistan: Chapter H in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Kundalyan mineral district, which has porphyry copper and gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. 
As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. 
The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Kundalyan) and the WGS84 datum. The final image mosaics were subdivided into five overlapping tiles or quadrants because of the large size of the target area. The five image tiles (or quadrants) for the Kundalyan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Kundalyan study area, three subareas were designated for detailed field investigations (that is, the Baghawan-Garangh, Charsu-Ghumbad, and Kunag Skarn subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.

  8. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Herat mineral district in Afghanistan: Chapter T in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Herat mineral district, which has barium and limestone deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008,2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. 
As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. 
The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 1,000-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (41 for Herat) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the Herat area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Herat study area, one subarea was designated for detailed field investigations (that is, the Barium-Limestone subarea); this subarea was extracted from the area's image mosaic and is provided as a separate embedded geotiff image.

  9. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Badakhshan mineral district in Afghanistan: Chapter F in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Badakhshan mineral district, which has gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. 
As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. 
The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Badakhshan) and the WGS84 datum. The final image mosaics were subdivided into six overlapping tiles or quadrants because of the large size of the target area. The six image tiles (or quadrants) for the Badakhshan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Badakhshan study area, three subareas were designated for detailed field investigations (that is, the Bharak, Fayz-Abad, and Ragh subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
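The radiometric matching step described above can be sketched in a few lines: band-reflectance values of one image are mapped onto those of the standard image by a linear least-squares fit over their overlap. This is only an illustration of the idea under a simple gain/offset model; the abstract does not specify the model form, and the `match_bands` helper and synthetic data below are hypothetical, not the USGS implementation.

```python
import numpy as np

def match_bands(standard_overlap, other_overlap, other_full):
    """Fit a gain/offset that maps the other image's band values onto the
    standard image's values in their overlap region, then apply that
    mapping to the whole image. Hypothetical helper, not the USGS code."""
    # Least-squares fit: standard ~ gain * other + offset
    gain, offset = np.polyfit(other_overlap.ravel(), standard_overlap.ravel(), 1)
    return gain * other_full + offset

# Synthetic overlap: the "other" image is a dimmed, offset copy of the standard.
rng = np.random.default_rng(0)
standard = rng.uniform(0.1, 0.9, size=(50, 50))
other = 0.8 * standard + 0.05
adjusted = match_bands(standard, other, other)
print(np.allclose(adjusted, standard))  # True: the fit recovers the standard values
```

In the mosaicking workflow this adjustment would be applied sequentially, each newly adjusted image joining the reference set for the next overlap.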

  10. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Kharnak-Kanjar mineral district in Afghanistan: Chapter K in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Kharnak-Kanjar mineral district, which has mercury deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008,2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. 
As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. 
The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 1,000-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (41 for Kharnak-Kanjar) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the Kharnak-Kanjar area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Kharnak-Kanjar study area, three subareas were designated for detailed field investigations (that is, the Koh-e-Katif Passaband, Panjshah-Mullayan, and Sahebdad-Khanjar subareas); these subareas were extracted from the area's image mosaic and are provided as separate embedded geotiff images.
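The local-area histogram stretch described in these records can be sketched as follows. This is an illustrative sketch only, not the algorithm of Davis (2007): a square window stands in for the circular radius, and a simple per-pixel min-max stretch stands in for whatever stretch function the cited report actually uses.

```python
import numpy as np

def local_stretch(band, radius, out_max=255.0):
    """Linearly stretch each pixel using the min/max of its local
    neighborhood (square window approximating the circular radius).
    Illustrative sketch; not the Davis (2007) algorithm."""
    h, w = band.shape
    out = np.empty_like(band, dtype=float)
    for i in range(h):
        for j in range(w):
            win = band[max(0, i - radius): i + radius + 1,
                       max(0, j - radius): j + radius + 1]
            lo, hi = win.min(), win.max()
            out[i, j] = 0.0 if hi == lo else (band[i, j] - lo) / (hi - lo) * out_max
    return out

band = np.arange(16, dtype=float).reshape(4, 4)
stretched = local_stretch(band, radius=1)
print(stretched.min(), stretched.max())  # 0.0 255.0
```

Because every pixel is rescaled against its own neighborhood, local contrast is enhanced even where the global histogram is compressed, which is the stated purpose of the local-area enhancement.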

  11. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Haji-Gak mineral district in Afghanistan: Chapter C in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Haji-Gak mineral district, which has iron ore deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2007), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. 
As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. 
The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for Haji-Gak) and the WGS84 datum. The final image mosaics were subdivided into three overlapping tiles or quadrants because of the large size of the target area. The three image tiles (or quadrants) for the Haji-Gak area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Haji-Gak study area, three subareas were designated for detailed field investigations (that is, the Haji-Gak Prospect, Farenjal, and NE Haji-Gak subareas); these subareas were extracted from the area's image mosaic and are provided as separate embedded geotiff images.

  12. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Aynak mineral district in Afghanistan: Chapter E in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Aynak mineral district, which has copper deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008,2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. 
As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. 
The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Aynak) and the WGS84 datum. The final image mosaics were subdivided into four overlapping tiles or quadrants because of the large size of the target area. The four image tiles (or quadrants) for the Aynak area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Aynak study area, five subareas were designated for detailed field investigations (that is, the Bakhel-Charwaz, Kelaghey-Kakhay, Kharuti-Dawrankhel, Logar Valley, and Yagh-Darra/Gul-Darra subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.

  13. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Ghunday-Achin mineral district in Afghanistan, in Davis, P.A., compiler, Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.; Davis, Philip A.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Ghunday-Achin mineral district, which has magnesite and talc deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008,2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. 
As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. 
The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Ghunday-Achin) and the WGS84 datum. The final image mosaics were subdivided into six overlapping tiles or quadrants because of the large size of the target area. The six image tiles (or quadrants) for the Ghunday-Achin area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Ghunday-Achin study area, two subareas were designated for detailed field investigations (that is, the Achin-Magnesite and Ghunday-Mamahel subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.

  14. Athena Microscopic Imager investigation

    NASA Astrophysics Data System (ADS)

    Herkenhoff, K. E.; Squyres, S. W.; Bell, J. F.; Maki, J. N.; Arneson, H. M.; Bertelsen, P.; Brown, D. I.; Collins, S. A.; Dingizian, A.; Elliott, S. T.; Goetz, W.; Hagerott, E. C.; Hayes, A. G.; Johnson, M. J.; Kirk, R. L.; McLennan, S.; Morris, R. V.; Scherr, L. M.; Schwochert, M. A.; Shiraishi, L. R.; Smith, G. H.; Soderblom, L. A.; Sohl-Dickstein, J. N.; Wadsworth, M. V.

    2003-11-01

    The Athena science payload on the Mars Exploration Rovers (MER) includes the Microscopic Imager (MI). The MI is a fixed-focus camera mounted on the end of an extendable instrument arm, the Instrument Deployment Device (IDD). The MI was designed to acquire images at a spatial resolution of 30 microns/pixel over a broad spectral range (400-700 nm). The MI uses the same electronics design as the other MER cameras but has optics that yield a field of view of 31 × 31 mm across a 1024 × 1024 pixel CCD image. The MI acquires images using only solar or skylight illumination of the target surface. A contact sensor is used to place the MI slightly closer to the target surface than its best focus distance (about 66 mm), allowing concave surfaces to be imaged in good focus. Coarse focusing (~2 mm precision) is achieved by moving the IDD away from a rock target after the contact sensor has been activated. The MI optics are protected from the Martian environment by a retractable dust cover. The dust cover includes a Kapton window that is tinted orange to restrict the spectral bandpass to 500-700 nm, allowing color information to be obtained by taking images with the dust cover open and closed. MI data will be used to place other MER instrument data in context and to aid in petrologic and geologic interpretations of rocks and soils on Mars.

  15. Image indexing using color correlograms

    DOEpatents

    Huang, Jing; Kumar, Shanmugasundaram Ravi; Mitra, Mandar; Zhu, Wei-Jing

    2001-01-01

    A color correlogram is a three-dimensional table indexed by color and distance between pixels which expresses how the spatial correlation of color changes with distance in a stored image. The color correlogram may be used to distinguish an image from other images in a database. To create a color correlogram, the colors in the image are quantized into m color values, c1, …, cm. Also, the distance values k ∈ [d] to be used in the correlogram are determined, where [d] is the set of distances between pixels in the image and dmax is the maximum distance measurement between pixels in the image. Each entry (i, j, k) in the table is the probability of finding a pixel of color cj at a selected distance k from a pixel of color ci. A color autocorrelogram, which is a restricted version of the color correlogram that considers color pairs of the form (i,i) only, may also be used to identify an image.
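The autocorrelogram construction can be illustrated with a small sketch. The use of Chebyshev (L∞) distance and exhaustive counting over the distance-k ring are assumptions for illustration; the patent's exact distance measure and (sampled) counting scheme may differ.

```python
import numpy as np

def autocorrelogram(img, num_colors, distances):
    """Estimate, for each quantized color c and each distance k, the
    probability that a pixel at L-infinity distance k from a pixel of
    color c also has color c. Simplified illustrative sketch."""
    h, w = img.shape
    table = np.zeros((num_colors, len(distances)))
    for ki, k in enumerate(distances):
        match = np.zeros(num_colors)
        total = np.zeros(num_colors)
        for i in range(h):
            for j in range(w):
                c = img[i, j]
                # visit the ring of pixels at Chebyshev distance exactly k
                for di in range(-k, k + 1):
                    for dj in range(-k, k + 1):
                        if max(abs(di), abs(dj)) != k:
                            continue
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w:
                            total[c] += 1
                            if img[ni, nj] == c:
                                match[c] += 1
        table[:, ki] = np.divide(match, total,
                                 out=np.zeros_like(match), where=total > 0)
    return table

# A solid-color image: every neighbor matches, so probabilities are 1.
img = np.zeros((8, 8), dtype=int)
t = autocorrelogram(img, num_colors=2, distances=[1, 3])
print(t[0])  # [1. 1.]
```

The full correlogram replaces the single color index with an (i, j) pair; the autocorrelogram above keeps only the diagonal (i, i) entries, trading discrimination power for a much smaller table.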

  16. New Research Methods Developed for Studying Diabetic Foot Ulceration

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Dr. Brian Davis, one of the Cleveland Clinic Foundation's researchers, has been investigating the risk factors related to diabetic foot ulceration, a problem that accounts for 20 percent of all hospital admissions for diabetic patients. He had developed a sensor pad to measure the friction and pressure forces under a person's foot when walking. As part of NASA Lewis Research Center's Space Act Agreement with the Cleveland Clinic Foundation, Dr. Davis requested Lewis' assistance in visualizing the data from the sensor pad. As a result, Lewis' Interactive Data Display System (IDDS) was installed at the Cleveland Clinic. This computer graphics program is normally used to visualize the flow of air through aircraft turbine engines, producing color two- and three-dimensional images.

  17. Cheap DECAF: Density Estimation for Cetaceans from Acoustic Fixed Sensors Using Separate, Non-Linked Devices

    DTIC Science & Technology

    2015-09-30

    Interpolation was used to estimate fin whale density in between the hydrophone locations, and the result was plotted as a density image. This was repeated every 5... singing fin whale density throughout the year for the study location off Portugal. Color indicates whale density, with calibration scale at right; yellow spots are hydrophone locations; timeline at top indicates the time of year; circle at lower right is 1,000 km², the area used in the unit of whale...

  18. Designing a Practical System for Spectral Imaging of Skylight

    DTIC Science & Technology

    2005-09-20

    Commission Internationale de l'Éclairage), include CIELUV, CIELAB, CIE94, and CIEDE2000. These metrics quantify distances in their respective... thresholds for RMSE and CIEDE2000 metrics when searching for optimum sensors; Hernández-Andrés et al. used GFC, CIELUV, and IIE(%) in a similar way. As... once. We use GFC as a spectral metric, CIELAB as a colorimetric cost function (denoted by ΔE*ab, the distance between two colors in the CIE's uniform...

  19. New regularization scheme for blind color image deconvolution

    NASA Astrophysics Data System (ADS)

    Chen, Li; He, Yu; Yap, Kim-Hui

    2011-01-01

    This paper proposes a new regularization scheme to address blind color image deconvolution. Color images generally have significant correlation among the red, green, and blue channels. Conventional blind monochromatic deconvolution algorithms handle each color channel independently, thereby ignoring the interchannel correlation present in color images. In view of this, a unified image regularization scheme is developed to recover the edges of color images and reduce color artifacts. In addition, by using color image properties, a spectral-based regularization operator is adopted to impose constraints on the blurs. Further, this paper proposes a reinforcement regularization framework that integrates a soft parametric learning term into blind color image deconvolution. A blur modeling scheme is developed to evaluate the relevance of manifold parametric blur structures, and this information is integrated into the deconvolution scheme. An alternating-minimization optimization procedure is then employed to iteratively minimize the image- and blur-domain cost functions. Experimental results show that the method achieves satisfactory restoration of color images under different blurring conditions.

  20. Appearance-based multimodal human tracking and identification for healthcare in the digital home.

    PubMed

    Yang, Mau-Tsuen; Huang, Shen-Yen

    2014-08-05

    There is an urgent need for intelligent home surveillance systems to provide home security, monitor health conditions, and detect emergencies of family members. One of the fundamental problems to realize the power of these intelligent services is how to detect, track, and identify people at home. Compared to RFID tags that need to be worn all the time, vision-based sensors provide a natural and nonintrusive solution. Observing that body appearance and body build, as well as face, provide valuable cues for human identification, we model and record multi-view faces, full-body colors and shapes of family members in an appearance database by using two Kinects located at a home's entrance. Then the Kinects and another set of color cameras installed in other parts of the house are used to detect, track, and identify people by matching the captured color images with the registered templates in the appearance database. People are detected and tracked by multisensor fusion (Kinects and color cameras) using a Kalman filter that can handle duplicate or partial measurements. People are identified by multimodal fusion (face, body appearance, and silhouette) using a track-based majority voting. Moreover, the appearance-based human detection, tracking, and identification modules can cooperate seamlessly and benefit from each other. Experimental results show the effectiveness of the human tracking across multiple sensors and human identification considering the information of multi-view faces, full-body clothes, and silhouettes. The proposed home surveillance system can be applied to domestic applications in digital home security and intelligent healthcare.
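
    The track-based majority voting described above can be sketched as follows. This is a minimal, hypothetical illustration (the labels and vote lists are invented); the paper's system generates the per-frame votes by fusing face, body appearance, and silhouette matches, which is not reproduced here.

```python
from collections import Counter

def track_identity(frame_votes):
    """Assign a track one identity by majority vote over its per-frame matches.

    frame_votes: list of identity labels, one per frame in the track.
    """
    counts = Counter(frame_votes)
    label, _ = counts.most_common(1)[0]
    return label

# hypothetical track with five frames of per-frame identification results
votes = ["alice", "bob", "alice", "alice", "unknown"]
print(track_identity(votes))  # "alice"
```

    Voting over a whole track rather than deciding per frame makes the identification robust to occasional misclassified frames.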

  1. Appearance-Based Multimodal Human Tracking and Identification for Healthcare in the Digital Home

    PubMed Central

    Yang, Mau-Tsuen; Huang, Shen-Yen

    2014-01-01

    There is an urgent need for intelligent home surveillance systems to provide home security, monitor health conditions, and detect emergencies of family members. One of the fundamental problems to realize the power of these intelligent services is how to detect, track, and identify people at home. Compared to RFID tags that need to be worn all the time, vision-based sensors provide a natural and nonintrusive solution. Observing that body appearance and body build, as well as face, provide valuable cues for human identification, we model and record multi-view faces, full-body colors and shapes of family members in an appearance database by using two Kinects located at a home's entrance. Then the Kinects and another set of color cameras installed in other parts of the house are used to detect, track, and identify people by matching the captured color images with the registered templates in the appearance database. People are detected and tracked by multisensor fusion (Kinects and color cameras) using a Kalman filter that can handle duplicate or partial measurements. People are identified by multimodal fusion (face, body appearance, and silhouette) using a track-based majority voting. Moreover, the appearance-based human detection, tracking, and identification modules can cooperate seamlessly and benefit from each other. Experimental results show the effectiveness of the human tracking across multiple sensors and human identification considering the information of multi-view faces, full-body clothes, and silhouettes. The proposed home surveillance system can be applied to domestic applications in digital home security and intelligent healthcare. PMID:25098207

  2. Study on Mosaic and Uniform Color Method of Satellite Image Fusion in Large Area

    NASA Astrophysics Data System (ADS)

    Liu, S.; Li, H.; Wang, X.; Guo, L.; Wang, R.

    2018-04-01

    With the improvement of satellite radiometric resolution, the color differences among multi-temporal satellite remote sensing images, and the large volume of satellite image data, completing the mosaic and color-balancing process for satellite images remains an important problem in image processing. First, using the bundle uniform-color method and least-squares mosaic method of GXL together with its dodging function, a uniform transition of color and brightness can be achieved across large-area, multi-temporal satellite images. Second, Color Mapping software converts the 16-bit color mosaic images to 8-bit mosaic images, applying a uniform-color method with low-resolution reference images. Finally, qualitative and quantitative methods are used to analyze and evaluate the satellite imagery after mosaicking and color balancing. The tests show that the correlation between mosaic images before and after color balancing is higher than 95%, image information entropy increases, and texture features are enhanced, as confirmed by quantitative indexes such as the correlation coefficient and information entropy. Satellite image mosaicking and color processing over large areas has thus been successfully implemented.
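
    The core of a least-squares uniform-color adjustment can be sketched as a per-band gain/offset fit between overlapping images. This is a minimal illustration with invented data, not the GXL bundle adjustment the abstract refers to, which solves for all images jointly.

```python
import numpy as np

def radiometric_match(overlap_ref, overlap_src):
    """Fit gain and offset so the source image matches the reference image
    in their overlap region, by linear least squares on corresponding pixels.
    Apply the returned (gain, offset) to the whole source band."""
    gain, offset = np.polyfit(overlap_src.ravel(), overlap_ref.ravel(), 1)
    return gain, offset

# toy overlap where the reference is exactly twice the source
ref = np.array([[10.0, 20.0], [30.0, 40.0]])
src = np.array([[5.0, 10.0], [15.0, 20.0]])
g, o = radiometric_match(ref, src)  # gain ≈ 2, offset ≈ 0
```

    In a real mosaic, each new image is adjusted against the already-balanced images it overlaps, sequentially or in a joint bundle solution.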

  3. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the South Bamyan mineral district in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Davis, Philip A.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the South Bamyan mineral district, which has areas with a spectral reflectance anomaly that require field investigation. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. 
The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2006, 2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value-added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). 
Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for South Bamyan) and the WGS84 datum. The final image mosaics for the South Bamyan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
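
    The local-area histogram stretch described above can be sketched as a sliding-window min-max stretch. This is a simplified stand-in, assuming a square pixel window rather than the 500-m radius of Davis (2007), and a plain min-max stretch rather than the actual histogram-based algorithm.

```python
import numpy as np

def local_stretch(band, radius):
    """Stretch each pixel to [0, 255] using the min and max of a local
    square window around it, so contrast is enhanced per neighborhood
    rather than globally."""
    h, w = band.shape
    out = np.zeros_like(band, dtype=float)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            win = band[i0:i1, j0:j1]
            lo, hi = win.min(), win.max()
            out[i, j] = 0.0 if hi == lo else 255.0 * (band[i, j] - lo) / (hi - lo)
    return out
```

    Each of the four bands would be stretched independently this way before compositing the natural-color and color-infrared products.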

  4. Space Radar Image of Patagonian Ice Fields

    NASA Image and Video Library

    1999-04-15

    This pair of images illustrates the ability of multi-parameter radar imaging sensors such as the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) to detect climate-related changes on the Patagonian ice fields in the Andes Mountains of Chile and Argentina. The images show nearly the same area of the south Patagonian ice field as it was imaged during two space shuttle flights in 1994, conducted five-and-a-half months apart. The images, centered at 49.0 degrees south latitude and 73.5 degrees west longitude, include several large outlet glaciers. The images were acquired by SIR-C/X-SAR on board the space shuttle Endeavour during April and October 1994. The top image was acquired on April 14, 1994, at 10:46 p.m. local time, while the bottom image was acquired on October 5, 1994, at 10:57 p.m. local time. Both were acquired during the 77th orbit of the space shuttle. The area shown is approximately 100 kilometers by 58 kilometers (62 miles by 36 miles), with north toward the upper right. The colors in the images were obtained using the following radar channels: red represents the C-band (horizontally transmitted and received); green represents the L-band (horizontally transmitted and received); blue represents the L-band (horizontally transmitted and vertically received). The overall dark tone of the central portion of the April image indicates that the interior of the ice field is covered with thick wet snow. The outlet glaciers, consisting of rough bare ice, are the brightly colored yellow and purple lobes that terminate at calving fronts into the dark waters of lakes and fiords. During the second mission the temperatures were colder, and the corresponding change in snow and ice conditions is readily apparent by comparing the images. The interior of the ice field is brighter because of increased radar return from the drier snow. 
The distinct green/orange boundary on the ice field indicates an abrupt change in the structure of the snowcap, a direct indication of the steep meteorological gradients known to exist in this region. The bluer color of the outlet glaciers is probably due to a thin snow cover. A portion of the terminus of the outlet glacier at the top left center of the images has advanced approximately 600 meters (1,970 feet) in the five-and-a-half months between the two missions. Because of the persistent cloud cover this observation was only possible by using the orbiting, remote imaging radar system. http://photojournal.jpl.nasa.gov/catalog/PIA01778

  5. Landsat multispectral sharpening using a sensor system model and panchromatic image

    USGS Publications Warehouse

    Lemeshewsky, G.P.; ,

    2003-01-01

    The thematic mapper (TM) sensor aboard Landsats 4 and 5 and the enhanced TM plus (ETM+) on Landsat 7 collect imagery at 30-m sample distance in six spectral bands. New with ETM+ is a 15-m panchromatic (P) band. With image sharpening techniques, this higher-resolution P data, or alternatively the 10-m (or 5-m) P data of the SPOT satellite, can increase the spatial resolution of the multispectral (MS) data. Sharpening requires that the lower-resolution MS image be coregistered and resampled to the P data before high-spatial-frequency information is transferred to the MS data. For visual interpretation and machine classification tasks, it is important that the sharpened data preserve the spectral characteristics of the original low-resolution data. A technique was developed for sharpening (in this case, 3:1 spatial resolution enhancement) visible-band data based on a model of the sensor system point spread function (PSF), in order to maintain spectral fidelity. It combines high-pass (HP) filter sharpening methods with iterative image restoration to reduce degradations caused by sensor-system-induced blurring and resampling. There is also a spectral fidelity requirement: the sharpened MS image, when filtered by the modeled degradations, should reproduce the low-resolution source MS image. Quantitative evaluation of sharpening performance was made using simulated low-resolution data generated from digital color-IR aerial photography. Compared to the HP-filter-based sharpening method, results for this technique on simulated data show improved spectral fidelity. Preliminary results with TM 30-m visible-band data sharpened with simulated 10-m panchromatic data are promising but require further study.
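
    The baseline HP-filter sharpening that the paper improves on can be sketched as: add the panchromatic band's high-frequency detail to the upsampled multispectral band. This shows only that basic step, with a crude box low-pass filter; the paper's PSF model and iterative restoration are not reproduced.

```python
import numpy as np

def hp_sharpen(ms_up, pan, k=3):
    """High-pass-filter sharpening: ms_up is a multispectral band already
    coregistered and upsampled to the panchromatic grid; the pan detail
    (pan minus its low-pass version) is added back to it."""
    h, w = pan.shape
    low = np.zeros_like(pan, dtype=float)
    r = k // 2
    for i in range(h):
        for j in range(w):
            # k x k box low-pass of the pan image (clipped at borders)
            win = pan[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            low[i, j] = win.mean()
    return ms_up + (pan - low)
```

    When the pan image is flat (no detail), the output equals the input MS band, which is the spectral-fidelity behavior one wants in the limit.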

  6. Software defined multi-spectral imaging for Arctic sensor networks

    NASA Astrophysics Data System (ADS)

    Siewert, Sam; Angoth, Vivek; Krishnamurthy, Ramnarayan; Mani, Karthikeyan; Mock, Kenrick; Singh, Surjith B.; Srivistava, Saurav; Wagner, Chris; Claus, Ryan; Vis, Matthew Demi

    2016-05-01

    Availability of off-the-shelf infrared sensors combined with high definition visible cameras has made possible the construction of a Software Defined Multi-Spectral Imager (SDMSI) combining long-wave, near-infrared and visible imaging. The SDMSI requires a real-time embedded processor to fuse images and to create real-time depth maps for opportunistic uplink in sensor networks. Researchers at Embry Riddle Aeronautical University working with University of Alaska Anchorage at the Arctic Domain Awareness Center and the University of Colorado Boulder have built several versions of a low-cost drop-in-place SDMSI to test alternatives for power efficient image fusion. The SDMSI is intended for use in field applications including marine security, search and rescue operations and environmental surveys in the Arctic region. Based on Arctic marine sensor network mission goals, the team has designed the SDMSI to include features to rank images based on saliency and to provide on camera fusion and depth mapping. A major challenge has been the design of the camera computing system to operate within a 10 to 20 Watt power budget. This paper presents a power analysis of three options: 1) multi-core, 2) field programmable gate array with multi-core, and 3) graphics processing units with multi-core. For each test, power consumed for common fusion workloads has been measured at a range of frame rates and resolutions. Detailed analyses from our power efficiency comparison for workloads specific to stereo depth mapping and sensor fusion are summarized. Preliminary mission feasibility results from testing with off-the-shelf long-wave infrared and visible cameras in Alaska and Arizona are also summarized to demonstrate the value of the SDMSI for applications such as ice tracking, ocean color, soil moisture, animal and marine vessel detection and tracking. 
The goal is to select the most power efficient solution for the SDMSI for use on UAVs (Unoccupied Aerial Vehicles) and other drop-in-place installations in the Arctic. The prototype selected will be field tested in Alaska in the summer of 2016.

  7. Internet Color Imaging

    DTIC Science & Technology

    2000-07-01

    UNCLASSIFIED Defense Technical Information Center Compilation Part Notice ADP011348 TITLE: Internet Color Imaging DISTRIBUTION: Approved for public...Paper Internet Color Imaging Hsien-Che Lee Imaging Science and Technology Laboratory Eastman Kodak Company, Rochester, New York 14650-1816 USA...ABSTRACT The sharing and exchange of color images over the Internet pose very challenging problems to color science and technology. Emerging color standards

  8. Multimodal digital color imaging system for facial skin lesion analysis

    NASA Astrophysics Data System (ADS)

    Bae, Youngwoo; Lee, Youn-Heum; Jung, Byungjo

    2008-02-01

    In dermatology, various digital imaging modalities have been used as important tools to quantitatively evaluate the treatment of skin lesions. Cross-polarization color imaging has been used to evaluate skin chromophore (melanin and hemoglobin) information, and parallel-polarization imaging to evaluate skin texture information. In addition, UV-A-induced fluorescence imaging has been widely used to evaluate various skin conditions such as sebum, keratosis, sun damage, and vitiligo. To maximize the efficacy of evaluating various skin lesions, it is necessary to integrate these modalities into a single imaging system. In this study, we propose a multimodal digital color imaging system that provides four different digital color images: a standard color image, parallel- and cross-polarization color images, and a UV-A-induced fluorescent color image. Herein, we describe the imaging system and present examples of image analysis. By analyzing the color information and morphological features of facial skin lesions, we can comparatively and simultaneously evaluate various skin lesions. In conclusion, the multimodal color imaging system can serve as an important assistive tool in dermatology.

  9. Color line scan camera technology and machine vision: requirements to consider

    NASA Astrophysics Data System (ADS)

    Paernaenen, Pekka H. T.

    1997-08-01

    Color machine vision has shown a dynamic uptrend in use within the past few years, as the introduction of new cameras and scanner technologies itself underscores. In the future, the movement from monochrome imaging to color will hasten as machine vision users demand more knowledge about their product stream. As color has come to machine vision, certain requirements are placed on the equipment used to digitize color images. Color machine vision needs not only good color separation but also a high dynamic range and good linear response from the camera used. These features become even more important when the image is converted to another color space, since some information is always lost when converting integer data to another form. Traditionally, color image processing has been much slower than gray-level image processing because of the threefold greater data volume per image; the same has applied to the threefold greater memory requirement. Advances in computers, memory, and processing units have now made it possible to handle even large color images cost-efficiently. In some cases, image analysis on color images can in fact be easier and faster than on a comparable gray-level image because of the additional information per pixel. Color machine vision sets new requirements for lighting, too: high-intensity white light is required to acquire good images for further processing or analysis. New developments in lighting technology are eventually bringing solutions for color imaging.

  10. Utilizing typical color appearance models to represent perceptual brightness and colorfulness for digital images

    NASA Astrophysics Data System (ADS)

    Gong, Rui; Wang, Qing; Shao, Xiaopeng; Zhou, Conghao

    2016-12-01

    This study aims to expand the application of color appearance models to representing the perceptual attributes of digital images, supplying more accurate methods for predicting image brightness and image colorfulness. Two typical models, CIELAB and CIECAM02, were used to develop algorithms predicting brightness and colorfulness for various images, in which three methods were designed to handle pixels of different color content. Moreover, extensive visual data were collected from psychophysical experiments on two mobile displays under three lighting conditions to analyze the characteristics of visual perception of these two attributes and to test the prediction accuracy of each algorithm. Detailed analyses revealed that image brightness and image colorfulness were predicted well by calculating the CIECAM02 parameters of lightness and chroma; thus, suitable methods for dealing with different color pixels were determined for image brightness and image colorfulness, respectively. This study supplies an example of extending color appearance models to describe image perception.

  11. Color reproduction and processing algorithm based on real-time mapping for endoscopic images.

    PubMed

    Khan, Tareq H; Mohammed, Shahed K; Imtiaz, Mohammad S; Wahid, Khan A

    2016-01-01

    In this paper, we present a real-time preprocessing algorithm for endoscopic image enhancement. A novel dictionary-based color mapping algorithm is used to reproduce color information from a theme image selected from a nearby anatomical location. A database of color endoscopy images for different locations is prepared for this purpose. The color map is dynamic, as its contents change with the theme image. The method is applied to low-contrast grayscale white-light images and raw narrow-band images to highlight vascular and mucosal structures and to colorize the images; it can also be applied to enhance the tone of color images. Statistical visual representations and universal image quality measures show that the proposed method highlights the mucosal structure better than other methods. Color similarity has been verified using the Delta E color difference, structural similarity index, mean structural similarity index, and structure and hue similarity. Color enhancement was measured using a color enhancement factor that shows considerable improvement. The proposed algorithm has low, linear time complexity, resulting in higher execution speed than other related works.
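
    A much simpler cousin of theme-based color reproduction is Reinhard-style statistics transfer: shift each channel of the source image to match the mean and spread of the theme image. This is a stand-in illustration only, not the paper's dictionary-based mapping.

```python
import numpy as np

def transfer_color_stats(src, theme):
    """Match each channel's mean and standard deviation of src to those of
    the theme image (per-channel linear transform). Arrays have channels on
    the last axis."""
    out = np.empty_like(src, dtype=float)
    for c in range(src.shape[-1]):
        s, t = src[..., c], theme[..., c]
        s_std = s.std() or 1.0  # avoid division by zero on flat channels
        out[..., c] = (s - s.mean()) / s_std * t.std() + t.mean()
    return out
```

    In practice such transfers are applied in a decorrelated color space (e.g. Lab) rather than RGB, so that the channels can be adjusted independently.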

  12. Color segmentation in the HSI color space using the K-means algorithm

    NASA Astrophysics Data System (ADS)

    Weeks, Arthur R.; Hague, G. Eric

    1997-04-01

    Segmentation of images is an important aspect of image recognition. While grayscale image segmentation has become quite a mature field, much less work has been done on color image segmentation. Until recently, this was predominantly due to the lack of the computing power and color display hardware required to manipulate true-color (24-bit) images. Today, it is not uncommon to find a standard desktop computer with a true-color 24-bit display, at least 8 megabytes of memory, and 2 gigabytes of hard disk storage. Segmentation of color images is not as simple as segmenting each of the three RGB color components separately. The difficulty with the RGB color space is that it does not closely model the psychological understanding of color. A better color model, which closely follows human visual perception, is the hue, saturation, intensity (HSI) model. This color model separates the color components in terms of chromatic and achromatic information. Strickland et al. showed the importance of color in the extraction of edge features from an image; their method enhances the edges detectable in the luminance image with information from the saturation image. Segmentation of both the saturation and intensity components is easily accomplished with any grayscale segmentation algorithm, since these spaces are linear. The modulo-2π nature of the hue component makes its segmentation difficult; for example, hues of 0 and 2π yield the same color tint. Instead of applying separate segmentation to each of the hue, saturation, and intensity components, a better method is to segment the chromatic component separately from the intensity component, because of the importance that chromatic information plays in the segmentation of color images. This paper presents a method of using the grayscale K-means algorithm to segment 24-bit color images. Additionally, this paper shows the importance the hue component plays in the segmentation of color images.
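
    One standard way to make K-means respect the wrap-around of hue is to embed each hue angle as a point (cos h, sin h) on the unit circle before clustering, so hues of 0 and 2π map to the same point. This sketch uses a deterministic farthest-point initialization for reproducibility; it illustrates the circularity issue the abstract raises, not the paper's specific algorithm.

```python
import numpy as np

def hue_kmeans(hue, k, iters=20):
    """Cluster hue values (radians) with K-means on the unit circle."""
    pts = np.column_stack([np.cos(hue), np.sin(hue)])
    # deterministic farthest-point initialization of the k centers
    centers = pts[:1].copy()
    while len(centers) < k:
        d = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(-1).min(1)
        centers = np.vstack([centers, pts[d.argmax()]])
    for _ in range(iters):
        d = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = pts[labels == c].mean(0)
    return labels
```

    With this embedding, pixels with hues just below 2π and just above 0 land in the same cluster, which plain scalar K-means on the hue axis would separate.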

  13. Compressive spectral testbed imaging system based on thin-film color-patterned filter arrays.

    PubMed

    Rueda, Hoover; Arguello, Henry; Arce, Gonzalo R

    2016-11-20

    Compressive spectral imaging systems can reliably capture multispectral data using far fewer measurements than traditional scanning techniques. In this paper, a thin-film patterned-filter-array-based compressive spectral imager is demonstrated, including its optical design and implementation. The use of a patterned filter array entails a single-step, three-dimensional spatial-spectral coding of the input data cube, which provides greater flexibility in selecting the voxels multiplexed on the sensor. The patterned filter array is designed and fabricated with micrometer-pitch thin films, referred to as pixelated filters, for three different wavelengths. The performance of the system is evaluated in terms of references measured by a commercially available spectrometer and the visual quality of the reconstructed images. Different distributions of the pixelated filters, including random and optimized structures, are explored.

  14. Earth as art 4

    USGS Publications Warehouse

    ,

    2016-03-29

    Landsat 8 is the latest addition to the long-running series of Earth-observing satellites in the Landsat program that began in 1972. The images featured in this fourth installment of the Earth As Art collection were all acquired by Landsat 8. They show our planet’s diverse landscapes with remarkable clarity.Landsat satellites see the Earth as no human can. Not only do they acquire images from the vantage point of space, but their sensors record infrared as well as visible wavelengths of light. The resulting images often reveal “hidden” details of the Earth’s land surface, making them invaluable for scientific research.As with previous Earth As Art exhibits, these Landsat images were selected solely for their aesthetic appeal. Many of the images have been manipulated to enhance color variations or details. They are not intended for scientific interpretation—only for your viewing pleasure. What do you see in these unique glimpses of the Earth’s continents, islands, and coastlines?

  15. The exploration of outer space with cameras: A history of the NASA unmanned spacecraft missions

    NASA Astrophysics Data System (ADS)

    Mirabito, M. M.

    The use of television cameras and other video imaging devices to explore the solar system's planetary bodies with unmanned spacecraft is chronicled. Attention is given to the missions and the imaging devices, beginning with the Ranger 7 moon mission, which featured the first successfully operated electrooptical subsystem, six television cameras with vidicon image sensors. NASA established a network of parabolic, ground-based antennas on the earth (the Deep Space Network) to receive signals from spacecraft travelling farther than 16,000 km into space. The image processing and enhancement techniques used to convert spacecraft data transmissions into black and white and color photographs are described, together with the technological requirements that drove the development of the various systems. Terrestrial applications of the planetary imaging systems are explored, including medical and educational uses. Finally, the implementation and functional characteristics of CCDs are detailed, noting their installation on the Space Telescope.

  16. Gray-world-assumption-based illuminant color estimation using color gamuts with high and low chroma

    NASA Astrophysics Data System (ADS)

    Kawamura, Harumi; Yonemura, Shunichi; Ohya, Jun; Kojima, Akira

    2013-02-01

    A new approach is proposed for estimating illuminant colors from color images under an unknown scene illuminant. The approach combines a gray-world-assumption-based illuminant color estimation method with a method using color gamuts. The former method, which we previously proposed, improves on the original method, which hypothesizes that the average of all object colors in a scene is achromatic. Since the original method estimates scene illuminant colors by averaging all image pixel values, its estimates are incorrect when certain image colors are dominant. Our previous method improves on it by choosing several colors on the basis of an opponent-color property (the average of opponent colors is achromatic) instead of using all colors. However, it cannot estimate illuminant colors when there are only a few image colors or when the image colors are unevenly distributed in local areas of the color space. The approach proposed in this paper combines our previous method with one using high-chroma and low-chroma gamuts, which makes it possible to find colors that satisfy the gray-world assumption. High-chroma gamuts are used to add appropriate colors to the original image, and low-chroma gamuts are used to narrow down illuminant color possibilities. Experimental results on actual images show that even if the image colors are localized in a certain area of the color space, the illuminant colors are accurately estimated, with a smaller average estimation error than that of the conventional method.
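
    The original gray-world estimator that this line of work starts from is simple enough to sketch directly: the per-channel means of the image, normalized, estimate the illuminant cast. This is the baseline method the abstract says fails under dominant colors, not the proposed gamut-based combination.

```python
import numpy as np

def gray_world_illuminant(img):
    """Estimate the illuminant color under the gray-world assumption:
    the average scene color should be achromatic, so the per-channel
    means (relative to their overall mean) estimate the color cast.
    img: array of shape (H, W, 3)."""
    means = img.reshape(-1, 3).mean(0)
    return means / means.mean()

# toy scene with a reddish cast: R channel twice G and B
img = np.array([[[2.0, 1.0, 1.0], [2.0, 1.0, 1.0]]])
print(gray_world_illuminant(img))  # [1.5 0.75 0.75]
```

    Dividing each channel by the corresponding estimate white-balances the image; the failure mode the paper addresses is a scene dominated by one object color, which the plain average mistakes for a cast.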

  17. Ultrathin metal-semiconductor-metal resonator for angle invariant visible band transmission filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Kyu-Tae; Seo, Sungyong; Yong Lee, Jae

    We present transmissive visible-wavelength filters based on strong interference behavior in an ultrathin semiconductor layer between two metal layers. The proposed devices were fabricated on a 2 cm × 2 cm glass substrate, and their transmission characteristics show good agreement with the design. Due to a significantly reduced light-propagation phase change in the ultrathin semiconductor layer and the compensating phase shift of light reflecting from the metal surfaces, the filters show angle-insensitive performance up to ±70°, thus addressing one of the key challenges facing previously reported photonic and plasmonic color filters. The principle described in this paper has potential for diverse applications ranging from color display devices to image sensors.

  18. Influence of imaging resolution on color fidelity in digital archiving.

    PubMed

    Zhang, Pengchang; Toque, Jay Arre; Ide-Ektessabi, Ari

    2015-11-01

    Color fidelity is of paramount importance in digital archiving. In this paper, the relationship between color fidelity and imaging resolution was explored by calculating the color difference of an IT8.7/2 color chart with a CIELAB color difference formula for scanning and simulation images. Microscopic spatial sampling was used in selecting the image pixels for the calculations to highlight the loss of color information. A ratio, called the relative imaging definition (RID), was defined to express the correlation between image resolution and color fidelity. The results show that in order for color differences to remain unrecognizable, the imaging resolution should be at least 10 times higher than the physical dimension of the smallest feature in the object being studied.
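    The CIELAB color difference underlying this analysis can be illustrated with the simplest (CIE76) formula; the abstract does not state which formula variant was used, and the patch values below are hypothetical:

```python
import numpy as np

def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB space.
    Differences below roughly 2.3 are commonly taken as imperceptible."""
    return np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float), axis=-1)

# Hypothetical L*a*b* values for the same patch imaged at two resolutions
de = delta_e76([52.0, 42.5, 20.0], [52.0, 41.0, 18.0])   # sqrt(1.5**2 + 2**2) = 2.5
```

    A difference of 2.5 units would be just above the commonly cited noticeability threshold, which is the kind of boundary the RID criterion is meant to keep color differences below.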

  19. CCD-Based Skinning Injury Recognition on Potato Tubers (Solanum tuberosum L.): A Comparison between Visible and Biospeckle Imaging

    PubMed Central

    Gao, Yingwang; Geng, Jinfeng; Rao, Xiuqin; Ying, Yibin

    2016-01-01

    Skinning injury on potato tubers is a kind of superficial wound generally inflicted by mechanical forces during harvest and postharvest handling operations. Though skinning injury is pervasive and obstructive, its detection is very limited. This study attempted to identify injured skin using two CCD (Charge Coupled Device) sensor-based machine vision technologies, i.e., visible imaging and biospeckle imaging. The identification of skinning injury was realized by exploiting features extracted from varied ROIs (Regions of Interest). The features extracted from visible images were pixel-wise color and texture features, while region-wise BA (Biospeckle Activity) was calculated from biospeckle imaging. In addition, BA calculations using varied numbers of speckle patterns were compared. Finally, the extracted features were fed into LS-SVM (Least Squares Support Vector Machine) and BLR (Binary Logistic Regression) classifiers, respectively. Results showed that color features performed better than texture features in classifying sound skin and injured skin, especially for injured skin stored no less than 1 day, with an average classification accuracy of 90%. Image capturing and processing can be sped up in biospeckle imaging, with the 512 captured frames reduced to 125. Classification results based on the BA feature were acceptable for early skinning injury stored within 1 day, with an accuracy of 88.10%. It is concluded that skinning injury can be recognized by visible and biospeckle imaging at different stages: visible imaging is well suited to recognizing stale skinning injury, while fresh injury can be discriminated by biospeckle imaging. PMID:27763555

  20. CCD-Based Skinning Injury Recognition on Potato Tubers (Solanum tuberosum L.): A Comparison between Visible and Biospeckle Imaging.

    PubMed

    Gao, Yingwang; Geng, Jinfeng; Rao, Xiuqin; Ying, Yibin

    2016-10-18

    Skinning injury on potato tubers is a kind of superficial wound generally inflicted by mechanical forces during harvest and postharvest handling operations. Though skinning injury is pervasive and obstructive, its detection is very limited. This study attempted to identify injured skin using two CCD (Charge Coupled Device) sensor-based machine vision technologies, i.e., visible imaging and biospeckle imaging. The identification of skinning injury was realized by exploiting features extracted from varied ROIs (Regions of Interest). The features extracted from visible images were pixel-wise color and texture features, while region-wise BA (Biospeckle Activity) was calculated from biospeckle imaging. In addition, BA calculations using varied numbers of speckle patterns were compared. Finally, the extracted features were fed into LS-SVM (Least Squares Support Vector Machine) and BLR (Binary Logistic Regression) classifiers, respectively. Results showed that color features performed better than texture features in classifying sound skin and injured skin, especially for injured skin stored no less than 1 day, with an average classification accuracy of 90%. Image capturing and processing can be sped up in biospeckle imaging, with the 512 captured frames reduced to 125. Classification results based on the BA feature were acceptable for early skinning injury stored within 1 day, with an accuracy of 88.10%. It is concluded that skinning injury can be recognized by visible and biospeckle imaging at different stages: visible imaging is well suited to recognizing stale skinning injury, while fresh injury can be discriminated by biospeckle imaging.

  1. Image subregion querying using color correlograms

    DOEpatents

    Huang, Jing; Kumar, Shanmugasundaram Ravi; Mitra, Mandar; Zhu, Wei-Jing

    2002-01-01

    A color correlogram (10) is a representation expressing the spatial correlation of color and distance between pixels in a stored image. The color correlogram (10) may be used to distinguish objects in an image as well as between images in a plurality of images. By intersecting a color correlogram of an image object with correlograms of images to be searched, those images which contain the objects are identified by the intersection correlogram.
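    A simplified autocorrelogram (the diagonal of the full correlogram, restricted to a single distance) can be sketched as follows; the quantization of pixel colors to `n_colors` indices is assumed to have been done beforehand, and the L∞ ring distance is one common choice:

```python
import numpy as np

def autocorrelogram(img, n_colors, k):
    """For each quantized color c, the probability that a pixel at L-inf
    distance k from a pixel of color c also has color c."""
    h, w = img.shape
    same = np.zeros(n_colors)
    total = np.zeros(n_colors)
    # All offsets on the square ring at exactly distance k.
    ring = [(dy, dx) for dy in range(-k, k + 1) for dx in range(-k, k + 1)
            if max(abs(dy), abs(dx)) == k]
    for y in range(h):
        for x in range(w):
            c = img[y, x]
            for dy, dx in ring:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    total[c] += 1
                    if img[ny, nx] == c:
                        same[c] += 1
    out = np.zeros(n_colors)
    nz = total > 0
    out[nz] = same[nz] / total[nz]
    return out

uniform = np.zeros((4, 4), dtype=int)          # image entirely of color 0
cg = autocorrelogram(uniform, n_colors=2, k=1)
```

    For the uniform image every ring neighbor matches, so the color-0 entry is exactly 1; intersecting such vectors computed for an object and for candidate images gives the subregion-query idea described above.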

  2. Mobile micro-colorimeter and micro-spectrometer sensor modules as enablers for the replacement of subjective inspections by objective measurements for optically clear colored liquids in-field

    NASA Astrophysics Data System (ADS)

    Dittrich, Paul-Gerald; Grunert, Fred; Ehehalt, Jörg; Hofmann, Dietrich

    2015-03-01

    The aim of this paper is to show that the colorimetric characterization of optically clear colored liquids can be performed with different measurement methods and their application-specific multichannel spectral sensors. The possible measurement methods are differentiated by the applied types of multichannel spectral sensors and therefore by their spectral resolution, measurement speed, measurement accuracy and measurement cost. The paper describes how different types of multichannel spectral sensors are calibrated with different calibration methods and how the measured values can be used for further colorimetric calculations. The different measurement methods and the different application-specific calibration methods are explained methodically and theoretically. The paper proves that, and shows how, different multichannel spectral sensor modules with different calibration methods can be applied with smartpads to calculate measurement results both in the laboratory and in the field. A practical example is the application of different multichannel spectral sensors to the colorimetric characterization of petroleum oils and fuels using the Saybolt color scale.

  3. Hdr Imaging for Feature Detection on Detailed Architectural Scenes

    NASA Astrophysics Data System (ADS)

    Kontogianni, G.; Stathopoulou, E. K.; Georgopoulos, A.; Doulamis, A.

    2015-02-01

    3D reconstruction relies on accurate detection, extraction, description and matching of image features. This is even truer for complex architectural scenes, which require 3D models of high quality without any loss of detail in geometry or color. Illumination conditions influence the radiometric quality of images, as standard sensors cannot properly depict a wide range of intensities in the same scene. Indeed, overexposed or underexposed pixels cause irreplaceable information loss and degrade the digital representation. Images taken under extreme lighting environments may thus be prohibitive for feature detection/extraction and consequently for matching and 3D reconstruction. High Dynamic Range (HDR) images could be helpful for these operators because they broaden the limits of the illumination range that Standard or Low Dynamic Range (SDR/LDR) images can capture and thereby increase the amount of detail contained in the image. Experimental results of this study support this assumption by examining state-of-the-art feature detectors applied to both standard dynamic range and HDR images.

  4. Rayleigh radiance computations for satellite remote sensing: accounting for the effect of sensor spectral response function.

    PubMed

    Wang, Menghua

    2016-05-30

    To understand and assess the effect of the sensor spectral response function (SRF) on the accuracy of the top of the atmosphere (TOA) Rayleigh-scattering radiance computation, new TOA Rayleigh radiance lookup tables (LUTs) over global oceans and inland waters have been generated. The new Rayleigh LUTs include spectral coverage of 335-2555 nm, all possible solar-sensor geometries, and surface wind speeds of 0-30 m/s. Using the new Rayleigh LUTs, the sensor SRF effect on the accuracy of the TOA Rayleigh radiance computation has been evaluated for spectral bands of the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (SNPP) satellite and the Joint Polar Satellite System (JPSS)-1, showing some important uncertainties for VIIRS-SNPP particularly for large solar- and/or sensor-zenith angles as well as for large Rayleigh optical thicknesses (i.e., short wavelengths) and bands with broad spectral bandwidths. To accurately account for the sensor SRF effect, a new correction algorithm has been developed for VIIRS spectral bands, which improves the TOA Rayleigh radiance accuracy to ~0.01% even for the large solar-zenith angles of 70°-80°, compared with the error of ~0.7% without applying the correction for the VIIRS-SNPP 410 nm band. The same methodology that accounts for the sensor SRF effect on the Rayleigh radiance computation can be used for other satellite sensors. In addition, with the new Rayleigh LUTs, the effect of surface atmospheric pressure variation on the TOA Rayleigh radiance computation can be calculated precisely, and no specific atmospheric pressure correction algorithm is needed. 
There are some other important applications and advantages to using the new Rayleigh LUTs for satellite remote sensing, including an efficient and accurate TOA Rayleigh radiance computation for hyperspectral satellite remote sensing, detector-based TOA Rayleigh radiance computation, Rayleigh radiance calculations for high altitude lakes, and the same Rayleigh LUTs are applicable for all satellite sensors over the global ocean and inland waters. The new Rayleigh LUTs have been implemented in the VIIRS-SNPP ocean color data processing for routine production of global ocean color and inland water products.

  5. Multiple Auto-Adapting Color Balancing for Large Number of Images

    NASA Astrophysics Data System (ADS)

    Zhou, X.

    2015-04-01

    This paper presents a powerful technique for color balancing between images. It works not only for small numbers of images but also for arbitrarily large collections. Multiple adaptive methods are used. To obtain a color-seamless mosaic dataset, local color is adjusted adaptively towards the target color. Local statistics of the source images are computed based on a so-called adaptive dodging window. The adaptive target colors are computed statistically according to multiple target models. A gamma function is derived from the adaptive target and the adaptive source local statistics, and is applied to the source images to obtain the color-balanced output images. Five target color surface models are proposed: color point (a single color), color grid, and first-, second- and third-order 2D polynomials. Least-squares fitting is used to obtain the polynomial target color surfaces. Target color surfaces are computed automatically based on all source images or on an external target image. Some special objects such as water and snow are filtered out by a percentage cut or a given mask. The performance is fast enough to support on-the-fly color balancing for large numbers of images (potentially hundreds of thousands). The detailed algorithm and formulae are described, and rich examples including big mosaic datasets (e.g., one containing 36,006 images) are given, with excellent results and performance. The results show that this technique can be successfully used on various imagery to obtain color-seamless mosaics; it has been successfully used in Esri ArcGIS.
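    The core gamma derivation can be sketched as follows, assuming intensities normalized to (0, 1]; a single global window stands in for the paper's adaptive dodging windows, and the function names are illustrative:

```python
import numpy as np

def gamma_for(source_mean, target_mean):
    """Solve target = source**gamma per channel, for means in (0, 1)."""
    return np.log(target_mean) / np.log(source_mean)

def color_balance(img, target_mean):
    """Apply a per-channel gamma that maps the image's mean color onto the
    target color (one global window; the paper uses local adaptive stats)."""
    img = np.clip(np.asarray(img, dtype=float), 1e-6, 1.0)
    g = gamma_for(img.reshape(-1, 3).mean(axis=0), target_mean)
    return img ** g

img = np.ones((8, 8, 3)) * np.array([0.25, 0.5, 0.64])   # flat test image
out = color_balance(img, np.array([0.5, 0.5, 0.5]))      # mean pulled to gray
```

    By construction the per-channel gamma maps the source mean exactly onto the target color while leaving black and white fixed, which is why a gamma curve is a natural choice for dodging-style adjustment.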

  6. System-level analysis and design for RGB-NIR CMOS camera

    NASA Astrophysics Data System (ADS)

    Geelen, Bert; Spooren, Nick; Tack, Klaas; Lambrechts, Andy; Jayapala, Murali

    2017-02-01

    This paper presents a system-level analysis of a sensor capable of simultaneously acquiring both standard absorption-based RGB color channels (400-700 nm, 75 nm FWHM) and an additional NIR channel (central wavelength: 808 nm, FWHM: 30 nm, collimated light). Parallel acquisition of RGB and NIR information on the same CMOS image sensor is enabled by monolithic pixel-level integration of both a NIR-pass thin-film filter and NIR-blocking filters for the RGB channels. This overcomes the need for a standard camera-level NIR-blocking filter to remove the NIR leakage present in standard RGB absorption filters from 700-1000 nm; such a camera-level filter would inhibit the acquisition of the NIR channel on the same sensor. Thin-film filters do not operate in isolation: their performance is influenced by the system context in which they operate. The spectral distribution of light arriving at the photodiode is shaped, among other factors, by the illumination spectral profile, the transmission characteristics of the optical components and the sensor quantum efficiency. For example, knowledge of a low quantum efficiency (QE) of the CMOS image sensor above 800 nm may reduce the filter's blocking requirements and simplify the filter structure. Similarly, knowledge of the incoming light angularity, as set by the objective lens' F/# and exit pupil location, may be taken into account during the thin film's optimization. This paper demonstrates how knowledge of the application context can facilitate filter design and relax design trade-offs, and presents experimental results.

  7. A commercialized, continuous flow fiber optic sensor for trichloroethylene and haloforms

    NASA Technical Reports Server (NTRS)

    Wells, James C.; Johnson, Mark D.

    1994-01-01

    Purus, Inc. has commercialized a fiber optic chemical sensor using technology developed by Lawrence Livermore National Laboratory and licensed from the University of California. The basis for the sensor is the development of color within a reagent when exposed to an analyte. The sensor consists of an optrode, a reagent delivery and recovery system, a fiber optic transmitter-receiver, a controller and a display. Reagent is pumped through the optrode; analyte diffuses across a gas-permeable membrane and reacts with the reagent to form a colored product, which is detected by measuring the absorbance of light from a 568 nm diode. Reagents are currently available for TCE and trihalomethanes. The initial reagent chemistry is based on the Fujiwara alkaline pyridine reaction. The optrode contacts only gas streams, but the volatility of the current analytes also allows measurements of aqueous streams without interference from non-volatile aqueous species. The sensitivity of the sensor has been demonstrated down to 5 ppb in aqueous solutions and 0.1 ppmv in flowing gas streams.

  8. Color standardization in whole slide imaging using a color calibration slide

    PubMed Central

    Bautista, Pinky A.; Hashimoto, Noriaki; Yagi, Yukako

    2014-01-01

    Background: Color consistency in histology images is still an issue in digital pathology: different imaging systems reproduce the colors of a histological slide differently. Materials and Methods: Color correction was implemented using the color information of the nine color patches of a color calibration slide. The inherent spectral colors of these patches, along with their scanned colors, were used to derive a color correction matrix whose coefficients were used to convert the pixels' colors to their target colors. Results: There was a significant reduction in the CIELAB color difference between images of the same H & E histological slide produced by two different whole slide scanners, by 3.42 units (P < 0.001 at the 95% confidence level). Conclusion: Color variations in histological images brought about by whole slide scanning can be effectively normalized with the use of the color calibration slide. PMID:24672739
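    A least-squares derivation of such a correction matrix can be sketched as follows; the patch values and the simulated scanner distortion `M_true` are made up for illustration, and the paper's actual derivation from inherent spectral colors may differ:

```python
import numpy as np

# Nine patch colors: "target" plays the role of the patches' inherent colors,
# and a made-up 3x3 distortion M_true simulates a scanner's color shift.
rng = np.random.default_rng(1)
target = rng.uniform(0.1, 0.9, size=(9, 3))
M_true = np.array([[1.10, 0.05, 0.00],
                   [0.02, 0.90, 0.03],
                   [0.00, 0.04, 1.05]])
scanned = target @ M_true.T

# Least-squares correction matrix C such that scanned @ C ≈ target
# (colors as row vectors); C is then applied to every image pixel.
C, *_ = np.linalg.lstsq(scanned, target, rcond=None)
corrected = scanned @ C
```

    With nine patches and only nine matrix coefficients the system is overdetermined, so the fit also averages out patch-level measurement noise in a real calibration.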

  9. Automatic color preference correction for color reproduction

    NASA Astrophysics Data System (ADS)

    Tsukada, Masato; Funayama, Chisato; Tajima, Johji

    2000-12-01

    The reproduction of natural objects in color images has attracted a great deal of attention, and reproducing more pleasing colors of natural objects is one way to improve image quality. We developed an automatic color correction method to maintain preferred color reproduction for three significant categories: facial skin color, green grass and blue sky. In this method, a representative color in an object area to be corrected is automatically extracted from an input image, and a set of color correction parameters is selected depending on the representative color. The improvement in image quality for reproductions of natural images was more than 93 percent in subjective experiments. These results show the usefulness of our automatic color correction method for the reproduction of preferred colors.

  10. Non-contact cardiac pulse rate estimation based on web-camera

    NASA Astrophysics Data System (ADS)

    Wang, Yingzhi; Han, Tailin

    2015-12-01

    In this paper, we introduce a new methodology for non-contact cardiac pulse rate estimation based on imaging photoplethysmography (iPPG) and blind source separation. This novel approach can be applied to color video recordings of the human face and is based on automatic face tracking along with blind source separation of the three RGB color channels. First, the data obtained from the color video are pre-processed, for example by normalization and sphering. The cardiac pulse rate is then estimated by spectral analysis of the source signals recovered with Independent Component Analysis (ICA) using the JADE algorithm. With Bland-Altman and correlation analysis, we compared the cardiac pulse rate extracted from videos recorded by a basic webcam to a commercial pulse oximetry sensor and achieved high accuracy and correlation. The root mean square error of the estimates is 2.06 bpm, which indicates that the algorithm can realize non-contact measurement of the cardiac pulse rate.
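    The final spectral-analysis step can be sketched as follows; for brevity a synthetic single-channel trace stands in for an ICA-recovered source signal, and the cardiac band limits are illustrative:

```python
import numpy as np

def pulse_rate_bpm(trace, fs):
    """Pick the dominant frequency of a detrended signal within a plausible
    cardiac band (here 0.75-4 Hz, i.e. 45-240 bpm) and return it in bpm.
    In the paper this is applied to an ICA component, not a raw trace."""
    trace = trace - trace.mean()
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fs)
    power = np.abs(np.fft.rfft(trace)) ** 2
    band = (freqs >= 0.75) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(power[band])]

fs = 30.0                      # typical webcam frame rate
t = np.arange(0, 20, 1 / fs)   # 20 s of video
trace = 0.02 * np.sin(2 * np.pi * 1.2 * t) + 0.5   # 1.2 Hz pulsation ≈ 72 bpm
bpm = pulse_rate_bpm(trace, fs)
```

    Restricting the search to the cardiac band is what keeps low-frequency motion and lighting drift from being picked as the pulse peak.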

  11. Applications of Sentinel-2 data for agriculture and forest monitoring using the absolute difference (ZABUD) index derived from the AgroEye software (ESA)

    NASA Astrophysics Data System (ADS)

    de Kok, R.; WeŻyk, P.; PapieŻ, M.; Migo, L.

    2017-10-01

    To convince new users of the advantages of the Sentinel-2 sensor, a simplification of classic remote sensing tools makes it possible to create a platform of communication among domain specialists in agricultural analysis, visual image interpreters and remote sensing programmers. An index value, known in the remote sensing user domain as "Zabud", was selected to represent, in color, the essentials of a time series analysis. The color index, used in a color atlas, offers a working platform for agricultural field control. This creates a database of test and training areas that enables rapid anomaly detection in the agricultural domain. The use cases and simplifications now function as an introduction to Sentinel-2-based remote sensing in an area that previously relied on VHR imagery and aerial data, mainly for visual interpretation. Extending the database with detected anomalies allows developers of open source software to design solutions for further agricultural control with remote sensing.

  12. Constituting fully integrated visual analysis system for Cu(II) on TiO₂/cellulose paper.

    PubMed

    Li, Shun-Xing; Lin, Xiaofeng; Zheng, Feng-Ying; Liang, Wenjie; Zhong, Yanxue; Cai, Jiabai

    2014-07-15

    As a cheap and abundant porous material, cellulose filter paper was used to immobilize nano-TiO₂ and is denoted as TiO₂/cellulose paper (TCP). With a high adsorption capacity for Cu(II) (more than 1.65 mg), TCP was used as an adsorbent, photocatalyst, and colorimetric sensor at the same time. Under the optimum adsorption conditions, i.e., pH 6.5 and 25 °C, the adsorption ratio of Cu(II) was higher than 96.1%. Humic substances from the matrix could be enriched onto TCP, but the interference of their colors with colorimetric detection could be eliminated by photodegradation. In the presence of hydroxylamine, neocuproine, as a selective indicator, was added onto TCP, and a visual color change from white to orange was generated. The concentration of Cu(II) was quantified from the color intensity of the images using image processing software. This fully integrated visual analysis system was successfully applied to the detection of Cu(II) in 10.0 L of drinking water and seawater with a preconcentration factor of 10⁴. The log-linear calibration curve for Cu(II) was in the range of 0.5-50.0 μg L⁻¹ with a determination coefficient (R²) of 0.985, and its detection limit was 0.073 μg L⁻¹.

  13. Monitoring of mountain glaciers of some regions of gissar-alai mountain system using aster space images

    NASA Astrophysics Data System (ADS)

    Batirov, R.; Yakovlev, A.

    In 1999 the TERRA orbital platform was launched. It is intended for space monitoring of various natural objects on the surface of the Earth, and in particular of glaciers. The Japanese ASTER sensor was installed onboard the platform, and its characteristics provide a unique opportunity for monitoring glaciers from space. In the given work, glaciers of several river basins of the Alai, Turkestan and Zeravshan ranges of the Gissar-Alai mountain system, which in turn is part of the Pamir-Alai mountain system, were catalogued. In particular, the cataloguing covered the glaciers of the Shahimardan, Sokh and Isfara river basins, as well as the basin of the Zeravshan glacier. Thematic processing was applied to images acquired in the second half of August of 2001-2002. The images were granted in the framework of the ASTER Research Opportunity (ARO) scheme of the Japanese space agency ERSDAC ("Monitoring of mountain glaciers and glacial lakes using ASTER space images", contract AP-0290). Previous data on the glaciation of this region were obtained for 1957 and 1980, using aerial photography (1957) and analogue space images (1980). The ASTER sensor surveys the Earth's surface in 14 bands of the electromagnetic spectrum, from the visible to the thermal infrared, of which the following three bands are optimal for extracting glaciological information: Band 1 (visible green), 0.52-0.60 μm; Band 2 (visible red), 0.63-0.69 μm; and Band 3N (near infrared), 0.78-0.86 μm. The spatial resolution of these bands is 15 m and the radiometric resolution is 8 bits, which provides acceptable accuracy in delineating glaciers. In composing a pseudo-color computer image, red was assigned to Band 1, green to Band 2 and blue to Band 3N.
This selection of bands gives the best combination of colors for recognizing the glaciers. According to data for 2001, the aggregate area of the glaciers of the Gissar-Alai study region amounted to 482.5 km². In 1980 and 1957 the aggregate area of the glaciers of these basins was 511.4 and 572.0 km², respectively. Despite the global climate warming that has occurred from the middle of the 20th century to the present, the mean annual rate of glacier degradation for 1980-2001 was approximately two times lower than for 1957-1980: 0.27% per year versus 0.46% per year. The prevailing climatic situation in the second half of the 20th century was extremely unfavorable for the glaciation of the Gissar-Alai and of the Pamir-Alai as a whole. Over the last 45 years the glaciers of the studied river basins lost about 16% of their initial area.

  14. Achieving Global Ocean Color Climate Data Records

    NASA Technical Reports Server (NTRS)

    Franz, Bryan

    2010-01-01

    Ocean color, or the spectral distribution of visible light upwelling from beneath the ocean surface, carries information on the composition and concentration of biological constituents within the water column. The CZCS mission in 1978 demonstrated that quantitative ocean color measurements could be made from spaceborne sensors, given sufficient corrections for atmospheric effects and a rigorous calibration and validation program. The launch of SeaWiFS in 1997 marked the beginning of NASA's ongoing effort to develop a continuous ocean color data record with sufficient coverage and fidelity for global change research. Achievements in establishing and maintaining the consistency of the time series through multiple missions and varying instrument designs will be highlighted in this talk, including measurements from NASA's MODIS instruments currently flying on the Terra and Aqua platforms, as well as the MERIS sensor flown by ESA and the OCM-2 sensor recently launched by ISRO.

  15. Stereo matching image processing by synthesized color and the characteristic area by the synthesized color

    NASA Astrophysics Data System (ADS)

    Akiyama, Akira; Mutoh, Eiichiro; Kumagai, Hideo

    2014-09-01

    We have developed stereo matching image processing based on synthesized color and on the characteristic areas sharing that synthesized color, for ranging objects and for image recognition. The images from a pair of stereo imagers may disagree with each other due to size changes, displacement, appearance changes and deformation of characteristic areas. To make stereo matching distinct, we construct the synthesized color and the corresponding areas of identical synthesized color in three steps. The first step creates a binary edge image by differentiating the focused image from each imager, verifying that the differentiated image has a normal frequency distribution in order to find the binarization threshold; we used the Daubechies wavelet transform for the differentiation in this study. The second step derives the synthesized color by averaging color brightness between binary edge points, alternating between the horizontal and vertical directions; the averaging is repeated until the fluctuation of the averaged color becomes negligible with respect to the 256 brightness levels. The third step extracts areas of identical synthesized color by collecting pixels of the same synthesized color and grouping them by 4-connectivity. The matching areas for stereo matching are determined from these synthesized color areas, and the matching point is the center of gravity of each area. The parallax between a pair of images is then easily derived from the centers of gravity of the matched areas. A stereo matching experiment was performed on a toy soccer ball, showing that stereo matching by the synthesized color technique is simple and effective.
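    Once matching centroids are found, converting their parallax to range follows the standard pinhole stereo relation (a generic sketch with illustrative camera parameters, not values from the paper):

```python
def depth_from_disparity(cx_left, cx_right, focal_px, baseline_m):
    """Pinhole stereo relation Z = f * B / d: the disparity d is the horizontal
    offset between the matched centroids (the centers of gravity of the
    matched synthesized-color areas) in the left and right images."""
    d = cx_left - cx_right
    return focal_px * baseline_m / d

# Illustrative numbers: centroids 32 px apart, 800 px focal length, 12 cm baseline
z = depth_from_disparity(412.0, 380.0, focal_px=800.0, baseline_m=0.12)   # ≈ 3 m
```

    Using a single centroid per matched area is what makes the parallax computation in the abstract "easy": one subtraction per region instead of dense pixel matching.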

  16. A new visual feedback-based magnetorheological haptic master for robot-assisted minimally invasive surgery

    NASA Astrophysics Data System (ADS)

    Choi, Seung-Hyun; Kim, Soomin; Kim, Pyunghwa; Park, Jinhyuk; Choi, Seung-Bok

    2015-06-01

    In this study, we developed a novel four-degrees-of-freedom haptic master using controllable magnetorheological (MR) fluid. We also integrated the haptic master with a vision device with image processing for robot-assisted minimally invasive surgery (RMIS). The proposed master can be used in RMIS as a haptic interface to provide the surgeon with a sense of touch by using both kinetic and kinesthetic information. The slave robot, which is manipulated with a proportional-integral-derivative controller, uses a force sensor to obtain the desired forces from tissue contact, and these desired repulsive forces are then embodied through the MR haptic master. To verify the effectiveness of the haptic master, the desired force and actual force are compared in the time domain. In addition, a visual feedback system is implemented in the RMIS experiment to distinguish between the tumor and organ more clearly and provide better visibility to the operator. The hue-saturation-value color space is adopted for the image processing since it is often more intuitive than other color spaces. The effects of the image processing and haptic feedback on surgery performance are then evaluated. In this work, tumor-cutting experiments are conducted under four different operating conditions: haptic feedback on, haptic feedback off, image processing on, and image processing off. The experimental results show that the performance index, which is a function of pixels, differs across the four operating conditions.

  17. Strain-sensitive upconversion for imaging biological forces (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Lay, Alice; Wisser, Michael; Lin, Yu; Narayan, Tarun; Krieg, Michael; Atre, Ashwin; Goodman, Miriam; Dionne, Jennifer A.

    2016-09-01

    Nearly all diseases can be traced back to abnormal mechanotransduction, but few sensors can reliably measure biologically relevant forces in vivo. Here, we investigate sub-25 nm lanthanide-doped upconverting nanoparticles as novel optical force probes, which provide several biocompatible features: sharp emission peaks with near-infrared illumination, a high signal-to-noise ratio, and photostability. To increase force sensitivity, we include d-metal doping in the nanoparticles; the d-metal siphons energy from the lanthanide ions with an efficiency that varies with pressure. We synthesize cubic-phase NaYF4:Er3+,Yb3+ nanoparticles doped with 0-5% Mn2+ and compress them in a hydrostatic environment using a diamond anvil cell. When illuminated at 980 nm, the nanoparticles show sharp emission peaks centered at wavelengths of 522 nm, 545 nm, and 660 nm. In 20 nN increments, up to 700 nN, the ratio of the red-to-green peaks in 0% Mn-doped nanoparticles increases by nearly 30%, resulting in a perceived color change from orange to red. In contrast, the 1% Mn-doped samples exhibit little color change but a large 40% decrease in upconversion intensity. In both cases, the red-to-green ratio varies linearly with strain and the optical properties are recoverable upon release. We further use atomic force microscopy to characterize optical responses at lower, pico-Newton to nano-Newton forces. To demonstrate in vivo imaging capabilities, we incubate C. elegans with nanoparticles dispersed in buffer solution (5 mg/mL concentration) and image forces involved in digestion using confocal microscopy. Our nanoparticles provide a platform for the first non-genetically-encoded in vivo force sensors, and we describe routes to increase their sensitivity to the single-pN range.

  18. Convolutional neural network-based classification system design with compressed wireless sensor network images.

    PubMed

    Ahn, Jungmo; Park, JaeYeon; Park, Donghwan; Paek, Jeongyeup; Ko, JeongGil

    2018-01-01

    With the introduction of various advanced deep learning algorithms, image classification systems have transitioned from traditional machine learning algorithms (e.g., SVMs) to Convolutional Neural Networks (CNNs) built with deep learning software tools. A prerequisite for applying CNNs to real-world applications is a system that collects meaningful and useful data. For such purposes, Wireless Image Sensor Networks (WISNs), which monitor natural-environment phenomena using tiny, low-power cameras on resource-limited embedded devices, are an effective means of data collection. However, with limited battery resources, sending high-resolution raw images to the backend server is a burdensome task that directly impacts network lifetime. To address this problem, we propose an energy-efficient pre- and post-processing mechanism using image resizing and color quantization that can significantly reduce the amount of data transferred while maintaining the classification accuracy of the CNN at the backend server. We show that, if well designed, an image in its highly compressed form can be classified accurately by a CNN model trained in advance on adequately compressed data. Our evaluation on a real image dataset shows that an embedded device can reduce the amount of transmitted data by ∼71% while maintaining a classification accuracy of ∼98%. Under the same conditions, this process naturally reduces energy consumption by ∼71% compared to a WISN that sends the original uncompressed images.
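
    The pre-processing this abstract describes combines image resizing with color quantization before transmission. A minimal pure-Python sketch of that general idea, assuming nearest-neighbor downsampling and uniform per-channel bit reduction; the specific factor and bit depth below are illustrative choices, and the paper's ∼71% figure comes from its own settings:

    ```python
    def downsample(img, factor):
        """Nearest-neighbor downsample: keep every `factor`-th row and pixel."""
        return [row[::factor] for row in img[::factor]]

    def quantize(img, bits):
        """Uniform color quantization: keep the top `bits` bits of each 8-bit channel."""
        shift = 8 - bits
        return [[tuple((v >> shift) << shift for v in px) for px in row] for row in img]

    def transmitted_bytes(height, width, bits_per_channel):
        """Raw payload size in bytes for an RGB image at the given channel depth."""
        return height * width * 3 * bits_per_channel / 8

    # A 4x4 RGB image downsampled by 2 and quantized to 4 bits per channel.
    img = [[(255, 128, 7)] * 4 for _ in range(4)]
    small = quantize(downsample(img, 2), 4)
    reduction = 1 - transmitted_bytes(2, 2, 4) / transmitted_bytes(4, 4, 8)  # 0.875
    ```

    The key design point is that the CNN at the backend is trained on images compressed the same way, so the accuracy loss from this aggressive reduction stays small.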

  19. 1920x1080 pixel color camera with progressive scan at 50 to 60 frames per second

    NASA Astrophysics Data System (ADS)

    Glenn, William E.; Marcinka, John W.

    1998-09-01

    For over a decade, the broadcast, film, and computer industries have shared the long-range objective of originating high-definition images with progressive scan, which yields better vertical resolution and far fewer artifacts than interlaced scan. Computers almost universally use progressive scan. The broadcast industry has resisted switching from interlace to progressive because no cameras were available in that format with the 1920 × 1080 resolution that has gained international acceptance for high-definition program production. The camera described in this paper produces an output in that format derived from two 1920 × 1080 CCD sensors produced by Eastman Kodak.

  20. Bending strength measurements at different materials used for IR-cut filters in mobile camera devices

    NASA Astrophysics Data System (ADS)

    Dietrich, Volker; Hartmann, Peter; Kerz, Franca

    2015-03-01

    Digital cameras are present everywhere in our daily life; science, business, and private life can hardly be imagined without digital images. The quality of an image is often rated by its color rendering. To obtain correct color recognition, a near-infrared cut (IRC) filter must be used to adapt the sensitivity of the imaging sensor. Increasing requirements for color balance and larger angles of incidence (AOI) have driven the adoption of new materials, such as the BG6X series, which substitutes for interference-coated filters on D263 thin glass. Although the optical properties are the major design criteria, the devices must withstand numerous environmental conditions during manufacturing and use, such as temperature change, humidity, mechanical shock, and mechanical stress. The new materials behave differently in all these respects and are usually more sensitive to these conditions, to a greater or lesser extent; mechanical strength, in particular, differs markedly. Reliable strength data are of major interest for mobile phone camera applications. Because the bending strength of a glass component depends not only on the material itself but mainly on the surface treatment and test conditions, a single strength value can be misleading unless the test conditions and samples are described precisely. Schott therefore investigated the bending strength of various IRC filter materials, using different test methods to obtain statistically relevant data.
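
    Bending strength of brittle glass is conventionally summarized with two-parameter Weibull statistics, which is one reason a single strength number is misleading without the test conditions. A minimal sketch of such a fit via linear regression on median-rank probabilities; the estimator choice and function name are illustrative, not taken from the Schott study:

    ```python
    import math

    def weibull_fit(strengths):
        """Fit a two-parameter Weibull distribution to fracture-strength data.

        Linearizes F = 1 - exp(-(s/s0)**m) as
            ln(-ln(1 - F)) = m*ln(s) - m*ln(s0)
        and solves for the Weibull modulus m and characteristic strength s0
        by least squares, using median-rank probability estimates F_i = (i-0.5)/n.
        """
        s = sorted(strengths)
        n = len(s)
        xs = [math.log(v) for v in s]
        ys = [math.log(-math.log(1 - (i - 0.5) / n)) for i in range(1, n + 1)]
        mx, my = sum(xs) / n, sum(ys) / n
        m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
        s0 = math.exp(mx - my / m)  # characteristic strength (63.2% failure)
        return m, s0
    ```

    A low modulus m means widely scattered strengths, which is why surface treatment and sample preparation must be reported alongside any quoted strength value.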
