Single-snapshot 2D color measurement by plenoptic imaging system
NASA Astrophysics Data System (ADS)
Masuda, Kensuke; Yamanaka, Yuji; Maruyama, Go; Nagai, Sho; Hirai, Hideaki; Meng, Lingfei; Tosic, Ivana
2014-03-01
Plenoptic cameras enable capture of directional light ray information, thus allowing applications such as digital refocusing, depth estimation, or multiband imaging. One of the most common plenoptic camera architectures contains a microlens array at the conventional image plane and a sensor at the back focal plane of the microlens array. We leverage the multiband imaging (MBI) function of this camera and develop a single-snapshot, single-sensor, high-color-fidelity camera. Our camera is based on a plenoptic system with XYZ filters inserted in the pupil plane of the main lens. To achieve high color measurement precision with this system, we perform an end-to-end optimization of the system model that includes light source information, object information, optical system information, plenoptic image processing, and color estimation processing. The optimized system characteristics are exploited to build an XYZ plenoptic colorimetric camera prototype that achieves high color measurement precision. We describe an application of our colorimetric camera to color shading evaluation of a display and show that it achieves a color accuracy of ΔE < 0.01.
NASA Astrophysics Data System (ADS)
Yu, Liping; Pan, Bing
2017-08-01
A full-frame, high-speed 3D shape and deformation measurement technique using stereo-digital image correlation (stereo-DIC) and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo-stereo imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve full-field 3D shape and deformation on the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements using a single high-speed camera without sacrificing its spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-sided drum, demonstrate the effectiveness and accuracy of the proposed technique.
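As a rough illustration of the channel-separation step, the sketch below applies the inverse of a measured 3×3 crosstalk matrix to a demosaicked RGB frame; the matrix values and function names are hypothetical assumptions, and the paper's own correction procedure may differ in detail.

```python
import numpy as np

def separate_channels(rgb, crosstalk):
    """Split a demosaicked RGB frame into the two stereo views.

    rgb:       H x W x 3 float array (red view + blue view superimposed)
    crosstalk: 3 x 3 matrix C such that measured = C @ true; its inverse
               removes the leakage of red light into the blue channel
               (and vice versa).
    """
    flat = rgb.reshape(-1, 3).T                      # 3 x N
    corrected = np.linalg.inv(crosstalk) @ flat      # undo channel mixing
    corrected = corrected.T.reshape(rgb.shape)
    red_view = corrected[..., 0]    # image from the red-lit optical path
    blue_view = corrected[..., 2]   # image from the blue-lit optical path
    return red_view, blue_view

# Hypothetical crosstalk matrix, e.g. measured from flat-field shots of
# each illumination color in isolation:
C = np.array([[1.00, 0.05, 0.02],
              [0.10, 1.00, 0.12],
              [0.03, 0.08, 1.00]])
```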
Red ball ranging optimization based on dual camera ranging method
NASA Astrophysics Data System (ADS)
Kuang, Lei; Sun, Weijia; Liu, Jiaming; Tang, Matthew Wai-Chung
2018-05-01
In this paper, the process by which the NAO robot positions and moves to a target red ball through its camera system is analyzed and improved using the dual camera ranging method. The single camera ranging method adopted by the NAO robot was first studied and experimented with. Since the existing error of the current NAO robot is not a single variable, the experiments were divided into two parts, forward ranging and backward ranging, to obtain more accurate single camera ranging data. Moreover, two USB cameras were used in our experiments, which applied the Hough circle method to identify the ball and the HSV color space model to identify red color. Our results showed that the dual camera ranging method reduced the variance of error in ball tracking from 0.68 to 0.20.
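A minimal sketch of the two building blocks named in the abstract: red-ball detection via HSV thresholding plus a Hough circle search, and range from a rectified two-camera pair. All thresholds and parameters are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

def find_red_ball(bgr):
    """Return (x, y, r) of the most prominent red circle, or None."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so two hue ranges are combined.
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    mask = cv2.medianBlur(mask, 5)
    circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=50, param1=100, param2=20,
                               minRadius=5, maxRadius=200)
    return None if circles is None else circles[0, 0]  # (x, y, r)

def range_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Pinhole stereo range Z = f * B / disparity (rectified cameras)."""
    return focal_px * baseline_m / max(x_left - x_right, 1e-6)
```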
Adjustment of multi-CCD-chip-color-camera heads
NASA Astrophysics Data System (ADS)
Guyenot, Volker; Tittelbach, Guenther; Palme, Martin
1999-09-01
The principle of beam-splitter multi-chip cameras consists of splitting an image into multiple images of different spectral ranges and distributing these onto separate black-and-white CCD sensors. The resulting electrical signals from the chips are recombined to produce a high-quality color picture on the monitor. Because this principle guarantees higher resolution and sensitivity in comparison to conventional single-chip camera heads, the greater effort is acceptable. Furthermore, multi-chip cameras obtain the complete spectral information for each individual object point, while single-chip systems must rely on interpolation. In a joint project, Fraunhofer IOF, STRACON GmbH and, in the future, COBRA electronic GmbH are developing methods for designing the optics and dichroic mirror systems of such prism color beam splitter devices. Additionally, techniques and equipment for the alignment and assembly of color-beam-splitter multi-CCD devices on the basis of gluing with UV-curable adhesives have been developed.
3D Rainbow Particle Tracking Velocimetry
NASA Astrophysics Data System (ADS)
Aguirre-Pablo, Andres A.; Xiong, Jinhui; Idoughi, Ramzi; Aljedaani, Abdulrahman B.; Dun, Xiong; Fu, Qiang; Thoroddsen, Sigurdur T.; Heidrich, Wolfgang
2017-11-01
A single color camera is used to reconstruct a 3D-3C velocity flow field. The camera records the 2D (X,Y) position and colored scattered light intensity (Z) from white polyethylene tracer particles in a flow. The main advantage of using a color camera is the capability of combining different intensity levels for each color channel to obtain more depth levels. The illumination system consists of an LCD projector placed perpendicular to the camera. Colored intensity gradients are projected onto the particles to encode the depth position (Z) of each particle, benefiting from the possibility of varying the color profiles and projected frequencies up to 60 Hz. Chromatic aberrations and distortions are estimated and corrected using a 3D laser-engraved calibration target. The camera-projector system characterization is presented considering the size and depth position of the particles. The use of these components dramatically reduces the cost and complexity of traditional 3D-PTV systems.
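The depth-encoding idea can be sketched as a calibration lookup: particle colors recorded at known depths (here, via the laser-engraved target) serve as a reference against which new particles are matched. The nearest-neighbor matching below is an assumption for illustration; the authors' actual decoding may differ.

```python
import numpy as np

def depth_from_color(particle_rgb, calib_depths, calib_colors):
    """Assign a depth to each particle by nearest match in color space.

    particle_rgb: N x 3 measured (aberration-corrected) particle colors.
    calib_colors: M x 3 colors recorded at known depths with the target.
    calib_depths: M matching depth values (from the engraved target).
    """
    calib_depths = np.asarray(calib_depths)
    # Normalize out total brightness so only chromaticity is compared.
    p = particle_rgb / particle_rgb.sum(axis=1, keepdims=True)
    c = calib_colors / calib_colors.sum(axis=1, keepdims=True)
    # Nearest-neighbor lookup in chromaticity space.
    d2 = ((p[:, None, :] - c[None, :, :]) ** 2).sum(axis=2)
    return calib_depths[np.argmin(d2, axis=1)]
```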
Khanduja, Sumeet; Sampangi, Raju; Hemlatha, B C; Singh, Satvir; Lall, Ashish
2018-01-01
Purpose: The purpose of this study is to describe the use of a commercial digital single-lens reflex (DSLR) camera for vitreoretinal surgery recording and compare it to a standard 3-chip charge-coupled device (CCD) camera. Methods: Simultaneous recording was done using a Sony A7s2 camera and a Sony high-definition 3-chip camera attached to each side of the microscope. The videos recorded by both camera systems were edited and sequences of similar time frames were selected. The three sequences selected for evaluation were (a) anterior segment surgery, (b) surgery under a direct viewing system, and (c) surgery under an indirect wide-angle viewing system. The videos of each sequence were evaluated and rated on a scale of 0-10 for color, contrast, and overall quality. Results: Most results were rated either 8/10 or 9/10 for both cameras. A noninferiority analysis comparing mean scores of the DSLR camera versus the CCD camera was performed and P values were obtained. The mean scores of the two cameras were comparable on all parameters assessed in the different videos except for color and contrast in the posterior pole view and color in the wide-angle view, which were rated significantly higher (better) for the DSLR camera. Conclusion: Commercial DSLRs are an affordable low-cost alternative for vitreoretinal surgery recording and may be used for documentation and teaching. PMID:29283133
NASA Astrophysics Data System (ADS)
Dubey, Vishesh; Singh, Veena; Ahmad, Azeem; Singh, Gyanendra; Mehta, Dalip Singh
2016-03-01
We report white light phase shifting interferometry in conjunction with color fringe analysis for the detection of contaminants in water such as Escherichia coli (E. coli), Campylobacter coli, and Bacillus cereus. The experimental setup is based on a common-path interferometer using a Mirau interferometric objective lens. White light interferograms are recorded using a 3-chip color CCD camera based on prism technology. The 3-chip color camera has less color crosstalk and better spatial resolution than a single-chip CCD camera. A piezo-electric transducer (PZT) phase shifter is fixed to the Mirau objective, and both are attached to a conventional microscope. Five phase-shifted white light interferograms are recorded by the 3-chip color CCD camera, and each phase-shifted interferogram is decomposed into its red, green, and blue constituent colors, thus yielding three sets of five phase-shifted interferograms for three different colors from a single set of white light interferograms. This makes the system less time consuming and less affected by the surrounding environment. First, 3D phase maps of the bacteria are reconstructed for the red, green, and blue wavelengths from these interferograms using MATLAB; from these phase maps we determine the refractive index (RI) of the bacteria. Experimental results of 3D shape measurement and RI at multiple wavelengths will be presented. These results might find applications for detection of contaminants in water without using any chemical processing or fluorescent dyes.
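The abstract does not state which five-step phase-retrieval algorithm is used; a common choice for five 90°-shifted frames is the Hariharan formula, sketched here per color channel as an assumed implementation.

```python
import numpy as np

def hariharan_phase(frames):
    """Five-step phase-shifting (90-degree steps): one wrapped phase map.

    frames: list of five H x W arrays, all taken from the same color
            channel of the five phase-shifted interferograms.
    """
    I1, I2, I3, I4, I5 = [f.astype(float) for f in frames]
    return np.arctan2(2.0 * (I2 - I4), 2.0 * I3 - I1 - I5)

# One set of five RGB interferograms yields three wrapped phase maps:
# phases = {ch: hariharan_phase([img[..., ch] for img in interferograms])
#           for ch in range(3)}
```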
Resolution for color photography
NASA Astrophysics Data System (ADS)
Hubel, Paul M.; Bautsch, Markus
2006-02-01
Although it is well known that luminance resolution is most important, the ability to accurately render colored details, color textures, and colored fabrics cannot be overlooked. This includes the ability to accurately render single-pixel color details as well as to avoid color aliasing. All consumer digital cameras on the market today record in color, and the scenes people photograph are usually in color. Yet almost all resolution measurements made on color cameras use a black and white target. In this paper we present several methods for measuring and quantifying color resolution. The first method, detailed in a previous publication, uses a slanted-edge target of two colored surfaces in place of the standard black and white edge pattern. The second method employs the standard black and white targets recommended in the ISO standard, but records these onto the camera through colored filters, thus giving modulation between black and one particular color component; red, green, and blue color separation filters are used in this study. The third method, conducted at Stiftung Warentest, an independent consumer organization in Germany, uses a white-light interferometer to generate fringe pattern targets of varying color and spatial frequency.
3D digital image correlation using single color camera pseudo-stereo system
NASA Astrophysics Data System (ADS)
Li, Junrui; Dan, Xizuo; Xu, Wan; Wang, Yonghong; Yang, Guobiao; Yang, Lianxiang
2017-10-01
Three-dimensional digital image correlation (3D-DIC) has been widely used by industry to measure 3D contours and whole-field displacement/strain. In this paper, a novel single color camera 3D-DIC setup, using a reflection-based pseudo-stereo system, is proposed. Compared to the conventional single camera pseudo-stereo system, which splits the CCD sensor into two halves to capture the stereo views, the proposed system achieves both views using the whole CCD chip and without reducing the spatial resolution. In addition, as in the conventional 3D-DIC system, the centers of the two views lie at the center of the CCD chip, which minimizes image distortion relative to the conventional pseudo-stereo system. The two overlapped views on the CCD are separated in the color domain, and the standard 3D-DIC algorithm can be utilized directly to perform the evaluation. The system's principle and experimental setup are described in detail, and multiple tests are performed to validate the system.
Application of multispectral color photography to flame flow visualization
NASA Technical Reports Server (NTRS)
Stoffers, G.
1979-01-01
For flames of short duration and low radiation intensity, spectroscopic flame diagnostics is difficult. In order to find some other means of extracting information about the flame structure from its radiation, the feasibility of using multispectral color photography was successfully evaluated. Since the flame photographs are close-ups, there is a considerable parallax between the single images when several cameras are used, and additive color viewing is not possible. Each image must be analyzed individually; it is advisable to use color film in all cameras. One can either use color films of different spectral sensitivities or color films of the same type with different color filters. Sharp-cutting filters are recommended.
A Robust Mechanical Sensing System for Unmanned Sea Surface Vehicles
NASA Technical Reports Server (NTRS)
Kulczycki, Eric A.; Magnone, Lee J.; Huntsberger, Terrance; Aghazarian, Hrand; Padgett, Curtis W.; Trotz, David C.; Garrett, Michael S.
2009-01-01
The need for autonomous navigation and intelligent control of unmanned sea surface vehicles requires a mechanically robust sensing architecture that is watertight, durable, and insensitive to vibration and shock loading. The sensing system developed here comprises four black and white cameras and a single color camera. The cameras are rigidly mounted to a camera bar that can be reconfigured for mounting on multiple vehicles, and they act as both navigational cameras and application cameras. The cameras are housed in watertight casings to protect them and their electronics from moisture and wave splashes. Two of the black and white cameras are positioned to provide lateral vision. They are angled away from the front of the vehicle at horizontal angles to provide ideal fields of view for mapping and autonomous navigation. The other two black and white cameras are positioned at an angle into the color camera's field of view to support vehicle applications. These two cameras provide an overlap, as well as a backup to the front camera. The color camera is positioned directly in the middle of the bar, aimed straight ahead. This system is applicable to any sea-going vehicle, both on Earth and in space.
Adaptive Wiener filter super-resolution of color filter array images.
Karch, Barry K; Hardie, Russell C
2013-08-12
Digital color cameras using a single detector array with a Bayer color filter array (CFA) require interpolation or demosaicing to estimate missing color information and provide full-color images. However, demosaicing does not specifically address fundamental undersampling and aliasing inherent in typical camera designs. Fast non-uniform interpolation based super-resolution (SR) is an attractive approach to reduce or eliminate aliasing and its relatively low computational load is amenable to real-time applications. The adaptive Wiener filter (AWF) SR algorithm was initially developed for grayscale imaging and has not previously been applied to color SR demosaicing. Here, we develop a novel fast SR method for CFA cameras that is based on the AWF SR algorithm and uses global channel-to-channel statistical models. We apply this new method as a stand-alone algorithm and also as an initialization image for a variational SR algorithm. This paper presents the theoretical development of the color AWF SR approach and applies it in performance comparisons to other SR techniques for both simulated and real data.
NASA Astrophysics Data System (ADS)
Hashimoto, Atsushi; Suehara, Ken-Ichiro; Kameoka, Takaharu
To measure the quantitative surface color information of agricultural products along with ambient information during cultivation, a color calibration method for digital camera images and a remote monitoring system for color imaging using the Web were developed. Single-lens reflex and web digital cameras were used for image acquisition. Tomato images through the post-ripening process were taken by the digital camera both in the standard image acquisition system and in field conditions from morning to evening. Several kinds of images were acquired with a standard RGB color chart set up just behind the tomato fruit on a black matte, and a color calibration was carried out. The influence of sunlight could be experimentally eliminated, and the calibrated color information consistently agreed with the standard values acquired in the system through the post-ripening process. Furthermore, the surface color change of the tomato on the tree in a greenhouse was remotely monitored during maturation using digital cameras equipped with the Field Server. The acquired digital color images were sent from the Farm Station to the BIFE Laboratory of Mie University via VPN. The time behavior of the tomato surface color change during the maturing process could be measured using a color parameter calculated from the acquired and calibrated color images, along with the ambient atmospheric record. This study is an important step in developing surface color analysis both for simple and rapid evaluation of crop vigor in the field and for constructing an ambient, networked remote monitoring system for food security, precision agriculture, and agricultural research.
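A minimal sketch of chart-based color calibration of the kind described: fit an affine correction from the photographed chart patches to their known reference values, then apply it to the whole image. The affine (3×4) form and 0-255 value range are assumptions; the paper's calibration model may differ.

```python
import numpy as np

def fit_color_correction(measured, reference):
    """Least-squares affine color correction from a chart in the scene.

    measured:  N x 3 mean RGB values of the chart patches as photographed.
    reference: N x 3 known RGB values of the same patches.
    Returns a 3 x 4 matrix M such that reference ~= M @ [r, g, b, 1].
    """
    A = np.hstack([measured, np.ones((measured.shape[0], 1))])  # N x 4
    M, *_ = np.linalg.lstsq(A, reference, rcond=None)           # 4 x 3
    return M.T                                                  # 3 x 4

def apply_correction(img, M):
    """Apply the fitted correction to an H x W x 3 image (0-255 range)."""
    h, w, _ = img.shape
    flat = np.hstack([img.reshape(-1, 3), np.ones((h * w, 1))])
    return (flat @ M.T).reshape(h, w, 3).clip(0, 255)
```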
NASA Astrophysics Data System (ADS)
Gomes, Gary G.
1986-05-01
A cost-effective and supportable color visual system has been developed to provide the necessary visual cues to United States Air Force B-52 bomber pilots training to become proficient at the task of in-flight refueling. This camera-model visual system approach is not suitable for all simulation applications, but provides a cost-effective alternative to digital image generation systems when high fidelity of a single movable object is required. The system consists of a three-axis gimballed KC-135 tanker model, a range-carriage-mounted color-augmented monochrome television camera, interface electronics, a color light valve projector, and an infinity optics display system.
Enhancement of low light level images using color-plus-mono dual camera.
Jung, Yong Ju
2017-05-15
In digital photography, the improvement of imaging quality in low light shooting is one of the users' needs. Unfortunately, conventional smartphone cameras that use a single, small image sensor cannot provide satisfactory quality in low light level images. A color-plus-mono dual camera that consists of two horizontally separated image sensors, which simultaneously captures both a color and mono image pair of the same scene, could be useful for improving the quality of low light level images. However, an incorrect image fusion between the color and mono image pair could also have negative effects, such as the introduction of severe visual artifacts in the fused images. This paper proposes a selective image fusion technique that applies adaptive guided filter-based denoising and selective detail transfer to only those pixels deemed reliable with respect to binocular image fusion. We employ a dissimilarity measure and binocular just-noticeable-difference (BJND) analysis to identify unreliable pixels that are likely to cause visual artifacts during image fusion via joint color image denoising and detail transfer from the mono image. By constructing an experimental color-plus-mono camera system, we demonstrate that the BJND-aware denoising and selective detail transfer is helpful in improving the image quality during low light shooting.
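A rough sketch of the fusion idea, assuming OpenCV's guided filter from the opencv-contrib package. The reliability test here is a plain dissimilarity threshold standing in for the paper's BJND analysis, so it illustrates the selective-fusion structure rather than the published method.

```python
import numpy as np
import cv2  # guidedFilter requires the opencv-contrib-python package

def fuse_color_mono(color, mono, radius=8, eps=1e-3, tau=0.08):
    """Selective color+mono fusion (illustrative sketch).

    Pixels where the registered mono frame disagrees with the color
    frame's own luminance (occlusion, registration error) keep the
    original color values; elsewhere the mono image guides denoising.
    tau is an assumed dissimilarity threshold, not the paper's BJND test.
    """
    color = color.astype(np.float32) / 255.0
    mono = mono.astype(np.float32) / 255.0
    luma = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)
    # Denoise the color channels with the cleaner mono image as guide.
    denoised = cv2.ximgproc.guidedFilter(mono, color, radius, eps)
    reliable = (np.abs(luma - mono) < tau).astype(np.float32)[..., None]
    return reliable * denoised + (1.0 - reliable) * color
```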
Time-dynamics of the two-color emission from vertical-external-cavity surface-emitting lasers
NASA Astrophysics Data System (ADS)
Chernikov, A.; Wichmann, M.; Shakfa, M. K.; Scheller, M.; Moloney, J. V.; Koch, S. W.; Koch, M.
2012-01-01
The temporal stability of a two-color vertical-external-cavity surface-emitting laser is studied using single-shot streak-camera measurements. The collected data is evaluated via quantitative statistical analysis schemes. Dynamically stable and unstable regions for the two-color operation are identified and the dependence on the pump conditions is analyzed.
Improving the color fidelity of cameras for advanced television systems
NASA Astrophysics Data System (ADS)
Kollarits, Richard V.; Gibbon, David C.
1992-08-01
In this paper we compare the accuracy of the color information obtained from television cameras using three and five wavelength bands. This comparison is based on real digital camera data. The cameras are treated as colorimeters whose characteristics are not linked to those of the display. The color matrices for both cameras were obtained by identical optimization procedures that minimized the color error. The color error for the five-band camera is 2.5 times smaller than that obtained from the three-band camera. Visual comparison of color matches on a characterized color monitor indicates that the five-band camera is capable of color measurements that produce no significant visual error on the display. Because the outputs from the five-band camera are reduced to the normal three channels conventionally used for display, there need be no increase in signal handling complexity outside the camera. Likewise, it is possible to construct a five-band camera using only three sensors as in conventional cameras. The principal drawback of the five-band camera is the reduction in effective camera sensitivity by about 3/4 of an f-stop.
Chen, Brian R; Poon, Emily; Alam, Murad
2017-08-01
Photographs are an essential tool for the documentation and sharing of findings in dermatologic surgery, and various camera types are available. This study evaluates the currently available camera types in view of the special functional needs of procedural dermatologists. Mobile phone, point-and-shoot, digital single-lens reflex (DSLR), digital medium format, and 3-dimensional cameras were compared in terms of their usefulness for dermatologic surgeons. For each camera type, the image quality, as well as the other practical benefits and limitations, was evaluated with reference to a set of ideal camera characteristics. Based on these assessments, recommendations were made regarding the specific clinical circumstances in which each camera type would likely be most useful. Mobile photography may be adequate when ease of use, availability, and accessibility are prioritized. Point-and-shoot cameras and DSLR cameras provide sufficient resolution for a range of clinical circumstances while providing the added benefit of portability. Digital medium format cameras offer the highest image quality, with accurate color rendition and greater color depth. Three-dimensional imaging may be optimal for the definition of skin contour. The selection of an optimal camera depends on the context in which it will be used.
NASA Astrophysics Data System (ADS)
Quan, Shuxue
2009-02-01
Bayer patterns, in which a single value of red, green, or blue is available for each pixel, are widely used in digital color cameras. The reconstruction of the full color image is often referred to as demosaicking. This paper introduces a new approach: morphological demosaicking. The approach is based on strong edge directionality selection and interpolation, followed by morphological operations to refine the edge directionality selection and reduce color aliasing. Finally, a performance evaluation and examples of color artifact reduction are shown.
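The directional-selection step can be illustrated with the classic gradient test below; the morphological refinement stage is not shown, and the three-way rule without thresholds is an assumption for illustration.

```python
def green_at_red_blue(cfa, y, x):
    """Edge-directed green estimate at a red/blue site of a Bayer CFA.

    cfa is a 2D array of raw sensor values; the four direct neighbors
    of a red or blue site are always green in a Bayer pattern.
    """
    left, right = float(cfa[y, x - 1]), float(cfa[y, x + 1])
    up, down = float(cfa[y - 1, x]), float(cfa[y + 1, x])
    dh, dv = abs(left - right), abs(up - down)
    if dh < dv:    # horizontal edge: interpolate along the row
        return (left + right) / 2.0
    if dv < dh:    # vertical edge: interpolate along the column
        return (up + down) / 2.0
    return (left + right + up + down) / 4.0  # flat region: average all
```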
2017-07-13
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows part of Melas Chasma. Orbit Number: 59750 Latitude: -10.5452 Longitude: 290.307 Instrument: VIS Captured: 2015-06-03 12:33 https://photojournal.jpl.nasa.gov/catalog/PIA21705
2015-08-21
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows part of Melas Chasma. Orbit Number: 10289 Latitude: -9.9472 Longitude: 285.933 Instrument: VIS Captured: 2004-04-09 12:43 http://photojournal.jpl.nasa.gov/catalog/PIA19756
Experimental single-chip color HDTV image acquisition system with 8M-pixel CMOS image sensor
NASA Astrophysics Data System (ADS)
Shimamoto, Hiroshi; Yamashita, Takayuki; Funatsu, Ryohei; Mitani, Kohji; Nojiri, Yuji
2006-02-01
We have developed an experimental single-chip color HDTV image acquisition system using an 8M-pixel CMOS image sensor. The sensor has 3840 × 2160 effective pixels and is progressively scanned at 60 frames per second. We describe the color filter array and interpolation method used to improve image quality with a high-pixel-count single-chip sensor. We also describe an experimental image acquisition system used to measure spatial frequency characteristics in the horizontal direction. The results indicate good prospects for achieving a high-quality single-chip HDTV camera that reduces pseudo signals and maintains high spatial frequency characteristics within the HDTV frequency band.
2016-10-11
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows dust devil tracks (dark blue linear feature) in Terra Cimmeria. Orbit Number: 43463 Latitude: -53.1551 Longitude: 125.069 Instrument: VIS Captured: 2011-10-01 23:55 http://photojournal.jpl.nasa.gov/catalog/PIA21009
2017-06-01
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows part of Russell Crater in Noachis Terra. Orbit Number: 59591 Latitude: -54.471 Longitude: 13.1288 Instrument: VIS Captured: 2015-05-21 10:57 https://photojournal.jpl.nasa.gov/catalog/PIA21674
Sensor fusion and augmented reality with the SAFIRE system
NASA Astrophysics Data System (ADS)
Saponaro, Philip; Treible, Wayne; Phelan, Brian; Sherbondy, Kelly; Kambhamettu, Chandra
2018-04-01
The Spectrally Agile Frequency-Incrementing Reconfigurable (SAFIRE) mobile radar system was developed and exercised at an arid U.S. test site. The system detects hidden targets using radar and incorporates a global positioning system (GPS), dual stereo color cameras, and dual stereo thermal cameras. An Augmented Reality (AR) software interface allows the user to see a single fused video stream containing the SAR, color, and thermal imagery. The stereo sensors allow the AR system to display both fused 2D imagery and 3D metric reconstructions, where the user can "fly" around the 3D model and switch between the modalities.
Russell Crater Dunes - False Color
2017-07-07
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows part of the large dune form on the floor of Russell Crater. Orbit Number: 59672 Latitude: -54.337 Longitude: 13.1087 Instrument: VIS Captured: 2015-05-28 02:39 https://photojournal.jpl.nasa.gov/catalog/PIA21701
2015-10-08
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows part of the floor of Melas Chasma. The dark blue region in this false color image is sand dunes. Orbit Number: 12061 Latitude: -12.2215 Longitude: 289.105 Instrument: VIS Captured: 2004-09-02 10:11 http://photojournal.jpl.nasa.gov/catalog/PIA19793
Generalized assorted pixel camera: postcapture control of resolution, dynamic range, and spectrum.
Yasuma, Fumihito; Mitsunaga, Tomoo; Iso, Daisuke; Nayar, Shree K
2010-09-01
We propose the concept of a generalized assorted pixel (GAP) camera, which enables the user to capture a single image of a scene and, after the fact, control the tradeoff between spatial resolution, dynamic range and spectral detail. The GAP camera uses a complex array (or mosaic) of color filters. A major problem with using such an array is that the captured image is severely under-sampled for at least some of the filter types. This leads to reconstructed images with strong aliasing. We make four contributions in this paper: 1) we present a comprehensive optimization method to arrive at the spatial and spectral layout of the color filter array of a GAP camera. 2) We develop a novel algorithm for reconstructing the under-sampled channels of the image while minimizing aliasing artifacts. 3) We demonstrate how the user can capture a single image and then control the tradeoff of spatial resolution to generate a variety of images, including monochrome, high dynamic range (HDR) monochrome, RGB, HDR RGB, and multispectral images. 4) Finally, the performance of our GAP camera has been verified using extensive simulations that use multispectral images of real world scenes. A large database of these multispectral images has been made available at http://www1.cs.columbia.edu/CAVE/projects/gap_camera/ for use by the research community.
NASA Astrophysics Data System (ADS)
Yonai, J.; Arai, T.; Hayashida, T.; Ohtake, H.; Namiki, J.; Yoshida, T.; Etoh, T. Goji
2012-03-01
We have developed an ultrahigh-speed CCD camera that can capture instantaneous phenomena not visible to the human eye and impossible to capture with a regular video camera. The ultrahigh-speed CCD was specially constructed so that the CCD memory between the photodiode and the vertical transfer path of each pixel can store 144 frames each. For every one-frame shot, the electric charges generated from the photodiodes are transferred in one step to the memory of all the parallel pixels, making ultrahigh-speed shooting possible. Earlier, we experimentally manufactured a 1M-fps ultrahigh-speed camera and tested it for broadcasting applications. Through those tests, we learned that there are cases that require shooting speeds (frame rate) of more than 1M fps; hence we aimed to develop a new ultrahigh-speed camera that will enable much faster shooting speeds than what is currently possible. Since shooting at speeds of more than 200,000 fps results in decreased image quality and abrupt heating of the image sensor and drive circuit board, faster speeds cannot be achieved merely by increasing the drive frequency. We therefore had to improve the image sensor wiring layout and the driving method to develop a new 2M-fps, 300k-pixel ultrahigh-speed single-chip color camera for broadcasting purposes.
PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras.
Zheng, Lei; Lukac, Rastislav; Wu, Xiaolin; Zhang, David
2009-04-01
Single-sensor digital color cameras use a process called color demosaicking to produce full color images from the data captured by a color filter array (CFA). The quality of demosaicked images is degraded by the sensor noise introduced during the image acquisition process. The conventional solution to combating CFA sensor noise is demosaicking first, followed by separate denoising processing. This strategy generates many noise-caused color artifacts in the demosaicking process, which are hard to remove in the denoising process. Few denoising schemes that work directly on the CFA images have been presented because of the difficulties arising from the red, green, and blue interlaced mosaic pattern, yet a well-designed "denoising first and demosaicking later" scheme can have advantages such as fewer noise-caused color artifacts and cost-effective implementation. This paper presents a principal component analysis (PCA)-based spatially adaptive denoising algorithm, which works directly on the CFA data using a supporting window to analyze the local image statistics. By exploiting the spatial and spectral correlations existing in the CFA image, the proposed method can effectively suppress noise while preserving color edges and details. Experiments using both simulated and real CFA images indicate that the proposed scheme outperforms many existing approaches, including sophisticated demosaicking and denoising schemes, in terms of both objective measurement and visual evaluation.
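A compact sketch of PCA-domain shrinkage on patches gathered from one supporting window, assuming a known noise variance; the published algorithm's window construction and CFA-aware grouping of red, green, and blue samples are omitted.

```python
import numpy as np

def pca_denoise_patches(patches, noise_var):
    """Wiener-like shrinkage of patch coefficients in the local PCA basis.

    patches:   N x d matrix of vectorized patches gathered from one
               supporting window (N samples of dimension d).
    noise_var: estimated sensor-noise variance (assumed known here).
    """
    mean = patches.mean(axis=0)
    X = patches - mean
    eigval, eigvec = np.linalg.eigh(X.T @ X / X.shape[0])
    coef = X @ eigvec                        # project onto the PCA basis
    signal = np.maximum(eigval - noise_var, 0.0)
    gain = signal / (signal + noise_var)     # suppress noise-dominated axes
    return (coef * gain) @ eigvec.T + mean   # back to pixel space
```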
Compact Autonomous Hemispheric Vision System
NASA Technical Reports Server (NTRS)
Pingree, Paula J.; Cunningham, Thomas J.; Werne, Thomas A.; Eastwood, Michael L.; Walch, Marc J.; Staehle, Robert L.
2012-01-01
Solar System Exploration camera implementations to date have involved either single cameras with wide field-of-view (FOV) and consequently coarser spatial resolution, cameras on a movable mast, or single cameras necessitating rotation of the host vehicle to afford visibility outside a relatively narrow FOV. These cameras require detailed commanding from the ground or separate onboard computers to operate properly, and are incapable of making decisions based on image content that control pointing and downlink strategy. For color, a filter wheel having selectable positions was often added, which added moving parts, size, mass, and power, and reduced reliability. A system was developed based on a general-purpose miniature visible-light camera using advanced CMOS (complementary metal oxide semiconductor) imager technology. The baseline camera has a 92° FOV, and six cameras are arranged in an angled-up carousel fashion, with FOV overlaps such that the system has a 360° FOV in azimuth. A seventh camera, also with a 92° FOV, is installed normal to the plane of the other six cameras, giving the system a >90° FOV in elevation and completing the hemispheric vision system. A central unit houses the common electronics box (CEB) controlling the system (power conversion, data processing, memory, and control software). Stereo is achieved by adding a second system on a baseline, and color is achieved by stacking two more systems (for a total of three, each system equipped with its own filter). Two connectors on the bottom of the CEB provide a connection to a carrier (rover, spacecraft, balloon, etc.) for telemetry, commands, and power. This system has no moving parts. The system's onboard software (SW) supports autonomous operations such as pattern recognition and tracking.
Compact camera technologies for real-time false-color imaging in the SWIR band
NASA Astrophysics Data System (ADS)
Dougherty, John; Jennings, Todd; Snikkers, Marco
2013-11-01
Previously, real-time false-colored multispectral imaging was not available in a true snapshot single compact imager. Recent technology improvements now allow this technique to be used in practical applications. This paper covers those advancements as well as a case study of its use in UAVs, where the technology is enabling new remote sensing methodologies.
Sedgewick, Gerald J.; Ericson, Marna
2015-01-01
Obtaining digital images of color brightfield microscopy is an important aspect of biomedical research and the clinical practice of diagnostic pathology. Although the field of digital pathology has had tremendous advances in whole-slide imaging systems, little effort has been directed toward standardizing color brightfield digital imaging to maintain image-to-image consistency and tonal linearity. Using a single camera and microscope to obtain digital images of three stains, we show that microscope and camera systems inherently produce image-to-image variation. Moreover, we demonstrate that post-processing with a widely used raster graphics editor software program does not completely correct for session-to-session inconsistency. We introduce a reliable method for creating consistent images with a hardware/software solution (ChromaCal™; Datacolor Inc., NJ) along with its features for creating color standardization, preserving linear tonal levels, providing automated white balancing and setting automated brightness to consistent levels. The resulting image consistency using this method will also streamline mean density and morphometry measurements, as images are easily segmented and single thresholds can be used. We suggest that this is a superior method for color brightfield imaging, which can be used for quantification and can be readily incorporated into workflows. PMID:25575568
2017-02-15
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows part of Gale Crater. Basaltic sands are dark blue in this type of false color combination. The Curiosity Rover is located in another portion of Gale Crater, far southwest of this image. Orbit Number: 51803 Latitude: -4.39948 Longitude: 138.116 Instrument: VIS Captured: 2013-08-18 09:04 http://photojournal.jpl.nasa.gov/catalog/PIA21312
Electrically actuatable temporal tristimulus-color device
Koehler, Dale R.
1992-01-01
The electrically actuated light filter operates in a cyclical temporal mode to effect a tristimulus-color light analyzer. Construction is based on a Fabry-Perot interferometer comprising a high-speed movable mirror pair and cyclically powered electrical actuators. When combined with a single vidicon tube or a monochrome solid-state image sensor, a temporally operated tristimulus-color video camera is effected. A color generator is effected when the device is constructed with a companion light source, providing a flicker-free colored-light source for transmission-type display systems. Advantages of low cost and small physical size result from photolithographic batch-processing manufacturability.
DeRocco, Vanessa; Anderson, Trevor; Piehler, Jacob; Erie, Dorothy A; Weninger, Keith
2010-11-01
To enable studies of conformational changes within multimolecular complexes, we present a simultaneous, four-color single molecule fluorescence methodology implemented with total internal reflection illumination and camera-based, wide-field detection. We further demonstrate labeling histidine-tagged proteins noncovalently with Tris-nitrilotriacetic acid (Tris-NTA)-conjugated dyes to achieve single molecule detection. We combine these methods to colocalize the mismatch repair protein MutSα on DNA while monitoring MutSα-induced DNA bending using Förster resonance energy transfer (FRET) and to monitor assembly of membrane-tethered SNARE protein complexes.
NASA Astrophysics Data System (ADS)
Kanamori, Katsuhiro
2016-07-01
An endoscopic image processing technique for enhancing the appearance of microstructures on translucent mucosae is described. This technique employs two pairs of co- and cross-polarization images under two different linearly polarized lights, from which the averaged subtracted polarization image (AVSPI) is calculated. Experiments were conducted on an acrylic phantom and excised porcine stomach tissue with a manual setup consisting of ring-type lighting, two rotating polarizers, and a color camera; better results were achieved with the proposed method than with conventional color intensity image processing. An objective evaluation method based on texture analysis was developed and used to evaluate the enhanced microstructure images. This paper introduces two types of online, rigid-type polarimetric endoscopic implementations using a polarized ring-shaped LED and a polarimetric camera. The first type uses a beam-splitter-type color polarimetric camera, and the second uses a single-chip monochrome polarimetric camera. Microstructures on the mucosa surface were enhanced robustly with these online endoscopes regardless of the difference in the extinction ratio of each device. These results show that polarimetric endoscopy using AVSPI is both effective and practical for hardware implementation.
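Reading the name literally, the AVSPI can be sketched as the average of the two co-minus-cross differences; this formula is an inference from the abstract, not a quoted equation from the paper.

```python
import numpy as np

def avspi(co1, cross1, co2, cross2):
    """Averaged subtracted polarization image (inferred formulation).

    co*/cross*: co- and cross-polarized intensity images captured under
    the two linearly polarized illuminations (1 and 2). Subtracting
    suppresses depolarized light from deeper tissue; averaging the two
    illumination geometries suppresses the directional bias of either one.
    """
    return 0.5 * ((co1.astype(float) - cross1) +
                  (co2.astype(float) - cross2))
```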
Multi-pulse shadowgraphic RGB illumination and detection for flow tracking
NASA Astrophysics Data System (ADS)
Menser, Jan; Schneider, Florian; Dreier, Thomas; Kaiser, Sebastian A.
2018-06-01
This work demonstrates the application of a multi-color LED and a consumer color camera for visualizing phase boundaries in two-phase flows, in particular for particle tracking velocimetry. The LED emits a sequence of short light pulses, red, green, then blue (RGB), and through its color-filter array, the camera captures all three pulses on a single RGB frame. In a backlit configuration, liquid droplets appear as shadows in each color channel. Color reversal and color cross-talk correction yield a series of three frozen-flow images that can be used for further analysis, e.g., determining the droplet velocity by particle tracking. Three example flows are presented, solid particles suspended in water, the penetrating front of a gasoline direct-injection spray, and the liquid break-up region of an "air-assisted" nozzle. Because of the shadowgraphic arrangement, long path lengths through scattering media lower image contrast, while visualization of phase boundaries with high resolution is a strength of this method. Apart from a pulse-and-delay generator, the overall system cost is very low.
Multiple Sensor Camera for Enhanced Video Capturing
NASA Astrophysics Data System (ADS)
Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko
Camera resolution has been drastically improved in response to the demand for high-quality digital images. For example, digital still cameras now have several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution and high frame rate are incompatible in ordinary cameras on the market. This problem is difficult to solve with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera that captures high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.
An ultrahigh-speed color video camera operating at 1,000,000 fps with 288 frame memories
NASA Astrophysics Data System (ADS)
Kitamura, K.; Arai, T.; Yonai, J.; Hayashida, T.; Kurita, T.; Maruyama, H.; Namiki, J.; Yanagi, T.; Yoshida, T.; van Kuijk, H.; Bosiers, Jan T.; Saita, A.; Kanayama, S.; Hatade, K.; Kitagawa, S.; Etoh, T. Goji
2008-11-01
We developed an ultrahigh-speed color video camera that operates at 1,000,000 fps (frames per second) and has the capacity to store 288 frame memories. In 2005, we developed an ultrahigh-speed, high-sensitivity, portable color camera with a 300,000-pixel single CCD (ISIS-V4: In-situ Storage Image Sensor, Version 4). Its ultrahigh-speed shooting capability of 1,000,000 fps was made possible by directly connecting CCD storages, which record video images, to the photodiodes of individual pixels. The number of consecutive frames was 144. However, longer capture times were demanded when the camera was used during imaging experiments and for some television programs. To increase ultrahigh-speed capture times, we used a beam splitter and two ultrahigh-speed 300,000-pixel CCDs. The beam splitter was placed behind the pickup lens, with one CCD located at each of its two outputs. A CCD driving unit was developed to drive the two CCDs separately, and the recording period of the two CCDs was switched sequentially. This increased the recording capacity to 288 images, a factor-of-two increase over that of the conventional ultrahigh-speed camera. A problem with this arrangement was that the incident light on each CCD was halved by the beam splitter. To improve the light sensitivity, we developed a microlens array for use with the ultrahigh-speed CCDs. We simulated the operation of the microlens array in order to optimize its shape and then fabricated it using stamping technology. Using this microlens array increased the light sensitivity of the CCDs by an approximate factor of two. By using a beam splitter in conjunction with the microlens array, it was possible to make an ultrahigh-speed color video camera that has 288 frame memories without decreasing the camera's light sensitivity.
Spectral colors capture and reproduction based on digital camera
NASA Astrophysics Data System (ADS)
Chen, Defen; Huang, Qingmei; Li, Wei; Lu, Yang
2018-01-01
The purpose of this work is to develop a method for the accurate reproduction of spectral colors captured by a digital camera. The spectral colors, being the purest colors of any hue, are difficult to reproduce without distortion on digital devices. In this paper, we attempt to achieve accurate hue reproduction of the spectral colors by focusing on two steps of color correction: the capture of the spectral colors and the color characterization of the digital camera. This determines the relationship among the spectral color wavelength, the RGB color space of the digital camera, and the CIEXYZ color space. This study also provides a basis for further studies related to spectral color reproduction on digital devices. Methods such as wavelength calibration of the spectral colors and digital camera characterization were utilized. The spectrum was obtained through a grating spectroscopy system. A photo of a clear and reliable primary spectrum was taken by adjusting the relevant parameters of the digital camera, from which the RGB values of the color spectrum were extracted at 1040 equally divided locations. Two wavelength values were obtained at each location, one calculated using the grating equation and one measured by a spectrophotometer. The polynomial fitting method for camera characterization was used to achieve color correction. After wavelength calibration, the maximum error between the two sets of wavelengths is 4.38 nm. With the polynomial fitting method, the average color difference of the test samples is 3.76. This satisfies the application needs of spectral colors in digital devices such as displays and transmission.
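A minimal sketch of polynomial camera characterization as described: a polynomial expansion of camera RGB fitted by least squares to measured CIEXYZ values. The second-order form and target space used here are assumptions; the paper does not state the polynomial order.

```python
import numpy as np

def poly_features(rgb):
    """Second-order polynomial expansion of camera RGB (N x 3 -> N x 10)."""
    r, g, b = rgb.T
    return np.stack([np.ones_like(r), r, g, b,
                     r * g, r * b, g * b,
                     r * r, g * g, b * b], axis=1)

def fit_characterization(rgb_samples, xyz_samples):
    """Least-squares fit mapping camera RGB to CIEXYZ.

    rgb_samples: N x 3 camera responses to the training colors.
    xyz_samples: N x 3 spectrophotometer-measured CIEXYZ values.
    """
    A = poly_features(rgb_samples)                    # N x 10
    M, *_ = np.linalg.lstsq(A, xyz_samples, rcond=None)
    return M                                          # 10 x 3

# New colors are then mapped via: xyz_estimate = poly_features(rgb_new) @ M
```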
Image quality evaluation of color displays using a Foveon color camera
NASA Astrophysics Data System (ADS)
Roehrig, Hans; Dallas, William J.; Fan, Jiahua; Krupinski, Elizabeth A.; Redford, Gary R.; Yoneda, Takahiro
2007-03-01
This paper presents preliminary data on the use of a color camera for quality control (QC) and quality analysis (QA) of a color LCD in comparison with a monochrome LCD. The color camera is a CMOS camera with a pixel size of 9 µm and a pixel matrix of 2268 × 1512 × 3. The camera uses a sensor that has co-located pixels for all three primary colors. The imaging geometry used mostly was 12 × 12 camera pixels per display pixel, even though it appears that an imaging geometry of 17.6 might provide more accurate results. The color camera is used as an imaging colorimeter, where each camera pixel is calibrated to serve as a colorimeter. This capability permits the camera to determine the chromaticity of the color LCD at different sections of the display. After color calibration with a CS-200 colorimeter, the color coordinates of the display's primaries determined from the camera's luminance response are very close to those found from the CS-200; only the color coordinates of the display's white point were in error. The Modulation Transfer Function (MTF) as well as noise in terms of the Noise Power Spectrum (NPS) of both LCDs were evaluated. The horizontal MTFs of both displays have a larger negative slope than the vertical MTFs, indicating that the horizontal MTFs are poorer than the vertical MTFs. However, the modulations at the Nyquist frequency seem lower for the color LCD than for the monochrome LCD. These results contradict simulations regarding MTFs in the vertical direction. The spatial noise of the color display in both directions is larger than that of the monochrome display. Attempts were also made to separate the total noise into spatial and temporal components by subtracting images taken at exactly the same exposure. Temporal noise seems to be significantly lower than spatial noise.
Optimum color filters for CCD digital cameras
NASA Astrophysics Data System (ADS)
Engelhardt, Kai; Kunz, Rino E.; Seitz, Peter; Brunner, Harald; Knop, Karl
1993-12-01
As part of the ESPRIT II project No. 2103 (MASCOT), a high-performance prototype color CCD still video camera was developed. Intended for professional usage such as in the graphic arts, the camera provides a maximum resolution of 3k × 3k full color pixels. A high colorimetric performance was achieved through specially designed dielectric filters and optimized matrixing. The color transformation was obtained by computer simulation of the camera system and non-linear optimization that minimized the perceivable color errors, as measured in the 1976 CIELUV uniform color space, for a set of about 200 carefully selected test colors. The color filters were designed to allow perfect colorimetric reproduction in principle, with imperceptible color noise and with special attention to fabrication tolerances. The camera system includes a special real-time digital color processor which carries out the color transformation. The transformation can be selected from a set of sixteen matrices optimized for different illuminants and output devices. Because the actual filter design was based on slightly incorrect data, the prototype camera showed a mean colorimetric error of 2.7 j.n.d. (CIELUV) in experiments. Using correct input data in a redesign of the filters, a mean colorimetric error of only 1 j.n.d. (CIELUV) seems feasible, implying that an optimized color camera of this kind can achieve colorimetric performance so high that the reproduced colors in an image cannot be distinguished from the original colors in a scene, even in direct comparison.
Single-shot quantitative phase microscopy with color-multiplexed differential phase contrast (cDPC).
Phillips, Zachary F; Chen, Michael; Waller, Laura
2017-01-01
We present a new technique for quantitative phase and amplitude microscopy from a single color image with coded illumination. Our system consists of a commercial brightfield microscope with one hardware modification: an inexpensive 3D-printed condenser insert. The method, color-multiplexed Differential Phase Contrast (cDPC), is a single-shot variant of Differential Phase Contrast (DPC), which recovers the phase of a sample from images with asymmetric illumination. We employ partially coherent illumination to achieve resolution corresponding to 2× the objective NA. Quantitative phase can then be used to synthesize DIC and phase contrast images or to extract shape and density. We demonstrate amplitude and phase recovery at camera-limited frame rates (50 fps) for various in vitro cell samples and C. elegans in a micro-fluidic channel.
A method and results of color calibration for the Chang'e-3 terrain camera and panoramic camera
NASA Astrophysics Data System (ADS)
Ren, Xin; Li, Chun-Lai; Liu, Jian-Jun; Wang, Fen-Fei; Yang, Jian-Feng; Liu, En-Hai; Xue, Bin; Zhao, Ru-Jin
2014-12-01
The terrain camera (TCAM) and panoramic camera (PCAM) are two of the major scientific payloads installed on the lander and rover of the Chang'e 3 mission, respectively. Both use a CMOS sensor covered with a Bayer color filter array to capture color images of the Moon's surface. The RGB values of the original images are specific to these two kinds of cameras, and there is an obvious color difference compared with human visual perception. This paper follows standards published by the International Commission on Illumination to establish a color correction model, designs the ground calibration experiment, and obtains the color correction coefficients. The image quality has been significantly improved and there is no obvious color difference in the corrected images. Ground experimental results show that: (1) Compared with the uncorrected images, the average color difference of TCAM is 4.30, a reduction of 62.1%. (2) The average color differences of the left and right cameras in PCAM are 4.14 and 4.16, reductions of 68.3% and 67.6%, respectively.
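For reference, color difference can be computed as a CIELAB ΔE; the sketch below uses the simple CIE76 formula, since the abstract does not state which ΔE variant was used.

```python
import numpy as np

def xyz_to_lab(xyz, white):
    """CIEXYZ -> CIELAB for a given reference white (xyz is N x 3)."""
    t = xyz / white
    # Piecewise cube-root function from the CIELAB definition.
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t),
                 t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[:, 1] - 16
    a = 500 * (f[:, 0] - f[:, 1])
    b = 200 * (f[:, 1] - f[:, 2])
    return np.stack([L, a, b], axis=1)

def delta_e76(lab1, lab2):
    """CIE 1976 color difference: Euclidean distance in CIELAB."""
    return np.linalg.norm(lab1 - lab2, axis=1)

# Example: average Delta-E between corrected and reference patch colors
# d65 = np.array([95.047, 100.0, 108.883])
# mean_de = delta_e76(xyz_to_lab(xyz_cam, d65), xyz_to_lab(xyz_ref, d65)).mean()
```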
Single Lens Dual-Aperture 3D Imaging System: Color Modeling
NASA Technical Reports Server (NTRS)
Bae, Sam Y.; Korniski, Ronald; Ream, Allen; Fritz, Eric; Shearn, Michael
2012-01-01
In an effort to miniaturize a 3D imaging system, we created two viewpoints in a single-objective-lens camera. This was accomplished by placing a pair of Complementary Multi-band Bandpass Filters (CMBFs) in the aperture area. Two key characteristics of the CMBFs are that the passbands are staggered, so only one viewpoint is opened at a time when a light band matched to that passband is illuminated, and that the passbands are positioned throughout the visible spectrum, so each viewpoint can render color by taking RGB spectral images. Each viewpoint takes a different spectral image from the other viewpoint, hence yielding a different color image relative to the other. This color mismatch in the two viewpoints could lead to color rivalry, where the human vision system fails to resolve two different colors. The difference becomes smaller as the number of passbands in a CMBF increases. (However, the number of passbands is constrained by cost and fabrication technique.) In this paper, a simulation predicting the color mismatch is reported.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tokurei, Shogo, E-mail: shogo.tokurei@gmail.com; Morishita, Junji, E-mail: junjim@med.kyushu-u.ac.jp
Purpose: The aim of this study is to propose a method for the quantitative evaluation of image quality of both monochrome and color liquid-crystal displays (LCDs) using a commercially available color digital camera. Methods: The intensities of the unprocessed red (R), green (G), and blue (B) signals of a camera vary depending on the spectral sensitivity of the image sensor used in the camera. For consistent evaluation of image quality for both monochrome and color LCDs, the unprocessed RGB signals of the camera were converted into gray scale signals that corresponded to the luminance of the LCD. Gray scale signals for the monochrome LCD were evaluated by using only the green channel signals of the camera. For the color LCD, the RGB signals of the camera were converted into gray scale signals by employing weighting factors (WFs) for each RGB channel. A line image displayed on the color LCD was simulated on the monochrome LCD by using a software application for subpixel driving in order to verify the WF-based conversion method. Furthermore, the results obtained by different types of commercially available color cameras and a photometric camera were compared to examine the consistency of the authors' method. Finally, image quality for both the monochrome and color LCDs was assessed by measuring modulation transfer functions (MTFs) and Wiener spectra (WS). Results: The authors' results demonstrated that the proposed method for calibrating the spectral sensitivity of the camera resulted in a consistent and reliable evaluation of the luminance of monochrome and color LCDs. The MTFs and WS showed different characteristics for the two LCD types owing to the difference in subpixel structure. The MTF in the vertical direction of the color LCD was superior to that of the monochrome LCD, although the WS in the vertical direction of the color LCD was inferior to that of the monochrome LCD as a result of luminance fluctuations in the RGB subpixels. Conclusions: The authors' method based on the use of a commercially available color camera is useful to evaluate and understand the display performance of both monochrome and color LCDs in radiology departments.
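A minimal sketch of the weighting-factor idea: fit per-channel weights so that a weighted sum of the camera's raw RGB signals reproduces photometer-measured luminance. The least-squares fit is an assumed implementation, not the authors' stated procedure.

```python
import numpy as np

def fit_weighting_factors(rgb_patches, luminance):
    """Fit weighting factors (w_R, w_G, w_B) for RGB-to-gray conversion.

    rgb_patches: N x 3 mean raw camera signals for N displayed test patches.
    luminance:   N luminance values of the same patches, measured with
                 a photometer (e.g., a luminance meter).
    """
    w, *_ = np.linalg.lstsq(rgb_patches, luminance, rcond=None)
    return w

def to_grayscale(raw_rgb, w):
    """Weighted sum of raw RGB: a gray signal proportional to luminance."""
    return raw_rgb @ w
```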
Spectrally resolved laser interference microscopy
NASA Astrophysics Data System (ADS)
Butola, Ankit; Ahmad, Azeem; Dubey, Vishesh; Senthilkumaran, P.; Singh Mehta, Dalip
2018-07-01
We developed a new quantitative phase microscopy technique, namely, spectrally resolved laser interference microscopy (SR-LIM), with which it is possible to quantify multi-spectral phase information of biological specimens without color crosstalk using a color CCD camera. It is a single-shot technique that does not require sequential switching of red, green, and blue (RGB) light sources. The method is implemented using a three-wavelength interference microscope and a customized compact grating-based imaging spectrometer fitted at the output port. Results on a USAF resolution chart obtained with three different light sources, namely, a halogen lamp, light-emitting diodes, and lasers, are discussed and compared. Broadband light sources such as the halogen lamp and light-emitting diodes lead to stretching in the spectrally decomposed images, whereas no stretching is observed with narrow-band light sources, i.e., lasers. The proposed technique is further successfully employed for single-shot quantitative phase imaging of human red blood cells at three wavelengths simultaneously without color crosstalk. With the present technique, one can also use a monochrome camera, even though the experiments are performed using multi-color light sources. Finally, SR-LIM is not limited to RGB wavelengths; it can be extended to red, near-infrared, and infrared wavelengths, which are suitable for various biological applications.
Robust tissue classification for reproducible wound assessment in telemedicine environments
NASA Astrophysics Data System (ADS)
Wannous, Hazem; Treuillet, Sylvie; Lucas, Yves
2010-04-01
In telemedicine environments, a standardized and reproducible assessment of wounds, using a simple hand-held digital camera, is an essential requirement. However, to ensure robust tissue classification, particular attention must be paid to the complete design of the color processing chain. We introduce the key steps, including color correction, merging of expert labeling, and segmentation-driven classification based on support vector machines. The tool thus developed ensures stability under changes in lighting conditions, viewpoint, and camera, to achieve accurate and robust classification of skin tissues. Clinical tests demonstrate that such an advanced tool, which forms part of a complete 3-D and color wound assessment system, significantly improves the monitoring of the healing process. It achieves an overlap score of 79.3% versus 69.1% for a single expert, after mapping onto the medical reference developed from image labeling by a college of experts.
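The abstract names the classifier but not its configuration; the sketch below shows a plausible segmentation-driven SVM stage using scikit-learn, with illustrative region descriptors and an RBF kernel assumed rather than taken from the paper:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# One descriptor per segmented region (illustrative color/texture features)
# and one tissue label per region from the merged expert annotations.
X_train = np.random.rand(200, 6)           # placeholder descriptors
y_train = np.random.randint(0, 3, 200)     # placeholder tissue classes

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_train, y_train)

X_new = np.random.rand(5, 6)               # descriptors of new regions
tissue_labels = clf.predict(X_new)
```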
Water Detection Based on Color Variation
NASA Technical Reports Server (NTRS)
Rankin, Arturo L.
2012-01-01
This software has been designed to detect water bodies that are out in the open on cross-country terrain at close range (out to 30 meters), using imagery acquired from a stereo pair of color cameras mounted on a terrestrial, unmanned ground vehicle (UGV). This detector exploits the fact that the color variation across water bodies is generally larger and more uniform than that of other naturally occurring types of terrain, such as soil and vegetation. Non-traversable water bodies, such as large puddles, ponds, and lakes, are detected based on color variation, image intensity variance, image intensity gradient, size, and shape. At ranges beyond 20 meters, water bodies out in the open can be indirectly detected by detecting reflections of the sky below the horizon in color imagery. But at closer range, the color coming out of a water body dominates sky reflections, and the water cue from sky reflections is of marginal use. Since there may be times during UGV autonomous navigation when a water body does not come into a perception system's field of view until it is at close range, the ability to detect water bodies at close range is critical. Factors that influence the perceived color of a water body at close range are the amount and type of sediment in the water, the water's depth, and the angle of incidence to the water body. Developing a single model, valid for all water bodies, of the ratio of light reflected off the water surface (to the camera) to light coming out of the water body (to the camera) would be fairly difficult. Instead, this software detects close water bodies based on local terrain features and the natural, uniform change in color that occurs across the surface from the leading edge to the trailing edge.
Color reproduction software for a digital still camera
NASA Astrophysics Data System (ADS)
Lee, Bong S.; Park, Du-Sik; Nam, Byung D.
1998-04-01
We have developed color reproduction software for a digital still camera. The image taken by the camera was colorimetrically reproduced on the monitor after characterizing the camera and the monitor and performing color matching between the two devices. The reproduction was performed at three levels: level processing, gamma correction, and color transformation. The image contrast was increased after the level processing, which adjusts the levels of the dark and bright portions of the image. The relationship between the level-processed digital values and the measured luminance values of test gray samples was calculated, and the gamma of the camera was obtained. A method for obtaining the unknown monitor gamma was also proposed. As a result, the level-processed values were adjusted by a look-up table created from the camera and monitor gamma corrections. For the camera's color transformation, a 3 by 3 or 3 by 4 matrix was used, calculated by regression between the gamma-corrected values and the measured tristimulus values of each test color sample. The various reproduced images, generated according to four illuminations for the camera and three color temperatures for the monitor, were displayed in a dialogue box implemented in our software. A user can easily choose the best reproduced image by comparing them.
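A minimal numpy sketch of two of the steps described above, using placeholder chart data: estimating the camera gamma from gray samples by a log-log fit, and fitting a 3 by 4 (affine) color transformation by least-squares regression:

```python
import numpy as np

# Camera gamma from level-processed values d and measured luminances Y
# of gray samples, via the model Y = a * d**gamma (placeholder data).
d = np.array([32, 64, 96, 128, 160, 192, 224]) / 255.0
Y = np.array([1.1, 4.8, 11.9, 23.0, 39.2, 60.5, 88.0])
gamma, log_a = np.polyfit(np.log(d), np.log(Y), 1)

# 3 by 4 color transformation by regression between gamma-corrected
# camera values and measured XYZ tristimulus values of chart patches.
rgb_corrected = np.random.rand(24, 3)                 # placeholder data
rgb_aug = np.hstack([rgb_corrected, np.ones((24, 1))])
xyz_measured = np.random.rand(24, 3)                  # placeholder data
M, *_ = np.linalg.lstsq(rgb_aug, xyz_measured, rcond=None)  # (4, 3)
xyz_predicted = rgb_aug @ M
```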
Single-shot quantitative phase microscopy with color-multiplexed differential phase contrast (cDPC)
2017-01-01
We present a new technique for quantitative phase and amplitude microscopy from a single color image with coded illumination. Our system consists of a commercial brightfield microscope with one hardware modification: an inexpensive 3D-printed condenser insert. The method, color-multiplexed Differential Phase Contrast (cDPC), is a single-shot variant of Differential Phase Contrast (DPC), which recovers the phase of a sample from images with asymmetric illumination. We employ partially coherent illumination to achieve resolution corresponding to 2× the objective NA. Quantitative phase can then be used to synthesize DIC and phase contrast images or to extract shape and density. We demonstrate amplitude and phase recovery at camera-limited frame rates (50 fps) for various in vitro cell samples and C. elegans in a micro-fluidic channel. PMID:28152023
Calibration View of Earth and the Moon by Mars Color Imager
NASA Technical Reports Server (NTRS)
2005-01-01
Three days after the Mars Reconnaissance Orbiter's Aug. 12, 2005, launch, the spacecraft was pointed toward Earth and the Mars Color Imager camera was powered up to acquire a suite of images of Earth and the Moon. When it gets to Mars, the Mars Color Imager's main objective will be to obtain daily global color and ultraviolet images of the planet to observe martian meteorology by documenting the occurrence of dust storms, clouds, and ozone. This camera will also observe how the martian surface changes over time, including changes in frost patterns and surface brightness caused by dust storms and dust devils. The purpose of acquiring an image of Earth and the Moon just three days after launch was to help the Mars Color Imager science team obtain a measure, in space, of the instrument's sensitivity, as well as to check that no contamination occurred on the camera during launch. Prior to launch, the team determined that, three days out from Earth, the planet would only be about 4.77 pixels across, and the Moon would be less than one pixel in size, as seen from the Mars Color Imager's wide-angle perspective. If the team waited any longer than three days to test the camera's performance in space, Earth would be too small to obtain meaningful results. The Earth and Moon images were acquired by turning Mars Reconnaissance Orbiter toward Earth, then slewing the spacecraft so that the Earth and Moon would pass before each of the five color and two ultraviolet filters of the Mars Color Imager. The distance to the Moon was about 1,440,000 kilometers (about 895,000 miles); the range to Earth was about 1,170,000 kilometers (about 727,000 miles). This view combines a sequence of frames showing the passage of Earth and the Moon across the field of view of a single color band of the Mars Color Imager. As the spacecraft slewed to view the two objects, they passed through the camera's field of view. Earth has been saturated white in this image so that both Earth and the Moon can be seen in the same frame. The Sun was coming from the left, so Earth and the Moon are seen in a quarter phase. Earth is on the left. The Moon appears briefly on the right. The Moon fades in and out; the Moon is only one pixel in size, and its fading is an artifact of the size and configuration of the light-sensitive pixels of the camera's charge-coupled device (CCD) detector.
Quantitative Imaging with a Mobile Phone Microscope
Skandarajah, Arunan; Reber, Clay D.; Switz, Neil A.; Fletcher, Daniel A.
2014-01-01
Use of optical imaging for medical and scientific applications requires accurate quantification of features such as object size, color, and brightness. High pixel density cameras available on modern mobile phones have made photography simple and convenient for consumer applications; however, the camera hardware and software that enable this simplicity can present a barrier to accurate quantification of image data. This issue is exacerbated by automated settings, proprietary image processing algorithms, rapid phone evolution, and the diversity of manufacturers. If mobile phone cameras are to live up to their potential to increase access to healthcare in low-resource settings, limitations of mobile phone-based imaging must be fully understood and addressed with procedures that minimize their effects on image quantification. Here we focus on microscopic optical imaging using a custom mobile phone microscope that is compatible with phones from multiple manufacturers. We demonstrate that quantitative microscopy with micron-scale spatial resolution can be carried out with multiple phones and that image linearity, distortion, and color can be corrected as needed. Using all versions of the iPhone and a selection of Android phones released between 2007 and 2012, we show that phones with camera resolutions greater than 5 MP are capable of nearly diffraction-limited resolution over a broad range of magnifications, including those relevant for single cell imaging. We find that the automatic focus, exposure, and color gain standard on mobile phones can degrade image resolution and reduce the accuracy of color capture if uncorrected, and we devise procedures to avoid these barriers to quantitative imaging. By accommodating the differences between mobile phone cameras and scientific cameras, mobile phone microscopes can be reliably used to increase access to quantitative imaging for a variety of medical and scientific applications. PMID:24824072
Three-dimensional particle tracking via tunable color-encoded multiplexing.
Duocastella, Martí; Theriault, Christian; Arnold, Craig B
2016-03-01
We present a novel 3D tracking approach capable of locating single particles with nanometric precision over wide axial ranges. Our method uses a fast acousto-optic liquid lens implemented in a bright field microscope to multiplex light based on color into different and selectable focal planes. By separating the red, green, and blue channels from an image captured with a color camera, information from up to three focal planes can be retrieved. Multiplane information from the particle diffraction rings enables precisely locating and tracking individual objects up to an axial range about 5 times larger than conventional single-plane approaches. We apply our method to the 3D visualization of the well-known coffee-stain phenomenon in evaporating water droplets.
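A minimal sketch of the channel-separation step described above, assuming a hypothetical file name; each color channel of the captured frame encodes a different, tunable focal plane:

```python
import numpy as np
import imageio.v3 as iio

# Each color channel of a single captured frame corresponds to a different
# (tunable) focal plane; "particles.png" is a hypothetical file name.
frame = iio.imread("particles.png").astype(float)
plane_red, plane_green, plane_blue = (frame[..., k] for k in range(3))
# Each plane image can now be analyzed independently, e.g. by fitting the
# particle's diffraction rings to localize it along the optical axis.
```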
Color constancy by characterization of illumination chromaticity
NASA Astrophysics Data System (ADS)
Nikkanen, Jarno T.
2011-05-01
Computational color constancy algorithms play a key role in achieving the desired color reproduction in digital cameras. Failure to estimate illumination chromaticity correctly will result in an overall color cast in the image that is easily detected by human observers. A new algorithm is presented for computational color constancy. Low computational complexity and a low memory requirement make the algorithm suitable for resource-limited camera devices, such as consumer digital cameras and camera phones. Operation of the algorithm relies on characterization of the range of possible illumination chromaticities in terms of camera sensor response. The fact that only the illumination chromaticity is characterized, instead of the full color gamut, for example, increases robustness against variations in sensor characteristics and against failure of the diagonal model of illumination change. Multiple databases are used to demonstrate the good performance of the algorithm in comparison to state-of-the-art color constancy algorithms.
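The characterization-based algorithm itself is not detailed in the abstract; for orientation, here is the classic gray-world baseline that such algorithms are commonly compared against, together with the diagonal (von Kries) correction the abstract alludes to. This is a stand-in, not the author's method:

```python
import numpy as np

def gray_world_chromaticity(img):
    """Estimate illumination chromaticity under the gray-world assumption
    (scene average is achromatic). img: (H, W, 3) linear RGB array."""
    mean_rgb = img.reshape(-1, 3).mean(axis=0)
    return mean_rgb / mean_rgb.sum()       # (r, g, b), sums to one

def von_kries_correct(img, illum_chroma):
    """Diagonal-model correction toward a neutral illuminant."""
    gains = illum_chroma.mean() / illum_chroma
    return np.clip(img * gains, 0.0, 1.0)
```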
Image quality analysis of a color LCD as well as a monochrome LCD using a Foveon color CMOS camera
NASA Astrophysics Data System (ADS)
Dallas, William J.; Roehrig, Hans; Krupinski, Elizabeth A.
2007-09-01
We have combined a CMOS color camera with special software to compose a multi-functional image-quality analysis instrument. It functions as a colorimeter as well as measuring modulation transfer functions (MTFs) and noise power spectra (NPS). It is presently being expanded to examine fixed-pattern noise and temporal noise. The CMOS camera has 9 μm square pixels and a pixel matrix of 2268 x 1512 x 3. The camera uses a sensor that has co-located pixels for all three primary colors. We have imaged sections of both a color and a monochrome LCD monitor onto the camera sensor with LCD-pixel-size to camera-pixel-size ratios of both 12:1 and 17.6:1. When used as an imaging colorimeter, each camera pixel is calibrated to provide CIE color coordinates and tristimulus values. This capability permits the camera to simultaneously determine chromaticity in different locations on the LCD display. After color calibration with a CS-200 colorimeter, the color coordinates of the display's primaries determined from the camera's luminance response are very close to those found from the CS-200. Only the color coordinates of the display's white point were in error. For calculating the MTF, a vertical or horizontal line is displayed on the monitor. The captured image is color-matrix preprocessed, Fourier transformed, then post-processed. For the NPS, a uniform image is displayed on the monitor. Again, the image is pre-processed, transformed, and processed. Our measurements show that the horizontal MTFs of both displays have a larger negative slope than the vertical MTFs, indicating that the horizontal MTFs are poorer than the vertical MTFs. However, the modulations at the Nyquist frequency appear lower for the color LCD than for the monochrome LCD. The spatial noise of the color display in both directions is larger than that of the monochrome display. Attempts were also made to separate the total noise into spatial and temporal components by subtracting images taken at exactly the same exposure. Temporal noise appears to be significantly lower than spatial noise.
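A simplified sketch of the line-based MTF measurement described above (averaging a displayed line into a line spread function and Fourier transforming it); windowing, color-matrix preprocessing, and calibration are omitted:

```python
import numpy as np

def mtf_from_line_image(line_img, axis=0):
    """Estimate an MTF from an image of a displayed thin line.
    Averaging along the line's length (axis) yields a line spread
    function (LSF); its normalized Fourier magnitude approximates
    the MTF."""
    lsf = line_img.mean(axis=axis)
    lsf = lsf - lsf.min()                  # crude baseline removal
    spectrum = np.abs(np.fft.rfft(lsf))
    return spectrum / spectrum[0]          # normalize so MTF(0) = 1
```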
Dichotomous Results Using Polarized Illumination with Single Chip Color Cameras
2013-01-01
[Record garbled in extraction; only figure-caption fragments survive: the photoelastic response is both strain- and chemically-induced at an interior laminate-layer interface; the size and location of the pattern are crucial; Figure 8 shows a single laminate; Figure 9 shows the observed response isolated to a single layer of the laminate structure, with the analyzer in front of the base.]
Efficient color correction method for smartphone camera-based health monitoring application.
Duc Dang; Chae Ho Cho; Daeik Kim; Oh Seok Kwon; Jo Woon Chong
2017-07-01
Smartphone health monitoring applications have recently attracted attention owing to the rapid development of smartphone hardware and software performance. However, the color characteristics of images captured by different smartphone models differ from one another, and this difference may give inconsistent health monitoring results when such applications monitor physiological information using the embedded smartphone cameras. In this paper, we investigate the differences in the color properties of images captured by different smartphone models and apply a color correction method to adjust the dissimilar color values obtained from different smartphone cameras. Experimental results show that images corrected with this method have much smaller color intensity errors than images without correction. These results can be applied to enhance the consistency of smartphone camera-based health monitoring applications by reducing color intensity errors among images obtained from different smartphones.
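The abstract does not specify the correction model; a common choice, shown here as an assumption, is a least-squares 3 by 3 matrix that maps one phone's chart measurements onto a reference phone's:

```python
import numpy as np

# RGB means of the same 24-patch chart captured by two phones
# (placeholder data; one row per patch).
rgb_phone = np.random.rand(24, 3)          # phone to be corrected
rgb_reference = np.random.rand(24, 3)      # reference phone

# Fit M minimizing ||rgb_phone @ M - rgb_reference||.
M, *_ = np.linalg.lstsq(rgb_phone, rgb_reference, rcond=None)

def correct(img):
    """Apply the fitted 3x3 correction to an (H, W, 3) image."""
    return np.clip(img @ M, 0.0, 1.0)
```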
Imaging Detonations of Explosives
2016-04-01
[Record garbled in extraction; recoverable fragments follow.] ...made using a full-color single-camera pyrometer in which wavelength resolution is achieved using the Bayer-type mask covering the sensor chip [ref. 17] and a... For many CHNO-based explosives (e.g., TNT [C7H5N3O6], the formulation C-4 [92% RDX, C3H6N6O6]), hot detonation products are mainly soot and permanent... (unreferenced). Essentially, two light sensors (cameras), each filtered over a narrow wavelength region, observe an event over the same line of sight. The...
A Coded Structured Light System Based on Primary Color Stripe Projection and Monochrome Imaging
Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano
2013-01-01
Coded Structured Light techniques represent one of the most attractive research areas within the field of optical metrology. The coding procedures are typically based on projecting either a single pattern or a temporal sequence of patterns to provide 3D surface data. In this context, multi-slit or stripe colored patterns may be used with the aim of reducing the number of projected images. However, color imaging sensors require the use of calibration procedures to address crosstalk effects between different channels and to reduce the chromatic aberrations. In this paper, a Coded Structured Light system has been developed by integrating a color stripe projector and a monochrome camera. A discrete coding method, which combines spatial and temporal information, is generated by sequentially projecting and acquiring a small set of fringe patterns. The method allows the concurrent measurement of geometrical and chromatic data by exploiting the benefits of using a monochrome camera. The proposed methodology has been validated by measuring nominal primitive geometries and free-form shapes. The experimental results have been compared with those obtained by using a time-multiplexing gray code strategy. PMID:24129018
Multi-color pyrometry imaging system and method of operating the same
Estevadeordal, Jordi; Nirmalan, Nirm Velumylum; Tralshawala, Nilesh; Bailey, Jeremy Clyde
2017-03-21
A multi-color pyrometry imaging system for a high-temperature asset includes at least one viewing port in optical communication with at least one high-temperature component of the high-temperature asset. The system also includes at least one camera device in optical communication with the at least one viewing port. The at least one camera device includes a camera enclosure and at least one camera aperture defined in the camera enclosure. The at least one camera aperture is in optical communication with the at least one viewing port. The at least one camera device also includes a multi-color filtering mechanism coupled to the enclosure. The multi-color filtering mechanism is configured to sequentially transmit photons within a first predetermined wavelength band and transmit photons within a second predetermined wavelength band that is different than the first predetermined wavelength band.
Compact fluorescence and white-light imaging system for intraoperative visualization of nerves
NASA Astrophysics Data System (ADS)
Gray, Dan; Kim, Evgenia; Cotero, Victoria; Staudinger, Paul; Yazdanfar, Siavash; Tan Hehir, Cristina
2012-02-01
Fluorescence image guided surgery (FIGS) allows intraoperative visualization of critical structures, with applications spanning neurology, cardiology and oncology. An unmet clinical need is prevention of iatrogenic nerve damage, a major cause of post-surgical morbidity. Here we describe the advancement of FIGS imaging hardware, coupled with a custom nerve-labeling fluorophore (GE3082), to bring FIGS nerve imaging closer to clinical translation. The instrument comprises a 405 nm laser and a white light LED source for excitation and illumination. A single 90 gram color CCD camera is coupled to a 10 mm surgical laparoscope for image acquisition. Synchronization of the light source and camera allows simultaneous visualization of reflected white light and fluorescence using only a single camera. The imaging hardware and contrast agent were evaluated in rats during in situ surgical procedures.
Stokes image reconstruction for two-color microgrid polarization imaging systems.
Lemaster, Daniel A
2011-07-18
The Air Force Research Laboratory has developed a new microgrid polarization imaging system capable of simultaneously reconstructing linear Stokes parameter images in two colors on a single focal plane array. In this paper, an effective method for extracting Stokes images is presented for this type of camera system. It is also shown that correlations between the color bands can be exploited to significantly increase overall spatial resolution. Test data is used to show the advantages of this approach over bilinear interpolation. The bounds (in terms of available reconstruction bandwidth) on image resolution are also provided.
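For orientation, the naive way to form linear Stokes images from the four microgrid polarizer orientations is shown below; the paper's contribution goes further, exploiting inter-band correlations to increase resolution, which this sketch does not attempt:

```python
def linear_stokes(i0, i45, i90, i135):
    """Form linear Stokes images from the four polarizer orientations of a
    microgrid FPA (each i* is a full-size image of one orientation,
    e.g. a numpy array)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)     # total intensity
    s1 = i0 - i90                          # horizontal minus vertical
    s2 = i45 - i135                        # +45 minus -45 degrees
    return s0, s1, s2
```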
NASA Astrophysics Data System (ADS)
Mehta, Dalip Singh; Sharma, Anuradha; Dubey, Vishesh; Singh, Veena; Ahmad, Azeem
2016-03-01
We present a single-shot white light interference microscopy technique for the quantitative phase imaging (QPI) of biological cells and tissues. A common-path white light interference microscope is developed, and a colorful white light interferogram is recorded by a three-chip color CCD camera. The recorded white light interferogram is decomposed into its red, green, and blue (RGB) wavelength component interferograms, which are processed to find the refractive index (RI) for the different color wavelengths. The decomposed interferograms are analyzed using the local model fitting (LMF) algorithm developed for reconstructing the phase map from a single interferogram. LMF is a slightly off-axis interferometric QPI method that employs only a single image, so it is fast and accurate. The present method is very useful for dynamic processes where the path length changes at the millisecond level. From the single interferogram, wavelength-dependent quantitative phase images of human red blood cells (RBCs) are reconstructed and the refractive index is determined. The LMF algorithm is simple to implement and computationally efficient. The results are compared with conventional phase shifting interferometry and Hilbert transform techniques.
Full-color OLED on silicon microdisplay
NASA Astrophysics Data System (ADS)
Ghosh, Amalkumar P.
2002-02-01
eMagin has developed numerous enhancements to organic light emitting diode (OLED) technology, including a unique, up-emitting structure for OLED-on-silicon microdisplay devices. Recently, eMagin has fabricated full color SVGA+ resolution OLED microdisplays on silicon, with over 1.5 million color elements. The display is based on white light emission from the OLED followed by LCD-type red, green and blue color filters. The color filters are patterned directly on the OLED devices following suitable thin film encapsulation, and the drive circuits are built directly on single crystal silicon. The resultant color OLED technology, with its high efficiency, high brightness, and low power consumption, is ideally suited for near-to-the-eye applications such as wearable PCs, wireless Internet and mobile phone applications, portable DVD viewers, digital cameras and other emerging applications.
BOREAS Level-0 C-130 Aerial Photography
NASA Technical Reports Server (NTRS)
Newcomer, Jeffrey A.; Dominguez, Roseanne; Hall, Forrest G. (Editor)
2000-01-01
For BOReal Ecosystem-Atmosphere Study (BOREAS), C-130 and other aerial photography was collected to provide finely detailed and spatially extensive documentation of the condition of the primary study sites. The NASA C-130 Earth Resources aircraft can accommodate two mapping cameras during flight, each of which can be fitted with 6- or 12-inch focal-length lenses and black-and-white, natural-color, or color-IR film, depending upon requirements. Both cameras were often in operation simultaneously, although sometimes only the lower resolution camera was deployed. When both cameras were in operation, the higher resolution camera was often used in a more limited fashion. The acquired photography covers the period of April to September 1994. The aerial photography was delivered as rolls of large format (9 x 9 inch) color transparency prints, with imagery from multiple missions (hundreds of prints) often contained within a single roll. A total of 1533 frames were collected from the C-130 platform for BOREAS in 1994. Note that the level-0 C-130 transparencies are not contained on the BOREAS CD-ROM set. An inventory file is supplied on the CD-ROM to inform users of all the data that were collected. Some photographic prints were made from the transparencies. In addition, BORIS staff digitized a subset of the transparencies and stored the images in JPEG format. The CD-ROM set contains a small subset of the collected aerial photography that was digitally scanned and stored as JPEG files for most tower and auxiliary sites in the NSA and SSA. See Section 15 for information about how to acquire additional imagery.
Use of a color CMOS camera as a colorimeter
NASA Astrophysics Data System (ADS)
Dallas, William J.; Roehrig, Hans; Redford, Gary R.
2006-08-01
In radiology diagnosis, film is being quickly replaced by computer monitors as the display medium for all imaging modalities. Increasingly, these monitors are color instead of monochrome. It is important to have instruments available to characterize the display devices in order to guarantee reproducible presentation of image material. We are developing an imaging colorimeter based on a commercially available color digital camera. The camera uses a sensor that has co-located pixels in all three primary colors.
Filters for Color Imaging and for Science
2013-03-18
The color cameras on NASA Mars rover Curiosity, including the pair that make up the rover Mastcam instrument, use the same type of Bayer pattern RGB filter as found in typical commercial color cameras.
Superresolution with the focused plenoptic camera
NASA Astrophysics Data System (ADS)
Georgiev, Todor; Chunev, Georgi; Lumsdaine, Andrew
2011-03-01
Digital images from a CCD or CMOS sensor with a color filter array must undergo a demosaicing process to combine the separate color samples into a single color image. This interpolation process can interfere with the subsequent superresolution process. Plenoptic superresolution, which relies on precise sub-pixel sampling across captured microimages, is particularly sensitive to such resampling of the raw data. In this paper we present an approach for superresolving plenoptic images that takes place at the time of demosaicing the raw color image data. Our approach exploits the interleaving provided by typical color filter arrays (e.g., Bayer filter) to further refine plenoptic sub-pixel sampling. Our rendering algorithm treats the color channels in a plenoptic image separately, which improves final superresolution by a factor of two. With appropriate plenoptic capture we show the theoretical possibility for rendering final images at full sensor resolution.
Performance of Color Camera Machine Vision in Automated Furniture Rough Mill Systems
D. Earl Kline; Agus Widoyoko; Janice K. Wiedenbeck; Philip A. Araman
1998-01-01
The objective of this study was to evaluate the performance of color camera machine vision for lumber processing in a furniture rough mill. The study used 134 red oak boards to compare the performance of automated gang-rip-first rough mill yield based on a prototype color camera lumber inspection system developed at Virginia Tech with both estimated optimum rough mill...
NASA Technical Reports Server (NTRS)
2004-01-01
The color image on the lower left from the panoramic camera on the Mars Exploration Rover Opportunity shows the 'Lily Pad' bounce-mark area at Meridiani Planum, Mars. This image was acquired on the 3rd sol, or martian day, of Opportunity's mission (Jan. 26, 2004). The upper left image is a monochrome (single filter) image from the rover's panoramic camera, showing regions from which spectra were extracted from the 'Lily Pad' area. As shown by the line graph on the right, the green spectrum is from the undisturbed surface and the red spectrum is from the airbag bounce mark.
[True color accuracy in digital forensic photography].
Ramsthaler, Frank; Birngruber, Christoph G; Kröll, Ann-Katrin; Kettner, Mattias; Verhoff, Marcel A
2016-01-01
Forensic photographs must not only be unaltered and authentic, capture context-relevant content, and meet certain minimum requirements for image sharpness and information density; color accuracy also plays an important role, for instance, in the assessment of injuries or taphonomic stages, or in the identification and evaluation of traces from photos. The perception of color not only varies subjectively from person to person, but, as a discrete property of an image, color in digital photos is also influenced to a considerable extent by technical factors such as lighting, acquisition settings, camera, and output medium (print, monitor). For these reasons, consistent color accuracy has so far been limited in digital photography. Because images usually contain a wealth of color information, especially for complex or composite colors or shades of color, and the wavelength-dependent sensitivity to factors such as light and shadow may vary between cameras, the usefulness of issuing general recommendations for camera capture settings is limited. Our results indicate that true image colors can best and most realistically be captured with the SpyderCheckr technical calibration tool for digital cameras tested in this study. Apart from aspects such as the simplicity and quickness of the calibration procedure, a further advantage of the tool is that the results are independent of the camera used and can also be used for the color management of output devices such as monitors and printers. The SpyderCheckr color-code patches allow true colors to be captured more realistically than with a manual white balance tool or an automatic flash. We therefore recommend that the use of a color management tool be considered for the acquisition of all images that demand high true color accuracy (in particular in the setting of injury documentation).
Video systems for real-time oil-spill detection
NASA Technical Reports Server (NTRS)
Millard, J. P.; Arvesen, J. C.; Lewis, P. L.; Woolever, G. F.
1973-01-01
Three airborne television systems are being developed to evaluate techniques for oil-spill surveillance. These include a conventional TV camera, two cameras operating in a subtractive mode, and a field-sequential camera. False-color enhancement and wavelength and polarization filtering are also employed. The first of a series of flight tests indicates that an appropriately filtered conventional TV camera is a relatively inexpensive method of improving contrast between oil and water. False-color enhancement improves the contrast, but the problem caused by sun glint now limits the application to overcast days. Future effort will be aimed toward a one-camera system. Solving the sun-glint problem and developing the field-sequential camera into an operable system offers potential for color 'flagging' oil on water.
Field-Sequential Color Converter
NASA Technical Reports Server (NTRS)
Studer, Victor J.
1989-01-01
Electronic conversion circuit enables display of signals from field-sequential color-television camera on color video camera. Designed for incorporation into color-television monitor on Space Shuttle, circuit weighs less, takes up less space, and consumes less power than previous conversion equipment. Incorporates state-of-art memory devices, also used in terrestrial stationary or portable closed-circuit television systems.
Measurement of soil color: a comparison between smartphone camera and the Munsell color charts
USDA-ARS?s Scientific Manuscript database
Soil color is one of the most valuable soil properties for assessing and monitoring soil health. Here we present the results of tests of a new soil color app for mobile phones. The comparisons include various smartphones cameras under different natural illumination conditions (sunny and cloudy) and ...
NASA Astrophysics Data System (ADS)
Maloney, P. R.; Czakon, N. G.; Day, P. K.; Duan, R.; Gao, J.; Glenn, J.; Golwala, S.; Hollister, M.; LeDuc, H. G.; Mazin, B.; Noroozian, O.; Nguyen, H. T.; Sayers, J.; Schlaerth, J.; Vaillancourt, J. E.; Vayonakis, A.; Wilson, P.; Zmuidzinas, J.
2009-12-01
The MKID Camera project is a collaborative effort of Caltech, JPL, the University of Colorado, and UC Santa Barbara to develop a large-format, multi-color millimeter and submillimeter-wavelength camera for astronomy using microwave kinetic inductance detectors (MKIDs). These are superconducting micro-resonators fabricated from thin aluminum and niobium films. We couple the MKIDs to multi-slot antennas and measure the change in surface impedance produced by photon-induced breaking of Cooper pairs. The readout is almost entirely at room temperature and can be highly multiplexed; in principle hundreds or even thousands of resonators could be read out on a single feedline. The camera will have 576 spatial pixels that image simultaneously in four bands at 750, 850, 1100 and 1300 microns. It is scheduled for deployment at the Caltech Submillimeter Observatory in the summer of 2010. We present an overview of the camera design and readout and describe the current status of testing and fabrication.
Analysis of crystalline lens coloration using a black and white charge-coupled device camera.
Sakamoto, Y; Sasaki, K; Kojima, M
1994-01-01
To analyze lens coloration in vivo, we used a new type of Scheimpflug camera based on a black-and-white charge-coupled device (CCD) camera, and we propose a new methodology. Scheimpflug images of the lens were taken three times, through red (R), green (G), and blue (B) filters, respectively. The three images corresponding to the R, G, and B channels were combined into one image on the cathode-ray tube (CRT) display. The spectral transmittance of the tricolor filters and the spectral sensitivity of the CCD camera were used to correct the scattered-light intensity of each image. Coloration of the lens was expressed on a CIE standard chromaticity diagram. The lens coloration of seven eyes analyzed by this method showed values almost the same as those obtained by the previous method using color film.
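A minimal sketch of expressing a corrected RGB triplet on the CIE chromaticity diagram; for illustration it uses the standard linear-sRGB-to-XYZ matrix rather than the camera- and filter-specific correction the study performed:

```python
import numpy as np

# Standard linear-sRGB -> CIE XYZ (D65) matrix, used here only for
# illustration; the study instead corrected the channel intensities with
# the measured filter transmittances and CCD spectral sensitivity.
M_RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                         [0.2126, 0.7152, 0.0722],
                         [0.0193, 0.1192, 0.9505]])

def chromaticity_xy(rgb_linear):
    """Map a linear RGB triplet to CIE 1931 (x, y) chromaticity."""
    X, Y, Z = M_RGB_TO_XYZ @ np.asarray(rgb_linear, dtype=float)
    return X / (X + Y + Z), Y / (X + Y + Z)
```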
Akkaynak, Derya; Treibitz, Tali; Xiao, Bei; Gürkan, Umut A.; Allen, Justine J.; Demirci, Utkan; Hanlon, Roger T.
2014-01-01
Commercial off-the-shelf digital cameras are inexpensive and easy-to-use instruments that can be used for quantitative scientific data acquisition if images are captured in raw format and processed so that they maintain a linear relationship with scene radiance. Here we describe the image-processing steps required for consistent data acquisition with color cameras. In addition, we present a method for scene-specific color calibration that increases the accuracy of color capture when a scene contains colors that are not well represented in the gamut of a standard color-calibration target. We demonstrate applications of the proposed methodology in the fields of biomedical engineering, artwork photography, perception science, marine biology, and underwater imaging. PMID:24562030
Theoretical colours and isochrones for some Hubble Space Telescope colour systems. II
NASA Technical Reports Server (NTRS)
Paltoglou, G.; Bell, R. A.
1991-01-01
A grid of synthetic surface brightness magnitudes for 14 bandpasses of the Hubble Space Telescope Faint Object Camera is presented, as well as a grid of UBV, uvby, and Faint Object Camera surface brightness magnitudes derived from the Gunn-Stryker spectrophotometric atlas. The synthetic colors are used to examine the transformations between the ground-based Johnson UBV and Stromgren uvby systems and the Faint Object Camera UBV and uvby. Two new four-color systems, similar to the Stromgren system, are proposed for the determination of abundance, temperature, and surface gravity. The synthetic colors are also used to calculate color-magnitude isochrones from the list of theoretical tracks provided by VandenBerg and Bell (1990). It is shown that by using the appropriate filters it is possible to minimize the dependence of this color difference on metallicity. The effects of interstellar reddening on various Faint Object Camera colors are analyzed as well as the observational requirements for obtaining data of a given signal-to-noise for each of the 14 bandpasses.
Plenoptic camera image simulation for reconstruction algorithm verification
NASA Astrophysics Data System (ADS)
Schwiegerling, Jim
2014-09-01
Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to yield a color and exposure level for that pixel. To speed processing, three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.
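A toy, self-contained version of the backward-tracing loop described above, with the scene approximated as textured depth planes; the geometry and texture are placeholders, not the paper's raytracing package:

```python
import numpy as np

class TexturedPlane:
    """Plane at depth z with a toy procedural texture, standing in for
    the depth-plane scene approximation described in the abstract."""
    def __init__(self, z):
        self.z = z
    def shade(self, origin, direction):
        t = (self.z - origin[2]) / direction[2]   # ray-plane intersection
        if t <= 0:
            return None
        x, y, _ = origin + t * direction
        return np.array([0.5 + 0.5 * np.sin(3 * x),   # toy RGB texture
                         0.5 + 0.5 * np.cos(3 * y),
                         0.5])

def render_pixel(rays, planes):
    """Average the radiance of all backward-traced rays for one pixel."""
    total = np.zeros(3)
    for origin, direction in rays:
        for plane in sorted(planes, key=lambda p: p.z):  # nearest wins
            color = plane.shade(origin, direction)
            if color is not None:
                total += color
                break
    return total / len(rays)

rays = [(np.zeros(3), np.array([u, v, 1.0]))     # toy pencil of rays
        for u in (-0.01, 0.01) for v in (-0.01, 0.01)]
pixel = render_pixel(rays, [TexturedPlane(z) for z in (5.0, 10.0)])
```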
Color calibration of an RGB camera mounted in front of a microscope with strong color distortion.
Charrière, Renée; Hébert, Mathieu; Trémeau, Alain; Destouches, Nathalie
2013-07-20
This paper shows that color calibration of an RGB camera can be achieved even when the optical system in front of the camera introduces strong color distortion. In the present case, the optical system is a microscope containing a halogen lamp, with nonuniform irradiance on the viewed surface. The calibration method proposed in this work is based on an existing method, but it is preceded by a three-step preprocessing of the RGB images that extracts relevant color information from the strongly distorted images, accounting in particular for the nonuniform irradiance map and for the perturbing texture that the surface topology of standard color calibration charts exhibits at the micrometric scale. The proposed color calibration process consists of first computing the average color of the color-chart patches viewed under the microscope; then applying white balance, gamma correction, and saturation enhancement; and finally applying a third-order polynomial regression color calibration transform. Despite the unusual conditions for color calibration, fairly good performance is achieved with a 48-patch Lambertian color chart: an average CIE-94 color difference on the color-chart colors of less than 2.5 units is obtained.
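A compact sketch of the final regression step, assuming the three preprocessing steps have already been applied and using placeholder patch data; scikit-learn supplies the third-order polynomial expansion:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Average preprocessed RGB of each chart patch and its reference values
# (placeholder data; 48 patches as in the chart used by the paper).
rgb_measured = np.random.rand(48, 3)
reference = np.random.rand(48, 3)

poly = PolynomialFeatures(degree=3)
model = LinearRegression().fit(poly.fit_transform(rgb_measured), reference)

def calibrate(rgb):
    """Apply the third-order polynomial color calibration transform."""
    return model.predict(poly.transform(np.atleast_2d(rgb)))
```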
Low-cost camera modifications and methodologies for very-high-resolution digital images
USDA-ARS?s Scientific Manuscript database
Aerial color and color-infrared photography are usually acquired at high altitude so the ground resolution of the photographs is < 1 m. Moreover, current color-infrared cameras and manned aircraft flight time are expensive, so the objective is the development of alternative methods for obtaining ve...
Calibration Image of Earth by Mars Color Imager
NASA Technical Reports Server (NTRS)
2005-01-01
Three days after the Mars Reconnaissance Orbiter's Aug. 12, 2005, launch, the NASA spacecraft was pointed toward Earth and the Mars Color Imager camera was powered up to acquire a suite of color and ultraviolet images of Earth and the Moon. When it gets to Mars, the Mars Color Imager's main objective will be to obtain daily global color and ultraviolet images of the planet to observe martian meteorology by documenting the occurrence of dust storms, clouds, and ozone. This camera will also observe how the martian surface changes over time, including changes in frost patterns and surface brightness caused by dust storms and dust devils. The purpose of acquiring an image of Earth and the Moon just three days after launch was to help the Mars Color Imager science team obtain a measure, in space, of the instrument's sensitivity, as well as to check that no contamination occurred on the camera during launch. Prior to launch, the team determined that, three days out from Earth, the planet would only be about 4.77 pixels across, and the Moon would be less than one pixel in size, as seen from the Mars Color Imager's wide-angle perspective. If the team waited any longer than three days to test the camera's performance in space, Earth would be too small to obtain meaningful results. The images were acquired by turning Mars Reconnaissance Orbiter toward Earth, then slewing the spacecraft so that the Earth and Moon would pass before each of the five color and two ultraviolet filters of the Mars Color Imager. The distance to Earth was about 1,170,000 kilometers (about 727,000 miles). This image shows a color composite view of Mars Color Imager's image of Earth. As expected, it covers only five pixels. This color view has been enlarged five times. The Sun was illuminating our planet from the left, thus only one quarter of Earth is seen from this perspective. North America was in daylight and facing toward the camera at the time the picture was taken; the data from the camera were being transmitted in real time to the Deep Space Network antennas in Goldstone, California.
Cai, Fuhong; Lu, Wen; Shi, Wuxiong; He, Sailing
2017-11-15
Spatially explicit data are essential for remote sensing of ecological phenomena, and recent innovations in mobile device platforms have led to an upsurge in on-site rapid detection. For instance, CMOS chips in smart phones and digital cameras serve as excellent sensors for scientific research. In this paper, a mobile device-based imaging spectrometer module (weighing about 99 g) is developed and mounted on a single-lens reflex camera. Utilizing this lightweight module, as well as commonly used photographic equipment, we demonstrate its utility through a series of on-site multispectral imaging tasks, including ocean (or lake) water-color sensing and plant reflectance measurement. From these experiments we obtain 3D spectral image cubes, which can be further analyzed for environmental monitoring. Moreover, our system can be applied to many kinds of cameras, e.g., aerial and underwater cameras. Therefore, any camera can be upgraded to an imaging spectrometer with the help of our miniaturized module. We believe it has the potential to become a versatile tool for on-site investigation in many applications.
Leeuw, Thomas; Boss, Emmanuel
2018-01-16
HydroColor is a mobile application that utilizes a smartphone's camera and auxiliary sensors to measure the remote sensing reflectance of natural water bodies. HydroColor uses the smartphone's digital camera as a three-band radiometer. Users are directed by the application to collect a series of three images. These images are used to calculate the remote sensing reflectance in the red, green, and blue broad wavelength bands. As with satellite measurements, the reflectance can be inverted to estimate the concentration of absorbing and scattering substances in the water, which are predominately composed of suspended sediment, chlorophyll, and dissolved organic matter. This publication describes the measurement method and investigates the precision of HydroColor's reflectance and turbidity estimates compared to commercial instruments. It is shown that HydroColor can measure the remote sensing reflectance to within 26% of a precision radiometer and turbidity within 24% of a portable turbidimeter. HydroColor distinguishes itself from other water quality camera methods in that its operation is based on radiometric measurements instead of image color. HydroColor is one of the few mobile applications to use a smartphone as a completely objective sensor, as opposed to subjective user observations or color matching using the human eye. This makes HydroColor a powerful tool for crowdsourcing of aquatic optical data.
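A sketch of the reflectance computation from the three collected images; the sky-glint factor (0.028) and the 18% gray card reflectance follow the published HydroColor method, but this is an illustration with placeholder inputs, not the app's code:

```python
import numpy as np

def remote_sensing_reflectance(L_water, L_sky, L_card,
                               rho=0.028, R_card=0.18):
    """Per-band remote sensing reflectance (1/sr) from the three images
    HydroColor collects: water surface, sky, and an 18% gray card.
    L_* are relative radiances (e.g., mean linearized RGB of the region
    of interest); rho is the assumed air-water surface reflectance factor
    used to remove sky glint."""
    L_water, L_sky, L_card = map(np.asarray, (L_water, L_sky, L_card))
    E_d = np.pi * L_card / R_card          # downwelling irradiance estimate
    return (L_water - rho * L_sky) / E_d

rrs = remote_sensing_reflectance([0.20, 0.25, 0.22],   # placeholder means
                                 [0.60, 0.65, 0.70],
                                 [0.30, 0.30, 0.30])
```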
Chromatic Modulator for a High-Resolution CCD or APS
NASA Technical Reports Server (NTRS)
Hartley, Frank; Hull, Anthony
2008-01-01
A chromatic modulator has been proposed to enable the separate detection of the red, green, and blue (RGB) color components of the same scene by a single charge-coupled device (CCD), active-pixel sensor (APS), or similar electronic image detector. Traditionally, the RGB color-separation problem in an electronic camera has been solved by use of either (1) fixed color filters over three separate image detectors; (2) a filter wheel that repeatedly imposes a red, then a green, then a blue filter over a single image detector; or (3) different fixed color filters over adjacent pixels. The use of separate image detectors necessitates precise registration of the detectors and the use of complicated optics; filter wheels are expensive and add considerably to the bulk of the camera; and fixed pixelated color filters reduce spatial resolution and introduce color-aliasing effects. The proposed chromatic modulator would not exhibit any of these shortcomings. The proposed chromatic modulator would be an electromechanical device fabricated by micromachining. It would include a filter having a spatially periodic pattern of RGB strips at a pitch equal to that of the pixels of the image detector. The filter would be placed in front of the image detector, supported at its periphery by a spring suspension and electrostatic comb drive. The spring suspension would bias the filter toward a middle position in which each filter strip would be registered with a row of pixels of the image detector. Hard stops would limit the excursion of the spring suspension to precisely one pixel row above and one pixel row below the middle position. In operation, the electrostatic comb drive would be actuated to repeatedly snap the filter to the upper extreme, middle, and lower extreme positions. This action would repeatedly place a succession of the differently colored filter strips in front of each pixel of the image detector. To simplify the processing, it would be desirable to encode information on the color of the filter strip over each row (or at least over some representative rows) of pixels at a given instant of time in synchronism with the pixel output at that instant.
Transient full-field vibration measurement using spectroscopical stereo photogrammetry.
Yue, Kaiduan; Li, Zhongke; Zhang, Ming; Chen, Shan
2010-12-20
In contrast with other vibration measurement methods, a novel spectroscopical photogrammetric approach is proposed. Two colored light filters and a CCD color camera are used to perform the function of two traditional cameras. A new calibration method is then presented; it focuses on the vibrating object rather than the camera and has the advantage of higher accuracy than traditional camera calibration. The test results have shown an accuracy of 0.02 mm.
Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy
NASA Technical Reports Server (NTRS)
Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)
2011-01-01
Computed tomography imaging spectrometers ("CTIS"s) having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3 digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.
Measurement of luminance noise and chromaticity noise of LCDs with a colorimeter and a color camera
NASA Astrophysics Data System (ADS)
Roehrig, H.; Dallas, W. J.; Krupinski, E. A.; Redford, Gary R.
2007-09-01
This communication focuses on the physical evaluation of the image quality of displays for applications in medical imaging. In particular, we were interested in the luminance noise as well as the chromaticity noise of LCDs. Luminance noise has been encountered in the study of monochrome LCDs for some time, but chromaticity noise is a new type of noise that was first encountered when monochrome and color LCDs were compared in an ROC study. In the present study, one color and one monochrome 3-megapixel LCD were studied. Both were DICOM calibrated with equal dynamic range. We used a Konica Minolta Chroma Meter CS-200 as well as a Foveon color camera to estimate the luminance and chrominance variations of the displays. We also used a simulation experiment to estimate luminance noise. The measurements with the colorimeter were consistent. The measurements with the Foveon color camera were very preliminary, as color cameras had never before been used for image quality measurements; however, they were extremely promising. The measurements with the colorimeter and the simulation results showed that the luminance and chromaticity noise of the color LCD were larger than those of the monochrome LCD. Provided that an adequate calibration method and an image QA/QC program for color displays are available, we expect color LCDs may be ready for radiology in the very near future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fraser, Wesley C.; Brown, Michael E.; Glass, Florian, E-mail: wesley.fraser@nrc.ca
2015-05-01
Here, we present additional photometry of targets observed as part of the Hubble Wide Field Camera 3 (WFC3) Test of Surfaces in the Outer Solar System. Twelve targets were re-observed with the WFC3 in the optical and NIR wavebands designed to complement those used during the first visit. Additionally, all of the observations originally presented by Fraser and Brown were reanalyzed through the same updated photometry pipeline. A re-analysis of the optical and NIR color distribution reveals a bifurcated optical color distribution and only two identifiable spectral classes, each of which occupies a broad range of colors and has correlated optical and NIR colors, in agreement with our previous findings. We report the detection of significant spectral variations on five targets which cannot be attributed to photometry errors, cosmic rays, point-spread function or sensitivity variations, or other image artifacts capable of explaining the magnitude of the variation. The spectrally variable objects are found to have a broad range of dynamical classes and absolute magnitudes, exhibit a broad range of apparent magnitude variations, and are found in both compositional classes. The spectrally variable objects with sufficiently accurate colors for spectral classification maintain their membership, belonging to the same class at both epochs. 2005 TV189 exhibits a color difference between the two epochs broad enough to span the full range of colors of the neutral class. This strongly argues that the neutral class is a single class with a broad range of colors, rather than the combination of multiple overlapping classes.
Color film spectral properties test experiment for target simulation
NASA Astrophysics Data System (ADS)
Liu, Xinyue; Ming, Xing; Fan, Da; Guo, Wenji
2017-04-01
In hardware-in-the-loop testing of aviation spectral cameras, liquid crystal light valves and digital micromirror devices cannot simulate the spectral characteristics of landmarks. A test system framework based on color film was proposed for testing spectral cameras, and the spectral characteristics of the color film were tested in this paper. The experimental results show that differences exist between the landmark spectrum and the film spectrum curve. However, the peak of the spectrum curve changes according to the color, and the curve is similar to that of standard color targets. Therefore, if the error between the landmark and the film is calibrated and compensated, the film can be utilized in hardware-in-the-loop tests of aviation spectral cameras.
NASA Astrophysics Data System (ADS)
Bachche, Shivaji; Oka, Koichi
2013-06-01
This paper presents a comparative study of various color space models to determine the most suitable one for detection of green sweet peppers. Images were captured using CCD cameras and infrared cameras and processed using Halcon image processing software. An LED ring around the camera neck was used as artificial lighting to enhance the feature parameters. For color images, the CIELab, YIQ, YUV, HSI, and HSV color space models were selected for image processing; for infrared images, the grayscale model was used. Among the color models, the HSV color space achieved the highest detection rate for green sweet peppers, followed by the HSI model, as both provide information in terms of hue/lightness/chroma or hue/lightness/saturation, which is often more relevant for discriminating the fruit from the image at a specific threshold value. Overlapped fruits or fruits covered by leaves were detected more reliably with the HSV model, as the reflection from fruits produced a higher histogram response than the reflection from leaves. The IR 80 optical filter failed to distinguish fruits in the images, as the filter blocks useful feature information. Computation of the 3D coordinates of recognized green sweet peppers was also conducted, in which the Halcon software provided the location and orientation of the fruits accurately. The depth accuracy along the Z axis was examined; a camera-to-fruit distance of 500 to 600 mm was found suitable for computing the depth precisely when the baseline between the two cameras was maintained at 100 mm.
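For readers who want to experiment with the HSV route described above, the sketch below shows a generic HSV threshold-and-contour pass. It uses OpenCV rather than the Halcon software named in the abstract, and the hue/saturation bounds and minimum contour area are illustrative assumptions, not the study's values.

```python
# Hedged sketch: HSV thresholding for green-fruit detection (OpenCV
# stand-in for the Halcon pipeline; all thresholds are assumptions).
import cv2
import numpy as np

def detect_green_regions(bgr_image):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    lower = np.array([35, 40, 40])    # assumed lower H, S, V bounds for "green"
    upper = np.array([85, 255, 255])  # assumed upper bounds
    mask = cv2.inRange(hsv, lower, upper)
    # Remove small speckles before extracting candidate fruit outlines.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) > 500]  # assumed size cutoff
```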
Supercontinuum as a light source for miniaturized endoscopes.
Lu, M K; Lin, H Y; Hsieh, C C; Kao, F J
2016-09-01
In this work, we have successfully implemented supercontinuum-based illumination through single-fiber coupling. The integration of single-fiber illumination with a miniature CMOS sensor forms a very slim and powerful camera module for endoscopic imaging. A set of tests and in vivo animal experiments are conducted accordingly to characterize the corresponding illuminance, spectral profile, intensity distribution, and image quality. The key illumination parameters of the supercontinuum, including color rendering index (CRI: 72%~97%) and correlated color temperature (CCT: 3,100K~5,200K), are modified with external filters and compared with those from an LED light source (CRI~76% & CCT~6,500K). The very high spatial coherence of the supercontinuum allows high luminosity conduction through a single multimode fiber (core size~400μm), whose distal end is fitted with a diffusion tip to broaden the solid angle of illumination (from less than 10° to more than 80°).
Estimation of color modification in digital images by CFA pattern change.
Choi, Chang-Hee; Lee, Hae-Yeoun; Lee, Heung-Kyu
2013-03-10
Extensive studies have been carried out on detecting image forgery such as copy-move, re-sampling, blurring, and contrast enhancement. Although color modification is a common forgery technique, no forensic method for detecting this type of manipulation had been reported. In this paper, we propose a novel algorithm for estimating color modification in images acquired from digital cameras. Most commercial digital cameras are equipped with a color filter array (CFA) for acquiring the color information of each pixel. As a result, the images acquired from such digital cameras include a trace of the CFA pattern. This pattern is composed of the basic red, green, and blue (RGB) colors, and it is changed when color modification is carried out on the image. We designed an advanced intermediate value counting method for measuring the change in the CFA pattern and estimating the extent of color modification. The proposed method is verified experimentally using 10,366 test images. The results confirm the ability of the proposed method to estimate color modification with high accuracy.
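The intuition can be sketched briefly: samples interpolated during demosaicking tend to lie between their neighbors, so counting "intermediate" pixels at each assumed Bayer phase hints at which CFA arrangement the image carries. The code below is a rough heuristic in that spirit, not the paper's intermediate value counting algorithm; the phase convention and the horizontal-neighbor test are assumptions.

```python
# Rough heuristic in the spirit of CFA-trace analysis (not the
# paper's algorithm): interpolated samples are more often strictly
# between their neighbors than genuine sensor samples are.
import numpy as np

def intermediate_fraction(channel, phase):
    """Fraction of same-phase samples lying strictly between their
    horizontal same-phase neighbors; phase = (dy, dx), dy, dx in {0, 1}."""
    dy, dx = phase
    sub = channel[dy::2, dx::2].astype(float)
    left, mid, right = sub[:, :-2], sub[:, 1:-1], sub[:, 2:]
    lo, hi = np.minimum(left, right), np.maximum(left, right)
    return float(np.mean((mid > lo) & (mid < hi)))

# Comparing the four phases of, e.g., the green channel suggests which
# positions were sensed and which were interpolated.
```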
NASA Astrophysics Data System (ADS)
Oertel, D.; Jahn, H.; Sandau, R.; Walter, I.; Driescher, H.
1990-10-01
Objectives of the multifunctional stereo imaging camera (MUSIC) system to be deployed on the Soviet Mars-94 mission are outlined. A high-resolution stereo camera (HRSC) and a wide-angle opto-electronic stereo scanner (WAOSS) are combined in terms of hardware, software, technology aspects, and solutions. Both HRSC and WAOSS are pushbroom instruments containing a single optical system and focal planes with several parallel CCD line sensors. Emphasis is placed on the MUSIC system's stereo capability, its design, mass memory, and data compression. A 1-Gbit memory is divided into two parts: 80 percent for HRSC and 20 percent for WAOSS, while the selected on-line compression strategy is based on macropixel coding and real-time transform coding.
Perceptual Color Characterization of Cameras
Vazquez-Corral, Javier; Connah, David; Bertalmío, Marcelo
2014-01-01
Color camera characterization, mapping outputs from the camera sensors to an independent color space such as CIE XYZ, is an important step in the camera processing pipeline. Until now, this procedure has been primarily solved by using a 3 × 3 matrix obtained via a least-squares optimization. In this paper, we propose to use the spherical sampling method, recently published by Finlayson et al., to perform a perceptual color characterization. In particular, we search for the 3 × 3 matrix that minimizes three different perceptual errors, one pixel-based and two spatially based. For the pixel-based case, we minimize the CIE ΔE error, while for the spatial-based case, we minimize both the S-CIELAB error and the CID error measure. Our results demonstrate an improvement of approximately 3% for the ΔE error, 7% for the S-CIELAB error and 13% for the CID error measures. PMID:25490586
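The least-squares baseline that this paper improves on is easy to state concretely. The sketch below fits a 3 × 3 matrix from measured patch pairs; the array shapes and variable names are assumptions for illustration, and the perceptual, spherical-sampling search of the paper is not reproduced here.

```python
# Baseline colorimetric characterization: least-squares fit of a 3x3
# matrix mapping camera RGB to CIE XYZ over N training patches.
import numpy as np

def fit_characterization_matrix(camera_rgb, target_xyz):
    """camera_rgb, target_xyz: (N, 3) arrays of patch measurements.
    Returns M (3x3) such that target_xyz ~= camera_rgb @ M."""
    M, residuals, rank, _ = np.linalg.lstsq(camera_rgb, target_xyz, rcond=None)
    return M

# Usage: xyz_estimate = raw_rgb_pixels @ fit_characterization_matrix(rgb_patches, xyz_patches)
```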
A novel dual-color bifocal imaging system for single-molecule studies.
Jiang, Chang; Kaul, Neha; Campbell, Jenna; Meyhofer, Edgar
2017-05-01
In this paper, we report the design and implementation of a dual-color bifocal imaging (DBI) system that is capable of acquiring two spectrally distinct, spatially registered images of objects located either in the same focal plane or in two distinct focal planes. We achieve this by separating an image into two channels with distinct chromatic properties and independently focusing both images onto a single CCD camera. The two channels in our device are registered with subpixel accuracy, and long-term, nanometer-precision stability of the registered images was accomplished by reducing their drift to ∼5 nm. We demonstrate the capabilities of our DBI system by imaging biomolecules labeled with spectrally distinct dyes and micro- and nano-sized spheres located in different focal planes.
Otto, Kristen J; Hapner, Edie R; Baker, Michael; Johns, Michael M
2006-02-01
Advances in commercial video technology have improved office-based laryngeal imaging. This study investigates the perceived image quality of a true high-definition (HD) video camera and the effect of magnification on laryngeal videostroboscopy. We performed a prospective, dual-armed, single-blinded analysis of a standard laryngeal videostroboscopic examination comparing 3 separate add-on camera systems: a 1-chip charge-coupled device (CCD) camera, a 3-chip CCD camera, and a true 720p (progressive scan) HD camera. Displayed images were controlled for magnification and image size (20-inch [50-cm] display, red-green-blue, and S-video cable for 1-chip and 3-chip cameras; digital visual interface cable and HD monitor for HD camera). Ten blinded observers were then asked to rate the following 5 items on a 0-to-100 visual analog scale: resolution, color, ability to see vocal fold vibration, sense of depth perception, and clarity of blood vessels. Eight unblinded observers were then asked to rate the difference in perceived resolution and clarity of laryngeal examination images when displayed on a 10-inch (25-cm) monitor versus a 42-inch (105-cm) monitor, again on a visual analog scale. These monitors were controlled for actual resolution capacity. For each item evaluated, randomized block design analysis demonstrated that the 3-chip camera scored significantly better than the 1-chip camera (p < .05). For the categories of color and blood vessel discrimination, the 3-chip camera also scored significantly better than the HD camera (p < .05). For magnification alone, observers rated the 42-inch monitor statistically better than the 10-inch monitor. The expense of new medical technology must be judged against its added value. This study suggests that HD laryngeal imaging may not add significant value over currently available video systems in perceived image quality when a small monitor is used. Although differences in clarity between standard and HD cameras may not be readily apparent on small displays, a large display size coupled with HD technology may enable improved diagnosis of subtle vocal fold lesions and vibratory anomalies.
NASA Technical Reports Server (NTRS)
Clegg, R. H.; Scherz, J. P.
1975-01-01
Successful aerial photography depends on aerial cameras providing acceptable photographs within the cost restrictions of the job. For topographic mapping, where ultimate accuracy is required, only large-format mapping cameras will suffice. For mapping environmental patterns of vegetation, soils, or water pollution, 9-inch cameras often exceed accuracy and cost requirements, and small formats may be better. In choosing the best camera for environmental mapping, relative capabilities and costs must be understood. This study compares the resolution, photo interpretation potential, metric accuracy, and cost of 9-inch, 70mm, and 35mm cameras for obtaining simultaneous color and color infrared photography for environmental mapping purposes.
2006-01-27
The leading hemisphere of Dione displays subtle variations in color across its surface in this false color view. To create this view, ultraviolet, green and infrared images were combined into a single black and white picture that isolates and maps regional color differences. This "color map" was then superposed over a clear-filter image. The origin of the color differences is not yet understood, but may be caused by subtle differences in the surface composition or the sizes of grains making up the icy soil. Terrain visible here is on the moon's leading hemisphere. North on Dione (1,126 kilometers, or 700 miles across) is up and rotated 17 degrees to the right. All images were acquired with the Cassini spacecraft narrow-angle camera on Dec. 24, 2005 at a distance of approximately 597,000 kilometers (371,000 miles) from Dione and at a Sun-Dione-spacecraft, or phase, angle of 21 degrees. Image scale is 4 kilometers (2 miles) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA07688
Formulation of image quality prediction criteria for the Viking lander camera
NASA Technical Reports Server (NTRS)
Huck, F. O.; Jobson, D. J.; Taylor, E. J.; Wall, S. D.
1973-01-01
Image quality criteria are defined and mathematically formulated for the prediction computer program to be developed for the Viking lander imaging experiment. The general objective of broad-band (black and white) imagery to resolve small spatial details and slopes is formulated as the detectability of a right-circular cone with the surface properties of the surrounding terrain. The general objective of narrow-band (color and near-infrared) imagery to observe spectral characteristics is formulated as the minimum detectable albedo variation. The general goal of encompassing, but not exceeding, the range of the scene radiance distribution within a single, commandable camera dynamic-range setting is also considered.
False-Color Image of an Impact Crater on Vesta
2011-08-24
NASA's Dawn spacecraft obtained this false-color image (right) of an impact crater in asteroid Vesta's equatorial region with its framing camera on July 25, 2011. The view on the left is from the camera's clear filter.
Procurement specification color graphic camera system
NASA Technical Reports Server (NTRS)
Prow, G. E.
1980-01-01
The performance and design requirements for a Color Graphic Camera System are presented. The system is a functional part of the Earth Observation Department Laboratory System (EODLS) and will be interfaced with Image Analysis Stations. It will convert the output of a raster-scan computer color terminal into permanent, high-resolution photographic prints and transparencies. The images usually displayed will be remotely sensed LANDSAT imagery.
Color correction pipeline optimization for digital cameras
NASA Astrophysics Data System (ADS)
Bianco, Simone; Bruna, Arcangelo R.; Naccari, Filippo; Schettini, Raimondo
2013-04-01
The processing pipeline of a digital camera converts the RAW image acquired by the sensor into a representation of the original scene that should be as faithful as possible. There are mainly two modules responsible for the color-rendering accuracy of a digital camera: the former is the illuminant estimation and correction module, and the latter is the color matrix transformation aimed at adapting the color response of the sensor to a standard color space. These two modules together form what may be called the color correction pipeline. We design and test new color correction pipelines that exploit different illuminant estimation and correction algorithms that are tuned and automatically selected on the basis of the image content. Since illuminant estimation is an ill-posed problem, illuminant correction is not error-free. An adaptive color matrix transformation module is optimized, taking into account the behavior of the first module, in order to alleviate the amplification of color errors. The proposed pipelines are tested on a publicly available dataset of RAW images. Experimental results show that exploiting the cross-talk between the modules of the pipeline can lead to higher color-rendition accuracy.
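As a concrete example of the first module, the sketch below implements gray-world illuminant estimation with a diagonal (von Kries) correction, one classic algorithm of the kind such pipelines tune and select among; the adaptive selection and the matrix optimization described in the abstract are not reproduced.

```python
# Gray-world illuminant estimation and diagonal correction: assume
# the average scene color is achromatic, and scale channels so the
# estimated illuminant becomes neutral.
import numpy as np

def gray_world_correct(raw_rgb):
    """raw_rgb: (H, W, 3) linear image in [0, 1]. Returns corrected image."""
    illuminant = raw_rgb.reshape(-1, 3).mean(axis=0)   # per-channel mean
    gains = illuminant.mean() / illuminant             # von Kries gains
    return np.clip(raw_rgb * gains, 0.0, 1.0)
```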
Combining color and shape information for illumination-viewpoint invariant object recognition.
Diplaros, Aristeidis; Gevers, Theo; Patras, Ioannis
2006-01-01
In this paper, we propose a new scheme that merges color- and shape-invariant information for object recognition. To obtain robustness against photometric changes, color-invariant derivatives are computed first. Color invariance is an important aspect of any object recognition scheme, as color changes considerably with variations in illumination, object pose, and camera viewpoint. These color-invariant derivatives are then used to obtain similarity-invariant shape descriptors. Shape invariance is equally important as, under a change in camera viewpoint and object pose, the shape of a rigid object undergoes a perspective projection on the image plane. The color and shape invariants are then combined in a multidimensional color-shape context, which is subsequently used as an index. As the indexing scheme makes use of a color-shape invariant context, it provides a highly discriminative information cue robust against varying imaging conditions. The matching function of the color-shape context allows for fast recognition, even in the presence of object occlusion and clutter. The experimental results show that the method recognizes rigid objects with high accuracy in 3-D complex scenes and is robust against changing illumination, camera viewpoint, object pose, and noise.
Advanced High-Definition Video Cameras
NASA Technical Reports Server (NTRS)
Glenn, William
2007-01-01
A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 × 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.
Anaglyph Image Technology As a Visualization Tool for Teaching Geology of National Parks
NASA Astrophysics Data System (ADS)
Stoffer, P. W.; Phillips, E.; Messina, P.
2003-12-01
Anaglyphic stereo viewing technology emerged in the mid-1800s. Anaglyphs use offset images in contrasting colors (typically red and cyan) that, when viewed through color filters, produce a three-dimensional (3-D) image. Modern anaglyph image technology has become increasingly easy to use and relatively inexpensive using digital cameras, scanners, color printing, and common image manipulation software. Perhaps the primary drawbacks of anaglyph images are visualization problems with primary colors (such as flowers, bright clothing, or blue sky) and distortion in large depth-of-field images. However, anaglyphs are more versatile than polarization techniques since they can be printed, displayed on computer screens (such as on websites), or projected with a single projector (as slides or digital images), and red and cyan viewing glasses cost less than polarization glasses and other 3-D viewing alternatives. Anaglyph images are especially well suited for most natural landscapes, such as views dominated by natural earth tones (grays, browns, greens), and they work well for sepia and black and white images (making the conversion of historic stereo photography into anaglyphs easy). We used a simple stereo camera setup incorporating two digital cameras with a rigid base to photograph landscape features in national parks (including arches, caverns, cactus, forests, and coastlines). We also scanned historic stereographic images. Using common digital image manipulation software, we created websites featuring anaglyphs of geologic features from national parks. We used the same images for popular 3-D poster displays at the U.S. Geological Survey Open House 2003 in Menlo Park, CA. Anaglyph photography could easily be used in combined educational outdoor activities and laboratory exercises.
High performance gel imaging with a commercial single lens reflex camera
NASA Astrophysics Data System (ADS)
Slobodan, J.; Corbett, R.; Wye, N.; Schein, J. E.; Marra, M. A.; Coope, R. J. N.
2011-03-01
A high performance gel imaging system was constructed using a digital single lens reflex camera with epi-illumination to image 19 × 23 cm agarose gels with up to 10,000 DNA bands each. It was found to give equivalent performance to a laser scanner in this high throughput DNA fingerprinting application using the fluorophore SYBR Green®. The specificity and sensitivity of the imager and scanner were within 1% using the same band identification software. Low and high cost color filters were also compared and it was found that with care, good results could be obtained with inexpensive dyed acrylic filters in combination with more costly dielectric interference filters, but that very poor combinations were also possible. Methods for determining resolution, dynamic range, and optical efficiency for imagers are also proposed to facilitate comparison between systems.
A target detection multi-layer matched filter for color and hyperspectral cameras
NASA Astrophysics Data System (ADS)
Miyanishi, Tomoya; Preece, Bradley L.; Reynolds, Joseph P.
2018-05-01
In this article, a method for applying matched filters to a three-dimensional hyperspectral data cube is discussed. In many applications, color visible cameras or hyperspectral cameras are used for target detection where the color or spectral optical properties of the imaged materials are partially known in advance. Therefore, matched filtering using spectral data along with shape data is an effective method for detecting certain targets. Since many methods for 2D image filtering have been researched, we propose a multi-layer filter in which ordinary spatially matched filters are applied before the spectral filters. We discuss a way to layer the spectral filters for a 3D hyperspectral data cube, accompanied by a detectability metric for calculating the SNR of the filter. This method is appropriate for visible color cameras and hyperspectral cameras. We also demonstrate an analysis using the Night Vision Integrated Performance Model (NV-IPM) and a Monte Carlo simulation in order to confirm the effectiveness of the filtering in providing a higher output SNR and a lower false alarm rate.
NASA Astrophysics Data System (ADS)
Dietrich, Volker; Hartmann, Peter; Kerz, Franca
2015-03-01
Digital cameras are present everywhere in our daily life. Science, business, and private life cannot be imagined without digital images. The quality of an image is often rated by its color rendering. In order to obtain correct color recognition, a near-infrared cut (IRC) filter must be used to alter the sensitivity of the imaging sensor. Increasing requirements related to color balance and larger angles of incidence (AOI) have enforced the use of new materials, such as the BG6X series, which substitutes for interference-coated filters on D263 thin glass. Although the optical properties are the major design criteria, devices have to withstand numerous environmental conditions during use and manufacturing, such as temperature change, humidity, mechanical shock, and mechanical stress. The new materials show different behavior with respect to all these aspects. They are usually more sensitive to these requirements, to a larger or smaller extent; mechanical strength differs especially. Reliable strength data are of major interest for mobile phone camera applications. As the bending strength of a glass component depends not only on the material itself but mainly on the surface treatment and test conditions, a single number for the strength might be misleading if the conditions of the test and the samples are not described precisely. Therefore, Schott started investigations of the bending strength of various IRC-filter materials. Different test methods were used to obtain statistically relevant data.
Improved Calibration Shows Images True Colors
NASA Technical Reports Server (NTRS)
2015-01-01
Innovative Imaging and Research, located at Stennis Space Center, used a single SBIR contract with the center to build a large-scale integrating sphere, capable of calibrating a whole array of cameras simultaneously, at a fraction of the usual cost for such a device. Through the use of LEDs, the company also made the sphere far more efficient than existing products and able to mimic sunlight.
Development of a vision-based pH reading system
NASA Astrophysics Data System (ADS)
Hur, Min Goo; Kong, Young Bae; Lee, Eun Je; Park, Jeong Hoon; Yang, Seung Dae; Moon, Ha Jung; Lee, Dong Hoon
2015-10-01
pH paper is generally used for pH interpretation in the QC (quality control) process of radiopharmaceuticals. pH paper is easy to handle and useful for small samples such as radioisotopes and radioisotope (RI)-labeled compounds for positron emission tomography (PET). However, pH-paper-based detection methods may have errors due to limitations of eyesight and inaccurate readings. In this paper, we report a new device for pH reading and related software. The proposed pH reading system is developed with a vision algorithm based on an RGB library. The system is divided into two parts. The first is the reading device, which consists of a light source, a CCD camera, and a data acquisition (DAQ) board. To improve the sensitivity, we utilize the three primary colors of an LED (light-emitting diode) in the reading device; using three colors is better than using a single white LED because the response at each wavelength can be evaluated separately. The other part is a graphical user interface (GUI) program for the vision interface and report generation. The GUI program inserts the color codes of the pH paper into a database; in reading mode, the CCD camera then captures the pH paper and compares its color with the RGB database image. The software captures and reports information on the samples, such as pH results, captured images, and library images, and saves them as Excel files.
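The matching step lends itself to a short sketch: compare the mean RGB of the photographed pH patch against stored library colors and report the nearest entry. The library values and the plain Euclidean RGB distance below are illustrative assumptions, not the system's actual calibration data.

```python
# Hedged sketch of RGB-library pH reading: nearest reference color
# wins (library entries are made-up examples, not calibration data).
import numpy as np

PH_LIBRARY = {          # assumed example entries: pH -> reference RGB
    4.0: (220, 120, 60),
    7.0: (150, 160, 70),
    10.0: (60, 90, 150),
}

def read_ph(patch_rgb):
    """Return the library pH whose reference color is nearest in RGB."""
    patch = np.asarray(patch_rgb, dtype=float)
    return min(PH_LIBRARY,
               key=lambda ph: np.linalg.norm(patch - np.asarray(PH_LIBRARY[ph], float)))

print(read_ph((148, 158, 75)))   # -> 7.0 with these example entries
```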
High-speed imaging using 3CCD camera and multi-color LED flashes
NASA Astrophysics Data System (ADS)
Hijazi, Ala; Friedl, Alexander; Cierpka, Christian; Kähler, Christian; Madhavan, Vis
2017-11-01
This paper demonstrates the possibility of capturing full-resolution, high-speed image sequences using a regular 3CCD color camera in conjunction with high-power light-emitting diodes of three different colors. This is achieved using a novel approach, referred to as spectral shuttering, where a high-speed image sequence is captured using short-duration light pulses of different colors sent consecutively in very close succession. The work presented in this paper demonstrates the feasibility of configuring a high-speed camera system from low-cost, readily available off-the-shelf components. This camera can be used for recording six-frame sequences at frame rates up to 20 kHz, or three-frame sequences at even higher frame rates. Both color crosstalk and spatial matching between the different channels of the camera are found to be within acceptable limits. A small amount of magnification difference between the channels is found, and a simple calibration procedure for correcting the images is introduced. The images captured using this approach are of good enough quality to be used for obtaining full-field quantitative information using techniques such as digital image correlation and particle image velocimetry. A sequence of six high-speed images of a bubble splash recorded at 400 Hz is presented as a demonstration.
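The spectral-shuttering idea reduces, on the data side, to deinterleaving: each color channel of a captured 3CCD frame is a separate time sample because the R, G, and B LEDs fired at different instants. The sketch below assumes an R-then-G-then-B firing order, which is an assumption for illustration.

```python
# Deinterleaving a spectrally shuttered 3CCD recording: each color
# channel becomes its own grayscale frame in the time sequence
# (assumed firing order R, G, B within each exposure).
import numpy as np

def deinterleave_sequence(color_frames):
    """color_frames: (F, H, W, 3) array from the 3CCD camera.
    Returns (3*F, H, W) grayscale frames in acquisition order."""
    f, h, w, _ = color_frames.shape
    # Move the channel axis next to the frame axis so it acts as the
    # fast time axis, then flatten the two into one sequence.
    return color_frames.transpose(0, 3, 1, 2).reshape(3 * f, h, w)
```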
NPS assessment of color medical displays using a monochromatic CCD camera
NASA Astrophysics Data System (ADS)
Roehrig, Hans; Gu, Xiliang; Fan, Jiahua
2012-02-01
This paper presents an approach to Noise Power Spectrum (NPS) assessment of color medical displays without using an expensive imaging colorimeter. Uniform R, G, and B color patterns were shown on the display under study, and images were taken using a high-resolution monochromatic camera. A colorimeter was used to calibrate the camera images. Synthetic intensity images were formed by the weighted sum of the R, G, B, and dark-screen images. Finally, the NPS analysis was conducted on the synthetic images. The proposed method replaces an expensive imaging colorimeter for NPS evaluation, which also suggests a potential solution for routine color medical display QA/QC in the clinical area, especially when imaging of display devices is desired.
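For reference, the NPS analysis itself is commonly estimated by averaging the squared 2D Fourier transform of mean-subtracted patches of a uniform image; the sketch below follows that standard recipe, with the patch size and pixel pitch as assumed example parameters.

```python
# Standard 2D NPS estimate from a uniform (synthetic intensity)
# image: average |FFT|^2 over mean-subtracted patches.
import numpy as np

def noise_power_spectrum(image, patch=128, pixel_pitch_mm=0.05):
    """image: 2D float array. Returns the (patch x patch) NPS estimate."""
    h, w = image.shape
    spectra = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            roi = image[y:y + patch, x:x + patch].astype(float)
            roi -= roi.mean()                       # remove the mean level
            spectra.append(np.abs(np.fft.fft2(roi)) ** 2)
    # Scale by pixel area over pixel count (units: value^2 * mm^2).
    return (pixel_pitch_mm ** 2 / patch ** 2) * np.mean(spectra, axis=0)
```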
Color filter array design based on a human visual model
NASA Astrophysics Data System (ADS)
Parmar, Manu; Reeves, Stanley J.
2004-05-01
To reduce cost and complexity associated with registering multiple color sensors, most consumer digital color cameras employ a single sensor. A mosaic of color filters is overlaid on a sensor array such that only one color channel is sampled per pixel location. The missing color values must be reconstructed from available data before the image is displayed. The quality of the reconstructed image depends fundamentally on the array pattern and the reconstruction technique. We present a design method for color filter array patterns that use red, green, and blue color channels in an RGB array. A model of the human visual response for luminance and opponent chrominance channels is used to characterize the perceptual error between a fully sampled and a reconstructed sparsely-sampled image. Demosaicking is accomplished using Wiener reconstruction. To ensure that the error criterion reflects perceptual effects, reconstruction is done in a perceptually uniform color space. A sequential backward selection algorithm is used to optimize the error criterion to obtain the sampling arrangement. Two different types of array patterns are designed: non-periodic and periodic arrays. The resulting array patterns outperform commonly used color filter arrays in terms of the error criterion.
Single camera photogrammetry system for EEG electrode identification and localization.
Baysal, Uğur; Sengül, Gökhan
2010-04-01
In this study, photogrammetric coordinate measurement and color-based identification of EEG electrode positions on the human head are implemented simultaneously. A rotating 2MP digital camera about 20 cm above the subject's head is used, and images are acquired at predefined stop points separated azimuthally at equal angular displacements. In order to realize full automation, the electrodes are labeled with colored circular markers, and an electrode recognition algorithm has been developed. The proposed method has been tested using a plastic head phantom carrying 25 electrode markers. Electrode locations have been determined using three different methods: (i) the proposed photogrammetric method, (ii) a conventional 3D radiofrequency (RF) digitizer, and (iii) a coordinate measurement machine with about 6.5 μm accuracy. It is found that the proposed system automatically identifies electrodes and localizes them with a maximum error of 0.77 mm. It is suggested that this method may be used in EEG source localization applications in the human brain.
Development of digital shade guides for color assessment using a digital camera with ring flashes.
Tung, Oi-Hong; Lai, Yu-Lin; Ho, Yi-Ching; Chou, I-Chiang; Lee, Shyh-Yuan
2011-02-01
Digital photographs taken with cameras and ring flashes are commonly used for dental documentation. We hypothesized that different illuminants and camera white balance setups would influence the color rendering of digital images and affect the effectiveness of color matching using digital images. Fifteen ceramic disks of different shades were fabricated and photographed with a digital camera in both automatic white balance (AWB) and custom white balance (CWB) under either a light-emitting diode (LED) or an electronic ring flash. The Commission Internationale de l'Éclairage L*a*b* parameters of the captured images were derived from Photoshop software and served as digital shade guides. We found significantly high correlation coefficients (r² > 0.96) between the respective spectrophotometer standards and the shade guides generated in CWB setups. Moreover, the accuracy of color matching of another set of ceramic disks using digital shade guides, verified by ten operators, improved from 67% in AWB to 93% in CWB under LED illuminants. Probably because of the inconsistent performance of the flashlight and specular reflection, the digital images captured under the electronic ring flash in both white balance setups proved less reliable and showed relatively low matching ability. In conclusion, the reliability of color matching with digital images is strongly influenced by the illuminants and the camera's white balance setups, while digital shade guides derived under LED illuminants with CWB demonstrate applicable potential in the field of color assessment.
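The comparison underlying such shade matching is a color-difference computation in L*a*b*. The sketch below uses the simple CIE76 Euclidean ΔE as a stand-in (the study's exact comparison formula is not stated here) to pick the closest digital shade guide.

```python
# Shade matching against digital shade guides via CIE76 delta-E in
# CIELAB (a simple Euclidean stand-in for the study's comparison).
import numpy as np

def delta_e_cie76(lab1, lab2):
    """Euclidean color difference between two L*a*b* triplets."""
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

def best_match(sample_lab, guide_labs):
    """Index of the shade guide closest to the sampled color."""
    return int(np.argmin([delta_e_cie76(sample_lab, g) for g in guide_labs]))

# Example with made-up L*a*b* values:
guides = [(72, 2, 18), (68, 3, 22), (75, 1, 15)]
print(best_match((69, 2.5, 21), guides))   # -> 1
```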
NASA Astrophysics Data System (ADS)
Luo, Lin-Bo; An, Sang-Woo; Wang, Chang-Shuai; Li, Ying-Chun; Chong, Jong-Wha
2012-09-01
Digital cameras usually decrease exposure time to capture motion-blur-free images. However, this operation generates an under-exposed image with a low-budget complementary metal-oxide-semiconductor image sensor (CIS). Conventional color correction algorithms can efficiently correct under-exposed images; however, they are generally not performed in real time and need at least one frame memory if implemented in hardware. The authors propose a real-time look-up-table-based color correction method that corrects under-exposed images in hardware without using frame memory. The method utilizes histogram matching of two preview images, exposed for a long and a short time, respectively, to construct an improved look-up table (ILUT) and then corrects the captured under-exposed image in real time. Because the ILUT is calculated in real time before processing the captured image, the method does not require frame memory to buffer image data and can therefore greatly reduce the cost of the CIS. The method supports not only single image capture but also bracketing, capturing three images at a time. The proposed method was implemented in a hardware description language and verified on a field-programmable gate array with a 5-megapixel CIS. Simulations show that the system performs in real time at low cost and corrects the color of under-exposed images well.
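The histogram-matching step behind such an ILUT can be sketched in a few lines: match the cumulative histogram of the short-exposure preview to that of the long-exposure preview and read off a monotone look-up table. This software sketch (8-bit grayscale previews assumed) only illustrates the mapping, not the paper's hardware architecture.

```python
# CDF-matching look-up table: map the under-exposed preview's tone
# distribution onto the well-exposed preview's (8-bit images assumed).
import numpy as np

def build_lut(short_img, long_img, levels=256):
    """short_img, long_img: uint8 arrays. Returns a (levels,) uint8 LUT."""
    cdf_s = np.cumsum(np.bincount(short_img.ravel(), minlength=levels)).astype(float)
    cdf_l = np.cumsum(np.bincount(long_img.ravel(), minlength=levels)).astype(float)
    cdf_s /= cdf_s[-1]
    cdf_l /= cdf_l[-1]
    # For each input level, pick the output level with the nearest CDF value.
    return np.searchsorted(cdf_l, cdf_s).clip(0, levels - 1).astype(np.uint8)

# corrected = build_lut(short_preview, long_preview)[underexposed_capture]
```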
NASA Technical Reports Server (NTRS)
Ridd, M. K.
1984-01-01
Twenty-three missions were flown using the EPA's panoramic camera to obtain color and color infrared photographs of landslide and flood damage in Utah. From the state's point of view, there were many successes. The biggest single obstacle to smooth and continued performance was aircraft unavailability. The Memorandum of Understanding between the State of Utah, the Environmental Protection Agency, and the Center for Remote Sensing and Cartography is included, along with forms for planning enviropod missions, for requesting flights, and for obtaining feedback from participating agencies.
Temperature measurement with industrial color camera devices
NASA Astrophysics Data System (ADS)
Schmidradler, Dieter J.; Berndorfer, Thomas; van Dyck, Walter; Pretschuh, Juergen
1999-05-01
This paper discusses color-camera-based temperature measurement. Usually, visual imaging and infrared image sensing are treated as two separate disciplines. We show that a well-selected color camera device can be a cheaper, more robust, and more sophisticated solution for optical temperature measurement in several cases. Herein, only implementation fragments and important restrictions for the sensing element are discussed. Our aim is to draw the reader's attention to the use of visual image sensors for measuring thermal radiation and temperature, and to give reasons for the need for improved technologies in infrared camera devices. With our industrial partner AVL List, we successfully used the proposed sensor to perform temperature measurement of flames inside the combustion chamber of diesel engines, which finally led to the presented insights.
Full-color stereoscopic single-pixel camera based on DMD technology
NASA Astrophysics Data System (ADS)
Salvador-Balaguer, Eva; Clemente, Pere; Tajahuerce, Enrique; Pla, Filiberto; Lancis, Jesús
2017-02-01
Imaging systems based on microstructured illumination and single-pixel detection offer several advantages over conventional imaging techniques. They are an effective method for imaging through scattering media, even in the dynamic case. They work efficiently under low light levels, and the simplicity of the detector makes it easy to design imaging systems working outside the visible spectrum and to acquire multidimensional information. In particular, several approaches have been proposed to record 3D information. The technique is based on sampling the object with a sequence of microstructured light patterns codified onto a programmable spatial light modulator while the light intensity is measured with a single-pixel detector. The image is retrieved computationally from the photocurrent fluctuations provided by the detector. In this contribution, we describe an optical system able to produce full-color stereoscopic images using few and simple optoelectronic components. In our setup, we use an off-the-shelf digital light projector (DLP) based on a digital micromirror device (DMD) to generate the light patterns. To capture the color of the scene, we take advantage of the codification procedure used by the DLP for color video projection. To record stereoscopic views, we use a 90° beam splitter and two mirrors, allowing us to project the patterns from two different viewpoints. Using a single monochromatic photodiode, we obtain a pair of color images that can be used as input to a 3-D display. To reduce the acquisition time, we use a compressive sampling algorithm. Experimental results are shown.
Characterization of flotation color by machine vision
NASA Astrophysics Data System (ADS)
Siren, Ari
1999-09-01
Flotation is the most common industrial method by which valuable minerals are separated from waste rock after crushing and grinding the ore. For process control, flotation plants and devices are equipped with conventional and specialized sensors. However, certain variables are left to the visual observation of the operator, such as the color of the froth and the size of the bubbles in the froth. The ChaCo Project (EU Project 24931) was launched in November 1997. In this project, a measuring station was built at the Pyhasalmi flotation plant. The system includes an RGB camera and a spectral color measuring instrument for color inspection of the flotation. The visible spectral range is also measured with the RGB camera to compare the operators' comments on the color of the froth with the sphalerite concentration and the process balance. Different dried mineral (sphalerite) ratios were studied with iron pyrite to identify the minerals' typical spectral features. The correlation between sphalerite spectral reflectance and sphalerite concentration over various wavelengths is used to select a proper camera system with filters, or to compare the results with the color information from the RGB camera. Various candidate machine vision techniques are discussed for this application, and the preprocessed information on the dried mineral colors is used and adapted to the online measuring station. Moving froth bubbles produce total reflections, disturbing the color information; polarization filters are used and the results are reported. The reflectance outside the visible light range is also studied and reported.
Recent developments in space shuttle remote sensing, using hand-held film cameras
NASA Technical Reports Server (NTRS)
Amsbury, David L.; Bremer, Jeffrey M.
1992-01-01
The authors report on the advantages and disadvantages of a number of camera systems currently employed for space shuttle remote sensing operations. Systems discussed include the modified Hasselblad, the Rolleiflex 6008, the Linhof 5-inch format system, and the Nikon F3/F4 systems. Film/filter combinations (color positive films, color infrared films, color negative films, and polarization filters) are presented.
Contrast enhancement of bite mark images using the grayscale mixer in ACR in Photoshop®.
Evans, Sam; Noorbhai, Suzanne; Lawson, Zoe; Stacey-Jones, Seren; Carabott, Romina
2013-05-01
Enhanced images may improve bite mark edge definition, assisting forensic analysis. Current contrast enhancement involves color extraction, viewing layered images by channel. A novel technique, producing a single enhanced image using the grayscale mix panel within Adobe Camera Raw®, has been developed and assessed here, allowing adjustment of multiple color channels simultaneously. Stage 1 measured RGB values in 72 versions of a color chart image; eight sliders in Photoshop® were adjusted at 25% intervals, and all corresponding colors were affected. Stage 2 used a bite mark image and found that only the red, orange, and yellow sliders had discernible effects. Stage 3 assessed modality preference between color, grayscale, and enhanced images; on average, the 22 survey participants chose the enhanced image as better defined for nine out of 10 bite marks. The study has shown potential benefits for this new technique. However, further research is needed before its use in the analysis of bite marks.
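Conceptually, a grayscale mix reduces to a weighted combination of color components into one enhanced monochrome image. The sketch below is a simplified stand-in that weights RGB channels directly, whereas ACR's sliders operate on hue ranges; the weights shown are illustrative, not the study's settings.

```python
# Simplified grayscale-mix stand-in: weighted sum of RGB channels
# (ACR's mixer works on hue ranges; weights here are illustrative).
import numpy as np

def grayscale_mix(rgb, w_red=-0.4, w_green=0.2, w_blue=1.2):
    """rgb: (H, W, 3) float image in [0, 1]. Returns enhanced grayscale."""
    gray = w_red * rgb[..., 0] + w_green * rgb[..., 1] + w_blue * rgb[..., 2]
    return np.clip(gray, 0.0, 1.0)  # e.g., a negative red weight darkens reddish marks
```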
Dai, Meiling; Yang, Fujun; He, Xiaoyuan
2012-04-20
A simple but effective fringe projection profilometry method is proposed to measure 3D shape using a single snapshot of a color sinusoidal fringe pattern. One color fringe pattern, encoded with a sinusoidal fringe (as the red component) and a uniform intensity pattern (as the blue component), is projected by a digital video projector, and the deformed fringe pattern is recorded by a color CCD camera. The captured color fringe pattern is separated into its RGB components, and a division operation is applied to the red and blue channels to reduce the effect of variable reflection intensity. Shape information of the tested object is decoded by applying an arcsine algorithm to the normalized fringe pattern with subpixel resolution. In the case of fringe discontinuities caused by height steps or spatially isolated surfaces, the separated blue component is binarized and used for correcting the phase demodulation. A simple and robust method is also introduced to compensate for the nonlinear intensity response of the digital video projector. The experimental results demonstrate the validity of the proposed method.
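The decoding step described above can be sketched compactly: divide the red (sinusoidal) channel by the blue (uniform) channel to cancel surface reflectivity, rescale to [-1, 1], and take the arcsine to recover the wrapped phase. The global min/max normalization below is an assumption for illustration; it presumes the fringes reach full contrast somewhere in the image.

```python
# Sketch of red/blue fringe decoding: reflectivity cancels in the
# ratio, and arcsine of the normalized signal gives wrapped phase.
import numpy as np

def decode_wrapped_phase(red, blue, eps=1e-6):
    """red, blue: 2D arrays (same scene). Returns phase in [-pi/2, pi/2]."""
    ratio = red.astype(float) / (blue.astype(float) + eps)
    # Rescale to [-1, 1]; assumes fringes reach full contrast in-image.
    norm = 2.0 * (ratio - ratio.min()) / (ratio.max() - ratio.min() + eps) - 1.0
    return np.arcsin(np.clip(norm, -1.0, 1.0))
```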
NASA Astrophysics Data System (ADS)
Martínez-González, A.; Moreno-Hernández, D.; Monzón-Hernández, D.; León-Rodríguez, M.
2017-06-01
In the schlieren method, the deflection of light by an inhomogeneous medium is proportional to the gradient of its refractive index. In a schlieren system, such deflection appears as light intensity variations on the observation plane. For a digital camera, the intensity level registered by each pixel then depends mainly on the variation of the medium's refractive index and on the camera settings. Therefore, in this study, we regulate the intensity value of each pixel by controlling camera settings such as exposure time, gamma, and gain in order to calibrate the image obtained to the actual temperature values of a particular medium. In our approach, we use a color digital camera. The images obtained with a color digital camera can be separated into three color channels, corresponding to red, green, and blue; moreover, each channel has its own sensitivity. The differences in sensitivity allow us to obtain a range of temperature values for each color channel: high, medium, and low sensitivity correspond to the green, blue, and red channels, respectively. Therefore, by adding up the temperature contributions of the channels we obtain a wide range of temperature values. Hence, the basic idea in our approach to measuring temperature with a schlieren system is to relate the intensity level of each pixel in a schlieren image to the corresponding knife-edge position measured at the exit focal plane of the system. Our approach was applied to the measurement of instantaneous temperature fields of the air convection caused by a heated rectangular metal plate and by a candle flame. We found that for the metal plate measurements only the green and blue color channels were required to sense the entire phenomenon, while for the candle all three channels were needed to obtain a complete temperature measurement. In our study, the candle temperature was taken as reference, and the maximum temperature values obtained for the green, blue, and red channels were ∼275.6, ∼412.9, and ∼501.3 °C, respectively.
Performance evaluation of a quasi-microscope for planetary landers
NASA Technical Reports Server (NTRS)
Burcher, E. E.; Huck, F. O.; Wall, S. D.; Woehrle, S. B.
1977-01-01
Spatial resolutions achieved with cameras on lunar and planetary landers have been limited to about 1 mm, whereas microscopes of the type proposed for such landers could have obtained resolutions of about 1 μm but were never accepted because of their complexity and weight. The quasi-microscope evaluated in this paper could provide intermediate resolutions of about 10 μm with relatively simple optics that would augment a camera, such as the Viking lander camera, without imposing special design requirements on the camera or limiting its field of view of the terrain. Images of natural particulate samples taken in black and white and in color show that grain size, shape, and texture are made visible for unconsolidated materials in a 50- to 500-μm size range. Such information may provide broad outlines of planetary surface mineralogy and allow inferences to be made about grain origin and evolution. The mineralogical description of single grains would be aided by reflectance spectra that could, for example, be estimated from the six-channel multispectral data of the Viking lander camera.
NASA Astrophysics Data System (ADS)
Kataoka, R.; Miyoshi, Y.; Shigematsu, K.; Hampton, D.; Mori, Y.; Kubo, T.; Yamashita, A.; Tanaka, M.; Takahei, T.; Nakai, T.; Miyahara, H.; Shiokawa, K.
2013-09-01
A new stereoscopic measurement technique is developed to obtain an all-sky altitude map of aurora using two ground-based digital single-lens reflex (DSLR) cameras. Two identical full-color all-sky cameras were set up with an 8 km separation across the Chatanika area in Alaska (Poker Flat Research Range and Aurora Borealis Lodge) to find the localized emission height from the maximum correlation of the apparent patterns in localized pixels, applying a geographic coordinate transform. It is found that a typical ray structure of discrete aurora shows a broad altitude distribution above 100 km, while a typical patchy structure of pulsating aurora shows a narrow altitude distribution below 100 km. Because of the portability and low cost of DSLR camera systems, the new technique may open a unique opportunity not only for scientists but also for night-sky photographers to contribute complementary observations to auroral science, potentially forming a dense observation network.
Simultaneous Multi-Filter Optical Photometry of GEO Debris
NASA Technical Reports Server (NTRS)
Seitzer, Patrick; Cowardin, Heather; Barker, Edwin S.; Abercromby, Kira; Kelecy, Thomas
2011-01-01
Information on the physical characteristics of unresolved pieces of debris comes from an object's brightness and how it changes with time and wavelength. True colors of tumbling, irregularly shaped objects can be accurately determined only if the intensity at all wavelengths is measured at the same time. In this paper we report on simultaneous photometric observations of objects at geosynchronous orbit (GEO) using two telescopes at Cerro Tololo Inter-American Observatory (CTIO). The CTIO/SMARTS 0.9-m observes in a Johnson B filter, while the 0.6-m MODEST (Michigan Orbital DEbris Survey Telescope) observes in a Cousins R filter. The two CCD cameras are electronically synchronized so that the exposure start time and duration are the same for both telescopes. Thus we obtain the brightness as a function of time in two passbands simultaneously and can determine the true color of the object at any time. We report here on such calibrated measurements made on a sample of GEO objects and on the distribution of the observed B-R colors. In addition, using this data set, we show what colors would be observed if the observations in different filters were obtained sequentially, as would be the case for conventional imaging observations with a single detector on a single telescope. Finally, we compare our calibrated colors of GEO debris with colors determined in the laboratory for selected materials actually used in spacecraft construction.
Seeing Earth Through the Eyes of an Astronaut
NASA Technical Reports Server (NTRS)
Dawson, Melissa
2014-01-01
The Human Exploration Science Office within the ARES Directorate has undertaken a new class of handheld camera photographic observations of the Earth as seen from the International Space Station (ISS). For years, astronauts have attempted to describe their experience in space and how they see the Earth roll by below their spacecraft. Thousands of crew photographs have documented natural features as diverse as the dramatic clay colors of the African coastline, the deep blues of the Earth's oceans, or the swirling aurora in the upper atmosphere over Australia. Dramatic recent improvements in handheld digital single-lens reflex (DSLR) camera capabilities are now allowing a new field of crew photography: night time-lapse imagery.
Hyperspectral imaging using a color camera and its application for pathogen detection
USDA-ARS?s Scientific Manuscript database
This paper reports the results of a feasibility study for the development of a hyperspectral image recovery (reconstruction) technique using a RGB color camera and regression analysis in order to detect and classify colonies of foodborne pathogens. The target bacterial pathogens were the six represe...
2016-09-15
NASA's Cassini spacecraft stared at Saturn for nearly 44 hours on April 25 to 27, 2016, to obtain this movie showing just over four Saturn days. With Cassini's orbit being moved closer to the planet in preparation for the mission's 2017 finale, scientists took this final opportunity to capture a long movie in which the planet's full disk fit into a single wide-angle camera frame. Visible at top is the giant hexagon-shaped jet stream that surrounds the planet's north pole. Each side of this huge shape is slightly wider than Earth. The resolution of the 250 natural color wide-angle camera frames comprising this movie is 512x512 pixels, rather than the camera's full resolution of 1024x1024 pixels. Cassini's imaging cameras have the ability to take reduced-size images like these in order to decrease the amount of data storage space required for an observation. The spacecraft began acquiring this sequence of images just after it obtained the images to make a three-panel color mosaic. When it began taking images for this movie sequence, Cassini was 1,847,000 miles (2,973,000 kilometers) from Saturn, with an image scale of 355 kilometers per pixel. When it finished gathering the images, the spacecraft had moved 171,000 miles (275,000 kilometers) closer to the planet, with an image scale of 200 miles (322 kilometers) per pixel. A movie is available at http://photojournal.jpl.nasa.gov/catalog/PIA21047
A Closer Look at Telesto False-Color
2006-02-08
These views show surface features and color variation on the Trojan moon Telesto. The smooth surface of this moon suggests that, like Pandora, it is covered with a mantle of fine, dust-sized icy material. The monochrome image was taken in visible light (see PIA07696). To create the false-color view, ultraviolet, green and infrared images were combined into a single black and white picture that isolates and maps regional color differences. This "color map" was then superposed over a clear-filter image. The origin of the color differences is not yet understood, but may be caused by subtle differences in the surface composition or the sizes of grains making up the icy soil. Tiny Telesto is a mere 24 kilometers (15 miles) wide. The image was acquired with the Cassini spacecraft narrow-angle camera on Dec. 25, 2005 at a distance of approximately 20,000 kilometers (12,000 miles) from Telesto and at a Sun-Telesto-spacecraft, or phase, angle of 58 degrees. Image scale is 118 meters (387 feet) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA07697
Earth Observations taken by Expedition 41 crewmember
2014-09-13
ISS041-E-013683 (13 Sept. 2014) --- Photographed with a mounted automated camera, this is one of a number of images featuring the European Space Agency's Automated Transfer Vehicle (ATV-5 or Georges Lemaitre) docked with the International Space Station. Except for color changes, the images are almost identical. The variation in color from frame to frame is due to the camera's response to the motion of the orbital outpost, relative to the illumination from the sun.
Earth Observations taken by Expedition 41 crewmember
2014-09-13
ISS041-E-013687 (13 Sept. 2014) --- Photographed with a mounted automated camera, this is one of a number of images featuring the European Space Agency's Automated Transfer Vehicle (ATV-5 or Georges Lemaitre) docked with the International Space Station. Except for color changes, the images are almost identical. The variation in color from frame to frame is due to the camera's response to the motion of the orbital outpost, relative to the illumination from the sun.
Earth Observations taken by Expedition 41 crewmember
2014-09-13
ISS041-E-013693 (13 Sept. 2014) --- Photographed with a mounted automated camera, this is one of a number of images featuring the European Space Agency's Automated Transfer Vehicle (ATV-5 or Georges Lemaitre) docked with the International Space Station. Except for color changes, the images are almost identical. The variation in color from frame to frame is due to the camera's response to the motion of the orbital outpost, relative to the illumination from the sun.
Color line scan camera technology and machine vision: requirements to consider
NASA Astrophysics Data System (ADS)
Paernaenen, Pekka H. T.
1997-08-01
Color machine vision has shown a marked uptrend in use within the past few years, as the introduction of new camera and scanner technologies underscores. In the future, the movement from monochrome imaging to color will accelerate as machine vision users demand more knowledge about their product stream. As color comes to machine vision, the equipment used to digitize color images must meet certain requirements. Color machine vision needs not only good color separation but also a high dynamic range and a good linear response from the camera used. The importance of these features becomes even greater when the image is converted to another color space, since some information is always lost when converting integer data to another form. Traditionally, color image processing has been much slower than gray-level image processing due to the three times greater data volume per image; the same has applied to the three times greater memory requirement. Advancements in computers, memory, and processing units have made it possible to handle even large color images cost-efficiently today. In some cases, image analysis on color images can in fact be easier and faster than on a similar gray-level image because of the greater information per pixel. Color machine vision sets new requirements for lighting, too: high-intensity white light is required in order to acquire good images for further image processing or analysis. New developments in lighting technology are eventually bringing solutions for color imaging.
Whole surface image reconstruction for machine vision inspection of fruit
NASA Astrophysics Data System (ADS)
Reese, D. Y.; Lefcourt, A. M.; Kim, M. S.; Lo, Y. M.
2007-09-01
Automated imaging systems offer the potential to inspect the quality and safety of fruits and vegetables consumed by the public. Current automated inspection systems allow fruit such as apples to be sorted for quality issues including color and size by looking at a portion of the surface of each fruit. However, to inspect for defects and contamination, the whole surface of each fruit must be imaged. The goal of this project was to develop an effective and economical method for whole surface imaging of apples using mirrors and a single camera. Challenges include mapping the concave stem and calyx regions. To allow the entire surface of an apple to be imaged, apples were suspended or rolled above the mirrors using two parallel music wires. A camera above the apples captured 90 images per sec (640 by 480 pixels). Single or multiple flat or concave mirrors were mounted around the apple in various configurations to maximize surface imaging. Data suggest that the use of two flat mirrors provides inadequate coverage of a fruit but using two parabolic concave mirrors allows the entire surface to be mapped. Parabolic concave mirrors magnify images, which results in greater pixel resolution and reduced distortion. This result suggests that a single camera with two parabolic concave mirrors can be a cost-effective method for whole surface imaging.
Investigating Mars: Russell Crater - False Color
2017-08-11
This image shows the western part of the dune field on the floor of Russell Crater; it is a false color image of the crater and its surroundings. Sand dunes usually appear "blue" in false color images. Russell Crater is located in Noachis Terra. A spectacular dune ridge and other dune forms on the crater floor have prompted extensive imaging. The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. The Odyssey spacecraft has spent over 15 years in orbit around Mars, circling the planet more than 69000 times, and holds the record for the longest-working spacecraft at Mars. THEMIS, the IR/VIS camera system, has collected data for the entire mission and provides images covering all seasons and lighting conditions. Over the years many features of interest have received repeated imaging, building up a suite of images covering the entire feature. From the deepest chasma to the tallest volcano, individual dunes inside craters and dune fields that encircle the north pole, channels carved by water and lava, and a variety of other features, THEMIS has imaged them all. For the next several months the image of the day will focus on the Tharsis volcanoes, the various chasmata of Valles Marineris, and the major dune fields. We hope you enjoy these images! Orbit Number: 59591 Latitude: -54.471 Longitude: 13.1288 Instrument: VIS Captured: 2015-05-21 10:57 https://photojournal.jpl.nasa.gov/catalog/PIA21808
Real-time imaging of methane gas leaks using a single-pixel camera.
Gibson, Graham M; Sun, Baoqing; Edgar, Matthew P; Phillips, David B; Hempler, Nils; Maker, Gareth T; Malcolm, Graeme P A; Padgett, Miles J
2017-02-20
We demonstrate a camera which can image methane gas at video rates, using only a single-pixel detector and structured illumination. The light source is an infrared laser diode operating at 1.651 μm, tuned to an absorption line of methane gas. The light is structured using an addressable micromirror array to pattern the laser output with a sequence of Hadamard masks. The resulting backscattered light is recorded using a single-pixel InGaAs detector, which provides a measure of the correlation between the projected patterns and the gas distribution in the scene. Knowledge of this correlation and the patterns allows an image of the gas in the scene to be reconstructed. For the application of locating gas leaks the frame rate of the camera is of primary importance; it is inversely proportional to the square of the linear resolution. Here we demonstrate gas imaging at ~25 fps while using 256 mask patterns (corresponding to an image resolution of 16×16). To aid the task of locating the source of the gas emission, we overlay an upsampled and smoothed version of the low-resolution gas image onto a high-resolution color image of the scene, recorded using a standard CMOS camera. Using an illumination of only 5 mW across the field of view, we demonstrate imaging of a methane gas leak of ~0.2 litres/minute from a distance of ~1 metre.
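The reconstruction step lends itself to a compact sketch. Below is a minimal, idealized illustration of Hadamard single-pixel imaging at the paper's 16×16 resolution, assuming noise-free ±1 masks and ignoring the differential signaling a real micromirror system would use; all names are illustrative.

```python
# Idealized single-pixel imaging with Hadamard masks (a sketch, not the
# authors' pipeline). Real systems use {0,1} masks with differential readout.
import numpy as np
from scipy.linalg import hadamard

N = 16                      # linear resolution -> 16x16 image, 256 patterns
H = hadamard(N * N)         # 256x256 Hadamard matrix, entries +/-1

def measure(scene, patterns):
    """Simulate the single-pixel detector: one correlation value per mask."""
    return patterns @ scene.ravel()

def reconstruct(signals, patterns):
    """Invert the measurement; H is orthogonal, so H.T @ H = (N*N) * I."""
    img = patterns.T @ signals / patterns.shape[0]
    return img.reshape(N, N)

scene = np.zeros((N, N)); scene[5:9, 6:12] = 1.0   # toy "gas plume"
y = measure(scene, H)
est = reconstruct(y, H)
print(np.allclose(est, scene))   # True in this noiseless setting
```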
NASA Astrophysics Data System (ADS)
Goiffon, Vincent; Rolando, Sébastien; Corbière, Franck; Rizzolo, Serena; Chabane, Aziouz; Girard, Sylvain; Baer, Jérémy; Estribeau, Magali; Magnan, Pierre; Paillet, Philippe; Van Uffelen, Marco; Mont Casellas, Laura; Scott, Robin; Gaillardin, Marc; Marcandella, Claude; Marcelot, Olivier; Allanche, Timothé
2017-01-01
The Total Ionizing Dose (TID) hardness of digital color Camera-on-a-Chip (CoC) building blocks is explored in the multi-MGy range using 60Co gamma-ray irradiations. The performances of the following CoC subcomponents are studied: radiation hardened (RH) pixel and photodiode designs, an RH readout chain, Color Filter Arrays (CFA), and column RH Analog-to-Digital Converters (ADC). Several radiation hardness improvements are reported (on the readout chain and on dark current). CFA and ADC degradation appears to be very weak at the maximum TID of 6 MGy(SiO2), i.e. 600 Mrad. In the end, this study demonstrates the feasibility of a MGy rad-hard CMOS color digital camera-on-a-chip, illustrated by a color image captured after 6 MGy(SiO2) with no obvious degradation. An original dark current reduction mechanism in irradiated CMOS Image Sensors is also reported and discussed.
CCD Astrometry with Robotic Telescopes
NASA Astrophysics Data System (ADS)
AlZaben, Faisal; Li, Dewei; Li, Yongyao; Dennis, Aren; Fene, Michael; Boyce, Grady; Boyce, Pat
2016-01-01
CCD images were acquired of three binary star systems: WDS06145+1148, WDS06206+1803, and WDS06224+2640. The astrometric solution, position angle, and separation of each system were calculated with MaximDL v6 and Mira Pro x64 software suites. The results were consistent with historical measurements in the Washington Double Star Catalog. Our analysis found some differences in measurements between single-shot color CCD cameras and traditional monochrome CCDs using a filter wheel.
Express Yourself: Using Color Schemes, Cameras, and Computers
ERIC Educational Resources Information Center
Lott, Debra
2005-01-01
Self-portraiture is a great project to introduce the study of color schemes and Expressionism. Through this drawing project, students learn about identity, digital cameras, and creative art software. The lesson can be introduced with a study of Edvard Munch and Expressionism. Expressionism was an art movement in which the intensity of the artist's…
Full color natural light holographic camera.
Kim, Myung K
2013-04-22
Full-color, three-dimensional images of objects under incoherent illumination are obtained by a digital holography technique. Based on self-interference of two beam-split copies of the object's optical field with differential curvatures, the apparatus consists of a beam-splitter, a few mirrors and lenses, a piezo-actuator, and a color camera. No lasers or other special illuminations are used for recording or reconstruction. Color holographic images of daylight-illuminated outdoor scenes and a halogen lamp-illuminated toy figure are obtained. From a recorded hologram, images can be calculated, or numerically focused, at any distances for viewing.
NASA Astrophysics Data System (ADS)
Mikhalev, Aleksandr; Podlesny, Stepan; Stoeva, Penka
2016-09-01
To study dynamics of the upper atmosphere, we consider results of night sky photometry using a color CCD camera, taking into account the night airglow and the features of its spectral composition. We use night airglow observations for 2010-2015, which have been obtained at the ISTP SB RAS Geophysical Observatory (52° N, 103° E) by a camera with a KODAK KAI-11002 CCD sensor. We estimate the average brightness of the night sky in the R, G, B channels of the color camera for eastern Siberia, with typical values ranging from ~0.008 to 0.01 erg cm⁻² s⁻¹. Besides, we determine seasonal variations in the night sky luminosities in the R, G, B channels of the color camera. In these channels, luminosities decrease in spring, increase in autumn, and have a pronounced summer maximum, which can be explained by scattered light and is associated with the location of the Geophysical Observatory. We consider geophysical phenomena with their optical effects in the R, G, B channels of the color camera. For some geophysical phenomena (geomagnetic storms, sudden stratospheric warmings), we demonstrate the possibility of a quantitative relationship between enhanced signals in the R and G channels and increases in intensities of the discrete 557.7 and 630 nm emissions, which are predominant in the airglow spectrum.
Demosaicking for full motion video 9-band SWIR sensor
NASA Astrophysics Data System (ADS)
Kanaev, Andrey V.; Rawhouser, Marjorie; Kutteruf, Mary R.; Yetzbacher, Michael K.; DePrenger, Michael J.; Novak, Kyle M.; Miller, Corey A.; Miller, Christopher W.
2014-05-01
Short wave infrared (SWIR) spectral imaging systems are vital for Intelligence, Surveillance, and Reconnaissance (ISR) applications because of their abilities to autonomously detect targets and classify materials. Typically, spectral imagers are incapable of providing Full Motion Video (FMV) because of their reliance on line scanning. We enable FMV capability for a SWIR multi-spectral camera by creating a repeating pattern of 3x3 spectral filters on a staring focal plane array (FPA). In this paper we present imagery from an FMV SWIR camera with nine discrete bands and discuss the image processing algorithms necessary for its operation. The main image processing task in this case is demosaicking of the spectral bands, i.e., reconstructing full spectral images at the original FPA resolution from the spatially subsampled and incomplete spectral data acquired with the chosen filter array pattern. To the best of the authors' knowledge, demosaicking algorithms for nine or more equally sampled bands have not been reported before. Moreover, all existing algorithms developed for demosaicking visible color filter arrays with fewer than nine colors assume either certain relationships between the visible colors, which do not hold for SWIR imaging, or the presence of one color band with a higher sampling rate than the rest, which does not conform to our spectral filter pattern. We discuss and present results for two novel approaches to demosaicking: interpolation using multi-band edge information, and application of multi-frame super-resolution to single-frame resolution enhancement of multi-spectral, spatially multiplexed images.
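As a point of reference for what the paper improves upon, a naive per-band scheme for a 3x3 mosaic simply interpolates each band independently. A sketch under that assumption (no multi-band edge guidance; sub-pixel offsets between bands are ignored):

```python
# Naive per-band demosaicking for a 3x3 spectral filter mosaic: each band is
# sampled on every third pixel and bilinearly interpolated back to full
# resolution, independently of the other bands.
import numpy as np
from scipy.ndimage import zoom

def demosaic_9band(raw, pattern_size=3):
    """raw: 2D mosaic frame. Returns an (H, W, 9) cube of interpolated bands."""
    h, w = raw.shape
    bands = []
    for dy in range(pattern_size):
        for dx in range(pattern_size):
            sub = raw[dy::pattern_size, dx::pattern_size]   # sparse band samples
            full = zoom(sub, (h / sub.shape[0], w / sub.shape[1]), order=1)
            bands.append(full)
    return np.stack(bands, axis=-1)

raw = np.random.rand(480, 640).astype(np.float32)   # simulated mosaic frame
cube = demosaic_9band(raw)
print(cube.shape)   # (480, 640, 9)
```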
Digital Earth Watch: Investigating the World with Digital Cameras
NASA Astrophysics Data System (ADS)
Gould, A. D.; Schloss, A. L.; Beaudry, J.; Pickle, J.
2015-12-01
Every digital camera, including the smartphone camera, can be a scientific tool. Pictures contain millions of color intensity measurements organized spatially, allowing us to measure properties of objects in the images. This presentation will demonstrate how digital pictures can be used for a variety of studies, with a special emphasis on using repeat digital photographs to study change over time in outdoor settings with a Picture Post. Demonstrations will include using inexpensive color filters to take pictures that enhance features in images, such as unhealthy leaves on plants or clouds in the sky. Software available at no cost from the Digital Earth Watch (DEW) website, which lets students explore light, color, and pixels, manipulate color in images, and make measurements, will be demonstrated. DEW and Picture Post were developed with support from NASA. Please visit our websites: DEW: http://dew.globalsystemsscience.org ; Picture Post: http://picturepost.unh.edu
Mount Sharp Panorama in Raw Colors
2013-03-15
This mosaic of images from the Mastcam onboard NASA Mars rover Curiosity shows Mount Sharp in raw color. Raw color shows the scene colors as they would look in a typical smart-phone camera photo, before any adjustment.
Phenology cameras observing boreal ecosystems of Finland
NASA Astrophysics Data System (ADS)
Peltoniemi, Mikko; Böttcher, Kristin; Aurela, Mika; Kolari, Pasi; Tanis, Cemal Melih; Linkosalmi, Maiju; Loehr, John; Metsämäki, Sari; Nadir Arslan, Ali
2016-04-01
Cameras have become useful tools for monitoring seasonality of ecosystems. Low-cost cameras facilitate validation of other measurements and allow extracting some key ecological features and moments from image time series. We installed a network of phenology cameras at selected ecosystem research sites in Finland. Cameras were installed above, on the level, or/and below the canopies. Current network hosts cameras taking time lapse images in coniferous and deciduous forests as well as at open wetlands offering thus possibilities to monitor various phenological and time-associated events and elements. In this poster, we present our camera network and give examples of image series use for research. We will show results about the stability of camera derived color signals, and based on that discuss about the applicability of cameras in monitoring time-dependent phenomena. We will also present results from comparisons between camera-derived color signal time series and daily satellite-derived time series (NVDI, NDWI, and fractional snow cover) from the Moderate Resolution Imaging Spectrometer (MODIS) at selected spruce and pine forests and in a wetland. We will discuss the applicability of cameras in supporting phenological observations derived from satellites, by considering the possibility of cameras to monitor both above and below canopy phenology and snow.
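The abstract speaks only of camera-derived "color signals"; one widely used signal at phenology sites is the green chromatic coordinate (GCC), shown here as an illustrative example rather than the authors' exact processing:

```python
# Green chromatic coordinate (GCC) over a fixed region of interest: a common
# camera-derived phenology signal (our illustrative choice, not necessarily
# the signal used in this study).
import numpy as np

def gcc(image, roi):
    """image: (H, W, 3) RGB array; roi: (y0, y1, x0, x1) canopy window."""
    y0, y1, x0, x1 = roi
    patch = image[y0:y1, x0:x1].astype(np.float64)
    r = patch[..., 0].mean()
    g = patch[..., 1].mean()
    b = patch[..., 2].mean()
    return g / (r + g + b)   # rises with canopy green-up, falls in autumn

# One value per image in the time series traces a seasonal trajectory that
# can be compared against satellite series such as MODIS NDVI.
```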
Camera processing with chromatic aberration.
Korneliussen, Jan Tore; Hirakawa, Keigo
2014-10-01
Since the refractive index of materials commonly used for lenses depends on the wavelength of light, practical camera optics fail to converge light to a single point on the image plane. Known as chromatic aberration, this phenomenon distorts image details by introducing magnification error, defocus blur, and color fringes. Though achromatic and apochromatic lens designs reduce chromatic aberration to a degree, they are complex and expensive, and they do not offer a perfect correction. In this paper, we propose a new postcapture processing scheme designed to overcome these problems computationally. Specifically, the proposed solution comprises a chromatic aberration-tolerant demosaicking algorithm and a post-demosaicking chromatic aberration correction. Experiments with simulated and real sensor data verify that the chromatic aberration is effectively corrected.
Hayashida, Tetsuya; Iwasaki, Hiroaki; Masaoka, Kenichiro; Shimizu, Masanori; Yamashita, Takayuki; Iwai, Wataru
2017-06-26
We selected appropriate indices for color rendition and determined their recommended values for ultra-high-definition television (UHDTV) production using white LED lighting. Since the spectral sensitivities of UHDTV cameras can be designed to approximate the ideal spectral sensitivities of UHDTV colorimetry, they have more accurate color reproduction than HDTV cameras, and thus the color-rendering properties of the lighting are critical. Comparing images taken under white LEDs using the conventional color rendering indices (Ra, R9-R14) and the recently proposed color-rendition evaluation methods CQS, TM-30, Qa, and SSI, we found the combination of Ra and R9 appropriate. For white LED lighting, Ra ≥ 90 and R9 ≥ 80 are recommended for UHDTV production.
Real-time color image processing for forensic fiber investigations
NASA Astrophysics Data System (ADS)
Paulsson, Nils
1995-09-01
This paper describes a system for automatic fiber debris detection based on color identification. The system offers fast analysis and high selectivity, a necessity when analyzing forensic fiber samples: an ordinary investigation separates the material into well over 100,000 video images to analyze. The system is based on standard techniques, with a CCD camera, a motorized sample table, and an IBM-compatible PC/AT with add-on boards for video frame digitization and stepping motor control as the main parts. It is possible to operate the instrument at full video rate (25 images/s) with the aid of the HSI (hue-saturation-intensity) color system and software optimization. High selectivity is achieved by separating the analysis into several steps. The first step is fast, direct color identification of objects in the analyzed video images; the second step analyzes the detected objects in a more complex and time-consuming stage of the investigation to identify single fiber fragments for subsequent analysis with more selective techniques.
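The two-stage design (a fast color gate followed by slower object analysis) can be sketched as follows; the hue/saturation thresholds are invented for illustration, and a modern HSV conversion stands in for the original HSI implementation:

```python
# Stage one of a two-stage fiber detector: a cheap hue/saturation gate flags
# candidate blobs; only those go on to the slow, selective second stage.
# Thresholds here are illustrative, not the system's calibrated values.
import numpy as np
from skimage.color import rgb2hsv
from scipy.ndimage import label

def detect_fiber_candidates(frame, hue_lo=0.55, hue_hi=0.70, sat_min=0.4):
    """frame: (H, W, 3) RGB in [0, 1]. Returns labeled candidate blobs."""
    hsv = rgb2hsv(frame)
    mask = (hsv[..., 0] >= hue_lo) & (hsv[..., 0] <= hue_hi) \
         & (hsv[..., 1] >= sat_min)
    blobs, n = label(mask)      # connected components = candidate fibers
    return blobs, n
```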
NASA Astrophysics Data System (ADS)
Tabuchi, Toru; Yamagata, Shigeki; Tamura, Tetsuo
2003-04-01
Demands are increasing for information that helps avoid accidents as automobile traffic grows. We discuss how an infrared camera can identify three conditions (dry, aquaplaning, frozen) of the road surface. The principles of this method are: (1) we have found that a 3-color infrared camera can distinguish these conditions using proper data processing; (2) the emissivity of the materials on the road surface (concrete, water, ice) differs in three wavelength regions; (3) the sky's temperature is lower than the road's. The emissivity of the road depends on the road surface condition. The 3-color infrared camera therefore measures both the sky's energy reflected from the road surface and the road surface's self-radiation. The road condition can be distinguished by processing the energy pattern measured in the three wavelength regions. Our experimental results show that the emissivity of concrete differs from that of water. An infrared camera whose NETD (Noise Equivalent Temperature Difference) in each of the three wavelength regions is 1.0°C or less can distinguish the road conditions by using this emissivity difference.
Coughlan, James; Manduchi, Roberto
2009-06-01
We describe a wayfinding system for blind and visually impaired persons that uses a camera phone to determine the user's location with respect to color markers, posted at locations of interest (such as offices), which are automatically detected by the phone. The color marker signs are specially designed to be detected in real time in cluttered environments using computer vision software running on the phone; a novel segmentation algorithm quickly locates the borders of the color marker in each image, which allows the system to calculate how far the marker is from the phone. We present a model of how the user's scanning strategy (i.e. how he/she pans the phone left and right to find color markers) affects the system's ability to detect color markers given the limitations imposed by motion blur, which is always a possibility whenever a camera is in motion. Finally, we describe experiments with our system tested by blind and visually impaired volunteers, demonstrating their ability to reliably use the system to find locations designated by color markers in a variety of indoor and outdoor environments, and elucidating which search strategies were most effective for users.
New ultrasensitive pickup device for deep-sea robots: underwater super-HARP color TV camera
NASA Astrophysics Data System (ADS)
Maruyama, Hirotaka; Tanioka, Kenkichi; Uchida, Tetsuo
1994-11-01
An ultra-sensitive underwater super-HARP color TV camera has been developed. The characteristics -- spectral response, lag, etc. -- of the super-HARP tube had to be designed for use underwater because the propagation of light in water is very different from that in air and also depends on the light's wavelength. The tubes have new electrostatic focusing and magnetic deflection functions and are arranged in parallel to miniaturize the camera. A deep-sea robot (DOLPHIN 3K) was fitted with this camera and used for the first sea test in Sagami Bay, Japan. The underwater visual information was clear enough to promise significant improvements in both deep-sea surveying and safety. It was thus confirmed that the super-HARP camera is very effective for underwater use.
Traffic Sign Recognition with Invariance to Lighting in Dual-Focal Active Camera System
NASA Astrophysics Data System (ADS)
Gu, Yanlei; Panahpour Tehrani, Mehrdad; Yendo, Tomohiro; Fujii, Toshiaki; Tanimoto, Masayuki
In this paper, we present an automatic vision-based traffic sign recognition system which can detect and classify traffic signs at long distance under different lighting conditions. To realize this, the traffic sign recognition is developed in an originally proposed dual-focal active camera system. In this system, a telephoto camera is equipped as an assistant to a wide-angle camera. The telephoto camera can capture a high-resolution image of an object of interest in the field of view of the wide-angle camera; this image provides enough information for recognition when the traffic sign appears at too low a resolution in the wide-angle camera. In the proposed system, traffic sign detection and classification are processed separately on the images from the wide-angle and telephoto cameras. Besides, in order to detect traffic signs against complex backgrounds under different lighting conditions, we propose a color transformation which is invariant to lighting changes. This transformation highlights the pattern of traffic signs by reducing the complexity of the background. Based on the color transformation, a multi-resolution detector with cascade mode is trained and used to locate traffic signs at low resolution in the image from the wide-angle camera. After detection, the system actively captures a high-resolution image of each detected traffic sign by controlling the direction and exposure time of the telephoto camera based on information from the wide-angle camera. Moreover, in classification, a hierarchical classifier is constructed and used to recognize the detected traffic signs in the high-resolution image from the telephoto camera. Finally, based on the proposed system, a set of experiments in the domain of traffic sign recognition is presented. The experimental results demonstrate that the proposed system can effectively recognize traffic signs at low resolution under different lighting conditions.
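The abstract does not specify the lighting-invariant color transformation; a common transform with the same intent is normalized rg chromaticity, which discards overall intensity and keeps only color proportions. A minimal sketch under that assumption:

```python
# Normalized rg chromaticity: an illustrative lighting-robust color transform
# (not necessarily the transformation proposed in the paper). Scaling all
# channels by a common illumination factor leaves the output unchanged.
import numpy as np

def normalized_chromaticity(image):
    """image: (H, W, 3) float RGB. Returns (H, W, 2) rg chromaticity."""
    s = image.sum(axis=-1, keepdims=True) + 1e-6   # avoid divide-by-zero
    return image[..., :2] / s                      # r = R/(R+G+B), g = G/(R+G+B)
```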
Color image guided depth image super resolution using fusion filter
NASA Astrophysics Data System (ADS)
He, Jin; Liang, Bin; He, Ying; Yang, Jun
2018-04-01
Depth cameras currently play an important role in many areas. However, most of them can only obtain low-resolution (LR) depth images, while color cameras can easily provide high-resolution (HR) color images. Using a color image as a guide is an efficient way to obtain a HR depth image. In this paper, we propose a depth image super-resolution (SR) algorithm which uses a HR color image as a guide and a LR depth image as input. We use a fusion of a guided filter and an edge-based joint bilateral filter to obtain the HR depth image. Our experimental results on the Middlebury 2005 datasets show that our method provides better quality HR depth images, both numerically and visually.
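In the spirit of the color-guided filtering described (though not the authors' exact fusion of guided and edge-based joint bilateral filters), a compact joint-bilateral-style upsampler might look like this; parameters are illustrative, and image borders wrap for brevity:

```python
# Color-guided depth upsampling, joint-bilateral style: naively upsample the
# LR depth, then average it with weights from spatial distance and color
# similarity in the HR guide image. A sketch, not the paper's fusion filter.
import numpy as np
from scipy.ndimage import zoom

def guided_upsample(depth_lr, color_hr, radius=4, sigma_s=3.0, sigma_r=0.1):
    """depth_lr: (h, w); color_hr: (H, W, 3) floats in [0, 1]."""
    H, W = color_hr.shape[:2]
    d = zoom(depth_lr, (H / depth_lr.shape[0], W / depth_lr.shape[1]), order=1)
    num = np.zeros((H, W)); den = np.zeros((H, W))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted_d = np.roll(d, (dy, dx), axis=(0, 1))
            shifted_c = np.roll(color_hr, (dy, dx), axis=(0, 1))
            w_s = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
            diff = ((color_hr - shifted_c) ** 2).sum(-1)   # color distance
            w = w_s * np.exp(-diff / (2 * sigma_r ** 2))
            num += w * shifted_d; den += w
    return num / den   # den > 0: the zero-offset term always contributes
```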
Facial skin color measurement based on camera colorimetric characterization
NASA Astrophysics Data System (ADS)
Yang, Boquan; Zhou, Changhe; Wang, Shaoqing; Fan, Xin; Li, Chao
2016-10-01
The objective measurement of facial skin color and its variation is of great significance, as much information can be obtained from it. In this paper, we develop a new skin color measurement procedure with the following parts: first, a new skin tone color checker based on the Pantone Skin Tone Color Checker was designed for camera colorimetric characterization; second, the chromaticity of the light source was estimated via a new scene illumination estimation method that draws on several previous algorithms; third, chromatic adaptation was used to convert the input facial image into an output facial image that appears to have been taken under a canonical light; finally, the validity and accuracy of our method were verified by comparing the results obtained by our procedure with those obtained by a spectrophotometer.
Joint demosaicking and zooming using moderate spectral correlation and consistent edge map
NASA Astrophysics Data System (ADS)
Zhou, Dengwen; Dong, Weiming; Chen, Wengang
2014-07-01
The recently published joint demosaicking and zooming algorithms for single-sensor digital cameras all overfit the popular Kodak test images, which have been found to have higher spectral correlation than typical color images. Their performance can therefore degrade significantly on other datasets, such as the McMaster test images, which have weak spectral correlation. A new joint demosaicking and zooming algorithm is proposed for the Bayer color filter array (CFA) pattern, in which the edge direction information (edge map) extracted from the raw CFA data is used consistently in both demosaicking and zooming. It also makes moderate use of the spectral correlation between color planes. The experimental results confirm that the proposed algorithm performs excellently on both the Kodak and McMaster datasets in terms of both subjective and objective measures. Our algorithm also has high computational efficiency, providing a better tradeoff among adaptability, performance, and computational cost than the existing algorithms.
HeatWave: the next generation of thermography devices
NASA Astrophysics Data System (ADS)
Moghadam, Peyman; Vidas, Stephen
2014-05-01
Energy sustainability is a major challenge of the 21st century. To reduce environmental impact, changes are required not only on the supply side of the energy chain, by introducing renewable energy sources, but also on the demand side, by reducing energy usage and improving energy efficiency. Currently, 2D thermal imaging is used for energy auditing: it measures the thermal radiation from the surfaces of objects and represents it as a set of color-mapped images that can be analyzed for energy efficiency monitoring. A limitation of this method is that it lacks information on the geometry and location of objects with reference to each other, particularly across separate images. This prevents quantitative analysis, for example detecting energy performance changes before and after retrofitting. To address these limitations, we have developed a next-generation thermography device called HeatWave. HeatWave is a hand-held 3D thermography device that consists of a thermal camera, a range sensor, and a color camera, and can be used to generate a precise 3D model of objects augmented with temperature and visible information. As an operator holding the device smoothly waves it around the objects of interest, HeatWave continuously tracks its own pose in space and integrates new information from the range, thermal, and color cameras into a single, precise 3D multi-modal model. Information from multiple viewpoints can be incorporated to improve the accuracy, reliability, and robustness of the global model. The approach also makes it possible to reduce systematic errors associated with the estimation of surface temperature from the thermal images.
NASA Astrophysics Data System (ADS)
Seo, Hokuto; Aihara, Satoshi; Namba, Masakazu; Watabe, Toshihisa; Ohtake, Hiroshi; Kubota, Misao; Egami, Norifumi; Hiramatsu, Takahiro; Matsuda, Tokiyoshi; Furuta, Mamoru; Nitta, Hiroshi; Hirao, Takashi
2010-01-01
Our group has been developing a new type of image sensor overlaid with three organic photoconductive films, each sensitive to only one of the primary color components (blue (B), green (G), or red (R) light), with the aim of developing a compact, high-resolution color camera without any color separation optics. In this paper, we first describe the unique characteristics of organic photoconductive films. The photoconductive properties of a film, in particular its wavelength selectivity, can be tuned simply by the choice of organic materials, and the selectivity is good enough to divide the incident light into the three primary colors. Color separation with vertically stacked organic films is also shown. In addition, a resolution of organic photoconductive films sufficient for high-definition television (HDTV) was confirmed in a shooting experiment using a camera tube. Secondly, as a step toward our goal, we fabricated a stacked organic image sensor with G- and R-sensitive organic photoconductive films, each with a zinc oxide (ZnO) thin-film transistor (TFT) readout circuit, and demonstrated image pickup at a TV frame rate. A color image with a resolution corresponding to the pixel count of the ZnO TFT readout circuit was obtained from the stacked image sensor. These results show the potential for the development of high-resolution prism-less color cameras with stacked organic photoconductive films.
Hand and goods judgment algorithm based on depth information
NASA Astrophysics Data System (ADS)
Li, Mingzhu; Zhang, Jinsong; Yan, Dan; Wang, Qin; Zhang, Ruiqi; Han, Jing
2016-03-01
A tablet computer with a depth camera and a color camera is mounted on a traditional shopping cart, and the two cameras observe the inside of the cart. In shopping cart monitoring, it is very important to determine whether the customer's hand is empty or holding goods as it moves into or out of the shopping cart. This paper establishes a basic framework for judging an empty hand. It includes a hand extraction process based on the depth information; a skin color model built with WPCA (Weighted Principal Component Analysis); an algorithm for judging handheld products based on motion and skin color information; and a statistical process. In this framework, the first step ensures the integrity of the hand information and effectively avoids the influence of sleeves and other clutter. The second step accurately extracts skin color and eliminates interference from similar colors; lighting has little effect on its results, and it has the advantages of fast computation and high efficiency. The third step greatly reduces noise interference and improves accuracy.
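A hedged sketch of the first two steps of this framework, with a fixed YCrCb skin gate standing in for the paper's WPCA-trained model and invented depth bounds:

```python
# Hand candidate mask from aligned depth + color frames: gate pixels by a
# depth band around the cart opening, then keep skin-colored pixels. The
# fixed YCrCb bounds below are a common heuristic, not the paper's model.
import numpy as np

def hand_mask(depth, bgr, z_near=0.4, z_far=0.9):
    """depth: (H, W) meters; bgr: (H, W, 3) uint8; both aligned."""
    in_band = (depth > z_near) & (depth < z_far)      # region above the cart
    b = bgr[..., 0].astype(float)
    g = bgr[..., 1].astype(float)
    r = bgr[..., 2].astype(float)
    cr = 128.0 + (0.5 * r - 0.4187 * g - 0.0813 * b)  # standard YCrCb chroma
    cb = 128.0 + (-0.1687 * r - 0.3313 * g + 0.5 * b)
    skin = (cr > 135) & (cr < 180) & (cb > 85) & (cb < 135)
    return in_band & skin
```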
Makhambet Crater - False Color
2015-01-29
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows Makhambet Crater.
Color Camera for Curiosity Robotic Arm
2010-11-16
The Mars Hand Lens Imager (MAHLI) camera will fly on NASA's Mars Science Laboratory mission, launching in late 2011. This photo of the camera was taken before MAHLI's November 2010 installation onto the robotic arm of the mission's Mars rover, Curiosity.
A Plenoptic Multi-Color Imaging Pyrometer
NASA Technical Reports Server (NTRS)
Danehy, Paul M.; Hutchins, William D.; Fahringer, Timothy; Thurow, Brian S.
2017-01-01
A three-color pyrometer has been developed based on plenoptic imaging technology. Three bandpass filters placed in front of a camera lens allow separate 2D images to be obtained on a single image sensor at three different, user-adjustable wavelengths. Images were obtained of different black- or grey-bodies, including a calibration furnace, a radiation heater, and a luminous sulfur match flame. The images of the calibration furnace and radiation heater were processed to determine 2D temperature distributions. Calibration results in the furnace showed that the instrument can measure temperature with an accuracy and precision of 10 K between 1100 and 1350 K. Time-resolved 2D temperature measurements of the radiation heater are shown.
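For context, two-color (ratio) pyrometry under the Wien approximation recovers temperature from the ratio of band intensities, independently of a grey body's emissivity. A minimal sketch, not necessarily the instrument's calibrated inversion:

```python
# Ratio pyrometry under the Wien approximation, I ~ eps * c1 * lam^-5 *
# exp(-c2/(lam*T)): for a grey body the emissivity cancels in the band ratio,
# leaving T = c2*(1/lam1 - 1/lam2) / (5*ln(lam2/lam1) - ln(I1/I2)).
import numpy as np

C2 = 1.4388e-2   # second radiation constant, m*K

def ratio_temperature(I1, I2, lam1, lam2):
    """I1, I2: band intensities (scalars or arrays); lam1, lam2: meters."""
    num = C2 * (1.0 / lam2 - 1.0 / lam1)
    den = np.log(I1 / I2) + 5.0 * np.log(lam1 / lam2)
    return num / den

# Illustrative bands and intensities (invented values):
T = ratio_temperature(I1=1.0, I2=2.2, lam1=700e-9, lam2=900e-9)
```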
Frequency division multiplexed multi-color fluorescence microscope system
NASA Astrophysics Data System (ADS)
Le, Vu Nam; Yang, Huai Dong; Zhang, Si Chun; Zhang, Xin Rong; Jin, Guo Fan
2017-10-01
A grayscale camera can only obtain a gray-scale image of an object, while multicolor imaging can capture the color information needed to distinguish sample structures that have the same shape but different colors. In fluorescence microscopy, current methods of multicolor imaging are flawed: they reduce the efficiency of fluorescence imaging, lower the effective sampling rate of the CCD, and so on. In this paper, we propose a novel multicolor fluorescence microscopy imaging method based on frequency division multiplexing (FDM), which modulates the excitation lights and demodulates the fluorescence signal in the frequency domain. The method uses periodic functions of different frequencies to modulate the amplitude of each excitation light and then combines these beams for illumination in a fluorescence microscopy imaging system. The system records a multicolor fluorescence image with a grayscale camera. During data processing, the signal obtained by each pixel of the camera is processed with a discrete Fourier transform, decomposed by color in the frequency domain, and then inverse transformed. After applying this process to the signals from all pixels, monochrome images of each color on the image plane are obtained and a multicolor image is acquired. Based on this method, we constructed a two-color fluorescence microscope system with excitation wavelengths of 488 nm and 639 nm. Using this system to observe the linear movement of two kinds of fluorescent microspheres, we obtained, after data processing, a two-color fluorescence video consistent with the original image. This experiment shows that dynamic phenomena in multicolor fluorescent biological samples can be observed by this method. Compared with current methods, this method obtains the image signals of each color at the same time, and the color video's frame rate equals the frame rate of the camera. The optical system is simpler and needs no extra color separation element. In addition, the method has a good filtering effect on ambient light and other light signals that are not affected by the modulation.
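The per-pixel demultiplexing step can be sketched directly: each pixel's time trace is Fourier transformed, and the amplitude at each laser's modulation frequency yields that color's image. The frame rate and modulation frequencies below are invented for illustration:

```python
# FDM demultiplexing of a grayscale frame stack: per-pixel FFT along time,
# then read off the amplitude at each excitation laser's modulation frequency.
import numpy as np

fps = 200.0                               # camera frame rate (assumed)
f_mod = {"ch488": 20.0, "ch639": 35.0}    # modulation frequencies, Hz (assumed)

def demultiplex(stack):
    """stack: (T, H, W) grayscale frames. Returns a dict of per-color images."""
    T = stack.shape[0]
    spectrum = np.fft.rfft(stack, axis=0)          # per-pixel FFT in time
    freqs = np.fft.rfftfreq(T, d=1.0 / fps)
    out = {}
    for name, f in f_mod.items():
        k = np.argmin(np.abs(freqs - f))           # nearest frequency bin
        out[name] = np.abs(spectrum[k]) * 2.0 / T  # demodulated amplitude image
    return out
```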
2015-01-15
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of Renaudot Crater.
2015-01-12
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of Granicus Valles.
2014-12-25
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of Candor Labes.
2015-01-08
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of Coprates Chasma.
Schaeberle Crater - False Color
2015-01-26
The THEMIS VIS camera contains 5 filters. The data from different filters can create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of the floor of Schaeberle Crater, including small dunes.
2015-01-30
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows windstreaks in Daedalia Planum.
2015-01-02
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of Nili Patera.
2014-12-23
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of Atlantis Chaos.
2015-01-01
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of Coprates Chasma.
Hargraves Crater - False Color
2015-01-13
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of Hargraves Crater.
2014-12-18
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of Reull Vallis.
Target recognitions in multiple-camera closed-circuit television using color constancy
NASA Astrophysics Data System (ADS)
Soori, Umair; Yuen, Peter; Han, Ji Wen; Ibrahim, Izzati; Chen, Wentao; Hong, Kan; Merfort, Christian; James, David; Richardson, Mark
2013-04-01
People tracking in crowded scenes from closed-circuit television (CCTV) footage has been a popular and challenging task in computer vision. Due to the limited spatial resolution of CCTV footage, the color of people's dress may offer an alternative feature for their recognition and tracking. However, many factors, such as variable illumination conditions, viewing angles, and camera calibration, may induce illusive modification of the intrinsic color signatures of the target. Our objective is to recognize and track targets in multiple camera views using color as the detection feature, and to understand whether a color constancy (CC) approach may help to reduce these color illusions due to illumination and camera artifacts and thereby improve target recognition performance. We have tested a number of CC algorithms using various color descriptors to assess the efficiency of target recognition on a real multicamera Imagery Library for Intelligent Detection Systems (i-LIDS) data set. Various classifiers have been used for target detection, and the figure of merit for the efficiency of target recognition is the area under the receiver operating characteristic curve (AUROC). We propose two modifications of luminance-based CC algorithms: one with a color transfer mechanism, and the other using a pixel-wise sigmoid function for adaptive dynamic range compression, a method termed enhanced luminance reflectance CC (ELRCC). We found that both algorithms improve the efficiency of target recognition substantially over that of the raw data without CC treatment, and in some cases the ELRCC improves target tracking by over 100% within the AUROC assessment metric. The performance of the ELRCC was assessed over 10 selected targets from three different camera views of the i-LIDS footage, and the averaged target recognition efficiency over all these targets improved by about 54% in AUROC after the data were processed by the proposed ELRCC algorithm. This improvement represents a reduction of the probability of false alarm by about a factor of 5 at a probability of detection of 0.5. Our study concerns mainly the detection of colored targets; issues in the recognition of white or gray targets will be addressed in a forthcoming study.
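For orientation, the luminance-based CC algorithms that ELRCC extends build on simple global estimators; a gray-world baseline (not the proposed ELRCC, whose pixel-wise sigmoid compression is not reproduced here) is sketched below:

```python
# Gray-world color constancy baseline: assume the average scene color is
# achromatic and scale each channel so the channel means become equal.
import numpy as np

def gray_world(image):
    """image: (H, W, 3) float RGB in [0, 1]. Returns a corrected image."""
    means = image.reshape(-1, 3).mean(axis=0)      # per-channel means
    gain = means.mean() / (means + 1e-9)           # equalize the means
    return np.clip(image * gain, 0.0, 1.0)
```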
The MVACS Surface Stereo Imager on Mars Polar Lander
NASA Astrophysics Data System (ADS)
Smith, P. H.; Reynolds, R.; Weinberg, J.; Friedman, T.; Lemmon, M. T.; Tanner, R.; Reid, R. J.; Marcialis, R. L.; Bos, B. J.; Oquest, C.; Keller, H. U.; Markiewicz, W. J.; Kramm, R.; Gliem, F.; Rueffer, P.
2001-08-01
The Surface Stereo Imager (SSI), a stereoscopic, multispectral camera on the Mars Polar Lander, is described in terms of its capabilities for studying the Martian polar environment. The camera's two eyes, separated by 15.0 cm, provide the camera with range-finding ability. Each eye illuminates half of a single CCD detector with a field of view of 13.8° high by 14.3° wide and has 12 selectable filters between 440 and 1000 nm. The
Video-CRM: understanding customer behaviors in stores
NASA Astrophysics Data System (ADS)
Haritaoglu, Ismail; Flickner, Myron; Beymer, David
2013-03-01
This paper describes two real-time computer vision systems created 10 years ago that detect and track people in stores to obtain insights into customer behavior while shopping. The first system uses a single color camera to identify shopping groups in the checkout line. Shopping groups are identified by analyzing inter-body distances coupled with the cashier's activities to detect checkout transaction start and end times. The second system uses multiple overhead narrow-baseline stereo cameras to detect and track people, their body posture, and body parts to understand customer interactions with products, such as a customer picking a product from a shelf. In pilot studies both systems demonstrated real-time performance and sufficient accuracy to enable a more detailed understanding of customer behavior and to extract actionable real-time retail analytics.
NASA Technical Reports Server (NTRS)
Katzberg, S. J.; Kelly, W. L., IV; Rowland, C. W.; Burcher, E. E.
1973-01-01
The facsimile camera is an optical-mechanical scanning device which has become an attractive candidate as an imaging system for planetary landers and rovers. This paper presents electronic techniques which permit the acquisition and reconstruction of high quality images with this device, even under varying lighting conditions. These techniques include a control for low frequency noise and drift, an automatic gain control, a pulse-duration light modulation scheme, and a relative spectral gain control. Taken together, these techniques allow the reconstruction of radiometrically accurate and properly balanced color images from facsimile camera video data. These techniques have been incorporated into a facsimile camera and reproduction system, and experimental results are presented for each technique and for the complete system.
Low-complexity camera digital signal imaging for video document projection system
NASA Astrophysics Data System (ADS)
Hsia, Shih-Chang; Tsai, Po-Shien
2011-04-01
We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
Design of video interface conversion system based on FPGA
NASA Astrophysics Data System (ADS)
Zhao, Heng; Wang, Xiang-jun
2014-11-01
This paper presents an FPGA-based video interface conversion system that enables inter-conversion between digital and analog video. A Cyclone IV series EP4CE22F17C chip from Altera Corporation is used as the main video processing chip, and a single-chip microcontroller is used as the information interaction control unit between the FPGA and PC. The system is able to encode/decode messages from the PC. Technologies including video decoding/encoding circuits, the bus communication protocol, data stream de-interleaving and de-interlacing, color space conversion, and the Camera Link timing generator module of the FPGA are introduced. The system converts the Composite Video Broadcast Signal (CVBS) from the CCD camera into Low Voltage Differential Signaling (LVDS), which is collected by the video processing unit through the Camera Link interface. The processed video signals are then sent to the system output board and displayed on the monitor. The current experiment shows that the system achieves high-quality video conversion with minimal board size.
2014-12-31
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of Ares Vallis.
2014-12-10
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image captured by NASA 2001 Mars Odyssey spacecraft shows part of Coprates Chasma.
2015-01-21
The THEMIS VIS camera contains 5 filters. The data from different filters can create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows small dunes of the floor of Capen Crater in Terra Sabea.
2015-01-20
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows an unnamed crater in Utopia Planitia.
2014-12-08
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image captured by NASA 2001 Mars Odyssey spacecraft shows part of Hebes Chasma.
2015-01-14
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows an unnamed crater in Acidalia Planitia.
2015-01-07
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows a portion of Kasei Vallis.
2014-12-09
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image captured by NASA 2001 Mars Odyssey spacecraft shows part of Melas Chasma.
2014-12-11
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image captured by NASA 2001 Mars Odyssey spacecraft shows part of Coprates Chasma.
2014-12-26
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of the region near Nili Fossae.
2014-12-16
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of Eos Chasma.
2015-01-06
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows the southern flank of Ascraeus Mons.
2015-01-09
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows a region in Syrtis Major.
2015-05-08
NASA's Curiosity Mars rover recorded this view of the sun setting at the close of the mission's 956th Martian day, or sol (April 15, 2015), from the rover's location in Gale Crater. This was the first sunset observed in color by Curiosity. The image comes from the left-eye camera of the rover's Mast Camera (Mastcam). The color has been calibrated and white-balanced to remove camera artifacts. Mastcam sees color very similarly to what human eyes see, although it is actually a little less sensitive to blue than people are. Dust in the Martian atmosphere has fine particles that permit blue light to penetrate the atmosphere more efficiently than longer-wavelength colors. That causes the blue colors in the mixed light coming from the sun to stay closer to the sun's part of the sky, compared to the wider scattering of yellow and red colors. The effect is most pronounced near sunset, when light from the sun passes through a longer path in the atmosphere than it does at mid-day. Malin Space Science Systems, San Diego, built and operates the rover's Mastcam. NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology, Pasadena, manages the Mars Science Laboratory Project for NASA's Science Mission Directorate, Washington. JPL designed and built the project's Curiosity rover. http://photojournal.jpl.nasa.gov/catalog/PIA19400
Prototype color field sequential television lens assembly
NASA Technical Reports Server (NTRS)
1974-01-01
The design, development, and evaluation of a prototype modular lens assembly with a self-contained field sequential color wheel is presented. The design of a color wheel of maximum efficiency, the selection of spectral filters, and the design of a quiet, efficient wheel drive system are included. Design tradeoffs considered for each aspect of the modular assembly are discussed. Emphasis is placed on achieving a design which can be attached directly to an unmodified camera, thus permitting use of the assembly in evaluating various candidate camera and sensor designs. A technique is described which permits maintaining high optical efficiency with an unmodified camera. A motor synchronization system is developed which requires only the vertical synchronization signal as a reference frequency input. Equations and tradeoff curves are developed to permit optimizing the filter wheel aperture shapes for a variety of different design conditions.
Optical tests for using smartphones inside medical devices
NASA Astrophysics Data System (ADS)
Bernat, Amir S.; Acobas, Jennifer K.; Phang, Ye Shang; Hassan, David; Bolton, Frank J.; Levitz, David
2018-02-01
Smartphones are currently used in many medical applications and are increasingly being integrated into medical imaging devices. The regulatory requirements in existence today, however, particularly the standardization of smartphone imaging through validation and verification testing, only partially cover imaging characteristics of a smartphone. Specifically, it has been shown that smartphone camera specifications are of sufficient quality for medical imaging, and there are devices that comply with the FDA's regulatory requirements for a medical device, covering characteristics such as field of view, direction of view, optical resolution, and optical distortion. However, these regulatory requirements do not call specifically for color testing. Images of the same object taken with automatic settings or under different light sources can show different color compositions; experimental results showing such differences are presented. Under some circumstances, such differences in color composition could lead to incorrect diagnoses. It is therefore critical to control the smartphone camera and illumination parameters properly. This paper examines different smartphone camera settings that affect image quality and color composition. To test and select the correct settings, a test methodology is proposed. It aims at evaluating and testing image color correctness and white balance settings for mobile phones and LED light sources. Emphasis is placed on color consistency and deviation from gray values, specifically by evaluating ΔC values based on the CIEL*a*b* color space. Results show that such standardization minimizes differences in color composition and thus could reduce the risk of a wrong diagnosis.
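The proposed ΔC/ΔE evaluation can be sketched with standard color science tooling; the sketch below assumes sRGB input and uses the simple CIE76 color difference, which may differ from the authors' exact computation:

```python
# Chroma deviation (ΔC) of a nominally gray patch and CIE76 ΔE between two
# captures of the same patch, e.g. under two camera settings or light sources.
import numpy as np
from skimage.color import rgb2lab   # assumes sRGB input in [0, 1]

def delta_c(rgb_patch):
    """rgb_patch: (H, W, 3) floats. Distance of mean color from a*=b*=0."""
    lab = rgb2lab(rgb_patch).reshape(-1, 3).mean(axis=0)
    return np.hypot(lab[1], lab[2])

def delta_e(rgb_a, rgb_b):
    """CIE76 ΔE*ab between the mean colors of two captures."""
    la = rgb2lab(rgb_a).reshape(-1, 3).mean(axis=0)
    lb = rgb2lab(rgb_b).reshape(-1, 3).mean(axis=0)
    return np.linalg.norm(la - lb)
```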
Image quality evaluation of medical color and monochrome displays using an imaging colorimeter
NASA Astrophysics Data System (ADS)
Roehrig, Hans; Gu, Xiliang; Fan, Jiahua
2012-10-01
The purpose of this presentation is to demonstrate the means by which the image quality of color and monochrome displays can be examined with respect to MTF (Modulation Transfer Function) and NPS (Noise Power Spectrum). Past indications were that color displays could affect clinical performance negatively compared to monochrome displays; however, reference (1) was not based on measurements made with a colorimeter. Colorimeters such as the PM-1423 are now available that offer higher sensitivity and color accuracy than traditional CCD cameras. This paper focuses on colorimeter measurements of the physical characteristics of the spatial resolution and noise performance of color and monochrome medical displays; the data will subsequently be submitted to an ROC study for presentation at a future SPIE conference. Specifically, MTF and NPS were evaluated and compared at different digital driving levels (DDL) between the two medical displays. Measurements of color image quality were made with an imaging colorimeter. Imaging colorimetry is ideally suited to FPD measurement because imaging systems capture spatial data, generating millions of data points in a single measurement operation. The imaging colorimeter used was the PM-1423 from Radiant Imaging. It uses full-frame CCDs with 100% fill factor, which makes it very suitable for measuring the luminance and chrominance of individual LCD pixels and sub-pixels on an LCD display. The CCDs are 14-bit, thermoelectrically cooled, temperature stabilized, and scientific grade.
Hyperspectral imaging using a color camera and its application for pathogen detection
NASA Astrophysics Data System (ADS)
Yoon, Seung-Chul; Shin, Tae-Sung; Heitschmidt, Gerald W.; Lawrence, Kurt C.; Park, Bosoon; Gamble, Gary
2015-02-01
This paper reports the results of a feasibility study for the development of a hyperspectral image recovery (reconstruction) technique using a RGB color camera and regression analysis in order to detect and classify colonies of foodborne pathogens. The target bacterial pathogens were the six representative non-O157 Shiga-toxin producing Escherichia coli (STEC) serogroups (O26, O45, O103, O111, O121, and O145) grown in Petri dishes of Rainbow agar. The purpose of the feasibility study was to evaluate whether a DSLR camera (Nikon D700) could be used to predict hyperspectral images in the wavelength range from 400 to 1,000 nm and even to predict the types of pathogens using a hyperspectral STEC classification algorithm that was previously developed. Unlike many other studies using color charts with known and noise-free spectra for training reconstruction models, this work used hyperspectral and color images, separately measured by a hyperspectral imaging spectrometer and the DSLR color camera. The color images were calibrated (i.e. normalized) to relative reflectance, subsampled and spatially registered to match with counterpart pixels in hyperspectral images that were also calibrated to relative reflectance. Polynomial multivariate least-squares regression (PMLR) was previously developed with simulated color images. In this study, partial least squares regression (PLSR) was also evaluated as a spectral recovery technique to minimize multicollinearity and overfitting. The two spectral recovery models (PMLR and PLSR) and their parameters were evaluated by cross-validation. The QR decomposition was used to find a numerically more stable solution of the regression equation. The preliminary results showed that PLSR was more effective especially with higher order polynomial regressions than PMLR. The best classification accuracy measured with an independent test set was about 90%. The results suggest the potential of cost-effective color imaging using hyperspectral image classification algorithms for rapidly differentiating pathogens in agar plates.
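The PLSR recovery step maps each calibrated RGB triplet to a full spectrum learned from registered pixel pairs; a minimal sketch with stand-in random data and illustrative dimensions:

```python
# PLSR spectral recovery: fit RGB -> full-spectrum regression on registered
# (color, hyperspectral) pixel pairs, then predict a cube from a color image.
# Random arrays stand in for the calibrated training data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

n_train, n_bands = 5000, 121               # e.g. 400-1000 nm at 5 nm steps
X = np.random.rand(n_train, 3)             # calibrated RGB reflectance
Y = np.random.rand(n_train, n_bands)       # co-registered hyperspectral pixels

pls = PLSRegression(n_components=3).fit(X, Y)   # <= 3 components for 3 features

rgb_image = np.random.rand(480, 640, 3)
cube = pls.predict(rgb_image.reshape(-1, 3)).reshape(480, 640, n_bands)
```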
Improving color constancy by discounting the variation of camera spectral sensitivity
NASA Astrophysics Data System (ADS)
Gao, Shao-Bing; Zhang, Ming; Li, Chao-Yi; Li, Yong-Jie
2017-08-01
It is an ill-posed problem to recover the true scene colors from a color-biased image by discounting the effects of scene illuminant and camera spectral sensitivity (CSS) at the same time. Most color constancy (CC) models have been designed to first estimate the illuminant color, which is then removed from the color-biased image to obtain an image taken under white light, without explicit consideration of the CSS effect on CC. This paper first studies the CSS effect on illuminant estimation arising in inter-dataset-based CC (inter-CC), i.e., training a CC model on one dataset and then testing on another dataset captured by a distinct CSS. We show the clear degradation of existing CC models for inter-CC application. We then propose a simple way to overcome such degradation by first quickly learning a transform matrix between the two distinct CSSs (CSS-1 and CSS-2). The learned matrix is then used to convert the data (including the illuminant ground truth and the color-biased images) rendered under CSS-1 into CSS-2, so the CC model can be trained and applied on the color-biased images under CSS-2, without the burdensome acquisition of a training set under CSS-2. Extensive experiments on synthetic and real images show that our method can clearly improve the inter-CC performance for traditional CC algorithms. We suggest that by taking the CSS effect into account, it is more likely to obtain truly color-constant images invariant to changes of both illuminant and camera sensors.
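The central trick, learning a transform between two camera spectral sensitivities, reduces to a least-squares fit when the transform is taken to be a 3x3 matrix; the synthetic data below are placeholders.

```python
import numpy as np

# RGB responses of the same training scenes under the two CSSs
rng = np.random.default_rng(0)
rgb_css1 = rng.random((500, 3))
M_true = np.array([[0.90, 0.10, 0.00],
                   [0.05, 1.10, -0.05],
                   [0.00, 0.20, 0.80]])   # placeholder ground truth
rgb_css2 = rgb_css1 @ M_true.T

# Fit M so that rgb_css1 @ M approximates rgb_css2, then use it to
# convert CSS-1 training data (images and illuminants) into CSS-2 space.
M, *_ = np.linalg.lstsq(rgb_css1, rgb_css2, rcond=None)
converted = rgb_css1 @ M
```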
Endockscope: using mobile technology to create global point of service endoscopy.
Sohn, William; Shreim, Samir; Yoon, Renai; Huynh, Victor B; Dash, Atreya; Clayman, Ralph; Lee, Hak J
2013-09-01
Recent advances and the widespread availability of smartphones have ushered in a new wave of innovations in healthcare. We present our initial experience with Endockscope, a new docking system that optimizes the coupling of the iPhone 4S with modern endoscopes. Using the United States Air Force resolution target, we compared the image resolution (line pairs/mm) of a flexible cystoscope coupled to the Endockscope+iPhone with the Storz high definition (HD) camera (H3-Z Versatile). We then used the Munsell ColorChecker chart to compare the color resolution with a 0° laparoscope. Furthermore, 12 expert endoscopists blindly compared and evaluated images from a porcine model using a cystoscope and ureteroscope for both systems. Finally, we also compared the cost (average of two company-listed prices) and weight (lb) of the two systems. Overall, the image resolution allowed by the Endockscope was identical to that of the traditional HD camera (4.49 vs 4.49 lp/mm). Red (ΔE=9.26 vs 9.69) demonstrated better color resolution for the iPhone, but green (ΔE=7.76 vs 10.95) and blue (ΔE=12.35 vs 14.66) revealed better color resolution with the Storz HD camera. Expert reviews of cystoscopic images acquired with the HD camera were superior in image, color, and overall quality (P=0.002, 0.042, and 0.003). In contrast, the ureteroscopic reviews yielded no statistical difference in image, color, and overall quality (P=1, 0.203, and 0.120). The overall cost of the Endockscope+iPhone was $154, compared with $46,623 for a standard HD system. The weight of the mobile-coupled system was 0.47 lb, versus 1.01 lb for the Storz HD camera. Endockscope demonstrated the feasibility of coupling endoscopes to a smartphone. The lighter and inexpensive Endockscope acquired images of the same resolution and acceptable color resolution. When evaluated by expert endoscopists, the overall quality of the images was equivalent for flexible ureteroscopy and somewhat inferior, but still acceptable, for flexible cystoscopy.
2014-12-12
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of an unnamed crater in Tyrrhena Terra.
2015-01-16
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of the floor of Pollack Crater.
2014-12-29
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of Sulci Gordii east of Olympus Mons.
Becquerel Crater - False Color
2015-03-17
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of the floor of Becquerel Crater.
Antoniadi Crater - False Color
2014-12-22
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of the floor of Antoniadi Crater.
2014-12-30
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of the flank of Hecates Tholus.
Calahorra Crater - False Color
2014-12-24
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of Calahorra Crater in Chryse Planitia.
2015-01-28
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows the margin of the north polar cap.
2015-01-19
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows windstreaks on the floor of Gusev Crater.
2015-07-15
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of the plains of Terra Cimmeria.
2015-01-05
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of the caldera at the summit of Olympus Mons.
2015-05-25
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of an unnamed channel in Terra Cimmeria.
2015-05-26
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of an unnamed crater in Terra Cimmeria.
Ares Vallis Tributary - False Color
2014-12-17
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of a tributary channel that empties into Ares Vallis.
2014-12-19
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of Daga Vallis on Eos Mensa.
2014-12-15
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of the floor of Proctor Crater.
2015-01-27
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of the interior of Ganges Chasma.
3D Point Cloud Model Colorization by Dense Registration of Digital Images
NASA Astrophysics Data System (ADS)
Crombez, N.; Caron, G.; Mouaddib, E.
2015-02-01
Architectural heritage is a historic and artistic property which has to be protected, preserved, restored, and shown to the public. Modern tools like 3D laser scanners are more and more used in heritage documentation. Most of the time, the 3D laser scanner is complemented by a digital camera which is used to enrich the accurate geometric information with the scanned objects' colors. However, the photometric quality of the acquired point clouds is generally rather low because of several problems presented below. We propose an accurate method for registering digital images acquired from any viewpoint on point clouds, which is a crucial step for good colorization by color projection. We express this image-to-geometry registration as a pose estimation problem. The camera pose is computed using the entire image intensities under a photometric visual and virtual servoing (VVS) framework. The camera extrinsic and intrinsic parameters are automatically estimated. Because we estimate the intrinsic parameters, we do not need any information about the camera that took the digital image. Finally, when the point cloud model and the digital image are correctly registered, we project the 3D model into the digital image frame and assign new colors to the visible points. The performance of the approach is proven in simulation and in real experiments on indoor and outdoor datasets of the cathedral of Amiens, which highlight the success of our method, leading to point clouds with better photometric quality and resolution.
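The final colorization step, projecting the registered model into the image frame and sampling colors, might look like this pinhole-model sketch (no occlusion handling; K, R, and t are assumed to come from the VVS registration described above).

```python
import numpy as np

def colorize_points(points_w, image, K, R, t):
    """Project 3D world points into a registered image and sample colors.
    K: 3x3 intrinsics; (R, t): world-to-camera pose."""
    pts_cam = (R @ points_w.T + t.reshape(3, 1)).T
    in_front = pts_cam[:, 2] > 0                  # points ahead of the camera
    uvw = (K @ pts_cam[in_front].T).T
    uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)
    h, w = image.shape[:2]
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    colors = np.zeros((points_w.shape[0], 3), dtype=image.dtype)
    idx = np.flatnonzero(in_front)[ok]
    colors[idx] = image[uv[ok, 1], uv[ok, 0]]     # row = v, column = u
    return colors
```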
Seo, Soo Hong; Kim, Jae Hwan; Kim, Ji Woong; Kye, Young Chul; Ahn, Hyo Hyun
2011-02-01
Digital photography can be used to measure skin color colorimetrically when combined with proper techniques. To better understand the settings of digital photography for the evaluation and measurement of skin colors, we used a tungsten lamp with filters and the custom white balance (WB) function of a digital camera. All colored squares on a color chart were photographed under each original and filtered light, converted into CIELAB coordinates to produce the calibration method for each given light setting, and compared statistically with reference coordinates obtained using a reflectance spectrophotometer. The results were summarized for typical color groups, such as skin colors. We compared these results for the fixed vs. custom WB of the digital camera. The accuracy of color measurement was improved when using light with a proper color temperature conversion filter. The skin colors from color charts could be measured more accurately using a fixed WB. In vivo measurement of skin color was easy and possible with our method and settings. The color temperature conversion filter that produced daylight-like light from the tungsten lamp was the best choice when combined with fixed WB for the measurement of colors and acceptable photographs. © 2010 John Wiley & Sons A/S.
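Scoring camera colors against spectrophotometer references requires CIELAB coordinates and a color-difference metric. A minimal sketch, assuming the camera output is already rendered to sRGB under D65 (the study instead builds a calibration per light setting), using the standard sRGB-to-XYZ matrix and the CIE 1976 ΔE:

```python
import numpy as np

def srgb_to_lab(rgb, white=(0.9505, 1.0, 1.089)):
    """sRGB in [0, 1] -> CIELAB under a D65 white point."""
    rgb = np.asarray(rgb, dtype=float)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = lin @ M.T / np.asarray(white)
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def delta_e76(lab1, lab2):
    """CIE 1976 color difference between two Lab coordinates."""
    return np.linalg.norm(np.asarray(lab1) - np.asarray(lab2), axis=-1)
```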
High-performance camera module for fast quality inspection in industrial printing applications
NASA Astrophysics Data System (ADS)
Fürtler, Johannes; Bodenstorfer, Ernst; Mayer, Konrad J.; Brodersen, Jörg; Heiss, Dorothea; Penz, Harald; Eckel, Christian; Gravogl, Klaus; Nachtnebel, Herbert
2007-02-01
Today, printing products which must meet the highest quality standards, e.g., banknotes, stamps, or vouchers, are automatically checked by optical inspection systems. Typically, the examination of fine details of the print or security features demands images taken from various perspectives, with different spectral sensitivities (visible, infrared, ultraviolet), and with high resolution. Consequently, the inspection system is equipped with several cameras and has to cope with an enormous data rate to be processed in real time. Hence, it is desirable to move image processing tasks into the camera to reduce the amount of data which has to be transferred to the (central) image processing system. The idea is to transfer only relevant information, i.e., features of the image instead of the raw image data from the sensor. These features are then further processed. In this paper a color line-scan camera for line rates up to 100 kHz is presented. The camera is based on a commercial CMOS (complementary metal oxide semiconductor) area image sensor and a field programmable gate array (FPGA). It implements extraction of image features which are well suited to detect print flaws like blotches of ink, color smears, splashes, spots, and scratches. The camera design and several image processing methods implemented on the FPGA are described, including flat field correction, compensation of geometric distortions, color transformation, as well as decimation and neighborhood operations.
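Off-line, the flaw-detection idea (comparing the print against a flaw-free reference) can be approximated as below; the threshold, minimum blob area, and scipy implementation are illustrative assumptions, since the camera performs its feature extraction on the FPGA.

```python
import numpy as np
from scipy import ndimage

def detect_flaws(scan, golden, threshold=30, min_area=20):
    """Flag candidate print flaws (blotches, splashes, scratches) by
    thresholding the deviation of a registered RGB scan from a flaw-free
    'golden' template, then keeping sufficiently large blobs."""
    diff = np.abs(scan.astype(np.int16) - golden.astype(np.int16)).max(axis=-1)
    mask = diff > threshold
    labels, n = ndimage.label(mask)              # connected components
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.flatnonzero(sizes >= min_area) + 1 # labels of real flaws
    slices = ndimage.find_objects(labels)
    return [slices[i - 1] for i in keep]         # bounding boxes
```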
Monocular Stereo Measurement Using High-Speed Catadioptric Tracking
Hu, Shaopeng; Matsumoto, Yuji; Takaki, Takeshi; Ishii, Idaku
2017-01-01
This paper presents a novel concept of real-time catadioptric stereo tracking using a single ultrafast mirror-drive pan-tilt active vision system that can simultaneously switch between hundreds of different views in a second. By accelerating video-shooting, computation, and actuation at the millisecond-granularity level for time-division multithreaded processing in ultrafast gaze control, the active vision system can function virtually as two or more tracking cameras with different views. It enables a single active vision system to act as virtual left and right pan-tilt cameras that can simultaneously shoot a pair of stereo images for the same object to be observed at arbitrary viewpoints by switching the direction of the mirrors of the active vision system frame by frame. We developed a monocular galvano-mirror-based stereo tracking system that can switch between 500 different views in a second, and it functions as a catadioptric active stereo with left and right pan-tilt tracking cameras that can virtually capture 8-bit color 512×512 images each operating at 250 fps to mechanically track a fast-moving object with a sufficient parallax for accurate 3D measurement. Several tracking experiments for moving objects in 3D space are described to demonstrate the performance of our monocular stereo tracking system. PMID:28792483
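Given calibrated projection matrices for the virtual left and right views, each tracked point can be recovered by standard linear (DLT) triangulation; the sketch below assumes 3x4 projection matrices for the two mirror poses.

```python
import numpy as np

def triangulate(P_left, P_right, uv_left, uv_right):
    """Linear triangulation of one point from a virtual stereo pair."""
    u1, v1 = uv_left
    u2, v2 = uv_right
    A = np.vstack([u1 * P_left[2] - P_left[0],
                   v1 * P_left[2] - P_left[1],
                   u2 * P_right[2] - P_right[0],
                   v2 * P_right[2] - P_right[1]])
    _, _, Vt = np.linalg.svd(A)          # null-space solution
    X = Vt[-1]
    return X[:3] / X[3]                  # inhomogeneous 3D coordinates
```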
A dual-band adaptor for infrared imaging.
McLean, A G; Ahn, J-W; Maingi, R; Gray, T K; Roquemore, A L
2012-05-01
A novel imaging adaptor providing the capability to extend a standard single-band infrared (IR) camera into a two-color or dual-band device has been developed for application to high-speed IR thermography on the National Spherical Tokamak Experiment (NSTX). Temperature measurement with two-band infrared imaging has the advantage of being mostly independent of surface emissivity, which may vary significantly in the liquid lithium divertor installed on NSTX as compared to that of an all-carbon first wall. In order to take advantage of the high-speed capability of the existing IR camera at NSTX (1.6-6.2 kHz frame rate), a commercial visible-range optical splitter was extensively modified to operate in the medium wavelength and long wavelength IR. This two-band IR adapter utilizes a dichroic beamsplitter, which reflects 4-6 μm wavelengths and transmits 7-10 μm wavelength radiation, each with >95% efficiency and projects each IR channel image side-by-side on the camera's detector. Cutoff filters are used in each IR channel, and ZnSe imaging optics and mirrors optimized for broadband IR use are incorporated into the design. In-situ and ex-situ temperature calibration and preliminary data of the NSTX divertor during plasma discharges are presented, with contrasting results for dual-band vs. single-band IR operation.
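The emissivity independence noted above is the classic two-color (ratio) pyrometry argument: for a gray body, emissivity cancels in the ratio of band radiances. A sketch with illustrative band centers inside the adapter's 4-6 μm and 7-10 μm channels (not its exact effective wavelengths):

```python
import numpy as np
from scipy.optimize import brentq

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck(wl, T):
    """Blackbody spectral radiance at wavelength wl (m), temperature T (K)."""
    return (2 * H * C**2 / wl**5) / np.expm1(H * C / (wl * KB * T))

def ratio_temperature(ratio, wl1=5e-6, wl2=8.5e-6):
    """Invert the measured band-radiance ratio for temperature; for a gray
    body the (unknown) emissivity divides out of the ratio."""
    return brentq(lambda T: planck(wl1, T) / planck(wl2, T) - ratio,
                  250.0, 2500.0)

# Round trip: synthesize a 600 K gray-body measurement and invert it
r = planck(5e-6, 600.0) / planck(8.5e-6, 600.0)
print(ratio_temperature(r))   # ~600.0 K
```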
How Phoenix Creates Color Images (Animation)
NASA Technical Reports Server (NTRS)
2008-01-01
[figure removed for brevity, see original site] This simple animation shows how a color image is made from images taken by Phoenix. The Surface Stereo Imager captures the same scene with three different filters. The images are sent to Earth in black and white and the color is added by mission scientists. By contrast, consumer digital cameras and cell phones have filters built in and do all of the color processing within the camera itself. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.
NASA Astrophysics Data System (ADS)
Takeuchi, Eric B.; Flint, Graham W.; Bergstedt, Robert; Solone, Paul J.; Lee, Dicky; Moulton, Peter F.
2001-03-01
Electronic cinema projectors are being developed that use a digital micromirror device (DMD™) to produce the image. Photera Technologies has developed a new architecture that produces truly digital imagery using discrete pulse trains of red, green, and blue light in combination with a DMD™, wherein the number of pulses that are delivered to the screen during a given frame can be defined in a purely digital fashion. To achieve this, a pulsed RGB laser technology pioneered by Q-Peak is combined with a novel projection architecture that we refer to as Laser Digital Camera™. This architecture provides imagery wherein, during the time interval of each frame, individual pixels on the screen receive between zero and 255 discrete pulses of each color; a circumstance which yields 24-bit color. Greater color depth, or increased frame rate, is achievable by increasing the pulse rate of the laser. Additionally, in the context of multi-screen theaters, a similar architecture permits our synchronously pulsed RGB source to simultaneously power three screens in a color-sequential manner, thereby providing an efficient use of photons, together with the simplifications which derive from using a single DMD™ chip in each projector.
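A back-of-envelope check of the pulse budget implied by this scheme (the 24 fps cinema frame rate is an assumption):

```python
# 8 bits per primary -> up to 255 pulses per color per frame
frame_rate = 24                  # assumed cinema frame rate (fps)
pulses_per_color = 255           # 2**8 - 1 intensity steps
colors = 3                       # R, G, B delivered as discrete pulse trains
print(frame_rate * pulses_per_color * colors)   # 18360 pulses/s minimum
```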
Daylight coloring for monochrome infrared imagery
NASA Astrophysics Data System (ADS)
Gabura, James
2015-05-01
The effectiveness of infrared imagery in poor visibility situations is well established, and the range of applications is expanding as we enter a new era of inexpensive thermal imagers for mobile phones. However, there is a problem: the counterintuitive reflectance characteristics of various common scene elements can cause slowed reaction times and impaired situational awareness, consequences that can be especially detrimental in emergency situations. While multiband infrared sensors can be used, they are inherently more costly. Here we propose a technique for adding a daylight color appearance to single-band infrared images, using the normally overlooked property of local image texture. The simple method described here is illustrated with colorized images from the visible red and long-wave infrared bands. Our colorizing process not only imparts a natural daylight appearance to infrared images but also enhances the contrast and visibility of otherwise obscure detail. We anticipate that this colorizing method will lead to a better user experience, faster reaction times, and improved situational awareness for a growing community of infrared camera users. A natural extension of our process could expand upon its texture-discerning feature by adding specialized filters for discriminating specific targets.
de Lasarte, Marta; Pujol, Jaume; Arjona, Montserrat; Vilaseca, Meritxell
2007-01-10
We present an optimized linear algorithm for the spatial nonuniformity correction of a CCD color camera's imaging system and the experimental methodology developed for its implementation. We assess the influence of the algorithm's variables on the quality of the correction, that is, the dark image, the base correction image, and the reference level, and the range of application of the correction using a uniform radiance field provided by an integrator cube. The best spatial nonuniformity correction is achieved by having a nonzero dark image, by using an image with a mean digital level placed in the linear response range of the camera as the base correction image and taking the mean digital level of the image as the reference digital level. The response of the CCD color camera's imaging system to the uniform radiance field shows a high level of spatial uniformity after the optimized algorithm has been applied, which also allows us to achieve a high-quality spatial nonuniformity correction of captured images under different exposure conditions.
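The correction described reduces to a dark-subtract, gain-normalize, and rescale operation. A minimal sketch, with the reference level defaulting to the mean digital level, as the abstract recommends:

```python
import numpy as np

def correct_nonuniformity(raw, dark, base, ref_level=None):
    """Linear spatial nonuniformity correction: subtract a (nonzero) dark
    image, normalize by a base correction image captured on the uniform
    radiance field, and rescale to a reference digital level."""
    raw, dark = raw.astype(np.float64), dark.astype(np.float64)
    gain = base.astype(np.float64) - dark
    if ref_level is None:
        ref_level = gain.mean()              # mean DL as the reference level
    return (raw - dark) / np.maximum(gain, 1e-6) * ref_level
```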
NASA Technical Reports Server (NTRS)
2004-01-01
[figure removed for brevity, see original site] This color mosaic taken on May 21, 25 and 26, 2004, by the panoramic camera on NASA's Mars Exploration Rover Spirit was acquired from a position roughly three-fourths of the way between 'Bonneville Crater' and the base of the 'Columbia Hills.' The area is within a low thermal inertia unit (an area that heats up and cools off quickly) identified from orbit by the Mars Odyssey thermal emission imaging system instrument. The rover was roughly 600 meters (1,968 feet) from the base of the hills. This mosaic, referred to as the 'Santa Anita Panorama,' is comprised of 64 pointings, acquired with six of the panoramic camera's color filters, including one designed specifically to allow comparisons between orbital and surface brightness data. A total of 384 images were acquired as part of this panorama. The mosaic is an approximate true-color rendering constructed from images using the camera's 750-, 530- and 480-nanometer filters, and is presented at the full resolution of the camera.
241-AZ-101 Waste Tank Color Video Camera System Shop Acceptance Test Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
WERRY, S.M.
2000-03-23
This report includes shop acceptance test results. The test was performed prior to installation at tank AZ-101. Both the camera system and camera purge system were originally sought and procured as a part of initial waste retrieval project W-151.
2015-06-26
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of the rim and floor of Saheki Crater.
Calibration Image of Earth by Mars Color Imager
2005-08-22
Three days after the Mars Reconnaissance Orbiter's Aug. 12, 2005, launch, the NASA spacecraft was pointed toward Earth and the Mars Color Imager camera was powered up to acquire a suite of color and ultraviolet images of Earth and the Moon.
NASA Astrophysics Data System (ADS)
Hayashida, T.; Yonai, J.; Kitamura, K.; Arai, T.; Kurita, T.; Tanioka, K.; Maruyama, H.; Etoh, T. Goji; Kitagawa, S.; Hatade, K.; Yamaguchi, T.; Takeuchi, H.; Iida, K.
2008-02-01
We are advancing the development of ultrahigh-speed, high-sensitivity CCDs for broadcast use that are capable of capturing smooth slow-motion videos in vivid colors even where lighting is limited, such as at professional baseball games played at night. We have already developed a 300,000 pixel, ultrahigh-speed CCD, and a single CCD color camera that has been used for sports broadcasts and science programs using this CCD. However, there are cases where even higher sensitivity is required, such as when using a telephoto lens during a baseball broadcast or a high-magnification microscope during science programs. This paper provides a summary of our experimental development aimed at further increasing the sensitivity of CCDs using the light-collecting effects of a microlens array.
Two-Color Laser Speckle Shift Strain Measurement System
NASA Technical Reports Server (NTRS)
Tuma, Margaret L.; Krasowski, Michael J.; Oberle, Lawrence G.; Greer, Lawrence C., III; Spina, Daniel; Barranger, John
1996-01-01
A two-color laser speckle shift strain measurement system based on the technique of Yamaguchi was designed. The dual-wavelength light output from an argon-ion laser was coupled into two separate single-mode optical fibers (patchcords). The output of the patchcords is incident on the test specimen (here, a structural fiber). Strain on the fiber, in one direction, is produced using an Instron 4502. Shifting interference patterns, or speckle patterns, are detected at real-time rates using two CCD cameras, with image processing performed by a hardware correlator. Strain detected in fibers with diameters from 21 microns to 143 microns is expected to be resolved to 15 με. The system was designed to be compact and robust and does not require surface preparation of the structural fibers.
Infrared stereo calibration for unmanned ground vehicle navigation
NASA Astrophysics Data System (ADS)
Harguess, Josh; Strange, Shawn
2014-06-01
The problem of calibrating two color cameras as a stereo pair has been heavily researched, and many off-the-shelf software packages, such as Robot Operating System and OpenCV, include calibration routines that work in most cases. However, the problem of calibrating two infrared (IR) cameras for the purposes of sensor fusion and point cloud generation is relatively new, and many challenges exist. We present a comparison of color camera and IR camera stereo calibration using data from an unmanned ground vehicle. There are two main challenges in IR stereo calibration: the calibration board (material, design, etc.) and the accuracy of calibration pattern detection. We present our analysis of these challenges along with our IR stereo calibration methodology. Finally, we present our results both visually and analytically with computed reprojection errors.
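Since the abstract points to OpenCV's routines for the color-camera case, that baseline might be reproduced roughly as follows; the `pairs` list, board geometry, and square size are hypothetical placeholders, and for IR the checkerboard must provide thermal contrast (e.g., a heated backing).

```python
import cv2
import numpy as np

def stereo_calibrate(pairs, image_size, pattern=(9, 6), square=0.025):
    """Stereo-calibrate from (left, right) grayscale checkerboard views."""
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square
    obj_pts, left_pts, right_pts = [], [], []
    for left, right in pairs:
        ok_l, c_l = cv2.findChessboardCorners(left, pattern)
        ok_r, c_r = cv2.findChessboardCorners(right, pattern)
        if ok_l and ok_r:
            obj_pts.append(objp); left_pts.append(c_l); right_pts.append(c_r)
    # Calibrate each camera alone, then fix intrinsics for the stereo solve
    _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, image_size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, image_size, None, None)
    rms, *_, R, T, E, F = cv2.stereoCalibrate(
        obj_pts, left_pts, right_pts, K1, d1, K2, d2, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return rms, R, T   # RMS reprojection error and inter-camera extrinsics
```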
Multispectral photography for earth resources
NASA Technical Reports Server (NTRS)
Wenderoth, S.; Yost, E.; Kalia, R.; Anderson, R.
1972-01-01
A guide for producing accurate multispectral results for earth resource applications is presented along with theoretical and analytical concepts of color and multispectral photography. Topics discussed include: capabilities and limitations of color and color infrared films; image color measurements; methods of relating ground phenomena to film density and color measurement; sensitometry; considerations in the selection of multispectral cameras and components; and mission planning.
You're on Camera---in Color; A Television Handbook for Extension Workers.
ERIC Educational Resources Information Center
Tonkin, Joe
Color television has brought about new concepts of programming and new production requirements. This handbook is designed to aid those Extension workers who are concerned with or will appear on Extension television programs. The book discusses how to make the most of color, what to wear and how to apply makeup for color TV, how colors appear on…
Carded Tow Real-Time Color Assessment: A Spectral Camera-Based System.
Furferi, Rocco; Governi, Lapo; Volpe, Yary; Carfagni, Monica
2016-08-31
One of the most important parameters to be controlled during the production of textile yarns obtained by mixing pre-colored fibers is the color correspondence between the manufactured yarn and a given reference, usually provided by a designer or a customer. Obtaining yarns from raw pre-colored fibers is a complex manufacturing process entailing a number of steps such as laboratory sampling, color recipe corrections, blowing, carding, and spinning. The carding process is the one devoted to transforming a "fuzzy mass" of tufted fibers into a regular mass of untwisted fibers, named "tow". During this process, unfortunately, the correspondence between the color of the tow and the target one cannot be assured, leading to yarns whose color differs from the reference. To solve this issue, the main aim of this work is to provide a system able to perform spectral camera-based real-time measurement of a carded tow, to assess its color correspondence with a reference carded fabric and, at the same time, to monitor the overall quality of the tow during the carding process. Tested against a number of differently colored carded fabrics, the proposed system proved its effectiveness in reliably assessing color correspondence in real time.
Characterization of a digital camera as an absolute tristimulus colorimeter
NASA Astrophysics Data System (ADS)
Martinez-Verdu, Francisco; Pujol, Jaume; Vilaseca, Meritxell; Capilla, Pascual
2003-01-01
An algorithm is proposed for the spectral and colorimetric characterization of digital still cameras (DSC) which allows them to be used as tele-colorimeters with CIE-XYZ color output, in cd/m2. The spectral characterization consists of the calculation of the color-matching functions from the previously measured spectral sensitivities. The colorimetric characterization consists of transforming the RGB digital data into absolute tristimulus values CIE-XYZ (in cd/m2) under variable and unknown spectroradiometric conditions. Thus, in the first stage, a gray balance is applied to the RGB digital data to convert them into RGB relative colorimetric values. In the second stage, an algorithm of luminance adaptation vs. lens aperture is inserted in the basic colorimetric profile. Capturing the ColorChecker chart under different light sources, the DSC color analysis accuracy indexes, both in a raw state and with corrections from a linear model of color correction, were evaluated using the Pointer '86 color reproduction index with the unrelated Hunt '91 color appearance model. The results indicate that our digital image capture device, in raw performance, lightens and desaturates the colors.
Applied learning-based color tone mapping for face recognition in video surveillance system
NASA Astrophysics Data System (ADS)
Yew, Chuu Tian; Suandi, Shahrel Azmin
2012-04-01
In this paper, we present an applied learning-based color tone mapping technique for video surveillance systems. This technique can be applied to both color and grayscale surveillance images. The basic idea is to learn the color or intensity statistics from a training dataset of photorealistic images of the candidates appearing in the surveillance images, and remap the color or intensity of the input image so that its color or intensity statistics match those in the training dataset. It is well known that differences in commercial surveillance camera models and in the signal processing chipsets used by different manufacturers cause the color and intensity of the images to differ from one another, thus creating additional challenges for face recognition in video surveillance systems. Using multi-class support vector machines as the classifier on a publicly available video surveillance camera database, namely the SCface database, this approach is validated and compared to the results of using a holistic approach on grayscale images. The results show that this technique is suitable for improving the color or intensity quality of video surveillance systems for face recognition.
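A crude stand-in for the statistics-remapping idea is plain histogram matching against the pooled training images; the sketch below uses scikit-image (recent versions expose `channel_axis`) and is not the paper's learning-based method.

```python
import numpy as np
from skimage import exposure

def tone_map_to_training(frame, training_imgs):
    """Remap a surveillance frame so its per-channel histograms match a
    pool of photorealistic training images of the enrolled candidates."""
    pool = np.concatenate([im.reshape(-1, 3) for im in training_imgs])
    reference = pool.reshape(-1, 1, 3)      # treat the pool as one image
    return exposure.match_histograms(frame, reference, channel_axis=-1)
```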
ERIC Educational Resources Information Center
Nagle, Frederick
1981-01-01
Describes the production and use of color videocassettes with an inexpensive, conventional TV camera and an ordinary petrographic microscope. The videocassettes are used in optical mineralogy and petrology courses. (Author/WB)
Evaluation of color grading impact in restoration process of archive films
NASA Astrophysics Data System (ADS)
Fliegel, Karel; Vítek, Stanislav; Páta, Petr; Janout, Petr; Myslík, Jiří; Pecák, Josef; Jícha, Marek
2016-09-01
Color grading of archive films is a very particular task in the process of their restoration. The ultimate goal of color grading here is to achieve the same look of the movie as intended at the time of its first presentation. The role of the expert restorer, expert group and a digital colorist in this complicated process is to find the optimal settings of the digital color grading system so that the resulting image look is as close as possible to the estimate of the original reference release print adjusted by the expert group of cinematographers. A methodology for subjective assessment of perceived differences between the outcomes of color grading is introduced, and results of a subjective study are presented. Techniques for objective assessment of perceived differences are discussed, and their performance is evaluated using ground truth obtained from the subjective experiment. In particular, a solution based on calibrated digital single-lens reflex camera and subsequent analysis of image features captured from the projection screen is described. The system based on our previous work is further developed so that it can be used for the analysis of projected images. It allows assessing color differences in these images and predict their impact on the perceived difference in image look.
NASA Astrophysics Data System (ADS)
Peltoniemi, Mikko; Aurela, Mika; Böttcher, Kristin; Kolari, Pasi; Loehr, John; Karhu, Jouni; Kubin, Eero; Linkosalmi, Maiju; Melih Tanis, Cemal; Nadir Arslan, Ali
2017-04-01
Ecosystems' potential to provide services, e.g., to sequester carbon, is largely driven by the phenological cycle of vegetation. Timing of phenological events is required for understanding and predicting the influence of climate change on ecosystems and to support various analyses of ecosystem functioning. We established a network of cameras for automated monitoring of the phenological activity of vegetation in boreal ecosystems of Finland. Cameras were mounted at 14 sites, each site having 1-3 cameras. In this study, we used cameras at 11 of these sites to investigate how well networked cameras detect the phenological development of birches (Betula spp.) along the latitudinal gradient. Birches are interesting focal species for the analyses as they are common throughout Finland. In our camera images they often appear in small quantities among the dominant species. Here, we tested whether small scattered birch image elements allow reliable extraction of color indices and changes therein. We compared automatically derived phenological dates from these birch image elements to visually determined dates from the same image time series, and to independent observations recorded in the phenological monitoring network of the same region. Automatically extracted season start dates based on the change of the green color fraction in spring corresponded well with the visually interpreted start of season and field-observed budburst dates. During the declining season, the red color fraction turned out to be superior to green-based indices in predicting leaf yellowing and fall. The latitudinal gradients derived using automated phenological date extraction corresponded well with gradients based on phenological field observations from the same region. We conclude that even small and scattered birch image elements allow reliable extraction of key phenological dates for birch species. Devising cameras for species-specific analyses of phenological timing will be useful for explaining variation in time series of satellite-based indices, and it will also benefit models describing ecosystem functioning at the species or plant functional type level. With the contribution of the LIFE+ financial instrument of the European Union (LIFE12 ENV/FI/000409 Monimet, http://monimet.fmi.fi)
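The color indices behind the date extraction are simple chromatic coordinates. A sketch, assuming a boolean mask marking the birch image elements:

```python
import numpy as np

def chromatic_coordinates(image, roi):
    """Mean green and red chromatic coordinates (gcc, rcc) over a region
    of interest, e.g., a small birch crown element in a camera image."""
    px = image[roi].astype(np.float64)       # N x 3 array of RGB values
    total = px.sum(axis=1)
    total[total == 0] = 1.0                  # guard against division by zero
    gcc = (px[:, 1] / total).mean()          # green fraction (spring signal)
    rcc = (px[:, 0] / total).mean()          # red fraction (autumn signal)
    return gcc, rcc
```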
High-Definition Television (HDTV) Images for Earth Observations and Earth Science Applications
NASA Technical Reports Server (NTRS)
Robinson, Julie A.; Holland, S. Douglas; Runco, Susan K.; Pitts, David E.; Whitehead, Victor S.; Andrefouet, Serge M.
2000-01-01
As part of Detailed Test Objective 700-17A, astronauts acquired Earth observation images from orbit using a high-definition television (HDTV) camcorder. Here we provide a summary of qualitative findings following completion of tests during missions STS (Space Transportation System)-93 and STS-99. We compared HDTV imagery stills to images taken using payload bay video cameras, a Hasselblad film camera, and an electronic still camera. We also evaluated the potential for motion video observations of changes in sunlight and the use of multi-aspect viewing to image aerosols. Spatial resolution and color quality are far superior in HDTV images compared to National Television Systems Committee (NTSC) video images. Thus, HDTV provides the first viable option for video-based remote sensing observations of Earth from orbit. Although under ideal conditions HDTV images have less spatial resolution than medium-format film cameras, such as the Hasselblad, under some conditions on orbit the HDTV images acquired compared favorably with the Hasselblad. Of particular note was the quality of color reproduction in the HDTV images. HDTV and the electronic still camera (ESC) were not compared with matched fields of view, and so spatial resolution could not be compared for the two image types. However, the color reproduction of the HDTV stills was truer than the colors in the ESC images. As HDTV becomes the operational video standard for Space Shuttle and Space Station, HDTV has great potential as a source of Earth-observation data. Planning for the conversion from NTSC to HDTV video standards should include planning for Earth data archiving and distribution.
The optical design of the G-CLEF Spectrograph: the first light instrument for the GMT
NASA Astrophysics Data System (ADS)
Ben-Ami, Sagi; Epps, Harland; Evans, Ian; Mueller, Mark; Podgorski, William; Szentgyorgyi, Andrew
2016-08-01
The GMT-Consortium Large Earth Finder (G-CLEF), the first-light instrument for the GMT, is a fiber-fed, high-resolution echelle spectrograph. In this paper, we present the optical design of G-CLEF. We emphasize the unique solutions derived for the spectrograph fiber feed: the Mangin mirror that corrects the cylindrical field curvature, the implementation of VPH grisms as cross-dispersers, and our novel solution for a multi-colored exposure meter. We describe the spectrograph's blue and red cameras, comprising 7 and 8 elements respectively, with one aspheric surface in each camera, and present the expected echellogram imaged on the instrument focal planes. Finally, we present a ghost analysis and mitigation strategy that takes into account both single-reflection and double-reflection backscattering from various elements in the optical train.
1986-01-22
Range: 2.7 million kilometers (1.7 million miles). P-29497C. This Voyager 2 false color composite of Uranus demonstrates the usefulness of special filters in the Voyager cameras for revealing the presence of high-altitude hazes in Uranus' atmosphere. The picture is a composite of images obtained through the single orange and two methane filters of Voyager's wide-angle camera. Orange, short-wavelength methane, and long-wavelength methane images are displayed, respectively, as blue, green, and orange. The pink area centered on the pole is due to the presence of hazes high in the atmosphere that reflect the light before it has traversed a long enough path through the atmosphere to suffer absorption by methane gas. The bluest regions at mid-latitudes represent the most haze-free regions on Uranus; thus, deeper cloud levels can be detected in these areas.
Computational multispectral video imaging [Invited].
Wang, Peng; Menon, Rajesh
2018-01-01
Multispectral imagers reveal information imperceptible to humans and conventional cameras. Here, we demonstrate a compact single-shot multispectral video-imaging camera made by placing a micro-structured diffractive filter in close proximity to the image sensor. The diffractive filter converts spectral information into a spatial code on the sensor pixels. Following a calibration step, this code can be inverted via regularization-based linear algebra to compute the multispectral image. We experimentally demonstrated a spectral resolution of 9.6 nm within the visible band (430-718 nm). We further show that the spatial resolution is enhanced by over 30% compared with the case without the diffractive filter. We also demonstrate Vis-IR imaging with the same sensor. Because no absorptive color filters are utilized, sensitivity is preserved as well. Finally, the diffractive filters can be easily manufactured using optical lithography and replication techniques.
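The calibrate-then-invert step can be sketched as Tikhonov-regularized least squares; the matrix sizes and regularization weight below are illustrative, and the paper's actual regularizer may differ.

```python
import numpy as np

def recover_spectra(A, b, lam=1e-3):
    """Solve min ||A x - b||^2 + lam ||x||^2, where the calibrated matrix A
    maps the multispectral unknowns x to the coded sensor measurements b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Tiny round trip: 64 coded sensor samples encoding 16 spectral unknowns
rng = np.random.default_rng(1)
A = rng.random((64, 16))
x_true = rng.random(16)
b = A @ x_true + 0.01 * rng.normal(size=64)
print(np.abs(recover_spectra(A, b) - x_true).max())   # small residual
```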
NASA Astrophysics Data System (ADS)
Yang, Xi; Tang, Jianwu; Mustard, John F.
2014-03-01
Plant phenology, a sensitive indicator of climate change, influences vegetation-atmosphere interactions by changing the carbon and water cycles from local to global scales. Camera-based phenological observations of the color changes of the vegetation canopy throughout the growing season have become popular in recent years. However, the linkages between camera phenological metrics and leaf biochemical, biophysical, and spectral properties are elusive. We measured key leaf properties including chlorophyll concentration and leaf reflectance on a weekly basis from June to November 2011 in a white oak forest on the island of Martha's Vineyard, Massachusetts, USA. Concurrently, we used a digital camera to automatically acquire daily pictures of the tree canopies. We found that there was a mismatch between the camera-based phenological metric for the canopy greenness (green chromatic coordinate, gcc) and the total chlorophyll and carotenoids concentration and leaf mass per area during late spring/early summer. The seasonal peak of gcc is approximately 20 days earlier than the peak of the total chlorophyll concentration. During the fall, both canopy and leaf redness were significantly correlated with the vegetation index for anthocyanin concentration, opening a new window to quantify vegetation senescence remotely. Satellite- and camera-based vegetation indices agreed well, suggesting that camera-based observations can be used as the ground validation for satellites. Using the high-temporal resolution dataset of leaf biochemical, biophysical, and spectral properties, our results show the strengths and potential uncertainties to use canopy color as the proxy of ecosystem functioning.
Streak camera based SLR receiver for two color atmospheric measurements
NASA Technical Reports Server (NTRS)
Varghese, Thomas K.; Clarke, Christopher; Oldham, Thomas; Selden, Michael
1993-01-01
To realize accurate two-color differential measurements, an image digitizing system with variable spatial resolution was designed, built, and integrated with a photon-counting picosecond streak camera, yielding a temporal scan resolution better than 300 femtoseconds/pixel. The streak camera is configured to operate with three spatial channels; two of these support green (532 nm) and UV (355 nm), while the third accommodates reference pulses (764 nm) for real-time calibration. Critical parameters affecting differential timing accuracy, such as pulse width and shape, number of received photons, streak camera/imaging system nonlinearities, dynamic range, and noise characteristics, were investigated to optimize the system for accurate differential delay measurements. The streak camera output image consists of three image fields; each field is 1024 pixels along the time axis and 16 pixels across the spatial axis. Each of the image fields may be independently positioned across the spatial axis. Two of the image fields are used for the two wavelengths used in the experiment; the third window measures the temporal separation of a pair of diode laser pulses which verify the streak camera sweep speed for each data frame. The sum of the 16 pixel intensities across each of the 1024 temporal positions for the three data windows is used to extract the three waveforms. The waveform data is processed using an iterative three-point running-average filter (10 to 30 iterations are used) to remove high-frequency structure. The pulse-pair separations are determined using half-max and centroid-type analyses. Rigorous experimental verification has demonstrated that this simplified process provides the best measurement accuracy. To calibrate the receiver system sweep, two laser pulses with precisely known temporal separation are scanned along the full length of the sweep axis. The experimental measurements are then modeled using polynomial regression to obtain a best fit to the data. Data aggregation using a normal-point approach has provided accurate data fitting and is found to be much more convenient than using the full-rate single-shot data. The systematic errors from this model have been found to be less than 3 ps for normal points.
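The waveform processing described, iterative three-point smoothing followed by centroid timing, is straightforward to sketch:

```python
import numpy as np

def smooth(waveform, iterations=20):
    """Iterative three-point running average (10-30 iterations in the
    paper) to strip high-frequency structure from a summed waveform."""
    w = np.asarray(waveform, dtype=np.float64)
    kernel = np.full(3, 1.0 / 3.0)
    for _ in range(iterations):
        w = np.convolve(w, kernel, mode="same")
    return w

def centroid_position(waveform):
    """Centroid estimate of pulse position along the 1024-pixel time axis;
    the differential delay is the separation of two channels' centroids,
    scaled by the calibrated sweep speed (~0.3 ps/pixel in this system)."""
    idx = np.arange(waveform.size)
    return (idx * waveform).sum() / waveform.sum()
```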
Unstructured Facility Navigation by Applying the NIST 4D/RCS Architecture
2006-07-01
[Diagram residue from the original report: the architecture figure listed the platform's sensors and actuators, including wireless data and emergency-stop radios, a GPS receiver and antenna, an inertial navigation unit, two pairs of stereo color cameras, infrared sensors and an infrared bumper, a physical bumper, and wheel-motor and camera-control actuators feeding the sensory processing module.]
HDR imaging and color constancy: two sides of the same coin?
NASA Astrophysics Data System (ADS)
McCann, John J.
2011-01-01
At first, we think that High Dynamic Range (HDR) imaging is a technique for improved recordings of scene radiances. Many of us think that human color constancy is a variation of a camera's automatic white balance algorithm. However, on closer inspection, glare limits the range of light we can detect in cameras and on retinas. All scene regions below middle gray are influenced, more or less, by glare from the bright scene segments. Instead of accurate radiance reproduction, HDR imaging works well because it preserves the details in the scene's spatial contrast. Similarly, on closer inspection, human color constancy depends on spatial comparisons that synthesize appearances from all the scene segments. Can spatial image processing play similar principal roles in both HDR imaging and color constancy?
Differentiating defects in red oak lumber by discriminant analysis using color, shape, and density
B. H. Bond; D. Earl Kline; Philip A. Araman
2002-01-01
Defect color, shape, and density measures aid in the differentiation of knots, bark pockets, stain/mineral streak, and clearwood in red oak (Quercus rubra). Various color, shape, and density measures were extracted for defects present in color and X-ray images captured using a color line scan camera and an X-ray line scan detector. Analysis of variance was used to...
Unsupervised markerless 3-DOF motion tracking in real time using a single low-budget camera.
Quesada, Luis; León, Alejandro J
2012-10-01
Motion tracking is a critical task in many computer vision applications. Existing motion tracking techniques require either a great amount of knowledge on the target object or specific hardware. These requirements discourage the wide spread of commercial applications based on motion tracking. In this paper, we present a novel three degrees of freedom motion tracking system that needs no knowledge on the target object and that only requires a single low-budget camera that can be found installed in most computers and smartphones. Our system estimates, in real time, the three-dimensional position of a nonmodeled unmarked object that may be nonrigid, nonconvex, partially occluded, self-occluded, or motion blurred, given that it is opaque, evenly colored, enough contrasting with the background in each frame, and that it does not rotate. Our system is also able to determine the most relevant object to track in the screen. Our proposal does not impose additional constraints, therefore it allows a market-wide implementation of applications that require the estimation of the three position degrees of freedom of an object.
Depth-aware image seam carving.
Shen, Jianbing; Wang, Dapeng; Li, Xuelong
2013-10-01
An image seam carving algorithm should preserve important and salient objects as much as possible when changing the image size, while not removing secondary objects in the scene. However, it is still difficult to determine the important and salient objects so that they avoid distortion after the input image is resized. In this paper, we develop a novel depth-aware single-image seam carving approach by taking advantage of modern depth cameras such as the Kinect sensor, which captures the RGB color image and its corresponding depth map simultaneously. By considering both the depth information and the just-noticeable-difference (JND) model, we develop an efficient JND-based significance computation approach using multiscale graph-cut-based energy optimization. Our method achieves better seam carving performance by cutting fewer seams through near objects and removing more seams from distant objects. To the best of our knowledge, our algorithm is the first work to use the true depth map captured by a Kinect depth camera for single-image seam carving. The experimental results demonstrate that the proposed approach produces better seam carving results than previous content-aware seam carving methods.
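A toy depth-aware energy map (not the paper's JND-based significance measure) illustrates how nearer content can be protected so that fewer seams pass through it:

```python
import numpy as np

def depth_aware_energy(rgb, depth, alpha=0.5):
    """Gradient-magnitude energy boosted where the depth map says objects
    are near; alpha is an assumed weighting, and rgb/depth are aligned
    Kinect-style color and depth frames."""
    gray = rgb.astype(np.float64).mean(axis=-1)
    gy, gx = np.gradient(gray)
    grad = np.hypot(gx, gy)
    near = 1.0 - (depth - depth.min()) / (np.ptp(depth) + 1e-9)  # 1 = nearest
    return grad * (1.0 + alpha * near)
```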
NASA Technical Reports Server (NTRS)
2004-01-01
[figure removed for brevity, see original site]
Released 7 May 2004 This daytime visible color image was collected on May 30, 2002 during the Southern Fall season in Atlantis Chaos. The THEMIS VIS camera is capable of capturing color images of the martian surface using its five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from the use of multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. Image information: VIS instrument. Latitude -34.5, Longitude 183.6 East (176.4 West). 38 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
NASA Technical Reports Server (NTRS)
2005-01-01
[figure removed for brevity, see original site]
The THEMIS VIS camera is capable of capturing color images of the Martian surface using five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from using multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. This false color image of a portion of the Iani Chaos region was collected during the Southern Fall season. Image information: VIS instrument. Latitude -2.6 Longitude 342.4 East (17.6 West). 36 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
NASA Technical Reports Server (NTRS)
2004-01-01
[figure removed for brevity, see original site]
Released 12 May 2004 This daytime visible color image was collected on June 6, 2003 during the Southern Spring season near the South Polar Cap Edge. The THEMIS VIS camera is capable of capturing color images of the martian surface using its five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from the use of multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. Image information: VIS instrument. Latitude -77.8, Longitude 195 East (165 West). 38 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
NASA Technical Reports Server (NTRS)
2005-01-01
The THEMIS VIS camera is capable of capturing color images of the Martian surface using five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from using multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. This false color image shows the wind-eroded deposit in Pollack Crater called 'White Rock'. This image was collected during the Southern Fall season. Image information: VIS instrument. Latitude -8, Longitude 25.2 East (334.8 West). 0 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically or geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
Lip boundary detection techniques using color and depth information
NASA Astrophysics Data System (ADS)
Kim, Gwang-Myung; Yoon, Sung H.; Kim, Jung H.; Hur, Gi Taek
2002-01-01
This paper presents our approach to using a stereo camera to obtain 3-D image data that improves existing lip boundary detection techniques. We show that depth information as provided by our approach can significantly improve boundary detection systems. Our system detects the face and mouth area in the image by using color, geometric location, and additional depth information for the face. Initially, color and depth information are used to localize the face. Then the lip region is determined from the intensity information and the detected eye locations. The system has successfully been used to extract approximate lip regions using RGB color information of the mouth area. Using color information alone is not robust because the quality of the results may vary with lighting conditions, background, and the subject's skin color. To overcome this problem, we used a stereo camera to obtain 3-D facial images. 3-D data constructed from the depth information, along with color information, provides more accurate lip boundary detection results than color-only techniques.
Thermodynamic free-energy minimization for unsupervised fusion of dual-color infrared breast images
NASA Astrophysics Data System (ADS)
Szu, Harold; Miao, Lidan; Qi, Hairong
2006-04-01
This paper presents algorithmic details of an unsupervised neural network and an unbiased diagnostic methodology; that is, no lookup table is needed to label the input training data with desired outputs. We deploy the smart algorithm on two satellite-grade infrared (IR) cameras. Although an early malignant tumor must be small in size and cannot be resolved by a single pixel, which images about hundreds of cells, these cells reveal themselves physiologically by spontaneously emitting thermal radiation due to the angiogenesis effect of rapid cell growth (in Greek: generation of vessels to increase the tumor's blood supply), which, according to physics, shifts the emission toward a shorter IR wavelength band. With such exceedingly sensitive IR spectral band cameras, we can in principle detect whether or not a breast tumor is malignant through a thin blouse in a close-up dark room. If this protocol turns out to be reliable in a large-scale follow-on Vatican experiment in 2006, which might generate business investment interest in nano-engineered mid-IR cameras made of 1-D carbon nanotubes that require no traditional liquid-nitrogen coolant, then one can accumulate the probability of any type of malignant tumor at every pixel over time in the comfort of privacy, without religious or other concerns. Such a non-intrusive protocol alone may not provide enough information to make the decision, but the changes tracked over time will surely become significant. Such an ill-posed inverse heat-source transfer problem can be solved because of the universal constraint of equilibrium physics governing the blackbody Planck radiation distribution, spatio-temporally sampled. Thus, we must gather two snapshots with two IR cameras to form a data vector X(t) per pixel and invert the matrix-vector equation X = [A]S pixel-by-pixel independently, known as single-pixel blind source separation (BSS). Because the unknown heat-transfer matrix, or impulse response function [A], may vary from the point tumor to its neighborhood, we cannot rely on neighborhood statistics, as is done in the popular unsupervised independent component analysis (ICA) statistical method; we instead impose the physics equilibrium condition of minimum Helmholtz free energy, H = E - T0 S. In the case of a point breast cancer, we can assume the constant ground-state energy E0 to be normalized by the benign neighborhood tissue, and the excited state can then be computed by means of a Taylor series expansion in terms of the pixel I/O data. We can augment the X-ray mammogram technique with passive IR imaging to reduce the unwanted X-rays during chemotherapy recovery. When the sequence is animated into a movie, and the recovery dynamics is played backward in time, the movie demonstrates the cameras' potential for early detection without suffering the PD = 0.1 search uncertainty. In summary, we applied two satellite-grade dual-color IR imaging cameras and an advanced military automatic target recognition (ATR) spectrum fusion algorithm in the mid-wavelength IR (3-5 μm) and long-wavelength IR (8-12 μm) bands, which are capable of screening malignant tumors, as demonstrated by the time-reversed animated movie experiments.
By contrast, traditional thermal breast scanning/imaging, known for decades as thermography, was IR-spectrum-blind and limited to a single night-vision camera; the necessary wait through a cool-down period before taking a second look for change detection suffers from too many environmental and personal variabilities.
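To make the per-pixel inversion concrete, here is a minimal sketch (in Python) that minimizes a Helmholtz-style free energy H = E - T0 S over candidate source fractions for one two-band pixel; the 2x2 mixing matrix, the temperature T0, and the simplex parameterization are illustrative assumptions, not the authors' exact formulation.

    import numpy as np
    from scipy.optimize import minimize

    def free_energy(s, X, A, T0=1.0):
        s = np.clip(s, 1e-9, None)
        s = s / s.sum()                  # keep source fractions on the simplex
        E = np.sum((X - A @ s) ** 2)     # data-fit "internal energy"
        S = -np.sum(s * np.log(s))       # Shannon entropy of the fractions
        return E - T0 * S                # Helmholtz-style free energy

    A = np.array([[0.8, 0.3],            # assumed heat-transfer mixing matrix
                  [0.2, 0.7]])
    X = A @ np.array([0.9, 0.1])         # synthetic two-band pixel measurement

    res = minimize(free_energy, x0=[0.5, 0.5], args=(X, A),
                   bounds=[(1e-6, 1.0)] * 2)
    s_hat = res.x / res.x.sum()          # estimated source fractions for this pixel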
Carded Tow Real-Time Color Assessment: A Spectral Camera-Based System
Furferi, Rocco; Governi, Lapo; Volpe, Yary; Carfagni, Monica
2016-01-01
One of the most important parameters to be controlled during the production of textile yarns obtained by mixing pre-colored fibers is the color correspondence between the manufactured yarn and a given reference, usually provided by a designer or a customer. Obtaining yarns from raw pre-colored fibers is a complex manufacturing process entailing a number of steps such as laboratory sampling, color recipe corrections, blowing, carding and spinning. The carding process is the one devoted to transforming a “fuzzy mass” of tufted fibers into a regular mass of untwisted fibers, named “tow”. During this process, unfortunately, the correspondence between the color of the tow and the target one cannot be assured, thus leading to yarns whose color differs from the reference. To solve this issue, the main aim of this work is to provide a system able to perform a spectral camera-based real-time measurement of a carded tow, to assess its color correspondence with a reference carded fabric and, at the same time, to monitor the overall quality of the tow during the carding process. Tested against a number of differently colored carded fabrics, the proposed system proved its effectiveness in reliably assessing color correspondence in real time. PMID:27589765
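A minimal sketch of the underlying color-correspondence check, assuming the tow and the reference have already been reduced to CIELAB coordinates; the CIE76 Delta-E formula, the Lab values, and the tolerance below are illustrative choices rather than the paper's exact procedure.

    import numpy as np

    def delta_e_cie76(lab1, lab2):
        """CIE76 color difference: Euclidean distance in CIELAB space."""
        return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

    reference_lab = (52.0, 18.5, -6.2)   # hypothetical reference carded fabric
    measured_lab = (51.4, 19.0, -5.8)    # hypothetical tow measurement

    if delta_e_cie76(reference_lab, measured_lab) < 1.0:  # tolerance is an assumption
        print("tow color within tolerance of the reference")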
NASA Technical Reports Server (NTRS)
2005-01-01
The THEMIS VIS camera is capable of capturing color images of the Martian surface using five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from using multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. This false color image continues the northward trend through the Iani Chaos region. Compare this image to Monday's and Tuesday's. This image was collected during the Southern Fall season. Image information: VIS instrument. Latitude -0.1, Longitude 342.6 East (17.4 West). 19 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically or geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
NASA Technical Reports Server (NTRS)
2005-01-01
The THEMIS VIS camera is capable of capturing color images of the Martian surface using five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from using multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. This false color image is located in a different part of Aureum Chaos. Compare the surface textures with yesterday's image. This image was collected during the Southern Fall season. Image information: VIS instrument. Latitude -4.1, Longitude 333.9 East (26.1 West). 35 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically or geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
NASA Technical Reports Server (NTRS)
2005-01-01
The THEMIS VIS camera is capable of capturing color images of the Martian surface using five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from using multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. This false color image was collected during Southern Fall and shows part of the Aureum Chaos. Image information: VIS instrument. Latitude -3.6, Longitude 332.9 East (27.1 West). 35 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically or geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
NASA Technical Reports Server (NTRS)
2005-01-01
The THEMIS VIS camera is capable of capturing color images of the Martian surface using five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from using multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. This false color image of an old channel floor and surrounding highlands is located in the lower reach of Mawrth Valles. This image was collected during the Northern Spring season. Image information: VIS instrument. Latitude 25.7, Longitude 341.2 East (18.8 West). 35 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically or geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
NASA Technical Reports Server (NTRS)
2004-01-01
Released 13 May 2004. This nighttime visible color image was collected on November 26, 2002 during the Northern Summer season near the North Polar Cap Edge. The THEMIS VIS camera is capable of capturing color images of the Martian surface using its five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from the use of multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. Image information: VIS instrument. Latitude 80, Longitude 43.2 East (316.8 West). 38 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically or geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
Noise reduction techniques for Bayer-matrix images
NASA Astrophysics Data System (ADS)
Kalevo, Ossi; Rantanen, Henry
2002-04-01
In this paper, some arrangements for applying Noise Reduction (NR) techniques to images captured by a single-sensor digital camera are studied. Usually, the NR filter processes full three-color-component image data. This requires that the raw Bayer-matrix image data, available from the image sensor, first be interpolated using a Color Filter Array Interpolation (CFAI) method. Another choice is to process the raw Bayer-matrix image data directly. The advantages and disadvantages of both processing orders, before (pre-) CFAI and after (post-) CFAI, are studied with linear, multistage median, multistage median hybrid and median-rational filters. The comparison is based on the quality of the output image, the processing power requirements and the amount of memory needed. A solution that improves the preservation of details when NR filtering is applied before the CFAI is also proposed.
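A minimal sketch of the pre-CFAI ordering, assuming an RGGB mosaic: each Bayer sample plane is filtered separately so that samples of different colors are never mixed before demosaicing. The 3x3 median is an illustrative stand-in for the filters compared in the paper.

    import numpy as np
    from scipy.ndimage import median_filter

    def denoise_bayer_rggb(raw):
        """Median-filter each of the four RGGB sample planes independently."""
        out = raw.copy()
        for dy in (0, 1):
            for dx in (0, 1):
                plane = raw[dy::2, dx::2]                  # one color sample plane
                out[dy::2, dx::2] = median_filter(plane, size=3)
        return out

    raw = np.random.rand(8, 8)        # stand-in RGGB mosaic
    clean = denoise_bayer_rggb(raw)   # filtered mosaic, ready for CFAI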
NASA Astrophysics Data System (ADS)
Fan, Yang-Tung; Peng, Chiou-Shian; Chu, Cheng-Yu
2000-12-01
New markets are emerging for digital electronic image devices, especially in visual communications, PC cameras, mobile/cell phones, security systems, toys, vehicle imaging systems and computer peripherals for document capture. A one-chip imaging system, in which the image sensor has a full digital interface, can bring image capture devices into our daily lives. Adding a color filter to such an image sensor, in a pattern of mosaic pixels or wide stripes, can make images more real and colorful. We can say that the color filter makes life more colorful. What is a color filter? A color filter transmits only light of the specific wavelength and transmittance matching the filter itself, and blocks the rest of the image light source. The color filter process consists of coating and patterning green, red and blue (or cyan, magenta and yellow) mosaic resists onto matched pixels in the image sensor array. According to the signal caught by each pixel, we can reconstruct the environmental image. The wide use of digital electronic cameras and multimedia applications today makes the future of color filters bright. Although it presents challenges, developing the color filter process is very worthwhile. We provide the best service: shorter cycle time, excellent color quality, and high, stable yield. The key issues of an advanced color process that have to be solved and implemented are planarization and micro-lens technology. Many key points of color filter process technology that have to be considered will also be described in this paper.
Regression analysis for LED color detection of visual-MIMO system
NASA Astrophysics Data System (ADS)
Banik, Partha Pratim; Saha, Rappy; Kim, Ki-Doo
2018-04-01
Color detection from a light emitting diode (LED) array using a smartphone camera is very difficult in a visual multiple-input multiple-output (visual-MIMO) system. In this paper, we propose a method to determine the LED color using a smartphone camera by applying regression analysis. We employ a multivariate regression model to identify the LED color. After taking a picture of an LED array, we select the LED array region and detect the LEDs using an image processing algorithm. We then apply the k-means clustering algorithm to determine the number of potential colors for feature extraction of each LED. Finally, we apply the multivariate regression model to predict the color of the transmitted LEDs. We show our results for three types of environmental light conditions: room environmental light, low environmental light (560 lux), and strong environmental light (2450 lux). We compare the results of our proposed algorithm through analysis of training and test R-squared (%) values and the percentage closeness of transmitted and predicted colors, and we also report the number of distorted test data points based on a distortion bar graph in the CIE 1931 color space.
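A minimal sketch of that pipeline, assuming each LED's pixels have already been segmented: k-means supplies a dominant-color feature and a multivariate linear regression maps captured features to transmitted colors. The cluster count and the synthetic training data are illustrative assumptions.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LinearRegression

    def led_feature(pixels_rgb, k=3):
        """Centroid of the largest k-means cluster of one LED's pixels."""
        km = KMeans(n_clusters=k, n_init=10).fit(pixels_rgb)
        dominant = np.bincount(km.labels_).argmax()
        return km.cluster_centers_[dominant]

    rng = np.random.default_rng(1)
    transmitted = rng.integers(0, 256, size=(50, 3)).astype(float)   # sent colors
    captured = transmitted * 0.8 + rng.normal(0, 5, size=(50, 3))    # camera distortion
    model = LinearRegression().fit(captured, transmitted)            # multivariate fit

    predicted = model.predict(captured[:1])   # predicted transmitted LED color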
NASA Technical Reports Server (NTRS)
2004-01-01
This view in approximately true color reveals details in an impact crater informally named 'Fram' in the Meridiani Planum region of Mars. The picture is a mosaic of frames taken by the panoramic camera on NASA's Mars Exploration Rover Opportunity during the rover's 88th martian day, on April 23, 2004. The crater spans about 8 meters (26 feet) in diameter. Opportunity paused beside it while traveling from the rover's landing site toward a larger crater farther east. This view combines images taken using three of the camera's filters for different wavelengths of light: 750 nanometers, 530 nanometers and 430 nanometers.
Endeavour on the Horizon False Color
2010-04-30
NASA's Mars Exploration Rover Opportunity used its panoramic camera (Pancam) to capture this false-color view of the rim of Endeavour crater, the rover's destination in a multi-year traverse along the sandy Martian landscape.
Eridania Planitia - False Color
2016-06-22
The THEMIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This image from NASA's 2001 Mars Odyssey spacecraft shows part of Eridania Planitia.
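A minimal sketch of this kind of false-color compositing: three filter bands are independently contrast stretched and assigned to the red, green, and blue channels. The percentile stretch and the synthetic bands are illustrative assumptions, not the THEMIS team's calibrated pipeline.

    import numpy as np

    def stretch(band, lo_pct=1.0, hi_pct=99.0):
        """Linearly rescale one filter image so lo_pct..hi_pct maps to 0..1."""
        lo, hi = np.percentile(band, [lo_pct, hi_pct])
        return np.clip((band - lo) / (hi - lo + 1e-12), 0.0, 1.0)

    def false_color(band_r, band_g, band_b):
        """Combine three filter images into one RGB false-color composite."""
        return np.dstack([stretch(band_r), stretch(band_g), stretch(band_b)])

    rng = np.random.default_rng(0)
    bands = [rng.random((64, 64)) for _ in range(3)]   # stand-in filter images
    rgb = false_color(*bands)                          # (64, 64, 3) in [0, 1]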
2016-03-16
The THEMIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This image from NASA's 2001 Mars Odyssey spacecraft shows a hill in Tyrrhena Terra.
2016-10-17
The THEMIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This image from NASA's 2001 Mars Odyssey spacecraft shows part of Gale Crater.
2016-03-07
The THEMIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This image from NASA's 2001 Mars Odyssey spacecraft shows an unnamed crater in Terra Sabaea.
2016-04-28
The THEMIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This image from NASA's 2001 Mars Odyssey spacecraft shows part of Ophir Chasma.
2016-03-14
The THEMIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This image from NASA's 2001 Mars Odyssey spacecraft shows part of Terra Sirenum.
2016-03-18
The THEMIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This image from NASA's 2001 Mars Odyssey spacecraft shows part of Capri Mensa.
2016-05-02
The THEMIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This image from NASA's 2001 Mars Odyssey spacecraft shows part of Peraea Cavus.
2016-03-09
The THEMIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This image from NASA's 2001 Mars Odyssey spacecraft shows part of Martin Crater.
2016-04-27
The THEMIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This image from NASA's 2001 Mars Odyssey spacecraft shows part of Nili Fossae.
2D Measurements of the Balmer Series in Proto-MPEX using a Fast Visible Camera Setup
NASA Astrophysics Data System (ADS)
Lindquist, Elizabeth G.; Biewer, Theodore M.; Ray, Holly B.
2017-10-01
The Prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device with densities up to 10^20 m^-3 and temperatures up to 20 eV. Broadband spectral measurements show the visible emission spectra are solely due to the Balmer lines of deuterium. Monochromatic and RGB color Sanstreak SC1 Edgertronic fast visible cameras capture high-speed video of plasmas in Proto-MPEX. The color camera is equipped with a 450 nm long-pass filter and an internal Bayer filter to view the Dα line at 656 nm on the red channel and the Dβ line at 486 nm on the blue channel. The monochromatic camera has a 434 nm narrow bandpass filter to view the Dγ intensity. In the setup, a 50/50 beam splitter is used so both cameras image the same region of the plasma discharge. Camera images were aligned to each other by viewing a grid, ensuring 1-pixel registration between the two cameras. A uniform-intensity calibrated white light source was used to perform a pixel-to-pixel relative and an absolute intensity calibration for both cameras. Python scripts combined the dual-camera data, rendering the Dα, Dβ, and Dγ intensity ratios. Observations from Proto-MPEX discharges will be presented. This work was supported by the U.S. D.O.E. contract DE-AC05-00OR22725.
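Since the abstract mentions Python scripts for the ratio step, a minimal sketch of that final computation might look as follows, assuming the frames are already co-registered and intensity calibrated; the signal floor used to mask dim pixels is an illustrative choice.

    import numpy as np

    def ratio_map(num, den, floor=1e-3):
        """Pixelwise intensity ratio, masking pixels with too little signal."""
        out = np.full(num.shape, np.nan)
        ok = den > floor
        out[ok] = num[ok] / den[ok]
        return out

    d_alpha = np.random.rand(480, 640)   # calibrated D-alpha frame (placeholder)
    d_beta = np.random.rand(480, 640)    # calibrated D-beta frame (placeholder)
    alpha_over_beta = ratio_map(d_alpha, d_beta)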
Cost-effective poster and print production with digital camera and computer technology.
Chen, M Y; Ott, D J; Rohde, R P; Henson, E; Gelfand, D W; Boehme, J M
1997-10-01
The purpose of this report is to describe a cost-effective method for producing black-and-white prints and color posters within a radiology department. Using a high-resolution digital camera, personal computer, and color printer, the average cost of a 5 x 7 inch (12.5 x 17.5 cm) black-and-white print may be reduced from $8.50 to $1 each in our institution. The average cost for a color print (8.5 x 14 inch [21.3 x 35 cm]) varies from $2 to $3 per sheet depending on the selection of ribbons for a color-capable laser printer and the paper used. For a 30-panel, 4 x 8 foot (1.2 x 2.4 m) standard-sized poster, the cost for materials and construction is approximately $100.
NASA Technical Reports Server (NTRS)
Buck, Gregory M. (Inventor)
1989-01-01
A thermal imaging system provides quantitative temperature information and is particularly useful in hypersonic wind tunnel applications. An object to be measured is prepared by coating with a two-color, ultraviolet-activated, thermographic phosphor. The colors emitted by the phosphor are detected by a conventional color video camera. A phosphor emitting blue and green light with a ratio that varies depending on temperature is used so that the intensity of light in the blue and green wavelengths detected by the blue and green tubes in the video camera can be compared. Signals representing the intensity of blue and green light at points on the surface of a model in a hypersonic wind tunnel are used to calculate a ratio of blue to green light intensity which provides quantitative temperature information for the surface of the model.
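A minimal sketch of the ratio-to-temperature step, assuming a monotonic calibration curve relating the blue/green intensity ratio to temperature; the curve values below are made up for illustration, where a real system would use a measured phosphor calibration.

    import numpy as np

    cal_ratio = np.array([0.4, 0.6, 0.9, 1.3, 1.8])             # blue/green ratio (assumed)
    cal_temp_k = np.array([300.0, 350.0, 400.0, 450.0, 500.0])  # temperature (assumed)

    def temperature_map(blue, green):
        """Convert blue and green channel intensities to a temperature image."""
        ratio = blue / np.maximum(green, 1e-6)          # guard against division by zero
        return np.interp(ratio, cal_ratio, cal_temp_k)  # lookup on the calibration curve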
Sunset Sequence in Mars Gale Crater Animation
2015-05-08
NASA's Curiosity Mars rover recorded this sequence of views of the sun setting at the close of the mission's 956th Martian day, or sol (April 15, 2015), from the rover's location in Gale Crater. The four images shown in sequence here were taken over a span of 6 minutes, 51 seconds. This was the first sunset observed in color by Curiosity. The images come from the left-eye camera of the rover's Mast Camera (Mastcam). The color has been calibrated and white-balanced to remove camera artifacts. Mastcam sees color very similarly to what human eyes see, although it is actually a little less sensitive to blue than people are. Dust in the Martian atmosphere has fine particles that permit blue light to penetrate the atmosphere more efficiently than longer-wavelength colors. That causes the blue colors in the mixed light coming from the sun to stay closer to the sun's part of the sky, compared to the wider scattering of yellow and red colors. The effect is most pronounced near sunset, when light from the sun passes through a longer path in the atmosphere than it does at mid-day. Malin Space Science Systems, San Diego, built and operates the rover's Mastcam. NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology, Pasadena, manages the Mars Science Laboratory Project for NASA's Science Mission Directorate, Washington. JPL designed and built the project's Curiosity rover. http://photojournal.jpl.nasa.gov/catalog/PIA19401
Process simulation in digital camera system
NASA Astrophysics Data System (ADS)
Toadere, Florin
2012-06-01
The goal of this paper is to simulate the functionality of a digital camera system. The simulations cover the conversion from light to numerical signal and the color processing and rendering. We consider the image acquisition system to be linear, shift invariant and axial. The light propagation is orthogonal to the system. We use a spectral image processing algorithm in order to simulate the radiometric properties of a digital camera. In the algorithm we take into consideration the transmittances of the light source, lenses and filters, and the quantum efficiency of a CMOS (complementary metal oxide semiconductor) sensor. The optical part is characterized by a multiple convolution between the different point spread functions of the optical components. We use a Cooke triplet, the aperture, the light fall-off and the optical part of the CMOS sensor. The electrical part consists of Bayer sampling, interpolation, signal-to-noise ratio, dynamic range, analog-to-digital conversion and JPG compression. We reconstruct the noisy blurred image by blending differently exposed images in order to reduce the photon shot noise; we also filter the fixed-pattern noise and sharpen the image. Then we have the color processing blocks: white balancing, color correction, gamma correction, and conversion from XYZ color space to RGB color space. For the reproduction of color we use an OLED (organic light emitting diode) monitor. The analysis can be useful to assist students and engineers in image quality evaluation and imaging system design. Many other configurations of blocks can be used in our analysis.
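As a small taste of the color-processing chain, here is a sketch of two standard blocks, gray-world white balance followed by a simple power-law gamma; both are textbook stand-ins rather than the paper's specific processing.

    import numpy as np

    def gray_world_white_balance(rgb):
        """Scale each channel so the image's mean color becomes neutral gray."""
        gains = rgb.mean() / rgb.reshape(-1, 3).mean(axis=0)
        return np.clip(rgb * gains, 0.0, 1.0)

    def gamma_encode(rgb_linear, gamma=2.2):
        """Apply display gamma to linear RGB (simple power law, not full sRGB)."""
        return np.power(np.clip(rgb_linear, 0.0, 1.0), 1.0 / gamma)

    frame = np.random.rand(32, 32, 3)    # stand-in linear sensor image
    display_ready = gamma_encode(gray_world_white_balance(frame))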
Evaluation of Digital Camera Technology For Bridge Inspection
DOT National Transportation Integrated Search
1997-07-18
As part of a cooperative agreement between the Tennessee Department of Transportation and the Federal Highway Administration, a study was conducted to evaluate current levels of digital camera and color printing technology with regard to their applic...
NASA Astrophysics Data System (ADS)
Szu, Harold; Hsu, Charles; Landa, Joseph; Cha, Jae H.; Krapels, Keith A.
2015-05-01
How can we design cameras that image selectively in the Full Electro-Magnetic (FEM) spectrum? Without selective imaging, we cannot use, for example, ordinary tourist cameras to see through fire, smoke, or other obscurants contributing to a Visually Degraded Environment (VDE). This paper addresses a possible new design of selective-imaging cameras at the firmware level. The design is consistent with the physics of the irreversible thermodynamics of Boltzmann's molecular entropy. It enables imaging in the appropriate FEM spectra for sensing through the VDE, and displaying in color spectra for the Human Visual System (HVS). We sense within the spectra the largest entropy value of obscurants such as fire, smoke, etc. Then we apply a smart firmware implementation of Blind Source Separation (BSS) to separate all entropy sources associated with specific Kelvin temperatures. Finally, we recompose the scene using specific RGB colors constrained by the HVS, by up/down-shifting Planck spectra at each pixel and time.
Calibration between Color Camera and 3D LIDAR Instruments with a Polygonal Planar Board
Park, Yoonsu; Yun, Seokmin; Won, Chee Sun; Cho, Kyungeun; Um, Kyhyun; Sim, Sungdae
2014-01-01
Calibration between color camera and 3D Light Detection And Ranging (LIDAR) equipment is an essential process for data fusion. The goal of this paper is to improve the calibration accuracy between a camera and a 3D LIDAR. In particular, we are interested in calibrating a low resolution 3D LIDAR with a relatively small number of vertical sensors. Our goal is achieved by employing a new methodology for the calibration board, which exploits 2D-3D correspondences. The 3D corresponding points are estimated from the scanned laser points on the polygonal planar board with adjacent sides. Since the lengths of adjacent sides are known, we can estimate the vertices of the board as a meeting point of two projected sides of the polygonal board. The estimated vertices from the range data and those detected from the color image serve as the corresponding points for the calibration. Experiments using a low-resolution LIDAR with 32 sensors show robust results. PMID:24643005
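A minimal sketch of the RANSAC-style plane estimation that such board-based calibration relies on; the iteration count and inlier tolerance are illustrative assumptions, and the full method additionally uses the known side lengths to recover the board vertices.

    import numpy as np

    def ransac_plane(points, n_iter=200, tol=0.02, rng=None):
        """Fit a plane to 3D points; return (unit normal, point on plane)."""
        if rng is None:
            rng = np.random.default_rng()
        best_inliers, best = 0, None
        for _ in range(n_iter):
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(p1 - p0, p2 - p0)
            norm = np.linalg.norm(n)
            if norm < 1e-9:
                continue                        # degenerate (collinear) sample
            n = n / norm
            d = np.abs((points - p0) @ n)       # point-to-plane distances
            inliers = int((d < tol).sum())
            if inliers > best_inliers:
                best_inliers, best = inliers, (n, p0)
        return best

    pts = np.random.rand(200, 3)
    pts[:, 2] = 0.02 * np.random.randn(200)     # noisy points near the z = 0 plane
    normal, point = ransac_plane(pts)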
Contactless physiological signals extraction based on skin color magnification
NASA Astrophysics Data System (ADS)
Suh, Kun Ha; Lee, Eui Chul
2017-11-01
Although the human visual system is not sufficiently sensitive to perceive blood circulation, blood flow caused by cardiac activity makes slight changes on human skin surfaces. With advances in imaging technology, it has become possible to capture these changes through digital cameras. However, it is difficult to obtain clear physiological signals from such changes because they are faint and subject to noise factors such as motion artifacts and camera sensing disturbances. We propose a method for extracting physiological signals with improved quality from skin-color videos recorded with a remote RGB camera. The results showed that our skin color magnification method remarkably reveals the hidden physiological components in the time-series signal. A Korea Food and Drug Administration-approved heart rate monitor was used to verify that the resulting signal is synchronized with the actual cardiac pulse, and comparisons of signal peaks showed correlation coefficients of almost 1.0. In particular, our method can serve as an effective preprocessing step before applying additional postfiltering techniques to improve accuracy in image-based physiological signal extraction.
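The paper's skin color magnification is more elaborate, but the downstream time-series idea can be sketched as follows: average a skin region's green channel per frame, then band-pass the trace around plausible heart rates. The channel choice, filter order, and band edges are assumptions.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def pulse_signal(green_roi_means, fps):
        """Band-pass the per-frame mean green value to 0.7-4 Hz (42-240 bpm)."""
        b, a = butter(3, [0.7, 4.0], btype="bandpass", fs=fps)
        centered = green_roi_means - green_roi_means.mean()
        return filtfilt(b, a, centered)

    trace = np.random.rand(300)          # stand-in per-frame ROI means
    pulse = pulse_signal(trace, fps=30.0)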
A simple approach to a vision-guided unmanned vehicle
NASA Astrophysics Data System (ADS)
Archibald, Christopher; Millar, Evan; Anderson, Jon D.; Archibald, James K.; Lee, Dah-Jye
2005-10-01
This paper describes the design and implementation of a vision-guided autonomous vehicle that represented BYU in the 2005 Intelligent Ground Vehicle Competition (IGVC), in which autonomous vehicles navigate a course marked with white lines while avoiding obstacles consisting of orange construction barrels, white buckets and potholes. Our project began in the context of a senior capstone course in which multi-disciplinary teams of five students were responsible for the design, construction, and programming of their own robots. Each team received a computer motherboard, a camera, and a small budget for the purchase of additional hardware, including a chassis and motors. The resource constraints resulted in a simple vision-based design that processes the sequence of images from the single camera to determine motor controls. Color segmentation separates white and orange from each image, and then the segmented image is examined using a 10x10 grid system, effectively creating a low-resolution picture for each of the two colors. Depending on its position, each filled grid square influences the selection of an appropriate turn magnitude. Motor commands determined from the white and orange images are then combined to yield the final motion command for each video frame. We describe the complete algorithm and the robot hardware, and we present results that show the overall effectiveness of our control approach.
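A minimal sketch of the grid-based steering idea, assuming a binary mask from the color segmentation step: each filled cell votes for a turn away from its side, weighted by how close it is to the robot. The fill threshold and weighting are illustrative assumptions.

    import numpy as np

    def turn_command(mask, grid=10):
        """Return a steering value in [-1, 1]; negative means turn left."""
        h, w = mask.shape
        cell_h, cell_w = h // grid, w // grid
        turn = 0.0
        for r in range(grid):
            for c in range(grid):
                cell = mask[r*cell_h:(r+1)*cell_h, c*cell_w:(c+1)*cell_w]
                if cell.mean() > 0.25:                        # cell counts as "filled"
                    side = (c - (grid - 1) / 2) / (grid / 2)  # -1 left .. +1 right
                    nearness = (r + 1) / grid                 # lower rows are closer
                    turn -= side * nearness                   # steer away from filled side
        return float(np.clip(turn, -1.0, 1.0))

    mask = np.zeros((100, 100))
    mask[60:, 70:] = 1.0                 # obstacle ahead and to the right
    print(turn_command(mask))            # negative: steer left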
NASA Astrophysics Data System (ADS)
Marinas, Javier; Salgado, Luis; Arróspide, Jon; Camplani, Massimo
2012-01-01
In this paper we propose an innovative method for the automatic detection and tracking of road traffic signs using an onboard stereo camera. It involves a combination of monocular and stereo analysis strategies to increase the reliability of the detections such that it can boost the performance of any traffic sign recognition scheme. Firstly, an adaptive color and appearance based detection is applied at single camera level to generate a set of traffic sign hypotheses. In turn, stereo information allows for sparse 3D reconstruction of potential traffic signs through a SURF-based matching strategy. Namely, the plane that best fits the cloud of 3D points traced back from feature matches is estimated using a RANSAC based approach to improve robustness to outliers. Temporal consistency of the 3D information is ensured through a Kalman-based tracking stage. This also allows for the generation of a predicted 3D traffic sign model, which is in turn used to enhance the previously mentioned color-based detector through a feedback loop, thus improving detection accuracy. The proposed solution has been tested with real sequences under several illumination conditions and in both urban areas and highways, achieving very high detection rates in challenging environments, including rapid motion and significant perspective distortion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moore, A. S., E-mail: alastair.moore@physics.org; Ahmed, M. F.; Soufli, R.
A dual-channel streaked soft x-ray imager has been designed and used on high energy-density physics experiments at the National Ignition Facility. This streaked imager creates two images of the same x-ray source using two slit apertures and a single shallow-angle reflection from a nickel mirror. Thin filters are used to create narrow band-pass images at 510 eV and 360 eV. When measuring a Planckian spectrum, the brightness ratio of the two images can be translated into a color temperature, provided that the spectral sensitivity of the two images is well known. To reduce uncertainty and remove spectral features in the streak camera photocathode from this photon energy range, a thin 100 nm CsI on 50 nm Al streak camera photocathode was implemented. Provided that the spectral shape is well known, uncertainties in the spectral sensitivity limit the accuracy of the temperature measurement to approximately 4.5% at 100 eV.
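A minimal sketch of that inversion for idealized narrow bands: for a Planckian source the 510 eV / 360 eV brightness ratio is monotonic in temperature, so a measured ratio can be inverted by table lookup. The real instrument folds in filter and photocathode response, which this sketch ignores.

    import numpy as np

    E1, E2 = 510.0, 360.0                    # band centers in eV

    def planck_ratio(T_eV):
        """Ratio of Planck spectral brightness at E1 vs E2 for temperature T (eV)."""
        b = lambda E, T: E**3 / np.expm1(E / T)
        return b(E1, T_eV) / b(E2, T_eV)

    T_grid = np.linspace(50.0, 300.0, 1000)  # candidate temperatures in eV
    ratio_grid = planck_ratio(T_grid)        # monotonic in T over this range

    measured_ratio = 0.9                     # hypothetical calibrated brightness ratio
    T_color = np.interp(measured_ratio, ratio_grid, T_grid)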
Pre-flight and On-orbit Geometric Calibration of the Lunar Reconnaissance Orbiter Camera
NASA Astrophysics Data System (ADS)
Speyerer, E. J.; Wagner, R. V.; Robinson, M. S.; Licht, A.; Thomas, P. C.; Becker, K.; Anderson, J.; Brylow, S. M.; Humm, D. C.; Tschimmel, M.
2016-04-01
The Lunar Reconnaissance Orbiter Camera (LROC) consists of two imaging systems that provide multispectral and high resolution imaging of the lunar surface. The Wide Angle Camera (WAC) is a seven color push-frame imager with a 90° field of view in monochrome mode and 60° field of view in color mode. From the nominal 50 km polar orbit, the WAC acquires images with a nadir ground sampling distance of 75 m for each of the five visible bands and 384 m for the two ultraviolet bands. The Narrow Angle Camera (NAC) consists of two identical cameras capable of acquiring images with a ground sampling distance of 0.5 m from an altitude of 50 km. The LROC team geometrically calibrated each camera before launch at Malin Space Science Systems in San Diego, California, and the resulting measurements enabled the generation of a detailed camera model for all three cameras. The cameras were mounted and subsequently launched on the Lunar Reconnaissance Orbiter (LRO) on 18 June 2009. Using a subset of the over 793,000 NAC and 207,000 WAC images of illuminated terrain collected between 30 June 2009 and 15 December 2013, we improved the interior and exterior orientation parameters for each camera, including the addition of a wavelength-dependent radial distortion model for the multispectral WAC. These geometric refinements, along with refined ephemeris, enable seamless projections of NAC image pairs with a geodetic accuracy better than 20 meters and sub-pixel precision and accuracy when orthorectifying WAC images.
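A minimal sketch of a radial distortion correction of the kind such a calibration refines: normalized image coordinates are mapped through an odd polynomial in radius, with one coefficient per color band to make it wavelength dependent. The single-term model and the coefficient values are illustrative assumptions, not LROC's published model.

    import numpy as np

    def undistort(xy, k1):
        """Apply r' = r * (1 + k1 * r^2) to normalized image coordinates."""
        r2 = np.sum(xy**2, axis=-1, keepdims=True)
        return xy * (1.0 + k1 * r2)

    k1_per_band = {"415nm": -2.1e-2, "566nm": -1.8e-2, "604nm": -1.7e-2}  # hypothetical
    pts = np.array([[0.1, 0.2], [0.3, -0.1]])
    corrected = undistort(pts, k1_per_band["566nm"])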
2016-04-25
The THEMIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This image from NASA's 2001 Mars Odyssey spacecraft shows part of the plains of Terra Sirenum.
2016-05-05
The THEMIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This image from NASA's 2001 Mars Odyssey spacecraft shows part of the plains of Arabia Terra.
2016-03-15
The THEMIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This image from NASA's 2001 Mars Odyssey spacecraft shows part of the plains of Terra Sirenum.
2016-05-06
The THEMIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This image from NASA's 2001 Mars Odyssey spacecraft shows part of the plains of Terra Sirenum.
Syrtis Major Planum - False Color
2016-09-09
The THEMIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This image from NASA's 2001 Mars Odyssey spacecraft shows part of Syrtis Major Planum.
2016-03-11
The THEMIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This image from NASA's 2001 Mars Odyssey spacecraft shows part of the floor of Coprates Chasma.
2016-03-08
The THEMIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This image from NASA's 2001 Mars Odyssey spacecraft shows part of the plains of Terra Sabaea.
Color Image of Phoenix Lander on Mars Surface
2008-05-27
This is an enhanced-color image from the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter. It shows NASA's Mars Phoenix lander with its solar panels deployed on the Martian surface.
2011-12-07
This false-color view of a mineral vein called Homestake comes from the panoramic camera (Pancam) on NASA's Mars Exploration Rover Opportunity. The vein is about the width of a thumb and about 18 inches (45 centimeters) long.
1969-05-25
S69-34969 (24 May 1969) --- Astronaut Thomas P. Stafford, Apollo 10 commander, is seen in this color reproduction taken from a telecast made by the color television camera aboard the Apollo 10 spacecraft during its trans-Earth journey home.
Kwon, Tae-Ho; Kim, Jai-Eun; Kim, Ki-Doo
2018-05-14
In the field of communication, synchronization is always an important issue. The communication between a light-emitting diode (LED) array (LEA) and a camera is known as visual multiple-input multiple-output (MIMO), for which the data transmitter and receiver must be synchronized for seamless communication. In visual-MIMO, LEDs generally have a faster data rate than the camera. Hence, we propose an effective time-sharing-based synchronization technique with its color-independent characteristics providing the key to overcome this synchronization problem in visual-MIMO communication. We also evaluated the performance of our synchronization technique by varying the distance between the LEA and camera. A graphical analysis is also presented to compare the symbol error rate (SER) at different distances.
SOLAR - ASTRONOMY (APOLLO-SATURN [AS]-16)
1972-05-09
S72-36972 (21 April 1972) --- A color enhancement of a far-ultraviolet photo of Earth taken by astronaut John W. Young, commander, with the ultraviolet camera on April 21, 1972. The original black and white photo was printed on Agfacontour film three times, each exposure recording only one light level. The three light levels were then colored blue (dimmest), green (next brightest), and red (brightest). The three auroral belts, the sunlit atmosphere and the background stars (one very close to Earth, on left) can be studied quantitatively for brightness. The UV camera was designed and built at the Naval Research Laboratory, Washington, D.C. EDITOR'S NOTE: The photographic number of the original black & white UV camera photograph from which this enhancement was made is AS16-123-19657.
Design of smartphone-based spectrometer to assess fresh meat color
NASA Astrophysics Data System (ADS)
Jung, Youngkee; Kim, Hyun-Wook; Kim, Yuan H. Brad; Bae, Euiwon
2017-02-01
Based on its integrated camera, a new optical attachment, and inherent computing power, we propose an instrument design and validation that can potentially provide an objective and accurate method to determine surface meat color change and myoglobin redox forms using a smartphone-based spectrometer. The system is designed to be used as a reflection spectrometer, mimicking the conventional spectrometry commonly used for meat color assessment. We utilize a 3D printing technique to make an optical cradle which holds all of the optical components for light collection, collimation, and dispersion, along with a suitable chamber. Light reflected from a sample enters a pinhole and is subsequently collimated by a convex lens. A diffraction grating spreads the wavelengths over the camera's pixels to yield a high-resolution spectrum. Pixel positions in the smartphone image are calibrated to wavelength values using three laser pointers with different wavelengths: 405, 532, and 650 nm. Using an in-house app, the camera images are converted into a spectrum in the visible wavelength range based on the exterior light source. A controlled experiment simulating the refrigeration and shelving of meat was conducted, and the results showed the capability to accurately measure the color change in a quantitative and spectroscopic manner. We expect that this technology can be adapted to any smartphone and used to conduct a field-deployable color spectrum assay as a practical application tool for various food sectors.
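A minimal sketch of that wavelength calibration: the detected pixel positions of the three laser lines fix a linear pixel-to-wavelength map by least squares. The pixel positions below are hypothetical.

    import numpy as np

    laser_px = np.array([212.0, 648.0, 1051.0])   # detected line centers (assumed)
    laser_nm = np.array([405.0, 532.0, 650.0])    # known laser wavelengths

    coeffs = np.polyfit(laser_px, laser_nm, 1)    # linear fit: nm = a * px + b
    pixel_to_nm = np.poly1d(coeffs)

    print(pixel_to_nm(800))                       # wavelength at pixel column 800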
Cheng, Victor S; Bai, Jinfen; Chen, Yazhu
2009-11-01
As the needs for various kinds of body surface information are wide-ranging, we developed an imaging-sensor-integrated system that can synchronously acquire high-resolution three-dimensional (3D) far-infrared (FIR) thermal and true-color images of the body surface. The proposed system integrates one FIR camera and one color camera with a 3D structured-light binocular profilometer. To eliminate the discomfort caused to the person being examined by intense light projected directly into the eye from the LCD projector, we have developed a gray encoding strategy based on an optimum fringe projection layout. A self-heated checkerboard has been employed to perform the calibration of the different types of cameras. We have then calibrated the structured light emitted by the LCD projector, based on the stereo-vision idea and a least-squares quadric surface-fitting algorithm. Afterwards, the precise 3D surface can be fused with the undistorted thermal and color images. To enhance medical applications, the region-of-interest (ROI) in the temperature or color image representing the surface area of clinical interest can be located at the corresponding position in the other images through coordinate-system transformation. System evaluation demonstrated a mapping error between FIR and visual images of three pixels or less. Experiments show that this work is significantly useful in certain disease diagnoses.
Real-time color measurement using active illuminant
NASA Astrophysics Data System (ADS)
Tominaga, Shoji; Horiuchi, Takahiko; Yoshimura, Akihiko
2010-01-01
This paper proposes a method for real-time color measurement using an active illuminant. A synchronous measurement system is constructed by combining a high-speed active spectral light source and a high-speed monochrome camera. The light source is a programmable spectral source capable of emitting an arbitrary spectrum at high speed. This system has the essential advantage of capturing spectral images at high frame rates without using filters. The new method of real-time colorimetry differs from the traditional methods based on colorimeters or spectrometers. We project the color-matching functions onto an object surface as spectral illuminants. Then we can obtain the CIE-XYZ tristimulus values directly from the camera outputs at every point on the surface. We describe the principle of our colorimetric technique based on projection of the color-matching functions and the procedure for realizing a real-time measurement system for a moving object. In an experiment, we examine the performance of real-time color measurement for a static object and a moving object.
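The measurement principle can be simulated in a few lines: when the projected illuminant's spectrum is a color-matching function, the monochrome camera output at a surface point is, up to a gain, the corresponding tristimulus value, since X = sum over lambda of xbar(lambda) * R(lambda). The Gaussian stand-ins below replace the real CIE tables.

    import numpy as np

    wl = np.arange(400, 701, 10, dtype=float)     # wavelength grid, nm
    # Stand-in "color-matching functions" as Gaussian bumps (not the CIE data):
    cmf = {name: np.exp(-0.5 * ((wl - mu) / 40.0) ** 2)
           for name, mu in (("x", 600.0), ("y", 550.0), ("z", 450.0))}
    reflectance = 0.5 + 0.3 * np.sin(wl / 60.0)   # hypothetical surface spectrum

    # Projecting cmf[c] as the illuminant makes the camera output at this point
    # proportional to the tristimulus integral for that channel:
    XYZ = {c: float(np.sum(s * reflectance)) for c, s in cmf.items()}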
Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras
NASA Technical Reports Server (NTRS)
Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellutta, Paolo; Sherwin, Gary W.
2011-01-01
The ability to perform off-road autonomous navigation at any time of day or night is a requirement for some unmanned ground vehicle (UGV) programs. Because there are times when it is desirable for military UGVs to operate without emitting strong, detectable electromagnetic signals, a passive only terrain perception mode of operation is also often a requirement. Thermal infrared (TIR) cameras can be used to provide day and night passive terrain perception. TIR cameras have a detector sensitive to either mid-wave infrared (MWIR) radiation (3-5 μm) or long-wave infrared (LWIR) radiation (8-12 μm). With the recent emergence of high-quality uncooled LWIR cameras, TIR cameras have become viable passive perception options for some UGV programs. The Jet Propulsion Laboratory (JPL) has used a stereo pair of TIR cameras under several UGV programs to perform stereo ranging, terrain mapping, tree-trunk detection, pedestrian detection, negative obstacle detection, and water detection based on object reflections. In addition, we have evaluated stereo range data at a variety of UGV speeds, evaluated dual-band TIR classification of soil, vegetation, and rock terrain types, analyzed 24 hour water and 12 hour mud TIR imagery, and analyzed TIR imagery for hazard detection through smoke. Since TIR cameras do not currently provide the resolution available from megapixel color cameras, a UGV's daytime safe speed is often reduced when using TIR instead of color cameras. In this paper, we summarize the UGV terrain perception work JPL has performed with TIR cameras over the last decade and describe a calibration target developed by General Dynamics Robotic Systems (GDRS) for TIR cameras and other sensors.
Optical designs for the Mars '03 rover cameras
NASA Astrophysics Data System (ADS)
Smith, Gregory H.; Hagerott, Edward C.; Scherr, Lawrence M.; Herkenhoff, Kenneth E.; Bell, James F.
2001-12-01
In 2003, NASA is planning to send two robotic rover vehicles to explore the surface of Mars. The spacecraft will land on airbags in different, carefully chosen locations. The search for evidence indicating conditions favorable for past or present life will be a high priority. Each rover will carry a total of ten cameras of five various types. There will be a stereo pair of color panoramic cameras, a stereo pair of wide-field navigation cameras, one close-up camera on a movable arm, two stereo pairs of fisheye cameras for hazard avoidance, and one Sun sensor camera. This paper discusses the lenses for these cameras. Included are the specifications, design approaches, expected optical performances, prescriptions, and tolerances.
2015-09-30
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This image from NASA's 2001 Mars Odyssey spacecraft shows where Mawrth Vallis empties into Chryse Planitia.
Yuty Crater Ejecta - False Color
2016-04-26
The THEMIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This image from NASA's 2001 Mars Odyssey spacecraft shows part of the ejecta from Yuty Crater.
2016-02-01
The THEMIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This image captured by NASA's 2001 Mars Odyssey spacecraft shows part of the plains of Terra Sabaea.
2016-02-04
The THEMIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This image captured by NASA's 2001 Mars Odyssey spacecraft shows a group of unnamed craters north of Fournier Crater.
2015-07-27
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This image from NASA's 2001 Mars Odyssey spacecraft shows part of Capri Mensa and Capri Chasma.
2010-02-16
This false-color image, taken by the panoramic camera on NASA's rover Opportunity, shows the rock Chocolate Hills, perched on the rim of the 10-meter (33-foot) wide Concepción crater. This rock has a thick, dark-colored coating resembling chocolate.
Martian Rock Harrison in Color, Showing Crystals
2014-01-29
This view of a Martian rock target called "Harrison" merges images from two cameras aboard NASA's Curiosity Mars rover to provide both color and microscopic detail. The elongated crystals are likely feldspars, and the matrix is pyroxene-dominated.
1969-05-25
S69-34968 (24 May 1969) --- Astronaut Eugene A. Cernan, Apollo 10 lunar module pilot, is seen in this color reproduction taken from a telecast made by the color television camera aboard the Apollo 10 spacecraft during its trans-Earth journey home.
360 deg Camera Head for Unmanned Sea Surface Vehicles
NASA Technical Reports Server (NTRS)
Townsend, Julie A.; Kulczycki, Eric A.; Willson, Reginald G.; Huntsberger, Terrance L.; Garrett, Michael S.; Trebi-Ollennu, Ashitey; Bergh, Charles F.
2012-01-01
The 360° camera head consists of a set of six color cameras arranged in a circular pattern such that their overlapping fields of view give a full 360° view of the immediate surroundings. The cameras are enclosed in a watertight container along with support electronics and a power distribution system. Each camera views the world through a watertight porthole. To prevent overheating or condensation in extreme weather conditions, the watertight container is also equipped with an electrical cooling unit and a pair of internal fans for circulation.
Peteye detection and correction
NASA Astrophysics Data System (ADS)
Yen, Jonathan; Luo, Huitao; Tretter, Daniel
2007-01-01
Redeyes are caused by the camera flash light reflecting off the retina. Peteyes refer to similar artifacts in the eyes of other mammals caused by camera flash. In this paper we present a peteye removal algorithm for detecting and correcting peteye artifacts in digital images. Peteye removal for animals is significantly more difficult than redeye removal for humans, because peteyes can be any of a variety of colors, and human face detection cannot be used to localize the animal eyes. In many animals, including dogs and cats, the retina has a special reflective layer that can cause a variety of peteye colors, depending on the animal's breed, age, fur color, etc. This makes peteye correction more challenging. We have developed a semi-automatic algorithm for peteye removal that detects peteyes based on the cursor position provided by the user and corrects them by neutralizing the colors, with glare reduction and glint retention.
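The correction step can be pictured with a toy sketch along the lines described: desaturate the artifact region around the user's cursor (neutralizing the color and damping glare) while leaving the brightest pixels, the glint, untouched. The radius, dimming factor, and percentile below are illustrative choices, not the paper's.

```python
import numpy as np

def neutralize_peteye(img, cx, cy, radius, glint_pct=99.0):
    """Crude sketch: desaturate a circular region around the cursor,
    keeping the brightest pixels (the glint) untouched."""
    h, w, _ = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
    region = img[mask].astype(float)
    luma = region.mean(axis=1)
    glint = luma >= np.percentile(luma, glint_pct)   # glint retention
    gray = np.clip(luma * 0.6, 0, 255)               # glare reduction
    fixed = region.copy()
    fixed[~glint] = gray[~glint, None]               # neutralize color
    out = img.copy()
    out[mask] = fixed.astype(img.dtype)
    return out
```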
Note: In vivo pH imaging system using luminescent indicator and color camera
NASA Astrophysics Data System (ADS)
Sakaue, Hirotaka; Dan, Risako; Shimizu, Megumi; Kazama, Haruko
2012-07-01
A microscopic in vivo pH imaging system is developed that can capture both luminescent and color images. The former gives a quantitative measurement of the pH distribution in vivo. The latter captures structural information that can be overlaid on the pH distribution to correlate the structure of a specimen with its pH distribution. By using a digital color camera, a luminescent image as well as a color image is obtained. The system uses HPTS (8-hydroxypyrene-1,3,6-trisulfonate) as a luminescent pH indicator for the luminescent imaging. Filter units mounted in the microscope extract two luminescent images for the excitation-ratio method. The ratio of the two images is converted to a pH distribution through an a priori pH calibration. An application of the system to epidermal cells of Lactuca sativa L. is shown.
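A minimal sketch of the excitation-ratio step, assuming two background-corrected luminescent images and a calibration previously fitted as a polynomial (the polynomial form is an illustrative assumption, not the paper's stated calibration model):

```python
import numpy as np

def ph_from_ratio(img_ex1, img_ex2, calib_coeffs, eps=1e-6):
    """Excitation-ratio method: pH = f(I_ex1 / I_ex2), with f taken
    from an a priori calibration (here a hypothetical polynomial)."""
    ratio = img_ex1.astype(float) / (img_ex2.astype(float) + eps)
    return np.polyval(calib_coeffs, ratio)   # per-pixel pH map
```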
Colorful Saturn, Getting Closer
2004-06-03
As Cassini coasts into the final month of its nearly seven-year trek, the serene majesty of its destination looms ahead. The spacecraft's cameras are functioning beautifully and continue to return stunning views from Cassini's position, 1.2 billion kilometers (750 million miles) from Earth and now 15.7 million kilometers (9.8 million miles) from Saturn. In this narrow angle camera image from May 21, 2004, the ringed planet displays subtle, multi-hued atmospheric bands, colored by yet undetermined compounds. Cassini mission scientists hope to determine the exact composition of this material. This image also offers a preview of the detailed survey Cassini will conduct on the planet's dazzling rings. Slight differences in color denote both differences in ring particle composition and light scattering properties. Images taken through blue, green and red filters were combined to create this natural color view. The image scale is 132 kilometers (82 miles) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA06060
1986-01-17
Range: 9.1 million kilometers (5.7 million miles) P-29478C These two pictures of Uranus, one in true color and the other in false color, were taken by Voyager 2's narrow-angle camera. The picture at left has been processed to show Uranus as the human eye would see it from the vantage point of the spacecraft. The image is a composite of shots taken through blue, green, and orange filters. The darker shadings at the upper right of the disk correspond to the day-night boundary on the planet. Beyond this boundary lies the hidden northern hemisphere of Uranus, which currently remains in total darkness as the planet rotates. The blue-green color results from the absorption of red light by methane gas in Uranus' deep, cold, and remarkably clear atmosphere. The picture at right uses false color and extreme contrast to bring out subtle details in the polar region of Uranus. Images obtained through ultraviolet, violet, and orange filters were respectively converted to the same blue, green, and red colors used to produce the picture at left. The very slight contrasts visible in true color are greatly exaggerated here. In this false color picture, Uranus reveals a dark polar hood surrounded by a series of progressively lighter concentric bands. One possible explanation is that a brownish haze or smog, concentrated around the pole, is arranged into bands by zonal motions of the upper atmosphere. Several artifacts of the optics and processing are visible. The occasional donut shapes are shadows cast by dust in the camera optics; the processing needed to bring out faint features also brings out camera blemishes. In addition, the bright pink strip at the lower edge of the planet's limb is an artifact of the image enhancement. In fact, the limb is dark and uniform in color around the planet.
NASA Astrophysics Data System (ADS)
Breitfelder, Stefan; Reichel, Frank R.; Gaertner, Ernst; Hacker, Erich J.; Cappellaro, Markus; Rudolf, Peter; Voelk, Ute
1998-04-01
Digital cameras are of increasing significance for professional applications in photo studios where fashion, portrait, product, and catalog photographs or advertising photos of high quality have to be taken. The eyelike is a digital camera system developed for such applications. It is capable of working online with high frame rates and images of full sensor size, and it provides a resolution that can be varied between 2048 by 2048 and 6144 by 6144 pixels at an RGB color depth of 12 bits per channel, with a variable exposure time of 1/60 s to 1 s. With an exposure time of 100 ms, digitization takes approximately 2 seconds for an image of 2048 by 2048 pixels (12 MByte), 8 seconds for an image of 4096 by 4096 pixels (48 MByte), and 40 seconds for an image of 6144 by 6144 pixels (108 MByte). The eyelike can be used in various configurations. Used as a camera body, most commercial lenses can be connected to the camera via existing lens adaptors. On the other hand, the eyelike can be used as a back on most commercial 4 × 5 inch view cameras. This paper describes the eyelike camera concept with the essential system components. The article finishes with a description of the software, which is needed to bring the high quality of the camera to the user.
Search Strategies of Visually Impaired Persons using a Camera Phone Wayfinding System
Manduchi, R.; Coughlan, J.; Ivanchenko, V.
2016-01-01
We report new experiments conducted using a camera phone wayfinding system, which is designed to guide a visually impaired user to machine-readable signs (such as barcodes) labeled with special color markers. These experiments specifically investigate search strategies of such users detecting, localizing and touching color markers that have been mounted in various ways in different environments: in a corridor (either flush with the wall or mounted perpendicular to it) or in a large room with obstacles between the user and the markers. The results show that visually impaired users are able to reliably find color markers in all the conditions that we tested, using search strategies that vary depending on the environment in which they are placed. PMID:26949755
NASA Astrophysics Data System (ADS)
Takada, Shunji; Ihama, Mikio; Inuiya, Masafumi
2006-02-01
Digital still cameras overtook film cameras in the Japanese market in 2000 in terms of sales volume, owing to their versatile functions. However, the image-capturing capabilities of color films, such as sensitivity and latitude, are still superior to those of digital image sensors. In this paper, we attribute the high performance of color films to their multi-layered structure, and propose solid-state image sensors with stacked organic photoconductive layers having narrow absorption bands on CMOS read-out circuits.
1972-12-14
The Apollo 17 Lunar Module (LM) "Challenger" ascent stage leaves the Taurus-Littrow landing site as it makes its spectacular liftoff from the lunar surface, as seen in this reproduction taken from a color television transmission made by the color RCA TV camera mounted on the Lunar Roving Vehicle (LRV). The LRV-mounted TV camera, remotely controlled from the Mission Control Center (MCC) in Houston, made it possible for people on Earth to watch the fantastic event. The LM liftoff was at 188:01:36 ground elapsed time, 4:54:36 p.m. (CST), Thursday, December 14, 1972.
Introduction of A New Toolbox for Processing Digital Images From Multiple Camera Networks: FMIPROT
NASA Astrophysics Data System (ADS)
Melih Tanis, Cemal; Nadir Arslan, Ali
2017-04-01
Webcam networks intended for scientific monitoring of ecosystems provide digital images and other environmental data for various studies. Other types of camera networks can also be used for scientific purposes, e.g., traffic webcams for phenological studies, or camera networks monitoring ski tracks and avalanches over mountains for hydrological studies. To efficiently harness the potential of these camera networks, easy-to-use software that can obtain and handle images from different networks with different protocols and standards is necessary. Numerous software packages for analyzing images from webcam networks are freely available. These packages have different strong features, not only for analyzing but also for post-processing digital images; but specifically for ease of use, applicability, and scalability, a different set of features could be added. Thus, a more customized approach is of high value, not only for analyzing images of comprehensive camera networks, but also for creating operational data extraction and processing with an easy-to-use toolbox. In this paper, we introduce a new toolbox, the Finnish Meteorological Institute Image PROcessing Tool (FMIPROT), in which such a customized approach is followed. FMIPROT currently has the following features: • straightforward installation, • no software dependencies that require extra installation, • communication with multiple camera networks, • automatic downloading and handling of images, • a user-friendly and simple user interface, • data filtering, • visualization of results on customizable plots, • plugins that allow users to add their own algorithms. Current image analyses in FMIPROT include "Color Fraction Extraction" and "Vegetation Indices". Color fraction extraction calculates the fractions of red, green, and blue in a region of interest, along with brightness and luminance parameters. The vegetation indices analysis is a collection of indices used in vegetation phenology and includes "Green Fraction" (green chromatic coordinate), "Green-Red Vegetation Index" and "Green Excess Index"; a "Snow cover fraction" analysis, which detects snow-covered pixels in the images and georeferences them on a geospatial plane to calculate the snow cover fraction, is being implemented at the moment. FMIPROT is being developed during the EU Life+ MONIMET project, in which we mounted 28 cameras at 14 different sites in Finland as the MONIMET camera network. In this paper, we present details of FMIPROT and analysis results from the MONIMET camera network, and discuss planned future developments of FMIPROT.
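For reference, the vegetation indices named above have standard phenology definitions computable from the mean R, G, B of a region of interest; the sketch below uses those textbook formulas, which may differ in detail from FMIPROT's implementation.

```python
import numpy as np

def vegetation_indices(img, roi):
    """Standard phenology indices over a region of interest (y0, y1, x0, x1):
    gcc  = G / (R + G + B)      (green chromatic coordinate)
    grvi = (G - R) / (G + R)    (green-red vegetation index)
    gei  = 2G - R - B           (green excess index)"""
    y0, y1, x0, x1 = roi
    patch = img[y0:y1, x0:x1].astype(float)
    r, g, b = (patch[..., c].mean() for c in range(3))
    gcc = g / (r + g + b)
    grvi = (g - r) / (g + r)
    gei = 2 * g - r - b
    return gcc, grvi, gei
```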
2017-08-11
These two views of Saturn's moon Titan exemplify how NASA's Cassini spacecraft has revealed the surface of this fascinating world. Cassini carried several instruments to pierce the veil of hydrocarbon haze that enshrouds Titan. The mission's imaging cameras also have several spectral filters sensitive to specific wavelengths of infrared light that are able to make it through the haze to the surface and back into space. These "spectral windows" have enabled the imaging cameras to map nearly the entire surface of Titan. In addition to Titan's surface, images from both the imaging cameras and VIMS have provided windows into the moon's ever-changing atmosphere, chronicling the appearance and movement of hazes and clouds over the years. A large, bright and feathery band of summer clouds can be seen arcing across high northern latitudes in the view at right. These views were obtained with the Cassini spacecraft narrow-angle camera on March 21, 2017. Images taken using red, green and blue spectral filters were combined to create the natural-color view at left. The false-color view at right was made by substituting an infrared image (centered at 938 nanometers) for the red color channel. The views were acquired at a distance of approximately 613,000 miles (986,000 kilometers) from Titan. Image scale is about 4 miles (6 kilometers) per pixel. https://photojournal.jpl.nasa.gov/catalog/PIA21624
Evaluation of a hyperspectral image database for demosaicking purposes
NASA Astrophysics Data System (ADS)
Larabi, Mohamed-Chaker; Süsstrunk, Sabine
2011-01-01
We present a study on the applicability of hyperspectral images for evaluating color filter array (CFA) designs and the performance of demosaicking algorithms. The aim is to simulate a typical digital still camera processing pipeline and to compare two scenarios: evaluating the performance of demosaicking algorithms applied to raw camera RGB values before color rendering to sRGB, and evaluating the performance of demosaicking algorithms applied to the final sRGB color-rendered image. The second scenario is the most frequently used in the literature because CFA designs and algorithms are usually tested on a set of existing images that are already rendered, such as the Kodak Photo CD set containing the well-known lighthouse image. We simulate the camera processing pipeline with measured spectral sensitivity functions of a real camera. Modeling a Bayer CFA, we select three linear demosaicking techniques in order to perform the tests. The evaluation is done using CMSE, CPSNR, s-CIELAB, and MSSIM metrics to compare demosaicking results. We find that the performance, and especially the difference between demosaicking algorithms, depends significantly on whether the mosaicking/demosaicking is applied to camera raw values as opposed to already rendered sRGB images. We argue that evaluating the former gives a better indication of how a CFA/demosaicking combination will work in practice, and that it is in the interest of the community to create a hyperspectral image dataset dedicated to that effect.
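Two of the pipeline's building blocks are easy to sketch: sampling a scene through an RGGB Bayer CFA and scoring a demosaicked result with CPSNR. The sketch below assumes 8-bit images; the demosaicking algorithms themselves and the s-CIELAB/MSSIM metrics are omitted.

```python
import numpy as np

def bayer_mosaic(img):
    """Sample an RGB image through an RGGB Bayer CFA (one channel/pixel)."""
    h, w, _ = img.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = img[0::2, 0::2, 0]  # R
    mosaic[0::2, 1::2] = img[0::2, 1::2, 1]  # G
    mosaic[1::2, 0::2] = img[1::2, 0::2, 1]  # G
    mosaic[1::2, 1::2] = img[1::2, 1::2, 2]  # B
    return mosaic

def cpsnr(ref, test):
    """Color peak signal-to-noise ratio over all three channels (8-bit)."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)
```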
Precise color images a high-speed color video camera system with three intensified sensors
NASA Astrophysics Data System (ADS)
Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.
1999-06-01
High-speed imaging systems are used across a wide range of fields in science and engineering. Although high-speed camera systems have reached high performance, most of their applications only acquire high-speed motion pictures. However, in some fields of science and technology, it is useful to obtain other information as well, such as the temperature of combustion flames, thermal plasmas, and molten materials. Recent digital high-speed video imaging technology should be able to extract such information from those objects. For this purpose, we have developed a high-speed video camera system with three intensified sensors and a cubic prism image splitter. The maximum frame rate is 40,500 pps (pictures per second) at 64 x 64 pixels and 4,500 pps at 256 x 256 pixels with 256 (8-bit) intensity resolution for each pixel. The camera system can store more than 1,000 pictures continuously in solid-state memory. To obtain precise color images from this camera system, we need a digital technique, consisting of a computer program and ancillary instruments, to adjust the displacement of images taken from two or three image sensors and to calibrate the relationship between incident light intensity and the corresponding digital output signals. In this paper, a digital technique for pixel-based displacement adjustment is proposed. Although the displacement of the corresponding circle was more than 8 pixels in the original image, the displacement was adjusted to within 0.2 pixels at most by this method.
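One common way to estimate inter-sensor displacement to sub-pixel precision is FFT cross-correlation with a parabolic fit around the correlation peak; the paper's exact algorithm is not reproduced here, so the sketch below is a generic stand-in.

```python
import numpy as np

def subpixel_shift(ref, img):
    """Estimate the (dy, dx) shift between two grayscale sensor images:
    FFT cross-correlation peak refined by 1-D parabolic interpolation."""
    corr = np.real(np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))))
    py, px = np.unravel_index(np.argmax(corr), corr.shape)

    def refine(c_m, c_0, c_p):  # parabolic peak offset in [-0.5, 0.5]
        denom = c_m - 2 * c_0 + c_p
        return 0.0 if denom == 0 else 0.5 * (c_m - c_p) / denom

    h, w = corr.shape
    dy = py + refine(corr[(py - 1) % h, px], corr[py, px], corr[(py + 1) % h, px])
    dx = px + refine(corr[py, (px - 1) % w], corr[py, px], corr[py, (px + 1) % w])
    if dy > h / 2: dy -= h      # wrap to signed shifts
    if dx > w / 2: dx -= w
    return dy, dx
```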
Noor, M Omair; Krull, Ulrich J
2014-10-21
Paper-based diagnostic assays are gaining increasing popularity for their potential application in resource-limited settings and for point-of-care screening. Achievement of high sensitivity with precision and accuracy can be challenging when using paper substrates. Herein, we implement the red-green-blue color palette of a digital camera for quantitative ratiometric transduction of nucleic acid hybridization on a paper-based platform using immobilized quantum dots (QDs) as donors in fluorescence resonance energy transfer (FRET). A nonenzymatic and reagentless means of signal enhancement for QD-FRET assays on paper substrates is based on the use of dry paper substrates for data acquisition. This approach offered at least a 10-fold higher assay sensitivity and at least a 10-fold lower limit of detection (LOD) as compared to hydrated paper substrates. The surface of paper was modified with imidazole groups to assemble a transduction interface that consisted of immobilized QD-probe oligonucleotide conjugates. Green-emitting QDs (gQDs) served as donors with Cy3 as an acceptor. A hybridization event that brought the Cy3 acceptor dye in close proximity to the surface of immobilized gQDs was responsible for a FRET-sensitized emission from the acceptor dye, which served as an analytical signal. A hand-held UV lamp was used as an excitation source and ratiometric analysis using an iPad camera was possible by a relative intensity analysis of the red (Cy3 photoluminescence (PL)) and green (gQD PL) color channels of the digital camera. For digital imaging using an iPad camera, the LOD of the assay in a sandwich format was 450 fmol with a dynamic range spanning 2 orders of magnitude, while an epifluorescence microscope detection platform offered a LOD of 30 fmol and a dynamic range spanning 3 orders of magnitude. The selectivity of the hybridization assay was demonstrated by detection of a single nucleotide polymorphism at a contrast ratio of 60:1. This work provides an important framework for the integration of QD-FRET methods with digital imaging for a ratiometric transduction of nucleic acid hybridization on a paper-based platform.
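The ratiometric readout reduces to comparing the red (Cy3) and green (gQD) channel intensities within the detection zone. A minimal sketch, with the region of interest supplied by the caller and the calibration from ratio to analyte amount omitted:

```python
import numpy as np

def fret_ratio(rgb_img, roi):
    """Ratiometric signal from a photo of the paper assay: mean red
    channel (Cy3 emission) over mean green channel (gQD emission).
    Assumes an RGB array; mapping ratio to analyte amount would use a
    separately measured calibration curve."""
    y0, y1, x0, x1 = roi
    patch = rgb_img[y0:y1, x0:x1].astype(float)
    return patch[..., 0].mean() / max(patch[..., 1].mean(), 1e-6)
```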
The Mars Color Imager (MARCI) on the Mars Climate Orbiter
NASA Astrophysics Data System (ADS)
Malin, M. C.; Calvin, W.; Clancy, R. T.; Haberle, R. M.; James, P. B.; Lee, S. W.; Thomas, P. C.; Caplinger, M. A.
2001-08-01
The Mars Color Imager, or MARCI, experiment on the Mars Climate Orbiter (MCO) consists of two cameras with unique optics and identical focal plane assemblies (FPAs), Data Acquisition System (DAS) electronics, and power supplies. Each camera is characterized by small physical size and mass (~6 × 6 × 12 cm, including baffle; <500 g), low power requirements (<2.5 W, including power supply losses), and high science performance (1000 × 1000 pixel, low noise). The Wide Angle (WA) camera will have the capability to map Mars in five visible and two ultraviolet spectral bands at a resolution of better than 8 km/pixel under the worst case downlink data rate. Under better downlink conditions the WA will provide kilometer-scale global maps of atmospheric phenomena such as clouds, hazes, dust storms, and the polar hood. Limb observations will provide additional detail on atmospheric structure at
Teaching of color in predoctoral and postdoctoral dental education in 2009.
Paravina, Rade D; O'Neill, Paula N; Swift, Edward J; Nathanson, Dan; Goodacre, Charles J
2010-01-01
The goal of the study was to determine the current status of the teaching of color in dental education at both the predoctoral (Pre-D) and postdoctoral (Post-D) levels. A cross-sectional web-based survey containing 27 multiple-choice, multiple-best-answer, and single-best-answer questions was created. Upon receiving administrative approval, the survey was administered to dental faculty involved in the teaching of color to Pre-D or Post-D dental students from around the world (N=205). Statistical analysis of differences between Pre-D and Post-D programs was performed using the Chi-square test (α=0.05). A total of 130 responses were received (response rate 63.4%): 70 from North America, 40 from Europe, 10 from South America, nine from Asia, and one from Africa. A course on "color" or "color in dentistry" was included in the dental curriculum of 80% of Pre-D programs and 82% of Post-D programs. The number of hours dedicated to color-related topics was 4.0±2.4 for Pre-D and 5.5±2.9 for Post-D (p<0.01). Topics associated with tooth color, shade matching methods, tooth whitening, and appearance parameters other than color were frequently taught. Significant differences were recorded between the number of hours dedicated to the teaching of color at the predoctoral and postdoctoral levels. The same is true for the prosthodontics and restorative courses, and for teaching on negative afterimages, the color rendering index, the Bleachedguide 3D-Master shade guide, digital camera and lens selection, composite resins, and maxillofacial prosthetic materials. Except for the restorative courses and composite resins, significantly higher results were recorded for Post-D programs. Vitapan Classical and 3D-Master were the most frequently taught shade guides.
Mount Sharp Panorama in White-Balanced Colors
2013-03-15
This mosaic of images from the Mast Camera (Mastcam) on NASA's Mars rover Curiosity shows Mount Sharp in a white-balanced color adjustment that makes the sky look overly blue but shows the terrain as if under Earth-like lighting.
Serial-to-parallel color-TV converter
NASA Technical Reports Server (NTRS)
Doak, T. W.; Merwin, R. B.; Zuckswert, S. E.; Sepper, W.
1976-01-01
A solid-state analog-to-digital converter eliminates flicker and problems with time-base stability and gain variation in sequential color TV cameras. The device includes a 3-bit delta modulator; a two-field memory; timing, switching, and sync networks; and three 3-bit delta demodulators.
2015-06-18
The THEMIS VIS camera contains 5 filters. Data from different filters can be combined in multiple ways to create a false color image. This image from NASA's 2001 Mars Odyssey spacecraft shows the central pit of an unnamed crater south of Coprates Catena.
2016-02-05
The THEMIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This image from NASA's 2001 Mars Odyssey spacecraft shows a variety of surface materials in the plains of Sabaea Terra.
Wegener Crater Dunes - False Color
2016-06-23
The THEMIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This image from NASA's 2001 Mars Odyssey spacecraft shows some of the dunes on the floor of Wegener Crater.
Exposed by Rocket Engine Blasts
2012-08-12
This color image from NASA's Curiosity rover shows an area excavated by the blast of the Mars Science Laboratory descent stage rocket engines. This is part of a larger, high-resolution color mosaic made from images obtained by Curiosity's Mast Camera.
2015-02-09
If your eyes could only see the color red, this is how Saturn's rings would look. Many Cassini color images, like this one, are taken in red light so scientists can study the often subtle color variations of Saturn's rings. These variations may reveal clues about the chemical composition and physical nature of the rings. For example, the longer a surface is exposed to the harsh environment in space, the redder it becomes. Putting together many clues derived from such images, scientists are coming to a deeper understanding of the rings without ever actually visiting a single ring particle. This view looks toward the sunlit side of the rings from about 11 degrees above the ringplane. The image was taken in red light with the Cassini spacecraft narrow-angle camera on Dec. 6, 2014. The view was acquired at a distance of approximately 870,000 miles (1.4 million kilometers) from Saturn and at a Sun-Saturn-spacecraft, or phase, angle of 27 degrees. Image scale is 5 miles (8 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA18301
History of Hubble Space Telescope (HST)
1995-12-01
This deepest-ever view of the universe unveils myriad galaxies back to the beginning of time. Several hundred never-before-seen galaxies are visible in this view of the universe, called the Hubble Deep Field (HDF). Besides the classical spiral and elliptical shaped galaxies, there is a bewildering variety of other galaxy shapes and colors that are important clues to understanding the evolution of the universe. Some of the galaxies may have formed less than one billion years after the Big Bang. The image was assembled from many separate exposures with the Wide Field/Planetary Camera 2 (WF/PC2), taken over ten consecutive days between December 18, 1995 and December 28, 1995. This true-color view was assembled from separate images taken in blue, red, and infrared light. By combining these separate images into a single color picture, astronomers will be able to infer, at least statistically, the distance, age, and composition of galaxies in the field. Blue objects contain young stars and/or are relatively close, while redder objects contain older stellar populations and/or are farther away.
Introduction to Color Imaging Science
NASA Astrophysics Data System (ADS)
Lee, Hsien-Che
2005-04-01
Color imaging technology has become almost ubiquitous in modern life in the form of monitors, liquid crystal screens, color printers, scanners, and digital cameras. This book is a comprehensive guide to the scientific and engineering principles of color imaging. It covers the physics of light and color, how the eye and physical devices capture color images, how color is measured and calibrated, and how images are processed. It stresses physical principles and includes a wealth of real-world examples. The book will be of value to scientists and engineers in the color imaging industry and, with homework problems, can also be used as a text for graduate courses on color imaging.
Evaluation of the ImmerVision IMV1-1/3NI Panomorph Lens on a Small Unmanned Ground Vehicle (SUGV)
2013-07-01
360°. For the above reason, a 1.3-MP Chameleon color universal serial bus (USB) camera with a 1/3-in CCD from PGR was selected instead of...recommended qualified cameras to host the panomorph lens. Having the advantage of a small footprint, the Chameleon camera with the IMV1 lens can be easily
Full-Field Calibration of Color Camera Chromatic Aberration using Absolute Phase Maps.
Liu, Xiaohong; Huang, Shujun; Zhang, Zonghua; Gao, Feng; Jiang, Xiangqian
2017-05-06
The refractive index of a lens varies with the wavelength of light, so the same incident light at different wavelengths exits the lens along different paths. This characteristic of lenses causes images captured by a color camera to exhibit chromatic aberration (CA), which seriously reduces image quality. Based on an analysis of the distribution of CA, a full-field calibration method based on absolute phase maps is proposed in this paper. Red, green, and blue closed sinusoidal fringe patterns are generated, consecutively displayed on an LCD (liquid crystal display), and captured by a color camera from a frontal viewpoint. The phase information of each color fringe is obtained using a four-step phase-shifting algorithm and an optimum fringe number selection method. CA causes the unwrapped phases of the three channels to differ. These pixel deviations can be computed by comparing the unwrapped phase data of the red, blue, and green channels in polar coordinates; CA calibration is then accomplished in Cartesian coordinates. The systematic errors introduced by the LCD are analyzed and corrected. Simulated results show the validity of the proposed method, and experimental results demonstrate that the proposed full-field calibration method based on absolute phase maps will be useful for practical software-based CA calibration.
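The four-step phase-shifting step has a standard closed form: with fringe intensities I_k = A + B cos(φ + kπ/2), the wrapped phase is φ = atan2(I3 − I1, I0 − I2). A minimal sketch of that step (phase unwrapping and the coordinate handling are omitted):

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Wrapped phase from four fringe images shifted by pi/2 each:
    I_k = A + B*cos(phi + k*pi/2)  =>  phi = atan2(I3 - I1, I0 - I2)."""
    return np.arctan2(i3.astype(float) - i1.astype(float),
                      i0.astype(float) - i2.astype(float))

# Applied separately to the red, green, and blue channels, the
# per-channel unwrapped phases can then be differenced to obtain the
# pixel deviations caused by chromatic aberration.
```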
View of 'Cape Verde' from 'Cape St. Mary' in Mid-Afternoon (False Color)
NASA Technical Reports Server (NTRS)
2006-01-01
As part of its investigation of 'Victoria Crater,' NASA's Mars Exploration Rover Opportunity examined a promontory called 'Cape Verde' from the vantage point of 'Cape St. Mary,' the next promontory clockwise around the crater's deeply scalloped rim. This view of Cape Verde combines several exposures taken by the rover's panoramic camera into an approximately false-color mosaic. The exposures were taken during mid-afternoon lighting conditions. The upper portion of the crater wall contains a jumble of material tossed outward by the impact that excavated the crater. This vertical cross-section through the blanket of ejected material surrounding the crater was exposed by erosion that expanded the crater outward from its original diameter, according to scientists' interpretation of the observations. Below the jumbled material in the upper part of the wall are layers that survive relatively intact from before the crater-causing impact. The images combined into this mosaic were taken during the 1,006th Martian day, or sol, of Opportunity's Mars-surface mission (Nov. 22, 2006). The panoramic camera took them through the camera's 750-nanometer, 530-nanometer and 430-nanometer filters. The false color enhances subtle color differences among materials in the rocks and soils of the scene.
View of 'Cape Verde' from 'Cape St. Mary' in Late Morning (False Color)
NASA Technical Reports Server (NTRS)
2006-01-01
As part of its investigation of 'Victoria Crater,' NASA's Mars Exploration Rover Opportunity examined a promontory called 'Cape Verde' from the vantage point of 'Cape St. Mary,' the next promontory clockwise around the crater's deeply scalloped rim. This view of Cape Verde combines several exposures taken by the rover's panoramic camera into a false-color mosaic. The exposures were taken during late-morning lighting conditions. The upper portion of the crater wall contains a jumble of material tossed outward by the impact that excavated the crater. This vertical cross-section through the blanket of ejected material surrounding the crater was exposed by erosion that expanded the crater outward from its original diameter, according to scientists' interpretation of the observations. Below the jumbled material in the upper part of the wall are layers that survive relatively intact from before the crater-causing impact. The images combined into this mosaic were taken during the 1,006th Martian day, or sol, of Opportunity's Mars-surface mission (Nov. 22, 2006). The panoramic camera took them through the camera's 750-nanometer, 530-nanometer and 430-nanometer filters. The false color enhances subtle color differences among materials in the rocks and soils of the scene.
Pohanka, Miroslav
2015-01-01
Smartphones are popular devices frequently equipped with sensitive sensors and great computational ability. Despite the widespread availability of smartphones, practical uses in analytical chemistry are limited, though some papers have proposed promising applications. In the present paper, a smartphone is used as a tool for the determination of cholinesterasemia i.e., the determination of a biochemical marker butyrylcholinesterase (BChE). The work should demonstrate suitability of a smartphone-integrated camera for analytical purposes. Paper strips soaked with indoxylacetate were used for the determination of BChE activity, while the standard Ellman’s assay was used as a reference measurement. In the smartphone-based assay, BChE converted indoxylacetate to indigo blue and coloration was photographed using the phone’s integrated camera. A RGB color model was analyzed and color values for the individual color channels were determined. The assay was verified using plasma samples and samples containing pure BChE, and validated using Ellmans’s assay. The smartphone assay was proved to be reliable and applicable for routine diagnoses where BChE serves as a marker (liver function tests; some poisonings, etc.). It can be concluded that the assay is expected to be of practical applicability because of the results’ relevance. PMID:26110404
NASA Astrophysics Data System (ADS)
Song, Seok-Jeong; Kim, Tae-Il; Kim, Youngmi; Nam, Hyoungsik
2018-05-01
Recently, a simple, sensitive, and low-cost fluorescent indicator has been proposed to determine water contents in organic solvents, drugs, and foodstuffs. A change in water content leads to a change in the indicator's fluorescence color under ultraviolet (UV) light. Whereas in previous research the water content values could be estimated from the spectrum obtained by a bulky and expensive spectrometer, this paper demonstrates a simple and low-cost camera-based water content measurement scheme with the same fluorescent water indicator. Water content is calculated over the range of 0-30% by quadratic polynomial regression models with color information extracted from the captured images of samples. In particular, several color spaces such as RGB, xyY, L*a*b*, u′v′, HSV, and YCbCr are investigated to establish the optimal color features, over both linear and nonlinear RGB data given by a camera before and after gamma correction. In the end, a 2nd-order polynomial regression model using HSV in the linear domain achieves the minimum mean square error of 1.06% under 3-fold cross validation. Additionally, the resulting water content estimation model is implemented and evaluated on an off-the-shelf Android-based smartphone.
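As an illustration of the regression described (the paper's exact feature set and fitting pipeline are not reproduced here), one can fit a quadratic model on a single HSV-derived feature of each sample's mean color:

```python
import colorsys
import numpy as np

def water_content_model(rgb_means, water_pcts):
    """Fit water% = a*h^2 + b*h + c on the HSV hue of each calibration
    sample's mean color; using hue alone is an illustrative choice."""
    hues = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[0]
            for r, g, b in rgb_means]
    return np.polyfit(hues, water_pcts, 2)   # coefficients [a, b, c]

# usage: coeffs = water_content_model(sample_colors, sample_labels)
#        estimate = np.polyval(coeffs, hue_of_new_sample)
```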
New concept high-speed and high-resolution color scanner
NASA Astrophysics Data System (ADS)
Nakashima, Keisuke; Shinoda, Shin'ichi; Konishi, Yoshiharu; Sugiyama, Kenji; Hori, Tetsuya
2003-05-01
We have developed a new-concept high-speed and high-resolution color scanner (Blinkscan) using digital camera technology. With our most advanced sub-pixel image processing technology, approximately 12 million pixels of image data can be captured. This high-resolution imaging capability allows various uses such as OCR, color document reading, and use as a document camera. The scan time is only about 3 seconds for a letter-size sheet. Blinkscan scans documents placed face up on its scan stage, without any special illumination. Using Blinkscan, a high-resolution color document can be easily input into a PC at high speed, so a paperless system can be built easily. The device is small and has a small footprint, so it can be set on an individual's desk. Blinkscan offers the usability of a digital camera and the accuracy of a flatbed scanner with high-speed processing. Several hundred Blinkscan units are now shipping, mainly for receptionist operations at banks and securities firms. We describe the high-speed and high-resolution architecture of Blinkscan. A comparison of operation time with conventional image capture devices makes the advantage of Blinkscan clear, and we evaluate image quality under a variety of conditions, such as geometric distortion and non-uniformity of brightness.
Study on color difference estimation method of medicine biochemical analysis
NASA Astrophysics Data System (ADS)
Wang, Chunhong; Zhou, Yue; Zhao, Hongxia; Sun, Jiashi; Zhou, Fengkun
2006-01-01
Biochemical analysis is an important inspection and diagnosis method in hospital clinics, and the biochemical analysis of urine is one important item. Urine test paper shows a corresponding color for different detection targets or different degrees of illness. The color difference between the standard threshold and the color of the urine test paper can be used to judge the degree of illness, enabling further analysis and diagnosis. Color is a three-dimensional psychophysical variable, whereas reflectance is one-dimensional; therefore, a color difference estimation method for urine testing can achieve better precision and convenience than the conventional test method based on one-dimensional reflectance, and can support an accurate diagnosis. A digital camera makes it easy to take an image of the urine test paper and thus carry out the urine biochemical analysis conveniently. In the experiment, the color image of the urine test paper is taken by a popular color digital camera and saved on a computer on which a simple color space conversion (RGB → XYZ → L*a*b*) and the calculation software are installed. Test samples are graded according to intelligent detection of quantitative color. The images taken at each time point are saved on the computer, so the whole course of an illness can be monitored. This method can also be used in other medical biochemical analyses that involve color. Experimental results show that this test method is quick and accurate; it can be used in hospitals, calibration organizations, and homes, so its application prospects are extensive.
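The conversion chain mentioned above (RGB → XYZ → L*a*b*) and the resulting color difference have standard definitions; a compact sketch for sRGB/D65 inputs and the CIE76 ΔE follows. The sRGB assumption is ours; an actual camera would first need its own colorimetric characterization.

```python
import numpy as np

M = np.array([[0.4124, 0.3576, 0.1805],     # sRGB (D65) to XYZ
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])
WHITE = np.array([0.9505, 1.0, 1.089])      # D65 reference white

def srgb_to_lab(rgb):
    """8-bit sRGB triplet -> CIE L*a*b* (D65)."""
    c = np.asarray(rgb, float) / 255.0
    c = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    xyz = M @ c / WHITE
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16,
                     500 * (f[0] - f[1]),
                     200 * (f[1] - f[2])])

def delta_e(rgb1, rgb2):
    """CIE76 color difference between two RGB samples."""
    return float(np.linalg.norm(srgb_to_lab(rgb1) - srgb_to_lab(rgb2)))
```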
Rock with Odd Coating Beside a Young Martian Crater, False Color
2010-03-24
This false color image from the panoramic camera on NASA's Mars Exploration Rover Opportunity shows a rock called Chocolate Hills, which the rover found and examined at the edge of a young crater called Concepción.
A Set of Blast Marks in Color, Right Side
2012-08-09
This cut-out from a color panorama image taken by NASA's Curiosity rover shows the effects of the descent stage rocket engines blasting the ground. It comes from the right side of the thumbnail panorama obtained by the Mast Camera.
A Set of Blast Marks in Color, Left Side
2012-08-09
This cut-out from a color panorama image taken by NASA's Curiosity rover shows the effects of the descent stage rocket engines blasting the ground. It comes from the left side of the thumbnail panorama obtained by Curiosity's Mast Camera.
2015-09-18
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This image from NASA's 2001 Mars Odyssey spacecraft shows the beginning of Ares Vallis at the edge of Iani Chaos.
Matara Crater Dunes - False Color
2017-04-20
The THEMIS camera contains 5 filters. Data from different filters can be combined in many ways to create a false color image. This image from NASA's 2001 Mars Odyssey spacecraft shows the sand sheet with surface dune forms on the floor of Matara Crater.
Calibration View of Earth and the Moon by Mars Color Imager
2005-08-22
Three days after the Mars Reconnaissance Orbiter's Aug. 12, 2005, launch, the spacecraft was pointed toward Earth and the Mars Color Imager camera was powered up to acquire a suite of images of Earth and the Moon.
Preplanning and Evaluating Video Documentaries and Features.
ERIC Educational Resources Information Center
Maynard, Riley
1997-01-01
This article presents a ten-part pre-production outline and post-production evaluation that helps communications students more effectively improve video skills. Examines camera movement and motion, camera angle and perspective, lighting, audio, graphics, backgrounds and color, special effects, editing, transitions, and music. Provides a glossary…
Malkusch, Wolf
2005-01-01
The enzyme-linked immunospot (ELISPOT) assay was originally developed for the detection of individual antibody-secreting B-cells. Since then, the method has been improved, and ELISPOT is used for the determination of the production of tumor necrosis factor (TNF)-alpha, interferon (IFN)-gamma, or various interleukins (IL-4, IL-5). ELISPOT measurements are performed in 96-well plates with nitrocellulose membranes, either visually or by means of image analysis. Image analysis offers various procedures to overcome variable background intensity and to separate true from false spots. ELISPOT readers offer a complete solution for precise and automatic evaluation of ELISPOT assays. The number, size, and intensity of each single spot can be determined, printed, or saved for further statistical evaluation. Cytokine spots are always round, but because of floating edges with the background, they have a non-smooth borderline. Resolution is a key feature for precise detection of ELISPOTs. In standard applications, shape and edge steepness are essential parameters, in addition to size and color, for accurate spot recognition. These parameters need a minimum spot diameter of 6 pixels. Collecting one single image per well with a standard color camera of 750 x 560 pixels results in a resolution much too low to capture all of the spots in a specimen: IFN-gamma spots may be only 25 µm in diameter, and TNF-alpha spots just 15 µm. A 750 x 560 pixel image of a 6-mm well has a pixel size of 12 µm, resulting in only 1 or 2 pixels per spot. Using precise microscope optics in combination with a high-resolution (1300 x 1030 pixel) integrating digital color camera, and at least 2 x 2 images per well, results in a pixel size of 2.5 µm and, as a minimum, a 6-pixel diameter per spot. New approaches try to detect two cytokines per cell at the same time (e.g., IFN-gamma and IL-5). Standard staining procedures produce brownish spots (horseradish peroxidase) and blue spots (alkaline phosphatase); problems may occur with color overlap from cells producing both cytokines, which results in violet spots. The latest experiments therefore use fluorescent labels as markers: fluorescein isothiocyanate yields green spots and Rhodamine red spots, and cells producing both cytokines appear yellow. These colors can be separated much more easily than the violet, red, and blue, especially at high resolution.
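A toy version of the size-based spot recognition discussed above (the 6-pixel minimum diameter comes from the text; the threshold and everything else are illustrative, and shape and edge-steepness checks are omitted):

```python
import numpy as np
from scipy import ndimage

def count_spots(gray, thresh, min_diam_px=6):
    """Hedged sketch of spot recognition: threshold, label connected
    components, keep blobs at least `min_diam_px` across."""
    labels, n = ndimage.label(gray < thresh)   # dark spots on light bg
    spots = []
    for i in range(1, n + 1):
        area = int((labels == i).sum())
        diam = 2 * np.sqrt(area / np.pi)       # equivalent diameter
        if diam >= min_diam_px:
            spots.append((i, area, diam))
    return spots
```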
High dynamic range image acquisition based on multiplex cameras
NASA Astrophysics Data System (ADS)
Zeng, Hairui; Sun, Huayan; Zhang, Tinghua
2018-03-01
High-dynamic-range imaging is an important technology for photoelectric information acquisition, providing higher dynamic range and more image detail and better reflecting the real environment, light, and color information. Currently, methods of high dynamic range image synthesis based on differently exposed image sequences cannot adapt to dynamic scenes: they fail to overcome the effects of moving targets, resulting in ghosting. Therefore, a new high dynamic range image acquisition method based on a multiplex camera system is proposed. First, differently exposed image sequences are captured with the camera array, and a derivative optical flow method based on the color gradient is used to obtain the deviation between images and align them. Then, a high dynamic range image fusion weighting function is established by combining the inverse camera response function and the deviation between images, and is applied to generate a high dynamic range image. Experiments show that the proposed method can effectively obtain high dynamic range images in dynamic scenes and achieves good results.
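A generic weighted-fusion step of this kind (not the authors' exact weighting function) divides each linearized pixel by its exposure time and averages with a mid-tone-favoring weight; `inv_crf` stands in for the inverse camera response function, assumed to be known.

```python
import numpy as np

def fuse_hdr(images, exposures, inv_crf, sigma=0.2):
    """Merge aligned 8-bit exposures into a radiance map. Each pixel is
    linearized via `inv_crf`, divided by its exposure time, and averaged
    with a Gaussian hat weight that favors well-exposed mid-tones."""
    num = np.zeros(images[0].shape, float)
    den = np.zeros(images[0].shape, float)
    for img, t in zip(images, exposures):
        z = img.astype(float) / 255.0
        w = np.exp(-((z - 0.5) ** 2) / (2 * sigma ** 2))
        num += w * inv_crf(img) / t
        den += w
    return num / np.maximum(den, 1e-8)
```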
Digital camera auto white balance based on color temperature estimation clustering
NASA Astrophysics Data System (ADS)
Zhang, Lei; Liu, Peng; Liu, Yuling; Yu, Feihong
2010-11-01
Auto white balance (AWB) is an important technique for digital cameras. The human visual system has the ability to recognize the original color of an object in a scene illuminated by a light source that has a different color temperature from D65, the standard daylight. However, recorded images or video clips can only record the light incident on the sensor, so they appear different from the real scene observed by a human. Auto white balance is a technique to solve this problem. Traditional methods such as the gray world assumption and white point estimation may fail for scenes with large color patches. In this paper, an AWB method based on color temperature estimation clustering is presented and discussed. First, the method defines a list of several lighting conditions that are common in daily life, represented by their color temperatures, together with thresholds for each color temperature that determine whether a light source is that kind of illumination. Second, the image to be white balanced is divided into N blocks (N is determined empirically); for each block, the gray world assumption is used to calculate the color cast, from which the color temperature of that block is estimated. Third, each calculated color temperature is compared with the color temperatures in the illumination list; if the color temperature of a block is not within any of the thresholds in the list, that block is discarded. Fourth, a majority vote is taken over the remaining blocks, and the color temperature with the most blocks is considered the color temperature of the light source. Experimental results show that the proposed method works well for most commonly used light sources: color casts are removed and the final images look natural.
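The four steps translate directly into a short sketch. The illuminant table and tolerance below are invented placeholders; a real implementation would measure the expected R/G and B/G casts of a gray surface under each reference illuminant.

```python
import numpy as np

# Hypothetical illuminant list: nominal color temperature (K) mapped to
# the expected (R/G, B/G) cast of a gray patch under that light.
ILLUMINANTS = {2800: (1.45, 0.60), 4000: (1.15, 0.80),
               5000: (1.05, 0.92), 6500: (1.00, 1.00)}
TOL = 0.08   # placeholder matching threshold

def estimate_illuminant(img, n=8):
    """Per-block gray-world casts matched against the illuminant list;
    majority vote over the blocks that matched some illuminant."""
    h, w, _ = img.shape
    votes = []
    for by in range(n):
        for bx in range(n):
            blk = img[by*h//n:(by+1)*h//n, bx*w//n:(bx+1)*w//n].astype(float)
            r, g, b = (blk[..., c].mean() for c in range(3))
            rg, bg = r / (g + 1e-6), b / (g + 1e-6)
            for ct, (erg, ebg) in ILLUMINANTS.items():
                if abs(rg - erg) < TOL and abs(bg - ebg) < TOL:
                    votes.append(ct)
                    break            # unmatched blocks are discarded
    return max(set(votes), key=votes.count) if votes else None
```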
2015-02-27
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA's 2001 Mars Odyssey spacecraft shows part of Melas Chasma. Orbit Number: 4622 Latitude: -12.797 Longitude: 288.629 Instrument: VIS Captured: 2002-12-30 00:28 http://photojournal.jpl.nasa.gov/catalog/PIA19218
Cai, Jinhai; Okamoto, Mamoru; Atieno, Judith; Sutton, Tim; Li, Yongle; Miklavcic, Stanley J.
2016-01-01
Leaf senescence, an indicator of plant age and ill health, is an important phenotypic trait for the assessment of a plant’s response to stress. Manual inspection of senescence, however, is time consuming, inaccurate and subjective. In this paper we propose an objective evaluation of plant senescence by color image analysis for use in a high throughput plant phenotyping pipeline. As high throughput phenotyping platforms are designed to capture whole-of-plant features, camera lenses and camera settings are inappropriate for the capture of fine detail. Specifically, plant colors in images may not represent true plant colors, leading to errors in senescence estimation. Our algorithm features a color distortion correction and image restoration step prior to a senescence analysis. We apply our algorithm to two time series of images of wheat and chickpea plants to quantify the onset and progression of senescence. We compare our results with senescence scores resulting from manual inspection. We demonstrate that our procedure is able to process images in an automated way for an accurate estimation of plant senescence even from color distorted and blurred images obtained under high throughput conditions. PMID:27348807
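As a toy illustration of color-based senescence scoring (not the authors' algorithm, which also corrects color distortion and blur before analysis), one can report the fraction of plant pixels whose hue falls in a yellow-brown band:

```python
import cv2
import numpy as np

def senescence_fraction(bgr, plant_mask):
    """Toy score: fraction of plant pixels whose hue falls in a
    yellow-brown range. Thresholds are illustrative, not the paper's.
    `bgr` is an 8-bit OpenCV image; `plant_mask` is a boolean array."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    hue = hsv[..., 0]                       # OpenCV hue in [0, 180)
    senescent = (hue >= 10) & (hue <= 30) & plant_mask
    return senescent.sum() / max(int(plant_mask.sum()), 1)
```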
NASA Astrophysics Data System (ADS)
Taj-Eddin, Islam A. T. F.; Afifi, Mahmoud; Korashy, Mostafa; Ahmed, Ali H.; Cheng, Ng Yoke; Hernandez, Evelyng; Abdel-Latif, Salma M.
2017-11-01
Plant aliveness can be proven through laboratory experiments and special scientific instruments. We aim to detect the degree of animation of plants based on magnification of the small color changes in a plant's green leaves using Eulerian video magnification. Capturing the video under a controlled environment, e.g., using a tripod and direct-current light sources, reduces camera movement and minimizes light fluctuations; we aim to reduce external factors as much as possible. The acquired video is then stabilized, and a proposed algorithm is used to reduce the illumination variations. Finally, Eulerian magnification is utilized to magnify the color changes in the light-invariant video. The proposed system does not require any special-purpose instruments, as it uses a digital camera with a regular frame rate. The results of magnified color changes on both natural and plastic leaves show that live green leaves exhibit color changes, in contrast to plastic leaves. Hence, we can argue that the color changes of the leaves are due to biological operations, such as photosynthesis. To date, this is possibly the first work that focuses on visually interpreting some biological operations of plants without any special-purpose instruments.
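The core of Eulerian magnification is a temporal band-pass filter applied pixel-wise, with the filtered signal amplified and added back. The sketch below skips the spatial pyramid decomposition used in the full method; the cutoff frequencies and amplification factor are illustrative, and the sequence needs a few dozen frames for the filter to be well defined.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def magnify_color(frames, fs, lo=0.1, hi=1.0, alpha=50):
    """Simplified Eulerian magnification: band-pass each pixel's
    time series and add the amplified band back to the input.
    `frames` is a list of 8-bit images, `fs` the frame rate in Hz."""
    stack = np.stack([f.astype(float) for f in frames])    # (T, H, W, C)
    b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    band = filtfilt(b, a, stack, axis=0)                   # temporal filter
    return np.clip(stack + alpha * band, 0, 255).astype(np.uint8)
```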
Webcam camera as a detector for a simple lab-on-chip time based approach.
Wongwilai, Wasin; Lapanantnoppakhun, Somchai; Grudpan, Supara; Grudpan, Kate
2010-05-15
A modification of a webcam camera for use as a small, low-cost detector was demonstrated with a simple lab-on-chip reactor. Real-time continuous monitoring of the reaction zone could be performed. Acid-base neutralization with phenolphthalein indicator was used as a model reaction. The fading of the pink color of the indicator as the acidic solution diffused into the basic solution zone was recorded as the change in the red, green, and blue color percentages (%RGB). The change was related to acid concentration. A low-cost, portable, semi-automated analysis system was achieved.
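With OpenCV, the webcam-as-detector idea reduces to sampling a fixed region over the reaction zone each frame and logging the channel percentages over time; the camera index and ROI coordinates below are placeholders.

```python
import time
import cv2

cap = cv2.VideoCapture(0)            # camera index is a placeholder
x, y, w, h = 100, 100, 40, 40        # reaction-zone ROI (placeholder)
t0 = time.time()
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[y:y + h, x:x + w].astype(float)
    b, g, r = (roi[..., c].mean() for c in range(3))   # OpenCV is BGR
    total = b + g + r
    print("t=%6.1fs  %%R=%.1f  %%G=%.1f  %%B=%.1f"
          % (time.time() - t0, 100*r/total, 100*g/total, 100*b/total))
cap.release()
```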
Mosaic of Apollo 16 Descartes landing site taken from TV transmission
NASA Technical Reports Server (NTRS)
1972-01-01
A 360 degree field of view of the Apollo 16 Descartes landing site area composed of individual scenes taken from a color transmission made by the color RCA TV camera mounted on the Lunar Roving Vehicle. This panorama was made while the LRV was parked at the rim of Flag Crater (Station 1) during the first Apollo 16 lunar surface extravehicular activity (EVA-1) by Astronauts John W. Young and Charles M. Duke Jr. The overlay identifies the directions and the key lunar terrain features. The camera panned across the rear portion of the LRV in its 360 degree sweep.
Color Image of Phoenix Lander on Mars Surface
NASA Technical Reports Server (NTRS)
2008-01-01
This is an enhanced-color image from Mars Reconnaissance Orbiter's High Resolution Imaging Science Experiment (HiRISE) camera. It shows the Phoenix lander with its solar panels deployed on the Mars surface. The spacecraft appears more blue than it would in reality. The blue/green and red filters on the HiRISE camera were used to make this picture. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.
An Illumination-Adaptive Colorimetric Measurement Using Color Image Sensor
NASA Astrophysics Data System (ADS)
Lee, Sung-Hak; Lee, Jong-Hyub; Sohng, Kyu-Ik
An image sensor for use as a colorimeter is characterized based on the CIE standard colorimetric observer. We use the method of least squares to derive a colorimetric characterization matrix between RGB output signals and CIE XYZ tristimulus values. This paper proposes an adaptive measuring method to obtain the chromaticity of colored scenes and illumination through a 3×3 camera transfer matrix under a given illuminant. Camera RGB outputs, sensor status values, and the photoelectric characteristic are used to obtain the chromaticity. Experimental results show that the proposed method is valid in measuring performance.
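The least-squares characterization step has a standard closed form. Given N training patches with measured camera RGB and reference XYZ values, the 3×3 matrix can be fitted as below; the training data and any sensor-status compensation are outside this sketch.

```python
import numpy as np

def fit_characterization_matrix(rgb, xyz):
    """Least-squares 3x3 matrix M with XYZ ≈ M @ RGB, from N training
    patches: `rgb` and `xyz` are (N, 3) arrays of measured values."""
    sol, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)  # rgb @ sol ≈ xyz
    return sol.T                                     # xyz_est = M @ rgb

# usage: M = fit_characterization_matrix(train_rgb, train_xyz)
#        xyz_est = M @ camera_rgb_sample
```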
NASA Technical Reports Server (NTRS)
2004-01-01
The microscopic imager (circular device in center) is in clear view above the surface at Meridiani Planum, Mars, in this approximate true-color image taken by the panoramic camera on the Mars Exploration Rover Opportunity. The image was taken on the 9th sol of the rover's journey. The microscopic imager is located on the rover's instrument deployment device, or arm. The arrow is pointing to the lens of the instrument. Note the dust cover, which flips out to the left of the lens, is open. This approximated color image was created using the camera's violet and infrared filters as blue and red.
2015-12-04
The THEMIS VIS camera contains 5 filters. Data from the filters can be combined in many ways to create a false color image. This image from NASA 2001 Mars Odyssey spacecraft shows the region just west of the dune/polar cap image from earlier this week.
Investigating Mars: Arabia Terra Dunes
2018-03-23
This is a false color image of the dune field in the Arabia Terra crater. In this combination of bands, sand appears as a blue to dark blue color. In this image, the smaller areas of sand are easily visible and indicate the large amount of available material for creating dunes. Located in eastern Arabia is an unnamed crater, 120 kilometers (75 miles) across. The floor of this crater contains a large exposure of rocky material, a field of dark sand dunes, and numerous patches of what is probably fine-grained sand. The shape of the dunes indicates that prevailing winds have come from different directions over the years. The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. The Odyssey spacecraft has spent over 15 years in orbit around Mars, circling the planet more than 71,000 times. It holds the record for the longest-working spacecraft at Mars. THEMIS, the IR/VIS camera system, has collected data for the entire mission and provides images covering all seasons and lighting conditions. Over the years many features of interest have received repeated imaging, building up a suite of images covering the entire feature. From the deepest chasma to the tallest volcano, individual dunes inside craters and dune fields that encircle the north pole, channels carved by water and lava, and a variety of other features, THEMIS has imaged them all. For the next several months the image of the day will focus on the Tharsis volcanoes, the various chasmata of Valles Marineris, and the major dune fields. We hope you enjoy these images! Orbit Number: 45125 Latitude: 26.6761 Longitude: 62.9345 Instrument: VIS Captured: 2012-02-15 20:32 https://photojournal.jpl.nasa.gov/catalog/PIA22302
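The band combination these THEMIS captions describe can be sketched as follows: three single-filter grayscale images are contrast-stretched independently and assigned to the red, green, and blue channels. The synthetic bands, stretch percentiles, and function names below are illustrative assumptions:

```python
import numpy as np

def stretch(band: np.ndarray, lo_pct=2, hi_pct=98) -> np.ndarray:
    """Linear contrast stretch of one grayscale band to [0, 1]."""
    lo, hi = np.percentile(band, [lo_pct, hi_pct])
    return np.clip((band - lo) / (hi - lo), 0.0, 1.0)

def false_color(band_r, band_g, band_b) -> np.ndarray:
    """Combine three stretched single-filter images into one RGB composite."""
    return np.dstack([stretch(band_r), stretch(band_g), stretch(band_b)])

# Synthetic bands standing in for three of the five VIS filter images.
rng = np.random.default_rng(1)
bands = [rng.uniform(0, 4095, size=(64, 64)) for _ in range(3)]
composite = false_color(*bands)
print(composite.shape, composite.min(), composite.max())
```

Because each band is stretched independently, the apparent color variation is exaggerated, which is exactly the caveat the THEMIS captions note.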
A detailed comparison of single-camera light-field PIV and tomographic PIV
NASA Astrophysics Data System (ADS)
Shi, Shengxian; Ding, Junfei; Atkinson, Callum; Soria, Julio; New, T. H.
2018-03-01
This paper conducts a comprehensive study of single-camera light-field particle image velocimetry (LF-PIV) and multi-camera tomographic particle image velocimetry (Tomo-PIV). Simulation studies were first performed using synthetic light-field and tomographic particle images, extensively examining the difference between the two techniques by varying key parameters such as pixel to microlens ratio (PMR), light-field camera to Tomo-camera pixel ratio (LTPR), particle seeding density, and tomographic camera number. Simulation results indicate that single-camera LF-PIV can achieve accuracy consistent with that of multi-camera Tomo-PIV, but requires the use of an overall greater number of pixels. Experimental studies were then conducted by simultaneously measuring a low-speed jet flow with single-camera LF-PIV and four-camera Tomo-PIV systems. The experiments confirm that, given a sufficiently high pixel resolution, a single-camera LF-PIV system can indeed deliver volumetric velocity field measurements for an equivalent field of view with a spatial resolution commensurate with that of a multi-camera Tomo-PIV system, enabling accurate 3D measurements in applications where optical access is limited.
NASA Technical Reports Server (NTRS)
2004-01-01
[figure removed for brevity, see original site]
Released 28 May 2004. This image was collected February 29, 2004, during the end of the southern summer season. The local time at the location of the image was about 2 p.m. The image shows an area in the South Polar region. The THEMIS VIS camera is capable of capturing color images of the martian surface using its five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from the use of multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation. Image information: VIS instrument. Latitude -84.7, Longitude 9.3 East (350.7 West). 38 meter/pixel resolution. Note: this THEMIS visual image has not been radiometrically or geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
Mosaic of Apollo 16 Descartes landing site taken from TV transmission
NASA Technical Reports Server (NTRS)
1972-01-01
A 360 degree field of view of the Apollo 16 Descartes landing site area composed of individual scenes taken from a color transmission made by the color RCA TV camera mounted on the Lunar Roving Vehicle. This panorama was made while the LRV was parked at the rim of North Ray crater (Stations 11 and 12) during the third Apollo 16 lunar surface extravehicular activity (EVA-3) by Astronauts John W. Young and Charles M. Duke Jr. The overlay identifies the directions and the key lunar terrain features. The camera panned across the rear portion of the LRV in its 360 degree sweep. Note Young and Duke walking along the edge of the crater in one of the scenes. The TV camera was remotely controlled from a console in the Mission Control Center.
Compact full-motion video hyperspectral cameras: development, image processing, and applications
NASA Astrophysics Data System (ADS)
Kanaev, A. V.
2015-10-01
The emergence of spectral pixel-level color filters has enabled the development of hyperspectral Full Motion Video (FMV) sensors operating in visible (EO) and infrared (IR) wavelengths. This new class of hyperspectral cameras opens broad possibilities for military and industrial use. Indeed, such cameras are able to classify materials as well as detect and track spectral signatures continuously in real time while simultaneously providing an operator the benefit of enhanced-discrimination color video. Supporting these extensive capabilities requires significant computational processing of the collected spectral data. In general, two processing streams are envisioned for mosaic array cameras. The first is spectral computation that provides essential spectral content analysis, e.g. detection or classification. The second is presentation of the video to an operator that can offer the best display of the content depending on the performed task, e.g. providing spatial resolution enhancement or color coding of the spectral analysis. These processing streams can be executed in parallel or they can utilize each other's results. Spectral analysis algorithms have been developed extensively; however, demosaicking of more than three equally sampled spectral bands has scarcely been explored. We present a unique approach to demosaicking based on multi-band super-resolution and show the trade-off between spatial resolution and spectral content. Using imagery collected with the developed 9-band SWIR camera, we demonstrate several of its concepts of operation, including detection and tracking. We also compare the demosaicking results to the results of multi-frame super-resolution as well as to combined multi-frame and multiband processing.
NASA Technical Reports Server (NTRS)
2005-01-01
This false color image of Saturn's moon Mimas reveals variation in either the composition or texture across its surface. During its approach to Mimas on Aug. 2, 2005, the Cassini spacecraft narrow-angle camera obtained multi-spectral views of the moon from a range of 228,000 kilometers (142,500 miles). This image is a color composite of narrow-angle ultraviolet, green, infrared and clear filter images, which have been specially processed to accentuate subtle changes in the spectral properties of Mimas' surface materials. To create this view, three color images (ultraviolet, green and infrared) were combined with a single black and white picture that isolates and maps regional color differences to create the final product. Shades of blue and violet in the image at the right are used to identify surface materials that are bluer in color and have a weaker infrared brightness than average Mimas materials, which are represented by green. Herschel crater, a 140-kilometer-wide (88-mile) impact feature with a prominent central peak, is visible in the upper right of the image. The unusual bluer materials are seen to broadly surround Herschel crater. However, the bluer material is not uniformly distributed in and around the crater. Instead, it appears to be concentrated on the outside of the crater and more to the west than to the north or south. The origin of the color differences is not yet understood. It may represent ejecta material that was excavated from inside Mimas when the Herschel impact occurred. The bluer color of these materials may be caused by subtle differences in the surface composition or the sizes of grains making up the icy soil. This image was obtained when the Cassini spacecraft was above 25 degrees south latitude, 134 degrees west longitude. The Sun-Mimas-spacecraft angle was 45 degrees and north is at the top. The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the mission for NASA's Science Mission Directorate, Washington, D.C. The Cassini orbiter and its two onboard cameras were designed, developed and assembled at JPL. The imaging operations center is based at the Space Science Institute in Boulder, Colo. For more information about the Cassini-Huygens mission visit http://saturn.jpl.nasa.gov . The Cassini imaging team homepage is at http://ciclops.org .
Payload topography camera of Chang'e-3
NASA Astrophysics Data System (ADS)
Yu, Guo-Bin; Liu, En-Hai; Zhao, Ru-Jin; Zhong, Jie; Zhou, Xiang-Dong; Zhou, Wu-Lin; Wang, Jin; Chen, Yuan-Pei; Hao, Yong-Jie
2015-11-01
Chang'e-3 was China's first soft-landing lunar probe to achieve a successful roving exploration on the Moon. A topography camera functioning as the lander's “eye” was one of the main scientific payloads installed on the lander. It was composed of a camera probe, an electronic component that performed image compression, and a cable assembly. Its exploration mission was to obtain optical images of the lunar topography in the landing zone for investigation and research. It also observed rover movement on the lunar surface and captured pictures of the lander and rover. After starting up successfully, the topography camera obtained static images and video of rover movement from different directions, 360° panoramic pictures of the lunar surface around the lander from multiple angles, and numerous pictures of the Earth. All images of the rover, lunar surface, and the Earth were clear, and those of the Chinese national flag were recorded in true color. This paper describes the exploration mission, system design, working principle, quality assessment of image compression, and color correction of the topography camera. Finally, test results from the lunar surface are provided to serve as a reference for scientific data processing and application.
2015-09-24
This cylindrical projection map of Pluto, in enhanced, extended color, is the most detailed color map of Pluto ever made by NASA's New Horizons mission. It uses recently returned color imagery from the New Horizons Ralph camera, draped onto a base map of images from the spacecraft's Long Range Reconnaissance Imager (LORRI). The map can be zoomed in to reveal exquisite detail with high scientific value. Color variations have been enhanced to bring out subtle differences. Colors used in this map are the blue, red, and near-infrared filter channels of the Ralph instrument. http://photojournal.jpl.nasa.gov/catalog/PIA19956
Investigating Mars: Nili and Meroe Paterae
2017-10-27
This false color image covers the region from Nili Patera at the top of the frame to the dunes near Meroe Patera (which is off the bottom of the image). High resolution imaging by other spacecraft has revealed that the dunes in this region are moving. Winds are blowing the dunes across a rough surface of regional volcanic lava flows. The paterae are calderas on the volcanic complex called Syrtis Major Planum. Dunes are found in both Nili and Meroe Paterae and in the region between the two calderas. The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. The Odyssey spacecraft has spent over 15 years in orbit around Mars, circling the planet more than 69,000 times. It holds the record for the longest-working spacecraft at Mars. THEMIS, the IR/VIS camera system, has collected data for the entire mission and provides images covering all seasons and lighting conditions. Over the years many features of interest have received repeated imaging, building up a suite of images covering the entire feature. From the deepest chasma to the tallest volcano, individual dunes inside craters and dune fields that encircle the north pole, channels carved by water and lava, and a variety of other features, THEMIS has imaged them all. For the next several months the image of the day will focus on the Tharsis volcanoes, the various chasmata of Valles Marineris, and the major dune fields. We hope you enjoy these images! Orbit Number: 61810 Latitude: 8.37503 Longitude: 67.4659 Instrument: VIS Captured: 2015-11-20 04:48 https://photojournal.jpl.nasa.gov/catalog/PIA22015
Investigating Mars: Moreux Crater
2017-11-22
This image of Moreux Crater shows the western floor of the crater and the multitude of sand dunes found there. A large sand sheet with surface dune forms is located at the top of the image, and smaller individual dunes stretch from the bottom of the sand sheet to the bottom of the image. In this false color image, sand dunes appear "blue". Moreux Crater is located in northern Arabia Terra and has a diameter of 138 kilometers. The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. The Odyssey spacecraft has spent over 15 years in orbit around Mars, circling the planet more than 69,000 times. It holds the record for the longest-working spacecraft at Mars. THEMIS, the IR/VIS camera system, has collected data for the entire mission and provides images covering all seasons and lighting conditions. Over the years many features of interest have received repeated imaging, building up a suite of images covering the entire feature. From the deepest chasma to the tallest volcano, individual dunes inside craters and dune fields that encircle the north pole, channels carved by water and lava, and a variety of other features, THEMIS has imaged them all. For the next several months the image of the day will focus on the Tharsis volcanoes, the various chasmata of Valles Marineris, and the major dune fields. We hope you enjoy these images! Orbit Number: 10384 Latitude: 41.841 Longitude: 44.087 Instrument: VIS Captured: 2004-04-17 10:07 https://photojournal.jpl.nasa.gov/catalog/PIA22035
Investigating Mars: Nili and Meroe Paterae
2017-10-18
This is a false color image of part of the Nili Patera dune field. High resolution imaging by other spacecraft has revealed that the dunes in this region are moving. Winds are blowing the dunes across a rough surface of regional volcanic lava flows. The paterae are calderas on the volcanic complex called Syrtis Major Planum. Dunes are found in both Nili and Meroe Paterae and in the region between the two calderas. The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. The Odyssey spacecraft has spent over 15 years in orbit around Mars, circling the planet more than 69,000 times. It holds the record for the longest-working spacecraft at Mars. THEMIS, the IR/VIS camera system, has collected data for the entire mission and provides images covering all seasons and lighting conditions. Over the years many features of interest have received repeated imaging, building up a suite of images covering the entire feature. From the deepest chasma to the tallest volcano, individual dunes inside craters and dune fields that encircle the north pole, channels carved by water and lava, and a variety of other features, THEMIS has imaged them all. For the next several months the image of the day will focus on the Tharsis volcanoes, the various chasmata of Valles Marineris, and the major dune fields. We hope you enjoy these images! Orbit Number: 19306 Latitude: 8.80756 Longitude: 67.4616 Instrument: VIS Captured: 2006-04-22 00:12 https://photojournal.jpl.nasa.gov/catalog/PIA22008
Investigating Mars: Moreux Crater
2017-11-23
This image of Moreux Crater shows the eastern side of the central peak, as well as the nearby sand dunes. In this false color image, sand dunes appear "blue". Smaller patches of blue are located on the central peak materials and indicate where surface winds have moved fine materials on and off the peak deposits. The pitted and curvilinear morphology of the central peak deposits has been interpreted as having formed by glacial activity. Moreux Crater is located in northern Arabia Terra and has a diameter of 138 kilometers. The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. The Odyssey spacecraft has spent over 15 years in orbit around Mars, circling the planet more than 69,000 times. It holds the record for the longest-working spacecraft at Mars. THEMIS, the IR/VIS camera system, has collected data for the entire mission and provides images covering all seasons and lighting conditions. Over the years many features of interest have received repeated imaging, building up a suite of images covering the entire feature. From the deepest chasma to the tallest volcano, individual dunes inside craters and dune fields that encircle the north pole, channels carved by water and lava, and a variety of other features, THEMIS has imaged them all. For the next several months the image of the day will focus on the Tharsis volcanoes, the various chasmata of Valles Marineris, and the major dune fields. We hope you enjoy these images! Orbit Number: 12518 Latitude: 41.8223 Longitude: 44.7638 Instrument: VIS Captured: 2004-10-10 02:55 https://photojournal.jpl.nasa.gov/catalog/PIA22126
Investigating Mars: Nili and Meroe Paterae
2017-10-19
This is a false color image of part of the Nili Patera dune field. High resolution imaging by other spacecraft has revealed that the dunes in this region are moving. Winds are blowing the dunes across a rough surface of regional volcanic lava flows. The paterae are calderas on the volcanic complex called Syrtis Major Planum. Dunes are found in both Nili and Meroe Paterae and in the region between the two calderas. The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. The Odyssey spacecraft has spent over 15 years in orbit around Mars, circling the planet more than 69,000 times. It holds the record for the longest-working spacecraft at Mars. THEMIS, the IR/VIS camera system, has collected data for the entire mission and provides images covering all seasons and lighting conditions. Over the years many features of interest have received repeated imaging, building up a suite of images covering the entire feature. From the deepest chasma to the tallest volcano, individual dunes inside craters and dune fields that encircle the north pole, channels carved by water and lava, and a variety of other features, THEMIS has imaged them all. For the next several months the image of the day will focus on the Tharsis volcanoes, the various chasmata of Valles Marineris, and the major dune fields. We hope you enjoy these images! Orbit Number: 48021 Latitude: 8.95091 Longitude: 67.3366 Instrument: VIS Captured: 2012-10-11 05:22 https://photojournal.jpl.nasa.gov/catalog/PIA22009
Investigating Mars: Moreux Crater
2017-11-24
This image of Moreux Crater shows the highest elevations of the central peak, as well as the nearby sand dunes. In this false color image, sand dunes appear "blue". Smaller patches of blue are located on the central peak materials and indicate where surface winds have moved fine materials on and off the peak deposits. The pitted and curvilinear morphology of the central peak deposits has been interpreted as having formed by glacial activity. Moreux Crater is located in northern Arabia Terra and has a diameter of 138 kilometers. The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. The Odyssey spacecraft has spent over 15 years in orbit around Mars, circling the planet more than 69,000 times. It holds the record for the longest-working spacecraft at Mars. THEMIS, the IR/VIS camera system, has collected data for the entire mission and provides images covering all seasons and lighting conditions. Over the years many features of interest have received repeated imaging, building up a suite of images covering the entire feature. From the deepest chasma to the tallest volcano, individual dunes inside craters and dune fields that encircle the north pole, channels carved by water and lava, and a variety of other features, THEMIS has imaged them all. For the next several months the image of the day will focus on the Tharsis volcanoes, the various chasmata of Valles Marineris, and the major dune fields. We hope you enjoy these images! Orbit Number: 46786 Latitude: 41.7667 Longitude: 44.3482 Instrument: VIS Captured: 2012-07-01 13:41 https://photojournal.jpl.nasa.gov/catalog/PIA22127
Colorimetric calibration of wound photography with off-the-shelf devices
NASA Astrophysics Data System (ADS)
Bala, Subhankar; Sirazitdinova, Ekaterina; Deserno, Thomas M.
2017-03-01
Digital cameras are now widely used for photographic documentation in the medical sciences. However, the color reproducibility of the same object suffers under different illumination and lighting conditions. This variation in color representation is problematic when the images are used for segmentation and measurements based on color thresholds. In this paper, motivated by photographic follow-up of chronic wounds, we assess the impact of (i) gamma correction, (ii) white balancing, (iii) background unification, and (iv) reference card-based color correction. Automatic gamma correction and white balancing are applied to support the calibration procedure, where gamma correction is a nonlinear color transform. For unevenly illuminated images, non-uniform illumination correction is applied. In the last step, we apply colorimetric calibration using a reference color card of 24 patches with known colors. A lattice detection algorithm is used for locating the card. The least squares algorithm is applied for affine color calibration in the RGB model. We have tested the algorithm on images with seven different types of illumination, with and without flash, using three different off-the-shelf cameras including smartphones. We analyzed the spread of the resulting color values of a selected color patch before and after applying the calibration. Additionally, we checked the individual contribution of each step of the whole calibration process. Using all steps, we were able to achieve up to an 81% reduction in the standard deviation of color patch values in the resulting images compared to the original images. This supports manual as well as automatic quantitative wound assessment with off-the-shelf devices.
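The card-based step lends itself to a compact sketch: with the 24 patch colors located and averaged, an affine transform (3×3 matrix plus offset) is fitted by least squares in RGB, as the abstract describes. The patch values below are synthetic stand-ins, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Known reference colors of the 24-patch card and the colors measured
# in the photograph (here simulated with a per-channel gain and offset).
reference = rng.uniform(0, 255, size=(24, 3))
measured = reference @ np.diag([0.9, 1.1, 0.95]) + np.array([5.0, -3.0, 8.0])

# Augment with a column of ones so the offset is fitted with the matrix.
A = np.hstack([measured, np.ones((24, 1))])
T, *_ = np.linalg.lstsq(A, reference, rcond=None)  # 4 x 3 affine transform

def calibrate(pixels: np.ndarray) -> np.ndarray:
    """Apply the fitted affine color correction to an (N, 3) array of RGB pixels."""
    return np.hstack([pixels, np.ones((len(pixels), 1))]) @ T

print(np.abs(calibrate(measured) - reference).max())  # ~0 on the training patches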
Small Unmanned Aerial Vehicles: DHS’s Answer to Border Surveillance Requirements
2013-03-01
… of more than 4,000 illegal aliens, including the seizure of more than 15,000 pounds of marijuana. In addition to the Predator UAVs being… payload includes two color video cameras, an infrared camera that offers night vision capability, and synthetic aperture radar that provides high…
1996-01-01
…used to locate and characterize a magnetic dipole source, and this finding accelerated the development of superconducting tensor gradiometers for… superconducting magnetic field gradiometer, two-color infrared camera, synthetic aperture radar, and a visible spectrum camera. The combination of these…
Identifying People with Soft-Biometrics at Fleet Week
2013-03-01
…onboard sensors. This included: Color Camera: located in the right eye, Octavia stored 640x480 RGB images at ~4 Hz from a Point Grey Firefly camera… Face Detection: the Fleet Week experiments demonstrated the potential of soft biometrics for recognition, but all of the existing algorithms currently…
Ultraviolet Viewing with a Television Camera.
ERIC Educational Resources Information Center
Eisner, Thomas; And Others
1988-01-01
Reports on a portable video color camera that is fully suited for seeing ultraviolet images and offers some expanded viewing possibilities. Discusses the basic technique, specialized viewing, and the instructional value of this system of viewing reflectance patterns of flowers and insects that are invisible to the unaided eye. (CW)
Real-time people counting system using a single video camera
NASA Astrophysics Data System (ADS)
Lefloch, Damien; Cheikh, Faouzi A.; Hardeberg, Jon Y.; Gouton, Pierre; Picot-Clemente, Romain
2008-02-01
There is growing interest in video-based solutions for people monitoring and counting in business and security applications. Compared to classic sensor-based solutions, video-based ones allow more versatile functionality and improved performance at lower cost. In this paper, we propose a real-time system for people counting based on a single low-end, non-calibrated video camera. The two main challenges addressed in this paper are robust estimation of the scene background and of the number of real persons in merge-split scenarios. The latter is likely to occur whenever multiple persons move closely together, e.g. in shopping centers. Several persons may be considered a single person by automatic segmentation algorithms, due to occlusions or shadows, leading to under-counting. Therefore, to account for noise, illumination changes, and changes in static objects, background subtraction is performed using an adaptive background model (updated over time based on motion information) and automatic thresholding. Furthermore, post-processing of the segmentation results is performed in the HSV color space to remove shadows. Moving objects are tracked using an adaptive Kalman filter, allowing a robust estimation of the objects' future positions even under heavy occlusion. The system is implemented in Matlab and gives encouraging results even at high frame rates. Experimental results obtained on the PETS2006 datasets are presented at the end of the paper.
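A compressed sketch of the front end of such a counting pipeline, substituting OpenCV's stock MOG2 model for the paper's custom adaptive background model; the input file, shadow handling, and minimum blob area are assumptions for illustration:

```python
import cv2

cap = cv2.VideoCapture("entrance.avi")            # assumed input video
bg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                         # adaptive background model
    mask[mask == 127] = 0                          # MOG2 marks shadow pixels as 127
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    people = [c for c in contours if cv2.contourArea(c) > 500]  # min blob area
    print("candidate persons in frame:", len(people))
cap.release()
```

In the paper's full system, these candidate blobs would then be fed to the Kalman tracker and the merge-split logic that resolves occluding persons.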
Lights, Camera, Spectroscope! The Basics of Spectroscopy Disclosed Using a Computer Screen
ERIC Educational Resources Information Center
Garrido-Gonza´lez, Jose´ J.; Trillo-Alcala´, María; Sa´nchez-Arroyo, Antonio J.
2018-01-01
The generation of secondary colors in digital devices by means of the additive red, green, and blue color model (RGB) can be a valuable way to introduce students to the basics of spectroscopy. This work has been focused on the spectral separation of secondary colors of light emitted by a computer screen into red, green, and blue bands, and how the…
NASA Astrophysics Data System (ADS)
Tanada, Jun
1992-08-01
Ikegami has been involved in broadcast equipment ever since it was established as a company. In conjunction with NHK it has brought forth countless television cameras, from black-and-white cameras to color cameras, HDTV cameras, and special-purpose cameras. In the early days of HDTV (high-definition television, also known as "High Vision") cameras, the specifications were different from those for the cameras of the present-day system, and cameras using all kinds of components, having different arrangements of components, and having different appearances were developed into products, with time spent on experimentation, design, fabrication, adjustment, and inspection. But recently the know-how built up thus far in components, printed circuit boards, and wiring methods has been incorporated in camera fabrication, making it possible to make HDTV cameras by methods similar to those for the present system. In addition, more efficient production, lower costs, and better after-sales service are being achieved by using the same circuits, components, mechanism parts, and software for both HDTV cameras and cameras that operate by the present system.
Spirit Beholds Bumpy Boulder (False Color)
NASA Technical Reports Server (NTRS)
2006-01-01
As NASA's Mars Exploration Rover Spirit began collecting images for a 360-degree panorama of new terrain, the rover captured this view of a dark boulder with an interesting surface texture. The boulder sits about 40 centimeters (16 inches) tall on Martian sand about 5 meters (16 feet) away from Spirit. It is one of many dark, volcanic rock fragments -- many pocked with rounded holes called vesicles -- littering the slope of 'Low Ridge.' The rock surface facing the rover is similar in appearance to the surface texture on the outside of lava flows on Earth. Spirit took this false-color image with the panoramic camera on the rover's 810th sol, or Martian day, of exploring Mars (April 13, 2006). This image is a false-color rendering using the camera's 753-nanometer, 535-nanometer, and 432-nanometer filters.
NASA Technical Reports Server (NTRS)
2004-01-01
This image mosaic illustrates how scientists use the color calibration targets (upper left) located on both Mars Exploration Rovers to fine-tune the rovers' sense of color. In the center, spectra, or light signatures, of the colored chips on the targets, acquired in the laboratory, are shown as lines. Actual data from Mars Exploration Rover Spirit's panoramic camera are mapped on top of these lines as dots. The plot demonstrates that the observed colors of Mars match the colors of the chips, and thus approximate the red planet's true colors. This finding is further corroborated by the picture taken on Mars of the calibration target, which shows the colored chips as they would appear on Earth.
1996-01-29
In this false color image of Neptune, objects that are deep in the atmosphere are blue, while those at higher altitudes are white. The image was taken by Voyager 2 wide-angle camera through an orange filter and two different methane filters. http://photojournal.jpl.nasa.gov/catalog/PIA00051
Utilization of an Android-based Smartphone to Support a Handmade Spectrophotometer: A Preliminary Study
NASA Astrophysics Data System (ADS)
Ujiningtyas, R.; Apriliani, E.; Yohana, I.; Afrillianti, L.; Hikmah, N.; Kurniawan, C.
2018-04-01
Visible spectrophotometry is a powerful technique in chemistry: we can identify chemical species based on their specific color and then determine the amount of a species using a spectrophotometer. However, the availability of visible spectrophotometers is still limited, particularly in education, which restricts students' hands-on experience with the instrumentation. On the other hand, communication technology creates an opportunity for students to exploit their smartphones' features, mainly the camera. The objective of this research is to make an application that utilizes the camera as a detector for a handmade visible spectrophotometer. The software was written for Android, and we named it Spectrophone®. The spectrophotometer consists of an acrylic body, a sample compartment, and a light source (a USB LED lamp powered by a 6600 mAh battery). Before reaching the sample, the light is filtered using colored mica plastic. The Spectrophone® app utilizes the camera to detect color based on its RGB composition. Differently colored solutions show different RGB compositions depending on concentration and the specific absorbance wavelength. One color channel (R, G, or B) is then chosen and converted to an absorbance using -log(Cs/Co), where Cs and Co are the color values of the sample and blank, respectively. A calibration curve of methylene blue was measured. For the red (R) channel, the regression is not linear (R2 = 0.78) compared with the result of a UV-Vis spectrophotometer, model Spectroquant Pharo 300 (R2 = 0.8053). This result shows that the Spectrophone® still needs to be evaluated and corrected. One problem we identified is that the sampling spot used to read the RGB composition is too wide, which affects the color reading. We will fix this problem and subsequently apply the Spectrophone® on a wider scale.
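A minimal sketch of the channel-to-absorbance conversion described above, assuming mean 8-bit channel readings have already been extracted from the camera frames; the function name and sample values are illustrative:

```python
import math

def absorbance(sample_value: float, blank_value: float) -> float:
    """Convert a mean color-channel reading to absorbance, A = -log10(Cs/Co)."""
    if sample_value <= 0 or blank_value <= 0:
        raise ValueError("channel values must be positive")
    return -math.log10(sample_value / blank_value)

# Illustrative red-channel readings for a methylene blue dilution series.
blank_red = 220.0                      # blank reading (Co)
sample_reds = [180.0, 150.0, 120.0]    # increasing concentration (Cs)
for cs in sample_reds:
    print(f"Cs = {cs:5.1f} -> A = {absorbance(cs, blank_red):.3f}")
```

Plotting these absorbances against known concentrations gives the calibration curve whose linearity the abstract evaluates.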
People counting and re-identification using fusion of video camera and laser scanner
NASA Astrophysics Data System (ADS)
Ling, Bo; Olivera, Santiago; Wagley, Raj
2016-05-01
We present a system for people counting and re-identification. It can be used by transit and homeland security agencies. Under an FTA SBIR program, we have developed a preliminary system for transit passenger counting and re-identification using a laser scanner and video camera. The laser scanner is used to identify the locations of a passenger's head and shoulders in an image, a challenging task in a crowded environment. It can also estimate passenger height without prior calibration. Various color models have been applied to form color signatures. Finally, using a statistical fusion and classification scheme, passengers are counted and re-identified.
NASA Technical Reports Server (NTRS)
Pelletier, R. E.; Hudnall, W. H.
1987-01-01
The use of Space Shuttle Large Format Camera (LFC) color, IR/color, and B&W images in large-scale soil mapping is discussed and illustrated with sample photographs from STS 41-G (October 1984). Consideration is given to the characteristics of the film types used; the photographic scales available; geometric and stereoscopic factors; and image interpretation and classification for soil-type mapping (detecting both sharp and gradual boundaries), soil parent material, topographic and hydrologic assessment, natural-resources inventory, crop-type identification, and stress analysis. It is suggested that LFC photography can play an important role, filling the gap between aerial and satellite remote sensing.
NASA Technical Reports Server (NTRS)
Dillman, R. D.; Eav, B. B.; Baldwin, R. R.
1984-01-01
The Office of Space and Terrestrial Applications-3 payload, scheduled for flight on STS Mission 17, consists of four earth-observation experiments. The Feature Identification and Location Experiment-1 will spectrally sense and numerically classify the earth's surface into water, vegetation, bare earth, and ice/snow/cloud-cover, by means of spectra ratio techniques. The Measurement of Atmospheric Pollution from Satellite experiment will measure CO distribution in the middle and upper troposphere. The Imaging Camera-B uses side-looking SAR to create two-dimensional images of the earth's surface. The Large Format Camera/Attitude Reference System will collect metric quality color, color-IR, and black-and-white photographs for topographic mapping.
NASA Technical Reports Server (NTRS)
Barnes, J. C. (Principal Investigator); Smallwood, M. D.; Cogan, J. L.
1975-01-01
The author has identified the following significant results. Of the four black and white S190A camera stations, snowcover is best defined in the two visible spectral bands, due in part to their better resolution. The overall extent of the snow can be mapped more precisely, and the snow within shadow areas is better defined in the visible bands. Of the two S190A color products, the aerial color photography is the better. Because of the contrast in color between snow and snow-free terrain and the better resolution, this product is concluded to be the best overall of the six camera stations for detecting and mapping snow. Overlapping frames permit stereo viewing, which aids in distinguishing clouds from the underlying snow. Because of the greater spatial resolution of the S190B earth terrain camera, areal snow extent can be mapped in greater detail than from the S190A photographs. The snow line elevation measured from the S190A and S190B photographs is reasonable compared to the meager ground truth data available.
ColorChecker at the beach: dangers of sunburn and glare
NASA Astrophysics Data System (ADS)
McCann, John
2014-01-01
In High-Dynamic-Range (HDR) imaging, optical veiling glare sets the limits of accurate scene information recorded by a camera. But, what happens at the beach? Here we have a Low-Dynamic-Range (LDR) scene with maximal glare. Can we calibrate a camera at the beach and not be burnt? We know that we need sunscreen and sunglasses, but what about our cameras? The effect of veiling glare is scene-dependent. When we compare RAW camera digits with spotmeter measurements we find significant differences. As well, these differences vary, depending on where we aim the camera. When we calibrate our camera at the beach we get data that is valid for only that part of that scene. Camera veiling glare is an issue in LDR scenes in uniform illumination with a shaded lens.
Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor.
Kim, Heegwang; Park, Jinho; Park, Hasil; Paik, Joonki
2017-12-09
Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system.
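The final reconstruction step in defogging methods of this kind typically inverts the standard haze model I = J·t + A·(1 − t); the sketch below assumes that model, with the clamp t0 and synthetic inputs as illustrative choices rather than values from the paper:

```python
import numpy as np

def defog(image: np.ndarray, transmission: np.ndarray, airlight: np.ndarray,
          t0: float = 0.1) -> np.ndarray:
    """Invert the haze model I = J*t + A*(1 - t) to recover scene radiance J."""
    t = np.clip(transmission, t0, 1.0)[..., None]   # broadcast over color channels
    J = (image - airlight) / t + airlight
    return np.clip(J, 0.0, 1.0)

# Illustrative inputs: a foggy RGB image in [0, 1], its transmission map,
# and the atmospheric light (here a near-white value).
rng = np.random.default_rng(3)
foggy = rng.uniform(0.5, 1.0, size=(48, 64, 3))
t_map = rng.uniform(0.2, 0.9, size=(48, 64))
A = np.array([0.95, 0.95, 0.97])
restored = defog(foggy, t_map, A)
print(restored.shape)
```

In the paper's pipeline, the transmission map fed to this step comes from the stereo disparity estimate and is refined iteratively before the inversion.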
NASA Technical Reports Server (NTRS)
1982-01-01
The Model II Multispectral Camera is an advanced aerial camera that provides optimum enhancement of a scene by recording spectral signatures of ground objects only in narrow, preselected bands of the electromagnetic spectrum. Its photos have applications in such areas as agriculture, forestry, water pollution investigations, soil analysis, geologic exploration, water depth studies, and camouflage detection. The target scene is simultaneously photographed in four separate spectral bands. Using a multispectral viewer, such as their Model 75, Spectral Data creates a color image from the black-and-white positives taken by the camera. With this optical image analysis unit, all four bands are superimposed in accurate registration and illuminated with combinations of blue, green, red, and white light. The best color combination for displaying the target object is selected and printed. Spectral Data Corporation produces several types of remote sensing equipment and also provides aerial survey, image processing and analysis, and a number of other remote sensing services.
Selecting a digital camera for telemedicine.
Patricoski, Chris; Ferguson, A Stewart
2009-06-01
The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.
Web Camera Use in Developing Biology, Molecular Biology and Biochemistry Laboratories
ERIC Educational Resources Information Center
Ogren, Paul J.; Deibel, Michael; Kelly, Ian; Mulnix, Amy B.; Peck, Charlie
2004-01-01
The use of a network-ready color camera, primarily marketed as a security device, is described for experiments in developmental biology, genetics, and biochemistry laboratories and in special student research projects. Acquiring, analyzing, and archiving images is very important in microscopy, electrophoresis and…
Device for wavelength-selective imaging
Frangioni, John V.
2010-09-14
An imaging device captures both a visible light image and a diagnostic image, the diagnostic image corresponding to emissions from an imaging medium within the object. The visible light image (which may be color or grayscale) and the diagnostic image may be superimposed to display regions of diagnostic significance within a visible light image. A number of imaging media may be used according to an intended application for the imaging device, and an imaging medium may have wavelengths above, below, or within the visible light spectrum. The devices described herein may be advantageously packaged within a single integrated device or other solid state device, and/or employed in an integrated, single-camera medical imaging system, as well as many non-medical imaging systems that would benefit from simultaneous capture of visible-light wavelength images along with images at other wavelengths.
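The superimposition the patent describes can be sketched as a thresholded alpha blend of the diagnostic signal onto the visible-light image; the green pseudo-color, the threshold, and the normalized synthetic inputs below are illustrative assumptions:

```python
import numpy as np

def overlay(visible: np.ndarray, diagnostic: np.ndarray, alpha: float = 0.4,
            threshold: float = 0.1) -> np.ndarray:
    """Superimpose a single-channel diagnostic image on a visible-light RGB image.
    Pixels where the diagnostic signal is below the threshold are left untouched."""
    colorized = np.zeros_like(visible)
    colorized[..., 1] = diagnostic                  # render emissions in green
    mask = (diagnostic > threshold)[..., None]
    return np.where(mask, (1 - alpha) * visible + alpha * colorized, visible)

rng = np.random.default_rng(4)
vis = rng.uniform(0, 1, size=(32, 32, 3))           # visible-light image in [0, 1]
diag = rng.uniform(0, 1, size=(32, 32))             # diagnostic emission image
print(overlay(vis, diag).shape)
```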
Single-shot dual-wavelength in-line and off-axis hybrid digital holography
NASA Astrophysics Data System (ADS)
Wang, Fengpeng; Wang, Dayong; Rong, Lu; Wang, Yunxin; Zhao, Jie
2018-02-01
We propose an in-line and off-axis hybrid holographic real-time imaging technique. The in-line and off-axis digital holograms are generated simultaneously by two lasers with different wavelengths, and they are recorded using a color camera with a single shot. The reconstruction is carried out using an iterative algorithm in which the initial input is designed to include the intensity of the in-line hologram and the approximate phase distribution obtained from the off-axis hologram. In this way, the complex field in the object plane output by the iterative procedure yields higher-quality amplitude and phase images than traditional iterative phase retrieval. The performance of the technique has been demonstrated by acquiring the amplitude and phase images of a green lacewing's wing and a living moon jellyfish.
Prism-based single-camera system for stereo display
NASA Astrophysics Data System (ADS)
Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa
2016-06-01
This paper combines a prism with a single camera and puts forward a low-cost method of stereo imaging. First, according to the principles of geometrical optics, we deduce the relationship between the prism single-camera system and a dual-camera system, and according to the principles of binocular vision we deduce the relationship between binoculars and a dual-camera system. We can thus establish the relationship between the prism single-camera system and binoculars and obtain the positional relations of prism, camera, and object that give the best stereo display. Finally, using the active shutter stereo glasses of NVIDIA Company, we realize the three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can use the prism single-camera system to simulate the various observation manners of human eyes. A stereo imaging system designed by the proposed method can faithfully restore the 3-D shape of the photographed object.
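For a rectified image pair, the binocular relationship such systems build on reduces to the standard depth equation Z = f·B/d; a worked sketch with assumed values for the prism rig's equivalent baseline, focal length, and measured disparity (none of these numbers come from the paper):

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Standard rectified stereo relation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Assumed equivalent parameters for a prism single-camera rig.
f = 1200.0      # focal length expressed in pixels
B = 0.06        # equivalent baseline in meters created by the prism
for d in (10.0, 20.0, 40.0):
    print(f"disparity {d:5.1f} px -> depth {depth_from_disparity(f, B, d):.2f} m")
```

The same relation explains why the prism geometry matters: the effective baseline B it creates sets the depth resolution available to the single camera.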
Monitoring environmental change with color slides
Arthur W. Magill
1989-01-01
Monitoring human impact on outdoor recreation sites and view landscapes is necessary to evaluate influences which may require corrective action and to determine if management is achieving desired goals. An inexpensive method to monitor environmental change is to establish camera points and use repeat color slides. Successful monitoring from slides requires the observer...
1986-01-14
Range: 12.9 million kilometers (8.0 million miles). P-29468C This false color Voyager photograph of Uranus shows a discrete cloud seen as a bright streak near the planet's limb. The cloud visible here is the most prominent feature seen in a series of Voyager images designed to track atmospheric motions. The occasional donut-shaped features, including one at the bottom, are shadows cast by dust on the camera optics. The picture is a highly processed composite of three images. The processing necessary to bring out the faint features on the planet also brings out these camera blemishes. The three separate images used were shot through violet, blue, and orange filters. Each color image showed the cloud to a different degree; because they were not exposed at the same time, the images were processed to provide a good spatial match. In a true color image, the cloud would be barely discernible; the false color helps to bring out additional details. The different colors imply variations in vertical structure, but as of yet it is not possible to be specific about such differences. One possibility is that the Uranian atmosphere may contain smog-like constituents, in which case some color differences may represent differences in how these molecules are distributed.
How to characterize terrains on 4 Vesta using Dawn Framing Camera color bands?
NASA Astrophysics Data System (ADS)
Le Corre, Lucille; Reddy, Vishnu; Nathues, Andreas; Cloutis, Edward A.
2011-12-01
We present methods for terrain classification on 4 Vesta using Dawn Framing Camera (FC) color information derived from laboratory spectra of HED meteorites and other Vesta-related assemblages. Color and spectral parameters have been derived using publicly available spectra of these analog materials to identify the best criteria for distinguishing various terrains. We list the relevant parameters for identifying eucrites, diogenites, mesosiderites, pallasites, clinopyroxenes, and olivine + orthopyroxene mixtures using Dawn FC color cubes. Pseudo Band I minima derived by fitting a low-order polynomial to the color data are found to be useful for extracting the pyroxene chemistry. Our investigation suggests a good correlation (R2 = 0.88) between laboratory measured ferrosilite (Fs) pyroxene chemistry and that derived from pseudo Band I minima using equations from Burbine et al. (Burbine, T.H., Buchanan, P.C., Dolkar, T., Binzel, R.P. [2009]. Meteoritics & Planetary Science 44, 1331-1341). The pyroxene chemistry information is a complementary terrain classification capability besides the color ratios. We also investigated the effects of exogenous material (i.e., CM2 carbonaceous chondrites) on the spectra of HEDs using laboratory mixtures of these materials. Our results are the basis for an automated software pipeline that will allow us to classify terrains on 4 Vesta efficiently.
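A sketch of the pseudo Band I extraction named above: fit a low-order polynomial to reflectance sampled at a few FC band centers near 1 µm and take its analytic minimum as the band center. The band centers and reflectance values below are synthetic, not Dawn data:

```python
import numpy as np

# Illustrative band centers (micrometers) around the 1-micron pyroxene band
# and matching reflectance values for a synthetic HED-like spectrum.
wavelengths = np.array([0.75, 0.83, 0.92, 0.98])
reflectance = np.array([0.22, 0.19, 0.17, 0.18])

# Fit a 2nd-order polynomial and take its analytic minimum, -b / (2a),
# as the pseudo Band I minimum.
a, b, c = np.polyfit(wavelengths, reflectance, deg=2)
band_min = -b / (2.0 * a)
print(f"pseudo Band I minimum: {band_min:.3f} micrometers")
```

The resulting band minimum would then be mapped to ferrosilite content through calibration equations such as those of Burbine et al. (2009) cited in the abstract.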
Optical design of space cameras for automated rendezvous and docking systems
NASA Astrophysics Data System (ADS)
Zhu, X.
2018-05-01
Visible cameras are essential components of a space automated rendezvous and docking (AR&D) system, which is utilized in many space missions including crewed or robotic spaceship docking, on-orbit satellite servicing, and autonomous landing and hazard avoidance. Cameras are ubiquitous devices in modern times, with countless lens designs that focus on high resolution and color rendition. In comparison, space AR&D cameras, while not required to have extremely high resolution and color rendition, impose some unique requirements on lenses. Fixed lenses with no moving parts, and separate lenses for narrow and wide field-of-view (FOV), are normally used in order to meet high reliability requirements. Cemented lens elements are usually avoided due to the wide temperature swings and outgassing requirements of the space environment. The lenses should be designed with exceptional straylight performance and minimum lens flare, given the intense sunlight and lack of atmospheric scattering in space. Furthermore, radiation-resistant glasses should be considered to prevent glass darkening from space radiation. Neptec has designed and built a narrow FOV (NFOV) lens and a wide FOV (WFOV) lens for an AR&D visible camera system. The lenses are designed using the ZEMAX program; the straylight performance and the lens baffles are simulated using the TracePro program. This paper discusses general requirements for space AR&D camera lenses and the specific measures taken for the lenses to meet the space environmental requirements.
NASA Technical Reports Server (NTRS)
Nelson, David L.; Diner, David J.; Thompson, Charles K.; Hall, Jeffrey R.; Rheingans, Brian E.; Garay, Michael J.; Mazzoni, Dominic
2010-01-01
MISR (Multi-angle Imaging SpectroRadiometer) INteractive eXplorer (MINX) is an interactive visualization program that allows a user to digitize smoke, dust, or volcanic plumes in MISR multiangle images, and automatically retrieve height and wind profiles associated with those plumes. This innovation can perform 9-camera animations of MISR level-1 radiance images to study the 3D relationships of clouds and plumes. MINX also enables archiving MISR aerosol properties and Moderate Resolution Imaging Spectroradiometer (MODIS) fire radiative power along with the heights and winds. It can correct geometric misregistration between cameras by correlating off-nadir camera scenes with corresponding nadir scenes and then warping the images to minimize the misregistration offsets. Plots of BRF (bidirectional reflectance factor) vs. camera angle for points clicked in an image can be displayed. Users get rapid access to map views of MISR path and orbit locations and overflight dates, and past or future orbits can be identified that pass over a specified location at a specified time. Single-camera, level-1 radiance data at 1,100- or 275- meter resolution can be quickly displayed in color using a browse option. This software determines the heights and motion vectors of features above the terrain with greater precision and coverage than previous methods, based on an algorithm that takes wind direction into consideration. Human interpreters can precisely identify plumes and their extent, and wind direction. Overposting of MODIS thermal anomaly data aids in the identification of smoke plumes. The software has been used to preserve graphical and textural versions of the digitized data in a Web-based database.
Wide-Field-of-View, High-Resolution, Stereoscopic Imager
NASA Technical Reports Server (NTRS)
Prechtl, Eric F.; Sedwick, Raymond J.
2010-01-01
A device combines video feeds from multiple cameras to provide wide-field-of-view, high-resolution, stereoscopic video to the user. The prototype under development consists of two camera assemblies, one for each eye. One of these assemblies incorporates a mounting structure with multiple cameras attached at offset angles. The video signals from the cameras are fed to a central processing platform where each frame is color processed and mapped into a single contiguous wide-field-of-view image. Because the resolution of most display devices is typically smaller than the processed map, a cropped portion of the video feed is output to the display device. The positioning of the cropped window will likely be controlled through the use of a head tracking device, allowing the user to turn his or her head side-to-side or up and down to view different portions of the captured image. There are multiple options for the display of the stereoscopic image. The use of head-mounted displays is one likely implementation, and 3D projection is another technology under consideration. The technology can be adapted in a multitude of ways. The computing platform is scalable, such that the number, resolution, and sensitivity of the cameras can be leveraged to improve image resolution and field of view. Miniaturization efforts can be pursued to shrink the package down for better mobility. Power savings studies can be performed to enable unattended, remote sensing packages. Image compression and transmission technologies can be incorporated to enable an improved telepresence experience.
LROC WAC Ultraviolet Reflectance of the Moon
NASA Astrophysics Data System (ADS)
Robinson, M. S.; Denevi, B. W.; Sato, H.; Hapke, B. W.; Hawke, B. R.
2011-10-01
Earth-based color filter photography, first acquired in the 1960s, showed color differences related to morphologic boundaries on the Moon [1]. These color units were interpreted to indicate compositional differences, thought to be the result of variations in titanium content [1]. Later it was shown that iron abundance (FeO) also plays a dominant role in controlling color in lunar soils [2]. Equally important is the maturity of a lunar soil in terms of its reflectance properties (albedo and color) [3]. Maturity is a measure of the state of alteration of surface materials due to sputtering and high velocity micrometeorite impacts over time [3]. The Clementine (CL) spacecraft provided the first global and digital visible through infrared observations of the Moon [4]. This pioneering dataset allowed significant advances in our understanding of compositional (FeO and TiO2) and maturation differences across the Moon [5,6]. Later, the Lunar Prospector (LP) gamma ray and neutron experiments provided the first global, albeit low resolution, elemental maps [7]. Newly acquired Moon Mineralogy Mapper hyperspectral measurements are now providing the means to better characterize mineralogic variations on a global scale [8]. Our knowledge of ultraviolet color differences between geologic units is limited to low resolution (km scale) nearside telescopic observations, high resolution Hubble Space Telescope images of three small areas [9], and laboratory analyses of lunar materials [10,11]. These previous studies detailed color differences in the UV (100 to 400 nm) related to composition and physical state. HST UV (250 nm) and visible (502 nm) color differences were found to correlate with TiO2, and were relatively insensitive to maturity effects seen in visible ratios (CL) [9]. These two results led to the conclusion that improvements in TiO2 estimation accuracy over existing methods may be possible through a simple UV/visible ratio [9]. The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) provides the first global lunar ultraviolet through visible (321 nm to 689 nm) multispectral observations [12]. The WAC is a seven-color push-frame imager with nominal resolutions of 400 m (321, 360 nm) and 100 m (415, 566, 604, 643, 689 nm). Due to its wide field-of-view (60° in color mode), the phase angle within a single line varies by ±30°, thus requiring the derivation of a precise photometric characterization [13] before any interpretations of lunar reflectance properties can be made. The current WAC photometric correction relies on multiple WAC observations of the same area over a broad range of phase angles and typically results in relative corrections good to a few percent [13].
1990-02-14
Range: 4 billion miles from Earth, at 32 degrees above the ecliptic. P-36057C This color image of the Sun, Earth, and Venus is one of the first, and maybe the only, images that show our solar system from such a vantage point. The image is a portion of a wide-angle image containing the Sun and the region of space where the Earth and Venus were at the time, with narrow-angle camera frames centered on each planet. The wide-angle image was taken with the camera's darkest filter, a methane absorption band, and the shortest possible exposure, one two-hundredth of a second, to avoid saturating the camera's vidicon tube with scattered sunlight. The Sun is not large in the sky as seen from Voyager's perspective at the edge of the solar system, yet it is still eight times brighter than Sirius, the brightest star in Earth's sky. The image of the Sun you see is far larger than the actual dimension of the solar disk. The brightness results in a burned-out image with multiple reflections from the optics of the camera. The rays around the Sun are a diffraction pattern of the calibration lamp, which is mounted in front of the wide-angle lens. The two narrow-angle frames containing the images of the Earth and Venus have been digitally mosaicked into the wide-angle image at the appropriate scale. These images were taken through three color filters and recombined to produce the color image. Violet, green, and blue filters were used, with exposure times of 0.72, 0.48, and 0.72 seconds for Earth, and 0.36, 0.24, and 0.36 seconds for Venus. The images also show long linear streaks resulting from scattering of sunlight off parts of the camera and its sun shade.
NASA Astrophysics Data System (ADS)
Bell, J. F.; Godber, A.; McNair, S.; Caplinger, M. A.; Maki, J. N.; Lemmon, M. T.; Van Beek, J.; Malin, M. C.; Wellington, D.; Kinch, K. M.; Madsen, M. B.; Hardgrove, C.; Ravine, M. A.; Jensen, E.; Harker, D.; Anderson, R. B.; Herkenhoff, K. E.; Morris, R. V.; Cisneros, E.; Deen, R. G.
2017-07-01
The NASA Curiosity rover Mast Camera (Mastcam) system is a pair of fixed-focal length, multispectral, color CCD imagers mounted 2 m above the surface on the rover's remote sensing mast, along with associated electronics and an onboard calibration target. The left Mastcam (M-34) has a 34 mm focal length, an instantaneous field of view (IFOV) of 0.22 mrad, and a FOV of 20° × 15° over the full 1648 × 1200 pixel span of its Kodak KAI-2020 CCD. The right Mastcam (M-100) has a 100 mm focal length, an IFOV of 0.074 mrad, and a FOV of 6.8° × 5.1° using the same detector. The cameras are separated by 24.2 cm on the mast, allowing stereo images to be obtained at the resolution of the M-34 camera. Each camera has an eight-position filter wheel, enabling it to take Bayer pattern red, green, and blue (RGB) "true color" images, multispectral images in nine additional bands spanning 400-1100 nm, and images of the Sun in two colors through neutral density-coated filters. An associated Digital Electronics Assembly provides command and data interfaces to the rover, 8 Gb of image storage per camera, 11 bit to 8 bit companding, JPEG compression, and acquisition of high-definition video. Here we describe the preflight and in-flight calibration of Mastcam images, the ways that they are being archived in the NASA Planetary Data System, and the ways that calibration refinements are being developed as the investigation progresses on Mars. We also provide some examples of data sets and analyses that help to validate the accuracy and precision of the calibration.
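The quoted fields of view follow directly from the IFOV and the pixel counts under the small-angle approximation (FOV ≈ IFOV × pixel count); a quick check:

    import math

    def fov_deg(ifov_mrad, n_pixels):
        # Small-angle approximation: total FOV ~ IFOV x pixel count.
        return math.degrees(ifov_mrad * 1e-3 * n_pixels)

    # M-34: 0.22 mrad over 1648 x 1200 pixels -> ~20.8 deg x ~15.1 deg
    print(fov_deg(0.22, 1648), fov_deg(0.22, 1200))
    # M-100: 0.074 mrad -> ~7.0 deg x ~5.1 deg
    print(fov_deg(0.074, 1648), fov_deg(0.074, 1200))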
Contourlet domain multiband deblurring based on color correlation for fluid lens cameras.
Tzeng, Jack; Liu, Chun-Chen; Nguyen, Truong Q
2010-10-01
The fluidic lens camera system presents unique image processing challenges due to its novel fluid optics. Developed for surgical applications, the fluid lens offers advantages such as no moving parts while zooming and better miniaturization than traditional glass optics. Despite these abilities, the liquid lens reacts nonuniformly to different color wavelengths, producing sharp color planes alongside blurred ones and causing severe axial color aberrations. To deblur color images without estimating a point spread function, a contourlet filter bank system is proposed. This multiband deblurring method uses information from the sharp color planes to improve the blurred color planes. A previous wavelet-based method produced significantly improved sharpness and reduced ghosting artifacts compared to traditional Lucy-Richardson and Wiener deconvolution algorithms. The proposed contourlet-based system uses directional filtering to adapt to the contours of the image, producing an image with a level of sharpness similar to the previous wavelet-based method but with fewer ghosting artifacts. Conditions under which this algorithm reduces the mean squared error are analyzed. While the primary focus of this paper is improving the blue color plane using information from the green color plane, these methods could be adjusted to improve the red color plane. Many multiband systems, such as global mapping, infrared imaging, and computer-assisted surgery, are natural extensions of this work. This information-sharing algorithm benefits any image set with high edge correlation, and the proposed approach can improve deblurring, noise reduction, and resolution enhancement.
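A minimal single-scale sketch of the cross-channel idea (not the paper's contourlet filter bank): transfer the high-pass detail of the sharp green plane onto the blurred blue plane. The Gaussian scale and gain are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def share_detail(blue_blurred, green_sharp, sigma=2.0, gain=1.0):
        """Single-scale stand-in for cross-channel deblurring: add the
        high-frequency detail of the sharp green plane to the blurred
        blue plane. The paper's contourlet filter bank does this per
        direction and per scale; this sketch uses one isotropic band."""
        green_low = gaussian_filter(green_sharp, sigma)
        detail = green_sharp - green_low          # high-pass of green
        return np.clip(blue_blurred + gain * detail, 0.0, 1.0)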
[A Method for Selecting Self-Adaptive Chromaticity of the Projected Markers].
Zhao, Shou-bo; Zhang, Fu-min; Qu, Xing-hua; Zheng, Shi-wei; Chen, Zhe
2015-04-01
The authors designed a self-adaptive projection system composed of a color camera, a projector, and a PC. In detail, a digital micro-mirror device (DMD) serving as a spatial light modulator for the projector was introduced into the optical path to modulate the illuminant spectrum produced by red, green and blue light emitting diodes (LEDs). However, the color visibility of the active markers is affected by the screen, which has an unknown reflective spectrum. Here the active markers are a projected spot array, and the chromaticity feature of the markers is sometimes submerged in a screen of similar spectrum. In order to enhance the color visibility of the active markers relative to the screen, a method for selecting self-adaptive chromaticity of the projected markers in 3D scanning metrology is described. A color camera with 3 channels limits the accuracy of device characterization, so to achieve interconversion between device-independent and device-dependent color spaces, a high-dimensional linear model of the reflective spectrum was built. Prior training samples provide additional constraints to yield a high-dimensional linear model with more than three degrees of freedom. Meanwhile, the spectral power distribution of the ambient light was estimated. Subsequently, the markers' chromaticity in CIE color space was selected via the principle of maximizing Euclidean distance, and the RGB setting values were easily estimated via the inverse transform. Finally, we implemented a typical experiment to show the performance of the proposed approach. A 24-patch Munsell Color Checker was used as the projection screen. The color difference in chromaticity coordinates between the active marker and the color patch was used to evaluate the color visibility of the active markers relative to the screen. A comparison between the self-adaptive projection system and a traditional diode-laser light projector is listed and discussed to highlight the advantage of the proposed method.
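The chromaticity selection step reduces to a maximization over the marker colors the projector can realize. A minimal sketch, assuming the screen chromaticity and the projector's candidate chromaticities have already been estimated in CIE xy:

    import numpy as np

    def pick_marker_chromaticity(screen_xy, candidates_xy):
        """Choose the candidate chromaticity farthest (Euclidean) from
        the estimated screen chromaticity, maximizing marker visibility.
        screen_xy     : (2,) estimated CIE xy of the screen patch
        candidates_xy : (N, 2) xy values realizable by the RGB projector
        """
        d = np.linalg.norm(candidates_xy - screen_xy, axis=1)
        return candidates_xy[np.argmax(d)]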
Harada, Ken; Akashi, Tetsuya; Niitsu, Kodai; Shimada, Keiko; Ono, Yoshimasa A; Shindo, Daisuke; Shinada, Hiroyuki; Mori, Shigeo
2018-01-17
Advanced electron microscopy technologies have made it possible to perform precise double-slit interference experiments. We used a 1.2-MV field emission electron microscope providing coherent electron waves and a direct detection camera system enabling single-electron detection at sub-second exposure times. We developed a method to perform the interference experiment by using an asymmetric double slit fabricated by a focused ion beam instrument and by operating the microscope under a "pre-Fraunhofer" condition, different from the Fraunhofer condition of conventional double-slit experiments. Here, the pre-Fraunhofer condition means that each single-slit observation was performed under the Fraunhofer condition, while the double-slit observations were performed under the Fresnel condition. The interference experiments with each single slit and with the asymmetric double slit were carried out under two different electron dose conditions: high dose for calculation of the electron probability distribution and low dose for the distribution of individual electrons. Finally, we displayed the distributions of single electrons as a composite image, color-coded according to the three types of experiments above.
NASA Technical Reports Server (NTRS)
2005-01-01
False color images of Saturn's moon, Mimas, reveal variation in either the composition or texture across its surface. During its approach to Mimas on Aug. 2, 2005, the Cassini spacecraft narrow-angle camera obtained multi-spectral views of the moon from a range of 228,000 kilometers (142,500 miles). The image at the left is a narrow angle clear-filter image, which was separately processed to enhance the contrast in brightness and sharpness of visible features. The image at the right is a color composite of narrow-angle ultraviolet, green, infrared and clear filter images, which have been specially processed to accentuate subtle changes in the spectral properties of Mimas' surface materials. To create this view, three color images (ultraviolet, green and infrared) were combined into a single black and white picture that isolates and maps regional color differences. This 'color map' was then superimposed over the clear-filter image at the left. The combination of color map and brightness image shows how the color differences across the Mimas surface materials are tied to geological features. Shades of blue and violet in the image at the right are used to identify surface materials that are bluer in color and have a weaker infrared brightness than average Mimas materials, which are represented by green. Herschel crater, a 140-kilometer-wide (88-mile) impact feature with a prominent central peak, is visible in the upper right of each image. The unusual bluer materials are seen to broadly surround Herschel crater. However, the bluer material is not uniformly distributed in and around the crater. Instead, it appears to be concentrated on the outside of the crater and more to the west than to the north or south. The origin of the color differences is not yet understood. It may represent ejecta material that was excavated from inside Mimas when the Herschel impact occurred. The bluer color of these materials may be caused by subtle differences in the surface composition or the sizes of grains making up the icy soil. The images were obtained when the Cassini spacecraft was above 25 degrees south, 134 degrees west latitude and longitude. The Sun-Mimas-spacecraft angle was 45 degrees and north is at the top. The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the mission for NASA's Science Mission Directorate, Washington, D.C. The Cassini orbiter and its two onboard cameras were designed, developed and assembled at JPL. The imaging operations center is based at the Space Science Institute in Boulder, Colo. For more information about the Cassini-Huygens mission visit http://saturn.jpl.nasa.gov. The Cassini imaging team homepage is at http://ciclops.org.
The Rich Color Variations of Pluto
2015-09-24
NASA's New Horizons spacecraft captured this high-resolution enhanced color view of Pluto on July 14, 2015. The image combines blue, red and infrared images taken by the Ralph/Multispectral Visual Imaging Camera (MVIC). Pluto's surface sports a remarkable range of subtle colors, enhanced in this view to a rainbow of pale blues, yellows, oranges, and deep reds. Many landforms have their own distinct colors, telling a complex geological and climatological story that scientists have only just begun to decode. The image resolves details and colors on scales as small as 0.8 miles (1.3 kilometers). http://photojournal.jpl.nasa.gov/catalog/PIA19952
Sky brightness and color measurements during the 21 August 2017 total solar eclipse.
Bruns, Donald G; Bruns, Ronald D
2018-06-01
The sky brightness was measured during the partial phases and during totality of the 21 August 2017 total solar eclipse. A tracking CCD camera with color filters and a wide-angle lens allowed measurements across a wide field of view, recording images every 10 s. The partially and totally eclipsed Sun was kept behind an occulting disk attached to the camera, allowing direct brightness measurements from 1.5° to 38° from the Sun. During the partial phases, the sky brightness as a function of time closely followed the integrated intensity of the unobscured fraction of the solar disk. A redder sky was measured close to the Sun just before totality, caused by the redder color of the exposed solar limb. During totality, a bluer sky was measured, dimmer than the normal sky by a factor of 10,000. Suggestions for enhanced measurements at future eclipses are offered.
Non-invasive Self-Care Anemia Detection during Pregnancy Using a Smartphone Camera
NASA Astrophysics Data System (ADS)
Anggraeni, M. D.; Fatoni, A.
2017-02-01
The Indonesian maternal mortality rate is the highest in South East Asia. Postpartum hemorrhage is the major cause of maternal mortality in Indonesia, and anemia during pregnancy contributes significantly to postpartum hemorrhage, so early detection of anemia during pregnancy may save mothers from maternal death. This research aims to develop non-invasive self-care anemia detection based on observation of palpebral color using a smartphone camera. The color intensity (red, green, and blue) was measured using the Colorgrab software (Loomatix) and analyzed against the hemoglobin concentration of the samples, measured using a standard spectrophotometric method. The results showed that the red color intensity had a high correlation (R2 = 0.814) with hemoglobin concentration, with a linear regression of y = 14.486x + 50.228. This preliminary study may be used for early detection of anemia that is more objective than the visual assessment usually performed.
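A sketch of how the reported fit might be applied for screening, on the assumption (not stated in the abstract) that y is the mean red intensity of the palpebral region and x the hemoglobin concentration; if the fit runs the other way, the line would be applied directly instead of inverted.

    def estimate_hb(red_intensity, slope=14.486, intercept=50.228):
        """Invert the reported fit y = 14.486x + 50.228, assuming y is
        mean red intensity and x is hemoglobin concentration (the
        abstract does not state the direction of the regression)."""
        return (red_intensity - intercept) / slope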
High-Speed Imaging Optical Pyrometry for Study of Boron Nitride Nanotube Generation
NASA Technical Reports Server (NTRS)
Inman, Jennifer A.; Danehy, Paul M.; Jones, Stephen B.; Lee, Joseph W.
2014-01-01
A high-speed imaging optical pyrometry system is designed for making in-situ measurements of boron temperature during the boron nitride nanotube synthesis process. Spectrometer measurements show molten boron emission to be essentially graybody in nature, lacking spectral emission fine structure over the visible range of the electromagnetic spectrum. Camera calibration experiments are performed and compared with theoretical calculations to quantitatively establish the relationship between observed signal intensity and temperature. The one-color pyrometry technique described herein involves measuring temperature based upon the absolute signal intensity observed through a narrowband spectral filter, while the two-color technique uses the ratio of the signals through two spectrally separated filters. The present study calibrated both the one- and two-color techniques at temperatures between 1,173 K and 1,591 K using a pco.dimax HD CMOS-based camera along with three such filters having transmission peaks near 550 nm, 632.8 nm, and 800 nm.
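For the two-color technique, a graybody under the Wien approximation gives a closed form for temperature from the filtered signal ratio. A sketch, ignoring filter bandwidth, detector response, and emissivity differences between bands (all of which the paper folds into its calibration):

    import numpy as np

    # Second radiation constant c2 = h*c/k in m*K.
    C2 = 1.4388e-2

    def two_color_temperature(ratio, lam1, lam2):
        """Two-color pyrometry under the Wien approximation for a
        graybody: I(lam, T) ~ lam**-5 * exp(-c2 / (lam * T)), so the
        ratio R = I(lam1)/I(lam2) yields
        T = c2 * (1/lam2 - 1/lam1) / (ln R - 5 ln(lam2/lam1)).
        Wavelengths are in meters."""
        return (C2 * (1.0 / lam2 - 1.0 / lam1)
                / (np.log(ratio) - 5.0 * np.log(lam2 / lam1)))

    # e.g. a signal ratio of 0.108 through the 632.8 nm and 800 nm
    # filters corresponds to roughly 1400 K.
    print(two_color_temperature(0.108, 632.8e-9, 800e-9))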
Noctilucent cloud particle size determination based on multi-wavelength all-sky analysis
NASA Astrophysics Data System (ADS)
Ugolnikov, Oleg S.; Galkin, Alexey A.; Pilgaev, Sergey V.; Roldugin, Alexey V.
2017-10-01
The article deals with the analysis of color distribution in noctilucent clouds (NLC) across the sky based on multi-wavelength (RGB) CCD photometry obtained with the all-sky camera in Lovozero in the north of Russia (68.0°N, 35.1°E) during a bright, extensive NLC display on the night of August 12, 2016. Small changes in the NLC color across the sky are interpreted as atmospheric absorption and extinction effects combined with the difference in the Mie scattering functions of NLC particles for the three color channels of the camera. The method described in this paper is used to find an effective monodisperse particle radius of about 55 nm. The result of these simple and cost-effective measurements is in good agreement with previous estimations of comparable accuracy. Non-spherical particles and Gaussian and lognormal particle size distributions are also considered.
Estimation of color filter array data from JPEG images for improved demosaicking
NASA Astrophysics Data System (ADS)
Feng, Wei; Reeves, Stanley J.
2006-02-01
On-camera demosaicking algorithms are necessarily simple and therefore do not yield the best possible images. However, off-camera demosaicking algorithms face the additional challenge that the data has been compressed and therefore corrupted by quantization noise. We propose a method to estimate the original color filter array (CFA) data from JPEG-compressed images so that more sophisticated (and better) demosaicking schemes can be applied to get higher-quality images. The JPEG image formation process, including simple demosaicking, color space transformation, chrominance channel decimation and DCT, is modeled as a series of matrix operations followed by quantization on the CFA data, which is estimated by least squares. An iterative method is used to conserve memory and speed computation. Our experiments show that the mean square error (MSE) with respect to the original CFA data is reduced significantly using our algorithm, compared to that of unprocessed JPEG and deblocked JPEG data.
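A minimal sketch of the iterative least-squares estimation, with the JPEG pipeline abstracted into forward/adjoint operator handles A and At. These are placeholders: the paper assembles the forward model from demosaicking, color transform, chrominance decimation, and DCT, with quantization treated as noise; iterating avoids ever forming the huge matrix explicitly.

    def estimate_cfa(y, A, At, n_iter=50, step=1.0):
        """Gradient (Landweber-style) iteration for least-squares
        estimation of CFA data x from JPEG-derived observations
        y ~ A x. A and At are function handles for the forward
        operator and its adjoint -- assumptions standing in for the
        paper's JPEG image-formation model."""
        x = At(y)  # initial estimate by back-projection
        for _ in range(n_iter):
            x = x + step * At(y - A(x))
        return x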
How many pixels does it take to make a good 4"×6" print? Pixel count wars revisited
NASA Astrophysics Data System (ADS)
Kriss, Michael A.
2011-01-01
In the early 1980s the future of conventional silver-halide photographic systems was of great concern due to the potential introduction of electronic imaging systems, then typified by the Sony Mavica analog electronic camera. The focus was on the quality of film-based systems as expressed in the equivalent number of pixels and bits per pixel, and on how many pixels would be required to create an equivalent-quality image from a digital camera. It was found that 35-mm frames of ISO 100 color negative film contained equivalent pixels of 12 microns, for a total of 18 million pixels per frame (6 million pixels per layer), with about 6 bits of information per pixel; the introduction of new emulsion technology, tabular AgX grains, increased this value to 8 bits per pixel. Higher ISO speed films had larger equivalent pixels and fewer pixels per frame, but retained the 8 bits per pixel. Further work found that a high quality 3.5" x 5.25" print could be obtained from a three-layer system containing 1300 x 1950 pixels per layer, or about 7.6 million pixels in all. In short, it became clear that once a digital camera contained about 6 million pixels (in a single layer, using a color filter array and appropriate image processing), digital systems would challenge and replace conventional film-based systems for the consumer market. By 2005 this became the reality. Since 2005 a "pixel war" has raged amongst digital camera makers. The question arises of just how many pixels are required, and whether all pixels are equal. This paper provides a practical look at how many pixels are needed for a good print based on the form factor of the sensor (sensor size) and the effective optical modulation transfer function (optical spread function) of the camera lens. Is it better to have 16 million 5.7-micron pixels or 6 million 7.8-micron pixels? How do intrinsic (no electronic boost) ISO speed and exposure latitude vary with pixel size? A systematic review of these issues is provided within the context of image quality and ISO speed models developed over the last 15 years.
Lunar Reconnaissance Orbiter Camera (LROC) instrument overview
Robinson, M.S.; Brylow, S.M.; Tschimmel, M.; Humm, D.; Lawrence, S.J.; Thomas, P.C.; Denevi, B.W.; Bowman-Cisneros, E.; Zerr, J.; Ravine, M.A.; Caplinger, M.A.; Ghaemi, F.T.; Schaffner, J.A.; Malin, M.C.; Mahanti, P.; Bartels, A.; Anderson, J.; Tran, T.N.; Eliason, E.M.; McEwen, A.S.; Turtle, E.; Jolliff, B.L.; Hiesinger, H.
2010-01-01
The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) and Narrow Angle Cameras (NACs) are on the NASA Lunar Reconnaissance Orbiter (LRO). The WAC is a 7-color push-frame camera (100 and 400 m/pixel visible and UV, respectively), while the two NACs are monochrome narrow-angle linescan imagers (0.5 m/pixel). The primary mission of LRO is to obtain measurements of the Moon that will enable future lunar human exploration. The overarching goals of the LROC investigation include landing site identification and certification, mapping of permanently polar shadowed and sunlit regions, meter-scale mapping of polar regions, global multispectral imaging, a global morphology base map, characterization of regolith properties, and determination of current impact hazards.
Images of the 10-micron source in the Cygnus 'Egg'
NASA Technical Reports Server (NTRS)
Jaye, D.; Fienberg, R. Tresch; Fazio, G. G.; Gezari, D. Y.; Lamb, G. M.; Shu, P. K.; Hoffmann, W. F.; Mccreight, C. R.
1989-01-01
Mid-IR images of AFGL 2688, the Egg nebula, obtained with a 16 x 16 pixel array camera (field of view 12.5 x 12.5 arcsec) resolve the central source. It appears as a centrally peaked ellipsoid with major axis of symmetry parallel to the axis of the visible nebulosity. This is contrary to the expected extension perpendicular to this axis implied by proposed dust-toroid models of the IR source. Maps of the spatial distribution of 8-13 micron color temperature and warm dust opacity derived from the multiwavelength images further characterize the IR emission. The remarkable flatness of the color temperature conflicts with the radial temperature gradient expected across a thick shell of material with a single heat source at its center. The new data suggest instead that the source consists of a central star surrounded by a dust shell that is too thin to provide a detectable temperature gradient and too small to permit the resolution of limb brightening.
NASA Technical Reports Server (NTRS)
2000-01-01
This single frame from a color movie of Jupiter from NASA's Cassini spacecraft shows what it would look like to unpeel the entire globe of Jupiter and stretch it out on a wall in the form of a rectangular map. The image is a color cylindrical projection of the complete circumference of Jupiter, from 60 degrees south to 60 degrees north. It was produced from six images taken by Cassini's narrow-angle camera on Oct. 31, 2000, in each of three filters: red, green and blue. The smallest visible features at the equator are about 600 kilometers (about 370 miles) across. In a map of this type, the most extreme northern and southern latitudes are unnaturally stretched out. Cassini is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Cassini mission for NASA's Office of Space Science, Washington, D.C.
Sensor fusion of range and reflectance data for outdoor scene analysis
NASA Technical Reports Server (NTRS)
Kweon, In So; Hebert, Martial; Kanade, Takeo
1988-01-01
In recognizing objects in an outdoor scene, range and reflectance (or color) data provide complementary information. Results of experiments in recognizing outdoor scenes containing roads, trees, and cars are presented. The recognition program uses range and reflectance data obtained by a scanning laser range finder, as well as color data from a color TV camera. After segmentation of each image into primitive regions, models of objects are matched using various properties.
View of 'Cape St. Mary' from 'Cape Verde' (False Color)
NASA Technical Reports Server (NTRS)
2006-01-01
As part of its investigation of 'Victoria Crater,' NASA's Mars Exploration Rover Opportunity examined a promontory called 'Cape St. Mary' from the vantage point of 'Cape Verde,' the next promontory counterclockwise around the crater's deeply scalloped rim. This view of Cape St. Mary combines several exposures taken by the rover's panoramic camera into a false-color mosaic. Contrast has been adjusted to improve the visibility of details in shaded areas. The upper portion of the crater wall contains a jumble of material tossed outward by the impact that excavated the crater. This vertical cross-section through the blanket of ejected material surrounding the crater was exposed by erosion that expanded the crater outward from its original diameter, according to scientists' interpretation of the observations. Below the jumbled material in the upper part of the wall are layers that survive relatively intact from before the crater-causing impact. Near the base of the Cape St. Mary cliff are layers with a pattern called 'crossbedding,' intersecting with each other at angles, rather than parallel to each other. Large-scale crossbedding can result from material being deposited as wind-blown dunes. The images combined into this mosaic were taken during the 970th Martian day, or sol, of Opportunity's Mars-surface mission (Oct. 16, 2006). The panoramic camera took them through the camera's 750-nanometer, 530-nanometer and 430-nanometer filters. The false color enhances subtle color differences among materials in the rocks and soils of the scene.
Color Imaging management in film processing
NASA Astrophysics Data System (ADS)
Tremeau, Alain; Konik, Hubert; Colantoni, Philippe
2003-12-01
The latest research projects in the laboratory LIGIV concern the capture, processing, archiving and display of color images, considering the trichromatic nature of the Human Visual System (HVS). Among these projects, one addresses digital cinematographic film sequences of high resolution and dynamic range. This project aims to optimize the use of content for post-production operators and for the end user. The studies presented in this paper address the use of metadata to optimise the consumption of video content on a device of the user's choice, independent of the nature of the equipment that captured the content. Optimising consumption includes enhancing the quality of image reconstruction on a display. Another part of this project addresses the content-based adaptation of image display. The main focus is on Regions of Interest (ROI) operations, based on the ROI concepts of MPEG-7. The aim of this second part is to characterize and ensure the conditions of display even if the display device or display medium changes. This requires, firstly, the definition of a reference color space and of bi-directional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and the ambient illumination under which they are viewed. To match the displayed image to the intended appearance, all kinds of production metadata (camera specification, camera colour primaries, lighting conditions) should be associated with the film material. Metadata and content together build rich content. The author is assumed to specify conditions as known from digital graphics arts. To control image pre-processing and image post-processing, these specifications should be contained in the film's metadata. The specifications are related to ICC profiles but must additionally consider mesopic viewing conditions.
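A minimal sketch of one such bi-directional device transform, using the sRGB/D65 matrix as a stand-in for matrices that a real workflow would read from each device's ICC profile:

    import numpy as np

    # sRGB primaries, D65 white -- a placeholder; real pipelines take
    # these matrices from the ICC profile of each camera, display, or
    # film recorder rather than assuming sRGB.
    RGB2XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])
    XYZ2RGB = np.linalg.inv(RGB2XYZ)

    def to_reference(rgb_linear):
        """Device RGB (linear) -> device-independent CIE XYZ."""
        return rgb_linear @ RGB2XYZ.T

    def to_device(xyz):
        """CIE XYZ -> linear RGB of the target device; clipping is the
        simplest possible stand-in for real gamut mapping."""
        return np.clip(xyz @ XYZ2RGB.T, 0.0, 1.0)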
The Effect of Selected Cinemagraphic Elements on Audience Perception of Mediated Concepts.
ERIC Educational Resources Information Center
Orr, Quinn
This study explores cinemagraphic and visual elements and their inter-relations through the reinterpretation of previous research and literature. The cinemagraphic elements of visual images (camera angle, camera motion, subject motion, color, and lighting) work as a language requiring a proper grammar for the messages to be conveyed in their…
A novel weighted-direction color interpolation
NASA Astrophysics Data System (ADS)
Tao, Jin-you; Yang, Jianfeng; Xue, Bin; Liang, Xiaofen; Qi, Yong-hong; Wang, Feng
2013-08-01
A digital camera captures images through a color filter array (CFA) covering the sensor surface, so only one color sample is obtained at each pixel location. Demosaicking is the process of estimating the missing color components of each pixel to obtain a full-resolution image. In this paper, a new algorithm based on edge adaptivity and different weighting factors is proposed. Our method can effectively suppress undesirable artifacts. Experimental results on Kodak images show that the proposed algorithm obtains higher-quality images than other methods in both numerical and visual terms.
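A minimal sketch of edge-adaptive weighted interpolation of the missing green sample at a red/blue site in a Bayer CFA; the gradient definitions and weight form are simplified assumptions, not the paper's exact algorithm.

    import numpy as np

    def green_at(cfa, i, j, eps=1.0):
        """Estimate the missing green value at a red/blue Bayer site
        (i, j), assuming i and j are at least 2 pixels from the border.
        Each green neighbor is weighted by the inverse of a directional
        gradient, so smooth directions dominate and edges are not
        interpolated across."""
        c = cfa.astype(float)
        # Directional gradients from green and same-color differences.
        g = {
            'n': abs(c[i-1, j] - c[i+1, j]) + abs(c[i-2, j] - c[i, j]),
            's': abs(c[i+1, j] - c[i-1, j]) + abs(c[i+2, j] - c[i, j]),
            'w': abs(c[i, j-1] - c[i, j+1]) + abs(c[i, j-2] - c[i, j]),
            'e': abs(c[i, j+1] - c[i, j-1]) + abs(c[i, j+2] - c[i, j]),
        }
        v = {'n': c[i-1, j], 's': c[i+1, j], 'w': c[i, j-1], 'e': c[i, j+1]}
        w = {k: 1.0 / (eps + g[k]) for k in g}
        return sum(w[k] * v[k] for k in g) / sum(w.values())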
2008-09-01
A key element in the development of HabCam as a tool for habitat characterization is the automated processing of images for color correction, segmentation of foreground targets from sediment, and classification of targets to taxonomic category.
2017-10-16
Inside the Spectrum prototype unit, organisms in a Petri plate are exposed to different colors of lighting. The device works by exposing organisms to different colors of fluorescent light while a camera records what's happening with time-lapse photography. Results from the Spectrum project will shed light on which living things are best suited for long-duration flights into deep space.
Thin and Slow Smoke Detection by Using Frequency Image
NASA Astrophysics Data System (ADS)
Zheng, Guang; Oe, Shunitiro
In this paper, a new method is proposed for detecting thin, slowly spreading smoke for early fire alarm by using the frequency image. The correlation coefficient of the frequency image between the current stage and the initial stage is calculated, as is the gray-image correlation coefficient of the color image. When thin, nearly transparent smoke enters the camera view, the correlation coefficient of the frequency image becomes small, while the gray-image correlation coefficient of the color image hardly changes and remains large. When something opaque, such as a person, enters the camera view, the correlation coefficients of both the frequency image and the color image become small. Based on this difference in behavior between the two coefficients in different situations, thin smoke can be detected. Moreover, by considering the movement of the thin smoke, false detections caused by illumination changes or noise can be avoided. Several experiments in different situations were carried out, and the experimental results show the effectiveness of the proposed method.
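A minimal sketch of the two-coefficient test, assuming the "frequency image" is the log-magnitude of the frame's 2D FFT and using illustrative thresholds:

    import numpy as np

    def freq_corr(frame, reference):
        """Correlation coefficient between the frequency images
        (log-magnitude FFT) of the current frame and the initial
        reference frame. Thin smoke blurs fine texture, lowering this
        coefficient while the plain gray-level correlation stays high."""
        f1 = np.log1p(np.abs(np.fft.fft2(frame)))
        f2 = np.log1p(np.abs(np.fft.fft2(reference)))
        return np.corrcoef(f1.ravel(), f2.ravel())[0, 1]

    def is_thin_smoke(frame, reference, t_freq=0.9, t_gray=0.9):
        """Flag thin smoke when the frequency correlation drops but the
        gray-image correlation remains high (thresholds are assumed)."""
        gray = np.corrcoef(frame.ravel(), reference.ravel())[0, 1]
        return freq_corr(frame, reference) < t_freq and gray > t_gray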
Isochrone Fitting of Hubble Photometry in UV-Vis Bands
NASA Astrophysics Data System (ADS)
Barker, Hallie; Paust, Nathaniel
2017-01-01
We present the results of isochrone fitting of color-magnitude diagrams from Hubble Space Telescope Wide Field Camera 3 (WFC3) and Advanced Camera for Surveys (ACS) photometry of the globular clusters M13 and M80 in five bands from the ultraviolet to the near infrared. Fits from both the Dartmouth Stellar Evolution Program (DSEP) and the PAdova and TRieste Stellar Evolution Code (PARSEC) are examined. Ages, extinctions, and distances are found from the isochrone fitting, and metallicities are confirmed. We conduct careful qualitative analysis of the inconsistencies of the fits across all of the color combinations possible with the five observed bands, and find that the (F606W-F814W) color generally produces very good fits, but that there are large discrepancies when the data are fit using colors that include UV bands, for both models. Finally, we directly compare the two models by performing isochrone-isochrone fitting, and find that the age in PARSEC is on average 1.5 Gyr younger than in DSEP for similar-appearing models at the same metallicity, and that the two models become less discrepant at lower metallicities.
Imaging system design and image interpolation based on CMOS image sensor
NASA Astrophysics Data System (ADS)
Li, Yu-feng; Liang, Fei; Guo, Rui
2009-11-01
An image acquisition system is introduced, which consists of a color CMOS image sensor (OV9620), SRAM (CY62148), CPLD (EPM7128AE) and DSP (TMS320VC5509A). The CPLD implements the logic and timing control of the system, the SRAM stores the image data, and the DSP controls the image acquisition system through the SCCB (OmniVision Serial Camera Control Bus). The timing sequence of the CMOS image sensor OV9620 is analyzed. The imaging part and the high-speed image data memory unit are designed, and the hardware and software design of the image acquisition and processing system is given. CMOS digital cameras use color filter arrays to sample different spectral components, such as red, green, and blue. At each pixel location only one color sample is taken, and the other colors must be interpolated from neighboring samples. We use an edge-oriented adaptive interpolation algorithm for the edge pixels and a bilinear interpolation algorithm for the non-edge pixels to improve the visual quality of the interpolated images. This method achieves high processing speed, reduces computational complexity, and effectively preserves image edges.
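The edge/non-edge dispatch described above might look as follows for a missing green sample; the gradient threshold is an assumption for illustration.

    def green_estimate(cfa, i, j, thresh=20):
        """Bilinear averaging for smooth (non-edge) pixels; direction-
        adaptive interpolation along the weaker gradient at edge pixels.
        The threshold value is an assumption, not from the paper."""
        dh = abs(cfa[i, j-1] - cfa[i, j+1])
        dv = abs(cfa[i-1, j] - cfa[i+1, j])
        if max(dh, dv) < thresh:              # non-edge: bilinear
            return (cfa[i-1, j] + cfa[i+1, j] +
                    cfa[i, j-1] + cfa[i, j+1]) / 4.0
        if dh < dv:                           # edge runs horizontally
            return (cfa[i, j-1] + cfa[i, j+1]) / 2.0
        return (cfa[i-1, j] + cfa[i+1, j]) / 2.0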
Single chip camera active pixel sensor
NASA Technical Reports Server (NTRS)
Shaw, Timothy (Inventor); Pain, Bedabrata (Inventor); Olson, Brita (Inventor); Nixon, Robert H. (Inventor); Fossum, Eric R. (Inventor); Panicacci, Roger A. (Inventor); Mansoorian, Barmak (Inventor)
2003-01-01
A totally digital single chip camera includes communications to operate most of its structure in serial communication mode. The digital single chip camera includes a D/A converter for converting an input digital word into an analog reference signal. The chip includes all of the necessary circuitry for operating the chip using a single pin.
Northern California and San Francisco Bay
NASA Technical Reports Server (NTRS)
2000-01-01
The left image of this pair was acquired by MISR's nadir camera on August 17, 2000 during Terra orbit 3545. Toward the top, and nestled between the Coast Range and the Sierra Nevadas, are the green fields of the Sacramento Valley. The city of Sacramento is the grayish area near the right-hand side of the image. Further south, San Francisco and other cities of the Bay Area are visible. On the right is a zoomed-in view of the area outlined by the yellow polygon. It highlights the southern end of San Francisco Bay, and was acquired by MISR's airborne counterpart, AirMISR, during an engineering check-out flight on August 25, 1997. AirMISR flies aboard a NASA ER-2 high-altitude aircraft and contains a single camera that rotates to different view angles. When this image was acquired, the AirMISR camera was pointed 70 degrees forward of the vertical. Colorful tidal flats are visible in both the AirMISR and MISR imagery. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology. For more information: http://www-misr.jpl.nasa.gov
A View of Lightning from the Space Shuttle Red Sprites and Blue Jets
NASA Technical Reports Server (NTRS)
Vaughan, Otha H., Jr.
1999-01-01
An examination and analysis of video images of lightning captured by the low-light-level monochrome TV cameras of the space shuttle has provided a variety of examples of new forms of lightning-like discharges that appear to move out of the tops of very active thunderstorms. These images were obtained during a number of shuttle missions while conducting the Mesoscale Lightning Observational Experiment (MLE). The video images illustrate a variety of filamentary and broad discharges toward the stratosphere that may be related to the intense electrical fields generated by the thunderstorm, which may somehow play a part in the Earth's global electrical circuit. A typical event is seen as a single or multiple filament that can appear at altitudes between 60 and 95 km above the storm top. In addition, another phenomenon, not explained at the present time, appears to move out of the top of the storm and then proceed toward the stratosphere at speeds of about 100 km/sec. These events, much like a jet, reach an altitude of at least 33 km before they begin to spread out into a cone-like shape. Further observations obtained from the ground and from aircraft using low-light-level color TV cameras have confirmed that the sprites are red while the jets are blue in color, hence the names Red Sprites and Blue Jets. Still images and video data will be presented, illustrating these new atmospheric phenomena.
Two-dimensional fruit ripeness estimation using thermal imaging
NASA Astrophysics Data System (ADS)
Sumriddetchkajorn, Sarun; Intaravanne, Yuttana
2013-06-01
Some green fruits do not change their color from green to yellow when ripe. As a result, ripeness estimation via color and fluorescence analytical approaches cannot be applied. In this article, we propose and show for the first time how a thermal imaging camera can be used to two-dimensionally classify fruits into different ripeness levels. Our key idea relies on the fact that mature fruits have higher heat capacity than immature ones, and therefore their surface temperature changes more slowly over time. Our experimental proof of concept using a thermal imaging camera shows promising results in non-destructively identifying three different ripeness levels of mangoes (Mangifera indica L.).
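A minimal sketch of the classification idea: since riper fruit, with its higher heat capacity, warms or cools more slowly, the per-pixel rate of surface temperature change between two thermal frames separates ripeness levels. The rates and thresholds below are illustrative assumptions, not the paper's calibration.

    import numpy as np

    def ripeness_map(t0, t1, dt_s, thresholds=(0.010, 0.020)):
        """Classify ripeness from two thermal frames taken dt_s seconds
        apart (e.g. while fruit re-warms after a cold soak). Slower
        change means riper fruit. thresholds are in K/s."""
        rate = np.abs(t1 - t0) / dt_s           # K per second, per pixel
        levels = np.digitize(rate, thresholds)  # 0 = ripe ... 2 = unripe
        return levels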
Layered Outcrops in Gusev Crater (False Color)
NASA Technical Reports Server (NTRS)
2004-01-01
One of the ways scientists collect mineralogical data about rocks on Mars is to view them through filters that allow only specific wavelengths of light to pass through the lens of the panoramic camera. NASA's Mars Exploration Rover Spirit took this false-color image of the rock nicknamed 'Tetl' at 1:05 p.m. martian time on its 270th martian day, or sol (Oct. 5, 2004), using the panoramic camera's 750-, 530-, and 430-nanometer filters. Darker red hues in the image correspond to greater concentrations of oxidized soil and dust. Bluer hues correspond to portions of rock that are not as heavily coated with soils or are not as highly oxidized.
NASA Astrophysics Data System (ADS)
Orton, Glenn S.; Yanamandra-Fisher, P. A.; Parrish, P. D.; Mousis, O.; Pantin, E.; Fuse, T.; Fujiyoshi, T.; Simon-Miller, A.; Morales-Juberias, R.; Tollestrup, E.; Connelley, M.; Trujillo, C.; Hora, J.; Irwin, P.; Fletcher, L.; Hill, D.; Kollmansberger, S.
2006-09-01
White Oval BA, formed from three predecessor vortices (known as Jupiter's "classical" White Ovals) after successive mergers in 1998 and 2000, became the second-largest vortex in the atmosphere of Jupiter (and possibly the solar system) at the time of its formation. While it continues in this distinction, it required a name change after a transformation between December 2005 and February 2006 that made it appear visually the same color as the Great Red Spot. Our campaign to understand the changes involved examination of the detailed color and wind field using Hubble Space Telescope instrumentation on several orbits in April. The fields of temperature, ammonia distribution and clouds were also examined using the mid-infrared VISIR camera/spectrometer on ESO's 8.2-m Very Large Telescope, the NASA Infrared Telescope Facility with the mid-infrared MIRSI instrument, and the refurbished near-infrared facility camera NSFCam2. High-resolution images of the Oval were made before the color change with the COMICS mid-infrared facility on the 8.2-m Subaru telescope. We are using these images, together with images acquired at the IRTF and with the Gemini/North NIRI near-infrared camera between January 2005 and August 2006, to characterize the extent to which changes in storm strength (vorticity, positive vertical motion) influenced (i) the depth from which colored cloud particles may have been "dredged up" from depth or (ii) the altitude to which particles may have been lofted and subjected to high-energy UV radiation that caused a color change, as alternative explanations for the phenomenon. Clues to this will provide clues to the chemistry of Jupiter's cloud system and its well-known colors in general. The behavior of Oval BA, and its interaction with the Great Red Spot in particular, is also being compared with dynamical models run with the EPIC code.
Three Fresh Exposures, Enhanced Color
NASA Technical Reports Server (NTRS)
2004-01-01
This enhanced-color panoramic camera image from the Mars Exploration Rover Opportunity features three holes created by the rock abrasion tool between sols 143 and 148 (June 18 and June 23, 2004) inside 'Endurance Crater.' The enhanced image makes the red colors a little redder and blue colors a little bluer, allowing viewers to see differences too subtle to be seen without the exaggeration. When compared with an approximately true color image, the tailings from the rock abrasion tool and the interior of the abraded holes are more prominent in this view. Being able to discriminate color variations helps scientists determine rocks' compositional differences and texture variations. This image was created using the 753-, 535- and 432-nanometer filters.
2016-11-18
This image of Ceres approximates how the dwarf planet's colors would appear to the eye. This view of Ceres, produced by the German Aerospace Center in Berlin, combines images taken during Dawn's first science orbit in 2015 using the framing camera's red, green and blue spectral filters. The color was calculated using a reflectance spectrum, which is based on the way that Ceres reflects different wavelengths of light and the solar wavelengths that illuminate Ceres. http://photojournal.jpl.nasa.gov/catalog/PIA21079
Hubble Captures Celestial Fireworks Within the Large Magellanic Cloud
NASA Technical Reports Server (NTRS)
2000-01-01
This is a color Hubble Space Telescope (HST) heritage image of supernova remnant N49 in the Large Magellanic Cloud, a neighboring galaxy, taken with Hubble's Wide Field Planetary Camera 2. Color filters were used to sample light emitted by sulfur, oxygen, and hydrogen. The color image was superimposed on a black and white image of stars in the same field, also taken with Hubble. Resembling a fireworks display, these delicate filaments are actually sheets of debris from a stellar explosion.
Real-time rendering for multiview autostereoscopic displays
NASA Astrophysics Data System (ADS)
Berretty, R.-P. M.; Peters, F. J.; Volleberg, G. T. G.
2006-02-01
In video systems, the introduction of 3D video might be the next revolution after the introduction of color. Multiview autostereoscopic displays are now in development. Such displays offer various views at the same time, and the image content observed by the viewer depends upon his position with respect to the screen. His left eye receives a signal different from what his right eye gets; provided the signals have been properly processed, this gives the impression of depth. The various views produced on the display differ with respect to their associated camera positions. A possible video format suited for rendering from different camera positions is the usual 2D format enriched with a depth-related channel: for each pixel in the video, not only its color is given but also, e.g., its distance to the camera. In this paper we provide a theoretical framework for the parallactic transformations, which relate captured and observed depths to screen and image disparities. Moreover, we present an efficient real-time rendering algorithm that uses forward mapping to reduce aliasing artefacts and that deals properly with occlusions. For improved perceived resolution, we take the relative position of the color subpixels and the optics of the lenticular screen into account. Sophisticated filtering techniques result in high-quality images.
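A crude sketch of depth-driven forward mapping for one synthesized view, splatting near pixels last so they overwrite occluded far pixels. The disparity gain and inverse-depth input are assumptions, and the paper's anti-aliasing and subpixel/lenticular handling are omitted.

    import numpy as np

    def synthesize_view(image, inv_depth, view_offset, k=20.0):
        """Forward-map a center view to a neighboring viewpoint using
        per-pixel shifts proportional to normalized inverse depth.
        view_offset is the camera displacement in view spacings; k is
        a screen-dependent disparity gain (both illustrative)."""
        h, w = image.shape[:2]
        out = np.zeros_like(image)
        shift = (view_offset * k * inv_depth).astype(int)  # disparity, px
        order = np.argsort(inv_depth, axis=1)              # far first
        for y in range(h):
            for x in order[y]:
                xs = x + shift[y, x]
                if 0 <= xs < w:
                    out[y, xs] = image[y, x]   # near pixels overwrite
        return out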
Park, Young-Jae; Lee, Jin-Moo; Yoo, Seung-Yeon; Park, Young-Bae
2016-04-01
To examine whether color parameters of tongue inspection (TI) using a digital camera were reliable and valid, and to examine which color parameters serve as predictors of symptom patterns in terms of East Asian medicine (EAM). Two hundred female subjects' tongue substances were photographed with a megapixel digital camera. Together with the photographs, the subjects were asked to complete Yin deficiency, Phlegm pattern, and Cold-Heat pattern questionnaires. Using three sets of digital imaging software, each digital image was exposure- and white-balance-corrected, and finally the L* (luminance), a* (red-green balance), and b* (yellow-blue balance) values of the tongues were calculated. To examine the intra- and inter-rater reliabilities and criterion validity of the color analysis method, three raters were asked to calculate color parameters for 20 digital image samples. Finally, four hierarchical regression models were formed. Color parameters showed good or excellent reliability (0.627-0.887 for intra-class correlation coefficients) and significant criterion validity (0.523-0.718 for Spearman's correlation). In the hierarchical regression models, age was a significant predictor of Yin deficiency (β = 0.192), and the b* value of the tip of the tongue was a determinant predictor of the Yin deficiency, Phlegm, and Heat patterns (β = -0.212, -0.172, and -0.163). Luminance (L*) was predictive of the Yin deficiency (β = -0.172) and Cold (β = 0.173) patterns. Our results suggest that color analysis of the tongue using the L*a*b* system is reliable and valid, and that color parameters partially serve as symptom pattern predictors in EAM practice.
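For reference, L*a*b* values can be computed from corrected camera RGB with the standard sRGB-to-XYZ-to-Lab chain; a sketch, assuming the exposure- and white-balance-corrected images are in sRGB:

    import numpy as np

    def srgb_to_lab(rgb, white=(0.95047, 1.0, 1.08883)):
        """Convert sRGB in [0, 1] to CIE L*a*b* (D65 white), the color
        space used for the tongue measurements."""
        rgb = np.asarray(rgb, dtype=float)
        # Undo the sRGB gamma.
        lin = np.where(rgb <= 0.04045, rgb / 12.92,
                       ((rgb + 0.055) / 1.055) ** 2.4)
        m = np.array([[0.4124, 0.3576, 0.1805],
                      [0.2126, 0.7152, 0.0722],
                      [0.0193, 0.1192, 0.9505]])
        xyz = lin @ m.T / np.asarray(white)
        f = np.where(xyz > (6/29) ** 3, np.cbrt(xyz),
                     xyz / (3 * (6/29) ** 2) + 4/29)
        L = 116 * f[..., 1] - 16
        a = 500 * (f[..., 0] - f[..., 1])
        b = 200 * (f[..., 1] - f[..., 2])
        return np.stack([L, a, b], axis=-1)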
Optical detection of two-color-fluorophore barcode for nanopore DNA sensing
NASA Astrophysics Data System (ADS)
Zhang, M.; Sychugov, I.; Schmidt, T.; Linnros, J.
2015-06-01
A simple scheme for parallel optical detection of a two-fluorophore barcode for single-molecule nanopore sensing is presented. The chosen fluorophores, ATTO-532 and DY-521-XL, emit in well-separated spectral ranges and can be excited at the same wavelength. A beam splitter was employed to separate the signals from the two fluorophores and guide them to the same CCD camera. Based on a conventional microscope, sources of background in the nanopore sensing system, including the membranes, compounds in the buffer solution, and the detection cell, were characterized. Photoluminescence excitation measurements showed that the silicon membrane has negligible photoluminescence under the examined excitation from 440 nm to 560 nm, in contrast to a silicon nitride membrane. Further, background signals from the detection cell were suppressed. Brownian motion of 450 bp DNA labelled with a single ATTO-532 or DY-521-XL fluorophore was successfully recorded by our optical system.
Development of a portable multispectral thermal infrared camera
NASA Technical Reports Server (NTRS)
Osterwisch, Frederick G.
1991-01-01
The purpose of this research and development effort was to design and build a prototype instrument designated the 'Thermal Infrared Multispectral Camera' (TIRC). The Phase 2 effort was a continuation of the Phase 1 feasibility study and preliminary design for such an instrument. The completed instrument, designated AA465, has applications in the field of geologic remote sensing and exploration. The AA465 Thermal Infrared Camera (TIRC) System is a field-portable multispectral thermal infrared camera operating over the 8.0-13.0 micron wavelength range. Its primary function is to acquire two-dimensional thermal infrared images of user-selected scenes. Thermal infrared energy emitted by the scene is collected, dispersed into ten 0.5-micron-wide channels, and then measured and recorded by the AA465 System. This multispectral information is presented in real time on a color display, which the operator uses to identify spectral and spatial variations in the scene's emissivity and/or irradiance. This fundamental instrument capability has a wide variety of commercial and research applications. While ideally suited for two-man operation in the field, the AA465 System can be transported and operated effectively by a single user. Functionally, the instrument operates as if it were a single-exposure camera. System measurement sensitivity requirements dictate relatively long (several minutes) instrument exposure times; as such, the instrument is not suited for recording time-variant information. The AA465 was fabricated, assembled, tested, and documented during this Phase 2 work period. The detailed design and fabrication of the instrument were performed from June 1989 to July 1990. The software development effort and instrument integration/test extended from July 1990 to February 1991. Software development included an operator interface/menu structure, instrument internal control functions, DSP image processing code, and a display algorithm coding program. The instrument was delivered to NASA in March 1991. The instrument's primary commercial and research use is as a field geologist's exploration tool. Other applications have been suggested but not investigated in depth; these include process-control measurements in commercial materials processing and quality-control functions that require information on surface heterogeneity.
ERIC Educational Resources Information Center
Koesdjojo, Myra T.; Pengpumkiat, Sumate; Wu, Yuanyuan; Boonloed, Anukul; Huynh, Daniel; Remcho, Thomas P.; Remcho, Vincent T.
2015-01-01
We have developed a simple and direct method to fabricate paper-based microfluidic devices that can be used for a wide range of colorimetric assay applications. With these devices, assays can be performed within minutes to allow for quantitative colorimetric analysis by use of a widely accessible iPhone camera and an RGB color reader application…
NASA Astrophysics Data System (ADS)
Nishidate, Izumi; Hoshi, Akira; Aoki, Yuta; Nakano, Kazuya; Niizeki, Kyuichi; Aizu, Yoshihisa
2016-03-01
A non-contact imaging method with a digital RGB camera is proposed to evaluate the plethysmogram and spontaneous low-frequency oscillation. In vivo experiments on human skin during mental stress induced by the Stroop color-word test demonstrated the feasibility of the method for evaluating the activities of the autonomic nervous system.
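A minimal sketch of the imaging-plethysmogram extraction: spatially average the green channel over a skin region in each frame, then band-pass the resulting trace. The cardiac band limits and filter order are assumptions; the slower (~0.1 Hz) component could be isolated the same way with a lower band.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def plethysmogram(frames, fps, band=(0.7, 3.0)):
        """Extract a camera plethysmogram from an (N, H, W, 3) RGB
        stack of a skin ROI: mean green value per frame, band-passed
        around the cardiac band (0.7-3 Hz here, an assumption)."""
        trace = frames[..., 1].mean(axis=(1, 2))   # mean green per frame
        b, a = butter(3, [f / (fps / 2) for f in band], btype="band")
        return filtfilt(b, a, trace - trace.mean())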
Fluorescence endoscopic video system
NASA Astrophysics Data System (ADS)
Papayan, G. V.; Kang, Uk
2006-10-01
This paper describes a fluorescence endoscopic video system intended for the diagnosis of diseases of the internal organs. The system operates on the basis of two-channel recording of the video fluxes from a fluorescence channel and a reflected-light channel by means of a high-sensitivity monochrome television camera and a color camera, respectively. Examples are given of the application of the device in gastroenterology.
Intelligent person identification system using stereo camera-based height and stride estimation
NASA Astrophysics Data System (ADS)
Ko, Jung-Hwan; Jang, Jae-Hun; Kim, Eun-Soo
2005-05-01
In this paper, a stereo camera-based intelligent person identification system is proposed. In the proposed method, the face area of the moving target person is extracted from the left image of the input stereo image pair by thresholding in the YCbCr color model; by correlating this segmented face area with the right input image, the location coordinates of the target face are acquired, and these values are then used to control the pan/tilt system through a modified PID-based recursive controller. Also, using the geometric parameters between the target face and the stereo camera system, the vertical distance between the target and the stereo camera system can be calculated through triangulation. From this calculated distance and the pan and tilt angles, the target's real position in world space can be acquired, and from it the target's height and stride values can finally be extracted. Experiments with video images of 16 moving persons show that a person could be identified with these extracted height and stride parameters.
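The distance step is standard stereo triangulation; a sketch, with the height estimate reduced to a single tilt projection (the paper's full pan/tilt geometry is richer than this):

    import math

    def target_distance(f_px, baseline_m, disparity_px):
        """Distance from the stereo rig via triangulation: Z = f*B/d,
        with focal length in pixels and baseline in meters."""
        return f_px * baseline_m / disparity_px

    def target_height(camera_height_m, distance_m, tilt_deg):
        """Rough head height: project the tilt angle to the face at the
        measured distance (illustrative only)."""
        return camera_height_m + distance_m * math.tan(math.radians(tilt_deg))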
Temperature-Sensitive Coating Sensor Based on Hematite
NASA Technical Reports Server (NTRS)
Bencic, Timothy J.
2011-01-01
A temperature-sensitive coating, based on hematite (iron III oxide), has been developed to measure surface temperature using spectral techniques. The hematite powder is added to a binder that allows the mixture to be painted on the surface of a test specimen. The coating dynamically changes its relative spectral makeup, or color, with changes in temperature. The color changes from a reddish-brown appearance at room temperature (25 C) to a black-gray appearance at temperatures around 600 C. The color change is reversible and repeatable with temperature cycling from low to high and back to low temperatures. The spectral changes can be recorded by different sensors, including spectrometers, photodiodes, and cameras. Using a priori information obtained through calibration experiments in known thermal environments, the color change can then be calibrated to yield accurate quantitative temperature information. Temperature information can be obtained at a point, or over an entire surface, depending on the type of equipment used for data acquisition. Because this innovation operates on spectrophotometry principles, rather than the photoluminescence principles of current methods, white light can be used for illumination rather than high-intensity short-wavelength excitation. The generation of high-intensity white (or potentially filtered long-wavelength) light is much easier, and such light is used more prevalently for photography and video technologies. In outdoor tests, the Sun can be used for short durations as an illumination source as long as its amplitude remains relatively constant. The reflected light is also much higher in intensity than the light emitted by the inefficient current methods. Having a much brighter surface allows a wider array of detection schemes and devices. Because color change is the principle of operation, high-quality, lower-cost digital cameras can be used for detection, as opposed to the high-cost imagers needed for intensity measurements with the current methods. Alternative methods of detection are possible to increase the measurement sensitivity. For example, a monochrome camera can be used with an appropriate filter to make a radiometric measurement of normalized intensity change that is proportional to the change in coating temperature. Using different spectral regions yields different sensitivities and calibration curves for converting intensity change to temperature units. Alternatively, using a color camera, a ratio of the standard red, green, and blue outputs can be used as a self-referenced change. The blue region (less than 500 nm) changes far less than the red region (greater than 575 nm), so a ratio of color intensities yields a calibrated temperature image. The new temperature-sensor coating is easy to apply, is inexpensive, can conform to complex-shaped surfaces, and can serve as a global surface measurement system based on spectrophotometry. The color change, or relative intensity change at different colors, makes optical detection under white-light illumination, and its interpretation, much easier than in the detection systems of the current methods.
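A sketch of the ratio-based readout described above, interpolating a hypothetical red/blue-ratio calibration table; the table values are placeholders, not measured calibration data.

    import numpy as np

    # Hypothetical calibration: red/blue intensity ratio vs. temperature,
    # as would be measured in a furnace of known temperature.
    CAL_RATIO = np.array([3.2, 2.6, 2.0, 1.5, 1.1])   # falls toward gray
    CAL_TEMP_C = np.array([25., 200., 350., 480., 600.])

    def coating_temperature(rgb):
        """Map the red/blue ratio of a color-camera pixel (or ROI mean)
        to temperature via the a priori calibration curve. The blue
        channel changes little with temperature, so it serves as the
        self-reference described above."""
        ratio = rgb[..., 0] / np.maximum(rgb[..., 2], 1e-6)
        # np.interp needs increasing x, so flip the descending axis.
        return np.interp(ratio, CAL_RATIO[::-1], CAL_TEMP_C[::-1])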
AGN Unification at z ~ 1: u - R Colors and Gradients in X-Ray AGN Hosts
NASA Astrophysics Data System (ADS)
Ammons, S. Mark; Rosario, David J. V.; Koo, David C.; Dutton, Aaron A.; Melbourne, Jason; Max, Claire E.; Mozena, Mark; Kocevski, Dale D.; McGrath, Elizabeth J.; Bouwens, Rychard J.; Magee, Daniel K.
2011-10-01
We present uncontaminated rest-frame u - R colors of 78 X-ray-selected active galactic nucleus (AGN) hosts at 0.5 < z < 1.5 in the Chandra Deep Fields measured with Hubble Space Telescope (HST)/Advanced Camera for Surveys/NICMOS and Very Large Telescope/ISAAC imaging. We also present spatially resolved NUV - R color gradients for a subsample of AGN hosts imaged by HST/Wide Field Camera 3 (WFC3). Integrated, uncorrected photometry is not reliable for comparing the mean properties of soft and hard AGN host galaxies at z ~ 1 due to color contamination from point-source AGN emission. We use a cloning simulation to develop a calibration between concentration and this color contamination and use this to correct host galaxy colors. The mean u - R color of the unobscured/soft hosts beyond ~6 kpc is statistically equivalent to that of the obscured/hard hosts (the soft sources are 0.09 ± 0.16 mag bluer). Furthermore, the rest-frame V - J colors of the obscured and unobscured hosts beyond ~6 kpc are statistically equivalent, suggesting that the two populations have similar distributions of dust extinction. For the WFC3/infrared sample, the mean NUV - R color gradients of unobscured and obscured sources differ by less than ~0.5 mag for r > 1.1 kpc. These three observations imply that AGN obscuration is uncorrelated with the star formation rate beyond ~1 kpc. These observations favor a unification scenario for intermediate-luminosity AGNs in which obscuration is determined geometrically. Scenarios in which the majority of intermediate-luminosity AGNs at z ~ 1 are undergoing rapid, galaxy-wide quenching due to AGN-driven feedback processes are disfavored.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buie, Marc W.; Young, Eliot F.; Young, Leslie A.
We present new imaging of the surface of Pluto and Charon obtained during 2002-2003 with the Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS) instrument. Using these data, we construct two-color albedo maps for the surfaces of both Pluto and Charon. Similar mapping techniques are used to re-process HST/Faint Object Camera (FOC) images taken in 1994. The FOC data provide information in the ultraviolet and blue wavelengths that show a marked trend of UV-bright material toward the sunlit pole. The ACS data are taken at two optical wavelengths and show widespread albedo and color variegation on the surface of Pluto and hint at a latitudinal albedo trend on Charon. The ACS data also provide evidence for a decreasing albedo for Pluto at blue (435 nm) wavelengths, while the green (555 nm) data are consistent with a static surface over the one-year period of data collection. We use the two maps to synthesize a true visual color map of Pluto's surface and investigate trends in color. The mid- to high-latitude region on the sunlit pole is, on average, more neutral in color and generally higher albedo than the rest of the surface. Brighter surfaces also tend to be more neutral in color and show minimal color variations. The darker regions show considerable color diversity, arguing that there must be a range of compositional units in the dark regions. Color variations are weak when sorted by longitude. These data are also used to constrain astrometric corrections that enable more accurate orbit fitting, both for the heliocentric orbit of the barycenter and the orbit of Pluto and Charon about their barycenter.
Digital methods of recording color television images on film tape
NASA Astrophysics Data System (ADS)
Krivitskaya, R. Y.; Semenov, V. M.
1985-04-01
Three methods are now available for recording color television images on film tape, directly or after appropriate signal processing. Conventional recording of images from the screens of three kinescopes with synthetic-crystal face plates is still most effective for high fidelity. This method has been improved by digital preprocessing of the brightness and color-difference signals. Frame-by-frame storage of these signals in memory in digital form is followed by gamma and aperture correction and by electronic correction of crossover distortions in the color layers of the film, with fixing in accordance with specific emulsion procedures. The newer method of recording color television images with line arrays of light-emitting diodes involves dichroic superposing mirrors and a movable scanning mirror. This method allows the use of standard movie cameras, simplifies interlaced-to-linewise conversion and the mechanical equipment, and lengthens exposure time while shortening recording time. The latest, image-transform method requires an audio-video recorder, a memory disk, a digital computer, and a decoder. The nine-step procedure begins with preprocessing of the total color television signal, with reduction of noise level and time errors, followed by frame-frequency conversion and setting of the number of lines. The total signal is then resolved into its brightness and color-difference components, and phase errors and image blurring are also reduced. After extraction of the R, G, B signals and colorimetric matching of the TV camera and film tape, the simultaneous R, G, B signals are converted from interlaced scanning to sequential triads of color-separation frames with linewise scanning at triple frequency. The color-separation signals are recorded with an electron beam on a smoothly moving black-and-white film tape under vacuum. While digital techniques improve signal quality and simplify process control, not requiring stabilization of circuits, the image processing itself is still analog.
Competitive Parallel Processing For Compression Of Data
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Fender, Antony R. H.
1990-01-01
Momentarily-best compression algorithm selected. Proposed competitive-parallel-processing system compresses data for transmission in channel of limited bandwidth. Likely application for compression lies in high-resolution, stereoscopic color-television broadcasting. Data from information-rich source like color-television camera compressed by several processors, each operating with different algorithm. Referee processor selects momentarily-best compressed output.
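The scheme is easy to mock up with general-purpose codecs. The sketch below is illustrative only (the original concept targeted dedicated video-compression hardware, not these byte-oriented codecs): it runs several compressors on the same block in parallel and lets a "referee" keep the smallest output.

```python
import bz2
import lzma
import zlib
from concurrent.futures import ProcessPoolExecutor

# Stand-in codecs competing on each data block.
CODECS = {"zlib": zlib.compress, "bz2": bz2.compress, "lzma": lzma.compress}

def referee_compress(block: bytes):
    """Compress one block with every codec in parallel and keep the
    momentarily-best (smallest) output, as the referee processor would."""
    with ProcessPoolExecutor() as pool:
        futures = {name: pool.submit(fn, block) for name, fn in CODECS.items()}
        results = {name: fut.result() for name, fut in futures.items()}
    best = min(results, key=lambda name: len(results[name]))
    return best, results[best]

if __name__ == "__main__":
    winner, payload = referee_compress(b"example scan line data " * 1000)
    print(winner, len(payload))
```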
USDA-ARS?s Scientific Manuscript database
The overall objective of this research was to develop an in-field presorting and grading system to separate undersized and defective fruit from fresh market-grade apples. To achieve this goal, a cost-effective machine vision inspection prototype was built, which consisted of a low-cost color camera,...
NASA Astrophysics Data System (ADS)
Dickensheets, David L.; Kreitinger, Seth; Peterson, Gary; Heger, Michael; Rajadhyaksha, Milind
2016-02-01
Reflectance Confocal Microscopy, or RCM, is being increasingly used to guide diagnosis of skin lesions. The combination of widefield dermoscopy (WFD) with RCM is highly sensitive (~90%) and specific (~90%) for noninvasively detecting melanocytic and non-melanocytic skin lesions. The combined WFD and RCM approach is being implemented on patients to triage lesions into benign (with no biopsy) versus suspicious (followed by biopsy and pathology). Currently, however, WFD and RCM imaging are performed with separate instruments, while using an adhesive ring attached to the skin to sequentially image the same region and co-register the images. The latest small handheld RCM instruments offer no provision yet for a co-registered wide-field image. This paper describes an innovative solution that integrates an ultra-miniature dermoscopy camera into the RCM objective lens, providing simultaneous wide-field color images of the skin surface and RCM images of the subsurface cellular structure. The objective lens (0.9 NA) includes a hyperhemisphere lens and an ultra-miniature CMOS color camera, commanding a 4 mm wide dermoscopy view of the skin surface. The camera obscures the central portion of the aperture of the objective lens, but the resulting annular aperture provides excellent RCM optical sectioning and resolution. Preliminary testing on healthy volunteers showed the feasibility of combined WFD and RCM imaging to concurrently show the skin surface in wide-field and the underlying microscopic cellular-level detail. The paper describes this unique integrated dermoscopic WFD/RCM lens, and shows representative images. The potential for dermoscopy-guided RCM for skin cancer diagnosis is discussed.
NASA Astrophysics Data System (ADS)
Havens, Timothy C.; Spain, Christopher J.; Ho, K. C.; Keller, James M.; Ton, Tuan T.; Wong, David C.; Soumekh, Mehrdad
2010-04-01
Forward-looking ground-penetrating radar (FLGPR) has received a significant amount of attention for use in explosive-hazards detection. A drawback to FLGPR is that it results in an excessive number of false detections. This paper presents our analysis of the explosive-hazards detection system tested by the U.S. Army Night Vision and Electronic Sensors Directorate (NVESD). The NVESD system combines an FLGPR with a visible-spectrum color camera. We present a target detection algorithm that uses a locally-adaptive detection scheme with spectrum-based features. The remaining FLGPR detections are then projected into the camera imagery and image-based features are collected. A one-class classifier is then used to reduce the number of false detections. We show that our proposed FLGPR target detection algorithm, coupled with our camera-based false alarm (FA) reduction method, is effective at reducing the number of FAs in test data collected at a US Army test facility.
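As a rough sketch of the final screening stage, the following Python fragment trains a one-class classifier on image-based features of confirmed targets and rejects candidate detections that fall outside the learned support. The feature arrays are synthetic placeholders, and scikit-learn's OneClassSVM stands in for whatever classifier the authors actually used:

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Hypothetical image-based feature vectors at projected FLGPR hits:
# rows are detections, columns are color/texture features from camera chips.
rng = np.random.default_rng(0)
target_features = rng.normal(0.0, 1.0, size=(200, 8))    # confirmed targets (training)
candidate_features = rng.normal(0.5, 1.5, size=(50, 8))  # new detections to screen

# One-class classifier trained only on target examples; detections that fall
# outside the learned support are declared false alarms and discarded.
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(target_features)
keep = clf.predict(candidate_features) == 1  # +1 = consistent with targets
print(f"kept {keep.sum()} of {len(keep)} detections")
```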
Low-cost panoramic infrared surveillance system
NASA Astrophysics Data System (ADS)
Kecskes, Ian; Engel, Ezra; Wolfe, Christopher M.; Thomson, George
2017-05-01
A nighttime surveillance concept consisting of a single-surface omnidirectional mirror assembly and an uncooled vanadium oxide (VOx) longwave infrared (LWIR) camera has been developed. This configuration provides a continuous field of view spanning 360° in azimuth and more than 110° in elevation. Both the camera and the mirror are readily available, off-the-shelf, inexpensive products. The mirror assembly is marketed for use in the visible spectrum and requires only minor modifications to function in the LWIR spectrum. The compactness and portability of this optical package offer significant advantages over many existing infrared surveillance systems. The developed system was evaluated on its ability to detect moving, human-sized heat sources at ranges between 10 m and 70 m. Raw camera images captured by the system are converted from rectangular coordinates in the camera focal plane to polar coordinates and then unwrapped into the user's azimuth and elevation system. Digital background subtraction and color mapping are applied to the images to increase the user's ability to extract moving items from background clutter. A second optical system, consisting of a commercially available 50 mm f/1.2 ATHERM lens and a second LWIR camera, is used to examine the details of objects of interest identified using the panoramic imager. A description of the components of the proof of concept is given, followed by a presentation of raw images taken by the panoramic LWIR imager. A description of the method by which these images are analyzed is given, along with a presentation of these results side-by-side with the output of the 50 mm LWIR imager and a panoramic visible-light imager. Finally, a discussion of the concept and its future development is given.
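The rectangular-to-polar unwrapping and background-subtraction steps can be sketched with standard OpenCV calls. In this illustrative fragment, the mirror center and inner/outer annulus radii are assumed known from a one-time alignment, and MOG2 background subtraction plus a jet color map stand in for the paper's unspecified processing:

```python
import cv2
import numpy as np

def unwrap_panoramic(frame, center, r_min, r_max, radial_px=256, angular_px=1440):
    """Unwrap the donut-shaped mirror image into an azimuth/elevation strip.

    center, r_min and r_max (the mirror annulus on the focal plane) are
    assumed known from a one-time alignment step.
    """
    polar = cv2.warpPolar(frame, (radial_px, angular_px), center, r_max,
                          cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR)
    # Columns inside r_min are the mirror's central blind spot; crop them.
    crop = int(radial_px * r_min / r_max)
    # Rotate so azimuth runs horizontally and elevation vertically.
    return cv2.rotate(polar[:, crop:], cv2.ROTATE_90_COUNTERCLOCKWISE)

# Background subtraction and color mapping to pull movers out of clutter.
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)

def highlight_movers(strip_gray):
    """strip_gray: single-channel 8-bit unwrapped LWIR strip."""
    mask = subtractor.apply(strip_gray)
    return cv2.applyColorMap(cv2.bitwise_and(strip_gray, mask), cv2.COLORMAP_JET)
```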
Mastcam Special Filters Help Locate Variations Ahead
2017-11-01
This pair of images from the Mast Camera (Mastcam) on NASA's Curiosity rover illustrates how special filters are used to scout terrain ahead for variations in the local bedrock. The upper panorama is in the Mastcam's usual full color, for comparison. The lower panorama of the same scene, in false color, combines three exposures taken through different "science filters," each selecting for a narrow band of wavelengths. Filters and image processing steps were selected to make stronger signatures of hematite, an iron-oxide mineral, evident as purple. Hematite is of interest in this area of Mars -- partway up "Vera Rubin Ridge" on lower Mount Sharp -- as holding clues about ancient environmental conditions under which that mineral originated. In this pair of panoramas, the strongest indications of hematite appear related to areas where the bedrock is broken up. With information from this Mastcam reconnaissance, the rover team selected destinations in the scene for close-up investigations to gain understanding about the apparent patchiness in hematite spectral features. The Mastcam's left-eye camera took the component images of both panoramas on Sept. 12, 2017, during the 1,814th Martian day, or sol, of Curiosity's work on Mars. The view spans from south-southeast on the left to south-southwest on the right. The foreground across the bottom of the scene is about 50 feet (about 15 meters) wide. Figure 1 includes scale bars of 1 meter (3.3 feet) in the middle distance and 5 meters (16 feet) at upper right. Curiosity's Mastcam combines two cameras: the right eye with a telephoto lens and the left eye with a wider-angle lens. Each camera has a filter wheel that can be rotated in front of the lens for a choice of eight different filters. One filter for each camera is clear to all visible light, for regular full-color photos, and another is specifically for viewing the Sun. Some of the other filters were selected to admit wavelengths of light that are useful for identifying iron minerals. Each of the filters used for the lower panorama shown here admits light from a narrow band of wavelengths, extending to only about 5 to 10 nanometers longer or shorter than the filter's central wavelength. The three observations combined into this product used filters centered at three near-infrared wavelengths: 751 nanometers, 867 nanometers and 1,012 nanometers. Hematite distinctively absorbs some frequencies of infrared light more than others. Usual color photographs from digital cameras -- such as the upper panorama here from Mastcam -- combine information from red, green and blue filtering. The filters are in a microscopic grid in a "Bayer" filter array situated directly over the detector behind the lens, with wider bands of wavelengths. The colors of the upper panorama, as with most featured images from Mastcam, have been tuned with a color adjustment similar to white balancing for approximating how the rocks and sand would appear under daytime lighting conditions on Earth. https://photojournal.jpl.nasa.gov/catalog/PIA22065
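Compositing three narrowband exposures into a false-color product, as described above, amounts to assigning each filter image to a display channel after a contrast stretch. A minimal Python sketch (a simple per-band percentile stretch; the actual Mastcam pipeline is more elaborate) might look like this:

```python
import numpy as np

def false_color_composite(f751, f867, f1012):
    """Build a false-color RGB composite from three narrowband exposures
    (751 nm, 867 nm, 1012 nm), given as 2D float arrays of equal shape,
    assumed already co-registered."""
    def stretch(band, lo_pct=2, hi_pct=98):
        # Independent percentile stretch per band to use the full display range.
        lo, hi = np.percentile(band, [lo_pct, hi_pct])
        return np.clip((band - lo) / (hi - lo + 1e-9), 0.0, 1.0)
    # Longest wavelength mapped to red, shortest to blue, mirroring the
    # ordering of the visible spectrum.
    return np.dstack([stretch(f1012), stretch(f867), stretch(f751)])
```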
Xiao, Jingjing; Stolkin, Rustam; Gao, Yuqing; Leonardis, Ales
2017-09-06
This paper presents a novel robust method for single target tracking in RGB-D images, and also contributes a substantial new benchmark dataset for evaluating RGB-D trackers. While a target object's color distribution is reasonably motion-invariant, this is not true for the target's depth distribution, which continually varies as the target moves relative to the camera. It is therefore nontrivial to design target models which can fully exploit (potentially very rich) depth information for target tracking. For this reason, much of the previous RGB-D literature relies on color information for tracking, while exploiting depth information only for occlusion reasoning. In contrast, we propose an adaptive range-invariant target depth model, and show how both depth and color information can be fully and adaptively fused during the search for the target in each new RGB-D image. We introduce a new, hierarchical, two-layered target model (comprising local and global models) which uses spatio-temporal consistency constraints to achieve stable and robust on-the-fly target relearning. In the global layer, multiple features, derived from both color and depth data, are adaptively fused to find a candidate target region. In ambiguous frames, where one or more features disagree, this global candidate region is further decomposed into smaller local candidate regions for matching to local-layer models of small target parts. We also note that conventional use of depth data, for occlusion reasoning, can easily trigger false occlusion detections when the target moves rapidly toward the camera. To overcome this problem, we show how combining target information with contextual information enables the target's depth constraint to be relaxed. Our adaptively relaxed depth constraints can robustly accommodate large and rapid target motion in the depth direction, while still enabling the use of depth data for highly accurate reasoning about occlusions. For evaluation, we introduce a new RGB-D benchmark dataset with per-frame annotated attributes and extensive bias analysis. Our tracker is evaluated using two different state-of-the-art methodologies, VOT and object tracking benchmark, and in both cases it significantly outperforms four other state-of-the-art RGB-D trackers from the literature.
Performance measurement of commercial electronic still picture cameras
NASA Astrophysics Data System (ADS)
Hsu, Wei-Feng; Tseng, Shinn-Yih; Chiang, Hwang-Cheng; Cheng, Jui-His; Liu, Yuan-Te
1998-06-01
Commercial electronic still picture cameras need a low-cost, systematic method for evaluating their performance. In this paper, we present a measurement method for evaluating the dynamic range and sensitivity by constructing the opto-electronic conversion function (OECF), the fixed pattern noise by the peak S/N ratio (PSNR) and the image shading function (ISF), and the spatial resolution by the modulation transfer function (MTF). Evaluation results for the individual color components and the luminance signal from a PC camera using a SONY interlaced CCD array as the image sensor are then presented.
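Two of the metrics named above are straightforward to compute from test-chart captures. The sketch below shows one plausible form of the OECF (digital output versus log luminance of gray patches) and a flat-field PSNR that exposes fixed pattern noise; patch measurement and chart details are assumed handled elsewhere, and this is not the authors' procedure:

```python
import numpy as np

def oecf(patch_means, patch_luminances):
    """Opto-electronic conversion function: mean digital value of each gray
    patch versus log10 scene luminance. Patch means and known luminances
    (from a step chart) are assumed measured elsewhere."""
    lum = np.asarray(patch_luminances, dtype=float)
    dv = np.asarray(patch_means, dtype=float)
    order = np.argsort(lum)
    return np.log10(lum[order]), dv[order]

def fixed_pattern_psnr(flat_field, bit_depth=8):
    """Peak S/N ratio of a flat-field capture: peak signal over the spatial
    standard deviation, which is dominated by fixed pattern noise after
    temporal averaging."""
    peak = 2.0 ** bit_depth - 1.0
    return 20.0 * np.log10(peak / np.std(flat_field.astype(float)))

# Example: a nearly uniform 8-bit flat field with faint column striping.
field = 128 + np.tile(np.arange(640) % 2, (480, 1))
print(f"PSNR = {fixed_pattern_psnr(field):.1f} dB")
```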
Imaging experiment: The Viking Lander
Mutch, T.A.; Binder, A.B.; Huck, F.O.; Levinthal, E.C.; Morris, E.C.; Sagan, C.; Young, A.T.
1972-01-01
The Viking Lander Imaging System will consist of two identical facsimile cameras. Each camera has a high-resolution mode with an instantaneous field of view of 0.04°, and survey and color modes with instantaneous fields of view of 0.12°. Cameras are positioned one meter apart to provide stereoscopic coverage of the near-field. The Imaging Experiment will provide important information about the morphology, composition, and origin of the Martian surface and atmospheric features. In addition, lander pictures will provide supporting information for other experiments in biology, organic chemistry, meteorology, and physical properties. © 1972.
Remote sensing of water quality in reservoirs and lakes in semi-arid climates
NASA Technical Reports Server (NTRS)
Anderson, H. M.; Horne, A. J.
1975-01-01
Overlake measurements using aerial cameras (remote sensing), combined with water-truth data collected from boats, most economically provided wide-band photographs rather than precise spectra. With the use of false-color infrared film (400-950 nm), the reflected spectral signatures seen from hundreds to thousands of meters above the lake merged to produce various color tones. Such colors were easily and inexpensively obtained and could be recognized by lake management personnel without any prior training. The characteristic spectral signatures of various algal types were also recognizable in part by the color tone produced by remote sensing.
Video and thermal imaging system for monitoring interiors of high temperature reaction vessels
Saveliev, Alexei V [Chicago, IL; Zelepouga, Serguei A [Hoffman Estates, IL; Rue, David M [Chicago, IL
2012-01-10
A system and method for real-time monitoring of the interior of a combustor or gasifier wherein light emitted by the interior surface of a refractory wall of the combustor or gasifier is collected using an imaging fiber optic bundle having a light receiving end and a light output end. Color information in the light is captured with primary color (RGB) filters or complementary color (GMCY) filters placed over individual pixels of color sensors disposed within a digital color camera in a Bayer mosaic layout, producing RGB signal outputs or GMCY signal outputs. The signal outputs are processed using intensity ratios of the primary color filters or the complementary color filters, producing video images and/or thermal images of the interior of the combustor or gasifier.
Wang, Chenglin; Tang, Yunchao; Zou, Xiangjun; Luo, Lufeng; Chen, Xiong
2017-01-01
Recognition and matching of litchi fruits are critical steps for litchi harvesting robots to successfully grasp litchi. However, due to the randomness of litchi growth, such as clustered growth with an uncertain number of fruits and random occlusion by leaves, branches and other fruits, the recognition and matching of the fruit become a challenge. Therefore, this study first defined three categories of clustered mature litchi fruit. Then an approach for recognition and matching of clustered mature litchi fruit was developed based on litchi color images acquired by binocular charge-coupled device (CCD) color cameras. The approach mainly included three steps: (1) calibration of the binocular color cameras and litchi image acquisition; (2) segmentation of litchi fruits using four kinds of supervised classifiers, and recognition of the pre-defined categories of clustered litchi fruit using a pixel threshold method; and (3) matching the recognized clustered fruit using a geometric center-based matching method. The experimental results showed that the proposed recognition method could be robust against the influences of varying illumination and occlusion conditions, and precisely recognize clustered litchi fruit. Among the 432 clustered litchi fruits tested, the highest and lowest average recognition rates were 94.17% and 92.00% under sunny back-lighting and partial occlusion, and sunny front-lighting and non-occlusion conditions, respectively. From 50 pairs of tested images, the highest and lowest matching success rates were 97.37% and 91.96% under sunny back-lighting and non-occlusion, and sunny front-lighting and partial occlusion conditions, respectively. PMID:29112177
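Step (3), the geometric center-based matching, can be illustrated compactly: on rectified stereo images, matching clusters lie on nearly the same image row, so centroids can be paired by vertical proximity. The following Python sketch (hypothetical bounding-box inputs and row tolerance; not the authors' implementation) pairs left/right detections and returns their disparities:

```python
import numpy as np

def match_clusters_by_centroid(left_boxes, right_boxes, max_dy=8.0):
    """Pair fruit clusters across rectified left/right images by the geometric
    centers of their bounding boxes. Boxes are hypothetical (x, y, w, h)
    tuples from the segmentation stage; max_dy is an assumed row tolerance."""
    def centers(boxes):
        return np.array([(x + w / 2.0, y + h / 2.0) for x, y, w, h in boxes])

    cl, cr = centers(left_boxes), centers(right_boxes)
    matches = []
    for i, (xl, yl) in enumerate(cl):
        dy = np.abs(cr[:, 1] - yl)            # vertical offsets to all right boxes
        j = int(np.argmin(dy))
        disparity = xl - cr[j, 0]
        if dy[j] <= max_dy and disparity > 0:  # same row, positive disparity
            matches.append((i, j, disparity))  # (left idx, right idx, disparity)
    return matches

# Example with one cluster pair; the disparity would feed depth computation.
print(match_clusters_by_centroid([(100, 50, 40, 40)], [(80, 52, 40, 40)]))
```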
[Constructing 3-dimensional colorized digital dental model assisted by digital photography].
Ye, Hong-qiang; Liu, Yu-shu; Liu, Yun-song; Ning, Jing; Zhao, Yi-jiao; Zhou, Yong-sheng
2016-02-18
To explore a method of constructing a universal 3-dimensional (3D) colorized digital dental model which can be displayed and edited in common 3D software (such as the Geomagic series), in order to improve the visual effect of digital dental models in 3D software. The morphological data of teeth and gingivae were obtained by an intra-oral scanning system (3Shape TRIOS) to construct 3D digital dental models, which were exported as STL files. Meanwhile, referring to the accredited photography guide of the American Academy of Cosmetic Dentistry (AACD), five selected digital photographs of the patients' teeth and gingivae were taken by a digital single-lens reflex camera (DSLR) with the same exposure parameters (except occlusal views) to capture the color data. In Geomagic Studio 2013, after the STL file of the 3D digital dental model was imported, the digital photographs were projected onto the 3D digital dental model at the corresponding positions and angles. The junctions of different photos were carefully trimmed to obtain continuous and natural color transitions. The 3D colorized digital dental model was thus constructed and exported as an OBJ file or a WRP file, the latter being a format specific to the Geomagic series software. To evaluate the visual effect of the 3D colorized digital model, a rating scale on color simulation effect from the patients' point of view was used. Sixteen patients were recruited and their scores on colored and non-colored digital dental models were recorded. The data were analyzed using the McNemar-Bowker test in SPSS 20. A universal 3D colorized digital dental model with good color simulation was constructed based on intra-oral scanning and digital photography. For clinical application, the 3D colorized digital dental models, combined with 3D face images, were introduced into the 3D smile design of aesthetic rehabilitation, which could improve patients' understanding of the esthetic digital design and the virtual prosthetic effect. A universal 3D colorized digital dental model with good color simulation can be constructed with the aid of a 3D dental scanning system and digital photography. In clinical practice, communication between dentists and patients could be improved by the better visual perception afforded by colorized 3D digital dental models with good color simulation.
Layers of 'Cabo Frio' in 'Victoria Crater' (False Color)
NASA Technical Reports Server (NTRS)
2006-01-01
This view of 'Victoria crater' is looking southeast from 'Duck Bay' towards the dramatic promontory called 'Cabo Frio.' The small crater in the right foreground, informally known as 'Sputnik,' is about 20 meters (about 65 feet) away from the rover, the tip of the spectacular, layered, Cabo Frio promontory itself is about 200 meters (about 650 feet) away from the rover, and the exposed rock layers are about 15 meters (about 50 feet) tall. This is an enhanced false color rendering of images taken by the panoramic camera (Pancam) on NASA's Mars Exploration Rover Opportunity during the rover's 952nd sol, or Martian day, (Sept. 28, 2006) using the camera's 750-nanometer, 530-nanometer and 430-nanometer filters.
Real-Time View Correction for Mobile Devices.
Schops, Thomas; Oswald, Martin R; Speciale, Pablo; Yang, Shuoran; Pollefeys, Marc
2017-11-01
We present a real-time method for rendering novel virtual camera views from given RGB-D (color and depth) data of a different viewpoint. Missing color and depth information due to incomplete input or disocclusions is efficiently inpainted in a temporally consistent way. The inpainting takes the location of strong image gradients into account as likely depth discontinuities. We present our method in the context of a view correction system for mobile devices, and discuss how to obtain a screen-camera calibration and options for acquiring depth input. Our method has use cases in both augmented and virtual reality applications. We demonstrate the speed of our system and the visual quality of its results in multiple experiments in the paper as well as in the supplementary video.
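The core of such view correction is forward-warping an RGB-D frame into the novel camera. A bare-bones Python sketch is given below, assuming a shared pinhole intrinsic matrix K and a 4x4 relative pose; the temporally consistent, gradient-aware inpainting that the paper emphasizes is deliberately omitted:

```python
import numpy as np

def reproject_rgbd(color, depth, K, T_new_from_old):
    """Forward-warp an RGB-D frame into a novel view (illustrative only).

    color: (H, W, 3) array; depth: (H, W) array in the same units as the
    pose translation; K: 3x3 pinhole intrinsics; T_new_from_old: 4x4 pose.
    Disocclusion holes are left black; the paper inpaints them separately.
    """
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    ones = np.ones(h * w)
    z = depth.ravel()
    valid = z > 0
    # Back-project every pixel, transform, and re-project into the new view.
    pts = (np.linalg.inv(K) @ np.vstack([u.ravel(), v.ravel(), ones])) * z
    pts = (T_new_from_old @ np.vstack([pts, ones]))[:3, valid]
    proj = K @ pts
    un = (proj[0] / proj[2]).astype(int)
    vn = (proj[1] / proj[2]).astype(int)
    out = np.zeros_like(color)
    zbuf = np.full((h, w), np.inf)
    inside = (un >= 0) & (un < w) & (vn >= 0) & (vn < h)
    src = np.flatnonzero(valid)[inside]
    flat_color = color.reshape(-1, color.shape[-1])
    # A z-buffer resolves multiple pixels landing on the same destination.
    for s, x, y, d in zip(src, un[inside], vn[inside], proj[2][inside]):
        if d < zbuf[y, x]:
            zbuf[y, x] = d
            out[y, x] = flat_color[s]
    return out
```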
NASA Technical Reports Server (NTRS)
1987-01-01
Used to detect eye problems in children through analysis of retinal reflexes, the system incorporates image processing techniques. VISISCREEN's photorefractor is basically a 35 millimeter camera with a telephoto lens and an electronic flash. By making a color photograph, the system can test the human eye for refractive error and obstruction in the cornea or lens. Ocular alignment problems are detected by imaging both eyes simultaneously. Electronic flash sends light into the eyes and the light is reflected from the retina back to the camera lens. Photorefractor analyzes the retinal reflexes generated by the subject's response to the flash and produces an image of the subject's eyes in which the pupils are variously colored. The nature of a defect, where such exists, is identifiable by a trained observer's visual examination.
Layers of 'Cape Verde' in 'Victoria Crater' (False Color)
NASA Technical Reports Server (NTRS)
2006-01-01
This view of Victoria crater is looking north from 'Duck Bay' towards the dramatic promontory called 'Cape Verde.' The dramatic cliff of layered rocks is about 50 meters (about 165 feet) away from the rover and is about 6 meters (about 20 feet) tall. The taller promontory beyond that is about 100 meters (about 325 feet) away, and the vista beyond that extends away for more than 400 meters (about 1300 feet) into the distance. This is an enhanced false color rendering of images taken by the panoramic camera (Pancam) on NASA's Mars Exploration Rover Opportunity during the rover's 952nd sol, or Martian day, (Sept. 28, 2006) using the camera's 750-nanometer, 530-nanometer and 430-nanometer filters.
Skylab investigation of the upwelling off the Northwest coast of Africa
NASA Technical Reports Server (NTRS)
Szekielda, K. H.; Suszkowski, D. J.; Tabor, P. S.
1975-01-01
The upwelling off the NW coast of Africa in the vicinity of Cape Blanc was studied in February - March 1974 from aircraft and in September 1973 from Skylab. The aircraft study was designed to determine the effectiveness of a differential radiometer in quantifying surface chlorophyll concentrations. Photographic images of the S190A Multispectral Camera and the S190B Earth Terrain Camera from Skylab were used to study distributional patterns of suspended material and to locate ocean color boundaries. The thermal channel of the S192 Multispectral Scanner was used to map sea-surface temperature distributions offshore of Cape Blanc. Correlating ocean color changes with temperature gradients is an effective method of qualitatively estimating biological productivity in the upwelling region off Africa.
In Situ 3D Segmentation of Individual Plant Leaves Using a RGB-D Camera for Agricultural Automation.
Xia, Chunlei; Wang, Longtan; Chung, Bu-Keun; Lee, Jang-Myung
2015-08-19
In this paper, we present the challenging task of 3D segmentation of individual plant leaves from occlusions in complicated natural scenes. Depth data of plant leaves are introduced to improve the robustness of plant leaf segmentation. A low-cost RGB-D camera is utilized to capture depth and color images in the field. Mean shift clustering is applied to segment plant leaves in the depth image. Plant leaves are extracted from the natural background by examining the vegetation of the candidate segments produced by mean shift. Subsequently, individual leaves are segmented from occlusions by active contour models. Automatic initialization of the active contour models is implemented by calculating the center of divergence from the gradient vector field of the depth image. The proposed segmentation scheme is tested through experiments under greenhouse conditions. The overall segmentation rate is 87.97%, while the segmentation rates for single and occluded leaves are 92.10% and 86.67%, respectively. Approximately half of the experimental results show segmentation rates of individual leaves higher than 90%. Nevertheless, the proposed method is able to segment individual leaves from heavy occlusions.
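The mean-shift step on the depth image can be sketched with scikit-learn. In the fragment below, each pixel is described by its image coordinates and depth value (assumed to be in roughly comparable units) so that clusters remain both spatially compact and depth-coherent; this is an illustration, not the authors' code, and a real image would typically be subsampled for speed:

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def segment_depth(depth, sample=2000):
    """Cluster a depth image into candidate leaf segments with mean shift.

    depth: (H, W) float array; returns an (H, W) integer label image.
    """
    h, w = depth.shape
    rows, cols = np.mgrid[0:h, 0:w]
    # Feature per pixel: (row, col, depth), so clusters stay spatially
    # compact as well as depth-coherent.
    feats = np.column_stack([rows.ravel(), cols.ravel(), depth.ravel()]).astype(float)
    bw = estimate_bandwidth(feats, quantile=0.1, n_samples=sample)
    labels = MeanShift(bandwidth=bw, bin_seeding=True).fit_predict(feats)
    return labels.reshape(h, w)
```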
Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera.
Nguyen, Thuy Tuong; Slaughter, David C; Hanson, Bradley D; Barber, Andrew; Freitas, Amy; Robles, Daniel; Whelan, Erin
2015-07-28
This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a three-step image analysis process based upon: (1) an orthographic plant projection method integrating a perspective transform with automatic parameter estimation; (2) a plant counting method based on projection histograms; and (3) a double-counting avoidance method based on a homography transform. Experimental results demonstrate the ability to count large numbers of plants automatically with no human effort. Results show that, for tree seedlings having a height up to 40 cm and a within-row tree spacing of approximately 10 cm, the algorithms successfully estimated the number of plants with an average accuracy of 95.2% for trees within a single image and 98% for counting of the whole plant population in a large sequence of images.
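Step (2), counting by projection histogram, is simple to illustrate: summing a binary plant mask down each column gives a 1D profile in which each plant appears as a run of non-zero columns. The sketch below (with a hypothetical width threshold; not the authors' implementation) counts such runs:

```python
import numpy as np

def count_plants(plant_mask, min_width=5):
    """Count plants along a nursery row from a binary plant/soil mask.

    The mask is projected onto the row axis by summing down each column;
    each plant shows up as a contiguous run of non-zero columns. min_width
    is an assumed threshold that rejects narrow specks.
    """
    projection = plant_mask.sum(axis=0)          # column-wise foreground counts
    active = projection > 0
    count, run = 0, 0
    for col_active in np.append(active, False):  # trailing sentinel closes last run
        if col_active:
            run += 1
        else:
            if run >= min_width:
                count += 1
            run = 0
    return count

# Example: three well-separated plants in a synthetic mask.
mask = np.zeros((10, 60), dtype=np.uint8)
mask[:, 5:15] = mask[:, 25:33] = mask[:, 45:56] = 1
print(count_plants(mask))  # -> 3
```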