Spectral colors capture and reproduction based on digital camera
NASA Astrophysics Data System (ADS)
Chen, Defen; Huang, Qingmei; Li, Wei; Lu, Yang
2018-01-01
The purpose of this work is to develop a method for the accurate reproduction of spectral colors captured by a digital camera. The spectral colors, being the purest colors of any hue, are difficult to reproduce without distortion on digital devices. In this paper, we attempt to achieve accurate hue reproduction of the spectral colors by focusing on two steps of color correction: the capture of the spectral colors and the color characterization of the digital camera. The method thus determines the relationship among the spectral color wavelength, the RGB color space of the digital camera, and the CIEXYZ color space. This study also provides a basis for further studies of spectral color reproduction on digital devices. Methods such as wavelength calibration of the spectral colors and digital camera characterization were utilized. The spectrum was obtained through a grating spectroscopy system. A photo of a clear and reliable primary spectrum was taken by adjusting the relevant parameters of the digital camera, from which the RGB values of the color spectrum were extracted at 1040 equally divided locations. Two wavelength values were obtained for each location: one calculated using the grating equation and one measured with a spectrophotometer. The polynomial fitting method was used for camera characterization to achieve color correction. After wavelength calibration, the maximum error between the two sets of wavelengths is 4.38 nm. With the polynomial fitting method, the average color difference of the test samples is 3.76. This satisfies the application needs of spectral colors in digital devices, such as display and transmission.
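The camera characterization step described above, fitting a polynomial mapping from camera RGB to CIEXYZ, can be sketched as follows. The second-order basis, the synthetic training data, and all names here are illustrative assumptions, not the paper's actual terms or measurements:

```python
import numpy as np

def poly_terms(rgb):
    """Expand an RGB triplet into a second-order polynomial basis (illustrative)."""
    r, g, b = rgb
    return np.array([1.0, r, g, b, r * g, r * b, g * b, r * r, g * g, b * b])

def fit_characterization(rgb_samples, xyz_samples):
    """Least-squares fit of a polynomial mapping from camera RGB to CIE XYZ."""
    A = np.array([poly_terms(s) for s in rgb_samples])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(xyz_samples), rcond=None)
    return coeffs  # shape (10, 3): one column per tristimulus value

def apply_characterization(coeffs, rgb):
    return poly_terms(rgb) @ coeffs

# Synthetic training data: a hypothetical linear sensor-to-XYZ relationship.
rng = np.random.default_rng(0)
rgb = rng.random((50, 3))
true_M = np.array([[0.41, 0.36, 0.18],
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])   # placeholder matrix, not measured data
xyz = rgb @ true_M.T

coeffs = fit_characterization(rgb, xyz)
pred = apply_characterization(coeffs, np.array([0.5, 0.5, 0.5]))
```

In practice the training pairs would be the RGB values extracted from the spectrum photo and the spectrophotometer readings at the same locations.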
Development of digital shade guides for color assessment using a digital camera with ring flashes.
Tung, Oi-Hong; Lai, Yu-Lin; Ho, Yi-Ching; Chou, I-Chiang; Lee, Shyh-Yuan
2011-02-01
Digital photographs taken with cameras and ring flashes are commonly used for dental documentation. We hypothesized that different illuminants and the camera's white balance setup would influence the color rendering of digital images and affect the effectiveness of color matching using digital images. Fifteen ceramic disks of different shades were fabricated and photographed with a digital camera in both automatic white balance (AWB) and custom white balance (CWB) under either a light-emitting diode (LED) or an electronic ring flash. The Commission Internationale d'Éclairage L*a*b* parameters of the captured images were derived from Photoshop software and served as digital shade guides. We found significantly high correlation coefficients (r² > 0.96) between the respective spectrophotometer standards and the shade guides generated in CWB setups. Moreover, the accuracy of color matching of another set of ceramic disks using digital shade guides, as verified by ten operators, improved from 67% in AWB to 93% in CWB under LED illuminants. Probably because of the inconsistent performance of the flashlight and specular reflection, the digital images captured under the electronic ring flash in both white balance setups proved less reliable and showed relatively low matching ability. In conclusion, the reliability of color matching with digital images is strongly influenced by the illuminants and the camera's white balance setup, while digital shade guides derived under LED illuminants with CWB demonstrate applicable potential in the field of color assessment.
NASA Astrophysics Data System (ADS)
Hashimoto, Atsushi; Suehara, Ken-Ichiro; Kameoka, Takaharu
To measure the quantitative surface color information of agricultural products along with the ambient information during cultivation, a color calibration method for digital camera images and a Web-based remote color-imaging monitoring system were developed. Single-lens reflex and web digital cameras were used for image acquisition. Tomato images through the post-ripening process were taken by the digital camera both in the standard image acquisition system and under field conditions from morning to evening. Several kinds of images were acquired with the standard RGB color chart set up just behind the tomato fruit on a black matte, and a color calibration was carried out. The influence of sunlight could be experimentally eliminated, and the calibrated color information consistently agreed with the standard values acquired in the system through the post-ripening process. Furthermore, the surface color change of the tomato on the tree in a greenhouse was remotely monitored during maturation using digital cameras equipped with the Field Server. The acquired digital color images were sent from the Farm Station to the BIFE Laboratory of Mie University via VPN. The time behavior of the tomato surface color change during the maturing process could be measured using the color parameter calculated from the obtained and calibrated color images, along with the ambient atmospheric record. This study is an important step both in developing surface color analysis for the simple and rapid evaluation of crop vigor in the field and in constructing an ambient, networked remote monitoring system for food security, precision agriculture, and agricultural research.
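The chart-based calibration idea, using known reference patches photographed in the scene to cancel the illumination, can be sketched with a least-squares linear correction. The patch values, the simulated sunlight tint, and the tomato pixel below are synthetic placeholders, not data from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
reference = rng.random((24, 3))      # known chart patch values (illustrative)
tint = np.array([1.3, 1.0, 0.7])     # simulated sunlight color cast
measured = reference * tint          # the camera sees tinted patches

# Fit a linear correction M so that measured @ M ~= reference.
M, *_ = np.linalg.lstsq(measured, reference, rcond=None)

# Apply the same correction to an object pixel captured under the same light.
tomato_measured = np.array([0.8, 0.3, 0.2]) * tint
tomato_corrected = tomato_measured @ M
```

Because the chart and the fruit share the same illumination, the correction fitted on the chart transfers to the fruit, which is the reason the chart is placed directly behind the tomato.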
Color reproduction software for a digital still camera
NASA Astrophysics Data System (ADS)
Lee, Bong S.; Park, Du-Sik; Nam, Byung D.
1998-04-01
We have developed color reproduction software for a digital still camera. The image taken by the camera was colorimetrically reproduced on the monitor after characterizing the camera and the monitor and performing color matching between the two devices. The reproduction was performed at three levels: level processing, gamma correction, and color transformation. The image contrast was increased by the level processing, which adjusts the levels of the dark and bright portions of the image. The relationship between the level-processed digital values and the measured luminance values of test gray samples was calculated, and the gamma of the camera was obtained. A method for obtaining the unknown monitor gamma was proposed. As a result, the level-processed values were adjusted by the look-up table created from the camera and monitor gamma corrections. For the color transformation, a 3-by-3 or 3-by-4 matrix was used, calculated by regression between the gamma-corrected values and the measured tristimulus values of each test color sample. The various reproduced images, generated for four illuminants for the camera and three color temperatures for the monitor, were displayed in a dialog box implemented in our software. A user can easily choose the best reproduced image by comparing them.
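The gamma-correction look-up table mentioned above can be sketched as follows; the gamma value of 2.2 and the 8-bit range are illustrative assumptions, not the camera or monitor gammas measured in the paper:

```python
import numpy as np

def gamma_lut(gamma, levels=256):
    """Build a look-up table mapping 8-bit input to gamma-corrected 8-bit output."""
    x = np.arange(levels) / (levels - 1)           # normalize to [0, 1]
    return np.round((x ** (1.0 / gamma)) * (levels - 1)).astype(np.uint8)

lut = gamma_lut(2.2)                               # assumed gamma, for illustration
image = np.array([[0, 64, 128, 255]], dtype=np.uint8)
corrected = lut[image]                             # LUT applied by array indexing
```

Indexing with the image applies the table to every pixel at once, which is why a LUT is the standard way to implement this step.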
Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy
NASA Technical Reports Server (NTRS)
Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)
2011-01-01
Computed tomography imaging spectrometers ("CTISs") having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3® digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.
Digital Earth Watch: Investigating the World with Digital Cameras
NASA Astrophysics Data System (ADS)
Gould, A. D.; Schloss, A. L.; Beaudry, J.; Pickle, J.
2015-12-01
Every digital camera, including the smartphone camera, can be a scientific tool. Pictures contain millions of color intensity measurements organized spatially, allowing us to measure properties of objects in the images. This presentation will demonstrate how digital pictures can be used for a variety of studies, with a special emphasis on using repeat digital photographs to study change over time in outdoor settings with a Picture Post. Demonstrations will include using inexpensive color filters to take pictures that enhance features in images, such as unhealthy leaves on plants or clouds in the sky. Software available at no cost from the Digital Earth Watch (DEW) website that lets students explore light, color, and pixels, manipulate color in images, and make measurements will be demonstrated. DEW and Picture Post were developed with support from NASA. Please visit our websites: DEW: http://dew.globalsystemsscience.org and Picture Post: http://picturepost.unh.edu
Estimation of color modification in digital images by CFA pattern change.
Choi, Chang-Hee; Lee, Hae-Yeoun; Lee, Heung-Kyu
2013-03-10
Extensive studies have been carried out on detecting image forgery such as copy-move, re-sampling, blurring, and contrast enhancement. Although color modification is a common forgery technique, there is no reported forensic method for detecting this type of manipulation. In this paper, we propose a novel algorithm for estimating color modification in images acquired from digital cameras. Most commercial digital cameras are equipped with a color filter array (CFA) for acquiring the color information of each pixel; as a result, the images acquired from such cameras carry a trace of the CFA pattern. This pattern is composed of the basic red, green, and blue (RGB) colors, and it is changed when color modification is carried out on the image. We designed an advanced intermediate-value counting method for measuring the change in the CFA pattern and estimating the extent of color modification. The proposed method was verified experimentally using 10,366 test images. The results confirmed the ability of the proposed method to estimate color modification with high accuracy.
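The core intuition behind intermediate-value counting, that demosaiced (interpolated) pixels tend to be exact intermediate values of their neighbors while original sensor samples are not, can be sketched on a synthetic green channel. This is a simplified illustration of the counting idea, not the authors' full algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)
H, W = 64, 64
green = rng.random((H, W))                          # synthetic full green channel

# Simulate Bayer sampling of green on a checkerboard, with missing positions
# filled by bilinear interpolation (a common demosaicing choice).
mask = (np.indices((H, W)).sum(axis=0) % 2) == 0    # original sensor positions
interp = (np.roll(green, 1, 0) + np.roll(green, -1, 0)
          + np.roll(green, 1, 1) + np.roll(green, -1, 1)) / 4
demosaiced = np.where(mask, green, interp)

# Count "intermediate values": pixels equal to the mean of their 4 neighbors.
neighbor_mean = (np.roll(demosaiced, 1, 0) + np.roll(demosaiced, -1, 0)
                 + np.roll(demosaiced, 1, 1) + np.roll(demosaiced, -1, 1)) / 4
is_intermediate = np.isclose(demosaiced, neighbor_mean)
rate_sensor = is_intermediate[mask].mean()          # ~0 at original samples
rate_interp = is_intermediate[~mask].mean()         # ~1 at interpolated samples
```

A color modification that shuffles channels moves which positions look interpolated, so comparing such rates across candidate CFA phases is one way a detector could reveal the change.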
[True color accuracy in digital forensic photography].
Ramsthaler, Frank; Birngruber, Christoph G; Kröll, Ann-Katrin; Kettner, Mattias; Verhoff, Marcel A
2016-01-01
Forensic photographs not only need to be unaltered, authentic, and context-relevant and meet certain minimum requirements for image sharpness and information density; color accuracy also plays an important role, for instance in the assessment of injuries or taphonomic stages, or in the identification and evaluation of traces from photos. The perception of color not only varies subjectively from person to person; as a discrete property of an image, color in digital photos is also influenced to a considerable extent by technical factors such as lighting, acquisition settings, camera, and output medium (print, monitor). For these reasons, consistent color accuracy has so far been limited in digital photography. Because images usually contain a wealth of color information, especially for complex or composite colors or shades of color, and the wavelength-dependent sensitivity to factors such as light and shadow may vary between cameras, the usefulness of issuing general recommendations for camera capture settings is limited. Our results indicate that true image colors can best and most realistically be captured with the SpyderCheckr technical calibration tool for digital cameras tested in this study. Apart from the simplicity and quickness of the calibration procedure, a further advantage of the tool is that the results are independent of the camera used and can also be applied to the color management of output devices such as monitors and printers. The SpyderCheckr color-code patches allow true colors to be captured more realistically than with a manual white balance tool or an automatic flash. We therefore recommend that the use of a color management tool be considered for the acquisition of all images that demand high true-color accuracy (in particular in the setting of injury documentation).
Color constancy by characterization of illumination chromaticity
NASA Astrophysics Data System (ADS)
Nikkanen, Jarno T.
2011-05-01
Computational color constancy algorithms play a key role in achieving the desired color reproduction in digital cameras. Failure to estimate illumination chromaticity correctly results in an unwanted overall color cast in the image that is easily detected by human observers. A new algorithm is presented for computational color constancy. Low computational complexity and a low memory requirement make the algorithm suitable for resource-limited camera devices, such as consumer digital cameras and camera phones. Operation of the algorithm relies on characterization of the range of possible illumination chromaticities in terms of the camera sensor response. The fact that only the illumination chromaticity is characterized, instead of the full color gamut, for example, increases robustness against variations in sensor characteristics and against failure of the diagonal model of illumination change. Multiple databases are used to demonstrate the good performance of the algorithm in comparison with state-of-the-art color constancy algorithms.
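Nikkanen's characterization-based algorithm is not reproduced here, but the problem it addresses, estimating illumination chromaticity from image data, can be illustrated with the classic gray-world baseline. The scene and illuminant below are synthetic:

```python
import numpy as np

def gray_world_chromaticity(image):
    """Estimate illumination chromaticity as the normalized channel means
    (gray-world assumption: the average scene reflectance is achromatic)."""
    means = image.reshape(-1, 3).mean(axis=0)
    return means / means.sum()

rng = np.random.default_rng(3)
scene = rng.random((32, 32, 3))           # synthetic achromatic-on-average scene
illum = np.array([1.2, 1.0, 0.6])         # simulated warm illuminant
observed = scene * illum                  # camera response under that illuminant
est = gray_world_chromaticity(observed)   # should approximate illum / sum(illum)
```

Once the chromaticity is estimated, dividing each channel by it removes the cast; the paper's contribution is constraining such estimates to the characterized range of plausible illuminants for the sensor.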
NASA Astrophysics Data System (ADS)
Goiffon, Vincent; Rolando, Sébastien; Corbière, Franck; Rizzolo, Serena; Chabane, Aziouz; Girard, Sylvain; Baer, Jérémy; Estribeau, Magali; Magnan, Pierre; Paillet, Philippe; Van Uffelen, Marco; Mont Casellas, Laura; Scott, Robin; Gaillardin, Marc; Marcandella, Claude; Marcelot, Olivier; Allanche, Timothé
2017-01-01
The Total Ionizing Dose (TID) hardness of digital color Camera-on-a-Chip (CoC) building blocks is explored in the multi-MGy range using 60Co gamma-ray irradiations. The performances of the following CoC subcomponents are studied: radiation-hardened (RH) pixel and photodiode designs, an RH readout chain, Color Filter Arrays (CFA), and column RH Analog-to-Digital Converters (ADC). Several radiation hardness improvements are reported (on the readout chain and on dark current). CFA and ADC degradation appears to be very weak at the maximum TID of 6 MGy(SiO2), i.e., 600 Mrad. This study demonstrates the feasibility of a MGy rad-hard CMOS color digital camera-on-a-chip, illustrated by a color image captured after 6 MGy(SiO2) with no obvious degradation. An original dark current reduction mechanism in irradiated CMOS image sensors is also reported and discussed.
Akkaynak, Derya; Treibitz, Tali; Xiao, Bei; Gürkan, Umut A.; Allen, Justine J.; Demirci, Utkan; Hanlon, Roger T.
2014-01-01
Commercial off-the-shelf digital cameras are inexpensive and easy-to-use instruments that can be used for quantitative scientific data acquisition if images are captured in raw format and processed so that they maintain a linear relationship with scene radiance. Here we describe the image-processing steps required for consistent data acquisition with color cameras. In addition, we present a method for scene-specific color calibration that increases the accuracy of color capture when a scene contains colors that are not well represented in the gamut of a standard color-calibration target. We demonstrate applications of the proposed methodology in the fields of biomedical engineering, artwork photography, perception science, marine biology, and underwater imaging. PMID:24562030
Optimum color filters for CCD digital cameras
NASA Astrophysics Data System (ADS)
Engelhardt, Kai; Kunz, Rino E.; Seitz, Peter; Brunner, Harald; Knop, Karl
1993-12-01
As part of the ESPRIT II project No. 2103 (MASCOT), a high-performance prototype color CCD still video camera was developed. Intended for professional use, such as in the graphic arts, the camera provides a maximum resolution of 3k × 3k full-color pixels. High colorimetric performance was achieved through specially designed dielectric filters and optimized matrixing. The color transformation was obtained by computer simulation of the camera system and non-linear optimization, which minimized the perceivable color errors as measured in the 1976 CIELUV uniform color space for a set of about 200 carefully selected test colors. The color filters were designed to allow, in principle, perfect colorimetric reproduction with imperceptible color noise, with special attention paid to fabrication tolerances. The camera system includes a special real-time digital color processor that carries out the color transformation. The transformation can be selected from a set of sixteen matrices optimized for different illuminants and output devices. Because the actual filter design was based on slightly incorrect data, the prototype camera showed a mean colorimetric error of 2.7 j.n.d. (CIELUV) in experiments. Using correct input data in a redesign of the filters, a mean colorimetric error of only 1 j.n.d. (CIELUV) seems feasible, implying that such an optimized color camera can achieve colorimetric performance so high that the reproduced colors in an image cannot be distinguished from the original colors in a scene, even in direct comparison.
Color correction pipeline optimization for digital cameras
NASA Astrophysics Data System (ADS)
Bianco, Simone; Bruna, Arcangelo R.; Naccari, Filippo; Schettini, Raimondo
2013-04-01
The processing pipeline of a digital camera converts the RAW image acquired by the sensor to a representation of the original scene that should be as faithful as possible. There are mainly two modules responsible for the color-rendering accuracy of a digital camera: the former is the illuminant estimation and correction module, and the latter is the color matrix transformation aimed at adapting the color response of the sensor to a standard color space. These two modules together form what may be called the color correction pipeline. We design and test new color correction pipelines that exploit different illuminant estimation and correction algorithms that are tuned and automatically selected on the basis of the image content. Since illuminant estimation is an ill-posed problem, illuminant correction is not error-free. An adaptive color matrix transformation module is optimized, taking into account the behavior of the first module in order to alleviate the amplification of color errors. The proposed pipelines are tested on a publicly available dataset of RAW images. Experimental results show that exploiting the cross-talk between the modules of the pipeline can lead to higher color-rendition accuracy.
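The two-module pipeline described above can be sketched as a diagonal (von Kries-style) illuminant correction followed by a 3×3 color matrix. The matrix entries, illuminant estimate, and pixel values are illustrative placeholders, not the tuned values from the paper:

```python
import numpy as np

def color_correction_pipeline(raw_rgb, illum_est, color_matrix):
    """Two-stage color correction: divide out the estimated illuminant
    (diagonal von Kries correction), then map sensor RGB to a standard
    color space with a 3x3 matrix."""
    balanced = raw_rgb / illum_est          # module 1: illuminant correction
    return balanced @ color_matrix.T        # module 2: color matrix transform

M = np.array([[ 1.6, -0.4, -0.2],
              [-0.3,  1.5, -0.2],
              [ 0.0, -0.3,  1.3]])          # placeholder sensor-to-standard matrix
pixel = np.array([0.6, 0.45, 0.3])
illum = np.array([1.2, 1.0, 0.8])           # placeholder illuminant estimate
out = color_correction_pipeline(pixel, illum, M)
```

The paper's point is that these two stages interact: an error in `illum` is amplified by `M`, which is why the matrix is adaptively optimized against the behavior of the first module.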
3D Point Cloud Model Colorization by Dense Registration of Digital Images
NASA Astrophysics Data System (ADS)
Crombez, N.; Caron, G.; Mouaddib, E.
2015-02-01
Architectural heritage is a historic and artistic property that has to be protected, preserved, restored, and shown to the public. Modern tools like 3D laser scanners are more and more used in heritage documentation. Most of the time, the 3D laser scanner is complemented by a digital camera, which is used to enrich the accurate geometric information with the scanned objects' colors. However, the photometric quality of the acquired point clouds is generally rather low, for several reasons. We propose an accurate method for registering digital images acquired from any viewpoint onto point clouds, which is a crucial step for good colorization by color projection. We express this image-to-geometry registration as a pose estimation problem. The camera pose is computed using the entire image intensities under a photometric visual and virtual servoing (VVS) framework. The camera's extrinsic and intrinsic parameters are automatically estimated. Because we estimate the intrinsic parameters, we do not need any information about the camera that took the digital image. Finally, when the point cloud model and the digital image are correctly registered, we project the 3D model into the digital image frame and assign new colors to the visible points. The performance of the approach is demonstrated in simulations and in real experiments on indoor and outdoor datasets of the cathedral of Amiens, which highlight the success of our method, leading to point clouds with better photometric quality and resolution.
Characterization of a digital camera as an absolute tristimulus colorimeter
NASA Astrophysics Data System (ADS)
Martinez-Verdu, Francisco; Pujol, Jaume; Vilaseca, Meritxell; Capilla, Pascual
2003-01-01
An algorithm is proposed for the spectral and colorimetric characterization of digital still cameras (DSCs), allowing their use as tele-colorimeters with CIE-XYZ color output in cd/m². The spectral characterization consists of calculating the color-matching functions from the previously measured spectral sensitivities. The colorimetric characterization consists of transforming the RGB digital data into absolute tristimulus values CIE-XYZ (in cd/m²) under variable and unknown spectroradiometric conditions. At the first stage, a gray balance is applied to the RGB digital data to convert them into RGB relative colorimetric values. At the second stage, an algorithm of luminance adaptation vs. lens aperture is inserted into the basic colorimetric profile. Capturing the ColorChecker chart under different light sources, the DSC color analysis accuracy indexes, both in the raw state and with the corrections from a linear color correction model, were evaluated using the Pointer '86 color reproduction index with the unrelated Hunt '91 color appearance model. The results indicate that our digital image capture device, in raw performance, lightens and desaturates the colors.
Improving the color fidelity of cameras for advanced television systems
NASA Astrophysics Data System (ADS)
Kollarits, Richard V.; Gibbon, David C.
1992-08-01
In this paper we compare the accuracy of the color information obtained from television cameras using three and five wavelength bands. This comparison is based on real digital camera data. The cameras are treated as colorimeters whose characteristics are not linked to those of the display. The color matrices for both cameras were obtained by identical optimization procedures that minimized the color error. The color error for the five-band camera is 2.5 times smaller than that obtained from the three-band camera. Visual comparison of color matches on a characterized color monitor indicates that the five-band camera is capable of color measurements that produce no significant visual error on the display. Because the outputs from the five-band camera are reduced to the normal three channels conventionally used for display, there need be no increase in signal handling complexity outside the camera. Likewise, it is possible to construct a five-band camera using only three sensors, as in conventional cameras. The principal drawback of the five-band camera is the reduction in effective camera sensitivity by about 3/4 of an f-stop.
Seo, Soo Hong; Kim, Jae Hwan; Kim, Ji Woong; Kye, Young Chul; Ahn, Hyo Hyun
2011-02-01
Digital photography can be used to measure skin color colorimetrically when combined with proper techniques. To better understand the settings of digital photography for the evaluation and measurement of skin colors, we used a tungsten lamp with filters and the custom white balance (WB) function of a digital camera. All colored squares on a color chart were photographed under each original and filtered light, analyzed into CIELAB coordinates to produce the calibration method for each given light setting, and compared statistically with reference coordinates obtained using a reflectance spectrophotometer. The results were summarized for typical color groups, such as skin colors. We compared these results for the fixed vs. custom WB of the digital camera. The accuracy of color measurement was improved when using light with a proper color temperature conversion filter. The skin colors from color charts could be measured more accurately using a fixed WB. In vivo measurement of skin color was straightforward with our method and settings. The color temperature conversion filter that produced daylight-like light from the tungsten lamp was the best choice when combined with a fixed WB for the measurement of colors and acceptable photographs.
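Comparing camera-derived CIELAB coordinates against spectrophotometer references, as in the statistical comparison above, typically reduces to a color-difference computation. Below is a minimal sketch using the CIE76 formula; the L*a*b* values are invented examples, and the abstract does not state which ΔE formula the study used:

```python
import numpy as np

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB space."""
    return float(np.linalg.norm(np.asarray(lab1, dtype=float)
                                - np.asarray(lab2, dtype=float)))

skin_camera = (64.2, 14.1, 16.8)       # illustrative L*, a*, b* from a photo
skin_reference = (63.0, 13.0, 15.5)    # illustrative spectrophotometer reading
de = delta_e_76(skin_camera, skin_reference)
```

A ΔE near or below roughly 2-3 units is commonly treated as a barely perceptible difference, which is the practical target for this kind of calibrated photography.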
NASA Astrophysics Data System (ADS)
Yu, Liping; Pan, Bing
2017-08-01
Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a carefully designed pseudo-stereo imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve the full-field 3D shape and deformation of the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements using a single high-speed camera without sacrificing its spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-sided drum, demonstrated the effectiveness and accuracy of the proposed technique.
Low-cost camera modifications and methodologies for very-high-resolution digital images
USDA-ARS?s Scientific Manuscript database
Aerial color and color-infrared photography are usually acquired at high altitude so the ground resolution of the photographs is < 1 m. Moreover, current color-infrared cameras and manned aircraft flight time are expensive, so the objective is the development of alternative methods for obtaining ve...
Use of a color CMOS camera as a colorimeter
NASA Astrophysics Data System (ADS)
Dallas, William J.; Roehrig, Hans; Redford, Gary R.
2006-08-01
In radiology diagnosis, film is being quickly replaced by computer monitors as the display medium for all imaging modalities. Increasingly, these monitors are color instead of monochrome. It is important to have instruments available to characterize the display devices in order to guarantee reproducible presentation of image material. We are developing an imaging colorimeter based on a commercially available color digital camera. The camera uses a sensor that has co-located pixels in all three primary colors.
Evaluation of Digital Camera Technology For Bridge Inspection
DOT National Transportation Integrated Search
1997-07-18
As part of a cooperative agreement between the Tennessee Department of Transportation and the Federal Highway Administration, a study was conducted to evaluate current levels of digital camera and color printing technology with regard to their applic...
NASA Astrophysics Data System (ADS)
Martínez-González, A.; Moreno-Hernández, D.; Monzón-Hernández, D.; León-Rodríguez, M.
2017-06-01
In the schlieren method, the deflection of light by the presence of an inhomogeneous medium is proportional to the gradient of its refractive index. Such deflection, in a schlieren system, is represented by light intensity variations on the observation plane. For a digital camera, the intensity level registered by each pixel depends mainly on the variation of the medium's refractive index and on the digital camera settings. Therefore, in this study, we regulate the intensity value of each pixel by controlling camera settings such as exposure time, gamma, and gain in order to calibrate the image obtained to the actual temperature values of a particular medium. In our approach, we use a color digital camera. The images obtained with a color digital camera can be separated into three color channels, corresponding to red, green, and blue; moreover, each channel has its own sensitivity. The differences in sensitivity allow us to obtain a range of temperature values for each color channel: high, medium, and low sensitivity correspond to the green, blue, and red color channels, respectively. Therefore, by adding up the temperature contribution of each color channel we obtain a wide range of temperature values. Hence, the basic idea in our approach to measuring temperature with a schlieren system is to relate the intensity level of each pixel in a schlieren image to the corresponding knife-edge position measured at the exit focal plane of the system. Our approach was applied to the measurement of instantaneous temperature fields of the air convection caused by a heated rectangular metal plate and by a candle flame. We found that for the metal plate temperature measurements only the green and blue color channels were required to sense the entire phenomenon. On the other hand, for the candle case, all three color channels were needed to obtain a complete measurement of temperature. In our study, the candle temperature was taken as the reference, and the maximum temperature value obtained for the green, blue, and red color channels was ∼275.6, ∼412.9, and ∼501.3 °C, respectively.
Chen, Brian R; Poon, Emily; Alam, Murad
2017-08-01
Photographs are an essential tool for the documentation and sharing of findings in dermatologic surgery, and various camera types are available. Our objective was to evaluate the currently available camera types in view of the special functional needs of procedural dermatologists. Mobile phone, point-and-shoot, digital single-lens reflex (DSLR), digital medium format, and 3-dimensional cameras were compared in terms of their usefulness for dermatologic surgeons. For each camera type, the image quality, as well as the other practical benefits and limitations, was evaluated with reference to a set of ideal camera characteristics. Based on these assessments, recommendations were made regarding the specific clinical circumstances in which each camera type would likely be most useful. Mobile photography may be adequate when ease of use, availability, and accessibility are prioritized. Point-and-shoot cameras and DSLR cameras provide sufficient resolution for a range of clinical circumstances while providing the added benefit of portability. Digital medium format cameras offer the highest image quality, with accurate color rendition and greater color depth. Three-dimensional imaging may be optimal for the definition of skin contour. The selection of an optimal camera depends on the context in which it will be used.
Selecting a digital camera for telemedicine.
Patricoski, Chris; Ferguson, A Stewart
2009-06-01
The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tokurei, Shogo, E-mail: shogo.tokurei@gmail.com; Morishita, Junji, E-mail: junjim@med.kyushu-u.ac.jp
Purpose: The aim of this study is to propose a method for the quantitative evaluation of image quality of both monochrome and color liquid-crystal displays (LCDs) using a commercially available color digital camera. Methods: The intensities of the unprocessed red (R), green (G), and blue (B) signals of a camera vary depending on the spectral sensitivity of the image sensor used in the camera. For consistent evaluation of image quality for both monochrome and color LCDs, the unprocessed RGB signals of the camera were converted into gray scale signals that corresponded to the luminance of the LCD. Gray scale signals for the monochrome LCD were evaluated by using only the green channel signals of the camera. For the color LCD, the RGB signals of the camera were converted into gray scale signals by employing weighting factors (WFs) for each RGB channel. A line image displayed on the color LCD was simulated on the monochrome LCD by using a software application for subpixel driving in order to verify the WF-based conversion method. Furthermore, the results obtained by different types of commercially available color cameras and a photometric camera were compared to examine the consistency of the authors’ method. Finally, image quality for both the monochrome and color LCDs was assessed by measuring modulation transfer functions (MTFs) and Wiener spectra (WS). Results: The authors’ results demonstrated that the proposed method for calibrating the spectral sensitivity of the camera resulted in a consistent and reliable evaluation of the luminance of monochrome and color LCDs. The MTFs and WS showed different characteristics for the two LCD types owing to differences in the subpixel structure. The MTF in the vertical direction of the color LCD was superior to that of the monochrome LCD, although the WS in the vertical direction of the color LCD was inferior to that of the monochrome LCD as a result of luminance fluctuations in RGB subpixels.
Conclusions: The authors’ method based on the use of a commercially available color camera is useful for evaluating and understanding the display performance of both monochrome and color LCDs in radiology departments.
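The WF-based gray-scale conversion described above amounts to a per-channel weighted sum of the camera's RGB signals. A minimal sketch in Python; the Rec. 709 luma coefficients used as defaults are placeholders, since the paper derives camera-specific weighting factors from the sensor's spectral sensitivity:

```python
import numpy as np

def rgb_to_gray(rgb, weights=(0.2126, 0.7152, 0.0722)):
    """Convert linear camera RGB signals to a gray-scale (luminance)
    signal via per-channel weighting factors (WFs).

    The default weights are the Rec. 709 luma coefficients, used here
    only as placeholders; the paper calibrates WFs per camera from
    the sensor's spectral sensitivity.
    """
    rgb = np.asarray(rgb, dtype=float)
    return rgb @ np.asarray(weights, dtype=float)
```

For the monochrome-LCD case, the paper's approach of using only the green channel corresponds to weights of (0, 1, 0).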
de Lasarte, Marta; Pujol, Jaume; Arjona, Montserrat; Vilaseca, Meritxell
2007-01-10
We present an optimized linear algorithm for the spatial nonuniformity correction of a CCD color camera's imaging system and the experimental methodology developed for its implementation. We assess the influence of the algorithm's variables on the quality of the correction, that is, the dark image, the base correction image, and the reference level, and the range of application of the correction using a uniform radiance field provided by an integrator cube. The best spatial nonuniformity correction is achieved by having a nonzero dark image, by using an image with a mean digital level placed in the linear response range of the camera as the base correction image and taking the mean digital level of the image as the reference digital level. The response of the CCD color camera's imaging system to the uniform radiance field shows a high level of spatial uniformity after the optimized algorithm has been applied, which also allows us to achieve a high-quality spatial nonuniformity correction of captured images under different exposure conditions.
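Our reading of the correction described above, sketched as a standard linear flat-field formula with a dark image, a base correction image, and a reference level taken from the image mean. The variable names and the exact formula are assumptions for illustration, not the authors' code:

```python
import numpy as np

def flat_field_correct(raw, dark, base, ref_level=None):
    """Linear spatial non-uniformity correction (a sketch).

    raw       : captured image to correct
    dark      : dark image (non-zero, as the paper recommends)
    base      : base correction image of a uniform radiance field,
                with its mean level in the camera's linear range
    ref_level : reference digital level; defaults to the mean of the
                dark-subtracted image, following the paper's
                best-performing choice
    """
    raw, dark, base = (np.asarray(a, dtype=float) for a in (raw, dark, base))
    gain = base - dark                      # per-pixel responsivity map
    if ref_level is None:
        ref_level = (raw - dark).mean()
    return (raw - dark) / gain * ref_level
```

Applied to a synthetic image whose only variation is a fixed gain pattern, the output is spatially uniform, which is the property the paper verifies with an integrator cube.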
Single-snapshot 2D color measurement by plenoptic imaging system
NASA Astrophysics Data System (ADS)
Masuda, Kensuke; Yamanaka, Yuji; Maruyama, Go; Nagai, Sho; Hirai, Hideaki; Meng, Lingfei; Tosic, Ivana
2014-03-01
Plenoptic cameras enable capture of directional light ray information, thus allowing applications such as digital refocusing, depth estimation, or multiband imaging. One of the most common plenoptic camera architectures contains a microlens array at the conventional image plane and a sensor at the back focal plane of the microlens array. We leverage the multiband imaging (MBI) function of this camera and develop a single-snapshot, single-sensor, high-color-fidelity camera. Our camera is based on a plenoptic system with XYZ filters inserted in the pupil plane of the main lens. To achieve high color measurement precision with this system, we perform an end-to-end optimization of the system model that includes light source information, object information, optical system information, plenoptic image processing, and color estimation processing. The optimized system characteristics are exploited to build an XYZ plenoptic colorimetric camera prototype that achieves high color measurement precision. We describe an application of our colorimetric camera to color shading evaluation of displays and show that it achieves color accuracy of ΔE < 0.01.
Low-complexity camera digital signal imaging for video document projection system
NASA Astrophysics Data System (ADS)
Hsia, Shih-Chang; Tsai, Po-Shien
2011-04-01
We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
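Among the DSP stages listed (color interpolation, white balance, gain control, enhancement), white balance is the simplest to illustrate. A gray-world sketch for reference, not necessarily the authors' low-complexity algorithm:

```python
import numpy as np

def gray_world_white_balance(img):
    """Gray-world white balance: scale each color channel so its mean
    matches the overall image mean. A common, simple stand-in for the
    white balance stage of a camera DSP pipeline.
    """
    img = np.asarray(img, dtype=float)
    channel_means = img.reshape(-1, 3).mean(axis=0)   # per-channel mean
    gains = channel_means.mean() / channel_means      # equalizing gains
    return img * gains
```

After balancing, the three channel means are equal, which is the gray-world assumption; hardware versions typically fold these gains into the auto-gain stage.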
Process simulation in digital camera system
NASA Astrophysics Data System (ADS)
Toadere, Florin
2012-06-01
The goal of this paper is to simulate the functionality of a digital camera system. The simulations cover the conversion from light to numerical signal and the color processing and rendering. We consider the image acquisition system to be linear, shift invariant, and axial; the light propagation is orthogonal to the system. We use a spectral image processing algorithm to simulate the radiometric properties of a digital camera. The algorithm takes into consideration the transmittances of the light source, lenses, and filters, and the quantum efficiency of a CMOS (complementary metal oxide semiconductor) sensor. The optical part is characterized by a multiple convolution between the point spread functions of the optical components: a Cooke triplet, the aperture, the light fall-off, and the optical part of the CMOS sensor. The electrical part consists of Bayer sampling, interpolation, signal-to-noise ratio, dynamic range, analog-to-digital conversion, and JPEG compression. We reconstruct the noisy, blurred image by blending differently exposed images to reduce the photon shot noise; we also filter the fixed-pattern noise and sharpen the image. Then come the color processing blocks: white balancing, color correction, gamma correction, and conversion from the XYZ color space to the RGB color space. For the reproduction of color we use an OLED (organic light emitting diode) monitor. The analysis can be useful to assist students and engineers in image quality evaluation and imaging system design. Many other configurations of blocks can be used in our analysis.
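The tail of such a color-processing chain, conversion from the XYZ color space to RGB followed by gamma correction, can be sketched with the standard sRGB matrix and encoding curve standing in for the paper's OLED-specific rendering:

```python
import numpy as np

# Standard sRGB (D65) XYZ -> linear RGB matrix, per IEC 61966-2-1.
XYZ_TO_RGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def xyz_to_srgb(xyz):
    """XYZ -> gamma-encoded sRGB for one pixel: linear matrix
    transform, clip to [0, 1], then the sRGB transfer curve."""
    rgb = np.asarray(xyz, dtype=float) @ XYZ_TO_RGB.T
    rgb = np.clip(rgb, 0.0, 1.0)
    return np.where(rgb <= 0.0031308,
                    12.92 * rgb,
                    1.055 * rgb ** (1 / 2.4) - 0.055)
```

Feeding in the D65 white point maps to RGB white, a quick sanity check on the matrix.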
Express Yourself: Using Color Schemes, Cameras, and Computers
ERIC Educational Resources Information Center
Lott, Debra
2005-01-01
Self-portraiture is a great project to introduce the study of color schemes and Expressionism. Through this drawing project, students learn about identity, digital cameras, and creative art software. The lesson can be introduced with a study of Edvard Munch and Expressionism. Expressionism was an art movement in which the intensity of the artist's…
Precise color images a high-speed color video camera system with three intensified sensors
NASA Astrophysics Data System (ADS)
Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.
1999-06-01
High-speed imaging systems have been used in a wide range of fields in science and engineering. Although high-speed camera systems have improved greatly in performance, most of their applications are only to capture high-speed motion pictures. However, in some fields of science and technology, it is useful to obtain other information as well, such as the temperature of combustion flames, thermal plasma, and molten materials. Recent digital high-speed video imaging technology should be able to extract such information from those objects. For this purpose, we have already developed a high-speed video camera system with three intensified sensors and a cubic-prism image splitter. The maximum frame rate is 40,500 pps (pictures per second) at 64 × 64 pixels and 4,500 pps at 256 × 256 pixels, with 256 (8-bit) intensity levels for each pixel. The camera system can store more than 1,000 pictures continuously in solid-state memory. In order to obtain precise color images from this camera system, we need to develop a digital technique, consisting of a computer program and ancillary instruments, to adjust the displacement of images taken from two or three image sensors and to calibrate the relationship between incident light intensity and the corresponding digital output signals. In this paper, a digital technique for pixel-based displacement adjustment is proposed. Although the displacement of the corresponding circle was more than 8 pixels in the original image, the displacement was adjusted to within 0.2 pixels at most by this method.
NASA Astrophysics Data System (ADS)
Takada, Shunji; Ihama, Mikio; Inuiya, Masafumi
2006-02-01
Digital still cameras overtook film cameras in the Japanese market in 2000 in terms of sales volume, owing to their versatile functions. However, the image-capturing capabilities of color films, such as sensitivity and latitude, are still superior to those of digital image sensors. In this paper, we attribute the high performance of color films to their multi-layered structure, and propose solid-state image sensors with stacked organic photoconductive layers having narrow absorption bands on CMOS read-out circuits.
Khanduja, Sumeet; Sampangi, Raju; Hemlatha, B C; Singh, Satvir; Lall, Ashish
2018-01-01
Purpose: The purpose of this study is to describe the use of a commercial digital single-lens reflex (DSLR) camera for vitreoretinal surgery recording and compare it to a standard 3-chip charge-coupled device (CCD) camera. Methods: Simultaneous recording was done using a Sony A7s2 camera and a Sony high-definition 3-chip camera attached to each side of the microscope. The videos recorded from both camera systems were edited, and sequences of similar time frames were selected. The three sequences selected for evaluation were (a) anterior segment surgery, (b) surgery under a direct viewing system, and (c) surgery under an indirect wide-angle viewing system. The videos of each sequence were evaluated and rated on a scale of 0-10 for color, contrast, and overall quality. Results: Most results were rated either 8/10 or 9/10 for both cameras. A noninferiority analysis comparing mean scores of the DSLR camera versus the CCD camera was performed and P values were obtained. The mean scores of the two cameras were comparable on all parameters assessed in the different videos except for color and contrast in the posterior pole view and color in the wide-angle view, which were rated significantly higher (better) for the DSLR camera. Conclusion: Commercial DSLRs are an affordable low-cost alternative for vitreoretinal surgery recording and may be used for documentation and teaching. PMID:29283133
Applying and extending ISO/TC42 digital camera resolution standards to mobile imaging products
NASA Astrophysics Data System (ADS)
Williams, Don; Burns, Peter D.
2007-01-01
There are no fundamental differences between today's mobile telephone cameras and consumer digital still cameras that would suggest existing ISO imaging performance standards do not apply. To the extent that both have lenses, color filter arrays, detectors, apertures, and image processing, and are hand held, there really are no operational or architectural differences. Despite this, there are currently differences in the levels of imaging performance, driven by physical and economic constraints and by image-capture conditions. Several ISO resolution standards, well established for consumer digital cameras, require care when applied to the current generation of cell phone cameras. In particular, accommodation of optical flare, shading non-uniformity, and distortion is recommended. We offer proposals for the application of existing ISO imaging resolution performance standards to mobile imaging products, and suggestions for extending performance standards to the characteristic behavior of camera phones.
NASA Astrophysics Data System (ADS)
Fan, Yang-Tung; Peng, Chiou-Shian; Chu, Cheng-Yu
2000-12-01
New markets are emerging for digital electronic image devices, especially in visual communications, PC cameras, mobile/cell phones, security systems, toys, vehicle imaging systems, and computer peripherals for document capture. A one-chip imaging system, in which the image sensor has a fully digital interface, can bring image-capture devices into our daily lives. Adding a color filter to such an image sensor, in a pattern of pixel mosaics or wide stripes, makes the image more realistic and colorful. One could say that the color filter makes life more colorful. What is a color filter? A color filter passes only light of the specific wavelength and transmittance matching the filter itself and blocks the rest of the image light. The color filter process consists of coating and patterning green, red, and blue (or cyan, magenta, and yellow) mosaic resists onto the matched pixels of the image-sensing array. From the signal caught at each pixel, the image of the scene can be reconstructed. The wide use of digital electronic cameras and multimedia applications today makes the future of the color filter bright. Although it is challenging, developing the color filter process is very worthwhile. We provide the best service: shorter cycle time, excellent color quality, and high, stable yield. The key issues of the advanced color process that have to be solved and implemented are planarization and micro-lens technology. Many key points of color filter process technology that have to be considered are also described in this paper.
Park, Young-Jae; Lee, Jin-Moo; Yoo, Seung-Yeon; Park, Young-Bae
2016-04-01
To examine whether the color parameters of tongue inspection (TI) using a digital camera were reliable and valid, and to examine which color parameters serve as predictors of symptom patterns in terms of East Asian medicine (EAM). Two hundred female subjects' tongue substances were photographed with a mega-pixel digital camera. Together with the photographs, the subjects were asked to complete Yin deficiency, Phlegm pattern, and Cold-Heat pattern questionnaires. Using three sets of digital imaging software, each digital image was exposure- and white-balance-corrected, and finally the L* (luminance), a* (red-green balance), and b* (yellow-blue balance) values of the tongues were calculated. To examine the intra- and inter-rater reliabilities and criterion validity of the color analysis method, three raters were asked to calculate color parameters for 20 digital image samples. Finally, four hierarchical regression models were formed. Color parameters showed good or excellent reliability (0.627-0.887 for intra-class correlation coefficients) and significant criterion validity (0.523-0.718 for Spearman's correlation). In the hierarchical regression models, age was a significant predictor of Yin deficiency (β = 0.192), and the b* value of the tip of the tongue was a determinant predictor of the Yin deficiency, Phlegm, and Heat patterns (β = -0.212, -0.172, and -0.163). Luminance (L*) was predictive of the Yin deficiency (β = -0.172) and Cold (β = 0.173) patterns. Our results suggest that color analysis of the tongue using the L*a*b* system is reliable and valid, and that color parameters partially serve as symptom pattern predictors in EAM practice.
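The L*, a*, b* values can be computed from corrected sRGB pixel values roughly as follows (a minimal single-pixel sketch under a D65 white point; the study itself used commercial imaging software, and production code would use a library such as scikit-image's rgb2lab):

```python
import numpy as np

def srgb_to_lab(rgb):
    """sRGB in [0, 1] -> CIE L*a*b* (D65) for a single pixel."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB gamma to get linear RGB.
    lin = np.where(rgb <= 0.04045,
                   rgb / 12.92,
                   ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ (sRGB primaries, D65).
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = m @ lin
    # Normalize by the D65 white point, then apply the CIELAB curve.
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return L, a, b
```

A neutral white pixel maps to L* near 100 with a* and b* near zero, which is the sanity check usually applied after white-balance correction.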
Cost-effective poster and print production with digital camera and computer technology.
Chen, M Y; Ott, D J; Rohde, R P; Henson, E; Gelfand, D W; Boehme, J M
1997-10-01
The purpose of this report is to describe a cost-effective method for producing black-and-white prints and color posters within a radiology department. Using a high-resolution digital camera, personal computer, and color printer, the average cost of a 5 x 7 inch (12.5 x 17.5 cm) black-and-white print may be reduced from $8.50 to $1 each in our institution. The average cost for a color print (8.5 x 14 inch [21.3 x 35 cm]) varies from $2 to $3 per sheet depending on the selection of ribbons for a color-capable laser printer and the paper used. For a 30-panel, 4 x 8 foot (1.2 x 2.4 m) standard-sized poster, the cost for materials and construction is approximately $100.
NASA Astrophysics Data System (ADS)
Nishidate, Izumi; Hoshi, Akira; Aoki, Yuta; Nakano, Kazuya; Niizeki, Kyuichi; Aizu, Yoshihisa
2016-03-01
A non-contact imaging method using a digital RGB camera is proposed to evaluate the plethysmogram and spontaneous low-frequency oscillation. In vivo experiments on human skin during mental stress induced by the Stroop color-word test demonstrated the feasibility of the method for evaluating the activity of the autonomic nervous system.
NASA Astrophysics Data System (ADS)
Breitfelder, Stefan; Reichel, Frank R.; Gaertner, Ernst; Hacker, Erich J.; Cappellaro, Markus; Rudolf, Peter; Voelk, Ute
1998-04-01
Digital cameras are of increasing significance for professional applications in photo studios where fashion, portrait, product, and catalog photographs or advertising photos of high quality have to be taken. The eyelike is a digital camera system developed for such applications. It is capable of working online with high frame rates and images of full sensor size, and it provides a resolution that can be varied between 2048 × 2048 and 6144 × 6144 pixels at an RGB color depth of 12 bit per channel, with a variable exposure time of 1/60 s to 1 s. With an exposure time of 100 ms, digitization takes approx. 2 seconds for an image of 2048 × 2048 pixels (12 MByte), 8 seconds for an image of 4096 × 4096 pixels (48 MByte), and 40 seconds for an image of 6144 × 6144 pixels (108 MByte). The eyelike can be used in various configurations. Used as a camera body, it accepts most commercial lenses via existing lens adaptors. Alternatively, the eyelike can be used as a back on most commercial 4 × 5 inch view cameras. This paper describes the eyelike camera concept with the essential system components, and finishes with a description of the software needed to bring the high quality of the camera to the user.
Color film spectral properties test experiment for target simulation
NASA Astrophysics Data System (ADS)
Liu, Xinyue; Ming, Xing; Fan, Da; Guo, Wenji
2017-04-01
In hardware-in-the-loop testing of an aviation spectral camera, the liquid crystal light valve and digital micro-mirror device cannot simulate the spectral characteristics of a landmark. A test system framework based on color film is proposed for testing the spectral camera, and the spectral characteristics of the color film are tested in this paper. The results of the experiment show that differences exist between the landmark and film spectral curves. However, the peak of the spectral curve changes according to the color, and the curves are similar to those of standard color targets. So, if the error between the landmark and the film is calibrated and compensated, the film can be utilized in hardware-in-the-loop testing of the aviation spectral camera.
Digital methods of recording color television images on film tape
NASA Astrophysics Data System (ADS)
Krivitskaya, R. Y.; Semenov, V. M.
1985-04-01
Three methods are now available for recording color television images on film tape, directly or after appropriate signal processing. Conventional recording of images from the screens of three kinescopes with synthetic-crystal face plates is still the most effective for high fidelity. This method was improved by digital preprocessing of the brightness and color-difference signals. Frame-by-frame storage of these signals in memory in digital form is followed by gamma and aperture correction and electronic correction of crossover distortions in the color layers of the film, with fixing in accordance with specific emulsion procedures. The newer method of recording color television images with line arrays of light-emitting diodes involves dichroic superposing mirrors and a movable scanning mirror. This method allows the use of standard movie cameras, simplifies interlace-to-linewise conversion and the mechanical equipment, and lengthens exposure time while shortening recording time. The latest image-transform method requires an audio-video recorder, a memory disk, a digital computer, and a decoder. The 9-step procedure begins by preprocessing the total color television signal with reduction of noise level and time errors, followed by frame-frequency conversion and setting the number of lines. The total signal is then resolved into its brightness and color-difference components, and phase errors and image blurring are also reduced. After extraction of the R, G, B signals and colorimetric matching of the TV camera and film tape, the simultaneous R, G, B signals are converted from interlacing to sequential triads of color-quotient frames with linewise scanning at triple frequency. The color-quotient signals are recorded with an electron beam on a smoothly moving black-and-white film tape under vacuum. While digital techniques improve signal quality and simplify process control, not requiring stabilization of circuits, image processing is still analog.
Sedgewick, Gerald J.; Ericson, Marna
2015-01-01
Obtaining digital images of color brightfield microscopy is an important aspect of biomedical research and the clinical practice of diagnostic pathology. Although the field of digital pathology has had tremendous advances in whole-slide imaging systems, little effort has been directed toward standardizing color brightfield digital imaging to maintain image-to-image consistency and tonal linearity. Using a single camera and microscope to obtain digital images of three stains, we show that microscope and camera systems inherently produce image-to-image variation. Moreover, we demonstrate that post-processing with a widely used raster graphics editor software program does not completely correct for session-to-session inconsistency. We introduce a reliable method for creating consistent images with a hardware/software solution (ChromaCal™; Datacolor Inc., NJ) along with its features for creating color standardization, preserving linear tonal levels, providing automated white balancing and setting automated brightness to consistent levels. The resulting image consistency using this method will also streamline mean density and morphometry measurements, as images are easily segmented and single thresholds can be used. We suggest that this is a superior method for color brightfield imaging, which can be used for quantification and can be readily incorporated into workflows. PMID:25575568
Resolution for color photography
NASA Astrophysics Data System (ADS)
Hubel, Paul M.; Bautsch, Markus
2006-02-01
Although it is well known that luminance resolution is most important, the ability to accurately render colored details, color textures, and colored fabrics cannot be overlooked. This includes the ability to accurately render single-pixel color details as well as to avoid color aliasing. All consumer digital cameras on the market today record in color, and the scenes people photograph are usually in color. Yet almost all resolution measurements made on color cameras are done using a black-and-white target. In this paper we present several methods for measuring and quantifying color resolution. The first method, detailed in a previous publication, uses a slanted-edge target of two colored surfaces in place of the standard black-and-white edge pattern. The second method employs the standard black-and-white targets recommended in the ISO standard, but records them onto the camera through colored filters, thus giving modulation between black and one particular color component; red, green, and blue color-separation filters are used in this study. The third method, conducted at Stiftung Warentest, an independent consumer organization of Germany, uses a white-light interferometer to generate fringe-pattern targets of varying color and spatial frequency.
Selecting the right digital camera for telemedicine-choice for 2009.
Patricoski, Chris; Ferguson, A Stewart; Brudzinski, Jay; Spargo, Garret
2010-03-01
Digital cameras are fundamental tools for store-and-forward telemedicine (electronic consultation). The choice of a camera may significantly impact this consultative process based on the quality of the images, the ability of users to leverage the cameras' features, and other facets of the camera design. The goal of this research was to provide a substantive framework and clearly defined process for reviewing digital cameras and to demonstrate the results obtained when employing this process to review point-and-shoot digital cameras introduced in 2009. The process included a market review, in-house evaluation of features, image reviews, functional testing, and feature prioritization. Seventy-two cameras were identified new on the market in 2009, and 10 were chosen for in-house evaluation. Four cameras scored very high for mechanical functionality and ease-of-use. The final analysis revealed three cameras that had excellent scores for both color accuracy and photographic detail and these represent excellent options for telemedicine: Canon Powershot SD970 IS, Fujifilm FinePix F200EXR, and Panasonic Lumix DMC-ZS3. Additional features of the Canon Powershot SD970 IS make it the camera of choice for our Alaska program.
Full color natural light holographic camera.
Kim, Myung K
2013-04-22
Full-color, three-dimensional images of objects under incoherent illumination are obtained by a digital holography technique. Based on self-interference of two beam-split copies of the object's optical field with differential curvatures, the apparatus consists of a beam-splitter, a few mirrors and lenses, a piezo-actuator, and a color camera. No lasers or other special illuminations are used for recording or reconstruction. Color holographic images of daylight-illuminated outdoor scenes and a halogen lamp-illuminated toy figure are obtained. From a recorded hologram, images can be calculated, or numerically focused, at any distances for viewing.
3D digital image correlation using single color camera pseudo-stereo system
NASA Astrophysics Data System (ADS)
Li, Junrui; Dan, Xizuo; Xu, Wan; Wang, Yonghong; Yang, Guobiao; Yang, Lianxiang
2017-10-01
Three-dimensional digital image correlation (3D-DIC) has been widely used by industry to measure 3D contours and whole-field displacement/strain. In this paper, a novel single color camera 3D-DIC setup, using a reflection-based pseudo-stereo system, is proposed. Compared to the conventional single-camera pseudo-stereo system, which splits the CCD sensor into two halves to capture the stereo views, the proposed system achieves both views using the whole CCD chip and without reducing the spatial resolution. In addition, as in a conventional 3D-DIC system, the center of the two views lies at the center of the CCD chip, which minimizes image distortion relative to the conventional pseudo-stereo system. The two overlapped views on the CCD are separated in the color domain, and the standard 3D-DIC algorithm can be utilized directly to perform the evaluation. The system's principle and experimental setup are described in detail, and multiple tests are performed to validate the system.
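Once captured, separating the two overlapped views is a matter of selecting color channels. A sketch in which one view is assumed to be encoded in the red channel and the other in the blue channel; the paper's actual channel assignment may differ:

```python
import numpy as np

def split_pseudo_stereo(color_image):
    """Separate the two overlapped views of a single-color-camera
    pseudo-stereo system by color channel.

    Assumes, for illustration only, that one optical path is encoded
    in the red channel and the other in the blue channel.
    """
    img = np.asarray(color_image)
    left = img[..., 0]    # view traveling the first (assumed red) path
    right = img[..., 2]   # view traveling the second (assumed blue) path
    return left, right
```

Each returned view has the full sensor resolution, which is the advantage the abstract highlights over splitting the CCD into halves.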
Serial-to-parallel color-TV converter
NASA Technical Reports Server (NTRS)
Doak, T. W.; Merwin, R. B.; Zuckswert, S. E.; Sepper, W.
1976-01-01
A solid-state analog-to-digital converter eliminates flicker and problems with time-base stability and gain variation in sequential color TV cameras. The device includes a 3-bit delta modulator; a two-field memory; timing, switching, and sync networks; and three 3-bit delta demodulators.
High resolution multispectral photogrammetric imagery: enhancement, interpretation and evaluations
NASA Astrophysics Data System (ADS)
Roberts, Arthur; Haefele, Martin; Bostater, Charles; Becker, Thomas
2007-10-01
A variety of aerial mapping cameras were adapted and developed into simulated multiband digital photogrammetric mapping systems. Direct digital multispectral cameras, two multiband cameras (IIS 4-band and Itek 9-band), and paired mapping and reconnaissance cameras were evaluated for digital spectral performance and photogrammetric mapping accuracy in an aquatic environment. The aerial films (24 cm × 24 cm format) tested were: Agfa color negative and extended-red (visible and near-infrared) panchromatic; and Kodak color infrared and B&W (visible and near-infrared) infrared. All films were negative-processed to published standards and digitally converted at either 16 (color) or 10 (B&W) microns. Excellent precision in the digital conversions was obtained, with scanning errors of less than one micron. Radiometric data conversion was undertaken using linear density conversion and centered 8-bit histogram exposure. This resulted in multiple 8-bit spectral image bands that were unaltered (not radiometrically enhanced) "optical count" conversions of film density. This provided the best film-density conversion to a digital product while retaining the original film density characteristics. Data covering water depth, water quality, surface roughness, and bottom substrate were acquired using different measurement techniques, as well as different techniques to locate sampling points on the imagery. Despite extensive efforts to obtain accurate ground truth data, location errors, measurement errors, and variations in the correlation between water depth and remotely sensed signal persisted. These errors must be considered endemic and may not be removed through even the most elaborate sampling set-up. Results indicate that multispectral photogrammetric systems offer improved feature-mapping capability.
Adaptive Wiener filter super-resolution of color filter array images.
Karch, Barry K; Hardie, Russell C
2013-08-12
Digital color cameras using a single detector array with a Bayer color filter array (CFA) require interpolation or demosaicing to estimate missing color information and provide full-color images. However, demosaicing does not specifically address fundamental undersampling and aliasing inherent in typical camera designs. Fast non-uniform interpolation based super-resolution (SR) is an attractive approach to reduce or eliminate aliasing and its relatively low computational load is amenable to real-time applications. The adaptive Wiener filter (AWF) SR algorithm was initially developed for grayscale imaging and has not previously been applied to color SR demosaicing. Here, we develop a novel fast SR method for CFA cameras that is based on the AWF SR algorithm and uses global channel-to-channel statistical models. We apply this new method as a stand-alone algorithm and also as an initialization image for a variational SR algorithm. This paper presents the theoretical development of the color AWF SR approach and applies it in performance comparisons to other SR techniques for both simulated and real data.
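The abstract does not give the estimator itself, but the Wiener weights at the heart of AWF-style methods can be sketched under a common simplifying assumption from that literature: an isotropic exponential autocorrelation model for the desired image. The function below is illustrative only; `rho` and the noise-to-signal ratio `nsr` are invented parameters, not values from the paper.

```python
import numpy as np

def wiener_weights(offsets, est_offset, rho=0.75, nsr=0.05):
    """Wiener weights w = (R + nsr*I)^-1 p for estimating the value at
    `est_offset` from samples at `offsets`, assuming an isotropic exponential
    autocorrelation model r(d) = rho**d (a common simplification in the
    adaptive-Wiener-filter SR literature; parameters here are illustrative)."""
    offsets = np.asarray(offsets, dtype=float)
    # R[i, j]: model correlation between observation i and observation j,
    # with the noise-to-signal ratio raising the diagonal
    d = np.linalg.norm(offsets[:, None, :] - offsets[None, :, :], axis=2)
    R = rho ** d + nsr * np.eye(len(offsets))
    # p[i]: model correlation between observation i and the desired sample
    p = rho ** np.linalg.norm(offsets - np.asarray(est_offset, float), axis=1)
    return np.linalg.solve(R, p)

# Estimate the centre of a 2x2 cell of unit-spaced samples:
w = wiener_weights([(0, 0), (0, 1), (1, 0), (1, 1)], (0.5, 0.5))
estimate = lambda samples: float(np.dot(w, samples))  # weighted combination
```

In a full SR algorithm these weights would be recomputed (or looked up) per output pixel as the local sample geometry changes; the symmetric geometry above yields four equal weights.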
NASA Astrophysics Data System (ADS)
Sampat, Nitin; Grim, John F.; O'Hara, James E.
1998-04-01
The digital camera market is growing at an explosive rate. At the same time, the quality of photographs printed on ink-jet printers continues to improve. Most consumer cameras are designed with the monitor as the target output device, not the printer. When printing images from a camera, the user needs to optimize the camera and printer combination in order to maximize image quality. We describe the details of one such method for improving image quality using an AGFA digital camera and ink-jet printer combination. Using Adobe PhotoShop, we generated optimum red, green, and blue transfer curves that match the scene content to the printer's output capabilities. Application of these curves to the original digital image resulted in a print with more shadow detail, no loss of highlight detail, a smoother tone scale, and more saturated colors; the result was visually more pleasing than images captured and printed without any 'correction'. While we report the results for one camera-printer combination, we tested this technique on numerous digital camera and printer combinations and in each case produced a better-looking image. We also discuss the problems we encountered in implementing this technique.
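The per-channel transfer-curve correction described above (performed by the authors in Adobe PhotoShop) amounts to applying a lookup table to each channel. A minimal sketch, with invented curve control points:

```python
import numpy as np

def apply_transfer_curves(img, curves):
    """Apply per-channel tone curves to an 8-bit RGB image.
    `curves` maps channel index to (input_points, output_points) control
    points, interpolated into a full 256-entry lookup table."""
    out = np.empty_like(img)
    for c, (xs, ys) in curves.items():
        lut = np.interp(np.arange(256), xs, ys).astype(np.uint8)
        out[..., c] = lut[img[..., c]]
    return out

# Illustrative curves: lift red shadows slightly, leave green/blue alone.
curves = {
    0: ([0, 64, 255], [0, 80, 255]),   # red: open up shadow detail
    1: ([0, 255], [0, 255]),           # green: identity
    2: ([0, 255], [0, 255]),           # blue: identity
}
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[..., 0] = 64                       # a dark red test patch
corrected = apply_transfer_curves(img, curves)
```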
Compression of CCD raw images for digital still cameras
NASA Astrophysics Data System (ADS)
Sriram, Parthasarathy; Sudharsanan, Subramania
2005-03-01
Lossless compression of raw CCD images captured using color filter arrays has several benefits. The benefits include improved storage capacity, reduced memory bandwidth, and lower power consumption for digital still camera processors. The paper discusses the benefits in detail and proposes the use of a computationally efficient block adaptive scheme for lossless compression. Experimental results are provided that indicate that the scheme performs well for CCD raw images attaining compression factors of more than two. The block adaptive method also compares favorably with JPEG-LS. A discussion is provided indicating how the proposed lossless coding scheme can be incorporated into digital still camera processors enabling lower memory bandwidth and storage requirements.
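The abstract does not specify the coding scheme, so the following is only a generic sketch of the idea behind block-adaptive lossless CFA compression: same-color DPCM prediction along a Bayer row, a zigzag mapping of signed residuals, Golomb-Rice coding, and a per-block search for the best Rice parameter k. All details are assumptions, not the authors' method.

```python
def rice_encode(value, k):
    """Golomb-Rice code for a non-negative integer: unary quotient + k-bit remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b") if k else "1" * q + "0"

def zigzag(e):
    """Map a signed prediction error to a non-negative integer."""
    return 2 * e if e >= 0 else -2 * e - 1

def encode_row(row, k):
    """DPCM a Bayer row: predict each sample from the previous same-colour
    sample (two positions to the left), then Rice-code the mapped residual."""
    bits = []
    for i, s in enumerate(row):
        pred = row[i - 2] if i >= 2 else 0
        bits.append(rice_encode(zigzag(s - pred), k))
    return "".join(bits)

# A per-block rate comparison lets an encoder adapt k to each block:
row = [100, 50, 102, 52, 101, 51, 103, 50]   # alternating G/R Bayer samples
best_k = min(range(8), key=lambda k: len(encode_row(row, k)))
```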
NASA Astrophysics Data System (ADS)
Gomes, Gary G.
1986-05-01
A cost effective and supportable color visual system has been developed to provide the necessary visual cues to United States Air Force B-52 bomber pilots training to become proficient at the task of inflight refueling. This camera model visual system approach is not suitable for all simulation applications, but provides a cost effective alternative to digital image generation systems when high fidelity of a single movable object is required. The system consists of a three-axis gimballed KC-135 tanker model, a range carriage mounted color augmented monochrome television camera, interface electronics, a color light valve projector, and an infinity optics display system.
USDA-ARS?s Scientific Manuscript database
Photography has been a welcome tool in helping to document and convey qualitative soil information. Greater availability of digital cameras with increased information storage capabilities has promoted novel uses of this technology in investigations of water movement patterns, organic matter conte...
[Constructing 3-dimensional colorized digital dental model assisted by digital photography].
Ye, Hong-qiang; Liu, Yu-shu; Liu, Yun-song; Ning, Jing; Zhao, Yi-jiao; Zhou, Yong-sheng
2016-02-18
To explore a method of constructing a universal 3-dimensional (3D) colorized digital dental model which can be displayed and edited in common 3D software (such as the Geomagic series), in order to improve the visual effect of digital dental models in 3D software. The morphological data of teeth and gingivae were obtained by an intra-oral scanning system (3Shape TRIOS) to construct 3D digital dental models, which were exported as STL files. Meanwhile, referring to the accredited photography guide of the American Academy of Cosmetic Dentistry (AACD), five selected digital photographs of the patients' teeth and gingivae were taken by a digital single-lens reflex camera (DSLR) with the same exposure parameters (except occlusal views) to capture the color data. In Geomagic Studio 2013, after the STL file of the 3D digital dental model was imported, the digital photographs were projected onto the 3D digital dental model at the corresponding positions and angles. The junctions of different photos were carefully trimmed to get continuous and natural color transitions. The 3D colorized digital dental model thus constructed was exported as an OBJ file or a WRP file, the latter being a proprietary format for Geomagic series software. For the purpose of evaluating the visual effect of the 3D colorized digital model, a rating scale on color simulation effect from the patients' perspective was used. Sixteen patients were recruited and their scores on colored and non-colored digital dental models were recorded. The data were analyzed using the McNemar-Bowker test in SPSS 20. A universal 3D colorized digital dental model with better color simulation was constructed based on intra-oral scanning and digital photography. For clinical application, the 3D colorized digital dental models, combined with 3D face images, were introduced into the 3D smile design of aesthetic rehabilitation, which could improve the patients' cognition of the esthetic digital design and virtual prosthetic effect.
A universal 3D colorized digital dental model with better color simulation can be constructed with the assistance of a 3D dental scanning system and digital photography. In clinical practice, communication between dentist and patients could be improved by the better visual perception afforded by colorized 3D digital dental models.
Anaglyph Image Technology As a Visualization Tool for Teaching Geology of National Parks
NASA Astrophysics Data System (ADS)
Stoffer, P. W.; Phillips, E.; Messina, P.
2003-12-01
Anaglyphic stereo viewing technology emerged in the mid-1800s. Anaglyphs use offset images in contrasting colors (typically red and cyan) that, when viewed through color filters, produce a three-dimensional (3-D) image. Modern anaglyph image technology has become increasingly easy to use and relatively inexpensive using digital cameras, scanners, color printing, and common image manipulation software. Perhaps the primary drawbacks of anaglyph images include visualization problems with primary colors (such as flowers, bright clothing, or blue sky) and distortion factors in large depth-of-field images. However, anaglyphs are more versatile than polarization techniques since they can be printed, displayed on computer screens (such as on websites), or projected with a single projector (as slides or digital images), and red and cyan viewing glasses cost less than polarization glasses and other 3-D viewing alternatives. Anaglyph images are especially well suited for most natural landscapes, such as views dominated by natural earth tones (grays, browns, greens), and they work well for sepia and black and white images (making the conversion of historic stereo photography into anaglyphs easy). We used a simple stereo camera setup incorporating two digital cameras with a rigid base to photograph landscape features in national parks (including arches, caverns, cactus, forests, and coastlines). We also scanned historic stereographic images. Using common digital image manipulation software we created websites featuring anaglyphs of geologic features from national parks. We used the same images for popular 3-D poster displays at the U.S. Geological Survey Open House 2003 in Menlo Park, CA. Anaglyph photography could easily be used in combined educational outdoor activities and laboratory exercises.
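The red/cyan composition the authors describe can be sketched in a few lines: the red channel of the anaglyph comes from the left-eye image, and the green and blue channels from the right-eye image (array shapes and values below are illustrative).

```python
import numpy as np

def make_anaglyph(left, right):
    """Compose a red/cyan anaglyph: the red channel comes from the left-eye
    image, green and blue from the right-eye image. Viewed through red/cyan
    glasses, each eye then sees only its own view."""
    out = right.copy()
    out[..., 0] = left[..., 0]
    return out

# Two toy "views" of the same bright feature, offset horizontally by parallax:
left = np.zeros((4, 8, 3), dtype=np.uint8);  left[:, 2:4] = 200
right = np.zeros((4, 8, 3), dtype=np.uint8); right[:, 3:5] = 200
ana = make_anaglyph(left, right)
```

Note how a saturated pure-red or pure-blue object survives in only one eye's view, which is exactly the primary-color drawback mentioned in the abstract.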
Leeuw, Thomas; Boss, Emmanuel
2018-01-16
HydroColor is a mobile application that utilizes a smartphone's camera and auxiliary sensors to measure the remote sensing reflectance of natural water bodies. HydroColor uses the smartphone's digital camera as a three-band radiometer. Users are directed by the application to collect a series of three images. These images are used to calculate the remote sensing reflectance in the red, green, and blue broad wavelength bands. As with satellite measurements, the reflectance can be inverted to estimate the concentration of absorbing and scattering substances in the water, which are predominately composed of suspended sediment, chlorophyll, and dissolved organic matter. This publication describes the measurement method and investigates the precision of HydroColor's reflectance and turbidity estimates compared to commercial instruments. It is shown that HydroColor can measure the remote sensing reflectance to within 26% of a precision radiometer and turbidity within 24% of a portable turbidimeter. HydroColor distinguishes itself from other water quality camera methods in that its operation is based on radiometric measurements instead of image color. HydroColor is one of the few mobile applications to use a smartphone as a completely objective sensor, as opposed to subjective user observations or color matching using the human eye. This makes HydroColor a powerful tool for crowdsourcing of aquatic optical data.
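The abstract does not reproduce the reflectance equation, but the standard above-water protocol that HydroColor-style measurements follow can be sketched as below. The surface reflectance factor (0.028) and the 18% gray-card reflectance are assumptions taken from that general protocol, not values quoted in this record.

```python
import math

def remote_sensing_reflectance(water, sky, card, rho=0.028, card_reflectance=0.18):
    """Per-band remote sensing reflectance from the three images such apps
    collect: water-leaving radiance is the water image minus reflected sky
    (surface reflectance factor rho), and downwelling irradiance is inferred
    from a gray card of known reflectance. Inputs are relative radiances
    (e.g. mean pixel counts in one band); constants follow the common
    above-water protocol and are assumptions here."""
    Ed = math.pi * card / card_reflectance   # downwelling irradiance estimate
    return (water - rho * sky) / Ed          # reflectance in sr^-1

# Example with made-up red-band radiance values:
rrs_red = remote_sensing_reflectance(water=12.0, sky=50.0, card=80.0)
```

Applied per channel to the red, green, and blue bands, this yields the three-band reflectance spectrum that is then inverted for turbidity or constituent concentrations.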
Optimization of digitization procedures in cultural heritage preservation
NASA Astrophysics Data System (ADS)
Martínez, Bea; Mitjà, Carles; Escofet, Jaume
2013-11-01
The digitization of both volumetric and flat objects is nowadays the preferred method for preserving cultural heritage items. High quality digital files obtained from photographic plates, films and prints, paintings, drawings, gravures, fabrics and sculptures allow not only for a wider diffusion and on-line transmission, but also for the preservation of the original items from future handling. Early digitization procedures used scanners for flat opaque or translucent objects and cameras only for volumetric or flat highly texturized materials. The technical obsolescence of high-end scanners and the improvement achieved by professional cameras have resulted in a wide use of cameras with digital backs to digitize any kind of cultural heritage item. Since the lens, the digital back, the software controlling the camera, and the digital image processing provide a wide range of possibilities, it is necessary to standardize the methods used in the reproduction work so as to preserve the original item's properties as faithfully as possible. This work presents an overview of methods used for camera system characterization, as well as the best procedures for identifying and counteracting the effects of residual lens aberrations, sensor aliasing, image illumination, color management, and image optimization by means of parametric image processing. As a corollary, the work shows some examples of reproduction workflows applied to the digitization of valuable art pieces and glass plate photographic black and white negatives.
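Camera system characterization of the kind mentioned above is commonly done by regressing camera RGB onto CIEXYZ over a set of measured patches. A minimal sketch with synthetic data; the polynomial basis and the training values are invented for illustration, and any real workflow would fit against spectrophotometer measurements of a color chart.

```python
import numpy as np

def expand(rgb):
    """Expand camera RGB rows into a low-order polynomial basis
    [1, R, G, B, RG, RB, GB] (a common choice; exact bases vary by study)."""
    R, G, B = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.column_stack([np.ones_like(R), R, G, B, R * G, R * B, G * B])

def fit_characterization(rgb, xyz):
    """Least-squares polynomial mapping from camera RGB to CIEXYZ."""
    coeffs, *_ = np.linalg.lstsq(expand(rgb), xyz, rcond=None)
    return coeffs

def apply_characterization(rgb, coeffs):
    return expand(rgb) @ coeffs

# Synthetic training patches with a made-up linear camera-to-XYZ relationship:
rng = np.random.default_rng(0)
rgb = rng.random((50, 3))
true_matrix = np.array([[0.4, 0.2, 0.1], [0.3, 0.6, 0.1], [0.2, 0.1, 0.9]])
xyz = rgb @ true_matrix
coeffs = fit_characterization(rgb, xyz)
err = np.abs(apply_characterization(rgb, coeffs) - xyz).max()
```

Because the synthetic mapping is linear and the basis contains the linear terms, the fit recovers it essentially exactly; real camera data would leave a residual that measures characterization quality.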
New concept high-speed and high-resolution color scanner
NASA Astrophysics Data System (ADS)
Nakashima, Keisuke; Shinoda, Shin'ichi; Konishi, Yoshiharu; Sugiyama, Kenji; Hori, Tetsuya
2003-05-01
We have developed a new concept high-speed and high-resolution color scanner (Blinkscan) using digital camera technology. With our most advanced sub-pixel image processing technology, approximately 12 million pixels of image data can be captured. This high-resolution imaging capability allows various uses such as OCR, color document reading, and document camera applications. The scan time is only about 3 seconds for a letter-size sheet. Blinkscan scans documents placed "face up" on its scan stage, without any special illumination. Because a high-resolution color document can be input into a PC easily and at high speed, a paperless system can be built with little effort. It is small, and since its occupancy area is also small, it can be set on an individual desk. Blinkscan offers the usability of a digital camera and the accuracy of a flatbed scanner with high-speed processing. Several hundred Blinkscan units are now shipping, mainly for receptionist operations in banks and securities firms. We show the high-speed and high-resolution architecture of Blinkscan. A comparison of operation time with conventional image capture devices makes the advantage of Blinkscan clear. An image evaluation under a variety of environmental conditions, such as geometric distortions or non-uniformity of brightness, is also presented.
Can light-field photography ease focusing on the scalp and oral cavity?
Taheri, Arash; Feldman, Steven R
2013-08-01
Capturing a well-focused image using an autofocus camera can be difficult in the oral cavity and on a hairy scalp. Light-field digital cameras capture data regarding the color, intensity, and direction of rays of light. Having information regarding the direction of rays of light, computer software can be used to focus on different subjects in the field after the image data have been captured. A light-field camera was used to capture images of the scalp and oral cavity. The related computer software was used to focus on the scalp or different parts of the oral cavity. The final pictures were compared with pictures taken with conventional compact digital cameras. The camera worked well for the oral cavity. It also captured pictures of the scalp easily; however, we had to click repeatedly on different points between the hairs to select the scalp for focusing. A major drawback of the system was the resolution of the resulting pictures, which was lower than that of conventional digital cameras. Light-field digital cameras are fast and easy to use. They can capture more information over the full depth of field compared with conventional cameras. However, the resolution of the pictures is relatively low. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Yamanel, Kivanc; Caglar, Alper; Özcan, Mutlu; Gulsah, Kamran; Bagis, Bora
2010-12-01
This study evaluated the color parameters of resin composite shade guides determined using a colorimeter and a digital imaging method. Four composite shade guides, namely: two nanohybrid (Grandio [Voco GmbH, Cuxhaven, Germany]; Premise [KerrHawe SA, Bioggio, Switzerland]) and two hybrid (Charisma [Heraeus Kulzer, GmbH & Co. KG, Hanau, Germany]; Filtek Z250 [3M ESPE, Seefeld, Germany]) were evaluated. Ten shade tabs were selected (A1, A2, A3, A3.5, A4, B1, B2, B3, C2, C3) from each shade guide. CIE Lab values were obtained using digital imaging and a colorimeter (ShadeEye NCC Dental Chroma Meter, Shofu Inc., Kyoto, Japan). The data were analyzed using two-way analysis of variance and the Bonferroni post hoc test. Overall, the mean ΔE values from different composite pairs demonstrated statistically significant differences when evaluated with the colorimeter (p < 0.001), but there was no significant difference with the digital imaging method (p = 0.099). Across both measurement methods, 80% of the shade guide pairs from different composites (97/120) showed color differences greater than 3.7 (moderately perceptible mismatch), and 49% (59/120) had obvious mismatch (ΔE > 6.8). For all shade pairs evaluated, the most significant shade mismatches were obtained between Grandio-Filtek Z250 (p = 0.021) and Filtek Z250-Premise (p = 0.01) regarding ΔE mean values, whereas the best shade match was between Grandio-Charisma (p = 0.255) regardless of the measurement method. The best color match (mean ΔE values) was recorded for the A1, A2, and A3 shade pairs in both methods. When proper object-camera distance, digital camera settings, and suitable illumination conditions are provided, the digital imaging method could be used in the assessment of color parameters. Interchanging use of shade guides from different composite systems should be avoided during color selection. © 2010, COPYRIGHT THE AUTHORS. JOURNAL COMPILATION © 2010, WILEY PERIODICALS, INC.
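The ΔE values and the 3.7/6.8 mismatch thresholds above can be illustrated with a short computation; the CIE76 formula is assumed (the abstract does not name the ΔE variant), and the Lab triplets are hypothetical shade-tab readings, not data from the study.

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 color difference between two CIELab triplets:
    sqrt(dL^2 + da^2 + db^2)."""
    return math.dist(lab1, lab2)

def mismatch_category(de):
    """Thresholds as used in the study: > 3.7 is a moderately perceptible
    mismatch, > 6.8 an obvious mismatch."""
    if de > 6.8:
        return "obvious mismatch"
    if de > 3.7:
        return "moderately perceptible mismatch"
    return "acceptable match"

# Two hypothetical A2 shade-tab readings from different composite systems:
de = delta_e_ab((72.1, 1.5, 18.2), (68.0, 2.0, 22.5))
```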
Preparing images for publication: part 2.
Bengel, Wolfgang; Devigus, Alessandro
2006-08-01
The transition from conventional to digital photography presents many advantages for authors and photographers in the field of dentistry, but also many complexities and potential problems. No uniform procedures for authors and publishers exist at present for producing high-quality dental photographs. This two-part article aims to provide guidelines for preparing images for publication and improving communication between these two parties. Part 1 provided information about basic color principles, factors that can affect color perception, and digital color management. Part 2 describes the camera setup, discusses how to take a photograph suitable for publication, and outlines steps for the image editing process.
Advanced High-Definition Video Cameras
NASA Technical Reports Server (NTRS)
Glenn, William
2007-01-01
A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 × 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.
Quigley, Elizabeth A; Tokay, Barbara A; Jewell, Sarah T; Marchetti, Michael A; Halpern, Allan C
2015-08-01
Photographs are invaluable dermatologic diagnostic, management, research, teaching, and documentation tools. Digital Imaging and Communications in Medicine (DICOM) standards exist for many types of digital medical images, but there are no DICOM standards for camera-acquired dermatologic images to date. To identify and describe existing or proposed technology and technique standards for camera-acquired dermatologic images in the scientific literature. Systematic searches of the PubMed, EMBASE, and Cochrane databases were performed in January 2013 using photography and digital imaging, standardization, and medical specialty and medical illustration search terms and augmented by a gray literature search of 14 websites using Google. Two reviewers independently screened titles of 7371 unique publications, followed by 3 sequential full-text reviews, leading to the selection of 49 publications with the most recent (1985-2013) or detailed description of technology or technique standards related to the acquisition or use of images of skin disease (or related conditions). No universally accepted existing technology or technique standards for camera-based digital images in dermatology were identified. Recommendations are summarized for technology imaging standards, including spatial resolution, color resolution, reproduction (magnification) ratios, postacquisition image processing, color calibration, compression, output, archiving and storage, and security during storage and transmission. Recommendations are also summarized for technique imaging standards, including environmental conditions (lighting, background, and camera position), patient pose and standard view sets, and patient consent, privacy, and confidentiality. Proposed standards for specific-use cases in total body photography, teledermatology, and dermoscopy are described. 
The literature is replete with descriptions of obtaining photographs of skin disease, but universal imaging standards have not been developed, validated, and adopted to date. Dermatologic imaging is evolving without defined standards for camera-acquired images, leading to variable image quality and limited exchangeability. The development and adoption of universal technology and technique standards may first emerge in scenarios when image use is most associated with a defined clinical benefit.
Estimation of melanin content in iris of human eye: prognosis for glaucoma diagnostics
NASA Astrophysics Data System (ADS)
Bashkatov, Alexey N.; Koblova, Ekaterina V.; Genina, Elina A.; Kamenskikh, Tatyana G.; Dolotov, Leonid E.; Sinichkin, Yury P.; Tuchin, Valery V.
2007-02-01
Based on experimental data obtained in vivo from digital analysis of color images of human irises, the mean melanin content in human eye irises has been estimated. For registration of the color images, a digital camera (Olympus C-5060) was used. The images were obtained from irises of healthy volunteers as well as from irises of patients with open-angle glaucoma. A computer program was developed for digital analysis of the images. The results are useful for the development of novel methods, and the optimization of existing methods, of non-invasive glaucoma diagnostics.
It's not the pixel count, you fool
NASA Astrophysics Data System (ADS)
Kriss, Michael A.
2012-01-01
The first thing a "marketing guy" asks the digital camera engineer is "how many pixels does it have? We need as many megapixels as possible, since the other guys are killing us with their 'umpteen' megapixel pocket-sized digital cameras." And so it goes until the pixels get smaller and smaller in order to inflate the pixel count in the never-ending pixel wars. These small pixels just are not very good. The truth of the matter is that the most important feature of digital cameras in the last five years is the automatic motion control to stabilize the image on the sensor, along with some very sophisticated image processing. All the rest has been hype and some "cool" design. What is the future for digital imaging, and what will drive growth of camera sales (not counting the cell phone cameras, which totally dominate the market in terms of camera sales) and, more importantly, after-sales profits? Well, sit in on the Dark Side of Color and find out what is being done to increase the after-sales profits, and don't be surprised if it has been done long ago in some basement lab of a photographic company and, of course, before its time.
NASA Astrophysics Data System (ADS)
Quan, Shuxue
2009-02-01
Bayer patterns, in which a single value of red, green, or blue is available at each pixel, are widely used in digital color cameras. The reconstruction of the full color image is often referred to as demosaicking. This paper introduces a new approach: morphological demosaicking. The approach is based on strong edge directionality selection and interpolation, followed by morphological operations to refine the edge directionality selection and reduce color aliasing. Finally, a performance evaluation and examples of color artifact reduction are shown.
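The edge-directionality selection that such demosaicking methods start from can be sketched as below. This shows only the basic directional choice at one pixel, not the paper's morphological refinement, and the mosaic values are invented.

```python
import numpy as np

def interpolate_green_at(mosaic, y, x):
    """Edge-directed estimate of the green value at a red/blue site of a
    Bayer mosaic: compare horizontal and vertical gradients of the
    neighbouring green samples and average along the direction of the weaker
    gradient, i.e. along rather than across an edge. A minimal sketch of the
    directional-selection step only."""
    up, down = mosaic[y - 1, x], mosaic[y + 1, x]
    left, right = mosaic[y, x - 1], mosaic[y, x + 1]
    if abs(int(left) - int(right)) <= abs(int(up) - int(down)):
        return (int(left) + int(right)) // 2   # weaker horizontal gradient
    return (int(up) + int(down)) // 2          # weaker vertical gradient

# A vertical edge: columns 0-1 dark, columns 2-3 bright.
mosaic = np.array([
    [10, 12, 200, 202],
    [11, 13, 201, 203],
    [10, 12, 200, 202],
], dtype=np.uint8)
g = interpolate_green_at(mosaic, 1, 1)   # interpolates along the edge
```

Averaging along the edge direction avoids the "zipper" artifacts that naive bilinear interpolation produces across edges; the morphological stage in the paper then cleans up isolated misclassified directions.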
[Nitrogen status diagnosis of rice by using a digital camera].
Jia, Liang-Liang; Fan, Ming-Sheng; Zhang, Fu-Suo; Chen, Xin-Ping; Lü, Shi-Hua; Sun, Yan-Ming
2009-08-01
In the present research, a field experiment with different N application rates was conducted to study the possibility of using visible-band color analysis to monitor the N status of a rice canopy. Correlations between the visible-band color intensities of rice canopy images acquired with a digital camera and conventional nitrogen status diagnosis parameters (leaf SPAD chlorophyll meter readings, total N content, aboveground biomass, and N uptake) were studied. The results showed that the red color intensity (R), green color intensity (G), and normalized redness intensity (NRI) had significant inverse linear correlations with the conventional N diagnosis parameters of SPAD readings, total N content, aboveground biomass, and total N uptake. The correlation coefficient values (r) were from -0.561 to -0.714 for the red band (R), from -0.452 to -0.505 for the green band (G), and from -0.541 to -0.817 for normalized redness intensity (NRI). The normalized greenness intensity (NGI), in contrast, showed a significant positive correlation with the conventional N parameters, with correlation coefficient values (r) from 0.505 to 0.559. Compared with SPAD readings, the normalized redness intensity (NRI), with absolute r values of 0.541-0.780 against the conventional N parameters, could better express the N status of rice. The digital image color analysis method thus shows potential for use in rice N status diagnosis.
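The normalized intensities used above can be computed directly from mean channel values. The normalization NRI = R/(R+G+B) and NGI = G/(R+G+B) is inferred from the index names rather than stated in the abstract, and the canopy values below are invented.

```python
import numpy as np

def canopy_color_indices(image):
    """Mean channel intensities and normalized redness/greenness indices of a
    canopy image, in the spirit of digital-camera nitrogen diagnosis:
    NRI = R/(R+G+B), NGI = G/(R+G+B), computed from mean channel values.
    (The exact normalization in the paper is an assumption here.)"""
    r, g, b = (image[..., c].mean() for c in range(3))
    total = r + g + b
    return {"R": r, "G": g, "NRI": r / total, "NGI": g / total}

# A toy "well-fertilized" canopy patch: dark green, low red.
img = np.zeros((4, 4, 3), dtype=float)
img[..., 0] = 40.0    # red
img[..., 1] = 110.0   # green
img[..., 2] = 30.0    # blue
idx = canopy_color_indices(img)
```

Under the paper's findings, a lower NRI (and higher NGI) from such a patch would indicate better nitrogen status; in practice the indices would be regressed against SPAD readings or tissue N measurements.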
Accurate color images: from expensive luxury to essential resource
NASA Astrophysics Data System (ADS)
Saunders, David R.; Cupitt, John
2002-06-01
Over ten years ago the National Gallery in London began a program to make digital images of paintings in the collection using a colorimetric imaging system. This was to provide a permanent record of the state of paintings against which future images could be compared to determine if any changes had occurred. It quickly became apparent that such images could be used not only for scientific purposes, but also in applications where transparencies were then being used, for example as source materials for printed books and catalogues or for computer-based information systems. During the 1990s we were involved in the development of a series of digital cameras that combined the high color accuracy of the original 'scientific' imaging system with the familiarity and portability of a medium format camera. This has culminated in the program of digitization now in progress at the National Gallery. By the middle of 2001 we will have digitized all the major paintings in the collection at a resolution of 10,000 pixels along their longest dimension and with calibrated color; we are on target to digitize the whole collection by the end of 2002. The images are available on-line within the museum for consultation, so that Gallery departments can use them in printed publications and on the Gallery's website. We describe the development of the imaging systems used at the National Gallery, and how the research we have conducted into high-resolution, accurate color imaging has developed from being a peripheral, if harmless, research activity to becoming a central part of the Gallery's information and publication strategy. Finally, we discuss some outstanding issues, such as interfacing our color management procedures with the systems used by external organizations.
High-speed imaging using 3CCD camera and multi-color LED flashes
NASA Astrophysics Data System (ADS)
Hijazi, Ala; Friedl, Alexander; Cierpka, Christian; Kähler, Christian; Madhavan, Vis
2017-11-01
This paper demonstrates the possibility of capturing full-resolution, high-speed image sequences using a regular 3CCD color camera in conjunction with high-power light emitting diodes of three different colors. This is achieved using a novel approach, referred to as spectral shuttering, where a high-speed image sequence is captured using short-duration light pulses of different colors that are sent consecutively in very close succession. The work presented in this paper demonstrates the feasibility of configuring a high-speed camera system from low-cost and readily available off-the-shelf components. This camera can be used for recording six-frame sequences at frame rates up to 20 kHz, or three-frame sequences at even higher frame rates. Both color crosstalk and spatial matching between the different channels of the camera are found to be within acceptable limits. A small amount of magnification difference between the channels is found, and a simple calibration procedure for correcting the images is introduced. The images captured using this approach are of sufficient quality to be used for obtaining full-field quantitative information using techniques such as digital image correlation and particle image velocimetry. A sequence of six high-speed images of a bubble splash recorded at 400 Hz is presented as a demonstration.
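Demultiplexing a spectral-shuttered recording back into a time-ordered sequence is straightforward once the flash order is known: each LED flash of a different color exposes one channel of the 3CCD camera, so each recorded RGB frame holds three consecutive time samples. The red-green-blue flash order below is an assumption for illustration.

```python
import numpy as np

def demultiplex_spectral_shutter(frames):
    """Unpack a spectral-shuttered recording into a time-ordered grayscale
    sequence: every recorded RGB frame contributes its three channels as
    three consecutive time samples (flash order red -> green -> blue assumed)."""
    return [frame[..., c] for frame in frames for c in range(3)]

# Two recorded RGB frames become a six-frame high-speed sequence:
f1 = np.dstack([np.full((2, 2), v) for v in (1, 2, 3)])
f2 = np.dstack([np.full((2, 2), v) for v in (4, 5, 6)])
seq = demultiplex_spectral_shutter([f1, f2])
```

In a real setup the per-channel crosstalk and magnification corrections the paper mentions would be applied to each extracted frame before quantitative analysis.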
How Phoenix Creates Color Images (Animation)
NASA Technical Reports Server (NTRS)
2008-01-01
[figure removed for brevity, see original site] Click on image for animation. This simple animation shows how a color image is made from images taken by Phoenix. The Surface Stereo Imager captures the same scene with three different filters. The images are sent to Earth in black and white and the color is added by mission scientists. By contrast, consumer digital cameras and cell phones have filters built in and do all of the color processing within the camera itself. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.
Lights, Camera, Spectroscope! The Basics of Spectroscopy Disclosed Using a Computer Screen
ERIC Educational Resources Information Center
Garrido-González, José J.; Trillo-Alcalá, María; Sánchez-Arroyo, Antonio J.
2018-01-01
The generation of secondary colors in digital devices by means of the additive red, green, and blue color model (RGB) can be a valuable way to introduce students to the basics of spectroscopy. This work has been focused on the spectral separation of secondary colors of light emitted by a computer screen into red, green, and blue bands, and how the…
Introduction of A New Toolbox for Processing Digital Images From Multiple Camera Networks: FMIPROT
NASA Astrophysics Data System (ADS)
Melih Tanis, Cemal; Nadir Arslan, Ali
2017-04-01
Webcam networks intended for scientific monitoring of ecosystems provide digital images and other environmental data for various studies. Other types of camera networks can also be used for scientific purposes, e.g. traffic webcams for phenological studies, or camera networks for ski tracks and avalanche monitoring over mountains for hydrological studies. To efficiently harness the potential of these camera networks, easy-to-use software that can obtain and handle images from different networks with different protocols and standards is necessary. Numerous software packages for analyzing images from webcam networks are freely available. These packages have different strong features not only for analyzing but also for post-processing digital images, but specifically for ease of use, applicability and scalability, a different set of features could be added. Thus, a more customized approach would be of high value, not only for analyzing images from comprehensive camera networks, but also for creating operational data extraction and processing with an easy-to-use toolbox. In this paper, we introduce a new toolbox, entitled the Finnish Meteorological Institute Image PROcessing Tool (FMIPROT), in which such a customized approach is followed. FMIPROT currently has the following features: • straightforward installation, • no software dependencies requiring extra installation, • communication with multiple camera networks, • automatic downloading and handling of images, • a user-friendly and simple user interface, • data filtering, • visualization of results on customizable plots, • plugins, which allow users to add their own algorithms. Current image analyses in FMIPROT include "Color Fraction Extraction" and "Vegetation Indices". The color fraction extraction analysis calculates the fractions of the red, green and blue colors in a region of interest, along with brightness and luminance parameters.
The vegetation indices analysis is a collection of indices used in vegetation phenology and includes "Green Fraction" (green chromatic coordinate), "Green-Red Vegetation Index" and "Green Excess Index". A "Snow cover fraction" analysis, which detects snow-covered pixels in the images and georeferences them on a geospatial plane to calculate the snow cover fraction, is currently being implemented. FMIPROT is being developed during the EU Life+ MONIMET project, for which we mounted 28 cameras at 14 different sites in Finland as the MONIMET camera network. In this paper, we present details of FMIPROT and analysis results from the MONIMET camera network, and discuss future planned developments of FMIPROT.
Study on color difference estimation method of medicine biochemical analysis
NASA Astrophysics Data System (ADS)
Wang, Chunhong; Zhou, Yue; Zhao, Hongxia; Sun, Jiashi; Zhou, Fengkun
2006-01-01
Biochemical analysis is an important inspection and diagnosis method in hospital clinics, and the biochemical analysis of urine is one important item. A urine test paper shows a corresponding color for each detection item and illness degree. The color difference between the standard threshold and the color of the urine test paper can be used to judge the degree of illness, enabling further analysis and diagnosis. Color is a three-dimensional psychophysical variable, while reflectance is one-dimensional; therefore, a color-difference estimation method for urine testing can achieve better precision and convenience than the conventional test method based on one-dimensional reflectance, and can support an accurate diagnosis. A digital camera can easily take an image of the urine test paper and thus carry out the urine biochemical analysis conveniently. In the experiment, the color image of the urine test paper is taken by a popular consumer digital camera and saved on a computer equipped with a simple color space conversion (RGB -> XYZ -> L*a*b*) and calculation software. The test sample is graded according to intelligent detection of quantitative color. The images taken at each test are saved on the computer, so the whole course of the illness can be monitored. This method can also be used in other medical biochemical analyses related to color. Experimental results show that this test method is quick and accurate; it can be used in hospitals, calibration organizations and homes, so its application prospects are extensive.
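As a rough illustration of the RGB -> XYZ -> L*a*b* conversion chain and the color-difference calculation described above, the following sketch assumes sRGB input and the D65 white point; the paper's actual camera characterization would replace the fixed sRGB matrix:

```python
def srgb_to_lab(r, g, b):
    # Normalize and linearize 8-bit sRGB values (IEC 61966-2-1 transfer function)
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # Linear sRGB -> CIEXYZ (D65)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    # CIEXYZ -> CIELAB relative to the D65 white point
    xn, yn, zn = 0.95047, 1.0, 1.08883
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e_ab(lab1, lab2):
    # CIE76 color difference: Euclidean distance in L*a*b* space
    return sum((p - q) ** 2 for p, q in zip(lab1, lab2)) ** 0.5
```

Grading a test strip then reduces to computing delta_e_ab between the measured patch color and each standard threshold color, and picking the closest.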
Enhancement of low light level images using color-plus-mono dual camera.
Jung, Yong Ju
2017-05-15
In digital photography, improving image quality in low-light shooting is a key user need. Unfortunately, conventional smartphone cameras that use a single, small image sensor cannot provide satisfactory quality in low light level images. A color-plus-mono dual camera consisting of two horizontally separated image sensors, which simultaneously capture a color and mono image pair of the same scene, could be useful for improving the quality of low light level images. However, an incorrect image fusion between the color and mono image pair could also have negative effects, such as the introduction of severe visual artifacts in the fused images. This paper proposes a selective image fusion technique that applies adaptive guided filter-based denoising and selective detail transfer to only those pixels deemed reliable with respect to binocular image fusion. We employ a dissimilarity measure and binocular just-noticeable-difference (BJND) analysis to identify unreliable pixels that are likely to cause visual artifacts during image fusion via joint color image denoising and detail transfer from the mono image. By constructing an experimental color-plus-mono camera system, we demonstrate that the BJND-aware denoising and selective detail transfer is helpful in improving image quality during low-light shooting.
NASA Astrophysics Data System (ADS)
Fang, Jingyu; Xu, Haisong; Wang, Zhehong; Wu, Xiaomin
2016-05-01
With colorimetric characterization, digital cameras can be used as image-based tristimulus colorimeters for color communication. In order to overcome the restriction of fixed capture settings adopted in conventional colorimetric characterization procedures, a novel method was proposed that takes capture settings into account. The method for calculating the colorimetric values of a measured image consists of five main steps. These include converting the RGB values to equivalent values under the training settings, using factors derived from an imaging-system model so as to build a bridge between different settings, and applying scaling factors in the preparation steps of the transformation mapping to avoid errors resulting from the nonlinearity of the polynomial mapping across different ranges of illumination levels. The experimental results indicate that the prediction error of the proposed method, measured by the CIELAB color difference formula, is less than 2 CIELAB units under different illumination levels and different correlated color temperatures. This prediction accuracy across capture settings matches that of the conventional method for a particular lighting condition.
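The polynomial mapping underlying camera characterization can be sketched as a least-squares fit from camera RGB to CIEXYZ; the second-order term set below is a common choice and an assumption here, not necessarily the expansion used in the paper:

```python
import numpy as np

def poly_terms(rgb):
    # Second-order polynomial expansion of one RGB triplet (term set is illustrative)
    r, g, b = rgb
    return [1.0, r, g, b, r * g, r * b, g * b, r * r, g * g, b * b]

def fit_characterization(rgb_train, xyz_train):
    # Least-squares fit of the polynomial RGB -> XYZ mapping over training patches
    A = np.array([poly_terms(c) for c in rgb_train])
    M, *_ = np.linalg.lstsq(A, np.array(xyz_train), rcond=None)
    return M

def apply_characterization(M, rgb):
    # Predict XYZ for one camera RGB triplet
    return np.array(poly_terms(rgb)) @ M
```

In practice the training pairs come from photographing a color chart whose patches have spectrophotometer-measured XYZ values.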
A DVD Spectroscope: A Simple, High-Resolution Classroom Spectroscope
ERIC Educational Resources Information Center
Wakabayashi, Fumitaka; Hamada, Kiyohito
2006-01-01
Digital versatile disks (DVDs) have been used to make an inexpensive yet high-resolution spectroscope suitable for classroom experiments; it can easily be made from common materials and gives clear, fine spectra of various light sources and colored materials. The observed spectra can be photographed with a digital camera, and such images can…
e-phenology: monitoring leaf phenology and tracking climate changes in the tropics
NASA Astrophysics Data System (ADS)
Morellato, Patrícia; Alberton, Bruna; Almeida, Jurandy; Alex, Jefersson; Mariano, Greice; Torres, Ricardo
2014-05-01
The e-phenology project is a multidisciplinary project combining research in Computer Science and Phenology. Its goal is to attack theoretical and practical problems involving the use of new technologies for remote phenological observation, aiming to detect local environmental changes. It is geared towards three objectives: (a) the use of new technologies of environmental monitoring based on remote phenology monitoring systems; (b) the creation of a protocol for a Brazilian long-term phenology monitoring program and for integration across disciplines, advancing our knowledge of seasonal responses to climate change within the tropics; and (c) the provision of models, methods and algorithms to support the management, integration and analysis of data from remote phenology systems. The research team is composed of computer scientists and phenology researchers. Our first results include: Phenology towers - We set up the first phenology tower in our core cerrado-savanna 1 study site at Itirapina, São Paulo, Brazil. The tower was fitted with a complete climatic station and a digital camera. The digital camera is set up to take a daily sequence of images (five images per hour, from 6:00 to 18:00 h). We set up similar phenology towers with climatic stations and cameras at five more sites: cerrado-savanna 2 (Pé de Gigante, SP), cerrado grassland 3 (Itirapina, SP), rupestrian fields 4 (Serra do Cipó, MG), seasonal forest 5 (Angatuba, SP) and Atlantic rainforest 6 (Santa Virgínia, SP). Phenology database - We finished the modeling and validation of a phenology database that stores ground phenology and near-remote phenology data, and we are carrying out the implementation with data ingestion. Remote phenology and image processing - We performed the first analyses of the phenology of cerrado sites 1 to 4 derived from digital images. The analyses were conducted by extracting color information (the RGB red, green and blue color channels) from selected parts of the image, named regions of interest (ROI).
Time series were derived using the green color channel. We analyzed a daily sequence of images (6:00 to 18:00 h). Our results are innovative and indicate great variation in the color change response of tropical trees. We validated the camera phenology against our on-the-ground direct observations at the core cerrado site 1. We are developing image processing software to automatically process the digital images and generate the time series for further analyses. New techniques and image features have been used to extract seasonal features from the data and for data processing, such as machine learning and visual rhythms. Machine learning was successfully applied to identify similar species within the image. Visual rhythms show promise as a new analytic tool for phenological interpretation. Next research steps include the analysis of longer data series, correlation with local climatic data, analysis and comparison of patterns among different vegetation sites, the preparation of a comprehensive protocol for digital camera phenology, and the development of new technologies to assess vegetation changes using digital cameras. Support: FAPESP-Microsoft Research, CNPq, CAPES.
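The green-channel index commonly used in camera phenology, the green chromatic coordinate gcc = G/(R+G+B) averaged over a region of interest, can be sketched as follows; this is a minimal illustration assuming an 8-bit RGB image array, and the project's actual software adds time-series handling:

```python
import numpy as np

def green_chromatic_coordinate(image, roi):
    # image: H x W x 3 array (R, G, B channels); roi: (row0, row1, col0, col1)
    r0, r1, c0, c1 = roi
    patch = image[r0:r1, c0:c1].astype(float)
    # Mean of each channel over the region of interest
    r, g, b = patch[..., 0].mean(), patch[..., 1].mean(), patch[..., 2].mean()
    total = r + g + b
    return g / total if total > 0 else 0.0
```

Computing this index once per image over a fixed ROI yields the seasonal greenness time series used for phenological analysis.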
NASA Technical Reports Server (NTRS)
Habibi, A.; Batson, B.
1976-01-01
Space Shuttle will be using a field-sequential color television system for the first few missions, but present plans are to switch to an NTSC color TV system for future missions. The field-sequential color TV system uses a modified black-and-white camera, producing a TV signal with a digital bandwidth of about 60 Mbps. This article discusses the characteristics of the Shuttle TV systems and proposes a bandwidth-compression technique for the field-sequential color TV system that could operate at 13 Mbps and produce a high-fidelity signal. The proposed bandwidth-compression technique is based on a two-dimensional DPCM system that utilizes the temporal, spectral, and spatial correlation inherent in field-sequential color TV imagery. The proposed system requires about 60 watts and fewer than 200 integrated circuits.
Preparing images for publication: part 1.
Devigus, Alessandro; Paul, Stefan
2006-04-01
Images play a vital role in the publication and presentation of clinical and scientific work. Within clinical photography, color reproduction has always been a contentious issue. With the development of new technologies, the variables affecting color reproduction have changed, and photographers have moved away from film-based to digital photographic imaging systems. To develop an understanding of color, knowledge about the basic principles of light and vision is important. An object's color is determined by which wavelengths of light it reflects. Colors of light and colors of pigment behave differently. Due to technical limitations, monitors and printers are unable to reproduce all the colors we can see with our eyes, also called the LAB color space. In order to optimize the output of digital clinical images, color management solutions need to be integrated in the photographic workflow; however, their use is still limited in the medical field. As described in part 2 of this article, calibrating your computer monitor and using an 18% gray background card are easy ways to enable more consistent color reproduction for publication. In addition, some basic information about the various camera settings is given to facilitate the use of this new digital equipment in daily practice.
NASA Astrophysics Data System (ADS)
Takeuchi, Eric B.; Flint, Graham W.; Bergstedt, Robert; Solone, Paul J.; Lee, Dicky; Moulton, Peter F.
2001-03-01
Electronic cinema projectors are being developed that use a digital micromirror device (DMD(TM)) to produce the image. Photera Technologies has developed a new architecture that produces truly digital imagery using discrete pulse trains of red, green, and blue light in combination with a DMD(TM), wherein the number of pulses delivered to the screen during a given frame can be defined in a purely digital fashion. To achieve this, a pulsed RGB laser technology pioneered by Q-Peak is combined with a novel projection architecture that we refer to as the Laser Digital Camera(TM). This architecture provides imagery wherein, during the time interval of each frame, individual pixels on the screen receive between zero and 255 discrete pulses of each color, a circumstance which yields 24-bit color. Greater color depth, or an increased frame rate, is achievable by increasing the pulse rate of the laser. Additionally, in the context of multi-screen theaters, a similar architecture permits our synchronously pulsed RGB source to simultaneously power three screens in a color-sequential manner, thereby providing an efficient use of photons, together with the simplifications which derive from using a single DMD(TM) chip in each projector.
Thin-filament pyrometry with a digital still camera.
Maun, Jignesh D; Sunderland, Peter B; Urban, David L
2007-02-01
A novel thin-filament pyrometer is presented. It involves a consumer-grade color digital still camera with 6 megapixels and 12 bits per color plane. SiC fibers were used; scanning electron microscopy found them to be uniform, with diameters of 13.9 µm. Measurements were performed in a methane-air coflowing laminar jet diffusion flame with a luminosity length of 72 mm. Calibration of the pyrometer was accomplished with B-type thermocouples. The pyrometry measurements yielded gas temperatures in the range of 1400-2200 K with an estimated uncertainty of ±60 K, a relative temperature resolution of ±0.215 K, a spatial resolution of 42 µm, and a temporal resolution of 0.66 ms. Fiber aging for 10 min had no effect on the results. Soot deposition was less problematic for the pyrometer than for the thermocouple.
High-speed line-scan camera with digital time delay integration
NASA Astrophysics Data System (ADS)
Bodenstorfer, Ernst; Fürtler, Johannes; Brodersen, Jörg; Mayer, Konrad J.; Eckel, Christian; Gravogl, Klaus; Nachtnebel, Herbert
2007-02-01
In high-speed image acquisition and processing systems, the speed of operation is often limited by the amount of available light, due to short exposure times. Therefore, high-speed applications often use line-scan cameras based on charge-coupled device (CCD) sensors with time delay integration (TDI). Synchronous shifting and accumulation of photoelectric charges on the CCD chip, according to the object's movement, results in a longer effective exposure time without introducing additional motion blur. This paper presents a high-speed color line-scan camera based on a commercial complementary metal oxide semiconductor (CMOS) area image sensor with a Bayer filter matrix and a field programmable gate array (FPGA). The camera implements a digital equivalent of the TDI effect exploited in CCD cameras. The proposed design benefits from the high frame rates of CMOS sensors and from the possibility of arbitrarily addressing the rows of the sensor's pixel array. For digital TDI, only a small number of rows are read out from the area sensor; these are then shifted and accumulated according to the movement of the inspected objects. This paper gives a detailed description of the digital TDI algorithm implemented on the FPGA. Relevant aspects for practical application are discussed and key features of the camera are listed.
A digital ISO expansion technique for digital cameras
NASA Astrophysics Data System (ADS)
Yoo, Youngjin; Lee, Kangeui; Choe, Wonhee; Park, SungChan; Lee, Seong-Deok; Kim, Chang-Yong
2010-01-01
Market demand for digital cameras with higher sensitivity under low-light conditions is increasing remarkably, and the digital camera market is now a tough race to provide higher ISO capability. In this paper, we explore an approach for increasing the maximum ISO capability of digital cameras without changing any structure of the image sensor or CFA. Our method is applied directly to the raw Bayer-pattern CFA image to avoid the nonlinearity and noise amplification that are usually introduced after the ISP (Image Signal Processor) of digital cameras. The proposed method fuses multiple short-exposure images, which are noisy but less blurred. Our approach is designed to avoid the ghost artifacts caused by hand shake and object motion. In order to achieve the desired ISO image quality, both the low-frequency chromatic noise and the fine-grain noise that usually appear in high-ISO images are removed, and we then modify the different layers created by a two-scale non-linear decomposition of the image. Once our approach has been performed on an input Bayer-pattern CFA image, the resultant Bayer image is further processed by the ISP to obtain a fully processed RGB image. The performance of our proposed approach is evaluated by comparing SNR (Signal to Noise Ratio), MTF50 (Modulation Transfer Function), color error ΔE*ab and visual quality with reference images whose exposure times are properly extended to a variety of target sensitivities.
PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras.
Zheng, Lei; Lukac, Rastislav; Wu, Xiaolin; Zhang, David
2009-04-01
Single-sensor digital color cameras use a process called color demosaicking to produce full color images from the data captured by a color filter array (CFA). The quality of demosaicked images is degraded by the sensor noise introduced during the image acquisition process. The conventional solution to combating CFA sensor noise is demosaicking first, followed by a separate denoising process. This strategy generates many noise-caused color artifacts in the demosaicking process, which are hard to remove in the denoising process. Few denoising schemes that work directly on CFA images have been presented because of the difficulties arising from the red, green and blue interlaced mosaic pattern, yet a well-designed "denoising first and demosaicking later" scheme can have advantages such as fewer noise-caused color artifacts and cost-effective implementation. This paper presents a principal component analysis (PCA)-based spatially adaptive denoising algorithm, which works directly on the CFA data, using a supporting window to analyze the local image statistics. By exploiting the spatial and spectral correlations existing in the CFA image, the proposed method can effectively suppress noise while preserving color edges and details. Experiments using both simulated and real CFA images indicate that the proposed scheme outperforms many existing approaches, including sophisticated demosaicking and denoising schemes, in terms of both objective measurement and visual evaluation.
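The core idea of PCA-based patch denoising, projecting vectorized local windows onto their leading principal components so that noise in the discarded components is suppressed, can be sketched as below; this omits the paper's CFA-specific supporting window and spatial adaptation:

```python
import numpy as np

def pca_denoise_patches(patches, n_keep):
    # patches: N x d matrix, each row a vectorized local window
    mean = patches.mean(axis=0)
    X = patches - mean
    # Principal components from the sample covariance of the patches
    cov = X.T @ X / len(X)
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    basis = vecs[:, -n_keep:]               # keep the top n_keep components
    # Project onto the retained subspace and reconstruct
    return (X @ basis) @ basis.T + mean
```

Because image patches are highly correlated while noise is not, most signal energy concentrates in a few components, and truncating the rest removes noise while preserving structure.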
NASA Astrophysics Data System (ADS)
Ruggeri, Marco; Hernandez, Victor; De Freitas, Carolina; Relhan, Nidhi; Silgado, Juan; Manns, Fabrice; Parel, Jean-Marie
2016-03-01
Hand-held wide-field contact color fundus photography is currently the standard method to acquire diagnostic images of children during examination under anesthesia and in the neonatal intensive care unit. The recent development of portable non-contact hand-held OCT retinal imaging systems has proved that OCT is of tremendous help in complementing fundus photography in the management of pediatric patients. Currently, there is no commercial or research system that combines color wide-field digital fundus and OCT imaging in a contact fashion. The contact of the probe with the cornea has the advantages of reducing the motion experienced by the photographer during imaging and providing fundus and OCT images with a wider field of view that includes the periphery of the retina. In this study we provide proof of concept for a contact-type hand-held unit for simultaneous color fundus and OCT live view of the retina of pediatric patients. The front piece of the hand-held unit consists of a contact ophthalmoscopy lens integrating a circular light guide that was recovered from a digital fundus camera for pediatric imaging. The custom-made rear piece consists of the optics to: 1) fold the visible aerial image of the fundus generated by the ophthalmoscopy lens onto a miniaturized board-level digital color camera; 2) conjugate the eye pupil to the galvanometric scanning mirrors of an OCT delivery system. Wide-field color fundus and OCT images were simultaneously obtained in an eye model and sequentially obtained on the eye of a conscious 25-year-old human subject with a healthy retina.
Introduction to Color Imaging Science
NASA Astrophysics Data System (ADS)
Lee, Hsien-Che
2005-04-01
Color imaging technology has become almost ubiquitous in modern life in the form of monitors, liquid crystal screens, color printers, scanners, and digital cameras. This book is a comprehensive guide to the scientific and engineering principles of color imaging. It covers the physics of light and color, how the eye and physical devices capture color images, how color is measured and calibrated, and how images are processed. It stresses physical principles and includes a wealth of real-world examples. The book will be of value to scientists and engineers in the color imaging industry and, with homework problems, can also be used as a text for graduate courses on color imaging.
BOREAS Level-0 C-130 Aerial Photography
NASA Technical Reports Server (NTRS)
Newcomer, Jeffrey A.; Dominguez, Roseanne; Hall, Forrest G. (Editor)
2000-01-01
For the BOReal Ecosystem-Atmosphere Study (BOREAS), C-130 and other aerial photography was collected to provide finely detailed and spatially extensive documentation of the condition of the primary study sites. The NASA C-130 Earth Resources aircraft can accommodate two mapping cameras during flight, each of which can be fitted with 6- or 12-inch focal-length lenses and black-and-white, natural-color, or color-IR film, depending upon requirements. Both cameras were often in operation simultaneously, although sometimes only the lower-resolution camera was deployed. When both cameras were in operation, the higher-resolution camera was often used in a more limited fashion. The acquired photography covers the period of April to September 1994. The aerial photography was delivered as rolls of large-format (9 x 9 inch) color transparency prints, with imagery from multiple missions (hundreds of prints) often contained within a single roll. A total of 1533 frames were collected from the C-130 platform for BOREAS in 1994. Note that the level-0 C-130 transparencies are not contained on the BOREAS CD-ROM set. An inventory file is supplied on the CD-ROM to inform users of all the data that were collected. Some photographic prints were made from the transparencies. In addition, BORIS staff digitized a subset of the transparencies and stored the images in JPEG format. The CD-ROM set contains a small subset of the collected aerial photography that was digitally scanned and stored as JPEG files for most tower and auxiliary sites in the NSA and SSA. See Section 15 for information about how to acquire additional imagery.
Preservation and Access to Manuscript Collections of the Czech National Library.
ERIC Educational Resources Information Center
Karen, Vladimir; Psohlavec, Stanislav
In 1996, the Czech National Library started a large-scale digitization of its extensive and invaluable collection of historical manuscripts and printed books. Each page of the selected documents is scanned using a high-resolution, full-color digital camera, processed, and archived on a CD-ROM disk. HTML coded description is added to the entire…
Digital camera auto white balance based on color temperature estimation clustering
NASA Astrophysics Data System (ADS)
Zhang, Lei; Liu, Peng; Liu, Yuling; Yu, Feihong
2010-11-01
Auto white balance (AWB) is an important technique for digital cameras. The human vision system has the ability to recognize the original color of an object in a scene illuminated by a light source whose color temperature differs from that of D65, the standard sunlight. However, recorded images or video clips can only record the information incident on the sensor, so the recording will appear different from the real scene observed by a human. Auto white balance is a technique to solve this problem. Traditional methods, such as the gray world assumption and white point estimation, may fail for scenes with large color patches. In this paper, an AWB method based on color temperature estimation clustering is presented and discussed. First, the method defines a list of several lighting conditions common in daily life, represented by their color temperatures, with a threshold for each color temperature to determine whether a light source is of that kind of illumination. Second, the image to be white balanced is divided into N blocks (N is determined empirically); for each block, the gray world assumption method is used to calculate the color cast, from which the color temperature of that block is estimated. Third, each calculated color temperature is compared with the color temperatures in the given illumination list; if the color temperature of a block is not within any of the thresholds in the given list, that block is discarded. Fourth, a majority selection is performed over the remaining blocks, and the color temperature having the most blocks is taken as the color temperature of the light source. Experimental results show that the proposed method works well for most commonly used light sources. Color casts are removed and the final images look natural.
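The block-wise gray-world classification and majority vote described above can be sketched as follows; the reference cast values and threshold are illustrative assumptions, not the paper's calibrated illuminant list:

```python
import numpy as np

# Hypothetical reference list: illuminant name -> typical (R/G, B/G) cast
# observed under the gray world assumption. Real systems calibrate these.
REFERENCE_CASTS = {"incandescent": (1.4, 0.6), "daylight": (1.0, 1.0), "shade": (0.85, 1.2)}
THRESHOLD = 0.15

def classify_block(block):
    # Gray-world estimate of the color cast for one block
    r, g, b = block[..., 0].mean(), block[..., 1].mean(), block[..., 2].mean()
    rg, bg = r / g, b / g
    best, dist = None, THRESHOLD
    for name, (ref_rg, ref_bg) in REFERENCE_CASTS.items():
        d = max(abs(rg - ref_rg), abs(bg - ref_bg))
        if d < dist:
            best, dist = name, d
    return best  # None -> cast outside all thresholds, block discarded

def estimate_illuminant(image, n=4):
    # Divide into n x n blocks, classify each, then take a majority vote
    h, w = image.shape[0] // n, image.shape[1] // n
    votes = {}
    for i in range(n):
        for j in range(n):
            label = classify_block(image[i*h:(i+1)*h, j*w:(j+1)*w].astype(float))
            if label:
                votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get) if votes else "daylight"
```

Once the illuminant is chosen, per-channel gains that neutralize its cast are applied to the whole image.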
Teaching optics concepts through an approach that emphasizes the colors of nature
NASA Astrophysics Data System (ADS)
Pompea, Stephen M.; Carsten-Conner, Laura D.
2015-10-01
A wide variety of optics concepts can be taught using the overall perspective of the "colors of nature" as a guiding and unifying theme. First, this approach is attractive and interesting, with wide appeal to children, nature enthusiasts, photographers, and artists. Second, it encourages a deep understanding of the natural world and the role of coloration in biology, remote sensing, the aurora, mineralogy, meteorology, human-made objects, and astronomy, to name a few. Third, the theme promotes a close look at optical phenomena at all size scales, from the microscopic (e.g. silica spheres in opals) to the mid-scale (the aurora) to the largest scale (astronomical phenomena such as gaseous emission nebulae). Fourth, the natural and human-constructed world provides accessible and beautiful examples of complex phenomena such as interference, diffraction, atomic and molecular emission, Rayleigh and Mie scattering, illumination engineering, and fluorescence; these areas can be explored successfully in the context of the "colors of nature". Finally, the theme promotes an understanding of technology, from flashlights to streetlights, from telescopes and binoculars to spectrometers and digital cameras. For example, something as simple as how to set the white balance on a digital camera to get a realistic-looking photograph can lead to a lengthy exploration of spectrally selective surfaces and their reflectance, the nature of different illumination sources, the meaning of color temperature, and the role of calibration in a digital image. We have used this approach of teaching with the colors of nature as an organizing theme in our NSF-funded project "Project STEAM: Integrating Art with Science to Build Science Identities Among Girls" (colorsofnature.org).
Several considerations with respect to the future of digital photography and photographic printing
NASA Astrophysics Data System (ADS)
Tuijn, Chris; Mahy, Marc F.
2000-12-01
Digital cameras are no longer exotic gadgets used by a privileged group of early adopters. More and more people realize that there are obvious advantages to the digital solution over the conventional film-based workflow. Claiming that prints on paper are no longer necessary in the digital workflow, however, would be similar to reviving the myth of the paperless office. People still like to share their memories on paper, for a variety of reasons, and there are still some hurdles to be taken in order to make the digital dream come true. In this paper, we give a survey of the different workflows in digital photography. The local, semi-local and Internet solutions are discussed, as well as the preferred output systems for each of these solutions. When discussing output systems, we immediately think of appropriate color management solutions. In the second part of this paper, we discuss the major color management issues arising in digital photography. A clear separation between the image acquisition and image rendering phases is made. After a quick survey of the different image restoration and enhancement techniques, we make some reflections on the ideal color exchange space; the enhanced image should be delivered in this exchange space, and from there the standard color management transformations can be applied to transfer the image from the exchange space to the native color space of the output device. We also discuss some color gamut characteristics and color management problems of different types of photographic printers that can occur during this conversion process.
Bai, Jin-Shun; Cao, Wei-Dong; Xiong, Jing; Zeng, Nao-Hua; Shimizu, Katshyoshi; Rui, Yu-Kui
2013-12-01
In order to explore the feasibility of using image processing technology to diagnose nitrogen status and predict maize yield, a field experiment with different nitrogen rates and green manure incorporation was conducted. Maize canopy digital images over a range of growth stages were captured with a digital camera. Maize nitrogen status, and the relationships between the image color indices derived from the digital camera at different growth stages and the maize nitrogen status indicators, were analyzed. These camera-derived image color indices at different growth stages were also regressed against maize grain yield at maturity. The results showed that the plant nitrogen status of maize was improved by green manure application. The leaf chlorophyll content (SPAD value), aboveground biomass and nitrogen uptake for the green manure treatments at different maize growth stages were all higher than those for the chemical fertilization treatments. The correlations between the spectral indices and the plant nitrogen indicators for maize affected by green manure application were weaker than those under chemical fertilization, and the correlation coefficients for green manure application varied with the maize growth stage. The best spectral indices for diagnosing plant nitrogen status after green manure incorporation were the normalized blue value (B/(R+G+B)) at the 12-leaf (V12) stage and the normalized red value (R/(R+G+B)) at the grain-filling (R4) stage. The coefficients of determination based on linear regression were 0.45 for B/(R+G+B) at the V12 stage and 0.46 for R/(R+G+B) at the R4 stage, acting as predictors of maize yield response to nitrogen as affected by green manure incorporation.
Our findings suggested that digital image technique could be a potential tool for in-season prediction of the nitrogen status and grain yield for maize after green manure incorporation when the suitable growth stages and spectral indices for diagnosis were selected.
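The normalized color indices described in the abstract (B/(R+G+B), R/(R+G+B)) can be sketched from a canopy image as below; the function names and the toy patch values are illustrative, not from the study.

```python
import numpy as np

def normalized_rgb_indices(image):
    """Mean normalized color indices of an RGB canopy image:
    R/(R+G+B), G/(R+G+B) and B/(R+G+B), the kind of indices the study
    regresses against maize nitrogen status and grain yield."""
    img = image.astype(np.float64)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    total = r + g + b
    total[total == 0] = 1.0  # guard against division by zero on black pixels
    return (np.mean(r / total), np.mean(g / total), np.mean(b / total))

# Toy example: a uniform mid-green "canopy" patch with R=60, G=120, B=20
patch = np.zeros((4, 4, 3), dtype=np.uint8)
patch[..., 0], patch[..., 1], patch[..., 2] = 60, 120, 20
r_n, g_n, b_n = normalized_rgb_indices(patch)  # -> (0.3, 0.6, 0.1)
```

In a real workflow these per-image index values would then be regressed against SPAD readings or grain yield, as in the abstract.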
Multiple Sensor Camera for Enhanced Video Capturing
NASA Astrophysics Data System (ADS)
Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko
Camera resolution has improved drastically in response to the demand for high-quality digital images; a digital still camera now offers several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution and high frame rate are incompatible in ordinary cameras on the market. This problem is difficult to solve with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture high-resolution and high-frame-rate information separately. We built a prototype camera that can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos to show the utility of the camera.
A real-time error-free color-correction facility for digital consumers
NASA Astrophysics Data System (ADS)
Shaw, Rodney
2008-01-01
It has been well known since the earliest days of color photography that color balance in general, and facial reproduction (flesh tones) in particular, are of dominant interest to the consumer, and significant research resources have been expended in satisfying this need. The general problem is a difficult one, spanning the factors that govern perception and personal preference, the physics and chemistry of color reproduction, and the wide field of color measurement, specification, and analysis. However, with the advent of digital photography and its widespread acceptance in the consumer market, and with the possibility of a much greater degree of individual control over color reproduction, the field is taking on a new consumer-driven impetus, and the provision of user facilities for preferred color choice now constitutes an intense field of research. In addition, due to the conveniences of digital technology, the collection of large databases and statistics relating to individual color preferences has now become a relatively straightforward operation. Using a consumer preference approach of this type, we have developed a user-friendly facility whereby unskilled consumers may manipulate the color of their personal digital images according to their preferred choice. By virtue of its ease of operation and the real-time nature of the color-correction transforms, this facility can readily be inserted anywhere a consumer interacts with a digital image, from camera, printer, or scanner, to web or photo-kiosk. Here the underlying scientific principles are explored in detail and related to the practical color-preference outcomes. Examples are given of the application to the correction of images with unsatisfactory color balance, especially flesh tones and faces, and the nature of the consumer controls and their corresponding image transformations is explored.
A novel weighted-direction color interpolation
NASA Astrophysics Data System (ADS)
Tao, Jin-you; Yang, Jianfeng; Xue, Bin; Liang, Xiaofen; Qi, Yong-hong; Wang, Feng
2013-08-01
A digital camera captures images by covering the sensor surface with a color filter array (CFA), so only one color sample is obtained at each pixel location. Demosaicking is the process of estimating the missing color components of each pixel to obtain a full-resolution image. In this paper, a new algorithm based on edge adaptivity and different weighting factors is proposed. Our method can effectively suppress undesirable artifacts. Experimental results on Kodak images show that the proposed algorithm obtains higher-quality images than other methods in both numerical and visual terms.
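The weighted-direction idea can be illustrated for the green channel at a red/blue site of a Bayer mosaic. This is a generic edge-adaptive sketch with assumed weighting, not the paper's exact algorithm:

```python
import numpy as np

def green_at_rb(cfa, y, x):
    """Estimate the missing green value at a red/blue site (y, x) of a
    Bayer mosaic. The horizontal and vertical averages of the green
    neighbors are blended with weights inversely related to the local
    gradient, so interpolation follows edges rather than crossing them.
    (Illustrative weights; not the authors' exact formulation.)"""
    left, right = float(cfa[y, x - 1]), float(cfa[y, x + 1])
    up, down = float(cfa[y - 1, x]), float(cfa[y + 1, x])
    grad_h, grad_v = abs(left - right), abs(up - down)
    w_h, w_v = 1.0 / (1.0 + grad_h), 1.0 / (1.0 + grad_v)
    return (w_h * (left + right) / 2.0 + w_v * (up + down) / 2.0) / (w_h + w_v)

# Vertical edge: horizontal green neighbors disagree (100 vs 10) while
# vertical neighbors agree (50), so the estimate stays near 50.
mosaic = np.array([[0, 50, 0],
                   [100, 0, 10],
                   [0, 50, 0]], dtype=float)
g = green_at_rb(mosaic, 1, 1)
```

A non-adaptive bilinear average would give 52.5 here and blur the edge; the directional weighting keeps the estimate close to the along-edge value.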
Color line scan camera technology and machine vision: requirements to consider
NASA Astrophysics Data System (ADS)
Paernaenen, Pekka H. T.
1997-08-01
Color machine vision has shown a dynamic uptrend in use within the past few years, as the introduction of new camera and scanner technologies underscores. In the future, the movement from monochrome imaging to color will hasten as machine vision system users demand more knowledge about their product stream. As color has come to machine vision, the equipment used to digitize color images must meet certain requirements. Color machine vision needs not only good color separation but also a high dynamic range and a good linear response from the camera used. The importance of these features grows when the image is converted to another color space, since some information is always lost when converting integer data to another form. Traditionally, color image processing has been a much slower technique than gray-level image processing, due to the three times greater data amount per image and the three times more memory needed. Advances in computers, memory and processing units have made it possible to handle even large color images cost-efficiently today. In some cases, image analysis of color images can in fact be easier and faster than of a similar gray-level image, because there is more information per pixel. Color machine vision sets new requirements for lighting, too. High-intensity white light is required in order to acquire good images for further image processing or analysis. New developments in lighting technology are eventually bringing solutions for color imaging.
Applications of a digital darkroom in the forensic laboratory
NASA Astrophysics Data System (ADS)
Bullard, Barry D.; Birge, Brian
1997-02-01
Through a joint agreement with the Indiana-Marion County Forensic Laboratory Services Agency, the Institute for Forensic Imaging conducted a pilot program to investigate crime lab applications of a digital darkroom. IFI installed and staffed a state-of-the-art digital darkroom in the photography laboratory of the Indianapolis-Marion County crime lab located at Indianapolis, Indiana. The darkroom consisted of several high resolution color digital cameras, image processing computer, dye sublimation continuous tone digital printers, and CD-ROM writer. This paper describes the use of the digital darkroom in several crime lab investigations conducted during the program.
Del Valle, José C; Gallardo-López, Antonio; Buide, Mª Luisa; Whittall, Justen B; Narbona, Eduardo
2018-03-01
Anthocyanin pigments have become a model trait for evolutionary ecology as they often provide adaptive benefits for plants. Anthocyanins have traditionally been quantified biochemically or, more recently, using spectral reflectance. However, both methods require destructive sampling and can be labor intensive and challenging with small samples. Recent advances in digital photography and image processing make digital images the method of choice for measuring color in the wild. Here, we use digital images as a quick, noninvasive method to estimate relative anthocyanin concentrations in species exhibiting color variation. Using a consumer-level digital camera and a free image processing toolbox, we extracted RGB values from digital images to generate color indices. We tested petals, stems, pedicels, and calyces of six species, which contain different types of anthocyanin pigments and exhibit different pigmentation patterns. Color indices were assessed by their correlation to biochemically determined anthocyanin concentrations. For comparison, we also calculated color indices from spectral reflectance and tested their correlation with anthocyanin concentration. Indices perform differently depending on the nature of the color variation. For both digital images and spectral reflectance, the most accurate estimates of anthocyanin concentration emerge from the anthocyanin content-chroma ratio, anthocyanin content-chroma basic, and strength-of-green indices. Color indices derived from both digital images and spectral reflectance strongly correlate with biochemically determined anthocyanin concentration; however, the estimates from digital images performed better than spectral reflectance in terms of r² and normalized root-mean-square error. This was particularly noticeable in a species with striped petals, but in the case of striped calyces, both methods showed a comparable relationship with anthocyanin concentration.
Using digital images brings new opportunities to accurately quantify the anthocyanin concentrations in both floral and vegetative tissues. This method is efficient, completely noninvasive, applicable to both uniform and patterned color, and works with samples of any size.
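The calibration step behind such a comparison can be sketched as a least-squares line from a color index to measured concentration, scored by the two goodness-of-fit measures the abstract mentions (r² and NRMSE); the function name and the toy data are invented for illustration.

```python
import numpy as np

def calibrate_color_index(index_vals, anthocyanin):
    """Fit a least-squares line mapping a color index (e.g. strength of
    green) to biochemically determined anthocyanin concentration, and
    return the slope, intercept, r-squared and normalized RMSE."""
    x = np.asarray(index_vals, dtype=np.float64)
    y = np.asarray(anthocyanin, dtype=np.float64)
    slope, intercept = np.polyfit(x, y, 1)
    pred = slope * x + intercept
    r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
    nrmse = np.sqrt(np.mean((y - pred) ** 2)) / (y.max() - y.min())
    return slope, intercept, r2, nrmse

# A perfectly linear (hypothetical) calibration set
idx = [0.10, 0.20, 0.30, 0.40]
conc = [1.0, 2.0, 3.0, 4.0]  # arbitrary units
slope, intercept, r2, nrmse = calibrate_color_index(idx, conc)
```

Comparing r² and NRMSE across index types and acquisition methods (camera vs. spectral reflectance) mirrors how the study ranks the candidate indices.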
Cai, Fuhong; Lu, Wen; Shi, Wuxiong; He, Sailing
2017-11-15
Spatially explicit data are essential for remote sensing of ecological phenomena, and recent innovations in mobile device platforms have led to an upsurge in on-site rapid detection. For instance, CMOS chips in smartphones and digital cameras serve as excellent sensors for scientific research. In this paper, a mobile device-based imaging spectrometer module (weighing about 99 g) is developed and mounted on a single-lens reflex camera. Utilizing this lightweight module, together with commonly used photographic equipment, we demonstrate its utility through a series of on-site multispectral imaging experiments, including ocean (or lake) water-color sensing and plant reflectance measurement. From the experiments we obtain 3D spectral image cubes, which can be further analyzed for environmental monitoring. Moreover, our system can be applied to many kinds of cameras, e.g., aerial and underwater cameras. Therefore, any camera can be upgraded to an imaging spectrometer with the help of our miniaturized module. We believe it has the potential to become a versatile tool for on-site investigation in many applications.
Peteye detection and correction
NASA Astrophysics Data System (ADS)
Yen, Jonathan; Luo, Huitao; Tretter, Daniel
2007-01-01
Redeyes are caused by the camera flash light reflecting off the retina. Peteyes refer to similar artifacts in the eyes of other mammals caused by camera flash. In this paper we present a peteye removal algorithm for detecting and correcting peteye artifacts in digital images. Peteye removal for animals is significantly more difficult than redeye removal for humans, because peteyes can be any of a variety of colors, and human face detection cannot be used to localize the animal eyes. In many animals, including dogs and cats, the retina has a special reflective layer that can cause a variety of peteye colors, depending on the animal's breed, age, or fur color, etc. This makes the peteye correction more challenging. We have developed a semi-automatic algorithm for peteye removal that can detect peteyes based on the cursor position provided by the user and correct them by neutralizing the colors with glare reduction and glint retention.
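The "neutralize with glare reduction and glint retention" step can be sketched as follows; the threshold, darkening factor, and luminance weights are illustrative assumptions, not the authors' values, and the user-supplied mask stands in for their cursor-based detection.

```python
import numpy as np

def neutralize_peteye(img, mask, glint_thresh=240, darken=0.3):
    """Correct a user-localized peteye region: replace colored pupil
    pixels with a darkened neutral gray (glare reduction) while leaving
    very bright pixels untouched (glint retention)."""
    out = img.astype(np.float64)
    # Specular-highlight (glint) pixels are excluded from neutralization
    region = mask & (img.max(axis=2) < glint_thresh)
    luma = 0.299 * out[..., 0] + 0.587 * out[..., 1] + 0.114 * out[..., 2]
    for c in range(3):
        out[..., c][region] = darken * luma[region]
    return np.clip(out, 0, 255).astype(np.uint8)

# One green peteye pixel and one glint pixel, both inside the mask
img = np.zeros((1, 2, 3), dtype=np.uint8)
img[0, 0] = (0, 200, 0)      # colored peteye pixel -> neutralized
img[0, 1] = (250, 250, 250)  # glint -> retained
mask = np.ones((1, 2), dtype=bool)
fixed = neutralize_peteye(img, mask)
```

The neutralized pixel ends up gray (equal channels) and darker, while the glint survives, which is what makes the corrected eye look natural.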
Note: In vivo pH imaging system using luminescent indicator and color camera
NASA Astrophysics Data System (ADS)
Sakaue, Hirotaka; Dan, Risako; Shimizu, Megumi; Kazama, Haruko
2012-07-01
A microscopic in vivo pH imaging system is developed that can capture both luminescent and color images. The former gives a quantitative measurement of the pH distribution in vivo. The latter captures structural information that can be overlaid on the pH distribution to correlate the structure of a specimen with its pH distribution. By using a digital color camera, a luminescent image as well as a color image is obtained. The system uses HPTS (8-hydroxypyrene-1,3,6-trisulfonate) as a luminescent pH indicator for the luminescent imaging. Filter units mounted in the microscope extract two luminescent images for the excitation-ratio method. The ratio of the two images is converted to a pH distribution through an a priori pH calibration. An application of the system to epidermal cells of Lactuca sativa L. is shown.
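The excitation-ratio step can be sketched as a pixelwise ratio mapped through a calibration curve; the calibration values below are invented placeholders, and linear interpolation stands in for whatever curve fit the a priori calibration actually uses.

```python
import numpy as np

def ph_from_excitation_ratio(img_ex1, img_ex2, calib_ratios, calib_ph):
    """Excitation-ratio method: the pixelwise ratio of two luminescent
    images cancels indicator concentration and illumination, and an a
    priori calibration curve maps the ratio to pH (here, linear
    interpolation between measured calibration points)."""
    ratio = img_ex1.astype(np.float64) / np.maximum(img_ex2.astype(np.float64), 1e-9)
    return np.interp(ratio, calib_ratios, calib_ph)

# Hypothetical monotonic calibration: ratio 0.5 -> pH 5, 1.0 -> 6, 2.0 -> 7
calib_r, calib_p = [0.5, 1.0, 2.0], [5.0, 6.0, 7.0]
ex1 = np.full((2, 2), 120.0)
ex2 = np.full((2, 2), 120.0)   # equal intensities -> ratio 1.0 everywhere
ph_map = ph_from_excitation_ratio(ex1, ex2, calib_r, calib_p)
```

The resulting pH map can then be overlaid on the color image for structural correlation, as the abstract describes.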
NASA Astrophysics Data System (ADS)
Gao, Xinya; Wang, Yonghong; Li, Junrui; Dan, Xizuo; Wu, Sijin; Yang, Lianxiang
2017-06-01
It is difficult to measure absolute three-dimensional deformation using traditional digital speckle pattern interferometry (DSPI) when the boundary condition of the object under test is not exactly given. In practical applications, the boundary condition cannot always be specified, limiting the use of DSPI in real-world applications. To tackle this problem, a DSPI system integrating the spatial carrier method and a color camera has been established. Four phase maps are obtained simultaneously by spatial carrier color digital speckle pattern interferometry using four speckle interferometers with different illumination directions. One out-of-plane and two in-plane absolute deformations can be acquired simultaneously, without knowing the boundary conditions, using an absolute-deformation extraction algorithm based on the four phase maps. Finally, the system is validated experimentally by measuring the deformation of a flat aluminum plate with a groove.
Real ePublishing, Really Publishing! How To Create Digital Books by and for All Ages.
ERIC Educational Resources Information Center
Condon, Mark W. F.; McGuffee, Michael
This book aims to help teachers turn their classrooms into their very own publishing companies. All that is needed is a computer, a word processor, a digital camera, a color printer, and access to the Internet. The book explains that a new genre of publication, Webbes, are simple nonfiction picture books, and that, matching meaningful text with…
Contactless physiological signals extraction based on skin color magnification
NASA Astrophysics Data System (ADS)
Suh, Kun Ha; Lee, Eui Chul
2017-11-01
Although the human visual system is not sufficiently sensitive to perceive blood circulation, blood flow caused by cardiac activity makes slight changes on human skin surfaces. With advances in imaging technology, it has become possible to capture these changes through digital cameras. However, it is difficult to obtain clear physiological signals from such changes due to its fineness and noise factors, such as motion artifacts and camera sensing disturbances. We propose a method for extracting physiological signals with improved quality from skin colored-videos recorded with a remote RGB camera. The results showed that our skin color magnification method reveals the hidden physiological components remarkably in the time-series signal. A Korea Food and Drug Administration-approved heart rate monitor was used for verifying the resulting signal synchronized with the actual cardiac pulse, and comparisons of signal peaks showed correlation coefficients of almost 1.0. In particular, our method can be an effective preprocessing before applying additional postfiltering techniques to improve accuracy in image-based physiological signal extractions.
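The first stage of such a pipeline can be sketched as a mean-green time series over a skin region whose deviations are amplified; this is a simplified, Eulerian-style magnification with an assumed gain, not the authors' exact method, and real use would add the bandpass postfiltering the abstract mentions.

```python
import numpy as np

def magnify_skin_signal(frames, roi, alpha=50.0):
    """Extract the per-frame mean green value inside a skin ROI and
    amplify its deviation from the temporal mean by a gain alpha,
    revealing the subtle cardiac color modulation."""
    y0, y1, x0, x1 = roi
    trace = np.array([f[y0:y1, x0:x1, 1].mean() for f in frames], dtype=np.float64)
    return trace.mean() + alpha * (trace - trace.mean())

# Three synthetic frames whose skin green level pulses by 0.1 digit
frames = [np.full((8, 8, 3), 100.0) for _ in range(3)]
frames[1][..., 1] = 100.1
signal = magnify_skin_signal(frames, roi=(0, 8, 0, 8))
```

The 0.1-digit fluctuation, invisible to the eye, becomes a 5-digit swing in the magnified trace, which is then suitable for peak detection against a reference heart rate monitor.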
NASA Astrophysics Data System (ADS)
Russell, E.; Chi, J.; Waldo, S.; Pressley, S. N.; Lamb, B. K.; Pan, W.
2017-12-01
Diurnal and seasonal gas fluxes vary by crop growth stage. Digital cameras are increasingly being used to monitor inter-annual changes in vegetation phenology in a variety of ecosystems. These cameras are not designed as scientific instruments, but the information they gather can add value to established measurement techniques such as eddy covariance. This work combined deconstructed digital images with eddy covariance data from five agricultural sites (1 fallow, 4 cropped) in the inland Pacific Northwest, USA. The data were broken down with respect to crop stage and management activities. The fallow field highlighted the camera response to changing net radiation, illumination, and rainfall. At the cropped sites, the net ecosystem exchange, gross primary production, and evapotranspiration were correlated with the greenness and redness values derived from the images over the growing season. However, the color values do not change quickly enough to respond to day-to-day variability in the flux exchange, as the two measurement types are based on different processes. The management practices and changes in phenology through the growing season were not visible within the camera data, though the camera did capture the general evolution of the ecosystem fluxes.
Color Imaging management in film processing
NASA Astrophysics Data System (ADS)
Tremeau, Alain; Konik, Hubert; Colantoni, Philippe
2003-12-01
The latest research projects in the laboratory LIGIV concern the capture, processing, archiving and display of color images considering the trichromatic nature of the Human Visual System (HVS). One of these projects addresses digital cinematographic film sequences of high resolution and dynamic range. This project aims to optimize the use of content for post-production operators and for the end user. The studies presented in this paper address the use of metadata to optimize the consumption of video content on a device of the user's choice, independent of the nature of the equipment that captured the content. Optimizing consumption includes enhancing the quality of image reconstruction on a display. Another part of this project addresses the content-based adaptation of image display. The main focus is on Regions of Interest (ROI) operations, based on the ROI concepts of MPEG-7. The aim of this second part is to characterize and ensure the conditions of display even if the display device or display medium changes. This requires, firstly, the definition of a reference color space and of bidirectional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and the ambient illumination under which they are viewed. To match the displayed image to the intended appearance, all kinds of production metadata (camera specification, camera color primaries, lighting conditions) should be associated with the film material. Metadata and content together build rich content. The author is assumed to specify conditions as known from the digital graphic arts. To control image pre-processing and post-processing, these specifications should be contained in the film's metadata. The specifications are related to ICC profiles but additionally need to consider mesopic viewing conditions.
Decomposed Photo Response Non-Uniformity for Digital Forensic Analysis
NASA Astrophysics Data System (ADS)
Li, Yue; Li, Chang-Tsun
The last few years have seen the application of Photo Response Non-Uniformity noise (PRNU), a unique stochastic fingerprint of image sensors, to various types of digital forensic investigations such as source device identification and integrity verification. In this work we propose a new way of extracting the PRNU noise pattern, called Decomposed PRNU (DPRNU), by exploiting the difference between the physical and artificial color components of photos taken by digital cameras that use a Color Filter Array to interpolate artificial components from physical ones. Experimental results presented in this work show the superiority of the proposed DPRNU over the commonly used version. We also propose a new performance metric, Corrected Positive Rate (CPR), to evaluate the performance of the common PRNU and the proposed DPRNU.
Computer Vision Research and Its Applications to Automated Cartography
1984-09-01
reflecting from scene surfaces, and the film and digitization processes that result in the computer representation of the image. These models, when...alone. Specifically, interpretations that are in some sense "orthogonal" are preferred. A method for finding such interpretations for right-angle...saturated colors are not precisely representable and the colors recorded with different films or cameras may differ, but the tricomponent representation is t
Digital image processing of bone - Problems and potentials
NASA Technical Reports Server (NTRS)
Morey, E. R.; Wronski, T. J.
1980-01-01
The development of a digital image processing system for bone histomorphometry and fluorescent marker monitoring is discussed. The system in question is capable of making measurements of UV or light microscope features on a video screen with either video or computer-generated images, and comprises a microscope, low-light-level video camera, video digitizer and display terminal, color monitor, and PDP 11/34 computer. Capabilities demonstrated in the analysis of an undecalcified rat tibia include the measurement of perimeter and total bone area, and the generation of microscope images, false color images, digitized images and contoured images for further analysis. Software development will be based on an existing software library, specifically the mini-VICAR system developed at JPL. It is noted that the potentials of the system in terms of speed and reliability far exceed any problems associated with hardware and software development.
Quantification of Soil Redoximorphic Features by Standardized Color Identification
USDA-ARS?s Scientific Manuscript database
Photography has been a welcome tool in assisting to document and convey qualitative soil information. Greater availability of digital cameras with increased information storage capabilities has promoted novel uses of this technology in investigations of water movement patterns, organic matter conte...
Color image lossy compression based on blind evaluation and prediction of noise characteristics
NASA Astrophysics Data System (ADS)
Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena
2011-03-01
The paper deals with JPEG adaptive lossy compression of color images formed by digital cameras. Adaptation to the noise characteristics and blur estimated for each given image is carried out. The dominant factor degrading image quality is determined in a blind manner, and the characteristics of this dominant factor are then estimated. Finally, a scaling factor that determines the quantization steps for the default JPEG table is adaptively selected. Within this general framework, two possible strategies are considered. The first presumes blind estimation for an image after all operations in the digital image processing chain, just before compressing a given raster image. The second strategy is based on predicting the noise and blur parameters from analysis of the RAW image, under quite general assumptions concerning the characteristics of the transformations the image will undergo at further processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger benefit in image compression ratio (CR) compared to the super-high quality (SHQ) mode; however, it is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on many real-life color images acquired by digital cameras and shown to provide a more than twofold increase of average CR compared to the SHQ mode, without introducing visible distortions with respect to SHQ-compressed images.
ColorChecker at the beach: dangers of sunburn and glare
NASA Astrophysics Data System (ADS)
McCann, John
2014-01-01
In High-Dynamic-Range (HDR) imaging, optical veiling glare sets the limits of accurate scene information recorded by a camera. But, what happens at the beach? Here we have a Low-Dynamic-Range (LDR) scene with maximal glare. Can we calibrate a camera at the beach and not be burnt? We know that we need sunscreen and sunglasses, but what about our cameras? The effect of veiling glare is scene-dependent. When we compare RAW camera digits with spotmeter measurements we find significant differences. As well, these differences vary, depending on where we aim the camera. When we calibrate our camera at the beach we get data that is valid for only that part of that scene. Camera veiling glare is an issue in LDR scenes in uniform illumination with a shaded lens.
ERIC Educational Resources Information Center
Garcia, Lilia
2000-01-01
While arts facilities should be equipped with computers, color scanners, MIDI (Musical Instrument Digital Interface) labs, connective video cameras, and appropriate software, music rooms still need pianos and visual art rooms need traditional art supplies. Dade County (Florida) Schools' pilot teacher assistance projects and arts-centered schools…
Apyari, V V; Dmitrienko, S G; Ostrovskaya, V M; Anaev, E K; Zolotov, Y A
2008-07-01
Polyurethane foam (PUF) has been suggested as a solid polymeric reagent for the determination of nitrite. The determination is based on the diazotization of the end toluidine groups of PUF with nitrite in acidic medium, followed by coupling of the polymeric diazonium cation with 3-hydroxy-7,8-benzo-1,2,3,4-tetrahydroquinoline. The intensely colored polymeric azodye formed in this reaction can be used as a convenient analytical form for the determination of nitrite by diffuse reflectance spectroscopy (c_min = 0.7 ng mL⁻¹). The possibility of using a desktop scanner, digital camera, and computer data processing for the numerical evaluation of the color intensity of the polymeric azodye has been investigated. A scanner and a digital camera can be used for the determination of nitrite with the same sensitivity and reproducibility as diffuse reflectance spectroscopy. The approach developed was applied to the determination of nitrite in river water and human exhaled breath condensate.
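The numerical evaluation of color intensity from a scanner or camera image can be sketched as an absorbance-like value per RGB channel; this is a hypothetical metric in the spirit of such colorimetry, not the authors' exact formula.

```python
import numpy as np

def channel_absorbance(spot_rgb, blank_rgb):
    """Scanner/camera colorimetry: an absorbance-like value per RGB
    channel, A = -log10(I_spot / I_blank), computed from the mean color
    of the colored azodye spot and of a blank tablet."""
    spot = np.asarray(spot_rgb, dtype=np.float64) + 1.0   # avoid log(0)
    blank = np.asarray(blank_rgb, dtype=np.float64) + 1.0
    return -np.log10(spot / blank)

# Spot half as bright as the blank in every channel
a = channel_absorbance([99, 99, 99], [199, 199, 199])
```

A calibration series of known nitrite concentrations would then fix which channel (or combination) tracks the azodye color, analogous to choosing a wavelength in diffuse reflectance spectroscopy.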
Single camera photogrammetry system for EEG electrode identification and localization.
Baysal, Uğur; Sengül, Gökhan
2010-04-01
In this study, photogrammetric coordinate measurement and color-based identification of EEG electrode positions on the human head are simultaneously implemented. A rotating 2 MP digital camera, about 20 cm above the subject's head, is used, and the images are acquired at predefined stop points separated azimuthally at equal angular displacements. In order to realize full automation, the electrodes have been labeled with colored circular markers and an electrode recognition algorithm has been developed. The proposed method has been tested using a plastic head phantom carrying 25 electrode markers. Electrode locations have been determined using three different methods: (i) the proposed photogrammetric method, (ii) a conventional 3D radiofrequency (RF) digitizer, and (iii) a coordinate measurement machine with about 6.5 μm accuracy. It is found that the proposed system automatically identifies electrodes and localizes them with a maximum error of 0.77 mm. It is suggested that this method may be used in EEG source localization applications in the human brain.
Enhancement of the Shared Graphics Workspace.
1987-12-31
participants to share videodisc images and computer graphics displayed in color and text and facsimile information displayed in black on amber. They...could annotate the information in up to five colors and print the annotated version at both sites, using a standard fax machine. The SGWS also used a fax...system to display a document, whether text or photo, the camera scans the document, digitizes the data, and sends it via direct memory access (DMA) to
Smart Camera Technology Increases Quality
NASA Technical Reports Server (NTRS)
2004-01-01
When it comes to real-time image processing, everyone is an expert. People begin processing images at birth and rapidly learn to control their responses through the real-time processing of the human visual system. The human eye captures an enormous amount of information in the form of light images. In order to keep the brain from becoming overloaded with all the data, portions of an image are processed at a higher resolution than others, such as a traffic light changing colors. In the same manner, image processing products strive to extract the information stored in light in the most efficient way possible. Digital cameras available today capture millions of pixels worth of information from incident light. However, at frame rates of more than a few per second, existing digital interfaces are overwhelmed. All the user can do is store several frames to memory until that memory is full, after which subsequent information is lost. New technology pairs existing digital interface technology with an off-the-shelf complementary metal oxide semiconductor (CMOS) imager to provide more than 500 frames per second of specialty image processing. The result is a cost-effective detection system unlike any other.
Dai, Meiling; Yang, Fujun; He, Xiaoyuan
2012-04-20
A simple but effective fringe projection profilometry method is proposed to measure 3D shape using a single snapshot of a color sinusoidal fringe pattern. One color pattern, encoding a sinusoidal fringe (as the red component) and a uniform intensity pattern (as the blue component), is projected by a digital video projector, and the deformed fringe pattern is recorded by a color CCD camera. The captured color fringe pattern is separated into its RGB components, and a division operation is applied to the red and blue channels to reduce the variable reflection intensity. Shape information of the tested object is decoded by applying an arcsine algorithm to the normalized fringe pattern with subpixel resolution. In the case of fringe discontinuities caused by height steps or spatially isolated surfaces, the separated blue component is binarized and used for correcting the phase demodulation. A simple and robust method is also introduced to compensate for the nonlinear intensity response of the digital video projector. The experimental results demonstrate the validity of the proposed method.
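The division-plus-arcsine step can be sketched for a single pixel; this assumes the red channel carries r(x)·(0.5 + 0.5·sin φ) and the blue channel carries the uniform reflectivity r(x), which is a simplified model of the encoding described.

```python
import numpy as np

def wrapped_phase(red, blue):
    """Divide the deformed sinusoidal fringe (red channel) by the
    uniform pattern (blue channel) to cancel the surface reflectivity,
    then recover the wrapped phase with an arcsine."""
    norm = red.astype(np.float64) / np.maximum(blue.astype(np.float64), 1e-9)
    s = np.clip(2.0 * norm - 1.0, -1.0, 1.0)  # recover sin(phi)
    return np.arcsin(s)                        # phase wrapped to [-pi/2, pi/2]

# Synthetic pixel with reflectivity 0.7 and true phase 0.5 rad
phi = 0.5
red = np.array([0.7 * (0.5 + 0.5 * np.sin(phi))])
blue = np.array([0.7])
recovered = wrapped_phase(red, blue)
```

Because reflectivity multiplies both channels, the ratio is independent of surface albedo; a full implementation would also unwrap the arcsine branch using the binarized blue component, as the abstract notes.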
Matsunaga, Tomoko M; Ogawa, Daisuke; Taguchi-Shiobara, Fumio; Ishimoto, Masao; Matsunaga, Sachihiro; Habu, Yoshiki
2017-06-01
Leaf color is an important indicator when evaluating plant growth and responses to biotic/abiotic stress. Acquisition of images by digital cameras allows analysis and long-term storage of the acquired images. However, under field conditions, where light intensity can fluctuate and other factors (shade, reflection, and background, etc.) vary, stable and reproducible measurement and quantification of leaf color are hard to achieve. Digital scanners provide fixed conditions for obtaining image data, allowing stable and reliable comparison among samples, but require detached plant materials to capture images, and the destructive processes involved often induce deformation of plant materials (curled leaves and faded colors, etc.). In this study, by using a lightweight digital scanner connected to a mobile computer, we obtained digital image data from intact plant leaves grown in natural-light greenhouses without detaching the targets. We took images of soybean leaves infected by Xanthomonas campestris pv. glycines, and distinctively quantified two disease symptoms (brown lesions and yellow halos) using freely available image processing software. The image data were amenable to quantitative and statistical analyses, allowing precise and objective evaluation of disease resistance.
NASA Astrophysics Data System (ADS)
Dietrich, Volker; Hartmann, Peter; Kerz, Franca
2015-03-01
Digital cameras are present everywhere in our daily life; science, business and private life cannot be imagined without digital images. The quality of an image is often rated by its color rendering. In order to obtain correct color recognition, a near-infrared cut (IRC) filter must be used to alter the sensitivity of the imaging sensor. Increasing requirements related to color balance and larger angles of incidence (AOI) enforced the use of new materials, such as the BG6X series, which substitutes for interference-coated filters on D263 thin glass. Although the optical properties are the major design criteria, devices have to withstand numerous environmental conditions during use and manufacturing, e.g. temperature change, humidity, and mechanical shock, as well as mechanical stress. The new materials show different behavior with respect to all these aspects; they are usually more sensitive to these requirements, to a greater or lesser extent. Mechanical strength, in particular, differs, and reliable strength data are of major interest for mobile phone camera applications. As the bending strength of a glass component depends not only on the material itself but mainly on the surface treatment and test conditions, a single number for the strength might be misleading if the conditions of the test and the samples are not described precisely. Therefore, Schott started investigations of the bending strength data of various IRC filter materials. Different test methods were used to obtain statistically relevant data.
Evaluation of color grading impact in restoration process of archive films
NASA Astrophysics Data System (ADS)
Fliegel, Karel; Vítek, Stanislav; Páta, Petr; Janout, Petr; Myslík, Jiří; Pecák, Josef; Jícha, Marek
2016-09-01
Color grading of archive films is a very particular task in the process of their restoration. The ultimate goal of color grading here is to achieve the same look of the movie as intended at the time of its first presentation. The role of the expert restorer, expert group, and digital colorist in this complicated process is to find the optimal settings of the digital color grading system so that the resulting image look is as close as possible to the estimate of the original reference release print adjusted by the expert group of cinematographers. A methodology for subjective assessment of perceived differences between the outcomes of color grading is introduced, and results of a subjective study are presented. Techniques for objective assessment of perceived differences are discussed, and their performance is evaluated using ground truth obtained from the subjective experiment. In particular, a solution based on a calibrated digital single-lens reflex camera and subsequent analysis of image features captured from the projection screen is described. The system, based on our previous work, is further developed so that it can be used for the analysis of projected images. It allows assessing color differences in these images and predicting their impact on the perceived difference in image look.
Quantitative analysis of transmittance and photoluminescence using a low cost apparatus
NASA Astrophysics Data System (ADS)
Onorato, P.; Malgieri, M.; De Ambrosis, A.
2016-01-01
We show how a low cost spectrometer, based on the use of inexpensive diffraction transmission gratings coupled with a commercial digital photo camera or a cellphone, can be assembled and employed to obtain quantitative spectra of different sources. In particular, we discuss its use in studying the spectra of fluorescent colored ink, used in highlighting pens, for which the transmission band and the emission peaks are measured and related to the ink color.
NASA Astrophysics Data System (ADS)
Luo, Lin-Bo; An, Sang-Woo; Wang, Chang-Shuai; Li, Ying-Chun; Chong, Jong-Wha
2012-09-01
Digital cameras usually decrease exposure time to capture motion-blur-free images. However, this operation will generate an under-exposed image with a low-budget complementary metal-oxide semiconductor image sensor (CIS). Conventional color correction algorithms can efficiently correct under-exposed images; however, they are generally not performed in real time and need at least one frame memory if they are implemented by hardware. The authors propose a real-time look-up table-based color correction method that corrects under-exposed images with hardware without using frame memory. The method utilizes histogram matching of two preview images, which are exposed for a long and short time, respectively, to construct an improved look-up table (ILUT) and then corrects the captured under-exposed image in real time. Because the ILUT is calculated in real time before processing the captured image, this method does not require frame memory to buffer image data, and therefore can greatly save the cost of CIS. This method not only supports single image capture, but also bracketing to capture three images at a time. The proposed method was implemented by hardware description language and verified by a field-programmable gate array with a 5 M CIS. Simulations show that the system can perform in real time with a low cost and can correct the color of under-exposed images well.
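The ILUT construction described above amounts to classical histogram matching between the two preview exposures, followed by a per-pixel table look-up on the captured frame. A minimal software sketch (the paper describes a hardware design; the function names here are illustrative):

```python
import numpy as np

def build_ilut(short_exp, long_exp, levels=256):
    """Build a look-up table mapping the short-exposure tone scale onto
    the long-exposure one via histogram matching (a sketch of the
    paper's ILUT idea)."""
    # Normalized cumulative histograms of the two preview images
    cdf_s = np.cumsum(np.bincount(short_exp.ravel(), minlength=levels)).astype(float)
    cdf_l = np.cumsum(np.bincount(long_exp.ravel(), minlength=levels)).astype(float)
    cdf_s /= cdf_s[-1]
    cdf_l /= cdf_l[-1]
    # For each short-exposure level, pick the long-exposure level with
    # the nearest cumulative probability
    lut = np.searchsorted(cdf_l, cdf_s).clip(0, levels - 1).astype(np.uint8)
    return lut

def correct(image, lut):
    # Pure per-pixel table look-up: no frame memory is needed
    return lut[image]
```

Because the table is built from the small preview frames before the main capture arrives, the main frame can be corrected as it streams through, which is the property that removes the frame-buffer requirement.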
Robust tissue classification for reproducible wound assessment in telemedicine environments
NASA Astrophysics Data System (ADS)
Wannous, Hazem; Treuillet, Sylvie; Lucas, Yves
2010-04-01
In telemedicine environments, a standardized and reproducible assessment of wounds, using a simple free-handled digital camera, is an essential requirement. However, to ensure robust tissue classification, particular attention must be paid to the complete design of the color processing chain. We introduce the key steps including color correction, merging of expert labeling, and segmentation-driven classification based on support vector machines. The tool thus developed ensures stability under lighting condition, viewpoint, and camera changes, to achieve accurate and robust classification of skin tissues. Clinical tests demonstrate that such an advanced tool, which forms part of a complete 3-D and color wound assessment system, significantly improves the monitoring of the healing process. It achieves an overlap score of 79.3% against 69.1% for a single expert, after mapping on the medical reference developed from the image labeling by a college of experts.
Voss in Service Module with apples
2001-03-22
ISS002-E-5710 (22 March 2001) --- Astronaut James S. Voss, Expedition Two flight engineer, appears to be trying to decide between two colors or two species of apples as he ponders them in the Zvezda Service Module on the International Space Station (ISS). This photo was taken with a digital still camera.
Noor, M Omair; Krull, Ulrich J
2014-10-21
Paper-based diagnostic assays are gaining increasing popularity for their potential application in resource-limited settings and for point-of-care screening. Achievement of high sensitivity with precision and accuracy can be challenging when using paper substrates. Herein, we implement the red-green-blue color palette of a digital camera for quantitative ratiometric transduction of nucleic acid hybridization on a paper-based platform using immobilized quantum dots (QDs) as donors in fluorescence resonance energy transfer (FRET). A nonenzymatic and reagentless means of signal enhancement for QD-FRET assays on paper substrates is based on the use of dry paper substrates for data acquisition. This approach offered at least a 10-fold higher assay sensitivity and at least a 10-fold lower limit of detection (LOD) as compared to hydrated paper substrates. The surface of paper was modified with imidazole groups to assemble a transduction interface that consisted of immobilized QD-probe oligonucleotide conjugates. Green-emitting QDs (gQDs) served as donors with Cy3 as an acceptor. A hybridization event that brought the Cy3 acceptor dye in close proximity to the surface of immobilized gQDs was responsible for a FRET-sensitized emission from the acceptor dye, which served as an analytical signal. A hand-held UV lamp was used as an excitation source and ratiometric analysis using an iPad camera was possible by a relative intensity analysis of the red (Cy3 photoluminescence (PL)) and green (gQD PL) color channels of the digital camera. For digital imaging using an iPad camera, the LOD of the assay in a sandwich format was 450 fmol with a dynamic range spanning 2 orders of magnitude, while an epifluorescence microscope detection platform offered a LOD of 30 fmol and a dynamic range spanning 3 orders of magnitude. The selectivity of the hybridization assay was demonstrated by detection of a single nucleotide polymorphism at a contrast ratio of 60:1. 
This work provides an important framework for the integration of QD-FRET methods with digital imaging for a ratiometric transduction of nucleic acid hybridization on a paper-based platform.
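The ratiometric readout described above reduces to comparing the red (Cy3 PL) and green (gQD PL) channel intensities over a region of interest of the camera image. A minimal sketch, assuming background has already been subtracted:

```python
import numpy as np

def fret_ratio(rgb_roi):
    """Ratiometric QD-FRET readout from a digital camera image region:
    mean red channel (acceptor emission) over mean green channel
    (donor emission). A sketch; a real assay would first subtract
    background and check for channel saturation."""
    r = rgb_roi[..., 0].astype(float).mean()
    g = rgb_roi[..., 1].astype(float).mean()
    return r / g
```

The ratio, rather than either absolute intensity, is what makes the readout robust to variations in excitation intensity across the paper substrate.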
NASA Astrophysics Data System (ADS)
Yang, Xi; Tang, Jianwu; Mustard, John F.
2014-03-01
Plant phenology, a sensitive indicator of climate change, influences vegetation-atmosphere interactions by changing the carbon and water cycles from local to global scales. Camera-based phenological observations of the color changes of the vegetation canopy throughout the growing season have become popular in recent years. However, the linkages between camera phenological metrics and leaf biochemical, biophysical, and spectral properties are elusive. We measured key leaf properties including chlorophyll concentration and leaf reflectance on a weekly basis from June to November 2011 in a white oak forest on the island of Martha's Vineyard, Massachusetts, USA. Concurrently, we used a digital camera to automatically acquire daily pictures of the tree canopies. We found that there was a mismatch between the camera-based phenological metric for canopy greenness (green chromatic coordinate, gcc) and the total chlorophyll and carotenoid concentrations and leaf mass per area during late spring/early summer. The seasonal peak of gcc is approximately 20 days earlier than the peak of the total chlorophyll concentration. During the fall, both canopy and leaf redness were significantly correlated with the vegetation index for anthocyanin concentration, opening a new window to quantify vegetation senescence remotely. Satellite- and camera-based vegetation indices agreed well, suggesting that camera-based observations can be used as the ground validation for satellites. Using the high-temporal-resolution dataset of leaf biochemical, biophysical, and spectral properties, our results show the strengths and potential uncertainties of using canopy color as a proxy for ecosystem functioning.
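The green chromatic coordinate used in such phenocam studies has a standard per-pixel definition, gcc = G / (R + G + B); averaging over the image (or a canopy region of interest) is the usual convention. A minimal sketch:

```python
import numpy as np

def green_chromatic_coordinate(rgb):
    """Mean canopy greenness gcc = G / (R + G + B), computed per pixel
    and averaged over the image (standard phenocam definition; the
    region-of-interest choice here, the whole frame, is illustrative)."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=-1)
    gcc = np.where(total > 0, rgb[..., 1] / np.maximum(total, 1e-9), 0.0)
    return gcc.mean()
```

Because gcc is a ratio of channels, it is insensitive to overall brightness changes, which is why it tracks canopy color rather than illumination.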
Video and thermal imaging system for monitoring interiors of high temperature reaction vessels
Saveliev, Alexei V [Chicago, IL; Zelepouga, Serguei A [Hoffman Estates, IL; Rue, David M [Chicago, IL
2012-01-10
A system and method for real-time monitoring of the interior of a combustor or gasifier wherein light emitted by the interior surface of a refractory wall of the combustor or gasifier is collected using an imaging fiber optic bundle having a light receiving end and a light output end. Color information in the light is captured with primary color (RGB) filters or complementary color (GMCY) filters placed over individual pixels of color sensors disposed within a digital color camera in a Bayer mosaic layout, producing RGB signal outputs or GMCY signal outputs. The signal outputs are processed using intensity ratios of the primary color filters or the complementary color filters, producing video images and/or thermal images of the interior of the combustor or gasifier.
Martinez, Andres W; Phillips, Scott T; Carrilho, Emanuel; Thomas, Samuel W; Sindi, Hayat; Whitesides, George M
2008-05-15
This article describes a prototype system for quantifying bioassays and for exchanging the results of the assays digitally with physicians located off-site. The system uses paper-based microfluidic devices for running multiple assays simultaneously, camera phones or portable scanners for digitizing the intensity of color associated with each colorimetric assay, and established communications infrastructure for transferring the digital information from the assay site to an off-site laboratory for analysis by a trained medical professional; the diagnosis then can be returned directly to the healthcare provider in the field. The microfluidic devices were fabricated in paper using photolithography and were functionalized with reagents for colorimetric assays. The results of the assays were quantified by comparing the intensities of the color developed in each assay with those of calibration curves. An example of this system quantified clinically relevant concentrations of glucose and protein in artificial urine. The combination of patterned paper, a portable method for obtaining digital images, and a method for exchanging results of the assays with off-site diagnosticians offers new opportunities for inexpensive monitoring of health, especially in situations that require physicians to travel to patients (e.g., in the developing world, in emergency management, and during field operations by the military) to obtain diagnostic information that might be obtained more effectively by less valuable personnel.
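Quantification against a calibration curve, as described above, can be sketched for the simple case of a linear intensity-concentration relationship (the paper's actual colorimetric calibration curves need not be linear, so this is an illustrative simplification):

```python
import numpy as np

def quantify(cal_conc, cal_intensity, sample_intensity):
    """Quantify an analyte by inverting a colorimetric calibration
    curve fitted to known standards. A linear fit is assumed here for
    illustration; intensity must vary monotonically with
    concentration for the inversion to be well defined."""
    slope, intercept = np.polyfit(cal_conc, cal_intensity, 1)
    return (sample_intensity - intercept) / slope
```

In the telemedicine workflow, the camera-phone image supplies `sample_intensity` for each assay zone, and the calibration standards are imaged under the same conditions.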
NASA Astrophysics Data System (ADS)
Taj-Eddin, Islam A. T. F.; Afifi, Mahmoud; Korashy, Mostafa; Ahmed, Ali H.; Cheng, Ng Yoke; Hernandez, Evelyng; Abdel-Latif, Salma M.
2017-11-01
Plant aliveness is proven through laboratory experiments and special scientific instruments. We aim to detect the degree of animation of plants based on the magnification of the small color changes in the plant's green leaves using Eulerian video magnification. Capturing the video under a controlled environment, e.g., using a tripod and direct-current light sources, reduces camera movements and minimizes light fluctuations; we aim to reduce the external factors as much as possible. The acquired video is then stabilized, and a proposed algorithm is used to reduce the illumination variations. Finally, Eulerian magnification is utilized to magnify the color changes in the light-invariant video. The proposed system does not require any special-purpose instruments as it uses a digital camera with a regular frame rate. The results of magnified color changes on both natural and plastic leaves show that the live green leaves have color changes in contrast to the plastic leaves. Hence, we can argue that the color changes of the leaves are due to biological operations, such as photosynthesis. To date, this is possibly the first work that focuses on visually interpreting some biological operations of plants without any special-purpose instruments.
Very High-Speed Digital Video Capability for In-Flight Use
NASA Technical Reports Server (NTRS)
Corda, Stephen; Tseng, Ting; Reaves, Matthew; Mauldin, Kendall; Whiteman, Donald
2006-01-01
A digital video camera system has been qualified for use in flight on the NASA supersonic F-15B Research Testbed aircraft. This system is capable of very-high-speed color digital imaging at flight speeds up to Mach 2. The components of this system have been ruggedized and shock-mounted in the aircraft to survive the severe pressure, temperature, and vibration of the flight environment. The system includes two synchronized camera subsystems installed in fuselage-mounted camera pods (see Figure 1). Each camera subsystem comprises a camera controller/recorder unit and a camera head. The two camera subsystems are synchronized by use of an M-Hub(TM) synchronization unit. Each camera subsystem is capable of recording at a rate up to 10,000 pictures per second (pps). A state-of-the-art complementary metal-oxide semiconductor (CMOS) sensor in the camera head has a maximum resolution of 1,280 × 1,024 pixels at 1,000 pps. Exposure times of the electronic shutter of the camera range from 1/200,000 of a second to full open. The recorded images are captured in a dynamic random-access memory (DRAM) and can be downloaded directly to a personal computer or saved on a compact flash memory card. In addition to the high-rate recording of images, the system can display images in real time at 30 pps. Inter Range Instrumentation Group (IRIG) time code can be inserted into the individual camera controllers or into the M-Hub unit. The video data could also be used to obtain quantitative, three-dimensional trajectory information. The first use of this system was in support of the Space Shuttle Return to Flight effort. Data were needed to help in understanding how thermally insulating foam is shed from a space shuttle external fuel tank during launch. The cameras captured images of simulated external tank debris ejected from a fixture mounted under the centerline of the F-15B aircraft.
Digital video was obtained at subsonic and supersonic flight conditions, including speeds up to Mach 2 and altitudes up to 50,000 ft (15.24 km). The digital video was used to determine the structural survivability of the debris in a real flight environment and quantify the aerodynamic trajectories of the debris.
Easy Leaf Area: Automated digital image analysis for rapid and accurate measurement of leaf area.
Easlon, Hsien Ming; Bloom, Arnold J
2014-07-01
Measurement of leaf areas from digital photographs has traditionally required significant user input unless backgrounds are carefully masked. Easy Leaf Area was developed to batch process hundreds of Arabidopsis rosette images in minutes, removing background artifacts and saving results to a spreadsheet-ready CSV file. • Easy Leaf Area uses the color ratios of each pixel to distinguish leaves and calibration areas from their background and compares leaf pixel counts to a red calibration area to eliminate the need for camera distance calculations or manual ruler scale measurement that other software methods typically require. Leaf areas estimated by this software from images taken with a camera phone were more accurate than ImageJ estimates from flatbed scanner images. • Easy Leaf Area provides an easy-to-use method for rapid measurement of leaf area and nondestructive estimation of canopy area from digital images.
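The pixel-counting idea above can be sketched as follows; the color-ratio thresholds and the calibration-square area are illustrative assumptions, not Easy Leaf Area's actual values:

```python
import numpy as np

def leaf_area_cm2(rgb, red_cal_area_cm2=4.0):
    """Estimate leaf area from pixel counts, in the spirit of Easy
    Leaf Area: classify pixels by color ratios, then scale the green
    (leaf) pixel count by a red calibration square of known physical
    area placed in the same image plane."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    green = (g > 1.2 * r) & (g > 1.2 * b)   # leaf pixels (assumed rule)
    red = (r > 1.2 * g) & (r > 1.2 * b)     # calibration pixels
    return green.sum() / max(red.sum(), 1) * red_cal_area_cm2
```

Because leaf and calibration areas are measured in the same image, the camera distance and focal length cancel out, which is the property the abstract highlights.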
Bayer Demosaicking with Polynomial Interpolation.
Wu, Jiaji; Anisetti, Marco; Wu, Wei; Damiani, Ernesto; Jeon, Gwanggil
2016-08-30
Demosaicking is a digital image process to reconstruct full color digital images from incomplete color samples from an image sensor. It is an unavoidable process for many devices incorporating a camera sensor (e.g., mobile phones, tablets, etc.). In this paper, we introduce a new demosaicking algorithm based on polynomial interpolation-based demosaicking (PID). Our method makes three contributions: calculation of error predictors, edge classification based on color differences, and a refinement stage using a weighted sum strategy. Our new predictors are generated on the basis of polynomial interpolation, and can be used as a sound alternative to other predictors obtained by bilinear or Laplacian interpolation. In this paper we show how our predictors can be combined according to the proposed edge classifier. After populating three color channels, a refinement stage is applied to enhance the image quality and reduce demosaicking artifacts. Our experimental results show that the proposed method substantially improves over existing demosaicking methods in terms of objective performance (CPSNR, S-CIELAB ΔE, and FSIM), and visual performance.
Laser-based volumetric flow visualization by digital color imaging of a spectrally coded volume.
McGregor, T J; Spence, D J; Coutts, D W
2008-01-01
We present the framework for volumetric laser-based flow visualization instrumentation using a spectrally coded volume to achieve three-component three-dimensional particle velocimetry. By delivering light from a frequency-doubled Nd:YAG laser with an optical fiber, we exploit stimulated Raman scattering within the fiber to generate a continuum spanning the visible spectrum from 500 to 850 nm. We shape and disperse the continuum light to illuminate a measurement volume of 20 × 10 × 4 mm³, in which light sheets of differing spectral properties overlap to form an unambiguous color variation along the depth direction. Using a digital color camera we obtain images of particle fields in this volume. We extract the full spatial distribution of particles with depth inferred from particle color. This paper provides a proof of principle of this instrument, examining the spatial distribution of a static field and a spray field of water droplets ejected by the nozzle of an airbrush.
An automated system for whole microscopic image acquisition and analysis.
Bueno, Gloria; Déniz, Oscar; Fernández-Carrobles, María Del Milagro; Vállez, Noelia; Salido, Jesús
2014-09-01
The field of anatomic pathology has experienced major changes over the last decade. Virtual microscopy (VM) systems have allowed experts in pathology and other biomedical areas to work in a safer and more collaborative way. VMs are automated systems capable of digitizing microscopic samples that were traditionally examined one by one. The possibility of having digital copies reduces the risk of damaging original samples, and also makes it easier to distribute copies among other pathologists. This article describes the development of an automated high-resolution whole slide imaging (WSI) system tailored to the needs and problems encountered in digital imaging for pathology, from hardware control to the full digitization of samples. The system has been built with an additional digital monochromatic camera alongside the default color camera, and LED transmitted illumination (RGB). Monochrome cameras are the preferred method of acquisition for fluorescence microscopy. The system is able to digitize correctly and form large high-resolution microscope images for both brightfield and fluorescence. The quality of the digital images has been quantified using three metrics based on sharpness, contrast, and focus. It has been validated on 150 tissue samples of brain autopsies, prostate biopsies, and lung cytologies, at five magnifications: 2.5×, 10×, 20×, 40×, and 63×. The article is focused on the hardware set-up and the acquisition software, although results of the implemented image processing techniques included in the software and applied to the different tissue samples are also presented. © 2014 Wiley Periodicals, Inc.
Chromatic aberration correction: an enhancement to the calibration of low-cost digital dermoscopes.
Wighton, Paul; Lee, Tim K; Lui, Harvey; McLean, David; Atkins, M Stella
2011-08-01
We present a method for calibrating low-cost digital dermoscopes that corrects for color, inconsistent lighting, and chromatic aberration. Chromatic aberration is a form of radial distortion that often occurs in inexpensive digital dermoscopes and creates red and blue halo-like effects on edges. Being radial in nature, distortions due to chromatic aberration are not constant across the image, but rather vary in both magnitude and direction. As a result, distortions are not only visually distracting but could also mislead automated characterization techniques. Two low-cost dermoscopes, based on different consumer-grade cameras, were tested. Color is corrected by imaging a reference and applying singular value decomposition to determine the transformation required to ensure accurate color reproduction. Lighting is corrected by imaging a uniform surface and creating lighting correction maps. Chromatic aberration is corrected using a second-order radial distortion model. Our results for color and lighting calibration are consistent with previously published results, while distortions due to chromatic aberration can be reduced by 42-47% in the two systems considered. The disadvantages of inexpensive dermoscopy can be quickly and substantially mitigated with a suitable calibration procedure. © 2011 John Wiley & Sons A/S.
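A second-order radial distortion model of the kind described resamples the red and blue channels along the radius from the image center, leaving green as the reference. A hedged nearest-neighbour sketch (the coefficients `k_red` and `k_blue` are assumed to come from a prior calibration step, as in the paper):

```python
import numpy as np

def correct_lateral_ca(img, k_red, k_blue):
    """Correct lateral chromatic aberration with a second-order radial
    model: each red/blue output pixel is sampled from radius
    r * (1 + k * r^2) about the image center (nearest-neighbour
    resampling for simplicity)."""
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    dy, dx = yy - cy, xx - cx
    r2 = (dx * dx + dy * dy) / (max(h, w) / 2.0) ** 2  # normalized radius^2
    out = img.copy()
    for ch, k in ((0, k_red), (2, k_blue)):
        scale = 1.0 + k * r2
        sy = np.clip(np.rint(cy + dy * scale), 0, h - 1).astype(int)
        sx = np.clip(np.rint(cx + dx * scale), 0, w - 1).astype(int)
        out[..., ch] = img[sy, sx, ch]
    return out
```

With both coefficients at zero the mapping is the identity; positive coefficients pull the channel's halo back toward the edge it drifted from.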
Full-color OLED on silicon microdisplay
NASA Astrophysics Data System (ADS)
Ghosh, Amalkumar P.
2002-02-01
eMagin has developed numerous enhancements to organic light emitting diode (OLED) technology, including a unique, up-emitting structure for OLED-on-silicon microdisplay devices. Recently, eMagin has fabricated full color SVGA+ resolution OLED microdisplays on silicon, with over 1.5 million color elements. The display is based on white light emission from OLED followed by LCD-type red, green, and blue color filters. The color filters are patterned directly on OLED devices following suitable thin film encapsulation, and the drive circuits are built directly on single crystal silicon. The resultant color OLED technology, with its high efficiency, high brightness, and low power consumption, is ideally suited for near-to-the-eye applications such as wearable PCs, wireless Internet applications and mobile phones, portable DVD viewers, digital cameras, and other emerging applications.
Non-contact measurement of pulse wave velocity using RGB cameras
NASA Astrophysics Data System (ADS)
Nakano, Kazuya; Aoki, Yuta; Satoh, Ryota; Hoshi, Akira; Suzuki, Hiroyuki; Nishidate, Izumi
2016-03-01
Non-contact measurement of pulse wave velocity (PWV) using red, green, and blue (RGB) digital color images is proposed. Generally, PWV is used as an index of arteriosclerosis. In our method, changes in blood volume are calculated from changes in the color information and are estimated by combining multiple regression analysis (MRA) with a Monte Carlo simulation (MCS) model of light transport in human skin. Two pulse waves were measured on human skin using RGB cameras, and the PWV was calculated from the difference in pulse transit times and the distance between the two measurement points. The measured forehead-finger PWV (ffPWV) was on the order of m/s and became faster as the values of the vital signs rose. These results demonstrate the feasibility of the method.
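The final computation is simple once the two pulse arrival times have been extracted from the video signals; a sketch:

```python
def pulse_wave_velocity(distance_m, t_proximal_s, t_distal_s):
    """PWV = path length between the two measurement sites divided by
    the pulse transit time (e.g., the forehead-to-finger setup in the
    abstract). Times are pulse arrival times at each site."""
    transit = t_distal_s - t_proximal_s
    if transit <= 0:
        raise ValueError("distal pulse must arrive after proximal pulse")
    return distance_m / transit
```

The difficulty in practice lies entirely in estimating the two arrival times robustly from the camera color signals, which is what the MRA/MCS modeling addresses.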
Estimation of saturated pixel values in digital color imaging
Zhang, Xuemei; Brainard, David H.
2007-01-01
Pixel saturation, where the incident light at a pixel causes one of the color channels of the camera sensor to respond at its maximum value, can produce undesirable artifacts in digital color images. We present a Bayesian algorithm that estimates what the saturated channel's value would have been in the absence of saturation. The algorithm uses the non-saturated responses from the other color channels, together with a multivariate Normal prior that captures the correlation in response across color channels. The appropriate parameters for the prior may be estimated directly from the image data, since most image pixels are not saturated. Given the prior, the responses of the non-saturated channels, and the fact that the true response of the saturated channel is known to be greater than the saturation level, the algorithm returns the optimal expected mean square estimate for the true response. Extensions of the algorithm to the case where more than one channel is saturated are also discussed. Both simulations and examples with real images are presented to show that the algorithm is effective. PMID:15603065
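The core of the estimator can be sketched for a single saturated channel: condition the multivariate Normal prior on the unsaturated channels, then take the mean of the resulting univariate Normal truncated below at the saturation level. This is a simplified reading of the paper's approach, not its exact algorithm:

```python
import math
import numpy as np

def estimate_saturated(y_obs, sat_level, mu, cov, sat_idx=0):
    """Posterior-mean estimate of one saturated channel given the
    other channels and a multivariate Normal prior (mu, cov), which
    the paper suggests estimating from the non-saturated image pixels."""
    idx = [i for i in range(len(mu)) if i != sat_idx]
    mu_o = mu[idx]
    cov_oo = cov[np.ix_(idx, idx)]
    cov_so = cov[sat_idx, idx]
    # Conditional Normal of the saturated channel given the others
    m = mu[sat_idx] + cov_so @ np.linalg.solve(cov_oo, y_obs - mu_o)
    v = cov[sat_idx, sat_idx] - cov_so @ np.linalg.solve(cov_oo, cov_so)
    s = math.sqrt(max(v, 1e-12))
    # Mean of that Normal truncated below at the saturation level
    a = (sat_level - m) / s
    phi = math.exp(-0.5 * a * a) / math.sqrt(2.0 * math.pi)
    tail = 0.5 * math.erfc(a / math.sqrt(2.0))
    return m + s * phi / max(tail, 1e-300)
```

When the channels are strongly correlated, the conditional mean already sits well above the saturation level and dominates the estimate; the truncation term matters mainly when the other channels alone would predict a sub-saturation value.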
Specialized Color Targets for Spectral Reflectance Reconstruction of Magnified Images
NASA Astrophysics Data System (ADS)
Kruschwitz, Jennifer D. T.
Digital images are used almost exclusively instead of film to capture visual information across many scientific fields. The colorimetric color representation within these digital images can be relayed from the digital counts produced by the camera with the use of a known color target. In image capture of magnified images, there is currently no reliable color target that can be used at multiple magnifications and give the user a solid understanding of the color ground truth within those images. The first part of this dissertation included the design, fabrication, and testing of a color target produced with optical interference coated microlenses for use in an off-axis illumination, compound microscope. An ideal target was designed to increase the color gamut for colorimetric imaging and provide the necessary "Block Dye" spectral reflectance profiles across the visible spectrum to reduce the number of color patches necessary for multiple filter imaging systems that rely on statistical models for spectral reflectance reconstruction. There are other scientific disciplines that can benefit from a specialized color target to determine the color ground truth in their magnified images and perform spectral estimation. Not every discipline has the luxury of having a multi-filter imaging system. The second part of this dissertation developed two unique ways of using an interference coated color mirror target: one that relies on multiple light-source angles, and one that leverages a dynamic color change with time. The source multi-angle technique would be used for the microelectronic discipline where the reconstructed spectral reflectance would be used to determine a dielectric film thickness on a silicon substrate, and the time varying technique would be used for a biomedical example to determine the thickness of human tear film.
Establishing imaging sensor specifications for digital still cameras
NASA Astrophysics Data System (ADS)
Kriss, Michael A.
2007-02-01
Digital Still Cameras, DSCs, have now displaced conventional still cameras in most markets. The heart of a DSC is thought to be the imaging sensor, be it a full-frame CCD, an interline CCD, a CMOS sensor, or the newer Foveon buried-photodiode sensor. There is a strong tendency by consumers to consider only the number of mega-pixels in a camera and not to consider the overall performance of the imaging system, including sharpness, artifact control, noise, color reproduction, exposure latitude, and dynamic range. This paper will provide a systematic method to characterize the physical requirements of an imaging sensor and supporting system components based on the desired usage. The analysis is based on two software programs that determine the "sharpness", potential for artifacts, sensor "photographic speed", dynamic range, and exposure latitude based on the physical nature of the imaging optics and the sensor characteristics (including size of pixels, sensor architecture, noise characteristics, surface states that cause dark current, quantum efficiency, effective MTF, and the intrinsic full well capacity in terms of electrons per square centimeter). Examples will be given for consumer, pro-consumer, and professional camera systems. Where possible, these results will be compared to imaging systems currently on the market.
Evaluation of a hyperspectral image database for demosaicking purposes
NASA Astrophysics Data System (ADS)
Larabi, Mohamed-Chaker; Süsstrunk, Sabine
2011-01-01
We present a study on the applicability of hyperspectral images to evaluate color filter array (CFA) design and the performance of demosaicking algorithms. The aim is to simulate a typical digital still camera processing pipeline and to compare two different scenarios: evaluate the performance of demosaicking algorithms applied to raw camera RGB values before color rendering to sRGB, and evaluate the performance of demosaicking algorithms applied on the final sRGB color-rendered image. The second scenario is the most frequently used one in the literature because CFA designs and algorithms are usually tested on a set of existing images that are already rendered, such as the Kodak Photo CD set containing the well-known lighthouse image. We simulate the camera processing pipeline with measured spectral sensitivity functions of a real camera. Modeling a Bayer CFA, we select three linear demosaicking techniques in order to perform the tests. The evaluation is done using CMSE, CPSNR, s-CIELAB, and MSSIM metrics to compare demosaicking results. We find that the performance, and especially the difference between demosaicking algorithms, differs significantly depending on whether the mosaicking/demosaicking is applied to camera raw values as opposed to already-rendered sRGB images. We argue that evaluating the former gives a better indication of how a CFA/demosaicking combination will work in practice, and that it is in the interest of the community to create a hyperspectral image dataset dedicated to that effect.
Imaging system design and image interpolation based on CMOS image sensor
NASA Astrophysics Data System (ADS)
Li, Yu-feng; Liang, Fei; Guo, Rui
2009-11-01
An image acquisition system is introduced, which consists of a color CMOS image sensor (OV9620), SRAM (CY62148), CPLD (EPM7128AE), and DSP (TMS320VC5509A). The CPLD implements the logic and timing control of the system. The SRAM stores the image data, and the DSP controls the image acquisition system through the SCCB (OmniVision Serial Camera Control Bus). The timing sequence of the CMOS image sensor OV9620 is analyzed. The imaging part and the high-speed image data memory unit are designed. The hardware and software design of the image acquisition and processing system is given. CMOS digital cameras use color filter arrays to sample different spectral components, such as red, green, and blue. At the location of each pixel only one color sample is taken, and the other colors must be interpolated from neighboring samples. We use an edge-oriented adaptive interpolation algorithm for the edge pixels and a bilinear interpolation algorithm for the non-edge pixels to improve the visual quality of the interpolated images. This method achieves high processing speed, decreases the computational complexity, and effectively preserves the image edges.
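The hybrid interpolation rule described above (directional averaging at edges, plain bilinear elsewhere) can be sketched for a single missing green sample at a red or blue Bayer site; the gradient threshold is an illustrative assumption, not the paper's value:

```python
import numpy as np

def interp_green(raw, y, x, grad_thresh=20.0):
    """Edge-oriented green interpolation at one non-green Bayer site:
    if the local horizontal/vertical gradients are small, use the
    bilinear average of the four green neighbours; otherwise average
    along the direction with the smaller gradient (i.e., along the
    edge, not across it)."""
    left, right = float(raw[y, x - 1]), float(raw[y, x + 1])
    up, down = float(raw[y - 1, x]), float(raw[y + 1, x])
    dh, dv = abs(left - right), abs(up - down)
    if max(dh, dv) < grad_thresh:        # flat region: bilinear
        return (left + right + up + down) / 4.0
    if dh < dv:                          # edge runs horizontally
        return (left + right) / 2.0
    return (up + down) / 2.0             # edge runs vertically
```

Averaging along the edge rather than across it is what avoids the zipper artifacts that plain bilinear interpolation produces at sharp boundaries.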
Development of an imaging method for quantifying a large digital PCR droplet
NASA Astrophysics Data System (ADS)
Huang, Jen-Yu; Lee, Shu-Sheng; Hsu, Yu-Hsiang
2017-02-01
Portable devices have been recognized as the future linkage between end-users and lab-on-a-chip devices. They have user-friendly interfaces and provide apps that interface with headphones, cameras, communication channels, and other peripherals. In particular, the cameras installed in smartphones or pads already offer high imaging resolution with a large number of pixels. This unique feature has triggered research into integrating optical fixtures with smartphones to provide microscopic imaging capabilities. In this paper, we report our study on developing a portable diagnostic tool based on the imaging system of a smartphone and a digital PCR biochip. A computational algorithm is developed to process optical images of a digital PCR biochip taken with a smartphone in a black box. Each reaction droplet is recorded in pixels and is analyzed in the sRGB (red, green, and blue) color space. A multistep filtering algorithm and an auto-threshold algorithm are adopted to minimize background noise contributed by CCD cameras and to rule out false-positive droplets, respectively. Finally, a size-filtering method is applied to identify the number of positive droplets and quantify the target's concentration. Statistical analysis is then performed for diagnostic purposes. This process can be integrated into an app and can provide a user-friendly interface without professional training.
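The threshold-then-size-filter pipeline can be sketched in toy form; the auto-threshold rule (mean + 2 standard deviations) and the area bounds here are assumptions, not the paper's calibrated values:

```python
import numpy as np

def count_positive_droplets(channel, threshold=None, min_area=5, max_area=500):
    """Count bright blobs in one color channel: threshold, label connected
    regions with a simple flood fill, and keep only blobs whose pixel area
    falls inside the expected droplet size range."""
    channel = np.asarray(channel, dtype=np.float64)
    if threshold is None:
        threshold = channel.mean() + 2.0 * channel.std()  # assumed auto-threshold
    mask = channel > threshold
    seen = np.zeros_like(mask, dtype=bool)
    rows, cols = mask.shape
    count = 0
    for r0 in range(rows):
        for c0 in range(cols):
            if mask[r0, c0] and not seen[r0, c0]:
                stack, area = [(r0, c0)], 0   # flood-fill one blob
                seen[r0, c0] = True
                while stack:
                    r, c = stack.pop()
                    area += 1
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols \
                                and mask[rr, cc] and not seen[rr, cc]:
                            seen[rr, cc] = True
                            stack.append((rr, cc))
                if min_area <= area <= max_area:  # size filter rejects specks
                    count += 1
    return count
```

The size filter is what separates genuine droplets from single-pixel sensor noise that survives thresholding.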
2009-08-01
habitat analysis because of the high horizontal error between the mosaicked image tiles. The imagery was collected with a non-metric camera and likewise...possible with true color imagery (digital orthophotos) or multispectral imagery, but usually comes at a much higher cost. Due to its availability and
A New Digital Imaging and Analysis System for Plant and Ecosystem Phenological Studies
NASA Astrophysics Data System (ADS)
Ramirez, G.; Ramirez, G. A.; Vargas, S. A., Jr.; Luna, N. R.; Tweedie, C. E.
2015-12-01
Over the past decade, environmental scientists have increasingly used low-cost sensors and custom software to gather and analyze environmental data. Included in this trend has been the use of imagery from field-mounted static digital cameras. Published literature has highlighted the challenges scientists have encountered with poor and problematic camera performance and power consumption, limited data download and wireless communication options, the general ruggedness of off-the-shelf camera solutions, and time-consuming, hard-to-reproduce digital image analysis options. Data loggers and sensors are typically limited to in situ data storage (requiring manual downloading) and/or expensive data streaming options. Here we highlight the features and functionality of a newly invented camera/data logger system and coupled image analysis software suited to plant and ecosystem phenological studies (patent pending). The camera has resulted from several years of development and prototype testing supported by several grants funded by the US NSF. These inventions have several unique features and functions and have been field tested in desert, arctic, and tropical rainforest ecosystems. The system can be used to acquire imagery/data from static and mobile platforms. Data are collected, preprocessed, and streamed to the cloud without the need for an external computer, and the system can run for extended periods. The camera module is capable of acquiring RGB, IR, and thermal (LWIR) data and storing it in a variety of formats including RAW. The system is fully customizable with a wide variety of passive and smart sensors. The camera can be triggered by state conditions detected by sensors and/or at selected time intervals. The device includes USB, Wi-Fi, Bluetooth, serial, GSM, Ethernet, and Iridium connections and can be connected to commercial cloud servers such as Dropbox. The complementary image analysis software is compatible with all popular operating systems.
Imagery can be viewed and analyzed in the RGB, HSV, and L*a*b* color spaces. Users can select spectral indices derived from published literature and/or choose to have analytical output reported as separate channel strengths for a given color space. Results of the analysis can be viewed in a plot and/or saved as a .csv file for additional analysis and visualization.
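One spectral index common in the camera-phenology literature is the green chromatic coordinate, GCC = G / (R + G + B). Whether it is among the indices shipped with this particular software is an assumption, but it illustrates the per-channel analysis described above:

```python
import numpy as np

def green_chromatic_coordinate(rgb):
    """Green chromatic coordinate per pixel: G / (R + G + B).
    rgb: array whose last axis holds the three channels; black pixels
    (zero total) are mapped to 0 to avoid division by zero."""
    rgb = np.asarray(rgb, dtype=np.float64)
    total = rgb.sum(axis=-1)
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(total > 0, rgb[..., 1] / total, 0.0)
```

Because GCC is a ratio, it is partially insensitive to overall brightness changes, which is why chromatic coordinates are preferred over raw channel values for multi-day camera time series.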
Color and Contour Based Identification of Stem of Coconut Bunch
NASA Astrophysics Data System (ADS)
Kannan Megalingam, Rajesh; Manoharan, Sakthiprasad K.; Reddy, Rajesh G.; Sriteja, Gone; Kashyap, Ashwin
2017-08-01
Vision is a key component of artificial intelligence and automated robotics, and sensors or cameras are the sight organs of a robot: only through them can it locate itself or identify the shape of a regular or irregular object. This paper presents a method for identifying an object based on color and contour recognition, using a camera and digital image processing techniques, for robotic applications. To identify the contour, a shape-matching technique is used, which takes input data from the provided database and uses it to identify the contour by checking for a shape match. The shape match is based on iterating through each contour of the thresholded image. The color is identified on the HSV scale by approximating the desired range of values from the database. The HSV data, along with the iteration, is used to identify a quadrilateral, which is our required contour. This algorithm could also be used in a non-deterministic plane, using HSV values exclusively.
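The HSV range check described above can be sketched as follows; the hue window and the saturation/value floors are illustrative assumptions, since the database values are not given in the abstract:

```python
import colorsys

def is_target_color(r, g, b, h_range=(0.05, 0.15), s_min=0.4, v_min=0.2):
    """Test whether an RGB pixel (0..255 per channel) falls inside a target
    HSV window: hue within h_range, with minimum saturation and value so
    that near-gray and near-black pixels are rejected."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h_range[0] <= h <= h_range[1] and s >= s_min and v >= v_min
```

Working in HSV rather than RGB is the standard reason such checks tolerate moderate lighting changes: brightness shifts move V while leaving H largely stable.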
Toslak, Devrim; Liu, Changgeng; Alam, Minhaj Nur; Yao, Xincheng
2018-06-01
A portable fundus imager is essential for emerging telemedicine screening and point-of-care examination of eye diseases. However, existing portable fundus cameras have limited field of view (FOV) and frequently require pupillary dilation. We report here a miniaturized indirect ophthalmoscopy-based nonmydriatic fundus camera with a snapshot FOV up to 67° external angle, which corresponds to a 101° eye angle. The wide-field fundus camera consists of a near-infrared light source (LS) for retinal guidance and a white LS for color retinal imaging. By incorporating digital image registration and glare elimination methods, a dual-image acquisition approach was used to achieve reflection artifact-free fundus photography.
Superresolution with the focused plenoptic camera
NASA Astrophysics Data System (ADS)
Georgiev, Todor; Chunev, Georgi; Lumsdaine, Andrew
2011-03-01
Digital images from a CCD or CMOS sensor with a color filter array must undergo a demosaicing process to combine the separate color samples into a single color image. This interpolation process can interfere with the subsequent superresolution process. Plenoptic superresolution, which relies on precise sub-pixel sampling across captured microimages, is particularly sensitive to such resampling of the raw data. In this paper we present an approach for superresolving plenoptic images that takes place at the time of demosaicing the raw color image data. Our approach exploits the interleaving provided by typical color filter arrays (e.g., Bayer filter) to further refine plenoptic sub-pixel sampling. Our rendering algorithm treats the color channels in a plenoptic image separately, which improves final superresolution by a factor of two. With appropriate plenoptic capture we show the theoretical possibility for rendering final images at full sensor resolution.
NASA Astrophysics Data System (ADS)
Silva, T. S. F.; Torres, R. S.; Morellato, P.
2017-12-01
Vegetation phenology is a key component of ecosystem function and biogeochemical cycling, and is highly susceptible to climatic change. Phenological knowledge in the tropics is limited by a lack of monitoring, traditionally done by laborious direct observation. Ground-based digital cameras can automate daily observations, but offer limited spatial coverage. Imaging by low-cost Unmanned Aerial Systems (UAS) combines the fine resolution of ground-based methods with an unprecedented capability for spatial coverage, but challenges remain in producing color-consistent multitemporal images. We evaluated the applicability of multitemporal UAS imaging to monitor phenology in tropical altitudinal grasslands and forests, answering: 1) Can very-high-resolution aerial photography from conventional digital cameras be used to reliably monitor vegetative and reproductive phenology? 2) How is UAS monitoring affected by changes in illumination and by sensor physical limitations? We flew imaging missions monthly from Feb-16 to Feb-17, using a UAS equipped with an RGB Canon SX260 camera. Flights were carried out between 10am and 4pm, at 120-150m a.g.l., yielding 5-10cm spatial resolution. To compensate for illumination changes caused by time of day, season, and cloud cover, calibration was attempted using reference targets and empirical models, as well as color space transformations. For vegetative phenological monitoring, the multitemporal response was severely affected by changes in illumination conditions, strongly confounding the phenological signal. These variations could not be adequately corrected through calibration due to sensor limitations. For reproductive phenology, the very high resolution of the acquired imagery allowed discrimination of individual reproductive structures for some species, and their stark colorimetric differences from vegetative structures allowed detection of reproductive timing in the HSV color space, despite illumination effects. 
We conclude that reliable vegetative phenology monitoring may exceed the capabilities of consumer cameras, but reproductive phenology can be successfully monitored for species with conspicuous reproductive structures. Further research is being conducted to improve calibration methods and information extraction through machine learning.
Spatial super-resolution of colored images by micro mirrors
NASA Astrophysics Data System (ADS)
Dahan, Daniel; Yaacobi, Ami; Pinsky, Ephraim; Zalevsky, Zeev
2018-06-01
In this paper, we present two methods of dealing with the geometric resolution limit of color imaging sensors. It is possible to overcome the pixel size limit by adding a digital micro-mirror device component on the intermediate image plane of an optical system, and adapting its pattern in a computerized manner before sampling each frame. The full RGB image can be reconstructed from the Bayer camera by building a dedicated optical design, or by adjusting the demosaicing process to the special format of the enhanced image.
How many pixels does it take to make a good 4"×6" print? Pixel count wars revisited
NASA Astrophysics Data System (ADS)
Kriss, Michael A.
2011-01-01
In the early 1980s, the future of conventional silver-halide photographic systems was of great concern due to the potential introduction of electronic imaging systems, then typified by the Sony Mavica analog electronic camera. The focus was on the quality of film-based systems as expressed in the equivalent number of pixels and bits per pixel, and on how many pixels would be required to create an equivalent-quality image from a digital camera. It was found that 35-mm frames of ISO 100 color negative film contained equivalent pixels of 12 microns for a total of 18 million pixels per frame (6 million pixels per layer) with about 6 bits of information per pixel; the introduction of new emulsion technology, tabular AgX grains, increased this to 8 bits per pixel. Higher ISO speed films had larger equivalent pixels and fewer pixels per frame, but retained the 8 bits per pixel. Further work found that a high-quality 3.5" x 5.25" print could be obtained from a three-layer system containing 1300 x 1950 pixels per layer, or about 7.6 million pixels in all. In short, it became clear that when a digital camera contained about 6 million pixels (in a single layer using a color filter array and appropriate image processing), digital systems would challenge and replace conventional film-based systems in the consumer market. By 2005 this became the reality. Since 2005 there has been a "pixel war" raging amongst digital camera makers. The question arises of just how many pixels are required, and whether all pixels are equal. This paper provides a practical look at how many pixels are needed for a good print based on the form factor of the sensor (sensor size) and the effective optical modulation transfer function (optical spread function) of the camera lens. Is it better to have 16 million 5.7-micron pixels or 6 million 7.8-micron pixels? How do intrinsic (no electronic boost) ISO speed and exposure latitude vary with pixel size? 
A systematic review of these issues will be provided within the context of image quality and ISO speed models developed over the last 15 years.
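The headline question has a simple arithmetic core. A minimal sketch, assuming the common 300 pixels-per-inch rule of thumb for prints viewed at arm's length (the paper's own models are more nuanced, folding in sensor size and lens MTF):

```python
def pixels_for_print(width_in, height_in, ppi=300):
    """Pixel budget for a print of the given size in inches at a target
    print resolution: returns (width_px, height_px, total_px)."""
    w, h = round(width_in * ppi), round(height_in * ppi)
    return w, h, w * h
```

By this rule a 4"×6" print needs 1200×1800 pixels, about 2.2 megapixels, which is why the paper's answer hinges on optics and pixel quality rather than on raw pixel count alone.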
Age estimation of bloodstains using smartphones and digital image analysis.
Thanakiatkrai, Phuvadol; Yaodam, Alisa; Kitpipit, Thitika
2013-12-10
Recent studies on bloodstains have focused on determining the time since deposition, which can provide useful temporal information to forensic investigations. This study is the first to use smartphone cameras in combination with a truly low-cost illumination system as a tool to estimate the age of bloodstains. Bloodstains were deposited on various substrates and photographed with a smartphone camera. Three smartphones (Samsung Galaxy S Plus, Apple iPhone 4, and Apple iPad 2) were compared. The environmental effects of temperature, humidity, light exposure, and anticoagulant on the bloodstain age estimation process were explored. The color values from the digital images were extracted and correlated with time since deposition. Magenta had the highest correlation (R² = 0.966) and was used in subsequent experiments. The Samsung Galaxy S Plus was the most suitable smartphone, as its magenta value decreased exponentially with increasing time and had the highest repeatability (low variation within and between pictures). The quantifiable color change observed is consistent with the well-established hemoglobin denaturation process. Using a statistical classification technique called Random Forests™, we could predict bloodstain age accurately up to 42 days with an error rate of 12%. Additionally, the ages of forty blind stains were all correctly predicted, and 83% of mock casework samples were correctly classified. No within- or between-person variations were observed (p > 0.05), while smartphone camera, temperature, humidity, and substrate color influenced the age determination process in different ways. Our technique provides a cheap, rapid, easy-to-use, and truly portable alternative to more complicated analyses using specialized equipment, e.g., spectroscopy and HPLC. No training is necessary with our method, and we envision a smartphone application that could take user inputs of environmental factors and provide an accurate estimate of bloodstain age. 
Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
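A hedged sketch of the two numerical steps implied above: deriving a CMY-style magenta value from RGB (the study's exact color conversion is not specified, so this is an assumption) and fitting an exponential decay by log-linear least squares:

```python
import math

def magenta(r, g, b):
    """Simple CMY magenta from 0..255 RGB: M = 1 - G/255 (assumed conversion)."""
    return 1.0 - g / 255.0

def fit_exponential_decay(times, values):
    """Fit v = a * exp(-k * t) by ordinary least squares on log(v):
    log v = log a - k t, so the slope of the regression gives -k."""
    n = len(times)
    logs = [math.log(v) for v in values]
    t_mean = sum(times) / n
    y_mean = sum(logs) / n
    slope = sum((t - t_mean) * (y - y_mean) for t, y in zip(times, logs)) / \
        sum((t - t_mean) ** 2 for t in times)
    k = -slope
    a = math.exp(y_mean + k * t_mean)  # intercept back-transformed
    return a, k
```

Given magenta values sampled over days, the fitted (a, k) pair could be inverted to estimate the time since deposition of a new stain under similar conditions.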
Viking image processing. [digital stereo imagery and computer mosaicking
NASA Technical Reports Server (NTRS)
Green, W. B.
1977-01-01
The paper discusses the camera systems capable of recording black and white and color imagery developed for the Viking Lander imaging experiment. Each Viking Lander image consisted of a matrix of numbers with 512 rows and an arbitrary number of columns up to a maximum of about 9,000. Various techniques were used in the processing of the Viking Lander images, including: (1) digital geometric transformation, (2) the processing of stereo imagery to produce three-dimensional terrain maps, and (3) computer mosaicking of distinct processed images. A series of Viking Lander images is included.
Roberti, Joshua A.; SanClements, Michael D.; Loescher, Henry W.; Ayres, Edward
2014-01-01
Even though fine-root turnover is a highly studied topic, it is often poorly understood as a result of uncertainties inherent in its sampling, e.g., quantifying spatial and temporal variability. While many methods exist to quantify fine-root turnover, use of minirhizotrons has increased over the last two decades, making sensor errors another source of uncertainty. Currently, no standardized methodology exists to test and compare minirhizotron camera capability, imagery, and performance. This paper presents a reproducible, laboratory-based method by which minirhizotron cameras can be tested and validated in a traceable manner. The performance of camera characteristics was identified and test criteria were developed: we quantified the precision of camera location for successive images, estimated the trueness and precision of each camera's ability to quantify root diameter and root color, and also assessed the influence of heat dissipation introduced by the minirhizotron cameras and electrical components. We report detailed and defensible metrology analyses that examine the performance of two commercially available minirhizotron cameras. These cameras performed differently with regard to the various test criteria and uncertainty analyses. We recommend a defensible metrology approach to quantify the performance of minirhizotron camera characteristics and determine sensor-related measurement uncertainties prior to field use. This approach is also extensible to other digital imagery technologies. In turn, these approaches facilitate a greater understanding of measurement uncertainties (signal-to-noise ratio) inherent in the camera performance and allow such uncertainties to be quantified and mitigated so that estimates of fine-root turnover can be more confidently quantified. PMID:25391023
ERIC Educational Resources Information Center
Adelhelm, Manfred; Aristov, Natasha; Habekost, Achim
2010-01-01
The physical properties of oxygen, in particular, the blue color of the liquid phase, the red glow of its chemiluminescence, and its paramagnetism as shown by the entrapment or deflection of liquid oxygen by a magnetic field, can be investigated in a regular school setting with hand-held spectrophotometers and digital cameras. In college-level…
An optical watermarking solution for color personal identification pictures
NASA Astrophysics Data System (ADS)
Tan, Yi-zhou; Liu, Hai-bo; Huang, Shui-hua; Sheng, Ben-jian; Pan, Zhong-ming
2009-11-01
This paper presents a new approach for embedding authentication information into images on printed materials based on an optical projection technique. Our experimental setup consists of two parts: a common camera and an LCD projector, which projects a pattern onto the subject's body (especially the face). The pattern, generated by a computer, acts as an illumination source with a sinusoidal distribution and is also the watermark signal. For a color image, the watermark is embedded into the blue channel. While pictures are taken (256×256, 512×512, and 567×390 pixels, respectively), an invisible mark is embedded directly into the magnitude coefficients of the Discrete Fourier Transform (DFT) at the moment of exposure. Both optical and digital correlation are suitable for detecting this type of watermark. The decoded watermark is a set of concentric circles or sectors in the DFT domain (middle-frequency region) that is robust to photographing, printing, and scanning. Unlawful parties may modify or replace the original photograph to make fake passports (driver's licenses, and so on). Experiments show that it is difficult to forge certificates in which a watermark was embedded by our projector-camera combination, based on this analogue watermarking method rather than the classical digital method.
Appearance-based multimodal human tracking and identification for healthcare in the digital home.
Yang, Mau-Tsuen; Huang, Shen-Yen
2014-08-05
There is an urgent need for intelligent home surveillance systems to provide home security, monitor health conditions, and detect emergencies of family members. One of the fundamental problems to realize the power of these intelligent services is how to detect, track, and identify people at home. Compared to RFID tags that need to be worn all the time, vision-based sensors provide a natural and nonintrusive solution. Observing that body appearance and body build, as well as face, provide valuable cues for human identification, we model and record multi-view faces, full-body colors and shapes of family members in an appearance database by using two Kinects located at a home's entrance. Then the Kinects and another set of color cameras installed in other parts of the house are used to detect, track, and identify people by matching the captured color images with the registered templates in the appearance database. People are detected and tracked by multisensor fusion (Kinects and color cameras) using a Kalman filter that can handle duplicate or partial measurements. People are identified by multimodal fusion (face, body appearance, and silhouette) using a track-based majority voting. Moreover, the appearance-based human detection, tracking, and identification modules can cooperate seamlessly and benefit from each other. Experimental results show the effectiveness of the human tracking across multiple sensors and human identification considering the information of multi-view faces, full-body clothes, and silhouettes. The proposed home surveillance system can be applied to domestic applications in digital home security and intelligent healthcare.
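The track-based majority voting step can be sketched directly; the tie-breaking rule here (first label to reach the top count wins) is an assumption, since the paper does not state one:

```python
from collections import Counter

def track_identity(frame_labels):
    """Track-based majority voting: each frame in a track contributes one
    identity vote (from face, body appearance, or silhouette matching),
    and the most frequent label becomes the track's identity."""
    counts = Counter(frame_labels)
    return counts.most_common(1)[0][0]
```

Pooling votes over a whole track is what makes the identification robust to individual frames where the face is occluded or a clothing match is ambiguous.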
Colorimetric calibration of wound photography with off-the-shelf devices
NASA Astrophysics Data System (ADS)
Bala, Subhankar; Sirazitdinova, Ekaterina; Deserno, Thomas M.
2017-03-01
Digital cameras are now widely used for photographic documentation in the medical sciences. However, the color reproducibility of the same object suffers under different illuminations and lighting conditions. This variation in color representation is problematic when the images are used for segmentation and measurements based on color thresholds. In this paper, motivated by photographic follow-up of chronic wounds, we assess the impact of (i) gamma correction, (ii) white balancing, (iii) background unification, and (iv) reference card-based color correction. Automatic gamma correction and white balancing are applied to support the calibration procedure, where gamma correction is a nonlinear color transform. For unevenly illuminated images, non-uniform illumination correction is applied. In the last step, we apply colorimetric calibration using a reference color card of 24 patches with known colors. A lattice detection algorithm is used for locating the card, and the least squares algorithm is applied for affine color calibration in the RGB model. We tested the algorithm on images with seven different types of illumination, with and without flash, using three different off-the-shelf cameras including smartphones. We analyzed the spread of the resulting color values of selected color patches before and after applying the calibration. Additionally, we checked the individual contribution of each step of the calibration process. Using all steps, we achieved up to an 81% reduction in the standard deviation of color patch values in the resulting images compared to the original images. This supports manual as well as automatic quantitative wound assessment with off-the-shelf devices.
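The final step, reference card-based affine calibration by least squares, can be sketched as follows; the patch values in the usage example are illustrative, not taken from the paper:

```python
import numpy as np

def fit_affine_color_transform(measured, reference):
    """Least-squares affine calibration in RGB: solve [R G B 1] @ M = [R' G' B']
    over the card patches. measured, reference: (N, 3) arrays of patch colors;
    returns the (4, 3) affine matrix M."""
    measured = np.asarray(measured, dtype=np.float64)
    reference = np.asarray(reference, dtype=np.float64)
    ones = np.ones((measured.shape[0], 1))
    A = np.hstack([measured, ones])           # augment with 1 for the offset term
    M, *_ = np.linalg.lstsq(A, reference, rcond=None)
    return M

def apply_affine_color_transform(pixels, M):
    """Apply a fitted (4, 3) affine matrix to pixels with 3 trailing channels."""
    pixels = np.asarray(pixels, dtype=np.float64)
    ones = np.ones(pixels.shape[:-1] + (1,))
    return np.concatenate([pixels, ones], axis=-1) @ M
```

Fitting on the 24 known card patches and then applying M to the whole image maps the camera's colors toward the reference values regardless of the lighting under which the photo was taken.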
The Engineer Topographic Laboratories /ETL/ hybrid optical/digital image processor
NASA Astrophysics Data System (ADS)
Benton, J. R.; Corbett, F.; Tuft, R.
1980-01-01
An optical-digital processor for generalized image enhancement and filtering is described. The optical subsystem is a two-PROM Fourier filter processor. Input imagery is isolated, scaled, and imaged onto the first PROM; this input plane acts like a liquid gate and serves as an incoherent-to-coherent converter. The image is transformed onto a second PROM which also serves as a filter medium; filters are written onto the second PROM with a laser scanner in real time. A solid state CCTV camera records the filtered image, which is then digitized and stored in a digital image processor. The operator can then manipulate the filtered image using the gray scale and color remapping capabilities of the video processor as well as the digital processing capabilities of the minicomputer.
NASA Astrophysics Data System (ADS)
Costa, Manuel F. M.; Jorge, Jorge M.
1998-01-01
The early evaluation of the visual status of human infants is of critical importance. It is essential to the development of a child's visual system that she perceives clear, focused retinal images; furthermore, if refractive problems are not corrected in due time, amblyopia may occur. Photorefraction is a non-invasive clinical tool well suited to this population. Qualitative or semi-quantitative information about refractive errors, accommodation, strabismus, amblyogenic factors, and some pathologies (cataracts) can then be easily obtained. The photorefraction experimental setup we established, using recent advances in imaging devices, image processing, and fiber optics, allows the implementation of both the isotropic and eccentric photorefraction approaches. Essentially, both methods consist of delivering a light beam into the eyes: it is refracted by the ocular media, strikes the retina (in or out of focus), reflects off it, and is collected by a camera. The system comprises a CCD color camera and a light source. A beam splitter in front of the camera's objective allows coaxial illumination and observation, and an optomechanical system also allows eccentric illumination. The light source is a flash type and is synchronized with the camera's image acquisition. The camera's image is digitized and displayed in real time, and image processing routines are applied for image enhancement and feature extraction.
Monitoring of degradation of porous silicon photonic crystals using digital photography
2014-01-01
We report the monitoring of porous silicon (pSi) degradation in aqueous solutions using a consumer-grade digital camera. To facilitate optical monitoring, the pSi samples were prepared as one-dimensional photonic crystals (rugate filters) by electrochemical etching of highly doped p-type Si wafers using a periodic etch waveform. Two pSi formulations, representing chemistries relevant for self-reporting drug delivery applications, were tested: freshly etched pSi (fpSi) and fpSi coated with the biodegradable polymer chitosan (pSi-ch). Accelerated degradation of the samples in an ethanol-containing pH 10 aqueous basic buffer was monitored in situ by digital imaging with a consumer-grade digital camera with simultaneous optical reflectance spectrophotometric point measurements. As the nanostructured porous silicon matrix dissolved, a hypsochromic shift in the wavelength of the rugate reflectance peak resulted in visible color changes from red to green. While the H coordinate in the hue, saturation, and value (HSV) color space calculated using the as-acquired photographs was a good monitor of degradation at short times (t < 100 min), it was not a useful monitor of sample degradation at longer times since it was influenced by reflections of the broad spectral output of the lamp as well as from the narrow rugate reflectance band. A monotonic relationship was observed between the wavelength of the rugate reflectance peak and an H parameter value calculated from the average red-green-blue (RGB) values of each image by first independently normalizing each channel (R, G, and B) using their maximum and minimum value over the time course of the degradation process. Spectrophotometric measurements and digital image analysis using this H parameter gave consistent relative stabilities of the samples as fpSi > pSi-ch. PMID:25242902
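The H parameter described above can be sketched as follows: min-max normalize each channel independently over the whole time course, then take the HSV hue of each normalized triplet. This is a plausible reading of the procedure, not the authors' exact code:

```python
import colorsys
import numpy as np

def normalized_hue_series(rgb_means):
    """Compute the degradation-tracking H parameter for a time series of
    mean RGB values. rgb_means: (T, 3) array, one mean RGB per image.
    Each channel is min-max normalized over the full time course before
    the HSV hue is taken, which suppresses the lamp's broad spectral bias."""
    rgb = np.asarray(rgb_means, dtype=np.float64)
    lo, hi = rgb.min(axis=0), rgb.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # guard constant channels
    norm = (rgb - lo) / span
    return [colorsys.rgb_to_hsv(*px)[0] for px in norm]
```

Because the normalization uses extremes over the whole degradation run, the parameter is only defined retrospectively for a completed time course, consistent with the post hoc image analysis described above.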
Auto white balance method using a pigmentation separation technique for human skin color
NASA Astrophysics Data System (ADS)
Tanaka, Satomi; Kakinuma, Akihiro; Kamijo, Naohiro; Takahashi, Hiroshi; Tsumura, Norimichi
2017-02-01
The human visual system maintains the perceived colors of an object across various light sources. Similarly, current digital cameras feature an auto white balance function, which estimates the illuminant color and corrects the colors of a photograph as if it had been taken under a certain light source. The main subject in a photograph is often a person's face, which could be used to estimate the illuminant color. However, such estimation is adversely affected by differences in facial colors among individuals. The present paper proposes an auto white balance algorithm based on a pigmentation separation method that separates a human skin color image into melanin, hemoglobin, and shading components. Pigment densities, calculated from the melanin and hemoglobin components of the face, are fairly uniform within the same ethnic group. We thus propose a method that uses the subject's facial color in an image and is unaffected by individual differences in facial color among Japanese people.
Sun, X; Chen, K J; Berg, E P; Newman, D J; Schwartz, C A; Keller, W L; Maddock Carlin, K R
2014-02-01
The objective was to use digital color image texture features to predict troponin-T degradation in beef. Image texture features, including 88 gray-level co-occurrence texture features, 81 two-dimensional fast Fourier transform texture features, and 48 Gabor wavelet filter texture features, were extracted from color images of beef strip steaks (longissimus dorsi, n = 102) aged for 10 d, obtained using a digital camera and additional lighting. Steaks were designated degraded or not degraded based on troponin-T degradation determined on d 3 and d 10 postmortem by immunoblotting. Statistical analysis (STEPWISE regression model) and artificial neural network (support vector machine model, SVM) methods were designed to classify protein degradation. The d 3 and d 10 STEPWISE models were 94% and 86% accurate, respectively, while the d 3 and d 10 SVM models were 63% and 71% accurate, respectively, in predicting protein degradation in aged meat. STEPWISE and SVM models based on image texture features show potential to predict troponin-T degradation in meat. © 2013.
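A gray-level co-occurrence feature of the kind listed above can be computed as in this toy sketch (a single pixel offset and one contrast feature; the study's 88 GLCM features and its classification models are not reproduced):

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset.
    `img` holds integer gray levels in [0, levels)."""
    h, w = img.shape
    m = np.zeros((levels, levels), dtype=float)
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_contrast(m):
    """Contrast feature: expected squared gray-level difference."""
    i, j = np.indices(m.shape)
    return float(((i - j) ** 2 * m).sum())
```

In practice many offsets and statistics (contrast, correlation, energy, homogeneity, ...) are pooled into a feature vector before regression or SVM classification.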
Food color and appearance measurement, specification and communication, can we do better?
NASA Astrophysics Data System (ADS)
Hutchings, John; Singleton, Mark; Plater, Keith; Dias, Benjamin
2002-06-01
Conventional methods of color specification demand a sample that is flat, uniformly colored, diffusely reflecting and opaque. Very many natural, processed and manufactured foods, on the other hand, are three-dimensional, irregularly shaped, unevenly colored and translucent. Hence, spectrophotometers and tristimulus colorimeters can only be used for reliable and accurate color measurement in certain cases and under controlled conditions. These techniques are certainly unsuitable for specification of color patterning and other factors of total appearance in which, for example, surface texture and gloss interfere with the surface color. Hence, conventional techniques are more appropriate to food materials than to foods themselves. This paper reports investigations on the application of digital camera and screen technologies to these problems. Results indicated that accuracy sufficient for wide-scale use in the food industry is obtainable. Measurement applications include the specification and automatic measurement and classification of total appearance properties of three-dimensional products. This will be applicable to specification and monitoring of fruit and vegetables within the growing, storage and marketing supply chain and to on-line monitoring. Applications to sensory panels include monitoring of color and appearance changes occurring during paneling and the development of physical reference scales based on pigment chemistry changes. Digital technology will be extendable to the on-screen judging of real and virtual products as well as to the improvement of appearance archiving and communication.
Intra- and inter-rater reliability of digital image analysis for skin color measurement
Sommers, Marilyn; Beacham, Barbara; Baker, Rachel; Fargo, Jamison
2013-01-01
Background We determined the intra- and inter-rater reliability of data from digital image color analysis between an expert and novice analyst. Methods Following training, the expert and novice independently analyzed 210 randomly ordered images. Both analysts used Adobe® Photoshop lasso or color sampler tools based on the type of image file. After color correction with Pictocolor® in camera software, they recorded L*a*b* (L*=light/dark; a*=red/green; b*=yellow/blue) color values for all skin sites. We computed intra-rater and inter-rater agreement within anatomical region, color value (L*, a*, b*), and technique (lasso, color sampler) using a series of one-way intra-class correlation coefficients (ICCs). Results Results of ICCs for intra-rater agreement showed high levels of internal consistency reliability within each rater for the lasso technique (ICC ≥ 0.99) and somewhat lower, yet acceptable, level of agreement for the color sampler technique (ICC = 0.91 for expert, ICC = 0.81 for novice). Skin L*, skin b*, and labia L* values reached the highest level of agreement (ICC ≥ 0.92) and skin a*, labia b*, and vaginal wall b* were the lowest (ICC ≥ 0.64). Conclusion Data from novice analysts can achieve high levels of agreement with data from expert analysts with training and the use of a detailed, standard protocol. PMID:23551208
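The one-way ICC reported here can be computed from first principles; the sketch below is a generic ICC(1,1) implementation with made-up ratings, not the study's statistical code.

```python
def icc_oneway(data):
    """One-way random-effects ICC(1,1).
    `data` is a list of rows, one per target, each with k ratings."""
    n = len(data)
    k = len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    # between-target and within-target mean squares
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(data, row_means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfect agreement between raters gives an ICC of 1; values near 0 indicate that within-target disagreement swamps between-target differences.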
Error rate of automated calculation for wound surface area using digital photography.
Yang, S; Park, J; Lee, H; Lee, J B; Lee, B U; Oh, B H
2018-02-01
Although measuring wound size using digital photography is a quick and simple way to evaluate skin wounds, its accuracy has not been fully validated. To investigate the error rate of our newly developed wound surface area calculation using digital photography. Using a smartphone and a digital single-lens reflex (DSLR) camera, four photographs of various-sized wounds (diameter: 0.5-3.5 cm) were taken from a facial skin model together with color patches. The quantitative values of the wound areas were automatically calculated. The relative error (RE) of this method with regard to wound size and type of camera was analyzed. The RE of individual calculated areas ranged from 0.0329% (DSLR, diameter 1.0 cm) to 23.7166% (smartphone, diameter 2.0 cm). In spite of the correction for lens curvature, the smartphone had a significantly higher error rate than the DSLR camera (8.1303±4.8236 vs 3.9431±2.9772). However, for wounds less than 3 cm in diameter, the REs of the averaged values of four photographs were below 5%, and there was no difference between the average wound areas measured by the smartphone and the DSLR camera in those cases. For the follow-up of small skin defects (diameter < 3 cm), our newly developed automated wound area calculation method can be applied to large numbers of photographs, and their averaged values provide a relatively useful index of wound healing with an acceptable error rate. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
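The relative-error metric reduces to a percentage deviation from the reference area, and the averaging over four photographs is a simple mean; a sketch with hypothetical numbers:

```python
def relative_error_pct(measured, true):
    """Relative error (%) of a calculated wound area vs. the true area."""
    return abs(measured - true) / true * 100.0

def averaged_error_pct(measurements, true):
    """Error of the mean of several photographs of the same wound."""
    mean = sum(measurements) / len(measurements)
    return relative_error_pct(mean, true)
```

Averaging helps because per-photograph errors of opposite sign partially cancel, which is why the averaged RE stayed below 5% for small wounds even when individual photographs erred by far more.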
High-speed potato grading and quality inspection based on a color vision system
NASA Astrophysics Data System (ADS)
Noordam, Jacco C.; Otten, Gerwoud W.; Timmermans, Toine J. M.; van Zwol, Bauke H.
2000-03-01
A high-speed machine vision system for the quality inspection and grading of potatoes has been developed. The vision system grades potatoes on size, shape and external defects such as greening, mechanical damage, rhizoctonia, silver scab, common scab, cracks and growth cracks. A 3-CCD line-scan camera inspects the potatoes in flight as they pass under the camera. The use of mirrors to obtain a 360-degree view of the potato and the lack of product holders guarantee a full view of the potato. To achieve the required capacity of 12 tons/hour, 11 SHARC digital signal processors perform the image processing and classification tasks. The total capacity of the system is about 50 potatoes/sec. The color segmentation procedure uses linear discriminant analysis (LDA) in combination with a Mahalanobis distance classifier to classify the pixels. The procedure for the detection of misshapen potatoes uses a Fourier-based shape classification technique. Features such as area, eccentricity and central moments are used to discriminate between similarly colored defects. Experiments with red and yellow skin-colored potatoes have shown that the system is robust and consistent in its classification.
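The Mahalanobis-distance pixel classification step might look like the following sketch; the class names and training colors are invented for illustration, and the real system's LDA feature projection is not reproduced.

```python
import numpy as np

class MahalanobisPixelClassifier:
    """Assign a pixel to the class with the smallest Mahalanobis
    distance, using per-class mean and covariance from training pixels."""

    def fit(self, samples_by_class):
        self.stats = []
        for label, samples in samples_by_class.items():
            x = np.asarray(samples, dtype=float)
            mean = x.mean(axis=0)
            inv_cov = np.linalg.inv(np.cov(x, rowvar=False))
            self.stats.append((label, mean, inv_cov))
        return self

    def predict(self, pixel):
        p = np.asarray(pixel, dtype=float)
        def dist2(stat):
            _, mean, inv_cov = stat
            d = p - mean
            return float(d @ inv_cov @ d)
        return min(self.stats, key=dist2)[0]
```

Unlike plain Euclidean distance, the Mahalanobis form accounts for the spread and correlation of each class's color distribution, which matters when defect classes overlap in RGB space.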
Color filter array pattern identification using variance of color difference image
NASA Astrophysics Data System (ADS)
Shin, Hyun Jun; Jeon, Jong Ju; Eom, Il Kyu
2017-07-01
A color filter array is placed on the image sensor of a digital camera to acquire color images. Each pixel captures only one color, since the image sensor can measure only one color per pixel; the missing values are filled in by an interpolation process called demosaicing. The original and the interpolated pixels have different statistical characteristics, and if the image is modified by manipulation or forgery, the color filter array pattern is altered. This pattern change can be a clue for image forgery detection; however, most forgery detection algorithms have the disadvantage of assuming that the color filter array pattern is known. We present a method for identifying the color filter array pattern. Initially, the local mean is eliminated to remove the background effect. Subsequently, the color difference block is constructed to emphasize the difference between the original pixel and the interpolated pixel. The variance of the color difference image is proposed as a measure for estimating the color filter array configuration. The experimental results show that the proposed method is effective in identifying the color filter array pattern and provides superior performance compared with conventional methods.
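The statistical cue this method builds on (interpolated pixels are locally smoother than originally sampled ones) can be illustrated with a toy diagnostic on a green channel; this is only the underlying idea, not the authors' variance-of-color-difference algorithm.

```python
import numpy as np

def green_lattice_variances(green):
    """Variance of the residual (pixel minus 4-neighbor mean) on the two
    diagonal lattices of a green channel. The lattice that held original
    (non-interpolated) samples shows the larger residual variance."""
    res = green[1:-1, 1:-1] - 0.25 * (green[:-2, 1:-1] + green[2:, 1:-1] +
                                      green[1:-1, :-2] + green[1:-1, 2:])
    yy, xx = np.indices(res.shape)
    even = res[(yy + xx) % 2 == 0]  # (row + col) even lattice
    odd = res[(yy + xx) % 2 == 1]
    return float(even.var()), float(odd.var())
```

Comparing such variance measures across the candidate CFA phases lets one decide which lattice carried the original samples, and hence which pattern the camera used.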
NASA Astrophysics Data System (ADS)
Pozo, Antonio M.; Rubiño, Manuel; Hernández-Andrés, Javier; Nieves, Juan L.
2014-07-01
In this work, we present a teaching methodology using active-learning techniques in the course "Devices and Instrumentation" of the Erasmus Mundus Master's Degree in "Color in Informatics and Media Technology" (CIMET). Part of the course is dedicated to the study of image sensors and methods to evaluate their image quality. The teaching methodology consists of incorporating practical activities into the traditional lectures. One innovative aspect of this methodology is that students apply the concepts and methods studied in class to real devices, using their own digital cameras, webcams, or cellphone cameras. These activities give students a better understanding of the theoretical material presented in class and encourage their active participation.
Application of 3D reconstruction system in diabetic foot ulcer injury assessment
NASA Astrophysics Data System (ADS)
Li, Jun; Jiang, Li; Li, Tianjian; Liang, Xiaoyao
2018-04-01
To address the considerable deviation of the transparency tracing and digital planimetry methods used in current clinical assessment of diabetic foot ulcers, this paper proposes a 3D reconstruction system that produces a foot model with a good-quality texture; injury assessment is then performed by measuring the reconstructed model. The system uses the Intel RealSense SR300 depth camera, which is based on infrared structured light, as the input device; the required data are collected from different views by moving the camera around the scanned object. The geometry model is reconstructed by fusing the collected data, the mesh is then subdivided to increase the number of vertices, and the color of each vertex is determined using a non-linear optimization; the colored vertices compose the surface texture of the reconstructed model. Experimental results indicate that the reconstructed model has millimeter-level geometric accuracy and a texture with few artifacts.
Securing Color Fidelity in 3D Architectural Heritage Scenarios.
Gaiani, Marco; Apollonio, Fabrizio Ivan; Ballabeni, Andrea; Remondino, Fabio
2017-10-25
Ensuring color fidelity in image-based 3D modeling of heritage scenarios is still an open research matter. Image colors are important during data processing as they affect algorithm outcomes; their correct treatment, reduction and enhancement is therefore fundamental. In this contribution, we present an automated solution developed to improve the radiometric quality of an image dataset and the performance of two main steps of the photogrammetric pipeline (camera orientation and dense image matching). The suggested solution aims to achieve a robust automatic color balance and exposure equalization, stability of the RGB-to-gray image conversion and faithful color appearance of a digitized artifact. The innovative aspects of the article are: complete automation, better color target detection, a MATLAB implementation of the ACR scripts created by Fraser and the use of a specific weighted polynomial regression. A series of tests are presented to demonstrate the efficiency of the developed methodology and to evaluate color accuracy ('color characterization').
Image quality evaluation of medical color and monochrome displays using an imaging colorimeter
NASA Astrophysics Data System (ADS)
Roehrig, Hans; Gu, Xiliang; Fan, Jiahua
2012-10-01
The purpose of this presentation is to demonstrate means of examining the accuracy of image quality, with respect to the MTF (Modulation Transfer Function) and NPS (Noise Power Spectrum), of color and monochrome displays. Past indications were that color displays could affect clinical performance negatively compared to monochrome displays; Reference (1), however, was not based on measurements made with a colorimeter. Colorimeters such as the PM-1423 are now available, offering higher sensitivity and color accuracy than traditional CCD cameras. This paper focuses on colorimeter measurements of the physical characteristics of the spatial resolution and noise performance of color and monochrome medical displays; the data will subsequently be submitted to an ROC study for presentation at a future SPIE conference. Specifically, the MTF and NPS were evaluated and compared at different digital driving levels (DDL) for the two medical displays. Measurements of color image quality were made with an imaging colorimeter, which is ideally suited to FPD measurement because imaging systems capture spatial data, generating millions of data points in a single measurement operation. The imaging colorimeter used was the PM-1423 from Radiant Imaging.
It uses full-frame CCDs with a 100% fill factor, making it well suited to measuring the luminance and chrominance of individual pixels and sub-pixels on an LCD display. The CCDs are 14-bit, thermoelectrically cooled, temperature-stabilized, scientific-grade devices.
Plenoptic camera image simulation for reconstruction algorithm verification
NASA Astrophysics Data System (ADS)
Schwiegerling, Jim
2014-09-01
Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to lead to a color and exposure level for that pixel. To speed processing, three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.
Computer image processing - The Viking experience. [digital enhancement techniques
NASA Technical Reports Server (NTRS)
Green, W. B.
1977-01-01
Computer processing of digital imagery from the Viking mission to Mars is discussed, with attention given to subjective enhancement and quantitative processing. Contrast stretching and high-pass filtering techniques of subjective enhancement are described; algorithms developed to determine optimal stretch and filtering parameters are also mentioned. In addition, geometric transformations to rectify the distortion of shapes in the field of view and to alter the apparent viewpoint of the image are considered. Perhaps the most difficult problem in quantitative processing of Viking imagery was the production of accurate color representations of Orbiter and Lander camera images.
Demosaicking algorithm for the Kodak-RGBW color filter array
NASA Astrophysics Data System (ADS)
Rafinazari, M.; Dubois, E.
2015-01-01
Digital cameras capture images through different color filter arrays (CFAs) and then reconstruct the full color image. Each CFA pixel captures only one primary color component; the other primary components are estimated using information from neighboring pixels. During demosaicking, the two unknown color components are estimated at each pixel location. Most demosaicking algorithms use the RGB Bayer CFA pattern with red, green and blue filters; the least-squares luma-chroma demultiplexing method is a state-of-the-art demosaicking method for the Bayer CFA. In this paper we develop a new demosaicking algorithm for the Kodak-RGBW CFA. This particular CFA reduces noise and improves the quality of the reconstructed images by adding white pixels. We have applied non-adaptive and adaptive demosaicking methods using the Kodak-RGBW CFA on the standard Kodak image dataset, and the results have been compared with previous work.
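A non-adaptive baseline of the kind such work compares against, bilinear demosaicking of the Bayer RGGB pattern, can be sketched with three convolution kernels (the Kodak-RGBW handling and the least-squares method are beyond this illustration):

```python
import numpy as np

def conv3(img, k):
    """3x3 correlation with zero padding (the kernels used are symmetric)."""
    p = np.pad(img, 1)
    out = np.zeros_like(img)
    h, w = img.shape
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + h, dx:dx + w]
    return out

def bilinear_demosaic_rggb(raw):
    """Bilinear demosaic of an RGGB Bayer mosaic (float image)."""
    h, w = raw.shape
    yy, xx = np.indices((h, w))
    masks = {
        "R": (yy % 2 == 0) & (xx % 2 == 0),
        "G": (yy + xx) % 2 == 1,
        "B": (yy % 2 == 1) & (xx % 2 == 1),
    }
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    out = np.zeros((h, w, 3))
    for i, (chan, k) in enumerate([("R", k_rb), ("G", k_g), ("B", k_rb)]):
        out[..., i] = conv3(np.where(masks[chan], raw, 0.0), k)
    return out
```

Each kernel averages the available samples of one primary around every pixel; adaptive methods improve on this by steering the interpolation along edges, and RGBW patterns add a white-channel estimate on top.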
1990-02-14
Range: 4 billion miles from Earth, at 32 degrees to the ecliptic. P-36057C This color image of the Sun, Earth, and Venus is one of the first, and maybe only, images that show our solar system from such a vantage point. The image is a portion of a wide-angle image containing the Sun and the region of space where the Earth and Venus were at the time, with narrow-angle camera frames centered on each planet. The wide-angle image was taken with the camera's darkest filter, a methane absorption band, and the shortest possible exposure, one two-hundredth of a second, to avoid saturating the camera's vidicon tube with scattered sunlight. The Sun is not large in the sky as seen from Voyager's perspective at the edge of the solar system, yet it is still eight times brighter than the brightest star in Earth's sky, Sirius. The image of the Sun you see is far larger than the actual dimension of the solar disk. The result of the brightness is a bright, burned-out image with multiple reflections from the optics of the camera. The rays around the Sun are a diffraction pattern of the calibration lamp, which is mounted in front of the wide-angle lens. The two narrow-angle frames containing the images of the Earth and Venus have been digitally mosaicked into the wide-angle image at the appropriate scale. These images were taken through three color filters and recombined to produce the color image. Violet, green, and blue filters were used, with exposure times of 0.72, 0.48, and 0.72 seconds for Earth, and 0.36, 0.24, and 0.36 seconds for Venus. The images also show long linear streaks resulting from scattering of sunlight off parts of the camera and its shade.
NASA Astrophysics Data System (ADS)
Bell, J. F.; Godber, A.; McNair, S.; Caplinger, M. A.; Maki, J. N.; Lemmon, M. T.; Van Beek, J.; Malin, M. C.; Wellington, D.; Kinch, K. M.; Madsen, M. B.; Hardgrove, C.; Ravine, M. A.; Jensen, E.; Harker, D.; Anderson, R. B.; Herkenhoff, K. E.; Morris, R. V.; Cisneros, E.; Deen, R. G.
2017-07-01
The NASA Curiosity rover Mast Camera (Mastcam) system is a pair of fixed-focal length, multispectral, color CCD imagers mounted 2 m above the surface on the rover's remote sensing mast, along with associated electronics and an onboard calibration target. The left Mastcam (M-34) has a 34 mm focal length, an instantaneous field of view (IFOV) of 0.22 mrad, and a FOV of 20° × 15° over the full 1648 × 1200 pixel span of its Kodak KAI-2020 CCD. The right Mastcam (M-100) has a 100 mm focal length, an IFOV of 0.074 mrad, and a FOV of 6.8° × 5.1° using the same detector. The cameras are separated by 24.2 cm on the mast, allowing stereo images to be obtained at the resolution of the M-34 camera. Each camera has an eight-position filter wheel, enabling it to take Bayer pattern red, green, and blue (RGB) "true color" images, multispectral images in nine additional bands spanning 400-1100 nm, and images of the Sun in two colors through neutral density-coated filters. An associated Digital Electronics Assembly provides command and data interfaces to the rover, 8 Gb of image storage per camera, 11 bit to 8 bit companding, JPEG compression, and acquisition of high-definition video. Here we describe the preflight and in-flight calibration of Mastcam images, the ways that they are being archived in the NASA Planetary Data System, and the ways that calibration refinements are being developed as the investigation progresses on Mars. We also provide some examples of data sets and analyses that help to validate the accuracy and precision of the calibration.
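The quoted IFOV and FOV figures follow from simple pinhole geometry if one assumes the KAI-2020's 7.4 µm pixel pitch (the pitch is not stated in the abstract):

```python
import math

PIXEL_PITCH_M = 7.4e-6  # KAI-2020 pixel pitch; an assumption, not from the text

def ifov_mrad(focal_length_m):
    """Instantaneous field of view of one pixel, in milliradians."""
    return PIXEL_PITCH_M / focal_length_m * 1e3

def fov_deg(n_pixels, focal_length_m):
    """Full field of view across n_pixels of the detector, in degrees."""
    half_width = n_pixels * PIXEL_PITCH_M / 2.0
    return math.degrees(2.0 * math.atan(half_width / focal_length_m))
```

With these assumptions, 7.4 µm / 34 mm gives about 0.22 mrad for the M-34 and 7.4 µm / 100 mm gives 0.074 mrad for the M-100, while 1648 x 1200 pixels at 34 mm span roughly 20° x 15°, matching the values quoted above.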
[A Method for Selecting Self-Adaptive Chromaticity of the Projected Markers].
Zhao, Shou-bo; Zhang, Fu-min; Qu, Xing-hua; Zheng, Shi-wei; Chen, Zhe
2015-04-01
The authors designed a self-adaptive projection system composed of a color camera, a projector and a PC. A digital micro-mirror device (DMD), serving as the projector's spatial light modulator, was introduced into the optical path to modulate the illuminant spectrum produced by red, green and blue light-emitting diodes (LEDs). However, the color visibility of the active markers, here a projected spot array, is affected by a screen whose reflective spectrum is unknown: the chromaticity of the markers can be submerged when the screen has a similar spectrum. To enhance the color visibility of the active markers relative to the screen, a method for selecting the self-adaptive chromaticity of the projected markers in 3D scanning metrology is described. A color camera with only three channels limits the accuracy of device characterization, so to interconvert between device-independent and device-dependent color spaces, a high-dimensional linear model of the reflective spectrum was built; prior training samples provide additional constraints that yield a linear model with more than three degrees of freedom. Meanwhile, the spectral power distribution of the ambient light was estimated. Subsequently, the markers' chromaticity in CIE color space was selected by maximizing the Euclidean distance, and the corresponding RGB setting values were estimated via the inverse transform. Finally, a typical experiment shows the performance of the proposed approach. A 24-patch Munsell ColorChecker was used as the projection screen, and the color difference in chromaticity coordinates between the active marker and each color patch was utilized to evaluate the color visibility of the active markers relative to the screen. A comparison between the self-adaptive projection system and a traditional diode-laser projector is listed and discussed to highlight the advantage of the proposed method.
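The chromaticity-selection step, maximizing the Euclidean distance from the screen's chromaticity, can be sketched as follows; the candidate and screen coordinates are invented, and the spectral estimation that produces them is not reproduced.

```python
def pick_marker_chromaticity(candidates, screen_xy):
    """Choose the candidate chromaticity (x, y) that lies farthest, in
    Euclidean distance, from the screen's chromaticity."""
    def dist(c):
        return ((c[0] - screen_xy[0]) ** 2 + (c[1] - screen_xy[1]) ** 2) ** 0.5
    return max(candidates, key=dist)
```

The chosen chromaticity is then mapped back through the device model to RGB setting values for the projector.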
Improving wavelet denoising based on an in-depth analysis of the camera color processing
NASA Astrophysics Data System (ADS)
Seybold, Tamara; Plichta, Mathias; Stechele, Walter
2015-02-01
While denoising is an extensively studied task in signal processing research, most denoising methods are designed and evaluated using readily processed image data, e.g. the well-known Kodak data set, with additive white Gaussian noise (AWGN) as the noise model. This kind of test data does not correspond to today's real-world image data taken with a digital camera. Using such unrealistic data to test, optimize and compare denoising algorithms may lead to incorrect parameter tuning or suboptimal choices in research on real-time camera denoising algorithms. In this paper we derive a precise analysis of the noise characteristics for the different steps of the color processing. Based on real camera noise measurements and simulation of the processing steps, we obtain a good approximation of the noise characteristics, and we show how this approximation can be used in standard wavelet denoising methods. We improve wavelet hard thresholding and bivariate thresholding based on our noise analysis results; both visual quality and objective quality metrics show the advantage of the proposed method. As the method is implemented using look-up tables that are calculated before the denoising step, it has very low computational complexity and can process HD video sequences in real time on an FPGA.
Three-dimensional image signals: processing methods
NASA Astrophysics Data System (ADS)
Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru
2010-11-01
Over the years, extensive studies have been carried out to apply coherent optics methods to real-time processing, communications and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. The recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature investigation of processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. Images captured with an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses; this requires considerable processing power and memory to create and render the complex mix of colors, textures, virtual lighting and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms": holograms that can be stored on a computer and transmitted over conventional networks. We present some research methods for processing digital holograms for Internet transmission, together with results.
Environmental fog/rain visual display system for aircraft simulators
NASA Technical Reports Server (NTRS)
Chase, W. D. (Inventor)
1982-01-01
An environmental fog/rain visual display system for aircraft simulators is described. The electronic elements of the system include a real-time digital computer, a calligraphic color display which simulates landing lights of selectable intensity, and a color television camera for producing a moving color display of the airport runway as depicted on a model terrain board. The mechanical simulation elements of the system include an environmental chamber which can produce natural fog, nonhomogeneous fog, rain and fog combined, or rain only. A pilot looking through the aircraft windscreen will look through the fog and/or rain generated in the environmental chamber onto a viewing screen bearing the simulated color image of the airport runway, and observe a very real simulation of a runway as it would appear through actual fog and/or rain.
Improved wheal detection from skin prick test images
NASA Astrophysics Data System (ADS)
Bulan, Orhan
2014-03-01
Skin prick test is a commonly used method for the diagnosis of allergic diseases (e.g., pollen allergy, food allergy) in allergy clinics. The results of this test are the erythema and wheal provoked on the skin where the test is applied. The sensitivity of the patient to a specific allergen is determined by the physical size of the wheal, which can be estimated from images captured by digital cameras. Accurate wheal detection from these images is an important step for precise estimation of wheal size. In this paper, we propose a method for improved wheal detection on prick test images captured by digital cameras. Our method operates by first localizing the test region by detecting calibration marks drawn on the skin. The luminance variation across the localized region is eliminated by applying a color transformation from RGB to YCbCr and discarding the luminance channel. We enhance the contrast of the captured images for the purpose of wheal detection by performing principal component analysis on the blue-difference (Cb) and red-difference (Cr) color channels. Finally, we perform morphological operations on the contrast-enhanced image to detect the wheal on the image plane. Our experiments, performed on images acquired from 36 different patients, show the efficiency of the proposed method for wheal detection from skin prick test images captured in an uncontrolled environment.
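The RGB-to-YCbCr conversion and chroma PCA steps might be sketched like this (BT.601 coefficients are assumed; the paper's exact transform and the later morphological stage are not reproduced):

```python
import numpy as np

def rgb_to_cbcr(img):
    """ITU-R BT.601 chroma channels (Cb, Cr); the luminance channel is
    discarded to suppress illumination variation across the region."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([cb, cr], axis=-1)

def chroma_pca_contrast(img):
    """Project chroma pixels onto their first principal component,
    yielding a contrast-enhanced single-channel image."""
    cbcr = rgb_to_cbcr(img).reshape(-1, 2)
    cbcr -= cbcr.mean(axis=0)
    _, _, vt = np.linalg.svd(cbcr, full_matrices=False)
    return (cbcr @ vt[0]).reshape(img.shape[:2])
```

Because the Cb and Cr coefficient rows each sum to zero, a uniform brightness offset leaves the chroma unchanged, which is precisely why dropping luminance removes uneven lighting before the PCA step.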
Smartphone based Tomographic PIV using colored shadows
NASA Astrophysics Data System (ADS)
Aguirre-Pablo, Andres A.; Alarfaj, Meshal K.; Li, Er Qiang; Thoroddsen, Sigurdur T.
2016-11-01
We use low-cost smartphones and Tomo-PIV to reconstruct the 3D-3C velocity field of a vortex ring. The experiment is carried out in an octagonal tank of water with a vortex ring generator consisting of a flexible membrane enclosed by a cylindrical chamber. This chamber is pre-seeded with black polyethylene microparticles. The membrane is driven by an adjustable impulsive air pressure to produce the vortex ring. Four synchronized smartphone cameras, of 40 Mpx each, are used to capture the location of particles from different viewing angles. We use red, green and blue LEDs as backlighting sources to capture particle locations at different times. The exposure time on the smartphone cameras is set to 2 seconds, while each LED color is exposed for about 80 μs with time steps that can go below 300 μs. The timing of these light pulses is controlled with a digital delay generator. The backlight is blocked by the instantaneous location of the particles in motion, leaving a shadow of the corresponding color for each time step. The images are then preprocessed to separate the 3 different color fields, before using MART reconstruction and cross-correlation of the time steps to obtain the 3D-3C velocity field. This proof-of-concept experiment represents a possible low-cost Tomo-PIV setup.
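The color-field separation step can be illustrated with a minimal sketch. Each LED pulse darkens mainly its own camera channel, so the shadow record for pulse i is approximately channel i; the optional 3×3 cross-talk matrix here is a hypothetical stand-in for whatever spectral correction the authors actually apply.

```python
import numpy as np

def split_color_fields(image, mixing=None):
    """Separate a single long-exposure RGB shadow image into three
    per-pulse shadow fields (simplified sketch). 'mixing' is an assumed
    3x3 cross-talk matrix (camera channel response per LED) to invert;
    identity means the LEDs map cleanly onto the Bayer channels."""
    pix = image.astype(np.float64).reshape(-1, 3)
    if mixing is not None:
        # Undo spectral cross-talk between LEDs and color filters
        pix = pix @ np.linalg.inv(mixing).T
    fields = pix.reshape(image.shape[:2] + (3,))
    return [fields[..., i] for i in range(3)]
```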
Image quality evaluation of color displays using a Foveon color camera
NASA Astrophysics Data System (ADS)
Roehrig, Hans; Dallas, William J.; Fan, Jiahua; Krupinski, Elizabeth A.; Redford, Gary R.; Yoneda, Takahiro
2007-03-01
This paper presents preliminary data on the use of a color camera for Quality Control (QC) and Quality Analysis (QA) of a color LCD in comparison with a monochrome LCD. The color camera is a CMOS camera with a pixel size of 9 µm and a pixel matrix of 2268 × 1512 × 3. The camera uses a sensor that has co-located pixels for all three primary colors. The imaging geometry used was mostly 12 × 12 camera pixels per display pixel, although it appears that an imaging geometry of 17.6 might provide more accurate results. The color camera is used as an imaging colorimeter, where each camera pixel is calibrated to serve as a colorimeter. This capability permits the camera to determine the chromaticity of the color LCD at different sections of the display. After color calibration with a CS-200 colorimeter, the color coordinates of the display's primaries determined from the camera's luminance response are very close to those found with the CS-200. Only the color coordinates of the display's white point were in error. The Modulation Transfer Function (MTF) as well as the noise in terms of the Noise Power Spectrum (NPS) of both LCDs were evaluated. The horizontal MTFs of both displays have a larger negative slope than the vertical MTFs, indicating that the horizontal MTFs are poorer than the vertical MTFs. However, the modulations at the Nyquist frequency seem lower for the color LCD than for the monochrome LCD. These results contradict simulations regarding MTFs in the vertical direction. The spatial noise of the color display in both directions is larger than that of the monochrome display. Attempts were also made to separate the total noise into spatial and temporal components by subtracting images taken at exactly the same exposure. Temporal noise seems to be significantly lower than spatial noise.
Detection of hypertensive retinopathy using vessel measurements and textural features.
Agurto, Carla; Joshi, Vinayak; Nemeth, Sheila; Soliz, Peter; Barriga, Simon
2014-01-01
Features that indicate hypertensive retinopathy have been well described in the medical literature. This paper presents a new system to automatically classify subjects with hypertensive retinopathy (HR) using digital color fundus images. Our method consists of the following steps: 1) normalization and enhancement of the image; 2) determination of regions of interest based on automatic location of the optic disc; 3) segmentation of the retinal vasculature and measurement of vessel width and tortuosity; 4) extraction of color features; 5) classification of vessel segments as arteries or veins; 6) calculation of artery-vein ratios using the six widest (major) vessels for each category; 7) calculation of mean red intensity and saturation values for all arteries; 8) calculation of amplitude-modulation frequency-modulation (AM-FM) features for the entire image; and 9) classification of features into HR and non-HR using linear regression. This approach was tested on 74 digital color fundus photographs taken with TOPCON and CANON retinal cameras using leave-one-out cross-validation. An area under the ROC curve (AUC) of 0.84 was achieved, with sensitivity and specificity of 90% and 67%, respectively.
Farah, Ra'fat I
2016-01-01
The objectives of this in vitro study were: 1) to test the agreement among color coordinate differences and total color difference (ΔL*, ΔC*, Δh°, and ΔE) measurements obtained by digital image analysis (DIA) and spectrophotometer, and 2) to test the reliability of each method for obtaining color differences. A digital camera was used to record standardized images of each of the 15 shade tabs from the IPS e.max shade guide placed edge-to-edge in a phantom head with a reference shade tab. The images were analyzed using image-editing software (Adobe Photoshop) to obtain the color differences between the middle area of each test shade tab and the corresponding area of the reference tab. The color differences for the same shade tab areas were also measured using a spectrophotometer. To assess the reliability, measurements for the 15 shade tabs were repeated twice using the two methods. The Intraclass Correlation Coefficient (ICC) and the Dahlberg index were used to calculate agreement and reliability. The total agreement of the two methods for measuring ΔL*, ΔC*, Δh°, and ΔE, according to the ICC, exceeded 0.82. The Dahlberg indices for ΔL* and ΔE were 2.18 and 2.98, respectively. For the reliability calculation, the ICCs for the DIA and the spectrophotometer ΔE were 0.91 and 0.94, respectively. High agreement was obtained between the DIA and spectrophotometer results for the ΔL*, ΔC*, Δh°, and ΔE measurements. Further, the reliability of the measurements for the spectrophotometer was slightly higher than the reliability of all measurements in the DIA.
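The quantities compared in the study above (ΔL*, ΔC*, Δh°, and ΔE) follow standard CIELAB definitions; a minimal sketch using the CIE76 form of ΔE (the study does not state which ΔE formula it used, so this is an illustrative assumption):

```python
import math

def color_differences(lab1, lab2):
    """CIELAB differences between two (L*, a*, b*) triples:
    Delta L*, Delta C*, Delta h (hue angle, degrees), Delta E (CIE76)."""
    (L1, a1, b1), (L2, a2, b2) = lab1, lab2
    dL = L1 - L2
    dC = math.hypot(a1, b1) - math.hypot(a2, b2)   # chroma difference
    h1 = math.degrees(math.atan2(b1, a1)) % 360
    h2 = math.degrees(math.atan2(b2, a2)) % 360
    dh = (h1 - h2 + 180) % 360 - 180               # signed hue-angle difference
    dE = math.sqrt(dL**2 + (a1 - a2)**2 + (b1 - b2)**2)
    return dL, dC, dh, dE
```

For example, two shades differing only in lightness by 10 units give ΔE = 10.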
Misimi, E; Mathiassen, J R; Erikson, U
2007-01-01
A computer vision method was used to evaluate the color of Atlantic salmon (Salmo salar) fillets. Computer vision-based sorting of fillets according to their color was studied on 2 separate groups of salmon fillets. The images of fillets were captured using a high-resolution digital camera. Images of salmon fillets were then segmented into regions of interest and analyzed in the red, green, and blue (RGB) and CIE lightness, redness, and yellowness (Lab) color spaces, and classified according to the Roche color card industrial standard. Comparisons between visual evaluations of fillet color, made by a panel of human inspectors according to the Roche SalmoFan lineal standard, and the color scores generated by the computer vision algorithm showed that there were no significant differences between the methods. Overall, computer vision can be used as a powerful tool to sort fillets by color in a fast and nondestructive manner. The low cost of implementing computer vision solutions creates the potential to replace manual labor in fish processing plants with automation.
A method and results of color calibration for the Chang'e-3 terrain camera and panoramic camera
NASA Astrophysics Data System (ADS)
Ren, Xin; Li, Chun-Lai; Liu, Jian-Jun; Wang, Fen-Fei; Yang, Jian-Feng; Liu, En-Hai; Xue, Bin; Zhao, Ru-Jin
2014-12-01
The terrain camera (TCAM) and panoramic camera (PCAM) are two of the major scientific payloads installed on the lander and rover of the Chang'e 3 mission, respectively. Both use a CMOS sensor covered by a Bayer color filter array to capture color images of the Moon's surface. The RGB values of the original images are device-dependent for these two kinds of cameras, and there is an obvious color difference compared with human visual perception. This paper follows standards published by the International Commission on Illumination to establish a color correction model, designs the ground calibration experiment and obtains the color correction coefficients. The image quality has been significantly improved and there is no obvious color difference in the corrected images. Ground experimental results show that: (1) Compared with uncorrected images, the average color difference of TCAM is 4.30, which has been reduced by 62.1%. (2) The average color differences of the left and right cameras in PCAM are 4.14 and 4.16, which have been reduced by 68.3% and 67.6%, respectively.
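A least-squares fit of a 3×3 correction matrix against reference tristimulus values is one common way to implement this kind of CIE-based color correction. The sketch below is generic and does not reproduce the mission's actual model or coefficients:

```python
import numpy as np

def fit_color_correction(measured_rgb, reference_xyz):
    """Fit a 3x3 matrix M (least squares) so that measured camera RGB
    rows mapped through M approximate the reference tristimulus rows.
    Inputs are (N, 3) arrays of calibration-patch measurements."""
    M, *_ = np.linalg.lstsq(measured_rgb, reference_xyz, rcond=None)
    return M.T  # so that apply_correction(rgb, M) = rgb @ M.T

def apply_correction(rgb, M):
    """Apply the fitted correction to (N, 3) RGB rows."""
    return rgb @ M.T
```

With enough well-distributed calibration patches, the fitted matrix recovers an exact linear camera-to-XYZ mapping; residual color differences (e.g. the reported averages near 4) come from the nonlinearity the 3×3 model cannot capture.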
NASA Technical Reports Server (NTRS)
Wattson, R. B.; Harvey, P.; Swift, R.
1975-01-01
An intrinsic silicon charge injection device (CID) television sensor array has been used in conjunction with a CaMoO4 collinear tunable acousto-optic filter, a 61-inch reflector, a sophisticated computer system, and a digital color TV scan converter/computer to produce near-IR images of Saturn and Jupiter with 10 Å spectral resolution and approximately 3 arcsec spatial resolution. The CID camera has successfully obtained digitized 100 x 100 array images with 5 minutes of exposure time, and slow-scanned readout to a computer. Details of the equipment setup, innovations, problems, experience, data and final equipment performance limits are given.
NASA Astrophysics Data System (ADS)
Nishidate, Izumi; Mustari, Afrina; Kawauchi, Satoko; Sato, Shunichi; Sato, Manabu; Kokubo, Yasuaki
2017-02-01
We propose a rapid imaging method to monitor the spatial distribution of the total hemoglobin concentration (CHbT), the tissue oxygen saturation, and the scattering power b in the expression μs' = aλ^(-b) for the scattering parameters in the cerebral cortex using a digital red-green-blue camera. In the method, the RGB values are converted into tristimulus values in the CIEXYZ color space, which is compatible with the common RGB working spaces. A Monte Carlo simulation (MCS) of light transport in tissue is used to specify a relation among the tristimulus XYZ values and the concentration of oxygenated hemoglobin, that of deoxygenated hemoglobin, and the scattering power b. In the present study, we performed sequential recordings of RGB images of in vivo exposed rat brain during cortical spreading depolarization (CSD) evoked by the topical application of KCl. Changes in the total hemoglobin concentration and the tissue oxygen saturation imply a temporary change in cerebral blood flow during CSD. A decrease in the scattering power b was observed before the profound increase in the total hemoglobin concentration, which is indicative of reversible morphological changes in brain tissue during CSD. The results of this study indicate the potential of the method to evaluate pathophysiological conditions in brain tissue with a digital red-green-blue camera.
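The RGB-to-XYZ conversion step can be illustrated with the standard sRGB (D65) matrix. The paper's own transform is calibrated to its camera, so this block is only a generic sketch of the color-space conversion:

```python
import numpy as np

# Standard linear-sRGB -> CIEXYZ matrix (D65 white point)
RGB2XYZ = np.array([[0.4124, 0.3576, 0.1805],
                    [0.2126, 0.7152, 0.0722],
                    [0.0193, 0.1192, 0.9505]])

def srgb_to_xyz(rgb):
    """Convert sRGB values in [0, 1] to tristimulus XYZ values."""
    rgb = np.asarray(rgb, dtype=np.float64)
    # Undo the sRGB gamma (linearize) before the matrix transform
    linear = np.where(rgb <= 0.04045, rgb / 12.92,
                      ((rgb + 0.055) / 1.055) ** 2.4)
    return linear @ RGB2XYZ.T
```

sRGB white (1, 1, 1) maps to approximately (0.9505, 1.0, 1.089), the D65 white point with Y normalized to 1.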
Magiati, Maria; Sevastou, Areti; Kalogianni, Despina P
2018-06-04
A fluorometric lateral flow assay has been developed for the detection of nucleic acids. The fluorophores phycoerythrin (PE) and fluorescein isothiocyanate (FITC) are used as labels, while a common digital camera and a colored vinyl sheet, acting as a cut-off optical filter, are used for fluorescence imaging. After DNA amplification by polymerase chain reaction (PCR), the biotinylated PCR product is hybridized to its complementary probe, which carries a poly(dA) tail at its 3′ end, and then applied to the lateral flow strip. The hybrids are captured at the test zone of the strip by immobilized poly(dT) sequences and detected by streptavidin-fluorescein and streptavidin-phycoerythrin conjugates through the streptavidin-biotin interaction. The assay is widely applicable, simple, cost-effective, and offers a large multiplexing potential. Its performance is comparable to assays based on streptavidin-gold nanoparticle conjugates. As little as 7.8 fmol of a ssDNA and 12.5 fmol of an amplified dsDNA target were detectable. Graphical abstract: Schematic presentation of a fluorometric lateral flow assay based on fluorescein and phycoerythrin fluorescent labels for the detection of single-stranded (ssDNA) and double-stranded DNA (dsDNA) sequences, using a digital camera readout. SA: streptavidin, BSA: bovine serum albumin, B: biotin, FITC: fluorescein isothiocyanate, PE: phycoerythrin, TZ: test zone, CZ: control zone.
Seeing Earth Through the Eyes of an Astronaut
NASA Technical Reports Server (NTRS)
Dawson, Melissa
2014-01-01
The Human Exploration Science Office within the ARES Directorate has undertaken a new class of handheld camera photographic observations of the Earth as seen from the International Space Station (ISS). For years, astronauts have attempted to describe their experience in space and how they see the Earth roll by below their spacecraft. Thousands of crew photographs have documented natural features as diverse as the dramatic clay colors of the African coastline, the deep blues of the Earth's oceans, or the swirling Aurora Australis over Australia in the upper atmosphere. Dramatic recent improvements in handheld digital single-lens reflex (DSLR) camera capabilities are now allowing a new field of crew photography: night time-lapse imagery.
Color filter array design based on a human visual model
NASA Astrophysics Data System (ADS)
Parmar, Manu; Reeves, Stanley J.
2004-05-01
To reduce cost and complexity associated with registering multiple color sensors, most consumer digital color cameras employ a single sensor. A mosaic of color filters is overlaid on a sensor array such that only one color channel is sampled per pixel location. The missing color values must be reconstructed from available data before the image is displayed. The quality of the reconstructed image depends fundamentally on the array pattern and the reconstruction technique. We present a design method for color filter array patterns that use red, green, and blue color channels in an RGB array. A model of the human visual response for luminance and opponent chrominance channels is used to characterize the perceptual error between a fully sampled and a reconstructed sparsely-sampled image. Demosaicking is accomplished using Wiener reconstruction. To ensure that the error criterion reflects perceptual effects, reconstruction is done in a perceptually uniform color space. A sequential backward selection algorithm is used to optimize the error criterion to obtain the sampling arrangement. Two different types of array patterns are designed: non-periodic and periodic arrays. The resulting array patterns outperform commonly used color filter arrays in terms of the error criterion.
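Reconstruction of the missing color values can be illustrated with a minimal demosaicking sketch. The paper above uses Wiener reconstruction in a perceptually uniform space; plain bilinear averaging over an arbitrary periodic pattern, as below, is only a simplified stand-in that shows how a full-color image is recovered from one sample per pixel:

```python
import numpy as np

def convolve2d_same(a, k):
    """3x3 'same' convolution with zero padding (numpy-only helper)."""
    p = np.pad(a, 1)
    out = np.zeros_like(a, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out

def demosaic_bilinear(raw, pattern):
    """Reconstruct an (h, w, 3) image from CFA-sampled data 'raw'.
    'pattern' is a small periodic array mapping pixel -> channel index
    (0=R, 1=G, 2=B); missing values are filled with the average of the
    available same-channel neighbors in each 3x3 window."""
    h, w = raw.shape
    ph, pw = pattern.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    chan = pattern[yy % ph, xx % pw]
    out = np.zeros((h, w, 3))
    k = np.ones((3, 3))
    for c in range(3):
        mask = (chan == c)
        num = convolve2d_same(raw * mask, k)
        den = convolve2d_same(mask.astype(float), k)
        out[..., c] = np.where(den > 0, num / np.maximum(den, 1e-12), 0)
        out[..., c][mask] = raw[mask]  # keep the measured samples exactly
    return out
```

The same interface works for the non-periodic patterns the paper designs, by passing a pattern array the size of the sensor.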
WhitebalPR: automatic white balance by polarized reflections
NASA Astrophysics Data System (ADS)
Fischer, Gregor; Kolbe, Karin; Sajjaa, Matthias
2008-02-01
This new color constancy method is based on the degree of polarization of the light reflected at the surface of an object. Subtracting at least two images taken under different polarization directions detects the polarization degree of the neutrally reflected portions and eliminates the remitted, non-polarized colored portions. Two experiments were designed to clarify the performance of the procedure, one applied to multicolored objects and another to objects of different surface characteristics. The results show that the mechanism of eliminating the remitted, non-polarized colored portions of light works very well. Independent of their color, different color pigments seem suitable for measuring the color of the illumination. The intensity and also the polarization degree of the reflected light depend significantly on the surface properties. The results exhibit a high accuracy of measuring the color of the illumination for glossy and matt surfaces. Only strongly scattering surfaces account for a weak signal level in the difference image and a reduced accuracy. An embodiment is proposed to integrate the new method into digital cameras.
High Resolution Airborne Digital Imagery for Precision Agriculture
NASA Technical Reports Server (NTRS)
Herwitz, Stanley R.
1998-01-01
The Environmental Research Aircraft and Sensor Technology (ERAST) program is a NASA initiative that seeks to demonstrate the application of cost-effective aircraft and sensor technology to private commercial ventures. In 1997-98, a series of flight demonstrations and image acquisition efforts were conducted over the Hawaiian Islands using a remotely piloted solar-powered platform (Pathfinder) and a fixed-wing piloted aircraft (Navajo) equipped with a Kodak DCS450 CIR (color infrared) digital camera. As an ERAST Science Team Member, I defined a set of flight lines over the largest coffee plantation in Hawaii: the Kauai Coffee Company's 4,000-acre Koloa Estate. Past studies have demonstrated the applications of airborne digital imaging to agricultural management. Few studies have examined the usefulness of high-resolution airborne multispectral imagery with 10 cm pixel sizes. The Kodak digital camera was integrated with ERAST's Airborne Real Time Imaging System (ARTIS), which generated multiband CCD images consisting of 6 × 10^6 pixel elements. At the designated flight altitude of 1,000 feet over the coffee plantation, pixel size was 10 cm. The study involved the analysis of imagery acquired on 5 March 1998 for the detection of anomalous reflectance values and for the definition of spectral signatures as indicators of tree vigor and treatment effectiveness (e.g., drip irrigation; fertilizer application).
Novel computer-based endoscopic camera
NASA Astrophysics Data System (ADS)
Rabinovitz, R.; Hai, N.; Abraham, Martin D.; Adler, Doron; Nissani, M.; Fridental, Ron; Vitsnudel, Ilia
1995-05-01
We have introduced a computer-based endoscopic camera which includes (a) unique real-time digital image processing to optimize image visualization by reducing overexposed glared areas and brightening dark areas, and by accentuating sharpness and fine structures, and (b) patient data documentation and management. The image processing is based on i Sight's iSP1000™ digital video processor chip and patented Adaptive Sensitivity™ scheme for capturing and displaying images with a wide dynamic range of light, taking into account local neighborhood image conditions and global image statistics. It provides the medical user with the ability to view images under difficult lighting conditions, without losing details 'in the dark' or in completely saturated areas. The patient data documentation and management allows storage of images (approximately 1 MB per image for a full 24-bit color image) to any storage device installed in the camera, or to an external host media via network. The patient data included with every image describes essential information on the patient and procedure. The operator can assign custom data descriptors, and can search for the stored image/data by typing any image descriptor. The camera optics has an extended zoom range of f = 20-45 mm, allowing control of the diameter of the field displayed on the monitor such that the complete field of view of the endoscope can be displayed on the full area of the screen. All these features provide a versatile endoscopic camera with excellent image quality and documentation capabilities.
NASA Astrophysics Data System (ADS)
Kataoka, R.; Miyoshi, Y.; Shigematsu, K.; Hampton, D.; Mori, Y.; Kubo, T.; Yamashita, A.; Tanaka, M.; Takahei, T.; Nakai, T.; Miyahara, H.; Shiokawa, K.
2013-09-01
A new stereoscopic measurement technique is developed to obtain an all-sky altitude map of aurora using two ground-based digital single-lens reflex (DSLR) cameras. Two identical full-color all-sky cameras were set up with an 8 km separation across the Chatanika area in Alaska (Poker Flat Research Range and Aurora Borealis Lodge) to find the localized emission height from the maximum correlation of the apparent patterns in localized pixels, applying a geographical coordinate transform. It is found that a typical ray structure of discrete aurora shows a broad altitude distribution above 100 km, while a typical patchy structure of pulsating aurora shows a narrow altitude distribution of less than 100 km. Because of the portability and low cost of the DSLR camera systems, the new technique may open a unique opportunity not only for scientists but also for night-sky photographers to contribute to aurora science, potentially forming a dense observation network.
Semantic classification of business images
NASA Astrophysics Data System (ADS)
Erol, Berna; Hull, Jonathan J.
2006-01-01
Digital cameras are becoming increasingly common for capturing information in business settings. In this paper, we describe a novel method for classifying images into the following semantic classes: document, whiteboard, business card, slide, and regular images. Our method is based on combining low-level image features, such as text color, layout, and handwriting features, with high-level OCR output analysis. Several support vector machine classifiers are combined for multi-class classification of input images. The system yields 95% accuracy in classification.
Selection of optimal spectral sensitivity functions for color filter arrays.
Parmar, Manu; Reeves, Stanley J
2010-12-01
A color image meant for human consumption can be appropriately displayed only if at least three distinct color channels are present. Typical digital cameras acquire three-color images with only one sensor. A color filter array (CFA) is placed on the sensor such that only one color is sampled at a particular spatial location. This sparsely sampled signal is then reconstructed to form a color image with information about all three colors at each location. In this paper, we show that the wavelength sensitivity functions of the CFA color filters affect both the color reproduction ability and the spatial reconstruction quality of recovered images. We present a method to select perceptually optimal color filter sensitivity functions based upon a unified spatial-chromatic sampling framework. A cost function independent of particular scenes is defined that expresses the error between a scene viewed by the human visual system and the reconstructed image that represents the scene. A constrained minimization of the cost function is used to obtain optimal values of color-filter sensitivity functions for several periodic CFAs. The sensitivity functions are shown to perform better than typical RGB and CMY color filters in terms of both the s-CIELAB ∆E error metric and a qualitative assessment.
Accurate shade image matching by using a smartphone camera.
Tam, Weng-Kong; Lee, Hsi-Jian
2017-04-01
Dental shade matching by using digital images may be feasible when suitable color features are properly manipulated. Separating the color features into feature spaces facilitates favorable matching. We propose using support vector machines (SVM), which are outstanding classifiers, for shade classification. A total of 1300 shade tab images were captured using a smartphone camera with auto-mode settings and no flash. The images were shot at angled distances of 14-20 cm from a shade guide at a clinic equipped with light tubes that produced a 4000 K color temperature. The Group 1 samples comprised 1040 tab images, for which the shade guide was randomly positioned in the clinic, and the Group 2 samples comprised 260 tab images, for which the shade guide had a fixed position in the clinic. Rectangular content was cropped manually on each shade tab image and further divided into 10×2 blocks. The color features extracted from the blocks were described using a feature vector. The feature vectors in each group underwent SVM training and classification using the "leave-one-out" strategy. The top-one and top-three accuracies of Group 1 were 0.86 and 0.98, respectively, and those of Group 2 were 0.97 and 1.00, respectively. This study provides a feasible technique for dental shade classification that uses the camera of a mobile device. The findings reveal that the proposed SVM classification might outperform the shade-matching results of previous studies that have performed similarity measurements of ΔE levels or used an S, a*, b* feature set. Copyright © 2016 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.
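The leave-one-out protocol used above can be sketched as follows. To keep the block dependency-free, a nearest-centroid classifier stands in for the paper's SVM; only the evaluation protocol is the point here:

```python
import numpy as np

def leave_one_out_accuracy(features, labels):
    """Leave-one-out evaluation: each sample is held out in turn, a
    classifier is fit on the rest, and the held-out sample is scored.
    The classifier here is a simple nearest-class-centroid rule, a
    stand-in for the study's SVM."""
    X = np.asarray(features, dtype=float)
    y = np.asarray(labels)
    correct = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        Xtr, ytr = X[mask], y[mask]
        classes = np.unique(ytr)
        cents = np.array([Xtr[ytr == c].mean(axis=0) for c in classes])
        pred = classes[np.argmin(np.linalg.norm(cents - X[i], axis=1))]
        correct += (pred == y[i])
    return correct / len(X)
```

Swapping the centroid rule for an SVM (e.g. scikit-learn's SVC) changes only the fit-and-predict lines inside the loop.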
Illumination estimation via thin-plate spline interpolation.
Shi, Lilong; Xiong, Weihua; Funt, Brian
2011-05-01
Thin-plate spline interpolation is used to interpolate the chromaticity of the incident scene illumination across a training set of images. Given the image of a scene under unknown illumination, the chromaticity of the scene illumination can be found from the interpolated function. The resulting illumination-estimation method can be used to provide color constancy under changing illumination conditions and automatic white balancing for digital cameras. A thin-plate spline interpolates over a nonuniformly sampled input space, which in this case is a training set of image thumbnails and associated illumination chromaticities. To reduce the size of the training set, incremental k-medians clustering is applied. Tests on real images demonstrate that the thin-plate spline method can estimate the color of the incident illumination quite accurately, and the proposed training set pruning significantly decreases the computation.
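A thin-plate spline interpolator of the kind described can be built directly from its definition: a radial basis φ(r) = r² log r plus an affine term, with coefficients obtained from one linear solve. This sketch interpolates scalar values over 2-D inputs, whereas the paper's input space is image thumbnails; the machinery is the same per output chromaticity component.

```python
import numpy as np

def tps_fit(points, values):
    """Fit f(x) = a0 + a1*x + a2*y + sum_i w_i * phi(|x - p_i|),
    phi(r) = r^2 log r, to scattered 2-D data (exact interpolation)."""
    P = np.asarray(points, dtype=float)
    v = np.asarray(values, dtype=float)
    n = len(P)
    d = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(d > 0, d**2 * np.log(d), 0.0)
    Q = np.hstack([np.ones((n, 1)), P])  # affine part [1, x, y]
    # Standard TPS system: [K Q; Q^T 0] [w; a] = [v; 0]
    A = np.block([[K, Q], [Q.T, np.zeros((3, 3))]])
    b = np.concatenate([v, np.zeros(3)])
    coef = np.linalg.solve(A, b)
    return P, coef

def tps_eval(model, x):
    """Evaluate the fitted spline at a 2-D point x."""
    P, coef = model
    x = np.asarray(x, dtype=float)
    d = np.linalg.norm(P - x, axis=1)
    with np.errstate(divide="ignore", invalid="ignore"):
        phi = np.where(d > 0, d**2 * np.log(d), 0.0)
    return phi @ coef[:len(P)] + coef[len(P)] + coef[len(P) + 1:] @ x
```

By construction the spline passes exactly through every training point, which is why pruning the training set (as the paper does with incremental k-medians) directly controls the evaluation cost.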
Computer-aided analysis for the Mechanics of Granular Materials (MGM) experiment, part 2
NASA Technical Reports Server (NTRS)
Parker, Joey K.
1987-01-01
Computer vision based analysis for the MGM experiment is continued and expanded into new areas. Volumetric strains of granular material triaxial test specimens have been measured from digitized images. A computer-assisted procedure is used to identify the edges of the specimen, and the edges are used in a 3-D model to estimate specimen volume. The results of this technique compare favorably to conventional measurements. A simplified model of the magnification caused by refraction of light within the water of the test apparatus was also developed. This model yields good results when the distance between the camera and the test specimen is large compared to the specimen height. An algorithm for a more accurate 3-D magnification correction is also presented. The use of composite and RGB (red-green-blue) color cameras is discussed, and potentially significant benefits from using an RGB camera are presented.
Pancam: A Multispectral Imaging Investigation on the NASA 2003 Mars Exploration Rover Mission
NASA Technical Reports Server (NTRS)
Bell, J. F., III; Squyres, S. W.; Herkenhoff, K. E.; Maki, J.; Schwochert, M.; Dingizian, A.; Brown, D.; Morris, R. V.; Arneson, H. M.; Johnson, M. J.
2003-01-01
One of the six science payload elements carried on each of the NASA Mars Exploration Rovers (MER; Figure 1) is the Panoramic Camera System, or Pancam. Pancam consists of three major components: a pair of digital CCD cameras, the Pancam Mast Assembly (PMA), and a radiometric calibration target. The PMA provides the azimuth and elevation actuation for the cameras as well as a 1.5 meter high vantage point from which to image. The calibration target provides a set of reference color and grayscale standards for calibration validation, and a shadow post for quantification of the direct vs. diffuse illumination of the scene. Pancam is a multispectral, stereoscopic, panoramic imaging system, with a field of regard provided by the PMA that extends across 360° of azimuth and from zenith to nadir, providing a complete view of the scene around the rover in up to 12 unique wavelengths. The major characteristics of Pancam are summarized.
Spectral imaging using consumer-level devices and kernel-based regression.
Heikkinen, Ville; Cámara, Clara; Hirvonen, Tapani; Penttinen, Niko
2016-06-01
Hyperspectral reflectance factor image estimations were performed in the 400-700 nm wavelength range using a portable consumer-level laptop display as an adjustable light source for a trichromatic camera. Targets of interest were ColorChecker Classic samples, Munsell Matte samples, geometrically challenging tempera icon paintings from the turn of the 20th century, and human hands. Measurements and simulations were performed using a Nikon D80 RGB camera and a Dell Vostro 2520 laptop screen as a light source. Estimations were performed without using the spectral characteristics of the devices, emphasizing simplicity of the training sets and of the estimation model optimization. Spectral and color error images are shown for the estimations, using line-scanned hyperspectral images as the ground truth. Estimations were performed using kernel-based regression models via a first-degree inhomogeneous polynomial kernel and a Matérn kernel, where in the latter case the median heuristic approach for model optimization and a link function for bounded estimation were evaluated. Results suggest modest requirements for a training set and show that all estimation models markedly improve accuracy with respect to the ΔE00 color distance (up to 99% for paintings and hands) and the Pearson distance (up to 98% for paintings and 99% for hands) relative to a weak training set (Digital ColorChecker SG) when small representative training data were used in the estimation.
Otto, Kristen J; Hapner, Edie R; Baker, Michael; Johns, Michael M
2006-02-01
Advances in commercial video technology have improved office-based laryngeal imaging. This study investigates the perceived image quality of a true high-definition (HD) video camera and the effect of magnification on laryngeal videostroboscopy. We performed a prospective, dual-armed, single-blinded analysis of a standard laryngeal videostroboscopic examination comparing 3 separate add-on camera systems: a 1-chip charge-coupled device (CCD) camera, a 3-chip CCD camera, and a true 720p (progressive scan) HD camera. Displayed images were controlled for magnification and image size (20-inch [50-cm] display, red-green-blue and S-video cable for the 1-chip and 3-chip cameras; digital visual interface cable and HD monitor for the HD camera). Ten blinded observers were then asked to rate the following 5 items on a 0-to-100 visual analog scale: resolution, color, ability to see vocal fold vibration, sense of depth perception, and clarity of blood vessels. Eight unblinded observers were then asked to rate, on a visual analog scale, the difference in perceived resolution and clarity of laryngeal examination images when displayed on a 10-inch (25-cm) monitor versus a 42-inch (105-cm) monitor. These monitors were controlled for actual resolution capacity. For each item evaluated, randomized block design analysis demonstrated that the 3-chip camera scored significantly better than the 1-chip camera (p < .05). For the categories of color and blood vessel discrimination, the 3-chip camera scored significantly better than the HD camera (p < .05). For magnification alone, observers rated the 42-inch monitor significantly better than the 10-inch monitor. The expense of new medical technology must be judged against its added value. This study suggests that HD laryngeal imaging may not add significant value over currently available video systems, in perceived image quality, when a small monitor is used.
Although differences in clarity between standard and HD cameras may not be readily apparent on small displays, a large display size coupled with HD technology may impart improved diagnosis of subtle vocal fold lesions and vibratory anomalies.
Zhang, Dongyan; Zhou, Xingen; Zhang, Jian; Lan, Yubin; Xu, Chao; Liang, Dong
2018-01-01
Detection and monitoring are the first essential steps for effective management of sheath blight (ShB), a major disease of rice worldwide. Unmanned aerial systems have high potential for improving this detection process, since they can reduce the time needed to scout for the disease at field scale and are affordable and user-friendly in operation. In this study, a commercial quadrotor unmanned aerial vehicle (UAV), equipped with digital and multispectral cameras, was used to capture imagery of research plots with 67 rice cultivars and elite lines. The collected imagery was then processed and analyzed to characterize the development of ShB and quantify different levels of the disease in the field. Through color feature extraction and color space transformation of the images, it was found that the color transformation could qualitatively detect the ShB-infected areas in the field plots, but was less effective at distinguishing different levels of the disease. Five vegetation indices were then calculated from the multispectral images, and ground truth of disease severity and GreenSeeker-measured NDVI (Normalized Difference Vegetation Index) were collected. Relationship analyses indicate a strong correlation between ground-measured NDVIs and image-extracted NDVIs, with an R2 of 0.907 and a root mean square error (RMSE) of 0.0854, and a good correlation between image-extracted NDVIs and disease severity, with an R2 of 0.627 and an RMSE of 0.0852. Image-based NDVIs extracted from the multispectral images could quantify different levels of ShB in the field plots with an accuracy of 63%. These results demonstrate that a consumer-grade UAV integrated with digital and multispectral cameras can be an effective tool for detecting ShB at field scale.
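As a sketch of the index at the core of this workflow: NDVI is computed per pixel (or per plot mean) from the near-infrared and red bands of the multispectral camera. The reflectance values below are hypothetical, for illustration only:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index, computed element-wise."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + eps)

# Hypothetical plot-mean reflectances: healthy canopy first, diseased last
nir_band = np.array([0.45, 0.40, 0.30, 0.22])
red_band = np.array([0.05, 0.08, 0.12, 0.15])
print(ndvi(nir_band, red_band))  # decreasing NDVI with increasing severity
```

Correlating such image-extracted NDVIs against disease severity scores is then an ordinary regression problem.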
Image quality analysis of a color LCD as well as a monochrome LCD using a Foveon color CMOS camera
NASA Astrophysics Data System (ADS)
Dallas, William J.; Roehrig, Hans; Krupinski, Elizabeth A.
2007-09-01
We have combined a CMOS color camera with special software to compose a multi-functional image-quality analysis instrument. It functions as a colorimeter and also measures modulation transfer functions (MTF) and noise power spectra (NPS). It is presently being expanded to examine fixed-pattern noise and temporal noise. The CMOS camera has 9 μm square pixels and a pixel matrix of 2268 x 1512 x 3. The camera uses a sensor that has co-located pixels for all three primary colors. We have imaged sections of both a color and a monochrome LCD monitor onto the camera sensor with LCD-pixel-size to camera-pixel-size ratios of both 12:1 and 17.6:1. When used as an imaging colorimeter, each camera pixel is calibrated to provide CIE color coordinates and tristimulus values. This capability permits the camera to simultaneously determine chromaticity at different locations on the LCD display. After color calibration with a CS-200 colorimeter, the color coordinates of the display's primaries determined from the camera's luminance response are very close to those found with the CS-200; only the color coordinates of the display's white point were in error. For calculating the MTF, a vertical or horizontal line is displayed on the monitor. The captured image is color-matrix preprocessed, Fourier transformed, and then post-processed. For the NPS, a uniform image is displayed on the monitor and the image is likewise pre-processed, transformed, and processed. Our measurements show that the horizontal MTFs of both displays fall off more steeply than the vertical MTFs, indicating that the horizontal MTFs are poorer. However, the modulations at the Nyquist frequency seem lower for the color LCD than for the monochrome LCD. The spatial noise of the color display in both directions is larger than that of the monochrome display.
Attempts were also made to separate the total noise into spatial and temporal components by subtracting images taken at exactly the same exposure. Temporal noise appears to be significantly lower than spatial noise.
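The MTF computation described above (display a line, Fourier-transform the captured profile) can be sketched as follows; the Gaussian line-spread profile is a stand-in for a real captured line image, not the authors' data:

```python
import numpy as np

def mtf_from_line(profile):
    """MTF as the normalized magnitude of the Fourier transform of a
    background-subtracted line-spread profile."""
    lsf = profile - profile.min()
    spectrum = np.abs(np.fft.rfft(lsf))
    return spectrum / spectrum[0]  # normalize so MTF(0) = 1

# Hypothetical Gaussian line-spread function sampled across 64 pixels
x = np.arange(64)
lsf = np.exp(-0.5 * ((x - 32) / 2.0) ** 2)
mtf = mtf_from_line(lsf)
print(mtf[0])  # 1.0 at zero frequency
```

Comparing the fall-off of horizontal versus vertical profiles computed this way reproduces the kind of MTF comparison reported above.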
Wu, Jing; Dong, Mingling; Zhang, Cheng; Wang, Yu; Xie, Mengxia; Chen, Yiping
2017-06-05
A magnetic lateral flow strip (MLFS) based on magnetic beads (MB) and a smartphone camera has been developed for quantitative detection of cocaine (CC) in urine samples. CC and CC-bovine serum albumin (CC-BSA) compete to react with the MB-antibody (MB-Ab) of CC on the surface of the test line of the MLFS. The color of the MB-Ab conjugate on the test line relates to the concentration of target in the competitive immunoassay format and can be used as a visual signal. Furthermore, the color density of the MB-Ab conjugate can be converted into a digital signal (gray value) by a smartphone, which can be used as a quantitative signal. The linear detection range for CC is 5-500 ng/mL and the relative standard deviations are under 10%. The visual limit of detection was 5 ng/mL and the whole analysis took less than 10 min. The MLFS has been successfully employed for the detection of CC in urine samples without sample pre-treatment, and the results also agree with those of an enzyme-linked immunosorbent assay (ELISA). With the popularization of smartphone cameras, the MLFS has large potential for the detection of drug residues by virtue of its stability, speed, and low cost.
Using the iPhone as a device for a rapid quantitative analysis of trinitrotoluene in soil.
Choodum, Aree; Kanatharana, Proespichaya; Wongniramaikul, Worawit; Daeid, Niamh Nic
2013-10-15
Mobile 'smart' phones have become almost ubiquitous in society and are typically equipped with a high-resolution digital camera which can be used to produce an image very conveniently. In this study, the built-in digital camera of a smart phone (iPhone) was used to capture the results of a rapid quantitative colorimetric test for trinitrotoluene (TNT) in soil, and the results were compared to those from a digital single-lens reflex (DSLR) camera. The colored product of the selective test for TNT was quantified using an innovative application of photography in which the relationships between the Red Green Blue (RGB) values and the concentrations of the colorimetric product were exploited. The iPhone proved more convenient to use than the DSLR while providing similar analytical results with increased sensitivity. The wide linear range and low detection limits achieved were comparable with those of spectrophotometric quantification methods. Low relative errors of 0.4 to 6.3% were achieved in the analysis of control samples, and of 0.4-6.2% for spiked soil extracts, with good precision (2.09-7.43% RSD) over 4 days of analysis. The results demonstrate that the iPhone has the potential to be an ideal platform for the development of a rapid, on-site, semi-quantitative field test for explosives. © 2013 Elsevier B.V. All rights reserved.
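The RGB-to-concentration calibration described above amounts to fitting a regression between a color-derived signal and known standards. The sketch below uses invented calibration points and a green-channel absorbance-like signal; the actual channel and transformation used in the paper may differ:

```python
import numpy as np

# Hypothetical calibration: signal derived from the green channel vs. TNT level
conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0])      # concentration units assumed
signal = np.array([0.00, 0.11, 0.21, 0.43, 0.84])  # e.g. -log10(G / G_blank)

slope, intercept = np.polyfit(conc, signal, 1)
pred = np.polyval([slope, intercept], conc)
r2 = 1 - np.sum((signal - pred) ** 2) / np.sum((signal - signal.mean()) ** 2)
print(slope, r2)
```

A well-behaved linear range (high r2) is what lets RGB readings substitute for a spectrophotometer.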
NASA Astrophysics Data System (ADS)
Tolle, F.; Friedt, J. M.; Bernard, É.; Prokop, A.; Griselin, M.
2014-12-01
A Digital Elevation Model (DEM) is a key tool for analyzing spatially dependent processes such as snow accumulation on slopes or glacier mass balance. Acquiring DEMs at short time intervals provides new opportunities to evaluate such phenomena at daily to seasonal rates. DEMs are usually generated from satellite imagery, aerial photography, airborne and ground-based LiDAR, and GPS surveys. In addition to these classical methods, we consider another alternative for periodic DEM acquisition with lower logistics requirements: digital processing of ground-based, oblique-view digital photography. Such a dataset, acquired using commercial off-the-shelf cameras, provides the source for generating elevation models using Structure from Motion (SfM) algorithms. Sets of pictures of the same structure, taken from various points of view, are acquired. Selected features are identified across the images and allow reconstruction of a three-dimensional (3D) point cloud after computing the camera positions and optical properties. This point cloud, generated in an arbitrary coordinate system, is converted to an absolute coordinate system either by adding Ground Control Point (GCP) constraints or by including the GPS positions of the cameras in the processing chain. We selected Micmac, the open-source digital signal processing library provided by the French Geographic Institute (IGN), for its fine processing granularity and the ability to assess the quality of each processing step. Although operating in snow-covered environments appears challenging due to the lack of relevant features, we observed that enough reference points could be identified for 3D reconstruction. While the harsh climatic environment of the Arctic region considered (Ny Ålesund area, 79°N) is not a problem for SfM, the low-lying spring sun and the cast shadows are a limitation because of the lack of color dynamics in the digital cameras we used.
A detailed understanding of the processing steps is mandatory during the image acquisition phase: compliance with acquisition rules that reduce digital processing errors helps minimize the uncertainty in the point cloud's absolute position in its coordinate system. 3D models from SfM are compared with terrestrial LiDAR acquisitions for resolution assessment.
Efficient color correction method for smartphone camera-based health monitoring application.
Duc Dang; Chae Ho Cho; Daeik Kim; Oh Seok Kwon; Jo Woon Chong
2017-07-01
Smartphone health monitoring applications have recently been highlighted due to the rapid development of smartphone hardware and software performance. However, the color characteristics of images captured by different smartphone models are dissimilar to each other, and this difference may give non-identical health monitoring results when the applications monitor physiological information using their embedded smartphone cameras. In this paper, we investigate the differences in color properties of images captured by different smartphone models and apply a color correction method to adjust the dissimilar color values obtained from different smartphone cameras. Experimental results show that color-corrected images have much smaller color intensity errors than uncorrected images. These results can be applied to enhance the consistency of smartphone camera-based health monitoring applications by reducing color intensity errors among images obtained from different smartphones.
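One common form of such a color correction is a least-squares 3x3 matrix mapping one camera's RGB values onto a reference camera's. Whether the authors used exactly this model is not stated, so treat this as an illustrative sketch; the patch data are synthetic, with the reference differing by a known per-channel gain so the fit is exactly recoverable:

```python
import numpy as np

def fit_color_matrix(src, ref):
    """Least-squares 3x3 matrix M such that src @ M.T approximates ref.
    src, ref: N x 3 arrays of corresponding RGB patch measurements."""
    M, *_ = np.linalg.lstsq(src, ref, rcond=None)
    return M.T

# Hypothetical patch readings from phone A
src = np.array([[110.,  95., 100.],
                [210., 190., 195.],
                [ 55.,  45.,  50.],
                [160., 150., 148.]])
# Hypothetical reference phone B: differs by a per-channel gain
ref = src * np.array([0.9, 1.0, 1.05])

M = fit_color_matrix(src, ref)
corrected = src @ M.T
print(np.allclose(corrected, ref))  # True: the gain model is recovered exactly
```

With real patches (e.g. a color checker photographed by both phones) the fit is approximate rather than exact, and the residual measures the remaining inter-device error.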
Riccardi, M; Mele, G; Pulvento, C; Lavini, A; d'Andria, R; Jacobsen, S-E
2014-06-01
Leaf chlorophyll content provides valuable information about the physiological status of plants; it is directly linked to photosynthetic potential and primary production. In vitro assessment by wet chemical extraction is the standard method for leaf chlorophyll determination. This measurement is expensive, laborious, and time consuming. Over the years, rapid and non-destructive alternative methods have been explored. The aim of this work was to evaluate the applicability of a fast, non-invasive field method for estimating chlorophyll content in quinoa and amaranth leaves based on RGB component analysis of digital images acquired with a standard SLR camera. Digital images of leaves from different genotypes of quinoa and amaranth were acquired directly in the field. Mean values of each RGB component were evaluated with image analysis software and correlated to leaf chlorophyll determined by the standard laboratory procedure. Single and multiple regression models using the RGB color components as independent variables were tested and validated. The performance of the proposed method was compared to that of the widely used non-destructive SPAD method. Sensitivity of the best regression models for different genotypes of quinoa and amaranth was also checked. Color data acquisition of the leaves in the field with a digital camera was quicker, more effective, and lower in cost than SPAD. The proposed RGB models provided better correlation (highest R²) and prediction (lowest RMSEP) of the true foliar chlorophyll content, and had less noise over the whole range of chlorophyll studied, than SPAD and other leaf-image-processing-based models when applied to quinoa and amaranth.
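A multiple-regression model of the kind tested here can be sketched with numpy alone; the RGB means and chlorophyll values below are invented for illustration and are not the authors' data:

```python
import numpy as np

# Hypothetical calibration set: mean leaf RGB vs. lab-measured chlorophyll
rgb = np.array([[ 60., 110., 40.],
                [ 70., 125., 45.],
                [ 90., 140., 55.],
                [110., 160., 70.],
                [130., 175., 85.]])
chl = np.array([480., 430., 360., 280., 210.])  # e.g. mg/m^2, assumed units

# Multiple linear regression: chl ~ a*R + b*G + c*B + d
X = np.column_stack([rgb, np.ones(len(rgb))])
coef, *_ = np.linalg.lstsq(X, chl, rcond=None)
rmsep = np.sqrt(np.mean((chl - X @ coef) ** 2))
print(rmsep)
```

In practice the model would be fit on a calibration subset and RMSEP computed on held-out leaves, which is how the comparison against SPAD is made.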
Joint Calibration of 3d Laser Scanner and Digital Camera Based on Dlt Algorithm
NASA Astrophysics Data System (ADS)
Gao, X.; Li, M.; Xing, L.; Liu, Y.
2018-04-01
We designed a calibration target that can be scanned by a 3D laser scanner while being photographed by a digital camera, yielding a point cloud and photos of the same target. A method to jointly calibrate the 3D laser scanner and digital camera based on the Direct Linear Transformation (DLT) algorithm is proposed. This method adds a digital camera distortion model to the traditional DLT algorithm; after repeated iteration, it solves for the interior and exterior orientation elements of the camera as well as the joint calibration of the 3D laser scanner and digital camera. Experiments show that this method is reliable.
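The core of the classic DLT step (before the distortion model the paper adds on top) can be sketched as a homogeneous least-squares problem solved by SVD. The camera matrix and target points below are synthetic, chosen only so the recovery can be checked exactly:

```python
import numpy as np

def dlt(world, image):
    """Estimate the 3x4 projection matrix from >= 6 non-coplanar 3D-2D
    correspondences (classic DLT, no distortion term)."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world, image):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.array(rows))
    return Vt[-1].reshape(3, 4)  # null-space vector = projection up to scale

def project(P, pts):
    h = np.c_[pts, np.ones(len(pts))] @ P.T
    return h[:, :2] / h[:, 2:3]

# Synthetic camera and non-coplanar target points (assumed, for illustration)
P_true = np.array([[800., 0., 320., 10.],
                   [0., 800., 240., 20.],
                   [0.,   0.,   1.,  5.]])
pts = np.array([[0,0,1],[1,0,1],[0,1,1],[1,1,2],[2,1,1],[1,2,2],[2,2,3]], float)
uv = project(P_true, pts)
P_est = dlt(pts, uv)
print(np.abs(project(P_est, pts) - uv).max())  # ~0: exact synthetic data
```

The paper's iteration alternates between this linear solve and re-estimating the lens distortion coefficients, which the sketch omits.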
Presence capture cameras - a new challenge to the image quality
NASA Astrophysics Data System (ADS)
Peltoketo, Veli-Tapani
2016-04-01
Commercial presence capture cameras are coming to market, and a new era of visual entertainment is starting to take shape. Since true presence capturing is still a very new technology, the technical solutions have only just passed the prototyping phase and vary a lot. Presence capture cameras still face the same quality issues as previous generations of digital imaging, but also numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system that can record 3D audio-visual reality as it is has to have several camera modules, several microphones, and, especially, technology that can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features are still valid in presence capture cameras. Features like color fidelity, noise removal, resolution, and dynamic range form the basis of virtual reality stream quality. However, the cooperation of several cameras brings a new dimension to these quality factors, and new quality features must also be validated. For example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how should the stitching be validated? This work describes the quality factors that remain valid for presence capture cameras and assesses their importance. Moreover, the new challenges of presence capture cameras are investigated from an image and video quality point of view, including consideration of how well current measurement methods can be applied to presence capture cameras.
Grayscale imbalance correction in real-time phase measuring profilometry
NASA Astrophysics Data System (ADS)
Zhu, Lin; Cao, Yiping; He, Dawu; Chen, Cheng
2016-10-01
Grayscale imbalance correction in real-time phase measuring profilometry (RPMP) is proposed. In RPMP, sufficient information to reconstruct the 3D shape of the measured object is obtained in 1/24 s. A single color fringe pattern, whose R, G and B channels are coded as three sinusoidal phase-shifting gratings with an equivalent phase shift of 2π/3, is sent to flash memory on a specialized digital light projector (SDLP). The SDLP then projects the R, G and B fringe patterns sequentially onto the measured object in 1/72 s, while a monochrome CCD camera captures the corresponding deformed patterns synchronously with the SDLP. Because the deformed patterns from the three color channels are captured at different times, color crosstalk is avoided completely. But because the monochrome CCD camera has different spectral sensitivities to the R, G and B colors, there is a grayscale imbalance among the deformed patterns captured in the R, G and B channels, which may increase measurement errors or even cause the 3D reconstruction to fail. A new grayscale imbalance correction method based on the least-squares method is therefore developed. Experimental results verify the feasibility of the proposed method.
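A minimal sketch of three-step phase retrieval with a simple grayscale balance: each captured channel is rescaled to a common mean before the phase is computed. This mean-based balancing is a simplification of the paper's least-squares correction, and the gains below are assumed values:

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 240, endpoint=False)
phi_true = x                      # one fringe period across an image line
A, B = 128.0, 100.0               # fringe offset and modulation
gains = [1.0, 0.8, 1.15]          # unequal R/G/B sensitivity of the mono CCD

# Captured patterns: ideal three-step fringes scaled by per-channel gains
I = [g * (A + B * np.cos(phi_true + 2 * np.pi * k / 3))
     for k, g in enumerate(gains)]

# Grayscale balance: rescale every channel to a common mean (each ideal
# pattern averages to A over a whole number of fringe periods)
I = [Ik * (np.mean(I[0]) / np.mean(Ik)) for Ik in I]

# Standard three-step phase formula for shifts of 0, 2pi/3, 4pi/3
phi = np.arctan2(np.sqrt(3) * (I[2] - I[1]), 2 * I[0] - I[1] - I[2])
err = np.abs(np.angle(np.exp(1j * (phi - phi_true)))).max()
print(err)  # ~0: the imbalance is fully removed on this synthetic line
```

Without the rescaling step, the unequal gains leak a periodic error into the recovered phase, which is the failure mode the paper's correction targets.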
Multi-color pyrometry imaging system and method of operating the same
Estevadeordal, Jordi; Nirmalan, Nirm Velumylum; Tralshawala, Nilesh; Bailey, Jeremy Clyde
2017-03-21
A multi-color pyrometry imaging system for a high-temperature asset includes at least one viewing port in optical communication with at least one high-temperature component of the high-temperature asset. The system also includes at least one camera device in optical communication with the at least one viewing port. The at least one camera device includes a camera enclosure and at least one camera aperture defined in the camera enclosure. The at least one camera aperture is in optical communication with the at least one viewing port. The at least one camera device also includes a multi-color filtering mechanism coupled to the enclosure. The multi-color filtering mechanism is configured to sequentially transmit photons within a first predetermined wavelength band and transmit photons within a second predetermined wavelength band that is different than the first predetermined wavelength band.
NASA Technical Reports Server (NTRS)
Nelson, David L.; Diner, David J.; Thompson, Charles K.; Hall, Jeffrey R.; Rheingans, Brian E.; Garay, Michael J.; Mazzoni, Dominic
2010-01-01
MISR (Multi-angle Imaging SpectroRadiometer) INteractive eXplorer (MINX) is an interactive visualization program that allows a user to digitize smoke, dust, or volcanic plumes in MISR multiangle images, and automatically retrieve height and wind profiles associated with those plumes. This innovation can perform 9-camera animations of MISR level-1 radiance images to study the 3D relationships of clouds and plumes. MINX also enables archiving MISR aerosol properties and Moderate Resolution Imaging Spectroradiometer (MODIS) fire radiative power along with the heights and winds. It can correct geometric misregistration between cameras by correlating off-nadir camera scenes with corresponding nadir scenes and then warping the images to minimize the misregistration offsets. Plots of BRF (bidirectional reflectance factor) vs. camera angle for points clicked in an image can be displayed. Users get rapid access to map views of MISR path and orbit locations and overflight dates, and past or future orbits can be identified that pass over a specified location at a specified time. Single-camera, level-1 radiance data at 1,100- or 275- meter resolution can be quickly displayed in color using a browse option. This software determines the heights and motion vectors of features above the terrain with greater precision and coverage than previous methods, based on an algorithm that takes wind direction into consideration. Human interpreters can precisely identify plumes and their extent, and wind direction. Overposting of MODIS thermal anomaly data aids in the identification of smoke plumes. The software has been used to preserve graphical and textural versions of the digitized data in a Web-based database.
NASA Astrophysics Data System (ADS)
Shih, Chihhsiong
2005-01-01
Two efficient workflows are developed for the reconstruction of a 3D full-color building model. One uses a point-wise sensing device to sample an unknown object densely and attaches color textures from a separate digital camera. The other uses an image-based approach that reconstructs the model with color texture attached automatically. The point-wise sensing device reconstructs the CAD model using a modified best-view algorithm that collects the maximum number of construction faces in one view. The partial views of the point cloud data are then glued together using a common face between two consecutive views. Typical overlapping-mesh removal and coarsening procedures are applied to generate a unified 3D mesh shell structure. A post-processing step then combines the digital image content from a separate camera with the 3D mesh shell surfaces. An indirect uv-mapping procedure first divides the model faces into groups within which every face shares the same normal direction. The corresponding images of the faces in a group are then adjusted using the uv map as guidance. The final assembled image is glued back onto the 3D mesh to present a fully colored building model. The result is a virtual building that reflects the true dimensions and surface material conditions of a real-world campus building. The image-based modeling procedure uses a commercial photogrammetry package to reconstruct the 3D model. A novel view planning algorithm is developed to guide the photo-taking procedure. This algorithm generates a minimum set of view angles; the set of pictures taken at these view angles guarantees that each model face appears in at least two, and no more than three, of the pictures. The 3D model can then be reconstructed with a minimum amount of labor spent correlating picture pairs. The finished model is compared with the original object in both topological and dimensional aspects.
All the test cases show exactly the same topology and a reasonably low dimensional error ratio, again proving the applicability of the algorithm.
Pluto and Charon in Color: Pluto-Centric View Animation
2015-06-11
The first color movies from NASA's New Horizons mission show Pluto and its largest moon, Charon, and the complex orbital dance of the two bodies, known as a double planet. A near-true-color movie was assembled from images made in three colors -- blue, red and near-infrared -- by the Multispectral Visible Imaging Camera on the instrument known as Ralph. The images were taken on nine different occasions from May 29-June 3, 2015. The movie is "Pluto-centric," meaning that Charon is shown as it moves in relation to Pluto, which is digitally centered in the movie. (The North Pole of Pluto is at the top.) Pluto makes one turn around its axis every 6 days, 9 hours and 17.6 minutes, the same amount of time that Charon takes to complete one orbit. Looking closely at the images in this movie, one can detect a regular shift in Pluto's brightness, due to the brighter and darker terrains on its differing faces. http://photojournal.jpl.nasa.gov/catalog/PIA19689
Real-time color image processing for forensic fiber investigations
NASA Astrophysics Data System (ADS)
Paulsson, Nils
1995-09-01
This paper describes a system for automatic fiber debris detection based on color identification. The system offers fast analysis and high selectivity, a necessity when analyzing forensic fiber samples; an ordinary investigation separates the material into well above 100,000 video images to analyze. The system is based on standard techniques, with a CCD camera, a motorized sample table, and an IBM-compatible PC/AT with add-on boards for video frame digitization and stepping motor control as the main parts. It is possible to operate the instrument at full video rate (25 images/s) with the aid of the HSI (hue-saturation-intensity) color system and software optimization. High selectivity is achieved by separating the analysis into several steps. The first step is fast, direct color identification of objects in the analyzed video images; the second step analyzes the detected objects in a more complex and time-consuming stage of the investigation to identify single fiber fragments for subsequent analysis with more selective techniques.
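The HSI conversion underlying the fast color identification can be sketched with the classic formulas (hue from the arccos form, saturation from the minimum component). This is the textbook conversion, not necessarily the exact variant the system implemented:

```python
import math

def rgb_to_hsi(r, g, b):
    """Classic RGB -> HSI conversion: hue in degrees, S and I in [0, 1]."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    i = (r + g + b) / 3.0
    mn = min(r, g, b)
    s = 0.0 if i == 0 else 1.0 - mn / i
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0  # achromatic: hue undefined, report 0
    else:
        theta = math.degrees(math.acos(((r - g) + (r - b)) / (2 * den)))
        h = theta if b <= g else 360.0 - theta
    return h, s, i

print(rgb_to_hsi(255, 0, 0))  # pure red: hue ~0
print(rgb_to_hsi(0, 255, 0))  # pure green: hue ~120
```

Matching fibers by hue windows in HSI is far cheaper per pixel than full spectral comparison, which is what makes full-video-rate screening feasible.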
Optical properties (bidirectional reflectance distribution function) of shot fabric.
Lu, R; Koenderink, J J; Kappers, A M
2000-11-01
To study the optical properties of materials, one needs a complete set of the angular distribution functions of surface scattering from the materials. Here we present a convenient method for collecting a large set of bidirectional reflectance distribution function (BRDF) samples in the hemispherical scattering space. Material samples are wrapped around a right-circular cylinder and irradiated by a parallel light source, and the scattered radiance is collected by a digital camera. We tilted the cylinder around its center to collect the BRDF samples outside the plane of incidence. This method can be used with materials that have isotropic and anisotropic scattering properties. We demonstrate this method in a detailed investigation of shot fabrics. The warps and the fillings of shot fabrics are dyed different colors so that the fabric appears to change color at different viewing angles. These color-changing characteristics are found to be related to the physical and geometrical structure of shot fabric. Our study reveals that the color-changing property of shot fabrics is due mainly to an occlusion effect.
Automatic video segmentation and indexing
NASA Astrophysics Data System (ADS)
Chahir, Youssef; Chen, Liming
1999-08-01
Indexing is an important aspect of video database management. Video indexing involves the analysis of video sequences, which is a computationally intensive process; effective management of digital video therefore requires robust indexing techniques. The purpose of our proposed video segmentation is twofold. First, we develop an algorithm that identifies camera shot boundaries, based on a combination of color histograms and a block-based technique. Next, each temporal segment is represented by a color reference frame, which specifies the shot similarities and is used in the constitution of scenes. Experimental results using a variety of videos selected from the corpus of the French Audiovisual National Institute are presented to demonstrate the effectiveness of shot detection, content characterization of shots, and scene constitution.
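Shot-boundary detection by color-histogram comparison can be sketched as follows; the block-based refinement the authors combine with it is omitted, and the frames below are synthetic:

```python
import numpy as np

def hist_diff(frame_a, frame_b, bins=8):
    """Normalized color-histogram distance between two frames:
    0 = identical histograms, 1 = completely disjoint."""
    npix = frame_a[..., 0].size
    d = 0.0
    for c in range(3):  # per RGB channel
        ha, _ = np.histogram(frame_a[..., c], bins=bins, range=(0, 256))
        hb, _ = np.histogram(frame_b[..., c], bins=bins, range=(0, 256))
        d += np.abs(ha - hb).sum()
    return d / (2.0 * 3.0 * npix)

# Synthetic frames: two similar dark frames, then an abrupt cut to a bright one
rng = np.random.default_rng(0)
f1 = rng.integers(0, 64, (48, 64, 3))
f2 = np.clip(f1 + rng.integers(-4, 5, f1.shape), 0, 255)  # same shot, slight noise
f3 = rng.integers(192, 256, (48, 64, 3))                  # new shot
print(hist_diff(f1, f2), hist_diff(f2, f3))  # small vs. large distance
```

Thresholding this distance flags candidate cuts; the block-based step then suppresses false positives from fast motion or lighting changes within a shot.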
Panretinal, high-resolution color photography of the mouse fundus.
Paques, Michel; Guyomard, Jean-Laurent; Simonutti, Manuel; Roux, Michel J; Picaud, Serge; Legargasson, Jean-François; Sahel, José-Alain
2007-06-01
To analyze high-resolution color photographs of the mouse fundus, a contact fundus camera based on topical endoscopy fundus imaging (TEFI) was built. Fundus photographs of C57 and Balb/c mice obtained by TEFI were qualitatively analyzed. High-resolution digital imaging of the fundus, including the ciliary body, was routinely obtained. The reflectance and contrast of retinal vessels varied significantly with the amount of incident and reflected light and, thus, with the degree of fundus pigmentation. The combination of chromatic and spherical aberration favored blue light imaging, in terms of both field and contrast. TEFI is a small, low-cost system that allows high-resolution color fundus imaging and fluorescein angiography in conscious mice. Panretinal imaging is facilitated by the presence of the large rounded lens. TEFI significantly improves the quality of in vivo photography of the retina and ciliary processes of mice. Resolution is, however, affected by chromatic aberration, and should be improved by monochromatic imaging.
Joint demosaicking and zooming using moderate spectral correlation and consistent edge map
NASA Astrophysics Data System (ADS)
Zhou, Dengwen; Dong, Weiming; Chen, Wengang
2014-07-01
The recently published joint demosaicking and zooming algorithms for single-sensor digital cameras all overfit the popular Kodak test images, which have been found to have higher spectral correlation than typical color images. Their performance can therefore degrade significantly on other datasets, such as the McMaster test images, which have weak spectral correlation. A new joint demosaicking and zooming algorithm is proposed for the Bayer color filter array (CFA) pattern, in which the edge direction information (edge map) extracted from the raw CFA data is used consistently in both demosaicking and zooming. It also makes moderate use of the spectral correlation between color planes. The experimental results confirm that the proposed algorithm performs excellently on both the Kodak and McMaster datasets in terms of both subjective and objective measures. Our algorithm also has high computational efficiency, providing a better tradeoff among adaptability, performance, and computational cost than the existing algorithms.
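For context, the baseline that such algorithms improve upon is plain bilinear interpolation over the Bayer mosaic, with no edge map or spectral correlation. A minimal sketch, assuming an RGGB pattern and using circular boundaries for brevity:

```python
import numpy as np

def demosaic_bilinear(cfa):
    """Naive bilinear demosaicking of an RGGB Bayer mosaic.
    Circular boundaries keep the sketch short; real pipelines pad edges."""
    h, w = cfa.shape
    yy, xx = np.mgrid[0:h, 0:w]
    masks = {"R": (yy % 2 == 0) & (xx % 2 == 0),
             "G": (yy % 2) != (xx % 2),
             "B": (yy % 2 == 1) & (xx % 2 == 1)}
    out = np.zeros((h, w, 3))
    for i, c in enumerate("RGB"):
        m = masks[c].astype(float)
        vals, cnt = cfa * m, m.copy()
        # accumulate the known samples of this color in each 3x3 neighborhood
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dy or dx:
                    vals += np.roll(np.roll(cfa * m, dy, 0), dx, 1)
                    cnt += np.roll(np.roll(m, dy, 0), dx, 1)
        out[..., i] = vals / cnt
    return out

# Sanity check on a flat scene: mosaic a constant color, then reconstruct it
true_rgb = np.array([0.8, 0.5, 0.2])
yy, xx = np.mgrid[0:8, 0:8]
cfa = np.where((yy % 2 == 0) & (xx % 2 == 0), true_rgb[0],
      np.where((yy % 2 == 1) & (xx % 2 == 1), true_rgb[2], true_rgb[1]))
print(np.allclose(demosaic_bilinear(cfa), true_rgb))  # True
```

Edge-adaptive methods of the kind proposed above differ precisely where this baseline fails: across edges, where averaging neighbors from both sides produces zippering and false colors.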
Digitized Photography: What You Can Do with It.
ERIC Educational Resources Information Center
Kriss, Jack
1997-01-01
Discusses benefits of digital cameras which allow users to take a picture, store it on a digital disk, and manipulate/export these photos to a print document, Web page, or multimedia presentation. Details features of digital cameras and discusses educational uses. A sidebar presents prices and other information for 12 digital cameras. (AEF)
NASA Astrophysics Data System (ADS)
Moriya, Gentaro; Chikatsu, Hirofumi
2011-07-01
Recently, the pixel counts and functions of consumer-grade digital cameras have been increasing remarkably thanks to modern semiconductor and digital technology, and there are many low-priced consumer-grade digital cameras with more than 10 megapixels on the market in Japan. In these circumstances, digital photogrammetry using consumer-grade cameras is keenly anticipated in various application fields. There is a large body of literature on the calibration of consumer-grade digital cameras and on circular target location. Target location with subpixel accuracy has been investigated as a star tracker issue, and many target location algorithms have been developed. It is widely accepted that the least-squares model with ellipse fitting is the most accurate algorithm. However, problems remain for efficient digital close-range photogrammetry: reconfirmation of the subpixel target location algorithms for consumer-grade digital cameras, the relationship between the number of edge points along the target boundary and accuracy, and an indicator for estimating the accuracy of normal digital close-range photogrammetry using consumer-grade cameras. With this motivation, an empirical test of several subpixel target location algorithms and an indicator for estimating accuracy are investigated in this paper, using real data acquired indoors with seven consumer-grade digital cameras ranging from 7.2 to 14.7 megapixels.
3D Rainbow Particle Tracking Velocimetry
NASA Astrophysics Data System (ADS)
Aguirre-Pablo, Andres A.; Xiong, Jinhui; Idoughi, Ramzi; Aljedaani, Abdulrahman B.; Dun, Xiong; Fu, Qiang; Thoroddsen, Sigurdur T.; Heidrich, Wolfgang
2017-11-01
A single color camera is used to reconstruct a 3D-3C velocity flow field. The camera is used to record the 2D (X,Y) position and colored scattered light intensity (Z) from white polyethylene tracer particles in a flow. The main advantage of using a color camera is the capability of combining different intensity levels for each color channel to obtain more depth levels. The illumination system consists of an LCD projector placed perpendicularly to the camera. Different intensity colored level gradients are projected onto the particles to encode the depth position (Z) information of each particle, benefiting from the possibility of varying the color profiles and projected frequencies up to 60 Hz. Chromatic aberrations and distortions are estimated and corrected using a 3D laser engraved calibration target. The camera-projector system characterization is presented considering size and depth position of the particles. The use of these components reduces dramatically the cost and complexity of traditional 3D-PTV systems.
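The depth-from-color idea can be sketched as a ratio code: if the projector's red intensity ramps up with depth while blue ramps down, the ratio R/(R+B) recovers depth independently of how much light each particle scatters. The linear ramp below is an assumption for illustration; the paper notes that the projected color profiles can be varied:

```python
import numpy as np

def encode(z):
    """Projected color at normalized depth z in [0, 1]: red ramps up,
    blue ramps down (assumed linear profiles)."""
    return np.stack([z, np.zeros_like(z), 1.0 - z], axis=-1)

def decode(rgb):
    """Recover depth from the red/blue ratio of the scattered light."""
    r, b = rgb[..., 0], rgb[..., 2]
    return r / (r + b)

z_true = np.array([0.1, 0.35, 0.5, 0.82])
scatter = encode(z_true) * 0.7  # unknown particle albedo scales all channels
z_est = decode(scatter)
print(np.allclose(z_est, z_true))  # True: the ratio cancels the albedo
```

In the real system, chromatic aberration and channel crosstalk perturb this ratio, which is why the calibration target and distortion correction described above are needed.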
Spectrally-encoded color imaging
Kang, DongKyun; Yelin, Dvir; Bouma, Brett E.; Tearney, Guillermo J.
2010-01-01
Spectrally-encoded endoscopy (SEE) is a technique for ultraminiature endoscopy that encodes each spatial location on the sample with a different wavelength. One limitation of previous incarnations of SEE is that it inherently creates monochromatic images, since the spectral bandwidth is expended in the spatial encoding process. Here we present a spectrally-encoded imaging system that has color imaging capability. The new imaging system utilizes three distinct red, green, and blue spectral bands that are configured to illuminate the grating at different incident angles. By careful selection of the incident angles, the three spectral bands can be made to overlap on the sample. To demonstrate the method, a bench-top system was built, comprising a 2400-lpmm grating illuminated by three 525-μm-diameter beams with three different spectral bands. Each spectral band had a bandwidth of 75 nm, producing 189 resolvable points. A resolution target, color phantoms, and excised swine small intestine were imaged to validate the system's performance. The color SEE system showed qualitatively and quantitatively similar color imaging performance to that of a conventional digital camera. PMID:19688002
ERIC Educational Resources Information Center
Lancor, Rachael; Lancor, Brian
2014-01-01
In this article we describe how the classic pinhole camera demonstration can be adapted for use with digital cameras. Students can easily explore the effects of the size of the pinhole and its distance from the sensor on exposure time, magnification, and image quality. Instructions for constructing a digital pinhole camera and our method for…
Filters for Color Imaging and for Science
2013-03-18
The color cameras on NASA Mars rover Curiosity, including the pair that make up the rover Mastcam instrument, use the same type of Bayer pattern RGB filter as found in typical commercial color cameras.
NASA Astrophysics Data System (ADS)
Felipe-Sesé, Luis; López-Alba, Elías; Siegmann, Philip; Díaz, Francisco A.
2016-12-01
A low-cost approach for three-dimensional (3-D) full-field displacement measurement is applied to the analysis of large displacements involved in two different mechanical events. The method is based on a combination of fringe projection (FP) and two-dimensional digital image correlation (2D-DIC) techniques. The two techniques are employed simultaneously using an RGB camera and a color-encoding method; therefore, it is possible to measure in-plane and out-of-plane displacements at the same time with only one camera, even at high speed rates. The potential of the proposed methodology has been demonstrated in the analysis of large displacements during contact experiments on a soft material block. Displacement results have been successfully compared with those obtained using a commercial 3D-DIC system. Moreover, the analysis of displacements during an impact test on a metal plate was performed to emphasize the application of the methodology to dynamic events. Results show a good level of agreement, highlighting the potential of FP + 2D-DIC as a low-cost alternative for the analysis of large-deformation problems.
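The core of the FP + 2D-DIC combination is that one RGB frame can carry two independent measurements in separate color channels. A minimal sketch of that multiplexing (the channel assignments here are hypothetical, not the paper's encoding):

```python
import numpy as np

rng = np.random.default_rng(0)
h, w = 64, 64
speckle = rng.random((h, w))                        # DIC speckle pattern
x = np.arange(w)
fringes = 0.5 + 0.5 * np.cos(2 * np.pi * x / 8)     # projected fringe pattern
fringes = np.tile(fringes, (h, 1))

# One RGB frame multiplexes both measurements (green left unused here).
rgb = np.stack([speckle, np.zeros((h, w)), fringes], axis=-1)

# Channel separation recovers each measurement independently from one frame.
recovered_speckle, recovered_fringes = rgb[..., 0], rgb[..., 2]
```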
Konduru, Anil Reddy; Yelikar, Balasaheb R; Sathyashree, K V; Kumar, Ankur
2018-01-01
Open source technologies and mobile innovations have radically changed the way people interact with technology. These innovations and advancements have been used across various disciplines and already have a significant impact. Microscopy, with its focus on visually appealing, contrasting colors for better appreciation of morphology, forms the core of disciplines such as pathology, microbiology, and anatomy. Here, learning happens with the aid of multi-head microscopes and digital camera systems for teaching larger groups and for organizing interactive sessions for students or faculty of other departments. The cost of original equipment manufacturer (OEM) camera systems is a limiting factor in bringing this useful technology to all locations. To avoid this, we have used low-cost technologies such as the Raspberry Pi, Mobile High-Definition Link, and 3D printing for adapters to create portable camera systems. Adopting these open source technologies enabled us to connect any binocular or trinocular microscope to a projector or HD television at a fraction of the cost of the OEM camera systems, with comparable quality. These systems, in addition to being cost-effective, also provide the added advantage of portability, thus offering much-needed flexibility at various teaching locations.
Digital camera with apparatus for authentication of images produced from an image file
NASA Technical Reports Server (NTRS)
Friedman, Gary L. (Inventor)
1993-01-01
A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely based upon the private key that digital data encrypted with the private key by the processor may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating at any time the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, the authenticity of the image file is determined if they match, since even a one-bit change in the image file will cause its hash to be totally different from the secure hash.
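The patent's sign-and-verify flow uses a public/private key pair; the sketch below keeps the hash-then-sign-then-compare structure but substitutes a symmetric HMAC for RSA so it runs on the Python standard library alone (the secret key and image bytes are placeholders, and verification here requires the same secret rather than a public key):

```python
import hashlib
import hmac

SECRET = b"camera-embedded-key"   # stand-in for the camera's private key

def sign_image(image_bytes: bytes) -> bytes:
    """Hash the image file, then 'sign' the hash (HMAC stands in for RSA)."""
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(SECRET, digest, hashlib.sha256).digest()

def verify_image(image_bytes: bytes, signature: bytes) -> bool:
    """Recompute the hash with the same algorithm and compare signatures."""
    return hmac.compare_digest(sign_image(image_bytes), signature)

original = b"raw sensor data"       # placeholder for the stored image file
sig = sign_image(original)          # stored alongside the image file
```

An untouched file then verifies, while changing even one byte of the image makes verification fail, mirroring the patent's tamper-detection claim.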
Digital Camera with Apparatus for Authentication of Images Produced from an Image File
NASA Technical Reports Server (NTRS)
Friedman, Gary L. (Inventor)
1996-01-01
A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely related to the private key that digital data encrypted with the private key may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The authenticating apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match. Other techniques to address time-honored methods of deception, such as attaching false captions or inducing forced perspectives, are included.
SLR digital camera for forensic photography
NASA Astrophysics Data System (ADS)
Har, Donghwan; Son, Youngho; Lee, Sungwon
2004-06-01
Forensic photography, which was systematically established in the late 19th century by Alphonse Bertillon of France, has advanced considerably over roughly a century, and its development will accelerate further with high technologies, in particular digital technology. This paper reviews three studies to answer the question: Can the SLR digital camera replace traditional silver-halide ultraviolet and infrared photography? 1. Comparison of the relative ultraviolet and infrared sensitivity of the SLR digital camera to silver halide photography. 2. How much is ultraviolet or infrared sensitivity improved when the UV/IR cutoff filter built into the SLR digital camera is removed? 3. Comparison of the relative sensitivity of CCD and CMOS sensors for ultraviolet and infrared. The test results showed that the SLR digital camera has a very low sensitivity for ultraviolet and infrared. The cause was found to be the UV/IR cutoff filter mounted in front of the image sensor. Removing the UV/IR cutoff filter significantly improved the sensitivity for ultraviolet and infrared. Particularly for infrared, the sensitivity of the SLR digital camera was better than that of silver halide film. This shows the possibility of replacing silver-halide ultraviolet and infrared photography with the SLR digital camera. Thus, the SLR digital camera seems well suited for forensic photography, which deals with a lot of ultraviolet and infrared photographs.
Evaluation of modified portable digital camera for screening of diabetic retinopathy.
Chalam, Kakarla V; Brar, Vikram S; Keshavamurthy, Ravi
2009-01-01
To describe a portable wide-field noncontact digital camera for posterior segment photography. The digital camera has a compound lens consisting of two optical elements (a 90-dpt and a 20-dpt lens) attached to a 7.2-megapixel camera. White-light-emitting diodes are used to illuminate the fundus and reduce source reflection. The camera is set to candlelight mode, the optical zoom standardized to ×2.4, and the focus manually set to 3.0 m. The new technique provides quality wide-angle digital images of the retina (60 degrees) in patients with dilated pupils, at a fraction of the cost of established digital fundus photography. The modified digital camera is a useful alternative technique to acquire fundus images and provides a tool for screening posterior segment conditions, including diabetic retinopathy, in a variety of clinical settings.
NASA Astrophysics Data System (ADS)
Sarakinos, A.; Lembessis, A.; Zervos, N.
2013-02-01
In this paper we will present the Z-Lab transportable color holography system, the HoLoFoS illuminator and results of actual in situ recording of color Denisyuk holograms of artifacts on panchromatic silver halide emulsions. Z-lab and HoLoFoS were developed to meet identified prerequisites of holographic recording of artifacts: a) in situ recording b) a high degree of detail and color reproduction c) a low degree of image distortions. The Z-Lab consists of the Z3RGB camera, its accessories and a mobile darkroom. HoLoFoS is an RGB LED-based lighting device for the display of color holograms. The device is capable of digitally controlled intensity mixing and provides a beam of uniform color cross section. The small footprint and emission characteristics of the device LEDs result in a narrow band, quasi point source at selected wavelengths. A case study in recording and displaying 'Optical Clones' of Greek cultural heritage artifacts with the aforementioned systems will also be presented.
Linear array of photodiodes to track a human speaker for video recording
NASA Astrophysics Data System (ADS)
DeTone, D.; Neal, H.; Lougheed, R.
2012-12-01
Communication and collaboration using stored digital media has garnered more interest in many areas of business, government and education in recent years. This is due primarily to improvements in the quality of cameras and the speed of computers. An advantage of digital media is that it can serve as an effective alternative when physical interaction is not possible. Video recordings that allow viewers to discern a presenter's facial features, lips and hand motions are more effective than videos that do not. To attain this, one must maintain a video capture in which the speaker occupies a significant portion of the captured pixels. However, camera operators are costly, and often do an imperfect job of tracking presenters in unrehearsed situations. This creates motivation for a robust, automated system that directs a video camera to follow a presenter as he or she walks anywhere in the front of a lecture hall or large conference room. Such a system is presented. The system consists of a commercial, off-the-shelf pan/tilt/zoom (PTZ) color video camera, a necklace of infrared LEDs and a linear photodiode array detector. Electronic output from the photodiode array is processed to generate the location of the LED necklace, which is worn by a human speaker. The computer controls the video camera movements to record video of the speaker. The speaker's vertical position and depth are assumed to remain relatively constant; the video camera is sent only panning (horizontal) movement commands. The LED necklace is flashed at 70 Hz at a 50% duty cycle to provide noise-filtering capability. The benefit of using a photodiode array versus a standard video camera is its higher frame rate (4 kHz vs. 60 Hz). The higher frame rate allows for the filtering of infrared noise such as sunlight and indoor lighting, a capability absent from other tracking technologies. The system has been tested in a large lecture hall and is shown to be effective.
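The value of flashing the necklace at 70 Hz with a 50% duty cycle can be illustrated by synchronous detection: correlating with a zero-mean reference at the flash frequency averages constant ambient light to nearly zero. A toy simulation (all amplitudes invented):

```python
import numpy as np

fs, f_led = 4000.0, 70.0                  # photodiode sample rate, LED flash rate
t = np.arange(0, 1.0, 1.0 / fs)           # one second of samples
led = 2.0                                  # LED amplitude to recover
ambient = 5.0                              # constant sunlight/indoor light

square = (np.sin(2 * np.pi * f_led * t) > 0).astype(float)  # 50% duty drive
signal = ambient + led * square            # what the photodiode sees

# Synchronous detection: a zero-mean reference at 70 Hz averages the DC
# ambient term to ~0, leaving the LED amplitude (factor 4 undoes the
# 50% duty cycle and the +/-0.5 reference swing).
reference = square - 0.5
recovered = 4.0 * np.mean(signal * reference)   # ~= led, ambient rejected
```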
Performance of Color Camera Machine Vision in Automated Furniture Rough Mill Systems
D. Earl Kline; Agus Widoyoko; Janice K. Wiedenbeck; Philip A. Araman
1998-01-01
The objective of this study was to evaluate the performance of color camera machine vision for lumber processing in a furniture rough mill. The study used 134 red oak boards to compare the performance of automated gang-rip-first rough mill yield based on a prototype color camera lumber inspection system developed at Virginia Tech with both estimated optimum rough mill...
Objective assessment in digital images of skin erythema caused by radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsubara, H., E-mail: matubara@nirs.go.jp; Matsufuji, N.; Tsuji, H.
Purpose: Skin toxicity caused by radiotherapy has been visually classified into discrete grades. The present study proposes an objective and continuous assessment method of skin erythema in digital images taken under arbitrary lighting conditions, which is the case for most clinical environments. The purpose of this paper is to show the feasibility of the proposed method. Methods: Clinical data were gathered from six patients who received carbon beam therapy for lung cancer. Skin condition was recorded using an ordinary compact digital camera under unfixed lighting conditions; a laser Doppler flowmeter was used to measure blood flow in the skin. The photos and measurements were taken at 3 h, 30, and 90 days after irradiation. Images were decomposed into hemoglobin and melanin colors using independent component analysis. Pixel values in hemoglobin color images were compared with skin dose and skin blood flow. The uncertainty of the practical photographic method was also studied in nonclinical experiments. Results: The clinical data showed good linearity between skin dose, skin blood flow, and pixel value in the hemoglobin color images; their correlation coefficients were larger than 0.7. It was deduced from the nonclinical experiments that the uncertainty due to the proposed photographic method was 15%; such an uncertainty is not critical for the assessment of skin erythema in practical use. Conclusions: The feasibility of the proposed method for the assessment of skin erythema using digital images was demonstrated. The numerical relationship obtained helps to predict skin erythema by artificial processing of skin images. Although the proposed method using photographs taken under unfixed lighting conditions increases the uncertainty of skin information in the images, it was shown to be powerful for the assessment of skin conditions because of its flexibility and adaptability.
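The paper separates hemoglobin and melanin with independent component analysis; as a simplified stand-in, if the two chromophore color directions were already known, each pixel could be decomposed by least squares instead (the vectors below are invented for illustration, not the paper's ICA result):

```python
import numpy as np

# Hypothetical chromophore color directions in log-RGB (absorbance) space.
hemoglobin = np.array([0.2, 0.7, 0.7])
melanin = np.array([0.6, 0.6, 0.5])
A = np.column_stack([hemoglobin, melanin])   # mixing matrix, shape (3, 2)

# Synthesize pixels with known chromophore densities, then recover them.
true_density = np.array([[1.0, 0.3],
                         [0.2, 0.8]])        # rows: pixels, cols: [hemo, mel]
pixels = true_density @ A.T                  # log-absorbance per pixel

# Least-squares unmixing (stand-in for ICA, which needs no known directions).
recovered, *_ = np.linalg.lstsq(A, pixels.T, rcond=None)
```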
Design of video interface conversion system based on FPGA
NASA Astrophysics Data System (ADS)
Zhao, Heng; Wang, Xiang-jun
2014-11-01
This paper presents an FPGA-based video interface conversion system that enables inter-conversion between digital and analog video. A Cyclone IV series EP4CE22F17C chip from Altera Corporation is used as the main video processing chip, and a single-chip microcontroller is used as the information-interaction control unit between the FPGA and PC. The system is able to encode/decode messages from the PC. Technologies including video decoding/encoding circuits, the bus communication protocol, data stream de-interleaving and de-interlacing, color space conversion, and the Camera Link timing generator module of the FPGA are introduced. The system converts the Composite Video Broadcast Signal (CVBS) from the CCD camera into Low Voltage Differential Signaling (LVDS), which is collected by the video processing unit through the Camera Link interface. The processed video signals are then fed to the system output board and displayed on the monitor. Current experiments show that the system achieves high-quality video conversion with minimal board size.
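Among the listed modules, color space conversion is the easiest to sketch in software; the standard BT.601 full-range RGB-to-YCbCr arithmetic, which such an FPGA module would typically implement with fixed-point multipliers, is:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """BT.601 full-range RGB -> YCbCr (float sketch of the FPGA arithmetic)."""
    r, g, b = [np.asarray(c, float) for c in rgb]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr

# White maps to full luma with neutral chroma.
y, cb, cr = rgb_to_ycbcr((255.0, 255.0, 255.0))
```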
IPL Processing of the Viking Orbiter Images of Mars
NASA Technical Reports Server (NTRS)
Ruiz, R. M.; Elliott, D. A.; Yagi, G. M.; Pomphrey, R. B.; Power, M. A.; Farrell, W., Jr.; Lorre, J. J.; Benton, W. D.; Dewar, R. E.; Cullen, L. E.
1977-01-01
The Viking orbiter cameras returned over 9000 images of Mars during the 6-month nominal mission. Digital image processing was required to produce products suitable for quantitative and qualitative scientific interpretation. Processing included the production of surface elevation data using computer stereophotogrammetric techniques, crater classification based on geomorphological characteristics, and the generation of color products using multiple black-and-white images recorded through spectral filters. The Image Processing Laboratory of the Jet Propulsion Laboratory was responsible for the design, development, and application of the software required to produce these 'second-order' products.
NASA Technical Reports Server (NTRS)
Taranik, James V.; Hutsinpiller, Amy; Borengasser, Marcus
1986-01-01
Thermal Infrared Multispectral Scanner (TIMS) data were acquired over the Virginia City area on September 12, 1984. The data were acquired at approximately 1130 hours local time (1723 IRIG). The TIMS data were analyzed using both photointerpretation and digital processing techniques. Karhunen-Loève transformations were utilized to display variations in radiant spectral emittance. The TIMS image data were compared with color infrared metric camera photography and LANDSAT Thematic Mapper (TM) data, and key areas were photographed in the field.
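For discrete band data, the Karhunen-Loève transformation used to display emittance variations is a principal component rotation of the band covariance matrix. A minimal sketch on synthetic multiband data (band count and statistics invented):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic multispectral pixels (rows) x 6 TIMS-like bands (columns),
# built to be strongly correlated, as thermal channels typically are.
base = rng.normal(size=(500, 1))
bands = base @ rng.normal(size=(1, 6)) + 0.1 * rng.normal(size=(500, 6))

# Karhunen-Loeve / PCA: eigendecompose the band covariance and rotate.
centered = bands - bands.mean(axis=0)
cov = centered.T @ centered / (len(bands) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
kl = centered @ eigvecs[:, ::-1]             # principal components first
```

The rotated components are mutually uncorrelated, which is what makes them useful for displaying independent sources of spectral variation.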
NASA Astrophysics Data System (ADS)
Shahbazi, M.; Sattari, M.; Homayouni, S.; Saadatseresht, M.
2012-07-01
Recent advances in positioning techniques have made it possible to develop Mobile Mapping Systems (MMS) for the detection and 3D localization of various objects from a moving platform. On the other hand, automatic traffic sign recognition from an equipped mobile platform has recently been a challenging issue for both intelligent transportation and municipal database collection. However, there are several inevitable problems inherent in all recognition methods that rely completely on passive chromatic or grayscale images. This paper presents the implementation and evaluation of an operational MMS. Distinct from others, the developed MMS comprises one range camera based on Photonic Mixer Device (PMD) technology and one standard 2D digital camera. The system benefits from certain algorithms to detect, recognize and localize traffic signs by fusing shape, color and object information from both range and intensity images. For the calibration stage, a self-calibration method based on integrated bundle adjustment via a joint setup with the digital camera is applied in this study for PMD camera calibration. As a result, an improvement of 83% in the RMS of range error and 72% in the RMS of coordinate residuals for the PMD camera, over that achieved with basic calibration, is realized in independent accuracy assessments. Furthermore, conventional photogrammetric techniques based on controlled network adjustment are utilized for platform calibration. Likewise, the well-known Extended Kalman Filter (EKF) is applied to integrate the navigation sensors, namely GPS and INS. The overall acquisition system, along with the proposed techniques, leads to 90% true positive recognition and an average 3D positioning accuracy of 12 centimetres.
Choodum, Aree; Parabun, Kaewalee; Klawach, Nantikan; Daeid, Niamh Nic; Kanatharana, Proespichaya; Wongniramaikul, Worawit
2014-02-01
The Simon presumptive color test was used in combination with the built-in digital camera on a mobile phone to detect methamphetamine. Real-time Red-Green-Blue (RGB) basic color data were obtained using an application installed on the mobile phone, and the relationship profile between RGB intensity, including other calculated values, and the colourimetric product was investigated. A wide linear range (0.1-2.5 mg mL(-1)) and a low detection limit (0.0110±0.0001 to 0.044±0.002 mg mL(-1)) were achieved. The method also required only a small sample size (20 μL). The results obtained from the analysis of illicit methamphetamine tablets were comparable to values obtained from gas chromatography-flame ionization detector (GC-FID) analysis. Method validation indicated good intra- and inter-day precision (2.27-4.49% RSD and 2.65-5.62% RSD, respectively). The results suggest that this is a powerful real-time mobile method with the potential to be applied in field tests. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
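The reported linear range suggests a straight-line calibration of a color-channel intensity against concentration, inverted to read back unknowns. A sketch with invented readings (the real curve would come from the phone application):

```python
import numpy as np

# Hypothetical calibration: green-channel intensity vs methamphetamine
# concentration (mg/mL); these numbers are illustrative, not the paper's data.
conc = np.array([0.1, 0.5, 1.0, 1.5, 2.0, 2.5])
intensity = np.array([240.0, 220.0, 195.0, 170.0, 145.0, 120.0])

# Fit the calibration line intensity = slope * conc + intercept.
slope, intercept = np.polyfit(conc, intensity, 1)

def predict_concentration(measured_intensity):
    """Invert the calibration line to estimate concentration of an unknown."""
    return (measured_intensity - intercept) / slope
```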
Spectroscopy and optical imaging of coalescing droplets
NASA Astrophysics Data System (ADS)
Ivanov, Maksym; Viderström, Michel; Chang, Kelken; Ramírez Contreras, Claudia; Mehlig, Bernhard; Hanstorp, Dag
2016-09-01
We report on experimental investigations of the dynamics of colliding liquid droplets by combining optical trapping, spectroscopy and high-speed color imaging. Two droplets with diameters between 5 and 50 microns are suspended in quiescent air by optical traps. The traps allow us to control the initial positions, and hence the impact parameter and the relative velocity, of the colliding droplets. Movies of the droplet dynamics are recorded using high-speed digital movie cameras at frame rates of up to 63,000 frames per second. A fluorescent dye is added to one of the colliding droplets. We investigate the temporal evolution of the scattered and fluorescence light from the colliding droplets with concurrent spectroscopy and color imaging. This technique can be used to detect the exchange of molecules between a pair of neutral or charged droplets.
Video systems for real-time oil-spill detection
NASA Technical Reports Server (NTRS)
Millard, J. P.; Arvesen, J. C.; Lewis, P. L.; Woolever, G. F.
1973-01-01
Three airborne television systems are being developed to evaluate techniques for oil-spill surveillance. These include a conventional TV camera, two cameras operating in a subtractive mode, and a field-sequential camera. False-color enhancement and wavelength and polarization filtering are also employed. The first of a series of flight tests indicates that an appropriately filtered conventional TV camera is a relatively inexpensive method of improving contrast between oil and water. False-color enhancement improves the contrast, but the problem caused by sun glint now limits the application to overcast days. Future effort will be aimed toward a one-camera system. Solving the sun-glint problem and developing the field-sequential camera into an operable system offers potential for color 'flagging' oil on water.
Field-Sequential Color Converter
NASA Technical Reports Server (NTRS)
Studer, Victor J.
1989-01-01
Electronic conversion circuit enables display of signals from a field-sequential color-television camera on a standard color video monitor. Designed for incorporation into the color-television monitor on the Space Shuttle, the circuit weighs less, takes up less space, and consumes less power than previous conversion equipment. It incorporates state-of-the-art memory devices and is also usable in terrestrial stationary or portable closed-circuit television systems.
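Functionally, the converter buffers three successive single-primary fields and reads them out simultaneously as one RGB frame; the array stacking below plays the role of the circuit's memory devices (toy 2x2 fields, values invented):

```python
import numpy as np

# Three successive fields from a field-sequential camera: each frame time
# carries only one primary.
red_field = np.array([[255, 0], [0, 0]], dtype=np.uint8)
green_field = np.array([[0, 255], [0, 0]], dtype=np.uint8)
blue_field = np.array([[0, 0], [255, 0]], dtype=np.uint8)

# The converter buffers the fields (the circuit's memory devices) and reads
# them out simultaneously as one RGB frame for a standard color monitor.
rgb_frame = np.stack([red_field, green_field, blue_field], axis=-1)
```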
Measurement of soil color: a comparison between smartphone camera and the Munsell color charts
USDA-ARS?s Scientific Manuscript database
Soil color is one of the most valuable soil properties for assessing and monitoring soil health. Here we present the results of tests of a new soil color app for mobile phones. The comparisons include various smartphone cameras under different natural illumination conditions (sunny and cloudy) and ...
ERIC Educational Resources Information Center
Liu, Rong; Unger, John A.; Scullion, Vicki A.
2014-01-01
Drawing data from an action-oriented research project for integrating digital video cameras into the reading process in pre-college courses, this study proposes using digital video cameras in reading summaries and responses to promote critical thinking and to teach social justice concepts. The digital video research project is founded on…
Teichmann, A Lina; Nieuwenstein, Mark R; Rich, Anina N
2015-01-01
Digit-color synesthetes report experiencing colors when perceiving letters and digits. The conscious experience is typically unidirectional (e.g., digits elicit colors but not vice versa) but recent evidence shows subtle bidirectional effects. We examined whether short-term memory for colors could be affected by the order of presentation reflecting more or less structure in the associated digits. We presented a stream of colored squares and asked participants to report the colors in order. The colors matched each synesthete's colors for digits 1-9 and the order of the colors corresponded either to a sequence of numbers (e.g., [red, green, blue] if 1 = red, 2 = green, 3 = blue) or no systematic sequence. The results showed that synesthetes recalled sequential color sequences more accurately than pseudo-randomized colors, whereas no such effect was found for the non-synesthetic controls. Synesthetes did not differ from non-synesthetic controls in recall of color sequences overall, providing no evidence of a general advantage in memory for serial recall of colors.
Analysis of crystalline lens coloration using a black and white charge-coupled device camera.
Sakamoto, Y; Sasaki, K; Kojima, M
1994-01-01
To analyze lens coloration in vivo, we used a new type of Scheimpflug camera that is a black and white type of charge-coupled device (CCD) camera. A new methodology was proposed. Scheimpflug images of the lens were taken three times through red (R), green (G), and blue (B) filters, respectively. Three images corresponding with the R, G, and B channels were combined into one image on the cathode-ray tube (CRT) display. The spectral transmittance of the tricolor filters and the spectral sensitivity of the CCD camera were used to correct the scattering-light intensity of each image. Coloration of the lens was expressed on a CIE standard chromaticity diagram. The lens coloration of seven eyes analyzed by this method showed values almost the same as those obtained by the previous method using color film.
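Mapping the three filtered-channel intensities onto the CIE chromaticity diagram requires an RGB-to-XYZ conversion; the sketch below substitutes the standard sRGB/D65 matrix for the paper's filter-transmittance and sensor-sensitivity correction:

```python
import numpy as np

# Stand-in for the paper's calibration: the sRGB (D65) RGB -> XYZ matrix
# maps the three filtered-channel intensities to tristimulus values.
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def chromaticity(rgb):
    """Project linear RGB scattering intensities to CIE xy chromaticity."""
    X, Y, Z = M @ np.asarray(rgb, float)
    s = X + Y + Z
    return X / s, Y / s

# Equal channel intensities land at the D65 white point (~0.3127, 0.3290).
x, y = chromaticity([1.0, 1.0, 1.0])
```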
Full-color stereoscopic single-pixel camera based on DMD technology
NASA Astrophysics Data System (ADS)
Salvador-Balaguer, Eva; Clemente, Pere; Tajahuerce, Enrique; Pla, Filiberto; Lancis, Jesús
2017-02-01
Imaging systems based on microstructured illumination and single-pixel detection offer several advantages over conventional imaging techniques. They are an effective method for imaging through scattering media, even in the dynamic case. They work efficiently under low light levels, and the simplicity of the detector makes it easy to design imaging systems working outside the visible spectrum and to acquire multidimensional information. In particular, several approaches have been proposed to record 3D information. The technique is based on sampling the object with a sequence of microstructured light patterns codified onto a programmable spatial light modulator while the light intensity is measured with a single-pixel detector. The image is retrieved computationally from the photocurrent fluctuations provided by the detector. In this contribution we describe an optical system able to produce full-color stereoscopic images by using few and simple optoelectronic components. In our setup we use an off-the-shelf digital light projector (DLP) based on a digital micromirror device (DMD) to generate the light patterns. To capture the color of the scene we take advantage of the codification procedure used by the DLP for color video projection. To record stereoscopic views we use a 90° beam splitter and two mirrors, allowing us to project the patterns from two different viewpoints. By using a single monochromatic photodiode we obtain a pair of color images that can be used as input to a 3-D display. To reduce the time needed to project the patterns we use a compressive sampling algorithm. Experimental results are shown.
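Single-pixel image retrieval from pattern-by-pattern intensity measurements can be illustrated with orthogonal Hadamard patterns, a common choice for DMD systems (the paper's actual pattern set and compressive solver are not specified here, so this is a non-compressive sketch):

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Toy scene flattened to 16 pixels; each Hadamard row is one +/-1 pattern,
# realizable on a DMD as a pair of complementary binary masks.
scene = np.arange(16, dtype=float)
H = hadamard(16)

measurements = H @ scene                        # one photodiode reading per pattern
reconstruction = (H.T @ measurements) / 16.0    # exact, since H @ H.T = 16 * I
```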
Next-generation digital camera integration and software development issues
NASA Astrophysics Data System (ADS)
Venkataraman, Shyam; Peters, Ken; Hecht, Richard
1998-04-01
This paper investigates the complexities associated with the development of next-generation digital cameras due to requirements in connectivity and interoperability. Each successive generation of digital camera improves drastically in cost, performance, resolution, image quality and interoperability features. This is being accomplished by advancements in a number of areas: research, silicon, standards, etc. As the capabilities of these cameras increase, so do the requirements for both hardware and software. Today, there are two single-chip camera solutions on the market: the Motorola MPC 823 and the LSI DCAM-101. Real-time constraints for a digital camera may be defined by the maximum time allowable between capture of images. Constraints in the design of an embedded digital camera include processor architecture, memory, processing speed and the real-time operating system. This paper will present the LSI DCAM-101, a single-chip digital camera solution. It will present an overview of the architecture and the challenges in hardware and software for supporting streaming video in such a complex device. Issues presented include the development of the data-flow software architecture, testing and integration on this complex silicon device. The strategy for optimizing performance on the architecture will also be presented.
Moonrungsee, Nuntaporn; Pencharee, Somkid; Jakmunee, Jaroon
2015-05-01
A field-deployable colorimetric analyzer based on an Android mobile phone was developed for the determination of available phosphorus content in soil. An inexpensive mobile phone with an embedded digital camera was used for taking photographs of the chemical solution under test. The method involved a reaction of the phosphorus (orthophosphate form), ammonium molybdate and potassium antimonyl tartrate to form phosphomolybdic acid, which was reduced by ascorbic acid to produce the intensely colored molybdenum blue. A software program was developed for the phone to record and analyze the RGB color of the picture. A light-tight box with LED light to control illumination was fabricated to improve the precision and accuracy of the measurement. Under the optimum conditions, the calibration graph was created by measuring the blue color intensity of a series of standard phosphorus solutions (0.0-1.0 mg P L(-1)); the calibration equation obtained was then retained by the program for the analysis of sample solutions. The results obtained from the proposed method agreed well with the spectrophotometric method, with a detection limit of 0.01 mg P L(-1), and a sample throughput of about 40 h(-1) was achieved. The developed system provided good accuracy (RE<5%) and precision (RSD<2%, intra- and inter-day), fast and cheap analysis, and is especially convenient to use in crop fields for soil analysis of the phosphorus nutrient. Copyright © 2015 Elsevier B.V. All rights reserved.
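The retained calibration equation amounts to inverting a monotonic intensity-vs-concentration curve; a sketch with invented calibration points (real values would come from the phone in the light-tight box):

```python
import numpy as np

# Hypothetical calibration points: blue-channel intensity of the molybdenum
# blue product vs phosphorus standards (mg P/L).
standards = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
blue_intensity = np.array([10.0, 55.0, 98.0, 140.0, 181.0, 220.0])

def concentration_from_intensity(i):
    """Interpolate the monotonic calibration curve to read back mg P/L."""
    return np.interp(i, blue_intensity, standards)

unknown = concentration_from_intensity(98.0)   # matches the 0.4 mg P/L standard
```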
Temperature-Sensitive Coating Sensor Based on Hematite
NASA Technical Reports Server (NTRS)
Bencic, Timothy J.
2011-01-01
A temperature-sensitive coating, based on hematite (iron III oxide), has been developed to measure surface temperature using spectral techniques. The hematite powder is added to a binder that allows the mixture to be painted on the surface of a test specimen. The coating dynamically changes its relative spectral makeup, or color, with changes in temperature. The color changes from a reddish-brown appearance at room temperature (25 C) to a black-gray appearance at temperatures around 600 C. The color change is reversible and repeatable with temperature cycling from low to high and back to low temperatures. Detection of the spectral changes can be recorded by different sensors, including spectrometers, photodiodes, and cameras. Using a priori information obtained through calibration experiments in known thermal environments, the color change can then be calibrated to yield accurate quantitative temperature information. Temperature information can be obtained at a point, or over an entire surface, depending on the type of equipment used for data acquisition. Because this innovation uses spectrophotometry principles of operation, rather than the current methods, which use photoluminescence principles, white light can be used for illumination rather than high-intensity short-wavelength excitation. The generation of high-intensity white (or potentially filtered long-wavelength) light is much easier, and is used more prevalently for photography and video technologies. In outdoor tests, the Sun can be used for short durations as an illumination source as long as the amplitude remains relatively constant. The reflected light is also much higher in intensity than the emitted light from the inefficient current methods. Having a much brighter surface allows a wider array of detection schemes and devices.
Because color change is the principle of operation, high-quality, lower-cost digital cameras can be used for detection, as opposed to the high-cost imagers needed for intensity measurements with the current methods. Alternative methods of detection are possible to increase the measurement sensitivity. For example, a monochrome camera can be used with an appropriate filter and a radiometric measurement of normalized intensity change that is proportional to the change in coating temperature. Using different spectral regions yields different sensitivities and calibration curves for converting intensity change to temperature units. Alternatively, using a color camera, a ratio of the standard red, green, and blue outputs can be used as a self-referenced change. The blue region (less than 500 nm) does not change nearly as much as the red region (greater than 575 nm), so a ratio of color intensities will yield a calibrated temperature image. The new temperature-sensor coating is easy to apply, is inexpensive, can contour complex surface shapes, and can serve as a global surface-measurement system based on spectrophotometry. The color change, or relative intensity change at different colors, makes optical detection under white-light illumination, and the associated interpretation, much easier than in the detection systems of the current methods.
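The self-referenced red/blue ratio detection described above might be sketched as follows, assuming an invented monotonic calibration curve; the real curve would come from the calibration experiments in known thermal environments.

```python
import numpy as np

def ratio_to_temperature(rgb_image, cal_ratios, cal_temps):
    """Convert an RGB image of the hematite coating to a temperature map.

    The red channel darkens with temperature while blue stays nearly
    constant, so the per-pixel R/B ratio is a self-referenced signal
    that can be looked up against a monotonic calibration curve.
    """
    r = rgb_image[..., 0].astype(float)
    b = rgb_image[..., 2].astype(float)
    ratio = r / np.maximum(b, 1e-6)              # guard against division by zero
    # np.interp needs ascending x, so sort the calibration points first.
    order = np.argsort(cal_ratios)
    return np.interp(ratio, np.asarray(cal_ratios, float)[order],
                     np.asarray(cal_temps, float)[order])

# Invented calibration: R/B falls from 2.0 at 25 C to 1.0 at 600 C
cal_ratios = [2.0, 1.5, 1.0]
cal_temps = [25.0, 300.0, 600.0]

img = np.zeros((2, 2, 3))
img[..., 0] = 150.0   # red channel
img[..., 2] = 100.0   # blue channel -> R/B ratio of 1.5 everywhere
temps = ratio_to_temperature(img, cal_ratios, cal_temps)
```

Because the ratio is taken per pixel, the same lookup applied to a full camera frame yields a global surface temperature image rather than a point measurement.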
Applications of color machine vision in the agricultural and food industries
NASA Astrophysics Data System (ADS)
Zhang, Min; Ludas, Laszlo I.; Morgan, Mark T.; Krutz, Gary W.; Precetti, Cyrille J.
1999-01-01
Color is an important factor in the agricultural and food industries. Agricultural or prepared food products are often graded by producers and consumers using color parameters. Color is used to estimate maturity and sort produce for defects, but also to perform genetic screening or make an aesthetic judgement. The task of sorting produce against a color scale is very complex and requires special illumination and training; it also cannot be performed for long durations without fatigue and loss of accuracy. This paper describes a machine vision system designed to perform color classification in real time. Applications for sorting a variety of agricultural products are included, e.g. seeds, meat, baked goods, plants, and wood. First, the theory of color classification of agricultural and biological materials is introduced. Then, some tools for classifier development are presented. Finally, the implementation of the algorithm on real-time image-processing hardware and example applications for industry are described. The paper also presents an image-analysis algorithm and a prototype machine vision system developed for industry. This system automatically locates the surface of certain plants using a digital camera and predicts information such as the size, potential value, and type of the plant. The algorithm developed is feasible for real-time identification in an industrial environment.
Use of a Digital Camera To Document Student Observations in a Microbiology Laboratory Class.
ERIC Educational Resources Information Center
Mills, David A.; Kelley, Kevin; Jones, Michael
2001-01-01
Points out the lack of microscopic images of wine-related microbes. Uses a digital camera during a wine microbiology laboratory to capture student-generated microscope images. Discusses the advantages of using a digital camera in a teaching lab. (YDS)
Digital Cameras for Student Use.
ERIC Educational Resources Information Center
Simpson, Carol
1997-01-01
Describes the features, equipment and operations of digital cameras and compares three different digital cameras for use in education. Price, technology requirements, features, transfer software, and accessories for the Kodak DC25, Olympus D-200L and Casio QV-100 are presented in a comparison table. (AEF)
High performance gel imaging with a commercial single lens reflex camera
NASA Astrophysics Data System (ADS)
Slobodan, J.; Corbett, R.; Wye, N.; Schein, J. E.; Marra, M. A.; Coope, R. J. N.
2011-03-01
A high performance gel imaging system was constructed using a digital single lens reflex camera with epi-illumination to image 19 × 23 cm agarose gels with up to 10,000 DNA bands each. It was found to give equivalent performance to a laser scanner in this high throughput DNA fingerprinting application using the fluorophore SYBR Green®. The specificity and sensitivity of the imager and scanner were within 1% using the same band identification software. Low and high cost color filters were also compared and it was found that with care, good results could be obtained with inexpensive dyed acrylic filters in combination with more costly dielectric interference filters, but that very poor combinations were also possible. Methods for determining resolution, dynamic range, and optical efficiency for imagers are also proposed to facilitate comparison between systems.
Chung, Kuo-Liang; Hsu, Tsu-Chun; Huang, Chi-Chao
2017-10-01
In this paper, we propose a novel and effective hybrid method, which joins conventional chroma subsampling and distortion-minimization-based luma modification, to improve the quality of the reconstructed RGB full-color image. Assume the input RGB full-color image has been transformed to a YUV image prior to compression. For each 2×2 UV block, 4:2:0 subsampling is applied to determine the subsampled U and V components, U_s and V_s. Based on U_s, V_s, and the corresponding 2×2 original RGB block, a main theorem is provided to determine the ideally modified 2×2 luma block in constant time, such that the color peak signal-to-noise ratio (CPSNR) distortion between the original 2×2 RGB block and the reconstructed 2×2 RGB block is minimized in a globally optimal sense. Furthermore, the proposed hybrid method and the delivered theorem are adjusted to tackle digital time delay integration images and Bayer mosaic images, whose Bayer CFA structure is widely used in modern commercial digital cameras. Based on the IMAX, Kodak, and screen-content test image sets, the experimental results demonstrate that, in high efficiency video coding, the proposed hybrid method yields a substantial quality improvement of the reconstructed RGB images, in terms of CPSNR, visual effect, CPSNR-bitrate trade-off, and Bjøntegaard delta PSNR, when compared with existing chroma subsampling schemes.
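As a point of reference, the conventional average-based 4:2:0 chroma subsampling that the hybrid method builds on, and the CPSNR measure it optimizes, can be sketched as follows; the luma-modification theorem itself is not reproduced here.

```python
import numpy as np

def subsample_420_average(chroma):
    """Average each 2x2 chroma block down to one sample (4:2:0 baseline).

    This is the conventional subsampling that produces U_s and V_s from
    each 2x2 UV block; height and width are assumed even.
    """
    h, w = chroma.shape
    return chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def cpsnr(img_a, img_b, peak=255.0):
    """Color PSNR: one PSNR over all RGB samples of the two images."""
    mse = np.mean((img_a.astype(float) - img_b.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# One 2x2 U block reduced to its single subsampled value
u = np.array([[10.0, 20.0],
              [30.0, 40.0]])
u_s = subsample_420_average(u)
```

The hybrid method keeps this subsampling step but then modifies the four luma samples of the block so that the CPSNR of the reconstructed RGB block is maximized.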
A Robust Mechanical Sensing System for Unmanned Sea Surface Vehicles
NASA Technical Reports Server (NTRS)
Kulczycki, Eric A.; Magnone, Lee J.; Huntsberger, Terrance; Aghazarian, Hrand; Padgett, Curtis W.; Trotz, David C.; Garrett, Michael S.
2009-01-01
The need for autonomous navigation and intelligent control of unmanned sea surface vehicles requires a mechanically robust sensing architecture that is watertight, durable, and insensitive to vibration and shock loading. The sensing system developed here comprises four black-and-white cameras and a single color camera. The cameras are rigidly mounted to a camera bar that can be reconfigured for mounting on multiple vehicles, and they act as both navigational cameras and application cameras. The cameras are housed in watertight casings to protect them and their electronics from moisture and wave splashes. Two of the black-and-white cameras are positioned to provide lateral vision. They are angled away from the front of the vehicle at horizontal angles that provide ideal fields of view for mapping and autonomous navigation. The other two black-and-white cameras are positioned at an angle into the color camera's field of view to support vehicle applications. These two cameras provide an overlap, as well as a backup to the front camera. The color camera is positioned directly in the middle of the bar, aimed straight ahead. This system is applicable to any sea-going vehicle, both on Earth and in space.
Theoretical colours and isochrones for some Hubble Space Telescope colour systems. II
NASA Technical Reports Server (NTRS)
Paltoglou, G.; Bell, R. A.
1991-01-01
A grid of synthetic surface brightness magnitudes for 14 bandpasses of the Hubble Space Telescope Faint Object Camera is presented, as well as a grid of UBV, uvby, and Faint Object Camera surface brightness magnitudes derived from the Gunn-Stryker spectrophotometric atlas. The synthetic colors are used to examine the transformations between the ground-based Johnson UBV and Stromgren uvby systems and the Faint Object Camera UBV and uvby. Two new four-color systems, similar to the Stromgren system, are proposed for the determination of abundance, temperature, and surface gravity. The synthetic colors are also used to calculate color-magnitude isochrones from the list of theoretical tracks provided by VandenBerg and Bell (1990). It is shown that by using the appropriate filters it is possible to minimize the dependence of this color difference on metallicity. The effects of interstellar reddening on various Faint Object Camera colors are analyzed as well as the observational requirements for obtaining data of a given signal-to-noise for each of the 14 bandpasses.
Li, Junfeng; Wan, Xiaoxia
2018-01-15
To enrich the contents of digital archives and to guide the copying and restoration of colored relics, non-invasive methods for the extraction of painting boundaries and the identification of pigment composition are proposed in this study, based on visible spectral images of colored relics. The superpixel concept is applied for the first time to the oversegmentation of visible spectral images, and is used on the visible spectral images of colored relics to extract their painting boundaries. Since different pigments are characterized by their own spectra and the same kind of pigment has a similar geometric profile in spectrum, an automatic identification method is established by comparing the proximity between the geometric profile of the unknown spectrum from each superpixel and the pre-known spectra from a deliberately prepared database. The methods are validated using visible spectral images of the ancient wall paintings in the Mogao Grottoes. The visible spectral images are captured by a multispectral imaging system consisting of two broadband filters and an RGB camera with high spatial resolution. Copyright © 2017 Elsevier B.V. All rights reserved.
High Speed Digital Camera Technology Review
NASA Technical Reports Server (NTRS)
Clements, Sandra D.
2009-01-01
A High Speed Digital Camera Technology Review (HSD Review) is being conducted to evaluate the state-of-the-shelf in this rapidly progressing industry. Five HSD cameras supplied by four camera manufacturers participated in a Field Test during the Space Shuttle Discovery STS-128 launch. Each camera was also subjected to Bench Tests in the ASRC Imaging Development Laboratory. Evaluation of the data from the Field and Bench Tests is underway. Representatives from the imaging communities at NASA / KSC and the Optical Systems Group are participating as reviewers. A High Speed Digital Video Camera Draft Specification was updated to address Shuttle engineering imagery requirements based on findings from this HSD Review. This draft specification will serve as the template for a High Speed Digital Video Camera Specification to be developed for the wider OSG imaging community under OSG Task OS-33.
Recurring Lineae on Slopes at Hale Crater, Mars
2015-09-28
Dark, narrow streaks on Martian slopes such as these at Hale Crater are inferred to be formed by seasonal flow of water on contemporary Mars. The streaks are roughly the length of a football field. The imaging and topographical information in this processed, false-color view come from the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter. These dark features on the slopes are called "recurring slope lineae" or RSL. Planetary scientists using observations with the Compact Reconnaissance Imaging Spectrometer on the same orbiter detected hydrated salts on these slopes at Hale Crater, corroborating the hypothesis that the streaks are formed by briny liquid water. The image was produced by first creating a 3-D computer model (a digital terrain map) of the area based on stereo information from two HiRISE observations, and then draping a false-color image over the land-shape model. The vertical dimension is exaggerated by a factor of 1.5 compared to horizontal dimensions. The camera records brightness in three wavelength bands: infrared, red and blue-green. The draped image is one product from HiRISE observation ESP_03070_1440. http://photojournal.jpl.nasa.gov/catalog/PIA19916
NASA Astrophysics Data System (ADS)
Dorey, C. K.; Ebenstein, David B.
1988-10-01
Subcellular localization of multiple biochemical markers is readily achieved through their characteristic autofluorescence or through the use of appropriately labelled antibodies. The recent development of specific probes has permitted elegant studies of calcium and pH in living cells. However, each of these methods measures fluorescence at one wavelength; precise quantitation of multiple fluorophores at individual sites within a cell has not been possible. Using DIFM, we have achieved spectral analysis of discrete subcellular particles 1-2 µm in diameter. The fluorescence emission is broken into narrow bands by an interference monochromator and visualized through the combined use of a silicon intensified target (SIT) camera, a microcomputer-based framegrabber with 8-bit resolution, and a color video monitor. Image acquisition, processing, analysis, and display are under software control. The digitized image can be corrected for the spectral distortions induced by the wavelength-dependent sensitivity of the camera, and the displayed image can be enhanced or presented in pseudocolor to facilitate discrimination of variations in the pixel intensity of individual particles. For rapid comparison of the fluorophore composition of granules, a ratio image is produced by dividing the image captured at one wavelength by that captured at another. In the resultant ratio image, a granule whose fluorophore composition differs from the majority is selectively colored. This powerful system has been utilized to obtain spectra of endogenous autofluorescent compounds in discrete cellular organelles of human retinal pigment epithelium, and to measure immunohistochemically labelled components of the extracellular matrix associated with the human optic nerve.
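The ratio-image step described above, dividing the image captured at one wavelength by that captured at another so that granules with an atypical fluorophore mix stand out, is straightforward to sketch; the wavelengths and pixel values below are invented for illustration.

```python
import numpy as np

def ratio_image(img_wl1, img_wl2, eps=1e-6):
    """Divide two narrow-band fluorescence images pixel by pixel.

    Granules whose fluorophore composition differs from the majority
    appear as outliers in the resulting ratio map and can be
    selectively colored for display.
    """
    a = np.asarray(img_wl1, float)
    b = np.asarray(img_wl2, float)
    return a / np.maximum(b, eps)   # eps guards against division by zero

# Hypothetical images at two emission bands; one pixel has a different mix
img_550 = np.array([[100.0, 50.0],
                    [ 80.0, 82.0]])
img_600 = np.array([[ 50.0, 50.0],
                    [ 40.0, 41.0]])
r = ratio_image(img_550, img_600)   # most pixels near 2.0; the 1.0 pixel is the outlier
```

In the real system the two input frames would first be corrected for the wavelength-dependent sensitivity of the camera, as described above.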
NASA Technical Reports Server (NTRS)
Leptuch, Peter A.
2002-01-01
The flow phenomena of buoyant jets have been analyzed by many researchers in recent years. Few, however, have studied jets in microgravity conditions, and the exact nature of the flow under these conditions has until recently been unknown. This study seeks to extend the work done by researchers at the University of Oklahoma in examining and documenting the behavior of helium jets in microgravity conditions. Quantitative rainbow schlieren deflectometry data have been obtained for helium jets discharging vertically into quiescent ambient air from tubes of several diameters at various flow rates, using a high-speed digital camera. These data were obtained before, during, and after the onset of microgravity conditions. High-speed rainbow schlieren deflectometry was developed for this study through the installation and use of a high-speed digital camera and modifications to the optical setup. Higher temporal resolution of the transitional phase between terrestrial and microgravity conditions has been obtained, which has reduced the averaging effect of the longer exposure times used in all previous schlieren studies. Results include color schlieren images, color time-space images (temporal evolution images), frequency analyses, contour plots of hue, and contour plots of helium mole fraction. The results, which focus primarily on the periods before and during the onset of microgravity, show that the pulsation of the jets normally found in terrestrial ("earth"-gravity) conditions ceases, and the gradients in helium diminish to produce a widening of the jet in microgravity conditions. In addition, the results show that the disturbances propagate upstream from a downstream source.
Camera Ready: Capturing a Digital History of Chester
ERIC Educational Resources Information Center
Lehman, Kathy
2008-01-01
Armed with digital cameras, voice recorders, and movie cameras, students from Thomas Dale High School in Chester, Virginia, have been exploring neighborhoods, interviewing residents, and collecting memories of their hometown. In this article, the author describes "Digital History of Chester", a project for creating a commemorative DVD.…
Mastcam Special Filters Help Locate Variations Ahead
2017-11-01
This pair of images from the Mast Camera (Mastcam) on NASA's Curiosity rover illustrates how special filters are used to scout terrain ahead for variations in the local bedrock. The upper panorama is in the Mastcam's usual full color, for comparison. The lower panorama of the same scene, in false color, combines three exposures taken through different "science filters," each selecting for a narrow band of wavelengths. Filters and image processing steps were selected to make stronger signatures of hematite, an iron-oxide mineral, evident as purple. Hematite is of interest in this area of Mars -- partway up "Vera Rubin Ridge" on lower Mount Sharp -- as holding clues about ancient environmental conditions under which that mineral originated. In this pair of panoramas, the strongest indications of hematite appear related to areas where the bedrock is broken up. With information from this Mastcam reconnaissance, the rover team selected destinations in the scene for close-up investigations to gain understanding about the apparent patchiness in hematite spectral features. The Mastcam's left-eye camera took the component images of both panoramas on Sept. 12, 2017, during the 1,814th Martian day, or sol, of Curiosity's work on Mars. The view spans from south-southeast on the left to south-southwest on the right. The foreground across the bottom of the scene is about 50 feet (about 15 meters) wide. Figure 1 includes scale bars of 1 meter (3.3 feet) in the middle distance and 5 meters (16 feet) at upper right. Curiosity's Mastcam combines two cameras: the right eye with a telephoto lens and the left eye with a wider-angle lens. Each camera has a filter wheel that can be rotated in front of the lens for a choice of eight different filters. One filter for each camera is clear to all visible light, for regular full-color photos, and another is specifically for viewing the Sun. 
Some of the other filters were selected to admit wavelengths of light that are useful for identifying iron minerals. Each of the filters used for the lower panorama shown here admits light from a narrow band of wavelengths, extending to only about 5 to 10 nanometers longer or shorter than the filter's central wavelength. The three observations combined into this product used filters centered at three near-infrared wavelengths: 751 nanometers, 867 nanometers and 1,012 nanometers. Hematite distinctively absorbs some frequencies of infrared light more than others. Usual color photographs from digital cameras -- such as the upper panorama here from Mastcam -- combine information from red, green and blue filtering. The filters are in a microscopic grid in a "Bayer" filter array situated directly over the detector behind the lens, with wider bands of wavelengths. The colors of the upper panorama, as with most featured images from Mastcam, have been tuned with a color adjustment similar to white balancing for approximating how the rocks and sand would appear under daytime lighting conditions on Earth. https://photojournal.jpl.nasa.gov/catalog/PIA22065
Color image analysis of contaminants and bacteria transport in porous media
NASA Astrophysics Data System (ADS)
Rashidi, Mehdi; Dehmeshki, Jamshid; Daemi, Mohammad F.; Cole, Larry; Dickenson, Eric
1997-10-01
Transport of contaminants and bacteria in aqueous heterogeneous saturated porous systems has been studied experimentally using a novel fluorescent microscopic imaging technique. The approach involves color visualization and quantification of bacterium and contaminant distributions within a transparent porous column. By introducing stained bacteria and an organic dye as a contaminant into the column and illuminating the porous regions with a planar laser sheet, contaminant and bacterial transport processes through the porous medium can be observed and measured microscopically. A computer-controlled color CCD camera is used to record the fluorescent images as a function of time. These images are recorded by a frame-accurate, high-resolution VCR and are then analyzed using a color image analysis code written in our laboratories. The color images are digitized this way, and simultaneous concentration and velocity distributions of both contaminant and bacteria are evaluated as a function of time and pore characteristics. The approach provides a unique dynamic probe for observing these transport processes microscopically. These results are extremely valuable in in-situ bioremediation problems, since microscopic particle-contaminant-bacterium interactions are the key to understanding and optimizing these processes.
Color calibration of an RGB camera mounted in front of a microscope with strong color distortion.
Charrière, Renée; Hébert, Mathieu; Trémeau, Alain; Destouches, Nathalie
2013-07-20
This paper aims to show that color calibration of an RGB camera can be achieved even when the optical system in front of the camera introduces strong color distortion. In the present case, the optical system is a microscope containing a halogen lamp, with a nonuniform irradiance on the viewed surface. The calibration method proposed in this work is based on an existing method, but is preceded by a three-step preprocessing of the RGB images that extracts relevant color information from the strongly distorted images, taking into account in particular the nonuniform irradiance map and the perturbing texture due to the surface topology of standard color calibration charts when observed at micrometric scale. The proposed color calibration process consists first in computing the average color of the color-chart patches viewed under the microscope; then computing white balance, gamma correction, and saturation enhancement; and finally applying a third-order polynomial regression color calibration transform. Despite the unusual conditions for color calibration, fairly good performance is achieved with a 48-patch Lambertian color chart: an average CIE-94 color difference on the color-chart colors of less than 2.5 units is obtained.
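The third-order polynomial regression mentioned above is commonly implemented as a least-squares fit from expanded camera-RGB terms to the reference patch colors. The sketch below makes that generic assumption; the exact term set and reference color space used by the authors are not reproduced here.

```python
import numpy as np

def poly3_features(rgb):
    """Expand RGB into third-order polynomial terms.

    This is one common choice of 14 terms; the paper's exact term set
    may differ.
    """
    rgb = np.asarray(rgb, float)
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.column_stack([
        np.ones_like(r), r, g, b,
        r * g, r * b, g * b,
        r ** 2, g ** 2, b ** 2,
        r * g * b, r ** 3, g ** 3, b ** 3,
    ])

def fit_color_transform(camera_rgb, reference_colors):
    """Least-squares fit from expanded RGB features to reference colors
    (one row per chart patch, e.g. 48 rows for a 48-patch chart)."""
    X = poly3_features(camera_rgb)
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(reference_colors, float),
                                 rcond=None)
    return coeffs                      # shape (14, 3)

def apply_color_transform(camera_rgb, coeffs):
    """Map camera RGB through the fitted polynomial transform."""
    return poly3_features(camera_rgb) @ coeffs
```

In the workflow above, `camera_rgb` would hold the preprocessed average patch colors and `reference_colors` the measured chart values; the color-difference evaluation (e.g. CIE-94) is then computed between the transformed and reference colors.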
NASA Astrophysics Data System (ADS)
Gliss, Christine; Parel, Jean-Marie A.; Flynn, John T.; Pratisto, Hans S.; Niederer, Peter F.
2003-07-01
We present a miniaturized version of a fundus camera, designed for use in screening for retinopathy of prematurity (ROP). There, as well as in other applications, a small, lightweight, digital camera system can be extremely useful. We present a small wide-angle digital camera system whose handpiece is significantly smaller and lighter than in all other systems. The electronics are truly portable, fitting in a standard board case. The camera is designed to be offered at a competitive price. Data from tests on young rabbits' eyes are presented. The development of the camera system is part of a telemedicine project screening for ROP. Telemedical applications are a perfect fit for this camera system, exploiting both of its advantages: portability and digital imaging.
NASA Astrophysics Data System (ADS)
Zhang, Chun-Sen; Zhang, Meng-Meng; Zhang, Wei-Xing
2017-01-01
This paper outlines a low-cost, user-friendly photogrammetric technique using nonmetric cameras to obtain digital image sequences of an excavation site, based on photogrammetry and computer vision. Digital camera calibration, automatic aerial triangulation, image feature extraction, image sequence matching, and dense digital differential rectification are used, combined with a number of global control points at the excavation site, to reconstruct high-precision three-dimensional (3-D) models. Using the acrobatic figurines in the Qin Shi Huang mausoleum excavation as an example, our method solves the problems of a small base-to-height ratio, high inclination, unstable altitudes, and significant ground-elevation changes that affect image matching. Compared to 3-D laser scanning, the 3-D color point cloud obtained by this method maintains the same visual result and has the advantages of low project cost, simple data processing, and high accuracy. Structure-from-motion (SfM) is often used to reconstruct 3-D models of large scenes, but it has lower accuracy when reconstructing a 3-D model of a small scene at close range. Results indicate that this method quickly achieves 3-D reconstruction of large archaeological sites and produces orthophotos of the heritage site distribution, providing a scientific basis for the accurate location of cultural relics, archaeological excavation, investigation, and site-protection planning. The proposed method has comprehensive application value.
Miniaturized unified imaging system using bio-inspired fluidic lens
NASA Astrophysics Data System (ADS)
Tsai, Frank S.; Cho, Sung Hwan; Qiao, Wen; Kim, Nam-Hyong; Lo, Yu-Hwa
2008-08-01
Miniaturized imaging systems have become ubiquitous as they are found in an ever-increasing number of devices, such as cellular phones, personal digital assistants, and web cameras. Until now, the design and fabrication methodology of such systems have not been significantly different from conventional cameras. The only established method to achieve focusing is by varying the lens distance. On the other hand, the variable-shape crystalline lens found in animal eyes offers inspiration for a more natural way of achieving an optical system with high functionality. Learning from the working concepts of the optics in the animal kingdom, we developed bio-inspired fluidic lenses for a miniature universal imager with auto-focusing, macro, and super-macro capabilities. Because of the enormous dynamic range of fluidic lenses, the miniature camera can even function as a microscope. To compensate for the image quality difference between the central vision and peripheral vision and the shape difference between a solid-state image sensor and a curved retina, we adopted a hybrid design consisting of fluidic lenses for tunability and fixed lenses for aberration and color dispersion correction. A design of the world's smallest surgical camera with 3X optical zoom capabilities is also demonstrated using the approach of hybrid lenses.
NASA Astrophysics Data System (ADS)
Reulke, R.; Baltrusch, S.; Brunn, A.; Komp, K.; Kresse, W.; von Schönermark, M.; Spreckels, V.
2012-08-01
10 years after the first introduction of a digital airborne mapping camera at the ISPRS conference 2000 in Amsterdam, several digital cameras are now available. They are well established in the market and have replaced the analogue camera. A general improvement in image quality accompanied the digital camera development. The signal-to-noise ratio and the dynamic range are significantly better than with the analogue cameras. In addition, digital cameras can be spectrally and radiometrically calibrated. The use of these cameras has required a rethinking in many places, though, and new data products were introduced. In recent years, several activities took place that should lead to a better understanding of the cameras and the data they produce. Several projects, like those of the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) or EuroSDR (European Spatial Data Research), were conducted to test and compare the performance of the different cameras. In this paper the current DIN (Deutsches Institut fuer Normung - German Institute for Standardization) standards are presented. These include the standard for digital cameras, the standard for ortho-rectification, the standard for classification, and the standard for pan-sharpening. In addition, standards for the derivation of elevation models, the use of Radar / SAR, and image quality are in preparation. The OGC has indicated its interest in participating in that development and has already published specifications in the field of photogrammetry and remote sensing. One goal of joint future work could be to merge these formerly independent developments and to jointly develop a suite of implementation specifications for photogrammetry and remote sensing.
3D Reconstruction of Static Human Body with a Digital Camera
NASA Astrophysics Data System (ADS)
Remondino, Fabio
2003-01-01
Nowadays the 3D reconstruction and modeling of real humans is one of the most challenging problems and a topic of great interest. The human models are used for movies, video games, or ergonomics applications, and they are usually created with 3D scanner devices. In this paper a new method to reconstruct the shape of a static human is presented. Our approach is based on photogrammetric techniques and uses a sequence of images acquired around a standing person with a digital still-video camera or with a camcorder. First the images are calibrated and oriented using a bundle adjustment. After the establishment of a stable adjusted image block, an image matching process is performed between consecutive triplets of images. Finally, the 3D coordinates of the matched points are computed with a mean accuracy of ca. 2 mm by forward ray intersection. The obtained point cloud can then be triangulated to generate a surface model of the body, or a virtual human model can be fitted to the recovered 3D data. Results of the 3D human point cloud with pixel color information are presented.
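Forward ray intersection, the step that computes the 3D coordinates of matched points above, can be sketched as a least-squares intersection of the camera rays through a matched image point; the camera positions and directions below are a hypothetical example, not data from the paper.

```python
import numpy as np

def intersect_rays(origins, directions):
    """Least-squares 3-D point closest to a set of camera rays.

    Each ray is given by a camera center (origin) and a direction
    through the matched image point. The closest point minimizes the
    sum of squared perpendicular distances to all rays.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, float)
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ np.asarray(o, float)
    return np.linalg.solve(A, b)

# Two hypothetical rays that both pass through the point (1, 2, 3)
origins = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])]
directions = [np.array([1.0, 2.0, 3.0]), np.array([0.0, 2.0, 3.0])]
point = intersect_rays(origins, directions)
```

With noisy real matches the rays do not meet exactly, and the residual perpendicular distances give a per-point accuracy estimate of the kind quoted above (ca. 2 mm).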
Single chip camera active pixel sensor
NASA Technical Reports Server (NTRS)
Shaw, Timothy (Inventor); Pain, Bedabrata (Inventor); Olson, Brita (Inventor); Nixon, Robert H. (Inventor); Fossum, Eric R. (Inventor); Panicacci, Roger A. (Inventor); Mansoorian, Barmak (Inventor)
2003-01-01
A totally digital single-chip camera includes communications circuitry to operate most of its structure in serial communication mode. The digital single-chip camera includes a D/A converter for converting an input digital word into an analog reference signal. The chip includes all of the necessary circuitry for operating the chip using a single pin.
Dropping In a Microgravity Environment (DIME) contest
NASA Technical Reports Server (NTRS)
2001-01-01
The first NASA Dropping In a Microgravity Environment (DIME) student competition pilot project came to a conclusion at the Glenn Research Center in April 2001. The competition involved high-school student teams who developed the concept for a microgravity experiment and prepared an experiment proposal. The two student teams - COSI Academy, sponsored by the Columbus Center of Science and Industry, and another team from Cincinnati, Ohio's Sycamore High School, designed a microgravity experiment, fabricated the experimental apparatus, and visited NASA Glenn to operate their experiment in the 2.2 Second Drop Tower. Here Carol Hodanbosi of the National Center for Microgravity Research and Jose Carrion, a lab mechanic with AKAC, prepare a student experiment package (inside the silver-colored frame) inside the orange-colored drag shield that encloses all experiment hardware. This image is from a digital still camera; higher resolution is not available.
Noise reduction techniques for Bayer-matrix images
NASA Astrophysics Data System (ADS)
Kalevo, Ossi; Rantanen, Henry
2002-04-01
In this paper, arrangements for applying Noise Reduction (NR) techniques to images captured by a single-sensor digital camera are studied. Usually, the NR filter processes full three-color-component image data. This requires that the raw Bayer-matrix image data available from the image sensor first be interpolated using a Color Filter Array Interpolation (CFAI) method. Alternatively, the raw Bayer-matrix image data can be processed directly. The advantages and disadvantages of both processing orders, before (pre-) CFAI and after (post-) CFAI, are studied with linear, multi-stage median, multi-stage median hybrid and median-rational filters. The comparison is based on the quality of the output image, the processing-power requirements and the amount of memory needed. A solution that improves the preservation of details when NR filtering is applied before the CFAI is also proposed.
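A minimal sketch of the CFAI step for an RGGB Bayer mosaic, against which the pre- and post-CFAI filtering orders could be compared. The paper does not specify its CFAI method, so the standard bilinear kernels are assumed here (pure NumPy):

```python
import numpy as np

def conv3(img, k):
    """3x3 correlation with zero padding (kernels below are symmetric)."""
    p = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def demosaic_bilinear(raw):
    """Bilinear CFA interpolation for an RGGB Bayer mosaic.
    Returns an (H, W, 3) float image."""
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask
    k_rb = np.array([[.25, .5, .25], [.5, 1, .5], [.25, .5, .25]])
    k_g = np.array([[0, .25, 0], [.25, 1, .25], [0, .25, 0]])
    chans = []
    for mask, k in ((r_mask, k_rb), (g_mask, k_g), (b_mask, k_rb)):
        num = conv3(raw * mask, k)      # spread known samples
        den = conv3(mask, k)            # normalize by sample coverage
        chans.append(num / den)
    return np.stack(chans, axis=-1)
```

Filtering `raw` before this step (pre-CFAI) touches one value per pixel; filtering the stacked output (post-CFAI) touches three, which is the memory/processing trade-off the paper quantifies.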
Calibration Image of Earth by Mars Color Imager
NASA Technical Reports Server (NTRS)
2005-01-01
Three days after the Mars Reconnaissance Orbiter's Aug. 12, 2005, launch, the NASA spacecraft was pointed toward Earth and the Mars Color Imager camera was powered up to acquire a suite of color and ultraviolet images of Earth and the Moon. When it gets to Mars, the Mars Color Imager's main objective will be to obtain daily global color and ultraviolet images of the planet to observe martian meteorology by documenting the occurrence of dust storms, clouds, and ozone. This camera will also observe how the martian surface changes over time, including changes in frost patterns and surface brightness caused by dust storms and dust devils. The purpose of acquiring an image of Earth and the Moon just three days after launch was to help the Mars Color Imager science team obtain a measure, in space, of the instrument's sensitivity, as well as to check that no contamination occurred on the camera during launch. Prior to launch, the team determined that, three days out from Earth, the planet would only be about 4.77 pixels across, and the Moon would be less than one pixel in size, as seen from the Mars Color Imager's wide-angle perspective. If the team waited any longer than three days to test the camera's performance in space, Earth would be too small to obtain meaningful results. The images were acquired by turning Mars Reconnaissance Orbiter toward Earth, then slewing the spacecraft so that the Earth and Moon would pass before each of the five color and two ultraviolet filters of the Mars Color Imager. The distance to Earth was about 1,170,000 kilometers (about 727,000 miles). This image shows a color composite view of Mars Color Imager's image of Earth. As expected, it covers only five pixels. This color view has been enlarged five times. The Sun was illuminating our planet from the left, thus only one quarter of Earth is seen from this perspective. 
North America was in daylight and facing toward the camera at the time the picture was taken; the data from the camera were being transmitted in real time to the Deep Space Network antennas in Goldstone, California.
Using Digital Imaging in Classroom and Outdoor Activities.
ERIC Educational Resources Information Center
Thomasson, Joseph R.
2002-01-01
Explains how to use digital cameras and related basic equipment during indoor and outdoor activities. Uses digital imaging in a general botany class to identify unknown fungus samples. Explains how to select a digital camera and other necessary equipment. (YDS)
A color-communication scheme for digital imagery
Acosta, Alex
1987-01-01
Color pictures generated from digital images are frequently used by geologists, foresters, range managers, and others. These color products are preferred over black-and-white pictures because the human eye is more sensitive to color differences than to various shades of gray. Color discrimination is a function of perception, and therefore colors in these color composites are generally described subjectively, which can lead to ambiguous color communication. Numerous color-coordinate systems are available that quantitatively relate digital triplets representing amounts of red, green, and blue to the parameters of hue, saturation, and intensity perceived by the eye. Most of these systems implement a complex transformation of the primary colors to a color space that is hard to visualize, thus making it difficult to relate digital triplets to perception parameters. This paper presents a color-communication scheme that relates colors on a color triangle to corresponding values of "hue" (H), "saturation" (S), and chromaticity coordinates (x,y,z). The scheme simplifies the relation between red, green, and blue (RGB) digital triplets and the color generated by these triplets. Some examples of the use of the color-communication scheme in digital image processing are presented.
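The normalization from an RGB digital triplet to chromaticity coordinates on a color triangle can be sketched as below. The function name and the black-pixel convention are my own, and the paper's exact H and S formulas are not reproduced here:

```python
def rgb_to_chromaticity(r, g, b):
    """Normalized chromaticity coordinates of an RGB triplet.
    x + y + z == 1, so the triplet maps to a point on a color
    triangle; (1/3, 1/3, 1/3) is the achromatic (gray) point."""
    s = r + g + b
    if s == 0:
        return (1 / 3, 1 / 3, 1 / 3)   # treat pure black as achromatic
    return (r / s, g / s, b / s)
```

Because the coordinates sum to one, two numbers locate any color on the triangle, which is what makes the scheme easy to communicate.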
Issues in implementing services for a wireless web-enabled digital camera
NASA Astrophysics Data System (ADS)
Venkataraman, Shyam; Sampat, Nitin; Fisher, Yoram; Canosa, John; Noel, Nicholas
2001-05-01
The competition in the exploding digital photography market has caused vendors to explore new ways to increase their return on investment. A common view among industry analysts is that increasingly it will be services provided by these cameras, and not the cameras themselves, that will provide the revenue stream. These services will be coupled to e-appliance-based communities. In addition, the rapidly increasing need to upload images to the Internet for photo-finishing services, as well as the need to download software upgrades to the camera, is driving many camera OEMs to evaluate the benefits of using the wireless web to extend their enterprise systems. Currently, creating a viable e-appliance such as a digital camera coupled with a wireless web service requires more than just a competency in product development. This paper will evaluate the system implications in the deployment of recurring-revenue services and enterprise connectivity of a wireless, web-enabled digital camera. These include, among other things, an architectural design approach for services such as device management, synchronization, billing, connectivity, security, etc. Such an evaluation will assist, we hope, anyone designing or connecting a digital camera to enterprise systems.
Voss with video camera in Service Module
2001-04-08
ISS002-E-5329 (08 April 2001) --- Astronaut James S. Voss, Expedition Two flight engineer, sets up a video camera on a mounting bracket in the Zvezda / Service Module of the International Space Station (ISS). A 35mm camera and a digital still camera are also visible nearby. This image was recorded with a digital still camera.
NASA Astrophysics Data System (ADS)
Li, C.; Li, F.; Liu, Y.; Li, X.; Liu, P.; Xiao, B.
2012-07-01
Building 3D reconstruction based on ground remote sensing data (images, video and lidar) inevitably faces the problem that buildings are often occluded by vegetation, so automatically removing and repairing vegetation occlusion is an important preprocessing step for image understanding, computer vision and digital photogrammetry. In traditional multispectral remote sensing from aeronautic and space platforms, indices computed from the Red and Near-infrared (NIR) bands, such as the NDVI (Normalized Difference Vegetation Index), are useful to distinguish vegetation and clouds, among other targets. However, especially on ground platforms, CIR (Color Infra-Red) imagery is little used by computer vision and digital photogrammetry, which usually consider only true-color RGB. Whether CIR is necessary for vegetation segmentation is therefore a significant question, since most close-range cameras do not provide such an NIR band. Moreover, the CIE L*a*b* color space, obtained by transformation from RGB, has attracted little interest from photogrammetrists despite its power in image classification and analysis. Hence, CIE (L, a, b) features and a support vector machine (SVM) are suggested for vegetation segmentation as a substitute for CIR. Finally, experimental results on visual effect and automation are given. The conclusion is that it is feasible to remove and segment vegetation occlusion without an NIR band. This work should pave the way for texture reconstruction and repair in future 3D reconstruction.
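The NDVI mentioned above is a simple per-pixel index over the NIR and Red bands; a minimal sketch (the `eps` guard against division by zero is my addition):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - R) / (NIR + R).
    Values near +1 indicate dense green vegetation; bare soil and
    man-made surfaces fall near zero or below."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)
```

The paper's point is precisely that this index needs an NIR band that close-range cameras lack, motivating the Lab+SVM alternative.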
Transient full-field vibration measurement using spectroscopical stereo photogrammetry.
Yue, Kaiduan; Li, Zhongke; Zhang, Ming; Chen, Shan
2010-12-20
In contrast to other vibration measurement methods, a novel spectroscopical photogrammetric approach is proposed. Two colored light filters and a color CCD camera are used to perform the function of two traditional cameras. A new calibration method is then presented: it focuses on the vibrating object rather than the camera, and has the advantage of being more accurate than traditional camera calibration. The test results have shown an accuracy of 0.02 mm.
Overview of Digital Forensics Algorithms in Dslr Cameras
NASA Astrophysics Data System (ADS)
Aminova, E.; Trapeznikov, I.; Priorov, A.
2017-05-01
The widespread use of mobile technologies and the improvement of digital photo devices have led to more frequent cases of image falsification, including in judicial practice. Consequently, an important task for up-to-date digital image processing tools is the development of algorithms for determining the source and model of a DSLR (Digital Single Lens Reflex) camera and for improving image formation algorithms. Most research in this area is based on the observation that a unique sensor trace of a DSLR camera can be extracted at a certain stage of the in-camera imaging process. This study focuses on the problem of determining unique features of DSLR cameras based on optical-subsystem artifacts and sensor noise.
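The sensor-trace idea can be illustrated by extracting a noise residual from an image and correlating it against a camera "fingerprint". The sketch below uses a crude box blur as the denoiser purely for illustration; practical PRNU pipelines use wavelet denoising and fingerprints averaged over many known images:

```python
import numpy as np

def noise_residual(img, k=3):
    """Noise residual: image minus a box-blur denoised copy.
    The box blur is only a stand-in for a real denoiser."""
    p = np.pad(img.astype(float), k // 2, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for i in range(k):
        for j in range(k):
            out += p[i:i + img.shape[0], j:j + img.shape[1]]
    return img - out / (k * k)

def fingerprint_correlation(res, fingerprint):
    """Normalized correlation between a residual and a fingerprint;
    values near 1 suggest the same sensor produced both."""
    a = res - res.mean()
    b = fingerprint - fingerprint.mean()
    return float((a * b).sum() /
                 np.sqrt((a * a).sum() * (b * b).sum() + 1e-12))
```

Source attribution then reduces to comparing an image's residual against each candidate camera's fingerprint and picking the highest correlation.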
A Pipeline for 3D Digital Optical Phenotyping Plant Root System Architecture
NASA Astrophysics Data System (ADS)
Davis, T. W.; Shaw, N. M.; Schneider, D. J.; Shaff, J. E.; Larson, B. G.; Craft, E. J.; Liu, Z.; Kochian, L. V.; Piñeros, M. A.
2017-12-01
This work presents a new pipeline for digital optical phenotyping of the root system architecture of agricultural crops. The pipeline begins with a 3D root-system imaging apparatus for hydroponically grown crop lines of interest. The apparatus acts as a self-contained darkroom, which includes an imaging tank, a motorized rotating bearing and a digital camera. The pipeline continues with the Plant Root Imaging and Data Acquisition (PRIDA) software, which is responsible for image capture and storage. Once root images have been captured, image post-processing is performed using the Plant Root Imaging Analysis (PRIA) command-line tool, which extracts root pixels from color images. Following this pre-processing binarization of the digital root images, 3D trait characterization is performed using the next-generation RootReader3D software. RootReader3D measures global root system architecture traits, such as total root system volume and length, total number of roots, and maximum rooting depth and width. While designed to work together, the four stages of the phenotyping pipeline are modular and stand-alone, which provides flexibility and adaptability for various research endeavors.
SPRUCE Vegetation Phenology in Experimental Plots from Phenocam Imagery, 2015-2016
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richardson, Andrew D.; Hufkens, Koen; Milliman, Thomas
This data set consists of PhenoCam data from the SPRUCE experiment from the beginning of whole ecosystem warming in August 2015 through the end of 2017. Digital cameras, or phenocams, installed in each SPRUCE enclosure track seasonal variation in vegetation "greenness", a proxy for vegetation phenology and associated physiological activity. Regions of interest (ROIs) were defined for vegetation types (1) Picea trees (EN, evergreen needleleaf); (2) Larix trees (DN, deciduous needleleaf); and (3) the mixed shrub layer (SH, shrubs). This data set consists of two sets of data files: (1) standard "3-day summary product files" for each camera and each ROI (i.e. vegetation type), characterizing vegetation color at a 3-day time step and (2) a "transition date file" containing the estimated "greenness rising" (spring) and "greenness falling" (autumn) transition dates.
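The "greenness" tracked by phenocams is conventionally the green chromatic coordinate, GCC = G / (R + G + B), computed per pixel over each ROI; a minimal sketch (the function name and the zero-sum handling are mine; the released 3-day products additionally aggregate GCC over each time window):

```python
import numpy as np

def gcc(rgb_roi):
    """Green chromatic coordinate per pixel: G / (R + G + B).
    Input is an (H, W, 3) array of camera digital numbers."""
    r, g, b = (rgb_roi[..., i].astype(float) for i in range(3))
    s = r + g + b
    # Guard against all-zero pixels without branching per pixel.
    return np.where(s > 0, g / np.where(s > 0, s, 1.0), 0.0)

roi = np.array([[[60.0, 120.0, 60.0]]])   # one greenish pixel
```

Normalizing by total brightness makes the index robust to day-to-day illumination changes, which is why it works as a phenology proxy.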
Single-shot dual-wavelength in-line and off-axis hybrid digital holography
NASA Astrophysics Data System (ADS)
Wang, Fengpeng; Wang, Dayong; Rong, Lu; Wang, Yunxin; Zhao, Jie
2018-02-01
We propose an in-line and off-axis hybrid holographic real-time imaging technique. The in-line and off-axis digital holograms are generated simultaneously by two lasers with different wavelengths, and they are recorded using a color camera in a single shot. The reconstruction is carried out using an iterative algorithm whose initial input is designed to include the intensity of the in-line hologram and the approximate phase distribution obtained from the off-axis hologram. In this way, the complex field in the object plane output by the iterative procedure yields higher-quality amplitude and phase images than traditional iterative phase retrieval. The performance of the technique has been demonstrated by acquiring amplitude and phase images of a green lacewing's wing and a living moon jellyfish.
Digital ocular fundus imaging: a review.
Bernardes, Rui; Serranho, Pedro; Lobo, Conceição
2011-01-01
Ocular fundus imaging plays a key role in monitoring the health status of the human eye. Currently, a large number of imaging modalities allow the assessment and/or quantification of ocular changes from a healthy status. This review focuses on the main digital fundus imaging modality, color fundus photography, with a brief overview of complementary techniques, such as fluorescein angiography. While focusing on two-dimensional color fundus photography, the authors address the evolution from nondigital to digital imaging and its impact on diagnosis. They also compare several studies performed along the transitional path of this technology. Retinal image processing and analysis, automated disease detection and identification of the stage of diabetic retinopathy (DR) are addressed as well. The authors emphasize the problems of image segmentation, focusing on the major landmark structures of the ocular fundus: the vascular network, optic disk and the fovea. Several proposed approaches for the automatic detection of signs of disease onset and progression, such as microaneurysms, are surveyed. A thorough comparison is conducted among different studies with regard to the number of eyes/subjects, imaging modality, fundus camera used, field of view and image resolution to identify the large variation in characteristics from one study to another. Similarly, the main features of the proposed classifications and algorithms for the automatic detection of DR are compared, thereby addressing computer-aided diagnosis and computer-aided detection for use in screening programs. Copyright © 2011 S. Karger AG, Basel.
An approach of characterizing the degree of spatial color mixture
NASA Astrophysics Data System (ADS)
Chu, Miao; Tian, Shaohui
2017-07-01
Digital camouflage arranges mosaics of different colors according to certain rules; compared with traditional camouflage, it performs better against reconnaissance at different distances. The better performance of digital camouflage is mainly attributed to spatial color mixture, which is also a key factor in improving digital camouflage design. However, research on spatial color mixture is relatively scarce and provides inadequate support for digital camouflage design. Therefore, based on the process of spatial color mixture, this paper proposes an effective parameter, the spatial-color-mixture ratio, to characterize the degree of spatial color mixture. The experimental results show that the spatial-color-mixture ratio is feasible and effective in practice, and could provide a new direction for further research on digital camouflage.
Measurement of luminance noise and chromaticity noise of LCDs with a colorimeter and a color camera
NASA Astrophysics Data System (ADS)
Roehrig, H.; Dallas, W. J.; Krupinski, E. A.; Redford, Gary R.
2007-09-01
This communication focuses on the physical evaluation of the image quality of displays for applications in medical imaging. In particular we were interested in the luminance noise as well as the chromaticity noise of LCDs. Luminance noise has been encountered in the study of monochrome LCDs for some time, but chromaticity noise is a new type of noise, first encountered when monochrome and color LCDs were compared in an ROC study. In the present study one color and one monochrome 3-megapixel LCD were studied. Both were DICOM-calibrated with equal dynamic range. We used a Konica Minolta Chroma Meter CS-200 as well as a Foveon color camera to estimate the luminance and chrominance variations of the displays, and a simulation experiment to estimate luminance noise. The measurements with the colorimeter were consistent. The measurements with the Foveon color camera were very preliminary, as color cameras had never before been used for image quality measurements; however, they were extremely promising. The measurements with the colorimeter and the simulation results showed that the luminance and chromaticity noise of the color LCD were larger than those of the monochrome LCD. Provided that an adequate calibration method and an image QA/QC program for color displays become available, we expect color LCDs may be ready for radiology in the very near future.
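A luminance-noise figure of the kind measured with a colorimeter is essentially the relative variation of repeated (or spatial) luminance samples on a nominally uniform patch. The sketch below uses std/mean as the estimator, which is an assumption; the paper does not state its exact metric:

```python
import numpy as np

def luminance_noise(samples):
    """Relative luminance noise of a nominally uniform patch:
    standard deviation over mean of the luminance samples (cd/m^2)."""
    samples = np.asarray(samples, dtype=float)
    return samples.std() / samples.mean()
```

The same statistic applied per chromaticity channel (e.g. u', v' coordinates) would give a chromaticity-noise analogue.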
Center for Coastline Security Technology, Year 3
2008-05-01
[Figure/section-list residue; recoverable items: "Polarization control for 3D imaging with the Sony SRX-R105 digital cinema projectors"; "HDMAX camera and Sony SRX-R105 projector configuration for 3D"; "HDMAX camera pair"; "Sony SRX-R105 digital cinema projector"; "Effect of camera rotation on projected overlay image".] The report describes a system that combines a pair of FAU's HD-MAX video cameras with a pair of Sony SRX-R105 digital cinema projectors for stereo imaging and projection.
HOMER: the Holographic Optical Microscope for Education and Research
NASA Astrophysics Data System (ADS)
Luviano, Anali
Holography was invented in 1948 by Dennis Gabor and has undergone major advancements since the 2000s, leading to the development of commercial digital holographic microscopes (DHMs). This noninvasive form of microscopy produces a three-dimensional (3-D) digital model of a sample without altering or destroying it, thus allowing the same sample to be studied multiple times. HOMER - the Holographic Optical Microscope for Education and Research - produces a 3-D image from a two-dimensional (2-D) interference pattern captured by a camera and then processed by reconstruction software. This 2-D pattern is created when a reference wave interacts with the sample to produce a secondary wave that interferes with the unaltered part of the reference wave. I constructed HOMER as an efficient, portable in-line DHM using inexpensive materials and free reconstruction software. HOMER uses three different-colored LEDs as light sources. I am testing the performance of HOMER with the goal of producing tri-color images of samples. I am using small basic biological samples to test the effectiveness of HOMER and plan to transition to complex cellular and biological specimens as I pursue my interest in biophysics. Norwich University.
NASA Astrophysics Data System (ADS)
Chen, Liang-Chia; Chang, Yi-Wei; Li, Hau-Wei
2012-08-01
Full-field chromatic confocal surface profilometry employing a digital micromirror device (DMD) for spatial correspondence is proposed to minimize lateral cross-talk between individual detection sensors. Although full-field chromatic confocal profilometry is capable of enhancing measurement efficiency by completely removing the time-consuming vertical scanning operation, its vertical measurement resolution and accuracy are still severely affected by potential lateral cross-talk between sensors. To overcome this critical bottleneck, a DMD-based chromatic confocal method is developed, employing a specially designed objective for chromatic light dispersion and a DMD for lateral pixel correspondence and scanning, thereby reducing the influence of lateral cross-talk. Using the chromatic objective, the incident light is dispersed over a pre-designed detection range of several hundred micrometers, and the full-field reflected light is captured by a three-chip color camera for multi-color detection. With this method, the full width at half maximum of the depth response curve can be significantly sharpened, improving the vertical measurement resolution and the repeatability of depth detection. A preliminary experimental evaluation verified that the ±3σ repeatability of the height measurement can be kept within 2% of the overall measurement range.
1985 ACSM-ASPRS Fall Convention, Indianapolis, IN, September 8-13, 1985, Technical Papers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1985-01-01
Papers are presented on Landsat image data quality analysis, primary data acquisition, cartography, geodesy, land surveying, and the applications of satellite remote sensing data. Topics discussed include optical scanning and interactive color graphics; the determination of astrolatitudes and astrolongitudes using x, y, z-coordinates on the celestial sphere; raster-based contour plotting from digital elevation models using minicomputers or microcomputers; the operational techniques of the GPS when utilized as a survey instrument; public land surveying and high technology; the use of multitemporal Landsat MSS data for studying forest cover types; interpretation of satellite and aircraft L-band synthetic aperture radar imagery; geological analysis of Landsat MSS data; and an interactive real time digital image processing system. Consideration is given to a large format reconnaissance camera; creating an optimized color balance for TM and MSS imagery; band combination selection for visual interpretation of thematic mapper data for resource management; the effect of spatial filtering on scene noise and boundary detail in thematic mapper imagery; the evaluation of the geometric quality of thematic mapper photographic data; and the analysis and correction of Landsat 4 and 5 thematic mapper sensor data.
Low-cost conversion of the Polaroid MD-4 land camera to a digital gel documentation system.
Porch, Timothy G; Erpelding, John E
2006-04-30
A simple, inexpensive design is presented for the rapid conversion of the popular MD-4 Polaroid land camera to a high quality digital gel documentation system. Images of ethidium bromide stained DNA gels captured using the digital system were compared to images captured on Polaroid instant film. Resolution and sensitivity were enhanced using the digital system. In addition to the low cost and superior image quality of the digital system, there is also the added convenience of real-time image viewing through the swivel LCD of the digital camera, wide flexibility of gel sizes, accurate automatic focusing, variable image resolution, and consistent ease of use and quality. Images can be directly imported to a computer by using the USB port on the digital camera, further enhancing the potential of the digital system for documentation, analysis, and archiving. The system is appropriate for use as a start-up gel documentation system and for routine gel analysis.
Imagers for digital still photography
NASA Astrophysics Data System (ADS)
Bosiers, Jan; Dillen, Bart; Draijer, Cees; Manoury, Erik-Jan; Meessen, Louis; Peters, Inge
2006-04-01
This paper gives an overview of the requirements for, and current state-of-the-art of, CCD and CMOS imagers for use in digital still photography. Four market segments will be reviewed: mobile imaging, consumer "point-and-shoot cameras", consumer digital SLR cameras and high-end professional camera systems. The paper will also present some challenges and innovations with respect to packaging, testing, and system integration.
Bell, James F.; Godber, A.; McNair, S.; Caplinger, M.A.; Maki, J.N.; Lemmon, M.T.; Van Beek, J.; Malin, M.C.; Wellington, D.; Kinch, K.M.; Madsen, M.B.; Hardgrove, C.; Ravine, M.A.; Jensen, E.; Harker, D.; Anderson, Ryan; Herkenhoff, Kenneth E.; Morris, R.V.; Cisneros, E.; Deen, R.G.
2017-01-01
The NASA Curiosity rover Mast Camera (Mastcam) system is a pair of fixed-focal length, multispectral, color CCD imagers mounted ~2 m above the surface on the rover's remote sensing mast, along with associated electronics and an onboard calibration target. The left Mastcam (M-34) has a 34 mm focal length, an instantaneous field of view (IFOV) of 0.22 mrad, and a FOV of 20° × 15° over the full 1648 × 1200 pixel span of its Kodak KAI-2020 CCD. The right Mastcam (M-100) has a 100 mm focal length, an IFOV of 0.074 mrad, and a FOV of 6.8° × 5.1° using the same detector. The cameras are separated by 24.2 cm on the mast, allowing stereo images to be obtained at the resolution of the M-34 camera. Each camera has an eight-position filter wheel, enabling it to take Bayer pattern red, green, and blue (RGB) "true color" images, multispectral images in nine additional bands spanning ~400–1100 nm, and images of the Sun in two colors through neutral density-coated filters. An associated Digital Electronics Assembly provides command and data interfaces to the rover, 8 Gb of image storage per camera, 11-bit to 8-bit companding, JPEG compression, and acquisition of high-definition video. Here we describe the preflight and in-flight calibration of Mastcam images, the ways that they are being archived in the NASA Planetary Data System, and the ways that calibration refinements are being developed as the investigation progresses on Mars. We also provide some examples of data sets and analyses that help to validate the accuracy and precision of the calibration.
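The quoted IFOVs follow directly from the small-angle relation IFOV = pixel pitch / focal length, using the KAI-2020's 7.4 µm pixel pitch; a quick check (function names are mine):

```python
import math

PIXEL_PITCH_M = 7.4e-6   # Kodak KAI-2020 pixel pitch (7.4 um)

def ifov_mrad(focal_length_m):
    """Instantaneous field of view per pixel, small-angle approximation."""
    return PIXEL_PITCH_M / focal_length_m * 1e3

def fov_deg(focal_length_m, n_pixels):
    """Full field of view subtended by n_pixels, in degrees."""
    return math.degrees(2 * math.atan(n_pixels * PIXEL_PITCH_M /
                                      (2 * focal_length_m)))

# M-34 (f = 34 mm): ifov_mrad(0.034) gives ~0.22 mrad
# M-100 (f = 100 mm): ifov_mrad(0.1) gives 0.074 mrad
```

Across the detector these give roughly 20° and 7° horizontal FOVs; small differences from the published 20° × 15° and 6.8° × 5.1° depend on which active-pixel count is used.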
NASA Astrophysics Data System (ADS)
Bachche, Shivaji; Oka, Koichi
2013-06-01
This paper presents a comparative study of various color space models to determine the most suitable one for the detection of green sweet peppers. The images were captured using CCD cameras and infrared cameras and processed with Halcon image processing software. An LED ring around the camera neck was used as artificial lighting to enhance the feature parameters. For color images, the CIELab, YIQ, YUV, HSI and HSV color spaces were selected for image processing, and for infrared images the grayscale color space. For color images, the HSV color space model gave the highest percentage of green sweet pepper detections, followed by the HSI model, as both provide information in terms of hue/lightness/chroma or hue/lightness/saturation, which is often more relevant for discriminating the fruit in an image at a specific threshold value. Overlapping fruits, or fruits covered by leaves, are detected better in the HSV color space because the reflection from fruits shows a higher histogram response than the reflection from leaves. The IR 80 optical filter failed to distinguish fruits in the images, as the filter blocks useful feature information. Computation of the 3D coordinates of the recognized green sweet peppers was also conducted, in which the Halcon software provides the location and orientation of the fruits accurately. The depth accuracy along the Z axis was examined: a distance of 500 to 600 mm between the cameras and the fruits was found suitable for computing the depth precisely when the baseline between the two cameras was maintained at 100 mm.
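Hue-based discrimination of green fruit in HSV can be sketched with the standard-library `colorsys` module. All threshold values below are illustrative assumptions, not the paper's tuned values:

```python
import colorsys
import numpy as np

def green_mask(rgb, h_lo=0.20, h_hi=0.45, s_min=0.3, v_min=0.2):
    """Boolean mask of 'green' pixels via HSV thresholds.
    rgb: (H, W, 3) array with values in 0..255. The hue window and
    saturation/value floors are illustrative only."""
    h, w, _ = rgb.shape
    out = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            r, g, b = rgb[i, j] / 255.0
            hh, ss, vv = colorsys.rgb_to_hsv(r, g, b)
            out[i, j] = (h_lo <= hh <= h_hi) and ss >= s_min and vv >= v_min
    return out
```

Thresholding on hue with separate saturation/value floors is what makes HSV convenient here: shading changes V while leaving H largely intact, so shadowed or partly occluded fruit keeps its hue.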
Innovative Camera and Image Processing System to Characterize Cryospheric Changes
NASA Astrophysics Data System (ADS)
Schenk, A.; Csatho, B. M.; Nagarajan, S.
2010-12-01
The polar regions play an important role in Earth's climatic and geodynamic systems. Digital photogrammetric mapping provides a means for monitoring the dramatic changes observed in the polar regions during the past decades. High-resolution, photogrammetrically processed digital aerial imagery provides complementary information to surface measurements obtained by laser altimetry systems. While laser points accurately sample the ice surface, stereo images allow for the mapping of features such as crevasses, flow bands, shear margins, moraines, leads, and different types of sea ice. Tracking features in repeat images produces a dense velocity vector field that can either serve as validation for interferometrically derived surface velocities or constitute a stand-alone product. A multi-modal photogrammetric platform consists of one or more high-resolution commercial color cameras, a GPS/inertial navigation system, and an optional laser scanner. Such a system, using a Canon EOS-1DS Mark II camera, was first flown on the IceBridge missions in Fall 2009 and Spring 2010, capturing hundreds of thousands of images at a rate of about one frame per second. While digital images and videos have long been used for visual inspection, precise 3D measurements with low-cost commercial cameras require special photogrammetric treatment that only became available recently. Calibrating the multi-camera imaging system and geo-referencing the images are absolute prerequisites for all subsequent applications. Commercial cameras are inherently non-metric; that is, their sensor model is only approximately known. Since these cameras are not as rugged as photogrammetric cameras, the interior orientation also changes due to temperature and pressure changes and aircraft vibration, resulting in large errors in 3D measurements. It is therefore necessary to calibrate the cameras frequently, at least whenever the system is newly installed.
Geo-referencing the images is performed by the Applanix navigation system. Our new method enables 3D reconstruction of the ice sheet surface with high accuracy and unprecedented detail, as demonstrated by examples from the Antarctic Peninsula acquired by the IceBridge mission. Repeat digital imaging also provides data for determining surface elevation changes and velocities, which are critical parameters for ice sheet models. Although these methods work well, there are known problems with satellite images and traditional area-based matching, especially over rapidly changing outlet glaciers. To take full advantage of high-resolution repeat stereo imaging, we have developed a new method. The processing starts with the generation of a DEM from geo-referenced stereo images of the first time epoch. The next step is concerned with extracting and matching interest points in object space. Since an interest point moves its spatial position between two time epochs, such points are only radiometrically conjugate, not geometrically. In fact, the geometric displacement of two identical points, together with the time difference, yields velocities. We computed the evolution of the velocity field and surface topography on the floating tongue of the Jakobshavn glacier from historical stereo aerial photographs to illustrate the approach.
Adjustment of multi-CCD-chip-color-camera heads
NASA Astrophysics Data System (ADS)
Guyenot, Volker; Tittelbach, Guenther; Palme, Martin
1999-09-01
The principle of beam-splitter multi-chip cameras consists of splitting an image into multiple images of different spectral ranges and distributing these onto separate black-and-white CCD sensors. The resulting electrical signals from the chips are recombined to produce a high-quality color picture on the monitor. Because this principle guarantees higher resolution and sensitivity in comparison to conventional single-chip camera heads, the greater effort is acceptable. Furthermore, multi-chip cameras obtain the complete spectral information for each individual object point, while single-chip systems must rely on interpolation. In a joint project, Fraunhofer IOF and STRACON GmbH (and, in the future, COBRA electronic GmbH) are developing methods for designing the optics and dichroic mirror system of such prism color beam splitter devices. Additionally, techniques and equipment for the alignment and assembly of color-beam-splitter multi-CCD devices on the basis of gluing with UV-curable adhesives have been developed.
Perceptual Color Characterization of Cameras
Vazquez-Corral, Javier; Connah, David; Bertalmío, Marcelo
2014-01-01
Color camera characterization, mapping outputs from the camera sensors to an independent color space such as XYZ, is an important step in the camera processing pipeline. Until now, this procedure has been primarily solved by using a 3 × 3 matrix obtained via a least-squares optimization. In this paper, we propose to use the spherical sampling method, recently published by Finlayson et al., to perform a perceptual color characterization. In particular, we search for the 3 × 3 matrix that minimizes three different perceptual errors, one pixel-based and two spatially based. For the pixel-based case, we minimize the CIE ΔE error, while for the spatial-based case, we minimize both the S-CIELAB error and the CID error measure. Our results demonstrate an improvement of approximately 3% for the ΔE error, 7% for the S-CIELAB error and 13% for the CID error measures. PMID:25490586
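As a point of reference for the least-squares baseline the paper improves on, a minimal sketch of fitting a 3 × 3 RGB-to-XYZ characterization matrix might look like this; the matrix and the 24 "patches" are synthetic stand-ins, not real measurements.

```python
import numpy as np

# Least-squares fit of a 3x3 matrix M mapping camera RGB to XYZ.
# M_true and the training patches are synthetic stand-ins for data
# that would normally come from measured color charts.
rng = np.random.default_rng(0)
M_true = np.array([[0.41, 0.36, 0.18],
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])
rgb = rng.uniform(0.0, 1.0, size=(24, 3))   # 24 training patches
xyz = rgb @ M_true.T                        # noiseless XYZ targets

# Solve min ||rgb @ M.T - xyz||^2, one output channel at a time
M_fit, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
M_fit = M_fit.T
print(np.round(M_fit, 3))
```

With real, noisy measurements the recovered matrix would only approximate the true mapping, which is precisely the residual error the perceptual optimization in the paper targets.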
NASA Technical Reports Server (NTRS)
Clegg, R. H.; Scherz, J. P.
1975-01-01
Successful aerial photography depends on aerial cameras providing acceptable photographs within the cost restrictions of the job. For topographic mapping, where ultimate accuracy is required, only large-format mapping cameras will suffice. For mapping environmental patterns of vegetation, soils, or water pollution, 9-inch cameras often exceed accuracy and cost requirements, and small formats may be better. In choosing the best camera for environmental mapping, relative capabilities and costs must be understood. This study compares the resolution, photo-interpretation potential, metric accuracy, and cost of 9-inch, 70mm, and 35mm cameras for obtaining simultaneous color and color-infrared photography for environmental mapping purposes.
Automatic source camera identification using the intrinsic lens radial distortion
NASA Astrophysics Data System (ADS)
Choi, Kai San; Lam, Edmund Y.; Wong, Kenneth K. Y.
2006-11-01
Source camera identification refers to the task of matching digital images with the cameras that are responsible for producing these images. This is an important task in image forensics, which in turn is a critical procedure in law enforcement. Unfortunately, few digital cameras are equipped with the capability of producing watermarks for this purpose. In this paper, we demonstrate that it is possible to achieve a high rate of accuracy in the identification by noting the intrinsic lens radial distortion of each camera. To reduce manufacturing cost, the majority of digital cameras are equipped with lenses having rather spherical surfaces, whose inherent radial distortions serve as unique fingerprints in the images. We extract, for each image, parameters from aberration measurements, which are then used to train and test a support vector machine classifier. We conduct extensive experiments to evaluate the success rate of a source camera identification with five cameras. The results show that this is a viable approach with high accuracy. Additionally, we also present results on how the error rates may change with images captured using various optical zoom levels, as zooming is commonly available in digital cameras.
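The classification stage can be sketched as follows. This is a toy stand-in: two cameras with invented two-parameter distortion fingerprints (k1, k2), separated by a minimal linear SVM trained with the Pegasos subgradient method, whereas the paper uses five cameras and a full SVM implementation.

```python
import numpy as np

# Toy stand-in for the paper's classification stage: two cameras with
# distinct radial-distortion fingerprints (k1, k2), separated by a
# minimal linear SVM trained via Pegasos subgradient updates.
# All feature values here are invented for illustration.
rng = np.random.default_rng(0)
c_a = np.array([-0.10, 0.05])               # camera A fingerprint
c_b = np.array([0.12, -0.08])               # camera B fingerprint
X = np.vstack([c_a + rng.normal(0, 0.005, (40, 2)),
               c_b + rng.normal(0, 0.005, (40, 2))])
X = np.hstack([X, np.ones((80, 1))])        # bias feature
y = np.repeat([1.0, -1.0], 40)

w, lam = np.zeros(3), 0.01
for t in range(1, 2001):                    # Pegasos hinge-loss updates
    i = rng.integers(80)
    eta = 1.0 / (lam * t)
    margin = y[i] * (X[i] @ w)
    w *= 1.0 - eta * lam                    # regularization shrink
    if margin < 1.0:
        w += eta * y[i] * X[i]              # subgradient step

accuracy = np.mean(np.sign(X @ w) == y)
print(accuracy)
```

Because the synthetic fingerprints are well separated relative to their noise, a linear boundary suffices here; the paper's real aberration features are noisier and motivate a full SVM with kernel selection.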
Making Connections with Digital Data
ERIC Educational Resources Information Center
Leonard, William; Bassett, Rick; Clinger, Alicia; Edmondson, Elizabeth; Horton, Robert
2004-01-01
State-of-the-art digital cameras open up enormous possibilities in the science classroom, especially when used as data collectors. Because most high school students are not fully formal thinkers, the digital camera can provide a much richer learning experience than traditional observation. Data taken through digital images can make the…
Fragrant pear sexuality recognition with machine vision
NASA Astrophysics Data System (ADS)
Ma, Benxue; Ying, Yibin
2006-10-01
In this research, a method to identify the sexuality of Kuler fragrant pears with machine vision was developed. The Kuler fragrant pear has male and female forms, which differ noticeably in flavor. To detect the sexuality of a Kuler fragrant pear, images were acquired with a CCD color camera. Before feature extraction, preprocessing was applied to the acquired images to remove noise and unnecessary content. Color, perimeter, and area features of the pear-bottom image were extracted by digital image processing techniques, and the sexuality was determined from a complexity measure computed from perimeter and area. Using 128 Kuler fragrant pears as samples, a good recognition rate between male and female pears was obtained (82.8%). The results show this method can distinguish male from female pears with good accuracy.
Portable plant chlorophyll fluorimeter based on blue LED rapid induced technology
NASA Astrophysics Data System (ADS)
Zheng, Yibo; Mi, Ting; Zhang, Lei; Zhao, Jun
2018-01-01
The fluorimeter is an effective device for detecting chlorophyll-a content in plants. In order to realize real-time, nondestructive detection of plant leaves, a fluorescence instrument based on a dichroic mirror has been developed. A blue LED is used as the excitation light source, and a lens shapes and focuses the excitation light to ensure excitation intensity and uniform illumination. The device uses a 45-degree dichroic mirror to separate the chlorophyll-a excitation light path from the fluorescence receiving light path. Finally, the fluorescence signal is collected by a silicon photocell and processed by the circuit, which transmits the digital information to the display. Analysis of the experimental data shows that the device has the advantages of small size, portability, and fast response, and can be widely applied in outdoor teaching and field investigation.
False-Color Image of an Impact Crater on Vesta
2011-08-24
NASA's Dawn spacecraft obtained this false-color image (right) of an impact crater in asteroid Vesta's equatorial region with its framing camera on July 25, 2011. The view on the left is from the camera's clear filter.
Procurement specification color graphic camera system
NASA Technical Reports Server (NTRS)
Prow, G. E.
1980-01-01
The performance and design requirements for a Color Graphic Camera System are presented. The system is a functional part of the Earth Observation Department Laboratory System (EODLS) and will be interfaced with Image Analysis Stations. It will convert the output of a raster-scan computer color terminal into permanent, high-resolution photographic prints and transparencies. The images usually displayed will be remotely sensed LANDSAT scenes.
Combining color and shape information for illumination-viewpoint invariant object recognition.
Diplaros, Aristeidis; Gevers, Theo; Patras, Ioannis
2006-01-01
In this paper, we propose a new scheme that merges color- and shape-invariant information for object recognition. To obtain robustness against photometric changes, color-invariant derivatives are computed first. Color invariance is an important aspect of any object recognition scheme, as color changes considerably with variations in illumination, object pose, and camera viewpoint. These color-invariant derivatives are then used to obtain similarity-invariant shape descriptors. Shape invariance is equally important, as, under a change in camera viewpoint and object pose, the shape of a rigid object undergoes a perspective projection on the image plane. The color and shape invariants are then combined in a multidimensional color-shape context, which is subsequently used as an index. As the indexing scheme makes use of a color-shape-invariant context, it provides a highly discriminative information cue that is robust against varying imaging conditions. The matching function of the color-shape context allows for fast recognition, even in the presence of object occlusion and clutter. The experimental results show that the method recognizes rigid objects with high accuracy in 3-D complex scenes and is robust against changing illumination, camera viewpoint, object pose, and noise.
Teichmann, A Lina; Nieuwenstein, Mark R; Rich, Anina N
2017-08-01
For digit-color synaesthetes, digits elicit vivid experiences of color that are highly consistent for each individual. The conscious experience of synaesthesia is typically unidirectional: digits evoke colors but not vice versa. There is an ongoing debate about whether synaesthetes have a memory advantage over non-synaesthetes. One key question in this debate is whether synaesthetes have a general superiority or whether any benefit is specific to a certain type of material. Here, we focus on immediate serial recall and ask digit-color synaesthetes and controls to memorize digit and color sequences. We developed a sensitive staircase method manipulating presentation duration to measure participants' serial recall of both overlearned and novel sequences. Our results show that synaesthetes can activate digit information to enhance serial memory for color sequences. When color sequences corresponded to ascending or descending digit sequences, synaesthetes encoded these sequences at a faster rate than their non-synaesthete counterparts and faster than non-structured color sequences. However, encoding color sequences is approximately 200 ms slower than encoding digit sequences directly, independent of group and condition, which shows that the translation process is time-consuming. These results suggest memory advantages in synaesthesia require a modified dual-coding account, in which secondary (synaesthetically linked) information is useful only if it is more memorable than the primary information to be recalled. Our study further shows that duration thresholds are a sensitive method to measure subtle differences in serial recall performance. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Lo, Mei-Chun; Hsieh, Tsung-Hsien; Perng, Ruey-Kuen; Chen, Jiong-Qiao
2010-01-01
The aim of this research is to derive illuminant-independent HDR imaging modules that can optimally reconstruct, in multispectral terms, every color of concern in high-dynamic-range original images for preferable cross-media color reproduction applications. Each module, based on either a broadband or a multispectral approach, would incorporate models of perceptual HDR tone mapping and device characterization. In this study, an xvYCC-format HDR digital camera was used to capture HDR scene images for testing. A tone-mapping module was derived based on a multiscale representation of the human visual system, using equations similar to the Michaelis-Menten photoreceptor adaptation equation. Additionally, an adaptive bilateral type of gamut-mapping algorithm, using a previously derived multiple-converging-points approach, was incorporated with or without adaptive unsharp masking (USM) to optimize HDR image rendering. An LCD with the Adobe RGB (D65) standard color space was used as a soft-proofing platform to display HDR original RGB images and to evaluate both the rendition quality and the prediction performance of the derived modules. Another LCD with the sRGB standard color space was used to test the gamut-mapping algorithms integrated with the derived tone-mapping module.
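A tone curve of the Michaelis-Menten (Naka-Rushton) form referred to above can be sketched as follows; the exponent and the choice of tying the semi-saturation constant to the geometric-mean luminance are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Michaelis-Menten (Naka-Rushton) style tone curve:
#   V = L^n / (L^n + sigma^n)
# sigma is a semi-saturation constant; tying it to the geometric-mean
# luminance of the image is an assumed, commonly used choice.
def tone_map(lum, n=0.73, sigma=None):
    lum = np.asarray(lum, dtype=float)
    if sigma is None:
        # geometric mean, with a small offset to guard against log(0)
        sigma = np.exp(np.mean(np.log(lum + 1e-6)))
    return lum**n / (lum**n + sigma**n)

hdr = np.array([0.01, 0.1, 1.0, 10.0, 1000.0])  # toy HDR luminances
ldr = tone_map(hdr)
print(ldr)  # five orders of magnitude compressed into (0, 1)
```

The curve compresses the dynamic range monotonically into (0, 1), which is the property the multiscale tone-mapping module relies on before gamut mapping.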
Evaluating video digitizer errors
NASA Astrophysics Data System (ADS)
Peterson, C.
2016-01-01
Analog-output video cameras remain popular for recording meteor data. Although these cameras uniformly employ electronic detectors with fixed pixel arrays, the digitization process requires resampling the horizontal lines as they are output in order to reconstruct the pixel data, usually resulting in a new data array of different horizontal dimensions than the native sensor. Pixel timing is not provided by the camera and must be reconstructed from line-sync information embedded in the analog video signal. Using a technique based on hot pixels, I present evidence that jitter, sync detection, and other timing errors introduce both position and intensity errors that are not present in cameras which internally digitize their sensors and output the digital data directly.
A target detection multi-layer matched filter for color and hyperspectral cameras
NASA Astrophysics Data System (ADS)
Miyanishi, Tomoya; Preece, Bradley L.; Reynolds, Joseph P.
2018-05-01
In this article, a method for applying matched filters to a 3-dimensional hyperspectral data cube is discussed. In many applications, color visible cameras or hyperspectral cameras are used for target detection where the color or spectral optical properties of the imaged materials are partially known in advance. Therefore, the use of matched filtering with spectral data along with shape data is an effective method for detecting certain targets. Since many methods for 2D image filtering have been researched, we propose a multi-layer filter where ordinary spatially matched filters are used before the spectral filters. We discuss a way to layer the spectral filters for a 3D hyperspectral data cube, accompanied by a detectability metric for calculating the SNR of the filter. This method is appropriate for visible color cameras and hyperspectral cameras. We also demonstrate an analysis using the Night Vision Integrated Performance Model (NV-IPM) and a Monte Carlo simulation in order to confirm the effectiveness of the filtering in providing a higher output SNR and a lower false alarm rate.
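A per-pixel spectral matched filter of the kind discussed can be sketched as below; the cube, target spectrum, and whitening by the background covariance are synthetic illustrations, not the NV-IPM analysis itself.

```python
import numpy as np

# Spectral matched filter w ∝ C^-1 s applied per pixel of a toy
# hyperspectral cube; s is the known target spectrum and C the
# background band covariance. One target is implanted at pixel (8, 8).
rng = np.random.default_rng(1)
bands, h, w = 8, 16, 16
cube = rng.normal(0.0, 1.0, size=(h, w, bands))  # background clutter
s = np.linspace(1.0, 2.0, bands)                 # known target spectrum
cube[8, 8] += 5.0 * s                            # implant one target

x = cube.reshape(-1, bands)
mu = x.mean(axis=0)
C = np.cov(x, rowvar=False)
w_mf = np.linalg.solve(C, s)                     # matched-filter weights
score = ((x - mu) @ w_mf).reshape(h, w)          # per-pixel filter score
peak = np.unravel_index(score.argmax(), score.shape)
print(peak)
```

The implanted target dominates the score map because whitening by the background covariance maximizes the output SNR for the known spectrum, which is the detectability metric's rationale.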
Computerized digital dermoscopy.
Gewirtzman, A J; Braun, R P
2003-01-01
Within the past 15 years, dermoscopy has become a widely used non-invasive technique for physicians to better visualize pigmented lesions. Dermoscopy has helped trained physicians to better diagnose pigmented lesions. Now, the digital revolution is beginning to enhance standard dermoscopic procedures. Using digital dermoscopy, physicians are better able to document pigmented lesions for patient follow-up and to get second opinions, either through teledermoscopy with an expert colleague or by using computer-assisted diagnosis. As the market for digital dermoscopy products begins to grow, so do the number of decisions physicians need to make when choosing a system to fit their needs. The current market for digital dermoscopy includes two varieties of relatively simple and cheap attachments which can convert a consumer digital camera into a digital dermoscope. A coupling adapter acts as a fastener between the camera and an ordinary dermoscope, whereas a dermoscopy attachment includes the dermoscope optics and light source and can be attached directly to the camera. Other options for digital dermoscopy include complete dermoscopy systems that use a hand-held video camera linked directly to a computer. These systems differ from each other in whether or not they are calibrated as well as the quality of the camera and software interface. Another option in digital skin imaging involves spectral analysis rather than dermoscopy. This article serves as a guide to the current systems available and their capabilities.
Mertens, Jan E.J.; Roie, Martijn Van; Merckx, Jonas; Dekoninck, Wouter
2017-01-01
Digitization of specimen collections has become a key priority of many natural history museums. The camera systems built for this purpose are expensive, providing a barrier in institutes with limited funding and thereby hampering progress. An assessment is made of whether a low-cost compact camera with image-stacking functionality can help expedite the digitization process in large museums or provide smaller institutes and amateur entomologists with the means to digitize their collections. Images from a professional setup were compared with those from the Olympus Stylus TG-4 Tough, a low-cost compact camera with internal focus-stacking functions. Parameters considered include image quality, digitization speed, price, and ease of use. The compact camera’s image quality, although inferior to the professional setup, is exceptional considering its fourfold lower price point. Producing the image slices in the compact camera is a matter of seconds, and when optimal image quality is less of a priority, the internal stacking function omits the need for dedicated stacking software altogether, further decreasing the cost and speeding up the process. In general, it is found that, aware of its limitations, this compact camera is capable of digitizing entomological collections with sufficient quality. As technology advances, more institutes and amateur entomologists will be able to easily and affordably catalogue their specimens. PMID:29134038
The Speed of Serial Attention Shifts in Visual Search: Evidence from the N2pc Component.
Grubert, Anna; Eimer, Martin
2016-02-01
Finding target objects among distractors in a visual search display is often assumed to be based on sequential movements of attention between different objects. However, the speed of such serial attention shifts is still under dispute. We employed a search task that encouraged the successive allocation of attention to two target objects in the same search display and measured N2pc components to determine how fast attention moved between these objects. Each display contained one digit in a known color (fixed-color target) and another digit whose color changed unpredictably across trials (variable-color target), together with two gray distractor digits. Participants' task was to find the fixed-color digit and compare its numerical value with that of the variable-color digit. N2pc components to fixed-color targets preceded N2pc components to variable-color digits, demonstrating that these two targets were indeed selected in a fixed serial order. The N2pc to variable-color digits emerged approximately 60 msec after the N2pc to fixed-color digits, which shows that attention can be reallocated very rapidly between different target objects in the visual field. When search display durations were increased, thereby relaxing the temporal demands on serial selection, the two N2pc components to fixed-color and variable-color targets were elicited within 90 msec of each other. These results demonstrate that sequential shifts of attention between different target locations can operate very rapidly, at speeds that are in line with the assumptions of serial selection models of visual search.
Streak camera based SLR receiver for two color atmospheric measurements
NASA Technical Reports Server (NTRS)
Varghese, Thomas K.; Clarke, Christopher; Oldham, Thomas; Selden, Michael
1993-01-01
To realize accurate two-color differential measurements, an image digitizing system with variable spatial resolution was designed, built, and integrated with a photon-counting picosecond streak camera, yielding a temporal scan resolution better than 300 femtoseconds/pixel. The streak camera is configured to operate with three spatial channels; two of these support green (532 nm) and UV (355 nm), while the third accommodates reference pulses (764 nm) for real-time calibration. Critical parameters affecting differential timing accuracy, such as pulse width and shape, number of received photons, streak camera/imaging system nonlinearities, dynamic range, and noise characteristics, were investigated to optimize the system for accurate differential delay measurements. The streak camera output image consists of three image fields; each field is 1024 pixels along the time axis and 16 pixels across the spatial axis. Each of the image fields may be independently positioned across the spatial axis. Two of the image fields are used for the two wavelengths used in the experiment; the third window measures the temporal separation of a pair of diode laser pulses which verify the streak camera sweep speed for each data frame. The sum of the 16 pixel intensities across each of the 1024 temporal positions for the three data windows is used to extract the three waveforms. The waveform data are processed using an iterative three-point running-average filter (10 to 30 iterations are used) to remove high-frequency structure. The pulse-pair separations are determined using half-max and centroid-type analyses. Rigorous experimental verification has demonstrated that this simplified process provides the best measurement accuracy. To calibrate the receiver system sweep, two laser pulses with precisely known temporal separation are scanned along the full length of the sweep axis. The experimental measurements are then modeled using polynomial regression to obtain a best fit to the data.
Data aggregation using the normal-point approach has provided accurate data-fitting and has been found to be much more convenient than using the full-rate single-shot data. The systematic errors from this model have been found to be less than 3 ps for normal points.
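The waveform processing described above, an iterative three-point running average followed by centroid-type pulse-position estimation, can be sketched as follows, with synthetic Gaussian pulses standing in for the recorded streak-camera waveforms.

```python
import numpy as np

# Iterative three-point running average, then centroid estimates of
# pulse position; two synthetic Gaussian pulses stand in for the
# recorded waveforms, and the pulse separation is recovered.
def smooth(y, iterations=20):
    kernel = np.full(3, 1.0 / 3.0)
    for _ in range(iterations):
        y = np.convolve(y, kernel, mode="same")
    return y

def centroid(y):
    t = np.arange(y.size)
    return np.sum(t * y) / np.sum(y)

t = np.arange(1024)
pulse = lambda c: np.exp(-0.5 * ((t - c) / 5.0) ** 2)
waveform = pulse(300.0) + pulse(700.0)

first, second = waveform[:512], waveform[512:]
sep = (centroid(smooth(second)) + 512.0) - centroid(smooth(first))
print(round(sep, 2))
```

Because the three-point kernel is symmetric and the pulses sit well inside each window, the smoothing suppresses high-frequency structure without biasing the centroids, so the recovered separation matches the true 400-pixel spacing.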
NPS assessment of color medical displays using a monochromatic CCD camera
NASA Astrophysics Data System (ADS)
Roehrig, Hans; Gu, Xiliang; Fan, Jiahua
2012-02-01
This paper presents an approach to Noise Power Spectrum (NPS) assessment of color medical displays without using an expensive imaging colorimeter. The R, G and B color uniform patterns were shown on the display under study and the images were taken using a high resolution monochromatic camera. A colorimeter was used to calibrate the camera images. Synthetic intensity images were formed by the weighted sum of the R, G, B and the dark screen images. Finally the NPS analysis was conducted on the synthetic images. The proposed method replaces an expensive imaging colorimeter for NPS evaluation, which also suggests a potential solution for routine color medical display QA/QC in the clinical area, especially when imaging of display devices is desired.
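The pipeline can be sketched roughly as below; the channel weights, pixel pitch, and image statistics are placeholders, since the real weights come from the colorimeter calibration and the frames from the monochromatic camera.

```python
import numpy as np

# Synthetic-intensity NPS sketch: weighted sum of R, G, B frames minus
# the dark frame, then a 2D noise power spectrum of the zero-mean image.
rng = np.random.default_rng(2)
shape = (128, 128)
r, g, b = (rng.normal(100.0, 2.0, shape) for _ in range(3))
dark = rng.normal(5.0, 0.5, shape)

w_r, w_g, w_b = 0.2126, 0.7152, 0.0722   # placeholder luminance weights
lum = w_r * r + w_g * g + w_b * b - dark # synthetic intensity image

p = 0.2                                  # assumed pixel pitch, mm
noise = lum - lum.mean()
nps = (np.abs(np.fft.fft2(noise)) ** 2) * (p * p) / noise.size
print(nps.shape)
```

With this DFT normalization, the integral of the NPS equals the pixel variance scaled by the pixel area (Parseval's relation), which is a useful sanity check on any NPS implementation.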
NPS assessment of color medical image displays using a monochromatic CCD camera
NASA Astrophysics Data System (ADS)
Roehrig, Hans; Gu, Xiliang; Fan, Jiahua
2012-10-01
This paper presents an approach to Noise Power Spectrum (NPS) assessment of color medical displays without using an expensive imaging colorimeter. The R, G and B color uniform patterns were shown on the display under study and the images were taken using a high resolution monochromatic camera. A colorimeter was used to calibrate the camera images. Synthetic intensity images were formed by the weighted sum of the R, G, B and the dark screen images. Finally the NPS analysis was conducted on the synthetic images. The proposed method replaces an expensive imaging colorimeter for NPS evaluation, which also suggests a potential solution for routine color medical display QA/QC in the clinical area, especially when imaging of display devices is desired.
Application of multispectral color photography to flame flow visualization
NASA Technical Reports Server (NTRS)
Stoffers, G.
1979-01-01
For flames of short duration and low radiation intensity, spectroscopic flame diagnostics is difficult. In order to find some other means of extracting information about the flame structure from its radiation, the feasibility of using multispectral color photography was successfully evaluated. Since the flame photographs are close-ups, there is considerable parallax between the single images when several cameras are used, and additive color viewing is not possible. Each image must be analyzed individually, so it is advisable to use color film in all cameras. One can either use color films of different spectral sensitivities or color films of the same type with different color filters. Sharp-cutting filters are recommended.
A digital gigapixel large-format tile-scan camera.
Ben-Ezra, M
2011-01-01
Although the resolution of single-lens reflex (SLR) and medium-format digital cameras has increased in recent years, applications in cultural-heritage preservation and computational photography require even higher resolutions. Addressing this issue, a large-format camera's large image plane can achieve very high resolution without compromising pixel size and thus can provide high-quality, high-resolution images. This digital large-format tile-scan camera can acquire high-quality, high-resolution images of static scenes. It employs unique calibration techniques and a simple algorithm for focal-stack processing of very large images with significant magnification variations. The camera automatically collects overlapping focal stacks and processes them into a high-resolution, extended-depth-of-field image.
Low-cost digital dynamic visualization system
NASA Astrophysics Data System (ADS)
Asundi, Anand K.; Sajan, M. R.
1995-05-01
High-speed photographic systems like the image rotation camera, the Cranz-Schardin camera, and the drum camera are typically used for recording and visualization of dynamic events in stress analysis, fluid mechanics, etc. All these systems are fairly expensive and generally not simple to use. Furthermore, they are all based on photographic film recording, requiring time-consuming and tedious wet processing of the films. Currently, digital cameras are to a certain extent replacing conventional cameras for static experiments. Recently, there has been a lot of interest in developing and modifying CCD architectures and recording arrangements for dynamic scene analysis. Here we report the use of a CCD camera operating in the Time Delay and Integration (TDI) mode for digitally recording dynamic scenes. Applications in solid as well as fluid impact problems are presented.
Crane, Nicole J; Gillern, Suzanne M; Tajkarimi, Kambiz; Levin, Ira W; Pinto, Peter A; Elster, Eric A
2010-10-01
We report the novel use of 3-charge-coupled-device (3-CCD) camera technology to infer tissue oxygenation. The technique can aid surgeons to reliably differentiate vascular structures and noninvasively assess laparoscopic intraoperative changes in renal tissue perfusion during and after warm ischemia. We analyzed select digital video images from 10 laparoscopic partial nephrectomies for their individual 3-CCD response. We enhanced surgical images by subtracting the red CCD response from the blue response and overlaying the calculated image on the original image. Mean intensity values for regions of interest were compared and used to differentiate arterial and venous vasculature, and ischemic and nonischemic renal parenchyma. The 3-CCD enhanced images clearly delineated the vessels in all cases. Arteries were indicated by an intense red color while veins were shown in blue. Differences in mean region-of-interest intensity values for arteries and veins were statistically significant (p <0.0001). Three-CCD analysis of pre-clamp and post-clamp renal images revealed visible, dramatic color enhancement for ischemic versus nonischemic kidneys. Differences in the mean region-of-interest intensity values were also significant (p <0.05). We present a simple use of conventional 3-CCD camera technology in a way that may provide urological surgeons with the ability to reliably distinguish vascular structures during hilar dissection, and to detect and monitor changes in renal tissue perfusion during and after warm ischemia. Copyright © 2010 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
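The blue-minus-red enhancement can be illustrated with a toy sketch; the pixel values are invented stand-ins for a red-dominant (arterial) and a blue-dominant (venous) region, not clinical data.

```python
import numpy as np

# Blue-minus-red channel enhancement on a synthetic frame: the top rows
# mimic a red-dominant "artery", the bottom rows a blue-dominant "vein".
frame = np.zeros((8, 8, 3))
frame[:4, :, 0] = 200.0   # artery rows: strong red response
frame[:4, :, 2] = 80.0
frame[4:, :, 0] = 90.0    # vein rows: relatively stronger blue
frame[4:, :, 2] = 150.0

enhanced = frame[..., 2] - frame[..., 0]   # blue minus red
artery_roi = enhanced[:4].mean()
vein_roi = enhanced[4:].mean()
print(artery_roi, vein_roi)   # negative for artery, positive for vein
```

The sign of the ROI mean separates the two vessel types, which mirrors how the enhanced overlay renders arteries and veins in contrasting colors.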
A credit card verifier structure using diffraction and spectroscopy concepts
NASA Astrophysics Data System (ADS)
Sumriddetchkajorn, Sarun; Intaravanne, Yuttana
2008-04-01
We propose and experimentally demonstrate an angle-multiplexing-based optical structure for verifying a credit card. Our key idea comes from the fact that the fine detail of the embossed hologram stamped on the credit card is hard to duplicate, and therefore its key color features can be used to distinguish between real and counterfeit cards. As the embossed hologram is a diffractive optical element, we shine, one at a time, a number of broadband light sources, each at a different incident angle, on the embossed hologram of the credit card, in such a way that a different color spectrum per incident angle is diffracted and separated in space. In this way, the number of pixels in each color plane is investigated. We then apply a feed-forward back-propagation neural network configuration to separate counterfeit credit cards from real ones. Our experimental demonstration, using two off-the-shelf broadband white light-emitting diodes, one digital camera, a 3-layer neural network, and a notebook computer, can identify all 69 counterfeit credit cards among eight real credit cards.
Synesthetic color experiences influence memory.
Smilek, Daniel; Dixon, Mike J; Cudahy, Cera; Merikle, Philip M
2002-11-01
We describe the extraordinary memory of C, a 21-year-old student who experiences synesthetic colors (i.e., photisms) when she sees, hears, or thinks of digits. Using three matrices of 50 digits, we tested C and 7 nonsynesthetes to evaluate whether C's synesthetic photisms influence her memory for digits. One matrix consisted of black digits, whereas the other two matrices consisted of digits that were either incongruent or congruent with the colors of C's photisms. C's recall of the incongruently colored digits was considerably poorer than her recall of either the black or the congruently colored digits. The 7 nonsynesthetes did not show such differences in their recall of the matrices. In addition, when immediate recall of the black digits was compared with delayed recall of those digits (48 hr), C showed no decrease in performance, whereas each of the nonsynesthetes showed a significant decrease. These findings both demonstrate C's extraordinary memory and show that her synesthetic photisms can influence her memory for digits.
Radiation Dosimetry via Automated Fluorescence Microscopy
NASA Technical Reports Server (NTRS)
Castleman, Kenneth R.; Schulze, Mark
2005-01-01
A developmental instrument for assessment of radiation-induced damage in human lymphocytes includes an automated fluorescence microscope equipped with one or more charge-coupled-device (CCD) video cameras and circuitry to digitize the video output. The microscope is also equipped with a three-axis translation stage that includes a rotation stage, and a rotary tray that holds as many as thirty specimen slides. The figure depicts one version of the instrument. Once the slides have been prepared and loaded into the tray, the instrument can operate unattended. A computer controls the operation of the stage, tray, and microscope, and processes the digital fluorescence-image data to recognize and count chromosomes that have been broken, presumably by radiation. The design and method of operation of the instrument exploit fluorescence in situ hybridization (FISH) of metaphase chromosome spreads, a technique that has been found valuable for monitoring the radiation dose to circulating lymphocytes. In the specific FISH protocol used to prepare specimens for this instrument, metaphase lymphocyte cultures are chosen for high mitotic index and highly condensed chromosomes, then several of the largest chromosomes are labeled with three of the four differently colored whole-chromosome-staining dyes. The three dyes, which are used both individually and in various combinations, are fluorescein isothiocyanate (FITC), Texas Red (or equivalent), and Cy5 (or equivalent); the fourth dye, 4',6-diamidino-2-phenylindole (DAPI), is used as a counterstain. Under control by the computer, the microscope is automatically focused on the cells and each slide is scanned while the computer analyzes the DAPI-fluorescence images to find the metaphases. Each metaphase field is recentered in the field of view and refocused. Then a four-color image (more precisely, a set of images of the same view in the fluorescent colors of the four dyes) is acquired. 
By use of pattern-recognition software developed specifically for this instrument, the images in the various colors are processed to recognize the metaphases and count the chromosome fragments of each color within the metaphases. The intermediate results are then further processed to estimate the proportion of cells that have suffered genetic damage. The prototype instrument scans at an average areal rate of 4.7 mm2/h in unattended operation, finding about 14 metaphases per hour. The false-alarm rate is typically less than 3 percent, and the metaphase-miss rate has been estimated to be less than 5 percent. The counts of chromosomes and fragments thereof are 50 to 70 percent accurate.
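The fragment-counting step can be illustrated as follows. This is not the instrument's actual software: it is a hedged sketch of the generic idea of thresholding one fluorescence color plane, counting connected bright regions as chromosome pieces, and scoring cells with more pieces than expected as damaged. All images, thresholds and counts below are synthetic.

```python
import numpy as np
from collections import deque

def count_fragments(image, threshold):
    """Label 4-connected bright regions above the threshold and count them."""
    mask = image > threshold
    seen = np.zeros_like(mask, dtype=bool)
    count = 0
    for i, j in zip(*np.nonzero(mask)):
        if seen[i, j]:
            continue
        count += 1                      # new fragment found; flood-fill it
        q = deque([(i, j)])
        seen[i, j] = True
        while q:
            y, x = q.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    q.append((ny, nx))
    return count

# Synthetic fluorescence plane: two bright blobs on a dark background.
img = np.zeros((20, 20))
img[2:5, 2:5] = 1.0
img[10:14, 12:15] = 1.0
n = count_fragments(img, 0.5)

# An intact labeled chromosome pair yields an expected number of pieces
# per cell (k, hypothetical); extra pieces indicate breaks.
fragments_per_cell = [2, 2, 3, 2, 4]        # synthetic counts, k = 2
damaged = sum(c > 2 for c in fragments_per_cell) / len(fragments_per_cell)
```

With the synthetic image above, `n` is 2 and the damaged-cell proportion is 0.4; a real pipeline would repeat this per color plane and per metaphase field.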
Desai, Nandini J.; Gupta, B. D.; Patel, Pratik Narendrabhai
2014-01-01
Introduction: Obtaining images of slides viewed by a microscope can be invaluable for both diagnosis and teaching. They can be transferred among technologically advanced hospitals for further consultation and evaluation. But a standard microscopic photography camera unit (MPCU) (MIPS - Microscopic Image Projection System) is costly and not available in resource-poor settings. The aim of our endeavour was to find a comparable and cheaper alternative method for photomicrography. Materials and Methods: We used a NIKON Coolpix S6150 camera (box-type digital camera) with an Olympus CH20i microscope and a fluorescence microscope for the purpose of this study. Results: We obtained comparable results when capturing light microscopy images, but the results were not as satisfactory for fluorescence microscopy. Conclusion: A box-type digital camera is a comparable, less expensive and convenient alternative to a microscopic photography camera unit. PMID:25478350
Compact LED based LCOS optical engine for mobile projection
NASA Astrophysics Data System (ADS)
Zhang, Wenzi; Li, Xiaoyan; Liu, Qinxiao; Yu, Feihong
2009-11-01
With the development of high-power LED (light-emitting diode) technology and color filter LCOS (liquid crystal on silicon) technology, research on LED-based micro optical engines for mobile projection has recently become a hot topic. In this paper, a compact LED-powered LCOS optical engine design is presented, intended to be embedded in cell phones, digital cameras, and so on. Compared to DLP (digital light processor) and traditional color-sequential LCOS technology, the color filter based LCOS panel is chosen for the compact optical engine because only a white LED is needed. To further decrease the size of the optical engine, only one specifically designed plastic free-form lens is applied in the illumination part of the optical engine. This free-form lens is designed to play the roles of both condenser and integrator: the output light of the LED is condensed and redistributed, and illumination of high efficiency, high uniformity and small incident angle on the LCOS is acquired. Besides the PBS (polarization beam splitter), the LCOS, and the projection lens, the compact optical engine contains only this piece of free-form plastic lens, which can be produced by plastic injection molding. The result is a white-LED-powered LCOS optical engine with a compact size of less than 6.6 cc. According to the ray-tracing simulation results, the light-efficiency analysis shows an output flux of over 8.5 ANSI lumens and an ANSI uniformity of over 80%.
Temperature measurement with industrial color camera devices
NASA Astrophysics Data System (ADS)
Schmidradler, Dieter J.; Berndorfer, Thomas; van Dyck, Walter; Pretschuh, Juergen
1999-05-01
This paper discusses color-camera-based temperature measurement. Usually, visual imaging and infrared image sensing are treated as two separate disciplines. We will show that a well-selected color camera device can be a cheaper, more robust and more sophisticated solution for optical temperature measurement in several cases. Herein, only implementation fragments and important restrictions for the sensing element will be discussed. Our aim is to draw the reader's attention to the use of visual image sensors for measuring thermal radiation and temperature, and to give reasons for the need for improved technologies for infrared camera devices. With our industrial partner AVL List, we successfully used the proposed sensor to perform temperature measurement of flames inside the combustion chamber of diesel engines, which finally led to the presented insights.
Solar System Portrait - View of the Sun, Earth and Venus
1996-09-13
This color image of the sun, Earth and Venus was taken by the Voyager 1 spacecraft Feb. 14, 1990, when it was approximately 32 degrees above the plane of the ecliptic and at a slant-range distance of approximately 4 billion miles. It is the first -- and may be the only -- time that we will ever see our solar system from such a vantage point. The image is a portion of a wide-angle image containing the sun and the region of space where the Earth and Venus were at the time, with two narrow-angle pictures centered on each planet. The wide-angle image was taken with the camera's darkest filter (a methane absorption band) and the shortest possible exposure (5 thousandths of a second) to avoid saturating the camera's vidicon tube with scattered sunlight. The sun is not large in the sky as seen from Voyager's perspective at the edge of the solar system, but is still eight million times brighter than the brightest star in Earth's sky, Sirius. The image of the sun you see is far larger than the actual dimension of the solar disk. The result of the brightness is a bright, burned-out image with multiple reflections from the optics in the camera. The "rays" around the sun are a diffraction pattern of the calibration lamp, which is mounted in front of the wide-angle lens. The two narrow-angle frames containing the images of the Earth and Venus have been digitally mosaicked into the wide-angle image at the appropriate scale. These images were taken through three color filters and recombined to produce a color image. The violet, green and blue filters were used; exposure times were, for the Earth image, 0.72, 0.48 and 0.72 seconds, and for the Venus frame, 0.36, 0.24 and 0.36, respectively. 
Although the planetary pictures were taken with the narrow-angle camera (1500 mm focal length) and were not pointed directly at the sun, they show the effects of the glare from the nearby sun, in the form of long linear streaks resulting from the scattering of sunlight off parts of the camera and its sun shade. From Voyager's great distance both Earth and Venus are mere points of light, less than the size of a picture element even in the narrow-angle camera. Earth was a crescent only 0.12 pixel in size. Coincidentally, Earth lies right in the center of one of the scattered light rays resulting from taking the image so close to the sun. Detailed analysis also suggests that Voyager detected the moon as well, but it is too faint to be seen without special processing. Venus was only 0.11 pixel in diameter. The faint colored structure in both planetary frames results from sunlight scattered in the optics. http://photojournal.jpl.nasa.gov/catalog/PIA00450
Color congruity effect: where do colors and numbers interact in synesthesia?
Cohen Kadosh, Roi; Henik, Avishai
2006-02-01
The traditional size congruity paradigm is a Stroop-like situation where participants are asked to compare the values of two digits and ignore the irrelevant physical sizes of the digits (e.g., 3 5). Here a color congruity paradigm was employed and the irrelevant physical sizes were replaced by irrelevant colors. MM, a digit-color synesthete, yielded the classical congruity effect. Namely, she was slower to identify numerically larger numbers when they deviated from her synesthetic experience than when they matched it. In addition, the effect of color on her comparative judgments was modulated by numerical distance. In contrast, performance of non-synesthetes was not affected by the colors. On the basis of neurophysiological studies of magnitude comparison and interference between numerical and physical information, it is proposed that the interaction between colors and digits in MM occurs at the conceptual level. Moreover, by using the current paradigm it is possible to determine the stage at which color-digit binding in synesthesia occurs.
Methods for identification of images acquired with digital cameras
NASA Astrophysics Data System (ADS)
Geradts, Zeno J.; Bijhold, Jurrien; Kieft, Martijn; Kurosawa, Kenji; Kuroki, Kenro; Saitoh, Naoki
2001-02-01
From the court we were asked whether it is possible to determine if an image has been made with a specific digital camera. This question has to be answered in child pornography cases, where evidence is needed that a certain picture was made with a specific camera. We have examined different properties of the cameras to determine whether a specific image was made with a given camera: defects in the CCDs, the file formats used, the noise introduced by the pixel arrays, and the watermarking in images applied by the camera manufacturer.
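One of the cues listed above, the noise introduced by the pixel array, can be illustrated with a toy numerical experiment. This is not the authors' procedure: it is a hedged sketch of the general idea that a camera's fixed pattern noise acts as a fingerprint, which can be estimated from many frames and then correlated against the noise residual of a questioned image. All noise levels and the "denoising" step below are invented simplifications.

```python
import numpy as np

rng = np.random.default_rng(1)

fingerprint = rng.normal(0.0, 0.02, (64, 64))   # the camera's fixed pattern

def shoot(scene):
    """Simulate this camera: scene + fixed pattern + random shot noise."""
    return scene + fingerprint + rng.normal(0.0, 0.01, scene.shape)

def residual(img):
    """Crude stand-in for denoising: remove the mean level."""
    return img - img.mean()

# Estimate the fingerprint by averaging residuals of 50 flat-field frames.
flat = np.full((64, 64), 0.5)
est = np.mean([residual(shoot(flat)) for _ in range(50)], axis=0)

def corr(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

same_cam = corr(residual(shoot(flat)), est)      # image from this camera
other_cam = corr(rng.normal(0.0, 0.02, (64, 64)), est)  # unrelated noise
```

Here `same_cam` comes out close to 1 while `other_cam` stays near 0, which is the separation a forensic test of this kind relies on.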
Red ball ranging optimization based on dual camera ranging method
NASA Astrophysics Data System (ADS)
Kuang, Lei; Sun, Weijia; Liu, Jiaming; Tang, Matthew Wai-Chung
2018-05-01
In this paper, the process by which a NAO robot positions and moves toward a target red ball through its camera system is analyzed and improved using the dual camera ranging method. The single camera ranging method, which is adopted by the NAO robot, was first studied and experimented with. Since the existing error of the current NAO robot does not stem from a single variable, the experiments were divided into two parts to obtain more accurate single camera ranging data: forward ranging and backward ranging. Moreover, two USB cameras were used in our experiments; the Hough circle method was adopted to identify the ball, while the HSV color space model was used to identify the red color. Our results showed that the dual camera ranging method reduced the variance of the error in ball tracking from 0.68 to 0.20.
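The geometric core of dual camera ranging can be sketched with the standard parallel-stereo pinhole relation. The record does not give the authors' exact camera geometry, so all numbers below are invented: for two parallel cameras a baseline B apart, a point seen at horizontal pixel positions xL and xR has disparity d = xL - xR and depth Z = f * B / d, with f the focal length in pixels.

```python
def stereo_depth(x_left, x_right, focal_px, baseline_m):
    """Depth of a point from its pixel positions in two parallel cameras."""
    d = x_left - x_right            # disparity in pixels
    if d <= 0:
        raise ValueError("point must lie in front of both cameras")
    return focal_px * baseline_m / d

# Hypothetical values: 700 px focal length, 10 cm baseline, 35 px disparity.
z = stereo_depth(400.0, 365.0, 700.0, 0.10)   # depth in meters
```

In a ball-tracking pipeline, xL and xR would be the circle centers found (e.g. by a Hough circle detector on a red HSV mask) in the left and right images.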
NASA Astrophysics Data System (ADS)
Dubey, Vishesh; Singh, Veena; Ahmad, Azeem; Singh, Gyanendra; Mehta, Dalip Singh
2016-03-01
We report white light phase shifting interferometry in conjunction with color fringe analysis for the detection of contaminants in water such as Escherichia coli (E. coli), Campylobacter coli and Bacillus cereus. The experimental setup is based on a common path interferometer using a Mirau interferometric objective lens. White light interferograms are recorded using a 3-chip color CCD camera based on prism technology. The 3-chip color camera has less color crosstalk and better spatial resolution than a single-chip CCD camera. A piezo-electric transducer (PZT) phase shifter is fixed to the Mirau objective, and both are attached to a conventional microscope. Five phase-shifted white light interferograms are recorded by the 3-chip color CCD camera, and each phase-shifted interferogram is decomposed into its red, green and blue constituent colors, yielding three sets of five phase-shifted interferograms for three different colors from a single set of white light interferograms. This makes the system less time-consuming and less affected by the surrounding environment. Initially, 3D phase maps of the bacteria are reconstructed for the red, green and blue wavelengths from these interferograms using MATLAB; from these phase maps we determine the refractive index (RI) of the bacteria. Experimental results of 3D shape measurement and RI at multiple wavelengths will be presented. These results might find applications for detection of contaminants in water without using any chemical processing and fluorescent dyes.
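The phase-recovery step from five phase-shifted frames can be illustrated with the classic Hariharan five-step formula. The record does not state which algorithm the authors' MATLAB reconstruction uses, so this is a hedged sketch of one standard choice, applied to a synthetic fringe pattern; in the paper's setup it would be run separately on the red, green and blue interferogram sets.

```python
import numpy as np

def five_step_phase(I1, I2, I3, I4, I5):
    """Wrapped phase from five frames shifted by -pi, -pi/2, 0, pi/2, pi."""
    return np.arctan2(2.0 * (I2 - I4), 2.0 * I3 - I1 - I5)

# Synthetic test object: a linear phase ramp within (-pi, pi).
x = np.linspace(-1.0, 1.0, 256)
phi = 2.0 * x                                    # true phase
frames = [1.0 + 0.8 * np.cos(phi + d)            # five shifted interferograms
          for d in (-np.pi, -np.pi / 2, 0.0, np.pi / 2, np.pi)]
phi_rec = five_step_phase(*frames)
err = float(np.max(np.abs(phi_rec - phi)))       # ~0, floating-point only
```

Because the test phase stays inside (-pi, pi), the recovered map equals the true phase without unwrapping; real bacteria phase maps generally need a subsequent unwrapping step.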
Characterization of flotation color by machine vision
NASA Astrophysics Data System (ADS)
Siren, Ari
1999-09-01
Flotation is the most common industrial method by which valuable minerals are separated from waste rock after crushing and grinding the ore. For process control, flotation plants and devices are equipped with conventional and specialized sensors. However, certain variables are left to the visual observation of the operator, such as the color of the froth and the size of the bubbles in the froth. The ChaCo-Project (EU-Project 24931) was launched in November 1997. In this project a measuring station was built at the Pyhasalmi flotation plant. The system includes an RGB camera and a spectral color-measuring instrument for color inspection of the flotation. The visible spectral range is also measured to compare the operators' comments on the color of the froth with the sphalerite concentration and the process balance. Different dried mineral (sphalerite) ratios were studied with iron pyrite to determine the minerals' typical spectral features. The correlation between sphalerite spectral reflectance and sphalerite concentration over various wavelengths is used to select a proper camera system with filters, or to compare the results with the color information from the RGB camera. Various candidate machine vision techniques are discussed for this application, and the preprocessed information on the dried mineral colors is used and adapted to the online measuring station. Moving froth bubbles produce total reflections, disturbing the color information; polarization filters are used and the results are reported. Reflectance outside the visible light range is also studied and reported.
The Artist, the Color Copier, and Digital Imaging.
ERIC Educational Resources Information Center
Witte, Mary Stieglitz
The impact that color-copying technology and digital imaging have had on art, photography, and design is explored. Color copiers have provided new opportunities for direct and spontaneous image making and the potential for new transformations in art. The current generation of digital color copiers permits new directions in imaging, but the…
Recent developments in space shuttle remote sensing, using hand-held film cameras
NASA Technical Reports Server (NTRS)
Amsbury, David L.; Bremer, Jeffrey M.
1992-01-01
The authors report on the advantages and disadvantages of a number of camera systems which are currently employed for space shuttle remote sensing operations. Systems discussed include the modified Hasselblad, the Rolleiflex 6008, the Linhof 5-inch format system, and the Nikon F3/F4 systems. Film/filter combinations (color positive films, color infrared films, color negative films and polarization filters) are presented.
GiNA, an Efficient and High-Throughput Software for Horticultural Phenotyping
Diaz-Garcia, Luis; Covarrubias-Pazaran, Giovanny; Schlautman, Brandon; Zalapa, Juan
2016-01-01
Traditional methods for trait phenotyping have been a bottleneck for research in many crop species due to their intensive labor, high cost, complex implementation, lack of reproducibility and propensity to subjective bias. Recently, multiple high-throughput phenotyping platforms have been developed, but most of them are expensive, species-dependent, complex to use, and available only for major crops. To overcome such limitations, we present the open-source software GiNA, which is a simple and free tool for measuring horticultural traits such as shape- and color-related parameters of fruits, vegetables, and seeds. GiNA is multiplatform software available in both R and MATLAB® programming languages and uses conventional images from digital cameras with minimal requirements. It can process up to 11 different horticultural morphological traits, such as length, width, two-dimensional area, volume, projected skin, surface area, and RGB color, among other parameters. Different validation tests produced highly consistent results under different lighting conditions and camera setups, making GiNA a very reliable platform for high-throughput phenotyping. In addition, five-fold cross-validation between manually generated and GiNA measurements for length and width in cranberry fruits yielded accuracies of 0.97 and 0.92, respectively. Likewise, the same strategy yielded prediction accuracies above 0.83 for color estimates produced from images of cranberries analyzed with GiNA compared to total anthocyanin content (TAcy) of the same fruits measured with the standard methodology of the industry. Our platform provides a scalable, easy-to-use and affordable tool for massive acquisition of phenotypic data of fruits, seeds, and vegetables. PMID:27529547
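The kind of measurement GiNA automates can be sketched in a few lines. This is not GiNA's code (GiNA is implemented in R and MATLAB): it is a hedged Python illustration of segmenting one fruit from a plain bright background and reporting its length, width and mean RGB color. A real pipeline would also convert pixels to physical units using a reference object in the image.

```python
import numpy as np

def measure_fruit(rgb, bg_thresh=0.9):
    """Measure one fruit on a bright background (channel values in [0, 1])."""
    mask = rgb.mean(axis=2) < bg_thresh          # fruit pixels are darker
    ys, xs = np.nonzero(mask)
    length = int(ys.max() - ys.min() + 1)        # bounding-box extent (px)
    width = int(xs.max() - xs.min() + 1)
    mean_color = rgb[mask].mean(axis=0)          # average RGB over the fruit
    return length, width, mean_color

# Synthetic 40x40 image: white background with a 12x8 dark-red "fruit".
img = np.ones((40, 40, 3))
img[10:22, 5:13] = [0.6, 0.1, 0.1]
length, width, color = measure_fruit(img)
```

On the synthetic image the function recovers the 12x8 extent and the (0.6, 0.1, 0.1) color exactly; real images would need a more robust segmentation than a single brightness threshold.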
Digital devices: big challenge in color management
NASA Astrophysics Data System (ADS)
Vauderwange, Oliver; Curticapean, Dan; Dreβler, Paul; Wozniak, Peter
2014-09-01
The paper will present how students learn to find technical solutions in color management by using adequate digital devices, and to recognize the specific tasks arising in this area. Several issues, problems and their solutions will be discussed. The scientific background offers specific didactical solutions in this area of optics. Color management is the central topic of this paper. Color management is a crucial responsibility for media engineers and designers: print, screen and mobile applications must independently display the same colors. Predictability and consistency of the color representation are the aims of a color management system. This is only possible in a standardized and audited production workflow. Nowadays, digital media undergo a fast-paced development process, and an increasing number of different digital devices with different display sizes and display technologies pose a great challenge for every color management system. The authors will present their experience in the field of color management. The focus is on the design and development of a suitable learning environment with the required infrastructure. The combination of theoretical and practical lectures creates a deeper understanding of digital color representation.
Imaging Emission Spectra with Handheld and Cellphone Cameras
NASA Astrophysics Data System (ADS)
Sitar, David
2012-12-01
As point-and-shoot digital camera technology advances, it is becoming easier to image spectra in a laboratory setting on a shoestring budget and get immediate results. With this in mind, I wanted to test three cameras to see how their results would differ. Two undergraduate physics students and I used one handheld 7.1-megapixel (MP) auto-focusing Canon point-and-shoot digital camera and two different cellphone cameras: one at 6.1 MP and the other at 5.1 MP.
NASA Astrophysics Data System (ADS)
Goossens, Bart; Aelterman, Jan; Luong, Hiep; Pizurica, Aleksandra; Philips, Wilfried
2013-02-01
In digital cameras and mobile phones, there is an ongoing trend to increase the image resolution, decrease the sensor size and use lower exposure times. Because smaller sensors inherently lead to more noise and worse spatial resolution, digital post-processing techniques are required to resolve many of the artifacts. Color filter arrays (CFAs), which use alternating patterns of color filters, are very popular for price and power consumption reasons. However, color filter arrays require the use of a post-processing technique such as demosaicing to recover full-resolution RGB images. Recently, there has been some interest in techniques that jointly perform demosaicing and denoising. This has the advantage that the demosaicing and denoising can be performed optimally (e.g., in the MSE sense) for the considered noise model, while avoiding the artifacts introduced when demosaicing and denoising are applied sequentially. In this paper, we continue the research line of wavelet-based demosaicing techniques. These approaches are computationally simple and very well suited for combination with denoising. We therefore derive Bayesian minimum mean squared error (MMSE) joint demosaicing and denoising rules in the complex wavelet packet domain, taking local adaptivity into account. As an image model, we use Gaussian scale mixtures, thereby taking advantage of the directionality of the complex wavelets. Our results show that this technique is well capable of reconstructing fine details in the image, while removing all of the noise, at a relatively low computational cost. In particular, the complete reconstruction (including color correction, white balancing, etc.) of a 12-megapixel RAW image takes 3.5 sec on a recent mid-range GPU.
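What demosaicing has to do can be shown with a deliberately simple stand-in for the paper's wavelet-domain MMSE method: a bilinear demosaic of an RGGB Bayer pattern, where each pixel records only one color and the missing samples are interpolated from neighbors of the same color. The pattern layout and kernel below are the standard textbook ones, not anything specific to this paper.

```python
import numpy as np

def bayer_masks(h, w):
    """Boolean sampling masks for an RGGB Bayer pattern."""
    y, x = np.mgrid[0:h, 0:w]
    r = (y % 2 == 0) & (x % 2 == 0)
    b = (y % 2 == 1) & (x % 2 == 1)
    g = ~(r | b)
    return r, g, b

def conv3(a, k):
    """3x3 convolution with zero padding (no SciPy dependency)."""
    p = np.pad(a, 1)
    out = np.zeros(a.shape, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out

def bilinear_demosaic(raw, masks):
    """Fill each color plane by normalized averaging of its own samples."""
    k = np.array([[0.25, 0.5, 0.25], [0.5, 1.0, 0.5], [0.25, 0.5, 0.25]])
    planes = [conv3(np.where(m, raw, 0.0), k) / conv3(m.astype(float), k)
              for m in masks]
    return np.dstack(planes)           # stacked as R, G, B

# Constant-color test scene (R=0.5, G=0.7, B=0.2) sampled through the CFA.
masks = bayer_masks(8, 8)
raw = masks[0] * 0.5 + masks[1] * 0.7 + masks[2] * 0.2
rgb = bilinear_demosaic(raw, masks)
```

A constant scene is reconstructed exactly by the normalized averaging; real scenes with edges are where bilinear interpolation breaks down and where the joint wavelet-domain approach described in the abstract earns its keep.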
High-performance dual-speed CCD camera system for scientific imaging
NASA Astrophysics Data System (ADS)
Simpson, Raymond W.
1996-03-01
Traditionally, scientific camera systems were partitioned into a 'camera head' containing the CCD and its support circuitry, and a camera controller, which provided analog-to-digital conversion, timing, control, computer interfacing, and power. A new, unitized high-performance scientific CCD camera with dual-speed readout at 1×10^6 or 5×10^6 pixels per second, 12-bit digital gray scale, high-performance thermoelectric cooling, and built-in composite video output is described. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100-Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remotely controlled submersible vehicle. The oceanographic version achieves 16-bit dynamic range at 1.5×10^5 pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real-time fiber-optic link.
EAARL coastal topography and imagery–Western Louisiana, post-Hurricane Rita, 2005: First surface
Bonisteel-Cormier, Jamie M.; Wright, Wayne C.; Fredericks, Alexandra M.; Klipp, Emily S.; Nagle, Doug B.; Sallenger, Asbury H.; Brock, John C.
2013-01-01
These remotely sensed, geographically referenced color-infrared (CIR) imagery and elevation measurements of lidar-derived first-surface (FS) topography datasets were produced by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, Florida, and the National Aeronautics and Space Administration (NASA), Wallops Flight Facility, Virginia. This project provides highly detailed and accurate datasets of a portion of the Louisiana coastline beachface, acquired post-Hurricane Rita on September 27-28 and October 2, 2005. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the National Aeronautics and Space Administration (NASA) Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine Cessna 310 aircraft, but the instrument may be deployed on a range of light aircraft. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the "bare earth" under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Lidar for Science and Resource Management Website.
Bonisteel-Cormier, J.M.; Nayegandhi, Amar; Brock, J.C.; Wright, C.W.; Nagle, D.B.; Klipp, E.S.; Vivekanandan, Saisudha; Fredericks, Xan; Stevens, Sara
2010-01-01
These remotely sensed, geographically referenced color-infrared (CIR) imagery and elevation measurements of lidar-derived bare-earth (BE) and first-surface (FS) topography datasets were produced collaboratively by the U.S. Geological Survey (USGS), St. Petersburg Coastal and Marine Science Center, St. Petersburg, FL, and the National Park Service (NPS), Northeast Coastal and Barrier Network, Kingston, RI. This project provides highly detailed and accurate datasets of a portion of the Assateague Island National Seashore in Maryland and Virginia, acquired post-Nor'Ida (November 2009 nor'easter) on November 28 and 30, 2009. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the NASA Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine aircraft, but the instrument was deployed on a Pilatus PC-6. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. 
Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the 'bare earth' under vegetation from a point cloud of last return elevations. For more information about similar projects, please visit the Decision Support for Coastal Science and Management website.
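The first/last significant-return extraction that ALPS performs on each waveform can be illustrated with a minimal threshold-based sketch (the function name and the simple fixed-threshold scheme are illustrative assumptions, not the actual ALPS algorithm):

```python
import numpy as np

def significant_returns(waveform, threshold):
    """Return (first, last) sample indices whose amplitude meets a
    threshold -- a simplified stand-in for first-surface and
    bare-earth return extraction from a full lidar waveform."""
    idx = np.flatnonzero(np.asarray(waveform, dtype=float) >= threshold)
    if idx.size == 0:
        return None, None
    return int(idx[0]), int(idx[-1])

# Synthetic waveform: canopy return near sample 10, ground return near 40.
wf = np.zeros(60)
wf[10] = 5.0   # canopy (first surface)
wf[40] = 8.0   # bare earth (last return)
first, last = significant_returns(wf, threshold=2.0)   # -> 10, 40
```

In real processing the threshold is adaptive and the waveform is noisy, but the first/last-crossing logic is the same.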
Measuring Distances Using Digital Cameras
ERIC Educational Resources Information Center
Kendal, Dave
2007-01-01
This paper presents a generic method of calculating accurate horizontal and vertical object distances from digital images taken with any digital camera and lens combination, where the object plane is parallel to the image plane or tilted in the vertical plane. This method was developed for a project investigating the size, density and spatial…
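For the parallel-plane case, the underlying relation is the standard pinhole similar-triangles formula; a sketch under that assumption (not necessarily Kendal's exact formulation, and the sensor and lens values below are hypothetical):

```python
def object_distance(focal_length_mm, object_height_m, image_height_px,
                    sensor_height_mm, sensor_height_px):
    """Distance to an object whose plane is parallel to the image plane,
    from the pinhole model: distance = f * H_object / h_image."""
    # Convert the object's height on the sensor from pixels to millimetres.
    image_height_mm = image_height_px * (sensor_height_mm / sensor_height_px)
    return (focal_length_mm * 1e-3) * object_height_m / (image_height_mm * 1e-3)

# A 2 m object imaged 500 px tall on a 24 mm / 4000 px sensor, 50 mm lens:
d = object_distance(50.0, 2.0, 500, 24.0, 4000)   # -> ~33.3 m
```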
Camera! Action! Collaborate with Digital Moviemaking
ERIC Educational Resources Information Center
Swan, Kathleen Owings; Hofer, Mark; Levstik, Linda S.
2007-01-01
Broadly defined, digital moviemaking integrates a variety of media (images, sound, text, video, narration) to communicate with an audience. There is near-ubiquitous access to the necessary software (MovieMaker and iMovie are bundled free with their respective operating systems) and hardware (computers with Internet access, digital cameras, etc.).…
NASA Astrophysics Data System (ADS)
Gamadia, Mark Noel
In order to gain valuable market share in the growing consumer digital still camera and camera phone market, camera manufacturers have to continually add new features and improve existing ones in their latest product offerings. Auto-focus (AF) is one such feature, whose aim is to enable consumers to quickly take sharply focused pictures with little or no manual intervention in adjusting the camera's focus lens. While AF has been a standard feature in digital still and cell-phone cameras, consumers often complain about their cameras' slow AF performance, which may lead to missed photographic opportunities, rendering valuable moments and events with undesired out-of-focus pictures. This dissertation addresses this critical issue to advance the state of the art in the passive AF method based on digital band-pass filtering. This method is widely used to realize AF in the camera industry, where a focus actuator is adjusted via a search algorithm to locate the in-focus position by maximizing a sharpness measure extracted from a particular frequency band of the incoming image of the scene. There are no known systematic methods for automatically deriving parameters such as the digital pass-bands or the search step-size increments used in existing passive AF schemes. Conventional methods require time-consuming experimentation and tuning in order to arrive at a set of parameters which balance AF performance in terms of speed and accuracy, ultimately delaying product time-to-market. This dissertation presents a new framework for determining an optimal set of passive AF parameters, named Filter-Switching AF, providing an automatic approach to achieve superior AF performance in both good and low lighting conditions, based on the following performance measures (metrics): speed (total number of iterations), accuracy (offset from truth), power consumption (total distance moved), and user experience (in-focus position overrun).
Performance results using three different prototype cameras are presented to further illustrate the real-world AF performance gains achieved by the developed approach. The major contribution of this dissertation is that the developed auto focusing approach can be successfully used by camera manufacturers in the development of the AF feature in future generations of digital still cameras and camera phones.
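A generic passive-AF search of the kind described, maximizing a sharpness measure over lens positions with a coarse-to-fine step size, might be sketched as follows (illustrative only, not the dissertation's Filter-Switching algorithm; the simulated `capture` function is a hypothetical stand-in for the camera):

```python
import numpy as np

def sharpness(image):
    """Band-pass-style sharpness measure: energy of horizontal differences."""
    d = np.diff(np.asarray(image, dtype=float), axis=1)
    return float((d * d).sum())

def autofocus(capture, positions, coarse_step=8):
    """Coarse-to-fine search for the lens position maximizing sharpness.
    `capture(pos)` returns an image taken at lens position `pos`."""
    coarse = positions[::coarse_step]
    best = max(coarse, key=lambda p: sharpness(capture(p)))
    lo = max(positions[0], best - coarse_step)
    hi = min(positions[-1], best + coarse_step)
    return max(range(lo, hi + 1), key=lambda p: sharpness(capture(p)))

def capture(pos):
    """Simulated camera: a checkerboard whose contrast peaks at pos = 30."""
    amp = 1.0 / (1.0 + abs(pos - 30))
    return (np.indices((8, 8)).sum(axis=0) % 2) * amp

pos = autofocus(capture, range(0, 64))   # -> 30
```

The metrics listed in the abstract (iteration count, offset from truth, total distance moved, overrun) would be accumulated inside such a search loop.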
Hyperspectral imaging using a color camera and its application for pathogen detection
USDA-ARS?s Scientific Manuscript database
This paper reports the results of a feasibility study for the development of a hyperspectral image recovery (reconstruction) technique using a RGB color camera and regression analysis in order to detect and classify colonies of foodborne pathogens. The target bacterial pathogens were the six represe...
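A minimal sketch of spectral recovery by regression, assuming a linear RGB-to-spectrum mapping learned from paired training data (the synthetic sensitivities and spectra below are placeholders, not the manuscript's actual data or model):

```python
import numpy as np

# Synthetic training data: spectra lying in a 3-dimensional subspace,
# imaged through hypothetical camera sensitivities (placeholders).
rng = np.random.default_rng(0)
basis = rng.random((3, 31))                # 3 basis spectra x 31 bands
coeffs = rng.random((100, 3))              # 100 training samples
spectra = coeffs @ basis
sensitivities = rng.random((31, 3))        # camera R, G, B sensitivities
rgb = spectra @ sensitivities              # simulated RGB responses

# Least-squares regression from RGB (plus a bias term) to the full spectrum.
X = np.hstack([rgb, np.ones((rgb.shape[0], 1))])
W, *_ = np.linalg.lstsq(X, spectra, rcond=None)

def reconstruct_spectrum(rgb_pixel):
    """Recover an estimated 31-band spectrum from one RGB measurement."""
    return np.append(rgb_pixel, 1.0) @ W

max_err = float(np.abs(X @ W - spectra).max())   # near zero on this toy set
```

Real colony spectra are not exactly three-dimensional, so the fit is approximate in practice; polynomial regressors are a common extension.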
Rajshekhar, Gannavarpu; Gorthi, Sai Siva; Rastogi, Pramod
2011-12-01
The paper introduces a method for simultaneously measuring the in-plane and out-of-plane displacement derivatives of a deformed object in digital holographic interferometry. In the proposed method, lasers of different wavelengths are used to simultaneously illuminate the object along various directions, such that a unique wavelength is used for a given direction. The holograms formed by multiple reference-object beam pairs of different wavelengths are recorded by a 3-color CCD camera with red, green, and blue channels. Each channel stores the hologram related to the corresponding wavelength, and hence for the specific direction. The complex reconstructed interference field is obtained for each wavelength by numerical reconstruction and digital processing of the recorded holograms before and after deformation. Subsequently, the phase derivative is estimated for a given wavelength using a two-dimensional pseudo Wigner-Ville distribution, and the in-plane and out-of-plane components are obtained from the estimated phase derivatives using the sensitivity vectors of the optical configuration. © 2011 Optical Society of America
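Once per-wavelength phase derivatives are available, the final step, recovering displacement-derivative components via the sensitivity vectors, amounts to solving a small linear system per pixel. A sketch with made-up sensitivity vectors and wavelengths (the geometry below is hypothetical, not the paper's configuration):

```python
import numpy as np

# Hypothetical sensitivity vectors (rows) for the red, green, and blue
# channels, and the corresponding wavelengths in metres. Illustrative only.
S = np.array([[ 0.8, 0.0, 1.6],
              [ 0.0, 0.8, 1.6],
              [-0.5, 0.5, 1.7]])
wavelengths = np.array([633e-9, 532e-9, 488e-9])

def displacement_derivatives(phase_derivs):
    """Solve S @ u = (lambda / 2 pi) * dphi for the in-plane (x, y) and
    out-of-plane (z) displacement-derivative components u."""
    rhs = wavelengths * np.asarray(phase_derivs, dtype=float) / (2 * np.pi)
    return np.linalg.solve(S, rhs)

# Round trip on a known derivative vector:
u_true = np.array([1e-6, 2e-6, 3e-6])
dphi = 2 * np.pi * (S @ u_true) / wavelengths
u_est = displacement_derivatives(dphi)
```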
NASA Astrophysics Data System (ADS)
Wang, Zhi-shan; Zhao, Yue-jin; Li, Zhuo; Dong, Liquan; Chu, Xuhong; Li, Ping
2010-11-01
The comparison goniometer is widely used to measure and inspect small angles, angle differences, and the parallelism of two surfaces. However, the usual way to read a comparison goniometer is for the operator to inspect its ocular with one eye, and reading an old goniometer equipped with only one adjustable ocular is difficult. In the fabrication of an IR reflecting-mirror assembly, a common comparison goniometer is used to measure the angle errors between two neighboring assembled mirrors. In this paper, an image-based quick-reading technique for the comparison goniometer used to inspect the parallelism of mirrors in a mirror assembly is proposed. A digital camera, a comparison goniometer, and a computer are used to construct a reading system; the image of the field of view in the comparison goniometer is extracted and recognized to obtain the angle positions of the reflection surfaces to be measured. In order to obtain the interval distance between the scale lines, a particular technique, the left-peak-first method, based on the local peak values of intensity in the true-color image, is proposed. A program written in VC++6.0 has been developed to perform the color digital image processing.
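The local-peak scan underlying a left-peak-first style reading can be sketched as a simple left-to-right search for intensity maxima in a profile taken across the scale lines (a simplification of the paper's method; the profile below is synthetic):

```python
def scale_line_peaks(profile, min_height):
    """Indices of local intensity maxima (scale-line centres) in a 1-D
    profile, scanned left to right ('left peak first')."""
    peaks = []
    for i in range(1, len(profile) - 1):
        if profile[i] >= min_height and profile[i - 1] < profile[i] > profile[i + 1]:
            peaks.append(i)
    return peaks

profile = [0, 1, 9, 1, 0, 2, 8, 2, 0, 1, 9, 1, 0]
peaks = scale_line_peaks(profile, min_height=5)        # -> [2, 6, 10]
spacings = [b - a for a, b in zip(peaks, peaks[1:])]   # -> [4, 4]
```

The spacings between detected peaks give the interval distance between scale lines, from which an angle reading can be interpolated.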
High Scalability Video ISR Exploitation
2012-10-01
Surveillance, ARGUS) on the National Image Interpretability Rating Scale (NIIRS) at level 6. Ultra-high quality cameras like the Digital Cinema 4K (DC-4K), which recognizes objects smaller than people, will be available for purchase for use in the field. However, even if such a UAV sensor with a DC-4K was flown
Helicopter-based Photography for use in SfM over the West Greenland Ablation Zone
NASA Astrophysics Data System (ADS)
Mote, T. L.; Tedesco, M.; Astuti, I.; Cotten, D.; Jordan, T.; Rennermalm, A. K.
2015-12-01
Results of low-elevation, high-resolution aerial photography from a helicopter are reported for a supraglacial watershed in West Greenland. Data were collected at the end of July 2015 over a supraglacial watershed terminating in the Kangerlussuaq region of Greenland and following the Utrecht University K-Transect of meteorological stations. The aerial photography reported here was a complementary observation used to support hyperspectral measurements of albedo, discussed in the Greenland Ice Sheet hydrology session of this AGU Fall Meeting. A compact digital camera was installed inside a pod mounted on the side of the helicopter, together with gyroscopes and accelerometers that were used to estimate the relative orientation. Continuous video was collected on the 19 and 21 July flights, and frames extracted from the videos are used to create a series of aerial photos. Individual geo-located aerial photos were also taken on a 24 July flight. We demonstrate that by maintaining a constant flight elevation and a near-constant ground speed, a helicopter with a mounted camera can produce the 3-D structure of the ablation zone of the ice sheet at an unprecedented spatial resolution on the order of 5-10 cm. By setting the intervalometer on the camera to 2 seconds, the images obtained provide sufficient overlap (>60%) for digital image alignment, even at a flight elevation of ~170 m. As a result, very accurate point matching between photographs can be achieved and an extremely dense RGB-encoded point cloud can be extracted. Overlapping images provide a series of stereopairs that can be used to create point cloud data consisting of 3 position and 3 color variables: X, Y, Z, R, G, and B. This point cloud is then used to create orthophotos or large-scale digital elevation models, thus accurately displaying ice structure.
The geo-referenced images provide a ground spatial resolution of approximately 6 cm, permitting analysis of detailed features, such as cryoconite holes, evolving small order streams, and cracks from hydrofracturing.
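The along-track overlap achieved by a given intervalometer setting can be checked with flat-terrain footprint geometry; a sketch, where the 60° field of view and 30 m/s ground speed below are assumed values for illustration only:

```python
import math

def forward_overlap(altitude_m, fov_deg, speed_ms, interval_s):
    """Fraction of along-track overlap between consecutive frames for a
    nadir-pointing camera, assuming flat terrain."""
    # Ground footprint length along track from the angular field of view.
    footprint = 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)
    # Distance flown between successive exposures.
    baseline = speed_ms * interval_s
    return 1 - baseline / footprint

# ~170 m flight elevation, hypothetical 60 deg along-track FOV,
# 30 m/s ground speed, 2 s intervalometer:
ov = forward_overlap(170, 60, 30, 2)   # ~0.69, above the >60% needed for SfM
```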
Teaching of color in predoctoral and postdoctoral dental education in 2009.
Paravina, Rade D; O'Neill, Paula N; Swift, Edward J; Nathanson, Dan; Goodacre, Charles J
2010-01-01
The goal of the study was to determine the current status of the teaching of color in dental education at both the predoctoral (Pre-D) and postdoctoral (Post-D) levels. A cross-sectional web-based survey, containing 27 questions with multiple-choice, multiple-best, and single-best answers, was created. Upon receiving administrative approval, dental faculty involved in the teaching of color to Pre-D or Post-D dental students from around the world (N=205) were administered the survey. Statistical analysis of differences between Pre-D and Post-D was performed using the Chi-square test (α=0.05). A total of 130 responses were received (response rate 63.4%); there were 70 responses from North America, 40 from Europe, 10 from South America, nine from Asia and one from Africa. A course on "color" or "color in dentistry" was included in the dental curriculum of 80% of Pre-D programs and 82% of Post-D programs. The number of hours dedicated to color-related topics was 4.0±2.4 for Pre-D and 5.5±2.9 for Post-D, respectively (p<0.01). Topics associated with tooth color, shade matching methods, tooth whitening, and the teaching of appearance parameters other than color were frequently taught. Significant differences were recorded between the number of hours dedicated to the teaching of color at the predoctoral and postdoctoral levels. The same is true for the prosthodontics and restorative courses, teaching on negative afterimages, color rendering index, the Bleachedguide 3D-Master shade guide, digital camera and lens selection, composite resins, and maxillofacial prosthetic materials. Except for the restorative courses and composite resins, significantly higher results were recorded for Post-D programs. Vitapan Classical and 3D-Master were the most frequently taught shade guides. Copyright © 2010 Elsevier Ltd. All rights reserved.
Kikuchi, Kumiko; Masuda, Yuji; Yamashita, Toyonobu; Kawai, Eriko; Hirao, Tetsuji
2015-05-01
Heterogeneity with respect to skin color tone is one of the key factors in visual perception of facial attractiveness and age. However, there have been few studies on quantitative analyses of the color heterogeneity of facial skin. The purpose of this study was to develop image evaluation methods for skin color heterogeneity focusing on skin chromophores and then characterize ethnic differences and age-related changes. A facial imaging system equipped with an illumination unit and a high-resolution digital camera was used to develop image evaluation methods for skin color heterogeneity. First, melanin and/or hemoglobin images were obtained using pigment-specific image-processing techniques, which involved conversion from Commission Internationale de l'Eclairage XYZ color values to melanin and/or hemoglobin indexes as measures of their contents. Second, a spatial frequency analysis with threshold settings was applied to the individual images. Cheek skin images of 194 healthy Asian and Caucasian female subjects were acquired using the imaging system. Applying this methodology, the skin color heterogeneity of Asian and Caucasian faces was characterized. The proposed pigment-specific image-processing techniques allowed visual discrimination of skin redness from skin pigmentation. In the heterogeneity analyses of cheek skin color, age-related changes in melanin were clearly detected in Asian and Caucasian skin. Furthermore, it was found that the heterogeneity indexes of hemoglobin were significantly higher in Caucasian skin than in Asian skin. We have developed evaluation methods for skin color heterogeneity by image analyses based on the major chromophores, melanin and hemoglobin, with special reference to their size. This methodology focusing on skin color heterogeneity should be useful for better understanding of aging and ethnic differences. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Color engineering in the age of digital convergence
NASA Astrophysics Data System (ADS)
MacDonald, Lindsay W.
1998-09-01
Digital color imaging has developed over the past twenty years from specialized scientific applications into the mainstream of computing. In addition to the phenomenal growth of computer processing power and storage capacity, great advances have been made in the capabilities and cost-effectiveness of color imaging peripherals. The majority of imaging applications, including the graphic arts, video and film have made the transition from analogue to digital production methods. Digital convergence of computing, communications and television now heralds new possibilities for multimedia publishing and mobile lifestyles. Color engineering, the application of color science to the design of imaging products, is an emerging discipline that poses exciting challenges to the international color imaging community for training, research and standards.
Earth Observations taken by Expedition 41 crewmember
2014-09-13
ISS041-E-013683 (13 Sept. 2014) --- Photographed with a mounted automated camera, this is one of a number of images featuring the European Space Agency's Automated Transfer Vehicle (ATV-5 or Georges Lemaitre) docked with the International Space Station. Except for color changes, the images are almost identical. The variation in color from frame to frame is due to the camera's response to the motion of the orbital outpost, relative to the illumination from the sun.
Earth Observations taken by Expedition 41 crewmember
2014-09-13
ISS041-E-013687 (13 Sept. 2014) --- Photographed with a mounted automated camera, this is one of a number of images featuring the European Space Agency's Automated Transfer Vehicle (ATV-5 or Georges Lemaitre) docked with the International Space Station. Except for color changes, the images are almost identical. The variation in color from frame to frame is due to the camera's response to the motion of the orbital outpost, relative to the illumination from the sun.
Earth Observations taken by Expedition 41 crewmember
2014-09-13
ISS041-E-013693 (13 Sept. 2014) --- Photographed with a mounted automated camera, this is one of a number of images featuring the European Space Agency's Automated Transfer Vehicle (ATV-5 or Georges Lemaitre) docked with the International Space Station. Except for color changes, the images are almost identical. The variation in color from frame to frame is due to the camera's response to the motion of the orbital outpost, relative to the illumination from the sun.
System for objective assessment of image differences in digital cinema
NASA Astrophysics Data System (ADS)
Fliegel, Karel; Krasula, Lukáš; Páta, Petr; Myslík, Jiří; Pecák, Josef; Jícha, Marek
2014-09-01
There is high demand for quick digitization and subsequent image restoration of archived film records. Digitization is very urgent in many cases because various invaluable pieces of cultural heritage are stored on aging media. Only selected records can be reconstructed perfectly using painstaking manual or semi-automatic procedures. This paper aims to answer the question of what quality requirements the restoration process must meet in order to obtain a visual perception of the digitally restored film acceptably close to that of the original analog film copy. This knowledge is very important for preserving the original artistic intention of the movie producers. A subjective experiment with artificially distorted images was conducted in order to determine the visual impact of common image distortions in digital cinema. Typical color and contrast distortions were introduced, and test images were presented to viewers using a digital projector. Based on the outcome of this subjective evaluation, a system for objective assessment of image distortions has been developed and its performance tested. The system utilizes a calibrated digital single-lens reflex camera and subsequent analysis of suitable features of images captured from the projection screen. The evaluation of captured image data has been optimized to predict differences between the reference and distorted images while achieving high correlation with the results of the subjective assessment. The system can be used to objectively determine the difference between analog film and digital cinema images on the projection screen.
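As a stand-in for the paper's feature analysis, a basic full-reference difference metric such as PSNR illustrates comparing a captured projection image against its reference (the paper's actual feature set and camera calibration are more elaborate than this sketch):

```python
import numpy as np

def psnr(reference, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape.
    Higher values mean smaller differences; identical images give inf."""
    ref = np.asarray(reference, dtype=float)
    dis = np.asarray(distorted, dtype=float)
    mse = float(np.mean((ref - dis) ** 2))
    if mse == 0:
        return float("inf")
    return 10 * np.log10(peak ** 2 / mse)

ref = np.full((4, 4), 100.0)
dist = ref + 10.0          # a uniform brightness shift
value = psnr(ref, dist)    # mse = 100 -> ~28.1 dB
```

A perceptually weighted colour-difference metric would track the subjective scores more closely than plain PSNR, which is why such systems are tuned against subjective data.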
NASA Astrophysics Data System (ADS)
Froehlich, Jan; Grandinetti, Stefan; Eberhardt, Bernd; Walter, Simon; Schilling, Andreas; Brendel, Harald
2014-03-01
High quality video sequences are required for the evaluation of tone mapping operators and high dynamic range (HDR) displays. We provide scenic and documentary scenes with a dynamic range of up to 18 stops. The scenes are staged using professional film lighting, make-up and set design to enable the evaluation of image and material appearance. To address challenges for HDR-displays and temporal tone mapping operators, the sequences include highlights entering and leaving the image, brightness changing over time, high contrast skin tones, specular highlights and bright, saturated colors. HDR-capture is carried out using two cameras mounted on a mirror-rig. To achieve a cinematic depth of field, digital motion picture cameras with Super-35mm size sensors are used. We provide HDR-video sequences to serve as a common ground for the evaluation of temporal tone mapping operators and HDR-displays. They are available to the scientific community for further research.
Organize Your Digital Photos: Display Your Images Without Hogging Hard-Disk Space
ERIC Educational Resources Information Center
Branzburg, Jeffrey
2005-01-01
According to InfoTrends/CAP Ventures, by the end of this year more than 55 percent of all U.S. households will own at least one digital camera. With so many digital cameras in use, it is important for people to understand how to organize and store digital images in ways that make them easy to find. Additionally, today's affordable, large megapixel…
NASA Astrophysics Data System (ADS)
Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan
2017-03-01
Digital holographic on-chip microscopy achieves large space-bandwidth products (e.g., >1 billion) by making use of pixel super-resolution techniques. To synthesize a digital holographic color image, one can take three sets of holograms representing the red (R), green (G) and blue (B) parts of the spectrum and digitally combine them to synthesize a color image. The data acquisition efficiency of this sequential illumination process can be improved 3-fold by using wavelength-multiplexed R, G and B illumination that simultaneously illuminates the sample, and using a Bayer color image sensor with known or calibrated transmission spectra to digitally demultiplex these three wavelength channels. This demultiplexing step is conventionally used with interpolation-based Bayer demosaicing methods. However, because the pixels of different color channels on a Bayer image sensor chip are not at the same physical location, the conventional interpolation-based demosaicing process generates strong color artifacts, especially at rapidly oscillating hologram fringes, which become even more pronounced through digital wave propagation and phase retrieval processes. Here, we demonstrate that by merging the pixel super-resolution framework into the demultiplexing process, such color artifacts can be greatly suppressed. This novel technique, termed demosaiced pixel super-resolution (D-PSR) for digital holographic imaging, achieves color imaging performance very similar to conventional sequential R, G, B illumination, with a 3-fold improvement in image acquisition time and data efficiency. We successfully demonstrated the color imaging performance of this approach by imaging stained Pap smears. The D-PSR technique is broadly applicable to high-throughput, high-resolution digital holographic color microscopy techniques that can be used in resource-limited settings and point-of-care offices.
NASA Astrophysics Data System (ADS)
Watanabe, Shigeo; Takahashi, Teruo; Bennett, Keith
2017-02-01
The "scientific" CMOS (sCMOS) camera architecture fundamentally differs from that of CCD and EMCCD cameras. In digital CCD and EMCCD cameras, conversion from charge to the digital output is generally through a single electronic chain, and the read noise and the conversion factor from photoelectrons to digital outputs are highly uniform for all pixels, although quantum efficiency may vary spatially. In CMOS cameras, the charge-to-voltage conversion is separate for each pixel, and each column has independent amplifiers and analog-to-digital converters, in addition to possible pixel-to-pixel variation in quantum efficiency. The "raw" output from the CMOS image sensor includes pixel-to-pixel variability in the read noise, electronic gain, offset and dark current. Scientific camera manufacturers digitally compensate the raw signal from the CMOS image sensors to provide usable images. Statistical noise in images, unless properly modeled, can introduce errors in methods such as fluctuation correlation spectroscopy or computational imaging, for example, localization microscopy using maximum likelihood estimation. We measured the distributions and spatial maps of individual pixel offset, dark current, read noise, linearity, photoresponse non-uniformity and variance distributions of individual pixels for standard, off-the-shelf Hamamatsu ORCA-Flash4.0 V3 sCMOS cameras using highly uniform and controlled illumination conditions, from dark conditions to multiple low light levels between 20 and 1,000 photons/pixel per frame to higher light conditions. We further show that using pixel variance for flat-field correction leads to errors in cameras with good factory calibration.
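The per-pixel gain characterization described here is commonly done with photon-transfer analysis: for a shot-noise-limited pixel, temporal variance grows linearly with mean signal, and the slope is the conversion gain. A minimal sketch on synthetic data (the gain and read-noise values below are made up):

```python
import numpy as np

def conversion_gain(means, variances):
    """Photon-transfer estimate of conversion gain (DN per photoelectron)
    from a pixel's temporal variance vs. mean signal across flat-field
    illumination levels: variance = gain * mean + read_noise_variance."""
    slope, _intercept = np.polyfit(means, variances, 1)
    return float(slope)

# Synthetic shot-noise-limited pixel with hypothetical parameters:
gain_true, read_var = 0.46, 2.5
means = np.array([20.0, 100.0, 400.0, 1000.0])      # mean signal (DN)
variances = gain_true * means + read_var            # temporal variance (DN^2)
g = conversion_gain(means, variances)               # ~0.46
```

On a real sCMOS sensor this fit is performed per pixel, which is exactly where the pixel-to-pixel gain and offset maps discussed above come from.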
NASA Astrophysics Data System (ADS)
Sakano, Toshikazu; Furukawa, Isao; Okumura, Akira; Yamaguchi, Takahiro; Fujii, Tetsuro; Ono, Sadayasu; Suzuki, Junji; Matsuya, Shoji; Ishihara, Teruo
2001-08-01
The widespread adoption of digital technology in the medical field has led to a demand for a high-quality, high-speed, and user-friendly digital image presentation system for daily medical conferences. To fulfill this demand, we developed a presentation system for radiological and pathological images. It is composed of a super-high-definition (SHD) imaging system, a radiological image database (R-DB), a pathological image database (P-DB), and the network interconnecting these three. The R-DB consists of a 270GB RAID, a database server workstation, and a film digitizer. The P-DB includes an optical microscope, a four-million-pixel digital camera, a 90GB RAID, and a database server workstation. A 100Mbps Ethernet LAN interconnects all the sub-systems. Web-based system operation software was developed for easy operation. We installed the whole system in NTT East Kanto Hospital to evaluate it in the weekly case conferences. The SHD system could display digital full-color images of 2048 x 2048 pixels on a 28-inch CRT monitor. The doctors evaluated the image quality and size and found them applicable to actual medical diagnosis. They also appreciated the short image-switching time, which contributed to smooth presentation. Thus, we confirmed that its characteristics met the requirements.
Study of optical techniques for the Ames unitary wind tunnels. Part 4: Model deformation
NASA Technical Reports Server (NTRS)
Lee, George
1992-01-01
A survey of systems capable of model deformation measurements was conducted. The survey included stereo-cameras, scanners, and digitizers. Moiré, holographic, and heterodyne interferometry techniques were also examined. Stereo-cameras with passive or active targets are currently being deployed for model deformation measurements at NASA Ames and LaRC, Boeing, and ONERA. Scanners and digitizers are widely used in robotics, motion analysis, medicine, etc., and some of the scanners and digitizers can meet the model deformation requirements. Commercial stereo-cameras, scanners, and digitizers are being improved in accuracy, reliability, and ease of operation. A number of new systems are coming onto the market.
Data annotation, recording and mapping system for the US Open Skies aircraft
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, B.W.; Goede, W.F.; Farmer, R.G.
1996-11-01
This paper discusses the system developed by Northrop Grumman for the Defense Nuclear Agency (DNA), US Air Force, and the On-Site Inspection Agency (OSIA) to comply with the data annotation and reporting provisions of the Open Skies Treaty. This system, called the Data Annotation, Recording and Mapping System (DARMS), has been installed on the US OC-135 and meets or exceeds all annotation requirements for the Open Skies Treaty. The Open Skies Treaty, which will enter into force in the near future, allows any of the 26 signatory countries to fly fixed-wing aircraft with imaging sensors over any of the other treaty participants, upon very short notice, and with no restricted flight areas. Sensor types presently allowed by the treaty are: optical framing and panoramic film cameras; video cameras ranging from analog PAL color television cameras to the more sophisticated digital monochrome and color line-scanning or framing cameras; infrared line scanners; and synthetic aperture radars. Each sensor type has specific performance parameters which are limited by the treaty, as well as specific annotation requirements which must be achieved upon full entry into force. DARMS supports US compliance with the Open Skies Treaty by means of three subsystems: the Data Annotation Subsystem (DAS), which annotates sensor media with data obtained from sensors and the aircraft's avionics system; the Data Recording System (DRS), which records all sensor and flight events on magnetic media for later use in generating Treaty-mandated mission reports; and the Dynamic Sensor Mapping Subsystem (DSMS), which provides observers and sensor operators with real-time moving-map displays of the progress of the mission, complete with instantaneous and cumulative sensor coverages. This paper describes DARMS and its subsystems in greater detail, along with the supporting avionics subsystems. 7 figs.
A Simple Spectrophotometer Using Common Materials and a Digital Camera
ERIC Educational Resources Information Center
Widiatmoko, Eko; Widayani; Budiman, Maman; Abdullah, Mikrajuddin; Khairurrijal
2011-01-01
A simple spectrophotometer was designed using cardboard, a DVD, a pocket digital camera, a tripod and a computer. The DVD was used as a diffraction grating and the camera as a light sensor. The spectrophotometer was calibrated using a reference light prior to use. The spectrophotometer was capable of measuring optical wavelengths with a…
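The calibration step, mapping pixel position along the dispersed DVD spectrum to wavelength, can be sketched as a linear fit through known reference lines (the pixel positions and reference wavelengths below are hypothetical):

```python
import numpy as np

def pixel_to_wavelength(ref_pixels, ref_wavelengths_nm):
    """Linear pixel-to-wavelength calibration from reference spectral
    lines (e.g. known lamp or laser lines); returns a mapping function."""
    a, b = np.polyfit(ref_pixels, ref_wavelengths_nm, 1)
    return lambda px: a * np.asarray(px, dtype=float) + b

# Hypothetical calibration: 436 nm seen at pixel 120, 546 nm at pixel 340.
calib = pixel_to_wavelength([120, 340], [436.0, 546.0])
wl = float(calib(230))   # midway pixel -> 491 nm
```

With more than two reference lines, the same `polyfit` call quantifies calibration residuals, i.e. the wavelength accuracy the abstract alludes to.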
Imaging Emission Spectra with Handheld and Cellphone Cameras
ERIC Educational Resources Information Center
Sitar, David
2012-01-01
As point-and-shoot digital camera technology advances, it is becoming easier to image spectra in a laboratory setting on a shoestring budget and get immediate results. With this in mind, I wanted to test three cameras to see how their results would differ. Two undergraduate physics students and I used one handheld 7.1 megapixel (MP) digital Canon…
Quantifying plant colour and colour difference as perceived by humans using digital images.
Kendal, Dave; Hauser, Cindy E; Garrard, Georgia E; Jellinek, Sacha; Giljohann, Katherine M; Moore, Joslin L
2013-01-01
Human perception of plant leaf and flower colour can influence species management. Colour and colour contrast may influence the detectability of invasive or rare species during surveys. Quantitative, repeatable measures of plant colour are required for comparison across studies and generalisation across species. We present a standard method for measuring plant leaf and flower colour traits using images taken with digital cameras. We demonstrate the method by quantifying the colour of and colour difference between the flowers of eleven grassland species near Falls Creek, Australia, as part of an invasive species detection experiment. The reliability of the method was tested by measuring the leaf colour of five residential garden shrub species in Ballarat, Australia using five different types of digital camera. Flowers and leaves had overlapping but distinct colour distributions. Calculated colour differences corresponded well with qualitative comparisons. Estimates of proportional cover of yellow flowers identified using colour measurements correlated well with estimates obtained by measuring and counting individual flowers. Digital SLR and mirrorless cameras were superior to phone cameras and point-and-shoot cameras for producing reliable measurements, particularly under variable lighting conditions. The analysis of digital images taken with digital cameras is a practicable method for quantifying plant flower and leaf colour in the field or lab. Quantitative, repeatable measurements allow for comparisons between species and generalisations across species and studies. This allows plant colour to be related to human perception and preferences and, ultimately, species management.
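A device-independent colour difference of the kind used in such studies is typically computed by converting camera RGB to CIELAB and taking the Euclidean (CIE76) distance; a minimal sketch assuming sRGB input under a D65 white (the authors' exact pipeline, including camera calibration, may differ):

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB triple in [0, 1] to CIELAB (D65 white point)."""
    rgb = np.asarray(rgb, dtype=float)
    # Inverse sRGB transfer function (gamma expansion).
    lin = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = (M @ lin) / np.array([0.95047, 1.0, 1.08883])
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

def delta_e(rgb1, rgb2):
    """CIE76 colour difference between two sRGB colours."""
    return float(np.linalg.norm(srgb_to_lab(rgb1) - srgb_to_lab(rgb2)))

d = delta_e([1.0, 1.0, 1.0], [0.0, 0.0, 0.0])   # white vs. black, ~100
```

Averaging Lab values over a flower or leaf region before taking the difference gives a per-species colour trait that is comparable across images and cameras.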
Tracking a Head-Mounted Display in a Room-Sized Environment with Head-Mounted Cameras
1990-04-01
poor resolution and a very limited working volume [Wan90]. OPTOTRAK [Nor88] uses one camera with two dual-axis CCD infrared position sensors. Each... [Nor88] Northern Digital. Trade literature on Optotrak - Northern Digital's Three Dimensional Optical Motion Tracking and Analysis System. Northern Digital
A Picture is Worth a Thousand Words
ERIC Educational Resources Information Center
Davison, Sarah
2009-01-01
Lions, tigers, and bears, oh my! Digital cameras, young inquisitive scientists, give it a try! In this project, students create an open-ended question for investigation, capture and record their observations--data--with digital cameras, and create a digital story to share their findings. The project follows a 5E learning cycle--Engage, Explore,…
Software Graphical User Interface For Analysis Of Images
NASA Technical Reports Server (NTRS)
Leonard, Desiree M.; Nolf, Scott R.; Avis, Elizabeth L.; Stacy, Kathryn
1992-01-01
CAMTOOL software provides graphical interface between Sun Microsystems workstation and Eikonix Model 1412 digitizing camera system. Camera scans and digitizes images, halftones, reflectives, transmissives, rigid or flexible flat material, or three-dimensional objects. Users digitize images and select from three destinations: work-station display screen, magnetic-tape drive, or hard disk. Written in C.
Fundamentals of in Situ Digital Camera Methodology for Water Quality Monitoring of Coast and Ocean
Goddijn-Murphy, Lonneke; Dailloux, Damien; White, Martin; Bowers, Dave
2009-01-01
Conventional digital cameras, the Nikon Coolpix885® and the SeaLife ECOshot®, were used as in situ optical instruments for water quality monitoring. Measured response spectra showed that these digital cameras are essentially three-band radiometers. The response values in the red, green and blue bands, quantified by RGB values of digital images of the water surface, were comparable to measurements of irradiance levels at red, green and cyan/blue wavelengths of water-leaving light. Different systems were deployed to capture upwelling light from below the surface while eliminating direct surface reflection. Relationships between RGB ratios of water surface images and water quality parameters were found to be consistent with previous measurements using more traditional narrow-band radiometers. This paper focuses on the method that was used to acquire digital images, derive RGB values and relate measurements to water quality parameters. Field measurements were obtained in Galway Bay, Ireland, and in the Southern Rockall Trough in the North Atlantic, where both yellow substance and chlorophyll concentrations were successfully assessed using the digital camera method. PMID:22346729
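The band-ratio idea in this abstract reduces to averaging each channel over a patch of water-surface pixels and forming ratios; the regression that links a given ratio to yellow substance or chlorophyll is a site-specific calibration from the paper and is not reproduced here. The patch values below are hypothetical.

```python
def channel_ratios(pixels):
    """Treat camera RGB as a three-band radiometer: average each channel
    over a patch of water-surface pixels and return band ratios."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) / n
    g = sum(p[1] for p in pixels) / n
    b = sum(p[2] for p in pixels) / n
    # these ratios are the quantities regressed against water-quality
    # parameters in a site-specific calibration
    return {"G/R": g / r, "B/G": b / g}

# hypothetical patch of greenish coastal water
patch = [(40, 90, 70), (42, 88, 72), (38, 92, 69)]
print(channel_ratios(patch))
```

Using ratios rather than absolute RGB values helps cancel overall illumination changes between shots, which is why narrow-band radiometer algorithms are usually formulated the same way.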
Google Glass for Documentation of Medical Findings: Evaluation in Forensic Medicine
2014-01-01
Background Google Glass is a promising premarket device that includes an optical head-mounted display. Several proof of concept reports exist, but there is little scientific evidence regarding its use in a medical setting. Objective The objective of this study was to empirically determine the feasibility of deploying Glass in a forensics setting. Methods Glass was used in combination with a self-developed app that allowed for hands-free operation during autopsy and postmortem examinations of 4 decedents performed by 2 physicians. A digital single-lens reflex (DSLR) camera was used for image comparison. In addition, 6 forensic examiners (3 male, 3 female; age range 23-48 years, age mean 32.8 years, SD 9.6; mean work experience 6.2 years, SD 8.5) were asked to evaluate 159 images for image quality on a 5-point Likert scale, specifically color discrimination, brightness, sharpness, and their satisfaction with the acquired region of interest. Statistical evaluations were performed to determine how Glass compares with conventionally acquired digital images. Results All images received good (median 4) and very good ratings (median 5) for all 4 categories. Autopsy images taken by Glass (n=32) received significantly lower ratings than those acquired by DSLR camera (n=17) (region of interest: z=–5.154, P<.001; sharpness: z=–7.898, P<.001; color: z=–4.407, P<.001, brightness: z=–3.187, P=.001). For 110 images of postmortem examinations (Glass: n=54, DSLR camera: n=56), ratings for region of interest (z=–8.390, P<.001) and brightness (z=–540, P=.007) were significantly lower. For interrater reliability, intraclass correlation (ICC) values were good for autopsy (ICC=.723, 95% CI .667-.771, P<.001) and postmortem examination (ICC=.758, 95% CI .727-.787, P<.001). Postmortem examinations performed using Glass took 42.6 seconds longer than those done with the DSLR camera (z=–2.100, P=.04 using Wilcoxon signed rank test). 
The battery charge of Glass quickly decreased; an average of 5.5% (SD 1.85) of its battery capacity was spent per postmortem examination (0.81% per minute or 0.79% per picture). Conclusions Glass was efficient for acquiring images for documentation in forensic medicine, but the image quality was inferior to that of a DSLR camera. Images taken with Glass received significantly lower ratings for all 4 categories in an autopsy setting and for region of interest and brightness in postmortem examination. The effort necessary for achieving the objectives was higher when using the device compared with the DSLR camera, thus extending the postmortem examination duration. Its relatively high power consumption and low battery capacity are also disadvantages. At the current stage of development, Glass may be an adequate tool for education. For deployment in clinical care, issues such as hygiene, data protection, and privacy need to be addressed and are currently limiting chances for professional use. PMID:24521935
Google Glass for documentation of medical findings: evaluation in forensic medicine.
Albrecht, Urs-Vito; von Jan, Ute; Kuebler, Joachim; Zoeller, Christoph; Lacher, Martin; Muensterer, Oliver J; Ettinger, Max; Klintschar, Michael; Hagemeier, Lars
2014-02-12
Google Glass is a promising premarket device that includes an optical head-mounted display. Several proof of concept reports exist, but there is little scientific evidence regarding its use in a medical setting. The objective of this study was to empirically determine the feasibility of deploying Glass in a forensics setting. Glass was used in combination with a self-developed app that allowed for hands-free operation during autopsy and postmortem examinations of 4 decedents performed by 2 physicians. A digital single-lens reflex (DSLR) camera was used for image comparison. In addition, 6 forensic examiners (3 male, 3 female; age range 23-48 years, age mean 32.8 years, SD 9.6; mean work experience 6.2 years, SD 8.5) were asked to evaluate 159 images for image quality on a 5-point Likert scale, specifically color discrimination, brightness, sharpness, and their satisfaction with the acquired region of interest. Statistical evaluations were performed to determine how Glass compares with conventionally acquired digital images. All images received good (median 4) and very good ratings (median 5) for all 4 categories. Autopsy images taken by Glass (n=32) received significantly lower ratings than those acquired by DSLR camera (n=17) (region of interest: z=-5.154, P<.001; sharpness: z=-7.898, P<.001; color: z=-4.407, P<.001, brightness: z=-3.187, P=.001). For 110 images of postmortem examinations (Glass: n=54, DSLR camera: n=56), ratings for region of interest (z=-8.390, P<.001) and brightness (z=-540, P=.007) were significantly lower. For interrater reliability, intraclass correlation (ICC) values were good for autopsy (ICC=.723, 95% CI .667-.771, P<.001) and postmortem examination (ICC=.758, 95% CI .727-.787, P<.001). Postmortem examinations performed using Glass took 42.6 seconds longer than those done with the DSLR camera (z=-2.100, P=.04 using Wilcoxon signed rank test). 
The battery charge of Glass quickly decreased; an average of 5.5% (SD 1.85) of its battery capacity was spent per postmortem examination (0.81% per minute or 0.79% per picture). Glass was efficient for acquiring images for documentation in forensic medicine, but the image quality was inferior to that of a DSLR camera. Images taken with Glass received significantly lower ratings for all 4 categories in an autopsy setting and for region of interest and brightness in postmortem examination. The effort necessary for achieving the objectives was higher when using the device compared with the DSLR camera, thus extending the postmortem examination duration. Its relatively high power consumption and low battery capacity are also disadvantages. At the current stage of development, Glass may be an adequate tool for education. For deployment in clinical care, issues such as hygiene, data protection, and privacy need to be addressed and are currently limiting chances for professional use.
Color reproduction with a smartphone
NASA Astrophysics Data System (ADS)
Thoms, Lars-Jochen; Colicchia, Giuseppe; Girwidz, Raimund
2013-10-01
The world is full of colors. Most of the colors we see around us can be created on common digital displays simply by superposing light with three different wavelengths. However, no mixture of colors can produce a fully pure color identical to a spectral color. Using a smartphone, students can investigate the main features of primary color addition and understand how colors are made on digital displays.
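The additive-mixing principle this abstract describes can be checked numerically: any mixture of a display's three primaries lands inside the triangle the primaries span on the CIE chromaticity diagram, which is exactly why a monochromatic spectral colour lying outside that triangle cannot be reproduced. A sketch using the standard sRGB primary tristimulus values (the weights are linear channel drives):

```python
def mix_chromaticity(wr, wg, wb):
    """CIE xy chromaticity of an additive mixture of the sRGB primaries,
    given linear weights for the red, green and blue channels."""
    # tristimulus (X, Y, Z) of the sRGB primaries at full drive
    primaries = [(0.4124, 0.2126, 0.0193),   # red
                 (0.3576, 0.7152, 0.1192),   # green
                 (0.1805, 0.0722, 0.9505)]   # blue
    X = Y = Z = 0.0
    for w, (px, py, pz) in zip((wr, wg, wb), primaries):
        X += w * px
        Y += w * py
        Z += w * pz
    s = X + Y + Z
    return X / s, Y / s

# equal full drive of all three primaries lands on the D65 white point,
# x ~ 0.3127, y ~ 0.3290
print(mix_chromaticity(1.0, 1.0, 1.0))
```

Because the mixture's XYZ is a non-negative combination of the primary XYZ vectors, its xy coordinates are always a convex combination of the primary chromaticities: the display gamut is the triangle, and the spectral locus bulges outside it.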
Virtual interactive presence and augmented reality (VIPAR) for remote surgical assistance.
Shenai, Mahesh B; Dillavou, Marcus; Shum, Corey; Ross, Douglas; Tubbs, Richard S; Shih, Alan; Guthrie, Barton L
2011-03-01
Surgery is a highly technical field that combines continuous decision-making with the coordination of spatiovisual tasks. We designed a virtual interactive presence and augmented reality (VIPAR) platform that allows a remote surgeon to deliver real-time virtual assistance to a local surgeon, over a standard Internet connection. The VIPAR system consisted of a "local" and a "remote" station, each situated over a surgical field and a blue screen, respectively. Each station was equipped with a digital viewpiece, composed of 2 cameras for stereoscopic capture, and a high-definition viewer displaying a virtual field. The virtual field was created by digitally compositing selected elements within the remote field into the local field. The viewpieces were controlled by workstations mutually connected by the Internet, allowing virtual remote interaction in real time. Digital renderings derived from volumetric MRI were added to the virtual field to augment the surgeon's reality. For demonstration, a fixed-formalin cadaver head and neck were obtained, and a carotid endarterectomy (CEA) and pterional craniotomy were performed under the VIPAR system. The VIPAR system allowed for real-time, virtual interaction between a local (resident) and remote (attending) surgeon. In both carotid and pterional dissections, major anatomic structures were visualized and identified. Virtual interaction permitted remote instruction for the local surgeon, and MRI augmentation provided spatial guidance to both surgeons. Camera resolution, color contrast, time lag, and depth perception were identified as technical issues requiring further optimization. Virtual interactive presence and augmented reality provide a novel platform for remote surgical assistance, with multiple applications in surgical training and remote expert assistance.
Using DSLR cameras in digital holography
NASA Astrophysics Data System (ADS)
Hincapié-Zuluaga, Diego; Herrera-Ramírez, Jorge; García-Sucerquia, Jorge
2017-08-01
In Digital Holography (DH), the size of the bidimensional image sensor that records the digital hologram plays a key role in the performance of this imaging technique; the larger the camera sensor, the better the quality of the final reconstructed image. Scientific cameras with large formats are offered in the market, but their cost and availability limit their use as a first option when implementing DH. Nowadays, DSLR cameras provide an easy-access alternative that is worth exploring. DSLR cameras are a widely available commercial option that, in comparison with traditional scientific cameras, offers a much lower cost per effective pixel over a large sensing area. However, in DSLR cameras, with their RGB pixel distribution, the sampling of information differs from the sampling in the monochrome cameras usually employed in DH. This fact has implications for their performance. In this work, we discuss why DSLR cameras are not extensively used for DH, taking into account the problem of object replication reported by different authors. Simulations of DH using monochromatic and DSLR cameras are presented, and a theoretical deduction for the replication problem using Fourier theory is also shown. Experimental results of a DH implementation using a DSLR camera show the replication problem.
Studying mixing in Non-Newtonian blue maize flour suspensions using color analysis.
Trujillo-de Santiago, Grissel; Rojas-de Gante, Cecilia; García-Lara, Silverio; Ballescá-Estrada, Adriana; Alvarez, Mario Moisés
2014-01-01
Non-Newtonian fluids occur in many relevant flow and mixing scenarios at the lab and industrial scale. The addition of acid or basic solutions to a non-Newtonian fluid is not an infrequent operation, particularly in biotechnology applications, where the pH of non-Newtonian culture broths is usually regulated using this strategy. We conducted mixing experiments in agitated vessels using non-Newtonian blue maize flour suspensions. Acid or basic pulses were injected to reveal mixing patterns and flow structures and to follow their time evolution. No foreign pH indicator was used, as blue maize flours naturally contain anthocyanins that act as a native, wide-spectrum pH indicator. We describe a novel method to quantitate mixedness and mixing evolution through Dynamic Color Analysis (DCA) in this system. Color readings corresponding to different times and locations within the mixing vessel were taken with a digital camera (or a colorimeter) and translated to the CIELab scale of colors. We use distances in the Lab space, a 3D color space, between a particular mixing state and the final mixing point to characterize segregation/mixing in the system. Blue maize suspensions represent an adequate and flexible model to study mixing (and fluid mechanics in general) in non-Newtonian suspensions using acid/base tracer injections. Simple strategies based on the evaluation of color distances in the CIELab space (or other scales such as HSB) can be adapted to characterize mixedness and mixing evolution in experiments using blue maize suspensions.
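The segregation measure sketched in this abstract, a distance in Lab space between the current state and the fully mixed endpoint, can be written compactly. The probe locations, the Lab readings, and the use of a simple mean over locations are illustrative assumptions rather than the paper's exact DCA protocol.

```python
import math

def mixedness_index(readings_t, readings_final):
    """Mean CIELab distance between colour readings at time t and the
    corresponding readings in the fully mixed final state (0 = mixed)."""
    dists = [math.dist(a, b) for a, b in zip(readings_t, readings_final)]
    return sum(dists) / len(dists)

# hypothetical Lab probe readings at three vessel locations
final = [(52.0, -8.0, 14.0)] * 3            # homogeneous end state
early = [(60.0, -2.0, 20.0), (45.0, -15.0, 9.0), (52.0, -8.0, 14.0)]
print(round(mixedness_index(early, final), 2))
print(mixedness_index(final, final))  # fully mixed -> 0.0
```

Plotting this index against time gives a mixing curve whose decay toward zero characterises how fast the vessel homogenises, which is the quantity DCA is after.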
Color (RGB) imaging laser radar
NASA Astrophysics Data System (ADS)
Ferri De Collibus, M.; Bartolini, L.; Fornetti, G.; Francucci, M.; Guarneri, M.; Nuvoli, M.; Paglia, E.; Ricci, R.
2008-03-01
We present a new color (RGB) imaging 3D laser scanner prototype recently developed at ENEA (Italy). The sensor is based on the AM range finding technique and uses three distinct beams (650 nm, 532 nm and 450 nm, respectively) in a monostatic configuration. During a scan the laser beams are simultaneously swept over the target, yielding range and three separated channels (R, G and B) of reflectance information for each sampled point. This information, organized in range and reflectance images, is then elaborated to produce very high definition color pictures and faithful, natively colored 3D models. Notable characteristics of the system are the absence of shadows in the acquired reflectance images - due to the system's monostatic setup and intrinsic self-illumination capability - and high noise rejection, achieved by using a narrow field of view and interferential filters. The system is also very accurate in range determination (relative accuracy better than 10^-4) at distances up to several meters. These unprecedented features make the system particularly suited to applications in the domain of cultural heritage preservation, where it could be used by conservators for examining in detail the status of degradation of frescoed walls, monuments and paintings, even at several meters of distance and in hardly accessible locations. After providing some theoretical background, we describe the general architecture and operation modes of the color 3D laser scanner, reporting and discussing first experimental results and comparing high-definition color images produced by the instrument with photographs of the same subjects taken with a Nikon D70 digital camera.
Objective evaluation of slanted edge charts
NASA Astrophysics Data System (ADS)
Hornung, Harvey
2015-01-01
Camera objective characterization methodologies are widely used in the digital camera industry. Most objective characterization systems rely on a chart with specific patterns; a software algorithm measures a degradation or difference between the captured image and the chart itself. The Spatial Frequency Response (SFR) method, which is part of the ISO 12233 standard, is now very commonly used in the imaging industry, as it is a very convenient way to measure a camera's modulation transfer function (MTF). The SFR algorithm can measure frequencies beyond the Nyquist frequency thanks to super-resolution, so it provides useful information on aliasing and can provide modulation for frequencies between half Nyquist and Nyquist on all color channels of a color sensor with a Bayer pattern. The measurement process relies on a chart that is simple to manufacture: a straight transition from a bright reflectance to a dark one (black and white, for instance), whereas a sine chart requires handling precise shades of gray, which can also create all sorts of issues with printers that rely on half-toning. However, no technology can create a perfect edge, so it is important to assess the quality of the chart and understand how it affects the accuracy of the measurement. In this article, I describe a protocol to characterize the MTF of a slanted edge chart using a high-resolution flatbed scanner. The main idea is to use the RAW output of the scanner as a high-resolution micro-densitometer: since the signal is linear, it is suitable for measuring the chart MTF with the SFR algorithm. The scanner needs to be calibrated in sharpness: the scanner MTF is measured with a calibrated sine chart and inverted to compensate for the modulation loss from the scanner. Then the true chart MTF is computed.
This article compares measured MTF from commercial charts and from charts printed on printers, compares how the contrast of the edge (using different shades of gray) affects the chart MTF, and concludes with the distance range and camera resolution over which the chart can reliably measure the camera MTF.
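The core of the computation described above — take the edge spread function, differentiate to get a line spread function, Fourier-transform it, then divide out the scanner's own MTF — can be sketched with NumPy. A real ISO 12233 implementation adds edge-angle estimation and 4x-oversampled projection binning, which are omitted here, and the synthetic edge is an assumption for demonstration.

```python
import numpy as np

def mtf_from_esf(esf):
    """MTF from a 1-D edge spread function: differentiate to get the
    line spread function, window it, take the magnitude spectrum."""
    lsf = np.gradient(esf)
    lsf *= np.hanning(lsf.size)      # suppress truncation ripple
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]              # normalise to 1 at zero frequency

def compensate_scanner(measured_mtf, scanner_mtf):
    """True chart MTF = measured MTF divided by the scanner MTF."""
    return measured_mtf / scanner_mtf

# synthetic edge: ideal step blurred into a 12-pixel-wide ramp
x = np.arange(256)
esf = np.clip((x - 128) / 6.0, -1, 1) * 0.5 + 0.5
mtf = mtf_from_esf(esf)
chart_mtf = compensate_scanner(mtf, np.ones_like(mtf))  # identity scanner for the demo
print(mtf[0], mtf[len(mtf) // 2])
```

The division in `compensate_scanner` is the calibration step from the abstract: measuring the scanner against a calibrated sine chart gives `scanner_mtf`, and inverting it removes the scanner's modulation loss from the chart measurement.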
NASA Astrophysics Data System (ADS)
Mikhalev, Aleksandr; Podlesny, Stepan; Stoeva, Penka
2016-09-01
To study dynamics of the upper atmosphere, we consider results of night sky photometry using a color CCD camera, taking into account the night airglow and features of its spectral composition. We use night airglow observations for 2010-2015, which have been obtained at the ISTP SB RAS Geophysical Observatory (52° N, 103° E) by a camera with a KODAK KAI-11002 CCD sensor. We estimate the average brightness of the night sky in the R, G, B channels of the color camera for eastern Siberia, with typical values ranging from ~0.008 to 0.01 erg·cm⁻²·s⁻¹. Besides, we determine seasonal variations in the night sky luminosities in the R, G, B channels of the color camera. In these channels, luminosities decrease in spring, increase in autumn, and have a pronounced summer maximum, which can be explained by scattered light and is associated with the location of the Geophysical Observatory. We consider geophysical phenomena with their optical effects in the R, G, B channels of the color camera. For some geophysical phenomena (geomagnetic storms, sudden stratospheric warmings), we demonstrate the possibility of a quantitative relationship between enhanced signals in the R and G channels and increases in intensities of the discrete 557.7 and 630 nm emissions, which are predominant in the airglow spectrum.
NASA Astrophysics Data System (ADS)
Nikolashkin, S. V.; Reshetnikov, A. A.
2017-11-01
A system of video surveillance during active rocket experiments at the Polar Geophysical Observatory "Tixie", used to study the effects of "Soyuz" vehicle launches from the "Vostochny" cosmodrome over the territory of the Republic of Sakha (Yakutia), is presented. The system consists of three AHD video cameras with different angles of view, mounted on a common platform on a tripod with the possibility of manual guiding. The main camera, with a high-sensitivity black-and-white CCD matrix (SONY EXview HAD II), is equipped, depending on the task, with an "MTO-1000" (F = 1000 mm) or "Jupiter-21M" (F = 300 mm) lens and is designed for more detailed shooting of luminous formations. The second camera is of the same type but has a 30-degree angle of view; it is intended for shooting the general scene and large objects, and for referencing object coordinates against the stars. The third camera, a wide-angle (120 degrees) color camera, is designed for daytime referencing to landmarks; the optical axis of this channel is directed 60 degrees downward. The data are recorded on the hard disk of a four-channel digital video recorder. Tests of the original two-channel version of the system were conducted during the launch of a geophysical rocket at Tixie in September 2015 and showed its effectiveness.
A high-speed digital camera system for the observation of rapid H-alpha fluctuations in solar flares
NASA Technical Reports Server (NTRS)
Kiplinger, Alan L.; Dennis, Brian R.; Orwig, Larry E.
1989-01-01
Researchers developed a prototype digital camera system for obtaining H-alpha images of solar flares with 0.1 s time resolution. They intend to operate this system in conjunction with SMM's Hard X-Ray Burst Spectrometer, with X-ray instruments which will be available on the Gamma Ray Observatory, and eventually with the Gamma Ray Imaging Device (GRID) and the High Resolution Gamma-Ray and Hard X-Ray Spectrometer (HIREGS), which are being developed for the Max '91 program. The digital camera has recently proven to be successful as a one-camera system operating in the blue wing of H-alpha during the first Max '91 campaign. Construction and procurement of a second and possibly a third camera for simultaneous observations at other wavelengths are underway, as are analyses of the campaign data.
Accurate and cost-effective MTF measurement system for lens modules of digital cameras
NASA Astrophysics Data System (ADS)
Chang, Gao-Wei; Liao, Chia-Cheng; Yeh, Zong-Mu
2007-01-01
For many years, the widening use of digital imaging products, e.g., digital cameras, has attracted much attention in the consumer electronics market. However, it is important to measure and enhance the imaging performance of digital cameras relative to that of conventional cameras with photographic film. For example, the effect of diffraction arising from the miniaturization of the optical modules tends to decrease the image resolution. As a figure of merit, the modulation transfer function (MTF) has been broadly employed to estimate image quality. Therefore, the objective of this paper is to design and implement an accurate and cost-effective MTF measurement system for the digital camera. Once the MTF of the sensor array is provided, that of the optical module can then be obtained. In this approach, a spatial light modulator (SLM) is employed to modulate the spatial frequency of light emitted from the light source. The modulated light going through the camera under test is consecutively detected by the sensors. The corresponding images formed from the camera are acquired by a computer and then processed by an algorithm for computing the MTF. Finally, an investigation of the measurement accuracy against various methods, such as bar-target and spread-function methods, shows that our approach gives quite satisfactory results.
Use of Standardized, Quantitative Digital Photography in a Multicenter Web-based Study
Molnar, Joseph A.; Lew, Wesley K.; Rapp, Derek A.; Gordon, E. Stanley; Voignier, Denise; Rushing, Scott; Willner, William
2009-01-01
Objective: We developed a Web-based, blinded, prospective, randomized, multicenter trial, using standardized digital photography to clinically evaluate hand burn depth and accurately determine wound area with digital planimetry. Methods: Photos in each center were taken with identical digital cameras with standardized settings on a custom backdrop developed at Wake Forest University containing a gray, white, black, and centimeter scale. The images were downloaded, transferred via the Web, and stored on servers at the principal investigator's home institution. Color adjustments to each photo were made using Adobe Photoshop 6.0 (Adobe, San Jose, Calif). In an initial pilot study, model hands marked with circles of known areas were used to determine the accuracy of the planimetry technique. Two-dimensional digital planimetry using SigmaScan Pro 5.0 (SPSS Science, Chicago, Ill) was used to calculate wound area from the digital images. Results: Digital photography is a simple and cost-effective method for quantifying wound size when used in conjunction with digital planimetry (SigmaScan) and photo enhancement (Adobe Photoshop) programs. The accuracy of the SigmaScan program in calculating predetermined areas was within 4.7% (95% CI, 3.4%–5.9%). Dorsal hand burns of the initial 20 patients in a national study involving several centers were evaluated with this technique. Images obtained by individuals who reported no experience in photography proved reliable and useful for clinical evaluation and quantification of wound area. Conclusion: Standardized digital photography may be used quantitatively in a Web-based, multicenter trial of burn care. This technique could be modified for other medical studies with visual endpoints. PMID:19212431
Use of standardized, quantitative digital photography in a multicenter Web-based study.
Molnar, Joseph A; Lew, Wesley K; Rapp, Derek A; Gordon, E Stanley; Voignier, Denise; Rushing, Scott; Willner, William
2009-01-01
We developed a Web-based, blinded, prospective, randomized, multicenter trial, using standardized digital photography to clinically evaluate hand burn depth and accurately determine wound area with digital planimetry. Photos in each center were taken with identical digital cameras with standardized settings on a custom backdrop developed at Wake Forest University containing a gray, white, black, and centimeter scale. The images were downloaded, transferred via the Web, and stored on servers at the principal investigator's home institution. Color adjustments to each photo were made using Adobe Photoshop 6.0 (Adobe, San Jose, Calif). In an initial pilot study, model hands marked with circles of known areas were used to determine the accuracy of the planimetry technique. Two-dimensional digital planimetry using SigmaScan Pro 5.0 (SPSS Science, Chicago, Ill) was used to calculate wound area from the digital images. Digital photography is a simple and cost-effective method for quantifying wound size when used in conjunction with digital planimetry (SigmaScan) and photo enhancement (Adobe Photoshop) programs. The accuracy of the SigmaScan program in calculating predetermined areas was within 4.7% (95% CI, 3.4%-5.9%). Dorsal hand burns of the initial 20 patients in a national study involving several centers were evaluated with this technique. Images obtained by individuals who reported no experience in photography proved reliable and useful for clinical evaluation and quantification of wound area. Standardized digital photography may be used quantitatively in a Web-based, multicenter trial of burn care. This technique could be modified for other medical studies with visual endpoints.
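The planimetry step described above — converting a traced pixel region to physical area using the centimeter scale included in the photo backdrop — reduces to dividing the pixel count by the square of the image scale. The function below is a generic sketch, not SigmaScan's algorithm, and the numbers are hypothetical.

```python
def wound_area_cm2(pixel_count, pixels_per_cm):
    """Physical area of a traced region, given the image scale
    measured from the centimeter ruler in the photo backdrop."""
    # one square centimeter covers pixels_per_cm ** 2 pixels
    return pixel_count / pixels_per_cm ** 2

# hypothetical trace: 18,500 pixels at a scale of 40 px/cm
print(wound_area_cm2(18500, 40))  # -> 11.5625 (cm^2)
```

Because the scale is re-measured from the ruler in every photo, the conversion stays valid even if camera-to-subject distance varies slightly between centers.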
Mount Sharp Panorama in Raw Colors
2013-03-15
This mosaic of images from the Mastcam onboard NASA's Mars rover Curiosity shows Mount Sharp in raw color. Raw color shows the scene colors as they would look in a typical smartphone camera photo, before any adjustment.
Status of the photomultiplier-based FlashCam camera for the Cherenkov Telescope Array
NASA Astrophysics Data System (ADS)
Pühlhofer, G.; Bauer, C.; Eisenkolb, F.; Florin, D.; Föhr, C.; Gadola, A.; Garrecht, F.; Hermann, G.; Jung, I.; Kalekin, O.; Kalkuhl, C.; Kasperek, J.; Kihm, T.; Koziol, J.; Lahmann, R.; Manalaysay, A.; Marszalek, A.; Rajda, P. J.; Reimer, O.; Romaszkan, W.; Rupinski, M.; Schanz, T.; Schwab, T.; Steiner, S.; Straumann, U.; Tenzer, C.; Vollhardt, A.; Weitzel, Q.; Winiarski, K.; Zietara, K.
2014-07-01
The FlashCam project is preparing a camera prototype around a fully digital FADC-based readout system for the medium sized telescopes (MST) of the Cherenkov Telescope Array (CTA). The FlashCam design is the first fully digital readout system for Cherenkov cameras, based on commercial FADCs and FPGAs as key components for digitization and triggering, and a high performance camera server as back end. It provides the option to easily implement different types of trigger algorithms as well as digitization and readout scenarios using identical hardware, by simply changing the firmware on the FPGAs. The readout of the front end modules into the camera server is Ethernet-based, using standard Ethernet switches and a custom raw Ethernet protocol. In the current implementation of the system, data transfer and back end processing rates of 3.8 GB/s and 2.4 GB/s have been achieved, respectively. Together with the dead-time-free front end event buffering on the FPGAs, this permits the cameras to operate at trigger rates of up to several tens of kHz. In the horizontal architecture of FlashCam, the photon detector plane (PDP), consisting of photon detectors, preamplifiers, high voltage-, control-, and monitoring systems, is a self-contained unit, mechanically detached from the front end modules. It interfaces to the digital readout system via analogue signal transmission. The horizontal integration of FlashCam is expected not only to be more cost efficient but also to allow PDPs with different types of photon detectors to be adapted to the FlashCam readout system. By now, a 144-pixel "mini-camera" setup, fully equipped with photomultipliers, PDP electronics, and digitization/trigger electronics, has been realized and extensively tested. Preparations for full-scale 1764-pixel camera mechanics and a cooling system are ongoing. The paper describes the status of the project.
Bischoff, Guido; Böröcz, Zoltan; Proll, Christian; Kleinheinz, Johannes; von Bally, Gert; Dirksen, Dieter
2007-08-01
Optical topometric 3D sensors such as laser scanners and fringe projection systems allow detailed digital acquisition of human body surfaces. For many medical applications, however, not only the current shape is important, but also its changes, e.g., in the course of surgical treatment. In such cases, time delays of several months between subsequent measurements frequently occur. A modular 3D coordinate measuring system based on the fringe projection technique is presented that allows 3D coordinate acquisition including calibrated color information, as well as the detection and visualization of deviations between subsequent measurements. In addition, parameters describing the symmetry of body structures are determined. The quantitative results of the analysis may be used as a basis for objective documentation of surgical therapy. The system is designed in a modular way, and thus, depending on the object of investigation, two or three cameras with different capabilities in terms of resolution and color reproduction can be utilized to optimize the set-up.
Mars Exploration Rover Athena Panoramic Camera (Pancam) investigation
Bell, J.F.; Squyres, S. W.; Herkenhoff, K. E.; Maki, J.N.; Arneson, H.M.; Brown, D.; Collins, S.A.; Dingizian, A.; Elliot, S.T.; Hagerott, E.C.; Hayes, A.G.; Johnson, M.J.; Johnson, J. R.; Joseph, J.; Kinch, K.; Lemmon, M.T.; Morris, R.V.; Scherr, L.; Schwochert, M.; Shepard, M.K.; Smith, G.H.; Sohl-Dickstein, J. N.; Sullivan, R.J.; Sullivan, W.T.; Wadsworth, M.
2003-01-01
The Panoramic Camera (Pancam) investigation is part of the Athena science payload launched to Mars in 2003 on NASA's twin Mars Exploration Rover (MER) missions. The scientific goals of the Pancam investigation are to assess the high-resolution morphology, topography, and geologic context of each MER landing site, to obtain color images to constrain the mineralogic, photometric, and physical properties of surface materials, and to determine dust and aerosol opacity and physical properties from direct imaging of the Sun and sky. Pancam also provides mission support measurements for the rovers, including Sun-finding for rover navigation, hazard identification and digital terrain modeling to help guide long-term rover traverse decisions, high-resolution imaging to help guide the selection of in situ sampling targets, and acquisition of education and public outreach products. The Pancam optical, mechanical, and electronics designs were optimized to achieve these science and mission support goals. Pancam is a multispectral, stereoscopic, panoramic imaging system consisting of two digital cameras mounted on a mast 1.5 m above the Martian surface. The mast allows Pancam to image the full 360° in azimuth and ±90° in elevation. Each Pancam camera utilizes a 1024 × 1024 active imaging area frame transfer CCD detector array. The Pancam optics have an effective focal length of 43 mm and a focal ratio of f/20, yielding an instantaneous field of view of 0.27 mrad/pixel and a field of view of 16° × 16°. Each rover's two Pancam "eyes" are separated by 30 cm and have a 1° toe-in to provide adequate stereo parallax. Each eye also includes a small eight-position filter wheel to allow surface mineralogic studies, multispectral sky imaging, and direct Sun imaging in the 400-1100 nm wavelength region. Pancam was designed and calibrated to operate within specifications on Mars at temperatures from -55° to +5°C.
An onboard calibration target and fiducial marks provide the capability to validate the radiometric and geometric calibration on Mars. Copyright 2003 by the American Geophysical Union.
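The quoted optical parameters are mutually consistent, as a quick calculation shows. The roughly 12 µm pixel pitch below is inferred from the stated IFOV and focal length rather than given in the abstract:

```python
import math

# Consistency check of the Pancam optical parameters quoted above.
focal_length_mm = 43.0   # effective focal length (from the text)
f_ratio = 20.0           # focal ratio f/20 (from the text)
ifov_mrad = 0.27         # instantaneous field of view per pixel (from the text)
pixels = 1024            # detector width in pixels (from the text)

aperture_mm = focal_length_mm / f_ratio            # ~2.15 mm entrance aperture
pixel_pitch_um = ifov_mrad * focal_length_mm        # mrad * mm = um; ~11.6 um
fov_deg = math.degrees(pixels * ifov_mrad / 1000)   # ~15.8 deg, quoted as 16 deg

print(aperture_mm, pixel_pitch_um, fov_deg)
```

The inferred pitch of about 11.6 µm and full field of about 15.8° match the quoted "0.27 mrad/pixel" and "16° × 16°" figures.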
How Many Pixels Does It Take to Make a Good 4"×6" Print? Pixel Count Wars Revisited
NASA Astrophysics Data System (ADS)
Kriss, Michael A.
Digital still cameras emerged following the introduction of the Sony Mavica analog prototype camera in 1981. These early cameras produced poor image quality and did not challenge film cameras for overall quality. By 1995 digital still cameras in expensive SLR formats had 6 mega-pixels and produced high-quality images (with significant image processing). In 2005 significant improvement in image quality was apparent, and lower prices for digital still cameras (DSCs) started a rapid decline in film usage and film camera sales. By 2010 film usage was mostly limited to professionals and the motion picture industry. The rise of DSCs was marked by a “pixel war” in which the driving feature of the cameras was the pixel count, such that even moderate-cost (~$120) DSCs would have 14 mega-pixels. The improvement of CMOS technology pushed this trend of lower prices and higher pixel counts. Only the single lens reflex cameras had large sensors and large pixels. The drive for smaller pixels hurt the quality aspects of the final image (sharpness, noise, speed, and exposure latitude). Only today are camera manufacturers starting to reverse their course and produce DSCs with larger sensors and pixels. This paper will explore why larger pixels and sensors are key to the future of DSCs.
Digital dental photography. Part 4: choosing a camera.
Ahmad, I
2009-06-13
With so many cameras and systems on the market, choosing the right one for your practice needs is a daunting task. As described in Part 1 of this series, a digital single lens reflex (DSLR) camera is an ideal choice for dental use, enabling the taking of portraits, close-up or macro images of the dentition, and study casts. However, for the sake of completeness, some other camera systems used in dentistry are also discussed.
2016-06-25
The equipment used in this procedure includes an Ann Arbor distortion tester with a 50-line grating reticule and an IQeye 720 digital video camera with a 12... In order to digitally capture images of the distortion in an optical sample, the IQeye 720 video camera was used with the Ann Arbor distortion tester and the images were imported into MATLAB. Figure 8 shows the computer interface for capturing images seen by the IQeye 720 camera.
Phenology cameras observing boreal ecosystems of Finland
NASA Astrophysics Data System (ADS)
Peltoniemi, Mikko; Böttcher, Kristin; Aurela, Mika; Kolari, Pasi; Tanis, Cemal Melih; Linkosalmi, Maiju; Loehr, John; Metsämäki, Sari; Nadir Arslan, Ali
2016-04-01
Cameras have become useful tools for monitoring the seasonality of ecosystems. Low-cost cameras facilitate validation of other measurements and allow extracting some key ecological features and moments from image time series. We installed a network of phenology cameras at selected ecosystem research sites in Finland. Cameras were installed above, at, and/or below canopy level. The current network hosts cameras taking time-lapse images in coniferous and deciduous forests as well as at open wetlands, thus offering possibilities to monitor various phenological and time-associated events and elements. In this poster, we present our camera network and give examples of the use of image series for research. We will show results on the stability of camera-derived color signals and, based on these, discuss the applicability of cameras for monitoring time-dependent phenomena. We will also present results from comparisons between camera-derived color signal time series and daily satellite-derived time series (NDVI, NDWI, and fractional snow cover) from the Moderate Resolution Imaging Spectroradiometer (MODIS) at selected spruce and pine forests and in a wetland. We will discuss the applicability of cameras in supporting phenological observations derived from satellites, considering the ability of cameras to monitor both above- and below-canopy phenology and snow.
Hayashida, Tetsuya; Iwasaki, Hiroaki; Masaoka, Kenichiro; Shimizu, Masanori; Yamashita, Takayuki; Iwai, Wataru
2017-06-26
We selected appropriate indices for color rendition and determined their recommended values for ultra-high-definition television (UHDTV) production using white LED lighting. Since the spectral sensitivities of UHDTV cameras can be designed to approximate the ideal spectral sensitivities of UHDTV colorimetry, they have more accurate color reproduction than HDTV cameras, and thus the color-rendering properties of the lighting are critical. Comparing images taken under white LEDs with conventional color rendering indices (R a , R 9-14 ) and recently proposed methods for evaluating color rendition of CQS, TM-30, Q a , and SSI, we found the combination of R a and R 9 appropriate. For white LED lighting, R a ≥ 90 and R 9 ≥ 80 are recommended for UHDTV production.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsi, W; Lee, T; Schultz, T
Purpose: To evaluate the accuracy of a two-dimensional optical dosimeter in measuring lateral profiles for spots and scanned fields of proton pencil beams. Methods: A digital camera with a color image sensor was utilized to image proton-induced scintillations on a gadolinium oxysulfide phosphor reflected by a stainless-steel mirror. Intensities of three colors were summed for each pixel with proper spatial-resolution calibration. To benchmark this dosimeter, the field size and penumbra for 100 mm square fields of single-energy pencil-scan protons were measured and compared between this optical dosimeter and an ionization-chamber profiler. Sigma widths of proton spots in air were measured and compared between this dosimeter and a commercial optical dosimeter. Clinical proton beams with ranges between 80 mm and 300 mm at the CDH proton center were used for this benchmark. Results: Pixel resolutions vary by 1.5% between two perpendicular axes. For a pencil-scan field with a 302 mm range, measured field sizes and penumbras between the two detection systems agreed to 0.5 mm and 0.3 mm, respectively. Sigma widths agreed to 0.3 mm between the two optical dosimeters for a proton spot with a 158 mm range, having widths of 5.76 mm and 5.92 mm for the X and Y axes, respectively. Similar agreements were obtained for other beam ranges. This dosimeter was successfully utilized for mapping the shapes and sizes of proton spots during the technical acceptance of the McLaren proton therapy system. Snowflake-like spots seen on images indicated sensor pixels damaged by radiation. Minor variations in intensity between different colors were observed. Conclusions: The accuracy of our dosimeter was in good agreement with other established devices in measuring lateral profiles of pencil-scan fields and proton spots. A precise docking mechanism for the camera was designed to keep the optical path aligned while replacing a damaged image sensor. Causes of the minor variations between emitted color lights will be investigated.
78 FR 17875 - Commercial Driver's License Testing and Commercial Learner's Permit Standards
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-25
... Section 383.153(b)(1) prohibits States from placing a digital color image or photograph or black and white... petitions requesting reconsideration on the grounds that prohibiting the inclusion of a digital color image... include a digital color image or photograph or black and white laser engraved photograph or other visual...
NASA Astrophysics Data System (ADS)
Hatala, J.; Sonnentag, O.; Detto, M.; Runkle, B.; Vargas, R.; Kelly, M.; Baldocchi, D. D.
2009-12-01
Ground-based, visible-light imagery has been used for different purposes in agricultural and ecological research. A series of recent studies explored the utilization of networked digital cameras to continuously monitor vegetation by taking oblique canopy images at fixed view angles and time intervals. In our contribution we combine high temporal resolution digital camera imagery, eddy-covariance, and meteorological measurements with weekly field-based hyperspectral and LAI measurements to gain new insights into temporal changes in canopy structure and functioning of two managed ecosystems in California’s Sacramento-San Joaquin River Delta: a pasture infested by the invasive perennial pepperweed (Lepidium latifolium) and a rice plantation (Oryza sativa). Specific questions we address are: a) how does year-round grazing affect pepperweed canopy development, b) is it possible to identify phenological key events of managed ecosystems (pepperweed: flowering; rice: heading) from the limited spectral information of digital camera imagery, c) is a simple greenness index derived from digital camera imagery sufficient to track leaf area index and canopy development of managed ecosystems, and d) what are the scales of temporal correlation between digital camera signals and carbon and water fluxes of managed ecosystems? Preliminary results for the pasture-pepperweed ecosystem show that year-round grazing inhibits the accumulation of dead stalks, causing earlier green-up, and that digital camera imagery is well suited to capture the onset of flowering and the associated decrease in photosynthetic CO2 uptake. Results from our analyses are of great relevance from both a global environmental change and a land management perspective.
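One common form of the "simple greenness index" referred to in question (c) is the green chromatic coordinate, GCC = G/(R+G+B), averaged over a canopy region of interest. The sketch below is a minimal illustration of that index, not the authors' actual processing chain:

```python
import numpy as np

# Green chromatic coordinate (GCC) over an RGB canopy image.
def green_chromatic_coordinate(rgb_image):
    """rgb_image: H x W x 3 array of digital numbers."""
    rgb = rgb_image.astype(float)
    total = rgb.sum(axis=2)
    total[total == 0] = np.nan            # avoid division by zero
    gcc = rgb[:, :, 1] / total            # G / (R + G + B) per pixel
    return np.nanmean(gcc)

# A uniformly green test patch should give GCC of exactly 1.
green_patch = np.zeros((4, 4, 3))
green_patch[:, :, 1] = 200.0
print(green_chromatic_coordinate(green_patch))  # -> 1.0
```

A gray patch (R = G = B) gives GCC = 1/3, the neutral baseline against which seasonal green-up is tracked.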
NASA Astrophysics Data System (ADS)
Tabuchi, Toru; Yamagata, Shigeki; Tamura, Tetsuo
2003-04-01
With increasing automobile traffic, there is a growing demand for information that helps avoid accidents. We discuss how an infrared camera can identify three conditions (dry, aquaplane, frozen) of the road surface. The principles of this method are: 1. We have found that a 3-color infrared camera can distinguish these conditions using proper data processing. 2. The emissivity of the materials on the road surface (concrete, water, ice) differs in three wavelength regions. 3. The sky's temperature is lower than the road's, and the emissivity of the road depends on the road surface conditions. Therefore, the 3-color infrared camera measures the energy reflected from the sky onto the road surface together with the self-radiation of the road surface. The road condition can be distinguished by processing the energy pattern measured in the three wavelength regions. Our experimental results show that the emissivity of concrete differs from that of water. An infrared camera whose NETD (Noise Equivalent Temperature Difference) at each of the three wavelengths is 1.0°C or less can distinguish the road conditions by using the emissivity difference.
Identification of vegetable diseases using neural network
NASA Astrophysics Data System (ADS)
Zhang, Jiacai; Tang, Jianjun; Li, Yao
2007-02-01
Vegetables are widely planted all over China, but they often suffer from diseases. A method of major technical and economic importance is introduced in this paper, which explores the feasibility of implementing fast and reliable automatic identification of vegetable diseases and their infection grades from color and morphological features of leaves. Firstly, leaves are plucked from the clustered plant and pictures of the leaves are taken with a CCD digital color camera. Secondly, color and morphological characteristics are obtained by standard image processing techniques: for example, the Otsu thresholding method segments the region of interest, a morphological opening-followed-by-closing algorithm removes noise, and Principal Components Analysis reduces the dimension of the original features. Then, a recently proposed boosting algorithm, AdaBoost.M2, is applied to RBF networks for disease classification based on the above features, where the kernel function of the RBF networks is a Gaussian whose argument is the Euclidean distance of the input vector from a center. Our experiment was performed on a database collected by the Chinese Academy of Agricultural Sciences, and the results show that boosted RBF networks classify the 230 cucumber leaves into 2 different diseases (downy mildew and angular leaf spot) and identify the infection grade of each disease according to the infection degree.
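The Otsu thresholding step mentioned above can be sketched directly from its between-class-variance definition; the morphological clean-up, PCA, and AdaBoost.M2 stages are omitted, and the test image is synthetic:

```python
import numpy as np

# Otsu's method: pick the gray-level threshold that maximizes the
# between-class variance of the resulting two-class partition.
def otsu_threshold(gray):
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0        # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2                  # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Bimodal test image: dark background (40), bright leaf region (200).
img = np.full((20, 20), 40, dtype=np.uint8)
img[5:15, 5:15] = 200
t = otsu_threshold(img)
mask = img > t          # segmented region of interest
print(t, mask.sum())    # threshold falls between the modes; 100 foreground pixels
```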
Hyperspectral imaging-based credit card verifier structure with adaptive learning.
Sumriddetchkajorn, Sarun; Intaravanne, Yuttana
2008-12-10
We propose and experimentally demonstrate a hyperspectral imaging-based optical structure for verifying a credit card. Our key idea comes from the fact that the fine detail of the embossed hologram stamped on the credit card is hard to duplicate, and therefore its key color features can be used for distinguishing between the real and counterfeit ones. As the embossed hologram is a diffractive optical element, we shine a number of broadband light sources one at a time, each at a different incident angle, on the embossed hologram of the credit card in such a way that different color spectra per incident angle beam are diffracted and separated in space. In this way, the center of mass of the histogram on each color plane is investigated by using a feed-forward backpropagation neural-network configuration. Our experimental demonstration using two off-the-shelf broadband white light emitting diodes, one digital camera, and a three-layer neural network can effectively identify 38 genuine and 109 counterfeit credit cards with false rejection rates of 5.26% and 0.92%, respectively. Key features include low cost, simplicity, no moving parts, no need of an additional decoding key, and adaptive learning.
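The "center of mass of the histogram on each color plane" used as the feature above can be sketched as follows; the toy image is illustrative and the downstream neural-network classifier is omitted:

```python
import numpy as np

# Center of mass (mean gray level weighted by histogram counts) of one
# color plane; one such value per RGB plane forms the feature vector.
def histogram_center_of_mass(plane):
    """plane: 2-D uint8 array for a single color channel."""
    hist = np.bincount(plane.ravel(), minlength=256).astype(float)
    levels = np.arange(256)
    return (levels * hist).sum() / hist.sum()

image = np.zeros((8, 8, 3), dtype=np.uint8)
image[..., 0] = 100                      # red plane: every pixel at level 100
features = [histogram_center_of_mass(image[..., c]) for c in range(3)]
print(features)  # -> [100.0, 0.0, 0.0]
```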
Designation and verification of road markings detection and guidance method
NASA Astrophysics Data System (ADS)
Wang, Runze; Jian, Yabin; Li, Xiyuan; Shang, Yonghong; Wang, Jing; Zhang, JingChuan
2018-01-01
With the rapid development of China's space industry, digitization and intelligence are the tendency of the future. This report presents foundational research on a guidance system based on the HSV color space. This research will help to design the automatic navigation and parking system for the frock transport car and the infrared lamp homogeneity intelligent test equipment. The drive mode, steering mode, and navigation method were selected. In consideration of practicability, it was determined to use a front-wheel-steering chassis. The steering mechanism is controlled by stepping motors and guided by machine vision. The steering mechanism was optimized and calibrated: a mathematical model was built and the objective functions were constructed for it. The extraction method of the steering line was studied, and the motion controller was designed and optimized. The theory of the HSV and RGB color spaces and an analysis of the testing results are discussed. Camera calibration is fulfilled using the OpenCV function library on a Linux system, and the guidance algorithm is designed based on the HSV color space.
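The HSV-based guidance idea can be sketched as a simple hue/saturation gate on candidate line pixels. This sketch uses Python's standard colorsys module rather than the OpenCV calls used in the report, and the band thresholds are illustrative assumptions, not values from the text:

```python
import colorsys

# Keep pixels whose hue lies in an assumed target band (e.g. a painted
# guidance line) and whose saturation is high enough to reject gray floor.
def is_line_pixel(r, g, b, hue_lo=0.55, hue_hi=0.75, sat_min=0.4):
    """RGB components in [0, 1]; returns True for saturated in-band pixels."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return hue_lo <= h <= hue_hi and s >= sat_min

print(is_line_pixel(0.1, 0.2, 0.9))  # saturated blue paint -> True
print(is_line_pixel(0.8, 0.8, 0.8))  # unsaturated gray floor -> False
```

Gating in HSV rather than RGB makes the test largely insensitive to illumination level, which is the usual motivation for choosing HSV in machine-vision guidance.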
Coincidence ion imaging with a fast frame camera
NASA Astrophysics Data System (ADS)
Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander H.; Fan, Lin; Li, Wen
2014-12-01
A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide-semiconductor) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single-anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the PMT processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of the PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragment pair (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide.
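The real-time centroiding step can be sketched as an intensity-weighted mean over above-threshold pixels, which reduces each ion spot to a position plus a total intensity (the quantity later correlated with PMT peak heights). Frame size, threshold, and spot values below are illustrative:

```python
import numpy as np

# Intensity-weighted centroid of all pixels above a threshold.
def centroid(frame, threshold):
    ys, xs = np.nonzero(frame > threshold)      # rows, cols of bright pixels
    weights = frame[ys, xs].astype(float)
    total = weights.sum()
    return xs @ weights / total, ys @ weights / total, total

frame = np.zeros((16, 16))
frame[6, 9] = 80.0
frame[6, 10] = 80.0   # two equal pixels -> centroid midway at x = 9.5, y = 6
x, y, intensity = centroid(frame, threshold=10.0)
print(x, y, intensity)  # -> 9.5 6.0 160.0
```

A real implementation would first label connected bright regions so that each spot gets its own centroid; this sketch treats the whole frame as one spot.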
Sackstein, M
2006-10-01
Over the last five years digital photography has become ubiquitous. For the family photo album, a 4 or 5 megapixel camera costing about 2000 NIS will produce satisfactory results for most people. However, for intra-oral photography the common wisdom holds that only professional photographic equipment is up to the task. Such equipment typically costs around 12,000 NIS and includes the camera body, an attachable macro lens and a ringflash. The following article challenges this conception. Although professional equipment does produce the most exemplary results, a highly effective database of clinical pictures can be compiled even with a "non-professional" digital camera. Since the year 2002, my clinical work has been routinely documented with digital cameras of the Nikon CoolPix series. The advantages are that these digicams are economical both in price and in size and allow easy transport and operation when compared to their expensive and bulky professional counterparts. The details of how to use a non-professional digicam to produce and maintain an effective clinical picture database, for documentation, monitoring, demonstration and professional fulfillment, are described below.
NASA Technical Reports Server (NTRS)
Mcewen, Alfred S.; Duck, B.; Edwards, Kathleen
1991-01-01
A high resolution controlled mosaic of the hemisphere of Io centered on longitude 310 degrees is produced. Digital cartographic techniques were employed. Approximately 80 Voyager 1 clear and blue filter frames were utilized. This mosaic was merged with low-resolution color images. This dataset is compared to the geologic map of this region. Passage of the Voyager spacecraft through the Io plasma torus during acquisition of the highest resolution images exposed the vidicon detectors to ionizing radiation, resulting in dark-current buildup on the vidicon. Because the vidicon is scanned from top to bottom, more charge accumulated toward the bottom of the frames, and the additive error increases from top to bottom as a ramp function. This ramp function was removed by using a model. Photometric normalizations were applied using the Minnaert function. An attempt to use Hapke's photometric function revealed that this function does not adequately describe Io's limb darkening at emission angles greater than 80 degrees. In contrast, the Minnaert function accurately describes the limb darkening up to emission angles of about 89 degrees. The improved set of discrete camera angles derived from this effort will be used in conjunction with the space telemetry pointing history file (the IPPS file), corrected on 4- or 12-second intervals, to derive a revised time history for the pointing of the Infrared Interferometric Spectrometer (IRIS). For IRIS observations acquired between camera shutterings, the IPPS file can be corrected by linear interpolation, provided that the spacecraft motions were continuous. Image areas corresponding to the fields of view of IRIS spectra acquired between camera shutterings will be extracted from the mosaic to place the IRIS observations and hotspot models into geologic context.
Salazar, Antonio José; Camacho, Juan Camilo; Aguirre, Diego Andrés
2012-02-01
A common teleradiology practice is digitizing films. The costs of specialized digitizers are very high, which is why there is a trend toward using conventional scanners and digital cameras. Statistical clinical studies, which are very difficult to carry out, are required to determine the accuracy of these devices. The purpose of this study was to compare three capture devices in terms of their capacity to detect several image characteristics. Spatial resolution, contrast, gray levels, and geometric deformation were compared for a specialized digitizer ICR (US$ 15,000), a conventional scanner UMAX (US$ 1,800), and a digital camera LUMIX (US$ 450, which requires an additional support system and a light box for about US$ 400). Test patterns printed on films were used. All three devices detected gray levels lower than the real values, with acceptable contrast and low geometric deformation. All three devices are appropriate solutions, but a digital camera requires more operator training and more settings.
Shaw, S L; Salmon, E D; Quatrano, R S
1995-12-01
In this report, we describe a relatively inexpensive method for acquiring, storing and processing light microscope images that combines the advantages of video technology with the powerful medium now termed digital photography. Digital photography refers to the recording of images as digital files that are stored, manipulated and displayed using a computer. This report details the use of a gated video-rate charge-coupled device (CCD) camera and a frame grabber board for capturing 256 gray-level digital images from the light microscope. This camera gives high-resolution bright-field, phase contrast and differential interference contrast (DIC) images but, also, with gated on-chip integration, has the capability to record low-light level fluorescent images. The basic components of the digital photography system are described, and examples are presented of fluorescence and bright-field micrographs. Digital processing of images to remove noise, to enhance contrast and to prepare figures for printing is discussed.
ERIC Educational Resources Information Center
Rowe, Deborah Wells; Miller, Mary E.
2016-01-01
This paper reports the findings of a two-year design study exploring instructional conditions supporting emerging, bilingual/biliterate, four-year-olds' digital composing. With adult support, children used child-friendly, digital cameras and iPads equipped with writing, drawing and bookmaking apps to compose multimodal, multilingual eBooks…
ERIC Educational Resources Information Center
Hoge, Robert Joaquin
2010-01-01
Within the sphere of education, navigating throughout a digital world has become a matter of necessity for the developing professional, as with the advent of Document Camera Technology (DCT). This study explores the pedagogical implications of implementing DCT; to see if there is a relationship between teachers' comfort with DCT and to the…
Digital Diversity: A Basic Tool with Lots of Uses
ERIC Educational Resources Information Center
Coy, Mary
2006-01-01
In this article the author relates how the digital camera has altered the way she teaches and the way her students learn. She also emphasizes the importance for teachers to have software that can edit, print, and incorporate photos. She cites several instances in which a digital camera can be used: (1) PowerPoint presentations; (2) Open house; (3)…
Quantifying Human Visible Color Variation from High Definition Digital Images of Orb Web Spiders.
Tapia-McClung, Horacio; Ajuria Ibarra, Helena; Rao, Dinesh
2016-01-01
Digital processing and analysis of high resolution images of 30 individuals of the orb web spider Verrucosa arenata were performed to extract and quantify human visible colors present on the dorsal abdomen of this species. Color extraction was performed with minimal user intervention using an unsupervised algorithm to determine groups of colors on each individual spider, which was then analyzed in order to quantify and classify the colors obtained, both spatially and using energy and entropy measures of the digital images. Analysis shows that the colors cover a small region of the visible spectrum, are not spatially homogeneously distributed over the patterns and from an entropic point of view, colors that cover a smaller region on the whole pattern carry more information than colors covering a larger region. This study demonstrates the use of processing tools to create automatic systems to extract valuable information from digital images that are precise, efficient and helpful for the understanding of the underlying biology.
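The entropy measure referred to above can be sketched as the Shannon entropy of per-color coverage fractions: colors covering a small part of the pattern contribute more information per unit area than dominant ones. The clustering that produces those fractions is assumed to have been done, and the numbers are illustrative:

```python
import numpy as np

# Shannon entropy (bits) of the fractions of the pattern covered by each
# extracted color; higher entropy means more evenly distributed colors.
def color_entropy(coverage_fractions):
    p = np.asarray(coverage_fractions, dtype=float)
    p = p[p > 0]              # ignore absent colors (0 * log 0 = 0)
    p = p / p.sum()           # normalize to a probability distribution
    return float(-(p * np.log2(p)).sum())

# One dominant color and one rare color carry low entropy...
print(color_entropy([0.95, 0.05]))
# ...while four evenly distributed colors maximize it: log2(4) = 2 bits.
print(color_entropy([0.25, 0.25, 0.25, 0.25]))  # -> 2.0
```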
NASA Astrophysics Data System (ADS)
Suzuki, H.; Yamashita, R.
2017-12-01
It is important to quantify the amplitude of turbulent motion to understand the energy and momentum budgets and the distribution of minor constituents in the upper mesosphere. In particular, knowing the eddy diffusion coefficient of minor constituents that are locally and impulsively produced by energetic particle precipitation in the polar mesopause is one of the most important subjects in upper atmospheric science. One straightforward method of determining the amplitude of the eddy motion is to measure the wind field in both the spatial and temporal domains. However, observation techniques satisfying such requirements are limited in this region. In this study, the horizontal wind field in the polar mesopause region is derived by tracking the motion of noctilucent clouds (NLCs). NLCs are the highest clouds on Earth, appearing in the mesopause region during the summer season in both polar regions. Since the vertical structure of an NLC is sufficiently thin (typically within several hundred meters), the apparent horizontal motion observed from the ground can be regarded as the result of transport by the horizontal winds at a single altitude. In this presentation, initial results of wind field derivation by tracking the motion of NLCs observed by a ground-based color digital camera in Iceland are reported. The procedure for wind field estimation consists of three steps: (1) project raw images onto a geographical map, (2) enhance NLC structures using an FFT-based method, and (3) determine horizontal velocity vectors by applying a template-matching method to two sequential images. In this talk, a result of wind derivation using successive images of an NLC at 3-minute intervals over a 1.5 h duration, observed on the night of Aug 1st, 2013, will be reported as a case study.
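Step (3), template matching between sequential frames, can be sketched as a brute-force sum-of-squared-differences search for a cloud feature's displacement, which a map scale and time step then convert to a wind speed. The map scale and cadence values below are assumed for illustration:

```python
import numpy as np

# Best (row, col) placement of template inside image: SSD minimum over
# all integer shifts (a minimal stand-in for production template matching).
def match_displacement(template, image):
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            ssd = ((image[y:y + th, x:x + tw] - template) ** 2).sum()
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos

rng = np.random.default_rng(0)
frame1 = rng.random((30, 30))
frame2 = np.roll(frame1, shift=(2, 5), axis=(0, 1))  # "cloud" moved 2 px down, 5 px right

template = frame1[10:20, 10:20]          # feature patch from the first frame
y, x = match_displacement(template, frame2)
dy, dx = y - 10, x - 10                  # recovered pixel displacement
km_per_pixel, dt_s = 1.0, 180.0          # assumed map scale and 3-min cadence
speed_ms = np.hypot(dy, dx) * km_per_pixel * 1000 / dt_s
print(dy, dx, speed_ms)
```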
Camera-Model Identification Using Markovian Transition Probability Matrix
NASA Astrophysics Data System (ADS)
Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei
Detecting the brands and models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most images are JPEG-compressed before they are output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of the Y and Cb components from JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four different directional Markov processes applied to the image difference JPEG 2-D arrays are used to identify statistical differences caused by the image formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding technique, are directly used as features for classification purposes. Multi-class support vector machines (SVMs) are used as the classification tool. The effectiveness of our proposed statistical model is demonstrated by large-scale experimental results.
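The thresholded transition-probability features can be sketched as follows. This operates on a plain integer array and one (horizontal) direction for illustration, whereas the paper forms the features from difference JPEG 2-D arrays of the Y and Cb components in four directions:

```python
import numpy as np

T = 3  # threshold: clipped difference values live in {-T, ..., T}

# Markov transition probability matrix of adjacent horizontal differences.
def transition_matrix(arr):
    diff = np.diff(arr.astype(int), axis=1)          # horizontal difference array
    diff = np.clip(diff, -T, T)                      # thresholding step
    cur = diff[:, :-1].ravel() + T                   # shift to index range 0..2T
    nxt = diff[:, 1:].ravel() + T
    counts = np.zeros((2 * T + 1, 2 * T + 1))
    np.add.at(counts, (cur, nxt), 1)                 # count transitions
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1                      # leave empty rows at zero
    return counts / row_sums                         # row-normalized probabilities

arr = np.tile(np.arange(8), (4, 1))   # constant gradient: every difference is 1
P = transition_matrix(arr)
print(P[1 + T, 1 + T])  # a difference of 1 is always followed by 1 -> 1.0
```

Each direction yields (2T+1)² = 49 such probabilities; concatenated over directions and components, these form the feature vector passed to the SVM.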
Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging.
Feng, Wei; Zhang, Fumin; Wang, Weijing; Xing, Wei; Qu, Xinghua
2017-05-01
In this paper, we overcome the limited dynamic range of conventional digital cameras and propose a method of realizing high dynamic range imaging (HDRI) with a novel programmable imaging system called a digital micromirror device (DMD) camera. The unique feature of the proposed method is that the spatial and temporal information of incident light in our DMD camera can be flexibly modulated, enabling the camera pixels to always receive a reasonable exposure intensity through DMD pixel-level modulation. More importantly, it allows different light intensity control algorithms to be used in our programmable imaging system to achieve HDRI. We implement the optical system prototype, analyze the theory of per-pixel coded exposure for HDRI, and put forward an adaptive light intensity control algorithm to effectively modulate different light intensities to recover high dynamic range images. Via experiments, we demonstrate the effectiveness of our method and implement HDRI on different objects.
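The per-pixel coded exposure principle can be sketched in a few lines: attenuate each pixel's exposure via the DMD so the sensor value stays below saturation, then divide by the attenuation to recover radiance. The duty cycles and values are illustrative, and the adaptive control loop that chooses them is omitted:

```python
import numpy as np

FULL_WELL = 255.0  # assumed sensor saturation level

# Radiance recovery: divide each captured value by its DMD duty cycle.
def recover_radiance(captured, attenuation):
    """captured: sensor values; attenuation: per-pixel duty cycle in (0, 1]."""
    return captured / attenuation

# A bright pixel that would saturate at full exposure is attenuated to a
# 10% duty cycle, keeping it in range while preserving its radiance.
scene = np.array([[50.0, 2000.0]])       # true radiance, arbitrary units
attenuation = np.array([[1.0, 0.1]])     # per-pixel DMD modulation
captured = np.clip(scene * attenuation, 0, FULL_WELL)
print(captured)                                   # [[ 50. 200.]] -- no saturation
print(recover_radiance(captured, attenuation))    # [[  50. 2000.]]
```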
Photogrammetry of a 5m Inflatable Space Antenna With Consumer Digital Cameras
NASA Technical Reports Server (NTRS)
Pappa, Richard S.; Giersch, Louis R.; Quagliaroli, Jessica M.
2000-01-01
This paper discusses photogrammetric measurements of a 5m-diameter inflatable space antenna using four Kodak DC290 (2.1 megapixel) digital cameras. The study had two objectives: 1) determine the photogrammetric measurement precision obtained using multiple consumer-grade digital cameras, and 2) gain experience with new commercial photogrammetry software packages, specifically PhotoModeler Pro from Eos Systems, Inc. The paper covers the eight steps required with this hardware/software combination. The baseline data set contained four images of the structure taken from various viewing directions, each image coming from a separate camera. This approach simulated the use of multiple time-synchronized cameras, which will be required in future tests of vibrating or deploying ultra-lightweight space structures. With four images, the average measurement precision for more than 500 points on the antenna surface was less than 0.020 inches in-plane and approximately 0.050 inches out-of-plane.
Coughlan, James; Manduchi, Roberto
2009-01-01
We describe a wayfinding system for blind and visually impaired persons that uses a camera phone to determine the user's location with respect to color markers, posted at locations of interest (such as offices), which are automatically detected by the phone. The color marker signs are specially designed to be detected in real time in cluttered environments using computer vision software running on the phone; a novel segmentation algorithm quickly locates the borders of the color marker in each image, which allows the system to calculate how far the marker is from the phone. We present a model of how the user's scanning strategy (i.e. how he/she pans the phone left and right to find color markers) affects the system's ability to detect color markers given the limitations imposed by motion blur, which is always a possibility whenever a camera is in motion. Finally, we describe experiments with our system tested by blind and visually impaired volunteers, demonstrating their ability to reliably use the system to find locations designated by color markers in a variety of indoor and outdoor environments, and elucidating which search strategies were most effective for users. PMID:19960101
New ultrasensitive pickup device for deep-sea robots: underwater super-HARP color TV camera
NASA Astrophysics Data System (ADS)
Maruyama, Hirotaka; Tanioka, Kenkichi; Uchida, Tetsuo
1994-11-01
An ultra-sensitive underwater super-HARP color TV camera has been developed. The characteristics -- spectral response, lag, etc. -- of the super-HARP tube had to be designed for use underwater because the propagation of light in water is very different from that in air, and also depends on the light's wavelength. The tubes have new electrostatic focusing and magnetic deflection functions and are arranged in parallel to miniaturize the camera. A deep sea robot (DOLPHIN 3K) was fitted with this camera and used for the first sea test in Sagami Bay, Japan. The underwater visual information was clear enough to promise significant improvements in both deep sea surveying and safety. It was thus confirmed that the Super-HARP camera is very effective for underwater use.
Traffic Sign Recognition with Invariance to Lighting in Dual-Focal Active Camera System
NASA Astrophysics Data System (ADS)
Gu, Yanlei; Panahpour Tehrani, Mehrdad; Yendo, Tomohiro; Fujii, Toshiaki; Tanimoto, Masayuki
In this paper, we present an automatic vision-based traffic sign recognition system that can detect and classify traffic signs at long distance under different lighting conditions. To this end, the traffic sign recognition is implemented in an originally proposed dual-focal active camera system, in which a telephoto camera is equipped as an assistant to a wide angle camera. The telephoto camera can capture a high-accuracy image of an object of interest in the field of view of the wide angle camera, and thus provides enough information for recognition when the traffic sign appears at too low a resolution in the wide angle image. In the proposed system, traffic sign detection and classification are processed separately on the images from the wide angle and telephoto cameras. In addition, in order to detect traffic signs against complex backgrounds under different lighting conditions, we propose a color transformation that is invariant to lighting changes. This transformation highlights the pattern of traffic signs by reducing the complexity of the background. Based on the color transformation, a multi-resolution detector with cascade mode is trained and used to locate traffic signs at low resolution in the image from the wide angle camera. After detection, the system actively captures a high-accuracy image of each detected traffic sign by controlling the direction and exposure time of the telephoto camera based on information from the wide angle camera. In classification, a hierarchical classifier is constructed and used to recognize the detected traffic signs in the high-accuracy image from the telephoto camera. Finally, a set of experiments in the domain of traffic sign recognition is presented. The experimental results demonstrate that the proposed system can effectively recognize traffic signs at low resolution under different lighting conditions.
Integrating TV/digital data spectrograph system
NASA Technical Reports Server (NTRS)
Duncan, B. J.; Fay, T. D.; Miller, E. R.; Wamsteker, W.; Brown, R. M.; Neely, P. L.
1975-01-01
A 25-mm vidicon camera was previously modified to allow operation in an integration mode for low-light-level astronomical work. The camera was then mated to a low-dispersion spectrograph for obtaining spectral information in the 400 to 750 nm range. A high speed digital video image system was utilized to digitize the analog video signal, place the information directly into computer-type memory, and record data on digital magnetic tape for permanent storage and subsequent analysis.
Printed products for digital cameras and mobile devices
NASA Astrophysics Data System (ADS)
Fageth, Reiner; Schmidt-Sacht, Wulf
2005-01-01
Digital photography is no longer simply a successor to film. The digital market is now driven by additional devices such as mobile phones with camera and video functions (camphones) as well as innovative products derived from digital files. A large number of consumers do not print their images and non-printing has become the major enemy of wholesale printers, home printing suppliers and retailers. This paper addresses the challenge facing our industry, namely how to encourage the consumer to print images easily and conveniently from all types of digital media.
Color image guided depth image super resolution using fusion filter
NASA Astrophysics Data System (ADS)
He, Jin; Liang, Bin; He, Ying; Yang, Jun
2018-04-01
Depth cameras currently play an important role in many areas. However, most of them can only obtain low-resolution (LR) depth images, whereas color cameras can easily provide high-resolution (HR) color images. Using a color image as a guide is an efficient way to obtain an HR depth image. In this paper, we propose a depth image super resolution (SR) algorithm that takes an HR color image as the guide and an LR depth image as input, and uses a fusion of a guided filter and an edge-based joint bilateral filter to produce the HR depth image. Experimental results on the Middlebury 2005 datasets show that our method provides better quality HR depth images both numerically and visually.
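A minimal joint bilateral upsampling sketch in the spirit of the abstract's guide-image approach; the paper's actual fusion of a guided filter with an edge-based joint bilateral filter is more elaborate, and the window radius and sigma values here are illustrative:

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, color_hr, sigma_s=2.0, sigma_r=10.0, r=3):
    """Color-guided depth upsampling with a plain joint bilateral filter.

    Spatial weights come from pixel distance, range weights from the HR
    color guide, so depth edges snap to color edges.
    """
    H, W = color_hr.shape[:2]
    h, w = depth_lr.shape
    # Nearest-neighbour upsample of the LR depth onto the HR grid
    ys = np.arange(H) * h // H
    xs = np.arange(W) * w // W
    depth_up = depth_lr[np.ix_(ys, xs)].astype(np.float64)
    guide = color_hr.astype(np.float64)
    out = np.zeros_like(depth_up)
    norm = np.zeros_like(depth_up)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            shifted = np.roll(np.roll(depth_up, dy, axis=0), dx, axis=1)
            g_shift = np.roll(np.roll(guide, dy, axis=0), dx, axis=1)
            ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))  # spatial
            diff = ((guide - g_shift) ** 2).sum(axis=-1) if guide.ndim == 3 \
                else (guide - g_shift) ** 2
            wr = np.exp(-diff / (2 * sigma_r ** 2))                 # range (color)
            out += ws * wr * shifted
            norm += ws * wr
    return out / norm
```

A real implementation would also handle image borders explicitly rather than wrapping with `np.roll`.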
Facial skin color measurement based on camera colorimetric characterization
NASA Astrophysics Data System (ADS)
Yang, Boquan; Zhou, Changhe; Wang, Shaoqing; Fan, Xin; Li, Chao
2016-10-01
The objective measurement of facial skin color and its variance is of great significance, as much information can be obtained from it. In this paper, we developed a new skin color measurement procedure with the following parts: first, a new skin tone color checker, made from Pantone skin tone samples, was designed for camera colorimetric characterization; second, the chromaticity of the light source was estimated via a new scene illumination estimation method that draws on several previous algorithms; third, chromatic adaptation was used to convert the input facial image into an output image that appears to have been taken under a canonical light; finally, the validity and accuracy of the method were verified by comparing the results obtained by our procedure with those measured by a spectrophotometer.
Modeling of digital information optical encryption system with spatially incoherent illumination
NASA Astrophysics Data System (ADS)
Bondareva, Alyona P.; Cheremkhin, Pavel A.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.; Starikov, Sergey N.
2015-10-01
State-of-the-art micromirror DMD spatial light modulators (SLMs) offer unprecedented frame rates of up to 30,000 frames per second. This, in conjunction with a high speed digital camera, should allow building a high speed optical encryption system. Results of modeling a digital information optical encryption system with spatially incoherent illumination are presented. The input information is displayed on the first SLM and the encryption element on the second SLM. Factors taken into account are: the resolution of the SLMs and camera, hologram reconstruction noise, camera noise, and signal sampling. Results of the numerical simulation demonstrate high speed (several gigabytes per second), low bit error rate and high cryptographic strength.
Toward a digital camera to rival the human eye
NASA Astrophysics Data System (ADS)
Skorka, Orit; Joseph, Dileepan
2011-07-01
All things considered, electronic imaging systems do not rival the human visual system despite notable progress over 40 years since the invention of the CCD. This work presents a method that allows design engineers to evaluate the performance gap between a digital camera and the human eye. The method identifies limiting factors of the electronic systems by benchmarking against the human system. It considers power consumption, visual field, spatial resolution, temporal resolution, and properties related to signal and noise power. A figure of merit is defined as the performance gap of the weakest parameter. Experimental work done with observers and cadavers is reviewed to assess the parameters of the human eye, and assessment techniques are also covered for digital cameras. The method is applied to 24 modern image sensors of various types, where an ideal lens is assumed to complete a digital camera. Results indicate that dynamic range and dark limit are the most limiting factors. The substantial functional gap, from 1.6 to 4.5 orders of magnitude, between the human eye and digital cameras may arise from architectural differences between the human retina, arranged in a multiple-layer structure, and image sensors, mostly fabricated in planar technologies. Functionality of image sensors may be significantly improved by exploiting technologies that allow vertical stacking of active tiers.
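The paper's figure of merit, the performance gap of the weakest parameter, can be illustrated with a small sketch; the parameter values and the dB-to-decades conversions below are invented for illustration, not taken from the paper:

```python
import math

# Hypothetical parameter values for the human eye and one image sensor;
# the numbers are illustrative placeholders, not measured data.
human_eye = {"dynamic_range_dB": 90.0, "dark_limit_lux": 1e-6, "snr_dB": 40.0}
camera = {"dynamic_range_dB": 60.0, "dark_limit_lux": 1e-3, "snr_dB": 38.0}

def figure_of_merit(eye, cam):
    """Performance gap (in orders of magnitude) of the weakest parameter.

    Each parameter's gap is expressed in decades: dB gaps are divided by
    20 dB per decade, and the dark-limit gap is the log10 ratio of the
    dimmest detectable illuminance. The largest gap defines the figure.
    """
    gaps = {
        "dynamic_range": (eye["dynamic_range_dB"] - cam["dynamic_range_dB"]) / 20.0,
        "dark_limit": math.log10(cam["dark_limit_lux"] / eye["dark_limit_lux"]),
        "snr": (eye["snr_dB"] - cam["snr_dB"]) / 20.0,
    }
    weakest = max(gaps, key=gaps.get)
    return weakest, gaps[weakest]
```

With these placeholder numbers the dark limit dominates, echoing the paper's finding that dynamic range and dark limit are the most limiting factors.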
An instructional guide for leaf color analysis using digital imaging software
Paula F. Murakami; Michelle R. Turner; Abby K. van den Berg; Paul G. Schaberg
2005-01-01
Digital color analysis has become an increasingly popular and cost-effective method utilized by resource managers and scientists for evaluating foliar nutrition and health in response to environmental stresses. We developed and tested a new method of digital image analysis that uses Scion Image or NIH image public domain software to quantify leaf color. This...
NASA Astrophysics Data System (ADS)
Seo, Hokuto; Aihara, Satoshi; Namba, Masakazu; Watabe, Toshihisa; Ohtake, Hiroshi; Kubota, Misao; Egami, Norifumi; Hiramatsu, Takahiro; Matsuda, Tokiyoshi; Furuta, Mamoru; Nitta, Hiroshi; Hirao, Takashi
2010-01-01
Our group has been developing a new type of image sensor overlaid with three organic photoconductive films, each sensitive to only one of the primary color components (blue (B), green (G), or red (R) light), with the aim of developing a compact, high resolution color camera without any color separation optics. In this paper, we first describe the unique characteristics of organic photoconductive films: the photoconductive properties of a film can be tuned simply by the choice of organic materials, yielding wavelength selectivities good enough to divide the incident light into the three primary colors. Color separation with vertically stacked organic films is also shown. In addition, a resolution of the organic photoconductive films sufficient for high-definition television (HDTV) was confirmed in a shooting experiment using a camera tube. Secondly, as a step toward our goal, we fabricated a stacked organic image sensor with G- and R-sensitive organic photoconductive films, each with a zinc oxide (ZnO) thin film transistor (TFT) readout circuit, and demonstrated image pickup at a TV frame rate. A color image with a resolution corresponding to the pixel count of the ZnO TFT readout circuit was obtained from the stacked image sensor. These results show the potential for the development of high-resolution prism-less color cameras with stacked organic photoconductive films.
Hand and goods judgment algorithm based on depth information
NASA Astrophysics Data System (ADS)
Li, Mingzhu; Zhang, Jinsong; Yan, Dan; Wang, Qin; Zhang, Ruiqi; Han, Jing
2016-03-01
A tablet computer with a depth camera and a color camera is mounted on a traditional shopping cart, and the two cameras observe the interior of the cart. For shopping cart monitoring, it is very important to determine whether a customer is putting goods into, or taking them out of, the cart. This paper establishes a basic framework for judging whether a hand is empty. It includes hand extraction based on depth information; building of a skin color model based on weighted principal component analysis (WPCA); an algorithm for judging handheld products based on motion and skin color information; and a statistical process. The first step preserves the integrity of the hand information and effectively avoids interference from sleeves and other clutter; the second step accurately extracts skin color and eliminates interference from similar colors, is robust to lighting changes, and is computationally fast and efficient; and the third step greatly reduces noise interference and improves accuracy.
Multimodal digital color imaging system for facial skin lesion analysis
NASA Astrophysics Data System (ADS)
Bae, Youngwoo; Lee, Youn-Heum; Jung, Byungjo
2008-02-01
In dermatology, various digital imaging modalities have been used as important tools to quantitatively evaluate the treatment of skin lesions. Cross-polarization color images have been used to evaluate skin chromophore (melanin and hemoglobin) information, and parallel-polarization images to evaluate skin texture. In addition, UV-A induced fluorescence images have been widely used to evaluate various skin conditions such as sebum, keratosis, sun damage, and vitiligo. To maximize the evaluation efficacy across various skin lesions, it is necessary to integrate these imaging modalities into a single imaging system. In this study, we propose a multimodal digital color imaging system that provides four different digital color images: a standard color image, parallel- and cross-polarization color images, and a UV-A induced fluorescence color image. Herein, we describe the imaging system and present examples of image analysis. By analyzing the color information and morphological features of facial skin lesions, we are able to evaluate various skin lesions comparably and simultaneously. In conclusion, the multimodal color imaging system can serve as an important assistive tool in dermatology.
ERIC Educational Resources Information Center
Northcote, Maria
2011-01-01
Digital cameras are now commonplace in many classrooms and in the lives of many children in early childhood centres and primary schools. They are regularly used by adults and teachers for "saving special moments and documenting experiences." The use of previously expensive photographic and recording equipment has often remained in the domain of…
A stereoscopic lens for digital cinema cameras
NASA Astrophysics Data System (ADS)
Lipton, Lenny; Rupkalvis, John
2015-03-01
Live-action stereoscopic feature films are, for the most part, produced using a costly post-production process to convert planar cinematography into stereo-pair images and are only occasionally shot stereoscopically using bulky dual-cameras that are adaptations of the Ramsdell rig. The stereoscopic lens design described here might very well encourage more live-action image capture because it uses standard digital cinema cameras and workflow to save time and money.
Carlson, Jay; Kowalczuk, Jędrzej; Psota, Eric; Pérez, Lance C
2012-01-01
Robotic surgical platforms require vision feedback systems, which often consist of low-resolution, expensive, single-imager analog cameras. These systems are retooled for 3D display by simply doubling the cameras and outboard control units. Here, a fully-integrated digital stereoscopic video camera employing high-definition sensors and a class-compliant USB video interface is presented. This system can be used with low-cost PC hardware and consumer-level 3D displays for tele-medical surgical applications including military medical support, disaster relief, and space exploration.
A direct-view customer-oriented digital holographic camera
NASA Astrophysics Data System (ADS)
Besaga, Vira R.; Gerhardt, Nils C.; Maksimyak, Peter P.; Hofmann, Martin R.
2018-01-01
In this paper, we propose a direct-view digital holographic camera system consisting mostly of customer-oriented components. The camera system is based on standard photographic units such as camera sensor and objective and is adapted to operate under off-axis external white-light illumination. The common-path geometry of the holographic module of the system ensures direct-view operation. The system can operate in both self-reference and self-interference modes. As a proof of system operability, we present reconstructed amplitude and phase information of a test sample.
Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications
NASA Technical Reports Server (NTRS)
Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1997-01-01
A laser-directed ranging system has utility for use in various fields, such as telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.
Makhambet Crater - False Color
2015-01-29
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA's 2001 Mars Odyssey spacecraft shows Makhambet Crater.
Color Camera for Curiosity Robotic Arm
2010-11-16
The Mars Hand Lens Imager (MAHLI) camera will fly on NASA's Mars Science Laboratory mission, launching in late 2011. This photo of the camera was taken before MAHLI's November 2010 installation onto the robotic arm of the mission's Mars rover, Curiosity.
Calibration View of Earth and the Moon by Mars Color Imager
NASA Technical Reports Server (NTRS)
2005-01-01
Three days after the Mars Reconnaissance Orbiter's Aug. 12, 2005, launch, the spacecraft was pointed toward Earth and the Mars Color Imager camera was powered up to acquire a suite of images of Earth and the Moon. When it gets to Mars, the Mars Color Imager's main objective will be to obtain daily global color and ultraviolet images of the planet to observe martian meteorology by documenting the occurrence of dust storms, clouds, and ozone. This camera will also observe how the martian surface changes over time, including changes in frost patterns and surface brightness caused by dust storms and dust devils. The purpose of acquiring an image of Earth and the Moon just three days after launch was to help the Mars Color Imager science team obtain a measure, in space, of the instrument's sensitivity, as well as to check that no contamination occurred on the camera during launch. Prior to launch, the team determined that, three days out from Earth, the planet would only be about 4.77 pixels across, and the Moon would be less than one pixel in size, as seen from the Mars Color Imager's wide-angle perspective. If the team waited any longer than three days to test the camera's performance in space, Earth would be too small to obtain meaningful results. The Earth and Moon images were acquired by turning Mars Reconnaissance Orbiter toward Earth, then slewing the spacecraft so that the Earth and Moon would pass before each of the five color and two ultraviolet filters of the Mars Color Imager. The distance to the Moon was about 1,440,000 kilometers (about 895,000 miles); the range to Earth was about 1,170,000 kilometers (about 727,000 miles). This view combines a sequence of frames showing the passage of Earth and the Moon across the field of view of a single color band of the Mars Color Imager. As the spacecraft slewed to view the two objects, they passed through the camera's field of view. 
Earth has been saturated white in this image so that both Earth and the Moon can be seen in the same frame. The Sun was coming from the left, so Earth and the Moon are seen in a quarter phase. Earth is on the left. The Moon appears briefly on the right. The Moon fades in and out; the Moon is only one pixel in size, and its fading is an artifact of the size and configuration of the light-sensitive pixels of the camera's charge-coupled device (CCD) detector.
Wagner, John H; Miskelly, Gordon M
2003-05-01
The combination of photographs taken at wavelengths at and bracketing the peak of a narrow absorbance band can lead to enhanced visualization of the substance causing that band. This concept can be used to detect putative bloodstains by dividing a linear photographic image taken at or near 415 nm by an image obtained by averaging linear photographs taken at or near 395 and 435 nm. Nonlinear images can also be background corrected by substituting subtraction for the division. This paper details experimental applications and limitations of this technique, including wavelength selection at the illuminant and at the camera. Characterization of a digital camera for use in such a study is also detailed. Detection limits for blood using the three-wavelength correction method under optimum conditions were determined to be as low as a 1 in 900 dilution, although on strongly patterned substrates blood diluted more than twenty-fold is difficult to detect. Using only the 435 nm photograph to estimate the background in the 415 nm image led to a twofold improvement in detection limit on unpatterned substrates compared with the three-wavelength method with the particular camera and lighting system used, but it gave poorer background correction on patterned substrates.
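The three-wavelength division step described in the abstract can be sketched directly; image registration, linearization, and the camera characterization the paper discusses are omitted:

```python
import numpy as np

def blood_enhancement(im415, im395, im435, eps=1e-6):
    """Three-wavelength ratio correction for putative bloodstains.

    A linear image at 415 nm (the hemoglobin Soret band) is divided by
    the average of flanking images at 395 and 435 nm, so broadband
    substrate reflectance cancels while the narrow absorbance dip
    survives. Inputs are assumed to be linear (not gamma-encoded).
    """
    background = (im395.astype(np.float64) + im435.astype(np.float64)) / 2.0
    ratio = im415.astype(np.float64) / (background + eps)
    return ratio  # noticeably < 1 where the 415 nm band absorbs (possible blood)
```

For nonlinear (gamma-encoded) images, the abstract's alternative is to subtract the averaged flanking images instead of dividing.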
A nondestructive method to estimate the chlorophyll content of Arabidopsis seedlings
Liang, Ying; Urano, Daisuke; Liao, Kang-Ling; ...
2017-04-14
Chlorophyll content decreases in plants under stress conditions, and is therefore commonly used as an indicator of plant health. Arabidopsis thaliana offers a convenient and fast way to test physiological phenotypes of mutations and treatments, but chlorophyll measurement by conventional solvent extraction is not applicable to Arabidopsis leaves owing to their small size, especially when grown on culture dishes. We provide a nondestructive method for chlorophyll measurement in which the red, green and blue (RGB) values of a color leaf image are used to estimate the chlorophyll content of Arabidopsis leaves. The method accommodates different camera profiles by using the ColorChecker chart to build digital negative profiles, adjust the white balance, and calibrate for exposure differences caused by the environment, so the method is applicable in any setting. We chose an exponential function to model chlorophyll content as a function of the RGB values and fitted the model parameters against physical measurements of chlorophyll content. As further proof of utility, the method was used to estimate the chlorophyll content of G protein mutants grown on different sugar to nitrogen ratios. Our method is a simple, fast, inexpensive, and nondestructive estimation of the chlorophyll content of Arabidopsis seedlings, and it led to the discovery that G proteins are important in sensing the C/N balance that controls chlorophyll content in Arabidopsis.
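The RGB-to-chlorophyll mapping can be sketched as below; the exact functional form and the coefficients `a` and `k` are illustrative stand-ins, since the abstract states only that an exponential model was fitted, not its fitted parameters:

```python
import numpy as np

def estimate_chlorophyll(r, g, b, a=100.0, k=3.0):
    """Exponential model mapping calibrated leaf RGB to chlorophyll content.

    Uses a normalized greenness index so darker, greener leaves score
    higher; the index choice and coefficients are hypothetical and would
    be fitted against solvent-extraction measurements in practice.
    """
    r, g, b = (np.asarray(x, dtype=np.float64) for x in (r, g, b))
    index = (g - r) / (r + g + b + 1e-6)   # simple greenness index in [-1, 1]
    return a * (np.exp(k * index) - 1.0)   # exponential response, zero at index 0
```

In the paper's workflow the RGB values would first be calibrated against the ColorChecker chart before being fed to the fitted model.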
49 CFR 384.227 - Record of digital image or photograph.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 5 2014-10-01 2014-10-01 false Record of digital image or photograph. 384.227... § 384.227 Record of digital image or photograph. The State must: (a) Record the digital color image or.... The digital color image or photograph or black and white laser engraved photograph must either be made...
49 CFR 384.227 - Record of digital image or photograph.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 5 2013-10-01 2013-10-01 false Record of digital image or photograph. 384.227... § 384.227 Record of digital image or photograph. The State must: (a) Record the digital color image or.... The digital color image or photograph or black and white laser engraved photograph must either be made...
49 CFR 384.227 - Record of digital image or photograph.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 5 2011-10-01 2011-10-01 false Record of digital image or photograph. 384.227... § 384.227 Record of digital image or photograph. The State must: (a) Record the digital color image or.... The digital color image or photograph or black and white laser engraved photograph must either be made...
49 CFR 384.227 - Record of digital image or photograph.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 5 2012-10-01 2012-10-01 false Record of digital image or photograph. 384.227... § 384.227 Record of digital image or photograph. The State must: (a) Record the digital color image or.... The digital color image or photograph or black and white laser engraved photograph must either be made...
Frequency division multiplexed multi-color fluorescence microscope system
NASA Astrophysics Data System (ADS)
Le, Vu Nam; Yang, Huai Dong; Zhang, Si Chun; Zhang, Xin Rong; Jin, Guo Fan
2017-10-01
A grayscale camera can only record intensity images, whereas multicolor imaging also captures the color information needed to distinguish sample structures that have the same shape but different colors. In fluorescence microscopy, current methods of multicolor imaging have drawbacks: they reduce the efficiency of fluorescence imaging, lower the effective sampling rate of the CCD, and so on. In this paper, we propose a novel multicolor fluorescence microscopy imaging method based on frequency division multiplexing (FDM), which modulates the excitation lights and demodulates the fluorescence signal in the frequency domain. The method uses periodic functions of different frequencies to modulate the amplitude of each excitation light and then combines these beams for illumination in a fluorescence microscopy imaging system, which records a multicolor fluorescence image with a grayscale camera. During data processing, the signal from each pixel of the camera is transformed with a discrete Fourier transform, decomposed by color in the frequency domain, and inverse transformed. Applying this process to the signals from all pixels yields a monochrome image for each color on the image plane, and thus the multicolor image. Based on this method, we constructed a two-color fluorescence microscope with excitation wavelengths of 488 nm and 639 nm. Using this system to observe the linear movement of two kinds of fluorescent microspheres, we obtained, after data processing, a two-color fluorescence video consistent with the original scene. This experiment shows that dynamic phenomena in multicolor fluorescent biological samples can be observed with this method. Compared with current methods, this approach obtains the image signals of every color at the same time, and the color video's frame rate matches the frame rate of the camera. The optical system is simpler and needs no extra color separation elements. In addition, the method effectively filters out ambient light and other signals that are not affected by the modulation.
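The per-pixel demodulation can be sketched as below, assuming amplitude images at each modulation frequency suffice; the abstract's inverse-DFT reconstruction of the time-resolved signals is simplified to picking the DFT bin nearest each modulation frequency:

```python
import numpy as np

def demodulate_fdm(frames, fps, channel_freqs):
    """Per-pixel frequency-domain demultiplexing of an FDM fluorescence stack.

    Each excitation light is amplitude-modulated at its own frequency, a
    grayscale camera records the summed fluorescence over time, and a
    per-pixel DFT separates the color channels.
    frames: (T, H, W) array; channel_freqs: modulation frequencies in Hz.
    """
    T = frames.shape[0]
    spectrum = np.fft.rfft(frames, axis=0)            # DFT along time per pixel
    freqs = np.fft.rfftfreq(T, d=1.0 / fps)
    images = []
    for f in channel_freqs:
        k = int(np.argmin(np.abs(freqs - f)))         # nearest DFT bin
        images.append(2.0 * np.abs(spectrum[k]) / T)  # amplitude image
    return images
```

The modulation frequencies must stay below the camera's Nyquist rate (fps/2) and be well separated so the channels land in distinct DFT bins.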
An ultrahigh-speed color video camera operating at 1,000,000 fps with 288 frame memories
NASA Astrophysics Data System (ADS)
Kitamura, K.; Arai, T.; Yonai, J.; Hayashida, T.; Kurita, T.; Maruyama, H.; Namiki, J.; Yanagi, T.; Yoshida, T.; van Kuijk, H.; Bosiers, Jan T.; Saita, A.; Kanayama, S.; Hatade, K.; Kitagawa, S.; Etoh, T. Goji
2008-11-01
We developed an ultrahigh-speed color video camera that operates at 1,000,000 fps (frames per second) and has the capacity to store 288 frame memories. In 2005, we developed an ultrahigh-speed, high-sensitivity portable color camera with a 300,000-pixel single CCD (ISIS-V4: In-situ Storage Image Sensor, Version 4). Its ultrahigh-speed shooting capability of 1,000,000 fps was made possible by directly connecting CCD storages, which record video images, to the photodiodes of individual pixels. The number of consecutive frames was 144. However, longer capture times were demanded when the camera was used during imaging experiments and for some television programs. To increase ultrahigh-speed capture times, we used a beam splitter and two ultrahigh-speed 300,000-pixel CCDs. The beam splitter was placed behind the pickup lens, with one CCD located at each of its two outputs. A CCD driving unit was developed to drive the two CCDs separately, and the recording period of the two CCDs was switched sequentially. This increased the recording capacity to 288 images, a factor-of-two increase over the conventional ultrahigh-speed camera. One problem was that the beam splitter reduced the incident light on each CCD by a factor of two. To improve the light sensitivity, we developed a microlens array for use with the ultrahigh-speed CCDs. We simulated the operation of the microlens array in order to optimize its shape and then fabricated it using stamping technology. Using this microlens array increased the light sensitivity of the CCDs by an approximate factor of two. By using the beam splitter in conjunction with the microlens array, it was possible to build an ultrahigh-speed color video camera with 288 frame memories without decreasing the camera's light sensitivity.
Meteor Film Recording with Digital Film Cameras with large CMOS Sensors
NASA Astrophysics Data System (ADS)
Slansky, P. C.
2016-12-01
In this article the author combines his professional know-how about cameras for film and television production with his amateur astronomy activities. Professional digital film cameras with high sensitivity are still quite rare in astronomy. One reason for this may be their cost of up to 20,000 EUR and more (camera body only). In the interim, however, consumer photo cameras with film mode and very high sensitivity have come to the market for about 2,000 EUR. In addition, ultra-high-sensitivity professional film cameras that are very interesting for meteor observation have been introduced to the market. The particular benefits of digital film cameras with large CMOS sensors, including photo cameras with film recording function, for meteor recording are presented through three examples: a 2014 Camelopardalid, shot with a Canon EOS C 300; an exploding 2014 Aurigid, shot with a Sony alpha7S; and the 2016 Perseids, shot with a Canon ME20F-SH. All three cameras use large CMOS sensors; "large" meaning Super-35 mm, the classic 35 mm film format (24x13.5 mm, similar to APS-C size), or full format (36x24 mm), the classic 135 photo camera format. Comparisons are made to the widely used cameras with small CCD sensors, such as Mintron or Watec; "small" meaning 1/2" (6.4x4.8 mm) or less. Additionally, special photographic image processing of meteor film recordings is discussed.
Studying Mixing in Non-Newtonian Blue Maize Flour Suspensions Using Color Analysis
Trujillo-de Santiago, Grissel; Rojas-de Gante, Cecilia; García-Lara, Silverio; Ballescá-Estrada, Adriana; Alvarez, Mario Moisés
2014-01-01
Background Non-Newtonian fluids occur in many relevant flow and mixing scenarios at the lab and industrial scale. The addition of acid or basic solutions to a non-Newtonian fluid is not an infrequent operation, particularly in Biotechnology applications where the pH of Non-Newtonian culture broths is usually regulated using this strategy. Methodology and Findings We conducted mixing experiments in agitated vessels using Non-Newtonian blue maize flour suspensions. Acid or basic pulses were injected to reveal mixing patterns and flow structures and to follow their time evolution. No foreign pH indicator was used as blue maize flours naturally contain anthocyanins that act as a native, wide spectrum, pH indicator. We describe a novel method to quantitate mixedness and mixing evolution through Dynamic Color Analysis (DCA) in this system. Color readings corresponding to different times and locations within the mixing vessel were taken with a digital camera (or a colorimeter) and translated to the CIELab scale of colors. We use distances in the Lab space, a 3D color space, between a particular mixing state and the final mixing point to characterize segregation/mixing in the system. Conclusion and Relevance Blue maize suspensions represent an adequate and flexible model to study mixing (and fluid mechanics in general) in Non-Newtonian suspensions using acid/base tracer injections. Simple strategies based on the evaluation of color distances in the CIELab space (or other scales such as HSB) can be adapted to characterize mixedness and mixing evolution in experiments using blue maize suspensions. PMID:25401332
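The distance measure behind this Dynamic Color Analysis can be sketched in a few lines. The sketch below assumes the simple Euclidean (CIE76-style) distance in L*a*b* space implied by "distances in the Lab space"; the Lab readings are invented for illustration.

```python
import numpy as np

# Minimal sketch of the DCA scoring idea: segregation is quantified as the
# Euclidean distance in CIELab space between a local color reading and the
# color of the fully mixed end state. All Lab values here are illustrative.
def delta_e(lab1, lab2):
    """CIE76-style color difference: straight-line distance in L*a*b* space."""
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

final_state = (52.0, 10.0, -35.0)          # fully mixed reference (assumed)
readings = [(60.0, 4.0, -20.0),            # early, poorly mixed region
            (54.0, 8.0, -31.0),            # later reading, closer to mixed
            (52.1, 9.9, -34.8)]            # nearly indistinguishable from mixed

distances = [delta_e(r, final_state) for r in readings]
# Distances shrink toward zero as the vessel approaches the fully mixed state.
```

Tracking this distance over time and over camera positions gives the mixedness curve the abstract describes.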
Silva, Paolo S; Walia, Saloni; Cavallerano, Jerry D; Sun, Jennifer K; Dunn, Cheri; Bursell, Sven-Erik; Aiello, Lloyd M; Aiello, Lloyd Paul
2012-09-01
To compare agreement between diagnosis of clinical level of diabetic retinopathy (DR) and diabetic macular edema (DME) derived from nonmydriatic fundus images using a digital camera back optimized for low-flash image capture (MegaVision) compared with standard seven-field Early Treatment Diabetic Retinopathy Study (ETDRS) photographs and dilated clinical examination. Subject comfort and image acquisition time were also evaluated. In total, 126 eyes from 67 subjects with diabetes underwent Joslin Vision Network nonmydriatic retinal imaging. ETDRS photographs were obtained after pupillary dilation, and fundus examination was performed by a retina specialist. There was near-perfect agreement between MegaVision and ETDRS photographs (κ=0.81, 95% confidence interval [CI] 0.73-0.89) for clinical DR severity levels. Substantial agreement was observed with clinical examination (κ=0.71, 95% CI 0.62-0.80). For DME severity level there was near-perfect agreement with ETDRS photographs (κ=0.92, 95% CI 0.87-0.98) and moderate agreement with clinical examination (κ=0.58, 95% CI 0.46-0.71). The wider MegaVision 45° field led to identification of nonproliferative changes in areas not imaged by the 30° field of ETDRS photos. Field area unique to ETDRS photographs identified proliferative changes not visualized with MegaVision. Mean MegaVision acquisition time was 9:52 min. After imaging, 60% of subjects preferred the MegaVision lower flash settings. When evaluated using a rigorous protocol, images captured using a low-light digital camera compared favorably with ETDRS photography and clinical examination for grading level of DR and DME. Furthermore, these data suggest the importance of more extensive peripheral images and suggest that utilization of wide-field retinal imaging may further improve accuracy of DR assessment.
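The κ values quoted above are Cohen's kappa, which discounts the agreement two graders would reach by chance. A hedged illustration with an invented confusion matrix (not study data):

```python
import numpy as np

# Cohen's kappa: observed agreement between two graders corrected for the
# agreement expected by chance. The toy confusion matrix is illustrative only.
def cohens_kappa(confusion):
    c = np.asarray(confusion, dtype=float)
    n = c.sum()
    p_observed = np.trace(c) / n                            # on-diagonal fraction
    p_chance = (c.sum(axis=0) * c.sum(axis=1)).sum() / n**2  # marginal product
    return (p_observed - p_chance) / (1.0 - p_chance)

# Rows: grader 1's DR severity level; columns: grader 2's (toy 3x3 example).
toy = [[40,  3,  0],
       [ 2, 35,  4],
       [ 0,  2, 40]]
kappa = cohens_kappa(toy)
# Conventionally, kappa above 0.8 is read as "near-perfect" agreement,
# 0.6-0.8 as "substantial", and 0.4-0.6 as "moderate".
```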
Forensics for flatbed scanners
NASA Astrophysics Data System (ADS)
Gloe, Thomas; Franz, Elke; Winkler, Antje
2007-02-01
Within this article, we investigate possibilities for identifying the origin of images acquired with flatbed scanners. A current method for the identification of digital cameras takes advantage of image sensor noise, strictly speaking, the spatial noise. Since flatbed scanners and digital cameras use similar technologies, utilizing image sensor noise to identify the origin of scanned images seems possible. To characterize flatbed scanner noise, we considered array reference patterns and sensor line reference patterns. However, there are particularities of flatbed scanners that we expect to influence the identification. This was confirmed by extensive tests: identification was possible to a certain degree, but less reliable than digital camera identification. In additional tests, we simulated the influence of flatfielding and downscaling, as examples of such particularities of flatbed scanners, on digital camera identification. One can conclude from the results achieved so far that identifying flatbed scanners is possible. However, since the analyzed methods are not able to determine the image origin in all cases, further investigations are necessary.
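The underlying camera-identification technique can be sketched as follows. This is a toy version under stated assumptions: the fixed-pattern noise, scenes, and a crude 3x3 box filter stand in for real sensor data and the wavelet denoising filters used in practice.

```python
import numpy as np

# Hedged sketch of sensor-noise source identification: a device's reference
# pattern is the average noise residual of several of its images, and a test
# image is attributed by correlating its residual with that pattern.
def residual(img):
    """Noise residual: image minus a 3x3 box-filtered (crudely denoised) copy."""
    padded = np.pad(img, 1, mode="edge")
    smooth = sum(padded[i:i + img.shape[0], j:j + img.shape[1]]
                 for i in range(3) for j in range(3)) / 9.0
    return img - smooth

def correlation(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

rng = np.random.default_rng(1)
pattern_a = rng.normal(0, 1, (32, 32))     # fixed-pattern noise of device A
pattern_b = rng.normal(0, 1, (32, 32))     # fixed-pattern noise of device B

def shoot(pattern):
    # Smooth scene (a horizontal gradient) so the fixed pattern dominates
    # the residual; real scenes need the stronger filters noted above.
    scene = np.tile(np.linspace(50, 200, pattern.shape[1]), (pattern.shape[0], 1))
    return scene + pattern + rng.normal(0, 0.2, pattern.shape)

# Reference pattern: mean residual over several images from device A.
reference_a = np.mean([residual(shoot(pattern_a)) for _ in range(20)], axis=0)

test_from_a = residual(shoot(pattern_a))
test_from_b = residual(shoot(pattern_b))
# The matching device correlates far more strongly with its own reference.
```

The article's array and sensor-line reference patterns are the flatbed-scanner analogues of `reference_a`, with the line pattern averaged additionally along the scan direction.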
2015-01-15
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of Renaudot Crater.
2015-01-12
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of Granicus Valles.
2014-12-25
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of Candor Labes.
2015-01-08
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of Coprates Chasma.
Schaeberle Crater - False Color
2015-01-26
The THEMIS VIS camera contains 5 filters. The data from different filters can create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of the floor of Schaeberle Crater, including small dunes.
2015-01-30
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows windstreaks in Daedalia Planum.
2015-01-02
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of Nili Patera.
2014-12-23
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of Atlantis Chaos.
2015-01-01
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of Coprates Chasma.
Hargraves Crater - False Color
2015-01-13
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of Hargraves Crater.
2014-12-18
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of Reull Vallis.
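The "combined in multiple ways" phrasing in the THEMIS captions above can be made concrete with a short sketch. The band data, the per-band stretch, and the filter-to-channel assignment below are all illustrative assumptions, not the THEMIS pipeline.

```python
import numpy as np

# Hedged sketch of a false-color composite: three of five filter bands are
# assigned to the R, G, and B display channels and stretched independently.
def stretch(band):
    """Linear 2%-98% percentile stretch to the 0-1 display range."""
    lo, hi = np.percentile(band, [2, 98])
    return np.clip((band - lo) / (hi - lo), 0.0, 1.0)

rng = np.random.default_rng(7)
# Synthetic stand-ins for the five filter bands.
bands = {f"filter_{i}": rng.uniform(0, 4000, (64, 64)) for i in range(1, 6)}

# One of many possible combinations: pick three of the five filters.
false_color = np.dstack([stretch(bands["filter_5"]),
                         stretch(bands["filter_3"]),
                         stretch(bands["filter_1"])])
# false_color has shape (64, 64, 3), ready for display as an RGB image.
```

Choosing a different band-to-channel mapping produces a different false-color rendering of the same scene, which is exactly the freedom the captions refer to.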
Commercial and industrial applications of color ink jet: a technological perspective
NASA Astrophysics Data System (ADS)
Dunand, Alain
1996-03-01
In just 5 years, color ink-jet has become the dominant technology for printing color images and graphics in the office and home markets. In commercial printing, the traditional printing processes are being influenced by new digital techniques. Color ink-jet proofing, and concepts such as computer to film/plate or digital processes are contributing to the evolution of the industry. In industrial color printing, the penetration of digital techniques is just beginning. All widely used conventional contact printing technologies involve mechanical printing forms including plates, screens or engraved cylinders. Such forms, which need to be newly created and set up for each job, increase costs. In our era of fast changing customer demands, growing needs for customization, and increasing use of digital exchange of information, the commercial and industrial printing markets represent an enormous potential for digital printing technologies. The adoption characteristics for the use of color ink-jet in these industries are discussed. Examples of color ink-jet applications in the fields of billboard printing, floor/wall covering decoration, and textile printing are described. The requirements on print quality, productivity, reliability, substrate compatibility, and color lead to the consideration of various types of ink-jet technologies. Key technical enabling factors and directions for future improvements are presented.
Erdenebat, Munkh-Uchral; Kim, Byeong-Jun; Piao, Yan-Ling; Park, Seo-Yeon; Kwon, Ki-Chul; Piao, Mei-Lan; Yoo, Kwan-Hee; Kim, Nam
2017-10-01
A mobile three-dimensional image acquisition and reconstruction system using a computer-generated integral imaging technique is proposed. A depth camera connected to the mobile device acquires the color and depth data of a real object simultaneously, and an elemental image array is generated based on the original three-dimensional information for the object, with lens array specifications input into the mobile device. The three-dimensional visualization of the real object is reconstructed on the mobile display through optical or digital reconstruction methods. The proposed system is implemented successfully and the experimental results certify that the system is an effective and interesting method of displaying real three-dimensional content on a mobile device.
NASA Astrophysics Data System (ADS)
Shafique, Md. Ishraque Bin; Razzaq Halim, M. A.; Rabbi, Fazle; Khalilur Rhaman, Md.
2016-07-01
For a third-world country like Bangladesh, satellite and space research is not feasible due to lack of funding. Therefore, in order to imitate the principles of such a satellite, a balloon satellite can easily and inexpensively be set up. Balloon satellites are miniature satellites that are cheap and easy to construct. This paper discusses a BalloonSat developed using a Raspberry Pi, an IMU module, a UV sensor, a GPS module, a camera, and an XBee module. An interactive GUI was designed to display all the data collected after processing. To understand the nitrogen concentration of a plant, a leaf color chart is used. This paper attempts to digitize this process, which is applied to photos taken by the BalloonSat.
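A digitized leaf color chart reading could be sketched as a nearest-color match. The four panel colors and the scoring function below are invented stand-ins, not calibrated chart values or the paper's algorithm.

```python
import numpy as np

# Hedged sketch of digitizing a leaf color chart (LCC): the mean color of a
# leaf region in a photo is matched to the nearest chart panel. Panel RGB
# values are illustrative assumptions only.
LCC_PANELS = {
    2: (120, 170, 60),   # light yellowish green -> low nitrogen
    3: (90, 150, 55),
    4: (60, 130, 50),
    5: (35, 110, 45),    # dark green -> high nitrogen
}

def lcc_score(leaf_pixels):
    """Return the chart level whose color is closest to the leaf's mean RGB."""
    mean_rgb = np.asarray(leaf_pixels, dtype=float).reshape(-1, 3).mean(axis=0)
    return min(LCC_PANELS,
               key=lambda k: np.linalg.norm(mean_rgb - np.array(LCC_PANELS[k], float)))

# A dark-green leaf patch should land near the high-nitrogen end of the chart.
dark_leaf = np.full((10, 10, 3), (38, 112, 47), dtype=float)
```

In practice the photo would first need white-balance or illumination correction before such a match is meaningful.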
Target recognitions in multiple-camera closed-circuit television using color constancy
NASA Astrophysics Data System (ADS)
Soori, Umair; Yuen, Peter; Han, Ji Wen; Ibrahim, Izzati; Chen, Wentao; Hong, Kan; Merfort, Christian; James, David; Richardson, Mark
2013-04-01
People tracking in crowded scenes from closed-circuit television (CCTV) footage has been a popular and challenging task in computer vision. Due to the limited spatial resolution in the CCTV footage, the color of people's dress may offer an alternative feature for their recognition and tracking. However, there are many factors, such as variable illumination conditions, viewing angles, and camera calibration, that may induce illusive modification of intrinsic color signatures of the target. Our objective is to recognize and track targets in multiple camera views using color as the detection feature, and to understand if a color constancy (CC) approach may help to reduce these color illusions due to illumination and camera artifacts and thereby improve target recognition performance. We have tested a number of CC algorithms using various color descriptors to assess the efficiency of target recognition from a real multicamera Imagery Library for Intelligent Detection Systems (i-LIDS) data set. Various classifiers have been used for target detection, and the figure of merit to assess the efficiency of target recognition is achieved through the area under the receiver operating characteristics (AUROC). We have proposed two modifications of luminance-based CC algorithms: one with a color transfer mechanism and the other using a pixel-wise sigmoid function for an adaptive dynamic range compression, a method termed enhanced luminance reflectance CC (ELRCC). We found that both algorithms improve the efficiency of target recognitions substantially better than that of the raw data without CC treatment, and in some cases the ELRCC improves target tracking by over 100% within the AUROC assessment metric. 
The performance of the ELRCC has been assessed over 10 selected targets from three different camera views of the i-LIDS footage, and the averaged target recognition efficiency over all these targets is found to be improved by about 54% in AUROC after the data are processed by the proposed ELRCC algorithm. This amount of improvement represents a reduction of probability of false alarm by about a factor of 5 at the probability of detection of 0.5. Our study concerns mainly the detection of colored targets; and issues for the recognition of white or gray targets will be addressed in a forthcoming study.
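The pixel-wise sigmoid for adaptive dynamic range compression can be sketched briefly. This is a minimal stand-in, not the published ELRCC algorithm: the gain and the use of a single global center are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of sigmoid dynamic range compression: each pixel's luminance
# is pushed through a sigmoid centered on the mean luminance, lifting dark
# regions and compressing highlights while preserving pixel ordering.
def sigmoid_compress(luminance, gain=8.0):
    """Map luminance in [0, 1] through a sigmoid centered at its mean."""
    lum = np.asarray(luminance, dtype=float)
    center = lum.mean()
    out = 1.0 / (1.0 + np.exp(-gain * (lum - center)))
    # Rescale so the output spans the full [0, 1] display range again.
    return (out - out.min()) / (out.max() - out.min())

rng = np.random.default_rng(3)
frame = rng.uniform(0.0, 1.0, (48, 48)) ** 3   # skewed-dark test luminance
compressed = sigmoid_compress(frame)
# The dark end of the histogram is stretched out, which helps color-based
# matching of dimly lit targets across camera views.
```

A truly pixel-wise version, as described above, would replace the global mean with a local neighborhood mean so the compression adapts across the frame.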
Poland, Michael P.; Dzurisin, Daniel; LaHusen, Richard G.; Major, John J.; Lapcewich, Dennis; Endo, Elliot T.; Gooding, Daniel J.; Schilling, Steve P.; Janda, Christine G.; Sherrod, David R.; Scott, William E.; Stauffer, Peter H.
2008-01-01
Images from a Web-based camera (Webcam) located 8 km north of Mount St. Helens and a network of remote, telemetered digital cameras were used to observe eruptive activity at the volcano between October 2004 and February 2006. The cameras offered the advantages of low cost, low power, flexibility in deployment, and high spatial and temporal resolution. Images obtained from the cameras provided important insights into several aspects of dome extrusion, including rockfalls, lava extrusion rates, and explosive activity. Images from the remote, telemetered digital cameras were assembled into time-lapse animations of dome extrusion that supported monitoring, research, and outreach efforts. The wide-ranging utility of remote camera imagery should motivate additional work, especially to develop the three-dimensional quantitative capabilities of terrestrial camera networks.
High-frame rate multiport CCD imager and camera
NASA Astrophysics Data System (ADS)
Levine, Peter A.; Patterson, David R.; Esposito, Benjamin J.; Tower, John R.; Lawler, William B.
1993-01-01
A high frame rate visible CCD camera capable of operation up to 200 frames per second is described. The camera produces a 256 X 256 pixel image by using one quadrant of a 512 X 512 16-port, back illuminated CCD imager. Four contiguous outputs are digitally reformatted into a correct, 256 X 256 image. This paper details the architecture and timing used for the CCD drive circuits, analog processing, and the digital reformatter.
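The digital reformatting step can be sketched under an assumed readout geometry, since the abstract does not give one: suppose each of the four ports reads one 64-column strip of the 256 x 256 quadrant, with alternate ports scanning in mirrored column order (a common multiport layout).

```python
import numpy as np

# Hedged sketch of a multiport-CCD digital reformatter. The strip layout and
# per-port mirroring are assumptions for illustration, not the actual imager.
def reformat(port_streams):
    """Reassemble four 256x64 port readouts into one correct 256x256 image."""
    strips = []
    for i, stream in enumerate(port_streams):
        strip = np.asarray(stream).reshape(256, 64)
        if i % 2 == 1:                 # mirrored ports: un-mirror the columns
            strip = strip[:, ::-1]
        strips.append(strip)
    return np.hstack(strips)

# Round trip on a synthetic scene: split into strips, mirror alternate ports
# as the hardware would, then reformat back into a correct image.
scene = np.arange(256 * 256).reshape(256, 256)
ports = []
for i in range(4):
    strip = scene[:, i * 64:(i + 1) * 64]
    ports.append((strip[:, ::-1] if i % 2 == 1 else strip).ravel())
image = reformat(ports)
# image now matches the original scene pixel for pixel.
```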
An assessment of the utility of a non-metric digital camera for measuring standing trees
Neil Clark; Randolph H. Wynne; Daniel L. Schmoldt; Matthew F. Winn
2000-01-01
Images acquired with a commercially available digital camera were used to make measurements on 20 red oak (Quercus spp.) stems. The ranges of diameter at breast height (DBH) and height to a 10 cm upper-stem diameter were 16-66 cm and 12-20 m, respectively. Camera stations located 3, 6, 9, 12, and 15 m from the stem were studied to determine the best distance to be...
Data-nonintrusive photonics-based credit card verifier with a low false rejection rate.
Sumriddetchkajorn, Sarun; Intaravanne, Yuttana
2010-02-10
We propose and experimentally demonstrate a noninvasive credit card verifier with a low false rejection rate (FRR). Our key idea is based on the use of three broadband light sources in our data-nonintrusive photonics-based credit card verifier structure, where spectral components of the embossed hologram images are registered as red, green, and blue. In this case, nine distinguishable variables are generated for a feed-forward neural network (FFNN). In addition, we investigate the center of mass of the image histogram projected onto the x axis (I(color)), making our system more tolerant of the intensity fluctuation of the light source. We also reduce the unwanted signals on each hologram image by simply dividing the hologram image into three zones and then calculating their corresponding I(color) values for red, green, and blue bands. With our proposed concepts, we implement our field test prototype in which three broadband white light light-emitting diodes (LEDs), a two-dimensional digital color camera, and a four-layer FFNN are used. Based on 249 genuine credit cards and 258 counterfeit credit cards, we find that the average of differences in I(color) values between genuine and counterfeit credit cards is improved by 1.5 times and up to 13.7 times. In this case, we can effectively verify credit cards with a very low FRR of 0.79%.
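The I(color) feature described above, the center of mass of the image histogram projected onto the intensity axis, can be sketched directly. The sketch uses one grayscale channel and three horizontal zones; the real system computes it per red, green, and blue band on hologram images.

```python
import numpy as np

# Hedged sketch of the I(color) feature: the intensity-axis center of mass
# of an image histogram. Normalizing by the total count makes the feature
# tolerant of fluctuations in overall pixel counts.
def i_color(channel, bins=256):
    """Center of mass of the intensity histogram of one color channel."""
    hist, edges = np.histogram(channel, bins=bins, range=(0, 256))
    centers = (edges[:-1] + edges[1:]) / 2.0
    return float((hist * centers).sum() / hist.sum())

rng = np.random.default_rng(5)
hologram = rng.normal(128, 20, (100, 100)).clip(0, 255)  # synthetic image
# As in the scheme above, split the image into zones and compute I(color)
# for each zone; here three horizontal zones of one channel.
zones = np.array_split(hologram, 3, axis=0)
features = [i_color(z) for z in zones]
```

With three zones and three bands, this yields the nine distinguishable variables fed to the feed-forward neural network.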
Estimation of spectral distribution of sky radiance using a commercial digital camera.
Saito, Masanori; Iwabuchi, Hironobu; Murata, Isao
2016-01-10
Methods for estimating spectral distribution of sky radiance from images captured by a digital camera and for accurately estimating spectral responses of the camera are proposed. Spectral distribution of sky radiance is represented as a polynomial of the wavelength, with coefficients obtained from digital RGB counts by linear transformation. The spectral distribution of radiance as measured is consistent with that obtained by spectrometer and radiative transfer simulation for wavelengths of 430-680 nm, with standard deviation below 1%. Preliminary applications suggest this method is useful for detecting clouds and studying the relation between irradiance at the ground and cloud distribution.
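The core estimation step, polynomial coefficients of the spectrum obtained from RGB counts by a linear transform, can be demonstrated on a toy model. The camera response curves, wavelength grid, and quadratic spectra below are synthetic stand-ins, not the paper's calibrated values.

```python
import numpy as np

# Hedged toy version of the estimation scheme: sky spectral radiance is a
# polynomial of normalized wavelength, and its coefficients are recovered
# from RGB counts via a linear transform fitted on training spectra.
wl = np.linspace(430, 680, 51)
x = (wl - 430) / 250.0                      # normalized wavelength in [0, 1]
V = np.vander(x, 3, increasing=True)        # polynomial basis: 1, x, x^2

def bump(center, width):
    """Synthetic Gaussian spectral response, a stand-in for measured curves."""
    return np.exp(-((wl - center) / width) ** 2)

S = np.stack([bump(610, 40), bump(540, 40), bump(465, 40)])   # R, G, B rows

rng = np.random.default_rng(11)
train_c = rng.uniform(-1, 1, (200, 3))      # training polynomial coefficients
train_rgb = train_c @ V.T @ S.T             # simulated RGB counts per spectrum

# Least-squares fit of the 3x3 transform M solving train_rgb @ M = train_c.
M, *_ = np.linalg.lstsq(train_rgb, train_c, rcond=None)

# Held-out spectrum: recover its coefficients from RGB counts alone.
c_true = np.array([0.3, 0.5, -0.2])
rgb = S @ (V @ c_true)
c_est = rgb @ M
spectrum_est = V @ c_est
```

Because the toy forward model is exactly linear, the recovery is exact; with a real camera, accurately estimated spectral responses (the paper's second contribution) determine how well this transform generalizes.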
Coincidence ion imaging with a fast frame camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei
2014-12-15
A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide-semiconductor) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single-anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the PMT processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of the PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragment pair (methyl and iodine cations) produced by strong-field dissociative double ionization of methyl iodide.
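The real-time centroiding step can be sketched as threshold, connected-component grouping, and intensity-weighted averaging. Spot shapes, sizes, and the threshold below are illustrative choices, not instrument parameters.

```python
import numpy as np
from collections import deque

# Hedged sketch of ion-spot centroiding: bright spots on a camera frame are
# segmented by a threshold, grouped into 4-connected blobs, and reduced to
# intensity-weighted centroids plus a total intensity per spot.
def centroid_spots(frame, threshold=50.0):
    """Return (row, col, total_intensity) for each connected bright spot."""
    mask = frame > threshold
    seen = np.zeros_like(mask)
    spots = []
    for seed in zip(*np.nonzero(mask)):
        if seen[seed]:
            continue
        queue, pixels = deque([seed]), []
        seen[seed] = True
        while queue:                       # flood-fill one connected blob
            r, c = queue.popleft()
            pixels.append((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < frame.shape[0] and 0 <= nc < frame.shape[1]
                        and mask[nr, nc] and not seen[nr, nc]):
                    seen[nr, nc] = True
                    queue.append((nr, nc))
        w = np.array([frame[p] for p in pixels])
        rc = np.array(pixels, dtype=float)
        spots.append((*(rc * w[:, None]).sum(axis=0) / w.sum(), w.sum()))
    return spots

# Two synthetic Gaussian ion spots on a 64x64 frame.
yy, xx = np.mgrid[0:64, 0:64]
frame = (200 * np.exp(-((yy - 20) ** 2 + (xx - 15) ** 2) / 4.0)
         + 150 * np.exp(-((yy - 45) ** 2 + (xx - 50) ** 2) / 4.0))
spots = sorted(centroid_spots(frame), key=lambda s: s[0])
```

The per-spot total intensity is the quantity the abstract correlates with PMT peak heights to match spots to time-of-flight peaks for multi-hit events.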
Digital Semaphore: Technical Feasibility of QR Code Optical Signaling for Fleet Communications
2013-06-01
Standards (http://www.iso.org); JIS: Japanese Industrial Standard; JPEG: Joint Photographic Experts Group (digital image format; http://www.jpeg.org); LED ... Denso Wave corporation in the 1990s for the Japanese automotive manufacturing industry. See Appendix A for full details. Reed-Solomon error ... eliminates camera blur induced by the shutter, providing clear images at extremely high frame rates. Thus, digital cinema cameras are more suitable
NASA Astrophysics Data System (ADS)
Holland, S. Douglas
1992-09-01
A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.
NASA Technical Reports Server (NTRS)
Holland, S. Douglas (Inventor)
1992-01-01
Intelligent keyframe extraction for video printing
NASA Astrophysics Data System (ADS)
Zhang, Tong
2004-10-01
Nowadays most digital cameras have the functionality of taking short video clips, with the length of video ranging from several seconds to a couple of minutes. The purpose of this research is to develop an algorithm which extracts an optimal set of keyframes from each short video clip so that the user could obtain proper video frames to print out. In current video printing systems, keyframes are normally obtained by evenly sampling the video clip over time. Such an approach, however, may not reflect highlights or regions of interest in the video. Keyframes derived in this way may also be improper for video printing in terms of either content or image quality. In this paper, we present an intelligent keyframe extraction approach to derive an improved keyframe set by performing semantic analysis of the video content. For a video clip, a number of video and audio features are analyzed to first generate a candidate keyframe set. These features include accumulative color histogram and color layout differences, camera motion estimation, moving object tracking, face detection and audio event detection. Then, the candidate keyframes are clustered and evaluated to obtain a final keyframe set. The objective is to automatically generate a limited number of keyframes to show different views of the scene; to show different people and their actions in the scene; and to tell the story in the video shot. Moreover, frame extraction for video printing, which is a rather subjective problem, is considered in this work for the first time, and a semi-automatic approach is proposed.
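One of the candidate-generation cues above, accumulated color-histogram difference, can be sketched in isolation. The frames, bin count, and threshold are synthetic illustrative choices; the full method also weighs camera motion, object tracking, faces, and audio events.

```python
import numpy as np

# Hedged sketch of candidate keyframe selection: a frame becomes a candidate
# once the accumulated histogram difference since the last keyframe exceeds
# a threshold, so keyframes track content change rather than uniform time.
def histogram(frame, bins=8):
    h, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return h / h.sum()

def candidate_keyframes(frames, threshold=0.5):
    keyframes, acc = [0], 0.0
    last = histogram(frames[0])
    for i in range(1, len(frames)):
        cur = histogram(frames[i])
        acc += 0.5 * np.abs(cur - last).sum()   # L1 histogram difference
        last = cur
        if acc >= threshold:                    # enough change accumulated
            keyframes.append(i)
            acc = 0.0
    return keyframes

rng = np.random.default_rng(9)
dark = [rng.uniform(0, 80, (32, 32)) for _ in range(5)]       # one "scene"
bright = [rng.uniform(170, 255, (32, 32)) for _ in range(5)]  # scene change
picks = candidate_keyframes(dark + bright)
# The first frame and the frame right after the scene change are selected,
# whereas even temporal sampling would ignore where the content changed.
```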
Digital Camera Project Fosters Communication Skills
ERIC Educational Resources Information Center
Fisher, Ashley; Lazaros, Edward J.
2009-01-01
This article details the many benefits of educators' use of digital camera technology and provides an activity in which students practice taking portrait shots of classmates, manipulate the resulting images, and add language arts practice by interviewing their subjects to produce a photo-illustrated Word document. This activity gives…
NASA Technical Reports Server (NTRS)
Katzberg, S. J.; Kelly, W. L., IV; Rowland, C. W.; Burcher, E. E.
1973-01-01
The facsimile camera is an optical-mechanical scanning device which has become an attractive candidate as an imaging system for planetary landers and rovers. This paper presents electronic techniques which permit the acquisition and reconstruction of high quality images with this device, even under varying lighting conditions. These techniques include a control for low frequency noise and drift, an automatic gain control, a pulse-duration light modulation scheme, and a relative spectral gain control. Taken together, these techniques allow the reconstruction of radiometrically accurate and properly balanced color images from facsimile camera video data. These techniques have been incorporated into a facsimile camera and reproduction system, and experimental results are presented for each technique and for the complete system.
Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera.
Nguyen, Thuy Tuong; Slaughter, David C; Hanson, Bradley D; Barber, Andrew; Freitas, Amy; Robles, Daniel; Whelan, Erin
2015-07-28
This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a three-step image analysis process based upon: (1) an orthographic plant projection method integrating a perspective transform with automatic parameter estimation; (2) a plant counting method based on projection histograms; and (3) a double-counting avoidance method based on a homography transform. Experimental results demonstrate the ability to count large numbers of plants automatically with no human effort. Results show that, for tree seedlings having a height up to 40 cm and a within-row tree spacing of approximately 10 cm, the algorithms successfully estimated the number of plants with an average accuracy of 95.2% for trees within a single image and 98% for counting of the whole plant population in a large sequence of images.
Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera
Nguyen, Thuy Tuong; Slaughter, David C.; Hanson, Bradley D.; Barber, Andrew; Freitas, Amy; Robles, Daniel; Whelan, Erin
2015-01-01
This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a three-step image analysis process based upon: (1) an orthographic plant projection method integrating a perspective transform with automatic parameter estimation; (2) a plant counting method based on projection histograms; and (3) a double-counting avoidance method based on a homography transform. Experimental results demonstrate the ability to count large numbers of plants automatically with no human effort. Results show that, for tree seedlings having a height up to 40 cm and a within-row tree spacing of approximately 10 cm, the algorithms successfully estimated the number of plants with an average accuracy of 95.2% for trees within a single image and 98% for counting of the whole plant population in a large sequence of images. PMID:26225982
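The projection-histogram counting step (2) can be sketched on a binary plant mask. The mask geometry and threshold below are invented for illustration, not the paper's parameters.

```python
import numpy as np

# Hedged sketch of plant counting via a projection histogram: after the
# orthographic projection, plant pixels are summed down each image column
# and seedlings are counted as runs of columns above a threshold.
def count_plants(mask, min_height=5):
    """Count runs of columns whose plant-pixel sum exceeds min_height."""
    column_sums = mask.sum(axis=0)              # the projection histogram
    above = column_sums >= min_height
    # A plant starts wherever `above` switches from False to True.
    starts = above & ~np.r_[False, above[:-1]]
    return int(starts.sum())

# Synthetic binary mask: 7 seedlings as vertical pixel runs, evenly spaced
# (mimicking the ~10 cm within-row spacing reported above).
mask = np.zeros((40, 140), dtype=int)
for k in range(7):
    mask[10:30, k * 20 + 5: k * 20 + 9] = 1     # one 20-pixel-tall seedling
```

The homography-based step (3) then prevents the same peaks from being counted again in overlapping images of the sequence.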
View of Island of Kyushu, Japan from Skylab
1974-01-07
SL4-139-3942 (7 Jan. 1974) --- This oblique view of the Island of Kyushu, Japan, was taken from the Earth-orbiting Skylab space station on Jan. 8, 1974 during its third manning. A plume from the volcano Sakurajima (bottom center) is clearly seen as it extends about 80 kilometers (50 miles) east from the volcano. (EDITOR'S NOTE: On Jan. 10, 2013, a little over 39 years after this 1974 photo was made from the Skylab space station, Expedition 34 crew members aboard the International Space Station took a similar picture (frame no. ISS034-E-027139) featuring smoke rising from the same volcano, with much of the island of Kyushu visible. Interesting comparisons can be made between the two photos, at least as far as the devices used to record them. The Skylab image was made by one of the three Skylab 4 crew members with a hand-held camera using a 100-mm lens and 70-mm color film, whereas the station photo was taken with 180-mm lens on a digital still camera, hand-held by one of the six crew members). Photo credit: NASA
2014-12-31
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows part of Ares Vallis.
2014-12-10
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. This false color image captured by NASA 2001 Mars Odyssey spacecraft shows part of Coprates Chasma.
2015-01-21
The THEMIS VIS camera contains 5 filters. The data from different filters can create a false color image. This false color image from NASA 2001 Mars Odyssey spacecraft shows small dunes on the floor of Capen Crater in Terra Sabaea.