From a Million Miles Away, NASA Camera Shows Moon Crossing Face of Earth
2015-08-05
This animation still image shows the far side of the moon, illuminated by the sun, as it crosses between the DISCOVR spacecraft's Earth Polychromatic Imaging Camera (EPIC) camera and telescope, and the Earth - one million miles away. Credits: NASA/NOAA A NASA camera aboard the Deep Space Climate Observatory (DSCOVR) satellite captured a unique view of the moon as it moved in front of the sunlit side of Earth last month. The series of test images shows the fully illuminated “dark side” of the moon that is never visible from Earth. The images were captured by NASA’s Earth Polychromatic Imaging Camera (EPIC), a four megapixel CCD camera and telescope on the DSCOVR satellite orbiting 1 million miles from Earth. From its position between the sun and Earth, DSCOVR conducts its primary mission of real-time solar wind monitoring for the National Oceanic and Atmospheric Administration (NOAA).
From a Million Miles Away, NASA Camera Shows Moon Crossing Face of Earth
2015-08-05
This animation shows images of the far side of the moon, illuminated by the sun, as it crosses between the DISCOVR spacecraft's Earth Polychromatic Imaging Camera (EPIC) camera and telescope, and the Earth - one million miles away. Credits: NASA/NOAA A NASA camera aboard the Deep Space Climate Observatory (DSCOVR) satellite captured a unique view of the moon as it moved in front of the sunlit side of Earth last month. The series of test images shows the fully illuminated “dark side” of the moon that is never visible from Earth. The images were captured by NASA’s Earth Polychromatic Imaging Camera (EPIC), a four megapixel CCD camera and telescope on the DSCOVR satellite orbiting 1 million miles from Earth. From its position between the sun and Earth, DSCOVR conducts its primary mission of real-time solar wind monitoring for the National Oceanic and Atmospheric Administration (NOAA). Read more: www.nasa.gov/feature/goddard/from-a-million-miles-away-na... NASA image use policy. NASA Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission. Follow us on Twitter Like us on Facebook Find us on Instagram
From a Million Miles Away, NASA Camera Shows Moon Crossing Face of Earth
2017-12-08
This animation still image shows the far side of the moon, illuminated by the sun, as it crosses between the DISCOVR spacecraft's Earth Polychromatic Imaging Camera (EPIC) camera and telescope, and the Earth - one million miles away. Credits: NASA/NOAA A NASA camera aboard the Deep Space Climate Observatory (DSCOVR) satellite captured a unique view of the moon as it moved in front of the sunlit side of Earth last month. The series of test images shows the fully illuminated “dark side” of the moon that is never visible from Earth. The images were captured by NASA’s Earth Polychromatic Imaging Camera (EPIC), a four megapixel CCD camera and telescope on the DSCOVR satellite orbiting 1 million miles from Earth. From its position between the sun and Earth, DSCOVR conducts its primary mission of real-time solar wind monitoring for the National Oceanic and Atmospheric Administration (NOAA). Read more: www.nasa.gov/feature/goddard/from-a-million-miles-away-na... NASA image use policy. NASA Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission. Follow us on Twitter Like us on Facebook Find us on Instagram
DSCOVR Public Release Statement V02
Atmospheric Science Data Center
2017-07-06
... where it performs its primary objective of monitoring the solar wind as well as observing the Earth from sunrise to sunset with two Earth Science sensors: the Earth Polychromatic Imaging Camera (EPIC) and ...
DSCOVR EPIC L2 VESDR Data Release Announcement
Atmospheric Science Data Center
2018-06-14
... Polychromatic Imaging Camera (EPIC) onboard the Deep Space Climate Observatory (DSCOVR). The VESDR product contains Leaf Area Index (LAI) ... FPAR, LAI, SLAI are useful for monitoring variability and change in global vegetation due to climate and anthropogenic influences, ...
DSCOVR EPIC L2 VESDR Data Release Announcement
Atmospheric Science Data Center
2018-06-07
... Polychromatic Imaging Camera (EPIC) onboard the Deep Space Climate Observatory (DSCOVR). The VESDR product contains Leaf Area Index (LAI) ... FPAR, LAI, SLAI are useful for monitoring variability and change in global vegetation due to climate and anthropogenic influences, ...
Cloud Detection with the Earth Polychromatic Imaging Camera (EPIC)
NASA Technical Reports Server (NTRS)
Meyer, Kerry; Marshak, Alexander; Lyapustin, Alexei; Torres, Omar; Wang, Yugie
2011-01-01
The Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) would provide a unique opportunity for Earth and atmospheric research due not only to its Lagrange point sun-synchronous orbit, but also to the potential for synergistic use of spectral channels in both the UV and visible spectrum. As a prerequisite for most applications, the ability to detect the presence of clouds in a given field of view, known as cloud masking, is of utmost importance. It serves to determine both the potential for cloud contamination in clear-sky applications (e.g., land surface products and aerosol retrievals) and clear-sky contamination in cloud applications (e.g., cloud height and property retrievals). To this end, a preliminary cloud mask algorithm has been developed for EPIC that applies thresholds to reflected UV and visible radiances, as well as to reflected radiance ratios. This algorithm has been tested with simulated EPIC radiances over both land and ocean scenes, with satisfactory results. These test results, as well as algorithm sensitivity to potential instrument uncertainties, will be presented.
Imaging spectrometer/camera having convex grating
NASA Technical Reports Server (NTRS)
Reininger, Francis M. (Inventor)
2000-01-01
An imaging spectrometer has fore-optics coupled to a spectral resolving system with an entrance slit extending in a first direction at an imaging location of the fore-optics for receiving the image, a convex diffraction grating for separating the image into a plurality of spectra of predetermined wavelength ranges; a spectrometer array for detecting the spectra; and at least one concave sperical mirror concentric with the diffraction grating for relaying the image from the entrance slit to the diffraction grating and from the diffraction grating to the spectrometer array. In one embodiment, the spectrometer is configured in a lateral mode in which the entrance slit and the spectrometer array are displaced laterally on opposite sides of the diffraction grating in a second direction substantially perpendicular to the first direction. In another embodiment, the spectrometer is combined with a polychromatic imaging camera array disposed adjacent said entrance slit for recording said image.
Focal plane wavefront sensor achromatization: The multireference self-coherent camera
NASA Astrophysics Data System (ADS)
Delorme, J. R.; Galicher, R.; Baudoz, P.; Rousset, G.; Mazoyer, J.; Dupuis, O.
2016-04-01
Context. High contrast imaging and spectroscopy provide unique constraints for exoplanet formation models as well as for planetary atmosphere models. But this can be challenging because of the planet-to-star small angular separation (<1 arcsec) and high flux ratio (>105). Recently, optimized instruments like VLT/SPHERE and Gemini/GPI were installed on 8m-class telescopes. These will probe young gazeous exoplanets at large separations (≳1 au) but, because of uncalibrated phase and amplitude aberrations that induce speckles in the coronagraphic images, they are not able to detect older and fainter planets. Aims: There are always aberrations that are slowly evolving in time. They create quasi-static speckles that cannot be calibrated a posteriori with sufficient accuracy. An active correction of these speckles is thus needed to reach very high contrast levels (>106-107). This requires a focal plane wavefront sensor. Our team proposed a self coherent camera, the performance of which was demonstrated in the laboratory. As for all focal plane wavefront sensors, these are sensitive to chromatism and we propose an upgrade that mitigates the chromatism effects. Methods: First, we recall the principle of the self-coherent camera and we explain its limitations in polychromatic light. Then, we present and numerically study two upgrades to mitigate chromatism effects: the optical path difference method and the multireference self-coherent camera. Finally, we present laboratory tests of the latter solution. Results: We demonstrate in the laboratory that the multireference self-coherent camera can be used as a focal plane wavefront sensor in polychromatic light using an 80 nm bandwidth at 640 nm (bandwidth of 12.5%). We reach a performance that is close to the chromatic limitations of our bench: 1σ contrast of 4.5 × 10-8 between 5 and 17 λ0/D. Conclusions: The performance of the MRSCC is promising for future high-contrast imaging instruments that aim to actively minimize the speckle intensity so as to detect and spectrally characterize faint old or light gaseous planets.
NASA Captures 'EPIC' Earth Image
2017-12-08
A NASA camera on the Deep Space Climate Observatory satellite has returned its first view of the entire sunlit side of Earth from one million miles away. This color image of Earth was taken by NASA’s Earth Polychromatic Imaging Camera (EPIC), a four megapixel CCD camera and telescope. The image was generated by combining three separate images to create a photographic-quality image. The camera takes a series of 10 images using different narrowband filters -- from ultraviolet to near infrared -- to produce a variety of science products. The red, green and blue channel images are used in these color images. The image was taken July 6, 2015, showing North and Central America. The central turquoise areas are shallow seas around the Caribbean islands. This Earth image shows the effects of sunlight scattered by air molecules, giving the image a characteristic bluish tint. The EPIC team is working to remove this atmospheric effect from subsequent images. Once the instrument begins regular data acquisition, EPIC will provide a daily series of Earth images allowing for the first time study of daily variations over the entire globe. These images, available 12 to 36 hours after they are acquired, will be posted to a dedicated web page by September 2015. The primary objective of DSCOVR, a partnership between NASA, the National Oceanic and Atmospheric Administration (NOAA) and the U.S. Air Force, is to maintain the nation’s real-time solar wind monitoring capabilities, which are critical to the accuracy and lead time of space weather alerts and forecasts from NOAA. For more information about DSCOVR, visit: www.nesdis.noaa.gov/DSCOVR/
Deep Space Detection of Oriented Ice Crystals
NASA Astrophysics Data System (ADS)
Marshak, A.; Varnai, T.; Kostinski, A. B.
2017-12-01
The deep space climate observatory (DSCOVR) spacecraft resides at the first Lagrangian point about one million miles from Earth. A polychromatic imaging camera onboard delivers nearly hourly observations of the entire sun-lit face of the Earth. Many images contain unexpected bright flashes of light over both ocean and land. We constructed a yearlong time series of flash latitudes, scattering angles and oxygen absorption to demonstrate conclusively that the flashes over land are specular reflections off tiny ice crystals floating in the air nearly horizontally. Such deep space detection of tropospheric ice can be used to constrain the likelihood of oriented crystals and their contribution to Earth albedo.
NASA Astrophysics Data System (ADS)
Meyer, Kerry; Yang, Yuekui; Platnick, Steven
2016-04-01
This paper presents an investigation of the expected uncertainties of a single-channel cloud optical thickness (COT) retrieval technique, as well as a simple cloud-temperature-threshold-based thermodynamic phase approach, in support of the Deep Space Climate Observatory (DSCOVR) mission. DSCOVR cloud products will be derived from Earth Polychromatic Imaging Camera (EPIC) observations in the ultraviolet and visible spectra. Since EPIC is not equipped with a spectral channel in the shortwave or mid-wave infrared that is sensitive to cloud effective radius (CER), COT will be inferred from a single visible channel with the assumption of appropriate CER values for liquid and ice phase clouds. One month of Aqua MODerate-resolution Imaging Spectroradiometer (MODIS) daytime granules from April 2005 is selected for investigating cloud phase sensitivity, and a subset of these granules that has similar EPIC Sun-view geometry is selected for investigating COT uncertainties. EPIC COT retrievals are simulated with the same algorithm as the operational MODIS cloud products (MOD06), except using fixed phase-dependent CER values. Uncertainty estimates are derived by comparing the single-channel COT retrievals with the baseline bi-spectral MODIS retrievals. Results show that a single-channel COT retrieval is feasible for EPIC. For ice clouds, single-channel retrieval errors are minimal (< 2 %) due to the particle size insensitivity of the assumed ice crystal (i.e., severely roughened aggregate of hexagonal columns) scattering properties at visible wavelengths, while for liquid clouds the error is mostly limited to within 10 %, although for thin clouds (COT < 2) the error can be higher. Potential uncertainties in EPIC cloud masking and cloud temperature retrievals are not considered in this study.
An iterative method for near-field Fresnel region polychromatic phase contrast imaging
NASA Astrophysics Data System (ADS)
Carroll, Aidan J.; van Riessen, Grant A.; Balaur, Eugeniu; Dolbnya, Igor P.; Tran, Giang N.; Peele, Andrew G.
2017-07-01
We present an iterative method for polychromatic phase contrast imaging that is suitable for broadband illumination and which allows for the quantitative determination of the thickness of an object given the refractive index of the sample material. Experimental and simulation results suggest the iterative method provides comparable image quality and quantitative object thickness determination when compared to the analytical polychromatic transport of intensity and contrast transfer function methods. The ability of the iterative method to work over a wider range of experimental conditions means the iterative method is a suitable candidate for use with polychromatic illumination and may deliver more utility for laboratory-based x-ray sources, which typically have a broad spectrum.
Earth Reflectivity from Deep Space Climate Observatory (DSCOVR) Earth Polychromatic Camera (EPIC)
NASA Astrophysics Data System (ADS)
Song, W.; Knyazikhin, Y.; Wen, G.; Marshak, A.; Yan, G.; Mu, X.; Park, T.; Chen, C.; Xu, B.; Myneni, R. B.
2017-12-01
Earth reflectivity, which is also specified as Earth albedo or Earth reflectance, is defined as the fraction of incident solar radiation reflected back to space at the top of the atmosphere. It is a key climate parameter that describes climate forcing and associated response of the climate system. Satellite is one of the most efficient ways to measure earth reflectivity. Conventional polar orbit and geostationary satellites observe the Earth at a specific local solar time or monitor only a specific area of the Earth. For the first time, the NASA's Earth Polychromatic Imaging Camera (EPIC) onboard NOAA's Deep Space Climate Observatory (DSCOVR) collects simultaneously radiance data of the entire sunlit earth at 8 km resolution at nadir every 65 to 110 min. It provides reflectivity images in backscattering direction with the scattering angle between 168º and 176º at 10 narrow spectral bands in ultraviolet, visible, and near-Infrared (NIR) wavelengths. We estimate the Earth reflectivity using DSCOVR EPIC observations and analyze errors in Earth reflectivity due to sampling strategy of polar orbit Terra/Aqua MODIS and geostationary Goddard Earth Observing System-R series missions. We also provide estimates of contributions from ocean, clouds, land and vegetation to the Earth reflectivity. Graphic abstract shows enhanced RGB EPIC images of the Earth taken on July-24-2016 at 7:04GMT and 15:48 GMT. Parallel lines depict a 2330 km wide Aqua MODIS swath. The plot shows diurnal courses of mean Earth reflectance over the Aqua swath (triangles) and the entire image (circles). In this example the relative difference between the mean reflectances is +34% at 7:04GMT and -16% at 15:48 GMT. Corresponding daily averages are 0.256 (0.044) and 0.231 (0.025). The relative precision estimated as root mean square relative error is 17.9% in this example.
'EPIC' View of Africa and Europe from a Million Miles Away
2015-07-29
Africa is front and center in this image of Earth taken by a NASA camera on the Deep Space Climate Observatory (DSCOVR) satellite. The image, taken July 6 from a vantage point one million miles from Earth, was one of the first taken by NASA’s Earth Polychromatic Imaging Camera (EPIC). Central Europe is toward the top of the image with the Sahara Desert to the south, showing the Nile River flowing to the Mediterranean Sea through Egypt. The photographic-quality color image was generated by combining three separate images of the entire Earth taken a few minutes apart. The camera takes a series of 10 images using different narrowband filters -- from ultraviolet to near infrared -- to produce a variety of science products. The red, green and blue channel images are used in these Earth images. The DSCOVR mission is a partnership between NASA, the National Oceanic and Atmospheric Administration (NOAA) and the U.S. Air Force, with the primary objective to maintain the nation’s real-time solar wind monitoring capabilities, which are critical to the accuracy and lead time of space weather alerts and forecasts from NOAA. DSCOVR was launched in February to its planned orbit at the first Lagrange point or L1, about one million miles from Earth toward the sun. It’s from that unique vantage point that the EPIC instrument is acquiring images of the entire sunlit face of Earth. Data from EPIC will be used to measure ozone and aerosol levels in Earth’s atmosphere, cloud height, vegetation properties and a variety of other features. Image Credit: NASA
NASA's EPIC View of 2017 Eclipse Across America
2017-08-22
From a million miles out in space, NASA’s Earth Polychromatic Imaging Camera (EPIC) captured natural color images of the moon’s shadow crossing over North America on Aug. 21, 2017. EPIC is aboard NOAA’s Deep Space Climate Observatory (DSCOVR), where it photographs the full sunlit side of Earth every day, giving it a unique view of total solar eclipses. EPIC normally takes about 20 to 22 images of Earth per day, so this animation appears to speed up the progression of the eclipse. To see the images of Earth every day, go to: epic.gsfc.nasa.gov NASA image use policy. NASA Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission. Follow us on Twitter Like us on Facebook Find us on Instagram
NASA Technical Reports Server (NTRS)
Meyer, Kerry; Yang, Yuekui; Platnick, Steven
2016-01-01
This paper presents an investigation of the expected uncertainties of a single channel cloud optical thickness (COT) retrieval technique, as well as a simple cloud-temperature-threshold-based thermodynamic phase approach, in support of the Deep Space Climate Observatory (DSCOVR) mission. DSCOVR cloud products will be derived from Earth Polychromatic Imaging Camera (EPIC) observations in the ultraviolet and visible spectra. Since EPIC is not equipped with a spectral channel in the shortwave or mid-wave infrared that is sensitive to cloud effective radius (CER), COT will be inferred from a single visible channel with the assumption of appropriate CER values for liquid and ice phase clouds. One month of Aqua MODIS daytime granules from April 2005 is selected for investigating cloud phase sensitivity, and a subset of these granules that has similar EPIC sun-view geometry is selected for investigating COT uncertainties. EPIC COT retrievals are simulated with the same algorithm as the operational MODIS cloud products (MOD06), except using fixed phase-dependent CER values. Uncertainty estimates are derived by comparing the single channel COT retrievals with the baseline bi-spectral MODIS retrievals. Results show that a single channel COT retrieval is feasible for EPIC. For ice clouds, single channel retrieval errors are minimal (less than 2 percent) due to the particle- size insensitivity of the assumed ice crystal (i.e., severely roughened aggregate of hexagonal columns) scattering properties at visible wavelengths, while for liquid clouds the error is mostly limited to within 10 percent, although for thin clouds (COT less than 2) the error can be higher. Potential uncertainties in EPIC cloud masking and cloud temperature retrievals are not considered in this study.
Meyer, Kerry; Yang, Yuekui; Platnick, Steven
2018-01-01
This paper presents an investigation of the expected uncertainties of a single channel cloud optical thickness (COT) retrieval technique, as well as a simple cloud temperature threshold based thermodynamic phase approach, in support of the Deep Space Climate Observatory (DSCOVR) mission. DSCOVR cloud products will be derived from Earth Polychromatic Imaging Camera (EPIC) observations in the ultraviolet and visible spectra. Since EPIC is not equipped with a spectral channel in the shortwave or mid-wave infrared that is sensitive to cloud effective radius (CER), COT will be inferred from a single visible channel with the assumption of appropriate CER values for liquid and ice phase clouds. One month of Aqua MODIS daytime granules from April 2005 is selected for investigating cloud phase sensitivity, and a subset of these granules that has similar EPIC sun-view geometry is selected for investigating COT uncertainties. EPIC COT retrievals are simulated with the same algorithm as the operational MODIS cloud products (MOD06), except using fixed phase-dependent CER values. Uncertainty estimates are derived by comparing the single channel COT retrievals with the baseline bi-spectral MODIS retrievals. Results show that a single channel COT retrieval is feasible for EPIC. For ice clouds, single channel retrieval errors are minimal (< 2%) due to the particle size insensitivity of the assumed ice crystal (i.e., severely roughened aggregate of hexagonal columns) scattering properties at visible wavelengths, while for liquid clouds the error is mostly limited to within 10%, although for thin clouds (COT < 2) the error can be higher. Potential uncertainties in EPIC cloud masking and cloud temperature retrievals are not considered in this study. PMID:29619116
Meyer, Kerry; Yang, Yuekui; Platnick, Steven
2016-01-01
This paper presents an investigation of the expected uncertainties of a single channel cloud optical thickness (COT) retrieval technique, as well as a simple cloud temperature threshold based thermodynamic phase approach, in support of the Deep Space Climate Observatory (DSCOVR) mission. DSCOVR cloud products will be derived from Earth Polychromatic Imaging Camera (EPIC) observations in the ultraviolet and visible spectra. Since EPIC is not equipped with a spectral channel in the shortwave or mid-wave infrared that is sensitive to cloud effective radius (CER), COT will be inferred from a single visible channel with the assumption of appropriate CER values for liquid and ice phase clouds. One month of Aqua MODIS daytime granules from April 2005 is selected for investigating cloud phase sensitivity, and a subset of these granules that has similar EPIC sun-view geometry is selected for investigating COT uncertainties. EPIC COT retrievals are simulated with the same algorithm as the operational MODIS cloud products (MOD06), except using fixed phase-dependent CER values. Uncertainty estimates are derived by comparing the single channel COT retrievals with the baseline bi-spectral MODIS retrievals. Results show that a single channel COT retrieval is feasible for EPIC. For ice clouds, single channel retrieval errors are minimal (< 2%) due to the particle size insensitivity of the assumed ice crystal (i.e., severely roughened aggregate of hexagonal columns) scattering properties at visible wavelengths, while for liquid clouds the error is mostly limited to within 10%, although for thin clouds (COT < 2) the error can be higher. Potential uncertainties in EPIC cloud masking and cloud temperature retrievals are not considered in this study.
Terrestrial glint seen from deep space: Oriented ice crystals detected from the Lagrangian point
NASA Astrophysics Data System (ADS)
Marshak, Alexander; Várnai, Tamás.; Kostinski, Alexander
2017-05-01
The Deep Space Climate Observatory (DSCOVR) spacecraft resides at the first Lagrangian point about one million miles from Earth. A polychromatic imaging camera onboard delivers nearly hourly observations of the entire sunlit face of the Earth. Many images contain unexpected bright flashes of light over both ocean and land. We construct a yearlong time series of flash latitudes, scattering angles, and oxygen absorption to demonstrate conclusively that the flashes over land are specular reflections off tiny ice platelets floating in the air nearly horizontally. Such deep space detection of tropospheric ice can be used to constrain the likelihood of oriented crystals and their contribution to Earth albedo. These glint observations also support proposals for detecting starlight glints off faint companions in our search for habitable exoplanets.
Denoising of polychromatic CT images based on their own noise properties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Ji Hye; Chang, Yongjin; Ra, Jong Beom, E-mail: jbra@kaist.ac.kr
Purpose: Because of high diagnostic accuracy and fast scan time, computed tomography (CT) has been widely used in various clinical applications. Since the CT scan introduces radiation exposure to patients, however, dose reduction has recently been recognized as an important issue in CT imaging. However, low-dose CT causes an increase of noise in the image and thereby deteriorates the accuracy of diagnosis. In this paper, the authors develop an efficient denoising algorithm for low-dose CT images obtained using a polychromatic x-ray source. The algorithm is based on two steps: (i) estimation of space variant noise statistics, which are uniquely determinedmore » according to the system geometry and scanned object, and (ii) subsequent novel conversion of the estimated noise to Gaussian noise so that an existing high performance Gaussian noise filtering algorithm can be directly applied to CT images with non-Gaussian noise. Methods: For efficient polychromatic CT image denoising, the authors first reconstruct an image with the iterative maximum-likelihood polychromatic algorithm for CT to alleviate the beam-hardening problem. We then estimate the space-variant noise variance distribution on the image domain. Since there are many high performance denoising algorithms available for the Gaussian noise, image denoising can become much more efficient if they can be used. Hence, the authors propose a novel conversion scheme to transform the estimated space-variant noise to near Gaussian noise. In the suggested scheme, the authors first convert the image so that its mean and variance can have a linear relationship, and then produce a Gaussian image via variance stabilizing transform. The authors then apply a block matching 4D algorithm that is optimized for noise reduction of the Gaussian image, and reconvert the result to obtain a final denoised image. To examine the performance of the proposed method, an XCAT phantom simulation and a physical phantom experiment were conducted. Results: Both simulation and experimental results show that, unlike the existing denoising algorithms, the proposed algorithm can effectively reduce the noise over the whole region of CT images while preventing degradation of image resolution. Conclusions: To effectively denoise polychromatic low-dose CT images, a novel denoising algorithm is proposed. Because this algorithm is based on the noise statistics of a reconstructed polychromatic CT image, the spatially varying noise on the image is effectively reduced so that the denoised image will have homogeneous quality over the image domain. Through a simulation and a real experiment, it is verified that the proposed algorithm can deliver considerably better performance compared to the existing denoising algorithms.« less
'EPIC' View of Africa and Europe from a Million Miles Away
2015-07-29
Africa is front and center in this image of Earth taken by a NASA camera on the Deep Space Climate Observatory (DSCOVR) satellite. The image, taken July 6 from a vantage point one million miles from Earth, was one of the first taken by NASA’s Earth Polychromatic Imaging Camera (EPIC). Central Europe is toward the top of the image with the Sahara Desert to the south, showing the Nile River flowing to the Mediterranean Sea through Egypt. The photographic-quality color image was generated by combining three separate images of the entire Earth taken a few minutes apart. The camera takes a series of 10 images using different narrowband filters -- from ultraviolet to near infrared -- to produce a variety of science products. The red, green and blue channel images are used in these Earth images. The DSCOVR mission is a partnership between NASA, the National Oceanic and Atmospheric Administration (NOAA) and the U.S. Air Force, with the primary objective to maintain the nation’s real-time solar wind monitoring capabilities, which are critical to the accuracy and lead time of space weather alerts and forecasts from NOAA. DSCOVR was launched in February to its planned orbit at the first Lagrange point or L1, about one million miles from Earth toward the sun. It’s from that unique vantage point that the EPIC instrument is acquiring images of the entire sunlit face of Earth. Data from EPIC will be used to measure ozone and aerosol levels in Earth’s atmosphere, cloud height, vegetation properties and a variety of other features. Image Credit: NASA NASA image use policy. NASA Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission. Follow us on Twitter Like us on Facebook Find us on Instagram
Parham, Christopher A; Zhong, Zhong; Pisano, Etta; Connor, Jr., Dean M
2015-03-03
Systems and methods for detecting an image of an object using a multi-beam imaging system from an x-ray beam having a polychromatic energy distribution are disclosed. According to one aspect, a method can include generating a first X-ray beam having a polychromatic energy distribution. Further, the method can include positioning a plurality of monochromator crystals in a predetermined position to directly intercept the first X-ray beam such that a plurality of second X-ray beams having predetermined energy levels are produced. Further, an object can be positioned in the path of the second X-ray beams for transmission of the second X-ray beams through the object and emission from the object as transmitted X-ray beams. The transmitted X-ray beams can each be directed at an angle of incidence upon one or more crystal analyzers. Further, an image of the object can be detected from the beams diffracted from the analyzer crystals.
Simulations of x-ray speckle-based dark-field and phase-contrast imaging with a polychromatic beam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zdora, Marie-Christine, E-mail: marie-christine.zdora@diamond.ac.uk; Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE; Department of Physics & Astronomy, University College London, London WC1E 6BT
2015-09-21
Following the first experimental demonstration of x-ray speckle-based multimodal imaging using a polychromatic beam [I. Zanette et al., Phys. Rev. Lett. 112(25), 253903 (2014)], we present a simulation study on the effects of a polychromatic x-ray spectrum on the performance of this technique. We observe that the contrast of the near-field speckles is only mildly influenced by the bandwidth of the energy spectrum. Moreover, using a homogeneous object with simple geometry, we characterize the beam hardening artifacts in the reconstructed transmission and refraction angle images, and we describe how the beam hardening also affects the dark-field signal provided by specklemore » tracking. This study is particularly important for further implementations and developments of coherent speckle-based techniques at laboratory x-ray sources.« less
Retrieving Smoke Aerosol Height from DSCOVR/EPIC
NASA Astrophysics Data System (ADS)
Xu, X.; Wang, J.; Wang, Y.
2017-12-01
Unlike industrial pollutant particles that are often confined within the planetary boundary layer, smoke from forest and agriculture fires can inject massive carbonaceous aerosols into the upper troposphere due to the intense pyro-convection. Sensitivity of weather and climate to absorbing carbonaceous aerosols is regulated by the altitude of those aerosol layers. However, aerosol height information remains limited from passive satellite sensors. Here we present an algorithm to estimate smoke aerosol height from radiances in the oxygen A and B bands measured by the Earth Polychromatic Imaging Camera (EPIC) from the Deep Space Climate Observatory (DSCOVR). With a suit of case studies and validation efforts, we demonstrate that smoke aerosol height can be well retrieved over both ocean and land surfaces multiple times daily.
Parham, Christopher; Zhong, Zhong; Pisano, Etta; Connor, Dean; Chapman, Leroy D.
2010-06-22
Systems and methods for detecting an image of an object using an X-ray beam having a polychromatic energy distribution are disclosed. According to one aspect, a method can include detecting an image of an object. The method can include generating a first X-ray beam having a polychromatic energy distribution. Further, the method can include positioning a single monochromator crystal in a predetermined position to directly intercept the first X-ray beam such that a second X-ray beam having a predetermined energy level is produced. Further, an object can be positioned in the path of the second X-ray beam for transmission of the second X-ray beam through the object and emission from the object as a transmitted X-ray beam. The transmitted X-ray beam can be directed at an angle of incidence upon a crystal analyzer. Further, an image of the object can be detected from a beam diffracted from the analyzer crystal.
An extended algebraic reconstruction technique (E-ART) for dual spectral CT.
Zhao, Yunsong; Zhao, Xing; Zhang, Peng
2015-03-01
Compared with standard computed tomography (CT), dual spectral CT (DSCT) has many advantages for object separation, contrast enhancement, artifact reduction, and material composition assessment. But it is generally difficult to reconstruct images from polychromatic projections acquired by DSCT, because of the nonlinear relation between the polychromatic projections and the images to be reconstructed. This paper first models the DSCT reconstruction problem as a nonlinear system problem; and then extend the classic ART method to solve the nonlinear system. One feature of the proposed method is its flexibility. It fits for any scanning configurations commonly used and does not require consistent rays for different X-ray spectra. Another feature of the proposed method is its high degree of parallelism, which means that the method is suitable for acceleration on GPUs (graphic processing units) or other parallel systems. The method is validated with numerical experiments from simulated noise free and noisy data. High quality images are reconstructed with the proposed method from the polychromatic projections of DSCT. The reconstructed images are still satisfactory even if there are certain errors in the estimated X-ray spectra.
Jiang, Shanghai
2017-01-01
X-ray fluorescence computed tomography (XFCT) based on sheet beam can save a huge amount of time to obtain a whole set of projections using synchrotron. However, it is clearly unpractical for most biomedical research laboratories. In this paper, polychromatic X-ray fluorescence computed tomography with sheet-beam geometry is tested by Monte Carlo simulation. First, two phantoms (A and B) filled with PMMA are used to simulate imaging process through GEANT 4. Phantom A contains several GNP-loaded regions with the same size (10 mm) in height and diameter but different Au weight concentration ranging from 0.3% to 1.8%. Phantom B contains twelve GNP-loaded regions with the same Au weight concentration (1.6%) but different diameter ranging from 1 mm to 9 mm. Second, discretized presentation of imaging model is established to reconstruct more accurate XFCT images. Third, XFCT images of phantoms A and B are reconstructed by filter back-projection (FBP) and maximum likelihood expectation maximization (MLEM) with and without correction, respectively. Contrast-to-noise ratio (CNR) is calculated to evaluate all the reconstructed images. Our results show that it is feasible for sheet-beam XFCT system based on polychromatic X-ray source and the discretized imaging model can be used to reconstruct more accurate images. PMID:28567054
Polychromatic wave-optics models for image-plane speckle. 2. Unresolved objects.
Van Zandt, Noah R; Spencer, Mark F; Steinbock, Michael J; Anderson, Brian M; Hyde, Milo W; Fiorino, Steven T
2018-05-20
Polychromatic laser light can reduce speckle noise in many wavefront-sensing and imaging applications. To help quantify the achievable reduction in speckle noise, this study investigates the accuracy of three polychromatic wave-optics models under the specific conditions of an unresolved object. Because existing theory assumes a well-resolved object, laboratory experiments are used to evaluate model accuracy. The three models use Monte-Carlo averaging, depth slicing, and spectral slicing, respectively, to simulate the laser-object interaction. The experiments involve spoiling the temporal coherence of laser light via a fiber-based, electro-optic modulator. After the light scatters off of the rough object, speckle statistics are measured. The Monte-Carlo method is found to be highly inaccurate, while depth-slicing error peaks at 7.8% but is generally much lower in comparison. The spectral-slicing method is the most accurate, always producing results within the error bounds of the experiment.
Jones, Bernard L; Cho, Sang Hyun
2011-06-21
A recent study investigated the feasibility to develop a bench-top x-ray fluorescence computed tomography (XFCT) system capable of determining the spatial distribution and concentration of gold nanoparticles (GNPs) in vivo using a diagnostic energy range polychromatic (i.e. 110 kVp) pencil-beam source. In this follow-up study, we examined the feasibility of a polychromatic cone-beam implementation of XFCT by Monte Carlo (MC) simulations using the MCNP5 code. In the current MC model, cylindrical columns with various sizes (5-10 mm in diameter) containing water loaded with GNPs (0.1-2% gold by weight) were inserted into a 5 cm diameter cylindrical polymethyl methacrylate phantom. The phantom was then irradiated by a lead-filtered 110 kVp x-ray source, and the resulting gold fluorescence and Compton-scattered photons were collected by a series of energy-sensitive tallies after passing through lead parallel-hole collimators. A maximum-likelihood iterative reconstruction algorithm was implemented to reconstruct the image of GNP-loaded objects within the phantom. The effects of attenuation of both the primary beam through the phantom and the gold fluorescence photons en route to the detector were corrected during the image reconstruction. Accurate images of the GNP-containing phantom were successfully reconstructed for three different phantom configurations, with both spatial distribution and relative concentration of GNPs well identified. The pixel intensity of regions containing GNPs was linearly proportional to the gold concentration. The current MC study strongly suggests the possibility of developing a bench-top, polychromatic, cone-beam XFCT system for in vivo imaging.
Wang, Yang; Qian, Bangping; Li, Baoxin; Qin, Guochu; Zhou, Zhengyang; Qiu, Yong; Sun, Xizhao; Zhu, Bin
2013-08-01
To evaluate the effectiveness of spectral CT in reducing metal artifacts caused by pedicle screws in patients with scoliosis. Institutional review committee approval and written informed consents from patients were obtained. 18 scoliotic patients with a total of 228 pedicle screws who underwent spectral CT imaging were included in this study. Monochromatic image sets with and without the additional metal artifacts reduction software (MARS) correction were generated with photon energy at 65keV and from 70 to 140keV with 10keV interval using the 80kVp and 140kVp projection sets. Polychromatic images corresponded to the conventional 140kVp imaging were also generated from the same scan data as a control group. Both objective evaluation (screw width and quantitative artifacts index measurements) and subjective evaluation (depiction of pedicle screws, surrounding structures and their relationship) were performed. Image quality of monochromatic images in the range from 110 to 140keV (0.97±0.28) was rated superior to the conventional polychromatic images (2.53±0.54) and also better than monochromatic images with lower energy. Images of energy above 100keV also give accurate measurement of the width of screws and relatively low artifacts index. The form of screws was slightly distorted in MARS reconstruction. Compared to conventional polychromatic images, monochromatic images acquired from dual-energy CT provided superior image quality with much reduced metal artifacts of pedicle screws in patients with scoliosis. Optimal energy range was found between 110 and 140keV. Crown Copyright © 2013. Published by Elsevier Ireland Ltd. All rights reserved.
Radiative transfer models for retrieval of cloud parameters from EPIC/DSCOVR measurements
NASA Astrophysics Data System (ADS)
Molina García, Víctor; Sasi, Sruthy; Efremenko, Dmitry S.; Doicu, Adrian; Loyola, Diego
2018-07-01
In this paper we analyze the accuracy and efficiency of several radiative transfer models for inferring cloud parameters from radiances measured by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR). The radiative transfer models are the exact discrete ordinate and matrix operator methods with matrix exponential, and the approximate asymptotic and equivalent Lambertian cloud models. To deal with the computationally expensive radiative transfer calculations, several acceleration techniques such as, for example, the telescoping technique, the method of false discrete ordinate, the correlated k-distribution method and the principal component analysis (PCA) are used. We found that, for the EPIC oxygen A-band absorption channel at 764 nm, the exact models using the correlated k-distribution in conjunction with PCA yield an accuracy better than 1.5% and a computation time of 18 s for radiance calculations at 5 viewing zenith angles.
Reynoso, Exequiel; Capunay, Carlos; Rasumoff, Alejandro; Vallejos, Javier; Carpio, Jimena; Lago, Karen; Carrascosa, Patricia
2016-01-01
The aim of this study was to explore the usefulness of combined virtual monochromatic imaging and metal artifact reduction software (MARS) for the evaluation of musculoskeletal periprosthetic tissue. Measurements were performed in periprosthetic and remote regions in 80 patients using a high-definition scanner. Polychromatic images with and without MARS and virtual monochromatic images were obtained. Periprosthetic polychromatic imaging (PI) showed significant differences compared with remote areas among the 3 tissues explored (P < 0.0001). No significant differences were observed between periprosthetic and remote tissues using monochromatic imaging with MARS (P = 0.053 bone, P = 0.32 soft tissue, and P = 0.13 fat). However, such differences were significant using PI with MARS among bone (P = 0.005) and fat (P = 0.02) tissues. All periprosthetic areas were noninterpretable using PI, compared with 11 (9%) using monochromatic imaging. The combined use of virtual monochromatic imaging and MARS reduced periprosthetic artifacts, achieving attenuation levels comparable to implant-free tissue.
Active polarization descattering.
Treibitz, Tali; Schechner, Yoav Y
2009-03-01
Vision in scattering media is important but challenging. Images suffer from poor visibility due to backscattering and attenuation. Most prior methods for scene recovery use active illumination scanners (structured and gated), which can be slow and cumbersome, while natural illumination is inapplicable to dark environments. The current paper addresses the need for a non-scanning recovery method, that uses active scene irradiance. We study the formation of images under widefield artificial illumination. Based on the formation model, the paper presents an approach for recovering the object signal. It also yields rough information about the 3D scene structure. The approach can work with compact, simple hardware, having active widefield, polychromatic polarized illumination. The camera is fitted with a polarization analyzer. Two frames of the scene are taken, with different states of the analyzer or polarizer. A recovery algorithm follows the acquisition. It allows both the backscatter and the object reflection to be partially polarized. It thus unifies and generalizes prior polarization-based methods, which had assumed exclusive polarization of either of these components. The approach is limited to an effective range, due to image noise and illumination falloff. Thus, the limits and noise sensitivity are analyzed. We demonstrate the approach in underwater field experiments.
NASA Astrophysics Data System (ADS)
Xu, Xiaoguang; Wang, Jun; Wang, Yi; Zeng, Jing; Torres, Omar; Yang, Yuekui; Marshak, Alexander; Reid, Jeffrey; Miller, Steve
2017-07-01
We presented an algorithm for inferring aerosol layer height (ALH) and optical depth (AOD) over ocean surface from radiances in oxygen A and B bands measured by the Earth Polychromatic Imaging Camera (EPIC) on the Deep Space Climate Observatory (DSCOVR) orbiting at Lagrangian-1 point. The algorithm was applied to EPIC imagery of a 2 day dust outbreak over the North Atlantic Ocean. Retrieved ALHs and AODs were evaluated against counterparts observed by Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP), Moderate Resolution Imaging Spectroradiometer, and Aerosol Robotic Network. The comparisons showed 71.5% of EPIC-retrieved ALHs were within ±0.5 km of those determined from CALIOP and 74.4% of EPIC AOD retrievals fell within a ± (0.1 + 10%) envelope of MODIS retrievals. This study demonstrates the potential of EPIC measurements for retrieving global aerosol height multiple times daily, which are essential for evaluating aerosol profile simulated in climate models and for better estimating aerosol radiative effects.
Multispectral Wavefronts Retrieval in Digital Holographic Three-Dimensional Imaging Spectrometry
NASA Astrophysics Data System (ADS)
Yoshimori, Kyu
2010-04-01
This paper deals with a recently developed passive interferometric technique for retrieving a set of spectral components of wavefronts that are propagating from a spatially incoherent, polychromatic object. The technique is based on measurement of 5-D spatial coherence function using a suitably designed interferometer. By applying signal processing, including aperture synthesis and spectral decomposition, one may obtains a set of wavefronts of different spectral bands. Since each wavefront is equivalent to the complex Fresnel hologram at a particular spectrum of the polychromatic object, application of the conventional Fresnel transform yields 3-D image of different spectrum. Thus, this technique of multispectral wavefronts retrieval provides a new type of 3-D imaging spectrometry based on a fully passive interferometry. Experimental results are also shown to demonstrate the validity of the method.
Luminescent screen composition and apparatus
NASA Technical Reports Server (NTRS)
Hilborn, E. H.
1970-01-01
Ultraviolet light projects photographically produced images on a screen composed of a mixture of linear and nonlinear phosphors whose spectral emissions are different. This allows the display of polychromatic luminescent images, which gives better discrimination of the objects being viewed.
Nakashima, Yoshito; Nakano, Tsukasa
2014-01-01
Iodine is commonly used as a contrast agent in nonmedical science and engineering, for example, to visualize Darcy flow in porous geological media using X-ray computed tomography (CT). Undesirable beam hardening artifacts occur when a polychromatic X-ray source is used, which makes the quantitative analysis of CT images difficult. To optimize the chemistry of a contrast agent in terms of the beam hardening reduction, we performed computer simulations and generated synthetic CT images of a homogeneous cylindrical sand-pack (diameter, 28 or 56 mm; porosity, 39 vol.% saturated with aqueous suspensions of heavy elements assuming the use of a polychromatic medical CT scanner. The degree of cupping derived from the beam hardening was assessed using the reconstructed CT images to find the chemistry of the suspension that induced the least cupping. The results showed that (i) the degree of cupping depended on the position of the K absorption edge of the heavy element relative to peak of the polychromatic incident X-ray spectrum, (ii) (53)I was not an ideal contrast agent because it causes marked cupping, and (iii) a single element much heavier than (53)I ((64)Gd to (79)Au) reduced the cupping artifact significantly, and a four-heavy-element mixture of elements from (64)Gd to (79)Au reduced the artifact most significantly.
Digital holographic 3D imaging spectrometry (a review)
NASA Astrophysics Data System (ADS)
Yoshimori, Kyu
2017-09-01
This paper reviews recent progress in the digital holographic 3D imaging spectrometry. The principle of this method is a marriage of incoherent holography and Fourier transform spectroscopy. Review includes principle, procedure of signal processing and experimental results to obtain a multispectral set of 3D images for spatially incoherent, polychromatic objects.
Linearized radiative transfer models for retrieval of cloud parameters from EPIC/DSCOVR measurements
NASA Astrophysics Data System (ADS)
Molina García, Víctor; Sasi, Sruthy; Efremenko, Dmitry S.; Doicu, Adrian; Loyola, Diego
2018-07-01
In this paper, we describe several linearized radiative transfer models which can be used for the retrieval of cloud parameters from EPIC (Earth Polychromatic Imaging Camera) measurements. The approaches under examination are (1) the linearized forward approach, represented in this paper by the linearized discrete ordinate and matrix operator methods with matrix exponential, and (2) the forward-adjoint approach based on the discrete ordinate method with matrix exponential. To enhance the performance of the radiative transfer computations, the correlated k-distribution method and the Principal Component Analysis (PCA) technique are used. We provide a compact description of the proposed methods, as well as a numerical analysis of their accuracy and efficiency when simulating EPIC measurements in the oxygen A-band channel at 764 nm. We found that the computation time of the forward-adjoint approach using the correlated k-distribution method in conjunction with PCA is approximately 13 s for simultaneously computing the derivatives with respect to cloud optical thickness and cloud top height.
NASA Astrophysics Data System (ADS)
Khlopenkov, Konstantin; Duda, David; Thieman, Mandana; Minnis, Patrick; Su, Wenying; Bedka, Kristopher
2017-10-01
The Deep Space Climate Observatory (DSCOVR) enables analysis of the daytime Earth radiation budget via the onboard Earth Polychromatic Imaging Camera (EPIC) and National Institute of Standards and Technology Advanced Radiometer (NISTAR). Radiance observations and cloud property retrievals from low earth orbit and geostationary satellite imagers have to be co-located with EPIC pixels to provide scene identification in order to select anisotropic directional models needed to calculate shortwave and longwave fluxes. A new algorithm is proposed for optimal merging of selected radiances and cloud properties derived from multiple satellite imagers to obtain seamless global hourly composites at 5-km resolution. An aggregated rating is employed to incorporate several factors and to select the best observation at the time nearest to the EPIC measurement. Spatial accuracy is improved using inverse mapping with gradient search during reprojection and bicubic interpolation for pixel resampling. The composite data are subsequently remapped into EPIC-view domain by convolving composite pixels with the EPIC point spread function defined with a half-pixel accuracy. PSF-weighted average radiances and cloud properties are computed separately for each cloud phase. The algorithm has demonstrated contiguous global coverage for any requested time of day with a temporal lag of under 2 hours in over 95% of the globe.
NASA Technical Reports Server (NTRS)
Khlopenkov, Konstantin; Duda, David; Thieman, Mandana; Minnis, Patrick; Su, Wenying; Bedka, Kristopher
2017-01-01
The Deep Space Climate Observatory (DSCOVR) enables analysis of the daytime Earth radiation budget via the onboard Earth Polychromatic Imaging Camera (EPIC) and National Institute of Standards and Technology Advanced Radiometer (NISTAR). Radiance observations and cloud property retrievals from low earth orbit and geostationary satellite imagers have to be co-located with EPIC pixels to provide scene identification in order to select anisotropic directional models needed to calculate shortwave and longwave fluxes. A new algorithm is proposed for optimal merging of selected radiances and cloud properties derived from multiple satellite imagers to obtain seamless global hourly composites at 5-kilometer resolution. An aggregated rating is employed to incorporate several factors and to select the best observation at the time nearest to the EPIC measurement. Spatial accuracy is improved using inverse mapping with gradient search during reprojection and bicubic interpolation for pixel resampling. The composite data are subsequently remapped into EPIC-view domain by convolving composite pixels with the EPIC point spread function (PSF) defined with a half-pixel accuracy. PSF-weighted average radiances and cloud properties are computed separately for each cloud phase. The algorithm has demonstrated contiguous global coverage for any requested time of day with a temporal lag of under 2 hours in over 95 percent of the globe.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gu, Renliang, E-mail: Venliang@iastate.edu, E-mail: ald@iastate.edu; Dogandžić, Aleksandar, E-mail: Venliang@iastate.edu, E-mail: ald@iastate.edu
2015-03-31
We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of themore » density map image in the wavelet domain. This algorithm alternates between a Nesterov’s proximal-gradient step for estimating the density map image and an active-set step for estimating the incident spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.« less
Mao, Shan; Cui, Qingfeng; Piao, Mingxu; Zhao, Lidong
2016-05-01
A mathematical model of diffraction efficiency and polychromatic integral diffraction efficiency affected by environment temperature change and incident angle for three-layer diffractive optics with different dispersion materials is put forward, and its effects are analyzed. Taking optical materials N-FK5 and N-SF1 as the substrates of multilayer diffractive optics, the effect on diffraction efficiency and polychromatic integral diffraction efficiency with intermediate materials POLYCARB is analyzed with environment temperature change as well as incident angle. Therefore, three-layer diffractive optics can be applied in more wide environmental temperature ranges and larger incident angles for refractive-diffractive hybrid optical systems, which can obtain better image quality. Analysis results can be used to guide the hybrid imaging optical system design for optical engineers.
Energy discriminating x-ray camera utilizing a cadmium telluride detector
NASA Astrophysics Data System (ADS)
Sato, Eiichi; Purkhet, Abderyim; Matsukiyo, Hiroshi; Osawa, Akihiro; Enomoto, Toshiyuki; Wantanabe, Manabu; Nagao, Jiro; Nomiya, Seiichiro; Hitomi, Keitaro; Tanaka, Etsuro; Kawai, Toshiaki; Sato, Shigehiro; Ogawa, Akira; Onagawa, Jun
2009-07-01
An energy-discriminating x-ray camera is useful for performing monochromatic radiography using polychromatic x rays. This x-ray camera was developed to carry out K-edge radiography using iodine-based contrast media. In this camera, objects are exposed by a cone beam from a cerium x-ray generator, and penetrating x-ray photons are detected by a cadmium telluride detector with an amplifier unit. The optimal x-ray photon energy and the energy width are selected out using a multichannel analyzer, and the photon number is counted by a counter card. Radiography was performed by the detector scanning using an x-y stage driven by a two-stage controller, and radiograms obtained by energy discriminating are shown on a personal computer monitor. In radiography, the tube voltage and current were 60 kV and 36 μA, respectively, and the x-ray intensity was 4.7 μGy/s. Cerium K-series characteristic x rays are absorbed effectively by iodine-based contrast media, and iodine K-edge radiography was performed using x rays with energies just beyond iodine K-edge energy 33.2 keV.
Dual-energy computed tomography for the detection of focal liver lesions.
Lago, K N; Vallejos, J; Capuñay, C; Salas, E; Reynoso, E; Carpio, J B; Carrascosa, P M
To qualitatively and quantitatively explore the spectral study of focal liver lesions, comparing it with the usual polychromatic assessment with single-energy computed tomography. We prospectively studied 50 patients with at least one focal liver lesion who were referred for abdominal multidetector computed tomography with intravenous contrast material. The portal phase was acquired with dual energy sources. The density of the lesions and of the surrounding liver parenchyma was measured both in the baseline polychromatic acquisition and in the posterior monochromatic reconstructions at 40 keV, 70 keV, and 140 keV. Spectral curves were traced and the dual-energy indices and contrast-to-noise ratio were calculated. Lastly, the quality of the images and the detectability of the lesions were assessed qualitatively. Densitometric differences between the different types of lesions (avascular and vascularized) and the liver were greater at low energy levels (left side of the spectral curve) than in the polychromatic evaluation. In the subjective assessment, the 40keV energy level had the greatest lesion detectability. Monochromatic spectral study with dual-energy computed tomography provides better lesion detectability at 40keV compared to that provided by the ordinary polychromatic evaluation. Copyright © 2017 SERAM. Publicado por Elsevier España, S.L.U. All rights reserved.
MASS ESTIMATES OF RAPIDLY MOVING PROMINENCE MATERIAL FROM HIGH-CADENCE EUV IMAGES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, David R.; Baker, Deborah; Van Driel-Gesztelyi, Lidia, E-mail: d.r.williams@ucl.ac.uk
We present a new method for determining the column density of erupting filament material using state-of-the-art multi-wavelength imaging data. Much of the prior work on filament/prominence structure can be divided between studies that use a polychromatic approach with targeted campaign observations and those that use synoptic observations, frequently in only one or two wavelengths. The superior time resolution, sensitivity, and near-synchronicity of data from the Solar Dynamics Observatory's Advanced Imaging Assembly allow us to combine these two techniques using photoionization continuum opacity to determine the spatial distribution of hydrogen in filament material. We apply the combined techniques to SDO/AIA observationsmore » of a filament that erupted during the spectacular coronal mass ejection on 2011 June 7. The resulting 'polychromatic opacity imaging' method offers a powerful way to track partially ionized gas as it erupts through the solar atmosphere on a regular basis, without the need for coordinated observations, thereby readily offering regular, realistic mass-distribution estimates for models of these erupting structures.« less
Jung, Seongmoon; Sung, Wonmo; Ye, Sung-Joon
2017-01-01
This work aims to develop a Monte Carlo (MC) model for pinhole K-shell X-ray fluorescence (XRF) imaging of metal nanoparticles using polychromatic X-rays. The MC model consisted of two-dimensional (2D) position-sensitive detectors and fan-beam X-rays used to stimulate the emission of XRF photons from gadolinium (Gd) or gold (Au) nanoparticles. Four cylindrical columns containing different concentrations of nanoparticles ranging from 0.01% to 0.09% by weight (wt%) were placed in a 5 cm diameter cylindrical water phantom. The images of the columns had detectable contrast-to-noise ratios (CNRs) of 5.7 and 4.3 for 0.01 wt% Gd and for 0.03 wt% Au, respectively. Higher concentrations of nanoparticles yielded higher CNR. For 1×1011 incident particles, the radiation dose to the phantom was 19.9 mGy for 110 kVp X-rays (Gd imaging) and 26.1 mGy for 140 kVp X-rays (Au imaging). The MC model of a pinhole XRF can acquire direct 2D slice images of the object without image reconstruction. The MC model demonstrated that the pinhole XRF imaging system could be a potential bioimaging modality for nanomedicine. PMID:28860750
Camera System MTF: combining optic with detector
NASA Astrophysics Data System (ADS)
Andersen, Torben B.; Granger, Zachary A.
2017-08-01
MTF is one of the most common metrics used to quantify the resolving power of an optical component. Extensive literature is dedicated to describing methods to calculate the Modulation Transfer Function (MTF) for stand-alone optical components such as a camera lens or telescope, and some literature addresses approaches to determine an MTF for combination of an optic with a detector. The formulations pertaining to a combined electro-optical system MTF are mostly based on theory, and assumptions that detector MTF is described only by the pixel pitch which does not account for wavelength dependencies. When working with real hardware, detectors are often characterized by testing MTF at discrete wavelengths. This paper presents a method to simplify the calculation of a polychromatic system MTF when it is permissible to consider the detector MTF to be independent of wavelength.
Holographic imaging with a Shack-Hartmann wavefront sensor.
Gong, Hai; Soloviev, Oleg; Wilding, Dean; Pozzi, Paolo; Verhaegen, Michel; Vdovin, Gleb
2016-06-27
A high-resolution Shack-Hartmann wavefront sensor has been used for coherent holographic imaging, by computer reconstruction and propagation of the complex field in a lensless imaging setup. The resolution of the images obtained with the experimental data is in a good agreement with the diffraction theory. Although a proper calibration with a reference beam improves the image quality, the method has a potential for reference-less holographic imaging with spatially coherent monochromatic and narrowband polychromatic sources in microscopy and imaging through turbulence.
The LCLS variable-energy hard X-ray single-shot spectrometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rich, David; Zhu, Diling; Turner, James
2016-01-01
The engineering design, implementation, operation and performance of the new variable-energy hard X-ray single-shot spectrometer (HXSSS) for the LCLS free-electron laser (FEL) are reported. The HXSSS system is based on a cylindrically bent Si thin crystal for dispersing the incident polychromatic FEL beam. A spatially resolved detector system consisting of a Ce:YAG X-ray scintillator screen, an optical imaging system and a low-noise pixelated optical camera is used to record the spectrograph. The HXSSS provides single-shot spectrum measurements for users whose experiments depend critically on the knowledge of the self-amplified spontaneous emission FEL spectrum. It also helps accelerator physicists for themore » continuing studies and optimization of self-seeding, various improved mechanisms for lasing mechanisms, and FEL performance improvements. The designed operating energy range of the HXSSS is from 4 to 20 keV, with the spectral range of order larger than 2% and a spectral resolution of 2 × 10 -5or better. Those performance goals have all been achieved during the commissioning of the HXSSS.« less
The LCLS variable-energy hard X-ray single-shot spectrometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rich, David; Zhu, Diling; Turner, James
The engineering design, implementation, operation and performance of the new variable-energy hard X-ray single-shot spectrometer (HXSSS) for the LCLS free-electron laser (FEL) are reported. The HXSSS system is based on a cylindrically bent Si thin crystal for dispersing the incident polychromatic FEL beam. A spatially resolved detector system consisting of a Ce:YAG X-ray scintillator screen, an optical imaging system and a low-noise pixelated optical camera is used to record the spectrograph. The HXSSS provides single-shot spectrum measurements for users whose experiments depend critically on the knowledge of the self-amplified spontaneous emission FEL spectrum. It also helps accelerator physicists for themore » continuing studies and optimization of self-seeding, various improved mechanisms for lasing mechanisms, and FEL performance improvements. The designed operating energy range of the HXSSS is from 4 to 20 keV, with the spectral range of order larger than 2% and a spectral resolution of 2 × 10 -5or better. Those performance goals have all been achieved during the commissioning of the HXSSS.« less
NASA Technical Reports Server (NTRS)
Marshak, Alexander; Knyazikhin, Yuri
2017-01-01
EPIC (Earth Polychromatic Imaging Camera) is a 10-channel spectroradiometer onboard DSCOVR (Deep Space Climate Observatory) spacecraft. In addition to the near-infrared (NIR, 780 nm) and the 'red' (680 nm) channels, EPIC also has the O2 A-band (764+/-0.2 nm) and B-band (687.75+/-0.2 nm). The EPIC Normalized Difference Vegetation Index (NDVI) is defined as the difference between NIR and 'red' channels normalized to their sum. However, the use of the O2 B-band instead of the 'red' channel mitigates the effect of atmosphere on remote sensing of surface reflectance because O2 reduces contribution from the radiation scattered by the atmosphere. Applying the radiative transfer theory and the spectral invariant approximation to EPIC observations, the paper provides supportive arguments for using the O2 band instead of the red channel for monitoring vegetation dynamics. Our results suggest that the use of the O2 B-band enhances the sensitivity of the top-of-atmosphere NDVI to the presence of vegetation.
NASA Astrophysics Data System (ADS)
Hermus, James; Szczykutowicz, Timothy P.; Strother, Charles M.; Mistretta, Charles
2014-03-01
When performing Computed Tomographic (CT) image reconstruction on digital subtraction angiography (DSA) projections, loss of vessel contrast has been observed behind highly attenuating anatomy, such as dental implants and large contrast filled aneurysms. Because this typically occurs only in a limited range of projection angles, the observed contrast time course can potentially be altered. In this work, we have developed a model for acquiring DSA projections that models both the polychromatic nature of the x-ray spectrum and the x-ray scattering interactions to investigate this problem. In our simulation framework, scatter and beam hardening contributions to vessel dropout can be analyzed separately. We constructed digital phantoms with large clearly defined regions containing iodine contrast, bone, soft issue, titanium (dental implants) or combinations of these materials. As the regions containing the materials were large and rectangular, when the phantoms were forward projected, the projections contained uniform regions of interest (ROI) and enabled accurate vessel dropout analysis. Two phantom models were used, one to model the case of a vessel behind a large contrast filled aneurysm and the other to model a vessel behind a dental implant. Cases in which both beam hardening and scatter were turned off, only scatter was turned on, only beam hardening was turned on, and both scatter and beam hardening were turned on, were simulated for both phantom models. The analysis of this data showed that the contrast degradation is primarily due to scatter. When analyzing the aneurysm case, 90.25% of the vessel contrast was lost in the polychromatic scatter image, however only 50.5% of the vessel contrast was lost in the beam hardening only image. When analyzing the teeth case, 44.2% of the vessel contrast was lost in the polychromatic scatter image and only 26.2% of the vessel contrast was lost in the beam hardening only image.
DSCOVR EPIC L2 VESDR V1 Product Announcement
Atmospheric Science Data Center
2018-06-13
... Boston University announce the public release of Vegetation Earth System Data Record (VESDR) derived from the Earth Polychromatic Imaging ... derived products. We also provide two ancillary science data products, namely, 10 km Land Cover Type and Distribution of ...
Cai, C; Rodet, T; Legoupil, S; Mohammad-Djafari, A
2013-11-01
Dual-energy computed tomography (DECT) makes it possible to get two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models without counting the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models counting the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking negative-log. Referring to Bayesian inferences, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also necessary to have the accurate spectrum information about the source-detector system. When dealing with experimental data, the spectrum can be predicted by a Monte Carlo simulator. For the materials between water and bone, less than 5% separation errors are observed on the estimated decomposition fractions. The proposed approach is a statistical reconstruction approach based on a nonlinear forward model counting the full beam polychromaticity and applied directly to the projections without taking negative-log. Compared to the approaches based on linear forward models and the BHA correction approaches, it has advantages in noise robustness and reconstruction accuracy.
Application of the polychromatic defocus transfer function to multifocal lenses.
Schwiegerling, Jim; Choi, Junoh
2008-11-01
To model the performance of multifocal lenses in polychromatic lighting. The defocus transfer function (DTF) is a mathematical technique for illustrating the optical transfer function for all levels of defocus at a given wavelength. A polychromatic version of the DTF is developed that accounts for changes in cutoff frequency, reduction in diffraction efficiency, ocular chromatic aberration, and photoreceptor spectral sensitivity. The differences between the monochromatic and polychromatic DTF are illustrated with a diffractive multifocal intraocular lens. Polychromatic analysis shows an increase in depth of field of diffractive lenses relative to assessment at a single wavelength. The polychromatic DTF is a useful tool for analyzing presbyopia treatments under "white-light" viewing conditions and provides feedback to lens designers on anticipated performance.
Reflective diffraction grating
Lamartine, Bruce C.
2003-06-24
Reflective diffraction grating. A focused ion beam (FIB) micromilling apparatus is used to store color images in a durable medium by milling away portions of the surface of the medium to produce a reflective diffraction grating with blazed pits. The images are retrieved by exposing the surface of the grating to polychromatic light from a particular incident bearing and observing the light reflected by the surface from specified reception bearing.
Shi, Hongli; Yang, Zhi; Luo, Shuqian
2017-01-01
The beam hardening artifact is one of most important modalities of metal artifact for polychromatic X-ray computed tomography (CT), which can impair the image quality seriously. An iterative approach is proposed to reduce beam hardening artifact caused by metallic components in polychromatic X-ray CT. According to Lambert-Beer law, the (detected) projections can be expressed as monotonic nonlinear functions of element geometry projections, which are the theoretical projections produced only by the pixel intensities (image grayscale) of certain element (component). With help of a prior knowledge on spectrum distribution of X-ray beam source and energy-dependent attenuation coefficients, the functions have explicit expressions. Newton-Raphson algorithm is employed to solve the functions. The solutions are named as the synthetical geometry projections, which are the nearly linear weighted sum of element geometry projections with respect to mean of each attenuation coefficient. In this process, the attenuation coefficients are modified to make Newton-Raphson iterative functions satisfy the convergence conditions of fixed pointed iteration(FPI) so that the solutions will approach the true synthetical geometry projections stably. The underlying images are obtained using the projections by general reconstruction algorithms such as the filtered back projection (FBP). The image gray values are adjusted according to the attenuation coefficient means to obtain proper CT numbers. Several examples demonstrate the proposed approach is efficient in reducing beam hardening artifacts and has satisfactory performance in the term of some general criteria. In a simulation example, the normalized root mean square difference (NRMSD) can be reduced 17.52% compared to a newest algorithm. Since the element geometry projections are free from the effect of beam hardening, the nearly linear weighted sum of them, the synthetical geometry projections, are almost free from the effect of beam hardening. By working out the synthetical geometry projections, the proposed approach becomes quite efficient in reducing beam hardening artifacts.
Simulation of a compact analyzer-based imaging system with a regular x-ray source
NASA Astrophysics Data System (ADS)
Caudevilla, Oriol; Zhou, Wei; Stoupin, Stanislav; Verman, Boris; Brankov, J. G.
2017-03-01
Analyzer-based Imaging (ABI) belongs to a broader family of phase-contrast (PC) X-ray techniques. PC measures X-ray deflection phenomena when interacting with a sample, which is known to provide higher contrast images of soft tissue than other X-ray methods. This is of high interest in the medical field, in particular for mammogram applications. This paper presents a simulation tool for table-top ABI systems using a conventional polychromatic X-ray source.
Detector-unit-dependent calibration for polychromatic projections of rock core CT.
Li, Mengfei; Zhao, Yunsong; Zhang, Peng
2017-01-01
Computed tomography (CT) plays an important role in digital rock analysis, which is a new prospective technique for oil and gas industry. But the artifacts in CT images will influence the accuracy of the digital rock model. In this study, we proposed and demonstrated a novel method to restore detector-unit-dependent functions for polychromatic projection calibration by scanning some simple shaped reference samples. As long as the attenuation coefficients of the reference samples are similar to the scanned object, the size or position is not needed to be exactly known. Both simulated and real data were used to verify the proposed method. The results showed that the new method reduced both beam hardening artifacts and ring artifacts effectively. Moreover, the method appeared to be quite robust.
A Toolbox for Imaging Stellar Surfaces
NASA Astrophysics Data System (ADS)
Young, John
2018-04-01
In this talk I will review the available algorithms for synthesis imaging at visible and infrared wavelengths, including both gray and polychromatic methods. I will explain state-of-the-art approaches to constraining the ill-posed image reconstruction problem, and selecting an appropriate regularisation function and strength of regularisation. The reconstruction biases that can follow from non-optimal choices will be discussed, including their potential impact on the physical interpretation of the results. This discussion will be illustrated with example stellar surface imaging results from real VLTI and COAST datasets.
Radiative Transfer Model for Operational Retrieval of Cloud Parameters from DSCOVR-EPIC Measurements
NASA Astrophysics Data System (ADS)
Yang, Y.; Molina Garcia, V.; Doicu, A.; Loyola, D. G.
2016-12-01
The Earth Polychromatic Imaging Camera (EPIC) onboard the Deep Space Climate Observatory (DSCOVR) measures the radiance in the backscattering region. To make sure that all details in the backward glory are covered, a large number of streams is required by a standard radiative transfer model based on the discrete ordinates method. Even the use of the delta-M scaling and the TMS correction do not substantially reduce the number of streams. The aim of this work is to analyze the capability of a fast radiative transfer model to retrieve operationally cloud parameters from EPIC measurements. The radiative transfer model combines the discrete ordinates method with matrix exponential for the computation of radiances and the matrix operator method for the calculation of the reflection and transmission matrices. Standard acceleration techniques as, for instance, the use of the normalized right and left eigenvectors, telescoping technique, Pade approximation and successive-order-of-scattering approximation are implemented. In addition, the model may compute the reflection matrix of the cloud by means of the asymptotic theory, and may use the equivalent Lambertian cloud model. The various approximations are analyzed from the point of view of efficiency and accuracy.
Estimation of Canopy Sunlit Fraction of Leaf Area from Ground-Based Measurements
NASA Astrophysics Data System (ADS)
Yang, B.; Knyazikhin, Y.; Yan, K.; Chen, C.; Park, T.; CHOI, S.; Mottus, M.; Rautiainen, M.; Stenberg, P.; Myneni, R.; Yan, L.
2015-12-01
The sunlit fraction of leaf area (SFLA) defined as the fraction of the total hemisurface leaf area illuminated by the direct solar beam is a key structural variable in many global models of climate, hydrology, biogeochemistry and ecology. SFLAI is expected to be a standard product from the Earth Polychromatic Imaging Camera (EPIC) on board the joint NOAA, NASA and US Air Force Deep Space Climate Observatory (DSCOVR) mission, which was successfully launched from Cape Canaveral, Florida on February 11, 2015. The DSCOVR EPIC sensor orbiting the Sun-Earth Lagrange L1 point provides multispectral measurements of the radiation reflected by Earth in retro-illumination directions. This poster discusses a methodology for estimating the SFLA using LAI-2000 Canopy Analyzer, which is expected to underlie the strategy for validation of the DSCOVR EPIC land surface products. LAI-2000 data collected over 18 coniferous and broadleaf sites in Hyytiälä, Central Finland, were used to estimate the SFLA. Field data on canopy geometry were used to simulate selected sites. Their SFLAI was calculated using a Monte Carlo (MC) technique. LAI-2000 estimates of SFLA showed a very good agreement with MC results, suggesting validity of the proposed approach.
NASA Astrophysics Data System (ADS)
Xiao, Xianghui; Fusseis, Florian; De Carlo, Francesco
2012-10-01
State-of-art synchrotron radiation based micro-computed tomography provides high spatial and temporal resolution. This matches the needs of many research problems in geosciences. In this letter we report the current capabilities in microtomography at sector 2BM at the Advanced Photon Source (APS) of Argonne National Laboratory. The beamline is well suited to routinely acquire three-dimensional data of excellent quality with sub-micron resolution. Fast cameras in combination with a polychromatic beam allow time-lapse experiments with temporal resolutions of down to 200 ms. Data processing utilizes quantitative phase retrieval to optimize contrast in phase contrast tomographic data. The combination of these capabilities with purpose-designed experimental cells allows for a wide range of dynamic studies on geoscientific topics, two of which are summarized here. In the near future, new experimental cells capable of simulating conditions in most geological reservoirs will be available for general use. Ultimately, these advances will be matched by a new wide-field imaging beam line, which will be constructed as part of the APS upgrade. It is expected that even faster tomography with larger field of view can be conducted at this beam line, creating new opportunities for geoscientific studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golosio, Bruno; Carpinelli, Massimo; Masala, Giovanni Luca
Phase contrast imaging is a technique widely used in synchrotron facilities for nondestructive analysis. Such technique can also be implemented through microfocus x-ray tube systems. Recently, a relatively new type of compact, quasimonochromatic x-ray sources based on Compton backscattering has been proposed for phase contrast imaging applications. In order to plan a phase contrast imaging system setup, to evaluate the system performance and to choose the experimental parameters that optimize the image quality, it is important to have reliable software for phase contrast imaging simulation. Several software tools have been developed and tested against experimental measurements at synchrotron facilities devotedmore » to phase contrast imaging. However, many approximations that are valid in such conditions (e.g., large source-object distance, small transverse size of the object, plane wave approximation, monochromatic beam, and Gaussian-shaped source focal spot) are not generally suitable for x-ray tubes and other compact systems. In this work we describe a general method for the simulation of phase contrast imaging using polychromatic sources based on a spherical wave description of the beam and on a double-Gaussian model of the source focal spot, we discuss the validity of some possible approximations, and we test the simulations against experimental measurements using a microfocus x-ray tube on three types of polymers (nylon, poly-ethylene-terephthalate, and poly-methyl-methacrylate) at varying source-object distance. It will be shown that, as long as all experimental conditions are described accurately in the simulations, the described method yields results that are in good agreement with experimental measurements.« less
Experimental observation of the effect of generic singularities in polychromatic dark hollow beams.
Yadav, Bharat Kumar; Joshi, Stuti; Kandpal, Hem Chandra
2014-08-15
This Letter presents the essence of our recent experimental study on generic singularities carrying spatially partially coherent, polychromatic dark hollow beams (PDHBs). To the best of our knowledge, this is the first experimental demonstration of generic singularities-induced wavefront tearing in focused polychromatic beams.
Simulations of multi-contrast x-ray imaging using near-field speckles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zdora, Marie-Christine; Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom and Department of Physics & Astronomy, University College London, London, WC1E 6BT; Thibault, Pierre
2016-01-28
X-ray dark-field and phase-contrast imaging using near-field speckles is a novel technique that overcomes limitations inherent in conventional absorption x-ray imaging, i.e. poor contrast for features with similar density. Speckle-based imaging yields a wealth of information with a simple setup tolerant to polychromatic and divergent beams, and simple data acquisition and analysis procedures. Here, we present a simulation software used to model the image formation with the speckle-based technique, and we compare simulated results on a phantom sample with experimental synchrotron data. Thorough simulation of a speckle-based imaging experiment will help for better understanding and optimising the technique itself.
Retrieving Volcanic SO2 from the 4-UV channels on DSCOVR/EPIC
NASA Astrophysics Data System (ADS)
Fisher, B. L.; Krotkov, N. A.; Carn, S. A.; Taylor, S.; Li, C.; Bhartia, P. K.; Huang, L. K.; Haffner, D. P.
2017-12-01
Since arriving at the L1 Lagrange point in June 2015, the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) has been collecting continuous full disk images of the sunlit earth from a distance of 1.5 million km. EPIC is a 10-band spectroradiometer that that has a field of view (FoV) at the earth's surface of about 25 km, providing a unique opportunity to observe the initial appearance and evolution of SO2 plumes from volcanic eruptions at about 90 minute temporal resolution. Our algorithm uses the 317.5, 325, 340 and 388 nm UV channels on EPIC to retrieve volcanic SO2, total column ozone, Lambertian equivalent reflectivity and its spectral dependence. The MS_SO2 algorithm has been successfully applied to the data from legacy and current NASA missions (e.g., Nimbus7/TOMS, SNPP/OMPS, and Aura/OMI). The separation between ozone and SO2 is possible due differences in the cross sections at the two shortest UV channels. The images for each spectral channel are not perfectly aligned due to the earth's rotation, geo-rectification, cloud noise, exposure time and spacecraft jitter. These issues introduce additional noise, for a multi-channel inversion. In this presentation, we describe some modifications to the algorithm that attempt to account for these issues. By comparing the plume areas, mass tonnage and peak SO2 values from other low earth observing satellites, it is shown that the algorithm significantly improves the identification of the plume, while eliminating false positives.
NASA Astrophysics Data System (ADS)
Zhang, Siyuan; Li, Liang; Li, Ruizhe; Chen, Zhiqiang
2017-11-01
We present the design concept and initial simulations for a polychromatic full-field fan-beam x-ray fluorescence computed tomography (XFCT) device with pinhole collimators and linear-array photon counting detectors. The phantom is irradiated by a fan-beam polychromatic x-ray source filtered by copper. Fluorescent photons are stimulated and then collected by two linear-array photon counting detectors with pinhole collimators. The Compton scatter correction and the attenuation correction are applied in the data processing, and the maximum-likelihood expectation maximization algorithm is applied for the image reconstruction of XFCT. The physical modeling of the XFCT imaging system was described, and a set of rapid Monte Carlo simulations was carried out to examine the feasibility and sensitivity of the XFCT system. Different concentrations of gadolinium (Gd) and gold (Au) solutions were used as contrast agents in simulations. Results show that 0.04% of Gd and 0.065% of Au can be well reconstructed with the full scan time set at 6 min. Compared with using the XFCT system with a pencil-beam source or a single-pixel detector, using a full-field fan-beam XFCT device with linear-array detectors results in significant scanning time reduction and may satisfy requirements of rapid imaging, such as in vivo imaging experiments.
GAPD: a GPU-accelerated atom-based polychromatic diffraction simulation code.
E, J C; Wang, L; Chen, S; Zhang, Y Y; Luo, S N
2018-03-01
GAPD, a graphics-processing-unit (GPU)-accelerated atom-based polychromatic diffraction simulation code for direct, kinematics-based, simulations of X-ray/electron diffraction of large-scale atomic systems with mono-/polychromatic beams and arbitrary plane detector geometries, is presented. This code implements GPU parallel computation via both real- and reciprocal-space decompositions. With GAPD, direct simulations are performed of the reciprocal lattice node of ultralarge systems (∼5 billion atoms) and diffraction patterns of single-crystal and polycrystalline configurations with mono- and polychromatic X-ray beams (including synchrotron undulator sources), and validation, benchmark and application cases are presented.
The value of spectral imaging to reduce artefacts in the body after 125 I seed implantation.
Liu, Jingang; Wang, Wenjuan; Zhao, Xingsheng; Shen, Zhen; Shao, Weiguang; Wang, Xizhen; Li, Lixin; Wang, Bin
2016-10-01
To explore the value of gemstone spectral imaging (GSI) and metal artefact reduction sequence (MARs) to reduce the artefacts of metal seeds. Thirty-five patients with 125 I seed implantation in their abdomens underwent GSI CT. Six types of monochromatic images and the corresponding MARs images at 60-110 keV (interval of 10 keV) were reconstructed. The differences in the quality of the images of three imaging methods were subjectively assessed by three radiologists. Length of artefacts, the CT value and noise value of tissue adjacent to 125 I seeds, contrast-to-noise ratio (CNR), and artefact index (AI) were recorded. The differences in subjective scoring were statistically significant (t = 10.87, P < 0.001). Images at 70 keV showed the best CNR (0.84 ± 0.17) of tissues adjacent to 125 I seeds, and received the highest subjective score (2.82 ± 0.18). Images at 80 keV had the lowest AI (70.67 ± 19.17). Images at 110 keV had the shortest artefact lengths. High-density metal artefacts in the MARs spectral images were reduced. The length of metal artefacts in images at 110 keV was shorter than that of the polychromatic images and MARs spectral images (t = 3.35, 3.89, P < 0.05). The difference in CNR between MARs spectral images and polychromatic images, and images at 70 keV was statistically significant (t = 3.57, 4.16, P < 0.01). Gemstone spectral imaging technique can reduce metal artefacts of 125 I seeds effectively in CT images, and improve the quality of images, and improve the display of tissues adjacent to 125 I seeds after implantation. MARs technique cannot reduce the artefacts caused by radioactive seeds effectively. © 2016 The Royal Australian and New Zealand College of Radiologists.
Polychromatic spectral pattern analysis of ultra-weak photon emissions from a human body.
Kobayashi, Masaki; Iwasa, Torai; Tada, Mika
2016-06-01
Ultra-weak photon emission (UPE), often designated as biophoton emission, is generally observed in a wide range of living organisms, including human beings. This phenomenon is closely associated with reactive oxygen species (ROS) generated during normal metabolic processes and pathological states induced by oxidative stress. Application of UPE extracting the pathophysiological information has long been anticipated because of its potential non-invasiveness, facilitating its diagnostic use. Nevertheless, its weak intensity and UPE mechanism complexity hinder its use for practical applications. Spectroscopy is crucially important for UPE analysis. However, filter-type spectroscopy technique, used as a conventional method for UPE analysis, intrinsically limits its performance because of its monochromatic scheme. To overcome the shortcomings of conventional methods, the authors developed a polychromatic spectroscopy system for UPE spectral pattern analysis. It is based on a highly efficient lens systems and a transmission-type diffraction grating with a highly sensitive, cooled, charge-coupled-device (CCD) camera. Spectral pattern analysis of the human body was done for a fingertip using the developed system. The UPE spectrum covers the spectral range of 450-750nm, with a dominant emission region of 570-670nm. The primary peak is located in the 600-650nm region. Furthermore, application of UPE source exploration was demonstrated with the chemiluminescence spectrum of melanin and coexistence with oxidized linoleic acid. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Herman, J. R.; Boccara, M.; Albers, S. C.
2017-12-01
The Earth Polychromatic Imaging Camera (EPIC) onboard the DSCOVR satellite continuously views the sun-illuminated portion of the Earth with spectral coverage in the visible band, among others. Ideally, such a system would be able to provide a video with continuous coverage up to real time. However due to limits in onboard storage, bandwidth, and antenna coverage on the ground, we can receive at most 20 images a day, separated by at least one hour. Also, the processing time to generate the visible image out of the separate RGB channels delays public images delivery by a day or two. Finally, occasional remote tuning of instruments can cause several day periods where the imagery is completely missing. We are proposing a model-based method to fill these gaps and restore images lost in real-time processing. We are combining two sets of algorithms. The first, called Blueturn, interpolates successive images while projecting them on a 3-D model of the Earth, all this being done in real-time using the GPU. The second, called Simulated Weather Imagery (SWIM), makes EPIC-like images utilizing a ray-tracing model of scattering and absorption of sunlight by clouds, atmospheric gases, aerosols, and land surface. Clouds are obtained from 3-D gridded analyses and forecasts using weather modeling systems such as the Local Analysis and Prediction System (LAPS), and the Flow-following finite-volume Finite Icosahedral Model (FIM). SWIM uses EPIC images to validate its models. Typical model grid spacing is about 20km and is roughly commensurate with the EPIC imagery. Calculating one image per hour is enough for Blueturn to generate a smooth video. The synthetic images are designed to be visually realistic and aspire to be indistinguishable from the real ones. Resulting interframe transitions become seamless, and real-time delay is reduced to 1 hour. With Blueturn already available as a free online app, streaming EPIC images directly from NASA's public website, and with another SWIM server to ensure constant interval between key images, this work brings transcendance to EPIC's tribute. Enriched by two years of actual service in space, the most real holistic view of the Earth will be continued at a high degree of fidelity, regardless of EPIC limitations or interruptions.
NASA Astrophysics Data System (ADS)
Khlopenkov, K. V.; Duda, D. P.; Thieman, M. M.; Sun-Mack, S.; Su, W.; Minnis, P.; Bedka, K. M.
2017-12-01
The Deep Space Climate Observatory (DSCOVR) is designed to study the daytime Earth radiation budget by means of onboard Earth Polychromatic Imaging Camera (EPIC) and National Institute of Standards and Technology Advanced Radiometer (NISTAR). EPIC imager observes in several shortwave bands (317-780 nm), while NISTAR measures the top-of-atmosphere (TOA) whole-disk radiance in shortwave and total broadband windows. Calculation of albedo and outgoing longwave flux requires a high-resolution scene identification such as the radiance observations and cloud property retrievals from low earth orbit and geostationary satellite imagers. These properties have to be co-located with EPIC imager pixels to provide scene identification and to select anisotropic directional models, which are then used to adjust the NISTAR-measured radiance and subsequently obtain the global daytime shortwave and longwave fluxes. This work presents an algorithm for optimal merging of selected radiances and cloud properties derived from multiple satellite imagers to obtain seamless global hourly composites at 5-km resolution. The highest quality observation is selected by means of an aggregated rating which incorporates several factors such as the nearest time relative to EPIC observation, lowest viewing zenith angle, and others. This process provides a smoother transition and avoids abrupt changes in the merged composite data. Higher spatial accuracy in the composite product is achieved by using the inverse mapping with gradient search during reprojection and bicubic interpolation for pixel resampling. The composite data are subsequently remapped into the EPIC-view domain by convolving composite pixels with the EPIC point spread function (PSF) defined with a half-pixel accuracy. Within every EPIC footprint, the PSF-weighted average radiances and cloud properties are computed for each cloud phase and then stored within five data subsets (clear-sky, water cloud, ice cloud, total cloud, and no retrieval). Overall, the composite product has been generated for every EPIC observation from June 2015 to December 2016, typically 300-500 composites per month, which makes it useful for many climate applications.
Near-Field Diffraction Imaging from Multiple Detection Planes
NASA Astrophysics Data System (ADS)
Loetgering, L.; Golembusch, M.; Hammoud, R.; Wilhein, T.
2017-06-01
We present diffraction imaging results obtained from multiple near-field diffraction constraints. An iterative phase retrieval algorithm was implemented that uses data redundancy achieved by measuring near-field diffraction intensities at various sample-detector distances. The procedure allows for reconstructing the exit surface wave of a sample within a multiple constraint satisfaction framework neither making use of a priori knowledge as enforced in coherent diffraction imaging (CDI) nor exact scanning grid knowledge as required in ptychography. We also investigate the potential of the presented technique to deal with polychromatic radiation as important for potential application in diffraction imaging by means of tabletop EUV and X-ray sources.
Reconstruction algorithm for polychromatic CT imaging: application to beam hardening correction
NASA Technical Reports Server (NTRS)
Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S.
2000-01-01
This paper presents a new reconstruction algorithm for both single- and dual-energy computed tomography (CT) imaging. By incorporating the polychromatic characteristics of the X-ray beam into the reconstruction process, the algorithm is capable of eliminating beam hardening artifacts. The single energy version of the algorithm assumes that each voxel in the scan field can be expressed as a mixture of two known substances, for example, a mixture of trabecular bone and marrow, or a mixture of fat and flesh. These assumptions are easily satisfied in a quantitative computed tomography (QCT) setting. We have compared our algorithm to three commonly used single-energy correction techniques. Experimental results show that our algorithm is much more robust and accurate. We have also shown that QCT measurements obtained using our algorithm are five times more accurate than that from current QCT systems (using calibration). The dual-energy mode does not require any prior knowledge of the object in the scan field, and can be used to estimate the attenuation coefficient function of unknown materials. We have tested the dual-energy setup to obtain an accurate estimate for the attenuation coefficient function of K2 HPO4 solution.
Propagation of the Lissajous singularity dipole emergent from non-paraxial polychromatic beams
NASA Astrophysics Data System (ADS)
Haitao, Chen; Gao, Zenghui; Wang, Wanqing
2017-06-01
The propagation of the Lissajous singularity dipole (LSD) emergent from the non-paraxial polychromatic beams is studied. It is found that the handedness reversal of Lissajous singularities, the change in the shape of Lissajous figures, as well as the creation and annihilation of the LSD may take place by varying the propagation distance, off-axis parameter, wavelength, or amplitude factor. Comparing with the LSD emergent from paraxial polychromatic beams, the output field of non-paraxial polychromatic beams is more complicated, which results in some richer dynamic behaviors of Lissajous singularities, such as more Lissajous singularities and no vanishing of a single Lissajous singularity at the plane z>0.
NASA Technical Reports Server (NTRS)
Hochberg, Eric B. (Inventor); Baroth, Edmund C. (Inventor)
1994-01-01
An novel interferometric apparatus and method for measuring the topography of aspheric surfaces, without requiring any form of scanning or phase shifting. The apparatus and method of the present invention utilize a white-light interferometer, such as a white-light Twyman-Green interferometer, combined with a means for dispersing a polychromatic interference pattern, using a fiber-optic bundle and a disperser such as a prism for determining the monochromatic spectral intensities of the polychromatic interference pattern which intensities uniquely define the optical path differences or OPD between the surface under test and a reference surface such as a reference sphere. Consequently, the present invention comprises a snapshot approach to measuring aspheric surface topographies such as the human cornea, thereby obviating vibration sensitive scanning which would otherwise reduce the accuracy of the measurement. The invention utilizes a polychromatic interference pattern in the pupil image plane, which is dispersed on a point-wise basis, by using a special area-to-line fiber-optic manifold, onto a CCD or other type detector comprising a plurality of columns of pixels. Each such column is dedicated to a single point of the fringe pattern for enabling determination of the spectral content of the pattern. The auto-correlation of the dispersed spectrum of the fringe pattern is uniquely characteristic of a particular optical path difference between the surface under test and a reference surface.
NASA Astrophysics Data System (ADS)
Watanabe, Manabu; Sato, Eiichi; Abderyim, Purkhet; Abudurexiti, Abulajiang; Hagiwara, Osahiko; Matsukiyo, Hiroshi; Osawa, Akihiro; Enomoto, Toshiyuki; Nagao, Jiro; Sato, Shigehiro; Ogawa, Akira; Onagawa, Jun
2011-05-01
Energy-discrimination X-ray camera is useful to perform monochromatic radiography using polychromatic X-rays. This X-ray camera was developed to carry out K-edge radiography using cerium and gadolinium-based contrast media. In this camera, objects are irradiated by a cone beam from a tungsten-target X-ray generator, and penetrating X-ray photons are detected by a cadmium-telluride detector with amplifiers. Both optimal photon-energy level and energy width are selected using a multichannel analyzer, and the photon number is counted by a counter card. Radiography was performed by the detector scanning using an x- y stage driven by a two-stage controller, and radiograms were shown on a personal computer monitor. In radiography, tube voltage and current were 90 kV and 5.8 μA, respectively, and the X-ray intensity was 0.61 μGy/s at 1.0 m from the X-ray source. The K-edge energies of cerium and gadolinium are 40.3 and 50.3 keV, respectively, and 10 keV-width enhanced K-edge radiography was performed using X-ray photons with energies just beyond K-edge energies of cerium and gadolinium. Thus, cerium K-edge radiography was carried out using X-ray photons with an energy range from 40.3 to 50. 3 keV, and gadolinium K-edge radiography was accomplished utilizing photon energies ranging from 50.3 to 60.3 keV.
NASA Astrophysics Data System (ADS)
Herman, Jay; Huang, Liang; McPeters, Richard; Ziemke, Jerry; Cede, Alexander; Blank, Karin
2018-01-01
EPIC (Earth Polychromatic Imaging Camera) on board the DSCOVR (Deep Space Climate Observatory) spacecraft is the first earth science instrument located near the earth-sun gravitational plus centrifugal force balance point, Lagrange 1. EPIC measures earth-reflected radiances in 10 wavelength channels ranging from 317.5 to 779.5 nm. Of these channels, four are in the UV range 317.5, 325, 340, and 388 nm, which are used to retrieve O3, 388 nm scene reflectivity (LER: Lambert equivalent reflectivity), SO2, and aerosol properties. These new synoptic quantities are retrieved for the entire sunlit globe from sunrise to sunset multiple times per day as the earth rotates in EPIC's field of view. Retrieved ozone amounts agree with ground-based measurements and satellite data to within 3 %. The ozone amounts and LER are combined to derive the erythemal irradiance for the earth's entire sunlit surface at a nadir resolution of 18 × 18 km2 using a computationally efficient approximation to a radiative transfer calculation of irradiance. The results show very high summertime values of the UV index (UVI) in the Andes and Himalayas (greater than 18), and high values of UVI near the Equator at equinox.
A Consistent EPIC Visible Channel Calibration Using VIIRS and MODIS as a Reference.
NASA Astrophysics Data System (ADS)
Haney, C.; Doelling, D. R.; Minnis, P.; Bhatt, R.; Scarino, B. R.; Gopalan, A.
2017-12-01
The Earth Polychromatic Imaging Camera (EPIC) aboard the Deep Space Climate Observatory (DSCOVR) satellite constantly images the sunlit disk of Earth from the Lagrange-1 (L1) point in 10 spectral channels spanning the UV, VIS, and NIR spectrums. Recently, the DSCOVR EPIC team has publicly released version 2 dataset, which has implemented improved navigation, stray-light correction, and flat-fielding of the CCD array. The EPIC 2-year data record must be well-calibrated for consistent cloud, aerosol, trace gas, land use and other retrievals. Because EPIC lacks onboard calibrators, the observations made by EPIC channels must be calibrated vicariously using the coincident measurements from radiometrically stable instruments that have onboard calibration systems. MODIS and VIIRS are best-suited instruments for this task as they contain similar spectral bands that are well-calibrated onboard using solar diffusers and lunar tracking. We have previously calibrated the EPIC version 1 dataset by using EPIC and VIIRS angularly matched radiance pairs over both all-sky ocean and deep convective clouds (DCC). We noted that the EPIC image required navigations adjustments, and that the EPIC stray-light correction provided an offset term closer to zero based on the linear regression of the EPIC and VIIRS ray-matched radiance pairs. We will evaluate the EPIC version 2 navigation and stray-light improvements using the same techniques. In addition, we will monitor the EPIC channel calibration over the two years for any temporal degradation or anomalous behavior. These two calibration methods will be further validated using desert and DCC invariant Earth targets. The radiometric characterization of the selected invariant targets is performed using multiple years of MODIS and VIIRS measurements. Results of these studies will be shown at the conference.
A Consistent EPIC Visible Channel Calibration using VIIRS and MODIS as a Reference
NASA Technical Reports Server (NTRS)
Haney, C. O.; Doelling, D. R.; Minnis, P.; Bhatt, R.; Scarino, B. R.; Gopalan, A.
2017-01-01
The Earth Polychromatic Imaging Camera (EPIC) aboard the Deep Space Climate Observatory (DSCOVR) satellite constantly images the sunlit disk of Earth from the Lagrange-1 (L1) point in 10 spectral channels spanning the UV, VIS, and NIR spectrums. Recently, the DSCOVR EPIC team has publicly released version 2 dataset, which has implemented improved navigation, stray-light correction, and flat-fielding of the CCD array. The EPIC 2-year data record must be well-calibrated for consistent cloud, aerosol, trace gas, land use and other retrievals. Because EPIC lacks onboard calibrators, the observations made by EPIC channels must be calibrated vicariously using the coincident measurements from radiometrically stable instruments that have onboard calibration systems. MODIS and VIIRS are best-suited instruments for this task as they contain similar spectral bands that are well-calibrated onboard using solar diffusers and lunar tracking. We have previously calibrated the EPIC version 1 dataset by using EPIC and VIIRS angularly matched radiance pairs over both all-sky ocean and deep convective clouds (DCC). We noted that the EPIC image required navigations adjustments, and that the EPIC stray-light correction provided an offset term closer to zero based on the linear regression of the EPIC and VIIRS ray-matched radiance pairs. We will evaluate the EPIC version 2 navigation and stray-light improvements using the same techniques. In addition, we will monitor the EPIC channel calibration over the two years for any temporal degradation or anomalous behavior. These two calibration methods will be further validated using desert and DCC invariant Earth targets. The radiometric characterization of the selected invariant targets is performed using multiple years of MODIS and VIIRS measurements. Results of these studies will be shown at the conference.
Heidari, A H; Shabestani Monfared, A; Mozdarani, H; Mahmoudzadeh, A; Razzaghdoust, A
2017-12-01
We intend to study the inhibitory effect of sulfur compound in Ramsar hot spring mineral on tumor-genesis ability of high natural background radiation. The radioprotective effect of sulfur compounds was previously shown on radiation-induced chromosomal aberration, micronuclei in mouse bone marrow cells and human peripheral lymphocyte. Ramsar is known for having the highest level of natural background radiation on Earth. This study was performed to show the radioprotective effect of sulfur-containing Ramsar mineral water on mouse bone marrow cells. Mice were fed three types of water (drinking water, Ramsar radioactive water containing sulfur and Ramsar radioactive water whose sulfur was removed). Ten days after feeding, mice were irradiated by gamma rays (0, 2 and 4 Gy). 48 and 72 hours after irradiating, mice were killed and femurs were removed. Frequency of micronuclei was determined in bone marrow erythrocytes. A significant reduction was shown in the rate of micronuclei polychromatic erythrocyte in sulfur-containing hot spring water compared to sulfur-free water in hot spring mineral water. Gamma irradiation induced significant increases in micronuclei polychromatic erythrocyte (MNPCE) and decreases in polychromatic erythrocyte/polychromatic erythrocyte + normochromatic erythrocyte ratio (PCEs/PCEs+NCEs) (P < 0.001) in sulfur-containing hot spring water compared to sulfur-free hot spring mineral water. Also, apparently there was a significant difference between drinking water and sulfur-containing hot spring water in micronuclei polychromatic erythrocyte and polychromatic erythrocyte/polychromatic erythrocyte+ normochromatic erythrocyte ratio. The results indicate that sulfur-containing mineral water could result in a significant reduction in radiation-induced micronuclei representing the radioprotective effect of sulfur compounds.
Polychromatic microdiffraction characterization of defect gradients in severely deformed materials.
Barabash, Rozaliya I; Ice, Gene E; Liu, Wenjun; Barabash, Oleg M
2009-01-01
This paper analyzes local lattice rotations introduced in severely deformed polycrystalline titanium by friction stir welding. Nondestructive three-dimensional (3D) spatially resolved polychromatic X-ray microdiffraction, is used to resolve the local crystal structure of the restructured surface from neighboring local structures in the sample material. The measurements reveal strong gradients of strain and geometrically necessary dislocations near the surface and illustrate the potential of polychromatic microdiffraction for the study of deformation in complex materials systems.
Dual energy CT: How well can pseudo-monochromatic imaging reduce metal artifacts?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuchenbecker, Stefan, E-mail: stefan.kuchenbecker@dkfz.de; Faby, Sebastian; Sawall, Stefan
2015-02-15
Purpose: Dual Energy CT (DECT) provides so-called monoenergetic images based on a linear combination of the original polychromatic images. At certain patient-specific energy levels, corresponding to certain patient- and slice-dependent linear combination weights, e.g., E = 160 keV corresponds to α = 1.57, a significant reduction of metal artifacts may be observed. The authors aimed at analyzing the method for its artifact reduction capabilities to identify its limitations. The results are compared with raw data-based processing. Methods: Clinical DECT uses a simplified version of monochromatic imaging by linearly combining the low and the high kV images and by assigning an energymore » to that linear combination. Those pseudo-monochromatic images can be used by radiologists to obtain images with reduced metal artifacts. The authors analyzed the underlying physics and carried out a series expansion of the polychromatic attenuation equations. The resulting nonlinear terms are responsible for the artifacts, but they are not linearly related between the low and the high kV scan: A linear combination of both images cannot eliminate the nonlinearities, it can only reduce their impact. Scattered radiation yields additional noncanceling nonlinearities. This method is compared to raw data-based artifact correction methods. To quantify the artifact reduction potential of pseudo-monochromatic images, they simulated the FORBILD abdomen phantom with metal implants, and they assessed patient data sets of a clinical dual source CT system (100, 140 kV Sn) containing artifacts induced by a highly concentrated contrast agent bolus and by metal. In each case, they manually selected an optimal α and compared it to a raw data-based material decomposition in case of simulation, to raw data-based material decomposition of inconsistent rays in case of the patient data set containing contrast agent, and to the frequency split normalized metal artifact reduction in case of the metal implant. For each case, the contrast-to-noise ratio (CNR) was assessed. Results: In the simulation, the pseudo-monochromatic images yielded acceptable artifact reduction results. However, the CNR in the artifact-reduced images was more than 60% lower than in the original polychromatic images. In contrast, the raw data-based material decomposition did not significantly reduce the CNR in the virtual monochromatic images. Regarding the patient data with beam hardening artifacts and with metal artifacts from small implants the pseudo-monochromatic method was able to reduce the artifacts, again with the downside of a significant CNR reduction. More intense metal artifacts, e.g., as those caused by an artificial hip joint, could not be suppressed. Conclusions: Pseudo-monochromatic imaging is able to reduce beam hardening, scatter, and metal artifacts in some cases but it cannot remove them. In all cases, the CNR is significantly reduced, thereby rendering the method questionable, unless special post processing algorithms are implemented to restore the high CNR from the original images (e.g., by using a frequency split technique). Raw data-based dual energy decomposition methods should be preferred, in particular, because the CNR penalty is almost negligible.« less
Haneder, Stefan; Siedek, Florian; Doerner, Jonas; Pahn, Gregor; Grosse Hokamp, Nils; Maintz, David; Wybranski, Christian
2018-01-01
Background A novel, multi-energy, dual-layer spectral detector computed tomography (SDCT) is commercially available now with the vendor's claim that it yields the same or better quality of polychromatic, conventional CT images like modern single-energy CT scanners without any radiation dose penalty. Purpose To intra-individually compare the quality of conventional polychromatic CT images acquired with a dual-layer spectral detector (SDCT) and the latest generation 128-row single-energy-detector (CT128) from the same manufacturer. Material and Methods Fifty patients underwent portal-venous phase, thoracic-abdominal CT scans with the SDCT and prior CT128 imaging. The SDCT scanning protocol was adapted to yield a similar estimated dose length product (DLP) as the CT128. Patient dose optimization by automatic tube current modulation and CT image reconstruction with a state-of-the-art iterative algorithm were identical on both scanners. CT image contrast-to-noise ratio (CNR) was compared between the SDCT and CT128 in different anatomic structures. Image quality and noise were assessed independently by two readers with 5-point-Likert-scales. Volume CT dose index (CTDI vol ), and DLP were recorded and normalized to 68 cm acquisition length (DLP 68 ). Results The SDCT yielded higher mean CNR values of 30.0% ± 2.0% (26.4-32.5%) in all anatomic structures ( P < 0.001) and excellent scores for qualitative parameters surpassing the CT128 (all P < 0.0001) with substantial inter-rater agreement (κ ≥ 0.801). Despite adapted scan protocols the SDCT yielded lower values for CTDI vol (-10.1 ± 12.8%), DLP (-13.1 ± 13.9%), and DLP 68 (-15.3 ± 16.9%) than the CT128 (all P < 0.0001). Conclusion The SDCT scanner yielded better CT image quality compared to the CT128 and lower radiation dose parameters.
Hyperchromatic laser scanning cytometry
NASA Astrophysics Data System (ADS)
Tárnok, Attila; Mittag, Anja
2007-02-01
In the emerging fields of high-content and high-throughput single cell analysis for Systems Biology and Cytomics multi- and polychromatic analysis of biological specimens has become increasingly important. Combining different technologies and staining methods polychromatic analysis (i.e. using 8 or more fluorescent colors at a time) can be pushed forward to measure anything stainable in a cell, an approach termed hyperchromatic cytometry. For cytometric cell analysis microscope based Slide Based Cytometry (SBC) technologies are ideal as, unlike flow cytometry, they are non-consumptive, i.e. the analyzed sample is fixed on the slide. Based on the feature of relocation identical cells can be subsequently reanalyzed. In this manner data on the single cell level after manipulation steps can be collected. In this overview various components for hyperchromatic cytometry are demonstrated for a SBC instrument, the Laser Scanning Cytometer (Compucyte Corp., Cambridge, MA): 1) polychromatic cytometry, 2) iterative restaining (using the same fluorochrome for restaining and subsequent reanalysis), 3) differential photobleaching (differentiating fluorochromes by their different photostability), 4) photoactivation (activating fluorescent nanoparticles or photocaged dyes), and 5) photodestruction (destruction of FRET dyes). With the intelligent combination of several of these techniques hyperchromatic cytometry allows to quantify and analyze virtually all components of relevance on the identical cell. The combination of high-throughput and high-content SBC analysis with high-resolution confocal imaging allows clear verification of phenotypically distinct subpopulations of cells with structural information. The information gained per specimen is only limited by the number of available antibodies and by sterical hindrance.
Estimation of leaf area index and its sunlit portion from DSCOVR EPIC data: Theoretical basis
Yang, Bin; Knyazikhin, Yuri; Mõttus, Matti; Rautiainen, Miina; Stenberg, Pauline; Yan, Lei; Chen, Chi; Yan, Kai; Choi, Sungho; Park, Taejin; Myneni, Ranga B.
2017-01-01
This paper presents the theoretical basis of the algorithm designed for the generation of leaf area index and diurnal course of its sunlit portion from NASA’s Earth Polychromatic Imaging Camera (EPIC) onboard NOAA’s Deep Space Climate Observatory (DSCOVR). The Look-up-Table (LUT) approach implemented in the MODIS operational LAI/FPAR algorithm is adopted. The LUT, which is the heart of the approach, has been significantly modified. First, its parameterization incorporates the canopy hot spot phenomenon and recent advances in the theory of canopy spectral invariants. This allows more accurate decoupling of the structural and radiometric components of the measured Bidirectional Reflectance Factor (BRF), improves scaling properties of the LUT and consequently simplifies adjustments of the algorithm for data spatial resolution and spectral band compositions. Second, the stochastic radiative transfer equations are used to generate the LUT for all biome types. The equations naturally account for radiative effects of the three-dimensional canopy structure on the BRF and allow for an accurate discrimination between sunlit and shaded leaf areas. Third, the LUT entries are measurable, i.e., they can be independently derived from both below canopy measurements of the transmitted and above canopy measurements of reflected radiation fields. This feature makes possible direct validation of the LUT, facilitates identification of its deficiencies and development of refinements. Analyses of field data on canopy structure and leaf optics collected at 18 sites in the Hyytiälä forest in southern boreal zone in Finland and hyperspectral images acquired by the EO-1 Hyperion sensor support the theoretical basis. PMID:28867834
Determine Daytime Earth's Radiation Budget from DSCOVR
NASA Astrophysics Data System (ADS)
Su, W.; Thieman, M. M.; Duda, D. P.; Khlopenkov, K. V.; Liang, L.; Sun-Mack, S.; Minnis, P.; SUN, M.
2017-12-01
The Deep Space Climate Observatory (DSCOVR) platform provides a unique perspective for remote sensing of the Earth. With the National Institute of Standards and Technology Advanced Radiometer (NISTAR) and the Earth Polychromatic Imaging Camera (EPIC) onboard, it provides full-disk measurements of the broadband shortwave and total radiances reaching the L1 position. Because the satellite orbits around the L1 spot, it continuously observes a nearly full Earth, providing the potential to determine the daytime radiation budget of the globe at the top of the atmosphere. The NISTAR is a single-pixel instrument that measures the broadband radiance from the entire globe, while EPIC is a spectral imager with channels in the UV and visible ranges. The Level 1 NISTAR shortwave radiances are filtered radiances. To determine the daytime TOA shortwave and longwave radiative fluxes, the NISTAR measured shortwave radiances must be unfiltered first. We will describe the algorithm used to un-filter the shortwave radiances. These unfiltered NISTAR radiances are then converted to the full disk shortwave and daytime longwave fluxes, by accounting for the anisotropic characteristics of the Earth-reflected and emitted radiances. These anisotropy factors are determined by using the scene identifications determined from multiple low Earth orbit and geostationary satellites matched into the EPIC field of view. Time series of daytime radiation budget determined from NISTAR will be presented, and methodology of estimating the fluxes from the small unlit crescent of the Earth that comprises part of the field of view will also be described. The daytime shortwave and longwave fluxes from NISTAR will be compared with CERES dataset.
X-ray microlaminography with polycapillary optics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dabrowski, K. M.; Dul, D. T.; Wrobel, A.
2013-06-03
We demonstrate layer-by-layer x-ray microimaging using polycapillary optics. The depth resolution is achieved without sample or source rotation and in a way similar to classical tomography or laminography. The method takes advantage from large angular apertures of polycapillary optics and from their specific microstructure, which is treated as a coded aperture. The imaging geometry is compatible with polychromatic x-ray sources and with scanning and confocal x-ray fluorescence setups.
Ehn, S; Sellerer, T; Mechlem, K; Fehringer, A; Epple, M; Herzen, J; Pfeiffer, F; Noël, P B
2017-01-07
Following the development of energy-sensitive photon-counting detectors using high-Z sensor materials, application of spectral x-ray imaging methods to clinical practice comes into reach. However, these detectors require extensive calibration efforts in order to perform spectral imaging tasks like basis material decomposition. In this paper, we report a novel approach to basis material decomposition that utilizes a semi-empirical estimator for the number of photons registered in distinct energy bins in the presence of beam-hardening effects which can be termed as a polychromatic Beer-Lambert model. A maximum-likelihood estimator is applied to the model in order to obtain estimates of the underlying sample composition. Using a Monte-Carlo simulation of a typical clinical CT acquisition, the performance of the proposed estimator was evaluated. The estimator is shown to be unbiased and efficient according to the Cramér-Rao lower bound. In particular, the estimator is capable of operating with a minimum number of calibration measurements. Good results were obtained after calibration using less than 10 samples of known composition in a two-material attenuation basis. This opens up the possibility for fast re-calibration in the clinical routine which is considered an advantage of the proposed method over other implementations reported in the literature.
NASA Astrophysics Data System (ADS)
Ehn, S.; Sellerer, T.; Mechlem, K.; Fehringer, A.; Epple, M.; Herzen, J.; Pfeiffer, F.; Noël, P. B.
2017-01-01
Following the development of energy-sensitive photon-counting detectors using high-Z sensor materials, application of spectral x-ray imaging methods to clinical practice comes into reach. However, these detectors require extensive calibration efforts in order to perform spectral imaging tasks like basis material decomposition. In this paper, we report a novel approach to basis material decomposition that utilizes a semi-empirical estimator for the number of photons registered in distinct energy bins in the presence of beam-hardening effects which can be termed as a polychromatic Beer-Lambert model. A maximum-likelihood estimator is applied to the model in order to obtain estimates of the underlying sample composition. Using a Monte-Carlo simulation of a typical clinical CT acquisition, the performance of the proposed estimator was evaluated. The estimator is shown to be unbiased and efficient according to the Cramér-Rao lower bound. In particular, the estimator is capable of operating with a minimum number of calibration measurements. Good results were obtained after calibration using less than 10 samples of known composition in a two-material attenuation basis. This opens up the possibility for fast re-calibration in the clinical routine which is considered an advantage of the proposed method over other implementations reported in the literature.
Lytro camera technology: theory, algorithms, performance analysis
NASA Astrophysics Data System (ADS)
Georgiev, Todor; Yu, Zhan; Lumsdaine, Andrew; Goma, Sergio
2013-03-01
The Lytro camera is the first implementation of a plenoptic camera for the consumer market. We consider it a successful example of the miniaturization aided by the increase in computational power characterizing mobile computational photography. The plenoptic camera approach to radiance capture uses a microlens array as an imaging system focused on the focal plane of the main camera lens. This paper analyzes the performance of Lytro camera from a system level perspective, considering the Lytro camera as a black box, and uses our interpretation of Lytro image data saved by the camera. We present our findings based on our interpretation of Lytro camera file structure, image calibration and image rendering; in this context, artifacts and final image resolution are discussed.
Image quality prediction - An aid to the Viking lander imaging investigation on Mars
NASA Technical Reports Server (NTRS)
Huck, F. O.; Wall, S. D.
1976-01-01
Image quality criteria and image quality predictions are formulated for the multispectral panoramic cameras carried by the Viking Mars landers. Image quality predictions are based on expected camera performance, Mars surface radiance, and lighting and viewing geometry (fields of view, Mars lander shadows, solar day-night alternation), and are needed in diagnosis of camera performance, in arriving at a preflight imaging strategy, and revision of that strategy should the need arise. Landing considerations, camera control instructions, camera control logic, aspects of the imaging process (spectral response, spatial response, sensitivity), and likely problems are discussed. Major concerns include: degradation of camera response by isotope radiation, uncertainties in lighting and viewing geometry and in landing site local topography, contamination of camera window by dust abrasion, and initial errors in assigning camera dynamic ranges (gains and offsets).
NASA Astrophysics Data System (ADS)
Caudevilla, Oriol; Zhou, Wei; Stoupin, Stanislav; Verman, Boris; Brankov, J. G.
2016-09-01
Analyzer-based X-ray phase contrast imaging (ABI) belongs to a broader family of phase-contrast (PC) X-ray imaging modalities. Unlike the conventional X-ray radiography, which measures only X-ray absorption, in PC imaging one can also measures the X-rays deflection induced by the object refractive properties. It has been shown that refraction imaging provides better contrast when imaging the soft tissue, which is of great interest in medical imaging applications. In this paper, we introduce a simulation tool specifically designed to simulate the analyzer-based X-ray phase contrast imaging system with a conventional polychromatic X-ray source. By utilizing ray tracing and basic physical principles of diffraction theory our simulation tool can predicting the X-ray beam profile shape, the energy content, the total throughput (photon count) at the detector. In addition we can evaluate imaging system point-spread function for various system configurations.
Motion Estimation Utilizing Range Detection-Enhanced Visual Odometry
NASA Technical Reports Server (NTRS)
Morris, Daniel Dale (Inventor); Chang, Hong (Inventor); Friend, Paul Russell (Inventor); Chen, Qi (Inventor); Graf, Jodi Seaborn (Inventor)
2016-01-01
A motion determination system is disclosed. The system may receive a first and a second camera image from a camera, the first camera image received earlier than the second camera image. The system may identify corresponding features in the first and second camera images. The system may receive range data comprising at least one of a first and a second range data from a range detection unit, corresponding to the first and second camera images, respectively. The system may determine first positions and the second positions of the corresponding features using the first camera image and the second camera image. The first positions or the second positions may be determined by also using the range data. The system may determine a change in position of the machine based on differences between the first and second positions, and a VO-based velocity of the machine based on the determined change in position.
Application of Sensor Fusion to Improve Uav Image Classification
NASA Astrophysics Data System (ADS)
Jabari, S.; Fathollahi, F.; Zhang, Y.
2017-08-01
Image classification is one of the most important tasks of remote sensing projects including the ones that are based on using UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality which results in increasing the accuracy of image classification. Here, we tested two sensor fusion configurations by using a Panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to the ones acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies compared to the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on-board in the UAV missions and performing image fusion can help achieving higher quality images and accordingly higher accuracy classification results.
Segmentation-free empirical beam hardening correction for CT.
Schüller, Sören; Sawall, Stefan; Stannigel, Kai; Hülsbusch, Markus; Ulrici, Johannes; Hell, Erich; Kachelrieß, Marc
2015-02-01
The polychromatic nature of the x-ray beams and their effects on the reconstructed image are often disregarded during standard image reconstruction. This leads to cupping and beam hardening artifacts inside the reconstructed volume. To correct for a general cupping, methods like water precorrection exist. They correct the hardening of the spectrum during the penetration of the measured object only for the major tissue class. In contrast, more complex artifacts like streaks between dense objects need other techniques of correction. If using only the information of one single energy scan, there are two types of corrections. The first one is a physical approach. Thereby, artifacts can be reproduced and corrected within the original reconstruction by using assumptions in a polychromatic forward projector. These assumptions could be the used spectrum, the detector response, the physical attenuation and scatter properties of the intersected materials. A second method is an empirical approach, which does not rely on much prior knowledge. This so-called empirical beam hardening correction (EBHC) and the previously mentioned physical-based technique are both relying on a segmentation of the present tissues inside the patient. The difficulty thereby is that beam hardening by itself, scatter, and other effects, which diminish the image quality also disturb the correct tissue classification and thereby reduce the accuracy of the two known classes of correction techniques. The herein proposed method works similar to the empirical beam hardening correction but does not require a tissue segmentation and therefore shows improvements on image data, which are highly degraded by noise and artifacts. Furthermore, the new algorithm is designed in a way that no additional calibration or parameter fitting is needed. To overcome the segmentation of tissues, the authors propose a histogram deformation of their primary reconstructed CT image. This step is essential for the proposed algorithm to be segmentation-free (sf). This deformation leads to a nonlinear accentuation of higher CT-values. The original volume and the gray value deformed volume are monochromatically forward projected. The two projection sets are then monomially combined and reconstructed to generate sets of basis volumes which are used for correction. This is done by maximization of the image flatness due to adding additionally a weighted sum of these basis images. sfEBHC is evaluated on polychromatic simulations, phantom measurements, and patient data. The raw data sets were acquired by a dual source spiral CT scanner, a digital volume tomograph, and a dual source micro CT. Different phantom and patient data were used to illustrate the performance and wide range of usability of sfEBHC across different scanning scenarios. The artifact correction capabilities are compared to EBHC. All investigated cases show equal or improved image quality compared to the standard EBHC approach. The artifact correction is capable of correcting beam hardening artifacts for different scan parameters and scan scenarios. sfEBHC generates beam hardening-reduced images and is furthermore capable of dealing with images which are affected by high noise and strong artifacts. The algorithm can be used to recover structures which are hardly visible inside the beam hardening-affected regions.
Segmentation-free empirical beam hardening correction for CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schüller, Sören; Sawall, Stefan; Stannigel, Kai
2015-02-15
Purpose: The polychromatic nature of the x-ray beams and their effects on the reconstructed image are often disregarded during standard image reconstruction. This leads to cupping and beam hardening artifacts inside the reconstructed volume. To correct for a general cupping, methods like water precorrection exist. They correct the hardening of the spectrum during the penetration of the measured object only for the major tissue class. In contrast, more complex artifacts like streaks between dense objects need other techniques of correction. If using only the information of one single energy scan, there are two types of corrections. The first one ismore » a physical approach. Thereby, artifacts can be reproduced and corrected within the original reconstruction by using assumptions in a polychromatic forward projector. These assumptions could be the used spectrum, the detector response, the physical attenuation and scatter properties of the intersected materials. A second method is an empirical approach, which does not rely on much prior knowledge. This so-called empirical beam hardening correction (EBHC) and the previously mentioned physical-based technique are both relying on a segmentation of the present tissues inside the patient. The difficulty thereby is that beam hardening by itself, scatter, and other effects, which diminish the image quality also disturb the correct tissue classification and thereby reduce the accuracy of the two known classes of correction techniques. The herein proposed method works similar to the empirical beam hardening correction but does not require a tissue segmentation and therefore shows improvements on image data, which are highly degraded by noise and artifacts. Furthermore, the new algorithm is designed in a way that no additional calibration or parameter fitting is needed. Methods: To overcome the segmentation of tissues, the authors propose a histogram deformation of their primary reconstructed CT image. This step is essential for the proposed algorithm to be segmentation-free (sf). This deformation leads to a nonlinear accentuation of higher CT-values. The original volume and the gray value deformed volume are monochromatically forward projected. The two projection sets are then monomially combined and reconstructed to generate sets of basis volumes which are used for correction. This is done by maximization of the image flatness due to adding additionally a weighted sum of these basis images. sfEBHC is evaluated on polychromatic simulations, phantom measurements, and patient data. The raw data sets were acquired by a dual source spiral CT scanner, a digital volume tomograph, and a dual source micro CT. Different phantom and patient data were used to illustrate the performance and wide range of usability of sfEBHC across different scanning scenarios. The artifact correction capabilities are compared to EBHC. Results: All investigated cases show equal or improved image quality compared to the standard EBHC approach. The artifact correction is capable of correcting beam hardening artifacts for different scan parameters and scan scenarios. Conclusions: sfEBHC generates beam hardening-reduced images and is furthermore capable of dealing with images which are affected by high noise and strong artifacts. The algorithm can be used to recover structures which are hardly visible inside the beam hardening-affected regions.« less
MicroCT with energy-resolved photon-counting detectors
Wang, X; Meier, D; Mikkelsen, S; Maehlum, G E; Wagenaar, D J; Tsui, BMW; Patt, B E; Frey, E C
2011-01-01
The goal of this paper was to investigate the benefits that could be realistically achieved on a microCT imaging system with an energy-resolved photon-counting x-ray detector. To this end, we built and evaluated a prototype microCT system based on such a detector. The detector is based on cadmium telluride (CdTe) radiation sensors and application-specific integrated circuit (ASIC) readouts. Each detector pixel can simultaneously count x-ray photons above six energy thresholds, providing the capability for energy-selective x-ray imaging. We tested the spectroscopic performance of the system using polychromatic x-ray radiation and various filtering materials with Kabsorption edges. Tomographic images were then acquired of a cylindrical PMMA phantom containing holes filled with various materials. Results were also compared with those acquired using an intensity-integrating x-ray detector and single-energy (i.e. non-energy-selective) CT. This paper describes the functionality and performance of the system, and presents preliminary spectroscopic and tomographic results. The spectroscopic experiments showed that the energy-resolved photon-counting detector was capable of measuring energy spectra from polychromatic sources like a standard x-ray tube, and resolving absorption edges present in the energy range used for imaging. However, the spectral quality was degraded by spectral distortions resulting from degrading factors, including finite energy resolution and charge sharing. We developed a simple charge-sharing model to reproduce these distortions. The tomographic experiments showed that the availability of multiple energy thresholds in the photon-counting detector allowed us to simultaneously measure target-to-background contrasts in different energy ranges. Compared with single-energy CT with an integrating detector, this feature was especially useful to improve differentiation of materials with different attenuation coefficient energy dependences. PMID:21464527
MicroCT with energy-resolved photon-counting detectors.
Wang, X; Meier, D; Mikkelsen, S; Maehlum, G E; Wagenaar, D J; Tsui, B M W; Patt, B E; Frey, E C
2011-05-07
The goal of this paper was to investigate the benefits that could be realistically achieved on a microCT imaging system with an energy-resolved photon-counting x-ray detector. To this end, we built and evaluated a prototype microCT system based on such a detector. The detector is based on cadmium telluride (CdTe) radiation sensors and application-specific integrated circuit (ASIC) readouts. Each detector pixel can simultaneously count x-ray photons above six energy thresholds, providing the capability for energy-selective x-ray imaging. We tested the spectroscopic performance of the system using polychromatic x-ray radiation and various filtering materials with K-absorption edges. Tomographic images were then acquired of a cylindrical PMMA phantom containing holes filled with various materials. Results were also compared with those acquired using an intensity-integrating x-ray detector and single-energy (i.e. non-energy-selective) CT. This paper describes the functionality and performance of the system, and presents preliminary spectroscopic and tomographic results. The spectroscopic experiments showed that the energy-resolved photon-counting detector was capable of measuring energy spectra from polychromatic sources like a standard x-ray tube, and resolving absorption edges present in the energy range used for imaging. However, the spectral quality was degraded by spectral distortions resulting from degrading factors, including finite energy resolution and charge sharing. We developed a simple charge-sharing model to reproduce these distortions. The tomographic experiments showed that the availability of multiple energy thresholds in the photon-counting detector allowed us to simultaneously measure target-to-background contrasts in different energy ranges. Compared with single-energy CT with an integrating detector, this feature was especially useful to improve differentiation of materials with different attenuation coefficient energy dependences.
A unified material decomposition framework for quantitative dual- and triple-energy CT imaging.
Zhao, Wei; Vernekohl, Don; Han, Fei; Han, Bin; Peng, Hao; Yang, Yong; Xing, Lei; Min, James K
2018-04-21
Many clinical applications depend critically on the accurate differentiation and classification of different types of materials in patient anatomy. This work introduces a unified framework for accurate nonlinear material decomposition and applies it, for the first time, in the concept of triple-energy CT (TECT) for enhanced material differentiation and classification as well as dual-energy CT (DECT). We express polychromatic projection into a linear combination of line integrals of material-selective images. The material decomposition is then turned into a problem of minimizing the least-squares difference between measured and estimated CT projections. The optimization problem is solved iteratively by updating the line integrals. The proposed technique is evaluated by using several numerical phantom measurements under different scanning protocols. The triple-energy data acquisition is implemented at the scales of micro-CT and clinical CT imaging with commercial "TwinBeam" dual-source DECT configuration and a fast kV switching DECT configuration. Material decomposition and quantitative comparison with a photon counting detector and with the presence of a bow-tie filter are also performed. The proposed method provides quantitative material- and energy-selective images examining realistic configurations for both DECT and TECT measurements. Compared to the polychromatic kV CT images, virtual monochromatic images show superior image quality. For the mouse phantom, quantitative measurements show that the differences between gadodiamide and iodine concentrations obtained using TECT and idealized photon counting CT (PCCT) are smaller than 8 and 1 mg/mL, respectively. TECT outperforms DECT for multicontrast CT imaging and is robust with respect to spectrum estimation. For the thorax phantom, the differences between the concentrations of the contrast map and the corresponding true reference values are smaller than 7 mg/mL for all of the realistic configurations. A unified framework for both DECT and TECT imaging has been established for the accurate extraction of material compositions using currently available commercial DECT configurations. The novel technique is promising to provide an urgently needed solution for several CT-based diagnostic and therapy applications, especially for the diagnosis of cardiovascular and abdominal diseases where multicontrast imaging is involved. © 2018 American Association of Physicists in Medicine.
Detail view of the exposed polychromatic aggregate ceiling designed and ...
Detail view of the exposed polychromatic aggregate ceiling designed and cast by John Joseph Earley for the vehicular entrance portals to the courtyard - United States Department of Justice, Constitution Avenue between Ninth & Tenth Streets, Northwest, Washington, District of Columbia, DC
Image Sensors Enhance Camera Technologies
NASA Technical Reports Server (NTRS)
2010-01-01
In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum s team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet feature Aptina s sensor technology.
A comparison of select image-compression algorithms for an electronic still camera
NASA Technical Reports Server (NTRS)
Nerheim, Rosalee
1989-01-01
This effort is a study of image-compression algorithms for an electronic still camera. An electronic still camera can record and transmit high-quality images without the use of film, because images are stored digitally in computer memory. However, high-resolution images contain an enormous amount of information, and will strain the camera's data-storage system. Image compression will allow more images to be stored in the camera's memory. For the electronic still camera, a compression algorithm that produces a reconstructed image of high fidelity is most important. Efficiency of the algorithm is the second priority. High fidelity and efficiency are more important than a high compression ratio. Several algorithms were chosen for this study and judged on fidelity, efficiency and compression ratio. The transform method appears to be the best choice. At present, the method is compressing images to a ratio of 5.3:1 and producing high-fidelity reconstructed images.
Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras
NASA Astrophysics Data System (ADS)
Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro
2018-03-01
Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.
Megapixel mythology and photospace: estimating photospace for camera phones from large image sets
NASA Astrophysics Data System (ADS)
Hultgren, Bror O.; Hertel, Dirk W.
2008-01-01
It is a myth that more pixels alone result in better images. The marketing of camera phones in particular has focused on their pixel numbers. However, their performance varies considerably according to the conditions of image capture. Camera phones are often used in low-light situations where the lack of a flash and limited exposure time will produce underexposed, noisy and blurred images. Camera utilization can be quantitatively described by photospace distributions, a statistical description of the frequency of pictures taken at varying light levels and camera-subject distances. If the photospace distribution is known, the user-experienced distribution of quality can be determined either directly by direct measurement of subjective quality, or by photospace-weighting of objective attributes. The population of a photospace distribution requires examining large numbers of images taken under typical camera phone usage conditions. ImagePhi was developed as a user-friendly software tool to interactively estimate the primary photospace variables, subject illumination and subject distance, from individual images. Additionally, subjective evaluations of image quality and failure modes for low quality images can be entered into ImagePhi. ImagePhi has been applied to sets of images taken by typical users with a selection of popular camera phones varying in resolution. The estimated photospace distribution of camera phone usage has been correlated with the distributions of failure modes. The subjective and objective data show that photospace conditions have a much bigger impact on image quality of a camera phone than the pixel count of its imager. The 'megapixel myth' is thus seen to be less a myth than an ill framed conditional assertion, whose conditions are to a large extent specified by the camera's operational state in photospace.
Dark-field hyperspectral X-ray imaging
Egan, Christopher K.; Jacques, Simon D. M.; Connolley, Thomas; Wilson, Matthew D.; Veale, Matthew C.; Seller, Paul; Cernik, Robert J.
2014-01-01
In recent times, there has been a drive to develop non-destructive X-ray imaging techniques that provide chemical or physical insight. To date, these methods have generally been limited; either requiring raster scanning of pencil beams, using narrow bandwidth radiation and/or limited to small samples. We have developed a novel full-field radiographic imaging technique that enables the entire physio-chemical state of an object to be imaged in a single snapshot. The method is sensitive to emitted and scattered radiation, using a spectral imaging detector and polychromatic hard X-radiation, making it particularly useful for studying large dense samples for materials science and engineering applications. The method and its extension to three-dimensional imaging is validated with a series of test objects and demonstrated to directly image the crystallographic preferred orientation and formed precipitates across an aluminium alloy friction stir weld section. PMID:24808753
Cheng, Yufeng; Jin, Shuying; Wang, Mi; Zhu, Ying; Dong, Zhipeng
2017-06-20
The linear array push broom imaging mode is widely used for high resolution optical satellites (HROS). Using double-cameras attached by a high-rigidity support along with push broom imaging is one method to enlarge the field of view while ensuring high resolution. High accuracy image mosaicking is the key factor of the geometrical quality of complete stitched satellite imagery. This paper proposes a high accuracy image mosaicking approach based on the big virtual camera (BVC) in the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; then, each single image strip obtained by each TDI-CCD detector can be re-projected to the virtual detector of the big virtual camera coordinate system using forward-projection and backward-projection to obtain the corresponding single virtual image. After an on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinate information on the big virtual detector image plane. The paper subtly uses the concept of the big virtual camera to obtain a stitched image and the corresponding high accuracy rational function model (RFM) for concurrent post processing. Experiments verified that the proposed method can achieve seamless mosaicking while maintaining the geometric accuracy.
Methods for identification of images acquired with digital cameras
NASA Astrophysics Data System (ADS)
Geradts, Zeno J.; Bijhold, Jurrien; Kieft, Martijn; Kurosawa, Kenji; Kuroki, Kenro; Saitoh, Naoki
2001-02-01
From the court we were asked whether it is possible to determine if an image has been made with a specific digital camera. This question has to be answered in child pornography cases, where evidence is needed that a certain picture has been made with a specific camera. We have looked into different methods of examining the cameras to determine if a specific image has been made with a camera: defects in CCDs, file formats that are used, noise introduced by the pixel arrays and watermarking in images used by the camera manufacturer.
Utilizing broadband X-rays in a Bragg coherent X-ray diffraction imaging experiment
Cha, Wonsuk; Liu, Wenjun; Harder, Ross; ...
2016-07-26
A method is presented to simplify Bragg coherent X-ray diffraction imaging studies of complex heterogeneous crystalline materials with a two-stage screening/imaging process that utilizes polychromatic and monochromatic coherent X-rays and is compatible with in situ sample environments. Coherent white-beam diffraction is used to identify an individual crystal particle or grain that displays desired properties within a larger population. A three-dimensional reciprocal-space map suitable for diffraction imaging is then measured for the Bragg peak of interest using a monochromatic beam energy scan that requires no sample motion, thus simplifyingin situchamber design. This approach was demonstrated with Au nanoparticles and will enable,more » for example, individual grains in a polycrystalline material of specific orientation to be selected, then imaged in three dimensions while under load.« less
Utilizing broadband X-rays in a Bragg coherent X-ray diffraction imaging experiment.
Cha, Wonsuk; Liu, Wenjun; Harder, Ross; Xu, Ruqing; Fuoss, Paul H; Hruszkewycz, Stephan O
2016-09-01
A method is presented to simplify Bragg coherent X-ray diffraction imaging studies of complex heterogeneous crystalline materials with a two-stage screening/imaging process that utilizes polychromatic and monochromatic coherent X-rays and is compatible with in situ sample environments. Coherent white-beam diffraction is used to identify an individual crystal particle or grain that displays desired properties within a larger population. A three-dimensional reciprocal-space map suitable for diffraction imaging is then measured for the Bragg peak of interest using a monochromatic beam energy scan that requires no sample motion, thus simplifying in situ chamber design. This approach was demonstrated with Au nanoparticles and will enable, for example, individual grains in a polycrystalline material of specific orientation to be selected, then imaged in three dimensions while under load.
NASA Technical Reports Server (NTRS)
Geogdzhayev, Igor V.; Marshak, Alexander
2018-01-01
The unique position of the Deep Space Climate Observatory (DSCOVR) Earth Polychromatic Imaging Camera (EPIC) at the Lagrange 1 point makes an important addition to the data from currently operating low Earth orbit observing instruments. EPIC instrument does not have an onboard calibration facility. One approach to its calibration is to compare EPIC observations to the measurements from polar-orbiting radiometers. Moderate Resolution Imaging Spectroradiometer (MODIS) is a natural choice for such comparison due to its well-established calibration record and wide use in remote sensing. We use MODIS Aqua and Terra L1B 1km reflectances to infer calibration coefficients for four EPIC visible and NIR channels: 443, 551, 680 and 780 nm. MODIS and EPIC measurements made between June 2015 and 2016 are employed for comparison. We first identify favorable MODIS pixels with scattering angle matching temporarily collocated EPIC observations. Each EPIC pixel is then spatially collocated to a subset of the favorable MODIS pixels within 25 km radius. Standard deviation of the selected MODIS pixels as well as of the adjacent EPIC pixels is used to find the most homogeneous scenes. These scenes are then used to determine calibration coefficients using a linear regression between EPIC counts/sec and reflectances in the close MODIS spectral channels. We present thus inferred EPIC calibration coefficients and discuss sources of uncertainties. The lunar EPIC observations are used to calibrate EPIC O2 absorbing channels (688 and 764 nm), assuming that there is a small difference between moon reflectances separated by approx.10 nm in wavelength provided the calibration factors of the red (680 nm) and near-IR (780 nm) are known from comparison between EPIC and MODIS.
NASA Astrophysics Data System (ADS)
Aiken, John Charles
The development of a colour Spatial Light Modulator (SLM) and its application to optical information processing is described. Whilst monochrome technology has been established for many years, this is not the case for colour where commercial systems are unavailable. A main aspect of this study is therefore, how the use of colour can add an additional dimension to optical information processing. A well established route to monochrome system development has been the use of (black and white) liquid crystal televisions (LCTV) as SLM, providing useful performance at a low-cost. This study is based on the unique use of a colour display removed from a LCTV and operated as a colour SLM. A significant development has been the replacement of the original TV electronics operating the display with enhanced drive electronics specially developed for this application. Through a computer interface colour images from a drawing package or video camera can now be readily displayed on the LCD as input to an optical system. A detailed evaluation of the colour LCD optical properties, indicates that the new drive electronics have considerably improved the operation of the display for use as a colour SLM. Applications are described employing the use of colour in Fourier plane filtering, image correlation and speckle metrology. The SLM (and optical system) developed demonstrates, how the addition of colour has greatly enhanced its capabilities to implement principles of optical data processing, conventionally performed monochromatically. The hybrid combination employed, combining colour optical data processing with electronic techniques has resulted in a capable development system. Further development of the system using current colour LCDs and the move towards a portable system, is considered in the study conclusion.
Automatic calibration method for plenoptic camera
NASA Astrophysics Data System (ADS)
Luan, Yinsen; He, Xing; Xu, Bing; Yang, Ping; Tang, Guomao
2016-04-01
An automatic calibration method is proposed for a microlens-based plenoptic camera. First, all microlens images on the white image are searched and recognized automatically based on digital morphology. Then, the center points of microlens images are rearranged according to their relative position relationships. Consequently, the microlens images are located, i.e., the plenoptic camera is calibrated without the prior knowledge of camera parameters. Furthermore, this method is appropriate for all types of microlens-based plenoptic cameras, even the multifocus plenoptic camera, the plenoptic camera with arbitrarily arranged microlenses, or the plenoptic camera with different sizes of microlenses. Finally, we verify our method by the raw data of Lytro. The experiments show that our method has higher intelligence than the methods published before.
Zhao, Qiaole; Schelen, Ben; Schouten, Raymond; van den Oever, Rein; Leenen, René; van Kuijk, Harry; Peters, Inge; Polderdijk, Frank; Bosiers, Jan; Raspe, Marcel; Jalink, Kees; Geert Sander de Jong, Jan; van Geest, Bert; Stoop, Karel; Young, Ian Ted
2012-12-01
We have built an all-solid-state camera that is directly modulated at the pixel level for frequency-domain fluorescence lifetime imaging microscopy (FLIM) measurements. This novel camera eliminates the need for an image intensifier through the use of an application-specific charge coupled device design in a frequency-domain FLIM system. The first stage of evaluation for the camera has been carried out. Camera characteristics such as noise distribution, dark current influence, camera gain, sampling density, sensitivity, linearity of photometric response, and optical transfer function have been studied through experiments. We are able to do lifetime measurement using our modulated, electron-multiplied fluorescence lifetime imaging microscope (MEM-FLIM) camera for various objects, e.g., fluorescein solution, fixed green fluorescent protein (GFP) cells, and GFP-actin stained live cells. A detailed comparison of a conventional microchannel plate (MCP)-based FLIM system and the MEM-FLIM system is presented. The MEM-FLIM camera shows higher resolution and a better image quality. The MEM-FLIM camera provides a new opportunity for performing frequency-domain FLIM.
Digital camera with apparatus for authentication of images produced from an image file
NASA Technical Reports Server (NTRS)
Friedman, Gary L. (Inventor)
1993-01-01
A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely based upon the private key that digital data encrypted with the private key by the processor may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating at any time the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match, since even one bit change in the image hash will cause the image hash to be totally different from the secure hash.
Plenoptic camera image simulation for reconstruction algorithm verification
NASA Astrophysics Data System (ADS)
Schwiegerling, Jim
2014-09-01
Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to lead to a color and exposure level for that pixel. To speed processing three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.
Medicine, material science and security: the versatility of the coded-aperture approach.
Munro, P R T; Endrizzi, M; Diemoz, P C; Hagen, C K; Szafraniec, M B; Millard, T P; Zapata, C E; Speller, R D; Olivo, A
2014-03-06
The principal limitation to the widespread deployment of X-ray phase imaging in a variety of applications is probably versatility. A versatile X-ray phase imaging system must be able to work with polychromatic and non-microfocus sources (for example, those currently used in medical and industrial applications), have physical dimensions sufficiently large to accommodate samples of interest, be insensitive to environmental disturbances (such as vibrations and temperature variations), require only simple system set-up and maintenance, and be able to perform quantitative imaging. The coded-aperture technique, based upon the edge illumination principle, satisfies each of these criteria. To date, we have applied the technique to mammography, materials science, small-animal imaging, non-destructive testing and security. In this paper, we outline the theory of coded-aperture phase imaging and show an example of how the technique may be applied to imaging samples with a practically important scale.
Accurate band-to-band registration of AOTF imaging spectrometer using motion detection technology
NASA Astrophysics Data System (ADS)
Zhou, Pengwei; Zhao, Huijie; Jin, Shangzhong; Li, Ningchuan
2016-05-01
This paper concerns the problem of platform vibration induced band-to-band misregistration with acousto-optic imaging spectrometer in spaceborne application. Registrating images of different bands formed at different time or different position is difficult, especially for hyperspectral images form acousto-optic tunable filter (AOTF) imaging spectrometer. In this study, a motion detection method is presented using the polychromatic undiffracted beam of AOTF. The factors affecting motion detect accuracy are analyzed theoretically, and calculations show that optical distortion is an easily overlooked factor to achieve accurate band-to-band registration. Hence, a reflective dual-path optical system has been proposed for the first time, with reduction of distortion and chromatic aberration, indicating the potential of higher registration accuracy. Consequently, a spectra restoration experiment using additional motion detect channel is presented for the first time, which shows the accurate spectral image registration capability of this technique.
First results of a polychromatic artificial sodium star for the correction of tilt
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedman, H.; Foy, R..; Tallon, M.
1996-03-06
This paper presents the first results of a joint experiment carried out at Lawrence Livermore National Laboratory during January, 1996. Laser and optical systems were tested to provide a polychromatic artificial sodium star for the correction of tilt. This paper presents the results of that experiment.
Aoki, Hisae; Yamashita, Hiromasa; Mori, Toshiyuki; Fukuyo, Tsuneo; Chiba, Toshio
2014-11-01
We developed a new ultrahigh-sensitive CMOS camera using a specific sensor that has a wide range of spectral sensitivity characteristics. The objective of this study is to present our updated endoscopic technology that has successfully integrated two innovative functions; ultrasensitive imaging as well as advanced fluorescent viewing. Two different experiments were conducted. One was carried out to evaluate the function of the ultrahigh-sensitive camera. The other was to test the availability of the newly developed sensor and its performance as a fluorescence endoscope. In both studies, the distance from the endoscopic tip to the target was varied and those endoscopic images in each setting were taken for further comparison. In the first experiment, the 3-CCD camera failed to display the clear images under low illumination, and the target was hardly seen. In contrast, the CMOS camera was able to display the targets regardless of the camera-target distance under low illumination. Under high illumination, imaging quality given by both cameras was quite alike. In the second experiment as a fluorescence endoscope, the CMOS camera was capable of clearly showing the fluorescent-activated organs. The ultrahigh sensitivity CMOS HD endoscopic camera is expected to provide us with clear images under low illumination in addition to the fluorescent images under high illumination in the field of laparoscopic surgery.
SU-E-I-77: X-Ray Coherent Scatter Diffraction Pattern Modeling in GEANT4.
Kapadia, A; Samei, E; Harrawood, B; Sahbaee, P; Chawla, A; Tan, Z; Brady, D
2012-06-01
To model X-ray coherent scatter diffraction patterns in GEANT4 for simulating experiments involving material detection through diffraction pattern measurement. Although coherent scatter cross-sections are modeled accurately in GEANT4, diffraction patterns for crystalline materials are not yet included. Here we describe our modeling of crystalline diffraction patterns in GEANT4 for specific materials and the validation of the results against experimentally measured data. Coherent scatter in GEANT4 is currently based on Hubbell's non-relativistic form factor tabulations from EPDL97. We modified the form-factors by introducing an interference function that accounts for the angular dependence between the Rayleigh-scattered photons and the photon wavelength. The modified form factors were used to replace the inherent form-factors in GEANT4. The simulation was tested using monochromatic and polychromatic x-ray beams (separately) incident on objects containing one or more elements with modified form-factors. The simulation results were compared against the experimentally measured diffraction images of corresponding objects using an in-house x-ray diffraction imager for validation. The comparison was made using the following metrics: number of diffraction rings, radial distance, absolute intensity, and relative intensity. Sharp diffraction pattern rings were observed in the monochromatic simulations at locations consistent with the angular dependence of the photon wavelength. In the polychromatic simulations, the diffraction patterns exhibited a radial blur consistent with the energy spread of the polychromatic spectrum. The simulated and experimentally measured patterns showed identical numbers of rings with close agreement in radial distance, absolute and relative intensities (barring statistical fluctuations). No significant change was observed in the execution time of the simulations. This work demonstrates the ability to model coherent scatter diffraction in GEANT4 in an accurate and efficient manner without compromising the accuracy or runtime of the simulation. This work was supported by the Department of Homeland Security under grant DHS (BAA 10-01 F075), and by the Department of Defense under award W81XWH-09-1-0066. © 2012 American Association of Physicists in Medicine.
An evolution of image source camera attribution approaches.
Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul
2016-05-01
Camera attribution plays an important role in digital image forensics by providing the evidence and distinguishing characteristics of the origin of the digital image. It allows the forensic analyser to find the possible source camera which captured the image under investigation. However, in real-world applications, these approaches have faced many challenges due to the large set of multimedia data publicly available through photo sharing and social network sites, captured with uncontrolled conditions and undergone variety of hardware and software post-processing operations. Moreover, the legal system only accepts the forensic analysis of the digital image evidence if the applied camera attribution techniques are unbiased, reliable, nondestructive and widely accepted by the experts in the field. The aim of this paper is to investigate the evolutionary trend of image source camera attribution approaches from fundamental to practice, in particular, with the application of image processing and data mining techniques. Extracting implicit knowledge from images using intrinsic image artifacts for source camera attribution requires a structured image mining process. In this paper, we attempt to provide an introductory tutorial on the image processing pipeline, to determine the general classification of the features corresponding to different components for source camera attribution. The article also reviews techniques of the source camera attribution more comprehensively in the domain of the image forensics in conjunction with the presentation of classifying ongoing developments within the specified area. The classification of the existing source camera attribution approaches is presented based on the specific parameters, such as colour image processing pipeline, hardware- and software-related artifacts and the methods to extract such artifacts. The more recent source camera attribution approaches, which have not yet gained sufficient attention among image forensics researchers, are also critically analysed and further categorised into four different classes, namely, optical aberrations based, sensor camera fingerprints based, processing statistics based and processing regularities based, to present a classification. Furthermore, this paper aims to investigate the challenging problems, and the proposed strategies of such schemes based on the suggested taxonomy to plot an evolution of the source camera attribution approaches with respect to the subjective optimisation criteria over the last decade. The optimisation criteria were determined based on the strategies proposed to increase the detection accuracy, robustness and computational efficiency of source camera brand, model or device attribution. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)
1991-01-01
A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.
Composite video and graphics display for camera viewing systems in robotics and teleoperation
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)
1993-01-01
A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.
Digital Camera with Apparatus for Authentication of Images Produced from an Image File
NASA Technical Reports Server (NTRS)
Friedman, Gary L. (Inventor)
1996-01-01
A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely related to the private key that digital data encrypted with the private key may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The authenticating apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match. Other techniques to address time-honored methods of deception, such as attaching false captions or inducing forced perspectives, are included.
Image Alignment for Multiple Camera High Dynamic Range Microscopy.
Eastwood, Brian S; Childs, Elisabeth C
2012-01-09
This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera.
Image Alignment for Multiple Camera High Dynamic Range Microscopy
Eastwood, Brian S.; Childs, Elisabeth C.
2012-01-01
This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera. PMID:22545028
Propagation-based x-ray phase contrast imaging using an iterative phase diversity technique
NASA Astrophysics Data System (ADS)
Carroll, Aidan J.; van Riessen, Grant A.; Balaur, Eugeniu; Dolbnya, Igor P.; Tran, Giang N.; Peele, Andrew G.
2018-03-01
Through the use of a phase diversity technique, we demonstrate a near-field in-line x-ray phase contrast algorithm that provides improved object reconstruction when compared to our previous iterative methods for a homogeneous sample. Like our previous methods, the new technique uses the sample refractive index distribution during the reconstruction process. The technique complements existing monochromatic and polychromatic methods and is useful in situations where experimental phase contrast data is affected by noise.
Pre-flight and On-orbit Geometric Calibration of the Lunar Reconnaissance Orbiter Camera
NASA Astrophysics Data System (ADS)
Speyerer, E. J.; Wagner, R. V.; Robinson, M. S.; Licht, A.; Thomas, P. C.; Becker, K.; Anderson, J.; Brylow, S. M.; Humm, D. C.; Tschimmel, M.
2016-04-01
The Lunar Reconnaissance Orbiter Camera (LROC) consists of two imaging systems that provide multispectral and high resolution imaging of the lunar surface. The Wide Angle Camera (WAC) is a seven color push-frame imager with a 90∘ field of view in monochrome mode and 60∘ field of view in color mode. From the nominal 50 km polar orbit, the WAC acquires images with a nadir ground sampling distance of 75 m for each of the five visible bands and 384 m for the two ultraviolet bands. The Narrow Angle Camera (NAC) consists of two identical cameras capable of acquiring images with a ground sampling distance of 0.5 m from an altitude of 50 km. The LROC team geometrically calibrated each camera before launch at Malin Space Science Systems in San Diego, California and the resulting measurements enabled the generation of a detailed camera model for all three cameras. The cameras were mounted and subsequently launched on the Lunar Reconnaissance Orbiter (LRO) on 18 June 2009. Using a subset of the over 793000 NAC and 207000 WAC images of illuminated terrain collected between 30 June 2009 and 15 December 2013, we improved the interior and exterior orientation parameters for each camera, including the addition of a wavelength dependent radial distortion model for the multispectral WAC. These geometric refinements, along with refined ephemeris, enable seamless projections of NAC image pairs with a geodetic accuracy better than 20 meters and sub-pixel precision and accuracy when orthorectifying WAC images.
Applying and extending ISO/TC42 digital camera resolution standards to mobile imaging products
NASA Astrophysics Data System (ADS)
Williams, Don; Burns, Peter D.
2007-01-01
There are no fundamental differences between today's mobile telephone cameras and consumer digital still cameras that suggest many existing ISO imaging performance standards do not apply. To the extent that they have lenses, color filter arrays, detectors, apertures, image processing, and are hand held, there really are no operational or architectural differences. Despite this, there are currently differences in the levels of imaging performance. These are driven by physical and economic constraints, and image-capture conditions. Several ISO standards for resolution, well established for digital consumer digital cameras, require care when applied to the current generation of cell phone cameras. In particular, accommodation of optical flare, shading non-uniformity and distortion are recommended. We offer proposals for the application of existing ISO imaging resolution performance standards to mobile imaging products, and suggestions for extending performance standards to the characteristic behavior of camera phones.
NV-CMOS HD camera for day/night imaging
NASA Astrophysics Data System (ADS)
Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.
2014-06-01
SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near infrared (NIR) bands (peak QE <90%), as well as projected low noise (<2h+) readout. Power consumption is minimized in the camera, which operates from a single 5V supply. The NVCMOS HD camera provides a substantial reduction in size, weight, and power (SWaP) , ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, fixed mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition the camera with the NV-CMOS HD imager is suitable for high performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.
Toward an image compression algorithm for the high-resolution electronic still camera
NASA Technical Reports Server (NTRS)
Nerheim, Rosalee
1989-01-01
Taking pictures with a camera that uses a digital recording medium instead of film has the advantage of recording and transmitting images without the use of a darkroom or a courier. However, high-resolution images contain an enormous amount of information and strain data-storage systems. Image compression will allow multiple images to be stored in the High-Resolution Electronic Still Camera. The camera is under development at Johnson Space Center. Fidelity of the reproduced image and compression speed are of tantamount importance. Lossless compression algorithms are fast and faithfully reproduce the image, but their compression ratios will be unacceptably low due to noise in the front end of the camera. Future efforts will include exploring methods that will reduce the noise in the image and increase the compression ratio.
Object recognition through turbulence with a modified plenoptic camera
NASA Astrophysics Data System (ADS)
Wu, Chensheng; Ko, Jonathan; Davis, Christopher
2015-03-01
Atmospheric turbulence adds accumulated distortion to images obtained by cameras and surveillance systems. When the turbulence grows stronger or when the object is further away from the observer, increasing the recording device resolution helps little to improve the quality of the image. Many sophisticated methods to correct the distorted images have been invented, such as using a known feature on or near the target object to perform a deconvolution process, or use of adaptive optics. However, most of the methods depend heavily on the object's location, and optical ray propagation through the turbulence is not directly considered. Alternatively, selecting a lucky image over many frames provides a feasible solution, but at the cost of time. In our work, we propose an innovative approach to improving image quality through turbulence by making use of a modified plenoptic camera. This type of camera adds a micro-lens array to a traditional high-resolution camera to form a semi-camera array that records duplicate copies of the object as well as "superimposed" turbulence at slightly different angles. By performing several steps of image reconstruction, turbulence effects will be suppressed to reveal more details of the object independently (without finding references near the object). Meanwhile, the redundant information obtained by the plenoptic camera raises the possibility of performing lucky image algorithmic analysis with fewer frames, which is more efficient. In our work, the details of our modified plenoptic cameras and image processing algorithms will be introduced. The proposed method can be applied to coherently illuminated object as well as incoherently illuminated objects. Our result shows that the turbulence effect can be effectively suppressed by the plenoptic camera in the hardware layer and a reconstructed "lucky image" can help the viewer identify the object even when a "lucky image" by ordinary cameras is not achievable.
Automatic source camera identification using the intrinsic lens radial distortion
NASA Astrophysics Data System (ADS)
Choi, Kai San; Lam, Edmund Y.; Wong, Kenneth K. Y.
2006-11-01
Source camera identification refers to the task of matching digital images with the cameras that are responsible for producing these images. This is an important task in image forensics, which in turn is a critical procedure in law enforcement. Unfortunately, few digital cameras are equipped with the capability of producing watermarks for this purpose. In this paper, we demonstrate that it is possible to achieve a high rate of accuracy in the identification by noting the intrinsic lens radial distortion of each camera. To reduce manufacturing cost, the majority of digital cameras are equipped with lenses having rather spherical surfaces, whose inherent radial distortions serve as unique fingerprints in the images. We extract, for each image, parameters from aberration measurements, which are then used to train and test a support vector machine classifier. We conduct extensive experiments to evaluate the success rate of a source camera identification with five cameras. The results show that this is a viable approach with high accuracy. Additionally, we also present results on how the error rates may change with images captured using various optical zoom levels, as zooming is commonly available in digital cameras.
Finite plateau in spectral gap of polychromatic constrained random networks
NASA Astrophysics Data System (ADS)
Avetisov, V.; Gorsky, A.; Nechaev, S.; Valba, O.
2017-12-01
We consider critical behavior in the ensemble of polychromatic Erdős-Rényi networks and regular random graphs, where network vertices are painted in different colors. The links can be randomly removed and added to the network subject to the condition of the vertex degree conservation. In these constrained graphs we run the Metropolis procedure, which favors the connected unicolor triads of nodes. Changing the chemical potential, μ , of such triads, for some wide region of μ , we find the formation of a finite plateau in the number of intercolor links, which exactly matches the finite plateau in the network algebraic connectivity (the value of the first nonvanishing eigenvalue of the Laplacian matrix, λ2). We claim that at the plateau the spontaneously broken Z2 symmetry is restored by the mechanism of modes collectivization in clusters of different colors. The phenomena of a finite plateau formation holds also for polychromatic networks with M ≥2 colors. The behavior of polychromatic networks is analyzed via the spectral properties of their adjacency and Laplacian matrices.
Measuring Positions of Objects using Two or More Cameras
NASA Technical Reports Server (NTRS)
Klinko, Steve; Lane, John; Nelson, Christopher
2008-01-01
An improved method of computing positions of objects from digitized images acquired by two or more cameras (see figure) has been developed for use in tracking debris shed by a spacecraft during and shortly after launch. The method is also readily adaptable to such applications as (1) tracking moving and possibly interacting objects in other settings in order to determine causes of accidents and (2) measuring positions of stationary objects, as in surveying. Images acquired by cameras fixed to the ground and/or cameras mounted on tracking telescopes can be used in this method. In this method, processing of image data starts with creation of detailed computer- aided design (CAD) models of the objects to be tracked. By rotating, translating, resizing, and overlaying the models with digitized camera images, parameters that characterize the position and orientation of the camera can be determined. The final position error depends on how well the centroids of the objects in the images are measured; how accurately the centroids are interpolated for synchronization of cameras; and how effectively matches are made to determine rotation, scaling, and translation parameters. The method involves use of the perspective camera model (also denoted the point camera model), which is one of several mathematical models developed over the years to represent the relationships between external coordinates of objects and the coordinates of the objects as they appear on the image plane in a camera. The method also involves extensive use of the affine camera model, in which the distance from the camera to an object (or to a small feature on an object) is assumed to be much greater than the size of the object (or feature), resulting in a truly two-dimensional image. The affine camera model does not require advance knowledge of the positions and orientations of the cameras. This is because ultimately, positions and orientations of the cameras and of all objects are computed in a coordinate system attached to one object as defined in its CAD model.
Earth elevation map production and high resolution sensing camera imaging analysis
NASA Astrophysics Data System (ADS)
Yang, Xiubin; Jin, Guang; Jiang, Li; Dai, Lu; Xu, Kai
2010-11-01
The Earth's digital elevation which impacts space camera imaging has prepared and imaging has analysed. Based on matching error that TDI CCD integral series request of the speed of image motion, statistical experimental methods-Monte Carlo method is used to calculate the distribution histogram of Earth's elevation in image motion compensated model which includes satellite attitude changes, orbital angular rate changes, latitude, longitude and the orbital inclination changes. And then, elevation information of the earth's surface from SRTM is read. Earth elevation map which produced for aerospace electronic cameras is compressed and spliced. It can get elevation data from flash according to the shooting point of latitude and longitude. If elevation data between two data, the ways of searching data uses linear interpolation. Linear interpolation can better meet the rugged mountains and hills changing requests. At last, the deviant framework and camera controller are used to test the character of deviant angle errors, TDI CCD camera simulation system with the material point corresponding to imaging point model is used to analyze the imaging's MTF and mutual correlation similarity measure, simulation system use adding cumulation which TDI CCD imaging exceeded the corresponding pixel horizontal and vertical offset to simulate camera imaging when stability of satellite attitude changes. This process is practicality. It can effectively control the camera memory space, and meet a very good precision TDI CCD camera in the request matches the speed of image motion and imaging.
NASA Astrophysics Data System (ADS)
Sampat, Nitin; Grim, John F.; O'Hara, James E.
1998-04-01
The digital camera market is growing at an explosive rate. At the same time, the quality of photographs printed on ink- jet printers continues to improve. Most of the consumer cameras are designed with the monitor as the target output device and ont the printer. When a user is printing his images from a camera, he/she needs to optimize the camera and printer combination in order to maximize image quality. We describe the details of one such method for improving image quality using a AGFA digital camera and an ink jet printer combination. Using Adobe PhotoShop, we generated optimum red, green and blue transfer curves that match the scene content to the printers output capabilities. Application of these curves to the original digital image resulted in a print with more shadow detail, no loss of highlight detail, a smoother tone scale, and more saturated colors. The image also exhibited an improved tonal scale and visually more pleasing images than those captured and printed without any 'correction'. While we report the results for one camera-printer combination we tested this technique on numbers digital cameras and printer combinations and in each case produced a better looking image. We also discuss the problems we encountered in implementing this technique.
Event-Driven Random-Access-Windowing CCD Imaging System
NASA Technical Reports Server (NTRS)
Monacos, Steve; Portillo, Angel; Ortiz, Gerardo; Alexander, James; Lam, Raymond; Liu, William
2004-01-01
A charge-coupled-device (CCD) based high-speed imaging system, called a realtime, event-driven (RARE) camera, is undergoing development. This camera is capable of readout from multiple subwindows [also known as regions of interest (ROIs)] within the CCD field of view. Both the sizes and the locations of the ROIs can be controlled in real time and can be changed at the camera frame rate. The predecessor of this camera was described in High-Frame-Rate CCD Camera Having Subwindow Capability (NPO- 30564) NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 26. The architecture of the prior camera requires tight coupling between camera control logic and an external host computer that provides commands for camera operation and processes pixels from the camera. This tight coupling limits the attainable frame rate and functionality of the camera. The design of the present camera loosens this coupling to increase the achievable frame rate and functionality. From a host computer perspective, the readout operation in the prior camera was defined on a per-line basis; in this camera, it is defined on a per-ROI basis. In addition, the camera includes internal timing circuitry. This combination of features enables real-time, event-driven operation for adaptive control of the camera. Hence, this camera is well suited for applications requiring autonomous control of multiple ROIs to track multiple targets moving throughout the CCD field of view. Additionally, by eliminating the need for control intervention by the host computer during the pixel readout, the present design reduces ROI-readout times to attain higher frame rates. This camera (see figure) includes an imager card consisting of a commercial CCD imager and two signal-processor chips. The imager card converts transistor/ transistor-logic (TTL)-level signals from a field programmable gate array (FPGA) controller card. These signals are transmitted to the imager card via a low-voltage differential signaling (LVDS) cable assembly. The FPGA controller card is connected to the host computer via a standard peripheral component interface (PCI).
You are here: Earth as seen from Mars
2004-03-11
This is the first image ever taken of Earth from the surface of a planet beyond the Moon. It was taken by the Mars Exploration Rover Spirit one hour before sunrise on the 63rd martian day, or sol, of its mission. The image is a mosaic of images taken by the rover's navigation camera showing a broad view of the sky, and an image taken by the rover's panoramic camera of Earth. The contrast in the panoramic camera image was increased two times to make Earth easier to see. The inset shows a combination of four panoramic camera images zoomed in on Earth. The arrow points to Earth. Earth was too faint to be detected in images taken with the panoramic camera's color filters. http://photojournal.jpl.nasa.gov/catalog/PIA05547
The sequence measurement system of the IR camera
NASA Astrophysics Data System (ADS)
Geng, Ai-hui; Han, Hong-xia; Zhang, Hai-bo
2011-08-01
Currently, the IR cameras are broadly used in the optic-electronic tracking, optic-electronic measuring, fire control and optic-electronic countermeasure field, but the output sequence of the most presently applied IR cameras in the project is complex and the giving sequence documents from the leave factory are not detailed. Aiming at the requirement that the continuous image transmission and image procession system need the detailed sequence of the IR cameras, the sequence measurement system of the IR camera is designed, and the detailed sequence measurement way of the applied IR camera is carried out. The FPGA programming combined with the SignalTap online observation way has been applied in the sequence measurement system, and the precise sequence of the IR camera's output signal has been achieved, the detailed document of the IR camera has been supplied to the continuous image transmission system, image processing system and etc. The sequence measurement system of the IR camera includes CameraLink input interface part, LVDS input interface part, FPGA part, CameraLink output interface part and etc, thereinto the FPGA part is the key composed part in the sequence measurement system. Both the video signal of the CmaeraLink style and the video signal of LVDS style can be accepted by the sequence measurement system, and because the image processing card and image memory card always use the CameraLink interface as its input interface style, the output signal style of the sequence measurement system has been designed into CameraLink interface. The sequence measurement system does the IR camera's sequence measurement work and meanwhile does the interface transmission work to some cameras. Inside the FPGA of the sequence measurement system, the sequence measurement program, the pixel clock modification, the SignalTap file configuration and the SignalTap online observation has been integrated to realize the precise measurement to the IR camera. Te sequence measurement program written by the verilog language combining the SignalTap tool on line observation can count the line numbers in one frame, pixel numbers in one line and meanwhile account the line offset and row offset of the image. Aiming at the complex sequence of the IR camera's output signal, the sequence measurement system of the IR camera accurately measures the sequence of the project applied camera, supplies the detailed sequence document to the continuous system such as image processing system and image transmission system and gives out the concrete parameters of the fval, lval, pixclk, line offset and row offset. The experiment shows that the sequence measurement system of the IR camera can get the precise sequence measurement result and works stably, laying foundation for the continuous system.
Mars Descent Imager for Curiosity
2010-07-19
A pocketknife provides scale for this image of the Mars Descent Imager camera; the camera will fly on the Curiosity rover of NASA Mars Science Laboratory mission. Malin Space Science Systems, San Diego, Calif., supplied the camera for the mission.
New generation of meteorology cameras
NASA Astrophysics Data System (ADS)
Janout, Petr; Blažek, Martin; Páta, Petr
2017-12-01
A new generation of the WILLIAM (WIde-field aLL-sky Image Analyzing Monitoring system) camera includes new features such as monitoring of rain and storm clouds during the day observation. Development of the new generation of weather monitoring cameras responds to the demand for monitoring of sudden weather changes. However, new WILLIAM cameras are ready to process acquired image data immediately, release warning against sudden torrential rains, and send it to user's cell phone and email. Actual weather conditions are determined from image data, and results of image processing are complemented by data from sensors of temperature, humidity, and atmospheric pressure. In this paper, we present the architecture, image data processing algorithms of mentioned monitoring camera and spatially-variant model of imaging system aberrations based on Zernike polynomials.
Baqué, Mickael; Scalzi, Giuliano; Rabbow, Elke; Rettberg, Petra; Billi, Daniela
2013-10-01
When Chroococcidiopsis sp. strain CCMEE 057 from the Sinai Desert and strain CCMEE 029 from the Negev Desert were exposed to space and Martian simulations in the dried status as biofilms or multilayered planktonic samples, the biofilms exhibited an enhanced rate of survival. Compared to strain CCMEE 029, biofilms of strain CCME 057 better tolerated UV polychromatic radiation (5 × 10(5) kJ/m(2) attenuated with a 0.1% neutral density filter) combined with space vacuum or Martian atmosphere of 780 Pa. CCMEE 029, on the other hand, failed to survive UV polychromatic doses higher than 1.5 × 10(3) kJ/m(2). The induced damage to genomic DNA, plasma membranes and photosynthetic apparatus was quantified and visualized by means of PCR-based assays and CLSM imaging. Planktonic samples of both strains accumulated a higher amount of damage than did the biofilms after exposure to each simulation; CLSM imaging showed that photosynthetic pigment bleaching, DNA fragmentation and damaged plasma membranes occurred in the top 3-4 cell layers of both biofilms and of multilayered planktonic samples. Differences in the EPS composition were revealed by molecular probe staining as contributing to the enhanced endurance of biofilms compared to that of planktonic samples. Our results suggest that compared to strain CCMEE 029, biofilms of strain CCMEE 057 might better tolerate 1 year's exposure in space during the next EXPOSE-R2 mission.
NASA Astrophysics Data System (ADS)
Baqué, Mickael; Scalzi, Giuliano; Rabbow, Elke; Rettberg, Petra; Billi, Daniela
2013-10-01
When Chroococcidiopsis sp. strain CCMEE 057 from the Sinai Desert and strain CCMEE 029 from the Negev Desert were exposed to space and Martian simulations in the dried status as biofilms or multilayered planktonic samples, the biofilms exhibited an enhanced rate of survival. Compared to strain CCMEE 029, biofilms of strain CCME 057 better tolerated UV polychromatic radiation (5 × 105 kJ/m2 attenuated with a 0.1 % neutral density filter) combined with space vacuum or Martian atmosphere of 780 Pa. CCMEE 029, on the other hand, failed to survive UV polychromatic doses higher than 1.5 × 103 kJ/m2. The induced damage to genomic DNA, plasma membranes and photosynthetic apparatus was quantified and visualized by means of PCR-based assays and CLSM imaging. Planktonic samples of both strains accumulated a higher amount of damage than did the biofilms after exposure to each simulation; CLSM imaging showed that photosynthetic pigment bleaching, DNA fragmentation and damaged plasma membranes occurred in the top 3-4 cell layers of both biofilms and of multilayered planktonic samples. Differences in the EPS composition were revealed by molecular probe staining as contributing to the enhanced endurance of biofilms compared to that of planktonic samples. Our results suggest that compared to strain CCMEE 029, biofilms of strain CCMEE 057 might better tolerate 1 year's exposure in space during the next EXPOSE-R2 mission.
Phenology cameras observing boreal ecosystems of Finland
NASA Astrophysics Data System (ADS)
Peltoniemi, Mikko; Böttcher, Kristin; Aurela, Mika; Kolari, Pasi; Tanis, Cemal Melih; Linkosalmi, Maiju; Loehr, John; Metsämäki, Sari; Nadir Arslan, Ali
2016-04-01
Cameras have become useful tools for monitoring seasonality of ecosystems. Low-cost cameras facilitate validation of other measurements and allow extracting some key ecological features and moments from image time series. We installed a network of phenology cameras at selected ecosystem research sites in Finland. Cameras were installed above, on the level, or/and below the canopies. Current network hosts cameras taking time lapse images in coniferous and deciduous forests as well as at open wetlands offering thus possibilities to monitor various phenological and time-associated events and elements. In this poster, we present our camera network and give examples of image series use for research. We will show results about the stability of camera derived color signals, and based on that discuss about the applicability of cameras in monitoring time-dependent phenomena. We will also present results from comparisons between camera-derived color signal time series and daily satellite-derived time series (NVDI, NDWI, and fractional snow cover) from the Moderate Resolution Imaging Spectrometer (MODIS) at selected spruce and pine forests and in a wetland. We will discuss the applicability of cameras in supporting phenological observations derived from satellites, by considering the possibility of cameras to monitor both above and below canopy phenology and snow.
Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications
NASA Technical Reports Server (NTRS)
Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1997-01-01
A laser-directed ranging system has utility for use in various fields, such as telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.
A digital gigapixel large-format tile-scan camera.
Ben-Ezra, M
2011-01-01
Although the resolution of single-lens reflex (SLR) and medium-format digital cameras has increased in recent years, applications for cultural-heritage preservation and computational photography require even higher resolutions. Addressing this issue, a large-format cameras' large image planes can achieve very high resolution without compromising pixel size and thus can provide high-quality, high-resolution images.This digital large-format tile scan camera can acquire high-quality, high-resolution images of static scenes. It employs unique calibration techniques and a simple algorithm for focal-stack processing of very large images with significant magnification variations. The camera automatically collects overlapping focal stacks and processes them into a high-resolution, extended-depth-of-field image.
Geometric rectification of camera-captured document images.
Liang, Jian; DeMenthon, Daniel; Doermann, David
2008-04-01
Compared to typical scanners, handheld cameras offer convenient, flexible, portable, and non-contact image capture, which enables many new applications and breathes new life into existing ones. However, camera-captured documents may suffer from distortions caused by non-planar document shape and perspective projection, which lead to failure of current OCR technologies. We present a geometric rectification framework for restoring the frontal-flat view of a document from a single camera-captured image. Our approach estimates 3D document shape from texture flow information obtained directly from the image without requiring additional 3D/metric data or prior camera calibration. Our framework provides a unified solution for both planar and curved documents and can be applied in many, especially mobile, camera-based document analysis applications. Experiments show that our method produces results that are significantly more OCR compatible than the original images.
SPARTAN Near-IR Camera SPARTAN Cookbook Ohio State Infrared Imager/Spectrograph (OSIRIS) - NO LONGER Instrumentation at SOAR»SPARTAN Near-IR Camera SPARTAN Near-IR Camera System Overview The Spartan Infrared Camera is a high spatial resolution near-IR imager. Spartan has a focal plane conisisting of four "
NASA Astrophysics Data System (ADS)
Morison, Ian
2017-02-01
1. Imaging star trails; 2. Imaging a constellation with a DSLR and tripod; 3. Imaging the Milky Way with a DSLR and tracking mount; 4. Imaging the Moon with a compact camera or smartphone; 5. Imaging the Moon with a DSLR; 6. Imaging the Pleiades Cluster with a DSLR and small refractor; 7. Imaging the Orion Nebula, M42, with a modified Canon DSLR; 8. Telescopes and their accessories for use in astroimaging; 9. Towards stellar excellence; 10. Cooling a DSLR camera to reduce sensor noise; 11. Imaging the North American and Pelican Nebulae; 12. Combating light pollution - the bane of astrophotographers; 13. Imaging planets with an astronomical video camera or Canon DSLR; 14. Video imaging the Moon with a webcam or DSLR; 15. Imaging the Sun in white light; 16. Imaging the Sun in the light of its H-alpha emission; 17. Imaging meteors; 18. Imaging comets; 19. Using a cooled 'one shot colour' camera; 20. Using a cooled monochrome CCD camera; 21. LRGB colour imaging; 22. Narrow band colour imaging; Appendix A. Telescopes for imaging; Appendix B. Telescope mounts; Appendix C. The effects of the atmosphere; Appendix D. Auto guiding; Appendix E. Image calibration; Appendix F. Practical aspects of astroimaging.
Comparison and evaluation of datasets for off-angle iris recognition
NASA Astrophysics Data System (ADS)
Kurtuncu, Osman M.; Cerme, Gamze N.; Karakaya, Mahmut
2016-05-01
In this paper, we investigated the publicly available iris recognition datasets and their data capture procedures in order to determine if they are suitable for the stand-off iris recognition research. Majority of the iris recognition datasets include only frontal iris images. Even if a few datasets include off-angle iris images, the frontal and off-angle iris images are not captured at the same time. The comparison of the frontal and off-angle iris images shows not only differences in the gaze angle but also change in pupil dilation and accommodation as well. In order to isolate the effect of the gaze angle from other challenging issues including dilation and accommodation, the frontal and off-angle iris images are supposed to be captured at the same time by using two different cameras. Therefore, we developed an iris image acquisition platform by using two cameras in this work where one camera captures frontal iris image and the other one captures iris images from off-angle. Based on the comparison of Hamming distance between frontal and off-angle iris images captured with the two-camera- setup and one-camera-setup, we observed that Hamming distance in two-camera-setup is less than one-camera-setup ranging from 0.05 to 0.001. These results show that in order to have accurate results in the off-angle iris recognition research, two-camera-setup is necessary in order to distinguish the challenging issues from each other.
Sub-Camera Calibration of a Penta-Camera
NASA Astrophysics Data System (ADS)
Jacobsen, K.; Gerke, M.
2016-03-01
Penta cameras consisting of a nadir and four inclined cameras are becoming more and more popular, having the advantage of imaging also facades in built up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi platform photogrammetry the inner orientation of the used IGI Penta DigiCAM has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern have been determined by Pix4Dmapper and have been independently adjusted and analyzed by program system BLUH. With 4.1 million image points in 314 images respectively 3.9 million image points in 248 images a dense matching was provided by Pix4Dmapper. With up to 19 respectively 29 images per object point the images are well connected, nevertheless the high number of images per object point are concentrated to the block centres while the inclined images outside the block centre are satisfying but not very strongly connected. This leads to very high values for the Student test (T-test) of the finally used additional parameters or in other words, additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration of IGI, but there are still radial symmetric distortions also for the inclined cameras with a size exceeding 5μm even if mentioned as negligible based on the laboratory calibration. Radial and tangential effects of the image corners are limited but still available. Remarkable angular affine systematic image errors can be seen especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but it can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With exception of the angular affinity the systematic image errors for corresponding cameras of both blocks have the same trend, but as usual for block adjustments with self calibration, they still show significant differences. Based on the very high number of image points the remaining image residuals can be safely determined by overlaying and averaging the image residuals corresponding to their image coordinates. The size of the systematic image errors, not covered by the used additional parameters, is in the range of a square mean of 0.1 pixels corresponding to 0.6μm. They are not the same for both blocks, but show some similarities for corresponding cameras. In general the bundle block adjustment with a satisfying set of additional parameters, checked by remaining systematic errors, is required for use of the whole geometric potential of the penta camera. Especially for object points on facades, often only in two images and taken with a limited base length, the correct handling of systematic image errors is important. At least in the analyzed data sets the self calibration of sub-cameras by bundle block adjustment suffers from the correlation of the inner to the exterior calibration due to missing crossing flight directions. As usual, the systematic image errors differ from block to block even without the influence of the correlation to the exterior orientation.
Laser line scan underwater imaging by complementary metal-oxide-semiconductor camera
NASA Astrophysics Data System (ADS)
He, Zhiyi; Luo, Meixing; Song, Xiyu; Wang, Dundong; He, Ning
2017-12-01
This work employs the complementary metal-oxide-semiconductor (CMOS) camera to acquire images in a scanning manner for laser line scan (LLS) underwater imaging to alleviate backscatter impact of seawater. Two operating features of the CMOS camera, namely the region of interest (ROI) and rolling shutter, can be utilized to perform image scan without the difficulty of translating the receiver above the target as the traditional LLS imaging systems have. By the dynamically reconfigurable ROI of an industrial CMOS camera, we evenly divided the image into five subareas along the pixel rows and then scanned them by changing the ROI region automatically under the synchronous illumination by the fun beams of the lasers. Another scanning method was explored by the rolling shutter operation of the CMOS camera. The fun beam lasers were turned on/off to illuminate the narrow zones on the target in a good correspondence to the exposure lines during the rolling procedure of the camera's electronic shutter. The frame synchronization between the image scan and the laser beam sweep may be achieved by either the strobe lighting output pulse or the external triggering pulse of the industrial camera. Comparison between the scanning and nonscanning images shows that contrast of the underwater image can be improved by our LLS imaging techniques, with higher stability and feasibility than the mechanically controlled scanning method.
New opportunities for quality enhancing of images captured by passive THz camera
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.
2014-10-01
As it is well-known, the passive THz camera allows seeing concealed object without contact with a person and this camera is non-dangerous for a person. Obviously, efficiency of using the passive THz camera depends on its temperature resolution. This characteristic specifies possibilities of the detection for concealed object: minimal size of the object; maximal distance of the detection; image quality. Computer processing of the THz image may lead to many times improving of the image quality without any additional engineering efforts. Therefore, developing of modern computer code for its application to THz images is urgent problem. Using appropriate new methods one may expect such temperature resolution which will allow to see banknote in pocket of a person without any real contact. Modern algorithms for computer processing of THz images allow also to see object inside the human body using a temperature trace on the human skin. This circumstance enhances essentially opportunity of passive THz camera applications for counterterrorism problems. We demonstrate opportunities, achieved at present time, for the detection both of concealed objects and of clothes components due to using of computer processing of images captured by passive THz cameras, manufactured by various companies. Another important result discussed in the paper consists in observation of both THz radiation emitted by incandescent lamp and image reflected from ceramic floorplate. We consider images produced by THz passive cameras manufactured by Microsemi Corp., and ThruVision Corp., and Capital Normal University (Beijing, China). All algorithms for computer processing of the THz images under consideration in this paper were developed by Russian part of author list. Keywords: THz wave, passive imaging camera, computer processing, security screening, concealed and forbidden objects, reflected image, hand seeing, banknote seeing, ceramic floorplate, incandescent lamp.
How Many Pixels Does It Take to Make a Good 4"×6" Print? Pixel Count Wars Revisited
NASA Astrophysics Data System (ADS)
Kriss, Michael A.
Digital still cameras emerged following the introduction of the Sony Mavica analog prototype camera in 1981. These early cameras produced poor image quality and did not challenge film cameras for overall quality. By 1995 digital still cameras in expensive SLR formats had 6 mega-pixels and produced high quality images (with significant image processing). In 2005 significant improvement in image quality was apparent and lower prices for digital still cameras (DSCs) started a rapid decline in film usage and film camera sells. By 2010 film usage was mostly limited to professionals and the motion picture industry. The rise of DSCs was marked by a “pixel war” where the driving feature of the cameras was the pixel count where even moderate cost, ˜120, DSCs would have 14 mega-pixels. The improvement of CMOS technology pushed this trend of lower prices and higher pixel counts. Only the single lens reflex cameras had large sensors and large pixels. The drive for smaller pixels hurt the quality aspects of the final image (sharpness, noise, speed, and exposure latitude). Only today are camera manufactures starting to reverse their course and producing DSCs with larger sensors and pixels. This paper will explore why larger pixels and sensors are key to the future of DSCs.
Light field rendering with omni-directional camera
NASA Astrophysics Data System (ADS)
Todoroki, Hiroshi; Saito, Hideo
2003-06-01
This paper presents an approach to capture visual appearance of a real environment such as an interior of a room. We propose the method for generating arbitrary viewpoint images by building light field with the omni-directional camera, which can capture the wide circumferences. Omni-directional camera used in this technique is a special camera with the hyperbolic mirror in the upper part of a camera, so that we can capture luminosity in the environment in the range of 360 degree of circumferences in one image. We apply the light field method, which is one technique of Image-Based-Rendering(IBR), for generating the arbitrary viewpoint images. The light field is a kind of the database that records the luminosity information in the object space. We employ the omni-directional camera for constructing the light field, so that we can collect many view direction images in the light field. Thus our method allows the user to explore the wide scene, that can acheive realistic representation of virtual enviroment. For demonstating the proposed method, we capture image sequence in our lab's interior environment with an omni-directional camera, and succesfully generate arbitray viewpoint images for virual tour of the environment.
A telephoto camera system with shooting direction control by gaze detection
NASA Astrophysics Data System (ADS)
Teraya, Daiki; Hachisu, Takumi; Yendo, Tomohiro
2015-05-01
For safe driving, it is important for driver to check traffic conditions such as traffic lights, or traffic signs as early as soon. If on-vehicle camera takes image of important objects to understand traffic conditions from long distance and shows these to driver, driver can understand traffic conditions earlier. To take image of long distance objects clearly, the focal length of camera must be long. When the focal length is long, on-vehicle camera doesn't have enough field of view to check traffic conditions. Therefore, in order to get necessary images from long distance, camera must have long-focal length and controllability of shooting direction. In previous study, driver indicates shooting direction on displayed image taken by a wide-angle camera, a direction controllable camera takes telescopic image, and displays these to driver. However, driver uses a touch panel to indicate the shooting direction in previous study. It is cause of disturb driving. So, we propose a telephoto camera system for driving support whose shooting direction is controlled by driver's gaze to avoid disturbing drive. This proposed system is composed of a gaze detector and an active telephoto camera whose shooting direction is controlled. We adopt non-wear detecting method to avoid hindrance to drive. The gaze detector measures driver's gaze by image processing. The shooting direction of the active telephoto camera is controlled by galvanometer scanners and the direction can be switched within a few milliseconds. We confirmed that the proposed system takes images of gazing straight ahead of subject by experiments.
Application of single-image camera calibration for ultrasound augmented laparoscopic visualization
NASA Astrophysics Data System (ADS)
Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj
2015-03-01
Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for the OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of a 5-mm and a 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising to be applied to our augmented reality visualization system for laparoscopic surgery.
Application of single-image camera calibration for ultrasound augmented laparoscopic visualization
Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj
2017-01-01
Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for the OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of a 5-mm and a 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising to be applied to our augmented reality visualization system for laparoscopic surgery. PMID:28943703
Application of single-image camera calibration for ultrasound augmented laparoscopic visualization.
Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D; Shekhar, Raj
2015-03-01
Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for the OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool ( rdCalib ; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of a 5-mm and a 10-mm standard Stryker ® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising to be applied to our augmented reality visualization system for laparoscopic surgery.
Depth estimation and camera calibration of a focused plenoptic camera for visual odometry
NASA Astrophysics Data System (ADS)
Zeller, Niclas; Quint, Franz; Stilla, Uwe
2016-08-01
This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo-problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two plains: the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model based methods, which define the relation of the virtual depth, which has been estimated based on the light-field image, and the metric object distance. These two methods are compared to a well known curve fitting approach. Both model based methods show significant advantages compared to the curve fitting method. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally focused and thus finding stereo correspondences is enhanced. In contrast to monocular visual odometry approaches, due to the calibration of the individual depth maps, the scale of the scene can be observed. Furthermore, due to the light-field information better tracking capabilities compared to the monocular case can be expected. As result, the depth information gained by the plenoptic camera based visual odometry algorithm proposed in this paper has superior accuracy and reliability compared to the depth estimated from a single light-field image.
High-Resolution Mars Camera Test Image of Moon Infrared
2005-09-13
This crescent view of Earth Moon in infrared wavelengths comes from a camera test by NASA Mars Reconnaissance Orbiter spacecraft on its way to Mars. This image was taken by taken by the High Resolution Imaging Science Experiment camera Sept. 8, 2005.
2016-06-25
The equipment used in this procedure includes: Ann Arbor distortion tester with 50-line grating reticule, IQeye 720 digital video camera with 12...and import them into MATLAB. In order to digitally capture images of the distortion in an optical sample, an IQeye 720 video camera with a 12... video camera and Ann Arbor distortion tester. Figure 8. Computer interface for capturing images seen by IQeye 720 camera. Once an image was
Heterogeneous Vision Data Fusion for Independently Moving Cameras
2010-03-01
target detection , tracking , and identification over a large terrain. The goal of the project is to investigate and evaluate the existing image...fusion algorithms, develop new real-time algorithms for Category-II image fusion, and apply these algorithms in moving target detection and tracking . The...moving target detection and classification. 15. SUBJECT TERMS Image Fusion, Target Detection , Moving Cameras, IR Camera, EO Camera 16. SECURITY
Operation and Performance of the Mars Exploration Rover Imaging System on the Martian Surface
NASA Technical Reports Server (NTRS)
Maki, Justin N.; Litwin, Todd; Herkenhoff, Ken
2005-01-01
This slide presentation details the Mars Exploration Rover (MER) imaging system. Over 144,000 images have been gathered from all Mars Missions, with 83.5% of them being gathered by MER. Each Rover has 9 cameras (Navcam, front and rear Hazcam, Pancam, Microscopic Image, Descent Camera, Engineering Camera, Science Camera) and produces 1024 x 1024 (1 Megapixel) images in the same format. All onboard image processing code is implemented in flight software and includes extensive processing capabilities such as autoexposure, flat field correction, image orientation, thumbnail generation, subframing, and image compression. Ground image processing is done at the Jet Propulsion Laboratory's Multimission Image Processing Laboratory using Video Image Communication and Retrieval (VICAR) while stereo processing (left/right pairs) is provided for raw image, radiometric correction; solar energy maps,triangulation (Cartesian 3-spaces) and slope maps.
Otto, Kristen J; Hapner, Edie R; Baker, Michael; Johns, Michael M
2006-02-01
Advances in commercial video technology have improved office-based laryngeal imaging. This study investigates the perceived image quality of a true high-definition (HD) video camera and the effect of magnification on laryngeal videostroboscopy. We performed a prospective, dual-armed, single-blinded analysis of a standard laryngeal videostroboscopic examination comparing 3 separate add-on camera systems: a 1-chip charge-coupled device (CCD) camera, a 3-chip CCD camera, and a true 720p (progressive scan) HD camera. Displayed images were controlled for magnification and image size (20-inch [50-cm] display, red-green-blue, and S-video cable for 1-chip and 3-chip cameras; digital visual interface cable and HD monitor for HD camera). Ten blinded observers were then asked to rate the following 5 items on a 0-to-100 visual analog scale: resolution, color, ability to see vocal fold vibration, sense of depth perception, and clarity of blood vessels. Eight unblinded observers were then asked to rate the difference in perceived resolution and clarity of laryngeal examination images when displayed on a 10-inch (25-cm) monitor versus a 42-inch (105-cm) monitor. A visual analog scale was used. These monitors were controlled for actual resolution capacity. For each item evaluated, randomized block design analysis demonstrated that the 3-chip camera scored significantly better than the 1-chip camera (p < .05). For the categories of color and blood vessel discrimination, the 3-chip camera scored significantly better than the HD camera (p < .05). For magnification alone, observers rated the 42-inch monitor statistically better than the 10-inch monitor. The expense of new medical technology must be judged against its added value. This study suggests that HD laryngeal imaging may not add significant value over currently available video systems, in perceived image quality, when a small monitor is used. Although differences in clarity between standard and HD cameras may not be readily apparent on small displays, a large display size coupled with HD technology may impart improved diagnosis of subtle vocal fold lesions and vibratory anomalies.
NASA Astrophysics Data System (ADS)
Yu, Liping; Pan, Bing
2017-08-01
Full-frame, high-speed 3D shape and deformation measurement using stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo stereo-imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by regular stereo-DIC method to retrieve full-field 3D shape and deformation on the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted singe-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers prominent advantages of full-frame measurements using a single high-speed camera but without sacrificing its spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-side drum, demonstrated the effectiveness and accuracy of the proposed technique.
Yamada, Yoshitake; Yamada, Minoru; Sugisawa, Koichi; Akita, Hirotaka; Shiomi, Eisuke; Abe, Takayuki; Okuda, Shigeo; Jinzaki, Masahiro
2015-01-01
Abstract The purpose of this study was to compare renal cyst pseudoenhancement between virtual monochromatic spectral (VMS) and conventional polychromatic 120-kVp images obtained during the same abdominal computed tomography (CT) examination and among images reconstructed using filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR), and model-based iterative reconstruction (MBIR). Our institutional review board approved this prospective study; each participant provided written informed consent. Thirty-one patients (19 men, 12 women; age range, 59–85 years; mean age, 73.2 ± 5.5 years) with renal cysts underwent unenhanced 120-kVp CT followed by sequential fast kVp-switching dual-energy (80/140 kVp) and 120-kVp abdominal enhanced CT in the nephrographic phase over a 10-cm scan length with a random acquisition order and 4.5-second intervals. Fifty-one renal cysts (maximal diameter, 18.0 ± 14.7 mm [range, 4–61 mm]) were identified. The CT attenuation values of the cysts as well as of the kidneys were measured on the unenhanced images, enhanced VMS images (at 70 keV) reconstructed using FBP and ASIR from dual-energy data, and enhanced 120-kVp images reconstructed using FBP, ASIR, and MBIR. The results were analyzed using the mixed-effects model and paired t test with Bonferroni correction. The attenuation increases (pseudoenhancement) of the renal cysts on the VMS images reconstructed using FBP/ASIR (least square mean, 5.0/6.0 Hounsfield units [HU]; 95% confidence interval, 2.6–7.4/3.6–8.4 HU) were significantly lower than those on the conventional 120-kVp images reconstructed using FBP/ASIR/MBIR (least square mean, 12.1/12.8/11.8 HU; 95% confidence interval, 9.8–14.5/10.4–15.1/9.4–14.2 HU) (all P < .001); on the other hand, the CT attenuation values of the kidneys on the VMS images were comparable to those on the 120-kVp images. Regardless of the reconstruction algorithm, 70-keV VMS images showed a lower degree of pseudoenhancement of renal cysts than 120-kVp images, while maintaining kidney contrast enhancement comparable to that on 120-kVp images. PMID:25881852
Tanaka, Hirokazu; Chikamori, Taishiro; Hida, Satoshi; Uchida, Kenji; Igarashi, Yuko; Yokoyama, Tsuyoshi; Takahashi, Masaki; Shiba, Chie; Yoshimura, Mana; Tokuuye, Koichi; Yamashina, Akira
2013-01-01
Cadmium-zinc-telluride (CZT) solid-state detectors have been recently introduced into the field of myocardial perfusion imaging. The aim of this study was to prospectively compare the diagnostic performance of the CZT high-speed gamma camera (Discovery NM 530c) with that of the standard 3-head gamma camera in the same group of patients. The study group consisted of 150 consecutive patients who underwent a 1-day stress-rest (99m)Tc-sestamibi or tetrofosmin imaging protocol. Image acquisition was performed first on a standard gamma camera with a 15-min scan time each for stress and for rest. All scans were immediately repeated on a CZT camera with a 5-min scan time for stress and a 3-min scan time for rest, using list mode. The correlations between the CZT camera and the standard camera for perfusion and function analyses were strong within narrow Bland-Altman limits of agreement. Using list mode analysis, image quality for stress was rated as good or excellent in 97% of the 3-min scans, and in 100% of the ≥4-min scans. For CZT scans at rest, similarly, image quality was rated as good or excellent in 94% of the 1-min scans, and in 100% of the ≥2-min scans. The novel CZT camera provides excellent image quality, which is equivalent to standard myocardial single-photon emission computed tomography, despite a short scan time of less than half of the standard time.
NASA Astrophysics Data System (ADS)
Bechis, K.; Pitruzzello, A.
2014-09-01
This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle of arrival information for light rays originating from across the target, allowing range to target and 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images are generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction. A key requirement for light field camera operation is that the target must be within the near-field (Fraunhofer distance) of the collecting optics. For example, in visible light the near-field of a 1-m telescope extends out to about 3,500 km, while the near-field of the AEOS telescope extends out over 46,000 km. For our initial proof of concept, we have integrated our light field camera with a 14-inch Meade LX600 advanced coma-free telescope, to image various surrogate ground targets at up to tens of kilometers range. Our experiments with the 14-inch telescope have assessed factors and requirements that are traceable and scalable to a larger-aperture system that would have the near-field distance needed to obtain 3D images of LEO objects. The next step would be to integrate a light field camera with a 1-m or larger telescope and evaluate its 3D imaging capability against LEO objects. 3D imaging of LEO space objects with light field camera technology can potentially provide a valuable new tool for space situational awareness, especially for those situations where laser or radar illumination of the target objects is not feasible.
An image compression algorithm for a high-resolution digital still camera
NASA Technical Reports Server (NTRS)
Nerheim, Rosalee
1989-01-01
The Electronic Still Camera (ESC) project will provide for the capture and transmission of high-quality images without the use of film. The image quality will be superior to video and will approach the quality of 35mm film. The camera, which will have the same general shape and handling as a 35mm camera, will be able to send images to earth in near real-time. Images will be stored in computer memory (RAM) in removable cartridges readable by a computer. To save storage space, the image will be compressed and reconstructed at the time of viewing. Both lossless and loss-y image compression algorithms are studied, described, and compared.
NASA Astrophysics Data System (ADS)
Hanel, A.; Stilla, U.
2017-05-01
Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras. Often no marked reference points are available in environments, which are a large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A-priori images of an urban calibration environment taken with an external camera are processed with the structure-frommotion method to obtain an environment point cloud. Images of the vehicle interior, taken also with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images of both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, as well as a point cloud of the interior of a Volkswagen test car is created. Videos of two environment and one interior cameras are recorded. Results show, that the vehicle camera poses are estimated successfully especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate between one and ten centimeters from tachymeter reference measurements.
The effect of microchannel plate gain depression on PAPA photon counting cameras
NASA Astrophysics Data System (ADS)
Sams, Bruce J., III
1991-03-01
PAPA (precision analog photon address) cameras are photon counting imagers which employ microchannel plates (MCPs) for image intensification. They have been used extensively in astronomical speckle imaging. The PAPA camera can produce artifacts when light incident on its MCP is highly concentrated. The effect is exacerbated by adjusting the strobe detection level too low, so that the camera accepts very small MCP pulses. The artifacts can occur even at low total count rates if the image has highly a concentrated bright spot. This paper describes how to optimize PAPA camera electronics, and describes six techniques which can avoid or minimize addressing errors.
Li, Tian-Jiao; Li, Sai; Yuan, Yuan; Liu, Yu-Dong; Xu, Chuan-Long; Shuai, Yong; Tan, He-Ping
2017-04-03
Plenoptic cameras are used for capturing flames in studies of high-temperature phenomena. However, simulations of plenoptic camera models can be used prior to the experiment improve experimental efficiency and reduce cost. In this work, microlens arrays, which are based on the established light field camera model, are optimized into a hexagonal structure with three types of microlenses. With this improved plenoptic camera model, light field imaging of static objects and flame are simulated using the calibrated parameters of the Raytrix camera (R29). The optimized models improve the image resolution, imaging screen utilization, and shooting range of depth of field.
Extreme ultra-violet movie camera for imaging microsecond time scale magnetic reconnection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chai, Kil-Byoung; Bellan, Paul M.
2013-12-15
An ultra-fast extreme ultra-violet (EUV) movie camera has been developed for imaging magnetic reconnection in the Caltech spheromak/astrophysical jet experiment. The camera consists of a broadband Mo:Si multilayer mirror, a fast decaying YAG:Ce scintillator, a visible light block, and a high-speed visible light CCD camera. The camera can capture EUV images as fast as 3.3 × 10{sup 6} frames per second with 0.5 cm spatial resolution. The spectral range is from 20 eV to 60 eV. EUV images reveal strong, transient, highly localized bursts of EUV radiation when magnetic reconnection occurs.
Space-variant restoration of images degraded by camera motion blur.
Sorel, Michal; Flusser, Jan
2008-02-01
We examine the problem of restoration from multiple images degraded by camera motion blur. We consider scenes with significant depth variations resulting in space-variant blur. The proposed algorithm can be applied if the camera moves along an arbitrary curve parallel to the image plane, without any rotations. The knowledge of camera trajectory and camera parameters is not necessary. At the input, the user selects a region where depth variations are negligible. The algorithm belongs to the group of variational methods that estimate simultaneously a sharp image and a depth map, based on the minimization of a cost functional. To initialize the minimization, it uses an auxiliary window-based depth estimation algorithm. Feasibility of the algorithm is demonstrated by three experiments with real images.
Method used to test the imaging consistency of binocular camera's left-right optical system
NASA Astrophysics Data System (ADS)
Liu, Meiying; Wang, Hu; Liu, Jie; Xue, Yaoke; Yang, Shaodong; Zhao, Hui
2016-09-01
To binocular camera, the consistency of optical parameters of the left and the right optical system is an important factor that will influence the overall imaging consistency. In conventional testing procedure of optical system, there lacks specifications suitable for evaluating imaging consistency. In this paper, considering the special requirements of binocular optical imaging system, a method used to measure the imaging consistency of binocular camera is presented. Based on this method, a measurement system which is composed of an integrating sphere, a rotary table and a CMOS camera has been established. First, let the left and the right optical system capture images in normal exposure time under the same condition. Second, a contour image is obtained based on the multiple threshold segmentation result and the boundary is determined using the slope of contour lines near the pseudo-contour line. Third, the constraint of gray level based on the corresponding coordinates of left-right images is established and the imaging consistency could be evaluated through standard deviation σ of the imaging grayscale difference D (x, y) between the left and right optical system. The experiments demonstrate that the method is suitable for carrying out the imaging consistency testing for binocular camera. When the standard deviation 3σ distribution of imaging gray difference D (x, y) between the left and right optical system of the binocular camera does not exceed 5%, it is believed that the design requirements have been achieved. This method could be used effectively and paves the way for the imaging consistency testing of the binocular camera.
NASA Technical Reports Server (NTRS)
Khlopenkov, Konstantin V.; Duda, David; Thieman, Mandana; Sun-mack, Szedung; Su, Wenying; Minnis, Patrick; Bedka, Kristopher
2017-01-01
The Deep Space Climate Observatory (DSCOVR) enables analysis of the daytime Earth radiation budget via the onboard Earth Polychromatic Imaging Camera (EPIC) and National Institute of Standards and Technology Advanced Radiometer (NISTAR). EPIC delivers adequate spatial resolution imagery but only in shortwave bands (317-780 nm), while NISTAR measures the top-of-atmosphere (TOA) whole-disk radiance in shortwave and longwave broadband windows. Accurate calculation of albedo and outgoing longwave flux requires a high-resolution scene identification such as the radiance observations and cloud properties retrievals from low earth orbit (LEO, including NASA Terra and Aqua MODIS, Suomi-NPP VIIRS, and NOAA AVHRR) and geosynchronous (GEO, including GOES east and west, METEOSAT, INSAT-3D, MTSAT-2, and Himawari-8) satellite imagers. The cloud properties are derived using the Clouds and the Earth's Radiant Energy System (CERES) mission Cloud Subsystem group algorithms. These properties have to be co-located with EPIC pixels to provide the scene identification and to select anisotropic directional models (ADMs), which are then used to adjust the NISTAR-measured radiance and subsequently obtain the global daytime shortwave and longwave fluxes. This work presents an algorithm for optimal merging of selected radiance and cloud property parameters derived from multiple satellite imagers to obtain seamless global hourly composites at 5-km resolution. Selection of satellite data for each 5-km pixel is based on an aggregated rating that incorporates five parameters: nominal satellite resolution, pixel time relative to the EPIC time, viewing zenith angle, distance from day/night terminator, and probability of sun glint. To provide a smoother transition in the merged output, in regions where candidate pixel data from two satellite sources have comparable aggregated rating, the selection decision is defined by the cumulative function of the normal distribution so that abrupt changes in the visual appearance of the composite data are avoided. Higher spatial accuracy in the composite product is achieved by using the inverse mapping with gradient search during reprojection and bicubic interpolation for pixel resampling.
Ocular Chromatic Aberrations and Their Effects on Polychromatic Retinal Image Quality
NASA Astrophysics Data System (ADS)
Zhang, Xiaoxiao
Previous studies of ocular chromatic aberrations have concentrated on chromatic difference of focus (CDF). Less is known about the chromatic difference of image position (CDP) in the peripheral retina and no experimental attempt has been made to measure the ocular chromatic difference of magnification (CDM). Consequently, theoretical modelling of human eyes is incomplete. The insufficient knowledge of ocular chromatic aberrations is partially responsible for two unsolved applied vision problems: (1) how to improve vision by correcting ocular chromatic aberration? (2) what is the impact of ocular chromatic aberration on the use of isoluminance gratings as a tool in spatial-color vision?. Using optical ray tracing methods, MTF analysis methods of image quality, and psychophysical methods, I have developed a more complete model of ocular chromatic aberrations and their effects on vision. The ocular CDM was determined psychophysically by measuring the tilt in the apparent frontal parallel plane (AFPP) induced by interocular difference in image wavelength. This experimental result was then used to verify a theoretical relationship between the ocular CDM, the ocular CDF and the entrance pupil of the eye. In the retinal image after correcting the ocular CDF with existing achromatizing methods, two forms of chromatic aberration (CDM and chromatic parallax) were examined. The CDM was predicted by theoretical ray tracing and measured with the same method used to determine ocular CDM. The chromatic parallax was predicted with a nodal ray model and measured with the two-color vernier alignment method. The influence of these two aberrations on polychromatic MTF were calculated. Using this improved model of ocular chromatic aberration, luminance artifacts in the images of isoluminance gratings were calculated. The predicted luminance artifacts were then compared with experimental data from previous investigators. The results show that: (1) A simple relationship exists between two major chromatic aberrations and the location of the pupil; (2) The ocular CDM is measurable and varies among individuals; (3) All existing methods to correct ocular chromatic aberration face another aberration, chromatic parallax, which is inherent in the methodology; (4) Ocular chromatic aberrations have the potential to contaminate psychophysical experimental results on human spatial-color vision.
Imaging of breast cancer with mid- and long-wave infrared camera.
Joro, R; Lääperi, A-L; Dastidar, P; Soimakallio, S; Kuukasjärvi, T; Toivonen, T; Saaristo, R; Järvenpää, R
2008-01-01
In this novel study the breasts of 15 women with palpable breast cancer were preoperatively imaged with three technically different infrared (IR) cameras - micro bolometer (MB), quantum well (QWIP) and photo voltaic (PV) - to compare their ability to differentiate breast cancer from normal tissue. The IR images were processed, the data for frequency analysis were collected from dynamic IR images by pixel-based analysis and from each image selectively windowed regional analysis was carried out, based on angiogenesis and nitric oxide production of cancer tissue causing vasomotor and cardiogenic frequency differences compared to normal tissue. Our results show that the GaAs QWIP camera and the InSb PV camera demonstrate the frequency difference between normal and cancerous breast tissue; the PV camera more clearly. With selected image processing operations more detailed frequency analyses could be applied to the suspicious area. The MB camera was not suitable for tissue differentiation, as the difference between noise and effective signal was unsatisfactory.
The Panoramic Camera (PanCam) Instrument for the ESA ExoMars Rover
NASA Astrophysics Data System (ADS)
Griffiths, A.; Coates, A.; Jaumann, R.; Michaelis, H.; Paar, G.; Barnes, D.; Josset, J.
The recently approved ExoMars rover is the first element of the ESA Aurora programme and is slated to deliver the Pasteur exobiology payload to Mars by 2013. The 0.7 kg Panoramic Camera will provide multispectral stereo images with 65° field-of- view (1.1 mrad/pixel) and high resolution (85 µrad/pixel) monoscopic "zoom" images with 5° field-of-view. The stereo Wide Angle Cameras (WAC) are based on Beagle 2 Stereo Camera System heritage. The Panoramic Camera instrument is designed to fulfil the digital terrain mapping requirements of the mission as well as providing multispectral geological imaging, colour and stereo panoramic images, solar images for water vapour abundance and dust optical depth measurements and to observe retrieved subsurface samples before ingestion into the rest of the Pasteur payload. Additionally the High Resolution Camera (HRC) can be used for high resolution imaging of interesting targets detected in the WAC panoramas and of inaccessible locations on crater or valley walls.
NASA Astrophysics Data System (ADS)
Peltoniemi, Mikko; Aurela, Mika; Böttcher, Kristin; Kolari, Pasi; Loehr, John; Karhu, Jouni; Linkosalmi, Maiju; Melih Tanis, Cemal; Tuovinen, Juha-Pekka; Nadir Arslan, Ali
2018-01-01
In recent years, monitoring of the status of ecosystems using low-cost web (IP) or time lapse cameras has received wide interest. With broad spatial coverage and high temporal resolution, networked cameras can provide information about snow cover and vegetation status, serve as ground truths to Earth observations and be useful for gap-filling of cloudy areas in Earth observation time series. Networked cameras can also play an important role in supplementing laborious phenological field surveys and citizen science projects, which also suffer from observer-dependent observation bias. We established a network of digital surveillance cameras for automated monitoring of phenological activity of vegetation and snow cover in the boreal ecosystems of Finland. Cameras were mounted at 14 sites, each site having 1-3 cameras. Here, we document the network, basic camera information and access to images in the permanent data repository (http://www.zenodo.org/communities/phenology_camera/). Individual DOI-referenced image time series consist of half-hourly images collected between 2014 and 2016 (https://doi.org/10.5281/zenodo.1066862). Additionally, we present an example of a colour index time series derived from images from two contrasting sites.
The imaging system design of three-line LMCCD mapping camera
NASA Astrophysics Data System (ADS)
Zhou, Huai-de; Liu, Jin-Guo; Wu, Xing-Xing; Lv, Shi-Liang; Zhao, Ying; Yu, Da
2011-08-01
In this paper, the authors introduced the theory about LMCCD (line-matrix CCD) mapping camera firstly. On top of the introduction were consists of the imaging system of LMCCD mapping camera. Secondly, some pivotal designs which were Introduced about the imaging system, such as the design of focal plane module, the video signal's procession, the controller's design of the imaging system, synchronous photography about forward and nadir and backward camera and the nadir camera of line-matrix CCD. At last, the test results of LMCCD mapping camera imaging system were introduced. The results as following: the precision of synchronous photography about forward and nadir and backward camera is better than 4 ns and the nadir camera of line-matrix CCD is better than 4 ns too; the photography interval of line-matrix CCD of the nadir camera can satisfy the butter requirements of LMCCD focal plane module; the SNR tested in laboratory is better than 95 under typical working condition(the solar incidence degree is 30, the reflectivity of the earth's surface is 0.3) of each CCD image; the temperature of the focal plane module is controlled under 30° in a working period of 15 minutes. All of these results can satisfy the requirements about the synchronous photography, the temperature control of focal plane module and SNR, Which give the guarantee of precision for satellite photogrammetry.
Desai, Nandini J.; Gupta, B. D.; Patel, Pratik Narendrabhai
2014-01-01
Introduction: Obtaining images of slides viewed by a microscope can be invaluable for both diagnosis and teaching.They can be transferred among technologically-advanced hospitals for further consultation and evaluation. But a standard microscopic photography camera unit (MPCU)(MIPS-Microscopic Image projection System) is costly and not available in resource poor settings. The aim of our endeavour was to find a comparable and cheaper alternative method for photomicrography. Materials and Methods: We used a NIKON Coolpix S6150 camera (box type digital camera) with Olympus CH20i microscope and a fluorescent microscope for the purpose of this study. Results: We got comparable results for capturing images of light microscopy, but the results were not as satisfactory for fluorescent microscopy. Conclusion: A box type digital camera is a comparable, less expensive and convenient alternative to microscopic photography camera unit. PMID:25478350
Traffic Sign Recognition with Invariance to Lighting in Dual-Focal Active Camera System
NASA Astrophysics Data System (ADS)
Gu, Yanlei; Panahpour Tehrani, Mehrdad; Yendo, Tomohiro; Fujii, Toshiaki; Tanimoto, Masayuki
In this paper, we present an automatic vision-based traffic sign recognition system, which can detect and classify traffic signs at long distance under different lighting conditions. To realize this purpose, the traffic sign recognition is developed in an originally proposed dual-focal active camera system. In this system, a telephoto camera is equipped as an assistant of a wide angle camera. The telephoto camera can capture a high accuracy image for an object of interest in the view field of the wide angle camera. The image from the telephoto camera provides enough information for recognition when the accuracy of traffic sign is low from the wide angle camera. In the proposed system, the traffic sign detection and classification are processed separately for different images from the wide angle camera and telephoto camera. Besides, in order to detect traffic sign from complex background in different lighting conditions, we propose a type of color transformation which is invariant to light changing. This color transformation is conducted to highlight the pattern of traffic signs by reducing the complexity of background. Based on the color transformation, a multi-resolution detector with cascade mode is trained and used to locate traffic signs at low resolution in the image from the wide angle camera. After detection, the system actively captures a high accuracy image of each detected traffic sign by controlling the direction and exposure time of the telephoto camera based on the information from the wide angle camera. Moreover, in classification, a hierarchical classifier is constructed and used to recognize the detected traffic signs in the high accuracy image from the telephoto camera. Finally, based on the proposed system, a set of experiments in the domain of traffic sign recognition is presented. The experimental results demonstrate that the proposed system can effectively recognize traffic signs at low resolution in different lighting conditions.
Li, Jin; Liu, Zilong
2017-07-24
Remote sensing cameras in the visible/near infrared range are essential tools in Earth-observation, deep-space exploration, and celestial navigation. Their imaging performance, i.e. image quality here, directly determines the target-observation performance of a spacecraft, and even the successful completion of a space mission. Unfortunately, the camera itself, such as a optical system, a image sensor, and a electronic system, limits the on-orbit imaging performance. Here, we demonstrate an on-orbit high-resolution imaging method based on the invariable modulation transfer function (IMTF) of cameras. The IMTF, which is stable and invariable to the changing of ground targets, atmosphere, and environment on orbit or on the ground, depending on the camera itself, is extracted using a pixel optical focal-plane (PFP). The PFP produces multiple spatial frequency targets, which are used to calculate the IMTF at different frequencies. The resulting IMTF in combination with a constrained least-squares filter compensates for the IMTF, which represents the removal of the imaging effects limited by the camera itself. This method is experimentally confirmed. Experiments on an on-orbit panchromatic camera indicate that the proposed method increases 6.5 times of the average gradient, 3.3 times of the edge intensity, and 1.56 times of the MTF value compared to the case when IMTF is not used. This opens a door to push the limitation of a camera itself, enabling high-resolution on-orbit optical imaging.
Analysis of computer images in the presence of metals
NASA Astrophysics Data System (ADS)
Buzmakov, Alexey; Ingacheva, Anastasia; Prun, Victor; Nikolaev, Dmitry; Chukalina, Marina; Ferrero, Claudio; Asadchikov, Victor
2018-04-01
Artifacts caused by intensely absorbing inclusions are encountered in computed tomography via polychromatic scanning and may obscure or simulate pathologies in medical applications. To improve the quality of reconstruction if high-Z inclusions in presence, previously we proposed and tested with synthetic data an iterative technique with soft penalty mimicking linear inequalities on the photon-starved rays. This note reports a test at the tomographic laboratory set-up at the Institute of Crystallography FSRC "Crystallography and Photonics" RAS in which tomographic scans were successfully made of temporary tooth without inclusion and with Pb inclusion.
Plasma parameters and structures of the X4 flare of 19 May 1984 as observed by SMM-XRP.
NASA Astrophysics Data System (ADS)
Schmelz, J. T.; Saba, J. L. R.; Strong, K. T.
The eruption of a large flare on the east limb of the Sun was observed by the X-ray Polychromator (XRP) on board the Solar Maximum Mission (SMM) on 19 May 1984. The XRP Flat Crystal Spectrometer (FCS) made polychromatic soft X-ray images during the preflare, flare and postflare phases. The XRP Bent Crystal Spectrometer (BCS) provided information on the temperature and dynamics of the hot (Te > 8×106K) coronal plasma from spectra integrated spatially over the whole region.
Rapid assessment of forest canopy and light regime using smartphone hemispherical photography.
Bianchi, Simone; Cahalan, Christine; Hale, Sophie; Gibbons, James Michael
2017-12-01
Hemispherical photography (HP), implemented with cameras equipped with "fisheye" lenses, is a widely used method for describing forest canopies and light regimes. A promising technological advance is the availability of low-cost fisheye lenses for smartphone cameras. However, smartphone camera sensors cannot record a full hemisphere. We investigate whether smartphone HP is a cheaper and faster but still adequate operational alternative to traditional cameras for describing forest canopies and light regimes. We collected hemispherical pictures with both smartphone and traditional cameras in 223 forest sample points, across different overstory species and canopy densities. The smartphone image acquisition followed a faster and simpler protocol than that for the traditional camera. We automatically thresholded all images. We processed the traditional camera images for Canopy Openness (CO) and Site Factor estimation. For smartphone images, we took two pictures with different orientations per point and used two processing protocols: (i) we estimated and averaged total canopy gap from the two single pictures, and (ii) merging the two pictures together, we formed images closer to full hemispheres and estimated from them CO and Site Factors. We compared the same parameters obtained from different cameras and estimated generalized linear mixed models (GLMMs) between them. Total canopy gap estimated from the first processing protocol for smartphone pictures was on average significantly higher than CO estimated from traditional camera images, although with a consistent bias. Canopy Openness and Site Factors estimated from merged smartphone pictures of the second processing protocol were on average significantly higher than those from traditional cameras images, although with relatively little absolute differences and scatter. Smartphone HP is an acceptable alternative to HP using traditional cameras, providing similar results with a faster and cheaper methodology. Smartphone outputs can be directly used as they are for ecological studies, or converted with specific models for a better comparison to traditional cameras.
Video auto stitching in multicamera surveillance system
NASA Astrophysics Data System (ADS)
He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang
2012-01-01
This paper concerns the problem of video stitching automatically in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaic in large scale monitoring application. In this work, we formulate video stitching as a multi-image registration and blending problem, and not all cameras are needed to be calibrated except a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and then camera pose is estimated and refined. Homography matrix is employed to calculate overlapping pixels and finally implement boundary resample algorithm to blend images. The result of simulation demonstrates the efficiency of our method.
Video auto stitching in multicamera surveillance system
NASA Astrophysics Data System (ADS)
He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang
2011-12-01
This paper concerns the problem of video stitching automatically in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaic in large scale monitoring application. In this work, we formulate video stitching as a multi-image registration and blending problem, and not all cameras are needed to be calibrated except a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and then camera pose is estimated and refined. Homography matrix is employed to calculate overlapping pixels and finally implement boundary resample algorithm to blend images. The result of simulation demonstrates the efficiency of our method.
NASA Astrophysics Data System (ADS)
Theule, Joshua; Crema, Stefano; Comiti, Francesco; Cavalli, Marco; Marchi, Lorenzo
2015-04-01
Large scale particle image velocimetry (LSPIV) is a technique mostly used in rivers to measure two dimensional velocities from high resolution images at high frame rates. This technique still needs to be thoroughly explored in the field of debris flow studies. The Gadria debris flow monitoring catchment in Val Venosta (Italian Alps) has been equipped with four MOBOTIX M12 video cameras. Two cameras are located in a sediment trap located close to the alluvial fan apex, one looking upstream and the other looking down and more perpendicular to the flow. The third camera is in the next reach upstream from the sediment trap at a closer proximity to the flow. These three cameras are connected to a field shelter equipped with power supply and a server collecting all the monitoring data. The fourth camera is located in an active gully, the camera is activated by a rain gauge when there is one minute of rainfall. Before LSPIV can be used, the highly distorted images need to be corrected and accurate reference points need to be made. We decided to use IMGRAFT (an opensource image georectification toolbox) which can correct distorted images using reference points and camera location, and then finally rectifies the batch of images onto a DEM grid (or the DEM grid onto the image coordinates). With the orthorectified images, we used the freeware Fudaa-LSPIV (developed by EDF, IRSTEA, and DeltaCAD Company) to generate the LSPIV calculations of the flow events. Calculated velocities can easily be checked manually because of the already orthorectified images. During the monitoring program (since 2011) we recorded three debris flow events at the sediment trap area (each with very different surge dynamics). The camera in the gully was in operation in 2014 which managed to record granular flows and rockfalls, which particle tracking may be more appropriate for velocity measurements. The four cameras allows us to explore the limitations of camera distance, angle, frame rate, and image quality.
Engineering design criteria for an image intensifier/image converter camera
NASA Technical Reports Server (NTRS)
Sharpsteen, J. T.; Lund, D. L.; Stoap, L. J.; Solheim, C. D.
1976-01-01
The design, display, and evaluation of an image intensifier/image converter camera which can be utilized in various requirements of spaceshuttle experiments are described. An image intensifier tube was utilized in combination with two brassboards as power supply and used for evaluation of night photography in the field. Pictures were obtained showing field details which would have been undistinguishable to the naked eye or to an ordinary camera.
Ultra-fast framing camera tube
Kalibjian, Ralph
1981-01-01
An electronic framing camera tube features focal plane image dissection and synchronized restoration of the dissected electron line images to form two-dimensional framed images. Ultra-fast framing is performed by first streaking a two-dimensional electron image across a narrow slit, thereby dissecting the two-dimensional electron image into sequential electron line images. The dissected electron line images are then restored into a framed image by a restorer deflector operated synchronously with the dissector deflector. The number of framed images on the tube's viewing screen is equal to the number of dissecting slits in the tube. The distinguishing features of this ultra-fast framing camera tube are the focal plane dissecting slits, and the synchronously-operated restorer deflector which restores the dissected electron line images into a two-dimensional framed image. The framing camera tube can produce image frames having high spatial resolution of optical events in the sub-100 picosecond range.
a Spatio-Spectral Camera for High Resolution Hyperspectral Imaging
NASA Astrophysics Data System (ADS)
Livens, S.; Pauly, K.; Baeck, P.; Blommaert, J.; Nuyts, D.; Zender, J.; Delauré, B.
2017-08-01
Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600-900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475-925 nm), and we discuss future work.
Adin, Christopher A; Royal, Kenneth D; Moore, Brandon; Jacob, Megan
2018-06-13
To evaluate the safety and usability of a wearable, waterproof high-definition camera/case for acquisition of surgical images by sterile personnel. An in vitro study to test the efficacy of biodecontamination of camera cases. Usability for intraoperative image acquisition was assessed in clinical procedures. Two waterproof GoPro Hero4 Silver camera cases were inoculated by immersion in media containing Staphylococcus pseudointermedius or Escherichia coli at ≥5.50E+07 colony forming units/mL. Cases were biodecontaminated by manual washing and hydrogen peroxide plasma sterilization. Cultures were obtained by swab and by immersion in enrichment broth before and after each contamination/decontamination cycle (n = 4). The cameras were then applied by a surgeon in clinical procedures by using either a headband or handheld mode and were assessed for usability according to 5 user characteristics. Cultures of all poststerilization swabs were negative. One of 8 cultures was positive in enrichment broth, consistent with a low level of contamination in 1 sample. Usability of the camera was considered poor in headband mode, with limited battery life, inability to control camera functions, and lack of zoom function affecting image quality. Handheld operation of the camera by the primary surgeon improved usability, allowing close-up still and video intraoperative image acquisition. Vaporized hydrogen peroxide sterilization of this camera case was considered effective for biodecontamination. Handheld operation improved usability for intraoperative image acquisition. Vaporized hydrogen peroxide sterilization and thorough manual washing of a waterproof camera may provide cost effective intraoperative image acquisition for documentation purposes. © 2018 The American College of Veterinary Surgeons.
Selecting a digital camera for telemedicine.
Patricoski, Chris; Ferguson, A Stewart
2009-06-01
The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.
Sky camera geometric calibration using solar observations
Urquhart, Bryan; Kurtz, Ben; Kleissl, Jan
2016-09-05
A camera model and associated automated calibration procedure for stationary daytime sky imaging cameras is presented. The specific modeling and calibration needs are motivated by remotely deployed cameras used to forecast solar power production where cameras point skyward and use 180° fisheye lenses. Sun position in the sky and on the image plane provides a simple and automated approach to calibration; special equipment or calibration patterns are not required. Sun position in the sky is modeled using a solar position algorithm (requiring latitude, longitude, altitude and time as inputs). Sun position on the image plane is detected using a simple image processing algorithm. Themore » performance evaluation focuses on the calibration of a camera employing a fisheye lens with an equisolid angle projection, but the camera model is general enough to treat most fixed focal length, central, dioptric camera systems with a photo objective lens. Calibration errors scale with the noise level of the sun position measurement in the image plane, but the calibration is robust across a large range of noise in the sun position. In conclusion, calibration performance on clear days ranged from 0.94 to 1.24 pixels root mean square error.« less
DOT National Transportation Integrated Search
2004-10-01
The parking assistance system evaluated consisted of four outward facing cameras whose images could be presented on a monitor on the center console. The images presented varied in the location of the virtual eye point of the camera (the height above ...
Camera artifacts in IUE spectra
NASA Technical Reports Server (NTRS)
Bruegman, O. W.; Crenshaw, D. M.
1994-01-01
This study of emission line mimicking features in the IUE cameras has produced an atlas of artifiacts in high-dispersion images with an accompanying table of prominent artifacts and a table of prominent artifacts in the raw images along with a medium image of the sky background for each IUE camera.
A low-cost dual-camera imaging system for aerial applicators
USDA-ARS?s Scientific Manuscript database
Agricultural aircraft provide a readily available remote sensing platform as low-cost and easy-to-use consumer-grade cameras are being increasingly used for aerial imaging. In this article, we report on a dual-camera imaging system we recently assembled that can capture RGB and near-infrared (NIR) i...
Left Panorama of Spirit's Landing Site
NASA Technical Reports Server (NTRS)
2004-01-01
Left Panorama of Spirit's Landing Site
This is a version of the first 3-D stereo image from the rover's navigation camera, showing only the view from the left stereo camera onboard the Mars Exploration Rover Spirit. The left and right camera images are combined to produce a 3-D image.Generating Stereoscopic Television Images With One Camera
NASA Technical Reports Server (NTRS)
Coan, Paul P.
1996-01-01
Straightforward technique for generating stereoscopic television images involves use of single television camera translated laterally between left- and right-eye positions. Camera acquires one of images (left- or right-eye image), and video signal from image delayed while camera translated to position where it acquires other image. Length of delay chosen so both images displayed simultaneously or as nearly simultaneously as necessary to obtain stereoscopic effect. Technique amenable to zooming in on small areas within broad scenes. Potential applications include three-dimensional viewing of geological features and meteorological events from spacecraft and aircraft, inspection of workpieces moving along conveyor belts, and aiding ground and water search-and-rescue operations. Also used to generate and display imagery for public education and general information, and possible for medical purposes.
UCXp camera imaging principle and key technologies of data post-processing
NASA Astrophysics Data System (ADS)
Yuan, Fangyan; Li, Guoqing; Zuo, Zhengli; Liu, Jianmin; Wu, Liang; Yu, Xiaoping; Zhao, Haitao
2014-03-01
The large format digital aerial camera product UCXp was introduced into the Chinese market in 2008, the image consists of 17310 columns and 11310 rows with a pixel size of 6 mm. The UCXp camera has many advantages compared with the same generation camera, with multiple lenses exposed almost at the same time and no oblique lens. The camera has a complex imaging process whose principle will be detailed in this paper. On the other hand, the UCXp image post-processing method, including data pre-processing and orthophoto production, will be emphasized in this article. Based on the data of new Beichuan County, this paper will describe the data processing and effects.
Cai, Fuhong; Lu, Wen; Shi, Wuxiong; He, Sailing
2017-11-15
Spatially-explicit data are essential for remote sensing of ecological phenomena. Lately, recent innovations in mobile device platforms have led to an upsurge in on-site rapid detection. For instance, CMOS chips in smart phones and digital cameras serve as excellent sensors for scientific research. In this paper, a mobile device-based imaging spectrometer module (weighing about 99 g) is developed and equipped on a Single Lens Reflex camera. Utilizing this lightweight module, as well as commonly used photographic equipment, we demonstrate its utility through a series of on-site multispectral imaging, including ocean (or lake) water-color sensing and plant reflectance measurement. Based on the experiments we obtain 3D spectral image cubes, which can be further analyzed for environmental monitoring. Moreover, our system can be applied to many kinds of cameras, e.g., aerial camera and underwater camera. Therefore, any camera can be upgraded to an imaging spectrometer with the help of our miniaturized module. We believe it has the potential to become a versatile tool for on-site investigation into many applications.
Single-snapshot 2D color measurement by plenoptic imaging system
NASA Astrophysics Data System (ADS)
Masuda, Kensuke; Yamanaka, Yuji; Maruyama, Go; Nagai, Sho; Hirai, Hideaki; Meng, Lingfei; Tosic, Ivana
2014-03-01
Plenoptic cameras enable capture of directional light ray information, thus allowing applications such as digital refocusing, depth estimation, or multiband imaging. One of the most common plenoptic camera architectures contains a microlens array at the conventional image plane and a sensor at the back focal plane of the microlens array. We leverage the multiband imaging (MBI) function of this camera and develop a single-snapshot, single-sensor high color fidelity camera. Our camera is based on a plenoptic system with XYZ filters inserted in the pupil plane of the main lens. To achieve high color measurement precision of this system, we perform an end-to-end optimization of the system model that includes light source information, object information, optical system information, plenoptic image processing and color estimation processing. Optimized system characteristics are exploited to build an XYZ plenoptic colorimetric camera prototype that achieves high color measurement precision. We describe an application of our colorimetric camera to color shading evaluation of display and show that it achieves color accuracy of ΔE<0.01.
Camera-Model Identification Using Markovian Transition Probability Matrix
NASA Astrophysics Data System (ADS)
Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei
Detecting the (brands and) models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most of images are JPEG compressed before they are output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of Y and Cb components from the JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four different directional Markov processes applied to the image difference JPEG 2-D arrays are used to identify statistical difference caused by image formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding technique, are directly used as features for classification purpose. Multi-class support vector machines (SVM) are used as the classification tool. The effectiveness of our proposed statistical model is demonstrated by large-scale experimental results.
Characteristics of the retinal images of the eye optical systems with implanted intraocular lenses
NASA Astrophysics Data System (ADS)
Siedlecki, Damian; Zając, Marek; Nowak, Jerzy
2007-04-01
Cataract, or opacity of crystalline lens in the human eye is one of the most frequent reasons of blindness nowadays. Removing the pathologically altered crystalline lens and replacing it with artificial implantable intraocular lens (IOL) is practically the only therapy in this illness. There exist a wide variety of artificial IOL types on the medical market, differing in their material and design (shape). In this paper six exemplary models of IOL's made of PMMA, acrylic and silicone are considered. The retinal image quality is analyzed numerically on the basis of Liou-Brennan eye model with these IOL's inserted. Chromatic aberration as well as polychromatic Point Spread Function and Modulation Transfer Function are calculated as most adequate image quality measures. The calculations made with Zemax TM software show the importance of chromatic aberration correction.
Automatic Calibration of an Airborne Imaging System to an Inertial Navigation Unit
NASA Technical Reports Server (NTRS)
Ansar, Adnan I.; Clouse, Daniel S.; McHenry, Michael C.; Zarzhitsky, Dimitri V.; Pagdett, Curtis W.
2013-01-01
This software automatically calibrates a camera or an imaging array to an inertial navigation system (INS) that is rigidly mounted to the array or imager. In effect, it recovers the coordinate frame transformation between the reference frame of the imager and the reference frame of the INS. This innovation can automatically derive the camera-to-INS alignment using image data only. The assumption is that the camera fixates on an area while the aircraft flies on orbit. The system then, fully automatically, solves for the camera orientation in the INS frame. No manual intervention or ground tie point data is required.
Plenoptic Image Motion Deblurring.
Chandramouli, Paramanand; Jin, Meiguang; Perrone, Daniele; Favaro, Paolo
2018-04-01
We propose a method to remove motion blur in a single light field captured with a moving plenoptic camera. Since motion is unknown, we resort to a blind deconvolution formulation, where one aims to identify both the blur point spread function and the latent sharp image. Even in the absence of motion, light field images captured by a plenoptic camera are affected by a non-trivial combination of both aliasing and defocus, which depends on the 3D geometry of the scene. Therefore, motion deblurring algorithms designed for standard cameras are not directly applicable. Moreover, many state of the art blind deconvolution algorithms are based on iterative schemes, where blurry images are synthesized through the imaging model. However, current imaging models for plenoptic images are impractical due to their high dimensionality. We observe that plenoptic cameras introduce periodic patterns that can be exploited to obtain highly parallelizable numerical schemes to synthesize images. These schemes allow extremely efficient GPU implementations that enable the use of iterative methods. We can then cast blind deconvolution of a blurry light field image as a regularized energy minimization to recover a sharp high-resolution scene texture and the camera motion. Furthermore, the proposed formulation can handle non-uniform motion blur due to camera shake as demonstrated on both synthetic and real light field data.
A new approach for beam hardening correction based on the local spectrum distributions
NASA Astrophysics Data System (ADS)
Rasoulpour, Naser; Kamali-Asl, Alireza; Hemmati, Hamidreza
2015-09-01
Energy dependence of material absorption and polychromatic nature of x-ray beams in the Computed Tomography (CT) causes a phenomenon which called "beam hardening". The purpose of this study is to provide a novel approach for Beam Hardening (BH) correction. This approach is based on the linear attenuation coefficients of Local Spectrum Distributions (LSDs) in the various depths of a phantom. The proposed method includes two steps. Firstly, the hardened spectra in various depths of the phantom (or LSDs) are estimated based on the Expectation Maximization (EM) algorithm for arbitrary thickness interval of known materials in the phantom. The performance of LSD estimation technique is evaluated by applying random Gaussian noise to transmission data. Then, the linear attenuation coefficients with regarding to the mean energy of LSDs are obtained. Secondly, a correction function based on the calculated attenuation coefficients is derived in order to correct polychromatic raw data. Since a correction function has been used for the conversion of the polychromatic data to the monochromatic data, the effect of BH in proposed reconstruction must be reduced in comparison with polychromatic reconstruction. The proposed approach has been assessed in the phantoms which involve less than two materials, but the correction function has been extended for using in the constructed phantoms with more than two materials. The relative mean energy difference in the LSDs estimations based on the noise-free transmission data was less than 1.5%. Also, it shows an acceptable value when a random Gaussian noise is applied to the transmission data. The amount of cupping artifact in the proposed reconstruction method has been effectively reduced and proposed reconstruction profile is uniform more than polychromatic reconstruction profile.
Dense depth maps from correspondences derived from perceived motion
NASA Astrophysics Data System (ADS)
Kirby, Richard; Whitaker, Ross
2017-01-01
Many computer vision applications require finding corresponding points between images and using the corresponding points to estimate disparity. Today's correspondence finding algorithms primarily use image features or pixel intensities common between image pairs. Some 3-D computer vision applications, however, do not produce the desired results using correspondences derived from image features or pixel intensities. Two examples are the multimodal camera rig and the center region of a coaxial camera rig. We present an image correspondence finding technique that aligns pairs of image sequences using optical flow fields. The optical flow fields provide information about the structure and motion of the scene, which are not available in still images but can be used in image alignment. We apply the technique to a dual focal length stereo camera rig consisting of a visible light-infrared camera pair and to a coaxial camera rig. We test our method on real image sequences and compare our results with the state-of-the-art multimodal and structure from motion (SfM) algorithms. Our method produces more accurate depth and scene velocity reconstruction estimates than the state-of-the-art multimodal and SfM algorithms.
Relating transverse ray error and light fields in plenoptic camera images
NASA Astrophysics Data System (ADS)
Schwiegerling, Jim; Tyo, J. Scott
2013-09-01
Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. The camera image is focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The resultant image is an array of circular exit pupil images, each corresponding to the overlying lenslet. The position of the lenslet encodes the spatial information of the scene, whereas as the sensor pixels encode the angular information for light incident on the lenslet. The 4D light field is therefore described by the 2D spatial information and 2D angular information captured by the plenoptic camera. In aberration theory, the transverse ray error relates the pupil coordinates of a given ray to its deviation from the ideal image point in the image plane and is consequently a 4D function as well. We demonstrate a technique for modifying the traditional transverse ray error equations to recover the 4D light field of a general scene. In the case of a well corrected optical system, this light field is easily related to the depth of various objects in the scene. Finally, the effects of sampling with both the lenslet array and the camera sensor on the 4D light field data are analyzed to illustrate the limitations of such systems.
Grazing-Incidence Neutron Optics based on Wolter Geometries
NASA Technical Reports Server (NTRS)
Gubarev, M. V.; Ramsey, B. D.; Mildner, D. F. R.
2008-01-01
The feasibility of grazing-incidence neutron imaging optics based on the Wolter geometries have been successfully demonstrated. Biological microscopy, neutron radiography, medical imaging, neutron crystallography and boron neutron capture therapy would benefit from high resolution focusing neutron optics. Two bounce optics can also be used to focus neutrons in SANS experiments. Here, the use of the optics would result in lower values of obtainable scattering angles. The high efficiency of the optics permits a decrease in the minimum scattering vector without lowering the neutron intensity on sample. In this application, a significant advantage of the reflective optics over refractive optics is that the focus is independent of wavelength, so that the technique can be applied to polychromatic beams at pulsed neutron sources.
Chen, Sen; Luo, Sheng Nian
2018-03-01
Polychromatic X-ray sources can be useful for photon-starved small-angle X-ray scattering given their high spectral fluxes. Their bandwidths, however, are 10-100 times larger than those using monochromators. To explore the feasibility, ideal scattering curves of homogeneous spherical particles for polychromatic X-rays are calculated and analyzed using the Guinier approach, maximum entropy and regularization methods. Monodisperse and polydisperse systems are explored. The influence of bandwidth and asymmetric spectra shape are explored via Gaussian and half-Gaussian spectra. Synchrotron undulator spectra represented by two undulator sources of the Advanced Photon Source are examined as an example, as regards the influence of asymmetric harmonic shape, fundamental harmonic bandwidth and high harmonics. The effects of bandwidth, spectral shape and high harmonics on particle size determination are evaluated quantitatively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Sen; Luo, Sheng-Nian
Polychromatic X-ray sources can be useful for photon-starved small-angle X-ray scattering given their high spectral fluxes. Their bandwidths, however, are 10–100 times larger than those using monochromators. To explore the feasibility, ideal scattering curves of homogeneous spherical particles for polychromatic X-rays are calculated and analyzed using the Guinier approach, maximum entropy and regularization methods. Monodisperse and polydisperse systems are explored. The influence of bandwidth and asymmetric spectra shape are exploredviaGaussian and half-Gaussian spectra. Synchrotron undulator spectra represented by two undulator sources of the Advanced Photon Source are examined as an example, as regards the influence of asymmetric harmonic shape, fundamentalmore » harmonic bandwidth and high harmonics. The effects of bandwidth, spectral shape and high harmonics on particle size determination are evaluated quantitatively.« less
Galli, C
2001-07-01
It is well established that the use of polychromatic radiation in spectrophotometric assays leads to excursions from the Beer-Lambert limit. This Note models the resulting systematic error as a function of assay spectral width, slope of molecular extinction coefficient, and analyte concentration. The theoretical calculations are compared with recent experimental results; a parameter is introduced which can be used to estimate the magnitude of the systematic error in both chromatographic and nonchromatographic spectrophotometric assays. It is important to realize that the polychromatic radiation employed in common laboratory equipment can yield assay errors up to approximately 4%, even at absorption levels generally considered 'safe' (i.e. absorption <1). Thus careful consideration of instrumental spectral width, analyte concentration, and slope of molecular extinction coefficient is required to ensure robust analytical methods.
Route constraints model based on polychromatic sets
NASA Astrophysics Data System (ADS)
Yin, Xianjun; Cai, Chao; Wang, Houjun; Li, Dongwu
2018-03-01
With the development of unmanned aerial vehicle (UAV) technology, the fields of its application are constantly expanding. The mission planning of UAV is especially important, and the planning result directly influences whether the UAV can accomplish the task. In order to make the results of mission planning for unmanned aerial vehicle more realistic, it is necessary to consider not only the physical properties of the aircraft, but also the constraints among the various equipment on the UAV. However, constraints among the equipment of UAV are complex, and the equipment has strong diversity and variability, which makes these constraints difficult to be described. In order to solve the above problem, this paper, referring to the polychromatic sets theory used in the advanced manufacturing field to describe complex systems, presents a mission constraint model of UAV based on polychromatic sets.
NASA Astrophysics Data System (ADS)
Soliz, Peter; Nemeth, Sheila C.; Barriga, E. Simon; Harding, Simon P.; Lewallen, Susan; Taylor, Terrie E.; MacCormick, Ian J.; Joshi, Vinayak S.
2016-03-01
The purpose of this study was to test the suitability of three available camera technologies (desktop, portable, and iphone based) for imaging comatose children who presented with clinical symptoms of malaria. Ultimately, the results of the project would form the basis for a design of a future camera to screen for malaria retinopathy (MR) in a resource challenged environment. The desktop, portable, and i-phone based cameras were represented by the Topcon, Pictor Plus, and Peek cameras, respectively. These cameras were tested on N=23 children presenting with symptoms of cerebral malaria (CM) at a malaria clinic, Queen Elizabeth Teaching Hospital in Malawi, Africa. Each patient was dilated for binocular indirect ophthalmoscopy (BIO) exam by an ophthalmologist followed by imaging with all three cameras. Each of the cases was graded according to an internationally established protocol and compared to the BIO as the clinical ground truth. The reader used three principal retinal lesions as markers for MR: hemorrhages, retinal whitening, and vessel discoloration. The study found that the mid-priced Pictor Plus hand-held camera performed considerably better than the lower price mobile phone-based camera, and slightly the higher priced table top camera. When comparing the readings of digital images against the clinical reference standard (BIO), the Pictor Plus camera had sensitivity and specificity for MR of 100% and 87%, respectively. This compares to a sensitivity and specificity of 87% and 75% for the i-phone based camera and 100% and 75% for the desktop camera. The drawback of all the cameras were their limited field of view which did not allow complete view of the periphery where vessel discoloration occurs most frequently. The consequence was that vessel discoloration was not addressed in this study. None of the cameras offered real-time image quality assessment to ensure high quality images to afford the best possible opportunity for reading by a remotely located specialist.
NASA Technical Reports Server (NTRS)
1992-01-01
This document describes the Advanced Imaging System CCD based camera. The AIS1 camera system was developed at Photometric Ltd. in Tucson, Arizona as part of a Phase 2 SBIR contract No. NAS5-30171 from the NASA/Goddard Space Flight Center in Greenbelt, Maryland. The camera project was undertaken as a part of the Space Telescope Imaging Spectrograph (STIS) project. This document is intended to serve as a complete manual for the use and maintenance of the camera system. All the different parts of the camera hardware and software are discussed and complete schematics and source code listings are provided.
NASA Astrophysics Data System (ADS)
Mi, Yuhe; Huang, Yifan; Li, Lin
2015-08-01
Based on the location technique of beacon photogrammetry, Dual Camera Photogrammetry (DCP) algorithm was used to assist helicopters landing on the ship. In this paper, ZEMAX was used to simulate the two Charge Coupled Device (CCD) cameras imaging four beacons on both sides of the helicopter and output the image to MATLAB. Target coordinate systems, image pixel coordinate systems, world coordinate systems and camera coordinate systems were established respectively. According to the ideal pin-hole imaging model, the rotation matrix and translation vector of the target coordinate systems and the camera coordinate systems could be obtained by using MATLAB to process the image information and calculate the linear equations. On the basis mentioned above, ambient temperature and the positions of the beacons and cameras were changed in ZEMAX to test the accuracy of the DCP algorithm in complex sea status. The numerical simulation shows that in complex sea status, the position measurement accuracy can meet the requirements of the project.
Ranging Apparatus and Method Implementing Stereo Vision System
NASA Technical Reports Server (NTRS)
Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1997-01-01
A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
A 3D photographic capsule endoscope system with full field of view
NASA Astrophysics Data System (ADS)
Ou-Yang, Mang; Jeng, Wei-De; Lai, Chien-Cheng; Kung, Yi-Chinn; Tao, Kuan-Heng
2013-09-01
Current capsule endoscope uses one camera to capture the surface image in the intestine. It can only observe the abnormal point, but cannot know the exact information of this abnormal point. Using two cameras can generate 3D images, but the visual plane changes while capsule endoscope rotates. It causes that two cameras can't capture the images information completely. To solve this question, this research provides a new kind of capsule endoscope to capture 3D images, which is 'A 3D photographic capsule endoscope system'. The system uses three cameras to capture images in real time. The advantage is increasing the viewing range up to 2.99 times respect to the two camera system. The system can accompany 3D monitor provides the exact information of symptom points, helping doctors diagnose the disease.
A detailed comparison of single-camera light-field PIV and tomographic PIV
NASA Astrophysics Data System (ADS)
Shi, Shengxian; Ding, Junfei; Atkinson, Callum; Soria, Julio; New, T. H.
2018-03-01
This paper conducts a comprehensive study between the single-camera light-field particle image velocimetry (LF-PIV) and the multi-camera tomographic particle image velocimetry (Tomo-PIV). Simulation studies were first performed using synthetic light-field and tomographic particle images, which extensively examine the difference between these two techniques by varying key parameters such as pixel to microlens ratio (PMR), light-field camera Tomo-camera pixel ratio (LTPR), particle seeding density and tomographic camera number. Simulation results indicate that the single LF-PIV can achieve accuracy consistent with that of multi-camera Tomo-PIV, but requires the use of overall greater number of pixels. Experimental studies were then conducted by simultaneously measuring low-speed jet flow with single-camera LF-PIV and four-camera Tomo-PIV systems. Experiments confirm that given a sufficiently high pixel resolution, a single-camera LF-PIV system can indeed deliver volumetric velocity field measurements for an equivalent field of view with a spatial resolution commensurate with those of multi-camera Tomo-PIV system, enabling accurate 3D measurements in applications where optical access is limited.
High dynamic range image acquisition based on multiplex cameras
NASA Astrophysics Data System (ADS)
Zeng, Hairui; Sun, Huayan; Zhang, Tinghua
2018-03-01
High dynamic image is an important technology of photoelectric information acquisition, providing higher dynamic range and more image details, and it can better reflect the real environment, light and color information. Currently, the method of high dynamic range image synthesis based on different exposure image sequences cannot adapt to the dynamic scene. It fails to overcome the effects of moving targets, resulting in the phenomenon of ghost. Therefore, a new high dynamic range image acquisition method based on multiplex cameras system was proposed. Firstly, different exposure images sequences were captured with the camera array, using the method of derivative optical flow based on color gradient to get the deviation between images, and aligned the images. Then, the high dynamic range image fusion weighting function was established by combination of inverse camera response function and deviation between images, and was applied to generated a high dynamic range image. The experiments show that the proposed method can effectively obtain high dynamic images in dynamic scene, and achieves good results.
Convolutional Neural Network-Based Shadow Detection in Images Using Visible Light Camera Sensor.
Kim, Dong Seop; Arsalan, Muhammad; Park, Kang Ryoung
2018-03-23
Recent developments in intelligence surveillance camera systems have enabled more research on the detection, tracking, and recognition of humans. Such systems typically use visible light cameras and images, in which shadows make it difficult to detect and recognize the exact human area. Near-infrared (NIR) light cameras and thermal cameras are used to mitigate this problem. However, such instruments require a separate NIR illuminator, or are prohibitively expensive. Existing research on shadow detection in images captured by visible light cameras have utilized object and shadow color features for detection. Unfortunately, various environmental factors such as illumination change and brightness of background cause detection to be a difficult task. To overcome this problem, we propose a convolutional neural network-based shadow detection method. Experimental results with a database built from various outdoor surveillance camera environments, and from the context-aware vision using image-based active recognition (CAVIAR) open database, show that our method outperforms previous works.
Convolutional Neural Network-Based Shadow Detection in Images Using Visible Light Camera Sensor
Kim, Dong Seop; Arsalan, Muhammad; Park, Kang Ryoung
2018-01-01
Recent developments in intelligence surveillance camera systems have enabled more research on the detection, tracking, and recognition of humans. Such systems typically use visible light cameras and images, in which shadows make it difficult to detect and recognize the exact human area. Near-infrared (NIR) light cameras and thermal cameras are used to mitigate this problem. However, such instruments require a separate NIR illuminator, or are prohibitively expensive. Existing research on shadow detection in images captured by visible light cameras have utilized object and shadow color features for detection. Unfortunately, various environmental factors such as illumination change and brightness of background cause detection to be a difficult task. To overcome this problem, we propose a convolutional neural network-based shadow detection method. Experimental results with a database built from various outdoor surveillance camera environments, and from the context-aware vision using image-based active recognition (CAVIAR) open database, show that our method outperforms previous works. PMID:29570690
SU-D-BRC-07: System Design for a 3D Volumetric Scintillation Detector Using SCMOS Cameras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darne, C; Robertson, D; Alsanea, F
2016-06-15
Purpose: The purpose of this project is to build a volumetric scintillation detector for quantitative imaging of 3D dose distributions of proton beams accurately in near real-time. Methods: The liquid scintillator (LS) detector consists of a transparent acrylic tank (20×20×20 cm{sup 3}) filled with a liquid scintillator that when irradiated with protons generates scintillation light. To track rapid spatial and dose variations in spot scanning proton beams we used three scientific-complementary metal-oxide semiconductor (sCMOS) imagers (2560×2160 pixels). The cameras collect optical signal from three orthogonal projections. To reduce system footprint two mirrors oriented at 45° to the tank surfaces redirectmore » scintillation light to cameras for capturing top and right views. Selection of fixed focal length objective lenses for these cameras was based on their ability to provide large depth of field (DoF) and required field of view (FoV). Multiple cross-hairs imprinted on the tank surfaces allow for image corrections arising from camera perspective and refraction. Results: We determined that by setting sCMOS to 16-bit dynamic range, truncating its FoV (1100×1100 pixels) to image the entire volume of the LS detector, and using 5.6 msec integration time imaging rate can be ramped up to 88 frames per second (fps). 20 mm focal length lens provides a 20 cm imaging DoF and 0.24 mm/pixel resolution. Master-slave camera configuration enable the slaves to initiate image acquisition instantly (within 2 µsec) after receiving a trigger signal. A computer with 128 GB RAM was used for spooling images from the cameras and can sustain a maximum recording time of 2 min per camera at 75 fps. Conclusion: The three sCMOS cameras are capable of high speed imaging. They can therefore be used for quick, high-resolution, and precise mapping of dose distributions from scanned spot proton beams in three dimensions.« less
Spin-to-Orbital Angular Momentum Mapping of Polychromatic Light
NASA Astrophysics Data System (ADS)
Rafayelyan, Mushegh; Brasselet, Etienne
2018-05-01
Reflective geometric phase flat optics made from chiral anisotropic media recently unveiled a promising route towards polychromatic beam shaping. However, these broadband benefits are strongly mitigated by the fact that flipping the incident helicity does not ensure geometric phase reversal. Here we overcome this fundamental limitation by a simple and robust add-on whose advantages are emphasized in the context of spin-to-orbital angular momentum mapping.
Poland, Michael P.; Dzurisin, Daniel; LaHusen, Richard G.; Major, John J.; Lapcewich, Dennis; Endo, Elliot T.; Gooding, Daniel J.; Schilling, Steve P.; Janda, Christine G.; Sherrod, David R.; Scott, William E.; Stauffer, Peter H.
2008-01-01
Images from a Web-based camera (Webcam) located 8 km north of Mount St. Helens and a network of remote, telemetered digital cameras were used to observe eruptive activity at the volcano between October 2004 and February 2006. The cameras offered the advantages of low cost, low power, flexibility in deployment, and high spatial and temporal resolution. Images obtained from the cameras provided important insights into several aspects of dome extrusion, including rockfalls, lava extrusion rates, and explosive activity. Images from the remote, telemetered digital cameras were assembled into time-lapse animations of dome extrusion that supported monitoring, research, and outreach efforts. The wide-ranging utility of remote camera imagery should motivate additional work, especially to develop the three-dimensional quantitative capabilities of terrestrial camera networks.
Overview of Digital Forensics Algorithms in Dslr Cameras
NASA Astrophysics Data System (ADS)
Aminova, E.; Trapeznikov, I.; Priorov, A.
2017-05-01
The widespread usage of the mobile technologies and the improvement of the digital photo devices getting has led to more frequent cases of falsification of images including in the judicial practice. Consequently, the actual task for up-to-date digital image processing tools is the development of algorithms for determining the source and model of the DSLR (Digital Single Lens Reflex) camera and improve image formation algorithms. Most research in this area based on the mention that the extraction of unique sensor trace of DSLR camera could be possible on the certain stage of the imaging process into the camera. It is considered that the study focuses on the problem of determination of unique feature of DSLR cameras based on optical subsystem artifacts and sensor noises.
Pulsed-neutron imaging by a high-speed camera and center-of-gravity processing
NASA Astrophysics Data System (ADS)
Mochiki, K.; Uragaki, T.; Koide, J.; Kushima, Y.; Kawarabayashi, J.; Taketani, A.; Otake, Y.; Matsumoto, Y.; Su, Y.; Hiroi, K.; Shinohara, T.; Kai, T.
2018-01-01
Pulsed-neutron imaging is attractive technique in the research fields of energy-resolved neutron radiography and RANS (RIKEN) and RADEN (J-PARC/JAEA) are small and large accelerator-driven pulsed-neutron facilities for its imaging, respectively. To overcome the insuficient spatial resolution of the conunting type imaging detectors like μ NID, nGEM and pixelated detectors, camera detectors combined with a neutron color image intensifier were investigated. At RANS center-of-gravity technique was applied to spots image obtained by a CCD camera and the technique was confirmed to be effective for improving spatial resolution. At RADEN a high-frame-rate CMOS camera was used and super resolution technique was applied and it was recognized that the spatial resolution was futhermore improved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grawe, J.; Amneus, H.; Zetterberg, G.
1993-01-01
The frequencies and DNA distributions of micronuclei in polychromatic erythrocytes from the bone marrow and peripheral blood of mice after four different treatments were determined by flow cytometry. Polychromatic erthrocytes were detected using the fluorescent RNA stain thiazole orange, while micronuclei were detected with the DNA stain Hoechst 33342. The treatments were X-irradiation (1 Gy), cyclophosphamide (30 mg/kg), vincristine sulfphate (0.08 mg/kg), and cochicine (1 mg/kg). All treatments showed increased frequencies of micronucleated polychromatic erythrocytes at 30h after treatment in the bone marrow (colchicine 50h) and at 50h in the peripheral blood. The clostogenic agents X-irradiation and cyclophosphamide and themore » spindle poisons vincristine sulphate and cochicine could be grouped according to the fluorescent characteristics of the induced micronuclei as well as the relative frequency of small (0.5-2% if the diploid G1 DNA content) and large (2-10%) micronuclei. In the peripheral blood the relative frequency of large micronuclei was lower than in the bone marrow, indicating that they were partly eliminated before entrance into the peripheral circulation. The nature of presumed micronuclei was verified by sorting. The potential of this approach to give information on the mechanism of induction of micronuclei is discussed.« less
NASA Astrophysics Data System (ADS)
Sena, G.; Almeida, A. P.; Braz, D.; Nogueira, L. P.; Soares, J.; Azambuja, P.; Gonzalez, M. S.; Tromba, G.; Barroso, R. C.
2015-10-01
The recent years advancements in microtomography have increased the achievable resolution and contrast, making this relatively inexpensive and a widely available technology, potentially useful for studies of insect's internal morphology. Phase Contrast X-Ray Synchrotron Microtomography (SR-PhC-μCT) is a non-destructive technique that allows the microanatomical investigations of Rhodnius prolixus, one of the most important insect vectors of Trypanosoma cruzi, the etiologic agent of Chagas' disease. In Latin America, vector control is the most useful method to prevent Chagas' disease, and a detailed knowledge of R. prolixus' interior structures is crucial for a better understanding of their function and evolution. Traditionally, in both biological morphology and anatomy, the internal structures of whole organisms or parts of them are accessed by dissecting or histological serial sectioning; so studying the internal structures of R. prolixus' head using SR-PhC-μCT is of great importance in researches on vector control. In this work, volume-rendered SR-PhC-μCT images of the heads of selected R. prolixus were obtained using the new set-up available at the SYRMEP beamline of ELETTRA (Trieste, Italy). In this new set-up, the outcoming beam from the ring is restrained before the monochromator and in a devoted end-station, absorption and phase contrast radiography and tomography set-up are available. The images obtained with polychromatic X-ray beam in phase contrast regimen and 2 μm resolution, showed details and organs of R. prolixus never seen before with SR-PhC-μCT.
Automatic Calibration of Stereo-Cameras Using Ordinary Chess-Board Patterns
NASA Astrophysics Data System (ADS)
Prokos, A.; Kalisperakis, I.; Petsa, E.; Karras, G.
2012-07-01
Automation of camera calibration is facilitated by recording coded 2D patterns. Our toolbox for automatic camera calibration using images of simple chess-board patterns is freely available on the Internet. But it is unsuitable for stereo-cameras whose calibration implies recovering camera geometry and their true-to-scale relative orientation. In contrast to all reported methods requiring additional specific coding to establish an object space coordinate system, a toolbox for automatic stereo-camera calibration relying on ordinary chess-board patterns is presented here. First, the camera calibration algorithm is applied to all image pairs of the pattern to extract nodes of known spacing, order them in rows and columns, and estimate two independent camera parameter sets. The actual node correspondences on stereo-pairs remain unknown. Image pairs of a textured 3D scene are exploited for finding the fundamental matrix of the stereo-camera by applying RANSAC to point matches established with the SIFT algorithm. A node is then selected near the centre of the left image; its match on the right image is assumed as the node closest to the corresponding epipolar line. This yields matches for all nodes (since these have already been ordered), which should also satisfy the 2D epipolar geometry. Measures for avoiding mismatching are taken. With automatically estimated initial orientation values, a bundle adjustment is performed constraining all pairs on a common (scaled) relative orientation. Ambiguities regarding the actual exterior orientations of the stereo-camera with respect to the pattern are irrelevant. Results from this automatic method show typical precisions not above 1/4 pixels for 640×480 web cameras.
A Reconfigurable Real-Time Compressive-Sampling Camera for Biological Applications
Fu, Bo; Pitter, Mark C.; Russell, Noah A.
2011-01-01
Many applications in biology, such as long-term functional imaging of neural and cardiac systems, require continuous high-speed imaging. This is typically not possible, however, using commercially available systems. The frame rate and the recording time of high-speed cameras are limited by the digitization rate and the capacity of on-camera memory. Further restrictions are often imposed by the limited bandwidth of the data link to the host computer. Even if the system bandwidth is not a limiting factor, continuous high-speed acquisition results in very large volumes of data that are difficult to handle, particularly when real-time analysis is required. In response to this issue many cameras allow a predetermined, rectangular region of interest (ROI) to be sampled, however this approach lacks flexibility and is blind to the image region outside of the ROI. We have addressed this problem by building a camera system using a randomly-addressable CMOS sensor. The camera has a low bandwidth, but is able to capture continuous high-speed images of an arbitrarily defined ROI, using most of the available bandwidth, while simultaneously acquiring low-speed, full frame images using the remaining bandwidth. In addition, the camera is able to use the full-frame information to recalculate the positions of targets and update the high-speed ROIs without interrupting acquisition. In this way the camera is capable of imaging moving targets at high-speed while simultaneously imaging the whole frame at a lower speed. We have used this camera system to monitor the heartbeat and blood cell flow of a water flea (Daphnia) at frame rates in excess of 1500 fps. PMID:22028852
Image quality assessment for selfies with and without super resolution
NASA Astrophysics Data System (ADS)
Kubota, Aya; Gohshi, Seiichi
2018-04-01
With the advent of cellphone cameras, in particular, on smartphones, many people now take photos of themselves alone and with others in the frame; such photos are popularly known as "selfies". Most smartphones are equipped with two cameras: the front-facing and rear cameras. The camera located on the back of the smartphone is referred to as the "out-camera," whereas the one located on the front of the smartphone is called the "in-camera." In-cameras are mainly used for selfies. Some smartphones feature high-resolution cameras. However, the original image quality cannot be obtained because smartphone cameras often have low-performance lenses. Super resolution (SR) is one of the recent technological advancements that has increased image resolution. We developed a new SR technology that can be processed on smartphones. Smartphones with new SR technology are currently available in the market have already registered sales. However, the effective use of new SR technology has not yet been verified. Comparing the image quality with and without SR on smartphone display is necessary to confirm the usefulness of this new technology. Methods that are based on objective and subjective assessments are required to quantitatively measure image quality. It is known that the typical object assessment value, such as Peak Signal to Noise Ratio (PSNR), does not go together with how we feel when we assess image/video. When digital broadcast started, the standard was determined using subjective assessment. Although subjective assessment usually comes at high cost because of personnel expenses for observers, the results are highly reproducible when they are conducted under right conditions and statistical analysis. In this study, the subjective assessment results for selfie images are reported.
An efficient multiple exposure image fusion in JPEG domain
NASA Astrophysics Data System (ADS)
Hebbalaguppe, Ramya; Kakarala, Ramakrishna
2012-01-01
In this paper, we describe a method to fuse multiple images taken with varying exposure times in the JPEG domain. The proposed algorithm finds its application in HDR image acquisition and image stabilization for hand-held devices like mobile phones, music players with cameras, digital cameras etc. Image acquisition at low light typically results in blurry and noisy images for hand-held camera's. Altering camera settings like ISO sensitivity, exposure times and aperture for low light image capture results in noise amplification, motion blur and reduction of depth-of-field respectively. The purpose of fusing multiple exposures is to combine the sharp details of the shorter exposure images with high signal-to-noise-ratio (SNR) of the longer exposure images. The algorithm requires only a single pass over all images, making it efficient. It comprises of - sigmoidal boosting of shorter exposed images, image fusion, artifact removal and saturation detection. Algorithm does not need more memory than a single JPEG macro block to be kept in memory making it feasible to be implemented as the part of a digital cameras hardware image processing engine. The Artifact removal step reuses the JPEGs built-in frequency analysis and hence benefits from the considerable optimization and design experience that is available for JPEG.
Thermal Effects on Camera Focal Length in Messenger Star Calibration and Orbital Imaging
NASA Astrophysics Data System (ADS)
Burmeister, S.; Elgner, S.; Preusker, F.; Stark, A.; Oberst, J.
2018-04-01
We analyse images taken by the MErcury Surface, Space ENviorment, GEochemistry, and Ranging (MESSENGER) spacecraft for the camera's thermal response in the harsh thermal environment near Mercury. Specifically, we study thermally induced variations in focal length of the Mercury Dual Imaging System (MDIS). Within the several hundreds of images of star fields, the Wide Angle Camera (WAC) typically captures up to 250 stars in one frame of the panchromatic channel. We measure star positions and relate these to the known star coordinates taken from the Tycho-2 catalogue. We solve for camera pointing, the focal length parameter and two non-symmetrical distortion parameters for each image. Using data from the temperature sensors on the camera focal plane we model a linear focal length function in the form of f(T) = A0 + A1 T. Next, we use images from MESSENGER's orbital mapping mission. We deal with large image blocks, typically used for the production of a high-resolution digital terrain models (DTM). We analyzed images from the combined quadrangles H03 and H07, a selected region, covered by approx. 10,600 images, in which we identified about 83,900 tiepoints. Using bundle block adjustments, we solved for the unknown coordinates of the control points, the pointing of the camera - as well as the camera's focal length. We then fit the above linear function with respect to the focal plane temperature. As a result, we find a complex response of the camera to thermal conditions of the spacecraft. To first order, we see a linear increase by approx. 0.0107 mm per degree temperature for the Narrow-Angle Camera (NAC). This is in agreement with the observed thermal response seen in images of the panchromatic channel of the WAC. Unfortunately, further comparisons of results from the two methods, both of which use different portions of the available image data, are limited. If leaving uncorrected, these effects may pose significant difficulties in the photogrammetric analysis, specifically these may be responsible for erroneous longwavelength trends in topographic models.
Neldam, Camilla Albeck; Pinholt, Else Marie
2014-09-01
Today X-ray micro computer tomography (μCT) imaging is used to investigate bone microarchitecture. μCT imaging is obtained by polychromatic X-ray beams, resulting in images with beam hardening artifacts, resolution levels at 10 μm, geometrical blurring, and lack of contrasts. When μCT is coupled to synchrotron sources (SRμCT) a spatial resolution up to one tenth of a μm may be achieved. A review of the literature concerning SRμCT was performed to investigate its usability and its strength in visualizing fine bone structures, vessels, and microarchitecture of bone. Although mainly limited to in vitro examinations, SRμCT is considered as a gold standard to image trabecular bone microarchitecture since it is possible in a 3D manner to visualize fine structural elements within mineralized tissue such as osteon boundaries, rods and plates structures, cement lines, and differences in mineralization. Copyright © 2013 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid
2016-06-13
Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point-photo-detector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White light imaging of three different brightness simultaneously viewed targets, that is not possible by the CMOS sensor, is achieved by the CAOS-CMOS camera demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems.
Color reproduction software for a digital still camera
NASA Astrophysics Data System (ADS)
Lee, Bong S.; Park, Du-Sik; Nam, Byung D.
1998-04-01
We have developed a color reproduction software for a digital still camera. The image taken by the camera was colorimetrically reproduced on the monitor after characterizing the camera and the monitor, and color matching between two devices. The reproduction was performed at three levels; level processing, gamma correction, and color transformation. The image contrast was increased after the level processing adjusting the level of dark and bright portions of the image. The relationship between the level processed digital values and the measured luminance values of test gray samples was calculated, and the gamma of the camera was obtained. The method for getting the unknown monitor gamma was proposed. As a result, the level processed values were adjusted by the look-up table created by the camera and the monitor gamma correction. For a color transformation matrix for the camera, 3 by 3 or 3 by 4 matrix was used, which was calculated by the regression between the gamma corrected values and the measured tristimulus values of each test color samples the various reproduced images were displayed on the dialogue box implemented in our software, which were generated according to four illuminations for the camera and three color temperatures for the monitor. An user can easily choose he best reproduced image comparing each others.
Blur spot limitations in distal endoscope sensors
NASA Astrophysics Data System (ADS)
Yaron, Avi; Shechterman, Mark; Horesh, Nadav
2006-02-01
In years past, the picture quality of electronic video systems was limited by the image sensor. In the present, the resolution of miniature image sensors, as in medical endoscopy, is typically superior to the resolution of the optical system. This "excess resolution" is utilized by Visionsense to create stereoscopic vision. Visionsense has developed a single chip stereoscopic camera that multiplexes the horizontal dimension of the image sensor into two (left and right) images, compensates the blur phenomena, and provides additional depth resolution without sacrificing planar resolution. The camera is based on a dual-pupil imaging objective and an image sensor coated by an array of microlenses (a plenoptic camera). The camera has the advantage of being compact, providing simultaneous acquisition of left and right images, and offering resolution comparable to a dual chip stereoscopic camera with low to medium resolution imaging lenses. A stereoscopic vision system provides an improved 3-dimensional perspective of intra-operative sites that is crucial for advanced minimally invasive surgery and contributes to surgeon performance. An additional advantage of single chip stereo sensors is improvement of tolerance to electronic signal noise.
Automatic Orientation of Large Blocks of Oblique Images
NASA Astrophysics Data System (ADS)
Rupnik, E.; Nex, F.; Remondino, F.
2013-05-01
Nowadays, multi-camera platforms combining nadir and oblique cameras are experiencing a revival. Due to their advantages such as ease of interpretation, completeness through mitigation of occluding areas, as well as system accessibility, they have found their place in numerous civil applications. However, automatic post-processing of such imagery still remains a topic of research. Configuration of cameras poses a challenge on the traditional photogrammetric pipeline used in commercial software and manual measurements are inevitable. For large image blocks it is certainly an impediment. Within theoretical part of the work we review three common least square adjustment methods and recap on possible ways for a multi-camera system orientation. In the practical part we present an approach that successfully oriented a block of 550 images acquired with an imaging system composed of 5 cameras (Canon Eos 1D Mark III) with different focal lengths. Oblique cameras are rotated in the four looking directions (forward, backward, left and right) by 45° with respect to the nadir camera. The workflow relies only upon open-source software: a developed tool to analyse image connectivity and Apero to orient the image block. The benefits of the connectivity tool are twofold: in terms of computational time and success of Bundle Block Adjustment. It exploits the georeferenced information provided by the Applanix system in constraining feature point extraction to relevant images only, and guides the concatenation of images during the relative orientation. Ultimately an absolute transformation is performed resulting in mean re-projection residuals equal to 0.6 pix.
Error modeling and analysis of star cameras for a class of 1U spacecraft
NASA Astrophysics Data System (ADS)
Fowler, David M.
As spacecraft today become increasingly smaller, the demand for smaller components and sensors rises as well. The smartphone, a cutting edge consumer technology, has impressive collections of both sensors and processing capabilities and may have the potential to fill this demand in the spacecraft market. If the technologies of a smartphone can be used in space, the cost of building miniature satellites would drop significantly and give a boost to the aerospace and scientific communities. Concentrating on the problem of spacecraft orientation, this study sets ground to determine the capabilities of a smartphone camera when acting as a star camera. Orientations determined from star images taken from a smartphone camera are compared to those of higher quality cameras in order to determine the associated accuracies. The results of the study reveal the abilities of low-cost off-the-shelf imagers in space and give a starting point for future research in the field. The study began with a complete geometric calibration of each analyzed imager such that all comparisons start from the same base. After the cameras were calibrated, image processing techniques were introduced to correct for atmospheric, lens, and image sensor effects. Orientations for each test image are calculated through methods of identifying the stars exposed on each image. Analyses of these orientations allow the overall errors of each camera to be defined and provide insight into the abilities of low-cost imagers.
NASA Technical Reports Server (NTRS)
1996-01-01
PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.
Center for Coastline Security Technology, Year 3
2008-05-01
Polarization control for 3D Imaging with the Sony SRX-R105 Digital Cinema Projectors 3.4 HDMAX Camera and Sony SRX-R105 Projector Configuration for 3D...HDMAX Camera Pair Figure 3.2 Sony SRX-R105 Digital Cinema Projector Figure 3.3 Effect of camera rotation on projected overlay image. Figure 3.4...system that combines a pair of FAU’s HD-MAX video cameras with a pair of Sony SRX-R105 digital cinema projectors for stereo imaging and projection
NASA Astrophysics Data System (ADS)
Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu
2015-04-01
For a new kind of retina-like senor camera and a traditional rectangular sensor camera, dual cameras acquisition and display system need to be built. We introduce the principle and the development of retina-like senor. Image coordinates transformation and interpolation based on sub-pixel interpolation need to be realized for our retina-like sensor's special pixels distribution. The hardware platform is composed of retina-like senor camera, rectangular sensor camera, image grabber and PC. Combined the MIL and OpenCV library, the software program is composed in VC++ on VS 2010. Experience results show that the system can realizes two cameras' acquisition and display.
The Effect of Camera Angle and Image Size on Source Credibility and Interpersonal Attraction.
ERIC Educational Resources Information Center
McCain, Thomas A.; Wakshlag, Jacob J.
The purpose of this study was to examine the effects of two nonverbal visual variables (camera angle and image size) on variables developed in a nonmediated context (source credibility and interpersonal attraction). Camera angle and image size were manipulated in eight video taped television newscasts which were subsequently presented to eight…
Development of a piecewise linear omnidirectional 3D image registration method
NASA Astrophysics Data System (ADS)
Bae, Hyunsoo; Kang, Wonjin; Lee, SukGyu; Kim, Youngwoo
2016-12-01
This paper proposes a new piecewise linear omnidirectional image registration method. The proposed method segments an image captured by multiple cameras into 2D segments defined by feature points of the image and then stitches each segment geometrically by considering the inclination of the segment in the 3D space. Depending on the intended use of image registration, the proposed method can be used to improve image registration accuracy or reduce the computation time in image registration because the trade-off between the computation time and image registration accuracy can be controlled for. In general, nonlinear image registration methods have been used in 3D omnidirectional image registration processes to reduce image distortion by camera lenses. The proposed method depends on a linear transformation process for omnidirectional image registration, and therefore it can enhance the effectiveness of the geometry recognition process, increase image registration accuracy by increasing the number of cameras or feature points of each image, increase the image registration speed by reducing the number of cameras or feature points of each image, and provide simultaneous information on shapes and colors of captured objects.
SpectraCAM SPM: a camera system with high dynamic range for scientific and medical applications
NASA Astrophysics Data System (ADS)
Bhaskaran, S.; Baiko, D.; Lungu, G.; Pilon, M.; VanGorden, S.
2005-08-01
A scientific camera system having high dynamic range designed and manufactured by Thermo Electron for scientific and medical applications is presented. The newly developed CID820 image sensor with preamplifier-per-pixel technology is employed in this camera system. The 4 Mega-pixel imaging sensor has a raw dynamic range of 82dB. Each high-transparent pixel is based on a preamplifier-per-pixel architecture and contains two photogates for non-destructive readout of the photon-generated charge (NDRO). Readout is achieved via parallel row processing with on-chip correlated double sampling (CDS). The imager is capable of true random pixel access with a maximum operating speed of 4MHz. The camera controller consists of a custom camera signal processor (CSP) with an integrated 16-bit A/D converter and a PowerPC-based CPU running a Linux embedded operating system. The imager is cooled to -40C via three-stage cooler to minimize dark current. The camera housing is sealed and is designed to maintain the CID820 imager in the evacuated chamber for at least 5 years. Thermo Electron has also developed custom software and firmware to drive the SpectraCAM SPM camera. Included in this firmware package is the new Extreme DRTM algorithm that is designed to extend the effective dynamic range of the camera by several orders of magnitude up to 32-bit dynamic range. The RACID Exposure graphical user interface image analysis software runs on a standard PC that is connected to the camera via Gigabit Ethernet.
A combined microphone and camera calibration technique with application to acoustic imaging.
Legg, Mathew; Bradley, Stuart
2013-10-01
We present a calibration technique for an acoustic imaging microphone array, combined with a digital camera. Computer vision and acoustic time of arrival data are used to obtain microphone coordinates in the camera reference frame. Our new method allows acoustic maps to be plotted onto the camera images without the need for additional camera alignment or calibration. Microphones and cameras may be placed in an ad-hoc arrangement and, after calibration, the coordinates of the microphones are known in the reference frame of a camera in the array. No prior knowledge of microphone positions, inter-microphone spacings, or air temperature is required. This technique is applied to a spherical microphone array and a mean difference of 3 mm was obtained between the coordinates obtained with this calibration technique and those measured using a precision mechanical method.
High Speed Digital Camera Technology Review
NASA Technical Reports Server (NTRS)
Clements, Sandra D.
2009-01-01
A High Speed Digital Camera Technology Review (HSD Review) is being conducted to evaluate the state-of-the-shelf in this rapidly progressing industry. Five HSD cameras supplied by four camera manufacturers participated in a Field Test during the Space Shuttle Discovery STS-128 launch. Each camera was also subjected to Bench Tests in the ASRC Imaging Development Laboratory. Evaluation of the data from the Field and Bench Tests is underway. Representatives from the imaging communities at NASA / KSC and the Optical Systems Group are participating as reviewers. A High Speed Digital Video Camera Draft Specification was updated to address Shuttle engineering imagery requirements based on findings from this HSD Review. This draft specification will serve as the template for a High Speed Digital Video Camera Specification to be developed for the wider OSG imaging community under OSG Task OS-33.
Generalized assorted pixel camera: postcapture control of resolution, dynamic range, and spectrum.
Yasuma, Fumihito; Mitsunaga, Tomoo; Iso, Daisuke; Nayar, Shree K
2010-09-01
We propose the concept of a generalized assorted pixel (GAP) camera, which enables the user to capture a single image of a scene and, after the fact, control the tradeoff between spatial resolution, dynamic range and spectral detail. The GAP camera uses a complex array (or mosaic) of color filters. A major problem with using such an array is that the captured image is severely under-sampled for at least some of the filter types. This leads to reconstructed images with strong aliasing. We make four contributions in this paper: 1) we present a comprehensive optimization method to arrive at the spatial and spectral layout of the color filter array of a GAP camera. 2) We develop a novel algorithm for reconstructing the under-sampled channels of the image while minimizing aliasing artifacts. 3) We demonstrate how the user can capture a single image and then control the tradeoff of spatial resolution to generate a variety of images, including monochrome, high dynamic range (HDR) monochrome, RGB, HDR RGB, and multispectral images. 4) Finally, the performance of our GAP camera has been verified using extensive simulations that use multispectral images of real world scenes. A large database of these multispectral images has been made available at http://www1.cs.columbia.edu/CAVE/projects/gap_camera/ for use by the research community.
Handheld hyperspectral imager system for chemical/biological and environmental applications
NASA Astrophysics Data System (ADS)
Hinnrichs, Michele; Piatek, Bob
2004-08-01
A small, hand held, battery operated imaging infrared spectrometer, Sherlock, has been developed by Pacific Advanced Technology and was field tested in early 2003. The Sherlock spectral imaging camera has been designed for remote gas leak detection, however, the architecture of the camera is versatile enough that it can be applied to numerous other applications such as homeland security, chemical/biological agent detection, medical and pharmaceutical applications as well as standard research and development. This paper describes the Sherlock camera, theory of operations, shows current applications and touches on potential future applications for the camera. The Sherlock has an embedded Power PC and performs real-time-image processing function in an embedded FPGA. The camera has a built in LCD display as well as output to a standard monitor, or NTSC display. It has several I/O ports, ethernet, firewire, RS232 and thus can be easily controlled from a remote location. In addition, software upgrades can be performed over the ethernet eliminating the need to send the camera back to the factory for a retrofit. Using the USB port a mouse and key board can be connected and the camera can be used in a laboratory environment as a stand alone imaging spectrometer.
Hand-held hyperspectral imager for chemical/biological and environmental applications
NASA Astrophysics Data System (ADS)
Hinnrichs, Michele; Piatek, Bob
2004-03-01
A small, hand held, battery operated imaging infrared spectrometer, Sherlock, has been developed by Pacific Advanced Technology and was field tested in early 2003. The Sherlock spectral imaging camera has been designed for remote gas leak detection, however, the architecture of the camera is versatile enough that it can be applied to numerous other applications such as homeland security, chemical/biological agent detection, medical and pharmaceutical applications as well as standard research and development. This paper describes the Sherlock camera, theory of operations, shows current applications and touches on potential future applications for the camera. The Sherlock has an embedded Power PC and performs real-time-image processing function in an embedded FPGA. The camera has a built in LCD display as well as output to a standard monitor, or NTSC display. It has several I/O ports, ethernet, firewire, RS232 and thus can be easily controlled from a remote location. In addition, software upgrades can be performed over the ethernet eliminating the need to send the camera back to the factory for a retrofit. Using the USB port a mouse and key board can be connected and the camera can be used in a laboratory environment as a stand alone imaging spectrometer.
Image dynamic range test and evaluation of Gaofen-2 dual cameras
NASA Astrophysics Data System (ADS)
Zhang, Zhenhua; Gan, Fuping; Wei, Dandan
2015-12-01
In order to fully understand the dynamic range of Gaofen-2 satellite data and support the data processing, application and next satellites development, in this article, we evaluated the dynamic range by calculating some statistics such as maximum ,minimum, average and stand deviation of four images obtained at the same time by Gaofen-2 dual cameras in Beijing area; then the maximum ,minimum, average and stand deviation of each longitudinal overlap of PMS1,PMS2 were calculated respectively for the evaluation of each camera's dynamic range consistency; and these four statistics of each latitudinal overlap of PMS1,PMS2 were calculated respectively for the evaluation of the dynamic range consistency between PMS1 and PMS2 at last. The results suggest that there is a wide dynamic range of DN value in the image obtained by PMS1 and PMS2 which contains rich information of ground objects; in general, the consistency of dynamic range between the single camera images is in close agreement, but also a little difference, so do the dual cameras. The consistency of dynamic range between the single camera images is better than the dual cameras'.
Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging.
Feng, Wei; Zhang, Fumin; Wang, Weijing; Xing, Wei; Qu, Xinghua
2017-05-01
In this paper, we overcome the limited dynamic range of the conventional digital camera, and propose a method of realizing high dynamic range imaging (HDRI) from a novel programmable imaging system called a digital micromirror device (DMD) camera. The unique feature of the proposed new method is that the spatial and temporal information of incident light in our DMD camera can be flexibly modulated, and it enables the camera pixels always to have reasonable exposure intensity by DMD pixel-level modulation. More importantly, it allows different light intensity control algorithms used in our programmable imaging system to achieve HDRI. We implement the optical system prototype, analyze the theory of per-pixel coded exposure for HDRI, and put forward an adaptive light intensity control algorithm to effectively modulate the different light intensity to recover high dynamic range images. Via experiments, we demonstrate the effectiveness of our method and implement the HDRI on different objects.
The Limited Duty/Chief Warrant Officer Professional Guidebook
1985-01-01
subsurface imaging . They plan and manage the operation of imaging commands and activities, combat camera groups and aerial reconnaissance imaging...picture and video systems used in aerial, surface and subsurface imaging . They supervise the operation of imaging commands and activities, combat camera
Test Image of Earth Rocks by Mars Camera Stereo
2010-11-16
This stereo view of terrestrial rocks combines two images taken by a testing twin of the Mars Hand Lens Imager MAHLI camera on NASA Mars Science Laboratory. 3D glasses are necessary to view this image.
High-frame rate multiport CCD imager and camera
NASA Astrophysics Data System (ADS)
Levine, Peter A.; Patterson, David R.; Esposito, Benjamin J.; Tower, John R.; Lawler, William B.
1993-01-01
A high frame rate visible CCD camera capable of operation up to 200 frames per second is described. The camera produces a 256 X 256 pixel image by using one quadrant of a 512 X 512 16-port, back illuminated CCD imager. Four contiguous outputs are digitally reformatted into a correct, 256 X 256 image. This paper details the architecture and timing used for the CCD drive circuits, analog processing, and the digital reformatter.
The advantages of using a Lucky Imaging camera for observations of microlensing events
NASA Astrophysics Data System (ADS)
Sajadian, Sedighe; Rahvar, Sohrab; Dominik, Martin; Hundertmark, Markus
2016-05-01
In this work, we study the advantages of using a Lucky Imaging camera for the observations of potential planetary microlensing events. Our aim is to reduce the blending effect and enhance exoplanet signals in binary lensing systems composed of an exoplanet and the corresponding parent star. We simulate planetary microlensing light curves based on present microlensing surveys and follow-up telescopes where one of them is equipped with a Lucky Imaging camera. This camera is used at the Danish 1.54-m follow-up telescope. Using a specific observational strategy, for an Earth-mass planet in the resonance regime, where the detection probability in crowded fields is smaller, Lucky Imaging observations improve the detection efficiency which reaches 2 per cent. Given the difficulty of detecting the signal of an Earth-mass planet in crowded-field imaging even in the resonance regime with conventional cameras, we show that Lucky Imaging can substantially improve the detection efficiency.
Li, Jin; Liu, Zilong; Liu, Si
2017-02-20
In on-board photographing processes of satellite cameras, the platform vibration can generate image motion, distortion, and smear, which seriously affect the image quality and image positioning. In this paper, we create a mathematical model of a vibrating modulate transfer function (VMTF) for a remote-sensing camera. The total MTF of a camera is reduced by the VMTF, which means the image quality is degraded. In order to avoid the degeneration of the total MTF caused by vibrations, we use an Mn-20Cu-5Ni-2Fe (M2052) manganese copper alloy material to fabricate a vibration-isolation mechanism (VIM). The VIM can transform platform vibration energy into irreversible thermal energy with its internal twin crystals structure. Our experiment shows the M2052 manganese copper alloy material is good enough to suppress image motion below 125 Hz, which is the vibration frequency of satellite platforms. The camera optical system has a higher MTF after suppressing the vibration of the M2052 material than before.
Neubauer, Aljoscha S; Rothschuh, Antje; Ulbig, Michael W; Blum, Marcus
2008-03-01
Grading diabetic retinopathy in clinical trials is frequently based on 7-field stereo photography of the fundus in diagnostic mydriasis. In terms of image quality, the FF450(plus) camera (Carl Zeiss Meditec AG, Jena, Germany) defines a high-quality reference. The aim of the study was to investigate if the fully digital fundus camera Visucam(PRO NM) could serve as an alternative in clinical trials requiring 7-field stereo photography. A total of 128 eyes of diabetes patients were enrolled in the randomized, controlled, prospective trial. Seven-field stereo photography was performed with the Visucam(PRO NM) and the FF450(plus) camera, in random order, both in diagnostic mydriasis. The resulting 256 image sets from the two camera systems were graded for retinopathy levels and image quality (on a scale of 1-5); both were anonymized and blinded to the image source. On FF450(plus) stereoscopic imaging, 20% of the patients had no or mild diabetic retinopathy (ETDRS level < or = 20) and 29% had no macular oedema. No patient had to be excluded as a result of image quality. Retinopathy level did not influence the quality of grading or of images. Excellent overall correspondence was obtained between the two fundus cameras regarding retinopathy levels (kappa 0.87) and macular oedema (kappa 0.80). In diagnostic mydriasis the image quality of the Visucam was graded slightly as better than that of the FF450(plus) (2.20 versus 2.41; p < 0.001), especially for pupils < 7 mm in mydriasis. The non-mydriatic Visucam(PRO NM) offers good image quality and is suitable as a more cost-efficient and easy-to-operate camera for applications and clinical trials requiring 7-field stereo photography.
NASA Astrophysics Data System (ADS)
Feng, Zhixin
2018-02-01
Projector calibration is crucial for a camera-projector three-dimensional (3-D) structured light measurement system, which has one camera and one projector. In this paper, a novel projector calibration method is proposed based on digital image correlation. In the method, the projector is viewed as an inverse camera, and a plane calibration board with feature points is used to calibrate the projector. During the calibration processing, a random speckle pattern is projected onto the calibration board with different orientations to establish the correspondences between projector images and camera images. Thereby, dataset for projector calibration are generated. Then the projector can be calibrated using a well-established camera calibration algorithm. The experiment results confirm that the proposed method is accurate and reliable for projector calibration.
Volunteers Help Decide Where to Point Mars Camera
2015-07-22
This series of images from NASA's Mars Reconnaissance Orbiter successively zooms into "spider" features -- or channels carved in the surface in radial patterns -- in the south polar region of Mars. In a new citizen-science project, volunteers will identify features like these using wide-scale images from the orbiter. Their input will then help mission planners decide where to point the orbiter's high-resolution camera for more detailed views of interesting terrain. Volunteers will start with images from the orbiter's Context Camera (CTX), which provides wide views of the Red Planet. The first two images in this series are from CTX; the top right image zooms into a portion of the image at left. The top right image highlights the geological spider features, which are carved into the terrain in the Martian spring when dry ice turns to gas. By identifying unusual features like these, volunteers will help the mission team choose targets for the orbiter's High Resolution Imaging Science Experiment (HiRISE) camera, which can reveal more detail than any other camera ever put into orbit around Mars. The final image is this series (bottom right) shows a HiRISE close-up of one of the spider features. http://photojournal.jpl.nasa.gov/catalog/PIA19823
Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System
Lu, Yu; Wang, Keyi; Fan, Gongshu
2016-01-01
A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensor in the process of radiometric response calibration to eliminate the influence of the focusing effect of uniform light from an integrating sphere. Linearity range of the radiometric response, non-linearity response characteristics, sensitivity, and dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved by sensor calibration results, and is used to blend images to make panoramas reflect the objective luminance more objectively. This compensates for the limitation of stitching images that are more realistic only through the smoothing method. The dynamic range limitation of can be resolved by using multiple cameras that cover a large field of view instead of a single image sensor with a wide-angle lens. The dynamic range is expanded by 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second. PMID:27077857
Multispectral image dissector camera flight test
NASA Technical Reports Server (NTRS)
Johnson, B. L.
1973-01-01
It was demonstrated that the multispectral image dissector camera is able to provide composite pictures of the earth surface from high altitude overflights. An electronic deflection feature was used to inject the gyro error signal into the camera for correction of aircraft motion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di Lillo, F.; Mettivier, G., E-mail: mettivier@na.infn.it; Sarno, A.
2016-01-15
Purpose: This work investigates the energy response and dose-response curve determinations for XR-QA2 radiochromic film dosimetry system used for synchrotron radiation work and for quality assurance in diagnostic radiology, in the range of effective energies 18–46.5 keV. Methods: Pieces of XR-QA2 films were irradiated, in a plane transverse to the beam axis, with a monochromatic beam of energy in the range 18–40 keV at the ELETTRA synchrotron radiation facility (Trieste, Italy) and with a polychromatic beam from a laboratory x-ray tube operated at 80, 100, and 120 kV. The film calibration curve was expressed as air kerma (measured free-in-air withmore » an ionization chamber) versus the net optical reflectance change (netΔR) derived from the red channel of the RGB scanned film image. Four functional relationships (rational, linear exponential, power, and logarithm) were tested to evaluate the best curve for fitting the calibration data. The adequacy of the various fitting functions was tested by using the uncertainty analysis and by assessing the average of the absolute air kerma error calculated as the difference between calculated and delivered air kerma. The sensitivity of the film was evaluated as the ratio of the change in net reflectance to the corresponding air kerma. Results: The sensitivity of XR-QA2 films increased in the energy range 18–39 keV, with a maximum variation of about 170%, and decreased in the energy range 38–46.5 keV. The present results confirmed and extended previous findings by this and other groups, as regards the dose response of the radiochromic film XR-QA2 to monochromatic and polychromatic x-ray beams, respectively. Conclusions: The XR-QA2 radiochromic film response showed a strong dependence on beam energy for both monochromatic and polychromatic beams in the range of half value layer values from 0.55 to 6.1 mm Al and corresponding effective energies from 18 to 46.5 keV. In this range, the film response varied by 170%, from a minimum sensitivity of 0.0127 to a maximum sensitivity of 0.0219 at 10 mGy air kerma in air. The more suitable function for air kerma calibration of the XR-QA2 radiochromic film was the power function. A significant batch-to-batch variation, up to 55%, in film response at 120 kV (46.5 keV effective energy) was observed in comparison with published data.« less
On the accuracy potential of focused plenoptic camera range determination in long distance operation
NASA Astrophysics Data System (ADS)
Sardemann, Hannes; Maas, Hans-Gerd
2016-04-01
Plenoptic cameras have found increasing interest in optical 3D measurement techniques in recent years. While their basic principle is 100 years old, the development in digital photography, micro-lens fabrication technology and computer hardware has boosted the development and lead to several commercially available ready-to-use cameras. Beyond their popular option of a posteriori image focusing or total focus image generation, their basic ability of generating 3D information from single camera imagery depicts a very beneficial option for certain applications. The paper will first present some fundamentals on the design and history of plenoptic cameras and will describe depth determination from plenoptic camera image data. It will then present an analysis of the depth determination accuracy potential of plenoptic cameras. While most research on plenoptic camera accuracy so far has focused on close range applications, we will focus on mid and long ranges of up to 100 m. This range is especially relevant, if plenoptic cameras are discussed as potential mono-sensorial range imaging devices in (semi-)autonomous cars or in mobile robotics. The results show the expected deterioration of depth measurement accuracy with depth. At depths of 30-100 m, which may be considered typical in autonomous driving, depth errors in the order of 3% (with peaks up to 10-13 m) were obtained from processing small point clusters on an imaged target. Outliers much higher than these values were observed in single point analysis, stressing the necessity of spatial or spatio-temporal filtering of the plenoptic camera depth measurements. Despite these obviously large errors, a plenoptic camera may nevertheless be considered a valid option for the application fields of real-time robotics like autonomous driving or unmanned aerial and underwater vehicles, where the accuracy requirements decrease with distance.
2011-07-01
cameras were installed around the test pan and an underwater GoPro ® video camera recorded the fire from below the layer of fuel. 3.2.2. Camera Images...Distribution A: Approved for public release; distribution unlimited. 3.2.3. Video Images A GoPro video camera with a wide angle lens recorded the tests...camera and the GoPro ® video camera were not used for fire suppression experiments. 3.3.2. Test Pans Two ¼-in thick stainless steel test pans were
Imagers for digital still photography
NASA Astrophysics Data System (ADS)
Bosiers, Jan; Dillen, Bart; Draijer, Cees; Manoury, Erik-Jan; Meessen, Louis; Peters, Inge
2006-04-01
This paper gives an overview of the requirements for, and current state-of-the-art of, CCD and CMOS imagers for use in digital still photography. Four market segments will be reviewed: mobile imaging, consumer "point-and-shoot cameras", consumer digital SLR cameras and high-end professional camera systems. The paper will also present some challenges and innovations with respect to packaging, testing, and system integration.
NASA Astrophysics Data System (ADS)
Haase, I.; Oberst, J.; Scholten, F.; Wählisch, M.; Gläser, P.; Karachevtseva, I.; Robinson, M. S.
2012-05-01
Newly acquired high resolution Lunar Reconnaissance Orbiter Camera (LROC) images allow accurate determination of the coordinates of Apollo hardware, sampling stations, and photographic viewpoints. In particular, the positions from where the Apollo 17 astronauts recorded panoramic image series, at the so-called “traverse stations”, were precisely determined for traverse path reconstruction. We analyzed observations made in Apollo surface photography as well as orthorectified orbital images (0.5 m/pixel) and Digital Terrain Models (DTMs) (1.5 m/pixel and 100 m/pixel) derived from LROC Narrow Angle Camera (NAC) and Wide Angle Camera (WAC) images. Key features captured in the Apollo panoramic sequences were identified in LROC NAC orthoimages. Angular directions of these features were measured in the panoramic images and fitted to the NAC orthoimage by applying least squares techniques. As a result, we obtained the surface panoramic camera positions to within 50 cm. At the same time, the camera orientations, North azimuth angles and distances to nearby features of interest were also determined. Here, initial results are shown for traverse station 1 (northwest of Steno Crater) as well as the Apollo Lunar Surface Experiment Package (ALSEP) area.
Person re-identification over camera networks using multi-task distance metric learning.
Ma, Lianyang; Yang, Xiaokang; Tao, Dacheng
2014-08-01
Person reidentification in a camera network is a valuable yet challenging problem to solve. Existing methods learn a common Mahalanobis distance metric by using the data collected from different cameras and then exploit the learned metric for identifying people in the images. However, the cameras in a camera network have different settings and the recorded images are seriously affected by variability in illumination conditions, camera viewing angles, and background clutter. Using a common metric to conduct person reidentification tasks on different camera pairs overlooks the differences in camera settings; however, it is very time-consuming to label people manually in images from surveillance videos. For example, in most existing person reidentification data sets, only one image of a person is collected from each of only two cameras; therefore, directly learning a unique Mahalanobis distance metric for each camera pair is susceptible to over-fitting by using insufficiently labeled data. In this paper, we reformulate person reidentification in a camera network as a multitask distance metric learning problem. The proposed method designs multiple Mahalanobis distance metrics to cope with the complicated conditions that exist in typical camera networks. We address the fact that these Mahalanobis distance metrics are different but related, and learned by adding joint regularization to alleviate over-fitting. Furthermore, by extending, we present a novel multitask maximally collapsing metric learning (MtMCML) model for person reidentification in a camera network. Experimental results demonstrate that formulating person reidentification over camera networks as multitask distance metric learning problem can improve performance, and our proposed MtMCML works substantially better than other current state-of-the-art person reidentification methods.
Chen, Brian R; Poon, Emily; Alam, Murad
2017-08-01
Photographs are an essential tool for the documentation and sharing of findings in dermatologic surgery, and various camera types are available. To evaluate the currently available camera types in view of the special functional needs of procedural dermatologists. Mobile phone, point and shoot, digital single-lens reflex (DSLR), digital medium format, and 3-dimensional cameras were compared in terms of their usefulness for dermatologic surgeons. For each camera type, the image quality, as well as the other practical benefits and limitations, were evaluated with reference to a set of ideal camera characteristics. Based on these assessments, recommendations were made regarding the specific clinical circumstances in which each camera type would likely be most useful. Mobile photography may be adequate when ease of use, availability, and accessibility are prioritized. Point and shoot cameras and DSLR cameras provide sufficient resolution for a range of clinical circumstances, while providing the added benefit of portability. Digital medium format cameras offer the highest image quality, with accurate color rendition and greater color depth. Three-dimensional imaging may be optimal for the definition of skin contour. The selection of an optimal camera depends on the context in which it will be used.
Depth measurements through controlled aberrations of projected patterns.
Birch, Gabriel C; Tyo, J Scott; Schwiegerling, Jim
2012-03-12
Three-dimensional displays have become increasingly present in consumer markets. However, the ability to capture three-dimensional images in space confined environments and without major modifications to current cameras is uncommon. Our goal is to create a simple modification to a conventional camera that allows for three dimensional reconstruction. We require such an imaging system have imaging and illumination paths coincident. Furthermore, we require that any three-dimensional modification to a camera also permits full resolution 2D image capture.Here we present a method of extracting depth information with a single camera and aberrated projected pattern. A commercial digital camera is used in conjunction with a projector system with astigmatic focus to capture images of a scene. By using an astigmatic projected pattern we can create two different focus depths for horizontal and vertical features of a projected pattern, thereby encoding depth. By designing an aberrated projected pattern, we are able to exploit this differential focus in post-processing designed to exploit the projected pattern and optical system. We are able to correlate the distance of an object at a particular transverse position from the camera to ratios of particular wavelet coefficients.We present our information regarding construction, calibration, and images produced by this system. The nature of linking a projected pattern design and image processing algorithms will be discussed.
An Example-Based Super-Resolution Algorithm for Selfie Images
William, Jino Hans; Venkateswaran, N.; Narayanan, Srinath; Ramachandran, Sandeep
2016-01-01
A selfie is typically a self-portrait captured using the front camera of a smartphone. Most state-of-the-art smartphones are equipped with a high-resolution (HR) rear camera and a low-resolution (LR) front camera. As selfies are captured by front camera with limited pixel resolution, the fine details in it are explicitly missed. This paper aims to improve the resolution of selfies by exploiting the fine details in HR images captured by rear camera using an example-based super-resolution (SR) algorithm. HR images captured by rear camera carry significant fine details and are used as an exemplar to train an optimal matrix-value regression (MVR) operator. The MVR operator serves as an image-pair priori which learns the correspondence between the LR-HR patch-pairs and is effectively used to super-resolve LR selfie images. The proposed MVR algorithm avoids vectorization of image patch-pairs and preserves image-level information during both learning and recovering process. The proposed algorithm is evaluated for its efficiency and effectiveness both qualitatively and quantitatively with other state-of-the-art SR algorithms. The results validate that the proposed algorithm is efficient as it requires less than 3 seconds to super-resolve LR selfie and is effective as it preserves sharp details without introducing any counterfeit fine details. PMID:27064500
Accuracy Analysis for Automatic Orientation of a Tumbling Oblique Viewing Sensor System
NASA Astrophysics Data System (ADS)
Stebner, K.; Wieden, A.
2014-03-01
Dynamic camera systems with moving parts are difficult to handle in photogrammetric workflow, because it is not ensured that the dynamics are constant over the recording period. Minimum changes of the camera's orientation greatly influence the projection of oblique images. In this publication these effects - originating from the kinematic chain of a dynamic camera system - are analysed and validated. A member of the Modular Airborne Camera System family - MACS-TumbleCam - consisting of a vertical viewing and a tumbling oblique camera was used for this investigation. Focus is on dynamic geometric modeling and the stability of the kinematic chain. To validate the experimental findings, the determined parameters are applied to the exterior orientation of an actual aerial image acquisition campaign using MACS-TumbleCam. The quality of the parameters is sufficient for direct georeferencing of oblique image data from the orientation information of a synchronously captured vertical image dataset. Relative accuracy for the oblique data set ranges from 1.5 pixels when using all images of the image block to 0.3 pixels when using only adjacent images.
Cano-García, Angel E.; Lazaro, José Luis; Infante, Arturo; Fernández, Pedro; Pompa-Chacón, Yamilet; Espinoza, Felipe
2012-01-01
In this study, a camera to infrared diode (IRED) distance estimation problem was analyzed. The main objective was to define an alternative to measures depth only using the information extracted from pixel grey levels of the IRED image to estimate the distance between the camera and the IRED. In this paper, the standard deviation of the pixel grey level in the region of interest containing the IRED image is proposed as an empirical parameter to define a model for estimating camera to emitter distance. This model includes the camera exposure time, IRED radiant intensity and the distance between the camera and the IRED. An expression for the standard deviation model related to these magnitudes was also derived and calibrated using different images taken under different conditions. From this analysis, we determined the optimum parameters to ensure the best accuracy provided by this alternative. Once the model calibration had been carried out, a differential method to estimate the distance between the camera and the IRED was defined and applied, considering that the camera was aligned with the IRED. The results indicate that this method represents a useful alternative for determining the depth information. PMID:22778608
Cano-García, Angel E; Lazaro, José Luis; Infante, Arturo; Fernández, Pedro; Pompa-Chacón, Yamilet; Espinoza, Felipe
2012-01-01
In this study, a camera to infrared diode (IRED) distance estimation problem was analyzed. The main objective was to define an alternative to measures depth only using the information extracted from pixel grey levels of the IRED image to estimate the distance between the camera and the IRED. In this paper, the standard deviation of the pixel grey level in the region of interest containing the IRED image is proposed as an empirical parameter to define a model for estimating camera to emitter distance. This model includes the camera exposure time, IRED radiant intensity and the distance between the camera and the IRED. An expression for the standard deviation model related to these magnitudes was also derived and calibrated using different images taken under different conditions. From this analysis, we determined the optimum parameters to ensure the best accuracy provided by this alternative. Once the model calibration had been carried out, a differential method to estimate the distance between the camera and the IRED was defined and applied, considering that the camera was aligned with the IRED. The results indicate that this method represents a useful alternative for determining the depth information.
Autocalibration of a projector-camera system.
Okatani, Takayuki; Deguchi, Koichiro
2005-12-01
This paper presents a method for calibrating a projector-camera system that consists of multiple projectors (or multiple poses of a single projector), a camera, and a planar screen. We consider the problem of estimating the homography between the screen and the image plane of the camera or the screen-camera homography, in the case where there is no prior knowledge regarding the screen surface that enables the direct computation of the homography. It is assumed that the pose of each projector is unknown while its internal geometry is known. Subsequently, it is shown that the screen-camera homography can be determined from only the images projected by the projectors and then obtained by the camera, up to a transformation with four degrees of freedom. This transformation corresponds to arbitrariness in choosing a two-dimensional coordinate system on the screen surface and when this coordinate system is chosen in some manner, the screen-camera homography as well as the unknown poses of the projectors can be uniquely determined. A noniterative algorithm is presented, which computes the homography from three or more images. Several experimental results on synthetic as well as real images are shown to demonstrate the effectiveness of the method.
Gate simulation of Compton Ar-Xe gamma-camera for radionuclide imaging in nuclear medicine
NASA Astrophysics Data System (ADS)
Dubov, L. Yu; Belyaev, V. N.; Berdnikova, A. K.; Bolozdynia, A. I.; Akmalova, Yu A.; Shtotsky, Yu V.
2017-01-01
Computer simulations of cylindrical Compton Ar-Xe gamma camera are described in the current report. Detection efficiency of cylindrical Ar-Xe Compton camera with internal diameter of 40 cm is estimated as1-3%that is 10-100 times higher than collimated Anger’s camera. It is shown that cylindrical Compton camera can image Tc-99m radiotracer distribution with uniform spatial resolution of 20 mm through the whole field of view.
A method and results of color calibration for the Chang'e-3 terrain camera and panoramic camera
NASA Astrophysics Data System (ADS)
Ren, Xin; Li, Chun-Lai; Liu, Jian-Jun; Wang, Fen-Fei; Yang, Jian-Feng; Liu, En-Hai; Xue, Bin; Zhao, Ru-Jin
2014-12-01
The terrain camera (TCAM) and panoramic camera (PCAM) are two of the major scientific payloads installed on the lander and rover of the Chang'e 3 mission respectively. They both use a Bayer color filter array covering CMOS sensor to capture color images of the Moon's surface. RGB values of the original images are related to these two kinds of cameras. There is an obvious color difference compared with human visual perception. This paper follows standards published by the International Commission on Illumination to establish a color correction model, designs the ground calibration experiment and obtains the color correction coefficient. The image quality has been significantly improved and there is no obvious color difference in the corrected images. Ground experimental results show that: (1) Compared with uncorrected images, the average color difference of TCAM is 4.30, which has been reduced by 62.1%. (2) The average color differences of the left and right cameras in PCAM are 4.14 and 4.16, which have been reduced by 68.3% and 67.6% respectively.
Semi-autonomous wheelchair system using stereoscopic cameras.
Nguyen, Jordan S; Nguyen, Thanh H; Nguyen, Hung T
2009-01-01
This paper is concerned with the design and development of a semi-autonomous wheelchair system using stereoscopic cameras to assist hands-free control technologies for severely disabled people. The stereoscopic cameras capture an image from both the left and right cameras, which are then processed with a Sum of Absolute Differences (SAD) correlation algorithm to establish correspondence between image features in the different views of the scene. This is used to produce a stereo disparity image containing information about the depth of objects away from the camera in the image. A geometric projection algorithm is then used to generate a 3-Dimensional (3D) point map, placing pixels of the disparity image in 3D space. This is then converted to a 2-Dimensional (2D) depth map allowing objects in the scene to be viewed and a safe travel path for the wheelchair to be planned and followed based on the user's commands. This assistive technology utilising stereoscopic cameras has the purpose of automated obstacle detection, path planning and following, and collision avoidance during navigation. Experimental results obtained in an indoor environment displayed the effectiveness of this assistive technology.
Accuracy evaluation of optical distortion calibration by digital image correlation
NASA Astrophysics Data System (ADS)
Gao, Zeren; Zhang, Qingchuan; Su, Yong; Wu, Shangquan
2017-11-01
Due to its convenience of operation, the camera calibration algorithm, which is based on the plane template, is widely used in image measurement, computer vision and other fields. How to select a suitable distortion model is always a problem to be solved. Therefore, there is an urgent need for an experimental evaluation of the accuracy of camera distortion calibrations. This paper presents an experimental method for evaluating camera distortion calibration accuracy, which is easy to implement, has high precision, and is suitable for a variety of commonly used lens. First, we use the digital image correlation method to calculate the in-plane rigid body displacement field of an image displayed on a liquid crystal display before and after translation, as captured with a camera. Next, we use a calibration board to calibrate the camera to obtain calibration parameters which are used to correct calculation points of the image before and after deformation. The displacement field before and after correction is compared to analyze the distortion calibration results. Experiments were carried out to evaluate the performance of two commonly used industrial camera lenses for four commonly used distortion models.
The Mast Cameras and Mars Descent Imager (MARDI) for the 2009 Mars Science Laboratory
NASA Technical Reports Server (NTRS)
Malin, M. C.; Bell, J. F.; Cameron, J.; Dietrich, W. E.; Edgett, K. S.; Hallet, B.; Herkenhoff, K. E.; Lemmon, M. T.; Parker, T. J.; Sullivan, R. J.
2005-01-01
Based on operational experience gained during the Mars Exploration Rover (MER) mission, we proposed and were selected to conduct two related imaging experiments: (1) an investigation of the geology and short-term atmospheric vertical wind profile local to the Mars Science Laboratory (MSL) landing site using descent imaging, and (2) a broadly-based scientific investigation of the MSL locale employing visible and very near infra-red imaging techniques from a pair of mast-mounted, high resolution cameras. Both instruments share a common electronics design, a design also employed for the MSL Mars Hand Lens Imager (MAHLI) [1]. The primary differences between the cameras are in the nature and number of mechanisms and specific optics tailored to each camera s requirements.
Advances in Gamma-Ray Imaging with Intensified Quantum-Imaging Detectors
NASA Astrophysics Data System (ADS)
Han, Ling
Nuclear medicine, an important branch of modern medical imaging, is an essential tool for both diagnosis and treatment of disease. As the fundamental element of nuclear medicine imaging, the gamma camera is able to detect gamma-ray photons emitted by radiotracers injected into a patient and form an image of the radiotracer distribution, reflecting biological functions of organs or tissues. Recently, an intensified CCD/CMOS-based quantum detector, called iQID, was developed in the Center for Gamma-Ray Imaging. Originally designed as a novel type of gamma camera, iQID demonstrated ultra-high spatial resolution (< 100 micron) and many other advantages over traditional gamma cameras. This work focuses on advancing this conceptually-proven gamma-ray imaging technology to make it ready for both preclinical and clinical applications. To start with, a Monte Carlo simulation of the key light-intensification device, i.e. the image intensifier, was developed, which revealed the dominating factor(s) that limit energy resolution performance of the iQID cameras. For preclinical imaging applications, a previously-developed iQID-based single-photon-emission computed-tomography (SPECT) system, called FastSPECT III, was fully advanced in terms of data acquisition software, system sensitivity and effective FOV by developing and adopting a new photon-counting algorithm, thicker columnar scintillation detectors, and system calibration method. Originally designed for mouse brain imaging, the system is now able to provide full-body mouse imaging with sub-350-micron spatial resolution. To further advance the iQID technology to include clinical imaging applications, a novel large-area iQID gamma camera, called LA-iQID, was developed from concept to prototype. Sub-mm system resolution in an effective FOV of 188 mm x 188 mm has been achieved. The camera architecture, system components, design and integration, data acquisition, camera calibration, and performance evaluation are presented in this work. Mounted on a castered counter-weighted clinical cart, the camera also features portable and mobile capabilities for easy handling and on-site applications at remote locations where hospital facilities are not available.
Flat-panel detector, CCD cameras, and electron-beam-tube-based video for use in portal imaging
NASA Astrophysics Data System (ADS)
Roehrig, Hans; Tang, Chuankun; Cheng, Chee-Way; Dallas, William J.
1998-07-01
This paper provides a comparison of some imaging parameters of four portal imaging systems at 6 MV: a flat panel detector, two CCD cameras and an electron beam tube based video camera. Measurements were made of signal and noise and consequently of signal-to-noise per pixel as a function of the exposure. All systems have a linear response with respect to exposure, and with the exception of the electron beam tube based video camera, the noise is proportional to the square-root of the exposure, indicating photon-noise limitation. The flat-panel detector has a signal-to-noise ratio, which is higher than that observed with both CCD-Cameras or with the electron beam tube based video camera. This is expected because most portal imaging systems using optical coupling with a lens exhibit severe quantum-sinks. The measurements of signal-and noise were complemented by images of a Las Vegas-type aluminum contrast detail phantom, located at the ISO-Center. These images were generated at an exposure of 1 MU. The flat-panel detector permits detection of Aluminum holes of 1.2 mm diameter and 1.6 mm depth, indicating the best signal-to-noise ratio. The CCD-cameras rank second and third in signal-to- noise ratio, permitting detection of Aluminum-holes of 1.2 mm diameter and 2.2 mm depth (CCD_1) and of 1.2 mm diameter and 3.2 mm depth (CCD_2) respectively, while the electron beam tube based video camera permits detection of only a hole of 1.2 mm diameter and 4.6 mm depth. Rank Order Filtering was applied to the raw images from the CCD-based systems in order to remove the direct hits. These are camera responses to scattered x-ray photons which interact directly with the CCD of the CCD-Camera and generate 'Salt and Pepper type noise,' which interferes severely with attempts to determine accurate estimates of the image noise. The paper also presents data on the metal-phosphor's photon gain (the number of light-photons per interacting x-ray photon).
Volumetric particle image velocimetry with a single plenoptic camera
NASA Astrophysics Data System (ADS)
Fahringer, Timothy W.; Lynch, Kyle P.; Thurow, Brian S.
2015-11-01
A novel three-dimensional (3D), three-component (3C) particle image velocimetry (PIV) technique based on volume illumination and light field imaging with a single plenoptic camera is described. A plenoptic camera uses a densely packed microlens array mounted near a high resolution image sensor to sample the spatial and angular distribution of light collected by the camera. The multiplicative algebraic reconstruction technique (MART) computed tomography algorithm is used to reconstruct a volumetric intensity field from individual snapshots and a cross-correlation algorithm is used to estimate the velocity field from a pair of reconstructed particle volumes. This work provides an introduction to the basic concepts of light field imaging with a plenoptic camera and describes the unique implementation of MART in the context of plenoptic image data for 3D/3C PIV measurements. Simulations of a plenoptic camera using geometric optics are used to generate synthetic plenoptic particle images, which are subsequently used to estimate the quality of particle volume reconstructions at various particle number densities. 3D reconstructions using this method produce reconstructed particles that are elongated by a factor of approximately 4 along the optical axis of the camera. A simulated 3D Gaussian vortex is used to test the capability of single camera plenoptic PIV to produce a 3D/3C vector field, where it was found that lateral displacements could be measured to approximately 0.2 voxel accuracy in the lateral direction and 1 voxel in the depth direction over a 300× 200× 200 voxel volume. The feasibility of the technique is demonstrated experimentally using a home-built plenoptic camera based on a 16-megapixel interline CCD camera and a 289× 193 array of microlenses and a pulsed Nd:YAG laser. 3D/3C measurements were performed in the wake of a low Reynolds number circular cylinder and compared with measurements made using a conventional 2D/2C PIV system. Overall, single camera plenoptic PIV is shown to be a viable 3D/3C velocimetry technique.
Computer simulation of turbulent jet structure radiography
NASA Astrophysics Data System (ADS)
Kodimer, Kory A.; Parnell, Lynn A.; Nelson, Robert S.; Papin, Patrick J.
1992-12-01
Liquid metal combustion chambers are under consideration as power sources for propulsion devices used in undersea vehicles. Characteristics of the reactive jet are studied to gain information about the internal combustion phenomena, including temporal and spatial variation of the jet flame, and the effects of phase changes on both the combustion and imaging processes. A ray tracing program which employs simplified Monte Carlo methods has been developed for use as a predictive tool for radiographic imaging of closed liquid metal combustors. A complex focal spot is characterized by either a monochromatic or polychromatic emission spectrum. For the simplest case, the x-ray detection system is modeled by an integrating planar detector having 100% efficiency. Several simple geometrical shapes are used to simulate jet structures contained within the combustor, such as cylinders, paraboloids, and ellipsoids. The results of the simulation and real time radiographic images are presented and discussed.
Performance evaluation and clinical applications of 3D plenoptic cameras
NASA Astrophysics Data System (ADS)
Decker, Ryan; Shademan, Azad; Opfermann, Justin; Leonard, Simon; Kim, Peter C. W.; Krieger, Axel
2015-06-01
The observation and 3D quantification of arbitrary scenes using optical imaging systems is challenging, but increasingly necessary in many fields. This paper provides a technical basis for the application of plenoptic cameras in medical and medical robotics applications, and rigorously evaluates camera integration and performance in the clinical setting. It discusses plenoptic camera calibration and setup, assesses plenoptic imaging in a clinically relevant context, and in the context of other quantitative imaging technologies. We report the methods used for camera calibration, precision and accuracy results in an ideal and simulated surgical setting. Afterwards, we report performance during a surgical task. Test results showed the average precision of the plenoptic camera to be 0.90mm, increasing to 1.37mm for tissue across the calibrated FOV. The ideal accuracy was 1.14mm. The camera showed submillimeter error during a simulated surgical task.
Micro-Imagers for Spaceborne Cell-Growth Experiments
NASA Technical Reports Server (NTRS)
Behar, Alberto; Matthews, Janet; SaintAnge, Beverly; Tanabe, Helen
2006-01-01
A document discusses selected aspects of a continuing effort to develop five micro-imagers for both still and video monitoring of cell cultures to be grown aboard the International Space Station. The approach taken in this effort is to modify and augment pre-existing electronic micro-cameras. Each such camera includes an image-detector integrated-circuit chip, signal-conditioning and image-compression circuitry, and connections for receiving power from, and exchanging data with, external electronic equipment. Four white and four multicolor light-emitting diodes are to be added to each camera for illuminating the specimens to be monitored. The lens used in the original version of each camera is to be replaced with a shorter-focal-length, more-compact singlet lens to make it possible to fit the camera into the limited space allocated to it. Initially, the lenses in the five cameras are to have different focal lengths: the focal lengths are to be 1, 1.5, 2, 2.5, and 3 cm. Once one of the focal lengths is determined to be the most nearly optimum, the remaining four cameras are to be fitted with lenses of that focal length.
High-performance camera module for fast quality inspection in industrial printing applications
NASA Astrophysics Data System (ADS)
Fürtler, Johannes; Bodenstorfer, Ernst; Mayer, Konrad J.; Brodersen, Jörg; Heiss, Dorothea; Penz, Harald; Eckel, Christian; Gravogl, Klaus; Nachtnebel, Herbert
2007-02-01
Today, printing products which must meet highest quality standards, e.g., banknotes, stamps, or vouchers, are automatically checked by optical inspection systems. Typically, the examination of fine details of the print or security features demands images taken from various perspectives, with different spectral sensitivity (visible, infrared, ultraviolet), and with high resolution. Consequently, the inspection system is equipped with several cameras and has to cope with an enormous data rate to be processed in real-time. Hence, it is desirable to move image processing tasks into the camera to reduce the amount of data which has to be transferred to the (central) image processing system. The idea is to transfer relevant information only, i.e., features of the image instead of the raw image data from the sensor. These features are then further processed. In this paper a color line-scan camera for line rates up to 100 kHz is presented. The camera is based on a commercial CMOS (complementary metal oxide semiconductor) area image sensor and a field programmable gate array (FPGA). It implements extraction of image features which are well suited to detect print flaws like blotches of ink, color smears, splashes, spots and scratches. The camera design and several image processing methods implemented on the FPGA are described, including flat field correction, compensation of geometric distortions, color transformation, as well as decimation and neighborhood operations.
Sensors for 3D Imaging: Metric Evaluation and Calibration of a CCD/CMOS Time-of-Flight Camera.
Chiabrando, Filiberto; Chiabrando, Roberto; Piatti, Dario; Rinaudo, Fulvio
2009-01-01
3D imaging with Time-of-Flight (ToF) cameras is a promising recent technique which allows 3D point clouds to be acquired at video frame rates. However, the distance measurements of these devices are often affected by some systematic errors which decrease the quality of the acquired data. In order to evaluate these errors, some experimental tests on a CCD/CMOS ToF camera sensor, the SwissRanger (SR)-4000 camera, were performed and reported in this paper. In particular, two main aspects are treated: the calibration of the distance measurements of the SR-4000 camera, which deals with evaluation of the camera warm up time period, the distance measurement error evaluation and a study of the influence on distance measurements of the camera orientation with respect to the observed object; the second aspect concerns the photogrammetric calibration of the amplitude images delivered by the camera using a purpose-built multi-resolution field made of high contrast targets.
Real-time vehicle matching for multi-camera tunnel surveillance
NASA Astrophysics Data System (ADS)
Jelača, Vedran; Niño Castañeda, Jorge Oswaldo; Frías-Velázquez, Andrés; Pižurica, Aleksandra; Philips, Wilfried
2011-03-01
Tracking multiple vehicles with multiple cameras is a challenging problem of great importance in tunnel surveillance. One of the main challenges is accurate vehicle matching across the cameras with non-overlapping fields of view. Since systems dedicated to this task can contain hundreds of cameras which observe dozens of vehicles each, for a real-time performance computational efficiency is essential. In this paper, we propose a low complexity, yet highly accurate method for vehicle matching using vehicle signatures composed of Radon transform like projection profiles of the vehicle image. The proposed signatures can be calculated by a simple scan-line algorithm, by the camera software itself and transmitted to the central server or to the other cameras in a smart camera environment. The amount of data is drastically reduced compared to the whole image, which relaxes the data link capacity requirements. Experiments on real vehicle images, extracted from video sequences recorded in a tunnel by two distant security cameras, validate our approach.
Wheat, J S; Clarkson, S; Flint, S W; Simpson, C; Broom, D R
2018-05-21
Three dimensional (3D) surface imaging is a viable alternative to traditional body morphology measures, but the feasibility of using this technique with people with obesity has not been fully established. Therefore, the aim of this study was to investigate the validity, repeatability and acceptability of a consumer depth camera 3D surface imaging system in imaging people with obesity. The concurrent validity of the depth camera based system was investigated by comparing measures of mid-trunk volume to a gold-standard. The repeatability and acceptability of the depth camera system was assessed in people with obesity at a clinic. There was evidence of a fixed systematic difference between the depth camera system and the gold standard but excellent correlation between volume estimates (r 2 =0.997), with little evidence of proportional bias. The depth camera system was highly repeatable - low typical error (0.192L), high intraclass correlation coefficient (>0.999) and low technical error of measurement (0.64%). Depth camera based 3D surface imaging was also acceptable to people with obesity. It is feasible (valid, repeatable and acceptable) to use a low cost, flexible 3D surface imaging system to monitor the body size and shape of people with obesity in a clinical setting. Copyright © 2018 Asia Oceania Association for the Study of Obesity. Published by Elsevier Ltd. All rights reserved.
Can Commercial Digital Cameras Be Used as Multispectral Sensors? A Crop Monitoring Test.
Lebourgeois, Valentine; Bégué, Agnès; Labbé, Sylvain; Mallavan, Benjamin; Prévot, Laurent; Roux, Bruno
2008-11-17
The use of consumer digital cameras or webcams to characterize and monitor different features has become prevalent in various domains, especially in environmental applications. Despite some promising results, such digital camera systems generally suffer from signal aberrations due to the on-board image processing systems and thus offer limited quantitative data acquisition capability. The objective of this study was to test a series of radiometric corrections having the potential to reduce radiometric distortions linked to camera optics and environmental conditions, and to quantify the effects of these corrections on our ability to monitor crop variables. In 2007, we conducted a five-month experiment on sugarcane trial plots using original RGB and modified RGB (Red-Edge and NIR) cameras fitted onto a light aircraft. The camera settings were kept unchanged throughout the acquisition period and the images were recorded in JPEG and RAW formats. These images were corrected to eliminate the vignetting effect, and normalized between acquisition dates. Our results suggest that 1) the use of unprocessed image data did not improve the results of image analyses; 2) vignetting had a significant effect, especially for the modified camera, and 3) normalized vegetation indices calculated with vignetting-corrected images were sufficient to correct for scene illumination conditions. These results are discussed in the light of the experimental protocol and recommendations are made for the use of these versatile systems for quantitative remote sensing of terrestrial surfaces.
iPhone 4s and iPhone 5s Imaging of the Eye.
Jalil, Maaz; Ferenczy, Sandor R; Shields, Carol L
2017-01-01
To evaluate the technical feasibility of a consumer-grade cellular iPhone camera as an ocular imaging device compared to existing ophthalmic imaging equipment for documentation purposes. A comparison of iPhone 4s and 5s images was made with external facial images (macrophotography) using Nikon cameras, slit-lamp images (microphotography) using Zeiss photo slit-lamp camera, and fundus images (fundus photography) using RetCam II. In an analysis of six consecutive patients with ophthalmic conditions, both iPhones achieved documentation of external findings (macrophotography) using standard camera modality, tap to focus, and built-in flash. Both iPhones achieved documentation of anterior segment findings (microphotography) during slit-lamp examination through oculars. Both iPhones achieved fundus imaging using standard video modality with continuous iPhone illumination through an ophthalmic lens. Comparison to standard ophthalmic cameras, macrophotography and microphotography were excellent. In comparison to RetCam fundus photography, iPhone fundus photography revealed smaller field and was technically more difficult to obtain, but the quality was nearly similar to RetCam. iPhone versions 4s and 5s can provide excellent ophthalmic macrophotography and microphotography and adequate fundus photography. We believe that iPhone imaging could be most useful in settings where expensive, complicated, and cumbersome imaging equipment is unavailable.
A time-resolved image sensor for tubeless streak cameras
NASA Astrophysics Data System (ADS)
Yasutomi, Keita; Han, SangMan; Seo, Min-Woong; Takasawa, Taishi; Kagawa, Keiichiro; Kawahito, Shoji
2014-03-01
This paper presents a time-resolved CMOS image sensor with draining-only modulation (DOM) pixels for tube-less streak cameras. Although the conventional streak camera has high time resolution, the device requires high voltage and bulky system due to the structure with a vacuum tube. The proposed time-resolved imager with a simple optics realize a streak camera without any vacuum tubes. The proposed image sensor has DOM pixels, a delay-based pulse generator, and a readout circuitry. The delay-based pulse generator in combination with an in-pixel logic allows us to create and to provide a short gating clock to the pixel array. A prototype time-resolved CMOS image sensor with the proposed pixel is designed and implemented using 0.11um CMOS image sensor technology. The image array has 30(Vertical) x 128(Memory length) pixels with the pixel pitch of 22.4um. .
Plenoptic Imager for Automated Surface Navigation
NASA Technical Reports Server (NTRS)
Zollar, Byron; Milder, Andrew; Milder, Andrew; Mayo, Michael
2010-01-01
An electro-optical imaging device is capable of autonomously determining the range to objects in a scene without the use of active emitters or multiple apertures. The novel, automated, low-power imaging system is based on a plenoptic camera design that was constructed as a breadboard system. Nanohmics proved feasibility of the concept by designing an optical system for a prototype plenoptic camera, developing simulated plenoptic images and range-calculation algorithms, constructing a breadboard prototype plenoptic camera, and processing images (including range calculations) from the prototype system. The breadboard demonstration included an optical subsystem comprised of a main aperture lens, a mechanical structure that holds an array of micro lenses at the focal distance from the main lens, and a structure that mates a CMOS imaging sensor the correct distance from the micro lenses. The demonstrator also featured embedded electronics for camera readout, and a post-processor executing image-processing algorithms to provide ranging information.
NASA Technical Reports Server (NTRS)
Duda, David P.; Khlopenkov, Konstantin V.; Thiemann, Mandana; Palikonda, Rabindra; Sun-Mack, Sunny; Minnis, Patrick; Su, Wenying
2016-01-01
With the launch of the Deep Space Climate Observatory (DSCOVR), new estimates of the daytime Earth radiation budget can be computed from a combination of measurements from the two Earth-observing sensors onboard the spacecraft, the Earth Polychromatic Imaging Camera (EPIC) and the National Institute of Standards and Technology Advanced Radiometer (NISTAR). Although these instruments can provide accurate top-of-atmosphere (TOA) radiance measurements, they lack sufficient resolution to provide details on small-scale surface and cloud properties. Previous studies have shown that these properties have a strong influence on the anisotropy of the radiation at the TOA, and ignoring such effects can result in large TOA-flux errors. To overcome these effects, high-resolution scene identification is needed for accurate Earth radiation budget estimation. Selected radiance and cloud property data measured and derived from several low earth orbit (LEO, including NASA Terra and Aqua MODIS, NOAA AVHRR) and geosynchronous (GEO, including GOES (east and west), METEOSAT, INSAT-3D, MTSAT-2, and HIMAWARI-8) satellite imagers were collected to create hourly 5-km resolution global composites of data necessary to compute angular distribution models (ADM) for reflected shortwave (SW) and longwave (LW) radiation. The satellite data provide an independent source of radiance measurements and scene identification information necessary to construct ADMs that are used to determine the daytime Earth radiation budget. To optimize spatial matching between EPIC measurements and the high-resolution composite cloud properties, LEO/GEO retrievals within the EPIC fields of view (FOV) are convolved to the EPIC point spread function (PSF) in a similar manner to the Clouds and the Earth's Radiant Energy System (CERES) Single Scanner Footprint TOA/Surface Fluxes and Clouds (SSF) product. Examples of the merged LEO/GEO/EPIC product will be presented, describing the chosen radiance and cloud properties and details of how data from the multi-satellite measurements are selected.
NASA Astrophysics Data System (ADS)
Duda, D. P.; Khlopenkov, K. V.; Palikonda, R.; Khaiyer, M. M.; Minnis, P.; Su, W.; Sun-Mack, S.
2016-12-01
With the launch of the Deep Space Climate Observatory (DSCOVR), new estimates of the daytime Earth radiation budget can computed from a combination of measurements from the two Earth-observing sensors onboard the spacecraft, the Earth Polychromatic Imaging Camera (EPIC) and the National Institute of Standards and Technology Advanced Radiometer (NISTAR). Although these instruments can provide accurate top-of-atmosphere (TOA) radiance measurements, they lack sufficient resolution to provide details on small-scale surface and cloud properties. Previous studies have shown that these properties have a strong influence on the anisotropy of the radiation at the TOA, and ignoring such effects can result in large TOA-flux errors. To overcome these effects, high-resolution scene identification is needed for accurate Earth radiation budget estimation. Selected radiance and cloud property data measured and derived from several low earth orbit (LEO, including NASA Terra and Aqua MODIS, NOAA AVHRR) and geosynchronous (GEO, including GOES (east and west), METEOSAT, INSAT-3D, MTSAT-2, and HIMAWARI-8) satellite imagers were collected to create hourly 5-km resolution global composites of data necessary to compute angular distribution models (ADM) for reflected shortwave (SW) and longwave (LW) radiation. The satellite data provide an independent source of radiance measurements and scene identification information necessary to construct ADMs that are used to determine the daytime Earth radiation budget. To optimize spatial matching between EPIC measurements and the high-resolution composite cloud properties, LEO/GEO retrievals within the EPIC fields of view (FOV) are convolved to the EPIC point spread function (PSF) in a similar manner to the Clouds and the Earth's Radiant Energy System (CERES) Single Scanner Footprint TOA/Surface Fluxes and Clouds (SSF) product. Examples of the merged LEO/GEO/EPIC product will be presented, describing the chosen radiance and cloud properties and details of how data from the multi-satellite measurements are selected.
Babcock, Hazen P
2018-01-29
This work explores the use of industrial grade CMOS cameras for single molecule localization microscopy (SMLM). We show that industrial grade CMOS cameras approach the performance of scientific grade CMOS cameras at a fraction of the cost. This makes it more economically feasible to construct high-performance imaging systems with multiple cameras that are capable of a diversity of applications. In particular we demonstrate the use of industrial CMOS cameras for biplane, multiplane and spectrally resolved SMLM. We also provide open-source software for simultaneous control of multiple CMOS cameras and for the reduction of the movies that are acquired to super-resolution images.
Space-based infrared sensors of space target imaging effect analysis
NASA Astrophysics Data System (ADS)
Dai, Huayu; Zhang, Yasheng; Zhou, Haijun; Zhao, Shuang
2018-02-01
Target identification problem is one of the core problem of ballistic missile defense system, infrared imaging simulation is an important means of target detection and recognition. This paper first established the space-based infrared sensors ballistic target imaging model of point source on the planet's atmosphere; then from two aspects of space-based sensors camera parameters and target characteristics simulated atmosphere ballistic target of infrared imaging effect, analyzed the camera line of sight jitter, camera system noise and different imaging effects of wave on the target.
Correction And Use Of Jitter In Television Images
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Fender, Derek H.; Fender, Antony R. H.
1989-01-01
Proposed system stabilizes jittering television image and/or measures jitter to extract information on motions of objects in image. Alternative version, system controls lateral motion on camera to generate stereoscopic views to measure distances to objects. In another version, motion of camera controlled to keep object in view. Heart of system is digital image-data processor called "jitter-miser", which includes frame buffer and logic circuits to correct for jitter in image. Signals from motion sensors on camera sent to logic circuits and processed into corrections for motion along and across line of sight.
NASA Astrophysics Data System (ADS)
Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.
2017-09-01
Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In imagebased VR, realistic VR scenes can be generated with lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite Systems (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.
Fluorescent image tracking velocimeter
Shaffer, Franklin D.
1994-01-01
A multiple-exposure fluorescent image tracking velocimeter (FITV) detects and measures the motion (trajectory, direction and velocity) of small particles close to light scattering surfaces. The small particles may follow the motion of a carrier medium such as a liquid, gas or multi-phase mixture, allowing the motion of the carrier medium to be observed, measured and recorded. The main components of the FITV include: (1) fluorescent particles; (2) a pulsed fluorescent excitation laser source; (3) an imaging camera; and (4) an image analyzer. FITV uses fluorescing particles excited by visible laser light to enhance particle image detectability near light scattering surfaces. The excitation laser light is filtered out before reaching the imaging camera allowing the fluoresced wavelengths emitted by the particles to be detected and recorded by the camera. FITV employs multiple exposures of a single camera image by pulsing the excitation laser light for producing a series of images of each particle along its trajectory. The time-lapsed image may be used to determine trajectory and velocity and the exposures may be coded to derive directional information.
HERCULES/MSI: a multispectral imager with geolocation for STS-70
NASA Astrophysics Data System (ADS)
Simi, Christopher G.; Kindsfather, Randy; Pickard, Henry; Howard, William, III; Norton, Mark C.; Dixon, Roberta
1995-11-01
A multispectral intensified CCD imager combined with a ring laser gyroscope based inertial measurement unit was flown on the Space Shuttle Discovery from July 13-22, 1995 (Space Transport System Flight No. 70, STS-70). The camera includes a six position filter wheel, a third generation image intensifier, and a CCD camera. The camera is integrated with a laser gyroscope system that determines the ground position of the imagery to an accuracy of better than three nautical miles. The camera has two modes of operation; a panchromatic mode for high-magnification imaging [ground sample distance (GSD) of 4 m], or a multispectral mode consisting of six different user-selectable spectral ranges at reduced magnification (12 m GSD). This paper discusses the system hardware and technical trade-offs involved with camera optimization, and presents imagery observed during the shuttle mission.
Investigation into the use of photoanthropometry in facial image comparison.
Moreton, Reuben; Morley, Johanna
2011-10-10
Photoanthropometry is a metric based facial image comparison technique. Measurements of the face are taken from an image using predetermined facial landmarks. Measurements are then converted to proportionality indices (PIs) and compared to PIs from another facial image. Photoanthropometry has been presented as a facial image comparison technique in UK courts for over 15 years. It is generally accepted that extrinsic factors (e.g. orientation of the head, camera angle and distance from the camera) can cause discrepancies in anthropometric measurements of the face from photographs. However there has been limited empirical research into quantifying the influence of such variables. The aim of this study was to determine the reliability of photoanthropometric measurements between different images of the same individual taken with different angulations of the camera. The study examined the facial measurements of 25 individuals from high resolution photographs, taken at different horizontal and vertical camera angles in a controlled environment. Results show that the degree of variability in facial measurements of the same individual due to variations in camera angle can be as great as the variability of facial measurements between different individuals. Results suggest that photoanthropometric facial comparison, as it is currently practiced, is unsuitable for elimination purposes. Preliminary investigations into the effects of distance from camera and image resolution in poor quality images suggest that such images are not an accurate representation of an individuals face, however further work is required. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Joint estimation of high resolution images and depth maps from light field cameras
NASA Astrophysics Data System (ADS)
Ohashi, Kazuki; Takahashi, Keita; Fujii, Toshiaki
2014-03-01
Light field cameras are attracting much attention as tools for acquiring 3D information of a scene through a single camera. The main drawback of typical lenselet-based light field cameras is the limited resolution. This limitation comes from the structure where a microlens array is inserted between the sensor and the main lens. The microlens array projects 4D light field on a single 2D image sensor at the sacrifice of the resolution; the angular resolution and the position resolution trade-off under the fixed resolution of the image sensor. This fundamental trade-off remains after the raw light field image is converted to a set of sub-aperture images. The purpose of our study is to estimate a higher resolution image from low resolution sub-aperture images using a framework of super-resolution reconstruction. In this reconstruction, these sub-aperture images should be registered as accurately as possible. This registration is equivalent to depth estimation. Therefore, we propose a method where super-resolution and depth refinement are performed alternatively. Most of the process of our method is implemented by image processing operations. We present several experimental results using a Lytro camera, where we increased the resolution of a sub-aperture image by three times horizontally and vertically. Our method can produce clearer images compared to the original sub-aperture images and the case without depth refinement.
A novel super-resolution camera model
NASA Astrophysics Data System (ADS)
Shao, Xiaopeng; Wang, Yi; Xu, Jie; Wang, Lin; Liu, Fei; Luo, Qiuhua; Chen, Xiaodong; Bi, Xiangli
2015-05-01
Aiming to realize super resolution(SR) to single image and video reconstruction, a super resolution camera model is proposed for the problem that the resolution of the images obtained by traditional cameras behave comparatively low. To achieve this function we put a certain driving device such as piezoelectric ceramics in the camera. By controlling the driving device, a set of continuous low resolution(LR) images can be obtained and stored instantaneity, which reflect the randomness of the displacements and the real-time performance of the storage very well. The low resolution image sequences have different redundant information and some particular priori information, thus it is possible to restore super resolution image factually and effectively. The sample method is used to derive the reconstruction principle of super resolution, which analyzes the possible improvement degree of the resolution in theory. The super resolution algorithm based on learning is used to reconstruct single image and the variational Bayesian algorithm is simulated to reconstruct the low resolution images with random displacements, which models the unknown high resolution image, motion parameters and unknown model parameters in one hierarchical Bayesian framework. Utilizing sub-pixel registration method, a super resolution image of the scene can be reconstructed. The results of 16 images reconstruction show that this camera model can increase the image resolution to 2 times, obtaining images with higher resolution in currently available hardware levels.
Mitigation of Atmospheric Effects on Imaging Systems
2004-03-31
focal length. The imaging system had two cameras: an Electrim camera sensitive in the visible (0.6 µ m) waveband and an Amber QWIP infrared camera...sensitive in the 9–micron region. The Amber QWIP infrared camera had 256x256 pixels, pixel pitch 38 mµ , focal length of 1.8 m, FOV of 5.4 x5.4 mr...each day. Unfortunately, signals from the different read ports of the Electrim camera picked up noise on their way to the digitizer, and this resulted
Imaging Emission Spectra with Handheld and Cellphone Cameras
NASA Astrophysics Data System (ADS)
Sitar, David
2012-12-01
As point-and-shoot digital camera technology advances it is becoming easier to image spectra in a laboralory setting on a shoestring budget and get immediale results. With this in mind, I wanted to test three cameras to see how their results would differ. Two undergraduate physics students and I used one handheld 7.1 megapixel (MP) digital Cannon point-and-shoot auto focusing camera and two different cellphone cameras: one at 6.1 MP and the other at 5.1 MP.
Laser-sodium interaction for the polychromatic laser guide star project
NASA Astrophysics Data System (ADS)
Bellanger, Veronique; Petit, Alain D.
2002-02-01
We developed a code aimed at determining the laser parameters leading to the maximum return flux of photons at 0.33 micrometers for a polychromatic sodium Laser Guide Star. This software relies upon a full 48-level collisionless and magnetic-field-free density-matrix description of the hyperfine structure of Na and includes Doppler broadening and Zeeman degeneracy. Experimental validation of BEACON was conducted on the SILVA facilities and will also be discussed in this paper.
Use of a Digital Camera To Document Student Observations in a Microbiology Laboratory Class.
ERIC Educational Resources Information Center
Mills, David A.; Kelley, Kevin; Jones, Michael
2001-01-01
Points out the lack of microscopic images of wine-related microbes. Uses a digital camera during a wine microbiology laboratory to capture student-generated microscope images. Discusses the advantages of using a digital camera in a teaching lab. (YDS)
Lincoln Penny on Mars in Camera Calibration Target
2012-09-10
The penny in this image is part of a camera calibration target on NASA Mars rover Curiosity. The MAHLI camera on the rover took this image of the MAHLI calibration target during the 34th Martian day of Curiosity work on Mars, Sept. 9, 2012.
Cheetah: A high frame rate, high resolution SWIR image camera
NASA Astrophysics Data System (ADS)
Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob
2008-10-01
A high resolution, high frame rate InGaAs based image sensor and associated camera has been developed. The sensor and the camera are capable of recording and delivering more than 1700 full 640x512pixel frames per second. The FPA utilizes a low lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logics allows for four different sub windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range [0.9-1.7 μm] and can be further extended into the Visible and NIR range. The Cheetah camera has max 16 GB of on-board memory to store the acquired images and transfer the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full CameralinkTM interface to directly stream the data to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.
Memory color of natural familiar objects: effects of surface texture and 3-D shape.
Vurro, Milena; Ling, Yazhu; Hurlbert, Anya C
2013-06-28
Natural objects typically possess characteristic contours, chromatic surface textures, and three-dimensional shapes. These diagnostic features aid object recognition, as does memory color, the color most associated in memory with a particular object. Here we aim to determine whether polychromatic surface texture, 3-D shape, and contour diagnosticity improve memory color for familiar objects, separately and in combination. We use solid three-dimensional familiar objects rendered with their natural texture, which participants adjust in real time to match their memory color for the object. We analyze mean, accuracy, and precision of the memory color settings relative to the natural color of the objects under the same conditions. We find that in all conditions, memory colors deviate slightly but significantly in the same direction from the natural color. Surface polychromaticity, shape diagnosticity, and three dimensionality each improve memory color accuracy, relative to uniformly colored, generic, or two-dimensional shapes, respectively. Shape diagnosticity improves the precision of memory color also, and there is a trend for polychromaticity to do so as well. Differently from other studies, we find that the object contour alone also improves memory color. Thus, enhancing the naturalness of the stimulus, in terms of either surface or shape properties, enhances the accuracy and precision of memory color. The results support the hypothesis that memory color representations are polychromatic and are synergistically linked with diagnostic shape representations.
Design and fabrication of a CCD camera for use with relay optics in solar X-ray astronomy
NASA Technical Reports Server (NTRS)
1984-01-01
Configured as a subsystem of a sounding rocket experiment, a camera system was designed to record and transmit an X-ray image focused on a charge coupled device. The camera consists of a X-ray sensitive detector and the electronics for processing and transmitting image data. The design and operation of the camera are described. Schematics are included.
An image-tube camera for cometary spectrography
NASA Astrophysics Data System (ADS)
Mamadov, O.
The paper discusses the mounting of an image tube camera. The cathode is of antimony, sodium, potassium, and cesium. The parts used for mounting are of acrylic plastic and a fabric-based laminate. A mounting design that does not include cooling is presented. The aperture ratio of the camera is 1:27. Also discussed is the way that the camera is joined to the spectrograph.
NASA Astrophysics Data System (ADS)
Greynolds, Alan W.
2013-09-01
Results from the GelOE optical engineering software are presented for the through-focus, monochromatic coherent and polychromatic incoherent imaging of a radial "star" target for equivalent t-number circular and Gaussian pupils. The FFT-based simulations are carried out using OpenMP threading on a multi-core desktop computer, with and without the aid of a many-core NVIDIA GPU accessing its cuFFT library. It is found that a custom FFT optimized for the 12-core host has similar performance to a simply implemented 256-core GPU FFT. A more sophisticated version of the latter but tuned to reduce overhead on a 448-core GPU is 20 to 28 times faster than a basic FFT implementation running on one CPU core.
Camera Trajectory fromWide Baseline Images
NASA Astrophysics Data System (ADS)
Havlena, M.; Torii, A.; Pajdla, T.
2008-09-01
Camera trajectory estimation, which is closely related to the structure from motion computation, is one of the fundamental tasks in computer vision. Reliable camera trajectory estimation plays an important role in 3D reconstruction, self localization, and object recognition. There are essential issues for a reliable camera trajectory estimation, for instance, choice of the camera and its geometric projection model, camera calibration, image feature detection and description, and robust 3D structure computation. Most of approaches rely on classical perspective cameras because of the simplicity of their projection models and ease of their calibration. However, classical perspective cameras offer only a limited field of view, and thus occlusions and sharp camera turns may cause that consecutive frames look completely different when the baseline becomes longer. This makes the image feature matching very difficult (or impossible) and the camera trajectory estimation fails under such conditions. These problems can be avoided if omnidirectional cameras, e.g. a fish-eye lens convertor, are used. The hardware which we are using in practice is a combination of Nikon FC-E9 mounted via a mechanical adaptor onto a Kyocera Finecam M410R digital camera. Nikon FC-E9 is a megapixel omnidirectional addon convertor with 180° view angle which provides images of photographic quality. Kyocera Finecam M410R delivers 2272×1704 images at 3 frames per second. The resulting combination yields a circular view of diameter 1600 pixels in the image. Since consecutive frames of the omnidirectional camera often share a common region in 3D space, the image feature matching is often feasible. On the other hand, the calibration of these cameras is non-trivial and is crucial for the accuracy of the resulting 3D reconstruction. We calibrate omnidirectional cameras off-line using the state-of-the-art technique and Mičušík's two-parameter model, that links the radius of the image point r to the angle θ of its corresponding rays w.r.t. the optical axis as θ = ar 1+br2 . After a successful calibration, we know the correspondence of the image points to the 3D optical rays in the coordinate system of the camera. The following steps aim at finding the transformation between the camera and the world coordinate systems, i.e. the pose of the camera in the 3D world, using 2D image matches. For computing 3D structure, we construct a set of tentative matches detecting different affine covariant feature regions including MSER, Harris Affine, and Hessian Affine in acquired images. These features are alternative to popular SIFT features and work comparably in our situation. Parameters of the detectors are chosen to limit the number of regions to 1-2 thousands per image. The detected regions are assigned local affine frames (LAF) and transformed into standard positions w.r.t. their LAFs. Discrete Cosine Descriptors are computed for each region in the standard position. Finally, mutual distances of all regions in one image and all regions in the other image are computed as the Euclidean distances of their descriptors and tentative matches are constructed by selecting the mutually closest pairs. Opposed to the methods using short baseline images, simpler image features which are not affine covariant cannot be used because the view point can change a lot between consecutive frames. Furthermore, feature matching has to be performed on the whole frame because no assumptions on the proximity of the consecutive projections can be made for wide baseline images. This is making the feature detection, description, and matching much more time-consuming than it is for short baseline images and limits the usage to low frame rate sequences when operating in real-time. Robust 3D structure can be computed by RANSAC which searches for the largest subset of the set of tentative matches which is, within a predefined threshold ", consistent with an epipolar geometry. We use ordered sampling as suggested in to draw 5-tuples from the list of tentative matches ordered ascendingly by the distance of their descriptors which may help to reduce the number of samples in RANSAC. From each 5-tuple, relative orientation is computed by solving the 5-point minimal relative orientation problem for calibrated cameras. Often, there are more models which are supported by a large number of matches. Thus the chance that the correct model, even if it has the largest support, will be found by running a single RANSAC is small. Work suggested to generate models by randomized sampling as in RANSAC but to use soft (kernel) voting for a parameter instead of looking for the maximal support. The best model is then selected as the one with the parameter closest to the maximum in the accumulator space. In our case, we vote in a two-dimensional accumulator for the estimated camera motion direction. However, unlike in, we do not cast votes directly by each sampled epipolar geometry but by the best epipolar geometries recovered by ordered sampling of RANSAC. With our technique, we could go up to the 98.5 % contamination of mismatches with comparable effort as simple RANSAC does for the contamination by 84 %. The relative camera orientation with the motion direction closest to the maximum in the voting space is finally selected. As already mentioned in the first paragraph, the use of camera trajectory estimates is quite wide. In we have introduced a technique for measuring the size of camera translation relatively to the observed scene which uses the dominant apical angle computed at the reconstructed scene points and is robust against mismatches. The experiments demonstrated that the measure can be used to improve the robustness of camera path computation and object recognition for methods which use a geometric, e.g. the ground plane, constraint such as does for the detection of pedestrians. Using the camera trajectories, perspective cutouts with stabilized horizon are constructed and an arbitrary object recognition routine designed to work with images acquired by perspective cameras can be used without any further modifications.
Spacecraft camera image registration
NASA Technical Reports Server (NTRS)
Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Chan, Fred N. T. (Inventor); Gamble, Donald W. (Inventor)
1987-01-01
A system for achieving spacecraft camera (1, 2) image registration comprises a portion external to the spacecraft and an image motion compensation system (IMCS) portion onboard the spacecraft. Within the IMCS, a computer (38) calculates an image registration compensation signal (60) which is sent to the scan control loops (84, 88, 94, 98) of the onboard cameras (1, 2). At the location external to the spacecraft, the long-term orbital and attitude perturbations on the spacecraft are modeled. Coefficients (K, A) from this model are periodically sent to the onboard computer (38) by means of a command unit (39). The coefficients (K, A) take into account observations of stars and landmarks made by the spacecraft cameras (1, 2) themselves. The computer (38) takes as inputs the updated coefficients (K, A) plus synchronization information indicating the mirror position (AZ, EL) of each of the spacecraft cameras (1, 2), operating mode, and starting and stopping status of the scan lines generated by these cameras (1, 2), and generates in response thereto the image registration compensation signal (60). The sources of periodic thermal errors on the spacecraft are discussed. The system is checked by calculating measurement residuals, the difference between the landmark and star locations predicted at the external location and the landmark and star locations as measured by the spacecraft cameras (1, 2).
Bio-Inspired Sensing and Imaging of Polarization Information in Nature
2008-05-04
polarization imaging,” Appl. Opt. 36, 150–155 (1997). 5. L. B. Wolff, “Polarization camera for computer vision with a beam splitter ,” J. Opt. Soc. Am. A...vision with a beam splitter ,” J. Opt. Soc. Am. A 11, 2935–2945 (1994). 2. L. B. Wolff and A. G. Andreou, “Polarization camera sensors,” Image Vis. Comput...group we have been developing various man-made, non -invasive imaging methodologies, sensing schemes, camera systems, and visualization and display
A high-sensitivity EM-CCD camera for the open port telescope cavity of SOFIA
NASA Astrophysics Data System (ADS)
Wiedemann, Manuel; Wolf, Jürgen; McGrotty, Paul; Edwards, Chris; Krabbe, Alfred
2016-08-01
The Stratospheric Observatory for Infrared Astronomy (SOFIA) has three target acquisition and tracking cameras. All three imagers originally used the same cameras, which did not meet the sensitivity requirements, due to low quantum efficiency and high dark current. The Focal Plane Imager (FPI) suffered the most from high dark current, since it operated in the aircraft cabin at room temperatures without active cooling. In early 2013 the FPI was upgraded with an iXon3 888 from Andor Techonolgy. Compared to the original cameras, the iXon3 has a factor five higher QE, thanks to its back-illuminated sensor, and orders of magnitude lower dark current, due to a thermo-electric cooler and "inverted mode operation." This leads to an increase in sensitivity of about five stellar magnitudes. The Wide Field Imager (WFI) and Fine Field Imager (FFI) shall now be upgraded with equally sensitive cameras. However, they are exposed to stratospheric conditions in flight (typical conditions: T≍-40° C, p≍ 0:1 atm) and there are no off-the-shelf CCD cameras with the performance of an iXon3, suited for these conditions. Therefore, Andor Technology and the Deutsches SOFIA Institut (DSI) are jointly developing and qualifying a camera for these conditions, based on the iXon3 888. These changes include replacement of electrical components with MIL-SPEC or industrial grade components and various system optimizations, a new data interface that allows the image data transmission over 30m of cable from the camera to the controller, a new power converter in the camera to generate all necessary operating voltages of the camera locally and a new housing that fulfills airworthiness requirements. A prototype of this camera has been built and tested in an environmental test chamber at temperatures down to T=-62° C and pressure equivalent to 50 000 ft altitude. In this paper, we will report about the development of the camera and present results from the environmental testing.
NASA Astrophysics Data System (ADS)
Watanabe, Shigeo; Takahashi, Teruo; Bennett, Keith
2017-02-01
The"scientific" CMOS (sCMOS) camera architecture fundamentally differs from CCD and EMCCD cameras. In digital CCD and EMCCD cameras, conversion from charge to the digital output is generally through a single electronic chain, and the read noise and the conversion factor from photoelectrons to digital outputs are highly uniform for all pixels, although quantum efficiency may spatially vary. In CMOS cameras, the charge to voltage conversion is separate for each pixel and each column has independent amplifiers and analog-to-digital converters, in addition to possible pixel-to-pixel variation in quantum efficiency. The "raw" output from the CMOS image sensor includes pixel-to-pixel variability in the read noise, electronic gain, offset and dark current. Scientific camera manufacturers digitally compensate the raw signal from the CMOS image sensors to provide usable images. Statistical noise in images, unless properly modeled, can introduce errors in methods such as fluctuation correlation spectroscopy or computational imaging, for example, localization microscopy using maximum likelihood estimation. We measured the distributions and spatial maps of individual pixel offset, dark current, read noise, linearity, photoresponse non-uniformity and variance distributions of individual pixels for standard, off-the-shelf Hamamatsu ORCA-Flash4.0 V3 sCMOS cameras using highly uniform and controlled illumination conditions, from dark conditions to multiple low light levels between 20 to 1,000 photons / pixel per frame to higher light conditions. We further show that using pixel variance for flat field correction leads to errors in cameras with good factory calibration.
An HDR imaging method with DTDI technology for push-broom cameras
NASA Astrophysics Data System (ADS)
Sun, Wu; Han, Chengshan; Xue, Xucheng; Lv, Hengyi; Shi, Junxia; Hu, Changhong; Li, Xiangzhi; Fu, Yao; Jiang, Xiaonan; Huang, Liang; Han, Hongyin
2018-03-01
Conventionally, high dynamic-range (HDR) imaging is based on taking two or more pictures of the same scene with different exposure. However, due to a high-speed relative motion between the camera and the scene, it is hard for this technique to be applied to push-broom remote sensing cameras. For the sake of HDR imaging in push-broom remote sensing applications, the present paper proposes an innovative method which can generate HDR images without redundant image sensors or optical components. Specifically, this paper adopts an area array CMOS (complementary metal oxide semiconductor) with the digital domain time-delay-integration (DTDI) technology for imaging, instead of adopting more than one row of image sensors, thereby taking more than one picture with different exposure. And then a new HDR image by fusing two original images with a simple algorithm can be achieved. By conducting the experiment, the dynamic range (DR) of the image increases by 26.02 dB. The proposed method is proved to be effective and has potential in other imaging applications where there is a relative motion between the cameras and scenes.
Super-resolved refocusing with a plenoptic camera
NASA Astrophysics Data System (ADS)
Zhou, Zhiliang; Yuan, Yan; Bin, Xiangli; Qian, Lulu
2011-03-01
This paper presents an approach to enhance the resolution of refocused images by super resolution methods. In plenoptic imaging, we demonstrate that the raw sensor image can be divided to a number of low-resolution angular images with sub-pixel shifts between each other. The sub-pixel shift, which defines the super-resolving ability, is mathematically derived by considering the plenoptic camera as equivalent camera arrays. We implement simulation to demonstrate the imaging process of a plenoptic camera. A high-resolution image is then reconstructed using maximum a posteriori (MAP) super resolution algorithms. Without other degradation effects in simulation, the super resolved image achieves a resolution as high as predicted by the proposed model. We also build an experimental setup to acquire light fields. With traditional refocusing methods, the image is rendered at a rather low resolution. In contrast, we implement the super-resolved refocusing methods and recover an image with more spatial details. To evaluate the performance of the proposed method, we finally compare the reconstructed images using image quality metrics like peak signal to noise ratio (PSNR).
Kang, Jin Kyu; Hong, Hyung Gil; Park, Kang Ryoung
2017-07-08
A number of studies have been conducted to enhance the pedestrian detection accuracy of intelligent surveillance systems. However, detecting pedestrians under outdoor conditions is a challenging problem due to the varying lighting, shadows, and occlusions. In recent times, a growing number of studies have been performed on visible light camera-based pedestrian detection systems using a convolutional neural network (CNN) in order to make the pedestrian detection process more resilient to such conditions. However, visible light cameras still cannot detect pedestrians during nighttime, and are easily affected by shadows and lighting. There are many studies on CNN-based pedestrian detection through the use of far-infrared (FIR) light cameras (i.e., thermal cameras) to address such difficulties. However, when the solar radiation increases and the background temperature reaches the same level as the body temperature, it remains difficult for the FIR light camera to detect pedestrians due to the insignificant difference between the pedestrian and non-pedestrian features within the images. Researchers have been trying to solve this issue by inputting both the visible light and the FIR camera images into the CNN as the input. This, however, takes a longer time to process, and makes the system structure more complex as the CNN needs to process both camera images. This research adaptively selects a more appropriate candidate between two pedestrian images from visible light and FIR cameras based on a fuzzy inference system (FIS), and the selected candidate is verified with a CNN. Three types of databases were tested, taking into account various environmental factors using visible light and FIR cameras. The results showed that the proposed method performs better than the previously reported methods.
Novel computer-based endoscopic camera
NASA Astrophysics Data System (ADS)
Rabinovitz, R.; Hai, N.; Abraham, Martin D.; Adler, Doron; Nissani, M.; Fridental, Ron; Vitsnudel, Ilia
1995-05-01
We have introduced a computer-based endoscopic camera which includes (a) unique real-time digital image processing to optimize image visualization by reducing over exposed glared areas and brightening dark areas, and by accentuating sharpness and fine structures, and (b) patient data documentation and management. The image processing is based on i Sight's iSP1000TM digital video processor chip and Adaptive SensitivityTM patented scheme for capturing and displaying images with wide dynamic range of light, taking into account local neighborhood image conditions and global image statistics. It provides the medical user with the ability to view images under difficult lighting conditions, without losing details `in the dark' or in completely saturated areas. The patient data documentation and management allows storage of images (approximately 1 MB per image for a full 24 bit color image) to any storage device installed into the camera, or to an external host media via network. The patient data which is included with every image described essential information on the patient and procedure. The operator can assign custom data descriptors, and can search for the stored image/data by typing any image descriptor. The camera optics has extended zoom range of f equals 20 - 45 mm allowing control of the diameter of the field which is displayed on the monitor such that the complete field of view of the endoscope can be displayed on all the area of the screen. All these features provide versatile endoscopic camera with excellent image quality and documentation capabilities.
Software for Acquiring Image Data for PIV
NASA Technical Reports Server (NTRS)
Wernet, Mark P.; Cheung, H. M.; Kressler, Brian
2003-01-01
PIV Acquisition (PIVACQ) is a computer program for acquisition of data for particle-image velocimetry (PIV). In the PIV system for which PIVACQ was developed, small particles entrained in a flow are illuminated with a sheet of light from a pulsed laser. The illuminated region is monitored by a charge-coupled-device camera that operates in conjunction with a data-acquisition system that includes a frame grabber and a counter-timer board, both installed in a single computer. The camera operates in "frame-straddle" mode where a pair of images can be obtained closely spaced in time (on the order of microseconds). The frame grabber acquires image data from the camera and stores the data in the computer memory. The counter/timer board triggers the camera and synchronizes the pulsing of the laser with acquisition of data from the camera. PIVPROC coordinates all of these functions and provides a graphical user interface, through which the user can control the PIV data-acquisition system. PIVACQ enables the user to acquire a sequence of single-exposure images, display the images, process the images, and then save the images to the computer hard drive. PIVACQ works in conjunction with the PIVPROC program which processes the images of particles into the velocity field in the illuminated plane.
Artifact Reduction in X-Ray CT Images of Al-Steel-Perspex Specimens Mimicking a Hip Prosthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madhogarhia, Manish; Munshi, P.; Lukose, Sijo
2008-09-26
X-ray Computed Tomography (CT) is a relatively new technique developed in the late 1970's, which enables the nondestructive visualization of the internal structure of objects. Beam hardening caused by the polychromatic spectrum is an important problem in X-ray computed tomography (X-CT). It leads to various artifacts in reconstruction images and reduces image quality. In the present work we are considering the Artifact Reduction in Total Hip Prosthesis CT Scan which is a problem of medical imaging. We are trying to reduce the cupping artifact induced by beam hardening as well as metal artifact as they exist in the CT scanmore » of a human hip after the femur is replaced by a metal implant. The correction method for beam hardening used here is based on a previous work. Simulation study for the present problem includes a phantom consisting of mild steel, aluminium and perspex mimicking the photon attenuation properties of a hum hip cross section with metal implant.« less
Medipix-based Spectral Micro-CT.
Yu, Hengyong; Xu, Qiong; He, Peng; Bennett, James; Amir, Raja; Dobbs, Bruce; Mou, Xuanqin; Wei, Biao; Butler, Anthony; Butler, Phillip; Wang, Ge
2012-12-01
Since Hounsfield's Nobel Prize winning breakthrough decades ago, X-ray CT has been widely applied in the clinical and preclinical applications - producing a huge number of tomographic gray-scale images. However, these images are often insufficient to distinguish crucial differences needed for diagnosis. They have poor soft tissue contrast due to inherent photon-count issues, involving high radiation dose. By physics, the X-ray spectrum is polychromatic, and it is now feasible to obtain multi-energy, spectral, or true-color, CT images. Such spectral images promise powerful new diagnostic information. The emerging Medipix technology promises energy-sensitive, high-resolution, accurate and rapid X-ray detection. In this paper, we will review the recent progress of Medipix-based spectral micro-CT with the emphasis on the results obtained by our team. It includes the state- of-the-art Medipix detector, the system and method of a commercial MARS (Medipix All Resolution System) spectral micro-CT, and the design and color diffusion of a hybrid spectral micro-CT.
NASA Astrophysics Data System (ADS)
Dhiman, I.; Ziesche, Ralf; Wang, Tianhao; Bilheux, Hassina; Santodonato, Lou; Tong, X.; Jiang, C. Y.; Manke, Ingo; Treimer, Wolfgang; Chatterji, Tapan; Kardjilov, Nikolay
2017-09-01
In the present study, we report a new setup for polarized neutron imaging at the ORNL High Flux Isotope Reactor CG-1D beamline using an in situ 3He polarizer and analyzer. This development is very important for extending the capabilities of the imaging instrument at ORNL providing a polarized beam with a large field-of-view, which can be further used in combination with optical devices like Wolter optics, focusing guides, or other lenses for the development of microscope arrangement. Such a setup can be of advantage for the existing and future imaging beamlines at the pulsed neutron sources. The first proof-of-concept experiment is performed to study the ferromagnetic phase transition in the Fe3Pt sample. We also demonstrate that the polychromatic neutron beam in combination with in situ 3He cells can be used as the initial step for the rapid measurement and qualitative analysis of radiographs.
Dhiman, I; Ziesche, Ralf; Wang, Tianhao; Bilheux, Hassina; Santodonato, Lou; Tong, X; Jiang, C Y; Manke, Ingo; Treimer, Wolfgang; Chatterji, Tapan; Kardjilov, Nikolay
2017-09-01
In the present study, we report a new setup for polarized neutron imaging at the ORNL High Flux Isotope Reactor CG-1D beamline using an in situ 3 He polarizer and analyzer. This development is very important for extending the capabilities of the imaging instrument at ORNL providing a polarized beam with a large field-of-view, which can be further used in combination with optical devices like Wolter optics, focusing guides, or other lenses for the development of microscope arrangement. Such a setup can be of advantage for the existing and future imaging beamlines at the pulsed neutron sources. The first proof-of-concept experiment is performed to study the ferromagnetic phase transition in the Fe 3 Pt sample. We also demonstrate that the polychromatic neutron beam in combination with in situ 3 He cells can be used as the initial step for the rapid measurement and qualitative analysis of radiographs.
A digital ISO expansion technique for digital cameras
NASA Astrophysics Data System (ADS)
Yoo, Youngjin; Lee, Kangeui; Choe, Wonhee; Park, SungChan; Lee, Seong-Deok; Kim, Chang-Yong
2010-01-01
Market's demands of digital cameras for higher sensitivity capability under low-light conditions are remarkably increasing nowadays. The digital camera market is now a tough race for providing higher ISO capability. In this paper, we explore an approach for increasing maximum ISO capability of digital cameras without changing any structure of an image sensor or CFA. Our method is directly applied to the raw Bayer pattern CFA image to avoid non-linearity characteristics and noise amplification which are usually deteriorated after ISP (Image Signal Processor) of digital cameras. The proposed method fuses multiple short exposed images which are noisy, but less blurred. Our approach is designed to avoid the ghost artifact caused by hand-shaking and object motion. In order to achieve a desired ISO image quality, both low frequency chromatic noise and fine-grain noise that usually appear in high ISO images are removed and then we modify the different layers which are created by a two-scale non-linear decomposition of an image. Once our approach is performed on an input Bayer pattern CFA image, the resultant Bayer image is further processed by ISP to obtain a fully processed RGB image. The performance of our proposed approach is evaluated by comparing SNR (Signal to Noise Ratio), MTF50 (Modulation Transfer Function), color error ~E*ab and visual quality with reference images whose exposure times are properly extended into a variety of target sensitivity.
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor)
1989-01-01
A method and apparatus is developed for obtaining a stereo image with reduced depth distortion and optimum depth resolution. Static and dynamic depth distortion and depth resolution tradeoff is provided. Cameras obtaining the images for a stereo view are converged at a convergence point behind the object to be presented in the image, and the collection-surface-to-object distance, the camera separation distance, and the focal lengths of zoom lenses for the cameras are all increased. Doubling the distances cuts the static depth distortion in half while maintaining image size and depth resolution. Dynamic depth distortion is minimized by panning a stereo view-collecting camera system about a circle which passes through the convergence point and the camera's first nodal points. Horizontal field shifting of the television fields on a television monitor brings both the monitor and the stereo views within the viewer's limit of binocular fusion.
NASA Astrophysics Data System (ADS)
Georgiou, Giota; Verdaasdonk, Rudolf M.; van der Veen, Albert; Klaessens, John H.
2017-02-01
In the development of new near-infrared (NIR) fluorescence dyes for image guided surgery, there is a need for new NIR sensitive camera systems that can easily be adjusted to specific wavelength ranges in contrast the present clinical systems that are only optimized for ICG. To test alternative camera systems, a setup was developed to mimic the fluorescence light in a tissue phantom to measure the sensitivity and resolution. Selected narrow band NIR LED's were used to illuminate a 6mm diameter circular diffuse plate to create uniform intensity controllable light spot (μW-mW) as target/source for NIR camera's. Layers of (artificial) tissue with controlled thickness could be placed on the spot to mimic a fluorescent `cancer' embedded in tissue. This setup was used to compare a range of NIR sensitive consumer's cameras for potential use in image guided surgery. The image of the spot obtained with the cameras was captured and analyzed using ImageJ software. Enhanced CCD night vision cameras were the most sensitive capable of showing intensities < 1 μW through 5 mm of tissue. However, there was no control over the automatic gain and hence noise level. NIR sensitive DSLR cameras proved relative less sensitive but could be fully manually controlled as to gain (ISO 25600) and exposure time and are therefore preferred for a clinical setting in combination with Wi-Fi remote control. The NIR fluorescence testing setup proved to be useful for camera testing and can be used for development and quality control of new NIR fluorescence guided surgery equipment.
Design, demonstration and testing of low F-number LWIR panoramic imaging relay optics
NASA Astrophysics Data System (ADS)
Furxhi, Orges; Frascati, Joe; Driggers, Ronald
2018-04-01
Panoramic imaging is inherently wide field of view. High sensitivity uncooled Long Wave Infrared (LWIR) imaging requires low F-number optics. These two requirements result in short back working distance designs that, in addition to being costly, are challenging to integrate with commercially available uncooled LWIR cameras and cores. Common challenges include the relocation of the shutter flag, custom calibration of the camera dynamic range and NUC tables, focusing, and athermalization. Solutions to these challenges add to the system cost and make panoramic uncooled LWIR cameras commercially unattractive. In this paper, we present the design of Panoramic Imaging Relay Optics (PIRO) and show imagery and test results with one of the first prototypes. PIRO designs use several reflective surfaces (generally two) to relay a panoramic scene onto a real, donut-shaped image. The PIRO donut is imaged on the focal plane of the camera using a commercially-off-the-shelf (COTS) low F-number lens. This approach results in low component cost and effortless integration with pre-calibrated commercially available cameras and lenses.
NPS assessment of color medical image displays using a monochromatic CCD camera
NASA Astrophysics Data System (ADS)
Roehrig, Hans; Gu, Xiliang; Fan, Jiahua
2012-10-01
This paper presents an approach to Noise Power Spectrum (NPS) assessment of color medical displays without using an expensive imaging colorimeter. The R, G and B color uniform patterns were shown on the display under study and the images were taken using a high resolution monochromatic camera. A colorimeter was used to calibrate the camera images. Synthetic intensity images were formed by the weighted sum of the R, G, B and the dark screen images. Finally the NPS analysis was conducted on the synthetic images. The proposed method replaces an expensive imaging colorimeter for NPS evaluation, which also suggests a potential solution for routine color medical display QA/QC in the clinical area, especially when imaging of display devices is desired
Object tracking using multiple camera video streams
NASA Astrophysics Data System (ADS)
Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford
2010-05-01
Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system involves overcoming effects of occlusions that could result in an object in partial or full view in one camera, when the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases robustness of the tracking process. Motion tracking is achieved by determining anomalies caused by the objects' movement across frames in time in each and the combined video information. The path of each object is determined heuristically. Accuracy of detection is dependent on the speed of the object as well as variations in direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.
NASA Astrophysics Data System (ADS)
Pani, R.; Pellegrini, R.; Betti, M.; De Vincentis, G.; Cinti, M. N.; Bennati, P.; Vittorini, F.; Casali, V.; Mattioli, M.; Orsolini Cencelli, V.; Navarria, F.; Bollini, D.; Moschini, G.; Iurlaro, G.; Montani, L.; de Notaristefani, F.
2007-02-01
The principal limiting factor in the clinical acceptance of scintimammography is certainly its low sensitivity for cancers sized <1 cm, mainly due to the lack of equipment specifically designed for breast imaging. The National Institute of Nuclear Physics (INFN) has been developing a new scintillation camera based on Lanthanum tri-Bromide Cerium-doped crystal (LaBr 3:Ce), that demonstrating superior imaging performances with respect to the dedicated scintillation γ-camera that was previously developed. The proposed detector consists of continuous LaBr 3:Ce scintillator crystal coupled to a Hamamatsu H8500 Flat Panel PMT. One centimeter thick crystal has been chosen to increase crystal detection efficiency. In this paper, we propose a comparison and evaluation between lanthanum γ-camera and a Multi PSPMT camera, NaI(Tl) discrete pixel based, previously developed under "IMI" Italian project for technological transfer of INFN. A phantom study has been developed to test both the cameras before introducing them in clinical trials. High resolution scans produced by LaBr 3:Ce camera showed higher tumor contrast with a detailed imaging of uptake area than pixellated NaI(Tl) dedicated camera. Furthermore, with the lanthanum camera, the Signal-to-Noise Ratio ( SNR) value was increased for a lesion as small as 5 mm, with a consequent strong improvement in detectability.
Photogrammetric Modeling and Image-Based Rendering for Rapid Virtual Environment Creation
2004-12-01
area and different methods have been proposed. Pertinent methods include: Camera Calibration , Structure from Motion, Stereo Correspondence, and Image...Based Rendering 1.1.1 Camera Calibration Determining the 3D structure of a model from multiple views becomes simpler if the intrinsic (or internal...can introduce significant nonlinearities into the image. We have found that camera calibration is a straightforward process which can simplify the
Research on inosculation between master of ceremonies or players and virtual scene in virtual studio
NASA Astrophysics Data System (ADS)
Li, Zili; Zhu, Guangxi; Zhu, Yaoting
2003-04-01
A technical principle about construction of virtual studio has been proposed where orientation tracker and telemeter has been used for improving conventional BETACAM pickup camera and connecting with the software module of the host. A model of virtual camera named Camera & Post-camera Coupling Pair has been put forward, which is different from the common model in computer graphics and has been bound to real BETACAM pickup camera for shooting. The formula has been educed to compute the foreground frame buffer image and the background frame buffer image of the virtual scene whose boundary is based on the depth information of target point of the real BETACAM pickup camera's projective ray. The effect of real-time consistency has been achieved between the video image sequences of the master of ceremonies or players and the CG video image sequences for the virtual scene in spatial position, perspective relationship and image object masking. The experimental result has shown that the technological scheme of construction of virtual studio submitted in this paper is feasible and more applicative and more effective than the existing technology to establish a virtual studio based on color-key and image synthesis with background using non-linear video editing technique.
Mertens, Jan E.J.; Roie, Martijn Van; Merckx, Jonas; Dekoninck, Wouter
2017-01-01
Abstract Digitization of specimen collections has become a key priority of many natural history museums. The camera systems built for this purpose are expensive, providing a barrier in institutes with limited funding, and therefore hampering progress. An assessment is made on whether a low cost compact camera with image stacking functionality can help expedite the digitization process in large museums or provide smaller institutes and amateur entomologists with the means to digitize their collections. Images of a professional setup were compared with the Olympus Stylus TG-4 Tough, a low-cost compact camera with internal focus stacking functions. Parameters considered include image quality, digitization speed, price, and ease-of-use. The compact camera’s image quality, although inferior to the professional setup, is exceptional considering its fourfold lower price point. Producing the image slices in the compact camera is a matter of seconds and when optimal image quality is less of a priority, the internal stacking function omits the need for dedicated stacking software altogether, further decreasing the cost and speeding up the process. In general, it is found that, aware of its limitations, this compact camera is capable of digitizing entomological collections with sufficient quality. As technology advances, more institutes and amateur entomologists will be able to easily and affordably catalogue their specimens. PMID:29134038
Introducing the depth transfer curve for 3D capture system characterization
NASA Astrophysics Data System (ADS)
Goma, Sergio R.; Atanassov, Kalin; Ramachandra, Vikas
2011-03-01
3D technology has recently made a transition from movie theaters to consumer electronic devices such as 3D cameras and camcorders. In addition to what 2D imaging conveys, 3D content also contains information regarding the scene depth. Scene depth is simulated through the strongest brain depth cue, namely retinal disparity. This can be achieved by capturing an image by horizontally separated cameras. Objects at different depths will be projected with different horizontal displacement on the left and right camera images. These images, when fed separately to either eye, leads to retinal disparity. Since the perception of depth is the single most important 3D imaging capability, an evaluation procedure is needed to quantify the depth capture characteristics. Evaluating depth capture characteristics subjectively is a very difficult task since the intended and/or unintended side effects from 3D image fusion (depth interpretation) by the brain are not immediately perceived by the observer, nor do such effects lend themselves easily to objective quantification. Objective evaluation of 3D camera depth characteristics is an important tool that can be used for "black box" characterization of 3D cameras. In this paper we propose a methodology to evaluate the 3D cameras' depth capture capabilities.
Constructing a Database from Multiple 2D Images for Camera Pose Estimation and Robot Localization
NASA Technical Reports Server (NTRS)
Wolf, Michael; Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.; Padgett, Curtis W.
2012-01-01
The LMDB (Landmark Database) Builder software identifies persistent image features (landmarks) in a scene viewed multiple times and precisely estimates the landmarks 3D world positions. The software receives as input multiple 2D images of approximately the same scene, along with an initial guess of the camera poses for each image, and a table of features matched pair-wise in each frame. LMDB Builder aggregates landmarks across an arbitrarily large collection of frames with matched features. Range data from stereo vision processing can also be passed to improve the initial guess of the 3D point estimates. The LMDB Builder aggregates feature lists across all frames, manages the process to promote selected features to landmarks, and iteratively calculates the 3D landmark positions using the current camera pose estimations (via an optimal ray projection method), and then improves the camera pose estimates using the 3D landmark positions. Finally, it extracts image patches for each landmark from auto-selected key frames and constructs the landmark database. The landmark database can then be used to estimate future camera poses (and therefore localize a robotic vehicle that may be carrying the cameras) by matching current imagery to landmark database image patches and using the known 3D landmark positions to estimate the current pose.
Using DSLR cameras in digital holography
NASA Astrophysics Data System (ADS)
Hincapié-Zuluaga, Diego; Herrera-Ramírez, Jorge; García-Sucerquia, Jorge
2017-08-01
In Digital Holography (DH), the size of the bidimensional image sensor to record the digital hologram, plays a key role on the performance of this imaging technique; the larger the size of the camera sensor, the better the quality of the final reconstructed image. Scientific cameras with large formats are offered in the market, but their cost and availability limit their use as a first option when implementing DH. Nowadays, DSLR cameras provide an easy-access alternative that is worthwhile to be explored. The DSLR cameras are a wide, commercial, and available option that in comparison with traditional scientific cameras, offer a much lower cost per effective pixel over a large sensing area. However, in the DSLR cameras, with their RGB pixel distribution, the sampling of information is different to the sampling in monochrome cameras usually employed in DH. This fact has implications in their performance. In this work, we discuss why DSLR cameras are not extensively used for DH, taking into account the problem reported by different authors of object replication. Simulations of DH using monochromatic and DSLR cameras are presented and a theoretical deduction for the replication problem using the Fourier theory is also shown. Experimental results of DH implementation using a DSLR camera show the replication problem.
Evaluation of Real-Time Hand Motion Tracking Using a Range Camera and the Mean-Shift Algorithm
NASA Astrophysics Data System (ADS)
Lahamy, H.; Lichti, D.
2011-09-01
Several sensors have been tested for improving the interaction between humans and machines including traditional web cameras, special gloves, haptic devices, cameras providing stereo pairs of images and range cameras. Meanwhile, several methods are described in the literature for tracking hand motion: the Kalman filter, the mean-shift algorithm and the condensation algorithm. In this research, the combination of a range camera and the simple version of the mean-shift algorithm has been evaluated for its capability for hand motion tracking. The evaluation was assessed in terms of position accuracy of the tracking trajectory in x, y and z directions in the camera space and the time difference between image acquisition and image display. Three parameters have been analyzed regarding their influence on the tracking process: the speed of the hand movement, the distance between the camera and the hand and finally the integration time of the camera. Prior to the evaluation, the required warm-up time of the camera has been measured. This study has demonstrated the suitability of the range camera used in combination with the mean-shift algorithm for real-time hand motion tracking but for very high speed hand movement in the traverse plane with respect to the camera, the tracking accuracy is low and requires improvement.
Quigley, Elizabeth A; Tokay, Barbara A; Jewell, Sarah T; Marchetti, Michael A; Halpern, Allan C
2015-08-01
Photographs are invaluable dermatologic diagnostic, management, research, teaching, and documentation tools. Digital Imaging and Communications in Medicine (DICOM) standards exist for many types of digital medical images, but there are no DICOM standards for camera-acquired dermatologic images to date. To identify and describe existing or proposed technology and technique standards for camera-acquired dermatologic images in the scientific literature. Systematic searches of the PubMed, EMBASE, and Cochrane databases were performed in January 2013 using photography and digital imaging, standardization, and medical specialty and medical illustration search terms and augmented by a gray literature search of 14 websites using Google. Two reviewers independently screened titles of 7371 unique publications, followed by 3 sequential full-text reviews, leading to the selection of 49 publications with the most recent (1985-2013) or detailed description of technology or technique standards related to the acquisition or use of images of skin disease (or related conditions). No universally accepted existing technology or technique standards for camera-based digital images in dermatology were identified. Recommendations are summarized for technology imaging standards, including spatial resolution, color resolution, reproduction (magnification) ratios, postacquisition image processing, color calibration, compression, output, archiving and storage, and security during storage and transmission. Recommendations are also summarized for technique imaging standards, including environmental conditions (lighting, background, and camera position), patient pose and standard view sets, and patient consent, privacy, and confidentiality. Proposed standards for specific-use cases in total body photography, teledermatology, and dermoscopy are described. The literature is replete with descriptions of obtaining photographs of skin disease, but universal imaging standards have not been developed, validated, and adopted to date. Dermatologic imaging is evolving without defined standards for camera-acquired images, leading to variable image quality and limited exchangeability. The development and adoption of universal technology and technique standards may first emerge in scenarios when image use is most associated with a defined clinical benefit.
Super-resolved all-refocused image with a plenoptic camera
NASA Astrophysics Data System (ADS)
Wang, Xiang; Li, Lin; Hou, Guangqi
2015-12-01
This paper proposes an approach to produce the super-resolution all-refocused images with the plenoptic camera. The plenoptic camera can be produced by putting a micro-lens array between the lens and the sensor in a conventional camera. This kind of camera captures both the angular and spatial information of the scene in one single shot. A sequence of digital refocused images, which are refocused at different depth, can be produced after processing the 4D light field captured by the plenoptic camera. The number of the pixels in the refocused image is the same as that of the micro-lens in the micro-lens array. Limited number of the micro-lens will result in poor low resolution refocused images. Therefore, not enough details will exist in these images. Such lost details, which are often high frequency information, are important for the in-focus part in the refocused image. We decide to super-resolve these in-focus parts. The result of image segmentation method based on random walks, which works on the depth map produced from the 4D light field data, is used to separate the foreground and background in the refocused image. And focusing evaluation function is employed to determine which refocused image owns the clearest foreground part and which one owns the clearest background part. Subsequently, we employ single image super-resolution method based on sparse signal representation to process the focusing parts in these selected refocused images. Eventually, we can obtain the super-resolved all-focus image through merging the focusing background part and the focusing foreground part in the way of digital signal processing. And more spatial details will be kept in these output images. Our method will enhance the resolution of the refocused image, and just the refocused images owning the clearest foreground and background need to be super-resolved.
A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration
Rau, Jiann-Yeou; Yeh, Po-Chia
2012-01-01
The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Base on this invariant character, the whole 3D modeling pipeline can be performed completely automatically, once the whole system has been calibrated and the software was seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Images observed include that of a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum. PMID:23112656
Development of an Ultra-Violet Digital Camera for Volcanic Sulfur Dioxide Imaging
NASA Astrophysics Data System (ADS)
Bluth, G. J.; Shannon, J. M.; Watson, I. M.; Prata, F. J.; Realmuto, V. J.
2006-12-01
In an effort to improve monitoring of passive volcano degassing, we have constructed and tested a digital camera for quantifying the sulfur dioxide (SO2) content of volcanic plumes. The camera utilizes a bandpass filter to collect photons in the ultra-violet (UV) region where SO2 selectively absorbs UV light. SO2 is quantified by imaging calibration cells of known SO2 concentrations. Images of volcanic SO2 plumes were collected at four active volcanoes with persistent passive degassing: Villarrica, located in Chile, and Santiaguito, Fuego, and Pacaya, located in Guatemala. Images were collected from distances ranging between 4 and 28 km away, with crisp detection up to approximately 16 km. Camera set-up time in the field ranges from 5-10 minutes and images can be recorded in as rapidly as 10-second intervals. Variable in-plume concentrations can be observed and accurate plume speeds (or rise rates) can readily be determined by tracing individual portions of the plume within sequential images. Initial fluxes computed from camera images require a correction for the effects of environmental light scattered into the field of view. At Fuego volcano, simultaneous measurements of corrected SO2 fluxes with the camera and a Correlation Spectrometer (COSPEC) agreed within 25 percent. Experiments at the other sites were equally encouraging, and demonstrated the camera's ability to detect SO2 under demanding meteorological conditions. This early work has shown great success in imaging SO2 plumes and offers promise for volcano monitoring due to its rapid deployment and data processing capabilities, relatively low cost, and improved interpretation afforded by synoptic plume coverage from a range of distances.
A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.
Rau, Jiann-Yeou; Yeh, Po-Chia
2012-01-01
The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Base on this invariant character, the whole 3D modeling pipeline can be performed completely automatically, once the whole system has been calibrated and the software was seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Images observed include that of a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum.
de Lasarte, Marta; Pujol, Jaume; Arjona, Montserrat; Vilaseca, Meritxell
2007-01-10
We present an optimized linear algorithm for the spatial nonuniformity correction of a CCD color camera's imaging system and the experimental methodology developed for its implementation. We assess the influence of the algorithm's variables on the quality of the correction, that is, the dark image, the base correction image, and the reference level, and the range of application of the correction using a uniform radiance field provided by an integrator cube. The best spatial nonuniformity correction is achieved by having a nonzero dark image, by using an image with a mean digital level placed in the linear response range of the camera as the base correction image and taking the mean digital level of the image as the reference digital level. The response of the CCD color camera's imaging system to the uniform radiance field shows a high level of spatial uniformity after the optimized algorithm has been applied, which also allows us to achieve a high-quality spatial nonuniformity correction of captured images under different exposure conditions.
Performance evaluation of low-cost airglow cameras for mesospheric gravity wave measurements
NASA Astrophysics Data System (ADS)
Suzuki, S.; Shiokawa, K.
2016-12-01
Atmospheric gravity waves significantly contribute to the wind/thermal balances in the mesosphere and lower thermosphere (MLT) through their vertical transport of horizontal momentum. It has been reported that the gravity wave momentum flux preferentially associated with the scale of the waves; the momentum fluxes of the waves with a horizontal scale of 10-100 km are particularly significant. Airglow imaging is a useful technique to observe two-dimensional structure of small-scale (<100 km) gravity waves in the MLT region and has been used to investigate global behaviour of the waves. Recent studies with simultaneous/multiple airglow cameras have derived spatial extent of the MLT waves. Such network imaging observations are advantageous to ever better understanding of coupling between the lower and upper atmosphere via gravity waves. In this study, we newly developed low-cost airglow cameras to enlarge the airglow imaging network. Each of the cameras has a fish-eye lens with a 185-deg field-of-view and equipped with a CCD video camera (WATEC WAT-910HX) ; the camera is small (W35.5 x H36.0 x D63.5 mm) and inexpensive, much more than the airglow camera used for the existing ground-based network (Optical Mesosphere Thermosphere Imagers (OMTI) operated by Solar-Terrestrial Environmental Laboratory, Nagoya University), and has a CCD sensor with 768 x 494 pixels that is highly sensitive enough to detect the mesospheric OH airglow emission perturbations. In this presentation, we will report some results of performance evaluation of this camera made at Shigaraki (35-deg N, 136-deg E), Japan, where is one of the OMTI station. By summing 15-images (i.e., 1-min composition of the images) we recognised clear gravity wave patterns in the images with comparable quality to the OMTI's image. Outreach and educational activities based on this research will be also reported.
Digital Camera Control for Faster Inspection
NASA Technical Reports Server (NTRS)
Brown, Katharine; Siekierski, James D.; Mangieri, Mark L.; Dekome, Kent; Cobarruvias, John; Piplani, Perry J.; Busa, Joel
2009-01-01
Digital Camera Control Software (DCCS) is a computer program for controlling a boom and a boom-mounted camera used to inspect the external surface of a space shuttle in orbit around the Earth. Running in a laptop computer in the space-shuttle crew cabin, DCCS commands integrated displays and controls. By means of a simple one-button command, a crewmember can view low- resolution images to quickly spot problem areas and can then cause a rapid transition to high- resolution images. The crewmember can command that camera settings apply to a specific small area of interest within the field of view of the camera so as to maximize image quality within that area. DCCS also provides critical high-resolution images to a ground screening team, which analyzes the images to assess damage (if any); in so doing, DCCS enables the team to clear initially suspect areas more quickly than would otherwise be possible and further saves time by minimizing the probability of re-imaging of areas already inspected. On the basis of experience with a previous version (2.0) of the software, the present version (3.0) incorporates a number of advanced imaging features that optimize crewmember capability and efficiency.
Brute Force Matching Between Camera Shots and Synthetic Images from Point Clouds
NASA Astrophysics Data System (ADS)
Boerner, R.; Kröhnert, M.
2016-06-01
3D point clouds, acquired by state-of-the-art terrestrial laser scanning techniques (TLS), provide spatial information about accuracies up to several millimetres. Unfortunately, common TLS data has no spectral information about the covered scene. However, the matching of TLS data with images is important for monoplotting purposes and point cloud colouration. Well-established methods solve this issue by matching of close range images and point cloud data by fitting optical camera systems on top of laser scanners or rather using ground control points. The approach addressed in this paper aims for the matching of 2D image and 3D point cloud data from a freely moving camera within an environment covered by a large 3D point cloud, e.g. a 3D city model. The key advantage of the free movement affects augmented reality applications or real time measurements. Therefore, a so-called real image, captured by a smartphone camera, has to be matched with a so-called synthetic image which consists of reverse projected 3D point cloud data to a synthetic projection centre whose exterior orientation parameters match the parameters of the image, assuming an ideal distortion free camera.
Dense Region of Impact Craters
2011-09-23
NASA Dawn spacecraft obtained this image of the giant asteroid Vesta with its framing camera on Aug. 14 2011. This image was taken through the camera clear filter. The image has a resolution of about 260 meters per pixel.
Low-cost printing of computerised tomography (CT) images where there is no dedicated CT camera.
Tabari, Abdulkadir M
2007-01-01
Many developing countries still rely on conventional hard copy images to transfer information among physicians. We have developed a low-cost alternative method of printing computerised tomography (CT) scan images where there is no dedicated camera. A digital camera is used to photograph images from the CT scan screen monitor. The images are then transferred to a PC via a USB port, before being printed on glossy paper using an inkjet printer. The method can be applied to other imaging modalities like ultrasound and MRI and appears worthy of emulation elsewhere in the developing world where resources and technical expertise are scarce.
A small field of view camera for hybrid gamma and optical imaging
NASA Astrophysics Data System (ADS)
Lees, J. E.; Bugby, S. L.; Bhatia, B. S.; Jambi, L. K.; Alqahtani, M. S.; McKnight, W. R.; Ng, A. H.; Perkins, A. C.
2014-12-01
The development of compact low profile gamma-ray detectors has allowed the production of small field of view, hand held imaging devices for use at the patient bedside and in operating theatres. The combination of an optical and a gamma camera, in a co-aligned configuration, offers high spatial resolution multi-modal imaging giving a superimposed scintigraphic and optical image. This innovative introduction of hybrid imaging offers new possibilities for assisting surgeons in localising the site of uptake in procedures such as sentinel node detection. Recent improvements to the camera system along with results of phantom and clinical imaging are reported.
Sensor noise camera identification: countering counter-forensics
NASA Astrophysics Data System (ADS)
Goljan, Miroslav; Fridrich, Jessica; Chen, Mo
2010-01-01
In camera identification using sensor noise, the camera that took a given image can be determined with high certainty by establishing the presence of the camera's sensor fingerprint in the image. In this paper, we develop methods to reveal counter-forensic activities in which an attacker estimates the camera fingerprint from a set of images and pastes it onto an image from a different camera with the intent to introduce a false alarm and, in doing so, frame an innocent victim. We start by classifying different scenarios based on the sophistication of the attacker's activity and the means available to her and to the victim, who wishes to defend herself. The key observation is that at least some of the images that were used by the attacker to estimate the fake fingerprint will likely be available to the victim as well. We describe the socalled "triangle test" that helps the victim reveal attacker's malicious activity with high certainty under a wide range of conditions. This test is then extended to the case when none of the images that the attacker used to create the fake fingerprint are available to the victim but the victim has at least two forged images to analyze. We demonstrate the test's performance experimentally and investigate its limitations. The conclusion that can be made from this study is that planting a sensor fingerprint in an image without leaving a trace is significantly more difficult than previously thought.
NASA Astrophysics Data System (ADS)
Al-Durgham, K.; Lichti, D. D.; Detchev, I.; Kuntze, G.; Ronsky, J. L.
2018-05-01
A fundamental task in photogrammetry is the temporal stability analysis of a camera/imaging-system's calibration parameters. This is essential to validate the repeatability of the parameters' estimation, to detect any behavioural changes in the camera/imaging system and to ensure precise photogrammetric products. Many stability analysis methods exist in the photogrammetric literature; each one has different methodological bases, and advantages and disadvantages. This paper presents a simple and rigorous stability analysis method that can be straightforwardly implemented for a single camera or an imaging system with multiple cameras. The basic collinearity model is used to capture differences between two calibration datasets, and to establish the stability analysis methodology. Geometric simulation is used as a tool to derive image and object space scenarios. Experiments were performed on real calibration datasets from a dual fluoroscopy (DF; X-ray-based) imaging system. The calibration data consisted of hundreds of images and thousands of image observations from six temporal points over a two-day period for a precise evaluation of the DF system stability. The stability of the DF system - for a single camera analysis - was found to be within a range of 0.01 to 0.66 mm in terms of 3D coordinates root-mean-square-error (RMSE), and 0.07 to 0.19 mm for dual cameras analysis. It is to the authors' best knowledge that this work is the first to address the topic of DF stability analysis.
A survey of camera error sources in machine vision systems
NASA Astrophysics Data System (ADS)
Jatko, W. B.
In machine vision applications, such as an automated inspection line, television cameras are commonly used to record scene intensity in a computer memory or frame buffer. Scene data from the image sensor can then be analyzed with a wide variety of feature-detection techniques. Many algorithms found in textbooks on image processing make the implicit simplifying assumption of an ideal input image with clearly defined edges and uniform illumination. The ideal image model is helpful to aid the student in understanding the principles of operation, but when these algorithms are blindly applied to real-world images the results can be unsatisfactory. This paper examines some common measurement errors found in camera sensors and their underlying causes, and possible methods of error compensation. The role of the camera in a typical image-processing system is discussed, with emphasis on the origination of signal distortions. The effects of such things as lighting, optics, and sensor characteristics are considered.
NASA Astrophysics Data System (ADS)
Gelderblom, Erik C.; Vos, Hendrik J.; Mastik, Frits; Faez, Telli; Luan, Ying; Kokhuis, Tom J. A.; van der Steen, Antonius F. W.; Lohse, Detlef; de Jong, Nico; Versluis, Michel
2012-10-01
The Brandaris 128 ultra-high-speed imaging facility has been updated over the last 10 years through modifications made to the camera's hardware and software. At its introduction the camera was able to record 6 sequences of 128 images (500 × 292 pixels) at a maximum frame rate of 25 Mfps. The segmented mode of the camera was revised to allow for subdivision of the 128 image sensors into arbitrary segments (1-128) with an inter-segment time of 17 μs. Furthermore, a region of interest can be selected to increase the number of recordings within a single run of the camera from 6 up to 125. By extending the imaging system with a laser-induced fluorescence setup, time-resolved ultra-high-speed fluorescence imaging of microscopic objects has been enabled. Minor updates to the system are also reported here.
NASA Technical Reports Server (NTRS)
Barnes, Heidi L. (Inventor); Smith, Harvey S. (Inventor)
1998-01-01
A system for imaging a flame and the background scene is discussed. The flame imaging system consists of two charge-coupled-device (CCD) cameras. One camera uses a 800 nm long pass filter which during overcast conditions blocks sufficient background light so the hydrogen flame is brighter than the background light, and the second CCD camera uses a 1100 nm long pass filter, which blocks the solar background in full sunshine conditions such that the hydrogen flame is brighter than the solar background. Two electronic viewfinders convert the signal from the cameras into a visible image. The operator can select the appropriate filtered camera to use depending on the current light conditions. In addition, a narrow band pass filtered InGaAs sensor at 1360 nm triggers an audible alarm and a flashing LED if the sensor detects a flame, providing additional flame detection so the operator does not overlook a small flame.
NASA Astrophysics Data System (ADS)
Yu, Liping; Pan, Bing
2016-12-01
A low-cost, easy-to-implement but practical single-camera stereo-digital image correlation (DIC) system using a four-mirror adapter is established for accurate shape and three-dimensional (3D) deformation measurements. The mirrors assisted pseudo-stereo imaging system can convert a single camera into two virtual cameras, which view a specimen from different angles and record the surface images of the test object onto two halves of the camera sensor. To enable deformation measurement in non-laboratory conditions or extreme high temperature environments, an active imaging optical design, combining an actively illuminated monochromatic source with a coupled band-pass optical filter, is compactly integrated to the pseudo-stereo DIC system. The optical design, basic principles and implementation procedures of the established system for 3D profile and deformation measurements are described in detail. The effectiveness and accuracy of the established system are verified by measuring the profile of a regular cylinder surface and displacements of a translated planar plate. As an application example, the established system is used to determine the tensile strains and Poisson's ratio of a composite solid propellant specimen during stress relaxation test. Since the established single-camera stereo-DIC system only needs a single camera and presents strong robustness against variations in ambient light or the thermal radiation of a hot object, it demonstrates great potential in determining transient deformation in non-laboratory or high-temperature environments with the aid of a single high-speed camera.
Development of Automated Tracking System with Active Cameras for Figure Skating
NASA Astrophysics Data System (ADS)
Haraguchi, Tomohiko; Taki, Tsuyoshi; Hasegawa, Junichi
This paper presents a system based on the control of PTZ cameras for automated real-time tracking of individual figure skaters moving on an ice rink. In the video images of figure skating, irregular trajectories, various postures, rapid movements, and various costume colors are included. Therefore, it is difficult to determine some features useful for image tracking. On the other hand, an ice rink has a limited area and uniform high intensity, and skating is always performed on ice. In the proposed system, an ice rink region is first extracted from a video image by the region growing method, and then, a skater region is extracted using the rink shape information. In the camera control process, each camera is automatically panned and/or tilted so that the skater region is as close to the center of the image as possible; further, the camera is zoomed to maintain the skater image at an appropriate scale. The results of experiments performed for 10 training scenes show that the skater extraction rate is approximately 98%. Thus, it was concluded that tracking with camera control was successful for almost all the cases considered in the study.
NASA Astrophysics Data System (ADS)
Wang, Yu-Wei; Tesdahl, Curtis; Owens, Jim; Dorn, David
2012-06-01
Advancements in uncooled microbolometer technology over the last several years have opened up many commercial applications which had been previously cost prohibitive. Thermal technology is no longer limited to the military and government market segments. One type of thermal sensor with low NETD which is available in the commercial market segment is the uncooled amorphous silicon (α-Si) microbolometer image sensor. Typical thermal security cameras focus on providing the best image quality by auto tonemaping (contrast enhancing) the image, which provides the best contrast depending on the temperature range of the scene. While this may provide enough information to detect objects and activities, there are further benefits of being able to estimate the actual object temperatures in a scene. This thermographic ability can provide functionality beyond typical security cameras by being able to monitor processes. Example applications of thermography[2] with thermal camera include: monitoring electrical circuits, industrial machinery, building thermal leaks, oil/gas pipelines, power substations, etc...[3][5] This paper discusses the methodology of estimating object temperatures by characterizing/calibrating different components inside a thermal camera utilizing an uncooled amorphous silicon microbolometer image sensor. Plots of system performance across camera operating temperatures will be shown.
Calibration of Action Cameras for Photogrammetric Purposes
Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo
2014-01-01
The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and more importantly (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each one of the still and video image capturing mode of a novel action camera, the GoPro Hero 3 camera that can provide still images up to 12 Mp and video up 8 Mp resolution. PMID:25237898
Calibration of action cameras for photogrammetric purposes.
Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo
2014-09-18
The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and more importantly (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each one of the still and video image capturing mode of a novel action camera, the GoPro Hero 3 camera that can provide still images up to 12 Mp and video up 8 Mp resolution.
NASA Astrophysics Data System (ADS)
Holland, S. Douglas
1992-09-01
A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.
Can Commercial Digital Cameras Be Used as Multispectral Sensors? A Crop Monitoring Test
Lebourgeois, Valentine; Bégué, Agnès; Labbé, Sylvain; Mallavan, Benjamin; Prévot, Laurent; Roux, Bruno
2008-01-01
The use of consumer digital cameras or webcams to characterize and monitor different features has become prevalent in various domains, especially in environmental applications. Despite some promising results, such digital camera systems generally suffer from signal aberrations due to the on-board image processing systems and thus offer limited quantitative data acquisition capability. The objective of this study was to test a series of radiometric corrections having the potential to reduce radiometric distortions linked to camera optics and environmental conditions, and to quantify the effects of these corrections on our ability to monitor crop variables. In 2007, we conducted a five-month experiment on sugarcane trial plots using original RGB and modified RGB (Red-Edge and NIR) cameras fitted onto a light aircraft. The camera settings were kept unchanged throughout the acquisition period and the images were recorded in JPEG and RAW formats. These images were corrected to eliminate the vignetting effect, and normalized between acquisition dates. Our results suggest that 1) the use of unprocessed image data did not improve the results of image analyses; 2) vignetting had a significant effect, especially for the modified camera, and 3) normalized vegetation indices calculated with vignetting-corrected images were sufficient to correct for scene illumination conditions. These results are discussed in the light of the experimental protocol and recommendations are made for the use of these versatile systems for quantitative remote sensing of terrestrial surfaces. PMID:27873930
NASA Technical Reports Server (NTRS)
Holland, S. Douglas (Inventor)
1992-01-01
A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.
2004-03-13
This is the first image ever taken of Earth from the surface of a planet beyond the Moon. It was taken by the Mars Exploration Rover Spirit one hour before sunrise on the 63rd martian day, or sol, of its mission. Earth is the tiny white dot in the center. The image is a mosaic of images taken by the rover's navigation camera showing a broad view of the sky, and an image taken by the rover's panoramic camera of Earth. The contrast in the panoramic camera image was increased two times to make Earth easier to see. http://photojournal.jpl.nasa.gov/catalog/PIA05560
Image quality analysis of a color LCD as well as a monochrome LCD using a Foveon color CMOS camera
NASA Astrophysics Data System (ADS)
Dallas, William J.; Roehrig, Hans; Krupinski, Elizabeth A.
2007-09-01
We have combined a CMOS color camera with special software to compose a multi-functional image-quality analysis instrument. It functions as a colorimeter as well as measuring modulation transfer functions (MTF) and noise power spectra (NPS). It is presently being expanded to examine fixed-pattern noise and temporal noise. The CMOS camera has 9 μm square pixels and a pixel matrix of 2268 x 1512 x 3. The camera uses a sensor that has co-located pixels for all three primary colors. We have imaged sections of both a color and a monochrome LCD monitor onto the camera sensor with LCD-pixel-size to camera-pixel-size ratios of both 12:1 and 17.6:1. When used as an imaging colorimeter, each camera pixel is calibrated to provide CIE color coordinates and tristimulus values. This capability permits the camera to simultaneously determine chromaticity in different locations on the LCD display. After the color calibration with a CS-200 colorimeter the color coordinates of the display's primaries determined from the camera's luminance response are very close to those found from the CS-200. Only the color coordinates of the display's white point were in error. For calculating the MTF a vertical or horizontal line is displayed on the monitor. The captured image is color-matrix preprocessed, Fourier transformed then post-processed. For NPS, a uniform image is displayed on the monitor. Again, the image is pre-processed, transformed and processed. Our measurements show that the horizontal MTF's of both displays have a larger negative slope than that of the vertical MTF's. This behavior indicates that the horizontal MTF's are poorer than the vertical MTF's. However the modulations at the Nyquist frequency seem lower for the color LCD than for the monochrome LCD. The spatial noise of the color display in both directions is larger than that of the monochrome display. Attempts were also made to analyze the total noise in terms of spatial and temporal noise by applying subtractions of images taken at exactly the same exposure. Temporal noise seems to be significantly lower than spatial noise.
Baum, S.; Sillem, M.; Ney, J. T.; Baum, A.; Friedrich, M.; Radosa, J.; Kramer, K. M.; Gronwald, B.; Gottschling, S.; Solomayer, E. F.; Rody, A.; Joukhadar, R.
2017-01-01
Introduction Minimally invasive operative techniques are being used increasingly in gynaecological surgery. The expansion of the laparoscopic operation spectrum is in part the result of improved imaging. This study investigates the practical advantages of using 3D cameras in routine surgical practice. Materials and Methods Two different 3-dimensional camera systems were compared with a 2-dimensional HD system; the operating surgeonʼs experiences were documented immediately postoperatively using a questionnaire. Results Significant advantages were reported for suturing and cutting of anatomical structures when using the 3D compared to 2D camera systems. There was only a slight advantage for coagulating. The use of 3D cameras significantly improved the general operative visibility and in particular the representation of spacial depth compared to 2-dimensional images. There was not a significant advantage for image width. Depiction of adhesions and retroperitoneal neural structures was significantly improved by the stereoscopic cameras, though this did not apply to blood vessels, ureter, uterus or ovaries. Conclusion 3-dimensional cameras were particularly advantageous for the depiction of fine anatomical structures due to improved spacial depth representation compared to 2D systems. 3D cameras provide the operating surgeon with a monitor image that more closely resembles actual anatomy, thus simplifying laparoscopic procedures. PMID:28190888
Line-Constrained Camera Location Estimation in Multi-Image Stereomatching.
Donné, Simon; Goossens, Bart; Philips, Wilfried
2017-08-23
Stereomatching is an effective way of acquiring dense depth information from a scene when active measurements are not possible. So-called lightfield methods take a snapshot from many camera locations along a defined trajectory (usually uniformly linear or on a regular grid-we will assume a linear trajectory) and use this information to compute accurate depth estimates. However, they require the locations for each of the snapshots to be known: the disparity of an object between images is related to both the distance of the camera to the object and the distance between the camera positions for both images. Existing solutions use sparse feature matching for camera location estimation. In this paper, we propose a novel method that uses dense correspondences to do the same, leveraging an existing depth estimation framework to also yield the camera locations along the line. We illustrate the effectiveness of the proposed technique for camera location estimation both visually for the rectification of epipolar plane images and quantitatively with its effect on the resulting depth estimation. Our proposed approach yields a valid alternative for sparse techniques, while still being executed in a reasonable time on a graphics card due to its highly parallelizable nature.
NASA Astrophysics Data System (ADS)
Trokielewicz, Mateusz; Bartuzi, Ewelina; Michowska, Katarzyna; Andrzejewska, Antonina; Selegrat, Monika
2015-09-01
In the age of modern, hyperconnected society that increasingly relies on mobile devices and solutions, implementing a reliable and accurate biometric system employing iris recognition presents new challenges. Typical biometric systems employing iris analysis require expensive and complicated hardware. We therefore explore an alternative way using visible spectrum iris imaging. This paper aims at answering several questions related to applying iris biometrics for images obtained in the visible spectrum using smartphone camera. Can irides be successfully and effortlessly imaged using a smartphone's built-in camera? Can existing iris recognition methods perform well when presented with such images? The main advantage of using near-infrared (NIR) illumination in dedicated iris recognition cameras is good performance almost independent of the iris color and pigmentation. Are the images obtained from smartphone's camera of sufficient quality even for the dark irides? We present experiments incorporating simple image preprocessing to find the best visibility of iris texture, followed by a performance study to assess whether iris recognition methods originally aimed at NIR iris images perform well with visible light images. To our best knowledge this is the first comprehensive analysis of iris recognition performance using a database of high-quality images collected in visible light using the smartphones flashlight together with the application of commercial off-the-shelf (COTS) iris recognition methods.
Semi-automated camera trap image processing for the detection of ungulate fence crossing events.
Janzen, Michael; Visser, Kaitlyn; Visscher, Darcy; MacLeod, Ian; Vujnovic, Dragomir; Vujnovic, Ksenija
2017-09-27
Remote cameras are an increasingly important tool for ecological research. While remote camera traps collect field data with minimal human attention, the images they collect require post-processing and characterization before it can be ecologically and statistically analyzed, requiring the input of substantial time and money from researchers. The need for post-processing is due, in part, to a high incidence of non-target images. We developed a stand-alone semi-automated computer program to aid in image processing, categorization, and data reduction by employing background subtraction and histogram rules. Unlike previous work that uses video as input, our program uses still camera trap images. The program was developed for an ungulate fence crossing project and tested against an image dataset which had been previously processed by a human operator. Our program placed images into categories representing the confidence of a particular sequence of images containing a fence crossing event. This resulted in a reduction of 54.8% of images that required further human operator characterization while retaining 72.6% of the known fence crossing events. This program can provide researchers using remote camera data the ability to reduce the time and cost required for image post-processing and characterization. Further, we discuss how this procedure might be generalized to situations not specifically related to animal use of linear features.
iPhone 4s and iPhone 5s Imaging of the Eye
Jalil, Maaz; Ferenczy, Sandor R.; Shields, Carol L.
2017-01-01
Background/Aims To evaluate the technical feasibility of a consumer-grade cellular iPhone camera as an ocular imaging device compared to existing ophthalmic imaging equipment for documentation purposes. Methods A comparison of iPhone 4s and 5s images was made with external facial images (macrophotography) using Nikon cameras, slit-lamp images (microphotography) using Zeiss photo slit-lamp camera, and fundus images (fundus photography) using RetCam II. Results In an analysis of six consecutive patients with ophthalmic conditions, both iPhones achieved documentation of external findings (macrophotography) using standard camera modality, tap to focus, and built-in flash. Both iPhones achieved documentation of anterior segment findings (microphotography) during slit-lamp examination through oculars. Both iPhones achieved fundus imaging using standard video modality with continuous iPhone illumination through an ophthalmic lens. Comparison to standard ophthalmic cameras, macrophotography and microphotography were excellent. In comparison to RetCam fundus photography, iPhone fundus photography revealed smaller field and was technically more difficult to obtain, but the quality was nearly similar to RetCam. Conclusions iPhone versions 4s and 5s can provide excellent ophthalmic macrophotography and microphotography and adequate fundus photography. We believe that iPhone imaging could be most useful in settings where expensive, complicated, and cumbersome imaging equipment is unavailable. PMID:28275604
Portable, low-priced retinal imager for eye disease screening
NASA Astrophysics Data System (ADS)
Soliz, Peter; Nemeth, Sheila; VanNess, Richard; Barriga, E. S.; Zamora, Gilberto
2014-02-01
The objective of this project was to develop and demonstrate a portable, low-priced, easy to use non-mydriatic retinal camera for eye disease screening in underserved urban and rural locations. Existing portable retinal imagers do not meet the requirements of a low-cost camera with sufficient technical capabilities (field of view, image quality, portability, battery power, and ease-of-use) to be distributed widely to low volume clinics, such as the offices of single primary care physicians serving rural communities or other economically stressed healthcare facilities. Our approach for Smart i-Rx is based primarily on a significant departure from current generations of desktop and hand-held commercial retinal cameras as well as those under development. Our techniques include: 1) Exclusive use of off-the-shelf components; 2) Integration of retinal imaging device into low-cost, high utility camera mount and chin rest; 3) Unique optical and illumination designed for small form factor; and 4) Exploitation of autofocus technology built into present digital SLR recreational cameras; and 5) Integration of a polarization technique to avoid the corneal reflex. In a prospective study, 41 out of 44 diabetics were imaged successfully. No imaging was attempted on three of the subjects due to noticeably small pupils (less than 2mm). The images were of sufficient quality to detect abnormalities related to diabetic retinopathy, such as microaneurysms and exudates. These images were compared with ones taken non-mydriatically with a Canon CR-1 Mark II camera. No cases identified as having DR by expert retinal graders were missed in the Smart i-Rx images.
Optical Transient Monitor (OTM) for BOOTES Project
NASA Astrophysics Data System (ADS)
Páta, P.; Bernas, M.; Castro-Tirado, A. J.; Hudec, R.
2003-04-01
The Optical Transient Monitor (OTM) is a software for control of three wide and ultra-wide filed cameras of BOOTES (Burst Observer and Optical Transient Exploring System) station. The OTM is a PC based and it is powerful tool for taking images from two SBIG CCD cameras in same time or from one camera only. The control program for BOOTES cameras is Windows 98 or MSDOS based. Now the version for Windows 2000 is prepared. There are five main supported modes of work. The OTM program could control cameras and evaluate image data without human interaction.
Noise and sensitivity of x-ray framing cameras at Nike (abstract)
NASA Astrophysics Data System (ADS)
Pawley, C. J.; Deniz, A. V.; Lehecka, T.
1999-01-01
X-ray framing cameras are the most widely used tool for radiographing density distributions in laser and Z-pinch driven experiments. The x-ray framing cameras that were developed specifically for experiments on the Nike laser system are described. One of these cameras has been coupled to a CCD camera and was tested for resolution and image noise using both electrons and x rays. The largest source of noise in the images was found to be due to low quantum detection efficiency of x-ray photons.
Recognizable-image selection for fingerprint recognition with a mobile-device camera.
Lee, Dongjae; Choi, Kyoungtaek; Choi, Heeseung; Kim, Jaihie
2008-02-01
This paper proposes a recognizable-image selection algorithm for fingerprint-verification systems that use a camera embedded in a mobile device. A recognizable image is defined as the fingerprint image which includes the characteristics that are sufficiently discriminating an individual from other people. While general camera systems obtain focused images by using various gradient measures to estimate high-frequency components, mobile cameras cannot acquire recognizable images in the same way because the obtained images may not be adequate for fingerprint recognition, even if they are properly focused. A recognizable image has to meet the following two conditions: First, valid region in the recognizable image should be large enough compared with other nonrecognizable images. Here, a valid region is a well-focused part, and ridges in the region are clearly distinguishable from valleys. In order to select valid regions, this paper proposes a new focus-measurement algorithm using the secondary partial derivatives and a quality estimation utilizing the coherence and symmetry of gradient distribution. Second, rolling and pitching degrees of a finger measured from the camera plane should be within some limit for a recognizable image. The position of a core point and the contour of a finger are used to estimate the degrees of rolling and pitching. Experimental results show that our proposed method selects valid regions and estimates the degrees of rolling and pitching properly. In addition, fingerprint-verification performance is improved by detecting the recognizable images.
NPS assessment of color medical displays using a monochromatic CCD camera
NASA Astrophysics Data System (ADS)
Roehrig, Hans; Gu, Xiliang; Fan, Jiahua
2012-02-01
This paper presents an approach to Noise Power Spectrum (NPS) assessment of color medical displays without using an expensive imaging colorimeter. The R, G and B color uniform patterns were shown on the display under study and the images were taken using a high resolution monochromatic camera. A colorimeter was used to calibrate the camera images. Synthetic intensity images were formed by the weighted sum of the R, G, B and the dark screen images. Finally the NPS analysis was conducted on the synthetic images. The proposed method replaces an expensive imaging colorimeter for NPS evaluation, which also suggests a potential solution for routine color medical display QA/QC in the clinical area, especially when imaging of display devices is desired.
Concept of a photon-counting camera based on a diffraction-addressed Gray-code mask
NASA Astrophysics Data System (ADS)
Morel, Sébastien
2004-09-01
A new concept of photon counting camera for fast and low-light-level imaging applications is introduced. The possible spectrum covered by this camera ranges from visible light to gamma rays, depending on the device used to transform an incoming photon into a burst of visible photons (photo-event spot) localized in an (x,y) image plane. It is actually an evolution of the existing "PAPA" (Precision Analog Photon Address) Camera that was designed for visible photons. This improvement comes from a simplified optics. The new camera transforms, by diffraction, each photo-event spot from an image intensifier or a scintillator into a cross-shaped pattern, which is projected onto a specific Gray code mask. The photo-event position is then extracted from the signal given by an array of avalanche photodiodes (or photomultiplier tubes, alternatively) downstream of the mask. After a detailed explanation of this camera concept that we have called "DIAMICON" (DIffraction Addressed Mask ICONographer), we briefly discuss about technical solutions to build such a camera.
High-speed imaging using 3CCD camera and multi-color LED flashes
NASA Astrophysics Data System (ADS)
Hijazi, Ala; Friedl, Alexander; Cierpka, Christian; Kähler, Christian; Madhavan, Vis
2017-11-01
This paper demonstrates the possibility of capturing full-resolution, high-speed image sequences using a regular 3CCD color camera in conjunction with high-power light emitting diodes of three different colors. This is achieved using a novel approach, referred to as spectral-shuttering, where a high-speed image sequence is captured using short duration light pulses of different colors that are sent consecutively in very close succession. The work presented in this paper demonstrates the feasibility of configuring a high-speed camera system using low cost and readily available off-the-shelf components. This camera can be used for recording six-frame sequences at frame rates up to 20 kHz or three-frame sequences at even higher frame rates. Both color crosstalk and spatial matching between the different channels of the camera are found to be within acceptable limits. A small amount of magnification difference between the different channels is found and a simple calibration procedure for correcting the images is introduced. The images captured using the approach described here are of good quality to be used for obtaining full-field quantitative information using techniques such as digital image correlation and particle image velocimetry. A sequence of six high-speed images of a bubble splash recorded at 400 Hz is presented as a demonstration.
A position and attitude vision measurement system for wind tunnel slender model
NASA Astrophysics Data System (ADS)
Cheng, Lei; Yang, Yinong; Xue, Bindang; Zhou, Fugen; Bai, Xiangzhi
2014-11-01
A position and attitude vision measurement system for drop test slender model in wind tunnel is designed and developed. The system used two high speed cameras, one is put to the side of the model and another is put to the position where the camera can look up the model. Simple symbols are set on the model. The main idea of the system is based on image matching technique between the 3D-digital model projection image and the image captured by the camera. At first, we evaluate the pitch angles, the roll angles and the position of the centroid of a model through recognizing symbols in the images captured by the side camera. And then, based on the evaluated attitude info, giving a series of yaw angles, a series of projection images of the 3D-digital model are obtained. Finally, these projection images are matched with the image which captured by the looking up camera, and the best match's projection images corresponds to the yaw angle is the very yaw angle of the model. Simulation experiments are conducted and the results show that the maximal error of attitude measurement is less than 0.05°, which can meet the demand of test in wind tunnel.
Applying image quality in cell phone cameras: lens distortion
NASA Astrophysics Data System (ADS)
Baxter, Donald; Goma, Sergio R.; Aleksic, Milivoje
2009-01-01
This paper describes the framework used in one of the pilot studies run under the I3A CPIQ initiative to quantify overall image quality in cell-phone cameras. The framework is based on a multivariate formalism which tries to predict overall image quality from individual image quality attributes and was validated in a CPIQ pilot program. The pilot study focuses on image quality distortions introduced in the optical path of a cell-phone camera, which may or may not be corrected in the image processing path. The assumption is that the captured image used is JPEG compressed and the cellphone camera is set to 'auto' mode. As the used framework requires that the individual attributes to be relatively perceptually orthogonal, in the pilot study, the attributes used are lens geometric distortion (LGD) and lateral chromatic aberrations (LCA). The goal of this paper is to present the framework of this pilot project starting with the definition of the individual attributes, up to their quantification in JNDs of quality, a requirement of the multivariate formalism, therefore both objective and subjective evaluations were used. A major distinction in the objective part from the 'DSC imaging world' is that the LCA/LGD distortions found in cell-phone cameras, rarely exhibit radial behavior, therefore a radial mapping/modeling cannot be used in this case.
Embedded processor extensions for image processing
NASA Astrophysics Data System (ADS)
Thevenin, Mathieu; Paindavoine, Michel; Letellier, Laurent; Heyrman, Barthélémy
2008-04-01
The advent of camera phones marks a new phase in embedded camera sales. By late 2009, the total number of camera phones will exceed that of both conventional and digital cameras shipped since the invention of photography. Use in mobile phones of applications like visiophony, matrix code readers and biometrics requires a high degree of component flexibility that image processors (IPs) have not, to date, been able to provide. For all these reasons, programmable processor solutions have become essential. This paper presents several techniques geared to speeding up image processors. It demonstrates that a gain of twice is possible for the complete image acquisition chain and the enhancement pipeline downstream of the video sensor. Such results confirm the potential of these computing systems for supporting future applications.
A new compact, high sensitivity neutron imaging systema)
NASA Astrophysics Data System (ADS)
Caillaud, T.; Landoas, O.; Briat, M.; Rossé, B.; Thfoin, I.; Philippe, F.; Casner, A.; Bourgade, J. L.; Disdier, L.; Glebov, V. Yu.; Marshall, F. J.; Sangster, T. C.; Park, H. S.; Robey, H. F.; Amendt, P.
2012-10-01
We have developed a new small neutron imaging system (SNIS) diagnostic for the OMEGA laser facility. The SNIS uses a penumbral coded aperture and has been designed to record images from low yield (109-1010 neutrons) implosions such as those using deuterium as the fuel. This camera was tested at OMEGA in 2009 on a rugby hohlraum energetics experiment where it recorded an image at a yield of 1.4 × 1010. The resolution of this image was 54 μm and the camera was located only 4 meters from target chamber centre. We recently improved the instrument by adding a cooled CCD camera. The sensitivity of the new camera has been fully characterized using a linear accelerator and a 60Co γ-ray source. The calibration showed that the signal-to-noise ratio could be improved by using raw binning detection.
Enhancement of low light level images using color-plus-mono dual camera.
Jung, Yong Ju
2017-05-15
In digital photography, the improvement of imaging quality in low light shooting is one of the users' needs. Unfortunately, conventional smartphone cameras that use a single, small image sensor cannot provide satisfactory quality in low light level images. A color-plus-mono dual camera that consists of two horizontally separate image sensors, which simultaneously captures both a color and mono image pair of the same scene, could be useful for improving the quality of low light level images. However, an incorrect image fusion between the color and mono image pair could also have negative effects, such as the introduction of severe visual artifacts in the fused images. This paper proposes a selective image fusion technique that applies an adaptive guided filter-based denoising and selective detail transfer to only those pixels deemed reliable with respect to binocular image fusion. We employ a dissimilarity measure and binocular just-noticeable-difference (BJND) analysis to identify unreliable pixels that are likely to cause visual artifacts during image fusion via joint color image denoising and detail transfer from the mono image. By constructing an experimental system of color-plus-mono camera, we demonstrate that the BJND-aware denoising and selective detail transfer is helpful in improving the image quality during low light shooting.
Novel Robotic Tools for Piping Inspection and Repair, Phase 1
2014-02-13
35 Figure 57 - Accowle ODVS cross section and reflective path ......................................... 36 Figure 58 - Leopard Imaging HD...mounted to iPhone ............................................................................. 39 Figure 63 - Kogeto mounted to Leopard Imaging HD...40 Figure 65 - Leopard Imaging HD camera pipe test (letters) ............................................. 40 Figure 66 - Leopard Imaging HD camera
A multipurpose camera system for monitoring Kīlauea Volcano, Hawai'i
Patrick, Matthew R.; Orr, Tim R.; Lee, Lopaka; Moniz, Cyril J.
2015-01-01
We describe a low-cost, compact multipurpose camera system designed for field deployment at active volcanoes that can be used either as a webcam (transmitting images back to an observatory in real-time) or as a time-lapse camera system (storing images onto the camera system for periodic retrieval during field visits). The system also has the capability to acquire high-definition video. The camera system uses a Raspberry Pi single-board computer and a 5-megapixel low-light (near-infrared sensitive) camera, as well as a small Global Positioning System (GPS) module to ensure accurate time-stamping of images. Custom Python scripts control the webcam and GPS unit and handle data management. The inexpensive nature of the system allows it to be installed at hazardous sites where it might be lost. Another major advantage of this camera system is that it provides accurate internal timing (independent of network connection) and, because a full Linux operating system and the Python programming language are available on the camera system itself, it has the versatility to be configured for the specific needs of the user. We describe example deployments of the camera at Kīlauea Volcano, Hawai‘i, to monitor ongoing summit lava lake activity.
Two-Camera Acquisition and Tracking of a Flying Target
NASA Technical Reports Server (NTRS)
Biswas, Abhijit; Assad, Christopher; Kovalik, Joseph M.; Pain, Bedabrata; Wrigley, Chris J.; Twiss, Peter
2008-01-01
A method and apparatus have been developed to solve the problem of automated acquisition and tracking, from a location on the ground, of a luminous moving target in the sky. The method involves the use of two electronic cameras: (1) a stationary camera having a wide field of view, positioned and oriented to image the entire sky; and (2) a camera that has a much narrower field of view (a few degrees wide) and is mounted on a two-axis gimbal. The wide-field-of-view stationary camera is used to initially identify the target against the background sky. So that the approximate position of the target can be determined, pixel locations on the image-detector plane in the stationary camera are calibrated with respect to azimuth and elevation. The approximate target position is used to initially aim the gimballed narrow-field-of-view camera in the approximate direction of the target. Next, the narrow-field-of view camera locks onto the target image, and thereafter the gimbals are actuated as needed to maintain lock and thereby track the target with precision greater than that attainable by use of the stationary camera.
NASA Technical Reports Server (NTRS)
Ponseggi, B. G. (Editor); Johnson, H. C. (Editor)
1985-01-01
Papers are presented on the picosecond electronic framing camera, photogrammetric techniques using high-speed cineradiography, picosecond semiconductor lasers for characterizing high-speed image shutters, the measurement of dynamic strain by high-speed moire photography, the fast framing camera with independent frame adjustments, design considerations for a data recording system, and nanosecond optical shutters. Consideration is given to boundary-layer transition detectors, holographic imaging, laser holographic interferometry in wind tunnels, heterodyne holographic interferometry, a multispectral video imaging and analysis system, a gated intensified camera, a charge-injection-device profile camera, a gated silicon-intensified-target streak tube and nanosecond-gated photoemissive shutter tubes. Topics discussed include high time-space resolved photography of lasers, time-resolved X-ray spectrographic instrumentation for laser studies, a time-resolving X-ray spectrometer, a femtosecond streak camera, streak tubes and cameras, and a short pulse X-ray diagnostic development facility.
Design and realization of an AEC&AGC system for the CCD aerial camera
NASA Astrophysics Data System (ADS)
Liu, Hai ying; Feng, Bing; Wang, Peng; Li, Yan; Wei, Hao yun
2015-08-01
An AEC and AGC(Automatic Exposure Control and Automatic Gain Control) system was designed for a CCD aerial camera with fixed aperture and electronic shutter. The normal AEC and AGE algorithm is not suitable to the aerial camera since the camera always takes high-resolution photographs in high-speed moving. The AEC and AGE system adjusts electronic shutter and camera gain automatically according to the target brightness and the moving speed of the aircraft. An automatic Gamma correction is used before the image is output so that the image is better for watching and analyzing by human eyes. The AEC and AGC system could avoid underexposure, overexposure, or image blurring caused by fast moving or environment vibration. A series of tests proved that the system meet the requirements of the camera system with its fast adjusting speed, high adaptability, high reliability in severe complex environment.
NGEE Arctic Zero Power Warming PhenoCamera Images, Barrow, Alaska, 2016
Shawn Serbin; Andrew McMahon; Keith Lewin; Kim Ely; Alistair Rogers
2016-11-14
StarDot NetCam SC pheno camera images collected from the top of the Barrow, BEO Sled Shed. The camera was installed to monitor the BNL TEST group's prototype ZPW (Zero Power Warming) chambers during the growing season of 2016 (including early spring and late fall). Images were uploaded to the BNL FTP server every 10 minutes and renamed with the date and time of the image. See associated data "Zero Power Warming (ZPW) Chamber Prototype Measurements, Barrow, Alaska, 2016" http://dx.doi.org/10.5440/1343066.
Low-cost laser speckle contrast imaging of blood flow using a webcam.
Richards, Lisa M; Kazmi, S M Shams; Davis, Janel L; Olin, Katherine E; Dunn, Andrew K
2013-01-01
Laser speckle contrast imaging has become a widely used tool for dynamic imaging of blood flow, both in animal models and in the clinic. Typically, laser speckle contrast imaging is performed using scientific-grade instrumentation. However, due to recent advances in camera technology, these expensive components may not be necessary to produce accurate images. In this paper, we demonstrate that a consumer-grade webcam can be used to visualize changes in flow, both in a microfluidic flow phantom and in vivo in a mouse model. A two-camera setup was used to simultaneously image with a high performance monochrome CCD camera and the webcam for direct comparison. The webcam was also tested with inexpensive aspheric lenses and a laser pointer for a complete low-cost, compact setup ($90, 5.6 cm length, 25 g). The CCD and webcam showed excellent agreement with the two-camera setup, and the inexpensive setup was used to image dynamic blood flow changes before and after a targeted cerebral occlusion.
Low-cost laser speckle contrast imaging of blood flow using a webcam
Richards, Lisa M.; Kazmi, S. M. Shams; Davis, Janel L.; Olin, Katherine E.; Dunn, Andrew K.
2013-01-01
Laser speckle contrast imaging has become a widely used tool for dynamic imaging of blood flow, both in animal models and in the clinic. Typically, laser speckle contrast imaging is performed using scientific-grade instrumentation. However, due to recent advances in camera technology, these expensive components may not be necessary to produce accurate images. In this paper, we demonstrate that a consumer-grade webcam can be used to visualize changes in flow, both in a microfluidic flow phantom and in vivo in a mouse model. A two-camera setup was used to simultaneously image with a high performance monochrome CCD camera and the webcam for direct comparison. The webcam was also tested with inexpensive aspheric lenses and a laser pointer for a complete low-cost, compact setup ($90, 5.6 cm length, 25 g). The CCD and webcam showed excellent agreement with the two-camera setup, and the inexpensive setup was used to image dynamic blood flow changes before and after a targeted cerebral occlusion. PMID:24156082
HIGH SPEED KERR CELL FRAMING CAMERA
Goss, W.C.; Gilley, L.F.
1964-01-01
The present invention relates to a high speed camera utilizing a Kerr cell shutter and a novel optical delay system having no moving parts. The camera can selectively photograph at least 6 frames within 9 x 10/sup -8/ seconds during any such time interval of an occurring event. The invention utilizes particularly an optical system which views and transmits 6 images of an event to a multi-channeled optical delay relay system. The delay relay system has optical paths of successively increased length in whole multiples of the first channel optical path length, into which optical paths the 6 images are transmitted. The successively delayed images are accepted from the exit of the delay relay system by an optical image focusing means, which in turn directs the images into a Kerr cell shutter disposed to intercept the image paths. A camera is disposed to simultaneously view and record the 6 images during a single exposure of the Kerr cell shutter. (AEC)
Design of a high-numerical-aperture digital micromirror device camera with high dynamic range.
Qiao, Yang; Xu, Xiping; Liu, Tao; Pan, Yue
2015-01-01
A high-NA imaging system with high dynamic range is presented based on a digital micromirror device (DMD). The DMD camera consists of an objective imaging system and a relay imaging system, connected by a DMD chip. With the introduction of a total internal reflection prism system, the objective imaging system is designed with a working F/# of 1.97, breaking through the F/2.45 limitation of conventional DMD projection lenses. As for the relay imaging system, an off-axis design that could correct off-axis aberrations of the tilt relay imaging system is developed. This structure has the advantage of increasing the NA of the imaging system while maintaining a compact size. Investigation revealed that the dynamic range of a DMD camera could be greatly increased, by 2.41 times. We built one prototype DMD camera with a working F/# of 1.23, and the field experiments proved the validity and reliability our work.
Performance benefits and limitations of a camera network
NASA Astrophysics Data System (ADS)
Carr, Peter; Thomas, Paul J.; Hornsey, Richard
2005-06-01
Visual information is of vital significance to both animals and artificial systems. The majority of mammals rely on two images, each with a resolution of 107-108 'pixels' per image. At the other extreme are insect eyes where the field of view is segmented into 103-105 images, each comprising effectively one pixel/image. The great majority of artificial imaging systems lie nearer to the mammalian characteristics in this parameter space, although electronic compound eyes have been developed in this laboratory and elsewhere. If the definition of a vision system is expanded to include networks or swarms of sensor elements, then schools of fish, flocks of birds and ant or termite colonies occupy a region where the number of images and the pixels/image may be comparable. A useful system might then have 105 imagers, each with about 104-105 pixels. Artificial analogs to these situations include sensor webs, smart dust and co-ordinated robot clusters. As an extreme example, we might consider the collective vision system represented by the imminent existence of ~109 cellular telephones, each with a one-megapixel camera. Unoccupied regions in this resolution-segmentation parameter space suggest opportunities for innovative artificial sensor network systems. Essential for the full exploitation of these opportunities is the availability of custom CMOS image sensor chips whose characteristics can be tailored to the application. Key attributes of such a chip set might include integrated image processing and control, low cost, and low power. This paper compares selected experimentally determined system specifications for an inward-looking array of 12 cameras with the aid of a camera-network model developed to explore the tradeoff between camera resolution and the number of cameras.
Sun, Tao; Fezzaa, Kamel
2016-06-17
Here, a high-speed X-ray diffraction technique was recently developed at the 32-ID-B beamline of the Advanced Photon Source for studying highly dynamic, yet non-repeatable and irreversible, materials processes. In experiments, the microstructure evolution in a single material event is probed by recording a series of diffraction patterns with extremely short exposure time and high frame rate. Owing to the limited flux in a short pulse and the polychromatic nature of the incident X-rays, analysis of the diffraction data is challenging. Here, HiSPoD, a stand-alone Matlab-based software for analyzing the polychromatic X-ray diffraction data from polycrystalline samples, is described. With HiSPoD,more » researchers are able to perform diffraction peak indexing, extraction of one-dimensional intensity profiles by integrating a two-dimensional diffraction pattern, and, more importantly, quantitative numerical simulations to obtain precise sample structure information.« less
Developments in mercuric iodide gamma ray imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patt, B.E.; Beyerle, A.G.; Dolin, R.C.
A mercuric iodide gamma-ray imaging array and camera system previously described has been characterized for spatial and energy resolution. Based on this data a new camera is being developed to more fully exploit the potential of the array. Characterization results and design criterion for the new camera will be presented. 2 refs., 7 figs.
Seeing the Light: A Classroom-Sized Pinhole Camera Demonstration for Teaching Vision
ERIC Educational Resources Information Center
Prull, Matthew W.; Banks, William P.
2005-01-01
We describe a classroom-sized pinhole camera demonstration (camera obscura) designed to enhance students' learning of the visual system. The demonstration consists of a suspended rear-projection screen onto which the outside environment projects images through a small hole in a classroom window. Students can observe these images in a darkened…
USDA-ARS?s Scientific Manuscript database
This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS) sensor with 5616 × 3744 pixels. One came...
Completely optical orientation determination for an unstabilized aerial three-line camera
NASA Astrophysics Data System (ADS)
Wohlfeil, Jürgen
2010-10-01
Aerial line cameras allow the fast acquisition of high-resolution images at low costs. Unfortunately the measurement of the camera's orientation with the necessary rate and precision is related with large effort, unless extensive camera stabilization is used. But also stabilization implicates high costs, weight, and power consumption. This contribution shows that it is possible to completely derive the absolute exterior orientation of an unstabilized line camera from its images and global position measurements. The presented approach is based on previous work on the determination of the relative orientation of subsequent lines using optical information from the remote sensing system. The relative orientation is used to pre-correct the line images, in which homologous points can reliably be determined using the SURF operator. Together with the position measurements these points are used to determine the absolute orientation from the relative orientations via bundle adjustment of a block of overlapping line images. The approach was tested at a flight with the DLR's RGB three-line camera MFC. To evaluate the precision of the resulting orientation the measurements of a high-end navigation system and ground control points are used.
2012-03-08
to-Use 3-D Camera For Measurements in Turbulent Flow Fields B Thurow, Auburn Near Mid Far Conventional imaging Plenoptic imaging Conventional 2...depth-of-field and blur Reduced aperture (restricted angular information) leads to low signal levels Lightfield Imaging Plenoptic camera records
Tenth Anniversary Image from Camera on NASA Mars Orbiter
2012-02-29
NASA Mars Odyssey spacecraft captured this image on Feb. 19, 2012, 10 years to the day after the camera recorded its first view of Mars. This image covers an area in the Nepenthes Mensae region north of the Martian equator.
Full-Frame Reference for Test Photo of Moon
NASA Technical Reports Server (NTRS)
2005-01-01
This pair of views shows how little of the full image frame was taken up by the Moon in test images taken Sept. 8, 2005, by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter. The Mars-bound camera imaged Earth's Moon from a distance of about 10 million kilometers (6 million miles) away -- 26 times the distance between Earth and the Moon -- as part of an activity to test and calibrate the camera. The images are very significant because they show that the Mars Reconnaissance Orbiter spacecraft and this camera can properly operate together to collect very high-resolution images of Mars. The target must move through the camera's telescope view in just the right direction and speed to acquire a proper image. The day's test images also demonstrate that the focus mechanism works properly with the telescope to produce sharp images. Out of the 20,000-pixel-by-6,000-pixel full frame, the Moon's diameter is about 340 pixels, if the full Moon could be seen. The illuminated crescent is about 60 pixels wide, and the resolution is about 10 kilometers (6 miles) per pixel. At Mars, the entire image region will be filled with high-resolution information. The Mars Reconnaissance Orbiter, launched on Aug. 12, 2005, is on course to reach Mars on March 10, 2006. After gradually adjusting the shape of its orbit for half a year, it will begin its primary science phase in November 2006. From the mission's planned science orbit about 300 kilometers (186 miles) above the surface of Mars, the high resolution camera will be able to discern features as small as one meter or yard across. The Mars Reconnaissance Orbiter mission is managed by NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology, Pasadena, for the NASA Science Mission Directorate. Lockheed Martin Space Systems, Denver, prime contractor for the project, built the spacecraft. Ball Aerospace & Technologies Corp., Boulder, Colo., built the High Resolution Imaging Science Experiment instrument for the University of Arizona, Tucson, to provide to the mission. The HiRISE Operations Center at the University of Arizona processes images from the camera.Electronic cameras for low-light microscopy.
Rasnik, Ivan; French, Todd; Jacobson, Ken; Berland, Keith
2013-01-01
This chapter introduces to electronic cameras, discusses the various parameters considered for evaluating their performance, and describes some of the key features of different camera formats. The chapter also presents the basic understanding of functioning of the electronic cameras and how these properties can be exploited to optimize image quality under low-light conditions. Although there are many types of cameras available for microscopy, the most reliable type is the charge-coupled device (CCD) camera, which remains preferred for high-performance systems. If time resolution and frame rate are of no concern, slow-scan CCDs certainly offer the best available performance, both in terms of the signal-to-noise ratio and their spatial resolution. Slow-scan cameras are thus the first choice for experiments using fixed specimens such as measurements using immune fluorescence and fluorescence in situ hybridization. However, if video rate imaging is required, one need not evaluate slow-scan CCD cameras. A very basic video CCD may suffice if samples are heavily labeled or are not perturbed by high intensity illumination. When video rate imaging is required for very dim specimens, the electron multiplying CCD camera is probably the most appropriate at this technological stage. Intensified CCDs provide a unique tool for applications in which high-speed gating is required. The variable integration time video cameras are very attractive options if one needs to acquire images at video rate acquisition, as well as with longer integration times for less bright samples. This flexibility can facilitate many diverse applications with highly varied light levels. Copyright © 2007 Elsevier Inc. All rights reserved.
Plume propagation direction determination with SO2 cameras
NASA Astrophysics Data System (ADS)
Klein, Angelika; Lübcke, Peter; Bobrowski, Nicole; Kuhn, Jonas; Platt, Ulrich
2017-03-01
SO2 cameras are becoming an established tool for measuring sulfur dioxide (SO2) fluxes in volcanic plumes with good precision and high temporal resolution. The primary result of SO2 camera measurements are time series of two-dimensional SO2 column density distributions (i.e. SO2 column density images). However, it is frequently overlooked that, in order to determine the correct SO2 fluxes, not only the SO2 column density, but also the distance between the camera and the volcanic plume, has to be precisely known. This is because cameras only measure angular extents of objects while flux measurements require knowledge of the spatial plume extent. The distance to the plume may vary within the image array (i.e. the field of view of the SO2 camera) since the plume propagation direction (i.e. the wind direction) might not be parallel to the image plane of the SO2 camera. If the wind direction and thus the camera-plume distance are not well known, this error propagates into the determined SO2 fluxes and can cause errors exceeding 50 %. This is a source of error which is independent of the frequently quoted (approximate) compensation of apparently higher SO2 column densities and apparently lower plume propagation velocities at non-perpendicular plume observation angles.Here, we propose a new method to estimate the propagation direction of the volcanic plume directly from SO2 camera image time series by analysing apparent flux gradients along the image plane. From the plume propagation direction and the known location of the SO2 source (i.e. volcanic vent) and camera position, the camera-plume distance can be determined. Besides being able to determine the plume propagation direction and thus the wind direction in the plume region directly from SO2 camera images, we additionally found that it is possible to detect changes of the propagation direction at a time resolution of the order of minutes. In addition to theoretical studies we applied our method to SO2 flux measurements at Mt Etna and demonstrate that we obtain considerably more precise (up to a factor of 2 error reduction) SO2 fluxes. We conclude that studies on SO2 flux variability become more reliable by excluding the possible influences of propagation direction variations.
Can we Use Low-Cost 360 Degree Cameras to Create Accurate 3d Models?
NASA Astrophysics Data System (ADS)
Barazzetti, L.; Previtali, M.; Roncoroni, F.
2018-05-01
360 degree cameras capture the whole scene around a photographer in a single shot. Cheap 360 cameras are a new paradigm in photogrammetry. The camera can be pointed to any direction, and the large field of view reduces the number of photographs. This paper aims to show that accurate metric reconstructions can be achieved with affordable sensors (less than 300 euro). The camera used in this work is the Xiaomi Mijia Mi Sphere 360, which has a cost of about 300 USD (January 2018). Experiments demonstrate that millimeter-level accuracy can be obtained during the image orientation and surface reconstruction steps, in which the solution from 360° images was compared to check points measured with a total station and laser scanning point clouds. The paper will summarize some practical rules for image acquisition as well as the importance of ground control points to remove possible deformations of the network during bundle adjustment, especially for long sequences with unfavorable geometry. The generation of orthophotos from images having a 360° field of view (that captures the entire scene around the camera) is discussed. Finally, the paper illustrates some case studies where the use of a 360° camera could be a better choice than a project based on central perspective cameras. Basically, 360° cameras become very useful in the survey of long and narrow spaces, as well as interior areas like small rooms.
Jung, Kyunghwa; Choi, Hyunseok; Hong, Hanpyo; Adikrishna, Arnold; Jeon, In-Ho; Hong, Jaesung
2017-02-01
A hands-free region-of-interest (ROI) selection interface is proposed for solo surgery using a wide-angle endoscope. A wide-angle endoscope provides images with a larger field of view than a conventional endoscope. With an appropriate selection interface for a ROI, surgeons can also obtain a detailed local view as if they moved a conventional endoscope in a specific position and direction. To manipulate the endoscope without releasing the surgical instrument in hand, a mini-camera is attached to the instrument, and the images taken by the attached camera are analyzed. When a surgeon moves the instrument, the instrument orientation is calculated by an image processing. Surgeons can select the ROI with this instrument movement after switching from 'task mode' to 'selection mode.' The accelerated KAZE algorithm is used to track the features of the camera images once the instrument is moved. Both the wide-angle and detailed local views are displayed simultaneously, and a surgeon can move the local view area by moving the mini-camera attached to the surgical instrument. Local view selection for a solo surgery was performed without releasing the instrument. The accuracy of camera pose estimation was not significantly different between camera resolutions, but it was significantly different between background camera images with different numbers of features (P < 0.01). The success rate of ROI selection diminished as the number of separated regions increased. However, separated regions up to 12 with a region size of 160 × 160 pixels were selected with no failure. Surgical tasks on a phantom model and a cadaver were attempted to verify the feasibility in a clinical environment. Hands-free endoscope manipulation without releasing the instruments in hand was achieved. The proposed method requires only a small, low-cost camera and an image processing. The technique enables surgeons to perform solo surgeries without a camera assistant.
Data-Acquisition Software for PSP/TSP Wind-Tunnel Cameras
NASA Technical Reports Server (NTRS)
Amer, Tahani R.; Goad, William K.
2005-01-01
Wing-Viewer is a computer program for acquisition and reduction of image data acquired by any of five different scientificgrade commercial electronic cameras used at Langley Research center to observe wind-tunnel models coated with pressure or temperature-sensitive paints (PSP/TSP). Wing-Viewer provides full automation of camera operation and acquisition of image data, and has limited data-preprocessing capability for quick viewing of the results of PSP/TSP test images. Wing- Viewer satisfies a requirement for a standard interface between all the cameras and a single personal computer: Written by use of Microsoft Visual C++ and the Microsoft Foundation Class Library as a framework, Wing-Viewer has the ability to communicate with the C/C++ software libraries that run on the controller circuit cards of all five cameras.
Integration of USB and firewire cameras in machine vision applications
NASA Astrophysics Data System (ADS)
Smith, Timothy E.; Britton, Douglas F.; Daley, Wayne D.; Carey, Richard
1999-08-01
Digital cameras have been around for many years, but a new breed of consumer market cameras is hitting the main stream. By using these devices, system designers and integrators will be well posited to take advantage of technological advances developed to support multimedia and imaging applications on the PC platform. Having these new cameras on the consumer market means lower cost, but it does not necessarily guarantee ease of integration. There are many issues that need to be accounted for like image quality, maintainable frame rates, image size and resolution, supported operating system, and ease of software integration. This paper will describe briefly a couple of the consumer digital standards, and then discuss some of the advantages and pitfalls of integrating both USB and Firewire cameras into computer/machine vision applications.
Modulated CMOS camera for fluorescence lifetime microscopy.
Chen, Hongtao; Holst, Gerhard; Gratton, Enrico
2015-12-01
Widefield frequency-domain fluorescence lifetime imaging microscopy (FD-FLIM) is a fast and accurate method to measure the fluorescence lifetime of entire images. However, the complexity and high costs involved in construction of such a system limit the extensive use of this technique. PCO AG recently released the first luminescence lifetime imaging camera based on a high frequency modulated CMOS image sensor, QMFLIM2. Here we tested and provide operational procedures to calibrate the camera and to improve the accuracy using corrections necessary for image analysis. With its flexible input/output options, we are able to use a modulated laser diode or a 20 MHz pulsed white supercontinuum laser as the light source. The output of the camera consists of a stack of modulated images that can be analyzed by the SimFCS software using the phasor approach. The nonuniform system response across the image sensor must be calibrated at the pixel level. This pixel calibration is crucial and needed for every camera settings, e.g. modulation frequency and exposure time. A significant dependency of the modulation signal on the intensity was also observed and hence an additional calibration is needed for each pixel depending on the pixel intensity level. These corrections are important not only for the fundamental frequency, but also for the higher harmonics when using the pulsed supercontinuum laser. With these post data acquisition corrections, the PCO CMOS-FLIM camera can be used for various biomedical applications requiring a large frame and high speed acquisition. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Zhang, Shaojun; Xu, Xiping
2015-10-01
The 360-degree and all round looking camera, as its characteristics of suitable for automatic analysis and judgment on the ambient environment of the carrier by image recognition algorithm, is usually applied to opto-electronic radar of robots and smart cars. In order to ensure the stability and consistency of image processing results of mass production, it is necessary to make sure the centers of image planes of different cameras are coincident, which requires to calibrate the position of the image plane's center. The traditional mechanical calibration method and electronic adjusting mode of inputting the offsets manually, both exist the problem of relying on human eyes, inefficiency and large range of error distribution. In this paper, an approach of auto- calibration of the image plane of this camera is presented. The imaging of the 360-degree and all round looking camera is a ring-shaped image consisting of two concentric circles, the center of the image is a smaller circle and the outside is a bigger circle. The realization of the technology is just to exploit the above characteristics. Recognizing the two circles through HOUGH TRANSFORM algorithm and calculating the center position, we can get the accurate center of image, that the deviation of the central location of the optic axis and image sensor. The program will set up the image sensor chip through I2C bus automatically, we can adjusting the center of the image plane automatically and accurately. The technique has been applied to practice, promotes productivity and guarantees the consistent quality of products.
Performance evaluation of a two detector camera for real-time video.
Lochocki, Benjamin; Gambín-Regadera, Adrián; Artal, Pablo
2016-12-20
Single pixel imaging can be the preferred method over traditional 2D-array imaging in spectral ranges where conventional cameras are not available. However, when it comes to real-time video imaging, single pixel imaging cannot compete with the framerates of conventional cameras, especially when high-resolution images are desired. Here we evaluate the performance of an imaging approach using two detectors simultaneously. First, we present theoretical results on how low SNR affects final image quality followed by experimentally determined results. Obtained video framerates were doubled compared to state of the art systems, resulting in a framerate from 22 Hz for a 32×32 resolution to 0.75 Hz for a 128×128 resolution image. Additionally, the two detector imaging technique enables the acquisition of images with a resolution of 256×256 in less than 3 s.
Use of a color CMOS camera as a colorimeter
NASA Astrophysics Data System (ADS)
Dallas, William J.; Roehrig, Hans; Redford, Gary R.
2006-08-01
In radiology diagnosis, film is being quickly replaced by computer monitors as the display medium for all imaging modalities. Increasingly, these monitors are color instead of monochrome. It is important to have instruments available to characterize the display devices in order to guarantee reproducible presentation of image material. We are developing an imaging colorimeter based on a commercially available color digital camera. The camera uses a sensor that has co-located pixels in all three primary colors.
Demonstration of the CDMA-mode CAOS smart camera.
Riza, Nabeel A; Mazhar, Mohsin A
2017-12-11
Demonstrated is the code division multiple access (CDMA)-mode coded access optical sensor (CAOS) smart camera suited for bright target scenarios. Deploying a silicon CMOS sensor and a silicon point detector within a digital micro-mirror device (DMD)-based spatially isolating hybrid camera design, this smart imager first engages the DMD starring mode with a controlled factor of 200 high optical attenuation of the scene irradiance to provide a classic unsaturated CMOS sensor-based image for target intelligence gathering. Next, this CMOS sensor provided image data is used to acquire a focused zone more robust un-attenuated true target image using the time-modulated CDMA-mode of the CAOS camera. Using four different bright light test target scenes, successfully demonstrated is a proof-of-concept visible band CAOS smart camera operating in the CDMA-mode using up-to 4096 bits length Walsh design CAOS pixel codes with a maximum 10 KHz code bit rate giving a 0.4096 seconds CAOS frame acquisition time. A 16-bit analog-to-digital converter (ADC) with time domain correlation digital signal processing (DSP) generates the CDMA-mode images with a 3600 CAOS pixel count and a best spatial resolution of one micro-mirror square pixel size of 13.68 μm side. The CDMA-mode of the CAOS smart camera is suited for applications where robust high dynamic range (DR) imaging is needed for un-attenuated un-spoiled bright light spectrally diverse targets.
Feasibility of a high-speed gamma-camera design using the high-yield-pileup-event-recovery method.
Wong, W H; Li, H; Uribe, J; Baghaei, H; Wang, Y; Yokoyama, S
2001-04-01
Higher count-rate gamma cameras than are currently used are needed if the technology is to fulfill its promise in positron coincidence imaging, radionuclide therapy dosimetry imaging, and cardiac first-pass imaging. The present single-crystal design coupled with conventional detector electronics and the traditional Anger-positioning algorithm hinder higher count-rate imaging because of the pileup of gamma-ray signals in the detector and electronics. At an interaction rate of 2 million events per second, the fraction of nonpileup events is < 20% of the total incident events. Hence, the recovery of pileup events can significantly increase the count-rate capability, increase the yield of imaging photons, and minimize image artifacts associated with pileups. A new technology to significantly enhance the performance of gamma cameras in this area is introduced. We introduce a new electronic design called high-yield-pileup-event-recovery (HYPER) electronics for processing the detector signal in gamma cameras so that the individual gamma energies and positions of pileup events, including multiple pileups, can be resolved and recovered despite the mixing of signals. To illustrate the feasibility of the design concept, we have developed a small gamma-camera prototype with the HYPER-Anger electronics. The camera has a 10 x 10 x 1 cm NaI(Tl) crystal with four photomultipliers. Hot-spot and line sources with very high 99mTc activities were imaged. The phantoms were imaged continuously from 60,000 to 3,500,000 counts per second to illustrate the efficacy of the method as a function of counting rates. At 2-3 million events per second, all phantoms were imaged with little distortion, pileup, and dead-time loss. At these counting rates, multiple pileup events (> or = 3 events piling together) were the predominate occurrences, and the HYPER circuit functioned well to resolve and recover these events. The full width at half maximum of the line-spread function at 3,000,000 counts per second was 1.6 times that at 60,000 counts per second. This feasibility study showed that the HYPER electronic concept works; it can significantly increase the count-rate capability and dose efficiency of gamma cameras. In a larger clinical camera, multiple HYPER-Anger circuits may be implemented to further improve the imaging counting rates that we have shown by multiple times. This technology would facilitate the use of gamma cameras for radionuclide therapy dosimetry imaging, cardiac first-pass imaging, and positron coincidence imaging and the simultaneous acquisition of transmission and emission data using different isotopes with less cross-contamination between transmission and emission data.
High-resolution ophthalmic imaging system
Olivier, Scot S.; Carrano, Carmen J.
2007-12-04
A system for providing an improved resolution retina image comprising an imaging camera for capturing a retina image and a computer system operatively connected to the imaging camera, the computer producing short exposures of the retina image and providing speckle processing of the short exposures to provide the improved resolution retina image. The system comprises the steps of capturing a retina image, producing short exposures of the retina image, and speckle processing the short exposures of the retina image to provide the improved resolution retina image.
Blood pulsation measurement using cameras operating in visible light: limitations.
Koprowski, Robert
2016-10-03
The paper presents an automatic method for analysis and processing of images from a camera operating in visible light. This analysis applies to images containing the human facial area (body) and enables to measure the blood pulse rate. Special attention was paid to the limitations of this measurement method taking into account the possibility of using consumer cameras in real conditions (different types of lighting, different camera resolution, camera movement). The proposed new method of image analysis and processing was associated with three stages: (1) image pre-processing-allowing for the image filtration and stabilization (object location tracking); (2) main image processing-allowing for segmentation of human skin areas, acquisition of brightness changes; (3) signal analysis-filtration, FFT (Fast Fourier Transformation) analysis, pulse calculation. The presented algorithm and method for measuring the pulse rate has the following advantages: (1) it allows for non-contact and non-invasive measurement; (2) it can be carried out using almost any camera, including webcams; (3) it enables to track the object on the stage, which allows for the measurement of the heart rate when the patient is moving; (4) for a minimum of 40,000 pixels, it provides a measurement error of less than ±2 beats per minute for p < 0.01 and sunlight, or a slightly larger error (±3 beats per minute) for artificial lighting; (5) analysis of a single image takes about 40 ms in Matlab Version 7.11.0.584 (R2010b) with Image Processing Toolbox Version 7.1 (R2010b).
NASA Astrophysics Data System (ADS)
Breitfelder, Stefan; Reichel, Frank R.; Gaertner, Ernst; Hacker, Erich J.; Cappellaro, Markus; Rudolf, Peter; Voelk, Ute
1998-04-01
Digital cameras are of increasing significance for professional applications in photo studios where fashion, portrait, product and catalog photographs or advertising photos of high quality have to be taken. The eyelike is a digital camera system which has been developed for such applications. It is capable of working online with high frame rates and images of full sensor size and it provides a resolution that can be varied between 2048 by 2048 and 6144 by 6144 pixel at a RGB color depth of 12 Bit per channel with an also variable exposure time of 1/60s to 1s. With an exposure time of 100 ms digitization takes approx. 2 seconds for an image of 2048 by 2048 pixels (12 Mbyte), 8 seconds for the image of 4096 by 4096 pixels (48 Mbyte) and 40 seconds for the image of 6144 by 6144 pixels (108 MByte). The eyelike can be used in various configurations. Used as a camera body most commercial lenses can be connected to the camera via existing lens adaptors. On the other hand the eyelike can be used as a back to most commercial 4' by 5' view cameras. This paper describes the eyelike camera concept with the essential system components. The article finishes with a description of the software, which is needed to bring the high quality of the camera to the user.
Rover mast calibration, exact camera pointing, and camara handoff for visual target tracking
NASA Technical Reports Server (NTRS)
Kim, Won S.; Ansar, Adnan I.; Steele, Robert D.
2005-01-01
This paper presents three technical elements that we have developed to improve the accuracy of the visual target tracking for single-sol approach-and-instrument placement in future Mars rover missions. An accurate, straightforward method of rover mast calibration is achieved by using a total station, a camera calibration target, and four prism targets mounted on the rover. The method was applied to Rocky8 rover mast calibration and yielded a 1.1-pixel rms residual error. Camera pointing requires inverse kinematic solutions for mast pan and tilt angles such that the target image appears right at the center of the camera image. Two issues were raised. Mast camera frames are in general not parallel to the masthead base frame. Further, the optical axis of the camera model in general does not pass through the center of the image. Despite these issues, we managed to derive non-iterative closed-form exact solutions, which were verified with Matlab routines. Actual camera pointing experiments aver 50 random target image paints yielded less than 1.3-pixel rms pointing error. Finally, a purely geometric method for camera handoff using stereo views of the target has been developed. Experimental test runs show less than 2.5 pixels error on high-resolution Navcam for Pancam-to-Navcam handoff, and less than 4 pixels error on lower-resolution Hazcam for Navcam-to-Hazcam handoff.
Thermographic measurements of high-speed metal cutting
NASA Astrophysics Data System (ADS)
Mueller, Bernhard; Renz, Ulrich
2002-03-01
Thermographic measurements of a high-speed cutting process have been performed with an infrared camera. To realize images without motion blur the integration times were reduced to a few microseconds. Since the high tool wear influences the measured temperatures a set-up has been realized which enables small cutting lengths. Only single images have been recorded because the process is too fast to acquire a sequence of images even with the frame rate of the very fast infrared camera which has been used. To expose the camera when the rotating tool is in the middle of the camera image an experimental set-up with a light barrier and a digital delay generator with a time resolution of 1 ns has been realized. This enables a very exact triggering of the camera at the desired position of the tool in the image. Since the cutting depth is between 0.1 and 0.2 mm a high spatial resolution was also necessary which was obtained by a special close-up lens allowing a resolution of app. 45 microns. The experimental set-up will be described and infrared images and evaluated temperatures of a titanium alloy and a carbon steel will be presented for cutting speeds up to 42 m/s.
Establishing imaging sensor specifications for digital still cameras
NASA Astrophysics Data System (ADS)
Kriss, Michael A.
2007-02-01
Digital Still Cameras, DSCs, have now displaced conventional still cameras in most markets. The heart of a DSC is thought to be the imaging sensor, be it Full Frame CCD, and Interline CCD, a CMOS sensor or the newer Foveon buried photodiode sensors. There is a strong tendency by consumers to consider only the number of mega-pixels in a camera and not to consider the overall performance of the imaging system, including sharpness, artifact control, noise, color reproduction, exposure latitude and dynamic range. This paper will provide a systematic method to characterize the physical requirements of an imaging sensor and supporting system components based on the desired usage. The analysis is based on two software programs that determine the "sharpness", potential for artifacts, sensor "photographic speed", dynamic range and exposure latitude based on the physical nature of the imaging optics, sensor characteristics (including size of pixels, sensor architecture, noise characteristics, surface states that cause dark current, quantum efficiency, effective MTF, and the intrinsic full well capacity in terms of electrons per square centimeter). Examples will be given for consumer, pro-consumer, and professional camera systems. Where possible, these results will be compared to imaging system currently on the market.
Thermal feature extraction of servers in a datacenter using thermal image registration
NASA Astrophysics Data System (ADS)
Liu, Hang; Ran, Jian; Xie, Ting; Gao, Shan
2017-09-01
Thermal cameras provide fine-grained thermal information that enhances monitoring and enables automatic thermal management in large datacenters. Recent approaches employing mobile robots or thermal camera networks can already identify the physical locations of hot spots. Other distribution information used to optimize datacenter management can also be obtained automatically using pattern recognition technology. However, most of the features extracted from thermal images, such as shape and gradient, may be affected by changes in the position and direction of the thermal camera. This paper presents a method for extracting the thermal features of a hot spot or a server in a container datacenter. First, thermal and visual images are registered based on textural characteristics extracted from images acquired in datacenters. Then, the thermal distribution of each server is standardized. The features of a hot spot or server extracted from the standard distribution can reduce the impact of camera position and direction. The results of experiments show that image registration is efficient for aligning the corresponding visual and thermal images in the datacenter, and the standardization procedure reduces the impacts of camera position and direction on hot spot or server features.
In vitro near-infrared imaging of occlusal dental caries using a germanium-enhanced CMOS camera
NASA Astrophysics Data System (ADS)
Lee, Chulsung; Darling, Cynthia L.; Fried, Daniel
2010-02-01
The high transparency of dental enamel in the near-infrared (NIR) at 1310-nm can be exploited for imaging dental caries without the use of ionizing radiation. The objective of this study was to determine whether the lesion contrast derived from NIR transillumination can be used to estimate lesion severity. Another aim was to compare the performance of a new Ge enhanced complementary metal-oxide-semiconductor (CMOS) based NIR imaging camera with the InGaAs focal plane array (FPA). Extracted human teeth (n=52) with natural occlusal caries were imaged with both cameras at 1310-nm and the image contrast between sound and carious regions was calculated. After NIR imaging, teeth were sectioned and examined using more established methods, namely polarized light microscopy (PLM) and transverse microradiography (TMR) to calculate lesion severity. Lesions were then classified into 4 categories according to the lesion severity. Lesion contrast increased significantly with lesion severity for both cameras (p<0.05). The Ge enhanced CMOS camera equipped with the larger array and smaller pixels yielded higher contrast values compared with the smaller InGaAs FPA (p<0.01). Results demonstrate that NIR lesion contrast can be used to estimate lesion severity.
In vitro near-infrared imaging of occlusal dental caries using germanium enhanced CMOS camera.
Lee, Chulsung; Darling, Cynthia L; Fried, Daniel
2010-03-01
The high transparency of dental enamel in the near-infrared (NIR) at 1310-nm can be exploited for imaging dental caries without the use of ionizing radiation. The objective of this study was to determine whether the lesion contrast derived from NIR transillumination can be used to estimate lesion severity. Another aim was to compare the performance of a new Ge enhanced complementary metal-oxide-semiconductor (CMOS) based NIR imaging camera with the InGaAs focal plane array (FPA). Extracted human teeth (n=52) with natural occlusal caries were imaged with both cameras at 1310-nm and the image contrast between sound and carious regions was calculated. After NIR imaging, teeth were sectioned and examined using more established methods, namely polarized light microscopy (PLM) and transverse microradiography (TMR) to calculate lesion severity. Lesions were then classified into 4 categories according to the lesion severity. Lesion contrast increased significantly with lesion severity for both cameras (p<0.05). The Ge enhanced CMOS camera equipped with the larger array and smaller pixels yielded higher contrast values compared with the smaller InGaAs FPA (p<0.01). Results demonstrate that NIR lesion contrast can be used to estimate lesion severity.
An optimal algorithm for reconstructing images from binary measurements
NASA Astrophysics Data System (ADS)
Yang, Feng; Lu, Yue M.; Sbaiz, Luciano; Vetterli, Martin
2010-01-01
We have studied a camera with a very large number of binary pixels referred to as the gigavision camera [1] or the gigapixel digital film camera [2, 3]. Potential advantages of this new camera design include improved dynamic range, thanks to its logarithmic sensor response curve, and reduced exposure time in low light conditions, due to its highly sensitive photon detection mechanism. We use maximum likelihood estimator (MLE) to reconstruct a high quality conventional image from the binary sensor measurements of the gigavision camera. We prove that when the threshold T is "1", the negative loglikelihood function is a convex function. Therefore, optimal solution can be achieved using convex optimization. Base on filter bank techniques, fast algorithms are given for computing the gradient and the multiplication of a vector and Hessian matrix of the negative log-likelihood function. We show that with a minor change, our algorithm also works for estimating conventional images from multiple binary images. Numerical experiments with synthetic 1-D signals and images verify the effectiveness and quality of the proposed algorithm. Experimental results also show that estimation performance can be improved by increasing the oversampling factor or the number of binary images.
Recent technology and usage of plastic lenses in image taking objectives
NASA Astrophysics Data System (ADS)
Yamaguchi, Susumu; Sato, Hiroshi; Mori, Nobuyoshi; Kiriki, Toshihiko
2005-09-01
Recently, plastic lenses produced by injection molding are widely used in image taking objectives for digital cameras, camcorders, and mobile phone cameras, because of their suitability for volume production and ease of obtaining an advantage of aspherical surfaces. For digital camera and camcorder objectives, it is desirable that there is no image point variation with the temperature change in spite of employing several plastic lenses. At the same time, due to the shrinking pixel size of solid-state image sensor, there is now a requirement to assemble lenses with high accuracy. In order to satisfy these requirements, we have developed 16 times compact zoom objective for camcorder and 3 times class folded zoom objectives for digital camera, incorporating cemented plastic doublet consisting of a positive lens and a negative lens. Over the last few years, production volumes of camera-equipped mobile phones have increased substantially. Therefore, for mobile phone cameras, the consideration of productivity is more important than ever. For this application, we have developed a 1.3-mega pixels compact camera module with macro function utilizing the advantage of a plastic lens that can be given mechanically functional shape to outer flange part. Its objective consists of three plastic lenses and all critical dimensions related to optical performance can be determined by high precise optical elements. Therefore this camera module is manufactured without optical adjustment in automatic assembling line, and achieves both high productivity and high performance. Reported here are the constructions and the technical topics of image taking objectives described above.
Curiosity ChemCam Removes Dust
2013-04-08
This pair of images taken a few minutes apart show how laser firing by NASA Mars rover Curiosity removes dust from the surface of a rock. The images were taken by the remote micro-imager camera in the laser-firing Chemistry and Camera ChemCam.
ERIC Educational Resources Information Center
Zetie, K. P.
2017-01-01
In basic physics, often in their first year of study of the subject, students meet the concept of an image, for example when using pinhole cameras and finding the position of an image in a mirror. They are also familiar with the term in photography and design, through software which allows image manipulation, even "in-camera" on most…
Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.
2010-01-01
Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software’s framework and provide details to guide users with development of this and similar software. PMID:21258475
Capturing the plenoptic function in a swipe
NASA Astrophysics Data System (ADS)
Lawson, Michael; Brookes, Mike; Dragotti, Pier Luigi
2016-09-01
Blur in images, caused by camera motion, is typically thought of as a problem. The approach described in this paper shows instead that it is possible to use the blur caused by the integration of light rays at different positions along a moving camera trajectory to extract information about the light rays present within the scene. Retrieving the light rays of a scene from different viewpoints is equivalent to retrieving the plenoptic function of the scene. In this paper, we focus on a specific case in which the blurred image of a scene, containing a flat plane with a texture signal that is a sum of sine waves, is analysed to recreate the plenoptic function. The image is captured by a single lens camera with shutter open, moving in a straight line between two points, resulting in a swiped image. It is shown that finite rate of innovation sampling theory can be used to recover the scene geometry and therefore the epipolar plane image from the single swiped image. This epipolar plane image can be used to generate unblurred images for a given camera location.
... The special camera and imaging techniques used in nuclear medicine include the gamma camera and single-photon emission-computed tomography (SPECT). The gamma camera, also called a scintillation camera, detects radioactive energy that is emitted from the patient's body and ...
... The special camera and imaging techniques used in nuclear medicine include the gamma camera and single-photon emission-computed tomography (SPECT). The gamma camera, also called a scintillation camera, detects radioactive energy that is emitted from the patient's body and ...
Presence capture cameras - a new challenge to the image quality
NASA Astrophysics Data System (ADS)
Peltoketo, Veli-Tapani
2016-04-01
Commercial presence capture cameras are coming to the markets and a new era of visual entertainment starts to get its shape. Since the true presence capturing is still a very new technology, the real technical solutions are just passed a prototyping phase and they vary a lot. Presence capture cameras have still the same quality issues to tackle as previous phases of digital imaging but also numerous new ones. This work concentrates to the quality challenges of presence capture cameras. A camera system which can record 3D audio-visual reality as it is has to have several camera modules, several microphones and especially technology which can synchronize output of several sources to a seamless and smooth virtual reality experience. Several traditional quality features are still valid in presence capture cameras. Features like color fidelity, noise removal, resolution and dynamic range create the base of virtual reality stream quality. However, co-operation of several cameras brings a new dimension for these quality factors. Also new quality features can be validated. For example, how the camera streams should be stitched together with 3D experience without noticeable errors and how to validate the stitching? The work describes quality factors which are still valid in the presence capture cameras and defines the importance of those. Moreover, new challenges of presence capture cameras are investigated in image and video quality point of view. The work contains considerations how well current measurement methods can be used in presence capture cameras.
NASA Astrophysics Data System (ADS)
Costa, Manuel F. M.; Jorge, Jorge M.
1998-01-01
The early evaluation of the visual status of human infants is of a critical importance. It is of utmost importance to the development of the child's visual system that she perceives clear, focused, retinal images. Furthermore if the refractive problems are not corrected in due time amblyopia may occur. Photorefraction is a non-invasive clinical tool rather convenient for application to this kind of population. A qualitative or semi-quantitative information about refractive errors, accommodation, strabismus, amblyogenic factors and some pathologies (cataracts) can the easily obtained. The photorefraction experimental setup we established using new technological breakthroughs on the fields of imaging devices, image processing and fiber optics, allows the implementation of both the isotropic and eccentric photorefraction approaches. Essentially both methods consist on delivering a light beam into the eyes. It is refracted by the ocular media, strikes the retina, focusing or not, reflects off and is collected by a camera. The system is formed by one CCD color camera and a light source. A beam splitter in front of the camera's objective allows coaxial illumination and observation. An optomechanical system also allows eccentric illumination. The light source is a flash type one and is synchronized with the camera's image acquisition. The camera's image is digitized displayed in real time. Image processing routines are applied for image's enhancement and feature extraction.
NASA Astrophysics Data System (ADS)
Costa, Manuel F.; Jorge, Jorge M.
1997-12-01
The early evaluation of the visual status of human infants is of a critical importance. It is of utmost importance to the development of the child's visual system that she perceives clear, focused, retinal images. Furthermore if the refractive problems are not corrected in due time amblyopia may occur. Photorefraction is a non-invasive clinical tool rather convenient for application to this kind of population. A qualitative or semi-quantitative information about refractive errors, accommodation, strabismus, amblyogenic factors and some pathologies (cataracts) can the easily obtained. The photorefraction experimental setup we established using new technological breakthroughs on the fields of imaging devices, image processing and fiber optics, allows the implementation of both the isotropic and eccentric photorefraction approaches. Essentially both methods consist on delivering a light beam into the eyes. It is refracted by the ocular media, strikes the retina, focusing or not, reflects off and is collected by a camera. The system is formed by one CCD color camera and a light source. A beam splitter in front of the camera's objective allows coaxial illumination and observation. An optomechanical system also allows eccentric illumination. The light source is a flash type one and is synchronized with the camera's image acquisition. The camera's image is digitized displayed in real time. Image processing routines are applied for image's enhancement and feature extraction.
NASA Astrophysics Data System (ADS)
Zhang, Rumin; Liu, Peng; Liu, Dijun; Su, Guobin
2015-12-01
In this paper, we establish a forward simulation model of plenoptic camera which is implemented by inserting a micro-lens array in a conventional camera. The simulation model is used to emulate how the space objects at different depths are imaged by the main lens then remapped by the micro-lens and finally captured on the 2D sensor. We can easily modify the parameters of the simulation model such as the focal lengths and diameters of the main lens and micro-lens and the number of micro-lens. Employing the spatial integration, the refocused images and all-in-focus images are rendered based on the plenoptic images produced by the model. The forward simulation model can be used to determine the trade-offs between different configurations and to test any new researches related to plenoptic camera without the need of prototype.
Depth profile measurement with lenslet images of the plenoptic camera
NASA Astrophysics Data System (ADS)
Yang, Peng; Wang, Zhaomin; Zhang, Wei; Zhao, Hongying; Qu, Weijuan; Zhao, Haimeng; Asundi, Anand; Yan, Lei
2018-03-01
An approach for carrying out depth profile measurement of an object with the plenoptic camera is proposed. A single plenoptic image consists of multiple lenslet images. To begin with, these images are processed directly with a refocusing technique to obtain the depth map, which does not need to align and decode the plenoptic image. Then, a linear depth calibration is applied based on the optical structure of the plenoptic camera for depth profile reconstruction. One significant improvement of the proposed method concerns the resolution of the depth map. Unlike the traditional method, our resolution is not limited by the number of microlenses inside the camera, and the depth map can be globally optimized. We validated the method with experiments on depth map reconstruction, depth calibration, and depth profile measurement, with the results indicating that the proposed approach is both efficient and accurate.
Efficient color correction method for smartphone camera-based health monitoring application.
Duc Dang; Chae Ho Cho; Daeik Kim; Oh Seok Kwon; Jo Woon Chong
2017-07-01
Smartphone health monitoring applications are recently highlighted due to the rapid development of hardware and software performance of smartphones. However, color characteristics of images captured by different smartphone models are dissimilar each other and this difference may give non-identical health monitoring results when the smartphone health monitoring applications monitor physiological information using their embedded smartphone cameras. In this paper, we investigate the differences in color properties of the captured images from different smartphone models and apply a color correction method to adjust dissimilar color values obtained from different smartphone cameras. Experimental results show that the color corrected images using the correction method provide much smaller color intensity errors compared to the images without correction. These results can be applied to enhance the consistency of smartphone camera-based health monitoring applications by reducing color intensity errors among the images obtained from different smartphones.
NASA Astrophysics Data System (ADS)
Szu, Harold; Hsu, Charles; Landa, Joseph; Cha, Jae H.; Krapels, Keith A.
2015-05-01
How can we design cameras that image selectively in Full Electro-Magnetic (FEM) spectra? Without selective imaging, we cannot use, for example, ordinary tourist cameras to see through fire, smoke, or other obscurants contributing to creating a Visually Degraded Environment (VDE). This paper addresses a possible new design of selective-imaging cameras at firmware level. The design is consistent with physics of the irreversible thermodynamics of Boltzmann's molecular entropy. It enables imaging in appropriate FEM spectra for sensing through the VDE, and displaying in color spectra for Human Visual System (HVS). We sense within the spectra the largest entropy value of obscurants such as fire, smoke, etc. Then we apply a smart firmware implementation of Blind Sources Separation (BSS) to separate all entropy sources associated with specific Kelvin temperatures. Finally, we recompose the scene using specific RGB colors constrained by the HVS, by up/down shifting Planck spectra at each pixel and time.
Wavefront Sensing with the Fine Guidance Sensor for James Webb Space Telescope
NASA Technical Reports Server (NTRS)
Smith, J. Scott; Aronstein, David; Dean, Bruce H.; Howard,Joe; Shiri, Ron
2008-01-01
An analysis is presented that utilizes the Fine Guidance Sensor (FGS) for focal-plane wavefront sensing (WFS) for the James Webb Space Telescope (JWST). WFS with FGS increases the number of wavefront measurements taken in field of the telescope, but has many challenges over the other JWST instruments that make it unique, such as; less sampling of the Point Spread Function (PSF), a smaller diversity-defocus range, a smaller image detector size, and a polychromatic object or source. Additionally, presented is an analysis of sampling for wavefront sensing. Results are shown based on simulations of flight and the cryogenic optical testing at NASA Johnson Space Center.
Portal imaging with flat-panel detector and CCD camera
NASA Astrophysics Data System (ADS)
Roehrig, Hans; Tang, Chuankun; Cheng, Chee-Wai; Dallas, William J.
1997-07-01
This paper provides a comparison of imaging parameters of two portal imaging systems at 6 MV: a flat panel detector and a CCD-camera based portal imaging system. Measurements were made of the signal and noise and consequently of signal-to-noise per pixel as a function of the exposure. Both systems have a linear response with respect to exposure, and the noise is proportional to the square-root of the exposure, indicating photon-noise limitation. The flat-panel detector has a signal- to-noise ratio, which is higher than that observed wit the CCD-camera based portal imaging system. This is expected because most portal imaging systems using optical coupling with a lens exhibit severe quantum-sinks. The paper also presents data on the screen's photon gain (the number of light-photons per interacting x-ray photon), as well as on the magnitude of the Swank-noise, (which describes fluctuation in the screen's photon gain). Images of a Las Vegas-type aluminum contrast detail phantom, located at the ISO-Center, were generated at an exposure of 1 MU. The CCD-camera based system permits detection of aluminum-holes of 0.01194 cm diameter and 0.228 mm depth while the flat-panel detector permits detection of aluminum holes of 0.01194 cm diameter and 0.1626 mm depth, indicating a better signal-to-noise ratio. Rank order filtering was applied to the raw images from the CCD-based system in order to remove the direct hits. These are camera responses to scattered x-ray photons which interact directly with the CCD of the CCD-camera and generate 'salt and pepper type noise,' which interferes severely with attempts to determine accurate estimates of the image noise.
Radiometric calibration of wide-field camera system with an application in astronomy
NASA Astrophysics Data System (ADS)
Vítek, Stanislav; Nasyrova, Maria; Stehlíková, Veronika
2017-09-01
Camera response function (CRF) is widely used for the description of the relationship between scene radiance and image brightness. Most common application of CRF is High Dynamic Range (HDR) reconstruction of the radiance maps of imaged scenes from a set of frames with different exposures. The main goal of this work is to provide an overview of CRF estimation algorithms and compare their outputs with results obtained under laboratory conditions. These algorithms, typically designed for multimedia content, are unfortunately quite useless with astronomical image data, mostly due to their nature (blur, noise, and long exposures). Therefore, we propose an optimization of selected methods to use in an astronomical imaging application. Results are experimentally verified on the wide-field camera system using Digital Single Lens Reflex (DSLR) camera.
Prediction of Viking lander camera image quality
NASA Technical Reports Server (NTRS)
Huck, F. O.; Burcher, E. E.; Jobson, D. J.; Wall, S. D.
1976-01-01
Formulations are presented that permit prediction of image quality as a function of camera performance, surface radiance properties, and lighting and viewing geometry. Predictions made for a wide range of surface radiance properties reveal that image quality depends strongly on proper camera dynamic range command and on favorable lighting and viewing geometry. Proper camera dynamic range commands depend mostly on the surface albedo that will be encountered. Favorable lighting and viewing geometries depend mostly on lander orientation with respect to the diurnal sun path over the landing site, and tend to be independent of surface albedo and illumination scattering function. Side lighting with low sun elevation angles (10 to 30 deg) is generally favorable for imaging spatial details and slopes, whereas high sun elevation angles are favorable for measuring spectral reflectances.
NASA Astrophysics Data System (ADS)
Yamamoto, Seiichi; Suzuki, Mayumi; Kato, Katsuhiko; Watabe, Tadashi; Ikeda, Hayato; Kanai, Yasukazu; Ogata, Yoshimune; Hatazawa, Jun
2016-09-01
Although iodine 131 (I-131) is used for radionuclide therapy, high resolution images are difficult to obtain with conventional gamma cameras because of the high energy of I-131 gamma photons (364 keV). Cerenkov-light imaging is a possible method for beta emitting radionuclides, and I-131 (606 MeV maximum beta energy) is a candidate to obtain high resolution images. We developed a high energy gamma camera system for I-131 radionuclide and combined it with a Cerenkov-light imaging system to form a gamma-photon/Cerenkov-light hybrid imaging system to compare the simultaneously measured images of these two modalities. The high energy gamma imaging detector used 0.85-mm×0.85-mm×10-mm thick GAGG scintillator pixels arranged in a 44×44 matrix with a 0.1-mm thick reflector and optical coupled to a Hamamatsu 2 in. square position sensitive photomultiplier tube (PSPMT: H12700 MOD). The gamma imaging detector was encased in a 2 cm thick tungsten shield, and a pinhole collimator was mounted on its top to form a gamma camera system. The Cerenkov-light imaging system was made of a high sensitivity cooled CCD camera. The Cerenkov-light imaging system was combined with the gamma camera using optical mirrors to image the same area of the subject. With this configuration, we simultaneously imaged the gamma photons and the Cerenkov-light from I-131 in the subjects. The spatial resolution and sensitivity of the gamma camera system for I-131 were respectively 3 mm FWHM and 10 cps/MBq for the high sensitivity collimator at 10 cm from the collimator surface. The spatial resolution of the Cerenkov-light imaging system was 0.64 mm FWHM at 10 cm from the system surface. Thyroid phantom and rat images were successfully obtained with the developed gamma-photon/Cerenkov-light hybrid imaging system, allowing direct comparison of these two modalities. Our developed gamma-photon/Cerenkov-light hybrid imaging system will be useful to evaluate the advantages and disadvantages of these two modalities.
Using Engineering Cameras on Mars Landers and Rovers to Retrieve Atmospheric Dust Loading
NASA Astrophysics Data System (ADS)
Wolfe, C. A.; Lemmon, M. T.
2014-12-01
Dust in the Martian atmosphere influences energy deposition, dynamics, and the viability of solar powered exploration vehicles. The Viking, Pathfinder, Spirit, Opportunity, Phoenix, and Curiosity landers and rovers each included the ability to image the Sun with a science camera that included a neutral density filter. Direct images of the Sun provide the ability to measure extinction by dust and ice in the atmosphere. These observations have been used to characterize dust storms, to provide ground truth sites for orbiter-based global measurements of dust loading, and to help monitor solar panel performance. In the cost-constrained environment of Mars exploration, future missions may omit such cameras, as the solar-powered InSight mission has. We seek to provide a robust capability of determining atmospheric opacity from sky images taken with cameras that have not been designed for solar imaging, such as lander and rover engineering cameras. Operational use requires the ability to retrieve optical depth on a timescale useful to mission planning, and with an accuracy and precision sufficient to support both mission planning and validating orbital measurements. We will present a simulation-based assessment of imaging strategies and their error budgets, as well as a validation based on archival engineering camera data.
Imaging Emission Spectra with Handheld and Cellphone Cameras
ERIC Educational Resources Information Center
Sitar, David
2012-01-01
As point-and-shoot digital camera technology advances it is becoming easier to image spectra in a laboratory setting on a shoestring budget and get immediate results. With this in mind, I wanted to test three cameras to see how their results would differ. Two undergraduate physics students and I used one handheld 7.1 megapixel (MP) digital Cannon…
Automated camera-phone experience with the frequency of imaging necessary to capture diet.
Arab, Lenore; Winter, Ashley
2010-08-01
Camera-enabled cell phones provide an opportunity to strengthen dietary recall through automated imaging of foods eaten during a specified period. To explore the frequency of imaging needed to capture all foods eaten, we examined the number of images of individual foods consumed in a pilot study of automated imaging using camera phones set to an image-capture frequency of one snapshot every 10 seconds. Food images were tallied from 10 young adult subjects who wore the phone continuously during the work day and consented to share their images. Based on the number of images received for each eating experience, the pilot data suggest that automated capturing of images at a frequency of once every 10 seconds is adequate for recording foods consumed during regular meals, whereas a greater frequency of imaging is necessary to capture snacks and beverages eaten quickly. 2010 American Dietetic Association. Published by Elsevier Inc. All rights reserved.
Beach Observations using Quadcopter Imagery
NASA Astrophysics Data System (ADS)
Yang, Yi-Chung; Wang, Hsing-Yu; Fang, Hui-Ming; Hsiao, Sung-Shan; Tsai, Cheng-Han
2017-04-01
Beaches are the places where the interaction of the land and sea takes place, and it is under the influence of many environmental factors, including meteorological and oceanic ones. To understand the evolution or changes of beaches, it may require constant monitoring. One way to monitor the beach changes is to use optical cameras. With careful placements of ground control points, land-based optical cameras, which are inexpensive compared to other remote sensing apparatuses, can be used to survey a relatively large area in a short time. For example, we have used terrestrial optical cameras incorporated with ground control points to monitor beaches. The images from the cameras were calibrated by applying the direct linear transformation, projective transformation, and Sobel edge detector to locate the shoreline. The terrestrial optical cameras can record the beach images continuous, and the shorelines can be satisfactorily identified. However, the terrestrial cameras have some limitations. First, the camera system set a sufficiently high level so that the camera can cover the whole area that is of interest; such a location may not be available. The second limitation is that objects in the image have a different resolution, depending on the distance of objects from the cameras. To overcome these limitations, the present study tested a quadcopter equipped with a down-looking camera to record video and still images of a beach. The quadcopter can be controlled to hover at one location. However, the hovering of the quadcopter can be affected by the wind, since it is not positively anchored to a structure. Although the quadcopter has a gimbal mechanism to damp out tiny shakings of the copter, it will not completely counter movements due to the wind. In our preliminary tests, we have flown the quadcopter up to 500 m high to record 10-minnte video. We then took a 10-minute average of the video data. The averaged image of the coast was blurred because of the time duration of the video and the small movement caused by the quadcopter trying to return to its original position, which is caused by the wind. To solve this problem, the feature detection technique of Speeded Up Robust Features (SURF) method was used on the image of the video, and the resulting image was much sharper than that original image. Next, we extracted the maximum and minimum of RGB value of each pixel, respectively, of the 10-minutes videos. The beach breaker zone showed up in the maximum RGB image as white color areas. Moreover, we were also able to remove the breaker from the images and see the breaker zone bottom features using minimum RGB value of the images. From this test, we also identified the location of the coastline. It was found that the correlation coefficient between the coastline identified by the copter image and that by the ground survey was as high as 0.98. By repeating this copter flight at different times, we could measure the evolution of the coastline.
Image quality evaluation of color displays using a Fovean color camera
NASA Astrophysics Data System (ADS)
Roehrig, Hans; Dallas, William J.; Fan, Jiahua; Krupinski, Elizabeth A.; Redford, Gary R.; Yoneda, Takahiro
2007-03-01
This paper presents preliminary data on the use of a color camera for the evaluation of Quality Control (QC) and Quality Analysis (QA) of a color LCD in comparison with that of a monochrome LCD. The color camera is a C-MOS camera with a pixel size of 9 µm and a pixel matrix of 2268 × 1512 × 3. The camera uses a sensor that has co-located pixels for all three primary colors. The imaging geometry used mostly was 12 × 12 camera pixels per display pixel even though it appears that an imaging geometry of 17.6 might provide results which are more accurate. The color camera is used as an imaging colorimeter, where each camera pixel is calibrated to serve as a colorimeter. This capability permits the camera to determine chromaticity of the color LCD at different sections of the display. After the color calibration with a CS-200 colorimeter the color coordinates of the display's primaries determined from the camera's luminance response are very close to those found from the CS-200. Only the color coordinates of the display's white point were in error. Modulation Transfer Function (MTF) as well as Noise in terms of the Noise Power Spectrum (NPS) of both LCDs were evaluated. The horizontal MTFs of both displays have a larger negative slope than the vertical MTFs, indicating that the horizontal MTFs are poorer than the vertical MTFs. However the modulations at the Nyquist frequency seem lower for the color LCD than for the monochrome LCD. These results contradict simulations regarding MTFs in the vertical direction. The spatial noise of the color display in both directions are larger than that of the monochrome display. Attempts were also made to analyze the total noise in terms of spatial and temporal noise by applying subtractions of images taken at exactly the same exposure. Temporal noise seems to be significantly lower than spatial noise.
WiseEye: Next Generation Expandable and Programmable Camera Trap Platform for Wildlife Research.
Nazir, Sajid; Newey, Scott; Irvine, R Justin; Verdicchio, Fabio; Davidson, Paul; Fairhurst, Gorry; Wal, René van der
2017-01-01
The widespread availability of relatively cheap, reliable and easy to use digital camera traps has led to their extensive use for wildlife research, monitoring and public outreach. Users of these units are, however, often frustrated by the limited options for controlling camera functions, the generation of large numbers of images, and the lack of flexibility to suit different research environments and questions. We describe the development of a user-customisable open source camera trap platform named 'WiseEye', designed to provide flexible camera trap technology for wildlife researchers. The novel platform is based on a Raspberry Pi single-board computer and compatible peripherals that allow the user to control its functions and performance. We introduce the concept of confirmatory sensing, in which the Passive Infrared triggering is confirmed through other modalities (i.e. radar, pixel change) to reduce the occurrence of false positives images. This concept, together with user-definable metadata, aided identification of spurious images and greatly reduced post-collection processing time. When tested against a commercial camera trap, WiseEye was found to reduce the incidence of false positive images and false negatives across a range of test conditions. WiseEye represents a step-change in camera trap functionality, greatly increasing the value of this technology for wildlife research and conservation management.
Development of two-framing camera with large format and ultrahigh speed
NASA Astrophysics Data System (ADS)
Jiang, Xiaoguo; Wang, Yuan; Wang, Yi
2012-10-01
High-speed imaging facility is important and necessary for the formation of time-resolved measurement system with multi-framing capability. The framing camera which satisfies the demands of both high speed and large format needs to be specially developed in the ultrahigh speed research field. A two-framing camera system with high sensitivity and time-resolution has been developed and used for the diagnosis of electron beam parameters of Dragon-I linear induction accelerator (LIA). The camera system, which adopts the principle of light beam splitting in the image space behind the lens with long focus length, mainly consists of lens-coupled gated image intensifier, CCD camera and high-speed shutter trigger device based on the programmable integrated circuit. The fastest gating time is about 3 ns, and the interval time between the two frames can be adjusted discretely at the step of 0.5 ns. Both the gating time and the interval time can be tuned to the maximum value of about 1 s independently. Two images with the size of 1024×1024 for each can be captured simultaneously in our developed camera. Besides, this camera system possesses a good linearity, uniform spatial response and an equivalent background illumination as low as 5 electrons/pix/sec, which fully meets the measurement requirements of Dragon-I LIA.
WiseEye: Next Generation Expandable and Programmable Camera Trap Platform for Wildlife Research
Nazir, Sajid; Newey, Scott; Irvine, R. Justin; Verdicchio, Fabio; Davidson, Paul; Fairhurst, Gorry; van der Wal, René
2017-01-01
The widespread availability of relatively cheap, reliable and easy to use digital camera traps has led to their extensive use for wildlife research, monitoring and public outreach. Users of these units are, however, often frustrated by the limited options for controlling camera functions, the generation of large numbers of images, and the lack of flexibility to suit different research environments and questions. We describe the development of a user-customisable open source camera trap platform named ‘WiseEye’, designed to provide flexible camera trap technology for wildlife researchers. The novel platform is based on a Raspberry Pi single-board computer and compatible peripherals that allow the user to control its functions and performance. We introduce the concept of confirmatory sensing, in which the Passive Infrared triggering is confirmed through other modalities (i.e. radar, pixel change) to reduce the occurrence of false positives images. This concept, together with user-definable metadata, aided identification of spurious images and greatly reduced post-collection processing time. When tested against a commercial camera trap, WiseEye was found to reduce the incidence of false positive images and false negatives across a range of test conditions. WiseEye represents a step-change in camera trap functionality, greatly increasing the value of this technology for wildlife research and conservation management. PMID:28076444
Introduction of A New Toolbox for Processing Digital Images From Multiple Camera Networks: FMIPROT
NASA Astrophysics Data System (ADS)
Melih Tanis, Cemal; Nadir Arslan, Ali
2017-04-01
Webcam networks intended for scientific monitoring of ecosystems is providing digital images and other environmental data for various studies. Also, other types of camera networks can also be used for scientific purposes, e.g. usage of traffic webcams for phenological studies, camera networks for ski tracks and avalanche monitoring over mountains for hydrological studies. To efficiently harness the potential of these camera networks, easy to use software which can obtain and handle images from different networks having different protocols and standards is necessary. For the analyses of the images from webcam networks, numerous software packages are freely available. These software packages have different strong features not only for analyzing but also post processing digital images. But specifically for the ease of use, applicability and scalability, a different set of features could be added. Thus, a more customized approach would be of high value, not only for analyzing images of comprehensive camera networks, but also considering the possibility to create operational data extraction and processing with an easy to use toolbox. At this paper, we introduce a new toolbox, entitled; Finnish Meteorological Institute Image PROcessing Tool (FMIPROT) which a customized approach is followed. FMIPROT has currently following features: • straightforward installation, • no software dependencies that require as extra installations, • communication with multiple camera networks, • automatic downloading and handling images, • user friendly and simple user interface, • data filtering, • visualizing results on customizable plots, • plugins; allows users to add their own algorithms. Current image analyses in FMIPROT include "Color Fraction Extraction" and "Vegetation Indices". The analysis of color fraction extraction is calculating the fractions of the colors in a region of interest, for red, green and blue colors along with brightness and luminance parameters. The analysis of vegetation indices is a collection of indices used in vegetation phenology and includes "Green Fraction" (green chromatic coordinate), "Green-Red Vegetation Index" and "Green Excess Index". "Snow cover fraction" analysis which detects snow covered pixels in the images and georeference them on a geospatial plane to calculate the snow cover fraction is being implemented at the moment. FMIPROT is being developed during the EU Life+ MONIMET project. Altogether we mounted 28 cameras at 14 different sites in Finland as MONIMET camera network. In this paper, we will present details of FMIPROT and analysis results from MONIMET camera network. We will also discuss on future planned developments of FMIPROT.
Low-cost conversion of the Polaroid MD-4 land camera to a digital gel documentation system.
Porch, Timothy G; Erpelding, John E
2006-04-30
A simple, inexpensive design is presented for the rapid conversion of the popular MD-4 Polaroid land camera to a high quality digital gel documentation system. Images of ethidium bromide stained DNA gels captured using the digital system were compared to images captured on Polaroid instant film. Resolution and sensitivity were enhanced using the digital system. In addition to the low cost and superior image quality of the digital system, there is also the added convenience of real-time image viewing through the swivel LCD of the digital camera, wide flexibility of gel sizes, accurate automatic focusing, variable image resolution, and consistent ease of use and quality. Images can be directly imported to a computer by using the USB port on the digital camera, further enhancing the potential of the digital system for documentation, analysis, and archiving. The system is appropriate for use as a start-up gel documentation system and for routine gel analysis.
Fisheye image rectification using spherical and digital distortion models
NASA Astrophysics Data System (ADS)
Li, Xin; Pi, Yingdong; Jia, Yanling; Yang, Yuhui; Chen, Zhiyong; Hou, Wenguang
2018-02-01
Fisheye cameras have been widely used in many applications including close range visual navigation and observation and cyber city reconstruction because its field of view is much larger than that of a common pinhole camera. This means that a fisheye camera can capture more information than a pinhole camera in the same scenario. However, the fisheye image contains serious distortion, which may cause trouble for human observers in recognizing the objects within. Therefore, in most practical applications, the fisheye image should be rectified to a pinhole perspective projection image to conform to human cognitive habits. The traditional mathematical model-based methods cannot effectively remove the distortion, but the digital distortion model can reduce the image resolution to some extent. Considering these defects, this paper proposes a new method that combines the physical spherical model and the digital distortion model. The distortion of fisheye images can be effectively removed according to the proposed approach. Many experiments validate its feasibility and effectiveness.
Fly-through viewpoint video system for multi-view soccer movie using viewpoint interpolation
NASA Astrophysics Data System (ADS)
Inamoto, Naho; Saito, Hideo
2003-06-01
This paper presents a novel method for virtual view generation that allows viewers to fly through in a real soccer scene. A soccer match is captured by multiple cameras at a stadium and images of arbitrary viewpoints are synthesized by view-interpolation of two real camera images near the given viewpoint. In the proposed method, cameras do not need to be strongly calibrated, but epipolar geometry between the cameras is sufficient for the view-interpolation. Therefore, it can easily be applied to a dynamic event even in a large space, because the efforts for camera calibration can be reduced. A soccer scene is classified into several regions and virtual view images are generated based on the epipolar geometry in each region. Superimposition of the images completes virtual views for the whole soccer scene. An application for fly-through observation of a soccer match is introduced as well as the algorithm of the view-synthesis and experimental results..
NASA Astrophysics Data System (ADS)
Watanabe, Takara; Enomoto, Ryoji; Muraishi, Hiroshi; Katagiri, Hideaki; Kagaya, Mika; Fukushi, Masahiro; Kano, Daisuke; Satoh, Wataru; Takeda, Tohoru; Tanaka, Manobu M.; Tanaka, Souichi; Uchida, Tomohisa; Wada, Kiyoto; Wakamatsu, Ryo
2018-02-01
We have developed an omnidirectional gamma-ray imaging Compton camera for environmental monitoring at low levels of radiation. The camera consisted of only six CsI(Tl) scintillator cubes of 3.5 cm, each of which was readout by super-bialkali photo-multiplier tubes (PMTs). Our camera enables the visualization of the position of gamma-ray sources in all directions (∼4π sr) over a wide energy range between 300 and 1400 keV. The angular resolution (σ) was found to be ∼11°, which was realized using an image-sharpening technique. A high detection efficiency of 18 cps/(µSv/h) for 511 keV (1.6 cps/MBq at 1 m) was achieved, indicating the capability of this camera to visualize hotspots in areas with low-radiation-level contamination from the order of µSv/h to natural background levels. Our proposed technique can be easily used as a low-radiation-level imaging monitor in radiation control areas, such as medical and accelerator facilities.
A four-lens based plenoptic camera for depth measurements
NASA Astrophysics Data System (ADS)
Riou, Cécile; Deng, Zhiyuan; Colicchio, Bruno; Lauffenburger, Jean-Philippe; Kohler, Sophie; Haeberlé, Olivier; Cudel, Christophe
2015-04-01
In previous works, we have extended the principles of "variable homography", defined by Zhang and Greenspan, for measuring height of emergent fibers on glass and non-woven fabrics. This method has been defined for working with fabric samples progressing on a conveyor belt. Triggered acquisition of two successive images was needed to perform the 3D measurement. In this work, we have retained advantages of homography variable for measurements along Z axis, but we have reduced acquisitions number to a single one, by developing an acquisition device characterized by 4 lenses placed in front of a single image sensor. The idea is then to obtain four projected sub-images on a single CCD sensor. The device becomes a plenoptic or light field camera, capturing multiple views on the same image sensor. We have adapted the variable homography formulation for this device and we propose a new formulation to calculate a depth with plenoptic cameras. With these results, we have transformed our plenoptic camera in a depth camera and first results given are very promising.
NASA Astrophysics Data System (ADS)
Jaanimagi, Paul A.
1992-01-01
This volume presents papers grouped under the topics on advances in streak and framing camera technology, applications of ultrahigh-speed photography, characterizing high-speed instrumentation, high-speed electronic imaging technology and applications, new technology for high-speed photography, high-speed imaging and photonics in detonics, and high-speed velocimetry. The papers presented include those on a subpicosecond X-ray streak camera, photocathodes for ultrasoft X-ray region, streak tube dynamic range, high-speed TV cameras for streak tube readout, femtosecond light-in-flight holography, and electrooptical systems characterization techniques. Attention is also given to high-speed electronic memory video recording techniques, high-speed IR imaging of repetitive events using a standard RS-170 imager, use of a CCD array as a medium-speed streak camera, the photography of shock waves in explosive crystals, a single-frame camera based on the type LD-S-10 intensifier tube, and jitter diagnosis for pico- and femtosecond sources.
Demonstration of in-vivo Multi-Probe Tracker Based on a Si/CdTe Semiconductor Compton Camera
NASA Astrophysics Data System (ADS)
Takeda, Shin'ichiro; Odaka, Hirokazu; Ishikawa, Shin-nosuke; Watanabe, Shin; Aono, Hiroyuki; Takahashi, Tadayuki; Kanayama, Yousuke; Hiromura, Makoto; Enomoto, Shuichi
2012-02-01
By using a prototype Compton camera consisting of silicon (Si) and cadmium telluride (CdTe) semiconductor detectors, originally developed for the ASTRO-H satellite mission, an experiment involving imaging multiple radiopharmaceuticals injected into a living mouse was conducted to study its feasibility for medical imaging. The accumulation of both iodinated (131I) methylnorcholestenol and 85Sr into the mouse's organs was simultaneously imaged by the prototype. This result implies that the Compton camera is expected to become a multi-probe tracker available in nuclear medicine and small animal imaging.
Unconventional techniques of fundus imaging: A review
Shanmugam, Mahesh P; Mishra, Divyansh Kailash Chandra; Rajesh, R; Madhukumar, R
2015-01-01
The methods of fundus examination include direct and indirect ophthalmoscopy and imaging with a fundus camera are an essential part of ophthalmic practice. The usage of unconventional equipment such as a hand-held video camera, smartphone, and a nasal endoscope allows one to image the fundus with advantages and some disadvantages. The advantages of these instruments are the cost-effectiveness, ultra portability and ability to obtain images in a remote setting and share the same electronically. These instruments, however, are unlikely to replace the fundus camera but then would always be an additional arsenal in an ophthalmologist's armamentarium. PMID:26458475
Obstacle Detection and Avoidance of a Mobile Robotic Platform Using Active Depth Sensing
2014-06-01
price of nearly one tenth of a laser range finder, the Xbox Kinect uses an infrared projector and camera to capture images of its environment in three...inception. At the price of nearly one tenth of a laser range finder, the Xbox Kinect uses an infrared projector and camera to capture images of its...cropped between 280 and 480 pixels. ........11 Figure 9. RGB image captured by the camera on the Xbox Kinect. ...............................12 Figure
NASA Astrophysics Data System (ADS)
Kerr, Andrew D.
Determining optimal imaging settings and best practices related to the capture of aerial imagery using consumer-grade digital single lens reflex (DSLR) cameras, should enable remote sensing scientists to generate consistent, high quality, and low cost image data sets. Radiometric optimization, image fidelity, image capture consistency and repeatability were evaluated in the context of detailed image-based change detection. The impetus for this research is in part, a dearth of relevant, contemporary literature, on the utilization of consumer grade DSLR cameras for remote sensing, and the best practices associated with their use. The main radiometric control settings on a DSLR camera, EV (Exposure Value), WB (White Balance), light metering, ISO, and aperture (f-stop), are variables that were altered and controlled over the course of several image capture missions. These variables were compared for their effects on dynamic range, intra-frame brightness variation, visual acuity, temporal consistency, and the detectability of simulated cracks placed in the images. This testing was conducted from a terrestrial, rather than an airborne collection platform, due to the large number of images per collection, and the desire to minimize inter-image misregistration. The results point to a range of slightly underexposed image exposure values as preferable for change detection and noise minimization fidelity. The makeup of the scene, the sensor, and aerial platform, influence the selection of the aperture and shutter speed which along with other variables, allow for estimation of the apparent image motion (AIM) motion blur in the resulting images. The importance of the image edges in the image application, will in part dictate the lowest usable f-stop, and allow the user to select a more optimal shutter speed and ISO. The single most important camera capture variable is exposure bias (EV), with a full dynamic range, wide distribution of DN values, and high visual contrast and acuity occurring around -0.7 to -0.3EV exposure bias. The ideal values for sensor gain, was found to be ISO 100, with ISO 200 a less desirable. This study offers researchers a better understanding of the effects of camera capture settings on RSI pairs and their influence on image-based change detection.
... The special camera and imaging techniques used in nuclear medicine include the gamma camera and single-photon emission-computed tomography (SPECT). The gamma camera, also called a scintillation camera, detects radioactive energy that is emitted from the patient's body and ...
The Polychromatic Laser Guide Star: the ELP-OA demonstrator at Observatoire de Haute Provence
NASA Astrophysics Data System (ADS)
Foy, R.; Chatagnat, M.; Dubet, D.; Éric, P.; Eysseric, J.; Foy, F.-C.; Fusco, T.; Girard, J.; Laloge, A.; Le van Suu, A.; Messaoudi, B.; Perruchot, S.; Richaud, P.; Richaud, Y.; Rondeau, X.; Tallon, M.; Thiébaut, É.; Boër, M.
2007-07-01
The correction of the tilt for adaptive optics devices from the only laser guide star can be done with the polychromatic laser guide star. We report the progress of the first demonstrator of the implementation of this concept, at Observatoire de Haute-Provence. We review the last steps of the feasibility studies, the optimization of the laser parameters, and the studies of the implementation at the OHP 1.52m telescope, including the beam propagation to the lasers room to the mesosphere and the algorithms for tip-tilt measurements.
Salau, J; Haas, J H; Thaller, G; Leisen, M; Junge, W
2016-09-01
Camera-based systems in dairy cattle were intensively studied over the last years. Different from this study, single camera systems with a limited range of applications were presented, mostly using 2D cameras. This study presents current steps in the development of a camera system comprising multiple 3D cameras (six Microsoft Kinect cameras) for monitoring purposes in dairy cows. An early prototype was constructed, and alpha versions of software for recording, synchronizing, sorting and segmenting images and transforming the 3D data in a joint coordinate system have already been implemented. This study introduced the application of two-dimensional wavelet transforms as method for object recognition and surface analyses. The method was explained in detail, and four differently shaped wavelets were tested with respect to their reconstruction error concerning Kinect recorded depth maps from different camera positions. The images' high frequency parts reconstructed from wavelet decompositions using the haar and the biorthogonal 1.5 wavelet were statistically analyzed with regard to the effects of image fore- or background and of cows' or persons' surface. Furthermore, binary classifiers based on the local high frequencies have been implemented to decide whether a pixel belongs to the image foreground and if it was located on a cow or a person. Classifiers distinguishing between image regions showed high (⩾0.8) values of Area Under reciever operation characteristic Curve (AUC). The classifications due to species showed maximal AUC values of 0.69.
Estimation of Leaf Area Index and its Sunlit Portion from DSCOVR EPIC data
NASA Astrophysics Data System (ADS)
Knyazikhin, Y.; Yang, B.; Mottus, M.; Rautiainen, M.; Stenberg, P.; Yan, L.; Chen, C.; Yan, K.; Park, T.; Myneni, R. B.; Song, W.
2016-12-01
The NASA's Earth Polychromatic Imaging Camera (EPIC) onboard NOAA's Deep Space Climate Observatory (DSCOVR) mission was launched on February 11, 2015 to the Sun-Earth Lagrangian L1 point where it began to collect radiance data of the entire sunlit Earth at 16 km resolution (in equatorial zone) every 65 to 110 min in June 2015. It provides imageries in near backscattering directions with the scattering angle between 168o and 176o at ten UV to Near-IR narrow spectral bands centered at 317.5 (band width 1.0) nm, 325.0 (1.0) nm, 340.0 (3.0) nm, 388.0 (3.0) nm, 433.0 (3.0) nm, 551.0 (3.0) nm, 680.0 (1.7) nm, 687.8 (0.6) nm, 764.0 (1.7) nm and 779.5 (2.0) nm. This poster presents the theoretical basis of the algorithm designed for the generation of leaf area index (LAI) and diurnal course of sunlit leaf area index (SLAI) from EPIC Bidirectional Reflectance Factor of vegetated land. LAI and SLAI are defined as the total hemi-surface and sunlit leaf semi-surface per unit ground area. Whereas LAI is a standard product of many satellite the SLAI is a new satellite-derived parameter. Sunlit and shaded leaves exhibit different radiative response to incident Photosynthetically Active Radiation (400-700 nm), which in turn triggers various physiological and physical processes required for the functioning of plants. Leaf area and its sunlit portion are key state parameters in most ecosystem productivity and carbon/nitrogen cycle. Status of the EPIC LAI/SLAI product and its validation strategy are also discussed in this poster.
Vegetation Earth System Data Record from DSCOVR EPIC Observations
NASA Astrophysics Data System (ADS)
Knyazikhin, Y.; Song, W.; Yang, B.; Mottus, M.; Rautiainen, M.; Stenberg, P.
2017-12-01
The NASA's Earth Polychromatic Imaging Camera (EPIC) onboard NOAA's Deep Space Climate Observatory (DSCOVR) mission was launched on February 11, 2015 to the Sun-Earth Lagrangian L1 point where it began to collect radiance data of the entire sunlit Earth every 65 to 110 min in June 2015. It provides imageries in near backscattering directions with the scattering angle between 168° and 176° at ten ultraviolet to near infrared (NIR) narrow spectral bands centered at 317.5 (band width 1.0) nm, 325.0 (2.0) nm, 340.0 (3.0) nm, 388.0 (3.0) nm, 433.0 (3.0) nm, 551.0 (3.0) nm, 680.0 (3.0) nm, 687.8 (0.8) nm, 764.0 (1.0) nm and 779.5 (2.0) nm. This poster presents current status of the Vegetation Earth System Data Record of global Leaf Area Index (LAI), solar zenith angle dependent Sunlit Leaf Area Index (SLAI), Fraction vegetation absorbed Photosynthetically Active Radiation (FPAR) and Normalized Difference Vegetation Index (NDVI) derived from the DSCOVR EPIC observations. Whereas LAI is a standard product of many satellite missions, the SLAI is a new satellite-derived parameter. Sunlit and shaded leaves exhibit different radiative response to incident Photosynthetically Active Radiation (400-700 nm), which in turn triggers various physiological and physical processes required for the functioning of plants. FPAR, LAI and SLAI are key state parameters in most ecosystem productivity models and carbon/nitrogen cycle. The product at 10 km sinusoidal grid and 65 to 110 min temporal frequency as well as accompanying Quality Assessment (QA) variables will be publicly available from the NASA Langley Atmospheric Science Data Center. The Algorithm Theoretical Basis (ATBD) and product validation strategy are also discussed in this poster.
NASA Technical Reports Server (NTRS)
Haney, Conor; Doeling, David; Minnis, Patrick; Bhatt, Rajendra; Scarino, Benjamin; Gopalan, Arun
2016-01-01
The Deep Space Climate Observatory (DSCOVR), launched on 11 February 2015, is a satellite positioned near the Lagrange-1 (L1) point, carrying several instruments that monitor space weather, and Earth-view sensors designed for climate studies. The Earth Polychromatic Imaging Camera (EPIC) onboard DSCOVR continuously views the sun-illuminated portion of the Earth with spectral coverage in the UV, VIS, and NIR bands. Although the EPIC instrument does not have any onboard calibration abilities, its constant view of the sunlit Earth disk provides a unique opportunity for simultaneous viewing with several other satellite instruments. This arrangement allows the EPIC sensor to be inter-calibrated using other well-characterized satellite instrument reference standards. Two such instruments with onboard calibration are MODIS, flown on Aqua and Terra, and VIIRS, onboard Suomi-NPP. The MODIS and VIIRS reference calibrations will be transferred to the EPIC instrument using both all-sky ocean and deep convective clouds (DCC) ray-matched EPIC and MODIS/VIIRS radiance pairs. An automated navigation correction routine was developed to more accurately align the EPIC and MODIS/VIIRS granules. The automated navigation correction routine dramatically reduced the uncertainty of the resulting calibration gain based on the EPIC and MODIS/VIIRS radiance pairs. The SCIAMACHY-based spectral band adjustment factors (SBAF) applied to the MODIS/ VIIRS radiances were found to successfully adjust the reference radiances to the spectral response of the specific EPIC channel for over-lapping spectral channels. The SBAF was also found to be effective for the non-overlapping EPIC channel 10. Lastly, both ray-matching techniques found no discernable trends for EPIC channel 7 over the year of publically released EPIC data.
A Relationship Between Visible and Near-IR Global Spectral Reflectance based on DSCOVR/EPIC
NASA Astrophysics Data System (ADS)
Wen, G.; Marshak, A.; Song, W.; Knyazikhin, Y.
2017-12-01
The launch of Deep Space Climate Observatory (DSCOVR) to the Earth's first Lagrange point (L1) allows us to see a new perspective of the Earth. The Earth Polychromatic Imaging Camera (EPIC) on the DSCOVR measures the back scattered radiation of the entire sunlit side of the Earth at 10 narrow band wavelengths ranging from ultraviolet to visible and near-infrared. We analyzed EPIC global averaged reflectance data. We found that the global averaged visible reflectance has a unique non-linear relationship with near infrared (NIR) reflectance. This non-linear relationship was not observed by any other satellite observations due to a limited spatial and temporal coverage of either low earth orbit (LEO) or geostationary satellite. The non-linear relationship is associated with the changing in the coverages of ocean, cloud, land, and vegetation as the Earth rotates. We used Terra and Aqua MODIS daily global radiance data to simulate EPIC observations. Since MODIS samples the Earth in a limited swath (2330km cross track) at a specific local time (10:30 am for Terra, 1:30 pm for Aqua) with approximately 15 orbits per day, the global average reflectance at a given time may be approximated by averaging the reflectance in the MODIS nearest-time swaths in the sunlit hemisphere. We found that MODIS simulated global visible and NIR spectral reflectance captured the major feature of the EPIC observed non-linear relationship with some errors. The difference between the two is mainly due to the sampling limitation of polar satellite. This suggests that that EPIC observations can be used to reconstruct MODIS global average reflectance time series for studying Earth system change in the past decade.
Geometric Calibration and Validation of Ultracam Aerial Sensors
NASA Astrophysics Data System (ADS)
Gruber, Michael; Schachinger, Bernhard; Muick, Marc; Neuner, Christian; Tschemmernegg, Helfried
2016-03-01
We present details of the calibration and validation procedure of UltraCam Aerial Camera systems. Results from the laboratory calibration and from validation flights are presented for both, the large format nadir cameras and the oblique cameras as well. Thus in this contribution we show results from the UltraCam Eagle and the UltraCam Falcon, both nadir mapping cameras, and the UltraCam Osprey, our oblique camera system. This sensor offers a mapping grade nadir component together with the four oblique camera heads. The geometric processing after the flight mission is being covered by the UltraMap software product. Thus we present details about the workflow as well. The first part consists of the initial post-processing which combines image information as well as camera parameters derived from the laboratory calibration. The second part, the traditional automated aerial triangulation (AAT) is the step from single images to blocks and enables an additional optimization process. We also present some special features of our software, which are designed to better support the operator to analyze large blocks of aerial images and to judge the quality of the photogrammetric set-up.
High-Resolution Mars Camera Test Image of Moon (Infrared)
NASA Technical Reports Server (NTRS)
2005-01-01
This crescent view of Earth's Moon in infrared wavelengths comes from a camera test by NASA's Mars Reconnaissance Orbiter spacecraft on its way to Mars. The mission's High Resolution Imaging Science Experiment camera took the image on Sept. 8, 2005, while at a distance of about 10 million kilometers (6 million miles) from the Moon. The dark feature on the right is Mare Crisium. From that distance, the Moon would appear as a star-like point of light to the unaided eye. The test verified the camera's focusing capability and provided an opportunity for calibration. The spacecraft's Context Camera and Optical Navigation Camera also performed as expected during the test. The Mars Reconnaissance Orbiter, launched on Aug. 12, 2005, is on course to reach Mars on March 10, 2006. After gradually adjusting the shape of its orbit for half a year, it will begin its primary science phase in November 2006. From the mission's planned science orbit about 300 kilometers (186 miles) above the surface of Mars, the high resolution camera will be able to discern features as small as one meter or yard across.NASA Astrophysics Data System (ADS)
Sato, M.; Takahashi, Y.; Kudo, T.; Yanagi, Y.; Kobayashi, N.; Yamada, T.; Project, N.; Stenbaek-Nielsen, H. C.; McHarg, M. G.; Haaland, R. K.; Kammae, T.; Cummer, S. A.; Yair, Y.; Lyons, W. A.; Ahrns, J.; Yukman, P.; Warner, T. A.; Sonnenfeld, R. G.; Li, J.; Lu, G.
2011-12-01
The time evolution and spatial distributions of transient luminous events (TLEs) are the key parameters to identify the relationship between TLEs and parent lightning discharges, roles of electromagnetic pulses (EMPs) emitted by horizontal and vertical lightning currents in the formation of TLEs, and the occurrence condition and mechanisms of TLEs. Since the time scales of TLEs is typically less than a few milliseconds, new imaging technique that enable us to capture images with a high time resolution of < 1ms is awaited. By courtesy of "Cosmic Shore" Project conducted by Japan Broadcasting Corporation (NHK), we have carried out optical observations using a high-speed Image-Intensified (II) CMOS camera and a high-vision three-CCD camera from a jet aircraft on November 28 and December 3, 2010 in winter Japan. Using the high-speed II-CMOS camera, it is possible to capture images with 8,300 frames per second (fps), which corresponds to the time resolution of 120 us. Using the high-vision three-CCD camera, it is possible to capture high quality, true color images of TLEs with a 1920x1080 pixel size and with a frame rate of 30 fps. During the two observation flights, we have succeeded to detect 28 sprite events, and 3 elves events totally. In response to this success, we have conducted a combined aircraft and ground-based campaign of TLE observations at the High Plains in summer US. We have installed same NHK high-speed and high-vision cameras in a jet aircraft. In the period from June 27 and July 10, 2011, we have operated aircraft observations in 8 nights, and we have succeeded to capture TLE images for over a hundred events by the high-vision camera and succeeded to acquire over 40 high-speed images simultaneously. At the presentation, we will introduce the outlines of the two aircraft campaigns, and will introduce the characteristics of the time evolution and spatial distributions of TLEs observed in winter Japan, and will show the initial results of high-speed image data analysis of TLEs in summer US.
Automatic Camera Calibration for Cultural Heritage Applications Using Unstructured Planar Objects
NASA Astrophysics Data System (ADS)
Adam, K.; Kalisperakis, I.; Grammatikopoulos, L.; Karras, G.; Petsa, E.
2013-07-01
As a rule, image-based documentation of cultural heritage relies today on ordinary digital cameras and commercial software. As such projects often involve researchers not familiar with photogrammetry, the question of camera calibration is important. Freely available open-source user-friendly software for automatic camera calibration, often based on simple 2D chess-board patterns, are an answer to the demand for simplicity and automation. However, such tools cannot respond to all requirements met in cultural heritage conservation regarding possible imaging distances and focal lengths. Here we investigate the practical possibility of camera calibration from unknown planar objects, i.e. any planar surface with adequate texture; we have focused on the example of urban walls covered with graffiti. Images are connected pair-wise with inter-image homographies, which are estimated automatically through a RANSAC-based approach after extracting and matching interest points with the SIFT operator. All valid points are identified on all images on which they appear. Provided that the image set includes a "fronto-parallel" view, inter-image homographies with this image are regarded as emulations of image-to-world homographies and allow computing initial estimates for the interior and exterior orientation elements. Following this initialization step, the estimates are introduced into a final self-calibrating bundle adjustment. Measures are taken to discard unsuitable images and verify object planarity. Results from practical experimentation indicate that this method may produce satisfactory results. The authors intend to incorporate the described approach into their freely available user-friendly software tool, which relies on chess-boards, to assist non-experts in their projects with image-based approaches.
High-Definition Television (HDTV) Images for Earth Observations and Earth Science Applications
NASA Technical Reports Server (NTRS)
Robinson, Julie A.; Holland, S. Douglas; Runco, Susan K.; Pitts, David E.; Whitehead, Victor S.; Andrefouet, Serge M.
2000-01-01
As part of Detailed Test Objective 700-17A, astronauts acquired Earth observation images from orbit using a high-definition television (HDTV) camcorder, Here we provide a summary of qualitative findings following completion of tests during missions STS (Space Transport System)-93 and STS-99. We compared HDTV imagery stills to images taken using payload bay video cameras, Hasselblad film camera, and electronic still camera. We also evaluated the potential for motion video observations of changes in sunlight and the use of multi-aspect viewing to image aerosols. Spatial resolution and color quality are far superior in HDTV images compared to National Television Systems Committee (NTSC) video images. Thus, HDTV provides the first viable option for video-based remote sensing observations of Earth from orbit. Although under ideal conditions, HDTV images have less spatial resolution than medium-format film cameras, such as the Hasselblad, under some conditions on orbit, the HDTV image acquired compared favorably with the Hasselblad. Of particular note was the quality of color reproduction in the HDTV images HDTV and electronic still camera (ESC) were not compared with matched fields of view, and so spatial resolution could not be compared for the two image types. However, the color reproduction of the HDTV stills was truer than colors in the ESC images. As HDTV becomes the operational video standard for Space Shuttle and Space Station, HDTV has great potential as a source of Earth-observation data. Planning for the conversion from NTSC to HDTV video standards should include planning for Earth data archiving and distribution.
NASA Astrophysics Data System (ADS)
Kirby, Richard; Whitaker, Ross
2016-09-01
In recent years, the use of multi-modal camera rigs consisting of an RGB sensor and an infrared (IR) sensor have become increasingly popular for use in surveillance and robotics applications. The advantages of using multi-modal camera rigs include improved foreground/background segmentation, wider range of lighting conditions under which the system works, and richer information (e.g. visible light and heat signature) for target identification. However, the traditional computer vision method of mapping pairs of images using pixel intensities or image features is often not possible with an RGB/IR image pair. We introduce a novel method to overcome the lack of common features in RGB/IR image pairs by using a variational methods optimization algorithm to map the optical flow fields computed from different wavelength images. This results in the alignment of the flow fields, which in turn produce correspondences similar to those found in a stereo RGB/RGB camera rig using pixel intensities or image features. In addition to aligning the different wavelength images, these correspondences are used to generate dense disparity and depth maps. We obtain accuracies similar to other multi-modal image alignment methodologies as long as the scene contains sufficient depth variations, although a direct comparison is not possible because of the lack of standard image sets from moving multi-modal camera rigs. We test our method on synthetic optical flow fields and on real image sequences that we created with a multi-modal binocular stereo RGB/IR camera rig. We determine our method's accuracy by comparing against a ground truth.
Clementine Images of Earth and Moon
NASA Technical Reports Server (NTRS)
1997-01-01
During its flight and lunar orbit, the Clementine spacecraft returned images of the planet Earth and the Moon. This collection of UVVIS camera Clementine images shows the Earth from the Moon and 3 images of the Earth.
The image on the left shows the Earth as seen across the lunar north pole; the large crater in the foreground is Plaskett. The Earth actually appeared about twice as far above the lunar horizon as shown. The top right image shows the Earth as viewed by the UVVIS camera while Clementine was in transit to the Moon; swirling white cloud patterns indicate storms. The two views of southeastern Africa were acquired by the UVVIS camera while Clementine was in low Earth orbit early in the missionImage Intensifier Modules For Use With Commercially Available Solid State Cameras
NASA Astrophysics Data System (ADS)
Murphy, Howard; Tyler, Al; Lake, Donald W.
1989-04-01
A modular approach to design has contributed greatly to the success of the family of machine vision video equipment produced by EG&G Reticon during the past several years. Internal modularity allows high-performance area (matrix) and line scan cameras to be assembled with two or three electronic subassemblies with very low labor costs, and permits camera control and interface circuitry to be realized by assemblages of various modules suiting the needs of specific applications. Product modularity benefits equipment users in several ways. Modular matrix and line scan cameras are available in identical enclosures (Fig. 1), which allows enclosure components to be purchased in volume for economies of scale and allows field replacement or exchange of cameras within a customer-designed system to be easily accomplished. The cameras are optically aligned (boresighted) at final test; modularity permits optical adjustments to be made with the same precise test equipment for all camera varieties. The modular cameras contain two, or sometimes three, hybrid microelectronic packages (Fig. 2). These rugged and reliable "submodules" perform all of the electronic operations internal to the camera except for the job of image acquisition performed by the monolithic image sensor. Heat produced by electrical power dissipation in the electronic modules is conducted through low resistance paths to the camera case by the metal plates, which results in a thermally efficient and environmentally tolerant camera with low manufacturing costs. A modular approach has also been followed in design of the camera control, video processor, and computer interface accessory called the Formatter (Fig. 3). This unit can be attached directly onto either a line scan or matrix modular camera to form a self-contained units, or connected via a cable to retain the advantages inherent to a small, light weight, and rugged image sensing component. Available modules permit the bus-structured Formatter to be configured as required by a specific camera application. Modular line and matrix scan cameras incorporating sensors with fiber optic faceplates (Fig 4) are also available. These units retain the advantages of interchangeability, simple construction, ruggedness, and optical precision offered by the more common lens input units. Fiber optic faceplate cameras are used for a wide variety of applications. A common usage involves mating of the Reticon-supplied camera to a customer-supplied intensifier tube for low light level and/or short exposure time situations.
Image intensification; Proceedings of the Meeting, Los Angeles, CA, Jan. 17, 18, 1989
NASA Astrophysics Data System (ADS)
Csorba, Illes P.
Various papers on image intensification are presented. Individual topics discussed include: status of high-speed optical detector technologies, super second generation imge intensifier, gated image intensifiers and applications, resistive-anode position-sensing photomultiplier tube operational modeling, undersea imaging and target detection with gated image intensifier tubes, image intensifier modules for use with commercially available solid state cameras, specifying the components of an intensified solid state television camera, superconducting IR focal plane arrays, one-inch TV camera tube with very high resolution capacity, CCD-Digicon detector system performance parameters, high-resolution X-ray imaging device, high-output technology microchannel plate, preconditioning of microchannel plate stacks, recent advances in small-pore microchannel plate technology, performance of long-life curved channel microchannel plates, low-noise microchannel plates, development of a quartz envelope heater.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tokurei, Shogo, E-mail: shogo.tokurei@gmail.com, E-mail: junjim@med.kyushu-u.ac.jp; Morishita, Junji, E-mail: shogo.tokurei@gmail.com, E-mail: junjim@med.kyushu-u.ac.jp
Purpose: The aim of this study is to propose a method for the quantitative evaluation of image quality of both monochrome and color liquid-crystal displays (LCDs) using a commercially available color digital camera. Methods: The intensities of the unprocessed red (R), green (G), and blue (B) signals of a camera vary depending on the spectral sensitivity of the image sensor used in the camera. For consistent evaluation of image quality for both monochrome and color LCDs, the unprocessed RGB signals of the camera were converted into gray scale signals that corresponded to the luminance of the LCD. Gray scale signalsmore » for the monochrome LCD were evaluated by using only the green channel signals of the camera. For the color LCD, the RGB signals of the camera were converted into gray scale signals by employing weighting factors (WFs) for each RGB channel. A line image displayed on the color LCD was simulated on the monochrome LCD by using a software application for subpixel driving in order to verify the WF-based conversion method. Furthermore, the results obtained by different types of commercially available color cameras and a photometric camera were compared to examine the consistency of the authors’ method. Finally, image quality for both the monochrome and color LCDs was assessed by measuring modulation transfer functions (MTFs) and Wiener spectra (WS). Results: The authors’ results demonstrated that the proposed method for calibrating the spectral sensitivity of the camera resulted in a consistent and reliable evaluation of the luminance of monochrome and color LCDs. The MTFs and WS showed different characteristics for the two LCD types owing to difference in the subpixel structure. The MTF in the vertical direction of the color LCD was superior to that of the monochrome LCD, although the WS in the vertical direction of the color LCD was inferior to that of the monochrome LCD as a result of luminance fluctuations in RGB subpixels. Conclusions: The authors’ method based on the use of a commercially available color camera is useful to evaluate and understand the display performances of both monochrome and color LCDs in radiology departments.« less
The Panoramic Camera (Pancam) Investigation on the NASA 2003 Mars Exploration Rover Mission
NASA Technical Reports Server (NTRS)
Bell, J. F., III; Squyres, S. W.; Herkenhoff, K. E.; Maki, J.; Schwochert, M.; Dingizian, A.; Brown, D.; Morris, R. V.; Arneson, H. M.; Johnson, M. J.
2003-01-01
The Panoramic Camera System (Pancam) is part of the Athena science payload to be launched to Mars in 2003 on NASA's twin Mars Exploration Rover (MER) missions. The Pancam imaging system on each rover consists of two major components: a pair of digital CCD cameras, and the Pancam Mast Assembly (PMA), which provides the azimuth and elevation actuation for the cameras as well as a 1.5 meter high vantage point from which to image. Pancam is a multispectral, stereoscopic, panoramic imaging system, with a field of regard provided by the PMA that extends across 360 of azimuth and from zenith to nadir, providing a complete view of the scene around the rover.
Extreme Faint Flux Imaging with an EMCCD
NASA Astrophysics Data System (ADS)
Daigle, Olivier; Carignan, Claude; Gach, Jean-Luc; Guillaume, Christian; Lessard, Simon; Fortin, Charles-Anthony; Blais-Ouellette, Sébastien
2009-08-01
An EMCCD camera, designed from the ground up for extreme faint flux imaging, is presented. CCCP, the CCD Controller for Counting Photons, has been integrated with a CCD97 EMCCD from e2v technologies into a scientific camera at the Laboratoire d’Astrophysique Expérimentale (LAE), Université de Montréal. This new camera achieves subelectron readout noise and very low clock-induced charge (CIC) levels, which are mandatory for extreme faint flux imaging. It has been characterized in laboratory and used on the Observatoire du Mont Mégantic 1.6 m telescope. The performance of the camera is discussed and experimental data with the first scientific data are presented.
The iQID Camera: An Ionizing-Radiation Quantum Imaging Detector
Miller, Brian W.; Gregory, Stephanie J.; Fuller, Erin S.; ...
2014-06-11
We have developed and tested a novel, ionizing-radiation Quantum Imaging Detector (iQID). This scintillation-based detector was originally developed as a high-resolution gamma-ray imager, called BazookaSPECT, for use in single-photon emission computed tomography (SPECT). Recently, we have investigated the detectors response and imaging potential with other forms of ionizing radiation including alpha, neutron, beta, and fission fragment particles. The detector’s response to a broad range of ionizing radiation has prompted its new title. The principle operation of the iQID camera involves coupling a scintillator to an image intensifier. The scintillation light generated particle interactions is optically amplified by the intensifier andmore » then re-imaged onto a CCD/CMOS camera sensor. The intensifier provides sufficient optical gain that practically any CCD/CMOS camera can be used to image ionizing radiation. Individual particles are identified and their spatial position (to sub-pixel accuracy) and energy are estimated on an event-by-event basis in real time using image analysis algorithms on high-performance graphics processing hardware. Distinguishing features of the iQID camera include portability, large active areas, high sensitivity, and high spatial resolution (tens of microns). Although modest, iQID has energy resolution that is sufficient to discrimate between particles. Additionally, spatial features of individual events can be used for particle discrimination. An important iQID imaging application that has recently been developed is single-particle, real-time digital autoradiography. In conclusion, we present the latest results and discuss potential applications.« less
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor)
1991-01-01
Methods for providing stereoscopic image presentation and stereoscopic configurations using stereoscopic viewing systems having converged or parallel cameras may be set up to reduce or eliminate erroneously perceived accelerations and decelerations by proper selection of parameters, such as an image magnification factor, q, and intercamera distance, 2w. For converged cameras, q is selected to be equal to Ve - qwl = 0, where V is the camera distance, e is half the interocular distance of an observer, w is half the intercamera distance, and l is the actual distance from the first nodal point of each camera to the convergence point, and for parallel cameras, q is selected to be equal to e/w. While converged cameras cannot be set up to provide fully undistorted three-dimensional views, they can be set up to provide a linear relationship between real and apparent depth and thus minimize erroneously perceived accelerations and decelerations for three sagittal planes, x = -w, x = 0, and x = +w which are indicated to the observer. Parallel cameras can be set up to provide fully undistorted three-dimensional views by controlling the location of the observer and by magnification and shifting of left and right images. In addition, the teachings of this disclosure can be used to provide methods of stereoscopic image presentation and stereoscopic camera configurations to produce a nonlinear relation between perceived and real depth, and erroneously produce or enhance perceived accelerations and decelerations in order to provide special effects for entertainment, training, or educational purposes.
Flight Calibration of the LROC Narrow Angle Camera
NASA Astrophysics Data System (ADS)
Humm, D. C.; Tschimmel, M.; Brylow, S. M.; Mahanti, P.; Tran, T. N.; Braden, S. E.; Wiseman, S.; Danton, J.; Eliason, E. M.; Robinson, M. S.
2016-04-01
Characterization and calibration are vital for instrument commanding and image interpretation in remote sensing. The Lunar Reconnaissance Orbiter Camera Narrow Angle Camera (LROC NAC) takes 500 Mpixel greyscale images of lunar scenes at 0.5 meters/pixel. It uses two nominally identical line scan cameras for a larger crosstrack field of view. Stray light, spatial crosstalk, and nonlinearity were characterized using flight images of the Earth and the lunar limb. These are important for imaging shadowed craters, studying ˜1 meter size objects, and photometry respectively. Background, nonlinearity, and flatfield corrections have been implemented in the calibration pipeline. An eight-column pattern in the background is corrected. The detector is linear for DN = 600--2000 but a signal-dependent additive correction is required and applied for DN<600. A predictive model of detector temperature and dark level was developed to command dark level offset. This avoids images with a cutoff at DN=0 and minimizes quantization error in companding. Absolute radiometric calibration is derived from comparison of NAC images with ground-based images taken with the Robotic Lunar Observatory (ROLO) at much lower spatial resolution but with the same photometric angles.
Skeletal Scintigraphy (Bone Scan)
... The special camera and imaging techniques used in nuclear medicine include the gamma camera and single-photon emission-computed tomography (SPECT). The gamma camera, also called a scintillation camera, detects radioactive energy that is emitted from the patient's body and ...
Development of high-speed video cameras
NASA Astrophysics Data System (ADS)
Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk
2001-04-01
Presented in this paper is an outline of the R and D activities on high-speed video cameras, which have been done in Kinki University since more than ten years ago, and are currently proceeded as an international cooperative project with University of Applied Sciences Osnabruck and other organizations. Extensive marketing researches have been done, (1) on user's requirements on high-speed multi-framing and video cameras by questionnaires and hearings, and (2) on current availability of the cameras of this sort by search of journals and websites. Both of them support necessity of development of a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996. The sensor is the same one as developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. Idea on a video camera of 1 million fps with an ISIS, In-situ Storage Image Sensor, was proposed in 1993 at first, and has been continuously improved. A test sensor was developed in early 2000, and successfully captured images at 62,500 fps. Currently, design of a prototype ISIS is going on, and, hopefully, will be fabricated in near future. Epoch-making cameras in history of development of high-speed video cameras by other persons are also briefly reviewed.