NASA Technical Reports Server (NTRS)
Voellmer, George M.; Allen, Christine A.; Amato, Michael J.; Babu, Sachidananda R.; Bartels, Arlin E.; Benford, Dominic J.; Derro, Rebecca J.; Dowell, C. Darren; Harper, D. Al; Jhabvala, Murzy D.
2002-01-01
The High resolution Airborne Wideband Camera (HAWC) and the Submillimeter High Angular Resolution Camera II (SHARC II) will use almost identical versions of an ion-implanted silicon bolometer array developed at the National Aeronautics and Space Administration's Goddard Space Flight Center (GSFC). The GSFC 'Pop-up' Detectors (PUDs) use a unique folding technique to enable a 12 x 32-element close-packed array of bolometers with a filling factor greater than 95 percent. A kinematic Kevlar™ suspension system isolates the 200 mK bolometers from the helium bath temperature, and GSFC-developed silicon bridge chips make electrical connection to the bolometers while maintaining thermal isolation. The JFET preamps operate at 120 K. Providing good thermal heat sinking for these, and keeping their conduction and radiation from reaching the nearby bolometers, is one of the principal design challenges encountered. Another interesting challenge is the preparation of the silicon bolometers. They are manufactured in 32-element, planar rows using Micro Electro Mechanical Systems (MEMS) semiconductor etching techniques, and then cut and folded onto a ceramic bar. Optical alignment using specialized jigs ensures their uniformity and correct placement. The rows are then stacked to create the 12 x 32-element array. Engineering results from the first light run of SHARC II at the Caltech Submillimeter Observatory (CSO) are presented.
Electronic Still Camera view of Aft end of Wide Field/Planetary Camera in HST
1993-12-06
S61-E-015 (6 Dec 1993) --- A close-up view of the aft part of the new Wide Field/Planetary Camera (WFPC-II) installed on the Hubble Space Telescope (HST). WFPC-II was photographed with the Electronic Still Camera (ESC) from inside Endeavour's cabin as astronauts F. Story Musgrave and Jeffrey A. Hoffman moved it from its stowage position onto the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
High-angular-resolution NIR astronomy with large arrays (SHARP I and SHARP II)
NASA Astrophysics Data System (ADS)
Hofmann, Reiner; Brandl, Bernhard; Eckart, Andreas; Eisenhauer, Frank; Tacconi-Garman, Lowell E.
1995-06-01
SHARP I and SHARP II are near-infrared cameras for high-angular-resolution imaging. Both cameras are built around a 256 × 256 pixel NICMOS 3 HgCdTe array from Rockwell which is sensitive in the 1-2.5 micrometer range. With a 0.05 arcsec/pixel scale, they can produce diffraction-limited K-band images at 4-m-class telescopes. For a 256 × 256 array, this pixel scale results in a field of view of 12.8 × 12.8 arcsec, which is well suited for the observation of galactic and extragalactic near-infrared sources. Photometric and low-resolution spectroscopic capabilities are added by photometric band filters (J, H, K), narrow-band filters (λ/Δλ ≈ 100) for selected spectral lines, and a CVF (λ/Δλ ≈ 70). A cold shutter permits short exposure times down to about 10 ms. The data acquisition electronics permanently accepts the maximum frame rate of 8 Hz, which is defined by the detector time constants (data rate 1 Mbyte/s). SHARP I was especially designed for speckle observations at ESO's 3.5 m New Technology Telescope and has been in operation since 1991. SHARP II has been used at ESO's 3.6 m telescope together with the adaptive optics system COME-ON+ since 1993. A new version of SHARP II is presently under test, which incorporates exchangeable camera optics for observations at scales of 0.035, 0.05, and 0.1 arcsec/pixel. The first scale extends diffraction-limited observations down to the J band, while the last provides a larger field of view. To demonstrate the power of the cameras, images of the galactic center obtained with SHARP I, and images of the R136 region in 30 Doradus observed with SHARP II, are presented.
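As a quick numerical check of the quoted figures, the short sketch below (plain Python, with values taken from the abstract) reproduces the 12.8-arcsec field of view and estimates the K-band diffraction limit of a 3.5 m telescope:

```python
def fov_arcsec(n_pixels: int, scale_arcsec_per_px: float) -> float:
    """Field of view along one axis, in arcseconds."""
    return n_pixels * scale_arcsec_per_px

def diffraction_fwhm_arcsec(wavelength_m: float, aperture_m: float) -> float:
    """Approximate diffraction limit (1.22 lambda/D) converted to arcseconds."""
    return 1.22 * wavelength_m / aperture_m * 206265.0

# 256 pixels at 0.05 arcsec/pixel -> 12.8 arcsec field of view, as quoted
print(fov_arcsec(256, 0.05))
# K band (2.2 um) on the 3.5 m NTT: ~0.16 arcsec, so a 0.05 arcsec/pixel
# scale samples the diffraction-limited core at roughly 3 pixels per FWHM
print(round(diffraction_fwhm_arcsec(2.2e-6, 3.5), 3))
```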
NASA Astrophysics Data System (ADS)
Lai, Xiaochun; Meng, Ling-Jian
2018-02-01
In this paper, we present simulation studies for the second-generation MRI-compatible SPECT system, MRC-SPECT-II, based on an inverted compound eye (ICE) gamma camera concept. The MRC-SPECT-II system consists of a total of 1536 independent micro-pinhole-camera-elements (MCEs) distributed in a ring with an inner diameter of 6 cm. This system provides a FOV of 1 cm diameter and a peak geometrical efficiency of approximately 1.3% (compared with the typical levels of 0.1%-0.01% found in modern pre-clinical SPECT instruments), while maintaining a sub-500 μm spatial resolution. Compared to the first-generation MRC-SPECT system (MRC-SPECT-I) (Cai 2014 Nucl. Instrum. Methods Phys. Res. A 734 147-51) developed in our lab, the MRC-SPECT-II system offers a similar resolution with dramatically improved sensitivity and a greatly reduced physical dimension. The latter should allow the system to be placed inside most clinical and pre-clinical MRI scanners for high-performance simultaneous MRI and SPECT imaging.
Design of the high resolution optical instrument for the Pleiades HR Earth observation satellites
NASA Astrophysics Data System (ADS)
Lamard, Jean-Luc; Gaudin-Delrieu, Catherine; Valentini, David; Renard, Christophe; Tournier, Thierry; Laherrere, Jean-Marc
2017-11-01
As part of its contribution to Earth observation from space, ALCATEL SPACE designed, built and tested the high-resolution cameras for the European intelligence satellites HELIOS I and II. Through these programmes, ALCATEL SPACE enjoys an international reputation, and its capability and experience in high-resolution instrumentation are recognised by most customers. Coming after the SPOT program, it was decided to go ahead with the PLEIADES HR program. PLEIADES HR is the optical high-resolution component of a larger optical and radar multi-sensor system, ORFEO, which is developed in cooperation between France and Italy for dual civilian and defense use. ALCATEL SPACE has been entrusted by CNES with the development of the high-resolution camera of the Earth observation satellites PLEIADES HR. The first optical satellite of the PLEIADES HR constellation will be launched in mid-2008, the second will follow in 2009. To minimize the development costs, a mini-satellite approach has been selected, leading to a compact concept for the camera design. The paper describes the design and performance budgets of this novel high-resolution and large-field-of-view optical instrument with emphasis on the technological features. This new generation of camera represents a breakthrough in comparison with the previous SPOT cameras owing to a significant step in on-ground resolution, which approaches the capabilities of aerial photography. Recent advances in detector technology, optical fabrication and electronics make it possible for the PLEIADES HR camera to achieve its image quality performance goals while staying within weight and size restrictions normally considered suitable only for much lower performance systems.
This camera design delivers superior performance using an innovative low power, low mass, scalable architecture, which provides a versatile approach for a variety of imaging requirements and allows for a wide number of possibilities of accommodation with a mini-satellite class platform.
NASA Technical Reports Server (NTRS)
Weigelt, G.; Albrecht, R.; Barbieri, C.; Blades, J. C.; Boksenberg, A.; Crane, P.; Deharveng, J. M.; Disney, M. J.; Jakobsen, P.; Kamperman, T. M.
1991-01-01
R136 is the luminous central object of the giant H II region 30 Doradus in the LMC. The first high-resolution observations of R136 with the Faint Object Camera on board the Hubble Space Telescope are reported. The physical nature of the brightest component R136a has been a matter of some controversy over the last few years. The UV images obtained show that R136a is a very compact star cluster consisting of more than eight stars within 0.7 arcsec diameter. From these high-resolution images a mass upper limit can be derived for the most luminous stars observed in R136.
Design and fabrication of two-dimensional semiconducting bolometer arrays for HAWC and SHARC-II
NASA Astrophysics Data System (ADS)
Voellmer, George M.; Allen, Christine A.; Amato, Michael J.; Babu, Sachidananda R.; Bartels, Arlin E.; Benford, Dominic J.; Derro, Rebecca J.; Dowell, C. D.; Harper, D. A.; Jhabvala, Murzy D.; Moseley, S. H.; Rennick, Timothy; Shirron, Peter J.; Smith, W. W.; Staguhn, Johannes G.
2003-02-01
The High resolution Airborne Wideband Camera (HAWC) and the Submillimeter High Angular Resolution Camera II (SHARC II) will use almost identical versions of an ion-implanted silicon bolometer array developed at the National Aeronautics and Space Administration's Goddard Space Flight Center (GSFC). The GSFC "Pop-Up" Detectors (PUDs) use a unique folding technique to enable a 12 × 32-element close-packed array of bolometers with a filling factor greater than 95 percent. A kinematic Kevlar suspension system isolates the 200 mK bolometers from the helium bath temperature, and GSFC-developed silicon bridge chips make electrical connection to the bolometers while maintaining thermal isolation. The JFET preamps operate at 120 K. Providing good thermal heat sinking for these, and keeping their conduction and radiation from reaching the nearby bolometers, is one of the principal design challenges encountered. Another interesting challenge is the preparation of the silicon bolometers. They are manufactured in 32-element, planar rows using Micro Electro Mechanical Systems (MEMS) semiconductor etching techniques, and then cut and folded onto a ceramic bar. Optical alignment using specialized jigs ensures their uniformity and correct placement. The rows are then stacked to create the 12 × 32-element array. Engineering results from the first light run of SHARC II at the Caltech Submillimeter Observatory (CSO) are presented.
Near infra-red astronomy with adaptive optics and laser guide stars at the Keck Observatory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Max, C.E.; Gavel, D.T.; Olivier, S.S.
1995-08-03
A laser guide star adaptive optics system is being built for the W. M. Keck Observatory's 10-meter Keck II telescope. Two new near infra-red instruments will be used with this system: a high-resolution camera (NIRC 2) and an echelle spectrometer (NIRSPEC). The authors describe the expected capabilities of these instruments for high-resolution astronomy, using adaptive optics with either a natural star or a sodium-layer laser guide star as a reference. They compare the expected performance of these planned Keck adaptive optics instruments with that predicted for the NICMOS near infra-red camera, which is scheduled to be installed on the Hubble Space Telescope in 1997.
Stephey, L; Wurden, G A; Schmitz, O; Frerichs, H; Effenberg, F; Biedermann, C; Harris, J; König, R; Kornejew, P; Krychowiak, M; Unterberg, E A
2016-11-01
A combined IR and visible camera system [G. A. Wurden et al., "A high resolution IR/visible imaging system for the W7-X limiter," Rev. Sci. Instrum. (these proceedings)] and a filterscope system [R. J. Colchin et al., Rev. Sci. Instrum. 74, 2068 (2003)] were implemented together to obtain spectroscopic data of limiter and first-wall recycling and impurity sources during Wendelstein 7-X startup plasmas. Both systems together provided excellent temporal and spatial spectroscopic resolution of limiter 3. Narrowband interference filters in front of the camera yielded C-III and Hα photon flux, and the filterscope system provided Hα, Hβ, He-I, He-II, C-II, and visible bremsstrahlung data. The filterscopes made additional measurements of several points on the W7-X vacuum vessel to yield wall recycling fluxes. The resulting photon flux from both the visible camera and filterscopes can then be compared to an EMC3-EIRENE synthetic diagnostic [H. Frerichs et al., "Synthetic plasma edge diagnostics for EMC3-EIRENE, highlighted for Wendelstein 7-X," Rev. Sci. Instrum. (these proceedings)] to infer both a limiter particle flux and a wall particle flux, both of which will ultimately be used to infer the complete particle balance and particle confinement time τ_P.
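Inferring a particle flux from a measured photon flux is conventionally done with the S/XB (ionizations-per-photon) method; the sketch below only illustrates that arithmetic, with hypothetical numbers, since the abstract quotes no values and the S/XB coefficient here is an assumption:

```python
import math

def particle_influx(photon_flux: float, s_xb: float) -> float:
    """S/XB conversion: Gamma = 4*pi * (S/XB) * measured photon flux.
    photon_flux in photons m^-2 s^-1 sr^-1; result in particles m^-2 s^-1."""
    return 4.0 * math.pi * s_xb * photon_flux

# Hypothetical numbers for illustration only: an H-alpha photon flux of
# 1e20 ph m^-2 s^-1 sr^-1 and an assumed S/XB of 15 ionizations per photon
gamma = particle_influx(1e20, 15.0)
print(f"{gamma:.3e}")  # ~1.885e+22 particles m^-2 s^-1
```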
NASA Astrophysics Data System (ADS)
Sato, M.; Takahashi, Y.; Kudo, T.; Yanagi, Y.; Kobayashi, N.; Yamada, T.; Project, N.; Stenbaek-Nielsen, H. C.; McHarg, M. G.; Haaland, R. K.; Kammae, T.; Cummer, S. A.; Yair, Y.; Lyons, W. A.; Ahrns, J.; Yukman, P.; Warner, T. A.; Sonnenfeld, R. G.; Li, J.; Lu, G.
2011-12-01
The time evolution and spatial distributions of transient luminous events (TLEs) are key parameters for identifying the relationship between TLEs and their parent lightning discharges, the roles of electromagnetic pulses (EMPs) emitted by horizontal and vertical lightning currents in the formation of TLEs, and the occurrence conditions and mechanisms of TLEs. Since the time scales of TLEs are typically less than a few milliseconds, a new imaging technique that enables us to capture images with a high time resolution of < 1 ms is needed. By courtesy of the "Cosmic Shore" project conducted by the Japan Broadcasting Corporation (NHK), we carried out optical observations using a high-speed image-intensified (II) CMOS camera and a high-vision three-CCD camera from a jet aircraft on November 28 and December 3, 2010, in winter Japan. The high-speed II-CMOS camera can capture images at 8,300 frames per second (fps), which corresponds to a time resolution of 120 μs. The high-vision three-CCD camera can capture high-quality, true-color images of TLEs with a 1920x1080 pixel size at a frame rate of 30 fps. During the two observation flights, we detected 28 sprite events and 3 elve events in total. Following this success, we conducted a combined aircraft and ground-based campaign of TLE observations in the High Plains of the summer US. We installed the same NHK high-speed and high-vision cameras in a jet aircraft. In the period from June 27 to July 10, 2011, we operated aircraft observations on 8 nights, capturing TLE images for over a hundred events with the high-vision camera and simultaneously acquiring over 40 high-speed image sequences.
At the presentation, we will outline the two aircraft campaigns, describe the characteristics of the time evolution and spatial distributions of TLEs observed in winter Japan, and show the initial results of the high-speed image data analysis of TLEs in the summer US.
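The quoted time resolutions follow directly from the frame rates (the 120 μs figure is simply the reciprocal of 8,300 fps); a one-line check:

```python
def frame_time_us(fps: float) -> float:
    """Time per frame in microseconds for a given frame rate."""
    return 1e6 / fps

print(round(frame_time_us(8300)))  # 120 us for the high-speed II-CMOS camera
print(round(frame_time_us(30)))    # 33333 us (~33 ms) for the 30 fps three-CCD camera
```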
Multiple Sensor Camera for Enhanced Video Capturing
NASA Astrophysics Data System (ADS)
Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko
Camera resolution has improved drastically in response to the current demand for high-quality digital images. For example, digital still cameras have several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution and a high frame rate are incompatible in ordinary cameras on the market. It is difficult to solve this problem with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos to show the utility of the camera.
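The paper's fusion algorithm is not specified in the abstract; the following is a minimal illustrative sketch, assuming registered views and an integer scale factor, of how a low-resolution, high-frame-rate frame could borrow high-frequency detail from the most recent high-resolution keyframe:

```python
import numpy as np

def fuse(keyframe: np.ndarray, low_res_frame: np.ndarray, scale: int) -> np.ndarray:
    """Upsample the current low-resolution, high-frame-rate frame and re-impose
    the high-frequency detail of the latest high-resolution keyframe.
    Assumes the two views are registered and dimensions divide evenly."""
    up = np.kron(low_res_frame, np.ones((scale, scale)))   # nearest-neighbour upsample
    h, w = keyframe.shape
    key_low = keyframe.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    key_up = np.kron(key_low, np.ones((scale, scale)))
    return up + (keyframe - key_up)                        # add keyframe high-pass

# Sanity check: if the low-res frame matches the downsampled keyframe,
# the fusion reproduces the keyframe exactly.
key = np.arange(16.0).reshape(4, 4)
low = key.reshape(2, 2, 2, 2).mean(axis=(1, 3))
assert np.allclose(fuse(key, low, 2), key)
```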
Development of a high spatial resolution neutron imaging system and performance evaluation
NASA Astrophysics Data System (ADS)
Cao, Lei
The combination of a scintillation screen and a charge-coupled device (CCD) camera is a digitized neutron imaging technology that has been widely employed for research and industrial applications. The maximum spatial resolution of scintillation screens is in the range of 100 μm and creates a bottleneck for further improvement of the overall system resolution. In this investigation, a neutron-sensitive micro-channel plate (MCP) detector with a pore pitch of 11.4 μm is combined with a cooled CCD camera with a pixel size of 6.8 μm to provide a high spatial resolution neutron imaging system. The optical path includes a high-reflection front-surface mirror to keep the camera out of the neutron beam and a macro lens to achieve the maximum attainable magnification. All components are assembled into a light-tight aluminum box with heavy radiation shielding to protect the camera as well as to provide a dark working environment. In addition, a remote-controlled stepper motor is integrated into the system to provide on-line focusing ability, and the best focus is found through an algorithm rather than perceptual observation. An evaluation routine not previously utilized in the field of neutron radiography is developed in this study; such routines were not previously required because of the lower resolution of other systems. Use of the angulation technique to obtain the presampled MTF addresses the problem of aliasing associated with digital sampling. The determined MTF agrees well with visual inspection of images of a test target. Other detector/camera combinations may be integrated into the system, and their performances are also compared. The best resolution achieved by the system at the TRIGA Mark II reactor at the University of Texas at Austin is 16.2 lp/mm, which is equivalent to a minimum resolvable spacing of 30 μm.
The noise performance of the device is evaluated in terms of the noise power spectrum (NPS), and the detective quantum efficiency (DQE) is calculated from the MTF and NPS determined above.
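Two of the quoted quantities can be checked or illustrated directly: the minimum resolvable spacing implied by 16.2 lp/mm, and the combination of MTF and NPS into a DQE estimate. Note the `dqe` function below is the generic textbook formula, not necessarily the paper's exact procedure:

```python
def min_resolvable_spacing_um(lp_per_mm: float) -> float:
    """A line pair spans 1/f mm, so one line (the minimum resolvable
    spacing) is 1/(2f) mm; returned in micrometers."""
    return 1000.0 / (2.0 * lp_per_mm)

def dqe(mtf: float, nps: float, signal: float, fluence: float) -> float:
    """Generic frequency-dependent estimate: DQE(f) = S^2 MTF(f)^2 / (q NPS(f))."""
    return signal ** 2 * mtf ** 2 / (fluence * nps)

print(round(min_resolvable_spacing_um(16.2), 1))  # 30.9 um, matching the quoted ~30 um
```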
NASA Astrophysics Data System (ADS)
Kluttz, K. A.; Gray, R. O.
2003-12-01
We have designed and constructed an economical medium-resolution spectrograph to be used on the 32-inch telescope of Appalachian State University's Dark Sky Observatory (DSO). The primary function of this instrument will be to study shell and emission-line stars. However, we will also use this instrument for chemical abundance studies and radial velocities. The basic design is that of an Ebert spectrograph with a single 6-inch mirror acting as both the collimator and camera. The primary dispersion is accomplished by a reflection grating, and order separation is accomplished by a grism. The spectrograph has been designed so that three wavelength regions are simultaneously imaged on the CCD camera. When the Hα line is centered in the third order, Hβ and lines of Fe II multiplet 42 -- often enhanced in shell and emission-line stars -- appear in the fourth order, and the fifth order contains both the Ca II K & H lines. To facilitate abundance measurements, a telluric-free region near 6400 Å is available in the third order by tilting the main diffraction grating. Preliminary tests have shown that the resolution of the new spectrograph is 0.42 Å in the third order (R ≈ 15,000). This relatively high resolution will allow studies to be conducted at DSO which have not previously been possible with the instrumentation currently in use. Several optical components for this spectrograph were purchased with grants from the Fund for Astrophysical Research and the University Research Council.
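The quoted resolving power follows directly from R = λ/Δλ; a one-line check with the abstract's numbers:

```python
def resolving_power(wavelength: float, delta_lambda: float) -> float:
    """Spectral resolving power R = lambda / delta-lambda (same units)."""
    return wavelength / delta_lambda

# H-alpha at 6563 A with the measured 0.42 A resolution element
print(round(resolving_power(6563.0, 0.42)))  # 15626, i.e. R ~ 15,000 as quoted
```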
Development of infrared scene projectors for testing fire-fighter cameras
NASA Astrophysics Data System (ADS)
Neira, Jorge E.; Rice, Joseph P.; Amon, Francine K.
2008-04-01
We have developed two types of infrared scene projectors for hardware-in-the-loop testing of thermal imaging cameras such as those used by fire-fighters. In one, direct projection, images are projected directly into the camera. In the other, indirect projection, images are projected onto a diffuse screen, which is then viewed by the camera. Both projectors use a digital micromirror array as the spatial light modulator, in the form of a Micromirror Array Projection System (MAPS) engine having a resolution of 800 x 600 with aluminum-coated mirrors on a 17 micrometer pitch and a ZnSe protective window. Fire-fighter cameras are often based upon uncooled microbolometer arrays and typically have resolutions of 320 x 240 or lower. For direct projection, we use an argon-arc source, which provides spectral radiance equivalent to a 10,000 Kelvin blackbody over the 7 micrometer to 14 micrometer wavelength range, to illuminate the micromirror array. For indirect projection, an expanded 4 watt CO2 laser beam at a wavelength of 10.6 micrometers illuminates the micromirror array, and the scene formed by the first-order diffracted light from the array is projected onto a diffuse aluminum screen. In both projectors, a well-calibrated reference camera is used to provide non-uniformity correction and brightness calibration of the projected scenes, and the fire-fighter cameras alternately view the same scenes. In this paper, we compare the two methods for this application and report our quantitative results. Indirect projection has the advantage of more easily filling the wide field of view of the fire-fighter cameras, which is typically about 50 degrees. Direct projection more efficiently utilizes the available light, which will become important in emerging multispectral and hyperspectral applications.
Yi, Shengzhen; Zhang, Zhe; Huang, Qiushi; Zhang, Zhong; Mu, Baozhong; Wang, Zhanshan; Fang, Zhiheng; Wang, Wei; Fu, Sizu
2016-10-01
Because grazing-incidence Kirkpatrick-Baez (KB) microscopes have better resolution and collection efficiency than pinhole cameras, they have been widely used for x-ray imaging diagnostics of laser inertial confinement fusion. The assembly and adjustment of a multichannel KB microscope must meet stringent requirements for image resolution and reproducible alignment. In the present study, an eight-channel KB microscope was developed for diagnostics by imaging self-emission x-rays with a framing camera at the Shenguang-II Update (SGII-Update) laser facility. A consistent object field of view is ensured in the eight channels using an assembly method based on conical reference cones, which also allow the intervals between the eight images to be tuned to couple with the microstrips of the x-ray framing camera. The eight-channel KB microscope was adjusted via real-time x-ray imaging experiments in the laboratory. This paper describes the details of the eight-channel KB microscope, its optical and multilayer design, the assembly and alignment methods, and results of imaging in the laboratory and at the SGII-Update.
A digital gigapixel large-format tile-scan camera.
Ben-Ezra, M
2011-01-01
Although the resolution of single-lens reflex (SLR) and medium-format digital cameras has increased in recent years, applications in cultural-heritage preservation and computational photography require even higher resolutions. Addressing this issue, a large-format camera's large image plane can achieve very high resolution without compromising pixel size and thus can provide high-quality, high-resolution images. This digital large-format tile-scan camera can acquire high-quality, high-resolution images of static scenes. It employs unique calibration techniques and a simple algorithm for focal-stack processing of very large images with significant magnification variations. The camera automatically collects overlapping focal stacks and processes them into a high-resolution, extended-depth-of-field image.
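The focal-stack algorithm itself is not given in the abstract; below is a minimal, hypothetical sharpness-based merge in NumPy to illustrate the idea (a real pipeline must also handle the magnification variations the text mentions):

```python
import numpy as np

def merge_focal_stack(stack: np.ndarray) -> np.ndarray:
    """Minimal focal-stack sketch: for each pixel, keep the value from the
    slice with the highest local gradient energy (a simple sharpness measure).
    `stack` has shape (n_slices, h, w)."""
    stack = np.asarray(stack, dtype=float)
    gy, gx = np.gradient(stack, axis=(1, 2))      # per-slice spatial gradients
    sharpness = gx ** 2 + gy ** 2
    best = np.argmax(sharpness, axis=0)           # (h, w) winning slice indices
    return np.take_along_axis(stack, best[None], axis=0)[0]

# A ramp slice has nonzero gradient everywhere, a flat slice has none,
# so the merge should pick the ramp at every pixel.
flat = np.full((4, 4), 5.0)
ramp = np.arange(16.0).reshape(4, 4)
merged = merge_focal_stack(np.stack([flat, ramp]))
assert np.array_equal(merged, ramp)
```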
A Novel Multi-Digital Camera System Based on Tilt-Shift Photography Technology
Sun, Tao; Fang, Jun-yong; Zhao, Dong; Liu, Xue; Tong, Qing-xi
2015-01-01
Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing requirement for high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC, camera model: Nikon D90), is developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is essential for the TSCs of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for the TSC, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of the TSC, for example its edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results illustrate that the geo-positioning accuracy of the prototype system reaches 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, a comparison between the traditional MDCS (MADC II) and the proposed MDCS demonstrates that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also expect that using higher-accuracy TSCs in the new MDCS would further improve the accuracy of downstream photogrammetric products. PMID:25835187
The Visible Imaging System (VIS) for the Polar Spacecraft
NASA Technical Reports Server (NTRS)
Frank, L. A.; Sigwarth, J. B.; Craven, J. D.; Cravens, J. P.; Dolan, J. S.; Dvorsky, M. R.; Hardebeck, P. K.; Harvey, J. D.; Muller, D. W.
1995-01-01
The Visible Imaging System (VIS) is a set of three low-light-level cameras to be flown on the POLAR spacecraft of the Global Geospace Science (GGS) program, which is an element of the International Solar-Terrestrial Physics (ISTP) campaign. Two of these cameras share primary and some secondary optics and are designed to provide images of the nighttime auroral oval at visible wavelengths. A third camera is used to monitor the directions of the fields-of-view of these sensitive auroral cameras with respect to sunlit Earth. The auroral emissions of interest include those from N2+ at 391.4 nm, O I at 557.7 and 630.0 nm, H I at 656.3 nm, and O II at 732.0 nm. The two auroral cameras have different spatial resolutions, about 10 and 20 km from a spacecraft altitude of 8 Re. The time to acquire and telemeter a 256 x 256-pixel image is about 12 s. The primary scientific objectives of this imaging instrumentation, together with the in-situ observations from the ensemble of ISTP spacecraft, are (1) quantitative assessment of the dissipation of magnetospheric energy into the auroral ionosphere, (2) an instantaneous reference system for the in-situ measurements, (3) development of a substantial model for energy flow within the magnetosphere, (4) investigation of the topology of the magnetosphere, and (5) delineation of the responses of the magnetosphere to substorms and variable solar wind conditions.
Effect of camera resolution and bandwidth on facial affect recognition.
Cruz, Mario; Cruz, Robyn Flaum; Krupinski, Elizabeth A; Lopez, Ana Maria; McNeeley, Richard M; Weinstein, Ronald S
2004-01-01
This preliminary study explored the effect of camera resolution and bandwidth on facial affect recognition, an important process and clinical variable in mental health service delivery. Sixty medical students and mental health-care professionals were recruited and randomized to four different combinations of commonly used teleconferencing camera resolutions and bandwidths: (1) one-chip charge-coupled device (CCD) camera, commonly used for VHS-grade taping and in teleconferencing systems costing less than $4,000, with a resolution of 280 lines, at a bandwidth of 128 kilobits per second (kbps); (2) VHS and 768 kbps; (3) three-chip CCD camera, commonly used for Betacam (Beta) grade taping and in teleconferencing systems costing more than $4,000, with a resolution of 480 lines, at 128 kbps; and (4) Betacam and 768 kbps. The subjects were asked to identify four facial affects dynamically presented on videotape by an actor and an actress via a video monitor at 30 frames per second. Two-way analysis of variance (ANOVA) revealed a significant interaction effect for camera resolution and bandwidth (p = 0.02) and a significant main effect for camera resolution (p = 0.006), but no main effect for bandwidth was detected. Post hoc testing of interaction means, using the Tukey Honestly Significant Difference (HSD) test and the critical difference (CD) at the 0.05 alpha level (CD = 1.71), revealed that subjects in the VHS/768 kbps (M = 7.133) and VHS/128 kbps (M = 6.533) conditions were significantly better at recognizing the displayed facial affects than those in the Betacam/768 kbps (M = 4.733) or Betacam/128 kbps (M = 6.333) conditions. Camera resolution and bandwidth combinations differ in their capacity to influence facial affect recognition. For service providers, this study's results support the use of VHS cameras with either 768 kbps or 128 kbps bandwidths for facial affect recognition compared to Betacam cameras.
The authors argue that the results of this study are a consequence of the VHS camera resolution/bandwidth combinations' ability to improve signal detection (i.e., facial affect recognition) by subjects in comparison to Betacam camera resolution/bandwidth combinations.
Høye, Gudrun; Fridman, Andrei
2013-05-06
Current high-resolution push-broom hyperspectral cameras introduce keystone errors into the captured data. Efforts to correct these errors in hardware severely limit the optical design, in particular with respect to light throughput and spatial resolution, while the residual keystone often remains large. The mixel camera solves this problem by combining a hardware component, an array of light-mixing chambers, with a mathematical method that restores the hyperspectral data to its keystone-free form, based on the data recorded onto the sensor with large keystone. Virtual Camera software, developed specifically for this purpose, was used to compare the performance of the mixel camera to traditional cameras that correct keystone in hardware. The mixel camera can collect at least four times more light than most current high-resolution hyperspectral cameras, and simulations have shown that the mixel camera will be photon-noise limited, even in bright light, with a significantly improved signal-to-noise ratio compared to traditional cameras. A prototype has been built and is being tested.
DOT National Transportation Integrated Search
2015-08-01
Cameras are used prolifically to monitor transportation incidents, infrastructure, and congestion. Traditional camera systems often require human monitoring and only offer low-resolution video. Researchers for the Exploratory Advanced Research (EAR) ...
Advanced Camera for Surveys Instrument Handbook for Cycle 25 v. 16.0
NASA Astrophysics Data System (ADS)
Avila, R. J.
2017-01-01
The Advanced Camera for Surveys (ACS), a third-generation instrument, was installed in the Hubble Space Telescope during Servicing Mission 3B, on March 7, 2002. Its primary purpose was to increase HST imaging discovery efficiency by about a factor of 10, with a combination of detector area and quantum efficiency that surpasses previous instruments. ACS has three independent cameras that have provided wide-field, high resolution, and ultraviolet imaging capabilities respectively, using a broad assortment of filters designed to address a large range of scientific goals. In addition, coronagraphic, polarimetric, and grism capabilities have made the ACS a versatile and powerful instrument. The ACS Instrument Handbook, which is maintained by the ACS Team at STScI, describes the instrument properties, performance, operations, and calibration. It is the basic technical reference manual for the instrument, and should be used with other documents (listed in Table 1.1) for writing Phase I proposals, detailed Phase II programs, and for data analysis. (See Figure 1.1). In May 2009, Servicing Mission 4 (SM4) successfully restored the ACS Wide Field Camera (WFC) to regular service after its failure in January 2007. Unfortunately, the ACS High Resolution Camera (HRC) was not restored to operation during SM4, so it cannot be proposed for new observations. Nevertheless, this handbook retains description of the HRC to support analysis of archived observations. The ACS Solar Blind Channel (SBC) was unaffected by the January 2007 failure of WFC and HRC. The SBC has remained in steady operation, and was not serviced during SM4. It remains available for new observations.
A novel super-resolution camera model
NASA Astrophysics Data System (ADS)
Shao, Xiaopeng; Wang, Yi; Xu, Jie; Wang, Lin; Liu, Fei; Luo, Qiuhua; Chen, Xiaodong; Bi, Xiangli
2015-05-01
Aiming to realize super-resolution (SR) for single-image and video reconstruction, a super-resolution camera model is proposed to address the comparatively low resolution of images obtained by traditional cameras. To achieve this, a driving device such as a piezoelectric ceramic actuator is placed in the camera. By controlling the driving device, a set of consecutive low-resolution (LR) images can be obtained and stored in real time, capturing the randomness of the displacements. The low-resolution image sequences contain different redundant information and particular prior information, making it possible to restore a super-resolution image faithfully and effectively. The sampling method is used to derive the reconstruction principle of super-resolution, which indicates the possible degree of resolution improvement in theory. A learning-based super-resolution algorithm is used to reconstruct single images, and a variational Bayesian algorithm is simulated to reconstruct the low-resolution images with random displacements, modeling the unknown high-resolution image, motion parameters, and unknown model parameters in one hierarchical Bayesian framework. Using sub-pixel registration, a super-resolution image of the scene can be reconstructed. Reconstruction results from 16 images show that this camera model can increase image resolution by a factor of 2, producing higher-resolution images at currently available hardware levels.
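The reconstruction principle described above can be illustrated with a deliberately simple shift-and-add scheme. The abstract's variational Bayesian method is far more sophisticated; the following is only a minimal sketch of how known sub-pixel displacements let several LR frames populate a high-resolution grid (all names and values are illustrative, not from the paper):

```python
import numpy as np

def shift_and_add(frames, shifts, scale):
    """Naive super-resolution by sub-pixel shift-and-add.

    frames : list of LR images, each of shape (H, W)
    shifts : list of (dy, dx) sub-pixel displacements, in LR pixels
    scale  : integer upsampling factor

    Each LR pixel is accumulated into the HR grid cell nearest to its
    shifted position; cells are then normalised by their hit counts.
    """
    H, W = frames[0].shape
    acc = np.zeros((H * scale, W * scale))
    cnt = np.zeros_like(acc)
    ys, xs = np.mgrid[0:H, 0:W]
    for img, (dy, dx) in zip(frames, shifts):
        hy = np.clip(np.round((ys + dy) * scale).astype(int), 0, H * scale - 1)
        hx = np.clip(np.round((xs + dx) * scale).astype(int), 0, W * scale - 1)
        np.add.at(acc, (hy, hx), img)   # accumulate samples
        np.add.at(cnt, (hy, hx), 1)     # count hits per HR cell
    cnt[cnt == 0] = 1                   # leave unobserved cells at zero
    return acc / cnt

# Four LR frames displaced by half-pixel steps yield a 2x HR estimate.
rng = np.random.default_rng(0)
lr = rng.random((8, 8))
frames = [lr, lr, lr, lr]
shifts = [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)]
hr = shift_and_add(frames, shifts, 2)
print(hr.shape)  # (16, 16)
```

With these four half-pixel shifts every HR cell receives exactly one sample, which is the idealized case the piezo-driven displacements approximate.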
Orbital-science investigation: Part C: photogrammetry of Apollo 15 photography
Wu, Sherman S.C.; Schafer, Francis J.; Jordan, Raymond; Nakata, Gary M.; Derick, James L.
1972-01-01
Mapping of large areas of the Moon by photogrammetric methods was not seriously considered until the Apollo 15 mission. In this mission, a mapping camera system and a 61-cm optical-bar high-resolution panoramic camera, as well as a laser altimeter, were used. The mapping camera system comprises a 7.6-cm metric terrain camera and a 7.6-cm stellar camera mounted in a fixed angular relationship (an angle of 96° between the two camera axes). The metric camera has a glass focal-plane plate with reseau grids. The ground-resolution capability from an altitude of 110 km is approximately 20 m. Because of the auxiliary stellar camera and the laser altimeter, the resulting metric photography can be used not only for medium- and small-scale cartographic or topographic maps, but it also can provide a basis for establishing a lunar geodetic network. The optical-bar panoramic camera has a 135- to 180-line resolution, which is approximately 1 to 2 m of ground resolution from an altitude of 110 km. Very large scale specialized topographic maps for supporting geologic studies of lunar-surface features can be produced from the stereoscopic coverage provided by this camera.
Feng, Wei; Zhang, Fumin; Qu, Xinghua; Zheng, Shiwei
2016-01-01
High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge-coupled device) or CMOS (complementary metal-oxide-semiconductor) cameras cannot effectively capture rapid phenomena at high speed and high resolution. In this paper, we incorporate the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype with a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. In theory, this approach can increase the temporal resolution several, or even hundreds, of times without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution by using a 25 fps camera. PMID:26959023
UAV-based NDVI calculation over grassland: An alternative approach
NASA Astrophysics Data System (ADS)
Mejia-Aguilar, Abraham; Tomelleri, Enrico; Asam, Sarah; Zebisch, Marc
2016-04-01
The Normalised Difference Vegetation Index (NDVI) is one of the most widely used indicators for monitoring and assessing vegetation in remote sensing. The index relies on the reflectance difference between near-infrared (NIR) and red light and is thus able to track variations in structural, phenological, and biophysical parameters for seasonal and long-term monitoring. Conventionally, NDVI is derived from space-borne spectroradiometers, such as MODIS, with moderate ground resolution of up to 250 m. In recent years, a new generation of miniaturized radiometers and integrated hyperspectral sensors with high resolution has become available. Such small, light instruments are particularly well suited for mounting on unmanned aerial vehicles (UAVs) used for monitoring services, reaching ground sampling resolution on the order of centimetres. Nevertheless, such miniaturized radiometers and hyperspectral sensors are still very expensive and require high upfront capital costs. We therefore propose an alternative, substantially cheaper method to calculate NDVI using a camera constellation consisting of two conventional consumer-grade cameras: (i) a modified Ricoh GR camera that acquires the NIR spectrum, its internal infrared filter removed and a mounted optical filter additionally blocking all wavelengths below 700 nm; and (ii) a Ricoh GR in RGB configuration using two optical filters to block wavelengths below 600 nm as well as NIR and ultraviolet (UV) light. To assess the merit of the proposed method, we carry out two comparisons: first, reflectance maps generated by the consumer-grade camera constellation are compared to reflectance maps produced with a hyperspectral camera (Rikola). All imaging data and reflectance maps are processed using the PIX4D software.
In the second test, the NDVI at specific points of interest (POIs) generated by the consumer-grade camera constellation is compared to NDVI values obtained from ground spectral measurements using a portable spectroradiometer (Spectravista SVC HR-1024i). All data were collected on a dry alpine mountain grassland site in the Matsch valley, Italy, during the 2015 vegetation period. Data acquisition for the first comparison followed a pre-programmed flight plan in which the hyperspectral camera and the alternative dual-camera constellation were mounted separately on an octocopter UAV during two consecutive flight campaigns. Ground spectral measurements were collected at the same site on the same dates (three in total) as the flight campaigns. The proposed technique achieves promising results and thus constitutes a cheap and simple way of collecting spatially explicit information on vegetated areas, even in challenging terrain.
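The NDVI itself is a standard per-pixel ratio of NIR and red reflectance, which is what the dual-camera constellation supplies band by band. A minimal NumPy sketch (function name and sample reflectance values are illustrative, not from the study):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalised Difference Vegetation Index, computed per pixel.

    nir, red : reflectance arrays (same shape) from the NIR and red bands.
    eps guards against division by zero over very dark pixels.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Healthy vegetation reflects strongly in NIR and absorbs red, so its
# NDVI approaches +1; bare soil and rock sit near 0.
print(round(float(ndvi(0.5, 0.08)), 3))  # 0.724
```

In the setup above, the NIR band would come from the modified Ricoh GR and the red band from the RGB-configured one, after the reflectance maps are co-registered.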
Cao, Weidong; Bean, Brian; Corey, Scott; Coursey, Johnathan S; Hasson, Kenton C; Inoue, Hiroshi; Isano, Taisuke; Kanderian, Sami; Lane, Ben; Liang, Hongye; Murphy, Brian; Owen, Greg; Shinoda, Nobuhiko; Zeng, Shulin; Knight, Ivor T
2016-06-01
We report the development of an automated genetic analyzer for human sample testing based on microfluidic rapid polymerase chain reaction (PCR) with high-resolution melting analysis (HRMA). The integrated DNA microfluidic cartridge was used on a platform designed with a robotic pipettor system that works by sequentially picking up different test solutions from a 384-well plate, mixing them in the tips, and delivering mixed fluids to the DNA cartridge. A novel image feedback flow control system based on a Canon 5D Mark II digital camera was developed for controlling fluid movement through a complex microfluidic branching network without the use of valves. The same camera was used for measuring the high-resolution melt curve of DNA amplicons that were generated in the microfluidic chip. Owing to fast heating and cooling as well as sensitive temperature measurement in the microfluidic channels, the time frame for PCR and HRMA was dramatically reduced from hours to minutes. Preliminary testing results demonstrated that rapid serial PCR and HRMA are possible while still achieving high data quality that is suitable for human sample testing. © 2015 Society for Laboratory Automation and Screening.
Collection and Analysis of Crowd Data with Aerial, Rooftop, and Ground Views
2014-11-10
collected these datasets using different aircraft. Erista 8 HL OctaCopter is a heavy-lift aerial platform capable of using high-resolution cinema ...is another high-resolution camera that is cinema grade and high quality, with the capability of capturing videos with 4K resolution at 30 frames per... 292.58 Imaging Systems and Accessories Blackmagic Production Camera 4 Crowd Counting using 4K Cameras High-resolution cinema-grade digital video
Stereo depth distortions in teleoperation
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Vonsydow, Marika
1988-01-01
In teleoperation, a typical application of stereo vision is to view a work space located short distances (1 to 3 m) in front of the cameras. The work presented here treats converged camera placement and studies the effects of intercamera distance, camera-to-object viewing distance, and focal length of the camera lenses on both stereo depth resolution and stereo depth distortion. While viewing the fronto-parallel plane 1.4 m in front of the cameras, depth errors are measured on the order of 2 cm. A geometric analysis was made of the distortion of the fronto-parallel plane of divergence for stereo TV viewing. The results of the analysis were then verified experimentally. The objective was to determine the optimal camera configuration which gave high stereo depth resolution while minimizing stereo depth distortion. It is found that for converged cameras at a fixed camera-to-object viewing distance, larger intercamera distances allow higher depth resolutions, but cause greater depth distortions. Thus with larger intercamera distances, operators will make greater depth errors (because of the greater distortions), but will be more certain that they are not errors (because of the higher resolution).
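The finding that a larger intercamera distance improves depth resolution follows from first-order stereo geometry: a one-pixel disparity change corresponds to a smaller depth step when the baseline is wider. A hedged sketch using the standard small-angle approximation (all numeric values are illustrative, not the paper's):

```python
def depth_resolution(z, baseline, focal_len, pixel_size):
    """Smallest resolvable depth step for a stereo pair (first-order).

    z          : camera-to-object distance (m)
    baseline   : intercamera distance (m)
    focal_len  : lens focal length (m)
    pixel_size : disparity resolution at the sensor (m), e.g. one pixel pitch

    A one-pixel disparity change corresponds to a depth change of roughly
    z**2 * pixel_size / (baseline * focal_len).
    """
    return z ** 2 * pixel_size / (baseline * focal_len)

# Illustrative numbers: 1.4 m viewing distance (as in the experiments),
# 25 mm lenses, 10 um pixels; compare a 10 cm and a 30 cm baseline.
narrow = depth_resolution(1.4, 0.10, 0.025, 10e-6)
wide = depth_resolution(1.4, 0.30, 0.025, 10e-6)
print(narrow, wide)  # wider baseline -> smaller (better) depth step
```

The quadratic dependence on z also explains why depth errors grow quickly beyond the 1 to 3 m working range.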
Spatial resolution requirements for urban land cover mapping from space
NASA Technical Reports Server (NTRS)
Todd, William J.; Wrigley, Robert C.
1986-01-01
Very low resolution (VLR) satellite data (Advanced Very High Resolution Radiometer, DMSP Operational Linescan System), low resolution (LR) data (Landsat MSS), medium resolution (MR) data (Landsat TM), and high resolution (HR) satellite data (Spot HRV, Large Format Camera) were evaluated and compared for interpretability at differing spatial resolutions. VLR data (500 m - 1.0 km) is useful for Level I (urban/rural distinction) mapping at 1:1,000,000 scale. Feature tone/color is utilized to distinguish generalized urban land cover using LR data (80 m) for 1:250,000 scale mapping. Advancing to MR data (30 m) and 1:100,000 scale mapping, confidence in land cover mapping is greatly increased, owing to the element of texture/pattern which is now evident in the imagery. Shape and shadow contribute to the detailed Level II/III urban land use mapping that is possible if the interpreter can use HR (10-15 m) satellite data; mapping scales can be 1:25,000 - 1:50,000.
Blur spot limitations in distal endoscope sensors
NASA Astrophysics Data System (ADS)
Yaron, Avi; Shechterman, Mark; Horesh, Nadav
2006-02-01
In years past, the picture quality of electronic video systems was limited by the image sensor. In the present, the resolution of miniature image sensors, as in medical endoscopy, is typically superior to the resolution of the optical system. This "excess resolution" is utilized by Visionsense to create stereoscopic vision. Visionsense has developed a single chip stereoscopic camera that multiplexes the horizontal dimension of the image sensor into two (left and right) images, compensates the blur phenomena, and provides additional depth resolution without sacrificing planar resolution. The camera is based on a dual-pupil imaging objective and an image sensor coated by an array of microlenses (a plenoptic camera). The camera has the advantage of being compact, providing simultaneous acquisition of left and right images, and offering resolution comparable to a dual chip stereoscopic camera with low to medium resolution imaging lenses. A stereoscopic vision system provides an improved 3-dimensional perspective of intra-operative sites that is crucial for advanced minimally invasive surgery and contributes to surgeon performance. An additional advantage of single chip stereo sensors is improvement of tolerance to electronic signal noise.
Joint estimation of high resolution images and depth maps from light field cameras
NASA Astrophysics Data System (ADS)
Ohashi, Kazuki; Takahashi, Keita; Fujii, Toshiaki
2014-03-01
Light field cameras are attracting much attention as tools for acquiring 3D information of a scene through a single camera. The main drawback of typical lenslet-based light field cameras is their limited resolution. This limitation comes from the structure in which a microlens array is inserted between the sensor and the main lens. The microlens array projects the 4D light field onto a single 2D image sensor at the sacrifice of resolution; the angular resolution and the spatial resolution trade off against each other under the fixed pixel count of the image sensor. This fundamental trade-off remains after the raw light field image is converted to a set of sub-aperture images. The purpose of our study is to estimate a higher resolution image from low resolution sub-aperture images using a framework of super-resolution reconstruction. In this reconstruction, the sub-aperture images should be registered as accurately as possible; this registration is equivalent to depth estimation. Therefore, we propose a method in which super-resolution and depth refinement are performed alternately. Most of the processing in our method is implemented by image processing operations. We present several experimental results using a Lytro camera, in which we increased the resolution of a sub-aperture image by three times horizontally and vertically. Our method produces clearer images than the original sub-aperture images and than the case without depth refinement.
High-Resolution Mars Camera Test Image of Moon Infrared
2005-09-13
This crescent view of Earth's moon in infrared wavelengths comes from a camera test by NASA's Mars Reconnaissance Orbiter spacecraft on its way to Mars. The image was taken by the High Resolution Imaging Science Experiment (HiRISE) camera on Sept. 8, 2005.
Microchannel plate streak camera
Wang, Ching L.
1989-01-01
An improved streak camera in which a microchannel plate electron multiplier is used in place of, or in combination with, the photocathode used in prior streak cameras. The improved streak camera is far more sensitive to photons (UV to gamma rays) than the conventional x-ray streak camera, which uses a photocathode. The improved streak camera offers gamma-ray detection with high temporal resolution. It also offers low-energy x-ray detection without attenuation inside the cathode. Using the microchannel plate in the improved camera has resulted in a time resolution of about 150 ps, and has provided sensitivity sufficient for 1000 keV x-rays.
Texton-based super-resolution for achieving high spatiotemporal resolution in hybrid camera system
NASA Astrophysics Data System (ADS)
Kamimura, Kenji; Tsumura, Norimichi; Nakaguchi, Toshiya; Miyake, Yoichi
2010-05-01
Many super-resolution methods have been proposed to enhance the spatial resolution of images by using iteration and multiple input images. In a previous paper, we proposed the example-based super-resolution method to enhance an image through pixel-based texton substitution to reduce the computational cost. In this method, however, we only considered the enhancement of a texture image. In this study, we modified this texton substitution method for a hybrid camera to reduce the required bandwidth of a high-resolution video camera. We applied our algorithm to pairs of high- and low-spatiotemporal-resolution videos, which were synthesized to simulate a hybrid camera. The result showed that the fine detail of the low-resolution video can be reproduced compared with bicubic interpolation and the required bandwidth could be reduced to about 1/5 in a video camera. It was also shown that the peak signal-to-noise ratios (PSNRs) of the images improved by about 6 dB in a trained frame and by 1.0-1.5 dB in a test frame, as determined by comparison with the processed image using bicubic interpolation, and the average PSNRs were higher than those obtained by the well-known Freeman’s patch-based super-resolution method. Compared with that of the Freeman’s patch-based super-resolution method, the computational time of our method was reduced to almost 1/10.
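The PSNR figures quoted above follow the standard definition, which compares an estimate against a reference image on a logarithmic scale. A minimal sketch (the function name and sample values are illustrative):

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images.

    peak is the maximum possible pixel value (255 for 8-bit images).
    Returns infinity when the images are identical.
    """
    ref = np.asarray(reference, dtype=float)
    est = np.asarray(estimate, dtype=float)
    mse = np.mean((ref - est) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((4, 4), 128.0)
est = ref + 10.0                 # uniform error of 10 grey levels
print(round(psnr(ref, est), 2))  # 28.13
```

Because PSNR is logarithmic in mean squared error, the roughly 6 dB improvement reported above corresponds to approximately halving the RMS error.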
Far ultraviolet wide field imaging and photometry - Spartan-202 Mark II Far Ultraviolet Camera
NASA Technical Reports Server (NTRS)
Carruthers, George R.; Heckathorn, Harry M.; Opal, Chet B.; Witt, Adolf N.; Henize, Karl G.
1988-01-01
The U.S. Naval Research Laboratory's Mark II Far Ultraviolet Camera, which is expected to be a primary scientific instrument aboard the Spartan-202 Space Shuttle mission, is described. This camera is intended to obtain FUV wide-field imagery of stars and extended celestial objects, including diffuse nebulae and nearby galaxies. The observations will support the HST by providing FUV photometry of calibration objects. The Mark II camera is an electrographic Schmidt camera with an aperture of 15 cm, a focal length of 30.5 cm, and sensitivity in the 1230-1600 Å wavelength range.
NASA Astrophysics Data System (ADS)
Joshi, V.; Wigdahl, J.; Nemeth, S.; Zamora, G.; Ebrahim, E.; Soliz, P.
2018-02-01
Retinal abnormalities associated with hypertensive retinopathy are useful in assessing the risk of cardiovascular disease, heart failure, and stroke. Assessing these risks as part of primary care can lead to a decrease in the incidence of cardiovascular disease-related deaths. Primary care is a resource-limited setting where low-cost retinal cameras may bring needed help without compromising care. We compared a low-cost handheld retinal camera to a traditional tabletop retinal camera in terms of optical characteristics and performance in detecting hypertensive retinopathy. A retrospective dataset of N=40 subjects (28 with hypertensive retinopathy, 12 controls) was used from a clinical study conducted at a primary care clinic in Texas. Non-mydriatic retinal fundus images were acquired using a Pictor Plus handheld camera (Volk Optical Inc.) and a Canon CR1-Mark II tabletop camera (Canon USA) during the same encounter. The images from each camera were graded by a licensed optometrist according to the universally accepted Keith-Wagener-Barker Hypertensive Retinopathy Classification System, three weeks apart to minimize memory bias. The sensitivity of the handheld camera to detect any level of hypertensive retinopathy was 86% compared to the Canon. Insufficient photographer skill produced 70% of the false-negative cases. The other 30% were due to the handheld camera's insufficient spatial resolution to resolve vascular changes such as minor A/V nicking and copper wiring, but these were associated with non-referable disease. Physician evaluation of the handheld camera's performance indicates it is sufficient to provide high-risk patients with adequate follow-up and management.
An Example-Based Super-Resolution Algorithm for Selfie Images
William, Jino Hans; Venkateswaran, N.; Narayanan, Srinath; Ramachandran, Sandeep
2016-01-01
A selfie is typically a self-portrait captured using the front camera of a smartphone. Most state-of-the-art smartphones are equipped with a high-resolution (HR) rear camera and a low-resolution (LR) front camera. As selfies are captured by front camera with limited pixel resolution, the fine details in it are explicitly missed. This paper aims to improve the resolution of selfies by exploiting the fine details in HR images captured by rear camera using an example-based super-resolution (SR) algorithm. HR images captured by rear camera carry significant fine details and are used as an exemplar to train an optimal matrix-value regression (MVR) operator. The MVR operator serves as an image-pair priori which learns the correspondence between the LR-HR patch-pairs and is effectively used to super-resolve LR selfie images. The proposed MVR algorithm avoids vectorization of image patch-pairs and preserves image-level information during both learning and recovering process. The proposed algorithm is evaluated for its efficiency and effectiveness both qualitatively and quantitatively with other state-of-the-art SR algorithms. The results validate that the proposed algorithm is efficient as it requires less than 3 seconds to super-resolve LR selfie and is effective as it preserves sharp details without introducing any counterfeit fine details. PMID:27064500
Image enhancement in positron emission mammography
NASA Astrophysics Data System (ADS)
Slavine, Nikolai V.; Seiler, Stephen; McColl, Roderick W.; Lenkinski, Robert E.
2017-02-01
Purpose: To evaluate an efficient iterative deconvolution method (RSEMD) for improving the quantitative accuracy of breast images previously reconstructed by a commercial positron emission mammography (PEM) scanner. Materials and Methods: The RSEMD method was tested on breast phantom data and clinical PEM imaging data. Data acquisition was performed on a commercial Naviscan Flex Solo II PEM camera. This method was applied to patient breast images previously reconstructed with Naviscan software (MLEM) to determine improvements in resolution, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR). Results: In all of the patients' breast studies the post-processed images proved to have higher resolution and lower noise as compared with images reconstructed by conventional methods. In general, the values of SNR reached a plateau at around 6 iterations with an improvement factor of about 2 for post-processed Flex Solo II PEM images. Improvements in image resolution after the application of RSEMD have also been demonstrated. Conclusions: A rapidly converging, iterative deconvolution algorithm with a novel resolution subsets-based approach (RSEMD) that operates on patient DICOM images has been used for quantitative improvement in breast imaging. The RSEMD method can be applied to clinical PEM images to improve image quality to diagnostically acceptable levels and will be crucial in order to facilitate diagnosis of tumor progression at the earliest stages. The RSEMD method can be considered as an extended Richardson-Lucy algorithm with multiple resolution levels (resolution subsets).
2003-03-07
File name :DSC_0749.JPG File size :1.1MB(1174690Bytes) Date taken :2003/03/07 13:51:29 Image size :2000 x 1312 Resolution :300 x 300 dpi Number of bits :8bit/channel Protection attribute :Off Hide Attribute :Off Camera ID :N/A Camera :NIKON D1H Quality mode :FINE Metering mode :Matrix Exposure mode :Shutter priority Speed light :No Focal length :20 mm Shutter speed :1/500second Aperture :F11.0 Exposure compensation :0 EV White Balance :Auto Lens :20 mm F 2.8 Flash sync mode :N/A Exposure difference :0.0 EV Flexible program :No Sensitivity :ISO200 Sharpening :Normal Image Type :Color Color Mode :Mode II(Adobe RGB) Hue adjustment :3 Saturation Control :N/A Tone compensation :Normal Latitude(GPS) :N/A Longitude(GPS) :N/A Altitude(GPS) :N/A
Super-resolution in a defocused plenoptic camera: a wave-optics-based approach.
Sahin, Erdem; Katkovnik, Vladimir; Gotchev, Atanas
2016-03-01
Plenoptic cameras enable the capture of a light field with a single device. However, with traditional light field rendering procedures, they can provide only low-resolution two-dimensional images. Super-resolution is considered to overcome this drawback. In this study, we present a super-resolution method for the defocused plenoptic camera (Plenoptic 1.0), where the imaging system is modeled using wave optics principles and utilizing low-resolution depth information of the scene. We are particularly interested in super-resolution of in-focus and near in-focus scene regions, which constitute the most challenging cases. The simulation results show that the employed wave-optics model makes super-resolution possible for such regions as long as sufficiently accurate depth information is available.
Rapid Damage Assessment. Volume II. Development and Testing of Rapid Damage Assessment System.
1981-02-01
pixels/s Camera Line Rate 732.4 lines/s Pixels per Line 1728 video 314 blank 4 line number (binary) 2 run number (BCD) 2048 total Pixel Resolution 8 bits...sists of an LSI-11 microprocessor, a VDI-200 video display processor, an FD-2 dual floppy diskette subsystem, an FT-1 function key-trackball module...COMPONENT LIST FOR IMAGE PROCESSOR SYSTEM IMAGE PROCESSOR SYSTEM VIEWS I VDI-200 Display Processor Racks, Table FD-2 Dual Floppy Diskette Subsystem FT-1
Assessment of Non-Invasive Methods of Measuring Bone Repair in Naval Casualty Victims.
1980-02-01
the pin-hole collimator on a high resolution gamma camera. In addition, the pilot project was necessary to train new technicians who had been hired...after the loss of trained technicians in 1976. Finally, the project was undertaken to help define any problems with regard to radiation contamination...allograft was entirely normal and equivalent to its autogenous control except that its histologic rating was a Type II repair rather than a Type I. In
Mars Exploration Rover engineering cameras
Maki, J.N.; Bell, J.F.; Herkenhoff, K. E.; Squyres, S. W.; Kiely, A.; Klimesh, M.; Schwochert, M.; Litwin, T.; Willson, R.; Johnson, Aaron H.; Maimone, M.; Baumgartner, E.; Collins, A.; Wadsworth, M.; Elliot, S.T.; Dingizian, A.; Brown, D.; Hagerott, E.C.; Scherr, L.; Deen, R.; Alexander, D.; Lorre, J.
2003-01-01
NASA's Mars Exploration Rover (MER) Mission will place a total of 20 cameras (10 per rover) onto the surface of Mars in early 2004. Fourteen of the 20 cameras are designated as engineering cameras and will support the operation of the vehicles on the Martian surface. Images returned from the engineering cameras will also be of significant importance to the scientific community for investigative studies of rock and soil morphology. The Navigation cameras (Navcams, two per rover) are a mast-mounted stereo pair, each with a 45° square field of view (FOV) and an angular resolution of 0.82 milliradians per pixel (mrad/pixel). The Hazard Avoidance cameras (Hazcams, four per rover) are a body-mounted, front- and rear-facing set of stereo pairs, each with a 124° square FOV and an angular resolution of 2.1 mrad/pixel. The Descent camera (one per rover), mounted to the lander, has a 45° square FOV and will return images with spatial resolutions of ≈4 m/pixel. All of the engineering cameras utilize broadband visible filters and 1024 x 1024 pixel detectors. Copyright 2003 by the American Geophysical Union.
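As a quick consistency check, the quoted angular resolutions roughly match the 1024 x 1024 detectors; a small Python sketch (small-angle arithmetic only, not anything from the mission software):

```python
import math

def pixels_across_fov(fov_deg, res_mrad_per_px):
    """Detector pixels spanned by a field of view at a given
    per-pixel angular resolution (small-angle approximation)."""
    return math.radians(fov_deg) / (res_mrad_per_px * 1e-3)

navcam = pixels_across_fov(45, 0.82)    # ~958 px, close to the 1024-px detector
hazcam = pixels_across_fov(124, 2.1)    # ~1031 px
```

Both results land near the stated 1024-pixel detector width, as expected.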
High-Resolution Large Field-of-View FUV Compact Camera
NASA Technical Reports Server (NTRS)
Spann, James F.
2006-01-01
The need for a high-resolution camera with a large field of view, capable of imaging dim emissions in the far-ultraviolet, is driven by the widely varying intensities of FUV emissions and the spatial/temporal scales of phenomena of interest in the Earth's ionosphere. In this paper, the concept of a camera is presented that is designed to achieve these goals in a lightweight package with sufficient visible-light rejection to be useful for dayside and nightside emissions. The camera employs the concept of self-filtering to achieve good spectral resolution tuned to specific wavelengths. The large field of view is sufficient to image the Earth's disk from geosynchronous altitude and is capable of a spatial resolution of >20 km. The optics and filters are emphasized.
Pulsed-neutron imaging by a high-speed camera and center-of-gravity processing
NASA Astrophysics Data System (ADS)
Mochiki, K.; Uragaki, T.; Koide, J.; Kushima, Y.; Kawarabayashi, J.; Taketani, A.; Otake, Y.; Matsumoto, Y.; Su, Y.; Hiroi, K.; Shinohara, T.; Kai, T.
2018-01-01
Pulsed-neutron imaging is an attractive technique in the research field of energy-resolved neutron radiography; RANS (RIKEN) and RADEN (J-PARC/JAEA) are small and large accelerator-driven pulsed-neutron facilities for such imaging, respectively. To overcome the insufficient spatial resolution of counting-type imaging detectors such as the μNID, nGEM and pixelated detectors, camera detectors combined with a neutron color image intensifier were investigated. At RANS, a center-of-gravity technique was applied to spot images obtained by a CCD camera, and the technique was confirmed to be effective for improving spatial resolution. At RADEN, a high-frame-rate CMOS camera was used, a super-resolution technique was applied, and the spatial resolution was found to be further improved.
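Center-of-gravity processing of a resolved spot reduces to an intensity-weighted centroid; a minimal sketch (the generic formula, not the facility's actual processing chain):

```python
import numpy as np

def centroid(spot):
    """Center of gravity of a 2D intensity patch, in pixel coordinates.
    Returns sub-pixel (row, col) estimates."""
    spot = np.asarray(spot, dtype=float)
    total = spot.sum()
    rows, cols = np.indices(spot.shape)
    return (rows * spot).sum() / total, (cols * spot).sum() / total

# A 3x3 spot whose intensity leans toward the lower-right pixel
r, c = centroid([[1, 2, 1],
                 [2, 8, 4],
                 [1, 4, 2]])
```

The sub-pixel estimate (here slightly beyond the central pixel in both axes) is what lets the technique beat the native pixel pitch of the intensifier/camera combination.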
Popovic, Kosta; McKisson, Jack E.; Kross, Brian; Lee, Seungjoon; McKisson, John; Weisenberger, Andrew G.; Proffitt, James; Stolin, Alexander; Majewski, Stan; Williams, Mark B.
2017-01-01
This paper describes the development of a hand-held gamma camera for intraoperative surgical guidance that is based on silicon photomultiplier (SiPM) technology. The camera incorporates a cerium doped lanthanum bromide (LaBr3:Ce) plate scintillator, an array of 80 SiPM photodetectors and a two-layer parallel-hole collimator. The field of view is circular with a 60 mm diameter. The disk-shaped camera housing is 75 mm in diameter, approximately 40.5 mm thick and has a mass of only 1.4 kg, permitting either hand-held or arm-mounted use. All camera components are integrated on a mobile cart that allows easy transport. The camera was developed for use in surgical procedures including determination of the location and extent of primary carcinomas, detection of secondary lesions and sentinel lymph node biopsy (SLNB). Here we describe the camera design and its principal operating characteristics, including spatial resolution, energy resolution, sensitivity uniformity, and geometric linearity. The gamma camera has an intrinsic spatial resolution of 4.2 mm FWHM, an energy resolution of 21.1 % FWHM at 140 keV, and a sensitivity of 481 and 73 cps/MBq when using the single- and double-layer collimators, respectively. PMID:28286345
Research on a solid state-streak camera based on an electro-optic crystal
NASA Astrophysics Data System (ADS)
Wang, Chen; Liu, Baiyu; Bai, Yonglin; Bai, Xiaohong; Tian, Jinshou; Yang, Wenzheng; Xian, Ouyang
2006-06-01
With excellent temporal resolution ranging from nanoseconds to sub-picoseconds, a streak camera is widely utilized in measuring ultrafast light phenomena, such as detecting synchrotron radiation, examining inertial confinement fusion targets, and making measurements of laser-induced discharge. In combination with appropriate optics or a spectroscope, the streak camera delivers intensity vs. position (or wavelength) information on the ultrafast process. The current streak camera is based on a sweep electric pulse and an image converting tube with a wavelength-sensitive photocathode ranging from the x-ray to near-infrared region. This kind of streak camera is comparatively costly and complex. This paper describes the design and performance of a new-style streak camera based on an electro-optic crystal with a large electro-optic coefficient. The crystal streak camera accomplishes the goal of time resolution by direct photon beam deflection using the electro-optic effect, and can replace the current streak camera from the visible to near-infrared region. After computer-aided simulation, we designed a crystal streak camera which has the potential of a time resolution between 1 ns and 10 ns. Some further improvements in the sweep electric circuits, a crystal with a larger electro-optic coefficient, for example LN (γ33 = 33.6×10⁻¹² m/V), and an optimal optic system may lead to a time resolution better than 1 ns.
Optimum color filters for CCD digital cameras
NASA Astrophysics Data System (ADS)
Engelhardt, Kai; Kunz, Rino E.; Seitz, Peter; Brunner, Harald; Knop, Karl
1993-12-01
As part of the ESPRIT II project No. 2103 (MASCOT), a high performance prototype color CCD still video camera was developed. Intended for professional usage such as in the graphic arts, the camera provides a maximum resolution of 3k × 3k full color pixels. A high colorimetric performance was achieved through specially designed dielectric filters and optimized matrixing. The color transformation was obtained by computer simulation of the camera system and non-linear optimization which minimized the perceivable color errors as measured in the 1976 CIELUV uniform color space for a set of about 200 carefully selected test colors. The color filters were designed to allow perfect colorimetric reproduction in principle with imperceptible color noise, and with special attention to fabrication tolerances. The camera system includes a special real-time digital color processor which carries out the color transformation. The transformation can be selected from a set of sixteen matrices optimized for different illuminants and output devices. Because the actual filter design was based on slightly incorrect data, the prototype camera showed a mean colorimetric error of 2.7 j.n.d. (CIELUV) in experiments. Using correct input data in the redesign of the filters, a mean colorimetric error of only 1 j.n.d. (CIELUV) seems to be feasible, implying that with such an optimized color camera the reproduced colors in an image cannot be distinguished from the original colors in a scene, even in direct comparison.
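The matrixing step above pairs camera responses with target color values and fits a 3 × 3 transform. The actual MASCOT design used non-linear optimization of perceptual (CIELUV) error; the sketch below shows only the simpler linear least-squares stage, with a made-up "true" matrix and synthetic test colors standing in for the roughly 200 selected colors:

```python
import numpy as np

# Hypothetical data: a known 3x3 "true" transform and 200 random test colors
rng = np.random.default_rng(0)
true_M = np.array([[0.90, 0.10, 0.00],
                   [0.05, 0.85, 0.10],
                   [0.00, 0.15, 0.80]])
camera_rgb = rng.random((200, 3))
target = camera_rgb @ true_M.T          # "measured" target tristimulus values

# Least-squares estimate of the 3x3 color-correction matrix
M_est, *_ = np.linalg.lstsq(camera_rgb, target, rcond=None)
M_est = M_est.T
```

With noise-free synthetic data the fit recovers the matrix exactly; a perceptual-space objective would instead weight errors by visibility rather than by raw squared difference.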
Performance of the Tachyon Time-of-Flight PET Camera
NASA Astrophysics Data System (ADS)
Peng, Q.; Choong, W.-S.; Vu, C.; Huber, J. S.; Janecek, M.; Wilson, D.; Huesman, R. H.; Qi, Jinyi; Zhou, Jian; Moses, W. W.
2015-02-01
We have constructed and characterized a time-of-flight Positron Emission Tomography (TOF PET) camera called the Tachyon. The Tachyon is a single-ring Lutetium Oxyorthosilicate (LSO) based camera designed to obtain significantly better timing resolution than the 550 ps found in present commercial TOF cameras, in order to quantify the benefit of improved TOF resolution for clinically relevant tasks. The Tachyon's detector module is optimized for timing by coupling the 6.15 × 25 mm² side of 6.15 × 6.15 × 25 mm³ LSO scintillator crystals onto a 1-inch diameter Hamamatsu R-9800 PMT with a super-bialkali photocathode. We characterized the camera according to the NEMA NU 2-2012 standard, measuring the energy resolution, timing resolution, spatial resolution, noise equivalent count rates and sensitivity. The Tachyon achieved a coincidence timing resolution of 314 ps ± 20 ps FWHM over all crystal-crystal combinations. Experiments were performed with the NEMA body phantom to assess the imaging performance improvement over non-TOF PET. The results show that at a matched contrast, incorporating 314 ps TOF reduces the standard deviation of the contrast by a factor of about 2.3.
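The benefit of better timing follows from simple arithmetic: a coincidence timing spread Δt localizes the annihilation along the line of response to Δx = c·Δt/2. A sketch of that conversion for the figures quoted above:

```python
C_MM_PER_PS = 0.299792458  # speed of light, mm per picosecond

def tof_localization_fwhm_mm(timing_fwhm_ps):
    """Positional FWHM along the line of response implied by a
    coincidence timing resolution: dx = c * dt / 2."""
    return C_MM_PER_PS * timing_fwhm_ps / 2.0

tachyon = tof_localization_fwhm_mm(314)      # ~47 mm
commercial = tof_localization_fwhm_mm(550)   # ~82 mm
```

Going from 550 ps to 314 ps thus shrinks the localization kernel from roughly 82 mm to 47 mm, which is what drives the variance reduction reported for the phantom study.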
Super-Resolution in Plenoptic Cameras Using FPGAs
Pérez, Joel; Magdaleno, Eduardo; Pérez, Fernando; Rodríguez, Manuel; Hernández, David; Corrales, Jaime
2014-01-01
Plenoptic cameras are a new type of sensor that extend the possibilities of current commercial cameras, allowing 3D refocusing or the capture of 3D depths. One of the limitations of plenoptic cameras is their limited spatial resolution. In this paper we describe a fast, specialized hardware implementation of a super-resolution algorithm for plenoptic cameras. The algorithm has been designed for field-programmable gate array (FPGA) devices using VHDL (very high speed integrated circuit (VHSIC) hardware description language). With this technology, we obtain an acceleration of several orders of magnitude using its extremely high-performance signal processing capability through parallelism and pipeline architecture. The system has been developed using generics of the VHDL language. This allows a very versatile and parameterizable system. The system user can easily modify parameters such as data width, number of microlenses of the plenoptic camera, their size and shape, and the super-resolution factor. The speed of the algorithm in FPGA has been successfully compared with the execution using a conventional computer for several image sizes and different 3D refocusing planes. PMID:24841246
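The parameterization the authors implement with VHDL generics (microlens size, array geometry, super-resolution factor) can be mirrored in software. The sketch below extracts the sub-aperture views that such algorithms operate on, for any square microlens size; the 6 × 6 raw image is hypothetical, not real camera data:

```python
import numpy as np

def subaperture_views(raw, ml_size):
    """Rearrange a plenoptic raw image (square microlenses of
    ml_size x ml_size pixels) into ml_size**2 low-resolution
    sub-aperture views, indexed [u, v, y, x]."""
    h, w = raw.shape
    assert h % ml_size == 0 and w % ml_size == 0
    views = raw.reshape(h // ml_size, ml_size, w // ml_size, ml_size)
    return views.transpose(1, 3, 0, 2)

raw = np.arange(36).reshape(6, 6)        # toy 6x6 sensor, 3x3 microlenses
views = subaperture_views(raw, 3)        # 9 views of 2x2 pixels each
```

Making `ml_size` a parameter of the routine plays the same role as a VHDL generic: the same logic serves any microlens layout without rewriting the datapath.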
Improved spatial resolution of luminescence images acquired with a silicon line scanning camera
NASA Astrophysics Data System (ADS)
Teal, Anthony; Mitchell, Bernhard; Juhl, Mattias K.
2018-04-01
Luminescence imaging is currently used to provide spatially resolved defect information in high-volume silicon solar cell production. One option to obtain the high throughput required for on-the-fly detection is the use of silicon line scan cameras. However, when using a silicon-based camera, the spatial resolution is reduced as a result of weakly absorbed light scattering within the camera's chip. This paper addresses the issue by applying deconvolution with a measured point spread function, extending the methods for determining the point spread function of a silicon area camera to a line scan camera with charge transfer. The improvement in resolution is quantified in the Fourier domain and in the spatial domain on an image of a multicrystalline silicon brick. It is found that light spreading beyond the active sensor area is significant in line scan sensors, but can be corrected for through normalization of the point spread function. The application of this method improves the raw data, allowing effective detection of spatially resolved defects in manufacturing.
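A common way to apply a measured point spread function is frequency-domain (Wiener) deconvolution. The paper's exact pipeline is not reproduced here; treat this as a generic sketch with a synthetic 3 × 3 box PSF and a point source:

```python
import numpy as np

def wiener_deconvolve(image, psf, nsr=1e-3):
    """Frequency-domain Wiener deconvolution with a scalar
    noise-to-signal ratio; the PSF is zero-padded to the image size."""
    H = np.fft.fft2(psf, s=image.shape)
    G = np.fft.fft2(image)
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F))

# Blur a point source with a small box PSF, then restore it
psf = np.zeros((5, 5)); psf[1:4, 1:4] = 1 / 9.0
img = np.zeros((32, 32)); img[16, 16] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
restored = wiener_deconvolve(blurred, psf)
```

The `nsr` term regularizes frequencies where the PSF transfer function is small, which is what keeps measured-PSF deconvolution from amplifying sensor noise.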
Applying and extending ISO/TC42 digital camera resolution standards to mobile imaging products
NASA Astrophysics Data System (ADS)
Williams, Don; Burns, Peter D.
2007-01-01
There are no fundamental differences between today's mobile telephone cameras and consumer digital still cameras that suggest many existing ISO imaging performance standards do not apply. To the extent that they have lenses, color filter arrays, detectors, apertures, image processing, and are hand held, there really are no operational or architectural differences. Despite this, there are currently differences in the levels of imaging performance. These are driven by physical and economic constraints, and image-capture conditions. Several ISO resolution standards, well established for consumer digital still cameras, require care when applied to the current generation of cell phone cameras. In particular, accommodation of optical flare, shading non-uniformity and distortion is recommended. We offer proposals for the application of existing ISO imaging resolution performance standards to mobile imaging products, and suggestions for extending performance standards to the characteristic behavior of camera phones.
Optical analysis of a compound quasi-microscope for planetary landers
NASA Technical Reports Server (NTRS)
Wall, S. D.; Burcher, E. E.; Huck, F. O.
1974-01-01
A quasi-microscope concept, consisting of a facsimile camera augmented with an auxiliary lens as a magnifier, was introduced and analyzed. The performance achievable with this concept was primarily limited by a trade-off between resolution and object field; this approach leads to a limiting resolution of 20 microns when used with the Viking lander camera (which has an angular resolution of 0.04 deg). An optical system is analyzed which includes a field lens between the camera and auxiliary lens to overcome this limitation. It is found that this system, referred to as a compound quasi-microscope, can provide improved resolution (to about 2 microns) and a larger object field. However, this improvement is at the expense of increased complexity, special camera design requirements, and tighter tolerances on the distances between optical components.
Super-resolved refocusing with a plenoptic camera
NASA Astrophysics Data System (ADS)
Zhou, Zhiliang; Yuan, Yan; Bin, Xiangli; Qian, Lulu
2011-03-01
This paper presents an approach to enhance the resolution of refocused images by super-resolution methods. In plenoptic imaging, we demonstrate that the raw sensor image can be divided into a number of low-resolution angular images with sub-pixel shifts between each other. The sub-pixel shift, which defines the super-resolving ability, is mathematically derived by considering the plenoptic camera as equivalent camera arrays. We implement a simulation to demonstrate the imaging process of a plenoptic camera. A high-resolution image is then reconstructed using maximum a posteriori (MAP) super-resolution algorithms. Without other degradation effects in simulation, the super-resolved image achieves a resolution as high as predicted by the proposed model. We also build an experimental setup to acquire light fields. With traditional refocusing methods, the image is rendered at a rather low resolution. In contrast, we implement the super-resolved refocusing methods and recover an image with more spatial details. To evaluate the performance of the proposed method, we finally compare the reconstructed images using image quality metrics such as peak signal-to-noise ratio (PSNR).
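The PSNR metric used for that comparison is straightforward to state; a minimal implementation (evaluated on a toy uniform-error image, not the paper's data):

```python
import numpy as np

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB between two images."""
    ref = np.asarray(reference, dtype=float)
    est = np.asarray(estimate, dtype=float)
    mse = np.mean((ref - est) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((8, 8))
noisy = ref + 0.01   # uniform error of 0.01 -> MSE = 1e-4 -> 40 dB
value = psnr(ref, noisy)
```

Because PSNR is a log of mean squared error, a few dB of difference between reconstructions corresponds to a substantial change in pixel-level fidelity.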
Toward an image compression algorithm for the high-resolution electronic still camera
NASA Technical Reports Server (NTRS)
Nerheim, Rosalee
1989-01-01
Taking pictures with a camera that uses a digital recording medium instead of film has the advantage of recording and transmitting images without the use of a darkroom or a courier. However, high-resolution images contain an enormous amount of information and strain data-storage systems. Image compression will allow multiple images to be stored in the High-Resolution Electronic Still Camera. The camera is under development at Johnson Space Center. Fidelity of the reproduced image and compression speed are of paramount importance. Lossless compression algorithms are fast and faithfully reproduce the image, but their compression ratios will be unacceptably low due to noise in the front end of the camera. Future efforts will include exploring methods that will reduce the noise in the image and increase the compression ratio.
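The point about noise capping lossless ratios is easy to demonstrate: a general-purpose lossless coder compresses structured data well but can do almost nothing with noise-like data. A Python sketch with synthetic bytes (not actual camera output):

```python
import random
import zlib

random.seed(0)
# Highly predictable "clean" data vs. random bytes standing in for sensor noise
smooth = bytes(range(256)) * 64
noisy = bytes(random.randrange(256) for _ in range(len(smooth)))

ratio_smooth = len(smooth) / len(zlib.compress(smooth))
ratio_noisy = len(noisy) / len(zlib.compress(noisy))
```

The structured stream compresses many times over, while the noise-like stream barely shrinks at all, which is why front-end noise reduction directly improves achievable lossless ratios.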
NASA Technical Reports Server (NTRS)
Clegg, R. H.; Scherz, J. P.
1975-01-01
Successful aerial photography depends on aerial cameras providing acceptable photographs within cost restrictions of the job. For topographic mapping where ultimate accuracy is required only large format mapping cameras will suffice. For mapping environmental patterns of vegetation, soils, or water pollution, 9-inch cameras often exceed accuracy and cost requirements, and small formats may be better. In choosing the best camera for environmental mapping, relative capabilities and costs must be understood. This study compares resolution, photo interpretation potential, metric accuracy, and cost of 9-inch, 70mm, and 35mm cameras for obtaining simultaneous color and color infrared photography for environmental mapping purposes.
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor)
1989-01-01
A method and apparatus are developed for obtaining a stereo image with reduced depth distortion and optimum depth resolution. A tradeoff between static and dynamic depth distortion and depth resolution is provided. The cameras obtaining the images for a stereo view are converged at a convergence point behind the object to be presented in the image, and the collection-surface-to-object distance, the camera separation distance, and the focal lengths of zoom lenses for the cameras are all increased. Doubling the distances cuts the static depth distortion in half while maintaining image size and depth resolution. Dynamic depth distortion is minimized by panning a stereo view-collecting camera system about a circle which passes through the convergence point and the cameras' first nodal points. Horizontal field shifting of the television fields on a television monitor brings both the monitor and the stereo views within the viewer's limit of binocular fusion.
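For intuition about the depth-resolution side of this tradeoff, the standard parallel-camera stereo relations can be sketched (a simplification of the patent's converged geometry, with made-up numbers):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Stereo depth for parallel cameras: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def depth_resolution(focal_px, baseline_m, depth_m, d_disp_px=1.0):
    """Depth change caused by a d_disp_px change in disparity at a
    given depth: dZ ~= Z**2 / (f * B) * d_disp."""
    return depth_m ** 2 / (focal_px * baseline_m) * d_disp_px

z = depth_from_disparity(1000, 0.2, 50)   # 4 m for f=1000 px, B=0.2 m, d=50 px
dz = depth_resolution(1000, 0.2, z)       # 0.08 m per pixel of disparity
```

The `Z**2 / (f * B)` form shows why increasing camera separation and focal length together, as the patent does, preserves depth resolution while the geometry is rescaled.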
Quasi-microscope concept for planetary missions.
Huck, F O; Arvidson, R E; Burcher, E E; Giat, O; Wall, S D
1977-09-01
Viking lander cameras have returned stereo and multispectral views of the Martian surface with a resolution that approaches 2 mm/lp in the near field. A two-orders-of-magnitude increase in resolution could be obtained for collected surface samples by augmenting these cameras with auxiliary optics that would neither impose special camera design requirements nor limit the cameras' field of view of the terrain. Quasi-microscope images would provide valuable data on the physical and chemical characteristics of planetary regoliths.
An Automatic Image-Based Modelling Method Applied to Forensic Infography
Zancajo-Blazquez, Sandra; Gonzalez-Aguilera, Diego; Gonzalez-Jorge, Higinio; Hernandez-Lopez, David
2015-01-01
This paper presents a new method based on 3D reconstruction from images that demonstrates the utility and integration of close-range photogrammetry and computer vision as an efficient alternative to modelling complex objects and scenarios of forensic infography. The results obtained confirm the validity of the method compared to other existing alternatives as it guarantees the following: (i) flexibility, permitting work with any type of camera (calibrated and non-calibrated, smartphone or tablet) and image (visible, infrared, thermal, etc.); (ii) automation, allowing the reconstruction of three-dimensional scenarios in the absence of manual intervention; and (iii) high-quality results, sometimes providing higher resolution than modern laser scanning systems. As a result, each ocular inspection of a crime scene with any camera performed by the scientific police can be transformed into a scaled 3D model. PMID:25793628
[NEII] Line Velocity Structure of Ultracompact HII Regions
NASA Astrophysics Data System (ADS)
Okamoto, Yoshiko K.; Kataza, Hirokazu; Yamashita, Takuya; Miyata, Takashi; Sako, Shigeyuki; Honda, Mitsuhiko; Onaka, Takashi; Fujiyoshi, Takuya
Newly formed massive stars are embedded in their natal molecular clouds and are observed as ultracompact HII regions. They emit strong ionic lines such as [NeII] 12.8 micron. Since Ne is ionized by UV photons of E > 21.6 eV, which is higher than the ionization energy of hydrogen atoms, the line probes the ionized gas near the ionizing stars. This enables probing gas motion in the vicinity of recently formed massive stars. High angular and spectral resolution observations of the [NeII] line will thus provide significant information on structures (e.g., disks and outflows) generated through massive star formation. We made [NeII] spectroscopy of ultracompact HII regions using the Cooled Mid-Infrared Camera and Spectrometer (COMICS) on the 8.2 m Subaru Telescope in July 2002. Spatial and spectral resolutions were 0.5" and 10,000, respectively. Among the targets, G45.12+0.13 shows the largest spatial variation in velocity. The brightest area of G45.12+0.13 has the largest line width in the object. The total velocity deviation amounts to 50 km/s (peak-to-peak) in the observed area. We report the velocity structure of the [NeII] emission of G45.12+0.13 and discuss the gas motion near the ionizing star.
A high-resolution image of the inner shell of the P Cygni nebula in the infrared [Fe II] line
NASA Astrophysics Data System (ADS)
Arcidiacono, C.; Ragazzoni, R.; Morossi, C.; Franchini, M.; di Marcantonio, P.; Kulesa, C.; McCarthy, D.; Briguglio, R.; Xompero, M.; Busoni, L.; Quirós-Pacheco, F.; Pinna, E.; Boutsia, K.; Paris, D.
2014-09-01
Using the adaptive optics system of the Large Binocular Telescope, we have obtained near-infrared camera PISCES images of the inner shell of the nebula around the luminous blue variable star P Cygni in the [Fe II] emission line at 1.6435 μm. We have combined the images in order to cover a field of view of about 20 arcsec around P Cygni, thus providing the high-resolution (0.08 arcsec) two-dimensional spatial distribution of the inner shell of the P Cygni nebula in [Fe II]. We have identified several nebular emission regions that are characterized by a signal-to-noise ratio > 3. A comparison of our results with those available in the literature shows full consistency with the findings of Smith & Hartigan, which are based on radial velocity measurements, and relatively good agreement with the extension of emission nebula in [N II] λ6584 found by Barlow et al. We have clearly also detected extended emission inside the radial distance R = 7.8 arcsec and outside R = 9.7 arcsec, which are the nebular boundaries proposed by Smith & Hartigan. New complementary spectroscopic observations are planned in order to measure radial velocities and to derive the three-dimensional distribution of the P Cygni nebula.
Single lens 3D-camera with extended depth-of-field
NASA Astrophysics Data System (ADS)
Perwaß, Christian; Wietzke, Lennart
2012-03-01
Placing a micro lens array in front of an image sensor transforms a normal camera into a single lens 3D camera, which also allows the user to change the focus and the point of view after a picture has been taken. While the concept of such plenoptic cameras has been known since 1908, only recently have the increased computing power of low-cost hardware and the advances in micro lens array production made the application of plenoptic cameras feasible. This text presents a detailed analysis of plenoptic cameras as well as introducing a new type of plenoptic camera with an extended depth of field and a maximal effective resolution of up to a quarter of the sensor resolution.
Texture-adaptive hyperspectral video acquisition system with a spatial light modulator
NASA Astrophysics Data System (ADS)
Fang, Xiaojing; Feng, Jiao; Wang, Yongjin
2014-10-01
We present a new hybrid camera system based on a spatial light modulator (SLM) to capture texture-adaptive high-resolution hyperspectral video. The hybrid camera system records a hyperspectral video with low spatial resolution using a gray camera and a high-spatial-resolution video using an RGB camera. The hyperspectral video is subsampled by the SLM. The subsampled points can be adaptively selected according to the texture characteristics of the scene by combining digital imaging analysis and computational processing. In this paper, we propose an adaptive sampling method utilizing texture segmentation and the wavelet transform (WT). We also demonstrate the effectiveness of the sampling pattern on the SLM with the proposed method.
NASA Astrophysics Data System (ADS)
Mundermann, Lars; Mundermann, Annegret; Chaudhari, Ajit M.; Andriacchi, Thomas P.
2005-01-01
Anthropometric parameters are fundamental for a wide variety of applications in biomechanics, anthropology, medicine and sports. Recent technological advancements provide methods for constructing 3D surfaces directly. Of these new technologies, visual hull construction may be the most cost-effective yet sufficiently accurate method. However, the conditions influencing the accuracy of anthropometric measurements based on visual hull reconstruction are unknown. The purpose of this study was to evaluate the conditions that influence the accuracy of 3D shape-from-silhouette reconstruction of body segments depending on the number of cameras, camera resolution and object contours. The results demonstrate that the visual hulls lacked accuracy in concave regions and narrow spaces, but setups with a high number of cameras reconstructed a human form with an average accuracy of 1.0 mm. In general, setups with fewer than 8 cameras yielded largely inaccurate visual hull constructions, while setups with 16 and more cameras provided good volume estimations. Body segment volumes were obtained with an average error of 10% at a 640×480 resolution using 8 cameras. Changes in resolution did not significantly affect the average error. However, substantial decreases in error were observed with increasing number of cameras (33.3% using 4 cameras; 10.5% using 8 cameras; 4.1% using 16 cameras; 1.2% using 64 cameras).
On the resolution of plenoptic PIV
NASA Astrophysics Data System (ADS)
Deem, Eric A.; Zhang, Yang; Cattafesta, Louis N.; Fahringer, Timothy W.; Thurow, Brian S.
2016-08-01
Plenoptic PIV offers a simple, single camera solution for volumetric velocity measurements of fluid flow. However, due to the novel manner in which the particle images are acquired and processed, few references exist to aid in determining the resolution limits of the measurements. This manuscript provides a framework for determining the spatial resolution of plenoptic PIV based on camera design and experimental parameters. This information can then be used to determine the smallest length scales of flows that are observable by plenoptic PIV, the dynamic range of plenoptic PIV, and the corresponding uncertainty in plenoptic PIV measurements. A simplified plenoptic camera is illustrated to provide the reader with a working knowledge of the method in which the light field is recorded. Then, operational considerations are addressed. This includes a derivation of the depth resolution in terms of the design parameters of the camera. Simulated volume reconstructions are presented to validate the derived limits. It is found that, while determining the lateral resolution is relatively straightforward, many factors affect the resolution along the optical axis. These factors are addressed and suggestions are proposed for improving performance.
Camera Installation on a Beech AT-11
1950-02-21
Researchers at the National Advisory Committee for Aeronautics (NACA) Lewis Flight Propulsion Laboratory conducted an extensive investigation into the composition of clouds and their effect on aircraft icing. The researcher in this photograph is installing cameras on a Beech AT-11 Kansan in order to photograph water droplets during flights through clouds. The twin-engine AT-11 was the primary training aircraft for World War II bomber crews. The NACA acquired this aircraft in January 1946, shortly after the end of the war. NACA Lewis’ icing research during the war focused on the resolution of icing problems for specific military aircraft. In 1947 the laboratory broadened its program and began systematically measuring and categorizing clouds and water droplets. The three main thrusts of the Lewis icing flight research were the development of better instrumentation, the accumulation of data on ice buildup during flight, and the measurement of droplet sizes in clouds. The NACA researchers developed several types of measurement devices for the icing flights, including modified cameras. The National Research Council of Canada experimented with high-speed cameras with a large magnification lens to photograph the droplets suspended in the air. In 1951 NACA Lewis developed and flight tested its own camera with a magnification of 32. The camera, mounted to an external strut, could be operated every five seconds as the aircraft reached speeds up to 150 miles per hour. The initial flight tests through cumulus clouds demonstrated that droplet size distribution could be studied.
High-Resolution Mars Camera Test Image of Moon (Infrared)
NASA Technical Reports Server (NTRS)
2005-01-01
This crescent view of Earth's Moon in infrared wavelengths comes from a camera test by NASA's Mars Reconnaissance Orbiter spacecraft on its way to Mars. The mission's High Resolution Imaging Science Experiment camera took the image on Sept. 8, 2005, while at a distance of about 10 million kilometers (6 million miles) from the Moon. The dark feature on the right is Mare Crisium. From that distance, the Moon would appear as a star-like point of light to the unaided eye. The test verified the camera's focusing capability and provided an opportunity for calibration. The spacecraft's Context Camera and Optical Navigation Camera also performed as expected during the test. The Mars Reconnaissance Orbiter, launched on Aug. 12, 2005, is on course to reach Mars on March 10, 2006. After gradually adjusting the shape of its orbit for half a year, it will begin its primary science phase in November 2006. From the mission's planned science orbit about 300 kilometers (186 miles) above the surface of Mars, the high resolution camera will be able to discern features as small as one meter or yard across.
Potential for application of an acoustic camera in particle tracking velocimetry.
Wu, Fu-Chun; Shao, Yun-Chuan; Wang, Chi-Kuei; Liou, Jim
2008-11-01
We explored the potential and limitations for applying an acoustic camera as the imaging instrument of particle tracking velocimetry. The strength of the acoustic camera is its usability in low-visibility environments where conventional optical cameras are ineffective, while its applicability is limited by lower temporal and spatial resolutions. We conducted a series of experiments in which acoustic and optical cameras were used to simultaneously image the rotational motion of tracer particles, allowing for a comparison of the acoustic- and optical-based velocities. The results reveal that the greater fluctuations associated with the acoustic-based velocities are primarily attributed to the lower temporal resolution. The positive and negative biases induced by the lower spatial resolution are balanced, with the positive ones greater in magnitude but the negative ones greater in quantity. These biases reduce with the increase in the mean particle velocity and approach minimum as the mean velocity exceeds the threshold value that can be sensed by the acoustic camera.
Chrominance watermark for mobile applications
NASA Astrophysics Data System (ADS)
Reed, Alastair; Rogers, Eliot; James, Dan
2010-01-01
Creating an imperceptible watermark which can be read by a broad range of cell phone cameras is a difficult problem. The problems are caused by the inherently low resolution and noise levels of typical cell phone cameras. The quality limitations of these devices compared to a typical digital camera are caused by the small size of the cell phone and cost trade-offs made by the manufacturer. To achieve this, a low resolution watermark is required which can be resolved by a typical cell phone camera. The visibility of a traditional luminance watermark was too great at this lower resolution, so a chrominance watermark was developed. The chrominance watermark takes advantage of the relatively low sensitivity of the human visual system to chrominance changes. This enables a chrominance watermark to be inserted into an image that is imperceptible to the human eye but can be read using a typical cell phone camera. Sample images are presented in which the watermark has very low visibility yet can be easily read by a typical cell phone camera.
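The idea of hiding the payload in chrominance rather than luminance can be sketched as follows. This is a simplified illustration of the general principle, not the authors' actual embedder; the BT.601 color conversion, the ±1 chip pattern, and the correlation detector are assumptions made for the sketch.

```python
import numpy as np

def embed_chroma_watermark(rgb, pattern, strength=2.0):
    """Add a low-amplitude payload to the blue-difference (Cb) channel.

    rgb     : (H, W, 3) float array, values 0..255
    pattern : (H, W) array of +1/-1 chips (the watermark)
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # ITU-R BT.601 RGB -> YCbCr (float form)
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 + (b - y) * 0.564
    cr = 128 + (r - y) * 0.713
    cb = cb + strength * pattern          # watermark rides on chrominance only
    # YCbCr -> RGB, luminance y is preserved by construction
    b2 = y + (cb - 128) / 0.564
    r2 = y + (cr - 128) / 0.713
    g2 = (y - 0.299 * r2 - 0.114 * b2) / 0.587
    return np.stack([r2, g2, b2], axis=-1)

def detect_chroma_watermark(rgb, pattern):
    """Correlate the Cb channel against the known chip pattern."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = (b - y) * 0.564
    cb = cb - cb.mean()
    return float(np.mean(cb * pattern))   # clearly > 0 when the mark is present
```

Because the luminance channel is untouched, the human visual system (which is far more sensitive to luminance detail) sees little change, while the detector recovers the pattern from chrominance.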
The Panoramic Camera (PanCam) Instrument for the ESA ExoMars Rover
NASA Astrophysics Data System (ADS)
Griffiths, A.; Coates, A.; Jaumann, R.; Michaelis, H.; Paar, G.; Barnes, D.; Josset, J.
The recently approved ExoMars rover is the first element of the ESA Aurora programme and is slated to deliver the Pasteur exobiology payload to Mars by 2013. The 0.7 kg Panoramic Camera will provide multispectral stereo images with 65° field-of-view (1.1 mrad/pixel) and high resolution (85 µrad/pixel) monoscopic "zoom" images with 5° field-of-view. The stereo Wide Angle Cameras (WAC) are based on Beagle 2 Stereo Camera System heritage. The Panoramic Camera instrument is designed to fulfil the digital terrain mapping requirements of the mission as well as providing multispectral geological imaging, colour and stereo panoramic images, solar images for water vapour abundance and dust optical depth measurements and to observe retrieved subsurface samples before ingestion into the rest of the Pasteur payload. Additionally the High Resolution Camera (HRC) can be used for high resolution imaging of interesting targets detected in the WAC panoramas and of inaccessible locations on crater or valley walls.
Restoring the spatial resolution of refocus images on 4D light field
NASA Astrophysics Data System (ADS)
Lim, JaeGuyn; Park, ByungKwan; Kang, JooYoung; Lee, SeongDeok
2010-01-01
This paper presents a method for generating a refocused image with restored spatial resolution on a plenoptic camera, which, unlike a traditional camera, allows the depth of field to be controlled after a single image is captured. Such a camera records the 4D light field (angular and spatial information of light) within a limited 2D sensor, reducing the 2D spatial resolution because of the unavoidable 2D angular data. This is why a refocused image has lower spatial resolution than the 2D sensor. However, it has recently been shown that the angular data contain sub-pixel spatial information, so that the spatial resolution of the 4D light field can be increased. We exploit this fact to improve the spatial resolution of a refocused image. We have experimentally verified that the sub-pixel spatial information differs according to the depth of objects from the camera. Therefore, for selected refocused regions (at the corresponding depth), we use the corresponding pre-estimated sub-pixel spatial information to reconstruct the spatial resolution of those regions, while other regions remain out of focus. Our experimental results show the effect of the proposed method compared to the existing method.
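The underlying refocusing operation on 4D light-field data can be sketched as shift-and-add over sub-aperture views: each angular view is shifted in proportion to its offset from the central view and the results are averaged. This is a simplified illustration of generic plenoptic refocusing, not the authors' resolution-restoration method; the integer shifts and the refocus parameter `alpha` are assumptions of the sketch.

```python
import numpy as np

def refocus(sub_aperture, alpha):
    """Synthetic refocusing by shift-and-add over sub-aperture views.

    sub_aperture : (U, V, H, W) array -- one image per angular sample (u, v)
    alpha        : refocus parameter; the shift grows with the view's distance
                   from the central view (alpha = 0 keeps the captured plane)
    """
    U, V, H, W = sub_aperture.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(sub_aperture[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)
```

Objects at the depth matching `alpha` add coherently and appear sharp; the sub-pixel offsets between views at other depths are exactly the information the paper proposes to exploit for resolution restoration.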
The spatial resolution of a rotating gamma camera tomographic facility.
Webb, S; Flower, M A; Ott, R J; Leach, M O; Inamdar, R
1983-12-01
An important feature determining the spatial resolution in transverse sections reconstructed by convolution and back-projection is the frequency filter corresponding to the convolution kernel. Equations have been derived giving the theoretical spatial resolution, for a perfect detector and noise-free data, using four filter functions. Experiments have shown that physical constraints will always limit the resolution that can be achieved with a given system. The experiments indicate that the region of the frequency spectrum between KN/2 and KN where KN is the Nyquist frequency does not contribute significantly to resolution. In order to investigate the physical effect of these filter functions, the spatial resolution of reconstructed images obtained with a GE 400T rotating gamma camera has been measured. The results obtained serve as an aid to choosing appropriate reconstruction filters for use with a rotating gamma camera system.
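The role of the convolution kernel's frequency filter can be illustrated with a short sketch: a ramp (Ram-Lak) filter applied to one projection, with an adjustable cutoff so that the band between KN/2 and KN can be truncated, as the experiments above suggest contributes little to resolution. This is a generic filtered back-projection fragment, not the authors' code; the cutoff convention is an assumption.

```python
import numpy as np

def filtered_projection(projection, cutoff=1.0):
    """Apply a ramp (Ram-Lak) convolution filter in the frequency domain.

    projection : 1D detector profile (one view)
    cutoff     : fraction of the Nyquist frequency K_N above which the
                 filter is zeroed (cutoff=0.5 keeps only |k| <= K_N/2)
    """
    n = len(projection)
    k = np.fft.fftfreq(n)                   # cycles/sample, Nyquist at 0.5
    ramp = np.abs(k)
    ramp[np.abs(k) > 0.5 * cutoff] = 0.0    # truncate the frequency filter
    return np.real(np.fft.ifft(np.fft.fft(projection) * ramp))
```

Reconstructing each view this way and back-projecting gives the transverse section; lowering the cutoff trades resolution for noise suppression, which is the filter-selection question the measurements address.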
Prosdocimi, Massimo; Burguet, Maria; Di Prima, Simone; Sofia, Giulia; Terol, Enric; Rodrigo Comino, Jesús; Cerdà, Artemi; Tarolli, Paolo
2017-01-01
Soil water erosion is a serious problem, especially in agricultural lands. Among these, vineyards deserve particular attention because, in Mediterranean areas, they constitute a land use affected by high soil losses. A significant problem related to the study of soil water erosion in these areas is the lack of a standardized procedure for collecting data and reporting results, mainly due to the variability among the measurement methods applied. Given this issue and the seriousness of soil water erosion in Mediterranean vineyards, this work aims to quantify the soil losses caused by simulated rainstorms and to compare estimates from two different methodologies: (i) rainfall simulation and (ii) a surface elevation change-based method relying on high-resolution Digital Elevation Models (DEMs) derived from a photogrammetric technique (Structure-from-Motion or SfM). The experiments were carried out in a typical Mediterranean vineyard, located in eastern Spain, at very fine scales. SfM data were obtained from one reflex camera and a smartphone built-in camera. An index of sediment connectivity was also applied to evaluate the potential effect of connectivity within the plots. DEMs derived from the smartphone and the reflex camera were comparable with each other in terms of accuracy and capability of estimating soil loss. Furthermore, soil loss estimated with the surface elevation change-based method was of the same order of magnitude as that obtained with rainfall simulation, as long as the sediment connectivity within the plot was considered. High-resolution topography derived from SfM proved essential in the sediment connectivity analysis and, therefore, in the estimation of eroded materials when comparing them to those derived from the rainfall simulation methodology. The fact that smartphone built-in cameras can produce results as satisfactory as those from reflex cameras adds considerable value to the use of SfM.
Camera system resolution and its influence on digital image correlation
Reu, Phillip L.; Sweatt, William; Miller, Timothy; ...
2014-09-21
Digital image correlation (DIC) uses images from a camera and lens system to make quantitative measurements of the shape, displacement, and strain of test objects. Little research has been done on how imaging system resolution influences the results of this increasingly popular method. This paper investigates the entire imaging system and studies how both the camera and lens resolution influence the DIC results as a function of the system Modulation Transfer Function (MTF). It shows that when making spatial resolution decisions (including speckle size), the resolution-limiting component should be considered. A consequence of the loss of spatial resolution is that the DIC uncertainties will be increased. This is demonstrated using both synthetic and experimental images with varying resolution. The loss of image resolution and DIC accuracy can be compensated for by increasing the subset size or, better, by increasing the speckle size. The speckle size and spatial resolution are then a function of the lens resolution rather than the more typical assumption of the pixel size. The study demonstrates the tradeoffs associated with limited lens resolution.
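The cascading of component resolutions can be sketched with a toy model in which each component's MTF is Gaussian and the system MTF is their product, so the weakest component dominates the achievable resolution. The Gaussian form, the `f50` parameters, and the 10% contrast limit below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gaussian_mtf(freq, f50):
    """Toy Gaussian MTF model: contrast falls to 0.5 at spatial frequency f50."""
    return np.exp(-np.log(2.0) * (freq / f50) ** 2)

def system_resolution(freqs, component_f50s, contrast_limit=0.1):
    """Cascade component MTFs (they multiply) and report the highest spatial
    frequency the full imaging chain still resolves above contrast_limit."""
    mtf = np.ones_like(freqs)
    for f50 in component_f50s:
        mtf = mtf * gaussian_mtf(freqs, f50)   # MTFs multiply in cascade
    resolved = freqs[mtf >= contrast_limit]
    return float(resolved[-1]) if len(resolved) else 0.0
```

Adding any component can only lower the system curve, which is why the paper argues the resolution-limiting element, often the lens rather than the pixel pitch, should drive speckle-size and subset-size choices.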
Design of the high-resolution soft X-ray imaging system on the Joint Texas Experimental Tokamak
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Jianchao; Ding, Yonghua, E-mail: yhding@mail.hust.edu.cn; Zhang, Xiaoqing
2014-11-15
A new soft X-ray diagnostic system has been designed on the Joint Texas Experimental Tokamak (J-TEXT) aiming to observe and survey the magnetohydrodynamic (MHD) activities. The system consists of five cameras located at the same toroidal position. Each camera has 16 photodiode elements. Three imaging cameras view the internal plasma region (r/a < 0.7) with a spatial resolution of about 2 cm. By the tomographic method, heat transport outward from the 1/1 mode X-point during the sawtooth collapse is found. The other two cameras, with a higher spatial resolution of 1 cm, are designed for monitoring local MHD activities in the plasma core and boundary, respectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Birch, Gabriel Carisle; Griffin, John Clark
2015-01-01
The horizontal television lines (HTVL) metric has been the primary quantity used by division 6000 related to camera resolution for high consequence security systems. This document shows that HTVL measurements are fundamentally insufficient as a metric to determine camera resolution, and proposes a quantitative, standards-based methodology: measuring the camera system modulation transfer function (MTF), the most common and accepted metric of resolution in the optical science community. Because HTVL calculations are easily misinterpreted or poorly defined, we present several scenarios in which HTVL is frequently reported, and discuss their problems. The MTF metric is discussed, and scenarios are presented with calculations showing the application of such a metric.
Seeing tobacco mosaic virus through direct electron detectors
Fromm, Simon A.; Bharat, Tanmay A.M.; Jakobi, Arjen J.; Hagen, Wim J.H.; Sachse, Carsten
2015-01-01
With the introduction of direct electron detectors (DED) to the field of electron cryo-microscopy, a wave of atomic-resolution structures has become available. As the new detectors still require comparative characterization, we have used tobacco mosaic virus (TMV) as a test specimen to study the quality of 3D image reconstructions from data recorded on the two direct electron detector cameras, K2 Summit and Falcon II. Using DED movie frames, we explored related image-processing aspects and compared the performance of micrograph-based and segment-based motion correction approaches. In addition, we investigated the effect of dose deposition on the atomic-resolution structure of TMV and show that radiation damage affects negative carboxyl chains first in a side-chain specific manner. Finally, using 450,000 asymmetric units and limiting the effects of radiation damage, we determined a high-resolution cryo-EM map at 3.35 Å resolution. Here, we provide a comparative case study of highly ordered TMV recorded on different direct electron detectors to establish recording and processing conditions that enable structure determination up to 3.2 Å in resolution using cryo-EM. PMID:25528571
Image quality assessment for selfies with and without super resolution
NASA Astrophysics Data System (ADS)
Kubota, Aya; Gohshi, Seiichi
2018-04-01
With the advent of cellphone cameras, particularly on smartphones, many people now take photos of themselves alone and with others in the frame; such photos are popularly known as "selfies". Most smartphones are equipped with two cameras: the camera on the back of the smartphone is referred to as the "out-camera," whereas the one on the front is called the "in-camera." In-cameras are mainly used for selfies. Some smartphones feature high-resolution cameras. However, the full image quality cannot be obtained because smartphone cameras often have low-performance lenses. Super resolution (SR) is a recent technological advancement that increases image resolution. We developed a new SR technology that can be processed on smartphones. Smartphones with this new SR technology are already available on the market. However, the effectiveness of the new SR technology has not yet been verified. Comparing the image quality with and without SR on the smartphone display is necessary to confirm the usefulness of this new technology. Methods based on objective and subjective assessment are required to quantitatively measure image quality. It is known that typical objective assessment values, such as the Peak Signal to Noise Ratio (PSNR), do not always agree with how we perceive an image or video. When digital broadcasting started, the standard was determined using subjective assessment. Although subjective assessment usually comes at high cost because of personnel expenses for observers, its results are highly reproducible when conducted under the right conditions and with statistical analysis. In this study, the subjective assessment results for selfie images are reported.
A detailed comparison of single-camera light-field PIV and tomographic PIV
NASA Astrophysics Data System (ADS)
Shi, Shengxian; Ding, Junfei; Atkinson, Callum; Soria, Julio; New, T. H.
2018-03-01
This paper conducts a comprehensive study of the single-camera light-field particle image velocimetry (LF-PIV) and the multi-camera tomographic particle image velocimetry (Tomo-PIV). Simulation studies were first performed using synthetic light-field and tomographic particle images, which extensively examine the difference between these two techniques by varying key parameters such as pixel to microlens ratio (PMR), light-field camera to Tomo-camera pixel ratio (LTPR), particle seeding density and tomographic camera number. Simulation results indicate that single-camera LF-PIV can achieve accuracy consistent with that of multi-camera Tomo-PIV, but requires a greater overall number of pixels. Experimental studies were then conducted by simultaneously measuring a low-speed jet flow with single-camera LF-PIV and four-camera Tomo-PIV systems. Experiments confirm that, given a sufficiently high pixel resolution, a single-camera LF-PIV system can indeed deliver volumetric velocity field measurements for an equivalent field of view with a spatial resolution commensurate with that of a multi-camera Tomo-PIV system, enabling accurate 3D measurements in applications where optical access is limited.
NASA Technical Reports Server (NTRS)
Tarbell, Theodore D.
1993-01-01
Technical studies of the feasibility of balloon flights of the former Spacelab instrument, the Solar Optical Universal Polarimeter, with a modern charge-coupled device (CCD) camera, to study the structure and evolution of solar active regions at high resolution, are reviewed. In particular, different CCD cameras were used at ground-based solar observatories with the SOUP filter, to evaluate their performance and collect high resolution images. High resolution movies of the photosphere and chromosphere were successfully obtained using four different CCD cameras. Some of this data was collected in coordinated observations with the Yohkoh satellite during May-July, 1992, and they are being analyzed scientifically along with simultaneous X-ray observations.
Fast and compact internal scanning CMOS-based hyperspectral camera: the Snapscan
NASA Astrophysics Data System (ADS)
Pichette, Julien; Charle, Wouter; Lambrechts, Andy
2017-02-01
Imec has developed a process for the monolithic integration of optical filters on top of CMOS image sensors, leading to compact, cost-efficient and faster hyperspectral cameras. Linescan cameras are typically used in remote sensing or for conveyor belt applications. Translation of the target is not always possible for large objects or in many medical applications. Therefore, we introduce a novel camera, the Snapscan (patent pending), exploiting internal movement of a linescan sensor enabling fast and convenient acquisition of high-resolution hyperspectral cubes (up to 2048x3652x150 in spectral range 475-925 nm). The Snapscan combines the spectral and spatial resolutions of a linescan system with the convenience of a snapshot camera.
NASA Astrophysics Data System (ADS)
Neukum, Gerhard; Jaumann, Ralf; Scholten, Frank; Gwinner, Klaus
2017-11-01
At the Institute of Space Sensor Technology and Planetary Exploration of the German Aerospace Center (DLR), the High Resolution Stereo Camera (HRSC) has been designed for international missions to the planet Mars. For more than three years an airborne version of this camera, the HRSC-A, has been successfully applied in many flight campaigns and in a variety of different applications. It combines 3D capabilities and high resolution with multispectral data acquisition. Variable resolutions can be generated depending on the camera control settings. A high-end GPS/INS system in combination with the multi-angle image information yields precise, high-frequency orientation data for the acquired image lines. To handle these data, a completely automated photogrammetric processing system has been developed, which makes it possible to generate multispectral 3D image products for large areas with planimetric and height accuracies in the decimeter range. This accuracy has been confirmed by detailed investigations.
SPARTAN Near-IR Camera
System Overview: The Spartan Infrared Camera is a high spatial resolution near-IR imager at SOAR.
NASA Astrophysics Data System (ADS)
Chi, Yuxi; Yu, Liping; Pan, Bing
2018-05-01
A low-cost, portable, robust and high-resolution single-camera stereo-digital image correlation (stereo-DIC) system for accurate surface three-dimensional (3D) shape and deformation measurements is described. This system adopts a single consumer-grade high-resolution digital Single Lens Reflex (SLR) camera and a four-mirror adaptor, rather than two synchronized industrial digital cameras, for stereo image acquisition. In addition, monochromatic blue light illumination and coupled bandpass filter imaging are integrated to ensure the robustness of the system against ambient light variations. In contrast to conventional binocular stereo-DIC systems, the developed pseudo-stereo-DIC system offers the advantages of low cost, portability, robustness against ambient light variations, and high resolution. The accuracy and precision of the developed single SLR camera-based stereo-DIC system were validated by measuring the 3D shape of a stationary sphere along with in-plane and out-of-plane displacements of a translated planar plate. Application of the established system to thermal deformation measurement of an alumina ceramic plate and a stainless-steel plate subjected to radiation heating was also demonstrated.
Mars Global Coverage by Context Camera on MRO
2017-03-29
In early 2017, after more than a decade of observing Mars, the Context Camera (CTX) on NASA's Mars Reconnaissance Orbiter (MRO) surpassed 99 percent coverage of the entire planet. This mosaic shows that global coverage. No other camera has ever imaged so much of Mars in such high resolution. The mosaic offers a resolution that enables zooming in for more detail of any region of Mars. It is still far from the full resolution of individual CTX observations, which can reveal the shapes of features smaller than the size of a tennis court. As of March 2017, the Context Camera has taken about 90,000 images since the spacecraft began examining Mars from orbit in late 2006. In addition to covering 99.1 percent of the surface of Mars at least once, this camera has observed more than 60 percent of Mars more than once, checking for changes over time and providing stereo pairs for 3-D modeling of the surface. http://photojournal.jpl.nasa.gov/catalog/PIA21488
High-resolution ophthalmic imaging system
Olivier, Scot S.; Carrano, Carmen J.
2007-12-04
A system for providing an improved-resolution retina image, comprising an imaging camera for capturing a retina image and a computer system operatively connected to the imaging camera, the computer producing short exposures of the retina image and providing speckle processing of the short exposures to provide the improved-resolution retina image. The method comprises the steps of capturing a retina image, producing short exposures of the retina image, and speckle processing the short exposures of the retina image to provide the improved-resolution retina image.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Arthur; van Beuzekom, Martin; Bouwens, Bram; ...
2017-11-07
Here, we demonstrate a coincidence velocity map imaging apparatus equipped with a novel time-stamping fast optical camera, Tpx3Cam, whose high sensitivity and nanosecond timing resolution allow for simultaneous position and time-of-flight detection. This single detector design is simple, flexible, and capable of highly differential measurements. We show detailed characterization of the camera and its application in strong field ionization experiments.
Development of a Compton camera for safeguards applications in a pyroprocessing facility
NASA Astrophysics Data System (ADS)
Park, Jin Hyung; Kim, Young Su; Kim, Chan Hyeong; Seo, Hee; Park, Se-Hwan; Kim, Ho-Dong
2014-11-01
The Compton camera has the potential to be used for localizing nuclear materials in a large pyroprocessing facility due to its unique Compton kinematics-based electronic collimation method. Our R&D group, KAERI, and Hanyang University have made an effort to develop a scintillation-detector-based large-area Compton camera for safeguards applications. In the present study, a series of Monte Carlo simulations was performed with Geant4 in order to examine the effect of the detector parameters and the feasibility of using a Compton camera to obtain an image of the nuclear material distribution. Based on the simulation study, experimental studies were performed to assess the possibility of Compton imaging in accordance with the type of the crystal. Two different types of Compton cameras were fabricated and tested: a pixelated type using LYSO(Ce) and a monolithic type using NaI(Tl). The conclusions of this study, as design rules for a large-area Compton camera, can be summarized as follows: 1) the energy resolution, rather than the position resolution, of the component detector was the limiting factor for the imaging resolution; 2) the Compton imaging system needs to be placed as close as possible to the source location; and 3) both pixelated and monolithic types of crystals can be utilized; however, the monolithic type requires a stochastic-method-based position-estimating algorithm to improve the position resolution.
Time-of-Flight Microwave Camera.
Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh
2015-10-05
Microwaves can penetrate many obstructions that are opaque at visible wavelengths; however, microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable "stealth" regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz-12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows "camera-like" behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum.
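The quoted depth and timing figures can be sanity-checked from first principles: an FMCW receiver's range resolution is c/(2B), and a time resolution Δt corresponds to an optical path of c·Δt. A minimal sketch, taking the 4 GHz bandwidth from the 8-12 GHz X-band sweep stated above:

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_range_resolution(bandwidth_hz):
    """Depth (range) resolution of an FMCW receiver: dR = c / (2B)."""
    return C / (2.0 * bandwidth_hz)

def path_from_time(delta_t_s):
    """Free-space optical path length corresponding to a time resolution."""
    return C * delta_t_s

dr = fmcw_range_resolution(4e9)     # ~3.7 cm depth bin for a 4 GHz sweep
path = path_from_time(200e-12)      # ~6 cm, matching the figure quoted above
```

The 200 ps figure thus corresponds directly to the 6 cm free-space path given in the abstract.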
NASA Technical Reports Server (NTRS)
Miller, Chris; Mulavara, Ajitkumar; Bloomberg, Jacob
2001-01-01
To confidently report any data collected from a video-based motion capture system, its functional characteristics must be determined, namely accuracy, repeatability and resolution. Many researchers have examined these characteristics with motion capture systems, but they used only two cameras, positioned 90 degrees to each other. Everaert used 4 cameras, but all were aligned along major axes (two in x, one in y and z). Richards compared the characteristics of different commercially available systems set-up in practical configurations, but all cameras viewed a single calibration volume. The purpose of this study was to determine the accuracy, repeatability and resolution of a 6-camera Motion Analysis system in a split-volume configuration using a quasistatic methodology.
Commissioning and Characterization of a Dedicated High-Resolution Breast PET Camera
2014-02-01
The project aims to achieve 1 mm3 resolution using a unique detector design that is able to measure annihilation radiation coming from the PET tracer in 3... Patients undergoing a regular staging PET/CT will be imaged with the novel two-panel system after the standard PET/CT scan, in order not to interfere with the...
PRINCIPAL INVESTIGATOR: Arne Vandenbroucke, Ph.D.
CONTRACTING ORGANIZATION: Stanford University
Traffic Sign Recognition with Invariance to Lighting in Dual-Focal Active Camera System
NASA Astrophysics Data System (ADS)
Gu, Yanlei; Panahpour Tehrani, Mehrdad; Yendo, Tomohiro; Fujii, Toshiaki; Tanimoto, Masayuki
In this paper, we present an automatic vision-based traffic sign recognition system that can detect and classify traffic signs at long distance under different lighting conditions. To this end, the traffic sign recognition is developed within an originally proposed dual-focal active camera system, in which a telephoto camera serves as an assistant to a wide angle camera. The telephoto camera can capture a high accuracy image of an object of interest in the view field of the wide angle camera, providing enough information for recognition when the traffic sign appears at too low a resolution in the wide angle image. In the proposed system, traffic sign detection and classification are processed separately on the images from the wide angle camera and the telephoto camera. In addition, in order to detect traffic signs against complex backgrounds under different lighting conditions, we propose a color transformation that is invariant to lighting changes. This transformation highlights the pattern of traffic signs by reducing the complexity of the background. Based on the color transformation, a multi-resolution detector with cascade mode is trained and used to locate traffic signs at low resolution in the image from the wide angle camera. After detection, the system actively captures a high accuracy image of each detected traffic sign by controlling the direction and exposure time of the telephoto camera based on information from the wide angle camera. In classification, a hierarchical classifier is constructed and used to recognize the detected traffic signs in the high accuracy image from the telephoto camera. Finally, a set of experiments in the domain of traffic sign recognition is presented. The experimental results demonstrate that the proposed system can effectively recognize traffic signs at low resolution under different lighting conditions.
Low-cost camera modifications and methodologies for very-high-resolution digital images
USDA-ARS?s Scientific Manuscript database
Aerial color and color-infrared photography are usually acquired at high altitude so the ground resolution of the photographs is < 1 m. Moreover, current color-infrared cameras and manned aircraft flight time are expensive, so the objective is the development of alternative methods for obtaining ve...
Solid state replacement of rotating mirror cameras
NASA Astrophysics Data System (ADS)
Frank, Alan M.; Bartolick, Joseph M.
2007-01-01
Rotating mirror cameras have been the mainstay of mega-frame-per-second imaging for decades. There is still no electronic camera that can match a film-based rotary mirror camera for the combination of frame count, speed, resolution and dynamic range. Rotary mirror cameras are predominantly used in the range of 0.1 to 100 microseconds per frame, for 25 to more than a hundred frames. Electron-tube gated cameras dominate the sub-microsecond regime but are frame-count limited. Video cameras are pushing into the microsecond regime but are resolution limited by the high data rates. An all-solid-state architecture, dubbed the 'In-situ Storage Image Sensor' (ISIS) by Prof. Goji Etoh, has made its first appearance on the market, and its evaluation is discussed. Recent work at Lawrence Livermore National Laboratory has concentrated both on evaluating the presently available technologies and on exploring the capabilities of the ISIS architecture. It is clear that, although no single-chip camera can presently match the rotary mirror cameras, the ISIS architecture has the potential to approach their performance.
Surveillance of a 2D Plane Area with 3D Deployed Cameras
Fu, Yi-Ge; Zhou, Jie; Deng, Lei
2014-01-01
As the use of camera networks has expanded, camera placement to satisfy quality assurance parameters (such as a good coverage ratio, acceptable resolution constraints, and a cost as low as possible) has become an important problem. The discrete camera deployment problem is NP-hard, and many heuristic methods have been proposed to solve it, most of which make very simple assumptions. In this paper, we propose a probability-inspired binary Particle Swarm Optimization (PI-BPSO) algorithm to solve a homogeneous camera network placement problem. We model the problem under more realistic assumptions: (1) the cameras are deployed in 3D space while the surveillance area is restricted to a 2D ground plane; (2) a minimal number of cameras is deployed to obtain maximum visual coverage under additional constraints, such as the field of view (FOV) of the cameras and minimum resolution constraints. We can simultaneously optimize the number and the configuration of the cameras through the introduction of a regulation item in the cost function. The simulation results showed the effectiveness of the proposed PI-BPSO algorithm. PMID:24469353
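The core of any binary PSO is a sigmoid-probability bit update; the sketch below shows that generic mechanism (the paper's actual PI-BPSO cost function and coverage constraints are not reproduced here, and the toy objective is invented). Each bit can be read as "place a camera at candidate site i":

```python
import math
import random

random.seed(0)  # deterministic toy run

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def bpso(fitness, n_bits=12, n_particles=10, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Generic binary PSO: velocities are real-valued, positions are bit vectors."""
    X = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    V = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    gbest = max(X, key=fitness)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                # Probability-inspired flip: the sigmoid of the velocity gives
                # the chance that the bit is set to 1.
                X[i][d] = 1 if random.random() < sigmoid(V[i][d]) else 0
            if fitness(X[i]) > fitness(pbest[i]):
                pbest[i] = X[i][:]
            if fitness(X[i]) > fitness(gbest):
                gbest = X[i][:]
    return gbest

# Toy objective standing in for coverage: reward covering all 12 "zones".
# A real cost function would trade coverage against the number of cameras.
best = bpso(lambda x: sum(x))
print(sum(best))
```

Since the global best only improves, the returned bit vector covers at least as many zones as the best random start.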
Innovative Camera and Image Processing System to Characterize Cryospheric Changes
NASA Astrophysics Data System (ADS)
Schenk, A.; Csatho, B. M.; Nagarajan, S.
2010-12-01
The polar regions play an important role in Earth's climatic and geodynamic systems. Digital photogrammetric mapping provides a means for monitoring the dramatic changes observed in the polar regions during the past decades. High-resolution, photogrammetrically processed digital aerial imagery provides complementary information to surface measurements obtained by laser altimetry systems. While laser points accurately sample the ice surface, stereo images allow for the mapping of features, such as crevasses, flow bands, shear margins, moraines, leads, and different types of sea ice. Tracking features in repeat images produces a dense velocity vector field that can either serve as validation for interferometrically derived surface velocities or constitute a stand-alone product. A multi-modal photogrammetric platform consists of one or more high-resolution commercial color cameras, a GPS and inertial navigation system, and an optional laser scanner. Such a system, using a Canon EOS-1DS Mark II camera, was first flown on the IceBridge missions in Fall 2009 and Spring 2010, capturing hundreds of thousands of images at a frame interval of about one second. While digital images and videos have been used for quite some time for visual inspection, precise 3D measurements with low-cost commercial cameras require special photogrammetric treatment that has only become available recently. Calibrating the multi-camera imaging system and geo-referencing the images are absolute prerequisites for all subsequent applications. Commercial cameras are inherently non-metric; that is, their sensor model is only approximately known. Since these cameras are not as rugged as photogrammetric cameras, the interior orientation also changes due to temperature and pressure changes and aircraft vibration, resulting in large errors in 3D measurements. It is therefore necessary to calibrate the cameras frequently, at least whenever the system is newly installed.
Geo-referencing the images is performed with the Applanix navigation system. Our new method enables 3D reconstruction of the ice sheet surface with high accuracy and unprecedented detail, as demonstrated by examples from the Antarctic Peninsula acquired by the IceBridge mission. Repeat digital imaging also provides data for determining surface elevation changes and velocities, which are critical parameters for ice sheet models. Although these methods work well, there are known problems with satellite images and traditional area-based matching, especially over rapidly changing outlet glaciers. To take full advantage of high-resolution, repeat stereo imaging we have developed a new method. The processing starts with the generation of a DEM from geo-referenced stereo images of the first time epoch. The next step is concerned with extracting and matching interest points in object space. Since an interest point moves its spatial position between two time epochs, such points are only radiometrically conjugate, not geometrically. In fact, the geometric displacement of two identical points, together with the time difference, yields velocities. We computed the evolution of the velocity field and surface topography on the floating tongue of Jakobshavn glacier from historical stereo aerial photographs to illustrate the approach.
Calibration of Action Cameras for Photogrammetric Purposes
Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo
2014-01-01
The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to use the sensor of an action camera, we must apply a careful and reliable self-calibration prior to any photogrammetric procedure, a relatively difficult scenario because of the camera's short focal length and the wide angle lens used to obtain the maximum possible image resolution. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3, which can provide still images up to 12 Mp and video up to 8 Mp resolution. PMID:25237898
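The wide angle lens is the crux: radial distortion grows rapidly with distance from the optical axis, which is what the self-calibration must model. A minimal sketch of the standard two-coefficient radial model (the coefficients below are invented for illustration, not GoPro values):

```python
# Standard radial (Brown-Conrady) distortion model with two coefficients;
# k1, k2 are hypothetical values for a wide-angle lens, for illustration only.
k1, k2 = -0.25, 0.05

def distort(x, y):
    """Map ideal normalized image coordinates to distorted coordinates."""
    r2 = x * x + y * y
    factor = 1 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

# Near the optical axis the displacement is negligible; near the edge of a
# wide-angle frame it is large, hence the need for careful calibration.
near = distort(0.05, 0.0)[0] - 0.05
edge = distort(0.8, 0.0)[0] - 0.8
print(abs(near), abs(edge))   # edge displacement is orders of magnitude larger
```

Calibration software estimates k1 and k2 (plus the camera matrix) from known targets, then inverts this mapping to produce the undistorted scenes mentioned above.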
Development of an all-in-one gamma camera/CCD system for safeguard verification
NASA Astrophysics Data System (ADS)
Kim, Hyun-Il; An, Su Jung; Chung, Yong Hyun; Kwak, Sung-Woo
2014-12-01
For the purpose of monitoring and verifying efforts at safeguarding radioactive materials in various fields, a new all-in-one gamma camera/charge-coupled device (CCD) system was developed. This combined system consists of a gamma camera, which gathers energy and position information on gamma-ray sources, and a CCD camera, which identifies the specific location in a monitored area. Therefore, 2-D image information and quantitative information regarding gamma-ray sources can be obtained using fused images. The gamma camera consists of a diverging collimator, a 22 × 22 array CsI(Na) pixelated scintillation crystal with a pixel size of 2 × 2 × 6 mm3, and a Hamamatsu H8500 position-sensitive photomultiplier tube (PSPMT). The Basler scA640-70gc CCD camera, which delivers 70 frames per second at video graphics array (VGA) resolution, was employed. Performance testing was performed using a Co-57 point source 30 cm from the detector. The measured spatial resolution and sensitivity were 4.77 mm full width at half maximum (FWHM) and 7.78 cps/MBq, respectively. The energy resolution was 18% at 122 keV. These results demonstrate that the combined system has considerable potential for radiation monitoring.
Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications
NASA Astrophysics Data System (ADS)
Olson, Gaylord G.; Walker, Jo N.
1997-09-01
Cameras designed to work specifically with computers can have certain advantages over cameras loosely defined as 'video' cameras. In recent years the distinctions between camera types have become somewhat blurred, with a proliferation of 'digital cameras' aimed more at the home market. This latter category is not considered here. The term 'computer camera' herein means one which has low-level computer (and software) control of the CCD clocking. These can often be used to satisfy some of the more demanding machine vision tasks, and in some cases with a higher rate of measurements than video cameras. Several of these specific applications are described here, including some which use recently designed CCDs offering good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application are effects such as 'pixel jitter' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog-to-digital (A/D) sampling points along a video scan line. For the computer camera case these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.
A new omni-directional multi-camera system for high resolution surveillance
NASA Astrophysics Data System (ADS)
Cogal, Omer; Akin, Abdulkadir; Seyid, Kerem; Popovic, Vladan; Schmid, Alexandre; Ott, Beat; Wellig, Peter; Leblebici, Yusuf
2014-05-01
Omni-directional high resolution surveillance has a wide application range in defense and security fields. Early systems used for this purpose are based on parabolic mirrors or fisheye lenses, where distortion due to the nature of the optical elements cannot be avoided. Moreover, in such systems the image resolution is limited to that of a single image sensor. Recently, the Panoptic camera approach that mimics the eyes of flying insects using multiple imagers has been presented. This approach features a novel solution for constructing a spherically arranged wide-FOV plenoptic imaging system where the omni-directional image quality is limited by low-end sensors. In this paper, an overview of current Panoptic camera designs is provided. New results for a very-high resolution visible spectrum imaging and recording system inspired by the Panoptic approach are presented. The GigaEye-1 system, with 44 single cameras and 22 FPGAs, is capable of recording omni-directional video in a 360°×100° FOV at 9.5 fps with a resolution over (17,700×4,650) pixels (82.3MP). Real-time video capturing capability is also verified at 30 fps for a resolution over (9,000×2,400) pixels (21.6MP). The next generation system with significantly higher resolution and real-time processing capacity, called GigaEye-2, is currently under development. The important capacity of GigaEye-1 opens the door to various post-processing techniques in the surveillance domain, such as large perimeter object tracking, very-high resolution depth map estimation and high dynamic-range imaging, which are beyond standard stitching and panorama generation methods.
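A quick arithmetic check of the quoted pixel counts and throughput (derived only from the figures in the abstract):

```python
# Pixel counts quoted for GigaEye-1, checked explicitly.
recording = 17_700 * 4_650        # omni-directional recording mode
realtime = 9_000 * 2_400          # real-time mode
print(recording / 1e6)            # 82.305 -> the quoted 82.3 MP
print(realtime / 1e6)             # 21.6 MP

# Raw pixel throughput in recording mode, before any compression:
print(recording * 9.5 / 1e6)      # ~782 Mpixel/s at 9.5 fps
```

That throughput, spread over 44 sensors and 22 FPGAs, is what makes the multi-FPGA architecture necessary.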
Gyrocopter-Based Remote Sensing Platform
NASA Astrophysics Data System (ADS)
Weber, I.; Jenal, A.; Kneer, C.; Bongartz, J.
2015-04-01
In this paper the development of a lightweight and highly modularized airborne sensor platform for remote sensing applications utilizing a gyrocopter as a carrier platform is described. The current sensor configuration consists of a high resolution DSLR camera for VIS-RGB recordings. As a second sensor modality, a snapshot hyperspectral camera was integrated in the aircraft. Moreover a custom-developed thermal imaging system composed of a VIS-PAN camera and a LWIR-camera is used for aerial recordings in the thermal infrared range. Furthermore another custom-developed highly flexible imaging system for high resolution multispectral image acquisition with up to six spectral bands in the VIS-NIR range is presented. The performance of the overall system was tested during several flights with all sensor modalities and the precalculated demands with respect to spatial resolution and reliability were validated. The collected data sets were georeferenced, georectified, orthorectified and then stitched to mosaics.
NASA Astrophysics Data System (ADS)
Salach, A.; Markiewicza, J. S.; Zawieska, D.
2016-06-01
An orthoimage is one of the basic photogrammetric products used for architectural documentation of historical objects; recently, it has become a standard in such work. Considering the increasing popularity of photogrammetric techniques applied in the cultural heritage domain, this research examines the two most popular measuring technologies: terrestrial laser scanning, and automatic processing of digital photographs. The basic objective of the performed works presented in this paper was to optimize the quality of generated high-resolution orthoimages using integration of data acquired by a Z+F 5006 terrestrial laser scanner and a Canon EOS 5D Mark II digital camera. The subject was one of the walls of the "Blue Chamber" of the Museum of King Jan III's Palace at Wilanów (Warsaw, Poland). The high-resolution images resulting from integration of the point clouds acquired by the different methods were analysed in detail with respect to geometric and radiometric correctness.
Wells, Darren M.; French, Andrew P.; Naeem, Asad; Ishaq, Omer; Traini, Richard; Hijazi, Hussein; Bennett, Malcolm J.; Pridmore, Tony P.
2012-01-01
Roots are highly responsive to environmental signals encountered in the rhizosphere, such as nutrients, mechanical resistance and gravity. As a result, root growth and development is very plastic. If this complex and vital process is to be understood, methods and tools are required to capture the dynamics of root responses. Tools are needed which are high-throughput, supporting large-scale experimental work, and provide accurate, high-resolution, quantitative data. We describe and demonstrate the efficacy of the high-throughput and high-resolution root imaging systems recently developed within the Centre for Plant Integrative Biology (CPIB). This toolset includes (i) robotic imaging hardware to generate time-lapse datasets from standard cameras under infrared illumination and (ii) automated image analysis methods and software to extract quantitative information about root growth and development both from these images and via high-resolution light microscopy. These methods are demonstrated using data gathered during an experimental study of the gravitropic response of Arabidopsis thaliana. PMID:22527394
Wells, Darren M; French, Andrew P; Naeem, Asad; Ishaq, Omer; Traini, Richard; Hijazi, Hussein I; Hijazi, Hussein; Bennett, Malcolm J; Pridmore, Tony P
2012-06-05
Roots are highly responsive to environmental signals encountered in the rhizosphere, such as nutrients, mechanical resistance and gravity. As a result, root growth and development is very plastic. If this complex and vital process is to be understood, methods and tools are required to capture the dynamics of root responses. Tools are needed which are high-throughput, supporting large-scale experimental work, and provide accurate, high-resolution, quantitative data. We describe and demonstrate the efficacy of the high-throughput and high-resolution root imaging systems recently developed within the Centre for Plant Integrative Biology (CPIB). This toolset includes (i) robotic imaging hardware to generate time-lapse datasets from standard cameras under infrared illumination and (ii) automated image analysis methods and software to extract quantitative information about root growth and development both from these images and via high-resolution light microscopy. These methods are demonstrated using data gathered during an experimental study of the gravitropic response of Arabidopsis thaliana.
NASA Astrophysics Data System (ADS)
Gonzaga, S.; et al.
2011-03-01
ACS was designed to provide a deep, wide-field survey capability from the visible to near-IR using the Wide Field Camera (WFC), high resolution imaging from the near-UV to near-IR with the now-defunct High Resolution Camera (HRC), and solar-blind far-UV imaging using the Solar Blind Camera (SBC). The discovery efficiency of ACS's Wide Field Channel (i.e., the product of WFC's field of view and throughput) is 10 times greater than that of WFPC2. The failure of ACS's CCD electronics in January 2007 brought a temporary halt to CCD imaging until Servicing Mission 4 in May 2009, when WFC functionality was restored. Unfortunately, the high-resolution optical imaging capability of HRC was not recovered.
A Spatio-Spectral Camera for High Resolution Hyperspectral Imaging
NASA Astrophysics Data System (ADS)
Livens, S.; Pauly, K.; Baeck, P.; Blommaert, J.; Nuyts, D.; Zender, J.; Delauré, B.
2017-08-01
Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600-900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475-925 nm), and we discuss future work.
Report Of The HST Strategy Panel: A Strategy For Recovery
1991-01-01
orbit change out: the Wide Field/Planetary Camera II (WFPC II), the Near-Infrared Camera and Multi-Object Spectrometer (NICMOS) and the Space ...are the Space Telescope Imaging Spectrograph (STIS), the Near-Infrared Camera and Multi-Object Spectrometer (NICMOS), and the second Wide Field and...expected to fail to lock due to duplicity was 20%; on-orbit data indicates that 10% may be a better estimate, but the guide stars were preselected
Performance evaluation of a two detector camera for real-time video.
Lochocki, Benjamin; Gambín-Regadera, Adrián; Artal, Pablo
2016-12-20
Single pixel imaging can be the preferred method over traditional 2D-array imaging in spectral ranges where conventional cameras are not available. However, when it comes to real-time video imaging, single pixel imaging cannot compete with the framerates of conventional cameras, especially when high-resolution images are desired. Here we evaluate the performance of an imaging approach using two detectors simultaneously. First, we present theoretical results on how low SNR affects final image quality followed by experimentally determined results. Obtained video framerates were doubled compared to state of the art systems, resulting in a framerate from 22 Hz for a 32×32 resolution to 0.75 Hz for a 128×128 resolution image. Additionally, the two detector imaging technique enables the acquisition of images with a resolution of 256×256 in less than 3 s.
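The resolution/framerate trade-off stated here follows directly from the single-pixel measurement model: an N-pixel image needs on the order of N structured-illumination patterns. A minimal sketch with orthogonal Hadamard patterns (illustrative only, not the paper's two-detector optics):

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix; n a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n = 16                                 # a 4x4 "image" flattened to 16 pixels
scene = np.arange(n, dtype=float)      # toy scene
H = hadamard(n)
y = H @ scene                          # one detector reading per pattern: n readings
recon = (H.T @ y) / n                  # H @ H.T = n * I for this construction
print(bool(np.allclose(recon, scene))) # -> True
```

Doubling the resolution per side quadruples the pattern count, which is why the reported framerate drops from 22 Hz at 32×32 towards sub-hertz rates at 128×128, and why adding a second detector helps.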
Spatial and Angular Resolution Enhancement of Light Fields Using Convolutional Neural Networks
NASA Astrophysics Data System (ADS)
Gul, M. Shahzeb Khan; Gunturk, Bahadir K.
2018-05-01
Light field imaging extends the traditional photography by capturing both spatial and angular distribution of light, which enables new capabilities, including post-capture refocusing, post-capture aperture control, and depth estimation from a single shot. Micro-lens array (MLA) based light field cameras offer a cost-effective approach to capture light field. A major drawback of MLA based light field cameras is low spatial resolution, which is due to the fact that a single image sensor is shared to capture both spatial and angular information. In this paper, we present a learning based light field enhancement approach. Both spatial and angular resolution of captured light field is enhanced using convolutional neural networks. The proposed method is tested with real light field data captured with a Lytro light field camera, clearly demonstrating spatial and angular resolution improvement.
90 GHz and 150 GHz Observations of the Orion M42 Region. A Submillimeter to Radio Analysis
NASA Technical Reports Server (NTRS)
Dicker, S. R.; Mason, B. S.; Korngut, P. M.; Cotton, W. D.; Compiegne, M.; Devlin, M. J.; Martin, P. G.; Ade, P. A. R; Benford, D. J.; Irwin, K. D.;
2009-01-01
We have used the new 90 GHz MUSTANG camera on the Robert C. Byrd Green Bank Telescope (GBT) to map the bright Huygens region of the star-forming region M42 with a resolution of 9" and a sensitivity of 2.8 mJy/beam. Ninety GHz is an interesting transition frequency, as MUSTANG detects both the free-free emission characteristic of the H II region created by the Trapezium stars, normally seen at lower frequencies, and thermal dust emission from the background OMC-1 molecular cloud, normally mapped at higher frequencies. We also present similar data from the 150 GHz GISMO camera taken on the IRAM 30 m telescope. This map has 15" resolution. By combining the MUSTANG data with 1.4, 8, and 31 GHz radio data from the VLA and GBT, we derive a new estimate of the emission-measure-averaged electron temperature of T(sub e) = 11376+/-1050 K by an original method relating free-free emission intensities at optically thin and optically thick frequencies. Combining Infrared Space Observatory long wavelength spectrometer (ISO-LWS) data with our data, we derive a new estimate of the dust temperature and spectral emissivity index within the 80" ISO-LWS beam toward Orion KL/BN: T(sub d) = 42+/-3 K and Beta(sub d) = 1.3+/-0.1. We show that both T(sub d) and Beta(sub d) decrease when going from the H II region and excited OMC-1 interface to the denser, UV-shielded part of OMC-1 (Orion KL/BN, Orion S). With a model consisting of only free-free and thermal dust emission, we are able to fit data taken at frequencies from 1.5 GHz to 854 GHz (350 micrometers).
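The two-component fit described in the last sentence can be sketched as follows (the amplitudes here are arbitrary; only the spectral slopes follow the physics quoted in the abstract, with optically thin free-free scaling roughly as nu^-0.1 and thermal dust as nu^(2+beta) in the Rayleigh-Jeans regime):

```python
beta = 1.3                    # dust spectral emissivity index from the abstract

def components(nu_ghz, A=1.0, B=1e-8):
    """Hypothetical amplitudes A, B; returns (free-free, dust) intensities."""
    free_free = A * nu_ghz ** -0.1          # optically thin free-free
    dust = B * nu_ghz ** (2 + beta)         # thermal dust, Rayleigh-Jeans limit
    return free_free, dust

# Free-free dominates at radio frequencies, dust at submillimetre ones:
ff_lo, dust_lo = components(1.5)            # 1.5 GHz
ff_hi, dust_hi = components(854.0)          # 854 GHz (350 micrometers)
print(ff_lo > dust_lo, dust_hi > ff_hi)     # -> True True
```

The crossover of the two slopes near 90 GHz is exactly why MUSTANG sees both emission mechanisms at once.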
Sub-picosecond streak camera measurements at LLNL: From IR to x-rays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuba, J; Shepherd, R; Booth, R
An ultrafast, sub-picosecond resolution streak camera has recently been developed at LLNL. The camera is a versatile instrument with a wide operating wavelength range. A temporal resolution of up to 300 fs can be achieved, with routine operation at 500 fs. The streak camera has been operated over a wide wavelength range, from the IR to x-rays up to 2 keV. In this paper we briefly review the main design features that result in the unique properties of the streak camera and present several of its scientific applications: (1) streak camera characterization using a Michelson interferometer in the visible range, (2) a temporally resolved study of a transient x-ray laser at 14.7 nm, which enabled us to vary the x-ray laser pulse duration from ~2-6 ps by changing the pump laser parameters, and (3) an example of a time-resolved spectroscopy experiment with the streak camera.
Minimum Requirements for Taxicab Security Cameras.
Zeng, Shengke; Amandus, Harlan E; Amendola, Alfred A; Newbraugh, Bradley H; Cantis, Douglas M; Weaver, Darlene
2014-07-01
The homicide rate in the taxicab industry is 20 times greater than that of all workers. A NIOSH study showed that cities with taxicab security cameras experienced a significant reduction in taxicab driver homicides. Minimum technical requirements and a standard test protocol for taxicab security cameras for effective facial identification were determined. The study took more than 10,000 photographs of human face charts in a simulated taxicab with various photographic resolutions, dynamic ranges, lens distortions, and motion blurs under various light and cab-seat conditions. Thirteen volunteer photograph evaluators evaluated these face photographs and voted on the minimum technical requirements for taxicab security cameras. Five worst-case-scenario photographic image quality thresholds were suggested: a resolution of XGA format, a highlight dynamic range of 1 EV, a twilight dynamic range of 3.3 EV, a lens distortion of 30%, and a shutter speed of 1/30 second. These minimum requirements will help taxicab regulators and fleets identify effective taxicab security cameras, and help taxicab security camera manufacturers improve the cameras' facial identification capability.
NASA Astrophysics Data System (ADS)
Michaelis, Dirk; Schroeder, Andreas
2012-11-01
Tomographic PIV has triggered vivid activity, reflected in a large number of publications covering both development of the technique and a wide range of fluid dynamic experiments. The maturing of tomographic PIV allows its application in medium to large scale wind tunnels. A limiting factor for wind tunnel application is the small size of the measurement volume, typically about 50 × 50 × 15 mm3. The aim of this study is optimization towards large measurement volumes and high spatial resolution, performing cylinder wake measurements in a 1 meter wind tunnel. The main limiting factors for the volume size are the laser power and the camera sensitivity, so a high power laser with 800 mJ per pulse is used together with low-noise sCMOS cameras, mounted in the forward scattering direction to gain intensity from the Mie scattering characteristics. A mirror is used to bounce the light back, so that all cameras are in forward scattering. The achievable particle density grows with the number of cameras, so eight cameras are used for high spatial resolution. These optimizations lead to a volume size of 230 × 200 × 52 mm3 = 2392 cm3, more than 60 times larger than previously. 281 × 323 × 68 vectors are calculated with a spacing of 0.76 mm. The achieved measurement volume size and spatial resolution are regarded as a major step forward in the application of tomographic PIV in wind tunnels. Supported by EU-project no. 265695.
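The volume gain quoted above checks out arithmetically (a simple check using only the figures in the abstract):

```python
# Measurement-volume arithmetic from the figures quoted above.
previous = 50 * 50 * 15        # mm^3, typical earlier tomographic PIV volume
achieved = 230 * 200 * 52      # mm^3, this study
print(achieved / 1000)         # -> 2392.0 cm^3, as quoted
print(achieved / previous)     # ~63.8, i.e. "more than 60 times larger"
```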
Full-Frame Reference for Test Photo of Moon
NASA Technical Reports Server (NTRS)
2005-01-01
This pair of views shows how little of the full image frame was taken up by the Moon in test images taken Sept. 8, 2005, by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter. The Mars-bound camera imaged Earth's Moon from a distance of about 10 million kilometers (6 million miles) away -- 26 times the distance between Earth and the Moon -- as part of an activity to test and calibrate the camera. The images are very significant because they show that the Mars Reconnaissance Orbiter spacecraft and this camera can properly operate together to collect very high-resolution images of Mars. The target must move through the camera's telescope view in just the right direction and speed to acquire a proper image. The day's test images also demonstrate that the focus mechanism works properly with the telescope to produce sharp images. Out of the 20,000-pixel-by-6,000-pixel full frame, the Moon's diameter is about 340 pixels, if the full Moon could be seen. The illuminated crescent is about 60 pixels wide, and the resolution is about 10 kilometers (6 miles) per pixel. At Mars, the entire image region will be filled with high-resolution information. The Mars Reconnaissance Orbiter, launched on Aug. 12, 2005, is on course to reach Mars on March 10, 2006. After gradually adjusting the shape of its orbit for half a year, it will begin its primary science phase in November 2006. From the mission's planned science orbit about 300 kilometers (186 miles) above the surface of Mars, the high resolution camera will be able to discern features as small as one meter or yard across. The Mars Reconnaissance Orbiter mission is managed by NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology, Pasadena, for the NASA Science Mission Directorate. Lockheed Martin Space Systems, Denver, prime contractor for the project, built the spacecraft. 
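The pixel-scale figure quoted in the caption follows from the stated distance and apparent size; the sketch below assumes the standard Moon diameter of 3474 km, which is not given in the caption itself.

```python
# Back-of-envelope check of the HiRISE Moon-test numbers.
# Moon diameter (3474 km) is a standard value, assumed here.
distance_km = 10_000_000          # quoted camera-to-Moon distance
moon_diam_km = 3474               # assumption, not from the caption
moon_pixels = 340                 # quoted apparent diameter in pixels

scale_km_per_px = moon_diam_km / moon_pixels
ifov_rad = scale_km_per_px / distance_km   # per-pixel angular resolution

print(round(scale_km_per_px, 1))  # ~10.2 km/pixel, matching "about 10 km"
```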
Ball Aerospace & Technologies Corp., Boulder, Colo., built the High Resolution Imaging Science Experiment instrument for the University of Arizona, Tucson, which provides it to the mission. The HiRISE Operations Center at the University of Arizona processes images from the camera.
Science Applications of the RULLI Camera: Photon Thrust, General Relativity and the Crab Nebula
NASA Astrophysics Data System (ADS)
Currie, D.; Thompson, D.; Buck, S.; Des Georges, R.; Ho, C.; Remelius, D.; Shirey, B.; Gabriele, T.; Gamiz, V.; Ulibarri, L.; Hallada, M.; Szymanski, P.
RULLI (Remote Ultra-Low Light Imager) is a unique single-photon imager with very high (microsecond) time resolution and continuous sensitivity, developed at Los Alamos National Laboratory. This technology allows a family of astrophysical and satellite observations that were not feasible in the past. We describe the results of the analysis of recent observations of the LAGEOS II satellite and the opportunities expected for future observations of the Crab Nebula. The LAGEOS/LARES experiments have measured the dynamical General Relativistic effects of the rotation of the Earth, i.e., the Lense-Thirring effect. The major error sources are photon thrust and the required knowledge of the orientation of the spin axis of LAGEOS. This information is required for the analysis of the observations to date, and for future observations to obtain more accurate measurements of the Lense-Thirring effect, of deviations from the inverse square law, and of other General Relativistic effects. The rotation of LAGEOS I is already too slow for traditional measurement methods, and LAGEOS II will soon suffer a similar fate. The RULLI camera can provide new information and an extension of the lifetime of these measurements. We discuss the 2004 LANL observations of LAGEOS at the Starfire Optical Range, the unique software processing methods that allow high-accuracy analysis of the data (the FROID algorithm), and the transformation that allows the use of such data to obtain the orientation of the spin axis of the satellite. We are also planning future observations, including observations of the nebula surrounding the Crab Pulsar. The rapidly rotating pulsar generates enormous magnetic fields, a synchrotron plasma, and stellar winds moving at nearly the velocity of light. Since the useful observations to date rely only on the beamed emission when it points toward the Earth, most descriptions of the details of the processes have been largely theoretical.
The RULLI camera's continuous sensitivity and high time resolution should enable better signal-to-noise ratios for observations that may reveal properties such as the orientation of the rotational and magnetic axes of the pulsar; the temperature, composition, and electrical state of the plasma; and the effects of the magnetic field.
Applications of Action Cam Sensors in the Archaeological Yard
NASA Astrophysics Data System (ADS)
Pepe, M.; Ackermann, S.; Fregonese, L.; Fassi, F.; Adami, A.
2018-05-01
In recent years, special digital cameras called "action cameras" or "action cams" have become popular due to their low price, small size, light weight, ruggedness, and capacity to take videos and photos even in extreme environmental conditions. Indeed, these cameras have been designed mainly to capture sport actions and to work despite dirt, bumps, underwater use, and varying external temperatures. High-resolution digital single-lens reflex (DSLR) cameras are usually preferred in the photogrammetric field: beyond sensor resolution, the combination of such cameras with fixed lenses of low distortion is preferred for accurate 3D measurements. Action cameras, on the contrary, have small wide-angle lenses with lower performance in terms of sensor resolution, lens quality, and distortion. However, considering the ability of action cameras to acquire under conditions that may be difficult for standard DSLR cameras, and their lower price, they can be considered a possible and interesting approach during archaeological excavation activities to document the state of the site. In this paper, the influence of lens radial distortion and chromatic aberration on this type of camera in self-calibration mode, and an evaluation of their application in the field of Cultural Heritage, are investigated and discussed. Using a suitable technique, it has been possible to improve the accuracy of the 3D model obtained from action cam images. Case studies show the quality and utility of this type of sensor in the survey of archaeological artefacts.
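The radial-distortion correction discussed above is commonly modeled with the Brown-Conrady polynomial; this is a minimal sketch of that general model, not the paper's calibration, and the coefficients k1, k2 are hypothetical values.

```python
# Sketch of the Brown-Conrady radial distortion model often used in
# self-calibration of wide-angle action cameras. k1, k2 are hypothetical.
def undistort_point(x, y, k1, k2):
    """Map a distorted normalized image point toward its ideal position,
    using a first-order inverse of r' = r * (1 + k1*r^2 + k2*r^4)."""
    r2 = x * x + y * y
    factor = 1 + k1 * r2 + k2 * r2 * r2   # forward distortion factor
    return x / factor, y / factor          # approximate inverse

# A point halfway out on the x-axis with strong barrel distortion:
x_u, y_u = undistort_point(0.5, 0.0, k1=-0.2, k2=0.02)
```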
Performance evaluation of a quasi-microscope for planetary landers
NASA Technical Reports Server (NTRS)
Burcher, E. E.; Huck, F. O.; Wall, S. D.; Woehrle, S. B.
1977-01-01
Spatial resolutions achieved with cameras on lunar and planetary landers have been limited to about 1 mm, whereas microscopes of the type proposed for such landers could have obtained resolutions of about 1 um but were never accepted because of their complexity and weight. The quasi-microscope evaluated in this paper could provide intermediate resolutions of about 10 um with relatively simple optics that would augment a camera, such as the Viking lander camera, without imposing special design requirements on the camera or limiting its field of view of the terrain. Images of natural particulate samples taken in black and white and in color show that grain size, shape, and texture are made visible for unconsolidated materials in a 50- to 500-um size range. Such information may provide broad outlines of planetary surface mineralogy and allow inferences to be made of grain origin and evolution. The mineralogical descriptions of single grains would be aided by reflectance spectra that could, for example, be estimated from the six-channel multispectral data of the Viking lander camera.
Sambot II: A self-assembly modular swarm robot
NASA Astrophysics Data System (ADS)
Zhang, Yuchao; Wei, Hongxing; Yang, Bo; Jiang, Cancan
2018-04-01
The new generation of self-assembly modular swarm robot, Sambot II, based on the original self-assembly modular swarm robot Sambot and adopting a laser and camera module for information collection, is introduced in this manuscript. The visual control algorithm of Sambot II is detailed, and the feasibility of the algorithm is verified by laser and camera experiments. At the end of the manuscript, autonomous docking experiments with two Sambot II robots are presented. The results of the experiments are shown and analyzed to verify the feasibility of the whole Sambot II scheme.
Emittance and lifetime measurement with damping wigglers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, G. M.; Shaftan, T., E-mail: shaftan@bnl.gov; Cheng, W. X.
National Synchrotron Light Source II (NSLS-II) is a new third-generation storage ring light source at Brookhaven National Laboratory. The storage ring design calls for small horizontal emittance (<1 nm-rad) and diffraction-limited vertical emittance at 12 keV (8 pm-rad). Achieving such low values of the beam size will enable novel user experiments with nm-range spatial and meV-energy resolution. The high-brightness NSLS-II lattice has been realized by implementing 30-cell double bend achromatic cells producing a horizontal emittance of 2 nm-rad, and then halving it further by using several Damping Wigglers (DWs). This paper is focused on characterization of the DW effects on the storage ring performance, namely, on reduction of the beam emittance and the corresponding changes in the energy spread and beam lifetime. The relevant beam parameters have been measured by the X-ray pinhole camera, beam position monitors, beam filling pattern monitor, and current transformers. In this paper, we compare the measured results of the beam performance with analytic estimates for the complement of the 3 DWs installed at NSLS-II.
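The "diffraction-limited at 12 keV" vertical emittance quoted above follows from the standard relation eps = lambda / (4*pi); this sketch checks that it reproduces the 8 pm-rad design target.

```python
import math

# Diffraction-limited emittance at a given photon energy:
# eps = lambda / (4*pi), with lambda from the hc conversion constant.
E_keV = 12.0
hc_eV_nm = 1239.84                          # h*c in eV*nm (CODATA value)
lam_m = hc_eV_nm / (E_keV * 1e3) * 1e-9     # photon wavelength in metres

eps_pm_rad = lam_m / (4 * math.pi) * 1e12   # emittance in pm-rad
print(round(eps_pm_rad, 1))                 # ~8.2 pm-rad, consistent with
                                            # the 8 pm-rad design target
```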
Performance Evaluation of 98 CZT Sensors for Their Use in Gamma-Ray Imaging
NASA Astrophysics Data System (ADS)
Dedek, Nicolas; Speller, Robert D.; Spendley, Paul; Horrocks, Julie A.
2008-10-01
98 SPEAR sensors from eV Products have been evaluated for use in a portable Compton camera. The sensors have a 5 mm × 5 mm × 5 mm CdZnTe crystal and are provided together with a preamplifier. The energy resolution was studied in detail for all sensors and was found to be 6% on average at 59.5 keV and 3% on average at 662 keV. The standard deviations of the corresponding energy resolution distributions are remarkably small (0.6% at 59.5 keV, 0.7% at 662 keV) and reflect the uniformity of the sensor characteristics. For possible outdoor use, the temperature dependence of the sensor performance was investigated for temperatures between 15 and 45 degrees Celsius. A linear shift in calibration with temperature was observed. The energy resolution at low energies (81 keV) was found to deteriorate exponentially with temperature, while it stayed constant at higher energies (356 keV). A Compton camera built of these sensors was simulated. To obtain realistic energy spectra, a suitable detector response function was implemented. To investigate the angular resolution of the camera, a 137Cs point source was simulated. Reconstructed images of the point source were compared for perfect and realistic energy and position resolutions. The angular resolution of the camera was found to be better than 10 degrees.
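Compton-camera reconstruction, as simulated above, places each event on a cone whose opening angle follows from the Compton kinematics formula; this sketch computes that angle for the 137Cs line (662 keV) used in the simulations, with an arbitrary example value for the scattered-photon energy.

```python
import math

M_E = 511.0  # electron rest energy, keV

def compton_angle_deg(e_in_keV, e_scattered_keV):
    """Scattering angle from incident and scattered photon energies:
    cos(theta) = 1 - m_e c^2 * (1/E' - 1/E)."""
    cos_t = 1.0 - M_E * (1.0 / e_scattered_keV - 1.0 / e_in_keV)
    return math.degrees(math.acos(cos_t))

# 662 keV photon (137Cs) scattering down to 400 keV (example value):
theta = compton_angle_deg(662.0, 400.0)
```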
Free-form reflective optics for mid-infrared camera and spectrometer on board SPICA
NASA Astrophysics Data System (ADS)
Fujishiro, Naofumi; Kataza, Hirokazu; Wada, Takehiko; Ikeda, Yuji; Sakon, Itsuki; Oyabu, Shinki
2017-11-01
SPICA (Space Infrared Telescope for Cosmology and Astrophysics) is an astronomical mission optimized for mid- and far-infrared astronomy with a cryogenically cooled 3-m class telescope, envisioned for launch in the early 2020s. The Mid-infrared Camera and Spectrometer (MCS) is a focal plane instrument for SPICA with imaging and spectroscopic observing capabilities in the mid-infrared wavelength range of 5-38 μm. MCS consists of two relay optical modules and the following four scientific optical modules: WFC (Wide Field Camera; 5' × 5' field of view, f/11.7 and f/4.2 cameras), LRS (Low Resolution Spectrometer; 2'.5 long slits, prism dispersers, f/5.0 and f/1.7 cameras, spectral resolving power R ∼ 50-100), MRS (Mid Resolution Spectrometer; echelles, integral field units by image slicer, f/3.3 and f/1.9 cameras, R ∼ 1100-3000), and HRS (High Resolution Spectrometer; immersed echelles, f/6.0 and f/3.6 cameras, R ∼ 20000-30000). Here, we present the optical design and expected optical performance of MCS. Most parts of the MCS optics adopt an off-axis reflective system to cover the wide wavelength range of 5-38 μm without chromatic aberration and to minimize problems due to changes in the shapes and refractive indices of materials from room temperature to cryogenic temperature. In order to achieve the demanding specifications of wide field of view, small F-number, and large spectral resolving power in a compact size, we employed the paraxial and aberration analysis of off-axial optical systems (Araki 2005 [1]), a design method using free-form surfaces for compact reflective optics such as head-mounted displays. As a result, we have successfully designed compact reflective optics for MCS with as-built performance of diffraction-limited image resolution.
Time-of-Flight Microwave Camera
Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh
2015-01-01
Microwaves can penetrate many obstructions that are opaque at visible wavelengths, however microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable “stealth” regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz–12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows “camera-like” behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum. PMID:26434598
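The depth figures quoted for the FMCW receiver follow directly from the sweep bandwidth and time resolution; this sketch reproduces them from first principles.

```python
# FMCW depth figures implied by the quoted system parameters.
C = 299_792_458.0        # speed of light, m/s
bandwidth_hz = 4e9       # X-band sweep, 8-12 GHz
time_res_s = 200e-12     # quoted time resolution, 200 ps

# Two-way range resolution of an FMCW radar: dR = c / (2*B)
range_res_cm = C / (2 * bandwidth_hz) * 100   # ~3.7 cm

# Free-space optical path covered in one time-resolution element:
path_cm = C * time_res_s * 100                # ~6 cm, as quoted
```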
MPGD for breast cancer prevention: a high resolution and low dose radiation medical imaging
NASA Astrophysics Data System (ADS)
Gutierrez, R. M.; Cerquera, E. A.; Mañana, G.
2012-07-01
Early detection of small calcifications in mammograms is considered the best preventive tool against breast cancer. However, existing digital mammography with relatively low radiation skin exposure has limited accessibility and insufficient spatial resolution for small calcification detection. Micro-Pattern Gaseous Detectors (MPGD) and associated technologies increasingly provide new information useful for generating images of microscopic structures, and make cutting-edge technology more accessible for medical imaging and many other applications. In this work we foresee and develop an application for the new information provided by an MPGD camera in the form of highly controlled images with high dynamic resolution. We present a new Super Detail Image (SDI) method that efficiently exploits this new information from the MPGD camera to obtain very high spatial resolution images. The method presented in this work therefore shows that the MPGD camera with SDI can produce mammograms with the spatial resolution necessary to detect microcalcifications. It would substantially increase the efficiency and accessibility of screening mammography and thereby improve breast cancer prevention.
High resolution bone mineral densitometry with a gamma camera
NASA Technical Reports Server (NTRS)
Leblanc, A.; Evans, H.; Jhingran, S.; Johnson, P.
1983-01-01
A technique by which the regional distribution of bone mineral can be determined in bone samples from small animals is described. The technique employs an Anger camera interfaced to a medical computer. High resolution imaging is possible by producing magnified images of the bone samples. Regional densitometry of femurs from oophorectomised animals demonstrated measurable bone mineral loss.
HST image restoration: A comparison of pre- and post-servicing mission results
NASA Technical Reports Server (NTRS)
Hanisch, R. J.; Mo, J.
1992-01-01
A variety of image restoration techniques (e.g., Wiener filter, Lucy-Richardson, MEM) have been applied quite successfully to the aberrated HST images. The HST servicing mission (scheduled for late 1993 or early 1994) will install a corrective optics system (COSTAR) for the Faint Object Camera and spectrographs and replace the Wide Field/Planetary Camera with a second generation instrument (WF/PC-II) having its own corrective elements. The image quality is expected to be improved substantially with these new instruments. What then is the role of image restoration for the HST in the long term? Through a series of numerical experiments using model point-spread functions for both aberrated and unaberrated optics, we find that substantial improvements in image resolution can be obtained for post-servicing mission data using the same or similar algorithms as being employed now to correct aberrated images. Included in our investigations are studies of the photometric integrity of the restoration algorithms and explicit models for HST pointing errors (spacecraft jitter).
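The Richardson-Lucy algorithm named above can be sketched in a few lines; this toy 1-D version is illustrative only (the HST restorations used measured 2-D point-spread functions).

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=30):
    """Iterative Richardson-Lucy deconvolution on a 1-D signal."""
    estimate = np.full_like(observed, observed.mean())  # flat initial guess
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)   # avoid divide-by-zero
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

psf = np.array([0.25, 0.5, 0.25])          # toy symmetric blur kernel
truth = np.zeros(32); truth[16] = 1.0      # point source
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf)  # peak sharpens back toward truth
```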
SU-E-J-197: Investigation of Microsoft Kinect 2.0 Depth Resolution for Patient Motion Tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silverstein, E; Snyder, M
2015-06-15
Purpose: Investigate the use of the Kinect 2.0 for patient motion tracking during radiotherapy by studying its spatial and depth resolution capabilities. Methods: Using code written in C#, depth map data was abstracted from the Kinect to create an initial depth map template indicative of the initial position of an object, to be compared to the depth map of the object over time. To test this process, a simple setup was created in which two objects were imaged: a 40 cm × 40 cm board covered in non-reflective material and a 15 cm × 26 cm textbook with a slightly reflective, glossy cover. Each object, imaged and measured separately, was placed on a movable platform with the object-to-camera distance measured. The object was then moved a specified amount to ascertain whether the Kinect's depth camera would visualize the difference in position of the object. Results: Initial investigations have shown the Kinect depth resolution is dependent on the object-to-camera distance. Measurements indicate that movements as small as 1 mm can be visualized for objects as close as 50 cm away. This depth resolution decreases linearly with object-to-camera distance. At 4 m, the depth resolution had decreased such that the minimum observable movement was 1 cm. Conclusion: The improved resolution and advanced hardware of the Kinect 2.0 allow an increase in depth resolution over the Kinect 1.0. Although it is obvious that the depth resolution should decrease with increasing distance from an object, given the decrease in the number of pixels representing said object, the depth resolution at large distances indicates its usefulness in a clinical setting.
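The template-versus-live comparison described above can be sketched as a per-pixel depth difference against a distance-dependent threshold; the 512 × 424 frame size is the Kinect 2.0 depth resolution, and the object positions here are hypothetical.

```python
import numpy as np

# Depth-map motion detection in the spirit of the study: subtract an
# initial template frame from live frames and flag pixels whose depth
# change exceeds the resolution threshold at that camera distance.
template = np.full((424, 512), 500.0)   # depth in mm; object ~50 cm away
live = template.copy()
live[200:220, 250:270] += 2.0           # a 20x20 px region moved by 2 mm

threshold_mm = 1.0                      # ~1 mm resolution at 50 cm, per study
moved = np.abs(live - template) > threshold_mm
print(int(moved.sum()))                 # 400 pixels flagged as moved
```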
Study of a quasi-microscope design for planetary landers
NASA Technical Reports Server (NTRS)
Giat, O.; Brown, E. B.
1973-01-01
The Viking lander facsimile camera, in its present form, provides for a minimum object distance of 1.9 meters, at which distance its resolution of 0.0007 radian provides an object resolution of 1.33 millimeters. It was deemed desirable, especially for follow-on Viking missions, to provide a means for examining Martian terrain at resolutions considerably higher than now provided. This led to the concept of the quasi-microscope, an attachment to be used in conjunction with the facsimile camera to convert it to a low-power microscope. The results of an investigation are reported that considered alternate optical configurations for the quasi-microscope and developed optical designs for the selected system or systems. Initial requirements included consideration of object resolutions in the range of 2 to 50 micrometers, an available field of view of the order of 500 pixels, and no significant modifications to the facsimile camera.
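The object resolution quoted above is simply angular resolution times object distance; this sketch reproduces the 1.33 mm figure from the abstract's numbers.

```python
# Object resolution of the facsimile camera at its minimum object distance:
# linear resolution = angular resolution * distance.
angular_res_rad = 0.0007    # camera angular resolution, rad
object_dist_m = 1.9         # minimum object distance, m

object_res_mm = angular_res_rad * object_dist_m * 1000
print(round(object_res_mm, 2))  # 1.33 mm, as quoted; a quasi-microscope
                                # shortens the effective object distance
                                # to reach resolutions of a few tens of um
```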
A high-speed digital camera system for the observation of rapid H-alpha fluctuations in solar flares
NASA Technical Reports Server (NTRS)
Kiplinger, Alan L.; Dennis, Brian R.; Orwig, Larry E.
1989-01-01
Researchers developed a prototype digital camera system for obtaining H-alpha images of solar flares with 0.1 s time resolution. They intend to operate this system in conjunction with SMM's Hard X-Ray Burst Spectrometer, with X-ray instruments that will be available on the Gamma Ray Observatory, and eventually with the Gamma Ray Imaging Device (GRID) and the High Resolution Gamma-Ray and Hard X-Ray Spectrometer (HIREGS), which are being developed for the Max '91 program. The digital camera recently proved successful as a one-camera system operating in the blue wing of H-alpha during the first Max '91 campaign. Construction and procurement of a second and possibly a third camera for simultaneous observations at other wavelengths are underway, as are analyses of the campaign data.
Orientation Modeling for Amateur Cameras by Matching Image Line Features and Building Vector Data
NASA Astrophysics Data System (ADS)
Hung, C. H.; Chang, W. C.; Chen, L. C.
2016-06-01
With the popularity of geospatial applications, database updating is becoming important due to environmental changes over time. Imagery provides a lower-cost and efficient way to update such databases. Three-dimensional objects can be measured by space intersection using conjugate image points and the orientation parameters of cameras. However, precise orientation parameters for light amateur cameras are not always available, due to the cost and weight of precision GPS and IMU units. To automate data updating, the correspondence between object vector data and the image may be established to improve the accuracy of direct georeferencing. This study contains four major parts: (1) back-projection of object vector data, (2) extraction of image line features, (3) object-image feature line matching, and (4) line-based orientation modeling. In order to construct the correspondence of features between an image and a building model, the building vector features were back-projected onto the image using the initial camera orientation from GPS and IMU. Line features were extracted from the imagery. Afterwards, the matching procedure was done by assessing the similarity between the extracted image features and the back-projected ones. The fourth part utilized line features in orientation modeling, performed by integrating line parametric equations into the collinearity condition equations. The experimental data included images with 0.06 m resolution acquired by a Canon EOS 5D Mark II camera on a Microdrones MD4-1000 UAV. Experimental results indicate that an accuracy of 2.1 pixels may be reached, which is equivalent to 0.12 m in object space.
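The back-projection step above rests on the collinearity condition: an object point, the camera position, and its image point lie on one line. This is a minimal sketch of that projection with hypothetical camera values, not the flight configuration.

```python
import numpy as np

# Collinearity condition: project an object-space point into image
# coordinates given camera position, rotation, and focal length.
def project(obj_pt, cam_pos, R, f):
    """Return image (x, y) of an object point (collinearity equations)."""
    dx = R @ (obj_pt - cam_pos)        # point expressed in the camera frame
    return -f * dx[0] / dx[2], -f * dx[1] / dx[2]

R = np.eye(3)                          # level, north-aligned camera (hypothetical)
x, y = project(np.array([10.0, 5.0, 0.0]),     # ground point
               np.array([0.0, 0.0, 100.0]),    # camera 100 m above origin
               R, f=0.05)                      # 50 mm focal length
```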
Zodiac II: Debris Disk Science from a Balloon
NASA Technical Reports Server (NTRS)
Bryden, Geoffrey; Traub, Wesley; Roberts, Lewis C., Jr.; Bruno, Robin; Unwin, Stephen; Backovsky, Stan; Brugarolas, Paul; Chakrabarti, Supriya; Chen, Pin; Hillenbrand, Lynne;
2011-01-01
Zodiac II is a proposed balloon-borne science investigation of debris disks around nearby stars. Debris disks are analogs of the Asteroid Belt (mainly rocky) and Kuiper Belt (mainly icy) in our Solar System. Zodiac II will measure the size, shape, brightness, and color of a statistically significant sample of disks. These measurements will enable us to probe these fundamental questions: what do debris disks tell us about the evolution of planetary systems; how are debris disks produced; how are debris disks shaped by planets; what materials are debris disks made of; how much dust do debris disks make as they grind down; and how long do debris disks live? In addition, Zodiac II will observe hot, young exoplanets as targets of opportunity. The Zodiac II instrument is a 1.1-m diameter SiC telescope and an imaging coronagraph on a gondola carried by a stratospheric balloon. Its data product is a set of images of each targeted debris disk in four broad visible wavelength bands. Zodiac II will address its science questions by taking high-resolution, multi-wavelength images of the debris disks around tens of nearby stars. Mid-latitude flights are considered: overnight test flights within the United States followed by half-global flights in the Southern Hemisphere. These longer flights are required to fully explore the set of known debris disks accessible only to Zodiac II. On these targets, it will be 100 times more sensitive than the Hubble Space Telescope's Advanced Camera for Surveys (HST/ACS); no existing telescope can match the Zodiac II contrast and resolution performance. A second objective of Zodiac II is to use the near-space environment to raise the Technology Readiness Level (TRL) of SiC mirrors, internal coronagraphs, deformable mirrors, and wavefront sensing and control, all potentially needed for a future space-based telescope for high-contrast exoplanet imaging.
Wide-Field-of-View, High-Resolution, Stereoscopic Imager
NASA Technical Reports Server (NTRS)
Prechtl, Eric F.; Sedwick, Raymond J.
2010-01-01
A device combines video feeds from multiple cameras to provide wide-field-of-view, high-resolution, stereoscopic video to the user. The prototype under development consists of two camera assemblies, one for each eye. One of these assemblies incorporates a mounting structure with multiple cameras attached at offset angles. The video signals from the cameras are fed to a central processing platform where each frame is color processed and mapped into a single contiguous wide-field-of-view image. Because the resolution of most display devices is typically smaller than the processed map, a cropped portion of the video feed is output to the display device. The positioning of the cropped window will likely be controlled through a head-tracking device, allowing the user to turn his or her head side to side or up and down to view different portions of the captured image. There are multiple options for the display of the stereoscopic image. The use of head-mounted displays is one likely implementation; the use of 3D projection technologies is another option under consideration. The technology can be adapted in a multitude of ways. The computing platform is scalable, such that the number, resolution, and sensitivity of the cameras can be leveraged to improve image resolution and field of view. Miniaturization efforts can be pursued to shrink the package down for better mobility. Power savings studies can be performed to enable unattended, remote sensing packages. Image compression and transmission technologies can be incorporated to enable an improved telepresence experience.
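The cropping step described above can be sketched as selecting a display-sized window from the stitched wide-field map according to the head-tracker yaw; the map size, display size, and total field of view below are hypothetical.

```python
import numpy as np

# Head-tracked cropping of a stitched wide-field-of-view image map.
panorama = np.zeros((1080, 7680, 3), dtype=np.uint8)  # stitched map (hypothetical size)
VIEW_W = 1920                                         # display width in pixels

def crop_for_yaw(pano, yaw_deg, fov_total_deg=200.0):
    """Return the display window centred on the user's yaw angle,
    clamped so the window stays inside the stitched map."""
    centre = int((yaw_deg / fov_total_deg + 0.5) * pano.shape[1])
    left = int(np.clip(centre - VIEW_W // 2, 0, pano.shape[1] - VIEW_W))
    return pano[:, left:left + VIEW_W]

view = crop_for_yaw(panorama, yaw_deg=0.0)   # looking straight ahead
```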
Continuous All-Sky Cloud Measurements: Cloud Fraction Analysis Based on a Newly Developed Instrument
NASA Astrophysics Data System (ADS)
Aebi, C.; Groebner, J.; Kaempfer, N.; Vuilleumier, L.
2017-12-01
Clouds play an important role in the climate system and are a crucial parameter for the Earth's surface energy budget. Ground-based measurements of clouds provide data at high temporal resolution, allowing their influence on radiation to be quantified. The newly developed all-sky cloud camera at PMOD/WRC in Davos (Switzerland), the infrared cloud camera (IRCCAM), is a microbolometer camera sensitive in the 8-14 μm wavelength range. To obtain all-sky coverage, the camera is mounted on top of a frame looking down onto a spherical gold-plated mirror. The IRCCAM has been measuring continuously (day and night) with a time resolution of one minute in Davos since September 2015. To assess the performance of the IRCCAM, two visible all-sky cameras (Mobotix Q24M and Schreder VIS-J1006), which can operate only during daytime, are also installed in Davos. All three camera systems use different software for calculating fractional cloud coverage from images. Our study analyzes the fractional cloud coverage from the IRCCAM and compares it with that calculated from the two visible cameras. Preliminary results indicate that 78% of the data are within ±1 octa and 93% within ±2 octas. An uncertainty of 1-2 octas corresponds to the measurement uncertainty of human observers. The IRCCAM therefore shows similar performance in detecting cloud coverage as the visible cameras and human observers, with the advantage that continuous measurements with high temporal resolution are possible.
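The octa comparison above can be sketched as converting each camera's binary cloud mask to eighths of sky cover and applying the ±1 octa agreement test; the masks below are synthetic examples, not IRCCAM data.

```python
import numpy as np

def octas(cloud_mask):
    """Fractional cloud cover of a boolean sky mask, rounded to octas (0-8)."""
    return int(round(8 * cloud_mask.mean()))

# Synthetic example: IR camera sees 50% cover, visible camera sees 45%.
ir_mask = np.zeros((100, 100), dtype=bool);  ir_mask[:50] = True
vis_mask = np.zeros((100, 100), dtype=bool); vis_mask[:45] = True

# Agreement criterion used in the comparison: within +/- 1 octa.
agree = abs(octas(ir_mask) - octas(vis_mask)) <= 1
```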
Minimum Requirements for Taxicab Security Cameras*
Zeng, Shengke; Amandus, Harlan E.; Amendola, Alfred A.; Newbraugh, Bradley H.; Cantis, Douglas M.; Weaver, Darlene
2015-01-01
Problem: The homicide rate in the taxicab industry is 20 times greater than that of all workers. A NIOSH study showed that cities with taxicab security cameras experienced a significant reduction in taxicab driver homicides. Methods: Minimum technical requirements and a standard test protocol for taxicab security cameras for effective facial identification were determined. The study took more than 10,000 photographs of human-face charts in a simulated taxicab with various photographic resolutions, dynamic ranges, lens distortions, and motion blurs under various light and cab-seat conditions. Thirteen volunteer photograph evaluators assessed these face photographs and voted on the minimum technical requirements for taxicab security cameras. Results: Five worst-case-scenario photographic image quality thresholds were suggested: XGA-format resolution, highlight dynamic range of 1 EV, twilight dynamic range of 3.3 EV, lens distortion of 30%, and shutter speed of 1/30 second. Practical Applications: These minimum requirements will help taxicab regulators and fleets identify effective taxicab security cameras, and help taxicab security camera manufacturers improve camera facial identification capability. PMID:26823992
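The five thresholds can be read as a simple pass/fail screen an evaluator might run. The field names and the sample camera below are our own illustrative choices, not part of the NIOSH protocol.

```python
# Worst-case minimum requirements from the study, as a pass/fail check.
# Field names are hypothetical; values follow the five suggested thresholds.

REQUIREMENTS = {
    "min_resolution_px": (1024, 768),        # XGA format
    "min_highlight_dynamic_range_ev": 1.0,
    "min_twilight_dynamic_range_ev": 3.3,
    "max_lens_distortion_pct": 30.0,
    "max_exposure_s": 1 / 30,                # shutter speed of 1/30 s or faster
}

def meets_requirements(cam):
    w, h = cam["resolution_px"]
    rw, rh = REQUIREMENTS["min_resolution_px"]
    return (w >= rw and h >= rh
            and cam["highlight_dynamic_range_ev"] >= REQUIREMENTS["min_highlight_dynamic_range_ev"]
            and cam["twilight_dynamic_range_ev"] >= REQUIREMENTS["min_twilight_dynamic_range_ev"]
            and cam["lens_distortion_pct"] <= REQUIREMENTS["max_lens_distortion_pct"]
            and cam["exposure_s"] <= REQUIREMENTS["max_exposure_s"])

candidate = {
    "resolution_px": (1280, 960),
    "highlight_dynamic_range_ev": 1.2,
    "twilight_dynamic_range_ev": 3.5,
    "lens_distortion_pct": 12.0,
    "exposure_s": 1 / 60,
}
meets_requirements(candidate)  # -> True
```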
Cheetah: A high frame rate, high resolution SWIR image camera
NASA Astrophysics Data System (ADS)
Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob
2008-10-01
A high resolution, high frame rate InGaAs based image sensor and associated camera have been developed. The sensor and the camera are capable of recording and delivering more than 1700 full 640 x 512 pixel frames per second. The FPA utilizes a low-lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logic allows four different sub-windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range [0.9-1.7 μm] and can be further extended into the visible and NIR range. The Cheetah camera has up to 16 GB of on-board memory to store the acquired images and transfers the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full CameraLink(TM) interface to stream the data directly to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.
Development and calibration of a new gamma camera detector using large square Photomultiplier Tubes
NASA Astrophysics Data System (ADS)
Zeraatkar, N.; Sajedi, S.; Teimourian Fard, B.; Kaviani, S.; Akbarzadeh, A.; Farahani, M. H.; Sarkar, S.; Ay, M. R.
2017-09-01
Large-area scintillation detectors used in gamma cameras as well as Single Photon Emission Computed Tomography (SPECT) systems have a major role in in-vivo functional imaging. Most gamma detectors utilize a hexagonal arrangement of Photomultiplier Tubes (PMTs). In this work we applied large square-shaped PMTs with a row/column arrangement and positioning. The use of large square PMTs reduces dead zones on the detector surface; however, the conventional center-of-gravity method for positioning may not yield acceptable results. Hence, the digital correlated signal enhancement (CSE) algorithm was optimized to obtain better linearity and spatial resolution in the developed detector. The performance of the developed detector was evaluated based on the NEMA NU 1-2007 standard. The images acquired using this method showed acceptable uniformity and linearity compared with three commercial gamma cameras. The intrinsic and extrinsic spatial resolutions with a low-energy high-resolution (LEHR) collimator at 10 cm from the surface of the detector were 3.7 mm and 7.5 mm, respectively. The energy resolution of the camera was measured to be 9.5%. The performance evaluation demonstrated that the developed detector maintains image quality with a reduced number of PMTs relative to the detection area.
640 x 480 MWIR and LWIR camera system developments
NASA Astrophysics Data System (ADS)
Tower, John R.; Villani, Thomas S.; Esposito, Benjamin J.; Gilmartin, Harvey R.; Levine, Peter A.; Coyle, Peter J.; Davis, Timothy J.; Shallcross, Frank V.; Sauer, Donald J.; Meyerhofer, Dietrich
1993-01-01
The performance of a 640 x 480 PtSi, 3-5 μm (MWIR), Stirling-cooled camera system with a minimum resolvable temperature of 0.03 is considered. A preliminary specification of a full-TV-resolution PtSi radiometer was developed using the measured performance characteristics of the Stirling-cooled camera. The radiometer is capable of imaging rapid thermal transients from 25 to 250 C with better than 1 percent temperature resolution. This performance is achieved using the electronic exposure control capability of the MOS focal plane array (FPA). A liquid-nitrogen-cooled camera with an eight-position filter wheel has been developed using the 640 x 480 PtSi FPA. Low-thermal-mass packaging for the FPA was developed for Joule-Thomson applications.
Hammerle, Albin; Meier, Fred; Heinl, Michael; Egger, Angelika; Leitinger, Georg
2017-04-01
Thermal infrared (TIR) cameras bridge the gap between (i) on-site measurements of land surface temperature (LST), which provide high temporal resolution at the cost of low spatial coverage, and (ii) remotely sensed satellite data, which provide high spatial coverage at relatively low spatio-temporal resolution. While LST data from satellite (LSTsat) and airborne platforms are routinely corrected for atmospheric effects, such corrections are rarely applied to LST from ground-based TIR imagery (using TIR cameras; LSTcam). We show the consequences of neglecting atmospheric effects on LSTcam of different vegetated surfaces at landscape scale. We compare LST measured from different platforms, focusing on the comparison of LST data from on-site radiometry (LSTosr) and LSTcam using a commercially available TIR camera in the region of Bozen/Bolzano (Italy). Given a digital elevation model and measured vertical air temperature profiles, we developed a multiple linear regression model to correct LSTcam data for atmospheric influences. We could show the distinct effect of atmospheric conditions and related radiative processes along the measurement path on LSTcam, proving the necessity to correct LSTcam data at landscape scale, despite the relatively short measurement distances compared with remotely sensed data. Corrected LSTcam data revealed the dampening effect of the atmosphere, especially at high temperature differences between the atmosphere and the vegetated surface. Not correcting for these effects leads to erroneous LST estimates, in particular to an underestimation of the heterogeneity in LST, both in time and space. In the most pronounced case, we found a temperature range extension of almost 10 K.
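A minimal sketch of the kind of correction described, not the authors' multiple-regression model: fit the camera error (LSTosr minus LSTcam) against air temperature along the measurement path, then apply the fitted line to new camera readings. All numbers below are invented for illustration.

```python
# Ordinary least squares fit of camera error against path air temperature,
# then a simple additive correction of LSTcam. Illustrative values only.

def fit_line(x, y):
    """Least-squares coefficients (a, b) for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

air_t = [5.0, 10.0, 15.0, 20.0]   # path air temperature (degC), hypothetical
error = [0.5, 1.0, 1.5, 2.0]      # LSTosr - LSTcam (K), hypothetical
a, b = fit_line(air_t, error)     # a ~ 0.0, b ~ 0.1 on these data

def correct(lst_cam, air_temperature):
    """Apply the fitted atmospheric correction to a raw camera temperature."""
    return lst_cam + a + b * air_temperature

corrected = correct(25.0, 12.0)   # ~ 26.2 degC on these data
```

The authors' actual model additionally uses a digital elevation model and vertical air temperature profiles; this sketch only shows the regress-and-correct structure.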
VizieR Online Data Catalog: Galactic CHaMP. II. Dense gas clumps. (Ma+, 2013)
NASA Astrophysics Data System (ADS)
Ma, B.; Tan, J. C.; Barnes, P. J.
2015-04-01
A total of 303 dense gas clumps have been detected using the HCO+(1-0) line in the CHaMP survey (Paper I, Barnes et al. 2011, J/ApJS/196/12). In this article we have derived the SED for these clumps using Spitzer, MSX, and IRAS data. The Midcourse Space Experiment (MSX) was launched in 1996 April. It conducted a Galactic plane survey (0
Streak camera receiver definition study
NASA Technical Reports Server (NTRS)
Johnson, C. B.; Hunkler, L. T., Sr.; Letzring, S. A.; Jaanimagi, P.
1990-01-01
Detailed streak camera definition studies were made as a first step toward full flight qualification of a dual channel picosecond resolution streak camera receiver for the Geoscience Laser Altimeter and Ranging System (GLRS). The streak camera receiver requirements are discussed as they pertain specifically to the GLRS system, and estimates of the characteristics of the streak camera are given, based upon existing and near-term technological capabilities. Important problem areas are highlighted, and possible corresponding solutions are discussed.
NASA Astrophysics Data System (ADS)
Brown, T.; Borevitz, J. O.; Zimmermann, C.
2010-12-01
We have developed a camera system that can record hourly, gigapixel (multi-billion pixel) scale images of an ecosystem in a 360x90 degree panorama. The “Gigavision” camera system is solar-powered and can wirelessly stream data to a server. Quantitative data collection from multiyear timelapse gigapixel images is facilitated through an innovative web-based toolkit for recording time-series data on developmental stages (phenology) from any plant in the camera’s field of view. Gigapixel images enable time-series recording of entire landscapes with a resolution sufficient to record phenology from a majority of individuals in entire populations of plants. When coupled with next generation sequencing, quantitative population genomics can be performed in a landscape context linking ecology and evolution in situ and in real time. The Gigavision camera system achieves gigapixel image resolution by recording rows and columns of overlapping megapixel images. These images are stitched together into a single gigapixel resolution image using commercially available panorama software. Hardware consists of a 5-18 megapixel resolution DSLR or Network IP camera mounted on a pair of heavy-duty servo motors that provide pan-tilt capabilities. The servos and camera are controlled with a low-power Windows PC. Servo movement, power switching, and system status monitoring are enabled with Phidgets-brand sensor boards. System temperature, humidity, power usage, and battery voltage are all monitored at 5 minute intervals. All sensor data is uploaded via cellular or 802.11 wireless to an interactive online interface for easy remote monitoring of system status. Systems with direct internet connections upload the full sized images directly to our automated stitching server where they are stitched and available online for viewing within an hour of capture.
Systems with cellular wireless upload an 80 megapixel “thumbnail” of each larger panorama, and full-sized images are manually retrieved at bi-weekly intervals. Our longer-term goal is to make gigapixel time-lapse datasets available online in an interactive interface that layers plant-level phenology data and genomic sequence data from individual plants over gigapixel resolution images, together with weather and other abiotic sensor data. Co-visualization of all of these data types provides researchers with a powerful new tool for examining complex ecological interactions across scales from the individual to the ecosystem. We will present detailed phenostage data from more than 100 plants of multiple species from our Gigavision timelapse camera at our “Big Blowout East” field site in the Indiana Dunes State Park, IN. This camera has been recording three to four 700 million pixel images a day since February 28, 2010. The camera field of view covers an area of about 7 hectares, resulting in an average image resolution of about 1 pixel per centimeter over the entire site. We will also discuss some of the many technological challenges of developing and maintaining these types of hardware systems, collecting quantitative data from gigapixel resolution time-lapse data, and effectively managing terabyte-sized datasets of millions of images.
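The rows-and-columns capture strategy described above reduces to simple tiling arithmetic: given a per-frame field of view and an overlap fraction for the stitcher, how many pan and tilt positions cover the full panorama? The per-frame FOV and overlap values below are assumptions, not Gigavision specifications.

```python
import math

def grid_size(pan_deg, tilt_deg, frame_fov_x, frame_fov_y, overlap=0.3):
    """Columns and rows of overlapping frames needed to tile a panorama.

    Each step advances by the frame FOV minus the overlap the stitching
    software needs to match features between neighboring frames.
    """
    step_x = frame_fov_x * (1 - overlap)
    step_y = frame_fov_y * (1 - overlap)
    return math.ceil(pan_deg / step_x), math.ceil(tilt_deg / step_y)

# A 360x90 degree panorama with an assumed 10 x 7.5 degree frame FOV
# and 30% overlap:
cols, rows = grid_size(360, 90, 10.0, 7.5, overlap=0.3)  # -> (52, 18)
total_frames = cols * rows                                # -> 936
```

At 936 frames per panorama, even a 5-megapixel camera yields multi-gigapixel raw coverage per capture cycle, which is consistent with the hourly gigapixel imaging described.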
Thermographic measurements of high-speed metal cutting
NASA Astrophysics Data System (ADS)
Mueller, Bernhard; Renz, Ulrich
2002-03-01
Thermographic measurements of a high-speed cutting process have been performed with an infrared camera. To obtain images without motion blur, the integration times were reduced to a few microseconds. Since high tool wear influences the measured temperatures, a set-up was realized which enables small cutting lengths. Only single images have been recorded because the process is too fast to acquire a sequence of images even at the frame rate of the very fast infrared camera used. To expose the camera when the rotating tool is in the middle of the camera image, an experimental set-up with a light barrier and a digital delay generator with a time resolution of 1 ns has been realized. This enables very exact triggering of the camera at the desired position of the tool in the image. Since the cutting depth is between 0.1 and 0.2 mm, a high spatial resolution was also necessary, which was obtained by a special close-up lens allowing a resolution of approximately 45 μm. The experimental set-up will be described, and infrared images and evaluated temperatures of a titanium alloy and a carbon steel will be presented for cutting speeds up to 42 m/s.
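The trigger timing behind the light-barrier set-up is simple back-of-the-envelope arithmetic: after the barrier fires, the delay generator must wait exactly as long as the tool takes to travel from the barrier to the center of the field of view. The 21 mm travel distance below is an assumed geometry, not a value from the paper.

```python
# Delay between the light-barrier pulse and the camera exposure, for a tool
# edge moving at the cutting speed over an assumed travel distance.

def trigger_delay_ns(travel_mm, cutting_speed_m_per_s):
    """Time (ns) for the tool to cover `travel_mm` at the given speed."""
    t_s = (travel_mm / 1000.0) / cutting_speed_m_per_s
    return t_s * 1e9

delay = trigger_delay_ns(21.0, 42.0)  # ~ 5e5 ns, i.e. 0.5 ms
```

The same arithmetic shows why microsecond integration times are needed: at 42 m/s the tool crosses one 45 μm pixel in about 1 μs, so longer exposures would smear the image.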
Medium-sized aperture camera for Earth observation
NASA Astrophysics Data System (ADS)
Kim, Eugene D.; Choi, Young-Wan; Kang, Myung-Seok; Kim, Ee-Eul; Yang, Ho-Soon; Rasheed, Ad. Aziz Ad.; Arshad, Ahmad Sabirin
2017-11-01
Satrec Initiative and ATSB have been developing a medium-sized aperture camera (MAC) for an earth observation payload on a small satellite. Developed as a push-broom type high-resolution camera, the camera has one panchromatic and four multispectral channels. The panchromatic channel has a ground sampling distance of 2.5 m, and the multispectral channels 5 m, at a nominal altitude of 685 km. The 300 mm aperture Cassegrain telescope contains two aspheric mirrors and two spherical correction lenses. With a philosophy of building a simple and cost-effective camera, the mirrors incorporate no light-weighting, and the linear CCDs are mounted on a single PCB with no beam splitters. MAC is the main payload of RazakSAT, to be launched in 2005. RazakSAT is a 180 kg satellite including MAC, designed to provide high-resolution imagery of 20 km swath width on a near equatorial orbit (NEqO). The mission objective is to demonstrate the capability of a high-resolution remote sensing satellite system on a near equatorial orbit. This paper describes the overview of the MAC and RazakSAT programmes, and presents the current development status of MAC focusing on key optical aspects of the Qualification Model.
Digital Camera Control for Faster Inspection
NASA Technical Reports Server (NTRS)
Brown, Katharine; Siekierski, James D.; Mangieri, Mark L.; Dekome, Kent; Cobarruvias, John; Piplani, Perry J.; Busa, Joel
2009-01-01
Digital Camera Control Software (DCCS) is a computer program for controlling a boom and a boom-mounted camera used to inspect the external surface of a space shuttle in orbit around the Earth. Running in a laptop computer in the space-shuttle crew cabin, DCCS commands integrated displays and controls. By means of a simple one-button command, a crewmember can view low-resolution images to quickly spot problem areas and can then cause a rapid transition to high-resolution images. The crewmember can command that camera settings apply to a specific small area of interest within the field of view of the camera so as to maximize image quality within that area. DCCS also provides critical high-resolution images to a ground screening team, which analyzes the images to assess damage (if any); in so doing, DCCS enables the team to clear initially suspect areas more quickly than would otherwise be possible and further saves time by minimizing the probability of re-imaging of areas already inspected. On the basis of experience with a previous version (2.0) of the software, the present version (3.0) incorporates a number of advanced imaging features that optimize crewmember capability and efficiency.
Impact of laser phase and amplitude noises on streak camera temporal resolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wlotzko, V., E-mail: wlotzko@optronis.com; Optronis GmbH, Ludwigstrasse 2, 77694 Kehl; Uhring, W.
2015-09-15
Streak cameras are now reaching sub-picosecond temporal resolution. In cumulative acquisition mode, this resolution does not rely entirely on the electronics or the vacuum tube performance but also on the light source characteristics. The light source, usually an actively mode-locked laser, is affected by phase and amplitude noises. In this paper, the theoretical effects of such noises on the synchronization of the streak system are studied in synchroscan and triggered modes. More precisely, the contribution of band-pass filters, delays, and time walk is ascertained. Methods to compute the resulting synchronization jitter are depicted. The results are verified by measurement with a streak camera combined with a Ti:Al2O3 solid-state laser oscillator and also a fiber oscillator.
Speed of sound and photoacoustic imaging with an optical camera based ultrasound detection system
NASA Astrophysics Data System (ADS)
Nuster, Robert; Paltauf, Guenther
2017-07-01
CCD camera based optical ultrasound detection is a promising alternative approach for high resolution 3D photoacoustic imaging (PAI). To fully exploit its potential and to achieve an image resolution <50 μm, it is necessary to incorporate variations of the speed of sound (SOS) in the image reconstruction algorithm. Hence, this work presents the idea and a first implementation of adding speed of sound imaging to a previously developed camera based PAI setup. The current setup provides SOS maps with a spatial resolution of 2 mm and an accuracy of the obtained absolute SOS values of about 1%. The proposed dual-modality setup has the potential to provide highly resolved and perfectly co-registered 3D photoacoustic and SOS images.
4K Video of Colorful Liquid in Space
2015-10-09
Once again, astronauts on the International Space Station dissolved an effervescent tablet in a floating ball of water, and captured images using a camera capable of recording four times the resolution of normal high-definition cameras. The higher resolution images and higher frame rate videos can reveal more information when used on science investigations, giving researchers a valuable new tool aboard the space station. This footage is one of the first of its kind. The cameras are being evaluated for capturing science data and vehicle operations by engineers at NASA's Marshall Space Flight Center in Huntsville, Alabama.
Performance of Laser Megajoule’s x-ray streak camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zuber, C., E-mail: celine.zuber@cea.fr; Bazzoli, S.; Brunel, P.
2016-11-15
A prototype of a picosecond x-ray streak camera has been developed and tested by Commissariat à l’Énergie Atomique et aux Énergies Alternatives to provide plasma-diagnostic support for the Laser Megajoule. We report on the measured performance of this streak camera, which almost fulfills the requirements: 50-μm spatial resolution over a 15-mm field in the photocathode plane, 17-ps temporal resolution in a 2-ns timebase, a detection threshold lower than 625 nJ/cm² in the 0.05-15 keV spectral range, and a dynamic range greater than 100.
High-Resolution Surface Reconstruction from Imagery for Close Range Cultural Heritage Applications
NASA Astrophysics Data System (ADS)
Wenzel, K.; Abdel-Wahab, M.; Cefalu, A.; Fritsch, D.
2012-07-01
The recording of high resolution point clouds with sub-mm resolution is a demanding and cost-intensive task, especially with current equipment like handheld laser scanners. We present an image-based approach, where techniques of image matching and dense surface reconstruction are combined with a compact and affordable rig of off-the-shelf industry cameras. Such cameras provide high spatial resolution with low radiometric noise, which enables a one-shot solution and thus efficient data acquisition while satisfying high accuracy requirements. However, the largest drawback of image-based solutions is often the acquisition of surfaces with low texture, where the image matching process might fail. Thus, an additional structured light projector is employed, represented here by the pseudo-random pattern projector of the Microsoft Kinect. Its infrared laser projects speckles of different sizes. By using dense image matching techniques on the acquired images, a 3D point can be derived for almost every pixel. The use of multiple cameras enables the acquisition of a high resolution point cloud with high accuracy for each shot. For the proposed system, up to 3.5 million 3D points with sub-mm accuracy can be derived per shot. The registration of multiple shots is performed by Structure and Motion reconstruction techniques, where feature points are used to derive the camera positions and rotations automatically without initial information.
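The per-pixel depth recovery in such a rig rests on the standard stereo relation Z = f·B/d, with a first-order error of dZ = Z²/(f·B)·dd for a disparity error dd. A minimal sketch with assumed focal length, baseline, and disparity values (not the paper's rig parameters):

```python
# Depth from disparity for a calibrated stereo pair, plus the first-order
# depth uncertainty. All numeric values are illustrative assumptions.

def depth_mm(focal_px, baseline_mm, disparity_px):
    """Standard stereo relation Z = f * B / d."""
    return focal_px * baseline_mm / disparity_px

def depth_error_mm(z_mm, focal_px, baseline_mm, disparity_error_px):
    """First-order uncertainty dZ = Z**2 / (f * B) * dd."""
    return z_mm ** 2 / (focal_px * baseline_mm) * disparity_error_px

z = depth_mm(2000.0, 100.0, 400.0)           # -> 500.0 mm working distance
dz = depth_error_mm(z, 2000.0, 100.0, 0.1)   # -> 0.125 mm for 0.1 px error
```

With these assumed values, a sub-pixel matching accuracy of 0.1 px maps to roughly 0.1 mm of depth error, which illustrates how sub-mm point accuracy is achievable at close range.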
High-Resolution Scintimammography: A Pilot Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rachel F. Brem; Joelle M. Schoonjans; Douglas A. Kieper
2002-07-01
This study evaluated a novel high-resolution breast-specific gamma camera (HRBGC) for the detection of suggestive breast lesions. Methods: Fifty patients (with 58 breast lesions) for whom a scintimammogram was clinically indicated were prospectively evaluated with a general-purpose gamma camera and a novel HRBGC prototype. The results of conventional and high-resolution nuclear studies were prospectively classified as negative (normal or benign) or positive (suggestive or malignant) by 2 radiologists who were unaware of the mammographic and histologic results. All of the included lesions were confirmed by pathology. Results: There were 30 benign and 28 malignant lesions. The sensitivity for detection ofmore » breast cancer was 64.3% (18/28) with the conventional camera and 78.6% (22/28) with the HRBGC. The specificity with both systems was 93.3% (28/30). For the 18 nonpalpable lesions, sensitivity was 55.5% (10/18) and 72.2% (13/18) with the general-purpose camera and the HRBGC, respectively. For lesions 1 cm, 7 of 15 were detected with the general-purpose camera and 10 of 15 with the HRBGC. Four lesions (median size, 8.5 mm) were detected only with the HRBGC and were missed by the conventional camera. Conclusion: Evaluation of indeterminate breast lesions with an HRBGC results in improved sensitivity for the detection of cancer, with greater improvement shown for nonpalpable and 1-cm lesions.« less
Generalized assorted pixel camera: postcapture control of resolution, dynamic range, and spectrum.
Yasuma, Fumihito; Mitsunaga, Tomoo; Iso, Daisuke; Nayar, Shree K
2010-09-01
We propose the concept of a generalized assorted pixel (GAP) camera, which enables the user to capture a single image of a scene and, after the fact, control the tradeoff between spatial resolution, dynamic range and spectral detail. The GAP camera uses a complex array (or mosaic) of color filters. A major problem with using such an array is that the captured image is severely under-sampled for at least some of the filter types. This leads to reconstructed images with strong aliasing. We make four contributions in this paper: 1) We present a comprehensive optimization method to arrive at the spatial and spectral layout of the color filter array of a GAP camera. 2) We develop a novel algorithm for reconstructing the under-sampled channels of the image while minimizing aliasing artifacts. 3) We demonstrate how the user can capture a single image and then control the tradeoff of spatial resolution to generate a variety of images, including monochrome, high dynamic range (HDR) monochrome, RGB, HDR RGB, and multispectral images. 4) Finally, the performance of our GAP camera has been verified using extensive simulations that use multispectral images of real world scenes. A large database of these multispectral images has been made available at http://www1.cs.columbia.edu/CAVE/projects/gap_camera/ for use by the research community.
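A toy illustration of the under-sampling problem the paper addresses (this is emphatically not the paper's anti-aliasing reconstruction algorithm): a channel sampled at only some pixel positions, filled in by averaging the nearest sampled neighbors along a row.

```python
# Naive interpolation of a sparsely sampled channel: each missing position
# (None) is replaced by the mean of the nearest sampled values to its left
# and right. Real CFA reconstruction must do much better to avoid aliasing.

def fill_missing(row):
    """Fill None entries from the nearest sampled neighbors in `row`."""
    out = list(row)
    for i, v in enumerate(row):
        if v is None:
            left = next((row[j] for j in range(i - 1, -1, -1)
                         if row[j] is not None), None)
            right = next((row[j] for j in range(i + 1, len(row))
                          if row[j] is not None), None)
            known = [x for x in (left, right) if x is not None]
            out[i] = sum(known) / len(known)
    return out

fill_missing([10, None, 20, None, 40])  # -> [10, 15.0, 20, 30.0, 40]
```

The GAP paper's contribution 2) replaces this kind of naive averaging with an optimization-based reconstruction that minimizes aliasing artifacts.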
Instrumentation in molecular imaging.
Wells, R Glenn
2016-12-01
In vivo molecular imaging is a challenging task and no single type of imaging system provides an ideal solution. Nuclear medicine techniques like SPECT and PET provide excellent sensitivity but have poor spatial resolution. Optical imaging has excellent sensitivity and spatial resolution, but light photons interact strongly with tissues and so only small animals and targets near the surface can be accurately visualized. CT and MRI have exquisite spatial resolution, but greatly reduced sensitivity. To overcome the limitations of individual modalities, molecular imaging systems often combine individual cameras together, for example, merging nuclear medicine cameras with CT or MRI to allow the visualization of molecular processes with both high sensitivity and high spatial resolution.
High-resolution mini gamma camera for diagnosis and radio-guided surgery in diabetic foot infection
NASA Astrophysics Data System (ADS)
Scopinaro, F.; Capriotti, G.; Di Santo, G.; Capotondi, C.; Micarelli, A.; Massari, R.; Trotta, C.; Soluri, A.
2006-12-01
The diagnosis of diabetic foot osteomyelitis is often difficult. 99mTc-WBC (White Blood Cell) scintigraphy plays a key role in the diagnosis of bone infections. The spatial resolution of the Anger camera is not always sufficient to differentiate soft tissue from bone infection. The aim of the present study is to verify whether an HRD (High-Resolution Detector) can improve diagnosis and help guide surgery. Patients were studied with an HRD having a 25.7×25.7 mm² FOV, 2 mm spatial resolution, and 18% energy resolution. The patients underwent surgery and, when necessary, bone biopsy, both guided by the HRD. Four patients were positive on the Anger camera without specific signs of osteomyelitis. HRS (High-Resolution Scintigraphy) showed hot spots in the same patients. In two of them the hot spot was bar-shaped and localized in correspondence with the small phalanx. The presence of bone infection was confirmed at surgery, which was successfully guided by HRS. 99mTc-WBC HRS was able to diagnose pedal infection and to guide the surgery of the diabetic foot, opening a new way in the treatment of the infected diabetic foot.
Gate simulation of Compton Ar-Xe gamma-camera for radionuclide imaging in nuclear medicine
NASA Astrophysics Data System (ADS)
Dubov, L. Yu; Belyaev, V. N.; Berdnikova, A. K.; Bolozdynia, A. I.; Akmalova, Yu A.; Shtotsky, Yu V.
2017-01-01
Computer simulations of a cylindrical Compton Ar-Xe gamma camera are described in the current report. The detection efficiency of a cylindrical Ar-Xe Compton camera with an internal diameter of 40 cm is estimated as 1-3%, which is 10-100 times higher than that of a collimated Anger camera. It is shown that a cylindrical Compton camera can image a Tc-99m radiotracer distribution with a uniform spatial resolution of 20 mm throughout the whole field of view.
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.; Kuchik, Igor E.
2014-06-01
As is well known, the application of passive THz cameras to security problems is a very promising approach. Such a camera allows seeing a concealed object without contact with a person, and it poses no danger to the person. We demonstrate a new possibility of using the passive THz camera to observe a temperature difference on the human skin when this difference is caused by different temperatures inside the body. We discuss several physical experiments in which a person drinks hot, warm, or cold water, or eats. After computer processing of images captured by the passive THz camera TS4, we can see a pronounced temperature trace on the skin of the human body. To validate this statement we performed a similar physical experiment using an IR camera. Our investigation broadens the field of application of passive THz cameras to the detection of objects concealed in the human body, because the difference in temperature between the object and parts of the human body is reflected on the human skin. However, modern passive THz cameras do not have enough temperature resolution to see this difference. That is why we use computer processing to enhance the camera resolution for this application. We consider images produced by passive THz cameras manufactured by Microsemi Corp. and ThruVision Corp.
Endockscope: using mobile technology to create global point of service endoscopy.
Sohn, William; Shreim, Samir; Yoon, Renai; Huynh, Victor B; Dash, Atreya; Clayman, Ralph; Lee, Hak J
2013-09-01
Recent advances and the widespread availability of smartphones have ushered in a new wave of innovations in healthcare. We present our initial experience with Endockscope, a new docking system that optimizes the coupling of the iPhone 4S with modern endoscopes. Using the United States Air Force resolution target, we compared the image resolution (line pairs/mm) of a flexible cystoscope coupled to the Endockscope+iPhone to the Storz high definition (HD) camera (H3-Z Versatile). We then used the Munsell ColorChecker chart to compare the color resolution with a 0° laparoscope. Furthermore, 12 expert endoscopists blindly compared and evaluated images from a porcine model using a cystoscope and ureteroscope for both systems. Finally, we also compared the cost (average of two company listed prices) and weight (lb) of the two systems. Overall, the image resolution allowed by the Endockscope was identical to that of the traditional HD camera (4.49 vs 4.49 lp/mm). Red (ΔE=9.26 vs 9.69) demonstrated better color resolution for the iPhone, but green (ΔE=7.76 vs 10.95) and blue (ΔE=12.35 vs 14.66) revealed better color resolution with the Storz HD camera. Expert reviews of cystoscopic images acquired with the HD camera were superior in image, color, and overall quality (P=0.002, 0.042, and 0.003). In contrast, the ureteroscopic reviews yielded no statistical difference in image, color, and overall (P=1, 0.203, and 0.120) quality. The overall cost of the Endockscope+iPhone was $154 compared with $46,623 for a standard HD system. The weight of the mobile-coupled system was 0.47 lb versus 1.01 lb for the Storz HD camera. Endockscope demonstrated the feasibility of coupling endoscopes to a smartphone. The lighter and inexpensive Endockscope acquired images of the same resolution and acceptable color resolution. When evaluated by expert endoscopists, the overall quality of the images was equivalent for flexible ureteroscopy and somewhat inferior, but still acceptable, for flexible cystoscopy.
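The USAF resolution target used in the comparison encodes resolving power as lp/mm = 2^(group + (element - 1)/6), a standard formula for the 1951 target; the 4.49 lp/mm reading reported above corresponds to group 2, element 2.

```python
# Resolving power of a USAF-1951 target element, in line pairs per mm.

def usaf_lp_per_mm(group, element):
    """Standard USAF-1951 formula: lp/mm = 2**(group + (element - 1) / 6)."""
    return 2 ** (group + (element - 1) / 6)

round(usaf_lp_per_mm(2, 2), 2)  # -> 4.49, matching the reading in the study
```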
Endockscope: Using Mobile Technology to Create Global Point of Service Endoscopy
Sohn, William; Shreim, Samir; Yoon, Renai; Huynh, Victor B.; Dash, Atreya; Clayman, Ralph
2013-01-01
Abstract Background and Purpose Recent advances and the widespread availability of smartphones have ushered in a new wave of innovations in healthcare. We present our initial experience with Endockscope, a new docking system that optimizes the coupling of the iPhone 4S with modern endoscopes. Materials and Methods Using the United States Air Force resolution target, we compared the image resolution (line pairs/mm) of a flexible cystoscope coupled to the Endockscope+iPhone to the Storz high definition (HD) camera (H3-Z Versatile). We then used the Munsell ColorChecker chart to compare the color resolution with a 0° laparoscope. Furthermore, 12 expert endoscopists blindly compared and evaluated images from a porcine model using a cystoscope and ureteroscope for both systems. Finally, we also compared the cost (average of two company listed prices) and weight (lb) of the two systems. Results Overall, the image resolution allowed by the Endockscope was identical to the traditional HD camera (4.49 vs 4.49 lp/mm). Red (ΔE=9.26 vs 9.69) demonstrated better color resolution for iPhone, but green (ΔE=7.76 vs 10.95), and blue (ΔE=12.35 vs 14.66) revealed better color resolution with the Storz HD camera. Expert reviews of cystoscopic images acquired with the HD camera were superior in image, color, and overall quality (P=0.002, 0.042, and 0.003). In contrast, the ureteroscopic reviews yielded no statistical difference in image, color, and overall (P=1, 0.203, and 0.120) quality. The overall cost of the Endockscope+iPhone was $154 compared with $46,623 for a standard HD system. The weight of the mobile-coupled system was 0.47 lb and 1.01 lb for the Storz HD camera. Conclusion Endockscope demonstrated feasibility of coupling endoscopes to a smartphone. The lighter and inexpensive Endockscope acquired images of the same resolution and acceptable color resolution. 
When evaluated by expert endoscopists, the overall image quality was equivalent for flexible ureteroscopy and somewhat inferior, but still acceptable, for flexible cystoscopy. PMID:23701228
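The color comparison above is reported as CIE ΔE differences against ColorChecker patches. As a minimal sketch (the Lab patch values below are invented for illustration, not taken from the study), the simplest ΔE formula, CIE76, is just the Euclidean distance between two CIELAB coordinates:

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Hypothetical Lab readings: a ColorChecker red patch as rendered by a
# camera versus the reference patch value (values are made up).
reference = (40.0, 60.0, 45.0)
camera_a = (42.0, 52.0, 43.0)

print(round(delta_e_cie76(reference, camera_a), 2))  # 8.49
```

A just-noticeable difference is often quoted around ΔE ≈ 2.3, so the ΔE ≈ 9-15 values reported in the abstract correspond to clearly visible color shifts.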
Otto, Kristen J; Hapner, Edie R; Baker, Michael; Johns, Michael M
2006-02-01
Advances in commercial video technology have improved office-based laryngeal imaging. This study investigates the perceived image quality of a true high-definition (HD) video camera and the effect of magnification on laryngeal videostroboscopy. We performed a prospective, dual-armed, single-blinded analysis of a standard laryngeal videostroboscopic examination comparing 3 separate add-on camera systems: a 1-chip charge-coupled device (CCD) camera, a 3-chip CCD camera, and a true 720p (progressive scan) HD camera. Displayed images were controlled for magnification and image size (20-inch [50-cm] display, red-green-blue, and S-video cable for 1-chip and 3-chip cameras; digital visual interface cable and HD monitor for HD camera). Ten blinded observers were then asked to rate the following 5 items on a 0-to-100 visual analog scale: resolution, color, ability to see vocal fold vibration, sense of depth perception, and clarity of blood vessels. Eight unblinded observers were then asked to rate the difference in perceived resolution and clarity of laryngeal examination images when displayed on a 10-inch (25-cm) monitor versus a 42-inch (105-cm) monitor. A visual analog scale was used. These monitors were controlled for actual resolution capacity. For each item evaluated, randomized block design analysis demonstrated that the 3-chip camera scored significantly better than the 1-chip camera (p < .05). For the categories of color and blood vessel discrimination, the 3-chip camera scored significantly better than the HD camera (p < .05). For magnification alone, observers rated the 42-inch monitor statistically better than the 10-inch monitor. The expense of new medical technology must be judged against its added value. This study suggests that HD laryngeal imaging may not add significant value over currently available video systems, in perceived image quality, when a small monitor is used. 
Although differences in clarity between standard and HD cameras may not be readily apparent on small displays, a large display size coupled with HD technology may impart improved diagnosis of subtle vocal fold lesions and vibratory anomalies.
An Overview of the HST Advanced Camera for Surveys' On-orbit Performance
NASA Astrophysics Data System (ADS)
Hartig, G. F.; Ford, H. C.; Illingworth, G. D.; Clampin, M.; Bohlin, R. C.; Cox, C.; Krist, J.; Sparks, W. B.; De Marchi, G.; Martel, A. R.; McCann, W. J.; Meurer, G. R.; Sirianni, M.; Tsvetanov, Z.; Bartko, F.; Lindler, D. J.
2002-05-01
The Advanced Camera for Surveys (ACS) was installed in the HST on 7 March 2002 during the fourth servicing mission to the observatory, and is now beginning science operations. The ACS provides HST observers with a considerably more sensitive, higher-resolution camera with wider field and polarimetric, coronagraphic, low-resolution spectrographic and solar-blind FUV capabilities. We review selected results of the early verification and calibration program, comparing the achieved performance with the advertised specifications. Emphasis is placed on the optical characteristics of the camera, including image quality, throughput, geometric distortion and stray-light performance. More detailed analyses of various aspects of the ACS performance are presented in other papers at this meeting. This work was supported by a NASA contract and a NASA grant.
Rogers, B.T. Jr.; Davis, W.C.
1957-12-17
This patent relates to high speed cameras having resolution times of less than one-tenth of a microsecond, suitable for filming distinct sequences of a very fast event such as an explosion. This camera consists of a rotating mirror with reflecting surfaces on both sides, a narrow mirror acting as a slit in a focal plane shutter, various other mirror and lens systems, as well as an image recording surface. The combination of the rotating mirrors and the slit mirror causes discrete, narrow, separate pictures to fall upon the film plane, thereby forming a moving image increment of the photographed event. Placing a reflecting surface on each side of the rotating mirror cancels the image velocity that one side of the rotating mirror would impart, so that a camera having this short a resolution time is thereby possible.
Keleshis, C; Ionita, CN; Yadava, G; Patel, V; Bednarek, DR; Hoffmann, KR; Verevkin, A; Rudin, S
2008-01-01
A graphical user interface based on LabVIEW software was developed to enable clinical evaluation of a new High-Sensitivity Micro-Angio-Fluoroscopic (HSMAF) system for real-time acquisition, display and rapid frame transfer of high-resolution region-of-interest images. The HSMAF detector consists of a CsI(Tl) phosphor, a light image intensifier (LII), and a fiber-optic taper coupled to a progressive scan, frame-transfer, charge-coupled device (CCD) camera which provides real-time 12 bit, 1k × 1k images capable of greater than 10 lp/mm resolution. Images can be captured in continuous or triggered mode, and the camera can be programmed by a computer using Camera Link serial communication. A graphical user interface was developed to control the camera modes such as gain and pixel binning as well as to acquire, store, display, and process the images. The program, written in LabVIEW, has the following capabilities: camera initialization, synchronized image acquisition with the x-ray pulses, roadmap and digital subtraction angiography (DSA) acquisition, flat field correction, brightness and contrast control, last frame hold in fluoroscopy, looped playback of the acquired images in angiography, recursive temporal filtering and LII gain control. Frame rates can be up to 30 fps in full-resolution mode. The user-friendly implementation of the interface, along with the high-frame-rate acquisition and display for this unique high-resolution detector, should provide angiographers and interventionalists with a new capability for visualizing details of small vessels and endovascular devices such as stents, and hence enable more accurate diagnoses and image-guided interventions. (Support: NIH Grants R01NS43924, R01EB002873) PMID:18836570
Employing unmanned aerial vehicle to monitor the health condition of wind turbines
NASA Astrophysics Data System (ADS)
Huang, Yishuo; Chiang, Chih-Hung; Hsu, Keng-Tsang; Cheng, Chia-Chi
2018-04-01
Unmanned aerial vehicles (UAVs) can gather spatial information about huge structures, such as wind turbines, that can be difficult to obtain with traditional approaches. In this paper, the UAV used in the experiments is equipped with a high-resolution camera and a thermal infrared camera. The high-resolution camera provides a series of images with resolution up to 10 megapixels. Those images can be used to form a 3D model using digital photogrammetry. By comparing 3D scenes of the same wind turbine at different times, possible displacement of the supporting tower of the wind turbine, caused by ground movement or foundation deterioration, may be determined. The recorded thermal images are analyzed by applying image segmentation methods to the surface temperature distribution. A series of sub-regions are separated by differences in surface temperature. The high-resolution optical image and the segmented thermal image are fused so that surface anomalies of wind turbines are more easily identified.
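The thermal analysis described above separates sub-regions by differences in surface temperature. As a hedged sketch (synthetic data; the abstract does not specify the authors' exact segmentation algorithm), the simplest form of such segmentation assigns each pixel to a temperature band:

```python
import numpy as np

def segment_by_temperature(temps, edges):
    """Label each pixel by the temperature band it falls into.

    temps : 2-D array of surface temperatures (deg C)
    edges : sorted band boundaries; pixels are labeled 0..len(edges)
    """
    return np.digitize(temps, edges)

# Synthetic 4x4 thermal image with a warm anomaly in one corner.
temps = np.array([[20.1, 20.3, 20.2, 20.0],
                  [20.2, 20.4, 20.1, 20.3],
                  [20.0, 20.2, 27.8, 28.1],
                  [20.1, 20.3, 28.0, 27.9]])
labels = segment_by_temperature(temps, edges=[25.0])
print(labels)  # anomalous (warm) pixels receive label 1
```

Fusing such a label map with a co-registered optical image then highlights where the thermal anomalies sit on the turbine surface.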
Measuring the spatial resolution of an optical system in an undergraduate optics laboratory
NASA Astrophysics Data System (ADS)
Leung, Calvin; Donnelly, T. D.
2017-06-01
Two methods of quantifying the spatial resolution of a camera are described, performed, and compared, with the objective of designing an imaging-system experiment for students in an undergraduate optics laboratory. With the goal of characterizing the resolution of a typical digital single-lens reflex (DSLR) camera, we motivate, introduce, and show agreement between traditional test-target contrast measurements and the technique of using Fourier analysis to obtain the modulation transfer function (MTF). The advantages and drawbacks of each method are compared. Finally, we explore the rich optical physics at work in the camera system by calculating the MTF as a function of wavelength and f-number. For example, we find that the Canon 40D demonstrates better spatial resolution at short wavelengths, in accordance with scalar diffraction theory, but is not diffraction-limited, being significantly affected by spherical aberration. The experiment and data analysis routines described here can be built and written in an undergraduate optics lab setting.
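The Fourier-analysis route to the MTF described above can be sketched in a few lines: differentiate a measured edge-spread function (ESF) to obtain the line-spread function (LSF), then take the normalized magnitude of its Fourier transform. A minimal illustration with a synthetic edge (not the paper's calibrated laboratory procedure):

```python
import numpy as np

def mtf_from_edge(edge_profile):
    """Estimate the MTF from a sampled edge-spread function (ESF).

    Differentiating the ESF yields the line-spread function (LSF);
    the normalized magnitude of its Fourier transform is the MTF.
    """
    lsf = np.diff(edge_profile)
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

# Synthetic ESF: an ideal step blurred into a 4-pixel linear ramp.
x = np.arange(64)
esf = np.clip((x - 30) / 4.0, 0.0, 1.0)
mtf = mtf_from_edge(esf)
print(mtf[0], mtf[1] < mtf[0])  # 1.0 True
```

The MTF equals 1 at zero spatial frequency by construction and rolls off toward the Nyquist frequency; a sharper camera keeps the curve higher at high frequencies.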
VizieR Online Data Catalog: Kinematic analysis of M7-L8 dwarfs (Faherty+, 2016)
NASA Astrophysics Data System (ADS)
Faherty, J. K.; Riedel, A. R.; Cruz, K. L.; Gagne, J.; Filippazzo, J. C.; Lambrides, E.; Fica, H.; Weinberger, A.; Thorstensen, J. R.; Tinney, C. G.; Baldassare, V.; Lemonier, E.; Rice, E. L.
2017-08-01
Our sample of 152 M7-L8 ultra-cool dwarfs (see Table 1) was placed on follow-up programs, either imaging (parallax, proper motion), spectroscopy (radial velocity), or both, to determine kinematic membership in a nearby moving group. For Northern Hemisphere astrometry targets, we obtained I-band images with the MDM Observatory 2.4m Hiltner telescope on Kitt Peak, Arizona. We also observed 16 of the most southern targets with the Carnegie Astrometric Planet Search Camera (CAPSCam) on the 100 inch du Pont telescope and 5 with the FourStar imaging camera on the Magellan Baade Telescope. Table 2 gives the pertinent astrometric information; Table 3 lists our measured parallaxes and proper motions. New proper-motion measurements are listed in Table 5. See also section 3.1. We used the 6.5m Baade Magellan telescope and the FIRE spectrograph to obtain near-infrared spectra of 36 sources. Observations were made over seven runs between 2013 July and 2014 September. All observations (resolution R~6000) cover the full 0.8-2.5um band with a spatial resolution of 0.18"/pixel. We used the 3m NASA IRTF to obtain low-resolution near-infrared spectroscopy for 10 targets. All of the observations were aligned to the parallactic angle to obtain R~120 spectral data over the wavelength range of 0.7-2.5um. We used the Triple Spectrograph (TSpec) at the 5m Hale Telescope at Palomar Observatory to obtain near-infrared spectra of two targets. TSpec simultaneously covers the range from 1.0 to 2.45um and achieves a resolution of ~2500. Exposure times for each source and the number of images acquired are listed in Table 6; see also section 3.2. Multiple observations of 17 sources were taken in high-resolution mode on Keck II on 2008 September 14-16, using the NIRSPEC-5 filter to obtain H-band spectra in Order 49 (1.545-1.570um).
Observations of 18 sources were taken during semesters 2007B and 2009B using the H6420 filter of the Phoenix instrument (previously on Gemini South) to select H-band spectra in Order 36 (1.551-1.558um). Red-side spectra were taken with MIKE on the Magellan II (Clay) telescope on 2006 July 4, 2006 November 1 and 2 for 17 targets. Observational data for each source are listed in Table 7; see also section 3.3. (10 data files).
HUBBLE'S IMPROVED OPTICS REVEAL INCREDIBLE DETAIL IN GIANT CLOUD OF GAS AND DUST
NASA Technical Reports Server (NTRS)
2002-01-01
An image of a star-forming region in the 30 Doradus nebula, surrounding the dense star cluster R136. The image was obtained using the second generation Wide Field and Planetary Camera (WFPC-2), installed in the Hubble Space Telescope during the STS-61 Servicing Mission. The WFPC-2 contains modified optics to correct for the aberration of the Hubble's primary mirror. The new optics will allow the telescope to tackle many of the most important scientific programs for which the telescope was built, but which had to be temporarily shelved with the discovery of the spherical aberration in 1990. The large picture shows a mosaic of the images taken with WFPC-2's four separate cameras. Three of the cameras, called the Wide Field Cameras, give HST its 'panoramic' view of astronomical objects. A fourth camera, called the Planetary Camera, has a smaller field of view but provides better spatial resolution. The image shows the fields of view of the four cameras combined into a 'chevron' shape, the hallmark of WFPC-2 data. The image shows a portion of a giant cloud of gas and dust in 30 Doradus, which is located in a small neighboring galaxy called the Large Magellanic Cloud, about 160,000 light years away from us. The cloud is called an H II region because it is made up primarily of ionized hydrogen excited by ultraviolet light from hot stars. This is an especially interesting H II region because, unlike nearby objects which are lit up by only a few stars, such as the Orion Nebula, 30 Doradus is the result of the combined efforts of hundreds of the brightest and most massive stars known. The inset shows a blowup of the star cluster, called R136. Even at the distance of 30 Doradus, WFPC-2's resolution allows objects as small as 25 light days across to be distinguished from their surroundings, revealing the effect of the hot stars on the surrounding gas in unprecedented detail.
(For comparison, our solar system is about half a light day across, while the distance to the nearest star beyond the Sun is 4.3 light years.) Once thought to consist of a fairly small number of supermassive stars, R136 was resolved from the ground using 'speckle' techniques into a handful of central objects. Prior to the servicing mission, HST resolved R136 into several hundred stars. Now, preliminary analysis of the images obtained with the WFPC-2 shows that R136 consists of more than 3000 stars with brightness and colors that can be accurately measured. It is these measurements that will provide astronomers with new insights into how clouds of gas suddenly turn into large aggregations of stars. These insights will help astronomers understand how stars in our own Galaxy formed, as well as providing clues about how to interpret observations of distant galaxies which are still in the process of forming. For example, the new data show that at least in the case of R136, stars with masses less than that of our Sun were able to form as rapidly as very massive stars, qualifying this as a true starburst. PHOTO RELEASE NO.: STScI-PR94-04
A filter spectrometer concept for facsimile cameras
NASA Technical Reports Server (NTRS)
Jobson, D. J.; Kelly, W. L., IV; Wall, S. D.
1974-01-01
A concept which utilizes interference filters and photodetector arrays to integrate spectrometry with the basic imagery function of a facsimile camera is described and analyzed. The analysis considers spectral resolution, instantaneous field of view, spectral range, and signal-to-noise ratio. Specific performance predictions for the Martian environment, the Viking facsimile camera design parameters, and a signal-to-noise ratio for each spectral band equal to or greater than 256 indicate the feasibility of obtaining a spectral resolution of 0.01 micrometers with an instantaneous field of view of about 0.1 deg in the 0.425 micrometers to 1.025 micrometers range using silicon photodetectors. A spectral resolution of 0.05 micrometers with an instantaneous field of view of about 0.6 deg in the 1.0 to 2.7 micrometers range using lead sulfide photodetectors is also feasible.
NASA Astrophysics Data System (ADS)
Kimm, H.; Guan, K.; Luo, Y.; Peng, J.; Mascaro, J.; Peng, B.
2017-12-01
Monitoring crop growth conditions is of primary interest for crop yield forecasting, food production assessment, and risk management for individual farmers and agribusiness. Despite its importance, there is limited access to field-level crop growth and condition information in the public domain. This scarcity of ground truth data also hampers the use of satellite remote sensing for crop monitoring due to the lack of validation. Here, we introduce a new camera network (CropInsight) to monitor crop phenology, growth, and conditions, designed for the US Corn Belt landscape. Specifically, this network currently includes 40 sites (20 corn and 20 soybean fields) across the southern half of Champaign County, IL (approximately 800 km2). Its wide distribution and automatic operation enable the network to capture spatiotemporal variations of crop growth condition continuously at the regional scale. At each site, low-maintenance, high-resolution RGB digital cameras are set up with a downward view from a height of 4.5 m to take continuous images. In this study, we will use these images and novel satellite data to construct a daily LAI map of Champaign County at 30 m spatial resolution. First, we will estimate LAI from the camera images and evaluate it using LAI data collected with the LAI-2200 (LI-COR, Lincoln, NE). Second, we will develop relationships between the camera-based LAI estimates and vegetation indices derived from a newly developed MODIS-Landsat fusion product (daily, 30 m resolution, RGB + NIR + SWIR bands) and Planet Labs' high-resolution satellite data (daily, 5 meter, RGB). Finally, we will scale up the above relationships to generate a high spatiotemporal resolution crop LAI map for the whole of Champaign County. The proposed work has the potential to expand to other agro-ecosystems and to the broader US Corn Belt.
Multiple-aperture optical design for micro-level cameras using 3D-printing method
NASA Astrophysics Data System (ADS)
Peng, Wei-Jei; Hsu, Wei-Yao; Cheng, Yuan-Chieh; Lin, Wen-Lung; Yu, Zong-Ru; Chou, Hsiao-Yu; Chen, Fong-Zhi; Fu, Chien-Chung; Wu, Chong-Syuan; Huang, Chao-Tsung
2018-02-01
The design of an ultra-miniaturized camera using 3D-printing technology, printed directly onto the complementary metal-oxide semiconductor (CMOS) imaging sensor, is presented in this paper. The 3D-printed micro-optics are manufactured using femtosecond two-photon direct laser writing, and the figure error, which can achieve submicron accuracy, is suitable for the optical system. Because the size of the micro-level camera is approximately several hundred micrometers, the resolution is greatly reduced and is limited by the Nyquist frequency of the pixel pitch. To improve the reduced resolution, a single lens can be replaced by multiple-aperture lenses with dissimilar fields of view (FOV), and stitching sub-images with different FOV can then achieve high resolution within the central region of the image. The reason is that the angular resolution of a lens with smaller FOV is higher than that of one with larger FOV, so the angular resolution of the central area can be several times that of the outer area after stitching. For the same image circle, the image quality of the central area of the multi-lens system is significantly superior to that of a single lens. The foveated image produced by stitching FOVs breaks the resolution limit of the ultra-miniaturized imaging system, enabling applications such as biomedical endoscopy, optical sensing, and machine vision. In this study, the ultra-miniaturized camera with multi-aperture optics is designed and simulated for optimum optical performance.
Optimal design and critical analysis of a high resolution video plenoptic demonstrator
NASA Astrophysics Data System (ADS)
Drazic, Valter; Sacré, Jean-Jacques; Bertrand, Jérôme; Schubert, Arno; Blondé, Etienne
2011-03-01
A plenoptic camera is a natural multi-view acquisition device also capable of measuring distances by correlating a set of images acquired under different parallaxes. Its single lens and single sensor architecture have two downsides: limited resolution and depth sensitivity. In a very first step and in order to circumvent those shortcomings, we have investigated how the basic design parameters of a plenoptic camera optimize both the resolution of each view and also its depth measuring capability. In a second step, we built a prototype based on a very high resolution Red One® movie camera with an external plenoptic adapter and a relay lens. The prototype delivered 5 video views of 820x410. The main limitation in our prototype is view cross talk due to optical aberrations which reduce the depth accuracy performance. We have simulated some limiting optical aberrations and predicted its impact on the performances of the camera. In addition, we developed adjustment protocols based on a simple pattern and analyzing programs which investigate the view mapping and amount of parallax crosstalk on the sensor on a pixel basis. The results of these developments enabled us to adjust the lenslet array with a sub micrometer precision and to mark the pixels of the sensor where the views do not register properly.
Optimal design and critical analysis of a high-resolution video plenoptic demonstrator
NASA Astrophysics Data System (ADS)
Drazic, Valter; Sacré, Jean-Jacques; Schubert, Arno; Bertrand, Jérôme; Blondé, Etienne
2012-01-01
A plenoptic camera is a natural multiview acquisition device also capable of measuring distances by correlating a set of images acquired under different parallaxes. Its single lens and single sensor architecture have two downsides: limited resolution and limited depth sensitivity. As a first step and in order to circumvent those shortcomings, we investigated how the basic design parameters of a plenoptic camera optimize both the resolution of each view and its depth-measuring capability. In a second step, we built a prototype based on a very high resolution Red One® movie camera with an external plenoptic adapter and a relay lens. The prototype delivered five video views of 820 × 410. The main limitation in our prototype is view crosstalk due to optical aberrations that reduce the depth accuracy performance. We simulated some limiting optical aberrations and predicted their impact on the performance of the camera. In addition, we developed adjustment protocols based on a simple pattern and analysis of programs that investigated the view mapping and amount of parallax crosstalk on the sensor on a pixel basis. The results of these developments enabled us to adjust the lenslet array with a submicrometer precision and to mark the pixels of the sensor where the views do not register properly.
Temperature resolution enhancing of commercially available IR camera using computer processing
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.
2015-09-01
As is well known, application of the passive THz camera to security problems is a very promising approach. It allows seeing concealed objects without contact with a person, and the camera poses no danger to the person. Using such a THz camera, one can see a temperature difference on the human skin if this difference is caused by different temperatures inside the body. Because the passive THz camera is very expensive, we try to use an IR camera to observe this phenomenon. We use a computer code available for processing images captured by a commercially available IR camera manufactured by Flir Corp. Using this code, we clearly demonstrate the change in human skin temperature induced by drinking water. Nevertheless, in some cases additional computer processing is necessary to show the change in body temperature clearly. One such approach was developed by us. We believe that we increase the temperature resolution of the camera tenfold or more. The experiments carried out can be applied to counter-terrorism and to medical problems. The phenomenon shown is very important for the detection of forbidden objects and substances concealed inside the human body using non-destructive inspection without X-ray application. Earlier, we demonstrated this possibility using THz radiation.
Can light-field photography ease focusing on the scalp and oral cavity?
Taheri, Arash; Feldman, Steven R
2013-08-01
Capturing a well-focused image using an autofocus camera can be difficult in the oral cavity and on a hairy scalp. Light-field digital cameras capture data regarding the color, intensity, and direction of rays of light. With information on the direction of rays of light, computer software can be used to focus on different subjects in the field after the image data have been captured. A light-field camera was used to capture images of the scalp and oral cavity. The related computer software was used to focus on the scalp or different parts of the oral cavity. The final pictures were compared with pictures taken with conventional, compact, digital cameras. The camera worked well for the oral cavity. It also captured pictures of the scalp easily; however, we had to click repeatedly between the hairs at different points to choose the scalp for focusing. A major drawback of the system was that the resolution of the resulting pictures was lower than that of conventional digital cameras. Light-field digital cameras are fast and easy to use. They can capture more information across the full depth of field compared with conventional cameras. However, the resolution of the pictures is relatively low. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Advances in Gamma-Ray Imaging with Intensified Quantum-Imaging Detectors
NASA Astrophysics Data System (ADS)
Han, Ling
Nuclear medicine, an important branch of modern medical imaging, is an essential tool for both diagnosis and treatment of disease. As the fundamental element of nuclear medicine imaging, the gamma camera is able to detect gamma-ray photons emitted by radiotracers injected into a patient and form an image of the radiotracer distribution, reflecting biological functions of organs or tissues. Recently, an intensified CCD/CMOS-based quantum detector, called iQID, was developed in the Center for Gamma-Ray Imaging. Originally designed as a novel type of gamma camera, iQID demonstrated ultra-high spatial resolution (< 100 micron) and many other advantages over traditional gamma cameras. This work focuses on advancing this conceptually-proven gamma-ray imaging technology to make it ready for both preclinical and clinical applications. To start with, a Monte Carlo simulation of the key light-intensification device, i.e. the image intensifier, was developed, which revealed the dominating factor(s) that limit energy resolution performance of the iQID cameras. For preclinical imaging applications, a previously-developed iQID-based single-photon-emission computed-tomography (SPECT) system, called FastSPECT III, was fully advanced in terms of data acquisition software, system sensitivity and effective FOV by developing and adopting a new photon-counting algorithm, thicker columnar scintillation detectors, and system calibration method. Originally designed for mouse brain imaging, the system is now able to provide full-body mouse imaging with sub-350-micron spatial resolution. To further advance the iQID technology to include clinical imaging applications, a novel large-area iQID gamma camera, called LA-iQID, was developed from concept to prototype. Sub-mm system resolution in an effective FOV of 188 mm x 188 mm has been achieved. 
The camera architecture, system components, design and integration, data acquisition, camera calibration, and performance evaluation are presented in this work. Mounted on a castered counter-weighted clinical cart, the camera also features portable and mobile capabilities for easy handling and on-site applications at remote locations where hospital facilities are not available.
USDA-ARS?s Scientific Manuscript database
Consumer-grade cameras are being increasingly used for remote sensing applications in recent years. However, the performance of this type of cameras has not been systematically tested and well documented in the literature. The objective of this research was to evaluate the performance of original an...
Developments in mercuric iodide gamma ray imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patt, B.E.; Beyerle, A.G.; Dolin, R.C.
A mercuric iodide gamma-ray imaging array and camera system previously described have been characterized for spatial and energy resolution. Based on these data, a new camera is being developed to more fully exploit the potential of the array. Characterization results and design criteria for the new camera are presented. 2 refs., 7 figs.
NASA Astrophysics Data System (ADS)
Williams, B. P.; Kjellstrand, B.; Jones, G.; Reimuller, J. D.; Fritts, D. C.; Miller, A.; Geach, C.; Limon, M.; Hanany, S.; Kaifler, B.; Wang, L.; Taylor, M. J.
2017-12-01
PMC-Turbo is a NASA long-duration, high-altitude balloon mission that will deploy 7 high-resolution cameras to image polar mesospheric clouds (PMCs) and measure gravity wave breakdown and turbulence. The mission has been enhanced by the addition of the DLR Balloon Lidar Experiment (BOLIDE) and an OH imager from Utah State University. This instrument suite will provide high horizontal and vertical resolution of the wave-modified PMC structure along a several-thousand-kilometer flight track. We have requested a flight from Kiruna, Sweden to Canada in June 2017 or from McMurdo Base, Antarctica in Dec 2017. Three of the PMC camera systems were deployed on an aircraft and two tomographic ground sites for the High Level campaign in Canada in June/July 2017. On several nights the cameras observed PMCs with strong gravity wave breaking signatures. One PMC camera will piggyback on the SuperTIGER mission scheduled to be launched in Dec 2017 from McMurdo, so we will obtain PMC images and wave/turbulence data from both the northern and southern hemispheres.
Barrier Coverage for 3D Camera Sensor Networks
Wu, Chengdong; Zhang, Yunzhou; Jia, Zixi; Ji, Peng; Chu, Hao
2017-01-01
Barrier coverage, an important research area with respect to camera sensor networks, consists of a number of camera sensors to detect intruders that pass through the barrier area. Existing works on barrier coverage such as local face-view barrier coverage and full-view barrier coverage typically assume that each intruder is considered as a point. However, the crucial feature (e.g., size) of the intruder should be taken into account in the real-world applications. In this paper, we propose a realistic resolution criterion based on a three-dimensional (3D) sensing model of a camera sensor for capturing the intruder’s face. Based on the new resolution criterion, we study the barrier coverage of a feasible deployment strategy in camera sensor networks. Performance results demonstrate that our barrier coverage with more practical considerations is capable of providing a desirable surveillance level. Moreover, compared with local face-view barrier coverage and full-view barrier coverage, our barrier coverage is more reasonable and closer to reality. To the best of our knowledge, our work is the first to propose barrier coverage for 3D camera sensor networks. PMID:28771167
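The paper's resolution criterion rests on a full three-dimensional sensing model; as a simplified, hypothetical stand-in, a pinhole-geometry check of how many pixels an intruder's face spans at a given distance conveys the basic idea (the 60-pixel threshold below is an assumed value for illustration, not the paper's criterion):

```python
def pixels_on_face(face_width_m, distance_m, focal_px):
    """Approximate horizontal pixel span of a face of given width as
    seen by a pinhole camera whose focal length is given in pixels."""
    return focal_px * face_width_m / distance_m

def meets_resolution(face_width_m, distance_m, focal_px, min_px=60):
    """Toy resolution criterion: the face must span at least min_px pixels."""
    return pixels_on_face(face_width_m, distance_m, focal_px) >= min_px

# A 0.16 m wide face, 8 m away, camera with a 4000 px focal length:
print(meets_resolution(0.16, 8.0, 4000.0))  # 4000*0.16/8 = 80 px -> True
```

A deployment strategy can then space cameras so that every crossing path keeps intruders within the distance at which this criterion holds.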
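Under a simple pinhole-camera model, a resolution criterion of the kind proposed above reduces to counting how many sensor pixels the intruder's face subtends at a given distance. A back-of-the-envelope sketch (all numbers are illustrative assumptions, not parameters from the paper):

```python
# Pixels spanned by a face of width `face_width` at range `distance`,
# under the pinhole model: image size = focal * object size / distance.
# All values below are hypothetical, chosen only to illustrate the idea.
face_width = 0.16        # m, assumed face width
distance = 10.0          # m, camera-to-intruder range
focal = 0.008            # m, lens focal length
pixel_pitch = 4e-6       # m, sensor pixel size

pixels_on_face = focal * face_width / (distance * pixel_pitch)
# A coverage criterion might require, say, >= 30 pixels for recognition.
meets_criterion = pixels_on_face >= 30
```

A deployment strategy would evaluate this quantity over every crossing path through the barrier and every camera orientation, which is exactly where the 3D sensing model matters.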
Applying compressive sensing to TEM video: A substantial frame rate increase on any camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, Andrew; Kovarik, Libor; Abellan, Patricia
2015-08-13
One of the main limitations of imaging at high spatial and temporal resolution during in-situ transmission electron microscopy (TEM) experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame times approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing (CS) methods to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical CS inversion. Here we describe the background of CS and statistical methods in depth and simulate the frame rates and efficiencies for in-situ TEM experiments. Depending on the resolution and signal/noise of the image, it should be possible to increase the speed of any camera by more than an order of magnitude using this approach.
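The coded-integration forward model described above is straightforward to simulate. The sketch below shows only the acquisition step and a naive matched-filter estimate; the paper's actual reconstruction uses statistical CS inversion with sparsity priors, which is far more involved. All dimensions and codes here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 8, 16, 16                      # sub-frames folded into one camera frame
video = rng.random((T, H, W))            # stand-in for the fast scene dynamics
codes = rng.integers(0, 2, (T, H, W))    # random binary on/off aperture codes

# Coded integration: the detector sums T mask-modulated sub-frames
# into a single readout frame during one exposure.
frame = (codes * video).sum(axis=0)

# Naive per-pixel estimate (matched filter) -- NOT the paper's CS inversion,
# just a demonstration that the codes tag each sub-frame's contribution.
weight = codes.sum(axis=0)
est = np.where(codes.astype(bool),
               (frame / np.maximum(weight, 1))[None], 0.0)
```

The gain in effective frame rate is the factor T: one physical readout yields T coded sub-frames, recoverable when the scene is sparse in a suitable basis.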
Neuromorphic Event-Based 3D Pose Estimation
Reverter Valeiras, David; Orchard, Garrick; Ieng, Sio-Hoi; Benosman, Ryad B.
2016-01-01
Pose estimation is a fundamental step in many artificial vision tasks. It consists of estimating the 3D pose of an object with respect to a camera from the object's 2D projection. Current state-of-the-art implementations operate on images. These implementations are computationally expensive, especially for real-time applications. Scenes with fast dynamics exceeding 30–60 Hz can rarely be processed in real time using conventional hardware. This paper presents a new method for event-based 3D object pose estimation, making full use of the high temporal resolution (1 μs) of asynchronous visual events output from a single neuromorphic camera. Given an initial estimate of the pose, each incoming event is used to update the pose by combining both 3D and 2D criteria. We show that the asynchronous high temporal resolution of the neuromorphic camera allows us to solve the problem in an incremental manner, achieving real-time performance at an update rate of several hundred kHz on a conventional laptop. We show that the high temporal resolution of neuromorphic cameras is a key feature for performing accurate pose estimation. Experiments are provided showing the performance of the algorithm on real data, including fast moving objects, occlusions, and cases where the neuromorphic camera and the object are both in motion. PMID:26834547
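The per-event update idea can be illustrated with a toy example. The sketch below is not the authors' algorithm: it reduces the pose to a 2D image-plane translation and uses noiseless synthetic events, purely to show how each event individually nudges the estimate toward consistency with the nearest projected model point:

```python
import numpy as np

rng = np.random.default_rng(0)
model = rng.random((20, 2)) * 100      # projected model points, in pixels
true_shift = np.array([5.0, -3.0])     # unknown "pose" (translation only)
est = np.zeros(2)                      # running pose estimate

# Each incoming event is attributed to the nearest model point under the
# current estimate, and the estimate takes a small step reducing the
# mismatch -- an incremental, per-event update in the spirit of the paper.
for _ in range(2000):
    ev = model[rng.integers(len(model))] + true_shift   # synthetic event
    shifted = model + est
    nearest = shifted[np.argmin(((shifted - ev) ** 2).sum(axis=1))]
    est += 0.05 * (ev - nearest)
```

Because every event costs only a nearest-neighbour lookup and a vector update, the loop body is cheap enough to run at hundreds of kHz, which is the point the abstract makes.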
Evaluation of camera-based systems to reduce transit bus side collisions : phase II.
DOT National Transportation Integrated Search
2012-12-01
The sideview camera system has been shown to eliminate blind zones by providing a view to the driver in real time. In order to provide the best integration of these systems, an integrated camera-mirror system (hybrid system) was developed and tes...
Underwater video enhancement using multi-camera super-resolution
NASA Astrophysics Data System (ADS)
Quevedo, E.; Delory, E.; Callicó, G. M.; Tobajas, F.; Sarmiento, R.
2017-12-01
Image spatial resolution is critical in several fields, such as medicine, communications, satellite imaging, and underwater applications. While a large variety of techniques for image restoration and enhancement has been proposed in the literature, this paper focuses on a novel Super-Resolution fusion algorithm based on a Multi-Camera environment that makes it possible to enhance the quality of underwater video sequences without significantly increasing computation. In order to compare the quality enhancement, two objective quality metrics have been used: PSNR (Peak Signal-to-Noise Ratio) and the SSIM (Structural SIMilarity) index. Results have shown that the proposed method enhances the objective quality of several underwater sequences, avoiding the appearance of undesirable artifacts, with respect to basic fusion Super-Resolution algorithms.
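Both metrics named above have standard closed forms. The sketch below implements PSNR and a global (single-window) SSIM; note that the SSIM index as normally reported averages this statistic over small local windows rather than computing it once over the whole image:

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB; infinite for identical images."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def ssim_global(x, y, peak=255.0):
    """Single-window SSIM (the standard metric averages local windows)."""
    x, y = x.astype(float), y.astype(float)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2   # stabilizing constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

In a super-resolution comparison, one computes both metrics between each enhanced frame and a reference and reports the averages over the sequence.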
Babcock, Hazen P
2018-01-29
This work explores the use of industrial grade CMOS cameras for single molecule localization microscopy (SMLM). We show that industrial grade CMOS cameras approach the performance of scientific grade CMOS cameras at a fraction of the cost. This makes it more economically feasible to construct high-performance imaging systems with multiple cameras that are capable of a diversity of applications. In particular we demonstrate the use of industrial CMOS cameras for biplane, multiplane and spectrally resolved SMLM. We also provide open-source software for simultaneous control of multiple CMOS cameras and for the reduction of the movies that are acquired to super-resolution images.
Lytro camera technology: theory, algorithms, performance analysis
NASA Astrophysics Data System (ADS)
Georgiev, Todor; Yu, Zhan; Lumsdaine, Andrew; Goma, Sergio
2013-03-01
The Lytro camera is the first implementation of a plenoptic camera for the consumer market. We consider it a successful example of the miniaturization, aided by the increase in computational power, that characterizes mobile computational photography. The plenoptic camera approach to radiance capture uses a microlens array as an imaging system focused on the focal plane of the main camera lens. This paper analyzes the performance of the Lytro camera from a system-level perspective, considering the Lytro camera as a black box and using our interpretation of the Lytro image data saved by the camera. We present our findings based on our interpretation of the Lytro camera file structure, image calibration, and image rendering; in this context, artifacts and final image resolution are discussed.
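The classic rendering operation for the radiance data such a camera captures is synthetic refocusing: shift each sub-aperture view in proportion to its position in the lens aperture, then average. The sketch below is a generic shift-and-add refocus over a 4D light field, not a reconstruction of Lytro's proprietary pipeline:

```python
import numpy as np

def refocus(subviews, alpha):
    """Shift-and-add refocus of a light field.

    subviews: (U, V, H, W) array of sub-aperture views.
    alpha: refocus parameter; each view is shifted by alpha times its
    (u, v) offset from the aperture centre before averaging.
    Integer shifts via np.roll keep the sketch simple (real renderers
    interpolate sub-pixel shifts and crop wrap-around edges).
    """
    U, V, H, W = subviews.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(subviews[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)
```

Scene depths whose parallax matches `alpha` add coherently and appear sharp; everything else averages out to blur, which is how a single exposure yields many focal planes.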
NASA Astrophysics Data System (ADS)
Sun, Jiwen; Wei, Ling; Fu, Danying
2002-01-01
The camera features high resolution and a wide swath. To ensure that its high optical precision survives the rigorous dynamic loads of launch, the camera must have high structural rigidity; a careful study of the dynamic behavior of the camera structure is therefore required. A precise CAD model of the camera was built in Pro/E, and an interference examination was performed on this model to refine the structural design. The structural dynamic analysis of the camera was accomplished, for the first time in China, by applying the structural analysis codes PATRAN and NASTRAN. The main research items include: 1) comparative modal analyses of the critical structure of the camera using 4-node and 10-node tetrahedral elements, respectively, to confirm the most reasonable general model; 2) modal analyses of the camera for several cases, from which the inherent frequencies and mode shapes are obtained and the rationality of the structural design of the camera is confirmed; 3) static analysis of the camera under self-gravity and overloads, yielding the relevant deformation and stress distributions; and 4) response calculations for sinusoidal vibration of the camera, yielding the corresponding response curves and the maximum acceleration responses with their corresponding frequencies. The software technique proves accurate and efficient. Finally, the dynamic design and engineering optimization of the critical structure of the camera are discussed as fundamental technology for the design of forthcoming space optical instruments.
Airflow analyses using thermal imaging in Arizona's Meteor Crater as part of METCRAX II
NASA Astrophysics Data System (ADS)
Grudzielanek, A. Martina; Vogt, Roland; Cermak, Jan; Maric, Mateja; Feigenwinter, Iris; Whiteman, C. David; Lehner, Manuela; Hoch, Sebastian W.; Krauß, Matthias G.; Bernhofer, Christian; Pitacco, Andrea
2016-04-01
In October 2013 the second Meteor Crater Experiment (METCRAX II) took place at the Barringer Meteorite Crater (aka Meteor Crater) in north central Arizona, USA. Downslope-windstorm-type flows (DWF), the main research objective of METCRAX II, were measured by a comprehensive set of meteorological sensors deployed in and around the crater. During two weeks of METCRAX II five infrared (IR) time lapse cameras (VarioCAM® hr research & VarioCAM® High Definition, InfraTec) were installed at various locations on the crater rim to record high-resolution images of the surface temperatures within the crater from different viewpoints. Changes of surface temperature are indicative of air temperature changes induced by flow dynamics inside the crater, including the DWF. By correlating thermal IR surface temperature data with meteorological sensor data during intensive observational periods the applicability of the IR method of representing flow dynamics can be assessed. We present evaluation results and draw conclusions relative to the application of this method for observing air flow dynamics in the crater. In addition we show the potential of the IR method for METCRAX II in 1) visualizing airflow processes to improve understanding of these flows, and 2) analyzing cold-air flows and cold-air pooling.
High-resolution hyperspectral ground mapping for robotic vision
NASA Astrophysics Data System (ADS)
Neuhaus, Frank; Fuchs, Christian; Paulus, Dietrich
2018-04-01
Recently released hyperspectral cameras use large, mosaiced filter patterns to capture different ranges of the light's spectrum in each of the camera's pixels. Spectral information is sparse, as it is not fully available in each location. We propose an online method that avoids explicit demosaicing of camera images by fusing raw, unprocessed, hyperspectral camera frames inside an ego-centric ground surface map. It is represented as a multilayer heightmap data structure, whose geometry is estimated by combining a visual odometry system with either dense 3D reconstruction or 3D laser data. We use a publicly available dataset to show that our approach is capable of constructing an accurate hyperspectral representation of the surface surrounding the vehicle. We show that in many cases our approach increases spatial resolution over a demosaicing approach, while providing the same amount of spectral information.
Depth Perception In Remote Stereoscopic Viewing Systems
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Von Sydow, Marika
1989-01-01
Report describes theoretical and experimental studies of perception of depth by human operators through stereoscopic video systems. Purpose of such studies is to optimize dual-camera configurations used to view workspaces of remote manipulators at distances of 1 to 3 m from cameras. According to analysis, static stereoscopic depth distortion is decreased, without decreasing stereoscopic depth resolution, by increasing camera-to-object and intercamera distances and camera focal length. Analysis further predicts dynamic stereoscopic depth distortion is reduced by rotating cameras around center of circle passing through point of convergence of viewing axes and first nodal points of two camera lenses.
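The dependence of depth resolution on baseline and focal length follows from the standard pinhole stereo relations: depth Z = f·b/d for disparity d, so the depth error per pixel of disparity shrinks as focal length and intercamera distance grow. A numeric sketch (all values are illustrative assumptions, not the report's configuration):

```python
# Pinhole stereo: depth from disparity, and first-order depth resolution.
# Illustrative numbers for a workspace near the 1-3 m range in the report.
f_px = 1200.0   # focal length expressed in pixels (assumed)
b = 0.12        # intercamera baseline, metres (assumed)
d = 48.0        # measured disparity, pixels

Z = f_px * b / d              # depth: 3.0 m
dZ = Z ** 2 / (f_px * b)      # depth change per pixel of disparity, m/px
```

Since dZ scales as Z²/(f·b), doubling either the baseline or the focal length halves the depth uncertainty at a given range, consistent with the report's recommendation to increase both.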
Speckle imaging for planetary research
NASA Technical Reports Server (NTRS)
Nisenson, P.; Goody, R.; Apt, J.; Papaliolios, C.
1983-01-01
The present study of speckle imaging technique effectiveness encompasses image reconstruction by means of a division algorithm for Fourier amplitudes, and the Knox-Thompson (1974) algorithm for Fourier phases. Results which have been obtained for Io, Titan, Pallas, Jupiter and Uranus indicate that spatial resolutions lower than the seeing limit by a factor of four are obtainable for objects brighter than Uranus. The resolutions obtained are well above the diffraction limit, due to inadequacies of the video camera employed. A photon-counting camera has been developed to overcome these difficulties, making possible the diffraction-limited resolution of objects as faint as Charon.
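The two reconstruction ingredients named above have compact numerical forms: Fourier amplitudes come from averaging power spectra over many short exposures (in practice divided by a reference-star estimate of the speckle transfer function), and Knox-Thompson phases come from the averaged cross-spectrum between neighbouring spatial frequencies. A minimal numpy sketch with synthetic stand-in frames:

```python
import numpy as np

rng = np.random.default_rng(1)
frames = rng.random((50, 32, 32))   # stand-in for short-exposure speckle frames

F = np.fft.fft2(frames, axes=(-2, -1))

# Labeyrie-style amplitude estimate: averaging |F|^2 over many frames
# preserves diffraction-limited information that a long exposure blurs out.
# (The full method divides this by a reference-star power spectrum.)
power = np.mean(np.abs(F) ** 2, axis=0)

# Knox-Thompson phase estimate: the averaged cross-spectrum between
# neighbouring spatial frequencies gives phase *differences*, which are
# then integrated across the frequency plane to recover the object phase.
cross = np.mean(F * np.conj(np.roll(F, 1, axis=-1)), axis=0)
phase_diff = np.angle(cross)
```

With real data, the photon-counting camera mentioned in the abstract matters because both averages are taken over many faint frames, so detector noise directly limits how faint an object can be reconstructed.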
Heterogeneous Vision Data Fusion for Independently Moving Cameras
2010-03-01
The project addresses moving target detection, tracking, and identification over a large terrain. The goal of the project is to investigate and evaluate the existing image fusion algorithms, develop new real-time algorithms for Category-II image fusion, and apply these algorithms to moving target detection, tracking, and classification.
A novel SPECT camera for molecular imaging of the prostate
NASA Astrophysics Data System (ADS)
Cebula, Alan; Gilland, David; Su, Li-Ming; Wagenaar, Douglas; Bahadori, Amir
2011-10-01
The objective of this work is to develop an improved SPECT camera for dedicated prostate imaging. Complementing the recent advancements in agents for molecular prostate imaging, this device has the potential to assist in distinguishing benign from aggressive cancers, to improve site-specific localization of cancer, to improve accuracy of needle-guided prostate biopsy of cancer sites, and to aid in focal therapy procedures such as cryotherapy and radiation. Theoretical calculations show that the spatial resolution/detection sensitivity of the proposed SPECT camera can rival or exceed 3D PET and further signal-to-noise advantage is attained with the better energy resolution of the CZT modules. Based on photon transport simulation studies, the system has a reconstructed spatial resolution of 4.8 mm with a sensitivity of 0.0001. Reconstruction of a simulated prostate distribution demonstrates the focal imaging capability of the system.
Thermal infrared panoramic imaging sensor
NASA Astrophysics Data System (ADS)
Gutin, Mikhail; Tsui, Eddy K.; Gutin, Olga; Wang, Xu-Ming; Gutin, Alexey
2006-05-01
Panoramic cameras offer true real-time, 360-degree coverage of the surrounding area, valuable for a variety of defense and security applications, including force protection, asset protection, asset control, security including port security, perimeter security, video surveillance, border control, airport security, coastguard operations, search and rescue, intrusion detection, and many others. Automatic detection, location, and tracking of targets outside protected area ensures maximum protection and at the same time reduces the workload on personnel, increases reliability and confidence of target detection, and enables both man-in-the-loop and fully automated system operation. Thermal imaging provides the benefits of all-weather, 24-hour day/night operation with no downtime. In addition, thermal signatures of different target types facilitate better classification, beyond the limits set by camera's spatial resolution. The useful range of catadioptric panoramic cameras is affected by their limited resolution. In many existing systems the resolution is optics-limited. Reflectors customarily used in catadioptric imagers introduce aberrations that may become significant at large camera apertures, such as required in low-light and thermal imaging. Advantages of panoramic imagers with high image resolution include increased area coverage with fewer cameras, instantaneous full horizon detection, location and tracking of multiple targets simultaneously, extended range, and others. The Automatic Panoramic Thermal Integrated Sensor (APTIS), being jointly developed by Applied Science Innovative, Inc. (ASI) and the Armament Research, Development and Engineering Center (ARDEC) combines the strengths of improved, high-resolution panoramic optics with thermal imaging in the 8 - 14 micron spectral range, leveraged by intelligent video processing for automated detection, location, and tracking of moving targets. 
The work in progress supports the Future Combat Systems (FCS) and the Intelligent Munitions Systems (IMS). The APTIS is anticipated to operate as an intelligent node in a wireless network of multifunctional nodes that work together to serve in a wide range of homeland security applications, as well as serve the Army in tasks of improved situational awareness (SA) in defensive and offensive operations, and as a sensor node in tactical Intelligence, Surveillance, and Reconnaissance (ISR). The novel ViperView(TM) high-resolution panoramic thermal imager is the heart of the APTIS system. It features an aberration-corrected omnidirectional imager with small optics designed to match the resolution of a 640 x 480-pixel IR camera, with improved image quality for longer-range target detection, classification, and tracking. The same approach is applicable to panoramic cameras working in the visible spectral range. Other components of the APTIS system include network communications, advanced power management, and wakeup capability. Recent developments include image processing, an optical design being expanded into the visible spectral range, and wireless communications design. This paper describes the development status of the APTIS system.
Compact full-motion video hyperspectral cameras: development, image processing, and applications
NASA Astrophysics Data System (ADS)
Kanaev, A. V.
2015-10-01
Emergence of spectral pixel-level color filters has enabled development of hyper-spectral Full Motion Video (FMV) sensors operating in visible (EO) and infrared (IR) wavelengths. The new class of hyper-spectral cameras opens broad possibilities for utilization in military and industrial applications. Indeed, such cameras are able to classify materials as well as detect and track spectral signatures continuously in real time while simultaneously providing an operator the benefit of enhanced-discrimination-color video. Supporting these extensive capabilities requires significant computational processing of the collected spectral data. In general, two processing streams are envisioned for mosaic array cameras. The first is spectral computation, which provides essential spectral content analysis, e.g., detection or classification. The second is presentation of the video to an operator, which can offer the best display of the content depending on the performed task, e.g., providing spatial resolution enhancement or color coding of the spectral analysis. These processing streams can be executed in parallel or they can utilize each other's results. The spectral analysis algorithms have been developed extensively; however, demosaicking of more than three equally-sampled spectral bands has been explored scarcely. We present a unique approach to demosaicking based on multi-band super-resolution and show the trade-off between spatial resolution and spectral content. Using imagery collected with the developed 9-band SWIR camera, we demonstrate several concepts of its operation, including detection and tracking. We also compare the demosaicking results to the results of multi-frame super-resolution as well as to combined multi-frame and multi-band processing.
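To make the demosaicking problem concrete: a 9-band camera of this kind tiles the sensor with a 3 x 3 filter pattern, so each band is sampled at only one pixel in nine. The baseline any multi-band super-resolution method is compared against is simple spatial interpolation of each band's sparse samples. A minimal nearest-neighbour version (the pattern layout here is an assumption for illustration, not the camera's actual mosaic):

```python
import numpy as np

def demosaic_nearest(raw, pattern):
    """Nearest-neighbour demosaic of a p x p mosaic filter pattern.

    raw: (H, W) raw sensor frame.
    pattern: (p, p) array of band indices; each band assumed to occupy
    exactly one cell of the pattern.
    Returns an (n_bands, H, W) cube.
    """
    p = pattern.shape[0]
    n_bands = int(pattern.max()) + 1
    H, W = raw.shape
    out = np.empty((n_bands, H, W))
    for band in range(n_bands):
        dy, dx = [int(a[0]) for a in np.nonzero(pattern == band)]
        sub = raw[dy::p, dx::p]                          # sparse band samples
        out[band] = np.repeat(np.repeat(sub, p, 0), p, 1)[:H, :W]
    return out
```

The abstract's point is the trade-off this baseline makes explicit: each band's spatial resolution is a factor p below the sensor's, and recovering it requires the multi-band or multi-frame priors the paper studies.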
Concept and integration of an on-line quasi-operational airborne hyperspectral remote sensing system
NASA Astrophysics Data System (ADS)
Schilling, Hendrik; Lenz, Andreas; Gross, Wolfgang; Perpeet, Dominik; Wuttke, Sebastian; Middelmann, Wolfgang
2013-10-01
Modern mission characteristics require the use of advanced imaging sensors in reconnaissance. In particular, high spatial and high spectral resolution imaging provides promising data for many tasks such as classification and detecting objects of military relevance, such as camouflaged units or improvised explosive devices (IEDs). Especially in asymmetric warfare with highly mobile forces, intelligence, surveillance and reconnaissance (ISR) needs to be available close to real-time. This demands the use of unmanned aerial vehicles (UAVs) in combination with downlink capability. The system described in this contribution is integrated in a wing pod for ease of installation and calibration. It is designed for the real-time acquisition and analysis of hyperspectral data. The main component is a Specim AISA Eagle II hyperspectral sensor, covering the visible and near-infrared (VNIR) spectral range with a spectral resolution up to 1.2 nm and 1024 pixel across track, leading to a ground sampling distance below 1 m at typical altitudes. The push broom characteristic of the hyperspectral sensor demands an inertial navigation system (INS) for rectification and georeferencing of the image data. Additional sensors are a high resolution RGB (HR-RGB) frame camera and a thermal imaging camera. For on-line application, the data is preselected, compressed and transmitted to the ground control station (GCS) by an existing system in a second wing pod. The final result after data processing in the GCS is a hyperspectral orthorectified GeoTIFF, which is filed in the ERDAS APOLLO geographical information system. APOLLO allows remote access to the data and offers web-based analysis tools. The system is quasi-operational and was successfully tested in May 2013 in Bremerhaven, Germany.
First photometric properties of Dome C, Antarctica
NASA Astrophysics Data System (ADS)
Chadid, M.; Vernin, J.; Jeanneaux, F.; Mekarnia, D.; Trinquet, H.
2008-07-01
Here we present the first photometric extinction measurements in the visible range performed at Dome C in Antarctica, using the PAIX photometer (Photometer AntarctIca eXtinction). It is made with "off the shelf" components: an Audine camera at the focus of the Blazhko telescope, a Meade M16 diaphragmed down to 15 cm. For an exposure time of 60 s without filter, a 10th V-magnitude star is measured with a precision of 1/100 mag. A first statistic over 16 nights in August 2007 yields an extinction of 0.5 magnitude per air mass, possibly due to high-altitude cirrus. This rather simple experiment shows that continuous observations can be performed at Dome C, allowing high frequency resolution in pulsation and asteroseismology studies. Light curves of the RR Lyrae star S Ara were established; they show the typical shape of an RR Lyrae star. A more sophisticated photometer, PAIX II, was installed at Dome C during the polar summer of 2008, with an ST10 XME camera, automatic guiding, autofocusing, and Johnson/Bessell UBVRI filter wheels.
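An extinction coefficient like the 0.5 mag per air mass quoted above is obtained from a Bouguer fit: observed magnitude grows linearly with air mass, m_obs = m0 + k·X, where X ≈ sec(zenith angle) for moderate zenith distances. A sketch with simulated data (the numbers are illustrative, only the 0.5 mag/airmass slope echoes the abstract):

```python
import numpy as np

# Bouguer-line extinction fit: m_obs = m0 + k * X, with airmass X = sec(z).
zenith = np.radians([30.0, 40.0, 50.0, 60.0, 65.0])
X = 1.0 / np.cos(zenith)

m0_true, k_true = 10.0, 0.5        # 0.5 mag per air mass, as at Dome C
m_obs = m0_true + k_true * X       # noiseless simulated magnitudes

# Linear least-squares fit; polyfit returns [slope, intercept]
k_fit, m0_fit = np.polyfit(X, m_obs, 1)
```

With real photometry, each night's fit yields one k, and the 16-night statistic quoted in the abstract is the distribution of those nightly slopes.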
Underwater image mosaicking and visual odometry
NASA Astrophysics Data System (ADS)
Sadjadi, Firooz; Tangirala, Sekhar; Sorber, Scott
2017-05-01
This paper summarizes the results of studies in underwater odometry using a video camera for estimating the velocity of an unmanned underwater vehicle (UUV). Underwater vehicles are usually equipped with sonar and an Inertial Measurement Unit (IMU), an integrated sensor package that combines multiple accelerometers and gyros to produce a three-dimensional measurement of both specific force and angular rate with respect to an inertial reference frame for navigation. In this study, we investigate the use of odometry information obtainable from a video camera mounted on a UUV to extract vehicle velocity relative to the ocean floor. A key challenge with this process is the seemingly bland (i.e., featureless) nature of video data obtained underwater, which could make conventional approaches to image-based motion estimation difficult. To address this problem, we perform image enhancement, followed by frame-to-frame image transformation, registration, and mosaicking/stitching. With this approach the velocity components associated with the moving sensor (vehicle) are readily obtained from (i) the components of the transform matrix at each frame; (ii) information about the height of the vehicle above the seabed; and (iii) the sensor resolution. Preliminary results are presented.
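The conversion from registration output to ground speed listed in (i)-(iii) is a one-line scale computation under a pinhole model: the ground sampling distance fixes metres per pixel, and the frame-to-frame translation times the frame rate gives velocity. All numbers in the sketch are illustrative assumptions, not values from the paper:

```python
# Ground speed from frame-to-frame image shift (pinhole model).
# Hypothetical camera and vehicle parameters, for illustration only.
pixel_pitch = 6e-6    # m, sensor pixel size
focal = 0.012         # m, lens focal length
altitude = 4.0        # m, vehicle height above the seabed
fps = 15.0            # frames per second

# Ground sampling distance: metres of seabed imaged per pixel
gsd = altitude * pixel_pitch / focal

# Translation component of the frame-to-frame registration transform
shift_px = 3.0

velocity = shift_px * gsd * fps   # m/s along that image axis
```

The same scaling is applied to each translation component of the transform matrix, which is why knowing the height above the seabed (from sonar, say) is essential for absolute velocity.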
HST Solar Arrays photographed by Electronic Still Camera
NASA Technical Reports Server (NTRS)
1993-01-01
This close-up view of one of two Solar Arrays (SA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Electronic still photography is a technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality.
Clinical applications with the HIDAC positron camera
NASA Astrophysics Data System (ADS)
Frey, P.; Schaller, G.; Christin, A.; Townsend, D.; Tochon-Danguy, H.; Wensveen, M.; Donath, A.
1988-06-01
A high density avalanche chamber (HIDAC) positron camera has been used for positron emission tomographic (PET) imaging in three different human studies, including patients presenting with: (I) thyroid diseases (124 cases); (II) clinically suspected malignant tumours of the pharynx or larynx (ENT) region (23 cases); and (III) clinically suspected primary malignant and metastatic tumours of the liver (9 cases, 19 PET scans). The positron emitting radiopharmaceuticals used for the three studies were Na 124I (4.2 d half-life) for the thyroid, 55Co-bleomycin (17.5 h half-life) for the ENT region and 68Ga-colloid (68 min half-life) for the liver. Tomographic imaging was performed: (I) 24 h after oral Na 124I administration to the thyroid patients, (II) 18 h after intravenous administration of 55Co-bleomycin to the ENT patients and (III) 20 min following the intravenous injection of 68Ga-colloid to the liver tumour patients. Three different imaging protocols were used with the HIDAC positron camera to perform appropriate tomographic imaging in each patient study. Promising results were obtained in all three studies, particularly in tomographic thyroid imaging, where a significant clinical contribution is made possible for diagnosis and therapy planning by the PET technique. In the other two PET studies encouraging results were obtained for the detection and precise localisation of malignant tumour disease, including an estimate of the functional liver volume based on the reticulo-endothelial system (RES) of the liver, obtained in vivo, and the three-dimensional display of liver PET data using shaded graphics techniques. The clinical significance of the overall results obtained in both the ENT and the liver PET study, however, is still uncertain, and the respective role of PET as a new imaging modality in these applications is not yet clearly established.
The clinical impact made by PET in liver and ENT malignant tumour staging needs further investigation, and more detailed data on a larger number of clinical and experimental PET scans will be necessary for definitive evaluation. Nevertheless, the HIDAC positron camera may be used for clinical PET imaging in well-defined patient cases, particularly in situations where both high spatial resolution is desired in the reconstructed image of the examined pathological condition and at the same time "static" PET imaging may be adequate, as is the case in thyroid, ENT and liver tomographic imaging using the HIDAC positron camera.
Noise and sensitivity of x-ray framing cameras at Nike (abstract)
NASA Astrophysics Data System (ADS)
Pawley, C. J.; Deniz, A. V.; Lehecka, T.
1999-01-01
X-ray framing cameras are the most widely used tool for radiographing density distributions in laser and Z-pinch driven experiments. The x-ray framing cameras that were developed specifically for experiments on the Nike laser system are described. One of these cameras has been coupled to a CCD camera and was tested for resolution and image noise using both electrons and x rays. The largest source of noise in the images was found to be due to low quantum detection efficiency of x-ray photons.
1920x1080 pixel color camera with progressive scan at 50 to 60 frames per second
NASA Astrophysics Data System (ADS)
Glenn, William E.; Marcinka, John W.
1998-09-01
For over a decade, the broadcast industry, the film industry, and the computer industry have had a long-range objective to originate high-definition images with progressive scan. This produces images with better vertical resolution and far fewer artifacts than interlaced scan. Computers almost universally use progressive scan. The broadcast industry has resisted switching from interlace to progressive because no cameras were available in that format with the 1920 x 1080 resolution that had obtained international acceptance for high-definition program production. The camera described in this paper produces an output in that format derived from two 1920 x 1080 CCD sensors produced by Eastman Kodak.
Mini gamma camera, camera system and method of use
Majewski, Stanislaw; Weisenberger, Andrew G.; Wojcik, Randolph F.
2001-01-01
A gamma camera comprising essentially, and in order from the front outer or gamma ray impinging surface: 1) a collimator, 2) a scintillator layer, 3) a light guide, 4) an array of position-sensitive, high-resolution photomultiplier tubes, and 5) printed circuitry for receipt of the output of the photomultipliers. There is also described a system wherein the output supplied by the high-resolution, position-sensitive photomultiplier tubes is communicated to: a) a digitizer and b) a computer where it is processed using advanced image processing techniques and a specific algorithm to calculate the center of gravity of any abnormality observed during imaging, and c) optional image display and telecommunications ports.
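A center-of-gravity computation of the kind the patent describes is an intensity-weighted centroid over the digitized image. The patent's specific algorithm is not spelled out in the abstract; the sketch below shows the generic form:

```python
import numpy as np

def centroid(image):
    """Intensity-weighted centre of gravity of a 2D image.

    Returns (row, col) in pixel coordinates; the same weighted-mean
    formula underlies Anger-logic event positioning in gamma cameras.
    """
    image = np.asarray(image, dtype=float)
    total = image.sum()
    ys, xs = np.indices(image.shape)
    return (ys * image).sum() / total, (xs * image).sum() / total
```

Applied to a segmented region of elevated uptake, this yields sub-pixel coordinates for the abnormality, which is what makes the reported localization finer than the raw detector pitch.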
Wong, Wai-Hoi; Li, Hongdi; Baghaei, Hossain; Zhang, Yuxuan; Ramirez, Rocio A; Liu, Shitao; Wang, Chao; An, Shaohui
2012-11-01
The dedicated murine PET (MuPET) scanner is a high-resolution, high-sensitivity, and low-cost preclinical PET camera designed and manufactured at our laboratory. In this article, we report its performance according to the NU 4-2008 standards of the National Electrical Manufacturers Association (NEMA). We also report the results of additional phantom and mouse studies. The MuPET scanner, which is integrated with a CT camera, is based on the photomultiplier-quadrant-sharing concept and comprises 180 blocks of 13 × 13 lutetium yttrium oxyorthosilicate crystals (1.24 × 1.4 × 9.5 mm(3)) and 210 low-cost 19-mm photomultipliers. The camera has 78 detector rings, with an 11.6-cm axial field of view and a ring diameter of 16.6 cm. We measured the energy resolution, scatter fraction, sensitivity, spatial resolution, and counting rate performance of the scanner. In addition, we scanned the NEMA image-quality phantom, Micro Deluxe and Ultra-Micro Hot Spot phantoms, and 2 healthy mice. The system average energy resolution was 14% at 511 keV. The average spatial resolution at the center of the field of view was about 1.2 mm, improving to 0.8 mm and remaining below 1.2 mm in the central 6-cm field of view when a resolution-recovery method was used. The absolute sensitivity of the camera was 6.38% for an energy window of 350-650 keV and a coincidence timing window of 3.4 ns. The system scatter fraction was 11.9% for the NEMA mouselike phantom and 28% for the ratlike phantom. The maximum noise-equivalent counting rate was 1,100 kcps at 57 MBq for the mouselike phantom and 352 kcps at 65 MBq for the ratlike phantom. The 1-mm fillable rod was clearly observable using the NEMA image-quality phantom. The images of the Ultra-Micro Hot Spot phantom also showed the 1-mm hot rods. In the mouse studies, both the left and right ventricle walls were clearly observable, as were the Harderian glands. The MuPET camera has excellent resolution, sensitivity, counting rate, and imaging performance.
The data show it is a powerful scanner for preclinical animal study and pharmaceutical development.
Experimental comparison of high-density scintillators for EMCCD-based gamma ray imaging
NASA Astrophysics Data System (ADS)
Heemskerk, Jan W. T.; Kreuger, Rob; Goorden, Marlies C.; Korevaar, Marc A. N.; Salvador, Samuel; Seeley, Zachary M.; Cherepy, Nerine J.; van der Kolk, Erik; Payne, Stephen A.; Dorenbos, Pieter; Beekman, Freek J.
2012-07-01
Detection of x-rays and gamma rays with high spatial resolution can be achieved with scintillators that are optically coupled to electron-multiplying charge-coupled devices (EMCCDs). These can be operated at typical frame rates of 50 Hz with low noise. In such a set-up, scintillation light within each frame is integrated after which the frame is analyzed for the presence of scintillation events. This method allows for the use of scintillator materials with relatively long decay times of a few milliseconds, not previously considered for use in photon-counting gamma cameras, opening up an unexplored range of dense scintillators. In this paper, we test CdWO4 and transparent polycrystalline ceramics of Lu2O3:Eu and (Gd,Lu)2O3:Eu as alternatives to currently used CsI:Tl in order to improve the performance of EMCCD-based gamma cameras. The tested scintillators were selected for their significantly larger cross-sections at 140 keV (99mTc) compared to CsI:Tl combined with moderate to good light yield. A performance comparison based on gamma camera spatial and energy resolution was done with all tested scintillators having equal (66%) interaction probability at 140 keV. CdWO4, Lu2O3:Eu and (Gd,Lu)2O3:Eu all result in a significantly improved spatial resolution over CsI:Tl, albeit at the cost of reduced energy resolution. Lu2O3:Eu transparent ceramic gives the best spatial resolution: 65 µm full-width-at-half-maximum (FWHM) compared to 147 µm FWHM for CsI:Tl. In conclusion, these ‘slow’ dense scintillators open up new possibilities for improving the spatial resolution of EMCCD-based scintillation cameras.
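The per-frame analysis described above (integrate scintillation light over a frame, then search the frame for events) can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the thresholding scheme, single-event assumption, frame size, and all numeric values are assumptions for the example.

```python
import numpy as np

def detect_event(frame, threshold):
    """Locate one scintillation event in an integrated EMCCD frame.

    Sketch: pixels above threshold are localized by intensity-weighted
    centroiding; the summed signal serves as a crude energy estimate.
    """
    mask = frame > threshold
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    weights = frame[ys, xs]
    cx = np.average(xs, weights=weights)
    cy = np.average(ys, weights=weights)
    return cx, cy, weights.sum()

# Synthetic frame: noisy background plus one bright event at (x=12, y=20)
rng = np.random.default_rng(0)
frame = rng.normal(0.0, 1.0, (32, 32))
frame[20, 12] += 100.0
cx, cy, energy = detect_event(frame, threshold=10.0)
```

Because events are localized per frame rather than per scintillation pulse, the frame rate (not the scintillator decay time) sets the counting limit, which is what makes millisecond-decay scintillators usable here.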
Resolution for color photography
NASA Astrophysics Data System (ADS)
Hubel, Paul M.; Bautsch, Markus
2006-02-01
Although it is well known that luminance resolution is most important, the ability to accurately render colored details, color textures, and colored fabrics cannot be overlooked. This includes the ability to accurately render single-pixel color details as well as avoiding color aliasing. All consumer digital cameras on the market today record in color and the scenes people are photographing are usually color. Yet almost all resolution measurements made on color cameras are done using a black and white target. In this paper we present several methods for measuring and quantifying color resolution. The first method, detailed in a previous publication, uses a slanted-edge target of two colored surfaces in place of the standard black and white edge pattern. The second method employs the standard black and white targets recommended in the ISO standard, but records these onto the camera through colored filters, thus giving modulation between black and one particular color component; red, green, and blue color separation filters are used in this study. The third method, conducted at Stiftung Warentest, an independent consumer organization of Germany, uses a white-light interferometer to generate fringe pattern targets of varying color and spatial frequency.
NASA Astrophysics Data System (ADS)
Breitfelder, Stefan; Reichel, Frank R.; Gaertner, Ernst; Hacker, Erich J.; Cappellaro, Markus; Rudolf, Peter; Voelk, Ute
1998-04-01
Digital cameras are of increasing significance for professional applications in photo studios where fashion, portrait, product and catalog photographs or advertising photos of high quality have to be taken. The eyelike is a digital camera system which has been developed for such applications. It is capable of working online with high frame rates and images of full sensor size, and it provides a resolution that can be varied between 2048 by 2048 and 6144 by 6144 pixels at an RGB color depth of 12 bits per channel, with an exposure time variable from 1/60 s to 1 s. With an exposure time of 100 ms, digitization takes approx. 2 seconds for an image of 2048 by 2048 pixels (12 Mbyte), 8 seconds for an image of 4096 by 4096 pixels (48 Mbyte) and 40 seconds for an image of 6144 by 6144 pixels (108 Mbyte). The eyelike can be used in various configurations. Used as a camera body, most commercial lenses can be connected to the camera via existing lens adaptors. On the other hand, the eyelike can be used as a back on most commercial 4" by 5" view cameras. This paper describes the eyelike camera concept with the essential system components. The article finishes with a description of the software, which is needed to bring the high quality of the camera to the user.
Rover mast calibration, exact camera pointing, and camera handoff for visual target tracking
NASA Technical Reports Server (NTRS)
Kim, Won S.; Ansar, Adnan I.; Steele, Robert D.
2005-01-01
This paper presents three technical elements that we have developed to improve the accuracy of the visual target tracking for single-sol approach-and-instrument placement in future Mars rover missions. An accurate, straightforward method of rover mast calibration is achieved by using a total station, a camera calibration target, and four prism targets mounted on the rover. The method was applied to Rocky8 rover mast calibration and yielded a 1.1-pixel rms residual error. Camera pointing requires inverse kinematic solutions for mast pan and tilt angles such that the target image appears right at the center of the camera image. Two issues were raised. Mast camera frames are in general not parallel to the masthead base frame. Further, the optical axis of the camera model in general does not pass through the center of the image. Despite these issues, we managed to derive non-iterative closed-form exact solutions, which were verified with Matlab routines. Actual camera pointing experiments over 50 random target image points yielded less than 1.3-pixel rms pointing error. Finally, a purely geometric method for camera handoff using stereo views of the target has been developed. Experimental test runs show less than 2.5 pixels error on high-resolution Navcam for Pancam-to-Navcam handoff, and less than 4 pixels error on lower-resolution Hazcam for Navcam-to-Hazcam handoff.
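For intuition, the special case that the paper generalizes (mast camera frame aligned with the masthead base frame, optical axis through the image center) has a simple closed-form pan/tilt solution. The sketch below covers only that idealized case, not the exact solutions derived in the paper for misaligned frames and off-center optical axes.

```python
import math

def pan_tilt_to_target(x, y, z):
    """Pan/tilt angles that point the boresight at target (x, y, z),
    expressed in the masthead base frame with z up.

    Idealized case only: camera frame parallel to the base frame and
    optical axis through the image center.
    """
    pan = math.atan2(y, x)                     # azimuth about the z axis
    tilt = math.atan2(z, math.hypot(x, y))     # elevation above the x-y plane
    return pan, tilt

# A symmetric target: 45 degrees of pan and 45 degrees of tilt
pan, tilt = pan_tilt_to_target(1.0, 1.0, math.sqrt(2.0))
```

In the real system these angles would then be corrected for the camera-to-masthead transform, which is exactly what the paper's non-iterative exact solutions handle.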
First results from the TOPSAT camera
NASA Astrophysics Data System (ADS)
Greenway, Paul; Tosh, Ian; Morris, Nigel; Burton, Gary; Cawley, Steve
2017-11-01
The TopSat camera is a low cost remote sensing imager capable of producing 2.5 metre resolution panchromatic imagery, funded by the British National Space Centre's Mosaic programme. The instrument was designed and assembled at the Space Science & Technology Department of the CCLRC's Rutherford Appleton Laboratory (RAL) in the UK, and was launched on the 27th October 2005 from Plesetsk Cosmodrome in Northern Russia on a Kosmos-3M. The camera utilises an off-axis three mirror system, which has the advantages of excellent image quality over a wide field of view, combined with a compactness that makes its overall dimensions smaller than its focal length. Keeping the costs to a minimum has been a major design driver in the development of this camera. The camera is part of the TopSat mission, which is a collaboration between four UK organisations; QinetiQ, Surrey Satellite Technology Ltd (SSTL), RAL and Infoterra. Its objective is to demonstrate provision of rapid response high resolution imagery to fixed and mobile ground stations using a low cost minisatellite. The paper "Development of the TopSat Camera" presented by RAL at the 5th ICSO in 2004 described the opto-mechanical design, assembly, alignment and environmental test methods implemented. Now that the spacecraft is in orbit and successfully acquiring images, this paper presents the first results from the camera and makes an initial assessment of the camera's in-orbit performance.
Split ring resonator based THz-driven electron streak camera featuring femtosecond resolution
Fabiańska, Justyna; Kassier, Günther; Feurer, Thomas
2014-01-01
Through combined three-dimensional electromagnetic and particle tracking simulations we demonstrate a THz driven electron streak camera featuring a temporal resolution on the order of a femtosecond. The ultrafast streaking field is generated in a resonant THz sub-wavelength antenna which is illuminated by an intense single-cycle THz pulse. Since electron bunches and THz pulses are generated with parts of the same laser system, synchronization between the two is inherently guaranteed. PMID:25010060
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lemaire, H.; Barat, E.; Carrel, F.
In this work, we tested maximum-likelihood expectation-maximization (MLEM) algorithms optimized for gamma imaging applications on two recent coded-mask gamma cameras. We took advantage of the respective characteristics of the GAMPIX and Caliste HD-based gamma cameras: noise reduction thanks to a mask/anti-mask procedure but limited energy resolution for GAMPIX, and high energy resolution for Caliste HD. One of our short-term perspectives is the test of MAPEM algorithms integrating prior values adapted to gamma imaging for the data to be reconstructed. (authors)
Cheng, Yufeng; Jin, Shuying; Wang, Mi; Zhu, Ying; Dong, Zhipeng
2017-06-20
The linear array push broom imaging mode is widely used for high resolution optical satellites (HROS). Using double-cameras attached by a high-rigidity support along with push broom imaging is one method to enlarge the field of view while ensuring high resolution. High accuracy image mosaicking is the key factor of the geometrical quality of complete stitched satellite imagery. This paper proposes a high accuracy image mosaicking approach based on the big virtual camera (BVC) in the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; then, each single image strip obtained by each TDI-CCD detector can be re-projected to the virtual detector of the big virtual camera coordinate system using forward-projection and backward-projection to obtain the corresponding single virtual image. After an on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinate information on the big virtual detector image plane. The paper subtly uses the concept of the big virtual camera to obtain a stitched image and the corresponding high accuracy rational function model (RFM) for concurrent post processing. Experiments verified that the proposed method can achieve seamless mosaicking while maintaining the geometric accuracy.
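The forward-projection/backward-projection stitching idea can be sketched with a toy one-dimensional geometry. The camera models below are hypothetical stand-ins for the rigorous imaging models and RFMs used on GF2; only the resampling pattern is illustrated.

```python
import numpy as np

def stitch_on_virtual_detector(strips, to_ground, from_ground, width):
    """Resample single-camera strips onto a shared big-virtual-camera detector.

    Each strip pixel is forward-projected to ground coordinates with its
    own camera model, then backward-projected into the virtual camera,
    where the strips accumulate into one seamless line.
    """
    virtual = np.full(width, np.nan)
    for cam_id, strip in strips.items():
        for col, value in enumerate(strip):
            ground = to_ground(cam_id, col)          # forward projection
            vcol = int(round(from_ground(ground)))   # backward projection
            if 0 <= vcol < width:
                virtual[vcol] = value
    return virtual

# Toy 1-D models: camera 0 images ground 0-9, camera 1 images 8-17 (overlap)
def to_ground(cam_id, col):
    return col + (0 if cam_id == 0 else 8)

def from_ground(g):
    return g

strips = {0: np.arange(10.0), 1: np.arange(8.0, 18.0)}
mosaic = stitch_on_virtual_detector(strips, to_ground, from_ground, 18)
```

In the real system the two projections go through the calibrated rigorous model and relative orientation, and the resampling would interpolate rather than round, but the data flow is the same.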
Dense Region of Impact Craters
2011-09-23
NASA's Dawn spacecraft obtained this image of the giant asteroid Vesta with its framing camera on Aug. 14, 2011. The image was taken through the camera's clear filter and has a resolution of about 260 meters per pixel.
Camera Ready to Install on Mars Reconnaissance Orbiter
2005-01-07
A telescopic camera called the High Resolution Imaging Science Experiment, or HiRISE (right), was installed onto the main structure of NASA's Mars Reconnaissance Orbiter (left) on Dec. 11, 2004, at Lockheed Martin Space Systems, Denver.
Depth profile measurement with lenslet images of the plenoptic camera
NASA Astrophysics Data System (ADS)
Yang, Peng; Wang, Zhaomin; Zhang, Wei; Zhao, Hongying; Qu, Weijuan; Zhao, Haimeng; Asundi, Anand; Yan, Lei
2018-03-01
An approach for carrying out depth profile measurement of an object with the plenoptic camera is proposed. A single plenoptic image consists of multiple lenslet images. To begin with, these images are processed directly with a refocusing technique to obtain the depth map, which does not need to align and decode the plenoptic image. Then, a linear depth calibration is applied based on the optical structure of the plenoptic camera for depth profile reconstruction. One significant improvement of the proposed method concerns the resolution of the depth map. Unlike the traditional method, our resolution is not limited by the number of microlenses inside the camera, and the depth map can be globally optimized. We validated the method with experiments on depth map reconstruction, depth calibration, and depth profile measurement, with the results indicating that the proposed approach is both efficient and accurate.
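The linear depth calibration step can be illustrated with a simple least-squares fit. The refocus-parameter/depth pairs below are invented calibration data, not values from the paper; they only show how a linear model maps a refocusing parameter to metric depth.

```python
import numpy as np

# Hypothetical calibration: refocus parameter alpha measured for targets
# placed at known depths, then a linear model depth = a*alpha + b is fit,
# mirroring the linear depth calibration described above.
alphas = np.array([0.8, 0.9, 1.0, 1.1, 1.2])
depths_mm = np.array([250.0, 375.0, 500.0, 625.0, 750.0])  # toy values
a, b = np.polyfit(alphas, depths_mm, 1)

def depth_from_alpha(alpha):
    """Map a refocus parameter to metric depth via the fitted line."""
    return a * alpha + b
```

Once calibrated, every pixel of the refocus-derived depth map can be converted to metric depth with this single linear mapping, which is why the method avoids per-microlens decoding.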
Inexpensive Neutron Imaging Cameras Using CCDs for Astronomy
NASA Astrophysics Data System (ADS)
Hewat, A. W.
We have developed inexpensive neutron imaging cameras using CCDs originally designed for amateur astronomical observation. The low-light, high resolution requirements of such CCDs are similar to those for neutron imaging, except that noise as well as cost is reduced by using slower read-out electronics. For example, we use the same 2048x2048 pixel Kodak KAI-4022 CCD as used in the high performance PCO-2000 CCD camera, but our electronics requires ∼5 sec for full-frame read-out, ten times slower than the PCO-2000. Since neutron exposures also require several seconds, this is not seen as a serious disadvantage for many applications. If higher frame rates are needed, the CCD unit on our camera can be easily swapped for a faster readout detector with similar chip size and resolution, such as the PCO-2000 or the sCMOS PCO.edge 4.2.
A Normal Incidence X-ray Telescope (NIXT) sounding rocket payload
NASA Technical Reports Server (NTRS)
Golub, Leon
1989-01-01
Work on the High Resolution X-ray (HRX) Detector Program is described. In the laboratory and flight programs, multiple copies of a general purpose set of electronics which control the camera, signal processing and data acquisition, were constructed. A typical system consists of a phosphor convertor, image intensifier, a fiber optics coupler, a charge coupled device (CCD) readout, and a set of camera, signal processing and memory electronics. An initial rocket detector prototype camera was tested in flight and performed perfectly. An advanced prototype detector system was incorporated on another rocket flight, in which a high resolution heterojunction vidicon tube was used as the readout device for the H(alpha) telescope. The camera electronics for this tube were built in-house and included in the flight electronics. Performance of this detector system was 100 percent satisfactory. The laboratory X-ray system for operation on the ground is also described.
Development of the geoCamera, a System for Mapping Ice from a Ship
NASA Astrophysics Data System (ADS)
Arsenault, R.; Clemente-Colon, P.
2012-12-01
The geoCamera produces maps of the ice surrounding an ice-capable ship by combining images from one or more digital cameras with the ship's position and attitude data. Maps are produced along the ship's path, with the achievable width and resolution depending on camera mounting height as well as camera resolution and lens parameters. Our system has produced maps up to 2000 m wide at 1 m resolution. Once installed and calibrated, the system is designed to operate automatically, producing maps in near real-time and making them available to on-board users via existing information systems. The resulting small-scale maps complement existing satellite-based products as well as on-board observations. Development versions were temporarily deployed in Antarctica on the RV Nathaniel B. Palmer in 2010 and in the Arctic on the USCGC Healy in 2011. A permanent system was deployed during the summer of 2012 on the USCGC Healy. To make the system attractive to other ships of opportunity, design goals include using existing ship systems when practical, using low-cost commercial-off-the-shelf components if additional hardware is necessary, automating the process to virtually eliminate adding to the workload of ship's technicians, and making the software components modular and flexible enough to allow more seamless integration with a ship's particular IT system.
Electronic cameras for low-light microscopy.
Rasnik, Ivan; French, Todd; Jacobson, Ken; Berland, Keith
2013-01-01
This chapter introduces electronic cameras, discusses the various parameters considered when evaluating their performance, and describes some of the key features of different camera formats. The chapter also explains the basic functioning of electronic cameras and how their properties can be exploited to optimize image quality under low-light conditions. Although there are many types of cameras available for microscopy, the most reliable type is the charge-coupled device (CCD) camera, which remains preferred for high-performance systems. If time resolution and frame rate are of no concern, slow-scan CCDs certainly offer the best available performance, both in terms of the signal-to-noise ratio and their spatial resolution. Slow-scan cameras are thus the first choice for experiments using fixed specimens, such as measurements using immunofluorescence and fluorescence in situ hybridization. However, if video rate imaging is required, one need not evaluate slow-scan CCD cameras. A very basic video CCD may suffice if samples are heavily labeled or are not perturbed by high intensity illumination. When video rate imaging is required for very dim specimens, the electron-multiplying CCD camera is probably the most appropriate at this technological stage. Intensified CCDs provide a unique tool for applications in which high-speed gating is required. Variable integration time video cameras are very attractive options if one needs to acquire images at video rate, as well as with longer integration times for less bright samples. This flexibility can facilitate many diverse applications with highly varied light levels. Copyright © 2007 Elsevier Inc. All rights reserved.
Coincidence electron/ion imaging with a fast frame camera
NASA Astrophysics Data System (ADS)
Li, Wen; Lee, Suk Kyoung; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander; Fan, Lin
2015-05-01
A new time- and position-sensitive particle detection system based on a fast frame CMOS camera is developed for coincidence electron/ion imaging. The system is composed of three major components: a conventional microchannel plate (MCP)/phosphor screen electron/ion imager, a fast frame CMOS camera and a high-speed digitizer. The system collects the positional information of ions/electrons from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the MCPs processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of electron/ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum. Efficient computer algorithms are developed to process camera frames and digitizer traces in real-time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragment pair (methyl and iodine cations) produced from strong field dissociative double ionization of methyl iodide. We further show that a time resolution of 30 ps can be achieved when measuring the electron TOF spectrum, which enables the new system to achieve a good energy resolution along the TOF axis.
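The multi-hit correlation of camera spots with TOF peaks can be sketched as a rank matching: assuming spot brightness tracks MCP pulse height (the correlation the system exploits), the brightest spot is paired with the tallest timing peak, and so on. The data below are invented; the real system operates on live camera frames and digitizer traces.

```python
def match_hits(spots, peaks):
    """Pair camera spots with TOF peaks by rank of intensity vs. pulse height.

    spots: list of (x, y, intensity); peaks: list of (time, height).
    Returns (x, y, time) triples, brightest spot matched to tallest peak.
    """
    spots_by_intensity = sorted(spots, key=lambda s: s[2], reverse=True)
    peaks_by_height = sorted(peaks, key=lambda p: p[1], reverse=True)
    return [(x, y, t) for (x, y, _), (t, _) in
            zip(spots_by_intensity, peaks_by_height)]

# Two hits in one frame: spots are (x, y, intensity), peaks are (t_ns, height)
matched = match_hits([(10, 20, 50.0), (40, 5, 200.0)],
                     [(120.0, 0.8), (95.0, 0.3)])
# → [(40, 5, 120.0), (10, 20, 95.0)]
```

This is what turns an imager with only per-frame timing into a coincidence detector: each spot inherits an arrival time from the digitizer trace.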
The Si/CdTe semiconductor Compton camera of the ASTRO-H Soft Gamma-ray Detector (SGD)
NASA Astrophysics Data System (ADS)
Watanabe, Shin; Tajima, Hiroyasu; Fukazawa, Yasushi; Ichinohe, Yuto; Takeda, Shin`ichiro; Enoto, Teruaki; Fukuyama, Taro; Furui, Shunya; Genba, Kei; Hagino, Kouichi; Harayama, Atsushi; Kuroda, Yoshikatsu; Matsuura, Daisuke; Nakamura, Ryo; Nakazawa, Kazuhiro; Noda, Hirofumi; Odaka, Hirokazu; Ohta, Masayuki; Onishi, Mitsunobu; Saito, Shinya; Sato, Goro; Sato, Tamotsu; Takahashi, Tadayuki; Tanaka, Takaaki; Togo, Atsushi; Tomizuka, Shinji
2014-11-01
The Soft Gamma-ray Detector (SGD) is one of the instrument payloads onboard ASTRO-H, and will cover a wide energy band (60-600 keV) at a background level 10 times better than instruments currently in orbit. The SGD achieves low background by combining a Compton camera scheme with a narrow field-of-view active shield. The Compton camera in the SGD is realized as a hybrid semiconductor detector system which consists of silicon and cadmium telluride (CdTe) sensors. The design of the SGD Compton camera has been finalized and the final prototype, which has the same configuration as the flight model, has been fabricated for performance evaluation. The Compton camera has overall dimensions of 12 cm×12 cm×12 cm, consisting of 32 layers of Si pixel sensors and 8 layers of CdTe pixel sensors surrounded by 2 layers of CdTe pixel sensors. The detection efficiency of the Compton camera reaches about 15% and 3% for 100 keV and 511 keV gamma rays, respectively. The pixel pitch of the Si and CdTe sensors is 3.2 mm, and the signals from all 13,312 pixels are processed by 208 ASICs developed for the SGD. Good energy resolution is afforded by semiconductor sensors and low noise ASICs, and the obtained energy resolutions with the prototype Si and CdTe pixel sensors are 1.0-2.0 keV (FWHM) at 60 keV and 1.6-2.5 keV (FWHM) at 122 keV, respectively. This results in good background rejection capability due to better constraints on Compton kinematics. Compton camera energy resolutions achieved with the final prototype are 6.3 keV (FWHM) at 356 keV and 10.5 keV (FWHM) at 662 keV, which satisfy the instrument requirements for the SGD Compton camera (better than 2%). Moreover, a low intrinsic background has been confirmed by the background measurement with the final prototype.
Pole Photogrammetry with AN Action Camera for Fast and Accurate Surface Mapping
NASA Astrophysics Data System (ADS)
Gonçalves, J. A.; Moutinho, O. F.; Rodrigues, A. C.
2016-06-01
High resolution and high accuracy terrain mapping can provide height change detection for studies of erosion, subsidence or land slip. A UAV flying at a low altitude above the ground, with a compact camera, acquires images with resolution appropriate for these change detections. However, there may be situations where different approaches are needed, either because higher resolution is required or because the operation of a drone is not possible. Pole photogrammetry, where a camera is mounted on a pole pointing to the ground, is an alternative. This paper describes a very simple system of this kind, created for topographic change detection, based on an action camera. These cameras offer high quality and very flexible image capture. Although radial distortion is normally high, it can be treated in an auto-calibration process. The system is composed of a light aluminium pole, 4 meters long, with a 12 megapixel GoPro camera. Average ground sampling distance at the image centre is 2.3 mm. The user moves along a path, taking successive photos with a time lapse of 0.5 or 1 second, and adjusting the speed in order to have an appropriate overlap, with enough redundancy for 3D coordinate extraction. Marked ground control points are surveyed with GNSS for precise georeferencing of the DSM and orthoimage that are created by structure-from-motion processing software. An average vertical accuracy of 1 cm could be achieved, which is enough for many applications, for example soil erosion. The GNSS survey in RTK mode with permanent stations is now very fast (5 seconds per point), which results, together with the image collection, in very fast field work. Since image resolution is 1/4 cm, improved accuracy can be achieved by using a total station for the control point survey, although the field work time increases.
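The millimetre-level centre GSD quoted above follows from simple nadir imaging geometry: pixel size scaled by the height-to-focal-length ratio. The focal length and pixel pitch below are illustrative guesses (the paper does not list the lens specification), chosen only to land near the reported 2.3 mm figure.

```python
def ground_sampling_distance(height_m, focal_mm, pixel_um):
    """Nadir ground sampling distance in metres:
    GSD = pixel_size * height / focal_length."""
    return pixel_um * 1e-6 * height_m / (focal_mm * 1e-3)

# Assumed values: ~3 mm focal length and ~1.55 um pixels at the 4 m
# pole height give a GSD on the order of 2 mm at the image centre.
gsd_m = ground_sampling_distance(height_m=4.0, focal_mm=3.0, pixel_um=1.55)
```

Off-nadir pixels see a longer range and an oblique surface, so the effective GSD grows away from the centre, which is why the abstract quotes the centre value.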
Super-resolved all-refocused image with a plenoptic camera
NASA Astrophysics Data System (ADS)
Wang, Xiang; Li, Lin; Hou, Guangqi
2015-12-01
This paper proposes an approach to produce super-resolved all-refocused images with the plenoptic camera. A plenoptic camera can be produced by putting a micro-lens array between the lens and the sensor of a conventional camera. This kind of camera captures both the angular and spatial information of the scene in one single shot. A sequence of digitally refocused images, focused at different depths, can be produced after processing the 4D light field captured by the plenoptic camera. The number of pixels in a refocused image equals the number of micro-lenses in the array, so the limited number of micro-lenses results in low-resolution refocused images in which fine details are lost. Such details, which are often high frequency information, are important for the in-focus part of a refocused image, so we super-resolve these in-focus parts. An image segmentation method based on random walks, operating on the depth map produced from the 4D light field data, is used to separate the foreground and background in the refocused image. A focusing evaluation function is employed to determine which refocused image has the clearest foreground part and which one has the clearest background part. Subsequently, we employ a single-image super-resolution method based on sparse signal representation to process the in-focus parts of these selected refocused images. Eventually, we obtain the super-resolved all-focus image by merging the in-focus background and foreground parts, keeping more spatial detail in the output images. Our method enhances the resolution of the refocused image, and only the refocused images with the clearest foreground and background need to be super-resolved.
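A focusing evaluation function of the kind used above to pick the sharpest foreground and background can be sketched as a variance-of-Laplacian score. The paper does not specify which measure it uses, so this is one common choice, shown on synthetic data.

```python
import numpy as np

def focus_measure(region):
    """Variance-of-Laplacian focus score: sharp (high-frequency) regions
    produce large Laplacian responses and hence a high variance."""
    lap = (-4.0 * region[1:-1, 1:-1]
           + region[:-2, 1:-1] + region[2:, 1:-1]
           + region[1:-1, :-2] + region[1:-1, 2:])
    return float(lap.var())

# Synthetic stand-ins: a textured (sharp) patch and a featureless (blurred) one
rng = np.random.default_rng(1)
sharp = rng.random((16, 16))
flat = np.full((16, 16), 0.5)
```

Scoring each candidate refocused image's foreground and background regions with such a measure selects the pair of images to feed into the super-resolution step.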
Earth Observations taken by the Expedition 14 crew
2006-11-07
ISS014-E-07480 (11 Nov. 2006) --- Dyess Air Force Base is featured in this image photographed by an Expedition 14 crewmember on the International Space Station. Dyess Air Force Base, located near the central Texas city of Abilene, is the home of the 7th Bomb Wing and 317th Airlift Groups of the United States Air Force. The Base also conducts all initial Air Force combat crew training for the B-1B Lancer aircraft. The main runway is approximately 5 kilometers in length to accommodate the large bombers and cargo aircraft at the base -- many of which are parked in parallel rows on the base tarmac. Lieutenant Colonel William E. Dyess, for whom the base is named, was a highly decorated pilot, squadron commander, and prisoner of war during World War II. The nearby town of Tye, Texas was established by the Texas and Pacific Railway in 1881, and expanded considerably following reactivation of a former air field as Dyess Air Force Base in 1956. Airfields and airports are useful sites for astronauts to hone their long camera lens photographic technique to acquire high resolution images. The sharp contrast between highly reflective linear features, such as runways, with darker agricultural fields and undisturbed land allows fine focusing of the cameras. This on-the-job training is key for obtaining high resolution imagery of Earth, as well as acquiring inspection photographs of space shuttle thermal protection tiles during continuing missions to the International Space Station.
NASA Astrophysics Data System (ADS)
Dekemper, Emmanuel; Vanhamel, Jurgen; Van Opstal, Bert; Fussen, Didier
2016-12-01
The abundance of NO2 in the boundary layer relates to air quality and pollution source monitoring. Observing the spatiotemporal distribution of NO2 above well-delimited (flue gas stacks, volcanoes, ships) or more extended sources (cities) allows for applications such as monitoring emission fluxes or studying the plume dynamic chemistry and its transport. So far, most attempts to map the NO2 field from the ground have been made with visible-light scanning grating spectrometers. Benefiting from a high retrieval accuracy, they only achieve a relatively low spatiotemporal resolution that hampers the detection of dynamic features. We present a new type of passive remote sensing instrument aiming at the measurement of the 2-D distributions of NO2 slant column densities (SCDs) with a high spatiotemporal resolution. The measurement principle has strong similarities with the popular filter-based SO2 camera as it relies on spectral images taken at wavelengths where the molecule absorption cross section is different. Contrary to the SO2 camera, the spectral selection is performed by an acousto-optical tunable filter (AOTF) capable of resolving the target molecule's spectral features. The NO2 camera capabilities are demonstrated by imaging the NO2 abundance in the plume of a coal-fired power plant. During this experiment, the 2-D distribution of the NO2 SCD was retrieved with a temporal resolution of 3 min and a spatial sampling of 50 cm (over a 250 × 250 m2 area). The detection limit was close to 5 × 1016 molecules cm-2, with a maximum detected SCD of 4 × 1017 molecules cm-2. Illustrating the added value of the NO2 camera measurements, the data reveal the dynamics of the NO to NO2 conversion in the early plume with an unprecedented resolution: from its release in the air, and for 100 m upwards, the observed NO2 plume concentration increased at a rate of 0.75-1.25 g s-1.
In joint campaigns with SO2 cameras, the NO2 camera could also help in removing the bias introduced by the NO2 interference with the SO2 spectrum.
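The two-wavelength measurement principle shared with the SO2 camera reduces, in its simplest form, to a Beer-Lambert ratio between an on-band and an off-band image. The cross sections and intensities below are illustrative round numbers, not instrument values; real retrievals also account for instrument response and background light.

```python
import math

def slant_column_density(i_on, i_off, sigma_on, sigma_off):
    """Two-wavelength Beer-Lambert retrieval sketch: the on-band image is
    attenuated more strongly by the target gas, so
    SCD = ln(I_off / I_on) / (sigma_on - sigma_off)."""
    return math.log(i_off / i_on) / (sigma_on - sigma_off)

# Illustrative cross sections (cm^2/molecule) and a known true column
sigma_on, sigma_off = 6.0e-19, 1.0e-19
true_scd = 4.0e17                       # molecules/cm^2 (the paper's max SCD)
i_off = 1000.0
i_on = i_off * math.exp(-(sigma_on - sigma_off) * true_scd)

scd = slant_column_density(i_on, i_off, sigma_on, sigma_off)
```

Applying this per pixel to each AOTF-selected image pair is what turns the spectral imager into an SCD camera.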
NASA Astrophysics Data System (ADS)
Blackford, Ethan B.; Estepp, Justin R.
2015-03-01
Non-contact, imaging photoplethysmography uses cameras to facilitate measurements including pulse rate, pulse rate variability, respiration rate, and blood perfusion by measuring characteristic changes in light absorption at the skin's surface resulting from changes in blood volume in the superficial microvasculature. Several factors may affect the accuracy of the physiological measurement, including imager frame rate, resolution, compression, lighting conditions, image background, participant skin tone, and participant motion. Before this method can gain wider use outside basic research settings, its constraints and capabilities must be well understood. Recently, we presented a novel approach utilizing a synchronized, nine-camera, semicircular array backed by measurement of an electrocardiogram and fingertip reflectance photoplethysmogram. Twenty-five individuals participated in six five-minute, controlled head motion artifact trials in front of a black and a dynamic color backdrop. Increasing the input channel space for blind source separation using the camera array was effective in mitigating error from head motion artifact. Herein we present the effects of lower frame rates at 60 and 30 (reduced from 120) frames per second and of reduced image resolution at 329x246 pixels (one-quarter of the original 658x492 pixel resolution) using bilinear and zero-order downsampling. This is the first time these factors have been examined for a multiple imager array, and the results align well with previous findings utilizing a single imager. Examining windowed pulse rates, there is little observable difference in mean absolute error or error distributions resulting from reduced frame rates or image resolution, thus lowering requirements for systems measuring pulse rate over sufficiently long time windows.
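The two downsampling schemes compared above can be sketched for a factor-two reduction: zero-order keeps every other pixel, while a bilinear-style reduction averages each 2x2 block. This is a generic illustration, not the authors' processing code.

```python
import numpy as np

def downsample_zero_order(img):
    """Zero-order (nearest-neighbour) reduction: keep every other pixel."""
    return img[::2, ::2]

def downsample_bilinear(img):
    """Bilinear-style reduction: average each 2x2 block of pixels."""
    h, w = img.shape
    return (img[:h - h % 2, :w - w % 2]
            .reshape(h // 2, 2, w // 2, 2)
            .mean(axis=(1, 3)))

img = np.arange(16, dtype=float).reshape(4, 4)
```

The block-averaging variant low-pass filters before decimating, so it suppresses aliasing that pure pixel-skipping can introduce; both halve each image dimension, quartering the pixel count as in the 658x492 to 329x246 reduction above.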
PN-CCD camera for XMM: performance of high time resolution/bright source operating modes
NASA Astrophysics Data System (ADS)
Kendziorra, Eckhard; Bihler, Edgar; Grubmiller, Willy; Kretschmar, Baerbel; Kuster, Markus; Pflueger, Bernhard; Staubert, Ruediger; Braeuninger, Heinrich W.; Briel, Ulrich G.; Meidinger, Norbert; Pfeffermann, Elmar; Reppin, Claus; Stoetter, Diana; Strueder, Lothar; Holl, Peter; Kemmer, Josef; Soltau, Heike; von Zanthier, Christoph
1997-10-01
The pn-CCD camera is developed as one of the focal plane instruments for the European photon imaging camera (EPIC) on board the x-ray multi-mirror (XMM) mission to be launched in 1999. The detector consists of four quadrants of three pn-CCDs each, which are integrated on one silicon wafer. Each CCD has 200 by 64 pixels (150 micrometers by 150 micrometers) with 280 micrometers depletion depth. One CCD of a quadrant is read out at a time, while the four quadrants can be processed independently of each other. In standard imaging mode the CCDs are read out sequentially every 70 ms. Observations of point sources brighter than 1 mCrab will be affected by photon pile-up. However, special operating modes can be used to observe bright sources up to 150 mCrab in timing mode with 30 microseconds time resolution, and very bright sources up to several Crab in burst mode with 7 microseconds time resolution. We have tested one quadrant of the EPIC pn-CCD camera at line energies from 0.52 keV to 17.4 keV at the long beam test facility Panter in the focus of the qualification mirror module for XMM. In order to test the time resolution of the system, a mechanical chopper was used to periodically modulate the beam intensity. Pulse periods down to 0.7 ms were generated. This paper describes the performance of the pn-CCD detector in timing and burst readout modes with special emphasis on energy and time resolution.
NASA Technical Reports Server (NTRS)
Grubbs, Rodney
2016-01-01
The first live High Definition Television (HDTV) from a spacecraft was in November 2006, nearly ten years before the 2016 SpaceOps Conference. Much has changed since then. Now, live HDTV from the International Space Station (ISS) is routine. HDTV cameras stream live video views of the Earth from the exterior of the ISS every day on UStream, and HDTV has even flown around the Moon on a Japanese Space Agency spacecraft. A great deal has been learned about the operations applicability of HDTV and high resolution imagery since that first live broadcast. This paper will discuss the current state of real-time and file-based HDTV and higher resolution video for space operations. A potential roadmap will be provided for further development and innovations of high-resolution digital motion imagery, including gaps in technology enablers, especially for deep space and unmanned missions. Specific topics to be covered in the paper will include: an update on radiation tolerance and performance of various camera types and sensors and ramifications on the future applicability of these types of cameras for space operations; practical experience with downlinking very large imagery files with breaks in link coverage; ramifications of larger camera resolutions like Ultra-High Definition, 6,000 [pixels] and 8,000 [pixels] in space applications; enabling technologies such as the High Efficiency Video Codec, Bundle Streaming Delay Tolerant Networking, Optical Communications, Bayer Pattern Sensors, and other similar innovations; and likely future operations scenarios for deep space missions with extreme latency and intermittent communications links.
Video Capture of Perforator Flap Harvesting Procedure with a Full High-definition Wearable Camera
2016-01-01
Summary: Recent advances in wearable recording technology have enabled high-quality video recording of several surgical procedures from the surgeon’s perspective. However, the available wearable cameras are not optimal for recording the harvesting of perforator flaps because they are too heavy and cannot be attached to the surgical loupe. The Ecous is a small high-resolution camera that was specially developed for recording loupe magnification surgery. This study investigated the use of the Ecous for recording perforator flap harvesting procedures. The Ecous SC MiCron is a high-resolution camera that can be mounted directly on the surgical loupe. The camera is light (30 g) and measures only 28 × 32 × 60 mm. We recorded 23 perforator flap harvesting procedures with the Ecous connected to a laptop through a USB cable. The elevated flaps included 9 deep inferior epigastric artery perforator flaps, 7 thoracodorsal artery perforator flaps, 4 anterolateral thigh flaps, and 3 superficial inferior epigastric artery flaps. All procedures were recorded with no equipment failure. The Ecous recorded the technical details of the perforator dissection in high resolution. The surgeon did not feel any extra stress or interference when wearing the Ecous. The Ecous is an ideal camera for recording perforator flap harvesting procedures. It fits onto the surgical loupe perfectly without creating additional stress on the surgeon. High-quality video from the surgeon’s perspective makes accurate documentation of the procedures possible, thereby enhancing surgical education and allowing critical self-reflection. PMID:27482504
Accuracy Assessment of GO Pro Hero 3 (black) Camera in Underwater Environment
NASA Astrophysics Data System (ADS)
Helmholz, P.; Long, J.; Munsie, T.; Belton, D.
2016-06-01
Modern digital cameras are increasing in quality whilst decreasing in size. In the last decade, a number of waterproof consumer digital cameras (action cameras) have become available, which often cost less than 500. A possible application of such action cameras is in the field of underwater photogrammetry, especially since the change of medium to water can in turn counteract the lens distortions present. The goal of this paper is to investigate the suitability of such action cameras for underwater photogrammetric applications, focusing on the stability of the camera and the accuracy of the derived coordinates. For this paper a series of image sequences was captured in a water tank. A calibration frame was placed in the tank, allowing the calibration of the camera and the validation of the measurements using check points. The accuracy assessment covered three test sets operating three GoPro sports cameras of the same model (Hero 3 black). The test sets included handling the camera in a controlled manner, where the camera was only dunked into the water tank, using 7 MP and 12 MP resolution, and rough handling, where the camera was shaken as well as removed from the waterproof case, using 12 MP resolution. The tests showed that camera stability was given, with a maximum standard deviation of the camera constant σc of 0.0031 mm for 7 MP (for an average c of 2.720 mm) and 0.0072 mm for 12 MP (for an average c of 3.642 mm). The residual test of the check points gave, for the 7 MP test series, a largest rms value of only 0.450 mm and a largest maximal residual of only 2.5 mm. For the 12 MP test series the maximum rms value is 0.653 mm.
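The rms check-point residuals quoted above can be computed as follows; this is a generic sketch with hypothetical coordinates, not the authors' processing chain.

```python
import numpy as np

def rms_residuals(measured: np.ndarray, reference: np.ndarray) -> float:
    """RMS of 3D check-point residual lengths (units follow the input, e.g. mm)."""
    lengths = np.linalg.norm(measured - reference, axis=1)
    return float(np.sqrt(np.mean(lengths ** 2)))

# Hypothetical check-point coordinates (mm): photogrammetric vs. surveyed values.
measured = np.array([[3.0, 0.0, 0.0], [0.0, 4.0, 0.0]])
reference = np.zeros((2, 3))
print(round(rms_residuals(measured, reference), 3))  # 3.536
```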
Design principles and applications of a cooled CCD camera for electron microscopy.
Faruqi, A R
1998-01-01
Cooled CCD cameras offer a number of advantages in recording electron microscope images with CCDs rather than film, which include: immediate availability of the image in a digital format suitable for further computer processing, high dynamic range, excellent linearity, and a high detective quantum efficiency for recording electrons. In one important respect, however, film has superior properties: the spatial resolution of the CCD detectors tested so far (in terms of point spread function or modulation transfer function) is inferior to film, and a great deal of our effort has been spent in designing detectors with improved spatial resolution. Various instrumental contributions to spatial resolution have been analysed, and in this paper we discuss the contribution of the phosphor-fibre optics system to this measurement. We have evaluated the performance of a number of detector components and parameters, e.g. different phosphors (and a scintillator) and optical coupling with lens or fibre optics with various demagnification factors, to improve the detector performance. The camera described in this paper, which is based on this analysis, uses a tapered fibre optics coupling between the phosphor and the CCD and is installed on a Philips CM12 electron microscope equipped to perform cryo-microscopy. The main use of the camera so far has been in recording electron diffraction patterns from two-dimensional crystals of bacteriorhodopsin--from wild type and from different trapped states during the photocycle. As one example of the type of data obtained with the CCD camera, a two-dimensional Fourier projection map from the trapped O-state is also included. With faster computers, it will soon be possible to undertake this type of work on an on-line basis. Also, with improvements in detector size and resolution, CCD detectors, already ideal for diffraction, will be able to compete with film in the recording of high resolution images.
Rapid orthophoto development system.
DOT National Transportation Integrated Search
2013-06-01
The DMC system procured in the project represented state-of-the-art, large-format digital aerial camera systems at the start of : project. DMC is based on the frame camera model, and to achieve large ground coverage with high spatial resolution, the ...
Soliman, Mohamed Kamel; Sadiq, Mohammad Ali; Agarwal, Aniruddha; Sarwar, Salman; Hassan, Muhammad; Hanout, Mostafa; Graf, Frank; High, Robin; Do, Diana V; Nguyen, Quan Dong; Sepah, Yasir J
2016-01-01
To assess cone density as a marker of early signs of retinopathy in patients with type II diabetes mellitus. An adaptive optics (AO) retinal camera (rtx1™; Imagine Eyes, Orsay, France) was used to acquire images of parafoveal cones from patients with type II diabetes mellitus with or without retinopathy and from healthy controls with no known systemic or ocular disease. Cone mosaic was captured at 0° and 2° eccentricities along the horizontal and vertical meridians. The density of the parafoveal cones was calculated within 100×100-μm squares located 500 μm from the foveal center along the orthogonal meridians. Manual corrections of the automated counting were then performed by two masked graders. Cone density measurements were evaluated with an ANOVA that consisted of one between-subjects factor (stage of retinopathy) and within-subject factors. The ANOVA model included a complex covariance structure to account for correlations between the levels of the within-subject factors. Ten healthy participants (20 eyes) and 25 patients (29 eyes) with type II diabetes mellitus were recruited into the study. The mean (± standard deviation [SD]) age of the healthy participants (Control group), patients with diabetes without retinopathy (No DR group), and patients with diabetic retinopathy (DR group) was 55 ± 8, 53 ± 8, and 52 ± 9 years, respectively. The cone density was significantly lower in the moderate nonproliferative diabetic retinopathy (NPDR) and severe NPDR/proliferative DR groups compared to the Control, No DR, and mild NPDR groups (P < 0.05). No correlation was found between cone density and the level of hemoglobin A1c (HbA1c) or the duration of diabetes. The extent of photoreceptor loss on AO imaging may correlate positively with severity of DR in patients with type II diabetes mellitus. Photoreceptor loss may be more pronounced among patients with advanced stages of DR due to higher risk of macular edema and its sequelae.
NASA Technical Reports Server (NTRS)
1996-01-01
PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.
An Overview of the CBERS-2 Satellite and Comparison of the CBERS-2 CCD Data with the L5 TM Data
NASA Technical Reports Server (NTRS)
Chandler, Gyanesh
2007-01-01
The CBERS satellite carries on board a multi-sensor payload with different spatial resolutions and collection frequencies: the HRCCD (High Resolution CCD Camera), the IRMSS (Infrared Multispectral Scanner), and the WFI (Wide-Field Imager). The CCD and WFI cameras operate in the VNIR region, while the IRMSS operates in the SWIR and thermal regions. In addition to the imaging payload, the satellite carries a Data Collection System (DCS) and a Space Environment Monitor (SEM).
NASA Technical Reports Server (NTRS)
Albrecht, R.; Barbieri, C.; Adorf, H.-M.; Corrain, G.; Gemmo, A.; Greenfield, P.; Hainaut, O.; Hook, R. N.; Tholen, D. J.; Blades, J. C.
1994-01-01
Images of the Pluto-Charon system were obtained with the Faint Object Camera (FOC) of the Hubble Space Telescope (HST) after the refurbishment of the telescope. The images are of superb quality, allowing the determination of radii, fluxes, and albedos. Attempts were made to improve the resolution of the already diffraction limited images by image restoration. These yielded indications of surface albedo distributions qualitatively consistent with models derived from observations of Pluto-Charon mutual eclipses.
Spectral types for objects in the Kiso survey. IV - Data for 81 stars
NASA Technical Reports Server (NTRS)
Wegner, Gary; Mcmahan, Robert K.
1988-01-01
Spectroscopy and spectral types for 81 ultraviolet-excess objects found in the Kiso Schmidt-camera survey are reported. The data were secured with the McGraw-Hill 1.3 m telescope at 8-Å resolution covering the wavelength interval 4000-7200 Å using the Mark II spectrograph. Descriptions of the spectra of some of the more peculiar objects found in this sample are given; the sample includes 14 subdwarfs, 23 definite DA white dwarfs, including a magnetic one, one DQ white dwarf, eight quasars and emission-line objects, and a new composite DA + dM system. More spectroscopy of the new cataclysmic variable KUV 01584-0939 and a possibly related object is also described.
Discovery of KPS-1b, a Transiting Hot-Jupiter, with an Amateur Telescope Setup (Abstract)
NASA Astrophysics Data System (ADS)
Benni, P.; Burdanov, A.; Krushinsky, V.; Sokov, E.
2018-06-01
(Abstract only) Using readily available amateur equipment, a wide-field telescope (Celestron RASA, 279 mm f/2.2) coupled with an SBIG ST-8300M camera was set up at a private residence in a fairly light-polluted suburban town thirty miles outside of Boston, Massachusetts. This telescope participated in the Kourovka Planet Search (KPS) prototype survey, along with a MASTER-II Ural wide-field telescope near Yekaterinburg, Russia. One goal was to determine whether higher resolution imaging (about 2 arcsec/pixel) with much lower sky coverage can practically detect exoplanet transits compared to the successful very wide-field exoplanet surveys (KELT, XO, WASP, HATnet, TrES, Qatar, etc.), which used arrays of small aperture telescopes coupled to CCDs.
First light observations with TIFR Near Infrared Imaging Camera (TIRCAM-II)
NASA Astrophysics Data System (ADS)
Ojha, D. K.; Ghosh, S. K.; D'Costa, S. L. A.; Naik, M. B.; Sandimani, P. R.; Poojary, S. S.; Bhagat, S. B.; Jadhav, R. B.; Meshram, G. S.; Bakalkar, C. B.; Ramaprakash, A. N.; Mohan, V.; Joshi, J.
TIFR near infrared imaging camera (TIRCAM-II) is based on the Aladdin III Quadrant InSb focal plane array (512×512 pixels; 27.6 μm pixel size; sensitive between 1 - 5.5 μm). TIRCAM-II had its first engineering run with the 2 m IUCAA telescope at Girawali during February - March 2011. The first light observations with TIRCAM-II were quite successful. Several infrared standard stars, the Trapezium Cluster in the Orion region, McNeil's nebula, etc., were observed in the J and K bands and in a narrow band at 3.6 μm (nbL). In the nbL band, some bright stars could be detected from the Girawali site. The performance of TIRCAM-II is discussed in the light of preliminary observations in near infrared bands.
Low-cost mobile phone microscopy with a reversed mobile phone camera lens.
Switz, Neil A; D'Ambrosio, Michael V; Fletcher, Daniel A
2014-01-01
The increasing capabilities and ubiquity of mobile phones and their associated digital cameras offer the possibility of extending low-cost, portable diagnostic microscopy to underserved and low-resource areas. However, mobile phone microscopes created by adding magnifying optics to the phone's camera module have been unable to make use of the full image sensor due to the specialized design of the embedded camera lens, exacerbating the tradeoff between resolution and field of view inherent to optical systems. This tradeoff is acutely felt for diagnostic applications, where the speed and cost of image-based diagnosis is related to the area of the sample that can be viewed at sufficient resolution. Here we present a simple and low-cost approach to mobile phone microscopy that uses a reversed mobile phone camera lens added to an intact mobile phone to enable high quality imaging over a significantly larger field of view than standard microscopy. We demonstrate use of the reversed lens mobile phone microscope to identify red and white blood cells in blood smears and soil-transmitted helminth eggs in stool samples.
Low-Cost Mobile Phone Microscopy with a Reversed Mobile Phone Camera Lens
Fletcher, Daniel A.
2014-01-01
The increasing capabilities and ubiquity of mobile phones and their associated digital cameras offer the possibility of extending low-cost, portable diagnostic microscopy to underserved and low-resource areas. However, mobile phone microscopes created by adding magnifying optics to the phone's camera module have been unable to make use of the full image sensor due to the specialized design of the embedded camera lens, exacerbating the tradeoff between resolution and field of view inherent to optical systems. This tradeoff is acutely felt for diagnostic applications, where the speed and cost of image-based diagnosis is related to the area of the sample that can be viewed at sufficient resolution. Here we present a simple and low-cost approach to mobile phone microscopy that uses a reversed mobile phone camera lens added to an intact mobile phone to enable high quality imaging over a significantly larger field of view than standard microscopy. We demonstrate use of the reversed lens mobile phone microscope to identify red and white blood cells in blood smears and soil-transmitted helminth eggs in stool samples. PMID:24854188
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacPhee, A. G., E-mail: macphee2@llnl.gov; Hatch, B. W.; Bell, P. M.
2016-11-15
We report simulations and experiments that demonstrate an increase in spatial resolution of the NIF core diagnostic x-ray streak cameras by at least a factor of two, especially off axis. A design was achieved by using a corrector electron optic to flatten the field curvature at the detector plane and corroborated by measurement. In addition, particle in cell simulations were performed to identify the regions in the streak camera that contribute the most to space charge blurring. These simulations provide a tool for convolving synthetic pre-shot spectra with the instrument function so signal levels can be set to maximize dynamic range for the relevant part of the streak record.
MacPhee, A G; Dymoke-Bradshaw, A K L; Hares, J D; Hassett, J; Hatch, B W; Meadowcroft, A L; Bell, P M; Bradley, D K; Datte, P S; Landen, O L; Palmer, N E; Piston, K W; Rekow, V V; Hilsabeck, T J; Kilkenny, J D
2016-11-01
We report simulations and experiments that demonstrate an increase in spatial resolution of the NIF core diagnostic x-ray streak cameras by at least a factor of two, especially off axis. A design was achieved by using a corrector electron optic to flatten the field curvature at the detector plane and corroborated by measurement. In addition, particle in cell simulations were performed to identify the regions in the streak camera that contribute the most to space charge blurring. These simulations provide a tool for convolving synthetic pre-shot spectra with the instrument function so signal levels can be set to maximize dynamic range for the relevant part of the streak record.
A neutron camera system for MAST.
Cecconello, M; Turnyanskiy, M; Conroy, S; Ericsson, G; Ronchi, E; Sangaroon, S; Akers, R; Fitzgerald, I; Cullen, A; Weiszflog, M
2010-10-01
A prototype neutron camera has been developed and installed at MAST as part of a feasibility study for a multichord neutron camera system, with the aim to measure the spatially and time resolved 2.45 MeV neutron emissivity profile. Liquid scintillators coupled to a fast digitizer are used for neutron/gamma-ray digital pulse shape discrimination. The preliminary results obtained clearly show the capability of this diagnostic to measure neutron emissivity profiles with sufficient time resolution to study the effect of fast ion loss and redistribution due to magnetohydrodynamic activity. A minimum time resolution of 2 ms has been achieved with a modest 1.5 MW of neutral beam injection heating, with a measured neutron count rate of a few hundred kHz.
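The abstract does not spell out the discrimination algorithm; a standard digital approach for liquid scintillators is the charge-comparison method sketched below. All sample indices and decay constants here are illustrative assumptions, not values from the paper.

```python
import numpy as np

def tail_to_total(pulse: np.ndarray, baseline_samples: int = 20,
                  tail_start: int = 15) -> float:
    """Charge-comparison PSD: fraction of integrated light in the pulse tail.
    Neutrons excite more of the scintillator's slow decay component than
    gamma rays, so neutron pulses yield a larger tail fraction."""
    p = pulse - pulse[:baseline_samples].mean()      # baseline subtraction
    peak = int(np.argmax(p))
    total = p[max(peak - 5, 0):].sum()               # integral from just before the peak
    tail = p[peak + tail_start:].sum()               # slow-component integral
    return float(tail / total)

# Synthetic pulses: same fast decay, the "neutron" has an extra slow component.
t = np.arange(200.0)
fast = np.exp(-np.clip(t - 20, 0, None) / 5) * (t >= 20)
slow = np.exp(-np.clip(t - 20, 0, None) / 80) * (t >= 20)
gamma_pulse = fast
neutron_pulse = 0.8 * fast + 0.2 * slow
assert tail_to_total(neutron_pulse) > tail_to_total(gamma_pulse)
```

Thresholding this ratio separates the neutron and gamma populations event by event in the digitized data stream.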
Colors of active regions on comet 67P
NASA Astrophysics Data System (ADS)
Oklay, N.; Vincent, J.-B.; Sierks, H.; Besse, S.; Fornasier, S.; Barucci, M. A.; Lara, L.; Scholten, F.; Preusker, F.; Lazzarin, M.; Pajola, M.; La Forgia, F.
2015-10-01
The OSIRIS (Optical, Spectroscopic, and Infrared Remote Imaging System) scientific imager (Keller et al. 2007) has been successfully delivering images of comet 67P/Churyumov-Gerasimenko from both its wide angle camera (WAC) and narrow angle camera (NAC) since the arrival of ESA's Rosetta spacecraft at the comet. Both cameras are equipped with filters covering the wavelength range of about 200 nm to 1000 nm. The comet nucleus is mapped with different combinations of the filters at resolutions up to 15 cm/px. Besides the determination of the surface morphology in great detail (Thomas et al. 2015), such high resolution images provided us a means to unambiguously link some activity in the coma to a series of pits on the nucleus surface (Vincent et al. 2015).
New concept high-speed and high-resolution color scanner
NASA Astrophysics Data System (ADS)
Nakashima, Keisuke; Shinoda, Shin'ichi; Konishi, Yoshiharu; Sugiyama, Kenji; Hori, Tetsuya
2003-05-01
We have developed a new concept high-speed and high-resolution color scanner (Blinkscan) using digital camera technology. With our most advanced sub-pixel image processing technology, approximately 12 million pixels of image data can be captured. The high resolution imaging capability allows various uses such as OCR, color document reading, and use as a document camera. The scan time is only about 3 seconds for a letter-size sheet. Blinkscan scans documents placed "face up" on its scan stage, without any special illumination lights. Because a high-resolution color document can be input into a PC easily and at high speed, a paperless system can be built with little effort. The unit is small and occupies little area, so it can be placed on an individual desk. Blinkscan offers the usability of a digital camera and the accuracy of a flatbed scanner with high-speed processing. Several hundred Blinkscan units have been shipped, mainly for receptionist operations at banks and securities firms. We show the high-speed and high-resolution architecture of Blinkscan. A comparison of operation time with conventional image capture devices makes the advantage of Blinkscan clear. We also present an image quality evaluation for a variety of environments, covering effects such as geometric distortion and non-uniformity of brightness.
HST Solar Arrays photographed by Electronic Still Camera
NASA Technical Reports Server (NTRS)
1993-01-01
This view, backdropped against the blackness of space shows one of two original Solar Arrays (SA) on the Hubble Space Telescope (HST). The scene was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Electronic still photography is a technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality.
The prototype cameras for trans-Neptunian automatic occultation survey
NASA Astrophysics Data System (ADS)
Wang, Shiang-Yu; Ling, Hung-Hsu; Hu, Yen-Sang; Geary, John C.; Chang, Yin-Chang; Chen, Hsin-Yo; Amato, Stephen M.; Huang, Pin-Jie; Pratlong, Jerome; Szentgyorgyi, Andrew; Lehner, Matthew; Norton, Timothy; Jorden, Paul
2016-08-01
The Transneptunian Automated Occultation Survey (TAOS II) is a three-robotic-telescope project to detect stellar occultation events generated by TransNeptunian Objects (TNOs). The TAOS II project aims to monitor about 10000 stars simultaneously at 20 Hz to enable a statistically significant event rate. The TAOS II camera is designed to cover the 1.7 degree diameter field of view of the 1.3 m telescope with a mosaic of ten 4.5k×2k CMOS sensors. The new CMOS sensor (CIS 113) has a back-illuminated thinned structure and high sensitivity, providing performance similar to that of back-illuminated thinned CCDs. Due to the requirements of high performance and high speed, the development of the new CMOS sensor is still in progress. Before the science arrays are delivered, a prototype camera has been developed to help with the commissioning of the robotic telescope system. The prototype camera uses the small-format e2v CIS 107 device but with the same dewar and similar control electronics as the TAOS II science camera. The sensors, mounted on a single Invar plate, are cooled by a cryogenic cooler to the operating temperature of about 200 K, as for the science array. The Invar plate is connected to the dewar body through a supporting ring with three G10 bipods. The control electronics consist of an analog part and a Xilinx FPGA based digital circuit. One FPGA is needed to control and process the signal from each CMOS sensor for 20 Hz region-of-interest (ROI) readout.
Advances in Low-Temperature Tungsten Spectroscopy Capability to Quantify DIII-D Divertor Erosion
Abrams, Tyler; Thomas, Daniel M.; Unterberg, Ezekial A.; ...
2018-01-05
Recent emphasis of tungsten (W) plasma-materials interactions (PMI) experiments on DIII-D has made it essential to enhance the W I and W II measurement capabilities of its spectroscopy diagnostic suite to acquire W sourcing measurements with high temporal, spatial, and wavelength resolution. To this end, four new viewing chords for the Multichordal Divertor Spectrometer (MDS) and divertor filterscope systems were installed, leading to a 7x increase in blue light sensitivity. W I and low-Z impurity line identifications were performed in the 3995-4030 Å region, placing wavelengths within 0.1 Å of the NIST values. A novel method was also developed for the DIII-D high temporal resolution filterscopes to distinguish between W I light and background contamination, important due to the relatively weak intensity of this line, using two bandpass filters with different widths but the same center wavelength. Lastly, fast imaging of the W I 4008.75 Å spectral line with a PCO Pixelfly VGA 200/205 camera allowed for discrimination between ELMy and intra-ELM W I emission profiles with very high (~1 mm) spatial resolution.
Advances in Low-Temperature Tungsten Spectroscopy Capability to Quantify DIII-D Divertor Erosion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abrams, Tyler; Thomas, Daniel M.; Unterberg, Ezekial A.
Recent emphasis of tungsten (W) plasma-materials interactions (PMI) experiments on DIII-D has made it essential to enhance the W I and W II measurement capabilities of its spectroscopy diagnostic suite to acquire W sourcing measurements with high temporal, spatial, and wavelength resolution. To this end, four new viewing chords for the Multichordal Divertor Spectrometer (MDS) and divertor filterscope systems were installed, leading to a 7x increase in blue light sensitivity. W I and low-Z impurity line identifications were performed in the 3995-4030 Å region, placing wavelengths within 0.1 Å of the NIST values. A novel method was also developed for the DIII-D high temporal resolution filterscopes to distinguish between W I light and background contamination, important due to the relatively weak intensity of this line, using two bandpass filters with different widths but the same center wavelength. Lastly, fast imaging of the W I 4008.75 Å spectral line with a PCO Pixelfly VGA 200/205 camera allowed for discrimination between ELMy and intra-ELM W I emission profiles with very high (~1 mm) spatial resolution.
Electronic Still Camera Project on STS-48
NASA Technical Reports Server (NTRS)
1991-01-01
On behalf of NASA, the Office of Commercial Programs (OCP) has signed a Technical Exchange Agreement (TEA) with Autometric, Inc. (Autometric) of Alexandria, Virginia. The purpose of this agreement is to evaluate and analyze a high-resolution Electronic Still Camera (ESC) for potential commercial applications. During the mission, Autometric will provide unique photo analysis and hard-copy production. Once the mission is complete, Autometric will furnish NASA with an analysis of the ESC's capabilities. Electronic still photography is a developing technology providing the means by which a hand-held camera electronically captures and produces a digital image with resolution approaching film quality. The digital image, stored on removable hard disks or small optical disks, can be converted to a format suitable for downlink transmission, or it can be enhanced using image processing software. The on-orbit ability to enhance or annotate high-resolution images and then downlink these images in real-time will greatly improve Space Shuttle and Space Station capabilities in Earth observations and on-board photo documentation.
Li, Jin; Liu, Zilong
2017-07-24
Remote sensing cameras in the visible/near infrared range are essential tools in Earth observation, deep-space exploration, and celestial navigation. Their imaging performance, i.e. image quality here, directly determines the target-observation performance of a spacecraft, and even the successful completion of a space mission. Unfortunately, the camera itself (its optical system, image sensor, and electronics) limits the on-orbit imaging performance. Here, we demonstrate an on-orbit high-resolution imaging method based on the invariable modulation transfer function (IMTF) of cameras. The IMTF, which is stable and invariable under changes of ground targets, atmosphere, and environment on orbit or on the ground, and depends only on the camera itself, is extracted using a pixel optical focal plane (PFP). The PFP produces multiple spatial frequency targets, which are used to calculate the IMTF at different frequencies. The resulting IMTF, in combination with a constrained least-squares filter, compensates for the IMTF, which amounts to removing the imaging effects imposed by the camera itself. This method is experimentally confirmed. Experiments on an on-orbit panchromatic camera indicate that the proposed method increases the average gradient 6.5-fold, the edge intensity 3.3-fold, and the MTF value 1.56-fold compared to the case where the IMTF is not used. This opens a door to pushing past the limitations of the camera itself, enabling high-resolution on-orbit optical imaging.
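A constrained least-squares restoration of the kind named in the abstract can be sketched in the frequency domain. The Laplacian smoothness constraint and the assumption of a real, zero-phase MTF are standard textbook choices made for this sketch, not details taken from the paper.

```python
import numpy as np

def cls_restore(image: np.ndarray, mtf: np.ndarray, gamma: float = 0.01) -> np.ndarray:
    """Constrained least-squares restoration in the frequency domain.
    `mtf` is the camera's (invariable) MTF sampled on the image's FFT grid,
    assumed real and zero-phase; `gamma` weights a Laplacian smoothness
    constraint that keeps noise from being amplified at high frequencies."""
    lap = np.zeros_like(image, dtype=float)          # discrete Laplacian kernel
    lap[0, 0] = -4.0
    lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = 1.0
    L = np.fft.fft2(lap)                             # constraint's frequency response
    G = np.fft.fft2(image)
    H = mtf
    F = np.conj(H) * G / (np.abs(H) ** 2 + gamma * np.abs(L) ** 2)
    return np.real(np.fft.ifft2(F))
```

With `gamma = 0` and an ideal (all-ones) MTF the filter is the identity; increasing `gamma` trades residual blur against noise amplification.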
NASA Technical Reports Server (NTRS)
1992-01-01
The IMAX camera system is used to record on-orbit activities of interest to the public. Because of the extremely high resolution of the IMAX camera, projector, and audio systems, the audience is afforded a motion picture experience unlike any other. IMAX and OMNIMAX motion picture systems were designed to create motion picture images of superior quality and audience impact. The IMAX camera is a 65 mm, single-lens, reflex-viewing design with a 15-perforation-per-frame horizontal pull-across. The frame size is 2.06 x 2.77 inches. Film travels through the camera at a rate of 336 feet per minute when the camera is running at the standard 24 frames/sec.
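The quoted transport speed follows from the frame rate and the 15-perforation pull-across, assuming the standard 65/70 mm perforation pitch of 0.187 in (a value not stated in the text):

```python
# Film transport speed implied by the record's figures; the perforation pitch
# is an assumed standard value, not taken from the text.
frames_per_sec = 24
perfs_per_frame = 15
perf_pitch_in = 0.187  # assumed standard 65/70 mm (KS) perforation pitch, inches
feet_per_minute = frames_per_sec * perfs_per_frame * perf_pitch_in * 60 / 12
print(round(feet_per_minute, 1))  # 336.6, consistent with the quoted 336 ft/min
```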
Research relative to high resolution camera on the advanced X-ray astrophysics facility
NASA Technical Reports Server (NTRS)
1986-01-01
The HRC (High Resolution Camera) is a photon counting instrument to be flown on the Advanced X-Ray Astrophysics Facility (AXAF). It is a large field of view, high angular resolution, detector for the x-ray telescope. The HRC consists of a CsI coated microchannel plate (MCP) acting as a soft x-ray photocathode, followed by a second MCP for high electronic gain. The MCPs are readout by a crossed grid of resistively coupled wires to provide high spatial resolution along with timing and pulse height data. The instrument will be used in two modes, as a direct imaging detector with a limiting sensitivity of 10(exp -15) ergs/sq cm/sec in a 10(exp 5) second exposure, and as a readout for an objective transmission grating providing spectral resolution of several hundreds to thousands.
Object recognition through turbulence with a modified plenoptic camera
NASA Astrophysics Data System (ADS)
Wu, Chensheng; Ko, Jonathan; Davis, Christopher
2015-03-01
Atmospheric turbulence adds accumulated distortion to images obtained by cameras and surveillance systems. When the turbulence grows stronger or when the object is further away from the observer, increasing the recording device resolution helps little to improve the quality of the image. Many sophisticated methods to correct the distorted images have been invented, such as using a known feature on or near the target object to perform a deconvolution process, or use of adaptive optics. However, most of the methods depend heavily on the object's location, and optical ray propagation through the turbulence is not directly considered. Alternatively, selecting a lucky image over many frames provides a feasible solution, but at the cost of time. In our work, we propose an innovative approach to improving image quality through turbulence by making use of a modified plenoptic camera. This type of camera adds a micro-lens array to a traditional high-resolution camera to form a semi-camera array that records duplicate copies of the object as well as "superimposed" turbulence at slightly different angles. By performing several steps of image reconstruction, turbulence effects will be suppressed to reveal more details of the object independently (without finding references near the object). Meanwhile, the redundant information obtained by the plenoptic camera raises the possibility of performing lucky image algorithmic analysis with fewer frames, which is more efficient. In our work, the details of our modified plenoptic cameras and image processing algorithms will be introduced. The proposed method can be applied to coherently illuminated objects as well as incoherently illuminated objects. Our result shows that the turbulence effect can be effectively suppressed by the plenoptic camera in the hardware layer and a reconstructed "lucky image" can help the viewer identify the object even when a "lucky image" by ordinary cameras is not achievable.
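Lucky-image selection, mentioned above as the frame-selection baseline the plenoptic approach makes more efficient, reduces to ranking frames by a sharpness metric. A minimal sketch follows; the choice of mean gradient magnitude as the metric is an assumption for illustration.

```python
import numpy as np

def sharpness(frame: np.ndarray) -> float:
    """Mean gradient magnitude: a simple image-quality (sharpness) score."""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def lucky_image(frames) -> np.ndarray:
    """Select the frame least degraded by turbulence (the sharpest one)."""
    return max(frames, key=sharpness)

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))               # frame with high-frequency detail
blurred = np.full((64, 64), sharp.mean())  # featureless, heavily blurred frame
assert lucky_image([blurred, sharp]) is sharp
```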
Improved scintimammography using a high-resolution camera mounted on an upright mammography gantry
NASA Astrophysics Data System (ADS)
Itti, Emmanuel; Patt, Bradley E.; Diggles, Linda E.; MacDonald, Lawrence; Iwanczyk, Jan S.; Mishkin, Fred S.; Khalkhali, Iraj
2003-01-01
99mTc-sestamibi scintimammography (SMM) is a useful adjunct to conventional X-ray mammography (XMM) for the assessment of breast cancer. An increasing number of studies have emphasized fair sensitivity values for the detection of tumors >1 cm, compared to XMM, particularly in situations where high glandular breast density makes mammographic interpretation difficult. In addition, SMM has demonstrated high specificity for cancer compared to various functional and anatomic imaging modalities. However, large field-of-view (FOV) gamma cameras are difficult to position close to the breasts, which decreases spatial resolution and, subsequently, the sensitivity of detection for tumors <1 cm. New dedicated detectors featuring small FOV and increased spatial resolution have recently been developed. In this setting, improvement in tumor detection sensitivity, particularly for small cancers, is expected. At the Division of Nuclear Medicine, Harbor-UCLA Medical Center, we have performed over 2000 SMM studies within the last 9 years. We have recently used a dedicated breast camera (LumaGEM™) featuring a 12.8×12.8 cm² FOV and an array of 2×2×6 mm³ discrete crystals coupled to a photon-sensitive photomultiplier tube readout. This camera is mounted on a mammography gantry allowing upright imaging, medial positioning, and use of breast compression. Preliminary data from the first 10 patients indicate significant enhancement of spatial resolution by comparison with standard imaging. Larger series will be needed to draw conclusions on sensitivity and specificity.
Real-time imaging of methane gas leaks using a single-pixel camera.
Gibson, Graham M; Sun, Baoqing; Edgar, Matthew P; Phillips, David B; Hempler, Nils; Maker, Gareth T; Malcolm, Graeme P A; Padgett, Miles J
2017-02-20
We demonstrate a camera which can image methane gas at video rates, using only a single-pixel detector and structured illumination. The light source is an infrared laser diode operating at 1.651 μm, tuned to an absorption line of methane gas. The light is structured using an addressable micromirror array to pattern the laser output with a sequence of Hadamard masks. The resulting backscattered light is recorded using a single-pixel InGaAs detector, which provides a measure of the correlation between the projected patterns and the gas distribution in the scene. Knowledge of this correlation and the patterns allows an image of the gas in the scene to be reconstructed. For the application of locating gas leaks, the frame rate of the camera is of primary importance; in this case it is inversely proportional to the square of the linear resolution. Here we demonstrate gas imaging at ~25 fps while using 256 mask patterns (corresponding to an image resolution of 16×16). To aid the task of locating the source of the gas emission, we overlay an upsampled and smoothed version of the low-resolution gas image onto a high-resolution color image of the scene, recorded using a standard CMOS camera. We demonstrate, with an illumination of only 5 mW across the field of view, imaging of a methane gas leak of ~0.2 litres/minute from a distance of ~1 metre.
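The reconstruction step described in this abstract, correlating single-pixel readings against known Hadamard masks, can be sketched as follows. This is a minimal simulation, not the authors' code: the 16×16 scene and the plume values are illustrative placeholders.

```python
import numpy as np
from scipy.linalg import hadamard

# Hypothetical 16x16 scene at the resolution quoted in the abstract;
# values stand in for backscattered intensity, not real methane data.
n = 16                      # linear resolution -> N = n*n = 256 masks
N = n * n
scene = np.zeros(N)
scene[100:110] = 1.0        # stand-in for a localized gas plume

H = hadamard(N).astype(float)   # +/-1 Hadamard patterns, one row per mask
measurements = H @ scene        # single-pixel detector: one number per mask

# Hadamard matrices satisfy H @ H.T = N * I, so reconstruction is a
# scaled transpose multiply rather than a general matrix inversion.
recon = (H.T @ measurements) / N
```

Because one detector reading is needed per mask, acquisition time scales with N = n², which is consistent with the abstract's statement that frame rate is inversely proportional to the square of the linear resolution.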
Improved camera for better X-ray powder photographs
NASA Technical Reports Server (NTRS)
Parrish, W.; Vajda, I. E.
1969-01-01
Camera obtains powder-type photographs of single crystals or polycrystalline powder specimens. X-ray diffraction photographs of a powder specimen are characterized by improved resolution and greater intensity. A reasonably good powder pattern of small samples can be produced for identification purposes.
Developing a CCD camera with high spatial resolution for RIXS in the soft X-ray range
NASA Astrophysics Data System (ADS)
Soman, M. R.; Hall, D. J.; Tutt, J. H.; Murray, N. J.; Holland, A. D.; Schmitt, T.; Raabe, J.; Schmitt, B.
2013-12-01
The Super Advanced X-ray Emission Spectrometer (SAXES) at the Swiss Light Source contains a high resolution Charge-Coupled Device (CCD) camera used for Resonant Inelastic X-ray Scattering (RIXS). Using the current CCD-based camera system, the energy-dispersive spectrometer has an energy resolution (E/ΔE) of approximately 12,000 at 930 eV. A recent study predicted that through an upgrade to the grating and camera system, the energy resolution could be improved by a factor of 2. In order to achieve this goal in the spectral domain, the spatial resolution of the CCD must be improved to better than 5 μm from the current 24 μm spatial resolution (FWHM). The 400 eV-1600 eV energy X-rays detected by this spectrometer primarily interact within the field free region of the CCD, producing electron clouds which will diffuse isotropically until they reach the depleted region and buried channel. This diffusion of the charge leads to events which are split across several pixels. Through the analysis of the charge distribution across the pixels, various centroiding techniques can be used to pinpoint the spatial location of the X-ray interaction to the sub-pixel level, greatly improving the spatial resolution achieved. Using the PolLux soft X-ray microspectroscopy endstation at the Swiss Light Source, a beam of X-rays of energies from 200 eV to 1400 eV can be focused down to a spot size of approximately 20 nm. Scanning this spot across the 16 μm square pixels allows the sub-pixel response to be investigated. Previous work has demonstrated the potential improvement in spatial resolution achievable by centroiding events in a standard CCD. An Electron-Multiplying CCD (EM-CCD) has been used to improve the signal to effective readout noise ratio achieved resulting in a worst-case spatial resolution measurement of 4.5±0.2 μm and 3.9±0.1 μm at 530 eV and 680 eV respectively. 
A method is described that allows the contribution of the X-ray spot size to be deconvolved from these worst-case resolution measurements, estimating the spatial resolution to be approximately 3.5 μm and 3.0 μm at 530 eV and 680 eV, well below the resolution limit of 5 μm required to improve the spectral resolution by a factor of 2.
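The centroiding idea used in this CCD study, estimating the sub-pixel interaction point from the charge split across neighbouring pixels, can be sketched with a centre-of-mass estimator. The 3×3 patch values below are made up for illustration; only the 16 μm pixel pitch comes from the abstract.

```python
import numpy as np

# Hypothetical 3x3 pixel neighbourhood around an X-ray event whose charge
# cloud has diffused across pixels (arbitrary ADU values, not SAXES data).
patch = np.array([[ 2.,  8.,  1.],
                  [ 6., 40.,  9.],
                  [ 1.,  7.,  2.]])

pixel_pitch_um = 16.0  # CCD pixel size quoted in the abstract

# Centre-of-mass centroid: weight each pixel coordinate by its charge.
ys, xs = np.mgrid[0:3, 0:3]
total = patch.sum()
cx = (xs * patch).sum() / total   # sub-pixel column position
cy = (ys * patch).sum() / total   # sub-pixel row position

# Sub-pixel position in microns relative to the patch corner.
x_um, y_um = cx * pixel_pitch_um, cy * pixel_pitch_um
```

Real pipelines refine this with noise thresholds and event-shape cuts; the centre of mass is the simplest member of the family of centroiding techniques the abstract refers to.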
The iQID Camera: An Ionizing-Radiation Quantum Imaging Detector
Miller, Brian W.; Gregory, Stephanie J.; Fuller, Erin S.; ...
2014-06-11
We have developed and tested a novel ionizing-radiation Quantum Imaging Detector (iQID). This scintillation-based detector was originally developed as a high-resolution gamma-ray imager, called BazookaSPECT, for use in single-photon emission computed tomography (SPECT). Recently, we have investigated the detector's response and imaging potential with other forms of ionizing radiation, including alpha, neutron, beta, and fission fragment particles. The detector's response to a broad range of ionizing radiation has prompted its new title. The principle of operation of the iQID camera involves coupling a scintillator to an image intensifier. The scintillation light generated by particle interactions is optically amplified by the intensifier and then re-imaged onto a CCD/CMOS camera sensor. The intensifier provides sufficient optical gain that practically any CCD/CMOS camera can be used to image ionizing radiation. Individual particles are identified, and their spatial position (to sub-pixel accuracy) and energy are estimated on an event-by-event basis in real time, using image analysis algorithms on high-performance graphics processing hardware. Distinguishing features of the iQID camera include portability, large active areas, high sensitivity, and high spatial resolution (tens of microns). Although modest, the energy resolution of the iQID is sufficient to discriminate between particles. Additionally, spatial features of individual events can be used for particle discrimination. An important iQID imaging application that has recently been developed is single-particle, real-time digital autoradiography. We present the latest results and discuss potential applications.
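The event-by-event processing described above (identify each particle flash, estimate its position and summed signal) can be sketched with standard image-analysis primitives. This is a toy frame, not iQID data, and the threshold and blob values are arbitrary.

```python
import numpy as np
from scipy import ndimage

# Synthetic intensified frame with two particle flashes.
frame = np.zeros((32, 32))
frame[5:8, 5:8]     = [[1, 4, 1], [4, 20, 4], [1, 4, 1]]   # event 1
frame[20:22, 24:26] = [[9, 3], [3, 1]]                     # event 2

# Threshold, label connected blobs, then report a centroid (sub-pixel
# position estimate) and a summed signal ("energy") per event.
labels, n_events = ndimage.label(frame > 0.5)
centroids = ndimage.center_of_mass(frame, labels, range(1, n_events + 1))
energies = ndimage.sum(frame, labels, range(1, n_events + 1))
```

A real system adds dark-frame subtraction and shape-based particle discrimination, and runs this per-frame loop on GPU hardware as the abstract notes; the structure of the computation is the same.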
Cooperative multisensor system for real-time face detection and tracking in uncontrolled conditions
NASA Astrophysics Data System (ADS)
Marchesotti, Luca; Piva, Stefano; Turolla, Andrea; Minetti, Deborah; Regazzoni, Carlo S.
2005-03-01
The presented work describes an innovative architecture for multi-sensor distributed video surveillance applications. The aim of the system is to track moving objects in outdoor environments with a cooperative strategy exploiting two video cameras. The system also exhibits the capacity to focus its attention on the faces of detected pedestrians, collecting snapshot frames of face images by segmenting and tracking them over time at different resolutions. The system is designed to employ two video cameras in a cooperative client/server structure: the first camera monitors the entire area of interest and detects the moving objects using change detection techniques. The detected objects are tracked over time, and their position is indicated on a map representing the monitored area. The objects' coordinates are sent to the server sensor in order to point its zooming optics towards the moving object. The second camera tracks the objects at high resolution. Like the client camera, this sensor is calibrated, and the position of the object detected in the image plane reference system is translated into coordinates referred to the same area map. In the map's common reference system, data fusion techniques are applied to achieve a more precise and robust estimation of the objects' tracks and to perform face detection and tracking. The novelty and strength of the work reside in the cooperative multi-sensor approach, in the high-resolution long-distance tracking, and in the automatic collection of biometric data, such as a clip of a person's face, for recognition purposes.
Expanded opportunities of THz passive camera for the detection of concealed objects
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.; Kuchik, Igor E.
2013-10-01
Among security problems, the detection of objects implanted into the human or animal body is an urgent one. At present, the main tool for detecting such objects is X-ray imaging. However, X-rays are ionizing radiation and therefore cannot be used frequently. Another way to approach the problem is passive THz imaging. In our opinion, a passive THz camera may help to detect an object implanted in the human body under certain conditions. The physical basis for this possibility is the temperature trace on the human skin resulting from the temperature difference between the object and the surrounding parts of the body. Modern passive THz cameras do not have enough temperature resolution to see this difference; therefore, we use computer processing to enhance the effective resolution of the passive THz camera for this application. After computer processing of images captured by the passive THz camera TS4, developed by ThruVision Systems Ltd., we can see a pronounced temperature trace on the skin from water drunk, or other food eaten, by a person. Nevertheless, many difficulties remain on the way to a full solution of this problem. We also illustrate an improvement in the quality of images captured by commercially available passive THz cameras using computer processing. In some cases, noise can be fully suppressed without loss of image quality. Computer processing of THz images of objects concealed on the human body can improve them many times over. Consequently, the instrumental resolution of such a device may be increased without any additional engineering effort.
Performance benefits and limitations of a camera network
NASA Astrophysics Data System (ADS)
Carr, Peter; Thomas, Paul J.; Hornsey, Richard
2005-06-01
Visual information is of vital significance to both animals and artificial systems. The majority of mammals rely on two images, each with a resolution of 10^7-10^8 'pixels' per image. At the other extreme are insect eyes, where the field of view is segmented into 10^3-10^5 images, each comprising effectively one pixel/image. The great majority of artificial imaging systems lie nearer to the mammalian characteristics in this parameter space, although electronic compound eyes have been developed in this laboratory and elsewhere. If the definition of a vision system is expanded to include networks or swarms of sensor elements, then schools of fish, flocks of birds, and ant or termite colonies occupy a region where the number of images and the pixels/image may be comparable. A useful system might then have 10^5 imagers, each with about 10^4-10^5 pixels. Artificial analogs to these situations include sensor webs, smart dust, and co-ordinated robot clusters. As an extreme example, we might consider the collective vision system represented by the imminent existence of ~10^9 cellular telephones, each with a one-megapixel camera. Unoccupied regions in this resolution-segmentation parameter space suggest opportunities for innovative artificial sensor network systems. Essential for the full exploitation of these opportunities is the availability of custom CMOS image sensor chips whose characteristics can be tailored to the application. Key attributes of such a chip set might include integrated image processing and control, low cost, and low power. This paper compares selected experimentally determined system specifications for an inward-looking array of 12 cameras, with the aid of a camera-network model developed to explore the tradeoff between camera resolution and the number of cameras.
The Atlases of Vesta derived from Dawn Framing Camera images
NASA Astrophysics Data System (ADS)
Roatsch, T.; Kersten, E.; Matz, K.; Preusker, F.; Scholten, F.; Jaumann, R.; Raymond, C. A.; Russell, C. T.
2013-12-01
The Dawn Framing Camera acquired about 6,000 clear filter images with a resolution of about 60 m/pixel during its two HAMO (High Altitude Mapping Orbit) phases in 2011 and 2012. We combined these images into a global ortho-rectified mosaic of Vesta (60 m/pixel resolution). Only very small areas near the northern pole were still in darkness and are missing in the mosaic. The Dawn Framing Camera also acquired about 10,000 high-resolution clear filter images (about 20 m/pixel) of Vesta during its Low Altitude Mapping Orbit (LAMO). Unfortunately, the northern part of Vesta was still in darkness during this phase; good illumination (incidence angle < 70°) was only available for 66.8% of the surface [1]. We used the LAMO images to calculate another global mosaic of Vesta, this time with 20 m/pixel resolution. Both global mosaics were used to produce atlases of Vesta: a HAMO atlas with 15 tiles at a scale of 1:500,000 and a LAMO atlas with 30 tiles at scales between 1:200,000 and 1:225,180. The nomenclature used in these atlases is based on names and places historically associated with the Roman goddess Vesta, and is compliant with the rules of the IAU. 65 names for geological features have already been approved by the IAU; 39 additional names are currently under review. Selected examples of both atlases will be shown in this presentation. Reference: [1] Roatsch, Th., et al., High-resolution Vesta Low Altitude Mapping Orbit Atlas derived from Dawn Framing Camera images. Planetary and Space Science (2013), http://dx.doi.org/10.1016/j.pss.2013.06.024
On the structure of the outer layers of cool carbon stars
NASA Technical Reports Server (NTRS)
Querci, F.; Querci, M.; Wing, R. F.; Cassatella, A.; Heck, A.
1982-01-01
Exposures on the spectra of four late C-type stars have been made with the IUE satellite in the wavelength range of the LWR camera (1900-3200 A). Two Mira variables near maximum light and two semiregular variables were observed. Although the exposure times used, which range up to 240 min in the low-resolution mode, were more than sufficient to record the continuum and emission lines of Mg II, Fe II, and Al II in normal M stars of similar magnitude and temperature, no light was recorded. It is concluded that the far-ultraviolet continuum is strongly depressed in these cool carbon stars. The absence of UV emission lines implies either that the chromospheric lines observed in M stars require an ultraviolet flux for their excitation, or that cool carbon stars have no chromosphere at all or that the opacity source is located above even the emission-line-forming region. This opacity source, which is probably some carbon condensate since it is weak or absent in M stars while absorbing strongly in C stars, is discussed both in terms of the chromospheric interpretation of the emission lines and in terms of their shock-wave interpretation.
CMOS Camera Array With Onboard Memory
NASA Technical Reports Server (NTRS)
Gat, Nahum
2009-01-01
A compact CMOS (complementary metal oxide semiconductor) camera system has been developed with high resolution (1.3 megapixels), a USB (universal serial bus) 2.0 interface, and onboard memory. Exposure times and other operating parameters are sent from a control PC via the USB port. Data from the camera can be received via the USB port, and the interface allows simple control and data capture through a laptop computer.
Solid State Television Camera (CID)
NASA Technical Reports Server (NTRS)
Steele, D. W.; Green, W. T.
1976-01-01
The design, development, and test are described of a charge injection device (CID) camera using a 244×248 element array. A number of video signal processing functions are included which maximize the output video dynamic range while retaining the inherently good resolution response of the CID. Some of the unique features of the camera are: low light level performance, high S/N ratio, antiblooming, low geometric distortion, sequential scanning, and AGC.
Photogrammetry of Apollo 15 photography, part C
NASA Technical Reports Server (NTRS)
Wu, S. S. C.; Schafer, F. J.; Jordan, R.; Nakata, G. M.; Derick, J. L.
1972-01-01
In the Apollo 15 mission, a mapping camera system, a 61 cm optical bar high-resolution panoramic camera, and a laser altimeter were used. The panoramic camera is described; its distortion sources include the cylindrical shape of the negative film surface, the scanning action of the lens, the image motion compensator, and the spacecraft motion. Film products were processed on a specifically designed analytical plotter.
Return Beam Vidicon (RBV) panchromatic two-camera subsystem for LANDSAT-C
NASA Technical Reports Server (NTRS)
1977-01-01
A two-inch Return Beam Vidicon (RBV) panchromatic two camera Subsystem, together with spare components was designed and fabricated for the LANDSAT-C Satellite; the basis for the design was the Landsat 1&2 RBV Camera System. The purpose of the RBV Subsystem is to acquire high resolution pictures of the Earth for a mapping application. Where possible, residual LANDSAT 1 and 2 equipment was utilized.
A system for extracting 3-dimensional measurements from a stereo pair of TV cameras
NASA Technical Reports Server (NTRS)
Yakimovsky, Y.; Cunningham, R.
1976-01-01
Obtaining accurate three-dimensional (3-D) measurements from a stereo pair of TV cameras is a task requiring camera modeling, calibration, and the matching of the two images of a real 3-D point in the two TV pictures. A system which models and calibrates the cameras and pairs the two images of a real-world point in the two pictures, either manually or automatically, was implemented. This system is operating and provides a three-dimensional measurement resolution of + or - mm at distances of about 2 m.
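Once the cameras are modeled and a point is matched in both images, recovering its 3-D position reduces to triangulation. A minimal sketch for the rectified, parallel-axis pinhole case is below; the focal length, baseline, and pixel coordinates are illustrative, not parameters of the JPL system.

```python
# Parallel-axis stereo triangulation from a matched pixel pair
# (pinhole model, rectified images, optical axes parallel).
def triangulate(f_px, baseline_m, x_left_px, x_right_px, y_px):
    """Recover (x, y, z) in metres from matched pixel coordinates."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("matched point must have positive disparity")
    z = f_px * baseline_m / disparity   # depth from similar triangles
    x = x_left_px * z / f_px            # back-project into metres
    y = y_px * z / f_px
    return x, y, z

# Illustrative values chosen so the depth lands near the ~2 m working
# distance quoted in the abstract: disparity 48 px, 800 px focal length,
# 12 cm baseline -> z = 800 * 0.12 / 48 = 2.0 m.
x, y, z = triangulate(f_px=800.0, baseline_m=0.12,
                      x_left_px=420.0, x_right_px=372.0, y_px=55.0)
```

The general (non-parallel) case replaces the similar-triangles step with a least-squares intersection of the two calibrated viewing rays, which is what a full camera-model system computes.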
Carlson, Jay; Kowalczuk, Jędrzej; Psota, Eric; Pérez, Lance C
2012-01-01
Robotic surgical platforms require vision feedback systems, which often consist of low-resolution, expensive, single-imager analog cameras. These systems are retooled for 3D display by simply doubling the cameras and outboard control units. Here, a fully-integrated digital stereoscopic video camera employing high-definition sensors and a class-compliant USB video interface is presented. This system can be used with low-cost PC hardware and consumer-level 3D displays for tele-medical surgical applications including military medical support, disaster relief, and space exploration.
Real time moving scene holographic camera system
NASA Technical Reports Server (NTRS)
Kurtz, R. L. (Inventor)
1973-01-01
A holographic motion picture camera system producing resolution of front surface detail is described. The system utilizes a beam of coherent light and means for dividing the beam into a reference beam for direct transmission to a conventional movie camera and two reflection signal beams for transmission to the movie camera by reflection from the front side of a moving scene. The system is arranged so that critical parts of the system are positioned on the foci of a pair of interrelated, mathematically derived ellipses. The camera has the theoretical capability of producing motion picture holograms of projectiles moving at speeds as high as 900,000 cm/sec (about 21,450 mph).
Li, Tian-Jiao; Li, Sai; Yuan, Yuan; Liu, Yu-Dong; Xu, Chuan-Long; Shuai, Yong; Tan, He-Ping
2017-04-03
Plenoptic cameras are used for capturing flames in studies of high-temperature phenomena. However, simulations of plenoptic camera models can be used prior to the experiment to improve experimental efficiency and reduce cost. In this work, microlens arrays, which are based on the established light field camera model, are optimized into a hexagonal structure with three types of microlenses. With this improved plenoptic camera model, light field imaging of static objects and flames is simulated using the calibrated parameters of the Raytrix camera (R29). The optimized models improve the image resolution, imaging screen utilization, and shooting range of depth of field.
Extreme ultra-violet movie camera for imaging microsecond time scale magnetic reconnection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chai, Kil-Byoung; Bellan, Paul M.
2013-12-15
An ultra-fast extreme ultra-violet (EUV) movie camera has been developed for imaging magnetic reconnection in the Caltech spheromak/astrophysical jet experiment. The camera consists of a broadband Mo:Si multilayer mirror, a fast decaying YAG:Ce scintillator, a visible light block, and a high-speed visible light CCD camera. The camera can capture EUV images as fast as 3.3 × 10^6 frames per second with 0.5 cm spatial resolution. The spectral range is from 20 eV to 60 eV. EUV images reveal strong, transient, highly localized bursts of EUV radiation when magnetic reconnection occurs.
Television camera as a scientific instrument
NASA Technical Reports Server (NTRS)
Smokler, M. I.
1970-01-01
Rigorous calibration program, coupled with a sophisticated data-processing program that introduced compensation for system response to correct photometry, geometric linearity, and resolution, converted a television camera to a quantitative measuring instrument. The output data are in the forms of both numeric printout records and photographs.
Low-cost conversion of the Polaroid MD-4 land camera to a digital gel documentation system.
Porch, Timothy G; Erpelding, John E
2006-04-30
A simple, inexpensive design is presented for the rapid conversion of the popular MD-4 Polaroid land camera to a high quality digital gel documentation system. Images of ethidium bromide stained DNA gels captured using the digital system were compared to images captured on Polaroid instant film. Resolution and sensitivity were enhanced using the digital system. In addition to the low cost and superior image quality of the digital system, there is also the added convenience of real-time image viewing through the swivel LCD of the digital camera, wide flexibility of gel sizes, accurate automatic focusing, variable image resolution, and consistent ease of use and quality. Images can be directly imported to a computer by using the USB port on the digital camera, further enhancing the potential of the digital system for documentation, analysis, and archiving. The system is appropriate for use as a start-up gel documentation system and for routine gel analysis.
Optical performance analysis of plenoptic camera systems
NASA Astrophysics Data System (ADS)
Langguth, Christin; Oberdörster, Alexander; Brückner, Andreas; Wippermann, Frank; Bräuer, Andreas
2014-09-01
Adding an array of microlenses in front of the sensor transforms the capabilities of a conventional camera to capture both spatial and angular information within a single shot. This plenoptic camera is capable of obtaining depth information and providing it for a multitude of applications, e.g. artificial re-focusing of photographs. Without the need of active illumination it represents a compact and fast optical 3D acquisition technique with reduced effort in system alignment. Since the extent of the aperture limits the range of detected angles, the observed parallax is reduced compared to common stereo imaging systems, which results in a decreased depth resolution. Besides, the gain of angular information implies a degraded spatial resolution. This trade-off requires a careful choice of the optical system parameters. We present a comprehensive assessment of possible degrees of freedom in the design of plenoptic systems. Utilizing a custom-built simulation tool, the optical performance is quantified with respect to particular starting conditions. Furthermore, a plenoptic camera prototype is demonstrated in order to verify the predicted optical characteristics.
Marshall Grazing Incidence X-ray Spectrometer (MaGIXS) Slit-Jaw Imaging System
NASA Astrophysics Data System (ADS)
Wilkerson, P.; Champey, P. R.; Winebarger, A. R.; Kobayashi, K.; Savage, S. L.
2017-12-01
The Marshall Grazing Incidence X-ray Spectrometer is a NASA sounding rocket payload providing a 0.6-2.5 nm spectrum with unprecedented spatial and spectral resolution. The instrument comprises a novel optical design featuring a Wolter Type 1 grazing incidence telescope, which produces a focused solar image on a slit plate, an identical pair of stigmatic optics, a planar diffraction grating, and a low-noise detector. When MaGIXS flies on a suborbital launch in 2019, a slit-jaw camera system will reimage the focal plane of the telescope, providing a reference for pointing the telescope on the solar disk and aligning the data to supporting observations from satellites and other rockets. The telescope focuses the X-ray and EUV image of the Sun onto a plate covered with a phosphor coating that absorbs EUV photons and then fluoresces in visible light. This 10-week REU project was aimed at optimizing an off-axis mounted camera with 600-line-resolution NTSC video for extremely low-light imaging of the slit plate. Radiometric calculations indicate an intensity of less than 1 lux at the slit-jaw plane, which set the requirement for camera sensitivity. We selected a Watec 910DB EIA charge-coupled device (CCD) monochrome camera, which has a manufacturer-quoted sensitivity of 0.0001 lux at F1.2. A high-magnification, low-distortion lens was then identified to image the slit-jaw plane from a distance of approximately 10 cm. With the selected CCD camera, tests show that at extremely low light levels we achieve a higher resolution than expected, with only a moderate drop in frame rate. Based on sounding rocket flight heritage, the launch vehicle attitude control system is known to stabilize the instrument pointing such that jitter does not degrade video quality for context imaging. Future steps towards implementation of the imaging system will include ruggedizing the flight camera housing and mounting the selected camera and lens combination to the instrument structure.
Opportunistic traffic sensing using existing video sources (phase II).
DOT National Transportation Integrated Search
2017-02-01
The purpose of the project reported on here was to investigate methods for automatic traffic sensing using traffic surveillance : cameras, red light cameras, and other permanent and pre-existing video sources. Success in this direction would potentia...
Generation of High-Resolution Geo-referenced Photo-Mosaics From Navigation Data
NASA Astrophysics Data System (ADS)
Delaunoy, O.; Elibol, A.; Garcia, R.; Escartin, J.; Fornari, D.; Humphris, S.
2006-12-01
Optical images of the ocean floor are a rich source of data for understanding biological and geological processes. However, due to the attenuation of light in sea water, the area covered by optical systems is very small, and a large number of images is then needed to cover an area of interest, as individually they do not provide a global view of the surveyed area. Therefore, generating a composite view (or photo-mosaic) from multiple overlapping images is usually the most practical and flexible solution to visually cover a wide area, allowing the analysis of the site in one single representation of the ocean floor. In most camera surveys carried out nowadays, some sort of positioning information is available (e.g., USBL, DVL, INS, gyros, etc.). For a towed camera, an estimate of the tether length and the mother ship's GPS reading can also serve as navigation data. In any case, a photo-mosaic can be built just by taking into account the position and orientation of the camera. On the other hand, most of the regions of interest to the scientific community are quite large (>1 km²), and since better resolution is always required, the final photo-mosaic can be very large (>1,000,000 × 1,000,000 pixels) and cannot be handled by commonly available software. For this reason, we have developed a software package able to load a navigation file and the sequence of acquired images to automatically build a geo-referenced mosaic. This navigated mosaic provides a global view of the site of interest at the maximum available resolution. The developed package includes a viewer, allowing the user to load, view, and annotate these geo-referenced photo-mosaics on a personal computer. A software library has been developed to allow the viewer to manage such very big images; therefore, the size of the resulting mosaic is now limited only by the size of the hard drive.
Work is being carried out to apply image processing techniques to the navigated mosaic, with the intention of locally improving image alignment. Tests have been conducted using the data acquired during the cruise LUSTRE'96 (LUcky STRike Exploration, 37°17'N 32°17'W) by WHOI. During this cruise, the ARGO-II tethered vehicle acquired ~21,000 images in a ~1 km² area of the seafloor to map the geology of this hydrothermal field at high resolution. The obtained geo-referenced photo-mosaic has a resolution of 1.5 cm per pixel, with a coverage of ~25% of the Lucky Strike area. Data and software will be made publicly available.
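The core of building a navigated mosaic, mapping each frame's navigation fix to a pixel offset in a common geo-referenced grid, can be sketched as below. This is a simplified nadir-looking, translation-only sketch (no camera orientation or lens model); the grid size, navigation values, and pasting rule are illustrative, and only the 1.5 cm/pixel resolution comes from the abstract.

```python
import numpy as np

CM_PER_PX = 1.5            # mosaic resolution quoted in the abstract

def world_to_mosaic(x_cm, y_cm, origin_cm=(0.0, 0.0)):
    """Map a navigation fix (centimetres, local frame) to mosaic pixels."""
    col = int(round((x_cm - origin_cm[0]) / CM_PER_PX))
    row = int(round((y_cm - origin_cm[1]) / CM_PER_PX))
    return row, col

mosaic = np.zeros((400, 400), dtype=np.float32)

def paste(frame, nav_x_cm, nav_y_cm):
    """Paste a frame so its top-left corner lands at the navigated fix
    (last write wins; real systems blend overlapping frames)."""
    r, c = world_to_mosaic(nav_x_cm, nav_y_cm)
    h, w = frame.shape
    mosaic[r:r + h, c:c + w] = frame

# One 100x100-pixel frame navigated to (150 cm, 150 cm) in the local frame.
paste(np.ones((100, 100), dtype=np.float32), 150.0, 150.0)
```

The alignment-refinement work the abstract mentions would then adjust these nav-derived placements by registering overlapping frames against each other.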
Penna, Rachele R; de Sanctis, Ugo; Catalano, Martina; Brusasco, Luca; Grignolo, Federico M
2017-01-01
To compare the repeatability/reproducibility of measurement by high-resolution Placido disk-based topography with that of a high-resolution rotating Scheimpflug camera, and to assess the agreement between the two instruments in measuring corneal power in eyes with keratoconus and post-laser in situ keratomileusis (LASIK). One eye each of 36 keratoconic patients and 20 subjects who had undergone LASIK was included in this prospective observational study. Two independent examiners worked in a random order to take three measurements of each eye with both instruments. Four parameters were measured on the anterior cornea: steep keratometry (Ks), flat keratometry (Kf), mean keratometry (Km), and astigmatism (Ks-Kf). Intra-examiner repeatability and inter-examiner reproducibility were evaluated by calculating the within-subject standard deviation (Sw), the coefficient of repeatability (R), the coefficient of variation (CoV), and the intraclass correlation coefficient (ICC). Agreement between instruments was tested with the Bland-Altman method by calculating the 95% limits of agreement (95% LoA). In keratoconic eyes, the intra-examiner and inter-examiner ICC were >0.95. As compared with measurement by high-resolution Placido disk-based topography, the intra-examiner R of the high-resolution rotating Scheimpflug camera was lower for Kf (0.32 vs 0.88), Ks (0.61 vs 0.88), and Km (0.32 vs 0.84) but higher for Ks-Kf (0.70 vs 0.57). Inter-examiner R values were lower for all parameters measured using the high-resolution rotating Scheimpflug camera. The 95% LoA were -1.28 to +0.55 for Kf, -1.36 to +0.99 for Ks, -1.08 to +0.50 for Km, and -1.11 to +1.48 for Ks-Kf. In the post-LASIK eyes, the intra-examiner and inter-examiner ICC were >0.87 for all parameters. The intra-examiner and inter-examiner R were lower for all parameters measured using the high-resolution rotating Scheimpflug camera. 
The intra-examiner R was 0.17 vs 0.88 for Kf, 0.21 vs 0.88 for Ks, 0.17 vs 0.86 for Km, and 0.28 vs 0.33 for Ks-Kf. The inter-examiner R was 0.09 vs 0.64 for Kf, 0.15 vs 0.56 for Ks, 0.09 vs 0.59 for Km, and 0.18 vs 0.23 for Ks-Kf. The 95% LoA were -0.54 to +0.58 for Kf, -0.51 to +0.53 for Ks and Km, and -0.28 to +0.27 for Ks-Kf. As compared with Placido disk-based topography, the high-resolution rotating Scheimpflug camera provides more repeatable and reproducible measurements of Ks, Kf and Km in keratoconic and post-LASIK eyes. Agreement between instruments is fair in keratoconus and very good in post-LASIK eyes.
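The repeatability coefficient and Bland-Altman limits used in this record follow standard definitions (R = 1.96·√2·Sw; 95% LoA = bias ± 1.96·SD of the paired differences). A minimal sketch of both computations, using illustrative keratometry values rather than the study's data:

```python
import numpy as np

def repeatability(measurements):
    """Within-subject SD (Sw) and repeatability coefficient R = 1.96*sqrt(2)*Sw.
    `measurements`: rows = subjects, columns = repeated measurements."""
    m = np.asarray(measurements, dtype=float)
    # within-subject variance = mean of the per-subject variances
    sw = np.sqrt(np.mean(np.var(m, axis=1, ddof=1)))
    return sw, 1.96 * np.sqrt(2.0) * sw

def bland_altman_loa(a, b):
    """Bland-Altman bias and 95% limits of agreement between two instruments."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias, sd = d.mean(), d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

For example, `bland_altman_loa([43.0, 44.0], [42.0, 44.0])` returns a bias of 0.5 D with limits spanning it on either side.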
High spatial resolution infrared camera as ISS external experiment
NASA Astrophysics Data System (ADS)
Eckehard, Lorenz; Frerker, Hap; Fitch, Robert Alan
The high spatial resolution infrared camera as an ISS external experiment for monitoring global climate changes uses ISS internal and external resources (e.g., data storage). The optical experiment will consist of an infrared camera for monitoring global climate changes from the ISS. This technology was evaluated by the German small satellite mission BIRD and further developed in different ESA projects. Compared to BIRD, the presented instrument uses proven advanced sensor technologies (ISS external) and ISS on-board processing and storage capabilities (internal). The instrument will be equipped with a serial interface for TM/TC and several relay commands for the power supply. For data processing and storage, a mass memory is required. Access to current attitude data is highly desired to produce geo-referenced maps, if possible by on-board processing.
NASA Astrophysics Data System (ADS)
Oertel, D.; Jahn, H.; Sandau, R.; Walter, I.; Driescher, H.
1990-10-01
Objectives of the multifunctional stereo imaging camera (MUSIC) system to be deployed on the Soviet Mars-94 mission are outlined. A high-resolution stereo camera (HRSC) and a wide-angle opto-electronic stereo scanner (WAOSS) are combined in terms of hardware, software, and technology aspects and solutions. Both HRSC and WAOSS are pushbroom instruments containing a single optical system and focal planes with several parallel CCD line sensors. Emphasis is placed on the MUSIC system's stereo capability, its design, mass memory, and data compression. The 1-Gbit memory is divided into two parts: 80 percent for HRSC and 20 percent for WAOSS, while the selected on-line compression strategy is based on macropixel coding and real-time transform coding.
Image intensification; Proceedings of the Meeting, Los Angeles, CA, Jan. 17, 18, 1989
NASA Astrophysics Data System (ADS)
Csorba, Illes P.
Various papers on image intensification are presented. Individual topics discussed include: status of high-speed optical detector technologies, super second-generation image intensifier, gated image intensifiers and applications, resistive-anode position-sensing photomultiplier tube operational modeling, undersea imaging and target detection with gated image intensifier tubes, image intensifier modules for use with commercially available solid state cameras, specifying the components of an intensified solid state television camera, superconducting IR focal plane arrays, one-inch TV camera tube with very high resolution capacity, CCD-Digicon detector system performance parameters, high-resolution X-ray imaging device, high-output technology microchannel plate, preconditioning of microchannel plate stacks, recent advances in small-pore microchannel plate technology, performance of long-life curved channel microchannel plates, low-noise microchannel plates, and development of a quartz envelope heater.
Upgrading and testing program for narrow band high resolution planetary IR imaging spectrometer
NASA Technical Reports Server (NTRS)
Wattson, R. B.; Rappaport, S.
1977-01-01
An imaging spectrometer, intended primarily for observations of the outer planets, which utilizes an acoustically tuned optical filter (ATOF) and a charge coupled device (CCD) television camera, was modified to improve spatial resolution and sensitivity. The upgraded instrument has a spatial resolving power of approximately 1 arc second, as defined by an f/7 beam at the CCD position, and it has this resolution over the 50 arc second field of view. Less vignetting occurs and sensitivity is four times greater. The spectral resolution of 15 Å over the wavelength interval 6500 Å - 11,000 Å is unchanged. Mechanical utility has been increased by the use of a honeycomb optical table, mechanically rigid yet adjustable optical component mounts, and a camera focus translation stage. The upgraded instrument was used to observe Venus and Saturn.
Wei, Wanchun; Broussard, Leah J.; Hoffbauer, Mark Arles; ...
2016-05-16
Position-sensitive detection of ultracold neutrons (UCNs) is demonstrated using an imaging charge-coupled device (CCD) camera. A spatial resolution of less than 15 μm has been achieved, which is equivalent to a UCN energy resolution below 2 pico-electron-volts through the relation δE = m₀gδx. Here, the symbols δE, δx, m₀ and g are the energy resolution, the spatial resolution, the neutron rest mass and the gravitational acceleration, respectively. A multilayer surface convertor described previously is used to capture UCNs and then emits visible light for CCD imaging. Particle identification and noise rejection are discussed through the use of light intensity profile analysis. As a result, this method allows different types of UCN spectroscopy and other applications.
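The quoted relation δE = m₀gδx converts spatial resolution in Earth's gravitational field into energy resolution; plugging in the 15 μm figure reproduces the sub-2-peV claim (constants rounded from CODATA values):

```python
# Energy resolution of a UCN detector from its spatial resolution,
# via dE = m0 * g * dx (neutron rest mass x gravitational acceleration x height).
M_NEUTRON = 1.674927e-27    # neutron rest mass, kg
G = 9.80665                 # standard gravitational acceleration, m/s^2
EV = 1.602177e-19           # joules per electron-volt

dx = 15e-6                  # spatial resolution, 15 micrometres
dE_peV = M_NEUTRON * G * dx / EV * 1e12   # energy resolution in pico-eV
# dE_peV is about 1.5 peV, below the 2 peV bound stated in the abstract
```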
Super resolution PLIF demonstrated in turbulent jet flows seeded with I2
NASA Astrophysics Data System (ADS)
Xu, Wenjiang; Liu, Ning; Ma, Lin
2018-05-01
Planar laser induced fluorescence (PLIF) represents an indispensable tool for flow and flame imaging. However, the PLIF technique suffers from limited spatial resolution or blurring in many situations, which restricts its applicability and capability. This work describes a new method, named SR-PLIF (super-resolution PLIF), to overcome these limitations and enhance the capability of PLIF. The method uses PLIF images captured simultaneously from two (or more) orientations to reconstruct a final PLIF image with resolution enhanced or blurring removed. This paper reports the development of the reconstruction algorithm, and the experimental demonstration of the SR-PLIF method both with controlled samples and with turbulent flows seeded with iodine vapor. Using controlled samples with two cameras, the spatial resolution in the best case was improved from 0.06 mm in the projections to 0.03 mm in the SR image, in terms of the spreading width of a sharp edge. With turbulent flows, an image sharpness measure was developed to quantify the spatial resolution, and SR reconstruction with two cameras can effectively improve the spatial resolution compared to the projections in terms of the sharpness measure.
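The abstract does not specify the sharpness measure used to quantify spatial resolution; a common proxy, shown here purely as an assumption and not as the paper's metric, is the mean squared gradient magnitude, which grows as edges in the image become steeper:

```python
import numpy as np

def sharpness(img):
    """Mean squared gradient magnitude of a 2D image: a simple sharpness
    proxy (higher = sharper edges). Illustrative only, not the SR-PLIF metric."""
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    return float(np.mean(gx**2 + gy**2))
```

On a hard vertical edge this returns a larger value than on the same edge after blurring, which is the qualitative behavior a resolution metric needs.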
Performance evaluation and optimization of the MiniPET-II scanner
NASA Astrophysics Data System (ADS)
Lajtos, Imre; Emri, Miklos; Kis, Sandor A.; Opposits, Gabor; Potari, Norbert; Kiraly, Beata; Nagy, Ferenc; Tron, Lajos; Balkay, Laszlo
2013-04-01
This paper presents results of the performance of a small animal PET system (MiniPET-II) installed at our Institute. MiniPET-II is a full ring camera that includes 12 detector modules in a single ring comprised of 1.27×1.27×12 mm³ LYSO scintillator crystals. The axial field of view and the inner ring diameter are 48 mm and 211 mm, respectively. The goal of this study was to determine the NEMA-NU4 performance parameters of the scanner. In addition, we also investigated how the calculated parameters depend on the coincidence time window (τ=2, 3 and 4 ns) and the low threshold settings of the energy window (Elt=250, 350 and 450 keV). Independent measurements supported optimization of the effective system radius and the coincidence time window of the system. We found that the optimal coincidence time window and low threshold energy window are 3 ns and 350 keV, respectively. The spatial resolution was close to 1.2 mm in the center of the FOV with an increase of 17% at the radial edge. The maximum value of the absolute sensitivity was 1.37% for a point source. Count rate tests resulted in peak values for the noise equivalent count rate (NEC) curve and scatter fraction of 14.2 kcps (at 36 MBq) and 27.7%, respectively, using the rat phantom. Numerical values of the same parameters obtained for the mouse phantom were 55.1 kcps (at 38.8 MBq) and 12.3%, respectively. The recovery coefficients of the image quality phantom ranged from 0.1 to 0.87. Altering the τ and Elt resulted in substantial changes in the NEC peak and the sensitivity while the effect on the image quality was negligible. The spatial resolution proved to be, as expected, independent of the τ and Elt. The calculated optimal effective system radius (resulting in the best image quality) was 109 mm.
Although the NEC peak parameters do not compare favorably with those of other small animal scanners, it can be concluded that under normal counting situations the MiniPET-II imaging capability assures remarkably good image quality, sensitivity and spatial resolution.
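The NEC peaks quoted above follow the standard noise-equivalent-count definition; a sketch assuming the common formulation (the full NEMA NU-4 protocol additionally prescribes how trues, scatter and randoms are estimated from phantom measurements):

```python
def nec(trues, scatter, randoms, k=1.0):
    """Noise equivalent count rate: NEC = T^2 / (T + S + k*R).
    k = 1 for a noiseless randoms estimate, k = 2 for delayed-window
    randoms subtraction. All rates in counts per second."""
    return trues ** 2 / (trues + scatter + k * randoms)

# e.g. 100 kcps trues with 30 kcps scatter and 20 kcps randoms:
rate = nec(100e3, 30e3, 20e3)   # roughly 66.7 kcps
```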
In-air versus underwater comparison of 3D reconstruction accuracy using action sport cameras.
Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P
2017-01-25
Action sport cameras (ASC) have achieved wide acceptance for recreational purposes due to ongoing cost decreases, image resolution and frame rate increases, and plug-and-play usability. Consequently, they have recently been considered for sport gesture studies and quantitative athletic performance evaluation. In this paper, we evaluated the potential of two ASCs (GoPro Hero3+) for in-air (laboratory) and underwater (swimming pool) three-dimensional (3D) motion analysis as a function of different camera setups involving the acquisition frequency, image resolution and field of view. This is motivated by the fact that in swimming, movement cycles are characterized by underwater and in-air phases, which imposes the technical challenge of a split-volume configuration: an underwater measurement volume observed by underwater cameras and an in-air measurement volume observed by in-air cameras. The reconstruction of whole swimming cycles thus requires merging simultaneous measurements acquired in both volumes, which in turn makes it mandatory to characterize and optimize the instrumental errors of both volumes. In order to calibrate the camera stereo pair, black spherical markers placed on two calibration tools, used both in-air and underwater, and a two-step nonlinear optimization were exploited. The 3D reconstruction accuracy of testing markers and the repeatability of the estimated camera parameters accounted for system performance. For both environments, statistical tests focused on the comparison of the different camera configurations. Then, each camera configuration was compared across the two environments.
In all assessed resolutions, and in both environments, the reconstruction error (true distance between the two testing markers) was less than 3 mm and the error related to the working volume diagonal was in the range of 1:2000 (3×1.3×1.5 m³) to 1:7000 (4.5×2.2×1.5 m³), in agreement with the literature. Statistically, the 3D accuracy obtained in the in-air environment was poorer (p < 10⁻⁵) than that in the underwater environment, across all the tested camera configurations. Regarding the repeatability of the camera parameters, we found very low variability in both environments (1.7% and 2.9%, in-air and underwater). This result encourages the use of ASC technology for quantitative reconstruction in both in-air and underwater environments. Copyright © 2016 Elsevier Ltd. All rights reserved.
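The 1:2000 and 1:7000 figures relate the marker reconstruction error to the working-volume diagonal. A quick check, assuming an illustrative 1.8 mm error for the smaller volume (the text only bounds the error below 3 mm):

```python
import math

def error_ratio(err_mm, dims_m):
    """Working-volume diagonal divided by the reconstruction error,
    i.e. the N in an accuracy of 1:N."""
    diag_mm = 1000.0 * math.sqrt(sum(d * d for d in dims_m))
    return diag_mm / err_mm

n = error_ratio(1.8, (3.0, 1.3, 1.5))   # close to 2000 for the 3x1.3x1.5 m volume
```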
NASA Astrophysics Data System (ADS)
Xie, Yijing; Thom, Maria; Ebner, Michael; Wykes, Victoria; Desjardins, Adrien; Miserocchi, Anna; Ourselin, Sebastien; McEvoy, Andrew W.; Vercauteren, Tom
2017-11-01
In high-grade glioma surgery, tumor resection is often guided by intraoperative fluorescence imaging. 5-aminolevulinic acid-induced protoporphyrin IX (PpIX) provides fluorescent contrast between normal brain tissue and glioma tissue, thus achieving improved tumor delineation and prolonged patient survival compared with conventional white-light-guided resection. However, commercially available fluorescence imaging systems rely solely on visual assessment of fluorescence patterns by the surgeon, which makes the resection more subjective than necessary. We developed a wide-field spectrally resolved fluorescence imaging system utilizing a Generation II scientific CMOS camera and an improved computational model for the precise reconstruction of the PpIX concentration map. In our model, the tissue's optical properties and illumination geometry, which distort the fluorescent emission spectra, are considered. We demonstrate that the CMOS-based system can detect low PpIX concentration at short camera exposure times, while providing high-pixel resolution wide-field images. We show that total variation regularization improves the contrast-to-noise ratio of the reconstructed quantitative concentration map by approximately twofold. Quantitative comparison between the estimated PpIX concentration and tumor histopathology was also investigated to further evaluate the system.
NASA Astrophysics Data System (ADS)
Messineo, Maria; Figer, Donald F.; Davies, Ben; Kudritzki, R. P.; Rich, R. Michael; MacKenty, John; Trombley, Christine
2010-01-01
We present Hubble Space Telescope/Near-Infrared Camera and Multi-Object Spectrometer photometry, and low-resolution K-band spectra of the GLIMPSE9 stellar cluster. The newly obtained color-magnitude diagram shows a cluster sequence with H - Ks ≈ 1 mag, indicating an interstellar extinction A_Ks = 1.6 ± 0.2 mag. The spectra of the three brightest stars show deep CO band heads, which indicate red supergiants with spectral type M1-M2. Two O9-B2 supergiants are also identified, which yield a spectrophotometric distance of 4.2 ± 0.4 kpc. Presuming that the population is coeval, we derive an age between 15 and 27 Myr, and a total cluster mass of 1600 ± 400 M_sun, integrated down to 1 M_sun. In the vicinity of GLIMPSE9 are several H II regions and supernova remnants, all of which (including GLIMPSE9) are probably associated with a giant molecular cloud (GMC) in the inner Galaxy. GLIMPSE9 probably represents one episode of massive star formation in this GMC. We have identified several other candidate stellar clusters of the same complex.
Remote Sensing Simulation Activities for Earthlings
ERIC Educational Resources Information Center
Krockover, Gerald H.; Odden, Thomas D.
1977-01-01
Suggested are activities using a Polaroid camera to illustrate the capabilities of remote sensing. Reading materials from the National Aeronautics and Space Administration (NASA) are suggested. Methods for (1) finding a camera's focal length, (2) calculating ground dimension photograph simulation, and (3) limiting size using film resolution are…
ERIC Educational Resources Information Center
Reynolds, Ronald F.
1984-01-01
Describes the basic components of a space telescope that will be launched during a 1986 space shuttle mission. These components include a wide field/planetary camera, faint object spectroscope, high-resolution spectrograph, high-speed photometer, faint object camera, and fine guidance sensors. Data to be collected from these instruments are…
A high resolution IR/visible imaging system for the W7-X limiter
NASA Astrophysics Data System (ADS)
Wurden, G. A.; Stephey, L. A.; Biedermann, C.; Jakubowski, M. W.; Dunn, J. P.; Gamradt, M.
2016-11-01
A high-resolution imaging system, consisting of megapixel mid-IR and visible cameras along the same line of sight, has been prepared for the new W7-X stellarator and was operated during Operational Period 1.1 to view one of the five inboard graphite limiters. The radial line of sight, through a large diameter (184 mm clear aperture) uncoated sapphire window, couples a direct viewing 1344 × 784 pixel FLIR SC8303HD camera. A germanium beam-splitter sends visible light to a 1024 × 1024 pixel Allied Vision Technologies Prosilica GX1050 color camera. Both achieve sub-millimeter resolution on the 161 mm wide, inertially cooled, segmented graphite tiles. The IR and visible cameras are controlled via optical fibers over full Camera Link and dual GigE Ethernet (2 Gbit/s data rates) interfaces, respectively. While they are mounted outside the cryostat at a distance of 3.2 m from the limiter, they are close to a large magnetic trim coil and require soft iron shielding. We have taken IR data at 125 Hz to 1.25 kHz frame rates and seen surface temperature increases in excess of 350 °C, especially on leading edges or defect hot spots. The IR camera sees heat-load stripe patterns on the limiter and has been used to infer limiter power fluxes (~1-4.5 MW/m²) during the ECRH heating phase. IR images have also been used calorimetrically between shots to measure equilibrated bulk tile temperature, and hence tile energy inputs (in the range of 30 kJ/tile with 0.6 MW, 6 s heating pulses). Small UFOs can be seen and tracked by the FLIR camera in some discharges. The calibrated visible color camera (100 Hz frame rate) has also been equipped with narrow band C-III and H-alpha filters, to compare with other diagnostics, and is used for absolute particle flux determination from the limiter surface. Sometimes, but not always, hot spots in the IR are also seen to be bright in C-III light.
External Mask Based Depth and Light Field Camera
2013-12-08
(Only fragments of this record's text survive: the report builds on prior light field camera designs, cites the survey of plenoptic-function sampling by Wetzstein et al., and notes that high spatial resolution depth and light fields are a rich source of information about the plenoptic function; the visible references include Adelson and Wang's single-lens stereo plenoptic camera and Pelican Imaging.)
HST Solar Arrays photographed by Electronic Still Camera
NASA Technical Reports Server (NTRS)
1993-01-01
This medium close-up view of one of two original Solar Arrays (SA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. This view shows the cell side of the minus V-2 panel. Electronic still photography is a technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality.
NASA Astrophysics Data System (ADS)
Lojacono, Xavier; Richard, Marie-Hélène; Ley, Jean-Luc; Testa, Etienne; Ray, Cédric; Freud, Nicolas; Létang, Jean Michel; Dauvergne, Denis; Maxim, Voichiţa; Prost, Rémy
2013-10-01
The Compton camera is a relevant imaging device for the detection of prompt photons produced by nuclear fragmentation in hadrontherapy. It may allow an improvement in detection efficiency compared to a standard gamma-camera but requires more sophisticated image reconstruction techniques. In this work, we simulate low statistics acquisitions from a point source having a broad energy spectrum compatible with hadrontherapy. We then reconstruct the image of the source with a recently developed filtered backprojection algorithm, a line-cone approach and an iterative List Mode Maximum Likelihood Expectation Maximization algorithm. Simulated data come from a Compton camera prototype designed for hadrontherapy online monitoring. Results indicate that the achievable resolution in directions parallel to the detector, that may include the beam direction, is compatible with the quality control requirements. With the prototype under study, the reconstructed image is elongated in the direction orthogonal to the detector. However this direction is of less interest in hadrontherapy where the first requirement is to determine the penetration depth of the beam in the patient. Additionally, the resolution may be recovered using a second camera.
Zhu, Banghe; Rasmussen, John C.; Litorja, Maritoni
2017-01-01
To date, no emerging preclinical or clinical near-infrared fluorescence (NIRF) imaging devices for non-invasive and/or surgical guidance have their performance validated on working standards with SI units of radiance that enable comparison or quantitative quality assurance. In this work, we developed and deployed a methodology to calibrate a stable, solid phantom for emission radiance with units of mW·sr⁻¹·cm⁻² for use in characterizing the measurement sensitivity of ICCD and IsCMOS detection, signal-to-noise ratio, and contrast. In addition, at calibrated radiances, we assess the transverse and lateral resolution of the ICCD and IsCMOS camera systems. The methodology allowed determination of the superior SNR of the ICCD over the IsCMOS camera system and the superior resolution of the IsCMOS over the ICCD camera system. Contrast depended upon the camera settings (binning and integration time) and the gain of the intensifier. Finally, because the architectures of CMOS and CCD camera systems result in vastly different performance, we comment on the utility of these systems for small animal imaging as well as clinical applications for non-invasive and surgical guidance. PMID:26552078
Volunteers Help Decide Where to Point Mars Camera
2015-07-22
This series of images from NASA's Mars Reconnaissance Orbiter successively zooms into "spider" features -- or channels carved in the surface in radial patterns -- in the south polar region of Mars. In a new citizen-science project, volunteers will identify features like these using wide-scale images from the orbiter. Their input will then help mission planners decide where to point the orbiter's high-resolution camera for more detailed views of interesting terrain. Volunteers will start with images from the orbiter's Context Camera (CTX), which provides wide views of the Red Planet. The first two images in this series are from CTX; the top right image zooms into a portion of the image at left. The top right image highlights the geological spider features, which are carved into the terrain in the Martian spring when dry ice turns to gas. By identifying unusual features like these, volunteers will help the mission team choose targets for the orbiter's High Resolution Imaging Science Experiment (HiRISE) camera, which can reveal more detail than any other camera ever put into orbit around Mars. The final image in this series (bottom right) shows a HiRISE close-up of one of the spider features. http://photojournal.jpl.nasa.gov/catalog/PIA19823
Poland, Michael P.; Dzurisin, Daniel; LaHusen, Richard G.; Major, John J.; Lapcewich, Dennis; Endo, Elliot T.; Gooding, Daniel J.; Schilling, Steve P.; Janda, Christine G.; Sherrod, David R.; Scott, William E.; Stauffer, Peter H.
2008-01-01
Images from a Web-based camera (Webcam) located 8 km north of Mount St. Helens and a network of remote, telemetered digital cameras were used to observe eruptive activity at the volcano between October 2004 and February 2006. The cameras offered the advantages of low cost, low power, flexibility in deployment, and high spatial and temporal resolution. Images obtained from the cameras provided important insights into several aspects of dome extrusion, including rockfalls, lava extrusion rates, and explosive activity. Images from the remote, telemetered digital cameras were assembled into time-lapse animations of dome extrusion that supported monitoring, research, and outreach efforts. The wide-ranging utility of remote camera imagery should motivate additional work, especially to develop the three-dimensional quantitative capabilities of terrestrial camera networks.
Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+.
Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J; Song, David H
2015-02-01
Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons' point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon's perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4 the GoPro was linked to a WiFi remote, and controlled by the surgeon. Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera's automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video.
NASA Astrophysics Data System (ADS)
Yamamoto, Seiichi; Suzuki, Mayumi; Kato, Katsuhiko; Watabe, Tadashi; Ikeda, Hayato; Kanai, Yasukazu; Ogata, Yoshimune; Hatazawa, Jun
2016-09-01
Although iodine 131 (I-131) is used for radionuclide therapy, high resolution images are difficult to obtain with conventional gamma cameras because of the high energy of I-131 gamma photons (364 keV). Cerenkov-light imaging is a possible method for beta emitting radionuclides, and I-131 (606 keV maximum beta energy) is a candidate to obtain high resolution images. We developed a high energy gamma camera system for the I-131 radionuclide and combined it with a Cerenkov-light imaging system to form a gamma-photon/Cerenkov-light hybrid imaging system to compare the simultaneously measured images of these two modalities. The high energy gamma imaging detector used 0.85-mm×0.85-mm×10-mm thick GAGG scintillator pixels arranged in a 44×44 matrix with a 0.1-mm thick reflector and optically coupled to a Hamamatsu 2 in. square position sensitive photomultiplier tube (PSPMT: H12700 MOD). The gamma imaging detector was encased in a 2 cm thick tungsten shield, and a pinhole collimator was mounted on its top to form a gamma camera system. The Cerenkov-light imaging system was made of a high sensitivity cooled CCD camera. The Cerenkov-light imaging system was combined with the gamma camera using optical mirrors to image the same area of the subject. With this configuration, we simultaneously imaged the gamma photons and the Cerenkov light from I-131 in the subjects. The spatial resolution and sensitivity of the gamma camera system for I-131 were respectively 3 mm FWHM and 10 cps/MBq for the high sensitivity collimator at 10 cm from the collimator surface. The spatial resolution of the Cerenkov-light imaging system was 0.64 mm FWHM at 10 cm from the system surface. Thyroid phantom and rat images were successfully obtained with the developed gamma-photon/Cerenkov-light hybrid imaging system, allowing direct comparison of these two modalities.
Our developed gamma-photon/Cerenkov-light hybrid imaging system will be useful to evaluate the advantages and disadvantages of these two modalities.
Andreozzi, Jacqueline M; Zhang, Rongxiao; Glaser, Adam K; Jarvis, Lesley A; Pogue, Brian W; Gladstone, David J
2015-02-01
To identify achievable camera performance and hardware needs in a clinical Cherenkov imaging system for real-time, in vivo monitoring of the surface beam profile on patients, as novel visual information, documentation, and possible treatment verification for clinicians. Complementary metal-oxide-semiconductor (CMOS), charge-coupled device (CCD), intensified charge-coupled device (ICCD), and electron multiplying-intensified charge coupled device (EM-ICCD) cameras were investigated to determine Cherenkov imaging performance in a clinical radiotherapy setting, with one emphasis on the maximum supportable frame rate. Where possible, the image intensifier was synchronized using a pulse signal from the Linac in order to image with room lighting conditions comparable to patient treatment scenarios. A solid water phantom irradiated with a 6 MV photon beam was imaged by the cameras to evaluate the maximum frame rate for adequate Cherenkov detection. Adequate detection was defined as an average electron count in the background-subtracted Cherenkov image region of interest in excess of 0.5% (327 counts) of the 16-bit maximum electron count value. Additionally, an ICCD and an EM-ICCD were each used clinically to image two patients undergoing whole-breast radiotherapy to compare clinical advantages and limitations of each system. Intensifier-coupled cameras were required for imaging Cherenkov emission on the phantom surface with ambient room lighting; standalone CMOS and CCD cameras were not viable. The EM-ICCD was able to collect images from a single Linac pulse delivering less than 0.05 cGy of dose at 30 frames/s (fps) and pixel resolution of 512 × 512, compared to an ICCD which was limited to 4.7 fps at 1024 × 1024 resolution. An intensifier with higher quantum efficiency at the entrance photocathode in the red wavelengths [30% quantum efficiency (QE) vs previous 19%] promises at least 8.6 fps at a resolution of 1024 × 1024 and lower monetary cost than the EM-ICCD. 
The ICCD with an intensifier better optimized for red wavelengths was found to provide the best potential for real-time display (at least 8.6 fps) of radiation dose on the skin during treatment at a resolution of 1024 × 1024.
Surveying the Newly Digitized Apollo Metric Images for Highland Fault Scarps on the Moon
NASA Astrophysics Data System (ADS)
Williams, N. R.; Pritchard, M. E.; Bell, J. F.; Watters, T. R.; Robinson, M. S.; Lawrence, S.
2009-12-01
The presence and distribution of thrust faults on the Moon have major implications for lunar formation and thermal evolution. For example, thermal history models for the Moon imply that most of the lunar interior was initially hot. As the Moon cooled over time, some models predict global-scale thrust faults should form as stress builds from global thermal contraction. Large-scale thrust fault scarps with lengths of hundreds of kilometers and maximum relief of up to a kilometer or more, like those on Mercury, are not found on the Moon; however, relatively small-scale linear and curvilinear lobate scarps with maximum lengths typically around 10 km have been observed in the highlands [Binder and Gunga, Icarus, v63, 1985]. These small-scale scarps are interpreted to be thrust faults formed by contractional stresses with relatively small maximum (tens of meters) displacements on the faults. These narrow, low relief landforms could only be identified in the highest resolution Lunar Orbiter and Apollo Panoramic Camera images and under the most favorable lighting conditions. To date, the global distribution and other properties of lunar lobate faults are not well understood. The recent micron-resolution scanning and digitization of the Apollo Mapping Camera (Metric) photographic negatives [Lawrence et al., NLSI Conf. #1415, 2008; http://wms.lroc.asu.edu/apollo] provides a new dataset to search for potential scarps. We examined more than 100 digitized Metric Camera image scans, and from these identified 81 images with favorable lighting (incidence angles between about 55 and 80 deg.) to manually search for features that could be potential tectonic scarps. 
Previous surveys based on Panoramic Camera and Lunar Orbiter images found fewer than 100 lobate scarps in the highlands; in our Apollo Metric Camera image survey, we have found additional regions with one or more previously unidentified linear and curvilinear features on the lunar surface that may represent lobate thrust fault scarps. In this presentation we review the geologic characteristics and context of these newly-identified, potentially tectonic landforms. The lengths and relief of some of these linear and curvilinear features are consistent with previously identified lobate scarps. Most of these features are in the highlands, though a few occur along the edges of mare and/or crater ejecta deposits. In many cases the resolution of the Metric Camera frames (~10 m/pix) is not adequate to unequivocally determine the origin of these features. Thus, to assess if the newly identified features have tectonic or other origins, we are examining them in higher-resolution Panoramic Camera (currently being scanned) and Lunar Reconnaissance Orbiter Camera Narrow Angle Camera images [Watters et al., this meeting, 2009].
UWB Tracking System Design for Free-Flyers
NASA Technical Reports Server (NTRS)
Ni, Jianjun; Arndt, Dickey; Phan, Chan; Ngo, Phong; Gross, Julia; Dusl, John
2004-01-01
This paper discusses an ultra-wideband (UWB) tracking system design effort for Mini-AERCam (Autonomous Extra-vehicular Robotic Camera), a free-flying video camera system under development at NASA Johnson Space Center to aid surveillance around the International Space Station (ISS). UWB technology is exploited to implement the tracking system because of its properties, such as high data rate, fine time resolution, and low power spectral density. A system design using commercially available UWB products is proposed. A TDOA (Time Difference of Arrival) tracking algorithm that operates cooperatively with the UWB system is developed in this research effort. Matlab simulations show that the tracking algorithm can achieve fine tracking resolution with low-noise TDOA data. Lab experiments demonstrate the UWB tracking capability with fine resolution.
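The TDOA principle the abstract refers to can be illustrated with a small sketch: given time differences of arrival between a reference receiver and several others, the emitter position is the one that minimizes the squared error between predicted and measured differences. The anchor geometry, search bounds, and grid step below are illustrative assumptions, not values from the paper, and a brute-force grid search stands in for whatever solver the authors used.

```python
import math

# Hypothetical receiver (anchor) positions in meters; the actual
# Mini-AERCam geometry is not given in the abstract.
ANCHORS = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
C = 299792458.0  # propagation speed (speed of light), m/s

def tdoas(pos, anchors=ANCHORS):
    """Time differences of arrival relative to the first anchor."""
    d = [math.dist(pos, a) for a in anchors]
    return [(di - d[0]) / C for di in d[1:]]

def locate(meas, anchors=ANCHORS, step=0.05):
    """Brute-force least-squares fit of a 2-D position to measured TDOAs."""
    best, best_err = None, float("inf")
    x = 0.0
    while x <= 10.0:
        y = 0.0
        while y <= 10.0:
            pred = tdoas((x, y), anchors)
            err = sum((p - m) ** 2 for p, m in zip(pred, meas))
            if err < best_err:
                best, best_err = (x, y), err
            y += step
        x += step
    return best

true_pos = (3.0, 7.0)
est = locate(tdoas(true_pos))  # recovers true_pos to within the grid step
```

In practice a closed-form or iterative least-squares solver replaces the grid search, but the objective being minimized is the same.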
Color image guided depth image super resolution using fusion filter
NASA Astrophysics Data System (ADS)
He, Jin; Liang, Bin; He, Ying; Yang, Jun
2018-04-01
Depth cameras currently play an important role in many areas. However, most of them can only obtain low-resolution (LR) depth images, whereas color cameras can easily provide high-resolution (HR) color images. Using a color image as a guide is an efficient way to obtain an HR depth image. In this paper, we propose a depth image super resolution (SR) algorithm that uses an HR color image as a guide and an LR depth image as input. We use a fusion filter combining a guided filter and an edge-based joint bilateral filter to obtain the HR depth image. Our experimental results on the Middlebury 2005 datasets show that our method provides better quality HR depth images both numerically and visually.
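As a sketch of the edge-guided filtering idea, the joint bilateral filter below smooths a depth map while taking its range weights from a separate guide intensity image, so depth edges that coincide with guide edges are preserved. The window radius, sigmas, and single-channel guide are simplifying assumptions; the paper's actual fusion filter also incorporates a guided-filter component, which is omitted here.

```python
import math

def joint_bilateral(depth, guide, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Smooth `depth` using edges from `guide` (joint bilateral filter).
    Both inputs are equal-size 2-D lists of floats; guide values in [0, 1].
    Spatial weights come from pixel distance, range weights from the guide."""
    h, w = len(depth), len(depth[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc = norm = 0.0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    y, x = i + di, j + dj
                    if 0 <= y < h and 0 <= x < w:
                        ws = math.exp(-(di * di + dj * dj) / (2 * sigma_s ** 2))
                        dr = guide[i][j] - guide[y][x]
                        wr = math.exp(-(dr * dr) / (2 * sigma_r ** 2))
                        acc += ws * wr * depth[y][x]
                        norm += ws * wr
            out[i][j] = acc / norm
    return out
```

On a depth map whose step edge coincides with a step edge in the guide, the range weight across the edge is essentially zero, so the two sides are filtered independently and the edge stays sharp.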
Hybrid Image Fusion for Sharpness Enhancement of Multi-Spectral Lunar Images
NASA Astrophysics Data System (ADS)
Awumah, Anna; Mahanti, Prasun; Robinson, Mark
2016-10-01
Image fusion enhances the sharpness of a multi-spectral (MS) image by incorporating spatial details from a higher-resolution panchromatic (Pan) image [1,2]. Known applications of image fusion for planetary images are rare, although image fusion is well-known for its applications to Earth-based remote sensing. In a recent work [3], six different image fusion algorithms were implemented and their performances were verified with images from the Lunar Reconnaissance Orbiter (LRO) Camera. The image fusion procedure obtained a high-resolution multi-spectral (HRMS) product from the LRO Narrow Angle Camera (used as Pan) and LRO Wide Angle Camera (used as MS) images. The results showed that the Intensity-Hue-Saturation (IHS) algorithm results in a high-spatial quality product while the Wavelet-based image fusion algorithm best preserves spectral quality among all the algorithms. In this work we show the results of a hybrid IHS-Wavelet image fusion algorithm when applied to LROC MS images. The hybrid method provides the best HRMS product - both in terms of spatial resolution and preservation of spectral details. Results from hybrid image fusion can enable new science and increase the science return from existing LROC images. [1] Pohl, C., and John L. Van Genderen. "Review article multisensor image fusion in remote sensing: concepts, methods and applications." International Journal of Remote Sensing 19.5 (1998): 823-854. [2] Zhang, Yun. "Understanding image fusion." Photogramm. Eng. Remote Sens 70.6 (2004): 657-661. [3] Mahanti, Prasun et al. "Enhancement of spatial resolution of the LROC Wide Angle Camera images." XXIII ISPRS Congress Archives (2016).
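The component-substitution step at the heart of IHS fusion is compact enough to sketch: compute the multispectral intensity, then add the Pan-minus-intensity difference back to each band, so the fused bands inherit the Pan image's spatial detail while keeping their relative spectral differences. Flat lists stand in for co-registered, upsampled band rasters, and this is the fast-IHS variant rather than the exact pipeline of [3].

```python
def ihs_fuse(r, g, b, pan):
    """Fast IHS fusion by component substitution: each fused band is the
    original band plus the (Pan - intensity) detail term, so the fused
    intensity equals the Pan band."""
    out_r, out_g, out_b = [], [], []
    for rr, gg, bb, p in zip(r, g, b, pan):
        i = (rr + gg + bb) / 3.0   # MS intensity component
        d = p - i                  # spatial detail carried by Pan
        out_r.append(rr + d)
        out_g.append(gg + d)
        out_b.append(bb + d)
    return out_r, out_g, out_b
```

A hybrid IHS-wavelet scheme would replace the raw difference `d` with a wavelet-filtered version of it, trading some spatial sharpening for better spectral preservation.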
ProxiScan™: A Novel Camera for Imaging Prostate Cancer
Ralph James
2017-12-09
ProxiScan is a compact gamma camera suited for high-resolution imaging of prostate cancer. Developed by Brookhaven National Laboratory and Hybridyne Imaging Technologies, Inc., ProxiScan won a 2009 R&D 100 Award, sponsored by R&D Magazine.
Improved head-controlled TV system produces high-quality remote image
NASA Technical Reports Server (NTRS)
Goertz, R.; Lindberg, J.; Mingesz, D.; Potts, C.
1967-01-01
The manipulator operator uses an improved-resolution TV camera/monitor positioning system to view the remote handling and processing of reactive, flammable, explosive, or contaminated materials. The pan and tilt motions of the camera and monitor are slaved to follow the corresponding motions of the operator's head.
Vandenbroucke, A.; Innes, D.; Lau, F. W. Y.; Hsu, D. F. C.; Reynolds, P. D.; Levin, Craig S.
2015-01-01
Purpose: Silicon photodetectors are of significant interest for use in positron emission tomography (PET) systems due to their compact size, insensitivity to magnetic fields, and high quantum efficiency. However, one of their main disadvantages is that fluctuations in temperature cause strong shifts in the gain of the devices. PET system designs with high photodetector density suffer both increased thermal density and constrained options for thermally regulating the devices. This paper proposes a method of thermally regulating densely packed silicon photodetectors in the context of a 1 mm³ resolution, high-sensitivity PET camera dedicated to breast imaging. Methods: The PET camera under construction consists of 2304 units, each containing two 8 × 8 arrays of 1 mm³ LYSO crystals coupled to two position sensitive avalanche photodiodes (PSAPD). A subsection of the proposed camera with 512 PSAPDs has been constructed. The proposed thermal regulation design uses water-cooled heat sinks, thermoelectric elements, and thermistors to measure and regulate the temperature of the PSAPDs in a novel manner. Active cooling elements, placed at the edge of the detector stack due to limited access, are controlled based on collective leakage current and temperature measurements in order to keep all the PSAPDs at a consistent temperature. This thermal regulation design is characterized for the temperature profile across the camera and for the time required for cooling changes to propagate across the camera. These properties guide the implementation of a software-based, cascaded proportional-integral-derivative control loop that controls the current through the Peltier elements by monitoring thermistor temperature and leakage current. The stability of leakage current and temperature within the system using this control loop is tested over a period of 14 h. The energy resolution is then measured over a period of 8.66 h.
Finally, the consistency of PSAPD gain between independent operations of the camera over 10 days is tested. Results: The PET camera maintains a temperature of 18.00 ± 0.05 °C over the course of 12 h while the ambient temperature varied 0.61 °C, from 22.83 to 23.44 °C. The 511 keV photopeak energy resolution over a period of 8.66 h is measured to be 11.3% FWHM with a maximum photopeak fluctuation of 4 keV. Between measurements of PSAPD gain separated by at least 2 days, the maximum photopeak shift was 6 keV. Conclusions: The proposed thermal regulation scheme for tightly packed silicon photodetectors provides for stable operation of the constructed subsection of a PET camera over long durations of time. The energy resolution of the system is not degraded despite shifts in ambient temperature and photodetector heat generation. The thermal regulation scheme also provides a consistent operating environment between separate runs of the camera over different days. Inter-run consistency allows for reuse of system calibration parameters from study to study, reducing the time required to calibrate the system and hence to obtain a reconstructed image. PMID:25563270
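The regulation strategy described above can be sketched as a discrete PID controller driving a toy first-order thermal plant toward the 18.00 °C setpoint. The gains, plant constants, and use of a single (non-cascaded) loop are illustrative assumptions, since the paper's cascaded proportional-integral-derivative parameters and the leakage-current feedback path are not given in the abstract.

```python
class PID:
    """Generic discrete PID controller (a sketch, not the paper's tuning)."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_err = None

    def update(self, measured, dt):
        err = self.setpoint - measured
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy plant: detector temperature drifts toward a 23 °C ambient and is
# pulled down by the (negative) Peltier drive u, in arbitrary units.
pid = PID(kp=2.0, ki=0.5, kd=0.1, setpoint=18.0)
temp, dt = 23.0, 0.1
for _ in range(2000):                       # simulate 200 s
    u = pid.update(temp, dt)
    temp += dt * (0.05 * (23.0 - temp) + 0.2 * u)
```

The integral term is what holds the plant at the setpoint despite the constant ambient heat load: at steady state the proportional and derivative terms vanish and the accumulated integral supplies the standing cooling drive.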
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freese, D. L.; Vandenbroucke, A.; Innes, D.
2015-01-15
Purpose: Silicon photodetectors are of significant interest for use in positron emission tomography (PET) systems due to their compact size, insensitivity to magnetic fields, and high quantum efficiency. However, one of their main disadvantages is that fluctuations in temperature cause strong shifts in the gain of the devices. PET system designs with high photodetector density suffer both increased thermal density and constrained options for thermally regulating the devices. This paper proposes a method of thermally regulating densely packed silicon photodetectors in the context of a 1 mm³ resolution, high-sensitivity PET camera dedicated to breast imaging. Methods: The PET camera under construction consists of 2304 units, each containing two 8 × 8 arrays of 1 mm³ LYSO crystals coupled to two position sensitive avalanche photodiodes (PSAPD). A subsection of the proposed camera with 512 PSAPDs has been constructed. The proposed thermal regulation design uses water-cooled heat sinks, thermoelectric elements, and thermistors to measure and regulate the temperature of the PSAPDs in a novel manner. Active cooling elements, placed at the edge of the detector stack due to limited access, are controlled based on collective leakage current and temperature measurements in order to keep all the PSAPDs at a consistent temperature. This thermal regulation design is characterized for the temperature profile across the camera and for the time required for cooling changes to propagate across the camera. These properties guide the implementation of a software-based, cascaded proportional-integral-derivative control loop that controls the current through the Peltier elements by monitoring thermistor temperature and leakage current. The stability of leakage current and temperature within the system using this control loop is tested over a period of 14 h. The energy resolution is then measured over a period of 8.66 h.
Finally, the consistency of PSAPD gain between independent operations of the camera over 10 days is tested. Results: The PET camera maintains a temperature of 18.00 ± 0.05 °C over the course of 12 h while the ambient temperature varied 0.61 °C, from 22.83 to 23.44 °C. The 511 keV photopeak energy resolution over a period of 8.66 h is measured to be 11.3% FWHM with a maximum photopeak fluctuation of 4 keV. Between measurements of PSAPD gain separated by at least 2 days, the maximum photopeak shift was 6 keV. Conclusions: The proposed thermal regulation scheme for tightly packed silicon photodetectors provides for stable operation of the constructed subsection of a PET camera over long durations of time. The energy resolution of the system is not degraded despite shifts in ambient temperature and photodetector heat generation. The thermal regulation scheme also provides a consistent operating environment between separate runs of the camera over different days. Inter-run consistency allows for reuse of system calibration parameters from study to study, reducing the time required to calibrate the system and hence to obtain a reconstructed image.
51. ARA-II. Camera looking southeast at foundation piers for SL-1 reactor building support. August 22, 1957. Ineel photo no. 57-4212. Photographer: Jack L. Anderson. - Idaho National Engineering Laboratory, Army Reactors Experimental Area, Scoville, Butte County, ID
Resolution recovery for Compton camera using origin ensemble algorithm.
Andreyev, A; Celler, A; Ozsahin, I; Sitek, A
2016-08-01
Compton cameras (CCs) use electronic collimation to reconstruct images of activity distribution. Although this approach can greatly improve imaging efficiency, the complex geometry underlying the CC principle makes image reconstruction with the standard iterative algorithms, such as ordered subset expectation maximization (OSEM), very time-consuming, even more so if resolution recovery (RR) is implemented. We have previously shown that the origin ensemble (OE) algorithm can be used for the reconstruction of CC data. Here we propose a method of extending our OE algorithm to include RR. To validate the proposed algorithm we used Monte Carlo simulations of a CC composed of multiple layers of pixelated CZT detectors and designed for imaging small animals. A series of CC acquisitions of small hot spheres and the Derenzo phantom placed in air were simulated. Images obtained from (a) the exact data, (b) blurred data reconstructed without resolution recovery, and (c) blurred data reconstructed with resolution recovery were compared. Furthermore, the reconstructed contrast-to-background ratios were investigated using the phantom with nine spheres placed in a hot background. Our simulations demonstrate that the proposed method allows for the recovery of the resolution loss that is due to imperfect accuracy of event detection. Additionally, tests of camera sensitivity corresponding to different detector configurations demonstrate that the proposed CC design has sensitivity comparable to PET. When the same number of events were considered, the computation time per iteration increased only by a factor of 2 when OE reconstruction with the resolution recovery correction was performed relative to the original OE algorithm. We estimate that the addition of resolution recovery to the OSEM would increase reconstruction times by 2-3 orders of magnitude per iteration.
The results of our tests demonstrate the improvement of image resolution provided by the OE reconstructions with resolution recovery. The quality of images and their contrast are similar to those obtained from the OE reconstructions from scans simulated with perfect energy and spatial resolutions.
High-Definition Television (HDTV) Images for Earth Observations and Earth Science Applications
NASA Technical Reports Server (NTRS)
Robinson, Julie A.; Holland, S. Douglas; Runco, Susan K.; Pitts, David E.; Whitehead, Victor S.; Andrefouet, Serge M.
2000-01-01
As part of Detailed Test Objective 700-17A, astronauts acquired Earth observation images from orbit using a high-definition television (HDTV) camcorder. Here we provide a summary of qualitative findings following completion of tests during missions STS (Space Transportation System)-93 and STS-99. We compared HDTV imagery stills to images taken using payload bay video cameras, a Hasselblad film camera, and an electronic still camera. We also evaluated the potential for motion video observations of changes in sunlight and the use of multi-aspect viewing to image aerosols. Spatial resolution and color quality are far superior in HDTV images compared to National Television Systems Committee (NTSC) video images. Thus, HDTV provides the first viable option for video-based remote sensing observations of Earth from orbit. Although under ideal conditions HDTV images have less spatial resolution than medium-format film cameras such as the Hasselblad, under some conditions on orbit the HDTV images acquired compared favorably with the Hasselblad. Of particular note was the quality of color reproduction in the HDTV images. HDTV and electronic still camera (ESC) images were not compared with matched fields of view, and so spatial resolution could not be compared for the two image types. However, the color reproduction of the HDTV stills was truer than the colors in the ESC images. As HDTV becomes the operational video standard for the Space Shuttle and Space Station, HDTV has great potential as a source of Earth-observation data. Planning for the conversion from NTSC to HDTV video standards should include planning for Earth data archiving and distribution.
Temporal Coding of Volumetric Imagery
NASA Astrophysics Data System (ADS)
Llull, Patrick Ryan
'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption. This dissertation explores systems and methods capable of efficiently improving sensitivity and performance for image volume cameras, and specifically proposes several sampling strategies that utilize temporal coding to improve imaging system performance and enhance our awareness for a variety of dynamic applications. Video cameras and camcorders sample the video volume (x,y,t) at fixed intervals to gain understanding of the volume's temporal evolution. Conventionally, one must reduce the spatial resolution to increase the frame rate of such cameras. Using temporal coding via physical translation of an optical element known as a coded aperture, the coded aperture compressive temporal imaging (CACTI) camera demonstrates a method by which to embed the temporal dimension of the video volume into spatial (x,y) measurements, thereby greatly improving temporal resolution with minimal loss of spatial resolution. This technique, which is among a family of compressive sampling strategies developed at Duke University, temporally codes the exposure readout functions at the pixel level. Since video cameras nominally integrate the remaining image volume dimensions (e.g. spectrum and focus) at capture time, spectral (x,y,t,lambda) and focal (x,y,t,z) image volumes are traditionally captured via sequential changes to the spectral and focal state of the system, respectively.
The CACTI camera's ability to embed video volumes into images leads to exploration of other information within that video, namely focal and spectral information. The next part of the thesis demonstrates derivative works of CACTI: compressive extended depth of field and compressive spectral-temporal imaging. These works successfully show the technique's extension of temporal coding to improve sensing performance in these other dimensions. Geometrical optics-related tradeoffs, such as the classic challenges of wide-field-of-view and high resolution photography, have motivated the development of multiscale camera arrays. The advent of such designs less than a decade ago heralds a new era of research- and engineering-related challenges. One significant challenge is that of managing the focal volume (x,y,z) over wide fields of view and resolutions. The fourth chapter shows advances on focus and image quality assessment for a class of multiscale gigapixel cameras developed at Duke. Along the same line of work, we have explored methods for dynamic and adaptive addressing of focus via point spread function engineering. We demonstrate another form of temporal coding in the form of physical translation of the image plane from its nominal focal position, and demonstrate this technique's capability to generate arbitrary point spread functions.
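The CACTI measurement model itself is simple to state: each temporal frame is elementwise-modulated by a shifted binary aperture code and the results are summed into a single coded snapshot, y = Σ_t C_t ⊙ x_t. The sketch below implements just this forward model, with the mask translated one pixel per frame to mimic the physical motion of the coded aperture; the sizes and code pattern are illustrative assumptions, and the compressive reconstruction step is omitted.

```python
def shift_mask(mask, dx):
    """Translate a 2-D binary mask horizontally by dx pixels (zero fill),
    mimicking the physical translation of the coded aperture."""
    h, w = len(mask), len(mask[0])
    return [[mask[i][j - dx] if 0 <= j - dx < w else 0 for j in range(w)]
            for i in range(h)]

def cacti_measure(frames, base_mask):
    """Coded snapshot y = sum_t C_t * x_t, where C_t is base_mask
    shifted by t pixels: the CACTI forward model."""
    h, w = len(frames[0]), len(frames[0][0])
    y = [[0.0] * w for _ in range(h)]
    for t, x_t in enumerate(frames):
        c_t = shift_mask(base_mask, t)
        for i in range(h):
            for j in range(w):
                y[i][j] += c_t[i][j] * x_t[i][j]
    return y
```

Because each pixel of the snapshot mixes a different, known subset of the temporal frames, a sparsity-exploiting solver can later invert this many-to-one mapping to recover the video volume.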
Modeling and Simulation of High Resolution Optical Remote Sensing Satellite Geometric Chain
NASA Astrophysics Data System (ADS)
Xia, Z.; Cheng, S.; Huang, Q.; Tian, G.
2018-04-01
High resolution satellites with long focal lengths and large apertures have been widely used in recent years for georeferencing observed scenes. A consistent end-to-end model of the high resolution remote sensing satellite geometric chain is presented, comprising the scene, the three-line-array camera, the platform (including attitude and position information), the time system, and the processing algorithm. The integrated design of the camera and the star tracker is considered, and a simulation method for geolocation accuracy is put forward by introducing a new index: the angle between the camera and the star tracker. The model is validated rigorously by simulating geolocation accuracy according to the test method used for ZY-3 satellite imagery. The simulation results show that the geolocation accuracy is within 25 m, which is highly consistent with the test results. The geolocation accuracy can be improved by about 7 m through the integrated design. The model, combined with the simulation method, is applicable to estimating geolocation accuracy before satellite launch.
New technology and techniques for x-ray mirror calibration at PANTER
NASA Astrophysics Data System (ADS)
Freyberg, Michael J.; Budau, Bernd; Burkert, Wolfgang; Friedrich, Peter; Hartner, Gisela; Misaki, Kazutami; Mühlegger, Martin
2008-07-01
The PANTER X-ray Test Facility has been utilized successfully for developing and calibrating X-ray astronomical instrumentation for observatories such as ROSAT, Chandra, XMM-Newton, Swift, etc. Future missions like eROSITA, SIMBOL-X, or XEUS require improved spatial resolution and broader energy band pass, both for optics and for cameras. Calibration campaigns at PANTER have made use of flight spare instrumentation for space applications; here we report on a new dedicated CCD camera for on-ground calibration, called TRoPIC. As the CCD is similar to ones used for eROSITA (pn-type, back-illuminated, 75 μm pixel size, frame store mode, 450 μm wafer thickness, etc.) it can serve as a prototype for eROSITA camera development. New techniques enable and enhance the analysis of measurements of eROSITA shells or silicon pore optics. Specifically, we show how sub-pixel resolution can be utilized to improve spatial resolution and, subsequently, the characterization of mirror shell quality and of point spread function parameters in particular, which is also relevant for position reconstruction of astronomical sources in orbit.
Performance Characteristics For The Orbiter Camera Payload System's Large Format Camera (LFC)
NASA Astrophysics Data System (ADS)
Mollberg, Bernard H.
1981-11-01
The Orbiter Camera Payload System, the OCPS, is an integrated photographic system which is carried into Earth orbit as a payload in the Shuttle Orbiter vehicle's cargo bay. The major component of the OCPS is a Large Format Camera (LFC), a precision wide-angle cartographic instrument that is capable of producing high resolution stereophotography of great geometric fidelity in multiple base-to-height ratios. The primary design objective for the LFC was to maximize all system performance characteristics while maintaining a high level of reliability compatible with rocket launch conditions and the on-orbit environment.
Compact CdZnTe-based gamma camera for prostate cancer imaging
NASA Astrophysics Data System (ADS)
Cui, Yonggang; Lall, Terry; Tsui, Benjamin; Yu, Jianhua; Mahler, George; Bolotnikov, Aleksey; Vaska, Paul; De Geronimo, Gianluigi; O'Connor, Paul; Meinken, George; Joyal, John; Barrett, John; Camarda, Giuseppe; Hossain, Anwar; Kim, Ki Hyun; Yang, Ge; Pomper, Marty; Cho, Steve; Weisman, Ken; Seo, Youngho; Babich, John; LaFrance, Norman; James, Ralph B.
2011-06-01
In this paper, we discuss the design of a compact gamma camera for high-resolution prostate cancer imaging using Cadmium Zinc Telluride (CdZnTe or CZT) radiation detectors. Prostate cancer is a common disease in men. Nowadays, a blood test measuring the level of prostate specific antigen (PSA) is widely used for screening for the disease in males over 50, followed by (ultrasound) imaging-guided biopsy. However, PSA tests have a high false-positive rate, and ultrasound-guided biopsy has a high likelihood of missing small cancerous tissues. Commercial methods of nuclear medical imaging, e.g. PET and SPECT, can functionally image the organs and potentially find cancer tissues at early stages, but their applications in diagnosing prostate cancer have been limited by the smallness of the prostate gland and the long working distance between the organ and the detectors comprising these imaging systems. CZT is a semiconductor material with a wide band-gap and relatively high electron mobility, and thus can operate at room temperature without additional cooling. CZT detectors are photon-electron direct-conversion devices, thus offering high energy-resolution in detecting gamma rays, enabling energy-resolved imaging, and reducing the background of Compton-scattering events. In addition, CZT material has high stopping power for gamma rays; for medical imaging, a few-mm-thick CZT material provides adequate detection efficiency for many SPECT radiotracers. Because of these advantages, CZT detectors are becoming popular for several SPECT medical-imaging applications. Most recently, we designed a compact gamma camera using CZT detectors coupled to an application-specific integrated circuit (ASIC). This camera functions as a trans-rectal probe to image the prostate gland from a distance of only 1-5 cm, thus offering higher detection efficiency and higher spatial resolution. Hence, it potentially can detect prostate cancers at their early stages.
The performance tests of this camera have been completed. The results show better than 6-mm resolution at a distance of 1 cm. Details of the test results are discussed in this paper.
Arsalan, Muhammad; Naqvi, Rizwan Ali; Kim, Dong Seop; Nguyen, Phong Ha; Owais, Muhammad; Park, Kang Ryoung
2018-01-01
The recent advancements in computer vision have opened new horizons for deploying biometric recognition algorithms in mobile and handheld devices. Similarly, accurate iris recognition is now much needed in unconstrained scenarios. These environments make the acquired iris image exhibit occlusion, low resolution, blur, unusual glint, ghost effect, and off-angles. The prevailing segmentation algorithms cannot cope with these constraints. In addition, owing to the unavailability of near-infrared (NIR) light, iris segmentation in visible light environments is challenging because of visible-light noise. Deep learning with convolutional neural networks (CNN) has brought a considerable breakthrough in various applications. To address the iris segmentation issues in challenging situations by visible light and near-infrared light camera sensors, this paper proposes a densely connected fully convolutional network (IrisDenseNet), which can determine the true iris boundary even with inferior-quality images by using better information gradient flow between the dense blocks. In the experiments conducted, five datasets of visible light and NIR environments were used. For the visible light environment, the noisy iris challenge evaluation part-II (NICE-II, selected from the UBIRIS.v2 database) and mobile iris challenge evaluation (MICHE-I) datasets were used. For the NIR environment, the Institute of Automation, Chinese Academy of Sciences (CASIA) v4.0 interval, CASIA v4.0 distance, and IIT Delhi v1.0 iris datasets were used. Experimental results showed the optimal segmentation of the proposed IrisDenseNet and its excellent performance over existing algorithms for all five datasets. PMID:29748495
Nanometric depth resolution from multi-focal images in microscopy.
Dalgarno, Heather I C; Dalgarno, Paul A; Dada, Adetunmise C; Towers, Catherine E; Gibson, Gavin J; Parton, Richard M; Davis, Ilan; Warburton, Richard J; Greenaway, Alan H
2011-07-06
We describe a method for tracking the position of small features in three dimensions from images recorded on a standard microscope with an inexpensive attachment between the microscope and the camera. The depth-measurement accuracy of this method is tested experimentally on a wide-field, inverted microscope and is shown to give approximately 8 nm depth resolution, over a specimen depth of approximately 6 µm, when using a 12-bit charge-coupled device (CCD) camera and very bright but unresolved particles. To assess low-flux limitations a theoretical model is used to derive an analytical expression for the minimum variance bound. The approximations used in the analytical treatment are tested using numerical simulations. It is concluded that approximately 14 nm depth resolution is achievable with flux levels available when tracking fluorescent sources in three dimensions in live-cell biology and that the method is suitable for three-dimensional photo-activated localization microscopy resolution. Sub-nanometre resolution could be achieved with photon-counting techniques at high flux levels.
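The two-plane depth idea above can be illustrated with a toy defocus model. This is a sketch under assumed parameters (Gaussian spot width growing with defocus), not the paper's actual optics, calibration, or minimum-variance analysis: the widths a particle presents in two focal planes determine its depth algebraically.

```python
import math

s0, dz = 0.5, 3.0   # in-focus spot width (um) and focal-plane separation (um); assumed
a = 0.2             # blur growth per um of defocus; assumed

def width(z, z_focus):
    # Gaussian-width defocus model: w^2 = s0^2 + a^2 * (z - z_focus)^2
    return math.hypot(s0, a * (z - z_focus))

def depth_from_widths(w1, w2, z1=0.0, z2=dz):
    # w1^2 - w2^2 = a^2 (z2 - z1)(2z - z1 - z2)  =>  solve for z
    return ((w1 ** 2 - w2 ** 2) / (a * a * (z2 - z1)) + z1 + z2) / 2

z_true = 1.2
w1, w2 = width(z_true, 0.0), width(z_true, dz)
z_est = depth_from_widths(w1, w2)   # recovers z_true
```

In the real instrument the widths are noisy, which is why the paper derives a minimum variance bound for the achievable depth resolution.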
Characterization of the LBNL PEM Camera
NASA Astrophysics Data System (ADS)
Wang, G.-C.; Huber, J. S.; Moses, W. W.; Qi, J.; Choong, W.-S.
2006-06-01
We present the tomographic images and performance measurements of the LBNL positron emission mammography (PEM) camera, a specially designed positron emission tomography (PET) camera that utilizes PET detector modules with depth of interaction measurement capability to achieve both high sensitivity and high resolution for breast cancer detection. The camera currently consists of 24 detector modules positioned as four detector banks to cover a rectangular patient port that is 8.2 × 6 cm² with a 5 cm axial extent. Each LBNL PEM detector module consists of 64 3 × 3 × 30 mm³ LSO crystals coupled to a single photomultiplier tube (PMT) and an 8 × 8 silicon photodiode array (PD). The PMT provides accurate timing, the PD identifies the crystal of interaction, the sum of the PD and PMT signals (PD+PMT) provides the total energy, and the PD/(PD+PMT) ratio determines the depth of interaction. The performance of the camera has been evaluated by imaging various phantoms. The full-width-at-half-maximum (FWHM) spatial resolution changes slightly from 1.9 mm to 2.1 mm when measured at the center and corner of the field of the view, respectively, using a 6 ns coincidence timing window and a 300-750 keV energy window. With the same setup, the peak sensitivity of the camera is 1.83 kcps/μCi.
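The per-event signal arithmetic described above can be sketched directly; the pulse values below are made-up illustration numbers, not calibrated detector data: PD + PMT gives total energy, and PD/(PD+PMT) encodes the depth of interaction.

```python
# Event features of the dual-readout module: the photodiode (PD) identifies
# the crystal, the PMT gives timing, and the two amplitudes together encode
# energy and depth of interaction (DOI).
def event_features(pd, pmt):
    energy = pd + pmt                # total energy deposited
    doi_ratio = pd / energy          # monotonic in interaction depth
    return energy, doi_ratio

energy, doi = event_features(pd=120.0, pmt=380.0)   # hypothetical amplitudes
```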
Optical design of space cameras for automated rendezvous and docking systems
NASA Astrophysics Data System (ADS)
Zhu, X.
2018-05-01
Visible cameras are essential components of a space automated rendezvous and docking (AR&D) system, which is utilized in many space missions including crewed or robotic spaceship docking, on-orbit satellite servicing, and autonomous landing and hazard avoidance. Cameras are ubiquitous devices in modern times, with countless lens designs that focus on high resolution and color rendition. In comparison, space AR&D cameras, while not required to have extremely high resolution or color rendition, impose some unique requirements on lenses. Fixed lenses with no moving parts, and separate lenses for narrow and wide field-of-view (FOV), are normally used in order to meet high reliability requirements. Cemented lens elements are usually avoided due to the wide temperature swings and outgassing requirements of the space environment. The lenses should be designed with exceptional stray-light performance and minimal lens flare, given the intense sunlight and lack of atmospheric scattering in space. Furthermore, radiation-resistant glasses should be considered to prevent glass darkening from space radiation. Neptec has designed and built a narrow-FOV (NFOV) lens and a wide-FOV (WFOV) lens for an AR&D visible camera system. The lenses were designed using the ZEMAX program; the stray-light performance and the lens baffles were simulated using the TracePro program. This paper discusses general requirements for space AR&D camera lenses and the specific measures taken for the lenses to meet space environmental requirements.
UWB Tracking System Design with TDOA Algorithm
NASA Technical Reports Server (NTRS)
Ni, Jianjun; Arndt, Dickey; Ngo, Phong; Phan, Chau; Gross, Julia; Dusl, John; Schwing, Alan
2006-01-01
This presentation discusses an ultra-wideband (UWB) tracking system design effort using the time-difference-of-arrival (TDOA) tracking algorithm. UWB technology is exploited to implement the tracking system due to its properties, such as high data rate, fine time resolution, and low power spectral density. A system design using commercially available UWB products is proposed. A two-stage weighted least-squares method is chosen to solve the non-linear TDOA equations. Matlab simulations in both two-dimensional and three-dimensional space show that the tracking algorithm can achieve fine tracking resolution with low-noise TDOA data. The error analysis reveals various ways to improve the tracking resolution. Lab experiments demonstrate the UWB TDOA tracking capability with fine resolution. This research effort is motivated by the prototype development project Mini-AERCam (Autonomous Extra-vehicular Robotic Camera), a free-flying video camera system under development at NASA Johnson Space Center to aid in surveillance around the International Space Station (ISS).
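The TDOA localization step can be illustrated with a small numerical example. This is a generic Gauss-Newton fit on range-difference residuals, not the paper's two-stage weighted least-squares solver, and the anchor layout and emitter position are made up for the sketch.

```python
import math

# Four receivers at known positions; the first is the TDOA reference.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]

def ranges(p):
    return [math.dist(p, a) for a in anchors]

def locate(rdiff, guess=(5.0, 5.0), iters=30):
    x, y = guess
    for _ in range(iters):
        d = ranges((x, y))
        r, J = [], []
        for i in range(1, len(anchors)):
            # Residual: modeled range difference minus measured one.
            r.append((d[i] - d[0]) - rdiff[i - 1])
            J.append(((x - anchors[i][0]) / d[i] - (x - anchors[0][0]) / d[0],
                      (y - anchors[i][1]) / d[i] - (y - anchors[0][1]) / d[0]))
        # Solve the 2x2 normal equations (J^T J) step = J^T r.
        a = sum(gx * gx for gx, _ in J)
        b = sum(gx * gy for gx, gy in J)
        c = sum(gy * gy for _, gy in J)
        u = sum(gx * ri for (gx, _), ri in zip(J, r))
        v = sum(gy * ri for (_, gy), ri in zip(J, r))
        det = a * c - b * b
        x -= (c * u - b * v) / det
        y -= (a * v - b * u) / det
    return x, y

true_pos = (3.0, 4.0)
d = ranges(true_pos)
rdiff = [d[i] - d[0] for i in range(1, len(anchors))]   # noise-free range differences
x_est, y_est = locate(rdiff)
```

With noisy TDOA data a weighting matrix reflecting the measurement covariance would be added, which is the role of the weighted least-squares stages the abstract mentions.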
High-resolution Ceres Low Altitude Mapping Orbit Atlas derived from Dawn Framing Camera images
NASA Astrophysics Data System (ADS)
Roatsch, Th.; Kersten, E.; Matz, K.-D.; Preusker, F.; Scholten, F.; Jaumann, R.; Raymond, C. A.; Russell, C. T.
2017-06-01
The Dawn spacecraft Framing Camera (FC) acquired over 31,300 clear-filter images of Ceres with a resolution of about 35 m/pixel during the eleven cycles of the Low Altitude Mapping Orbit (LAMO) phase between December 16, 2015 and August 8, 2016. We ortho-rectified the images from the first four cycles and produced a global, high-resolution, uncontrolled photomosaic of Ceres. This global mosaic is the basis for a high-resolution Ceres atlas that consists of 62 tiles mapped at a scale of 1:250,000. The nomenclature used in this atlas was proposed by the Dawn team and was approved by the International Astronomical Union (IAU). The full atlas is available to the public through the Dawn Geographical Information System (GIS) web page [http://dawngis.dlr.de/atlas] and will become available through the NASA Planetary Data System (PDS) (http://pdssbn.astro.umd.edu/).
LWIR NUC using an uncooled microbolometer camera
NASA Astrophysics Data System (ADS)
Laveigne, Joe; Franks, Greg; Sparkman, Kevin; Prewarski, Marcus; Nehring, Brian; McHugh, Steve
2010-04-01
Performing a good non-uniformity correction (NUC) is a key part of achieving optimal performance from an infrared scene projector (IRSP). Ideally, NUC will be performed in the same band in which the scene projector will be used. Cooled, large-format MWIR cameras are readily available and have been successfully used to perform NUC; however, cooled, large-format LWIR cameras are not as common and are prohibitively expensive. Large-format uncooled cameras are far more available and affordable, but present a range of challenges in practical use for performing NUC on an IRSP. Santa Barbara Infrared, Inc. reports progress on a continuing development program to use a microbolometer camera to perform LWIR NUC on an IRSP. Camera instability, temporal response, and thermal resolution are the main difficulties. A discussion of processes developed to mitigate these issues follows.
Efficient space-time sampling with pixel-wise coded exposure for high-speed imaging.
Liu, Dengyu; Gu, Jinwei; Hitomi, Yasunobu; Gupta, Mohit; Mitsunaga, Tomoo; Nayar, Shree K
2014-02-01
Cameras face a fundamental trade-off between spatial and temporal resolution. Digital still cameras can capture images with high spatial resolution, but most high-speed video cameras have relatively low spatial resolution. It is hard to overcome this trade-off without incurring a significant increase in hardware costs. In this paper, we propose techniques for sampling, representing, and reconstructing the space-time volume to overcome this trade-off. Our approach has two important distinctions compared to previous works: 1) We achieve sparse representation of videos by learning an overcomplete dictionary on video patches, and 2) we adhere to practical hardware constraints on sampling schemes imposed by architectures of current image sensors, which means that our sampling function can be implemented on CMOS image sensors with modified control units in the future. We evaluate components of our approach, sampling function and sparse representation, by comparing them to several existing approaches. We also implement a prototype imaging system with pixel-wise coded exposure control using a liquid crystal on silicon device. System characteristics such as field of view and modulation transfer function are evaluated for our imaging system. Both simulations and experiments on a wide range of scenes show that our method can effectively reconstruct a video from a single coded image while maintaining high spatial resolution.
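The pixel-wise coded-exposure sampling function described above can be sketched as follows. The bump length, frame subdivision, and random mask layout are illustrative assumptions, and the dictionary-based sparse reconstruction is omitted; the sketch only shows how each pixel integrates one contiguous "on" interval of the space-time volume into a single coded image.

```python
import random

T, L = 9, 3   # sub-frames per exposure and per-pixel bump length (assumed)

def exposure_mask(h, w, seed=0):
    # One contiguous exposure bump of length L at a random offset per pixel.
    rng = random.Random(seed)
    mask = [[[0] * T for _ in range(w)] for _ in range(h)]
    for y in range(h):
        for x in range(w):
            start = rng.randrange(T - L + 1)
            for t in range(start, start + L):
                mask[y][x][t] = 1
    return mask

def coded_image(video, mask):
    # The sensor sums the masked sub-frames into one coded exposure.
    h, w = len(mask), len(mask[0])
    return [[sum(video[t][y][x] * mask[y][x][t] for t in range(T))
             for x in range(w)] for y in range(h)]

video = [[[1.0] * 4 for _ in range(4)] for _ in range(T)]   # constant test clip
coded = coded_image(video, exposure_mask(4, 4))             # each pixel sums L sub-frames
```

The single-bump constraint mirrors the hardware limitation the paper emphasizes: per-pixel exposure control on a CMOS sensor can realize one on-interval per frame, not arbitrary temporal codes.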
The High Resolution Stereo Camera (HRSC): 10 Years of Imaging Mars
NASA Astrophysics Data System (ADS)
Jaumann, R.; Neukum, G.; Tirsch, D.; Hoffmann, H.
2014-04-01
The HRSC Experiment: Imagery is the major source for our current understanding of the geologic evolution of Mars in qualitative and quantitative terms. Imaging is required to enhance our knowledge of Mars with respect to geological processes occurring on local, regional and global scales and is an essential prerequisite for detailed surface exploration. The High Resolution Stereo Camera (HRSC) of ESA's Mars Express Mission (MEx) is designed to simultaneously map the morphology, topography, structure and geologic context of the surface of Mars as well as atmospheric phenomena [1]. The HRSC directly addresses two of the main scientific goals of the Mars Express mission: (1) high-resolution three-dimensional photogeologic surface exploration and (2) the investigation of surface-atmosphere interactions over time; and significantly supports: (3) the study of atmospheric phenomena by multi-angle coverage and limb sounding as well as (4) multispectral mapping by providing high-resolution three-dimensional color context information. In addition, the stereoscopic imagery especially characterizes landing sites and their geologic context [1]. The HRSC surface resolution and the digital terrain models bridge the gap in scales between the highest-ground-resolution images (e.g., HiRISE) and global coverage observations (e.g., Viking). This is also the case with respect to DTMs (e.g., MOLA and local high-resolution DTMs). HRSC is also used as a cartographic basis to correlate between panchromatic and multispectral stereo data. The unique multi-angle imaging technique of the HRSC supports its stereo capability by providing not only a stereo triplet but a stereo quintuplet, making the photogrammetric processing very robust [1, 3]. The capabilities for three-dimensional orbital reconnaissance of the Martian surface are ideally met by HRSC, making this camera unique in the international Mars exploration effort.
A design of camera simulator for photoelectric image acquisition system
NASA Astrophysics Data System (ADS)
Cai, Guanghui; Liu, Wen; Zhang, Xin
2015-02-01
In the process of developing photoelectric image acquisition equipment, its function and performance need to be verified. In order to let the photoelectric device replay previously acquired image data during debugging and testing, a design scheme for a camera simulator is presented. In this system, with an FPGA as the control core, the image data are saved in NAND flash through the USB 2.0 bus. Because the access rate of the NAND flash is too slow to meet the requirements of the system, the pipeline technique and the high-band-bus technique are applied in the design to improve the storage rate. The image data are read out from flash under the control logic of the FPGA and output separately through three different interfaces, Camera Link, LVDS and PAL, which can provide image data for debugging and algorithm validation of photoelectric image acquisition equipment. However, because the standard PAL image resolution is 720 × 576, the resolution differs between the PAL image and the input image, so the image is output after resolution conversion. The experimental results demonstrate that the camera simulator outputs the three image-format sequences correctly, which can be captured and displayed by a frame grabber. The three-format image data can meet the test requirements of most equipment, shorten debugging time and improve test efficiency.
A new spherical scanning system for infrared reflectography of paintings
NASA Astrophysics Data System (ADS)
Gargano, M.; Cavaliere, F.; Viganò, D.; Galli, A.; Ludwig, N.
2017-03-01
Infrared reflectography is an imaging technique used to visualize the underdrawings of ancient paintings; it relies on the fact that most pigment layers are quite transparent to infrared radiation in the spectral band between 0.8 μm and 2.5 μm. InGaAs sensor cameras are nowadays the most used devices to visualize the underdrawings but due to the small size of the detectors, these cameras are usually mounted on scanning systems to record high resolution reflectograms. This work describes a portable scanning system prototype based on a peculiar spherical scanning system built through a light weight and low cost motorized head. The motorized head was built with the purpose of allowing the refocusing adjustment needed to compensate the variable camera-painting distance during the rotation of the camera. The prototype has been tested first in laboratory and then in-situ for the Giotto panel "God the Father with Angels" with a 256 pixel per inch resolution. The system performance is comparable with that of other reflectographic devices with the advantage of extending the scanned area up to 1 m × 1 m, with a 40 min scanning time. The present configuration can be easily modified to increase the resolution up to 560 pixels per inch or to extend the scanned area up to 2 m × 2 m.
Low-cost, high-resolution scanning laser ophthalmoscope for the clinical environment
NASA Astrophysics Data System (ADS)
Soliz, P.; Larichev, A.; Zamora, G.; Murillo, S.; Barriga, E. S.
2010-02-01
Researchers have sought to gain greater insight into the mechanisms of the retina and the optic disc at high spatial resolutions that would enable the visualization of small structures such as photoreceptors and nerve fiber bundles. The sources of retinal image quality degradation are aberrations within the human eye, which limit the achievable resolution and the contrast of small image details. To overcome these fundamental limitations, researchers have been applying adaptive optics (AO) techniques to correct for the aberrations. Today, deformable mirror based adaptive optics devices have been developed to overcome the limitations of standard fundus cameras, but at prices that are typically unaffordable for most clinics. In this paper we demonstrate a clinically viable fundus camera with auto-focus and astigmatism correction that is easy to use and has improved resolution. We have shown that removal of low-order aberrations results in significantly better resolution and quality images. Additionally, through the application of image restoration and super-resolution techniques, the images present considerably improved quality. The improvements lead to enhanced visualization of retinal structures associated with pathology.
2008-01-01
Interested in a photograph of the first space walk by an American astronaut, or the first photograph from space of a solar eclipse? Or maybe your interest is in a specific geologic, oceanic, or meteorological phenomenon? The U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center is making photographs of the Earth taken from space available for search, download, and ordering. These photographs were taken by Gemini mission astronauts with handheld cameras or by the Large Format Camera that flew on space shuttle Challenger in October 1984. Space photographs are distributed by EROS only as high-resolution scanned or medium-resolution digital products.
Dynamic frequency-domain interferometer for absolute distance measurements with high resolution
NASA Astrophysics Data System (ADS)
Weng, Jidong; Liu, Shenggang; Ma, Heli; Tao, Tianjiong; Wang, Xiang; Liu, Cangli; Tan, Hua
2014-11-01
A unique dynamic frequency-domain interferometer for absolute distance measurement has been developed recently. This paper presents the working principle of the new interferometric system, which uses a photonic crystal fiber to transmit the wide-spectrum light beams and a high-speed streak camera or frame camera to record the interference stripes. Preliminary measurements of the harmonic vibrations of a speaker driven by a radio, and of the changes in the tip clearance of a rotating gear wheel, show that this new type of interferometer can perform absolute distance measurements with both high time resolution and high distance resolution.
Image quality prediction: an aid to the Viking Lander imaging investigation on Mars.
Huck, F O; Wall, S D
1976-07-01
Two Viking spacecraft scheduled to land on Mars in the summer of 1976 will return multispectral panoramas of the Martian surface with resolutions 4 orders of magnitude higher than have been previously obtained and stereo views with resolutions approaching that of the human eye. Mission constraints and uncertainties require a carefully planned imaging investigation that is supported by a computer model of camera response and surface features to aid in diagnosing camera performance, in establishing a preflight imaging strategy, and in rapidly revising this strategy if pictures returned from Mars reveal unfavorable or unanticipated conditions.
Lee, Chang Kyu; Kim, Youngjun; Lee, Nam; Kim, Byeongwoo; Kim, Doyoung; Yi, Seong
2017-02-15
A study of the feasibility of commercially available action cameras for recording video of spine surgery. Recent innovations in wearable action cameras with high-definition video recording enable surgeons to record operations easily and without high costs. The purpose of this study is to compare the feasibility, safety, and efficacy of commercially available action cameras in recording video of spine surgery. There are early reports of medical professionals using Google Glass throughout the hospital, the Panasonic HX-A100 action camera, and GoPro; this study is the first report for spine surgery. Three commercially available cameras were tested: the GoPro Hero 4 Silver, Google Glass, and the Panasonic HX-A100. A typical spine surgery, posterior lumbar laminectomy and fusion, was selected for video recording. The three cameras were used by one surgeon, and video was recorded throughout the operation. The comparison was made from the perspectives of human factors, specifications, and video quality. The most convenient and lightweight device to wear and hold throughout the long operation was Google Glass. Regarding image quality, all devices except Google Glass supported HD format, and the GoPro offers unique 2.7K and 4K resolutions; video resolution was best with the GoPro. Regarding field of view (FOV), the GoPro can adjust the point of interest and FOV according to the surgery, and its narrow-FOV option was best for recording video clips to share. Google Glass has potential through application programs. Connectivity such as Wi-Fi and Bluetooth enables video streaming for an audience, but only Google Glass has a built-in two-way communication feature. Action cameras have the potential to improve patient safety, operator comfort, and procedure efficiency in the field of spinal surgery, and to broadcast surgeries as the devices and their applied programs develop. Level of evidence: N/A.
Characterization of SWIR cameras by MRC measurements
NASA Astrophysics Data System (ADS)
Gerken, M.; Schlemmer, H.; Haan, Hubertus A.; Siemens, Christofer; Münzberg, M.
2014-05-01
Cameras for the SWIR wavelength range are becoming more and more important because of the better observation range for daylight operation under adverse weather conditions (haze, fog, rain). In order to choose the most suitable SWIR camera, or to qualify a camera for a given application, characterization of the camera by means of the Minimum Resolvable Contrast (MRC) concept is favorable, as the MRC comprises all relevant properties of the instrument. With the MRC known for a given camera device, the achievable observation range can be calculated for every combination of target size, illumination level and weather conditions. MRC measurements in the SWIR wavelength band can be performed largely along the guidelines for MRC measurements of a visual camera. Typically, measurements are performed with a set of resolution targets (e.g. the USAF 1951 target) manufactured with different contrast values from 50% down to less than 1%. For a given illumination level, the achievable spatial resolution is then measured for each target. The resulting curve shows the minimum contrast necessary to resolve the structure of a target as a function of spatial frequency. To perform MRC measurements for SWIR cameras, first, the irradiation parameters have to be given in radiometric instead of photometric units, which are limited in their use to the visible range; to do so, SWIR illumination levels for typical daylight and twilight conditions have to be defined. Second, a radiation source with appropriate emission in the SWIR range (e.g. an incandescent lamp) is necessary, and the irradiance has to be measured in W/m² instead of lux (lm/m²). Third, the contrast values of the targets have to be calibrated anew for the SWIR range because they typically differ from the values determined for the visual range.
Measured MRC values of three cameras are compared to the specified performance data of the devices and the results of a multi-band in-house designed Vis-SWIR camera system are discussed.
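The contrast bookkeeping behind an MRC curve can be sketched in a few lines; the radiance values and spatial frequencies below are illustrative numbers, not measured data from the cameras discussed above.

```python
# Michelson contrast of a bar target from its bright and dark radiances;
# an MRC curve pairs the lowest resolvable contrast with the spatial
# frequency of the target at which it was measured.
def michelson(l_max, l_min):
    return (l_max - l_min) / (l_max + l_min)

# (spatial frequency, bright radiance, dark radiance) -> MRC data points
targets = [(0.5, 100.0, 40.0), (1.0, 100.0, 60.0), (2.0, 100.0, 85.0)]
mrc_curve = [(f, michelson(hi, lo)) for f, hi, lo in targets]
```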
HRSC: High resolution stereo camera
Neukum, G.; Jaumann, R.; Basilevsky, A.T.; Dumke, A.; Van Gasselt, S.; Giese, B.; Hauber, E.; Head, J. W.; Heipke, C.; Hoekzema, N.; Hoffmann, H.; Greeley, R.; Gwinner, K.; Kirk, R.; Markiewicz, W.; McCord, T.B.; Michael, G.; Muller, Jan-Peter; Murray, J.B.; Oberst, J.; Pinet, P.; Pischel, R.; Roatsch, T.; Scholten, F.; Willner, K.
2009-01-01
The High Resolution Stereo Camera (HRSC) on Mars Express has delivered a wealth of image data, amounting to over 2.5 TB from the start of the mapping phase in January 2004 to September 2008. In that time, more than a third of Mars was covered at a resolution of 10-20 m/pixel in stereo and colour. After five years in orbit, HRSC is still in excellent shape, and it could continue to operate for many more years. HRSC has proven its ability to close the gap between the low-resolution Viking image data and the high-resolution Mars Orbiter Camera images, leading to a global picture of the geological evolution of Mars that is now much clearer than ever before. Derived highest-resolution terrain model data have closed major gaps and provided an unprecedented insight into the shape of the surface, which is paramount not only for surface analysis and geological interpretation, but also for combination with and analysis of data from other instruments, as well as in planning for future missions. This chapter presents the scientific output from data analysis and high-level data processing, complemented by a summary of how the experiment is conducted by the HRSC team members working in geoscience, atmospheric science, photogrammetry and spectrophotometry. Many of these contributions have been or will be published in peer-reviewed journals and special issues. They form a cross-section of the scientific output, either by summarising the new geoscientific picture of Mars provided by HRSC or by detailing some of the topics of data analysis concerning photogrammetry, cartography and spectral data analysis.
NASA Astrophysics Data System (ADS)
Javh, Jaka; Slavič, Janko; Boltežar, Miha
2018-02-01
Instantaneous full-field displacement fields can be measured using cameras. In fact, using high-speed cameras, full-field spectral information up to a couple of kHz can be measured. The trouble is that high-speed cameras capable of recording high-resolution fields of view at high frame rates are very expensive (from tens to hundreds of thousands of euros per camera). This paper introduces a measurement set-up capable of measuring high-frequency vibrations using slow cameras such as DSLR and mirrorless models. The high-frequency displacements are measured by harmonically blinking the lights at specified frequencies. This harmonic blinking of the lights modulates the intensity changes of the filmed scene, and the camera's image acquisition integrates over time, thereby producing full-field Fourier coefficients of the filmed structure's displacements.
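The blinking-light principle can be checked numerically: illumination modulated at the analysis frequency turns a long exposure into an integral that picks out the Fourier coefficient of the vibration at that frequency. The frequency, amplitude, and phase below are made-up illustration values, and the continuous exposure is approximated by a fine Riemann sum.

```python
import math

f = 120.0                    # blink / analysis frequency in Hz (assumed)
A, phi = 0.3, 0.8            # hypothetical vibration amplitude and phase
T = 1.0                      # exposure time: an integer number of periods
n = 200_000                  # integration steps approximating the exposure
dt = T / n

c_cos = c_sin = 0.0
for k in range(n):
    t = k * dt
    s = A * math.sin(2 * math.pi * f * t + phi)       # vibration signal
    c_cos += s * math.cos(2 * math.pi * f * t) * dt   # cosine-blinking exposure
    c_sin += s * math.sin(2 * math.pi * f * t) * dt   # sine-blinking exposure

amp = 2 * math.hypot(c_cos, c_sin) / T   # recovers the amplitude A
phase = math.atan2(c_cos, c_sin)         # recovers the phase phi
```

Two exposures with quadrature blinking phases are needed per frequency, which is why the set-up can scan a spectrum one blink frequency at a time with a slow camera.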
Single-pixel camera with one graphene photodetector.
Li, Gongxin; Wang, Wenxue; Wang, Yuechao; Yang, Wenguang; Liu, Lianqing
2016-01-11
Consumer cameras in the megapixel range are ubiquitous, but their improvement is hindered by the poor performance and high cost of traditional photodetectors. Graphene, a two-dimensional micro-/nano-material, has recently exhibited exceptional properties over traditional materials as a sensing element in a photodetector. However, it is difficult to fabricate a large-scale array of graphene photodetectors to replace the traditional photodetector array. To take full advantage of the unique characteristics of the graphene photodetector, in this study we integrated a graphene photodetector into a single-pixel camera based on compressive sensing. To begin with, we introduce a method called laser scribing for fabricating the graphene. It produces graphene components in arbitrary patterns more quickly than traditional methods and without photoresist contamination. Next, we propose a system for calibrating the optoelectrical properties of micro-/nano-photodetectors based on a digital micromirror device (DMD), which changes the light intensity by controlling the number of individual micromirrors positioned at +12°. The calibration sensitivity is driven by the sum of all micromirrors of the DMD and can be as high as 10⁻⁵ A/W. Finally, the single-pixel camera integrated with one graphene photodetector was used to recover a static image to demonstrate the feasibility of the single-pixel imaging system with the graphene photodetector. A high-resolution image can be recovered with the camera at a sampling rate much lower than the Nyquist rate. This study is the first reported demonstration of a macroscopic camera with a graphene photodetector. The camera has the potential for high-speed and high-resolution imaging at much lower cost than traditional megapixel cameras.
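A minimal sketch of the single-pixel measurement model: DMD masks modulate the scene and a single detector records one number per mask. For clarity this sketch uses a complete set of Hadamard patterns so the scene is recovered by an exact inverse transform; the paper's sub-Nyquist compressive reconstruction would keep only a subset of measurements and replace that last step with an l1-minimization solver, which is omitted here. The 4 × 4 scene is invented for the example.

```python
def hadamard(n):
    # Sylvester construction; n must be a power of two.
    H = [[1]]
    while len(H) < n:
        H = [row * 2 for row in H] + [row + [-v for v in row] for row in H]
    return H

N = 16                                   # a 4x4 "scene" flattened to 16 pixels
scene = [float(i % 5) for i in range(N)]
H = hadamard(N)

def measure(mask):
    # Total light reaching the one-pixel detector through a 0/1 DMD mask.
    return sum(m * s for m, s in zip(mask, scene))

# Each +/-1 Hadamard row is displayed as two complementary 0/1 masks
# (micromirrors at +12 degrees pass light); their difference gives row . scene.
y = []
for row in H:
    pos = [1 if v > 0 else 0 for v in row]
    neg = [1 - p for p in pos]
    y.append(measure(pos) - measure(neg))

# H * H^T = N * I, so the scene is recovered by the inverse transform.
recon = [sum(H[j][i] * y[j] for j in range(N)) / N for i in range(N)]
```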
New opportunities for quality enhancing of images captured by passive THz camera
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.
2014-10-01
As is well known, a passive THz camera allows concealed objects to be seen without contact with a person, and the camera is not dangerous to a person. Obviously, the efficiency of a passive THz camera depends on its temperature resolution. This characteristic determines the camera's detection capabilities for concealed objects: the minimal size of the object, the maximal detection distance, and the image quality. Computer processing of THz images can improve image quality many times over without any additional engineering effort; developing modern computer codes for application to THz images is therefore an urgent problem. Using appropriate new methods, one may expect a temperature resolution that allows a banknote in a person's pocket to be seen without any real contact. Modern algorithms for computer processing of THz images also allow objects inside the human body to be seen via a temperature trace on the human skin. This circumstance substantially enhances the opportunities for applying passive THz cameras to counterterrorism problems. We demonstrate the capabilities achieved at present for detecting both concealed objects and clothing components through computer processing of images captured by passive THz cameras manufactured by various companies. Another important result discussed in the paper is the observation of both THz radiation emitted by an incandescent lamp and an image reflected from a ceramic floor plate. We consider images produced by passive THz cameras manufactured by Microsemi Corp., ThruVision Corp., and Capital Normal University (Beijing, China). All algorithms for computer processing of the THz images considered in this paper were developed by the Russian part of the author list. Keywords: THz wave, passive imaging camera, computer processing, security screening, concealed and forbidden objects, reflected image, hand seeing, banknote seeing, ceramic floorplate, incandescent lamp.
Tracking a Head-Mounted Display in a Room-Sized Environment with Head-Mounted Cameras
1990-04-01
poor resolution and a very limited working volume [Wan90]. OPTOTRAK [Nor88] uses one camera with two dual-axis CCD infrared position sensors. [Nor88] Northern Digital. Trade literature on OPTOTRAK, Northern Digital's three-dimensional optical motion tracking and analysis system. Northern Digital.
Processing Ocean Images to Detect Large Drift Nets
NASA Technical Reports Server (NTRS)
Veenstra, Tim
2009-01-01
A computer program processes the digitized outputs of a set of downward-looking video cameras aboard an aircraft flying over the ocean. The purpose served by this software is to facilitate the detection of large drift nets that have been lost, abandoned, or jettisoned. The development of this software and of the associated imaging hardware is part of a larger effort to develop means of detecting and removing large drift nets before they cause further environmental damage to the ocean and to shores on which they sometimes impinge. The software is capable of near-realtime processing of as many as three video feeds at a rate of 30 frames per second. After a user sets the parameters of an adjustable algorithm, the software analyzes each video stream, detects any anomaly, issues a command to point a high-resolution camera toward the location of the anomaly, and, once the camera has been so aimed, issues a command to trigger the camera shutter. The resulting high-resolution image is digitized, and the resulting data are automatically uploaded to the operator s computer for analysis.
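The pipeline the software performs (analyze each stream, detect an anomaly, aim the high-resolution camera, trigger its shutter) can be sketched with simple frame differencing. Everything below, including the threshold detector and centroid-based aiming, is an illustrative assumption, not the actual flight software:

```python
def detect_anomaly(prev_frame, curr_frame, threshold):
    """Flag pixels whose brightness changed by more than `threshold`
    between consecutive frames; return the centroid (row, col) of the
    changed region, or None. A real detector would be far more robust."""
    hits = [(r, c)
            for r, row in enumerate(curr_frame)
            for c, v in enumerate(row)
            if abs(v - prev_frame[r][c]) > threshold]
    if not hits:
        return None
    n = len(hits)
    return sum(r for r, _ in hits) / n, sum(c for _, c in hits) / n

def process_stream(frames, threshold, aim, trigger):
    """Skeleton of the loop: on each anomaly, point the high-resolution
    camera at the centroid, then fire its shutter."""
    for prev, curr in zip(frames, frames[1:]):
        target = detect_anomaly(prev, curr, threshold)
        if target is not None:
            aim(target)
            trigger()
```

`aim` and `trigger` stand in for the commands the real system issues to the pan/tilt mount and camera shutter.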
Geomorphologic mapping of the lunar crater Tycho and its impact melt deposits
NASA Astrophysics Data System (ADS)
Krüger, T.; van der Bogert, C. H.; Hiesinger, H.
2016-07-01
Using SELENE/Kaguya Terrain Camera and Lunar Reconnaissance Orbiter Camera (LROC) data, we produced a new, high-resolution (10 m/pixel), geomorphological and impact melt distribution map for the lunar crater Tycho. The distal ejecta blanket and crater rays were investigated using LROC wide-angle camera (WAC) data (100 m/pixel), while the fine-scale morphologies of individual units were documented using high resolution (∼0.5 m/pixel) LROC narrow-angle camera (NAC) frames. In particular, Tycho shows a large coherent melt sheet on the crater floor, melt pools and flows along the terraced walls, and melt pools on the continuous ejecta blanket. The crater floor of Tycho exhibits three distinct units, distinguishable by their elevation and hummocky surface morphology. The distribution of impact melt pools and ejecta, as well as topographic asymmetries, support the formation of Tycho as an oblique impact from the W-SW. The asymmetric ejecta blanket, significantly reduced melt emplacement uprange, and the depressed uprange crater rim at Tycho suggest an impact angle of ∼25-45°.
Gustafson, J. Olaf; Bell, James F.; Gaddis, Lisa R.; Hawke, B. Ray; Giguere, Thomas A.
2012-01-01
We used a Lunar Reconnaissance Orbiter Camera (LROC) global monochrome Wide-angle Camera (WAC) mosaic to conduct a survey of the Moon to search for previously unidentified pyroclastic deposits. Promising locations were examined in detail using LROC multispectral WAC mosaics, high-resolution LROC Narrow Angle Camera (NAC) images, and Clementine multispectral (ultraviolet-visible or UVVIS) data. Out of 47 potential deposits chosen for closer examination, 12 were selected as probable newly identified pyroclastic deposits. Potential pyroclastic deposits were generally found in settings similar to previously identified deposits, including areas within or near mare deposits adjacent to highlands, within floor-fractured craters, and along fissures in mare deposits. However, a significant new finding is the discovery of localized pyroclastic deposits within floor-fractured craters Anderson E and F on the lunar farside, isolated from other known similar deposits. Our search confirms that most major regional and localized low-albedo pyroclastic deposits have been identified on the Moon down to ~100 m/pix resolution, and that additional newly identified deposits are likely to be either isolated small deposits or additional portions of discontinuous, patchy deposits.
The Subject Headings of the Morris Swett Library, USAFAS. Revised.
1980-05-15
Royal Armoured Corps. x Armored force. Armored troops. Armored units. Mechanized force. Mechanized units. Mechanized warfare. Tank companies. Tank... e.g., U. S. Army - Physical training. CAMERA MOUNTS. CAMERAS, AERIAL. CAMOUFLAGE. (U 166.3h) x Air arm - Camouflage. - Bibliography. - Drape
Phenology cameras observing boreal ecosystems of Finland
NASA Astrophysics Data System (ADS)
Peltoniemi, Mikko; Böttcher, Kristin; Aurela, Mika; Kolari, Pasi; Tanis, Cemal Melih; Linkosalmi, Maiju; Loehr, John; Metsämäki, Sari; Nadir Arslan, Ali
2016-04-01
Cameras have become useful tools for monitoring the seasonality of ecosystems. Low-cost cameras facilitate validation of other measurements and allow key ecological features and moments to be extracted from image time series. We installed a network of phenology cameras at selected ecosystem research sites in Finland. Cameras were installed above, at the level of, and/or below the canopies. The current network hosts cameras taking time-lapse images in coniferous and deciduous forests as well as at open wetlands, offering possibilities to monitor a variety of phenological and time-associated events and elements. In this poster, we present our camera network and give examples of the use of image series for research. We show results on the stability of camera-derived color signals and, based on these, discuss the applicability of cameras for monitoring time-dependent phenomena. We also present comparisons between camera-derived color signal time series and daily satellite-derived time series (NDVI, NDWI, and fractional snow cover) from the Moderate Resolution Imaging Spectroradiometer (MODIS) at selected spruce and pine forests and in a wetland. We discuss the applicability of cameras for supporting satellite-derived phenological observations, considering the ability of cameras to monitor both above- and below-canopy phenology and snow.
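A common camera-derived color signal in phenology networks is the green chromatic coordinate (GCC). The poster does not name its exact signal, so the GCC below is a representative assumption:

```python
def green_chromatic_coordinate(r, g, b):
    """GCC = G / (R + G + B), a widely used camera greenness index;
    a stand-in for whatever color signal the network actually tracks."""
    total = r + g + b
    return g / total if total else 0.0

def roi_gcc(pixels):
    """Mean GCC over a region of interest given as (R, G, B) tuples --
    one point of a phenology time series."""
    values = [green_chromatic_coordinate(r, g, b) for r, g, b in pixels]
    return sum(values) / len(values)
```

Computing `roi_gcc` on a fixed canopy region of each daily image yields the time series that would then be compared against MODIS NDVI or NDWI.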
NASA Astrophysics Data System (ADS)
Nuster, Robert; Wurzinger, Gerhild; Paltauf, Guenther
2017-03-01
CCD camera based optical ultrasound detection is a promising alternative approach for high resolution 3D photoacoustic imaging (PAI). To fully exploit its potential and to achieve an image resolution <50 μm, it is necessary to incorporate variations of the speed of sound (SOS) into the image reconstruction algorithm. Hence, the proposed work presents the idea and a first implementation of adding speed-of-sound imaging to a previously developed camera-based PAI setup. The current setup provides SOS maps with a spatial resolution of 2 mm and an accuracy of the obtained absolute SOS values of about 1%. The proposed dual-modality setup has the potential to provide highly resolved and perfectly co-registered 3D photoacoustic and SOS images.
Space telescope scientific instruments
NASA Technical Reports Server (NTRS)
Leckrone, D. S.
1979-01-01
The paper describes the Space Telescope (ST) observatory, the design concepts of the five scientific instruments which will conduct the initial observatory observations, and summarizes their astronomical capabilities. The instruments are the wide-field and planetary camera (WFPC) which will receive the highest quality images, the faint-object camera (FOC) which will penetrate to the faintest limiting magnitudes and achieve the finest angular resolution possible, and the faint-object spectrograph (FOS), which will perform photon noise-limited spectroscopy and spectropolarimetry on objects substantially fainter than those accessible to ground-based spectrographs. In addition, the high resolution spectrograph (HRS) will provide higher spectral resolution with greater photometric accuracy than previously possible in ultraviolet astronomical spectroscopy, and the high-speed photometer will achieve precise time-resolved photometric observations of rapidly varying astronomical sources on short time scales.
MSE spectrograph optical design: a novel pupil slicing technique
NASA Astrophysics Data System (ADS)
Spanò, P.
2014-07-01
The Maunakea Spectroscopic Explorer shall be mainly devoted to performing deep, wide-field spectroscopic surveys at spectral resolutions from ~2000 to ~20000, at visible and near-infrared wavelengths. Simultaneous spectral coverage at low resolution is required, while at high resolution only selected windows can be covered. Moreover, very high multiplexing (3200 objects) must be obtained at low resolution; at higher resolutions a decreased number of objects (~800) can be observed. To meet these demanding requirements, a fiber-fed multi-object spectrograph concept has been designed by pupil-slicing the collimated beam, followed by multiple dispersive and camera optics. Different resolution modes are obtained by introducing anamorphic lenslets in front of the fiber arrays. The spectrograph is able to switch between three resolution modes (2000, 6500, 20000) by removing the anamorphic lenses and exchanging gratings. Camera lenses are fixed in place to increase stability. To enhance throughput, VPH first-order gratings have been preferred over echelle gratings. Moreover, throughput is kept high over all wavelength ranges by splitting the light into multiple arms with dichroic beamsplitters and optimizing the efficiency of each channel through proper selection of glass materials, coatings, and grating parameters.
Toward a digital camera to rival the human eye
NASA Astrophysics Data System (ADS)
Skorka, Orit; Joseph, Dileepan
2011-07-01
All things considered, electronic imaging systems do not rival the human visual system despite notable progress over 40 years since the invention of the CCD. This work presents a method that allows design engineers to evaluate the performance gap between a digital camera and the human eye. The method identifies limiting factors of the electronic systems by benchmarking against the human system. It considers power consumption, visual field, spatial resolution, temporal resolution, and properties related to signal and noise power. A figure of merit is defined as the performance gap of the weakest parameter. Experimental work done with observers and cadavers is reviewed to assess the parameters of the human eye, and assessment techniques are also covered for digital cameras. The method is applied to 24 modern image sensors of various types, where an ideal lens is assumed to complete a digital camera. Results indicate that dynamic range and dark limit are the most limiting factors. The substantial functional gap, from 1.6 to 4.5 orders of magnitude, between the human eye and digital cameras may arise from architectural differences between the human retina, arranged in a multiple-layer structure, and image sensors, mostly fabricated in planar technologies. Functionality of image sensors may be significantly improved by exploiting technologies that allow vertical stacking of active tiers.
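The paper's figure of merit, the performance gap of the weakest parameter, can be sketched as the largest per-parameter shortfall in orders of magnitude. The parameter names and values below are hypothetical, and the sketch assumes every parameter is expressed so that "higher is better":

```python
import math

def performance_gaps(camera, eye):
    """Orders-of-magnitude shortfall per parameter, clipped at zero when
    the camera already matches or beats the eye. Both arguments are dicts
    keyed by parameter name; all values assumed 'higher is better'."""
    return {k: max(0.0, math.log10(eye[k] / camera[k])) for k in eye}

def figure_of_merit(camera, eye):
    """The paper's idea: the overall gap is set by the weakest parameter."""
    return max(performance_gaps(camera, eye).values())
```

With made-up numbers where the camera trails the eye by four decades in dynamic range but matches it in signal-to-noise, the figure of merit is 4.0 orders of magnitude, inside the 1.6 to 4.5 range the paper reports.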
Slit-Slat Collimator Equipped Gamma Camera for Whole-Mouse SPECT-CT Imaging
NASA Astrophysics Data System (ADS)
Cao, Liji; Peter, Jörg
2012-06-01
A slit-slat collimator is developed for a gamma camera intended for small-animal imaging (mice). The tungsten housing of a roof-shaped collimator forms a slit opening, and the slats are made of lead foils separated by sparse polyurethane material. Alignment of the collimator with the camera's pixelated crystal is performed by adjusting a micrometer screw while monitoring a Co-57 point source for maximum signal intensity. For SPECT, the collimator forms a cylindrical field-of-view enabling whole mouse imaging with transaxial magnification and constant on-axis sensitivity over the entire axial direction. As the gamma camera is part of a multimodal imaging system incorporating also x-ray CT, five parameters corresponding to the geometric displacements of the collimator as well as to the mechanical co-alignment between the gamma camera and the CT subsystem are estimated by means of bimodal calibration sources. To illustrate the performance of the slit-slat collimator and to compare its performance to a single pinhole collimator, a Derenzo phantom study is performed. Transaxial resolution along the entire long axis is comparable to a pinhole collimator of same pinhole diameter. Axial resolution of the slit-slat collimator is comparable to that of a parallel beam collimator. Additionally, data from an in-vivo mouse study are presented.
A compact 16-module camera using 64-pixel CsI(Tl)/Si p-i-n photodiode imaging modules
NASA Astrophysics Data System (ADS)
Choong, W.-S.; Gruber, G. J.; Moses, W. W.; Derenzo, S. E.; Holland, S. E.; Pedrali-Noy, M.; Krieger, B.; Mandelli, E.; Meddeler, G.; Wang, N. W.; Witt, E. K.
2002-10-01
We present a compact, configurable scintillation camera employing a maximum of 16 individual 64-pixel imaging modules, resulting in a 1024-pixel camera covering an area of 9.6 cm × 9.6 cm. The 64-pixel imaging module consists of optically isolated 3 mm × 3 mm × 5 mm CsI(Tl) crystals coupled to a custom array of Si p-i-n photodiodes read out by a custom integrated circuit (IC). Each imaging module plugs into a readout motherboard that controls the modules and interfaces with a data acquisition card inside a computer. For a given event, the motherboard employs a custom winner-take-all IC to identify the module with the largest analog output and to enable the output address bits of the corresponding module's readout IC. These address bits identify the "winner" pixel within the "winner" module. The peak of the largest analog signal is found and held using a peak detect circuit, after which it is acquired by an analog-to-digital converter on the data acquisition card. The camera is currently operated with four imaging modules in order to characterize its performance. At room temperature, the camera demonstrates an average energy resolution of 13.4% full-width at half-maximum (FWHM) for the 140-keV emissions of 99mTc. The system spatial resolution is measured using a capillary tube with an inner diameter of 0.7 mm, located 10 cm from the face of the collimator. Images of the line source in air exhibit average system spatial resolutions of 8.7- and 11.2-mm FWHM when using an all-purpose and a high-sensitivity parallel hexagonal-hole collimator, respectively. These values do not change significantly when an acrylic scattering block is placed between the line source and the camera.
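The two-stage winner-take-all readout described above can be mimicked in software: pick the module with the largest analog output, then the winning pixel inside that module. A minimal sketch, with a list of lists standing in for the per-module analog outputs:

```python
def winner_take_all(module_outputs):
    """Software analogue of the camera's readout chain: first select the
    imaging module whose largest analog sample wins, then the 'winner'
    pixel within that module. Returns (module index, pixel index, value)."""
    mod = max(range(len(module_outputs)),
              key=lambda i: max(module_outputs[i]))
    pix = max(range(len(module_outputs[mod])),
              key=lambda p: module_outputs[mod][p])
    return mod, pix, module_outputs[mod][pix]
```

In the hardware, the winning value is then held by the peak-detect circuit and digitized; here it is simply returned.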
Sensors for 3D Imaging: Metric Evaluation and Calibration of a CCD/CMOS Time-of-Flight Camera.
Chiabrando, Filiberto; Chiabrando, Roberto; Piatti, Dario; Rinaudo, Fulvio
2009-01-01
3D imaging with Time-of-Flight (ToF) cameras is a promising recent technique which allows 3D point clouds to be acquired at video frame rates. However, the distance measurements of these devices are often affected by systematic errors which decrease the quality of the acquired data. In order to evaluate these errors, some experimental tests on a CCD/CMOS ToF camera sensor, the SwissRanger (SR)-4000 camera, were performed and are reported in this paper. Two main aspects are treated. The first is the calibration of the distance measurements of the SR-4000 camera, which covers the evaluation of the camera warm-up time, the evaluation of distance measurement errors, and a study of how the camera's orientation with respect to the observed object influences the distance measurements. The second concerns the photogrammetric calibration of the amplitude images delivered by the camera, using a purpose-built multi-resolution field of high-contrast targets.
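The distance-error evaluation mentioned for the SR-4000 can be sketched as separating the error of repeated measurements against a known reference distance into a mean (systematic) and a standard-deviation (random) component; the numbers in the test are made up:

```python
def distance_error_stats(measured, reference):
    """Mean (systematic) and standard deviation (random) components of
    the distance error of repeated ToF readings of a target at a known
    reference distance. Units are whatever `measured` is expressed in."""
    errors = [m - reference for m in measured]
    mean = sum(errors) / len(errors)
    var = sum((e - mean) ** 2 for e in errors) / len(errors)
    return mean, var ** 0.5
```

Tracking the mean component over time after power-on is one simple way to characterize the warm-up period the paper evaluates.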
NASA Technical Reports Server (NTRS)
Ivanov, Anton B.
2003-01-01
The Mars Orbiter Camera (MOC) has been operating on board the Mars Global Surveyor (MGS) spacecraft since 1998. It consists of three cameras: Red and Blue Wide Angle cameras (FOV = 140 deg.) and a Narrow Angle camera (FOV = 0.44 deg.). The Wide Angle cameras provide surface resolution down to 230 m/pixel, and the Narrow Angle camera down to 1.5 m/pixel. This work is a continuation of a project we have reported on previously. Since then we have refined and improved our stereo correlation algorithm and have processed many more stereo pairs. We will discuss the results of our analysis of stereo pairs located in the Mars Exploration Rover (MER) landing sites and address the feasibility of recovering topography from stereo pairs (especially in the polar regions) taken during the MGS 'Relay-16' mode.
Evaluation of a "CMOS" Imager for Shadow Mask Hard X-ray Telescope
NASA Technical Reports Server (NTRS)
Desai, Upendra D.; Orwig, Larry E.; Oergerle, William R. (Technical Monitor)
2002-01-01
We have developed a hard x-ray coder that provides high angular resolution imaging capability using a coarse position-sensitive image plane detector. The coder consists of two Fresnel zone plates (FZPs). The two FZPs generate Moiré fringe patterns whose frequency and orientation define the arrival direction of a beam with respect to the telescope axis. The image plane detector needs to resolve the Moiré fringe pattern; pixelated detectors can be used for this purpose. The recently available CMOS imager could provide a very low power, large-area image plane detector for hard x-rays. We have examined a unit made by Rad-Icon Imaging Corp. The Shadow-Box 1024 x-ray camera is a high-resolution 1024×1024-pixel detector with a 50×50 mm area. It is a very low power, stand-alone camera. We present some preliminary results of our evaluation of this camera.
Measuring high-resolution sky luminance distributions with a CCD camera.
Tohsing, Korntip; Schrempf, Michael; Riechelmann, Stefan; Schilke, Holger; Seckmeyer, Gunther
2013-03-10
We describe how sky luminance can be derived from a newly developed hemispherical sky imager (HSI) system. The system contains a commercial compact charge coupled device (CCD) camera equipped with a fish-eye lens. The projection of the camera system has been found to be nearly equidistant. The luminance from the high dynamic range images has been calculated and then validated against luminance data measured by a CCD array spectroradiometer. The deviation between both datasets is less than 10% for cloudless and completely overcast skies, and differs by no more than 20% for all sky conditions. The global illuminance derived from the HSI pictures deviates by less than 5% and 20% under cloudless and cloudy skies, respectively, for solar zenith angles less than 80°. This system is therefore capable of measuring sky luminance with high spatial resolution (more than a million pixels) and high temporal resolution (every 20 s).
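The nearly equidistant projection found for this camera means radial image distance grows linearly with zenith angle (r = k·θ), which makes the pixel-to-sky mapping trivial. A sketch, with a hypothetical calibration constant:

```python
import math

def zenith_angle(px, py, cx, cy, k):
    """Equidistant ('f-theta') fisheye model: the radial distance of a
    pixel from the optical centre (cx, cy) is r = k * theta, so the
    zenith angle of the sky direction it sees is theta = r / k."""
    return math.hypot(px - cx, py - cy) / k

def azimuth(px, py, cx, cy):
    """Azimuth of the sky direction seen by pixel (px, py), in [0, 2*pi)."""
    return math.atan2(py - cy, px - cx) % (2 * math.pi)
```

With a hypothetical calibration in which the 90° horizon falls 500 px from the image centre, k = 500 / (π/2); each pixel's luminance can then be binned by (zenith, azimuth) to build the sky luminance distribution.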
In-vessel visible inspection system on KSTAR
NASA Astrophysics Data System (ADS)
Chung, Jinil; Seo, D. C.
2008-08-01
To monitor the global formation of the initial plasma and damage to the internal structures of the vacuum vessel, an in-vessel visible inspection system has been installed and operated on the Korea Superconducting Tokamak Advanced Research (KSTAR) device. It consists of four inspection illuminators and two visible/H-alpha TV cameras. Each illuminator uses four 150 W metal-halide lamps with separate lamp controllers, and programmable progressive-scan charge-coupled device cameras, with 1004×1004 resolution at 48 frames/s and 640×480 resolution at 210 frames/s, are used to capture images. In order to provide vessel inspection capability under any operating condition, the lamps and cameras are fully controlled from the main control room and protected by shutters from deposits during plasma operation. In this paper, we describe the design and operation results of the visible inspection system, with images of KSTAR Ohmic discharges from the first plasma campaign.
Deflection Measurements of a Thermally Simulated Nuclear Core Using a High-Resolution CCD-Camera
NASA Technical Reports Server (NTRS)
Stanojev, B. J.; Houts, M.
2004-01-01
Space fission systems under consideration for near-term missions all use compact, fast-spectrum reactor cores. Reactor dimensional change with increasing temperature, which affects neutron leakage, is the dominant source of reactivity feedback in these systems. Accurately measuring core dimensional changes during realistic non-nuclear testing is therefore necessary for predicting the equivalent nuclear behavior of the system. This paper discusses one key technique being evaluated for measuring such changes. The proposed technique is to use a Charge-Coupled Device (CCD) sensor to obtain deformation readings of an electrically heated, prototypic reactor core geometry. This paper introduces a technique by which a single high spatial resolution CCD camera is used to measure core deformation in Real-Time (RT). Initial system checkout results are presented, along with a discussion of how additional cameras could be used to achieve a three-dimensional deformation profile of the core during testing.
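Sub-pixel deformation tracking with a single CCD camera is commonly done by following the intensity-weighted centroid of a target feature between frames. The sketch below illustrates that generic idea, not the authors' specific measurement algorithm:

```python
def centroid(img):
    """Intensity-weighted centroid (x, y) of a grayscale image given as
    a list of rows; the weighting is what yields sub-pixel precision."""
    total = sx = sy = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            total += v
            sx += x * v
            sy += y * v
    return sx / total, sy / total

def deflection(before, after):
    """Apparent (dx, dy) motion, in pixels, of a fiducial spot between a
    reference frame and a frame taken at elevated core temperature."""
    x0, y0 = centroid(before)
    x1, y1 = centroid(after)
    return x1 - x0, y1 - y0
```

Multiplying the pixel displacement by the calibrated object-space scale (mm per pixel) would convert it to a physical core deformation.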
NASA Astrophysics Data System (ADS)
Aishwariya, A.; Pallavi Sudhir, Gulavani; Garg, Nemesa; Karthikeyan, B.
2017-11-01
A body-worn camera is a small video camera worn on the body, typically used by police officers to record arrests and evidence from crime scenes. It helps prevent and resolve complaints brought by members of the public, and it strengthens police transparency, performance, and accountability. The main parameters of this type of system are video format, resolution, frame rate, and audio quality. This system records video in .mp4 format at 1080p resolution and 30 frames per second. Another important aspect in designing this system is the amount of power it requires, as battery management becomes very critical. The main design challenges are the size of the video; audio for the video; combining audio and video and saving them in .mp4 format; a battery size sufficient for 8 hours of continuous recording; and security. For prototyping, this system is implemented using a Raspberry Pi Model B.
Investigating plasma viscosity with fast framing photography in the ZaP-HD Flow Z-Pinch experiment
NASA Astrophysics Data System (ADS)
Weed, Jonathan Robert
The ZaP-HD Flow Z-Pinch experiment investigates the stabilizing effect of sheared axial flows while scaling toward a high-energy-density laboratory plasma (HEDLP > 100 GPa). Stabilizing flows may persist until viscous forces dissipate a sheared flow profile. Plasma viscosity is investigated by measuring scale lengths in turbulence intentionally introduced in the plasma flow. A boron nitride turbulence-tripping probe excites small scale length turbulence in the plasma, and fast framing optical cameras are used to study time-evolved turbulent structures and viscous dissipation. A Hadland Imacon 790 fast framing camera is modified for digital image capture, but features insufficient resolution to study turbulent structures. A Shimadzu HPV-X camera captures the evolution of turbulent structures with great spatial and temporal resolution, but is unable to resolve the anticipated Kolmogorov scale in ZaP-HD as predicted by a simplified pinch model.
NASA Astrophysics Data System (ADS)
McMackin, Lenore; Herman, Matthew A.; Weston, Tyler
2016-02-01
We present the design of a multi-spectral imager built using the architecture of the single-pixel camera. The architecture is enabled by the novel sampling theory of compressive sensing implemented optically using the Texas Instruments DLP™ micro-mirror array. The array not only implements spatial modulation necessary for compressive imaging but also provides unique diffractive spectral features that result in a multi-spectral, high-spatial resolution imager design. The new camera design provides multi-spectral imagery in a wavelength range that extends from the visible to the shortwave infrared without reduction in spatial resolution. In addition to the compressive imaging spectrometer design, we present a diffractive model of the architecture that allows us to predict a variety of detailed functional spatial and spectral design features. We present modeling results, architectural design and experimental results that prove the concept.
Quantitative Imaging with a Mobile Phone Microscope
Skandarajah, Arunan; Reber, Clay D.; Switz, Neil A.; Fletcher, Daniel A.
2014-01-01
Use of optical imaging for medical and scientific applications requires accurate quantification of features such as object size, color, and brightness. High pixel density cameras available on modern mobile phones have made photography simple and convenient for consumer applications; however, the camera hardware and software that enable this simplicity can present a barrier to accurate quantification of image data. This issue is exacerbated by automated settings, proprietary image processing algorithms, rapid phone evolution, and the diversity of manufacturers. If mobile phone cameras are to live up to their potential to increase access to healthcare in low-resource settings, limitations of mobile phone–based imaging must be fully understood and addressed with procedures that minimize their effects on image quantification. Here we focus on microscopic optical imaging using a custom mobile phone microscope that is compatible with phones from multiple manufacturers. We demonstrate that quantitative microscopy with micron-scale spatial resolution can be carried out with multiple phones and that image linearity, distortion, and color can be corrected as needed. Using all versions of the iPhone and a selection of Android phones released between 2007 and 2012, we show that phones with greater than 5 MP are capable of nearly diffraction-limited resolution over a broad range of magnifications, including those relevant for single cell imaging. We find that the automatic focus, exposure, and color gain standard on mobile phones can degrade image resolution and reduce accuracy of color capture if uncorrected, and we devise procedures to avoid these barriers to quantitative imaging. By accommodating the differences between mobile phone cameras and scientific cameras, mobile phone microscopes can be reliably used to increase access to quantitative imaging for a variety of medical and scientific applications. PMID:24824072
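The claim that >5 MP phones reach nearly diffraction-limited resolution can be illustrated with a Nyquist check: the pixel pitch must be at most half the Rayleigh resolution element projected onto the sensor. The wavelength, numerical aperture, and magnification values below are illustrative assumptions, not the paper's optics:

```python
def sampling_meets_rayleigh(pixel_pitch_um, magnification,
                            wavelength_um=0.55, numerical_aperture=0.25):
    """Nyquist check for a microscope sensor: True if the pixel pitch is
    no larger than half the Rayleigh resolution element as projected onto
    the sensor. Default wavelength and NA are illustrative assumptions."""
    rayleigh_um = 0.61 * wavelength_um / numerical_aperture  # object space
    projected_um = rayleigh_um * magnification               # at the sensor
    return pixel_pitch_um <= projected_um / 2
```

Under these assumed optics, a 1.4 μm-pitch phone sensor at 10× magnification passes the check, while an 8 μm pitch at the same magnification does not.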
640x480 PtSi Stirling-cooled camera system
NASA Astrophysics Data System (ADS)
Villani, Thomas S.; Esposito, Benjamin J.; Davis, Timothy J.; Coyle, Peter J.; Feder, Howard L.; Gilmartin, Harvey R.; Levine, Peter A.; Sauer, Donald J.; Shallcross, Frank V.; Demers, P. L.; Smalser, P. J.; Tower, John R.
1992-09-01
A Stirling-cooled 3-5 micron camera system has been developed. The camera employs a monolithic 640 × 480 PtSi-MOS focal plane array. The camera system achieves NEDT = 0.10 K at a 30 Hz frame rate with f/1.5 optics (300 K background). At a spatial frequency of 0.02 cycles/mrad, the vertical and horizontal Minimum Resolvable Temperatures are in the range of MRT = 0.03 K (f/1.5 optics, 300 K background). The MOS focal plane array achieves a resolution of 480 TV lines per picture height, independent of background level and position within the frame.
Evaluation of the MSFC facsimile camera system as a tool for extraterrestrial geologic exploration
NASA Technical Reports Server (NTRS)
Wolfe, E. W.; Alderman, J. D.
1971-01-01
Utility of the Marshall Space Flight Center (MSFC) facsimile camera system for extraterrestrial geologic exploration was investigated during the spring of 1971 near Merriam Crater in northern Arizona. Although the system, with its present hard-wired recorder, operates erratically, the imagery showed that the camera could be developed into a prime imaging tool for automated missions. Its utility would be enhanced by the development of computer techniques that use digital camera output to construct topographic maps, and it needs increased resolution for examining near-field details. A supplementary imaging system may be necessary for hand specimen examination at low magnification.
The mosaics of Mars: As seen by the Viking Lander cameras
NASA Technical Reports Server (NTRS)
Levinthal, E. C.; Jones, K. L.
1980-01-01
The mosaics and derivative products produced from many individual high resolution images acquired by the Viking Lander Camera Systems are described: a morning and afternoon mosaic for both cameras at the Lander 1 Chryse Planitia site, and morning, noon, and afternoon camera-pair mosaics at Utopia Planitia, the Lander 2 site. The derived products include special geometric projections of the mosaic data sets: polar stereographic (donut), stereoscopic, and orthographic. Contour maps and vertical profiles of the topography were overlaid on the mosaics from which they were derived. Sets of stereo pairs were extracted and enlarged from stereoscopic projections of the mosaics.
Astronaut Kathryn Thornton on HST photographed by Electronic Still Camera
1993-12-05
S61-E-011 (5 Dec 1993) --- This view of astronaut Kathryn C. Thornton working on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and down linked to ground controllers soon afterward. Thornton, anchored to the end of the Remote Manipulator System (RMS) arm, is installing the +V2 Solar Array Panel as a replacement for the original one removed earlier. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
VizieR Online Data Catalog: The Hα rotation curve of M33 (Kam+, 2015)
NASA Astrophysics Data System (ADS)
Kam, Z. S.; Carignan, C.; Chemin, L.; Amram, P.; Epinat, B.
2017-11-01
This study presents Fabry-Perot mapping of M33 obtained at the Observatoire du mont Megantic (OMM). Relano et al. (2013, J/A+A/552/A140) have studied the spectral energy distribution of the H II regions of M33, and the star formation rate and star formation efficiency have been investigated by Gardan et al. (2007A&A...473...91G) and Kramer et al. (2011EAS....52..107K). More than 1000 H II regions can be resolved by the Hα observations; Courtes et al. (1987A&A...174...28C) gave a catalogue of 748 H II regions. Observing those regions allows us to derive the ionized gas (optical) kinematics of M33. The main goals of this paper are the determination of the M33 kinematics at a spatial resolution ≲3 arcsec (~12 pc) using the Hα velocity field and the derivation of an accurate rotation curve (RC) in the inner parts. The observations took place at the 1.6-m telescope of the OMM, Quebec, in 2012 September. A scanning FP etalon interferometer was used during the observations with the IXON888 device, a commercial Andor EM-CCD camera of 1024×1024 pixels. (1 data file).
Selby, R D; Gage, S H; Whalon, M E
2014-04-01
Incorporating camera systems into insect traps potentially benefits insect phenology modeling, nonlethal insect monitoring, and research into the automated identification of traps counts. Cameras originally for monitoring mammals were instead adapted to monitor the entrance to pyramid traps designed to capture the plum curculio, Conotrachelus nenuphar (Herbst) (Coleoptera: Curculionidae). Using released curculios, two new trap designs (v.I and v.II) were field-tested alongside conventional pyramid traps at one site in autumn 2010 and at four sites in autumn 2012. The traps were evaluated on the basis of battery power, ease-of-maintenance, adaptability, required-user-skills, cost (including labor), and accuracy-of-results. The v.II design fully surpassed expectations, except that some trapped curculios were not photographed. In 2012, 13 of the 24 traps recorded every curculio entering the traps during the 18-d study period, and in traps where some curculios were not photographed, over 90% of the omissions could be explained by component failure or external interference with the motion sensor. Significantly more curculios entered the camera traps between 1800 and 0000 hours. When compared with conventional pyramid traps, the v.I traps collected a similar number of curculios. Two observed but not significant trends were that the v.I traps collected twice as many plum curculios as the v.II traps, while at the same time the v.II traps collected more than twice as many photos per plum curculio as the v.I traps. The research demonstrates that low-cost, precise monitoring of field insect populations is feasible without requiring extensive technical expertise.
Chen, Brian R; Poon, Emily; Alam, Murad
2017-08-01
Photographs are an essential tool for the documentation and sharing of findings in dermatologic surgery, and various camera types are available. To evaluate the currently available camera types in view of the special functional needs of procedural dermatologists. Mobile phone, point-and-shoot, digital single-lens reflex (DSLR), digital medium format, and 3-dimensional cameras were compared in terms of their usefulness for dermatologic surgeons. For each camera type, the image quality, as well as the other practical benefits and limitations, was evaluated with reference to a set of ideal camera characteristics. Based on these assessments, recommendations were made regarding the specific clinical circumstances in which each camera type would likely be most useful. Mobile photography may be adequate when ease of use, availability, and accessibility are prioritized. Point-and-shoot cameras and DSLR cameras provide sufficient resolution for a range of clinical circumstances, while providing the added benefit of portability. Digital medium format cameras offer the highest image quality, with accurate color rendition and greater color depth. Three-dimensional imaging may be optimal for the definition of skin contour. The selection of an optimal camera depends on the context in which it will be used.
NASA Astrophysics Data System (ADS)
Schultz, C. J.; Lang, T. J.; Leake, S.; Runco, M.; Blakeslee, R. J.
2017-12-01
Video and still frame images from cameras aboard the International Space Station (ISS) are used to inspire, educate, and provide a unique vantage point from low-Earth orbit that is second to none; however, these cameras have overlooked capabilities for contributing to scientific analysis of the Earth and near-space environment. The goal of this project is to study how georeferenced video/images from available ISS camera systems can be useful for scientific analysis, using lightning properties as a demonstration. Camera images from the crew cameras and high-definition video from the Chiba University Meteor Camera were combined with lightning data from the National Lightning Detection Network (NLDN), the ISS Lightning Imaging Sensor (ISS-LIS), the Geostationary Lightning Mapper (GLM) and lightning mapping arrays. These cameras provide significant spatial resolution advantages (~10 times or better) over ISS-LIS and GLM, but with lower temporal resolution. Therefore, they can serve as a complementary analysis tool for studying lightning and thunderstorm processes from space. Lightning sensor data, Visible Infrared Imaging Radiometer Suite (VIIRS)-derived city light maps, and other geographic databases were combined with the ISS attitude and position data to reverse-geolocate each image or frame. An open-source Python toolkit has been developed to assist with this effort. Next, the locations and sizes of all flashes in each frame or image were computed and compared with flash characteristics from all available lightning datasets. This allowed for characterization of cloud features that are below the 4-km and 8-km resolution of ISS-LIS and GLM and that may reduce the light that reaches the ISS-LIS or GLM sensor. In the case of video, consecutive frames were overlaid to determine the rate of change of the light escaping cloud top.
Characterization of the rate of change in geometry, more generally the radius, of light escaping cloud top was integrated with the NLDN, ISS-LIS and GLM to understand how the peak rate of change and the peak area of each flash aligned with each lightning system in time. Flash features like leaders could be inferred from the video frames as well. Testing is being done to see if leader speeds may be accurately calculated under certain circumstances.
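The frame-by-frame analysis described above, tracking the rate of change of the radius of light escaping cloud top, can be sketched as follows. This is a minimal illustration, not the project's actual Python toolkit; the intensity threshold, pixel scale, and function name are assumptions.

```python
import numpy as np

def flash_radius_rate(frames, times, threshold, pixel_km=0.5):
    """Estimate the rate of change of the effective radius of light
    escaping cloud top from consecutive video frames (illustrative
    sketch; threshold and pixel_km are assumed parameters).

    frames: sequence of 2-D grayscale arrays
    times: frame timestamps in seconds
    threshold: intensity above which a pixel counts as lit
    pixel_km: assumed ground sampling distance per pixel, km
    """
    radii = []
    for f in frames:
        area_px = np.count_nonzero(f > threshold)
        area_km2 = area_px * pixel_km ** 2
        # effective radius of an equivalent circular flash footprint
        radii.append(np.sqrt(area_km2 / np.pi))
    # finite-difference rate of change between consecutive frames
    return np.diff(radii) / np.diff(times)
```

Peaks in the returned rate series could then be aligned in time with NLDN, ISS-LIS, and GLM detections, as the abstract describes.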
CMOS Imaging Sensor Technology for Aerial Mapping Cameras
NASA Astrophysics Data System (ADS)
Neumann, Klaus; Welzenbach, Martin; Timm, Martin
2016-06-01
In June 2015 Leica Geosystems launched the first large-format aerial mapping camera using CMOS sensor technology, the Leica DMC III. This paper describes the motivation to change from CCD sensor technology to CMOS for the development of this new aerial mapping camera. In 2002 the first-generation DMC was developed by Z/I Imaging. It was the first large-format digital frame sensor designed for mapping applications. In 2009 Z/I Imaging designed the DMC II, which was the first digital aerial mapping camera using a single ultra-large CCD sensor to avoid stitching of smaller CCDs. The DMC III is now the third generation of large-format frame sensor developed by Z/I Imaging and Leica Geosystems for the DMC camera family. It is an evolution of the DMC II, using the same system design with one large monolithic panchromatic (PAN) sensor and four multispectral camera heads for R, G, B and NIR. For the first time, a 391-megapixel CMOS sensor has been used as the panchromatic sensor, which is an industry record. A range of technical benefits comes with CMOS technology: the dynamic range of the CMOS sensor is approximately twice that of a comparable CCD sensor, and the signal-to-noise ratio is significantly better than with CCDs. Finally, results from the first DMC III customer installations and test flights are presented and compared with other CCD-based aerial sensors.
High speed, real-time, camera bandwidth converter
Bower, Dan E; Bloom, David A; Curry, James R
2014-10-21
Image data from a CMOS sensor with 10-bit resolution is reformatted in real time to allow the data to stream through communications equipment designed to transport data with 8-bit resolution. By reformatting the image data, the 10-bit stream is transmitted in real time, without a frame delay, through the 8-bit communication equipment.
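One illustrative way to fit 10-bit samples into an 8-bit transport is to pack four 10-bit values (40 bits) into exactly five bytes. The abstract does not specify the patented reformatting scheme, so the sketch below is only a plausible stand-in for the idea:

```python
def pack_10bit_to_8bit(samples):
    """Pack 10-bit pixel samples into a stream of 8-bit bytes.
    Four 10-bit samples (40 bits) fit exactly into five bytes.
    Illustrative packing only, not the patented scheme."""
    assert len(samples) % 4 == 0
    out = bytearray()
    for i in range(0, len(samples), 4):
        bits = 0
        for s in samples[i:i + 4]:
            assert 0 <= s < 1024          # must fit in 10 bits
            bits = (bits << 10) | s
        out += bits.to_bytes(5, "big")    # 40 bits -> 5 bytes
    return bytes(out)

def unpack_8bit_to_10bit(data):
    """Inverse: recover the 10-bit samples from the packed bytes."""
    samples = []
    for i in range(0, len(data), 5):
        bits = int.from_bytes(data[i:i + 5], "big")
        for shift in (30, 20, 10, 0):
            samples.append((bits >> shift) & 0x3FF)
    return samples
```

Because the packing is a fixed 4-to-5 byte ratio, it adds no frame buffering, which is consistent with the "without a frame delay" claim.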
Quality controls for gamma cameras and PET cameras: development of a free open-source ImageJ program
NASA Astrophysics Data System (ADS)
Carlier, Thomas; Ferrer, Ludovic; Berruchon, Jean B.; Cuissard, Regis; Martineau, Adeline; Loonis, Pierre; Couturier, Olivier
2005-04-01
Data acquisition and processing for quality control of gamma cameras and Positron Emission Tomography (PET) cameras are commonly performed with dedicated program packages, which run only on the manufacturers' computers and differ from each other depending on the camera company and program version. The aim of this work was to develop a free open-source program (written in the JAVA language) to analyze data for quality control of gamma cameras and PET cameras. The program is based on the free application software ImageJ and can be easily loaded on any computer operating system (OS), and thus on any type of computer in every nuclear medicine department. Based on standard quality control parameters, this program includes (1) for gamma cameras, a rotation center control (extracted from the American Association of Physicists in Medicine, AAPM, norms) and two uniformity controls (extracted from the Institute of Physics and Engineering in Medicine, IPEM, and National Electrical Manufacturers Association, NEMA, norms); and (2) for PET systems, three quality controls recently defined by the French Medical Physicist Society (SFPM), i.e. spatial resolution and uniformity in a reconstructed slice, and scatter fraction. The determination of spatial resolution (from a Point Spread Function, PSF, acquisition) allows computation of the Modulation Transfer Function (MTF) for both camera modalities. All the control functions are included in a toolbox which is a free ImageJ plugin. The program also offers the possibility of saving the uniformity quality control results in HTML format, and a warning can be set to automatically inform users of abnormal results. The architecture of the program allows users to easily add any other specific quality control program.
Finally, this toolkit is an easy and robust tool to perform quality control on gamma cameras and PET cameras based on standard computation parameters; it is free, runs on any type of computer, and will soon be downloadable from the net (http://rsb.info.nih.gov/ij/plugins or http://nucleartoolkit.free.fr).
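The MTF-from-PSF step mentioned above is standard image-quality analysis: the MTF is the normalized magnitude of the Fourier transform of the PSF. A minimal 1-D sketch (not code from the ImageJ plugin itself):

```python
import numpy as np

def mtf_from_psf(psf):
    """Compute the Modulation Transfer Function from a sampled 1-D
    Point Spread Function: the MTF is the magnitude of the Fourier
    transform of the PSF, normalized to 1 at zero spatial frequency."""
    otf = np.fft.fft(psf)      # optical transfer function (complex)
    mtf = np.abs(otf)          # modulus discards the phase term
    return mtf / mtf[0]        # normalize so MTF(0) = 1
```

For a well-sampled Gaussian-like PSF the resulting MTF decreases monotonically with spatial frequency, which is the behavior the quality-control comparison relies on.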
"Stereo Compton cameras" for the 3-D localization of radioisotopes
NASA Astrophysics Data System (ADS)
Takeuchi, K.; Kataoka, J.; Nishiyama, T.; Fujita, T.; Kishimoto, A.; Ohsuka, S.; Nakamura, S.; Adachi, S.; Hirayanagi, M.; Uchiyama, T.; Ishikawa, Y.; Kato, T.
2014-11-01
The Compton camera is a viable and convenient tool used to visualize the distribution of radioactive isotopes that emit gamma rays. After the nuclear disaster in Fukushima in 2011, there is a particularly urgent need to develop "gamma cameras" that can visualize the distribution of such radioisotopes. In response, we propose a portable Compton camera, which comprises 3-D position-sensitive GAGG scintillators coupled with thin monolithic MPPC arrays. The pulse-height ratio of the two MPPC arrays allocated at both ends of the scintillator block determines the depth of interaction (DOI), which dramatically improves the position resolution of the scintillation detectors. We report on the detailed optimization of the detector design, based on Geant4 simulation. The results indicate that detection efficiency reaches up to 0.54%, more than 10 times that of other cameras being tested in Fukushima, along with a moderate angular resolution of 8.1° (FWHM). By applying the triangular surveying method, we also propose a new concept for the stereo measurement of gamma rays using two Compton cameras, thus enabling the 3-D positional measurement of radioactive isotopes for the first time. From simulation data for a single point source, we verified that the source position and distance can typically be determined to within 2 meters' accuracy, and we also confirmed that two or more sources are clearly separated by event selection in simulation data for two point sources.
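The depth-of-interaction estimate from the pulse-height ratio of the two MPPC arrays can be illustrated with a simple linear light-sharing model. The real detector response is nonlinear and must be calibrated per scintillator block, so the function below is only a sketch of the principle:

```python
def depth_of_interaction(ph_top, ph_bottom, length_mm):
    """Estimate the depth of interaction (DOI) in a scintillator block
    read out by MPPC arrays at both ends, from the pulse-height ratio.
    A linear light-sharing model is assumed here for illustration:
    the fraction of light reaching the top array grows linearly with
    the depth of the interaction point measured from the bottom end."""
    ratio = ph_top / (ph_top + ph_bottom)   # light-sharing fraction
    return ratio * length_mm                # depth from the bottom end
```

Resolving the DOI in this way effectively subdivides the block along its length, which is what improves the position resolution of the Compton camera's scatter and absorber planes.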
Film cameras or digital sensors? The challenge ahead for aerial imaging
Light, D.L.
1996-01-01
Cartographic aerial cameras continue to play the key role in producing quality products for the aerial photography business, and specifically for the National Aerial Photography Program (NAPP). One NAPP photograph taken with cameras capable of 39 lp/mm system resolution can contain the equivalent of 432 million pixels at 11 µm spot size, and the cost is less than $75 per photograph to scan and output the pixels on a magnetic storage medium. On the digital side, solid state charge coupled device linear and area arrays can yield quality resolution (7 to 12 µm detector size) and a broader dynamic range. If linear arrays are to compete with film cameras, they will require precise attitude and positioning of the aircraft so that the lines of pixels can be unscrambled and put into a suitable homogeneous scene that is acceptable to an interpreter. Area arrays need to be much larger than currently available to image scenes competitive in size with film cameras. Analysis of the relative advantages and disadvantages of the two systems show that the analog approach is more economical at present. However, as arrays become larger, attitude sensors become more refined, global positioning system coordinate readouts become commonplace, and storage capacity becomes more affordable, the digital camera may emerge as the imaging system for the future. Several technical challenges must be overcome if digital sensors are to advance to where they can support mapping, charting, and geographic information system applications.
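The quoted pixel count can be checked with simple arithmetic, assuming the standard 9 in × 9 in (228.6 mm) aerial film format (an assumption; the abstract does not state the frame size):

```python
# Sanity check of the pixel count quoted for a scanned NAPP frame:
# a 9 in x 9 in aerial film frame digitized at an 11-micrometre spot.
frame_mm = 228.6                      # 9-inch square aerial film format (assumed)
spot_mm = 0.011                       # 11 micrometre scanning spot size
pixels_per_side = frame_mm / spot_mm  # ~20782 pixels per side
total_pixels = pixels_per_side ** 2   # ~432 million pixels, matching the abstract
print(round(pixels_per_side), round(total_pixels / 1e6))
```

Note also that 39 lp/mm implies a Nyquist sampling interval of 1/(2 × 39) mm ≈ 12.8 µm, so the 11 µm spot slightly oversamples the film resolution, which is consistent with the claim that the scan preserves the full system resolution.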
Investigation into the use of photoanthropometry in facial image comparison.
Moreton, Reuben; Morley, Johanna
2011-10-10
Photoanthropometry is a metric-based facial image comparison technique. Measurements of the face are taken from an image using predetermined facial landmarks. Measurements are then converted to proportionality indices (PIs) and compared to PIs from another facial image. Photoanthropometry has been presented as a facial image comparison technique in UK courts for over 15 years. It is generally accepted that extrinsic factors (e.g. orientation of the head, camera angle and distance from the camera) can cause discrepancies in anthropometric measurements of the face from photographs. However, there has been limited empirical research into quantifying the influence of such variables. The aim of this study was to determine the reliability of photoanthropometric measurements between different images of the same individual taken with different angulations of the camera. The study examined the facial measurements of 25 individuals from high-resolution photographs, taken at different horizontal and vertical camera angles in a controlled environment. Results show that the degree of variability in facial measurements of the same individual due to variations in camera angle can be as great as the variability of facial measurements between different individuals. Results suggest that photoanthropometric facial comparison, as it is currently practiced, is unsuitable for elimination purposes. Preliminary investigations into the effects of distance from camera and image resolution in poor-quality images suggest that such images are not an accurate representation of an individual's face; however, further work is required.
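The conversion from landmark measurements to proportionality indices described above amounts to dividing inter-landmark distances by a baseline distance, making the indices scale-free so that images of different sizes can be compared. A minimal sketch, with hypothetical landmark names:

```python
import math

def proportionality_indices(landmarks, pairs, baseline):
    """Convert 2-D facial landmark coordinates into proportionality
    indices (PIs): each inter-landmark distance is divided by a chosen
    baseline distance, so the indices do not depend on image scale.
    Landmark names and the choice of baseline are illustrative."""
    def dist(a, b):
        (x1, y1), (x2, y2) = landmarks[a], landmarks[b]
        return math.hypot(x2 - x1, y2 - y1)
    base = dist(*baseline)
    return {f"{a}-{b}": dist(a, b) / base for a, b in pairs}
```

The study's finding is precisely that these indices shift with camera angle, because out-of-plane rotation forecloses the assumption that the 2-D projected distances scale uniformly.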
Agostini, Denis; Marie, Pierre-Yves; Ben-Haim, Simona; Rouzet, François; Songy, Bernard; Giordano, Alessandro; Gimelli, Alessia; Hyafil, Fabien; Sciagrà, Roberto; Bucerius, Jan; Verberne, Hein J; Slart, Riemer H J A; Lindner, Oliver; Übleis, Christopher; Hacker, Marcus
2016-12-01
The trade-off between resolution and count sensitivity dominates the performance of standard gamma cameras and dictates the need for relatively high doses of radioactivity of the radiopharmaceuticals used in order to limit image acquisition duration. The introduction of cadmium-zinc-telluride (CZT)-based cameras may overcome some of the limitations of conventional gamma cameras. CZT cameras used for the evaluation of myocardial perfusion have been shown to have a higher count sensitivity compared to conventional single photon emission computed tomography (SPECT) techniques. CZT image quality is further improved by the development of a dedicated three-dimensional iterative reconstruction algorithm, based on maximum likelihood expectation maximization (MLEM), which corrects for the loss in spatial resolution due to the line response function of the collimator. All these innovations significantly reduce imaging time and result in lower patient radiation exposure compared with standard SPECT. To guide current and possible future users of the CZT technique for myocardial perfusion imaging, the Cardiovascular Committee of the European Association of Nuclear Medicine, starting from the experience of its members, has decided to examine the current literature regarding procedures and clinical data on CZT cameras. The committee hereby aims (1) to identify the main acquisition protocols; (2) to evaluate the diagnostic and prognostic value of CZT-derived myocardial perfusion; and (3) to determine the impact of CZT on radiation exposure.
Zhao, Qiaole; Schelen, Ben; Schouten, Raymond; van den Oever, Rein; Leenen, René; van Kuijk, Harry; Peters, Inge; Polderdijk, Frank; Bosiers, Jan; Raspe, Marcel; Jalink, Kees; Geert Sander de Jong, Jan; van Geest, Bert; Stoop, Karel; Young, Ian Ted
2012-12-01
We have built an all-solid-state camera that is directly modulated at the pixel level for frequency-domain fluorescence lifetime imaging microscopy (FLIM) measurements. This novel camera eliminates the need for an image intensifier through the use of an application-specific charge coupled device design in a frequency-domain FLIM system. The first stage of evaluation of the camera has been carried out. Camera characteristics such as noise distribution, dark current influence, camera gain, sampling density, sensitivity, linearity of photometric response, and optical transfer function have been studied through experiments. We are able to perform lifetime measurements using our modulated, electron-multiplied fluorescence lifetime imaging microscope (MEM-FLIM) camera for various objects, e.g., fluorescein solution, fixed green fluorescent protein (GFP) cells, and GFP-actin stained live cells. A detailed comparison of a conventional microchannel plate (MCP)-based FLIM system and the MEM-FLIM system is presented. The MEM-FLIM camera shows higher resolution and a better image quality. The MEM-FLIM camera provides a new opportunity for performing frequency-domain FLIM.
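For reference, in frequency-domain FLIM a single-exponential lifetime is recovered from the measured phase shift and demodulation of the emission relative to the excitation. The standard textbook formulas are sketched below; this is generic frequency-domain FLIM, not code from the MEM-FLIM system:

```python
import math

def lifetime_from_phase(phase_rad, mod_freq_hz):
    """Phase lifetime for a single-exponential decay:
    tau_phi = tan(phi) / (2*pi*f), where phi is the phase shift of the
    modulated emission and f the modulation frequency."""
    return math.tan(phase_rad) / (2 * math.pi * mod_freq_hz)

def lifetime_from_modulation(m, mod_freq_hz):
    """Modulation lifetime for a single-exponential decay:
    tau_m = sqrt(1/m**2 - 1) / (2*pi*f), where m (0 < m < 1) is the
    demodulation ratio of the emission."""
    return math.sqrt(1.0 / m ** 2 - 1.0) / (2 * math.pi * mod_freq_hz)
```

For a true single-exponential decay the two estimates agree; a difference between them is the classic indicator of multi-exponential behavior.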
Raspberry Pi camera with intervalometer used as crescograph
NASA Astrophysics Data System (ADS)
Albert, Stefan; Surducan, Vasile
2017-12-01
An intervalometer is an attachment or facility on a camera that operates the shutter regularly at set intervals over a period. Professional cameras with built-in intervalometers are expensive and quite difficult to find. The Canon CHDK open-source firmware allows intervalometer implementation on Canon cameras only; however, finding a Canon camera with a near-infrared (NIR) photographic lens at an affordable price is impossible. In experiments requiring several cameras (used to measure growth in plants, i.e. as crescographs, but also for coarse evaluation of the water content of leaves), the cost of the equipment is often over budget. Using two Raspberry Pi modules, each equipped with a low-cost NIR camera and a WIFI adapter (for downloading pictures stored on the SD card), and some freely available software, we have implemented two low-budget intervalometer cameras. The shooting interval, the number of pictures to be taken, the image resolution and some other parameters can be fully programmed. The cameras were in continuous use for three months (July-October 2017) in a relevant environment (outdoors), proving the functionality of the concept.
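An intervalometer loop of the kind described (capture, then sleep out the remainder of the interval) can be sketched as below. The `raspistill` backend and file paths are illustrative assumptions; the authors' actual scripts are not given in the abstract.

```python
import time
import datetime
import subprocess

def intervalometer(interval_s, num_shots, capture,
                   sleep=time.sleep, clock=time.monotonic):
    """Generic intervalometer loop: call `capture` every `interval_s`
    seconds, `num_shots` times, compensating for the time the capture
    itself takes. `capture` receives the shot index and returns a
    filename (or any token). `sleep`/`clock` are injectable for testing."""
    shots = []
    for i in range(num_shots):
        start = clock()
        shots.append(capture(i))
        elapsed = clock() - start
        # sleep out the remainder of the interval, net of capture time
        sleep(max(0.0, interval_s - elapsed))
    return shots

def raspistill_capture(i):
    """One hypothetical capture backend for a Raspberry Pi NIR camera:
    shell out to the raspistill CLI with a timestamped file name."""
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    path = f"/home/pi/shots/img-{stamp}-{i:04d}.jpg"
    subprocess.run(["raspistill", "-o", path, "-t", "1000"], check=False)
    return path

# e.g. intervalometer(600, 144, raspistill_capture)  # one shot / 10 min for 24 h
```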
Compton camera study for high efficiency SPECT and benchmark with Anger system
NASA Astrophysics Data System (ADS)
Fontana, M.; Dauvergne, D.; Létang, J. M.; Ley, J.-L.; Testa, É.
2017-12-01
Single photon emission computed tomography (SPECT) is at present one of the major techniques for non-invasive diagnostics in nuclear medicine. The clinical routine is mostly based on collimated cameras, originally proposed by Hal Anger. Due to the presence of mechanical collimation, detection efficiency and energy acceptance are limited and fixed by the system’s geometrical features. In order to overcome these limitations, the application of Compton cameras for SPECT has been investigated for several years. In this study we compare a commercial SPECT-Anger device, the General Electric HealthCare Infinia system with a High Energy General Purpose (HEGP) collimator, and the Compton camera prototype under development by the French collaboration CLaRyS, through Monte Carlo simulations (GATE—GEANT4 Application for Tomographic Emission—version 7.1 and GEANT4 version 9.6, respectively). Given the possible introduction of new radio-emitters at higher energies intrinsically allowed by the Compton camera detection principle, the two detectors are exposed to point-like sources at increasing primary gamma energies, from actual isotopes already suggested for nuclear medicine applications. The Compton camera prototype is first characterized for SPECT application by studying the main parameters affecting its imaging performance: detector energy resolution and random coincidence rate. The two detector performances are then compared in terms of radial event distribution, detection efficiency and final image, obtained by gamma transmission analysis for the Anger system, and with an iterative List Mode-Maximum Likelihood Expectation Maximization (LM-MLEM) algorithm for the Compton reconstruction. The results show for the Compton camera a detection efficiency increased by a factor larger than an order of magnitude with respect to the Anger camera, associated with an enhanced spatial resolution for energies beyond 500 keV. 
We discuss the advantages of Compton camera application for SPECT if compared to present commercial Anger systems, with particular focus on dose delivered to the patient, examination time, and spatial uncertainties.
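The MLEM update at the core of such reconstructions multiplies the current image estimate by the back-projected ratio of measured to predicted counts. A generic binned-data sketch is shown below (list-mode LM-MLEM, as used by the CLaRyS prototype, processes events individually, but the update has the same structure; this is not the collaboration's code):

```python
import numpy as np

def mlem(A, counts, n_iter=50):
    """Basic ML-EM reconstruction for a linear emission model
    counts ~ Poisson(A @ lam). Update rule:
        lam <- lam * A^T(counts / (A @ lam)) / A^T(1)
    A: (n_bins, n_voxels) system matrix; counts: measured bin counts."""
    lam = np.ones(A.shape[1])           # flat initial estimate
    sens = A.sum(axis=0)                # sensitivity image, A^T(1)
    for _ in range(n_iter):
        proj = A @ lam                  # forward projection
        proj[proj == 0] = 1e-12         # guard against divide-by-zero
        lam *= (A.T @ (counts / proj)) / sens
    return lam
```

The multiplicative form keeps the estimate nonnegative at every iteration, which is one reason MLEM variants dominate emission tomography.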
SU-D-BRC-07: System Design for a 3D Volumetric Scintillation Detector Using SCMOS Cameras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darne, C; Robertson, D; Alsanea, F
2016-06-15
Purpose: The purpose of this project is to build a volumetric scintillation detector for quantitative imaging of 3D dose distributions of proton beams accurately in near real-time. Methods: The liquid scintillator (LS) detector consists of a transparent acrylic tank (20×20×20 cm^3) filled with a liquid scintillator that generates scintillation light when irradiated with protons. To track rapid spatial and dose variations in spot-scanning proton beams we used three scientific-complementary metal-oxide semiconductor (sCMOS) imagers (2560×2160 pixels). The cameras collect optical signal from three orthogonal projections. To reduce the system footprint, two mirrors oriented at 45° to the tank surfaces redirect scintillation light to cameras for capturing top and right views. Selection of fixed focal length objective lenses for these cameras was based on their ability to provide large depth of field (DoF) and the required field of view (FoV). Multiple cross-hairs imprinted on the tank surfaces allow for image corrections arising from camera perspective and refraction. Results: We determined that by setting the sCMOS to 16-bit dynamic range, truncating its FoV (1100×1100 pixels) to image the entire volume of the LS detector, and using a 5.6 msec integration time, the imaging rate can be ramped up to 88 frames per second (fps). A 20 mm focal length lens provides a 20 cm imaging DoF and 0.24 mm/pixel resolution. A master-slave camera configuration enables the slaves to initiate image acquisition almost instantly (within 2 µsec) after receiving a trigger signal. A computer with 128 GB RAM was used for spooling images from the cameras and can sustain a maximum recording time of 2 min per camera at 75 fps. Conclusion: The three sCMOS cameras are capable of high-speed imaging. They can therefore be used for quick, high-resolution, and precise mapping of dose distributions from scanned spot proton beams in three dimensions.
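The quoted spooling figures can be sanity-checked with back-of-the-envelope arithmetic, assuming uncompressed 16-bit pixels (an assumption; the abstract does not state the stored bit depth):

```python
# Data-rate budget for the three sCMOS cameras at the truncated FoV.
width = height = 1100        # truncated field of view, pixels
bytes_per_px = 2             # 16-bit dynamic range, stored uncompressed (assumed)
fps = 75                     # sustained recording rate from the abstract
record_s = 120               # 2 minutes per camera
frame_mb = width * height * bytes_per_px / 1e6   # ~2.42 MB per frame
rate_mb_s = frame_mb * fps                       # ~182 MB/s per camera
total_gb = rate_mb_s * record_s * 3 / 1e3        # ~65 GB for three cameras
print(round(frame_mb, 2), round(rate_mb_s), round(total_gb, 1))
```

Roughly 65 GB for three cameras over 2 minutes sits comfortably within the 128 GB RAM quoted, leaving headroom for buffers and the operating system, so the stated recording limit is plausible.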
The LST scientific instruments
NASA Technical Reports Server (NTRS)
Levin, G. M.
1975-01-01
Seven scientific instruments are presently being studied for use with the Large Space Telescope (LST). These instruments are the F/24 Field Camera, the F/48-F/96 Planetary Camera, the High Resolution Spectrograph, the Faint Object Spectrograph, the Infrared Photometer, and the Astrometer. These instruments are being designed as facility instruments to be replaceable during the life of the Observatory.
Lightweight Electronic Camera for Research on Clouds
NASA Technical Reports Server (NTRS)
Lawson, Paul
2006-01-01
"Micro-CPI" (wherein "CPI" signifies "cloud-particle imager") is the name of a small, lightweight electronic camera that has been proposed for use in research on clouds. It would acquire and digitize high-resolution (3-µm-pixel) images of ice particles and water drops at a rate up to 1,000 particles (and/or drops) per second.
TIFR Near Infrared Imaging Camera-II on the 3.6 m Devasthal Optical Telescope
NASA Astrophysics Data System (ADS)
Baug, T.; Ojha, D. K.; Ghosh, S. K.; Sharma, S.; Pandey, A. K.; Kumar, Brijesh; Ghosh, Arpan; Ninan, J. P.; Naik, M. B.; D’Costa, S. L. A.; Poojary, S. S.; Sandimani, P. R.; Shah, H.; Krishna Reddy, B.; Pandey, S. B.; Chand, H.
Tata Institute of Fundamental Research (TIFR) Near Infrared Imaging Camera-II (TIRCAM2) is a closed-cycle helium cryo-cooled imaging camera equipped with a Raytheon 512×512-pixel InSb Aladdin III Quadrant focal plane array (FPA) sensitive to photons in the 1-5 μm wavelength band. In this paper, we present the performance of the camera on the newly installed 3.6 m Devasthal Optical Telescope (DOT) based on the calibration observations carried out during 2017 May 11-14 and 2017 October 7-31. After the preliminary characterization, the camera has been released to the Indian and Belgian astronomical community for science observations since 2017 May. The camera offers a field-of-view (FoV) of ~86.5″×86.5″ on the DOT with a pixel scale of 0.169″. The seeing at the telescope site in the near-infrared (NIR) bands is typically sub-arcsecond, with the best seeing of ~0.45″ realized in the NIR K-band on 2017 October 16. The camera is found to be capable of deep observations in the J, H and K bands, comparable to other 4 m class telescopes available worldwide. Another highlight of this camera is the observational capability for sources up to Wide-field Infrared Survey Explorer (WISE) W1-band (3.4 μm) magnitudes of 9.2 in the narrow L-band (nbL; λcen ~ 3.59 μm). Hence, the camera could be a good complementary instrument to observe the bright nbL-band sources that are saturated in the Spitzer Infrared Array Camera (IRAC) ([3.6] ≲ 7.92 mag) and the WISE W1-band ([3.4] ≲ 8.1 mag). Sources with strong polycyclic aromatic hydrocarbon (PAH) emission at 3.3 μm are also detected. Details of the observations and estimated parameters are presented in this paper.
NASA Astrophysics Data System (ADS)
Buongiorno, M. F.; Musacchio, M.; Silvestri, M.; Vilardo, G.; Sansivero, F.; Caputo, T.; Bellucci Sessa, E.; Pieri, D. C.
2017-12-01
Current satellite missions providing imagery in the TIR region at high spatial resolution offer the possibility of estimating the surface temperature in volcanic areas, contributing to the understanding of ongoing phenomena and to the mitigation of volcanic risk where populations are exposed. The Campi Flegrei volcanic area (Italy) is part of the Neapolitan volcanic district and is monitored by INGV ground networks, including thermal cameras. TIRS on Landsat 8 and ASTER on NASA's Terra provide thermal IR channels to monitor the evolution of the surface temperatures in the Campi Flegrei area. The spatial resolution of the TIR data is 100 m for Landsat 8 and 90 m for ASTER; the temporal resolution is 16 days for both satellites. The TIRNet network has been developed by INGV for long-term volcanic surveillance of the Campi Flegrei through the acquisition of thermal infrared images. The system currently comprises 5 permanent stations equipped with FLIR A645SC thermal cameras with a 640×480-pixel IR sensor. To improve the systematic use of satellite data in the monitoring procedures of volcanic observatories, a suitable integration and validation strategy is needed, also considering that current satellite missions do not provide TIR data with optimal characteristics for observing the small thermal anomalies that may indicate changes in volcanic activity. The presented procedure has been applied to the analysis of the Solfatara Crater and is based on 2 different steps: 1) parallel processing chains to produce ground temperature data both from satellite and from ground cameras; 2) data integration and comparison. The ground camera images generally correspond to views of portions of the crater slopes characterized by significant thermal anomalies due to fumarole fields. In order to compare the satellite and ground cameras, it has been necessary to take the observation geometries into account.
All thermal images of the TIRNet have been georeferenced to the UTM WGS84 system, and a regular grid of 30×30 meters has been created to select polygonal areas corresponding only to the cells containing the georeferenced TIR images acquired by the different TIRNet stations. Preliminary results of this integration approach have been analyzed in order to produce systematic reports to the Italian Civil Protection for the Neapolitan volcanoes.
Webb, Donna J.; Brown, Claire M.
2012-01-01
Epi-fluorescence microscopy is available in most life sciences research laboratories, and when optimized can be a central laboratory tool. In this chapter, the epi-fluorescence light path is introduced and the various components are discussed in detail. Recommendations are made for incident lamp light sources, excitation and emission filters, dichroic mirrors, objective lenses, and charge-coupled device (CCD) cameras in order to obtain the most sensitive epi-fluorescence microscope. The even illumination of metal-halide lamps combined with new “hard” coated filters and mirrors, a high resolution monochrome CCD camera, and a high NA objective lens are all recommended for high resolution and high sensitivity fluorescence imaging. Recommendations are also made for multicolor imaging with the use of monochrome cameras, motorized filter turrets, individual filter cubes, and corresponding dyes that are the best choice for sensitive, high resolution multicolor imaging. Images should be collected using Nyquist sampling and should be corrected for background intensity contributions and nonuniform illumination across the field of view. Photostable fluorescent probes and proteins that absorb a lot of light (i.e., high extinction coefficients) and generate a lot of fluorescence signal (i.e., high quantum yields) are optimal. A neuronal immunofluorescence labeling protocol is also presented. Finally, in order to maximize the utility of sensitive wide-field microscopes and generate the highest resolution images with high signal-to-noise, advice for combining wide-field epi-fluorescence imaging with restorative image deconvolution is presented. PMID:23026996
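The background and flat-field correction recommended above is commonly implemented as (raw − dark) / (flat − dark), using a dark frame for the background contribution and a flat-field frame for the illumination nonuniformity. A minimal sketch (not code from the chapter itself):

```python
import numpy as np

def correct_image(raw, dark, flat):
    """Correct a wide-field fluorescence image for background (dark)
    intensity and nonuniform illumination across the field of view.
    Standard flat-field formula: (raw - dark) / (flat - dark),
    rescaled by the mean gain so overall intensity is preserved."""
    raw = raw.astype(float)
    dark = dark.astype(float)
    gain = flat.astype(float) - dark        # per-pixel illumination gain
    safe_gain = np.where(gain == 0, 1, gain)  # avoid division by zero
    corrected = (raw - dark) / safe_gain
    return corrected * gain.mean()
```

In practice the dark and flat frames are averages of many exposures to suppress their own shot noise, so they do not add appreciable noise to the corrected image.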
NASA Astrophysics Data System (ADS)
Seo, Hokuto; Aihara, Satoshi; Namba, Masakazu; Watabe, Toshihisa; Ohtake, Hiroshi; Kubota, Misao; Egami, Norifumi; Hiramatsu, Takahiro; Matsuda, Tokiyoshi; Furuta, Mamoru; Nitta, Hiroshi; Hirao, Takashi
2010-01-01
Our group has been developing a new type of image sensor overlaid with three organic photoconductive films, each sensitive to only one of the primary color components (blue (B), green (G), or red (R) light), with the aim of developing a compact, high-resolution color camera without any color-separation optical system. In this paper, we first reveal the unique characteristics of organic photoconductive films. The photoconductive properties of the film can be tuned simply by the choice of organic materials, in particular the excellent wavelength selectivities, which are good enough to divide the incident light into the three primary colors. Color separation with vertically stacked organic films is also shown. In addition, the high resolution of organic photoconductive films, sufficient for high-definition television (HDTV), was confirmed in a shooting experiment using a camera tube. Secondly, as a step toward our goal, we fabricated a stacked organic image sensor with G- and R-sensitive organic photoconductive films, each of which had a zinc oxide (ZnO) thin-film transistor (TFT) readout circuit, and demonstrated image pickup at a TV frame rate. A color image with a resolution corresponding to the pixel number of the ZnO TFT readout circuit was obtained from the stacked image sensor. These results show the potential for the development of high-resolution prism-less color cameras with stacked organic photoconductive films.
NASA Technical Reports Server (NTRS)
Tarbell, T.; Frank, Z.; Gilbreth, C.; Shine, R.; Title, A.; Topka, K.; Wolfson, J.
1989-01-01
SOUP is a versatile, visible-light solar observatory, built for space or balloon flight. It is designed to study magnetic and velocity fields in the solar atmosphere with high spatial resolution and temporal uniformity, which cannot be achieved from the surface of the earth. The SOUP investigation is carried out by the Lockheed Palo Alto Research Laboratory, under contract to NASA's Marshall Space Flight Center. Co-investigators include staff members at a dozen observatories and universities in the U.S. and Europe. The primary objectives of the SOUP experiment are: to measure vector magnetic and velocity fields in the solar atmosphere with much better spatial resolution than can be achieved from the ground; to study the physical processes that store magnetic energy in active regions and the conditions that trigger its release; and to understand how magnetic flux emerges, evolves, combines, and disappears on spatial scales of 400 to 100,000 km. SOUP is designed to study intensity, magnetic, and velocity fields in the photosphere and low chromosphere with 0.5 arcsec resolution, free of atmospheric disturbances. The instrument includes: a 30 cm Cassegrain telescope; an active mirror for image stabilization; broadband film and TV cameras; a birefringent filter, tunable over 5100 to 6600 A with 0.05 A bandpass; a 35 mm film camera and a digital CCD camera behind the filter; and a high-speed digital image processor.
The emplacement of long lava flows in Mare Imbrium, the Moon
NASA Astrophysics Data System (ADS)
Garry, W. B.
2012-12-01
Lava flow margins are scarce on the lunar surface. The best-developed lava flows on the Moon occur in Mare Imbrium, where flow margins are traceable along nearly their entire flow length. The flow field originates in the southwest part of the basin from a fissure or series of fissures and cones located in the vicinity of Euler crater and erupted in three phases (Phases I, II, III) over a period of 0.5 billion years (3.0 - 2.5 Ga). The flow field was originally mapped with Apollo and Lunar Orbiter data by Schaber (1973), whose mapping shows that the flow field extends 200 to 1200 km from the presumed source area and covers an area of 2.0 x 10^5 km^2 with an estimated eruptive volume of 4 x 10^4 km^3. Phase I flows extend 1200 km and have the largest flow volume but, interestingly, do not exhibit visible topography; they are instead defined by a difference in color from the surrounding mare flows. Phase II and III flows have well-defined flow margins (10 - 65 m thick) and channels (0.4 - 2.0 km wide, 40 - 70 m deep), but shorter flow lengths of 600 km and 400 km, respectively. Recent missions, including Lunar Reconnaissance Orbiter (LRO), Kaguya (Selene), and Clementine, provide high-resolution data sets of these lava flows. Using a combination of data sets, including images from the LRO Wide Angle Camera (WAC) (50-100 m/pixel) and Narrow Angle Camera (NAC) (up to 0.5 m/pixel), the Kaguya Terrain Camera (TC) (10 m/pixel), and topography from the LRO Lunar Orbiter Laser Altimeter (LOLA), the morphology has been remapped and topographic measurements of the flow features have been made in an effort to reevaluate the emplacement of the flow field. Morphologic mapping reveals a different flow path for Phase I compared to the original mapping completed by Schaber (1973). The boundaries of the Phase I flow field have been revised based on Moon Mineralogy Mapper color ratio images (Staid et al., 2011). This has implications for the area covered and volume erupted during this stage, as well as the age of Phase I.
Flow features and margins have been identified in the Phase I flow within the LROC WAC mosaic and in Narrow Angle Camera (NAC) images. These areas have a mottled appearance. LOLA profiles over the more prominent flow lobes in Phase I reveal these margins are less than 10 m thick. Phase II and III morphology maps are similar to previous flow maps. Phase III lobes near Euler are 10-12 km wide and 20-30 m thick based on measurements of the LOLA 1024 ppd Elevation Digital Terrain Model (DTM) in JMoon. One of the longer Phase III lobes varies between 15 and 50 km wide and 25 to 60 m thick, with the thickest section at the distal end of the lobe. The Phase II lobe is 15 to 25 m thick and up to 35 km wide. The eruptive volume of the Mare Imbrium lava flows has been compared to terrestrial flood basalts. The morphology of the lobes in Phases II and III, which includes levees, thick flow fronts, and lobate margins, suggests these could be similar to terrestrial aa-style flows. The Phase I flows might be more representative of sheet flows, pahoehoe-style flows, or inflated flows. Morphologic comparisons will be made with terrestrial flows at Askja volcano in Iceland, a potential analog for comparing different styles of emplacement for the flows in Mare Imbrium.
NASA Technical Reports Server (NTRS)
Voellmer, George M.; Jackson, Michael L.; Shirron, Peter J.; Tuttle, James G.
2002-01-01
The High Resolution Airborne Wideband Camera (HAWC) and the Submillimeter And Far Infrared Experiment (SAFIRE) will use identical Adiabatic Demagnetization Refrigerators (ADR) to cool their detectors to 200 mK and 100 mK, respectively. In order to minimize thermal loads on the salt pill, a Kevlar suspension system is used to hold it in place. An innovative, kinematic suspension system is presented. The suspension system is unique in that it consists of two parts that can be assembled and tensioned offline, and later bolted onto the salt pill.
MacPhee, A. G.; Dymoke-Bradshaw, A. K. L.; Hares, J. D.; ...
2016-08-08
Here, we report simulations and experiments that demonstrate an increase in spatial resolution of the NIF core diagnostic x-ray streak cameras by a factor of two, especially off axis. A design was achieved by using a corrector electron optic to flatten the field curvature at the detector plane and corroborated by measurement. In addition, particle-in-cell simulations were performed to identify the regions in the streak camera that contribute most to space-charge blurring. Our simulations provide a tool for convolving synthetic pre-shot spectra with the instrument function so signal levels can be set to maximize dynamic range for the relevant part of the streak record.
High-resolution continuum observations of the Sun
NASA Technical Reports Server (NTRS)
Zirin, Harold
1987-01-01
The aim of the PFI, or photometric filtergraph instrument, is to observe the Sun in the continuum with as high a resolution as possible over the widest practical range of wavelengths. Because of financial and political problems the CCD was eliminated, so the highest photometric accuracy is obtainable only by comparison with the CFS images. The experiment is presently limited to wavelengths above 2200 A because untreated film lacks sensitivity below 2200 A. It therefore consists at present of a film camera with 1000 feet of film and 12 filters. The PFI experiments are outlined using only two cameras. Some further problems of the experiment are addressed.
NASA Astrophysics Data System (ADS)
Kittle, David S.; Patil, Chirag G.; Mamelak, Adam; Hansen, Stacey; Perry, Jeff; Ishak, Laura; Black, Keith L.; Butte, Pramod V.
2016-03-01
Current surgical microscopes are limited in sensitivity for NIR fluorescence. Recent developments in tumor markers attached with NIR dyes require newer, more sensitive imaging systems with high resolution to guide surgical resection. We report on a small, single camera solution enabling advanced image processing opportunities previously unavailable for ultra-high sensitivity imaging of these agents. The system captures both visible reflectance and NIR fluorescence at 300 fps while displaying full HD resolution video at 60 fps. The camera head has been designed to easily mount onto the Zeiss Pentero microscope head for seamless integration into surgical procedures.
NASA Astrophysics Data System (ADS)
Furlong, Cosme; Yokum, Jeffrey S.; Pryputniewicz, Ryszard J.
2002-06-01
Sensitivity, accuracy, and precision characteristics of quantitative optical metrology techniques, specifically optoelectronic holography based on fiber optics and high-spatial-resolution, high-digital-resolution cameras, are discussed in this paper. It is shown that sensitivity, accuracy, and precision depend on both the effective determination of optical phase and the effective characterization of the illumination-observation conditions. Sensitivity, accuracy, and precision are investigated with the aid of National Institute of Standards and Technology (NIST) traceable gages, demonstrating the applicability of quantitative optical metrology techniques to satisfy constantly increasing needs in the study and development of emerging technologies.
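The optical-phase determination on which such measurements depend can be illustrated with the standard four-step phase-shifting calculation. This is a generic sketch assuming four interferograms shifted by 90 degrees, not the authors' specific implementation:

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Recover the wrapped optical phase from four interferograms
    I_k = A + B*cos(phi + (k-1)*pi/2); result lies in (-pi, pi]."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic check against a known phase map.
phi = np.linspace(-3.0, 3.0, 7)
A, B = 2.0, 1.0
frames = [A + B * np.cos(phi + k * np.pi / 2) for k in range(4)]
recovered = four_step_phase(*frames)
print(np.allclose(recovered, phi))  # -> True
```

The arctangent yields phase modulo 2*pi, so displacements larger than a wavelength require a separate unwrapping step.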
Design and performance of a large area neutron sensitive anger camera
Visscher, Theodore; Montcalm, Christopher A.; Donahue, Jr., Cornelius; ...
2015-05-21
We describe the design and performance of a 157 mm x 157 mm two-dimensional neutron detector. The detector uses the Anger principle to determine the position of neutrons. We have verified FWHM resolution of <1.2 mm with distortion <0.5 mm on over 50 installed Anger cameras. The performance of the detector is limited by the light yield of the scintillator, and it is estimated that the resolution of the current detector could be doubled with a brighter scintillator. Data collected from small (<1 mm^3) single-crystal reference samples at the single-crystal instrument TOPAZ provide results with low R_w(F) values.
Detecting personnel around UGVs using stereo vision
NASA Astrophysics Data System (ADS)
Bajracharya, Max; Moghaddam, Baback; Howard, Andrew; Matthies, Larry H.
2008-04-01
Detecting people around unmanned ground vehicles (UGVs) to facilitate safe operation of UGVs is one of the highest priority issues in the development of perception technology for autonomous navigation. Research to date has not achieved the detection ranges or reliability needed in deployed systems to detect upright pedestrians in flat, relatively uncluttered terrain, let alone in more complex environments and with people in postures that are more difficult to detect. Range data is essential to solve this problem. Combining range data with high resolution imagery may enable higher performance than range data alone because image appearance can complement shape information in range data and because cameras may offer higher angular resolution than typical range sensors. This makes stereo vision a promising approach for several reasons: image resolution is high and will continue to increase, the physical size and power dissipation of the cameras and computers will continue to decrease, and stereo cameras provide range data and imagery that are automatically spatially and temporally registered. We describe a stereo vision-based pedestrian detection system, focusing on recent improvements to a shape-based classifier applied to the range data, and present frame-level performance results that show great promise for the overall approach.
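The range measurement underlying such a stereo system follows the standard pinhole-stereo relation Z = f*B/d. The focal length, baseline, and disparity values below are illustrative assumptions, not parameters of the system described:

```python
def stereo_range(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Range to a point from stereo disparity: Z = f * B / d,
    with focal length in pixels, baseline in meters, disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative: 1000 px focal length, 30 cm baseline, 6 px disparity.
print(stereo_range(1000.0, 0.3, 6.0))  # -> 50.0 (meters)
```

Since range error grows quadratically with range for a fixed disparity uncertainty (dZ ~ Z^2/(f*B)), image resolution directly limits the usable detection range, which is why the paper stresses increasing camera resolution.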
NASA Technical Reports Server (NTRS)
Barnes, J. C. (Principal Investigator); Smallwood, M. D.; Cogan, J. L.
1975-01-01
The author has identified the following significant results. Of the four black and white S190A camera stations, snowcover is best defined in the two visible spectral bands, due in part to their better resolution. The overall extent of the snow can be mapped more precisely, and the snow within shadow areas is better defined in the visible bands. Of the two S190A color products, the aerial color photography is the better. Because of the contrast in color between snow and snow-free terrain and the better resolution, this product is concluded to be the best overall of the six camera stations for detecting and mapping snow. Overlapping frames permit stereo viewing, which aids in distinguishing clouds from the underlying snow. Because of the greater spatial resolution of the S190B earth terrain camera, areal snow extent can be mapped in greater detail than from the S190A photographs. The snow line elevation measured from the S190A and S190B photographs is reasonable compared to the meager ground truth data available.
2005-06-20
One of the two pictures of Tempel 1 (see also PIA02101) taken by Deep Impact's medium-resolution camera is shown next to data of the comet taken by the spacecraft's infrared spectrometer. This instrument breaks apart light like a prism to reveal the "fingerprints," or signatures, of chemicals. Even though the spacecraft was over 10 days away from the comet when these data were acquired, it detected some of the molecules making up the comet's gas and dust envelope, or coma. The signatures of these molecules -- including water, hydrocarbons, carbon dioxide and carbon monoxide -- can be seen in the graph, or spectrum. Deep Impact's impactor spacecraft is scheduled to collide with Tempel 1 at 10:52 p.m. Pacific time on July 3 (1:52 a.m. Eastern time, July 4). The mission's flyby spacecraft will use its infrared spectrometer to sample the ejected material, providing the first look at the chemical composition of a comet's nucleus. These data were acquired from June 20 to 21, 2005. The picture of Tempel 1 was taken by the flyby spacecraft's medium-resolution instrument camera. The infrared spectrometer uses the same telescope as the high-resolution instrument camera. http://photojournal.jpl.nasa.gov/catalog/PIA02100
Study on the Spatial Resolution of Single and Multiple Coincidences Compton Camera
NASA Astrophysics Data System (ADS)
Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna
2012-10-01
In this paper we study the image resolution that can be obtained from the Multiple Coincidences Compton Camera (MCCC). The principle of MCCC is based on a simultaneous acquisition of several gamma-rays emitted in cascade from a single nucleus. Contrary to a standard Compton camera, MCCC can theoretically provide the exact location of a radioactive source (based only on the identification of the intersection point of three cones created by a single decay), without complicated tomographic reconstruction. However, practical implementation of the MCCC approach encounters several problems; for example, low detection sensitivities result in a very low probability of the coincident triple gamma-ray detection that is necessary for source localization. It is also important to evaluate how the detection uncertainties (finite energy and spatial resolution) influence identification of the intersection of the three cones, and thus the resulting image quality. In this study we investigate how the spatial resolution of images reconstructed using the triple-cone reconstruction (TCR) approach compares with that of images reconstructed from the same data using a standard iterative method based on single cones. Results show that the FWHM for a point source reconstructed with TCR was 20-30% higher than that obtained from standard iterative reconstruction based on the expectation-maximization (EM) algorithm and conventional single-cone Compton imaging. Finite energy and spatial resolutions of the MCCC detectors lead to errors in the definition of the conical surfaces (“thick” conical surfaces), which are only amplified in image reconstruction when the intersection of three cones is sought. Our investigations show that, in spite of being conceptually appealing, the identification of the triple-cone intersection constitutes yet another restriction of the multiple-coincidence approach, which limits the image resolution that can be obtained with MCCC and the TCR algorithm.
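Each detected scatter defines one cone whose half-angle follows from standard Compton kinematics. A minimal sketch of that cone-angle computation (textbook physics, not code from the paper; energies in keV are illustrative) is:

```python
import math

M_E_C2_KEV = 511.0  # electron rest energy, keV

def compton_cone_angle(e_initial_kev: float, e_deposited_kev: float) -> float:
    """Half-angle (radians) of the Compton cone for a photon of initial
    energy E that deposits e_deposited in the scatter detector:
    cos(theta) = 1 - m_e c^2 * (1/E' - 1/E), with E' = E - e_deposited."""
    e_scattered = e_initial_kev - e_deposited_kev
    cos_theta = 1.0 - M_E_C2_KEV * (1.0 / e_scattered - 1.0 / e_initial_kev)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("kinematically forbidden energy pair")
    return math.acos(cos_theta)

# Illustrative: a 1000 keV photon depositing 300 keV opens a ~39 degree cone.
theta = compton_cone_angle(1000.0, 300.0)
print(math.degrees(theta))
```

Finite energy resolution perturbs cos(theta) directly, which is the origin of the "thick" conical surfaces discussed above.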
High resolution multispectral photogrammetric imagery: enhancement, interpretation and evaluations
NASA Astrophysics Data System (ADS)
Roberts, Arthur; Haefele, Martin; Bostater, Charles; Becker, Thomas
2007-10-01
A variety of aerial mapping cameras were adapted and developed into simulated multiband digital photogrammetric mapping systems. Direct digital multispectral cameras, two multiband cameras (IIS 4-band and Itek 9-band), and paired mapping and reconnaissance cameras were evaluated for digital spectral performance and photogrammetric mapping accuracy in an aquatic environment. Aerial films (24 cm x 24 cm format) tested were: Agfa color negative and extended-red (visible and near-infrared) panchromatic; and Kodak color infrared and B&W (visible and near-infrared) infrared. All films were negative processed to published standards and digitally converted at either 16 (color) or 10 (B&W) microns. Excellent precision in the digital conversions was obtained, with scanning errors of less than one micron. Radiometric data conversion was undertaken using linear density conversion and centered 8-bit histogram exposure. This resulted in multiple 8-bit spectral image bands that were unaltered (not radiometrically enhanced) "optical count" conversions of film density. This provided the best film-density conversion to a digital product while retaining the original film density characteristics. Data covering water depth, water quality, surface roughness, and bottom substrate were acquired using different measurement techniques as well as different techniques to locate sampling points on the imagery. Despite extensive efforts to obtain accurate ground truth data, location errors, measurement errors, and variations in the correlation between water depth and remotely sensed signal persisted. These errors must be considered endemic and may not be removed through even the most elaborate sampling setup. Results indicate that multispectral photogrammetric systems offer improved feature-mapping capability.
Diving-flight aerodynamics of a peregrine falcon (Falco peregrinus).
Ponitz, Benjamin; Schmitz, Anke; Fischer, Dominik; Bleckmann, Horst; Brücker, Christoph
2014-01-01
This study investigates the aerodynamics of the falcon Falco peregrinus while diving. During a dive peregrines can reach velocities of more than 320 km h⁻¹. Unfortunately, in freely roaming falcons, these high velocities prohibit a precise determination of flight parameters such as velocity and acceleration as well as body shape and wing contour. Therefore, individual F. peregrinus were trained to dive in front of a vertical dam with a height of 60 m. The presence of a well-defined background allowed us to reconstruct the flight path and the body shape of the falcon during certain flight phases. Flight trajectories were obtained with a stereo high-speed camera system. In addition, body images of the falcon were taken from two perspectives with a high-resolution digital camera. The dam allowed us to match the high-resolution images obtained from the digital camera with the corresponding images taken with the high-speed cameras. Using these data we built a life-size model of F. peregrinus and used it to measure the drag and lift forces in a wind-tunnel. We compared these forces acting on the model with the data obtained from the 3-D flight path trajectory of the diving F. peregrinus. Visualizations of the flow in the wind-tunnel uncovered details of the flow structure around the falcon's body, which suggests local regions with separation of flow. High-resolution pictures of the diving peregrine indicate that feathers pop-up in the equivalent regions, where flow separation in the model falcon occurred.
Slight Blurring in Newer Image from Mars Orbiter
2018-02-09
These two frames were taken of the same place on Mars by the same orbiting camera before (left) and after some images from the camera began showing unexpected blur. The images are from the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter. They show a patch of ground about 500 feet or 150 meters wide in Gusev Crater. The one on the left, from HiRISE observation ESP_045173_1645, was taken March 16, 2016. The one on the right was taken Jan. 9, 2018. Gusev Crater, at 15 degrees south latitude and 176 degrees east longitude, is the landing site of NASA's Spirit Mars rover in 2004 and a candidate landing site for a rover to be launched in 2020. HiRISE images provide important information for evaluating potential landing sites. The smallest boulders with measurable diameters in the left image are about 3 feet (90 centimeters) wide. In the blurred image, the smallest measurable boulders are about double that width. As of early 2018, most full-resolution images from HiRISE are not blurred, and the cause of the blur is still under investigation. Even before blurred images were first seen in 2017, HiRISE observations commonly used a technique that covers more ground area at half the resolution. This shows features smaller than can be distinguished with any other camera orbiting Mars, and little blurring has appeared in these images. https://photojournal.jpl.nasa.gov/catalog/PIA22215
NASA Astrophysics Data System (ADS)
Guhathakurta, Puragra; Dorman, C.; Seth, A.; Dalcanton, J.; Gilbert, K.; Howley, K.; Johnson, L. C.; Kalirai, J.; Krause, T.; Lang, D.; Williams, B.; PHAT Team; SPLASH Collaboration
2012-01-01
We present a comparative study of the kinematics of different types of stars in the Andromeda galaxy (M31). Our fields of study span a range of projected radii from 2 to 15 kpc in the NE and SE quadrants of M31's disk and spheroid. The kinematical part of this study is based on radial velocity measurements of a few thousand stars obtained using the Keck II telescope and DEIMOS spectrograph as part of the SPLASH survey. The DEIMOS spectra have a spectral resolution of about 1.5 Angstrom (FWHM) and cover the wavelength range 6500-9000 Angstrom. The stellar populations part of this study - specifically, the division of stars into sub-populations - is based on high spatial resolution Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS) and Wide-Field Camera 3 (WFC3) images and photometry in six filters: two ultraviolet bands (F275W and F336W), two optical bands (F475W and F814W), and two near-infrared bands (F110W and F160W). The stellar sub-populations we study include metal-rich, metal-intermediate, and metal-poor red giants, asymptotic giant branch stars, He-burning blue loop stars, massive main sequence stars, planetary nebulae, and X-ray binaries. Kinematical information allows us to measure the fraction of each sub-population that is associated with M31's disk versus its spheroid. The excellent synergy between HST and Keck provides insight into the relationship between the dynamical, star formation, and chemical enrichment histories of the structural sub-components of M31 and, by association, other large spiral galaxies. This research was supported by the National Science Foundation, NASA, and the Science Internship Program (SIP) at UCSC.
Theory of compressive modeling and simulation
NASA Astrophysics Data System (ADS)
Szu, Harold; Cha, Jae; Espinola, Richard L.; Krapels, Keith
2013-05-01
Modeling and Simulation (M&S) has been evolving along two general directions: (i) a data-rich approach suffering the curse of dimensionality and (ii) an equation-rich approach suffering from limited computing power and turnaround time. We suggest a third approach, which we call (iii) compressive M&S (CM&S), because the basic Minimum Free-Helmholtz Energy (MFE) principle facilitating CM&S can reproduce and generalize the Candes, Romberg, Tao & Donoho (CRT&D) Compressive Sensing (CS) paradigm as a linear Lagrange Constraint Neural Network (LCNN) algorithm. MFE-based CM&S can generalize LCNN to second order as a nonlinear augmented LCNN. For example, during sunset we can avoid the reddish bias of sunlight illumination, caused by long-range Rayleigh scattering over the horizon, by using a night-vision camera instead of a day camera. We decomposed the long-wave infrared (LWIR) band with a filter into two vector components (8~10 μm and 10~12 μm) and used LCNN to find, pixel by pixel, the map of Emissive-Equivalent Planck Radiation Sources (EPRS). Then, we up-shifted consistently, according to the de-mixed source map, to a sub-micron RGB color image. Moreover, night-vision imaging can also be down-shifted to Passive Millimeter Wave (PMMW) imaging, which suffers less blur from scattering by dusty smoke and enjoys the apparent smoothness of the surface reflectivity of man-made objects below the Rayleigh resolution. One loses three orders of magnitude in spatial Rayleigh resolution, but gains two orders of magnitude in reflectivity and another two orders in propagation without obscuring smog. Since CM&S can generate missing data and hard-to-get dynamic transients, it can reduce unnecessary measurements and their associated cost and computing in the sense of super-saving CS: measuring one and getting its neighborhood free.
Tracking subpixel targets in domestic environments
NASA Astrophysics Data System (ADS)
Govinda, V.; Ralph, J. F.; Spencer, J. W.; Goulermas, J. Y.; Smith, D. H.
2006-05-01
In recent years, closed circuit cameras have become a common feature of urban life. There are environments however where the movement of people needs to be monitored but high resolution imaging is not necessarily desirable: rooms where privacy is required and the occupants are not comfortable with the perceived intrusion. Examples might include domiciliary care environments, prisons and other secure facilities, and even large open plan offices. This paper discusses algorithms that allow activity within this type of sensitive environment to be monitored using data from low resolution cameras (ones where all objects of interest are sub-pixel and cannot be resolved) and other non-intrusive sensors. The algorithms are based on techniques originally developed for wide area reconnaissance and surveillance applications. Of particular importance is determining the minimum spatial resolution that is required to provide a specific level of coverage and reliability.
Thin-filament pyrometry with a digital still camera.
Maun, Jignesh D; Sunderland, Peter B; Urban, David L
2007-02-01
A novel thin-filament pyrometer is presented. It involves a consumer-grade color digital still camera with 6 megapixels and 12 bits per color plane. SiC fibers were used; scanning electron microscopy found them to be uniform, with diameters of 13.9 μm. Measurements were performed in a methane-air coflowing laminar jet diffusion flame with a luminosity length of 72 mm. Calibration of the pyrometer was accomplished with B-type thermocouples. The pyrometry measurements yielded gas temperatures in the range of 1400-2200 K with an estimated uncertainty of +/-60 K, a relative temperature resolution of +/-0.215 K, a spatial resolution of 42 μm, and a temporal resolution of 0.66 ms. Fiber aging for 10 min had no effect on the results. Soot deposition was less problematic for the pyrometer than for the thermocouple.
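The color-ratio principle behind filament pyrometry with a color camera can be sketched with the Wien approximation to Planck's law. The channel wavelengths below are assumptions for illustration; the actual instrument was calibrated against B-type thermocouples rather than computed from first principles:

```python
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_intensity(lam_m: float, temp_k: float) -> float:
    """Wien-approximation spectral intensity (arbitrary scale)."""
    return lam_m ** -5 * math.exp(-C2 / (lam_m * temp_k))

def ratio_temperature(i1: float, i2: float, lam1: float, lam2: float) -> float:
    """Invert the two-color intensity ratio for temperature:
    T = C2 * (1/lam2 - 1/lam1) / ln(i1/i2 * (lam1/lam2)^5)."""
    return C2 * (1.0 / lam2 - 1.0 / lam1) / math.log(i1 / i2 * (lam1 / lam2) ** 5)

# Round trip at 1800 K (within the 1400-2200 K range reported), using
# assumed red/green channel wavelengths of 620 nm and 540 nm.
T = 1800.0
lam_r, lam_g = 620e-9, 540e-9
i_r, i_g = wien_intensity(lam_r, T), wien_intensity(lam_g, T)
print(round(ratio_temperature(i_r, i_g, lam_r, lam_g)))  # -> 1800
```

In practice, each camera channel integrates over a broad spectral response, which is why an empirical thermocouple calibration was preferred over this idealized two-wavelength inversion.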
Multi-pinhole collimator design for small-object imaging with SiliSPECT: a high-resolution SPECT
NASA Astrophysics Data System (ADS)
Shokouhi, S.; Metzler, S. D.; Wilson, D. W.; Peterson, T. E.
2009-01-01
We have designed a multi-pinhole collimator for a dual-headed, stationary SPECT system that incorporates high-resolution silicon double-sided strip detectors. The compact camera design of our system enables imaging at source-collimator distances between 20 and 30 mm. Our analytical calculations show that using knife-edge pinholes with small-opening angles or cylindrically shaped pinholes in a focused, multi-pinhole configuration in combination with this camera geometry can generate narrow sensitivity profiles across the field of view that can be useful for imaging small objects at high sensitivity and resolution. The current prototype system uses two collimators each containing 127 cylindrically shaped pinholes that are focused toward a target volume. Our goal is imaging objects such as a mouse brain, which could find potential applications in molecular imaging.
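The narrow sensitivity profiles mentioned above follow from the widely used geometric pinhole-sensitivity approximation g ≈ d² cosⁿ(θ) / (16 h²), with n ≈ 3 for knife-edge pinholes. This is a textbook approximation, not the authors' own analytical model, and the dimensions below are illustrative:

```python
import math

def pinhole_sensitivity(d_mm: float, h_mm: float, theta_rad: float, n: int = 3) -> float:
    """Geometric sensitivity (fraction of emitted photons accepted) of a
    single pinhole: g = d^2 * cos^n(theta) / (16 * h^2), where d is the
    effective pinhole diameter, h the source-to-pinhole distance, and
    theta the off-axis angle."""
    return d_mm ** 2 * math.cos(theta_rad) ** n / (16.0 * h_mm ** 2)

# Illustrative: 1 mm pinhole at 25 mm, on axis vs 30 degrees off axis.
on_axis = pinhole_sensitivity(1.0, 25.0, 0.0)
off_axis = pinhole_sensitivity(1.0, 25.0, math.radians(30.0))
print(on_axis, off_axis)
```

The cosⁿ falloff is what lets a focused multi-pinhole layout concentrate sensitivity on a small target volume such as a mouse brain.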
Integration of USB and firewire cameras in machine vision applications
NASA Astrophysics Data System (ADS)
Smith, Timothy E.; Britton, Douglas F.; Daley, Wayne D.; Carey, Richard
1999-08-01
Digital cameras have been around for many years, but a new breed of consumer-market cameras is hitting the mainstream. By using these devices, system designers and integrators will be well positioned to take advantage of technological advances developed to support multimedia and imaging applications on the PC platform. Having these new cameras on the consumer market means lower cost, but it does not necessarily guarantee ease of integration. There are many issues that need to be accounted for, such as image quality, maintainable frame rates, image size and resolution, supported operating systems, and ease of software integration. This paper briefly describes a couple of the consumer digital standards and then discusses some of the advantages and pitfalls of integrating both USB and Firewire cameras into computer/machine vision applications.
A comparison of select image-compression algorithms for an electronic still camera
NASA Technical Reports Server (NTRS)
Nerheim, Rosalee
1989-01-01
This effort is a study of image-compression algorithms for an electronic still camera. An electronic still camera can record and transmit high-quality images without the use of film, because images are stored digitally in computer memory. However, high-resolution images contain an enormous amount of information, and will strain the camera's data-storage system. Image compression will allow more images to be stored in the camera's memory. For the electronic still camera, a compression algorithm that produces a reconstructed image of high fidelity is most important. Efficiency of the algorithm is the second priority. High fidelity and efficiency are more important than a high compression ratio. Several algorithms were chosen for this study and judged on fidelity, efficiency and compression ratio. The transform method appears to be the best choice. At present, the method is compressing images to a ratio of 5.3:1 and producing high-fidelity reconstructed images.
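The transform method works by concentrating image energy into a few coefficients and discarding the rest. A minimal DCT-based sketch (keeping 12 of 64 coefficients per 8x8 block, roughly the 5.3:1 ratio reported; the block size and coefficient count are illustrative assumptions) might look like:

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II transform matrix."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

def compress_block(block: np.ndarray, keep: int) -> np.ndarray:
    """Transform a square block, keep only the `keep` largest-magnitude
    coefficients, and reconstruct the lossy approximation."""
    d = dct_matrix(block.shape[0])
    coeffs = d @ block @ d.T
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs[np.abs(coeffs) < thresh] = 0.0
    return d.T @ coeffs @ d

rng = np.random.default_rng(0)
block = rng.normal(size=(8, 8)).cumsum(axis=1)  # smooth-ish synthetic block
approx = compress_block(block, keep=12)         # ~64/12 = 5.3:1 coefficient ratio
print(approx.shape)  # -> (8, 8)
```

For smooth image content most DCT energy sits in low-frequency coefficients, so discarding the rest preserves fidelity, matching the study's priority of high fidelity over raw compression ratio.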
Design and realization of an AEC&AGC system for the CCD aerial camera
NASA Astrophysics Data System (ADS)
Liu, Hai ying; Feng, Bing; Wang, Peng; Li, Yan; Wei, Hao yun
2015-08-01
An AEC and AGC (Automatic Exposure Control and Automatic Gain Control) system was designed for a CCD aerial camera with a fixed aperture and an electronic shutter. Conventional AEC and AGC algorithms are not suitable for this aerial camera, since it always takes high-resolution photographs while moving at high speed. The AEC and AGC system adjusts the electronic shutter and camera gain automatically according to the target brightness and the moving speed of the aircraft. An automatic gamma correction is applied before the image is output so that the image is easier for humans to view and analyze. The AEC and AGC system avoids underexposure, overexposure, and image blurring caused by fast motion or environmental vibration. A series of tests proved that the system meets the requirements of the camera system, with fast adjustment, high adaptability, and high reliability in severe, complex environments.
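A shutter-before-gain exposure policy of this general kind can be sketched as a simple feedback step. The target level, limits, and step sizes below are illustrative assumptions, not the paper's actual control law; in the real system the shutter ceiling would be derived from aircraft speed to bound motion blur:

```python
def aec_agc_step(mean_level: float, shutter_us: float, gain_db: float,
                 target: float = 128.0, shutter_max_us: float = 500.0,
                 gain_max_db: float = 18.0) -> tuple:
    """One control step: prefer shutter adjustment (no added noise) and
    raise gain only once the shutter hits its motion-blur-limited maximum."""
    error = target / max(mean_level, 1.0)
    shutter_us = min(shutter_us * error, shutter_max_us)
    if shutter_us >= shutter_max_us and mean_level < target:
        gain_db = min(gain_db + 1.0, gain_max_db)   # scene still too dark
    elif mean_level > target:
        gain_db = max(gain_db - 1.0, 0.0)           # shed gain first when bright
    return shutter_us, gain_db

# Dim scene, shutter headroom remains: only the shutter lengthens.
print(aec_agc_step(64.0, 200.0, 0.0))   # -> (400.0, 0.0)
# Very dim scene, shutter saturates: gain starts to ramp.
print(aec_agc_step(32.0, 400.0, 0.0))   # -> (500.0, 1.0)
```

Running one such step per frame converges the mean image level toward the target while keeping gain, and hence noise, as low as the blur constraint allows.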
NASA Astrophysics Data System (ADS)
Farrell, S. L.; Kurtz, N. T.; Richter-Menge, J.; Harbeck, J. P.; Onana, V.
2012-12-01
Satellite-derived estimates of ice thickness and observations of ice extent over the last decade point to a downward trend in the basin-scale ice volume of the Arctic Ocean. This loss has broad-ranging impacts on the regional climate and ecosystems, as well as implications for regional infrastructure, marine navigation, national security, and resource exploration. New observational datasets at small spatial and temporal scales are now required to improve our understanding of physical processes occurring within the ice pack and advance parameterizations in the next generation of numerical sea-ice models. High-resolution airborne and satellite observations of the sea ice are now available at meter-scale resolution or better that provide new details on the properties and morphology of the ice pack across basin scales. For example the NASA IceBridge airborne campaign routinely surveys the sea ice of the Arctic and Southern Oceans with an advanced sensor suite including laser and radar altimeters and digital cameras that together provide high-resolution measurements of sea ice freeboard, thickness, snow depth and lead distribution. Here we present statistical analyses of the ice pack primarily derived from the following IceBridge instruments: the Digital Mapping System (DMS), a nadir-looking, high-resolution digital camera; the Airborne Topographic Mapper, a scanning lidar; and the University of Kansas snow radar, a novel instrument designed to estimate snow depth on sea ice. Together these instruments provide data from which a wide range of sea ice properties may be derived. We provide statistics on lead distribution and spacing, lead width and area, floe size and distance between floes, as well as ridge height, frequency and distribution. 
The goals of this study are to (i) identify unique statistics that can be used to describe the characteristics of specific ice regions, for example first-year/multi-year ice, diffuse ice edge/consolidated ice pack, and convergent/divergent ice zones, (ii) provide datasets that support enhanced parameterizations in numerical models as well as model initialization and validation, (iii) provide parameters of interest to Arctic stakeholders for marine navigation and ice engineering studies, and (iv) provide statistics that support algorithm development for the next generation of airborne and satellite altimeters, including NASA's ICESat-2 mission. We describe the potential contribution our results can make towards the improvement of coupled ice-ocean numerical models, and discuss how data synthesis and integration with high-resolution models may improve our understanding of sea ice variability and our capabilities in predicting the future state of the ice pack.
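As a minimal illustration of how lead statistics of the kind listed above might be derived, the sketch below extracts lead widths and lead-to-lead separations from a boolean along-track transect. The function and its inputs are hypothetical stand-ins, not the actual IceBridge processing.

```python
import numpy as np

def lead_stats(is_lead, spacing_m):
    """Lead widths and separations along a 1-D transect.

    is_lead:   boolean array, True where the transect crosses open water
    spacing_m: along-track distance between samples (m)
    """
    # Pad with zeros so diff marks run boundaries: +1 = lead start, -1 = end.
    edges = np.diff(np.concatenate(([0], is_lead.astype(int), [0])))
    starts = np.flatnonzero(edges == 1)
    ends = np.flatnonzero(edges == -1)
    widths = (ends - starts) * spacing_m          # width of each lead
    gaps = (starts[1:] - ends[:-1]) * spacing_m   # ice between adjacent leads
    return widths, gaps
```

From these arrays, distributions of lead width and spacing follow directly (means, percentiles, histograms).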
25 CFR 543.2 - What are the definitions for this part?
Code of Federal Regulations, 2013 CFR
2013-04-01
..., mechanical, or other technologic form, that function together to aid the play of one or more Class II games... a particular game, player interface, shift, or other period. Count room. A secured room where the... validated directly by a voucher system. Dedicated camera. A video camera that continuously records a...
Study on a High Compression Processing for Video-on-Demand e-learning System
NASA Astrophysics Data System (ADS)
Nomura, Yoshihiko; Matsuda, Ryutaro; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko
The authors proposed a high-quality, small-capacity lecture-video-file creating system for distance e-learning. Examining the features of the lecturing scene, the authors ingeniously employ two kinds of image-capturing equipment with complementary characteristics: one is a digital video camera with a low resolution and a high frame rate, and the other is a digital still camera with a high resolution and a very low frame rate. By coordinating the two kinds of image-capturing equipment and integrating them with image processing, we can produce course materials with greatly reduced file capacity: the course materials satisfy the requirements both for the temporal resolution needed to see the lecturer's pointing actions and for the high spatial resolution needed to read small written letters. A comparative experiment confirmed the e-lecture using the proposed system to be more effective than an ordinary lecture from the viewpoint of educational effect.
CIRCE: The Canarias InfraRed Camera Experiment for the Gran Telescopio Canarias
NASA Astrophysics Data System (ADS)
Eikenberry, Stephen S.; Charcos, Miguel; Edwards, Michelle L.; Garner, Alan; Lasso-Cabrera, Nestor; Stelter, Richard D.; Marin-Franch, Antonio; Raines, S. Nicholas; Ackley, Kendall; Bennett, John G.; Cenarro, Javier A.; Chinn, Brian; Donoso, H. Veronica; Frommeyer, Raymond; Hanna, Kevin; Herlevich, Michael D.; Julian, Jeff; Miller, Paola; Mullin, Scott; Murphey, Charles H.; Packham, Chris; Varosi, Frank; Vega, Claudia; Warner, Craig; Ramaprakash, A. N.; Burse, Mahesh; Punnadi, Sunjit; Chordia, Pravin; Gerarts, Andreas; Martín, Héctor De Paz; Calero, María Martín; Scarpa, Riccardo; Acosta, Sergio Fernandez; Sánchez, William Miguel Hernández; Siegel, Benjamin; Pérez, Francisco Francisco; Martín, Himar D. Viera; Losada, José A. Rodríguez; Nuñez, Agustín; Tejero, Álvaro; González, Carlos E. Martín; Rodríguez, César Cabrera; Sendra, Jordi Molgó; Rodriguez, J. Esteban; Cáceres, J. Israel Fernádez; García, Luis A. Rodríguez; Lopez, Manuel Huertas; Dominguez, Raul; Gaggstatter, Tim; Lavers, Antonio Cabrera; Geier, Stefan; Pessev, Peter; Sarajedini, Ata; Castro-Tirado, A. J.
The Canarias InfraRed Camera Experiment (CIRCE) is a near-infrared (1-2.5μm) imager, polarimeter and low-resolution spectrograph operating as a visitor instrument for the Gran Telescopio Canarias (GTC) 10.4-m telescope. It was designed and built largely by graduate students and postdocs, with help from the University of Florida (UF) astronomy engineering group, and is funded by UF and the US National Science Foundation. CIRCE is intended to help fill the gap in near-infrared capabilities prior to the arrival of the Espectrógrafo Multiobjeto Infra-Rojo (EMIR) at the GTC, and will also provide the following scientific capabilities to complement EMIR after its arrival: high-resolution imaging, narrowband imaging, high-time-resolution photometry, imaging polarimetry, and low-resolution spectroscopy. In this paper, we review the design, fabrication, integration, lab testing, and on-sky performance results for CIRCE. These include a novel approach to the opto-mechanical design, fabrication, and alignment.
Performance measurement of commercial electronic still picture cameras
NASA Astrophysics Data System (ADS)
Hsu, Wei-Feng; Tseng, Shinn-Yih; Chiang, Hwang-Cheng; Cheng, Jui-His; Liu, Yuan-Te
1998-06-01
Commercial electronic still picture cameras need a low-cost, systematic method for evaluating their performance. In this paper, we present a measurement method for evaluating the dynamic range and sensitivity by constructing the opto-electronic conversion function (OECF), the fixed pattern noise by the peak S/N ratio (PSNR) and the image shading function (ISF), and the spatial resolution by the modulation transfer function (MTF). Evaluation results for the individual color components and the luminance signal from a PC camera using a Sony interlaced CCD array as the image sensor are then presented.
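The PSNR figure mentioned above for fixed-pattern noise has a standard definition; a minimal version (assuming an 8-bit-style peak signal, not necessarily the authors' exact procedure) is:

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    # A zero MSE means the images are identical; PSNR is then unbounded.
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)
```

For fixed-pattern noise, `reference` would typically be a smoothed (ideal) flat field and `test` the captured flat field.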
Manned observations technology development, FY 1992 report
NASA Technical Reports Server (NTRS)
Israel, Steven
1992-01-01
This project evaluated the suitability of the NASA/JSC developed electronic still camera (ESC) digital image data for Earth observations from the Space Shuttle, as a first step to aid planning for Space Station Freedom. Specifically, image resolution achieved from the Space Shuttle using the current ESC system, which is configured with a Loral 15 mm x 15 mm (1024 x 1024 pixel array) CCD chip on the focal plane of a Nikon F4 camera, was compared to that of current handheld 70 mm Hasselblad 500 EL/M film cameras.
Imaging experiment: The Viking Lander
Mutch, T.A.; Binder, A.B.; Huck, F.O.; Levinthal, E.C.; Morris, E.C.; Sagan, C.; Young, A.T.
1972-01-01
The Viking Lander Imaging System will consist of two identical facsimile cameras. Each camera has a high-resolution mode with an instantaneous field of view of 0.04°, and survey and color modes with instantaneous fields of view of 0.12°. Cameras are positioned one meter apart to provide stereoscopic coverage of the near field. The Imaging Experiment will provide important information about the morphology, composition, and origin of the Martian surface and atmospheric features. In addition, lander pictures will provide supporting information for other experiments in biology, organic chemistry, meteorology, and physical properties. © 1972.
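For a sense of scale, the linear footprint subtended by one instantaneous field of view at a given range follows from simple geometry; this small helper is illustrative only:

```python
import math

def ifov_footprint(ifov_deg, range_m):
    """Linear size on the scene subtended by one instantaneous field of view."""
    return 2 * range_m * math.tan(math.radians(ifov_deg) / 2)
```

At a 3 m range, the 0.04° high-resolution mode resolves roughly 2 mm on the surface.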
Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+
Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J.
2015-01-01
Background: Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons’ point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. Methods: The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon’s perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4, the GoPro was linked to a WiFi remote and controlled by the surgeon. Results: Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera’s automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. Conclusions: The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video. PMID:25750851
Re-scan confocal microscopy: scanning twice for better resolution.
De Luca, Giulia M R; Breedijk, Ronald M P; Brandt, Rick A J; Zeelenberg, Christiaan H C; de Jong, Babette E; Timmermans, Wendy; Azar, Leila Nahidi; Hoebe, Ron A; Stallinga, Sjoerd; Manders, Erik M M
2013-01-01
We present a new super-resolution technique, Re-scan Confocal Microscopy (RCM), based on standard confocal microscopy extended with an optical (re-scanning) unit that projects the image directly onto a CCD camera. This new microscope has improved lateral resolution and strongly improved sensitivity while maintaining the sectioning capability of a standard confocal microscope. This simple technology is typically useful for biological applications where the combination of high resolution and high sensitivity is required.
Measuring the performance of super-resolution reconstruction algorithms
NASA Astrophysics Data System (ADS)
Dijk, Judith; Schutte, Klamer; van Eekeren, Adam W. M.; Bijl, Piet
2012-06-01
For many military operations situational awareness is of great importance. This situational awareness and related tasks such as target acquisition can be achieved using cameras, of which the resolution is an important characteristic. Super-resolution reconstruction algorithms can be used to improve the effective sensor resolution. In order to judge these algorithms and the conditions under which they operate best, performance evaluation methods are necessary. This evaluation, however, is not straightforward for several reasons. First of all, frequency-based evaluation techniques alone will not provide a correct answer, because they are unable to discriminate between structure-related and noise-related effects. Secondly, most super-resolution packages perform additional image enhancement such as noise reduction and edge enhancement; as these algorithms improve the results, they cannot be evaluated separately. Thirdly, a high-resolution ground truth is rarely available, so evaluating the differences between the estimated high-resolution image and its ground truth is not straightforward. Fourthly, different artifacts can occur due to super-resolution reconstruction which are not known beforehand and hence are difficult to evaluate. In this paper we present a set of new evaluation techniques to assess super-resolution reconstruction algorithms. Some of these evaluation techniques are derived from processing on dedicated (synthetic) imagery. Others can be applied to both synthetic and natural images (real camera data). The result is a balanced set of evaluation algorithms that can be used to assess the performance of super-resolution reconstruction algorithms.
Multi-Angle Snowflake Camera Instrument Handbook
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stuefer, Martin; Bailey, J.
2016-07-01
The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36°. Each camera field of view is aligned to have a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' fields of view within a 10° angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.
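Deriving fall speed from successive triggers, as described above, reduces to distance over time between the two IR gates; a hedged sketch (hypothetical names, with the vertical gate separation as an assumed input rather than the instrument's actual value) is:

```python
def fall_speed(gate_separation_m, t_upper_s, t_lower_s):
    """Hydrometeor fall speed from trigger times of two stacked IR gates."""
    dt = t_lower_s - t_upper_s
    if dt <= 0:
        raise ValueError("lower gate must trigger after the upper gate")
    return gate_separation_m / dt
```

For instance, a 32 mm gate separation crossed in 16 ms gives a fall speed of 2 m/s.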
Kirk, R.L.; Howington-Kraus, E.; Redding, B.; Galuszka, D.; Hare, T.M.; Archinal, B.A.; Soderblom, L.A.; Barrett, J.M.
2003-01-01
We analyzed narrow-angle Mars Orbiter Camera (MOC-NA) images to produce high-resolution digital elevation models (DEMs) in order to provide topographic and slope information needed to assess the safety of candidate landing sites for the Mars Exploration Rovers (MER) and to assess the accuracy of our results by a variety of tests. The mapping techniques developed also support geoscientific studies and can be used with all present and planned Mars-orbiting scanner cameras. Photogrammetric analysis of MOC stereopairs yields DEMs with 3-pixel (typically 10 m) horizontal resolution, vertical precision consistent with ±0.22-pixel matching errors (typically a few meters), and slope errors of 1-3°. These DEMs are controlled to the Mars Orbiter Laser Altimeter (MOLA) global data set and consistent with it at the limits of resolution. Photoclinometry yields DEMs with single-pixel (typically ~3 m) horizontal resolution and submeter vertical precision. Where the surface albedo is uniform, the dominant error is 10-20% relative uncertainty in the amplitude of topography and slopes after "calibrating" photoclinometry against a stereo DEM to account for the influence of atmospheric haze. We mapped portions of seven candidate MER sites and the Mars Pathfinder site. Safety of the final four sites (Elysium, Gusev, Isidis, and Meridiani) was assessed by mission engineers by simulating landings on our DEMs of "hazard units" mapped in the sites, with results weighted by the probability of landing on those units; summary slope statistics show that most hazard units are smooth, with only small areas of etched terrain in Gusev crater posing a slope hazard.
Replogle, William C.; Sweatt, William C.
2001-01-01
A photolithography system that employs a condenser that includes a series of aspheric mirrors on one side of a small, incoherent source of radiation producing a series of beams is provided. Each aspheric mirror images the quasi point source into a curved line segment. A relatively small arc of the ring image is needed by the camera; all of the beams are so manipulated that they all fall onto this same arc needed by the camera. Also, all of the beams are aimed through the camera's virtual entrance pupil. The condenser includes a correcting mirror for reshaping a beam segment which improves the overall system efficiency. The condenser efficiently fills the larger radius ringfield created by today's advanced camera designs. The system further includes (i) means for adjusting the intensity profile at the camera's entrance pupil or (ii) means for partially shielding the illumination imaging onto the mask or wafer. The adjusting means can, for example, change at least one of: (i) partial coherence of the photolithography system, (ii) mask image illumination uniformity on the wafer or (iii) centroid position of the illumination flux in the entrance pupil. A particularly preferred adjusting means includes at least one vignetting mask that covers at least a portion of the at least two substantially equal radial segments of the parent aspheric mirror.
Huber, A; Brezinsek, S; Mertens, Ph; Schweer, B; Sergienko, G; Terra, A; Arnoux, G; Balshaw, N; Clever, M; Edlingdon, T; Egner, S; Farthing, J; Hartl, M; Horton, L; Kampf, D; Klammer, J; Lambertz, H T; Matthews, G F; Morlock, C; Murari, A; Reindl, M; Riccardo, V; Samm, U; Sanders, S; Stamp, M; Williams, J; Zastrow, K D; Zauner, C
2012-10-01
A new endoscope with optimised divertor view has been developed in order to survey and monitor the emission of specific impurities such as tungsten and the remaining carbon as well as beryllium in the tungsten divertor of JET after the implementation of the ITER-like wall in 2011. The endoscope is a prototype for testing an ITER relevant design concept based on reflective optics only. It may be subject to high neutron fluxes as expected in ITER. The operating wavelength range, from 390 nm to 2500 nm, allows the measurements of the emission of all expected impurities (W I, Be II, C I, C II, C III) with high optical transmittance (≥ 30% in the designed wavelength range) as well as high spatial resolution that is ≤ 2 mm at the object plane and ≤ 3 mm for the full depth of field (± 0.7 m). The new optical design includes options for in situ calibration of the endoscope transmittance during the experimental campaign, which allows the continuous tracing of possible transmittance degradation with time due to impurity deposition and erosion by fast neutral particles. In parallel to the new optical design, a new type of possibly ITER relevant shutter system based on pneumatic techniques has been developed and integrated into the endoscope head. The endoscope is equipped with four digital CCD cameras, each combined with two filter wheels for narrow band interference and neutral density filters. Additionally, two protection cameras in the λ > 0.95 μm range have been integrated in the optical design for the real time wall protection during the plasma operation of JET.
MeV gamma-ray observation with a well-defined point spread function based on electron tracking
NASA Astrophysics Data System (ADS)
Takada, A.; Tanimori, T.; Kubo, H.; Mizumoto, T.; Mizumura, Y.; Komura, S.; Kishimoto, T.; Takemura, T.; Yoshikawa, K.; Nakamasu, Y.; Matsuoka, Y.; Oda, M.; Miyamoto, S.; Sonoda, S.; Tomono, D.; Miuchi, K.; Kurosawa, S.; Sawano, T.
2016-07-01
The field of MeV gamma-ray astronomy has not opened up until recently owing to imaging difficulties. Compton telescopes and coded-aperture imaging cameras are used as conventional MeV gamma-ray telescopes; however, their observations are obstructed by a huge background, leading to uncertainty in the point spread function (PSF). Conventional MeV gamma-ray telescope imaging relies on optimization algorithms such as the ML-EM method, making it difficult to define the correct PSF, which is the uncertainty of a gamma-ray image on the celestial sphere. Recently, we defined and evaluated the PSF of an electron-tracking Compton camera (ETCC) and a conventional Compton telescope, and thereby obtained an important result: the PSF strongly depends on the precision of the recoil electron direction (scatter plane deviation, SPD) and is not equal to the angular resolution measure (ARM). Now, we are constructing a 30 cm-cubic ETCC for a second balloon experiment, the Sub-MeV gamma-ray Imaging Loaded-on-balloon Experiment (SMILE-II). The current ETCC has an effective area of 1 cm2 at 300 keV, a PSF of 10° at FWHM for 662 keV, and a large field of view of 3 sr. We will upgrade this ETCC to have an effective area of several cm2 and a PSF of 5° using a CF4-based gas. Using the upgraded ETCC, our observation plan for SMILE-II is to map the electron-positron annihilation line and the 1.8 MeV line from 26Al. In this paper, we report on the current performance of the ETCC and on our observation plan.
NASA Astrophysics Data System (ADS)
Lynam, Jeff R.
2001-09-01
A more highly integrated electro-optical sensor suite using Laser Illuminated Viewing and Ranging (LIVAR) techniques is being developed under the Army Advanced Concept Technology-II (ACT-II) program for enhanced man-portable target surveillance and identification. The ManPortable LIVAR system currently in development employs a wide array of sensor technologies that provides the foot-bound soldier and UGV significant advantages in lightweight, fieldable target location, ranging, and imaging systems. The unit incorporates a wide field-of-view, 5° x 3°, uncooled LWIR passive sensor for primary target location. Laser range finding and active illumination are done with a triggered, flash-lamp-pumped, eyesafe micro-laser operating in the 1.5 micron region, used in conjunction with a range-gated, electron-bombarded CCD digital camera to image the target in a narrower, 0.3°, field of view. Target range is acquired using the integrated LRF, and target position is calculated using data from other onboard devices providing GPS coordinates, tilt, bank, and corrected magnetic azimuth. Range-gate timing and coordinated receiver-optics focus control allow target imaging operations to be optimized. The onboard control electronics provide power-efficient system operation for extended field use from the internal, rechargeable battery packs. Image data storage, transmission, and processing capabilities are also being incorporated to provide the best all-around support for the electronic battlefield in this type of system. The paper describes flash laser illumination technology, EBCCD camera technology with a flash laser detection system, and image resolution improvement through frame averaging.
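Range-gate timing as mentioned above is set by the light round-trip time to the target; a minimal sketch (illustrative helper names, not the ACT-II implementation):

```python
C = 299_792_458.0  # speed of light, m/s

def gate_delay(range_m):
    """Round-trip delay before opening the gate for a target at range_m."""
    return 2 * range_m / C

def gate_width(depth_m):
    """Gate duration needed to image a slab of the given depth in range."""
    return 2 * depth_m / C
```

A target at 1.5 km, for instance, returns its flash after about 10 microseconds, which sets when the EBCCD gate must open.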
Monitoring the spatial and temporal evolution of slope instability with Digital Image Correlation
NASA Astrophysics Data System (ADS)
Manconi, Andrea; Glueer, Franziska; Loew, Simon
2017-04-01
The identification and monitoring of ground deformation is important for an appropriate analysis and interpretation of unstable slopes. Displacements are usually monitored with in-situ techniques (e.g., extensometers, inclinometers, geodetic leveling, tachymeters, and D-GPS) and/or active remote sensing methods (e.g., LiDAR and radar interferometry). In particular situations, however, the choice of the appropriate monitoring system is constrained by site-specific conditions. Slope areas can be very remote and/or affected by rapid surface changes, and are thus hardly accessible and often unsafe for field installations. In many cases the use of remote sensing approaches might also be hindered by unsuitable acquisition geometries, poor spatial resolution and revisit times, and/or high costs. The increasing availability of digital imagery acquired from terrestrial photo and video cameras nowadays provides an additional source of data. The latter can be exploited to visually identify changes of the scene occurring over time, but also to quantify the evolution of surface displacements. Image processing analyses such as Digital Image Correlation (also known as pixel-offset or feature-tracking) have been demonstrated to provide a suitable alternative for detecting and monitoring surface deformation at high spatial and temporal resolutions. However, a number of intrinsic limitations have to be considered when dealing with optical imagery acquisition and processing, including the effects of light conditions, shadowing, and/or meteorological variables. Here we propose an algorithm to automatically select and process images acquired from time-lapse cameras. We aim at maximizing the results obtainable from large datasets of digital images acquired under different light and meteorological conditions, and at retrieving accurate information on the evolution of surface deformation.
We show a successful example of application of our approach in the Swiss Alps, more specifically in the Great Aletsch area, where slope instability was recently reactivated due to the progressive glacier retreat. At this location, time-lapse cameras have been installed during the last two years, ranging from low-cost and low-resolution webcams to more expensive high-resolution reflex cameras. Our results confirm that time-lapse cameras provide quantitative and accurate measurements of surface deformation evolution over space and time, especially in situations when other monitoring instruments fail.
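A single Digital Image Correlation step of the kind described above amounts to locating a reference patch inside a search window by normalized cross-correlation; the brute-force sketch below is illustrative, not the authors' algorithm:

```python
import numpy as np

def dic_offset(ref_patch, search_win):
    """Integer pixel offset (dy, dx) of ref_patch inside a larger search
    window, found by normalized cross-correlation (a minimal DIC step)."""
    ph, pw = ref_patch.shape
    sh, sw = search_win.shape
    # Zero-mean, unit-variance template; eps guards flat (zero-std) patches.
    r = (ref_patch - ref_patch.mean()) / (ref_patch.std() + 1e-12)
    best, best_score = (0, 0), -np.inf
    for dy in range(sh - ph + 1):
        for dx in range(sw - pw + 1):
            w = search_win[dy:dy + ph, dx:dx + pw]
            wn = (w - w.mean()) / (w.std() + 1e-12)
            score = np.mean(r * wn)  # normalized correlation coefficient
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best
```

Tracking the same patch across a time-lapse sequence yields a displacement time series; sub-pixel refinement and FFT-based correlation are the usual production optimizations.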
Objective evaluation of slanted edge charts
NASA Astrophysics Data System (ADS)
Hornung, Harvey
2015-01-01
Camera objective characterization methodologies are widely used in the digital camera industry. Most objective characterization systems rely on a chart with specific patterns; a software algorithm measures the degradation or difference between the captured image and the chart itself. The Spatial Frequency Response (SFR) method, which is part of the ISO 12233 standard, is now very commonly used in the imaging industry and is a very convenient way to measure a camera's modulation transfer function (MTF). The SFR algorithm can measure frequencies beyond the Nyquist frequency thanks to super-resolution, so it provides useful information on aliasing and can provide modulation for frequencies between half-Nyquist and Nyquist on all color channels of a color sensor with a Bayer pattern. The measurement process relies on a chart that is simple to manufacture: a straight transition from a bright reflectance to a dark one (black and white, for instance), whereas a sine chart requires precisely handling shades of gray, which can also create all sorts of issues with printers that rely on half-toning. However, no technology can create a perfect edge, so it is important to assess the quality of the chart and understand how it affects the accuracy of the measurement. In this article, I describe a protocol to characterize the MTF of a slanted-edge chart using a high-resolution flatbed scanner. The main idea is to use the RAW output of the scanner as a high-resolution micro-densitometer: since the signal is linear, it is suitable for measuring the chart MTF with the SFR algorithm. The scanner must be calibrated in sharpness: the scanner MTF is measured with a calibrated sine chart and inverted to compensate for the modulation loss from the scanner. Then the true chart MTF is computed.
This article compares measured MTF from commercial charts and charts printed on printers, compares how the contrast of the edge (using different shades of gray) affects the chart MTF, and concludes on the distance range and camera resolution over which the chart can reliably measure the camera MTF.
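The scanner compensation step described above exploits the fact that MTFs of cascaded linear stages multiply, so the chart MTF can be recovered by dividing the measured MTF by the scanner's MTF; in this hedged sketch the `floor` parameter is an assumption added to avoid noise amplification where the scanner response is weak:

```python
import numpy as np

def chart_mtf(measured_mtf, scanner_mtf, floor=0.05):
    """Recover the chart's own MTF from a scan of the chart.

    MTFs of cascaded stages multiply (measured = chart * scanner), so
    dividing out the scanner response isolates the chart. The scanner MTF
    is clamped at `floor` to avoid blowing up noise at high frequencies.
    """
    scanner = np.maximum(np.asarray(scanner_mtf, float), floor)
    return np.asarray(measured_mtf, float) / scanner
```

Both inputs are sampled on the same spatial-frequency axis; the scanner MTF comes from the calibrated sine-chart measurement.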
An image compression algorithm for a high-resolution digital still camera
NASA Technical Reports Server (NTRS)
Nerheim, Rosalee
1989-01-01
The Electronic Still Camera (ESC) project will provide for the capture and transmission of high-quality images without the use of film. The image quality will be superior to video and will approach the quality of 35mm film. The camera, which will have the same general shape and handling as a 35mm camera, will be able to send images to Earth in near real time. Images will be stored in computer memory (RAM) in removable cartridges readable by a computer. To save storage space, the image will be compressed and reconstructed at the time of viewing. Both lossless and lossy image compression algorithms are studied, described, and compared.
Hubble Space Telescope photographed by Electronic Still Camera
1993-12-04
S61-E-008 (4 Dec 1993) --- This view of the Earth-orbiting Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC) and downlinked to ground controllers soon afterward. This view was taken during rendezvous operations. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope. Over a period of five days, four of the crew members will work in alternating pairs outside Endeavour's shirt-sleeve environment. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
Electronic Still Camera image of Astronaut Claude Nicollier working with RMS
1993-12-05
S61-E-006 (5 Dec 1993) --- The robot arm controlling work of Swiss scientist Claude Nicollier was photographed with an Electronic Still Camera (ESC) and downlinked to ground controllers soon afterward. With the mission specialist's assistance, Endeavour's crew captured the Hubble Space Telescope (HST) on December 4, 1993. Four of the seven crew members will work in alternating pairs outside Endeavour's shirt-sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
Deep Near-Infrared Surveys and Young Brown Dwarf Populations in Star-Forming Regions
NASA Astrophysics Data System (ADS)
Tamura, M.; Naoi, T.; Oasa, Y.; Nakajima, Y.; Nagashima, C.; Nagayama, T.; Baba, D.; Nagata, T.; Sato, S.; Kato, D.; Kurita, M.; Sugitani, K.; Itoh, Y.; Nakaya, H.; Pickles, A.
2003-06-01
We are currently conducting three kinds of IR surveys of star-forming regions (SFRs) in order to search for very low-mass young stellar populations. The first is a deep JHKs-band (simultaneous) survey with the SIRIUS camera on the IRSF 1.4-m or UH 2.2-m telescopes. The second is a very deep JHKs survey with the CISCO IR camera on the Subaru 8.2-m telescope. The third is a high-resolution companion search around nearby YSOs with the CIAO adaptive-optics coronagraph IR camera on Subaru. In this contribution, we describe our SIRIUS camera and present preliminary results of the ongoing surveys with this new instrument.
Exploring the Universe with the Hubble Space Telescope
NASA Technical Reports Server (NTRS)
1990-01-01
A general overview is given of the operations, engineering challenges, and components of the Hubble Space Telescope. Deployment, checkout and servicing in space are discussed. The optical telescope assembly, focal plane scientific instruments, wide field/planetary camera, faint object spectrograph, faint object camera, Goddard high resolution spectrograph, high speed photometer, fine guidance sensors, second generation technology, and support systems and services are reviewed.
Procurement specification color graphic camera system
NASA Technical Reports Server (NTRS)
Prow, G. E.
1980-01-01
The performance and design requirements for a Color Graphic Camera System are presented. The system is a functional part of the Earth Observation Department Laboratory System (EODLS) and will be interfaced with Image Analysis Stations. It will convert the output of a raster scan computer color terminal into permanent, high resolution photographic prints and transparencies. The images displayed will usually be remotely sensed LANDSAT scenes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conder, A.; Mummolo, F. J.
The goal of the project was to develop a compact, large active area, high spatial resolution, high dynamic range, charge-coupled device (CCD) camera to replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating X-rays. The camera head and controller needed to be capable of operation within a vacuum environment and small enough to be fielded within the small vacuum target chambers at LLNL.
Plume propagation direction determination with SO2 cameras
NASA Astrophysics Data System (ADS)
Klein, Angelika; Lübcke, Peter; Bobrowski, Nicole; Kuhn, Jonas; Platt, Ulrich
2017-03-01
SO2 cameras are becoming an established tool for measuring sulfur dioxide (SO2) fluxes in volcanic plumes with good precision and high temporal resolution. The primary result of SO2 camera measurements is a time series of two-dimensional SO2 column density distributions (i.e. SO2 column density images). However, it is frequently overlooked that, in order to determine the correct SO2 fluxes, not only the SO2 column density, but also the distance between the camera and the volcanic plume, has to be precisely known. This is because cameras only measure angular extents of objects, while flux measurements require knowledge of the spatial plume extent. The distance to the plume may vary within the image array (i.e. the field of view of the SO2 camera) since the plume propagation direction (i.e. the wind direction) might not be parallel to the image plane of the SO2 camera. If the wind direction and thus the camera-plume distance are not well known, this error propagates into the determined SO2 fluxes and can cause errors exceeding 50 %. This source of error is independent of the frequently quoted (approximate) compensation of apparently higher SO2 column densities and apparently lower plume propagation velocities at non-perpendicular plume observation angles. Here, we propose a new method to estimate the propagation direction of the volcanic plume directly from SO2 camera image time series by analysing apparent flux gradients along the image plane. From the plume propagation direction and the known locations of the SO2 source (i.e. the volcanic vent) and the camera, the camera-plume distance can be determined. Besides being able to determine the plume propagation direction and thus the wind direction in the plume region directly from SO2 camera images, we additionally found that it is possible to detect changes of the propagation direction at a time resolution of the order of minutes.
In addition to theoretical studies we applied our method to SO2 flux measurements at Mt Etna and demonstrate that we obtain considerably more precise (up to a factor of 2 error reduction) SO2 fluxes. We conclude that studies on SO2 flux variability become more reliable by excluding the possible influences of propagation direction variations.
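The distance determination described above can be sketched in a few lines. This is an illustrative reduction to 2D with hypothetical coordinates, not the authors' code: the viewing ray of one camera pixel is intersected with the plume axis, which is defined by the vent location and the estimated propagation (wind) direction.

```python
import numpy as np

def camera_plume_distance(cam, vent, wind_dir, view_dir):
    """Solve cam + s*view_dir = vent + t*wind_dir; s is the camera-plume
    distance (in metres) along this viewing direction."""
    A = np.column_stack([view_dir, -wind_dir])
    s, t = np.linalg.solve(A, vent - cam)
    return s

cam = np.array([0.0, 0.0])                     # camera at the origin
vent = np.array([5000.0, 2000.0])              # hypothetical vent position [m]
wind = np.array([1.0, 0.0])                    # plume drifting along +x
view = np.array([np.cos(np.radians(25)), np.sin(np.radians(25))])
d = camera_plume_distance(cam, vent, wind, view)
```

Multiplying the recovered distance by the per-pixel angular extent of the camera converts apparent plume widths in pixels into metres, which is the quantity the flux integration actually needs.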
Computational photography with plenoptic camera and light field capture: tutorial.
Lam, Edmund Y
2015-11-01
Photography is a cornerstone of imaging. Ever since cameras became consumer products more than a century ago, we have witnessed great technological progress in optics and recording media, with digital sensors replacing photographic films in most instances. The latest revolution is computational photography, which seeks to make image reconstruction computation an integral part of the image formation process; in this way, there can be new capabilities or better performance in the overall imaging system. A leading effort in this area is called the plenoptic camera, which aims at capturing the light field of an object; proper reconstruction algorithms can then adjust the focus after the image capture. In this tutorial paper, we first illustrate the concept of plenoptic function and light field from the perspective of geometric optics. This is followed by a discussion on early attempts and recent advances in the construction of the plenoptic camera. We will then describe the imaging model and computational algorithms that can reconstruct images at different focus points, using mathematical tools from ray optics and Fourier optics. Last but not least, we will consider the trade-off in spatial resolution and highlight some research work to increase the spatial resolution of the resulting images.
NASA Technical Reports Server (NTRS)
1980-01-01
Components for an orbiting camera payload system (OCPS) include the large format camera (LFC), a gas supply assembly, and ground test, handling, and calibration hardware. The LFC, a high resolution large format photogrammetric camera for use in the cargo bay of the space transport system, is also adaptable to use on an RB-57 aircraft or on a free flyer satellite. Carrying 4000 feet of film, the LFC is usable over the visible to near IR, at V/h rates of from 11 to 41 milliradians per second, overlap of 10, 60, 70 or 80 percent and exposure times of from 4 to 32 milliseconds. With a 12 inch focal length it produces a 9 by 18 inch format (long dimension in line of flight) with full format low contrast resolution of 88 lines per millimeter (AWAR), full format distortion of less than 14 microns and a complement of 45 Reseau marks and 12 fiducial marks. Weight of the OCPS as supplied, fully loaded is 944 pounds and power dissipation is 273 watts average when in operation, 95 watts in standby. The LFC contains an internal exposure sensor, or will respond to external command. It is able to photograph starfields for inflight calibration upon command.
A single pixel camera video ophthalmoscope
NASA Astrophysics Data System (ADS)
Lochocki, B.; Gambin, A.; Manzanera, S.; Irles, E.; Tajahuerce, E.; Lancis, J.; Artal, P.
2017-02-01
There are several ophthalmic devices to image the retina, from fundus cameras capable of imaging the whole fundus to scanning ophthalmoscopes with photoreceptor resolution. Unfortunately, these devices are prone to a variety of ocular conditions like defocus and media opacities, which usually degrade the quality of the image. Here, we demonstrate a novel approach to image the retina in real-time using a single pixel camera, which has the potential to circumvent those optical restrictions. The imaging procedure is as follows: a set of spatially coded patterns is projected rapidly onto the retina using a digital micromirror device. At the same time, the inner-product intensity is measured for each pattern with a photomultiplier module. Subsequently, an image of the retina is reconstructed computationally. The obtained image resolution is up to 128 x 128 px with a real-time video framerate of up to 11 fps. Experimental results obtained in an artificial eye confirm the tolerance against defocus compared to a conventional multi-pixel array based system. Furthermore, the use of multiplexed illumination offers an SNR improvement leading to a lower illumination of the eye and hence an increase in the patient's comfort. In addition, the proposed system could enable imaging in wavelength ranges where cameras are not available.
Design Study of the Absorber Detector of a Compton Camera for On-Line Control in Ion Beam Therapy
NASA Astrophysics Data System (ADS)
Richard, M.-H.; Dahoumane, M.; Dauvergne, D.; De Rydt, M.; Dedes, G.; Freud, N.; Krimmer, J.; Letang, J. M.; Lojacono, X.; Maxim, V.; Montarou, G.; Ray, C.; Roellinghoff, F.; Testa, E.; Walenta, A. H.
2012-10-01
The goal of this study is to tune the design of the absorber detector of a Compton camera for prompt γ-ray imaging during ion beam therapy. The response of the Compton camera to a photon point source with a realistic energy spectrum (corresponding to the prompt γ-ray spectrum emitted during the carbon irradiation of a water phantom) is studied by means of Geant4 simulations. Our Compton camera consists of a stack of 2 mm thick silicon strip detectors as a scatter detector and of a scintillator plate as an absorber detector. Four scintillators are considered: LYSO, NaI, LaBr3 and BGO. LYSO and BGO appear as the most suitable materials, due to their high photo-electric cross-sections, which leads to a high percentage of fully absorbed photons. Depth-of-interaction measurements are shown to have limited influence on the spatial resolution of the camera. In our case, the thickness which gives the best compromise between a high percentage of photons that are fully absorbed and a low parallax error is about 4 cm for the LYSO detector and 4.5 cm for the BGO detector. The influence of the width of the absorber detector on the spatial resolution is not very pronounced as long as it is lower than 30 cm.
Characterization results from several commercial soft X-ray streak cameras
NASA Astrophysics Data System (ADS)
Stradling, G. L.; Studebaker, J. K.; Cavailler, C.; Launspach, J.; Planes, J.
The spatio-temporal performance of four soft X-ray streak cameras has been characterized. The objective in evaluating the performance capability of these instruments is to enable us to optimize experiment designs, to encourage quantitative analysis of streak data and to educate the ultra-high-speed photography and photonics community about the X-ray detector performance which is available. These measurements have been made collaboratively over the space of two years at the Forge pulsed X-ray source at Los Alamos and at the Ketjak laser facility at CEA Limeil-Valenton. The X-ray pulse lengths used for these measurements at these facilities were 150 psec and 50 psec respectively. The results are presented as dynamically-measured modulation transfer functions. Limiting temporal resolution values were also calculated. Emphasis is placed upon shot noise statistical limitations in the analysis of the data. Space charge repulsion in the streak tube limits the peak flux at ultra-short experiment durations. This limit results in a reduction of total signal and a decrease in signal-to-noise ratio in the streak image. The four cameras perform well, with 20 lp/mm resolution discernable in data from the French C650X, the Hadland X-Chron 540 and the Hamamatsu C1936X streak cameras. The Kentech X-ray streak camera has lower modulation and does not resolve below 10 lp/mm, but has a longer photocathode.
Coregistration of high-resolution Mars orbital images
NASA Astrophysics Data System (ADS)
Sidiropoulos, Panagiotis; Muller, Jan-Peter
2015-04-01
The systematic orbital imaging of the Martian surface started 4 decades ago with NASA's Viking Orbiter 1 & 2 missions, which were launched in August 1975 and acquired orbital images of the planet between 1976 and 1980. The result of this reconnaissance was the first medium-resolution (i.e. ≤ 300 m/pixel) global map of Mars, as well as a variety of high-resolution images (reaching up to 8 m/pixel) of special regions of interest. Over the last two decades NASA has sent 3 more spacecraft with onboard instruments for high-resolution orbital imaging: Mars Global Surveyor (MGS), carrying the Mars Orbital Camera - Narrow Angle (MOC-NA); Mars Odyssey, carrying the Thermal Emission Imaging System - Visual (THEMIS-VIS); and the Mars Reconnaissance Orbiter (MRO), carrying two distinct high-resolution cameras, the Context Camera (CTX) and the High-Resolution Imaging Science Experiment (HiRISE). Moreover, since 2004 ESA's Mars Express has carried the multispectral High Resolution Stereo Camera (HRSC), with resolution up to 12.5 m. Overall, this set of cameras has acquired more than 400,000 high-resolution images, i.e. with resolution better than 100 m and as fine as 25 cm/pixel. Notwithstanding the high spatial resolution of the available NASA orbital products, their accuracy of areo-referencing is often very poor. As a matter of fact, due to pointing inconsistencies, usually from errors in roll attitude, the acquired products may actually image areas tens of kilometers away from the point at which they are supposed to be looking. On the other hand, since 2004 the ESA Mars Express has been acquiring stereo images through the High Resolution Stereo Camera (HRSC), with resolution that is usually 12.5-25 metres per pixel. The achieved coverage is more than 64% for images with resolution finer than 20 m/pixel, while for ~40% of Mars, Digital Terrain Models (DTMs) have been produced which are co-registered with MOLA [Gwinner et al., 2010].
The HRSC images and DTMs represent the best available 3D reference frame for Mars, showing co-registration with MOLA of < 25 m (loc. cit.). In our work, the reference generated by HRSC terrain-corrected orthorectified images is used as a common reference frame to co-register all available high-resolution orbital NASA products into a common 3D coordinate system, thus allowing the examination of the changes that happen on the surface of Mars over time (such as seasonal flows [McEwen et al., 2011] or new impact craters [Byrne et al., 2009]). To automate this otherwise tedious manual task, we have developed an automatic co-registration pipeline that produces orthorectified versions of the NASA images in realistic time (i.e. from ~15 minutes to 10 hours per image, depending on size). In the first step of this pipeline, tie-points are extracted from the target NASA image and the reference HRSC image or image mosaic. Subsequently, the HRSC areo-reference information is used to transform the HRSC tie-points' pixel coordinates into 3D "world" coordinates. This way, a correspondence between the pixel coordinates of the target NASA image and the 3D "world" coordinates is established for each tie-point. This set of correspondences is used to estimate a non-rigid, 3D-to-2D transformation model, which transforms the target image into the HRSC reference coordinate system. Finally, correlation of the transformed target image and the HRSC image is employed to fine-tune the orthorectification results, thus generating results with sub-pixel accuracy. This method, which has proven accurate, fast, robust to resolution differences, and reliable when dealing with partially degraded data, will be presented, along with some example co-registration results that have been achieved by using it.
Acknowledgements: The research leading to these results has received partial funding from the STFC "MSSL Consolidated Grant" ST/K000977/1 and partial support from the European Union's Seventh Framework Programme (FP7/2007-2013) under iMars grant agreement n° 607379. References: [1] K. F. Gwinner, et al. (2010) Topography of Mars from global mapping by HRSC high-resolution digital terrain models and orthoimages: characteristics and performance. Earth and Planetary Science Letters 294, 506-519, doi:10.1016/j.epsl.2009.11.007. [2] A. McEwen, et al. (2011) Seasonal flows on warm martian slopes. Science , 333 (6043): 740-743. [3] S. Byrne, et al. (2009) Distribution of mid-latitude ground ice on mars from new impact craters. Science, 325(5948):1674-1676.
Camera calibration for multidirectional flame chemiluminescence tomography
NASA Astrophysics Data System (ADS)
Wang, Jia; Zhang, Weiguang; Zhang, Yuhong; Yu, Xun
2017-04-01
Flame chemiluminescence tomography (FCT), which combines computerized tomography theory and multidirectional chemiluminescence emission measurements, can realize instantaneous three-dimensional (3-D) diagnostics for flames with high spatial and temporal resolutions. One critical step of FCT is to record the projections by multiple cameras from different view angles. For high accuracy reconstructions, it requires that extrinsic parameters (the positions and orientations) and intrinsic parameters (especially the image distances) of the cameras be accurately calibrated first. Taking the focus effect of the camera into account, a modified camera calibration method was presented for FCT, and a 3-D calibration pattern was designed to solve the parameters. The precision of the method was evaluated by reprojecting feature points onto the cameras with the calibration results. The maximum root mean square error of the feature points' positions is 1.42 pixels, and that of the image distance is 0.0064 mm. An FCT system with 12 cameras was calibrated by the proposed method, and the 3-D CH* intensity of a propane flame was measured. The results showed that the FCT system provides reasonable reconstruction accuracy using the camera calibration results.
Towards next generation 3D cameras
NASA Astrophysics Data System (ADS)
Gupta, Mohit
2017-03-01
We are in the midst of a 3D revolution. Robots enabled by 3D cameras are beginning to autonomously drive cars, perform surgeries, and manage factories. However, when deployed in the real-world, these cameras face several challenges that prevent them from measuring 3D shape reliably. These challenges include large lighting variations (bright sunlight to dark night), presence of scattering media (fog, body tissue), and optically complex materials (metal, plastic). Due to these factors, 3D imaging is often the bottleneck in widespread adoption of several key robotics technologies. I will talk about our work on developing 3D cameras based on time-of-flight and active triangulation that addresses these long-standing problems. This includes designing `all-weather' cameras that can perform high-speed 3D scanning in harsh outdoor environments, as well as cameras that recover shape of objects with challenging material properties. These cameras are, for the first time, capable of measuring detailed (<100 microns resolution) scans in extremely demanding scenarios with low-cost components. Several of these cameras are making a practical impact in industrial automation, being adopted in robotic inspection and assembly systems.
Determining the Amount of Copper(II) Ions in a Solution Using a Smartphone
ERIC Educational Resources Information Center
Montangero, Marc
2015-01-01
When dissolving copper in nitric acid, copper(II) ions produce a blue-colored solution. It is possible to determine the concentration of copper(II) ions, focusing on the hue of the color, using a smartphone camera. A free app can be used to measure the hue of the solution, and with the help of standard copper(II) solutions, one can graph a…
Blue camera of the Keck cosmic web imager, fabrication and testing
NASA Astrophysics Data System (ADS)
Rockosi, Constance; Cowley, David; Cabak, Jerry; Hilyard, David; Pfister, Terry
2016-08-01
The Keck Cosmic Web Imager (KCWI) is a new facility instrument being developed for the W. M. Keck Observatory and funded for construction by the Telescope System Instrumentation Program (TSIP) of the National Science Foundation (NSF). KCWI is a bench-mounted spectrograph for the Keck II right Nasmyth focal station, providing integral field spectroscopy over a seeing-limited field up to 20" x 33" in extent. Selectable Volume Phase Holographic (VPH) gratings provide high efficiency and spectral resolution in the range of 1000 to 20000. The dual-beam design of KCWI passed a Preliminary Design Review in summer 2011. The detailed design of the KCWI blue channel (350 to 700 nm) is now nearly complete, with the red channel (530 to 1050 nm) planned for a phased implementation contingent upon additional funding. KCWI builds on the experience of the Caltech team in implementing the Cosmic Web Imager (CWI), in operation since 2009 at Palomar Observatory. KCWI adds considerable flexibility to the CWI design, and will take full advantage of the excellent seeing and dark sky above Mauna Kea with a selectable nod-and-shuffle observing mode. In this paper, models of the expected KCWI sensitivity and background subtraction capability are presented, along with a detailed description of the instrument design. The KCWI team is led by Caltech (project management, design and implementation) in partnership with the University of California at Santa Cruz (camera optical and mechanical design) and the W. M. Keck Observatory (program oversight and observatory interfaces). The optical design of the blue camera for the Keck Cosmic Web Imager (KCWI) by Harland Epps of the University of California, Santa Cruz is a lens assembly consisting of eight spherical optical elements. Half the elements are calcium fluoride and all elements are air spaced.
The design of the camera barrel is unique in that all the optics are secured in their respective cells with an RTV annulus without additional hardware such as retaining rings. The optical design and the robust lens mounting concept have allowed UCO/Lick to design a straightforward lens camera assembly. However, the alignment tolerance is a strict 15 μm for most elements. This drives the fabrication, assembly, and performance of the camera barrel.
Kottner, Sören; Ebert, Lars C; Ampanozi, Garyfalia; Braun, Marcel; Thali, Michael J; Gascho, Dominic
2017-03-01
Injuries such as bite marks or boot prints can leave distinct patterns on the body's surface and can be used for 3D reconstructions. Although various systems for 3D surface imaging have been introduced in the forensic field, most techniques are both cost-intensive and time-consuming. In this article, we present the VirtoScan, a mobile, multi-camera rig based on close-range photogrammetry. The system can be integrated into automated PMCT scanning procedures or used manually together with lifting carts, autopsy tables and examination couches. The VirtoScan is based on a moveable frame that carries 7 digital single-lens reflex cameras. A remote control is attached to each camera and allows the simultaneous triggering of the shutter release of all cameras. Data acquisition in combination with the PMCT scanning procedures took 3:34 min for the 3D surface documentation of one side of the body, compared to 20:20 min of acquisition time when using our in-house standard. A surface model comparison between the high-resolution output from our in-house standard and a high-resolution model from the multi-camera rig showed a mean surface deviation of 0.36 mm for the whole-body scan and 0.13 mm for a second comparison of a detailed section of the scan. The use of the multi-camera rig reduces the acquisition time for whole-body surface documentations in medico-legal examinations and provides a low-cost 3D surface scanning alternative for forensic investigations.
Re-scan confocal microscopy: scanning twice for better resolution
De Luca, Giulia M.R.; Breedijk, Ronald M.P.; Brandt, Rick A.J.; Zeelenberg, Christiaan H.C.; de Jong, Babette E.; Timmermans, Wendy; Azar, Leila Nahidi; Hoebe, Ron A.; Stallinga, Sjoerd; Manders, Erik M.M.
2013-01-01
We present a new super-resolution technique, Re-scan Confocal Microscopy (RCM), based on standard confocal microscopy extended with an optical (re-scanning) unit that projects the image directly on a CCD-camera. This new microscope has improved lateral resolution and strongly improved sensitivity while maintaining the sectioning capability of a standard confocal microscope. This simple technology is typically useful for biological applications where the combination of high resolution and high sensitivity is required. PMID:24298422
NASA Astrophysics Data System (ADS)
Wojciechowski, Adam M.; Karadas, Mürsel; Huck, Alexander; Osterkamp, Christian; Jankuhn, Steffen; Meijer, Jan; Jelezko, Fedor; Andersen, Ulrik L.
2018-03-01
Sensitive, real-time optical magnetometry with nitrogen-vacancy centers in diamond relies on accurate imaging of small (≪ 10^-2) fractional fluorescence changes across the diamond sample. We discuss the limitations on magnetic field sensitivity resulting from the limited number of photoelectrons that a camera can record in a given time. Several types of camera sensors are analyzed, and the smallest measurable magnetic field change is estimated for each type. We show that most common sensors are of limited use in such applications, while certain highly specific cameras allow achieving nanotesla-level sensitivity in 1 s of a combined exposure. Finally, we demonstrate the results obtained with a lock-in camera that paves the way for real-time, wide-field magnetometry at the nanotesla level and with a micrometer resolution.
NASA Astrophysics Data System (ADS)
Rumbaugh, Roy N.; Grealish, Kevin; Kacir, Tom; Arsenault, Barry; Murphy, Robert H.; Miller, Scott
2003-09-01
A new 4th generation MicroIR architecture is introduced as the latest in the highly successful Standard Camera Core (SCC) series by BAE SYSTEMS to offer an infrared imaging engine with greatly reduced size, weight, power, and cost. The advanced SCC500 architecture provides great flexibility in configuration to include multiple resolutions, an industry standard Real Time Operating System (RTOS) for customer specific software application plug-ins, and a highly modular construction for unique physical and interface options. These microbolometer based camera cores offer outstanding and reliable performance over an extended operating temperature range to meet the demanding requirements of real-world environments. A highly integrated lens and shutter is included in the new SCC500 product enabling easy, drop-in camera designs for quick time-to-market product introductions.
Brain single-photon emission CT physics principles.
Accorsi, R
2008-08-01
The basic principles of scintigraphy are reviewed and extended to 3D imaging. Single-photon emission computed tomography (SPECT) is a sensitive and specific 3D technique to monitor in vivo functional processes in both clinical and preclinical studies. SPECT/CT systems are becoming increasingly common and can provide accurately registered anatomic information as well. In general, SPECT is affected by low photon-collection efficiency, but in brain imaging, not all of the large FOV of clinical gamma cameras is needed: The use of fan- and cone-beam collimation trades off the unused FOV for increased sensitivity and resolution. The design of dedicated cameras aims at increased angular coverage and resolution by minimizing the distance from the patient. The corrections needed for quantitative imaging are challenging but can take advantage of the relative spatial uniformity of attenuation and scatter. Preclinical systems can provide submillimeter resolution in small animal brain imaging with workable sensitivity.
Optical digital microscopy for cyto- and hematological studies in vitro
NASA Astrophysics Data System (ADS)
Ganilova, Yu. A.; Dolmashkin, A. A.; Doubrovski, V. A.; Yanina, I. Yu.; Tuchin, V. V.
2013-08-01
The dependence of the spatial resolution and field of view of an optical microscope equipped with a CCD camera on the objective magnification has been experimentally investigated. Measurement of these characteristics has shown that a spatial resolution of 20-25 px/μm at a field of view of about 110 μm is quite realistic; this resolution is acceptable for a detailed study of the processes occurring in cells. It is proposed to expand the dynamic range of the digital camera by measuring and approximating its light characteristics, with subsequent plotting of the corresponding calibration curve. The biological objects of study were human adipose tissue cells, as well as erythrocytes and their immune complexes in human blood; both objects have been investigated in vitro. Application of optical digital microscopy to specific problems of cytology and hematology can be useful both in biomedical studies and in experiments with objects of nonbiological origin.
NASA Astrophysics Data System (ADS)
Li, Hao; Liu, Wenzhong; Zhang, Hao F.
2015-10-01
Rodent models are indispensable in studying various retinal diseases. Noninvasive, high-resolution retinal imaging of rodent models is highly desired for longitudinally investigating the pathogenesis and therapeutic strategies. However, due to severe aberrations, the retinal image quality in rodents can be much worse than that in humans. We numerically and experimentally investigated the influence of chromatic aberration and optical illumination bandwidth on retinal imaging. We confirmed that the rat retinal image quality decreased with increasing illumination bandwidth. We achieved the retinal image resolution of 10 μm using a 19 nm illumination bandwidth centered at 580 nm in a home-built fundus camera. Furthermore, we observed higher chromatic aberration in albino rat eyes than in pigmented rat eyes. This study provides a design guide for high-resolution fundus camera for rodents. Our method is also beneficial to dispersion compensation in multiwavelength retinal imaging applications.
Hubble Space Telescope faint object camera instrument handbook (Post-COSTAR), version 5.0
NASA Technical Reports Server (NTRS)
Nota, A. (Editor); Jedrzejewski, R. (Editor); Greenfield, P. (Editor); Hack, W. (Editor)
1994-01-01
The faint object camera (FOC) is a long-focal-ratio, photon-counting device capable of taking high-resolution two-dimensional images of the sky up to 14 by 14 arc seconds squared in size with pixel dimensions as small as 0.014 by 0.014 arc seconds squared in the 1150 to 6500 A wavelength range. Its performance approaches that of an ideal imaging system at low light levels. The FOC is the only instrument on board the Hubble Space Telescope (HST) to fully use the spatial resolution capabilities of the optical telescope assembly (OTA) and is one of the European Space Agency's contributions to the HST program.
Surface compositional variation on the comet 67P/Churyumov-Gerasimenko by OSIRIS data
NASA Astrophysics Data System (ADS)
Barucci, M. A.; Fornasier, S.; Feller, C.; Perna, D.; Hasselmann, H.; Deshapriya, J. D. P.; Fulchignoni, M.; Besse, S.; Sierks, H.; Forgia, F.; Lazzarin, M.; Pommerol, A.; Oklay, N.; Lara, L.; Scholten, F.; Preusker, F.; Leyrat, C.; Pajola, M.; Osiris-Rosetta Team
2015-10-01
Since the Rosetta mission arrived at the comet 67P/Churyumov-Gerasimenko (67P/C-G) in July 2014, the comet nucleus has been mapped by both the OSIRIS (Optical, Spectroscopic, and Infrared Remote Imaging System, [1]) NAC (Narrow Angle Camera) and WAC (Wide Angle Camera), acquiring a huge quantity of surface images at different wavelength bands, under variable illumination conditions and spatial resolution, and producing the most detailed maps of a comet nucleus surface at the highest spatial resolution. 67P/C-G's nucleus shows an irregular bi-lobed shape of complex morphology, with terrains showing intricate features [2, 3] and a surface that is heterogeneous at different scales.
The core of the nearby S0 galaxy NGC 7457 imaged with the HST planetary camera
NASA Technical Reports Server (NTRS)
Lauer, Tod R.; Faber, S. M.; Holtzman, Jon A.; Baum, William A.; Currie, Douglas G.; Ewald, S. P.; Groth, Edward J.; Hester, J. Jeff; Kelsall, T.
1991-01-01
A brief analysis is presented of images of the nearby S0 galaxy NGC 7457 obtained with the HST Planetary Camera. While the galaxy remains unresolved with the HST, the images reveal that any core most likely has r(c) less than 0.052 arcsec. The light distribution is consistent with a gamma = -1.0 power law inward to the resolution limit, with a possible stellar nucleus with a luminosity of 10 million solar luminosities. This result represents the first observation outside the Local Group of a galaxy nucleus at this spatial resolution, and it suggests that such small, high surface brightness cores may be common.
Comparison of mosaicking techniques for airborne images from consumer-grade cameras
USDA-ARS?s Scientific Manuscript database
Images captured from airborne imaging systems have the advantages of relatively low cost, high spatial resolution, and real/near-real-time availability. Multiple images taken from one or more flight lines could be used to generate a high-resolution mosaic image, which could be useful for diverse rem...
USDA-ARS's Scientific Manuscript database
Ultra high resolution digital aerial photography has great potential to complement or replace ground measurements of vegetation cover for rangeland monitoring and assessment. We investigated object-based image analysis (OBIA) techniques for classifying vegetation in southwestern U.S. arid rangelands...
USDA-ARS's Scientific Manuscript database
Significant advancements in photogrammetric Structure-from-Motion (SfM) software, coupled with improvements in the quality and resolution of smartphone cameras, has made it possible to create ultra-fine resolution three-dimensional models of physical objects using an ordinary smartphone. Here we pre...
Robust Video Stabilization Using Particle Keypoint Update and l1-Optimized Camera Path
Jeon, Semi; Yoon, Inhye; Jang, Jinbeum; Yang, Seungji; Kim, Jisung; Paik, Joonki
2017-01-01
Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in the flat region using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality of various portable cameras and can be applied to robot vision, driving assistant systems, and visual surveillance systems. PMID:28208622
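The path-smoothing step (ii) can be illustrated with a minimal sketch. Here a moving average stands in for the paper's l1-optimized total-variation smoothing; the function name, window size, and 1-D path representation are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def smooth_camera_path(path, window=15):
    """Low-pass a 1-D camera path (e.g. cumulative x-translation per frame).

    A moving average stands in for l1-optimized total-variation smoothing;
    the stabilizing correction for each frame is the difference between
    the smoothed and the raw path.
    """
    kernel = np.ones(window) / window
    # pad with edge values so the smoothed path has no boundary droop
    padded = np.pad(path, window // 2, mode="edge")
    smoothed = np.convolve(padded, kernel, mode="valid")[: len(path)]
    corrections = smoothed - path  # per-frame shift to apply when rendering
    return smoothed, corrections
```

Applying `corrections` as per-frame warps in the rendering step (iii) would yield the stabilized sequence.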
Calibration between Color Camera and 3D LIDAR Instruments with a Polygonal Planar Board
Park, Yoonsu; Yun, Seokmin; Won, Chee Sun; Cho, Kyungeun; Um, Kyhyun; Sim, Sungdae
2014-01-01
Calibration between color camera and 3D Light Detection And Ranging (LIDAR) equipment is an essential process for data fusion. The goal of this paper is to improve the calibration accuracy between a camera and a 3D LIDAR. In particular, we are interested in calibrating a low resolution 3D LIDAR with a relatively small number of vertical sensors. Our goal is achieved by employing a new methodology for the calibration board, which exploits 2D-3D correspondences. The 3D corresponding points are estimated from the scanned laser points on the polygonal planar board with adjacent sides. Since the lengths of adjacent sides are known, we can estimate the vertices of the board as a meeting point of two projected sides of the polygonal board. The estimated vertices from the range data and those detected from the color image serve as the corresponding points for the calibration. Experiments using a low-resolution LIDAR with 32 sensors show robust results. PMID:24643005
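The vertex-estimation idea (fit a line to each projected side of the board, then intersect the two lines) can be sketched in a few lines of NumPy; the function names and tolerances below are illustrative assumptions, not from the paper.

```python
import numpy as np

def fit_line(points):
    """Total-least-squares fit of a 2-D line: returns a point on the line
    (the centroid) and a unit direction (first right singular vector)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[0]

def intersect_lines(p1, d1, p2, d2):
    """Intersection of two 2-D lines in point-direction form:
    solve p1 + t1*d1 = p2 + t2*d2 for t1."""
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, p2 - p1)
    return p1 + t[0] * d1
```

With noisy laser samples along the two adjacent sides of the board, the fitted lines' intersection recovers the vertex even though no laser point falls exactly on it.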
2013-01-15
S48-E-007 (12 Sept 1991) --- Astronaut James F. Buchli, mission specialist, catches snack crackers as they float in the weightless environment of the Earth-orbiting Discovery. This image was transmitted by the Electronic Still Camera, Development Test Objective (DTO) 648. The ESC is making its initial appearance on a Space Shuttle flight. Electronic still photography is a new technology that enables a camera to electronically capture and digitize an image with resolution approaching film quality. The digital image is stored on removable hard disks or small optical disks, and can be converted to a format suitable for downlink transmission or enhanced using image processing software. The Electronic Still Camera (ESC) was developed by the Man-Systems Division at the Johnson Space Center and is the first model in a planned evolutionary development leading to a family of high-resolution digital imaging devices. H. Don Yeates, JSC's Man-Systems Division, is program manager for the ESC. THIS IS A SECOND GENERATION PRINT MADE FROM AN ELECTRONICALLY PRODUCED NEGATIVE
NASA Astrophysics Data System (ADS)
Trojanova, E.; Jakubek, J.; Turecek, D.; Sykora, V.; Francova, P.; Kolarova, V.; Sefc, L.
2018-01-01
The imaging method of SPECT (Single Photon Emission Computed Tomography) is used in nuclear medicine for the diagnosis of various diseases and organ malfunctions. The distribution of medically injected, inhaled, or ingested radionuclides (radiotracers) in the patient's body is imaged using a gamma-ray-sensitive camera with a suitable imaging collimator. The 3D image is then calculated by combining many images taken from different observation angles. Most SPECT systems use scintillator-based cameras. These cameras do not provide good energy resolution and do not allow efficient suppression of unwanted signals such as those caused by Compton scattering. The main goal of this work is the evaluation of Timepix3 detector properties for the SPECT method for functional imaging of small animals during preclinical studies. Advantageous Timepix3 properties such as energy and spatial resolution are exploited for significant image-quality improvement. Preliminary measurements were performed on a specially prepared plastic phantom with cavities filled with radioisotopes and then repeated with an in vivo mouse sample.
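One way good energy resolution suppresses Compton-scattered events is photopeak windowing: scattered photons arrive with degraded energy and fall outside a narrow window around the emission line. A minimal sketch (the Tc-99m line and the window width are illustrative assumptions, not parameters from this work):

```python
import numpy as np

def photopeak_mask(energies_kev, peak_kev=140.5, window_kev=10.0):
    """Boolean mask selecting events within +/- window_kev of the photopeak.

    140.5 keV is the Tc-99m line commonly used in SPECT. The better the
    detector's energy resolution, the tighter the window can be made,
    and the more Compton-scattered (energy-degraded) events are rejected.
    """
    e = np.asarray(energies_kev, dtype=float)
    return np.abs(e - peak_kev) <= window_kev
```

A detector with coarse energy resolution forces a wide window, letting more scattered events through and blurring the reconstructed image.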
High resolution imaging of the Venus night side using a Rockwell 128x128 HgCdTe array
NASA Technical Reports Server (NTRS)
Hodapp, K.-W.; Sinton, W.; Ragent, B.; Allen, D.
1989-01-01
The University of Hawaii operates an infrared camera with a 128x128 HgCdTe detector array on loan from JPL's High Resolution Imaging Spectrometer (HIRIS) project. The characteristics of this camera system are discussed. The infrared camera was used to obtain images of the night side of Venus prior to and after inferior conjunction in 1988. The images confirm Allen and Crawford's (1984) discovery of bright features on the dark hemisphere of Venus visible in the H and K bands. Our images of these features are the best obtained to date. Researchers derive a pseudo-rotation period of 6.5 days for these features and 1.74-micron brightness temperatures between 425 K and 480 K. The features are produced by nonuniform absorption, in the middle cloud layer (47 to 57 km altitude), of thermal radiation from the lower Venus atmosphere (20 to 30 km altitude). A more detailed analysis of the data is in progress.
Adaptive Wiener filter super-resolution of color filter array images.
Karch, Barry K; Hardie, Russell C
2013-08-12
Digital color cameras using a single detector array with a Bayer color filter array (CFA) require interpolation or demosaicing to estimate missing color information and provide full-color images. However, demosaicing does not specifically address fundamental undersampling and aliasing inherent in typical camera designs. Fast non-uniform interpolation based super-resolution (SR) is an attractive approach to reduce or eliminate aliasing and its relatively low computational load is amenable to real-time applications. The adaptive Wiener filter (AWF) SR algorithm was initially developed for grayscale imaging and has not previously been applied to color SR demosaicing. Here, we develop a novel fast SR method for CFA cameras that is based on the AWF SR algorithm and uses global channel-to-channel statistical models. We apply this new method as a stand-alone algorithm and also as an initialization image for a variational SR algorithm. This paper presents the theoretical development of the color AWF SR approach and applies it in performance comparisons to other SR techniques for both simulated and real data.
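For context, a classical non-adaptive Wiener restoration can be sketched in the frequency domain; the AWF SR method described above is a spatial-domain, locally adaptive relative of this filter, so the sketch below is background rather than the paper's algorithm, and the constant noise-to-signal ratio `nsr` is an assumption.

```python
import numpy as np

def wiener_restore(blurred, psf, nsr):
    """Frequency-domain Wiener restoration of a blurred image.

    W = H* / (|H|^2 + nsr) regularizes the inverse filter, where H is the
    frequency response of the point spread function and nsr is an assumed
    constant noise-to-signal power ratio.
    """
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))
```

As `nsr` grows, the filter attenuates frequencies where the blur response is weak instead of amplifying noise there, which is the same bias-variance trade-off the adaptive spatial-domain filter makes locally.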
Design and development of an airborne multispectral imaging system
NASA Astrophysics Data System (ADS)
Kulkarni, Rahul R.; Bachnak, Rafic; Lyle, Stacey; Steidley, Carl W.
2002-08-01
Advances in imaging technology and sensors have made airborne remote sensing systems viable for many applications that require reasonably good resolution at low cost. Digital cameras are making their mark on the market by providing high resolution at very high rates. This paper describes an aircraft-mounted imaging system (AMIS) that is being designed and developed at Texas A&M University-Corpus Christi (A&M-CC) with the support of a grant from NASA. The approach is to first develop and test a one-camera system that will be upgraded into a five-camera system offering multispectral capabilities. AMIS will be low cost, rugged, and portable, and will have its own battery power source. Its immediate use will be to acquire images of the coastal area of the Gulf of Mexico for a variety of studies covering a broad spectral range from the near-ultraviolet to the near-infrared. This paper describes AMIS and its characteristics, discusses the process for selecting the major components, and presents the progress.
Research on Geometric Calibration of Spaceborne Linear Array Whiskbroom Camera
Sheng, Qinghong; Wang, Qi; Xiao, Hui; Wang, Qing
2018-01-01
The geometric calibration of a spaceborne thermal-infrared camera with high spatial resolution and wide coverage can set benchmarks for providing accurate geographical coordinates for the retrieval of land surface temperature. Using linear array whiskbroom Charge-Coupled Device (CCD) arrays to image the Earth makes it possible to obtain wide-swath thermal-infrared images with high spatial resolution. Focusing on the whiskbroom characteristics of equal time intervals and unequal angles, the present study proposes a spaceborne linear-array-scanning imaging geometric model, whilst calibrating temporal system parameters and whiskbroom angle parameters. With the help of YG-14, China's first satellite equipped with high-spatial-resolution thermal-infrared cameras, imagery of Anyang and Taiyuan, China, is used to conduct a geometric calibration experiment and a verification test, respectively. Results have shown that the plane positioning accuracy without ground control points (GCPs) is better than 30 pixels and the plane positioning accuracy with GCPs is better than 1 pixel. PMID:29337885
Elemental mapping and microimaging by x-ray capillary optics.
Hampai, D; Dabagov, S B; Cappuccio, G; Longoni, A; Frizzi, T; Cibin, G; Guglielmotti, V; Sala, M
2008-12-01
Recently, many experiments have highlighted the advantage of using polycapillary optics for x-ray fluorescence studies. We have developed a special confocal scheme for micro x-ray fluorescence measurements that enables us to obtain not only elemental mapping of the sample but also, simultaneously, its x-ray image. We have designed a prototype compact x-ray spectrometer characterized by a spatial resolution of less than 100 μm for fluorescence and less than 10 μm for imaging. A pair of polycapillary lenses in a confocal configuration, together with a silicon drift detector, allows elemental studies of extended samples (approximately 3 mm) to be performed, while a CCD camera makes it possible to record an image of the same samples with 6 μm spatial resolution, limited only by the pixel size of the camera. By inserting a compound refractive lens between the sample and the CCD camera, we hope to develop an x-ray microscope for more enlarged images of the samples under test.
Near-infrared fluorescence imaging with a mobile phone (Conference Presentation)
NASA Astrophysics Data System (ADS)
Ghassemi, Pejhman; Wang, Bohan; Wang, Jianting; Wang, Quanzeng; Chen, Yu; Pfefer, T. Joshua
2017-03-01
Mobile phone cameras employ sensors with near-infrared (NIR) sensitivity, yet this capability has not been exploited for biomedical purposes. Removing the IR-blocking filter from a phone-based camera opens the door to a wide range of techniques and applications for inexpensive, point-of-care biophotonic imaging and sensing. This study provides proof of principle for one of these modalities: phone-based NIR fluorescence imaging. An imaging system was assembled using a 780 nm light source along with excitation and emission filters with 800 nm and 825 nm cut-off wavelengths, respectively. Indocyanine green (ICG) was used as an NIR fluorescence contrast agent in an ex vivo rodent model, a resolution test target, and a 3D-printed, tissue-simulating vascular phantom. Raw and processed images for red, green, and blue pixel channels were analyzed for quantitative evaluation of fundamental performance characteristics, including spectral sensitivity, detection linearity, and spatial resolution. Mobile phone results were compared with a scientific CCD. The spatial resolution of the CCD system was consistently superior to the phone's, and green phone-camera pixels showed better resolution than the red or blue channels. The CCD exhibited sensitivity similar to that of the processed red and blue pixel channels, yet a greater degree of detection linearity. Raw phone pixel data showed lower sensitivity but greater linearity than processed data. Overall, both qualitative and quantitative results provided strong evidence of the potential of phone-based NIR imaging, which may lead to a wide range of applications from cancer detection to glucose sensing.
Circuit design of an EMCCD camera
NASA Astrophysics Data System (ADS)
Li, Binhua; Song, Qian; Jin, Jianhui; He, Chun
2012-07-01
EMCCDs have been used in astronomical observations in many ways. Recently we developed a camera using an EMCCD TX285. The CCD chip is cooled to -100°C in an LN2 dewar. The camera controller consists of a driving board, a control board, and a temperature control board. Power supplies and driving clocks of the CCD are provided by the driving board; the timing generator is located in the control board. The timing generator and an embedded Nios II CPU are implemented in an FPGA. Moreover, the ADC and the data-transfer circuit are also in the control board and are controlled by the FPGA. The data transfer between the image workstation and the camera is done through a Camera Link frame grabber. The image-acquisition software is built using VC++ and Sapera LT. This paper describes the camera structure, the main components, and the circuit design of the video-signal-processing channel, clock driver, FPGA and Camera Link interfaces, and the temperature metering and control system. Some testing results are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng Hansheng
The ICF Program in China has made significant progress with the efforts of multiple labs in the past years. The eight-beam SG-II laser facility, upgraded from the two-beam SG-I facility, is nearly completed for 1.05 μm light output and is about to be operated for experiments. Some benchmark experiments have been conducted for disk targets. Advanced diagnostic techniques, such as an x-ray microscope with a 7-μm spatial resolution and x-ray framing cameras with a temporal resolution better than 65 ps, have been developed. Lower-energy pumping with a prepulse technique for a Ne-like Ti laser at 32.6 nm has succeeded, and shadowgraphy of a fine mesh has been demonstrated with the Ti laser beam. A national project, the SG-III laser facility, has been proposed to produce 60 kJ of blue light for target physics experiments and is being conceptually designed. New laser technology, including multipass amplification, large-aperture plasma electrode switches, and laser glass with fewer platinum grains, has been developed to meet the requirements of the SG-III Project. The Technical Integration Line (TIL), a scientific prototype beamlet of SG-III, will be built in the next few years.
Stereoscopic Configurations To Minimize Distortions
NASA Technical Reports Server (NTRS)
Diner, Daniel B.
1991-01-01
Proposed television system provides two stereoscopic displays. Two-camera, two-monitor system used in various camera configurations and with stereoscopic images on monitors magnified to various degrees. Designed to satisfy observer's need to perceive spatial relationships accurately throughout workspace or to perceive them at high resolution in small region of workspace. Potential applications include industrial, medical, and entertainment imaging and monitoring and control of telemanipulators, telerobots, and remotely piloted vehicles.
Observation of Planetary Motion Using a Digital Camera
ERIC Educational Resources Information Center
Meyn, Jan-Peter
2008-01-01
A digital SLR camera with a standard lens (50 mm focal length, f/1.4) on a fixed tripod is used to obtain photographs of the sky which contain stars down to 8th apparent magnitude. The angle of view is large enough to ensure visual identification of the photograph with a large sky region in a stellar map. The resolution is sufficient to…
The SALSA Project - High-End Aerial 3d Camera
NASA Astrophysics Data System (ADS)
Rüther-Kindel, W.; Brauchle, J.
2013-08-01
The ATISS measurement drone, developed at the University of Applied Sciences Wildau, is an electrically powered motor glider with a maximum take-off weight of 25 kg, including a payload capacity of 10 kg. Two 2.5 kW engines enable ultra-short take-off procedures, and the motor glider design results in a 1 h endurance. The concept of ATISS is based on the idea of strictly separating aircraft and payload functions, which makes ATISS a very flexible research platform for miscellaneous payloads. ATISS is equipped with an autopilot for autonomous flight patterns but remains under permanent pilot control from the ground. On the basis of ATISS the project SALSA was undertaken. The aim was to integrate a system for digital terrain modelling. Instead of a laser scanner, a new design concept was chosen, based on two synchronized high-resolution digital cameras, one in a fixed nadir orientation and the other in an oblique orientation. Thus, images of every object on the ground are taken from different view angles. This new measurement camera system, MACS-TumbleCam, was developed at the German Aerospace Center DLR Berlin-Adlershof especially for the ATISS payload concept. A special advantage in comparison to laser scanning is that, instead of a cloud of points, a surface including texture is generated, and a high-end inertial orientation system can be omitted. The first test flights show a ground resolution of 2 cm and a height resolution of 3 cm, which underline the extraordinary capabilities of ATISS and the MACS measurement camera system.
Diving-Flight Aerodynamics of a Peregrine Falcon (Falco peregrinus)
Ponitz, Benjamin; Schmitz, Anke; Fischer, Dominik; Bleckmann, Horst; Brücker, Christoph
2014-01-01
This study investigates the aerodynamics of the falcon Falco peregrinus while diving. During a dive, peregrines can reach velocities of more than 320 km/h. Unfortunately, in freely roaming falcons, these high velocities prohibit a precise determination of flight parameters such as velocity and acceleration as well as body shape and wing contour. Therefore, individual F. peregrinus were trained to dive in front of a vertical dam with a height of 60 m. The presence of a well-defined background allowed us to reconstruct the flight path and the body shape of the falcon during certain flight phases. Flight trajectories were obtained with a stereo high-speed camera system. In addition, body images of the falcon were taken from two perspectives with a high-resolution digital camera. The dam allowed us to match the high-resolution images obtained from the digital camera with the corresponding images taken with the high-speed cameras. Using these data we built a life-size model of F. peregrinus and used it to measure the drag and lift forces in a wind-tunnel. We compared these forces acting on the model with the data obtained from the 3-D flight path trajectory of the diving F. peregrinus. Visualizations of the flow in the wind-tunnel uncovered details of the flow structure around the falcon's body, which suggest local regions of flow separation. High-resolution pictures of the diving peregrine indicate that feathers pop up in the equivalent regions where flow separation in the model falcon occurred. PMID:24505258
NASA Astrophysics Data System (ADS)
Mugrauer, M.
2016-03-01
The Cassegrain-Teleskop-Kamera (CTK-II) and the Refraktor-Teleskop-Kamera (RTK) are two CCD-imagers which are operated at the 25 cm Cassegrain and 20 cm refractor auxiliary telescopes of the University Observatory Jena. This article describes the main characteristics of these instruments. The properties of the CCD-detectors, the astrometry, the image quality, and the detection limits of both CCD-cameras, as well as some results of ongoing observing projects, carried out with these instruments, are presented. Based on observations obtained with telescopes of the University Observatory Jena, which is operated by the Astrophysical Institute of the Friedrich-Schiller-University.
Development of plenoptic infrared camera using low dimensional material based photodetectors
NASA Astrophysics Data System (ADS)
Chen, Liangliang
Infrared (IR) sensors have extended imaging from the submicron visible spectrum to wavelengths of tens of microns and have been widely used for military and civilian applications. Conventional IR cameras based on bulk semiconductor materials suffer from low frame rates, low resolution, temperature dependence, and high cost, while carbon nanotube (CNT) nanotechnology, based on low-dimensional materials, has made much progress in research and industry. The unique properties of CNTs motivate the investigation of CNT-based IR photodetectors and imaging systems, addressing the sensitivity, speed, and cooling difficulties of state-of-the-art IR imaging. Reliability and stability are critical to the transition from nanoscience to nanoengineering, especially for infrared sensing; this matters not only for the fundamental understanding of CNT photoresponse-induced processes, but also for the development of a novel infrared-sensitive material with unique optical and electrical features. In the proposed research, a sandwich-structured sensor was fabricated between two polymer layers. The polyimide substrate isolated the sensor from background noise, and a top parylene packing blocked humid environmental factors. At the same time, the fabrication process was optimized by dielectrophoresis with real-time electrical monitoring and by multiple annealing steps to improve fabrication yield and sensor performance. The nanoscale infrared photodetector was characterized using digital microscopy and a precise linear stage in order to understand it fully. In addition, a low-noise, high-gain readout system was designed together with the CNT photodetector to make the nano-sensor IR camera feasible. To explore more of the infrared light field, we employ compressive sensing algorithms in light-field sampling, 3-D imaging, and compressive video sensing.
The redundancy of the whole light field, including angular images for the light field, binocular images for the 3-D camera, and temporal information of the video streams, is extracted and expressed in a compressive approach. Computational algorithms are then applied to reconstruct images beyond 2D static information. Super-resolution signal processing is subsequently used to enhance the image spatial resolution. The whole camera system provides deeply detailed content for infrared spectrum sensing.
Supporting lander and rover operation: a novel super-resolution restoration technique
NASA Astrophysics Data System (ADS)
Tao, Yu; Muller, Jan-Peter
2015-04-01
Higher-resolution imaging data are always desirable for critical rover engineering operations, such as landing site selection, path planning, and optical localisation. For current Mars missions, 25 cm HiRISE images have been widely used by the MER and MSL engineering teams for rover path planning and location registration/adjustment. However, 25 cm resolution is not high enough to view individual rocks (≤2 m in size) or to visualise the types of sedimentary features that rover onboard cameras might observe. Nevertheless, due to various physical constraints (e.g. telescope size and mass) of the imaging instruments themselves, one needs to trade off spatial resolution and bandwidth. This means that future imaging systems are likely to be limited to resolving features larger than 25 cm. We have developed a novel super-resolution algorithm/pipeline to restore a higher-resolution image from the non-redundant sub-pixel information contained in multiple lower-resolution raw images [Tao & Muller 2015]. We will demonstrate this with experiments performed using 5-10 overlapping 25 cm HiRISE images for MER-A, MER-B, and MSL to resolve 5-10 cm super-resolution images that can be directly compared to rover imagery at a range of 5 metres from the rover cameras, but that in our case can be used to visualise features many kilometres away from the actual rover traverse. We will demonstrate how these super-resolution images, together with image-understanding software, can be used to quantify rock size-frequency distributions as well as measure sedimentary rock layers for several critical sites, for comparison with rover orthorectified image mosaics, to demonstrate the value of our super-resolution images in supporting future lander and rover operations.
We present the potential of super-resolution for virtual exploration of the ~400 HiRISE areas which have been viewed 5 or more times, and the potential application of this technique to all of the ESA ExoMars Trace Gas Orbiter CaSSIS stereo, multi-angle, and colour camera images from 2017 onwards. Acknowledgements: The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement No.312377 PRoViDE.
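The core idea of combining non-redundant sub-pixel information from overlapping images can be illustrated with a naive shift-and-add sketch. A real super-resolution restoration pipeline such as the one described above must handle non-integer shifts, blur, and noise; this toy assumes known integer offsets on the fine grid, and all names are illustrative.

```python
import numpy as np

def shift_and_add_sr(low_res_frames, shifts, scale):
    """Naive shift-and-add super-resolution.

    Each low-res frame is placed onto a `scale`-times finer grid at its
    known sub-pixel offset (dy, dx), expressed in fine-grid pixels;
    overlapping samples are averaged.
    """
    h, w = low_res_frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(low_res_frames, shifts):
        acc[dy::scale, dx::scale] += frame
        cnt[dy::scale, dx::scale] += 1
    cnt[cnt == 0] = 1  # leave unobserved fine-grid cells at zero
    return acc / cnt
```

When the offsets of the frames tile all fine-grid phases, the high-resolution scene is recovered exactly in this noise-free toy; with fewer or repeated phases, some fine-grid cells stay unobserved, which is why real pipelines add interpolation and regularization.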
Advanced High-Definition Video Cameras
NASA Technical Reports Server (NTRS)
Glenn, William
2007-01-01
A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 x 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.