Science.gov

Sample records for camera lroc images

  1. Lunar Reconnaissance Orbiter Camera (LROC) instrument overview

    USGS Publications Warehouse

    Robinson, M.S.; Brylow, S.M.; Tschimmel, M.; Humm, D.; Lawrence, S.J.; Thomas, P.C.; Denevi, B.W.; Bowman-Cisneros, E.; Zerr, J.; Ravine, M.A.; Caplinger, M.A.; Ghaemi, F.T.; Schaffner, J.A.; Malin, M.C.; Mahanti, P.; Bartels, A.; Anderson, J.; Tran, T.N.; Eliason, E.M.; McEwen, A.S.; Turtle, E.; Jolliff, B.L.; Hiesinger, H.

    2010-01-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) and Narrow Angle Cameras (NACs) are on the NASA Lunar Reconnaissance Orbiter (LRO). The WAC is a 7-color push-frame camera (100 and 400 m/pixel in the visible and UV, respectively), while the two NACs are monochrome narrow-angle linescan imagers (0.5 m/pixel). The primary mission of LRO is to obtain measurements of the Moon that will enable future human lunar exploration. The overarching goals of the LROC investigation include landing site identification and certification, mapping of permanently shadowed and permanently sunlit polar regions, meter-scale mapping of polar regions, global multispectral imaging, a global morphology base map, characterization of regolith properties, and determination of current impact hazards.

  2. Flight Calibration of the LROC Narrow Angle Camera

    NASA Astrophysics Data System (ADS)

    Humm, D. C.; Tschimmel, M.; Brylow, S. M.; Mahanti, P.; Tran, T. N.; Braden, S. E.; Wiseman, S.; Danton, J.; Eliason, E. M.; Robinson, M. S.

    2015-09-01

    Characterization and calibration are vital for instrument commanding and image interpretation in remote sensing. The Lunar Reconnaissance Orbiter Camera Narrow Angle Camera (LROC NAC) takes 500 Mpixel greyscale images of lunar scenes at 0.5 meters/pixel. It uses two nominally identical line scan cameras for a larger crosstrack field of view. Stray light, spatial crosstalk, and nonlinearity were characterized using flight images of the Earth and the lunar limb. These are important for imaging shadowed craters, studying ~1 meter size objects, and photometry, respectively. Background, nonlinearity, and flatfield corrections have been implemented in the calibration pipeline. An eight-column pattern in the background is corrected. The detector is linear for DN = 600-2000, but a signal-dependent additive correction is required and applied for DN < 600. A predictive model of detector temperature and dark level was developed to command the dark level offset. This avoids images with a cutoff at DN = 0 and minimizes quantization error in companding. Absolute radiometric calibration is derived from comparison of NAC images with ground-based images taken with the Robotic Lunar Observatory (ROLO) at much lower spatial resolution but with the same photometric angles.
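
The correction sequence described above can be sketched as follows. This is a minimal illustration only: the dark level, flatfield, and the functional form of the low-DN additive correction are placeholders, not the actual LROC pipeline coefficients.

```python
import numpy as np

def calibrate_nac(raw_dn, dark_level, flatfield, nonlin_coeff=0.01):
    """Background, nonlinearity, and flatfield steps in the order the
    abstract lists them.  All coefficients are placeholders, not the
    real LROC pipeline values."""
    dn = raw_dn.astype(float) - dark_level        # background (dark) removal
    # Detector is linear for DN = 600-2000; below DN = 600 a
    # signal-dependent additive correction is applied (the functional
    # form here is assumed for illustration).
    low = dn < 600.0
    dn[low] += nonlin_coeff * (600.0 - dn[low])
    return dn / flatfield                         # flatfield normalization

frame = np.full((4, 4), 500.0)
cal = calibrate_nac(frame, dark_level=100.0, flatfield=np.ones((4, 4)))
```

In the real pipeline the commanded dark level comes from the predictive detector-temperature model, and the flatfield from ground and flight calibration data.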

  4. Initial Results of 3D Topographic Mapping Using Lunar Reconnaissance Orbiter Camera (LROC) Stereo Imagery

    NASA Astrophysics Data System (ADS)

    Li, R.; Oberst, J.; McEwen, A. S.; Archinal, B. A.; Beyer, R. A.; Thomas, P. C.; Chen, Y.; Hwangbo, J.; Lawver, J. D.; Scholten, F.; Mattson, S. S.; Howington-Kraus, A. E.; Robinson, M. S.

    2009-12-01

    The Lunar Reconnaissance Orbiter (LRO), launched June 18, 2009, carries the Lunar Reconnaissance Orbiter Camera (LROC) as one of seven remote sensing instruments on board. The camera system is equipped with a Wide Angle Camera (WAC) and two Narrow Angle Cameras (NACs) for systematic lunar surface mapping and detailed site characterization for potential landing site selection and resource identification. The LROC WAC is a pushframe camera with five 14-line by 704-sample framelets for the visible bands and two 16-line by 512-sample (summed 4x to 4 by 128) UV bands. The WAC can also acquire monochrome images in a 14-line by 1024-sample format. At the nominal 50-km orbit, the ground scale is 75 m/pixel for the visible bands and 383 m/pixel for the UV bands. Overlapping WAC images from adjacent orbits can be used to map topography at a scale of a few hundred meters. The two panchromatic NAC cameras are pushbroom imaging sensors, each with a 700-mm focal length Cassegrain telescope. The two NAC cameras are aligned with a small overlap in the cross-track direction so that they cover a 5-km swath with a combined field of view (FOV) of 5.6°. At an altitude of 50 km, each NAC provides panchromatic images from its 5,000-pixel linear CCD at a ground scale of 0.5 m/pixel. Calibration of the cameras was performed using precision collimator measurements to determine the camera principal points and radial lens distortion. The relative orientation of the two NAC cameras is estimated by a boresight calibration using double and triple overlapping NAC images of the lunar surface. These calibration results are incorporated into a photogrammetric bundle adjustment (BA), which models the LROC camera imaging geometry, in order to refine the exterior orientation (EO) parameters initially retrieved from the SPICE kernels. The improved EO parameters significantly enhance the quality of topographic products derived from LROC NAC imagery.
In addition, an analysis of the spacecraft jitter effect is performed by measuring lunar surface features in the NAC CCD overlap strip in both image space and object space. Topographic and cartographic data processing results and products derived from LROC NAC and WAC stereo imagery using different software systems from several participating institutions of the LROC team will be presented, including results of calibration, bundle adjustment, jitter analysis, DEMs, orthophotos, and cartographic maps.
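
As an aside, the quoted NAC ground scale follows from simple pinhole geometry. The sketch below is illustrative; the 7 micron pixel pitch is an assumed value chosen because it reproduces the stated 0.5 m/pixel from the 50-km orbit with the 700-mm optic.

```python
# Pinhole-camera ground sample distance: GSD = H * p / f.
def ground_sample_distance(altitude_m, pixel_pitch_m, focal_length_m):
    return altitude_m * pixel_pitch_m / focal_length_m

# NAC: 700 mm optic viewed from the 50 km orbit; a 7 micron pixel
# pitch (assumed here) reproduces the quoted 0.5 m/pixel ground scale.
gsd = ground_sample_distance(50_000.0, 7e-6, 0.700)

# Each 5000-pixel NAC line then spans 2.5 km on the ground; the two
# overlapping cameras together cover the quoted ~5 km swath.
swath_per_camera_m = gsd * 5000
```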

  5. Characterization of previously unidentified lunar pyroclastic deposits using Lunar Reconnaissance Orbiter Camera (LROC) data

    USGS Publications Warehouse

    Gustafson, J. Olaf; Bell, James F.; Gaddis, Lisa R.; Hawke, B. Ray; Giguere, Thomas A.

    2012-01-01

    We used a Lunar Reconnaissance Orbiter Camera (LROC) global monochrome Wide-angle Camera (WAC) mosaic to conduct a survey of the Moon to search for previously unidentified pyroclastic deposits. Promising locations were examined in detail using LROC multispectral WAC mosaics, high-resolution LROC Narrow Angle Camera (NAC) images, and Clementine multispectral (ultraviolet-visible or UVVIS) data. Out of 47 potential deposits chosen for closer examination, 12 were selected as probable newly identified pyroclastic deposits. Potential pyroclastic deposits were generally found in settings similar to previously identified deposits, including areas within or near mare deposits adjacent to highlands, within floor-fractured craters, and along fissures in mare deposits. However, a significant new finding is the discovery of localized pyroclastic deposits within floor-fractured craters Anderson E and F on the lunar farside, isolated from other known similar deposits. Our search confirms that most major regional and localized low-albedo pyroclastic deposits have been identified on the Moon down to ~100 m/pix resolution, and that additional newly identified deposits are likely to be either isolated small deposits or additional portions of discontinuous, patchy deposits.

  6. LROC NAC Stereo Anaglyphs

    NASA Astrophysics Data System (ADS)

    Mattson, S.; McEwen, A. S.; Speyerer, E.; Robinson, M. S.

    2012-12-01

    The Lunar Reconnaissance Orbiter Camera Narrow Angle Camera (LROC NAC) acquires high resolution (50 to 200 cm pixel scale) images of the Moon. In operation since June 2009, LROC NAC acquires geometric stereo pairs by rolling off-nadir on subsequent orbits. A new automated processing system currently in development will produce anaglyphs from most of the NAC geometric stereo pairs. An anaglyph is an image formed by placing one image from the stereo pair in the red channel, and the other image from the stereo pair in the green and blue channels, so that together with red-blue or red-cyan glasses, the 3D information in the pair can be readily viewed. These new image products will make qualitative interpretation of the lunar surface in 3D more accessible, without the need for intensive computational resources or special equipment. The LROC NAC is composed of two separate pushbroom CCD cameras (NAC L and R) aligned to increase the full swath width to 5 km from an altitude of 50 km. Development of the anaglyph processing system incorporates stereo viewing geometry, proper alignment of the NAC L and R frames, and optimal contrast normalization of the stereo pair to minimize extreme brightness differences, which can make stereo viewing difficult in an anaglyph. The LROC NAC anaglyph pipeline is based on a similar automated system developed for the HiRISE camera, on the Mars Reconnaissance Orbiter. Improved knowledge of camera pointing and spacecraft position allows for the automatic registration of the L and R frames by map projecting them to a polar stereographic projection. One half of the stereo pair must then be registered to the other so there is no offset in the vertical (y) direction. Stereo viewing depends on parallax only in the horizontal (x) direction. High resolution LROC NAC anaglyphs will be made available to the lunar science community and to the public on the LROC web site (http://lroc.sese.asu.edu).
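
The red-cyan channel assignment described above is straightforward to sketch. This is illustrative only; the operational pipeline also handles map projection, L/R registration, and contrast normalization before this step.

```python
import numpy as np

def make_anaglyph(left, right):
    """Red-cyan anaglyph: the left image of the registered stereo pair
    drives the red channel; the right image drives green and blue."""
    h, w = left.shape
    rgb = np.empty((h, w, 3), dtype=left.dtype)
    rgb[..., 0] = left   # red channel
    rgb[..., 1] = right  # green channel
    rgb[..., 2] = right  # blue channel
    return rgb
```

Viewed through red-cyan glasses, each eye then sees only one image of the pair, and horizontal parallax between the two produces the depth impression.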

  7. Investigating at the Moon With new Eyes: The Lunar Reconnaissance Orbiter Mission Camera (LROC)

    NASA Astrophysics Data System (ADS)

    Hiesinger, H.; Robinson, M. S.; McEwen, A. S.; Turtle, E. P.; Eliason, E. M.; Jolliff, B. L.; Malin, M. C.; Thomas, P. C.

    The Lunar Reconnaissance Orbiter (LRO) mission is scheduled for launch in October 2008 as a first step to return humans to the Moon by 2018. The main goals of the Lunar Reconnaissance Orbiter Camera (LROC) are to: 1) assess meter- and smaller-scale features for safety analyses of potential lunar landing sites near polar resources, and elsewhere on the Moon; and 2) acquire multi-temporal images of the poles to characterize the polar illumination environment (100 m scale), identifying regions of permanent shadow and permanent or near-permanent illumination over a full lunar year. In addition, LROC will return six high-value datasets: 1) meter-scale maps of regions of permanent or near-permanent illumination of polar massifs; 2) high-resolution topography through stereogrammetric and photometric stereo analyses for potential landing sites; 3) a global multispectral map in 7 wavelengths (300-680 nm) to characterize lunar resources, in particular ilmenite; 4) a global 100-m/pixel basemap with incidence angles (60-80 degrees) favorable for morphologic interpretations; 5) images of a variety of geologic units at sub-meter resolution to investigate physical properties and regolith variability; and 6) meter-scale coverage overlapping with Apollo Panoramic images (1-2 m/pixel) to document the number of small impacts since 1971-1972 and to estimate hazards for future surface operations.
LROC consists of two narrow-angle cameras (NACs), which will provide 0.5-m scale panchromatic images over a 5-km swath; a wide-angle camera (WAC) to acquire images at about 100 m/pixel in seven color bands over a 100-km swath; and a common Sequence and Compressor System (SCS). Each NAC has a 700-mm-focal-length optic that images onto a 5000-pixel CCD line array, providing a cross-track field of view (FOV) of 2.86 degrees. The NAC readout noise is better than 100 e-, and the data are sampled at 12 bits. Its internal buffer holds 256 MB of uncompressed data, enough for a full-swath image 25 km long or a 2x2-binned image 100 km long. The WAC has two 6-mm-focal-length lenses imaging onto the same 1000 x 1000 pixel, electronically shuttered CCD area array, one imaging in the visible/near IR and the other in the UV. Each has a cross-track FOV of 90 degrees. From the nominal 50-km orbit, the WAC will have a resolution of 100 m/pixel in the visible and a swath width of ~100 km. The seven-band color capability of the WAC is achieved by color filters mounted directly over the detector, providing different sections of the CCD with different filters [1]. The readout noise is less than 40 e-, and, as with the NAC, pixel values are digitized to 12 bits and may be subsequently converted to 8-bit values. The total mass of the LROC system is about 12 kg; the total LROC power consumption averages 22 W (30 W peak). Assuming a downlink with lossless compression, LRO will produce a total of 20 terabytes (TB) of raw data. Production of higher-level data products will result in a total of 70 TB for Planetary Data System (PDS) archiving, 100 times larger than any previous mission. [1] Malin et al., JGR, 106, 17651-17672, 2001.
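
The 12-to-8-bit conversion mentioned above is a companding step. A square-root law is a common illustrative approximation for this kind of encoding, since quantization error then tracks the shot noise; the flight instrument uses onboard lookup tables, and the closed form below is not the actual LROC table.

```python
def compand_12_to_8(dn12):
    # Square-root companding: quantization step grows with signal,
    # roughly matching shot noise (~sqrt(signal)).
    return round(255 * (dn12 / 4095) ** 0.5)
```

The mapping preserves the full 0-4095 input range while allocating more output codes to the dark end, where the detector noise is smallest.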

  8. Exploring the Moon at High-Resolution: First Results From the Lunar Reconnaissance Orbiter Camera (LROC)

    NASA Astrophysics Data System (ADS)

    Robinson, Mark; Hiesinger, Harald; McEwen, Alfred; Jolliff, Brad; Thomas, Peter C.; Turtle, Elizabeth; Eliason, Eric; Malin, Mike; Ravine, A.; Bowman-Cisneros, Ernest

    The Lunar Reconnaissance Orbiter (LRO) spacecraft was launched on an Atlas V 401 rocket from the Cape Canaveral Air Force Station Launch Complex 41 on June 18, 2009. After spending four days in Earth-Moon transit, the spacecraft entered a three-month commissioning phase in an elliptical 30×200 km orbit. On September 15, 2009, LRO began its planned one-year nominal mapping mission in a quasi-circular 50 km orbit. A multi-year extended mission in a fixed 30×200 km orbit is optional. The Lunar Reconnaissance Orbiter Camera (LROC) consists of a Wide Angle Camera (WAC) and two Narrow Angle Cameras (NACs). The WAC is a 7-color push-frame camera, which images the Moon at 100 and 400 m/pixel in the visible and UV, respectively, while the two NACs are monochrome narrow-angle linescan imagers with 0.5 m/pixel spatial resolution. LROC was specifically designed to address two of the primary LRO mission requirements and six other key science objectives, including 1) assessment of meter- and smaller-scale features in order to select safe sites for potential lunar landings near polar resources and elsewhere on the Moon; 2) acquisition of multi-temporal synoptic 100 m/pixel images of the poles during every orbit to unambiguously identify regions of permanent shadow and permanent or near-permanent illumination; 3) meter-scale mapping of regions with permanent or near-permanent illumination of polar massifs; 4) repeat observations of potential landing sites and other regions to derive high-resolution topography; 5) global multispectral observations in seven wavelengths to characterize lunar resources, particularly ilmenite; 6) a global 100-m/pixel basemap with incidence angles (60°-80°) favorable for morphological interpretations; 7) sub-meter imaging of a variety of geologic units to characterize their physical properties, the variability of the regolith, and other key science questions; and 8) meter-scale coverage overlapping with Apollo-era panoramic images (1-2 m/pixel) to document
the number of small impacts since 1971-1972. LROC allows us to determine the recent impact rate of bolides in the size range of 0.5 to 10 meters, which is currently not well known. Determining the impact rate at these sizes enables engineering remediation measures for future surface operations and interplanetary travel. The WAC has imaged nearly the entire Moon in seven wavelengths. A preliminary global WAC stereo-based topographic model is in preparation [1] and global color processing is underway [2]. As the mission progresses, repeat global coverage will be obtained as lighting conditions change, providing a robust photometric dataset. The NACs are revealing a wealth of morphologic features at the meter scale, providing the engineering and science constraints needed to support future lunar exploration. All of the Apollo landing sites have been imaged, as well as the majority of robotic landing and impact sites. Through the use of off-nadir slews, a collection of stereo pairs is being acquired that enables 5-m scale topographic mapping [3-7]. Impact morphologies (terraces, impact melt, rays, etc.) are preserved in exquisite detail at all Copernican craters and are enabling new studies of impact mechanics and crater size-frequency distribution measurements [8-12]. Other topical studies, including, for example, lunar pyroclastics, domes, and tectonics, are underway [e.g., 10-17]. The first PDS data release of LROC data will be in March 2010, and will include all images from the commissioning phase and the first 3 months of the mapping phase. [1] Scholten et al. (2010) 41st LPSC, #2111; [2] Denevi et al. (2010a) 41st LPSC, #2263; [3] Beyer et al. (2010) 41st LPSC, #2678; [4] Archinal et al. (2010) 41st LPSC, #2609; [5] Mattson et al. (2010) 41st LPSC, #1871; [6] Tran et al. (2010) 41st LPSC, #2515; [7] Oberst et al. (2010) 41st LPSC, #2051; [8] Bray et al. (2010) 41st LPSC, #2371; [9] Denevi et al. (2010b) 41st LPSC, #2582; [10] Hiesinger et al. 
(2010a) 41st LPSC, #2278; [11] Hiesinger et al. (2010b) 41st LPSC, #2304; [12] van der Bogert et al. (2010) 41st LPSC, #2165; [13] Plescia et al. (2010) 41st LPSC, #2160; [14] Lawrence et al. (2010) 41st LPSC, #1906; [15] Gaddis et al. (2010) 41st LPSC, #2059; [16] Watters et al. (2010) 41st LPSC, #1863; [17] Garry et al. (2010) 41st LPSC, #2278.

  9. Occurrence probability of slopes on the lunar surface: Estimate by the shaded area percentage in the LROC NAC images

    NASA Astrophysics Data System (ADS)

    Abdrakhimov, A. M.; Basilevsky, A. T.; Ivanov, M. A.; Kokhanov, A. A.; Karachevtseva, I. P.; Head, J. W.

    2015-09-01

    The paper describes a method of estimating the distribution of slopes from the portion of shaded area measured in images acquired at different Sun elevations. The measurements were performed for the benefit of the Russian Luna-Glob mission. The western ellipse for the spacecraft landing in the crater Boguslawsky in the southern polar region of the Moon was investigated. The percentage of shaded area was measured in images acquired with the LROC NAC camera at a resolution of ~0.5 m. Because of the close vicinity of the pole, it is difficult to build digital terrain models (DTMs) for this region from the LROC NAC images; for this reason, the described method has been suggested. For the landing ellipse investigated, 52 LROC NAC images obtained at Sun elevations from 4° to 19° were used. In these images the shaded portions of the area were measured, and the values of these portions were converted to the occurrence of slopes (in this case, at the 3.5-m baseline) by calibration against the surface characteristics of the Lunokhod-1 study area, for which a digital terrain model of ~0.5-m resolution and 13 LROC NAC images obtained at different Sun elevations are available. From the results of measurements and the corresponding calibration, it was found that, in the studied landing ellipse, the occurrence of slopes gentler than 10° at the 3.5-m baseline is 90%, while it is 9.6, 5.7, and 3.9% for slopes steeper than 10°, 15°, and 20°, respectively. This method can be recommended for application when no DTM of the required granularity exists for a region of interest, but high-resolution images taken at different Sun elevations are available.
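
The core measurement, the percentage of shaded pixels in an image, can be sketched as a simple threshold count. The shadow threshold is scene- and exposure-dependent and is assumed here for illustration.

```python
import numpy as np

def shaded_fraction_percent(image, threshold):
    # Percentage of pixels darker than the shadow threshold.
    return 100.0 * np.count_nonzero(image < threshold) / image.size
```

Repeating this over images at several Sun elevations, and calibrating against a site with a known DTM (Lunokhod-1 in the paper), converts shaded fractions into slope-occurrence statistics at the chosen baseline.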

  10. Photometric parameter maps of the Moon derived from LROC WAC images

    NASA Astrophysics Data System (ADS)

    Sato, H.; Robinson, M. S.; Hapke, B. W.; Denevi, B. W.; Boyd, A. K.

    2013-12-01

    Spatially resolved photometric parameter maps were computed from 21 months of Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) images. Thanks to its 60° field of view (FOV), the WAC achieves nearly global coverage of the Moon each month with more than 50% overlap from orbit to orbit. From the repeat observations at various viewing and illumination geometries, we calculated Hapke bidirectional reflectance model parameters [1] for 1°x1° "tiles" from 70°N to 70°S and 0°E to 360°E. About 66,000 WAC images acquired from February 2010 to October 2011 were converted from DN to radiance factor (I/F) through radiometric calibration, partitioned into gridded tiles, and stacked in a time series (tile-by-tile method [2]). Lighting geometries (phase, incidence, emission) were computed using the WAC digital terrain model (100 m/pixel) [3]. The Hapke parameters were obtained by model fitting against I/F within each tile. Among the 9 parameters of the Hapke model, we solved for 3 free parameters (w, b, and hs) by setting constant values for 4 parameters (Bc0=0, hc=1, fixed θ, φ=0) and interpolating 2 parameters (c, Bs0). In this simplification, we ignored the Coherent Backscatter Opposition Effect (CBOE) to avoid competition between CBOE and the Shadow Hiding Opposition Effect (SHOE). We also assumed that surface regolith porosity is uniform across the Moon. The roughness parameter (θ) was set to an averaged value from the equator (±3°N). The Henyey-Greenstein double lobe function (H-G2) parameter (c) was given by the 'hockey stick' relation [4] (a negative correlation between b and c) based on laboratory measurements. The amplitude of SHOE (Bs0) was given by the correlation between w and Bs0 at the equator (±3°N). Single scattering albedo (w) is strongly correlated with the photometrically normalized I/F, as expected. The c shows an inverse trend relative to b due to the 'hockey stick' relation.
The parameter c is typically low for the maria (0.08±0.06) relative to the highlands (0.47±0.16). Since c controls the fraction of backward/forward scattering in H-G2, lower c for the maria indicates more forward scattering relative to the highlands. This trend is opposite to what was expected, because darker particles are usually more backscattering. However, the lower albedo of the maria is due to the higher abundance of ilmenite, an opaque mineral that scatters all of the light by specular reflection from its surface. If their surface facets are relatively smooth, the ilmenite particles will be forward scattering. Other factors (e.g., grain shape, grain size, porosity, maturity) besides mineralogy might also affect c. The angular width of SHOE (hs) typically shows lower values (0.047±0.02) for the maria relative to the highlands (0.074±0.025). An increase in hs for the maria theoretically suggests lower porosity or a narrower grain size distribution [1], but the link between actual materials and hs is not well constrained. Further experiments using both laboratory and spacecraft observations will help to unravel the photometric properties of the surface materials of the Moon. [1] Hapke, B.: Cambridge Univ. Press, 2012. [2] Sato, H. et al.: 42nd LPSC, abstract #1974, 2011. [3] Scholten, F. et al.: JGR, 117, E00H17, 2012. [4] Hapke, B.: Icarus, 221(2), 1079-1083, 2012.
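
For reference, one published form of the 'hockey stick' relation used to interpolate c from the fitted b is the empirical exponential fit below; the coefficients are as commonly quoted from Hapke (2012, Icarus) and should be treated as approximate rather than authoritative.

```python
import math

def hockey_stick_c(b):
    # Empirical negative correlation between the H-G2 shape parameters:
    # larger b (narrower lobes) pairs with smaller c (less backscatter).
    # Coefficients as commonly quoted from Hapke (2012); approximate.
    return 3.29 * math.exp(-17.4 * b * b) - 0.908
```

Given a fitted b in each tile, this yields the interpolated c, which is how the number of free parameters in the per-tile fit is reduced.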

  11. Secondary Craters and the Size-Velocity Distribution of Ejected Fragments around Lunar Craters Measured Using LROC Images

    NASA Astrophysics Data System (ADS)

    Singer, K. N.; Jolliff, B. L.; McKinnon, W. B.

    2013-12-01

    We report results from analyzing the size-velocity distribution (SVD) of secondary-crater-forming fragments from the 93 km diameter Copernicus impact. We measured the diameters of secondary craters and their distances from Copernicus using LROC Wide Angle Camera (WAC) and Narrow Angle Camera (NAC) image data. We then estimated the velocity and size of the ejecta fragment that formed each secondary crater from the range equation for a ballistic trajectory on a sphere and Schmidt-Holsapple scaling relations. Size scaling was carried out in the gravity regime for both non-porous and porous target material properties. We focus on the largest ejecta fragments (dfmax) at a given ejection velocity (vej) and fit the upper envelope of the SVD using quantile regression to an equation of the form dfmax = A*vej^(-β). The velocity exponent, β, describes how quickly fragment sizes fall off with increasing ejection velocity during crater excavation. For Copernicus, we measured 5800 secondary craters at distances of up to 700 km (15 crater radii), corresponding to an ejecta fragment velocity of approximately 950 m/s. This mapping only includes secondary craters that are part of a radial chain or cluster. The two largest craters in chains near Copernicus that are likely to be secondaries are 6.4 and 5.2 km in diameter. We obtained a velocity exponent, β, of 2.2 ± 0.1 for a non-porous surface. This result is similar to Vickery's [1987, GRL 14] determination of β = 1.9 ± 0.2 for Copernicus using Lunar Orbiter IV data.
The availability of WAC 100 m/pix global mosaics with illumination geometry optimized for morphology allows us to update and extend the work of Vickery [1986, Icarus 67, and 1987], who compared secondary crater SVDs for craters on the Moon, Mercury, and Mars. Additionally, meter-scale NAC images enable characterization of secondary crater morphologies and fields around much smaller primary craters than were previously investigated. Combined results from all previous studies of ejecta fragment SVDs from secondary crater fields show that β ranges between approximately 1 and 3. First-order spallation theory predicts a β of 1 [Melosh 1989, Impact Cratering, Oxford Univ. Press]. Results in Vickery [1987] for the Moon exhibit a generally decreasing β with increasing primary crater size (5 secondary fields mapped). In the same paper, however, this trend is flat for Mercury (3 fields mapped) and opposite for Mars (4 fields mapped). SVDs for craters on large icy satellites (Ganymede and Europa), with gravities not too dissimilar to lunar gravity, show generally low velocity exponents (β between 1 and 1.5), except for the very largest impactor measured: the 585-km-diameter Gilgamesh basin on Ganymede (β = 2.6 ± 0.4) [Singer et al., 2013, Icarus 226]. The present work, focusing initially on lunar craters using LROC data, will attempt to confirm or clarify these trends, and expand the number of examples under a variety of impact conditions and surface materials to evaluate possible causes of variations.
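
The ejection-velocity estimate described above inverts the spherical ballistic range equation. A minimal sketch, assuming a 45° launch angle and neglecting the finite crater radius:

```python
import math

G_MOON = 1.62        # lunar surface gravity, m/s^2
R_MOON = 1.7374e6    # lunar radius, m

def ejection_velocity(range_m, theta=math.radians(45.0)):
    """Invert the spherical ballistic range equation
        tan(R / (2 Rp)) = v^2 sin(t) cos(t) / (g Rp - v^2 cos(t)^2)
    for the ejection speed v, given the downrange distance R."""
    t = math.tan(range_m / (2.0 * R_MOON))
    sc = math.sin(theta) * math.cos(theta)
    c2 = math.cos(theta) ** 2
    return math.sqrt(G_MOON * R_MOON * t / (sc + t * c2))

# Fragments landing 700 km from the primary (cf. the Copernicus
# measurements) imply an ejection speed near the ~950 m/s quoted above.
v_700km = ejection_velocity(700e3)
```

With the fragment velocity in hand, Schmidt-Holsapple scaling (not shown here) converts each secondary crater diameter into an estimated fragment size for the SVD fit.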

  12. LROC Advances in Lunar Science

    NASA Astrophysics Data System (ADS)

    Robinson, M. S.

    2012-12-01

    Since entering orbit in 2009 the Lunar Reconnaissance Orbiter Camera (LROC) has acquired over 700,000 Wide Angle Camera (WAC) and Narrow Angle Camera (NAC) images of the Moon. This new image collection is fueling research into the origin and evolution of the Moon. NAC images revealed a volcanic complex 35 x 25 km (60N, 100E), between Compton and Belkovich craters (CB). The CB terrain sports volcanic domes and irregular depressed areas (caldera-like collapses). The volcanic complex corresponds to an area of high silica content (Diviner) and high Th (Lunar Prospector). A low density of impact craters on the CB complex indicates a relatively young age. The LROC team mapped over 150 volcanic domes and 90 volcanic cones in the Marius Hills (MH), many of which were not previously identified. Morphology and compositional estimates (Diviner) indicate that MH domes are silica poor and are products of low-effusion mare lavas. Impact melt deposits are observed with Copernican impact craters (>10 km) on exterior ejecta, the rim, inner wall, and crater floors. Preserved impact melt flow deposits are observed around small craters (<25 km diam.), and estimated melt volumes exceed predictions. At these diameters the amount of melt predicted is small, and melt that is produced is expected to be ejected from the crater. However, we observe well-defined impact melt deposits on the floors of highland craters down to 200 m diameter. A globally distributed population of previously undetected contractional structures was discovered. Their crisp appearance and associated impact crater populations show that they are young landforms (<1 Ga). NAC images also revealed small extensional troughs. Crosscutting relations with small-diameter craters and depths as shallow as 1 m indicate ages <50 Ma. These features place bounds on the amount of global radial contraction and the level of compressional stress in the crust.
WAC temporal coverage of the poles allowed quantification of highly illuminated regions, including one site that remains lit for 94% of a year (longest eclipse period of 43 hours). Targeted NAC images provide higher resolution characterization of key sites with permanent shadow and extended illumination. Repeat WAC coverage provides an unparalleled photometric dataset allowing spatially resolved solutions (currently 1 degree) to Hapke's photometric equation - data invaluable for photometric normalization and interpreting physical properties of the regolith. The WAC color also provides the means to solve for titanium, and distinguish subtle age differences within Copernican aged materials. The longevity of the LRO mission allows follow up NAC and WAC observations of previously known and newly discovered targets over a range of illumination and viewing geometries. Of particular merit is the acquisition of NAC stereo pairs and oblique sequences. With the extended SMD phase, the LROC team is working towards imaging the whole Moon with pixel scales of 50 to 200 cm.

  13. Dry imaging cameras.

    PubMed

    Indrajit, Ik; Alam, Aftab; Sahni, Hirdesh; Bhatia, Mukul; Sahu, Samaresh

    2011-04-01

    Dry imaging cameras are important hard-copy devices in radiology. Using a dry imaging camera, multiformat images from digital modalities in radiology are created from a sealed unit of unexposed films. The functioning of a modern dry camera involves a blend of concurrent processes drawn from diverse fields such as computing, mechanics, thermodynamics, optics, electricity, and radiography. Broadly, hard-copy devices are classified as laser-based or non-laser-based technologies. Compared with the working knowledge and technical awareness of other modalities in radiology, the understanding of a dry imaging camera is often superficial and neglected. To fill this void, this article outlines the key features of a modern dry camera and the important issues that impact radiology workflow. PMID:21799589

  14. Combined collimator/reconstruction optimization for myocardial perfusion SPECT imaging using polar map-based LROC numerical observer

    NASA Astrophysics Data System (ADS)

    Konate, Souleymane; Pretorius, P. Hendrik; Gifford, Howard C.; O'Connor, J. Michael; Konik, Arda; Shazeeb, Mohammed Salman; King, Michael A.

    2012-02-01

Polar maps have been used to assist clinicians in diagnosing coronary artery disease (CAD) in single photon emission computed tomography (SPECT) myocardial perfusion imaging. Herein, we investigate the optimization of collimator design for perfusion-defect detection in SPECT imaging when reconstruction includes modeling of the collimator. The optimization employs an LROC clinical model observer (CMO), which emulates the clinical task of polar-map detection of CAD. By utilizing a CMO, which better mimics the clinical perfusion-defect detection task than previous SKE-based observers, our objective is to optimize collimator design for SPECT myocardial perfusion imaging when reconstruction includes compensation for collimator spatial resolution. Comparison of lesion-detection accuracy will then be employed to determine whether a lower-spatial-resolution, hence higher-sensitivity, collimator design than currently recommended could be utilized to reduce the radiation dose to the patient, the imaging time, or a combination of both. As the first step in this investigation, we report herein on the optimization of the three-dimensional (3D) post-reconstruction Gaussian filtering, and of the number of iterations used in reconstruction, for SPECT slices of projections acquired by a low-energy general-purpose (LEGP) collimator. The optimization was in terms of detection accuracy as determined by our CMO and four human observers. Both the humans and all four CMO variants agreed that the optimal post-filtering used a Gaussian sigma in the range of 0.75 to 1.0 pixels. In terms of the number of iterations, the human observers showed a preference for 5 iterations; however, only one of the variants of the CMO agreed with this selection. The others showed a preference for 15 iterations.
We shall thus proceed to optimize the reconstruction parameters for even higher sensitivity collimators using this CMO, and then do the final comparison between collimators using their individually optimized parameters with human observers and three times the test images to reduce the statistical variation seen in our present results.
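The post-reconstruction smoothing step being optimized above can be sketched as a separable 3-D Gaussian filter with sigma given in pixels. This is a minimal numpy-only sketch; `postfilter_3d` and `gaussian_kernel` are hypothetical helpers, not the authors' code.

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian kernel truncated at 3 sigma."""
    radius = int(np.ceil(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def postfilter_3d(volume, sigma):
    """Separable 3-D Gaussian post-reconstruction filter; sigma in pixels."""
    k = gaussian_kernel(sigma)
    out = np.asarray(volume, dtype=float)
    for axis in range(3):  # filter along each axis in turn (separability)
        out = np.apply_along_axis(
            lambda m: np.convolve(m, k, mode='same'), axis, out)
    return out
```

Because the Gaussian is separable, three 1-D passes are equivalent to one 3-D convolution at far lower cost.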

  15. Apollo 17 Landing Site: A Cartographic Investigation of the Taurus-Littrow Valley Based on LROC NAC Imagery

    NASA Astrophysics Data System (ADS)

    Haase, I.; Wählisch, M.; Gläser, P.; Oberst, J.; Robinson, M. S.

    2014-04-01

    A Digital Terrain Model (DTM) of the Taurus- Littrow Valley with a 1.5 m/pixel resolution was derived from high resolution stereo images of the Lunar Reconnaissance Orbiter Narrow Angle Camera (LROC NAC) [1]. It was used to create a controlled LROC NAC ortho-mosaic with a pixel size of 0.5 m on the ground. Covering the entire Apollo 17 exploration site, it allows for determining accurate astronaut and surface feature positions along the astronauts' traverses when integrating historic Apollo surface photography to our analysis.

  16. Exploring the Moon with LROC-NAC Stereo Anaglyphs

    NASA Astrophysics Data System (ADS)

    Mattson, S.; McEwen, A. S.; Robinson, M. S.; Speyerer, E.; Archinal, B.

    2012-09-01

The Lunar Reconnaissance Orbiter Camera Narrow Angle Camera (LROC NAC), operating on the Lunar Reconnaissance Orbiter (LRO), has returned over 500,000 high resolution images of the surface of the Moon since 2009 [1]. The NAC acquires geometric stereo image pairs of the same surface target on subsequent orbits by rolling the spacecraft off-nadir to achieve stereo convergence. Stereo pairs are generally acquired close in time (2 to 4 hrs) to minimize photometric differences. An anaglyph is a qualitative stereo visualization product formed by putting one image of the stereo pair in the red channel and the other image in the blue and green channels, so that together the pair can be viewed in 3D using red-blue or red-cyan glasses. LROC NAC anaglyphs are produced automatically, so the stereo information is readily interpretable, in a qualitative sense, without the intensive computational and personnel resources required to make digital terrain models (DTMs).
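The channel assignment described above is simple to sketch. The helper below is hypothetical and assumes two co-registered 8-bit grayscale arrays of equal shape.

```python
import numpy as np

def make_anaglyph(left, right):
    """Red-cyan anaglyph: one stereo image feeds the red channel, the
    other feeds green and blue (assumes co-registered 8-bit grayscale)."""
    left = np.asarray(left, dtype=np.uint8)
    right = np.asarray(right, dtype=np.uint8)
    return np.dstack([left, right, right])  # R from left, G and B from right
```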

  17. Spacecraft camera image registration

    NASA Technical Reports Server (NTRS)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Chan, Fred N. T. (Inventor); Gamble, Donald W. (Inventor)

    1987-01-01

    A system for achieving spacecraft camera (1, 2) image registration comprises a portion external to the spacecraft and an image motion compensation system (IMCS) portion onboard the spacecraft. Within the IMCS, a computer (38) calculates an image registration compensation signal (60) which is sent to the scan control loops (84, 88, 94, 98) of the onboard cameras (1, 2). At the location external to the spacecraft, the long-term orbital and attitude perturbations on the spacecraft are modeled. Coefficients (K, A) from this model are periodically sent to the onboard computer (38) by means of a command unit (39). The coefficients (K, A) take into account observations of stars and landmarks made by the spacecraft cameras (1, 2) themselves. The computer (38) takes as inputs the updated coefficients (K, A) plus synchronization information indicating the mirror position (AZ, EL) of each of the spacecraft cameras (1, 2), operating mode, and starting and stopping status of the scan lines generated by these cameras (1, 2), and generates in response thereto the image registration compensation signal (60). The sources of periodic thermal errors on the spacecraft are discussed. The system is checked by calculating measurement residuals, the difference between the landmark and star locations predicted at the external location and the landmark and star locations as measured by the spacecraft cameras (1, 2).

  18. Selective-imaging camera

    NASA Astrophysics Data System (ADS)

    Szu, Harold; Hsu, Charles; Landa, Joseph; Cha, Jae H.; Krapels, Keith A.

    2015-05-01

    How can we design cameras that image selectively in Full Electro-Magnetic (FEM) spectra? Without selective imaging, we cannot use, for example, ordinary tourist cameras to see through fire, smoke, or other obscurants contributing to creating a Visually Degraded Environment (VDE). This paper addresses a possible new design of selective-imaging cameras at firmware level. The design is consistent with physics of the irreversible thermodynamics of Boltzmann's molecular entropy. It enables imaging in appropriate FEM spectra for sensing through the VDE, and displaying in color spectra for Human Visual System (HVS). We sense within the spectra the largest entropy value of obscurants such as fire, smoke, etc. Then we apply a smart firmware implementation of Blind Sources Separation (BSS) to separate all entropy sources associated with specific Kelvin temperatures. Finally, we recompose the scene using specific RGB colors constrained by the HVS, by up/down shifting Planck spectra at each pixel and time.

  19. Mapping the Apollo 17 landing site area based on Lunar Reconnaissance Orbiter Camera images and Apollo surface photography

    NASA Astrophysics Data System (ADS)

    Haase, I.; Oberst, J.; Scholten, F.; Wählisch, M.; Gläser, P.; Karachevtseva, I.; Robinson, M. S.

    2012-05-01

    Newly acquired high resolution Lunar Reconnaissance Orbiter Camera (LROC) images allow accurate determination of the coordinates of Apollo hardware, sampling stations, and photographic viewpoints. In particular, the positions from where the Apollo 17 astronauts recorded panoramic image series, at the so-called “traverse stations”, were precisely determined for traverse path reconstruction. We analyzed observations made in Apollo surface photography as well as orthorectified orbital images (0.5 m/pixel) and Digital Terrain Models (DTMs) (1.5 m/pixel and 100 m/pixel) derived from LROC Narrow Angle Camera (NAC) and Wide Angle Camera (WAC) images. Key features captured in the Apollo panoramic sequences were identified in LROC NAC orthoimages. Angular directions of these features were measured in the panoramic images and fitted to the NAC orthoimage by applying least squares techniques. As a result, we obtained the surface panoramic camera positions to within 50 cm. At the same time, the camera orientations, North azimuth angles and distances to nearby features of interest were also determined. Here, initial results are shown for traverse station 1 (northwest of Steno Crater) as well as the Apollo Lunar Surface Experiment Package (ALSEP) area.
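The least-squares step described above can be sketched as follows: given orthoimage coordinates of features and the azimuth angles measured to them in a panorama, solve for the camera position and an unknown heading offset by Gauss-Newton iteration on wrapped angular residuals. This is a minimal 2-D sketch with a hypothetical `locate_camera` helper; the actual adjustment in the paper is more elaborate.

```python
import numpy as np

def locate_camera(features, azimuths, guess, iters=50):
    """Solve for camera position (x, y) and heading offset phi from
    azimuth angles to features of known position, by Gauss-Newton
    least squares on wrapped angular residuals. Hypothetical helper."""
    p = np.asarray(guess, dtype=float)  # [x, y, phi]

    def residuals(q):
        x, y, phi = q
        predicted = np.arctan2(features[:, 1] - y, features[:, 0] - x) - phi
        r = azimuths - predicted
        return (r + np.pi) % (2.0 * np.pi) - np.pi  # wrap into [-pi, pi)

    for _ in range(iters):
        r = residuals(p)
        # Numerical Jacobian of the residuals
        J = np.empty((r.size, 3))
        eps = 1e-6
        for j in range(3):
            dq = np.zeros(3)
            dq[j] = eps
            J[:, j] = (residuals(p + dq) - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p = p + step
        if np.linalg.norm(step) < 1e-12:
            break
    return p
```

With four or more well-distributed features the three unknowns are overdetermined, which is what lets the fit recover the position to sub-pixel precision.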

  20. Mapping the Apollo 17 Astronauts' Positions Based on LROC Data and Apollo Surface Photography

    NASA Astrophysics Data System (ADS)

    Haase, I.; Oberst, J.; Scholten, F.; Gläser, P.; Wählisch, M.; Robinson, M. S.

    2011-10-01

    The positions from where the Apollo 17 astronauts recorded panoramic image series, e.g. at the so-called "traverse stations", were precisely determined using ortho-images (0.5 m/pxl) as well as Digital Terrain Models (DTM) (1.5 m/pxl and 100 m/pxl) derived from Lunar Reconnaissance Orbiter Camera (LROC) data. Features imaged in the Apollo panoramas were identified in LROC ortho-images. Least-squares techniques were applied to angles measured in the panoramas to determine the astronaut's position to within the ortho-image pixel. The result of our investigation of Traverse Station 1 in the north-west of Steno Crater is presented.

  1. Satellite camera image navigation

    NASA Technical Reports Server (NTRS)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Savides, John (Inventor); Hanson, Charles W. (Inventor)

    1987-01-01

Pixels within a satellite camera (1, 2) image are precisely located in terms of latitude and longitude on a celestial body, such as the earth, being imaged. A computer (60) on the earth generates models (40, 50) of the satellite's orbit and attitude, respectively. The orbit model (40) is generated from measurements of stars and landmarks taken by the camera (1, 2), and by range data. The orbit model (40) is an expression of the satellite's latitude and longitude at the subsatellite point, and of the altitude of the satellite, as a function of time, using as coefficients (K) the six Keplerian elements at epoch. The attitude model (50) is based upon star measurements taken by each camera (1, 2). The attitude model (50) is a set of expressions for the deviations in a set of mutually orthogonal reference optical axes (x, y, z) as a function of time, for each camera (1, 2). Measured data is fit into the models (40, 50) using a walking least squares fit algorithm. A transformation computer (66) transforms pixel coordinates as telemetered by the camera (1, 2) into earth latitude and longitude coordinates, using the orbit and attitude models (40, 50).

  2. Neutron Imaging Camera

    NASA Technical Reports Server (NTRS)

    Hunter, Stanley; deNolfo, G. A.; Barbier, L. M.; Link, J. T.; Son, S.; Floyd, S. R.; Guardala, N.; Skopec, M.; Stark, B.

    2008-01-01

The Neutron Imaging Camera (NIC) is based on the Three-dimensional Track Imager (3-DTI) technology developed at GSFC for gamma-ray astrophysics applications. The 3-DTI, a large-volume time-projection chamber, provides accurate, approximately 0.4 mm resolution, 3-D tracking of charged particles. The incident directions of fast neutrons, En > 0.5 MeV, are reconstructed from the momenta and energies of the proton and triton fragments resulting from ³He(n,p)³H interactions in the 3-DTI volume. The performance of the NIC in laboratory and accelerator tests is presented.

  3. LROC Observations of Geologic Features in the Marius Hills

    NASA Astrophysics Data System (ADS)

    Lawrence, S.; Stopar, J. D.; Hawke, R. B.; Denevi, B. W.; Robinson, M. S.; Giguere, T.; Jolliff, B. L.

    2009-12-01

Lunar volcanic cones, domes, and their associated geologic features are important objects of study for the LROC science team because they represent possible volcanic endmembers that may yield important insights into the history of lunar volcanism and are potential sources of lunar resources. Several hundred domes, cones, and associated volcanic features are currently targeted for high-resolution LROC Narrow Angle Camera [NAC] imagery [1]. The Marius Hills, located in Oceanus Procellarum (centered at ~13.4°N, 55.4°W), represent the largest concentration of these volcanic features on the Moon, including sinuous rilles, volcanic cones, domes, and depressions [e.g., 2-7]. The Marius region is thus a high priority for future human lunar exploration, as signified by its inclusion in the Project Constellation list of notional future human lunar exploration sites [8], and will be an intense focus of interest for LROC science investigations. Previous studies of the Marius Hills have utilized telescopic, Lunar Orbiter, Apollo, and Clementine imagery to study the morphology and composition of the volcanic features in the region. Complementary LROC studies of the Marius region will focus on high-resolution NAC images of specific features for studies of morphology (including flow fronts, dome/cone structure, and possible layering) and topography (using stereo imagery). Preliminary studies of the new high-resolution images of the Marius Hills region reveal small-scale features in the sinuous rilles, including possible outcrops of bedrock and lobate lava flows from the domes. The observed Marius Hills are characterized by rough surface textures, including the presence of large boulders at the summits (~3-5 m diameter), which is consistent with the radar-derived conclusions of [9].
Future investigations will involve analysis of LROC stereo photoclinometric products and coordinating NAC images with the multispectral images collected by the LROC WAC, especially the ultraviolet data, to enable measurements of color variations within and amongst deposits and provide possible compositional insights, including the location of possibly related pyroclastic deposits. References: [1] J. D. Stopar et al. (2009), LRO Science Targeting Meeting, Abs. 6039 [2] Greeley R (1971) Moon, 3, 289-314 [3] Guest J. E. (1971) Geol. and Phys. of the Moon, p. 41-53. [4] McCauley J. F. (1967) USGS Geologic Atlas of the Moon, Sheet I-491 [5] Weitz C. M. and Head J. W. (1999) JGR, 104, 18933-18956 [6] Heather D. J. et al. (2003) JGR, doi:10.1029/2002JE001938 [7] Whitford-Stark, J. L., and J. W. Head (1977) Proc. LSC 8th, 2705-2724 [8] Gruener J. and Joosten B. K. (2009) LRO Science Targeting Meeting, Abs. 6036 [9] Campbell B. A. et al. (2009) JGR, doi:10.1029/2008JE003253.

  4. Image Sensors Enhance Camera Technologies

    NASA Technical Reports Server (NTRS)

    2010-01-01

In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.

  5. Neutron Imaging Camera

    NASA Technical Reports Server (NTRS)

    Hunter, Stanley D.; DeNolfo, Georgia; Floyd, Sam; Krizmanic, John; Link, Jason; Son, Seunghee; Guardala, Noel; Skopec, Marlene; Stark, Robert

    2008-01-01

We describe the Neutron Imaging Camera (NIC) being developed for DTRA applications by NASA/GSFC and NSWC/Carderock. The NIC is based on the Three-dimensional Track Imager (3-DTI) technology developed at GSFC for gamma-ray astrophysics applications. The 3-DTI, a large-volume time-projection chamber, provides accurate, approximately 0.4 mm resolution, 3-D tracking of charged particles. The incident directions of fast neutrons, E_N > 0.5 MeV, are reconstructed from the momenta and energies of the proton and triton fragments resulting from ³He(n,p)³H interactions in the 3-DTI volume. We present the angular and energy resolution performance of the NIC derived from accelerator tests.
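The reconstruction principle stated above, conservation of momentum and energy across the ³He(n,p)³H reaction, can be sketched non-relativistically. The Q-value of 0.764 MeV is the standard tabulated value for this reaction; the helper itself is illustrative and neglects the ³He target's thermal motion.

```python
import numpy as np

Q_MEV = 0.764  # Q-value of the 3He(n,p)3H reaction (energy released), MeV

def neutron_from_fragments(p_proton, p_triton, e_proton, e_triton):
    """Reconstruct the incident neutron direction and energy from the proton
    and triton fragments of a 3He(n,p)3H interaction (non-relativistic
    sketch).  Momentum: p_n = p_p + p_t;  energy: E_n = E_p + E_t - Q."""
    p_n = np.asarray(p_proton, dtype=float) + np.asarray(p_triton, dtype=float)
    direction = p_n / np.linalg.norm(p_n)
    energy_mev = e_proton + e_triton - Q_MEV
    return direction, energy_mev
```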

  6. Impact melt volume estimates in small-to-medium sized craters on the Moon from the Lunar Orbiter Laser Altimeter (LOLA) and Lunar Reconnaissance Orbiter Camera (LROC)

    NASA Astrophysics Data System (ADS)

    Barnouin, O. S.; Seelos, K. D.; McGovern, A.; Denevi, B. W.; Zuber, M. T.; Smith, D. E.; Robinson, M. S.; Neumann, G. A.; Mazarico, E.; Torrence, M. H.

    2010-12-01

Direct measurements of the volume of melt generated during cratering have only been possible using data acquired at terrestrial craters. These measurements are usually the result of areal mapping efforts, drill-core investigations, and assessments of the amount of erosion a crater and its melt sheet might have undergone. Good data for melt volume are needed to further test and validate both analytical and numerical models of melt generation on terrestrial planets, whose results can vary by as much as a factor of 10 for identical impact conditions. Such models are used to provide estimates of the depth of origin of surface features (e.g., central peaks and rings) seen within craters and could influence the interpretations of their diameter-to-depth relationships. For example, high-velocity impacts (>30 km/s) on Mercury are expected to produce significant melt volumes, which could influence crater aspect ratio. The Lunar Reconnaissance Orbiter has now returned a wealth of new data, including those from small-to-medium sized (≳1 km) fresh craters on the Moon. LROC observations and LOLA altimetry (spatial sampling ~56 m, vertical precision ~10 cm) are of such good quality that additional new melt volume estimates can be obtained for many of these craters. Using geological maps from the Apollo era, we have identified over 100 fresh crater candidates for investigation. Preliminary results indicate that melt volumes can vary significantly for given crater sizes, sometimes even exceeding estimates from current numerical and analytical models in the literature for impacts on the Moon. The broad range of observed melt volumes might be due to local variations in the target properties (including density, composition, and porosity), projectile speeds, and possibly projectile properties, the range of which the theoretical models do not typically consider.

  7. Height-to-diameter ratios of moon rocks from analysis of Lunokhod-1 and -2 and Apollo 11-17 panoramas and LROC NAC images

    NASA Astrophysics Data System (ADS)

    Demidov, N. E.; Basilevsky, A. T.

    2014-09-01

An analysis is performed of 91 panoramic photographs taken by Lunokhod-1 and -2, 17 panoramic images composed of photographs taken by Apollo 11-15 astronauts, and six LROC NAC photographs. The results are used to measure the height-to-visible-diameter (h/d) and height-to-maximum-diameter (h/D) ratios for lunar rocks at three highland and three mare sites on the Moon. The average h/d and h/D for the six sites are found to be indistinguishable at a significance level of 95%. Therefore, our estimates of the average h/d = 0.6 ± 0.03 and h/D = 0.54 ± 0.03, based on 445 rocks, are applicable to the entire lunar surface. Rounding off, an h/D ratio of ≈0.5 is suggested for engineering models of the lunar surface. The ratios between the long, medium, and short axes of the lunar rocks are found to be similar to those obtained in high-velocity impact experiments for different materials. It is concluded, therefore, that the degree of penetration of the studied lunar rocks into the regolith is negligible, and that micrometeorite abrasion and other factors do not dominate the evolution of the shape of lunar rocks.
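The averaging described above amounts to a mean ratio with a confidence half-width. A minimal sketch under a normal approximation; `ratio_stats` is a hypothetical helper, not the authors' statistical procedure.

```python
import numpy as np

def ratio_stats(heights, diameters):
    """Mean height-to-diameter ratio with a ~95% half-width
    (normal approximation; illustrative helper)."""
    r = np.asarray(heights, dtype=float) / np.asarray(diameters, dtype=float)
    mean = r.mean()
    half_width = 1.96 * r.std(ddof=1) / np.sqrt(r.size)
    return mean, half_width
```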

  8. Morphological Analysis of Lunar Lobate Scarps Using LROC NAC and LOLA Data

    NASA Astrophysics Data System (ADS)

    Banks, M. E.; Watters, T. R.; Robinson, M. S.; Tornabene, L. L.; Tran, T.; Ojha, L.

    2011-10-01

Lobate scarps on the Moon are relatively small-scale tectonic landforms observed in mare basalts and, more commonly, in highland material [1-4]. These scarps are the surface expression of thrust faults and are the most common tectonic landform on the lunar farside [1-4]. Prior to Lunar Reconnaissance Orbiter (LRO) observations, lobate scarps were largely detected only in equatorial regions because of limited Apollo Panoramic Camera and high-resolution Lunar Orbiter coverage with optimum lighting geometry [1-3]. Previous measurements of the relief of lobate scarps were made for 9 low-latitude scarps (<±20°) and range from ~6 to 80 m (mean relief of ~32 m) [1]. However, the relief of these scarps was primarily determined from shadow measurements of limited accuracy in Apollo-era photography. We present the results of a detailed characterization of the relief and morphology of a larger sampling of the lobate scarp population. Outstanding questions include: What is the range of maximum relief of the lobate scarps? Is their size and structural relief consistent with estimates of the global contractional strain? What is the range of horizontal shortening expressed by lunar scarps, and how does it compare with that found for lobate scarps on other planetary bodies? Lunar Reconnaissance Orbiter Camera (LROC) images and Lunar Orbiter Laser Altimeter (LOLA) ranging enable detection and detailed morphological analysis of lobate scarps at all latitudes. To date, previously undetected scarps have been identified in LROC imagery in 75 different locations, over 20 of which occur at latitudes greater than ±60° [5-6]. LROC stereo-derived digital terrain models (DTMs) and LOLA data are used to measure the relief and characterize the morphology of 26 previously known (n = 8) and newly detected (n = 18) lobate scarps. Lunar examples are compared to lobate scarps on Mars, Mercury, and 433 Eros (Hinks Dorsum).

  9. Marius Hills: Surface Roughness from LROC and Mini-RF

    NASA Astrophysics Data System (ADS)

    Lawrence, S.; Hawke, B. R.; Bussey, B.; Stopar, J. D.; Denevi, B.; Robinson, M.; Tran, T.

    2010-12-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Team is collecting hundreds of high-resolution (0.5 m/pixel) Narrow Angle Camera (NAC) images of lunar volcanic constructs (domes, “cones”, and associated features) [1,2]. Marius Hills represents the largest concentration of volcanic features on the Moon and is a high-priority target for future exploration [3,4]. NAC images of this region provide new insights into the morphology and geology of specific features at the meter scale, including lava flow fronts, tectonic features, layers, and topography (using LROC stereo imagery) [2]. Here, we report initial results from Mini-RF and LROC collaborative studies of the Marius Hills. Mini-RF uses a hybrid polarimetric architecture to measure surface backscatter characteristics and can acquire data in one of two radar bands, S (12 cm) or X (4 cm) [5]. The spatial resolution of Mini-RF (15 m/pixel) enables correlation of features observed in NAC images to Mini-RF data. Mini-RF S-Band zoom-mode data and daughter products, such as circular polarization ratio (CPR), were directly compared to NAC images. Mini-RF S-Band radar images reveal enhanced radar backscatter associated with volcanic constructs in the Marius Hills region. Mini-RF data show that Marius Hills volcanic constructs have enhanced average CPR values (0.5-0.7) compared to the CPR values of the surrounding mare (~0.4). This result is consistent with the conclusions of [6], and implies that the lava flows comprising the domes in this region are blocky. To quantify the surface roughness [e.g., 6,7] block populations associated with specific geologic features in the Marius Hills region are being digitized from NAC images. Only blocks that can be unambiguously identified (>1 m diameter) are included in the digitization process, producing counts and size estimates of the block population. High block abundances occur mainly at the distal ends of lava flows. 
The average size of these blocks is 9 m, and 50% of observed blocks are between 9-12 m in diameter. These blocks are not associated with impact craters and have at most a thin layer of regolith. There is minimal visible evidence for downslope movement. Relatively high block abundances are also seen on the summits of steep-sided asymmetrical positive relief features (“cones”) atop low-sided domes. Digitization efforts will continue as we study the block populations of different geologic features in the Marius Hills region and correlate the results with Mini-RF data, which will provide new information about the emplacement of volcanic features in the region. [1] J.D. Stopar et al., LPI Contribution 1483 (2009) 93-94. [2] S.J. Lawrence et al. (2010) LPSC 41 #1906. [2] S.J. Lawrence et al. (2010) LPSC 41 #2689. [3] C. Coombs & B.R. Hawke (1992) 2nd Proc. Lun. Bases & Space Act. 21st Cent., pp. 219-229. [4] J. Gruener and B. Joosten (2009) LPI Contributions 1483, 50-51. [5] D.B.J. Bussey et al. (2010) LPSC 41 #2319. [6] B.A. Campbell et al. (2009) JGR-Planets, 114, 01001. [7] S.W. Anderson et al. (1998) GSA Bull., 110, 1258-1267.
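The CPR comparison above can be sketched directly. In the hybrid-polarimetric convention used by radars like Mini-RF, CPR is formed from the first and fourth radar Stokes parameters; the threshold in the second helper is an illustrative value, and both function names are assumptions.

```python
import numpy as np

def circular_polarization_ratio(s1, s4):
    """CPR from the first and fourth radar Stokes parameters in the
    hybrid-polarimetric convention: CPR = SC/OC = (S1 - S4) / (S1 + S4)."""
    s1 = np.asarray(s1, dtype=float)
    s4 = np.asarray(s4, dtype=float)
    return (s1 - s4) / (s1 + s4)

def enhanced_cpr_mask(cpr_map, threshold=0.5):
    """Flag pixels with CPR above a threshold (illustrative value), e.g. to
    separate blocky dome surfaces (CPR 0.5-0.7) from surrounding mare (~0.4)."""
    return np.asarray(cpr_map) > threshold
```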

  10. Full Stokes polarization imaging camera

    NASA Astrophysics Data System (ADS)

    Vedel, M.; Breugnot, S.; Lechocinski, N.

    2011-10-01

Objective and background: We present a new version of Bossa Nova Technologies' passive polarization imaging camera. The previous version performed live measurement of the linear Stokes parameters (S0, S1, S2) and their derivatives. The new version presented in this paper performs live measurement of the full Stokes parameters, i.e., including the fourth parameter S3, related to the amount of circular polarization. Dedicated software was developed to provide live images of any Stokes-related parameter, such as the Degree Of Linear Polarization (DOLP), the Degree Of Circular Polarization (DOCP), and the Angle Of Polarization (AOP). Results: We first give a brief description of the camera and its technology. It is a division-of-time polarimeter using a custom ferroelectric liquid crystal cell. A description of the method used to calculate the Data Reduction Matrix (DRM) [5,9] linking intensity measurements and the Stokes parameters is given. The calibration was developed to maximize the condition number of the DRM. It also allows very efficient post-processing of the acquired images. A complete evaluation of the precision of standard polarization parameters is described. We further present the standard features of the dedicated software that was developed to operate the camera. It provides live images of the Stokes vector components and the usual associated parameters. Finally, some tests already conducted are presented, including indoor laboratory and outdoor measurements. This new camera will be a useful tool for many applications such as biomedical imaging, remote sensing, metrology, and material studies.
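The derived quantities named above follow directly from the Stokes vector; a minimal sketch of the standard definitions (the helper name is an assumption):

```python
import numpy as np

def stokes_derived(S0, S1, S2, S3):
    """Standard parameters derived from a full Stokes vector:
    DOLP = sqrt(S1^2 + S2^2) / S0      (degree of linear polarization)
    DOCP = |S3| / S0                   (degree of circular polarization)
    AOP  = 0.5 * atan2(S2, S1)         (angle of polarization, radians)"""
    dolp = np.sqrt(S1**2 + S2**2) / S0
    docp = np.abs(S3) / S0
    aop = 0.5 * np.arctan2(S2, S1)
    return dolp, docp, aop
```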

  11. Multispectral image dissector camera system

    NASA Technical Reports Server (NTRS)

    Johnson, B. L.

    1972-01-01

A sensor system which provides registered high-resolution multispectral images from a single sensor with no mechanical moving parts is reported, and the operation of an image dissector camera (IDC) is described. An earth scene 100 nautical miles wide is imaged through a single lens onto a photocathode surface containing three spectral filters, thereby producing three separate spectral signatures on the photocathode surface. An electron image is formed, accelerated, focused, and electromagnetically deflected across an image plane which contains three sampling apertures, behind which are located three electron multipliers. The IDC system uses electromagnetic deflection for cross-track scanning and spacecraft orbit motion for along-track scanning, thus eliminating the need for a mechanical scanning mirror.

  12. Computational imaging for miniature cameras

    NASA Astrophysics Data System (ADS)

    Salahieh, Basel

Miniature cameras play a key role in numerous imaging applications ranging from endoscopy and metrology inspection devices to smartphones and head-mounted acquisition systems. However, due to the physical constraints, the imaging conditions, and the low quality of small optics, their imaging capabilities are limited in terms of the delivered resolution, the acquired depth of field, and the captured dynamic range. Computational imaging jointly addresses the imaging system and the reconstruction algorithms to bypass the traditional limits of optical systems and deliver better restorations for various applications. The scene is encoded into a set of efficient measurements which can then be computationally decoded to output a richer estimate of the scene than the raw images captured by conventional imagers. In this dissertation, three task-based computational imaging techniques are developed to make low-quality miniature cameras capable of delivering realistic high-resolution reconstructions, providing full-focus imaging, and acquiring depth information for high dynamic range objects. For the superresolution task, a non-regularized direct superresolution algorithm is developed to achieve realistic restorations without being penalized by improper assumptions (e.g., optimizers, priors, and regularizers) made in the inverse problem. An adaptive frequency-based filtering scheme is introduced to upper-bound the reconstruction errors while still producing more fine details than previous methods under realistic imaging conditions. For the full-focus imaging task, a computational depth-based deconvolution technique is proposed to bring a scene captured by an ordinary fixed-focus camera to full focus based on a depth-variant point spread function prior.
The ringing artifacts are suppressed on three levels: block tiling to eliminate boundary artifacts, adaptive reference maps to reduce ringing initiated by sharp edges, and block-wise deconvolution or depth-based masking to suppress artifacts initiated by neighboring depth-transition surfaces. Finally for the depth acquisition task, a multi-polarization fringe projection imaging technique is introduced to eliminate saturated points and enhance the fringe contrast by selecting the proper polarized channel measurements. The developed technique can be easily extended to include measurements captured under different exposure times to obtain more accurate shape rendering for very high dynamic range objects.

  13. Comparison of human- and model-observer LROC studies

    NASA Astrophysics Data System (ADS)

    Gifford, Howard C.; Pretorius, P. H.; King, Michael A.

    2003-05-01

    We have investigated whether extensions of linear model observers can predict human performance in a localization ROC (LROC) study. The specific task was detection of gallium-avid tumors in SPECT images of a mathematical phantom, and the study was intended to quantify the effect of improved detector energy resolution on scatter-corrected images. The basis for our model observers is the latent perception measurement postulated for the LROC model. This measurement is obtained by cross-correlating the image with a kernel, and the LROC rating and localization data are the max and argmax, respectively, of this measurement made at all relevant search locations. The particular model observers tested were the nonprewhitening (NPW), channelized NPW (CNPW), and channelized Hotelling (CH) observers. Specification of the observer's search region was also part of the task definition, and several variations were considered that could approximate the training of human observers. The best agreement with the human observers was found with the CNPW observer, suggesting that the ability of human observers to prewhiten images may be degraded when the detection task requires signal localization.
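The latent perception measurement described above, cross-correlate the image with a kernel, then take the max as the rating and the argmax as the localization, can be sketched as follows. Circular FFT correlation is a simplifying assumption here; boundary handling and the channelization of the CNPW/CH observers are omitted, and the helper name is hypothetical.

```python
import numpy as np

def scanning_observer(image, kernel, search_mask):
    """LROC-style latent perception measurement: cross-correlate the image
    with the observer's kernel, then take the max (rating) and argmax
    (localization) over the allowed search region."""
    # Circular cross-correlation via the FFT (a sketch-level simplification)
    resp = np.real(np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(kernel))))
    masked = np.where(search_mask, resp, -np.inf)  # restrict the search region
    loc = np.unravel_index(np.argmax(masked), masked.shape)
    return masked[loc], loc
```

Restricting the search region through the mask corresponds to the observer-training variations discussed in the abstract.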

  14. Uncertainty Analysis of LROC NAC Derived Elevation Models

    NASA Astrophysics Data System (ADS)

    Burns, K.; Yates, D. G.; Speyerer, E.; Robinson, M. S.

    2012-12-01

One of the primary objectives of the Lunar Reconnaissance Orbiter Camera (LROC) [1] is to gather stereo observations with the Narrow Angle Camera (NAC) to generate digital elevation models (DEMs). From an altitude of 50 km, the NAC acquires images with a pixel scale of 0.5 meters, and a dual NAC observation covers approximately 5 km cross-track by 25 km down-track. This low altitude was common from September 2009 to December 2011. Images acquired during the commissioning phase and those acquired from the fixed orbit (after 11 December 2011) have pixel scales that range from 0.35 meters at the south pole to 2 meters at the north pole. Altimetric observations obtained by the Lunar Orbiter Laser Altimeter (LOLA) provide range measurements between the spacecraft and the surface accurate to ±0.1 m [2]. However, uncertainties in spacecraft positioning can result in offsets (±20 m) between altimeter tracks accumulated over many orbits. The LROC team is currently developing a tool to automatically register altimetric observations to NAC DEMs [3]. Using a generalized pattern search (GPS) algorithm, the new automatic registration adjusts the spacecraft position and pointing information during periods when both NAC images and LOLA measurements of the same region were acquired, providing an absolute reference frame for the DEM. This information is then imported into SOCET SET to aid in creating controlled NAC DEMs. For every DEM, a figure of merit (FOM) map is generated using SOCET SET software. This is a valuable tool for determining the relative accuracy of a specific pixel in a DEM. Each pixel in a FOM map is assigned a value indicating its "quality": whether the pixel was shadowed, saturated, suspicious, interpolated/extrapolated, or successfully correlated. The overall quality of a NAC DEM is a function of both the absolute and relative accuracies. LOLA altimetry provides the most accurate absolute geodetic reference frame with which the NAC DEMs can be compared. 
Offsets between LOLA profiles and NAC DEMs are used to quantify the absolute accuracy. Small lateral movements in the LOLA points coupled with large changes in topography contribute to sizeable offsets between the datasets. The steep topography of Lichtenberg Crater provides an example of the offsets in the LOLA data. Ten tracks that cross the region of interest were used to calculate the offset, with a root mean square (RMS) error of 9.67 m, an average error of 7.02 m, and a standard deviation of 9.61 m. Large areas (>375 km²) covered by a mosaic of NAC DEMs were compared to the Wide Angle Camera (WAC) derived Global Lunar DTM 100 m topographic model (GLD100) [4]. The GLD100 has a pixel scale of 100 m; therefore, the NAC DEMs were downsampled to calculate the offsets between the two datasets. When comparing NAC DEMs to WAC DEMs, the vertical offsets were as follows [site name (average offset in meters, standard deviation in meters)]: Lichtenberg Crater (-7.74, 20.49), Giordano Bruno (-5.31, 28.80), Hortensius Domes (-3.52, 16.00), and Reiner Gamma (-0.99, 14.11). Resources: [1] Robinson et al. (2010) Space Sci. Rev. [2] Smith et al. (2010) Space Sci. Rev. [3] Speyerer et al. (2012) European Lunar Symp. [4] Scholten et al. (2012) JGR-Planets.
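The offset statistics quoted above (RMS error, average error, and standard deviation of the LOLA-minus-DEM elevation differences) can be computed as in this minimal sketch. The function name and interface are illustrative assumptions, not part of the LROC pipeline:

```python
import math

def offset_stats(lola_elevations, dem_elevations):
    """Summarize vertical offsets (LOLA minus DEM) at co-located points:
    returns (RMS error, average error, standard deviation)."""
    d = [a - b for a, b in zip(lola_elevations, dem_elevations)]
    n = len(d)
    mean = sum(d) / n
    rms = math.sqrt(sum(x * x for x in d) / n)
    std = math.sqrt(sum((x - mean) ** 2 for x in d) / n)
    return rms, mean, std
```

Note that RMS folds in any systematic bias, while the standard deviation measures only the scatter about the mean offset, which is why the abstract reports all three numbers.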

  15. Morphology and Composition of Localized Lunar Dark Mantle Deposits With LROC Data

    NASA Astrophysics Data System (ADS)

    Gustafson, O.; Bell, J. F.; Gaddis, L. R.; Hawke, B. R.; Robinson, M. S.; LROC Science Team

    2010-12-01

    Clementine color (ultraviolet, visible or UVVIS) and Lunar Reconnaissance Orbiter (LRO) Wide Angle (WAC) and Narrow Angle (NAC) camera data provide the means to investigate localized lunar dark-mantle deposits (DMDs) of potential pyroclastic origin. Our goals are to (1) examine the morphology and physical characteristics of these deposits with LROC WAC and NAC data; (2) extend methods used in earlier studies of lunar DMDs with Clementine spectral reflectance (CSR) data; (3) use LRO WAC multispectral data to complement and extend the CSR data for compositional analyses; and (4) apply these results to identify the likely mode of emplacement and study the diversity of compositions among these deposits. Pyroclastic deposits have been recognized all across the Moon, identified by their low albedo, smooth texture, and mantling relationship to underlying features. Gaddis et al. (2003) presented a compositional analysis of 75 potential lunar pyroclastic deposits (LPDs) based on CSR measurements. New LRO camera (LROC) data permit more extensive analyses of such deposits than previously possible. Our study began with six sites on the southeastern limb of the Moon that contain nine of the cataloged 75 potential pyroclastic deposits: Humboldt (4 deposits), Petavius, Barnard, Abel B, Abel C, and Titius. Our analysis found that some of the DMDs exhibit qualities characteristic of fluid emplacement, such as flat surfaces, sharp margins, embaying relationships, and flow textures. We conclude that the localized DMDs are a complex class of features, many of which may have formed by a combination of effusive and pyroclastic emplacement mechanisms. We have extended this analysis to include additional localized DMDs from the catalog of 75 potential pyroclastic deposits. 
We have examined high resolution (up to 0.5 m/p) NAC images as they become available to assess the mode of emplacement of the deposits, locate potential volcanic vents, and assess physical characteristics of the DMDs such as thickness, roughness, and rock abundance. Within and around each DMD, the Clementine UVVIS multispectral mosaic (100 m/p, 5 bands at 415, 750, 900, 950, and 1000 nm) and LROC WAC multispectral image cubes (75 to 400 m/p, 7 bands at 320, 360, 415, 565, 605, 645, and 690 nm) have been used to extract spectral reflectance data. Spectral ratio plots were prepared to compare deposits and draw conclusions regarding compositional differences, such as mafic mineral or titanium content and distribution, both within and between DMDs. The result of the study will be an improved classification of these deposits in terms of emplacement mechanisms and composition, including identifying compositional affinities among DMDs and between DMDs and other volcanic deposits.
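A spectral ratio image of the kind used for such plots can be sketched as below. The dict-of-bands cube layout is an assumption for illustration; the 415/750 nm pair is an example of a Clementine UVVIS ratio commonly used as a titanium indicator:

```python
def band_ratio(cube, numerator_nm, denominator_nm):
    """Pixel-wise ratio image between two bands of a multispectral cube,
    given as a dict mapping wavelength (nm) -> 2-D list of reflectances."""
    num, den = cube[numerator_nm], cube[denominator_nm]
    return [[a / b for a, b in zip(row_n, row_d)]
            for row_n, row_d in zip(num, den)]
```

Ratios of this kind suppress topographic shading (which scales both bands similarly) and leave the compositional signal, which is why they are preferred over single-band reflectance when comparing deposits.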

  16. On an assessment of surface roughness estimates from lunar laser altimetry pulse-widths for the Moon from LOLA using LROC narrow-angle stereo DTMs.

    NASA Astrophysics Data System (ADS)

    Muller, Jan-Peter; Poole, William

    2013-04-01

Neumann et al. [1] proposed that laser altimetry pulse-widths could be employed to derive "within-footprint" surface roughness, as opposed to surface roughness estimated between laser altimetry pierce-points, such as the example for Mars [2] and more recently from the 4-pointed star-shaped LOLA (Lunar Orbiter Laser Altimeter) onboard NASA's LRO [3]. Since 2009, LOLA has been collecting extensive global laser altimetry data with a 5 m footprint and ~25 m between the 5 points in a star shape. In order to assess how well surface roughness (defined as simple RMS after slope correction) derived from LROC matches surface roughness derived from LOLA footprints, publicly released LROC-NA (LRO Camera Narrow Angle) 1 m Digital Terrain Models (DTMs) were employed to measure the surface roughness directly within each 5 m footprint. A set of 20 LROC-NA DTMs were examined. Initially the match-up between the LOLA and LROC-NA orthorectified images (ORIs) is assessed visually to ensure that the co-registration is better than the LOLA footprint resolution. For each LOLA footprint, the pulse-width geolocation is then retrieved and used to "cookie-cut" the surface roughness and slopes derived from the LROC-NA DTMs. The investigation, which includes data from a variety of different landforms, shows little, if any, correlation between surface roughness estimated from DTMs and LOLA pulse-widths at sub-footprint scale. In fact, correlation between LOLA and LROC DTMs becomes perceptible only at baselines of 40-60 m for surface roughness and 20 m for slopes. [1] Neumann et al. Mars Orbiter Laser Altimeter pulse width measurements and footprint-scale roughness. Geophysical Research Letters (2003) vol. 30 (11), paper 1561. DOI: 10.1029/2003GL017048 [2] Kreslavsky and Head. Kilometer-scale roughness of Mars: results from MOLA data analysis. J Geophys Res (2000) vol. 105 (E11) pp. 26695-26711. [3] Rosenburg et al. 
Global surface slopes and roughness of the Moon from the Lunar Orbiter Laser Altimeter. Journal of Geophysical Research (2011) vol. 116, paper E02001. DOI: 10.1029/2010JE003716 [4] Chin et al. Lunar Reconnaissance Orbiter Overview: The Instrument Suite and Mission. Space Science Reviews (2007) vol. 129 (4) pp. 391-419
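The "simple RMS after slope correction" roughness measure can be illustrated with a 1-D profile version of the fit; for the 2-D DTM pixels inside a 5 m LOLA footprint one would fit a plane instead of a line. This sketch is not the authors' code:

```python
import math

def detrended_rms_roughness(x, z):
    """Surface roughness as simple RMS after slope correction: fit a
    least-squares line z = a + b*x to an elevation profile and return
    the RMS of the residuals (1-D analogue of the plane fit applied to
    DTM pixels within a laser footprint)."""
    n = len(x)
    mx = sum(x) / n
    mz = sum(z) / n
    # Least-squares slope and intercept.
    b = (sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z))
         / sum((xi - mx) ** 2 for xi in x))
    a = mz - b * mx
    # RMS of residuals about the fitted trend.
    resid = [zi - (a + b * xi) for xi, zi in zip(x, z)]
    return math.sqrt(sum(r * r for r in resid) / n)
```

A perfectly tilted but smooth surface has zero roughness under this definition, which is the point of the slope correction: pulse spreading responds to relief about the local trend, not to the regional tilt itself.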

  17. 3D camera tracking from disparity images

    NASA Astrophysics Data System (ADS)

    Kim, Kiyoung; Woo, Woontack

    2005-07-01

In this paper, we propose a robust camera tracking method that uses disparity images computed from the known parameters of a 3D camera and multiple epipolar constraints. We assume that the baselines between the lenses of the 3D camera and the intrinsic parameters are known. The proposed method reduces camera motion uncertainty encountered during camera tracking. Specifically, we first obtain corresponding feature points between the initial lenses using a normalized correlation method. In conjunction with matching features, we obtain disparity images. When the camera moves, the corresponding feature points obtained from each lens of the 3D camera are robustly tracked via the Kanade-Lucas-Tomasi (KLT) tracking algorithm. Secondly, the relative pose parameters of each lens are calculated via Essential matrices, which are computed from the Fundamental matrix estimated using the normalized 8-point algorithm with a RANSAC scheme. Then, we determine the scale factor of the translation matrix by d-motion. This is required because the camera motion obtained from the Essential matrix is determined only up to scale. Finally, we optimize the camera motion using multiple epipolar constraints between lenses and d-motion constraints computed from the disparity images. The proposed method can be widely adopted in Augmented Reality (AR) applications, 3D reconstruction using 3D cameras, and surveillance systems that need not only depth information but also camera motion parameters in real time.
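The step from the Fundamental to the Essential matrix uses the standard relation E = K2ᵀ F K1 with the known intrinsic matrices of the two lenses. A minimal plain-Python sketch, not the authors' code (3×3 matrices as lists of lists for illustration):

```python
def essential_from_fundamental(F, K1, K2):
    """Compute the Essential matrix E = K2^T . F . K1 from the
    Fundamental matrix F and the intrinsic matrices K1, K2 of the
    two lenses (all 3x3, lists of lists)."""
    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3))
                 for j in range(3)] for i in range(3)]
    def transpose(A):
        return [list(row) for row in zip(*A)]
    return matmul(matmul(transpose(K2), F), K1)
```

With identity intrinsics (i.e., already-normalized image coordinates), E and F coincide, which is a convenient sanity check for an implementation.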

  18. AIM: Ames Imaging Module Spacecraft Camera

    NASA Technical Reports Server (NTRS)

    Thompson, Sarah

    2015-01-01

The AIM camera is a small, lightweight, low-power, low-cost imaging system developed at NASA Ames. Though it has imaging capabilities similar to those of $1M-plus spacecraft cameras, it does so on a fraction of the mass, power, and cost budget.

  19. Coherent infrared imaging camera (CIRIC)

    SciTech Connect

    Hutchinson, D.P.; Simpson, M.L.; Bennett, C.A.; Richards, R.K.; Emery, M.S.; Crutcher, R.I.; Sitter, D.N. Jr.; Wachter, E.A.; Huston, M.A.

    1995-07-01

    New developments in 2-D, wide-bandwidth HgCdTe (MCT) and GaAs quantum-well infrared photodetectors (QWIP) coupled with Monolithic Microwave Integrated Circuit (MMIC) technology are now making focal plane array coherent infrared (IR) cameras viable. Unlike conventional IR cameras which provide only thermal data about a scene or target, a coherent camera based on optical heterodyne interferometry will also provide spectral and range information. Each pixel of the camera, consisting of a single photo-sensitive heterodyne mixer followed by an intermediate frequency amplifier and illuminated by a separate local oscillator beam, constitutes a complete optical heterodyne receiver. Applications of coherent IR cameras are numerous and include target surveillance, range detection, chemical plume evolution, monitoring stack plume emissions, and wind shear detection.

  20. Multiplex imaging with multiple-pinhole cameras

    NASA Technical Reports Server (NTRS)

    Brown, C.

    1974-01-01

    When making photographs in X rays or gamma rays with a multiple-pinhole camera, the individual images of an extended object such as the sun may be allowed to overlap. Then the situation is in many ways analogous to that in a multiplexing device such as a Fourier spectroscope. Some advantages and problems arising with such use of the camera are discussed, and expressions are derived to describe the relative efficacy of three exposure/postprocessing schemes using multiple-pinhole cameras.

  1. Single-Camera Panoramic-Imaging Systems

    NASA Technical Reports Server (NTRS)

    Lindner, Jeffrey L.; Gilbert, John

    2007-01-01

Panoramic detection systems (PDSs) are developmental video monitoring and image-data processing systems that, as their name indicates, acquire panoramic views. More specifically, a PDS acquires images from an approximately cylindrical field of view that surrounds an observation platform. The main subsystems and components of a basic PDS are a charge-coupled-device (CCD) video camera and lens, transfer optics, a panoramic imaging optic, a mounting cylinder, and an image-data-processing computer. The panoramic imaging optic is what makes it possible for the single video camera to image the complete cylindrical field of view; in order to image the same scene without the benefit of the panoramic imaging optic, it would be necessary to use multiple conventional video cameras, which have relatively narrow fields of view.

  2. LRO Camera Imaging of Potential Landing Sites in the South Pole-Aitken Basin

    NASA Astrophysics Data System (ADS)

Jolliff, B. L.; Wiseman, S. M.; Gibson, K. E.; Lauber, C.; Robinson, M.; Gaddis, L. R.; Scholten, F.; Oberst, J.; LROC Science and Operations Team

    2010-12-01

We show results of WAC (Wide Angle Camera) and NAC (Narrow Angle Camera) imaging of candidate landing sites within the South Pole-Aitken (SPA) basin of the Moon obtained by the Lunar Reconnaissance Orbiter during the first full year of operation. These images enable a greatly improved delineation of geologic units, determination of unit thicknesses and stratigraphy, and detailed surface characterization that has not been possible with previous data. WAC imaging encompasses the entire SPA basin, located within an area ranging from ~130-250 degrees east longitude and ~15 degrees south latitude to the South Pole, at different incidence angles, with the specific range of incidence dependent on latitude. The WAC images show morphology and surface detail at better than 100 m per pixel, with spatial coverage and quality unmatched by previous data sets. NAC images reveal details at the sub-meter pixel scale that enable new ways to evaluate the origins and stratigraphy of deposits. Key among new results is the capability to discern extents of ancient volcanic deposits that are covered by later crater ejecta (cryptomare) [see Petro et al., this conference] using new, complementary color data from Kaguya and Chandrayaan-1. Digital topographic models derived from WAC and NAC geometric stereo coverage show broad intercrater-plains areas where slopes are acceptably low for high-probability safe landing [see Archinal et al., this conference]. NAC images allow mapping and measurement of small, fresh craters that excavated boulders and thus provide information on surface roughness and depth to bedrock beneath regolith and plains deposits. We use these data to estimate deposit thickness in areas of interest for landing and potential sample collection to better understand the possible provenance of samples. 
Also, small regions marked by fresh impact craters and their associated boulder fields are readily identified by their bright ejecta patterns and marked as lander keep-out zones. We will show examples of LROC data including those for Constellation sites on the SPA rim and interior, a site between Bose and Alder Craters, sites east of Bhabha Crater, and sites on and near the “Mafic Mound” [see Pieters et al., this conference]. Together the LROC data and complementary products provide essential information for ensuring identification of safe landing and sampling sites within SPA basin that has never before been available for a planetary mission.

3. LROC Observations of Permanently Shadowed Regions: Seeing into the Dark

    NASA Astrophysics Data System (ADS)

    Koeber, S. D.; Robinson, M. S.

    2013-12-01

Permanently shadowed regions (PSRs) near the lunar poles that receive secondary illumination from nearby Sun-facing slopes were imaged by the Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Cameras (NAC). Typically, secondary lighting is optimal in polar areas around the respective solstices and when the LRO orbit is nearly coincident with the sub-solar point (low spacecraft beta angles). NAC PSR images provide the means to search for evidence of surface frosts and unusual morphologies from ice-rich regolith, and aid in planning potential landing sites for future in-situ exploration. Secondary illumination imaging in PSRs requires NAC integration times typically more than ten times greater than nominal imaging. The increased exposure time results in downtrack smear that decreases the spatial resolution of the NAC PSR images. Most long-exposure NAC images of PSRs were acquired with exposure times of 24.2 ms (1 m by 40 m pixels, resampled to 20 m) and 12 ms (1 m by 20 m, resampled to 10 m). The initial campaign to acquire long-exposure NAC images of PSRs in the north pole region ran from February 2013 to April 2013. Relative to the south polar region, PSRs near the north pole are generally smaller (D < 24 km) and located in simple craters. Long-exposure NAC images of PSRs in simple craters are often well illuminated by secondary light reflected from Sun-facing crater slopes during the northern summer solstice, allowing many PSRs to be imaged with the shorter exposure time of 12 ms (resampled to 10 m). With the exception of some craters within Peary crater, most northern PSRs with diameters >6 km were successfully imaged (e.g., Whipple, Hermite A, and Rozhestvenskiy U). The third PSR south polar campaign began in April 2013 and will continue until October 2013. The third campaign will expand previous NAC coverage of PSRs and follow up on discoveries with new images of higher signal-to-noise ratio (SNR), higher resolution, and varying secondary illumination conditions. 
Utilizing previous campaign images and the Sun's position, an individualized approach to targeting each crater drives this campaign. Secondary lighting within the PSRs, though somewhat diffuse, arrives at low incidence angles; coupled with nadir NAC imaging, this results in large phase angles. Such conditions tend to reduce albedo contrasts, complicating identification of patchy frost or ice deposits. Within the long-exposure PSR images, a few small craters (D < 200 m) with highly reflective ejecta blankets have been identified and interpreted as small fresh impact craters. Sylvester N and Main L are Copernican-age craters with PSRs; NAC images reveal debris flows, boulders, and morphologically fresh interior walls indicative of their young age. The identification of albedo anomalies associated with these fresh craters and debris flows indicates that strong albedo contrasts (~2x) associated with small fresh impact craters can be distinguished in PSRs. Lunar highland material has an albedo of ~0.2, while pure water frost has an albedo of ~0.9. If features in PSRs have an albedo similar to lunar highlands, significant surface frost deposits should produce detectable reflective anomalies in the NAC images. However, no reflective anomalies attributable to frost have thus far been identified in PSRs.
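The downtrack smear figures quoted above follow directly from the line-scan geometry: the downtrack extent of one pixel is the ground-track speed multiplied by the line exposure time. A sketch, assuming a typical LRO ground-track speed of ~1.6 km/s (an assumed round number, not a value stated in the abstract):

```python
def downtrack_pixel_length(ground_speed_m_s, exposure_s):
    """Down-track extent (m) of one line-scan pixel: the distance the
    ground track moves during a single line exposure."""
    return ground_speed_m_s * exposure_s
```

With a 1.6 km/s ground speed, a 24.2 ms exposure gives roughly 39 m and a 12 ms exposure roughly 19 m of downtrack smear, consistent with the 1 m by 40 m and 1 m by 20 m pixel dimensions quoted above.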

  4. Digital image processing of metric camera imagery

    NASA Astrophysics Data System (ADS)

    Lohmann, P.

    1985-04-01

    The use of digitized Spacelab metric camera imagery for map updating is demonstrated for an area of Germany featuring agricultural and industrial areas, and a region of the White Nile. LANDSAT and Spacelab images were combined, and digital image processing techniques used for image enhancement. Updating was achieved by semiautomatic techniques, but for many applications manual editing may be feasible.

  5. Generating Stereoscopic Television Images With One Camera

    NASA Technical Reports Server (NTRS)

    Coan, Paul P.

    1996-01-01

Straightforward technique for generating stereoscopic television images involves use of single television camera translated laterally between left- and right-eye positions. Camera acquires one of images (left- or right-eye image), and video signal from image delayed while camera translated to position where it acquires other image. Length of delay chosen so both images displayed simultaneously or as nearly simultaneously as necessary to obtain stereoscopic effect. Technique amenable to zooming in on small areas within broad scenes. Potential applications include three-dimensional viewing of geological features and meteorological events from spacecraft and aircraft, inspection of workpieces moving along conveyor belts, and aiding ground and water search-and-rescue operations. Also used to generate and display imagery for public education and general information, and possibly for medical purposes.

  6. Camera lens adapter magnifies image

    NASA Technical Reports Server (NTRS)

    Moffitt, F. L.

    1967-01-01

A Polaroid Land camera with an illuminated 7-power magnifier adapted to the lens photographs weld flaws. The flaws are located by inspection with a 10-power magnifying glass and then photographed with this device, thus providing immediate pictorial data for use in remedial procedures.

7. Camera Trajectory from Wide Baseline Images

    NASA Astrophysics Data System (ADS)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

Camera trajectory estimation, which is closely related to structure from motion computation, is one of the fundamental tasks in computer vision. Reliable camera trajectory estimation plays an important role in 3D reconstruction, self-localization, and object recognition. There are essential issues for reliable camera trajectory estimation, for instance, choice of the camera and its geometric projection model, camera calibration, image feature detection and description, and robust 3D structure computation. Most approaches rely on classical perspective cameras because of the simplicity of their projection models and the ease of their calibration. However, classical perspective cameras offer only a limited field of view, and thus occlusions and sharp camera turns may cause consecutive frames to look completely different when the baseline becomes longer. This makes image feature matching very difficult (or impossible), and camera trajectory estimation fails under such conditions. These problems can be avoided if omnidirectional cameras, e.g. with a fish-eye lens converter, are used. The hardware we use in practice is a Nikon FC-E9 mounted via a mechanical adapter onto a Kyocera Finecam M410R digital camera. The Nikon FC-E9 is a megapixel omnidirectional add-on converter with a 180° view angle which provides images of photographic quality. The Kyocera Finecam M410R delivers 2272×1704 images at 3 frames per second. The resulting combination yields a circular view of diameter 1600 pixels in the image. Since consecutive frames of the omnidirectional camera often share a common region in 3D space, image feature matching is often feasible. On the other hand, the calibration of these cameras is non-trivial and is crucial for the accuracy of the resulting 3D reconstruction. 
We calibrate omnidirectional cameras off-line using the state-of-the-art technique and Mičušík's two-parameter model, which links the radius r of an image point to the angle θ of its corresponding ray w.r.t. the optical axis as θ = ar/(1 + br²). After a successful calibration, we know the correspondence of image points to 3D optical rays in the coordinate system of the camera. The following steps aim at finding the transformation between the camera and world coordinate systems, i.e. the pose of the camera in the 3D world, using 2D image matches. For computing 3D structure, we construct a set of tentative matches by detecting different affine covariant feature regions, including MSER, Harris Affine, and Hessian Affine, in the acquired images. These features are an alternative to the popular SIFT features and work comparably in our situation. Parameters of the detectors are chosen to limit the number of regions to 1-2 thousand per image. The detected regions are assigned local affine frames (LAF) and transformed into standard positions w.r.t. their LAFs. Discrete Cosine Descriptors are computed for each region in the standard position. Finally, mutual distances of all regions in one image and all regions in the other image are computed as the Euclidean distances of their descriptors, and tentative matches are constructed by selecting the mutually closest pairs. As opposed to methods using short baseline images, simpler image features which are not affine covariant cannot be used, because the viewpoint can change a lot between consecutive frames. Furthermore, feature matching has to be performed on the whole frame because no assumptions on the proximity of the consecutive projections can be made for wide baseline images. This makes feature detection, description, and matching much more time-consuming than for short baseline images and limits real-time operation to low frame rate sequences. 
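The two-parameter calibration model above maps directly to code. A minimal sketch, not the authors' implementation; a and b are the constants obtained from the off-line calibration, and the values used below are placeholders:

```python
def ray_angle(r, a, b):
    """Two-parameter fish-eye model: theta = a*r / (1 + b*r^2), mapping
    the image-point radius r (distance from the principal point, in
    pixels) to the angle theta (radians) of its optical ray w.r.t. the
    optical axis. a and b come from an off-line calibration."""
    return a * r / (1.0 + b * r * r)
```

With b = 0 the model reduces to the linear (equidistant) projection theta = a*r; a positive b bends the curve over at large radii, capturing the compression of the fish-eye image toward its rim.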
Robust 3D structure can be computed by RANSAC, which searches for the largest subset of the set of tentative matches that is, within a predefined threshold ε, consistent with an epipolar geometry. We use ordered sampling to draw 5-tuples from the list of tentative matches ordered ascendingly by the distance of their descriptors, which may help to reduce the number of samples in RANSAC. From each 5-tuple, relative orientation is computed by solving the 5-point minimal relative orientation problem for calibrated cameras. Often, there are several models which are supported by a large number of matches. Thus the chance that the correct model, even if it has the largest support, will be found by running a single RANSAC is small. It has been suggested to generate models by randomized sampling as in RANSAC but to use soft (kernel) voting for a parameter instead of looking for the maximal support. The best model is then selected as the one with the parameter closest to the maximum in the accumulator space. In our case, we vote in a two-dimensional accumulator for the estimated camera motion direction. However, unlike that approach, we do not cast votes directly by each sampled epipolar geometry but by the best epipolar geometries recovered by ordered sampling in RANSAC. With our technique, we could go up to 98.5% contamination by mismatches with effort comparable to what simple RANSAC needs for 84% contamination. The relative camera orientation with the motion direction closest to the maximum in the voting space is finally selected. As already mentioned in the first paragraph, the use of camera trajectory estimates is quite wide. We have previously introduced a technique for measuring the size of camera translation relative to the observed scene which uses the dominant apical angle computed at the reconstructed scene points and is robust against mismatches. 
The experiments demonstrated that the measure can be used to improve the robustness of camera path computation and object recognition for methods which use a geometric constraint, e.g. the ground plane, such as is done for the detection of pedestrians. Using the camera trajectories, perspective cutouts with stabilized horizon are constructed, and an arbitrary object recognition routine designed to work with images acquired by perspective cameras can be used without any further modifications.

  8. Development of gamma ray imaging cameras

    SciTech Connect

    Wehe, D.K.; Knoll, G.F.

    1992-05-28

In January 1990, the Department of Energy initiated this project with the objective to develop the technology for general purpose, portable gamma ray imaging cameras useful to the nuclear industry. The ultimate goal of this R&D initiative is to develop the analog to the color television camera, where the camera would respond to gamma rays instead of visible photons. The two-dimensional real-time image would be displayed and would indicate the geometric location of the radiation relative to the camera's orientation, while the brightness and 'color' would indicate the intensity and energy of the radiation (and hence identify the emitting isotope). There is a strong motivation for developing such a device for applications within the nuclear industry, for both high- and low-level waste repositories, for environmental restoration problems, and for space and fusion applications. At present, there are no general purpose radiation cameras capable of producing spectral images for such practical applications. At the time of this writing, work on this project has been underway for almost 18 months. Substantial progress has been made in the project's two primary areas: mechanically-collimated (MCC) and electronically-collimated camera (ECC) designs. We present developments covering the mechanically-collimated design, and then discuss the efforts on the electronically-collimated camera. The renewal proposal addresses the continuing R&D efforts for the third year effort. 8 refs.

  9. Development of gamma ray imaging cameras

    NASA Astrophysics Data System (ADS)

    Wehe, D. K.; Knoll, G. F.

    1992-05-01

    In January 1990, the Department of Energy initiated this project with the objective to develop the technology for general purpose, portable gamma ray imaging cameras useful to the nuclear industry. The ultimate goal of this R&D initiative is to develop the analog to the color television camera where the camera would respond to gamma rays instead of visible photons. The two-dimensional real-time image would be displayed and indicate the geometric location of the radiation relative to the camera's orientation, while the brightness and 'color' would indicate the intensity and energy of the radiation and, hence, identify the emitting isotope. There is a strong motivation for developing such a device for applications within the nuclear industry, for both high- and low-level waste repositories, for environmental restoration problems, and for space and fusion applications. At present, there are no general purpose radiation cameras capable of producing spectral images for such practical applications. At the time of this writing, work on this project has been underway for almost 18 months. Substantial progress has been made in the project's two primary areas: mechanically-collimated (MCC) and electronically-collimated camera (ECC) designs. We present developments covering the mechanically-collimated design, and then discuss the efforts on the electronically-collimated camera. The renewal proposal addresses the continuing R&D efforts for the third year effort.

  10. Occluded object imaging via optimal camera selection

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Zhang, Yanning; Tong, Xiaomin; Ma, Wenguang; Yu, Rui

    2013-12-01

High-performance occluded object imaging in cluttered scenes is a significant and challenging task for many computer vision applications. Recently, camera array synthetic aperture imaging has proven to be an effective way of seeing objects through occlusion. However, the imaging quality of the occluded object is often significantly degraded by the shadows of the foreground occluder. Although some works have been presented to label the foreground occluder via object segmentation or 3D reconstruction, these methods fail in the case of complicated occluders and severe occlusion. In this paper, we present a novel optimal camera selection algorithm to solve the above problem. The main characteristics of this algorithm include: (1) Instead of synthetic aperture imaging, we formulate the occluded object imaging problem as an optimal camera selection and mosaicking problem. To the best of our knowledge, our proposed method is the first one for occluded object mosaicking. (2) A greedy optimization framework is presented to propagate the visibility information among various depth focus planes. (3) A multiple-label energy minimization formulation is designed in each plane to select the optimal camera. The energy is estimated in the synthetic aperture image volume and integrates multi-view intensity consistency, the previously derived visibility property, and camera view smoothness, and is minimized via graph cuts. We compare our method with state-of-the-art synthetic aperture imaging algorithms, and extensive experimental results with qualitative and quantitative analysis demonstrate the effectiveness and superiority of our approach.
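A stripped-down sketch of the per-pixel data term based on multi-view intensity consistency: among the cameras that see a pixel on the current depth plane, occluded views tend to be outliers, so the camera whose intensity is closest to the across-view median is preferred. The full method also propagates visibility across depth planes and adds a smoothness term minimized via graph cuts; this sketch keeps only the data term, and the interface and names are assumptions:

```python
def select_camera(pixel_views):
    """Pick, for one pixel, the camera whose observed intensity is
    closest to the median intensity across all views.

    pixel_views: dict mapping camera id -> intensity observed for this
    pixel on the current depth focus plane.
    """
    vals = sorted(pixel_views.values())
    n = len(vals)
    # Median across views; occluded cameras show up as outliers.
    median = vals[n // 2] if n % 2 else 0.5 * (vals[n // 2 - 1] + vals[n // 2])
    return min(pixel_views, key=lambda c: abs(pixel_views[c] - median))
```

Mosaicking the selected per-pixel intensities then yields an occlusion-free composite, instead of averaging all views as plain synthetic aperture imaging does.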

  11. Gamma-ray imaging with Compton cameras

    NASA Astrophysics Data System (ADS)

    Phillips, Gary W.

    1995-05-01

    Compton cameras use the kinematics of Compton scattering to construct a source image without the use of collimators or masks. Compton telescopes were first built in the 1970s for astronomical observations. The first laboratory instrument was designed for medical imaging. Recently, three dimensional imaging has been demonstrated using an array of high resolution germanium detectors for characterization of mixed radioactive waste containers. Image resolution depends on the detector accuracy in both energy and position and on the image reconstruction algorithm. Advantages of Compton imaging include a wide field of view, background suppression, the ability to image high energy gamma rays (up to 10 MeV) and the ability to obtain three dimensional images from a fixed position on one side of the source without the need for tomography. The operation of Compton cameras will be reviewed along with recent results and future prospects.
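    The cone back-projection at the heart of Compton imaging follows directly from the scattering kinematics. A minimal sketch of recovering the cone half-angle from the two energy deposits (energies in keV; detector geometry and image reconstruction omitted; function and variable names are illustrative):

    ```python
    import math

    M_E_C2 = 511.0  # electron rest energy, keV

    def compton_cone_angle(e_deposited, e_total):
        """Half-angle (radians) of the back-projected cone on which the source lies.

        e_deposited: energy left in the scatter detector (keV)
        e_total:     full photon energy, i.e. the sum of both detector signals (keV)
        """
        e_scattered = e_total - e_deposited
        cos_theta = 1.0 - M_E_C2 * (1.0 / e_scattered - 1.0 / e_total)
        if not -1.0 <= cos_theta <= 1.0:
            raise ValueError("energies inconsistent with Compton kinematics")
        return math.acos(cos_theta)

    # A 662 keV Cs-137 photon depositing 200 keV in the scatter detector:
    angle_deg = math.degrees(compton_cone_angle(200.0, 662.0))
    ```

    Intersecting many such cones from successive events is what yields the image without a collimator; the angular accuracy of each cone is set by the energy and position resolution noted above.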

  12. Images obtained with a compact gamma camera

    NASA Astrophysics Data System (ADS)

    Bird, A. J.; Ramsden, D.

    1990-12-01

    A design for a compact gamma camera based on the use of a position-sensitive photomultiplier is presented. Tests have been carried out on a prototype detector system, having a sensitive area of 25 cm², using both a simple pinhole aperture and a parallel collimator. Images of a thyroid phantom are presented, and after processing to reduce the artefacts introduced by the use of a pinhole aperture, the quality is compared with that obtained using a standard Anger camera.

  13. Classroom multispectral imaging using inexpensive digital cameras.

    NASA Astrophysics Data System (ADS)

    Fortes, A. D.

    2007-12-01

    The proliferation of increasingly cheap digital cameras in recent years means that it has become easier to exploit the broad wavelength sensitivity of their CCDs (360 - 1100 nm) for classroom-based teaching. With the right tools, it is possible to open children's eyes to the invisible world of UVA and near-IR radiation either side of our narrow visual band. The camera-filter combinations I describe can be used to explore the world of animal vision, looking for invisible markings on flowers, or in bird plumage, for example. In combination with a basic spectroscope (such as the Project-STAR handheld plastic spectrometer, $25), it is possible to investigate the range of human vision and camera sensitivity, and to explore the atomic and molecular absorption lines from the solar and terrestrial atmospheres. My principal use of the cameras has been to teach multispectral imaging of the kind used to determine remotely the composition of planetary surfaces. A range of camera options, from $50 circuit-board-mounted CCDs up to $900 semi-pro infrared camera kits (including mobile phones along the way), and various UV-vis-IR filter options will be presented. Examples of multispectral images taken with these systems are used to illustrate the range of classroom topics that can be covered. Particular attention is given to learning about spectral reflectance curves and comparing images from Earth and Mars taken using the same filter combination that is used on the Mars Rovers.
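    A classroom band-ratio exercise of the kind described can be sketched in a few lines with NumPy; the arrays below are made-up stand-ins for frames taken through red-pass and IR-pass filters:

    ```python
    import numpy as np

    def band_ratio(nir, red, eps=1e-6):
        """Pixel-wise NIR/red ratio; healthy vegetation reflects strongly in the near-IR."""
        return nir.astype(float) / (red.astype(float) + eps)

    # Toy 2x2 "images": column 0 is a leafy pixel, column 1 is bare soil
    nir = np.array([[180, 40], [170, 35]])
    red = np.array([[30, 60], [25, 55]])
    ratio = band_ratio(nir, red)
    ```

    The same ratio computed on frames taken through matching filters of Earth and Mars scenes makes the reflectance-curve comparison described above concrete for students.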

  14. Multiple-image oscilloscope camera

    DOEpatents

    Yasillo, Nicholas J.

    1978-01-01

    An optical device for placing automatically a plurality of images at selected locations on one film comprises a stepping motor coupled to a rotating mirror and lens. A mechanical connection from the mirror controls an electronic logical system to allow rotation of the mirror to place a focused image at the desired preselected location. The device is of especial utility when used to place four images on a single film to record oscilloscope views obtained in gamma radiography.

  15. Prediction of Viking lander camera image quality

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Burcher, E. E.; Jobson, D. J.; Wall, S. D.

    1976-01-01

    Formulations are presented that permit prediction of image quality as a function of camera performance, surface radiance properties, and lighting and viewing geometry. Predictions made for a wide range of surface radiance properties reveal that image quality depends strongly on proper camera dynamic range command and on favorable lighting and viewing geometry. Proper camera dynamic range commands depend mostly on the surface albedo that will be encountered. Favorable lighting and viewing geometries depend mostly on lander orientation with respect to the diurnal sun path over the landing site, and tend to be independent of surface albedo and illumination scattering function. Side lighting with low sun elevation angles (10 to 30 deg) is generally favorable for imaging spatial details and slopes, whereas high sun elevation angles are favorable for measuring spectral reflectances.
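    The dependence on a proper dynamic range command can be illustrated with a toy gain/offset calculation. The linear DN model below is an assumption for illustration, not the actual Viking camera commanding scheme:

    ```python
    def dn_command(radiance_min, radiance_max, dn_max=255):
        """Choose gain and offset so the expected scene radiance range,
        set mostly by surface albedo and lighting geometry, fills the DN range."""
        gain = dn_max / (radiance_max - radiance_min)
        offset = -gain * radiance_min
        return gain, offset

    # A bright (high-albedo) scene commands a lower gain than a dark one:
    gain_bright, _ = dn_command(2.0, 10.0)
    gain_dark, _ = dn_command(0.5, 2.5)
    ```

    Commanding the wrong range either clips highlights or wastes quantization levels, which is why the prediction formulation keys on expected surface albedo.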

  16. Investigation of Layered Lunar Mare Lava flows through LROC Imagery and Terrestrial Analogs

    NASA Astrophysics Data System (ADS)

    Needham, H.; Rumpf, M.; Sarah, F.

    2013-12-01

    High resolution images of the lunar surface have revealed layered deposits in the walls of impact craters and pit craters in the lunar maria, which are interpreted to be sequences of stacked lava flows. The goal of our research is to establish quantitative constraints and uncertainties on the thicknesses of individual flow units comprising the layered outcrops, in order to model the cooling history of lunar lava flows. The underlying motivation for this project is to identify locations hosting intercalated units of lava flows and paleoregoliths, which may preserve snapshots of the ancient solar wind and other extra-lunar particles, thereby providing potential sampling localities for future missions to the lunar surface. Our approach involves mapping layered outcrops using high-resolution imagery acquired by the Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC), with constraints on flow unit dimensions provided by Lunar Orbiter Laser Altimeter (LOLA) data. We have measured thicknesses of ~ 2 to > 20 m. However, there is considerable uncertainty in the definition of contacts between adjacent units, primarily because talus commonly obscures contacts and/or prevents lateral tracing of the flow units. In addition, flows may have thicknesses or geomorphological complexity at scales approaching the limit of resolution of the data, which hampers distinguishing one unit from another. To address these issues, we have undertaken a terrestrial analog study using World View 2 satellite imagery of layered lava sequences on Oahu, Hawaii. These data have a resolution comparable to LROC NAC images of 0.5 m. The layered lava sequences are first analyzed in ArcGIS to obtain an initial estimate of the number and thicknesses of flow units identified in the images. We next visit the outcrops in the field to perform detailed measurements of the individual units. 
We have discovered that fewer flow units are identified in the remote sensing data than in the field analysis, because the resolution of the data precludes identification of subtle flow contacts, so the identified 'units' are in fact multiple compound units. Other factors such as vegetation and shadows may alter the view in the satellite imagery. This means that clarity in the lunar study may also be affected by factors such as lighting angle and the amount of debris overlying the lava sequence. The compilation of field and remote sensing measurements allows us to determine the uncertainty on unit thicknesses, which can be modeled to establish the uncertainty on the calculated depths of penetration of the resulting heat pulse into the underlying regolith. This in turn provides insight into the survivability of extra-lunar particles in paleoregolith layers sandwiched between lava flows.
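    The cooling-history modeling that motivates these thickness measurements rests on the conductive timescale t ~ d²/κ. A rough order-of-magnitude sketch (the basalt thermal diffusivity value is an assumed, illustrative figure):

    ```python
    def cooling_time_years(thickness_m, kappa=7e-7):
        """Order-of-magnitude conductive cooling time, t ~ d^2 / kappa.
        kappa is an assumed thermal diffusivity for basalt (~7e-7 m^2/s)."""
        seconds = thickness_m ** 2 / kappa
        return seconds / (3600.0 * 24.0 * 365.25)

    # A 10 m flow unit cools on a timescale of a few years:
    t10 = cooling_time_years(10.0)
    ```

    Because the timescale goes as thickness squared, the uncertainty on unit thickness propagates strongly into the cooling time and hence into the heat pulse delivered to an underlying paleoregolith.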

  17. Improvement of passive THz camera images

    NASA Astrophysics Data System (ADS)

    Kowalski, Marcin; Piszczek, Marek; Palka, Norbert; Szustakowski, Mieczyslaw

    2012-10-01

    Terahertz technology is one of the emerging technologies that have the potential to change our lives. There are many attractive applications in fields like security, astronomy, biology and medicine. Until recent years, terahertz (THz) waves were an undiscovered, or more importantly, an unexploited region of the electromagnetic spectrum, owing to the difficulty of generating and detecting THz waves. Recent advances in hardware technology have started to open up the field to new applications such as THz imaging. THz waves can penetrate various materials; however, automated processing of THz images can be challenging. The THz frequency band is especially suited to penetrating clothing because this radiation has no harmful ionizing effects and is thus safe for human beings. Strong technological development in this band has produced a few interesting devices. Even though the development of THz cameras is an emerging topic, commercially available passive cameras still offer images of poor quality, mainly because of their low resolution and low detector sensitivity. THz image processing is therefore a challenging and urgent topic, and digital THz image processing is a promising and cost-effective approach for demanding security and defense applications. In this article we demonstrate the results of image quality enhancement and image fusion of images captured by a commercially available passive THz camera, by means of various combined methods. Our research is focused on the detection of dangerous objects - guns, knives and bombs - hidden under popular types of clothing.
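    Simple enhancement steps of the kind combined in such work can be sketched as a percentile contrast stretch followed by a naive weighted fusion; the parameters and fusion rule here are illustrative assumptions, not the authors' method:

    ```python
    import numpy as np

    def stretch(img, p_lo=2, p_hi=98):
        """Percentile contrast stretch to [0, 1]; robust to hot/cold outlier pixels."""
        lo, hi = np.percentile(img, [p_lo, p_hi])
        return np.clip((img - lo) / (hi - lo + 1e-12), 0.0, 1.0)

    def fuse(thz, visible, alpha=0.5):
        """Naive weighted fusion of a stretched THz frame with a co-registered visible frame."""
        return alpha * stretch(thz) + (1.0 - alpha) * stretch(visible)

    # Synthetic 10x10 frames standing in for real captures
    thz = np.arange(100, dtype=float).reshape(10, 10)
    fused = fuse(thz, thz[::-1])
    ```

    Real pipelines replace the fixed-weight average with multiscale or feature-level fusion, but the stretch-then-combine structure is the same.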

  18. Imaging characteristics of photogrammetric camera systems

    USGS Publications Warehouse

    Welch, R.; Halliday, J.

    1973-01-01

    In view of the current interest in high-altitude and space photographic systems for photogrammetric mapping, the United States Geological Survey (U.S.G.S.) undertook a comprehensive research project designed to explore the practical aspects of applying the latest image quality evaluation techniques to the analysis of such systems. The project had two direct objectives: (1) to evaluate the imaging characteristics of current U.S.G.S. photogrammetric camera systems; and (2) to develop methodologies for predicting the imaging capabilities of photogrammetric camera systems, comparing conventional systems with new or different types of systems, and analyzing the image quality of photographs. Image quality was judged in terms of a number of evaluation factors including response functions, resolving power, and the detectability and measurability of small detail. The limiting capabilities of the U.S.G.S. 6-inch and 12-inch focal length camera systems were established by analyzing laboratory and aerial photographs in terms of these evaluation factors. In the process, the contributing effects of relevant parameters such as lens aberrations, lens aperture, shutter function, image motion, film type, and target contrast were assessed, yielding procedures for analyzing image quality and for predicting and comparing performance capabilities. © 1973.

  19. COMPUTER ANALYSIS OF PLANAR GAMMA CAMERA IMAGES

    EPA Science Inventory



    COMPUTER ANALYSIS OF PLANAR GAMMA CAMERA IMAGES

    T Martonen1 and J Schroeter2

    1Experimental Toxicology Division, National Health and Environmental Effects Research Laboratory, U.S. EPA, Research Triangle Park, NC 27711 USA and 2Curriculum in Toxicology, Unive...

  20. 15 CFR 743.3 - Thermal imaging camera reporting.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 15 Commerce and Foreign Trade 2 2013-01-01 2013-01-01 false Thermal imaging camera reporting. 743... REPORTING § 743.3 Thermal imaging camera reporting. (a) General requirement. Exports of thermal imaging... are not authorized by an individually validated license of thermal imaging cameras controlled by...

  1. 15 CFR 743.3 - Thermal imaging camera reporting.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 15 Commerce and Foreign Trade 2 2011-01-01 2011-01-01 false Thermal imaging camera reporting. 743... REPORTING § 743.3 Thermal imaging camera reporting. (a) General requirement. Exports of thermal imaging... are not authorized by an individually validated license of thermal imaging cameras controlled by...

  2. 15 CFR 743.3 - Thermal imaging camera reporting.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 15 Commerce and Foreign Trade 2 2012-01-01 2012-01-01 false Thermal imaging camera reporting. 743... REPORTING § 743.3 Thermal imaging camera reporting. (a) General requirement. Exports of thermal imaging... are not authorized by an individually validated license of thermal imaging cameras controlled by...

  3. Imaging spectrometer/camera having convex grating

    NASA Technical Reports Server (NTRS)

    Reininger, Francis M. (Inventor)

    2000-01-01

    An imaging spectrometer has fore-optics coupled to a spectral resolving system with an entrance slit extending in a first direction at an imaging location of the fore-optics for receiving the image; a convex diffraction grating for separating the image into a plurality of spectra of predetermined wavelength ranges; a spectrometer array for detecting the spectra; and at least one concave spherical mirror concentric with the diffraction grating for relaying the image from the entrance slit to the diffraction grating and from the diffraction grating to the spectrometer array. In one embodiment, the spectrometer is configured in a lateral mode in which the entrance slit and the spectrometer array are displaced laterally on opposite sides of the diffraction grating in a second direction substantially perpendicular to the first direction. In another embodiment, the spectrometer is combined with a polychromatic imaging camera array disposed adjacent said entrance slit for recording said image.

  4. Pre-flight and On-orbit Geometric Calibration of the Lunar Reconnaissance Orbiter Camera

    NASA Astrophysics Data System (ADS)

    Speyerer, E. J.; Wagner, R. V.; Robinson, M. S.; Licht, A.; Thomas, P. C.; Becker, K.; Anderson, J.; Brylow, S. M.; Humm, D. C.; Tschimmel, M.

    2016-04-01

    The Lunar Reconnaissance Orbiter Camera (LROC) consists of two imaging systems that provide multispectral and high resolution imaging of the lunar surface. The Wide Angle Camera (WAC) is a seven color push-frame imager with a 90° field of view in monochrome mode and a 60° field of view in color mode. From the nominal 50 km polar orbit, the WAC acquires images with a nadir ground sampling distance of 75 m for each of the five visible bands and 384 m for the two ultraviolet bands. The Narrow Angle Camera (NAC) consists of two identical cameras capable of acquiring images with a ground sampling distance of 0.5 m from an altitude of 50 km. The LROC team geometrically calibrated each camera before launch at Malin Space Science Systems in San Diego, California, and the resulting measurements enabled the generation of a detailed camera model for all three cameras. The cameras were mounted and subsequently launched on the Lunar Reconnaissance Orbiter (LRO) on 18 June 2009. Using a subset of the over 793,000 NAC and 207,000 WAC images of illuminated terrain collected between 30 June 2009 and 15 December 2013, we improved the interior and exterior orientation parameters for each camera, including the addition of a wavelength-dependent radial distortion model for the multispectral WAC. These geometric refinements, along with refined ephemeris, enable seamless projections of NAC image pairs with a geodetic accuracy better than 20 meters, and sub-pixel precision and accuracy when orthorectifying WAC images.
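    A wavelength-dependent radial distortion correction of the general polynomial form can be sketched as follows. The band names and coefficient values are invented placeholders for illustration, not the published WAC camera model:

    ```python
    # Hypothetical per-band coefficients; real values come from geometric calibration
    K1_BY_BAND = {"415nm": -1.2e-8, "566nm": -1.1e-8, "689nm": -1.0e-8}

    def undistort(x, y, band):
        """Polynomial radial correction: r_corrected = r * (1 + k1 * r^2),
        with k1 chosen per spectral band (wavelength-dependent distortion)."""
        k1 = K1_BY_BAND[band]
        s = 1.0 + k1 * (x * x + y * y)
        return x * s, y * s

    # A pixel 100 units off-axis moves slightly inward after correction:
    xc, yc = undistort(100.0, 0.0, "566nm")
    ```

    Making k1 a function of band is what lets each WAC color be projected onto the same ground grid without band-to-band misregistration.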

  5. Crack Detection from Moving Camera Images

    NASA Astrophysics Data System (ADS)

    Maehara, Hideaki; Takiguchi, Jun'ichi; Nishikawa, Keiichi

    We aim to develop a method that automatically detects cracks in images taken of concrete inner surfaces, such as tunnel linings. Assuming that the images are acquired by mobile mapping systems, we have devised a method that utilizes the crack's projection model on the projection plane of the moving cameras. With a prototype, we processed 2 mm/pixel images in order to detect 0.3 mm-wide cracks that had been registered as real cracks by human inspectors. As a result, most of the cracks were detected.

  6. X-ray imaging using digital cameras

    NASA Astrophysics Data System (ADS)

    Winch, Nicola M.; Edgar, Andrew

    2012-03-01

    The possibility of using the combination of a computed radiography (storage phosphor) cassette and a semiprofessional grade digital camera for medical or dental radiography is investigated. We compare the performance of (i) a Canon 5D Mk II single lens reflex camera with f1.4 lens and full-frame CMOS array sensor and (ii) a cooled CCD-based camera with a 1/3 frame sensor and the same lens system. Both systems are tested with 240 x 180 mm cassettes which are based on either powdered europium-doped barium fluoride bromide or needle structure europium-doped cesium bromide. The modulation transfer function for both systems has been determined and falls to a value of 0.2 at around 2 lp/mm, and is limited by light scattering of the emitted light from the storage phosphor rather than the optics or sensor pixelation. The modulation transfer function for the CsBr:Eu2+ plate is bimodal, with a high frequency wing which is attributed to the light-guiding behaviour of the needle structure. The detective quantum efficiency has been determined using a radioisotope source and is comparatively low at 0.017 for the CMOS camera and 0.006 for the CCD camera, attributed to the poor light harvesting by the lens. The primary advantages of the method are portability, robustness, digital imaging and low cost; the limitations are the low detective quantum efficiency and hence signal-to-noise ratio for medical doses, and restricted range of plate sizes. Representative images taken with medical doses are shown and illustrate the potential use for portable basic radiography.
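    The detective quantum efficiency figures quoted above follow from comparing the measured SNR with the Poisson-limited SNR of the incident quanta; a minimal sketch:

    ```python
    import math

    def dqe(snr_out, n_incident):
        """Detective quantum efficiency: DQE = SNR_out^2 / SNR_in^2,
        with SNR_in = sqrt(N) for Poisson-distributed incident quanta."""
        return (snr_out / math.sqrt(n_incident)) ** 2

    # e.g. 10,000 incident quanta (SNR_in = 100) and a measured SNR of 13:
    q = dqe(13.0, 10_000)
    ```

    A DQE of 0.017 thus means the camera behaves as if only ~1.7% of the incident quanta were counted, which is why the authors identify lens light harvesting as the bottleneck.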

  7. Imaging of gamma emitters using scintillation cameras

    NASA Astrophysics Data System (ADS)

    Ricard, Marcel

    2004-07-01

    Since their introduction by Hal Anger in the late 1950s, gamma cameras have been widely used in the field of nuclear medicine. The original concept is based on the association of a large field of view scintillator optically coupled with an array of photomultiplier tubes (PMTs), in order to locate the position of interactions inside the crystal. Using a dedicated accessory, like a parallel hole collimator, to focus the field of view toward a predefined direction, it is possible to build up an image of the radioactive distribution. In terms of imaging performance, three main characteristics are commonly considered: uniformity, spatial resolution and energy resolution. Major improvements were mainly due to progress in industrial processes for analog electronics, crystal growing and PMT manufacturing. Today's gamma camera is highly digital, from the PMTs to the display. All the corrections are applied "on the fly" using up-to-date signal processing techniques. At the same time, significant progress has been achieved in the field of collimators. Finally, two new technologies have been implemented: solid-state detectors like CdTe or CdZnTe, and pixellized scintillators coupled to photodiodes or position-sensitive photomultiplier tubes. These solutions are particularly well adapted to building dedicated gamma cameras for breast or intraoperative imaging.

  8. 15 CFR 743.3 - Thermal imaging camera reporting.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 15 Commerce and Foreign Trade 2 2014-01-01 2014-01-01 false Thermal imaging camera reporting. 743... REPORTING AND NOTIFICATION § 743.3 Thermal imaging camera reporting. (a) General requirement. Exports of thermal imaging cameras must be reported to BIS as provided in this section. (b) Transactions to...

  9. Enhancement of document images from cameras

    NASA Astrophysics Data System (ADS)

    Taylor, Michael J.; Dance, Christopher R.

    1998-04-01

    As digital cameras become cheaper and more powerful, driven by the consumer digital photography market, we anticipate significant value in extending their utility as a general office peripheral by adding a paper scanning capability. The main technical challenges in realizing this new scanning interface are insufficient resolution, blur and lighting variations. We have developed an efficient technique for the recovery of text from digital camera images, which simultaneously treats these three problems, unlike other local thresholding algorithms which do not cope with blur and resolution enhancement. The technique first performs deblurring by deconvolution, and then resolution enhancement by linear interpolation. We compare the performance of a threshold derived from the local mean and variance of all pixel values within a neighborhood with a threshold derived from the local mean of just those pixels with high gradient. We assess performance using OCR error scores.
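    The mean-and-variance local threshold the authors compare against is essentially Niblack's rule, T = m + k·s over a sliding window. A naive sketch (window size and k are illustrative choices, and the loops favor clarity over speed):

    ```python
    import numpy as np

    def local_threshold(img, win=15, k=-0.2):
        """Niblack-style binarization: keep a pixel as background (True) when it
        exceeds T = local mean + k * local std over a win x win neighborhood."""
        img = img.astype(float)
        h, w = img.shape
        pad = win // 2
        padded = np.pad(img, pad, mode="edge")
        out = np.zeros((h, w), dtype=bool)
        for i in range(h):
            for j in range(w):
                patch = padded[i:i + win, j:j + win]
                out[i, j] = img[i, j] >= patch.mean() + k * patch.std()
        return out

    # Dark "text" block on a light page
    page = np.full((20, 20), 200.0)
    page[8:12, 8:12] = 20.0
    mask = local_threshold(page)
    ```

    Restricting the statistics to high-gradient pixels, as the paper's alternative threshold does, makes the estimate less sensitive to large uniform regions; the sliding-window skeleton stays the same.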

  10. The Widespread Distribution of Swirls in Lunar Reconnaissance Orbiter Camera Images

    NASA Astrophysics Data System (ADS)

    Denevi, B. W.; Robinson, M. S.; Boyd, A. K.; Blewett, D. T.

    2015-10-01

    Lunar swirls, the sinuous high- and low-reflectance features that cannot be mentioned without the associated adjective "enigmatic," are of interest because of their link to crustal magnetic anomalies [1,2]. These localized magnetic anomalies create mini-magnetospheres [3,4] and may alter the typical surface modification processes or result in altogether distinct processes that form the swirls. One hypothesis is that magnetic anomalies may provide some degree of shielding from the solar wind [1,2], which could impede space weathering due to solar wind sputtering. In this case, swirls would serve as a way to compare areas affected by typical lunar space weathering (solar wind plus micrometeoroid bombardment) to those where space weathering is dominated by micrometeoroid bombardment alone, providing a natural means to assess the relative contributions of these two processes to the alteration of fresh regolith. Alternately, magnetic anomalies may play a role in the sorting of soil grains, such that the high-reflectance portion of swirls may preferentially accumulate feldspar-rich dust [5] or soils with a lower component of nanophase iron [6]. Each of these scenarios presumes a pre-existing magnetic anomaly; swirls have also been suggested to be the result of recent cometary impacts in which the remanent magnetic field is generated by the impact event [7]. Here we map the distribution of swirls using ultraviolet and visible images from the Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) [8,9]. We explore the relationship of the swirls to crustal magnetic anomalies [10], and examine regions with magnetic anomalies and no swirls.

  11. Cervical SPECT Camera for Parathyroid Imaging

    SciTech Connect

    2012-08-31

    Primary hyperparathyroidism, characterized by one or more enlarged parathyroid glands, has become one of the most common endocrine diseases in the world, affecting about 1 per 1000 in the United States. Standard treatment is highly invasive exploratory neck surgery called “parathyroidectomy”. The surgery has a notable mortality rate because of the close proximity to vital structures. The move to minimally invasive parathyroidectomy is hampered by the lack of high resolution pre-surgical imaging techniques that can accurately localize the parathyroid with respect to surrounding structures. We propose to develop a dedicated ultra-high resolution (~ 1 mm) and high sensitivity (10× that of a conventional camera) cervical scintigraphic imaging device. It will be based on a multiple pinhole-camera SPECT system comprising a novel solid-state CZT detector that offers the required performance. The overall system will be configured to fit around the neck and comfortably image a patient.

  12. Calibrating Images from the MINERVA Cameras

    NASA Astrophysics Data System (ADS)

    Mercedes Colón, Ana

    2016-01-01

    The MINiature Exoplanet Radial Velocity Array (MINERVA) consists of an array of robotic telescopes located on Mount Hopkins, Arizona with the purpose of performing transit photometry and spectroscopy to find Earth-like planets around Sun-like stars. In order to make photometric observations, it is necessary to perform calibrations on the CCD cameras of the telescopes to take into account possible instrument error on the data. In this project, we developed a pipeline that takes optical images, calibrates them using sky flats, darks, and biases to generate a transit light curve.
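    The bias/dark/flat reduction in such a pipeline typically follows the standard CCD equation, calibrated = (raw − bias − dark·t) / normalized flat. A minimal sketch (assumes the dark frame is already scaled to a 1 s rate, and median-normalizes the flat; not the MINERVA pipeline itself):

    ```python
    import numpy as np

    def calibrate(raw, bias, dark_rate, flat, exptime):
        """Standard CCD reduction: subtract bias and exposure-scaled dark current,
        then divide by a median-normalized flat field."""
        science = raw - bias - dark_rate * exptime
        return science / (flat / np.median(flat))

    # Synthetic frames: 1100 ADU raw, 100 ADU bias, 1 ADU/s dark, uniform flat
    cal = calibrate(np.full((2, 2), 1100.0), np.full((2, 2), 100.0),
                    np.full((2, 2), 1.0), np.full((2, 2), 2.0), exptime=100.0)
    ```

    In practice the bias, dark, and flat inputs are themselves medians of many frames, which suppresses read noise and cosmic-ray hits in the calibration products.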

  13. Distortion introduced in radionuclide camera views by multiformat imagers

    SciTech Connect

    Dunn, W.L.; Brown, M.L.; Tuscan, M.

    1981-01-01

    The degree of spatial distortion of radionuclide camera images introduced by the multiformat imager is compared in six different cameras. The instruments tested were the Ohio Nuclear 110 and 410S, Searle Pho Gamma LFOV, and the General Electric 400T. We found image nonlinearity variations of 4 to 25%; this illustrates the need for industrial standards applied to imager spatial distortion performance.

  14. Methods for identification of images acquired with digital cameras

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Bijhold, Jurrien; Kieft, Martijn; Kurosawa, Kenji; Kuroki, Kenro; Saitoh, Naoki

    2001-02-01

    From the court we were asked whether it is possible to determine if an image has been made with a specific digital camera. This question has to be answered in child pornography cases, where evidence is needed that a certain picture has been made with a specific camera. We have looked into different methods of examining the cameras to determine if a specific image has been made with a camera: defects in CCDs, file formats that are used, noise introduced by the pixel arrays and watermarking in images used by the camera manufacturer.
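    The pixel-array noise examined in such work is now usually called the PRNU fingerprint: average the noise residuals of many images from a camera, then match a query image's residual by normalized correlation. A toy sketch on synthetic data (the box-blur "denoiser", sizes, and noise levels are simplifying assumptions):

    ```python
    import numpy as np

    def residual(img):
        """Noise residual: image minus a crude 3x3 box blur (a stand-in for a denoiser)."""
        img = np.asarray(img, dtype=float)
        h, w = img.shape
        padded = np.pad(img, 1, mode="edge")
        blur = sum(padded[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
        return img - blur

    def fingerprint(images):
        """Average residuals over many images from one camera; scene content
        averages away while the fixed sensor pattern survives."""
        return np.mean([residual(im) for im in images], axis=0)

    def correlation(a, b):
        """Normalized correlation between a query residual and a fingerprint."""
        a = a - a.mean()
        b = b - b.mean()
        return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12))

    # Synthetic demo: 20 images sharing one fixed "sensor" noise pattern
    rng = np.random.default_rng(0)
    pattern = rng.normal(0.0, 1.0, (32, 32))
    imgs = [rng.normal(128.0, 3.0, (32, 32)) + pattern for _ in range(20)]
    fp = fingerprint(imgs)
    same = correlation(residual(rng.normal(128.0, 3.0, (32, 32)) + pattern), fp)
    other = correlation(residual(rng.normal(128.0, 3.0, (32, 32))), fp)
    ```

    The same-camera query correlates with the fingerprint noticeably better than the other-camera query, which is the statistical basis for attributing an image to a specific device.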

  15. Speckle Camera Imaging of the Planet Pluto

    NASA Astrophysics Data System (ADS)

    Howell, Steve B.; Horch, Elliott P.; Everett, Mark E.; Ciardi, David R.

    2012-10-01

    We have obtained optical wavelength (692 nm and 880 nm) speckle imaging of the planet Pluto and its largest moon Charon. Using our DSSI speckle camera attached to the Gemini North 8 m telescope, we collected high resolution imaging with an angular resolution of ~20 mas, a value at the Gemini-N telescope diffraction limit. We have produced for this binary system the first speckle reconstructed images, from which we can measure not only the orbital separation and position angle for Charon, but also the diameters of the two bodies. Our measurements of these parameters agree, within the uncertainties, with the current best values for Pluto and Charon. The Gemini-N speckle observations of Pluto are presented to illustrate the capabilities of our instrument and the robust production of high accuracy, high spatial resolution reconstructed images. We hope our results will suggest additional applications of high resolution speckle imaging for other objects within our solar system and beyond. Based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the National Science Foundation on behalf of the Gemini partnership: the National Science Foundation (United States), the Science and Technology Facilities Council (United Kingdom), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência, Tecnologia e Inovação (Brazil) and Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina).

  16. Camera system for multispectral imaging of documents

    NASA Astrophysics Data System (ADS)

    Christens-Barry, William A.; Boydston, Kenneth; France, Fenella G.; Knox, Keith T.; Easton, Roger L., Jr.; Toth, Michael B.

    2009-02-01

    A spectral imaging system comprising a 39-Mpixel monochrome camera, LED-based narrowband illumination, and acquisition/control software has been designed for investigations of cultural heritage objects. Notable attributes of this system, referred to as EurekaVision, include: streamlined workflow, flexibility, provision of well-structured data and metadata for downstream processing, and illumination that is safer for the artifacts. The system design builds upon experience gained while imaging the Archimedes Palimpsest and has been used in studies of a number of important objects in the LOC collection. This paper describes practical issues that were considered by EurekaVision to address key research questions for the study of fragile and unique cultural objects over a range of spectral bands. The system is intended to capture important digital records for access by researchers, professionals, and the public. The system was first used for spectral imaging of the 1507 world map by Martin Waldseemueller, the first printed map to reference "America." It was also used to image sections of the Carta Marina 1516 map by the same cartographer for comparative purposes. An updated version of the system is now being utilized by the Preservation Research and Testing Division of the Library of Congress.

  17. Fast Camera Imaging of Hall Thruster Ignition

    SciTech Connect

    C.L. Ellison, Y. Raitses and N.J. Fisch

    2011-02-24

    Hall thrusters provide efficient space propulsion by electrostatic acceleration of ions. Rotating electron clouds in the thruster overcome the space charge limitations of other methods. Images of the thruster startup, taken with a fast camera, reveal a bright ionization period which settles into steady state operation over 50 μs. The cathode introduces azimuthal asymmetry, which persists for about 30 μs into the ignition. Plasma thrusters are used on satellites for repositioning, orbit correction and drag compensation. The advantage of plasma thrusters over conventional chemical thrusters is that the exhaust energies are not limited by chemical energy to about an electron volt. For xenon Hall thrusters, the ion exhaust velocity can be 15-20 km/s, compared to 5 km/s for a typical chemical thruster.

  18. Update on High-Resolution Geodetically Controlled LROC Polar Mosaics

    NASA Astrophysics Data System (ADS)

    Archinal, B.; Lee, E.; Weller, L.; Richie, J.; Edmundson, K.; Laura, J.; Robinson, M.; Speyerer, E.; Boyd, A.; Bowman-Cisneros, E.; Wagner, R.; Nefian, A.

    2015-10-01

    We describe progress on high-resolution (1 m/pixel) geodetically controlled LROC mosaics of the lunar poles, which can be used for locating illumination resources (for solar power or cold traps) or landing site and surface operations planning.

  19. Spectral Camera based on Ghost Imaging via Sparsity Constraints

    PubMed Central

    Liu, Zhentao; Tan, Shiyu; Wu, Jianrong; Li, Enrong; Shen, Xia; Han, Shensheng

    2016-01-01

    The image information acquisition ability of a conventional camera is usually much lower than the Shannon Limit since it does not make use of the correlation between pixels of image data. Applying a random phase modulator to code the spectral images and combining with compressive sensing (CS) theory, a spectral camera based on true thermal light ghost imaging via sparsity constraints (GISC spectral camera) is proposed and demonstrated experimentally. GISC spectral camera can acquire the information at a rate significantly below the Nyquist rate, and the resolution of the cells in the three-dimensional (3D) spectral images data-cube can be achieved with a two-dimensional (2D) detector in a single exposure. For the first time, GISC spectral camera opens the way of approaching the Shannon Limit determined by Information Theory in optical imaging instruments. PMID:27180619
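    The sparsity-constrained reconstruction behind such sub-Nyquist acquisition is typically posed as an l1-regularized least-squares problem. A minimal iterative soft-thresholding (ISTA) sketch on synthetic data (this is a generic compressive-sensing solver, not the authors' GISC reconstruction code):

    ```python
    import numpy as np

    def ista(A, y, lam=0.01, iters=1000):
        """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            z = x - A.T @ (A @ x - y) / L      # gradient step on the data term
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
        return x

    # 30 random measurements of a 2-sparse, 50-dimensional signal
    rng = np.random.default_rng(1)
    A = rng.normal(size=(30, 50)) / np.sqrt(30)
    x0 = np.zeros(50)
    x0[5], x0[20] = 1.0, -1.0
    x_hat = ista(A, A @ x0)
    ```

    Recovering 50 unknowns from 30 measurements is the below-Nyquist regime the abstract describes; the random phase modulator plays the role of the random sensing matrix A.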

  20. Spectral Camera based on Ghost Imaging via Sparsity Constraints.

    PubMed

    Liu, Zhentao; Tan, Shiyu; Wu, Jianrong; Li, Enrong; Shen, Xia; Han, Shensheng

    2016-01-01

    The image information acquisition ability of a conventional camera is usually much lower than the Shannon Limit, since it does not make use of the correlation between pixels of the image data. By applying a random phase modulator to code the spectral images and combining this with compressive sensing (CS) theory, a spectral camera based on true thermal light ghost imaging via sparsity constraints (GISC spectral camera) is proposed and demonstrated experimentally. The GISC spectral camera can acquire information at a rate significantly below the Nyquist rate, and the resolution of the cells in the three-dimensional (3D) spectral image data-cube can be achieved with a two-dimensional (2D) detector in a single exposure. For the first time, the GISC spectral camera opens the way to approaching the Shannon Limit determined by Information Theory in optical imaging instruments. PMID:27180619

  1. New insight into lunar impact melt mobility from the LRO camera

    USGS Publications Warehouse

    Bray, Veronica J.; Tornabene, Livio L.; Keszthelyi, Laszlo P.; McEwen, Alfred S.; Hawke, B. Ray; Giguere, Thomas A.; Kattenhorn, Simon A.; Garry, William B.; Rizk, Bashar; Caudill, C.M.; Gaddis, Lisa R.; van der Bogert, Carolyn H.

    2010-01-01

    The Lunar Reconnaissance Orbiter Camera (LROC) is systematically imaging impact melt deposits in and around lunar craters at meter and sub-meter scales. These images reveal that lunar impact melts, although morphologically similar to terrestrial lava flows of similar size, exhibit distinctive features (e.g., erosional channels). Although generated in a single rapid event, the post-impact mobility and morphology of lunar impact melts are surprisingly complex. We present evidence for multi-stage influx of impact melt into flow lobes and crater floor ponds. Our volume and cooling time estimates for the post-emplacement melt movements noted in LROC images suggest that new flows can emerge from melt ponds an extended period of time after the impact event.

  2. Specific Analysis of Web Camera and High Resolution Planetary Imaging

    NASA Astrophysics Data System (ADS)

    Park, Youngsik; Lee, Dongju; Jin, Ho; Han, Wonyong; Park, Jang-Hyun

    2006-12-01

    A web camera is usually used for video communication between PCs; it has a small sensing area and cannot be used for long-exposure applications, so it is insufficient for most astronomical work. However, a web camera is suitable for bright objects such as planets and the Moon, which do not need long exposure times, so many amateur astronomers use web cameras for planetary imaging. We used a ToUcam manufactured by Philips for planetary imaging and the commercial program Registax for combining video frames. We then measured properties of the web camera, such as linearity and gain, that are usually used to analyze CCD performance. Because the combining technique selects only high-quality frames from the video, this method can produce higher-resolution planetary images than single-shot images from film, digital cameras, or CCDs. We describe a planetary observing method and a video frame combination method.

  3. Plenoptic camera image simulation for reconstruction algorithm verification

    NASA Astrophysics Data System (ADS)

    Schwiegerling, Jim

    2014-09-01

    Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to lead to a color and exposure level for that pixel. To speed processing three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.

  4. Measurement of the nonuniformity of first responder thermal imaging cameras

    NASA Astrophysics Data System (ADS)

    Lock, Andrew; Amon, Francine

    2008-04-01

    Police, firefighters, and emergency medical personnel are examples of first responders that are utilizing thermal imaging cameras in a very practical way every day. However, few performance metrics have been developed to assist first responders in evaluating the performance of thermal imaging technology. This paper describes one possible metric for evaluating the nonuniformity of thermal imaging cameras. Several commercially available uncooled focal plane array cameras were examined. Because of proprietary property issues, each camera was considered a 'black box'. In these experiments, an extended-area blackbody (18 cm square) was placed very close to the objective lens of the thermal imaging camera. The resultant video output from the camera was digitized at a resolution of 640x480 pixels and a grayscale depth of 10 bits. The nonuniformity was calculated as the standard deviation of the digitized image pixel intensities divided by the mean of those pixel intensities. This procedure was repeated for each camera at several blackbody temperatures in the range from 30 °C to 260 °C. It was observed that the nonuniformity initially increases with temperature, then asymptotically approaches a maximum value. The nonuniformity is also applied in the calculation of the spatial frequency response, as well as providing a noise floor. The testing procedures described herein are being developed as part of a suite of tests to be incorporated into a performance standard covering thermal imaging cameras for first responders.
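    The nonuniformity metric described above (standard deviation of the digitized pixel intensities divided by their mean) can be sketched as follows; the synthetic frame stands in for a camera viewing the extended-area blackbody, with assumed, illustrative noise levels:

    ```python
    import numpy as np

    def nonuniformity(frame):
        """Nonuniformity as defined in the paper: std of pixel intensities / mean."""
        frame = np.asarray(frame, dtype=np.float64)
        return frame.std() / frame.mean()

    # Synthetic 640x480, 10-bit frame of a uniform blackbody, with mild
    # fixed-pattern variation added (illustrative values only).
    rng = np.random.default_rng(0)
    frame = np.full((480, 640), 512.0) + rng.normal(0.0, 10.0, (480, 640))
    print(f"nonuniformity = {nonuniformity(frame):.4f}")  # ~10/512, i.e. about 0.02
    ```

    A perfectly uniform frame scores 0; larger fixed-pattern variation raises the score, matching the trend the paper reports versus blackbody temperature.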

  5. Photorealistic image synthesis and camera validation from 2D images

    NASA Astrophysics Data System (ADS)

    Santos Ferrer, Juan C.; González Chévere, David; Manian, Vidya

    2014-06-01

    This paper presents a new 3D scene reconstruction technique using the Unity 3D game engine. The method presented here allows us to reconstruct the shapes of simple and more complex objects from multiple 2D images, including infrared and digital images for indoor scenes and digital images only for outdoor scenes, and then add the reconstructed objects to a simulated scene created in Unity 3D; these scenes are then validated against real-world scenes. The method used different camera settings and explored different properties in the reconstruction of the scenes, including light, color, texture, shape, and different views. To achieve the highest possible resolution, it was necessary to extract partial textures from visible surfaces. To recover the 3D shapes and depth of simple objects that can be represented by geometric bodies, their geometric characteristics were used. To estimate the depth of more complex objects, the triangulation method was used; for this, the intrinsic and extrinsic parameters were calculated using geometric camera calibration. The methods mentioned above were implemented using Matlab. The technique presented here also lets us simulate short videos by reconstructing a sequence of scenes separated by small intervals of time. To measure the quality of the reconstructed images and video scenes, the Fast Low Band Model (FLBM) metric from the Video Quality Measurement (VQM) software was used; its low-bandwidth perception-based features include edges and motion.

  6. Image processing for cameras with fiber bundle image relay.

    PubMed

    Olivas, Stephen J; Arianpour, Ashkan; Stamenov, Igor; Morrison, Rick; Stack, Ron A; Johnson, Adam R; Agurok, Ilya P; Ford, Joseph E

    2015-02-10

    Some high-performance imaging systems generate a curved focal surface and so are incompatible with focal plane arrays fabricated by conventional silicon processing. One example is a monocentric lens, which forms a wide field-of-view high-resolution spherical image with a radius equal to the focal length. Optical fiber bundles have been used to couple between this focal surface and planar image sensors. However, such fiber-coupled imaging systems suffer from artifacts due to image sampling and incoherent light transfer by the fiber bundle as well as resampling by the focal plane, resulting in a fixed obscuration pattern. Here, we describe digital image processing techniques to improve image quality in a compact 126° field-of-view, 30 megapixel panoramic imager, where a 12 mm focal length F/1.35 lens made of concentric glass surfaces forms a spherical image surface, which is fiber-coupled to six discrete CMOS focal planes. We characterize the locally space-variant system impulse response at various stages: monocentric lens image formation onto the 2.5 μm pitch fiber bundle, image transfer by the fiber bundle, and sensing by a 1.75 μm pitch backside illuminated color focal plane. We demonstrate methods to mitigate moiré artifacts and local obscuration, correct for sphere to plane mapping distortion and vignetting, and stitch together the image data from discrete sensors into a single panorama. We compare processed images from the prototype to those taken with a 10× larger commercial camera with comparable field-of-view and light collection. PMID:25968031

  7. An evolution of image source camera attribution approaches.

    PubMed

    Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul

    2016-05-01

    Camera attribution plays an important role in digital image forensics by providing the evidence and distinguishing characteristics of the origin of the digital image. It allows the forensic analyser to find the possible source camera which captured the image under investigation. However, in real-world applications, these approaches have faced many challenges due to the large set of multimedia data publicly available through photo sharing and social network sites, captured under uncontrolled conditions and subjected to a variety of hardware and software post-processing operations. Moreover, the legal system only accepts the forensic analysis of digital image evidence if the applied camera attribution techniques are unbiased, reliable, nondestructive and widely accepted by experts in the field. The aim of this paper is to investigate the evolutionary trend of image source camera attribution approaches from fundamentals to practice, in particular with the application of image processing and data mining techniques. Extracting implicit knowledge from images using intrinsic image artifacts for source camera attribution requires a structured image mining process. In this paper, we attempt to provide an introductory tutorial on the image processing pipeline, to determine the general classification of the features corresponding to different components for source camera attribution. The article also reviews techniques of source camera attribution more comprehensively in the domain of image forensics, together with a classification of ongoing developments within the specified area. The classification of the existing source camera attribution approaches is presented based on specific parameters, such as the colour image processing pipeline, hardware- and software-related artifacts, and the methods to extract such artifacts. The more recent source camera attribution approaches, which have not yet gained sufficient attention among image forensics researchers, are also critically analysed and further categorised into four different classes, namely optical aberrations based, sensor camera fingerprints based, processing statistics based and processing regularities based. Furthermore, this paper aims to investigate the challenging problems and the proposed strategies of such schemes, based on the suggested taxonomy, to plot an evolution of the source camera attribution approaches with respect to the subjective optimisation criteria over the last decade. The optimisation criteria were determined based on the strategies proposed to increase the detection accuracy, robustness and computational efficiency of source camera brand, model or device attribution. PMID:27060542

  8. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    NASA Astrophysics Data System (ADS)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. However, the speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from the standards and papers, and novel speed metrics are also identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are done against the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are made. The paper defines a solution to combine different image quality and speed metrics into a single benchmarking score. A proposal for the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of previous benchmarking work, expanded with visual noise measurement and updated with the latest mobile phone versions.

  9. NV-CMOS HD camera for day/night imaging

    NASA Astrophysics Data System (ADS)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near infrared (NIR) bands (peak QE >90%), as well as projected low-noise (<2 e-) readout. Power consumption is minimized in the camera, which operates from a single 5V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  10. Thermal analysis of the ultraviolet imager camera and electronics

    NASA Technical Reports Server (NTRS)

    Dirks, Gregory J.

    1991-01-01

    The Ultraviolet Imaging experiment has undergone design changes that necessitate updating the reduced thermal models (RTMs) for both the Camera and Electronics. In addition, there are several mission scenarios that need to be evaluated in terms of the thermal response of the instruments. The impact of these design changes and mission scenarios on the thermal performance of the Camera and Electronics assemblies is discussed.

  11. Imaging Emission Spectra with Handheld and Cellphone Cameras

    ERIC Educational Resources Information Center

    Sitar, David

    2012-01-01

    As point-and-shoot digital camera technology advances it is becoming easier to image spectra in a laboratory setting on a shoestring budget and get immediate results. With this in mind, I wanted to test three cameras to see how their results would differ. Two undergraduate physics students and I used one handheld 7.1 megapixel (MP) digital Cannon…

  13. Mobile phone camera benchmarking: combination of camera speed and image quality

    NASA Astrophysics Data System (ADS)

    Peltoketo, Veli-Tapani

    2014-01-01

    When a mobile phone camera is tested and benchmarked, the significance of quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. For example, ISO 15781 defines several measurements to evaluate various camera system delays. However, the speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. Firstly, the most important image quality metrics are collected from the standards and papers. Secondly, the speed-related metrics of a mobile phone's camera system are collected from the standards and papers, and novel speed metrics are also identified. Thirdly, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are done against the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are made. This work gives detailed benchmarking results for mobile phone camera systems on the market. The paper also proposes a combined benchmarking metric, which includes both quality and speed parameters.
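    The abstract does not give the combined metric itself. As a hedged sketch of the general idea, each metric can be normalized so that its best value maps to 1 and its worst to 0, then combined as a weighted average; all metric names, ranges and weights below are illustrative assumptions, not the paper's actual benchmark:

    ```python
    def benchmark_score(metrics, weights):
        """Combine normalized quality and speed metrics into one score in [0, 1].
        Each metric entry is (value, worst, best); the value is mapped linearly
        so that `best` -> 1.0 and `worst` -> 0.0, then clipped."""
        total = 0.0
        for name, (value, worst, best) in metrics.items():
            norm = (value - worst) / (best - worst)
            total += weights[name] * min(max(norm, 0.0), 1.0)
        return total / sum(weights.values())

    phone = {
        # quality metrics (higher is better)
        "mtf50_cy_px": (0.28, 0.10, 0.40),
        "visual_noise": (2.1, 8.0, 0.0),      # lower is better, so worst > best
        # speed metrics (shorter delay is better)
        "shutter_lag_s": (0.15, 1.0, 0.0),
        "shot_to_shot_s": (0.9, 3.0, 0.2),
    }
    weights = {"mtf50_cy_px": 3, "visual_noise": 2,
               "shutter_lag_s": 2, "shot_to_shot_s": 1}
    print(f"combined score: {benchmark_score(phone, weights):.3f}")
    ```

    Encoding "lower is better" metrics by swapping worst and best lets quality and delay metrics share one scale, which is the crux of combining them into a single benchmarking score.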

  14. Image Quality and Performance of the LSST Camera

    NASA Astrophysics Data System (ADS)

    Gilmore, D. Kirk; Kahn, S.; Rassmussen, A.; Singel, J.

    2012-01-01

    The LSST camera, which will be the largest digital camera built to date, presents a number of novel challenges. The field of view will be 3.5 degrees in diameter and will be sampled by a 3.2 billion pixel array of sensors to be read-out in under 2 seconds, which leads to demanding constraints on the sensor architecture and read-out electronics. The camera also incorporates three large refractive lenses, an array of five wide-band large filters mounted on a carousel, and a mechanical shutter. Given the fast optical beam (f/1.2) and tight tolerances for image quality and throughput specifications, the requirements on the optical design, assembly and alignment, and contamination control of the optical elements and focal plane are crucial. We present an overview of the LSST camera, with an emphasis on models of camera image quality and throughput performance that are characterized by various analysis packages and design considerations.

  15. A time-resolved image sensor for tubeless streak cameras

    NASA Astrophysics Data System (ADS)

    Yasutomi, Keita; Han, SangMan; Seo, Min-Woong; Takasawa, Taishi; Kagawa, Keiichiro; Kawahito, Shoji

    2014-03-01

    This paper presents a time-resolved CMOS image sensor with draining-only modulation (DOM) pixels for tube-less streak cameras. Although the conventional streak camera has high time resolution, the device requires high voltage and a bulky system due to its vacuum-tube structure. The proposed time-resolved imager with simple optics realizes a streak camera without any vacuum tubes. The proposed image sensor has DOM pixels, a delay-based pulse generator, and readout circuitry. The delay-based pulse generator, in combination with in-pixel logic, allows us to create and provide a short gating clock to the pixel array. A prototype time-resolved CMOS image sensor with the proposed pixel was designed and implemented using 0.11 μm CMOS image sensor technology. The image array has 30 (vertical) x 128 (memory length) pixels with a pixel pitch of 22.4 μm.

  16. Reading Watermarks from Printed Binary Images with a Camera Phone

    NASA Astrophysics Data System (ADS)

    Pramila, Anu; Keskinarkaus, Anja; Seppänen, Tapio

    In this paper, we propose a method for reading a watermark from a printed binary image with a camera phone. The watermark is a small binary image which is protected with (15, 11) Hamming error coding and embedded in the binary image by utilizing flippability scores of the pixels and block-based relationships. The binary image is divided into blocks and a fixed number of bits is embedded in each block. A frame is added around the image in order to overcome 3D distortions, and lens distortions are corrected by calibrating the camera. The results obtained are encouraging: when the images were captured freehand while rotating the camera approximately −2 to 2 degrees, 96.3% of the watermarks were fully recovered.
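    The (15, 11) Hamming error coding mentioned above can be sketched as follows. This is the textbook code with parity bits at positions 1, 2, 4 and 8 (1-indexed), which corrects any single flipped bit per 15-bit block; the embedding and flippability-score machinery of the paper is not modeled here:

    ```python
    PARITY_POSITIONS = (1, 2, 4, 8)

    def hamming15_11_encode(data_bits):
        """Encode 11 data bits into a 15-bit Hamming codeword."""
        assert len(data_bits) == 11
        code = [0] * 16                      # index 0 unused; positions 1..15
        data_iter = iter(data_bits)
        for pos in range(1, 16):
            if pos not in PARITY_POSITIONS:  # data bits fill non-parity positions
                code[pos] = next(data_iter)
        for p in PARITY_POSITIONS:           # each parity bit covers positions with bit p set
            parity = 0
            for pos in range(1, 16):
                if pos & p:
                    parity ^= code[pos]
            code[p] = parity
        return code[1:]

    def hamming15_11_decode(code_bits):
        """Correct up to one flipped bit and return the 11 data bits."""
        code = [0] + list(code_bits)
        syndrome = 0
        for p in PARITY_POSITIONS:
            parity = 0
            for pos in range(1, 16):
                if pos & p:
                    parity ^= code[pos]
            if parity:
                syndrome |= p
        if syndrome:                         # syndrome is the 1-indexed error position
            code[syndrome] ^= 1
        return [code[pos] for pos in range(1, 16) if pos not in PARITY_POSITIONS]

    data = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
    codeword = hamming15_11_encode(data)
    corrupted = list(codeword)
    corrupted[6] ^= 1                        # flip one bit, as a capture error might
    print(hamming15_11_decode(corrupted) == data)  # True
    ```

    Per 15-bit block the code tolerates one misread pixel, which is why the watermark survives the moderate capture distortions the paper reports.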

  17. Application of the CCD camera in medical imaging

    NASA Astrophysics Data System (ADS)

    Chu, Wei-Kom; Smith, Chuck; Bunting, Ralph; Knoll, Paul; Wobig, Randy; Thacker, Rod

    1999-04-01

    Medical fluoroscopy is a set of radiological procedures used in medical imaging for functional and dynamic studies of the digestive system. Major components in the imaging chain include an image intensifier that converts x-ray information into an intensity pattern on its output screen and a CCTV camera that converts the output screen intensity pattern into video information to be displayed on a TV monitor. Responding properly to such a wide dynamic range in real time, as a fluoroscopy procedure requires, is very challenging. Also, as in all other medical imaging studies, detail resolution is of great importance; without proper contrast, spatial resolution is compromised. The many inherent advantages of the CCD make it a suitable choice for dynamic studies. Recently, CCD cameras have been introduced as the camera of choice for medical fluoroscopy imaging systems. The objective of our project was to investigate a newly installed CCD fluoroscopy system in the areas of contrast resolution, detail, and radiation dose.

  18. Autofluorescence imaging of basal cell carcinoma by smartphone RGB camera.

    PubMed

    Lihachev, Alexey; Derjabo, Alexander; Ferulova, Inesa; Lange, Marta; Lihacova, Ilze; Spigulis, Janis

    2015-12-01

    The feasibility of smartphones for in vivo skin autofluorescence imaging has been investigated. Filtered autofluorescence images from the same tissue area were periodically captured by a smartphone RGB camera, with subsequent detection of the decrease in fluorescence intensity at each image pixel in order to image the planar distribution of those values. The proposed methodology was tested clinically with 13 basal cell carcinomas and 1 atypical nevus. Several clinical cases and potential future applications of the smartphone-based technique are discussed. PMID:26662298

  19. Autofluorescence imaging of basal cell carcinoma by smartphone RGB camera

    NASA Astrophysics Data System (ADS)

    Lihachev, Alexey; Derjabo, Alexander; Ferulova, Inesa; Lange, Marta; Lihacova, Ilze; Spigulis, Janis

    2015-12-01

    The feasibility of smartphones for in vivo skin autofluorescence imaging has been investigated. Filtered autofluorescence images from the same tissue area were periodically captured by a smartphone RGB camera, with subsequent detection of the decrease in fluorescence intensity at each image pixel in order to image the planar distribution of those values. The proposed methodology was tested clinically with 13 basal cell carcinomas and 1 atypical nevus. Several clinical cases and potential future applications of the smartphone-based technique are discussed.

  20. Fluorescence lifetime imaging microscopy using a streak camera

    NASA Astrophysics Data System (ADS)

    Liu, Lixin; Li, Yahui; Sun, Luogeng; Li, Heng; Peng, Xiao; Qu, Junle

    2014-02-01

    We present the development of a fluorescence lifetime imaging microscopy system using a streak camera (SC-FLIM), which uses an ultrafast infrared laser for multiphoton excitation and a streak camera for lifetime measurement. A pair of galvo mirrors is employed to accomplish quick time-resolved scanning along a line and 2D fluorescence lifetime imaging. The SC-FLIM system was calibrated using an F-P etalon and several standard fluorescent dyes, and was also used to perform fluorescence lifetime imaging of fluorescent microspheres and a prepared plant stem slide.

  1. A design of camera simulator for photoelectric image acquisition system

    NASA Astrophysics Data System (ADS)

    Cai, Guanghui; Liu, Wen; Zhang, Xin

    2015-02-01

    In the process of developing photoelectric image acquisition equipment, its function and performance need to be verified. In order to let the photoelectric device replay previously recorded image data during debugging and testing, a design scheme for a camera simulator is presented. In this system, with an FPGA as the control core, image data are saved to NAND flash through a USB 2.0 bus. Because the access rate of the NAND flash is too slow to meet the requirements of the system, pipelining and high-bandwidth-bus techniques are applied in the design to improve the storage rate. The FPGA control logic reads the image data out of flash and outputs them separately on three different interfaces, Camera Link, LVDS and PAL, which provides image data for the debugging and algorithm validation of photoelectric image acquisition equipment. However, because the standard PAL image resolution is 720 x 576, the resolution differs between the PAL image and the input image, so the image is output after resolution conversion. The experimental results demonstrate that the camera simulator outputs all three image-sequence formats correctly, and they can be captured and displayed by a frame grabber. The three-format image data can meet the test requirements of most equipment, shorten debugging time and improve test efficiency.

  2. ProxiScan™: A Novel Camera for Imaging Prostate Cancer

    ScienceCinema

    Ralph James

    2010-01-08

    ProxiScan is a compact gamma camera suited for high-resolution imaging of prostate cancer. Developed by Brookhaven National Laboratory and Hybridyne Imaging Technologies, Inc., ProxiScan won a 2009 R&D 100 Award, sponsored by R&D Magazine to recognize t

  3. ProxiScan™: A Novel Camera for Imaging Prostate Cancer

    SciTech Connect

    Ralph James

    2009-10-27

    ProxiScan is a compact gamma camera suited for high-resolution imaging of prostate cancer. Developed by Brookhaven National Laboratory and Hybridyne Imaging Technologies, Inc., ProxiScan won a 2009 R&D 100 Award, sponsored by R&D Magazine to recognize t

  4. Acquisition and evaluation of radiography images by digital camera.

    PubMed

    Cone, Stephen W; Carucci, Laura R; Yu, Jinxing; Rafiq, Azhar; Doarn, Charles R; Merrell, Ronald C

    2005-04-01

    To determine applicability of low-cost digital imaging for different radiographic modalities used in consultations from remote areas of the Ecuadorian rainforest with limited resources, both medical and financial. Low-cost digital imaging, consisting of hand-held digital cameras, was used for image capture at a remote location. Diagnostic radiographic images were captured in Ecuador by digital camera and transmitted to a password-protected File Transfer Protocol (FTP) server at VCU Medical Center in Richmond, Virginia, using standard Internet connectivity with standard security. After capture and subsequent transfer of images via low-bandwidth Internet connections, attending radiologists in the United States compared diagnoses to those from Ecuador to evaluate quality of image transfer. Corroborative diagnoses were obtained with the digital camera images for greater than 90% of the plain film and computed tomography studies. Ultrasound (U/S) studies demonstrated only 56% corroboration. Images of radiographs captured utilizing commercially available digital cameras can provide quality sufficient for expert consultation for many plain film studies for remote, underserved areas without access to advanced modalities. PMID:15857253

  5. An airborne four-camera imaging system for agricultural applications

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper describes the design and testing of an airborne multispectral digital imaging system for remote sensing applications. The system consists of four high resolution charge coupled device (CCD) digital cameras and a ruggedized PC equipped with a frame grabber and image acquisition software. T...

  6. Hyperspectral imaging camera using wavefront division interference.

    PubMed

    Bahalul, Eran; Bronfeld, Asaf; Epshtein, Shlomi; Saban, Yoram; Karsenty, Avi; Arieli, Yoel

    2016-03-01

    An approach for performing hyperspectral imaging is introduced. The hyperspectral imaging is based on Fourier transform spectroscopy, where the interference is performed by wavefront division interference rather than amplitude division interference. A variable phase delay between two parts of the wavefront emanating from each point of an object is created by a spatial light modulator (SLM) to obtain variable interference patterns. The SLM is placed in the exit pupil of an imaging system, thus enabling conversion of a general imaging optical system into an imaging hyperspectral optical system. The physical basis of the new approach is introduced, and an optical apparatus is built. PMID:26974085
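    The Fourier-transform-spectroscopy principle behind this design can be illustrated numerically: the detector records intensity as a function of the SLM-induced phase delay, and a Fourier transform of that interferogram recovers the spectrum. The wavenumbers and amplitudes below are arbitrary illustrative values, not measurements from the paper:

    ```python
    import numpy as np

    # Simulated interferogram: total intensity at each phase delay for a
    # source with two spectral lines (wavenumbers k1, k2 in arbitrary units).
    n = 1024
    delay = np.linspace(0.0, 100.0, n)                 # optical path difference
    k1, k2 = 0.8, 1.3
    interferogram = (1 + np.cos(2 * np.pi * k1 * delay)) \
                  + 0.5 * (1 + np.cos(2 * np.pi * k2 * delay))

    # The spectrum is the Fourier transform of the mean-subtracted interferogram.
    spectrum = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
    freqs = np.fft.rfftfreq(n, d=delay[1] - delay[0])
    peaks = sorted(freqs[np.argsort(spectrum)[-2:]])   # two strongest bins
    print("recovered wavenumbers:", peaks)             # close to 0.8 and 1.3
    ```

    The two strongest bins land at the input wavenumbers, which is exactly the recovery step a Fourier-transform spectrometer performs on its measured interference patterns.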

  7. The algorithm for generation of panoramic images for omnidirectional cameras

    NASA Astrophysics Data System (ADS)

    Lazarenko, Vasiliy P.; Yarishev, Sergey; Korotaev, Valeriy

    2015-05-01

    Omnidirectional cameras are used in areas where a large field of view is important; they can give a complete 360° view along one direction. However, the distortion of omnidirectional cameras is large, which makes the omnidirectional image hard to read. One way to view omnidirectional images in a readable form is to generate panoramic images from them, and the panorama keeps the main advantage of the omnidirectional image: a large field of view. The algorithm for generating panoramas from omnidirectional images consists of several steps. Panoramas can be described as projections onto cylinders, spheres, cubes, or other surfaces that surround a viewing point; in practice, cylindrical, spherical and cubic panoramas are the most commonly used. So, at the first step, we describe the panorama's field of view by creating a virtual surface (cylinder, sphere or cube) as a matrix of 3D points in virtual object space. Then we create a mapping table by using the projection function to find, for those 3D points, the coordinates of the corresponding points on the omnidirectional image. At the last step we generate the panorama pixel by pixel from the original omnidirectional image using the mapping table. In order to find the projection function of the omnidirectional camera we used the calibration procedure developed by Davide Scaramuzza, the Omnidirectional Camera Calibration Toolbox for Matlab. After calibration, the toolbox provides two functions which express the relation between a given pixel point and its projection onto the unit sphere. After the first run of the algorithm we obtain the mapping table, which can then be used for real-time generation of panoramic images with minimal CPU cost.
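    The three steps (virtual surface, mapping table, pixel-by-pixel remap) can be sketched as follows for a cylindrical panorama. The equidistant fisheye model here is a stand-in assumption for the calibrated projection function that the Scaramuzza toolbox would provide; all focal-length and image-size values are illustrative:

    ```python
    import numpy as np

    def fisheye_project(points, f=300.0, cx=640.0, cy=640.0):
        """Stand-in projection function (equidistant fisheye model) mapping
        3D rays to omnidirectional-image pixels. In practice this role is
        played by the calibrated model from the Scaramuzza toolbox."""
        x, y, z = points[..., 0], points[..., 1], points[..., 2]
        theta = np.arctan2(np.hypot(x, y), z)   # angle from the optical axis
        phi = np.arctan2(y, x)                  # azimuth around the axis
        r = f * theta
        return cx + r * np.cos(phi), cy + r * np.sin(phi)

    def cylinder_mapping_table(width, height, fov_v=np.deg2rad(60)):
        """Steps 1-2: sample a virtual cylinder of 3D points around the
        viewpoint and project each point into the omnidirectional image."""
        az = np.linspace(-np.pi, np.pi, width)
        el = np.linspace(-fov_v / 2, fov_v / 2, height)
        el_grid, az_grid = np.meshgrid(el, az, indexing="ij")
        pts = np.stack([np.cos(el_grid) * np.cos(az_grid),
                        np.cos(el_grid) * np.sin(az_grid),
                        np.sin(el_grid)], axis=-1)
        return fisheye_project(pts)             # per-panorama-pixel source coords

    def remap(omni_image, u, v):
        """Step 3: generate the panorama pixel by pixel via the mapping table
        (nearest-neighbour lookup, clipped to the image, for brevity)."""
        h, w = omni_image.shape[:2]
        ui = np.clip(np.rint(u).astype(int), 0, w - 1)
        vi = np.clip(np.rint(v).astype(int), 0, h - 1)
        return omni_image[vi, ui]

    omni = np.zeros((1280, 1280), dtype=np.uint8)   # placeholder fisheye frame
    u, v = cylinder_mapping_table(512, 128)
    panorama = remap(omni, u, v)
    print(panorama.shape)  # (128, 512)
    ```

    As the abstract notes, `u` and `v` only need to be computed once; every subsequent frame reuses the table, so the per-frame cost is just the array lookup in `remap`.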

  8. The European Photon Imaging Camera on XMM-Newton: The pn-CCD camera

    NASA Astrophysics Data System (ADS)

    Strüder, L.; Briel, U.; Dennerl, K.; Hartmann, R.; Kendziorra, E.; Meidinger, N.; Pfeffermann, E.; Reppin, C.; Aschenbach, B.; Bornemann, W.; Bräuninger, H.; Burkert, W.; Elender, M.; Freyberg, M.; Haberl, F.; Hartner, G.; Heuschmann, F.; Hippmann, H.; Kastelic, E.; Kemmer, S.; Kettenring, G.; Kink, W.; Krause, N.; Müller, S.; Oppitz, A.; Pietsch, W.; Popp, M.; Predehl, P.; Read, A.; Stephan, K. H.; Stötter, D.; Trümper, J.; Holl, P.; Kemmer, J.; Soltau, H.; Stötter, R.; Weber, U.; Weichert, U.; von Zanthier, C.; Carathanassis, D.; Lutz, G.; Richter, R. H.; Solc, P.; Böttcher, H.; Kuster, M.; Staubert, R.; Abbey, A.; Holland, A.; Turner, M.; Balasini, M.; Bignami, G. F.; La Palombara, N.; Villa, G.; Buttler, W.; Gianini, F.; Lainé, R.; Lumb, D.; Dhez, P.

    2001-01-01

    The European Photon Imaging Camera (EPIC) consortium has provided the focal plane instruments for the three X-ray mirror systems on XMM-Newton. Two cameras with a reflecting grating spectrometer in the optical path are equipped with MOS-type CCDs as focal plane detectors (Turner), while the telescope with the full photon flux operates the novel pn-CCD as an imaging X-ray spectrometer. The pn-CCD camera system was developed under the leadership of the Max-Planck-Institut für extraterrestrische Physik (MPE), Garching. The concept of the pn-CCD is described, as well as the different operational modes of the camera system. The electrical, mechanical and thermal design of the focal plane and camera is briefly treated. The in-orbit performance is described in terms of energy resolution, quantum efficiency, time resolution, long-term stability and charged particle background. Special emphasis is given to the radiation hardening of the devices and the measured and expected degradation due to radiation damage from ionizing particles in the first 9 months of in-orbit operation. Based on observations with XMM-Newton, an ESA Science Mission with instruments and contributions directly funded by ESA Member States and the USA (NASA).

  9. Volumetric particle image velocimetry with a single plenoptic camera

    NASA Astrophysics Data System (ADS)

    Fahringer, Timothy W.; Lynch, Kyle P.; Thurow, Brian S.

    2015-11-01

    A novel three-dimensional (3D), three-component (3C) particle image velocimetry (PIV) technique based on volume illumination and light field imaging with a single plenoptic camera is described. A plenoptic camera uses a densely packed microlens array mounted near a high resolution image sensor to sample the spatial and angular distribution of light collected by the camera. The multiplicative algebraic reconstruction technique (MART) computed tomography algorithm is used to reconstruct a volumetric intensity field from individual snapshots, and a cross-correlation algorithm is used to estimate the velocity field from a pair of reconstructed particle volumes. This work provides an introduction to the basic concepts of light field imaging with a plenoptic camera and describes the unique implementation of MART in the context of plenoptic image data for 3D/3C PIV measurements. Simulations of a plenoptic camera using geometric optics are used to generate synthetic plenoptic particle images, which are subsequently used to estimate the quality of particle volume reconstructions at various particle number densities. 3D reconstructions using this method produce reconstructed particles that are elongated by a factor of approximately 4 along the optical axis of the camera. A simulated 3D Gaussian vortex is used to test the capability of single-camera plenoptic PIV to produce a 3D/3C vector field, where it was found that displacements could be measured to approximately 0.2 voxel accuracy in the lateral directions and 1 voxel in the depth direction over a 300 × 200 × 200 voxel volume. The feasibility of the technique is demonstrated experimentally using a home-built plenoptic camera based on a 16-megapixel interline CCD camera, a 289 × 193 array of microlenses, and a pulsed Nd:YAG laser. 3D/3C measurements were performed in the wake of a low Reynolds number circular cylinder and compared with measurements made using a conventional 2D/2C PIV system. Overall, single-camera plenoptic PIV is shown to be a viable 3D/3C velocimetry technique.
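The cross-correlation step that turns a pair of reconstructed particle fields into a displacement estimate can be illustrated in 2D. This is a hedged sketch with a known synthetic shift, using FFT-based circular correlation for brevity; it is not the authors' MART/PIV code, and all sizes are illustrative.

```python
import numpy as np

# Minimal sketch of PIV cross-correlation: the displacement between two
# interrogation windows is the location of the correlation peak.
rng = np.random.default_rng(0)
frame_a = rng.random((64, 64))                   # synthetic particle pattern
shift = (3, 5)                                   # known displacement (dy, dx)
frame_b = np.roll(frame_a, shift, axis=(0, 1))   # second exposure

# Circular cross-correlation via FFT; the peak sits at the displacement.
corr = np.fft.ifft2(np.fft.fft2(frame_a).conj() * np.fft.fft2(frame_b)).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
print(dy, dx)                                    # → 3 5
```

In volumetric PIV the same operation runs in 3D over voxel windows, and sub-voxel accuracy (the ~0.2 voxel figure above) comes from fitting the correlation peak rather than taking the integer argmax.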

  10. Applying image quality in cell phone cameras: lens distortion

    NASA Astrophysics Data System (ADS)

    Baxter, Donald; Goma, Sergio R.; Aleksic, Milivoje

    2009-01-01

    This paper describes the framework used in one of the pilot studies run under the I3A CPIQ initiative to quantify overall image quality in cell-phone cameras. The framework is based on a multivariate formalism that predicts overall image quality from individual image quality attributes, and it was validated in a CPIQ pilot program. The pilot study focuses on image quality distortions introduced in the optical path of a cell-phone camera, which may or may not be corrected in the image processing path. The assumption is that the captured image is JPEG compressed and the cell-phone camera is set to 'auto' mode. Since the framework requires the individual attributes to be relatively perceptually orthogonal, the attributes used in the pilot study are lens geometric distortion (LGD) and lateral chromatic aberration (LCA). The goal of this paper is to present the framework of this pilot project, starting with the definition of the individual attributes and proceeding to their quantification in JNDs of quality, a requirement of the multivariate formalism; both objective and subjective evaluations were therefore used. A major distinction of the objective part from the 'DSC imaging world' is that the LCA/LGD distortions found in cell-phone cameras rarely exhibit radial behavior, so a radial mapping/model cannot be used in this case.

  11. Efficient height measurement method of surveillance camera image.

    PubMed

    Lee, Joong; Lee, Eung-Dae; Tark, Hyun-Oh; Hwang, Jin-Woo; Yoon, Do-Young

    2008-05-01

    As surveillance cameras are increasingly installed, their footage is often submitted as evidence of crime, but only scant detail such as facial features and clothing can be obtained owing to limited camera performance. Height, however, is relatively insensitive to camera performance. This paper studied a height measurement method using images from a CCTV camera. The information on height was obtained via photogrammetry, using reference points in the photographed area and calculating the relationship between 3D space and the 2D image through linear and nonlinear calibration. Using this correlation, the paper proposes a height measurement method that projects a 3D virtual ruler onto the image. This method has been shown to offer more stable values within the range of data convergence than existing methods. PMID:18096339
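The core of the virtual-ruler idea is projecting known 3D world points through a calibrated camera model into the 2D image. A minimal pinhole-model sketch follows; the 3x4 camera matrix and ruler placement are hypothetical illustrations, not the paper's calibration.

```python
import numpy as np

# Minimal sketch: project a 3D "virtual ruler" into a calibrated 2D image.
# K, R, t below are an assumed calibration result, purely for illustration.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # intrinsics
R = np.eye(3)                                                  # camera rotation
t = np.array([0.0, 0.0, 5.0])                                  # camera offset (m)
P = K @ np.hstack([R, t[:, None]])                             # 3x4 projection

def project(point_3d):
    """Map a world point (metres) to pixel coordinates (u, v)."""
    p = P @ np.append(point_3d, 1.0)
    return p[:2] / p[2]

# A vertical ruler standing at world X = 0.5 m, Z = 0: ticks every 10 cm.
ticks = [project(np.array([0.5, h, 0.0])) for h in np.arange(0, 1.81, 0.1)]
print(ticks[0], ticks[-1])   # pixel positions of the 0 m and 1.8 m ticks
```

Reading off which tick a person's head reaches in the image then gives the height estimate, which is the principle the abstract describes.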

  12. Snapshot hyperspectral retinal camera with the Image Mapping Spectrometer (IMS).

    PubMed

    Gao, Liang; Smith, R Theodore; Tkaczyk, Tomasz S

    2012-01-01

    We present a snapshot hyperspectral retinal camera with the Image Mapping Spectrometer (IMS) for eye imaging applications. The resulting system is capable of simultaneously acquiring 48 spectral channel images in the range 470-650 nm at a frame rate of 5.2 fps. The spatial sampling of each measured spectral scene is 350 × 350 pixels. The advantages of this snapshot device are the elimination of the eye-motion artifacts and pixel misregistration problems of traditional scanning-based hyperspectral retinal cameras, and real-time imaging of oxygen saturation dynamics with sub-second temporal resolution. The spectral imaging performance is demonstrated in a human retinal imaging experiment in vivo. The absorption spectral signatures of oxy-hemoglobin and macular pigments were successfully acquired using this device. PMID:22254167

  13. Wide-Angle, Reflective Strip-Imaging Camera

    NASA Technical Reports Server (NTRS)

    Vaughan, Arthur H.

    1992-01-01

    Proposed camera images a thin, striplike portion of a field of view 180 degrees wide. Hemispherical concave reflector forms image onto optical fibers, which transfer it to a strip of photodetectors or a spectrograph. Advantages include little geometric distortion, achromatism, and ease of athermalization. Uses include surveillance of clouds, coarse mapping of terrain, measurements of bidirectional reflectance distribution functions of aerosols, imaging spectrometry, oceanography, and exploration of planets.

  14. CMOS Image Sensors: Electronic Camera On A Chip

    NASA Technical Reports Server (NTRS)

    Fossum, E. R.

    1995-01-01

    Recent advancements in CMOS image sensor technology are reviewed, including both passive pixel sensors and active pixel sensors. On-chip analog-to-digital converters and on-chip timing and control circuits permit realization of an electronic camera-on-a-chip. Highly miniaturized imaging systems based on CMOS image sensor technology are emerging as a competitor to charge-coupled devices for low-cost applications.

  15. High-Resolution Mars Camera Test Image of Moon (Infrared)

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This crescent view of Earth's Moon in infrared wavelengths comes from a camera test by NASA's Mars Reconnaissance Orbiter spacecraft on its way to Mars. The mission's High Resolution Imaging Science Experiment camera took the image on Sept. 8, 2005, while at a distance of about 10 million kilometers (6 million miles) from the Moon. The dark feature on the right is Mare Crisium. From that distance, the Moon would appear as a star-like point of light to the unaided eye. The test verified the camera's focusing capability and provided an opportunity for calibration. The spacecraft's Context Camera and Optical Navigation Camera also performed as expected during the test.

    The Mars Reconnaissance Orbiter, launched on Aug. 12, 2005, is on course to reach Mars on March 10, 2006. After gradually adjusting the shape of its orbit for half a year, it will begin its primary science phase in November 2006. From the mission's planned science orbit about 300 kilometers (186 miles) above the surface of Mars, the high resolution camera will be able to discern features as small as one meter or yard across.

  16. Investigation of an SFOV hybrid gamma camera for thyroid imaging.

    PubMed

    Bugby, S L; Lees, J E; Ng, A H; Alqahtani, M S; Perkins, A C

    2016-01-01

    The Hybrid Compact Gamma Camera (HCGC) is a small field of view (SFOV) portable hybrid gamma-optical camera intended for small organ imaging at the patient bedside. In this study, a thyroid phantom was used to determine the suitability of the HCGC for clinical thyroid imaging through comparison with large field of view (LFOV) system performance. A direct comparison with LFOV contrast performance showed that the lower sensitivity of the HCGC had a detrimental effect on image quality. Despite this, the contrast of HCGC images exceeded those of the LFOV cameras for some image features particularly when a high-resolution pinhole collimator was used. A clinical simulation showed that thyroid morphology was visible in a 5 min integrated image acquisition with an expected dependency on the activity within the thyroid. The first clinical use of the HCGC for imaging thyroid uptake of (123)I is also presented. Measurements indicate that the HCGC has promising utility in thyroid imaging, particularly as its small size allows it to be brought into closer proximity with a patient. Future development of the energy response of the HCGC is expected to further improve image detectability. PMID:26778578

  17. Model observers to predict human performance in LROC studies of SPECT reconstruction using anatomical priors

    NASA Astrophysics Data System (ADS)

    Lehovich, Andre; Gifford, Howard C.; King, Michael A.

    2008-03-01

    We investigate the use of linear model observers to predict human performance in a localization ROC (LROC) study. The task is to locate gallium-avid tumors in simulated SPECT images of a digital phantom. Our study is intended to find the optimal strength of smoothing priors incorporating various degrees of anatomical knowledge. Although humans reading the images must perform a search task, our models ignore search by assuming the lesion location is known. We use the area under the model ROC curve to predict the human area under the LROC curve. We used three models: the non-prewhitening matched filter (NPWMF), the channelized non-prewhitening (CNPW) observer, and the channelized Hotelling observer (CHO). All models have access to noise-free reconstructions, which are used to compute the signal template. The NPWMF model does a poor job of predicting human performance. The CNPW and CHO models do a somewhat better job, but still do not qualitatively capture the human results. None of the models accurately predicts the smoothing strength that maximizes human performance.
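The channelized Hotelling observer named above can be sketched on synthetic channel outputs: the observer's template is the inverse channel covariance applied to the mean signal difference, and its detectability index d' summarizes performance. The channel count, signal profile, and noise statistics below are illustrative assumptions, not the study's data.

```python
import numpy as np

# Minimal sketch of a channelized Hotelling observer (CHO) on synthetic data.
rng = np.random.default_rng(1)
n_ch, n_img = 4, 500
signal = np.array([1.0, 0.5, 0.25, 0.1])      # mean channel response to a lesion

absent = rng.normal(0, 1, (n_img, n_ch))      # signal-absent channel outputs
present = rng.normal(0, 1, (n_img, n_ch)) + signal

# Hotelling template: w = S^-1 (mean_present - mean_absent)
S = 0.5 * (np.cov(absent.T) + np.cov(present.T))   # pooled channel covariance
w = np.linalg.solve(S, present.mean(0) - absent.mean(0))

# Observer test statistic and detectability index d'
t_p, t_a = present @ w, absent @ w
d_prime = (t_p.mean() - t_a.mean()) / np.sqrt(0.5 * (t_p.var() + t_a.var()))
print(round(float(d_prime), 2))
```

Sweeping a reconstruction parameter (such as the smoothing-prior strength in the study) and plotting d' against it is how such model observers are used to predict where human performance should peak.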

  18. Cheetah: A high frame rate, high resolution SWIR image camera

    NASA Astrophysics Data System (ADS)

    Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob

    2008-10-01

    A high resolution, high frame rate InGaAs-based image sensor and associated camera have been developed. The sensor and the camera are capable of recording and delivering more than 1700 full 640 x 512 pixel frames per second. The FPA utilizes a low-lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logic allows four different sub-windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range (0.9-1.7 µm) and can be further extended into the visible and NIR range. The Cheetah camera has up to 16 GB of on-board memory to store the acquired images and transfers the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full Camera Link interface to directly stream the data to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.

  19. Air Pollution Determination Using a Surveillance Internet Protocol Camera Images

    NASA Astrophysics Data System (ADS)

    Chow Jeng, C. J.; Hwee San, Hslim; Matjafri, M. Z.; Abdullah, Abdul, K.

    Air pollution has long been a problem in the industrial nations of the West. It has now become an increasing source of environmental degradation in the developing nations of East Asia. The Malaysian government has built a network to monitor air pollution, but the cost of such networks is high and limits knowledge of pollutant concentrations to specific points in the cities. A methodology based on a surveillance internet protocol (IP) camera for the determination of air pollution concentrations is presented in this study. The objective of this study was to test the feasibility of using IP camera data for estimating real-time particulate matter of size less than 10 micron (PM10) on the campus of USM. The proposed PM10 retrieval algorithm, derived from the atmospheric optical properties, was employed in the present study. In situ data sets of PM10 measurements and sun radiation measurements at the ground surface were collected simultaneously with the IP camera images, using a DustTrak meter and a handheld spectroradiometer respectively. The digital images were separated into three bands, namely the red, green and blue bands, for multispectral algorithm calibration. The digital numbers (DN) of the IP camera images were converted into radiance and reflectance values. The reflectance recorded by the digital camera was then subtracted by the reflectance of the known surface, giving the reflectance caused by the atmospheric components. The atmospheric reflectance values were used for regression analysis. Regression technique was employed to determine suitable

  20. Visible camera imaging of plasmas in Proto-MPEX

    NASA Astrophysics Data System (ADS)

    Mosby, R.; Skeen, C.; Biewer, T. M.; Renfro, R.; Ray, H.; Shaw, G. C.

    2015-11-01

    The prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device being developed at Oak Ridge National Laboratory (ORNL). This machine plans to study plasma-material interaction (PMI) physics relevant to future fusion reactors. Measurements of plasma light emission will be made on Proto-MPEX using fast, visible framing cameras. The cameras utilize a global shutter, which allows a full-frame image of the plasma to be captured and compared at multiple times during the plasma discharge. Typical exposure times are ~10-100 microseconds. The cameras are capable of capturing images at up to 18,000 frames per second (fps); however, the frame rate is strongly dependent on the size of the "region of interest" that is sampled. The maximum ROI corresponds to the full detector area of ~1000x1000 pixels. The cameras have an internal gain, which controls the sensitivity of the 10-bit detector. The detector includes a Bayer filter for "true-color" imaging of the plasma emission. This presentation will examine the optimized camera settings for use on Proto-MPEX. This work was supported by the U.S. D.O.E. contract DE-AC05-00OR22725.

  1. Establishing imaging sensor specifications for digital still cameras

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    2007-02-01

    Digital still cameras, DSCs, have now displaced conventional still cameras in most markets. The heart of a DSC is thought to be the imaging sensor, be it a full-frame CCD, an interline CCD, a CMOS sensor, or the newer Foveon buried-photodiode sensor. There is a strong tendency by consumers to consider only the number of megapixels in a camera and not to consider the overall performance of the imaging system, including sharpness, artifact control, noise, color reproduction, exposure latitude and dynamic range. This paper provides a systematic method to characterize the physical requirements of an imaging sensor and supporting system components based on the desired usage. The analysis is based on two software programs that determine the "sharpness", potential for artifacts, sensor "photographic speed", dynamic range and exposure latitude based on the physical nature of the imaging optics and the sensor characteristics (including pixel size, sensor architecture, noise characteristics, surface states that cause dark current, quantum efficiency, effective MTF, and the intrinsic full-well capacity in terms of electrons per square centimeter). Examples will be given for consumer, prosumer, and professional camera systems. Where possible, these results will be compared to imaging systems currently on the market.
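Two of the sensor figures of merit mentioned (full-well capacity from an intrinsic electrons-per-area density, and the resulting dynamic range) reduce to simple arithmetic. The sketch below is illustrative only; the pixel pitch, electron density, and read noise are assumed example values, not figures from the paper.

```python
import math

# Minimal sketch: full-well-limited dynamic range of a pixel, derived from
# an intrinsic full-well density in electrons per square centimetre.
pixel_pitch_um = 2.0                     # pixel size (assumed)
full_well_per_cm2 = 1.7e12               # intrinsic capacity, e-/cm^2 (assumed)
read_noise_e = 5.0                       # read noise in electrons (assumed)

pixel_area_cm2 = (pixel_pitch_um * 1e-4) ** 2
full_well_e = full_well_per_cm2 * pixel_area_cm2          # electrons per pixel

# Dynamic range in dB: ratio of full well to the read-noise floor.
dynamic_range_db = 20 * math.log10(full_well_e / read_noise_e)
print(round(full_well_e), round(dynamic_range_db, 1))     # → 68000 82.7
```

This kind of calculation is why shrinking pixels (more megapixels on the same die) trades away full-well capacity and hence dynamic range, which is the paper's central caution to megapixel-focused consumers.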

  2. Digital Camera Identification from Images - Estimating False Acceptance Probability

    NASA Astrophysics Data System (ADS)

    Goljan, Miroslav

    Photo-response non-uniformity (PRNU) noise present in the output signals of CCD and CMOS sensors has been used as a fingerprint to uniquely identify the source digital camera that took an image. The same fingerprint can establish a link between images according to their common source. In this paper, we review the state-of-the-art identification method and discuss its practical issues. In the camera identification task, when formulated as a binary hypothesis test, a decision threshold is set on the correlation between the image noise and the modulated fingerprint. The threshold determines the probability of two kinds of possible errors: false acceptance and missed detection. We focus on estimation of the false acceptance probability, which we wish to keep very low. A straightforward approach involves testing a large number of different camera fingerprints against one image, or one camera fingerprint against many images from different sources. Such sampling of the correlation probability distribution is time consuming and expensive, while extrapolation of the tails of the distribution is still not reliable. A novel approach is based on cross-correlation analysis and the peak-to-correlation-energy ratio.
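The peak-to-correlation-energy (PCE) ratio mentioned at the end can be sketched directly: correlate the image's noise residual with the fingerprint over all spatial shifts and compare the peak to the off-peak energy. The fingerprint and residual below are synthetic, and the PRNU extraction itself (denoising, maximum-likelihood estimation) is omitted; the decision threshold of 50 is an illustrative value, not from the paper.

```python
import numpy as np

# Minimal sketch of the PCE test for matching a noise residual to a fingerprint.
rng = np.random.default_rng(2)
fingerprint = rng.normal(0, 1, (128, 128))
residual = 0.1 * fingerprint + rng.normal(0, 1, (128, 128))   # matching camera

# Circular cross-correlation via FFT over all spatial shifts.
corr = np.fft.ifft2(np.fft.fft2(residual) * np.fft.fft2(fingerprint).conj()).real
peak = corr.max()

# Correlation energy: mean square over all shifts excluding the peak sample.
energy = (np.sum(corr ** 2) - peak ** 2) / (corr.size - 1)
pce = peak ** 2 / energy
print(pce > 50.0)          # a matching fingerprint yields a large PCE
```

For a non-matching camera the residual contains none of the fingerprint, the peak is statistically indistinguishable from the off-peak values, and the PCE stays near 1, which is what makes the ratio usable for estimating false-acceptance probabilities.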

  3. Inexpensive Neutron Imaging Cameras Using CCDs for Astronomy

    NASA Astrophysics Data System (ADS)

    Hewat, A. W.

    We have developed inexpensive neutron imaging cameras using CCDs originally designed for amateur astronomical observation. The low-light, high resolution requirements of such CCDs are similar to those for neutron imaging, except that noise as well as cost is reduced by using slower read-out electronics. For example, we use the same 2048x2048 pixel "Kodak" KAI-4022 CCD as used in the high performance PCO-2000 CCD camera, but our electronics requires ∼5 sec for full-frame read-out, ten times slower than the PCO-2000. Since neutron exposures also require several seconds, this is not seen as a serious disadvantage for many applications. If higher frame rates are needed, the CCD unit on our camera can be easily swapped for a faster readout detector with similar chip size and resolution, such as the PCO-2000 or the sCMOS PCO.edge 4.2.

  4. Digital camera with apparatus for authentication of images produced from an image file

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L. (Inventor)

    1993-01-01

    A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely based upon the private key that digital data encrypted with the private key by the processor may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating at any time the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match, since even a one-bit change in the image file will cause its hash to be totally different from the secure hash.
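The hashing step that the scheme relies on can be demonstrated with the standard library: even a one-bit change in the image file yields a completely different hash, so a signature computed over the hash pins down the exact file contents. The public-key signing step is omitted here (any RSA or DSA implementation would sign `digest`); the byte string is a stand-in for real image data.

```python
import hashlib

# Minimal sketch of the hash-sensitivity property used by the authentication
# scheme. In the patent's scheme, `digest` would be encrypted with the
# camera's private key to form the digital signature.
image_file = bytearray(b"raw pixel data of the captured image")
digest = hashlib.sha256(image_file).hexdigest()

image_file[0] ^= 0x01                      # flip a single bit of the file
tampered_digest = hashlib.sha256(image_file).hexdigest()

# Count how many hex characters of the two digests differ.
differing = sum(a != b for a, b in zip(digest, tampered_digest))
print(digest != tampered_digest, differing)
```

Verification then reduces to recomputing the hash from the file, decrypting the stored signature with the public key, and comparing the two values, exactly as the abstract describes.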

  5. Image quality testing of assembled IR camera modules

    NASA Astrophysics Data System (ADS)

    Winters, Daniel; Erichsen, Patrik

    2013-10-01

    Infrared (IR) camera modules for the LWIR (8-12 µm) that combine IR imaging optics with microbolometer focal plane array (FPA) sensors and readout electronics are becoming more and more a mass-market product. At the same time, steady improvements in sensor resolution in the higher-priced markets raise the requirements on the imaging performance of objectives and on the proper alignment between objective and FPA. This puts pressure on camera manufacturers and system integrators to assess the image quality of finished camera modules in a cost-efficient and automated way for quality control or during end-of-line testing. In this paper we present recent development work done in the field of image quality testing of IR camera modules. This technology provides a wealth of additional information in contrast to more traditional test methods like minimum resolvable temperature difference (MRTD), which give only a subjective overall test result. Parameters that can be measured are image quality via the modulation transfer function (MTF), for broadband or with various bandpass filters, on- and off-axis, and optical parameters such as effective focal length (EFL) and distortion. If the camera module allows refocusing the optics, additional parameters like best focus plane, image plane tilt, auto-focus quality, chief ray angle etc. can be characterized. Additionally, the homogeneity and response of the sensor with the optics can be characterized in order to calculate the appropriate tables for non-uniformity correction (NUC). The technology can also be used to control active alignment methods during mechanical assembly of optics to high resolution sensors. Other important points that are discussed are the flexibility of the technology to test IR modules with different form factors and electrical interfaces and, last but not least, the suitability for fully automated measurements in mass production.
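The MTF measurement that such a test station reports can be sketched as the normalized magnitude of the Fourier transform of the line spread function (LSF). The Gaussian LSF and its width below are synthetic assumptions used only to make the computation concrete; a real station would derive the LSF from a slit or slanted-edge target.

```python
import numpy as np

# Minimal sketch: MTF as the normalized |FFT| of the line spread function.
x = np.arange(-32, 32)
sigma = 2.5                                   # LSF width in pixels (assumed)
lsf = np.exp(-x ** 2 / (2 * sigma ** 2))      # synthetic Gaussian LSF

mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                                 # normalize to 1 at zero frequency
freqs = np.fft.rfftfreq(x.size)               # spatial frequency, cycles/pixel

# MTF50: first frequency at which contrast drops below 50%.
mtf50 = freqs[np.argmax(mtf < 0.5)]
print(round(float(mtf50), 4))
```

Repeating this on- and off-axis and through different bandpass filters gives exactly the MTF family of parameters the abstract lists.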

  6. Dual camera system for acquisition of high resolution images

    NASA Astrophysics Data System (ADS)

    Papon, Jeremie A.; Broussard, Randy P.; Ives, Robert W.

    2007-02-01

    Video surveillance is ubiquitous in modern society, but surveillance cameras are severely limited in utility by their low resolution. With this in mind, we have developed a system that can autonomously take high resolution still frame images of moving objects. In order to do this, we combine a low resolution video camera and a high resolution still frame camera mounted on a pan/tilt mount. In order to determine what should be photographed (objects of interest), we employ a hierarchical method which first separates foreground from background using a temporal-based median filtering technique. We then use a feed-forward neural network classifier on the foreground regions to determine whether the regions contain the objects of interest. This is done over several frames, and a motion vector is deduced for the object. The pan/tilt mount then focuses the high resolution camera on the next predicted location of the object, and an image is acquired. All components are controlled through a single MATLAB graphical user interface (GUI). The final system we present will be able to detect multiple moving objects simultaneously, track them, and acquire high resolution images of them. Results will demonstrate performance tracking and imaging varying numbers of objects moving at different speeds.
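The foreground/background separation step described above can be sketched with a per-pixel temporal median: the background is the median over a buffer of frames, and pixels far from it are flagged as foreground. The frame sizes, noise level, and threshold are illustrative assumptions on synthetic data, not the authors' configuration.

```python
import numpy as np

# Minimal sketch of temporal median filtering for foreground segmentation.
rng = np.random.default_rng(3)
frames = rng.normal(100, 2, (15, 48, 64))        # 15 frames of a static scene

moving = frames.copy()
moving[-1, 20:28, 30:38] += 80                   # bright object in last frame

background = np.median(moving, axis=0)           # per-pixel temporal median
foreground = np.abs(moving[-1] - background) > 25
print(foreground.sum())                          # → 64, the 8x8 object pixels
```

Because the median ignores values present in only a few frames, a briefly passing object does not corrupt the background estimate; the flagged regions would then be passed to the classifier and tracker described in the abstract.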

  7. A compact gamma camera for biological imaging

    SciTech Connect

    Bradley, E L; Cella, J; Majewski, S; Popov, V; Qian, Jianguo; Saha, M S; Smith, M F; Weisenberger, A G; Welsh, R E

    2006-02-01

    A compact detector, sized particularly for imaging a mouse, is described. The active area of the detector is approximately 46 mm × 96 mm. Two flat-panel Hamamatsu H8500 position-sensitive photomultiplier tubes (PSPMTs) are coupled to a pixellated NaI(Tl) scintillator which views the animal through a copper-beryllium (CuBe) parallel-hole collimator specially designed for (125)I. Although the PSPMTs have insensitive areas at their edges and there is a physical gap, corrections for scintillation light collection at the junction between the two tubes result in a uniform response across the entire rectangular area of the detector. The system described has been developed to optimize both sensitivity and resolution for in-vivo imaging of small animals injected with iodinated compounds. We demonstrate an in-vivo application of this detector, particularly to SPECT, by imaging mice injected with approximately 10-15 µCi of (125)I.

  8. Image-based camera motion estimation using prior probabilities

    NASA Astrophysics Data System (ADS)

    Sargent, Dusty; Park, Sun Young; Spofford, Inbar; Vosburgh, Kirby

    2011-03-01

    Image-based camera motion estimation from video or still images is a difficult problem in the field of computer vision. Many algorithms have been proposed for estimating intrinsic camera parameters, detecting and matching features between images, calculating extrinsic camera parameters based on those features, and optimizing the recovered parameters with nonlinear methods. These steps in the camera motion inference process all face challenges in practical applications: locating distinctive features can be difficult in many types of scenes given the limited capabilities of current feature detectors, camera motion inference can easily fail in the presence of noise and outliers in the matched features, and the error surfaces in optimization typically contain many suboptimal local minima. The problems faced by these techniques are compounded when they are applied to medical video captured by an endoscope, which presents further challenges such as non-rigid scenery and severe barrel distortion of the images. In this paper, we study these problems and propose the use of prior probabilities to stabilize camera motion estimation for the application of computing endoscope motion sequences in colonoscopy. Colonoscopy presents a special case for camera motion estimation in which it is possible to characterize typical motion sequences of the endoscope. As the endoscope is restricted to move within a roughly tube-shaped structure, forward/backward motion is expected, with only small amounts of rotation and horizontal movement. We formulate a probabilistic model of endoscope motion by maneuvering an endoscope and attached magnetic tracker through a synthetic colon model and fitting a distribution to the observed motion of the magnetic tracker. This model enables us to estimate the probability of the current endoscope motion given previously observed motion in the sequence. We add these prior probabilities into the camera motion calculation as an additional penalty term in RANSAC to help reject improbable motion parameters caused by outliers and other problems with medical data. This paper presents the theoretical basis of our method along with preliminary results on indoor scenes and synthetic colon images.

  9. Coincidence electron/ion imaging with a fast frame camera

    NASA Astrophysics Data System (ADS)

    Li, Wen; Lee, Suk Kyoung; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander; Fan, Lin

    2015-05-01

    A new time- and position-sensitive particle detection system based on a fast frame CMOS camera is developed for coincidence electron/ion imaging. The system is composed of three major components: a conventional microchannel plate (MCP)/phosphor screen electron/ion imager, a fast frame CMOS camera and a high-speed digitizer. The system collects the positional information of ions/electrons from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the MCPs processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of electron/ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragment pair (methyl and iodine cations) produced from strong field dissociative double ionization of methyl iodide. We further show that a time resolution of 30 ps can be achieved when measuring the electron TOF spectrum, which enables the new system to achieve a good energy resolution along the TOF axis.
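The real-time centroiding step mentioned above can be sketched as thresholding a camera frame and computing the intensity-weighted centroid of each bright spot. A single synthetic spot is used here, and the spot size, threshold, and frame dimensions are illustrative assumptions; a real frame would first be segmented into per-spot regions.

```python
import numpy as np

# Minimal sketch of sub-pixel centroiding of a phosphor-screen spot.
frame = np.zeros((32, 32))
yy, xx = np.mgrid[0:32, 0:32]
cy, cx = 12.0, 20.0                       # true spot centre (illustrative)
frame += 100 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 1.5 ** 2))

mask = frame > 10.0                       # threshold out dark pixels
w = frame * mask                          # intensity weights inside the spot
centroid_y = (yy * w).sum() / w.sum()
centroid_x = (xx * w).sum() / w.sum()
print(round(float(centroid_y), 2), round(float(centroid_x), 2))   # → 12.0 20.0
```

The summed spot intensity (`w.sum()`) is what gets correlated with the MCP time-of-flight peak heights to assign each spot an arrival time in the multi-hit scheme.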

  10. Coincidence ion imaging with a fast frame camera

    NASA Astrophysics Data System (ADS)

    Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander H.; Fan, Lin; Li, Wen

    2014-12-01

    A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide semiconductors) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from a fast frame camera through real-time centroiding while the arrival times are obtained from the timing signal of a PMT processed by a high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of a PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real-time at 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragments pair (methyl and iodine cations) produced from strong field dissociative double ionization of methyl iodide.

  11. Coincidence ion imaging with a fast frame camera

    SciTech Connect

    Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander H.; Fan, Lin; Li, Wen

    2014-12-15

    A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide-semiconductor) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single-anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the PMT processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum from the PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched pair of co-fragments (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide.

  12. Spatial calibration of full stokes polarization imaging camera

    NASA Astrophysics Data System (ADS)

    Vedel, M.; Breugnot, S.; Lechocinski, N.

    2014-05-01

    Objective and background: We present a new method for the calibration of Bossa Nova Technologies' full Stokes, passive polarization imaging camera SALSA. The SALSA camera is a division-of-time imaging polarimeter. It uses custom-made ferroelectric liquid crystals (FLCs) mounted directly in front of the camera's CCD. The regular calibration process, based on Data Reduction Matrix (DRM) calculation, assumes perfect spatial uniformity of the FLC. However, the alignment of FLC molecules can be disturbed by external constraints such as mechanical stress from the fixture, temperature variations and humidity. This disarray of the FLC molecular alignment appears as spatial non-uniformity. With typical DRM condition numbers of 2 to 5, the resulting variations of DOLP and DOCP over the field of view can reach 10%. The spatial non-uniformity of commercially available FLC products is the limiting factor for achieving reliable performance over the camera's whole field of view. We developed a field calibration technique based on mapping the CCD into areas of interest, then applying the DRM calculations to those individual areas. Results: First, we provide general background on the SALSA camera's technology, its performance and limitations. A detailed analysis of commercially available FLCs is described, in particular the influence of spatial non-uniformity on the Stokes parameters. Then, the new calibration technique is presented. Several configurations and parameters are tested: even division of the CCD into square-shaped regions, the number of regions, and adaptive regions. Finally, the spatial DRM "stitching" process is described, especially for live calculation and display of the Stokes parameters.
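    The per-region scheme can be sketched in a few lines of numpy. This is a hedged illustration, assuming a four-state measurement per region whose 4x4 instrument matrix A has been measured with reference polarization states; the function names and region keys are ours, not the SALSA software's.

```python
import numpy as np

def build_drm(instrument_matrix):
    # The region's Data Reduction Matrix is the (pseudo-)inverse of its
    # measured 4x4 instrument matrix A.
    return np.linalg.pinv(instrument_matrix)

def stokes_per_region(intensities, drms):
    """intensities: {region: 4-vector of FLC-state intensities};
    drms: {region: 4x4 DRM}. Returns {region: Stokes vector S = DRM @ I}."""
    return {r: drms[r] @ i for r, i in intensities.items()}

# Toy example with an ideal (identity) instrument matrix in one region;
# a real FLC matrix would have a condition number of roughly 2 to 5.
A = np.eye(4)
drms = {(0, 0): build_drm(A)}
meas = {(0, 0): np.array([1.0, 0.5, 0.0, 0.1])}
stokes = stokes_per_region(meas, drms)
```

    Finer region grids track the FLC non-uniformity more closely at the cost of more calibration measurements per camera.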

  13. Laser Imaging Video Camera Sees Through Fire, Fog, Smoke

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Under a series of SBIR contracts with Langley Research Center, inventor Richard Billmers refined a prototype for a laser imaging camera capable of seeing through fire, fog, smoke, and other obscurants. Now, Canton, Ohio-based Laser Imaging through Obscurants (LITO) Technologies Inc. is demonstrating the technology as a perimeter security system at Glenn Research Center and planning its future use in aviation, shipping, emergency response, and other fields.

  14. Radiometric cloud imaging with an uncooled microbolometer thermal infrared camera.

    PubMed

    Shaw, Joseph; Nugent, Paul; Pust, Nathan; Thurairajah, Brentha; Mizutani, Kohei

    2005-07-25

    An uncooled microbolometer-array thermal infrared camera has been incorporated into a remote sensing system for radiometric sky imaging. The radiometric calibration is validated and improved through direct comparison with spectrally integrated data from the Atmospheric Emitted Radiance Interferometer (AERI). With the improved calibration, the Infrared Cloud Imager (ICI) system routinely obtains sky images with radiometric uncertainty less than 0.5 W/(m² sr) for extended deployments in challenging field environments. We demonstrate the infrared cloud imaging technique with still and time-lapse imagery of clear and cloudy skies, including stratus, cirrus, and wave clouds. PMID:19498585
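    A comparison against reference radiances of the kind described reduces, in its simplest form, to fitting a gain/offset correction. The sketch below uses hypothetical numbers (not the paper's data) to show how such a correction would bring the residual under the quoted 0.5 W/(m² sr) level.

```python
import numpy as np

# Hypothetical recalibration step: fit a gain/offset correction mapping
# camera-reported radiances onto coincident reference radiances (standing
# in for spectrally integrated AERI values; all numbers are illustrative).
cam = np.array([22.1, 35.4, 48.9, 61.2, 80.5])   # W/(m^2 sr), camera
ref = np.array([23.0, 36.0, 50.1, 62.0, 81.9])   # W/(m^2 sr), reference

gain, offset = np.polyfit(cam, ref, 1)           # least-squares line
corrected = gain * cam + offset
rms = float(np.sqrt(np.mean((corrected - ref) ** 2)))
```

    In an operational system the fit would be redone per deployment, since microbolometer response drifts with ambient temperature.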

  15. Digital Camera with Apparatus for Authentication of Images Produced from an Image File

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L. (Inventor)

    1996-01-01

    A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely related to the private key that digital data encrypted with the private key may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The authenticating apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match. Other techniques to address time-honored methods of deception, such as attaching false captions or inducing forced perspectives, are included.
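    The sign-and-verify flow can be illustrated with textbook RSA on deliberately tiny primes. This is an insecure toy for exposition only; the patented camera would use full-size keys and a production-grade implementation.

```python
import hashlib

# Toy key pair: textbook RSA with tiny primes (NOT secure).
p, q = 61, 53
n = p * q                 # public modulus (kept with the public key)
phi = (p - 1) * (q - 1)
e = 17                    # public exponent
d = pow(e, -1, phi)       # private exponent (embedded in the camera)

def image_hash(data: bytes) -> int:
    # Hash of the image file, reduced mod n to fit the toy key size
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def sign(data: bytes) -> int:
    # "Encrypt" the hash with the private key -> digital signature
    return pow(image_hash(data), d, n)

def verify(data: bytes, signature: int) -> bool:
    # "Decrypt" the signature with the public key and compare hashes
    return pow(signature, e, n) == image_hash(data)

img = b"raw image file bytes"
sig = sign(img)
```

    Because only the camera holds d, a successful verification shows the image file is bit-for-bit what the camera produced.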

  16. Sloan Digital Sky Survey imaging camera: design and performance

    NASA Astrophysics Data System (ADS)

    Rockosi, Constance M.; Gunn, James E.; Carr, Michael A.; Sekiguchi, Masaki; Ivezic, Zeljko; Munn, Jeffrey A.

    2002-12-01

    The Sloan Digital Sky Survey (SDSS) imaging camera saw first light in May 1998, and has been in regular operation since the start of the survey in April 2000. We review here key elements in the design of the instrument driven by the specific goals of the survey, and discuss some of the operational issues involved in keeping the instrument ready to observe at all times and in monitoring its performance. We present data on the mechanical and photometric stability of the camera, using on-sky survey data as collected and processed to date.

  17. Copernican craters: Early results from the Lunar Reconnaissance Orbiter Camera

    NASA Astrophysics Data System (ADS)

    McEwen, A. S.; Hiesinger, H.; Thomas, P. C.; Robinson, M. S.; van der Bogert, C.; Ostrach, L.; Plescia, J. B.; Bray, V. J.; Tornabene, L. L.

    2009-12-01

    The youngest (Copernican) craters on the Moon provide the best examples of original crater morphology and a record of the impact flux over the last ~1 Ga in the Earth-Moon system. The LRO Narrow Angle Cameras (NAC) provide 50 cm pixels from an altitude of 50 km. With changing incidence angle, global access, and very high data rates, these cameras provide unprecedented data on lunar craters. Stereo image pairs are being acquired for detailed topographic mapping. These data allow comparisons of relative ages of the larger young craters, some of which are tied to absolute radiometric ages from Apollo-returned samples. These relative ages, the crater populations at small diameters, and details of crater morphology including ejecta and melt morphologies, allow better delineation of recent lunar history and the formation and modification of impact craters. Crater counts may also reveal differences in the formation and preservation of small-diameter craters as a function of target material (e.g., unconsolidated regolith versus solid impact melt). One key question: is the current cratering rate constant, or does it fluctuate? We will constrain the very recent cratering rate (at 10-100 m diameter) by comparing LROC images with those taken by Apollo nearly 40 years ago to determine the number of new impact craters. The current cratering rate and an assumption of a constant cratering rate over time may or may not correctly predict the number of craters superimposed on radiometrically dated surfaces such as South Ray, Cone, and North Ray craters, which range from 2-50 Ma and are not saturated by 10-100 m craters. If the prediction fails with realistic consideration of errors, then the present-day cratering rate must be atypical. Secondary craters complicate this analysis, but the resolution and coverage of LROC enable improved recognition of secondary craters. Of particular interest for the youngest Copernican craters is the possibility of self-cratering.
LROC is providing the image quality needed to classify small craters by state of degradation (i.e., relative age); concentrations of craters with uniform size and age indicate secondary formation. Portion of LROC image M103703826LE showing a sparsely cratered pond of impact melt on the floor of the farside Copernican crater Necho (4.95 S, 123.6 E).
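    The constant-rate test described above amounts to a simple expectation calculation. All numbers in this sketch are hypothetical placeholders, not measured values:

```python
# Purely illustrative consistency check: does an assumed-constant
# cratering rate predict the crater count on a radiometrically dated
# surface? Every number here is hypothetical.
rate = 2.0e-6    # new craters (>10 m) per km^2 per year, hypothetical
area = 12.0      # area of the dated surface unit, km^2
age = 25.0e6     # radiometric age of the surface, years

expected = rate * area * age   # craters expected if the rate is constant
```

    If the observed count on the dated unit differed from this expectation by more than the counting and rate uncertainties allow, the constant-rate assumption would fail.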

  18. A novel SPECT camera for molecular imaging of the prostate

    NASA Astrophysics Data System (ADS)

    Cebula, Alan; Gilland, David; Su, Li-Ming; Wagenaar, Douglas; Bahadori, Amir

    2011-10-01

    The objective of this work is to develop an improved SPECT camera for dedicated prostate imaging. Complementing the recent advancements in agents for molecular prostate imaging, this device has the potential to assist in distinguishing benign from aggressive cancers, to improve site-specific localization of cancer, to improve accuracy of needle-guided prostate biopsy of cancer sites, and to aid in focal therapy procedures such as cryotherapy and radiation. Theoretical calculations show that the spatial resolution/detection sensitivity of the proposed SPECT camera can rival or exceed 3D PET and further signal-to-noise advantage is attained with the better energy resolution of the CZT modules. Based on photon transport simulation studies, the system has a reconstructed spatial resolution of 4.8 mm with a sensitivity of 0.0001. Reconstruction of a simulated prostate distribution demonstrates the focal imaging capability of the system.

  19. Measuring SO2 ship emissions with an ultraviolet imaging camera

    NASA Astrophysics Data System (ADS)

    Prata, A. J.

    2014-05-01

    Over the last few years, fast-sampling ultraviolet (UV) imaging cameras have been developed for use in measuring SO2 emissions from industrial sources (e.g. power plants; typical emission rates ~ 1-10 kg s-1) and natural sources (e.g. volcanoes; typical emission rates ~ 10-100 kg s-1). Generally, measurements have been made from sources rich in SO2, with high concentrations and emission rates. In this work, for the first time, a UV camera has been used to measure the much lower concentrations and emission rates of SO2 (typical emission rates ~ 0.01-0.1 kg s-1) in the plumes from moving and stationary ships. Some innovations and trade-offs have been made so that estimates of the emission rates and path concentrations can be retrieved in real time. Field experiments were conducted at Kongsfjord in Ny Ålesund, Svalbard, where SO2 emissions from cruise ships were measured, and at the port of Rotterdam, Netherlands, where emissions from more than 10 different container and cargo ships were measured. In all cases SO2 path concentrations could be estimated and emission rates determined by measuring ship plume speeds simultaneously using the camera, or by using surface wind speed data from an independent source. Accuracies were compromised in some cases by the presence of particulates in some ship emissions and by the restriction to single-filter UV imagery, a requirement for fast sampling (> 10 Hz) from a single camera. Despite the ease of use and the ability to determine SO2 emission rates with the UV camera system, the limitations in accuracy and precision suggest that the system may only be used under rather ideal circumstances and that the technology currently needs further development to serve as a method for monitoring ship emissions for regulatory purposes. A dual-camera system, or a single dual-filter camera, is required in order to properly correct for the effects of particulates in ship plumes.

  20. Innovative Camera and Image Processing System to Characterize Cryospheric Changes

    NASA Astrophysics Data System (ADS)

    Schenk, A.; Csatho, B. M.; Nagarajan, S.

    2010-12-01

    The polar regions play an important role in Earth’s climatic and geodynamic systems. Digital photogrammetric mapping provides a means for monitoring the dramatic changes observed in the polar regions during the past decades. High-resolution, photogrammetrically processed digital aerial imagery provides complementary information to surface measurements obtained by laser altimetry systems. While laser points accurately sample the ice surface, stereo images allow for the mapping of features such as crevasses, flow bands, shear margins, moraines, leads, and different types of sea ice. Tracking features in repeat images produces a dense velocity vector field that can either serve as validation for interferometrically derived surface velocities or constitute a stand-alone product. A multi-modal photogrammetric platform consists of one or more high-resolution commercial color cameras, a GPS/inertial navigation system, and an optional laser scanner. Such a system, using a Canon EOS-1DS Mark II camera, was first flown on the IceBridge missions in Fall 2009 and Spring 2010, capturing hundreds of thousands of images at a frame interval of about one second. While digital images and videos have been used for quite some time for visual inspection, precise 3D measurements with low-cost commercial cameras require special photogrammetric treatment that has only recently become available. Calibrating the multi-camera imaging system and geo-referencing the images are absolute prerequisites for all subsequent applications. Commercial cameras are inherently non-metric, that is, their sensor model is only approximately known. Since these cameras are not as rugged as photogrammetric cameras, the interior orientation also changes due to temperature and pressure changes and aircraft vibration, resulting in large errors in 3D measurements. It is therefore necessary to calibrate the cameras frequently, at least whenever the system is newly installed.
Geo-referencing the images is performed with the Applanix navigation system. Our new method enables a 3D reconstruction of the ice sheet surface with high accuracy and unprecedented detail, as demonstrated by examples from the Antarctic Peninsula acquired by the IceBridge mission. Repeat digital imaging also provides data for determining surface elevation changes and velocities, which are critical parameters for ice sheet models. Although these methods work well, there are known problems with satellite images and traditional area-based matching, especially over rapidly changing outlet glaciers. To take full advantage of the high-resolution, repeat stereo imaging we have developed a new method. The processing starts with the generation of a DEM from geo-referenced stereo images of the first time epoch. The next step is concerned with extracting and matching interest points in object space. Since an interest point moves its spatial position between two time epochs, such points are only radiometrically conjugate but not geometrically. In fact, the geometric displacement of two identical points, together with the time difference, yields velocities. We computed the evolution of the velocity field and surface topography on the floating tongue of the Jakobshavn glacier from historical stereo aerial photographs to illustrate the approach.

  1. Characterization of a PET Camera Optimized for Prostate Imaging

    SciTech Connect

    Huber, Jennifer S.; Choong, Woon-Seng; Moses, William W.; Qi,Jinyi; Hu, Jicun; Wang, G.C.; Wilson, David; Oh, Sang; Huesman, RonaldH.; Derenzo, Stephen E.

    2005-11-11

    We present the characterization of a positron emission tomograph for prostate imaging that centers a patient between a pair of external curved detector banks (ellipse: 45 cm minor, 70 cm major axis). The distance between detector banks adjusts to allow patient access and to position the detectors as closely as possible for maximum sensitivity with patients of various sizes. Each bank is composed of two axial rows of 20 HR+ block detectors, for a total of 80 detectors in the camera. The individual detectors are angled in the transaxial plane to point towards the prostate to reduce resolution degradation in that region. The detectors are read out by modified HRRT data acquisition electronics. Compared to a standard whole-body PET camera, our dedicated prostate camera has the same sensitivity and resolution, less background (fewer randoms and a lower scatter fraction) and a lower cost. We have completed construction of the camera. Characterization data and reconstructed images of several phantoms are shown. The sensitivity for a point source in the center is 946 cps/μCi. Spatial resolution is 4 mm FWHM in the central region.
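    As a back-of-the-envelope check (our own arithmetic, not a figure from the paper), the quoted point-source sensitivity converts to an absolute detection efficiency of roughly 2.6%:

```python
# Convert the quoted sensitivity (counts/s per microcurie) into an
# absolute detection efficiency. 1 uCi = 3.7e4 decays/s.
CPS_PER_UCI = 946.0
DECAYS_PER_S_PER_UCI = 3.7e4

efficiency = CPS_PER_UCI / DECAYS_PER_S_PER_UCI   # fraction of decays counted
```

    Note this folds in geometric coverage, crystal efficiency and positron branching, so it is a lumped figure of merit rather than a per-photon efficiency.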

  2. New high spatial resolution portable camera in medical imaging

    NASA Astrophysics Data System (ADS)

    Trotta, C.; Massari, R.; Palermo, N.; Scopinaro, F.; Soluri, A.

    2007-07-01

    In recent years, many studies have been carried out on portable gamma cameras in order to optimize a device for medical imaging. In this paper, we present a new type of gamma camera for low-energy detection, based on a position-sensitive photomultiplier tube (Hamamatsu Flat Panel H8500) and an innovative technique using CsI(Tl) scintillation crystals inserted into the square holes of a tungsten collimator. The geometrical features of this collimator-scintillator structure, which affect the camera's spatial resolution and sensitivity, were chosen to offer optimal performance in clinical functional examinations. Detector sensitivity, energy resolution and spatial resolution were measured, and the acquired image quality was evaluated with particular attention to the pixel identification capability. This low-weight (about 2 kg) portable gamma camera was developed thanks to a miniaturized resistive-chain electronic readout combined with a dedicated compact 4-channel ADC board. This data acquisition board, designed by our research group, showed excellent performance with respect to a commercial PCI 6110E card (National Instruments) in terms of sampling period and additional on-board operations for data pre-processing.

  3. Camera assembly design proposal for SRF cavity image collection

    SciTech Connect

    Tuozzolo, S.

    2011-10-10

    This project seeks to collect images from the inside of a superconducting radio frequency (SRF) large grain niobium cavity during vertical testing. These images will provide information on multipacting and other phenomena occurring in the SRF cavity during these tests. Multipacting, a process that involves an electron buildup in the cavity and concurrent loss of RF power, is thought to be occurring near the cathode in the SRF structure. Images of electron emission in the structure will help diagnose the source of multipacting in the cavity. Multipacting sources may be eliminated with an alteration of geometric or resonant conditions in the SRF structure. Other phenomena, including unexplained light emissions previously discovered at SLAC, may be present in the cavity. In order to effectively capture images of these events during testing, a camera assembly needs to be installed to the bottom of the RF structure. The SRF assembly operates under extreme environmental conditions: it is kept in a dewar in a bath of 2K liquid helium during these tests, is pumped down to ultra-high vacuum, and is subjected to RF voltages. Because of this, the camera needs to exist as a separate assembly attached to the bottom of the cavity. The design of the camera is constrained by a number of factors that are discussed.

  4. Engineering design criteria for an image intensifier/image converter camera

    NASA Technical Reports Server (NTRS)

    Sharpsteen, J. T.; Lund, D. L.; Stoap, L. J.; Solheim, C. D.

    1976-01-01

    The design, display, and evaluation of an image intensifier/image converter camera which can be utilized in various space shuttle experiments are described. An image intensifier tube was utilized in combination with two brassboard power supplies and evaluated for night photography in the field. Pictures were obtained showing field details which would have been indistinguishable to the naked eye or to an ordinary camera.

  5. Scalar wave-optical reconstruction of plenoptic camera images.

    PubMed

    Junker, André; Stenau, Tim; Brenner, Karl-Heinz

    2014-09-01

    We investigate the reconstruction of plenoptic camera images in a scalar wave-optical framework. Previous publications relating to this topic numerically simulate light propagation on the basis of ray tracing. However, due to continuing miniaturization of hardware components it can be assumed that in combination with low-aperture optical systems this technique may not be generally valid. Therefore, we study the differences between ray- and wave-optical object reconstructions of true plenoptic camera images. For this purpose we present a wave-optical reconstruction algorithm, which can be run on a regular computer. Our findings show that a wave-optical treatment is capable of increasing the detail resolution of reconstructed objects. PMID:25321378
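    Scalar wave-optical propagation of the kind discussed is commonly implemented with the angular-spectrum method. The following is a generic sketch of such a propagator, not the authors' reconstruction algorithm:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z with the
    angular-spectrum method (standard scalar wave optics; illustrative)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)            # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
    # Transfer function; evanescent components (arg <= 0) are dropped
    H = np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0))) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Sanity check: a plane wave keeps unit amplitude after propagation
u0 = np.ones((64, 64), dtype=complex)
u1 = angular_spectrum_propagate(u0, wavelength=0.5e-6, dx=2e-6, z=1e-3)
```

    A plenoptic reconstruction would apply such a propagator between the microlens plane and candidate object planes instead of tracing rays.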

  6. A Comparative Study of Microscopic Images Captured by a Box Type Digital Camera Versus a Standard Microscopic Photography Camera Unit

    PubMed Central

    Desai, Nandini J.; Gupta, B. D.; Patel, Pratik Narendrabhai

    2014-01-01

    Introduction: Obtaining images of slides viewed by a microscope can be invaluable for both diagnosis and teaching. They can be transferred among technologically-advanced hospitals for further consultation and evaluation. But a standard microscopic photography camera unit (MPCU) (MIPS-Microscopic Image Projection System) is costly and not available in resource-poor settings. The aim of our endeavour was to find a comparable and cheaper alternative method for photomicrography. Materials and Methods: We used a NIKON Coolpix S6150 camera (box type digital camera) with an Olympus CH20i microscope and a fluorescent microscope for the purpose of this study. Results: We obtained comparable results for capturing images of light microscopy, but the results were not as satisfactory for fluorescent microscopy. Conclusion: A box type digital camera is a comparable, less expensive and convenient alternative to a microscopic photography camera unit. PMID:25478350

  7. Performance characterization of an image converter based streak camera

    SciTech Connect

    Lai, C.C.; Olk, L.B.

    1985-08-20

    The performance response of an electronic subnanosecond streak camera to a spatially distributed optical signal varies significantly with the image location on the output screen. The variations are due mainly to the combined effects of (1) electron-optics aberrations, (2) imperfections in the camera sweep ramps and gating waveforms, (3) photocathode and phosphor quantum efficiency nonuniformities, and (4) excessive incident intensity or power. Consequently, a dynamic full-scale characterization of the streak camera is necessary for achieving better measurement accuracy, relative or absolute. To meet this need, we are developing a simple yet versatile technique for characterizing the large-format image-converter-tube based streak cameras that are routinely used as the prime diagnostic instruments at the nuclear test site in Nevada. A mode-locked pulsed dye laser, routed through beam splitters and mirrors, provides a repetitive light source of multiple pulses with known intensities and inter-pulse timing for illumination along the streak sweep axis. Meanwhile, a bar-chart test pattern at the input slit intercepting an expanded form of the light source, or a bundle of equal-length fibers fanning out into a linear array replacing the slit, distributes the illumination along the axis perpendicular to the sweep. In a single shot, this technique enables an accurate and detailed mapping of the key performance parameters of a large-format streak camera. The obtainable parameters include quantitative temporal and spatial resolutions, descriptive dynamic range, two-dimensional sweep nonlinearity, and intensity- or power-dependent distortions. The experimental setup is described. Sample test data, digitization plots, and computer analysis results are presented.

  8. Camera system resolution and its influence on digital image correlation

    DOE PAGESBeta

    Reu, Phillip L.; Sweatt, William; Miller, Timothy; Fleming, Darryn

    2014-09-21

    Digital image correlation (DIC) uses images from a camera and lens system to make quantitative measurements of the shape, displacement, and strain of test objects. This increasingly popular method has had little research on the influence of the imaging system resolution on the DIC results. This paper investigates the entire imaging system and studies how both the camera and lens resolution influence the DIC results as a function of the system Modulation Transfer Function (MTF). It will show that when making spatial resolution decisions (including speckle size) the resolution limiting component should be considered. A consequence of the loss of spatial resolution is that the DIC uncertainties will be increased. This is demonstrated using both synthetic and experimental images with varying resolution. The loss of image resolution and DIC accuracy can be compensated for by increasing the subset size, or better, by increasing the speckle size. The speckle-size and spatial resolution are now a function of the lens resolution rather than the more typical assumption of the pixel size. The study will demonstrate the tradeoffs associated with limited lens resolution.

  9. Camera system resolution and its influence on digital image correlation

    SciTech Connect

    Reu, Phillip L.; Sweatt, William; Miller, Timothy; Fleming, Darryn

    2014-09-21

    Digital image correlation (DIC) uses images from a camera and lens system to make quantitative measurements of the shape, displacement, and strain of test objects. This increasingly popular method has had little research on the influence of the imaging system resolution on the DIC results. This paper investigates the entire imaging system and studies how both the camera and lens resolution influence the DIC results as a function of the system Modulation Transfer Function (MTF). It will show that when making spatial resolution decisions (including speckle size) the resolution limiting component should be considered. A consequence of the loss of spatial resolution is that the DIC uncertainties will be increased. This is demonstrated using both synthetic and experimental images with varying resolution. The loss of image resolution and DIC accuracy can be compensated for by increasing the subset size, or better, by increasing the speckle size. The speckle-size and spatial resolution are now a function of the lens resolution rather than the more typical assumption of the pixel size. The study will demonstrate the tradeoffs associated with limited lens resolution.
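    The core DIC operation, locating a speckle subset in a deformed image, is typically driven by a zero-normalized cross-correlation (ZNCC) criterion. The integer-pixel search below is an illustrative sketch (real DIC codes add subpixel interpolation), not the authors' software:

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation, a standard DIC similarity metric."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

def track_subset(ref, cur, top_left, size, search=5):
    """Integer-pixel search for where the subset at top_left in `ref`
    moved to in `cur`. Returns the displacement (du, dv)."""
    r, c = top_left
    template = ref[r:r + size, c:c + size]
    best, best_uv = -2.0, (0, 0)
    for du in range(-search, search + 1):
        for dv in range(-search, search + 1):
            cand = cur[r + du:r + du + size, c + dv:c + dv + size]
            if cand.shape != template.shape:
                continue  # candidate window fell off the image
            score = zncc(template, cand)
            if score > best:
                best, best_uv = score, (du, dv)
    return best_uv

# Synthetic speckle pattern rigidly shifted by (2, 3) pixels
rng = np.random.default_rng(0)
ref = rng.random((40, 40))
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))
```

    The paper's point maps onto this sketch directly: blurring `ref` and `cur` with a poor lens MTF flattens the correlation peak, so larger subsets or larger speckles are needed to keep the match well conditioned.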

  10. First Halley multicolour camera imaging results from Giotto

    NASA Technical Reports Server (NTRS)

    Keller, H. U.; Arpigny, C.; Barbieri, C.; Bonnet, R. M.; Cazes, S.

    1986-01-01

    The Giotto spacecraft's Halley Multicolour Camera imaging results have furnished flyby images centered on the brightest part of the inner coma; these show the silhouette of a large, solid and irregularly shaped cometary nucleus and jetlike dust activity. The preliminary assessment of these data has yielded information on the dimensions and shape of the nucleus and on dust emission activity. It is noted that only minor parts of the surface are active, with most of the surface being covered by a nonvolatile material. Dust jets dominate the inner coma and are restricted to the subsolar hemisphere.

  11. Parallel phase-sensitive three-dimensional imaging camera

    DOEpatents

    Smithpeter, Colin L.; Hoover, Eddie R.; Pain, Bedabrata; Hancock, Bruce R.; Nellums, Robert O.

    2007-09-25

    An apparatus is disclosed for generating a three-dimensional (3-D) image of a scene illuminated by a pulsed light source (e.g. a laser or light-emitting diode). The apparatus, referred to as a phase-sensitive 3-D imaging camera utilizes a two-dimensional (2-D) array of photodetectors to receive light that is reflected or scattered from the scene and processes an electrical output signal from each photodetector in the 2-D array in parallel using multiple modulators, each having inputs of the photodetector output signal and a reference signal, with the reference signal provided to each modulator having a different phase delay. The output from each modulator is provided to a computational unit which can be used to generate intensity and range information for use in generating a 3-D image of the scene. The 3-D camera is capable of generating a 3-D image using a single pulse of light, or alternately can be used to generate subsequent 3-D images with each additional pulse of light.
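    The phase arithmetic behind such a camera can be illustrated with the standard four-sample quadrature estimator for modulated-light ranging. This sketch assumes ideal sinusoidal correlation samples; the patent's parallel per-pixel modulators realize the same idea in hardware:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def range_from_quadrature(a0, a90, a180, a270, f_mod):
    """Recover range from four phase-stepped correlation samples (the
    standard quadrature estimator for modulated-light ranging)."""
    phase = math.atan2(a90 - a270, a0 - a180)   # in [-pi, pi]
    if phase < 0:
        phase += 2 * math.pi
    return C * phase / (4 * math.pi * f_mod)    # half the round-trip path

# Synthetic target at 5 m, 10 MHz modulation (unambiguous range ~15 m)
f = 10e6
true_range = 5.0
phi = 4 * math.pi * f * true_range / C
samples = [math.cos(phi - k * math.pi / 2) for k in range(4)]
est = range_from_quadrature(*samples, f)
```

    Each pixel's modulator bank supplies one such phase-stepped sample set, which is why the patent can form a full range image from a single light pulse.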

  12. An Efficient Image Compressor for Charge Coupled Devices Camera

    PubMed Central

    Li, Jin; Xing, Fei; You, Zheng

    2014-01-01

    Recently, discrete wavelet transform- (DWT-) based compressors, such as JPEG2000 and CCSDS-IDC, have been widely seen as the state-of-the-art compression schemes for charge coupled device (CCD) cameras. However, CCD images projected on the DWT basis produce a large number of large-amplitude high-frequency coefficients, because these images contain a large amount of complex texture and contour information, which is a disadvantage for the subsequent coding. In this paper, we propose a low-complexity posttransform coupled with compressive sensing (PT-CS) compression approach for remote sensing images. First, the DWT is applied to the remote sensing image. Then, a posttransform from a base pair is applied to the DWT coefficients. The base pair comprises the DCT basis and the Hadamard basis, which are used at high and low bit rates, respectively. The best posttransform is selected by an lp-norm-based approach. The posttransform is considered as the sparse representation stage of CS. The posttransform coefficients are resampled by a sensing measurement matrix. Experimental results on on-board CCD camera images show that the proposed approach significantly outperforms the CCSDS-IDC-based coder, and its performance is comparable to that of JPEG2000 at low bit rates without the excessive implementation complexity of JPEG2000. PMID:25114977

  13. An efficient image compressor for charge coupled devices camera.

    PubMed

    Li, Jin; Xing, Fei; You, Zheng

    2014-01-01

    Recently, discrete wavelet transform- (DWT-) based compressors, such as JPEG2000 and CCSDS-IDC, have been widely seen as the state-of-the-art compression schemes for charge coupled device (CCD) cameras. However, CCD images projected on the DWT basis produce a large number of large-amplitude high-frequency coefficients, because these images contain a large amount of complex texture and contour information, which is a disadvantage for the subsequent coding. In this paper, we propose a low-complexity posttransform coupled with compressive sensing (PT-CS) compression approach for remote sensing images. First, the DWT is applied to the remote sensing image. Then, a posttransform from a base pair is applied to the DWT coefficients. The base pair comprises the DCT basis and the Hadamard basis, which are used at high and low bit rates, respectively. The best posttransform is selected by an lp-norm-based approach. The posttransform is considered as the sparse representation stage of CS. The posttransform coefficients are resampled by a sensing measurement matrix. Experimental results on on-board CCD camera images show that the proposed approach significantly outperforms the CCSDS-IDC-based coder, and its performance is comparable to that of JPEG2000 at low bit rates without the excessive implementation complexity of JPEG2000. PMID:25114977
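    The observation that textured CCD images dump energy into the high-frequency DWT subbands is easy to reproduce with a one-level Haar transform (a minimal stand-in for the paper's DWT stage; the test images are synthetic):

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar wavelet transform.
    Returns the (LL, LH, HL, HH) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def detail_energy(img):
    # Energy in the three high-frequency subbands
    _, LH, HL, HH = haar2d(img)
    return float((LH ** 2).sum() + (HL ** 2).sum() + (HH ** 2).sum())

rng = np.random.default_rng(1)
smooth = np.tile(np.linspace(0, 1, 64), (64, 1))   # gentle gradient
textured = rng.random((64, 64))                     # noise-like texture
```

    The textured image concentrates far more energy in LH/HL/HH than the smooth one, which is exactly the regime where a posttransform on the DWT coefficients can help.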

  14. Quantifying biodiversity using digital cameras and automated image analysis.

    NASA Astrophysics Data System (ADS)

    Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.

    2009-04-01

    Monitoring the effects of extensive grazing on biodiversity in complex semi-natural habitats is labour intensive, and there are concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time-consuming aspect: the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the North of England at an average altitude of over 600 m. Rainfall is high, and in most areas the soil consists of deep peat (1 m to 3 m), populated by a mix of heather, mosses and sedges. The cameras have been in continuous operation over a 6-month period; daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial-intelligence-based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected, and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets of the collected data. By converting digital image data into statistical composite data, it can be handled in a similar way to other biodiversity statistics, thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered solutions to simplifying the process. Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention only for those images containing rare animals or unusual (undecidable) conditions, and enabling automatic deletion of images generated by erroneous triggering (e.g. cloud movements). This is the first step towards a hierarchical image processing framework, where situation subclasses such as birds or climatic conditions can be fed into more appropriate automated or semi-automated data mining software.

  15. LROC WAC 100 Meter Scale Photometrically Normalized Map of the Moon

    NASA Astrophysics Data System (ADS)

    Boyd, A. K.; Nuno, R. G.; Robinson, M. S.; Denevi, B. W.; Hapke, B. W.

    2013-12-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) monthly global observations allowed derivation of a robust empirical photometric solution over a broad range of incidence, emission, and phase (i, e, g) angles. Combining the WAC stereo-based GLD100 [1] digital terrain model (DTM) and LOLA polar DTMs [2] enabled precise topographic corrections to the photometric angles. Over 100,000 WAC observations at 643 nm were calibrated to reflectance (I/F). Photometric angles (i, e, g), latitude, and longitude were calculated and stored for each WAC pixel. The 6-dimensional data set was then reduced to 3 dimensions by photometrically normalizing I/F with a global solution similar to [3]. The global solution was calculated from three 2°x2° tiles centered on (1°N, 147°E), (45°N, 147°E), and (89°N, 147°E), and included over 40 million WAC pixels. A least squares fit to a multivariate polynomial of degree 4, f(i,e,g), was performed, and the result was the starting point for a minimum search solving the non-linear function min[(1 - (I/F)/f(i,e,g))²]. The input pixels were filtered to incidence angles (calculated from topography) < 89° and I/F greater than a minimum threshold to avoid shadowed pixels, and the output normalized I/F values were gridded into an equal-area map projection at 100 meters/pixel. At each grid location the median, standard deviation, and count of valid pixels were recorded. The normalized reflectance map is the median of all normalized WAC pixels overlapping each 100-m grid cell, with an average of 86 normalized I/F estimates per cell [3]. The resulting photometrically normalized mosaic provides the means to accurately compare I/F values for different regions on the Moon (see Nuno et al. [4]). 
The subtle differences in normalized I/F can now be traced across the local topography at regions that are illuminated at any point during the LRO mission (while the WAC was imaging), including at polar latitudes. This continuous map of reflectance at 643 nm, normalized to a standard geometry of i=30°, e=0°, g=30°, ranges from 0.036 to 0.36 (0.01%-99.99% of the histogram) with a global mean reflectance of 0.115. Immature rays of Copernican craters are typically >0.14, and maria are typically <0.07, with averages for individual maria ranging from 0.046 to 0.060. The materials with the lowest normalized reflectance on the Moon are pyroclastic deposits at Sinus Aestuum (<0.036), and those with the highest normalized reflectance are found on steep crater walls (>0.36) [4]. [1] Scholten et al. (2012), J. Geophys. Res., 117, doi:10.1029/2011JE003926. [2] Smith et al. (2010), Geophys. Res. Lett., 37, L18204, doi:10.1029/2010GL043751. [3] Boyd et al. (2012), LPSC XLIII, #2795. [4] Nuno et al., AGU (this conference).
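
    The normalization pipeline above (a least-squares fit of a degree-4 polynomial f(i,e,g), then division of each pixel's I/F by it) can be sketched on synthetic data. The photometric function, sample count, and noise level below are invented for illustration, and rescaling to the standard geometry i=30°, e=0°, g=30° is an assumed convention:

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(i, e, g, deg=4):
    # all monomials i^a * e^b * g^c with a + b + c <= deg
    cols = [np.ones_like(i)]
    for d in range(1, deg + 1):
        for combo in combinations_with_replacement((i, e, g), d):
            cols.append(np.prod(combo, axis=0))
    return np.column_stack(cols)

# synthetic "WAC pixels": photometric angles (degrees) and measured I/F
rng = np.random.default_rng(1)
i, e, g = (rng.uniform(0.0, 80.0, 5000) for _ in range(3))
iof = 0.2 * np.cos(np.radians(i)) * (1 + 0.003 * g) + rng.normal(0, 1e-3, 5000)

# least-squares fit of the degree-4 photometric function f(i,e,g)
A = poly_features(i, e, g)
coef, *_ = np.linalg.lstsq(A, iof, rcond=None)

# normalize every pixel to the standard geometry i=30, e=0, g=30
f_pix = A @ coef
f_std = poly_features(np.array([30.0]), np.array([0.0]), np.array([30.0])) @ coef
iof_norm = iof / f_pix * f_std[0]
```

    The subsequent non-linear minimum search and the median gridding into 100-m cells are omitted here.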

  16. The role of camera-bundled image management software in the consumer digital imaging value chain

    NASA Astrophysics Data System (ADS)

    Mueller, Milton; Mundkur, Anuradha; Balasubramanian, Ashok; Chirania, Virat

    2005-02-01

    This research was undertaken by the Convergence Center at the Syracuse University School of Information Studies (www.digital-convergence.info). Project ICONICA, the name for the research, focuses on the strategic implications of digital Images and the CONvergence of Image management and image CApture. Consumer imaging - the activity that we once called "photography" - is now recognized as being in the throes of a digital transformation. At the end of 2003, market researchers estimated that about 30% of the households in the U.S. and 40% of the households in Japan owned digital cameras. In 2004, of the 86 million new cameras sold (excluding one-time use cameras), a majority (56%) were estimated to be digital cameras. Sales of photographic film, while still profitable, are declining precipitously.

  17. CMOS image sensor noise reduction method for image signal processor in digital cameras and camera phones

    NASA Astrophysics Data System (ADS)

    Yoo, Youngjin; Lee, SeongDeok; Choe, Wonhee; Kim, Chang-Yong

    2007-02-01

    Digital images captured from CMOS image sensors suffer from Gaussian noise and impulsive noise. To efficiently reduce this noise in the Image Signal Processor (ISP), we analyze the noise characteristics of the ISP imaging pipeline where the noise reduction algorithm is performed. Gaussian and impulsive noise reduction methods are proposed for proper ISP implementation in the Bayer domain. The proposed method takes advantage of the analyzed noise characteristics to calculate the noise reduction filter coefficients, so noise is adaptively reduced according to the scene environment. Since noise is amplified, and its characteristics change, as the image sensor signal undergoes several image processing steps, it is better to remove noise at an earlier stage of the imaging pipeline. Thus, noise reduction is carried out in the Bayer domain of the ISP pipeline. The method is tested on the imaging pipeline of an ISP with images captured from a Samsung 2M CMOS image sensor test module. The experimental results show that the proposed method removes noise while effectively preserving edges.

  18. Imaging of Venus from Galileo: Early results and camera performance

    USGS Publications Warehouse

    Belton, M.J.S.; Gierasch, P.; Klaasen, K.P.; Anger, C.D.; Carr, M.H.; Chapman, C.R.; Davies, M.E.; Greeley, R.; Greenberg, R.; Head, J.W.; Neukum, G.; Pilcher, C.B.; Veverka, J.; Fanale, F.P.; Ingersoll, A.P.; Pollock, J.B.; Morrison, D.; Clary, M.C.; Cunningham, W.; Breneman, H.

    1992-01-01

    Three images of Venus have been returned so far by the Galileo spacecraft following an encounter with the planet on UT February 10, 1990. The images, taken at effective wavelengths of 4200 and 9900 Å, characterize the global motions and distribution of haze near the Venus cloud tops and, at the latter wavelength, deep within the main cloud. Previously undetected markings are clearly seen in the near-infrared image. The global distribution of these features, which have maximum contrasts of 3%, is different from that recorded at short wavelengths. In particular, the "polar collar," which is omnipresent in short wavelength images, is absent at 9900 Å. The maximum contrast in the features at 4200 Å is about 20%. The optical performance of the camera is described and is judged to be nominal. © 1992.

  19. Dual-camera design for coded aperture snapshot spectral imaging.

    PubMed

    Wang, Lizhi; Xiong, Zhiwei; Gao, Dahua; Shi, Guangming; Wu, Feng

    2015-02-01

    Coded aperture snapshot spectral imaging (CASSI) provides an efficient mechanism for recovering 3D spectral data from a single 2D measurement. However, since the reconstruction problem is severely underdetermined, the quality of recovered spectral data is usually limited. In this paper we propose a novel dual-camera design to improve the performance of CASSI while maintaining its snapshot advantage. Specifically, a beam splitter is placed in front of the objective lens of CASSI, which allows the same scene to be simultaneously captured by a grayscale camera. This uncoded grayscale measurement, in conjunction with the coded CASSI measurement, greatly eases the reconstruction problem and yields high-quality 3D spectral data. Both simulation and experimental results demonstrate the effectiveness of the proposed method. PMID:25967796

  20. High-resolution light field cameras based on a hybrid imaging system

    NASA Astrophysics Data System (ADS)

    Dai, Feng; Lu, Jing; Ma, Yike; Zhang, Yongdong

    2014-11-01

    Compared to traditional digital cameras, light field (LF) cameras measure not only the intensity of rays, but also their light field information. As LF cameras trade a good deal of spatial resolution for extra angular information, they provide lower spatial resolution than traditional digital cameras. In this paper, we show a hybrid imaging system consisting of a LF camera and a high-resolution traditional digital camera, achieving both high spatial resolution and high angular resolution. We build an example prototype using a Lytro camera and a DSLR camera to generate a LF image with 10 megapixel spatial resolution and get high-resolution digital refocused images, multi-view images and all-focused images.

  1. A two-camera imaging system for pest detection and aerial application

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This presentation reports on the design and testing of an airborne two-camera imaging system for pest detection and aerial application assessment. The system consists of two digital cameras with 5616 x 3744 effective pixels. One camera captures normal color images with blue, green and red bands, whi...

  2. Single-quantum dot imaging with a photon counting camera

    PubMed Central

    Michalet, X.; Colyer, R. A.; Antelman, J.; Siegmund, O.H.W.; Tremsin, A.; Vallerga, J.V.; Weiss, S.

    2010-01-01

    The expanding spectrum of applications of single-molecule fluorescence imaging ranges from fundamental in vitro studies of biomolecular activity to tracking of receptors in live cells. The success of these assays has relied on progresses in organic and non-organic fluorescent probe developments as well as improvements in the sensitivity of light detectors. We describe a new type of detector developed with the specific goal of ultra-sensitive single-molecule imaging. It is a wide-field, photon-counting detector providing high temporal and high spatial resolution information for each incoming photon. It can be used as a standard low-light level camera, but also allows access to a lot more information, such as fluorescence lifetime and spatio-temporal correlations. We illustrate the single-molecule imaging performance of our current prototype using quantum dots and discuss on-going and future developments of this detector. PMID:19689323

  3. Frequency Identification of Vibration Signals Using Video Camera Image Data

    PubMed Central

    Jeng, Yih-Nen; Wu, Chia-Hung

    2012-01-01

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most dominant modes of a vibration signal, but may also introduce non-physical modes induced by insufficient frame rates. Using a simple model, the frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera, for instance 0 to 256 levels, was enhanced by summing the gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line to the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has a critical frequency of 60 Hz for inducing false modes, whereas that of the webcam is 7.8 Hz. Several factors were proven to partially suppress the non-physical modes, but they cannot eliminate them completely. Two examples, whose prominent vibration modes are below the associated critical frequencies, are examined to demonstrate the performance of the proposed systems. In general, the experimental data show that non-contact image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system. PMID:23202026
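
    The non-physical modes mentioned above follow directly from sampling theory: a vibration faster than half the frame rate folds back into the measurable band. A minimal sketch of that prediction (the standard frequency-folding rule, not necessarily the paper's exact model):

```python
def aliased_frequency(f_signal, frame_rate):
    """Apparent frequency (Hz) of a vibration at f_signal Hz when sampled
    by a camera at frame_rate frames per second: the true frequency folds
    into the band [0, frame_rate / 2]."""
    f = f_signal % frame_rate
    return min(f, frame_rate - f)

# a 62 Hz vibration filmed at 60 frames/s appears as a false 2 Hz mode
print(aliased_frequency(62.0, 60.0))  # -> 2.0
```

    Any measured peak matching such a predicted fold of a known drive frequency can then be excluded as non-physical.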

  4. Volcanic plume characteristics determined using an infrared imaging camera

    NASA Astrophysics Data System (ADS)

    Lopez, T.; Thomas, H. E.; Prata, A. J.; Amigo, A.; Fee, D.; Moriano, D.

    2015-07-01

    Measurements of volcanic emissions (ash and SO2) from small-sized eruptions at three geographically dispersed volcanoes are presented from a novel, multichannel, uncooled imaging infrared camera. Infrared instruments and cameras have been used previously at volcanoes to study lava bodies and to assess plume dynamics using high temperature sources. Here we use spectrally resolved narrowband (~ 0.5-1 μm bandwidth) imagery to retrieve SO2 and ash slant column densities (g m⁻²) and emission rates or fluxes from infrared thermal imagery at close to ambient atmospheric temperatures. The relatively fast sampling (0.1-0.5 Hz) of the multispectral imagery and the fast sampling (~ 1 Hz) of single channel temperature data permit analysis of some aspects of plume dynamics. Estimations of SO2 and ash mass fluxes, and total slant column densities of SO2 and fine ash in individual small explosions from Stromboli (Italy) and Karymsky (Russia), and total SO2 slant column densities and fluxes from Láscar (Chile) volcanoes, are provided. We evaluate the temporal evolution of fine ash particle sizes in ash-rich explosions at Stromboli and Karymsky and use these observations to infer the presence of at least two distinct fine ash modes, with mean radii of < 10 μm and > 10 μm. The camera and techniques detailed here provide a tool to quickly and remotely estimate fluxes of fine ash and SO2 gas and characterize eruption size.
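
    As a rough illustration of how a column-density image converts to an emission rate, a common traverse-style calculation multiplies the mean slant column density across the plume by the plume width and transport speed. This is a generic first-order estimate, not necessarily the retrieval used in the paper, and the numbers below are invented:

```python
def emission_rate(column_density_gm2, plume_width_m, plume_speed_ms):
    # flux (g/s) = column density (g/m^2) x plume width (m) x speed (m/s)
    return column_density_gm2 * plume_width_m * plume_speed_ms

# e.g. a 5 g/m^2 SO2 column across a 200 m wide plume moving at 4 m/s
print(emission_rate(5.0, 200.0, 4.0))  # -> 4000.0 (g/s)
```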

  5. Imaging the seizure during surgery with a hyperspectral camera.

    PubMed

    Noordmans, Herke Jan; Ferrier, Cyrille; de Roode, Rowland; Leijten, Frans; van Rijen, Peter; Gosselaar, Peter; Klaessens, John; Verdaasdonk, Ruud

    2013-11-01

    An epilepsy patient with recurring sensorimotor seizures involving the left hand every 10 min, was imaged with a hyperspectral camera during surgery. By calculating the changes in oxygenated, deoxygenated blood, and total blood volume in the cortex, a focal increase in oxygenated and total blood volume could be observed in the sensory cortex, corresponding to the seizure-onset zone defined by intracranial electroencephalography (EEG) findings. This probably reflects very local seizure activity. After multiple subpial transections in this motor area, clinical seizures abated. PMID:24199829

  6. MECHANICAL ADVANCING HANDLE THAT SIMPLIFIES MINIRHIZOTRON CAMERA REGISTRATION AND IMAGE COLLECTION

    EPA Science Inventory

    Minirhizotrons in conjunction with a minirhizotron video camera system are becoming widely used tools for investigating root production and survival in a variety of ecosystems. Image collection with a minirhizotron camera can be time consuming and tedious particularly when hundre...

  7. Noise evaluation of Compton camera imaging for proton therapy.

    PubMed

    Ortega, P G; Torres-Espallardo, I; Cerutti, F; Ferrari, A; Gillam, J E; Lacasta, C; Llosá, G; Oliver, J F; Sala, P R; Solevi, P; Rafecas, M

    2015-03-01

    Compton cameras have emerged as an alternative for real-time dose monitoring in Particle Therapy (PT), based on the detection of prompt gammas. As a consequence of the Compton scattering process, the gamma origin point can be restricted to the surface of a cone (the Compton cone). Through image reconstruction techniques, the distribution of the gamma emitters can be estimated, using cone-surface backprojections of the Compton cones through the image space, along with more sophisticated statistical methods to improve the image quality. To calculate the Compton cone required for image reconstruction, either two interactions, the last being photoelectric absorption, or three scatter interactions are needed. Because of the high energy of the photons in PT, the first option might not be adequate, as the photon is in general not absorbed; however, the second option is less efficient. This motivates spectral reconstructions, where the incoming γ energy is treated as a variable in the reconstruction inverse problem. Together with the prompt gammas, secondary neutrons and scattered photons, which are not strongly correlated with the dose map, can also reach the imaging detector and produce false events that deteriorate the image quality. In addition, high-intensity beams can produce particle accumulation in the camera, which leads to an increase in random coincidences, i.e., events that gather measurements from different incoming particles. The noise scenario is expected to differ depending on whether double or triple events are used, and consequently the reconstructed images can be affected differently by spurious data. The aim of the present work is to study the effect of false events on the reconstructed image, evaluating their impact on the determination of the beam particle ranges. A simulation study that includes misidentified events (neutrons and random coincidences) in the final image of a Compton telescope for PT monitoring is presented. 
The complete chain of detection, from the beam particle entering a phantom to the event classification, is simulated using FLUKA. The range determination is then estimated from the reconstructed image obtained with two- and three-event algorithms based on Maximum Likelihood Expectation Maximization. The neutron background and random coincidences due to a therapeutic-like time structure are analyzed for mono-energetic proton beams. The time structure of the beam, which affects the rate of particles entering the detector, is included in the simulations. PMID:25658644
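
    The two- and three-event reconstructions above both rest on the MLEM update. A minimal numpy sketch of MLEM on a 1D toy problem follows; the system matrix, profile, and iteration count are invented, and a real Compton-camera system matrix would encode the cone-surface backprojections:

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Maximum Likelihood Expectation Maximization for an emission image:
    A is the system matrix (measurements x voxels), y the measured counts.
    The multiplicative update preserves non-negativity automatically."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                      # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                          # forward projection
        ratio = y / np.maximum(proj, 1e-12)   # compare with measurements
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# toy 1D dose-like profile seen through a hypothetical blurring system
rng = np.random.default_rng(2)
truth = np.zeros(32)
truth[5:20] = 1.0                             # plateau ending at the "range"
A = np.clip(rng.normal(0.1, 0.02, (64, 32)), 0.0, None) + np.eye(64, 32)
y = A @ truth                                 # noiseless toy measurement
est = mlem(A, y)
```

    In the noise study above, spurious counts from neutrons and random coincidences would be added to y, distorting est and hence the apparent particle range.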

  8. Noise evaluation of Compton camera imaging for proton therapy

    NASA Astrophysics Data System (ADS)

    Ortega, P. G.; Torres-Espallardo, I.; Cerutti, F.; Ferrari, A.; Gillam, J. E.; Lacasta, C.; Llosá, G.; Oliver, J. F.; Sala, P. R.; Solevi, P.; Rafecas, M.

    2015-02-01

    Compton cameras have emerged as an alternative for real-time dose monitoring in Particle Therapy (PT), based on the detection of prompt gammas. As a consequence of the Compton scattering process, the gamma origin point can be restricted to the surface of a cone (the Compton cone). Through image reconstruction techniques, the distribution of the gamma emitters can be estimated, using cone-surface backprojections of the Compton cones through the image space, along with more sophisticated statistical methods to improve the image quality. To calculate the Compton cone required for image reconstruction, either two interactions, the last being photoelectric absorption, or three scatter interactions are needed. Because of the high energy of the photons in PT, the first option might not be adequate, as the photon is in general not absorbed; however, the second option is less efficient. This motivates spectral reconstructions, where the incoming γ energy is treated as a variable in the reconstruction inverse problem. Together with the prompt gammas, secondary neutrons and scattered photons, which are not strongly correlated with the dose map, can also reach the imaging detector and produce false events that deteriorate the image quality. In addition, high-intensity beams can produce particle accumulation in the camera, which leads to an increase in random coincidences, i.e., events that gather measurements from different incoming particles. The noise scenario is expected to differ depending on whether double or triple events are used, and consequently the reconstructed images can be affected differently by spurious data. The aim of the present work is to study the effect of false events on the reconstructed image, evaluating their impact on the determination of the beam particle ranges. A simulation study that includes misidentified events (neutrons and random coincidences) in the final image of a Compton telescope for PT monitoring is presented. 
The complete chain of detection, from the beam particle entering a phantom to the event classification, is simulated using FLUKA. The range determination is then estimated from the reconstructed image obtained with two- and three-event algorithms based on Maximum Likelihood Expectation Maximization. The neutron background and random coincidences due to a therapeutic-like time structure are analyzed for mono-energetic proton beams. The time structure of the beam, which affects the rate of particles entering the detector, is included in the simulations.

  9. Image reconstruction methods for the PBX-M pinhole camera

    SciTech Connect

    Holland, A.; Powell, E.T.; Fonck, R.J.

    1990-03-01

    This paper describes two methods which have been used to reconstruct the soft x-ray emission profile of the PBX-M tokamak from the projected images recorded by the PBX-M pinhole camera. Both methods must accurately represent the shape of the reconstructed profile while also providing a degree of immunity to noise in the data. The first method is a simple least squares fit to the data. This has the advantage of being fast and small, and thus easily implemented on the PDP-11 computer used to control the video digitizer for the pinhole camera. The second method involves the application of a maximum entropy algorithm to an overdetermined system. This has the advantage of allowing the use of a default profile. This profile contains additional knowledge about the plasma shape which can be obtained from equilibrium fits to the external magnetic measurements. Additionally the reconstruction is guaranteed positive, and the fit to the data can be relaxed by specifying both the amount and distribution of noise in the image. The algorithm described has the advantage of being considerably faster, for an overdetermined system, than the usual Lagrange multiplier approach to finding the maximum entropy solution. 13 refs., 24 figs.
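
    The first (least-squares) method can be sketched in a few lines. The small Tikhonov damping term used here for noise immunity and the toy forward model are illustrative assumptions, not details from the paper, and the maximum entropy method is not shown:

```python
import numpy as np

def reconstruct_profile(G, image, lam=1e-3):
    """Least-squares fit of an emission profile to the projected pinhole
    image. G maps profile parameters to pixel values; the small Tikhonov
    term lam (an assumed choice) damps noise amplification."""
    n = G.shape[1]
    A = np.vstack([G, np.sqrt(lam) * np.eye(n)])
    b = np.concatenate([image, np.zeros(n)])
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol

# toy forward model: 100 image pixels from a 10-parameter emission profile
rng = np.random.default_rng(3)
G = rng.standard_normal((100, 10))
p_true = rng.uniform(0.5, 1.5, 10)
profile = reconstruct_profile(G, G @ p_true)
```

    Its speed comes from being a single linear solve, which is what made it practical on the PDP-11 controlling the video digitizer.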

  10. Image reconstruction methods for the PBX-M pinhole camera.

    PubMed

    Holland, A; Powell, E T; Fonck, R J

    1991-09-10

    We describe two methods that have been used to reconstruct the soft x-ray emission profile of the PBX-M tokamak from the projected images recorded by the PBX-M pinhole camera [Proc. Soc. Photo-Opt. Instrum. Eng. 691, 111 (1986)]. Both methods must accurately represent the shape of the reconstructed profile while also providing a degree of immunity to noise in the data. The first method is a simple least-squares fit to the data. This has the advantage of being fast and small and thus easily implemented on the PDP-11 computer used to control the video digitizer for the pinhole camera. The second method involves the application of a maximum entropy algorithm to an overdetermined system. This has the advantage of allowing the use of a default profile. This profile contains additional knowledge about the plasma shape that can be obtained from equilibrium fits to the external magnetic measurements. Additionally the reconstruction is guaranteed positive, and the fit to the data can be relaxed by specifying both the amount and distribution of noise in the image. The algorithm described has the advantage of being considerably faster for an overdetermined system than the usual Lagrange multiplier approach to finding the maximum entropy solution [J. Opt. Soc. Am. 62, 511 (1972); Rev. Sci. Instrum. 57, 1557 (1986)]. PMID:20706452

  11. Real-time viewpoint image synthesis using strips of multi-camera images

    NASA Astrophysics Data System (ADS)

    Date, Munekazu; Takada, Hideaki; Kojima, Akira

    2015-03-01

    A real-time viewpoint image generation method is achieved. Video communications with a high sense of reality are needed to make natural connections between users at different places. One of the key technologies to achieve a sense of high reality is image generation corresponding to an individual user's viewpoint. However, generating viewpoint images requires advanced image processing, which is usually too heavy to use for real-time and low-latency purposes. In this paper we propose a real-time viewpoint image generation method using simple blending of multiple camera images taken at equal horizontal intervals and convergence obtained by using approximate information of an object's depth. An image generated from the nearest camera images is visually perceived as an intermediate viewpoint image due to the visual effect of depth-fused 3D (DFD). If the viewpoint is not on the line of the camera array, a viewpoint image could be generated by region splitting. We made a prototype viewpoint image generation system and achieved real-time full-frame operation for stereo HD videos. The users can see their individual viewpoint image for left-and-right and back-and-forth movement toward the screen. Our algorithm is very simple and promising as a means for achieving video communication with high reality.
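
    The core of the method, simple blending of the two nearest camera images with an optional shift from the approximate depth, can be sketched as follows (illustrative, not the authors' implementation):

```python
import numpy as np

def intermediate_view(img_left, img_right, alpha, disparity=0):
    """Blend the two nearest camera images to synthesize a viewpoint at
    fraction alpha between them (0 = left camera, 1 = right camera).
    disparity is the approximate object disparity in pixels from the
    depth estimate; each image is shifted toward the virtual viewpoint
    before blending, and the blend is perceived as an intermediate view
    (the depth-fused 3D effect)."""
    l = np.roll(img_left, int(round(alpha * disparity)), axis=1)
    r = np.roll(img_right, -int(round((1.0 - alpha) * disparity)), axis=1)
    return (1.0 - alpha) * l + alpha * r

# halfway between a dark and a bright camera image
mid = intermediate_view(np.zeros((4, 6)), np.ones((4, 6)), 0.5)
```

    Because the synthesis is a weighted sum plus a shift, it runs at full frame rate, which is the property the paper exploits for low-latency operation.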

  12. Ceres Survey Atlas derived from Dawn Framing Camera images

    NASA Astrophysics Data System (ADS)

    Roatsch, Th.; Kersten, E.; Matz, K.-D.; Preusker, F.; Scholten, F.; Jaumann, R.; Raymond, C. A.; Russell, C. T.

    2016-02-01

    The Dawn Framing Camera (FC) acquired almost 900 clear filter images of Ceres with a resolution of about 400 m/pixel during the seven cycles in the Survey orbit in June 2015. We ortho-rectified 42 images from the third cycle and produced a global, high-resolution, controlled mosaic of Ceres. This global mosaic is the basis for a high-resolution Ceres atlas that consists of 3 tiles mapped at a scale of 1:2,000,000. The nomenclature used in this atlas was proposed by the Dawn team and approved by the International Astronomical Union (IAU). The whole atlas is available to the public through the Dawn GIS web page.

  13. Embedded image enhancement for high-throughput cameras

    NASA Astrophysics Data System (ADS)

    Geerts, Stan J. C.; Cornelissen, Dion; de With, Peter H. N.

    2014-03-01

    This paper presents image enhancement for a novel Ultra-High-Definition (UHD) video camera offering 4K images and higher. Conventional image enhancement techniques need to be reconsidered for the high-resolution images and the low-light sensitivity of the new sensor. We study two image enhancement functions and evaluate and optimize the algorithms for embedded implementation in programmable logic (FPGA). The enhancement study involves high-quality Auto White Balancing (AWB) and Local Contrast Enhancement (LCE). We have compared multiple algorithms from literature, both with objective and subjective metrics. In order to objectively compare Local Contrast (LC), an existing LC metric is modified for LC measurement in UHD images. For AWB, we have found that color histogram stretching offers a subjective high image quality and it is among the algorithms with the lowest complexity, while giving only a small balancing error. We impose a color-to-color gain constraint, which improves robustness of low-light images. For local contrast enhancement, a combination of contrast preserving gamma and single-scale Retinex is selected. A modified bilateral filter is designed to prevent halo artifacts, while significantly reducing the complexity and simultaneously preserving quality. We show that by cascading contrast preserving gamma and single-scale Retinex, the visibility of details is improved towards the level appropriate for high-quality surveillance applications. The user is offered control over the amount of enhancement. Also, we discuss the mapping of those functions on a heterogeneous platform to come to an effective implementation while preserving quality and robustness.
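
    The color histogram stretching selected for AWB can be sketched as below; the quantile choices and the gain cap standing in for the paper's color-to-color gain constraint are assumptions:

```python
import numpy as np

def awb_histogram_stretch(img, low=0.01, high=0.99, max_gain=2.0):
    """Stretch each color channel so its [low, high] quantile range maps
    toward [0, 1]. max_gain is a hypothetical cap standing in for a
    color-to-color gain constraint, limiting noise boost in low light.
    img is float RGB in [0, 1]."""
    out = np.empty_like(img, dtype=np.float64)
    for c in range(3):
        lo, hi = np.quantile(img[..., c], [low, high])
        gain = min(1.0 / max(hi - lo, 1e-6), max_gain)
        out[..., c] = np.clip((img[..., c] - lo) * gain, 0.0, 1.0)
    return out

# a scene with a strong color cast: the red channel is half as bright
rng = np.random.default_rng(4)
img = rng.uniform(0.0, 1.0, (100, 100, 3))
img[..., 0] *= 0.5
balanced = awb_histogram_stretch(img)
```

    Capping the per-channel gain is what keeps a dim, noisy channel from being amplified into visible chroma noise, the robustness issue the paper addresses for low-light images.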

  14. A comparison of select image-compression algorithms for an electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    This effort is a study of image-compression algorithms for an electronic still camera. An electronic still camera can record and transmit high-quality images without the use of film, because images are stored digitally in computer memory. However, high-resolution images contain an enormous amount of information, and will strain the camera's data-storage system. Image compression will allow more images to be stored in the camera's memory. For the electronic still camera, a compression algorithm that produces a reconstructed image of high fidelity is most important. Efficiency of the algorithm is the second priority. High fidelity and efficiency are more important than a high compression ratio. Several algorithms were chosen for this study and judged on fidelity, efficiency and compression ratio. The transform method appears to be the best choice. At present, the method is compressing images to a ratio of 5.3:1 and producing high-fidelity reconstructed images.
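
    A transform-method compressor of the kind evaluated here can be sketched by keeping only the largest DCT coefficients of each block. Keeping 12 of 64 coefficients corresponds roughly to the quoted 5.3:1 ratio, though the study's actual transform and quantization details are not specified in the abstract:

```python
import numpy as np

def dct_matrix(n):
    # orthonormal DCT-II matrix
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    D = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    D[0, :] = np.sqrt(1.0 / n)
    return D

def compress_block(block, keep=12):
    """Transform an 8x8 block to the DCT domain, keep only the `keep`
    largest-magnitude coefficients (a stand-in for quantization and
    entropy coding), and reconstruct."""
    D = dct_matrix(8)
    c = D @ block @ D.T
    thresh = np.sort(np.abs(c), axis=None)[-keep]
    c[np.abs(c) < thresh] = 0.0
    return D.T @ c @ D

# a smooth gradient block survives heavy truncation almost unchanged
x = np.linspace(0.0, 1.0, 8)
rec = compress_block(np.outer(x, x))
```

    Smooth image regions concentrate their energy in a few low-frequency coefficients, which is why transform coding preserves fidelity at such ratios.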

  15. Image deblurring using the direction dependence of camera resolution

    NASA Astrophysics Data System (ADS)

    Hirai, Yukio; Yoshikawa, Hiroyasu; Shimizu, Masayoshi

    2013-03-01

    The blurring that occurs in a camera lens tends to worsen in areas away from the optical axis of the image, and the degradation of the blurred image in off-axis areas exhibits directional dependence. Conventional methods use the Wiener filter or the Richardson-Lucy algorithm to mitigate the problem. These methods use a pre-defined point spread function (PSF) in the restoration process, thereby preventing an increase in noise. However, the nonuniform, direction-dependent degradation is not improved even though these conventional methods emphasize edges. In this paper, we analyze the directional dependence of resolution based on a model of the optical system derived from a blurred image. We propose a novel image deblurring method that employs a reverse filter based on optimizing the directional dependence coefficients of the regularization term in the maximum a posteriori (MAP) algorithm. We improve the directional dependence of resolution by optimizing the weight coefficients for the direction in which the resolution is degraded.
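
    The conventional baseline mentioned above, Wiener deconvolution with a pre-defined PSF, can be sketched as follows (k is an assumed noise-to-signal power ratio; the paper's direction-dependent MAP regularization is not reproduced here):

```python
import numpy as np

def wiener_deblur(blurred, psf, k=0.01):
    """Wiener deconvolution with a pre-defined PSF. Larger k suppresses
    noise at the cost of sharpness. Uses circular convolution via the FFT,
    zero-padding the PSF to the image size."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))

# blur a test image with the same PSF, then restore it
rng = np.random.default_rng(5)
img = rng.uniform(0.0, 1.0, (16, 16))
psf = np.array([[0.05, 0.1, 0.05], [0.1, 0.4, 0.1], [0.05, 0.1, 0.05]])
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
restored = wiener_deblur(blurred, psf, k=1e-4)
```

    Because a single PSF and a single scalar k apply to the whole frame, this filter cannot adapt to the off-axis directional degradation, which motivates the direction-dependent regularization proposed in the paper.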

  16. Inflight Calibration of the Lunar Reconnaissance Orbiter Camera Wide Angle Camera

    NASA Astrophysics Data System (ADS)

    Mahanti, P.; Humm, D. C.; Robinson, M. S.; Boyd, A. K.; Stelling, R.; Sato, H.; Denevi, B. W.; Braden, S. E.; Bowman-Cisneros, E.; Brylow, S. M.; Tschimmel, M.

    2016-04-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) has acquired more than 250,000 images of the illuminated lunar surface and over 190,000 observations of space and non-illuminated Moon since 1 January 2010. These images, along with images from the Narrow Angle Camera (NAC) and other Lunar Reconnaissance Orbiter instrument datasets are enabling new discoveries about the morphology, composition, and geologic/geochemical evolution of the Moon. Characterizing the inflight WAC system performance is crucial to scientific and exploration results. Pre-launch calibration of the WAC provided a baseline characterization that was critical for early targeting and analysis. Here we present an analysis of WAC performance from the inflight data. In the course of our analysis we compare and contrast with the pre-launch performance wherever possible and quantify the uncertainty related to various components of the calibration process. We document the absolute and relative radiometric calibration, point spread function, and scattered light sources and provide estimates of sources of uncertainty for spectral reflectance measurements of the Moon across a range of imaging conditions.

  18. Experimental demonstration of a star-field identification algorithm. [integrated with intelligent imaging camera

    NASA Technical Reports Server (NTRS)

    Scholl, M. S.

    1993-01-01

    A fault-tolerant, six-feature, all-sky star-field identification algorithm has been integrated with a CCD-based imaging camera. This autonomous intelligent camera identifies in real time any star field without a priori knowledge and requires a reference catalog incorporating fewer than 1000 stars. Observatory tests on star fields with this intelligent camera are described.

  19. LROC NAC Photometry as a Tool for Studying Physical and Compositional Properties of the Lunar Surface

    NASA Astrophysics Data System (ADS)

    Clegg, R. N.; Jolliff, B. L.; Boyd, A. K.; Stopar, J. D.; Sato, H.; Robinson, M. S.; Hapke, B. W.

    2014-10-01

    LROC NAC photometry has been used to study the effects of rocket exhaust on lunar soil properties, and here we apply the same photometric methods to place compositional constraints on regions of silicic volcanism and pure anorthosite on the Moon.

  20. Color calibration of a CMOS digital camera for mobile imaging

    NASA Astrophysics Data System (ADS)

    Eliasson, Henrik

    2010-01-01

    As white balance algorithms employed in mobile phone cameras become increasingly sophisticated, using, e.g., elaborate white-point estimation methods, a proper color calibration is necessary. Without such a calibration, the estimation of the light source for a given situation may go wrong, giving rise to large color errors. At the same time, the demands for efficiency in the production environment require the calibration to be as simple as possible. Thus it is important to find the correct balance between image quality and production efficiency requirements. The purpose of this work is to investigate camera color variations using a simple model in which the sensor and IR filter are specified in detail. As input to the model, spectral data of the 24-patch Macbeth ColorChecker were used. These data were combined with the spectral irradiance of three different light sources: CIE A, D65, and F11. The sensor variations were determined from a very large population, from which 6 corner samples were picked out for further analysis. Furthermore, a set of 100 IR filters was picked out and measured. The resulting images generated by the model were then analyzed in CIELAB space and color errors were calculated using the ΔE94 metric. The results of the analysis show that the maximum deviations from the typical values are small enough to suggest that a white balance calibration is sufficient. Furthermore, it is also demonstrated that the color temperature dependence is small enough to justify the use of only one light source in a production environment.
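The ΔE94 metric used for the color-error analysis has a standard closed form over CIELAB triples, sketched below (kL, kC, kH default to 1, the graphic-arts case):

```python
import math

def delta_e94(lab1, lab2, kL=1.0, kC=1.0, kH=1.0):
    """CIE deltaE*94 color difference between two CIELAB (L*, a*, b*) triples."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    dL = L1 - L2
    C1 = math.hypot(a1, b1)          # chroma of the reference color
    C2 = math.hypot(a2, b2)
    dC = C1 - C2
    da, db = a1 - a2, b1 - b2
    # deltaH^2 is what remains of the a/b difference after removing chroma.
    dH2 = max(da*da + db*db - dC*dC, 0.0)
    # Weighting functions depend on the reference chroma C1.
    SL, SC, SH = 1.0, 1.0 + 0.045 * C1, 1.0 + 0.015 * C1
    return math.sqrt((dL / (kL*SL))**2 + (dC / (kC*SC))**2 + dH2 / (kH*SH)**2)
```

Identical colors give ΔE94 = 0, and a pure lightness difference of 1 gives ΔE94 = 1.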

  1. Arthropod eye-inspired digital camera with unique imaging characteristics

    NASA Astrophysics Data System (ADS)

    Xiao, Jianliang; Song, Young Min; Xie, Yizhu; Malyarchuk, Viktor; Jung, Inhwa; Choi, Ki-Joong; Liu, Zhuangjian; Park, Hyunsung; Lu, Chaofeng; Kim, Rak-Hwan; Li, Rui; Crozier, Kenneth B.; Huang, Yonggang; Rogers, John A.

    2014-06-01

    In nature, arthropods have a remarkably sophisticated class of imaging systems, with a hemispherical geometry, a wide-angle field of view, low aberrations, high acuity to motion and an infinite depth of field. There is great interest in building systems with similar geometries and properties due to numerous potential applications. However, the established semiconductor sensor technologies and optics are essentially planar, which poses great challenges in building such systems with hemispherical, compound apposition layouts. With the recent advancement of stretchable optoelectronics, we have successfully developed strategies to build a fully functional artificial apposition compound eye camera by combining optics, materials and mechanics principles. The strategies start with fabricating stretchable arrays of thin silicon photodetectors and elastomeric optical elements in planar geometries, which are then precisely aligned and integrated, and elastically transformed into hemispherical shapes. This imaging device demonstrates a nearly full hemispherical shape (about 160 degrees), with densely packed artificial ommatidia. The number of ommatidia (180) is comparable to those of the eyes of fire ants and bark beetles. We have illustrated key features of the operation of compound eyes through experimental imaging results and quantitative ray-tracing-based simulations. The general strategies shown in this development could be applicable to other compound eye devices, such as those inspired by moths and lacewings (refracting superposition eyes), lobster and shrimp (reflecting superposition eyes), and houseflies (neural superposition eyes).

  2. Depth and focused image recovery from defocused images for cameras operating in macro mode

    NASA Astrophysics Data System (ADS)

    Tu, Xue; Kang, Youn-sik; Subbarao, Murali

    2007-09-01

    Depth From Defocus (DFD) is a depth recovery method that needs only two defocused images recorded with different camera settings. In practice, this technique is found to have good accuracy for cameras operating in normal mode. In this paper, we present new algorithms that extend the DFD method to cameras working in macro mode, used for very close objects in a distance range of 5 cm to 20 cm. We adopted a new lens position setting suitable for macro mode to avoid serious blurring. We also developed a new calibration algorithm to normalize the magnification of images captured with different lens positions. In some range intervals with high error sensitivity, we used an additional image to reduce the error caused by drastic changes in lens settings. After finding the object depth, we used the corresponding blur parameter to compute the focused image through image restoration, termed "soft-focusing". Experimental results on a high-end digital camera show that the new algorithms significantly improve the accuracy of DFD in macro mode. In terms of focusing accuracy, the RMS error is about 15 lens steps out of 1500 steps, which is around 1%.
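The blur-parameter estimation at the heart of DFD can be sketched with the classic spectral-ratio relation for Gaussian PSFs; this is a generic textbook illustration, not the authors' macro-mode algorithm, and the frequency cutoff is an arbitrary assumption:

```python
import numpy as np

def relative_blur_from_pair(img1, img2):
    """Estimate sigma1^2 - sigma2^2 between two defocused images.

    For Gaussian PSFs the power-spectrum ratio is
    exp(-(s1^2 - s2^2) * w^2), so a least-squares line fit of the
    log-ratio against w^2 recovers the blur difference, which a
    calibration table then maps to depth.
    """
    F1 = np.abs(np.fft.fft2(img1)) ** 2
    F2 = np.abs(np.fft.fft2(img2)) ** 2
    H, W = img1.shape
    fy = np.fft.fftfreq(H)[:, None] * 2 * np.pi
    fx = np.fft.fftfreq(W)[None, :] * 2 * np.pi
    w2 = fx**2 + fy**2
    # Keep low-to-mid frequencies, where signal dominates noise.
    mask = (w2 > 0) & (w2 < 1.0) & (F2 > 1e-9)
    ratio = np.log(F1[mask] / F2[mask])
    # ln(F1/F2) = -(s1^2 - s2^2) * w2  =>  the slope gives the difference.
    slope = np.sum(w2[mask] * ratio) / np.sum(w2[mask] ** 2)
    return -slope
```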

  3. Modeling camera orientation and 3D structure from a sequence of images taken by a perambulating commercial video camera

    NASA Astrophysics Data System (ADS)

    M-Rouhani, Behrouz; Anderson, James A. D. W.

    1997-04-01

    In this paper we report the degree of reliability of image sequences taken by off-the-shelf TV cameras for modeling camera rotation and reconstructing 3D structure using computer vision techniques. This is done in spite of the fact that computer vision systems usually use imaging devices that are specifically designed for human vision. Our scenario consists of a static scene and a mobile camera moving through the scene. The scene is any long axial building dominated by features along the three principal orientations and with at least one wall containing prominent repetitive planar features such as doors, windows, bricks, etc. The camera is an ordinary commercial camcorder moving along the long axis of the scene and is allowed to rotate freely within the range +/- 10 degrees in all directions. This makes it possible for the camera to be held by a walking non-professional cameraman with normal gait, or to be mounted on a mobile robot. The system has been tested successfully on sequences of images of a variety of structured, but fairly cluttered, scenes taken by different walking cameramen. The potential application areas of the system include medicine, robotics and photogrammetry.

  4. Coded-aperture Compton camera for gamma-ray imaging

    NASA Astrophysics Data System (ADS)

    Farber, Aaron M.

    This dissertation describes the development of a novel gamma-ray imaging system concept and presents results from Monte Carlo simulations of the new design. Current designs for large field-of-view gamma cameras suitable for homeland security applications implement either a coded aperture or a Compton scattering geometry to image a gamma-ray source. Both of these systems require large, expensive position-sensitive detectors in order to work effectively. By combining characteristics of both of these systems, a new design can be implemented that does not require such expensive detectors and that can be scaled down to a portable size. This new system has significant promise in homeland security, astronomy, botany and other fields, while future iterations may prove useful in medical imaging, other biological sciences and other areas, such as non-destructive testing. A proof-of-principle study of the new gamma-ray imaging system has been performed by Monte Carlo simulation. Various reconstruction methods have been explored and compared. General-Purpose Graphics-Processor-Unit (GPGPU) computation has also been incorporated. The resulting code is a primary design tool for exploring variables such as detector spacing, material selection and thickness and pixel geometry. The advancement of the system from a simple 1-dimensional simulation to a full 3-dimensional model is described. Methods of image reconstruction are discussed and results of simulations consisting of both a 4 x 4 and a 16 x 16 object space mesh have been presented. A discussion of the limitations and potential areas of further study is also presented.

  5. Solid state television camera has no imaging tube

    NASA Technical Reports Server (NTRS)

    Huggins, C. T.

    1972-01-01

    Camera with characteristics of vidicon camera and greater resolution than home TV receiver uses mosaic of phototransistors. Because of low power and small size, camera has many applications. Mosaics can be used as cathode ray tubes and analog-to-digital converters.

  6. A unified framework for capturing facial images in video surveillance systems using cooperative camera system

    NASA Astrophysics Data System (ADS)

    Chan, Fai; Moon, Yiu-Sang; Chen, Jiansheng; Ma, Yiu-Kwan; Tsang, Wai-Hung; Fu, Kah-Kuen

    2008-04-01

    Low-resolution, unsharp facial images are typically captured from surveillance videos because of long human-camera distances and human movement. Previous works addressed this problem by using an active camera to capture close-up facial images, without considering human movement and the mechanical delays of the active camera. In this paper, we propose a unified framework to capture facial images in video surveillance systems by using one static camera and one active camera in a cooperative manner. Human faces are first located by a skin-color based real-time face detection algorithm. A stereo camera model is also employed to approximate the location of a human face and its velocity with respect to the active camera. Given the mechanical delays of the active camera, the position of a target face after a given delay can be estimated using a Human-Camera Synchronization Model. By controlling the active camera with the corresponding amount of pan, tilt, and zoom, a clear close-up facial image of a moving human can then be captured. We built the proposed system in an 8.4-meter indoor corridor. Results show that the proposed stereo camera configuration can locate faces with an average error of 3%. In addition, it is capable of capturing facial images of a walking human clearly at the first attempt in 90% of the test cases.
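The synchronization idea, aiming the active camera at where the subject will be after the camera's mechanical delay, can be sketched as below; the coordinate convention and function names are assumptions, since the abstract gives no equations:

```python
import math

def predict_pan_tilt(face_xyz, velocity_xyz, delay_s):
    """Pan/tilt angles (degrees) to aim an active camera at the
    predicted position of a moving face after a mechanical delay.

    Coordinates are metres in the active camera's frame (x right,
    y up, z forward): a hypothetical convention for illustration.
    """
    # Linear motion prediction over the delay.
    x, y, z = (p + v * delay_s for p, v in zip(face_xyz, velocity_xyz))
    pan = math.degrees(math.atan2(x, z))                   # left/right
    tilt = math.degrees(math.atan2(y, math.hypot(x, z)))   # up/down
    return pan, tilt
```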

  7. Extreme ultra-violet movie camera for imaging microsecond time scale magnetic reconnection

    SciTech Connect

    Chai, Kil-Byoung; Bellan, Paul M.

    2013-12-15

    An ultra-fast extreme ultra-violet (EUV) movie camera has been developed for imaging magnetic reconnection in the Caltech spheromak/astrophysical jet experiment. The camera consists of a broadband Mo:Si multilayer mirror, a fast-decaying YAG:Ce scintillator, a visible light block, and a high-speed visible light CCD camera. The camera can capture EUV images as fast as 3.3 × 10^6 frames per second with 0.5 cm spatial resolution. The spectral range is from 20 eV to 60 eV. EUV images reveal strong, transient, highly localized bursts of EUV radiation when magnetic reconnection occurs.

  8. Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy

    NASA Technical Reports Server (NTRS)

    Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers ("CTISs") having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3® digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.

  9. A CCD CAMERA-BASED HYPERSPECTRAL IMAGING SYSTEM FOR STATIONARY AND AIRBORNE APPLICATIONS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper describes a charge coupled device (CCD) camera-based hyperspectral imaging system designed for both stationary and airborne remote sensing applications. The system consists of a high performance digital CCD camera, an imaging spectrograph, an optional focal plane scanner, and a PC computer.

  10. A low-cost dual-camera imaging system for aerial applicators

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Agricultural aircraft provide a readily available remote sensing platform as low-cost and easy-to-use consumer-grade cameras are being increasingly used for aerial imaging. In this article, we report on a dual-camera imaging system we recently assembled that can capture RGB and near-infrared (NIR) images.

  11. ATR/OTR-SY Tank Camera Purge System and in Tank Color Video Imaging System

    SciTech Connect

    Werry, S.M.

    1995-06-06

    This procedure will document the satisfactory operation of the 101-SY tank Camera Purge System (CPS) and the 101-SY in-tank Color Camera Video Imaging System (CCVIS). Included in the CPS is the nitrogen purging system safety interlock, which shuts down all the color video imaging system electronics within the 101-SY tank vapor space during loss of nitrogen purge pressure.

  12. Compressed sensing snapshot spectral imaging by a regular digital camera with an added optical diffuser.

    PubMed

    Golub, Michael A; Averbuch, Amir; Nathan, Menachem; Zheludev, Valery A; Hauser, Jonathan; Gurevitch, Shay; Malinsky, Roman; Kagan, Asaf

    2016-01-20

    We propose a spectral imaging method that allows a regular digital camera to be converted into a snapshot spectral imager by equipping the camera with a dispersive diffuser and with a compressed sensing-based algorithm for digital processing. Results of optical experiments are reported. PMID:26835914

  13. Faint Object Camera imaging and spectroscopy of NGC 4151

    NASA Technical Reports Server (NTRS)

    Boksenberg, A.; Catchpole, R. M.; Macchetto, F.; Albrecht, R.; Barbieri, C.; Blades, J. C.; Crane, P.; Deharveng, J. M.; Disney, M. J.; Jakobsen, P.

    1995-01-01

    We describe ultraviolet and optical imaging and spectroscopy within the central few arcseconds of the Seyfert galaxy NGC 4151, obtained with the Faint Object Camera on the Hubble Space Telescope. A narrowband image including (O III) lambda(5007) shows a bright nucleus centered on a complex biconical structure having an apparent opening angle of approximately 65 deg and axis at a position angle along 65 deg-245 deg; images in bands including Lyman-alpha and C IV lambda(1550), and in the optical continuum near 5500 A, show only the bright nucleus. In an off-nuclear optical long-slit spectrum we find a high and a low radial velocity component within the narrow emission lines. We identify the low-velocity component with the bright, extended, knotty structure within the cones, and the high-velocity component with more confined diffuse emission. Also present are strong continuum emission and broad Balmer emission line components, which we attribute to the extended point spread function arising from the intense nuclear emission. Adopting the geometry pointed out by Pedlar et al. (1993) to explain the observed misalignment of the radio jets and the main optical structure, we model an ionizing radiation bicone, originating within a galactic disk, with apex at the active nucleus and axis centered on the extended radio jets. We confirm that, through density bounding, the gross spatial structure of the emission line region can be reproduced with a wide opening angle that includes the line of sight, consistent with the presence of a simple opaque torus allowing a direct view of the nucleus. In particular, our modelling reproduces the observed decrease in position angle with distance from the nucleus, progressing initially from the direction of the extended radio jet, through our optical structure, and on to the extended narrow-line region. We explore the kinematics of the narrow-line low- and high-velocity components on the basis of our spectroscopy and adopted model structure.

  14. High performance imaging streak camera for the National Ignition Facility.

    PubMed

    Opachich, Y P; Kalantar, D H; MacPhee, A G; Holder, J P; Kimbrough, J R; Bell, P M; Bradley, D K; Hatch, B; Brienza-Larsen, G; Brown, C; Brown, C G; Browning, D; Charest, M; Dewald, E L; Griffin, M; Guidry, B; Haugh, M J; Hicks, D G; Homoelle, D; Lee, J J; Mackinnon, A J; Mead, A; Palmer, N; Perfect, B H; Ross, J S; Silbernagel, C; Landen, O

    2012-12-01

    An x-ray streak camera platform has been characterized and implemented for use at the National Ignition Facility. The camera has been modified to meet the experiment requirements of the National Ignition Campaign and to perform reliably in conditions that produce high electromagnetic interference. A train of temporal ultra-violet timing markers has been added to the diagnostic in order to calibrate the temporal axis of the instrument and the detector efficiency of the streak camera was improved by using a CsI photocathode. The performance of the streak camera has been characterized and is summarized in this paper. The detector efficiency and cathode measurements are also presented. PMID:23278024

  15. High Performance Imaging Streak Camera for the National Ignition Facility

    SciTech Connect

    Opachich, Y. P.; Kalantar, D.; MacPhee, A.; Holder, J.; Kimbrough, J.; Bell, P. M.; Bradley, D.; Hatch, B.; Brown, C.; Landen, O.; Perfect, B. H.; Guidry, B.; Mead, A.; Charest, M.; Palmer, N.; Homoelle, D.; Browning, D.; Silbernagel, C.; Brienza-Larsen, G.; Griffin, M.; Lee, J. J.; Haugh, M. J.

    2012-01-01

    An x-ray streak camera platform has been characterized and implemented for use at the National Ignition Facility. The camera has been modified to meet the experiment requirements of the National Ignition Campaign and to perform reliably in conditions that produce high EMI. A train of temporal UV timing markers has been added to the diagnostic in order to calibrate the temporal axis of the instrument and the detector efficiency of the streak camera was improved by using a CsI photocathode. The performance of the streak camera has been characterized and is summarized in this paper. The detector efficiency and cathode measurements are also presented.

  16. Analysis of a multiple reception model for processing images from the solid-state imaging camera

    NASA Technical Reports Server (NTRS)

    Yan, T.-Y.

    1991-01-01

    A detection model to identify the presence of Galileo optical communications from an Earth-based Transmitter (GOPEX) signal by processing multiple signal receptions extracted from the camera images is described. The model decomposes a multi-signal reception camera image into a set of images so that the location of the pixel being illuminated is known a priori and the laser can illuminate only one pixel at each reception instance. Numerical results show that if effects on the pointing error due to atmospheric refraction can be controlled to between 20 and 30 microrad, the beam divergence of the GOPEX laser should be adjusted to between 30 and 40 microrad when the spacecraft is 30 million km away from Earth. Furthermore, increasing the number of receptions for processing beyond 5 will not produce a significant detection probability advantage.
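The saturation noted in the last sentence, that going beyond about 5 receptions adds little, is what one expects when each reception offers an independent chance of detection; a simplified sketch (the actual GOPEX detection statistics are more involved):

```python
def combined_detection_probability(p_single: float, n_receptions: int) -> float:
    """Probability that at least one of n independent receptions is
    detected: 1 - (1 - p)^n. Illustrates why the marginal gain from
    extra receptions shrinks rapidly as n grows.
    """
    return 1.0 - (1.0 - p_single) ** n_receptions
```

With p = 0.5 per reception, 5 receptions already reach ~0.97, so doubling to 10 adds only a few percent.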

  17. Three-dimensional scene reconstruction using multiview images and depth camera

    NASA Astrophysics Data System (ADS)

    Um, Gi-Mun; Kim, Kang Yeon; Ahn, ChungHyun; Lee, Kwan Heng

    2005-03-01

    This paper presents a novel multi-depth-map fusion approach for 3D scene reconstruction. Traditional stereo matching techniques that estimate disparities between two images often produce inaccurate depth maps because of occlusion and homogeneous areas. On the other hand, the depth map obtained from a depth camera is globally accurate but noisy and provides a limited depth range. In order to compensate for the pros and cons of these two methods, we propose a depth map fusion method that fuses the multiple depth maps from stereo matching and the depth camera. Using a 3-view camera system that includes a depth camera for the center view, we first obtain 3-view images and a depth map from the center-view depth camera. Then we calculate camera parameters by camera calibration. Using the camera parameters, we rectify the left- and right-view images with respect to the center-view image to satisfy the well-known epipolar constraint. Using the center-view image as a reference, we obtain two depth maps by stereo matching between the center-left image pair and the center-right image pair. After preprocessing each depth map, we pick an appropriate depth value for each pixel from the processed depth maps based on depth reliability. Simulation results obtained by our proposed method showed improvements in some background regions.
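The per-pixel selection step described above ("pick an appropriate depth value ... based on depth reliability") can be sketched as follows; the reliability maps are assumed inputs here, since the abstract does not define how they are computed:

```python
import numpy as np

def fuse_depth_maps(depths, reliabilities):
    """Per-pixel pick of the most reliable depth among candidate maps
    (e.g. two stereo-matching maps and a depth-camera map).

    depths, reliabilities: lists of equally shaped (H, W) arrays.
    Returns one (H, W) depth map taking, at each pixel, the value
    from the source with the highest reliability there.
    """
    depths = np.stack(depths)          # (n, H, W)
    rel = np.stack(reliabilities)      # (n, H, W)
    best = np.argmax(rel, axis=0)      # index of most reliable source
    return np.take_along_axis(depths, best[None], axis=0)[0]
```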

  18. Quality controls for gamma cameras and PET cameras: development of a free open-source ImageJ program

    NASA Astrophysics Data System (ADS)

    Carlier, Thomas; Ferrer, Ludovic; Berruchon, Jean B.; Cuissard, Regis; Martineau, Adeline; Loonis, Pierre; Couturier, Olivier

    2005-04-01

    Acquisition data and treatments for quality controls of gamma cameras and Positron Emission Tomography (PET) cameras are commonly performed with dedicated program packages, which run only on manufacturers' computers and differ from each other, depending on the camera company and program version. The aim of this work was to develop a free open-source program (written in the JAVA language) to analyze data for quality control of gamma cameras and PET cameras. The program is based on the free application software ImageJ and can be easily loaded on any computer operating system (OS), and thus on any type of computer in every nuclear medicine department. Based on standard parameters of quality control, this program includes 1) for gamma cameras: a rotation center control (extracted from the American Association of Physicists in Medicine, AAPM, norms) and two uniformity controls (extracted from the Institute of Physics and Engineering in Medicine, IPEM, and National Electrical Manufacturers Association, NEMA, norms); 2) for PET systems: three quality controls recently defined by the French Medical Physicist Society (SFPM), i.e. spatial resolution and uniformity in a reconstructed slice, and scatter fraction. The determination of spatial resolution (thanks to the Point Spread Function, PSF, acquisition) allows computation of the Modulation Transfer Function (MTF) in both camera modalities. All the control functions are included in a toolbox which is a free ImageJ plugin and will soon be downloadable from the Internet. Besides, this program offers the possibility to save the uniformity quality control results in HTML format, and a warning can be set to automatically inform users in case of abnormal results. The architecture of the program allows users to easily add any other specific quality control program. Finally, this toolkit is an easy and robust tool to perform quality control on gamma cameras and PET cameras based on standard computation parameters; it is free, runs on any type of computer, and will soon be downloadable from the net (http://rsb.info.nih.gov/ij/plugins or http://nucleartoolkit.free.fr).
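The MTF-from-PSF computation mentioned above has a standard one-line form: the normalized magnitude of the Fourier transform of the measured PSF. A 1-D sketch (the plugin itself is ImageJ/Java; this is an illustrative Python equivalent):

```python
import numpy as np

def mtf_from_psf(psf_1d):
    """Modulation Transfer Function from a measured 1-D point spread
    function: |FFT(PSF)| normalized so that MTF(0) = 1."""
    otf = np.fft.rfft(np.asarray(psf_1d, dtype=float))
    mtf = np.abs(otf)
    return mtf / mtf[0]  # normalize the zero-frequency response to 1
```

A perfectly sharp (delta-function) PSF gives MTF = 1 at all frequencies; a wider PSF rolls off at high spatial frequencies.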

  19. Toward an image compression algorithm for the high-resolution electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    Taking pictures with a camera that uses a digital recording medium instead of film has the advantage of recording and transmitting images without the use of a darkroom or a courier. However, high-resolution images contain an enormous amount of information and strain data-storage systems. Image compression will allow multiple images to be stored in the High-Resolution Electronic Still Camera. The camera is under development at Johnson Space Center. Fidelity of the reproduced image and compression speed are of paramount importance. Lossless compression algorithms are fast and faithfully reproduce the image, but their compression ratios will be unacceptably low due to noise in the front end of the camera. Future efforts will include exploring methods that will reduce the noise in the image and increase the compression ratio.

  20. Fly's Eye camera system: optical imaging using a hexapod platform

    NASA Astrophysics Data System (ADS)

    Jaskó, Attila; Pál, András; Vida, Krisztián; Mészáros, László; Csépány, Gergely; Mező, György

    2014-07-01

    The Fly's Eye Project is a high resolution, high coverage time-domain survey in multiple optical passbands: our goal is to cover the entire visible sky above 30° horizontal altitude with a cadence of ~3 min. Imaging is going to be performed by 19 wide-field cameras mounted on a hexapod platform resembling a fly's eye. Using a hexapod developed and built by our team allows us to create a highly fault-tolerant instrument that uses the sky as a reference to define its own tracking motion. The virtual axis of the platform is automatically aligned with the Earth's rotational axis; therefore the same mechanics can be used independently of the geographical location of the device. Its enclosure makes it capable of autonomous observing and of withstanding harsh environmental conditions. We briefly introduce the electrical, mechanical and optical design concepts of the instrument and summarize our early results, focusing on sidereal tracking. Because the hexapod design makes the construction independent of the actual location, it is considerably easier to build, install and operate a network of such devices around the world.

  1. Hubble Space Telescope Planetary Camera images of R136

    NASA Technical Reports Server (NTRS)

    Campbell, Bel; Hunter, Deidre A.; Holtzman, Jon A.; Lauer, Tod R.; Shaya, Edward J.; Code, Arthur; Faber, S. M.; Groth, Edward J.; Light, Robert M.; Lynds, Roger

    1992-01-01

    Images obtained with the Planetary Camera on the HST are used here to study the stellar population of R136, the core of the 30 Doradus cluster. It is found that R136a, the brightest knot at the center of R136, is indeed a tight cluster of stars containing at least 12 components in a 1 arcsec region. Three of the stars are of the Wolf-Rayet (W-R) type. The brightest stars have luminosities consistent with their being massive O supergiants or W-R stars. The stellar mass density in R136a is at least a million times that of the solar neighborhood. In the larger region known as R136, 214 stars are detected and their magnitudes measured. A color-magnitude diagram shows a range of stars from luminous O supergiants to ZAMS B3 stars. The diagram is very similar to that of stars outside of R136. A surface brightness profile constructed from stellar photometry is best fit by a pure power law.

  2. Cloud Detection with the Earth Polychromatic Imaging Camera (EPIC)

    NASA Technical Reports Server (NTRS)

    Meyer, Kerry; Marshak, Alexander; Lyapustin, Alexei; Torres, Omar; Wang, Yugie

    2011-01-01

    The Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) would provide a unique opportunity for Earth and atmospheric research due not only to its Lagrange point sun-synchronous orbit, but also to the potential for synergistic use of spectral channels in both the UV and visible spectrum. As a prerequisite for most applications, the ability to detect the presence of clouds in a given field of view, known as cloud masking, is of utmost importance. It serves to determine both the potential for cloud contamination in clear-sky applications (e.g., land surface products and aerosol retrievals) and clear-sky contamination in cloud applications (e.g., cloud height and property retrievals). To this end, a preliminary cloud mask algorithm has been developed for EPIC that applies thresholds to reflected UV and visible radiances, as well as to reflected radiance ratios. This algorithm has been tested with simulated EPIC radiances over both land and ocean scenes, with satisfactory results. These test results, as well as algorithm sensitivity to potential instrument uncertainties, will be presented.
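A threshold-style mask of the kind described, flagging a pixel as cloudy when reflected UV or visible radiance, or a radiance ratio, exceeds a cutoff, can be sketched as follows; all threshold values here are illustrative placeholders, not EPIC's tuned values:

```python
def simple_cloud_mask(refl_uv: float, refl_vis: float,
                      uv_thresh: float = 0.4,
                      vis_thresh: float = 0.3,
                      ratio_thresh: float = 0.9) -> bool:
    """Flag a pixel as cloudy if any of three tests fires: reflected
    UV radiance, reflected visible radiance, or their ratio exceeds
    its threshold. Thresholds are hypothetical placeholders."""
    ratio = refl_uv / refl_vis if refl_vis > 0 else 0.0
    return (refl_uv > uv_thresh
            or refl_vis > vis_thresh
            or ratio > ratio_thresh)
```

A real mask would use scene-dependent thresholds (land vs. ocean) and per-channel calibration, as the abstract's sensitivity analysis implies.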

  3. Cloud detection with the Earth Polychromatic Imaging Camera (EPIC)

    NASA Astrophysics Data System (ADS)

    Meyer, K.; Marshak, A.; Lyapustin, A.; Torres, O.; Wang, Y.

    2011-12-01

    The Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) would provide a unique opportunity for Earth and atmospheric research due not only to its Lagrange point sun-synchronous orbit, but also to the potential for synergistic use of spectral channels in both the UV and visible spectrum. As a prerequisite for most applications, the ability to detect the presence of clouds in a given field of view, known as cloud masking, is of utmost importance. It serves to determine both the potential for cloud contamination in clear-sky applications (e.g., land surface products and aerosol retrievals) and clear-sky contamination in cloud applications (e.g., cloud height and property retrievals). To this end, a preliminary cloud mask algorithm has been developed for EPIC that applies thresholds to reflected UV and visible radiances, as well as to reflected radiance ratios. This algorithm has been tested with simulated EPIC radiances over both land and ocean scenes, with satisfactory results. These test results, as well as algorithm sensitivity to potential instrument uncertainties, will be presented.

  4. A new nuclear medicine scintillation camera based on image-intensifier tubes.

    PubMed

    Mulder, H; Pauwels, E K

    1976-11-01

    A large-field scintillation camera for nuclear medicine application has recently been developed by Old Delft. The system is based on a large-field image-intensifier tube preceded by a scintillator mosaic. A comparison is made with present state-of-the-art scintillation cameras in terms of modulation transfer function (MTF) and sensitivity. These parameters, which determine the performance of scintillation cameras, are not independent of each other. Therefore, a comparative evaluation should be made under well-defined and identical conditions. The new scintillation camera achieves considerable improvement in image quality. In fact, the intrinsic MTF of the new camera is rather close to unity in the spatial frequency range up to 1 line pair per centimeter (lp/cm). Further improvement would require a fundamentally new approach to gamma imaging, free of the limitations of conventional collimators (e.g., coded-aperture imaging techniques). PMID:978249

  5. High-speed camera with internal real-time image processing

    NASA Astrophysics Data System (ADS)

    Paindavoine, M.; Mosqueron, R.; Dubois, J.; Clerc, C.; Grapin, J. C.; Tomasini, F.

    2005-08-01

    High-speed video cameras are powerful tools for investigating, for instance, the dynamics of fluids or the movements of mechanical parts in manufacturing processes. In the past years, the use of CMOS sensors instead of CCDs has made possible the development of high-speed video cameras offering digital outputs, readout flexibility, and lower manufacturing costs. In this field, we designed a new fast CMOS camera with a 1280×1024 pixel resolution at 500 fps. In order to transmit only useful information from the fast images out of the camera, we studied specific algorithms such as edge detection, wavelet analysis, image compression, and object tracking. These image processing algorithms have been implemented in an FPGA embedded inside the camera. This FPGA technology allows us to process fast images in real time.
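    Edge detection is one of the in-camera reductions the abstract mentions: transmitting a binary edge map instead of full frames cuts the data rate drastically. A software sketch of a 3×3 Sobel edge detector (the paper's actual FPGA implementation is not described; this only illustrates the operation):

```python
import numpy as np

def sobel_edges(img, thresh=100.0):
    """3x3 Sobel gradient magnitude, thresholded to a binary edge map."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):          # correlate with the two 3x3 kernels
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy) > thresh

img = np.zeros((8, 8))
img[:, 4:] = 255.0              # a vertical step edge
edges = sobel_edges(img)
print(edges.sum())              # → 12 edge pixels (two columns × six rows)
```

    In hardware the same per-pixel window operation maps naturally onto an FPGA pipeline, which is what makes real-time processing at 500 fps feasible.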

  6. Development of filter exchangeable 3CCD camera for multispectral imaging acquisition

    NASA Astrophysics Data System (ADS)

    Lee, Hoyoung; Park, Soo Hyun; Kim, Moon S.; Noh, Sang Ha

    2012-05-01

    There are many methods for acquiring multispectral images, but a dynamic band-selective, area-scan multispectral camera had not been developed yet. This research focused on the development of a filter-exchangeable 3CCD camera, modified from a conventional 3CCD camera. The camera consists of an F-mount lens, an image splitter without dichroic coating, three bandpass filters, three image sensors, a filter-exchangeable frame, and an electric circuit for parallel image signal processing. In addition, firmware and application software have been developed. Remarkable improvements compared to a conventional 3CCD camera are its redesigned image splitter and filter-exchangeable frame. Computer simulation was required to visualize the ray path inside the prism when redesigning the image splitter. The dimensions of the splitter were then determined by computer simulation, with options of BK7 glass and a non-dichroic coating. These properties were considered to obtain full-wavelength rays on all film planes. The image splitter was verified with two line lasers of narrow waveband. The filter-exchangeable frame is designed so that bandpass filters can be swapped without changing the displacement of the image sensors on the film plane. The developed 3CCD camera was evaluated in an application detecting scab and bruise on Fuji apples. As a result, the filter-exchangeable 3CCD camera could provide meaningful functionality for various multispectral applications that require exchanging bandpass filters.

  7. A hybrid version of the Whipple observatory's air Cherenkov imaging camera for use in moonlight

    NASA Astrophysics Data System (ADS)

    Chantell, M. C.; Akerlof, C. W.; Badran, H. M.; Buckley, J.; Carter-Lewis, D. A.; Cawley, M. F.; Connaughton, V.; Fegan, D. J.; Fleury, P.; Gaidos, J.; Hillas, A. M.; Lamb, R. C.; Pare, E.; Rose, H. J.; Rovero, A. C.; Sarazin, X.; Sembroski, G.; Schubnell, M. S.; Urban, M.; Weekes, T. C.; Wilson, C.

    1997-02-01

    A hybrid version of the Whipple Observatory's atmospheric Cherenkov imaging camera that permits observation during periods of bright moonlight is described. The hybrid camera combines a blue-light blocking filter with the standard Whipple imaging camera to reduce sensitivity to wavelengths greater than 360 nm. Data taken with this camera are found to be free from the effects of the moonlit night sky after the application of simple off-line noise filtering. This camera has been used to successfully detect TeV gamma rays, in bright moonlight, from both the Crab Nebula and the active galactic nucleus Markarian 421 at the 4.9σ and 3.9σ levels of statistical significance, respectively. The energy threshold of the camera is estimated to be 1.1 (+0.6/-0.3) TeV from Monte Carlo simulations.

  8. Method for quantifying image quality in push-broom hyperspectral cameras

    NASA Astrophysics Data System (ADS)

    Høye, Gudrun; Løke, Trond; Fridman, Andrei

    2015-05-01

    We propose a method for measuring and quantifying image quality in push-broom hyperspectral cameras in terms of spatial misregistration caused by keystone and variations in the point spread function (PSF) across spectral channels, and image sharpness. The method is suitable for both traditional push-broom hyperspectral cameras where keystone is corrected in hardware and cameras where keystone is corrected in postprocessing, such as resampling and mixel cameras. We show how the measured camera performance can be presented graphically in an intuitive and easy-to-understand way, comprising both image sharpness and spatial misregistration in the same figure. For the misregistration, we suggest that both the mean standard deviation and the maximum value for each pixel be shown. We also suggest how the method could be expanded to quantify spectral misregistration caused by the smile effect and corresponding PSF variations. Finally, we have measured the performance of two HySpex SWIR 384 cameras using the suggested method. The method appears well suited for assessing camera quality and for comparing the performance of different hyperspectral imagers and could become the future standard for how to measure and quantify the image quality of push-broom hyperspectral cameras.
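    Keystone is a spatial misregistration: the same scene point lands at slightly different across-track positions in different spectral channels. A toy illustration of how such a sub-pixel shift can be quantified from a point-source image, using an intensity-weighted centroid (this is only a sketch of the quantity being measured, not the paper's measurement procedure):

```python
import numpy as np

def line_centroid(profile):
    """Intensity-weighted centroid of a 1-D across-track profile."""
    x = np.arange(len(profile))
    return float((x * profile).sum() / profile.sum())

def keystone_shift(band_a, band_b):
    """Spatial misregistration (in pixels) between two spectral channels,
    estimated as the centroid difference of the same point-source image."""
    return line_centroid(band_b) - line_centroid(band_a)

# The same point source, displaced by 0.2 pixel between two channels
x = np.arange(21)
a = np.exp(-0.5 * ((x - 10.0) / 1.5) ** 2)
b = np.exp(-0.5 * ((x - 10.2) / 1.5) ** 2)
print(round(keystone_shift(a, b), 3))  # ≈ 0.2
```

    Repeating such a measurement across all spectral channels and field positions yields the per-pixel misregistration statistics (mean, standard deviation, maximum) the abstract proposes to plot.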

  9. The Potential of Dual Camera Systems for Multimodal Imaging of Cardiac Electrophysiology and Metabolism

    PubMed Central

    Holcomb, Mark R.; Woods, Marcella C.; Uzelac, Ilija; Wikswo, John P.; Gilligan, Jonathan M.; Sidorov, Veniamin Y.

    2013-01-01

    Fluorescence imaging has become a common modality in cardiac electrodynamics. A single fluorescent parameter is typically measured. Given the growing emphasis on simultaneous imaging of more than one cardiac variable, we present an analysis of the potential of dual camera imaging, using as an example our straightforward dual camera system that allows simultaneous measurement of two dynamic quantities from the same region of the heart. The advantages of our system over others include an optional software camera calibration routine that eliminates the need for precise camera alignment. The system allows for rapid setup, dichroic image separation, dual-rate imaging, and high spatial resolution, and it is generally applicable to any two-camera measurement. This type of imaging system offers the potential for recording simultaneously not only transmembrane potential and intracellular calcium, two frequently measured quantities, but also other signals more directly related to myocardial metabolism, such as [K+]e, NADH, and reactive oxygen species, leading to the possibility of correlative multimodal cardiac imaging. We provide a compilation of dye and camera information critical to the design of dual camera systems and experiments. PMID:19657065

  10. Effects of frame rate and image resolution on pulse rate measured using multiple camera imaging photoplethysmography

    NASA Astrophysics Data System (ADS)

    Blackford, Ethan B.; Estepp, Justin R.

    2015-03-01

    Non-contact, imaging photoplethysmography uses cameras to facilitate measurements including pulse rate, pulse rate variability, respiration rate, and blood perfusion by measuring characteristic changes in light absorption at the skin's surface resulting from changes in blood volume in the superficial microvasculature. Several factors may affect the accuracy of the physiological measurement, including imager frame rate, resolution, compression, lighting conditions, image background, participant skin tone, and participant motion. Before this method can gain wider use outside basic research settings, its constraints and capabilities must be well understood. Recently, we presented a novel approach utilizing a synchronized, nine-camera, semicircular array backed by measurement of an electrocardiogram and fingertip reflectance photoplethysmogram. Twenty-five individuals participated in six, five-minute, controlled head motion artifact trials in front of a black and dynamic color backdrop. Increasing the input channel space for blind source separation using the camera array was effective in mitigating error from head motion artifact. Herein we present the effects of lower frame rates at 60 and 30 (reduced from 120) frames per second and reduced image resolution at 329x246 pixels (one-quarter of the original 658x492 pixel resolution) using bilinear and zero-order downsampling. This is the first time these factors have been examined for a multiple-imager array; the results align well with previous findings obtained with a single imager. Examining windowed pulse rates, there is little observable difference in mean absolute error or error distributions resulting from reduced frame rates or image resolution, thus lowering requirements for systems measuring pulse rate over sufficiently long time windows.
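    One intuition for why reduced resolution barely hurts: the raw iPPG signal is typically the spatial mean of a skin region per frame, and both zero-order (pixel-skipping) and block-averaging downsampling largely preserve that mean. A toy numerical demonstration (this is an illustration of the averaging argument, not the authors' processing pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
roi = rng.normal(128.0, 5.0, size=(64, 64))   # stand-in skin-ROI frame

# Zero-order: keep every other pixel; block mean: average each 2x2 block
zero_order = roi[::2, ::2]
block_mean = roi.reshape(32, 2, 32, 2).mean(axis=(1, 3))

# The per-frame iPPG sample is the spatial mean of the ROI; both
# quarter-resolution versions preserve it almost exactly.
print(abs(block_mean.mean() - roi.mean()) < 1e-9)  # True (exact up to rounding)
print(abs(zero_order.mean() - roi.mean()) < 1.0)   # True (small sampling error)
```

    Frame rate matters similarly little for mean pulse rate as long as the sampling rate stays well above the cardiac frequency band.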

  11. Measurement of modulation transfer function for four types of imaging elements used in fast cameras

    SciTech Connect

    Estrella, R.M.; Sammons, T.J. (Amador Valley Operations); Thomas, S.W.

    1991-01-01

    We have measured the modulation transfer function (MTF) of fiber-optic bundles (reducers), minifiers (inverting, electrostatically focused imaging tube reducers), microchannel plate image intensifiers (MCPIs), and streak tubes as part of our ongoing device evaluation program aimed at precise characterization of various imaging elements used in fast cameras. This paper describes our measurement equipment and techniques and shows plots of MTF measurements for each of the four types of fast-camera elements tested. 6 refs., 9 figs.

  12. Cloud level winds from the Venus Express Monitoring Camera imaging

    NASA Astrophysics Data System (ADS)

    Khatuntsev, I. V.; Patsaeva, M. V.; Titov, D. V.; Ignatiev, N. I.; Turin, A. V.; Limaye, S. S.; Markiewicz, W. J.; Almeida, M.; Roatsch, Th.; Moissl, R.

    2013-09-01

    Six years of continuous monitoring of Venus by the European Space Agency's Venus Express orbiter provide an opportunity to study the dynamics of the atmosphere of our neighbor planet. The Venus Monitoring Camera (VMC) on board the orbiter has acquired the longest and most complete set of ultraviolet images of Venus so far. These images enable a study of the cloud-level circulation by tracking the motion of cloud features. The highly elliptical polar orbit of Venus Express provides optimal conditions for observations of the Southern hemisphere at varying spatial resolution. The images used in the study were acquired over 2300 orbits of Venus Express, covering about 10 Venus years. Out of these, we tracked cloud features in images obtained in 127 orbits by a manual cloud tracking technique and in 576 orbits by a digital correlation method. The total number of wind vectors derived in this work is 45,600 for the manual tracking and 391,600 for the digital method. This allowed us to determine the mean circulation, its long-term and diurnal trends, orbit-to-orbit variations, and periodicities. We also present the first results of tracking features in the VMC near-IR images. In low latitudes the mean zonal wind at cloud tops (67 ± 2 km following: Rossow, W.B., Del Genio, A.T., Eichler, T. [1990]. J. Atmos. Sci. 47, 2053-2084) is about 90 m/s, with a maximum of about 100 m/s at 40-50°S. Poleward of 50°S the average zonal wind speed decreases with latitude. The corresponding atmospheric rotation period at cloud tops has a maximum of about 5 days at the equator, decreases to approximately 3 days in middle latitudes, and stays almost constant poleward of 50°S. The mean poleward meridional wind slowly increases from zero at the equator to about 10 m/s at 50°S and then decreases to zero at the pole. The error of an individual measurement is 7.5-30 m/s. Wind speeds of 70-80 m/s were derived from near-IR images at low latitudes.
The VMC observations indicate a long-term trend for the zonal wind speed at low latitudes to increase from 85 m/s at the beginning of the mission to 110 m/s by the middle of 2012. VMC UV observations also showed significant short-term variations of the mean flow. The velocity difference between consecutive orbits in the region of the mid-latitude jet could reach 30 m/s, which likely indicates vacillation of the mean flow between a jet-like regime and quasi-solid-body rotation at mid-latitudes. Fourier analysis revealed periodicities in the zonal circulation at low latitudes. Within the equatorial region, up to 35°S, the zonal wind shows an oscillation with a period of 4.1-5 days (4.83 days on average) that is close to the super-rotation period at the equator. The wave amplitude is 4-17 m/s and decreases with latitude, a feature of the Kelvin wave. The VMC observations showed a clear diurnal signature. A minimum in the zonal speed was found close to noon (11-14 h) and maxima in the morning (8-9 h) and in the evening (16-17 h). The meridional component peaks in the early afternoon (13-15 h) at around 50°S latitude. The minimum of the meridional component is located at low latitudes in the morning (8-11 h). The horizontal divergence of the mean cloud motions associated with the diurnal pattern suggests upwelling motions in the morning at low latitudes and a downwelling flow in the afternoon in the cold collar region.
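    The quoted rotation periods follow directly from the zonal wind speeds via the solid-body relation, period = 2πr·cos(latitude)/u at the cloud-top radius. A quick check of the numbers above (the cloud-top altitude of 67 km is taken from the abstract; the Venus radius is a standard value):

```python
import math

R_VENUS_KM = 6051.8       # mean planetary radius, standard value
CLOUD_TOP_KM = 67.0       # cloud-top altitude quoted in the abstract

def rotation_period_days(zonal_wind_ms, latitude_deg=0.0):
    """Atmospheric rotation period implied by a zonal wind speed at the
    cloud tops (solid-body formula, illustrative check only)."""
    r = (R_VENUS_KM + CLOUD_TOP_KM) * 1e3
    circumference = 2.0 * math.pi * r * math.cos(math.radians(latitude_deg))
    return circumference / zonal_wind_ms / 86400.0

# ~90 m/s at the equator reproduces the ~5-day period quoted above
print(round(rotation_period_days(90.0), 2))  # → 4.94
```

    The same relation shows why the period shortens toward mid-latitudes: the circumference shrinks with cos(latitude) while the wind speed peaks near 40-50°S.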

  13. 2D Hyperspectral Frame Imager Camera Data in Photogrammetric Mosaicking

    NASA Astrophysics Data System (ADS)

    Mäkeläinen, A.; Saari, H.; Hippi, I.; Sarkeala, J.; Soukkamäki, J.

    2013-08-01

    A new 2D hyperspectral frame camera system has been developed by VTT (Technical Research Center of Finland) and Rikola Ltd. It is a frame-based, very light camera with an RGB-NIR sensor, suitable for lightweight and cost-effective UAV planes. MosaicMill Ltd. has converted the camera data into a proper format for photogrammetric processing, and the camera's geometrical accuracy and stability have been evaluated to guarantee the accuracies required for end-user applications. MosaicMill Ltd. has also applied its EnsoMOSAIC technology to process hyperspectral data into orthomosaics. This article describes the main steps and results of applying a hyperspectral sensor in orthomosaicking. The most promising results, as well as challenges in agriculture and forestry, are also described.

  14. Fabry-Perot interferometry using an image-intensified rotating-mirror streak camera

    SciTech Connect

    Seitz, W.L.; Stacy, H.L.

    1983-01-01

    A Fabry-Perot velocity interferometer system is described that uses a modified rotating-mirror streak camera to record the dynamic fringe positions. A Los Alamos Model 72B rotating-mirror streak camera, equipped with a beryllium mirror, was modified to include a high-aperture (f/2.5) relay lens and a 40-mm image-intensifier tube such that the image normally formed at the film plane of the streak camera is projected onto the intensifier tube. Fringe records for thin (0.13 mm) flyers driven by a small bridgewire detonator obtained with Model C1155-01 Hamamatsu and Model 790 Imacon electronic streak cameras are compared with those obtained with the image-intensified rotating-mirror streak camera (I²RMC). Resolution comparisons indicate that the I²RMC gives better time resolution than either the Hamamatsu or the Imacon for total writing times of a few microseconds or longer.

  15. Face acquisition camera design using the NV-IPM image generation tool

    NASA Astrophysics Data System (ADS)

    Howell, Christopher L.; Choi, Hee-Sue; Reynolds, Joseph P.

    2015-05-01

    In this paper, we demonstrate the utility of the Night Vision Integrated Performance Model (NV-IPM) image generation tool by using it to create a database of face images with controlled degradations. Available face recognition algorithms can then be used to directly evaluate camera designs using these degraded images. By controlling camera effects such as blur, noise, and sampling, we can analyze algorithm performance and establish a more complete performance standard for face acquisition cameras. The ability to accurately simulate imagery and directly test with algorithms not only improves the system design process but greatly reduces development cost.
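    The controlled degradations the abstract names (blur, noise, sampling) can be mimicked in a few lines to build a similar test database from clean imagery. A minimal sketch, assuming a separable Gaussian blur, additive sensor noise, and decimation; the parameters are illustrative and this is not the NV-IPM tool itself:

```python
import numpy as np

def degrade(img, blur_sigma=1.0, noise_sigma=2.0, downsample=2, seed=0):
    """Apply controlled blur, sensor noise, and sampling to an image,
    mimicking the kinds of degradations a camera model lets a designer dial in."""
    rng = np.random.default_rng(seed)
    radius = int(3 * blur_sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / blur_sigma) ** 2)
    k /= k.sum()
    # Separable Gaussian blur: filter rows, then columns
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    noisy = blurred + rng.normal(0.0, noise_sigma, blurred.shape)
    return noisy[::downsample, ::downsample]   # zero-order sampling

img = np.full((32, 32), 100.0)
out = degrade(img)
print(out.shape)  # (16, 16)
```

    Feeding such systematically degraded face images to a recognition algorithm, as the paper does, turns algorithm accuracy into a direct function of the camera design parameters.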

  16. The advantages of using a Lucky Imaging camera for observations of microlensing events

    NASA Astrophysics Data System (ADS)

    Sajadian, Sedighe; Rahvar, Sohrab; Dominik, Martin; Hundertmark, Markus

    2016-05-01

    In this work, we study the advantages of using a Lucky Imaging camera for the observations of potential planetary microlensing events. Our aim is to reduce the blending effect and enhance exoplanet signals in binary lensing systems composed of an exoplanet and the corresponding parent star. We simulate planetary microlensing light curves based on present microlensing surveys and follow-up telescopes where one of them is equipped with a Lucky Imaging camera. This camera is used at the Danish 1.54-m follow-up telescope. Using a specific observational strategy, for an Earth-mass planet in the resonance regime, where the detection probability in crowded fields is smaller, Lucky Imaging observations improve the detection efficiency which reaches 2 per cent. Given the difficulty of detecting the signal of an Earth-mass planet in crowded-field imaging even in the resonance regime with conventional cameras, we show that Lucky Imaging can substantially improve the detection efficiency.

  17. The advantages of using a Lucky Imaging camera for observations of microlensing events

    NASA Astrophysics Data System (ADS)

    Sajadian, Sedighe; Rahvar, Sohrab; Dominik, Martin; Hundertmark, Markus

    2016-03-01

    In this work, we study the advantages of using a Lucky Imaging camera for the observations of potential planetary microlensing events. Our aim is to reduce the blending effect and enhance exoplanet signals in binary lensing systems composed of an exoplanet and the corresponding parent star. We simulate planetary microlensing light curves based on present microlensing surveys and follow-up telescopes where one of them is equipped with a Lucky Imaging camera. This camera is used at the Danish 1.54-m follow-up telescope. Using a specific observational strategy, for an Earth-mass planet in the resonance regime, where the detection probability in crowded fields is smaller, Lucky Imaging observations improve the detection efficiency, which reaches 2 per cent. Given the difficulty of detecting the signal of an Earth-mass planet in crowded-field imaging even in the resonance regime with conventional cameras, we show that Lucky Imaging can substantially improve the detection efficiency.

  18. Mass movement slope streaks imaged by the Mars Orbiter Camera

    NASA Astrophysics Data System (ADS)

    Sullivan, Robert; Thomas, Peter; Veverka, Joseph; Malin, Michael; Edgett, Kenneth S.

    2001-10-01

    Narrow, fan-shaped dark streaks on steep Martian slopes were originally observed in Viking Orbiter images, but a definitive explanation was not possible because of resolution limitations. Pictures acquired by the Mars Orbiter Camera (MOC) aboard the Mars Global Surveyor (MGS) spacecraft show innumerable examples of dark slope streaks distributed widely, but not uniformly, across the brighter equatorial regions, as well as individual details of these features that were not visible in Viking Orbiter data. Dark slope streaks (as well as much rarer bright slope streaks) represent one of the most widespread and easily recognized styles of mass movement currently affecting the Martian surface. New dark streaks have formed since Viking and even during the MGS mission, confirming earlier suppositions that higher contrast dark streaks are younger, and fade (brighten) with time. The darkest slope streaks represent ~10% contrast with surrounding slope materials. No small outcrops supplying dark material (or bright material, for bright streaks) have been found at streak apexes. Digitate downslope ends indicate slope streak formation involves a ground-hugging flow subject to deflection by minor topographic obstacles. The model we favor explains most dark slope streaks as scars from dust avalanches following oversteepening of air fall deposits. This process is analogous to terrestrial avalanches of oversteepened dry, loose snow which produce shallow avalanche scars with similar morphologies. Low angles of internal friction, typically 10-30° for terrestrial loess and clay materials, suggest that mass movement of (low-cohesion) Martian dusty air fall is possible on a wide range of gradients. Martian gravity, presumed low density of the air fall deposits, and thin (unresolved by MOC) failed layer depths imply extremely low cohesive strength at time of failure, consistent with expectations for an air fall deposit of dust particles.
As speed increases during a dust avalanche, a growing fraction of the avalanching dust particles acquires sufficient kinetic energy to be lost to the atmosphere in suspension, limiting the momentum of the descending avalanche front. The equilibrium speed, where the rate of mass lost to the atmosphere is balanced by mass continually entrained as the avalanche front descends, decreases with decreasing gradient. This mechanism explains observations from MOC images indicating slope streaks formed with little reserve kinetic energy for run-outs onto valley floors and explains why large distal deposits of displaced material are not found at downslope streak ends. The mass movement process of dark (and bright) slope streak formation through dust avalanches involves renewable sources of dust only, leaving underlying slope materials unaffected. Areas where dark and bright slope streaks currently form and fade in cycles are closely correlated with low thermal inertia and probably represent regions where dust currently is accumulating, not just residing.

  19. Imaging Asteroid 4 Vesta Using the Framing Camera

    NASA Technical Reports Server (NTRS)

    Keller, H. Uwe; Nathues, Andreas; Coradini, Angioletta; Jaumann, Ralf; Jorda, Laurent; Li, Jian-Yang; Mittlefehldt, David W.; Mottola, Stefano; Raymond, C. A.; Schroeder, Stefan E.

    2011-01-01

    The Framing Camera (FC) onboard the Dawn spacecraft serves a dual purpose. Next to its central role as a prime science instrument, it is also used for the complex navigation of the ion drive spacecraft. The CCD detector with 1024 by 1024 pixels provides the stability for a multiyear mission and meets its high requirements of photometric accuracy over the wavelength band from 400 to 1000 nm, covered by 7 band-pass filters. Vesta will be observed from 3 orbit stages with image scales of 227, 63, and 17 m/px, respectively. The mapping of Vesta's surface with medium resolution will only be completed during the exit phase, when the north pole will be illuminated. A detailed pointing strategy will cover the surface at least twice at similar phase angles to provide stereo views for reconstruction of the topography. During approach, the phase function of Vesta was determined over a range of angles not accessible from Earth. This is the first step in deriving the photometric function of the surface. Combining the topography based on stereo tie points with the photometry in an iterative procedure will disclose details of the surface morphology at considerably smaller scales than the pixel scale. The 7 color filters are well positioned to provide information on the spectral slope in the visible, the depth of the strong pyroxene absorption band, and their variability over the surface. Cross calibration with the VIR spectrometer that extends into the near IR will provide detailed maps of Vesta's surface mineralogy and physical properties. Georeferencing all these observations will result in a coherent and unique data set. During Dawn's approach and capture, FC has already demonstrated its performance. The strong variation observed by the Hubble Space Telescope can now be correlated with surface units and features. We will report on results obtained from images taken during survey mode covering the whole illuminated surface.
Vesta is a planet-like differentiated body, but its surface gravity and escape velocity are comparable to those of other asteroids and hence much smaller than those of the inner planets or

  20. An airborne multispectral imaging system based on two consumer-grade cameras for agricultural remote sensing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS) sensor with 5616 × 3744 pixels. One came...

  1. Comparison of three thermal cameras with canine hip area thermographic images.

    PubMed

    Vainionpää, Mari; Raekallio, Marja; Tuhkalainen, Elina; Hänninen, Hannele; Alhopuro, Noora; Savolainen, Maija; Junnila, Jouni; Hielm-Björkman, Anna; Snellman, Marjatta; Vainio, Outi

    2012-12-01

    The objective of this study was to compare the method of thermography by using three different resolution thermal cameras and basic software for thermographic images, separating the two persons taking the thermographic images (thermographers) from the three persons interpreting the thermographic images (interpreters). This was accomplished by studying the repeatability between thermographers and interpreters. Forty-nine client-owned dogs of 26 breeds were enrolled in the study. The thermal cameras used were of different resolutions: 80 × 80, 180 × 180, and 320 × 240 pixels. Two trained thermographers took thermographic images of the hip area in all dogs using all three cameras. A total of six thermographic images per dog were taken. The thermographic images were analyzed using appropriate computer software, FLIR QuickReport 2.1. Three trained interpreters independently evaluated the mean temperatures of hip joint areas of the six thermographic images for each dog. The repeatability between thermographers was >0.975 with the two higher-resolution cameras and 0.927 with the lowest-resolution camera. The repeatability between interpreters was >0.97 with each camera. Thus, the between-interpreter variation was small. The repeatability between thermographers and interpreters was considered high enough to encourage further studies with thermographic imaging in dogs. PMID:22785576

  2. Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System

    PubMed Central

    Lu, Yu; Wang, Keyi; Fan, Gongshu

    2016-01-01

    A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensor in the process of radiometric response calibration to eliminate the influence of the focusing effect of uniform light from an integrating sphere. The linearity range of the radiometric response, non-linearity response characteristics, sensitivity, and dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and is used to blend images so that panoramas reflect the objective luminance more faithfully. This compensates for the limitation of stitching methods that make images look realistic only through smoothing. The dynamic range limitation can be resolved by using multiple cameras that cover a large field of view instead of a single image sensor with a wide-angle lens. The dynamic range is expanded by 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second. PMID:27077857

  3. Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System.

    PubMed

    Lu, Yu; Wang, Keyi; Fan, Gongshu

    2016-01-01

    A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensor in the process of radiometric response calibration to eliminate the influence of the focusing effect of uniform light from an integrating sphere. The linearity range of the radiometric response, non-linearity response characteristics, sensitivity, and dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and is used to blend images so that panoramas reflect the objective luminance more faithfully. This compensates for the limitation of stitching methods that make images look realistic only through smoothing. The dynamic range limitation can be resolved by using multiple cameras that cover a large field of view instead of a single image sensor with a wide-angle lens. The dynamic range is expanded by 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second. PMID:27077857

  4. A Prediction Method of TV Camera Image for Space Manual-control Rendezvous and Docking

    NASA Astrophysics Data System (ADS)

    Zhen, Huang; Qing, Yang; Wenrui, Wu

    Space manual-control rendezvous and docking (RVD) is a key technology for accomplishing the RVD mission in manned space engineering, especially when the automatic control system fails. The pilot on the chase spacecraft manipulates the hand-stick using the image of the target spacecraft captured by a TV camera, from which the relative position and attitude of the chase and target spacecraft can be determined. Therefore, the size, position, brightness, and shadow of the target in the TV camera image are key to guaranteeing the success of manual-control RVD. A method of predicting the on-orbit TV camera image at different relative positions and lighting conditions during RVD is discussed. Firstly, the basic principle of capturing the image of the cross drone on the target spacecraft with the TV camera is analyzed theoretically, based on which the strategy of manual-control RVD is discussed in detail. Secondly, the relationship between the displayed size or position and the real relative distance of the chase and target spacecraft is presented, the brightness of and reflection from the target spacecraft at different lighting conditions are described, and the shadow cast on the cross drone by the chase or target spacecraft is analyzed. Thirdly, a method of predicting on-orbit TV camera images for a given orbit and lighting condition is provided, and the characteristics of the TV camera image during RVD are analyzed. Finally, the size, position, brightness, and shadow of the target spacecraft in the TV camera image at a typical orbit are simulated. Comparison of the simulated images with real images captured by the TV camera on the Shenzhou manned spaceship shows that the prediction method is reasonable.
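The size/distance relationship at the core of this abstract reduces, under a pinhole model, to the displayed target size scaling inversely with range, so the pilot can read relative distance off the image. The focal length and cross-drone span below are hypothetical stand-ins, not parameters of the actual Shenzhou TV camera.

```python
def displayed_size_px(target_m, range_m, focal_px):
    """Apparent size (pixels) of a target of physical size target_m at range_m,
    under an ideal pinhole projection."""
    return focal_px * target_m / range_m

def range_from_size(size_px, target_m, focal_px):
    """Invert the projection to recover range from the measured image size."""
    return focal_px * target_m / size_px

f = 1200.0    # assumed focal length expressed in pixels
drone = 1.0   # assumed cross-drone span in metres
s100 = displayed_size_px(drone, 100.0, f)  # apparent size at 100 m
s10 = displayed_size_px(drone, 10.0, f)    # ten times larger at 10 m
```

Closing from 100 m to 10 m grows the target tenfold in the image, which is exactly the cue the prediction method has to reproduce for each relative position.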

  5. Single camera imaging system for color and near-infrared fluorescence image guided surgery

    PubMed Central

    Chen, Zhenyue; Zhu, Nan; Pacheco, Shaun; Wang, Xia; Liang, Rongguang

    2014-01-01

    Near-infrared (NIR) fluorescence imaging systems have been developed for image guided surgery in recent years. However, current systems are typically bulky and work only when surgical light in the operating room (OR) is off. We propose a single camera imaging system that is capable of capturing NIR fluorescence and color images under normal surgical lighting illumination. Using a new RGB-NIR sensor and synchronized NIR excitation illumination, we have demonstrated that the system can acquire both color information and fluorescence signal with high sensitivity under normal surgical lighting illumination. The experimental results show that ICG sample with concentration of 0.13 μM can be detected when the excitation irradiance is 3.92 mW/cm2 at an exposure time of 10 ms. PMID:25136502

  6. Phenological research using digital image archives: how important is camera system choice?

    NASA Astrophysics Data System (ADS)

    Sonnentag, O.; Hufkens, K.; Teshera-Sterne, C.; Young, A. M.; Richardson, A. D.

    2010-12-01

    Phenological research has been improved by continuous automated monitoring of vegetation canopies using digital cameras and webcams. Most cameras used for this purpose natively capture in the red (R) - green (G) - blue (B) color space, which can be used for simple visual inspection but also for separate extraction of color information as RGB digital numbers that allow quantitative analysis of vegetation status. One overlooked aspect is the choice of an appropriate camera system. Ultimately, camera system choice together with atmospheric and illumination conditions dictates image quality (e.g., sharpness, noise and dynamic range, color accuracy and balance), and thus the usefulness of color information for phenological research. In addition, no standardized protocol exists for extracting representative RGB time series from digital image archives. In this research we compare image archives (fall 2010) obtained at a temperate broadleaf forest (Harvard Forest) with different types of digital cameras and webcams with different image sensors (i.e., CMOS vs. CCD) to assess the impact of image quality on color information for phenological research. Furthermore, we developed a protocol based on moving window quantiles to extract daily RGB time series that maximize the phenological information content of image archives. Preliminary results suggest that image quality, and thus camera system choice, is of secondary importance compared to the technique used to extract robust daily RGB time series for phenological research in temperate broadleaf forests.
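The extraction protocol mentioned above can be sketched as two steps: reduce each image to a green chromatic coordinate (GCC, a standard index in this literature), then take a moving-window quantile to suppress illumination outliers. The window length, the 90th-percentile choice, and the synthetic series are assumptions for illustration, not the paper's protocol.

```python
import numpy as np

def gcc(r, g, b):
    """Green chromatic coordinate of mean ROI digital numbers."""
    return g / (r + g + b)

def moving_quantile(x, window, q=0.9):
    """Quantile of x over a centered moving window (truncated at the edges)."""
    half = window // 2
    return np.array([np.quantile(x[max(0, i - half):i + half + 1], q)
                     for i in range(len(x))])

# Synthetic 30-day greenness series with transient dark outliers (e.g., fog).
rng = np.random.default_rng(0)
days = np.arange(30)
g = 0.40 + 0.003 * days
g[rng.integers(0, 30, 5)] -= 0.05
daily = moving_quantile(g, window=5, q=0.9)
```

Taking a high quantile rather than a mean is what makes the daily series robust to the occasional dark frame.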

  7. A small field of view camera for hybrid gamma and optical imaging

    NASA Astrophysics Data System (ADS)

    Lees, J. E.; Bugby, S. L.; Bhatia, B. S.; Jambi, L. K.; Alqahtani, M. S.; McKnight, W. R.; Ng, A. H.; Perkins, A. C.

    2014-12-01

    The development of compact low profile gamma-ray detectors has allowed the production of small field of view, hand held imaging devices for use at the patient bedside and in operating theatres. The combination of an optical and a gamma camera, in a co-aligned configuration, offers high spatial resolution multi-modal imaging giving a superimposed scintigraphic and optical image. This innovative introduction of hybrid imaging offers new possibilities for assisting surgeons in localising the site of uptake in procedures such as sentinel node detection. Recent improvements to the camera system along with results of phantom and clinical imaging are reported.
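The superimposed scintigraphic/optical display described above amounts to upsampling the low-resolution gamma image onto the optical grid and alpha-blending. The resolutions and blending weight below are assumed values, not those of the actual hybrid camera.

```python
import numpy as np

def overlay(optical, gamma, alpha=0.5):
    """Nearest-neighbour upsample the gamma image to the optical grid, then
    alpha-blend the two co-aligned images."""
    fy = optical.shape[0] // gamma.shape[0]
    fx = optical.shape[1] // gamma.shape[1]
    up = np.repeat(np.repeat(gamma, fy, axis=0), fx, axis=1)
    return (1 - alpha) * optical + alpha * up

optical = np.zeros((64, 64))      # stand-in optical frame
gamma = np.zeros((8, 8))          # much coarser gamma image
gamma[3, 4] = 1.0                 # hot spot, e.g. sentinel node uptake
fused = overlay(optical, gamma, alpha=0.5)
```

Because the cameras are co-aligned, no registration step is needed before the blend; that is the point of the hybrid design.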

  8. Development of gamma ray imaging cameras. Progress report for second year

    SciTech Connect

    Wehe, D.K.; Knoll, G.F.

    1992-05-28

    In January 1990, the Department of Energy initiated this project with the objective to develop the technology for general purpose, portable gamma ray imaging cameras useful to the nuclear industry. The ultimate goal of this R&D initiative is to develop the analog to the color television camera, where the camera would respond to gamma rays instead of visible photons. The displayed two-dimensional real-time image would indicate the geometric location of the radiation relative to the camera's orientation, while the brightness and "color" would indicate the intensity and energy of the radiation (and hence identify the emitting isotope). There is a strong motivation for developing such a device for applications within the nuclear industry, for both high- and low-level waste repositories, for environmental restoration problems, and for space and fusion applications. At present, there are no general purpose radiation cameras capable of producing spectral images for such practical applications. At the time of this writing, work on this project has been underway for almost 18 months. Substantial progress has been made in the project's two primary areas: mechanically-collimated (MCC) and electronically-collimated camera (ECC) designs. We present developments covering the mechanically-collimated design, and then discuss the efforts on the electronically-collimated camera. The renewal proposal addresses the continuing R&D efforts for the third year effort. 8 refs.

  9. New opportunities for quality enhancing of images captured by passive THz camera

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2014-10-01

    As is well known, the passive THz camera allows a concealed object to be seen without contact and poses no danger to the person being screened. Obviously, the efficiency of using a passive THz camera depends on its temperature resolution. This characteristic determines the camera's detection capabilities for concealed objects: the minimal size of the object, the maximal distance of detection, and the image quality. Computer processing of the THz image can improve the image quality many times over without any additional engineering effort, so developing modern computer codes for application to THz images is an urgent problem. Using appropriate new methods, one may expect a temperature resolution that allows a banknote in a person's pocket to be seen without any physical contact. Modern algorithms for computer processing of THz images also make it possible to see an object inside the human body via its temperature trace on the skin. This circumstance substantially enhances the opportunities for applying passive THz cameras to counterterrorism problems. We demonstrate the capabilities, achieved at the present time, for detecting both concealed objects and clothing components through computer processing of images captured by passive THz cameras manufactured by various companies. Another important result discussed in the paper is the observation of both THz radiation emitted by an incandescent lamp and an image reflected from a ceramic floor plate. We consider images produced by passive THz cameras manufactured by Microsemi Corp., ThruVision Corp., and Capital Normal University (Beijing, China). All algorithms for the computer processing of the THz images considered in this paper were developed by the Russian part of the author list. Keywords: THz wave, passive imaging camera, computer processing, security screening, concealed and forbidden objects, reflected image, hand seeing, banknote seeing, ceramic floorplate, incandescent lamp.

  10. Myocardial Perfusion Imaging with a Solid State Camera: Simulation of a Very Low Dose Imaging Protocol

    PubMed Central

    Nakazato, Ryo; Berman, Daniel S.; Hayes, Sean W.; Fish, Mathews; Padgett, Richard; Xu, Yuan; Lemley, Mark; Baavour, Rafael; Roth, Nathaniel; Slomka, Piotr J.

    2012-01-01

    High-sensitivity dedicated cardiac cameras provide an opportunity to lower injected doses for SPECT myocardial perfusion imaging (MPI), but the exact limits for lowering doses have not been determined. List-mode data acquisition allows reconstruction from various fractions of the acquired counts, simulating a gradually lower administered dose. We aimed to determine the feasibility of very low dose MPI by exploring the minimal count level in the myocardium for accurate MPI. Methods: Seventy-nine patients were studied (mean body mass index 30.0 ± 6.6, range 20.2–54.0 kg/m2) who underwent 1-day standard dose 99mTc-sestamibi exercise or adenosine rest/stress MPI for clinical indications employing a Cadmium Zinc Telluride dedicated cardiac camera. Imaging time was 14 min with 803 ± 200 MBq (21.7 ± 5.4 mCi) of 99mTc injected at stress. To simulate clinical scans with lower dose at that imaging time, we reframed the list-mode raw data to contain fractions of the counts of the original scan. Accordingly, 6 stress-equivalent datasets were reconstructed, each corresponding to a fraction of the original scan. Automated QPS/QGS software was used to quantify total perfusion deficit (TPD) and ejection fraction (EF) for all 553 datasets. The minimal acceptable count was determined based on a previous report of the repeatability of same-day, same-injection Anger camera studies. Pearson correlation coefficients and SD of differences with TPD for all scans were calculated. Results: The correlations of quantitative perfusion and function analysis were excellent for both global and regional analysis on all simulated low-count scans (all r ≥ 0.95, p < 0.0001). The minimal acceptable count was determined to be 1.0 million counts for the left ventricular region. At this count level, the SD of differences was 1.7% for TPD and 4.2% for EF. This count level would correspond to a 92.5 MBq (2.5 mCi) injected dose for the 14-min acquisition.
Conclusion: 1.0 million myocardial count images appear sufficient to maintain excellent agreement of quantitative perfusion and function parameters with those determined from 8.0 million count images. With a dedicated cardiac camera, these images could be obtained over 10 minutes with an effective radiation dose of less than 1 mSv without significant sacrifice in accuracy. PMID:23321457
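The dose-simulation idea above (reframing list-mode data to a fraction of the counts) is statistically equivalent to binomial thinning of the count image: each recorded event survives with probability equal to the count fraction, which preserves Poisson statistics. The sketch below uses synthetic counts, not the study's data.

```python
import numpy as np

def thin_counts(counts, fraction, rng):
    """Binomial-thin a count image: each event survives with prob `fraction`,
    emulating a proportionally lower injected dose at fixed imaging time."""
    return rng.binomial(counts, fraction)

rng = np.random.default_rng(42)
full = rng.poisson(8.0, size=(64, 64))      # full-dose scan, scaled down
eighth = thin_counts(full, 1.0 / 8.0, rng)  # simulate one eighth of the dose
```

Quantifying perfusion on `eighth` versus `full` is the comparison the study runs at each count fraction.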

  11. A 58 x 62 pixel Si:Ga array camera for 5 - 14 micron astronomical imaging

    NASA Technical Reports Server (NTRS)

    Gezari, D. Y.; Folz, W. C.; Woods, L. A.; Wooldridge, J. B.

    1989-01-01

    A new infrared array camera system has been successfully applied to high-background 5 - 14 micron astronomical imaging photometry observations, using a hybrid 58 x 62 pixel Si:Ga array detector. The off-axis reflective optical design, incorporating a parabolic camera mirror, circular variable filter wheel, and cold aperture stop, produces diffraction-limited images with negligible spatial distortion and minimum thermal background loading. The camera electronic system architecture is divided into three subsystems: (1) a high-speed analog front end, including a 2-channel preamp module, array address timing generator, and bias power supplies; (2) two 16-bit, 3 microsec per conversion A/D converters interfaced to an arithmetic array processor; and (3) an LSI 11/73 camera control and data analysis computer. The background-limited observational noise performance of the camera at the NASA/IRTF telescope is NEFD (1 sigma) = 0.05 Jy/pixel min^1/2.

  12. A New Lunar Atlas: Mapping the Moon with the Wide Angle Camera

    NASA Astrophysics Data System (ADS)

    Speyerer, E.; Robinson, M. S.; Boyd, A.; Sato, H.

    2012-12-01

    The Lunar Reconnaissance Orbiter (LRO) spacecraft launched in June 2009 and began systematically mapping the lunar surface, providing a priceless dataset for the planetary science community and future mission planners. From 20 September 2009 to 11 December 2011, the spacecraft was in a nominal 50 km polar orbit, except for two one-month-long periods when a series of spacecraft maneuvers enabled low-altitude flyovers (as low as 22 km) of key exploration and scientifically interesting targets. One of the instruments, the Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) [1], captured nearly continuous synoptic views of the illuminated lunar surface. The WAC is a 7-band (321, 360, 415, 566, 604, 643, 689 nm) push-frame imager with a field of view of 60° in color mode and 90° in monochrome mode. This broad field of view enables the WAC to reimage nearly 50% (at the equator, where the orbit tracks are spaced furthest apart) of the terrain it imaged in the previous orbit. The visible bands of map-projected WAC images have a pixel scale of 100 m, while the UV bands have a pixel scale of 400 m due to 4x4 pixel on-chip binning that increases signal-to-noise. The nearly circular polar orbit and short (two-hour) orbital periods enable seamless mosaics of broad areas of the surface with uniform lighting and resolution. In March of 2011, the LROC team released the first version of the global monochrome (643 nm) morphologic map [2], which was comprised of 15,000 WAC images collected over three periods. With the over 130,000 WAC images collected while the spacecraft was in the 50 km orbit, a new set of mosaics is being produced by the LROC team and will be released to the Planetary Data System. These new maps include an updated morphologic map with an improved set of images (limiting illumination variations and gores due to off-nadir observations by other instruments) and a new photometric correction derived from the LROC WAC dataset.
In addition, a higher-sun (lower incidence angle) mosaic will also be released; this map has minimal shadows and highlights albedo differences. Seamless regional WAC mosaics acquired under multiple lighting geometries (sunlight coming from the east, overhead, and west) will also be produced for key areas of interest. These new maps use the latest terrain model (LROC WAC GLD100) [3], updated spacecraft ephemeris provided by the LOLA team [4], and an improved WAC distortion model [5] to provide accurate placement of each WAC pixel on the lunar surface. References: [1] Robinson et al. (2010) Space Sci. Rev. [2] Speyerer et al. (2011) LPSC, #2387. [3] Scholten et al. (2012) JGR. [4] Mazarico et al. (2012) J. of Geodesy. [5] Speyerer et al. (2012) ISPRS Congress.
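The repeat-coverage claim above can be checked with a back-of-envelope calculation: the cross-track swath follows from altitude and field of view, and successive equatorial ground tracks are separated by the Moon's rotation during one orbit. All numbers below are rounded assumptions, not LROC operational values.

```python
import math

def swath_km(altitude_km, fov_deg):
    """Cross-track swath of a nadir-pointed imager from a flat-surface model."""
    return 2.0 * altitude_km * math.tan(math.radians(fov_deg / 2.0))

def track_spacing_km(moon_radius_km, sidereal_day_h, orbit_period_h):
    """Equatorial shift of the ground track between consecutive orbits."""
    return 2.0 * math.pi * moon_radius_km * orbit_period_h / sidereal_day_h

swath = swath_km(50.0, 60.0)                    # color-mode FOV at 50 km
spacing = track_spacing_km(1737.4, 655.7, 2.0)  # ~27.3-day sidereal rotation
overlap_fraction = 1.0 - spacing / swath
```

With these rounded inputs the orbit-to-orbit overlap comes out a little above 40%, consistent with the abstract's "nearly 50%" at the equator.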

  13. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    NASA Astrophysics Data System (ADS)

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious: they require acquisition of multiple images of a target pattern in its entirety to produce a satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Perceive3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is a promising addition to our augmented reality visualization system for laparoscopic surgery.

  14. Application of spatial frequency response as a criterion for evaluating thermal imaging camera performance

    NASA Astrophysics Data System (ADS)

    Lock, Andrew; Amon, Francine

    2008-04-01

    Police, firefighters, and emergency medical personnel are examples of first responders that are utilizing thermal imaging cameras in a very practical way every day. However, few performance metrics have been developed to assist first responders in evaluating the performance of thermal imaging technology. This paper describes one possible metric for evaluating spatial resolution using an application of Spatial Frequency Response (SFR) calculations for thermal imaging. According to ISO 12233, the SFR is defined as the integrated area below the Modulation Transfer Function (MTF) curve derived from the discrete Fourier transform of a camera image representing a knife-edge target. This concept is modified slightly for use as a quantitative analysis of the camera's performance by integrating the area between the MTF curve and the camera's characteristic nonuniformity, or noise floor, determined at room temperature. The resulting value, which is termed the Effective SFR, can then be compared with a spatial resolution value obtained from human perception testing of task specific situations to determine the acceptability of the performance of thermal imaging cameras. The testing procedures described herein are being developed as part of a suite of tests for possible inclusion into a performance standard on thermal imaging cameras for first responders.
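The Effective SFR metric described above can be sketched in a few lines: differentiate a knife-edge profile to get the line spread function, Fourier-transform it to an MTF, and integrate the area between the MTF and an assumed flat noise floor. The synthetic edges and the 0.05 floor are illustrative, not the paper's measured data.

```python
import numpy as np

def effective_sfr(edge_profile, noise_floor):
    """Area between the MTF (derived from a knife-edge profile) and an assumed
    flat noise floor; a larger value indicates a sharper camera."""
    lsf = np.diff(edge_profile)          # line spread function
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                        # normalize so MTF(0) = 1
    return np.clip(mtf - noise_floor, 0.0, None).sum()

x = np.linspace(-5, 5, 101)
sharp_edge = (x > 0).astype(float)               # ideal knife edge
blurred_edge = 0.5 * (1.0 + np.tanh(x / 2.0))    # defocused edge
sfr_sharp = effective_sfr(sharp_edge, noise_floor=0.05)
sfr_blurred = effective_sfr(blurred_edge, noise_floor=0.05)
```

Subtracting the noise floor before integrating is what distinguishes the Effective SFR from a plain area-under-MTF figure: a noisy camera gets no credit for response it cannot distinguish from its own noise.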

  15. Digital-image capture system for the IR camera used in Alcator C-Mod

    SciTech Connect

    Maqueda, R. J.; Wurden, G. A.; Terry, J. L.; Gaffke, J.

    2001-01-01

    An infrared imaging system, based on an Amber Radiance 1 infrared camera, is used at Alcator C-Mod to measure the surface temperatures in the lower divertor region. Due to the supra-linear dependence of the thermal radiation with temperature it is important to make use of the 12-bit digitization of the focal plane array of the Amber camera and not be limited by the 8 bits inherent to the video signal. It is also necessary for the image capture device (i.e., fast computer) to be removed from the high magnetic field environment surrounding the experiment. Finally, the coupling between the digital camera output and the capture device should be nonconductive for isolation purposes (i.e., optical coupling). A digital video remote camera interface (RCI) coupled to a PCI bus fiber optic interface board is used to accomplish this task. Using this PCI-RCI system, the 60 Hz images from the Amber Radiance 1 camera, each composed of 256x256 pixels and 12 bits/pixel, are captured by a Windows NT computer. An electrical trigger signal is given directly to the RCI module to synchronize the image stream with the experiment. The RCI can be programmed from the host computer to work with a variety of digital cameras, including the Amber Radiance 1 camera.

  16. Digital-image capture system for the IR camera used in Alcator C-Mod

    NASA Astrophysics Data System (ADS)

    Maqueda, R. J.; Wurden, G. A.; Terry, J. L.; Gaffke, J.

    2001-01-01

    An infrared imaging system, based on an Amber Radiance 1 infrared camera, is used at Alcator C-Mod to measure the surface temperatures in the lower divertor region. Due to the supra-linear dependence of the thermal radiation with temperature it is important to make use of the 12-bit digitization of the focal plane array of the Amber camera and not be limited by the 8 bits inherent to the video signal. It is also necessary for the image capture device (i.e., fast computer) to be removed from the high magnetic field environment surrounding the experiment. Finally, the coupling between the digital camera output and the capture device should be nonconductive for isolation purposes (i.e., optical coupling). A digital video remote camera interface (RCI) coupled to a PCI bus fiber optic interface board is used to accomplish this task. Using this PCI-RCI system, the 60 Hz images from the Amber Radiance 1 camera, each composed of 256×256 pixels and 12 bits/pixel, are captured by a Windows NT computer. An electrical trigger signal is given directly to the RCI module to synchronize the image stream with the experiment. The RCI can be programmed from the host computer to work with a variety of digital cameras, including the Amber Radiance 1 camera.
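The two records above stress why the full 12-bit depth matters: thermal radiance grows steeply with temperature (approximated here as T^4 over the band), so an 8-bit digitization of the same radiance range leaves much coarser temperature steps at the hot end. The numbers below are purely illustrative, not Alcator C-Mod values.

```python
def temperature_step_K(t_kelvin, n_bits, t_min=300.0, t_max=1000.0):
    """Temperature change corresponding to one DN at t_kelvin, assuming the
    digitizer spans radiance ~ T^4 linearly between t_min and t_max."""
    full_scale = t_max**4 - t_min**4
    dn_per_radiance = (2**n_bits - 1) / full_scale
    # dRadiance/dT ~ 4 T^3, so one DN corresponds to this many kelvin:
    return 1.0 / (dn_per_radiance * 4.0 * t_kelvin**3)

step8 = temperature_step_K(900.0, 8)    # 8-bit video-signal quantization
step12 = temperature_step_K(900.0, 12)  # 12-bit focal-plane quantization
```

The 12-bit path resolves temperature steps 16 times finer than the 8-bit video signal, which is the motivation for capturing the digital camera output directly.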

  17. Fast image acquisition and processing on a TV camera-based portal imaging system.

    PubMed

    Baier, Kurt; Meyer, Jürgen

    2005-01-01

    The present paper describes the fast acquisition and processing of portal images directly from a TV camera-based portal imaging device (Siemens Beamview Plus). This approach employs not only hard- and software included in the standard package installed by the manufacturer (in particular the frame grabber card and the Matrox Intellicam interpreter software), but also a software tool developed in-house for further processing and analysis of the images. The technical details are presented, including the source code for the Matrox interpreter script that enables the image capturing process. With this method it is possible to obtain raw images directly from the frame grabber card at an acquisition rate of 15 images per second. The original configuration by the manufacturer allows the acquisition of only a few images over the course of a treatment session. The approach has a wide range of applications, such as quality assurance (QA) of the radiation beam, real-time imaging, real-time verification of intensity-modulated radiation therapy (IMRT) fields, and generation of movies of the radiation field (fluoroscopy mode). PMID:16008082

  18. An image compression algorithm for a high-resolution digital still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    The Electronic Still Camera (ESC) project will provide for the capture and transmission of high-quality images without the use of film. The image quality will be superior to video and will approach the quality of 35mm film. The camera, which will have the same general shape and handling as a 35mm camera, will be able to send images to Earth in near real-time. Images will be stored in computer memory (RAM) in removable cartridges readable by a computer. To save storage space, the image will be compressed and reconstructed at the time of viewing. Both lossless and lossy image compression algorithms are studied, described, and compared.
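The lossless-versus-lossy trade-off studied above can be illustrated with stand-ins: zlib plays the lossless coder, and coarse quantization before compression plays the lossy one. Neither is the actual ESC algorithm; the image and quantization step are assumptions.

```python
import zlib
import numpy as np

def lossless_size(img):
    """Bytes needed to store the image exactly (zlib as a stand-in coder)."""
    return len(zlib.compress(img.tobytes()))

def lossy_roundtrip(img, step):
    """Quantize to multiples of `step` (information is discarded), then
    compress the quantized image; returns (reconstruction, size in bytes)."""
    q = (img // step) * step
    return q, len(zlib.compress(q.tobytes()))

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
orig_size = lossless_size(img)
recon, lossy_size = lossy_roundtrip(img, step=16)
```

The lossy path buys a smaller file at the cost of a bounded reconstruction error (here at most step − 1 per pixel), which is the comparison the study formalizes.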

  19. Be Foil "Filter Knee Imaging" NSTX Plasma with Fast Soft X-ray Camera

    SciTech Connect

    B.C. Stratton; S. von Goeler; D. Stutman; K. Tritz; L.E. Zakharov

    2005-08-08

    A fast soft x-ray (SXR) pinhole camera has been implemented on the National Spherical Torus Experiment (NSTX). This paper presents observations and describes the Be foil Filter Knee Imaging (FKI) technique for reconstructions of a m/n=1/1 mode on NSTX. The SXR camera has a wide-angle (28°) field of view of the plasma. The camera images nearly the entire diameter of the plasma and a comparable region in the vertical direction. SXR photons pass through a beryllium foil and are imaged by a pinhole onto a P47 scintillator deposited on a fiber optic faceplate. An electrostatic image intensifier demagnifies the visible image by 6:1 to match it to the size of the charge-coupled device (CCD) chip. A pair of lenses couples the image to the CCD chip.

  20. Using focused plenoptic cameras for rich image capture.

    PubMed

    Georgiev, T; Lumsdaine, A; Chunev, G

    2011-01-01

    This approach uses a focused plenoptic camera to capture the plenoptic function's rich "non 3D" structure. It employs two techniques. The first simultaneously captures multiple exposures (or other aspects) based on a microlens array having an interleaved set of different filters. The second places multiple filters at the main lens aperture. PMID:24807971
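The first technique above (interleaved filters giving several exposures of the same scene in one shot) feeds a standard high-dynamic-range merge: divide each exposure out, then average only the unsaturated samples. The exposure ratios and scene values below are assumed for illustration.

```python
import numpy as np

def merge_exposures(images, exposures, saturation=0.95):
    """Estimate scene radiance by averaging unsaturated pixels across
    differently exposed captures, after dividing out each exposure."""
    images = np.stack(images)
    weights = (images < saturation).astype(float)   # mask clipped pixels
    radiance = images / np.asarray(exposures)[:, None]
    return (weights * radiance).sum(0) / np.maximum(weights.sum(0), 1e-9)

scene = np.array([0.05, 0.4, 3.0])       # true radiance, arbitrary units
short = np.clip(scene * 0.25, 0, 1)      # quarter exposure: nothing clips
long_ = np.clip(scene * 1.0, 0, 1)       # full exposure: brightest pixel clips
hdr = merge_exposures([short, long_], [0.25, 1.0])
```

The bright pixel that saturates in the long exposure is recovered from the short one, extending dynamic range beyond a single capture.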

  1. DEFINITION OF AIRWAY COMPOSITION WITHIN GAMMA CAMERA IMAGES

    EPA Science Inventory

    The efficacies of inhaled pharmacologic drugs in the prophylaxis and treatment of airway diseases could be improved if particles were selectively directed to appropriate sites. In the medical arena, planar gamma scintillation cameras may be employed to study factors affecting such...

  2. Temperature resolution enhancing of commercially available THz passive cameras due to computer processing of images

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.; Kuchik, Igor E.

    2014-06-01

    As is well known, applying the passive THz camera to security problems is a very promising approach. It allows a concealed object to be seen without contact and poses no danger to the person being screened. The efficiency of using a passive THz camera depends on its temperature resolution. This characteristic determines the camera's detection capabilities for concealed objects: the minimal size of the object, the maximal distance of detection, and the image detail. One probable way of enhancing image quality is computer processing. Using computer processing of THz images of objects concealed on the human body, one may improve them many times over. Consequently, the instrumental resolution of such a device may be increased without any additional engineering effort. We demonstrate new possibilities for seeing clothing details that the raw images produced by the THz cameras do not reveal. We achieve good image quality by applying various spatial filters, with the aim of demonstrating the independence of the processed images from the particular math operations used. This result demonstrates the feasibility of seeing such objects. We consider images produced by passive THz cameras manufactured by Microsemi Corp., ThruVision Corp., and Capital Normal University (Beijing, China).
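A median filter is one representative of the spatial filters this abstract applies; on a noisy frame it raises the apparent temperature resolution without touching the hardware. The synthetic frame (a warm block plus Gaussian noise) and the 3x3 kernel are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def median3(img):
    """3x3 median filter implemented with stacked circular shifts
    (edges wrap, which is acceptable for this illustration)."""
    shifts = [np.roll(np.roll(img, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return np.median(np.stack(shifts), axis=0)

rng = np.random.default_rng(7)
frame = np.zeros((40, 40))
frame[15:25, 15:25] = 1.0                    # concealed-object signature
noisy = frame + rng.normal(0, 0.3, frame.shape)
denoised = median3(noisy)
```

The residual against the clean frame shrinks markedly after filtering, which is the "instrumental resolution increased in software" effect the abstract claims.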

  3. Joint estimation of high resolution images and depth maps from light field cameras

    NASA Astrophysics Data System (ADS)

    Ohashi, Kazuki; Takahashi, Keita; Fujii, Toshiaki

    2014-03-01

    Light field cameras are attracting much attention as tools for acquiring 3D information of a scene through a single camera. The main drawback of typical lenslet-based light field cameras is their limited resolution. This limitation comes from the structure, in which a microlens array is inserted between the sensor and the main lens. The microlens array projects the 4D light field onto a single 2D image sensor at the sacrifice of resolution; the angular resolution and the positional resolution trade off under the fixed resolution of the image sensor. This fundamental trade-off remains after the raw light field image is converted to a set of sub-aperture images. The purpose of our study is to estimate a higher resolution image from low resolution sub-aperture images using a framework of super-resolution reconstruction. In this reconstruction, the sub-aperture images should be registered as accurately as possible; this registration is equivalent to depth estimation. Therefore, we propose a method in which super-resolution and depth refinement are performed alternately. Most of the processing in our method is implemented by image processing operations. We present several experimental results using a Lytro camera, in which we increased the resolution of a sub-aperture image by three times horizontally and vertically. Our method produces clearer images than the original sub-aperture images and than the case without depth refinement.
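The registration-dependent reconstruction step above can be reduced to a toy case: four low-resolution sub-aperture views with known half-pixel shifts interleave exactly onto a 2x grid (shift-and-add). In the paper the shifts come from the estimated depth and are refined alternately; here they are given, so the depth loop is omitted.

```python
import numpy as np

def shift_and_add_2x(views):
    """views: dict mapping (dy, dx) in {0,1}^2 to HxW low-res images that
    sample the high-res grid at half-pixel offsets. Interleave them back."""
    h, w = views[(0, 0)].shape
    hi = np.zeros((2 * h, 2 * w))
    for (dy, dx), v in views.items():
        hi[dy::2, dx::2] = v
    return hi

truth = np.arange(16.0).reshape(4, 4)     # synthetic high-res scene
views = {(dy, dx): truth[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)}
recovered = shift_and_add_2x(views)
```

With exact registration the high-resolution image is recovered perfectly; with mis-registered shifts it is not, which is why the method alternates super-resolution with depth refinement.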

  4. UCXp camera imaging principle and key technologies of data post-processing

    NASA Astrophysics Data System (ADS)

    Yuan, Fangyan; Li, Guoqing; Zuo, Zhengli; Liu, Jianmin; Wu, Liang; Yu, Xiaoping; Zhao, Haitao

    2014-03-01

    The large format digital aerial camera product UCXp was introduced into the Chinese market in 2008; the image consists of 17310 columns and 11310 rows with a pixel size of 6 μm. The UCXp camera has many advantages compared with cameras of the same generation, with multiple lenses exposed almost at the same time and no oblique lens. The camera has a complex imaging process, whose principle is detailed in this paper. In addition, the UCXp image post-processing method, including data pre-processing and orthophoto production, is emphasized in this article. Based on the data of new Beichuan County, this paper describes the data processing and its effects.

  5. Development of CCD Cameras for Soft X-ray Imaging at the National Ignition Facility

    SciTech Connect

    Teruya, A. T.; Palmer, N. E.; Schneider, M. B.; Bell, P. M.; Sims, G.; Toerne, K.; Rodenburg, K.; Croft, M.; Haugh, M. J.; Charest, M. R.; Romano, E. D.; Jacoby, K. D.

    2013-09-01

    The Static X-Ray Imager (SXI) is a National Ignition Facility (NIF) diagnostic that uses a CCD camera to record time-integrated X-ray images of target features such as the laser entrance hole of hohlraums. SXI has two dedicated positioners on the NIF target chamber for viewing the target from above and below, and the X-ray energies of interest are 870 eV for the “soft” channel and 3 – 5 keV for the “hard” channels. The original cameras utilize a large format back-illuminated 2048 x 2048 CCD sensor with 24 micron pixels. Since the original sensor is no longer available, an effort was recently undertaken to build replacement cameras with suitable new sensors. Three of the new cameras use a commercially available front-illuminated CCD of similar size to the original, which has adequate sensitivity for the hard X-ray channels but not for the soft. For sensitivity below 1 keV, Lawrence Livermore National Laboratory (LLNL) had additional CCDs back-thinned and converted to back-illumination for use in the other two new cameras. In this paper we describe the characteristics of the new cameras and present performance data (quantum efficiency, flat field, and dynamic range) for the front- and back-illuminated cameras, with comparisons to the original cameras.

  6. Super-resolved all-refocused image with a plenoptic camera

    NASA Astrophysics Data System (ADS)

    Wang, Xiang; Li, Lin; Hou, Guangqi

    2015-12-01

    This paper proposes an approach to producing super-resolved all-in-focus images with a plenoptic camera. A plenoptic camera is made by placing a micro-lens array between the lens and the sensor of a conventional camera; it captures both the angular and spatial information of the scene in a single shot. A sequence of digitally refocused images, each focused at a different depth, can be produced from the 4D light field captured by the camera. The number of pixels in each refocused image equals the number of micro-lenses in the array, so the limited micro-lens count yields low-resolution refocused images lacking fine detail. These lost details, mostly high-frequency information, matter most for the in-focus part of each refocused image, so we super-resolve those in-focus parts. An image segmentation method based on random walks, operating on the depth map derived from the 4D light field, separates the foreground and background in each refocused image, and a focus evaluation function determines which refocused image has the sharpest foreground and which has the sharpest background. We then apply a single-image super-resolution method based on sparse signal representation to the in-focus parts of these selected images. Finally, the super-resolved all-in-focus image is obtained by digitally merging the in-focus background and the in-focus foreground, preserving more spatial detail in the output. Our method enhances the resolution of the refocused images while super-resolving only the two images with the sharpest foreground and background.
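The digital refocusing step described above can be illustrated with a minimal shift-and-add sketch (not the paper's implementation; the light-field layout and the alpha parameterization are assumptions):

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-add refocusing of a 4D light field L[u, v, s, t].

    u, v index the angular samples (sub-aperture views behind the
    micro-lenses), s, t the spatial samples. alpha sets the virtual
    focal depth: each sub-aperture image is shifted in proportion to
    its angular offset and the results are averaged.
    """
    U, V, S, T = light_field.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # integer shift proportional to angular offset from the center view
            du = int(round((u - U // 2) * (1 - 1 / alpha)))
            dv = int(round((v - V // 2) * (1 - 1 / alpha)))
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

# A uniform light field refocuses to the same uniform image at any depth.
lf = np.ones((3, 3, 8, 8))
assert np.allclose(refocus(lf, 2.0), 1.0)
```

Note that the output resolution equals the spatial (micro-lens) resolution, which is exactly the limitation the paper's super-resolution step addresses.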

  7. Matching Images to Models: Camera Calibration for 3-D Surface Reconstruction

    NASA Technical Reports Server (NTRS)

    Morris, Robin D.; Smelyanskiy, Vadim N.; Cheeseman, Peter C.; Norvig, Peter (Technical Monitor)

    2001-01-01

    In a previous paper we described a system which recursively recovers a super-resolved three dimensional surface model from a set of images of the surface. In that paper we assumed that the camera calibration for each image was known. In this paper we solve two problems. Firstly, if an estimate of the surface is already known, the problem is to calibrate a new image relative to the existing surface model. Secondly, if no surface estimate is available, the relative camera calibration between the images in the set must be estimated. This will allow an initial surface model to be estimated. Results of both types of estimation are given.

  8. Correction method for fisheye image based on the virtual small-field camera.

    PubMed

    Huang, Fuyu; Shen, Xueju; Wang, Qun; Zhou, Bing; Hu, Wengang; Shen, Hongbin; Li, Li

    2013-05-01

    A distortion correction method for fisheye images is proposed based on a virtual small-field (SF) camera. A correction experiment is carried out, and the proposed method is compared with the conventional global correction method. The experimental results show that the image corrected by this method satisfies the law of perspective projection and looks as if it were captured by an SF camera with its optical axis pointing at the correction center. The method eliminates center compression, edge stretching, and field loss, and image features become more distinct, which benefits subsequent target detection and information extraction. PMID:23632495
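The perspective-projection property that the corrected image satisfies can be illustrated by the radial remapping between a fisheye model and a pinhole model. This sketch assumes an equidistant fisheye (the abstract does not state which projection the lens follows) and shows only the radial part of the correction:

```python
import numpy as np

def fisheye_to_perspective(r_fisheye, f):
    """Map an equidistant-fisheye radial image distance to the radial
    distance of an ideal perspective (pinhole) camera with the same
    focal length f.

    Equidistant model:  r_fisheye = f * theta
    Pinhole model:      r_persp   = f * tan(theta)

    A full corrector applies this remapping per pixel around the
    chosen correction center.
    """
    theta = r_fisheye / f      # incidence angle recovered from the fisheye radius
    return f * np.tan(theta)   # where a pinhole camera would image the same ray

# Near the axis the two models agree; off-axis the perspective radius
# grows faster, which is why naive global correction stretches the edges.
f = 100.0
assert abs(fisheye_to_perspective(1.0, f) - 1.0) < 1e-3
assert fisheye_to_perspective(80.0, f) > 80.0
```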

  9. Suite of proposed imaging performance metrics and test methods for fire service thermal imaging cameras

    NASA Astrophysics Data System (ADS)

    Amon, Francine; Lock, Andrew; Bryner, Nelson

    2008-04-01

    The use of thermal imaging cameras (TIC) by the fire service is increasing as fire fighters become more aware of the value of these tools. The National Fire Protection Association (NFPA) is currently developing a consensus standard for design and performance requirements for TIC as used by the fire service. This standard will include performance requirements for TIC design robustness and image quality. The National Institute of Standards and Technology facilitates this process by providing recommendations for science-based performance metrics and test methods to the NFPA technical committee charged with the development of this standard. A suite of imaging performance metrics and test methods based on the harsh operating environment and limitations of use particular to the fire service has been proposed for inclusion in the standard. The performance metrics include large area contrast, effective temperature range, spatial resolution, nonuniformity, and thermal sensitivity. Test methods to measure TIC performance for these metrics are in various stages of development. An additional procedure, image recognition, has also been developed to facilitate the evaluation of TIC design robustness. The pass/fail criteria for each of these imaging performance metrics are derived from perception tests in which image contrast, brightness, noise, and spatial resolution are degraded to the point that users can no longer consistently perform tasks involving TIC due to poor image quality.

  10. Evaluation of detector material and radiation source position on Compton camera's ability for multitracer imaging.

    PubMed

    Uche, C Z; Round, W H; Cree, M J

    2012-09-01

    We present a study on the effects of detector material, radionuclide source and source position on the Compton camera aimed at realistic characterization of the camera's performance in multitracer imaging as it relates to brain imaging. The GEANT4 Monte Carlo simulation software was used to model the physics of radiation transport and interactions with matter. Silicon (Si) and germanium (Ge) detectors were evaluated for the scatterer, and cadmium zinc telluride (CZT) and cerium-doped lanthanum bromide (LaBr(3):Ce) were considered for the absorber. Image quality analyses suggest that the use of Si as the scatterer and CZT as the absorber would be preferred. Nevertheless, two simulated Compton camera models (Si/CZT and Si/LaBr(3):Ce Compton cameras) that are considered in this study demonstrated good capabilities for multitracer imaging in that four radiotracers within the nuclear medicine energy range are clearly visualized by the cameras. It is found however that beyond a range difference of about 2 cm for (113m)In and (18)F radiotracers in a brain phantom, there may be a need to rotate the Compton camera for efficient brain imaging. PMID:22829298

  11. Advanced camera image data acquisition system for Pi-of-the-Sky

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, Maciej; Kasprowicz, Grzegorz; Pozniak, Krzysztof; Romaniuk, Ryszard; Wrochna, Grzegorz

    2008-11-01

    The paper describes a new generation of high-performance, remotely controlled CCD cameras designed for astronomical applications. A completely new camera PCB was designed, manufactured, tested, and commissioned. The CCD chip is positioned differently than in the previous design, resulting in better performance of the astronomical video data acquisition system. The camera is built around a low-noise, 4-Mpixel CCD circuit by STA. Its electronics are highly parameterized, reconfigurable, and modular compared with the first-generation solution, owing to the use of open software solutions and an Altera Cyclone EP1C6 FPGA, into which new algorithms were implemented. The camera system uses the following components: a Cypress CY7C68013A microcontroller (8051 core), an Analog Devices AD9826 image processor, a Realtek RTL8169s Gigabit Ethernet interface, Atmel AT45DB642 memory, and an ARM926EJ-S-based Atmel AT91SAM9260 CPU. Software for the camera, its remote control, and image data acquisition is based entirely on open-source platforms, using the ISI image interface, the V4L2 API, the AMBA AHB bus, and the INDI protocol. The camera will be replicated in 20 units and is designed for continuous on-line, wide-angle observations of the sky in the Pi-of-the-Sky research program.

  12. In-plane displacement and strain measurements using a camera phone and digital image correlation

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2014-05-01

    In-plane displacement and strain measurements of planar objects by processing the digital images captured by a camera phone using digital image correlation (DIC) are performed in this paper. As a convenient communication tool for everyday use, the principal advantages of a camera phone are its low cost, easy accessibility, and compactness. However, when used as a two-dimensional DIC system for mechanical metrology, the assumed imaging model of a camera phone may be slightly altered during the measurement process due to camera misalignment, imperfect loading, sample deformation, and temperature variations of the camera phone, which can produce appreciable errors in the measured displacements. In order to obtain accurate DIC measurements using a camera phone, the virtual displacements caused by these issues are first identified using an unstrained compensating specimen and then corrected by means of a parametric model. The proposed technique is first verified using in-plane translation and out-of-plane translation tests. Then, it is validated through a determination of the tensile strains and elastic properties of an aluminum specimen. Results of the present study show that accurate DIC measurements can be conducted using a common camera phone provided that an adequate correction is employed.
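The core DIC operation, locating a reference subset in the deformed image by correlation, can be sketched as follows. This is an integer-pixel search with zero-normalized cross-correlation; the system in the paper additionally performs sub-pixel registration and the parametric error correction described above:

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation between two equally sized subsets."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def find_displacement(ref, cur, y, x, size, search):
    """Integer-pixel displacement of a (size x size) subset of `ref`
    with top-left corner (y, x), found by exhaustive ZNCC search in `cur`."""
    sub = ref[y:y + size, x:x + size]
    best, best_uv = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = cur[y + dy:y + dy + size, x + dx:x + dx + size]
            c = zncc(sub, cand)
            if c > best:
                best, best_uv = c, (dy, dx)
    return best_uv

rng = np.random.default_rng(0)
ref = rng.random((40, 40))
cur = np.roll(ref, (2, 3), axis=(0, 1))  # rigidly translate the image by (2, 3)
assert find_displacement(ref, cur, 10, 10, 11, 5) == (2, 3)
```

Strains are then obtained by differentiating the measured displacement field, which is why the virtual displacements discussed in the abstract must be compensated before the derivative step.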

  13. Image dynamic range test and evaluation of Gaofen-2 dual cameras

    NASA Astrophysics Data System (ADS)

    Zhang, Zhenhua; Gan, Fuping; Wei, Dandan

    2015-12-01

    In order to fully understand the dynamic range of Gaofen-2 satellite data and to support data processing, applications, and the development of future satellites, we evaluated the dynamic range by calculating statistics (maximum, minimum, mean, and standard deviation) for four images acquired simultaneously by the Gaofen-2 dual cameras over the Beijing area. The same four statistics were then calculated for each longitudinal overlap of PMS1 and PMS2 to evaluate the dynamic-range consistency within each camera, and for each latitudinal overlap to evaluate the consistency between PMS1 and PMS2. The results show that the images acquired by PMS1 and PMS2 have a wide dynamic range of DN values and contain rich ground-object information. In general the dynamic ranges agree closely, with only small differences, both within each camera and between the two cameras; the consistency among a single camera's images is better than that between the dual cameras.
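The per-image statistics used in this evaluation can be computed as in the minimal sketch below (the DN values and 10-bit depth are hypothetical):

```python
import numpy as np

def dynamic_range_stats(image):
    """Per-image statistics of DN values, as used in the dynamic-range
    evaluation: maximum, minimum, mean, and standard deviation."""
    img = np.asarray(image, dtype=np.float64)
    return {
        "max": float(img.max()),
        "min": float(img.min()),
        "mean": float(img.mean()),
        "std": float(img.std()),
    }

# Hypothetical 10-bit DN image (values in [0, 1023])
dn = np.array([[0, 512], [1023, 512]])
stats = dynamic_range_stats(dn)
assert stats["max"] == 1023 and stats["min"] == 0
assert abs(stats["mean"] - 511.75) < 1e-9
```

Consistency between two overlapping image strips is then a matter of comparing the two resulting statistics dictionaries.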

  14. Optimal geometrical configuration of a double-scattering compton camera for maximum imaging resolution and sensitivity

    NASA Astrophysics Data System (ADS)

    Seo, Hee; Lee, Se Hyung; Kim, Chan Hyeong; An, So Hyun; Lee, Ju Hahn; Lee, Chun Sik

    2008-06-01

    A novel type of Compton camera, called a double-scattering Compton imager (DOCI), is under development for nuclear medicine and molecular imaging applications. Two plane-type position-sensitive semiconductor detectors are employed as the scatterer detectors, and a 3″×3″ cylindrical NaI(Tl) scintillation detector is employed as the absorber detector. This study determined the optimal geometrical configuration of these component detectors to maximize the performance of the Compton camera in imaging resolution and sensitivity. To that end, the Compton camera was simulated very realistically, with the GEANT4 detector simulation toolkit, including various detector characteristics such as energy resolution, spatial resolution, energy discrimination, and Doppler energy broadening. According to our simulation results, the Compton camera is expected to show its maximum performance when the two scatterer detectors are positioned in parallel, with ˜8 cm of separation. The Compton camera will show the maximum performance also when the gamma-ray energy is about 500 keV, which suggests that the Compton camera is a suitable device to image the distribution of the positron emission tomography (PET) isotopes in the human body.
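The event reconstruction underlying any Compton camera, including the DOCI, infers a cone opening angle from the energies deposited in the scatterer and absorber detectors. A minimal sketch of the standard Compton kinematics (not code from the study) is:

```python
import math

ME_C2 = 510.999  # electron rest energy, keV

def compton_cone_angle(e_deposit_scatter, e_deposit_absorb):
    """Opening angle (radians) of the Compton cone reconstructed from
    the energies deposited in the scatterer and absorber, assuming the
    photon is fully absorbed, so E0 = E1 + E2:

        cos(theta) = 1 - me*c^2 * (1/E2 - 1/E0)
    """
    e0 = e_deposit_scatter + e_deposit_absorb   # incident photon energy
    e2 = e_deposit_absorb                       # scattered photon energy
    cos_theta = 1.0 - ME_C2 * (1.0 / e2 - 1.0 / e0)
    return math.acos(cos_theta)

# A 511 keV photon depositing ~170 keV in the scatterer scatters by ~60 degrees.
theta = compton_cone_angle(170.4, 511.0 - 170.4)
assert abs(math.degrees(theta) - 60.0) < 0.5
```

The 511 keV example is the PET-isotope case the abstract highlights; the source position is constrained to the surface of the reconstructed cone, and intersecting many cones yields the image.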

  15. Hybrid Compton camera/coded aperture imaging system

    SciTech Connect

    Mihailescu, Lucian; Vetter, Kai M.

    2012-04-10

    A system in one embodiment includes an array of radiation detectors; and an array of imagers positioned behind the array of detectors relative to an expected trajectory of incoming radiation. A method in another embodiment includes detecting incoming radiation with an array of radiation detectors; detecting the incoming radiation with an array of imagers positioned behind the array of detectors relative to a trajectory of the incoming radiation; and performing at least one of Compton imaging using at least the imagers and coded aperture imaging using at least the imagers. A method in yet another embodiment includes detecting incoming radiation with an array of imagers positioned behind an array of detectors relative to a trajectory of the incoming radiation; and performing Compton imaging using at least the imagers.

  16. Exploring the feasibility of iris recognition for visible spectrum iris images obtained using smartphone camera

    NASA Astrophysics Data System (ADS)

    Trokielewicz, Mateusz; Bartuzi, Ewelina; Michowska, Katarzyna; Andrzejewska, Antonina; Selegrat, Monika

    2015-09-01

    In the age of modern, hyperconnected society that increasingly relies on mobile devices and solutions, implementing a reliable and accurate biometric system employing iris recognition presents new challenges. Typical biometric systems employing iris analysis require expensive and complicated hardware. We therefore explore an alternative way using visible spectrum iris imaging. This paper aims at answering several questions related to applying iris biometrics for images obtained in the visible spectrum using smartphone camera. Can irides be successfully and effortlessly imaged using a smartphone's built-in camera? Can existing iris recognition methods perform well when presented with such images? The main advantage of using near-infrared (NIR) illumination in dedicated iris recognition cameras is good performance almost independent of the iris color and pigmentation. Are the images obtained from smartphone's camera of sufficient quality even for the dark irides? We present experiments incorporating simple image preprocessing to find the best visibility of iris texture, followed by a performance study to assess whether iris recognition methods originally aimed at NIR iris images perform well with visible light images. To the best of our knowledge this is the first comprehensive analysis of iris recognition performance using a database of high-quality images collected in visible light using the smartphone's flashlight together with the application of commercial off-the-shelf (COTS) iris recognition methods.

  17. High-frame-rate intensified fast optically shuttered TV cameras with selected imaging applications

    SciTech Connect

    Yates, G.J.; King, N.S.P.

    1994-08-01

    This invited paper focuses on high-speed electronic/electro-optic camera development by the Applied Physics Experiments and Imaging Measurements Group (P-15) of Los Alamos National Laboratory's Physics Division over the last two decades. The evolution of TV and image intensifier sensors and of fast-readout, fast-shuttered cameras is discussed, along with their use in nuclear, military, and medical imaging applications. Several salient characteristics and anomalies associated with single-pulse and high-repetition-rate performance of the cameras/sensors are included from earlier studies to emphasize their effects on the radiometric accuracy of electronic framing cameras. The Group's test and evaluation capabilities for characterizing imaging-type electro-optic sensors and sensor components, including focal plane arrays, gated image intensifiers, microchannel plates, and phosphors, are discussed. Two new unique facilities, the High Speed Solid State Imager Test Station (HSTS) and the Electron Gun Vacuum Test Chamber (EGTC), are described. A summary of the Group's current and developmental camera designs and R&D initiatives is included.

  18. ``Calibration-on-the-spot'': How to calibrate an EMCCD camera from its images

    NASA Astrophysics Data System (ADS)

    Mortensen, Kim I.; Flyvbjerg, Henrik

    In localization-based microscopy, super-resolution is obtained by analyzing isolated diffraction-limited spots imaged, typically, with EMCCD cameras. To compare experiments and calculate localization precision, the photon-to-signal amplification factor is needed, but it is unknown without a calibration of the camera. Here we show how this calibration can be done post festum from just a recorded image. We demonstrate this (i) theoretically and mathematically, (ii) by analyzing images recorded with an EMCCD camera, and (iii) by analyzing simulated EMCCD images for which the true parameter values are known. In summary, our method of calibration-on-the-spot allows calibration of a camera with unknown settings from old images on file, with no other information needed. Consequently, calibration-on-the-spot also makes future camera calibrations before and after measurements unnecessary, because the calibration is encoded in the images recorded during the measurement itself and can be decoded at any later time. This work was supported by the Lundbeck Foundation and the Danish Council for Strategic Research Grant No. 10-092322 (PolyNano).
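For context, the classical way to measure a camera's amplification factor is the photon-transfer method sketched below, which needs a stack of frames of a static scene; calibration-on-the-spot instead extracts the factor from a single recorded image, which this conventional-camera sketch does not reproduce:

```python
import numpy as np

def photon_transfer_gain(frames, offset=0.0):
    """Photon-transfer estimate of camera gain (ADU per photoelectron)
    from a stack of frames of a static, uniformly varying scene.

    For shot-noise-limited data, variance = gain * (mean - offset) at
    each pixel, so the gain is the slope of a variance-vs-mean fit.
    """
    mean = frames.mean(axis=0) - offset
    var = frames.var(axis=0, ddof=1)
    slope, _ = np.polyfit(mean.ravel(), var.ravel(), 1)
    return float(slope)

# Synthetic frames: Poisson photoelectrons scaled by a gain of 2.5 ADU/e-.
rng = np.random.default_rng(1)
flux = rng.uniform(50, 500, size=(64, 64))            # e- per pixel per frame
frames = 2.5 * rng.poisson(flux, size=(200, 64, 64))
assert abs(photon_transfer_gain(frames) - 2.5) < 0.2
```

For an EMCCD the electron-multiplying register adds an excess noise factor to this relation, which is part of what the paper's single-image method has to model.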

  19. Detection of Recurrent Brain Tumor. Comparison of MR Registered Camera-Based and Dedicated PET Images.

    PubMed

    Coleman, R. E.; Hawk, T. C.; Hamblen, S. M., CNMT; Laymon, C. M.; Turkington, T. G.

    1999-01-01

    The purpose of this study is to compare the image quality and results of camera-based and dedicated positron emission tomography (PET) in the same patients with suspected recurrent or persistent brain tumor after therapy. Both PET studies were interpreted using registration with contrast-enhanced magnetic resonance imaging (MRI) studies. Twenty-three patients with 24 contrast-enhancing lesions on MRI were included. Camera-based PET images were more difficult to register and resulted in less accurate automated edge determination because of image noise. Dedicated PET images demonstrated better gray matter to white matter discrimination in every patient. Camera-based PET identified tumor in 17 of 19 lesions that were abnormal for tumor by dedicated PET, and identified absence of tumor in 4 of 5 lesions considered negative for tumor by dedicated PET. Thus, despite the limitations of camera-based PET, the overall concordance of interpretation using MRI-registered images is good. PMID:14516554

  20. Design and evaluation of gamma imaging systems of Compton and hybrid cameras

    NASA Astrophysics Data System (ADS)

    Feng, Yuxin

    Systems for imaging and spectroscopy of gamma-ray emission have been widely applied in environmental and medical applications. The superior performance of LaBr3:Ce detectors establishes them as excellent candidates for gamma-ray imaging and spectroscopy. In this work, Compton cameras and hybrid cameras with a two-plane array of LaBr3:Ce detectors, one plane for the scattering and one for the absorbing detector array, were designed and investigated. The feasibility of using LaBr3:Ce in Compton cameras was evaluated with a bench-top experiment in which two LaBr3:Ce detectors were arranged to mimic a Compton camera with one scattering and eight absorbing detectors. In the hybrid system, the combination of Compton and coded-aperture imaging enables the system to cover an energy range of approximately 100 keV to a few MeV with good efficiency and angular resolution. The imaging performance of the hybrid system was evaluated via Monte Carlo simulations. Direct back-projection reconstruction was applied for instant or real-time imaging; with it the system achieves an angular resolution of approximately 0.3 radians (17°). With expectation-maximization likelihood reconstruction, the resolution improved to approximately 0.1 radians (6°). For medical applications in proton therapy, a Compton camera system to image the gamma-ray emission during treatment was designed and investigated. Gamma rays and X-rays emitted during treatment trace the energy deposition along the path of the proton beams and provide an opportunity for online dose verification. This Compton camera is designed to image gamma rays in 3D and is, besides positron emission tomography, one of the candidate approaches for imaging gamma emission during proton therapy. To meet the spatial-resolution requirement of approximately 5 mm or less needed to meaningfully verify the dose by imaging gamma rays of 511 keV to 2 MeV, position-sensing techniques with pixelated LaBr3(Ce) crystals were applied in each detector, in both the scattering and absorbing detectors. OS-EML image reconstruction algorithms were applied to obtain 3D images.

  1. Design Considerations Of A Compton Camera For Low Energy Medical Imaging

    NASA Astrophysics Data System (ADS)

    Harkness, L. J.; Boston, A. J.; Boston, H. C.; Cresswell, J. R.; Grint, A. N.; Lazarus, I.; Judson, D. S.; Nolan, P. J.; Oxley, D. C.; Simpson, J.

    2009-12-01

    Development of a Compton camera for low energy medical imaging applications is underway. The ProSPECTus project aims to utilize position sensitive detectors to generate high quality images using electronic collimation. This method has the potential to significantly increase the imaging efficiency compared with mechanically collimated SPECT systems, a highly desirable improvement on clinical systems. Design considerations encompass the geometrical optimisation and evaluation of image quality from the system which is to be built and assessed.

  2. Design Considerations Of A Compton Camera For Low Energy Medical Imaging

    SciTech Connect

    Harkness, L. J.; Boston, A. J.; Boston, H. C.; Cresswell, J. R.; Grint, A. N.; Judson, D. S.; Nolan, P. J.; Oxley, D. C.; Lazarus, I.; Simpson, J.

    2009-12-02

    Development of a Compton camera for low energy medical imaging applications is underway. The ProSPECTus project aims to utilize position sensitive detectors to generate high quality images using electronic collimation. This method has the potential to significantly increase the imaging efficiency compared with mechanically collimated SPECT systems, a highly desirable improvement on clinical systems. Design considerations encompass the geometrical optimisation and evaluation of image quality from the system which is to be built and assessed.

  3. The Nimbus image dissector camera system - An evaluation of its meteorological applications.

    NASA Technical Reports Server (NTRS)

    Sabatini, R. R.

    1971-01-01

    Brief description of the electronics and operation of the Nimbus image dissector camera system (IDCS). The geometry and distortions of the IDCS are compared to the conventional AVCS camera on board the operational ITOS and ESSA satellites. The unique scanning of the IDCS provides for little distortion of the image, making it feasible to use a strip grid for the IDCS received in real time by local APT stations. The dynamic range of the camera favors the white end (high reflectance) of the gray scale. Thus, the camera is good for detecting cloud structure and ice features through brightness changes. Examples of cloud features, ice, and snow-covered land are presented. Land features, on the other hand, show little contrast. The 2600 x 2600 km coverage by the IDCS is adequate for the early detection of weather systems which may affect the local area. An example of IDCS coverage obtained by an APT station in midlatitudes is presented.

  4. The Mast Cameras and Mars Descent Imager (MARDI) for the 2009 Mars Science Laboratory

    NASA Technical Reports Server (NTRS)

    Malin, M. C.; Bell, J. F.; Cameron, J.; Dietrich, W. E.; Edgett, K. S.; Hallet, B.; Herkenhoff, K. E.; Lemmon, M. T.; Parker, T. J.; Sullivan, R. J.

    2005-01-01

    Based on operational experience gained during the Mars Exploration Rover (MER) mission, we proposed and were selected to conduct two related imaging experiments: (1) an investigation of the geology and short-term atmospheric vertical wind profile local to the Mars Science Laboratory (MSL) landing site using descent imaging, and (2) a broadly-based scientific investigation of the MSL locale employing visible and very near infra-red imaging techniques from a pair of mast-mounted, high resolution cameras. Both instruments share a common electronics design, a design also employed for the MSL Mars Hand Lens Imager (MAHLI) [1]. The primary differences between the cameras are in the nature and number of mechanisms and specific optics tailored to each camera's requirements.

  5. Microchannel plate pinhole camera for 20 to 100 keV x-ray imaging

    SciTech Connect

    Wang, C.L.; Leipelt, G.R.; Nilson, D.G.

    1984-10-03

    We present the design and construction of a sensitive pinhole camera for imaging suprathermal x-rays. Our device is a pinhole camera consisting of four filtered pinholes and a microchannel plate electron multiplier for x-ray detection and signal amplification. We report successful imaging of 20, 45, 70, and 100 keV x-ray emission from fusion targets at our Novette laser facility. Such imaging reveals features of hot-electron transport and provides views deep inside the target.

  6. A Compton camera for spectroscopic imaging from 100 keV to 1 MeV

    SciTech Connect

    Earnhart, J.R.D.

    1998-12-31

    A review of spectroscopic imaging issues, applications, and technology is presented. Compton cameras based on solid-state semiconductor detectors stand out as the best system for the nondestructive assay of special nuclear materials. A camera for this application has been designed using an efficient specific-purpose Monte Carlo code developed for this project. Preliminary experiments have been performed which demonstrate the validity of the Compton camera concept and the accuracy of the code. Based on these results, a portable prototype system is in development. Proposed future work is addressed.

  7. Use of an infrared-imaging camera to obtain convective heating distributions.

    NASA Technical Reports Server (NTRS)

    Compton, D. L.

    1972-01-01

    The IR emission from the surface of a wind-tunnel model is determined as a function of time with the aid of an infrared-sensitive imaging camera. Prior calibration of the IR camera relates the emission to the surface temperature of the model. The time history of the surface temperature is then related to the heating rate by standard techniques. The output of the camera is recorded in analog form, digitized, and processed by a computer. In addition, real-time visual displays of the IR emissions are obtained as pictures on an oscilloscope screen.
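One standard technique for converting a surface-temperature history into a heating rate is the thin-skin (thin-wall calorimeter) approximation, sketched here with hypothetical material properties; the abstract does not specify which reduction technique was actually used:

```python
import numpy as np

def thin_skin_heat_flux(temps, times, rho, c, thickness):
    """Convective heat flux from a surface-temperature time history
    using the thin-skin approximation:

        q(t) = rho * c * thickness * dT/dt

    valid when the wall is thin enough to be at a uniform temperature
    through its depth.
    """
    dTdt = np.gradient(temps, times)
    return rho * c * thickness * dTdt

# Hypothetical case: a linear 2 K/s rise on a 1 mm steel skin
# (rho = 7800 kg/m^3, c = 500 J/kg/K) gives a constant flux.
t = np.linspace(0.0, 5.0, 11)
T = 300.0 + 2.0 * t
q = thin_skin_heat_flux(T, t, 7800.0, 500.0, 1e-3)
assert np.allclose(q, 7800.0 * 500.0 * 1e-3 * 2.0)
```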

  8. Robust extraction of image correspondences exploiting the image scene geometry and approximate camera orientation

    NASA Astrophysics Data System (ADS)

    Alsadik, B.; Remondino, F.; Menna, F.; Gerke, M.; Vosselman, G.

    2013-02-01

    Image-based modeling techniques are an important tool for producing 3D models in a practical and cost effective manner. Accurate image-based models can be created as long as one can retrieve precise image calibration and orientation information which is nowadays performed automatically in computer vision and photogrammetry. The first step for orientation is to have sufficient correspondences across the captured images. Keypoint descriptors like SIFT or SURF are a successful approach for finding these correspondences. The extraction of precise image correspondences is crucial for the subsequent image orientation and image matching steps. Indeed there are still many challenges especially with wide-baseline image configuration. After the extraction of a sufficient and reliable set of image correspondences, a bundle adjustment is used to retrieve the image orientation parameters. In this paper, a brief description of our previous work on automatic camera network design is initially reported. This semi-automatic procedure results in wide-baseline high resolution images covering an object of interest, and including approximations of image orientations, a rough 3D object geometry and a matching matrix indicating for each image its matching mates. The main part of this paper will describe the subsequent image matching where the pre-knowledge on the image orientations and the pre-created rough 3D model of the study object is exploited. Ultimately the matching information retrieved during that step will be used for a precise bundle block adjustment. Since we defined the initial image orientation in the design of the network, we can compute the matching matrix prior to image matching of high resolution images. For each image involved in several pairs that is defined in the matching matrix, we detect the corners or keypoints and then transform them into the matching images by using the designed orientation and initial 3D model. 
Moreover, a window is defined around each corner and its initial correspondence in the matching images, and SIFT or SURF matching is performed between every pair of matching windows to find homologous points. This is followed by least-squares matching (LSM) to refine the correspondences to sub-pixel localization and to reject inaccurate matches. Image matching is followed by a bundle adjustment to orient the images automatically and finally obtain a sparse 3D model. We used the commercial software Photomodeler Scanner 2010 for the bundle adjustment since it reports a number of accuracy indices necessary for evaluation. Experimental tests comparing the automated image matching of four pre-designed stereopairs show that our approach provides high accuracy and effective orientation compared with the results of commercial and open-source software that does not exploit prior knowledge about the scene.
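Predicting where a corner should appear in a matching image from the designed orientations and the rough 3D model reduces, per point, to a pinhole projection. A minimal sketch with hypothetical intrinsics and pose:

```python
import numpy as np

def project(K, R, t, X):
    """Project a 3D point X (world frame) into an image with intrinsics
    K and pose (R, t), pinhole model: x ~ K [R | t] X."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

# Hypothetical setup: identity rotation, camera shifted 1 unit along X.
K = np.array([[1000.0, 0.0, 500.0],
              [0.0, 1000.0, 500.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([-1.0, 0.0, 0.0])      # world origin seen 1 unit to the left
X = np.array([0.0, 0.0, 10.0])      # point 10 units in front of the camera

u, v = project(K, R, t, X)
# The predicted pixel seeds the search window for SIFT/SURF + LSM matching.
assert abs(u - 400.0) < 1e-9 and abs(v - 500.0) < 1e-9
```

Because the initial orientations are approximate, the matching window around (u, v) must be large enough to absorb the projection error, which is why the window-constrained descriptor matching is still needed.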

  9. A high-resolution airborne four-camera imaging system for agricultural remote sensing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper describes the design and testing of an airborne multispectral digital imaging system for remote sensing applications. The system consists of four high resolution charge coupled device (CCD) digital cameras and a ruggedized PC equipped with a frame grabber and image acquisition software. T...

  10. Constructing a Database from Multiple 2D Images for Camera Pose Estimation and Robot Localization

    NASA Technical Reports Server (NTRS)

    Wolf, Michael; Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.; Padgett, Curtis W.

    2012-01-01

    The LMDB (Landmark Database) Builder software identifies persistent image features (landmarks) in a scene viewed multiple times and precisely estimates the landmarks' 3D world positions. The software receives as input multiple 2D images of approximately the same scene, along with an initial guess of the camera poses for each image, and a table of features matched pair-wise in each frame. LMDB Builder aggregates landmarks across an arbitrarily large collection of frames with matched features. Range data from stereo vision processing can also be passed to improve the initial guess of the 3D point estimates. The LMDB Builder aggregates feature lists across all frames, manages the process to promote selected features to landmarks, and iteratively calculates the 3D landmark positions using the current camera pose estimates (via an optimal ray projection method), and then improves the camera pose estimates using the 3D landmark positions. Finally, it extracts image patches for each landmark from auto-selected key frames and constructs the landmark database. The landmark database can then be used to estimate future camera poses (and therefore localize a robotic vehicle that may be carrying the cameras) by matching current imagery to landmark database image patches and using the known 3D landmark positions to estimate the current pose.
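The step of intersecting viewing rays to estimate a landmark's 3D position can be sketched as a least-squares ray intersection; the exact "optimal ray projection" algorithm used by LMDB Builder is not specified here, so this is an illustrative stand-in:

```python
import numpy as np

def triangulate_rays(origins, directions):
    """Least-squares 3D point closest to a set of rays (camera center +
    viewing direction), the geometric core of estimating a landmark
    position from several posed observations.

    Solves  sum_i (I - d_i d_i^T) X = sum_i (I - d_i d_i^T) o_i.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector onto the plane normal to d
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Two rays that intersect exactly at (1, 1, 5) recover that point.
origins = [np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])]
target = np.array([1.0, 1.0, 5.0])
dirs = [target - o for o in origins]
assert np.allclose(triangulate_rays(origins, dirs), target)
```

With noisy rays the same solve returns the point minimizing the summed squared perpendicular distances, which is what makes alternating between landmark triangulation and pose refinement converge.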

  11. Periscope-camera system for visible and infrared imaging diagnostics on TFTR

    SciTech Connect

    Medley, S.S.; Dimock, D.L.; Hayes, S.; Long, D.; Lowrance, J.L.; Mastrocola, V.; Renda, G.; Ulrickson, M.; Young, K.M.

    1985-10-01

An optical diagnostic consisting of a periscope which relays images of the torus interior to an array of cameras is used on the Tokamak Fusion Test Reactor (TFTR) to view plasma discharge phenomena and inspect the vacuum vessel internal structures in both the visible and near-infrared wavelength regions. Three periscopes view through 20-cm-diam fused-silica windows which are spaced around the torus midplane to provide a viewing coverage of approximately 75% of the vacuum vessel internal surface area. The periscopes have f/8 optics and motor-driven controls for focusing, magnification selection (5°, 20°, and 60° field of view), elevation and azimuth setting, mast rotation, filter selection, iris aperture, and viewing port selection. The four viewing ports on each periscope are equipped with multiple imaging devices which include: (1) an inspection eyepiece, (2) standard (RCA TC2900) and fast (RETICON) framing rate television cameras, (3) a PtSi CCD infrared imaging camera, (4) a 35-mm Nikon F3 still camera, or (5) a 16-mm Locam II movie camera with variable framing rate up to 500 fps. Operation of the periscope-camera system is controlled either locally or remotely through a computer-CAMAC interface. A description of the equipment and examples of its application are presented.

  12. Periscope-camera system for visible and infrared imaging diagnostics on TFTR

    SciTech Connect

Medley, S.S.; Dimock, D.L.; Hayes, S.; Long, D.; Lowrance, J.L.; Mastrocola, V.; Renda, G.; Ulrickson, M.; Young, K.M.

    1985-05-01

An optical diagnostic consisting of a periscope which relays images of the torus interior to an array of cameras is used on the Tokamak Fusion Test Reactor (TFTR) to view plasma discharge phenomena and inspect vacuum vessel internal structures in both visible and near-infrared wavelength regions. Three periscopes view through 20-cm-diameter fused-silica windows which are spaced around the torus midplane to provide a viewing coverage of approximately 75% of the vacuum vessel internal surface area. The periscopes have f/8 optics and motor-driven controls for focusing, magnification selection (5°, 20°, and 60° field of view), elevation and azimuth setting, mast rotation, filter selection, iris aperture, and viewing port selection. The four viewing ports on each periscope are equipped with multiple imaging devices which include: (1) an inspection eyepiece, (2) standard (RCA TC2900) and fast (RETICON) framing rate television cameras, (3) a PtSi CCD infrared imaging camera, (4) a 35 mm Nikon F3 still camera, or (5) a 16 mm Locam II movie camera with variable framing up to 500 fps. Operation of the periscope-camera system is controlled either locally or remotely through a computer-CAMAC interface. A description of the equipment and examples of its application are presented.

  13. Initial synchroscan streak camera imaging at the A0 photoinjector

    SciTech Connect

    Lumpkin, A.H.; Ruan, J.; /Fermilab

    2008-04-01

At the Fermilab A0 photoinjector facility, bunch-length measurements of the laser micropulse and the e-beam micropulse have previously been made with a single-sweep module of the Hamamatsu C5680 streak camera, which has an intrinsic shot-to-shot trigger jitter of 10 to 20 ps. We have upgraded the camera system with the synchroscan module tuned to 81.25 MHz to provide synchronous summing capability with less than 1.5-ps FWHM trigger jitter, and a phase-locked delay box to provide phase stability of ≈1 ps over tens of minutes. This allowed us to measure both the UV laser pulse train at 244 nm and the e-beam via optical transition radiation (OTR). Due to the low electron beam energies and OTR signals, we typically summed over 50 micropulses with 1 nC per micropulse. We also measured electron beam bunch length vs. micropulse charge, identifying a significant e-beam micropulse elongation from 10 to 30 ps (FWHM) for charges from 1 to 4.6 nC. This effect is attributed to space-charge effects in the PC gun, as reproduced by ASTRA calculations. Chromatic temporal dispersion effects in the optics were also characterized and will be reported.

  14. 3D image processing architecture for camera phones

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Goma, Sergio R.; Aleksic, Milivoje

    2011-03-01

Putting high quality and easy-to-use 3D technology into the hands of regular consumers has become a recent challenge as interest in 3D technology has grown. Making 3D technology appealing to the average user requires that it be made fully automatic and foolproof. Designing a fully automatic 3D capture and display system requires: 1) identifying critical 3D technology issues like camera positioning, disparity control rationale, and screen geometry dependency; 2) designing a methodology to automatically control them. Implementing 3D capture functionality on phone cameras necessitates designing algorithms to fit within the processing capabilities of the device. Various constraints like sensor position tolerances, sensor 3A tolerances, post-processing, 3D video resolution and frame rate should be carefully considered for their influence on the 3D experience. Issues with migrating functions such as zoom and pan from the 2D usage model (both during capture and display) to 3D need to be resolved to ensure the highest level of user experience. It is also very important that the 3D usage scenario (including interactions between the user and the capture/display device) is carefully considered. Finally, both the processing power of the device and the practicality of the scheme need to be taken into account while designing the calibration and processing methodology.
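The disparity control rationale mentioned above rests on the rectified-stereo relation Z = f·B/d. A minimal sketch, with focal length and baseline numbers that are illustrative assumptions rather than values from the paper:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point from rectified stereo: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

def disparity_from_depth(focal_px, baseline_m, depth_m):
    """Inverse relation, useful for choosing a comfortable disparity range."""
    return focal_px * baseline_m / depth_m

# Illustrative phone-camera numbers: 2800 px focal length, 12 mm baseline
f, B = 2800.0, 0.012
print(depth_from_disparity(f, B, 16.8))   # → 2.0 (meters)
print(disparity_from_depth(f, B, 2.0))    # → 16.8 (pixels)
```

Sweeping `disparity_from_depth` over the expected scene depths is one way to check that disparities stay inside a viewer-comfort budget.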

  15. Demonstration of three-dimensional imaging based on handheld Compton camera

    NASA Astrophysics Data System (ADS)

    Kishimoto, A.; Kataoka, J.; Nishiyama, T.; Taya, T.; Kabuki, S.

    2015-11-01

Compton cameras are promising detectors capable of performing measurements across a wide energy range for medical imaging applications, such as nuclear medicine and ion beam therapy. In previous work, we developed a handheld Compton camera to identify environmental radiation hotspots. This camera consists of a 3D position-sensitive scintillator array and multi-pixel photon counter arrays. In this work, we reconstructed the 3D image of a source via list-mode maximum likelihood expectation maximization (MLEM) and demonstrated the imaging performance of the handheld Compton camera. Based on both simulation and experiment, we confirmed that multi-angle data acquisition of the imaging region significantly improved the spatial resolution of the reconstructed image in the direction perpendicular to the detector. The experimental spatial resolutions in the X, Y, and Z directions at the center of the imaging region were 6.81 mm ± 0.13 mm, 6.52 mm ± 0.07 mm and 6.71 mm ± 0.11 mm (FWHM), respectively. The results of multi-angle data acquisition show the potential of reconstructing 3D source images.
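The MLEM reconstruction named above can be illustrated with its simpler binned form; the multiplicative update below is the standard MLEM iteration, while the toy system matrix and counts are our own, not the paper's Compton-camera model:

```python
import numpy as np

def mlem(system_matrix, counts, n_iters=50):
    """Binned MLEM: lambda <- (lambda / s) * A^T (y / (A @ lambda)).

    system_matrix[i, j]: probability that an emission in voxel j is
    recorded in detector bin i; counts: measured bin counts.
    """
    A = np.asarray(system_matrix, dtype=float)
    y = np.asarray(counts, dtype=float)
    sens = A.sum(axis=0)                 # per-voxel sensitivity s_j
    lam = np.ones(A.shape[1])            # flat initial estimate
    for _ in range(n_iters):
        proj = A @ lam                   # forward projection
        proj[proj == 0] = 1e-12          # guard against division by zero
        lam *= (A.T @ (y / proj)) / sens
    return lam

# Toy 2-voxel, 2-bin problem with an exactly recoverable solution
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])
true_lam = np.array([10.0, 2.0])
y = A @ true_lam                         # noise-free "measurement"
est = mlem(A, y, n_iters=200)
print(est)                               # → converges toward [10. 2.]
```

List-mode MLEM applies the same update with one matrix row per detected event instead of per detector bin.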

  16. Methods for a fusion of optical coherence tomography and stereo camera image data

    NASA Astrophysics Data System (ADS)

    Bergmeier, Jan; Kundrat, Dennis; Schoob, Andreas; Kahrs, Lüder A.; Ortmaier, Tobias

    2015-03-01

This work investigates the combination of Optical Coherence Tomography (OCT) and two cameras observing a microscopic scene. Stereo vision provides realistic images but is limited in terms of penetration depth. OCT gives access to subcutaneous structures, but 3D-OCT volume data do not give the surgeon a familiar view. Extending the stereo camera setup with OCT imaging combines the benefits of both modalities. In order to provide the surgeon with a convenient integration of OCT into the vision interface, we present automated image processing of OCT and stereo camera data as well as combined imaging as an augmented reality visualization. To this end, we address OCT image noise, perform segmentation, and develop suitable registration objects and methods. The registration between stereo camera and OCT yields a Root Mean Square error of 284 μm, averaged over five measurements. The presented methods are fundamental for the fusion of both imaging modalities. Augmented reality is shown as an application of the results. Further developments will lead to fused visualization of subcutaneous structures, as information from OCT images, into stereo vision.
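The registration error quoted above is an RMS residual after aligning corresponding points from the two modalities. A standard way to compute such a rigid alignment is the Kabsch (Procrustes) fit — a generic sketch, not the authors' registration pipeline:

```python
import numpy as np

def rigid_register(src, dst):
    """Best-fit rotation R and translation t mapping src -> dst (Kabsch)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)        # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                   # D guards against a reflection
    t = cd - R @ cs
    return R, t

def rms_error(src, dst, R, t):
    resid = dst - (np.asarray(src) @ R.T + t)
    return np.sqrt((resid ** 2).sum(axis=1).mean())

# Synthetic check: a known rotation/translation is recovered with ~0 RMS
rng = np.random.default_rng(0)
pts = rng.normal(size=(20, 3))
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
moved = pts @ Rz.T + np.array([0.1, 0.2, 0.3])
R, t = rigid_register(pts, moved)
print(rms_error(pts, moved, R, t))       # → ~0
```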

  17. Enhancing swimming pool safety by the use of range-imaging cameras

    NASA Astrophysics Data System (ADS)

    Geerardyn, D.; Boulanger, S.; Kuijk, M.

    2015-05-01

Drowning causes the death of 372,000 people each year worldwide, according to the November 2014 report of the World Health Organization.1 Currently, most swimming pools rely only on lifeguards to detect drowning people. In some modern swimming pools, camera-based detection systems are being integrated. However, these systems have to be mounted underwater, mostly as a replacement of the underwater lighting. In contrast, we are interested in range-imaging cameras mounted on the ceiling of the swimming pool, allowing us to distinguish swimmers at the surface from drowning people underwater, while keeping a large field-of-view and minimizing occlusions. However, we have to take into account that the water surface of a swimming pool is not flat but mostly rippled, and that the water is transparent for visible light but less transparent for infrared or ultraviolet light. We investigated the use of different types of 3D cameras to detect objects underwater at different depths and with different amplitudes of surface perturbation. Specifically, we performed measurements with a commercial Time-of-Flight camera, a commercial structured-light depth camera and our own Time-of-Flight system. Our own system uses pulsed Time-of-Flight and emits light at 785 nm. The measured distances between the camera and the object are influenced by the perturbations on the water surface. Due to the timing of our Time-of-Flight camera, our system is theoretically able to minimize the influence of reflections from a partially reflecting surface. Combining a post-acquisition filter that compensates for the perturbations with a shorter-wavelength light source to enlarge the depth range can improve on the current commercial cameras. We therefore conclude that low-cost range imagers can increase swimming pool safety, given a post-processing filter and a different light source.
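The pulsed Time-of-Flight principle behind the 785 nm system reduces to d = c·t/(2n). The refractive-index correction for the underwater part of the path is our addition to illustrate why submerged targets read differently, not a detail taken from the paper:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_s, n_medium=1.0):
    """Pulsed time-of-flight range: d = c * t / (2 * n).

    n_medium accounts for slower propagation in water (n ≈ 1.33, an
    assumed textbook value), relevant when ranging submerged objects.
    """
    return C * round_trip_s / (2.0 * n_medium)

# A 20 ns round trip in air corresponds to about 3 m
print(tof_distance(20e-9))          # → ≈ 2.998 (m)
# The same delay spent in water corresponds to a shorter true range
print(tof_distance(20e-9, 1.33))    # → ≈ 2.254 (m)
```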

  18. Imaging performance of a multiwire proportional-chamber positron camera

    SciTech Connect

    Perez-Mandez, V.; Del Guerra, A.; Nelson, W.R.; Tam, K.C.

    1982-08-01

A new, fully three-dimensional positron camera design is presented, made of six MultiWire Proportional Chamber modules arranged to form the lateral surface of a hexagonal prism. A true coincidence rate of 56,000 c/s is expected, with an equal accidental rate, for 400 μCi of activity uniformly distributed in a ≈3 l water phantom. A detailed Monte Carlo program has been used to investigate the dependence of the spatial resolution on the geometrical and physical parameters. A spatial resolution of 4.8 mm FWHM has been obtained for an ¹⁸F point-like source in a 10 cm radius water phantom. The main properties of the limited-angle reconstruction algorithms are described in relation to the proposed detector geometry.

  19. Retinal axial focusing and multi-layer imaging with a liquid crystal adaptive optics camera

    NASA Astrophysics Data System (ADS)

    Liu, Rui-Xue; Zheng, Xian-Liang; Li, Da-Yu; Xia, Ming-Liang; Hu, Li-Fa; Cao, Zhao-Liang; Mu, Quan-Quan; Xuan, Li

    2014-09-01

With the help of adaptive optics (AO) technology, cellular-level imaging of the living human retina can be achieved. To reduce discomfort and avoid potential drug-induced complications, we attempted to image the retina with a dilated pupil and frozen accommodation without using drugs. An optimized liquid crystal adaptive optics camera was adopted for retinal imaging. A novel eye-fixation ("stare") system was used to stimulate accommodation and fixate the imaging area. The illumination sources and imaging camera were moved in linkage for focusing on and imaging different layers. Four subjects with differing degrees of myopia were imaged. Based on the optical properties of the human eye, the eye-fixation system reduced the defocus to less than the typical ocular depth of focus, so the illumination light could be projected onto a given retinal layer precisely. Since the defocus had been compensated by the eye-fixation system, the adopted 512 × 512 liquid crystal spatial light modulator (LC-SLM) corrector provided the spatial fidelity needed to fully compensate high-order aberrations. The Strehl ratio for a subject with -8 diopter myopia was improved to 0.78, close to diffraction-limited imaging. By finely adjusting the axial displacement of the illumination sources and imaging camera, cone photoreceptors, blood vessels and the nerve fiber layer were clearly imaged.

  20. Driving micro-optical imaging systems towards miniature camera applications

    NASA Astrophysics Data System (ADS)

    Brückner, Andreas; Duparré, Jacques; Dannberg, Peter; Leitel, Robert; Bräuer, Andreas

    2010-05-01

Up to now, multi-channel imaging systems have been increasingly studied and approached from various directions in the academic domain due to their promising large field of view at small system thickness. However, specific drawbacks of each of the solutions have so far prevented diffusion into the corresponding markets. The most severe problems are low image resolution and low sensitivity compared to a conventional single-aperture lens, besides the lack of a cost-efficient method of fabrication and assembly. We propose a micro-optical approach to ultra-compact optics for real-time vision systems inspired by the compound eyes of insects. The demonstrated modules achieve VGA resolution with 700 × 550 pixels within an optical package of 6.8 mm × 5.2 mm and a total track length of 1.4 mm. The partial images that are separately recorded within different optical channels are stitched together by image processing to form a final image of the whole field of view. These software tools also correct the distortion of the individual partial images, so that the final image is free of distortion. The so-called electronic cluster eyes are realized by state-of-the-art micro-optical fabrication techniques and offer a resolution and sensitivity potential that makes them suitable for consumer, machine vision and medical imaging applications.

  1. A 5-18 micron array camera for high-background astronomical imaging

    NASA Technical Reports Server (NTRS)

    Gezari, Daniel Y.; Folz, Walter C.; Woods, Lawrence A.; Varosi, Frank

    1992-01-01

A new infrared array camera system using a Hughes/SBRC 58 × 62 pixel hybrid Si:Ga array detector has been successfully applied to high-background 5-18-micron astronomical imaging observations. The off-axis reflective optical system minimizes thermal background loading and produces diffraction-limited images with negligible spatial distortion. The noise equivalent flux density (NEFD) of the camera at 10 microns on the 3.0-m NASA Infrared Telescope Facility, with broadband interference filters and 0.26-arcsec pixels, is NEFD = 0.01 Jy/√min per pixel (1σ), and it operates at a frame rate of 30 Hz with no compromise in observational efficiency. The electronic and optical design of the camera, its photometric characteristics, examples of observational results, and techniques for successful array imaging in a high-background astronomical application are discussed.

  2. Achieving thermography with a thermal security camera using uncooled amorphous silicon microbolometer image sensors

    NASA Astrophysics Data System (ADS)

    Wang, Yu-Wei; Tesdahl, Curtis; Owens, Jim; Dorn, David

    2012-06-01

Advancements in uncooled microbolometer technology over the last several years have opened up many commercial applications which had previously been cost prohibitive. Thermal technology is no longer limited to the military and government market segments. One type of thermal sensor with low NETD which is available in the commercial market segment is the uncooled amorphous silicon (α-Si) microbolometer image sensor. Typical thermal security cameras focus on providing the best image quality by auto tonemapping (contrast enhancement), which provides the best contrast depending on the temperature range of the scene. While this may provide enough information to detect objects and activities, there are further benefits to being able to estimate actual object temperatures in a scene. This thermographic ability can provide functionality beyond typical security cameras by enabling process monitoring. Example applications of thermography[2] with a thermal camera include monitoring electrical circuits, industrial machinery, building thermal leaks, oil/gas pipelines, power substations, etc.[3][5] This paper discusses the methodology of estimating object temperatures by characterizing/calibrating different components inside a thermal camera utilizing an uncooled amorphous silicon microbolometer image sensor. Plots of system performance across camera operating temperatures will be shown.

  3. A novel IR polarization imaging system designed by a four-camera array

    NASA Astrophysics Data System (ADS)

    Liu, Fei; Shao, Xiaopeng; Han, Pingli

    2014-05-01

A novel IR polarization staring imaging system employing a four-camera array is designed for target detection and recognition, especially of man-made targets hidden in a complex battlefield. The design is based on the difference in infrared radiation's polarization characteristics, which is particularly marked between artificial objects and the natural environment. The system employs four cameras simultaneously to capture the polarization difference, replacing the commonly used systems that engage only one camera. Since both types of system must obtain intensity images in four different directions (I0, I45, I90, I−45), the four-camera design allows better real-time capability and lower error, without the mechanical rotating parts essential to one-camera systems. Information extraction and detailed analysis demonstrate that the captured polarization images include valuable polarization information which can effectively increase image contrast and make it easier to segment a target, even a hidden target, from various scenes.
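The four intensity channels determine the linear Stokes parameters, from which the polarization contrast is derived; note that the I−45 channel is the same measurement as I135. A generic sketch of the standard relations, not the authors' processing chain:

```python
import numpy as np

def stokes_from_intensities(I0, I45, I90, I135):
    """Linear Stokes parameters from four polarizer orientations.

    I135 is identical to the I-45 channel in the four-camera notation.
    """
    S0 = 0.5 * (I0 + I45 + I90 + I135)          # total intensity
    S1 = I0 - I90                               # 0°/90° contrast
    S2 = I45 - I135                             # ±45° contrast
    dolp = np.sqrt(S1**2 + S2**2) / S0          # degree of linear polarization
    aop = 0.5 * np.degrees(np.arctan2(S2, S1))  # angle of polarization
    return S0, S1, S2, dolp, aop

# Fully polarized light at 0°: I0 = 1, I90 = 0, I45 = I135 = 0.5
print(stokes_from_intensities(1.0, 0.5, 0.0, 0.5))
# → S0 = 1, S1 = 1, S2 = 0, DoLP = 1, AoP = 0°
```

Applied pixel-wise to the four registered images, the DoLP map is what separates man-made surfaces from natural backgrounds.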

  4. A new approach to tunnel image acquisition using a fisheye lens camera

    NASA Astrophysics Data System (ADS)

    Kim, Gihong; Youn, Junhee; Choi, Hyun Sang; Chae, Myung Jin

    2013-09-01

In this study, an improved method of acquiring images of tunnels using a fisheye lens is presented. First, a forward image of the inside of a tunnel is obtained using a fisheye lens camera. A portion of this image is then transformed to a rectangular image using a coordinate transformation. To verify the proposed method, the technique was applied to images of an actual tunnel under construction. Although the transformed images obtained in this test were distorted, the applicability of the method can be improved by correcting such distortions.
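The coordinate transformation from a fisheye image portion to a rectangular (rectilinear) image can be sketched for the common equidistant fisheye model (r = f·θ). The abstract does not state which projection model the lens follows, so the model and all numbers below are assumptions:

```python
import math

def fisheye_to_rectilinear(u, v, f_fish, f_rect, cx, cy):
    """Map a pixel of an equidistant fisheye image (r = f * theta) to its
    position in an ideal rectilinear (pinhole) image (r' = f * tan(theta)).
    """
    dx, dy = u - cx, v - cy
    r = math.hypot(dx, dy)
    if r == 0:
        return cx, cy
    theta = r / f_fish                  # incidence angle of the ray
    if theta >= math.pi / 2:
        raise ValueError("ray outside the rectilinear field of view")
    r_new = f_rect * math.tan(theta)
    scale = r_new / r
    return cx + dx * scale, cy + dy * scale

# On-axis pixels move little; off-axis pixels are pushed outward
print(fisheye_to_rectilinear(900.0, 600.0, 600.0, 600.0, 800.0, 600.0))
# → ≈ (900.94, 600.0)
```

In practice the mapping is run in reverse (rectilinear pixel to fisheye source position) and the output filled by interpolation.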

  5. Vehicle occupancy detection camera position optimization using design of experiments and standard image references

    NASA Astrophysics Data System (ADS)

    Paul, Peter; Hoover, Martin; Rabbani, Mojgan

    2013-03-01

Camera positioning and orientation is important to applications in domains such as transportation, since the objects to be imaged vary greatly in shape and size. In a typical transportation application that requires capturing still images, inductive loops buried in the ground or laser trigger sensors are used to trigger the image capture system when a vehicle reaches the image capture zone. The camera in such a system is in a fixed position pointed at the roadway and at a fixed orientation. Thus the problem is to determine the optimal location and orientation of the camera when capturing images from a wide variety of vehicles. Methods from Design for Six Sigma, including identifying important parameters and noise sources and performing systematically designed experiments (DOE), can be used to determine an effective set of parameter settings for the camera position and orientation under these conditions. In the transportation application of high occupancy vehicle lane enforcement, the number of passengers in the vehicle is to be counted. Past work has described front-seat vehicle occupant counting using a camera mounted on an overhead gantry looking through the front windshield in order to capture images of vehicle occupants. However, viewing rear-seat passengers is more problematic due to obstructions including the vehicle body frame structures and seats. One approach is to view the rear seats through the side window. In this situation the problem of optimally positioning and orienting the camera to adequately capture the rear seats through the side window can be addressed through a designed experiment. In any automated traffic enforcement system it is necessary for humans to be able to review any automatically captured digital imagery in order to verify detected infractions. Thus, to define an output to be optimized for the designed experiment, a human-defined standard image reference (SIR) was used to quantify the quality of the line-of-sight to the rear seats of the vehicle. The DOE-SIR method was exercised to determine the optimal camera position and orientation for viewing vehicle rear seats over a variety of vehicle types. The resulting camera geometry was used for public-roadway image capture, resulting in over 95% acceptable rear-seat images for human viewing.

  6. Effect of camera temperature variations on stereo-digital image correlation measurements.

    PubMed

    Pan, Bing; Shi, Wentao; Lubineau, Gilles

    2015-12-01

In laboratory and especially non-laboratory stereo-digital image correlation (stereo-DIC) applications, the extrinsic and intrinsic parameters of the cameras used in the system may change slightly due to the camera warm-up effect and possible variations in ambient temperature. Because these camera parameters are generally calibrated once prior to measurement and considered unaltered during the whole measurement period, changes in these parameters unavoidably induce displacement/strain errors. In this study, the effect of temperature variations on stereo-DIC measurements is investigated experimentally. To quantify the errors associated with camera or ambient temperature changes, surface displacements and strains of a stationary optical quartz glass plate with near-zero thermal expansion were continuously measured using a regular stereo-DIC system. The results confirm that (1) temperature variations in the cameras and ambient environment have a considerable influence on the displacements and strains measured by stereo-DIC, due to the slightly altered extrinsic and intrinsic camera parameters; and (2) the corresponding displacement and strain errors correlate with temperature changes. For the specific stereo-DIC configuration used in this work, the temperature-induced strain errors were estimated to be approximately 30-50 με/°C. To minimize the adverse effect of camera temperature variations on stereo-DIC measurements, two simple but effective solutions are suggested. PMID:26836665

  7. Imaging with depth extension: where are the limits in fixed- focus cameras?

    NASA Astrophysics Data System (ADS)

    Bakin, Dmitry; Keelan, Brian

    2008-08-01

    The integration of novel optics designs, miniature CMOS sensors, and powerful digital processing into a single imaging module package is driving progress in handset camera systems in terms of performance, size (thinness) and cost. The miniature cameras incorporating high resolution sensors and fixed-focus Extended Depth of Field (EDOF) optics allow close-range reading of printed material (barcode patterns, business cards), while providing high quality imaging in more traditional applications. These cameras incorporate modified optics and digital processing to recover the soft-focus images and restore sharpness over a wide range of object distances. The effects a variety of parameters of the imaging module on the EDOF range were analyzed for a family of high resolution CMOS modules. The parameters include various optical properties of the imaging lens, and the characteristics of the sensor. The extension factors for the EDOF imaging module were defined in terms of an improved absolute resolution in object space while maintaining focus at infinity. This definition was applied for the purpose of identifying the minimally resolvable object details in mobile cameras with bar-code reading feature.
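The depth-of-field extension is measured against the classical fixed-focus baseline, where the hyperfocal distance sets the near limit of acceptable sharpness. A minimal sketch of that baseline with illustrative phone-camera numbers (f = 4 mm, f/2.8, 2 μm circle of confusion — our assumptions, not the modules analyzed in the paper):

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm):
    """Hyperfocal distance H = f^2 / (N * c) + f for a thin lens.

    Focusing at H keeps everything from H/2 to infinity acceptably sharp;
    EDOF optics aim to pull the near limit in further than this baseline.
    """
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

# Illustrative numbers: f = 4 mm, f/2.8, CoC = 0.002 mm (2 um)
H = hyperfocal_mm(4.0, 2.8, 0.002)
print(H, H / 2)   # → ≈ 2861 mm and ≈ 1430 mm
```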

  8. Camera motion tracking of real bronchoscope using epipolar geometry analysis and CT-derived bronchoscopic images

    NASA Astrophysics Data System (ADS)

    Deguchi, Daisuke; Mori, Kensaku; Hasegawa, Jun-ichi; Toriwaki, Jun-ichiro; Takabatake, Hirotsugu; Natori, Hiroshi

    2002-04-01

This paper describes a method to track the camera motion of a real endoscope by using epipolar geometry analysis and CT-derived virtual endoscopic images. A navigation system for a flexible endoscope guides medical doctors by providing navigation information during endoscope examinations. This paper estimates the motion from an endoscopic video image based on epipolar geometry analysis and image registration between virtual endoscopic (VE) and real endoscopic (RE) images. The method consists of three parts: (a) direct estimation of camera motion by epipolar geometry analysis, (b) precise estimation by image registration, and (c) detection of bubble frames to avoid mis-registration. First we calculate optical flow patterns from two consecutive frames. The camera motion is computed by substituting the obtained flows into the epipolar equations. Then we find the observation parameters of a virtual endoscopy system that generate the endoscopic view most similar to the current RE frame. We execute these processes for all frames of the RE videos except those where bubbles appear. We applied the proposed method to RE videos of three patients with CT images. The experimental results show the method can track camera motion for over 500 frames continuously in the best case.
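The epipolar equations into which the optical flows are substituted come from the essential matrix E = [t]×R. A minimal numeric check of the constraint x2ᵀEx1 = 0 with illustrative geometry (not the endoscope data):

```python
import numpy as np

def skew(t):
    """Cross-product matrix: skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def essential(R, t):
    """Essential matrix E = [t]_x R; for frames related by X2 = R X1 + t,
    corresponding normalized image points satisfy x2^T E x1 = 0."""
    return skew(t) @ R

# Pure sideways translation of 0.1 units, no rotation
R, t = np.eye(3), np.array([0.1, 0.0, 0.0])
X1 = np.array([0.3, -0.2, 2.0])       # scene point in frame-1 coordinates
X2 = R @ X1 + t                        # same point in frame-2 coordinates
x1, x2 = X1 / X1[2], X2 / X2[2]        # normalized image coordinates
residual = x2 @ essential(R, t) @ x1
print(residual)                        # → 0.0 (epipolar constraint holds)
```

Motion estimation inverts this: given many flow correspondences (x1, x2), solve for the E that makes all residuals vanish, then decompose E into R and t.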

  9. Calibration and adaptation of ISO visual noise for I3A's Camera Phone Image Quality initiative

    NASA Astrophysics Data System (ADS)

    Baxter, Donald J.; Murray, Andrew

    2012-01-01

The I3A Camera Phone Image Quality (CPIQ) visual noise metric described here is a core image quality attribute of the wider, consumer-oriented I3A CPIQ camera image quality score. This paper describes the selection of a suitable noise metric, the adaptation of the chosen ISO 15739 visual noise protocol for the challenges posed by cell phone cameras, and the mapping of the adapted protocol to subjective image quality loss using a published noise study. Via a simple study, visual noise metrics are shown to discriminate between different noise frequency shapes. The optical non-uniformities prevalent in cell phone cameras and their higher noise levels pose significant challenges to the ISO 15739 visual noise protocol. The non-uniformities are addressed using a frequency-based high-pass filter. Second, data clipping at high noise levels is avoided using a Johnson and Fairchild frequency-based luminance contrast sensitivity function (CSF). The final result is a visually based noise metric calibrated in quality-loss just noticeable differences (JNDs) using Aptina Imaging's subjectively calibrated image set.

  10. Correction of rolling wheel images captured by a linear array camera.

    PubMed

    Xu, Jiayuan; Sun, Ran; Tian, Yupeng; Xie, Qi; Yang, Ying; Liu, Hongdan; Cao, Lei

    2015-11-20

As a critical part of the train, wheels affect railway transport safety to a large extent. This paper introduces an online method to detect the wheel tread of a train. The wheel tread images are collected by industrial linear array charge coupled device (CCD) cameras while the train is moving at low velocity. This study defines the positioning of the cameras and determines how to select other parameters such as the horizontal angle and the scanning range. The deformation of the wheel tread image can be calculated from these parameters and corrected by gray interpolation. PMID:26836530
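The gray-interpolation correction amounts to resampling the line-scan image at corrected positions. A minimal sketch using bilinear gray-value interpolation and an assumed uniform along-track stretch; the actual deformation model in the paper comes from the camera geometry and wheel motion, not this simple factor:

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Gray-value (bilinear) interpolation at a fractional pixel position."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, img.shape[1] - 1), min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bot

def stretch_rows(img, factor):
    """Undo a uniform along-track compression by resampling each column
    of the line-scan image at a spacing of 1/factor source rows."""
    h, w = img.shape
    out = np.empty((int(h * factor), w))
    for j in range(out.shape[0]):
        y = min(j / factor, h - 1)
        for i in range(w):
            out[j, i] = bilinear_sample(img, float(i), y)
    return out

img = np.arange(16.0).reshape(4, 4)
print(stretch_rows(img, 2.0).shape)   # → (8, 4)
```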

  11. One high-accuracy camera calibration algorithm based on computer vision images

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Huang, Jianming; Wei, Xiangquan

    2015-12-01

Camera calibration is the first step of computer vision and one of the most active research fields today. To improve measurement precision, the internal parameters of the camera must be accurately calibrated. A high-accuracy camera calibration algorithm is therefore proposed, based on images of planar or tridimensional targets. Using this algorithm, the internal parameters of the camera were calibrated with an existing planar target in a vision-based navigation experiment. The experimental results show that the accuracy of the proposed algorithm is clearly improved compared with the conventional linear algorithm, the Tsai general algorithm, and the Zhang Zhengyou calibration algorithm. The proposed algorithm can satisfy the needs of computer vision and provide a reference for precise measurement of relative position and attitude.

  12. Coded-Aperture Compton Camera for Gamma-Ray Imaging

    NASA Astrophysics Data System (ADS)

    Farber, Aaron M.; Williams, John G.

    2016-02-01

    A novel gamma-ray imaging system is demonstrated, by means of Monte Carlo simulation. Previous designs have used either a coded aperture or Compton scattering system to image a gamma-ray source. By taking advantage of characteristics of each of these systems a new design can be implemented that does not require a pixelated stopping detector. Use of the system is illustrated for a simulated radiation survey in a decontamination and decommissioning operation.

  13. Eyegaze Detection from Monocular Camera Image for Eyegaze Communication System

    NASA Astrophysics Data System (ADS)

    Ohtera, Ryo; Horiuchi, Takahiko; Kotera, Hiroaki

An eyegaze interface is one of the key technologies for input devices in the ubiquitous-computing society. In particular, an eyegaze communication system is very important and useful for severely handicapped users such as quadriplegic patients. Most conventional eyegaze tracking algorithms require specific light sources, equipment and devices. In this study, a simple eyegaze detection algorithm is proposed that uses a single monocular video camera. The proposed algorithm works under the condition of fixed head pose, but slight movement of the face is accepted. In our system, we assume that all users have the same eyeball size, based on physiological eyeball models. However, we succeeded in calibrating the physiological movement of the eyeball center depending on the gazing direction by approximating it as a change in the eyeball radius. In the gaze detection stage, the iris is extracted from a captured face frame using the Hough transform. Then, the eyegaze angle is derived by calculating the Euclidean distance of the iris centers between the extracted frame and a reference frame captured in the calibration process. We apply our system to an eyegaze communication interface and verify the performance through key-typing experiments with a visual keyboard on a display.
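The gaze-angle computation from the iris-center displacement can be sketched with the fixed-eyeball-radius model described above. The 12 mm radius and the pixel scale here are illustrative assumptions, not the paper's calibrated values:

```python
import math

def gaze_angle_deg(iris_shift_px, mm_per_px, eyeball_radius_mm=12.0):
    """Gaze angle from the on-image iris-center displacement, modeling the
    iris center as moving on a sphere of fixed radius: theta = asin(d / r)."""
    d_mm = iris_shift_px * mm_per_px
    if abs(d_mm) > eyeball_radius_mm:
        raise ValueError("displacement exceeds eyeball radius")
    return math.degrees(math.asin(d_mm / eyeball_radius_mm))

# A 60-pixel shift at 0.1 mm/pixel on a 12 mm eyeball is a 30-degree gaze turn
print(gaze_angle_deg(60, 0.1))   # → ≈ 30.0
```

The paper's refinement corresponds to letting `eyeball_radius_mm` vary with gaze direction instead of staying fixed.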

  14. Online gamma-camera imaging of 103Pd seeds (OGIPS) for permanent breast seed implantation

    NASA Astrophysics Data System (ADS)

    Ravi, Ananth; Caldwell, Curtis B.; Keller, Brian M.; Reznik, Alla; Pignol, Jean-Philippe

    2007-09-01

Permanent brachytherapy seed implantation is being investigated as a mode of accelerated partial breast irradiation for early-stage breast cancer patients. Currently, the seeds are poorly visualized during the procedure, making it difficult to perform a real-time correction of the implantation if required. The objective was to determine whether a customized gamma-camera can accurately localize the seeds during implantation. Monte Carlo simulations of a CZT-based gamma-camera were used to assess whether images of suitable quality could be derived by detecting the 21 keV photons emitted from 74 MBq 103Pd brachytherapy seeds. A hexagonal parallel-hole collimator with a hole length of 38 mm, a hole diameter of 1.2 mm and 0.2 mm septa was modeled. The design of the gamma-camera was evaluated on a realistic model of the breast and three layers of the seed distribution (55 seeds) based on a pre-implantation CT treatment plan. The Monte Carlo simulations showed that the gamma-camera was able to localize the seeds with a maximum error of 2.0 mm, using only two views and 20 s of imaging. A gamma-camera can potentially be used as an intra-procedural image guidance system for quality assurance of permanent breast seed implantation.
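For intuition about the collimator dimensions quoted above, the standard textbook geometric-resolution relation for a parallel-hole collimator can be evaluated (septal penetration ignored; the 50 mm source distance is an illustrative assumption, not a value from the study):

```python
def collimator_resolution_mm(hole_diam_mm, hole_len_mm, source_dist_mm):
    # Geometric resolution of a parallel-hole collimator, ignoring
    # septal penetration: R = d * (l + z) / l
    return hole_diam_mm * (hole_len_mm + source_dist_mm) / hole_len_mm

# Collimator from the abstract (1.2 mm holes, 38 mm long), source 50 mm away:
print(round(collimator_resolution_mm(1.2, 38.0, 50.0), 2))  # → 2.78
```

The ~2.8 mm geometric blur is consistent with the reported 2.0 mm maximum localization error, since centroiding a seed can beat the raw resolution.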

  15. A mobile phone-based retinal camera for portable wide field imaging.

    PubMed

    Maamari, Robi N; Keenan, Jeremy D; Fletcher, Daniel A; Margolis, Todd P

    2014-04-01

    Digital fundus imaging is used extensively in the diagnosis, monitoring and management of many retinal diseases. Access to fundus photography is often limited by patient morbidity, high equipment cost and shortage of trained personnel. Advancements in telemedicine methods and the development of portable fundus cameras have increased the accessibility of retinal imaging, but most of these approaches rely on separate computers for viewing and transmission of fundus images. We describe a novel portable handheld smartphone-based retinal camera capable of capturing high-quality, wide field fundus images. The use of the mobile phone platform creates a fully embedded system capable of acquisition, storage and analysis of fundus images that can be directly transmitted from the phone via the wireless telecommunication system for remote evaluation. PMID:24344230

  16. Binarization method based on evolution equation for document images produced by cameras

    NASA Astrophysics Data System (ADS)

    Wang, Yan; He, Chuanjiang

    2012-04-01

We present an evolution equation-based binarization method for document images produced by cameras. Unlike existing thresholding techniques, the idea behind our method is that a family of gradually binarized images is obtained as the solution of an evolution partial differential equation, starting from the original image. In our formulation, the evolution is controlled by a global force and a local force, both of which have opposite signs inside and outside the objects of interest in the original image. A simple finite difference scheme with a significantly larger time step is used to solve the evolution equation numerically; the desired binarization is typically obtained after only one or two iterations. Experimental results on 122 camera document images show that our method yields good visual quality and OCR performance.
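The evolution idea can be sketched numerically as below. The specific forces used here (deviation from the global mean and from a box-filtered local mean) are illustrative assumptions standing in for the forces defined in the paper; only the overall scheme (explicit time stepping, one or two iterations, then thresholding) follows the abstract.

```python
import numpy as np

def evolve_binarize(img, dt=0.2, iters=2, win=7):
    """Toy evolution-equation binarization: each pixel is pushed away
    from a global and a local mean, so the net force has opposite sign
    inside and outside dark text strokes."""
    u = img.astype(float)
    pad = win // 2
    for _ in range(iters):
        g_force = u - u.mean()                       # global force
        # local mean via an explicit box filter (kept simple on purpose)
        padded = np.pad(u, pad, mode='edge')
        local = np.zeros_like(u)
        for dy in range(win):
            for dx in range(win):
                local += padded[dy:dy + u.shape[0], dx:dx + u.shape[1]]
        local /= win * win
        l_force = u - local                          # local force
        u = u + dt * (g_force + l_force)             # explicit time step
    return (u > u.mean()).astype(np.uint8) * 255     # paper=255, ink=0

# A gray page with one dark stroke separates cleanly after two iterations.
page = np.full((20, 20), 200.0)
page[8:12, 4:16] = 50.0
result = evolve_binarize(page)
```
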

  17. In vitro near-infrared imaging of occlusal dental caries using germanium enhanced CMOS camera

    PubMed Central

    Lee, Chulsung; Darling, Cynthia L.; Fried, Daniel

    2011-01-01

    The high transparency of dental enamel in the near-infrared (NIR) at 1310-nm can be exploited for imaging dental caries without the use of ionizing radiation. The objective of this study was to determine whether the lesion contrast derived from NIR transillumination can be used to estimate lesion severity. Another aim was to compare the performance of a new Ge enhanced complementary metal-oxide-semiconductor (CMOS) based NIR imaging camera with the InGaAs focal plane array (FPA). Extracted human teeth (n=52) with natural occlusal caries were imaged with both cameras at 1310-nm and the image contrast between sound and carious regions was calculated. After NIR imaging, teeth were sectioned and examined using more established methods, namely polarized light microscopy (PLM) and transverse microradiography (TMR) to calculate lesion severity. Lesions were then classified into 4 categories according to the lesion severity. Lesion contrast increased significantly with lesion severity for both cameras (p<0.05). The Ge enhanced CMOS camera equipped with the larger array and smaller pixels yielded higher contrast values compared with the smaller InGaAs FPA (p<0.01). Results demonstrate that NIR lesion contrast can be used to estimate lesion severity. PMID:22162916
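The contrast between sound and carious regions mentioned above is commonly defined as a normalized intensity difference; the exact definition used in the study is not given in the abstract, so the Michelson-style form below is an assumption for illustration.

```python
def lesion_contrast(i_sound, i_lesion):
    # (I_sound - I_lesion) / I_sound: 0 means the lesion is invisible,
    # approaching 1 as the lesion blocks more transilluminated 1310-nm light.
    return (i_sound - i_lesion) / i_sound

print(lesion_contrast(2000.0, 500.0))  # → 0.75
```
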

  18. Shading correction of camera captured document image with depth map information

    NASA Astrophysics Data System (ADS)

    Wu, Chyuan-Tyng; Allebach, Jan P.

    2015-01-01

Camera modules have become more popular in consumer electronics and office products. As a consequence, people have many opportunities to use a camera-based device to record a hardcopy document in their daily lives. However, undesired shading is easily introduced into the captured document image, and this non-uniformity can degrade the readability of the contents. Some solutions have been developed to mitigate this artifact, but most are only suitable for particular types of documents. In this paper, we introduce a content-independent and shape-independent method that lessens the shading effects in captured document images. We reconstruct the image so that the result looks like a document image captured under a uniform lighting source. Our method utilizes the 3D depth map of the document surface and a look-up table strategy. We first discuss the model and the assumptions used in our approach; the process of creating and utilizing the look-up table is then described. We implement this algorithm with our prototype 3D scanner, which also uses a camera module to capture a 2D image of the object. Experimental results are presented to show the effectiveness of our method, with both flat and curved-surface document examples.
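The core correction step can be sketched as follows. In the paper the shading estimate comes from the 3D depth map via a look-up table; here a per-pixel illumination map is simply assumed to be available, which is a simplification of the authors' method.

```python
import numpy as np

def correct_shading(img, shading, white=245.0):
    """Reconstruct the page as if it were lit uniformly: divide out a
    shading map (assumed given; derived from the depth map in the paper)
    and rescale so that bare paper maps to `white`."""
    gain = white / np.maximum(shading.astype(float), 1.0)
    return np.clip(np.rint(img.astype(float) * gain), 0, 255).astype(np.uint8)
```

Applied to a blank page whose observed intensities equal the shading map itself, the output is a uniform value of 245, i.e., the shading is fully removed.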

  19. The image pretreatment based on the FPGA inside digital CCD camera

    NASA Astrophysics Data System (ADS)

    Tian, Rui; Liu, Yan-ying

    2009-07-01

For a space project, a digital CCD camera was required that could image clearly in a 1 lux low-light environment. The Sony ICX285AL CCD sensor was used in the camera, and the FPGA (Field Programmable Gate Array) chip XQR2V1000 was used as the timing generator and signal processor inside it. In the low-light environment, however, two kinds of noise become apparent as the camera's variable gain increases: dark-current noise in the image background, and vertical transfer noise. This paper introduces a real-time, FPGA-based method for eliminating this noise inside the camera. The causes and characteristics of the noise are analyzed. Several candidate filters for the dark-current noise were first proposed and then simulated in VC++ to compare their speed and effect; a Gaussian filter was chosen for its filtering performance. The vertical transfer noise occupies fixed columns in the image, and its behavior is stable: the gray value of a noise pixel is 16-20 less than that of the surrounding pixels. Based on these characteristics, a local median filter was used to remove the vertical noise. Finally, these algorithms were implemented in the FPGA chip inside the camera. Extensive experiments have shown that the pretreatment runs in real time and improves the camera's signal-to-noise ratio by 3-5 dB in the low-light environment.
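Because the vertical noise sits in fixed columns and is a roughly constant offset below its neighbors, a local median over horizontal neighbors recovers the true value. The sketch below is a software model of that filter (the FPGA implementation obviously differs); the window half-width is an assumption.

```python
import numpy as np

def remove_column_noise(img, noisy_cols, half=3):
    """Replace pixels in known noisy columns with the median of their
    horizontal neighbours: a software model of the local median filter
    used against the fixed-column vertical transfer noise."""
    out = img.astype(float).copy()
    cols = img.shape[1]
    for c in noisy_cols:
        lo, hi = max(0, c - half), min(cols, c + half + 1)
        # neighbourhood around column c, excluding column c itself
        neigh = np.delete(img[:, lo:hi].astype(float), c - lo, axis=1)
        out[:, c] = np.median(neigh, axis=1)
    return out.astype(img.dtype)

# A column depressed by 18 gray levels (as in the abstract) is restored.
frame = np.full((6, 12), 100, dtype=np.uint8)
frame[:, 5] = 82
clean = remove_column_noise(frame, [5])
```
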

  20. GNSS Carrier Phase Integer Ambiguity Resolution with Camera and Satellite images

    NASA Astrophysics Data System (ADS)

    Henkel, Patrick

    2015-04-01

Ambiguity resolution is the key to high-precision position and attitude determination with GNSS. However, ambiguity resolution for kinematic receivers becomes challenging in environments with substantial multipath, limited satellite availability and erroneous cycle slip corrections. There is a need for other sensors, e.g. inertial sensors, that allow an independent prediction of the position; the change of the predicted position over time can then be used for cycle slip detection and correction. In this paper, we provide a method to improve initial ambiguity resolution for RTK and PPP with vision-based position information. Camera images are correlated with geo-referenced aerial/satellite images to obtain independent absolute position information. This absolute position information is then coupled with the GNSS and INS measurements in an extended Kalman filter to estimate the position, velocity, acceleration, attitude, angular rates, code multipath and the biases of the accelerometers and gyroscopes. The camera and satellite images are matched based on characteristic image points (e.g. corners of street markers). We extract these characteristic image points from the camera images by performing the following steps: an inverse mapping (homogeneous projection) is applied to transform the camera images from the driver's perspective to a bird's-eye view. Subsequently, we detect the street markers by performing (a) a color transformation and reduction with adaptive brightness correction to focus on relevant features, (b) a subsequent morphological operation to enhance the structure recognition, (c) an edge and corner detection to extract feature points, and (d) a point matching of the corner points with a template to recognize the street markers. We verified the proposed method with two low-cost u-blox LEA 6T GPS receivers, the MPU9150 from Invensense, the ASCOS RTK corrections and a PointGrey camera. The results show very precise and seamless position and attitude estimates in an urban environment with substantial multipath.
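The inverse mapping from driver's perspective to bird's-eye view is a planar homography. A minimal sketch of applying such a 3x3 homography to feature points is given below; the matrices used are illustrative, not the calibration of the actual system.

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography (e.g., the inverse perspective mapping
    from driver's view to bird's-eye view) to an array of 2-D points."""
    pts = np.asarray(pts, float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # back to Cartesian

# Identity homography leaves points untouched.
print(warp_points(np.eye(3), [[100.0, 200.0]]))
```

In the real pipeline H would be estimated from the camera's extrinsics (or from four known ground points), and the warped corner points would then be matched against the geo-referenced template.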

  1. Image analysis techniques to estimate river discharge using time-lapse cameras in remote locations

    NASA Astrophysics Data System (ADS)

    Young, David S.; Hart, Jane K.; Martinez, Kirk

    2015-03-01

    Cameras have the potential to provide new data streams for environmental science. Improvements in image quality, power consumption and image processing algorithms mean that it is now possible to test camera-based sensing in real-world scenarios. This paper presents an 8-month trial of a camera to monitor discharge in a glacial river, in a situation where this would be difficult to achieve using methods requiring sensors in or close to the river, or human intervention during the measurement period. The results indicate diurnal changes in discharge throughout the year, the importance of subglacial winter water storage, and rapid switching from a "distributed" winter system to a "channelised" summer drainage system in May. They show that discharge changes can be measured with an accuracy that is useful for understanding the relationship between glacier dynamics and flow rates.

  2. MTF measurement and imaging quality evaluation of digital camera with slanted-edge method

    NASA Astrophysics Data System (ADS)

    Xiang, Chunchang; Chen, Xinhua; Chen, Yuheng; Zhou, Jiankang; Shen, Weimin

    2010-11-01

The Modulation Transfer Function (MTF) is the spatial frequency response of an imaging system and has become an objective figure of merit for evaluating the quality of both lenses and cameras. The slanted-edge method and its principle for measuring the MTF of a digital camera are introduced in this paper. A test setup was established and accompanying software developed for testing digital cameras. Measurements at different tilt angles of the knife edge are compared to assess the influence of the tilt angle. The knife-edge image is also carefully denoised to reduce the noise sensitivity of the measurement. Results obtained with the slanted-edge method are compared against the grating-target technique, and their deviation is analyzed.
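The computational core of the slanted-edge method, after the oversampled edge-spread function (ESF) has been assembled from the tilted edge, can be sketched as follows; the Hann window is a common but assumed choice for taming noise in the differentiation step.

```python
import numpy as np

def mtf_from_esf(esf):
    """Slanted-edge pipeline once the oversampled edge-spread function
    (ESF) is available: differentiate to the line-spread function (LSF),
    apply a window, and take the normalized |FFT| as the MTF."""
    lsf = np.diff(esf)
    lsf = lsf * np.hanning(len(lsf))   # taper to limit noise leakage
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]                # normalize so MTF(0) = 1

# An ideal step edge has an impulse LSF, hence a flat MTF of 1 at all
# frequencies; any real camera rolls off below this.
mtf = mtf_from_esf(np.concatenate([np.zeros(64), np.ones(64)]))
```
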

  3. Low-complexity camera digital signal imaging for video document projection system

    NASA Astrophysics Data System (ADS)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.

  4. Gamma camera calibration and validation for quantitative SPECT imaging with (177)Lu.

    PubMed

    D'Arienzo, M; Cazzato, M; Cozzella, M L; Cox, M; D'Andrea, M; Fazio, A; Fenwick, A; Iaccarino, G; Johansson, L; Strigari, L; Ungania, S; De Felice, P

    2016-06-01

Over the last few years (177)Lu has received considerable attention from the clinical nuclear medicine community thanks to its wide range of applications in molecular radiotherapy, especially in peptide-receptor radionuclide therapy (PRRT). In addition to short-range beta particles, (177)Lu emits low-energy gamma radiation at 113 keV and 208 keV that allows gamma camera quantitative imaging. Although quantitative imaging in molecular radiotherapy has been proven to be a key instrument for the assessment of therapeutic response, at present no generally accepted clinical quantitative imaging protocol exists and absolute quantification studies are usually based on individual initiatives. The aim of this work was to develop and evaluate an approach to gamma camera calibration for absolute quantification in tomographic imaging with (177)Lu. We assessed the gamma camera calibration factors for a Philips IRIX and a Philips AXIS gamma camera system using various reference geometries, both in air and in water. Images were corrected for the major effects that contribute to image degradation, i.e. attenuation, scatter and dead-time. We validated our method in non-reference geometry using an anthropomorphic torso phantom with the liver cavity uniformly filled with (177)LuCl3. Our results showed that calibration factors depend on the particular reference condition. In general, acquisitions performed with the IRIX gamma camera provided good results at 208 keV, with agreement within 5% for all geometries. The use of a Jaszczak 16 mL hollow sphere in water provided calibration factors capable of recovering the activity in anthropomorphic geometry within 1% for the 208 keV peak, for both gamma cameras. The point source provided the poorest results, most likely because scatter and attenuation correction are not incorporated in the calibration factor. However, for both gamma cameras all geometries provided calibration factors capable of recovering the activity in anthropomorphic geometry within about 10% (range -11.6% to +7.3%) for acquisitions at the 208 keV photopeak. As a general rule, scatter and attenuation play a much larger role at 113 keV than at 208 keV and are likely to hinder accurate absolute quantification. Acquisitions of only the (177)Lu main photopeak (208 keV) are therefore recommended in clinical practice. Preliminary results suggest that the gamma camera calibration factor can be assessed with a standard uncertainty below (or of the order of) 3% if activity is determined with equipment traceable to primary standards, accurate volume measurements are made, and an appropriate chemical carrier is used to ensure a homogeneous and stable solution during the measurements. PMID:27064195
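A calibration factor of the kind discussed above is, in essence, a corrected count rate per unit activity in a reference geometry. The sketch below shows that arithmetic with a simple dead-time correction; the numbers and the correction model are illustrative assumptions, not values from the study (attenuation and scatter corrections are assumed already applied to the counts).

```python
def calibration_factor(counts, t_acq_s, dead_time_frac, activity_mbq):
    """Gamma-camera calibration factor in counts/s per MBq for a
    reference geometry. `counts` is assumed already corrected for
    attenuation and scatter; dead time is corrected here."""
    rate = counts / t_acq_s
    rate /= (1.0 - dead_time_frac)   # dead-time correction
    return rate / activity_mbq

# Illustrative: 1.8e6 counts in 600 s at 2% dead time from 100 MBq.
print(round(calibration_factor(1.8e6, 600.0, 0.02, 100.0), 2))  # → 30.61
```
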

  5. The application of camera calibration in range-gated 3D imaging technology

    NASA Astrophysics Data System (ADS)

    Liu, Xiao-quan; Wang, Xian-wei; Zhou, Yan

    2013-09-01

Range-gated laser imaging was proposed in 1966 by L.F. Gillespie of the U.S. Army Night Vision Laboratory (NVL). Using a pulsed laser as the light source and an intensified charge-coupled device (ICCD) as the detector, and controlling the delay between the laser pulse and the gate, range-gated laser imaging can image slices of space while suppressing atmospheric backscatter, and thus detect targets effectively. Owing to constraints on key components such as narrow-pulse lasers and gated imaging devices, research progressed slowly over the following decades. Since the beginning of this century, as the hardware has matured, the technology has developed rapidly in fields such as night vision, underwater imaging, biomedical imaging and three-dimensional imaging, especially range-gated three-dimensional (3-D) laser imaging aimed at acquiring target spatial information. 3-D reconstruction is the process of recovering the visible surface geometry of 3-D objects from two-dimensional (2-D) images. Range-gated laser imaging can achieve gated imaging of a space slice to form a slice image, and in turn provide the distance information corresponding to that slice. But to invert this into 3-D spatial information, the imaging field of view of the system, i.e., its focal length, must be known; then, from the distance of each space slice, the spatial position corresponding to each pixel can be recovered. Camera calibration, which determines the camera's intrinsic and extrinsic parameters, is an indispensable step in 3-D reconstruction. To meet the technical requirements of range-gated 3-D imaging, this paper studies the calibration of the zoom lens system. After a comprehensive survey of camera calibration techniques, a classic line-based calibration method is selected. A one-to-one correspondence between the field of view and the focal length of the system is obtained, providing effective field-of-view information for matching the imaging field to the illumination field in range-gated 3-D imaging. Based on the experimental results, combined with depth-of-field theory, the application of camera calibration in range-gated 3-D imaging is further studied.
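The link between field of view and focal length that the calibration establishes rests on the pinhole projection relation. A minimal sketch with illustrative numbers (a hypothetical calibration target, not the paper's setup):

```python
def focal_length_px(target_width_mm, distance_mm, image_width_px):
    # Pinhole relation tying a zoom setting to a focal length:
    # X / Z = x_px / f  =>  f = x_px * Z / X
    return image_width_px * distance_mm / target_width_mm

# A 500 mm calibration target imaged 64 px wide from 10 m away:
print(focal_length_px(500.0, 10000.0, 64.0))  # → 1280.0
```

With the focal length known, each pixel's ray direction is fixed, and the gate-derived slice distance places the corresponding surface point in 3-D.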

  6. An interactive camera placement and visibility simulator for image-based VR applications

    NASA Astrophysics Data System (ADS)

    State, Andrei; Welch, Greg; Ilie, Adrian

    2006-02-01

    We describe an interactive software simulator that assists with the design of multi-camera setups for applications such as image-based virtual reality, three-dimensional reconstruction from still or video imagery, surveillance, etc. Instead of automating the camera placement process, our goal is to assist a user by means of a simulator that supports interactive placement and manipulation of multiple cameras within a pre-modeled three-dimensional environment. It provides a real-time 3D rendering of the environment, depicting the exact coverage of each camera (including indications of occluded and overlap regions) and the effective spatial resolution on the surfaces. The simulator can also indicate the dynamic coverage of pan-tilt-zoom cameras using "traces" to highlight areas that are reachable within a user-selectable interval. We describe the simulator, its underlying "engine" and its interface, and we show an example multi-camera setup for remote 3D medical consultation, including preliminary 3D reconstruction results.

  7. SAMI: the SCAO module for the E-ELT adaptive optics imaging camera MICADO

    NASA Astrophysics Data System (ADS)

    Clénet, Y.; Bernardi, P.; Chapron, F.; Gendron, E.; Rousset, G.; Hubert, Z.; Davies, R.; Thiel, M.; Tromp, N.; Genzel, R.

    2010-07-01

    SAMI, the SCAO module for the E-ELT adaptive optics imaging camera MICADO, could be used in the first years of operation of MICADO on the telescope, until MAORY is operational and coupled to MICADO. We present the results of the study made in the framework of the MICADO phase A to design and estimate the performance of this SCAO module.

  8. Geologic Analysis of the Surface Thermal Emission Images Taken by the VMC Camera, Venus Express

    NASA Astrophysics Data System (ADS)

    Basilevsky, A. T.; Shalygin, E. V.; Titov, D. V.; Markiewicz, W. J.; Scholten, F.; Roatsch, Th.; Fiethe, B.; Osterloh, B.; Michalik, H.; Kreslavsky, M. A.; Moroz, L. V.

    2010-03-01

Analysis of Venus Monitoring Camera 1-µm images and surface emission modeling examined the apparent emissivity at Chimon-mana tessera and showed that the emissivity of Tuulikki volcano is higher than that of the adjacent plains; Maat Mons showed no signature of ongoing volcanism.

  9. A multiple-plate, multiple-pinhole camera for X-ray gamma-ray imaging

    NASA Technical Reports Server (NTRS)

    Hoover, R. B.

    1971-01-01

Plates with identical patterns of precisely aligned pinholes constitute a lens system which, when rotated about the optical axis, produces a continuous high-resolution image of a low-energy X-ray or gamma-ray source. The camera has applications in radiation treatment and nuclear medicine.

  10. Hyperspectral imaging using a color camera and its application for pathogen detection

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper reports the results of a feasibility study for the development of a hyperspectral image recovery (reconstruction) technique using a RGB color camera and regression analysis in order to detect and classify colonies of foodborne pathogens. The target bacterial pathogens were the six represe...

  11. The trustworthy digital camera: Restoring credibility to the photographic image

    NASA Astrophysics Data System (ADS)

    Friedman, Gary L.

    1994-02-01

The increasing sophistication of computers has made digital manipulation of photographic images, as well as other digitally-recorded artifacts such as audio and video, incredibly easy to perform and increasingly difficult to detect. Today, every picture appearing in newspapers and magazines has been digitally altered to some degree, with the severity varying from the trivial (cleaning up 'noise' and removing distracting backgrounds) to the point of deception (articles of clothing removed, heads attached to other people's bodies, and the complete rearrangement of city skylines). As the power, flexibility, and ubiquity of image-altering computers continues to increase, the well-known adage that 'the photograph doesn't lie' will continue to become an anachronism. A solution to this problem comes from a concept called digital signatures, which incorporates modern cryptographic techniques to authenticate electronic mail messages. 'Authenticate' in this case means one can be sure that the message has not been altered, and that the sender's identity has not been forged. The technique can serve not only to authenticate images, but also to help the photographer retain and enforce copyright protection when the concept of 'electronic original' is no longer meaningful.

  12. The trustworthy digital camera: Restoring credibility to the photographic image

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L.

    1994-01-01

The increasing sophistication of computers has made digital manipulation of photographic images, as well as other digitally-recorded artifacts such as audio and video, incredibly easy to perform and increasingly difficult to detect. Today, every picture appearing in newspapers and magazines has been digitally altered to some degree, with the severity varying from the trivial (cleaning up 'noise' and removing distracting backgrounds) to the point of deception (articles of clothing removed, heads attached to other people's bodies, and the complete rearrangement of city skylines). As the power, flexibility, and ubiquity of image-altering computers continues to increase, the well-known adage that 'the photograph doesn't lie' will continue to become an anachronism. A solution to this problem comes from a concept called digital signatures, which incorporates modern cryptographic techniques to authenticate electronic mail messages. 'Authenticate' in this case means one can be sure that the message has not been altered, and that the sender's identity has not been forged. The technique can serve not only to authenticate images, but also to help the photographer retain and enforce copyright protection when the concept of 'electronic original' is no longer meaningful.

  13. The reconstruction of digital terrain model using panoramic camera images of Chang'E-3

    NASA Astrophysics Data System (ADS)

    Wang, F. F.; Liu, J. J.; Ren, X.; Wang, W. R.; Mu, L. L.; Tan, X.; Li, C. L.

    2014-04-01

The most direct and effective way of understanding planetary topography and morphology is to build an accurate 3D planetary terrain model. Stereo images taken by the two panoramic cameras (PCAM) installed on the recently launched Chang'E-3 rover can be an optimal data source for assessing the lunar landscape around the rover. This paper proposes a fast and efficient workflow to reconstruct, in real time, a high-resolution 3D lunar terrain model, including the Digital Elevation Model (DEM) and Digital Orthophoto Map (DOM), from the PCAM stereo images. We found that the residual coordinate errors between adjacent images of the mosaicked DOM were within 2 pixels, and that the distance deviation from the topographic data generated from the descent camera images was small. We therefore conclude that this terrain model can satisfy the needs of identifying exploration targets and planning the rover's traverse routes.
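The DEM generation from a stereo camera pair ultimately rests on the rectified-stereo triangulation relation. The sketch below uses an illustrative focal length and baseline, not the actual PCAM parameters:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    # Standard rectified-stereo relation for triangulating terrain
    # points from a stereo pair: Z = f * B / d
    return f_px * baseline_m / disparity_px

# Illustrative values: f = 1200 px, 0.27 m baseline, 15 px disparity.
print(depth_from_disparity(1200.0, 0.27, 15.0))
```

Each matched pixel pair yields one 3-D point this way; gridding those points produces the DEM, and reprojecting the imagery onto it produces the DOM.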

  14. Film cameras or digital sensors? The challenge ahead for aerial imaging

    USGS Publications Warehouse

    Light, D.L.

    1996-01-01

Cartographic aerial cameras continue to play the key role in producing quality products for the aerial photography business, and specifically for the National Aerial Photography Program (NAPP). One NAPP photograph taken with cameras capable of 39 lp/mm system resolution can contain the equivalent of 432 million pixels at 11 µm spot size, and the cost is less than $75 per photograph to scan and output the pixels on a magnetic storage medium. On the digital side, solid state charge coupled device linear and area arrays can yield quality resolution (7 to 12 µm detector size) and a broader dynamic range. If linear arrays are to compete with film cameras, they will require precise attitude and positioning of the aircraft so that the lines of pixels can be unscrambled and put into a suitable homogeneous scene that is acceptable to an interpreter. Area arrays need to be much larger than currently available to image scenes competitive in size with film cameras. Analysis of the relative advantages and disadvantages of the two systems show that the analog approach is more economical at present. However, as arrays become larger, attitude sensors become more refined, global positioning system coordinate readouts become commonplace, and storage capacity becomes more affordable, the digital camera may emerge as the imaging system for the future. Several technical challenges must be overcome if digital sensors are to advance to where they can support mapping, charting, and geographic information system applications.
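The 432-million-pixel figure can be checked with quick arithmetic, assuming the standard 9 × 9 inch cartographic aerial film format (the frame size is an assumption; the abstract does not state it):

```python
# Quick check of the pixel-count figure quoted above.
frame_mm = 9 * 25.4                  # 9 inch frame side = 228.6 mm
pixels_per_side = frame_mm / 0.011   # 11 µm spot size = 0.011 mm
total = pixels_per_side ** 2
print(round(total / 1e6))            # → 432 (million pixels)
```
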

  15. The iQID camera: An ionizing-radiation quantum imaging detector

    PubMed Central

    Miller, Brian W.; Gregory, Stephanie J.; Fuller, Erin S.; Barrett, Harrison H.; Barber, H. Bradford; Furenlid, Lars R.

    2015-01-01

We have developed and tested a novel, ionizing-radiation Quantum Imaging Detector (iQID). This scintillation-based detector was originally developed as a high-resolution gamma-ray imager, called BazookaSPECT, for use in single-photon emission computed tomography (SPECT). Recently, we have investigated the detector’s response and imaging potential with other forms of ionizing radiation including alpha, neutron, beta, and fission fragment particles. The confirmed response to this broad range of ionizing radiation has prompted its new title. The principle of operation of the iQID camera involves coupling a scintillator to an image intensifier. The scintillation light generated by particle interactions is optically amplified by the intensifier and then re-imaged onto a CCD/CMOS camera sensor. The intensifier provides sufficient optical gain that practically any CCD/CMOS camera can be used to image ionizing radiation. The spatial location and energy of individual particles are estimated on an event-by-event basis in real time using image analysis algorithms on high-performance graphics processing hardware. Distinguishing features of the iQID camera include portability, large active areas, excellent detection efficiency for charged particles, and high spatial resolution (tens of microns). Although modest, iQID has energy resolution that is sufficient to discriminate between particles. Additionally, spatial features of individual events can be used for particle discrimination. An important iQID imaging application that has recently been developed is real-time, single-particle digital autoradiography. We present the latest results and discuss potential applications. PMID:26166921
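The event-by-event position estimation described above can be illustrated with a minimal centroiding sketch: threshold the camera frame and take the intensity-weighted centroid of the scintillation blob. The actual iQID pipeline uses more sophisticated estimators on GPU hardware; this is a simplified stand-in.

```python
import numpy as np

def event_centroid(frame, threshold):
    """Estimate one particle's interaction position as the intensity-
    weighted centroid of the pixels above threshold."""
    mask = frame > threshold
    w = np.where(mask, frame.astype(float), 0.0)
    total = w.sum()
    ys, xs = np.indices(frame.shape)
    return (xs * w).sum() / total, (ys * w).sum() / total

# A single bright scintillation flash at (x=10, y=20):
frame = np.zeros((32, 32))
frame[20, 10] = 100.0
x, y = event_centroid(frame, 10.0)
```
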

  16. Quantitative Evaluation of Surface Color of Tomato Fruits Cultivated in Remote Farm Using Digital Camera Images

    NASA Astrophysics Data System (ADS)

    Hashimoto, Atsushi; Suehara, Ken-Ichiro; Kameoka, Takaharu

To measure quantitative surface color information of agricultural products together with ambient information during cultivation, a color calibration method for digital camera images and a Web-based remote color-imaging monitoring system were developed. Single-lens reflex and web digital cameras were used for image acquisition. Images of tomatoes through the post-ripening process were taken by digital camera both in the standard image acquisition system and under field conditions from morning to evening. Several kinds of images were acquired with a standard RGB color chart set up just behind the tomato fruit on a black matte, and a color calibration was carried out. The influence of sunlight could be experimentally eliminated, and the calibrated color information consistently agreed with the standard values acquired in the system through the post-ripening process. Furthermore, the surface color change of tomatoes on the tree in a greenhouse was remotely monitored during maturation using digital cameras equipped with the Field Server. The acquired digital color images were sent from the Farm Station to the BIFE Laboratory of Mie University via VPN. The time behavior of the tomato surface color change during the maturing process could be measured using a color parameter calculated from the calibrated color images, along with the ambient atmospheric record. This study is an important step in developing surface color analysis both for simple and rapid evaluation of crop vigor in the field and for constructing an ambient, networked remote monitoring system for food security, precision agriculture, and agricultural research.
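Calibration against a standard RGB chart is commonly done by fitting a linear color-correction matrix by least squares; the sketch below assumes a purely linear model (no offset or gamma term), which is a simplification of whatever transform the authors used.

```python
import numpy as np

def fit_color_matrix(measured_rgb, reference_rgb):
    """Least-squares 3x3 matrix M such that measured @ M ≈ reference:
    the linear part of calibrating camera RGB against a standard chart."""
    M, *_ = np.linalg.lstsq(np.asarray(measured_rgb, float),
                            np.asarray(reference_rgb, float), rcond=None)
    return M

def calibrate(rgb, M):
    # Apply the fitted correction to measured RGB values.
    return np.asarray(rgb, float) @ M
```

Fitted once per lighting condition using the chart patches visible in each frame, the same matrix then corrects the tomato pixels, which is how the influence of changing sunlight can be removed.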

  17. The iQID camera: An ionizing-radiation quantum imaging detector

    NASA Astrophysics Data System (ADS)

    Miller, Brian W.; Gregory, Stephanie J.; Fuller, Erin S.; Barrett, Harrison H.; Bradford Barber, H.; Furenlid, Lars R.

    2014-12-01

    We have developed and tested a novel, ionizing-radiation Quantum Imaging Detector (iQID). This scintillation-based detector was originally developed as a high-resolution gamma-ray imager, called BazookaSPECT, for use in single-photon emission computed tomography (SPECT). Recently, we have investigated the detector's response and imaging potential with other forms of ionizing radiation including alpha, neutron, beta, and fission fragment particles. The confirmed response to this broad range of ionizing radiation has prompted its new title. The principle of operation of the iQID camera involves coupling a scintillator to an image intensifier. The scintillation light generated by particle interactions is optically amplified by the intensifier and then re-imaged onto a CCD/CMOS camera sensor. The intensifier provides sufficient optical gain that practically any CCD/CMOS camera can be used to image ionizing radiation. The spatial location and energy of individual particles are estimated on an event-by-event basis in real time using image analysis algorithms on high-performance graphics processing hardware. Distinguishing features of the iQID camera include portability, large active areas, excellent detection efficiency for charged particles, and high spatial resolution (tens of microns). Although modest, the iQID's energy resolution is sufficient to discriminate between particles. Additionally, spatial features of individual events can be used for particle discrimination. An important iQID imaging application that has recently been developed is real-time, single-particle digital autoradiography. We present the latest results and discuss potential applications.
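    The event-by-event position and energy estimation described above can be sketched as thresholding followed by an intensity-weighted centroid. This minimal single-event illustration is our own sketch, not the authors' GPU implementation:

```python
import numpy as np

def estimate_event(frame, threshold):
    """Estimate one scintillation event's sub-pixel position
    (intensity-weighted centroid of above-threshold pixels) and an
    energy surrogate (their summed signal). Assumes at most one
    flash per frame for simplicity."""
    ys, xs = np.nonzero(frame > threshold)
    if ys.size == 0:
        return None  # no event in this frame
    w = frame[ys, xs].astype(float)
    energy = w.sum()
    return {
        "x": float((xs * w).sum() / energy),
        "y": float((ys * w).sum() / energy),
        "energy": float(energy),
    }
```

    A real pipeline would additionally label multiple clusters per frame and use the cluster shape for the particle discrimination the abstract mentions.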

  18. High frame rate CCD cameras with fast optical shutters for military and medical imaging applications

    SciTech Connect

    King, N.S.P.; Albright, K.; Jaramillo, S.A.; McDonald, T.E.; Yates, G.J.; Turko, B.T.

    1994-09-01

    Los Alamos National Laboratory has designed and prototyped high-frame-rate intensified/shuttered Charge-Coupled-Device (CCD) cameras capable of operating at kilohertz frame rates (non-interlaced mode) with optical shutters capable of acquiring nanosecond-to-microsecond exposures each frame. These cameras utilize an Interline Transfer CCD, Loral Fairchild CCD-222 with 244 × 380 pixels operated at pixel rates approaching 100 MHz. Initial prototype designs demonstrated single-port serial readout rates exceeding 3.97 kilohertz with greater than 5 lp/mm spatial resolution at shutter speeds as short as 5 ns. Readout was achieved by using a truncated format of 128 × 128 pixels by partial masking of the CCD and then subclocking the array at approximately a 65 MHz pixel rate. Shuttering was accomplished with a proximity-focused microchannel plate (MCP) image intensifier (MCPII) that incorporated a high-strip-current MCP and a design modification for high-speed stripline gating geometry to provide both fast shuttering and high repetition rate capabilities. Later camera designs use a close-packed quadruple head geometry fabricated using an array of four separate CCDs (pseudo 4-port device). This design provides four video outputs with optional parallel or time-phased sequential readout modes. The quad head format was designed with flexibility for coupling to various image intensifier configurations: individual intensifiers for each CCD imager; a single intensifier with fiber-optic or lens/prism-coupled fanout of the input image shared by the four CCD imagers; or a large-diameter phosphor screen of a gateable framing-type intensifier for time-sequential relaying of a complete new input image to each CCD imager. Camera designs and their potential use in ongoing military and medical time-resolved imaging applications are discussed.
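    The quoted readout rate follows directly from the truncated format and the subclocked pixel rate; a quick check:

```python
pixel_rate_hz = 65e6          # subclocked pixel rate quoted in the abstract
frame_pixels = 128 * 128      # truncated readout format
frame_rate_hz = pixel_rate_hz / frame_pixels
print(round(frame_rate_hz))   # 3967, i.e. the quoted ~3.97 kHz
```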

  19. The iQID Camera An Ionizing-Radiation Quantum Imaging Detector

    SciTech Connect

    Miller, Brian W.; Gregory, Stephanie J.; Fuller, Erin S.; Barrett, Harrison H.; Barber, Bradford H.; Furenlid, Lars R.

    2014-06-11

    We have developed and tested a novel, ionizing-radiation Quantum Imaging Detector (iQID). This scintillation-based detector was originally developed as a high-resolution gamma-ray imager, called BazookaSPECT, for use in single-photon emission computed tomography (SPECT). Recently, we have investigated the detector's response and imaging potential with other forms of ionizing radiation including alpha, neutron, beta, and fission fragment particles. The detector's response to a broad range of ionizing radiation has prompted its new title. The principle of operation of the iQID camera involves coupling a scintillator to an image intensifier. The scintillation light generated by particle interactions is optically amplified by the intensifier and then re-imaged onto a CCD/CMOS camera sensor. The intensifier provides sufficient optical gain that practically any CCD/CMOS camera can be used to image ionizing radiation. Individual particles are identified and their spatial position (to sub-pixel accuracy) and energy are estimated on an event-by-event basis in real time using image analysis algorithms on high-performance graphics processing hardware. Distinguishing features of the iQID camera include portability, large active areas, high sensitivity, and high spatial resolution (tens of microns). Although modest, the iQID's energy resolution is sufficient to discriminate between particles. Additionally, spatial features of individual events can be used for particle discrimination. An important iQID imaging application that has recently been developed is single-particle, real-time digital autoradiography. We present the latest results and discuss potential applications.

  20. Subtractive imaging in confocal scanning microscopy using a CCD camera as a detector.

    PubMed

    Sánchez-Ortiga, Emilio; Sheppard, Colin J R; Saavedra, Genaro; Martínez-Corral, Manuel; Doblas, Ana; Calatayud, Arnau

    2012-04-01

    We report a scheme for the detector system of confocal microscopes in which the pinhole and a large-area detector are substituted by a CCD camera. The numerical integration of the intensities acquired by the active pixels emulates the signal passing through the pinhole. We demonstrate the imaging capability and the optical sectioning of the system. Subtractive-imaging confocal microscopy can be implemented in a simple manner, providing superresolution and improving optical sectioning. PMID:22466221

  1. 3D motion artifact compensation in CT image with depth camera

    NASA Astrophysics Data System (ADS)

    Ko, Youngjun; Baek, Jongduk; Shim, Hyunjung

    2015-02-01

    Computed tomography (CT) is a medical imaging technology that uses computer-processed X-ray projections to acquire tomographic images, or slices, of specific organs of the body. A motion artifact caused by patient motion is a common problem in CT systems and may introduce undesirable artifacts in CT images. This paper analyzes the critical problems in motion artifacts and proposes a new CT system for motion artifact compensation. We employ depth cameras to capture the patient motion and account for it in the CT image reconstruction. In this way, we achieve a significant improvement in motion artifact compensation that is not possible with previous techniques.

  2. Preliminary experience with small animal SPECT imaging on clinical gamma cameras.

    PubMed

    Aguiar, P; Silva-Rodríguez, J; Herranz, M; Ruibal, A

    2014-01-01

    The traditional lack of techniques suitable for in vivo imaging has induced a great interest in molecular imaging for preclinical research. Nevertheless, its use has spread slowly due to the difficulty of justifying the high cost of the current dedicated preclinical scanners. An alternative for lowering the costs is to repurpose old clinical gamma cameras to be used for preclinical imaging. In this paper we assess the performance of a portable device that works coupled to a single-head clinical gamma camera, and we present our preliminary experience in several small animal applications. Our findings, based on phantom experiments and animal studies, provided an image quality, in terms of contrast-noise trade-off, comparable to dedicated preclinical pinhole-based scanners. We feel that our portable device offers an opportunity to recycle the many clinical gamma cameras already available in nuclear medicine departments for small animal SPECT imaging, and we hope that it can contribute to spreading the use of preclinical imaging within institutions on tight budgets. PMID:24963478

  3. Real-time full-field photoacoustic imaging using an ultrasonic camera

    NASA Astrophysics Data System (ADS)

    Balogun, Oluwaseyi; Regez, Brad; Zhang, Hao F.; Krishnaswamy, Sridhar

    2010-03-01

    A photoacoustic imaging system that incorporates a commercial ultrasonic camera for real-time imaging of two-dimensional (2-D) projection planes in tissue at video rate (30 Hz) is presented. The system uses a Q-switched frequency-doubled Nd:YAG pulsed laser for photoacoustic generation. The ultrasonic camera consists of a 2-D 12×12 mm CCD chip with 120×120 piezoelectric sensing elements used for detecting the photoacoustic pressure distribution radiated from the target. An ultrasonic lens system is placed in front of the chip to collect the incoming photoacoustic waves, providing the ability for focusing and imaging at different depths. Compared with other existing photoacoustic imaging techniques, the camera-based system is attractive because it is relatively inexpensive and compact, and it can be tailored for real-time clinical imaging applications. Experimental results detailing the real-time photoacoustic imaging of rubber strings and buried absorbing targets in chicken breast tissue are presented, and the spatial resolution of the system is quantified.

  4. High-speed camera with real time processing for frequency domain imaging

    PubMed Central

    Shia, Victor; Watt, David; Faris, Gregory W.

    2011-01-01

    We describe a high-speed camera system for frequency domain imaging suitable for applications such as in vivo diffuse optical imaging and fluorescence lifetime imaging. 14-bit images are acquired at 2 gigapixels per second and analyzed with real-time pipeline processing using field programmable gate arrays (FPGAs). Performance of the camera system has been tested both for RF-modulated laser imaging in combination with a gain-modulated image intensifier and a simpler system based upon an LED light source. System amplitude and phase noise are measured and compared against theoretical expressions in the shot noise limit presented for different frequency domain configurations. We show the camera itself is capable of shot noise limited performance for amplitude and phase in as little as 3 ms, and when used in combination with the intensifier the noise levels are nearly shot noise limited. The best phase noise in a single pixel is 0.04 degrees for a 1 s integration time. PMID:21750770
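    A standard way to recover per-pixel amplitude and phase from frames acquired at stepped demodulation phases is homodyne I/Q demodulation. The sketch below illustrates that principle only; it is not the authors' FPGA pipeline:

```python
import numpy as np

def demodulate(frames):
    """Estimate per-pixel amplitude and phase from N frames taken at
    evenly spaced demodulation phases (homodyne phase stepping).
    frames: array of shape (N, H, W)."""
    frames = np.asarray(frames, dtype=float)
    n = frames.shape[0]
    phases = 2 * np.pi * np.arange(n) / n
    # Project onto the in-phase and quadrature reference waveforms.
    i = np.tensordot(np.cos(phases), frames, axes=1) * 2 / n
    q = np.tensordot(np.sin(phases), frames, axes=1) * 2 / n
    return np.hypot(i, q), np.arctan2(q, i)  # amplitude, phase
```

    With evenly spaced phases the DC (unmodulated) component cancels exactly, which is why only the modulated signal survives in the amplitude and phase images.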

  5. Evaluation of Correction Methods of Chromatic Aberration in Digital Camera Images

    NASA Astrophysics Data System (ADS)

    Matsuoka, R.; Asonuma, K.; Takahashi, G.; Danjo, T.; Hirana, K.

    2012-07-01

    This paper reports an experiment conducted to evaluate correction methods of chromatic aberrations in images acquired by a nonmetric digital camera. The chromatic aberration correction methods evaluated in the experiment are classified into two kinds. One is the method to correct image coordinates by using camera calibration results of color-separated images. The other is the method based on the assumption that the magnitude of chromatic aberrations can be expressed by a function of a radial distance from the center of an image frame. The former is classified further into five types according to the difference of orientation parameters common to all colors. The latter is classified further into three types according to the order of the correction function. We adopt a linear function, a quadratic function and a cubic function of the radial distance as a correction function. We utilize a set of 16 convergent images of a white sheet with 10 by 10 black filled circles to carry out camera calibration and estimate unknown coefficients in the correction function by means of least squares adjustment. We evaluate the chromatic aberration correction methods by using a normal image of a white sheet with 14 by 10 black filled circles. From the experiment results, we conclude that the method based on the assumption that the magnitude of chromatic aberrations can be expressed by a cubic function of the radial distance is the best method of the evaluated methods, and would be able to correct chromatic aberrations satisfactorily in many cases.
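    The radial-function approach can be sketched as a least-squares polynomial fit with no constant term (zero aberration at the image center). This is an illustrative version under that assumption, not the authors' adjustment code:

```python
import numpy as np

def fit_radial_correction(r, displacement, order=3):
    """Least-squares fit of chromatic displacement as a polynomial in
    radial distance r, omitting the constant term so the correction
    vanishes at the image center."""
    r = np.asarray(r, dtype=float)
    A = np.column_stack([r ** k for k in range(1, order + 1)])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(displacement, float), rcond=None)
    return coeffs

def correct(r, coeffs):
    """Evaluate the fitted correction at radial distances r."""
    r = np.asarray(r, dtype=float)
    A = np.column_stack([r ** k for k in range(1, len(coeffs) + 1)])
    return A @ coeffs
```

    The measured displacements would come from the color-separated circle-target coordinates; order=3 corresponds to the cubic function the paper found best.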

  6. Wide Field Camera 3: A Powerful New Imager for the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Kimble, Randy

    2008-01-01

    Wide Field Camera 3 (WFC3) is a powerful UV/visible/near-infrared camera in development for installation into the Hubble Space Telescope during upcoming Servicing Mission 4. WFC3 provides two imaging channels. The UVIS channel incorporates a 4096 x 4096 pixel CCD focal plane with sensitivity from 200 to 1000 nm. The IR channel features a 1024 x 1024 pixel HgCdTe focal plane covering 850 to 1700 nm. We report here on the design of the instrument, the performance of its flight detectors, results of the ground test and calibration program, and the plans for the Servicing Mission installation and checkout.

  7. Development of a gas leak detection method based on infrared spectrum imaging utilizing microbolometer camera

    NASA Astrophysics Data System (ADS)

    Sakagami, Takahide; Anzai, Hiroaki; Kubo, Shiro

    2011-05-01

    Development of an early gas leak detection system is essential for the safety of energy storage tank fields and chemical plants. Contact-type conventional gas sensors are not suitable for remote surveillance of gas leakage over a wide area. Infrared cameras have been utilized for gas leak detection; however, they are limited to detecting particular gases. In this study a gas leak identification system, which enables us to detect gas leakage and to identify gas type and density, is developed based on an infrared spectrum imaging system utilizing a low-cost and compact microbolometer infrared camera. Feasibility of the proposed system was demonstrated by experimental results on identification of hydrofluorocarbon gas.

  8. Temporally consistent virtual camera generation from stereo image sequences

    NASA Astrophysics Data System (ADS)

    Fox, Simon R.; Flack, Julien; Shao, Juliang; Harman, Phil

    2004-05-01

    The recent emergence of auto-stereoscopic 3D viewing technologies has increased demand for the creation of 3D video content. A range of glasses-free multi-viewer screens have been developed that require as many as 9 views generated for each frame of video. This presents difficulties in both view generation and transmission bandwidth. This paper examines the use of stereo video capture as a means to generate multiple scene views via disparity analysis. A machine learning approach is applied to learn relationships between disparity generated depth information and source footage, and to generate depth information in a temporally smooth manner for both left and right eye image sequences. A view morphing approach to multiple view rendering is described which provides an excellent 3D effect on a range of glasses-free displays, while providing robustness to inaccurate stereo disparity calculations.

  9. MONICA: a compact, portable dual gamma camera system for mouse whole-body imaging

    SciTech Connect

    Choyke, Peter L.; Xia, Wenze; Seidel, Jurgen; Kakareka, John W.; Pohida, Thomas J.; Milenic, Diane E.; Proffitt, James; Majewski, Stan; Weisenberger, Andrew G.; Green, Michael V.

    2010-04-01

    Introduction: We describe a compact, portable dual-gamma camera system (named "MONICA" for MObile Nuclear Imaging CAmeras) for visualizing and analyzing the whole-body biodistribution of putative diagnostic and therapeutic single-photon-emitting radiotracers in animals the size of mice. Methods: Two identical, miniature pixelated NaI(Tl) gamma cameras were fabricated and installed "looking up" through the tabletop of a compact portable cart. Mice are placed directly on the tabletop for imaging. Camera imaging performance was evaluated with phantoms, and field performance was evaluated in a weeklong In-111 imaging study performed in a mouse tumor xenograft model. Results: Tc-99m performance measurements, using a photopeak energy window of 140 keV ± 10%, yielded the following results: spatial resolution (FWHM at 1 cm), 2.2 mm; sensitivity, 149 cps (counts per second)/MBq (5.5 cps/μCi); energy resolution (FWHM, full width at half maximum), 10.8%; count rate linearity (count rate vs. activity), r² = 0.99 for 0-185 MBq (0-5 mCi) in the field of view (FOV); spatial uniformity, <3% count rate variation across the FOV. Tumor and whole-body distributions of the In-111 agent were well visualized in all animals in 5-min images acquired throughout the 168-h study period. Conclusion: Performance measurements indicate that MONICA is well suited to whole-body single-photon mouse imaging. The field study suggests that inter-device communications and user-oriented interfaces included in the MONICA design facilitate use of the system in practice. We believe that MONICA may be particularly useful early in the (cancer) drug development cycle, where basic whole-body biodistribution data can direct future development of the agent under study and where logistical factors, e.g., limited imaging space, portability and, potentially, cost are important.
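    The two sensitivity figures quoted above are the same number in different units (1 μCi = 37 kBq = 0.037 MBq); a quick check:

```python
cps_per_mbq = 149.0            # quoted sensitivity in cps/MBq
mbq_per_uci = 0.037            # 1 uCi = 0.037 MBq
cps_per_uci = cps_per_mbq * mbq_per_uci
print(round(cps_per_uci, 1))   # 5.5, matching the quoted cps/uCi value
```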

  10. Mixel camera--a new push-broom camera concept for high spatial resolution keystone-free hyperspectral imaging.

    PubMed

    Høye, Gudrun; Fridman, Andrei

    2013-05-01

    Current high-resolution push-broom hyperspectral cameras introduce keystone errors to the captured data. Efforts to correct these errors in hardware severely limit the optical design, in particular with respect to light throughput and spatial resolution, while at the same time the residual keystone often remains large. The mixel camera solves this problem by combining a hardware component--an array of light mixing chambers--with a mathematical method that restores the hyperspectral data to its keystone-free form, based on the data that was recorded onto the sensor with large keystone. Virtual Camera software, developed specifically for this purpose, was used to compare the performance of the mixel camera to traditional cameras that correct keystone in hardware. The mixel camera can collect at least four times more light than most current high-resolution hyperspectral cameras, and simulations have shown that the mixel camera will be photon-noise limited--even in bright light--with a significantly improved signal-to-noise ratio compared to traditional cameras. A prototype has been built and is being tested. PMID:23669962

  11. Suppression of bright objects using a spatial light modulator when imaging with CCD- and image intensified cameras

    NASA Astrophysics Data System (ADS)

    Groeder, Torbjoern

    1993-01-01

    The operation of CCD (Charge Coupled Device) and intensified cameras is described with emphasis on the problems caused by local overexposure of the image sensor. To reduce the amount of blooming in the image, a transmissive graphic Liquid Crystal Display (LCD) operating as a spatial light modulator was placed in front of the cameras, and the transmission of the display elements was dynamically controlled in real time from a Personal Computer (PC). By activating the LCD elements in front of strongly illuminated areas of the image sensor, improvement of the image quality was demonstrated. Various types of LCDs were studied to learn which ones would be best suited for this particular application, and this work may suggest a new application area for LCDs.

  12. Response-initiated imaging of operant behavior using a digital camera.

    PubMed

    Iversen, Iver H

    2002-05-01

    A miniature digital camera, QuickCam Pro 3000, intended for use with video e-mail, was modified so that snapshots were triggered by operant behavior emitted in a standard experimental chamber. With only minor modification, the manual shutter button on the camera was replaced with a simple switch closure via an I/O interface controlled by a PC computer. When the operant behavior activated the I/O switch, the camera took a snapshot of the subject's behavior at that moment. To illustrate the use of the camera, a simple experiment was designed to examine stereotypy and variability in topography of operant behavior under continuous reinforcement and extinction in 6 rats using food pellets as reinforcement. When a rat operated an omnidirectional pole suspended from the ceiling, it also took a picture of the topography of its own behavior at that moment. In a single session after shaping of pole movement (if necessary), blocks of continuous reinforcement, in which each response was reinforced, alternated with blocks of extinction (no reinforcement), with each block ending when 20 responses had occurred. The software supplied with the camera automatically stored each image and named image files successively within a session. The software that controlled the experiment also stored quantitative data regarding the operant behavior such as consecutive order, temporal location within the session, and response duration. This paper describes how the two data types (image information and numerical performance characteristics) can be combined for visual analysis. The experiment illustrates in images how response topography changes during shaping of pole movement, how response topography quickly becomes highly stereotyped during continuous reinforcement, and how response variability increases during extinction.
The method of storing digital response-initiated snapshots should be useful for a variety of experimental situations that are intended to examine behavior change and topography. PMID:12083681

  13. Advanced High-Speed Framing Camera Development for Fast, Visible Imaging Experiments

    SciTech Connect

    Amy Lewis, Stuart Baker, Brian Cox, Abel Diaz, David Glass, Matthew Martin

    2011-05-11

    The advances in high-voltage switching developed in this project allow a camera user to rapidly vary the number of output frames from 1 to 25. A high-voltage, variable-amplitude pulse train shifts the deflection location to the new frame location during the interlude between frames, making multiple frame counts and locations possible. The final deflection circuit deflects to five different frame positions per axis, including the center position, making for a total of 25 frames. To create the preset voltages, electronically adjustable ±500 V power supplies were chosen. Digital-to-analog converters provide digital control of the supplies. The power supplies are clamped to ±400 V so as not to exceed the voltage ratings of the transistors. A field-programmable gate array (FPGA) receives the trigger signal and calculates the combination of plate voltages for each frame. The interframe time and number of frames are specified by the user, but are limited by the camera electronics. The variable-frame circuit shifts the plate voltages of the first frame to those of the second frame during the user-specified interframe time. Designed around an electrostatic image tube, a framing camera images the light present during each frame (at the photocathode) onto the tube’s phosphor. The phosphor persistence allows the camera to display multiple frames on the phosphor at one time. During this persistence, a CCD camera is triggered and the analog image is collected digitally. The tube functions by converting photons to electrons at the negatively charged photocathode. The electrons move quickly toward the more positive charge of the phosphor. Two sets of deflection plates skew the electron’s path in horizontal and vertical (x axis and y axis, respectively) directions. Hence, each frame’s electrons bombard the phosphor surface at a controlled location defined by the voltages on the deflection plates.
To prevent the phosphor from being exposed between frames, the image tube is gated off between exposures.

  14. A portable device for small animal SPECT imaging in clinical gamma-cameras

    NASA Astrophysics Data System (ADS)

    Aguiar, P.; Silva-Rodríguez, J.; González-Castaño, D. M.; Pino, F.; Sánchez, M.; Herranz, M.; Iglesias, A.; Lois, C.; Ruibal, A.

    2014-07-01

    Molecular imaging has been reshaping clinical practice in recent decades, providing practitioners with non-invasive ways to obtain functional in-vivo information on a diversity of relevant biological processes. The use of molecular imaging techniques in preclinical research is equally beneficial, but spreads more slowly due to the difficulty of justifying a costly investment dedicated only to animal scanning. An alternative for lowering the costs is to repurpose parts of old clinical scanners to build new preclinical ones. Following this trend, we have designed, built, and characterized the performance of a portable system that can be attached to a clinical gamma-camera to make a preclinical single photon emission computed tomography scanner. Our system offers an image quality comparable to commercial systems at a fraction of their cost, and can be used with any existing gamma-camera with just an adaptation of the reconstruction software.

  15. Individual camera identification using correlation of fixed pattern noise in image sensors.

    PubMed

    Kurosawa, Kenji; Kuroki, Kenro; Akiba, Norimitsu

    2009-05-01

    This paper presents results of experiments related to individual video camera identification using a correlation coefficient of fixed pattern noise (FPN) in image sensors. Five color charge-coupled device (CCD) modules of the same brand were examined. Images were captured using a 12-bit monochrome video capture board and stored in a personal computer. For each module, 100 frames were captured. They were integrated to obtain FPN. The results show that a specific CCD module was distinguished among the five modules by analyzing the normalized correlation coefficient. The temporal change of the correlation coefficient during several days had only a negligible effect on identifying the modules. Furthermore, a positive relation was found between the correlation coefficient of the same modules and the number of frames that were used for image integration. Consequently, precise individual camera identification is enhanced by acquisition of as many frames as possible. PMID:19302379
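    The FPN-integration and correlation procedure can be sketched as follows. This is an illustrative version assuming flat-scene frames, not the authors' exact processing:

```python
import numpy as np

def estimate_fpn(frames):
    """Estimate fixed pattern noise by integrating (averaging) many
    frames of a flat scene: temporal noise averages out while the
    sensor's fixed pattern remains."""
    mean_frame = np.mean(np.asarray(frames, dtype=float), axis=0)
    return mean_frame - mean_frame.mean()

def normalized_correlation(a, b):
    """Normalized correlation coefficient between two FPN estimates."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))
```

    A high coefficient between two independently integrated FPN estimates indicates the same camera module, which is the identification criterion the abstract describes; integrating more frames suppresses temporal noise and sharpens the decision.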

  16. Validation of spectral sky radiance derived from all-sky camera images - a case study

    NASA Astrophysics Data System (ADS)

    Tohsing, K.; Schrempf, M.; Riechelmann, S.; Seckmeyer, G.

    2014-07-01

    Spectral sky radiance (380-760 nm) is derived from measurements with a hemispherical sky imager (HSI) system. The HSI consists of a commercial compact CCD (charge coupled device) camera equipped with a fish-eye lens and provides hemispherical sky images in three reference bands, namely red, green and blue. To obtain the spectral sky radiance from these images, non-linear regression functions for various sky conditions have been derived. The camera-based spectral sky radiance was validated using spectral sky radiance measured with a CCD spectroradiometer. The spectral sky radiance for complete distribution over the hemisphere between both instruments deviates by less than 20% at 500 nm for all sky conditions and for zenith angles less than 80°. The reconstructed spectra of the wavelengths 380-760 nm between both instruments at various directions deviate by less than 20% for all sky conditions.
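    The band-to-radiance regression can be sketched with an assumed model form. The power-law link function below is purely illustrative (the paper's actual non-linear regression functions are not specified here):

```python
import numpy as np
from scipy.optimize import curve_fit

def band_model(band_value, a, b):
    """Assumed power-law link between one camera band value and sky
    radiance at a single wavelength (illustrative model form only)."""
    return a * np.power(band_value, b)

def fit_band_to_radiance(band_values, radiance):
    """Fit the illustrative model to co-located camera/spectroradiometer
    samples; one such fit would be needed per wavelength and sky type."""
    popt, _ = curve_fit(band_model, band_values, radiance, p0=(1.0, 1.0))
    return popt
```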

  17. Validation of spectral sky radiance derived from all-sky camera images - a case study

    NASA Astrophysics Data System (ADS)

    Tohsing, K.; Schrempf, M.; Riechelmann, S.; Seckmeyer, G.

    2014-01-01

    Spectral sky radiance (380-760 nm) is derived from measurements with a Hemispherical Sky Imager (HSI) system. The HSI consists of a commercial compact CCD (charge coupled device) camera equipped with a fish-eye lens and provides hemispherical sky images in three reference bands, namely red, green and blue. To obtain the spectral sky radiance from these images, non-linear regression functions for various sky conditions have been derived. The camera-based spectral sky radiance was validated using spectral sky radiance measured with a CCD spectroradiometer. The spectral sky radiance for complete distribution over the hemisphere between both instruments deviates by less than 20% at 500 nm for all sky conditions and for zenith angles less than 80°. The reconstructed spectra over the wavelengths 380-760 nm between both instruments at various directions deviate by less than 20% for all sky conditions.

  18. Measuring the image quality of digital-camera sensors by a ping-pong ball

    NASA Astrophysics Data System (ADS)

    Pozo, Antonio M.; Rubiño, Manuel; Castro, José J.; Salas, Carlos; Pérez-Ocón, Francisco

    2014-07-01

    In this work, we present a low-cost experimental setup to evaluate the image quality of digital-camera sensors, which can be implemented in undergraduate and postgraduate teaching. The method consists of evaluating the modulation transfer function (MTF) of digital-camera sensors by speckle patterns using a ping-pong ball as a diffuser, with two handmade circular apertures acting as input and output ports, respectively. To specify the spatial-frequency content of the speckle pattern, it is necessary to use an aperture; for this, we made a slit in a piece of black cardboard. First, the MTF of a digital-camera sensor was calculated using the ping-pong ball and the handmade slit, and then the MTF was calculated using an integrating sphere and a high-quality steel slit. Finally, the results achieved with both experimental setups were compared, showing a similar MTF in both cases.

  19. Adaptive optics flood-illumination camera for high speed retinal imaging

    NASA Astrophysics Data System (ADS)

    Rha, Jungtae; Jonnal, Ravi S.; Thorn, Karen E.; Qu, Junle; Zhang, Yan; Miller, Donald T.

    2006-05-01

    Current adaptive optics flood-illumination retina cameras operate at low frame rates, acquiring retinal images below seven Hz, which restricts their research and clinical utility. Here we investigate a novel bench top flood-illumination camera that achieves significantly higher frame rates using strobing fiber-coupled superluminescent and laser diodes in conjunction with a scientific-grade CCD. Source strength was sufficient to obviate frame averaging, even for exposures as short as 1/3 msec. Continuous frame rates of 10, 30, and 60 Hz were achieved for imaging 1.8, 0.8, and 0.4 deg retinal patches, respectively. Short-burst imaging up to 500 Hz was also achieved by temporarily storing sequences of images on the CCD. High frame rates, short exposure durations (1 msec), and correction of the most significant aberrations of the eye were found necessary for individuating retinal blood cells and directly measuring cellular flow in capillaries. Cone videos of dark adapted eyes showed a surprisingly rapid fluctuation (~1 Hz) in the reflectance of single cones. As further demonstration of the value of the camera, we evaluated the tradeoff between exposure duration and image blur associated with retina motion.

  20. Stereo Imaging Velocimetry Technique Using Standard Off-the-Shelf CCD Cameras

    NASA Technical Reports Server (NTRS)

    McDowell, Mark; Gray, Elizabeth

    2004-01-01

    Stereo imaging velocimetry is a fluid physics technique for measuring three-dimensional (3D) velocities at a plurality of points. This technique provides full-field 3D analysis of any optically clear fluid or gas experiment seeded with tracer particles. Unlike current 3D particle imaging velocimetry systems that rely primarily on laser-based systems, stereo imaging velocimetry uses standard off-the-shelf charge-coupled device (CCD) cameras to provide accurate and reproducible 3D velocity profiles for experiments that require 3D analysis. Using two cameras aligned orthogonally, we present a closed mathematical solution resulting in an accurate 3D approximation of the observation volume. The stereo imaging velocimetry technique is divided into four phases: 3D camera calibration, particle overlap decomposition, particle tracking, and stereo matching. Each phase is explained in detail. In addition to being utilized for space shuttle experiments, stereo imaging velocimetry has been applied to the fields of fluid physics, bioscience, and colloidal microscopy.
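The "closed mathematical solution" for two orthogonal cameras can be illustrated with a deliberately simplified sketch. Assumptions (not from the abstract): perfectly calibrated, distortion-free views, one looking down the z axis (reporting x, y) and one down the x axis (reporting y, z); the shared y coordinate is averaged to damp calibration noise.

```python
def triangulate_orthogonal(xy_cam, yz_cam):
    """Recover a 3-D particle position from two orthogonal views.

    xy_cam: (x, y) seen by the camera looking along z.
    yz_cam: (y, z) seen by the camera looking along x.
    The y coordinate appears in both views and is averaged.
    """
    x, y1 = xy_cam
    y2, z = yz_cam
    return (x, 0.5 * (y1 + y2), z)
```

A real system would first map pixel coordinates into this common world frame via the 3D camera-calibration phase the abstract describes.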

  1. Synthesizing wide-angle and arbitrary view-point images from a circular camera array

    NASA Astrophysics Data System (ADS)

    Fukushima, Norishige; Yendo, Tomohiro; Fujii, Toshiaki; Tanimoto, Masayuki

    2006-02-01

    We propose an Image-Based Rendering (IBR) technique using a circular camera array. By recording the scene from cameras that surround it, we can synthesize arbitrary-viewpoint images more freely, as well as wide-angle, panorama-like images. The method is based on Ray-Space, an image-based rendering representation similar to Light Field. Ray-Space describes each ray by the position (x, y) and direction (θ, φ) at which it crosses a reference plane. When the cameras are arranged on a circle, the trajectory of a scene point, which forms a straight line in an Epipolar Plane Image (EPI) for a linear camera arrangement, becomes a sinusoidal curve. Although this description is clear, determining which pixel of which camera to use during rendering becomes complicated. We therefore re-describe the space in terms of the camera position (s, t) and pixel position (u, v), as in Light Field, and express the camera position in polar coordinates (r, θ), which keeps the description close to Ray-Space. The point trajectory then becomes a periodic function with period 2π, but rendering becomes straightforward. From this space, as in the linear arrangement, arbitrary-viewpoint images can be synthesized purely from the geometric relationships between cameras. Moreover, exploiting the fact that rays from all directions converge at each point on the circle, we propose a technique for generating wide-angle, panorama-like images; this is possible because rays of all directions at the same position are recorded redundantly.
The discussion so far assumes a densely arranged camera array that satisfies plenoptic sampling; we now consider the discrete, undersampled case. When cameras are arranged on a straight line and images are synthesized, a focus-like effect appears despite the pinhole camera model being assumed. This effect, peculiar to Light Field rendering when the sampling is insufficient, is called a synthetic aperture. We have previously synthesized all-in-focus images for this phenomenon using a method called an "adaptive filter", which builds a fully viewpoint-dependent disparity map centered on the viewpoint to be rendered. The same phenomenon occurs with a circular arrangement, so we extend the adaptive filter to the circular case and synthesize all-in-focus images. Although the epipolar lines are no longer parallel in the circular arrangement, we show that the extension can be derived from geometric information alone. With this approach, we succeeded in synthesizing wide-angle and arbitrary-viewpoint images from both fully sampled and discretely sampled spaces.
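The sinusoid-like EPI trajectory for a circular array can be sketched geometrically. This is a minimal illustrative model, not the paper's exact Ray-Space parameterization: cameras sit on a circle of radius r looking at the center, with ideal pinhole projection, and we plot the signed image coordinate of one scene point against the camera angle.

```python
import numpy as np

def epi_trajectory(rho, alpha, r, thetas, f=1.0):
    """Image coordinate of a scene point for cameras on a circle.

    Cameras at radius r and angles `thetas`, optical axes toward the
    center; the point is at polar position (rho, alpha). Plotting the
    returned u against theta traces the sinusoid-like EPI curve.
    """
    cx, cy = r * np.cos(thetas), r * np.sin(thetas)
    px, py = rho * np.cos(alpha), rho * np.sin(alpha)
    dx, dy = px - cx, py - cy
    axx, axy = -np.cos(thetas), -np.sin(thetas)  # optical axis (toward center)
    nx, ny = -axy, axx                           # in-plane normal to the axis
    along = dx * axx + dy * axy                  # depth along the optical axis
    across = dx * nx + dy * ny                   # lateral offset
    return f * across / along                    # pinhole projection
```

A point at the center of the circle projects to u = 0 in every camera, as expected.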

  2. SU-E-E-06: Teaching About the Gamma Camera and Ultrasound Imaging

    SciTech Connect

    Lowe, M; Spiro, A; Vogel, R; Donaldson, N; Gosselin, C

    2015-06-15

    Purpose: Instructional modules on applications of physics in medicine are being developed. The target audience consists of students who have had an introductory undergraduate physics course. This presentation will concentrate on an active learning approach to teaching the principles of the gamma camera. There will also be a description of an apparatus to teach ultrasound imaging. Methods: Since a real gamma camera is not feasible in the undergraduate classroom, we have developed two types of optical apparatus that teach the main principles. To understand the collimator, LEDs mimic gamma emitters in the body, and the photons pass through an array of tubes. The distance, spacing, diameter, and length of the tubes can be varied to understand their effect upon the resolution of the image. To determine the positions of the gamma emitters, a second apparatus uses a movable green laser, fluorescent plastic in lieu of the scintillation crystal, acrylic rods that mimic the PMTs, and a photodetector to measure the intensity. The position of the laser is calculated with a centroid algorithm. To teach the principles of ultrasound imaging, we are using the sound head and pulser box of an educational product, a variable-gain amplifier, a rotation table, a digital oscilloscope, Matlab software, and phantoms. Results: Gamma camera curriculum materials were implemented in the classroom at Loyola in 2014 and 2015. Written work shows good knowledge retention and a more complete understanding of the material. Preliminary ultrasound imaging materials were run in 2015. Conclusion: Active learning methods add another dimension to descriptions in textbooks and are effective in keeping students engaged during class time. The teaching apparatus for the gamma camera and ultrasound imaging can be expanded to include more cases, and could potentially improve students' understanding of artifacts and distortions in the images.
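The centroid (Anger-logic) position estimate used in the laser apparatus is a one-liner: the event position is the intensity-weighted mean of the detector positions. A minimal sketch, with the detector positions and intensities as plain lists (the function name and interface are illustrative, not from the abstract):

```python
def centroid(intensities, positions):
    """Anger-logic centroid: intensity-weighted mean detector position.

    intensities: photodetector signals from each acrylic rod / PMT analog.
    positions:   the corresponding rod positions along one axis.
    """
    total = sum(intensities)
    return sum(i * p for i, p in zip(intensities, positions)) / total
```

With two rods at 0 and 4 units reading intensities 1 and 3, the estimated source position is 3.0, i.e. pulled toward the brighter rod.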

  3. [The hyperspectral camera side-scan geometric imaging in any direction considering the spectral mixing].

    PubMed

    Wang, Shu-Min; Zhang, Ai-Wu; Hu, Shao-Xing; Sun, Wei-Dong

    2014-07-01

    In order to correct image distortion in hyperspectral camera side-scan geometric imaging, a pixel geo-referencing algorithm is derived in detail in the present paper, suitable for linear push-broom camera side-scan imaging on the ground in any direction; it takes the orientation of objects in the navigation coordinate system into account. Combined with the ground sampling distance of the geo-referenced image and the area covered by push-broom imaging, the general procedure for dividing the geo-referenced image into grids is also presented: the new image rows and columns are obtained by dividing the geo-referenced image extent by the ground sampling distance. Considering the error produced by rounding when the pixel grid is generated, and the spectral mixing caused by the traditional direct spectral sampling method during image correction, an improved spectral sampling method based on weighted fusion is proposed. It takes the area proportions of the adjacent source pixels within each newly generated pixel as coefficients, and the coefficients are normalized to avoid spectral overflow, so that the spectrum of each new pixel is a weighted combination of the spectra of the adjacent geo-referenced pixels. Finally, numerous push-broom imaging experiments were conducted on the ground, and the distorted images were corrected with the proposed algorithm. The results show that the linear image distortion correction algorithm is valid and robust. Multiple samples were also selected in the corrected images to verify the spectral data; the results indicate that the improved spectral sampling method is better than the direct spectral sampling algorithm. This work provides a reference for similar ground-based applications. PMID:25269321
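The weighted-fusion step reduces to a normalized weighted average of neighbor spectra, with overlap areas as weights. A minimal sketch of that single step (the surrounding grid generation and area computation are omitted, and the function name is illustrative):

```python
import numpy as np

def fuse_spectra(neighbor_spectra, overlap_areas):
    """Weighted-fusion spectral resampling for one output grid cell.

    neighbor_spectra: (k, bands) spectra of the k source pixels that
                      overlap the new cell.
    overlap_areas:    their overlap areas with the cell; normalizing the
                      weights to sum to 1 prevents spectral overflow.
    """
    w = np.asarray(overlap_areas, dtype=float)
    w = w / w.sum()                       # normalized area coefficients
    return w @ np.asarray(neighbor_spectra, dtype=float)
```

Direct sampling would instead copy the spectrum of the single nearest source pixel, which is what introduces the mixing artifacts the paper addresses.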

  4. Hyperspectral imaging using a color camera and its application for pathogen detection

    NASA Astrophysics Data System (ADS)

    Yoon, Seung-Chul; Shin, Tae-Sung; Heitschmidt, Gerald W.; Lawrence, Kurt C.; Park, Bosoon; Gamble, Gary

    2015-02-01

    This paper reports the results of a feasibility study for the development of a hyperspectral image recovery (reconstruction) technique using a RGB color camera and regression analysis in order to detect and classify colonies of foodborne pathogens. The target bacterial pathogens were the six representative non-O157 Shiga-toxin producing Escherichia coli (STEC) serogroups (O26, O45, O103, O111, O121, and O145) grown in Petri dishes of Rainbow agar. The purpose of the feasibility study was to evaluate whether a DSLR camera (Nikon D700) could be used to predict hyperspectral images in the wavelength range from 400 to 1,000 nm and even to predict the types of pathogens using a hyperspectral STEC classification algorithm that was previously developed. Unlike many other studies using color charts with known and noise-free spectra for training reconstruction models, this work used hyperspectral and color images, separately measured by a hyperspectral imaging spectrometer and the DSLR color camera. The color images were calibrated (i.e. normalized) to relative reflectance, subsampled and spatially registered to match with counterpart pixels in hyperspectral images that were also calibrated to relative reflectance. Polynomial multivariate least-squares regression (PMLR) was previously developed with simulated color images. In this study, partial least squares regression (PLSR) was also evaluated as a spectral recovery technique to minimize multicollinearity and overfitting. The two spectral recovery models (PMLR and PLSR) and their parameters were evaluated by cross-validation. The QR decomposition was used to find a numerically more stable solution of the regression equation. The preliminary results showed that PLSR was more effective especially with higher order polynomial regressions than PMLR. The best classification accuracy measured with an independent test set was about 90%. 
The results suggest the potential of cost-effective color imaging using hyperspectral image classification algorithms for rapidly differentiating pathogens in agar plates.
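The core spectral-recovery regression can be sketched with a plain regularized least-squares fit from calibrated RGB values to reference spectra. This is a simplified stand-in, assuming only a linear (affine) model rather than the paper's PMLR/PLSR variants; the function names and the ridge parameter `lam` are illustrative.

```python
import numpy as np

def fit_spectral_recovery(rgb, spectra, lam=1e-3):
    """Fit a linear RGB -> spectrum map by ridge-regularized least squares.

    rgb:     (n, 3) calibrated camera responses.
    spectra: (n, bands) matching reflectance spectra from the
             hyperspectral reference. Returns the (4, bands) weights.
    """
    X = np.hstack([rgb, np.ones((rgb.shape[0], 1))])   # affine term
    A = X.T @ X + lam * np.eye(X.shape[1])             # ridge normal equations
    return np.linalg.solve(A, X.T @ spectra)

def recover(rgb, W):
    """Predict spectra for new RGB pixels from the fitted weights."""
    X = np.hstack([rgb, np.ones((rgb.shape[0], 1))])
    return X @ W
```

A polynomial expansion of the RGB channels (as in PMLR) or latent-variable projection (as in PLSR) would replace the design matrix X; the regularization plays the same multicollinearity-damping role the abstract attributes to PLSR and the QR decomposition.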

  5. Camera selection for real-time in vivo radiation treatment verification systems using Cherenkov imaging

    SciTech Connect

    Andreozzi, Jacqueline M. Glaser, Adam K.; Zhang, Rongxiao; Jarvis, Lesley A.; Gladstone, David J.; Pogue, Brian W.

    2015-02-15

    Purpose: To identify achievable camera performance and hardware needs in a clinical Cherenkov imaging system for real-time, in vivo monitoring of the surface beam profile on patients, as novel visual information, documentation, and possible treatment verification for clinicians. Methods: Complementary metal-oxide-semiconductor (CMOS), charge-coupled device (CCD), intensified charge-coupled device (ICCD), and electron multiplying-intensified charge coupled device (EM-ICCD) cameras were investigated to determine Cherenkov imaging performance in a clinical radiotherapy setting, with one emphasis on the maximum supportable frame rate. Where possible, the image intensifier was synchronized using a pulse signal from the Linac in order to image with room lighting conditions comparable to patient treatment scenarios. A solid water phantom irradiated with a 6 MV photon beam was imaged by the cameras to evaluate the maximum frame rate for adequate Cherenkov detection. Adequate detection was defined as an average electron count in the background-subtracted Cherenkov image region of interest in excess of 0.5% (327 counts) of the 16-bit maximum electron count value. Additionally, an ICCD and an EM-ICCD were each used clinically to image two patients undergoing whole-breast radiotherapy to compare clinical advantages and limitations of each system. Results: Intensifier-coupled cameras were required for imaging Cherenkov emission on the phantom surface with ambient room lighting; standalone CMOS and CCD cameras were not viable. The EM-ICCD was able to collect images from a single Linac pulse delivering less than 0.05 cGy of dose at 30 frames/s (fps) and pixel resolution of 512 × 512, compared to an ICCD which was limited to 4.7 fps at 1024 × 1024 resolution. 
An intensifier with higher quantum efficiency at the entrance photocathode in the red wavelengths [30% quantum efficiency (QE) vs previous 19%] promises at least 8.6 fps at a resolution of 1024 × 1024 and lower monetary cost than the EM-ICCD. Conclusions: The ICCD with an intensifier better optimized for red wavelengths was found to provide the best potential for real-time display (at least 8.6 fps) of radiation dose on the skin during treatment at a resolution of 1024 × 1024.
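The "adequate detection" criterion above is a simple arithmetic threshold: 0.5% of the 16-bit full-scale value, truncated to whole counts, which reproduces the 327-count figure. A sketch (function name and defaults are illustrative):

```python
def adequate_detection_threshold(bit_depth=16, fraction=0.005):
    """Minimum mean background-subtracted ROI count for 'adequate'
    Cherenkov detection: a fixed fraction of the camera's full-scale
    value (0.5% of 2**16 - 1 = 327 counts), truncated to an integer."""
    return int(fraction * (2 ** bit_depth - 1))
```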

  6. Camera selection for real-time in vivo radiation treatment verification systems using Cherenkov imaging

    PubMed Central

    Andreozzi, Jacqueline M.; Zhang, Rongxiao; Glaser, Adam K.; Jarvis, Lesley A.; Pogue, Brian W.; Gladstone, David J.

    2015-01-01

    Purpose: To identify achievable camera performance and hardware needs in a clinical Cherenkov imaging system for real-time, in vivo monitoring of the surface beam profile on patients, as novel visual information, documentation, and possible treatment verification for clinicians. Methods: Complementary metal-oxide-semiconductor (CMOS), charge-coupled device (CCD), intensified charge-coupled device (ICCD), and electron multiplying-intensified charge coupled device (EM-ICCD) cameras were investigated to determine Cherenkov imaging performance in a clinical radiotherapy setting, with one emphasis on the maximum supportable frame rate. Where possible, the image intensifier was synchronized using a pulse signal from the Linac in order to image with room lighting conditions comparable to patient treatment scenarios. A solid water phantom irradiated with a 6 MV photon beam was imaged by the cameras to evaluate the maximum frame rate for adequate Cherenkov detection. Adequate detection was defined as an average electron count in the background-subtracted Cherenkov image region of interest in excess of 0.5% (327 counts) of the 16-bit maximum electron count value. Additionally, an ICCD and an EM-ICCD were each used clinically to image two patients undergoing whole-breast radiotherapy to compare clinical advantages and limitations of each system. Results: Intensifier-coupled cameras were required for imaging Cherenkov emission on the phantom surface with ambient room lighting; standalone CMOS and CCD cameras were not viable. The EM-ICCD was able to collect images from a single Linac pulse delivering less than 0.05 cGy of dose at 30 frames/s (fps) and pixel resolution of 512 × 512, compared to an ICCD which was limited to 4.7 fps at 1024 × 1024 resolution. 
An intensifier with higher quantum efficiency at the entrance photocathode in the red wavelengths [30% quantum efficiency (QE) vs previous 19%] promises at least 8.6 fps at a resolution of 1024 × 1024 and lower monetary cost than the EM-ICCD. Conclusions: The ICCD with an intensifier better optimized for red wavelengths was found to provide the best potential for real-time display (at least 8.6 fps) of radiation dose on the skin during treatment at a resolution of 1024 × 1024. PMID:25652512

  7. Depth estimation, spatially variant image registration, and super-resolution using a multi-lenslet camera

    NASA Astrophysics Data System (ADS)

    Zhang, Qiang; Mirotznik, Mark; Saldana, Santiago; Smith, Jarred; Barnard, Ryan

    2010-04-01

    With a multi-lenslet camera, we can capture multiple low-resolution (LR) images of the same scene and use them to reconstruct a high-resolution (HR) image. Two major computational problems must be solved for this purpose: image registration and super-resolution (SR) reconstruction. For the first, a major hurdle is estimating spatially variant shifts: objects in a scene are often at different depths, so due to parallax, the shifts between imaged objects often vary on a per-pixel basis. This poses a great computational challenge, as the problem is NP-complete. The multi-lenslet camera with a single focal plane gives us a unique opportunity to take advantage of the parallax phenomenon and relate object depths directly to their shifts; we thereby reduce the parameter space from a two-dimensional (x, y) shift space to a one-dimensional depth space, which greatly reduces the computational cost. As a result, we obtain not only registered LR images but also an estimated depth map, which can itself be valuable for some applications. After registration, the LR images and estimated shifts are used to reconstruct an HR image; a previously developed algorithm is employed to efficiently compute a large HR image of size 1024 × 1024.
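The depth-to-shift relation that collapses the 2D search to 1D is the standard parallax formula for a rectified pair of lenslets: shift = focal length × baseline / depth. A hedged sketch, assuming rectified views and consistent units (the exact calibration model in the paper may differ):

```python
def shift_from_depth(depth, baseline, focal):
    """Parallax shift between two lenslets separated by `baseline`
    for an object at `depth`: s = f * B / z (rectified pinhole pair)."""
    return focal * baseline / depth

def depth_from_shift(shift, baseline, focal):
    """Invert the relation: a measured shift determines the depth,
    which in turn fixes the shift for every other lenslet pair."""
    return focal * baseline / shift
```

Searching over a 1D grid of candidate depths, and scoring each by photometric consistency across the lenslets, replaces the per-pixel 2D shift search.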

  8. Single camera system for multi-wavelength fluorescent imaging in the heart.

    PubMed

    Yamanaka, Takeshi; Arafune, Tatsuhiko; Shibata, Nitaro; Honjo, Haruo; Kamiya, Kaichiro; Kodama, Itsuo; Sakuma, Ichiro

    2012-01-01

    Optical mapping has been a powerful method for measuring cardiac electrophysiological phenomena such as membrane potential (V(m)), intracellular calcium (Ca(2+)), and other electrophysiological parameters. To measure two parameters simultaneously, a dual mapping system using two cameras is often used, but no method exists to measure three or more parameters. To exploit the full potential of fluorescence imaging, an innovative method to measure multiple (more than three) parameters is needed. In this study, we present a new optical mapping system that records multiple parameters using a single camera. Our system consists of one camera, custom-made optical lens units, and a custom-made filter wheel. The optical lens units are designed to focus the fluorescent light at the filter position and form an image on the camera's sensor. To obtain optical signals of high quality, light-collection efficiency was considered carefully in designing the optical system; the developed system has an object-space numerical aperture (NA) of 0.1 and an image-space NA of 0.23. The filter wheel is rotated by a motor, allowing filter switching to match the required fluorescence wavelength. The camera exposure and filter switching are synchronized by a phase-locked loop, which allows the system to record multiple fluorescent signals alternately, frame by frame. To validate the performance of the system, we observed V(m) and Ca(2+) dynamics simultaneously (frame rate: 125 fps) in a Langendorff-perfused rabbit heart. First, we applied basic stimuli to the heart base (cycle length: 500 ms) and observed a planar wave; the V(m) and Ca(2+) waveforms showed upstrokes synchronized with the pacing cycle length. In addition, we recorded V(m) and Ca(2+) signals during ventricular fibrillation induced by burst pacing. These experiments demonstrate the efficacy and utility of our method for cardiac electrophysiological research. 
PMID:23366735
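With the filter wheel phase-locked to the exposure, the recorded video is channel-interleaved, and demultiplexing it into per-parameter sequences is a simple stride slice. A sketch (the mapping "frame k belongs to channel k mod n" is an assumption about the synchronization scheme, not stated explicitly in the abstract):

```python
def demultiplex(frames, n_channels):
    """Split an interleaved single-camera sequence into per-parameter videos.

    With the filter wheel synchronized to the exposure, frame k is
    assigned to channel k mod n_channels (e.g. alternating Vm / Ca2+
    frames from one camera yield two half-rate videos).
    """
    return [frames[c::n_channels] for c in range(n_channels)]
```

For two channels, a camera running at twice the per-parameter rate yields one Vm and one Ca2+ video, each at the reported 125 fps.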

  9. Real-time analysis of laser beams by simultaneous imaging on a single camera chip

    NASA Astrophysics Data System (ADS)

    Piehler, S.; Boley, M.; Abdou Ahmed, M.; Graf, T.

    2015-03-01

    The fundamental parameters of a laser beam, such as the exact position and size of the focus or the beam quality factor M², yield vital information for both laser developers and end-users. However, each of these parameters can change significantly on a short time scale due to thermally induced effects in the processing optics or in the laser source itself, leading to process instabilities and non-reproducible results. In order to monitor the transient behavior of these effects, we have developed a camera-based measurement system that enables full online laser beam characterization. A novel monolithic beam splitter has been designed that generates a 2D array of images on a single camera chip, each of which corresponds to an intensity cross section of the beam along the propagation axis, separated by a well-defined spacing. Thus, using the full area of the camera chip, a large number of measurement planes is achieved, yielding a measurement range sufficient for full beam characterization conforming to ISO 11146 over a broad range of input beam parameters. The exact beam diameters in each plane are derived by calculating the 2nd-order intensity moments of the individual intensity slices. The processing time needed to carry out both the background filtering and the image processing for the full analysis of a single camera image is in the range of a few milliseconds; hence, the measurement frequency of our system is mainly limited by the frame rate of the camera.
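The second-moment beam diameter referred to above (the ISO 11146 D4σ width) can be computed per slice in a few lines. A minimal sketch for one background-subtracted intensity image, assuming square pixels in pixel units:

```python
import numpy as np

def d4sigma(image):
    """D4-sigma beam diameters from 2nd-order intensity moments (ISO 11146).

    Returns (Dx, Dy) = 4 * sqrt of the variance of the intensity
    distribution along each axis, in pixel units.
    """
    img = np.asarray(image, dtype=float)
    y, x = np.indices(img.shape)
    total = img.sum()
    cx, cy = (x * img).sum() / total, (y * img).sum() / total  # centroid
    sx2 = ((x - cx) ** 2 * img).sum() / total                  # 2nd moments
    sy2 = ((y - cy) ** 2 * img).sum() / total
    return 4.0 * np.sqrt(sx2), 4.0 * np.sqrt(sy2)
```

For a Gaussian beam this reduces to 4σ, i.e. twice the 1/e² radius, which makes it easy to sanity-check against synthetic data.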

  10. Optimal Design of Anger Camera for Bremsstrahlung Imaging: Monte Carlo Evaluation

    PubMed Central

    Walrand, Stephan; Hesse, Michel; Wojcik, Randy; Lhommel, Renaud; Jamar, François

    2014-01-01

    A conventional Anger camera is not adapted to bremsstrahlung imaging and, as a result, even using a reduced energy acquisition window, geometric x-rays represent <15% of the recorded events. This increases noise, limits the contrast, and reduces the quantification accuracy. Monte Carlo (MC) simulations of energy spectra showed that a camera based on a 30-mm-thick BGO crystal and equipped with a high-energy pinhole collimator is well adapted to bremsstrahlung imaging. The total scatter contamination is reduced by a factor of 10 versus a conventional NaI camera equipped with a high-energy parallel-hole collimator, enabling acquisition using an extended energy window ranging from 50 to 350 keV. By using the recorded event energy in the reconstruction method, a shorter acquisition time and reduced orbit range will be usable, allowing the design of a simplified mobile gantry that is more convenient for use in a busy catheterization room. After injecting a safe activity, a fast single photon emission computed tomography scan could be performed without moving the catheter tip, in order to assess the liver dosimetry and estimate the additional safe activity that could still be injected. Further long-running MC simulations of realistic acquisitions will allow the quantification capability of such a system to be assessed. Simultaneously, a dedicated bremsstrahlung prototype camera reusing PMT–BGO blocks from a retired PET system is currently under design for further evaluation. PMID:24982849

  11. Portable, stand-off spectral imaging camera for detection of effluents and residues

    NASA Astrophysics Data System (ADS)

    Goldstein, Neil; St. Peter, Benjamin; Grot, Jonathan; Kogan, Michael; Fox, Marsha; Vujkovic-Cvijin, Pajo; Penny, Ryan; Cline, Jason

    2015-06-01

    A new, compact and portable spectral imaging camera, employing a MEMS-based encoded imaging approach, has been built and demonstrated for detection of hazardous contaminants, including gaseous effluents and solid or liquid residues on surfaces. The camera is called the Thermal infrared Reconfigurable Analysis Camera for Effluents and Residues (TRACER). TRACER operates in the long-wave infrared and has the potential to detect a wide variety of materials with characteristic spectral signatures in that region. The 30 lb. camera is tripod mounted and battery powered, and a touch-screen control panel provides a simple user interface for most operations. The MEMS spatial light modulator is a Texas Instruments digital micromirror device with custom electronics and firmware control. One spatial and one spectral dimension are collected simultaneously, with the second spatial dimension obtained by scanning the internal spectrometer slit. The sensor can be configured to collect data in several modes, including full hyperspectral imagery using Hadamard multiplexing, panchromatic thermal imagery, and chemical-specific contrast imagery, switched with simple user commands. Matched filters and other analog filters can be generated internally on the fly and applied in hardware, substantially reducing detection time and improving SNR over HSI software processing while reducing storage requirements. Results of preliminary instrument evaluation and measurements of flame exhaust are presented.
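Hadamard multiplexing, as used for the hyperspectral mode above, can be illustrated with a toy round trip: measure Hadamard combinations of the channels, then decode with a single matrix multiply. This is a generic sketch of the principle (Sylvester construction, noiseless case), not TRACER's actual encoding masks:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def multiplex_roundtrip(x):
    """Measure Hadamard combinations of the channels, then decode.

    Decoding is exact and cheap because H @ H.T = n * I; measuring
    many channels at once is what yields the multiplex SNR advantage.
    """
    n = len(x)
    H = hadamard(n)
    y = H @ x            # multiplexed measurements
    return H.T @ y / n   # decoded channel values
```

In hardware, the rows of H (or a 0/1 S-matrix variant) become micromirror patterns, and the decode runs on the measured intensities.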

  12. High-resolution image digitizing through 12x3-bit RGB-filtered CCD camera

    NASA Astrophysics Data System (ADS)

    Cheng, Andrew Y. S.; Pau, Michael C. Y.

    1996-09-01

    A high-resolution computer-controlled CCD image capturing system has been developed using a 12-bit 1024 × 1024 pixel CCD camera and motorized RGB filters to capture images with a color depth of up to 36 bits. The filters separate the major color components and collect them individually, while the CCD camera maintains the spatial resolution and detector filling factor; the color separation is thus done optically rather than electronically. Operation consists simply of placing the objects to be captured, such as color photos, slides, and even x-ray transparencies, under the camera system; the necessary parameters such as integration time, mixing level, and light intensity are adjusted automatically by an on-line expert system. This greatly reduces restrictions on the objects that can be captured. This unique approach can save considerable time in adjusting image quality and gives much more flexibility in manipulating the captured object, even a 3D object, with minimal setup fixtures. In addition, the cross-sectional dimensions of a 3D object can be analyzed by adapting a fiber-optic ring light source, which is particularly useful in non-contact metrology of 3D structures. The digitized information can be stored in an easily transferable format, and users can also perform a special LUT mapping automatically or manually. Applications of the system include medical image archiving, printing quality control, and 3D machine vision.

  13. High-resolution image digitizing camera for use in quantitative coronary arteriography

    NASA Astrophysics Data System (ADS)

    Muser, Markus H.; Leemann, Thomas; Anliker, M.

    1991-06-01

    Image processing in biomedical applications, such as the analysis of electrophoresis gels, digital microscopic imaging, or the computer-assisted quantitative analysis of angiographic images recorded on 35 mm cinefilm (quantitative coronary arteriography, QCA), can be improved substantially if conventional TV-based image digitizers are replaced by devices offering higher geometric resolution and an increased dynamic range. Before high-resolution two-dimensional CCD sensors were introduced a few years ago, these improvements could only be implemented cost-effectively using scanning devices such as the digital camera developed earlier at the Institute for Biomedical Engineering in Zurich, Switzerland. This camera, based on a 2048-element CCD line array, offers a geometric resolution of up to 2048 × 3000 pixels and a dynamic range of up to 12 bits. It is being used in the QCA system developed jointly by the Institute for Biomedical Engineering in Zurich, Switzerland and the University of Texas Health Science Center in Houston, Texas, U.S.A. Clinical evaluation of this system, as well as an analysis of the technical properties of the x-ray systems involved, indicates that coronary arteriograms can be digitized at a resolution of about 1024 × 1024 pixels without sacrificing measurement accuracy. This reduces computational effort and suggests the use of two-dimensional CCD sensors. Therefore, a new digitizing camera based on a KAF1400 (Kodak) full-frame sensor suitable for the QCA system has been developed; its design concept and performance are discussed in the paper.

  14. First responder thermal imaging cameras: establishment of representative performance testing conditions

    NASA Astrophysics Data System (ADS)

    Amon, Francine; Hamins, Anthony; Rowe, Justin

    2006-04-01

    Thermal imaging cameras are rapidly becoming integral equipment for first responders for use in structure fires and other emergencies. Currently there are no standardized performance metrics or test methods available to the users and manufacturers of these instruments. The Building and Fire Research Laboratory (BFRL) at the National Institute of Standards and Technology is conducting research to establish test conditions that best represent the environment in which these cameras are used. First responders may use thermal imagers for field operations ranging from fire attack and search/rescue in burning structures, to hot-spot detection in overhaul activities, to detecting the location of hazardous materials. In order to develop standardized performance metrics and test methods that capture the harsh environment in which these cameras may be used, information has been collected from the literature and from full-scale tests conducted at BFRL. Initial experimental work has focused on temperature extremes and the presence of obscuring media such as smoke. In full-scale tests, thermal imagers viewed a target through smoke, dust, and steam, with and without flames in the field of view. The fuels tested were hydrocarbons (methanol, heptane, propylene, toluene), wood, upholstered cushions, and carpeting with padding. Gas temperatures; CO, CO2, and O2 volume fractions; emission spectra; and smoke concentrations were measured. Simple thermal bar targets and a heated mannequin fitted in firefighter gear were used as targets. The imagers were placed at three distances from the targets, ranging from 3 m to 12 m.

  15. Influence of electron dose rate on electron counting images recorded with the K2 camera

    PubMed Central

    Li, Xueming; Zheng, Shawn Q.; Egami, Kiyoshi; Agard, David A.; Cheng, Yifan

    2013-01-01

    A recent technological breakthrough in electron cryomicroscopy (cryoEM) is the development of direct electron detection cameras for data acquisition. By bypassing the traditional phosphor scintillator and fiber optic coupling, these cameras have greatly enhanced sensitivity and detective quantum efficiency (DQE). Of the three currently available commercial cameras, the Gatan K2 Summit was designed specifically for counting individual electron events. Counting further enhances the DQE, allows for practical doubling of detector resolution and eliminates noise arising from the variable deposition of energy by each primary electron. While counting has many advantages, undercounting of electrons happens when more than one electron strikes the same area of the detector within the analog readout period (coincidence loss), which influences image quality. In this work, we characterized the K2 Summit in electron counting mode, and studied the relationship of dose rate and coincidence loss and its influence on the quality of counted images. We found that coincidence loss reduces low frequency amplitudes but has no significant influence on the signal-to-noise ratio of the recorded image. It also has little influence on high frequency signals. Images of frozen hydrated archaeal 20S proteasome (~700 kDa, D7 symmetry) recorded at the optimal dose rate retained both high-resolution signal and low-resolution contrast and enabled calculating a 3.6 Å three-dimensional reconstruction from only 10,000 particles. PMID:23968652
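Coincidence loss can be illustrated with a simple per-pixel Poisson model: if at most one electron can be counted per pixel per readout, the expected counted fraction falls off with the incident rate. This is a hedged simplification of the K2's actual counting behavior (it ignores sub-pixel localization and charge spreading across neighboring pixels):

```python
import math

def counted_fraction(rate):
    """Expected fraction of incident electrons counted when arrivals at
    one pixel per readout period are Poisson with mean `rate`.

    Only one event can be counted per pixel per readout, so the
    fraction is P(at least one arrival) / mean = (1 - exp(-rate)) / rate.
    """
    if rate == 0.0:
        return 1.0
    return (1.0 - math.exp(-rate)) / rate
```

The monotone decrease of this fraction with dose rate is the coincidence-loss trade-off the paper characterizes when choosing the optimal dose rate.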

  16. Influence of electron dose rate on electron counting images recorded with the K2 camera.

    PubMed

    Li, Xueming; Zheng, Shawn Q; Egami, Kiyoshi; Agard, David A; Cheng, Yifan

    2013-11-01

    A recent technological breakthrough in electron cryomicroscopy (cryoEM) is the development of direct electron detection cameras for data acquisition. By bypassing the traditional phosphor scintillator and fiber optic coupling, these cameras have greatly enhanced sensitivity and detective quantum efficiency (DQE). Of the three currently available commercial cameras, the Gatan K2 Summit was designed specifically for counting individual electron events. Counting further enhances the DQE, allows for practical doubling of detector resolution and eliminates noise arising from the variable deposition of energy by each primary electron. While counting has many advantages, undercounting of electrons happens when more than one electron strikes the same area of the detector within the analog readout period (coincidence loss), which influences image quality. In this work, we characterized the K2 Summit in electron counting mode, and studied the relationship of dose rate and coincidence loss and its influence on the quality of counted images. We found that coincidence loss reduces low frequency amplitudes but has no significant influence on the signal-to-noise ratio of the recorded image. It also has little influence on high frequency signals. Images of frozen hydrated archaeal 20S proteasome (~700 kDa, D7 symmetry) recorded at the optimal dose rate retained both high-resolution signal and low-resolution contrast and enabled calculating a 3.6 Å three-dimensional reconstruction from only 10,000 particles. PMID:23968652

  17. Heart imaging by cadmium telluride gamma camera: European Program ``BIOMED'' consortium

    NASA Astrophysics Data System (ADS)

    Scheiber, Ch.; Eclancher, B.; Chambron, J.; Prat, V.; Kazandjan, A.; Jahnke, A.; Matz, R.; Thomas, S.; Warren, S.; Hage-Hali, M.; Regal, R.; Siffert, P.; Karman, M.

    1999-06-01

    Cadmium telluride semiconductor detectors (CdTe) operating at room temperature are attractive for medical imaging because of their good energy resolution providing excellent spatial and contrast resolution. The compactness of the detection system allows the building of small light camera heads which can be used for bedside imaging. A mobile pixellated gamma camera based on 2304 CdTe (pixel size: 3×3 mm, field of view: 15 cm×15 cm) has been designed for cardiac imaging. A dedicated 16-channel integrated circuit has also been designed. The acquisition hardware is fully programmable (DSP card, personal computer-based system). Analytical calculations have shown that a commercial parallel hole collimator will fit the efficiency/resolution requirements for cardiac applications. Monte-Carlo simulations predict that the Moiré effect can be reduced by a 15° tilt of the collimator with respect to the detector grid. A 16×16 CdTe module has been built for the preliminary physical tests. The energy resolution was 6.16±0.6 keV (mean ± standard deviation, n=30). Uniformity was ±10%, improving to ±1% when using a correction table. Test objects (emission data: letters 1.8 mm in width) and cold rods in scatter medium have been acquired. The CdTe images have been compared to those acquired with a conventional gamma camera.
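
    The ±10% to ±1% uniformity improvement via a correction table corresponds to an ordinary per-pixel gain (flat-field) correction. The purely multiplicative pixel model below is an illustrative assumption, not the consortium's actual correction procedure:

```python
import numpy as np

def apply_uniformity_correction(raw, flood):
    """Per-pixel gain table derived from a flood (uniform) exposure.
    Assumes a purely multiplicative pixel response model, a simplification
    of a real CdTe correction table."""
    gain = flood.mean() / flood   # correction table: one gain per pixel
    return raw * gain

# Simulated flood image with +/-10% pixel-to-pixel nonuniformity
rng = np.random.default_rng(0)
flood = 1.0 + 0.1 * rng.uniform(-1, 1, (16, 16))
corrected = apply_uniformity_correction(flood, flood)
print(float(corrected.std() / corrected.mean()))  # ~0: uniform after correction
```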

  18. Real-time depth controllable integral imaging pickup and reconstruction method with a light field camera.

    PubMed

    Jeong, Youngmo; Kim, Jonghyun; Yeom, Jiwoon; Lee, Chang-Kun; Lee, Byoungho

    2015-12-10

    In this paper, we develop a real-time depth controllable integral imaging system. With a high-frame-rate camera and a focus controllable lens, light fields from various depth ranges can be captured. According to the image plane of the light field camera, the objects in virtual and real space are recorded simultaneously. The captured light field information is converted to the elemental image in real time without pseudoscopic problems. In addition, we derive characteristics and limitations of the light field camera as a 3D broadcasting capturing device with precise geometric optics. With further analysis, the implemented system provides more accurate light fields than existing devices without depth distortion. We adopt an f-number matching method at the capture and display stages to record a more exact light field and to solve depth distortion, respectively. The algorithm allows the users to adjust the pixel mapping structure of the reconstructed 3D image in real time. The proposed method demonstrates the possibility of a handheld real-time 3D broadcasting system that is cheaper and more practical than previous methods. PMID:26836855

  19. Simulation of Mux Camera Images of the Brazil-China Satellite Earth Resources (CBERS-3)

    NASA Astrophysics Data System (ADS)

    Boggione, G. A.; Fonseca, L. M. G.; Bosch-Puig, S.; Ponzoni, F. J.

    2012-11-01

    The manipulation of the bands of a multispectral sensor for the simulation of another band represents an attractive possibility for handling remote sensing data. This work presents a simulation technique for the MUX/CBERS-3. MUX (Multispectral Camera) is a Sino-Brazilian camera that is under development and will be launched on the CBERS-3 platform. This study analyzes images generated with a degraded visual aspect, taking into account the attributes of the sensors. Visual and statistical tests were performed to confirm the effectiveness of the methodology. The result of this processing is an image containing specific information, extracted and enhanced from the original images. The objective of this study is to propose a generic method for simulating spatial resolution based on the determination of the Modulation Transfer Function (MTF) by Zernike polynomials. The simulated images are important in all applications where it is necessary to work with images at different resolutions in order to compare the impact of the change in terms of visual analysis, resolution, or the performance of automatic image analysis procedures. This research contributes to feasibility studies for future sensors; developing simulation procedures before construction of a sensor is common practice, since potential errors can be identified early. This paper suggests a band-simulation methodology that consists of applying filtering and resampling techniques to approximate the desired spatial resolution.
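
    The filtering-and-resampling step can be sketched as follows. The Gaussian frequency-domain filter and its parameters are illustrative stand-ins for the Zernike-derived MTF used in the actual MUX study:

```python
import numpy as np

def simulate_coarser_band(img, scale, mtf_sigma):
    """Simulate a lower-resolution band: low-pass filter the fine image to
    match a target MTF, then resample to the coarser grid. The Gaussian MTF
    below is an assumed stand-in for a sensor-specific (Zernike-based) MTF."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    # Gaussian MTF: transfer function of a Gaussian PSF with std mtf_sigma
    mtf = np.exp(-2 * (np.pi * mtf_sigma) ** 2 * (fx ** 2 + fy ** 2))
    blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * mtf))
    # Resample by block-averaging onto the coarser pixel grid
    return blurred.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))

fine = np.random.default_rng(0).random((64, 64))
coarse = simulate_coarser_band(fine, scale=4, mtf_sigma=1.5)
print(coarse.shape)  # (16, 16)
```

    Because the filter passes DC unchanged and block-averaging conserves the mean, the simulated band preserves the radiometric mean of the input, which is one sanity check such simulations should satisfy.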

  20. Estimating information from image colors: an application to digital cameras and natural scenes.

    PubMed

    Marín-Franch, Iván; Foster, David H

    2013-01-01

    The colors present in an image of a scene provide information about its constituent elements. But the amount of information depends on the imaging conditions and on how information is calculated. This work had two aims. The first was to derive explicitly estimators of the information available and the information retrieved from the color values at each point in images of a scene under different illuminations. The second was to apply these estimators to simulations of images obtained with five sets of sensors used in digital cameras and with the cone photoreceptors of the human eye. Estimates were obtained for 50 hyperspectral images of natural scenes under daylight illuminants with correlated color temperatures 4,000, 6,500, and 25,000 K. Depending on the sensor set, the mean estimated information available across images with the largest illumination difference varied from 15.5 to 18.0 bits and the mean estimated information retrieved after optimal linear processing varied from 13.2 to 15.5 bits (each about 85 percent of the corresponding information available). With the best sensor set, 390 percent more points could be identified per scene than with the worst. Capturing scene information from image colors depends crucially on the choice of camera sensors. PMID:22450817
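
    The paper derives dedicated information estimators; as an illustration only, a common parametric baseline is the Gaussian mutual-information formula I = ½ log₂(det Σx · det Σy / det Σxy), sketched here on synthetic sensor responses (all data and the estimator choice below are assumptions, not the authors' method):

```python
import numpy as np

def gaussian_mi_bits(x, y):
    """Gaussian (parametric) estimate of the mutual information, in bits,
    between two sets of sensor responses (rows = samples, columns =
    channels). A generic baseline estimator, not the refined estimators
    derived in the paper."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    joint = np.cov(np.hstack([x, y]).T)          # (2n x 2n) joint covariance
    n = x.shape[1]
    cx, cy = joint[:n, :n], joint[n:, n:]
    return 0.5 * np.log2(np.linalg.det(cx) * np.linalg.det(cy)
                         / np.linalg.det(joint))

# Synthetic "responses under two illuminants": strongly correlated triplets
rng = np.random.default_rng(0)
x = rng.normal(size=(5000, 3))
y = x + 0.1 * rng.normal(size=(5000, 3))
print(round(gaussian_mi_bits(x, y), 1))  # ~10 bits at this noise level
```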

  1. Real Time Speed Estimation of Moving Vehicles from Side View Images from an Uncalibrated Video Camera

    PubMed Central

    Doğan, Sedat; Temiz, Mahir Serhan; Külür, Sıtkı

    2010-01-01

    In order to estimate the speed of a moving vehicle with side view camera images, velocity vectors of a sufficient number of reference points identified on the vehicle must be found using frame images. This procedure involves two main steps. In the first step, a sufficient number of points on the vehicle is selected, and these points must be accurately tracked on at least two successive video frames. In the second step, the velocity vectors of those points are computed from the displacement vectors of the tracked points and the elapsed time. The computed velocity vectors are defined in the video image coordinate system, and the displacement vectors are measured in pixel units. The magnitudes of the computed vectors in image space must then be transformed to object space to find their absolute values. This transformation requires image-to-object space information, which in a mathematical sense is obtained from the calibration and orientation parameters of the video frame images. This paper presents proposed solutions to the problems of using side view camera images mentioned here. PMID:22399909
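
    The two-step procedure above can be sketched as follows. A single metres-per-pixel scale factor stands in for the full photogrammetric image-to-object transformation (an assumption for this sketch; the paper uses proper calibration and orientation parameters, not one scale factor):

```python
import numpy as np

def estimate_speed(pts_frame1, pts_frame2, dt, metres_per_pixel):
    """Step 1 output: matched point positions in two successive frames.
    Step 2: displacement vectors / elapsed time give pixel velocities,
    then a scale factor (assumed here) maps image space to object space."""
    disp = np.asarray(pts_frame2, float) - np.asarray(pts_frame1, float)
    speeds_px = np.linalg.norm(disp, axis=1) / dt       # pixel/s per point
    return float(np.mean(speeds_px) * metres_per_pixel)  # mean speed, m/s

# Two points tracked over one frame at 25 fps; 0.05 m/pixel is made up
p1 = [(100, 200), (150, 210)]
p2 = [(110, 200), (160, 210)]
print(estimate_speed(p1, p2, dt=1 / 25, metres_per_pixel=0.05))  # 12.5 m/s
```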

  2. CCD camera baseline calibration and its effects on imaging processing and laser beam analysis

    NASA Astrophysics Data System (ADS)

    Roundy, Carlos B.

    1997-09-01

    CCD cameras are commonly used for many imaging applications, as well as in optical instrumentation applications. These cameras have many excellent characteristics for both scene imaging and laser beam analysis. However, CCD cameras have two characteristics that limit their potential performance. The first limiting factor is the baseline drift of the camera. If the baseline drifts below the digitizer zero, data in the background are lost and uncorrectable. If the baseline drifts above the digitizer zero, then a false background is introduced into the scene. This false background is partially correctable by taking a background frame with no input image, and then subtracting that from each imaged frame. ('Partially correctable' is explained in detail later.) The second characteristic that inhibits CCD cameras is their high level of random noise. A typical CCD camera used with an 8-bit digitizer yielding 256 counts has 2 to 6 counts of random noise in the baseline. The noise is typically Gaussian, and goes both positive and negative about a mean or average baseline level. When normal baseline subtraction occurs, the negative noise components are truncated, leaving only the positive components. These lost negative noise components can distort measurements that rely on a low intensity background. Situations exist in which the baseline offset and lost negative noise components are very significant. For example, in image processing, when attempting to distinguish data with a very low contrast between objects, the contrast is compromised by the loss of the negative noise. Second, the measurement of laser beam widths requires analysis of very low intensity signals far out into the wings of the beam. The intensity is low, but the area is large, and so even small distortions can create significant errors in measuring beam width. The effect of baseline error is particularly significant on the measurement of a laser beam width. 
This measurement is very important because it gives the size of the beam at the measurement point, it is used in laser divergence measurement, and it is critical for realistic measurement of M2, the ultimate criterion for the quality of a laser beam. One measurement of laser beam width, called second moment, or D4(sigma) , which is the ISO definition of a true laser beam width, is especially sensitive to noise in the baseline. The D4(sigma) measurement method integrates all signals far out into the wings of the beam, and gives particular weight to the noise and signal in the wings. It is impossible to make this measurement without the negative noise components, and without other special algorithms to limit the effect of noise in the wings.
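
    The second-moment width described above can be computed directly. This sketch implements the standard D4σ definition (four times the intensity-weighted standard deviation along each axis) on a signed image, which is exactly why the negative noise components must survive baseline subtraction:

```python
import numpy as np

def d4sigma_widths(img):
    """Second-moment (D4sigma) beam widths: 4x the standard deviation of
    the intensity distribution along each axis. Operates on a signed,
    baseline-subtracted image so that negative noise values cancel
    positive ones instead of biasing the wings."""
    img = np.asarray(img, float)
    y, x = np.indices(img.shape)
    total = img.sum()
    cx, cy = (x * img).sum() / total, (y * img).sum() / total  # centroid
    var_x = ((x - cx) ** 2 * img).sum() / total
    var_y = ((y - cy) ** 2 * img).sum() / total
    return 4 * np.sqrt(var_x), 4 * np.sqrt(var_y)

# Synthetic Gaussian beam with sigma = 10 pixels: D4sigma should be ~40
yy, xx = np.mgrid[0:201, 0:201]
beam = np.exp(-(((xx - 100) ** 2 + (yy - 100) ** 2) / (2 * 10.0 ** 2)))
wx, wy = d4sigma_widths(beam)
print(round(wx, 1), round(wy, 1))  # ~40.0 ~40.0
```

    Adding a small positive baseline offset to `beam` before calling this function inflates the result noticeably, illustrating the sensitivity the abstract describes.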

  3. Technique for improving the quality of images from digital cameras using ink-jet printers and smoothed RGB transfer curves

    NASA Astrophysics Data System (ADS)

    Sampat, Nitin; Grim, John F.; O'Hara, James E.

    1998-04-01

    The digital camera market is growing at an explosive rate. At the same time, the quality of photographs printed on ink-jet printers continues to improve. Most consumer cameras are designed with the monitor as the target output device, not the printer. When a user is printing images from a camera, he/she needs to optimize the camera and printer combination in order to maximize image quality. We describe the details of one such method for improving image quality using an AGFA digital camera and ink-jet printer combination. Using Adobe PhotoShop, we generated optimum red, green and blue transfer curves that match the scene content to the printer's output capabilities. Application of these curves to the original digital image resulted in a print with more shadow detail, no loss of highlight detail, a smoother tone scale, and more saturated colors. The image also exhibited an improved tonal scale and was visually more pleasing than those captured and printed without any 'correction'. While we report the results for one camera-printer combination, we tested this technique on numerous digital camera and printer combinations and in each case produced a better looking image. We also discuss the problems we encountered in implementing this technique.
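
    Applying per-channel transfer curves amounts to a lookup-table operation on each RGB channel. The gamma-style curve below is illustrative only; the paper's AGFA-specific curves were crafted by hand in Photoshop:

```python
import numpy as np

def apply_transfer_curves(img_rgb, curves):
    """Apply per-channel transfer curves as 256-entry lookup tables,
    the same kind of adjustment the authors built in Photoshop."""
    out = np.empty_like(img_rgb)
    for c in range(3):
        out[..., c] = curves[c][img_rgb[..., c]]
    return out

# Smooth gamma-style curve that lifts shadows (illustrative, not the
# paper's measured curve); endpoints 0 and 255 stay fixed
gamma = 0.8
lut = (255 * (np.arange(256) / 255.0) ** gamma).round().astype(np.uint8)
curves = [lut, lut, lut]

img = np.zeros((2, 2, 3), np.uint8)
img[0, 0] = (64, 64, 64)                 # a dark "shadow" pixel
out = apply_transfer_curves(img, curves)
print(out[0, 0])  # shadow value lifted above 64
```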

  4. A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object

    NASA Astrophysics Data System (ADS)

    Winkler, A. W.; Zagar, B. G.

    2013-08-01

    An important step in the process of optical steel coil quality assurance is to measure the proportions of width and radius of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. To this end, an adaptive least-squares algorithm is applied to fit parametrized curves to the true coil outline detected in the acquired image. The employed model allows for strictly separating the intrinsic and the extrinsic parameters. Thus, the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized even to measure other solids which cannot be characterized by the identification of simple geometric primitives.

  5. Optical character recognition of camera-captured images based on phase features

    NASA Astrophysics Data System (ADS)

    Diaz-Escobar, Julia; Kober, Vitaly

    2015-09-01

    Nowadays most digital information is obtained using mobile devices, especially smartphones. In particular, this brings the opportunity for optical character recognition in camera-captured images. For this reason many recognition applications have recently been developed, such as recognition of license plates, business cards, receipts and street signs; document classification; augmented reality; language translation; and so on. Camera-captured images are usually affected by geometric distortions, nonuniform illumination, shadows, and noise, which make the recognition task difficult for existing systems. It is well known that the Fourier phase contains much important information independently of the Fourier magnitude. So, in this work we propose a phase-based recognition system exploiting phase-congruency features for illumination/scale invariance. The performance of the proposed system is tested in terms of misclassifications and false alarms with the help of computer simulation.

  6. Geiger-mode APD camera system for single-photon 3D LADAR imaging

    NASA Astrophysics Data System (ADS)

    Entwistle, Mark; Itzler, Mark A.; Chen, Jim; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir

    2012-06-01

    The unparalleled sensitivity of 3D LADAR imaging sensors based on single photon detection provides substantial benefits for imaging at long stand-off distances and minimizing laser pulse energy requirements. To obtain 3D LADAR images with single photon sensitivity, we have demonstrated focal plane arrays (FPAs) based on InGaAsP Geiger-mode avalanche photodiodes (GmAPDs) optimized for use at either 1.06 μm or 1.55 μm. These state-of-the-art FPAs exhibit excellent pixel-level performance and the capability for 100% pixel yield on a 32 x 32 format. To realize the full potential of these FPAs, we have recently developed an integrated camera system providing turnkey operation based on FPGA control. This system implementation enables the extremely high frame-rate capability of the GmAPD FPA, and frame rates in excess of 250 kHz (for 0.4 μs range gates) can be accommodated using an industry-standard CameraLink interface in full configuration. Real-time data streaming for continuous acquisition of 2 μs range gate point cloud data with 13-bit time-stamp resolution at 186 kHz frame rates has been established using multiple solid-state storage drives. Range gate durations spanning 4 ns to 10 μs provide broad operational flexibility. The camera also provides real-time signal processing in the form of multi-frame gray-scale contrast images and single-frame time-stamp histograms, and automated bias control has been implemented to maintain a constant photon detection efficiency in the presence of ambient temperature changes. A comprehensive graphical user interface has been developed to provide complete camera control using a simple serial command set, and this command set supports highly flexible end-user customization.

  7. Portable retinal imaging for eye disease screening using a consumer-grade digital camera

    NASA Astrophysics Data System (ADS)

    Barriga, Simon; Larichev, Andrey; Zamora, Gilberto; Soliz, Peter

    2012-03-01

    The development of affordable means to image the retina is an important step toward the implementation of eye disease screening programs. In this paper we present the i-RxCam, a low-cost, hand-held, retinal camera for widespread applications such as tele-retinal screening for eye diseases like diabetic retinopathy (DR), glaucoma, and age-related ocular diseases. Existing portable retinal imagers do not meet the requirements of a low-cost camera with sufficient technical capabilities (field of view, image quality, portability, battery power, and ease-of-use) to be distributed widely to low volume clinics, such as the offices of single primary care physicians serving rural communities. The i-RxCam uses a Nikon D3100 digital camera body. The camera has a CMOS sensor with 14.8 million pixels. We use a 50mm focal lens that gives a retinal field of view of 45 degrees. The internal autofocus can compensate for about 2D (diopters) of focusing error. The light source is an LED produced by Philips with a linear emitting area that is transformed using a light pipe to the optimal shape at the eye pupil, an annulus. To eliminate corneal reflex we use a polarization technique in which the light passes through a nano-wire polarizer plate. This is a novel type of polarizer featuring high polarization separation (contrast ratio of more than 1000) and very large acceptance angle (>45 degrees). The i-RxCam approach will yield a significantly more economical retinal imaging device that would allow mass screening of the at-risk population.

  8. X-ray and gamma-ray imaging with multiple-pinhole cameras using a posteriori image synthesis.

    NASA Technical Reports Server (NTRS)

    Groh, G.; Hayat, G. S.; Stroke, G. W.

    1972-01-01

    In 1968, Dicke had suggested that multiple-pinhole camera systems would have significant advantages concerning the SNR in X-ray and gamma-ray astronomy if the multiple images could be somehow synthesized into a single image. The practical development of an image-synthesis method based on these suggestions is discussed. A formulation of the SNR gain theory which is particularly suited for dealing with the proposal by Dicke is considered. It is found that the SNR gain is by no means uniform in all X-ray astronomy applications.

  9. Intercomparison of SO2 camera systems for imaging volcanic gas plumes

    NASA Astrophysics Data System (ADS)

    Kern, Christoph; Lübcke, Peter; Bobrowski, Nicole; Campion, Robin; Mori, Toshiya; Smekens, Jean-François; Stebel, Kerstin; Tamburello, Giancarlo; Burton, Mike; Platt, Ulrich; Prata, Fred

    2015-07-01

    SO2 camera systems are increasingly being used to image volcanic gas plumes. The ability to derive SO2 emission rates directly from the acquired imagery at high time resolution allows volcanic process studies that incorporate other high time-resolution datasets. Though the general principles behind the SO2 camera have remained the same for a number of years, recent advances in CCD technology and an improved understanding of the physics behind the measurements have driven a continuous evolution of the camera systems. Here we present an intercomparison of seven different SO2 cameras. In the first part of the experiment, the various technical designs are compared and the advantages and drawbacks of individual design options are considered. Though the ideal design was found to be dependent on the specific application, a number of general recommendations are made. Next, a time series of images recorded by all instruments at Stromboli Volcano (Italy) is compared. All instruments were easily able to capture SO2 clouds emitted from the summit vents. Quantitative comparison of the SO2 load in an individual cloud yielded an intra-instrument precision of about 12%. From the imagery, emission rates were then derived according to each group's standard retrieval process. A daily average SO2 emission rate of 61 ± 10 t/d was calculated. Due to differences in spatial integration methods and plume velocity determination, the time-dependent progression of SO2 emissions varied significantly among the individual systems. However, integration over distinct degassing events yielded comparable SO2 masses. Based on the intercomparison data, we find an approximate 1-sigma precision of 20% for the emission rates derived from the various SO2 cameras. Though it may still be improved in the future, this is currently within the typical accuracy of the measurement and is considered sufficient for most applications.
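
    The emission-rate retrieval step described above (integrate SO2 column densities along a cross-section perpendicular to transport, multiply by plume velocity) can be sketched as follows. The units, the one-dimensional cross-section, and all numbers are illustrative assumptions:

```python
import numpy as np

def so2_emission_rate(column_kg_m2, pixel_size_m, plume_speed_m_s):
    """Standard SO2-camera retrieval step: integrate SO2 column densities
    along a plume cross-section and multiply by plume velocity. The 1-D
    cross-section and single velocity are simplifying assumptions; real
    retrievals differ in spatial integration and velocity determination
    (the source of the spread the abstract reports)."""
    cross_section = np.asarray(column_kg_m2, float)
    kg_per_m = cross_section.sum() * pixel_size_m   # integral along section
    kg_per_s = kg_per_m * plume_speed_m_s           # mass flux
    return kg_per_s * 86400 / 1000.0                # tonnes per day

# Toy cross-section: uniform 1e-3 kg/m^2 over 50 pixels of 2 m, 5 m/s plume
rate = so2_emission_rate([1e-3] * 50, pixel_size_m=2.0, plume_speed_m_s=5.0)
print(round(rate, 1))  # t/d
```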

  10. Intercomparison of SO2 camera systems for imaging volcanic gas plumes

    USGS Publications Warehouse

    Kern, Christoph; Lübcke, Peter; Bobrowski, Nicole; Campion, Robin; Mori, Toshiya; Smekens, Jean-Francois; Stebel, Kerstin; Tamburello, Giancarlo; Burton, Mike; Platt, Ulrich; Prata, Fred

    2015-01-01

    SO2 camera systems are increasingly being used to image volcanic gas plumes. The ability to derive SO2 emission rates directly from the acquired imagery at high time resolution allows volcanic process studies that incorporate other high time-resolution datasets. Though the general principles behind the SO2 camera have remained the same for a number of years, recent advances in CCD technology and an improved understanding of the physics behind the measurements have driven a continuous evolution of the camera systems. Here we present an intercomparison of seven different SO2 cameras. In the first part of the experiment, the various technical designs are compared and the advantages and drawbacks of individual design options are considered. Though the ideal design was found to be dependent on the specific application, a number of general recommendations are made. Next, a time series of images recorded by all instruments at Stromboli Volcano (Italy) is compared. All instruments were easily able to capture SO2 clouds emitted from the summit vents. Quantitative comparison of the SO2 load in an individual cloud yielded an intra-instrument precision of about 12%. From the imagery, emission rates were then derived according to each group's standard retrieval process. A daily average SO2 emission rate of 61 ± 10 t/d was calculated. Due to differences in spatial integration methods and plume velocity determination, the time-dependent progression of SO2 emissions varied significantly among the individual systems. However, integration over distinct degassing events yielded comparable SO2 masses. Based on the intercomparison data, we find an approximate 1-sigma precision of 20% for the emission rates derived from the various SO2 cameras. Though it may still be improved in the future, this is currently within the typical accuracy of the measurement and is considered sufficient for most applications.

  11. Camera model and calibration process for high-accuracy digital image metrology of inspection planes

    NASA Astrophysics Data System (ADS)

    Correia, Bento A. B.; Dinis, Joao

    1998-10-01

    High accuracy digital image based metrology must rely on an integrated model of image generation that is able to consider simultaneously the geometry of the camera vs. object positioning, and the conversion of the optical image on the sensor into an electronic digital format. In applications of automated visual inspection involving the analysis of approximately plane objects, these models are generally simplified in order to facilitate the process of camera calibration. In this context, the lack of rigor in the determination of the intrinsic parameters in such models is particularly relevant. Aiming at the high accuracy metrology of contours of objects lying on an analysis plane, and involving sub-pixel measurements, this paper presents a three-stage camera model that includes an extrinsic component of perspective distortion and the intrinsic components of radial lens distortion and sensor misalignment. The latter two factors are crucial in applications of machine vision that rely on the use of low cost optical components. A polynomial model for the negative radial lens distortion of wide field of view CCTV lenses is also established.
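
    A polynomial radial distortion model of the general kind mentioned above can be sketched as r_d = r(1 + k1·r² + k2·r⁴) on normalized image coordinates; a negative k1 produces the barrel (negative radial) distortion typical of wide field of view CCTV lenses. The coefficients below are made up, and this generic form is an assumption, not the paper's fitted model:

```python
import numpy as np

def apply_radial_distortion(xy_norm, k1, k2=0.0):
    """Generic polynomial radial distortion on normalized coordinates:
    r_d = r * (1 + k1*r^2 + k2*r^4). Negative k1 pulls points toward the
    optical centre (barrel distortion). Coefficients are illustrative."""
    xy = np.asarray(xy_norm, float)
    r2 = (xy ** 2).sum(axis=-1, keepdims=True)      # squared radius per point
    return xy * (1.0 + k1 * r2 + k2 * r2 ** 2)

pts = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 1.0]])
out = apply_radial_distortion(pts, k1=-0.2)
print(out)  # centre unchanged; outer points pulled inward
```

    Calibration runs this model in the other direction: fit k1 (and k2) so that known straight edges map back to straight lines.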

  12. Georeferencing airborne images from a multiple digital camera system by GPS/INS

    NASA Astrophysics Data System (ADS)

    Mostafa, Mohamed Mohamed Rashad

    2000-10-01

    In this thesis, the development and testing of an airborne fully digital multi-sensor system for kinematic mapping is presented. The system acquires two streams of data, namely navigation data and imaging data. The navigation data are obtained by integrating an accurate strapdown Inertial Navigation System with two GPS receivers. The imaging data are acquired by two digital cameras, configured in such a way so as to reduce their geometric limitations. The two digital cameras capture strips of overlapping nadir and oblique images. The INS/GPS-derived trajectory contains the full translational and rotational motion of the carrier aircraft. Thus, image exterior orientation information is extracted from the trajectory, during postprocessing. This approach eliminates the need for ground control when computing 3D positions of objects that appear in the field of view of the system imaging component. Test flights were conducted over the campus of The University of Calgary. Two approaches for calibrating the system are presented, namely pre-mission calibration and in-flight calibration. Testing the system in flight showed that best ground point positioning accuracy at 1:12000 average image scale is 0.2 m (RMS) in easting and northing and 0.3 m (RMS) in height. Preliminary results indicate that major applications of such a system in the future are in the field of digital mapping, at scales of 1:10000 and smaller, and the generation of digital elevation models for engineering applications.

  13. Digital image georeferencing from a multiple camera system by GPS/INS

    NASA Astrophysics Data System (ADS)

    Mostafa, Mohamed M. R.; Schwarz, Klaus-Peter

    In this paper, the development and testing of an airborne fully digital multi-sensor system for digital mapping data acquisition is presented. The system acquires two streams of data, namely, navigation (georeferencing) data and imaging data. The navigation data are obtained by integrating an accurate strapdown inertial navigation system with a differential GPS system (DGPS). The imaging data are acquired by two low-cost digital cameras, configured in such a way so as to reduce their geometric limitations. The two cameras capture strips of overlapping nadir and oblique images. The GPS/INS-derived trajectory contains the full translational and rotational motion of the carrier aircraft. Thus, image exterior orientation information is extracted from the trajectory, during post-processing. This approach eliminates the need for ground control (GCP) when computing 3D positions of objects that appear in the field of view of the system imaging component. Two approaches for calibrating the system are presented, namely, terrestrial calibration and in-flight calibration. Test flights were conducted over the campus of The University of Calgary. Testing the system showed that best ground point positioning accuracy at 1:12,000 average image scale is 0.2 m (RMS) in easting and northing and 0.3 m (RMS) in height. Preliminary results indicate that major applications of such a system in the future are in the field of digital mapping, at scales of 1:5000 and smaller, and in the generation of digital elevation models for engineering applications.

  14. Imaging microscopic structures in pathological retinas using a flood-illumination adaptive optics retinal camera

    NASA Astrophysics Data System (ADS)

    Viard, Clément; Nakashima, Kiyoko; Lamory, Barbara; Pâques, Michel; Levecq, Xavier; Château, Nicolas

    2011-03-01

    This research is aimed at characterizing in vivo differences between healthy and pathological retinal tissues at the microscopic scale using a compact adaptive optics (AO) retinal camera. Tests were performed in 120 healthy eyes and 180 eyes suffering from 19 different pathological conditions, including age-related maculopathy (ARM), glaucoma and rare diseases such as inherited retinal dystrophies. Each patient was first examined using SD-OCT and infrared SLO. Retinal areas of 4°x4° were imaged using an AO flood-illumination retinal camera based on a large-stroke deformable mirror. Contrast was finally enhanced by registering and averaging rough images using classical algorithms. Cellular-resolution images could be obtained in most cases. In ARM, AO images revealed granular contents in drusen, which were invisible in SLO or OCT images, and allowed the observation of the cone mosaic between drusen. In glaucoma cases, visual field was correlated to changes in cone visibility. In inherited retinal dystrophies, AO helped to evaluate cone loss across the retina. Other microstructures, slightly larger in size than cones, were also visible in several retinas. AO provided potentially useful diagnostic and prognostic information in various diseases. In addition to cones, other microscopic structures revealed by AO images may also be of interest in monitoring retinal diseases.
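
    The register-and-average step used above to raise contrast can be sketched with whole-pixel FFT cross-correlation against the first frame. Real AO pipelines likely use subpixel registration; the integer-shift method here is an illustrative simplification:

```python
import numpy as np

def register_and_average(frames):
    """Classical register-then-average contrast enhancement: estimate the
    translation of each frame relative to the first by FFT
    cross-correlation, undo it, and average. Integer-pixel shifts only
    (a simplification of subpixel AO registration)."""
    ref = np.asarray(frames[0], float)
    F_ref = np.fft.fft2(ref)
    acc = ref.copy()
    for frame in frames[1:]:
        frame = np.asarray(frame, float)
        # Cross-correlation peak gives the shift that re-aligns the frame
        xcorr = np.fft.ifft2(F_ref * np.conj(np.fft.fft2(frame)))
        dy, dx = np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape)
        acc += np.roll(frame, (dy, dx), axis=(0, 1))
    return acc / len(frames)

rng = np.random.default_rng(1)
ref = rng.random((32, 32))
shifted = np.roll(ref, (3, 5), axis=(0, 1))   # simulated eye-motion shift
avg = register_and_average([ref, shifted])
print(np.allclose(avg, ref))  # True
```

    Averaging N registered frames reduces uncorrelated noise by roughly √N while the aligned retinal structure adds coherently, which is what makes cellular detail emerge.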

  15. COMPACT CdZnTe-BASED GAMMA CAMERA FOR PROSTATE CANCER IMAGING

    SciTech Connect

    CUI, Y.; LALL, T.; TSUI, B.; YU, J.; MAHLER, G.; BOLOTNIKOV, A.; VASKA, P.; DeGERONIMO, G.; O'CONNOR, P.; MEINKEN, G.; JOYAL, J.; BARRETT, J.; CAMARDA, G.; HOSSAIN, A.; KIM, K.H.; YANG, G.; POMPER, M.; CHO, S.; WEISMAN, K.; SEO, Y.; BABICH, J.; LaFRANCE, N.; AND JAMES, R.B.

    2011-10-23

    In this paper, we discuss the design of a compact gamma camera for high-resolution prostate cancer imaging using Cadmium Zinc Telluride (CdZnTe or CZT) radiation detectors. Prostate cancer is a common disease in men. Nowadays, a blood test measuring the level of prostate specific antigen (PSA) is widely used for screening for the disease in males over 50, followed by (ultrasound) imaging-guided biopsy. However, PSA tests have a high false-positive rate and ultrasound-guided biopsy has a high likelihood of missing small cancerous tissues. Commercial methods of nuclear medical imaging, e.g. PET and SPECT, can functionally image the organs, and potentially find cancer tissues at early stages, but their applications in diagnosing prostate cancer have been limited by the smallness of the prostate gland and the long working distance between the organ and the detectors comprising these imaging systems. CZT is a semiconductor material with wide band-gap and relatively high electron mobility, and thus can operate at room temperature without additional cooling. CZT detectors are photon-electron direct-conversion devices, thus offering high energy-resolution in detecting gamma rays, enabling energy-resolved imaging, and reducing the background of Compton-scattering events. In addition, CZT material has high stopping power for gamma rays; for medical imaging, a few-mm-thick CZT material provides adequate detection efficiency for many SPECT radiotracers. Because of these advantages, CZT detectors are becoming popular for several SPECT medical-imaging applications. Most recently, we designed a compact gamma camera using CZT detectors coupled to an application-specific-integrated-circuit (ASIC). This camera functions as a trans-rectal probe to image the prostate gland from a distance of only 1-5 cm, thus offering higher detection efficiency and higher spatial resolution. Hence, it potentially can detect prostate cancers at their early stages. 
The performance tests of this camera have been completed. The results show better than 6-mm resolution at a distance of 1 cm. Details of the test results are discussed in this paper.
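    The statement that a few millimeters of CZT give adequate detection efficiency follows from Beer-Lambert attenuation. A minimal sketch of that estimate; the attenuation coefficient used below is an assumed illustrative placeholder, not a measured CZT property:

```python
import math

def detection_efficiency(mu_per_mm: float, thickness_mm: float) -> float:
    """Fraction of incident photons that interact in a detector slab,
    from the Beer-Lambert law: efficiency = 1 - exp(-mu * t)."""
    return 1.0 - math.exp(-mu_per_mm * thickness_mm)

# Illustrative only: mu = 0.5/mm is a placeholder linear attenuation
# coefficient; a 5 mm slab then stops ~92% of incident photons.
eff = detection_efficiency(0.5, 5.0)
```

    The exponential form also shows the diminishing return of extra thickness: doubling a slab that already stops most photons adds little efficiency.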

  16. Compact CdZnTe-based gamma camera for prostate cancer imaging

    NASA Astrophysics Data System (ADS)

    Cui, Yonggang; Lall, Terry; Tsui, Benjamin; Yu, Jianhua; Mahler, George; Bolotnikov, Aleksey; Vaska, Paul; De Geronimo, Gianluigi; O'Connor, Paul; Meinken, George; Joyal, John; Barrett, John; Camarda, Giuseppe; Hossain, Anwar; Kim, Ki Hyun; Yang, Ge; Pomper, Marty; Cho, Steve; Weisman, Ken; Seo, Youngho; Babich, John; LaFrance, Norman; James, Ralph B.

    2011-06-01

    In this paper, we discuss the design of a compact gamma camera for high-resolution prostate cancer imaging using Cadmium Zinc Telluride (CdZnTe or CZT) radiation detectors. Prostate cancer is a common disease in men. Nowadays, a blood test measuring the level of prostate-specific antigen (PSA) is widely used for screening for the disease in males over 50, followed by (ultrasound) imaging-guided biopsy. However, PSA tests have a high false-positive rate, and ultrasound-guided biopsy has a high likelihood of missing small cancerous tissues. Commercial methods of nuclear medical imaging, e.g. PET and SPECT, can functionally image the organs and potentially find cancer tissues at early stages, but their applications in diagnosing prostate cancer have been limited by the smallness of the prostate gland and the long working distance between the organ and the detectors comprising these imaging systems. CZT is a semiconductor material with a wide band-gap and relatively high electron mobility, and thus can operate at room temperature without additional cooling. CZT detectors are photon-electron direct-conversion devices, thus offering high energy resolution in detecting gamma rays, enabling energy-resolved imaging, and reducing the background of Compton-scattering events. In addition, CZT material has high stopping power for gamma rays; for medical imaging, a few-mm-thick CZT material provides adequate detection efficiency for many SPECT radiotracers. Because of these advantages, CZT detectors are becoming popular for several SPECT medical-imaging applications. Most recently, we designed a compact gamma camera using CZT detectors coupled to an application-specific integrated circuit (ASIC). This camera functions as a trans-rectal probe to image the prostate gland from a distance of only 1-5 cm, thus offering higher detection efficiency and higher spatial resolution. Hence, it potentially can detect prostate cancers at their early stages. The performance tests of this camera have been completed. The results show better than 6-mm resolution at a distance of 1 cm. Details of the test results are discussed in this paper.

  17. Advances In The Image Sensor: The Critical Element In The Performance Of Cameras

    NASA Astrophysics Data System (ADS)

    Narabu, Tadakuni

    2011-01-01

    Digital imaging technology and digital imaging products are advancing at a rapid pace. The progress of digital cameras has been particularly impressive. Image sensors now have smaller pixel size, a greater number of pixels, higher sensitivity, lower noise and a higher frame rate. Picture resolution is a function of the number of pixels of the image sensor. The more pixels there are, the smaller each pixel becomes, but the sensitivity and the charge-handling capability of each pixel can be maintained or even increased by raising the quantum efficiency and the saturation capacity of the pixel per unit area. Sony's many technologies can be successfully applied to CMOS image sensor manufacturing toward sub-2.0 μm pixel pitch and beyond.
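    The trade-off stated above, smaller pixels with sensitivity held up by raising per-unit-area quantum efficiency, can be made concrete with a back-of-envelope photoelectron count. All numbers below are hypothetical illustrations:

```python
def photoelectrons(photon_flux_per_um2, pixel_pitch_um, quantum_efficiency, exposure_s):
    """Expected photoelectrons collected by one square pixel:
    incident flux * pixel area * quantum efficiency * exposure time."""
    area_um2 = pixel_pitch_um ** 2
    return photon_flux_per_um2 * area_um2 * quantum_efficiency * exposure_s

# Halving the pixel pitch quarters the collecting area, so the
# QE-exposure product must rise ~4x to keep the same signal.
big = photoelectrons(1000.0, 2.0, 0.5, 0.01)    # 2.0 um pitch pixel
small = photoelectrons(1000.0, 1.0, 0.5, 0.01)  # 1.0 um pitch pixel
```

    This is why shrinking pitch without improving quantum efficiency or full-well capacity per unit area directly costs signal-to-noise ratio.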

  18. Diffuse reflection imaging of sub-epidermal tissue haematocrit using a simple RGB camera

    NASA Astrophysics Data System (ADS)

    Leahy, Martin J.; O'Doherty, Jim; McNamara, Paul; Henricson, Joakim; Nilsson, Gert E.; Anderson, Chris; Sjoberg, Folke

    2007-05-01

    This paper describes the design and evaluation of a novel, easy-to-use tissue viability imaging system (TiVi). The system is based on the methods of diffuse reflectance spectroscopy and polarization spectroscopy. The technique has been developed as an alternative to current imaging technology in the area of microcirculation imaging, most notably optical coherence tomography (OCT) and laser Doppler perfusion imaging (LDPI). The system is based on standard digital camera technology and is sensitive to red blood cells (RBCs) in the microcirculation. Lack of clinical acceptance of both OCT and LDPI fuels the need for an objective, simple, reproducible and portable imaging method that can provide accurate measurements related to stimulus vasoactivity in the microvasculature. The limitations of these technologies are discussed in this paper. Uses of the Tissue Viability system include skin care products, drug development, and assessment of spatial and temporal aspects of vasodilation (erythema) and vasoconstriction (blanching).

  19. A correction method of the spatial distortion in planar images from γ-Camera systems

    NASA Astrophysics Data System (ADS)

    Thanasas, D.; Georgiou, E.; Giokaris, N.; Karabarbounis, A.; Maintas, D.; Papanicolas, C. N.; Polychronopoulou, A.; Stiliaris, E.

    2009-06-01

    A methodology for correcting spatial distortions in planar images for small Field Of View (FOV) γ-Camera systems based on Position Sensitive Photomultiplier Tubes (PSPMT) and pixelated scintillation crystals is described. The process utilizes a correction matrix whose elements are derived from a prototyped planar image obtained through irradiation of the scintillation crystal by a 60Co point source and without a collimator. The method was applied to several planar images of a SPECT experiment with a simple phantom construction at different detection angles. The tomographic images are obtained using the Maximum-Likelihood Expectation-Maximization (MLEM) reconstruction technique. Corrected and uncorrected images are compared and the applied correction methodology is discussed.
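    The MLEM reconstruction named above iterates a multiplicative update: forward-project the current estimate, compare with the measured counts, and back-project the ratio. A minimal 1D sketch with a toy system matrix, not the authors' implementation:

```python
def mlem(system_matrix, measured, n_iter=20):
    """Maximum-Likelihood Expectation-Maximization for y ~ Poisson(A x).
    system_matrix: list of rows, one per detector bin, over image pixels."""
    n_pix = len(system_matrix[0])
    # Sensitivity of each pixel: column sums of A (normalization term).
    sens = [sum(row[j] for row in system_matrix) for j in range(n_pix)]
    x = [1.0] * n_pix  # uniform, strictly positive initial estimate
    for _ in range(n_iter):
        # Forward projection: y_hat = A x.
        y_hat = [sum(a * xj for a, xj in zip(row, x)) for row in system_matrix]
        # Back-project the measured/estimated ratio; update multiplicatively.
        for j in range(n_pix):
            backproj = sum(row[j] * (m / yh)
                           for row, m, yh in zip(system_matrix, measured, y_hat)
                           if yh > 0)
            x[j] *= backproj / sens[j]
    return x

# Toy 2-pixel, 2-bin identity system: MLEM recovers the data exactly.
A = [[1.0, 0.0], [0.0, 1.0]]
rec = mlem(A, [10.0, 4.0])
```

    In the paper's setting the correction matrix would modify the measured planar data before this iteration; the multiplicative form guarantees the estimate stays non-negative.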

  20. Intraoperative Imaging Guidance for Sentinel Node Biopsy in Melanoma Using a Mobile Gamma Camera

    PubMed Central

    Dengel, Lynn T.; More, Mitali J.; Judy, Patricia G.; Petroni, Gina R.; Smolkin, Mark E.; Rehm, Patrice K.; Majewski, Stan; Williams, Mark B.; Slingluff, Craig L.

    2016-01-01

    Objective To evaluate the sensitivity and clinical utility of intraoperative mobile gamma camera (MGC) imaging in sentinel lymph node biopsy (SLNB) in melanoma. Background The false-negative rate for SLNB for melanoma is approximately 17%, for which failure to identify the sentinel lymph node (SLN) is a major cause. Intraoperative imaging may aid in detection of SLN near the primary site, in ambiguous locations, and after excision of each SLN. The present pilot study reports outcomes with a prototype MGC designed for rapid intraoperative image acquisition. We hypothesized that intraoperative use of the MGC would be feasible and that sensitivity would be at least 90%. Methods From April to September 2008, 20 patients underwent Tc99 sulfur colloid lymphoscintigraphy, and SLNB was performed with use of a conventional fixed gamma camera (FGC), and gamma probe followed by intraoperative MGC imaging. Sensitivity was calculated for each detection method. Intraoperative logistical challenges were scored. Cases in which MGC provided clinical benefit were recorded. Results Sensitivity for detecting SLN basins was 97% for the FGC and 90% for the MGC. A total of 46 SLN were identified: 32 (70%) were identified as distinct hot spots by preoperative FGC imaging, 31 (67%) by preoperative MGC imaging, and 43 (93%) by MGC imaging pre- or intraoperatively. The gamma probe identified 44 (96%) independent of MGC imaging. The MGC provided defined clinical benefit as an addition to standard practice in 5 (25%) of 20 patients. Mean score for MGC logistic feasibility was 2 on a scale of 1-9 (1 = best). Conclusions Intraoperative MGC imaging provides additional information when standard techniques fail or are ambiguous. Sensitivity is 90% and can be increased. This pilot study has identified ways to improve the usefulness of an MGC for intraoperative imaging, which holds promise for reducing false negatives of SLNB for melanoma. PMID:21475019

  1. Development of a high-speed CT imaging system using EMCCD camera

    NASA Astrophysics Data System (ADS)

    Thacker, Samta C.; Yang, Kai; Packard, Nathan; Gaysinskiy, Valeriy; Burkett, George; Miller, Stuart; Boone, John M.; Nagarkar, Vivek

    2009-02-01

    The limitations of current CCD-based microCT X-ray imaging systems arise from two important factors. First, readout speeds are curtailed in order to minimize system read noise, which increases significantly with increasing readout rates. Second, the afterglow associated with commercial scintillator films can introduce image lag, leading to substantial artifacts in reconstructed images, especially when the detector is operated at several hundred frames/second (fps). For high speed imaging systems, high-speed readout electronics and fast scintillator films are required. This paper presents an approach to developing a high-speed CT detector based on a novel, back-thinned electron-multiplying CCD (EMCCD) coupled to various bright, high resolution, low afterglow films. The EMCCD camera, when operated in its binned mode, is capable of acquiring data at up to 300 fps with reduced imaging area. CsI:Tl,Eu and ZnSe:Te films, recently fabricated at RMD, apart from being bright, showed very good afterglow properties, favorable for high-speed imaging. Since ZnSe:Te films were brighter than CsI:Tl,Eu films, for preliminary experiments a ZnSe:Te film was coupled to an EMCCD camera at UC Davis Medical Center. A high-throughput tungsten anode X-ray generator was used, as the X-ray fluence from a mini- or micro-focus source would be insufficient to achieve high-speed imaging. A euthanized mouse held in a glass tube was rotated 360 degrees in less than 3 seconds, while radiographic images were recorded at various readout rates (up to 300 fps); images were reconstructed using a conventional Feldkamp cone-beam reconstruction algorithm. We have found that this system allows volumetric CT imaging of small animals in approximately two seconds at ~110 to 190 μm resolution, compared to several minutes at 160 μm resolution needed for the best current systems.
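    At the quoted rates, the angular sampling of such a scan follows from simple arithmetic: a 360-degree rotation in 3 seconds at 300 fps yields 900 projections, 0.4 degrees apart. A small helper treating those figures as nominal values from the abstract:

```python
def angular_sampling(rotation_deg, rotation_s, frames_per_s):
    """Number of projections and angular step for a continuous-rotation
    CT scan acquiring one projection per camera frame."""
    n_proj = int(rotation_s * frames_per_s)
    return n_proj, rotation_deg / n_proj

# Nominal figures from the abstract: 360 degrees, ~3 s, up to 300 fps.
n, step = angular_sampling(360.0, 3.0, 300.0)
```

    Dense angular sampling like this is what lets a Feldkamp-type cone-beam reconstruction avoid view-aliasing artifacts at the stated resolutions.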

  2. Intraoperative Imaging Guidance for Sentinel Node Biopsy in Melanoma Using a Mobile Gamma Camera

    SciTech Connect

    Dengel, Lynn T; Judy, Patricia G; Petroni, Gina R; Smolkin, Mark E; Rehm, Patrice K; Majewski, Stan; Williams, Mark B

    2011-04-01

    The objective is to evaluate the sensitivity and clinical utility of intraoperative mobile gamma camera (MGC) imaging in sentinel lymph node biopsy (SLNB) in melanoma. The false-negative rate for SLNB for melanoma is approximately 17%, for which failure to identify the sentinel lymph node (SLN) is a major cause. Intraoperative imaging may aid in detection of SLN near the primary site, in ambiguous locations, and after excision of each SLN. The present pilot study reports outcomes with a prototype MGC designed for rapid intraoperative image acquisition. We hypothesized that intraoperative use of the MGC would be feasible and that sensitivity would be at least 90%. From April to September 2008, 20 patients underwent Tc99 sulfur colloid lymphoscintigraphy, and SLNB was performed with use of a conventional fixed gamma camera (FGC), and gamma probe followed by intraoperative MGC imaging. Sensitivity was calculated for each detection method. Intraoperative logistical challenges were scored. Cases in which MGC provided clinical benefit were recorded. Sensitivity for detecting SLN basins was 97% for the FGC and 90% for the MGC. A total of 46 SLN were identified: 32 (70%) were identified as distinct hot spots by preoperative FGC imaging, 31 (67%) by preoperative MGC imaging, and 43 (93%) by MGC imaging pre- or intraoperatively. The gamma probe identified 44 (96%) independent of MGC imaging. The MGC provided defined clinical benefit as an addition to standard practice in 5 (25%) of 20 patients. Mean score for MGC logistic feasibility was 2 on a scale of 1-9 (1 = best). Intraoperative MGC imaging provides additional information when standard techniques fail or are ambiguous. Sensitivity is 90% and can be increased. This pilot study has identified ways to improve the usefulness of an MGC for intraoperative imaging, which holds promise for reducing false negatives of SLNB for melanoma.

  3. Can We Trust the Use of Smartphone Cameras in Clinical Practice? Laypeople Assessment of Their Image Quality

    PubMed Central

    Boissin, Constance; Fleming, Julian; Wallis, Lee; Hasselberg, Marie

    2015-01-01

    Abstract Background: Smartphone cameras are rapidly being introduced in medical practice, among other devices for image-based teleconsultation. Little is known, however, about the actual quality of the images taken, which is the object of this study. Materials and Methods: A series of nonclinical objects (from three broad categories) were photographed by a professional photographer using three smartphones (iPhone® 4 [Apple, Cupertino, CA], Samsung [Suwon, Korea] Galaxy S2, and BlackBerry® 9800 [BlackBerry Ltd., Waterloo, ON, Canada]) and a digital camera (Canon [Tokyo, Japan] Mark II). In a Web survey a convenience sample of 60 laypeople “blind” to the types of camera assessed the quality of the photographs, individually and as best overall. We then measured how each camera scored by object category and as a whole, and whether a camera ranked best, using a Mann–Whitney U test for 2×2 comparisons. Results: There were wide variations between and within categories in the quality assessments for all four cameras. The iPhone had the highest proportion of images individually evaluated as good, and it also ranked best for more objects compared with other cameras, including the digital one. The ratings of the Samsung or the BlackBerry smartphone did not significantly differ from those of the digital camera. Conclusions: Whereas one smartphone camera ranked best more often, all three smartphones obtained results at least as good as those of the digital camera. Smartphone cameras can be a substitute for digital cameras for the purposes of medical teleconsultation. PMID:26076033
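    The pairwise comparisons above rest on the Mann–Whitney U test. Its U statistic can be computed by direct pair counting; the sketch below uses illustrative ratings and omits the tie-variance correction a library routine would apply:

```python
def mann_whitney_u(a, b):
    """U statistics for two independent samples by direct pair counting:
    each (a_i, b_j) pair contributes 1 if a_i > b_j, 0.5 on a tie, else 0."""
    u1 = sum(1.0 if x > y else 0.5 if x == y else 0.0 for x in a for y in b)
    u2 = len(a) * len(b) - u1  # U1 + U2 always equals len(a) * len(b)
    return u1, u2

# Hypothetical quality ratings for two cameras (not the study's data).
u1, u2 = mann_whitney_u([3, 4, 5, 5], [1, 2, 2, 4])
```

    The smaller of the two U values is then compared against a critical value (or converted to a normal approximation for larger samples) to decide significance.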

  4. ROSA: A High-cadence, Synchronized Multi-camera Solar Imaging System

    NASA Astrophysics Data System (ADS)

    Christian, Damian Joseph; Jess, D. B.; Mathioudakis, M.; Keenan, F. P.

    2011-05-01

    The Rapid Oscillations in the Solar Atmosphere (ROSA) instrument is a synchronized, six-camera high-cadence solar imaging instrument developed by Queen's University Belfast and recently commissioned at the Dunn Solar Telescope at the National Solar Observatory in Sunspot, New Mexico, USA, as a common-user instrument. Consisting of six 1k x 1k Peltier-cooled frame-transfer CCD cameras with very low noise (0.02 - 15 e/pixel/s), each ROSA camera is capable of full-chip readout speeds in excess of 30 Hz, and up to 200 Hz when the CCD is windowed. ROSA will allow for multi-wavelength studies of the solar atmosphere at a high temporal resolution. We will present the current instrument set-up and parameters, observing modes, and future plans, including a new high-QE camera allowing 15 Hz for Hα. Interested parties should see https://habu.pst.qub.ac.uk/groups/arcresearch/wiki/de502/ROSA.html

  5. Application of real-time single camera SLAM technology for image-guided targeting in neurosurgery

    NASA Astrophysics Data System (ADS)

    Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng

    2012-10-01

    In this paper, we propose an application of augmented reality technology for targeting tumors or anatomical structures inside the skull. The application is a combination of the technologies of MonoSLAM (Single Camera Simultaneous Localization and Mapping) and computer graphics. A stereo vision system is developed to construct geometric data of the human face for registration with CT images. Reliability and accuracy of the application are enhanced by the use of fiduciary markers fixed to the skull. MonoSLAM keeps track of the current location of the camera with respect to an augmented reality (AR) marker using the extended Kalman filter. The fiduciary markers provide a reference when the AR marker is invisible to the camera. The relationship between the markers on the face and the augmented reality marker is obtained through a registration procedure using the stereo vision system and is updated on-line. A commercially available Android-based tablet PC equipped with a 320×240 front-facing camera was used for implementation. The system is able to provide a live view of the patient overlaid by the solid models of tumors or anatomical structures, as well as the missing part of the tool inside the skull.
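    MonoSLAM's extended Kalman filter tracks a full camera-pose and map state; as a drastically simplified illustration of the predict/update cycle it runs each frame, here is a scalar Kalman filter tracking one marker coordinate. The noise values and measurements are hypothetical:

```python
def kalman_step(x, p, z, q, r):
    """One scalar predict/update cycle of a Kalman filter.
    x, p: state estimate and its variance; z: new measurement;
    q, r: process and measurement noise variances."""
    # Predict: a random-walk motion model inflates the uncertainty.
    p_pred = p + q
    # Update: blend prediction and measurement via the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x + k * (z - x)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# Track a stationary marker coordinate from noisy observations (toy data).
x, p = 0.0, 1.0
for z in [1.2, 0.8, 1.1, 0.9]:
    x, p = kalman_step(x, p, z, q=0.01, r=0.25)
```

    The real filter carries a joint state vector (camera pose plus landmark positions) and a full covariance matrix, but the gain-weighted blend of prediction and measurement is the same idea.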

  6. Fast calculation of bokeh image structure in camera lenses with multiple aspheric surfaces

    NASA Astrophysics Data System (ADS)

    Sivokon, V. P.; Thorpe, M. D.

    2014-12-01

    Three different approaches to calculating the internal structure of bokeh images in camera lenses with two aspheric surfaces are analyzed and compared: the transfer function approach, the beam propagation approach, and direct raytracing in optical design software. The transfer function approach is the fastest and provides accurate results when the peak-to-valley of the mid-spatial-frequency phase modulation induced at the lens exit pupil is below λ/10. Aspheric surfaces are shown to contribute to the bokeh structure differently, increasing the complexity of the bokeh image, especially for off-axis bokeh.

  7. Formulation of image quality prediction criteria for the Viking lander camera

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Jobson, D. J.; Taylor, E. J.; Wall, S. D.

    1973-01-01

    Image quality criteria are defined and mathematically formulated for the prediction computer program which is to be developed for the Viking lander imaging experiment. The general objective of broad-band (black and white) imagery to resolve small spatial details and slopes is formulated as the detectability of a right-circular cone with surface properties of the surrounding terrain. The general objective of narrow-band (color and near-infrared) imagery to observe spectral characteristics is formulated as the minimum detectable albedo variation. The general goal to encompass, but not exceed, the range of the scene radiance distribution within a single, commandable camera dynamic-range setting is also considered.

  8. The core of the nearby S0 galaxy NGC 7457 imaged with the HST planetary camera

    NASA Technical Reports Server (NTRS)

    Lauer, Tod R.; Faber, S. M.; Holtzman, Jon A.; Baum, William A.; Currie, Douglas G.; Ewald, S. P.; Groth, Edward J.; Hester, J. Jeff; Kelsall, T.

    1991-01-01

    A brief analysis is presented of images of the nearby S0 galaxy NGC 7457 obtained with the HST Planetary Camera. While the galaxy remains unresolved with the HST, the images reveal that any core most likely has r(c) less than 0.052 arcsec. The light distribution is consistent with a gamma = -1.0 power law inward to the resolution limit, with a possible stellar nucleus with luminosity of 10 million solar. This result represents the first observation outside the Local Group of a galaxy nucleus at this spatial resolution, and it suggests that such small, high surface brightness cores may be common.

  9. ROPtool analysis of images acquired using a noncontact handheld fundus camera (Pictor)-a pilot study.

    PubMed

    Vickers, Laura A; Freedman, Sharon F; Wallace, David K; Prakalapakorn, S Grace

    2015-12-01

    The presence of plus disease is the primary indication for treatment of retinopathy of prematurity (ROP), but its diagnosis is subjective and prone to error. ROPtool is a semiautomated computer program that quantifies vascular tortuosity and dilation. Pictor is an FDA-approved, noncontact, handheld digital fundus camera. This pilot study evaluated ROPtool's ability to analyze high-quality Pictor images of premature infants and its accuracy in diagnosing plus disease compared to clinical examination. In our small sample of images, ROPtool could trace and identify the presence of plus disease with high accuracy. PMID:26691046

  10. Electron-tracking Compton gamma-ray camera for small animal and phantom imaging

    NASA Astrophysics Data System (ADS)

    Kabuki, Shigeto; Kimura, Hiroyuki; Amano, Hiroo; Nakamoto, Yuji; Kubo, Hidetoshi; Miuchi, Kentaro; Kurosawa, Shunsuke; Takahashi, Michiaki; Kawashima, Hidekazu; Ueda, Masashi; Okada, Tomohisa; Kubo, Atsushi; Kunieda, Etsuo; Nakahara, Tadaki; Kohara, Ryota; Miyazaki, Osamu; Nakazawa, Tetsuo; Shirahata, Takashi; Yamamoto, Etsuji; Ogawa, Koichi; Togashi, Kaori; Saji, Hideo; Tanimori, Toru

    2010-11-01

    We have developed an electron-tracking Compton camera (ETCC) for medical use. Our ETCC has a wide energy dynamic range (200-1300 keV) and wide field of view (3 sr), and thus has potential for advanced medical use. To evaluate the ETCC, we imaged the head (brain) and bladder of mice that had been administered with F-18-FDG. We also imaged the head and thyroid gland of mice using double tracers of F-18-FDG and I-131 ions.

  11. Electron density study of KNiF3 by the vacuum-camera-imaging plate method.

    PubMed

    Zhurova; Zhurov; Tanaka

    1999-12-01

    The electron density measurements of KNiF(3), nickel potassium trifluoride, by the vacuum-camera-imaging plate (VCIP) method and using a four-circle diffractometer with a scintillation counter, are performed and compared. In the IP (imaging plate) case, evacuation allowed the background around peaks to be reduced 50 times, which significantly increased the accuracy of the data, especially for high-angle reflections. A new VIIPP program for visualizing and integrating the IP data was designed, in which a correction for oblique incidence was applied. The resulting electron density reproduces all the features of the accurate conventional measurement. PMID:10927433

  12. Development of a Portable 3CCD Camera System for Multispectral Imaging of Biological Samples

    PubMed Central

    Lee, Hoyoung; Park, Soo Hyun; Noh, Sang Ha; Lim, Jongguk; Kim, Moon S.

    2014-01-01

    Recent studies have suggested the need for imaging devices capable of multispectral imaging beyond the visible region, to allow for quality and safety evaluations of agricultural commodities. Conventional multispectral imaging devices lack flexibility in spectral waveband selectivity for such applications. In this paper, a recently developed portable 3CCD camera with significant improvements over existing imaging devices is presented. A beam-splitter prism assembly for 3CCD was designed to accommodate three interference filters that can be easily changed for application-specific multispectral waveband selection in the 400 to 1000 nm region. We also designed and integrated electronic components on printed circuit boards with firmware programming, enabling parallel processing, synchronization, and independent control of the three CCD sensors, to ensure the transfer of data without significant delay or data loss due to buffering. The system can stream 30 frames (3-waveband images in each frame) per second. The potential utility of the 3CCD camera system was demonstrated in the laboratory for detecting defect spots on apples. PMID:25350510

  13. Development of a portable 3CCD camera system for multispectral imaging of biological samples.

    PubMed

    Lee, Hoyoung; Park, Soo Hyun; Noh, Sang Ha; Lim, Jongguk; Kim, Moon S

    2014-01-01

    Recent studies have suggested the need for imaging devices capable of multispectral imaging beyond the visible region, to allow for quality and safety evaluations of agricultural commodities. Conventional multispectral imaging devices lack flexibility in spectral waveband selectivity for such applications. In this paper, a recently developed portable 3CCD camera with significant improvements over existing imaging devices is presented. A beam-splitter prism assembly for 3CCD was designed to accommodate three interference filters that can be easily changed for application-specific multispectral waveband selection in the 400 to 1000 nm region. We also designed and integrated electronic components on printed circuit boards with firmware programming, enabling parallel processing, synchronization, and independent control of the three CCD sensors, to ensure the transfer of data without significant delay or data loss due to buffering. The system can stream 30 frames (3-waveband images in each frame) per second. The potential utility of the 3CCD camera system was demonstrated in the laboratory for detecting defect spots on apples. PMID:25350510

  14. Evaluating the capability of time-of-flight cameras for accurately imaging a cyclically loaded beam

    NASA Astrophysics Data System (ADS)

    Lahamy, Hervé; Lichti, Derek; El-Badry, Mamdouh; Qi, Xiaojuan; Detchev, Ivan; Steward, Jeremy; Moravvej, Mohammad

    2015-05-01

    Time-of-flight cameras are used for diverse applications ranging from human-machine interfaces and gaming to robotics and earth topography. This paper aims at evaluating the capability of the Mesa Imaging SR4000 and the Microsoft Kinect 2.0 time-of-flight cameras for accurately imaging the top surface of a concrete beam subjected to fatigue loading in laboratory conditions. Whereas previous work has demonstrated the success of such sensors for measuring the response at point locations, the aim here is to measure the entire beam surface in support of the overall objective of evaluating the effectiveness of concrete beam reinforcement with steel fibre reinforced polymer sheets. After applying corrections for lens distortions to the data and differencing images over time to remove systematic errors due to internal scattering, the periodic deflections experienced by the beam have been estimated for the entire top surface of the beam and at attached witness plates. The results have been assessed by comparison with measurements from highly accurate laser displacement transducers. This study concludes that both the Microsoft Kinect 2.0 and the Mesa Imaging SR4000 are capable of sensing a moving surface with sub-millimeter accuracy once the image distortions have been modeled and removed.
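    Differencing images over time, as used above to suppress systematic range errors such as internal scattering, amounts to subtracting a reference range image so that only the deflection signal remains. A toy sketch on small range images (all values hypothetical):

```python
def difference_from_reference(frames, reference):
    """Subtract a reference range image from each frame, pixel by pixel,
    cancelling static systematic errors common to both."""
    return [[[f - r for f, r in zip(frow, rrow)]
             for frow, rrow in zip(frame, reference)]
            for frame in frames]

# Toy 2x2 range images in meters: a constant scattering bias present in
# both the unloaded reference and the loaded frame cancels out, leaving
# only the ~1 cm deflection signal.
ref = [[1.30, 1.30], [1.30, 1.30]]
loaded = [[1.30, 1.31], [1.29, 1.30]]
diff = difference_from_reference([loaded], ref)[0]
```

    The key assumption is that the systematic error is stable between the reference and the measurement frames, which holds for scene-dependent effects like internal scattering as long as the geometry changes little.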

  15. Performance of CID camera X-ray imagers at NIF in a harsh neutron environment

    SciTech Connect

    Palmer, N. E.; Schneider, M. B.; Bell, P. M.; Piston, K. W.; Moody, J. D.; James, D. L.; Ness, R. A.; Haugh, M. J.; Lee, J. J.; Romano, E. D.

    2013-09-01

    Charge-injection devices (CIDs) are solid-state 2D imaging sensors similar to CCDs, but their distinct architecture makes CIDs more resistant to ionizing radiation [1-3]. CID cameras have been used extensively for X-ray imaging at the OMEGA Laser Facility [4,5] with neutron fluences at the sensor approaching 10⁹ n/cm² (DT, 14 MeV). A CID Camera X-ray Imager (CCXI) system has been designed and implemented at NIF that can be used as a rad-hard electronic-readout alternative for time-integrated X-ray imaging. This paper describes the design and implementation of the system, calibration of the sensor for X-rays in the 3-14 keV energy range, and preliminary data acquired on NIF shots over a range of neutron yields. The upper limit of neutron fluence at which CCXI can acquire useable images is ~10⁸ n/cm², and there are noise problems that need further improvement, but the sensor has proven to be very robust in surviving high-yield shots (~10¹⁴ DT neutrons) with minimal damage.

  16. Determining 3D flow fields via multi-camera light field imaging.

    PubMed

    Truscott, Tadd T; Belden, Jesse; Nielson, Joseph R; Daily, David J; Thomson, Scott L

    2013-01-01

    In the field of fluid mechanics, the resolution of computational schemes has outpaced experimental methods and widened the gap between predicted and observed phenomena in fluid flows. Thus, a need exists for an accessible method capable of resolving three-dimensional (3D) data sets for a range of problems. We present a novel technique for performing quantitative 3D imaging of many types of flow fields. The 3D technique enables investigation of complicated velocity fields and bubbly flows. Measurements of these types present a variety of challenges to the instrument. For instance, optically dense bubbly multiphase flows cannot be readily imaged by traditional, non-invasive flow measurement techniques due to the bubbles occluding optical access to the interior regions of the volume of interest. By using Light Field Imaging we are able to reparameterize images captured by an array of cameras to reconstruct a 3D volumetric map for every time instance, despite partial occlusions in the volume. The technique makes use of an algorithm known as synthetic aperture (SA) refocusing, whereby a 3D focal stack is generated by combining images from several cameras post-capture [1]. Light Field Imaging allows for the capture of angular as well as spatial information about the light rays, and hence enables 3D scene reconstruction. Quantitative information can then be extracted from the 3D reconstructions using a variety of processing algorithms. In particular, we have developed measurement methods based on Light Field Imaging for performing 3D particle image velocimetry (PIV), extracting bubbles in a 3D field and tracking the boundary of a flickering flame. We present the fundamentals of the Light Field Imaging methodology in the context of our setup for performing 3D PIV of the airflow passing over a set of synthetic vocal folds, and show representative results from application of the technique to a bubble-entraining plunging jet. PMID:23486112
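    Synthetic aperture refocusing can be sketched in 1D: shift each camera's image by the disparity corresponding to a chosen focal plane, then average, so in-focus features reinforce while occluders blur out. The geometry and disparity values below are hypothetical, not the authors' calibration:

```python
def sa_refocus(images, shifts):
    """Shift each 1D 'image' by its per-camera disparity (in pixels)
    for a chosen focal plane, then average the aligned samples:
    a toy synthetic aperture refocusing step."""
    width = len(images[0])
    out = []
    for i in range(width):
        vals = []
        for img, s in zip(images, shifts):
            j = i + s
            if 0 <= j < width:  # ignore samples shifted out of frame
                vals.append(img[j])
        out.append(sum(vals) / len(vals))
    return out

# Three 1D views of one bright point, seen at different disparities;
# shifting by each camera's disparity realigns the point before averaging.
views = [
    [0, 0, 1, 0, 0],   # center camera
    [0, 0, 0, 1, 0],   # right camera: point appears shifted by +1
    [0, 1, 0, 0, 0],   # left camera: point appears shifted by -1
]
refocused = sa_refocus(views, [0, 1, -1])
```

    Repeating this for a range of candidate depths produces the 3D focal stack described in the abstract; features off the chosen plane average toward the background instead of reinforcing.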

  17. Application of adaptive optics in retinal imaging: a quantitative and clinical comparison with standard cameras

    NASA Astrophysics Data System (ADS)

    Barriga, E. S.; Erry, G.; Yang, S.; Russell, S.; Raman, B.; Soliz, P.

    2005-04-01

    Aim: The objective of this project was to evaluate high resolution images from an adaptive optics retinal imager through comparisons with standard film-based and standard digital fundus imagers. Methods: A clinical prototype adaptive optics fundus imager (AOFI) was used to collect retinal images from subjects with various forms of retinopathy to determine whether improved visibility into the disease could be provided to the clinician. The AOFI achieves low-order correction of aberrations through a closed-loop wavefront sensor and an adaptive optics system. The remaining high-order aberrations are removed by direct deconvolution using the point spread function (PSF) or by blind deconvolution when the PSF is not available. An ophthalmologist compared the AOFI images with standard fundus images and provided a clinical evaluation of all the modalities and processing techniques. All images were also analyzed using a quantitative image quality index. Results: This system has been tested on three human subjects (one normal and two with retinopathy). In the diabetic patient vascular abnormalities were detected with the AOFI that cannot be resolved with the standard fundus camera. Very small features, such as the fine vascular structures on the optic disc and the individual nerve fiber bundles are easily resolved by the AOFI. Conclusion: This project demonstrated that adaptive optic images have great potential in providing clinically significant detail of anatomical and pathological structures to the ophthalmologist.

  18. Improved Digitization of Lunar Mare Ridges with LROC Derived Products

    NASA Astrophysics Data System (ADS)

    Crowell, J. M.; Robinson, M. S.; Watters, T. R.; Bowman-Cisneros, E.; Enns, A. C.; Lawrence, S.

    2011-12-01

    Lunar wrinkle ridges (mare ridges) are positive-relief structures formed from compressional stress in basin-filling flood basalt deposits [1]. Previous workers have measured wrinkle ridge orientations and lengths to investigate their spatial distribution and infer basin-localized stress fields [2,3]. Although these plots include the most prominent mare ridges and their general trends, they may not have fully captured all of the ridges, particularly the smaller-scale ridges. Using Lunar Reconnaissance Orbiter Wide Angle Camera (WAC) global mosaics and derived topography (100 m pixel scale) [4], we systematically remapped wrinkle ridges in Mare Serenitatis. By comparing two WAC mosaics with different lighting geometry, and shaded relief maps made from a WAC digital elevation model (DEM) [5], we observed that some ridge segments and some smaller ridges are not visible in previous structure maps [2,3]. In the past, mapping efforts were limited by a fixed Sun direction [6,7]. For systematic mapping we created three shaded relief maps from the WAC DEM with solar azimuth angles of 0°, 45°, and 90°, and a fourth map was created by combining the three shaded reliefs into one, using a simple averaging scheme. Along with the original WAC mosaic and the WAC DEM, these four datasets were imported into ArcGIS, and the mare ridges of Imbrium, Serenitatis, and Tranquillitatis were digitized from each of the six maps. Since the mare ridges are often divided into many ridge segments [8], each major component was digitized separately, as opposed to the ridge as a whole. This strategy enhanced our ability to analyze the lengths, orientations, and abundances of these ridges. After the initial mapping was completed, the six products were viewed together to identify and resolve discrepancies in order to produce a final wrinkle ridge map. Comparing this new mare ridge map with past lunar tectonic maps, we found that many mare ridges were not recorded in the previous works.
    It was noted that in some cases the lengths and orientations of previously digitized ridges differed from those of the ridges digitized in this study. This method of multi-map digitizing allows greater accuracy in the spatial characterization of mare ridges than previous methods. We intend to map mare ridges on a global scale, creating a more comprehensive ridge map from these higher-resolution data. References Cited: [1] Schultz P.H. (1976) Moon Morphology, 308. [2] Wilhelms D.E. (1987) USGS Prof. Paper 1348, 5A-B. [3] Carr M.H. (1966) USGS Geologic Atlas of the Moon, I-498. [4] Robinson M.S. (2010) Space Sci. Rev., 150:82. [5] Scholten F. et al. (2011) LPSC XLII, 2046. [6] Fielder G. and Kiang T. (1962) The Observatory, No. 926, 8. [7] Watters T.R. and Konopliv A.S. (2001) Planetary and Space Sci., 49, 743-748. [8] Aubele J.C. (1988) LPSC XIX, 19.
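
    A minimal sketch of the multi-azimuth shading scheme described above, assuming a small DEM stored as a Python list of lists; the function names and the simple Lambertian shading model are illustrative, not the actual LROC/ArcGIS pipeline:

```python
import math

def hillshade(dem, cellsize, azimuth_deg, altitude_deg=45.0):
    """Simple Lambertian hillshade of a 2D DEM (list of lists); interior cells only."""
    az = math.radians(360.0 - azimuth_deg + 90.0)   # geographic azimuth -> math angle
    zen = math.radians(90.0 - altitude_deg)          # solar zenith angle
    rows, cols = len(dem), len(dem[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            dzdx = (dem[r][c + 1] - dem[r][c - 1]) / (2.0 * cellsize)
            dzdy = (dem[r + 1][c] - dem[r - 1][c]) / (2.0 * cellsize)
            slope = math.atan(math.hypot(dzdx, dzdy))
            aspect = math.atan2(dzdy, -dzdx)
            shade = (math.cos(zen) * math.cos(slope)
                     + math.sin(zen) * math.sin(slope) * math.cos(az - aspect))
            out[r][c] = max(0.0, shade)
    return out

def combined_hillshade(dem, cellsize, azimuths=(0.0, 45.0, 90.0)):
    """Average hillshades computed under several solar azimuths (the simple
    averaging scheme used to build the fourth map)."""
    shades = [hillshade(dem, cellsize, a) for a in azimuths]
    rows, cols = len(dem), len(dem[0])
    return [[sum(s[r][c] for s in shades) / len(shades) for c in range(cols)]
            for r in range(rows)]
```

    Averaging suppresses the azimuth-dependent invisibility of ridges that run parallel to any single illumination direction.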

  19. The computation of cloud base height from paired whole-sky imaging cameras

    SciTech Connect

    Allmen, M.C.; Kegelmeyer, W.P. Jr.

    1994-03-01

    A major goal for global change studies is to improve the accuracy of general circulation models (GCMs) capable of predicting the timing and magnitude of greenhouse gas-induced global warming. Research has shown that cloud radiative feedback is the single most important effect determining the magnitude of possible climate responses to human activity. Of particular value in reducing the uncertainties associated with cloud-radiation interactions is the measurement of cloud base height (CBH), both because it is a dominant factor in determining the infrared radiative properties of clouds with respect to the earth's surface and lower atmosphere and because CBHs are essential to measuring cloud cover fraction. We have developed a novel approach to the extraction of cloud base height from pairs of whole sky imaging (WSI) cameras. The core problem is to spatially register cloud fields from widely separated WSI cameras; once this registration is complete, triangulation provides the CBH measurements. The wide camera separation (necessary to cover the desired observation area) and the self-similarity of clouds defeat all standard matching algorithms when applied to static views of the sky. To address this, our approach is based on optical flow methods that exploit the fact that modern WSIs provide sequences of images. We describe the algorithm and present its performance as evaluated both on real data validated by ceilometer measurements and on a variety of simulated cases.
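
    Once the cloud fields are registered, the triangulation step reduces to simple geometry. A hedged sketch, assuming the matched feature lies in the vertical plane through both cameras and between them (the real WSI geometry is fully 3D):

```python
import math

def cloud_base_height(baseline_m, elev1_deg, elev2_deg):
    """Triangulated height of a cloud feature matched in two whole-sky images.

    With the feature between the cameras in their common vertical plane, the
    horizontal offsets from each camera sum to the baseline:
        h / tan(e1) + h / tan(e2) = B   =>   h = B / (cot e1 + cot e2)
    """
    cot1 = 1.0 / math.tan(math.radians(elev1_deg))
    cot2 = 1.0 / math.tan(math.radians(elev2_deg))
    return baseline_m / (cot1 + cot2)
```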

  20. Algorithm design for automated transportation photo enforcement camera image and video quality diagnostic check modules

    NASA Astrophysics Data System (ADS)

    Raghavan, Ajay; Saha, Bhaskar

    2013-03-01

    Photo enforcement devices for traffic rules such as red lights, toll, stops, and speed limits are increasingly being deployed in cities and counties around the world to ensure smooth traffic flow and public safety. These are typically unattended fielded systems, and so it is important to periodically check them for potential image/video quality problems that might interfere with their intended functionality. There is interest in automating such checks to reduce the operational overhead and human error involved in manually checking large camera device fleets. Examples of problems affecting such camera devices include exposure issues, focus drifts, obstructions, misalignment, download errors, and motion blur. Furthermore, in some cases, in addition to the sub-algorithms for individual problems, one also has to carefully design the overall algorithm and logic to check for and accurately classify these individual problems. Some of these issues can occur in tandem or have the potential to be confused for each other by automated algorithms. Examples include camera misalignment that can cause some scene elements to go out of focus for wide-area scenes or download errors that can be misinterpreted as an obstruction. Therefore, the sequence in which the sub-algorithms are utilized is also important. This paper presents an overview of these problems along with no-reference and reduced-reference image and video quality solutions to detect and classify such faults.
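
    The sequencing logic described above can be sketched as follows. The variance-of-Laplacian sharpness measure is a standard no-reference proxy, not necessarily the paper's sub-algorithm, and the thresholds are illustrative placeholders:

```python
def laplacian_variance(img):
    """No-reference sharpness proxy: variance of a 4-neighbour Laplacian.
    `img` is a 2D list of grey values; low variance suggests defocus/blur."""
    vals = []
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            lap = (img[r - 1][c] + img[r + 1][c] + img[r][c - 1]
                   + img[r][c + 1] - 4 * img[r][c])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def diagnose(img, min_mean=10.0, max_mean=245.0, min_sharpness=5.0):
    """Run checks in an order that avoids misclassification: a download error
    (e.g. a constant, truncated frame) would otherwise look like an obstruction
    or defocus.  Thresholds are hypothetical placeholders."""
    flat = [v for row in img for v in row]
    mean = sum(flat) / len(flat)
    if max(flat) == min(flat):
        return "download_error"      # constant frame: no scene content at all
    if mean < min_mean or mean > max_mean:
        return "exposure_problem"    # severely under- or over-exposed
    if laplacian_variance(img) < min_sharpness:
        return "focus_problem"
    return "ok"
```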

  1. Digital image measurement of specimen deformation based on CCD cameras and Image J software: an application to human pelvic biomechanics

    NASA Astrophysics Data System (ADS)

    Jia, Yongwei; Cheng, Liming; Yu, Guangrong; Lou, Yongjian; Yu, Yan; Chen, Bo; Ding, Zuquan

    2008-03-01

    A method of digital image measurement of specimen deformation based on CCD cameras and ImageJ software was developed. This method was used to measure the biomechanical behavior of the human pelvis. Six cadaveric specimens from the third lumbar vertebra to the proximal 1/3 of the femur were tested. The specimens, free of any structural abnormalities, were dissected of all soft tissue, sparing the hip joint capsules and the ligaments of the pelvic ring and floor. Markers with a black dot on a white background were affixed to the key regions of the pelvis. Axial loading from the proximal lumbar spine was applied by MTS in increments from 0 N to 500 N, simulating a double-leg standing stance. The anterior and lateral images of the specimen were obtained through two CCD cameras. Digital 8-bit images were processed with ImageJ, an image processing program freely available from the National Institutes of Health. The procedure included digital marker recognition, image inversion, sub-pixel reconstruction, image segmentation, and a center-of-mass algorithm based on the weighted average of pixel gray values. Vertical displacements of S1 (the first sacral vertebra) in the frontal view and the micro-angular rotation of the sacroiliac joint in the lateral view were calculated from the marker movement. The digital image measurements showed the following: marker image correlation before and after deformation was excellent, with an average correlation coefficient of about 0.983. For the 768 × 576 pixel images (pixel size 0.68 mm × 0.68 mm), the precision of the displacement detected in our experiment was about 0.018 pixels and the relative error was about 1.11‰. The average vertical displacement of S1 of the pelvis was 0.8356 ± 0.2830 mm under a vertical load of 500 N, and the average micro-angular rotation of the sacroiliac joint in the lateral view was 0.584 ± 0.221°. The load-displacement curves obtained from our optical measurement system matched the clinical results. Digital image measurement of specimen deformation based on CCD cameras and ImageJ software shows good prospects for application in biomechanical research, with the advantages of a simple optical setup, non-contact measurement, high precision, and no special requirements on the test environment.
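
    The center-of-mass step is the heart of the sub-pixel measurement. A minimal sketch, assuming the image has been inverted so the marker dot is bright on a dark background:

```python
def center_of_mass(img):
    """Sub-pixel marker centroid: pixel coordinates weighted by grey value.
    `img` is a 2D list of grey values, dark except for the bright marker dot.
    Returns (row, col) with sub-pixel precision."""
    total = wr = wc = 0.0
    for r, row in enumerate(img):
        for c, v in enumerate(row):
            total += v
            wr += r * v
            wc += c * v
    return wr / total, wc / total
```

    Tracking the same marker before and after loading gives its displacement in pixels; multiplying by the pixel size (0.68 mm here) converts to millimetres.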

  2. Characterization of digital cameras for reflected ultraviolet photography; implications for qualitative and quantitative image analysis during forensic examination.

    PubMed

    Garcia, Jair E; Wilksch, Philip A; Spring, Gale; Philp, Peta; Dyer, Adrian

    2014-01-01

    Reflected ultraviolet imaging techniques allow for the visualization of evidence normally outside the human visible spectrum. Specialized digital cameras possessing extended sensitivity can be used for recording reflected ultraviolet radiation. Currently, there is a lack of standardized methods for ultraviolet image recording and processing using digital cameras, potentially limiting their implementation and the interpretation of results. A methodology is presented for processing ultraviolet images based on linear responses and the sensitivity of the respective color channels. The methodology is applied to a Fuji S3 UVIR camera and a modified Nikon D70s camera to reconstruct their respective spectral sensitivity curves between 320 and 400 nm. This method results in images with low noise and high contrast, suitable for qualitative and/or quantitative analysis. The application of this methodology is demonstrated in the recording of latent fingerprints. PMID:24117678

  3. Digital tomographic imaging with time-modulated pseudorandom coded aperture and Anger camera.

    PubMed

    Koral, K F; Rogers, W L; Knoll, G F

    1975-05-01

    The properties of a time-modulated pseudorandom coded aperture with digital reconstruction are compared with those of conventional collimators used in gamma-ray imaging. The theory of this coded aperture is given and the signal-to-noise ratio in an element of the reconstructed image is shown to depend on the entire source distribution. Experimental results with a preliminary 4 X 4-cm pseudorandom coded aperture and an Anger camera are presented. These results include phantom and human thyroid images and tomographic images of a rat bone scan. The experimental realization of the theoretical advantages of the time-modulated coded aperture gives reason for continuing the clinical implementation and further development of the method. PMID:1194994
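
    The delta-like autocorrelation that makes pseudorandom decoding work can be shown in one dimension with a length-7 m-sequence; the flight aperture is 2D and time-modulated, so this is only an illustration of the correlation property, not the paper's reconstruction:

```python
SEQ = [1, 1, 1, -1, 1, -1, -1]   # length-7 m-sequence, 0 mapped to -1

def measure_point_source(pos, seq=SEQ):
    """Detector record for a unit point source: the aperture pattern shifted by pos."""
    n = len(seq)
    return [seq[(i - pos) % n] for i in range(n)]

def decode(meas, seq=SEQ):
    """Cyclically correlate the record with the pattern; the m-sequence's
    delta-like autocorrelation (n at zero lag, -1 elsewhere) recovers the
    source position as a sharp peak."""
    n = len(seq)
    return [sum(meas[i] * seq[(i - j) % n] for i in range(n)) for j in range(n)]
```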

  4. Non-contact imaging of venous compliance in humans using an RGB camera

    NASA Astrophysics Data System (ADS)

    Nakano, Kazuya; Satoh, Ryota; Hoshi, Akira; Matsuda, Ryohei; Suzuki, Hiroyuki; Nishidate, Izumi

    2015-04-01

    We propose a technique for non-contact imaging of venous compliance that uses a red, green, and blue (RGB) camera. Changes in blood concentration are estimated from RGB images of the skin, and a regression formula is calculated from those changes. Venous compliance is obtained from a differential form of the regression formula. In vivo experiments with human subjects confirmed that the proposed method differentiates venous compliance among individuals. In addition, an image of venous compliance is obtained by performing the above procedures for each pixel. Thus, we can measure venous compliance without physical contact with sensors and, from the resulting images, observe the spatial distribution of venous compliance, which correlates with the distribution of veins.

  5. Design and realization of an image mosaic system on the CCD aerial camera

    NASA Astrophysics Data System (ADS)

    Liu, Hai ying; Wang, Peng; Zhu, Hai bin; Li, Yan; Zhang, Shao jun

    2015-08-01

    Stitching multi-route images into a panoramic image in real time has long been difficult in aerial photography, because multi-route flight framing CCD cameras produce very large amounts of data and demand high accuracy. An automatic aerial image mosaic system based on a GPU development platform is described in this paper. Parallel computation of the SIFT feature extraction and matching modules, used for motion model parameter estimation, is achieved with CUDA technology, making it possible to stitch multiple CCD images in real time. Aerial tests showed that the mosaic system meets user requirements with 99% accuracy and a 30- to 50-fold speed improvement over a conventional mosaic system.

  6. High performance gel imaging with a commercial single lens reflex camera

    NASA Astrophysics Data System (ADS)

    Slobodan, J.; Corbett, R.; Wye, N.; Schein, J. E.; Marra, M. A.; Coope, R. J. N.

    2011-03-01

    A high performance gel imaging system was constructed using a digital single lens reflex camera with epi-illumination to image 19 × 23 cm agarose gels with up to 10,000 DNA bands each. It was found to give equivalent performance to a laser scanner in this high throughput DNA fingerprinting application using the fluorophore SYBR Green®. The specificity and sensitivity of the imager and scanner were within 1% using the same band identification software. Low and high cost color filters were also compared and it was found that with care, good results could be obtained with inexpensive dyed acrylic filters in combination with more costly dielectric interference filters, but that very poor combinations were also possible. Methods for determining resolution, dynamic range, and optical efficiency for imagers are also proposed to facilitate comparison between systems.

  7. Measuring SO2 ship emissions with an ultra-violet imaging camera

    NASA Astrophysics Data System (ADS)

    Prata, A. J.

    2013-11-01

    Over the last few years fast-sampling ultra-violet (UV) imaging cameras have been developed for use in measuring SO2 emissions from industrial sources (e.g. power plants; typical fluxes ~1-10 kg s-1) and natural sources (e.g. volcanoes; typical fluxes ~10-100 kg s-1). Generally, measurements have been made from sources rich in SO2 with high concentrations and fluxes. In this work, for the first time, a UV camera has been used to measure the much lower concentrations and fluxes of SO2 (typical fluxes ~0.01-0.1 kg s-1) in the plumes from moving and stationary ships. Some innovations and trade-offs have been made so that estimates of the fluxes and path concentrations can be retrieved in real-time. Field experiments were conducted at Kongsfjord in Ny Ålesund, Svalbard, where emissions from cruise ships were measured, and at the port of Rotterdam, Netherlands, where emissions from more than 10 different container and cargo ships were measured. In all cases SO2 path concentrations could be estimated and fluxes determined by measuring ship plume speeds simultaneously using the camera, or by using surface wind speed data from an independent source. Accuracies were compromised in some cases because of the presence of particulates in some ship emissions and the restriction to single-filter UV imagery, a requirement for fast sampling (>10 Hz) from a single camera. Typical accuracies ranged from 10-30% in path concentration and 10-40% in flux estimation. Despite the ease of use and ability to determine SO2 fluxes from the UV camera system, the limitations in accuracy and precision suggest that the system can only be used under rather ideal circumstances and that the technology currently needs further development to serve as a method to monitor ship emissions for regulatory purposes.
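
    The flux retrieval described above is, at its core, a column-density integration times an advection speed. A sketch with assumed units (column densities in g/m² sampled along a transect perpendicular to the plume motion); function and parameter names are illustrative:

```python
def so2_flux_kg_per_s(column_g_m2, pixel_width_m, plume_speed_m_s):
    """SO2 flux through a transect: integrate retrieved column densities
    across the plume, then multiply by the advection (plume) speed.

    column_g_m2    -- per-pixel SO2 column densities along the transect, g/m^2
    pixel_width_m  -- ground sample distance between transect pixels, m
    plume_speed_m_s -- plume speed from image tracking or wind data, m/s
    """
    cross_section_g_m = sum(column_g_m2) * pixel_width_m   # g per metre of travel
    return cross_section_g_m * plume_speed_m_s / 1000.0    # convert g/s -> kg/s
```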

  8. A pnCCD-based, fast direct single electron imaging camera for TEM and STEM

    NASA Astrophysics Data System (ADS)

    Ryll, H.; Simson, M.; Hartmann, R.; Holl, P.; Huth, M.; Ihle, S.; Kondo, Y.; Kotula, P.; Liebel, A.; Müller-Caspary, K.; Rosenauer, A.; Sagawa, R.; Schmidt, J.; Soltau, H.; Strüder, L.

    2016-04-01

    We report on a new camera that is based on a pnCCD sensor for applications in scanning transmission electron microscopy. Emerging new microscopy techniques demand improved detectors with regard to readout rate, sensitivity and radiation hardness, especially in scanning mode. The pnCCD is a 2D imaging sensor that meets these requirements. Its intrinsic radiation hardness permits direct detection of electrons. The pnCCD is read out at a rate of 1,150 frames per second with an image area of 264 × 264 pixels. In binning or windowing modes, the readout rate increases almost linearly, for example to 4,000 frames per second at 4× binning (264 × 66 pixels). Single electrons with energies from 300 keV down to 5 keV can be distinguished due to the high sensitivity of the detector. Three applications in scanning transmission electron microscopy are highlighted to demonstrate that the pnCCD satisfies experimental requirements, especially fast recording of 2D images. In the first application, 65536 2D diffraction patterns were recorded in 70 s. STEM images corresponding to intensities of various diffraction peaks were reconstructed. For the second application, the microscope was operated in a Lorentz-like mode. Magnetic domains were imaged in an area of 256 × 256 sample points in less than 37 seconds for a total of 65536 images, each with 264 × 132 pixels. Due to information provided by the two-dimensional images, not only the amplitude but also the direction of the magnetic field could be determined. In the third application, millisecond images of a semiconductor nanostructure were recorded to determine the lattice strain in the sample. A speed-up in measurement time by a factor of 200 could be achieved compared to a previously used camera system.

  9. MegaCam: the new Canada-France-Hawaii Telescope wide-field imaging camera

    NASA Astrophysics Data System (ADS)

    Boulade, Olivier; Charlot, Xavier; Abbon, P.; Aune, Stephan; Borgeaud, Pierre; Carton, Pierre-Henri; Carty, M.; Da Costa, J.; Deschamps, H.; Desforge, D.; Eppell, Dominique; Gallais, Pascal; Gosset, L.; Granelli, Remy; Gros, Michel; de Kat, Jean; Loiseau, Denis; Ritou, J.-.; Rouss, Jean Y.; Starzynski, Pierre; Vignal, Nicolas; Vigroux, Laurent G.

    2003-03-01

    MegaCam is an imaging camera with a 1 square degree field of view for the new prime focus of the 3.6 meter Canada-France-Hawaii Telescope. This instrument will mainly be used for large deep surveys ranging from a few to several thousands of square degrees in sky coverage and from 24 to 28.5 in magnitude. The camera is built around a CCD mosaic approximately 30 cm square, made of 40 large thinned CCD devices for a total of 20 K x 18 K pixels. It uses a custom CCD controller, a closed cycle cryocooler based on a pulse tube, a 1 m diameter half-disk as a shutter, a juke-box for filter selection, and programmable logic controllers and a fieldbus network to control the different subsystems. The instrument was delivered to the observatory on June 10, 2002 and first light is scheduled in early October 2002.

  10. Conceptual design of a camera system for neutron imaging in low fusion power tokamaks

    NASA Astrophysics Data System (ADS)

    Xie, X.; Yuan, X.; Zhang, X.; Nocente, M.; Chen, Z.; Peng, X.; Cui, Z.; Du, T.; Hu, Z.; Li, T.; Fan, T.; Chen, J.; Li, X.; Zhang, G.; Yuan, G.; Yang, J.; Yang, Q.

    2016-02-01

    The basic principles for designing a camera system for neutron imaging in low fusion power tokamaks are illustrated for the case of the HL-2A tokamak device. HL-2A has an approximately circular cross section, with total neutron yields of about 10¹² n/s under 1 MW of neutral beam injection (NBI) heating. The accuracy in determining the width of the neutron emission profile and the plasma vertical position are chosen as relevant parameters for design optimization. Typical neutron emission profiles and neutron energy spectra are calculated by the Monte Carlo method. A reference design is assumed, for which the direct and scattered neutron fluences are assessed and the neutron count profile of the neutron camera is obtained. Three other designs are presented for comparison. The reference design is found to have the best performance for assessing the width of peaked to broadened neutron emission profiles. It also performs well for the assessment of the vertical position.

  11. Preliminary Monte Carlo study of coded aperture imaging with a CZT gamma camera system for scintimammography

    NASA Astrophysics Data System (ADS)

    Alnafea, M.; Wells, K.; Spyrou, N. M.; Guy, M.

    2007-04-01

    A solid-state Cadmium Zinc Telluride (CZT) gamma camera in conjunction with a Modified Uniformly Redundant Array (MURA) Coded Aperture (CA) scintimammography (SM) system has been investigated using Monte Carlo simulation. The motivation is to utilise the enhanced energy resolution of CZT detectors compared to standard scintillation-based gamma cameras for scatter rejection. The effects of variations in lesion size and tumour-to-background ratio (TBR) were simulated in a 3D phantom geometry. Despite the enhanced energy resolution, we find that the open-field geometry associated with MURA CA imaging nonetheless requires shielding from non-specific background tracer uptake, and correction for out-of-plane activity. We find that a TBR of 20:1 is required for visualising a 10 mm wide lesion.

  12. A Method to Solve Interior and Exterior Camera Calibration Parameters for Image Resection

    NASA Technical Reports Server (NTRS)

    Samtaney, Ravi

    1999-01-01

    An iterative method is presented to solve the internal and external camera calibration parameters, given model target points and their images from one or more camera locations. The direct linear transform formulation was used to obtain a guess for the iterative method, and herein lies one of the strengths of the present method. In all test cases, the method converged to the correct solution. In general, an overdetermined system of nonlinear equations is solved in the least-squares sense. The iterative method presented is based on Newton-Raphson for solving systems of nonlinear algebraic equations. The Jacobian is analytically derived and the pseudo-inverse of the Jacobian is obtained by singular value decomposition.
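
    The core Newton-Raphson iteration can be sketched on a toy 2-unknown system with an analytic Jacobian; the paper's overdetermined calibration problem replaces the matrix inverse with an SVD pseudo-inverse, for which a direct 2×2 solve stands in here. The toy system is hypothetical, not a calibration model:

```python
def newton_raphson_2x2(f, jac, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson for a 2-equation nonlinear system: x <- x - J(x)^-1 f(x).
    The square 2x2 Jacobian is inverted in closed form; the overdetermined
    calibration case would use the pseudo-inverse (SVD) instead."""
    x, y = x0
    for _ in range(max_iter):
        f1, f2 = f(x, y)
        a, b, c, d = jac(x, y)          # J = [[a, b], [c, d]]
        det = a * d - b * c
        dx = (d * f1 - b * f2) / det    # Cramer's rule for J [dx, dy]^T = f
        dy = (a * f2 - c * f1) / det
        x, y = x - dx, y - dy
        if abs(dx) < tol and abs(dy) < tol:
            break
    return x, y

# Toy system: x^2 + y^2 = 5, x*y = 2 (one solution is x = 2, y = 1)
f = lambda x, y: (x * x + y * y - 5.0, x * y - 2.0)
jac = lambda x, y: (2 * x, 2 * y, y, x)
```

    As in the paper, a reasonable initial guess (there, from the direct linear transform) is what makes the iteration converge to the intended solution.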

  13. Megapixel imaging camera for expanded H⁻ beam measurements

    SciTech Connect

    Simmons, J.E.; Lillberg, J.W.; McKee, R.J.; Slice, R.W.; Torrez, J.H.; McCurnin, T.W.; Sanchez, P.G.

    1994-02-01

    A charge coupled device (CCD) imaging camera system has been developed as part of the Ground Test Accelerator project at the Los Alamos National Laboratory to measure the properties of a large diameter, neutral particle beam. The camera is designed to operate in the accelerator vacuum system for extended periods of time. It would normally be cooled to reduce dark current. The CCD contains 1024 × 1024 pixels with a pixel size of 19 × 19 μm² and with four-phase parallel clocking and two-phase serial clocking. The serial clock rate is 2.5×10⁵ pixels per second. Clock sequence and timing are controlled by an external logic-word generator. The DC bias voltages are likewise located externally. The camera contains circuitry to generate the analog clocks for the CCD and also contains the output video signal amplifier. Reset switching noise is removed by an external signal processor that employs delay elements to provide noise suppression by the method of double-correlated sampling. The video signal is digitized to 12 bits in an analog to digital converter (ADC) module controlled by a central processor module. Both modules are located in a VME-type computer crate that communicates via ethernet with a separate workstation where overall control is exercised and image processing occurs. Under cooled conditions the camera shows good linearity with a dynamic range of 2000 and with dark noise fluctuations of about ±1/2 ADC count. Full well capacity is about 5×10⁵ electron charges.

  14. Real-Time On-Board Processing Validation of MSPI Ground Camera Images

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Werne, Thomas A.; Bekker, Dmitriy L.

    2010-01-01

    The Earth Sciences Decadal Survey identifies a multiangle, multispectral, high-accuracy polarization imager as one requirement for the Aerosol-Cloud-Ecosystem (ACE) mission. JPL has been developing a Multiangle SpectroPolarimetric Imager (MSPI) as a candidate to fill this need. A key technology development needed for MSPI is on-board signal processing to calculate polarimetry data as imaged by each of the 9 cameras forming the instrument. With funding from NASA's Advanced Information Systems Technology (AIST) Program, JPL is solving the real-time data processing requirements to demonstrate, for the first time, how signal data at 95 Mbytes/sec over 16 channels for each of the 9 multiangle cameras in the spaceborne instrument can be reduced on-board to 0.45 Mbytes/sec. This will produce the intensity and polarization data needed to characterize aerosol and cloud microphysical properties. Using the Xilinx Virtex-5 FPGA, including PowerPC440 processors, we have implemented a least squares fitting algorithm that extracts intensity and polarimetric parameters in real-time, thereby substantially reducing the image data volume for spacecraft downlink without loss of science information.
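
    The on-board reduction amounts to fitting a small linear model to modulated samples. A sketch using a generic Malus-style model y_k = a0 + a1·cos(2φ_k) + a2·sin(2φ_k) with uniformly spaced analyzer angles, which makes the basis orthogonal so the least-squares fit reduces to projections; the actual MSPI retardance-modulation model differs:

```python
import math

def fit_polarization(samples):
    """Least-squares fit of y_k = a0 + a1*cos(2*phi_k) + a2*sin(2*phi_k) for
    analyzer angles phi_k uniformly spaced over [0, pi).  Uniform spacing makes
    the basis functions orthogonal, so the normal equations collapse to simple
    projections.  (Generic model for illustration, not the MSPI signal chain.)"""
    n = len(samples)
    phis = [k * math.pi / n for k in range(n)]
    a0 = sum(samples) / n
    a1 = 2.0 / n * sum(y * math.cos(2 * p) for y, p in zip(samples, phis))
    a2 = 2.0 / n * sum(y * math.sin(2 * p) for y, p in zip(samples, phis))
    dolp = math.hypot(a1, a2) / a0       # degree of linear polarization
    return a0, a1, a2, dolp
```

    Only the three fitted coefficients per pixel need be downlinked, which is the essence of the on-board data-volume reduction.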

  15. First responder thermal imaging cameras: development of performance metrics and test methods

    NASA Astrophysics Data System (ADS)

    Amon, Francine; Hamins, Anthony

    2006-05-01

    Thermal imaging cameras are rapidly becoming integral equipment for first responders for use in structure fires and other emergencies. Currently, there are no standardized performance metrics or test methods available to the users and manufacturers of these instruments. The Building and Fire Research Laboratory at the National Institute of Standards and Technology is developing performance evaluation techniques that combine aspects of conventional metrics such as the contrast transfer function (CTF), the minimum resolvable temperature difference (MRTD), and noise equivalent temperature difference (NETD) with test methods that accommodate the special conditions in which first responders use these instruments. First responders typically use thermal imagers when their vision is obscured due to the presence of smoke, dust, fog, and/or the lack of visible light, and in cases when the ambient temperature is uncomfortably hot. Testing has shown that image contrast, as measured using a CTF calculation, suffers when a target is viewed through obscuring media. A proposed method of replacing the trained observer required for the conventional MRTD test method with a CTF calculation is presented. A performance metric that combines thermal resolution with target temperature and sensitivity mode shifts is also being investigated. Results of this work will support the establishment of standardized performance metrics and test methods for thermal imaging cameras that are meaningful to the first responders that use them.

  16. Multi-frame image processing with panning cameras and moving subjects

    NASA Astrophysics Data System (ADS)

    Paolini, Aaron; Humphrey, John; Curt, Petersen; Kelmelis, Eric

    2014-06-01

    Imaging scenarios commonly involve erratic, unpredictable camera behavior or subjects that are prone to movement, complicating multi-frame image processing techniques. To address these issues, we developed three techniques that can be applied to multi-frame image processing algorithms in order to mitigate the adverse effects observed when cameras are panning or subjects within the scene are moving. We provide a detailed overview of the techniques and discuss the applicability of each to various movement types. In addition to this, we evaluated algorithm efficacy with demonstrated benefits using field test video, which has been processed using our commercially available surveillance product. Our results show that algorithm efficacy is significantly improved in common scenarios, expanding our software's operational scope. Our methods introduce little computational burden, enabling their use in real-time and low-power solutions, and are appropriate for long observation periods. Our test cases focus on imaging through turbulence, a common use case for multi-frame techniques. We present results of a field study designed to test the efficacy of these techniques under expanded use cases.

  17. Optimization of camera exposure durations for multi-exposure speckle imaging of the microcirculation.

    PubMed

    Kazmi, S M Shams; Balial, Satyajit; Dunn, Andrew K

    2014-07-01

    Improved Laser Speckle Contrast Imaging (LSCI) blood flow analyses that incorporate inverse models of the underlying laser-tissue interaction have been used to develop more quantitative implementations of speckle flowmetry such as Multi-Exposure Speckle Imaging (MESI). In this paper, we determine the optimal camera exposure durations required for obtaining flow information with accuracy comparable to that of the prevailing MESI implementation utilized in recent in vivo rodent studies. A looping leave-one-out (LOO) algorithm was used to identify exposure subsets, which were analyzed for accuracy against flows obtained from analysis with the original full exposure set over 9 animals comprising n = 314 regional flow measurements. From the 15 original exposures, the LOO process found 6 exposures that provide comparable accuracy, defined as deviating by no more than 10% from the original flow measurements. The optimal subset of exposures provides a basis set of camera durations for speckle flowmetry studies of the microcirculation and confers a two-fold faster acquisition rate and a 28% reduction in processing time without sacrificing accuracy. Additionally, the optimization process can be used to identify further reductions in the exposure subsets, tailoring imaging over less expansive flow distributions to enable even faster imaging. PMID:25071956
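
    The leave-one-out pruning idea can be sketched as a greedy loop; the real analysis refits the MESI speckle model for each candidate subset, for which an arbitrary `estimate` callable (a placeholder, not the paper's model) stands in here:

```python
def prune_exposures(exposures, estimate, full_estimate, max_dev=0.10):
    """Greedy leave-one-out pruning: repeatedly drop the exposure whose removal
    keeps the estimate closest to the full-set value, stopping once every
    further removal would deviate by more than `max_dev` (10% in the paper).

    `estimate` maps an exposure subset to a flow estimate; in practice this
    would be a fit of the MESI speckle model to the measured contrast values.
    """
    kept = list(exposures)
    while len(kept) > 1:
        best, best_dev = None, None
        for e in kept:
            trial = [x for x in kept if x != e]
            dev = abs(estimate(trial) - full_estimate) / abs(full_estimate)
            if best_dev is None or dev < best_dev:
                best, best_dev = e, dev
        if best_dev > max_dev:
            break                       # no removal stays within tolerance
        kept.remove(best)
    return kept
```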

  18. Optimization of camera exposure durations for multi-exposure speckle imaging of the microcirculation

    PubMed Central

    Kazmi, S. M. Shams; Balial, Satyajit; Dunn, Andrew K.

    2014-01-01

    Improved Laser Speckle Contrast Imaging (LSCI) blood flow analyses that incorporate inverse models of the underlying laser-tissue interaction have been used to develop more quantitative implementations of speckle flowmetry such as Multi-Exposure Speckle Imaging (MESI). In this paper, we determine the optimal camera exposure durations required for obtaining flow information with accuracy comparable to that of the prevailing MESI implementation utilized in recent in vivo rodent studies. A looping leave-one-out (LOO) algorithm was used to identify exposure subsets, which were analyzed for accuracy against flows obtained from analysis with the original full exposure set over 9 animals comprising n = 314 regional flow measurements. From the 15 original exposures, the LOO process found 6 exposures that provide comparable accuracy, defined as deviating by no more than 10% from the original flow measurements. The optimal subset of exposures provides a basis set of camera durations for speckle flowmetry studies of the microcirculation and confers a two-fold faster acquisition rate and a 28% reduction in processing time without sacrificing accuracy. Additionally, the optimization process can be used to identify further reductions in the exposure subsets, tailoring imaging over less expansive flow distributions to enable even faster imaging. PMID:25071956

  19. Loop closure detection by algorithmic information theory: implemented on range and camera image data.

    PubMed

    Ravari, Alireza Norouzzadeh; Taghirad, Hamid D

    2014-10-01

    In this paper the problem of loop closure detection from depth or camera image information in an unknown environment is investigated. A sparse model is constructed from a parametric dictionary for every range or camera image acquired as a mobile robot observation. In contrast to high-dimensional feature-based representations, this model reduces the dimension of the sensor measurements' representations. Although loop closure detection is a clustering problem in a high-dimensional space, little attention has been paid to the curse of dimensionality in the existing state-of-the-art algorithms. In this paper, a representation is developed from a sparse model of images, with a lower dimension than the original sensor observations. Exploiting algorithmic information theory, the representation is made invariant to geometric transformations in the sense of Kolmogorov complexity. A universal normalized metric is used to compare the complexity-based representations of image models. Finally, a distinctive property of the normalized compression distance is exploited for detecting similar places and rejecting incorrect loop closure candidates. Experimental results show the efficiency and accuracy of the proposed method in comparison to state-of-the-art algorithms and some recently proposed methods. PMID:24968363
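
The normalized compression distance (NCD) used above approximates the uncomputable normalized information distance by substituting the output size of a real compressor for Kolmogorov complexity. A minimal sketch; the abstract does not name a compressor, so the choice of zlib here is an assumption:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two byte strings.

    Approximates the (uncomputable) normalized information distance by
    substituting compressed size for Kolmogorov complexity: values near 0
    indicate highly similar observations, values near 1 unrelated ones.
    """
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

Two observations of the same place compress well jointly, giving a small NCD; this is the property exploited for accepting or rejecting loop closure candidates.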

  20. Superresolution imaging system on innovative localization microscopy technique with commonly using dyes and CMOS camera

    NASA Astrophysics Data System (ADS)

    Dudenkova, V.; Zakharov, Yu.

    2015-05-01

    Optical methods for studying biological tissues and cells at the micro- and nanoscale now step beyond the diffraction limit. Single-molecule localization techniques achieve the highest spatial resolution. One of these techniques, called bleaching/blinking assisted localization microscopy (BaLM), relies on the intrinsic bleaching and blinking behavior characteristic of commonly used fluorescent probes. This feature is the basis of BaLM image series acquisition and data analysis. In our work, the blinking of a single fluorescent spot against the background of others is revealed by subtracting successive frames of the time series. Digital estimation then gives the center of the spot as the location of the fluorescent molecule, which is transferred to a higher-resolution image according to the accuracy of the center localization, building up an image with improved resolution. This approach tolerates overlapping fluorophores and does not require single-photon sensitivity, so we use an 8.8-megapixel CMOS camera with a very small (1.55 µm) pixel size. This instrumentation, on the base of a Zeiss Axioscope 2 FS MOT, allows image transfer from the object plane to the sensor at a scale of less than 100 nm/pixel using a 20x objective, giving the same resolution and five times the field of view of an EMCCD camera with a 6 µm pixel size. To optimize the excitation light power, frame rate, and gain of the camera, we made appropriate estimations taking into account the behavior of the fluorophores and the equipment characteristics. Finally, clearly distinguishable details of the sample are obtained in the processed field of view.
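
The frame-subtraction localization step described above can be sketched as follows: differencing successive frames isolates a newly blinking emitter, and an intensity-weighted centroid around the brightest difference pixel estimates its sub-pixel position. The function name and window size are illustrative assumptions, not code from the paper:

```python
import numpy as np

def localize_blink(prev_frame, frame, win=3):
    """Estimate the sub-pixel position of a fluorophore that blinks on
    between two successive frames.

    Subtracting the previous frame suppresses static background spots;
    the brightest difference pixel is then refined with an
    intensity-weighted centroid over a (2*win+1) square neighborhood.
    """
    diff = frame.astype(float) - prev_frame.astype(float)
    diff[diff < 0] = 0.0                      # keep only blink-on signal
    r, c = np.unravel_index(np.argmax(diff), diff.shape)
    r0, r1 = max(r - win, 0), min(r + win + 1, diff.shape[0])
    c0, c1 = max(c - win, 0), min(c + win + 1, diff.shape[1])
    patch = diff[r0:r1, c0:c1]
    rows, cols = np.mgrid[r0:r1, c0:c1]
    total = patch.sum()
    return (rows * patch).sum() / total, (cols * patch).sum() / total
```

The localization accuracy of the centroid, not the pixel pitch, then sets the resolution of the reconstructed image.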

  1. Experiences Supporting the Lunar Reconnaissance Orbiter Camera: the Devops Model

    NASA Astrophysics Data System (ADS)

    Licht, A.; Estes, N. M.; Bowman-Cisneros, E.; Hanger, C. D.

    2013-12-01

    Introduction: The Lunar Reconnaissance Orbiter Camera (LROC) Science Operations Center (SOC) is responsible for instrument targeting, product processing, and archiving [1]. The LROC SOC maintains over 1,000,000 observations comprising over 300 TB of released data. Processing challenges compound with the acquisition of over 400 Gbits of observations daily, creating the need for a robust, efficient, and reliable suite of specialized software. Development Environment: The LROC SOC's software development methodology has evolved over time. Today, the development team operates in close cooperation with the systems administration team in a model known in the IT industry as DevOps. The DevOps model enables a highly productive development environment that facilitates accomplishment of key goals within tight schedules [2]. The LROC SOC DevOps model incorporates industry best practices including prototyping, continuous integration, unit testing, code coverage analysis, version control, and use of existing open source software. Scientists and researchers at LROC often prototype algorithms and scripts in a high-level language such as MATLAB or IDL. After the prototype is functionally complete, the solution is implemented as production-ready software by the developers. Following this process ensures that all controls and requirements set by the LROC SOC DevOps team are met. The LROC SOC also strives to enhance the efficiency of the operations staff through weekly presentations and informal mentoring. Many small scripting tasks are assigned to the cognizant operations personnel (end users), allowing the DevOps team to focus on more complex and mission-critical tasks. In addition to leveraging open source software, the LROC SOC has also contributed to the open source community by releasing Lunaserv [3]. Findings: The DevOps software model very efficiently provides smooth software releases and maintains team momentum. Having scientists prototype their work has proven very efficient, as developers do not need to spend time iterating over small changes. Instead, these changes are realized in early prototypes and implemented before the task reaches the developers. The development practices followed by the LROC SOC DevOps team help sustain the high level of software quality that is necessary for LROC SOC operations. Application to the Scientific Community: There is no replacement for having software developed by professional developers. While it is beneficial for scientists to write software, this activity should be seen as prototyping, which is then made production-ready by professional developers. When constructed properly, even a small development team can increase the rate of software development for a research group while creating more efficient, reliable, and maintainable products. This strategy allows scientists to accomplish more by focusing on their research rather than on software development, which may not be their primary focus. [1] Robinson et al. (2010) Space Sci. Rev. 150, 81-124. [2] DeGrandis (2011) Cutter IT Journal, Vol. 24, No. 8, 34-39. [3] Estes, N.M.; Hanger, C.D.; Licht, A.A.; Bowman-Cisneros, E.; Lunaserv Web Map Service: History, Implementation Details, Development, and Uses, http://adsabs.harvard.edu/abs/2013LPICo1719.2609E.

  2. Temperature dependent operation of PSAPD-based compact gamma camera for SPECT imaging

    PubMed Central

    Kim, Sangtaek; McClish, Mickel; Alhassen, Fares; Seo, Youngho; Shah, Kanai S.; Gould, Robert G.

    2011-01-01

    We investigated the dependence of image quality on the temperature of a position-sensitive avalanche photodiode (PSAPD)-based small animal single photon emission computed tomography (SPECT) gamma camera with a CsI:Tl scintillator. Currently, nitrogen gas cooling is preferred for operating PSAPDs in order to minimize dark current shot noise. Being able to operate a PSAPD at a relatively high temperature (e.g., 5 °C) would allow a more compact and simple cooling system for the PSAPD. In our investigation, the temperature of the PSAPD was controlled by varying the flow of cold nitrogen gas through the PSAPD module and varied from −40 °C to 20 °C. Three experiments were performed to demonstrate the performance variation over this temperature range. The point spread function (PSF) of the gamma camera was measured at various temperatures, showing the variation of the full-width-half-maximum (FWHM) of the PSF. In addition, a 99mTc-pertechnetate (140 keV) flood source was imaged and the visibility of the scintillator segmentation (16×16 array, 8 mm × 8 mm area, 400 μm pixel size) at different temperatures was evaluated. Image quality was compared at −25 °C and 5 °C using a mouse heart phantom filled with an aqueous solution of 99mTc-pertechnetate and imaged through a 0.5 mm pinhole collimator made of tungsten. The reconstructed image quality of the mouse heart phantom at 5 °C was degraded in comparison with that at −25 °C. However, the defects and structure of the mouse heart phantom were clearly observed, showing the feasibility of operating PSAPDs for SPECT imaging at 5 °C, a temperature that would not require nitrogen cooling. All PSAPD evaluations were conducted with an applied bias voltage that gave the highest gain at a given temperature. PMID:24465051
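
The FWHM of a measured PSF, the figure of merit tracked across temperatures above, can be estimated from a 1-D profile by locating the half-maximum crossings with linear interpolation. This is a generic sketch, not code from the paper:

```python
import numpy as np

def fwhm(profile, spacing=1.0):
    """Full width at half maximum of a 1-D PSF profile.

    The background pedestal is removed, and the two half-maximum
    crossings are located with linear interpolation between samples.
    spacing converts sample index to physical units (e.g. mm/pixel).
    """
    y = np.asarray(profile, dtype=float)
    y = y - y.min()                     # remove background pedestal
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i, j = above[0], above[-1]
    # Interpolate the left and right half-maximum crossings
    left = i - (y[i] - half) / (y[i] - y[i - 1]) if i > 0 else float(i)
    right = j + (y[j] - half) / (y[j] - y[j + 1]) if j < len(y) - 1 else float(j)
    return (right - left) * spacing
```

For a Gaussian PSF this recovers the familiar FWHM = 2.3548 σ relation to within the interpolation error.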

  3. MR-i: high-speed dual-cameras hyperspectral imaging FTS

    NASA Astrophysics Data System (ADS)

    Prel, Florent; Moreau, Louis; Lantagne, Stéphane; Roy, Claude; Vallières, Christian; Lévesque, Luc

    2011-11-01

    From scientific research to deployable operational solutions, Fourier-transform infrared (FT-IR) spectroradiometry is widely used for the development and enhancement of military and research applications. These applications include target IR signature characterization, development of advanced camouflage techniques, monitoring of aircraft engine plumes, meteorological sounding, and atmospheric composition analysis such as detection and identification of chemical threats. Imaging FT-IR spectrometers can generate 3D images composed of multiple spectra, one associated with every pixel of the mapped scene. These data allow accurate spatial characterization of a target's signature by spatially resolving the spectral characteristics of the observed scene. MR-i is the most recent addition to the MR product line series and generates spectral data cubes in the MWIR and LWIR. The instrument is designed to acquire the spectral signature of various scenes with high temporal, spatial, and spectral resolution. The four-port architecture of the interferometer brings modularity and upgradeability, since the two output ports of the instrument can be populated with different combinations of detectors (imaging or not). For instance, to measure over a broad spectral range from 1.3 to 13 μm, one output port can be equipped with a LWIR camera while the other port is equipped with a MWIR camera. Both ports can also be equipped with cameras serving the same spectral range but set at different sensitivity levels in order to increase the measurement dynamic range and avoid saturation of bright parts of the scene while simultaneously obtaining good measurements of the faintest parts. Various telescope options are available for the input port. An overview of the instrument capabilities will be presented, as well as test results and results from field trials for a configuration with two MWIR cameras. That specific system is dedicated to the characterization of airborne targets. The expanded dynamic range allowed by the two MWIR cameras makes it possible to simultaneously measure the spectral signature of the cold background and of the warmest elements of the scene (flares, jet engine exhausts, etc.).

  4. Development of proton CT imaging system using plastic scintillator and CCD camera.

    PubMed

    Tanaka, Sodai; Nishio, Teiji; Matsushita, Keiichiro; Tsuneda, Masato; Kabuki, Shigeto; Uesaka, Mitsuru

    2016-06-01

    A proton computed tomography (pCT) imaging system was constructed to evaluate the error of the x-ray CT (xCT)-to-WEL (water-equivalent length) conversion in treatment planning for proton therapy. In this system, the scintillation light integrated along the beam direction is captured by photographing the scintillator with a CCD camera, which enables fast and easy data acquisition. The light intensity is converted to the range of the proton beam using a light-to-range conversion table prepared beforehand, and a pCT image is reconstructed. A demonstration experiment with the pCT system was performed using a 70 MeV proton beam provided by the AVF930 cyclotron at the National Institute of Radiological Sciences. Three-dimensional pCT images were reconstructed from the experimental data. A thin structure of approximately 1 mm was clearly observed, with the spatial resolution of the pCT images at the same level as that of xCT images. pCT images of various substances were reconstructed to evaluate the pixel values of the pCT images. The image quality was investigated with regard to deterioration caused by effects including multiple Coulomb scattering. PMID:27191962
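
The light-to-range conversion described above amounts to a lookup in a pre-measured calibration table. A minimal sketch; the table values below are invented for illustration and are not from the paper:

```python
import numpy as np

# Hypothetical calibration table: integrated scintillation light
# (normalized units) versus residual proton range in water (mm).
# The numbers are invented for illustration only.
LIGHT_TABLE = np.array([0.0, 0.2, 0.45, 0.7, 0.9, 1.0])
RANGE_TABLE = np.array([40.0, 32.0, 24.0, 16.0, 8.0, 0.0])

def light_to_range(light):
    """Convert integrated light intensity to residual range by linear
    interpolation in the pre-measured calibration table."""
    return float(np.interp(light, LIGHT_TABLE, RANGE_TABLE))
```

Applying this conversion pixel by pixel turns the CCD light image into a range image from which the pCT slice is reconstructed.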

  5. Airborne imaging for heritage documentation using the Fotokite tethered flying camera

    NASA Astrophysics Data System (ADS)

    Verhoeven, Geert; Lupashin, Sergei; Briese, Christian; Doneus, Michael

    2014-05-01

    Since the beginning of aerial photography, researchers have used all kinds of devices (from pigeons, kites, poles, and balloons to rockets) to take still cameras aloft and remotely gather aerial imagery. To date, many of these unmanned devices are still used for what has been referred to as Low-Altitude Aerial Photography or LAAP. In addition to these more traditional camera platforms, radio-controlled (multi-)copter platforms have recently added a new aspect to LAAP. Although model airplanes have been around for several decades, the decreasing cost and increasing functionality and stability of ready-to-fly multi-copter systems have proliferated their use among non-hobbyists. As such, they have become a very popular tool for aerial imaging. The overwhelming number of currently available brands and types (heli-, dual-, tri-, quad-, hexa-, octo-, dodeca-, deca-hexa and deca-octocopters), together with the wide variety of navigation options (e.g. altitude and position hold, waypoint flight) and camera mounts, indicates that these platforms are here to stay for some time. Given the multitude of still camera types and the image quality they are currently capable of, endless combinations of low- and high-cost LAAP solutions are available. In addition, LAAP allows for the exploitation of new imaging techniques, as it is often only a matter of lifting the appropriate device (e.g. video cameras, thermal frame imagers, hyperspectral line sensors). Archaeologists were among the first to adopt this technology, as it provided them with a means to easily acquire essential data from a unique point of view, whether for simple illustration purposes of standing historic structures or to compute three-dimensional (3D) models and orthophotographs of excavation areas. However, even very cheap multi-copter models require certain skills to pilot safely. Additionally, malfunction or overconfidence might lift these devices to altitudes where they can interfere with manned aircraft. As such, the safe operation of these devices is still an issue, certainly when flying over locations which can be crowded (such as students on excavations or tourists walking around historic places). As the future of UAS regulation remains unclear, this talk presents an alternative approach to aerial imaging: the Fotokite. Developed at ETH Zürich, the Fotokite is a tethered flying camera, essentially a multi-copter connected to the ground with a taut tether, to achieve controlled flight. Crucially, it relies solely on onboard IMU (Inertial Measurement Unit) measurements to fly, launches in seconds, and is not classified as a UAS (Unmanned Aerial System), e.g. in the latest FAA (Federal Aviation Administration) UAS proposal. As a result it may be used for imaging cultural heritage in a variety of environments and settings with minimal training by non-experienced pilots. Furthermore, it is subject to less extensive certification, regulation, and import/export restrictions, making it a viable solution for use at a greater range of sites than traditional methods. Unlike a balloon or a kite it is not subject to particular weather conditions and, thanks to active stabilization, is capable of a variety of intelligent flight modes. Finally, it is compact and lightweight, making it easy to transport and deploy, and its lack of reliance on GNSS (Global Navigation Satellite System) makes it possible to use in urban, built-up areas. After outlining its operating principles, the talk will present some archaeological case studies in which the Fotokite was used, hereby assessing its capabilities compared to the conventional UASs on the market.

  6. Design of a smartphone-camera-based fluorescence imaging system for the detection of oral cancer

    NASA Astrophysics Data System (ADS)

    Uthoff, Ross

    Shown is the design of the Smartphone Oral Cancer Detection System (SOCeeDS). The SOCeeDS attaches to a smartphone and utilizes its embedded imaging optics and sensors to capture images of the oral cavity to detect oral cancer. Violet illumination sources excite the oral tissues to induce fluorescence. Images are captured with the smartphone's onboard camera. Areas where the tissues of the oral cavity appear darkened signify an absence of fluorescence signal, indicating a breakdown in tissue structure brought on by precancerous or cancerous conditions. With these data, the patient can seek further testing and diagnosis as needed. Proliferation of this device will give communities with limited access to healthcare professionals a tool to detect cancer in its early stages, increasing the likelihood of cancer reversal.

  7. MEMS-based thermally-actuated image stabilizer for cellular phone camera

    NASA Astrophysics Data System (ADS)

    Lin, Chun-Ying; Chiou, Jin-Chern

    2012-11-01

    This work develops an image stabilizer (IS) that is fabricated using micro-electro-mechanical system (MEMS) technology and is designed to counteract the vibrations that occur when humans use cellular phone cameras. The proposed IS has dimensions of 8.8 × 8.8 × 0.3 mm3 and is strong enough to suspend an image sensor. The process utilized to fabricate the IS includes inductively coupled plasma (ICP) etching, reactive ion etching (RIE), and flip-chip bonding. The IS is designed so that the electrical signals from the suspended image sensor are successfully routed out through signal output beams, and the maximum actuating distance of the stage exceeds 24.835 µm at a driving current of 155 mA. By integrating the MEMS device with the designed controller, the proposed IS can decrease hand tremor by 72.5%.

  8. Toward Real-time quantum imaging with a single pixel camera

    SciTech Connect

    Lawrie, Benjamin J; Pooser, Raphael C

    2013-01-01

    We present a workbench for the study of real-time quantum imaging by measuring the frame-by-frame quantum noise reduction of multi-spatial-mode twin beams generated by four wave mixing in Rb vapor. Exploiting the multiple spatial modes of this squeezed light source, we utilize spatial light modulators to selectively transmit macropixels of quantum correlated modes from each of the twin beams to a high quantum efficiency balanced detector. In low-light-level imaging applications, the ability to measure the quantum correlations between individual spatial modes and macropixels of spatial modes with a single pixel camera will facilitate compressive quantum imaging with sensitivity below the photon shot noise limit.

  9. Near-Infrared Imaging Using a High-Speed Monitoring Near Infrared Hyperspectral Camera (Compovision).

    PubMed

    Ishikawa, Daitaro; Motomura, Asako; Igarashi, Yoko; Ozaki, Yukihiro

    2015-04-01

    This review paper reports near-infrared (NIR) imaging studies using a newly developed NIR camera, Compovision. Compovision can measure a significantly wide area of 150 mm × 250 mm at high speed, within 2 to 5 s. It enables wide spectral region measurement in the 1,000-2,350 nm range at 6 nm intervals. We investigated the potential of Compovision in applications to industrial problems such as the evaluation of pharmaceutical tablets and polymers. Our studies have demonstrated that NIR imaging based on Compovision can overcome several issues such as long acquisition times and relatively low detection sensitivity. NIR imaging with Compovision is strongly expected to be applied not only to pharmaceutical tablet monitoring and polymer characterization but also to various other applications such as food products, biomedical substances, and organic and inorganic materials. PMID:26197564

  10. First experience DaTSCAN imaging using cadmium-zinc-telluride gamma camera SPECT.

    PubMed

    Farid, Karim; Queneau, Mathieu; Guernou, Mohamed; Lussato, David; Poullias, Xavier; Petras, Slavomir; Caillat-Vigneron, Nadine; Songy, Bernard

    2012-08-01

    We report our first experience of brain DaTSCAN SPECT imaging using a cadmium-zinc-telluride gamma camera (CZT-GC) in 2 cases: a 64-year-old patient suffering from essential tremor and a 73-year-old patient presenting with an atypical bilateral extrapyramidal syndrome. In both cases, 2 different acquisitions were performed and compared, using a double-head Anger-GC, followed immediately by a second acquisition on the CZT-GC. There were no significant visual differences between the images generated by the different GCs. Our first results suggest that DaTSCAN SPECT is feasible on a CZT-GC, allowing both injected dose and acquisition time reductions without compromising image quality. This experience needs to be evaluated in larger series. PMID:22785531

  11. Measuring multivariate subjective image quality for still and video cameras and image processing system components

    NASA Astrophysics Data System (ADS)

    Nyman, Göte; Leisti, Tuomas; Lindroos, Paul; Radun, Jenni; Suomi, Sini; Virtanen, Toni; Olives, Jean-Luc; Oja, Joni; Vuori, Tero

    2008-01-01

    The subjective quality of an image is a non-linear product of several simultaneously contributing subjective factors, such as the experienced naturalness, colorfulness, lightness, and clarity. We have studied subjective image quality by using a hybrid qualitative/quantitative method in order to disclose the attributes relevant to experienced image quality. We describe our approach to mapping the image quality attribute space in three cases: a still studio image, video clips of a talking head and moving objects, and the use of image processing pipes for 15 still image contents. Naive observers participated in three image quality research contexts in which they were asked to freely and spontaneously describe the quality of the presented test images. Standard viewing conditions were used. The data show which attributes are most relevant for each test context, and how they differentiate between the selected image contents and processing systems. The role of non-HVS-based image quality analysis is discussed.

  12. ELOp EO/IR LOROP camera: image stabilization for dual-band whiskbroom scanning photography

    NASA Astrophysics Data System (ADS)

    Petrushevsky, Vladimir; Karklinsky, Yehoshua; Chernobrov, Arie

    2003-01-01

    The ELOP dual-band LOROP camera was designed as the payload of a 300-gal reconnaissance pod capable of being carried by a single-engined fighter aircraft such as the F-16. The optical arrangement provides coincidence of the IR and EO optical axes, as well as equality of the fields of view. These features allow the same scan coverage to be achieved, and the same gimbal control software to be used, for visible-light-only, IR-only, and simultaneous dual-band photography. Because of the intensive, broadband vibration existing in the pod environment, special attention was given to the image stabilization system. Nevertheless, residual vibration still exists in a wide frequency range spreading from zero frequency to the detector integration rate and beyond. Hence, evaluation of the camera performance could not rely on the well-known analytical solutions for the motion MTF. The image motion is defined in terms of its power spectral density throughout the whole frequency range of interest, and the expected motion MTF is calculated numerically using a statistical approach. Aspects of applying a staggered-structure IR detector to oblique photography are discussed. In particular, the ground footprint of the IR detector is much wider along-scan than that of the EO detector, requiring measures to prevent IR image deformation.

  13. Precise Trajectory Reconstruction of CE-3 Hovering Stage By Landing Camera Images

    NASA Astrophysics Data System (ADS)

    Yan, W.; Liu, J.; Li, C.; Ren, X.; Mu, L.; Gao, X.; Zeng, X.

    2014-12-01

    Chang'E-3 (CE-3) is part of the second phase of the Chinese Lunar Exploration Program, incorporating a lander and China's first lunar rover. It landed successfully on 14 December 2013. The hovering and obstacle avoidance stages are essential to the safe soft landing of CE-3, so precise spacecraft trajectories in these stages are of great significance for verifying the orbital control strategy, optimizing the orbital design, accurately determining the landing site of CE-3, and analyzing the geological background of the landing site. Because these stages last just 25 s, it is difficult to capture the spacecraft's subtle movements with the Measurement and Control System or with radio observations. Under this background, trajectory reconstruction based on landing camera images can be used to obtain the trajectory of CE-3 because of its technical advantages, such as independence from kinetic models of the spacecraft in the lunar gravity field, high resolution, and high frame rate. In this paper, the trajectory of CE-3 before and after entering the hovering stage was reconstructed from landing camera images from frame 3092 to frame 3180, covering about 9 s, using Single Image Space Resection (SISR). The results show that the reconstructed trajectory reveals CE-3's subtle movements during the hovering stage. The horizontal accuracy of the spacecraft position was up to 1.4 m, while the vertical accuracy was up to 0.76 m. The results can be used for orbital control strategy analysis and other applications.

  14. SWIR Geiger-mode APD detectors and cameras for 3D imaging

    NASA Astrophysics Data System (ADS)

    Itzler, Mark A.; Entwistle, Mark; Krishnamachari, Uppili; Owens, Mark; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir

    2014-06-01

    The operation of avalanche photodiodes in Geiger mode, by arming these detectors above their breakdown voltage, provides high-performance single photon detection in a robust solid-state device platform. Moreover, these devices are ideally suited for integration into large-format focal plane arrays enabling single photon imaging. We describe the design and performance of short-wave infrared 3D imaging cameras with focal plane arrays (FPAs) based on Geiger-mode avalanche photodiodes (GmAPDs) with single photon sensitivity for laser radar imaging applications. The FPA pixels incorporate InP/InGaAs(P) GmAPDs for the detection of single photons with high efficiency and low dark count rates. We present results and attributes of fully integrated camera sub-systems with 32 × 32 and 128 × 32 formats, which have 100 μm pitch and 50 μm pitch, respectively. We also address the sensitivity of the underlying GmAPD detectors to radiation exposure, including recent results that correlate detector active region volume with sustainable radiation tolerance levels.

  15. Static laser speckle contrast analysis for noninvasive burn diagnosis using a camera-phone imager.

    PubMed

    Ragol, Sigal; Remer, Itay; Shoham, Yaron; Hazan, Sivan; Willenz, Udi; Sinelnikov, Igor; Dronov, Vladimir; Rosenberg, Lior; Bilenca, Alberto

    2015-08-01

    Laser speckle contrast analysis (LASCA) is an established optical technique for accurate widefield visualization of relative blood perfusion when no or minimal scattering from static tissue elements is present, as demonstrated, for example, in LASCA imaging of the exposed cortex. However, when LASCA is applied to the diagnosis of burn wounds, light is backscattered from both moving blood and static burn scatterers, so the spatial speckle contrast includes both perfusion and nonperfusion components and cannot be straightforwardly associated with blood flow. We extract from speckle contrast images of burn wounds the nonperfusion (static) component and discover that it conveys useful information on the ratio of static-to-dynamic scattering composition of the wound, enabling identification of burns of different depth in a porcine model in vivo within the first 48 h postburn. Our findings suggest that relative changes in the static-to-dynamic scattering composition of burns can dominate relative changes in blood flow for burns of different severity. Unlike conventional LASCA systems that employ scientific or industrial-grade cameras, our LASCA system is realized here with a camera phone, showing the potential to enable LASCA-based burn diagnosis with a simple imager. PMID:26271055
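
Spatial speckle contrast, the quantity underlying LASCA, is the ratio of standard deviation to mean intensity computed over a small pixel window. A minimal sketch using non-overlapping blocks; this is a simplification, since LASCA implementations typically use a sliding window (commonly 5 × 5 or 7 × 7 pixels):

```python
import numpy as np

def speckle_contrast_map(img, block=7):
    """Spatial speckle contrast K = sigma / mean over non-overlapping
    block x block windows of a raw speckle image.

    Fully developed static speckle gives K near 1; blurring by moving
    scatterers (e.g. flowing blood) lowers K toward 0.
    """
    img = np.asarray(img, dtype=float)
    h = (img.shape[0] // block) * block
    w = (img.shape[1] // block) * block
    # Split the image into block x block tiles and flatten each tile
    tiles = img[:h, :w].reshape(h // block, block, w // block, block)
    tiles = tiles.transpose(0, 2, 1, 3).reshape(h // block, w // block, -1)
    return tiles.std(axis=-1) / tiles.mean(axis=-1)
```

In a burn wound, both perfusion (dynamic) and static scattering shape this contrast map, which is why the static component must be separated before interpreting it as flow.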

  16. Physical Activity Recognition Based on Motion in Images Acquired by a Wearable Camera

    PubMed Central

    Zhang, Hong; Li, Lu; Jia, Wenyan; Fernstrom, John D.; Sclabassi, Robert J.; Mao, Zhi-Hong; Sun, Mingui

    2011-01-01

    A new technique to extract and evaluate physical activity patterns from image sequences captured by a wearable camera is presented in this paper. Unlike standard activity recognition schemes, the video data captured by our device do not include the wearer him/herself. The physical activity of the wearer, such as walking or exercising, is analyzed indirectly through the camera motion extracted from the acquired video frames. Two key tasks, pixel correspondence identification and motion feature extraction, are studied to recognize activity patterns. We utilize a multiscale approach to identify pixel correspondences. When compared with existing methods such as the Good Features detector and the Speeded-Up Robust Features (SURF) detector, our technique is more accurate and computationally efficient. Once the pixel correspondences, which define representative motion vectors, are determined, we build a set of activity pattern features based on motion statistics in each frame. Finally, the physical activity of the person wearing the camera is determined according to the global motion distribution in the video. Our algorithms are tested using different machine learning techniques, including K-Nearest Neighbor (KNN), Naive Bayesian, and Support Vector Machine (SVM) classifiers. The results show that many types of physical activities can be recognized from field-acquired real-world video. Our results also indicate that, with a design of specific motion features in the input vectors, different classifiers can be used successfully with similar performances. PMID:21779142

  17. Noise Reduction in Brainwaves by Using Both EEG Signals and Frontal Viewing Camera Images

    PubMed Central

    Bang, Jae Won; Choi, Jong-Suk; Park, Kang Ryoung

    2013-01-01

    Electroencephalogram (EEG)-based brain-computer interfaces (BCIs) have been used in various applications, including human-computer interfaces, diagnosis of brain diseases, and measurement of cognitive status. However, EEG signals can be contaminated with noise caused by the user's head movements. Therefore, we propose a new method that combines an EEG acquisition device and a frontal viewing camera to isolate and exclude the sections of EEG data containing these noises. This method is novel in the following three ways. First, we compare the accuracies of detecting head movements based on the features of EEG signals in the frequency and time domains and on the motion features of images captured by the frontal viewing camera. Second, the features of EEG signals in the frequency domain and the motion features captured by the frontal viewing camera are selected as the optimal ones. The dimension reduction of the features and the feature selection are performed using linear discriminant analysis. Third, the combined features are used as inputs to a support vector machine (SVM), which improves the accuracy in detecting head movements. The experimental results show that the proposed method can detect head movements with an average error rate of approximately 3.22%, which is smaller than that of other methods. PMID:23669713

  18. The Effect of Light Conditions on Photoplethysmographic Image Acquisition Using a Commercial Camera

    PubMed Central

    Liu, He; Wang, Yadong

    2014-01-01

    Cameras embedded in consumer devices have previously been used as physiological information sensors. The waveform of the photoplethysmographic image (PPGi) signals may be significantly affected by the light spectra and intensity. The purpose of this paper is to evaluate the performance of PPGi waveform acquisition in the red, green, and blue channels using a commercial camera in different light conditions. The system developed for this paper comprises a commercial camera and light sources with varied spectra and intensities. Signals were acquired from the fingertips of 12 healthy subjects. Extensive experiments using lights of different wavelengths and white light at varying intensities, reported in this paper, showed that almost all light spectra could acquire acceptable pulse rates, but only 470-, 490-, 505-, 590-, 600-, 610-, 625-, and 660-nm wavelength lights showed better PPGi waveform performance compared with the gold standard. At lower light intensity, the light spectra >600 nm still showed better performance. The change in pulse amplitude (ac) and dc amplitude was also investigated with the different light intensities and light spectra. With increasing light intensity, the dc amplitude increased, whereas the ac amplitude showed an initial increase followed by a decrease. Most of the subjects achieved their maximum average ac output when the average dc output was in the range from 180 to 220 pixel values (8-bit, 255 maximum pixel value). The results suggested that an adaptive solution could be developed to optimize the design of PPGi-based physiological signal acquisition devices in different light conditions.
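
The ac/dc decomposition discussed above can be illustrated directly: for a per-frame mean-intensity trace from one color channel, dc is the average level and ac the peak-to-peak pulsatile excursion. A simple sketch (function name and conventions are ours, not the paper's):

```python
def ppgi_ac_dc(channel_means):
    """Split a per-frame mean-intensity trace from one color channel into
    its dc level (average) and ac amplitude (peak-to-peak of the pulsatile
    part), as used to compare channels and illumination conditions."""
    dc = sum(channel_means) / len(channel_means)
    detrended = [v - dc for v in channel_means]
    ac = max(detrended) - min(detrended)
    return ac, dc
```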

  19. Fast high-resolution characterization of powders using an imaging plate Guinier camera

    NASA Astrophysics Data System (ADS)

    Gal, Joseph; Mogilanski, Dmitry; Nippus, Michael; Zabicky, Jacob; Kimmel, Giora

    2005-10-01

    A new Huber Guinier camera G670 was installed on an Ultrax18-Rigaku X-ray rotating Cu anode source, with a monochromator (focal length B=360 mm) providing pure Kα1 radiation. The camera is used for powder diffraction in transmission geometry. An imaging plate (IP) provides position-sensitive detection. In order to evaluate this new instrumental setup, quality data were collected for some classical reference materials such as silicon, quartz, some standards supplied by NIST USA, and ceramic oxides synthesized in our laboratory. Each sample was measured at 4 kW for 1-2 min at 2θ from 0 to 100°. The results were compared with published references. The following desirable features are noted for the instrumental combination studied: production of high-quality X-ray data at a very fast rate, very accurate intensity measurements, sharp diffraction patterns due to small instrumental broadening and a pure monochromatic beam, and small position errors for 2θ from 4 to 80°. There is no evidence for extra line broadening by the IP camera detector setup. It was found that the relatively high instrumental background can be easily dealt with and does not pose difficulty in the analysis of the data. However, fluorescence cannot be filtered.

  20. Intensified array camera imaging of solid surface combustion aboard the NASA Learjet

    NASA Technical Reports Server (NTRS)

    Weiland, Karen J.

    1992-01-01

    An intensified array camera has been used to image weakly luminous flames spreading over thermally thin paper samples in a low-gravity environment aboard the NASA-Lewis Learjet. The aircraft offers 10 to 20 sec of reduced gravity during execution of a Keplerian trajectory and allows the use of instrumentation that is delicate or requires higher electrical power than is available in drop towers. The intensified array camera is a charge intensified device type that responds to light between 400 and 900 nm and has a minimum sensitivity of 10(exp 6) footcandles. The paper sample, either ashless filter paper or a lab wiper, burns inside a sealed chamber which is filled with 21, 18, or 15 pct. oxygen in nitrogen at one atmosphere. The camera views the edge of the paper and its output is recorded on videotape. Flame positions are measured every 0.1 sec to calculate flame spread rates. Comparisons with drop tower data indicate that the flame shapes and spread rates are affected by the residual g level in the aircraft.

  2. Correlating objective and subjective evaluation of texture appearance with applications to camera phone imaging

    NASA Astrophysics Data System (ADS)

    Phillips, Jonathan B.; Coppola, Stephen M.; Jin, Elaine W.; Chen, Ying; Clark, James H.; Mauer, Timothy A.

    2009-01-01

    Texture appearance is an important component of photographic image quality as well as object recognition. Noise cleaning algorithms are used to decrease sensor noise of digital images, but can hinder texture elements in the process. The Camera Phone Image Quality (CPIQ) initiative of the International Imaging Industry Association (I3A) is developing metrics to quantify texture appearance. Objective and subjective experimental results of the texture metric development are presented in this paper. Eight levels of noise cleaning were applied to ten photographic scenes that included texture elements such as faces, landscapes, architecture, and foliage. Four companies (Aptina Imaging, LLC, Hewlett-Packard, Eastman Kodak Company, and Vista Point Technologies) have performed psychophysical evaluations of overall image quality using one of two methods of evaluation. Both methods presented paired comparisons of images on thin film transistor liquid crystal displays (TFT-LCD), but the display pixel pitch and viewing distance differed. CPIQ has also been developing objective texture metrics and targets that were used to analyze the same eight levels of noise cleaning. The correlation of the subjective and objective test results indicates that texture perception can be modeled with an objective metric. The two methods of psychophysical evaluation exhibited high correlation despite the differences in methodology.
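
The figure of merit for such metric development is the correlation between objective metric values and mean subjective ratings; a plain Pearson correlation, sketched below, is the usual first check (the CPIQ work may use additional statistics beyond this):

```python
def pearson_r(xs, ys):
    """Pearson correlation between objective texture-metric values and mean
    subjective quality ratings: the figure of merit used to judge whether
    the objective metric models texture perception."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5
```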

  3. Using different interpolation techniques in unwrapping the distorted images from panoramic annular lens camera

    NASA Astrophysics Data System (ADS)

    Yu, Guo; Fu, Lingjin; Bai, Jian

    2010-11-01

    The camera using a panoramic annular lens (PAL) can capture the surrounding scene in a 360° view without any scanning component. Due to severe distortions, the image formed by the PAL must be unwrapped into a perspective-view image to be consistent with normal human viewing. However, unfilled pixels can appear after unwrapping as a result of the non-uniform resolution of the PAL image; hence interpolation must be employed in the forward-projection unwrapping phase. We evaluated the performance of several interpolation techniques for unwrapping the PAL image on a series of frequency-patterned images as a simulation, using three image quality indexes: MSE, SSIM and S-CIELAB. The experimental results revealed that all the interpolation methods performed better on low-frequency PAL images. The Bicubic, Ferguson and Newton interpolations performed relatively better at higher frequencies, while Bilinear and Bezier achieved better results at lower frequencies. The nearest-neighbor method had the poorest performance in general, and the Ferguson interpolation was excellent at both high and low frequencies.
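
The forward-projection unwrapping with bilinear interpolation (one of the schemes compared above) can be sketched as follows; the geometry (inner/outer annulus radii, linear row-to-radius mapping) is a simplified assumption rather than the paper's exact model:

```python
import math

def unwrap_pal(img, cx, cy, r_in, r_out, out_w, out_h):
    """Forward-projection unwrapping of a panoramic annular image: each
    output column is an azimuth, each row a radius, and the source sample
    is fetched with bilinear interpolation. `img` is a row-major grid."""
    out = [[0.0] * out_w for _ in range(out_h)]
    for row in range(out_h):
        r = r_in + (r_out - r_in) * row / max(out_h - 1, 1)
        for col in range(out_w):
            theta = 2 * math.pi * col / out_w
            x = cx + r * math.cos(theta)
            y = cy + r * math.sin(theta)
            x0, y0 = int(math.floor(x)), int(math.floor(y))
            fx, fy = x - x0, y - y0
            # Bilinear blend of the four neighboring source pixels.
            p = 0.0
            for dy, wy in ((0, 1 - fy), (1, fy)):
                for dx, wx in ((0, 1 - fx), (1, fx)):
                    p += wy * wx * img[y0 + dy][x0 + dx]
            out[row][col] = p
    return out
```

Swapping the inner bilinear blend for a bicubic or nearest-neighbor kernel reproduces the family of methods the paper compares.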

  4. Evaluation of a large format image tube camera for the shuttle sortie mission

    NASA Technical Reports Server (NTRS)

    Tifft, W. C.

    1976-01-01

    A large format image tube camera of a type under consideration for use on the Space Shuttle Sortie Missions is evaluated. The evaluation covers the following subjects: (1) resolving power of the system; (2) geometrical characteristics of the system (distortion, etc.); (3) shear characteristics of the fiber optic coupling; (4) background effects in the tube; (5) uniformity of response of the tube (as a function of wavelength); (6) detective quantum efficiency of the system; and (7) astronomical applications of the system. It must be noted that many of these characteristics are quantitatively unique to the particular tube under discussion and serve primarily to suggest what is possible with this type of tube.

  5. Camera Image Transformation and Registration for Safe Spacecraft Landing and Hazard Avoidance

    NASA Technical Reports Server (NTRS)

    Jones, Brandon M.

    2005-01-01

    Inherent geographical hazards of Martian terrain may impede a safe landing for science exploration spacecraft. Surface visualization software for hazard detection and avoidance may accordingly be applied in vehicles such as the Mars Exploration Rover (MER) to induce an autonomous and intelligent descent upon entering the planetary atmosphere. The focus of this project is to develop an image transformation algorithm for coordinate system matching between consecutive frames of terrain imagery taken throughout descent. The methodology involves integrating computer vision and graphics techniques, including affine transformation and projective geometry of an object, with the intrinsic parameters governing spacecraft dynamic motion and camera calibration.
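
The core coordinate-mapping step, applying a 2-D affine transform to image points between consecutive descent frames, reduces to a one-liner; this is a generic illustration of the technique, not the project's implementation:

```python
def apply_affine(points, a, b, tx, c, d, ty):
    """Map image coordinates between consecutive frames with a 2-D affine
    transform [[a, b], [c, d]] plus translation (tx, ty): the building
    block of frame-to-frame registration of descent imagery."""
    return [(a * x + b * y + tx, c * x + d * y + ty) for x, y in points]
```

In practice the six parameters would be estimated from matched terrain features in the two frames and the spacecraft's known camera calibration.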

  6. Plume Imaging Using an IR Camera to Estimate Sulphur Dioxide Flux on Volcanoes of Northern Chile

    NASA Astrophysics Data System (ADS)

    Rosas Sotomayor, F.; Amigo, A.

    2014-12-01

    Remote sensing is a fast and safe method to obtain gas abundances in volcanic plumes, in particular when access to the vent is difficult or during volcanic crises. In recent years, a ground-based infrared camera (NicAir) has been developed by Nicarnica Aviation, which quantifies SO2 and ash in volcanic plumes based on the infrared radiance at specific wavelengths through the application of filters. NicAir cameras have been acquired by the Geological Survey of Chile in order to study degassing of active volcanoes. This contribution focuses on a series of measurements made in December 2013 on volcanoes of northern Chile, in particular Láscar, Irruputuncu and Ollagüe, which are characterized by persistent quiescent degassing. During fieldwork, the plumes from all three volcanoes showed regular behavior and the atmospheric conditions were very favorable (cloud-free and dry air). Four, two and one sets of measurements, of up to 100 images each, were taken for Láscar, Irruputuncu and Ollagüe volcano, respectively. Matlab was used for visualization and processing of the raw data. For instance, data visualization is performed with the Matlab IPT functions imshow() and imcontrast(), and an algorithm was created to extract the necessary metadata. Image processing considers radiation at the 8.6 and 10 μm wavelengths, due to differential SO2 and water vapor absorption. Calibration was performed in the laboratory, through a detector correlation between digital numbers (raw image pixel values) and spectral radiance, and also in the field, accounting for the camera's own infrared self-emission. A gradient between the plume core and plume rim is expected, due to the quick reaction of sulphur dioxide with water vapor; therefore a flux underestimate is also expected. Results will be compared with other SO2 remote sensing instruments such as DOAS and the UV-camera.
The implementation of this novel technique on Chilean volcanoes will be a major advance in our understanding of volcanic emissions and a strong complement to gas monitoring of active volcanoes such as Láscar, Villarrica, Lastarria and Cordón Caulle, among others, and in rough volcanic terrain, owing to its portability, easy operation, and fast data acquisition and processing.
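
The differential-absorption idea behind the two-band retrieval can be sketched as a simple band-difference mask; the real processing first calibrates digital numbers to spectral radiance and accounts for camera self-emission, so the threshold rule below is only illustrative:

```python
def so2_mask(rad_8p6, rad_10p0, threshold):
    """Flag candidate plume pixels by differential absorption: radiance at
    8.6 um is depressed relative to 10 um where SO2 is present, so a band
    difference above `threshold` marks candidate SO2 pixels. Inputs are
    row-major grids of calibrated spectral radiance."""
    return [
        [rad_10p0[i][j] - rad_8p6[i][j] > threshold
         for j in range(len(rad_8p6[0]))]
        for i in range(len(rad_8p6))
    ]
```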

  7. The REM-IR camera: High quality near infrared imaging with a small robotic telescope

    NASA Astrophysics Data System (ADS)

    Vitali, Fabrizio; Zerbi, Filippo M.; Chincarini, Guido; Ghisellini, Gabriele; Rodono, Marcello; Tosti, Gino; Antonelli, Lucio A.; Conconi, Paolo; Covino, Stefano; Cutispoto, Giuseppe; Molinari, Emilio; Nicastro, Luciano; Palazzi, Eliana

    2003-03-01

    We present the near-infrared camera REM-IR that will operate aboard the REM telescope, intended as a fully automated instrument to follow up Gamma-Ray Bursts (GRBs) triggered mainly by satellites such as HETE II, INTEGRAL, AGILE and SWIFT. REM-IR will perform high-efficiency imaging of the prompt infrared afterglow of GRBs and, together with the optical spectrograph ROSS, will simultaneously cover a wide wavelength range, allowing a better understanding of the intriguing scientific case of GRBs. Due to the scientific and technological requirements of the REM project, some innovative solutions have been adopted in REM-IR.

  8. Update and image quality error budget for the LSST camera optical design

    NASA Astrophysics Data System (ADS)

    Bauman, Brian J.; Bowden, Gordon; Ku, John; Nordby, Martin; Olivier, Scot; Riot, Vincent; Rasmussen, Andrew; Seppala, Lynn; Xiao, Hong; Nurita, Nadine; Gilmore, David; Kahn, Steven

    2010-07-01

    The Large Synoptic Survey Telescope (LSST) uses a novel, three-mirror, modified Paul-Baker design, with an 8.4-meter primary mirror, a 3.4-m secondary, and a 5.0-m tertiary feeding a refractive camera design with 3 lenses (0.69-1.55m) and a set of broadband filters/corrector lenses. Performance is excellent over a 9.6 square degree field and ultraviolet to near infrared wavelengths. We describe the image quality error budget analysis methodology which includes effects from optical and optomechanical considerations such as index inhomogeneity, fabrication and null-testing error, temperature gradients, gravity, pressure, stress, birefringence, and vibration.

  9. Planetary Camera imaging of the counter-rotating core galaxy NGC 4365

    NASA Technical Reports Server (NTRS)

    Forbes, Duncan A.

    1994-01-01

    We analyze F555W(V) band Planetary Camera images of NGC 4365, for which ground-based spectroscopy has revealed a misaligned, counter-rotating core. Line profile analysis by Surma indicates that the counter-rotating component has a disk structure. After deconvolution and galaxy modeling, we find photometric evidence at small radii to support this claim. There is no indication of a central point source or dust lane. The surface brightness profile reveals a steep outer profile and a shallow, but not flat, inner profile, with the inflection radius occurring at 1.8 arcsec. The inner profile is consistent with a cusp.

  10. Using the Standard Deviation of a Region of Interest in an Image to Estimate Camera to Emitter Distance

    PubMed Central

    Cano-García, Angel E.; Lazaro, José Luis; Infante, Arturo; Fernández, Pedro; Pompa-Chacón, Yamilet; Espinoza, Felipe

    2012-01-01

    In this study, a camera-to-infrared-diode (IRED) distance estimation problem was analyzed. The main objective was to define an alternative way to measure depth, using only the information extracted from the pixel grey levels of the IRED image to estimate the distance between the camera and the IRED. In this paper, the standard deviation of the pixel grey levels in the region of interest containing the IRED image is proposed as an empirical parameter to define a model for estimating camera-to-emitter distance. This model includes the camera exposure time, the IRED radiant intensity, and the distance between the camera and the IRED. An expression for the standard deviation model related to these magnitudes was also derived and calibrated using different images taken under different conditions. From this analysis, we determined the optimum parameters to ensure the best accuracy provided by this alternative. Once the model calibration had been carried out, a differential method to estimate the distance between the camera and the IRED was defined and applied, considering that the camera was aligned with the IRED. The results indicate that this method is a useful alternative for determining depth information. PMID:22778608
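
A hedged sketch of the approach: compute the ROI grey-level standard deviation, then invert a calibrated model to get distance. The inverse power-law form and parameter names below are illustrative assumptions; the paper's calibrated model also involves exposure time and IRED radiant intensity.

```python
import statistics

def roi_std(pixels):
    """Population standard deviation of the ROI grey levels: the empirical
    parameter the paper feeds into its distance model."""
    return statistics.pstdev(pixels)

def estimate_distance(std_ref, d_ref, std_now, gamma=1.0):
    """Hypothetical inverse power-law model: if the ROI spread falls off as
    distance**-gamma, a reference pair (std_ref measured at d_ref) lets us
    invert a new reading std_now into a distance estimate."""
    return d_ref * (std_ref / std_now) ** (1.0 / gamma)
```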

  11. New Mars Camera's First Image of Mars from Mapping Orbit (Full Frame)

    NASA Technical Reports Server (NTRS)

    2006-01-01

    The high resolution camera on NASA's Mars Reconnaissance Orbiter captured its first image of Mars in the mapping orbit, demonstrating the full resolution capability, on Sept. 29, 2006. The High Resolution Imaging Science Experiment (HiRISE) acquired this first image at 8:16 AM (Pacific Time). With the spacecraft at an altitude of 280 kilometers (174 miles), the image scale is 25 centimeters per pixel (10 inches per pixel). If a person were located on this part of Mars, he or she would just barely be visible in this image.

    The image covers a small portion of the floor of Ius Chasma, one branch of the giant Valles Marineris system of canyons. The image illustrates a variety of processes that have shaped the Martian surface. There are bedrock exposures of layered materials, which could be sedimentary rocks deposited in water or from the air. Some of the bedrock has been faulted and folded, perhaps the result of large-scale forces in the crust or of a giant landslide. The image resolves rocks as small as 90 centimeters (3 feet) in diameter. It includes many dunes or ridges of windblown sand.

    This image (TRA_000823_1720) was taken by the High Resolution Imaging Science Experiment camera onboard the Mars Reconnaissance Orbiter spacecraft on Sept. 29, 2006. Shown here is the full image, centered at minus 7.8 degrees latitude, 279.5 degrees east longitude. The image is oriented such that north is to the top. The range to the target site was 297 kilometers (185.6 miles). At this distance the image scale is 25 centimeters (10 inches) per pixel (with one-by-one binning) so objects about 75 centimeters (30 inches) across are resolved. The image was taken at a local Mars time of 3:30 PM and the scene is illuminated from the west with a solar incidence angle of 59.7 degrees, thus the sun was about 30.3 degrees above the horizon. The season on Mars is northern winter, southern summer.

    [Photojournal note: Due to the large sizes of the high-resolution TIFF and JPEG files, some systems may experience extremely slow downlink time while viewing or downloading these images; some systems may be incapable of handling the download entirely.]

    NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Mars Reconnaissance Orbiter for NASA's Science Mission Directorate, Washington. Lockheed Martin Space Systems, Denver, is the prime contractor for the project and built the spacecraft. The HiRISE camera was built by Ball Aerospace & Technologies Corporation, Boulder, Colo., and is operated by the University of Arizona, Tucson.

  12. Cloud Base Height Measurements at Manila Observatory: Initial Results from Constructed Paired Sky Imaging Cameras

    NASA Astrophysics Data System (ADS)

    Lagrosas, N.; Tan, F.; Antioquia, C. T.

    2014-12-01

    Fabricated all sky imagers are efficient and cost-effective instruments for cloud detection and classification. Continuous operation of such instruments can yield cloud occurrence and, for a paired system, cloud base heights. In this study, a fabricated paired sky imaging system - consisting of two commercial digital cameras (Canon Powershot A2300) enclosed in weatherproof containers - is developed at the Manila Observatory for the purpose of determining cloud base heights in the Manila Observatory area. One of the cameras is placed on the rooftop of the Manila Observatory and the other on the rooftop of the university dormitory, 489 m from the first camera. The cameras are programmed to gather pictures simultaneously every 5 min. Continuous operation of these cameras has been in place since the end of May 2014, although data collection started at the end of October 2013. The data were processed following the algorithm proposed by Kassianov et al (2005). The processing involves the calculation of a merit function that quantifies the overlap of the two pictures: when the two pictures are overlapped, the minimum of the merit function corresponds to the pixel column positions where the pictures match best. In this study, pictures of overcast sky proved difficult to process for cloud base height and were excluded. The figure below shows the initial results for the hourly average of cloud base heights from data collected from November 2013 to July 2014. Measured cloud base heights ranged from 250 m to 1.5 km. These are the heights of the cumulus and nimbus clouds that are dominant in this part of the world. Cloud base heights are low in the early hours of the day, indicating weak convection during those times; the strengthening of convection in the atmosphere can be deduced from the higher cloud base heights in the afternoon.
The decrease of cloud base heights after 15:00 follows the trend of decreasing solar energy in the atmosphere after this time. The results show the potential of these instruments to determine cloud base heights on prolonged time intervals. The continuous operation of these instruments is implemented to gather seasonal variation of cloud base heights in this part of the world and to add to the much-needed dataset for future climate studies in Manila Observatory.
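
The merit-function matching and parallax triangulation described above can be sketched as follows (a 1-D column-shift search in the spirit of Kassianov et al.; the exact merit function and viewing geometry in the study may differ):

```python
import math

def best_column_shift(img_a, img_b, max_shift):
    """Slide image B horizontally over image A and keep the shift with the
    smallest mean squared difference over the overlap region: a minimal
    merit-function match between the paired sky images."""
    best = (float("inf"), 0)
    for s in range(max_shift + 1):
        overlap = [(a - b) ** 2
                   for row_a, row_b in zip(img_a, img_b)
                   for a, b in zip(row_a[s:], row_b)]
        merit = sum(overlap) / len(overlap)
        best = min(best, (merit, s))
    return best[1]

def cloud_base_height(baseline_m, shift_px, px_angle_rad):
    """Triangulate from the pixel parallax: height = baseline / tan(angle),
    with angle = shift * angular size of one pixel (idealized geometry)."""
    return baseline_m / math.tan(shift_px * px_angle_rad)
```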

  13. System for photometric calibration of optoelectronic imaging devices especially streak cameras

    DOEpatents

    Boni, Robert; Jaanimagi, Paul

    2003-11-04

    A system for the photometric calibration of streak cameras and similar imaging devices provides a precise knowledge of the camera's flat-field response as well as a mapping of the geometric distortions. The system provides the flat-field response, representing the spatial variations in the sensitivity of the recorded output, with a signal-to-noise ratio (SNR) greater than can be achieved in a single submicrosecond streak record. The measurement of the flat-field response is carried out by illuminating the input slit of the streak camera with a signal that is uniform in space and constant in time. This signal is generated by passing a continuous-wave source through an optical homogenizer made up of a light pipe or pipes in which the illumination typically makes several bounces before exiting as a spatially uniform source field. The rectangular cross-section of the homogenizer is matched to the usable photocathode area of the streak tube. The flat-field data set is obtained by using a slow streak ramp that may have a period from one millisecond (ms) to ten seconds (s), but is nominally one second in duration. The system also provides a mapping of the geometric distortions by spatially and temporally modulating the output of the homogenizer and obtaining a data set using the slow streak ramps. All data sets are acquired using a CCD camera and stored on a computer, which is used to calculate all relevant corrections to the signal data sets. The signal and flat-field data sets are both corrected for geometric distortions prior to applying the flat-field correction. Absolute photometric calibration is obtained by measuring the output fluence of the homogenizer with a "standard-traceable" meter and relating that to the CCD pixel values for a self-corrected flat-field data set.
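
Once geometric distortions have been removed, the correction chain reduces to a standard flat-field operation; a minimal sketch (the mean-normalization convention is assumed, not taken from the patent):

```python
def flat_field_correct(signal, background, flat):
    """Standard flat-field correction of a 2-D record: subtract the
    background, then divide by the flat-field response (normalized to its
    mean) measured with the spatially uniform homogenizer source."""
    n = len(flat) * len(flat[0])
    mean_flat = sum(v for row in flat for v in row) / n
    return [[(s - b) / (f / mean_flat)
             for s, b, f in zip(rs, rb, rf)]
            for rs, rb, rf in zip(signal, background, flat)]
```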

  14. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    PubMed

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-01-01

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue "Lamassu", a human-headed winged bull over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task, where a dense, ordered imaging network of 45 high-resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network, and the aim of our study was to apply our method to reduce the number of images for the 3D modeling while preserving a pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured with a total station for external validation and scaling purposes. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm. PMID:24670718

  16. Efficient smart CMOS camera based on FPGAs oriented to embedded image processing.

    PubMed

    Bravo, Ignacio; Baliñas, Javier; Gardel, Alfredo; Lázaro, José L; Espinosa, Felipe; García, Jorge

    2011-01-01

    This article describes an image processing system based on an intelligent ad-hoc camera, whose two principal elements are a high speed 1.2 megapixel Complementary Metal Oxide Semiconductor (CMOS) sensor and a Field Programmable Gate Array (FPGA). The latter is used to control the various sensor parameter configurations and, where desired, to receive and process the images captured by the CMOS sensor. The flexibility and versatility offered by the new FPGA families make it possible to incorporate microprocessors into these reconfigurable devices, and these are normally used for highly sequential tasks unsuitable for parallelization in hardware. For the present study, we used a Xilinx XC4VFX12 FPGA, which contains an internal PowerPC (PPC) microprocessor. In turn, this contains a standalone system which manages the FPGA image processing hardware and endows the system with multiple software options for processing the images captured by the CMOS sensor. The system also incorporates an Ethernet channel for sending processed and unprocessed images from the FPGA to a remote node. Consequently, it is possible to visualize and configure system operation and captured and/or processed images remotely. PMID:22163739

  18. Human Detection Based on the Generation of a Background Image and Fuzzy System by Using a Thermal Camera.

    PubMed

    Jeon, Eun Som; Kim, Jong Hyun; Hong, Hyung Gil; Batchuluun, Ganbayar; Park, Kang Ryoung

    2016-01-01

    Recently, human detection has been used in various applications. Although visible light cameras are usually employed for this purpose, human detection based on visible light cameras has limitations due to darkness, shadows, sunlight, etc. An approach using a thermal (far infrared light) camera has been studied as an alternative for human detection; however, the performance of human detection by thermal cameras is degraded in the case of low temperature differences between humans and the background. To overcome these drawbacks, we propose a new method for human detection using thermal camera images. The main contribution of our research is that the thresholds for creating the binarized difference image between the input and background (reference) images can be adaptively determined by fuzzy systems, using information derived from the background image and the difference values between the background and input images. With our method, the human area can be correctly detected irrespective of the various conditions of the input and background (reference) images. For the performance evaluation of the proposed method, experiments were performed with 15 datasets captured under different weather and light conditions, as well as with an open database. The experimental results confirm that the proposed method can robustly detect human shapes in various environments. PMID:27043564
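The adaptive binarization step can be sketched as follows. This is a minimal stand-in that picks the threshold from the difference-image statistics rather than from the paper's fuzzy inference system; the tuning values and synthetic frames are hypothetical.

```python
import numpy as np

def detect_human_mask(frame, background, lo=4.0, hi=12.0):
    """Binarize the difference between a thermal frame and its background.

    The threshold adapts to the spread of the difference image, a crude
    stand-in for fuzzy inference over background/difference information.
    lo and hi are hypothetical clamp values, not from the paper.
    """
    diff = np.abs(frame.astype(np.float64) - background.astype(np.float64))
    t = np.clip(diff.mean() + 2.0 * diff.std(), lo, hi)
    return diff > t

# Synthetic example: uniform background with a warmer "human" blob.
bg = np.full((40, 40), 20.0)
frame = bg.copy()
frame[10:20, 15:25] += 15.0  # warm 10x10 region
mask = detect_human_mask(frame, bg)
```

Because the threshold tracks the statistics of each difference image, the same code segments the warm region whether the overall contrast is high or low, which is the property the paper's fuzzy thresholds aim for.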

  19. Human Detection Based on the Generation of a Background Image and Fuzzy System by Using a Thermal Camera

    PubMed Central

    Jeon, Eun Som; Kim, Jong Hyun; Hong, Hyung Gil; Batchuluun, Ganbayar; Park, Kang Ryoung

    2016-01-01

    Recently, human detection has been used in various applications. Although visible light cameras are usually employed for this purpose, human detection based on visible light cameras has limitations due to darkness, shadows, sunlight, etc. An approach using a thermal (far infrared light) camera has been studied as an alternative for human detection; however, the performance of human detection by thermal cameras is degraded in the case of low temperature differences between humans and the background. To overcome these drawbacks, we propose a new method for human detection using thermal camera images. The main contribution of our research is that the thresholds for creating the binarized difference image between the input and background (reference) images can be adaptively determined by fuzzy systems, using information derived from the background image and the difference values between the background and input images. With our method, the human area can be correctly detected irrespective of the various conditions of the input and background (reference) images. For the performance evaluation of the proposed method, experiments were performed with 15 datasets captured under different weather and light conditions, as well as with an open database. The experimental results confirm that the proposed method can robustly detect human shapes in various environments. PMID:27043564

  20. High-resolution imaging of the Pluto-Charon system with the Faint Object Camera of the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Albrecht, R.; Barbieri, C.; Adorf, H.-M.; Corrain, G.; Gemmo, A.; Greenfield, P.; Hainaut, O.; Hook, R. N.; Tholen, D. J.; Blades, J. C.

    1994-01-01

    Images of the Pluto-Charon system were obtained with the Faint Object Camera (FOC) of the Hubble Space Telescope (HST) after the refurbishment of the telescope. The images are of superb quality, allowing the determination of radii, fluxes, and albedos. Attempts were made to improve the resolution of the already diffraction limited images by image restoration. These yielded indications of surface albedo distributions qualitatively consistent with models derived from observations of Pluto-Charon mutual eclipses.

  1. Characterization of gravity waves at Venus cloud top from the Venus Monitoring Camera images

    NASA Astrophysics Data System (ADS)

    Piccialli, A.; Titov, D.; Svedhem, H.; Markiewicz, W. J.

    2012-04-01

    Since 2006 the European mission Venus Express (VEx) has been studying the atmosphere of Venus with a focus on atmospheric dynamics and circulation. Recently, several experiments on board Venus Express have detected waves in the Venus atmosphere, both as oscillations in the temperature and wind fields and as patterns on the cloud layer. Waves could be playing an important role in the maintenance of the atmospheric circulation of Venus since they can transport energy and momentum. High resolution images of the Northern hemisphere of Venus obtained with the Venus Monitoring Camera (VMC/VEx) show distinct wave patterns at the cloud tops (~70 km altitude) interpreted as gravity waves. The Venus Monitoring Camera (VMC) is a CCD-based camera specifically designed to take images of Venus in four narrow band filters in the UV (365 nm), visible (513 nm), and near-IR (965 and 1000 nm). A systematic visual search for waves in VMC images was performed; more than 1700 orbits were analyzed and wave patterns were observed in about 200 images. With the aim of characterizing the wave types and their possible origin, we retrieved wave properties such as location (latitude and longitude), local time, solar zenith angle, packet length and width, and orientation. A wavelet analysis was also applied to determine the wavelength and the region of dominance of each wave. Four types of waves were identified in VMC images: long, medium, short and irregular waves. The long-type waves are characterized by long, narrow, straight features extending more than a few hundred kilometers, with wavelengths in the range of 7 to 48 km. Medium-type waves have irregular wavefronts extending more than 100 km, with wavelengths in the range of 8 to 21 km. Short wave packets have widths of several tens of kilometers, extend to a few hundred kilometers, and are characterized by small wavelengths (3 to 16 km). Short wave trains are often observed at the edges of long features and seem connected to them.
Irregular wave fields extend beyond the field of view of VMC and appear to be the result of wave breaking or wave interference. The waves are often identified in all channels, are mostly found at high latitudes (60-80°N) in the Northern hemisphere, and seem to be concentrated above Ishtar Terra, a continental-size highland that includes the highest mountain belts of the planet, suggesting a possible orographic origin. However, at the moment it is not possible to rule out a bias in the observations due to the spacecraft orbit, which prevents waves from being seen at lower latitudes (because of lower resolution) and on the night side of the planet.
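As a rough illustration of extracting a dominant wavelength from a cloud-top brightness profile, the sketch below uses an FFT peak in place of the wavelet analysis applied in the study; the profile and sampling interval are synthetic.

```python
import numpy as np

def dominant_wavelength(profile, dx_km):
    """Estimate the dominant wavelength (km) of a 1-D brightness profile
    sampled every dx_km kilometres, using the strongest FFT bin as a
    simple stand-in for the wavelet analysis used in the study."""
    p = profile - profile.mean()
    spec = np.abs(np.fft.rfft(p))
    freqs = np.fft.rfftfreq(len(p), d=dx_km)
    k = spec[1:].argmax() + 1          # skip the zero-frequency bin
    return 1.0 / freqs[k]

# Synthetic cloud-top profile: a 12 km wave sampled at 0.5 km/pixel.
x = np.arange(0, 240, 0.5)
profile = np.sin(2 * np.pi * x / 12.0)
wl = dominant_wavelength(profile, 0.5)
```

A wavelet transform, unlike this global FFT, also localizes where along the profile each wavelength dominates, which is why the authors use it to map the "region of dominance" of each wave.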

  2. Demonstration of near-infrared thermography with silicon image sensor cameras

    NASA Astrophysics Data System (ADS)

    Rotrou, Yann; Sentenac, Thierry; Le Maoult, Yannick; Magnan, Pierre; Farre, Jean A.

    2005-03-01

    This paper presents a thermal measurement system based on a silicon image sensor camera operating in the near-infrared spectral band (0.7-1.1 μm). The goal of the study is to develop a low-cost imaging system which provides an accurate measurement of temperature. A radiometric model is proposed to characterize the camera response using physical parameters appropriate to the specific spectral band. After a calibration procedure of the model, measurements of black body temperatures ranging from 300 to 1000°C have been performed. The Noise Equivalent Temperature Difference (NETD) is lower than +/- 0.18°C at a black body temperature of 600°C. Accurate measurements are provided over the whole temperature range by introducing automatic exposure time control: the exposure time is adjusted for each frame as the temperature evolves, in order to optimize the temperature sensitivity and the signal-to-noise ratio. The paper also describes the conversion of the apparent black body temperature to the real temperature of the observed object using its emissivity and surface geometry. The overall method is depicted and the influence of each parameter is analyzed by computing the resulting temperature uncertainty. Finally, preliminary experimental results are presented for monitoring the real temperature of moulds in a Superplastic Forming (SPF) process.
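A per-frame exposure update of the kind described might look like the following sketch. The target fraction, full-well value, and exposure limits are hypothetical illustration values, not the paper's calibration constants.

```python
def next_exposure(t_current, signal_peak, full_well=4095, target=0.8,
                  t_min=1e-5, t_max=0.1):
    """Scale the exposure time so the brightest pixel lands near a target
    fraction of the sensor's full-scale signal. This keeps sensitivity
    and SNR high as the scene temperature evolves; all constants here
    are hypothetical, for a 12-bit sensor with exposure times in seconds.
    """
    if signal_peak <= 0:
        return t_max                      # no signal: open up fully
    t = t_current * (target * full_well) / signal_peak
    return min(max(t, t_min), t_max)      # respect hardware limits
```

For example, a frame whose peak already sits at full scale (4095) gets its exposure cut to 80% of the current value, while a nearly dark frame is pushed toward the maximum allowed exposure.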

  3. Imaging system for cardiac planar imaging using a dedicated dual-head gamma camera

    DOEpatents

    Majewski, Stanislaw; Umeno, Marc M.

    2011-09-13

    A cardiac imaging system employing dual gamma imaging heads co-registered with one another to provide two dynamic simultaneous views of the heart sector of a patient torso. A first gamma imaging head is positioned in a first orientation with respect to the heart sector and a second gamma imaging head is positioned in a second orientation with respect to the heart sector. An adjustment arrangement is capable of adjusting the distance between the separate imaging heads and the angle between the heads. With the angle between the imaging heads set to 180 degrees, operating in a range of 140-159 keV and at a rate of up to 500 kHz, the imaging heads are co-registered to produce simultaneous dynamic recording of two stereotactic views of the heart. The use of co-registered imaging heads maximizes the uniformity of detection sensitivity of blood flow in and around the heart over the whole heart volume and minimizes radiation absorption effects. A normalization/image fusion technique is applied pixel-by-corresponding-pixel to increase the signal for any cardiac region viewed in the two images obtained from the opposed detector heads for the same time bin. The imaging system is capable of producing enhanced first-pass studies; blood pool studies, including planar, gated and non-gated EKG studies; planar EKG perfusion studies; and planar hot spot imaging.
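A pixel-by-corresponding-pixel fusion of the two opposed views could be sketched as below. Mirroring one head's image for co-registration and normalizing each view by its mean are illustrative choices; the patent abstract does not spell out the exact normalization or weighting.

```python
import numpy as np

def fuse_opposed_views(head_a, head_b):
    """Fuse co-registered images from two opposed gamma heads, pixel by
    corresponding pixel, for the same time bin.

    The 180-degree head is mirrored so pixels correspond spatially;
    mean-normalization before summing is a hypothetical choice standing
    in for whatever normalization the patented system applies.
    """
    b = np.fliplr(head_b)            # co-register the opposed view
    a_n = head_a / head_a.mean()
    b_n = b / b.mean()
    return a_n + b_n                 # increased per-pixel signal
```

Summing views taken from opposite sides also evens out depth-dependent attenuation: a region deep relative to one head is shallow relative to the other, which is the uniformity benefit the patent describes.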

  4. Color video camera capable of 1,000,000 fps with triple ultrahigh-speed image sensors

    NASA Astrophysics Data System (ADS)

    Maruyama, Hirotaka; Ohtake, Hiroshi; Hayashida, Tetsuya; Yamada, Masato; Kitamura, Kazuya; Arai, Toshiki; Tanioka, Kenkichi; Etoh, Takeharu G.; Namiki, Jun; Yoshida, Tetsuo; Maruno, Hiromasa; Kondo, Yasushi; Ozaki, Takao; Kanayama, Shigehiro

    2005-03-01

    We developed an ultrahigh-speed, high-sensitivity, color camera that captures moving images of phenomena too fast to be perceived by the human eye. The camera operates well even under restricted lighting conditions. It incorporates a special CCD device that is capable of ultrahigh-speed shots while retaining its high sensitivity. Its ultrahigh-speed shooting capability is made possible by directly connecting CCD storages, which record video images, to photodiodes of individual pixels. Its large photodiode area together with the low-noise characteristic of the CCD contributes to its high sensitivity. The camera can clearly capture events even under poor light conditions, such as during a baseball game at night. Our camera can record the very moment the bat hits the ball.

  5. Real time plume and laser spot recognition in IR camera images

    SciTech Connect

    Moore, K.R.; Caffrey, M.P.; Nemzek, R.J.; Salazar, A.A.; Jeffs, J.; Andes, D.K.; Witham, J.C.

    1997-08-01

    It is desirable to automatically guide the laser spot onto the effluent plume for maximum IR DIAL system sensitivity. This requires the use of a 2D focal plane array. The authors have demonstrated that a wavelength-filtered IR camera is capable of 2D imaging of both the plume and the laser spot. In order to identify the centers of the plume and the laser spot, it is first necessary to segment these features from the background. They report a demonstration of real time plume segmentation based on velocity estimation. They also present results of laser spot segmentation using simple thresholding. Finally, they describe current research on both advanced segmentation and recognition algorithms and on reconfigurable real time image processing hardware based on field programmable gate array technology.
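The simple-threshold laser spot segmentation reported here pairs naturally with a centroid extraction to locate the spot center for guidance; the frame and threshold below are synthetic.

```python
import numpy as np

def spot_centroid(ir_frame, thresh):
    """Segment the laser spot by simple thresholding (as demonstrated in
    this work) and return its intensity-weighted centroid (row, col),
    or None if no pixel exceeds the threshold."""
    mask = ir_frame > thresh
    if not mask.any():
        return None
    w = ir_frame * mask
    rows, cols = np.indices(ir_frame.shape)
    total = w.sum()
    return (rows * w).sum() / total, (cols * w).sum() / total

frame = np.zeros((32, 32))
frame[10:13, 20:23] = 100.0        # synthetic hot laser spot
r, c = spot_centroid(frame, 50.0)
```

The plume center would need the velocity-based segmentation described above rather than a fixed threshold, since the plume's contrast against the background varies; the centroid step afterward is the same.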

  6. Body-Based Gender Recognition Using Images from Visible and Thermal Cameras

    PubMed Central

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-01-01

    Gender information has many useful applications in computer vision systems, such as surveillance systems, counting the number of males and females in a shopping mall, access control systems in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results across various kinds of feature extraction and fusion methods show that our approach is efficient for gender recognition, as confirmed by a comparison of recognition rates with conventional systems. PMID:26828487
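A generic feature-level fusion followed by a linear classifier illustrates the kind of visible/thermal combination evaluated. The descriptors, weights, and decision rule here are hypothetical; the paper compares several extraction and fusion strategies rather than this exact one.

```python
import numpy as np

def fuse_and_score(visible_feat, thermal_feat, weights, bias):
    """Feature-level fusion of visible and thermal body descriptors
    followed by a linear gender score. The concatenation models
    feature-level fusion; weights and bias would come from training
    and are purely illustrative here."""
    fused = np.concatenate([visible_feat, thermal_feat])
    score = float(fused @ weights + bias)
    return "male" if score >= 0 else "female"
```

Score-level fusion, the usual alternative, would instead classify each modality separately and combine the two scores; the paper's comparison covers both families of approach.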

  7. Body-Based Gender Recognition Using Images from Visible and Thermal Cameras.

    PubMed

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-01-01

    Gender information has many useful applications in computer vision systems, such as surveillance systems, counting the number of males and females in a shopping mall, access control systems in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results across various kinds of feature extraction and fusion methods show that our approach is efficient for gender recognition, as confirmed by a comparison of recognition rates with conventional systems. PMID:26828487

  8. Design and fabrication of MEMS-based thermally-actuated image stabilizer for cell phone camera

    NASA Astrophysics Data System (ADS)

    Lin, Chun-Ying; Chiou, Jin-Chern

    2012-11-01

    A micro-electro-mechanical system (MEMS)-based image stabilizer is proposed to counteract shaking in cell phone cameras. The proposed stabilizer (dimensions 8.8 × 8.8 × 0.2 mm³) includes a two-axis decoupling XY stage and has sufficient strength to suspend the image sensor (IS) used for the anti-shaking function. The XY stage is designed to route electrical signals from the suspended IS using eight signal springs and 24 signal outputs. The maximum actuating distance of the stage is larger than 25 μm, which is sufficient to resolve the shaking problem. The applied voltage for the 25 μm moving distance is lower than 20 V; the dynamic resonant frequency of the actuating device is 4485 Hz, and the rise time is 21 ms.

  9. High-resolution Vesta Low Altitude Mapping Orbit Atlas derived from Dawn Framing Camera images

    NASA Astrophysics Data System (ADS)

    Roatsch, Th.; Kersten, E.; Matz, K.-D.; Preusker, F.; Scholten, F.; Elgner, S.; Jaumann, R.; Raymond, C. A.; Russell, C. T.

    2013-09-01

    The Dawn Framing Camera (FC) acquired close to 10,000 clear filter images of Vesta with a resolution of about 20 m/pixel during the Low Altitude Mapping Orbit (LAMO) between December 2011 and April 2012. We ortho-rectified these images and produced a global high-resolution uncontrolled mosaic of Vesta. This global mosaic is the baseline for a high-resolution Vesta atlas that consists of 30 tiles mapped at a scale between 1:200,000 and 1:225,180. The nomenclature used in this atlas was proposed by the Dawn team and was approved by the International Astronomical Union (IAU). The whole atlas is available to the public through the Dawn GIS web page [http://dawn_gis.dlr.de/atlas].

  10. Estimation of Enterococci Input from Bathers and Animals on A Recreational Beach Using Camera Images

    PubMed Central

    Wang, John D.; Solo-Gabriele, Helena M.; Abdelzaher, Amir M.; Fleming, Lora E.

    2010-01-01

    Enterococci are used nationwide as a water quality indicator for marine recreational beaches. Prior research has demonstrated that enterococci inputs at the study beach site (located in Miami, FL) are dominated by non-point sources (including humans and animals). We estimated the respective source functions by developing a methodology for counting individuals, to better understand their non-point source load impacts. The method utilizes camera images of the beach taken at regular time intervals to determine the number of human and animal visitors, and translates raw image counts for weekdays and weekend days into daily and monthly visitation rates. Enterococci source functions were computed from the observed number of unique individuals for an average day of each month of the year, and from average load contributions for humans and for animals. Results indicate that dogs represent a larger source of enterococci than humans and birds. PMID:20381094
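The translation from image-derived counts to source loads is simple arithmetic: a daily count per source class multiplied by an average per-individual contribution. The sketch below uses hypothetical counts and shedding figures (not values from the study) just to show the bookkeeping and why a few dogs can dominate.

```python
def enterococci_load(daily_counts, per_capita_cfu):
    """Combine daily visitor counts (derived from timed camera images)
    with per-individual shedding estimates to get a daily load per
    source class. All numeric inputs below are placeholders."""
    return {src: daily_counts[src] * per_capita_cfu[src] for src in daily_counts}

counts = {"humans": 300, "dogs": 5, "birds": 80}           # hypothetical day
sheds = {"humans": 6e5, "dogs": 2.5e9, "birds": 3e7}        # CFU/individual/day
loads = enterococci_load(counts, sheds)
```

Even with only five dogs against hundreds of bathers, the dog term dominates under these illustrative shedding rates, consistent with the study's qualitative finding that dogs are the largest source.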

  11. A two camera video imaging system with application to parafoil angle of attack measurements

    NASA Technical Reports Server (NTRS)

    Meyn, Larry A.; Bennett, Mark S.

    1991-01-01

    This paper describes the development of a two-camera, video imaging system for the determination of three-dimensional spatial coordinates from stereo images. This system successfully measured angle of attack at several span-wise locations for large-scale parafoils tested in the NASA Ames 80- by 120-Foot Wind Tunnel. Measurement uncertainty for angle of attack was less than 0.6 deg. The stereo ranging system was the primary source for angle of attack measurements since inclinometers sewn into the fabric ribs of the parafoils had unknown angle offsets acquired during installation. This paper includes discussions of the basic theory and operation of the stereo ranging system, system measurement uncertainty, experimental set-up, calibration results, and test results. Planned improvements and enhancements to the system are also discussed.
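For an idealized rectified two-camera rig, the stereo triangulation at the heart of such a system reduces to a disparity calculation. This pinhole sketch ignores the lens distortion modeling and calibration the actual wind-tunnel system required.

```python
def stereo_point(xl, xr, y, f, baseline):
    """Recover a 3-D point from matched image coordinates in an
    idealized rectified two-camera rig (pinhole model, not the
    wind-tunnel system's actual photogrammetric calibration).

    xl, xr: horizontal image coordinates in left/right cameras (pixels)
    y:      vertical image coordinate (pixels, same in both when rectified)
    f:      focal length in pixels; baseline: camera separation (metres)
    """
    disparity = xl - xr                 # shrinks as the point recedes
    Z = f * baseline / disparity        # depth from similar triangles
    X = xl * Z / f
    Y = y * Z / f
    return X, Y, Z
```

With the angle of attack defined by two such points along a rib chord line, the quoted 0.6 deg uncertainty is driven mainly by the pixel-level matching error in the disparity.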

  12. Firefly: A HOT camera core for thermal imagers with enhanced functionality

    NASA Astrophysics Data System (ADS)

    Pillans, Luke; Harmer, Jack; Edwards, Tim

    2015-06-01

    Raising the operating temperature of mercury cadmium telluride infrared detectors from 80K to above 160K creates new applications for high performance infrared imagers by vastly reducing the size, weight and power consumption of the integrated cryogenic cooler. Realizing the benefits of Higher Operating Temperature (HOT) requires a new kind of infrared camera core with the flexibility to address emerging applications in handheld, weapon mounted and UAV markets. This paper discusses the Firefly core developed to address these needs by Selex ES in Southampton UK. Firefly represents a fundamental redesign of the infrared signal chain reducing power consumption and providing compatibility with low cost, low power Commercial Off-The-Shelf (COTS) computing technology. This paper describes key innovations in this signal chain: a ROIC purpose built to minimize power consumption in the proximity electronics, GPU based image processing of infrared video, and a software customisable infrared core which can communicate wirelessly with other Battlespace systems.

  13. Single-camera panoramic stereo imaging system with a fisheye lens and a convex mirror.

    PubMed

    Li, Weiming; Li, Y F

    2011-03-28

    This paper presents a panoramic stereo imaging system which uses a single camera coaxially combined with a fisheye lens and a convex mirror. It provides the design methodology, trade analysis, and experimental results using commercially available components. The trade study shows the design equations and the various tradeoffs that must be made during design. The system's novelty is that it provides stereo vision over a full 360-degree horizontal field-of-view (FOV). Meanwhile, the entire vertical FOV is enlarged compared to the existing systems. The system is calibrated with a computational model that can accommodate the non-single viewpoint imaging cases to conduct 3D reconstruction in Euclidean space. PMID:21451610

  14. Sensors for 3D Imaging: Metric Evaluation and Calibration of a CCD/CMOS Time-of-Flight Camera

    PubMed Central

    Chiabrando, Filiberto; Chiabrando, Roberto; Piatti, Dario; Rinaudo, Fulvio

    2009-01-01

    3D imaging with Time-of-Flight (ToF) cameras is a promising recent technique which allows 3D point clouds to be acquired at video frame rates. However, the distance measurements of these devices are often affected by systematic errors which decrease the quality of the acquired data. In order to evaluate these errors, experimental tests on a CCD/CMOS ToF camera sensor, the SwissRanger (SR)-4000, were performed and are reported in this paper. Two main aspects are treated. The first is the calibration of the camera's distance measurements, covering the camera warm-up period, the distance measurement error, and the influence on distance measurements of the camera's orientation with respect to the observed object. The second is the photogrammetric calibration of the amplitude images delivered by the camera, using a purpose-built multi-resolution field made of high contrast targets. PMID:22303163

  15. Reconstruction of refocusing and all-in-focus images based on forward simulation model of plenoptic camera

    NASA Astrophysics Data System (ADS)

    Zhang, Rumin; Liu, Peng; Liu, Dijun; Su, Guobin

    2015-12-01

    In this paper, we establish a forward simulation model of a plenoptic camera, implemented by inserting a micro-lens array into a conventional camera. The simulation model is used to emulate how objects at different depths are imaged by the main lens, remapped by the micro-lenses, and finally captured on the 2D sensor. We can easily modify the parameters of the simulation model, such as the focal lengths and diameters of the main lens and micro-lenses and the number of micro-lenses. Employing spatial integration, refocused images and all-in-focus images are rendered from the plenoptic images produced by the model. The forward simulation model can be used to determine the trade-offs between different configurations and to test new research related to plenoptic cameras without the need for a prototype.
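Refocused rendering from plenoptic data can be sketched as a shift-and-sum over sub-aperture views. The integer-shift model below is a common simplification consistent with the spatial integration described; the exact per-view shift geometry depends on the simulated camera parameters.

```python
import numpy as np

def refocus(subaperture_stack, offsets, alpha):
    """Shift-and-sum refocusing over sub-aperture views.

    subaperture_stack: (N, H, W) array of views extracted from a
    plenoptic image; offsets: (N, 2) integer view positions relative
    to the central view. Scaling the per-view shift by alpha moves
    the synthetic focal plane (alpha and the integer-shift model are
    simplifications, not the paper's exact rendering geometry).
    """
    acc = np.zeros_like(subaperture_stack[0], dtype=np.float64)
    for view, (du, dv) in zip(subaperture_stack, offsets):
        shift = (round(alpha * du), round(alpha * dv))
        acc += np.roll(view, shift, axis=(0, 1))
    return acc / len(subaperture_stack)
```

Objects at the depth matched by alpha add coherently and come into focus, while objects at other depths are averaged across misaligned positions and blur, which is exactly the behavior a forward simulation model lets one verify against known scene depths.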

  16. Implementation of a continuous scanning procedure and a line scan camera for thin-sheet laser imaging microscopy

    PubMed Central

    Schacht, Peter; Johnson, Shane B.; Santi, Peter A.

    2010-01-01

    We report the development of a continuous scanning procedure and the use of a time delay integration (TDI) line scan camera for a light-sheet based microscope called a thin-sheet laser imaging microscope (TSLIM). TSLIM is an optimized version of a light-sheet fluorescent microscope that previously used a start/stop scanning procedure to move the specimen through the thinnest portion of a light-sheet and stitched the image columns together to produce a well-focused composite image. In this paper, hardware and software enhancements to TSLIM are described that allow for dual-sided, dual-laser illumination and continuous scanning of the specimen using either a full-frame CCD camera or a TDI line scan camera. These enhancements provided a ~70% reduction in the time required for composite image generation and a ~63% reduction in photobleaching of the specimen compared to the start/stop procedure. PMID:21258493
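The column-stitching idea behind the earlier start/stop scheme (which the continuous scan and TDI camera replace) can be sketched as: from each frame in the scan, keep only the column that crossed the thinnest part of the light sheet. The array shapes and waist position below are hypothetical.

```python
import numpy as np

def stitch_focused_columns(frames, beam_waist_col):
    """Build a composite image from a start/stop scan by keeping, from
    each frame, only the column imaged at the light-sheet waist.

    frames: (N, H, W) stack of frames taken at N scan positions;
    beam_waist_col: the (hypothetical) column index where the sheet
    is thinnest, so every composite column is optimally sectioned.
    """
    return np.stack([f[:, beam_waist_col] for f in frames], axis=1)
```

A TDI line scan camera achieves the same result without stopping: charge is shifted down the sensor in step with the moving specimen, so the waist column is integrated continuously, which is where the reported time and photobleaching savings come from.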

  17. In situ X-ray beam imaging using an off-axis magnifying coded aperture camera system.

    PubMed

    Kachatkou, Anton; Kyele, Nicholas; Scott, Peter; van Silfhout, Roelof

    2013-07-01

    An imaging model and an image reconstruction algorithm for a transparent X-ray beam imaging and position measuring instrument are presented. The instrument relies on a coded aperture camera to record magnified images of the footprint of the incident beam on a thin foil placed in the beam at an oblique angle. The imaging model represents the instrument as a linear system whose impulse response takes into account the image blur owing to the finite thickness of the foil, the shape and size of camera's aperture and detector's point-spread function. The image reconstruction algorithm first removes the image blur using the modelled impuls