Science.gov

Sample records for camera lroc images

  1. LROC - Lunar Reconnaissance Orbiter Camera

    NASA Astrophysics Data System (ADS)

    Robinson, M. S.; Bowman-Cisneros, E.; Brylow, S. M.; Eliason, E.; Hiesinger, H.; Jolliff, B. L.; McEwen, A. S.; Malin, M. C.; Roberts, D.; Thomas, P. C.; Turtle, E.

    2006-12-01

    The Lunar Reconnaissance Orbiter Camera (LROC) is designed to address two of the prime LRO measurement requirements: 1) assess meter- and smaller-scale features to facilitate safety analysis for potential lunar landing sites near polar resources, and elsewhere on the Moon; and 2) acquire multi-temporal synoptic imaging of the poles every orbit to characterize the polar illumination environment (100 m scale), identifying regions of permanent shadow and permanent or near-permanent illumination over a full lunar year. The LROC consists of two narrow-angle camera components (NACs) to provide 0.5-m scale panchromatic images over a 5-km swath, a wide-angle camera component (WAC) to provide images at a scale of 100 and 400 m in seven color bands over a 100-km swath, and a common Sequence and Compressor System (SCS). In addition to acquiring the two LRO prime measurement sets, LROC will return six other high-value datasets that support LRO goals, the Robotic Lunar Exploration Program (RLEP), and basic lunar science. These additional datasets include: 3) meter-scale mapping of regions of permanent or near-permanent illumination of polar massifs; 4) multiple co-registered observations of portions of potential landing sites and elsewhere for derivation of high-resolution topography through stereogrammetric and photometric stereo analyses; 5) a global multispectral map in 7 wavelengths (300-680 nm) to characterize lunar resources, in particular ilmenite; 6) a global 100-m/pixel basemap with incidence angles (60°-80°) favorable for morphologic interpretations; 7) sub-meter imaging of a variety of geologic units to characterize physical properties, variability of the regolith, and key science questions; and 8) meter-scale coverage overlapping with Apollo-era Panoramic images (1-2 m/pixel) to document the number of small impacts since 1971-1972, to ascertain hazards for future surface operations and interplanetary travel.

  2. NASA's Lunar Reconnaissance Orbiter Cameras (LROC)

    NASA Astrophysics Data System (ADS)

    Robinson, M.; McEwen, A.; Eliason, E.; Joliff, B.; Hiesinger, H.; Malin, M.; Thomas, P.; Turtle, E.; Brylow, S.

    The Lunar Reconnaissance Orbiter (LRO) mission is scheduled to launch in the fall of 2008 as part of NASA's Robotic Lunar Exploration Program and is the first spacecraft to be built as part of NASA's Vision for Space Exploration. The orbiter will be equipped with seven scientific instrument packages, one of which is LROC. The Lunar Reconnaissance Orbiter Camera (LROC) has been designed to address two of LRO's primary measurement objectives: landing site certification and monitoring of polar illumination. In order to examine potential landing sites, high-resolution images (0.5 m/pixel) will be used to assess meter-scale features near the poles and in other regions on the lunar surface. The LROC will also acquire 100 m/pixel images of the polar regions of the Moon during each orbit for a year to identify areas of permanent shadow and permanent or near-permanent illumination. In addition to these two main objectives, the LROC team also plans to conduct meter-scale monitoring of polar regions under varying illumination angles, acquire overlapping observations to enable derivation of meter-scale topography, acquire global multispectral imaging to map ilmenite and other minerals, derive a global morphology base map, characterize regolith properties, and determine current impact hazards by re-imaging areas covered by Apollo images to search for newly formed impact craters. The LROC is a modified version of the Mars Reconnaissance Orbiter's Context Camera and Mars Color Imager. The LROC will be made up of four optical elements: two identical narrow-angle telescopes

  3. Lunar Reconnaissance Orbiter Camera (LROC) instrument overview

    USGS Publications Warehouse

    Robinson, M.S.; Brylow, S.M.; Tschimmel, M.; Humm, D.; Lawrence, S.J.; Thomas, P.C.; Denevi, B.W.; Bowman-Cisneros, E.; Zerr, J.; Ravine, M.A.; Caplinger, M.A.; Ghaemi, F.T.; Schaffner, J.A.; Malin, M.C.; Mahanti, P.; Bartels, A.; Anderson, J.; Tran, T.N.; Eliason, E.M.; McEwen, A.S.; Turtle, E.; Jolliff, B.L.; Hiesinger, H.

    2010-01-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) and Narrow Angle Cameras (NACs) are on the NASA Lunar Reconnaissance Orbiter (LRO). The WAC is a 7-color push-frame camera (100 and 400 m/pixel visible and UV, respectively), while the two NACs are monochrome narrow-angle linescan imagers (0.5 m/pixel). The primary mission of LRO is to obtain measurements of the Moon that will enable future lunar human exploration. The overarching goals of the LROC investigation include landing site identification and certification, mapping of permanently polar shadowed and sunlit regions, meter-scale mapping of polar regions, global multispectral imaging, a global morphology base map, characterization of regolith properties, and determination of current impact hazards.

  4. Flight Calibration of the LROC Narrow Angle Camera

    NASA Astrophysics Data System (ADS)

    Humm, D. C.; Tschimmel, M.; Brylow, S. M.; Mahanti, P.; Tran, T. N.; Braden, S. E.; Wiseman, S.; Danton, J.; Eliason, E. M.; Robinson, M. S.

    2015-09-01

    Characterization and calibration are vital for instrument commanding and image interpretation in remote sensing. The Lunar Reconnaissance Orbiter Camera Narrow Angle Camera (LROC NAC) takes 500-Mpixel greyscale images of lunar scenes at 0.5 meters/pixel. It uses two nominally identical line scan cameras for a larger crosstrack field of view. Stray light, spatial crosstalk, and nonlinearity were characterized using flight images of the Earth and the lunar limb; these are important for imaging shadowed craters, studying ~1-meter-size objects, and photometry, respectively. Background, nonlinearity, and flatfield corrections have been implemented in the calibration pipeline. An eight-column pattern in the background is corrected. The detector is linear for DN = 600-2000, but a signal-dependent additive correction is required and applied for DN < 600. A predictive model of detector temperature and dark level was developed to command the dark level offset; this avoids images with a cutoff at DN = 0 and minimizes quantization error in companding. Absolute radiometric calibration is derived from comparison of NAC images with ground-based images taken with the Robotic Lunar Observatory (ROLO) at much lower spatial resolution but with the same photometric angles.
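The correction chain described above (background subtraction, a signal-dependent additive correction below DN = 600, then flatfield division) can be sketched as follows. The function name, the form of the low-DN correction term, and all coefficients are illustrative assumptions, not the flight calibration.

```python
import numpy as np

def calibrate_nac(raw_dn, dark_level, flatfield, nonlin_coeff=0.02):
    """Sketch of a NAC-style calibration chain: background (dark) subtraction,
    a signal-dependent additive nonlinearity correction applied below DN = 600,
    then flatfield division. Coefficients are illustrative, not flight values."""
    dn = raw_dn.astype(float) - dark_level           # background correction
    low = dn < 600
    # hypothetical additive correction that shrinks to zero at DN = 600
    dn[low] += nonlin_coeff * (600 - dn[low])
    return dn / flatfield                            # flatfield correction

raw = np.array([[500., 1200.], [300., 2000.]])
flat = np.ones((2, 2))                               # trivial flatfield
cal = calibrate_nac(raw, dark_level=100., flatfield=flat)
```

Pixels above the linear threshold pass through unchanged; only the low-signal pixels receive the additive term.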

  5. Characterization of previously unidentified lunar pyroclastic deposits using Lunar Reconnaissance Orbiter Camera (LROC) data

    USGS Publications Warehouse

    Gustafson, J. Olaf; Bell, James F.; Gaddis, Lisa R.; Hawke, B. Ray; Giguere, Thomas A.

    2012-01-01

    We used a Lunar Reconnaissance Orbiter Camera (LROC) global monochrome Wide-angle Camera (WAC) mosaic to conduct a survey of the Moon to search for previously unidentified pyroclastic deposits. Promising locations were examined in detail using LROC multispectral WAC mosaics, high-resolution LROC Narrow Angle Camera (NAC) images, and Clementine multispectral (ultraviolet-visible or UVVIS) data. Out of 47 potential deposits chosen for closer examination, 12 were selected as probable newly identified pyroclastic deposits. Potential pyroclastic deposits were generally found in settings similar to previously identified deposits, including areas within or near mare deposits adjacent to highlands, within floor-fractured craters, and along fissures in mare deposits. However, a significant new finding is the discovery of localized pyroclastic deposits within floor-fractured craters Anderson E and F on the lunar farside, isolated from other known similar deposits. Our search confirms that most major regional and localized low-albedo pyroclastic deposits have been identified on the Moon down to ~100 m/pix resolution, and that additional newly identified deposits are likely to be either isolated small deposits or additional portions of discontinuous, patchy deposits.

  6. Regolith thickness estimation over Sinus Iridum using morphology of small craters from LROC images

    NASA Astrophysics Data System (ADS)

    Liu, T.; Fa, W.

    2013-09-01

    Regolith thickness over the Sinus Iridum region is estimated using the morphology and size-frequency distribution of small craters counted in Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) images. Results show that regolith thickness for Sinus Iridum ranges from 2 m to more than 10 m, with a median value between 4.1 m and 6.1 m.

  7. Preliminary Mapping of Permanently Shadowed and Sunlit Regions Using the Lunar Reconnaissance Orbiter Camera (LROC)

    NASA Astrophysics Data System (ADS)

    Speyerer, E.; Koeber, S.; Robinson, M. S.

    2010-12-01

    The spin axis of the Moon is tilted by only 1.5° (compared with the Earth's 23.5°), leaving some areas near the poles in permanent shadow while other nearby regions remain sunlit for a majority of the year. Theory, radar data, neutron measurements, and Lunar CRater Observation and Sensing Satellite (LCROSS) observations suggest that volatiles may be present in the cold traps created inside these permanently shadowed regions. Meanwhile, areas of near-permanent illumination are prime locations for future lunar outposts due to benign thermal conditions and near-constant solar power. The Lunar Reconnaissance Orbiter (LRO) has two imaging systems that provide medium- and high-resolution views of the poles. During almost every orbit the LROC Wide Angle Camera (WAC) acquires images at 100 m/pixel of the polar region (80° to 90° north and south latitude). In addition, the LROC Narrow Angle Camera (NAC) targets selected regions of interest at 0.7 to 1.5 m/pixel [Robinson et al., 2010]. During the first 11 months of the nominal mission, LROC acquired almost 6,000 WAC images and over 7,300 NAC images of the polar regions (i.e., within 2° of each pole). By analyzing this time series of WAC and NAC images, regions of permanent shadow and permanent or near-permanent illumination can be quantified. The LROC Team is producing several reduced data products that graphically illustrate the illumination conditions of the polar regions. Illumination movie sequences are being produced that show how the lighting conditions change over a calendar year. Each frame of the movie sequence is a polar stereographic projected WAC image showing the lighting conditions at that moment. With the WAC's wide field of view (~100 km at an altitude of 50 km), each frame has repeat coverage between 88° and 90° at each pole. The same WAC images are also being used to develop multi-temporal illumination maps that show the percentage of time each 100 m × 100 m area is illuminated.
These maps are derived by stacking all the WAC frames, selecting a threshold to determine if the surface is illuminated, and summing the resulting binary images. In addition, mosaics of NAC images are also being produced for regions of interest at a scale of 0.7 to 1.5 m/pixel. The mosaics produced so far have revealed small illuminated surfaces on the tens-of-meters scale that were previously thought to be shadowed during that time. The LROC dataset of the polar regions complements previous illumination analyses of Clementine images [Bussey et al., 1999], Kaguya topography [Bussey et al., 2010], and the current efforts underway by the Lunar Orbiter Laser Altimeter (LOLA) Team [Mazarico et al., 2010], and provides an important new dataset for science and exploration. References: Bussey et al. (1999), Illumination conditions at the lunar south pole, Geophysical Research Letters, 26(9), 1187-1190. Bussey et al. (2010), Illumination conditions of the south pole of the Moon derived from Kaguya topography, Icarus, 208, 558-564. Mazarico et al. (2010), Illumination of the lunar poles from the Lunar Orbiter Laser Altimeter (LOLA) Topography Data, paper presented at 41st LPSC, Houston, TX. Robinson et al. (2010), Lunar Reconnaissance Orbiter Camera (LROC) Instrument Overview, Space Sci Rev, 150, 81-124.
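The stack-threshold-sum procedure for the multi-temporal illumination maps can be sketched as below; the threshold value and the toy 2×2 frames are illustrative assumptions.

```python
import numpy as np

def illumination_fraction(frames, threshold):
    """Stack co-registered, map-projected polar frames, threshold each into a
    binary lit/shadow mask, and average over time to get the fraction of
    observations in which each pixel is illuminated.
    `frames` is a (time, rows, cols) array of radiance-like values."""
    binary = np.stack([f > threshold for f in frames])  # lit = True
    return binary.mean(axis=0)                          # fraction illuminated

# three synthetic frames of a 2x2 polar tile
frames = np.array([
    [[0.9, 0.1], [0.8, 0.7]],
    [[0.9, 0.2], [0.1, 0.6]],
    [[0.8, 0.1], [0.1, 0.9]],
])
frac = illumination_fraction(frames, threshold=0.5)
```

A pixel lit in every frame maps to 1.0 (near-permanent illumination); a pixel never above threshold maps to 0.0 (candidate permanent shadow).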

  8. LROC NAC Stereo Anaglyphs

    NASA Astrophysics Data System (ADS)

    Mattson, S.; McEwen, A. S.; Speyerer, E.; Robinson, M. S.

    2012-12-01

    The Lunar Reconnaissance Orbiter Camera Narrow Angle Camera (LROC NAC) acquires high resolution (50 to 200 cm pixel scale) images of the Moon. In operation since June 2009, LROC NAC acquires geometric stereo pairs by rolling off-nadir on subsequent orbits. A new automated processing system currently in development will produce anaglyphs from most of the NAC geometric stereo pairs. An anaglyph is an image formed by placing one image from the stereo pair in the red channel, and the other image from the stereo pair in the green and blue channels, so that together with red-blue or red-cyan glasses, the 3D information in the pair can be readily viewed. These new image products will make qualitative interpretation of the lunar surface in 3D more accessible, without the need for intensive computational resources or special equipment. The LROC NAC is composed of two separate pushbroom CCD cameras (NAC L and R) aligned to increase the full swath width to 5 km from an altitude of 50 km. Development of the anaglyph processing system incorporates stereo viewing geometry, proper alignment of the NAC L and R frames, and optimal contrast normalization of the stereo pair to minimize extreme brightness differences, which can make stereo viewing difficult in an anaglyph. The LROC NAC anaglyph pipeline is based on a similar automated system developed for the HiRISE camera, on the Mars Reconnaissance Orbiter. Improved knowledge of camera pointing and spacecraft position allows for the automatic registration of the L and R frames by map projecting them to a polar stereographic projection. One half of the stereo pair must then be registered to the other so there is no offset in the vertical (y) direction. Stereo viewing depends on parallax only in the horizontal (x) direction. High resolution LROC NAC anaglyphs will be made available to the lunar science community and to the public on the LROC web site (http://lroc.sese.asu.edu).
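The channel-assembly step of anaglyph construction is easy to sketch: after the pair is registered, one image drives the red channel and the other fills green and blue. A minimal numpy sketch (registration and contrast normalization, which the pipeline also performs, are omitted here):

```python
import numpy as np

def make_anaglyph(left, right):
    """Build a red-cyan anaglyph from a registered greyscale stereo pair:
    the left image becomes the red channel, the right image becomes both
    the green and blue channels, giving a (rows, cols, 3) RGB array."""
    left = left.astype(np.uint8)
    right = right.astype(np.uint8)
    return np.dstack([left, right, right])

L = np.full((2, 2), 200, dtype=np.uint8)  # synthetic left frame
R = np.full((2, 2), 50, dtype=np.uint8)   # synthetic right frame
ana = make_anaglyph(L, R)
```

Viewed through red-cyan glasses, each eye then sees only its half of the pair, so horizontal parallax between the two frames is perceived as depth.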

  9. Exploring the Moon at High-Resolution: First Results From the Lunar Reconnaissance Orbiter Camera (LROC)

    NASA Astrophysics Data System (ADS)

    Robinson, Mark; Hiesinger, Harald; McEwen, Alfred; Jolliff, Brad; Thomas, Peter C.; Turtle, Elizabeth; Eliason, Eric; Malin, Mike; Ravine, A.; Bowman-Cisneros, Ernest

    The Lunar Reconnaissance Orbiter (LRO) spacecraft was launched on an Atlas V 401 rocket from the Cape Canaveral Air Force Station Launch Complex 41 on June 18, 2009. After spending four days in Earth-Moon transit, the spacecraft entered a three-month commissioning phase in an elliptical 30×200 km orbit. On September 15, 2009, LRO began its planned one-year nominal mapping mission in a quasi-circular 50 km orbit. A multi-year extended mission in a fixed 30×200 km orbit is optional. The Lunar Reconnaissance Orbiter Camera (LROC) consists of a Wide Angle Camera (WAC) and two Narrow Angle Cameras (NACs). The WAC is a 7-color push-frame camera, which images the Moon at 100 and 400 m/pixel in the visible and UV, respectively, while the two NACs are monochrome narrow-angle linescan imagers with 0.5 m/pixel spatial resolution. LROC was specifically designed to address two of the primary LRO mission requirements and six other key science objectives, including 1) assessment of meter- and smaller-scale features in order to select safe sites for potential lunar landings near polar resources and elsewhere on the Moon; 2) acquisition of multi-temporal synoptic 100 m/pixel images of the poles during every orbit to unambiguously identify regions of permanent shadow and permanent or near-permanent illumination; 3) meter-scale mapping of regions with permanent or near-permanent illumination of polar massifs; 4) repeat observations of potential landing sites and other regions to derive high-resolution topography; 5) global multispectral observations in seven wavelengths to characterize lunar resources, particularly ilmenite; 6) a global 100-m/pixel basemap with incidence angles (60°-80°) favorable for morphological interpretations; 7) sub-meter imaging of a variety of geologic units to characterize their physical properties, the variability of the regolith, and other key science questions; and 8) meter-scale coverage overlapping with Apollo-era panoramic images (1-2 m/pixel) to document 
the number of small impacts since 1971-1972. LROC allows us to determine the recent impact rate of bolides in the size range of 0.5 to 10 meters, which is currently not well known. Determining the impact rate at these sizes enables engineering remediation measures for future surface operations and interplanetary travel. The WAC has imaged nearly the entire Moon in seven wavelengths. A preliminary global WAC stereo-based topographic model is in preparation [1] and global color processing is underway [2]. As the mission progresses, repeat global coverage will be obtained as lighting conditions change, providing a robust photometric dataset. The NACs are revealing a wealth of morphologic features at the meter scale, providing the engineering and science constraints needed to support future lunar exploration. All of the Apollo landing sites have been imaged, as well as the majority of robotic landing and impact sites. Through the use of off-nadir slews a collection of stereo pairs is being acquired that enables 5-m scale topographic mapping [3-7]. Impact morphologies (terraces, impact melt, rays, etc.) are preserved in exquisite detail at all Copernican craters and are enabling new studies of impact mechanics and crater size-frequency distribution measurements [8-12]. Other topical studies, including, for example, lunar pyroclastics, domes, and tectonics, are underway [e.g., 10-17]. The first PDS data release of LROC data will be in March 2010, and will include all images from the commissioning phase and the first 3 months of the mapping phase. [1] Scholten et al. (2010) 41st LPSC, #2111; [2] Denevi et al. (2010a) 41st LPSC, #2263; [3] Beyer et al. (2010) 41st LPSC, #2678; [4] Archinal et al. (2010) 41st LPSC, #2609; [5] Mattson et al. (2010) 41st LPSC, #1871; [6] Tran et al. (2010) 41st LPSC, #2515; [7] Oberst et al. (2010) 41st LPSC, #2051; [8] Bray et al. (2010) 41st LPSC, #2371; [9] Denevi et al. (2010b) 41st LPSC, #2582; [10] Hiesinger et al. 
(2010a) 41st LPSC, #2278; [11] Hiesinger et al. (2010b) 41st LPSC, #2304; [12] van der Bogert et al. (2010) 41st LPSC, #2165; [13] Plescia et al. (2010) 41st LPSC, #2160; [14] Lawrence et al. (2010) 41st LPSC, #1906; [15] Gaddis et al. (2010) 41st LPSC, #2059; [16] Watters et al. (2010) 41st LPSC, #1863; [17] Garry et al. (2010) 41st LPSC, #2278.

  10. Occurrence probability of slopes on the lunar surface: Estimate by the shaded area percentage in the LROC NAC images

    NASA Astrophysics Data System (ADS)

    Abdrakhimov, A. M.; Basilevsky, A. T.; Ivanov, M. A.; Kokhanov, A. A.; Karachevtseva, I. P.; Head, J. W.

    2015-09-01

    The paper describes a method of estimating the distribution of slopes from the portion of shaded area measured in images acquired at different Sun elevations. The measurements were performed for the benefit of the Luna-Glob Russian mission. The western ellipse for the spacecraft landing in the crater Boguslawsky in the southern polar region of the Moon was investigated. The percentage of shaded area was measured in images acquired with the LROC NAC camera at a resolution of ~0.5 m. Due to the close vicinity of the pole, it is difficult to build digital terrain models (DTMs) for this region from the LROC NAC images; because of this, the described method has been suggested. For the landing ellipse investigated, 52 LROC NAC images obtained at Sun elevations from 4° to 19° were used. In these images the shaded portions of the area were measured, and these values were converted to the occurrence of slopes (in this case, at the 3.5-m baseline) with calibration against the surface characteristics of the Lunokhod-1 study area, for which a digital terrain model at ~0.5-m resolution and 13 LROC NAC images obtained at different Sun elevations are available. From the results of the measurements and the corresponding calibration, it was found that, in the studied landing ellipse, the occurrence of slopes gentler than 10° at the 3.5-m baseline is 90%, while it is 9.6, 5.7, and 3.9% for slopes steeper than 10°, 15°, and 20°, respectively. This method can be recommended for application when no DTM of the required granularity exists for a region of interest but high-resolution images taken at different Sun elevations are available.
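The first step of the method, measuring the shaded portion of an image, reduces to counting pixels below a shadow threshold. A minimal sketch, with an illustrative threshold and synthetic data (the conversion to slope occurrence requires the Lunokhod-1 calibration described above and is not reproduced here):

```python
import numpy as np

def shaded_fraction(image, shadow_threshold):
    """Fraction of pixels darker than a shadow threshold. In the method
    described above, this fraction, measured at a known Sun elevation, is
    converted to slope-occurrence statistics via calibration against a
    reference area for which a DTM exists. Threshold choice is illustrative."""
    return float(np.mean(image < shadow_threshold))

# synthetic 2x2 image: two dark (shaded) and two bright (lit) pixels
img = np.array([[10, 240], [15, 220]])
f = shaded_fraction(img, shadow_threshold=50)
```

Repeating this over images at several Sun elevations yields the shaded-fraction-versus-elevation curve that the calibration maps onto slope occurrence.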

  11. Photometric parameter maps of the Moon derived from LROC WAC images

    NASA Astrophysics Data System (ADS)

    Sato, H.; Robinson, M. S.; Hapke, B. W.; Denevi, B. W.; Boyd, A. K.

    2013-12-01

    Spatially resolved photometric parameter maps were computed from 21 months of Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) images. Due to a 60° field-of-view (FOV), the WAC achieves nearly global coverage of the Moon each month with more than 50% overlap from orbit to orbit. From the repeat observations at various viewing and illumination geometries, we calculated Hapke bidirectional reflectance model parameters [1] for 1°x1° "tiles" from 70°N to 70°S and 0°E to 360°E. About 66,000 WAC images acquired from February 2010 to October 2011 were converted from DN to radiance factor (I/F) through radiometric calibration, partitioned into gridded tiles, and stacked in a time series (tile-by-tile method [2]). Lighting geometries (phase, incidence, emission) were computed using the WAC digital terrain model (100 m/pixel) [3]. The Hapke parameters were obtained by model fitting against I/F within each tile. Among the 9 parameters of the Hapke model, we calculated 3 free parameters (w, b, and hs) by setting constant values for 4 parameters (Bco = 0, hc = 1, θ, φ = 0) and interpolating 2 parameters (c, Bso). In this simplification, we ignored the Coherent Backscatter Opposition Effect (CBOE) to avoid competition between CBOE and the Shadow Hiding Opposition Effect (SHOE). We also assumed that surface regolith porosity is uniform across the Moon. The roughness parameter (θ) was set to an averaged value from the equatorial band (±3° latitude). The Henyey-Greenstein double-lobe function (H-G2) parameter (c) was given by the 'hockey stick' relation [4] (negative correlation) between b and c based on laboratory measurements. The amplitude of SHOE (Bso) was given by the correlation between w and Bso in the equatorial band (±3° latitude). Single-scattering albedo (w) is strongly correlated with the photometrically normalized I/F, as expected. The parameter c shows an inverse trend relative to b due to the 'hockey stick' relation.
The parameter c is typically low for the maria (0.08 ± 0.06) relative to the highlands (0.47 ± 0.16). Since c controls the fraction of backward/forward scattering in H-G2, lower c for the maria indicates more forward scattering relative to the highlands. This trend is opposite to what was expected, because darker particles are usually more backscattering. However, the lower albedo of the maria is due to the higher abundance of ilmenite, an opaque mineral that scatters all of the light by specular reflection from its surface; if their surface facets are relatively smooth, ilmenite particles will be forward scattering. Other factors (e.g., grain shape, grain size, porosity, maturity) besides mineralogy might also affect c. The angular width of SHOE (hs) typically shows lower values (0.047 ± 0.02) for the maria relative to the highlands (0.074 ± 0.025). An increase in hs for the maria theoretically suggests lower porosity or a narrower grain size distribution [1], but the link between actual materials and hs is not well constrained. Further experiments using both laboratory and spacecraft observations will help to unravel the photometric properties of the surface materials of the Moon. [1] Hapke, B.: Cambridge Univ. Press, 2012. [2] Sato, H. et al.: 42nd LPSC, abstract #1974, 2011. [3] Scholten, F. et al.: JGR, 117, E00H17, 2012. [4] Hapke, B.: Icarus, 221(2), 1079-1083, 2012.
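The per-tile fitting step can be illustrated with a deliberately simplified stand-in: a one-parameter least-squares fit of an albedo-like scale factor against stacked observations, in place of the three-parameter (w, b, hs) Hapke fit. All names and data here are synthetic.

```python
import numpy as np

def fit_albedo_scale(i_over_f, phase_fn_vals):
    """One-parameter least-squares estimate of an albedo-like scale factor w
    for a tile: minimizes sum((I/F - w * f)^2) over the tile's stacked
    observations, where f holds model phase-function values at each
    observation's geometry. A stand-in for the full Hapke fit; the closed
    form w = (f . y) / (f . f) is the normal-equation solution."""
    f = np.asarray(phase_fn_vals, dtype=float)
    y = np.asarray(i_over_f, dtype=float)
    return float(f @ y / (f @ f))

# synthetic tile: four observations, true scale factor 0.3, noiseless
f_vals = np.array([0.8, 0.6, 0.4, 0.2])
obs = 0.3 * f_vals
w = fit_albedo_scale(obs, f_vals)
```

In the real pipeline this minimization is nonlinear (w, b, and hs enter the Hapke model non-additively), so an iterative solver replaces the closed form, but the tile-by-tile structure is the same.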

  12. Secondary Craters and the Size-Velocity Distribution of Ejected Fragments around Lunar Craters Measured Using LROC Images

    NASA Astrophysics Data System (ADS)

    Singer, K. N.; Jolliff, B. L.; McKinnon, W. B.

    2013-12-01

    We report results from analyzing the size-velocity distribution (SVD) of secondary-crater-forming fragments from the 93-km-diameter Copernicus impact. We measured the diameters of secondary craters and their distances from Copernicus using LROC Wide Angle Camera (WAC) and Narrow Angle Camera (NAC) image data. We then estimated the velocity and size of the ejecta fragment that formed each secondary crater from the range equation for a ballistic trajectory on a sphere and Schmidt-Holsapple scaling relations. Size scaling was carried out in the gravity regime for both non-porous and porous target material properties. We focus on the largest ejecta fragments (d_fmax) at a given ejection velocity (υ_ej) and fit the upper envelope of the SVD using quantile regression to an equation of the form d_fmax = A·υ_ej^(−β). The velocity exponent, β, describes how quickly fragment sizes fall off with increasing ejection velocity during crater excavation. For Copernicus, we measured 5800 secondary craters, at distances of up to 700 km (15 crater radii), corresponding to an ejecta fragment velocity of approximately 950 m/s. This mapping only includes secondary craters that are part of a radial chain or cluster. The two largest craters in chains near Copernicus that are likely to be secondaries are 6.4 and 5.2 km in diameter. We obtained a velocity exponent, β, of 2.2 ± 0.1 for a non-porous surface. This result is similar to Vickery's [1987, GRL 14] determination of β = 1.9 ± 0.2 for Copernicus using Lunar Orbiter IV data. 
The availability of WAC 100 m/pix global mosaics with illumination geometry optimized for morphology allows us to update and extend the work of Vickery [1986, Icarus 67, and 1987], who compared secondary crater SVDs for craters on the Moon, Mercury, and Mars. Additionally, meter-scale NAC images enable characterization of secondary crater morphologies and fields around much smaller primary craters than were previously investigated. Combined results from all previous studies of ejecta fragment SVDs from secondary crater fields show that β ranges between approximately 1 and 3. First-order spallation theory predicts a β of 1 [Melosh 1989, Impact Cratering, Oxford Univ. Press]. Results in Vickery [1987] for the Moon exhibit a generally decreasing β with increasing primary crater size (5 secondary fields mapped). In the same paper, however, this trend is flat for Mercury (3 fields mapped) and opposite for Mars (4 fields mapped). SVDs for craters on large icy satellites (Ganymede and Europa), with gravities not too dissimilar to lunar gravity, show generally low velocity exponents (β between 1 and 1.5), except for the very largest impactor measured: the 585-km-diameter Gilgamesh basin on Ganymede (β = 2.6 ± 0.4) [Singer et al., 2013, Icarus 226]. The present work, focusing initially on lunar craters using LROC data, will attempt to confirm or clarify these trends, and expand the number of examples under a variety of impact conditions and surface materials to evaluate possible causes of variations.
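The velocity estimate underlying the SVD can be sketched by inverting the ballistic range equation on a sphere for the ejection speed. The constants are standard lunar values, and the 45° launch angle is the conventional ejecta assumption; this is a sketch of the geometry, not the authors' exact code.

```python
import math

def ejection_velocity(range_m, theta_deg=45.0, g=1.62, R_moon=1.7374e6):
    """Invert the ballistic range equation on an airless sphere,
        tan(d / 2R) = v^2 sin(t) cos(t) / (g R - v^2 cos^2(t)),
    for the ejection speed v, given the ground range d (meters) and launch
    angle t. g and R are lunar surface gravity (m/s^2) and radius (m)."""
    t = math.radians(theta_deg)
    x = math.tan(range_m / (2.0 * R_moon))
    v_sq = g * R_moon * x / (math.cos(t) * (math.sin(t) + math.cos(t) * x))
    return math.sqrt(v_sq)

# secondaries mapped out to ~700 km from Copernicus
v = ejection_velocity(700e3)
```

At 700 km range this yields a speed near the ~950 m/s quoted above; the fragment diameter for each secondary then follows from crater scaling relations, which are not sketched here.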

  13. Depths, Diameters, and Profiles of Small Lunar Craters From LROC NAC Stereo Images

    NASA Astrophysics Data System (ADS)

    Stopar, J. D.; Robinson, M.; Barnouin, O. S.; Tran, T.

    2010-12-01

    Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) images (pixel scale ~0.5 m) provide new 3-D views of small craters (40 m < D < 200 m). We extracted topographic profiles from 85 of these craters in mare and highland terrains between 18.1°-19.1°N and 5.2°-5.4°E to investigate relationships among crater shape, age, and target. Obvious secondary craters (e.g., clustered) and moderately to heavily degraded craters were excluded. The freshest craters included in the study have crisp rims, bright ejecta, and no superposed craters. The depth, diameter, and profile of each crater were determined from a NAC-derived DTM (M119808916/M119815703) tied to LOLA topography with better than 1 m vertical resolution (see [1]). Depth/diameter ratios for the selected craters are generally between 0.12 and 0.2. Crater profiles were classified into one of three categories: V-shaped, U-shaped, or intermediate (craters on steep slopes were excluded). Craters were then morphologically classified according to [2], where crater shape is determined by changes in material strength between subsurface layers, resulting in bowl-shaped, flat-bottomed, concentric, or central-mound crater forms. In this study, craters with U-shaped profiles tend to be small (<60 m) and flat-bottomed, while V-shaped craters have steep slopes (~20°), little to no floor, and a range of diameters. Both fresh and relatively degraded craters display the full range of profile shapes (from U to V and all stages in between). We found it difficult to differentiate U-shaped craters from V-shaped craters without the DTM, and we saw no clear correlation between morphologic and profile classification. Further study is still needed to increase our crater statistics and expand on the relatively small population of craters included here. 
For the craters in this study, we found that block abundances correlate with relative crater degradation state as defined by [3], where abundant blocks signal fresher craters; however, block abundances do not correlate with U- or V-shaped profiles. The craters examined here show that profile shape cannot be used to determine relative age or degradation state, as might be inferred from [4], for example. The observed variability in crater profiles may be explained by local variations in regolith thickness [e.g., 2, 5], impactor velocity, and/or possibly bolide density. Ongoing efforts will quantify the possible effects of solitary secondary craters and investigate whether or not depth/diameter ratios and crater profiles vary between different regions of the Moon (thick vs. thin regolith, highlands vs. mare, and old vs. young mare). References: [1] Tran T. et al. (2010) LPSC XLI, Abstract 2515. [2] Quaide W. L. and V. R. Oberbeck (1968) JGR, 73: 5247-5270. [3] Basilevsky A. T. (1976) Proc. LPSC 7th, p. 1005-1020. [4] Soderblom L. A. and L. A. Lebofsky (1972) JGR, 77: 279-296. [5] Wilcox B. B. et al. (2005) Met. Planet. Sci., 40: 695-710.
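Extracting a depth/diameter ratio from a DTM-derived profile reduces to locating the floor minimum and the two rim highs. A simplified sketch under the assumption that the profile spans rim to rim at uniform spacing (the function and sample data are illustrative, not the study's procedure):

```python
import numpy as np

def depth_diameter_ratio(profile, spacing_m):
    """Depth/diameter from a radial topographic profile across a crater
    (elevations in meters, uniformly spaced, spanning rim to rim).
    Diameter is taken between the highest point on each side of the floor
    minimum; depth is mean rim height minus floor elevation."""
    profile = np.asarray(profile, dtype=float)
    floor_i = int(np.argmin(profile))                     # crater floor
    left_i = int(np.argmax(profile[:floor_i]))            # left rim crest
    right_i = floor_i + int(np.argmax(profile[floor_i:])) # right rim crest
    depth = 0.5 * (profile[left_i] + profile[right_i]) - profile[floor_i]
    diameter = (right_i - left_i) * spacing_m
    return depth / diameter

# synthetic bowl-shaped profile, 10 m post spacing
prof = [0, 2, 1, -4, -8, -6, -2, 1.5, 0]
r = depth_diameter_ratio(prof, spacing_m=10.0)
```

The synthetic crater comes out near 0.16, inside the 0.12-0.2 range reported above for fresh small craters.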

  14. LROC Advances in Lunar Science

    NASA Astrophysics Data System (ADS)

    Robinson, M. S.

    2012-12-01

    Since entering orbit in 2009 the Lunar Reconnaissance Orbiter Camera (LROC) has acquired over 700,000 Wide Angle Camera (WAC) and Narrow Angle Camera (NAC) images of the Moon. This new image collection is fueling research into the origin and evolution of the Moon. NAC images revealed a volcanic complex 35 x 25 km (60N, 100E), between Compton and Belkovich craters (CB). The CB terrain sports volcanic domes and irregular depressed areas (caldera-like collapses). The volcanic complex corresponds to an area of high-silica content (Diviner) and high Th (Lunar Prospector). A low density of impact craters on the CB complex indicates a relatively young age. The LROC team mapped over 150 volcanic domes and 90 volcanic cones in the Marius Hills (MH), many of which were not previously identified. Morphology and compositional estimates (Diviner) indicate that MH domes are silica poor, and are products of low-effusion mare lavas. Impact melt deposits are observed with Copernican impact craters (>10 km) on exterior ejecta, the rim, inner wall, and crater floors. Preserved impact melt flow deposits are observed around small craters (25 km diam.), and estimated melt volumes exceed predictions. At these diameters the amount of melt predicted is small, and melt that is produced is expected to be ejected from the crater. However, we observe well-defined impact melt deposits on the floor of highland craters down to 200 m diameter. A globally distributed population of previously undetected contractional structures were discovered. Their crisp appearance and associated impact crater populations show that they are young landforms (<1 Ga). NAC images also revealed small extensional troughs. Crosscutting relations with small-diameter craters and depths as shallow as 1 m indicate ages <50 Ma. These features place bounds on the amount of global radial contraction and the level of compressional stress in the crust. 
WAC temporal coverage of the poles allowed quantification of highly illuminated regions, including one site that remains lit for 94% of a year (longest eclipse period of 43 hours). Targeted NAC images provide higher-resolution characterization of key sites with permanent shadow and extended illumination. Repeat WAC coverage provides an unparalleled photometric dataset allowing spatially resolved solutions (currently 1 degree) to Hapke's photometric equation - data invaluable for photometric normalization and for interpreting physical properties of the regolith. The WAC color also provides the means to solve for titanium and to distinguish subtle age differences within Copernican-aged materials. The longevity of the LRO mission allows follow-up NAC and WAC observations of previously known and newly discovered targets over a range of illumination and viewing geometries. Of particular merit is the acquisition of NAC stereo pairs and oblique sequences. With the extended SMD phase, the LROC team is working towards imaging the whole Moon with pixel scales of 50 to 200 cm.
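The polar-illumination statistics above (percent of a year lit, longest eclipse period) reduce to simple bookkeeping over a time series of lit/shadow states. A minimal sketch, assuming a boolean flag per observation epoch at a fixed cadence (function name hypothetical, not the LROC team's code):

```python
def illumination_stats(lit_flags, step_hours=1.0):
    """Given a chronological sequence of lit/shadow flags sampled at a
    fixed cadence, return (fraction of time lit, longest continuous
    dark stretch in hours)."""
    frac_lit = sum(lit_flags) / len(lit_flags)
    longest_dark = run = 0
    for lit in lit_flags:
        # Reset the dark-run counter whenever the site is lit.
        run = 0 if lit else run + 1
        longest_dark = max(longest_dark, run)
    return frac_lit, longest_dark * step_hours
```

Applied to a full lunar year of WAC polar mosaics, the same tally yields figures like the 94% illumination and 43-hour eclipse quoted above.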

  15. Using LROC analysis to evaluate detection accuracy of microcalcification clusters imaged with flat-panel CT mammography

    NASA Astrophysics Data System (ADS)

    Gong, Xing; Glick, Stephen J.; Vedula, Aruna A.

    2004-05-01

The purpose of this study is to investigate the detectability of microcalcification clusters (MCCs) using CT mammography with a flat-panel detector. Compared with conventional mammography, CT mammography can provide improved discrimination between malignant and benign cases because it gives the radiologist more accurate morphological information on MCCs. In this study, two aspects of MCC detection with flat-panel CT mammography were examined: (1) the minimal size of MCCs detectable with the mean glandular dose (MGD) used in conventional mammography; and (2) the effect of detector pixel size on the detectability of MCCs. A realistic computer simulation modeling x-ray transport through the breast, as well as both signal and noise propagation through the flat-panel imager, was developed to investigate these questions. Microcalcifications were simulated as calcium carbonate spheres with diameters of 125, 150, and 175 μm. Each cluster consisted of 10 spheres spread randomly in a 6×6 mm² region of interest (ROI), and the detector pixel size was set to 100×100, 200×200, or 300×300 μm². After reconstructing 100 projection sets for each case (half with signal present) with the cone-beam Feldkamp (FDK) algorithm, a localization receiver operating characteristic (LROC) study was conducted to evaluate the detectability of MCCs. Five observers chose the locations of cluster centers with corresponding confidence ratings. The average area under the LROC curve suggested that the 175 μm MCCs can be detected at a high level of confidence. Results also indicate that flat-panel detectors with a pixel size of 200×200 μm² are appropriate for detecting small targets such as MCCs.
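In an LROC study such as the one above, a response counts toward the curve only when both the confidence rating passes a threshold and the marked location falls close enough to the true cluster center. A hedged sketch of how LROC points and the area under the curve might be tallied (simplified scoring; function names and the localization tolerance are illustrative, not the study's actual code):

```python
def lroc_points(cases, radius=10.0):
    """cases: (rating, signal_present, loc_error) tuples, where loc_error
    is the distance from the marked location to the true cluster center
    (ignored for signal-absent cases).  Sweeping the ratings as thresholds
    gives false-positive fractions vs. correct-localization fractions."""
    thresholds = sorted({r for r, _, _ in cases}, reverse=True)
    n_sig = sum(1 for _, s, _ in cases if s)
    n_noise = len(cases) - n_sig
    fps, cls = [0.0], [0.0]
    for t in thresholds:
        fp = sum(1 for r, s, _ in cases if not s and r >= t) / n_noise
        # Correct localization requires rating >= t AND a close mark.
        cl = sum(1 for r, s, e in cases if s and r >= t and e <= radius) / n_sig
        fps.append(fp)
        cls.append(cl)
    return fps, cls

def area_under(fps, cls):
    """Trapezoidal area under the LROC curve."""
    return sum((fps[i + 1] - fps[i]) * (cls[i + 1] + cls[i]) / 2.0
               for i in range(len(fps) - 1))
```

Averaging this area across the five observers gives the figure of merit the study reports.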

  16. Dry imaging cameras

    PubMed Central

    Indrajit, IK; Alam, Aftab; Sahni, Hirdesh; Bhatia, Mukul; Sahu, Samaresh

    2011-01-01

Dry imaging cameras are important hard-copy devices in radiology. Using a dry imaging camera, multiformat images from digital modalities in radiology are created from a sealed unit of unexposed films. The functioning of a modern dry camera involves a blend of concurrent processes drawn from diverse fields such as computing, mechanics, thermodynamics, optics, electricity, and radiography. Broadly, hard-copy devices are classified as laser-based or non-laser-based technology. Compared with the working knowledge and technical awareness of other modalities in radiology, the understanding of a dry imaging camera is often superficial and neglected. To fill this void, this article outlines the key features of a modern dry camera and the important issues that impact radiology workflow. PMID:21799589

  17. Combined collimator/reconstruction optimization for myocardial perfusion SPECT imaging using polar map-based LROC numerical observer

    NASA Astrophysics Data System (ADS)

    Konate, Souleymane; Pretorius, P. Hendrik; Gifford, Howard C.; O'Connor, J. Michael; Konik, Arda; Shazeeb, Mohammed Salman; King, Michael A.

    2012-02-01

Polar maps have been used to assist clinicians in diagnosing coronary artery disease (CAD) in single photon emission computed tomography (SPECT) myocardial perfusion imaging. Herein, we investigate the optimization of collimator design for perfusion-defect detection in SPECT imaging when reconstruction includes modeling of the collimator. The optimization employs an LROC clinical model observer (CMO), which emulates the clinical task of polar-map detection of CAD. By utilizing a CMO, which better mimics the clinical perfusion-defect detection task than previous SKE-based observers, our objective is to optimize collimator design for SPECT myocardial perfusion imaging when reconstruction includes compensation for collimator spatial resolution. Comparison of lesion-detection accuracy will then be employed to determine whether a lower-spatial-resolution, hence higher-sensitivity, collimator design than currently recommended could be utilized to reduce the radiation dose to the patient, the imaging time, or a combination of both. As the first step in this investigation, we report herein on the optimization of the three-dimensional (3D) post-reconstruction Gaussian filtering of, and the number of iterations used to reconstruct, the SPECT slices from projections acquired with a low-energy general-purpose (LEGP) collimator. The optimization was in terms of detection accuracy as determined by our CMO and four human observers. Both the human observers and all four CMO variants agreed that the optimal post-filtering used a Gaussian sigma in the range of 0.75 to 1.0 pixels. In terms of the number of iterations, the human observers showed a preference for 5 iterations; however, only one of the variants of the CMO agreed with this selection. The others showed a preference for 15 iterations.
We shall thus proceed to optimize the reconstruction parameters for even higher sensitivity collimators using this CMO, and then do the final comparison between collimators using their individually optimized parameters with human observers and three times the test images to reduce the statistical variation seen in our present results.
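The 3D post-reconstruction Gaussian filtering optimized above can be illustrated with a separable convolution. This is a generic sketch, not the study's implementation; the 0.75-1.0 pixel sigma range is the optimum the abstract reports:

```python
import numpy as np

def gaussian_kernel(sigma, truncate=4.0):
    """Normalized 1-D Gaussian kernel truncated at +/- truncate*sigma."""
    radius = int(truncate * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def postfilter_volume(volume, sigma_pixels=0.75):
    """Isotropic 3-D Gaussian post-reconstruction filter, applied as three
    separable 1-D convolutions (one per axis)."""
    k = gaussian_kernel(sigma_pixels)
    out = volume.astype(float)
    for axis in range(3):
        out = np.apply_along_axis(
            lambda m: np.convolve(m, k, mode="same"), axis, out)
    return out
```

Because the Gaussian is separable, filtering a reconstructed SPECT volume costs three 1-D passes rather than one full 3-D convolution.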

  18. Apollo 17 Landing Site: A Cartographic Investigation of the Taurus-Littrow Valley Based on LROC NAC Imagery

    NASA Astrophysics Data System (ADS)

Haase, I.; Wählisch, M.; Gläser, P.; Oberst, J.; Robinson, M. S.

    2014-04-01

A Digital Terrain Model (DTM) of the Taurus-Littrow Valley with a 1.5 m/pixel resolution was derived from high-resolution stereo images of the Lunar Reconnaissance Orbiter Narrow Angle Camera (LROC NAC) [1]. It was used to create a controlled LROC NAC ortho-mosaic with a pixel size of 0.5 m on the ground. Covering the entire Apollo 17 exploration site, it allows accurate astronaut and surface-feature positions along the astronauts' traverses to be determined when historic Apollo surface photography is integrated into our analysis.

  19. Spacecraft camera image registration

    NASA Technical Reports Server (NTRS)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Chan, Fred N. T. (Inventor); Gamble, Donald W. (Inventor)

    1987-01-01

    A system for achieving spacecraft camera (1, 2) image registration comprises a portion external to the spacecraft and an image motion compensation system (IMCS) portion onboard the spacecraft. Within the IMCS, a computer (38) calculates an image registration compensation signal (60) which is sent to the scan control loops (84, 88, 94, 98) of the onboard cameras (1, 2). At the location external to the spacecraft, the long-term orbital and attitude perturbations on the spacecraft are modeled. Coefficients (K, A) from this model are periodically sent to the onboard computer (38) by means of a command unit (39). The coefficients (K, A) take into account observations of stars and landmarks made by the spacecraft cameras (1, 2) themselves. The computer (38) takes as inputs the updated coefficients (K, A) plus synchronization information indicating the mirror position (AZ, EL) of each of the spacecraft cameras (1, 2), operating mode, and starting and stopping status of the scan lines generated by these cameras (1, 2), and generates in response thereto the image registration compensation signal (60). The sources of periodic thermal errors on the spacecraft are discussed. The system is checked by calculating measurement residuals, the difference between the landmark and star locations predicted at the external location and the landmark and star locations as measured by the spacecraft cameras (1, 2).

  20. Selective-imaging camera

    NASA Astrophysics Data System (ADS)

    Szu, Harold; Hsu, Charles; Landa, Joseph; Cha, Jae H.; Krapels, Keith A.

    2015-05-01

    How can we design cameras that image selectively in Full Electro-Magnetic (FEM) spectra? Without selective imaging, we cannot use, for example, ordinary tourist cameras to see through fire, smoke, or other obscurants contributing to creating a Visually Degraded Environment (VDE). This paper addresses a possible new design of selective-imaging cameras at firmware level. The design is consistent with physics of the irreversible thermodynamics of Boltzmann's molecular entropy. It enables imaging in appropriate FEM spectra for sensing through the VDE, and displaying in color spectra for Human Visual System (HVS). We sense within the spectra the largest entropy value of obscurants such as fire, smoke, etc. Then we apply a smart firmware implementation of Blind Sources Separation (BSS) to separate all entropy sources associated with specific Kelvin temperatures. Finally, we recompose the scene using specific RGB colors constrained by the HVS, by up/down shifting Planck spectra at each pixel and time.

  1. Satellite camera image navigation

    NASA Technical Reports Server (NTRS)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Savides, John (Inventor); Hanson, Charles W. (Inventor)

    1987-01-01

Pixels within a satellite camera (1, 2) image are precisely located in terms of latitude and longitude on a celestial body, such as the earth, being imaged. A computer (60) on the earth generates models (40, 50) of the satellite's orbit and attitude, respectively. The orbit model (40) is generated from measurements of stars and landmarks taken by the camera (1, 2), and by range data. The orbit model (40) is an expression of the satellite's latitude and longitude at the subsatellite point, and of the altitude of the satellite, as a function of time, using as coefficients (K) the six Keplerian elements at epoch. The attitude model (50) is based upon star measurements taken by each camera (1, 2). The attitude model (50) is a set of expressions for the deviations in a set of mutually orthogonal reference optical axes (x, y, z) as a function of time, for each camera (1, 2). Measured data is fit into the models (40, 50) using a walking least squares fit algorithm. A transformation computer (66) transforms pixel coordinates as telemetered by the camera (1, 2) into earth latitude and longitude coordinates, using the orbit and attitude models (40, 50).
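The walking least-squares fit referenced in the patent refits model coefficients as star and landmark measurements accumulate. As a simplified illustration of the underlying model-fitting step only, here is an ordinary least-squares fit of an attitude-deviation-versus-time polynomial (function name, model form, and order are hypothetical, not the patent's):

```python
import numpy as np

def fit_attitude_model(times, deviations, order=2):
    """Least-squares fit of attitude deviation vs. time.  Builds the
    Vandermonde design matrix and solves the normal problem; returns
    polynomial coefficients, lowest order first."""
    A = np.vander(times, order + 1, increasing=True)
    coeffs, *_ = np.linalg.lstsq(A, deviations, rcond=None)
    return coeffs
```

A "walking" variant would repeat this fit over a sliding window of recent measurements so the coefficients track slow orbital and thermal perturbations.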

  2. Neutron Imaging Camera

    NASA Technical Reports Server (NTRS)

    Hunter, Stanley; deNolfo, G. A.; Barbier, L. M.; Link, J. T.; Son, S.; Floyd, S. R.; Guardala, N.; Skopec, M.; Stark, B.

    2008-01-01

The Neutron Imaging Camera (NIC) is based on the Three-dimensional Track Imager (3-DTI) technology developed at GSFC for gamma-ray astrophysics applications. The 3-DTI, a large-volume time-projection chamber, provides accurate, approximately 0.4 mm resolution, 3-D tracking of charged particles. The incident direction of fast neutrons, En > 0.5 MeV, is reconstructed from the momenta and energies of the proton and triton fragments resulting from ³He(n,p)³H interactions in the 3-DTI volume. The performance of the NIC in laboratory and accelerator tests is presented.

  3. Image Sensors Enhance Camera Technologies

    NASA Technical Reports Server (NTRS)

    2010-01-01

In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.

  4. Neutron Imaging Camera

    NASA Technical Reports Server (NTRS)

    Hunter, Stanley D.; DeNolfo, Georgia; Floyd, Sam; Krizmanic, John; Link, Jason; Son, Seunghee; Guardala, Noel; Skopec, Marlene; Stark, Robert

    2008-01-01

We describe the Neutron Imaging Camera (NIC) being developed for DTRA applications by NASA/GSFC and NSWC/Carderock. The NIC is based on the Three-dimensional Track Imager (3-DTI) technology developed at GSFC for gamma-ray astrophysics applications. The 3-DTI, a large-volume time-projection chamber, provides accurate, approximately 0.4 mm resolution, 3-D tracking of charged particles. The incident directions of fast neutrons, E_N > 0.5 MeV, are reconstructed from the momenta and energies of the proton and triton fragments resulting from ³He(n,p)³H interactions in the 3-DTI volume. We present angular and energy resolution performance of the NIC derived from accelerator tests.

  5. Insights into Pyroclastic Volcanism on the Moon with LROC Data

    NASA Astrophysics Data System (ADS)

    Gaddis, L. R.; Robinson, M. S.; Hawke, B. R.; Giguere, T.; Gustafson, O.; Keszthelyi, L. P.; Lawrence, S.; Stopar, J.; Jolliff, B. L.; Bell, J. F.; Garry, W. B.

    2009-12-01

Lunar pyroclastic deposits are high-priority targets for the Lunar Reconnaissance Orbiter Camera. Images from the Narrow Angle Camera (NAC; 0.5 m/pixel) and Wide Angle Camera (WAC; 7 bands, 100 m/p visible, 400 m/p ultraviolet) are being acquired. Studies of pyroclastic deposits with LRO data have the potential to resolve major questions concerning their distribution, composition, volume, eruptive styles, and role in early lunar volcanism. Analyses of LROC Commissioning and early Exploration Phase data focus on preliminary assessment of morphology and compositional variation among lunar pyroclastic deposits. At sites such as Rima Bode, Sulpicius Gallus, Aristarchus plateau, and Humorum, Alphonsus and Oppenheimer craters, LROC data are being used to search for evidence that may allow us to identify separate eruptive episodes from the same vent, pulses of magma intrusions and/or crustal dikes, and possible changes in composition and volatility of source materials with time. Preliminary observations of NAC data for possible pyroclastic vents reveal typically smooth, dark surfaces with variations in surface texture, roughness, and apparent albedo that may be related to differences in eruption mechanism and/or duration. Evidence of layering at some sites suggests low-volume eruptions or multiple events. Further analyses of LROC data will allow identification of intra-deposit compositional variations, possible juvenile components, and evaluation of the distributions and relative amounts of juvenile vs. host-rock components. Combined NAC and WAC data also will enable us to characterize spatial extents, distributions, and compositions of pyroclastic deposits and relate them to other sampled glass types and possibly to their associated basalts.
WAC color data will be used to characterize titanium contents of pyroclastic deposits, to map the diversity of effusive and pyroclastic units with variable titanium contents that are currently not recognized, and to identify which pyroclastic deposits are the best sources of titanium and associated volatile elements. Using NAC stereo data, meter-scale topographic models of the surface will allow us to better constrain emplacement and distribution of possible juvenile materials, the geometry of small pyroclastic eruptions, and models of their eruption.

  6. LROC WAC Ultraviolet Reflectance of the Moon

    NASA Astrophysics Data System (ADS)

    Robinson, M. S.; Denevi, B. W.; Sato, H.; Hapke, B. W.; Hawke, B. R.

    2011-10-01

Earth-based color filter photography, first acquired in the 1960s, showed color differences related to morphologic boundaries on the Moon [1]. These color units were interpreted to indicate compositional differences, thought to be the result of variations in titanium content [1]. Later it was shown that iron abundance (FeO) also plays a dominant role in controlling color in lunar soils [2]. Equally important to a lunar soil's reflectance properties (albedo and color) is its maturity [3]. Maturity is a measure of the state of alteration of surface materials due to sputtering and high-velocity micrometeorite impacts over time [3]. The Clementine (CL) spacecraft provided the first global and digital visible-through-infrared observations of the Moon [4]. This pioneering dataset allowed significant advances in our understanding of compositional (FeO and TiO2) and maturation differences across the Moon [5,6]. Later, the Lunar Prospector (LP) gamma-ray and neutron experiments provided the first global, albeit low-resolution, elemental maps [7]. Newly acquired Moon Mineralogy Mapper hyperspectral measurements are now providing the means to better characterize mineralogic variations on a global scale [8]. Our knowledge of ultraviolet color differences between geologic units is limited to low-resolution (km-scale) nearside telescopic observations, high-resolution Hubble Space Telescope images of three small areas [9], and laboratory analyses of lunar materials [10,11]. These previous studies detailed color differences in the UV (100 to 400 nm) related to composition and physical state. HST UV (250 nm) and visible (502 nm) color differences were found to correlate with TiO2 and were relatively insensitive to maturity effects seen in visible ratios (CL) [9]. These two results led to the conclusion that improvements in TiO2 estimation accuracy over existing methods may be possible through a simple UV/visible ratio [9].
The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) provides the first global lunar ultraviolet-through-visible (321 nm to 689 nm) multispectral observations [12]. The WAC is a seven-color push-frame imager with nominal resolutions of 400 m (321, 360 nm) and 100 m (415, 566, 604, 643, 689 nm). Due to its wide field of view (60° in color mode), the phase angle within a single line varies by 30°, thus requiring the derivation of a precise photometric characterization [13] before any interpretations of lunar reflectance properties can be made. The current WAC photometric correction relies on multiple WAC observations of the same area over a broad range of phase angles and typically results in relative corrections good to a few percent [13].
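The photometric correction described above solves for reflectance behavior versus phase angle from repeat observations of the same terrain. As a greatly simplified stand-in for the Hapke-based correction cited in [13], one can fit a low-order polynomial to log-reflectance versus phase angle and normalize each observation to a standard geometry (all function names are hypothetical; this is an illustrative sketch, not the WAC pipeline):

```python
import numpy as np

def fit_phase_curve(phase_deg, reflectance, order=3):
    """Fit log-reflectance vs. phase angle with a low-order polynomial,
    using repeat observations of the same area."""
    return np.polyfit(phase_deg, np.log(reflectance), order)

def normalize_to_phase(reflectance, phase_deg, coeffs, standard_phase=30.0):
    """Normalize an observed reflectance to the standard phase angle by
    the ratio of modeled reflectances at the two geometries."""
    corr = np.exp(np.polyval(coeffs, standard_phase)
                  - np.polyval(coeffs, phase_deg))
    return reflectance * corr
```

Spatially resolving such fits (the abstract mentions 1-degree tiles for the Hapke solution) is what makes the few-percent relative correction possible.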

  7. Height-to-diameter ratios of moon rocks from analysis of Lunokhod-1 and -2 and Apollo 11-17 panoramas and LROC NAC images

    NASA Astrophysics Data System (ADS)

    Demidov, N. E.; Basilevsky, A. T.

    2014-09-01

An analysis is performed of 91 panoramic photographs taken by Lunokhod-1 and -2, 17 panoramic images composed of photographs taken by Apollo 11-15 astronauts, and six LROC NAC photographs. The results are used to measure the height-to-visible-diameter (h/d) and height-to-maximum-diameter (h/D) ratios for lunar rocks at three highland and three mare sites on the Moon. The average h/d and h/D for the six sites are found to be indistinguishable at a significance level of 95%. Therefore, our estimates for the average h/d = 0.6 ± 0.03 and h/D = 0.54 ± 0.03 on the basis of 445 rocks are applicable for the entire Moon's surface. Rounding off, an h/D ratio of ≈0.5 is suggested for engineering models of the lunar surface. The ratios between the long, medium, and short axes of the lunar rocks are found to be similar to those obtained in high-velocity impact experiments for different materials. It is concluded, therefore, that the degree of penetration of the studied lunar rocks into the regolith is negligible, and micrometeorite abrasion and other factors do not dominate in the evolution of the shape of lunar rocks.
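The quoted averages with their ± bounds (e.g. h/d = 0.6 ± 0.03 from 445 rocks) rest on standard sample statistics. A minimal sketch using the normal approximation (not necessarily the authors' exact significance test):

```python
import math

def mean_and_ci95(values):
    """Sample mean and approximate 95% confidence half-width
    (normal approximation, 1.96 * standard error)."""
    n = len(values)
    mean = sum(values) / n
    # Unbiased sample variance (n - 1 in the denominator).
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    half_width = 1.96 * math.sqrt(var / n)
    return mean, half_width
```

Two sites' ratios are "indistinguishable at the 95% level" when their confidence intervals computed this way overlap.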

  8. Marius Hills: Surface Roughness from LROC and Mini-RF

    NASA Astrophysics Data System (ADS)

    Lawrence, S.; Hawke, B. R.; Bussey, B.; Stopar, J. D.; Denevi, B.; Robinson, M.; Tran, T.

    2010-12-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Team is collecting hundreds of high-resolution (0.5 m/pixel) Narrow Angle Camera (NAC) images of lunar volcanic constructs (domes, “cones”, and associated features) [1,2]. Marius Hills represents the largest concentration of volcanic features on the Moon and is a high-priority target for future exploration [3,4]. NAC images of this region provide new insights into the morphology and geology of specific features at the meter scale, including lava flow fronts, tectonic features, layers, and topography (using LROC stereo imagery) [2]. Here, we report initial results from Mini-RF and LROC collaborative studies of the Marius Hills. Mini-RF uses a hybrid polarimetric architecture to measure surface backscatter characteristics and can acquire data in one of two radar bands, S (12 cm) or X (4 cm) [5]. The spatial resolution of Mini-RF (15 m/pixel) enables correlation of features observed in NAC images to Mini-RF data. Mini-RF S-Band zoom-mode data and daughter products, such as circular polarization ratio (CPR), were directly compared to NAC images. Mini-RF S-Band radar images reveal enhanced radar backscatter associated with volcanic constructs in the Marius Hills region. Mini-RF data show that Marius Hills volcanic constructs have enhanced average CPR values (0.5-0.7) compared to the CPR values of the surrounding mare (~0.4). This result is consistent with the conclusions of [6], and implies that the lava flows comprising the domes in this region are blocky. To quantify the surface roughness [e.g., 6,7] block populations associated with specific geologic features in the Marius Hills region are being digitized from NAC images. Only blocks that can be unambiguously identified (>1 m diameter) are included in the digitization process, producing counts and size estimates of the block population. High block abundances occur mainly at the distal ends of lava flows. 
The average size of these blocks is 9 m, and 50% of observed blocks are between 9-12 m in diameter. These blocks are not associated with impact craters and have at most a thin layer of regolith. There is minimal visible evidence for downslope movement. Relatively high block abundances are also seen on the summits of steep-sided asymmetrical positive-relief features (“cones”) atop low-sided domes. Digitization efforts will continue as we study the block populations of different geologic features in the Marius Hills region and correlate the results with Mini-RF data, which will provide new information about the emplacement of volcanic features in the region. [1] J.D. Stopar et al., LPI Contribution 1483 (2009) 93-94. [2] S.J. Lawrence et al. (2010) LPSC 41 #1906. [2] S.J. Lawrence et al. (2010) LPSC 41 #2689. [3] C. Coombs & B.R. Hawke (1992) 2nd Proc. Lun. Bases & Space Act. 21st Cent., pp. 219-229. [4] J. Gruener and B. Joosten (2009) LPI Contributions 1483, 50-51. [5] D.B.J. Bussey et al. (2010) LPSC 41 #2319. [6] B.A. Campbell et al. (2009) JGR-Planets, 114, 01001. [7] S.W. Anderson et al. (1998) GSA Bull, 110, 1258-1267.
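Mini-RF's hybrid-polarimetric architecture reduces the received echo to Stokes parameters, from which CPR is derived. A one-line sketch, assuming the common hybrid-pol convention in which S1 is total received power and S4 the signed circular component (this formula is an assumption on our part; the abstract does not state it):

```python
def cpr_from_stokes(s1, s4):
    """Circular polarization ratio from Stokes parameters, assuming
    CPR = (S1 - S4) / (S1 + S4) for circular-polarization transmit.
    Blocky, rough surfaces push CPR up (0.5-0.7 at the Marius Hills
    constructs vs. ~0.4 on the surrounding mare, per the abstract)."""
    return (s1 - s4) / (s1 + s4)
```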

  9. Multispectral image dissector camera system

    NASA Technical Reports Server (NTRS)

    Johnson, B. L.

    1972-01-01

A sensor system which provides registered high-resolution multispectral images from a single sensor with no mechanical moving parts is reported, and the operation of an image dissector camera (IDC) is described. An earth scene 100 nautical miles wide is imaged through a single lens onto a photocathode surface containing three spectral filters, thereby producing three separate spectral signatures on the photocathode surface. An electron image is formed, accelerated, focused, and electromagnetically deflected across an image plane which contains three sampling apertures, behind which are located three electron multipliers. The IDC system uses electromagnetic deflection for cross-track scanning and spacecraft orbit motion for along-track scanning, thus eliminating the need for a mechanical scanning mirror.

  10. Uncertainty Analysis of LROC NAC Derived Elevation Models

    NASA Astrophysics Data System (ADS)

    Burns, K.; Yates, D. G.; Speyerer, E.; Robinson, M. S.

    2012-12-01

One of the primary objectives of the Lunar Reconnaissance Orbiter Camera (LROC) [1] is to gather stereo observations with the Narrow Angle Camera (NAC) to generate digital elevation models (DEMs). From an altitude of 50 km, the NAC acquires images with a pixel scale of 0.5 meters, and a dual-NAC observation covers approximately 5 km cross-track by 25 km down-track. This low altitude was common from September 2009 to December 2011. Images acquired during the commissioning phase and those acquired from the fixed orbit (after 11 December 2011) have pixel scales that range from 0.35 meters at the south pole to 2 meters at the north pole. Altimetric observations obtained by the Lunar Orbiter Laser Altimeter (LOLA) provide measurements accurate to ±0.1 m between the spacecraft and the surface [2]. However, uncertainties in the spacecraft positioning can result in offsets (±20 m) between altimeter tracks over many orbits. The LROC team is currently developing a tool to automatically register altimetric observations to NAC DEMs [3]. Using a generalized pattern search (GPS) algorithm, the new automatic registration adjusts the spacecraft position and pointing information during times when NAC images, as well as LOLA measurements, of the same region are acquired to provide an absolute reference frame for the DEM. This information is then imported into SOCET SET to aid in creating controlled NAC DEMs. For every DEM, a figure of merit (FOM) map is generated using SOCET SET software. This is a valuable tool for determining the relative accuracy of a specific pixel in a DEM. Each pixel in a FOM map is given a value indicating its "quality": whether the specific pixel was shadowed, saturated, suspicious, interpolated/extrapolated, or successfully correlated. The overall quality of a NAC DEM is a function of both the absolute and relative accuracies. LOLA altimetry provides the most accurate absolute geodetic reference frame with which the NAC DEMs can be compared.
Offsets between LOLA profiles and NAC DEMs are used to quantify the absolute accuracy. Small lateral movements in the LOLA points coupled with large changes in topography contribute to sizeable offsets between the datasets. The steep topography of Lichtenberg Crater provides an example of the offsets in the LOLA data. Ten tracks that cross the region of interest were used to calculate the offset, with a root mean square (RMS) error of 9.67 m, an average error of 7.02 m, and a standard deviation of 9.61 m. Large areas (>375 sq. km) covered by a mosaic of NAC DEMs were compared to the Wide Angle Camera (WAC) derived Global Lunar DTM 100 m topographic model (GLD100) [4]. The GLD100 has a pixel scale of 100 m; therefore, the NAC DEMs were downsampled to calculate the offsets between the two datasets. When comparing NAC DEMs to WAC DEMs, the vertical offsets were as follows [site name (average offset in meters, standard deviation in meters)]: Lichtenberg Crater (-7.74, 20.49), Giordano Bruno (-5.31, 28.80), Hortensius Domes (-3.52, 16.00), and Reiner Gamma (-0.99, 14.11). Resources: [1] Robinson et al. (2010) Space Sci. Rev. [2] Smith et al. (2010) Space Sci. Rev. [3] Speyerer et al. (2012) European Lunar Symp. [4] Scholten et al. (2012) JGR-Planets.
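The offset statistics quoted above (RMS, average, and standard deviation of LOLA-minus-DEM elevations) follow from elementary formulas. A minimal sketch (function name hypothetical):

```python
import math

def offset_stats(lola_z, dem_z):
    """RMS, mean, and (population) standard deviation of elevation
    offsets between altimeter points and DEM values sampled at the
    same ground locations."""
    d = [a - b for a, b in zip(lola_z, dem_z)]
    n = len(d)
    mean = sum(d) / n
    rms = math.sqrt(sum(x * x for x in d) / n)
    std = math.sqrt(sum((x - mean) ** 2 for x in d) / n)
    return rms, mean, std
```

Note that RMS mixes bias and scatter: rms² = mean² + std², which is consistent with the Lichtenberg figures quoted above (9.67² ≈ 7.02² + 9.61² within rounding... in fact 7.02² + 9.61² > 9.67², so those three numbers were presumably computed over slightly different point sets or with sample statistics).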

  11. Image dissector camera system study

    NASA Technical Reports Server (NTRS)

    Howell, L.

    1984-01-01

Various aspects of a rendezvous and docking system using an image dissector detector, as compared to a GaAs detector, were discussed. Investigation into a gimbaled scanning system is also covered, and the measured video response curves from the image dissector camera are presented. Rendezvous will occur at ranges greater than 100 meters. The maximum range considered was 1000 meters. During docking, the range, range-rate, angle, and angle-rate to each reflector on the satellite must be measured. Docking range will be from 3 to 100 meters. The system consists of a CW laser diode transmitter and an image dissector receiver. The transmitter beam is amplitude modulated with three sine-wave tones for ranging. The beam is coaxially combined with the receiver beam. Mechanical deflection of the transmitter beam, ±10 degrees in both X and Y, can be accomplished before or after it is combined with the receiver beam. The receiver will have a field-of-view (FOV) of 20 degrees and an instantaneous field-of-view (IFOV) of two milliradians (mrad) and will be electronically scanned in the image dissector. The increase in performance obtained from the GaAs photocathode is not needed to meet the present performance requirements.

  12. Morphology and Composition of Localized Lunar Dark Mantle Deposits With LROC Data

    NASA Astrophysics Data System (ADS)

    Gustafson, O.; Bell, J. F.; Gaddis, L. R.; Hawke, B. R.; Robinson, M. S.; LROC Science Team

    2010-12-01

    Clementine color (ultraviolet, visible or UVVIS) and Lunar Reconnaissance Orbiter (LRO) Wide Angle (WAC) and Narrow Angle (NAC) camera data provide the means to investigate localized lunar dark-mantle deposits (DMDs) of potential pyroclastic origin. Our goals are to (1) examine the morphology and physical characteristics of these deposits with LROC WAC and NAC data; (2) extend methods used in earlier studies of lunar DMDs with Clementine spectral reflectance (CSR) data; (3) use LRO WAC multispectral data to complement and extend the CSR data for compositional analyses; and (4) apply these results to identify the likely mode of emplacement and study the diversity of compositions among these deposits. Pyroclastic deposits have been recognized all across the Moon, identified by their low albedo, smooth texture, and mantling relationship to underlying features. Gaddis et al. (2003) presented a compositional analysis of 75 potential lunar pyroclastic deposits (LPDs) based on CSR measurements. New LRO camera (LROC) data permit more extensive analyses of such deposits than previously possible. Our study began with six sites on the southeastern limb of the Moon that contain nine of the cataloged 75 potential pyroclastic deposits: Humboldt (4 deposits), Petavius, Barnard, Abel B, Abel C, and Titius. Our analysis found that some of the DMDs exhibit qualities characteristic of fluid emplacement, such as flat surfaces, sharp margins, embaying relationships, and flow textures. We conclude that the localized DMDs are a complex class of features, many of which may have formed by a combination of effusive and pyroclastic emplacement mechanisms. We have extended this analysis to include additional localized DMDs from the catalog of 75 potential pyroclastic deposits. 
We have examined high resolution (up to 0.5 m/p) NAC images as they become available to assess the mode of emplacement of the deposits, locate potential volcanic vents, and assess physical characteristics of the DMDs such as thickness, roughness, and rock abundance. Within and around each DMD, the Clementine UVVIS multispectral mosaic (100 m/p, 5 bands at 415, 750, 900, 950, and 1000 nm) and LROC WAC multispectral image cubes (75 to 400 m/p, 7 bands at 320, 360, 415, 565, 605, 645, and 690 nm) have been used to extract spectral reflectance data. Spectral ratio plots were prepared to compare deposits and draw conclusions regarding compositional differences, such as mafic mineral or titanium content and distribution, both within and between DMDs. The result of the study will be an improved classification of these deposits in terms of emplacement mechanisms and composition, including identifying compositional affinities among DMDs and between DMDs and other volcanic deposits.
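The spectral-ratio comparisons described above come down to dividing one band image by another within a co-registered multispectral cube. A minimal sketch, assuming a cube indexed [band, row, col] with a matching list of band centers in nm (names hypothetical, not the authors' tooling):

```python
import numpy as np

def band_ratio(cube, bands_nm, num_nm, den_nm):
    """Ratio image between two bands of a multispectral cube,
    e.g. a UV/visible ratio such as 320/415 nm for titanium sensitivity.
    cube shape: [band, row, col]; bands_nm lists each band's center."""
    i, j = bands_nm.index(num_nm), bands_nm.index(den_nm)
    return cube[i] / cube[j]
```

Ratioing suppresses the common albedo and illumination terms shared by both bands, which is why ratio plots isolate compositional differences between and within DMDs.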

  13. An assessment of surface roughness estimates from lunar laser altimetry pulse-widths for the Moon from LOLA using LROC narrow-angle stereo DTMs.

    NASA Astrophysics Data System (ADS)

    Muller, Jan-Peter; Poole, William

    2013-04-01

    Neumann et al. [1] proposed that laser altimetry pulse-widths could be employed to derive "within-footprint" surface roughness, as opposed to surface roughness estimated between laser altimetry pierce-points, such as the example for Mars [2] and, more recently, from the 4-pointed star-shaped LOLA (Lunar Reconnaissance Orbiter Laser Altimeter) onboard NASA's LRO [3]. Since 2009, LOLA has been collecting extensive global laser altimetry data with a 5-m footprint and ~25 m between the 5 points in a star shape. In order to assess how accurately surface roughness (defined as simple RMS after slope correction) derived from LROC matches surface roughness derived from LOLA footprints, publicly released LROC-NA (LRO Camera Narrow Angle) 1-m Digital Terrain Models (DTMs) were employed to measure the surface roughness directly within each 5-m footprint. A set of 20 LROC-NA DTMs were examined. Initially, the match-up between the LOLA and LROC-NA orthorectified images (ORIs) was assessed visually to ensure that the co-registration is better than the LOLA footprint resolution. For each LOLA footprint, the pulse-width geolocation was then retrieved and used to "cookie-cut" the surface roughness and slopes derived from the LROC-NA DTMs. The investigation, which includes data from a variety of landforms, shows little, if any, correlation between surface roughness estimated from DTMs and LOLA pulse-widths at the sub-footprint scale. In fact, a perceptible correlation between LOLA and LROC DTMs appears only at baselines of 40-60 m for surface roughness and 20 m for slopes. [1] Neumann et al. Mars Orbiter Laser Altimeter pulse width measurements and footprint-scale roughness. Geophysical Research Letters (2003) vol. 30 (11), paper 1561. DOI: 10.1029/2003GL017048 [2] Kreslavsky and Head. Kilometer-scale roughness of Mars: results from MOLA data analysis. J Geophys Res (2000) vol. 105 (E11) pp. 26695-26711. [3] Rosenburg et al. 
Global surface slopes and roughness of the Moon from the Lunar Orbiter Laser Altimeter. Journal of Geophysical Research (2011) vol. 116, paper E02001. DOI: 10.1029/2010JE003716 [4] Chin et al. Lunar Reconnaissance Orbiter Overview: The Instrument Suite and Mission. Space Science Reviews (2007) vol. 129 (4) pp. 391-419
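
    The "simple RMS after slope correction" roughness used in this comparison can be illustrated with a short sketch; the footprint samples below are synthetic, and the plane-fit-plus-residual approach is a generic implementation, not the authors' pipeline:

```python
import numpy as np

def roughness_rms_detrended(z, x, y):
    """Within-footprint RMS roughness after slope correction: fit a
    best-fit plane z ~ a*x + b*y + c by least squares and take the RMS
    of the residuals."""
    A = np.column_stack([x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return float(np.sqrt(np.mean((z - A @ coeffs) ** 2)))

# Synthetic 5 m x 5 m footprint sampled from a 1 m DTM: a tilted plane
# plus small-scale relief; detrending removes the tilt, leaving the relief.
rng = np.random.default_rng(0)
xx, yy = np.meshgrid(np.arange(5.0), np.arange(5.0))
x, y = xx.ravel(), yy.ravel()
z_plane = 0.3 * x - 0.1 * y + 10.0            # pure slope -> roughness ~ 0
z_rough = z_plane + rng.normal(0.0, 0.2, x.size)
print(roughness_rms_detrended(z_plane, x, y))
print(roughness_rms_detrended(z_rough, x, y))
```

    Applying this to the 1-m DTM pixels falling inside each geolocated LOLA footprint gives the per-footprint roughness values that the study compares against pulse-widths.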

  14. AIM: Ames Imaging Module Spacecraft Camera

    NASA Technical Reports Server (NTRS)

    Thompson, Sarah

    2015-01-01

    The AIM camera is a small, lightweight, low-power, low-cost imaging system developed at NASA Ames. Though it has imaging capabilities similar to those of $1M-plus spacecraft cameras, it does so at a fraction of the mass, power, and cost.

  15. LROC NAC Digital Elevation Model of Gruithuisen Gamma

    NASA Astrophysics Data System (ADS)

    Braden, S.; Tran, T. N.; Robinson, M. S.

    2009-12-01

    The Gruithuisen Domes have long been of interest as examples of non-mare volcanism [1]. Their form suggests extrusion of silica-rich magmas, possibly dating to 3.7-3.85 Ga (around the same time as the Iridum event); the domes were subsequently embayed by mare [2,3]. Non-mare volcanism is indicated by spectral features known as red spots, which have (a) high albedo, (b) strong absorption in the ultraviolet, and (c) a wide range of morphologies [4,5,6]. The composition of red spot domes is still unknown, but dacitic or rhyolitic KREEP-rich compositions [5] and mature, low-iron, low-titanium agglutinate-rich soils [7] have been suggested. The existence of non-mare volcanism has major implications for the thermal history and crustal evolution of the Moon. A new digital elevation model (DEM), derived from stereo image pairs acquired with the Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC), allows detailed investigation of the morphology and thus origin of Mons Gruithuisen Gamma (36.6°N, 40.5°W). The 10 meter per pixel DEM shows relief of ~1500 meters from the summit plateau of Gruithuisen Gamma to the nearby mare surface. This measurement is close to previous estimates of over 1200 meters from Apollo-era images [4]. Previous estimates also suggested that the overall slopes ranged from 15-30° [7]. Radial profiles (n=25) across the eastern two-thirds of the Gruithuisen Gamma DEM show that the overall slope is 17-18° along the north- and northeastern-facing slopes, 14° along the eastern-most edge, 12° on the side facing the contact of the dome material and highlands material, and 11° on the directly southern-facing slope. The north-south diameter of the dome is ~24 km and the east-west diameter is ~18 km. The textures on each slope are remarkably similar and distinct from the highlands and crater slopes, with irregular furrows oriented down-slope. 
The same furrowed texture is not seen on mare domes, which are generally much smoother, flatter, and smaller than red spot domes [8]. Two ~2 km diameter craters on Gamma have likely exposed fresh dome material from below the surface texture, as evidenced by boulders visible in the ejecta. Overall, Gruithuisen Gamma has asymmetric slope morphology, but uniform texture. Topographic analysis and models of rheological properties with data from new LROC DEMs may aid in constraining the composition and origin of Gruithuisen Gamma. [1] Scott and Eggleton (1973) I-805, USGS. [2] Wagner, R.J., et al. (2002) LPSC #1619 [3] Wagner, R.J., et al. (2002) JGR. 104. [4] Chevrel, S.D., Pinet, P.C., and Head J.W. (1999) JGR. 104, 16515-16529 [5] Malin, M. (1974) Earth Planet. Sci. Lett. 21, 331 [6] Whitaker, E.A. (1972) Moon, 4, 348. [7] Head, J.W. and McCord, T.B. (1978) Science. 199, 1433-1436 [8] Head, J.W. and Gifford, A. (1980) Moon and Planets.
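
    The overall slopes quoted above follow directly from relief over horizontal distance along each radial profile; a minimal sketch with a synthetic profile, assuming a simple rise-over-run definition of overall slope:

```python
import numpy as np

def mean_profile_slope_deg(elev, spacing_m):
    """Overall slope (degrees) along a radial elevation profile,
    from total rise over total run; a simplistic stand-in for the
    profile analysis described in the abstract."""
    rise = abs(elev[0] - elev[-1])      # summit minus base
    run = spacing_m * (len(elev) - 1)
    return float(np.degrees(np.arctan(rise / run)))

# ~1500 m of relief over ~5 km of horizontal distance, sampled at
# the DEM's 10 m spacing, gives an overall slope of ~16.7 degrees,
# consistent with the 17-18 degree slopes quoted for Gruithuisen Gamma.
profile = np.linspace(1500.0, 0.0, 501)  # 10 m spacing
print(round(mean_profile_slope_deg(profile, 10.0), 1))  # 16.7
```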

  16. Exploring the Moon with the Lunar Reconnaissance Orbiter Camera

    NASA Astrophysics Data System (ADS)

    Robinson, M. S.; Speyerer, E. J.; Boyd, A.; Waller, D.; Wagner, R. V.; Burns, K. N.

    2012-08-01

    The Lunar Reconnaissance Orbiter Camera (LROC) consists of three imaging systems: a Wide Angle Camera (WAC) and two Narrow Angle Cameras (NACs). Since entering lunar orbit in June of 2009, LROC has collected over 700,000 images. A subset of WAC images were reduced into a global morphologic basemap, a near-global digital elevation model, and multitemporal movie sequences that characterize illumination conditions of the polar regions. In addition, NAC observations were reduced to meter scale maps and digital elevation models of select regions of interest. These Reduced Data Record (RDR) products were publicly released through NASA's Planetary Data System to aid scientists and engineers in planning future lunar missions and addressing key science questions.

  17. Coherent infrared imaging camera (CIRIC)

    SciTech Connect

    Hutchinson, D.P.; Simpson, M.L.; Bennett, C.A.; Richards, R.K.; Emery, M.S.; Crutcher, R.I.; Sitter, D.N. Jr.; Wachter, E.A.; Huston, M.A.

    1995-07-01

    New developments in 2-D, wide-bandwidth HgCdTe (MCT) and GaAs quantum-well infrared photodetectors (QWIP) coupled with Monolithic Microwave Integrated Circuit (MMIC) technology are now making focal plane array coherent infrared (IR) cameras viable. Unlike conventional IR cameras which provide only thermal data about a scene or target, a coherent camera based on optical heterodyne interferometry will also provide spectral and range information. Each pixel of the camera, consisting of a single photo-sensitive heterodyne mixer followed by an intermediate frequency amplifier and illuminated by a separate local oscillator beam, constitutes a complete optical heterodyne receiver. Applications of coherent IR cameras are numerous and include target surveillance, range detection, chemical plume evolution, monitoring stack plume emissions, and wind shear detection.

  18. LRO Camera Imaging of Potential Landing Sites in the South Pole-Aitken Basin

    NASA Astrophysics Data System (ADS)

    Jolliff, B. L.; Wiseman, S. M.; Gibson, K. E.; Lauber, C.; Robinson, M.; Gaddis, L. R.; Scholten, F.; Oberst, J.; LROC Science; Operations Team

    2010-12-01

    We show results of WAC (Wide Angle Camera) and NAC (Narrow Angle Camera) imaging of candidate landing sites within the South Pole-Aitken (SPA) basin of the Moon obtained by the Lunar Reconnaissance Orbiter during the first full year of operation. These images enable a greatly improved delineation of geologic units, determination of unit thicknesses and stratigraphy, and detailed surface characterization that has not been possible with previous data. WAC imaging encompasses the entire SPA basin, located within an area ranging from ~ 130-250 degrees east longitude and ~15 degrees south latitude to the South Pole, at different incidence angles, with the specific range of incidence dependent on latitude. The WAC images show morphology and surface detail at better than 100 m per pixel, with spatial coverage and quality unmatched by previous data sets. NAC images reveal details at the sub-meter pixel scale that enable new ways to evaluate the origins and stratigraphy of deposits. Key among new results is the capability to discern extents of ancient volcanic deposits that are covered by later crater ejecta (cryptomare) [see Petro et al., this conference] using new, complementary color data from Kaguya and Chandrayaan-1. Digital topographic models derived from WAC and NAC geometric stereo coverage show broad intercrater-plains areas where slopes are acceptably low for high-probability safe landing [see Archinal et al., this conference]. NAC images allow mapping and measurement of small, fresh craters that excavated boulders and thus provide information on surface roughness and depth to bedrock beneath regolith and plains deposits. We use these data to estimate deposit thickness in areas of interest for landing and potential sample collection to better understand the possible provenance of samples. 
Also, small regions marked by fresh impact craters and their associated boulder fields are readily identified by their bright ejecta patterns and marked as lander keep-out zones. We will show examples of LROC data including those for Constellation sites on the SPA rim and interior, a site between Bose and Alder Craters, sites east of Bhabha Crater, and sites on and near the “Mafic Mound” [see Pieters et al., this conference]. Together the LROC data and complementary products provide essential information for ensuring identification of safe landing and sampling sites within SPA basin that has never before been available for a planetary mission.

  19. Single-Camera Panoramic-Imaging Systems

    NASA Technical Reports Server (NTRS)

    Lindner, Jeffrey L.; Gilbert, John

    2007-01-01

    Panoramic detection systems (PDSs) are developmental video monitoring and image-data processing systems that, as their name indicates, acquire panoramic views. More specifically, a PDS acquires images from an approximately cylindrical field of view that surrounds an observation platform. The main subsystems and components of a basic PDS are a charge-coupled- device (CCD) video camera and lens, transfer optics, a panoramic imaging optic, a mounting cylinder, and an image-data-processing computer. The panoramic imaging optic is what makes it possible for the single video camera to image the complete cylindrical field of view; in order to image the same scene without the benefit of the panoramic imaging optic, it would be necessary to use multiple conventional video cameras, which have relatively narrow fields of view.

  20. Lroc Observations of Permanently Shadowed Regions: Seeing into the Dark

    NASA Astrophysics Data System (ADS)

    Koeber, S. D.; Robinson, M. S.

    2013-12-01

    Permanently shadowed regions (PSRs) near the lunar poles that receive secondary illumination from nearby Sun-facing slopes were imaged by the Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Cameras (NAC). Secondary lighting is typically optimal in polar areas around the respective solstices and when the LRO orbit is nearly coincident with the sub-solar point (low spacecraft beta angles). NAC PSR images provide the means to search for evidence of surface frosts and unusual morphologies from ice-rich regolith, and aid in planning potential landing sites for future in-situ exploration. Secondary illumination imaging in PSRs requires NAC integration times typically more than ten times greater than nominal imaging. The increased exposure time results in downtrack smear that decreases the spatial resolution of the NAC PSR images. Most long-exposure NAC images of PSRs were acquired with exposure times of 24.2 ms (1-m by 40-m pixels, resampled to 20 m) and 12 ms (1-m by 20-m pixels, resampled to 10 m). The initial campaign to acquire long-exposure NAC images of PSRs in the north polar region ran from February 2013 to April 2013. Relative to the south polar region, PSRs near the north pole are generally smaller (D<24 km) and located in simple craters. Long-exposure NAC images of PSRs in simple craters are often well illuminated by secondary light reflected from Sun-facing crater slopes during the northern summer solstice, allowing many PSRs to be imaged with the shorter exposure time of 12 ms (resampled to 10 m). With the exception of some craters within Peary crater, most northern PSRs with diameters >6 km were successfully imaged (e.g., Whipple, Hermite A, and Rozhestvenskiy U). The third south polar PSR campaign began in April 2013 and will continue until October 2013. The third campaign will expand previous NAC coverage of PSRs and follow up on discoveries with new images of higher signal-to-noise ratio (SNR), higher resolution, and varying secondary illumination conditions. 
Utilizing previous-campaign images and the Sun's position, this campaign takes an individualized approach to targeting each crater. Secondary lighting within the PSRs, though somewhat diffuse, arrives at low incidence angles; coupled with nadir NAC imaging, this results in large phase angles. Such conditions tend to reduce albedo contrasts, complicating identification of patchy frost or ice deposits. Within the long-exposure PSR images, a few small craters (D<200 m) with highly reflective ejecta blankets have been identified and interpreted as small fresh impact craters. Sylvester N and Main L are Copernican-age craters with PSRs; NAC images reveal debris flows, boulders, and morphologically fresh interior walls indicative of their young age. The identification of albedo anomalies associated with these fresh craters and debris flows indicates that strong albedo contrasts (~2x) associated with small fresh impact craters can be distinguished in PSRs. Lunar highland material has an albedo of ~0.2, while pure water frost has an albedo of ~0.9. If features in PSRs have an albedo similar to lunar highlands, significant surface frost deposits could produce detectable reflective anomalies in the NAC images. However, no reflective anomalies attributable to frost have thus far been identified in PSRs.
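
    The connection between exposure time and downtrack pixel size can be checked with a one-line estimate; the ~1.6 km/s LRO ground speed used here is an assumed round number, not a value from the abstract:

```python
# Downtrack smear: during a long exposure, spacecraft motion stretches
# each pixel along-track by roughly ground speed times exposure time.
ground_speed_m_s = 1600.0  # assumed approximate LRO ground speed

def downtrack_pixel_length_m(exposure_s):
    return ground_speed_m_s * exposure_s

print(round(downtrack_pixel_length_m(0.0242), 1))  # 38.7 -> the ~40 m
                                                   # pixels of the 24.2-ms mode
print(round(downtrack_pixel_length_m(0.012), 1))   # 19.2 -> ~20 m for 12 ms
```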

  1. Polarized light imaging with a handheld camera

    NASA Astrophysics Data System (ADS)

    Ramella-Roman, Jessica C.; Lee, Kenneth; Prahl, Scott A.; Jacques, Steven L.

    2003-10-01

    Polarized light imaging can facilitate clinical mapping of skin cancer margins and can potentially guide clinical excision. A real-time handheld polarized-light system was built to image skin lesions in the clinic. The system consisted of two 8-bit CCD cameras (Camera 1 and Camera 2); a source mounted on the camera assembly illuminated the patient's skin with light polarized parallel to the source-patient-camera plane. The light reflected from the patient was collected with an objective lens mounted on the beam splitter and divided into a horizontal (H) and a vertical (V) component. The H component was collected by Camera 1, and the V component was collected by Camera 2. A new image was generated from the polarization ratio (H - V)/(H + V) and displayed. This image is sensitive to the superficial skin layer, and some early clinical examples are presented. A web version of this paper is available at the following web site: optics.sgu.ru/SFM/2002/internet/Jessica/.
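
    The displayed quantity is the normalized polarization ratio; a minimal sketch with synthetic frames (the small epsilon guard against division by zero is my addition, not part of the described system):

```python
import numpy as np

def polarization_ratio_image(H, V, eps=1e-9):
    """Polarization ratio (H - V) / (H + V): co-polarized light from the
    superficial layer dominates H, while multiply scattered (depolarized)
    light splits evenly between H and V, so the ratio emphasizes the
    superficial skin layer."""
    H = H.astype(float)
    V = V.astype(float)
    return (H - V) / (H + V + eps)

# Synthetic 8-bit frames: a patch returning more co-polarized light
H = np.full((2, 2), 120, dtype=np.uint8)
V = np.full((2, 2), 80, dtype=np.uint8)
ratio = polarization_ratio_image(H, V)  # ~0.2 everywhere here
```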

  2. Generating Stereoscopic Television Images With One Camera

    NASA Technical Reports Server (NTRS)

    Coan, Paul P.

    1996-01-01

    Straightforward technique for generating stereoscopic television images involves use of single television camera translated laterally between left- and right-eye positions. Camera acquires one of images (left- or right-eye image), and video signal from image delayed while camera translated to position where it acquires other image. Length of delay chosen so both images displayed simultaneously or as nearly simultaneously as necessary to obtain stereoscopic effect. Technique amenable to zooming in on small areas within broad scenes. Potential applications include three-dimensional viewing of geological features and meteorological events from spacecraft and aircraft, inspection of workpieces moving along conveyor belts, and aiding ground and water search-and-rescue operations. Also used to generate and display imagery for public education and general information, and possibly for medical purposes.
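
    The delay-line idea can be sketched as a small frame buffer pairing each delayed frame with the current one; `delay_frames` is a hypothetical parameter chosen to match the camera's translation time:

```python
from collections import deque

def stereo_pairs(frames, delay_frames):
    """Pair each frame captured at the left-eye position with the frame
    captured delay_frames later, after the camera has translated to the
    right-eye position."""
    buf = deque(maxlen=delay_frames)
    for frame in frames:
        if len(buf) == delay_frames:
            yield buf[0], frame  # (delayed left image, current right image)
        buf.append(frame)

pairs = list(stereo_pairs(range(6), 2))
print(pairs)  # [(0, 2), (1, 3), (2, 4), (3, 5)]
```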

  3. Development of gamma ray imaging cameras

    SciTech Connect

    Wehe, D.K.; Knoll, G.F.

    1992-05-28

    In January 1990, the Department of Energy initiated this project with the objective of developing the technology for general-purpose, portable gamma ray imaging cameras useful to the nuclear industry. The ultimate goal of this R&D initiative is to develop the analog of the color television camera, where the camera would respond to gamma rays instead of visible photons. The two-dimensional real-time image would indicate the geometric location of the radiation relative to the camera's orientation, while the brightness and "color" would indicate the intensity and energy of the radiation (and hence identify the emitting isotope). There is a strong motivation for developing such a device for applications within the nuclear industry, for both high- and low-level waste repositories, for environmental restoration problems, and for space and fusion applications. At present, there are no general-purpose radiation cameras capable of producing spectral images for such practical applications. At the time of this writing, work on this project has been underway for almost 18 months. Substantial progress has been made in the project's two primary areas: mechanically-collimated camera (MCC) and electronically-collimated camera (ECC) designs. We present developments covering the mechanically-collimated design, and then discuss the efforts on the electronically-collimated camera. The renewal proposal addresses the continuing R&D efforts for the third year. 8 refs.

  4. Occluded object imaging via optimal camera selection

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Zhang, Yanning; Tong, Xiaomin; Ma, Wenguang; Yu, Rui

    2013-12-01

    High-performance imaging of occluded objects in cluttered scenes is a significant and challenging task for many computer vision applications. Recently, camera array synthetic aperture imaging has proved to be an effective way of seeing objects through occlusion. However, the imaging quality of the occluded object is often significantly degraded by the shadows of the foreground occluder. Although some works have been presented to label the foreground occluder via object segmentation or 3D reconstruction, these methods fail in cases of complicated occluders and severe occlusion. In this paper, we present a novel optimal camera selection algorithm to solve this problem. The main characteristics of the algorithm are: (1) Instead of synthetic aperture imaging, we formulate the occluded object imaging problem as an optimal camera selection and mosaicking problem. To the best of our knowledge, our proposed method is the first for occluded object mosaicking. (2) A greedy optimization framework is presented to propagate visibility information among the various depth focus planes. (3) A multiple-label energy minimization formulation is designed in each plane to select the optimal camera. The energy is estimated in the synthetic aperture image volume and integrates multi-view intensity consistency, prior visibility, and camera view smoothness, and it is minimized via graph cuts. We compare our method with state-of-the-art synthetic aperture imaging algorithms, and extensive experimental results with qualitative and quantitative analysis demonstrate the effectiveness and superiority of our approach.

  5. Images obtained with a compact gamma camera

    NASA Astrophysics Data System (ADS)

    Bird, A. J.; Ramsden, D.

    1990-12-01

    A design for a compact gamma camera based on the use of a position-sensitive photomultiplier is presented. Tests have been carried out on a prototype detector system, having a sensitive area of 25 cm², using both a simple pinhole aperture and a parallel collimator. Images of a thyroid phantom are presented, and after processing to reduce the artefacts introduced by the use of a pinhole aperture, the quality is compared with that obtained using a standard Anger camera.

  6. Classroom multispectral imaging using inexpensive digital cameras.

    NASA Astrophysics Data System (ADS)

    Fortes, A. D.

    2007-12-01

    The proliferation of increasingly cheap digital cameras in recent years means that it has become easier to exploit the broad wavelength sensitivity of their CCDs (360-1100 nm) for classroom-based teaching. With the right tools, it is possible to open children's eyes to the invisible world of UVA and near-IR radiation on either side of our narrow visual band. The camera-filter combinations I describe can be used to explore the world of animal vision, looking for invisible markings on flowers or in bird plumage, for example. In combination with a basic spectroscope (such as the Project-STAR handheld plastic spectrometer, $25), it is possible to investigate the range of human vision and camera sensitivity, and to explore the atomic and molecular absorption lines from the solar and terrestrial atmospheres. My principal use of the cameras has been to teach multispectral imaging of the kind used to determine remotely the composition of planetary surfaces. A range of camera options, from $50 circuit-board-mounted CCDs up to $900 semi-pro infrared camera kits (including mobile phones along the way), and various UV-vis-IR filter options will be presented. Examples of multispectral images taken with these systems are used to illustrate the range of classroom topics that can be covered. Particular attention is given to learning about spectral reflectance curves and comparing images from Earth and Mars taken using the same filter combination that is used on the Mars Rovers.
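
    Classroom spectral reflectance curves of this kind are usually built by ratioing the target's digital numbers against a white reference imaged through the same filters; the filter set and pixel values below are illustrative only, not from the abstract:

```python
def reflectance(target_dn, white_dn):
    """Approximate band reflectance: scene digital number ratioed to a
    white reference patch imaged through the same filter."""
    return float(target_dn) / float(white_dn)

filters_nm = [450, 550, 650, 850]        # hypothetical filter set
target_dn = [30.0, 60.0, 90.0, 120.0]    # mean target DNs per filter
white_dn = [200.0, 200.0, 180.0, 160.0]  # mean white-reference DNs per filter
curve = [reflectance(t, w) for t, w in zip(target_dn, white_dn)]
print([round(r, 2) for r in curve])  # [0.15, 0.3, 0.5, 0.75]
```

    Plotting `curve` against `filters_nm` gives the spectral reflectance curve that students can then compare between Earth and Mars scenes.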

  7. Multiple-image oscilloscope camera

    DOEpatents

    Yasillo, Nicholas J.

    1978-01-01

    An optical device for placing automatically a plurality of images at selected locations on one film comprises a stepping motor coupled to a rotating mirror and lens. A mechanical connection from the mirror controls an electronic logical system to allow rotation of the mirror to place a focused image at the desired preselected location. The device is of especial utility when used to place four images on a single film to record oscilloscope views obtained in gamma radiography.

  8. Prediction of Viking lander camera image quality

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Burcher, E. E.; Jobson, D. J.; Wall, S. D.

    1976-01-01

    Formulations are presented that permit prediction of image quality as a function of camera performance, surface radiance properties, and lighting and viewing geometry. Predictions made for a wide range of surface radiance properties reveal that image quality depends strongly on proper camera dynamic range command and on favorable lighting and viewing geometry. Proper camera dynamic range commands depend mostly on the surface albedo that will be encountered. Favorable lighting and viewing geometries depend mostly on lander orientation with respect to the diurnal sun path over the landing site, and tend to be independent of surface albedo and illumination scattering function. Side lighting with low sun elevation angles (10 to 30 deg) is generally favorable for imaging spatial details and slopes, whereas high sun elevation angles are favorable for measuring spectral reflectances.

  9. Investigation of Layered Lunar Mare Lava flows through LROC Imagery and Terrestrial Analogs

    NASA Astrophysics Data System (ADS)

    Needham, H.; Rumpf, M.; Sarah, F.

    2013-12-01

    High resolution images of the lunar surface have revealed layered deposits in the walls of impact craters and pit craters in the lunar maria, which are interpreted to be sequences of stacked lava flows. The goal of our research is to establish quantitative constraints and uncertainties on the thicknesses of individual flow units comprising the layered outcrops, in order to model the cooling history of lunar lava flows. The underlying motivation for this project is to identify locations hosting intercalated units of lava flows and paleoregoliths, which may preserve snapshots of the ancient solar wind and other extra-lunar particles, thereby providing potential sampling localities for future missions to the lunar surface. Our approach involves mapping layered outcrops using high-resolution imagery acquired by the Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC), with constraints on flow unit dimensions provided by Lunar Orbiter Laser Altimeter (LOLA) data. We have measured thicknesses of ~ 2 to > 20 m. However, there is considerable uncertainty in the definition of contacts between adjacent units, primarily because talus commonly obscures contacts and/or prevents lateral tracing of the flow units. In addition, flows may have thicknesses or geomorphological complexity at scales approaching the limit of resolution of the data, which hampers distinguishing one unit from another. To address these issues, we have undertaken a terrestrial analog study using World View 2 satellite imagery of layered lava sequences on Oahu, Hawaii. These data have a resolution comparable to LROC NAC images of 0.5 m. The layered lava sequences are first analyzed in ArcGIS to obtain an initial estimate of the number and thicknesses of flow units identified in the images. We next visit the outcrops in the field to perform detailed measurements of the individual units. 
We have discovered that fewer flow units are identified in the remote sensing data than in the field analysis, because the resolution of the data precludes identification of subtle flow contacts, and the identified 'units' are in fact multiple compound units. Other factors such as vegetation and shadows may alter the view in the satellite imagery. This means that clarity in the lunar study may also be affected by factors such as lighting angle and the amount of debris overlying the lava sequence. The compilation of field and remote sensing measurements allows us to determine the uncertainty on unit thicknesses, which can be modeled to establish the uncertainty on the calculated depths of penetration of the resulting heat pulse into the underlying regolith. This in turn provides insight into the survivability of extra-lunar particles in paleoregolith layers sandwiched between lava flows.

  10. Improvement of passive THz camera images

    NASA Astrophysics Data System (ADS)

    Kowalski, Marcin; Piszczek, Marek; Palka, Norbert; Szustakowski, Mieczyslaw

    2012-10-01

    Terahertz technology is an emerging technology with the potential to change our lives. It has many attractive applications in fields such as security, astronomy, biology, and medicine. Until recent years, terahertz (THz) waves were an undiscovered, or more importantly, an unexploited region of the electromagnetic spectrum, owing to difficulties in generating and detecting THz waves. Recent advances in hardware have started to open up the field to new applications such as THz imaging. THz waves can penetrate various materials; however, automated processing of THz images can be challenging. The THz frequency band is especially suited to clothes penetration, and because this radiation has no harmful ionizing effects, it is safe for human beings. Strong technological development in this band has produced a few interesting devices. Even though the development of THz cameras is an emerging topic, commercially available passive cameras still offer images of poor quality, mainly because of their low resolution and low detector sensitivity. THz image processing is therefore a challenging and urgent topic, and digital THz image processing is a promising and cost-effective approach for demanding security and defense applications. In this article we demonstrate the results of image quality enhancement and image fusion of images captured by a commercially available passive THz camera by means of various combined methods. Our research is focused on the detection of dangerous objects - guns, knives, and bombs - hidden under some popular types of clothing.

  11. Imaging characteristics of photogrammetric camera systems

    USGS Publications Warehouse

    Welch, R.; Halliday, J.

    1973-01-01

    In view of the current interest in high-altitude and space photographic systems for photogrammetric mapping, the United States Geological Survey (U.S.G.S.) undertook a comprehensive research project designed to explore the practical aspects of applying the latest image quality evaluation techniques to the analysis of such systems. The project had two direct objectives: (1) to evaluate the imaging characteristics of current U.S.G.S. photogrammetric camera systems; and (2) to develop methodologies for predicting the imaging capabilities of photogrammetric camera systems, comparing conventional systems with new or different types of systems, and analyzing the image quality of photographs. Image quality was judged in terms of a number of evaluation factors including response functions, resolving power, and the detectability and measurability of small detail. The limiting capabilities of the U.S.G.S. 6-inch and 12-inch focal length camera systems were established by analyzing laboratory and aerial photographs in terms of these evaluation factors. In the process, the contributing effects of relevant parameters such as lens aberrations, lens aperture, shutter function, image motion, film type, and target contrast were assessed, yielding procedures for analyzing image quality and for predicting and comparing performance capabilities. © 1973.

  12. COMPUTER ANALYSIS OF PLANAR GAMMA CAMERA IMAGES

    EPA Science Inventory




    T Martonen1 and J Schroeter2

    1Experimental Toxicology Division, National Health and Environmental Effects Research Laboratory, U.S. EPA, Research Triangle Park, NC 27711 USA and 2Curriculum in Toxicology, Unive...

  13. Camera for High-Speed THz Imaging

    NASA Astrophysics Data System (ADS)

    Zdanevičius, Justinas; Bauer, Maris; Boppel, Sebastian; Palenskis, Vilius; Lisauskas, Alvydas; Krozer, Viktor; Roskos, Hartmut G.

    2015-10-01

    We present a 24 × 24 pixel camera capable of high-speed THz imaging in power-detection mode. Each pixel of the sensor array consists of a pair of 150-nm NMOS transistors coupled to a patch antenna with resonance at 600 GHz. The camera can operate at up to 450 frames per second, where it exhibits a minimum resolvable power of 10.5 nW per pixel. For a 30-Hz frame rate, the minimum resolvable power is 1.4 nW.

  14. 15 CFR 743.3 - Thermal imaging camera reporting.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 15 Commerce and Foreign Trade 2 2011-01-01 2011-01-01 false Thermal imaging camera reporting. 743... REPORTING § 743.3 Thermal imaging camera reporting. (a) General requirement. Exports of thermal imaging... are not authorized by an individually validated license of thermal imaging cameras controlled by...

  15. 15 CFR 743.3 - Thermal imaging camera reporting.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 15 Commerce and Foreign Trade 2 2013-01-01 2013-01-01 false Thermal imaging camera reporting. 743... REPORTING § 743.3 Thermal imaging camera reporting. (a) General requirement. Exports of thermal imaging... are not authorized by an individually validated license of thermal imaging cameras controlled by...

  16. 15 CFR 743.3 - Thermal imaging camera reporting.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false Thermal imaging camera reporting. 743... REPORTING § 743.3 Thermal imaging camera reporting. (a) General requirement. Exports of thermal imaging... are not authorized by an individually validated license of thermal imaging cameras controlled by...

  17. 15 CFR 743.3 - Thermal imaging camera reporting.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 15 Commerce and Foreign Trade 2 2012-01-01 2012-01-01 false Thermal imaging camera reporting. 743... REPORTING § 743.3 Thermal imaging camera reporting. (a) General requirement. Exports of thermal imaging... are not authorized by an individually validated license of thermal imaging cameras controlled by...

  18. Imaging spectrometer/camera having convex grating

    NASA Technical Reports Server (NTRS)

    Reininger, Francis M. (Inventor)

    2000-01-01

    An imaging spectrometer has fore-optics coupled to a spectral resolving system with an entrance slit extending in a first direction at an imaging location of the fore-optics for receiving the image; a convex diffraction grating for separating the image into a plurality of spectra of predetermined wavelength ranges; a spectrometer array for detecting the spectra; and at least one concave spherical mirror concentric with the diffraction grating for relaying the image from the entrance slit to the diffraction grating and from the diffraction grating to the spectrometer array. In one embodiment, the spectrometer is configured in a lateral mode in which the entrance slit and the spectrometer array are displaced laterally on opposite sides of the diffraction grating in a second direction substantially perpendicular to the first direction. In another embodiment, the spectrometer is combined with a polychromatic imaging camera array disposed adjacent to said entrance slit for recording said image.

  19. Crack Detection from Moving Camera Images

    NASA Astrophysics Data System (ADS)

    Maehara, Hideaki; Takiguchi, Jun'ichi; Nishikawa, Keiichi

    We aim to develop a method that automatically detects cracks in images taken of concrete inner surfaces in tunnels and similar structures. Assuming the images are acquired by mobile mapping systems, we have devised a method that exploits the crack's projection model on the projection plane of moving cameras. With a prototype, we processed 2 mm/pixel images in order to detect 0.3 mm-wide cracks that human inspectors had registered as real cracks. As a result, most of the cracks were detected.

  20. 15 CFR 743.3 - Thermal imaging camera reporting.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 15 Commerce and Foreign Trade 2 2014-01-01 2014-01-01 false Thermal imaging camera reporting. 743... REPORTING AND NOTIFICATION § 743.3 Thermal imaging camera reporting. (a) General requirement. Exports of thermal imaging cameras must be reported to BIS as provided in this section. (b) Transactions to...

  1. The Widespread Distribution of Swirls in Lunar Reconnaissance Orbiter Camera Images

    NASA Astrophysics Data System (ADS)

    Denevi, B. W.; Robinson, M. S.; Boyd, A. K.; Blewett, D. T.

    2015-10-01

    Lunar swirls, the sinuous high- and low-reflectance features that cannot be mentioned without the associated adjective "enigmatic," are of interest because of their link to crustal magnetic anomalies [1,2]. These localized magnetic anomalies create mini-magnetospheres [3,4] and may alter the typical surface modification processes or result in altogether distinct processes that form the swirls. One hypothesis is that magnetic anomalies may provide some degree of shielding from the solar wind [1,2], which could impede space weathering due to solar wind sputtering. In this case, swirls would serve as a way to compare areas affected by typical lunar space weathering (solar wind plus micrometeoroid bombardment) to those where space weathering is dominated by micrometeoroid bombardment alone, providing a natural means to assess the relative contributions of these two processes to the alteration of fresh regolith. Alternately, magnetic anomalies may play a role in the sorting of soil grains, such that the high-reflectance portion of swirls may preferentially accumulate feldspar-rich dust [5] or soils with a lower component of nanophase iron [6]. Each of these scenarios presumes a pre-existing magnetic anomaly; swirls have also been suggested to be the result of recent cometary impacts in which the remanent magnetic field is generated by the impact event [7]. Here we map the distribution of swirls using ultraviolet and visible images from the Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) [8,9]. We explore the relationship of the swirls to crustal magnetic anomalies [10], and examine regions with magnetic anomalies and no swirls.

  2. The registration of star image in multiple cameras

    NASA Astrophysics Data System (ADS)

    Wang, Shuai; Li, Yingchun; Zhang, Tinghua; Du, Lin

    2015-10-01

    As the performance of commercial camera sensors and the imaging quality of lenses improve, their application to space-target observation becomes feasible. Multiple cameras can further improve the detection ability of the system through image fusion. This paper studies the registration problem in fusing images from multiple cameras, given the imaging characteristics of a commercial camera, and puts forward a practical method for star image registration. Experiments show that the registration accuracy reaches the subpixel level.
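    The subpixel accuracy reported above can be illustrated with a minimal sketch: estimate an integer shift by cross-correlation, then refine it by fitting a parabola through the correlation peak. The 1D example below is an illustration under that assumption only; the paper's actual registration method is not detailed in the abstract.

```python
def subpixel_shift(ref, img):
    """Estimate how far `img` is shifted relative to `ref` (1D signals),
    refined to subpixel precision by parabolic peak interpolation."""
    n = len(ref)
    # Integer-pixel circular cross-correlation.
    corr = [sum(ref[i] * img[(i + s) % n] for i in range(n)) for s in range(n)]
    k = max(range(n), key=lambda s: corr[s])
    # Fit a parabola through the peak and its neighbours; its vertex
    # gives the subpixel refinement of the integer shift.
    ym, y0, yp = corr[(k - 1) % n], corr[k], corr[(k + 1) % n]
    denom = ym - 2 * y0 + yp
    delta = 0.0 if denom == 0 else 0.5 * (ym - yp) / denom
    shift = k + delta
    # Fold shifts past n/2 into negative values.
    return shift - n if shift > n / 2 else shift
```

In practice, star registration would run a 2D version of this over detected star centroids, but the peak-interpolation idea is the same.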

  3. Enhancement of document images from cameras

    NASA Astrophysics Data System (ADS)

    Taylor, Michael J.; Dance, Christopher R.

    1998-04-01

    As digital cameras become cheaper and more powerful, driven by the consumer digital photography market, we anticipate significant value in extending their utility as a general office peripheral by adding a paper scanning capability. The main technical challenges in realizing this new scanning interface are insufficient resolution, blur and lighting variations. We have developed an efficient technique for the recovery of text from digital camera images, which simultaneously treats these three problems, unlike other local thresholding algorithms which do not cope with blur and resolution enhancement. The technique first performs deblurring by deconvolution, and then resolution enhancement by linear interpolation. We compare the performance of a threshold derived from the local mean and variance of all pixel values within a neighborhood with a threshold derived from the local mean of just those pixels with high gradient. We assess performance using OCR error scores.
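    The local mean/variance threshold the authors compare belongs to the Niblack family of adaptive binarization. A minimal sketch, assuming a square window of the given radius and an illustrative weight k (the paper's exact window size and weighting are not stated in the abstract):

```python
def local_threshold(img, radius=1, k=-0.2):
    """Binarize a 2D grayscale image with a Niblack-style threshold
    T = local_mean + k * local_std; pixels above T map to 1 (background),
    pixels at or below T map to 0 (text)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Gather the neighborhood, clamped at the image borders.
            vals = [img[j][i]
                    for j in range(max(0, y - radius), min(h, y + radius + 1))
                    for i in range(max(0, x - radius), min(w, x + radius + 1))]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals)
            t = mean + k * var ** 0.5
            out[y][x] = 1 if img[y][x] > t else 0
    return out
```

The gradient-based variant the paper evaluates would compute the local mean over high-gradient pixels only, but the thresholding step is analogous.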

  4. Cervical SPECT Camera for Parathyroid Imaging

    SciTech Connect

    2012-08-31

    Primary hyperparathyroidism, characterized by one or more enlarged parathyroid glands, has become one of the most common endocrine diseases in the world, affecting about 1 per 1000 in the United States. Standard treatment is highly invasive exploratory neck surgery called “parathyroidectomy”. The surgery has a notable mortality rate because of the close proximity to vital structures. The move to minimally invasive parathyroidectomy is hampered by the lack of high-resolution pre-surgical imaging techniques that can accurately localize the parathyroid with respect to surrounding structures. We propose to develop a dedicated ultra-high resolution (~1 mm) and high sensitivity (10× that of a conventional camera) cervical scintigraphic imaging device. It will be based on a multiple-pinhole-camera SPECT system comprising a novel solid-state CZT detector that offers the required performance. The overall system will be configured to fit around the neck and comfortably image a patient.

  5. Calibrating Images from the MINERVA Cameras

    NASA Astrophysics Data System (ADS)

    Mercedes Colón, Ana

    2016-01-01

    The MINiature Exoplanet Radial Velocity Array (MINERVA) consists of an array of robotic telescopes located on Mount Hopkins, Arizona with the purpose of performing transit photometry and spectroscopy to find Earth-like planets around Sun-like stars. In order to make photometric observations, it is necessary to perform calibrations on the CCD cameras of the telescopes to take into account possible instrument error on the data. In this project, we developed a pipeline that takes optical images, calibrates them using sky flats, darks, and biases to generate a transit light curve.
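    The calibration steps named in the abstract combine in the standard way: subtract bias and dark, then divide by a normalized flat. A minimal per-pixel sketch on flattened frames; MINERVA's actual pipeline details (for example, exposure-time scaling of darks) are not given in the abstract and are assumed away here:

```python
def calibrate(raw, bias, dark, flat):
    """Standard CCD reduction on flattened frames (lists of pixel values):
    subtract the bias and dark signal, then divide by the flat field
    normalized to its mean so the calibrated frame keeps its flux scale."""
    n = len(raw)
    flat_mean = sum(flat) / n
    return [(raw[i] - bias[i] - dark[i]) / (flat[i] / flat_mean)
            for i in range(n)]
```

A transit light curve then comes from aperture photometry on each calibrated frame, which this sketch does not cover.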

  6. Methods for identification of images acquired with digital cameras

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Bijhold, Jurrien; Kieft, Martijn; Kurosawa, Kenji; Kuroki, Kenro; Saitoh, Naoki

    2001-02-01

    From the court we were asked whether it is possible to determine if an image has been made with a specific digital camera. This question has to be answered in child pornography cases, where evidence is needed that a certain picture has been made with a specific camera. We have looked into different methods of examining the cameras to determine if a specific image has been made with a camera: defects in CCDs, file formats that are used, noise introduced by the pixel arrays and watermarking in images used by the camera manufacturer.

  7. Speckle Camera Imaging of the Planet Pluto

    NASA Astrophysics Data System (ADS)

    Howell, Steve B.; Horch, Elliott P.; Everett, Mark E.; Ciardi, David R.

    2012-10-01

    We have obtained optical wavelength (692 nm and 880 nm) speckle imaging of the planet Pluto and its largest moon Charon. Using our DSSI speckle camera attached to the Gemini North 8 m telescope, we collected high resolution imaging with an angular resolution of ~20 mas, a value at the Gemini-N telescope diffraction limit. We have produced for this binary system the first speckle reconstructed images, from which we can measure not only the orbital separation and position angle for Charon, but also the diameters of the two bodies. Our measurements of these parameters agree, within the uncertainties, with the current best values for Pluto and Charon. The Gemini-N speckle observations of Pluto are presented to illustrate the capabilities of our instrument and the robust production of high accuracy, high spatial resolution reconstructed images. We hope our results will suggest additional applications of high resolution speckle imaging for other objects within our solar system and beyond. Based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the National Science Foundation on behalf of the Gemini partnership: the National Science Foundation (United States), the Science and Technology Facilities Council (United Kingdom), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência, Tecnologia e Inovação (Brazil) and Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina).

  8. Camera system for multispectral imaging of documents

    NASA Astrophysics Data System (ADS)

    Christens-Barry, William A.; Boydston, Kenneth; France, Fenella G.; Knox, Keith T.; Easton, Roger L., Jr.; Toth, Michael B.

    2009-02-01

    A spectral imaging system comprising a 39-Mpixel monochrome camera, LED-based narrowband illumination, and acquisition/control software has been designed for investigations of cultural heritage objects. Notable attributes of this system, referred to as EurekaVision, include: streamlined workflow, flexibility, provision of well-structured data and metadata for downstream processing, and illumination that is safer for the artifacts. The system design builds upon experience gained while imaging the Archimedes Palimpsest and has been used in studies of a number of important objects in the LOC collection. This paper describes practical issues that were considered by EurekaVision to address key research questions for the study of fragile and unique cultural objects over a range of spectral bands. The system is intended to capture important digital records for access by researchers, professionals, and the public. The system was first used for spectral imaging of the 1507 world map by Martin Waldseemueller, the first printed map to reference "America." It was also used to image sections of the Carta Marina 1516 map by the same cartographer for comparative purposes. An updated version of the system is now being utilized by the Preservation Research and Testing Division of the Library of Congress.

  9. Update on High-Resolution Geodetically Controlled LROC Polar Mosaics

    NASA Astrophysics Data System (ADS)

    Archinal, B.; Lee, E.; Weller, L.; Richie, J.; Edmundson, K.; Laura, J.; Robinson, M.; Speyerer, E.; Boyd, A.; Bowman-Cisneros, E.; Wagner, R.; Nefian, A.

    2015-10-01

    We describe progress on high-resolution (1 m/pixel) geodetically controlled LROC mosaics of the lunar poles, which can be used for locating illumination resources (for solar power or cold traps) or landing site and surface operations planning.

  10. Fast Camera Imaging of Hall Thruster Ignition

    SciTech Connect

    C.L. Ellison, Y. Raitses and N.J. Fisch

    2011-02-24

    Hall thrusters provide efficient space propulsion by electrostatic acceleration of ions. Rotating electron clouds in the thruster overcome the space charge limitations of other methods. Images of the thruster startup, taken with a fast camera, reveal a bright ionization period which settles into steady state operation over 50 μs. The cathode introduces azimuthal asymmetry, which persists for about 30 μs into the ignition. Plasma thrusters are used on satellites for repositioning, orbit correction and drag compensation. The advantage of plasma thrusters over conventional chemical thrusters is that the exhaust energies are not limited by chemical energy to about an electron volt. For xenon Hall thrusters, the ion exhaust velocity can be 15-20 km/s, compared to 5 km/s for a typical chemical thruster.

  11. New insight into lunar impact melt mobility from the LRO camera

    USGS Publications Warehouse

    Bray, Veronica J.; Tornabene, Livio L.; Keszthelyi, Laszlo P.; McEwen, Alfred S.; Hawke, B. Ray; Giguere, Thomas A.; Kattenhorn, Simon A.; Garry, William B.; Rizk, Bashar; Caudill, C.M.; Gaddis, Lisa R.; van der Bogert, Carolyn H.

    2010-01-01

    The Lunar Reconnaissance Orbiter Camera (LROC) is systematically imaging impact melt deposits in and around lunar craters at meter and sub-meter scales. These images reveal that lunar impact melts, although morphologically similar to terrestrial lava flows of similar size, exhibit distinctive features (e.g., erosional channels). Although generated in a single rapid event, the post-impact mobility and morphology of lunar impact melts is surprisingly complex. We present evidence for multi-stage influx of impact melt into flow lobes and crater floor ponds. Our volume and cooling time estimates for the post-emplacement melt movements noted in LROC images suggest that new flows can emerge from melt ponds an extended time period after the impact event.

  12. Analysis and compression of plenoptic camera images with Zernike polynomials

    NASA Astrophysics Data System (ADS)

    Schwiegerling, Jim; Birch, Gabriel C.; Tyo, J. Scott

    2012-10-01

    Capturing light field data with a plenoptic camera has been discussed extensively in the literature. However, recent improvements in digital imaging have made demonstration and commercialization of plenoptic cameras feasible. The raw images obtained with plenoptic cameras consist of an array of small circular images, each of which captures local spatial and trajectory information regarding the light rays incident on that point. Here, we seek to develop techniques for representing such images with a natural set of basis functions. In doing so, reconstruction of slices through the light field data, as well as image compression, can be easily achieved.
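    Zernike polynomials form an orthogonal basis on the unit disk, which makes them a natural fit for the circular subimages described above; the radial part follows a standard factorial series. A sketch of evaluating the radial polynomial (the paper's specific ordering and normalization conventions are not stated in the abstract):

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Radial Zernike polynomial R_n^|m|(rho) from the standard
    factorial series; returns 0 when n - |m| is odd."""
    m = abs(m)
    if (n - m) % 2:
        return 0.0
    return sum((-1) ** k * factorial(n - k)
               / (factorial(k) * factorial((n + m) // 2 - k)
                  * factorial((n - m) // 2 - k))
               * rho ** (n - 2 * k)
               for k in range((n - m) // 2 + 1))
```

For example, R_2^0(rho) = 2*rho**2 - 1, the defocus-like radial term; fitting coefficients of such terms to each subimage is what enables the compression described.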

  13. Measurement of the nonuniformity of first responder thermal imaging cameras

    NASA Astrophysics Data System (ADS)

    Lock, Andrew; Amon, Francine

    2008-04-01

    Police, firefighters, and emergency medical personnel are examples of first responders that are utilizing thermal imaging cameras in a very practical way every day. However, few performance metrics have been developed to assist first responders in evaluating the performance of thermal imaging technology. This paper describes one possible metric for evaluating the nonuniformity of thermal imaging cameras. Several commercially available uncooled focal plane array cameras were examined. Because of intellectual property concerns, each camera was treated as a 'black box'. In these experiments, an extended-area blackbody (18 cm square) was placed very close to the objective lens of the thermal imaging camera. The resultant video output from the camera was digitized at a resolution of 640x480 pixels and a grayscale depth of 10 bits. The nonuniformity was calculated as the standard deviation of the digitized image pixel intensities divided by the mean of those pixel intensities. This procedure was repeated for each camera at several blackbody temperatures in the range from 30 °C to 260 °C. It was observed that the nonuniformity initially increases with temperature, then asymptotically approaches a maximum value. The nonuniformity is also applied in the calculation of the spatial frequency response, as well as providing a noise floor. The testing procedures described herein are being developed as part of a suite of tests to be incorporated into a performance standard covering thermal imaging cameras for first responders.
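    The metric as defined above, standard deviation of pixel intensities divided by their mean, is simply the coefficient of variation of the digitized frame. A minimal sketch, assuming a flattened list of pixel values and population statistics:

```python
def nonuniformity(pixels):
    """Nonuniformity of a digitized frame: population standard deviation
    of the pixel intensities divided by their mean."""
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    return var ** 0.5 / mean
```

A perfectly uniform frame scores 0; repeating the computation at each blackbody temperature reproduces the sweep described in the paper.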

  14. Photorealistic image synthesis and camera validation from 2D images

    NASA Astrophysics Data System (ADS)

    Santos Ferrer, Juan C.; González Chévere, David; Manian, Vidya

    2014-06-01

    This paper presents a new 3D scene reconstruction technique using the Unity 3D game engine. The method presented here allows us to reconstruct the shapes of simple and more complex objects from multiple 2D images, including infrared and digital images for indoor scenes and digital images only for outdoor scenes, and then add the reconstructed objects to a simulated scene created in Unity 3D; these scenes are then validated against real-world scenes. The method used different camera settings and explored different properties in the reconstruction of the scenes, including light, color, texture, shape, and viewpoint. To achieve the highest possible resolution, it was necessary to extract partial textures from visible surfaces. To recover the 3D shape and depth of simple objects that can be represented by geometric bodies, their geometric characteristics were used. To estimate the depth of more complex objects, the triangulation method was used; for this, the intrinsic and extrinsic parameters were calculated using geometric camera calibration. The methods mentioned above were implemented in Matlab. The technique presented here also lets us simulate short videos by reconstructing a sequence of scenes separated by small intervals of time. To measure the quality of the reconstructed images and video scenes, the Fast Low Band Model (FLBM) metric from the Video Quality Measurement (VQM) software was used. Low-bandwidth perception-based features include edges and motion.

  15. Image processing for cameras with fiber bundle image relay.

    PubMed

    Olivas, Stephen J; Arianpour, Ashkan; Stamenov, Igor; Morrison, Rick; Stack, Ron A; Johnson, Adam R; Agurok, Ilya P; Ford, Joseph E

    2015-02-10

    Some high-performance imaging systems generate a curved focal surface and so are incompatible with focal plane arrays fabricated by conventional silicon processing. One example is a monocentric lens, which forms a wide field-of-view high-resolution spherical image with a radius equal to the focal length. Optical fiber bundles have been used to couple between this focal surface and planar image sensors. However, such fiber-coupled imaging systems suffer from artifacts due to image sampling and incoherent light transfer by the fiber bundle as well as resampling by the focal plane, resulting in a fixed obscuration pattern. Here, we describe digital image processing techniques to improve image quality in a compact 126° field-of-view, 30 megapixel panoramic imager, where a 12 mm focal length F/1.35 lens made of concentric glass surfaces forms a spherical image surface, which is fiber-coupled to six discrete CMOS focal planes. We characterize the locally space-variant system impulse response at various stages: monocentric lens image formation onto the 2.5 μm pitch fiber bundle, image transfer by the fiber bundle, and sensing by a 1.75 μm pitch backside illuminated color focal plane. We demonstrate methods to mitigate moiré artifacts and local obscuration, correct for sphere to plane mapping distortion and vignetting, and stitch together the image data from discrete sensors into a single panorama. We compare processed images from the prototype to those taken with a 10× larger commercial camera with comparable field-of-view and light collection. PMID:25968031

  16. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    NASA Astrophysics Data System (ADS)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged. There are also existing methods to evaluate the camera speed. However, the speed or rapidity metrics of the mobile phone's camera system have not been used alongside the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made against the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are made. The paper defines a solution to combine different image quality and speed metrics into a single benchmarking score. A proposal of the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of previous benchmarking work, expanded with visual noise measurement and updates for the latest mobile phone versions.

  17. NV-CMOS HD camera for day/night imaging

    NASA Astrophysics Data System (ADS)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near infrared (NIR) bands (peak QE >90%), as well as projected low-noise (<2 h+) readout. Power consumption is minimized in the camera, which operates from a single 5 V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  18. Thermal analysis of the ultraviolet imager camera and electronics

    NASA Technical Reports Server (NTRS)

    Dirks, Gregory J.

    1991-01-01

    The Ultraviolet Imaging experiment has undergone design changes that necessitate updating the reduced thermal models (RTMs) for both the Camera and Electronics. In addition, there are several mission scenarios that need to be evaluated in terms of the thermal response of the instruments. The impact of these design changes and mission scenarios on the thermal performance of the Camera and Electronics assemblies is discussed.

  19. Imaging Emission Spectra with Handheld and Cellphone Cameras

    ERIC Educational Resources Information Center

    Sitar, David

    2012-01-01

    As point-and-shoot digital camera technology advances it is becoming easier to image spectra in a laboratory setting on a shoestring budget and get immediate results. With this in mind, I wanted to test three cameras to see how their results would differ. Two undergraduate physics students and I used one handheld 7.1 megapixel (MP) digital Canon…

  1. Imaging Emission Spectra with Handheld and Cellphone Cameras

    NASA Astrophysics Data System (ADS)

    Sitar, David

    2012-12-01

    As point-and-shoot digital camera technology advances it is becoming easier to image spectra in a laboratory setting on a shoestring budget and get immediate results. With this in mind, I wanted to test three cameras to see how their results would differ. Two undergraduate physics students and I used one handheld 7.1 megapixel (MP) digital Canon point-and-shoot auto focusing camera and two different cellphone cameras: one at 6.1 MP and the other at 5.1 MP.

  2. Mobile phone camera benchmarking: combination of camera speed and image quality

    NASA Astrophysics Data System (ADS)

    Peltoketo, Veli-Tapani

    2014-01-01

    When a mobile phone camera is tested and benchmarked, the significance of quality metrics is widely acknowledged. There are also existing methods to evaluate the camera speed. For example, ISO 15781 defines several measurements to evaluate various camera system delays. However, the speed or rapidity metrics of the mobile phone's camera system have not been used alongside the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. Firstly, the most important image quality metrics are collected from standards and papers. Secondly, the speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are identified. Thirdly, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made against the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are made. The result of this work gives detailed benchmarking results for mobile phone camera systems on the market. The paper also proposes combined benchmarking metrics, which include both quality and speed parameters.
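    Combining quality and speed metrics into a single benchmarking figure can be sketched as a weighted average over per-metric scores normalized to a common scale. The metric names and weights below are purely illustrative assumptions; the paper's actual combination is not reproduced in the abstract:

```python
def benchmark_score(metrics, weights):
    """Weighted average of per-metric scores, each already normalized
    to [0, 1] with higher meaning better. Both dicts are keyed by the
    (hypothetical) metric name."""
    total = sum(weights.values())
    return sum(weights[k] * metrics[k] for k in weights) / total
```

With, say, `{'sharpness': 0.8, 'visual_noise': 0.6, 'shot_to_shot_delay': 0.9}` and equal weights, the combined score is simply the mean of the three; unequal weights let a benchmark emphasize quality over speed or vice versa.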

  3. Film reciprocity law failure in scintillation camera imaging

    SciTech Connect

    Grossman, L.W.; Van Tuinen, R.J.; Kruger, J.B.; Scholz, K.L.

    1981-03-01

    Sensitometric measurements of scintillation camera images show that substantial changes in film response occur with variations in imaging time or dot focus. Both the speed and the slope of the film characteristic curve are affected. The phenomenon responsible for the variation in response is referred to as film reciprocity law failure, and is inherent in the image-forming process.

  4. Application of the CCD camera in medical imaging

    NASA Astrophysics Data System (ADS)

    Chu, Wei-Kom; Smith, Chuck; Bunting, Ralph; Knoll, Paul; Wobig, Randy; Thacker, Rod

    1999-04-01

    Medical fluoroscopy is a set of radiological procedures used in medical imaging for functional and dynamic studies of the digestive system. Major components in the imaging chain include an image intensifier, which converts x-ray information into an intensity pattern on its output screen, and a CCTV camera, which converts the output-screen intensity pattern into video information to be displayed on a TV monitor. Properly responding to such a wide dynamic range on a real-time basis, as in a fluoroscopy procedure, is very challenging. Also, as in all other medical imaging studies, detail resolution is of great importance; without proper contrast, spatial resolution is compromised. The many inherent advantages of the CCD make it a suitable choice for dynamic studies. Recently, CCD cameras have been introduced as the camera of choice for medical fluoroscopy imaging systems. The objective of our project was to investigate a newly installed CCD fluoroscopy system in the areas of contrast resolution, detail, and radiation dose.

  5. Autofluorescence imaging of basal cell carcinoma by smartphone RGB camera.

    PubMed

    Lihachev, Alexey; Derjabo, Alexander; Ferulova, Inesa; Lange, Marta; Lihacova, Ilze; Spigulis, Janis

    2015-12-01

    The feasibility of smartphones for in vivo skin autofluorescence imaging has been investigated. Filtered autofluorescence images of the same tissue area were periodically captured by a smartphone RGB camera, with subsequent detection of the fluorescence intensity decrease at each image pixel for imaging the planar distribution of those values. The proposed methodology was tested clinically on 13 basal cell carcinomas and 1 atypical nevus. Several clinical cases and potential future applications of the smartphone-based technique are discussed. PMID:26662298

  6. Autofluorescence imaging of basal cell carcinoma by smartphone RGB camera

    NASA Astrophysics Data System (ADS)

    Lihachev, Alexey; Derjabo, Alexander; Ferulova, Inesa; Lange, Marta; Lihacova, Ilze; Spigulis, Janis

    2015-12-01

The feasibility of smartphones for in vivo skin autofluorescence imaging has been investigated. Filtered autofluorescence images from the same tissue area were periodically captured by a smartphone RGB camera, with subsequent detection of the fluorescence intensity decrease at each image pixel for imaging the planar distribution of those values. The proposed methodology was tested clinically with 13 basal cell carcinomas and 1 atypical nevus. Several clinical cases and potential future applications of the smartphone-based technique are discussed.

  7. A design of camera simulator for photoelectric image acquisition system

    NASA Astrophysics Data System (ADS)

    Cai, Guanghui; Liu, Wen; Zhang, Xin

    2015-02-01

In the process of developing photoelectric image acquisition equipment, its function and performance must be verified. To let the photoelectric device replay previously recorded image data during debugging and testing, a design scheme for a camera simulator is presented. In this system, with an FPGA as the control core, the image data is saved in NAND flash through a USB 2.0 bus. Because the access rate of the NAND flash is too slow to meet the requirements of the system, pipeline and high-band-bus techniques are applied in the design to improve the storage rate. The FPGA control logic reads the image data out of flash and outputs it separately over three different interfaces, Camera Link, LVDS, and PAL, which can provide image data for debugging and algorithm validation of photoelectric image acquisition equipment. However, because the standard PAL image resolution is 720*576 and therefore differs from the input image resolution, the PAL image is output after resolution conversion. The experimental results demonstrate that the camera simulator outputs the three image-sequence formats correctly and that they can be captured and displayed by a frame grabber. The three-format image data can meet the test requirements of most equipment, shortening debugging time and improving test efficiency.

  8. Acquisition and evaluation of radiography images by digital camera.

    PubMed

    Cone, Stephen W; Carucci, Laura R; Yu, Jinxing; Rafiq, Azhar; Doarn, Charles R; Merrell, Ronald C

    2005-04-01

    To determine applicability of low-cost digital imaging for different radiographic modalities used in consultations from remote areas of the Ecuadorian rainforest with limited resources, both medical and financial. Low-cost digital imaging, consisting of hand-held digital cameras, was used for image capture at a remote location. Diagnostic radiographic images were captured in Ecuador by digital camera and transmitted to a password-protected File Transfer Protocol (FTP) server at VCU Medical Center in Richmond, Virginia, using standard Internet connectivity with standard security. After capture and subsequent transfer of images via low-bandwidth Internet connections, attending radiologists in the United States compared diagnoses to those from Ecuador to evaluate quality of image transfer. Corroborative diagnoses were obtained with the digital camera images for greater than 90% of the plain film and computed tomography studies. Ultrasound (U/S) studies demonstrated only 56% corroboration. Images of radiographs captured utilizing commercially available digital cameras can provide quality sufficient for expert consultation for many plain film studies for remote, underserved areas without access to advanced modalities. PMID:15857253

  9. ProxiScan™: A Novel Camera for Imaging Prostate Cancer

    SciTech Connect

    Ralph James

    2009-10-27

    ProxiScan is a compact gamma camera suited for high-resolution imaging of prostate cancer. Developed by Brookhaven National Laboratory and Hybridyne Imaging Technologies, Inc., ProxiScan won a 2009 R&D 100 Award, sponsored by R&D Magazine to recognize t

  10. ProxiScan?: A Novel Camera for Imaging Prostate Cancer

    ScienceCinema

    Ralph James

    2010-01-08

    ProxiScan is a compact gamma camera suited for high-resolution imaging of prostate cancer. Developed by Brookhaven National Laboratory and Hybridyne Imaging Technologies, Inc., ProxiScan won a 2009 R&D 100 Award, sponsored by R&D Magazine to recognize t

  11. An airborne four-camera imaging system for agricultural applications

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper describes the design and testing of an airborne multispectral digital imaging system for remote sensing applications. The system consists of four high resolution charge coupled device (CCD) digital cameras and a ruggedized PC equipped with a frame grabber and image acquisition software. T...

  12. The algorithm for generation of panoramic images for omnidirectional cameras

    NASA Astrophysics Data System (ADS)

    Lazarenko, Vasiliy P.; Yarishev, Sergey; Korotaev, Valeriy

    2015-05-01

Omnidirectional cameras are used in areas where a large field of view is important: they can give a complete 360° view along one direction. But the distortion of omnidirectional cameras is great, which makes the raw omnidirectional image hard to read. One way to view omnidirectional images in a readable form is to generate panoramic images from them; at the same time, a panorama keeps the main advantage of the omnidirectional image, a large field of view. The algorithm for generating panoramas from omnidirectional images consists of several steps. Panoramas can be described as projections onto cylinders, spheres, cubes, or other surfaces that surround a viewing point; in practice, cylindrical, spherical, and cubic panoramas are the most commonly used. So in the first step we describe the panorama's field of view by creating a virtual surface (cylinder, sphere, or cube) from a matrix of 3D points in virtual object space. We then create a mapping table by finding, through the projection function, the coordinates on the omnidirectional image that correspond to those 3D points. In the last step we generate the panorama pixel by pixel from the original omnidirectional image using the mapping table. To find the projection function of the omnidirectional camera we used the calibration procedure developed by Davide Scaramuzza, the Omnidirectional Camera Calibration Toolbox for Matlab. After calibration, the toolbox provides two functions that express the relation between a given pixel point and its projection onto the unit sphere. After the first run of the algorithm we obtain the mapping table, which can then be used for real-time generation of panoramic images at minimal CPU cost.
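The surface-then-mapping-table pipeline described above can be sketched in a few lines (a minimal illustration, not the authors' implementation; the cylindrical surface, the fixed elevation band, and the nearest-neighbour lookup are simplifying assumptions, and `projection_fn` stands in for the calibrated projection function the toolbox would provide):

```python
import math

def build_mapping_table(pano_w, pano_h, projection_fn):
    """For every pixel of a cylindrical panorama, precompute the source
    coordinates in the omnidirectional image. `projection_fn` maps a 3-D
    direction (x, y, z) to (u, v) on the omnidirectional image -- in the
    paper this comes from Scaramuzza's calibration; here it is any callable."""
    table = []
    for row in range(pano_h):
        # elevation sweeps a fixed band; azimuth sweeps the full 360 degrees
        el = (row / (pano_h - 1) - 0.5) * (math.pi / 2)
        table_row = []
        for col in range(pano_w):
            az = (col / (pano_w - 1) - 0.5) * (2 * math.pi)
            x = math.cos(el) * math.cos(az)   # point on the virtual cylinder,
            y = math.cos(el) * math.sin(az)   # expressed as a unit direction
            z = math.sin(el)
            table_row.append(projection_fn(x, y, z))
        table.append(table_row)
    return table

def render_panorama(omni, table):
    """Pixel-by-pixel lookup through the precomputed mapping table
    (nearest neighbour; a real system would interpolate)."""
    h_src, w_src = len(omni), len(omni[0])
    pano = []
    for row in table:
        pano.append([
            omni[min(max(int(round(v)), 0), h_src - 1)]
                [min(max(int(round(u)), 0), w_src - 1)]
            for (u, v) in row
        ])
    return pano
```

Because the table is computed once, per-frame cost reduces to the lookup loop, which is what makes real-time panorama generation cheap.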

  13. Volumetric particle image velocimetry with a single plenoptic camera

    NASA Astrophysics Data System (ADS)

    Fahringer, Timothy W.; Lynch, Kyle P.; Thurow, Brian S.

    2015-11-01

A novel three-dimensional (3D), three-component (3C) particle image velocimetry (PIV) technique based on volume illumination and light field imaging with a single plenoptic camera is described. A plenoptic camera uses a densely packed microlens array mounted near a high-resolution image sensor to sample the spatial and angular distribution of light collected by the camera. The multiplicative algebraic reconstruction technique (MART) computed tomography algorithm is used to reconstruct a volumetric intensity field from individual snapshots, and a cross-correlation algorithm is used to estimate the velocity field from a pair of reconstructed particle volumes. This work provides an introduction to the basic concepts of light field imaging with a plenoptic camera and describes the unique implementation of MART in the context of plenoptic image data for 3D/3C PIV measurements. Simulations of a plenoptic camera using geometric optics are used to generate synthetic plenoptic particle images, which are subsequently used to estimate the quality of particle volume reconstructions at various particle number densities. 3D reconstructions using this method produce reconstructed particles that are elongated by a factor of approximately 4 along the optical axis of the camera. A simulated 3D Gaussian vortex is used to test the capability of single-camera plenoptic PIV to produce a 3D/3C vector field, where it was found that displacements could be measured to approximately 0.2 voxel accuracy in the lateral direction and 1 voxel in the depth direction over a 300 × 200 × 200 voxel volume. The feasibility of the technique is demonstrated experimentally using a home-built plenoptic camera based on a 16-megapixel interline CCD camera, a 289 × 193 array of microlenses, and a pulsed Nd:YAG laser. 3D/3C measurements were performed in the wake of a low-Reynolds-number circular cylinder and compared with measurements made using a conventional 2D/2C PIV system. Overall, single-camera plenoptic PIV is shown to be a viable 3D/3C velocimetry technique.
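The MART update at the heart of the reconstruction step can be illustrated on a toy system (a sketch of the generic multiplicative ART iteration, not the authors' plenoptic-specific implementation; the sparse-row representation and the relaxation factor `mu` are assumptions):

```python
def mart(A, b, n_vox, n_iter=20, mu=1.0):
    """Multiplicative algebraic reconstruction technique (MART) sketch.
    A is a list of weighting rows, one per recorded pixel, each a sparse
    dict {voxel_index: weight}; b holds the recorded pixel intensities.
    Each voxel is corrected multiplicatively by the ratio of measured to
    projected intensity, raised to the power of its weight."""
    x = [1.0] * n_vox                      # uniform positive initial guess
    for _ in range(n_iter):
        for row, bi in zip(A, b):
            proj = sum(w * x[j] for j, w in row.items())
            if proj <= 0 or bi <= 0:
                continue                   # MART only handles positive data
            ratio = bi / proj
            for j, w in row.items():
                x[j] *= ratio ** (mu * w)
    return x
```

For a consistent system the iteration converges to an intensity field whose projections match the measurements, which is exactly what the volumetric particle reconstruction requires.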

  14. Efficient height measurement method of surveillance camera image.

    PubMed

    Lee, Joong; Lee, Eung-Dae; Tark, Hyun-Oh; Hwang, Jin-Woo; Yoon, Do-Young

    2008-05-01

As surveillance cameras are increasingly installed, their footage is often submitted as evidence of crime, but very little detailed information, such as facial features and clothing, can be obtained because of the limited camera performance. Height, however, is relatively insensitive to camera performance. This paper studied a height measurement method using images from a CCTV camera. The height information was obtained via photogrammetry, using reference points in the photographed area and calculating the relationship between 3D space and the 2D image through linear and nonlinear calibration. Using this correlation, the paper proposes a height measurement method that projects a 3D virtual ruler onto the image. This method has been shown to offer more stable values, within the range of data convergence, than existing methods. PMID:18096339
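The virtual-ruler idea can be sketched as follows (a hypothetical illustration assuming an already-calibrated 3x4 projection matrix `P`; the paper's own calibration includes both linear and nonlinear terms, which this sketch omits):

```python
def project(P, x, y, z):
    """Pinhole projection of the 3-D point (x, y, z) through the 3x4 matrix P."""
    xh = (x, y, z, 1.0)
    u, v, w = (sum(P[r][c] * xh[c] for c in range(4)) for r in range(3))
    return u / w, v / w

def estimate_height(P, foot_xy, head_uv, h_max=2.5, step=0.001):
    """Grow a virtual vertical ruler at the person's ground position and
    return the height whose projected tip best matches the head pixel."""
    fx, fy = foot_xy
    best_h, best_err = 0.0, float('inf')
    h = 0.0
    while h <= h_max:
        u, v = project(P, fx, fy, h)
        err = (u - head_uv[0]) ** 2 + (v - head_uv[1]) ** 2
        if err < best_err:
            best_h, best_err = h, err
        h += step
    return best_h
```

The same projection machinery also maps the ruler's tick marks into the image, which is how the method overlays a readable scale on CCTV footage.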

  15. Wide-Angle, Reflective Strip-Imaging Camera

    NASA Technical Reports Server (NTRS)

    Vaughan, Arthur H.

    1992-01-01

Proposed camera images a thin, striplike portion of a field of view 180 degrees wide. Hemispherical concave reflector forms image onto optical fibers, which transfer it to a strip of photodetectors or a spectrograph. Advantages include little geometric distortion, achromatism, and ease of athermalization. Uses include surveillance of clouds, coarse mapping of terrain, measurements of bidirectional reflectance distribution functions of aerosols, imaging spectrometry, oceanography, and exploration of planets.

  16. CMOS Image Sensors: Electronic Camera On A Chip

    NASA Technical Reports Server (NTRS)

    Fossum, E. R.

    1995-01-01

Recent advancements in CMOS image sensor technology are reviewed, including both passive pixel sensors and active pixel sensors. On-chip analog-to-digital converters and on-chip timing and control circuits permit realization of an electronic camera-on-a-chip. Highly miniaturized imaging systems based on CMOS image sensor technology are emerging as a competitor to charge-coupled devices for low-cost uses.

  17. Algorithm for haze determination using digital camera images

    NASA Astrophysics Data System (ADS)

    Wong, C. J.; MatJafri, M. Z.; Abdullah, K.; Lim, H. S.; Hashim, S. A.

    2008-04-01

An algorithm for haze determination was developed based on atmospheric optical properties to determine the concentration of particulate matter with diameter less than 10 micrometers (PM10). The purpose of this study was to use digital camera images to determine the PM10 concentration. The algorithm was developed based on the relationship between the measured PM10 concentration and the components reflected from a surface material and from the atmosphere. A digital camera was used to capture images of dark and bright targets at near and far distances from the position of the targets. Ground-based PM10 measurements were carried out at selected locations simultaneously with the digital camera image acquisition, using a DustTrak meter. The PCI Geomatica version 9.1 digital image processing software was used in all image-processing analyses. The digital colour images were separated into three bands, namely red, green, and blue, for multi-spectral analysis. The digital numbers (DN) for each band corresponding to the ground-truth locations were extracted and converted to radiance and reflectance values. The atmospheric reflectance was then related to the PM10 concentration using regression analysis. The proposed algorithm produced a high correlation coefficient (R) and a low root-mean-square error (RMS) between the measured and estimated PM10. This indicates that the technique using digital camera images can provide a useful tool for air quality studies.
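The final regression step can be illustrated with a single-band ordinary least-squares fit (a sketch only; the study regresses PM10 against atmospheric reflectance in the red, green, and blue bands separately, and the function name here is hypothetical):

```python
def fit_pm10(reflectance, pm10):
    """Ordinary least-squares fit of PM10 = a + b * atmospheric reflectance
    for one spectral band. Returns the intercept a and slope b."""
    n = len(reflectance)
    mx = sum(reflectance) / n
    my = sum(pm10) / n
    sxx = sum((x - mx) ** 2 for x in reflectance)
    sxy = sum((x - mx) * (y - my) for x, y in zip(reflectance, pm10))
    b = sxy / sxx                 # slope: PM10 change per unit reflectance
    a = my - b * mx               # intercept
    return a, b
```

Once calibrated against the DustTrak ground truth, the fitted (a, b) pair turns any new image's atmospheric reflectance into an estimated PM10 value.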

  18. High-Resolution Mars Camera Test Image of Moon (Infrared)

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This crescent view of Earth's Moon in infrared wavelengths comes from a camera test by NASA's Mars Reconnaissance Orbiter spacecraft on its way to Mars. The mission's High Resolution Imaging Science Experiment camera took the image on Sept. 8, 2005, while at a distance of about 10 million kilometers (6 million miles) from the Moon. The dark feature on the right is Mare Crisium. From that distance, the Moon would appear as a star-like point of light to the unaided eye. The test verified the camera's focusing capability and provided an opportunity for calibration. The spacecraft's Context Camera and Optical Navigation Camera also performed as expected during the test.

    The Mars Reconnaissance Orbiter, launched on Aug. 12, 2005, is on course to reach Mars on March 10, 2006. After gradually adjusting the shape of its orbit for half a year, it will begin its primary science phase in November 2006. From the mission's planned science orbit about 300 kilometers (186 miles) above the surface of Mars, the high resolution camera will be able to discern features as small as one meter or yard across.

  19. Direct imaging of exoplanetary systems with a monolithic multispectral camera

    NASA Astrophysics Data System (ADS)

Hicks, Brian; Oram, Kathleen; Lewis, Nikole; Mendillo, Christopher; Bierden, Paul; Cook, Timothy; Chakrabarti, Supriya

    2013-09-01

    We present a monolithic multispectral camera (MMC) for high contrast direct imaging of inner exoplanetary environments. The primary scientific goal of the camera is to enable eight color characterization of jovian exoplanets and interplanetary dust and debris distributions around nearby stars. Technological highlights of the design include: 1. Diffraction limited resolution at 350 nm through active optical aberration correction; 2. Greater than million-to-one contrast at narrow star separation using interferometry and post-processing techniques; 3. Demonstration of deep broadband interferometric nulling and interband image stability through the use of monolithic optical assemblies; 4. Optimization of multispectral throughput while minimizing components.

  20. Cheetah: A high frame rate, high resolution SWIR image camera

    NASA Astrophysics Data System (ADS)

    Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob

    2008-10-01

A high-resolution, high-frame-rate InGaAs-based image sensor and associated camera have been developed. The sensor and the camera are capable of recording and delivering more than 1700 full 640x512-pixel frames per second. The FPA utilizes a low-lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logic allows four different sub-windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range [0.9-1.7 µm] and can be further extended into the visible and NIR range. The Cheetah camera has up to 16 GB of on-board memory to store the acquired images and transfers the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full Camera Link interface to directly stream the data to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.

  1. Air Pollution Determination Using a Surveillance Internet Protocol Camera Images

    NASA Astrophysics Data System (ADS)

Chow Jeng, C. J.; Lim, Hwee San; MatJafri, M. Z.; Abdullah, K.

Air pollution has long been a problem in the industrial nations of the West. It has now become an increasing source of environmental degradation in the developing nations of east Asia. The Malaysian government has built a network to monitor air pollution, but the cost of such networks is high and limits knowledge of pollutant concentrations to specific points in the cities. A methodology based on a surveillance internet protocol (IP) camera for the determination of air pollution concentrations is presented in this study. The objective of this study was to test the feasibility of using IP camera data for estimating real-time particulate matter of size less than 10 micron (PM10) on the campus of USM. The proposed PM10 retrieval algorithm, derived from the atmospheric optical properties, was employed in the present study. In situ data sets of PM10 measurements and sun radiation measurements at the ground surface were collected simultaneously with the IP camera images, using a DustTrak meter and a handheld spectroradiometer respectively. The digital images were separated into three bands, namely red, green, and blue, for multispectral algorithm calibration. The digital numbers (DN) of the IP camera images were converted into radiance and reflectance values. After that, the reflectance recorded by the digital camera was subtracted by the reflectance of the known surface, giving the reflectance caused by the atmospheric components. The atmospheric reflectance values were used for regression analysis. Regression technique was employed to determine suitable

  2. Establishing imaging sensor specifications for digital still cameras

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    2007-02-01

Digital Still Cameras, DSCs, have now displaced conventional still cameras in most markets. The heart of a DSC is thought to be the imaging sensor, be it a full-frame CCD, an interline CCD, a CMOS sensor, or the newer Foveon buried-photodiode sensor. There is a strong tendency for consumers to consider only the number of megapixels in a camera and not the overall performance of the imaging system, including sharpness, artifact control, noise, color reproduction, exposure latitude, and dynamic range. This paper provides a systematic method to characterize the physical requirements of an imaging sensor and supporting system components based on the desired usage. The analysis is based on two software programs that determine the "sharpness", potential for artifacts, sensor "photographic speed", dynamic range, and exposure latitude based on the physical nature of the imaging optics and the sensor characteristics (including pixel size, sensor architecture, noise characteristics, surface states that cause dark current, quantum efficiency, effective MTF, and the intrinsic full-well capacity in terms of electrons per square centimeter). Examples are given for consumer, prosumer, and professional camera systems. Where possible, these results are compared to imaging systems currently on the market.
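Two of the figures of merit such an analysis produces can be computed directly from the physical sensor parameters (an illustrative sketch, not the paper's software; treating dynamic range as full-well capacity over read noise is a common simplification, and the parameter values in the usage note are hypothetical):

```python
import math

def sensor_metrics(pixel_pitch_um, full_well_per_cm2, read_noise_e):
    """From pixel pitch (µm), intrinsic full-well capacity (electrons per
    cm^2 of silicon, as in the paper's parameterization), and read noise
    (electrons), derive per-pixel full well and dynamic range."""
    area_cm2 = (pixel_pitch_um * 1e-4) ** 2          # pixel area in cm^2
    full_well = full_well_per_cm2 * area_cm2         # electrons per pixel
    dr_db = 20 * math.log10(full_well / read_noise_e)    # dynamic range, dB
    dr_stops = math.log2(full_well / read_noise_e)       # in photographic stops
    return full_well, dr_db, dr_stops
```

For example, a hypothetical 10 µm pixel at 6e10 e-/cm² with 6 e- read noise gives a 60,000 e- full well and 80 dB (about 13.3 stops) of dynamic range, which shows concretely why pixel size, not megapixel count, drives these metrics.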

  3. The research of relay lens coupling in image intensified camera

    NASA Astrophysics Data System (ADS)

    Sun, Xin; Hu, Bing-liang; Zou, Chun-bo; Bai, Qing-lan; Wang, Le

    2013-08-01

The Image Intensified CCD (ICCD) camera is widely used in the field of low-light-level image detection. The crucial part of an ICCD, the coupling component, which transfers the image between the image intensifier and the detector, significantly affects the final performance of the ICCD camera. There are two means of coupling: a relay lens and an optical fiber taper (OFT). An OFT has the merits of small volume and relatively high coupling efficiency, and is therefore commonly used in portable devices or applications with lower precision demands. However, a relay lens turns out to be a better solution than an OFT for applications with no volume and weight restrictions, since it provides higher resolution, better image-plane uniformity, and manufacturing flexibility. In this paper, we discuss a methodology for high-performance relay lens design and, based on this method, propose a design. The design has three major merits. First, the lens has a large object-space numerical aperture, so the coupling efficiency reaches 5% at a magnification of 0.25. Second, the lens is telecentric on both the object side and the image side; this feature guarantees uniform light collection over the field of view and uniform light reception on the detector plane. Finally, the design can be conveniently optimized to meet the needs of different types of image intensifiers. Moreover, the paper presents a prototype ICCD camera and a series of imaging experiments. The experimental results confirm the validity of the foregoing analysis and optical design.

  4. Inexpensive Neutron Imaging Cameras Using CCDs for Astronomy

    NASA Astrophysics Data System (ADS)

    Hewat, A. W.

    We have developed inexpensive neutron imaging cameras using CCDs originally designed for amateur astronomical observation. The low-light, high resolution requirements of such CCDs are similar to those for neutron imaging, except that noise as well as cost is reduced by using slower read-out electronics. For example, we use the same 2048x2048 pixel "Kodak" KAI-4022 CCD as used in the high performance PCO-2000 CCD camera, but our electronics requires ∼5 sec for full-frame read-out, ten times slower than the PCO-2000. Since neutron exposures also require several seconds, this is not seen as a serious disadvantage for many applications. If higher frame rates are needed, the CCD unit on our camera can be easily swapped for a faster readout detector with similar chip size and resolution, such as the PCO-2000 or the sCMOS PCO.edge 4.2.

  5. Digital camera with apparatus for authentication of images produced from an image file

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L. (Inventor)

    1993-01-01

    A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely based upon the private key that digital data encrypted with the private key by the processor may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating at any time the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match, since even one bit change in the image hash will cause the image hash to be totally different from the secure hash.
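The sign-and-verify scheme the patent describes can be sketched with a hash plus textbook RSA (a toy illustration only: the key below is far too small for any real use, and an actual camera would use a proper cryptographic library and key size):

```python
import hashlib

# Toy RSA keypair (tiny textbook primes, for illustration only).
P_, Q_ = 61, 53
N = P_ * Q_                                # public modulus
E = 17                                     # public exponent
D = pow(E, -1, (P_ - 1) * (Q_ - 1))        # private exponent (in the camera)

def sign_image(image_bytes):
    """Camera side: hash the image file, then encrypt the hash with the
    private key, producing the digital signature stored with the file."""
    h = int.from_bytes(hashlib.sha256(image_bytes).digest(), 'big') % N
    return pow(h, D, N)

def verify_image(image_bytes, signature):
    """Authenticator side: decrypt the signature with the public key and
    compare against a freshly computed hash of the image file."""
    h = int.from_bytes(hashlib.sha256(image_bytes).digest(), 'big') % N
    return pow(signature, E, N) == h
```

As the abstract notes, even a one-bit change to the image file changes its hash completely, so a tampered file no longer matches the decrypted signature.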

  6. Image quality testing of assembled IR camera modules

    NASA Astrophysics Data System (ADS)

    Winters, Daniel; Erichsen, Patrik

    2013-10-01

Infrared (IR) camera modules for the LWIR (8-12 µm) that combine IR imaging optics with microbolometer focal plane array (FPA) sensors and readout electronics are becoming more and more a mass-market product. At the same time, steady improvements in sensor resolution in the higher-priced markets raise the requirements on the imaging performance of objectives and on the proper alignment between objective and FPA. This puts pressure on camera manufacturers and system integrators to assess the image quality of finished camera modules in a cost-efficient and automated way for quality control or during end-of-line testing. In this paper we present recent development work done in the field of image quality testing of IR camera modules. This technology provides a wealth of additional information, in contrast to more traditional test methods like minimum resolvable temperature difference (MRTD), which give only a subjective overall test result. Parameters that can be measured include image quality via the modulation transfer function (MTF), broadband or with various bandpass filters, on- and off-axis, and optical parameters such as effective focal length (EFL) and distortion. If the camera module allows refocusing of the optics, additional parameters like best focus plane, image plane tilt, auto-focus quality, chief ray angle, etc. can be characterized. Additionally, the homogeneity and response of the sensor with the optics can be characterized in order to calculate the appropriate tables for non-uniformity correction (NUC). The technology can also be used to control active alignment during mechanical assembly of optics to high-resolution sensors. Other important points discussed are the flexibility of the technology to test IR modules with different form factors and electrical interfaces and, last but not least, its suitability for fully automated measurements in mass production.

  7. A compact gamma camera for biological imaging

    SciTech Connect

    Bradley, E L; Cella, J; Majewski, S; Popov, V; Qian, Jianguo; Saha, M S; Smith, M F; Weisenberger, A G; Welsh, R E

    2006-02-01

A compact detector, sized particularly for imaging a mouse, is described. The active area of the detector is approximately 46 mm × 96 mm. Two flat-panel Hamamatsu H8500 position-sensitive photomultiplier tubes (PSPMTs) are coupled to a pixellated NaI(Tl) scintillator which views the animal through a copper-beryllium (CuBe) parallel-hole collimator specially designed for ¹²⁵I. Although the PSPMTs have insensitive areas at their edges and there is a physical gap, correcting for scintillation light collection at the junction between the two tubes results in a uniform response across the entire rectangular area of the detector. The system described has been developed to optimize both sensitivity and resolution for in-vivo imaging of small animals injected with iodinated compounds. We demonstrate an in-vivo application of this detector, particularly to SPECT, by imaging mice injected with approximately 10-15 µCi of ¹²⁵I.

  8. Coincidence ion imaging with a fast frame camera

    NASA Astrophysics Data System (ADS)

    Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander H.; Fan, Lin; Li, Wen

    2014-12-01

    A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide semiconductors) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from a fast frame camera through real-time centroiding while the arrival times are obtained from the timing signal of a PMT processed by a high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of a PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real-time at 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragments pair (methyl and iodine cations) produced from strong field dissociative double ionization of methyl iodide.
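The real-time centroiding and the intensity-to-peak-height correlation can be sketched as follows (a minimal illustration assuming simple connected-component centroiding and rank-based matching; the actual system's real-time algorithms are more elaborate):

```python
def centroid_spots(frame, threshold=50):
    """Find connected bright spots in a camera frame and return one
    (x_centroid, y_centroid, total_intensity) tuple per spot."""
    h, w = len(frame), len(frame[0])
    seen = [[False] * w for _ in range(h)]
    spots = []
    for y0 in range(h):
        for x0 in range(w):
            if frame[y0][x0] >= threshold and not seen[y0][x0]:
                # flood-fill the connected component of bright pixels
                stack, pix = [(y0, x0)], []
                seen[y0][x0] = True
                while stack:
                    y, x = stack.pop()
                    pix.append((y, x, frame[y][x]))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and \
                           frame[ny][nx] >= threshold and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                s = sum(i for _, _, i in pix)
                cx = sum(x * i for _, x, i in pix) / s   # intensity-weighted
                cy = sum(y * i for y, _, i in pix) / s   # centroid
                spots.append((cx, cy, s))
    return spots

def pair_with_tof(spots, tof_peaks):
    """Match spots to (time, height) TOF peaks by rank: the brightest
    spot is paired with the tallest peak, and so on."""
    s = sorted(spots, key=lambda t: -t[2])
    p = sorted(tof_peaks, key=lambda t: -t[1])
    return [(spot[:2], peak[0]) for spot, peak in zip(s, p)]
```

Pairing each spot's position with a peak's arrival time is what restores multi-hit time-and-position coincidence from the two separate detectors.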

  9. Coincidence ion imaging with a fast frame camera.

    PubMed

    Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander H; Fan, Lin; Li, Wen

    2014-12-01

    A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide semiconductors) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from a fast frame camera through real-time centroiding while the arrival times are obtained from the timing signal of a PMT processed by a high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of a PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real-time at 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragments pair (methyl and iodine cations) produced from strong field dissociative double ionization of methyl iodide. PMID:25554285

  10. Coincidence electron/ion imaging with a fast frame camera

    NASA Astrophysics Data System (ADS)

    Li, Wen; Lee, Suk Kyoung; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander; Fan, Lin

    2015-05-01

A new time- and position-sensitive particle detection system based on a fast frame CMOS camera is developed for coincidence electron/ion imaging. The system is composed of three major components: a conventional microchannel plate (MCP)/phosphor screen electron/ion imager, a fast frame CMOS camera, and a high-speed digitizer. The system collects the positional information of ions/electrons from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the MCPs processed by a high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of electron/ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragment pair (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide. We further show that a time resolution of 30 ps can be achieved when measuring the electron TOF spectrum, which enables the new system to achieve good energy resolution along the TOF axis.

  11. Coincidence ion imaging with a fast frame camera

    SciTech Connect

    Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander H.; Fan, Lin; Li, Wen

    2014-12-15

    A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide semiconductors) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from a fast frame camera through real-time centroiding while the arrival times are obtained from the timing signal of a PMT processed by a high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of a PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real-time at 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragments pair (methyl and iodine cations) produced from strong field dissociative double ionization of methyl iodide.

  12. Laser Imaging Video Camera Sees Through Fire, Fog, Smoke

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Under a series of SBIR contracts with Langley Research Center, inventor Richard Billmers refined a prototype for a laser imaging camera capable of seeing through fire, fog, smoke, and other obscurants. Now, Canton, Ohio-based Laser Imaging through Obscurants (LITO) Technologies Inc. is demonstrating the technology as a perimeter security system at Glenn Research Center and planning its future use in aviation, shipping, emergency response, and other fields.

  13. Radiometric cloud imaging with an uncooled microbolometer thermal infrared camera.

    PubMed

    Shaw, Joseph; Nugent, Paul; Pust, Nathan; Thurairajah, Brentha; Mizutani, Kohei

    2005-07-25

    An uncooled microbolometer-array thermal infrared camera has been incorporated into a remote sensing system for radiometric sky imaging. The radiometric calibration is validated and improved through direct comparison with spectrally integrated data from the Atmospheric Emitted Radiance Interferometer (AERI). With the improved calibration, the Infrared Cloud Imager (ICI) system routinely obtains sky images with radiometric uncertainty less than 0.5 W/(m² sr) for extended deployments in challenging field environments. We demonstrate the infrared cloud imaging technique with still and time-lapse imagery of clear and cloudy skies, including stratus, cirrus, and wave clouds. PMID:19498585

  14. Digital Camera with Apparatus for Authentication of Images Produced from an Image File

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L. (Inventor)

    1996-01-01

    A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely related to the private key that digital data encrypted with the private key may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The authenticating apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match. Other techniques to address time-honored methods of deception, such as attaching false captions or inducing forced perspectives, are included.
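The hash-then-sign-then-verify flow of the patent can be illustrated compactly. Note that this sketch substitutes an HMAC over a shared secret for the patent's asymmetric private/public key pair (the Python standard library has no RSA), so it shows the authentication logic only, not the actual cryptosystem; all names and the key material are hypothetical:

```python
import hashlib
import hmac

# Stand-in for the patent's embedded private key: a shared secret.
# A real implementation would sign the hash with an asymmetric private key
# and verify with the matching public key.
CAMERA_KEY = b"embedded-camera-secret"  # hypothetical key material

def sign_image(image_bytes):
    """Camera side: hash the image file, then 'sign' the hash."""
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(CAMERA_KEY, digest, hashlib.sha256).hexdigest()

def verify_image(image_bytes, signature):
    """Verifier side: recompute the image hash, re-derive the expected
    signature, and compare; any alteration of the image breaks the match."""
    digest = hashlib.sha256(image_bytes).digest()
    expected = hmac.new(CAMERA_KEY, digest, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

The essential property carried over from the patent is that the verifier never needs the original image, only the file and its stored signature.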

  15. Sloan Digital Sky Survey imaging camera: design and performance

    NASA Astrophysics Data System (ADS)

    Rockosi, Constance M.; Gunn, James E.; Carr, Michael A.; Sekiguchi, Masaki; Ivezic, Zeljko; Munn, Jeffrey A.

    2002-12-01

    The Sloan Digital Sky Survey (SDSS) imaging camera saw first light in May 1998, and has been in regular operation since the start of the survey in April 2000. We review here key elements in the design of the instrument driven by the specific goals of the survey, and discuss some of the operational issues involved in keeping the instrument ready to observe at all times and in monitoring its performance. We present data on the mechanical and photometric stability of the camera, using on-sky survey data as collected and processed to date.

  16. Innovative Camera and Image Processing System to Characterize Cryospheric Changes

    NASA Astrophysics Data System (ADS)

    Schenk, A.; Csatho, B. M.; Nagarajan, S.

    2010-12-01

    The polar regions play an important role in Earth’s climatic and geodynamic systems. Digital photogrammetric mapping provides a means for monitoring the dramatic changes observed in the polar regions during the past decades. High-resolution, photogrammetrically processed digital aerial imagery provides complementary information to surface measurements obtained by laser altimetry systems. While laser points accurately sample the ice surface, stereo images allow for the mapping of features such as crevasses, flow bands, shear margins, moraines, leads, and different types of sea ice. Tracking features in repeat images produces a dense velocity vector field that can either serve as validation for interferometrically derived surface velocities or constitute a stand-alone product. A multi-modal photogrammetric platform consists of one or more high-resolution commercial color cameras, a GPS and inertial navigation system, and an optional laser scanner. Such a system, using a Canon EOS-1DS Mark II camera, was first flown on the IceBridge missions in Fall 2009 and Spring 2010, capturing hundreds of thousands of images at a frame interval of about one second. While digital images and videos have long been used for visual inspection, precise 3D measurements with low-cost commercial cameras require special photogrammetric treatment that only became available recently. Calibrating the multi-camera imaging system and geo-referencing the images are absolute prerequisites for all subsequent applications. Commercial cameras are inherently non-metric; that is, their sensor model is only approximately known. Since these cameras are not as rugged as photogrammetric cameras, the interior orientation also changes due to temperature and pressure changes and aircraft vibration, resulting in large errors in 3D measurements. It is therefore necessary to calibrate the cameras frequently, at least whenever the system is newly installed.
Geo-referencing the images is performed by the Applanix navigation system. Our new method enables a 3D reconstruction of the ice sheet surface with high accuracy and unprecedented detail, as demonstrated by examples from the Antarctic Peninsula acquired by the IceBridge mission. Repeat digital imaging also provides data for determining surface elevation changes and velocities, which are critical parameters for ice sheet models. Although these methods work well, there are known problems with satellite images and traditional area-based matching, especially over rapidly changing outlet glaciers. To take full advantage of the high-resolution, repeat stereo imaging, we have developed a new method. The processing starts with the generation of a DEM from geo-referenced stereo images of the first time epoch. The next step extracts and matches interest points in object space. Since an interest point moves its spatial position between two time epochs, such points are only radiometrically conjugate, not geometrically. In fact, the geometric displacement of two identical points, together with the time difference, yields velocities. We computed the evolution of the velocity field and surface topography on the floating tongue of Jakobshavn glacier from historical stereo aerial photographs to illustrate the approach.

  17. Characterization of a PET Camera Optimized for Prostate Imaging

    SciTech Connect

    Huber, Jennifer S.; Choong, Woon-Seng; Moses, William W.; Qi, Jinyi; Hu, Jicun; Wang, G. C.; Wilson, David; Oh, Sang; Huesman, Ronald H.; Derenzo, Stephen E.

    2005-11-11

    We present the characterization of a positron emission tomograph for prostate imaging that centers a patient between a pair of external curved detector banks (ellipse: 45 cm minor axis, 70 cm major axis). The distance between the detector banks adjusts to allow patient access and to position the detectors as closely as possible for maximum sensitivity with patients of various sizes. Each bank is composed of two axial rows of 20 HR+ block detectors, for a total of 80 detectors in the camera. The individual detectors are angled in the transaxial plane to point towards the prostate to reduce resolution degradation in that region. The detectors are read out by modified HRRT data acquisition electronics. Compared to a standard whole-body PET camera, our dedicated prostate camera has the same sensitivity and resolution, less background (fewer randoms and a lower scatter fraction) and a lower cost. We have completed construction of the camera. Characterization data and reconstructed images of several phantoms are shown. Sensitivity for a point source in the center is 946 cps/μCi. Spatial resolution is 4 mm FWHM in the central region.

  18. New high spatial resolution portable camera in medical imaging

    NASA Astrophysics Data System (ADS)

    Trotta, C.; Massari, R.; Palermo, N.; Scopinaro, F.; Soluri, A.

    2007-07-01

    In recent years, many studies have been carried out on portable gamma cameras in order to optimize a device for medical imaging. In this paper, we present a new type of gamma camera for low-energy detection, based on a Hamamatsu Flat Panel H8500 position-sensitive photomultiplier tube and an innovative technique based on CsI(Tl) scintillation crystals inserted into the square holes of a tungsten collimator. The geometrical features of this collimator-scintillator structure, which affect the camera's spatial resolution and sensitivity, were chosen to offer optimal performance in clinical functional examinations. Detector sensitivity, energy resolution and spatial resolution were measured, and the acquired image quality was evaluated with particular attention to the pixel identification capability. This lightweight (about 2 kg) portable gamma camera was developed thanks to a miniaturized resistive chain electronic readout combined with a dedicated compact 4-channel ADC board. This data acquisition board, designed by our research group, showed excellent performance compared to a commercial PCI 6110E card (National Instruments) in terms of sampling period and additional on-board operations for data pre-processing.

  19. Camera assembly design proposal for SRF cavity image collection

    SciTech Connect

    Tuozzolo, S.

    2011-10-10

    This project seeks to collect images from the inside of a superconducting radio frequency (SRF) large-grain niobium cavity during vertical testing. These images will provide information on multipacting and other phenomena occurring in the SRF cavity during these tests. Multipacting, a process that involves an electron buildup in the cavity and a concurrent loss of RF power, is thought to be occurring near the cathode in the SRF structure. Images of electron emission in the structure will help diagnose the source of multipacting in the cavity. Multipacting sources may be eliminated by altering the geometric or resonant conditions in the SRF structure. Other phenomena, including unexplained light emissions previously discovered at SLAC, may be present in the cavity. In order to effectively capture images of these events during testing, a camera assembly needs to be installed at the bottom of the RF structure. The SRF assembly operates under extreme environmental conditions: it is kept in a dewar in a bath of 2 K liquid helium during these tests, is pumped down to ultra-high vacuum, and is subjected to RF voltages. Because of this, the camera needs to exist as a separate assembly attached to the bottom of the cavity. The design of the camera is constrained by a number of factors that are discussed.

  20. Refocusing images and videos with a conventional compact camera

    NASA Astrophysics Data System (ADS)

    Kang, Lai; Wu, Lingda; Wei, Yingmei; Song, Hanchen; Yang, Zheng

    2015-03-01

    Digital refocusing is an interesting and useful tool for generating dynamic depth-of-field (DOF) effects in many types of photography, such as portraits and creative photography. Since most existing digital refocusing methods rely on a four-dimensional light field captured by special, precisely manufactured devices or on a sequence of images captured by a single camera, existing systems are either too expensive for wide practical use or incapable of handling dynamic scenes. We present a low-cost approach for refocusing high-resolution (up to 8 megapixel) images and videos based on a single shot using an easy-to-build camera-mirror stereo system. Our proposed method consists of four main steps, namely system calibration, image rectification, disparity estimation, and refocusing rendering. The effectiveness of our proposed method has been evaluated extensively using both static and dynamic scenes with various depth ranges. Promising experimental results demonstrate that our method is able to simulate various controllable, realistic DOF effects. To the best of our knowledge, our method is the first that allows one to refocus high-resolution images and videos of dynamic scenes captured by a conventional compact camera.
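A layered approximation of the final refocusing-rendering step (step 4) might look like the following. The blur model, the blur_gain parameter, and the per-disparity-layer Gaussian are illustrative assumptions; the paper does not detail its rendering:

```python
import numpy as np
from scipy import ndimage

def refocus(image, disparity, focus_disparity, blur_gain=2.0):
    """Synthetic depth-of-field rendering: pixels whose estimated disparity
    is far from the chosen focal plane receive proportionally stronger blur.
    A simple layered approximation using one Gaussian blur per disparity level."""
    img = image.astype(float)
    out = np.zeros_like(img)
    for d in np.unique(disparity):
        sigma = blur_gain * abs(float(d) - focus_disparity)
        # sigma == 0 means this layer is in focus: keep it sharp.
        blurred = img if sigma == 0 else ndimage.gaussian_filter(img, sigma)
        mask = disparity == d
        out[mask] = blurred[mask]
    return out
```

In the full pipeline the disparity map would come from step 3 (stereo disparity estimation on the rectified camera-mirror pair); here it is simply an input array.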

  1. Imaging tissues with a polarized light video camera

    NASA Astrophysics Data System (ADS)

    Jacques, Steven L.; Lee, Kenneth

    1999-09-01

    A method for imaging the superficial epidermal and papillary dermal layers of the skin is needed when assessing many skin lesions. We have developed an imaging modality using a video camera whose mechanism of contrast is the reflectance of polarized light from superficial skin. By selecting only polarized light to create the image, one rejects the large amount of diffusely reflected light from the deeper dermis. The specular reflectance (or glare) from the skin surface is also avoided in the setup. The resulting polarization picture maximally accents the details of the superficial layer of the skin and removes the effects of melanin pigmentation from the image. For example, freckles simply disappear and nevi lose their dark pigmentation to reveal the details of abnormal cellular growth. An initial clinical study demonstrated that the polarization camera could identify the margins of a sclerosing basal cell carcinoma, while the doctor's unaided eye underestimated them: the camera identified an 11-mm-diameter lesion where the unaided eye identified a 6-mm-diameter lesion.

  2. Engineering design criteria for an image intensifier/image converter camera

    NASA Technical Reports Server (NTRS)

    Sharpsteen, J. T.; Lund, D. L.; Stoap, L. J.; Solheim, C. D.

    1976-01-01

    The design, display, and evaluation of an image intensifier/image converter camera which can be utilized in various space shuttle experiments are described. An image intensifier tube was utilized in combination with two brassboards as a power supply and used for evaluation of night photography in the field. Pictures were obtained showing field details which would have been indistinguishable to the naked eye or to an ordinary camera.

  3. Scalar wave-optical reconstruction of plenoptic camera images.

    PubMed

    Junker, André; Stenau, Tim; Brenner, Karl-Heinz

    2014-09-01

    We investigate the reconstruction of plenoptic camera images in a scalar wave-optical framework. Previous publications relating to this topic numerically simulate light propagation on the basis of ray tracing. However, due to the continuing miniaturization of hardware components, it can be assumed that, in combination with low-aperture optical systems, this technique may not be generally valid. Therefore, we study the differences between ray- and wave-optical object reconstructions of true plenoptic camera images. For this purpose we present a wave-optical reconstruction algorithm which can be run on a regular computer. Our findings show that a wave-optical treatment is capable of increasing the detail resolution of reconstructed objects. PMID:25321378

  4. A laser-based multiformat camera for medical imaging.

    PubMed

    Sanders, J N; Cattell, C L; Bender, N E; Tesic, M M; Sones, R A

    1986-01-01

    The spatial resolution and contrast resolution required of a multiformat camera (MFC) for medical imaging are discussed. A typical cathode-ray tube (CRT) MFC and a prototype laser MFC are compared based on the following measured quantities: line spread function and associated contrast transfer function, noise characteristics, intensity transfer function (dynamic range), large-area contrast, and film irradiance. The laser MFC is found to provide significantly better performance than the CRT MFC in all of these areas. PMID:3951414

  5. A Comparative Study of Microscopic Images Captured by a Box Type Digital Camera Versus a Standard Microscopic Photography Camera Unit

    PubMed Central

    Desai, Nandini J.; Gupta, B. D.; Patel, Pratik Narendrabhai

    2014-01-01

    Introduction: Obtaining images of slides viewed by a microscope can be invaluable for both diagnosis and teaching. They can be transferred among technologically advanced hospitals for further consultation and evaluation. But a standard microscopic photography camera unit (MPCU) (MIPS: Microscopic Image Projection System) is costly and not available in resource-poor settings. The aim of our endeavour was to find a comparable and cheaper alternative method for photomicrography. Materials and Methods: We used a NIKON Coolpix S6150 camera (box-type digital camera) with an Olympus CH20i microscope and a fluorescence microscope for the purpose of this study. Results: We obtained comparable results when capturing light microscopy images, but the results were less satisfactory for fluorescence microscopy. Conclusion: A box-type digital camera is a comparable, less expensive and convenient alternative to a microscopic photography camera unit. PMID:25478350

  6. Camera system resolution and its influence on digital image correlation

    SciTech Connect

    Reu, Phillip L.; Sweatt, William; Miller, Timothy; Fleming, Darryn

    2014-09-21

    Digital image correlation (DIC) uses images from a camera and lens system to make quantitative measurements of the shape, displacement, and strain of test objects. Despite the method's increasing popularity, little research has addressed the influence of imaging system resolution on DIC results. This paper investigates the entire imaging system and studies how both the camera and lens resolution influence the DIC results as a function of the system Modulation Transfer Function (MTF). It shows that when making spatial resolution decisions (including speckle size), the resolution-limiting component should be considered. A consequence of the loss of spatial resolution is that the DIC uncertainties increase. This is demonstrated using both synthetic and experimental images with varying resolution. The loss of image resolution and DIC accuracy can be compensated for by increasing the subset size or, better, by increasing the speckle size. The speckle size and spatial resolution are then a function of the lens resolution rather than, as is more typically assumed, the pixel size. The study demonstrates the tradeoffs associated with limited lens resolution.

  7. Camera system resolution and its influence on digital image correlation

    DOE PAGESBeta

    Reu, Phillip L.; Sweatt, William; Miller, Timothy; Fleming, Darryn

    2014-09-21

    Digital image correlation (DIC) uses images from a camera and lens system to make quantitative measurements of the shape, displacement, and strain of test objects. Despite the method's increasing popularity, little research has addressed the influence of imaging system resolution on DIC results. This paper investigates the entire imaging system and studies how both the camera and lens resolution influence the DIC results as a function of the system Modulation Transfer Function (MTF). It shows that when making spatial resolution decisions (including speckle size), the resolution-limiting component should be considered. A consequence of the loss of spatial resolution is that the DIC uncertainties increase. This is demonstrated using both synthetic and experimental images with varying resolution. The loss of image resolution and DIC accuracy can be compensated for by increasing the subset size or, better, by increasing the speckle size. The speckle size and spatial resolution are then a function of the lens resolution rather than, as is more typically assumed, the pixel size. The study demonstrates the tradeoffs associated with limited lens resolution.

  8. LROC WAC 100 Meter Scale Photometrically Normalized Map of the Moon

    NASA Astrophysics Data System (ADS)

    Boyd, A. K.; Nuno, R. G.; Robinson, M. S.; Denevi, B. W.; Hapke, B. W.

    2013-12-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) monthly global observations allowed derivation of a robust empirical photometric solution over a broad range of incidence, emission, and phase (i, e, g) angles. Combining the WAC stereo-based GLD100 [1] digital terrain model (DTM) and LOLA polar DTMs [2] enabled precise topographic corrections to photometric angles. Over 100,000 WAC observations at 643 nm were calibrated to reflectance (I/F). Photometric angles (i, e, g), latitude, and longitude were calculated and stored for each WAC pixel. The 6-dimensional data set was then reduced to 3 dimensions by photometrically normalizing I/F with a global solution similar to [3]. The global solution was calculated from three 2x2 tiles centered on (1N, 147E), (45N, 147E), and (89N, 147E), and included over 40 million WAC pixels. A least-squares fit to a multivariate polynomial of degree 4, f(i,e,g), was performed, and the result was the starting point for a minimum search solving the non-linear function min[(1 - (I/F)/f(i,e,g))²]. The input pixels were filtered to incidence angles (calculated from topography) < 89° and I/F greater than a minimum threshold to avoid shadowed pixels, and the output normalized I/F values were gridded into an equal-area map projection at 100 meters/pixel. At each grid location the median, standard deviation, and count of valid pixels were recorded. The normalized reflectance map is the median of all normalized WAC pixels overlapping each specific 100-m grid cell, with an average of 86 WAC normalized I/F estimates per cell [3]. The resulting photometrically normalized mosaic provides the means to accurately compare I/F values for different regions on the Moon (see Nuno et al. [4]). The subtle differences in normalized I/F can now be traced across the local topography at regions that are illuminated at any point during the LRO mission (while the WAC was imaging), including at polar latitudes. 
This continuous map of reflectance at 643 nm, normalized to a standard geometry of i=30°, e=0°, g=30°, ranges from 0.036 to 0.36 (0.01%-99.99% of the histogram) with a global mean reflectance of 0.115. Immature rays of Copernican craters are typically >0.14 and maria are typically <0.07, with averages for individual maria ranging from 0.046 to 0.060. The materials with the lowest normalized reflectance on the Moon are pyroclastic deposits at Sinus Aestuum (<0.036), and those with the highest normalized reflectance are found on steep crater walls (>0.36) [4]. 1. Scholten et al. (2012) J. Geophys. Res., 117, doi:10.1029/2011JE003926. 2. Smith et al. (2010) Geophys. Res. Lett., 37, L18204, doi:10.1029/2010GL043751. 3. Boyd et al. (2012) LPSC XLIII, #2795. 4. Nuno et al., AGU (this conference).
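The least-squares polynomial fit and the subsequent normalization to the standard geometry can be sketched as below. The term ordering, degree handling, and function names are illustrative assumptions; the real pipeline additionally performs the non-linear minimum search and the shadow filtering described above:

```python
import numpy as np

def design_matrix(i, e, g, degree=4):
    """Multivariate polynomial terms i^a * e^b * g^c with a + b + c <= degree."""
    cols = [i**a * e**b * g**c
            for a in range(degree + 1)
            for b in range(degree + 1 - a)
            for c in range(degree + 1 - a - b)]
    return np.column_stack(cols)

def fit_photometric_model(i, e, g, refl, degree=4):
    """Least-squares fit of f(i, e, g) to observed I/F -- the starting point
    for the non-linear minimum search described in the abstract."""
    A = design_matrix(i, e, g, degree)
    coeffs, *_ = np.linalg.lstsq(A, refl, rcond=None)
    return coeffs

def normalize(i, e, g, refl, coeffs, i0=30.0, e0=0.0, g0=30.0, degree=4):
    """Normalize observed I/F to the standard geometry i=30, e=0, g=30."""
    f_obs = design_matrix(i, e, g, degree) @ coeffs
    f_std = design_matrix(np.array([i0]), np.array([e0]),
                          np.array([g0]), degree) @ coeffs
    return refl * (f_std[0] / f_obs)
```

For pixels whose reflectance follows the fitted model, the normalized values collapse to the model's value at the standard geometry, which is what makes different regions directly comparable.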

  9. First Halley multicolour camera imaging results from Giotto

    NASA Technical Reports Server (NTRS)

    Keller, H. U.; Arpigny, C.; Barbieri, C.; Bonnet, R. M.; Cazes, S.

    1986-01-01

    The Giotto spacecraft's Halley Multicolour Camera has furnished flyby images centered on the brightest part of the inner coma; these show the silhouette of a large, solid and irregularly shaped cometary nucleus and jet-like dust activity. The preliminary assessment of these data has yielded information on the dimensions and shape of the nucleus and on dust emission activity. It is noted that only minor parts of the surface are active, with most of the surface being covered by a non-volatile material. Dust jets dominate the inner coma and are restricted to the subsolar hemisphere.

  10. An efficient image compressor for charge coupled devices camera.

    PubMed

    Li, Jin; Xing, Fei; You, Zheng

    2014-01-01

    Recently, discrete wavelet transform (DWT) based compressors, such as JPEG2000 and CCSDS-IDC, have been widely seen as the state-of-the-art compression schemes for charge coupled device (CCD) cameras. However, CCD images projected onto the DWT basis produce a large number of large-amplitude high-frequency coefficients, because these images contain a large amount of complex texture and contour information, which is a disadvantage for the subsequent coding. In this paper, we propose a low-complexity posttransform coupled with compressive sensing (PT-CS) compression approach for remote sensing images. First, the DWT is applied to the remote sensing image. Then, a posttransform from a pair of bases is applied to the DWT coefficients. The pair of bases comprises the DCT basis and the Hadamard basis, which are used at high and low bit rates, respectively. The best posttransform is selected by an lp-norm-based approach. The posttransform is considered as the sparse representation stage of CS. The posttransform coefficients are resampled by a sensing measurement matrix. Experimental results on on-board CCD camera images show that the proposed approach significantly outperforms the CCSDS-IDC-based coder, and its performance is comparable to that of JPEG2000 at low bit rates without the excessive implementation complexity of JPEG2000. PMID:25114977
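The post-transform selection step, choosing between a DCT basis and a Hadamard basis by an lp norm on the transformed coefficients, can be sketched as follows. The block size, the value of p, and the separable 2-D application are assumptions not fixed by the abstract:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are frequency vectors)."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] *= 1 / np.sqrt(n)
    M[1:] *= np.sqrt(2 / n)
    return M

def hadamard_matrix(n):
    """Orthonormal Hadamard matrix, n a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

def best_posttransform(block, p=0.7):
    """Apply each candidate basis to a block of DWT coefficients and keep the
    one whose coefficients have the smaller lp 'norm', i.e. the sparser one."""
    candidates = {}
    for name, M in (("dct", dct_matrix(block.shape[0])),
                    ("hadamard", hadamard_matrix(block.shape[0]))):
        coeffs = M @ block @ M.T           # separable 2-D transform
        candidates[name] = (np.sum(np.abs(coeffs) ** p), coeffs)
    return min(candidates.items(), key=lambda kv: kv[1][0])
```

Because both matrices are orthonormal, either choice preserves the block's energy; the lp criterion (p < 1) simply rewards the representation that concentrates that energy into fewer coefficients.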

  11. An Efficient Image Compressor for Charge Coupled Devices Camera

    PubMed Central

    Li, Jin; Xing, Fei; You, Zheng

    2014-01-01

    Recently, discrete wavelet transform (DWT) based compressors, such as JPEG2000 and CCSDS-IDC, have been widely seen as the state-of-the-art compression schemes for charge coupled device (CCD) cameras. However, CCD images projected onto the DWT basis produce a large number of large-amplitude high-frequency coefficients, because these images contain a large amount of complex texture and contour information, which is a disadvantage for the subsequent coding. In this paper, we propose a low-complexity posttransform coupled with compressive sensing (PT-CS) compression approach for remote sensing images. First, the DWT is applied to the remote sensing image. Then, a posttransform from a pair of bases is applied to the DWT coefficients. The pair of bases comprises the DCT basis and the Hadamard basis, which are used at high and low bit rates, respectively. The best posttransform is selected by an lp-norm-based approach. The posttransform is considered as the sparse representation stage of CS. The posttransform coefficients are resampled by a sensing measurement matrix. Experimental results on on-board CCD camera images show that the proposed approach significantly outperforms the CCSDS-IDC-based coder, and its performance is comparable to that of JPEG2000 at low bit rates without the excessive implementation complexity of JPEG2000. PMID:25114977

  12. Parallel phase-sensitive three-dimensional imaging camera

    DOEpatents

    Smithpeter, Colin L. (Albuquerque, NM); Hoover, Eddie R. (Sandia Park, NM); Pain, Bedabrata (Los Angeles, CA); Hancock, Bruce R. (Altadena, CA); Nellums, Robert O. (Albuquerque, NM)

    2007-09-25

    An apparatus is disclosed for generating a three-dimensional (3-D) image of a scene illuminated by a pulsed light source (e.g. a laser or light-emitting diode). The apparatus, referred to as a phase-sensitive 3-D imaging camera utilizes a two-dimensional (2-D) array of photodetectors to receive light that is reflected or scattered from the scene and processes an electrical output signal from each photodetector in the 2-D array in parallel using multiple modulators, each having inputs of the photodetector output signal and a reference signal, with the reference signal provided to each modulator having a different phase delay. The output from each modulator is provided to a computational unit which can be used to generate intensity and range information for use in generating a 3-D image of the scene. The 3-D camera is capable of generating a 3-D image using a single pulse of light, or alternately can be used to generate subsequent 3-D images with each additional pulse of light.
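For a concrete sense of how phase-delayed modulator outputs yield range, here is a standard four-phase homodyne demodulation sketch. The patent describes a general multi-modulator scheme; the four-sample arctangent formula below is one common special case, not necessarily the patented method:

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def range_from_phases(a0, a90, a180, a270, mod_freq_hz):
    """Four-phase homodyne demodulation: the four modulator outputs sample
    the correlation of the returned light with reference signals delayed by
    0, 90, 180 and 270 degrees; the recovered phase encodes round-trip time."""
    phase = np.arctan2(a90 - a270, a0 - a180) % (2.0 * np.pi)
    return C * phase / (4.0 * np.pi * mod_freq_hz)
```

Note that with a single modulation frequency the range is only unambiguous up to C / (2 * mod_freq_hz); at 20 MHz that is about 7.5 m.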

  13. Goal-oriented rectification of camera-based document images.

    PubMed

    Stamatopoulos, Nikolaos; Gatos, Basilis; Pratikakis, Ioannis; Perantonis, Stavros J

    2011-04-01

    Document digitization with either flatbed scanners or camera-based systems results in document images which often suffer from warping and perspective distortions that deteriorate the performance of current OCR approaches. In this paper, we present a goal-oriented rectification methodology to compensate for undesirable document image distortions, aiming to improve the OCR result. Our approach relies upon a coarse-to-fine strategy. First, a coarse rectification is accomplished with the aid of a computationally low-cost transformation which addresses the projection of a curved surface to a 2-D rectangular area. The projection of the curved surface onto the plane is guided only by the appearance of the textual content in the document image, using a transformation which does not depend on specific model primitives or camera setup parameters. Second, pose normalization is applied at the word level, aiming to restore all the local distortions of the document image. Experimental results on various document images with a variety of distortions demonstrate the robustness and effectiveness of the proposed rectification methodology, using a consistent evaluation methodology that accounts for OCR accuracy together with a newly introduced measure obtained by a semi-automatic procedure. PMID:20876019

  14. Quantifying biodiversity using digital cameras and automated image analysis.

    NASA Astrophysics Data System (ADS)

    Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.

    2009-04-01

    Monitoring the effects on biodiversity of extensive grazing in complex semi-natural habitats is labour intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time-consuming aspect - the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and the amount of information being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the North of England at an average altitude of over 600 m. Rainfall is high, and in most areas the soil consists of deep peat (1 m to 3 m), populated by a mix of heather, mosses and sedges. The cameras have been in continuous operation over a 6-month period; daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial intelligence based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected, and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets of the collected data. By converting digital image data into statistical composite data, it can be handled in a similar way to other biodiversity statistics, thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered solutions to simplifying the process. 
Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention for only those images containing rare animals or unusual (undecidable) conditions, and enabling automatic deletion of images generated by erroneous triggering (e.g. cloud movements). This is the first step to a hierarchical image processing framework, where situation subclasses such as birds or climatic conditions can be fed into more appropriate automated or semi-automated data mining software.
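    The alert-state idea described above can be sketched with simple image statistics; the thresholding scheme, function names, and parameters below are illustrative assumptions, not the authors' implementation:

```python
# Illustrative sketch (not the authors' code): flag "unusual" trail-camera
# frames by comparing simple frame statistics against a baseline, mimicking
# the alert states used to catch erroneous triggers such as cloud movement.

def frame_stats(pixels):
    """Mean and variance of a flat list of gray-level values."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    return mean, var

def alert_state(pixels, baseline_mean, baseline_var, k=3.0):
    """True when the frame mean deviates more than k standard deviations
    from the baseline, marking it for human review or automatic deletion."""
    mean, _ = frame_stats(pixels)
    return abs(mean - baseline_mean) > k * baseline_var ** 0.5
```

    In a full system the baseline would be updated per camera and per time of day; here it is a fixed pair of numbers for clarity.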

  15. CMOS image sensor noise reduction method for image signal processor in digital cameras and camera phones

    NASA Astrophysics Data System (ADS)

    Yoo, Youngjin; Lee, SeongDeok; Choe, Wonhee; Kim, Chang-Yong

    2007-02-01

Digital images captured from CMOS image sensors suffer from Gaussian noise and impulsive noise. To efficiently reduce the noise in the Image Signal Processor (ISP), we analyze the noise characteristics along the imaging pipeline of the ISP, where the noise reduction algorithm is performed. Gaussian and impulsive noise reduction methods are proposed for proper ISP implementation in the Bayer domain. The proposed method takes advantage of the analyzed noise characteristics to calculate noise reduction filter coefficients, so noise is adaptively reduced according to the scene environment. Since noise is amplified and its characteristics change as the image sensor signal undergoes several image processing steps, it is better to remove noise at an earlier stage of the imaging pipeline; noise reduction is therefore carried out in the Bayer domain. The method is tested on the imaging pipeline of the ISP with images captured from a Samsung 2M CMOS image sensor test module. The experimental results show that the proposed method removes noise while effectively preserving edges.
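    A minimal sketch of intensity-adaptive filtering on same-color Bayer neighbors, assuming a generic signal-dependent noise model; the gain and read-noise figures are illustrative, not the paper's coefficients:

```python
import math

# Hedged sketch of noise-adaptive filtering in the Bayer domain: the filter
# weight for each same-color neighbor shrinks when the difference from the
# center exceeds the locally expected noise level, preserving edges.

def noise_sigma(intensity, gain=0.5, read_noise=2.0):
    """Signal-dependent noise model: shot noise grows with intensity.
    The gain and read-noise values are illustrative assumptions."""
    return math.sqrt(gain * intensity + read_noise ** 2)

def filtered_value(center, same_color_neighbors):
    """Weighted average of a Bayer pixel with its same-color neighbors;
    weights fall off with difference relative to the expected noise."""
    sigma = noise_sigma(center)
    weights = [math.exp(-((n - center) ** 2) / (2 * sigma ** 2))
               for n in same_color_neighbors]
    total = sum(weights) + 1.0  # center pixel has weight 1
    return (center + sum(w * n for w, n in zip(weights, same_color_neighbors))) / total
```

    Flat regions are averaged (noise removed) while neighbors across an edge receive near-zero weight, which is the edge-preserving behavior the abstract reports.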

  16. The role of camera-bundled image management software in the consumer digital imaging value chain

    NASA Astrophysics Data System (ADS)

    Mueller, Milton; Mundkur, Anuradha; Balasubramanian, Ashok; Chirania, Virat

    2005-02-01

This research was undertaken by the Convergence Center at the Syracuse University School of Information Studies (www.digital-convergence.info). Project ICONICA, the name for the research, focuses on the strategic implications of digital Images and the CONvergence of Image management and image CApture. Consumer imaging - the activity that we once called "photography" - is now recognized as being in the throes of a digital transformation. At the end of 2003, market researchers estimated that about 30% of the households in the U.S. and 40% of the households in Japan owned digital cameras. In 2004, of the 86 million new cameras sold (excluding one-time use cameras), a majority (56%) were estimated to be digital cameras. Sales of photographic film, while still profitable, are declining precipitously.

  17. PIV camera response to high frequency signal: comparison of CCD and CMOS cameras using particle image simulation

    NASA Astrophysics Data System (ADS)

    Abdelsalam, D. G.; Stanislas, M.; Coudert, S.

    2014-08-01

We present a quantitative comparison between the responses of FlowMaster3 CCD and Phantom V9.1 CMOS cameras in the scope of application to particle image velocimetry (PIV). First, the subpixel response is characterized using a specifically designed set-up, and the crosstalk between adjacent pixels of the two cameras is estimated and compared. The camera response is then characterized experimentally using particle image simulation. Based on a three-point Gaussian peak fit, the bias and RMS errors between the locations of simulated and real images for the two cameras are accurately calculated using a homemade program. The results show that, although the pixel response is not perfect, the optical crosstalk between adjacent pixels stays relatively low and the accuracy of the position determination of an ideal PIV particle image is much better than expected.
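    The three-point Gaussian peak fit named in the abstract is a standard subpixel estimator in PIV; a sketch of the textbook form (the paper's program presumably adds the 2D and error-analysis machinery):

```python
import math

# Standard three-point Gaussian peak fit: fit a parabola to the logs of the
# three intensities around the discrete maximum; exact for a Gaussian peak.

def gaussian_subpixel_peak(i, I_m, I_0, I_p):
    """Subpixel peak position from intensities at pixels i-1, i, i+1."""
    lm, l0, lp = math.log(I_m), math.log(I_0), math.log(I_p)
    return i + 0.5 * (lm - lp) / (lm - 2 * l0 + lp)
```

    Because the log of a Gaussian is a parabola, the estimator recovers the true center of a noise-free Gaussian particle image exactly.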

  18. Imaging of Venus from Galileo - Early results and camera performance

    NASA Technical Reports Server (NTRS)

    Belton, M. J. S.; Gierasch, P.; Klaasen, K. P.; Anger, C. D.; Carr, M. H.; Chapman, C. R.; Davies, M. E.; Greeley, R.; Greenberg, R.; Head, J. W.

    1992-01-01

Three images of Venus have been returned so far by the Galileo spacecraft following an encounter with the planet on UT February 10, 1990. The images, taken at effective wavelengths of 4200 and 9900 Å, characterize the global motions and distribution of haze near the Venus cloud tops and, at the latter wavelength, deep within the main cloud. Previously undetected markings are clearly seen in the near-infrared image. The global distribution of these features, which have maximum contrasts of 3 percent, is different from that recorded at short wavelengths. In particular, the 'polar collar', which is omnipresent in short wavelength images, is absent at 9900 Å. The maximum contrast in the features at 4200 Å is about 20 percent. The optical performance of the camera is described and is judged to be nominal.

  19. Imaging of Venus from Galileo: Early results and camera performance

    USGS Publications Warehouse

    Belton, M.J.S.; Gierasch, P.; Klaasen, K.P.; Anger, C.D.; Carr, M.H.; Chapman, C.R.; Davies, M.E.; Greeley, R.; Greenberg, R.; Head, J.W.; Neukum, G.; Pilcher, C.B.; Veverka, J.; Fanale, F.P.; Ingersoll, A.P.; Pollock, J.B.; Morrison, D.; Clary, M.C.; Cunningham, W.; Breneman, H.

    1992-01-01

Three images of Venus have been returned so far by the Galileo spacecraft following an encounter with the planet on UT February 10, 1990. The images, taken at effective wavelengths of 4200 and 9900 Å, characterize the global motions and distribution of haze near the Venus cloud tops and, at the latter wavelength, deep within the main cloud. Previously undetected markings are clearly seen in the near-infrared image. The global distribution of these features, which have maximum contrasts of 3%, is different from that recorded at short wavelengths. In particular, the "polar collar," which is omnipresent in short wavelength images, is absent at 9900 Å. The maximum contrast in the features at 4200 Å is about 20%. The optical performance of the camera is described and is judged to be nominal. © 1992.

  20. Uncooled detector, optics, and camera development for THz imaging

    NASA Astrophysics Data System (ADS)

Pope, Timothy; Doucet, Michel; Dupont, Fabien; Marchese, Linda; Tremblay, Bruno; Baldenberger, Georges; Verrault, Sonia; Lamontagne, Frédéric

    2009-05-01

A prototype THz imaging system based on modified uncooled microbolometer detector arrays, INO MIMICII camera electronics, and custom f/1 THz optics has been assembled. A variety of new detector layouts and architectures have been designed; the detector THz absorption was optimized via several methods, including integration of thin-film metallic absorbers, thick-film gold black absorbers, and antenna structures. The custom f/1 THz optics is based on high-resistivity float-zone silicon with a parylene anti-reflection coating matched to the wavelength region of interest. The integrated detector, camera electronics, and optics are combined with a 3 THz quantum cascade laser for initial testing and evaluation. Future work will include the integration of fully optimized detectors and packaging, and the evaluation of the achievable NEP with an eye to future applications such as industrial inspection and stand-off detection.

  1. Dual-camera design for coded aperture snapshot spectral imaging.

    PubMed

    Wang, Lizhi; Xiong, Zhiwei; Gao, Dahua; Shi, Guangming; Wu, Feng

    2015-02-01

    Coded aperture snapshot spectral imaging (CASSI) provides an efficient mechanism for recovering 3D spectral data from a single 2D measurement. However, since the reconstruction problem is severely underdetermined, the quality of recovered spectral data is usually limited. In this paper we propose a novel dual-camera design to improve the performance of CASSI while maintaining its snapshot advantage. Specifically, a beam splitter is placed in front of the objective lens of CASSI, which allows the same scene to be simultaneously captured by a grayscale camera. This uncoded grayscale measurement, in conjunction with the coded CASSI measurement, greatly eases the reconstruction problem and yields high-quality 3D spectral data. Both simulation and experimental results demonstrate the effectiveness of the proposed method. PMID:25967796
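    A toy illustration, under strong simplifying assumptions (one spatial pixel, two spectral bands), of why the uncoded grayscale measurement eases the inverse problem: the coded CASSI measurement alone is one equation in two unknowns, while adding the grayscale sum makes the system determined. The real reconstruction operates on full coded 2D measurements with far more sophisticated priors.

```python
# Toy sketch (not the paper's algorithm). Unknown spectrum x = (x1, x2):
#   CASSI (coded):      code[0]*x1 + code[1]*x2 = m_cassi
#   grayscale (uncoded):        x1 +         x2 = m_gray

def reconstruct_two_bands(code, m_cassi, m_gray):
    """Solve the 2x2 system formed by one coded and one uncoded measurement."""
    c1, c2 = code
    det = c1 - c2  # assumes the code distinguishes the bands (c1 != c2)
    x1 = (m_cassi - c2 * m_gray) / det
    x2 = m_gray - x1
    return x1, x2
```

    With the coded equation alone, any (x1, x2) on a line would fit; the grayscale constraint picks out the single consistent spectrum.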

  2. Touchless sensor capturing five fingerprint images by one rotating camera

    NASA Astrophysics Data System (ADS)

    Noh, Donghyun; Choi, Heeseung; Kim, Jaihie

    2011-11-01

Conventional touch-based sensors cannot capture the fingerprint images of all five fingers simultaneously because of their flat surfaces: the view of the thumb is not parallel to that of the other fingers. In addition, touch-based sensors have inherent difficulties, including variations in captured images due to partial contact, nonlinear distortion, inconsistent image quality, and latent images. These degrade recognition performance and user acceptance of the sensors. To overcome these difficulties, we propose a device that adopts a contact-free structure composed of a charge-coupled device (CCD) camera, a rotating mirror equipped with a stepping motor, and a green light-emitting diode (LED) illuminator. The device does not make contact with any finger and captures all five fingerprint images simultaneously. We describe and discuss the structure of the proposed device in terms of four aspects: the quality of captured images, verification performance, compatibility with existing touch-based sensors, and ease of use. The experimental results show that the proposed device can capture all five fingerprint images with a high throughput (in 2.5 s), as required at a country's immigration control office. Also, on average, a captured touchless image covers 57% of a whole rolled image, whereas the image captured by a conventional touch-based sensor covers only 41%, and they contain 63 and 40 true minutiae on average, respectively. Even though touchless images contain 13.18-deg rolling and 9.18-deg pitching distortion on average, a 0% equal error rate (EER) is obtained by using all five fingerprint images in the verification stage.

  3. High-resolution light field cameras based on a hybrid imaging system

    NASA Astrophysics Data System (ADS)

    Dai, Feng; Lu, Jing; Ma, Yike; Zhang, Yongdong

    2014-11-01

    Compared to traditional digital cameras, light field (LF) cameras measure not only the intensity of rays, but also their light field information. As LF cameras trade a good deal of spatial resolution for extra angular information, they provide lower spatial resolution than traditional digital cameras. In this paper, we show a hybrid imaging system consisting of a LF camera and a high-resolution traditional digital camera, achieving both high spatial resolution and high angular resolution. We build an example prototype using a Lytro camera and a DSLR camera to generate a LF image with 10 megapixel spatial resolution and get high-resolution digital refocused images, multi-view images and all-focused images.

  4. Expert interpretation compensates for reduced image quality of camera-digitized images referred to radiologists.

    PubMed

    Zwingenberger, Allison L; Bouma, Jennifer L; Saunders, H Mark; Nodine, Calvin F

    2011-01-01

We compared the accuracy of five veterinary radiologists when reading 20 radiographic cases on both analog film and in camera-digitized format. In addition, we compared the ability of five veterinary radiologists vs. 10 private practice veterinarians to interpret the analog images. Interpretation accuracy was compared using receiver operating characteristic curve analysis. Veterinary radiologists' accuracy did not significantly differ between analog and camera-digitized images (P = 0.13), although sensitivity was higher for analog images. Radiologists' interpretation of both digital and analog images was significantly better compared with the private veterinarians (P < 0.05). PMID:21831251
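    The receiver operating characteristic comparison rests on the ROC area; a sketch of the Mann-Whitney form of that statistic (illustrative, not the study's analysis code):

```python
# ROC area as the probability that a random positive case is scored above a
# random negative case, with ties counting half (Mann-Whitney U / n_pos*n_neg).

def roc_auc(scores_pos, scores_neg):
    """Empirical ROC area from reader confidence scores."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

    An AUC of 1.0 means perfect separation of abnormal from normal cases; 0.5 is chance-level reading.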

  5. Single-quantum dot imaging with a photon counting camera

    PubMed Central

    Michalet, X.; Colyer, R. A.; Antelman, J.; Siegmund, O.H.W.; Tremsin, A.; Vallerga, J.V.; Weiss, S.

    2010-01-01

The expanding spectrum of applications of single-molecule fluorescence imaging ranges from fundamental in vitro studies of biomolecular activity to tracking of receptors in live cells. The success of these assays has relied on progress in organic and non-organic fluorescent probe development as well as improvements in the sensitivity of light detectors. We describe a new type of detector developed with the specific goal of ultra-sensitive single-molecule imaging. It is a wide-field, photon-counting detector providing high temporal and high spatial resolution information for each incoming photon. It can be used as a standard low-light level camera, but also gives access to much more information, such as fluorescence lifetime and spatio-temporal correlations. We illustrate the single-molecule imaging performance of our current prototype using quantum dots and discuss ongoing and future developments of this detector. PMID:19689323

  6. A two-camera imaging system for pest detection and aerial application

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This presentation reports on the design and testing of an airborne two-camera imaging system for pest detection and aerial application assessment. The system consists of two digital cameras with 5616 x 3744 effective pixels. One camera captures normal color images with blue, green and red bands, whi...

  7. Frequency Identification of Vibration Signals Using Video Camera Image Data

    PubMed Central

    Jeng, Yih-Nen; Wu, Chia-Hung

    2012-01-01

This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most dominant modes of a vibration signal, but may introduce non-physical modes induced by insufficient frame rates. Using a simple model, the frequencies of these modes are properly predicted and excluded. Two experimental designs, one using an LED light source and the other a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera, for instance 0 to 256 levels, was enhanced by summing the gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line to the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has a critical frequency of 60 Hz for inducing false modes, whereas that of the webcam is 7.8 Hz. Several factors were shown to partially suppress the non-physical modes, but they cannot eliminate them completely. Two examples, whose prominent vibration modes are below the associated critical frequencies, are examined to demonstrate the performance of the proposed systems. In general, the experimental data show that non-contact image data acquisition systems are promising tools for collecting the low-frequency vibration signal of a system. PMID:23202026
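    The frame-rate-induced false modes follow the usual frequency-folding rule; a sketch of the aliasing model implied by the abstract (the simple fold below is an assumption, not the authors' exact model):

```python
# Sketch of frequency folding: a vibration at f_true sampled at frame rate
# fps appears in the captured signal at the aliased frequency below.

def apparent_frequency(f_true, fps):
    """Folded (apparent) frequency of a tone sampled at rate fps."""
    k = round(f_true / fps)  # nearest integer multiple of the frame rate
    return abs(f_true - k * fps)
```

    A 62 Hz vibration filmed at 30 frames per second therefore shows up as a spurious 2 Hz mode, which is the kind of non-physical component the authors predict and exclude.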

  8. On image sensor dynamic range utilized by security cameras

    NASA Astrophysics Data System (ADS)

    Johannesson, Anders

    2012-03-01

The dynamic range is an important quantity used to describe an image sensor. Wide/High/Extended dynamic range is often brought forward as an important feature to compare one device to another. The dynamic range of an image sensor is normally given as a single number, which is often insufficient since a single number will not fully describe the dynamic capabilities of the sensor. A camera is ideally based on a sensor that can cope with the dynamic range of the scene; otherwise it has to sacrifice some part of the available data. For a security camera the latter may be critical, since important objects might be hidden in the sacrificed part of the scene. In this paper we compare the dynamic capabilities of some image sensors using a visual tool. The comparison is based on the use case, common in surveillance, where low contrast objects may appear in any part of a scene that, through its uneven illumination, spans a high dynamic range. The investigation is based on real sensor data that has been measured in our lab, and a synthetic test scene is used to mimic the low contrast objects. With this technique it is possible to compare sensors with different intrinsic dynamic properties as well as some capture techniques used to create an effect of increased dynamic range.
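    The single-number dynamic range the abstract criticizes is conventionally computed as the ratio of full-well capacity to read noise; a sketch of that conventional definition (illustrative, not the paper's measurement method):

```python
import math

# Conventional single-number dynamic range of an image sensor, in dB:
# the ratio of the largest recordable signal (full-well capacity, in
# electrons) to the noise floor (read noise, in electrons).

def dynamic_range_db(full_well_e, read_noise_e):
    """Sensor dynamic range in decibels."""
    return 20 * math.log10(full_well_e / read_noise_e)
```

    A sensor with a 10,000 e- full well and 10 e- read noise yields 60 dB; the paper's point is that this one number says nothing about where in a scene's illumination range that capability is usable.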

  9. Volcanic plume characteristics determined using an infrared imaging camera

    NASA Astrophysics Data System (ADS)

    Lopez, T.; Thomas, H. E.; Prata, A. J.; Amigo, A.; Fee, D.; Moriano, D.

    2015-07-01

Measurements of volcanic emissions (ash and SO2) from small-sized eruptions at three geographically dispersed volcanoes are presented from a novel, multichannel, uncooled imaging infrared camera. Infrared instruments and cameras have been used previously at volcanoes to study lava bodies and to assess plume dynamics using high temperature sources. Here we use spectrally resolved narrowband (~0.5-1 µm bandwidth) imagery to retrieve SO2 and ash slant column densities (g m⁻²) and emission rates or fluxes from infrared thermal imagery at close to ambient atmospheric temperatures. The relatively fast sampling (0.1-0.5 Hz) of the multispectral imagery and the fast sampling (~1 Hz) of single channel temperature data permit analysis of some aspects of plume dynamics. Estimations of SO2 and ash mass fluxes, and total slant column densities of SO2 and fine ash in individual small explosions from Stromboli (Italy) and Karymsky (Russia), and total SO2 slant column densities and fluxes from Láscar (Chile) volcanoes, are provided. We evaluate the temporal evolution of fine ash particle sizes in ash-rich explosions at Stromboli and Karymsky and use these observations to infer the presence of at least two distinct fine ash modes, with mean radii of < 10 µm and > 10 µm. The camera and techniques detailed here provide a tool to quickly and remotely estimate fluxes of fine ash and SO2 gas and characterize eruption size.
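    Emission rates are conventionally obtained by combining a column-density transect with a plume width and plume speed; a hedged sketch of that standard relation (the authors' retrieval is more involved):

```python
# Conventional flux estimate from imaging retrievals (illustrative only):
# multiply the mean slant column density across a transect of the plume
# by the transect width and the plume transport speed.

def mass_flux(mean_scd_g_m2, plume_width_m, plume_speed_m_s):
    """SO2 or fine-ash mass flux in g/s from a plume transect."""
    return mean_scd_g_m2 * plume_width_m * plume_speed_m_s
```

    For example, a mean column density of 2 g m⁻² across a 100 m wide plume moving at 5 m/s gives a flux of 1 kg/s.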

  10. Imaging the seizure during surgery with a hyperspectral camera.

    PubMed

    Noordmans, Herke Jan; Ferrier, Cyrille; de Roode, Rowland; Leijten, Frans; van Rijen, Peter; Gosselaar, Peter; Klaessens, John; Verdaasdonk, Ruud

    2013-11-01

An epilepsy patient with recurring sensorimotor seizures involving the left hand every 10 min was imaged with a hyperspectral camera during surgery. By calculating the changes in oxygenated blood, deoxygenated blood, and total blood volume in the cortex, a focal increase in oxygenated and total blood volume could be observed in the sensory cortex, corresponding to the seizure-onset zone defined by intracranial electroencephalography (EEG) findings. This probably reflects very local seizure activity. After multiple subpial transections in this motor area, clinical seizures abated. PMID:24199829

  11. Ceres Photometry and Albedo from Dawn Framing Camera Images

    NASA Astrophysics Data System (ADS)

Schröder, S. E.; Mottola, S.; Keller, H. U.; Li, J.-Y.; Matz, K.-D.; Otto, K.; Roatsch, T.; Stephan, K.; Raymond, C. A.; Russell, C. T.

    2015-10-01

    The Dawn spacecraft is in orbit around dwarf planet Ceres. The onboard Framing Camera (FC) [1] is mapping the surface through a clear filter and 7 narrow-band filters at various observational geometries. Generally, Ceres' appearance in these images is affected by shadows and shading, effects which become stronger for larger solar phase angles, obscuring the intrinsic reflective properties of the surface. By means of photometric modeling we attempt to remove these effects and reconstruct the surface albedo over the full visible wavelength range. Knowledge of the albedo distribution will contribute to our understanding of the physical nature and composition of the surface.
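    One common first-order photometric model for removing shading on airless bodies is the Lommel-Seeliger disk function; a sketch of that standard form (the Dawn team's actual photometric model may differ):

```python
import math

# Lommel-Seeliger disk function, a first-order photometric model for airless
# bodies: dividing the observed reflectance by this factor removes much of
# the incidence/emission-angle shading, leaving an albedo estimate.

def lommel_seeliger(incidence_deg, emission_deg):
    """Normalized Lommel-Seeliger disk function (1 at normal geometry)."""
    mu0 = math.cos(math.radians(incidence_deg))
    mu = math.cos(math.radians(emission_deg))
    return 2 * mu0 / (mu0 + mu)
```

    At normal incidence and emission the correction is unity; toward the limb and terminator it falls off, which is where shading otherwise obscures intrinsic reflectance.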

  12. MECHANICAL ADVANCING HANDLE THAT SIMPLIFIES MINIRHIZOTRON CAMERA REGISTRATION AND IMAGE COLLECTION

    EPA Science Inventory

    Minirkizotrons in conjunction with a minirkizotron video camera system are becoming widely used tools for investigating root production and survical in a variety of ecosystems. Image collection with a minirhizotron camera can be time consuming and tedious particularly when hundre...

  13. Noise evaluation of Compton camera imaging for proton therapy.

    PubMed

    Ortega, P G; Torres-Espallardo, I; Cerutti, F; Ferrari, A; Gillam, J E; Lacasta, C; Llosá, G; Oliver, J F; Sala, P R; Solevi, P; Rafecas, M

    2015-03-01

Compton Cameras emerged as an alternative for real-time dose monitoring techniques for Particle Therapy (PT), based on the detection of prompt-gammas. As a consequence of the Compton scattering process, the gamma origin point can be restricted onto the surface of a cone (Compton cone). Through image reconstruction techniques, the distribution of the gamma emitters can be estimated, using cone-surface backprojections of the Compton cones through the image space, along with more sophisticated statistical methods to improve the image quality. To calculate the Compton cone required for image reconstruction, either two interactions, the last being photoelectric absorption, or three scatter interactions are needed. Because of the high energy of the photons in PT the first option might not be adequate, as the photon is not absorbed in general. However, the second option is less efficient. That is the reason to resort to spectral reconstructions, where the incoming γ energy is considered as a variable in the reconstruction inverse problem. Jointly with prompt gammas, secondary neutrons and scattered photons, not strongly correlated with the dose map, can also reach the imaging detector and produce false events. These events deteriorate the image quality. Also, high-intensity beams can produce particle accumulation in the camera, which leads to an increase of random coincidences, meaning events which gather measurements from different incoming particles. The noise scenario is expected to be different if double or triple events are used, and consequently, the reconstructed images can be affected differently by spurious data. The aim of the present work is to study the effect of false events in the reconstructed image, evaluating their impact on the determination of the beam particle ranges. A simulation study that includes misidentified events (neutrons and random coincidences) in the final image of a Compton Telescope for PT monitoring is presented. 
The complete chain of detection, from the beam particle entering a phantom to the event classification, is simulated using FLUKA. The range determination is later estimated from the reconstructed image obtained from a two and three-event algorithm based on Maximum Likelihood Expectation Maximization. The neutron background and random coincidences due to a therapeutic-like time structure are analyzed for mono-energetic proton beams. The time structure of the beam is included in the simulations, which will affect the rate of particles entering the detector. PMID:25658644
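    The Compton cone half-angle follows from the Compton scattering formula; a sketch for the two-interaction case, where full absorption of the scattered photon (the abstract's first option) is assumed:

```python
import math

ME_C2_KEV = 511.0  # electron rest energy in keV

# Two-interaction Compton cone: the energy deposited in the first scatterer
# fixes the scattering angle, i.e. the opening half-angle of the cone on
# whose surface the gamma origin must lie.

def compton_cone_angle(e_initial_kev, e_deposited_kev):
    """Cone half-angle in degrees; assumes the scattered photon is then
    fully absorbed so that e_initial is known (not true in general at
    prompt-gamma energies, which is the abstract's point)."""
    e_scattered = e_initial_kev - e_deposited_kev
    cos_theta = 1.0 - ME_C2_KEV * (1.0 / e_scattered - 1.0 / e_initial_kev)
    return math.degrees(math.acos(cos_theta))
```

    In the spectral reconstructions described, e_initial itself becomes an unknown of the inverse problem rather than a measured input.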

  14. Noise evaluation of Compton camera imaging for proton therapy

    NASA Astrophysics Data System (ADS)

Ortega, P. G.; Torres-Espallardo, I.; Cerutti, F.; Ferrari, A.; Gillam, J. E.; Lacasta, C.; Llosá, G.; Oliver, J. F.; Sala, P. R.; Solevi, P.; Rafecas, M.

    2015-02-01

Compton Cameras emerged as an alternative for real-time dose monitoring techniques for Particle Therapy (PT), based on the detection of prompt-gammas. As a consequence of the Compton scattering process, the gamma origin point can be restricted onto the surface of a cone (Compton cone). Through image reconstruction techniques, the distribution of the gamma emitters can be estimated, using cone-surface backprojections of the Compton cones through the image space, along with more sophisticated statistical methods to improve the image quality. To calculate the Compton cone required for image reconstruction, either two interactions, the last being photoelectric absorption, or three scatter interactions are needed. Because of the high energy of the photons in PT the first option might not be adequate, as the photon is not absorbed in general. However, the second option is less efficient. That is the reason to resort to spectral reconstructions, where the incoming γ energy is considered as a variable in the reconstruction inverse problem. Jointly with prompt gammas, secondary neutrons and scattered photons, not strongly correlated with the dose map, can also reach the imaging detector and produce false events. These events deteriorate the image quality. Also, high-intensity beams can produce particle accumulation in the camera, which leads to an increase of random coincidences, meaning events which gather measurements from different incoming particles. The noise scenario is expected to be different if double or triple events are used, and consequently, the reconstructed images can be affected differently by spurious data. The aim of the present work is to study the effect of false events in the reconstructed image, evaluating their impact on the determination of the beam particle ranges. A simulation study that includes misidentified events (neutrons and random coincidences) in the final image of a Compton Telescope for PT monitoring is presented. 
The complete chain of detection, from the beam particle entering a phantom to the event classification, is simulated using FLUKA. The range determination is later estimated from the reconstructed image obtained from a two and three-event algorithm based on Maximum Likelihood Expectation Maximization. The neutron background and random coincidences due to a therapeutic-like time structure are analyzed for mono-energetic proton beams. The time structure of the beam is included in the simulations, which will affect the rate of particles entering the detector.

  15. Image reconstruction methods for the PBX-M pinhole camera

    SciTech Connect

    Holland, A.; Powell, E.T.; Fonck, R.J.

    1990-03-01

    This paper describes two methods which have been used to reconstruct the soft x-ray emission profile of the PBX-M tokamak from the projected images recorded by the PBX-M pinhole camera. Both methods must accurately represent the shape of the reconstructed profile while also providing a degree of immunity to noise in the data. The first method is a simple least squares fit to the data. This has the advantage of being fast and small, and thus easily implemented on the PDP-11 computer used to control the video digitizer for the pinhole camera. The second method involves the application of a maximum entropy algorithm to an overdetermined system. This has the advantage of allowing the use of a default profile. This profile contains additional knowledge about the plasma shape which can be obtained from equilibrium fits to the external magnetic measurements. Additionally the reconstruction is guaranteed positive, and the fit to the data can be relaxed by specifying both the amount and distribution of noise in the image. The algorithm described has the advantage of being considerably faster, for an overdetermined system, than the usual Lagrange multiplier approach to finding the maximum entropy solution. 13 refs., 24 figs.
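    The maximum-entropy method described measures the reconstruction against the default profile through a relative-entropy objective; a sketch of that objective (illustrative form only, not the PBX-M implementation):

```python
import math

# Relative-entropy objective used in maximum-entropy reconstruction: it is
# maximized (at zero) when the reconstructed profile equals the default
# profile d, so data-free regions relax toward the prior profile obtained,
# e.g., from equilibrium fits to external magnetic measurements.

def relative_entropy(profile, default):
    """S = -sum_i p_i ln(p_i / d_i); zero when profile == default."""
    return -sum(p * math.log(p / d) for p, d in zip(profile, default))
```

    In the full algorithm this objective is maximized subject to a chi-squared constraint on the fit to the pinhole-camera data, with the noise level setting how tightly the data are honored.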

  16. Image quality assessment of 2-chip color camera in comparison with 1-chip color and 3-chip color cameras in various lighting conditions: initial results

    NASA Astrophysics Data System (ADS)

    Adham Khiabani, Sina; Zhang, Yun; Fathollahi, Fatemeh

    2014-05-01

A 2-chip color camera, named UNB Super-camera, is introduced in this paper. Its image qualities in different lighting conditions are compared with those of a 1-chip color camera and a 3-chip color camera. The 2-chip color camera contains a high resolution monochrome (panchromatic) sensor and a low resolution color sensor. The high resolution color images of the 2-chip color camera are produced through an image fusion technique: UNB pan-sharp, also named FuzeGo. This fusion technique has been widely used for a decade to produce high resolution color satellite images from a high resolution panchromatic image and a low resolution multispectral (color) image. Now, the fusion technique is further extended to produce high resolution color still images and video images from a 2-chip color camera. The initial quality assessments of a research project showed that the light sensitivity, image resolution and color quality of the Super-camera (2-chip camera) are clearly better than those of the same generation 1-chip camera. It is also shown that the image quality of the Super-camera is much better than that of the same generation 3-chip camera when the light is low, such as in a normal room light condition or darker, while the resolution of the Super-camera is the same as that of the 3-chip camera. These evaluation results suggest the potential of using a 2-chip camera to replace a 3-chip camera for capturing high quality color images, which would not only lower the cost of camera manufacture but also significantly improve light sensitivity.
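    Ratio-based pan-sharpening conveys the flavor of fusing a high-resolution panchromatic pixel with low-resolution color; this simple Brovey-style sketch is illustrative only, since UNB pan-sharp (FuzeGo) uses its own least-squares weighting:

```python
# Brovey-style ratio pan-sharpening for a single pixel (illustrative only):
# rescale the low-resolution color values so their intensity matches the
# co-located high-resolution panchromatic value, preserving color ratios.

def pansharpen_pixel(pan, r, g, b):
    """Fuse one panchromatic sample with one upsampled color sample."""
    intensity = (r + g + b) / 3.0
    scale = pan / intensity
    return r * scale, g * scale, b * scale
```

    Spatial detail comes from the panchromatic channel while hue is inherited from the color sensor, which is the basic trade the 2-chip design exploits.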

  17. Ceres Survey Atlas derived from Dawn Framing Camera images

    NASA Astrophysics Data System (ADS)

    Roatsch, Th.; Kersten, E.; Matz, K.-D.; Preusker, F.; Scholten, F.; Jaumann, R.; Raymond, C. A.; Russell, C. T.

    2016-02-01

The Dawn Framing Camera (FC) acquired almost 900 clear filter images of Ceres with a resolution of about 400 m/pixel during the seven cycles in the Survey orbit in June 2015. We ortho-rectified 42 images from the third cycle and produced a global, high-resolution, controlled mosaic of Ceres. This global mosaic is the basis for a high-resolution Ceres atlas that consists of 3 tiles mapped at a scale of 1:2,000,000. The nomenclature used in this atlas was proposed by the Dawn team and approved by the International Astronomical Union (IAU). The whole atlas is available to the public through the Dawn GIS web page.

  18. Real-time viewpoint image synthesis using strips of multi-camera images

    NASA Astrophysics Data System (ADS)

    Date, Munekazu; Takada, Hideaki; Kojima, Akira

    2015-03-01

A real-time viewpoint image generation method is presented. Video communications with a high sense of reality are needed to make natural connections between users at different places. One of the key technologies to achieve a sense of high reality is image generation corresponding to an individual user's viewpoint. However, generating viewpoint images requires advanced image processing, which is usually too heavy to use for real-time and low-latency purposes. In this paper we propose a real-time viewpoint image generation method using simple blending of multiple camera images taken at equal horizontal intervals, with convergence obtained by using approximate information about an object's depth. An image generated from the nearest camera images is visually perceived as an intermediate viewpoint image due to the visual effect of depth-fused 3D (DFD). If the viewpoint is not on the line of the camera array, a viewpoint image can be generated by region splitting. We made a prototype viewpoint image generation system and achieved real-time full-frame operation for stereo HD videos. Users can see their individual viewpoint images during left-and-right and back-and-forth movement relative to the screen. Our algorithm is very simple and promising as a means for achieving video communication with high reality.
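    The simple blending of the two nearest camera images can be sketched per pixel; the linear weighting below is an assumed form of "simple blending," with the DFD effect supplying the perceived intermediate viewpoint:

```python
# Per-pixel linear blend of the two nearest camera images in the array.
# t parameterizes the virtual viewpoint between the two cameras.

def blend_pixel(left, right, t):
    """t = 0 at the left camera, t = 1 at the right camera."""
    return (1.0 - t) * left + t * right
```

    Because the blend is a single multiply-add per pixel, it is cheap enough for the full-frame real-time operation on stereo HD video that the abstract reports.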

  19. Embedded image enhancement for high-throughput cameras

    NASA Astrophysics Data System (ADS)

    Geerts, Stan J. C.; Cornelissen, Dion; de With, Peter H. N.

    2014-03-01

    This paper presents image enhancement for a novel Ultra-High-Definition (UHD) video camera offering 4K images and higher. Conventional image enhancement techniques need to be reconsidered for the high-resolution images and the low-light sensitivity of the new sensor. We study two image enhancement functions and evaluate and optimize the algorithms for embedded implementation in programmable logic (FPGA). The enhancement study involves high-quality Auto White Balancing (AWB) and Local Contrast Enhancement (LCE). We have compared multiple algorithms from the literature, with both objective and subjective metrics. In order to objectively compare Local Contrast (LC), an existing LC metric is modified for LC measurement in UHD images. For AWB, we have found that color histogram stretching offers subjectively high image quality and is among the algorithms with the lowest complexity, while giving only a small balancing error. We impose a color-to-color gain constraint, which improves robustness for low-light images. For local contrast enhancement, a combination of contrast-preserving gamma and single-scale Retinex is selected. A modified bilateral filter is designed to prevent halo artifacts, while significantly reducing the complexity and simultaneously preserving quality. We show that by cascading contrast-preserving gamma and single-scale Retinex, the visibility of details is improved toward the level appropriate for high-quality surveillance applications. The user is offered control over the amount of enhancement. Also, we discuss the mapping of these functions onto a heterogeneous platform to arrive at an effective implementation while preserving quality and robustness.
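
    The color-histogram-stretching white balance favored in the study can be sketched as a per-channel linear stretch. This is a generic illustration of the technique, not the paper's FPGA implementation; the percentile clipping values are assumptions:

```python
import numpy as np

def awb_histogram_stretch(img, low_pct=1.0, high_pct=99.0):
    """Per-channel color histogram stretching for auto white balance.

    Each channel is linearly stretched so that its low/high percentiles
    map to 0 and 1, equalizing the channel ranges and removing a global
    color cast. Percentile clipping guards against outlier pixels.
    """
    out = np.empty_like(img, dtype=np.float64)
    for c in range(img.shape[2]):
        lo = np.percentile(img[:, :, c], low_pct)
        hi = np.percentile(img[:, :, c], high_pct)
        out[:, :, c] = np.clip((img[:, :, c] - lo) / max(hi - lo, 1e-9), 0.0, 1.0)
    return out
```

    A hardware version would replace the percentile search with a fixed-bin histogram, but the mapping applied per pixel is the same single multiply-add.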

  20. A comparison of select image-compression algorithms for an electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    This effort is a study of image-compression algorithms for an electronic still camera. An electronic still camera can record and transmit high-quality images without the use of film, because images are stored digitally in computer memory. However, high-resolution images contain an enormous amount of information, and will strain the camera's data-storage system. Image compression will allow more images to be stored in the camera's memory. For the electronic still camera, a compression algorithm that produces a reconstructed image of high fidelity is most important. Efficiency of the algorithm is the second priority. High fidelity and efficiency are more important than a high compression ratio. Several algorithms were chosen for this study and judged on fidelity, efficiency and compression ratio. The transform method appears to be the best choice. At present, the method is compressing images to a ratio of 5.3:1 and producing high-fidelity reconstructed images.
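
    The transform method the study favors can be illustrated with a toy block transform coder: forward 2-D DCT, retention of only the largest-magnitude coefficients, inverse transform. This sketch shows generic transform coding, not the paper's exact algorithm:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] *= 1.0 / np.sqrt(2)
    return m * np.sqrt(2.0 / n)

def compress_block(block, keep):
    """Transform-code one square image block: forward 2-D DCT, zero all
    but the `keep` largest-magnitude coefficients, inverse-transform."""
    n = block.shape[0]
    D = dct_matrix(n)
    coeffs = D @ block @ D.T                 # forward 2-D DCT
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]  # keep-th largest
    coeffs[np.abs(coeffs) < thresh] = 0.0    # discard small coefficients
    return D.T @ coeffs @ D                  # inverse transform
```

    Keeping 12 of 64 coefficients per 8 x 8 block corresponds roughly to the 5.3:1 ratio quoted above, before entropy-coding overhead.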

  1. Inflight Calibration of the Lunar Reconnaissance Orbiter Camera Wide Angle Camera

    NASA Astrophysics Data System (ADS)

    Mahanti, P.; Humm, D. C.; Robinson, M. S.; Boyd, A. K.; Stelling, R.; Sato, H.; Denevi, B. W.; Braden, S. E.; Bowman-Cisneros, E.; Brylow, S. M.; Tschimmel, M.

    2015-09-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) has acquired more than 250,000 images of the illuminated lunar surface and over 190,000 observations of space and the non-illuminated Moon since 1 January 2010. These images, along with images from the Narrow Angle Camera (NAC) and other Lunar Reconnaissance Orbiter instrument datasets, are enabling new discoveries about the morphology, composition, and geologic/geochemical evolution of the Moon. Characterizing the inflight WAC system performance is crucial to scientific and exploration results. Pre-launch calibration of the WAC provided a baseline characterization that was critical for early targeting and analysis. Here we present an analysis of WAC performance from the inflight data. In the course of our analysis we compare and contrast with the pre-launch performance wherever possible and quantify the uncertainty related to various components of the calibration process. We document the absolute and relative radiometric calibration, point spread function, and scattered light sources, and provide estimates of sources of uncertainty for spectral reflectance measurements of the Moon across a range of imaging conditions.

  2. LROC NAC Photometry as a Tool for Studying Physical and Compositional Properties of the Lunar Surface

    NASA Astrophysics Data System (ADS)

    Clegg, R. N.; Jolliff, B. L.; Boyd, A. K.; Stopar, J. D.; Sato, H.; Robinson, M. S.; Hapke, B. W.

    2014-10-01

    LROC NAC photometry has been used to study the effects of rocket exhaust on lunar soil properties, and here we apply the same photometric methods to place compositional constraints on regions of silicic volcanism and pure anorthosite on the Moon.

  3. Distribution and Chronostratigraphy of Ejecta Complexes in the Humorum Basin Mapped from LROC and Lidar Data

    NASA Astrophysics Data System (ADS)

    Ambrose, W. A.

    2011-03-01

    LROC data reveal a population of small-scale ejecta features (asymmetric secondary craters, scours, and crater chains) in the Humorum Basin. These features are used to refine chronostratigraphic ages of landforms in the basin and basin margin.

  4. VME image acquisition and processing using standard TV CCD cameras

    NASA Astrophysics Data System (ADS)

    Epaud, F.; Verdier, P.

    1994-12-01

    The ESRF has released the first version of a low-cost image acquisition and processing system based on an industrial VME board and commercial CCD TV cameras. The images from standard CCIR (625 lines) or EIA (525 lines) inputs are digitised with 8-bit dynamic range and stored in a general-purpose frame buffer to be processed by the embedded firmware. They can also be transferred to a UNIX workstation through the network for display in an X11 window, or stored in a file for off-line processing with image analysis packages like KHOROS, IDL, etc. The front-end VME acquisition system can be controlled with a Graphical User Interface (GUI) based on X11/Motif running under UNIX. The first release of the system is in operation and allows one to observe and analyse beam spots around the accelerators. The system has been extended to make it possible to position a micro-sample (less than 10 µm²) not visible to the naked eye. This is a general-purpose image acquisition system which may have wider applications.

  5. Arthropod eye-inspired digital camera with unique imaging characteristics

    NASA Astrophysics Data System (ADS)

    Xiao, Jianliang; Song, Young Min; Xie, Yizhu; Malyarchuk, Viktor; Jung, Inhwa; Choi, Ki-Joong; Liu, Zhuangjian; Park, Hyunsung; Lu, Chaofeng; Kim, Rak-Hwan; Li, Rui; Crozier, Kenneth B.; Huang, Yonggang; Rogers, John A.

    2014-06-01

    In nature, arthropods have a remarkably sophisticated class of imaging systems, with a hemispherical geometry, a wide-angle field of view, low aberrations, high acuity to motion and an infinite depth of field. There is great interest in building systems with similar geometries and properties due to numerous potential applications. However, the established semiconductor sensor technologies and optics are essentially planar, and face great challenges in building such systems with hemispherical, compound apposition layouts. With the recent advancement of stretchable optoelectronics, we have successfully developed strategies to build a fully functional artificial apposition compound eye camera by combining optics, materials and mechanics principles. The strategies start with fabricating stretchable arrays of thin silicon photodetectors and elastomeric optical elements in planar geometries, which are then precisely aligned and integrated, and elastically transformed to hemispherical shapes. This imaging device demonstrates a nearly full hemispherical shape (about 160 degrees), with densely packed artificial ommatidia. The number of ommatidia (180) is comparable to those of the eyes of fire ants and bark beetles. We have illustrated key features of operation of compound eyes through experimental imaging results and quantitative ray-tracing-based simulations. The general strategies shown in this development could be applicable to other compound eye devices, such as those inspired by moths and lacewings (refracting superposition eyes), lobsters and shrimp (reflecting superposition eyes), and houseflies (neural superposition eyes).

  6. Modeling camera orientation and 3D structure from a sequence of images taken by a perambulating commercial video camera

    NASA Astrophysics Data System (ADS)

    M-Rouhani, Behrouz; Anderson, James A. D. W.

    1997-04-01

    In this paper we report the degree of reliability of image sequences taken by off-the-shelf TV cameras for modeling camera rotation and reconstructing 3D structure using computer vision techniques. This is done in spite of the fact that such imaging devices are designed for human vision rather than computer vision. Our scenario consists of a static scene and a mobile camera moving through the scene. The scene is any long axial building dominated by features along the three principal orientations and with at least one wall containing prominent repetitive planar features such as doors, windows, bricks, etc. The camera is an ordinary commercial camcorder moving along the central axis of the scene and is allowed to rotate freely within the range +/- 10 degrees in all directions. This makes it possible for the camera to be held by a walking non-professional cameraman with a normal gait, or to be mounted on a mobile robot. The system has been tested successfully on sequences of images of a variety of structured, but fairly cluttered, scenes taken by different walking cameramen. Potential application areas of the system include medicine, robotics and photogrammetry.

  7. Coded-aperture Compton camera for gamma-ray imaging

    NASA Astrophysics Data System (ADS)

    Farber, Aaron M.

    This dissertation describes the development of a novel gamma-ray imaging system concept and presents results from Monte Carlo simulations of the new design. Current designs for large field-of-view gamma cameras suitable for homeland security applications implement either a coded aperture or a Compton scattering geometry to image a gamma-ray source. Both of these systems require large, expensive position-sensitive detectors in order to work effectively. By combining characteristics of both of these systems, a new design can be implemented that does not require such expensive detectors and that can be scaled down to a portable size. This new system has significant promise in homeland security, astronomy, botany and other fields, while future iterations may prove useful in medical imaging, other biological sciences and other areas, such as non-destructive testing. A proof-of-principle study of the new gamma-ray imaging system has been performed by Monte Carlo simulation. Various reconstruction methods have been explored and compared. General-Purpose Graphics Processing Unit (GPGPU) computation has also been incorporated. The resulting code is a primary design tool for exploring variables such as detector spacing, material selection and thickness, and pixel geometry. The advancement of the system from a simple 1-dimensional simulation to a full 3-dimensional model is described. Methods of image reconstruction are discussed, and results of simulations of both a 4 x 4 and a 16 x 16 object-space mesh are presented. A discussion of the limitations and potential areas of further study is also included.

  8. Solid state television camera has no imaging tube

    NASA Technical Reports Server (NTRS)

    Huggins, C. T.

    1972-01-01

    Camera with characteristics of vidicon camera and greater resolution than home TV receiver uses mosaic of phototransistors. Because of low power and small size, camera has many applications. Mosaics can be used as cathode ray tubes and analog-to-digital converters.

  9. Why do the image widths from the various cameras change?

    Atmospheric Science Data Center

    2014-12-08

    ... are different because the focal lengths of the MISR cameras change in relationship to the varying distance to the Earth for the different ... the D, C, B, and off-nadir A cameras are chosen so that each pixel is 275 m wide. However, the nadir A camera uses the same focal length as ...

  10. A Hybrid Camera for simultaneous imaging of gamma and optical photons

    NASA Astrophysics Data System (ADS)

    Lees, J. E.; Bassford, D. J.; Blake, O. E.; Blackshaw, P. E.; Perkins, A. C.

    2012-06-01

    We present a new concept for a medical imaging system, the Hybrid Mini Gamma Camera (HMGC). This combines an optical and a gamma-ray camera in a co-aligned configuration that offers high spatial resolution multi-modality imaging for superimposition of a scintigraphic image on an optical image. This configuration provides visual identification of the sites of localisation of radioactivity that would be especially suited to medical imaging. An extension of the new concept using two hybrid cameras (The StereoScope) offers the potential for stereoscopic imaging with depth estimation for a gamma emitting source.

  11. Development of a dual modality imaging system: a combined gamma camera and optical imager.

    PubMed

    Jung, Jin Ho; Choi, Yong; Hong, Key Jo; Min, Byung Jun; Choi, Joon Young; Choe, Yearn Seong; Lee, Kyung-Han; Kim, Byung-Tae

    2009-07-21

    Several groups have reported the development of dual-modality gamma camera/optical imagers, which are useful tools for investigating biological processes in experimental animals. While previously reported dual-modality imaging instrumentation usually employed a separate gamma camera and optical imager, we designed a detector using a position-sensitive photomultiplier tube (PSPMT) that is capable of imaging both gamma rays and optical photons for a combined gamma camera and optical imager. The proposed system consists of a parallel-hole collimator, an array-type crystal and a PSPMT. The top surface of the collimator and array crystals is left open to allow optical photons to reach the PSPMT. Pulse height spectra and planar images were obtained using a Tc-99m source and a green LED to estimate gamma and optical imaging performance. When both gamma-ray and optical photon signals were detected, their mutual interference was evaluated. A mouse phantom and an ICR mouse containing a gamma-ray and optical photon source were imaged to assess the imaging capabilities of the system. The sensitivity, energy resolution and spatial resolution of the gamma image acquired using Tc-99m were 1.1 cps/kBq, 26% and 2.1 mm, respectively. The spatial resolution of the optical image acquired with an LED was 3.5 mm. Interference from the optical photon signal in the gamma pulse height spectrum was negligible. However, the pulse height spectrum of the optical photon signal was found to be affected by the gamma signal, and was therefore acquired between gamma-ray signals, with a correction using a veto gate. Gamma-ray and optical photon images of the mouse phantom and ICR mouse were successfully obtained using the single detector. The experimental results indicated that both optical photon and gamma-ray imaging are feasible using a detector based on the proposed PSPMT. PMID:19556682

  12. Extreme ultra-violet movie camera for imaging microsecond time scale magnetic reconnection

    SciTech Connect

    Chai, Kil-Byoung; Bellan, Paul M.

    2013-12-15

    An ultra-fast extreme ultra-violet (EUV) movie camera has been developed for imaging magnetic reconnection in the Caltech spheromak/astrophysical jet experiment. The camera consists of a broadband Mo:Si multilayer mirror, a fast-decaying YAG:Ce scintillator, a visible light block, and a high-speed visible light CCD camera. The camera can capture EUV images as fast as 3.3 × 10⁶ frames per second with 0.5 cm spatial resolution. The spectral range is from 20 eV to 60 eV. EUV images reveal strong, transient, highly localized bursts of EUV radiation when magnetic reconnection occurs.

  13. Effect of detector parameters on the image quality of Compton camera for 99mTc

    NASA Astrophysics Data System (ADS)

    An, S. H.; Seo, H.; Lee, J. H.; Lee, C. S.; Lee, J. S.; Kim, C. H.

    2007-02-01

    The Compton camera has a bright future as a medical imaging device considering its compactness, low patient dose, multiple-radioisotope tracing capability, inherent three-dimensional (3D) imaging capability at a fixed position, etc. Currently, however, the image resolution of the Compton camera is not sufficient for medical imaging. In this study, we investigated the influence of various detector parameters on the image quality of the Compton camera for 99mTc with GEANT4. Our results show that the segmentation of the detectors significantly affects the image resolution of the Compton camera. The energy discrimination of the detectors was found to significantly affect both the sensitivity and spatial resolution. The use of a higher-energy gamma source (e.g., 18F emitting 511 keV photons), however, will significantly improve the spatial resolution of the Compton camera. It will also minimize the effect of the detector energy resolution.

  14. Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy

    NASA Technical Reports Server (NTRS)

    Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers ("CTIS"s) having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3® digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.

  15. A low-cost dual-camera imaging system for aerial applicators

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Agricultural aircraft provide a readily available remote sensing platform as low-cost and easy-to-use consumer-grade cameras are being increasingly used for aerial imaging. In this article, we report on a dual-camera imaging system we recently assembled that can capture RGB and near-infrared (NIR) i...

  16. A CCD CAMERA-BASED HYPERSPECTRAL IMAGING SYSTEM FOR STATIONARY AND AIRBORNE APPLICATIONS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper describes a charge coupled device (CCD) camera-based hyperspectral imaging system designed for both stationary and airborne remote sensing applications. The system consists of a high performance digital CCD camera, an imaging spectrograph, an optional focal plane scanner, and a PC comput...

  17. ATR/OTR-SY Tank Camera Purge System and in Tank Color Video Imaging System

    SciTech Connect

    Werry, S.M.

    1995-06-06

    This procedure will document the satisfactory operation of the 101-SY tank Camera Purge System (CPS) and 101-SY in-tank Color Camera Video Imaging System (CCVIS). Included in the CPS is the nitrogen purging system safety interlock, which shuts down all the color video imaging system electronics within the 101-SY tank vapor space during loss of nitrogen purge pressure.

  18. Compressed sensing snapshot spectral imaging by a regular digital camera with an added optical diffuser.

    PubMed

    Golub, Michael A; Averbuch, Amir; Nathan, Menachem; Zheludev, Valery A; Hauser, Jonathan; Gurevitch, Shay; Malinsky, Roman; Kagan, Asaf

    2016-01-20

    We propose a spectral imaging method that allows a regular digital camera to be converted into a snapshot spectral imager by equipping the camera with a dispersive diffuser and with a compressed sensing-based algorithm for digital processing. Results of optical experiments are reported. PMID:26835914

  19. A new testing method of SNR for cooled CCD imaging camera based on stationary wavelet transform

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Liu, Qianshun; Yu, Feihong

    2013-08-01

    The cooled CCD (charge-coupled device) imaging camera has found wide application in astronomy, color photometry, spectroscopy, medical imaging, densitometry, chemiluminescence and epifluorescence imaging. A cooled CCD (CCCD) imaging camera differs from a traditional CCD/CMOS imaging camera in that it can acquire high-resolution images even in low-illumination environments. The signal-to-noise ratio (SNR) is the most popular parameter for digital image quality evaluation. Many researchers have proposed SNR testing methods for traditional CCD imaging cameras; however, these are seldom suitable for cooled CCD imaging cameras because the dominant noise sources differ. In this paper, a new SNR testing method is proposed to evaluate the quality of images captured by a cooled CCD. The Stationary Wavelet Transform (SWT) is introduced into the testing method to obtain a more exact image SNR value. The proposed method takes full advantage of the SWT in image processing, which makes the experimental results accurate and reliable. To further refine the SNR testing results, the relation between SNR and integration time is also analyzed. The experimental results indicate that the proposed testing method accords with the SNR model of the CCCD. In addition, repeated tests on one system yield nearly identical values, showing that the proposed method is robust.
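
    The idea of estimating SNR from a stationary wavelet decomposition can be sketched with a single undecimated Haar level: the diagonal detail band isolates pixel-level noise, and its robust standard deviation gives the noise term. This is a generic illustration under that assumption, not the authors' exact method:

```python
import numpy as np

def estimate_snr_db(img):
    """Estimate image SNR (dB) from one undecimated (stationary) Haar level.

    The diagonal detail band of a stationary Haar transform responds mainly
    to pixel-level noise; its robust std (median absolute deviation / 0.6745)
    estimates the noise sigma. Signal power is taken from the smoothed
    approximation band.
    """
    a = img[:-1, :-1]; b = img[:-1, 1:]
    c = img[1:, :-1];  d = img[1:, 1:]
    approx = (a + b + c + d) / 4.0            # low-pass (approximation) band
    diag = (a - b - c + d) / 2.0              # diagonal detail band
    sigma = np.median(np.abs(diag)) / 0.6745  # robust noise estimate
    return 20.0 * np.log10(np.sqrt(np.mean(approx ** 2)) / max(sigma, 1e-12))
```

    Because the transform is undecimated, every pixel contributes a detail coefficient, which is what makes the estimate stable across repeated tests.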

  20. Megapixel mythology and photospace: estimating photospace for camera phones from large image sets

    NASA Astrophysics Data System (ADS)

    Hultgren, Bror O.; Hertel, Dirk W.

    2008-01-01

    It is a myth that more pixels alone result in better images. The marketing of camera phones in particular has focused on their pixel counts. However, their performance varies considerably according to the conditions of image capture. Camera phones are often used in low-light situations where the lack of a flash and limited exposure time will produce underexposed, noisy and blurred images. Camera utilization can be quantitatively described by photospace distributions, a statistical description of the frequency of pictures taken at varying light levels and camera-subject distances. If the photospace distribution is known, the user-experienced distribution of quality can be determined either directly by measurement of subjective quality, or by photospace-weighting of objective attributes. Populating a photospace distribution requires examining large numbers of images taken under typical camera phone usage conditions. ImagePhi was developed as a user-friendly software tool to interactively estimate the primary photospace variables, subject illumination and subject distance, from individual images. Additionally, subjective evaluations of image quality and failure modes for low-quality images can be entered into ImagePhi. ImagePhi has been applied to sets of images taken by typical users with a selection of popular camera phones varying in resolution. The estimated photospace distribution of camera phone usage has been correlated with the distributions of failure modes. The subjective and objective data show that photospace conditions have a much bigger impact on the image quality of a camera phone than the pixel count of its imager. The 'megapixel myth' is thus seen to be less a myth than an ill-framed conditional assertion, whose conditions are to a large extent specified by the camera's operational state in photospace.
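
    Photospace-weighting of an objective attribute, as described above, is just a weighted average of quality scores over the illumination/distance histogram. A minimal sketch, where the bin layout and normalization are assumptions for illustration:

```python
import numpy as np

def photospace_weighted_quality(quality, photospace):
    """Weight an objective quality map by a photospace distribution.

    quality[i, j]   : quality score measured at illumination bin i and
                      camera-subject-distance bin j.
    photospace[i, j]: fraction of user pictures taken under those
                      conditions (normalized here to sum to 1).
    Returns the user-experienced mean quality.
    """
    photospace = np.asarray(photospace, dtype=float)
    photospace = photospace / photospace.sum()  # normalize to a distribution
    return float(np.sum(np.asarray(quality, dtype=float) * photospace))
```

    A camera that scores well only in bright, close-up bins will score poorly overall if the photospace distribution concentrates in dim, distant bins, which is the paper's central point.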

  1. High Performance Imaging Streak Camera for the National Ignition Facility

    SciTech Connect

    Opachich, Y. P.; Kalantar, D.; MacPhee, A.; Holder, J.; Kimbrough, J.; Bell, P. M.; Bradley, D.; Hatch, B.; Brown, C.; Landen, O.; Perfect, B. H.; Guidry, B.; Mead, A.; Charest, M.; Palmer, N.; Homoelle, D.; Browning, D.; Silbernagel, C.; Brienza-Larsen, G.; Griffin, M.; Lee, J. J.; Haugh, M. J.

    2012-01-01

    An x-ray streak camera platform has been characterized and implemented for use at the National Ignition Facility. The camera has been modified to meet the experiment requirements of the National Ignition Campaign and to perform reliably in conditions that produce high EMI. A train of temporal UV timing markers has been added to the diagnostic in order to calibrate the temporal axis of the instrument and the detector efficiency of the streak camera was improved by using a CsI photocathode. The performance of the streak camera has been characterized and is summarized in this paper. The detector efficiency and cathode measurements are also presented.

  2. High performance imaging streak camera for the National Ignition Facility.

    PubMed

    Opachich, Y P; Kalantar, D H; MacPhee, A G; Holder, J P; Kimbrough, J R; Bell, P M; Bradley, D K; Hatch, B; Brienza-Larsen, G; Brown, C; Brown, C G; Browning, D; Charest, M; Dewald, E L; Griffin, M; Guidry, B; Haugh, M J; Hicks, D G; Homoelle, D; Lee, J J; Mackinnon, A J; Mead, A; Palmer, N; Perfect, B H; Ross, J S; Silbernagel, C; Landen, O

    2012-12-01

    An x-ray streak camera platform has been characterized and implemented for use at the National Ignition Facility. The camera has been modified to meet the experiment requirements of the National Ignition Campaign and to perform reliably in conditions that produce high electromagnetic interference. A train of temporal ultra-violet timing markers has been added to the diagnostic in order to calibrate the temporal axis of the instrument and the detector efficiency of the streak camera was improved by using a CsI photocathode. The performance of the streak camera has been characterized and is summarized in this paper. The detector efficiency and cathode measurements are also presented. PMID:23278024

  3. Analysis of a multiple reception model for processing images from the solid-state imaging camera

    NASA Technical Reports Server (NTRS)

    Yan, T.-Y.

    1991-01-01

    A detection model to identify the presence of the Galileo optical communications from an Earth-based transmitter (GOPEX) signal by processing multiple signal receptions extracted from camera images is described. The model decomposes a multi-signal-reception camera image into a set of images so that the location of the pixel being illuminated is known a priori and the laser can illuminate only one pixel at each reception instance. Numerical results show that if effects on the pointing error due to atmospheric refraction can be controlled to between 20 and 30 µrad, the beam divergence of the GOPEX laser should be adjusted to between 30 and 40 µrad when the spacecraft is 30 million km away from Earth. Furthermore, increasing the number of receptions processed beyond five produces no significant gain in detection probability.
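
    The diminishing return from extra receptions follows from a simple independence model: if each reception is detected with probability p, the chance of at least one detection saturates geometrically. The per-reception probability below is hypothetical, chosen only to illustrate the saturation:

```python
def cumulative_detection_prob(p_single, n):
    """Probability of at least one detection in n independent receptions."""
    return 1.0 - (1.0 - p_single) ** n

# Hypothetical per-reception detection probability:
p = 0.6
gains = [cumulative_detection_prob(p, n) for n in range(1, 9)]
# The marginal gain from each extra reception shrinks geometrically,
# consistent with little advantage beyond ~5 receptions.
```

    With p = 0.6 the cumulative probability already exceeds 0.98 at five receptions, so further receptions add at most a couple of percent.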

  4. Toward an image compression algorithm for the high-resolution electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    Taking pictures with a camera that uses a digital recording medium instead of film has the advantage of recording and transmitting images without the use of a darkroom or a courier. However, high-resolution images contain an enormous amount of information and strain data-storage systems. Image compression will allow multiple images to be stored in the High-Resolution Electronic Still Camera. The camera is under development at Johnson Space Center. Fidelity of the reproduced image and compression speed are of paramount importance. Lossless compression algorithms are fast and faithfully reproduce the image, but their compression ratios will be unacceptably low due to noise in the front end of the camera. Future efforts will include exploring methods that will reduce the noise in the image and increase the compression ratio.

  5. Cloud Detection with the Earth Polychromatic Imaging Camera (EPIC)

    NASA Technical Reports Server (NTRS)

    Meyer, Kerry; Marshak, Alexander; Lyapustin, Alexei; Torres, Omar; Wang, Yugie

    2011-01-01

    The Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) would provide a unique opportunity for Earth and atmospheric research due not only to its Lagrange point sun-synchronous orbit, but also to the potential for synergistic use of spectral channels in both the UV and visible spectrum. As a prerequisite for most applications, the ability to detect the presence of clouds in a given field of view, known as cloud masking, is of utmost importance. It serves to determine both the potential for cloud contamination in clear-sky applications (e.g., land surface products and aerosol retrievals) and clear-sky contamination in cloud applications (e.g., cloud height and property retrievals). To this end, a preliminary cloud mask algorithm has been developed for EPIC that applies thresholds to reflected UV and visible radiances, as well as to reflected radiance ratios. This algorithm has been tested with simulated EPIC radiances over both land and ocean scenes, with satisfactory results. These test results, as well as algorithm sensitivity to potential instrument uncertainties, will be presented.

  6. Cloud detection with the Earth Polychromatic Imaging Camera (EPIC)

    NASA Astrophysics Data System (ADS)

    Meyer, K.; Marshak, A.; Lyapustin, A.; Torres, O.; Wang, Y.

    2011-12-01

    The Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) would provide a unique opportunity for Earth and atmospheric research due not only to its Lagrange point sun-synchronous orbit, but also to the potential for synergistic use of spectral channels in both the UV and visible spectrum. As a prerequisite for most applications, the ability to detect the presence of clouds in a given field of view, known as cloud masking, is of utmost importance. It serves to determine both the potential for cloud contamination in clear-sky applications (e.g., land surface products and aerosol retrievals) and clear-sky contamination in cloud applications (e.g., cloud height and property retrievals). To this end, a preliminary cloud mask algorithm has been developed for EPIC that applies thresholds to reflected UV and visible radiances, as well as to reflected radiance ratios. This algorithm has been tested with simulated EPIC radiances over both land and ocean scenes, with satisfactory results. These test results, as well as algorithm sensitivity to potential instrument uncertainties, will be presented.
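
    A threshold-based cloud mask of the kind described above can be sketched as a brightness test combined with a reflectance-ratio test. The threshold values below are illustrative placeholders, not the EPIC algorithm's settings:

```python
import numpy as np

def cloud_mask(r_uv, r_vis, r_thresh=0.3, ratio_thresh=0.8):
    """Toy threshold cloud mask in the spirit of the EPIC approach.

    A pixel is flagged cloudy when its visible reflectance exceeds a
    brightness threshold AND its UV/visible reflectance ratio is high
    (clouds are bright and spectrally flat, while most surfaces are not).
    Thresholds here are illustrative, not the operational values.
    """
    r_uv = np.asarray(r_uv, dtype=float)
    r_vis = np.asarray(r_vis, dtype=float)
    ratio = r_uv / np.maximum(r_vis, 1e-9)   # guard against divide-by-zero
    return (r_vis > r_thresh) & (ratio > ratio_thresh)
```

    In practice the thresholds would differ over land and ocean, which is why the abstract reports testing over both scene types.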

  7. Simulation of light-field camera imaging based on ray splitting Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Liu, Bin; Yuan, Yuan; Li, Sai; Shuai, Yong; Tan, He-Ping

    2015-11-01

    As microlens technology matures, studies of structural design and reconstruction algorithm optimization for light-field cameras are increasing. However, few of these studies address numerical physical simulation of the camera, and forward simulation with conventional ray tracing is difficult because of its low efficiency. In this paper, we develop a Monte Carlo method (MCM) based on ray splitting and build a physical model of a light-field camera with a microlens array to simulate its imaging and refocusing processes. The model enables simulation of different imaging modalities, and will be useful for camera structural design and error analysis system construction.

  8. Free-viewpoint image generation from a video captured by a handheld camera

    NASA Astrophysics Data System (ADS)

    Takeuchi, Kota; Fukushima, Norishige; Yendo, Tomohiro; Panahpour Tehrani, Mehrdad; Fujii, Toshiaki; Tanimoto, Masayuki

    2011-03-01

    In general, free-viewpoint images are generated from images captured by a camera array aligned on a straight line or a circle. A camera array can capture a synchronized dynamic scene, but it is expensive and requires great care to align exactly. In contrast, a handheld camera is easily available and can readily capture a static scene. We propose a method that generates free-viewpoint images from a video captured by a handheld camera in a static scene. Generating free-viewpoint images requires view images from several viewpoints together with the camera pose/position of each viewpoint. In one previous work, a checkerboard pattern had to be captured in every frame to calculate these parameters; in another, a pseudo-perspective projection was assumed, which limits camera movement. In this paper, we instead calculate these parameters by structure from motion. Additionally, we propose a method for selecting reference images from the many captured frames, and a method that uses projective block matching and a graph-cuts algorithm with reconstructed feature points to estimate the depth map of a virtual viewpoint.

  9. A new nuclear medicine scintillation camera based on image-intensifier tubes.

    PubMed

    Mulder, H; Pauwels, E K

    1976-11-01

    A large-field scintillation camera for nuclear medicine applications has recently been developed by Old Delft. The system is based on a large-field image-intensifier tube preceded by a scintillator mosaic. A comparison is made with present state-of-the-art scintillation cameras in terms of modulation transfer function (MTF) and sensitivity. These parameters, which determine the performance of scintillation cameras, are not independent of each other; therefore, a comparative evaluation should be made under well-defined and identical conditions. The new scintillation camera achieves considerable improvement in image quality. In fact, the intrinsic MTF of the new camera is rather close to unity in the spatial frequency range up to 1 line pair per centimeter (lp/cm). Further improvement would require a fundamentally new approach to gamma imaging, free of the limitations of conventional collimators (e.g., coded-aperture imaging techniques). PMID:978249
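The MTF comparison described above is conventionally computed from a measured line spread function (LSF). A minimal numpy sketch, assuming the LSF has already been measured (function name and test profile are illustrative):

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_pitch_cm):
    """Modulation transfer function: magnitude of the Fourier transform
    of the line spread function, normalized to unity at zero frequency.
    Returns spatial frequencies in line pairs per cm and the MTF curve."""
    lsf = np.asarray(lsf, dtype=float)
    otf = np.fft.rfft(lsf)
    mtf = np.abs(otf) / np.abs(otf[0])
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_cm)  # lp/cm
    return freqs, mtf
```

An MTF near unity up to 1 lp/cm, as the abstract reports, means the first few frequency bins of such a curve stay close to 1.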

  10. Modelling of Camera Phone Capture Channel for JPEG Colour Barcode Images

    NASA Astrophysics Data System (ADS)

    Tan, Keng T.; Ong, Siong Khai; Chai, Douglas

    As camera phones have permeated into our everyday lives, the two-dimensional (2D) barcode has attracted researchers and developers as a cost-effective ubiquitous computing tool. A variety of 2D barcodes and their applications have been developed. Often, only monochrome 2D barcodes are used because of their robustness in the uncontrolled operating environment of camera phones. However, we are seeing an emerging use of colour 2D barcodes for camera phones. Nonetheless, using a greater multitude of colours introduces errors that can negatively affect the robustness of barcode reading. This is especially true when developing a 2D barcode for camera phones that capture and store these barcode images in the baseline JPEG format. This paper presents one aspect of the errors introduced by such camera phones by modelling the camera phone capture channel for JPEG colour barcode images.

  11. A hybrid version of the Whipple observatory's air Cherenkov imaging camera for use in moonlight

    NASA Astrophysics Data System (ADS)

    Chantell, M. C.; Akerlof, C. W.; Badran, H. M.; Buckley, J.; Carter-Lewis, D. A.; Cawley, M. F.; Connaughton, V.; Fegan, D. J.; Fleury, P.; Gaidos, J.; Hillas, A. M.; Lamb, R. C.; Pare, E.; Rose, H. J.; Rovero, A. C.; Sarazin, X.; Sembroski, G.; Schubnell, M. S.; Urban, M.; Weekes, T. C.; Wilson, C.

    1997-02-01

    A hybrid version of the Whipple Observatory's atmospheric Cherenkov imaging camera that permits observation during periods of bright moonlight is described. The hybrid camera combines a blue-light blocking filter with the standard Whipple imaging camera to reduce sensitivity to wavelengths greater than 360 nm. Data taken with this camera are found to be free from the effects of the moonlit night sky after the application of simple off-line noise filtering. This camera has been used to successfully detect TeV gamma rays, in bright moonlight, from both the Crab Nebula and the active galactic nucleus Markarian 421 at the 4.9σ and 3.9σ levels of statistical significance, respectively. The energy threshold of the camera is estimated to be 1.1 (+0.6/-0.3) TeV from Monte Carlo simulations.

  12. Method for quantifying image quality in push-broom hyperspectral cameras

    NASA Astrophysics Data System (ADS)

    Høye, Gudrun; Løke, Trond; Fridman, Andrei

    2015-05-01

    We propose a method for measuring and quantifying image quality in push-broom hyperspectral cameras in terms of spatial misregistration caused by keystone and variations in the point spread function (PSF) across spectral channels, and in terms of image sharpness. The method is suitable both for traditional push-broom hyperspectral cameras where keystone is corrected in hardware and for cameras where keystone is corrected in postprocessing, such as resampling and mixel cameras. We show how the measured camera performance can be presented graphically in an intuitive and easy-to-understand way, comprising both image sharpness and spatial misregistration in the same figure. For the misregistration, we suggest showing both the mean standard deviation and the maximum value for each pixel. We also suggest how the method could be expanded to quantify spectral misregistration caused by the smile effect and the corresponding PSF variations. Finally, we have measured the performance of two HySpex SWIR 384 cameras using the suggested method. The method appears well suited for assessing camera quality and for comparing the performance of different hyperspectral imagers, and could become the future standard for measuring and quantifying the image quality of push-broom hyperspectral cameras.
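The misregistration statistics suggested above (standard deviation and per-pixel maximum of the keystone shift) can be sketched as follows, assuming the across-track centroid of a point source has already been measured per spectral band. The function name and array layout are illustrative, not the paper's notation.

```python
import numpy as np

def misregistration_stats(centroids):
    """centroids: (n_bands, n_pixels) array of across-track centroid
    positions (pixel units) of a scanned point source, one row per
    spectral band. Keystone appears as band-to-band deviation of these
    centroids for the same spatial pixel. Returns the scene-mean
    standard deviation, the mean per-pixel maximum shift, and the
    worst-case shift."""
    c = np.asarray(centroids, dtype=float)
    dev = c - c.mean(axis=0)            # deviation from band-mean position
    per_pixel_std = dev.std(axis=0)
    per_pixel_max = np.abs(dev).max(axis=0)
    return per_pixel_std.mean(), per_pixel_max.mean(), per_pixel_max.max()
```

Plotting the per-pixel statistics alongside a sharpness curve would give the kind of combined figure the abstract proposes.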

  13. Effects of frame rate and image resolution on pulse rate measured using multiple camera imaging photoplethysmography

    NASA Astrophysics Data System (ADS)

    Blackford, Ethan B.; Estepp, Justin R.

    2015-03-01

    Non-contact, imaging photoplethysmography uses cameras to facilitate measurements including pulse rate, pulse rate variability, respiration rate, and blood perfusion by measuring characteristic changes in light absorption at the skin's surface resulting from changes in blood volume in the superficial microvasculature. Several factors may affect the accuracy of the physiological measurement, including imager frame rate, resolution, compression, lighting conditions, image background, participant skin tone, and participant motion. Before this method can gain wider use outside basic research settings, its constraints and capabilities must be well understood. Recently, we presented a novel approach utilizing a synchronized, nine-camera, semicircular array backed by measurement of an electrocardiogram and fingertip reflectance photoplethysmogram. Twenty-five individuals participated in six, five-minute, controlled head motion artifact trials in front of a black and dynamic color backdrop. Increasing the input channel space for blind source separation using the camera array was effective in mitigating error from head motion artifact. Herein we present the effects of lower frame rates at 60 and 30 (reduced from 120) frames per second and reduced image resolution at 329x246 pixels (one-quarter of the original 658x492 pixel resolution) using bilinear and zero-order downsampling. This is the first time these factors have been examined for a multiple-imager array; the results align well with previous findings obtained with a single imager. Examining windowed pulse rates, there is little observable difference in mean absolute error or error distributions resulting from reduced frame rates or image resolution, thus lowering requirements for systems measuring pulse rate over sufficiently long time windows.
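The two downsampling schemes compared in this abstract can be sketched directly. Note that both roughly preserve the spatial mean over a region of interest, which is the quantity imaging photoplethysmography ultimately extracts; this is one intuition for why the reported error changed little. The half-resolution factor below is illustrative.

```python
import numpy as np

def bilinear_half(frame):
    """2x2 block average: bilinear-style downsampling to half resolution."""
    h, w = frame.shape
    return frame[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def zero_order_half(frame):
    """Zero-order (nearest-neighbor) downsampling: keep every other pixel."""
    return frame[::2, ::2]
```

The block average preserves the frame mean exactly, while nearest-neighbor sampling preserves it only approximately (adding spatial sampling noise).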

  14. Cloud level winds from the Venus Express Monitoring Camera imaging

    NASA Astrophysics Data System (ADS)

    Khatuntsev, I. V.; Patsaeva, M. V.; Titov, D. V.; Ignatiev, N. I.; Turin, A. V.; Limaye, S. S.; Markiewicz, W. J.; Almeida, M.; Roatsch, Th.; Moissl, R.

    2013-09-01

    Six years of continuous monitoring of Venus by the European Space Agency's Venus Express orbiter provides an opportunity to study the dynamics of the atmosphere of our neighbor planet. The Venus Monitoring Camera (VMC) on board the orbiter has acquired the longest and most complete set of ultraviolet images of Venus so far. These images enable a study of the cloud-level circulation by tracking the motion of cloud features. The highly elliptical polar orbit of Venus Express provides optimal conditions for observations of the Southern hemisphere at varying spatial resolution. The images used in this study were acquired over 2300 orbits of Venus Express, covering about 10 Venus years. Of these, we tracked cloud features manually in images from 127 orbits and by a digital correlation method in 576 orbits. The total number of wind vectors derived in this work is 45,600 for manual tracking and 391,600 for the digital method. This allowed us to determine the mean circulation, its long-term and diurnal trends, orbit-to-orbit variations, and periodicities. We also present the first results of tracking features in the VMC near-IR images. In low latitudes the mean zonal wind at the cloud tops (67 ± 2 km, following Rossow, W.B., Del Genio, A.T., Eichler, T. [1990]. J. Atmos. Sci. 47, 2053-2084) is about 90 m/s, with a maximum of about 100 m/s at 40-50°S. Poleward of 50°S the average zonal wind speed decreases with latitude. The corresponding atmospheric rotation period at the cloud tops has a maximum of about 5 days at the equator, decreases to approximately 3 days in middle latitudes, and stays almost constant poleward of 50°S. The mean poleward meridional wind slowly increases from zero at the equator to about 10 m/s at 50°S and then decreases to zero at the pole. The error of an individual measurement is 7.5-30 m/s. Wind speeds of 70-80 m/s were derived from near-IR images at low latitudes.
The VMC observations indicate a long-term trend for the zonal wind speed at low latitudes to increase from 85 m/s at the beginning of the mission to 110 m/s by the middle of 2012. VMC UV observations also showed significant short-term variations of the mean flow. The velocity difference between consecutive orbits in the region of the mid-latitude jet could reach 30 m/s, which likely indicates vacillation of the mean flow between a jet-like regime and quasi-solid-body rotation at mid-latitudes. Fourier analysis revealed periodicities in the zonal circulation at low latitudes. Within the equatorial region, up to 35°S, the zonal wind shows an oscillation with a period of 4.1-5 days (4.83 days on average), close to the super-rotation period at the equator. The wave amplitude is 4-17 m/s and decreases with latitude, a feature of a Kelvin wave. The VMC observations showed a clear diurnal signature. A minimum in the zonal speed was found close to noon (11-14 h), with maxima in the morning (8-9 h) and in the evening (16-17 h). The meridional component peaks in the early afternoon (13-15 h) at around 50°S latitude. The minimum of the meridional component is located at low latitudes in the morning (8-11 h). The horizontal divergence of the mean cloud motions associated with the diurnal pattern suggests upwelling motions in the morning at low latitudes and downwelling flow in the afternoon in the cold collar region.
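The digital correlation method used for cloud tracking can be illustrated with a standard FFT cross-correlation sketch that recovers the integer pixel shift between two image patches taken some time apart; dividing the shift by the time interval and the pixel scale then yields a wind vector. This is a generic textbook technique, not a reproduction of the VMC pipeline.

```python
import numpy as np

def track_displacement(patch0, patch1):
    """Find the integer (dy, dx) shift of patch1 relative to patch0 by
    locating the peak of the FFT-based cross-correlation."""
    p0 = patch0 - patch0.mean()
    p1 = patch1 - patch1.mean()
    corr = np.fft.ifft2(np.fft.fft2(p0).conj() * np.fft.fft2(p1)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # unwrap shifts larger than half the patch size to negative values
    if dy > p0.shape[0] // 2:
        dy -= p0.shape[0]
    if dx > p0.shape[1] // 2:
        dx -= p0.shape[1]
    return int(dy), int(dx)
```

For example, a shift of dx pixels between images dt seconds apart at a pixel scale of s km corresponds to a zonal speed of dx * s * 1000 / dt m/s.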

  15. Single-chip color imaging for UHDTV camera with a 33M-pixel CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Funatsu, Ryohei; Yamashita, Takayuki; Mitani, Kohji; Nojiri, Yuji

    2011-03-01

    We developed a compact, high-mobility ultrahigh-definition television (UHDTV) camera that uses a 33M-pixel CMOS image sensor for single-chip color imaging, with a resolution 16 times that of HDTV and a progressive frame rate of 60 Hz. The sensor has a Bayer color-filter array (CFA), and its output signal format is compatible with the conventional UHDTV camera that uses four 8M-pixel image sensors. The theoretical MTF characteristics of the single-chip camera and of a conventional four-sensor 8M-pixel CMOS camera were first calculated. A new Bayer CFA demosaicing technique for the single-chip UHDTV camera was then evaluated. Finally, a pick-up system for single-chip imaging with the 33M-pixel color CMOS image sensor was measured. The measurement results show that the resolution of this system equals or surpasses that of the conventional four-sensor CMOS camera, confirming the feasibility of a practical compact UHDTV camera based on single-chip color imaging.
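Bayer CFA demosaicing of the kind evaluated here can be illustrated with a minimal bilinear scheme for an RGGB mosaic. This is a generic textbook method, not the paper's technique: each colour plane keeps its sampled pixels and fills the gaps by a normalized average of the same-colour neighbours in a 3x3 window.

```python
import numpy as np

def conv3(img):
    """3x3 box sum with zero padding (avoids any SciPy dependency)."""
    p = np.pad(img, 1)
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def demosaic_bilinear(raw):
    """Bilinear demosaicing of an RGGB Bayer mosaic (toy sketch)."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3))
    masks[0::2, 0::2, 0] = 1            # R samples
    masks[0::2, 1::2, 1] = 1            # G samples on red rows
    masks[1::2, 0::2, 1] = 1            # G samples on blue rows
    masks[1::2, 1::2, 2] = 1            # B samples
    for c in range(3):
        num = conv3(raw * masks[:, :, c])     # sum of sampled neighbours
        den = conv3(masks[:, :, c])           # number of sampled neighbours
        rgb[:, :, c] = num / np.maximum(den, 1)
    return rgb
```

Production demosaicing (as in the paper) uses more sophisticated edge-adaptive interpolation to avoid the resolution loss and colour aliasing of plain bilinear filling.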

  16. 2D Hyperspectral Frame Imager Camera Data in Photogrammetric Mosaicking

    NASA Astrophysics Data System (ADS)

    Mäkeläinen, A.; Saari, H.; Hippi, I.; Sarkeala, J.; Soukkamäki, J.

    2013-08-01

    A new 2D hyperspectral frame camera system has been developed by VTT (Technical Research Center of Finland) and Rikola Ltd. It is a frame-based, very light camera with an RGB-NIR sensor, suitable for lightweight and cost-effective UAV planes. MosaicMill Ltd. has converted the camera data into a format suitable for photogrammetric processing, and the camera's geometrical accuracy and stability were evaluated to guarantee the accuracies required for end-user applications. MosaicMill Ltd. has also applied its EnsoMOSAIC technology to process the hyperspectral data into orthomosaics. This article describes the main steps and results of applying the hyperspectral sensor in orthomosaicking. The most promising results, as well as challenges, in agriculture and forestry are also described.

  17. Nonlinear color-image decomposition for image processing of a digital color camera

    NASA Astrophysics Data System (ADS)

    Saito, Takahiro; Aizawa, Haruya; Yamada, Daisuke; Komatsu, Takashi

    2009-01-01

    This paper extends the BV (Bounded Variation) - G and/or the BV-L1 variational nonlinear image-decomposition approaches, which are considered to be useful for image processing of a digital color camera, to genuine color-image decomposition approaches. For utilizing inter-channel color cross-correlations, this paper first introduces TV (Total Variation) norms of color differences and TV norms of color sums into the BV-G and/or BV-L1 energy functionals, and then derives denoising-type decomposition-algorithms with an over-complete wavelet transform, through applying the Besov-norm approximation to the variational problems. Our methods decompose a noisy color image without producing undesirable low-frequency colored artifacts in its separated BV-component, and they achieve desirable high-quality color-image decomposition, which is very robust against colored random noise.
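The TV norms of colour sums and colour differences that the paper introduces for inter-channel coupling can be sketched as follows, using anisotropic TV with forward differences. This is an illustrative formulation of the norms themselves, not the authors' full variational decomposition algorithm.

```python
import numpy as np

def tv_norm(u):
    """Anisotropic total-variation norm: sum of absolute forward differences."""
    return np.abs(np.diff(u, axis=0)).sum() + np.abs(np.diff(u, axis=1)).sum()

def color_tv_terms(rgb):
    """TV of the colour sum and of pairwise colour differences, the kind of
    coupling terms added to a BV-G / BV-L1 energy to exploit inter-channel
    correlations: edges shared by all channels cost little in the
    difference terms, while coloured noise is penalized."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    tv_sum = tv_norm(r + g + b)
    tv_diff = tv_norm(r - g) + tv_norm(g - b) + tv_norm(b - r)
    return tv_sum, tv_diff
```

For a grayscale edge (all channels identical) the difference terms vanish, which is exactly the behaviour that suppresses the low-frequency colored artifacts mentioned in the abstract.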

  18. Face acquisition camera design using the NV-IPM image generation tool

    NASA Astrophysics Data System (ADS)

    Howell, Christopher L.; Choi, Hee-Sue; Reynolds, Joseph P.

    2015-05-01

    In this paper, we demonstrate the utility of the Night Vision Integrated Performance Model (NV-IPM) image generation tool by using it to create a database of face images with controlled degradations. Available face recognition algorithms can then be used to directly evaluate camera designs using these degraded images. By controlling camera effects such as blur, noise, and sampling, we can analyze algorithm performance and establish a more complete performance standard for face acquisition cameras. The ability to accurately simulate imagery and directly test with algorithms not only improves the system design process but greatly reduces development cost.
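A minimal stand-in for such a controlled degradation chain (blur, then noise, then sampling) might look like the following. It is a generic sketch under stated assumptions, not the NV-IPM image generation tool itself; the parameter values are illustrative.

```python
import numpy as np

def degrade(img, blur_sigma=1.0, noise_sigma=0.02, downsample=2, seed=0):
    """Toy degradation chain: separable Gaussian blur, additive Gaussian
    noise, then decimation, mimicking the blur/noise/sampling controls a
    camera-design model exposes."""
    rng = np.random.default_rng(seed)
    radius = int(3 * blur_sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / blur_sigma) ** 2)
    k /= k.sum()                                   # normalized 1-D kernel
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, blurred)
    noisy = blurred + rng.normal(0.0, noise_sigma, blurred.shape)
    return noisy[::downsample, ::downsample]       # sampling / decimation
```

Sweeping these parameters over a face database and scoring each variant with a recognition algorithm reproduces, in spirit, the evaluation loop the abstract describes.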

  19. The advantages of using a Lucky Imaging camera for observations of microlensing events

    NASA Astrophysics Data System (ADS)

    Sajadian, Sedighe; Rahvar, Sohrab; Dominik, Martin; Hundertmark, Markus

    2016-03-01

    In this work, we study the advantages of using a Lucky Imaging camera for observations of potential planetary microlensing events. Our aim is to reduce the blending effect and enhance exoplanet signals in binary lensing systems composed of an exoplanet and its parent star. We simulate planetary microlensing light curves based on present microlensing surveys and follow-up telescopes, one of which is equipped with a Lucky Imaging camera; this camera is used at the Danish 1.54-m follow-up telescope. Using a specific observational strategy, for an Earth-mass planet in the resonance regime, where the detection probability in crowded fields is smaller, Lucky Imaging observations improve the detection efficiency, which reaches 2 per cent. Given the difficulty of detecting the signal of an Earth-mass planet in crowded-field imaging even in the resonance regime with conventional cameras, we show that Lucky Imaging can substantially improve the detection efficiency.
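The frame-selection step at the heart of Lucky Imaging can be sketched with a simple sharpness metric; Laplacian energy is one common choice, and the fraction of frames kept is a tunable parameter. This sketch is illustrative and omits the shift-and-add registration applied to the kept frames.

```python
import numpy as np

def sharpness(frame):
    """Laplacian-energy sharpness metric (higher = sharper), computed
    with wrap-around shifts to keep the sketch dependency-free."""
    lap = (-4 * frame
           + np.roll(frame, 1, axis=0) + np.roll(frame, -1, axis=0)
           + np.roll(frame, 1, axis=1) + np.roll(frame, -1, axis=1))
    return (lap ** 2).mean()

def lucky_select(frames, keep_fraction=0.02):
    """Keep only the sharpest few percent of short-exposure frames,
    the core of the Lucky Imaging idea."""
    scores = [sharpness(f) for f in frames]
    n_keep = max(1, int(len(frames) * keep_fraction))
    order = np.argsort(scores)[::-1]          # sharpest first
    return [frames[i] for i in order[:n_keep]]
```

Because only the least-blurred moments of atmospheric seeing survive selection, the effective PSF narrows, which is what reduces blending in crowded microlensing fields.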

  20. Gamma camera image acquisition, display, and processing with the personal microcomputer

    SciTech Connect

    Lear, J.L.; Pratt, J.P.; Roberts, D.R.; Johnson, T.; Feyerabend, A.

    1990-04-01

    The authors evaluated the potential of a microcomputer for direct acquisition, display, and processing of gamma camera images. Boards for analog-to-digital conversion and image zooming were designed, constructed, and interfaced to the Macintosh II (Apple Computer, Cupertino, Calif). Software was written for processing of single, gated, and time series images. The system was connected to gamma cameras, and its performance was compared with that of dedicated nuclear medicine computers. Data could be acquired from gamma cameras at rates exceeding 200,000 counts per second, with spatial resolution exceeding intrinsic camera resolution. Clinical analysis could be rapidly performed. This system performed better than most dedicated nuclear medicine computers with respect to speed of data acquisition and spatial resolution of images while maintaining full compatibility with the standard image display, hard-copy, and networking formats. It could replace such dedicated systems in the near future as software is refined.
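The core acquisition step, binning digitized (x, y) scintillation-event positions into an image matrix with an optional hardware-style zoom, can be sketched as follows. This is a generic illustration, not the authors' Macintosh II implementation; coordinate conventions are assumptions.

```python
import numpy as np

def bin_events(xs, ys, matrix=128, zoom=1.0, center=(0.5, 0.5)):
    """Histogram gamma-camera event positions (normalized to [0, 1))
    into a square image matrix, magnifying about `center` by `zoom`.
    Events that fall outside the zoomed field of view are discarded."""
    xs = (np.asarray(xs, dtype=float) - center[0]) * zoom + 0.5
    ys = (np.asarray(ys, dtype=float) - center[1]) * zoom + 0.5
    keep = (xs >= 0) & (xs < 1) & (ys >= 0) & (ys < 1)
    img, _, _ = np.histogram2d(ys[keep], xs[keep], bins=matrix,
                               range=[[0, 1], [0, 1]])
    return img
```

Gated and time-series acquisition, as described in the abstract, amounts to routing each event into one of several such matrices according to the ECG phase or timestamp.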

  1. Mass movement slope streaks imaged by the Mars Orbiter Camera

    NASA Astrophysics Data System (ADS)

    Sullivan, Robert; Thomas, Peter; Veverka, Joseph; Malin, Michael; Edgett, Kenneth S.

    2001-10-01

    Narrow, fan-shaped dark streaks on steep Martian slopes were originally observed in Viking Orbiter images, but a definitive explanation was not possible because of resolution limitations. Pictures acquired by the Mars Orbiter Camera (MOC) aboard the Mars Global Surveyor (MGS) spacecraft show innumerable examples of dark slope streaks distributed widely, but not uniformly, across the brighter equatorial regions, as well as individual details of these features that were not visible in Viking Orbiter data. Dark slope streaks (as well as much rarer bright slope streaks) represent one of the most widespread and easily recognized styles of mass movement currently affecting the Martian surface. New dark streaks have formed since Viking and even during the MGS mission, confirming earlier suppositions that higher-contrast dark streaks are younger, and fade (brighten) with time. The darkest slope streaks represent ~10% contrast with surrounding slope materials. No small outcrops supplying dark material (or bright material, for bright streaks) have been found at streak apexes. Digitate downslope ends indicate slope streak formation involves a ground-hugging flow subject to deflection by minor topographic obstacles. The model we favor explains most dark slope streaks as scars from dust avalanches following oversteepening of air fall deposits. This process is analogous to terrestrial avalanches of oversteepened dry, loose snow which produce shallow avalanche scars with similar morphologies. Low angles of internal friction, typically 10°-30° for terrestrial loess and clay materials, suggest that mass movement of (low-cohesion) Martian dusty air fall is possible on a wide range of gradients. Martian gravity, presumed low density of the air fall deposits, and thin (unresolved by MOC) failed layer depths imply extremely low cohesive strength at time of failure, consistent with expectations for an air fall deposit of dust particles. 
As speed increases during a dust avalanche, a growing fraction of the avalanching dust particles acquires sufficient kinetic energy to be lost to the atmosphere in suspension, limiting the momentum of the descending avalanche front. The equilibrium speed, where rate of mass lost to the atmosphere is balanced by mass continually entrained as the avalanche front descends, decreases with decreasing gradient. This mechanism explains observations from MOC images indicating slope streaks formed with little reserve kinetic energy for run-outs on to valley floors and explains why large distal deposits of displaced material are not found at downslope streak ends. The mass movement process of dark (and bright) slope streak formation through dust avalanches involves renewable sources of dust only, leaving underlying slope materials unaffected. Areas where dark and bright slope streaks currently form and fade in cycles are closely correlated with low thermal inertia and probably represent regions where dust currently is accumulating, not just residing.

  2. Imaging Asteroid 4 Vesta Using the Framing Camera

    NASA Technical Reports Server (NTRS)

    Keller, H. Uwe; Nathues, Andreas; Coradini, Angioletta; Jaumann, Ralf; Jorda, Laurent; Li, Jian-Yang; Mittlefehldt, David W.; Mottola, Stefano; Raymond, C. A.; Schroeder, Stefan E.

    2011-01-01

    The Framing Camera (FC) onboard the Dawn spacecraft serves a dual purpose. Next to its central role as a prime science instrument, it is also used for the complex navigation of the ion-drive spacecraft. The CCD detector, with 1024 by 1024 pixels, provides the stability needed for a multiyear mission and for the high photometric accuracy required over the wavelength band from 400 to 1000 nm covered by 7 band-pass filters. Vesta will be observed from 3 orbit stages with image scales of 227, 63, and 17 m/px, respectively. The mapping of Vesta's surface at medium resolution will only be completed during the exit phase, when the north pole will be illuminated. A detailed pointing strategy will cover the surface at least twice at similar phase angles to provide stereo views for reconstruction of the topography. During approach, the phase function of Vesta was determined over a range of angles not accessible from Earth. This is the first step in deriving the photometric function of the surface. Combining the topography based on stereo tie points with the photometry in an iterative procedure will disclose details of the surface morphology at considerably smaller scales than the pixel scale. The 7 color filters are well positioned to provide information on the spectral slope in the visible, the depth of the strong pyroxene absorption band, and their variability over the surface. Cross-calibration with the VIR spectrometer, which extends into the near IR, will provide detailed maps of Vesta's surface mineralogy and physical properties. Georeferencing all these observations will result in a coherent and unique data set. During Dawn's approach and capture, FC has already demonstrated its performance. The strong variation observed by the Hubble Space Telescope can now be correlated with surface units and features. We will report on results obtained from images taken during survey mode covering the whole illuminated surface. 
Vesta is a planet-like differentiated body, but its surface gravity and escape velocity are comparable to those of other asteroids and hence much smaller than those of the inner planets or

  3. An airborne multispectral imaging system based on two consumer-grade cameras for agricultural remote sensing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS) sensor with 5616 × 3744 pixels. One came...

  4. A New Lunar Atlas: Mapping the Moon with the Wide Angle Camera

    NASA Astrophysics Data System (ADS)

    Speyerer, E.; Robinson, M. S.; Boyd, A.; Sato, H.

    2012-12-01

    The Lunar Reconnaissance Orbiter (LRO) spacecraft launched in June 2009 and began systematically mapping the lunar surface and providing a priceless dataset for the planetary science community and future mission planners. From 20 September 2009 to 11 December 2011, the spacecraft was in a nominal 50 km polar orbit, except for two one-month long periods when a series of spacecraft maneuvers enabled low altitude flyovers (as low as 22 km) of key exploration and scientifically interesting targets. One of the instruments, the Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) [1], captured nearly continuous synoptic views of the illuminated lunar surface. The WAC is a 7-band (321, 360, 415, 566, 604, 643, 689 nm) push frame imager with a field of view of 60° in color mode and 90° in monochrome mode. This broad field of view enables the WAC to reimage nearly 50% (at the equator, where the orbit tracks are spaced the furthest) of the terrain it imaged in the previous orbit. The visible bands of map projected WAC images have a pixel scale of 100 m, while UV bands have a pixel scale of 400 m due to 4x4 pixel on-chip binning that increases signal-to-noise. The nearly circular polar orbit and short (two hour) orbital periods enable seamless mosaics of broad areas of the surface with uniform lighting and resolution. In March of 2011, the LROC team released the first version of the global monochrome (643 nm) morphologic map [2], which comprised 15,000 WAC images collected over three periods. With the over 130,000 WAC images collected while the spacecraft was in the 50 km orbit, a new set of mosaics are being produced by the LROC Team and will be released to the Planetary Data System. These new maps include an updated morphologic map with an improved set of images (limiting illumination variations and gores due to off-nadir observation of other instruments) and a new photometric correction derived from the LROC WAC dataset. 
In addition, a higher sun (lower incidence angle) mosaic will also be released. This map has minimal shadows and highlights albedo differences. In addition, seamless regional WAC mosaics acquired under multiple lighting geometries (Sunlight coming from the East, overhead, and West) will also be produced for key areas of interest. These new maps use the latest terrain model (LROC WAC GLD100) [3], updated spacecraft ephemeris provided by the LOLA team [4], and improved WAC distortion model [5] to provide accurate placement of each WAC pixel on the lunar surface. References: [1] Robinson et al. (2010) Space Sci. Rev. [2] Speyerer et al. (2011) LPSC, #2387. [3] Scholten et al. (2012) JGR. [4] Mazarico et al. (2012) J. of Geodesy [5] Speyerer et al. (2012) ISPRS Congress.

  5. A Prediction Method of TV Camera Image for Space Manual-control Rendezvous and Docking

    NASA Astrophysics Data System (ADS)

    Zhen, Huang; Qing, Yang; Wenrui, Wu

    Space manual-control rendezvous and docking (RVD) is a key technology for accomplishing the RVD mission in manned space engineering, especially when the automatic control system is out of work. The pilot on the chase spacecraft manipulates the hand-stick using the image of the target spacecraft captured by a TV camera, from which the relative position and attitude of the chase and target spacecraft can be determined. Therefore, the size, position, brightness, and shadowing of the target on the TV camera are key to guaranteeing the success of manual-control RVD. A method of predicting the on-orbit TV camera image at different relative positions and lighting conditions during the RVD process is discussed. Firstly, the basic principle of capturing the image of the cross drone on the target spacecraft with the TV camera is analyzed theoretically, on the basis of which the strategy of manual-control RVD is discussed in detail. Secondly, the relationship between the displayed size or position and the real relative distance of the chase and target spacecraft is presented, the brightness of and reflection by the target spacecraft under different lighting conditions are described, and the shadow cast on the cross drone by the chase or target spacecraft is analyzed. Thirdly, a prediction method for on-orbit TV camera images at a given orbit and lighting condition is provided, and the characteristics of the TV camera image during RVD are analyzed. Finally, the size, position, brightness, and shadowing of the target spacecraft in the TV camera image at a typical orbit are simulated. The results, obtained by comparing the simulated images with real images captured by the TV camera on the Shenzhou manned spaceship, show that the prediction method is reasonable.
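The displayed-size-versus-range relationship mentioned above follows from the pinhole projection model: on-screen size scales as focal length times real size over range. A minimal sketch with illustrative parameter names (the actual camera calibration is not given in the abstract):

```python
def apparent_size_px(real_size_m, range_m, focal_length_px):
    """Pinhole-projection relation: the displayed size of a target of
    known physical size is inversely proportional to its range.
    focal_length_px is the focal length expressed in pixel units."""
    return focal_length_px * real_size_m / range_m
```

Inverting the same relation is how a pilot (or a simulator) infers relative range from the measured on-screen size of the cross drone.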

  6. Single camera imaging system for color and near-infrared fluorescence image guided surgery

    PubMed Central

    Chen, Zhenyue; Zhu, Nan; Pacheco, Shaun; Wang, Xia; Liang, Rongguang

    2014-01-01

    Near-infrared (NIR) fluorescence imaging systems have been developed for image guided surgery in recent years. However, current systems are typically bulky and work only when surgical light in the operating room (OR) is off. We propose a single camera imaging system that is capable of capturing NIR fluorescence and color images under normal surgical lighting illumination. Using a new RGB-NIR sensor and synchronized NIR excitation illumination, we have demonstrated that the system can acquire both color information and fluorescence signal with high sensitivity under normal surgical lighting illumination. The experimental results show that ICG sample with concentration of 0.13 μM can be detected when the excitation irradiance is 3.92 mW/cm2 at an exposure time of 10 ms. PMID:25136502

  7. Space-variant restoration of images degraded by camera motion blur.

    PubMed

    Sorel, Michal; Flusser, Jan

    2008-02-01

    We examine the problem of restoration from multiple images degraded by camera motion blur. We consider scenes with significant depth variations resulting in space-variant blur. The proposed algorithm can be applied if the camera moves along an arbitrary curve parallel to the image plane, without any rotations. The knowledge of camera trajectory and camera parameters is not necessary. At the input, the user selects a region where depth variations are negligible. The algorithm belongs to the group of variational methods that estimate simultaneously a sharp image and a depth map, based on the minimization of a cost functional. To initialize the minimization, it uses an auxiliary window-based depth estimation algorithm. Feasibility of the algorithm is demonstrated by three experiments with real images. PMID:18270103

  8. A small field of view camera for hybrid gamma and optical imaging

    NASA Astrophysics Data System (ADS)

    Lees, J. E.; Bugby, S. L.; Bhatia, B. S.; Jambi, L. K.; Alqahtani, M. S.; McKnight, W. R.; Ng, A. H.; Perkins, A. C.

    2014-12-01

    The development of compact low profile gamma-ray detectors has allowed the production of small field of view, hand held imaging devices for use at the patient bedside and in operating theatres. The combination of an optical and a gamma camera, in a co-aligned configuration, offers high spatial resolution multi-modal imaging giving a superimposed scintigraphic and optical image. This innovative introduction of hybrid imaging offers new possibilities for assisting surgeons in localising the site of uptake in procedures such as sentinel node detection. Recent improvements to the camera system along with results of phantom and clinical imaging are reported.
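The superimposed scintigraphic-plus-optical display can be illustrated with a simple alpha blend of a co-registered gamma image onto the optical image. This is an illustrative scheme with an assumed red heat layer, not the camera's actual display pipeline.

```python
import numpy as np

def hybrid_overlay(optical_rgb, gamma_counts, alpha=0.5):
    """Superimpose a co-aligned gamma image on an optical image:
    normalize the counts, map them to a red heat layer, and alpha-blend
    only where counts exist, so cold regions keep the plain optical view."""
    g = gamma_counts.astype(float)
    if g.max() > 0:
        g = g / g.max()
    heat = np.zeros_like(optical_rgb, dtype=float)
    heat[..., 0] = g                       # red channel encodes activity
    w = (alpha * g)[..., None]             # per-pixel blend weight
    return (1 - w) * optical_rgb + w * heat
```

The clinically useful property is exactly what the blend preserves: anatomical context from the optical image with the uptake site highlighted on top of it.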

  9. New opportunities for quality enhancing of images captured by passive THz camera

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2014-10-01

    As is well known, a passive THz camera can reveal concealed objects without contact and poses no danger to the person being screened. The camera's usefulness depends on its temperature resolution, which determines the limits of detection: the minimal size of a concealed object, the maximal detection distance, and the image quality. Computer processing of THz images can improve image quality many times over without additional engineering effort, so developing modern processing software for THz images is a pressing problem. With appropriate new methods, one may expect a temperature resolution sufficient to see a banknote in a person's pocket without any physical contact. Modern processing algorithms also make it possible to see objects inside the human body through the temperature trace they leave on the skin, which substantially broadens the applicability of passive THz cameras to counterterrorism problems. We demonstrate the detection capabilities achieved to date, both for concealed objects and for clothing components, using computer processing of images captured by passive THz cameras from several manufacturers. A further result discussed in the paper is the observation of THz radiation emitted by an incandescent lamp and of an image reflected from a ceramic floor plate. We consider images produced by passive THz cameras manufactured by Microsemi Corp., ThruVision Corp., and Capital Normal University (Beijing, China). All image-processing algorithms considered in this paper were developed by the Russian part of the author list. Keywords: THz wave, passive imaging camera, computer processing, security screening, concealed and forbidden objects, reflected image, hand seeing, banknote seeing, ceramic floor plate, incandescent lamp.

  10. Development of gamma ray imaging cameras. Progress report for second year

    SciTech Connect

    Wehe, D.K.; Knoll, G.F.

    1992-05-28

    In January 1990, the Department of Energy initiated this project with the objective of developing the technology for general-purpose, portable gamma ray imaging cameras useful to the nuclear industry. The ultimate goal of this R&D initiative is to develop the analog of the color television camera, where the camera would respond to gamma rays instead of visible photons. The displayed two-dimensional real-time image would indicate the geometric location of the radiation relative to the camera's orientation, while the brightness and "color" would indicate the intensity and energy of the radiation (and hence identify the emitting isotope). There is a strong motivation for developing such a device for applications within the nuclear industry, for both high- and low-level waste repositories, for environmental restoration problems, and for space and fusion applications. At present, there are no general-purpose radiation cameras capable of producing spectral images for such practical applications. At the time of this writing, work on this project has been underway for almost 18 months. Substantial progress has been made in the project's two primary areas: mechanically-collimated camera (MCC) and electronically-collimated camera (ECC) designs. We present developments covering the mechanically-collimated design, and then discuss the efforts on the electronically-collimated camera. The renewal proposal addresses the continuing R&D efforts for the third-year effort. 8 refs.

  11. Myocardial Perfusion Imaging with a Solid State Camera: Simulation of a Very Low Dose Imaging Protocol

    PubMed Central

    Nakazato, Ryo; Berman, Daniel S.; Hayes, Sean W.; Fish, Mathews; Padgett, Richard; Xu, Yuan; Lemley, Mark; Baavour, Rafael; Roth, Nathaniel; Slomka, Piotr J.

    2012-01-01

    High-sensitivity dedicated cardiac camera systems provide an opportunity to lower injected doses for SPECT myocardial perfusion imaging (MPI), but the exact limits for lowering doses have not been determined. List-mode data acquisition allows reconstruction from various fractions of the acquired counts, simulating gradually lower administered doses. We aimed to determine the feasibility of very low dose MPI by exploring the minimal myocardial count level for accurate MPI. Methods: Seventy-nine patients were studied (mean body mass index 30.0 ± 6.6, range 20.2–54.0 kg/m2) who underwent 1-day standard-dose 99mTc-sestamibi exercise or adenosine rest/stress MPI for clinical indications on a cadmium zinc telluride dedicated cardiac camera. Imaging time was 14 min with 803 ± 200 MBq (21.7 ± 5.4 mCi) of 99mTc injected at stress. To simulate clinical scans with a lower dose at the same imaging time, we reframed the list-mode raw data to contain fractions of the originally acquired counts. Accordingly, 6 stress-equivalent datasets were reconstructed, each corresponding to a fraction of the original scan. Automated QPS/QGS software was used to quantify total perfusion deficit (TPD) and ejection fraction (EF) for all 553 datasets. The minimal acceptable count level was determined from a previous report on the repeatability of same-day, same-injection Anger camera studies. Pearson correlation coefficients and the SD of differences in TPD were calculated for all scans. Results: The correlations of quantitative perfusion and function analysis were excellent for both global and regional analysis on all simulated low-count scans (all r ≥ 0.95, p < 0.0001). The minimal acceptable count level was determined to be 1.0 million counts for the left ventricular region. At this count level, the SD of differences was 1.7% for TPD and 4.2% for EF. This count level would correspond to a 92.5 MBq (2.5 mCi) injected dose for the 14-min acquisition.
Conclusion: Images with 1.0 million myocardial counts appear sufficient to maintain excellent agreement in quantitative perfusion and function parameters compared with those determined from 8.0 million count images. With a dedicated cardiac camera, such images could be obtained over 10 minutes with an effective radiation dose of less than 1 mSv without significant sacrifice in accuracy. PMID:23321457
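    The count-fraction simulation can be mimicked statistically with binomial thinning, in which each detected event is kept independently with a fixed probability. This is a hedged sketch of the idea, not the authors' list-mode reframing code; the function name and synthetic image are illustrative.

    ```python
    import numpy as np

    def simulate_low_dose(counts, fraction, seed=0):
        """Binomial thinning of a count image: keep each detected event
        independently with probability `fraction`. Thinning Poisson counts
        this way yields Poisson counts at the reduced mean, mimicking a
        lower injected dose at the same imaging time."""
        rng = np.random.default_rng(seed)
        return rng.binomial(counts, fraction)

    # A synthetic "full-dose" count image (Poisson, mean 100 counts/pixel).
    full = np.random.default_rng(1).poisson(100.0, size=(64, 64))
    half = simulate_low_dose(full, 0.5)  # simulated half-dose acquisition
    ```

    The thinned image keeps about half the total counts and never exceeds the original at any pixel, which is what makes it a statistically faithful stand-in for a lower-dose scan.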

  12. A 58 x 62 pixel Si:Ga array camera for 5 - 14 micron astronomical imaging

    NASA Technical Reports Server (NTRS)

    Gezari, D. Y.; Folz, W. C.; Woods, L. A.; Wooldridge, J. B.

    1989-01-01

    A new infrared array camera system has been successfully applied to high-background 5-14 micron astronomical imaging photometry observations, using a hybrid 58 x 62 pixel Si:Ga array detector. The off-axis reflective optical design, incorporating a parabolic camera mirror, circular variable filter wheel, and cold aperture stop, produces diffraction-limited images with negligible spatial distortion and minimum thermal background loading. The camera electronic system architecture is divided into three subsystems: (1) a high-speed analog front end, including a 2-channel preamp module, array address timing generator, and bias power supplies; (2) two 16-bit, 3 microsec per conversion A/D converters interfaced to an arithmetic array processor; and (3) an LSI 11/73 camera control and data analysis computer. The background-limited observational noise performance of the camera at the NASA/IRTF telescope is NEFD (1 sigma) = 0.05 Jy/pixel min^(1/2).

  13. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    NASA Astrophysics Data System (ADS)

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious: they require acquisition of multiple images of a target pattern in its entirety to produce a satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising for use in our augmented reality visualization system for laparoscopic surgery.
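    As a sketch of how a TRE figure of this kind can be computed (the intrinsic-matrix values, point coordinates, and helper names are hypothetical, and the abstract does not give the exact TRE definition, so mean Euclidean distance between predicted and measured target positions is assumed):

    ```python
    import numpy as np

    def project(K, points_cam):
        """Pinhole projection of 3-D points (camera frame, metres) to
        pixel coordinates using the intrinsic matrix K."""
        uvw = K @ points_cam.T
        return (uvw[:2] / uvw[2]).T

    def target_registration_error(predicted, measured):
        """Mean Euclidean distance between corresponding target points."""
        return float(np.mean(np.linalg.norm(predicted - measured, axis=1)))

    # Assumed intrinsics: focal length 800 px, principal point (320, 240).
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    pts = np.array([[0.0, 0.0, 1.0], [0.1, -0.05, 2.0]])
    px = project(K, pts)
    ```

    Once the intrinsics come from a calibration (rdCalib or OpenCV), comparing projected target positions against tracked ones in this way yields the millimetre-scale TRE values the abstract reports.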

  14. Be Foil "Filter Knee Imaging" NSTX Plasma with Fast Soft X-ray Camera

    SciTech Connect

    B.C. Stratton; S. von Goeler; D. Stutman; K. Tritz; L.E. Zakharov

    2005-08-08

    A fast soft x-ray (SXR) pinhole camera has been implemented on the National Spherical Torus Experiment (NSTX). This paper presents observations and describes the Be foil Filter Knee Imaging (FKI) technique for reconstructions of a m/n=1/1 mode on NSTX. The SXR camera has a wide-angle (28°) field of view of the plasma. The camera images nearly the entire diameter of the plasma and a comparable region in the vertical direction. SXR photons pass through a beryllium foil and are imaged by a pinhole onto a P47 scintillator deposited on a fiber optic faceplate. An electrostatic image intensifier demagnifies the visible image by 6:1 to match it to the size of the charge-coupled device (CCD) chip. A pair of lenses couples the image to the CCD chip.

  15. An image compression algorithm for a high-resolution digital still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    The Electronic Still Camera (ESC) project will provide for the capture and transmission of high-quality images without the use of film. The image quality will be superior to video and will approach the quality of 35mm film. The camera, which will have the same general shape and handling as a 35mm camera, will be able to send images to Earth in near real-time. Images will be stored in computer memory (RAM) in removable cartridges readable by a computer. To save storage space, the image will be compressed and reconstructed at the time of viewing. Both lossless and lossy image compression algorithms are studied, described, and compared.
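    The lossless/lossy distinction the study compares can be sketched in a few lines (synthetic data, not the ESC's actual algorithms): a lossless codec reconstructs the samples exactly, while a lossy one quantizes first, trading fidelity for a smaller compressed size.

    ```python
    import zlib
    import numpy as np

    rng = np.random.default_rng(0)
    # Smooth synthetic scan line: neighbouring samples are correlated,
    # as in natural images, so the byte stream compresses well.
    row = np.cumsum(rng.integers(-2, 3, size=4096)).astype(np.int16)

    # Lossless: the original samples are exactly recoverable.
    lossless = zlib.compress(row.tobytes())
    restored = np.frombuffer(zlib.decompress(lossless), dtype=np.int16)

    # Lossy: quantize first (drop 3 bits of precision), then compress.
    # The quantized signal has long runs of equal values, so it shrinks
    # further, but the dropped bits cannot be recovered.
    lossy = zlib.compress((row // 8).astype(np.int16).tobytes())
    ```

    The trade-off shown here is the one the ESC faced: lossless keeps film-like fidelity at a modest space saving, while lossy compression buys more storage in the removable RAM cartridges at the cost of irreversible detail loss.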

  16. Using focused plenoptic cameras for rich image capture.

    PubMed

    Georgiev, T; Lumsdaine, A; Chunev, G

    2011-01-01

    This approach uses a focused plenoptic camera to capture the plenoptic function's rich "non 3D" structure. It employs two techniques. The first simultaneously captures multiple exposures (or other aspects) based on a microlens array having an interleaved set of different filters. The second places multiple filters at the main lens aperture. PMID:24807971

  17. DEFINITION OF AIRWAY COMPOSITION WITHIN GAMMA CAMERA IMAGES

    EPA Science Inventory

    The efficacies of inhaled pharmacologic drugs in the prophylaxis and treatment of airway diseases could be improved if particles were selectively directed to appropriate sites. In the medical arena, planar gamma scintillation cameras may be employed to study factors affecting such...

  18. Temperature resolution enhancing of commercially available THz passive cameras due to computer processing of images

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.; Kuchik, Igor E.

    2014-06-01

    As is well known, the passive THz camera is a very promising tool for security applications: it can reveal concealed objects without contact and poses no danger to the person being screened. Its efficiency depends on its temperature resolution, which determines the limits of detection for concealed objects: minimal object size, maximal detection distance, and image detail. One promising way to enhance image quality is computer processing. By processing THz images of objects concealed on the human body, image quality can be improved many times over; consequently, the effective instrumental resolution of the device may be increased without any additional engineering effort. We demonstrate new possibilities for seeing clothing details that the raw images produced by the THz cameras do not reveal. We achieve good image quality by applying various spatial filters, demonstrating that the processed images do not depend on the particular mathematical operations used. This result demonstrates the feasibility of seeing such objects. We consider images produced by passive THz cameras manufactured by Microsemi Corp., ThruVision Corp., and Capital Normal University (Beijing, China).

  19. Joint estimation of high resolution images and depth maps from light field cameras

    NASA Astrophysics Data System (ADS)

    Ohashi, Kazuki; Takahashi, Keita; Fujii, Toshiaki

    2014-03-01

    Light field cameras are attracting much attention as tools for acquiring 3D information of a scene through a single camera. The main drawback of typical lenslet-based light field cameras is their limited resolution. This limitation comes from the structure, in which a microlens array is inserted between the sensor and the main lens. The microlens array projects the 4D light field onto a single 2D image sensor at the sacrifice of resolution: the angular resolution and the positional resolution trade off under the fixed resolution of the image sensor. This fundamental trade-off remains after the raw light field image is converted to a set of sub-aperture images. The purpose of our study is to estimate a higher-resolution image from low-resolution sub-aperture images using a framework of super-resolution reconstruction. In this reconstruction, the sub-aperture images should be registered as accurately as possible; this registration is equivalent to depth estimation. Therefore, we propose a method in which super-resolution and depth refinement are performed alternately. Most of the processing in our method is implemented by image processing operations. We present several experimental results using a Lytro camera, where we increased the resolution of a sub-aperture image by three times horizontally and vertically. Our method produces clearer images compared to the original sub-aperture images and to the case without depth refinement.
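    The role of accurate registration can be seen in a 1-D toy version of super-resolution reconstruction (a deliberately simplified shift-and-add sketch, not the authors' alternating method): when the sub-pixel shifts between the low-resolution samplings are known exactly, as perfect depth estimation would provide, the samples interleave into the high-resolution signal.

    ```python
    import numpy as np

    def shift_and_add(lr_images, shifts, factor):
        """Naive super-resolution: place each low-res sample on a high-res
        grid at its registered sub-pixel position and average overlaps.
        `shifts` are the per-image offsets in high-res pixels, which is
        exactly what registration (equivalently, depth) must supply."""
        n = len(lr_images[0]) * factor
        acc = np.zeros(n)
        hits = np.zeros(n)
        for img, s in zip(lr_images, shifts):
            idx = np.arange(len(img)) * factor + s
            acc[idx] += img
            hits[idx] += 1
        hits[hits == 0] = 1  # leave never-sampled positions at zero
        return acc / hits

    hr = np.arange(12, dtype=float)                 # ground-truth high-res signal
    lr0, lr1, lr2 = hr[0::3], hr[1::3], hr[2::3]    # three shifted low-res samplings
    rec = shift_and_add([lr0, lr1, lr2], shifts=[0, 1, 2], factor=3)
    ```

    With wrong shifts the samples land on the wrong grid positions and the reconstruction blurs, which is why the paper alternates super-resolution with depth refinement.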

  20. UCXp camera imaging principle and key technologies of data post-processing

    NASA Astrophysics Data System (ADS)

    Yuan, Fangyan; Li, Guoqing; Zuo, Zhengli; Liu, Jianmin; Wu, Liang; Yu, Xiaoping; Zhao, Haitao

    2014-03-01

    The large-format digital aerial camera product UCXp was introduced into the Chinese market in 2008; its image consists of 17310 columns and 11310 rows with a pixel size of 6 μm. The UCXp camera has many advantages compared with cameras of the same generation: its multiple lenses are exposed almost at the same time, and it has no oblique lens. The camera has a complex imaging process, whose principle is detailed in this paper. In addition, the UCXp image post-processing method, including data pre-processing and orthophoto production, is emphasized in this article. Based on data from the new Beichuan County, this paper describes the data processing and its effects.

  1. Development of CCD Cameras for Soft X-ray Imaging at the National Ignition Facility

    SciTech Connect

    Teruya, A. T.; Palmer, N. E.; Schneider, M. B.; Bell, P. M.; Sims, G.; Toerne, K.; Rodenburg, K.; Croft, M.; Haugh, M. J.; Charest, M. R.; Romano, E. D.; Jacoby, K. D.

    2013-09-01

    The Static X-Ray Imager (SXI) is a National Ignition Facility (NIF) diagnostic that uses a CCD camera to record time-integrated X-ray images of target features such as the laser entrance hole of hohlraums. SXI has two dedicated positioners on the NIF target chamber for viewing the target from above and below, and the X-ray energies of interest are 870 eV for the “soft” channel and 3 – 5 keV for the “hard” channels. The original cameras utilize a large format back-illuminated 2048 x 2048 CCD sensor with 24 micron pixels. Since the original sensor is no longer available, an effort was recently undertaken to build replacement cameras with suitable new sensors. Three of the new cameras use a commercially available front-illuminated CCD of similar size to the original, which has adequate sensitivity for the hard X-ray channels but not for the soft. For sensitivity below 1 keV, Lawrence Livermore National Laboratory (LLNL) had additional CCDs back-thinned and converted to back-illumination for use in the other two new cameras. In this paper we describe the characteristics of the new cameras and present performance data (quantum efficiency, flat field, and dynamic range) for the front- and back-illuminated cameras, with comparisons to the original cameras.

  2. Theta rotation and serial registration of light microscopical images using a novel camera rotating device.

    PubMed

    Duerstock, Bradley S; Cirillo, John; Rajwa, Bartek

    2010-06-01

    An electromechanical video camera coupler was developed to rotate a light microscope field of view (FOV) in real time without the need to physically rotate the stage or specimen. The device, referred to as the Camera Thetarotator, rotated microscopical views 240 degrees to assist microscopists to orient specimens within the FOV prior to image capture. The Camera Thetarotator eliminated the effort and artifacts created when rotating photomicrographs using conventional graphics software. The Camera Thetarotator could also be used to semimanually register a dataset of histological sections for three-dimensional (3D) reconstruction by superimposing the transparent, real-time FOV to the previously captured section in the series. When compared to Fourier-based software registration, alignment of serial sections using the Camera Thetarotator was more exact, resulting in more accurate 3D reconstructions with no computer-generated null space. When software-based registration was performed after prealigning sections with the Camera Thetarotator, registration was further enhanced. The Camera Thetarotator expanded microscopical viewing and digital photomicrography and provided a novel, accurate registration method for 3D reconstruction. The Camera Thetarotator would also be useful for performing automated microscopical functions necessary for telemicroscopy, high-throughput image acquisition and analysis, and other light microscopy applications. PMID:20233497

  3. Super-resolved all-refocused image with a plenoptic camera

    NASA Astrophysics Data System (ADS)

    Wang, Xiang; Li, Lin; Hou, Guangqi

    2015-12-01

    This paper proposes an approach to producing super-resolved all-refocused images with a plenoptic camera. A plenoptic camera can be produced by putting a microlens array between the lens and the sensor of a conventional camera. This kind of camera captures both the angular and spatial information of the scene in one single shot. A sequence of digitally refocused images, refocused at different depths, can be produced by processing the 4D light field captured by the plenoptic camera. The number of pixels in a refocused image equals the number of microlenses in the microlens array, so the limited number of microlenses results in low-resolution refocused images that lack fine detail. Such lost detail, mostly high-frequency information, is important for the in-focus part of a refocused image, so we super-resolve these in-focus parts. An image segmentation method based on random walks, operating on the depth map produced from the 4D light field data, is used to separate the foreground and background in the refocused image, and a focus evaluation function is employed to determine which refocused image has the clearest foreground and which has the clearest background. We then apply a single-image super-resolution method based on sparse signal representation to the in-focus parts of these selected refocused images. Finally, we obtain the super-resolved all-focus image by merging the in-focus background with the in-focus foreground, preserving more spatial detail in the output images. Our method enhances the resolution of the refocused image, and only the refocused images with the clearest foreground and background need to be super-resolved.
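    A minimal sketch of the focus-evaluation-and-merge step (the Laplacian-energy measure is a common choice assumed here; the paper does not specify its exact focusing function or merge rule):

    ```python
    import numpy as np

    def focus_measure(img):
        """Per-pixel sharpness: squared response of a discrete Laplacian.
        In-focus regions carry high-frequency energy and score high."""
        lap = (-4 * img
               + np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1))
        return lap ** 2

    def merge_by_focus(img_a, img_b):
        """Keep, at each pixel, the sample from whichever refocused image
        is sharper there, yielding an all-in-focus composite."""
        return np.where(focus_measure(img_a) >= focus_measure(img_b),
                        img_a, img_b)
    ```

    Applied to two refocused images of the same scene, one focused on the foreground and one on the background, this per-pixel rule keeps the in-focus half of each, which is the merging idea the paper implements with segmentation guidance.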

  4. Correction method for fisheye image based on the virtual small-field camera.

    PubMed

    Huang, Fuyu; Shen, Xueju; Wang, Qun; Zhou, Bing; Hu, Wengang; Shen, Hongbin; Li, Li

    2013-05-01

    A distortion correction method for fisheye images is proposed based on a virtual small-field (SF) camera. A correction experiment is carried out, and a comparison is made between the proposed method and the conventional global correction method. From the experimental results, the image corrected by this method satisfies the law of perspective projection and looks as if it had been captured by an SF camera with its optical axis pointing at the corrected center. The method eliminates center compression, edge stretching, and field loss, and image features become more distinct, which benefits subsequent target detection and information extraction. PMID:23632495
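    The underlying geometry can be sketched as follows, assuming an equidistant fisheye model r = f·θ (a common assumption; the paper's actual lens model may differ): correction maps each fisheye image radius to the perspective radius r' = f·tan θ, which stretches the edges back out and undoes the center compression.

    ```python
    import numpy as np

    def perspective_radius(r_fisheye, f):
        """Map an equidistant-projection radius r = f*theta to the
        perspective (pinhole) radius r' = f*tan(theta). Valid for
        field angles theta below 90 degrees."""
        theta = r_fisheye / f
        return f * np.tan(theta)
    ```

    Near the image center (small θ) the mapping is nearly the identity, while toward the edges tan θ grows much faster than θ, which is exactly the edge stretch a perspective-correct view requires.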

  5. Matching Images to Models: Camera Calibration for 3-D Surface Reconstruction

    NASA Technical Reports Server (NTRS)

    Morris, Robin D.; Smelyanskiy, Vadim N.; Cheeseman, Peter C.; Norvig, Peter (Technical Monitor)

    2001-01-01

    In a previous paper we described a system which recursively recovers a super-resolved three dimensional surface model from a set of images of the surface. In that paper we assumed that the camera calibration for each image was known. In this paper we solve two problems. Firstly, if an estimate of the surface is already known, the problem is to calibrate a new image relative to the existing surface model. Secondly, if no surface estimate is available, the relative camera calibration between the images in the set must be estimated. This will allow an initial surface model to be estimated. Results of both types of estimation are given.

  6. Advanced camera image data acquisition system for Pi-of-the-Sky

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, Maciej; Kasprowicz, Grzegorz; Pozniak, Krzysztof; Romaniuk, Ryszard; Wrochna, Grzegorz

    2008-11-01

    The paper describes a new generation of high-performance, remotely controlled CCD cameras designed for astronomical applications. A completely new camera PCB was designed, manufactured, tested, and commissioned. The CCD chip was positioned differently than in the previous design, resulting in better performance of the astronomical video data acquisition system. The camera was built using a low-noise, 4-Mpixel CCD circuit by STA. The electronic circuit of the camera is highly parameterized, reconfigurable, and modular in comparison with the first-generation solution, owing to the application of open software solutions and an FPGA circuit, an Altera Cyclone EP1C6. New algorithms were implemented in the FPGA chip. The camera system uses the following advanced electronic circuits: a CY7C68013a microcontroller (8051 core) by Cypress, an AD9826 image processor by Analog Devices, an RTL8169s GigEth interface by Realtek, SDRAM AT45DB642 memory by Atmel, and an ARM926EJ-S AT91SAM9260 CPU-type microprocessor by ARM and Atmel. Software solutions for the camera, its remote control, and image data acquisition are based entirely on open-source platforms. The following interfaces are used: the ISI image interface and V4L2 API, the AMBA AHB data bus, and the INDI protocol. The camera will be replicated in 20 pieces and is designed for continuous on-line, wide-angle observations of the sky in the Pi-of-the-Sky research program.

  7. Development of an atmospheric Cherenkov imaging camera for the CANGAROO-III experiment

    NASA Astrophysics Data System (ADS)

    Kabuki, S.; Tsuchiya, K.; Okumura, K.; Enomoto, R.; Uchida, T.; Tsunoo, H.; Hayashi, Shin.; Hayashi, Sei.; Kajino, F.; Maeshiro, A.; Tada, I.; Itoh, C.; Asahara, A.; Bicknell, G. V.; Clay, R. W.; Edwards, P. G.; Gunji, S.; Hara, S.; Hara, T.; Hattori, T.; Katagiri, H.; Kawachi, A.; Kifune, T.; Kubo, H.; Kushida, J.; Matsubara, Y.; Mizumoto, Y.; Mori, M.; Moro, H.; Muraishi, H.; Muraki, Y.; Naito, T.; Nakase, T.; Nishida, D.; Nishijima, K.; Ohishi, M.; Patterson, J. R.; Protheroe, R. J.; Sakurazawa, K.; Swaby, D. L.; Tanimori, T.; Tokanai, F.; Watanabe, A.; Watanabe, S.; Yanagita, S.; Yoshida, T.; Yoshikoshi, T.

    2003-03-01

    A Cherenkov imaging camera for the CANGAROO-III experiment has been developed for observations of gamma-ray-induced air showers at energies from 10^11 to 10^14 eV. The camera consists of 427 pixels, arranged in a hexagonal shape at 0.17° intervals, each of which is a 3/4-in. diameter photomultiplier module with a Winston-cone-shaped light guide. The camera was designed to have a large dynamic range of signal linearity, a wider field of view, and an improvement in photon-collection efficiency compared with the CANGAROO-II camera. The camera, and a number of the calibration experiments made to test its performance, are described in detail in this paper.

  8. Hybrid Compton camera/coded aperture imaging system

    DOEpatents

    Mihailescu, Lucian (Livermore, CA); Vetter, Kai M. (Alameda, CA)

    2012-04-10

    A system in one embodiment includes an array of radiation detectors; and an array of imagers positioned behind the array of detectors relative to an expected trajectory of incoming radiation. A method in another embodiment includes detecting incoming radiation with an array of radiation detectors; detecting the incoming radiation with an array of imagers positioned behind the array of detectors relative to a trajectory of the incoming radiation; and performing at least one of Compton imaging using at least the imagers and coded aperture imaging using at least the imagers. A method in yet another embodiment includes detecting incoming radiation with an array of imagers positioned behind an array of detectors relative to a trajectory of the incoming radiation; and performing Compton imaging using at least the imagers.

  9. Image dynamic range test and evaluation of Gaofen-2 dual cameras

    NASA Astrophysics Data System (ADS)

    Zhang, Zhenhua; Gan, Fuping; Wei, Dandan

    2015-12-01

    In order to fully understand the dynamic range of Gaofen-2 satellite data and to support data processing, applications, and the development of subsequent satellites, we evaluated the dynamic range by calculating statistics (maximum, minimum, mean, and standard deviation) for four images of the Beijing area obtained at the same time by the Gaofen-2 dual cameras. The same four statistics were then calculated for each longitudinal overlap of PMS1 and PMS2 to evaluate the dynamic-range consistency within each camera, and for each latitudinal overlap to evaluate the consistency between PMS1 and PMS2. The results suggest that the images obtained by PMS1 and PMS2 have a wide dynamic range of DN values and contain rich information about ground objects. In general, the dynamic ranges of images from a single camera agree closely, with only small differences, as do those of the dual cameras; the consistency within a single camera is better than that between the dual cameras.
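    The per-image statistics used in the evaluation reduce to a few lines (a sketch; the function name is illustrative):

    ```python
    import numpy as np

    def band_stats(image):
        """Dynamic-range summary of a band: maximum, minimum, mean and
        standard deviation of the digital numbers (DN)."""
        img = np.asarray(image, dtype=float)
        return {"max": img.max(), "min": img.min(),
                "mean": img.mean(), "std": img.std()}
    ```

    Running this on each full image, and then on the longitudinal and latitudinal overlap regions, and comparing the resulting summaries is the consistency check the abstract describes.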

  10. Exploring the feasibility of iris recognition for visible spectrum iris images obtained using smartphone camera

    NASA Astrophysics Data System (ADS)

    Trokielewicz, Mateusz; Bartuzi, Ewelina; Michowska, Katarzyna; Andrzejewska, Antonina; Selegrat, Monika

    2015-09-01

    In the age of a modern, hyperconnected society that increasingly relies on mobile devices and solutions, implementing a reliable and accurate biometric system employing iris recognition presents new challenges. Typical biometric systems employing iris analysis require expensive and complicated hardware. We therefore explore an alternative: visible-spectrum iris imaging. This paper aims at answering several questions related to applying iris biometrics to images obtained in the visible spectrum using a smartphone camera. Can irides be successfully and effortlessly imaged using a smartphone's built-in camera? Can existing iris recognition methods perform well when presented with such images? The main advantage of using near-infrared (NIR) illumination in dedicated iris recognition cameras is good performance almost independent of iris color and pigmentation; are the images obtained from a smartphone's camera of sufficient quality even for dark irides? We present experiments incorporating simple image preprocessing to find the best visibility of iris texture, followed by a performance study to assess whether iris recognition methods originally aimed at NIR iris images perform well with visible-light images. To the best of our knowledge, this is the first comprehensive analysis of iris recognition performance using a database of high-quality images collected in visible light using a smartphone's flashlight, together with the application of commercial off-the-shelf (COTS) iris recognition methods.
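    One simple preprocessing step of the kind the authors mention is global histogram equalization, sketched here for an 8-bit image (the paper does not specify which operations were actually used; this is a generic technique for revealing faint texture in dark irides):

    ```python
    import numpy as np

    def equalize_histogram(img):
        """Global histogram equalization of an 8-bit image: map intensities
        through the normalized cumulative histogram so they spread over the
        full 0-255 range. Assumes the image has at least two gray levels."""
        hist = np.bincount(img.ravel(), minlength=256)
        cdf = hist.cumsum()
        cdf_min = cdf[np.nonzero(cdf)[0][0]]  # first occupied gray level
        lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
        return lut.astype(np.uint8)[img]
    ```

    On a dark, low-contrast iris crop this stretches the few occupied gray levels across the full range, making texture visible to recognition methods designed for well-exposed NIR images.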

  11. High-frame-rate intensified fast optically shuttered TV cameras with selected imaging applications

    SciTech Connect

    Yates, G.J.; King, N.S.P.

    1994-08-01

    This invited paper focuses on high-speed electronic/electro-optic camera development by the Applied Physics Experiments and Imaging Measurements Group (P-15) of Los Alamos National Laboratory's Physics Division over the last two decades. The evolution of TV and image intensifier sensors and fast-readout, fast-shuttered cameras is discussed. Their use in nuclear, military, and medical imaging applications is presented. Several salient characteristics and anomalies associated with single-pulse and high-repetition-rate performance of the cameras/sensors are included from earlier studies to emphasize their effects on the radiometric accuracy of electronic framing cameras. The Group's test and evaluation capabilities for characterization of imaging-type electro-optic sensors and sensor components, including focal plane arrays, gated image intensifiers, microchannel plates, and phosphors, are discussed. Two new unique facilities, the High Speed Solid State Imager Test Station (HSTS) and the Electron Gun Vacuum Test Chamber (EGTC), are described. A summary of the Group's current and developmental camera designs and R&D initiatives is included.

  12. Design and evaluation of gamma imaging systems of Compton and hybrid cameras

    NASA Astrophysics Data System (ADS)

    Feng, Yuxin

    Systems for imaging and spectroscopy of gamma-ray emission have been widely applied in environmental and medical applications. The superior performance of LaBr3:Ce detectors establishes them as excellent candidates for gamma-ray imaging and spectroscopy. In this work, Compton cameras and hybrid cameras with two planar arrays of LaBr3:Ce detectors, one serving as the scattering array and one as the absorbing array, were designed and investigated. The feasibility of using LaBr3:Ce in Compton cameras was evaluated with a benchtop experiment in which two LaBr3:Ce detectors were arranged to mimic a Compton camera with one scattering and eight absorbing detectors. In the hybrid system, the combination of the Compton and coded-aperture imaging methods enables the system to cover the energy range of approximately 100 keV to a few MeV with good efficiency and angular resolution. The imaging performance of the hybrid system was evaluated via Monte Carlo simulations. Direct back-projection reconstruction was applied for instant, or real-time, imaging applications; with it the system achieves an angular resolution of approximately 0.3 radians (17°). With maximum-likelihood expectation-maximization reconstruction, the resolution improved to approximately 0.1 radians (6°). For medical applications in proton therapy, a Compton camera system to image the gamma-ray emission during treatment was designed and investigated. Gamma rays and X-rays emitted during treatment trace the energy deposition along the path of the proton beam and provide an opportunity for online dose verification. This Compton camera is designed to image gamma rays in 3D and is one of the candidates for imaging gamma emission during proton therapy, alongside positron emission tomography.
    To meet the spatial-resolution requirement of approximately 5 mm or less needed to meaningfully verify the dose by imaging gamma rays of 511 keV to 2 MeV, position-sensing techniques with pixelated LaBr3:Ce crystals were applied in each detector. The pixelated LaBr3:Ce crystal was used in both the scattering and absorbing detectors, and OS-EML reconstruction algorithms were applied to obtain 3D images.
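    The direct back-projection step described above can be illustrated with a minimal Python sketch (the event geometry, tolerance, and sky sampling below are placeholders, not the authors' implementation): each event's Compton cone half-angle follows from the energies deposited in the two planes, and the event then votes for all image directions lying on that cone.

```python
import numpy as np

ME_KEV = 511.0  # electron rest energy in keV

def scatter_angle(e_scatter, e_absorb):
    """Compton scattering angle from the energies deposited in the
    scattering and absorbing planes (full-energy events assumed)."""
    e_total = e_scatter + e_absorb
    cos_theta = 1.0 - ME_KEV * (1.0 / e_absorb - 1.0 / e_total)
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

def backproject(events, directions, width=0.05):
    """Direct back-projection: each event votes for all sky directions
    lying on its Compton cone, within an angular tolerance.

    events: iterable of (cone_axis_unit_vector, cone_half_angle)
    directions: (N, 3) array of unit vectors sampling the image sphere
    """
    image = np.zeros(len(directions))
    for axis, theta in events:
        angles = np.arccos(np.clip(directions @ axis, -1.0, 1.0))
        image += (np.abs(angles - theta) < width)
    return image
```

    Iterative methods such as ML-EM sharpen the resulting image further, as the abstract's improvement from ~0.3 to ~0.1 radians suggests.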

  13. Design Considerations Of A Compton Camera For Low Energy Medical Imaging

    SciTech Connect

    Harkness, L. J.; Boston, A. J.; Boston, H. C.; Cresswell, J. R.; Grint, A. N.; Judson, D. S.; Nolan, P. J.; Oxley, D. C.; Lazarus, I.; Simpson, J.

    2009-12-02

    Development of a Compton camera for low energy medical imaging applications is underway. The ProSPECTus project aims to utilize position sensitive detectors to generate high quality images using electronic collimation. This method has the potential to significantly increase the imaging efficiency compared with mechanically collimated SPECT systems, a highly desirable improvement on clinical systems. Design considerations encompass the geometrical optimisation and evaluation of image quality from the system which is to be built and assessed.

  14. The Lunar Student Imaging Project (LSIP): Bringing the Excitement of Lunar Exploration to Students Using LRO Mission Data

    NASA Astrophysics Data System (ADS)

    Taylor, W. L.; Roberts, D.; Burnham, R.; Robinson, M. S.

    2009-12-01

    In June 2009, NASA launched the Lunar Reconnaissance Orbiter (LRO), the first mission in NASA's Vision for Space Exploration, a plan to return to the Moon and then travel to Mars and beyond. LRO is equipped with seven instruments, including the Lunar Reconnaissance Orbiter Camera (LROC), a system of two narrow-angle cameras and one wide-angle camera controlled by scientists in the School of Earth and Space Exploration at Arizona State University. The orbiter will have a one-year primary mission in a 50 km polar orbit. The measurements from LROC will uncover much-needed information about potential landing sites and will help generate a meter-scale map of the lunar surface. With support from NASA Goddard Space Flight Center, the LROC Science Operations Center and the ASU Mars Education Program have partnered to develop an inquiry-based student program, the Lunar Student Imaging Project (LSIP). Based on the nationally recognized Mars Student Imaging Project (MSIP), LSIP uses cutting-edge NASA content and remote sensing data to involve students in authentic lunar exploration. This program offers students (grades 5-14) immersive experiences where they can: 1) target images of the lunar surface, 2) interact with NASA planetary scientists, mission engineers, and educators, and 3) gain access to NASA curricula and materials developed to enhance STEM learning. Using a project-based learning model, students drive their own research and learn firsthand what it's like to do real planetary science. The LSIP curriculum contains a resource manual and program guide (including lunar feature identification charts, classroom posters, and a lunar exploration timeline) and a series of activities covering image analysis, relative age dating, and planetary comparisons. LSIP will be based upon the well-tested MSIP model and will encompass onsite as well as distance-learning components.

  15. The Nimbus image dissector camera system - An evaluation of its meteorological applications.

    NASA Technical Reports Server (NTRS)

    Sabatini, R. R.

    1971-01-01

    Brief description of the electronics and operation of the Nimbus image dissector camera system (IDCS). The geometry and distortions of the IDCS are compared to the conventional AVCS camera on board the operational ITOS and ESSA satellites. The unique scanning of the IDCS provides for little distortion of the image, making it feasible to use a strip grid for the IDCS received in real time by local APT stations. The dynamic range of the camera favors the white end (high reflectance) of the gray scale. Thus, the camera is good for detecting cloud structure and ice features through brightness changes. Examples of cloud features, ice, and snow-covered land are presented. Land features, on the other hand, show little contrast. The 2600 x 2600 km coverage by the IDCS is adequate for the early detection of weather systems which may affect the local area. An example of IDCS coverage obtained by an APT station in midlatitudes is presented.

  16. Optical characterization of UV multispectral imaging cameras for SO2 plume measurements

    NASA Astrophysics Data System (ADS)

    Stebel, K.; Prata, F.; Dauge, F.; Durant, A.; Amigo, A.

    2012-04-01

    Spectral imaging cameras for SO2 plume monitoring were developed for remote sensing of volcanic plumes only a few years ago. We describe the development from a first camera using a single filter in the SO2 absorption band to more advanced systems using several filters and an integrated spectrometer. The first system was based on the Hamamatsu C8484 UV camera (1344 × 1024 pixels), which has high quantum efficiency in the UV region from 280 nm onward. At the heart of the second UV camera system, EnviCam, is a cooled Alta U47 camera equipped with two on-band (310 and 315 nm) and two off-band (325 and 330 nm) filters. The third system again uses the uncooled Hamamatsu camera, for faster sampling (~10 Hz), with a four-position filter wheel holding two 10 nm filters centered at 310 and 330 nm, a UV broadband view, and a blackened plate for dark-current measurement. Both cameras have been tested with lenses of different focal lengths, and a co-aligned spectrometer provides a ~0.3 nm resolution spectrum within the field of view of the camera. We describe the ground-based imaging camera systems developed and used at our institute. Custom-made cylindrical quartz calibration cells, 50 mm in diameter to cover the entire field of view of the camera optics, are filled with various amounts of gaseous SO2 (typically between 100 and 1500 ppm·m) and used for calibration and characterization of the cameras in the laboratory. We report on the procedures for monitoring and analyzing SO2 path-concentrations and fluxes, including a comparison of in-atmosphere calibration using the SO2 cells against the SO2 retrieval from the integrated spectrometer. The first UV cameras have been used to monitor ship emissions (Ny-Ålesund, Svalbard and Genova, Italy).
    The second generation of cameras was first tested for industrial stack monitoring during a field campaign near the Rovinari (Romania) power plant in September 2010, revealing very high SO2 emissions (>1000 ppm·m). The second-generation cameras are now used by students from several universities in Romania. The newest system has been tested for volcanic plume monitoring at Turrialba, Costa Rica in January 2011, at Merapi volcano, Indonesia in February 2011, at Lascar volcano, Chile in July 2011, and at Etna/Stromboli, Italy in November 2011. Retrievals from some of these campaigns will be presented.
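    The on-band/off-band retrieval and cell calibration described above can be sketched as follows (a hedged Python illustration under a simple Beer-Lambert model; the image values and calibration columns are synthetic, not data from these campaigns):

```python
import numpy as np

def apparent_absorbance(on, off, on_bg, off_bg):
    """Per-pixel apparent absorbance from on-band (~310 nm) and
    off-band (~330 nm) plume images, normalized by clear-sky
    background images: tau = -ln[(I_on/I_off) / (I_on_bg/I_off_bg)]."""
    return -np.log((on / off) / (on_bg / off_bg))

def calibrate(cell_columns_ppmm, cell_taus):
    """Zero-intercept least-squares slope mapping absorbance to SO2
    column (ppm·m), from measurements of the quartz calibration cells."""
    return np.dot(cell_columns_ppmm, cell_taus) / np.dot(cell_taus, cell_taus)
```

    The slope from `calibrate` converts an absorbance image into an SO2 path-concentration image, which is then integrated across the plume cross-section for flux estimates.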

  17. The Mast Cameras and Mars Descent Imager (MARDI) for the 2009 Mars Science Laboratory

    NASA Technical Reports Server (NTRS)

    Malin, M. C.; Bell, J. F.; Cameron, J.; Dietrich, W. E.; Edgett, K. S.; Hallet, B.; Herkenhoff, K. E.; Lemmon, M. T.; Parker, T. J.; Sullivan, R. J.

    2005-01-01

    Based on operational experience gained during the Mars Exploration Rover (MER) mission, we proposed and were selected to conduct two related imaging experiments: (1) an investigation of the geology and short-term atmospheric vertical wind profile local to the Mars Science Laboratory (MSL) landing site using descent imaging, and (2) a broadly based scientific investigation of the MSL locale employing visible and very near-infrared imaging techniques from a pair of mast-mounted, high-resolution cameras. Both instruments share a common electronics design, also employed for the MSL Mars Hand Lens Imager (MAHLI) [1]. The primary differences between the cameras are in the nature and number of mechanisms and the specific optics tailored to each camera's requirements.

  18. Microchannel plate pinhole camera for 20 to 100 keV x-ray imaging

    SciTech Connect

    Wang, C.L.; Leipelt, G.R.; Nilson, D.G.

    1984-10-03

    We present the design and construction of a sensitive pinhole camera for imaging suprathermal x-rays. The device consists of four filtered pinholes and a microchannel-plate electron multiplier for x-ray detection and signal amplification. We report successful imaging of 20, 45, 70, and 100 keV x-ray emissions from fusion targets at our Novette laser facility. Such imaging reveals features of hot-electron transport and provides views deep inside the target.

  19. Use of an infrared-imaging camera to obtain convective heating distributions.

    NASA Technical Reports Server (NTRS)

    Compton, D. L.

    1972-01-01

    The IR emission from the surface of a wind-tunnel model is determined as a function of time with the aid of an infrared-sensitive imaging camera. Prior calibration of the IR camera relates the emission to the surface temperature of the model. The time history of the surface temperature is then related to the heating rate by standard techniques. The output of the camera is recorded in analog form, digitized, and processed by a computer. In addition, real-time visual displays of the IR emissions are obtained as pictures on an oscilloscope screen.
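    The "standard techniques" relating the surface-temperature history to the heating rate can be illustrated with a thin-skin calorimetry model (a hedged sketch; the material constants below are placeholders, not values from the experiment):

```python
import numpy as np

def heating_rate(t, T, rho=8900.0, c=390.0, thickness=0.001):
    """Thin-wall model: q(t) = rho * c * thickness * dT/dt.

    t: sample times (s); T: surface-temperature history (K) for one
    pixel, as derived from the calibrated IR emission.
    rho, c, thickness: wall density (kg/m^3), specific heat (J/kg/K),
    and wall thickness (m) -- illustrative values only.
    """
    dTdt = np.gradient(T, t)  # numerical time derivative
    return rho * c * thickness * dTdt
```

    Applying this per pixel to the digitized camera output yields the convective heating distribution over the model surface.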

  20. Performance of the Atmospheric Cherenkov Imaging Camera for the CANGAROO-III Experiment

    NASA Astrophysics Data System (ADS)

    Kabuki, S.; Uchida, T.; Kurosaka, R.; Okumura, K.; Enomoto, R.; Asahara, A.; Bicknell, G. V.; Clay, R. W.; Doi, Y.; Edwards, P. G.; Gunji, S.; Hara, S.; Hara, T.; Hattori, T.; Itoh, C.; Kajino, F.; Katagiri, H.; Kawachi, A.; Kifune, T.; Ksenofontov, L. T.; Kubo, H.; Kurihara, T.; Kushida, J.; Matsubara, Y.; Miyashita, Y.; Mizumoto, Y.; Mori, M.; Moro, H.; Muraishi, H.; Muraki, Y.; Naito, T.; Nakase, T.; Nishida, D.; Nishijima, K.; Ohishi, M.; Patterson, J. R.; Protheore, R. J.; Sakamoto, N.; Sakurazawa, K.; Swaby, D. L.; Tanimori, T.; Tanimura, H.; Thornton, G.; Tokanai, F.; Tsuchiya, K.; Watanabe, S.; Yamaoka, T.; Yanagita, S.; Yoshida, T.; Yoshikoshi, T.

    2003-07-01

    A Cherenkov imaging camera was developed and installed in the second telescope of the CANGAROO-III project for observing gamma rays with energies above 10^11 eV. The camera consists of 427 pixels, arranged in a hexagonal shape at 0.17° intervals, each of which is a 3/4-inch-diameter photomultiplier with a Winston-cone-shaped light guide. The camera discussed in this paper offers a wider field of view, a better photon-collection efficiency, and a larger dynamic range.

  1. Robust extraction of image correspondences exploiting the image scene geometry and approximate camera orientation

    NASA Astrophysics Data System (ADS)

    Alsadik, B.; Remondino, F.; Menna, F.; Gerke, M.; Vosselman, G.

    2013-02-01

    Image-based modeling techniques are an important tool for producing 3D models in a practical and cost-effective manner. Accurate image-based models can be created as long as one can retrieve precise image calibration and orientation information, which is nowadays performed automatically in computer vision and photogrammetry. The first step for orientation is to have sufficient correspondences across the captured images. Keypoint descriptors like SIFT or SURF are a successful approach for finding these correspondences. The extraction of precise image correspondences is crucial for the subsequent image orientation and image matching steps; indeed, many challenges remain, especially with wide-baseline image configurations. After the extraction of a sufficient and reliable set of image correspondences, a bundle adjustment is used to retrieve the image orientation parameters. In this paper, a brief description of our previous work on automatic camera network design is initially reported. This semi-automatic procedure results in wide-baseline, high-resolution images covering an object of interest, together with approximations of the image orientations, a rough 3D object geometry, and a matching matrix indicating, for each image, its matching mates. The main part of this paper describes the subsequent image matching, where the prior knowledge of the image orientations and the pre-created rough 3D model of the study object is exploited. Ultimately, the matching information retrieved in that step is used for a precise bundle block adjustment. Since we defined the initial image orientations in the design of the network, we can compute the matching matrix prior to matching the high-resolution images. For each image involved in the pairs defined by the matching matrix, we detect corners or keypoints and transform them into the matching images using the designed orientation and the initial 3D model.
    Moreover, a window is defined around each corner and its initial correspondence in the matching images, and SIFT or SURF matching is implemented between every pair of matching windows to find the homologous points. This is followed by Least Squares Matching (LSM) to refine the correspondences to sub-pixel localization and to reject inaccurate matches. Image matching is followed by a bundle adjustment to orient the images automatically and finally obtain a sparse 3D model. We used the commercial software PhotoModeler Scanner 2010 to implement the bundle adjustment, since it reports a number of accuracy indices that are necessary for evaluation purposes. An experimental test comparing the automated image matching of four pre-designed stereopairs shows that our approach can provide high accuracy and effective orientation when compared with the results of commercial and open-source software that does not exploit prior knowledge about the scene.
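    The guided-matching idea — transferring a keypoint into the matching image via the approximate orientation and rough 3D model, then searching only a small window — can be sketched as below (a hedged Python illustration; the ray lookup, projection matrix, and window size are hypothetical placeholders, not the paper's implementation):

```python
import numpy as np

def transfer_point(x_a, pixel_to_ray, depth, P_b):
    """Predict where a keypoint from image A appears in image B.

    x_a: pixel (u, v) in image A
    pixel_to_ray: maps a pixel of A to a 3D ray (origin, unit direction)
                  using A's approximate orientation
    depth: distance along the ray taken from the rough 3D model
    P_b: approximate 3x4 projection matrix of image B
    """
    origin, direction = pixel_to_ray(x_a)
    X = origin + depth * direction       # approximate 3D point
    x_h = P_b @ np.append(X, 1.0)        # homogeneous projection into B
    return x_h[:2] / x_h[2]

def search_window(center, half=20):
    """Pixel bounds of the window in image B that is then searched with
    SIFT/SURF and refined to sub-pixel accuracy by LSM."""
    u, v = center
    return (u - half, u + half, v - half, v + half)
```
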

  2. A high-resolution airborne four-camera imaging system for agricultural remote sensing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper describes the design and testing of an airborne multispectral digital imaging system for remote sensing applications. The system consists of four high resolution charge coupled device (CCD) digital cameras and a ruggedized PC equipped with a frame grabber and image acquisition software. T...

  3. Constructing a Database from Multiple 2D Images for Camera Pose Estimation and Robot Localization

    NASA Technical Reports Server (NTRS)

    Wolf, Michael; Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.; Padgett, Curtis W.

    2012-01-01

    The LMDB (Landmark Database) Builder software identifies persistent image features (landmarks) in a scene viewed multiple times and precisely estimates the landmarks' 3D world positions. The software receives as input multiple 2D images of approximately the same scene, along with an initial guess of the camera pose for each image and a table of features matched pairwise across frames. LMDB Builder aggregates landmarks across an arbitrarily large collection of frames with matched features. Range data from stereo vision processing can also be passed in to improve the initial guess of the 3D point estimates. The LMDB Builder aggregates feature lists across all frames, manages the process of promoting selected features to landmarks, iteratively calculates the 3D landmark positions using the current camera pose estimates (via an optimal ray projection method), and then improves the camera pose estimates using the 3D landmark positions. Finally, it extracts image patches for each landmark from auto-selected key frames and constructs the landmark database. The landmark database can then be used to estimate future camera poses (and therefore localize a robotic vehicle that may be carrying the cameras) by matching current imagery to landmark database image patches and using the known 3D landmark positions to estimate the current pose.
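    The ray-projection step — estimating a landmark's 3D position from the viewing rays of several posed cameras — can be sketched as a least-squares intersection of rays (a hedged illustration; the actual LMDB Builder method may differ in details):

```python
import numpy as np

def triangulate(centers, directions):
    """Least-squares point closest to a set of rays (c_i, d_i).

    centers: camera centers c_i (3-vectors)
    directions: unit viewing directions d_i toward the matched feature
    Minimizes the sum of squared distances to all rays.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(centers, directions):
        # projector onto the plane orthogonal to the ray direction
        M = np.eye(3) - np.outer(d, d)
        A += M
        b += M @ c
    return np.linalg.solve(A, b)
```

    Alternating this triangulation with pose refinement from the resulting 3D points gives the iterative loop the abstract describes.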

  4. Initial synchroscan streak camera imaging at the A0 photoinjector

    SciTech Connect

    Lumpkin, A.H.; Ruan, J.; /Fermilab

    2008-04-01

    At the Fermilab A0 photoinjector facility, bunch-length measurements of the laser micropulse and the e-beam micropulse have been done in the past with a single-sweep module of the Hamamatsu C5680 streak camera, with an intrinsic shot-to-shot trigger jitter of 10 to 20 ps. We have upgraded the camera system with the synchroscan module tuned to 81.25 MHz to provide synchronous summing capability with less than 1.5 ps FWHM trigger jitter, and a phase-locked delay box to provide phase stability of ~1 ps over tens of minutes. This allowed us to measure both the UV laser pulse train at 244 nm and the e-beam via optical transition radiation (OTR). Due to the low electron beam energies and OTR signals, we typically summed over 50 micropulses with 1 nC per micropulse. We also measured electron beam bunch length vs. micropulse charge and identified a significant e-beam micropulse elongation from 10 to 30 ps (FWHM) for charges from 1 to 4.6 nC. This effect is attributed to space-charge effects in the PC gun, as reproduced by ASTRA calculations. Chromatic temporal dispersion effects in the optics were also characterized and will be reported.

  5. Periscope-camera system for visible and infrared imaging diagnostics on TFTR

    SciTech Connect

    Medley, S.S.; Dimock, D.L.; Hayes, S.; Long, D.; Lowrence, J.L.; Mastrocola, V.; Renda, G.; Ulrickson, M.; Young, K.M.

    1985-05-01

    An optical diagnostic consisting of a periscope which relays images of the torus interior to an array of cameras is used on the Tokamak Fusion Test Reactor (TFTR) to view plasma discharge phenomena and inspect vacuum vessel internal structures in both visible and near-infrared wavelength regions. Three periscopes view through 20-cm-diameter fused-silica windows which are spaced around the torus midplane to provide viewing coverage of approximately 75% of the vacuum vessel internal surface area. The periscopes have f/8 optics and motor-driven controls for focusing, magnification selection (5°, 20°, and 60° fields of view), elevation and azimuth setting, mast rotation, filter selection, iris aperture, and viewing port selection. The four viewing ports on each periscope are equipped with multiple imaging devices which include: (1) an inspection eyepiece, (2) standard (RCA TC2900) and fast (RETICON) framing rate television cameras, (3) a PtSi CCD infrared imaging camera, (4) a 35 mm Nikon F3 still camera, or (5) a 16 mm Locam II movie camera with variable framing up to 500 fps. Operation of the periscope-camera system is controlled either locally or remotely through a computer-CAMAC interface. A description of the equipment and examples of its application are presented.

  6. Methods for a fusion of optical coherence tomography and stereo camera image data

    NASA Astrophysics Data System (ADS)

    Bergmeier, Jan; Kundrat, Dennis; Schoob, Andreas; Kahrs, Lüder A.; Ortmaier, Tobias

    2015-03-01

    This work investigates the combination of Optical Coherence Tomography (OCT) and two cameras observing a microscopic scene. Stereo vision provides realistic images but is limited in penetration depth; OCT enables access to subcutaneous structures, but 3D OCT volume data do not give the surgeon a familiar view. Extending the stereo camera setup with OCT imaging combines the benefits of both modalities. In order to provide the surgeon with a convenient integration of OCT into the vision interface, we present automated image-processing analysis of OCT and stereo camera data, as well as combined imaging as an augmented-reality visualization. To this end, we address OCT image noise, perform segmentation, and develop suitable registration objects and methods. The registration between stereo camera and OCT yields a root-mean-square error of 284 μm, averaged over five measurements. The presented methods are fundamental for fusion of the two imaging modalities; augmented reality is shown as an application of the results. Further developments will lead to fused visualization of subcutaneous structures, as information from OCT images, within the stereo vision.
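    A standard way to estimate the rigid transform between the OCT and stereo-camera coordinate frames, and to report the RMS residual as above, is the Kabsch/Procrustes algorithm over paired registration points (a hedged sketch; the paper's registration objects and method may differ):

```python
import numpy as np

def register_rigid(src, dst):
    """Find rotation R and translation t minimizing ||R @ p + t - q||
    over point correspondences (src rows p, dst rows q), via SVD."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # no reflection
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

def rms_error(src, dst, R, t):
    """Root-mean-square residual of the registration."""
    res = (R @ src.T).T + t - dst
    return np.sqrt((res ** 2).sum(axis=1).mean())
```
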

  7. Precise color images: a high-speed color video camera system with three intensified sensors

    NASA Astrophysics Data System (ADS)

    Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.

    1999-06-01

    High-speed imaging systems have been used in many fields of science and engineering. Although high-speed camera systems have improved greatly, most applications only acquire high-speed motion pictures. In some fields of science and technology, however, it is useful to obtain other information, such as the temperature of combustion flames, thermal plasmas, and molten materials, and recent digital high-speed video technology should be able to extract such information from these objects. For this purpose, we have developed a high-speed video camera system with three intensified sensors and a cubic-prism image splitter. The maximum frame rate is 40,500 pps (pictures per second) at 64 × 64 pixels and 4,500 pps at 256 × 256 pixels, with 256 levels (8 bit) of intensity resolution for each pixel. The camera system can store more than 1,000 pictures continuously in solid-state memory. In order to obtain precise color images from this camera system, we need a digital technique, consisting of a computer program and ancillary instruments, to adjust the displacement of images taken from two or three image sensors and to calibrate the relationship between incident light intensity and the corresponding digital output signals. In this paper, the digital technique for pixel-based displacement adjustment is proposed. Although the displacement of the corresponding circle was more than 8 pixels in the original image, it was adjusted to within 0.2 pixels at most by this method.
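    Sub-pixel displacement between two sensor channels can be estimated, for example, by cross-correlation with parabolic peak interpolation (a hedged one-dimensional sketch; the paper's calibration target and procedure are not specified here, and the signals below are synthetic):

```python
import numpy as np

def subpixel_shift_1d(a, b):
    """Estimate the shift of signal b relative to a along one axis,
    to sub-pixel accuracy, from the cross-correlation peak."""
    n = len(a)
    corr = np.correlate(b - b.mean(), a - a.mean(), mode="full")
    k = np.argmax(corr)
    # parabolic interpolation around the integer correlation peak
    if 0 < k < len(corr) - 1:
        y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
        k = k + 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    return k - (n - 1)  # zero shift maps to the center of 'full' output
```

    Applying this along rows and columns of each sensor pair gives the displacement field used to resample the channels into registration.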

  8. Demonstration of three-dimensional imaging based on handheld Compton camera

    NASA Astrophysics Data System (ADS)

    Kishimoto, A.; Kataoka, J.; Nishiyama, T.; Taya, T.; Kabuki, S.

    2015-11-01

    Compton cameras are potential detectors capable of performing measurements across a wide energy range for medical imaging applications, such as nuclear medicine and ion beam therapy. In previous work, we developed a handheld Compton camera to identify environmental radiation hotspots. This camera consists of a 3D position-sensitive scintillator array and multi-pixel photon counter arrays. In this work, we reconstructed the 3D image of a source via list-mode maximum likelihood expectation maximization (ML-EM) and demonstrated the imaging performance of the handheld Compton camera. Based on both simulation and experiment, we confirmed that multi-angle data acquisition of the imaging region significantly improved the spatial resolution of the reconstructed image in the direction perpendicular to the detector. The experimental spatial resolutions in the X, Y, and Z directions at the center of the imaging region were 6.81 ± 0.13 mm, 6.52 ± 0.07 mm, and 6.71 ± 0.11 mm (FWHM), respectively. The results of multi-angle data acquisition show the potential of reconstructing 3D source images.
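    The list-mode ML-EM update used above can be sketched as follows (a hedged toy illustration assuming unit detector sensitivity; a real Compton camera derives the event-voxel probabilities from cone geometry and a measured sensitivity map):

```python
import numpy as np

def lm_mlem(T, n_iter=100):
    """List-mode ML-EM with unit sensitivity.

    T: (n_events, n_voxels) matrix, T[i, j] = probability that event i
       originated in voxel j.
    Returns the estimated source intensity per voxel; each iteration
    multiplies the image by the back-projected event likelihood ratios.
    """
    n_events, n_voxels = T.shape
    x = np.ones(n_voxels)
    for _ in range(n_iter):
        fwd = T @ x                    # expected contribution per event
        x = x * (T.T @ (1.0 / fwd))    # multiplicative EM update
    return x
```

    With unit sensitivity the total reconstructed intensity equals the number of recorded events, a useful sanity check on the update.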

  9. Enhancing swimming pool safety by the use of range-imaging cameras

    NASA Astrophysics Data System (ADS)

    Geerardyn, D.; Boulanger, S.; Kuijk, M.

    2015-05-01

    Drowning causes the death of 372,000 people each year worldwide, according to the November 2014 report of the World Health Organization [1]. Currently, most swimming pools rely only on lifeguards to detect drowning people. In some modern swimming pools, camera-based detection systems are being integrated, but these systems have to be mounted underwater, mostly as a replacement for the underwater lighting. In contrast, we are interested in range-imaging cameras mounted on the ceiling of the swimming pool, allowing swimmers at the surface to be distinguished from drowning people underwater while keeping a large field of view and minimizing occlusions. We have to take into account, however, that the water surface of a swimming pool is mostly rippled rather than flat, and that water is transparent for visible light but less transparent for infrared or ultraviolet light. We investigated the use of different types of 3D cameras to detect objects underwater at different depths and with different amplitudes of surface perturbations. Specifically, we performed measurements with a commercial Time-of-Flight camera, a commercial structured-light depth camera, and our own Time-of-Flight system, which uses pulsed Time-of-Flight and emits light at 785 nm. The measured distances between the camera and the object are influenced by the perturbations on the water surface. Due to its timing, our Time-of-Flight camera is theoretically able to minimize the influence of reflections from a partially reflecting surface. Combining a post-acquisition filter that compensates for the perturbations with a light source of shorter wavelength to enlarge the depth range can improve the current commercial cameras. As a result, we conclude that low-cost range imagers can increase swimming pool safety through a post-processing filter and the use of another light source.
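    A pulsed Time-of-Flight range measurement, and the correction needed because light travels more slowly in water, can be sketched as below (a hedged illustration assuming a single flat air-water interface and a group index of ~1.33 near 785 nm; all numbers are illustrative):

```python
# Speed of light in air/vacuum (m/s) and approximate group index of
# water near 785 nm -- illustrative constants, not calibrated values.
C = 299_792_458.0
N_WATER = 1.33

def tof_distance(round_trip_s):
    """Apparent one-way optical path from a pulsed ToF round-trip time."""
    return C * round_trip_s / 2.0

def depth_underwater(apparent_path, air_path):
    """True depth below the surface: the apparent optical path in water
    must be divided by the refractive index, since the pulse travels
    slower there."""
    return (apparent_path - air_path) / N_WATER
```

    Surface ripples perturb both the interface position and the refraction geometry, which is what the post-acquisition filter mentioned above must compensate.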

  10. Imaging performance of a multiwire proportional-chamber positron camera

    SciTech Connect

    Perez-Mandez, V.; Del Guerra, A.; Nelson, W.R.; Tam, K.C.

    1982-08-01

    A new, fully three-dimensional positron camera is presented, made of six multiwire proportional chamber modules arranged to form the lateral surface of a hexagonal prism. A true coincidence rate of 56,000 c/s is expected, with an equal accidental rate, for 400 μCi of activity uniformly distributed in an approximately 3 l water phantom. A detailed Monte Carlo program has been used to investigate the dependence of the spatial resolution on the geometrical and physical parameters. A spatial resolution of 4.8 mm FWHM has been obtained for an ^18F point-like source in a 10 cm radius water phantom. The main properties of the limited-angle reconstruction algorithms are described in relation to the proposed detector geometry.

  11. Three-dimensional imaging of carbonyl sulfide and ethyl iodide photodissociation using the pixel imaging mass spectrometry camera

    NASA Astrophysics Data System (ADS)

    Amini, K.; Blake, S.; Brouard, M.; Burt, M. B.; Halford, E.; Lauer, A.; Slater, C. S.; Lee, J. W. L.; Vallance, C.

    2015-10-01

    The Pixel Imaging Mass Spectrometry (PImMS) camera is used in proof-of-principle three-dimensional imaging experiments on the photodissociation of carbonyl sulfide and ethyl iodide at wavelengths around 230 nm and 245 nm, respectively. Coupling the PImMS camera with DC-sliced velocity-map imaging allows the complete three-dimensional Newton sphere of photofragment ions to be recorded on each laser pump-probe cycle with a timing precision of 12.5 ns, yielding velocity resolutions along the time-of-flight axis of around 6%-9% in the applications presented.

  12. Retinal axial focusing and multi-layer imaging with a liquid crystal adaptive optics camera

    NASA Astrophysics Data System (ADS)

    Liu, Rui-Xue; Zheng, Xian-Liang; Li, Da-Yu; Xia, Ming-Liang; Hu, Li-Fa; Cao, Zhao-Liang; Mu, Quan-Quan; Xuan, Li

    2014-09-01

    With the help of adaptive optics (AO) technology, cellular-level imaging of the living human retina can be achieved. Aiming to reduce discomfort and to avoid potential drug-induced complications, we attempted to image the retina with a dilated pupil and frozen accommodation without drugs. An optimized liquid crystal adaptive optics camera was adopted for retinal imaging. A novel eye stared system was used to stimulate accommodation and fixate the imaging area, and the illumination sources and imaging camera moved in linkage for focusing and imaging different layers. Four subjects with varying degrees of myopia were imaged. Based on the optical properties of the human eye, the eye stared system reduced the defocus to less than the typical ocular depth of focus, so that the illumination light could be projected onto a given retinal layer precisely. With the defocus compensated in this way, the adopted 512 × 512 liquid crystal spatial light modulator (LC-SLM) corrector provided the spatial fidelity needed to fully compensate high-order aberrations. The Strehl ratio for a subject with -8 diopters of myopia was improved to 0.78, nearly diffraction-limited imaging. By finely adjusting the axial displacement of the illumination sources and imaging camera, cone photoreceptors, blood vessels, and the nerve fiber layer were clearly imaged.

  13. A 5-18 micron array camera for high-background astronomical imaging

    NASA Technical Reports Server (NTRS)

    Gezari, Daniel Y.; Folz, Walter C.; Woods, Lawrence A.; Varosi, Frank

    1992-01-01

    A new infrared array camera system using a Hughes/SBRC 58 × 62 pixel hybrid Si:Ga array detector has been successfully applied to high-background 5-18-micron astronomical imaging observations. The off-axis reflective optical system minimizes thermal background loading and produces diffraction-limited images with negligible spatial distortion. The noise equivalent flux density (NEFD) of the camera at 10 microns on the 3.0-m NASA Infrared Telescope Facility, with broadband interference filters and 0.26 arcsec pixels, is NEFD = 0.01 Jy/√min per pixel (1σ), and it operates at a frame rate of 30 Hz with no compromise in observational efficiency. The electronic and optical design of the camera, its photometric characteristics, examples of observational results, and techniques for successful array imaging in a high-background astronomical application are discussed.

  14. Achieving thermography with a thermal security camera using uncooled amorphous silicon microbolometer image sensors

    NASA Astrophysics Data System (ADS)

    Wang, Yu-Wei; Tesdahl, Curtis; Owens, Jim; Dorn, David

    2012-06-01

    Advancements in uncooled microbolometer technology over the last several years have opened up many commercial applications that had previously been cost prohibitive. Thermal technology is no longer limited to the military and government market segments. One type of thermal sensor with low NETD available in the commercial market segment is the uncooled amorphous silicon (α-Si) microbolometer image sensor. Typical thermal security cameras focus on providing the best image quality by auto-tonemapping (contrast enhancing) the image, which provides the best contrast depending on the temperature range of the scene. While this may provide enough information to detect objects and activities, there are further benefits to being able to estimate the actual object temperatures in a scene. This thermographic ability can provide functionality beyond typical security cameras by making it possible to monitor processes. Example applications of thermography [2] with a thermal camera include monitoring electrical circuits, industrial machinery, building thermal leaks, oil/gas pipelines, power substations, etc. [3][5] This paper discusses the methodology of estimating object temperatures by characterizing/calibrating different components inside a thermal camera utilizing an uncooled amorphous silicon microbolometer image sensor. Plots of system performance across camera operating temperatures are shown.
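    The counts-to-temperature calibration idea can be illustrated with a simple two-point blackbody fit (a hedged sketch with a Stefan-Boltzmann-like model and synthetic numbers; real radiometric cameras use richer models and also compensate FPA temperature drift, lens effects, and emissivity per scene):

```python
def calibrate_two_point(counts, temps_k):
    """Fit counts = gain * T^4 + offset from two blackbody references
    at known temperatures (Kelvin)."""
    (c1, c2), (t1, t2) = counts, temps_k
    gain = (c2 - c1) / (t2 ** 4 - t1 ** 4)
    offset = c1 - gain * t1 ** 4
    return gain, offset

def counts_to_temp(c, gain, offset, emissivity=1.0):
    """Invert the model to estimate object temperature from sensor
    counts, optionally correcting for target emissivity."""
    return ((c - offset) / (gain * emissivity)) ** 0.25
```
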

  15. Vehicle occupancy detection camera position optimization using design of experiments and standard image references

    NASA Astrophysics Data System (ADS)

    Paul, Peter; Hoover, Martin; Rabbani, Mojgan

    2013-03-01

    Camera positioning and orientation are important in domains such as transportation, since the objects to be imaged vary greatly in shape and size. In a typical transportation application that requires capturing still images, inductive loops buried in the ground or laser trigger sensors are used to trigger the image capture system when a vehicle reaches the image capture zone. The camera in such a system is in a fixed position pointed at the roadway and at a fixed orientation. Thus the problem is to determine the optimal location and orientation of the camera for capturing images from a wide variety of vehicles. Methods from Design for Six Sigma, including identifying important parameters and noise sources and performing systematically designed experiments (DOE), can be used to determine an effective set of parameter settings for the camera position and orientation under these conditions. In the transportation application of high occupancy vehicle lane enforcement, the number of passengers in the vehicle is to be counted. Past work has described front seat vehicle occupant counting using a camera mounted on an overhead gantry looking through the front windshield in order to capture images of vehicle occupants. However, viewing rear seat passengers is more problematic due to obstructions including the vehicle body frame structures and seats. One approach is to view the rear seats through the side window. In this situation the problem of optimally positioning and orienting the camera to adequately capture the rear seats through the side window can be addressed through a designed experiment. In any automated traffic enforcement system it is necessary for humans to be able to review any automatically captured digital imagery in order to verify detected infractions.
Thus for defining an output to be optimized for the designed experiment, a human defined standard image reference (SIR) was used to quantify the quality of the line-of-sight to the rear seats of the vehicle. The DOE-SIR method was exercised for determining the optimal camera position and orientation for viewing vehicle rear seats over a variety of vehicle types. The resulting camera geometry was used on public roadway image capture resulting in over 95% acceptable rear seat images for human viewing.
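The DOE approach above can be illustrated with a minimal full-factorial sketch. The factor levels and the score function below are made-up stand-ins for the camera geometry parameters and the human-scored SIR quality metric.

```python
# Minimal full-factorial design-of-experiments sketch for camera placement.
# HEIGHTS_M, ANGLES_DEG and sir_score are hypothetical, not from the paper.
from itertools import product

HEIGHTS_M = [2.0, 2.5, 3.0]   # candidate camera mounting heights
ANGLES_DEG = [10, 20, 30]     # candidate downward tilt angles

def sir_score(height, angle):
    """Hypothetical stand-in for the SIR line-of-sight quality score:
    quality peaks near 2.5 m height and 20 deg tilt."""
    return 100 - 8 * abs(height - 2.5) - 1.5 * abs(angle - 20)

def best_setting():
    """Run every factor combination and return the best (height, angle, score)."""
    runs = [(h, a, sir_score(h, a)) for h, a in product(HEIGHTS_M, ANGLES_DEG)]
    return max(runs, key=lambda r: r[2])
```

A real DOE would add noise factors (vehicle type, tint, lighting) and replicate runs, but the evaluate-all-combinations-then-pick structure is the same.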

  16. Effect of camera temperature variations on stereo-digital image correlation measurements.

    PubMed

    Pan, Bing; Shi, Wentao; Lubineau, Gilles

    2015-12-01

    In laboratory and especially non-laboratory stereo-digital image correlation (stereo-DIC) applications, the extrinsic and intrinsic parameters of the cameras used in the system may change slightly due to the camera warm-up effect and possible variations in ambient temperature. Because these camera parameters are generally calibrated once prior to measurements and considered to be unaltered during the whole measurement period, the changes in these parameters unavoidably induce displacement/strain errors. In this study, the effect of temperature variations on stereo-DIC measurements is investigated experimentally. To quantify the errors associated with camera or ambient temperature changes, surface displacements and strains of a stationary optical quartz glass plate with near-zero thermal expansion were continuously measured using a regular stereo-DIC system. The results confirm that (1) temperature variations in the cameras and ambient environment have a considerable influence on the displacements and strains measured by stereo-DIC due to the slightly altered extrinsic and intrinsic camera parameters; and (2) the corresponding displacement and strain errors correlate with temperature changes. For the specific stereo-DIC configuration used in this work, the temperature-induced strain errors were estimated to be approximately 30-50 με/°C. To minimize the adverse effect of camera temperature variations on stereo-DIC measurements, two simple but effective solutions are suggested. PMID:26836665

  17. Imaging with depth extension: where are the limits in fixed- focus cameras?

    NASA Astrophysics Data System (ADS)

    Bakin, Dmitry; Keelan, Brian

    2008-08-01

    The integration of novel optics designs, miniature CMOS sensors, and powerful digital processing into a single imaging module package is driving progress in handset camera systems in terms of performance, size (thinness) and cost. Miniature cameras incorporating high resolution sensors and fixed-focus Extended Depth of Field (EDOF) optics allow close-range reading of printed material (barcode patterns, business cards), while providing high quality imaging in more traditional applications. These cameras incorporate modified optics and digital processing to recover the soft-focus images and restore sharpness over a wide range of object distances. The effects of a variety of imaging module parameters on the EDOF range were analyzed for a family of high resolution CMOS modules. The parameters include various optical properties of the imaging lens and the characteristics of the sensor. The extension factors for the EDOF imaging module were defined in terms of an improved absolute resolution in object space while maintaining focus at infinity. This definition was applied for the purpose of identifying the minimally resolvable object details in mobile cameras with a bar-code reading feature.

  18. Correction of rolling wheel images captured by a linear array camera.

    PubMed

    Xu, Jiayuan; Sun, Ran; Tian, Yupeng; Xie, Qi; Yang, Ying; Liu, Hongdan; Cao, Lei

    2015-11-20

    As a critical part of the train, wheels affect railway transport security to a large extent. This paper introduces an online method to detect the wheel tread of a train. The wheel tread images are collected by industrial linear array charge coupled device (CCD) cameras when the train is moving at a low velocity. This study defines the positioning of the cameras and determines how to select other parameters such as the horizontal angle and the scanning range. The deformation of the wheel tread image can be calculated based on these parameters and corrected by gray interpolation. PMID:26836530

  19. Development of a handheld fluorescence imaging camera for intraoperative sentinel lymph node mapping.

    PubMed

    Szyc, Łukasz; Bonifer, Stefanie; Walter, Alfred; Jagemann, Uwe; Grosenick, Dirk; Macdonald, Rainer

    2015-05-01

    We present a compact fluorescence imaging system developed for real-time sentinel lymph node mapping. The device uses two near-infrared wavelengths to record fluorescence and anatomical images with a single charge-coupled device camera. Experiments on lymph node and tissue phantoms confirmed that the amount of dye in superficial lymph nodes can be better estimated due to the absorption correction procedure integrated in our device. Because of the camera head's small size and low weight, all accessible regions of tissue can be reached without the need for any adjustments. PMID:25585232

  20. Development of a handheld fluorescence imaging camera for intraoperative sentinel lymph node mapping

    NASA Astrophysics Data System (ADS)

    Szyc, Łukasz; Bonifer, Stefanie; Walter, Alfred; Jagemann, Uwe; Grosenick, Dirk; Macdonald, Rainer

    2015-05-01

    We present a compact fluorescence imaging system developed for real-time sentinel lymph node mapping. The device uses two near-infrared wavelengths to record fluorescence and anatomical images with a single charge-coupled device camera. Experiments on lymph node and tissue phantoms confirmed that the amount of dye in superficial lymph nodes can be better estimated due to the absorption correction procedure integrated in our device. Because of the camera head's small size and low weight, all accessible regions of tissue can be reached without the need for any adjustments.

  1. Coded-Aperture Compton Camera for Gamma-Ray Imaging

    NASA Astrophysics Data System (ADS)

    Farber, Aaron M.; Williams, John G.

    2016-02-01

    A novel gamma-ray imaging system is demonstrated, by means of Monte Carlo simulation. Previous designs have used either a coded aperture or Compton scattering system to image a gamma-ray source. By taking advantage of characteristics of each of these systems a new design can be implemented that does not require a pixelated stopping detector. Use of the system is illustrated for a simulated radiation survey in a decontamination and decommissioning operation.

  2. One high-accuracy camera calibration algorithm based on computer vision images

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Huang, Jianming; Wei, Xiangquan

    2015-12-01

    Camera calibration is the first step in computer vision and one of the most active research fields nowadays. In order to improve measurement precision, the internal parameters of the camera should be accurately calibrated. A high-accuracy camera calibration algorithm is therefore proposed, based on images of planar or tridimensional targets. Using this algorithm, the internal parameters of the camera are calibrated from an existing planar target in a vision-based navigation experiment. The experimental results show that the accuracy of the proposed algorithm is markedly improved compared with the conventional linear algorithm, the Tsai general algorithm, and the Zhang Zhengyou calibration algorithm. The proposed algorithm satisfies the needs of computer vision and provides a reference for precise measurement of relative position and attitude.

  3. Eyegaze Detection from Monocular Camera Image for Eyegaze Communication System

    NASA Astrophysics Data System (ADS)

    Ohtera, Ryo; Horiuchi, Takahiko; Kotera, Hiroaki

    An eyegaze interface is one of the key technologies for input devices in the ubiquitous-computing society. In particular, an eyegaze communication system is very important and useful for severely handicapped users such as quadriplegic patients. Most conventional eyegaze tracking algorithms require specific light sources, equipment and devices. In this study, a simple eyegaze detection algorithm is proposed using a single monocular video camera. The proposed algorithm works under the condition of fixed head pose, but slight movement of the face is accepted. In our system, we assume that all users have the same eyeball size, based on physiological eyeball models. However, we succeeded in calibrating the physiological movement of the eyeball center depending on the gazing direction by approximating it as a change in the eyeball radius. In the gaze detection stage, the iris is extracted from a captured face frame by using the Hough transform. Then, the eyegaze angle is derived by calculating the Euclidean distance of the iris centers between the extracted frame and a reference frame captured in the calibration process. We applied our system to an eyegaze communication interface and verified its performance through key typing experiments with a visual keyboard on a display.
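The gaze-angle step can be sketched with the fixed-radius eyeball model described above. Units and the radius value below are hypothetical; the paper's system obtains the iris centers via the Hough transform, which is assumed already done here.

```python
# Sketch of gaze-angle estimation from iris-center displacement, assuming the
# iris center moves on a sphere of known radius (all values in pixels and
# purely illustrative).
import math

def gaze_angle_deg(iris_now, iris_ref, eyeball_radius_px):
    """Gaze angle (degrees) from the Euclidean displacement of the iris
    center between the current frame and the calibration reference frame."""
    dx = iris_now[0] - iris_ref[0]
    dy = iris_now[1] - iris_ref[1]
    d = math.hypot(dx, dy)
    d = min(d, eyeball_radius_px)  # clamp to the valid asin domain
    return math.degrees(math.asin(d / eyeball_radius_px))
```

For example, a 10-pixel displacement on a 20-pixel eyeball radius corresponds to a 30 degree gaze deviation.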

  4. Three-dimensional camera capturing 360 directional image for natural three-dimensional display

    NASA Astrophysics Data System (ADS)

    Hanuma, Hisaki; Takaki, Yasuhiro

    2005-11-01

    Natural three-dimensional images can be produced by displaying a large number of directional images with directional rays. Directional images are orthographic projections of a three-dimensional object and are displayed with nearly parallel rays. We have already constructed 64-directional, 72-directional, and 128-directional natural three-dimensional displays whose angle sampling pitches of the horizontal ray direction are 0.34°, 0.38°, and 0.25°, respectively. In this study we constructed a rotating camera system to capture 360° three-dimensional information of an actual object. An object is located at the center of rotation and a camera mounted at the end of an arm is rotated around the object. A large number of images are captured from different horizontal directions with a small rotation angle interval. Because the captured images are perspective projections of the object, directional images are generated by interpolating the captured images. The 360° directional image consists of 1,059, 947, and 1,565 directional images corresponding to the three different displays. When the number of captured images is about 4,000, the directional images can be generated without image interpolation, so that correct directional images are obtained. The degradation of the generated 360° directional image depending on the number of captured images was evaluated. The results show that the PSNR is higher than 35 dB when more than 400 images are captured. With the 360° directional image, the three-dimensional images can be interactively rotated on the three-dimensional display. The data sizes of the 360° directional images are 233 MB, 347 MB, and 344 MB, respectively. Because the directional images for adjacent horizontal directions are very similar, the 360° directional image can be compressed using conventional movie compression algorithms. We used an H.264 CODEC and achieved a compression ratio of 1.5% with PSNR > 35 dB.
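The PSNR figure used above to judge the interpolated directional images is the standard peak signal-to-noise ratio; a minimal sketch for 8-bit images, with images represented as flat pixel lists for brevity:

```python
# Peak signal-to-noise ratio between two equally sized images, given as flat
# lists of pixel values (8-bit grayscale assumed, so the peak is 255).
import math

def psnr(img_a, img_b, peak=255.0):
    """PSNR in dB; returns infinity for identical images."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(peak ** 2 / mse)
```

A uniform one-level error over an 8-bit image gives roughly 48 dB, so the paper's >35 dB threshold corresponds to visibly mild interpolation error.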

  5. Online gamma-camera imaging of 103Pd seeds (OGIPS) for permanent breast seed implantation

    NASA Astrophysics Data System (ADS)

    Ravi, Ananth; Caldwell, Curtis B.; Keller, Brian M.; Reznik, Alla; Pignol, Jean-Philippe

    2007-09-01

    Permanent brachytherapy seed implantation is being investigated as a mode of accelerated partial breast irradiation for early stage breast cancer patients. Currently, the seeds are poorly visualized during the procedure making it difficult to perform a real-time correction of the implantation if required. The objective was to determine if a customized gamma-camera can accurately localize the seeds during implantation. Monte Carlo simulations of a CZT based gamma-camera were used to assess whether images of suitable quality could be derived by detecting the 21 keV photons emitted from 74 MBq 103Pd brachytherapy seeds. A hexagonal parallel hole collimator with a hole length of 38 mm, hole diameter of 1.2 mm and 0.2 mm septa, was modeled. The design of the gamma-camera was evaluated on a realistic model of the breast and three layers of the seed distribution (55 seeds) based on a pre-implantation CT treatment plan. The Monte Carlo simulations showed that the gamma-camera was able to localize the seeds with a maximum error of 2.0 mm, using only two views and 20 s of imaging. A gamma-camera can potentially be used as an intra-procedural image guidance system for quality assurance for permanent breast seed implantation.

  6. A mobile phone-based retinal camera for portable wide field imaging.

    PubMed

    Maamari, Robi N; Keenan, Jeremy D; Fletcher, Daniel A; Margolis, Todd P

    2014-04-01

    Digital fundus imaging is used extensively in the diagnosis, monitoring and management of many retinal diseases. Access to fundus photography is often limited by patient morbidity, high equipment cost and shortage of trained personnel. Advancements in telemedicine methods and the development of portable fundus cameras have increased the accessibility of retinal imaging, but most of these approaches rely on separate computers for viewing and transmission of fundus images. We describe a novel portable handheld smartphone-based retinal camera capable of capturing high-quality, wide field fundus images. The use of the mobile phone platform creates a fully embedded system capable of acquisition, storage and analysis of fundus images that can be directly transmitted from the phone via the wireless telecommunication system for remote evaluation. PMID:24344230

  7. Shading correction of camera captured document image with depth map information

    NASA Astrophysics Data System (ADS)

    Wu, Chyuan-Tyng; Allebach, Jan P.

    2015-01-01

    Camera modules have become more popular in consumer electronics and office products. As a consequence, people have many opportunities to use a camera-based device to record a hardcopy document in their daily lives. However, the camera can easily introduce undesired shading into the captured document image. Sometimes, this non-uniformity may degrade the readability of the contents. In order to mitigate this artifact, some solutions have been developed, but most of them are only suitable for particular types of documents. In this paper, we introduce a content-independent and shape-independent method that lessens the shading effects in captured document images. We want to reconstruct the image such that the result will look like a document image captured under a uniform lighting source. Our method utilizes the 3D depth map of the document surface and a look-up table strategy. We will first discuss the model and the assumptions that we used for the approach. Then, the process of creating and utilizing the look-up table will be described in the paper. We implement this algorithm with our prototype 3D scanner, which also uses a camera module to capture a 2D image of the object. Some experimental results will be presented to show the effectiveness of our method. Both flat and curved surface document examples will be included.
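A heavily simplified sketch of the look-up-table idea: derive per-pixel gains from a blank reference capture under the same lighting, then apply them to the document image. The paper's actual method additionally incorporates the 3D depth map of the page surface, which is omitted here; images are flat pixel lists for brevity.

```python
# Flat-field style shading correction via a per-pixel gain look-up table.
# The blank-page reference and target level are illustrative assumptions.

def build_gain_lut(blank_page, target=240.0):
    """Per-pixel gain that maps the observed blank-page shading to a
    uniform target gray level."""
    return [target / max(p, 1.0) for p in blank_page]

def correct(image, gain_lut):
    """Apply the gain LUT to a captured image and clip to the 8-bit range."""
    return [min(255.0, p * g) for p, g in zip(image, gain_lut)]
```

A pixel that the blank reference shows at half brightness gets a 2x gain, so identical ink densities end up at identical corrected levels regardless of the shading.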

  8. The image pretreatment based on the FPGA inside digital CCD camera

    NASA Astrophysics Data System (ADS)

    Tian, Rui; Liu, Yan-ying

    2009-07-01

    In a space project, a digital CCD camera was required that can image clearly in a 1 lux light environment. The CCD sensor ICX285AL produced by SONY Co. Ltd has been used in the camera. The FPGA (Field Programmable Gate Array) chip XQR2V1000 has been used as a timing generator and signal processor inside the camera. In a low-light environment, however, two kinds of random noise become apparent as the camera's variable gain is increased: dark current noise in the image background, and vertical transfer noise. A real-time method for eliminating this noise, based on the FPGA inside the CCD camera, is introduced. The causes and characteristics of the random noise have been analyzed. First, several approaches for eliminating dark current noise were proposed and simulated in VC++ in order to compare their speed and effect; a Gaussian filter was chosen for its filtering performance. The vertical transfer noise is characterized by noise points at fixed column positions in the image's two-dimensional coordinates, and its behavior is fixed: the gray value of a noise point is 16-20 levels below the surrounding pixels. According to these characteristics, a local median filter was used to remove the vertical noise. Finally, these algorithms were transplanted into the FPGA chip inside the CCD camera. Extensive experiments have proved that the pretreatment performs well in real time, improving the signal-to-noise ratio of the digital CCD camera by 3-5 dB in the low-light environment.
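The local median repair of the fixed-column vertical noise can be sketched in software as follows; this is a pure-Python stand-in for the FPGA implementation, and the bad-column list and neighborhood radius are assumptions.

```python
# Replace each pixel in a known noisy column with the median of its
# horizontal neighbours, exploiting the fact that the vertical-transfer
# noise sits at fixed column positions.
import statistics

def fix_columns(image, bad_cols, radius=2):
    """image: list of rows (lists of gray values); bad_cols: noisy columns."""
    fixed = [row[:] for row in image]
    width = len(image[0])
    for row, out in zip(image, fixed):
        for c in bad_cols:
            lo, hi = max(0, c - radius), min(width, c + radius + 1)
            neigh = [row[x] for x in range(lo, hi) if x != c]
            out[c] = statistics.median(neigh)
    return fixed
```

Because only the listed columns are touched, the rest of the image passes through unchanged, which keeps the cost low enough for a per-frame hardware pipeline.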

  9. GNSS Carrier Phase Integer Ambiguity Resolution with Camera and Satellite images

    NASA Astrophysics Data System (ADS)

    Henkel, Patrick

    2015-04-01

    Ambiguity resolution is the key to high precision position and attitude determination with GNSS. However, ambiguity resolution of kinematic receivers becomes challenging in environments with substantial multipath, limited satellite availability and erroneous cycle slip corrections. There is a need for other sensors, e.g. inertial sensors, that allow an independent prediction of the position. The change of the predicted position over time can then be used for cycle slip detection and correction. In this paper, we provide a method to improve the initial ambiguity resolution for RTK and PPP with vision-based position information. Camera images are correlated with geo-referenced aerial/satellite images to obtain independent absolute position information. This absolute position information is then coupled with the GNSS and INS measurements in an extended Kalman filter to estimate the position, velocity, acceleration, attitude, angular rates, code multipath and biases of the accelerometers and gyroscopes. The camera and satellite images are matched based on some characteristic image points (e.g. corners of street markers). We extract these characteristic image points from the camera images by performing the following steps: An inverse mapping (homogeneous projection) is applied to transform the camera images from the driver's perspective to bird view. Subsequently, we detect the street markers by performing (a) a color transformation and reduction with adaptive brightness correction to focus on relevant features, (b) a subsequent morphological operation to enhance the structure recognition, (c) an edge and corner detection to extract feature points, and (d) a point matching of the corner points with a template to recognize the street markers. We verified the proposed method with two low-cost u-blox LEA 6T GPS receivers, the MPU9150 from Invensense, the ASCOS RTK corrections and a PointGrey camera.
The results show very precise and seamless position and attitude estimates in an urban environment with substantial multipath.
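The inverse mapping step above amounts to pushing pixel coordinates through a 3x3 homography in homogeneous form; a minimal sketch with a made-up matrix, not a calibrated camera-to-bird-view transform:

```python
# Apply a 3x3 homography H (row-major list of lists) to a pixel coordinate.
# In the bird-view use case, H would come from camera calibration; the
# matrices used below are illustrative only.

def apply_homography(H, x, y):
    """Map pixel (x, y) through H using homogeneous coordinates."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w
```

Warping a whole image is then just applying this map (or its inverse) per pixel, after which the street markers appear with metric, top-down geometry.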

  10. Image analysis techniques to estimate river discharge using time-lapse cameras in remote locations

    NASA Astrophysics Data System (ADS)

    Young, David S.; Hart, Jane K.; Martinez, Kirk

    2015-03-01

    Cameras have the potential to provide new data streams for environmental science. Improvements in image quality, power consumption and image processing algorithms mean that it is now possible to test camera-based sensing in real-world scenarios. This paper presents an 8-month trial of a camera to monitor discharge in a glacial river, in a situation where this would be difficult to achieve using methods requiring sensors in or close to the river, or human intervention during the measurement period. The results indicate diurnal changes in discharge throughout the year, the importance of subglacial winter water storage, and rapid switching from a "distributed" winter system to a "channelised" summer drainage system in May. They show that discharge changes can be measured with an accuracy that is useful for understanding the relationship between glacier dynamics and flow rates.

  11. The application of camera calibration in range-gated 3D imaging technology

    NASA Astrophysics Data System (ADS)

    Liu, Xiao-quan; Wang, Xian-wei; Zhou, Yan

    2013-09-01

    Range-gated laser imaging technology was proposed in 1966 by L. F. Gillespie at the U.S. Army Night Vision Laboratory (NVL). Using a pulsed laser and an intensified charge-coupled device (ICCD) as light source and detector respectively, range-gated laser imaging can realize space-slice imaging while restraining atmospheric backscatter, and in turn detect the target effectively, by controlling the delay between the laser pulse and the strobe. Owing to constraints on the development of key components such as narrow-pulse lasers and gated imaging devices, research progressed slowly over the following decades. Only at the beginning of this century, as the hardware technology matured, did this technology develop rapidly in fields such as night vision, underwater imaging, biomedical imaging and three-dimensional imaging, especially range-gated three-dimensional (3-D) laser imaging aimed at acquiring target spatial information. 3-D reconstruction is the process of restoring the visible surface geometry of 3-D objects from two-dimensional (2-D) images. Range-gated laser imaging can achieve gated imaging of a slice of space to form a slice image, and in turn provide the distance information corresponding to that slice image. But to invert this into 3-D spatial information, we need to obtain the imaging field of view of the system, that is, the focal length of the system. Then, based on the distance information of the space slice, the spatial information of each unit of space corresponding to each pixel can be inverted. Camera calibration is an indispensable step in 3-D reconstruction, covering both the camera's internal parameters and its external parameters. In order to meet the technical requirements of range-gated 3-D imaging, this paper studies the calibration of the zoom lens system.
After reviewing camera calibration techniques comprehensively, a classic line-based calibration method is selected. A one-to-one correspondence between the visual field and the focal length of the system is obtained, offering effective visual field information for the matching of the imaging field and illumination field in range-gated 3-D imaging. On the basis of the experimental results, combined with depth of field theory, the application of camera calibration in range-gated 3-D imaging technology is further studied.

  12. The trustworthy digital camera: Restoring credibility to the photographic image

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L.

    1994-01-01

    The increasing sophistication of computers has made digital manipulation of photographic images, as well as other digitally-recorded artifacts such as audio and video, incredibly easy to perform and increasingly difficult to detect. Today, every picture appearing in newspapers and magazines has been digitally altered to some degree, with the severity varying from the trivial (cleaning up 'noise' and removing distracting backgrounds) to the point of deception (articles of clothing removed, heads attached to other people's bodies, and the complete rearrangement of city skylines). As the power, flexibility, and ubiquity of image-altering computers continues to increase, the well-known adage that 'the photograph doesn't lie' will continue to become an anachronism. A solution to this problem comes from a concept called digital signatures, which incorporates modern cryptographic techniques to authenticate electronic mail messages. 'Authenticate' in this case means one can be sure that the message has not been altered, and that the sender's identity has not been forged. The technique can serve not only to authenticate images, but also to help the photographer retain and enforce copyright protection when the concept of 'electronic original' is no longer meaningful.
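The hash-then-authenticate flow behind digital signatures can be sketched with Python's standard library. Since the stdlib has no asymmetric signing, an HMAC with a shared secret stands in here for the private-key signature a trustworthy camera would compute in secure hardware; the flow (hash the content, attach a tag, verify later) is the same.

```python
# Illustrative integrity check for image data. NOTE: a real digital signature
# uses an asymmetric private key (e.g. RSA/ECDSA); the HMAC below is a
# symmetric stand-in used only to show the sign/verify structure.
import hashlib
import hmac

SECRET_KEY = b"camera-secret"  # stand-in for the camera's private key

def sign_image(image_bytes):
    """Return an authentication tag bound to the exact image content."""
    return hmac.new(SECRET_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes, tag):
    """True iff the image has not been altered since signing."""
    return hmac.compare_digest(sign_image(image_bytes), tag)
```

Changing even one pixel changes the SHA-256 digest, so any post-capture manipulation invalidates the tag.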

  13. A multiple-plate, multiple-pinhole camera for X-ray and gamma-ray imaging

    NASA Technical Reports Server (NTRS)

    Hoover, R. B.

    1971-01-01

    Plates with identical patterns of precisely aligned pinholes constitute a lens system which, when rotated about the optical axis, produces a continuous high resolution image of a low-energy X-ray or gamma-ray source. The camera has applications in radiation treatment and nuclear medicine.

  14. Hyperspectral imaging using a color camera and its application for pathogen detection

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper reports the results of a feasibility study for the development of a hyperspectral image recovery (reconstruction) technique using a RGB color camera and regression analysis in order to detect and classify colonies of foodborne pathogens. The target bacterial pathogens were the six represe...

  15. Geologic Analysis of the Surface Thermal Emission Images Taken by the VMC Camera, Venus Express

    NASA Astrophysics Data System (ADS)

    Basilevsky, A. T.; Shalygin, E. V.; Titov, D. V.; Markiewicz, W. J.; Scholten, F.; Roatsch, Th.; Fiethe, B.; Osterloh, B.; Michalik, H.; Kreslavsky, M. A.; Moroz, L. V.

    2010-03-01

    Analysis of Venus Monitoring Camera 1-µm images and surface emission modeling showed that the apparent emissivity at Chimon-mana tessera and Tuulikki volcano is higher than that of the adjacent plains; Maat Mons did not show any signature of ongoing volcanism.

  16. The reconstruction of digital terrain model using panoramic camera images of Chang'E-3

    NASA Astrophysics Data System (ADS)

    Wang, F. F.; Liu, J. J.; Ren, X.; Wang, W. R.; Mu, L. L.; Tan, X.; Li, C. L.

    2014-04-01

    The most direct and effective way of understanding planetary topography and morphology is to build an accurate 3D planetary terrain model. Stereo images taken by the two panoramic cameras (PCAM) installed on the recently launched Chang'E-3 rover can be an optimal data source for assessing the lunar landscape around the rover. This paper proposes a fast and efficient workflow for real-time reconstruction of a high-resolution 3D lunar terrain model, including the Digital Elevation Model (DEM) and Digital Orthophoto Map (DOM), using the PCAM stereo images. We found that the residual errors of coordinates in adjacent images of the mosaiced DOM were within 2 pixels, and the distance deviation from the topographic data generated from the descent camera images was small. Thus, we conclude that this terrain model can satisfy the needs of identifying exploration targets and planning the rover's traverse routes.

  17. Film cameras or digital sensors? The challenge ahead for aerial imaging

    USGS Publications Warehouse

    Light, D.L.

    1996-01-01

    Cartographic aerial cameras continue to play the key role in producing quality products for the aerial photography business, and specifically for the National Aerial Photography Program (NAPP). One NAPP photograph taken with cameras capable of 39 lp/mm system resolution can contain the equivalent of 432 million pixels at 11 µm spot size, and the cost is less than $75 per photograph to scan and output the pixels on a magnetic storage medium. On the digital side, solid state charge coupled device linear and area arrays can yield quality resolution (7 to 12 µm detector size) and a broader dynamic range. If linear arrays are to compete with film cameras, they will require precise attitude and positioning of the aircraft so that the lines of pixels can be unscrambled and put into a suitable homogeneous scene that is acceptable to an interpreter. Area arrays need to be much larger than currently available to image scenes competitive in size with film cameras. Analysis of the relative advantages and disadvantages of the two systems shows that the analog approach is more economical at present. However, as arrays become larger, attitude sensors become more refined, global positioning system coordinate readouts become commonplace, and storage capacity becomes more affordable, the digital camera may emerge as the imaging system for the future. Several technical challenges must be overcome if digital sensors are to advance to where they can support mapping, charting, and geographic information system applications.
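The 432-million-pixel figure can be checked directly, assuming the standard 9 x 9 inch (228.6 mm) aerial film format used by NAPP:

```python
# Worked check of the pixel-count claim: a 228.6 mm frame scanned at an
# 11 micrometre spot size. The 9-inch format is an assumption about the
# frame size, which the abstract does not state explicitly.

frame_mm = 228.6   # 9-inch aerial film frame, one side
spot_um = 11.0     # scanning spot size

pixels_per_side = frame_mm * 1000.0 / spot_um   # ~20,800 pixels
total_pixels = pixels_per_side ** 2             # ~4.32e8 pixels
```

This lands within rounding of the quoted 432 million pixels per photograph.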

  18. Research on the effect of the differential-images technique on the resolution of infrared spatial cameras

    NASA Astrophysics Data System (ADS)

    Jin, Guang; An, Yuan; Qi, Yingchun; Hu, Fusheng

    2007-12-01

    The optical system of an infrared spatial camera adopts a bigger relative aperture and a bigger pixel size on the focal plane element. These give the system a bulky volume and low resolution, so the potential of the optical system cannot be fully exploited. A method for improving the resolution of an infrared spatial camera based on multi-frame difference images is therefore introduced. The method uses more than one detector to acquire several difference images, and then reconstructs a new high-resolution image from these images through the relationship of pixel grey values. The difference-images technique using more than two detectors is researched, and it can improve the resolution 2.5 times in theory. The relationship of pixel grey values between the low-resolution difference images and the high-resolution image is found by analyzing the energy of CCD sampling, and a general relationship is given between the resolution enhancement factor of the differential method and the minimum number of CCDs required to detect the image. Based on this theory, the process of using the difference-images technique to improve image resolution was simulated in Matlab with a portrait image as the object, with the software outputting the result as an image. The results prove that the technique is effective for high-resolution image reconstruction. The resolution of an infrared spatial camera can be improved considerably, while holding the optical structure size fixed or using a large detector, by applying the difference-images technique. The technique therefore has high value in optical remote sensing.

  19. The iQID Camera: An Ionizing-Radiation Quantum Imaging Detector

    SciTech Connect

    Miller, Brian W.; Gregory, Stephanie J.; Fuller, Erin S.; Barrett, Harrison H.; Barber, Bradford H.; Furenlid, Lars R.

    2014-06-11

    We have developed and tested a novel, ionizing-radiation Quantum Imaging Detector (iQID). This scintillation-based detector was originally developed as a high-resolution gamma-ray imager, called BazookaSPECT, for use in single-photon emission computed tomography (SPECT). Recently, we have investigated the detector's response and imaging potential with other forms of ionizing radiation including alpha, neutron, beta, and fission fragment particles. The detector's response to a broad range of ionizing radiation has prompted its new title. The principle of operation of the iQID camera involves coupling a scintillator to an image intensifier. The scintillation light generated by particle interactions is optically amplified by the intensifier and then re-imaged onto a CCD/CMOS camera sensor. The intensifier provides sufficient optical gain that practically any CCD/CMOS camera can be used to image ionizing radiation. Individual particles are identified and their spatial position (to sub-pixel accuracy) and energy are estimated on an event-by-event basis in real time using image analysis algorithms on high-performance graphics processing hardware. Distinguishing features of the iQID camera include portability, large active areas, high sensitivity, and high spatial resolution (tens of microns). Although modest, iQID has energy resolution that is sufficient to discriminate between particles. Additionally, spatial features of individual events can be used for particle discrimination. An important iQID imaging application that has recently been developed is single-particle, real-time digital autoradiography. We present the latest results and discuss potential applications.
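The event-by-event position estimation can be sketched with a simple intensity-weighted centroid; this is an illustrative stand-in for the GPU-based algorithms mentioned above:

```python
import numpy as np

# Intensity-weighted centroid of an above-threshold blob: a minimal
# stand-in for the event-by-event position/energy estimation described
# above (the real system uses GPU-accelerated image analysis).

def event_centroid(frame, threshold):
    """Return (row, col) sub-pixel centroid and summed signal (a crude
    energy estimate) of the above-threshold pixels."""
    mask = frame > threshold
    ys, xs = np.nonzero(mask)
    w = frame[mask].astype(float)
    energy = w.sum()
    return ys @ w / energy, xs @ w / energy, energy

# Toy blob spanning two pixels: centroid lands between them.
frame = np.zeros((6, 6))
frame[2, 3] = frame[3, 3] = 10.0
row, col, energy = event_centroid(frame, threshold=1.0)
```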

  20. The iQID camera: An ionizing-radiation quantum imaging detector

    PubMed Central

    Miller, Brian W.; Gregory, Stephanie J.; Fuller, Erin S.; Barrett, Harrison H.; Barber, H. Bradford; Furenlid, Lars R.

    2015-01-01

    We have developed and tested a novel, ionizing-radiation Quantum Imaging Detector (iQID). This scintillation-based detector was originally developed as a high-resolution gamma-ray imager, called BazookaSPECT, for use in single-photon emission computed tomography (SPECT). Recently, we have investigated the detector’s response and imaging potential with other forms of ionizing radiation including alpha, neutron, beta, and fission fragment particles. The confirmed response to this broad range of ionizing radiation has prompted its new title. The principle of operation of the iQID camera involves coupling a scintillator to an image intensifier. The scintillation light generated by particle interactions is optically amplified by the intensifier and then re-imaged onto a CCD/CMOS camera sensor. The intensifier provides sufficient optical gain that practically any CCD/CMOS camera can be used to image ionizing radiation. The spatial location and energy of individual particles are estimated on an event-by-event basis in real time using image analysis algorithms on high-performance graphics processing hardware. Distinguishing features of the iQID camera include portability, large active areas, excellent detection efficiency for charged particles, and high spatial resolution (tens of microns). Although modest, iQID has energy resolution that is sufficient to discriminate between particles. Additionally, spatial features of individual events can be used for particle discrimination. An important iQID imaging application that has recently been developed is real-time, single-particle digital autoradiography. We present the latest results and discuss potential applications. PMID:26166921

  1. Quantitative Evaluation of Surface Color of Tomato Fruits Cultivated in Remote Farm Using Digital Camera Images

    NASA Astrophysics Data System (ADS)

    Hashimoto, Atsushi; Suehara, Ken-Ichiro; Kameoka, Takaharu

    To measure the quantitative surface color information of agricultural products together with the ambient information during cultivation, a color calibration method for digital camera images and a remote monitoring system for color imaging using the Web were developed. Single-lens reflex and web digital cameras were used for the image acquisitions. The tomato images through the post-ripening process were taken by the digital camera both in the standard image acquisition system and in field conditions from morning to evening. Several kinds of images were acquired with the standard RGB color chart set up just behind the tomato fruit on a black matte background, and a color calibration was carried out. The influence of the sunlight could be experimentally eliminated, and the calibrated color information consistently agreed with the standard values acquired in the system through the post-ripening process. Furthermore, the surface color change of the tomato on the tree in a greenhouse was remotely monitored during maturation using the digital cameras equipped with the Field Server. The acquired digital color images were sent from the Farm Station to the BIFE Laboratory of Mie University via VPN. The time behavior of the tomato surface color change during the maturing process could be measured using the color parameter calculated from the obtained and calibrated color images along with the ambient atmospheric record. This study is an important step in developing surface color analysis both for simple and rapid evaluation of crop vigor in the field and for constructing an ambient, networked remote monitoring system for food security, precision agriculture, and agricultural research.
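The chart-based calibration step can be sketched as a linear least-squares fit of a 3 x 3 correction matrix; the linear model and all patch values below are illustrative assumptions, not the study's actual procedure:

```python
import numpy as np

# Hedged sketch of chart-based color calibration: estimate a linear 3x3
# correction matrix M mapping raw camera RGB to reference-chart RGB by
# least squares. The linear model and all numbers are fabricated for
# illustration.

M_true = np.array([[1.2, 0.1, 0.0],      # "ground truth" used only to
                   [0.0, 0.9, 0.1],      # fabricate consistent toy data
                   [0.1, 0.0, 1.1]])
camera_rgb = np.array([[0.9, 0.1, 0.1],
                       [0.1, 0.8, 0.2],
                       [0.2, 0.1, 0.9],
                       [0.5, 0.5, 0.5]])
chart_rgb = camera_rgb @ M_true          # reference values per patch

# Recover the correction matrix from the patch pairs
M, residuals, rank, _ = np.linalg.lstsq(camera_rgb, chart_rgb, rcond=None)
calibrated = camera_rgb @ M
```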

  2. Subtractive imaging in confocal scanning microscopy using a CCD camera as a detector.

    PubMed

    Sánchez-Ortiga, Emilio; Sheppard, Colin J. R.; Saavedra, Genaro; Martínez-Corral, Manuel; Doblas, Ana; Calatayud, Arnau

    2012-04-01

    We report a scheme for the detector system of confocal microscopes in which the pinhole and a large-area detector are substituted by a CCD camera. The numerical integration of the intensities acquired by the active pixels emulates the signal passing through the pinhole. We demonstrate the imaging capability and the optical sectioning of the system. Subtractive-imaging confocal microscopy can be implemented in a simple manner, providing superresolution and improving optical sectioning. PMID:22466221
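The pinhole emulation and the subtractive signal can be sketched as sums over CCD pixel regions; the radii and subtraction weight below are illustrative choices, not values from the paper:

```python
import numpy as np

# Sketch of the detection scheme: the CCD frame recorded at each scan
# position replaces the physical pinhole. Summing the active pixels
# inside a small radius emulates the pinhole signal; subtracting a
# scaled sum over a surrounding annulus gives a subtractive-imaging
# signal. Radii and the weight gamma are illustrative choices.

def confocal_signals(frame, r_in=2.0, r_out=4.0, gamma=0.5):
    h, w = frame.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - (h - 1) / 2, xx - (w - 1) / 2)
    pinhole = frame[r <= r_in].sum()                 # emulated pinhole
    annulus = frame[(r > r_in) & (r <= r_out)].sum()
    return pinhole, pinhole - gamma * annulus        # confocal, subtractive

# Toy frame: all light lands in the central pixel.
frame = np.zeros((9, 9))
frame[4, 4] = 1.0
confocal, subtractive = confocal_signals(frame)
```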

  3. The iQID camera: An ionizing-radiation quantum imaging detector

    NASA Astrophysics Data System (ADS)

    Miller, Brian W.; Gregory, Stephanie J.; Fuller, Erin S.; Barrett, Harrison H.; Bradford Barber, H.; Furenlid, Lars R.

    2014-12-01

    We have developed and tested a novel, ionizing-radiation Quantum Imaging Detector (iQID). This scintillation-based detector was originally developed as a high-resolution gamma-ray imager, called BazookaSPECT, for use in single-photon emission computed tomography (SPECT). Recently, we have investigated the detector's response and imaging potential with other forms of ionizing radiation including alpha, neutron, beta, and fission fragment particles. The confirmed response to this broad range of ionizing radiation has prompted its new title. The principle of operation of the iQID camera involves coupling a scintillator to an image intensifier. The scintillation light generated by particle interactions is optically amplified by the intensifier and then re-imaged onto a CCD/CMOS camera sensor. The intensifier provides sufficient optical gain that practically any CCD/CMOS camera can be used to image ionizing radiation. The spatial location and energy of individual particles are estimated on an event-by-event basis in real time using image analysis algorithms on high-performance graphics processing hardware. Distinguishing features of the iQID camera include portability, large active areas, excellent detection efficiency for charged particles, and high spatial resolution (tens of microns). Although modest, iQID has energy resolution that is sufficient to discriminate between particles. Additionally, spatial features of individual events can be used for particle discrimination. An important iQID imaging application that has recently been developed is real-time, single-particle digital autoradiography. We present the latest results and discuss potential applications.

  4. Camera phasing in multi-aperture coherent imaging.

    PubMed

    Gunturk, Bahadir K; Miller, Nicholas J; Watson, Edward A

    2012-05-21

    The resolution of a diffraction-limited imaging system is inversely proportional to the aperture size. Instead of using a single large aperture, multiple small apertures are used to synthesize a large aperture. Such a multi-aperture system is modular, typically more reliable and less costly. On the other hand, a multi-aperture system requires phasing sub-apertures to within a fraction of a wavelength. So far in the literature, only the piston, tip, and tilt type of inter-aperture errors have been addressed. In this paper, we present an approach to correct for rotational and translational errors as well. PMID:22714167
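The paper's correction algorithm is not reproduced here, but estimating a purely translational error between two sub-aperture images can be illustrated with phase correlation, a standard registration tool:

```python
import numpy as np

# Phase correlation: estimate the integer translational offset between
# two images of the same scene. A standard registration tool shown for
# illustration; the paper's actual phasing algorithm is not reproduced.

def phase_correlation_shift(a, b):
    """Return the (dy, dx) circular shift that aligns `b` to `a`."""
    h, w = a.shape
    cross_power = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    cross_power /= np.abs(cross_power) + 1e-12   # keep phase only
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    dy = dy - h if dy > h // 2 else dy           # map to signed shifts
    dx = dx - w if dx > w // 2 else dx
    return int(dy), int(dx)

rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = np.roll(a, (3, -5), axis=(0, 1))             # misaligned copy
shift = phase_correlation_shift(a, b)
```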

  5. Engineering performance of IRIS2 infrared imaging camera and spectrograph

    NASA Astrophysics Data System (ADS)

    Churilov, Vladimir; Dawson, John; Smith, Greg A.; Waller, Lew; Whittard, John D.; Haynes, Roger; Lankshear, Allan; Ryder, Stuart D.; Tinney, Chris G.

    2004-09-01

    IRIS2, the infrared imager and spectrograph for the Cassegrain focus of the Anglo Australian Telescope, has been in service since October 2001. IRIS2 incorporated many novel features, including multiple cryogenic multislit masks, a dual chambered vacuum vessel (the smaller chamber used to reduce thermal cycle time required to change sets of multislit masks), encoded cryogenic wheel drives with controlled backlash, a deflection compensating structure, and use of teflon impregnated hard anodizing for gear lubrication at low temperatures. Other noteworthy features were: swaged foil thermal link terminations, the pupil imager, the detector focus mechanism, phased getter cycling to prevent detector contamination, and a flow-through LN2 precooling system. The instrument control electronics was designed to allow accurate positioning of the internal mechanisms with minimal generation of heat. The detector controller was based on the AAO2 CCD controller, adapted for use on the HAWAII1 detector (1024 x 1024 pixels) and is achieving low noise and high performance. We describe features of the instrument design, the problems encountered and the development work required to bring them into operation, and their performance in service.

  6. High-speed camera with real time processing for frequency domain imaging

    PubMed Central

    Shia, Victor; Watt, David; Faris, Gregory W.

    2011-01-01

    We describe a high-speed camera system for frequency domain imaging suitable for applications such as in vivo diffuse optical imaging and fluorescence lifetime imaging. 14-bit images are acquired at 2 gigapixels per second and analyzed with real-time pipeline processing using field programmable gate arrays (FPGAs). Performance of the camera system has been tested both for RF-modulated laser imaging in combination with a gain-modulated image intensifier and a simpler system based upon an LED light source. System amplitude and phase noise are measured and compared against theoretical expressions in the shot noise limit presented for different frequency domain configurations. We show the camera itself is capable of shot noise limited performance for amplitude and phase in as little as 3 ms, and when used in combination with the intensifier the noise levels are nearly shot noise limited. The best phase noise in a single pixel is 0.04 degrees for a 1 s integration time. PMID:21750770
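The frequency-domain amplitude and phase extraction can be sketched with generic N-phase homodyne demodulation; the camera's actual FPGA pipeline is not reproduced, and the four-phase sampling below is an illustrative choice:

```python
import numpy as np

# Generic N-phase homodyne demodulation: amplitude and phase of a
# modulated signal recovered from frames sampled at uniform phase steps.
# Four steps are used here for illustration.

def demodulate(frames):
    """frames: (N, h, w) stack sampled at phases 2*pi*k/N."""
    n = len(frames)
    phases = 2 * np.pi * np.arange(n) / n
    s = np.tensordot(np.sin(phases), frames, axes=1)
    c = np.tensordot(np.cos(phases), frames, axes=1)
    amplitude = 2.0 / n * np.hypot(s, c)
    phase = np.arctan2(s, c)
    return amplitude, phase

# Toy pixel: DC level 10, modulation amplitude 3, phase 0.5 rad.
k = np.arange(4)
frames = (10 + 3.0 * np.cos(2 * np.pi * k / 4 - 0.5)).reshape(4, 1, 1)
amplitude, phase = demodulate(frames)
```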

  7. Temporally consistent virtual camera generation from stereo image sequences

    NASA Astrophysics Data System (ADS)

    Fox, Simon R.; Flack, Julien; Shao, Juliang; Harman, Phil

    2004-05-01

    The recent emergence of auto-stereoscopic 3D viewing technologies has increased demand for the creation of 3D video content. A range of glasses-free multi-viewer screens have been developed that require as many as 9 views generated for each frame of video. This presents difficulties in both view generation and transmission bandwidth. This paper examines the use of stereo video capture as a means to generate multiple scene views via disparity analysis. A machine learning approach is applied to learn relationships between disparity-generated depth information and the source footage, and to generate depth information in a temporally smooth manner for both left- and right-eye image sequences. A view morphing approach to multiple-view rendering is described which provides an excellent 3D effect on a range of glasses-free displays, while providing robustness to inaccurate stereo disparity calculations.
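The disparity analysis rests on the standard stereo geometry relating disparity to depth; a minimal sketch with illustrative numbers:

```python
# Depth from stereo disparity for a rectified pair: Z = f * B / d.
# All values are illustrative.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Z = f * B / d: focal length in pixels, baseline in metres,
    disparity in pixels; returns depth in metres."""
    return focal_px * baseline_m / disparity_px

z = depth_from_disparity(focal_px=1000.0, baseline_m=0.1, disparity_px=25.0)
# 1000 * 0.1 / 25 = 4.0 m
```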

  8. ITEMQM solutions for EM problems in image reconstruction exemplary for the Compton Camera

    NASA Astrophysics Data System (ADS)

    Pauli, J.; Pauli, E.-M.; Anton, G.

    2002-08-01

    Imaginary time expectation maximization (ITEM), a new algorithm for expectation-maximization problems based on quantum-mechanical energy minimization via imaginary (Euclidean) time evolution, is presented. Both the algorithm and the implementation (http://www.johannes-pauli.de/item/index.html) are published under the terms of the GNU General Public License (http://www.gnu.org/copyleft/gpl.html). Due to its generality, ITEM is applicable to various image reconstruction problems such as CT, PET, SPECT, NMR, Compton camera imaging, and tomosynthesis, as well as any other energy minimization problem. The choice of the optimal ITEM Hamiltonian is discussed and numerical results are presented for the Compton Camera.
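The energy-minimization principle behind ITEM can be demonstrated on a toy Hamiltonian: imaginary-time evolution suppresses high-energy components, so the normalized state converges to the ground state. The 2 x 2 matrix below is an arbitrary example, not a reconstruction Hamiltonian:

```python
import numpy as np

# Imaginary-time evolution on a toy 2x2 Hamiltonian: psi(tau) =
# exp(-H*tau) psi(0) suppresses high-energy components, so the
# normalized state converges to the energy minimizer (ground state).

H = np.array([[2.0, 1.0],
              [1.0, 3.0]])
vals, vecs = np.linalg.eigh(H)
ground = vecs[:, 0]                   # lowest-energy eigenvector

def imaginary_time_evolve(psi, tau):
    coeffs = vecs.T @ psi             # expand psi in the eigenbasis
    psi_tau = vecs @ (np.exp(-vals * tau) * coeffs)
    return psi_tau / np.linalg.norm(psi_tau)

psi = imaginary_time_evolve(np.array([1.0, 0.0]), tau=50.0)
```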

  9. Wide Field Camera 3: A Powerful New Imager for the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Kimble, Randy

    2008-01-01

    Wide Field Camera 3 (WFC3) is a powerful UV/visible/near-infrared camera in development for installation into the Hubble Space Telescope during upcoming Servicing Mission 4. WFC3 provides two imaging channels. The UVIS channel incorporates a 4096 x 4096 pixel CCD focal plane with sensitivity from 200 to 1000 nm. The IR channel features a 1024 x 1024 pixel HgCdTe focal plane covering 850 to 1700 nm. We report here on the design of the instrument, the performance of its flight detectors, results of the ground test and calibration program, and the plans for the Servicing Mission installation and checkout.

  10. MONICA: A Compact, Portable Dual Gamma Camera System for Mouse Whole-Body Imaging

    PubMed Central

    Xi, Wenze; Seidel, Jurgen; Karkareka, John W.; Pohida, Thomas J.; Milenic, Diane E.; Proffitt, James; Majewski, Stan; Weisenberger, Andrew G.; Green, Michael V.; Choyke, Peter L.

    2009-01-01

    Introduction We describe a compact, portable dual-gamma camera system (named MONICA for MObile Nuclear Imaging CAmeras) for visualizing and analyzing the whole-body biodistribution of putative diagnostic and therapeutic single photon emitting radiotracers in animals the size of mice. Methods Two identical, miniature pixelated NaI(Tl) gamma cameras were fabricated and installed looking up through the tabletop of a compact portable cart. Mice are placed directly on the tabletop for imaging. Camera imaging performance was evaluated with phantoms and field performance was evaluated in a weeklong In-111 imaging study performed in a mouse tumor xenograft model. Results Tc-99m performance measurements, using a photopeak energy window of 140 keV ± 10%, yielded the following results: spatial resolution (FWHM at 1 cm), 2.2 mm; sensitivity, 149 cps/MBq (5.5 cps/µCi); energy resolution (FWHM), 10.8%; count rate linearity (count rate vs. activity), r2 = 0.99 for 0-185 MBq (0-5 mCi) in the field-of-view (FOV); spatial uniformity, < 3% count rate variation across the FOV. Tumor and whole-body distributions of the In-111 agent were well visualized in all animals in 5-minute images acquired throughout the 168-hour study period. Conclusion Performance measurements indicate that MONICA is well suited to whole-body single photon mouse imaging. The field study suggests that inter-device communications and user-oriented interfaces included in the MONICA design facilitate use of the system in practice. We believe that MONICA may be particularly useful early in the (cancer) drug development cycle where basic whole-body biodistribution data can direct future development of the agent under study and where logistical factors, e.g. limited imaging space, portability, and, potentially, cost are important. PMID:20346864

  11. MONICA: a compact, portable dual gamma camera system for mouse whole-body imaging

    SciTech Connect

    Choyke, Peter L.; Xia, Wenze; Seidel, Jurgen; Kakareka, John W.; Pohida, Thomas J.; Milenic, Diane E.; Proffitt, James; Majewski, Stan; Weisenberger, Andrew G.; Green, Michael V.

    2010-04-01

    Introduction We describe a compact, portable dual-gamma camera system (named "MONICA" for MObile Nuclear Imaging CAmeras) for visualizing and analyzing the whole-body biodistribution of putative diagnostic and therapeutic single photon emitting radiotracers in animals the size of mice. Methods Two identical, miniature pixelated NaI(Tl) gamma cameras were fabricated and installed "looking up" through the tabletop of a compact portable cart. Mice are placed directly on the tabletop for imaging. Camera imaging performance was evaluated with phantoms and field performance was evaluated in a weeklong In-111 imaging study performed in a mouse tumor xenograft model. Results Tc-99m performance measurements, using a photopeak energy window of 140 keV ± 10%, yielded the following results: spatial resolution (FWHM at 1 cm), 2.2 mm; sensitivity, 149 cps (counts per second)/MBq (5.5 cps/μCi); energy resolution (FWHM, full width at half maximum), 10.8%; count rate linearity (count rate vs. activity), r2=0.99 for 0-185 MBq (0-5 mCi) in the field of view (FOV); spatial uniformity, <3% count rate variation across the FOV. Tumor and whole-body distributions of the In-111 agent were well visualized in all animals in 5-min images acquired throughout the 168-h study period. Conclusion Performance measurements indicate that MONICA is well suited to whole-body single photon mouse imaging. The field study suggests that inter-device communications and user-oriented interfaces included in the MONICA design facilitate use of the system in practice. We believe that MONICA may be particularly useful early in the (cancer) drug development cycle where basic whole-body biodistribution data can direct future development of the agent under study and where logistical factors, e.g., limited imaging space, portability and, potentially, cost are important.

  12. Advanced High-Speed Framing Camera Development for Fast, Visible Imaging Experiments

    SciTech Connect

    Amy Lewis, Stuart Baker, Brian Cox, Abel Diaz, David Glass, Matthew Martin

    2011-05-11

    The advances in high-voltage switching developed in this project allow a camera user to rapidly vary the number of output frames from 1 to 25. A high-voltage, variable-amplitude pulse train shifts the deflection location to the new frame location during the interlude between frames, making multiple frame counts and locations possible. The final deflection circuit deflects to five different frame positions per axis, including the center position, making for a total of 25 frames. To create the preset voltages, electronically adjustable ±500 V power supplies were chosen. Digital-to-analog converters provide digital control of the supplies. The power supplies are clamped to ±400 V so as not to exceed the voltage ratings of the transistors. A field-programmable gate array (FPGA) receives the trigger signal and calculates the combination of plate voltages for each frame. The interframe time and number of frames are specified by the user, but are limited by the camera electronics. The variable-frame circuit shifts the plate voltages of the first frame to those of the second frame during the user-specified interframe time. Designed around an electrostatic image tube, a framing camera images the light present during each frame (at the photocathode) onto the tube’s phosphor. The phosphor persistence allows the camera to display multiple frames on the phosphor at one time. During this persistence, a CCD camera is triggered and the analog image is collected digitally. The tube functions by converting photons to electrons at the negatively charged photocathode. The electrons move quickly toward the more positive charge of the phosphor. Two sets of deflection plates skew the electron’s path in horizontal and vertical (x axis and y axis, respectively) directions. Hence, each frame’s electrons bombard the phosphor surface at a controlled location defined by the voltages on the deflection plates.
To prevent the phosphor from being exposed between frames, the image tube is gated off between exposures.
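The FPGA's frame-to-voltage mapping can be sketched as follows; the ±400 V clamp and the 5 x 5 frame layout come from the description above, while the evenly spaced voltage levels are our assumption:

```python
# Frame-index to deflection-voltage mapping for a 5 x 5 frame grid.
# The +/-400 V clamp and 25-frame layout follow the description above;
# evenly spaced voltage levels are an assumption for illustration.

V_MAX = 400.0
LEVELS = [-V_MAX, -V_MAX / 2, 0.0, V_MAX / 2, V_MAX]

def plate_voltages(frame_index):
    """Return (Vx, Vy) plate voltages for frame_index 0..24, row-major."""
    if not 0 <= frame_index < 25:
        raise ValueError("frame index out of range")
    row, col = divmod(frame_index, 5)
    return LEVELS[col], LEVELS[row]
```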

  13. A portable device for small animal SPECT imaging in clinical gamma-cameras

    NASA Astrophysics Data System (ADS)

    Aguiar, P.; Silva-Rodríguez, J.; González-Castaño, D. M.; Pino, F.; Sánchez, M.; Herranz, M.; Iglesias, A.; Lois, C.; Ruibal, A.

    2014-07-01

    Molecular imaging has been reshaping clinical practice over the last decades, providing practitioners with non-invasive ways to obtain functional in-vivo information on a diversity of relevant biological processes. The use of molecular imaging techniques in preclinical research is equally beneficial, but spreads more slowly due to the difficulty of justifying a costly investment dedicated only to animal scanning. An alternative for lowering the costs is to repurpose parts of old clinical scanners to build new preclinical ones. Following this trend, we have designed, built, and characterized the performance of a portable system that can be attached to a clinical gamma-camera to make a preclinical single photon emission computed tomography scanner. Our system offers an image quality comparable to that of commercial systems at a fraction of their cost, and can be used with any existing gamma-camera with just an adaptation of the reconstruction software.

  14. Validation of spectral sky radiance derived from all-sky camera images - a case study

    NASA Astrophysics Data System (ADS)

    Tohsing, K.; Schrempf, M.; Riechelmann, S.; Seckmeyer, G.

    2014-07-01

    Spectral sky radiance (380-760 nm) is derived from measurements with a hemispherical sky imager (HSI) system. The HSI consists of a commercial compact CCD (charge coupled device) camera equipped with a fish-eye lens and provides hemispherical sky images in three reference bands, namely red, green, and blue. To obtain the spectral sky radiance from these images, non-linear regression functions for various sky conditions have been derived. The camera-based spectral sky radiance was validated using spectral sky radiance measured with a CCD spectroradiometer. The spectral sky radiance for the complete distribution over the hemisphere between both instruments deviates by less than 20% at 500 nm for all sky conditions and for zenith angles less than 80°. The reconstructed spectra of the wavelengths 380-760 nm between both instruments at various directions deviate by less than 20% for all sky conditions.
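The regression step can be sketched as fitting a per-wavelength function from a camera channel to reference radiance; the quadratic model and all numbers below are illustrative assumptions, not the paper's derived functions:

```python
import numpy as np

# Fit a regression function mapping a camera channel signal to spectral
# radiance at one wavelength, with a spectroradiometer as reference.
# The quadratic model and the data are fabricated for illustration; the
# paper derives its own non-linear functions per sky condition.

camera_signal = np.array([10.0, 40.0, 90.0, 160.0, 250.0])
reference_radiance = 0.002 * camera_signal ** 2 + 0.1 * camera_signal

coeffs = np.polyfit(camera_signal, reference_radiance, deg=2)
predicted = np.polyval(coeffs, camera_signal)
```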

  15. Validation of spectral sky radiance derived from all-sky camera images - a case study

    NASA Astrophysics Data System (ADS)

    Tohsing, K.; Schrempf, M.; Riechelmann, S.; Seckmeyer, G.

    2014-01-01

    Spectral sky radiance (380-760 nm) is derived from measurements with a Hemispherical Sky Imager (HSI) system. The HSI consists of a commercial compact CCD (charge coupled device) camera equipped with a fish-eye lens and provides hemispherical sky images in three reference bands, namely red, green, and blue. To obtain the spectral sky radiance from these images, non-linear regression functions for various sky conditions have been derived. The camera-based spectral sky radiance was validated by spectral sky radiance measured with a CCD spectroradiometer. The spectral sky radiance for the complete distribution over the hemisphere between both instruments deviates by less than 20% at 500 nm for all sky conditions and for zenith angles less than 80°. The reconstructed spectra of the wavelengths 380-760 nm between both instruments at various directions deviate by less than 20% for all sky conditions.

  16. Measuring the image quality of digital-camera sensors by a ping-pong ball

    NASA Astrophysics Data System (ADS)

    Pozo, Antonio M.; Rubio, Manuel; Castro, José J.; Salas, Carlos; Pérez-Ocón, Francisco

    2014-07-01

    In this work, we present a low-cost experimental setup to evaluate the image quality of digital-camera sensors, which can be implemented in undergraduate and postgraduate teaching. The method consists of evaluating the modulation transfer function (MTF) of digital-camera sensors by speckle patterns using a ping-pong ball as a diffuser, with two handmade circular apertures acting as input and output ports, respectively. To specify the spatial-frequency content of the speckle pattern, it is necessary to use an aperture; for this, we made a slit in a piece of black cardboard. First, the MTF of a digital-camera sensor was calculated using the ping-pong ball and the handmade slit, and then the MTF was calculated using an integrating sphere and a high-quality steel slit. Finally, the results achieved with both experimental setups were compared, showing a similar MTF in both cases.
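For context, the ideal MTF contribution of the sensor's pixel aperture alone is a sinc function; the sketch below shows only this textbook term for a hypothetical 5 µm pixel, not the speckle-based measurement itself:

```python
import numpy as np

# Ideal MTF of the pixel aperture alone: |sinc(f * a)| for pixel pitch a.
# A hypothetical 5 um pixel is assumed; the speckle-based method in the
# text instead compares measured spectra against a known input spectrum.

pixel_mm = 5e-3                                  # assumed 5 um pixel, in mm
f_cyc_per_mm = np.linspace(0.0, 200.0, 5)        # spatial frequency samples
mtf = np.abs(np.sinc(f_cyc_per_mm * pixel_mm))   # np.sinc(x) = sin(pi x)/(pi x)
```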

  17. Camera-Based Lock-in and Heterodyne Carrierographic Photoluminescence Imaging of Crystalline Silicon Wafers

    NASA Astrophysics Data System (ADS)

    Sun, Q. M.; Melnikov, A.; Mandelis, A.

    2015-06-01

    Carrierographic (spectrally gated photoluminescence) imaging of a crystalline silicon wafer using an InGaAs camera and two spread super-bandgap illumination laser beams is introduced in both low-frequency lock-in and high-frequency heterodyne modes. Lock-in carrierographic images of the wafer up to 400 Hz modulation frequency are presented. To overcome the frame rate and exposure time limitations of the camera, a heterodyne method is employed for high-frequency carrierographic imaging which results in high-resolution near-subsurface information. The feasibility of the method is guaranteed by the typical superlinearity behavior of photoluminescence, which allows one to construct a slow enough beat frequency component from nonlinear mixing of two high frequencies. Intensity-scan measurements were carried out with a conventional single-element InGaAs detector photocarrier radiometry system, and the nonlinearity exponent of the wafer was found to be around 1.7. Heterodyne images of the wafer up to 4 kHz have been obtained and qualitatively analyzed. With the help of the complementary lock-in and heterodyne modes, camera-based carrierographic imaging in a wide frequency range has been realized for fundamental research and industrial applications toward in-line nondestructive testing of semiconductor materials and devices.
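The superlinearity-based beat construction can be demonstrated numerically: raising the sum of two high-frequency excitations to a superlinear power (the exponent 1.7 measured for the wafer) produces a difference-frequency component slow enough for a camera to follow. All frequencies below are illustrative:

```python
import numpy as np

# Nonlinear mixing demo: a superlinear response (exponent 1.7, as
# measured for the wafer) applied to two summed high-frequency
# excitations creates a slow beat at the difference frequency.
# Frequencies and sample rate are illustrative.

fs = 100_000.0                           # sample rate, Hz
t = np.arange(0.0, 1.0, 1 / fs)
f1, f2 = 4000.0, 4010.0                  # pump frequencies -> 10 Hz beat
excitation = 2 + np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
response = excitation ** 1.7             # superlinear photoluminescence

spectrum = np.abs(np.fft.rfft(response)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
beat_bin = int(np.argmin(np.abs(freqs - (f2 - f1))))
```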

  18. Fast image super-resolution for a dual-resolution camera

    NASA Astrophysics Data System (ADS)

    Chen, Kuo; Chen, Yueting; Feng, Huajun; Xu, Zhihai

    2015-06-01

    High spatial resolution and a wide field of view (FOV) can be satisfied simultaneously with a dual-sensor camera. A special kind of dual-sensor camera, called a dual-resolution camera, has been designed and manufactured; a high-resolution image with a narrow FOV and a low-resolution image with a wide FOV are therefore captured in one shot. To generate a high-resolution image with a wide FOV, a fast super-resolution reconstruction is proposed, which is composed of wavelet-based super-resolution and back projection. During wavelet-based super-resolution, the captured high-resolution image is used to learn the co-occurrence prior by a linear regression function. Finally, the low-resolution image is reconstructed based on the learnt co-occurrence prior. Simulation and real experiments are carried out, and three other common super-resolution algorithms are compared. The experimental results show that the proposed method reduces time cost significantly and achieves excellent performance with high PSNR and SSIM.

  19. Desktop 2D/3D image capturing system using a digital camera

    NASA Astrophysics Data System (ADS)

    Kitaguchi, Takashi; Kitazawa, Tomofumi; Sato, Yasuhiro; Aoki, Shin; Hasegawa, Takefumi

    2002-03-01

    As electronic documents become more and more popular, a variety of objects, including 2D and 3D objects such as articles, books, and 3D shapes, can easily be included in a document. Conventional systems cannot capture these objects in 3D form. We developed a new image capturing system with 3D information for deskwork. It is an assemblage of a normal digital camera and a docking station designed for easy operation. The docking station swings the attached camera, tilting it step by step; within a few steps, it covers the whole object automatically. Each frame is then combined into one complete image at full resolution. To obtain a precise 3D structure, stripe patterns are projected by a modified flash light attached to the camera. The swinging motion shifts the patterns across the object, which raises the effective resolution. Using the obtained 3D data, we can reconstruct a correct image from a splayed surface, such as an open book. Experimental results of image mosaicking and 3D reconstruction show that the system is practical.

  20. Adaptive optics flood-illumination camera for high speed retinal imaging

    NASA Astrophysics Data System (ADS)

    Rha, Jungtae; Jonnal, Ravi S.; Thorn, Karen E.; Qu, Junle; Zhang, Yan; Miller, Donald T.

    2006-05-01

    Current adaptive optics flood-illumination retina cameras operate at low frame rates, acquiring retinal images below seven Hz, which restricts their research and clinical utility. Here we investigate a novel bench-top flood-illumination camera that achieves significantly higher frame rates using strobing fiber-coupled superluminescent and laser diodes in conjunction with a scientific-grade CCD. Source strength was sufficient to obviate frame averaging, even for exposures as short as 1/3 msec. Continuous frame rates of 10, 30, and 60 Hz were achieved for imaging 1.8, 0.8, and 0.4 deg retinal patches, respectively. Short-burst imaging up to 500 Hz was also achieved by temporarily storing sequences of images on the CCD. High frame rates, short exposure durations (1 msec), and correction of the most significant aberrations of the eye were found necessary for individuating retinal blood cells and directly measuring cellular flow in capillaries. Cone videos of dark adapted eyes showed a surprisingly rapid fluctuation (~1 Hz) in the reflectance of single cones. As further demonstration of the value of the camera, we evaluated the tradeoff between exposure duration and image blur associated with retina motion.

  1. Stereo Imaging Velocimetry Technique Using Standard Off-the-Shelf CCD Cameras

    NASA Technical Reports Server (NTRS)

    McDowell, Mark; Gray, Elizabeth

    2004-01-01

    Stereo imaging velocimetry is a fluid physics technique for measuring three-dimensional (3D) velocities at a plurality of points. This technique provides full-field 3D analysis of any optically clear fluid or gas experiment seeded with tracer particles. Unlike current 3D particle imaging velocimetry systems that rely primarily on laser-based systems, stereo imaging velocimetry uses standard off-the-shelf charge-coupled device (CCD) cameras to provide accurate and reproducible 3D velocity profiles for experiments that require 3D analysis. Using two cameras aligned orthogonally, we present a closed mathematical solution resulting in an accurate 3D approximation of the observation volume. The stereo imaging velocimetry technique is divided into four phases: 3D camera calibration, particle overlap decomposition, particle tracking, and stereo matching. Each phase is explained in detail. In addition to being utilized for space shuttle experiments, stereo imaging velocimetry has been applied to the fields of fluid physics, bioscience, and colloidal microscopy.
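The orthogonal two-camera geometry can be sketched directly: one camera reports (x, y), the other (z, y), so a 3D position and, across frames, a velocity follow immediately. Calibration and particle-overlap decomposition are omitted, and all numbers are illustrative:

```python
import numpy as np

# Orthogonal-view triangulation sketch: camera 1 images the x-y plane,
# camera 2 the z-y plane, so a particle's 3D position follows directly;
# velocity comes from positions in consecutive frames.

def particle_3d(cam1_xy, cam2_zy):
    (x, y1), (z, y2) = cam1_xy, cam2_zy
    return np.array([x, (y1 + y2) / 2, z])   # cameras share the y axis

p_t0 = particle_3d((1.0, 2.0), (3.0, 2.0))
p_t1 = particle_3d((1.5, 2.5), (3.2, 2.5))
dt = 0.01                                    # 10 ms between frames
velocity = (p_t1 - p_t0) / dt
```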

  2. Synthesizing wide-angle and arbitrary view-point images from a circular camera array

    NASA Astrophysics Data System (ADS)

    Fukushima, Norishige; Yendo, Tomohiro; Fujii, Toshiaki; Tanimoto, Masayuki

    2006-02-01

    We propose an Image-Based Rendering (IBR) technique using a circular camera array. By recording the scene from cameras surrounding it, we can synthesize more dynamic arbitrary-viewpoint images as well as wide-angle, panorama-like images. The method is based on Ray-Space, an image-based rendering representation related to the Light Field, in which each ray is described by the position (x, y) and direction (θ, φ) at which it crosses a reference plane. When the cameras are arranged on a circle, the trajectory of a scene point, which forms a straight line in the Epipolar Plane Image (EPI) of a linear arrangement, instead traces a sinusoidal curve. Although this form is easy to describe, it complicates rendering: determining which pixel of which camera contributes to a given output ray becomes involved. We therefore re-parameterize the space, following the Light Field's camera position (s, t) and pixel position (u, v), by expressing the camera position in polar coordinates (r, θ), bringing the representation close to the Ray-Space description. The point trajectories then become periodic functions with period 2π, but rendering becomes straightforward: as with the linear arrangement, arbitrary-viewpoint images are synthesized purely from the geometric relationships between cameras. Moreover, exploiting the property that rays from all directions converge on each point of the circle, we propose a technique for generating wide-angle, panorama-like images; this is possible because rays in all directions through the same position are recorded redundantly. The discussion so far assumes a densely populated camera array satisfying the plenoptic sampling condition; we now turn to the discrete case, where that condition is not met. When synthesizing images from a linear camera arrangement with insufficient sampling, a focus-like effect appears despite the assumption of a pinhole camera model; this effect, peculiar to Light Field rendering, is called the synthetic aperture. We have previously addressed this phenomenon by synthesizing all-in-focus images with a processing step called an "adaptive filter", which builds a fully viewpoint-dependent disparity map centered on the viewpoint to be rendered. The same phenomenon occurs with a circular arrangement, so we extend the adaptive filter to circular camera arrays and synthesize all-in-focus images there as well. Although the circular arrangement raises complications, such as epipolar lines that are no longer parallel, we show that the extension can be carried out from geometric information alone. With this approach, we succeeded in synthesizing wide-angle and arbitrary-viewpoint images from both discrete and fully sampled camera arrays.
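    The polar re-parameterization can be illustrated with a nearest-camera ray lookup: extend the requested virtual ray until it hits the camera circle, convert the hit point to an angle θ, and snap to the closest camera. This is a simplification of the paper's Ray-Space interpolation (which blends neighboring cameras rather than picking one); all names and the ideal pinhole geometry are our assumptions.

```python
import numpy as np

def ray_circle_intersection(origin, direction, radius):
    """Intersect the ray origin + t*direction (t > 0) with a circle of the
    given radius centered on the array center; returns the far hit point."""
    d = direction / np.linalg.norm(direction)
    b = np.dot(origin, d)
    c = np.dot(origin, origin) - radius**2
    disc = b * b - c
    if disc < 0:
        raise ValueError("ray misses the camera circle")
    t = -b + np.sqrt(disc)          # far intersection, leaving the circle
    return origin + t * d

def nearest_camera(origin, direction, radius, n_cameras):
    """Pick the camera on the circle that best records the requested ray:
    extend the ray to the circle and snap to the closest camera angle."""
    hit = ray_circle_intersection(origin, direction, radius)
    theta = np.arctan2(hit[1], hit[0]) % (2 * np.pi)
    step = 2 * np.pi / n_cameras
    return int(round(theta / step)) % n_cameras

# Virtual viewpoint at the array center, looking along +x, 64 cameras.
print(nearest_camera(np.array([0.0, 0.0]), np.array([1.0, 0.0]), 1.0, 64))  # -> 0
```

The period-2π behavior of the point trajectories shows up here as the modulo operations on θ and on the camera index.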

  3. Note: In vivo pH imaging system using luminescent indicator and color camera

    NASA Astrophysics Data System (ADS)

    Sakaue, Hirotaka; Dan, Risako; Shimizu, Megumi; Kazama, Haruko

    2012-07-01

    A microscopic in vivo pH imaging system is developed that captures both luminescent and color images. The former gives a quantitative measurement of the pH distribution in vivo; the latter captures structural information that can be overlaid on the pH distribution to correlate the structure of a specimen with its pH distribution. By using a digital color camera, a luminescent image as well as a color image is obtained. The system uses HPTS (8-hydroxypyrene-1,3,6-trisulfonate) as a luminescent pH indicator for the luminescent imaging. Filter units mounted in the microscope extract two luminescent images for use with the excitation-ratio method. The ratio of the two images is converted to a pH distribution through an a priori pH calibration. An application of the system to epidermal cells of Lactuca sativa L. is shown.
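    The excitation-ratio step reduces to a pixel-wise division followed by a calibration lookup. A minimal sketch follows; the calibration numbers are purely illustrative placeholders, not values from the paper, and a real pipeline would add dark-frame subtraction and flat-field correction first.

```python
import numpy as np

# Hypothetical calibration: excitation ratios measured in buffers of known
# pH (illustrative values only).
cal_ratio = np.array([0.2, 0.5, 1.0, 1.8, 2.6])
cal_ph    = np.array([5.0, 6.0, 7.0, 8.0, 9.0])

def ph_map(img_ex1, img_ex2, eps=1e-6):
    """Excitation-ratio method: divide the two luminescent images pixel by
    pixel, then convert each ratio to pH through the a priori calibration
    curve (linear interpolation between calibration points)."""
    ratio = img_ex1 / (img_ex2 + eps)
    return np.interp(ratio, cal_ratio, cal_ph)

img1 = np.array([[0.5, 1.0], [1.8, 2.6]])
img2 = np.ones((2, 2))
print(ph_map(img1, img2))
```

Using a lookup against measured buffers, rather than a fitted analytic curve, mirrors the "a priori pH calibration" the abstract describes.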

  4. Environmental applications of camera images calibrated by means of the Levenberg-Marquardt method

    NASA Astrophysics Data System (ADS)

    Pérez Muñoz, J. C.; Ortiz Alarcón, C. A.; Osorio, A. F.; Mejía, C. E.; Medina, R.

    2013-02-01

    Although different authors have presented procedures for camera calibration in environmental video monitoring, improvements in the robustness and accuracy of the calibration procedure are always desired. In this work, the Levenberg-Marquardt method is incorporated into the camera calibration process for environmental video monitoring images to improve the robustness of the calibration when only a small number of control points is available, without resorting to laboratory measurements. The pinhole model and the Levenberg-Marquardt method are briefly described, and a four-step camera calibration procedure using them is presented. This procedure allows users to estimate all the pinhole model parameters, including the lens distortion parameters, from ground control points (GCPs); its implementation results with laboratory data are compared with the results presented by other authors. The procedure is also tested with field data obtained with cameras directed toward the beaches of the city of Cartagena, Colombia. The results show that the procedure is robust enough to be used when only a small number of control points is available, although a larger number of GCPs is recommended to obtain high accuracy.
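    The Levenberg-Marquardt refinement at the heart of such a calibration can be sketched with `scipy.optimize.least_squares(method='lm')`. This is a deliberately simplified pinhole model (no rotation/translation, a single radial distortion term, synthetic control points), not the paper's full four-step procedure; all parameter names are ours.

```python
import numpy as np
from scipy.optimize import least_squares

def project(params, pts3d):
    """Simplified pinhole projection: focal length f, principal point
    (cx, cy), and one radial distortion coefficient k1. Points are assumed
    to already be expressed in the camera frame."""
    f, cx, cy, k1 = params
    x = pts3d[:, 0] / pts3d[:, 2]
    y = pts3d[:, 1] / pts3d[:, 2]
    r2 = x * x + y * y
    d = 1.0 + k1 * r2
    return np.column_stack((f * x * d + cx, f * y * d + cy))

def residuals(params, pts3d, pts2d):
    # Reprojection error, flattened for the least-squares solver.
    return (project(params, pts3d) - pts2d).ravel()

rng = np.random.default_rng(0)
true = np.array([800.0, 320.0, 240.0, -0.05])
pts3d = np.column_stack((rng.uniform(-1, 1, 20),
                         rng.uniform(-1, 1, 20),
                         rng.uniform(4, 8, 20)))
pts2d = project(true, pts3d)  # synthetic ground control points

# Levenberg-Marquardt refinement from a rough initial guess.
fit = least_squares(residuals, x0=[500.0, 300.0, 200.0, 0.0],
                    args=(pts3d, pts2d), method='lm')
print(np.round(fit.x, 2))  # should be close to `true`
```

With noiseless synthetic data the solver recovers the generating parameters; with real GCPs the same structure applies, only with a richer projection model and measured image points.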

  5. Automatic Generation of Passer-by Record Images using Internet Camera

    NASA Astrophysics Data System (ADS)

    Terada, Kenji; Atsuta, Koji

    Recently, many brutal crimes have shocked us, while the proportion of solved crimes has declined. The importance of security and self-defense has therefore increased more and more. As an example of self-defense, many surveillance cameras are set up in buildings, homes, and offices. But even when we want to detect a suspicious person, we cannot check the surveillance videos immediately, because a huge number of image sequences is stored in each video system. In this paper, we propose an automatic method of generating passer-by record images using an internet camera. In the first step, passers-by are recognized in an image sequence obtained from the internet camera. Our method classifies the subject region into individual persons by using a space-time image, from which we also obtain the time, direction, and number of passers-by. Next, the method detects five characteristics: the center of gravity, the position of the person's head, the brightness, the size, and the shape of the person. Finally, an image of each person is selected from the image sequence by integrating the five characteristics and is added to the passer-by record image. Some experimental results using a simple experimental system are also reported, which indicate the effectiveness of the proposed method. In most scenes, every person was detected by the proposed method and the passer-by record image was generated.
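    A space-time image of the kind described is built by stacking one scanline per frame, so each passer-by leaves a connected streak that can be counted. The toy sketch below (synthetic frames, our own thresholding choice) shows the idea; the paper's actual classification of the subject region is more elaborate.

```python
import numpy as np
from scipy.ndimage import label

def count_passersby(frames, row, threshold):
    """Build a space-time image by stacking one scanline per frame; each
    person crossing the line leaves a connected streak in (time, x), so
    counting connected components counts the passers-by."""
    st = np.stack([f[row] for f in frames])     # shape: (time, width)
    mask = st > threshold
    _, n = label(mask)
    return n

# Synthetic sequence: two bright blobs cross a 40-pixel line at different
# times and positions.
frames = [np.zeros((10, 40)) for _ in range(30)]
for t in range(5, 12):
    frames[t][5, 8:12] = 1.0      # first passer-by
for t in range(18, 26):
    frames[t][5, 25:30] = 1.0     # second passer-by
print(count_passersby(frames, row=5, threshold=0.5))  # -> 2
```

The slope of each streak in the space-time image also encodes walking direction and speed, which is how the method recovers direction information from the same structure.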

  6. Hyperspectral imaging using a color camera and its application for pathogen detection

    NASA Astrophysics Data System (ADS)

    Yoon, Seung-Chul; Shin, Tae-Sung; Heitschmidt, Gerald W.; Lawrence, Kurt C.; Park, Bosoon; Gamble, Gary

    2015-02-01

    This paper reports the results of a feasibility study for the development of a hyperspectral image recovery (reconstruction) technique using an RGB color camera and regression analysis in order to detect and classify colonies of foodborne pathogens. The target bacterial pathogens were the six representative non-O157 Shiga-toxin producing Escherichia coli (STEC) serogroups (O26, O45, O103, O111, O121, and O145) grown in Petri dishes of Rainbow agar. The purpose of the feasibility study was to evaluate whether a DSLR camera (Nikon D700) could be used to predict hyperspectral images in the wavelength range from 400 to 1,000 nm and even to predict the types of pathogens using a hyperspectral STEC classification algorithm that was previously developed. Unlike many other studies using color charts with known and noise-free spectra for training reconstruction models, this work used hyperspectral and color images, separately measured by a hyperspectral imaging spectrometer and the DSLR color camera. The color images were calibrated (i.e. normalized) to relative reflectance, subsampled and spatially registered to match with counterpart pixels in hyperspectral images that were also calibrated to relative reflectance. Polynomial multivariate least-squares regression (PMLR) was previously developed with simulated color images. In this study, partial least squares regression (PLSR) was also evaluated as a spectral recovery technique to minimize multicollinearity and overfitting. The two spectral recovery models (PMLR and PLSR) and their parameters were evaluated by cross-validation. The QR decomposition was used to find a numerically more stable solution of the regression equation. The preliminary results showed that PLSR was more effective than PMLR, especially at higher polynomial orders. The best classification accuracy measured with an independent test set was about 90%. 
The results suggest the potential of cost-effective color imaging using hyperspectral image classification algorithms for rapidly differentiating pathogens in agar plates.

  7. Camera selection for real-time in vivo radiation treatment verification systems using Cherenkov imaging

    PubMed Central

    Andreozzi, Jacqueline M.; Zhang, Rongxiao; Glaser, Adam K.; Jarvis, Lesley A.; Pogue, Brian W.; Gladstone, David J.

    2015-01-01

    Purpose: To identify achievable camera performance and hardware needs in a clinical Cherenkov imaging system for real-time, in vivo monitoring of the surface beam profile on patients, as novel visual information, documentation, and possible treatment verification for clinicians. Methods: Complementary metal-oxide-semiconductor (CMOS), charge-coupled device (CCD), intensified charge-coupled device (ICCD), and electron multiplying-intensified charge coupled device (EM-ICCD) cameras were investigated to determine Cherenkov imaging performance in a clinical radiotherapy setting, with one emphasis on the maximum supportable frame rate. Where possible, the image intensifier was synchronized using a pulse signal from the Linac in order to image with room lighting conditions comparable to patient treatment scenarios. A solid water phantom irradiated with a 6 MV photon beam was imaged by the cameras to evaluate the maximum frame rate for adequate Cherenkov detection. Adequate detection was defined as an average electron count in the background-subtracted Cherenkov image region of interest in excess of 0.5% (327 counts) of the 16-bit maximum electron count value. Additionally, an ICCD and an EM-ICCD were each used clinically to image two patients undergoing whole-breast radiotherapy to compare clinical advantages and limitations of each system. Results: Intensifier-coupled cameras were required for imaging Cherenkov emission on the phantom surface with ambient room lighting; standalone CMOS and CCD cameras were not viable. The EM-ICCD was able to collect images from a single Linac pulse delivering less than 0.05 cGy of dose at 30 frames/s (fps) and pixel resolution of 512 × 512, compared to an ICCD which was limited to 4.7 fps at 1024 × 1024 resolution. An intensifier with higher quantum efficiency at the entrance photocathode in the red wavelengths [30% quantum efficiency (QE) vs previous 19%] promises at least 8.6 fps at a resolution of 1024 × 1024 and lower monetary cost than the EM-ICCD. Conclusions: The ICCD with an intensifier better optimized for red wavelengths was found to provide the best potential for real-time display (at least 8.6 fps) of radiation dose on the skin during treatment at a resolution of 1024 × 1024. PMID:25652512

  8. Camera selection for real-time in vivo radiation treatment verification systems using Cherenkov imaging

    SciTech Connect

    Andreozzi, Jacqueline M.; Glaser, Adam K.; Zhang, Rongxiao; Jarvis, Lesley A.; Gladstone, David J.; Pogue, Brian W.

    2015-02-15

    Purpose: To identify achievable camera performance and hardware needs in a clinical Cherenkov imaging system for real-time, in vivo monitoring of the surface beam profile on patients, as novel visual information, documentation, and possible treatment verification for clinicians. Methods: Complementary metal-oxide-semiconductor (CMOS), charge-coupled device (CCD), intensified charge-coupled device (ICCD), and electron multiplying-intensified charge coupled device (EM-ICCD) cameras were investigated to determine Cherenkov imaging performance in a clinical radiotherapy setting, with one emphasis on the maximum supportable frame rate. Where possible, the image intensifier was synchronized using a pulse signal from the Linac in order to image with room lighting conditions comparable to patient treatment scenarios. A solid water phantom irradiated with a 6 MV photon beam was imaged by the cameras to evaluate the maximum frame rate for adequate Cherenkov detection. Adequate detection was defined as an average electron count in the background-subtracted Cherenkov image region of interest in excess of 0.5% (327 counts) of the 16-bit maximum electron count value. Additionally, an ICCD and an EM-ICCD were each used clinically to image two patients undergoing whole-breast radiotherapy to compare clinical advantages and limitations of each system. Results: Intensifier-coupled cameras were required for imaging Cherenkov emission on the phantom surface with ambient room lighting; standalone CMOS and CCD cameras were not viable. The EM-ICCD was able to collect images from a single Linac pulse delivering less than 0.05 cGy of dose at 30 frames/s (fps) and pixel resolution of 512 × 512, compared to an ICCD which was limited to 4.7 fps at 1024 × 1024 resolution. 
An intensifier with higher quantum efficiency at the entrance photocathode in the red wavelengths [30% quantum efficiency (QE) vs previous 19%] promises at least 8.6 fps at a resolution of 1024 × 1024 and lower monetary cost than the EM-ICCD. Conclusions: The ICCD with an intensifier better optimized for red wavelengths was found to provide the best potential for real-time display (at least 8.6 fps) of radiation dose on the skin during treatment at a resolution of 1024 × 1024.

  9. Image analysis of oronasal fistulas in cleft palate patients acquired with an intraoral camera.

    PubMed

    Murphy, Tania C; Willmot, Derrick R

    2005-01-01

    The aim of this study was to examine the clinical technique of using an intraoral camera to monitor the size of residual oronasal fistulas in cleft lip-cleft palate patients, to assess its repeatability on study casts and patients, and to compare its use with other methods. Seventeen plaster study casts of cleft palate patients with oronasal fistulas obtained from a 5-year series of 160 patients were used. For the clinical study, 13 patients presenting in a clinic prospectively over a 1-year period were imaged twice by the camera. The area of each fistula on each study cast was measured in the laboratory first using a previously described graph paper and caliper technique and second with the intraoral camera. Images were imported into a computer and subjected to image enhancement and area measurement. The camera was calibrated by imaging a standard periodontal probe within the fistula area. The measurements were repeated using a double-blind technique on randomly renumbered casts to assess the repeatability of measurement of the methods. The clinical images were randomly and blindly numbered and subjected to image enhancement and processing in the same way as for the study casts. Area measurements were computed. Statistical analysis of repeatability of measurement using a paired sample t test showed no significant difference between measurements, indicating a lack of systematic error. An intraclass correlation coefficient of 0.97 for the graph paper and 0.84 for the camera method showed acceptable random error between the repeated records for each of the two methods. The graph paper method remained slightly more repeatable. The mean fistula area of the study casts between each method was not statistically different when compared with a paired samples t test (p = 0.08). The methods were compared using the limits of agreement technique, which showed clinically acceptable repeatability. 
The clinical study of repeated measures showed no systematic differences when subjected to a t test (p = 0.109) and little random error, with an intraclass correlation coefficient of 0.98. The fistula area seen in the clinical study ranged from 18.54 to 271.55 mm². Direct measurements subsequently taken on 13 patients in the clinic without study models showed a wide variation in the size of residual fistulas presenting in a multidisciplinary clinic. It was concluded that an intraoral camera method could be used in place of the previous graph paper method and could be developed for clinical and scientific purposes. This technique may offer advantages over the graph paper method, as it facilitates easy visualization of oronasal fistulas and objective fistula size determination and permits easy storage of data in clinical records. PMID:15622228

  10. Real-time analysis of laser beams by simultaneous imaging on a single camera chip

    NASA Astrophysics Data System (ADS)

    Piehler, S.; Boley, M.; Abdou Ahmed, M.; Graf, T.

    2015-03-01

    The fundamental parameters of a laser beam, such as the exact position and size of the focus or the beam quality factor M², yield vital information both for laser developers and end-users. However, each of these parameters can significantly change on a short time scale due to thermally induced effects in the processing optics or in the laser source itself, leading to process instabilities and non-reproducible results. In order to monitor the transient behavior of these effects, we have developed a camera-based measurement system, which enables full laser beam characterization online. A novel monolithic beam splitter has been designed which generates a 2D array of images on a single camera chip, each of which corresponds to an intensity cross section of the beam along the propagation axis separated by a well-defined spacing. Thus, using the full area of the camera chip, a large number of measurement planes is achieved, leading to a measurement range sufficient for a full beam characterization conforming to ISO 11146 for a broad range of beam parameters of the incoming beam. The exact beam diameters in each plane are derived by calculation of the 2nd order intensity moments of the individual intensity slices. The processing time needed to carry out both the background filtering and the image processing operations for the full analysis of a single camera image is in the range of a few milliseconds. Hence, the measurement frequency of our system is mainly limited by the frame-rate of the camera.
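    The second-order intensity-moment (D4σ) beam width mentioned above is straightforward to compute from one intensity slice. A minimal sketch follows; a full ISO 11146 evaluation additionally prescribes background subtraction and an iterative integration-area rule, omitted here.

```python
import numpy as np

def d4sigma(img, pitch=1.0):
    """Second-moment (D4sigma) beam widths from one intensity slice:
    four times the standard deviation of the marginal distributions,
    scaled by the pixel pitch."""
    img = np.asarray(img, dtype=float)
    y, x = np.indices(img.shape)
    total = img.sum()
    cx = (x * img).sum() / total          # intensity centroid
    cy = (y * img).sum() / total
    sx2 = ((x - cx) ** 2 * img).sum() / total   # second central moments
    sy2 = ((y - cy) ** 2 * img).sum() / total
    return 4 * pitch * np.sqrt(sx2), 4 * pitch * np.sqrt(sy2)

# Gaussian test beam: the D4sigma width of a Gaussian equals 4 sigma.
yy, xx = np.indices((201, 201))
beam = np.exp(-(((xx - 100) ** 2 + (yy - 100) ** 2) / (2 * 12.0 ** 2)))
print(d4sigma(beam))  # close to (48.0, 48.0)
```

Fitting the widths from several such slices along the propagation axis to a hyperbola is what then yields the focus position and M².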

  11. Optimal Design of Anger Camera for Bremsstrahlung Imaging: Monte Carlo Evaluation

    PubMed Central

    Walrand, Stephan; Hesse, Michel; Wojcik, Randy; Lhommel, Renaud; Jamar, François

    2014-01-01

    A conventional Anger camera is not adapted to bremsstrahlung imaging and, as a result, even using a reduced energy acquisition window, geometric x-rays represent <15% of the recorded events. This increases noise, limits the contrast, and reduces the quantification accuracy. Monte Carlo (MC) simulations of energy spectra showed that a camera based on a 30-mm-thick BGO crystal and equipped with a high energy pinhole collimator is well-adapted to bremsstrahlung imaging. The total scatter contamination is reduced by a factor 10 versus a conventional NaI camera equipped with a high energy parallel hole collimator enabling acquisition using an extended energy window ranging from 50 to 350 keV. By using the recorded event energy in the reconstruction method, shorter acquisition time and reduced orbit range will be usable allowing the design of a simplified mobile gantry. This is more convenient for use in a busy catheterization room. After injecting a safe activity, a fast single photon emission computed tomography could be performed without moving the catheter tip in order to assess the liver dosimetry and estimate the additional safe activity that could still be injected. Further long running time MC simulations of realistic acquisitions will allow assessing the quantification capability of such system. Simultaneously, a dedicated bremsstrahlung prototype camera reusing PMT-BGO blocks coming from a retired PET system is currently under design for further evaluation. PMID:24982849

  12. Portable, stand-off spectral imaging camera for detection of effluents and residues

    NASA Astrophysics Data System (ADS)

    Goldstein, Neil; St. Peter, Benjamin; Grot, Jonathan; Kogan, Michael; Fox, Marsha; Vujkovic-Cvijin, Pajo; Penny, Ryan; Cline, Jason

    2015-06-01

    A new, compact and portable spectral imaging camera, employing a MEMS-based encoded imaging approach, has been built and demonstrated for detection of hazardous contaminants including gaseous effluents and solid-liquid residues on surfaces. The camera is called the Thermal infrared Reconfigurable Analysis Camera for Effluents and Residues (TRACER). TRACER operates in the long wave infrared and has the potential to detect a wide variety of materials with characteristic spectral signatures in that region. The 30 lb. camera is tripod mounted and battery powered. A touch screen control panel provides a simple user interface for most operations. The MEMS spatial light modulator is a Texas Instruments Digital Micromirror Device with custom electronics and firmware control. Simultaneous 1D-spatial and 1D-spectral dimensions are collected, with the second spatial dimension obtained by scanning the internal spectrometer slit. The sensor can be configured to collect data in several modes including full hyperspectral imagery using Hadamard multiplexing, panchromatic thermal imagery, and chemical-specific contrast imagery, switched with simple user commands. Matched filters and other analog filters can be generated internally on-the-fly and applied in hardware, substantially reducing detection time and improving SNR over HSI software processing, while reducing storage requirements. Results of preliminary instrument evaluation and measurements of flame exhaust are presented.
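    Hadamard multiplexing of the kind TRACER uses can be illustrated with an S-matrix: each DMD mask opens about half of the spectral channels at once, and the individual channels are recovered by inverting the known mask matrix. This is a generic sketch of the technique, not TRACER's actual encoding; the toy spectrum is made up.

```python
import numpy as np
from scipy.linalg import hadamard

# Build an S-matrix (0/1 open-closed mask patterns) from a normalized
# Hadamard matrix: drop the first row/column, map +1 -> 0 and -1 -> 1.
n = 16
H = hadamard(n)
S = (1 - H[1:, 1:]) // 2          # order n-1 S-matrix of 0s and 1s

spectrum = np.linspace(1.0, 0.2, n - 1)   # toy spectral channel intensities

# Multiplexed measurement: each reading integrates about half the channels,
# which is the source of the multiplex (Fellgett) SNR advantage.
measurements = S @ spectrum

# Decoding recovers the individual channels from the encoded readings.
decoded = np.linalg.solve(S, measurements)
print(np.allclose(decoded, spectrum))  # -> True
```

In hardware the decoding (or a matched filter built from it) can be applied on the fly, which is why the instrument can trade storage and processing time against SNR as the abstract describes.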

  13. Optimal design of anger camera for bremsstrahlung imaging: monte carlo evaluation.

    PubMed

    Walrand, Stephan; Hesse, Michel; Wojcik, Randy; Lhommel, Renaud; Jamar, François

    2014-01-01

    A conventional Anger camera is not adapted to bremsstrahlung imaging and, as a result, even using a reduced energy acquisition window, geometric x-rays represent <15% of the recorded events. This increases noise, limits the contrast, and reduces the quantification accuracy. Monte Carlo (MC) simulations of energy spectra showed that a camera based on a 30-mm-thick BGO crystal and equipped with a high energy pinhole collimator is well-adapted to bremsstrahlung imaging. The total scatter contamination is reduced by a factor 10 versus a conventional NaI camera equipped with a high energy parallel hole collimator enabling acquisition using an extended energy window ranging from 50 to 350 keV. By using the recorded event energy in the reconstruction method, shorter acquisition time and reduced orbit range will be usable allowing the design of a simplified mobile gantry. This is more convenient for use in a busy catheterization room. After injecting a safe activity, a fast single photon emission computed tomography could be performed without moving the catheter tip in order to assess the liver dosimetry and estimate the additional safe activity that could still be injected. Further long running time MC simulations of realistic acquisitions will allow assessing the quantification capability of such system. Simultaneously, a dedicated bremsstrahlung prototype camera reusing PMT-BGO blocks coming from a retired PET system is currently under design for further evaluation. PMID:24982849

  14. New design of a gamma camera detector with reduced edge effect for breast imaging

    NASA Astrophysics Data System (ADS)

    Hwang, Ji Yeon; Lee, Seung-Jae; Baek, Cheol-Ha; Kim, Kwang Hyun; Chung, Yong Hyun

    2011-05-01

    In recent years, there has been a growing interest in developing small gamma cameras dedicated to breast imaging. We designed a new detector with trapezoidal shape to expand the field of view (FOV) of camera without increasing its dimensions. To find optimal parameters, images of point sources at the edge area as functions of the angle and optical treatment of crystal side surface were simulated by using DETECT2000. Our detector employs monolithic CsI(Tl) with dimensions of 48.0 × 48.0 × 6.0 mm coupled to an array of photo-sensors. Side surfaces of crystal were treated with three different surface finishes: black absorber, metal reflector and white reflector. The trapezoidal angle varied from 45° to 90° in steps of 15°. Gamma events were generated on 15 evenly spaced points with 1.0 mm spacing in the X-axis starting 1.0 mm away from the side surface. Ten thousand gamma events were simulated at each location and images were formed by calculating the Anger-logic. The results demonstrated that all the 15 points could be identified only for the crystal with trapezoidal shape having a 45° angle and white reflector on the side surface. In conclusion, our new detector proved to be a reliable design to expand the FOV of small gamma camera for breast imaging.
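    The Anger-logic position estimate mentioned above is simply a signal-weighted centroid of the photosensor outputs. A minimal sketch on a toy 3 × 3 photosensor array follows (illustrative numbers, not the paper's DETECT2000 model):

```python
import numpy as np

def anger_position(signals, sensor_x, sensor_y):
    """Classic Anger logic: the event position is the signal-weighted
    centroid of the photosensor outputs."""
    w = signals / signals.sum()
    return float(w @ sensor_x), float(w @ sensor_y)

# Toy 3x3 photosensor array on a 1-unit pitch; the scintillation light
# pulse is centered on the middle sensor, so the estimate should be (1, 1).
sx, sy = np.meshgrid(np.arange(3.0), np.arange(3.0))
signals = np.array([[1.0, 2.0, 1.0],
                    [2.0, 4.0, 2.0],
                    [1.0, 2.0, 1.0]])
print(anger_position(signals.ravel(), sx.ravel(), sy.ravel()))  # -> (1.0, 1.0)
```

Near the crystal edge the light distribution is truncated, which biases this centroid toward the center; that edge compression is exactly the effect the trapezoidal crystal and surface finishes in the study are designed to mitigate.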

  15. Influence of electron dose rate on electron counting images recorded with the K2 camera.

    PubMed

    Li, Xueming; Zheng, Shawn Q; Egami, Kiyoshi; Agard, David A; Cheng, Yifan

    2013-11-01

    A recent technological breakthrough in electron cryomicroscopy (cryoEM) is the development of direct electron detection cameras for data acquisition. By bypassing the traditional phosphor scintillator and fiber optic coupling, these cameras have greatly enhanced sensitivity and detective quantum efficiency (DQE). Of the three currently available commercial cameras, the Gatan K2 Summit was designed specifically for counting individual electron events. Counting further enhances the DQE, allows for practical doubling of detector resolution and eliminates noise arising from the variable deposition of energy by each primary electron. While counting has many advantages, undercounting of electrons happens when more than one electron strikes the same area of the detector within the analog readout period (coincidence loss), which influences image quality. In this work, we characterized the K2 Summit in electron counting mode, and studied the relationship of dose rate and coincidence loss and its influence on the quality of counted images. We found that coincidence loss reduces low frequency amplitudes but has no significant influence on the signal-to-noise ratio of the recorded image. It also has little influence on high frequency signals. Images of frozen hydrated archaeal 20S proteasome (~700 kDa, D7 symmetry) recorded at the optimal dose rate retained both high-resolution signal and low-resolution contrast and enabled calculating a 3.6 Å three-dimensional reconstruction from only 10,000 particles. PMID:23968652
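    The dose-rate dependence of coincidence loss follows from Poisson statistics: a counting pixel registers at most one hit per readout frame. A small sketch of that relationship is below; the 400 internal frames/s readout period is our assumption for illustration, and the model ignores charge spreading across neighboring pixels.

```python
import numpy as np

def coincidence_loss(rate, frame_time=1 / 400):
    """Fraction of electrons lost to coincidence in a counting detector,
    assuming Poisson arrivals: a pixel can register at most one hit per
    readout frame, so the counted mean is 1 - exp(-lam) while the true
    mean is lam. `rate` is electrons per pixel per second."""
    lam = rate * frame_time                  # expected hits per pixel per frame
    counted = 1.0 - np.exp(-lam)
    return 1.0 - counted / lam

for r in (2.0, 8.0, 20.0):                   # electrons per pixel per second
    print(f"{r:5.1f} e-/pix/s -> {100 * coincidence_loss(r):.1f}% lost")
```

The loss grows roughly linearly (≈ λ/2) at low occupancy, which is why an optimal dose rate exists: low enough to keep coincidence loss small, high enough to keep total exposure time and drift manageable.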

  16. Heart imaging by cadmium telluride gamma camera: European Program ``BIOMED'' consortium

    NASA Astrophysics Data System (ADS)

    Scheiber, Ch.; Eclancher, B.; Chambron, J.; Prat, V.; Kazandjan, A.; Jahnke, A.; Matz, R.; Thomas, S.; Warren, S.; Hage-Ali, M.; Regal, R.; Siffert, P.; Karman, M.

    1999-06-01

    Cadmium telluride semiconductor detectors (CdTe) operating at room temperature are attractive for medical imaging because of their good energy resolution providing excellent spatial and contrast resolution. The compactness of the detection system allows the building of small light camera heads which can be used for bedside imaging. A mobile pixellated gamma camera based on 2304 CdTe (pixel size: 3×3 mm, field of view: 15 cm×15 cm) has been designed for cardiac imaging. A dedicated 16-channel integrated circuit has also been designed. The acquisition hardware is fully programmable (DSP card, personal computer-based system). Analytical calculations have shown that a commercial parallel hole collimator will fit the efficiency/resolution requirements for cardiac applications. Monte-Carlo simulations predict that the Moiré effect can be reduced by a 15° tilt of the collimator with respect to the detector grid. A 16×16 CdTe module has been built for the preliminary physical tests. The energy resolution was 6.16±0.6 keV (mean ± standard deviation, n=30). Uniformity was ±10%, improving to ±1% when using a correction table. Test objects (emission data: letters 1.8 mm in width) and cold rods in scatter medium have been acquired. The CdTe images have been compared to those acquired with a conventional gamma camera.

  17. Real-time depth controllable integral imaging pickup and reconstruction method with a light field camera.

    PubMed

    Jeong, Youngmo; Kim, Jonghyun; Yeom, Jiwoon; Lee, Chang-Kun; Lee, Byoungho

    2015-12-10

    In this paper, we develop a real-time depth controllable integral imaging system. With a high-frame-rate camera and a focus controllable lens, light fields from various depth ranges can be captured. According to the image plane of the light field camera, the objects in virtual and real space are recorded simultaneously. The captured light field information is converted to the elemental image in real time without pseudoscopic problems. In addition, we derive characteristics and limitations of the light field camera as a 3D broadcasting capturing device with precise geometry optics. With further analysis, the implemented system provides more accurate light fields than existing devices without depth distortion. We adapt an f-number matching method at the capture and display stage to record a more exact light field and solve depth distortion, respectively. The algorithm allows the users to adjust the pixel mapping structure of the reconstructed 3D image in real time. The proposed method presents a possibility of a handheld real-time 3D broadcasting system in a cheaper and more applicable way as compared to the previous methods. PMID:26836855

  18. First responder thermal imaging cameras: establishment of representative performance testing conditions

    NASA Astrophysics Data System (ADS)

    Amon, Francine; Hamins, Anthony; Rowe, Justin

    2006-04-01

    Thermal imaging cameras are rapidly becoming integral equipment for first responders for use in structure fires and other emergencies. Currently there are no standardized performance metrics or test methods available to the users and manufacturers of these instruments. The Building and Fire Research Laboratory (BFRL) at the National Institute of Standards and Technology is conducting research to establish test conditions that best represent the environment in which these cameras are used. First responders may use thermal imagers for field operations ranging from fire attack and search/rescue in burning structures, to hot spot detection in overhaul activities, to detecting the location of hazardous materials. In order to develop standardized performance metrics and test methods that capture the harsh environment in which these cameras may be used, information has been collected from the literature, and from full-scale tests that have been conducted at BFRL. Initial experimental work has focused on temperature extremes and the presence of obscuring media such as smoke. In full-scale tests, thermal imagers viewed a target through smoke, dust, and steam, with and without flames in the field of view. The fuels tested were hydrocarbons (methanol, heptane, propylene, toluene), wood, upholstered cushions, and carpeting with padding. Gas temperatures, CO, CO2, and O2 volume fractions, emission spectra, and smoke concentrations were measured. Simple thermal bar targets and a heated mannequin fitted in firefighter gear were used as targets. The imagers were placed at three distances from the targets, ranging from 3 m to 12 m.

  19. Initial Observations by the MRO Mars Color Imager and Context Camera

    NASA Astrophysics Data System (ADS)

    Malin, M. C.

    2006-12-01

    The Mars Color Imager (MARCI) on MRO is a copy of the wide-angle instrument flown on the unsuccessful Mars Climate Orbiter. It consists of two optical systems (visible and ultraviolet) projecting images onto a single CCD detector. The field of view of the optics is 180 degrees cross-track, sufficient to image limb-to-limb even when the MRO spacecraft is pointing off-nadir by 20 degrees. MARCI can image in two UV (260 and 320 nm, ±15 nm) and five visible (425, 550, 600, 650, and 750 nm, ±25 nm) channels. The visible channels have a nadir scale of about 900 m and a limb scale of just under 5 km; the UV channels are summed to 7-8 km nadir scale. Daily global observations, consisting of 12 terminator-to-terminator, limb-to-limb swaths, are used to monitor meteorological conditions, clouds, dust storms, and ozone concentration (a surrogate for water). During high-data-rate periods, MARCI can reproduce the Mariner 9 global image mosaic every day. The Context Camera (CTX) acquires 30 km wide, 6 m/pixel images, and is a new camera derived from the MCO medium-angle MARCI. Its primary purpose is to provide spatial context for MRO instruments with higher resolution or more limited fields of view. Scientifically, CTX acquires images that are nearly as high in spatial resolution as the Mars Orbiter Camera aboard MGS. CTX can cover about 9 percent of Mars, but stereoscopic coverage, overlap for mosaics, and re-imaging locations to search for changes will reduce this coverage significantly.

  20. Imaging of blood vessels with CCD-camera based three-dimensional photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Nuster, Robert; Slezak, Paul; Paltauf, Guenther

    2014-03-01

    An optical phase-contrast full-field detection setup in combination with a CCD camera is presented to record acoustic fields for real-time projection and fast three-dimensional imaging. When recording projection images of the wave pattern around the imaging object, the three-dimensional photoacoustic imaging problem is reduced to a set of two-dimensional reconstructions, and the measurement setup requires only a single axis of rotation. Using a 10 Hz pulse laser system for photoacoustic excitation, a three-dimensional image can be obtained in less than 1 min. The sensitivity and resolution of the detection system were estimated experimentally to be 5 kPa mm and 75 µm, respectively. Experiments on biological samples show the applicability of this technique for the imaging of blood vessel distributions.

  1. CCD camera baseline calibration and its effects on imaging processing and laser beam analysis

    NASA Astrophysics Data System (ADS)

    Roundy, Carlos B.

    1997-09-01

    CCD cameras are commonly used for many imaging applications, as well as in optical instrumentation applications. These cameras have many excellent characteristics for both scene imaging and laser beam analysis. However, CCD cameras have two characteristics that limit their potential performance. The first limiting factor is the baseline drift of the camera. If the baseline drifts below the digitizer zero, data in the background is lost and is uncorrectable. If the baseline drifts above the digitizer zero, then a false background is introduced into the scene. This false background is partially correctable by taking a background frame with no input image and then subtracting it from each imaged frame. ('Partially correctable' is explained in detail later.) The second characteristic that inhibits CCD cameras is their high level of random noise. A typical CCD camera used with an 8-bit digitizer yielding 256 counts has 2 to 6 counts of random noise in the baseline. The noise is typically Gaussian, and goes both positive and negative about a mean or average baseline level. When normal baseline subtraction occurs, the negative noise components are truncated, leaving only the positive components. These lost negative noise components can distort measurements that rely on a low-intensity background. Situations exist in which the baseline offset and lost negative noise components are very significant. For example, in image processing, when attempting to distinguish data with a very low contrast between objects, the contrast is compromised by the loss of the negative noise. Secondly, the measurement of laser beam widths requires analysis of very low intensity signals far out in the wings of the beam. The intensity is low, but the area is large, so even small distortions can create significant errors in measuring beam width. The effect of baseline error is particularly significant on the measurement of a laser beam width.
This measurement is very important because it gives the size of the beam at the measurement point, it is used in laser divergence measurement, and it is critical for realistic measurement of M2, the ultimate criterion for the quality of a laser beam. One measure of laser beam width, called the second moment, or D4σ, which is the ISO definition of a true laser beam width, is especially sensitive to noise in the baseline. The D4σ method integrates all signal far out into the wings of the beam, and gives particular weight to the noise and signal in the wings. It is impossible to make this measurement without the negative noise components, and without other special algorithms to limit the effect of noise in the wings.
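    The sensitivity of the D4σ width to the wings can be seen in a short numerical sketch (NumPy, illustrative values only, not from the paper): the ISO second-moment width is computed once with the signed noise retained and once with negative values clipped to zero, as naive baseline subtraction effectively does. The clipped version leaves a small positive bias spread over a large area, inflating the width.

```python
import numpy as np

def d4sigma_width(img, pixel_pitch=1.0):
    """Second-moment (D4-sigma) beam width along x.

    img may contain negative values (signed, baseline-subtracted noise);
    truncating them leaves positive residue in the wings that biases
    the second moment upward.
    """
    y, x = np.indices(img.shape)
    total = img.sum()
    cx = (x * img).sum() / total                  # intensity centroid
    var_x = ((x - cx) ** 2 * img).sum() / total   # second central moment
    return 4.0 * np.sqrt(var_x) * pixel_pitch

# Gaussian beam (1/e^2 radius 20 px, so true D4-sigma = 40 px)
# plus zero-mean Gaussian noise.
rng = np.random.default_rng(0)
xx = np.arange(256)
beam = np.exp(-2.0 * ((xx - 128) / 20.0) ** 2)[None, :] * np.ones((64, 1))
noisy = beam + rng.normal(0, 0.01, beam.shape)

w_signed = d4sigma_width(noisy)                   # near the true 40 px
w_clipped = d4sigma_width(np.clip(noisy, 0, None))  # substantially inflated
```

With the signed noise kept, the positive and negative fluctuations in the wings largely cancel; clipping destroys that cancellation, which is the effect the abstract describes.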

  2. Technique for improving the quality of images from digital cameras using ink-jet printers and smoothed RGB transfer curves

    NASA Astrophysics Data System (ADS)

    Sampat, Nitin; Grim, John F.; O'Hara, James E.

    1998-04-01

    The digital camera market is growing at an explosive rate. At the same time, the quality of photographs printed on ink-jet printers continues to improve. Most consumer cameras are designed with the monitor, not the printer, as the target output device. When printing images from a camera, the user needs to optimize the camera and printer combination in order to maximize image quality. We describe the details of one such method for improving image quality using an Agfa digital camera and ink-jet printer combination. Using Adobe Photoshop, we generated optimum red, green, and blue transfer curves that match the scene content to the printer's output capabilities. Application of these curves to the original digital image resulted in a print with more shadow detail, no loss of highlight detail, a smoother tone scale, and more saturated colors. The corrected prints were visually more pleasing than those captured and printed without any 'correction'. While we report the results for one camera-printer combination, we tested this technique on numerous digital camera and printer combinations and in each case produced a better-looking image. We also discuss the problems we encountered in implementing this technique.
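    As a hypothetical sketch of the correction described above (the paper's actual curves were hand-tuned in Photoshop; the control points below are illustrative), a smoothed per-channel transfer curve can be represented as a 256-entry lookup table interpolated from a few control points and applied to each RGB channel:

```python
import numpy as np

def smooth_transfer_curve(control_pts, n=256):
    """Interpolate sparse (input, output) control points into a
    smooth n-entry lookup table for one channel."""
    xs, ys = zip(*control_pts)
    lut = np.interp(np.arange(n), xs, ys)
    return np.clip(lut, 0, 255).astype(np.uint8)

def apply_curves(img, luts):
    """Apply per-channel lookup tables to an HxWx3 uint8 image."""
    out = np.empty_like(img)
    for c in range(3):
        out[..., c] = luts[c][img[..., c]]
    return out

# Illustrative curve: lift shadows while leaving highlights pinned.
shadow_lift = [(0, 0), (64, 90), (128, 150), (192, 210), (255, 255)]
luts = [smooth_transfer_curve(shadow_lift) for _ in range(3)]

img = np.full((2, 2, 3), 64, dtype=np.uint8)   # a dark test patch
out = apply_curves(img, luts)                  # shadow value 64 maps to 90
```

The same lookup-table mechanism underlies the Photoshop curves tool; only the shape of the control-point curve is specific to a given camera-printer pair.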

  3. Optical character recognition of camera-captured images based on phase features

    NASA Astrophysics Data System (ADS)

    Diaz-Escobar, Julia; Kober, Vitaly

    2015-09-01

    Nowadays most digital information is obtained using mobile devices, especially smartphones. In particular, this brings the opportunity for optical character recognition in camera-captured images. For this reason many recognition applications have been developed recently, such as recognition of license plates, business cards, receipts, and street signs; document classification; augmented reality; language translation; and so on. Camera-captured images are usually affected by geometric distortions, nonuniform illumination, shadows, and noise, which make the recognition task difficult for existing systems. It is well known that the Fourier phase carries much of the important information in an image, independent of the Fourier magnitude. So, in this work we propose a phase-based recognition system exploiting phase-congruency features for illumination/scale invariance. The performance of the proposed system is tested in terms of misclassifications and false alarms with the help of computer simulation.

  4. A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object

    NASA Astrophysics Data System (ADS)

    Winkler, A. W.; Zagar, B. G.

    2013-08-01

    An important step in the process of optical steel coil quality assurance is to measure the proportions of width and radius of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. To this end, an adaptive least-squares algorithm is applied to fit parametrized curves to the true coil outline detected in the acquired image. The employed model allows for strictly separating the intrinsic and the extrinsic parameters. Thus, the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized even to measure other solids which cannot be characterized by the identification of simple geometric primitives.
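    As a simplified, hypothetical stand-in for the paper's adaptive least-squares fit of parametrized curves (the actual model fits the projected cylinder outline), a linear algebraic circle fit (the Kasa method) illustrates the core idea of estimating curve parameters from detected outline points:

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic least-squares circle fit (Kasa method):
    solve x^2 + y^2 + D*x + E*y + F = 0 for (D, E, F)."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = -(xs ** 2 + ys ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - F)
    return cx, cy, r

# Synthetic outline: noisy points on a circle of radius 5 centred at (2, -1).
rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
xs = 2.0 + 5.0 * np.cos(t) + rng.normal(0, 0.01, t.size)
ys = -1.0 + 5.0 * np.sin(t) + rng.normal(0, 0.01, t.size)
cx, cy, r = fit_circle(xs, ys)   # recovers centre and radius
```

The paper's model is richer (it accounts for perspective projection of the cylinder and separates intrinsic from extrinsic parameters), but the fit-residual-minimization structure is the same.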

  5. Portable retinal imaging for eye disease screening using a consumer-grade digital camera

    NASA Astrophysics Data System (ADS)

    Barriga, Simon; Larichev, Andrey; Zamora, Gilberto; Soliz, Peter

    2012-03-01

    The development of affordable means to image the retina is an important step toward the implementation of eye disease screening programs. In this paper we present the i-RxCam, a low-cost, hand-held, retinal camera for widespread applications such as tele-retinal screening for eye diseases like diabetic retinopathy (DR), glaucoma, and age-related ocular diseases. Existing portable retinal imagers do not meet the requirements of a low-cost camera with sufficient technical capabilities (field of view, image quality, portability, battery power, and ease-of-use) to be distributed widely to low volume clinics, such as the offices of single primary care physicians serving rural communities. The i-RxCam uses a Nikon D3100 digital camera body. The camera has a CMOS sensor with 14.8 million pixels. We use a 50mm focal lens that gives a retinal field of view of 45 degrees. The internal autofocus can compensate for about 2D (diopters) of focusing error. The light source is an LED produced by Philips with a linear emitting area that is transformed using a light pipe to the optimal shape at the eye pupil, an annulus. To eliminate corneal reflex we use a polarization technique in which the light passes through a nano-wire polarizer plate. This is a novel type of polarizer featuring high polarization separation (contrast ratio of more than 1000) and very large acceptance angle (>45 degrees). The i-RxCam approach will yield a significantly more economical retinal imaging device that would allow mass screening of the at-risk population.

  6. X-ray and gamma-ray imaging with multiple-pinhole cameras using a posteriori image synthesis.

    NASA Technical Reports Server (NTRS)

    Groh, G.; Hayat, G. S.; Stroke, G. W.

    1972-01-01

    In 1968, Dicke had suggested that multiple-pinhole camera systems would have significant advantages concerning the SNR in X-ray and gamma-ray astronomy if the multiple images could be somehow synthesized into a single image. The practical development of an image-synthesis method based on these suggestions is discussed. A formulation of the SNR gain theory which is particularly suited for dealing with the proposal by Dicke is considered. It is found that the SNR gain is by no means uniform in all X-ray astronomy applications.

  7. Intercomparison of SO2 camera systems for imaging volcanic gas plumes

    USGS Publications Warehouse

    Kern, Christoph; Lübcke, Peter; Bobrowski, Nicole; Campion, Robin; Mori, Toshiya; Smekens, Jean-Francois; Stebel, Kerstin; Tamburello, Giancarlo; Burton, Mike; Platt, Ulrich; Prata, Fred

    2015-01-01

    SO2 camera systems are increasingly being used to image volcanic gas plumes. The ability to derive SO2 emission rates directly from the acquired imagery at high time resolution allows volcanic process studies that incorporate other high time-resolution datasets. Though the general principles behind the SO2 camera have remained the same for a number of years, recent advances in CCD technology and an improved understanding of the physics behind the measurements have driven a continuous evolution of the camera systems. Here we present an intercomparison of seven different SO2 cameras. In the first part of the experiment, the various technical designs are compared and the advantages and drawbacks of individual design options are considered. Though the ideal design was found to be dependent on the specific application, a number of general recommendations are made. Next, a time series of images recorded by all instruments at Stromboli Volcano (Italy) is compared. All instruments were easily able to capture SO2 clouds emitted from the summit vents. Quantitative comparison of the SO2 load in an individual cloud yielded an intra-instrument precision of about 12%. From the imagery, emission rates were then derived according to each group's standard retrieval process. A daily average SO2 emission rate of 61 ± 10 t/d was calculated. Due to differences in spatial integration methods and plume velocity determination, the time-dependent progression of SO2 emissions varied significantly among the individual systems. However, integration over distinct degassing events yielded comparable SO2 masses. Based on the intercomparison data, we find an approximate 1-sigma precision of 20% for the emission rates derived from the various SO2 cameras. Though it may still be improved in the future, this is currently within the typical accuracy of the measurement and is considered sufficient for most applications.
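    The emission-rate retrieval described above reduces to integrating the SO2 column density along an image transect perpendicular to the plume transport direction and multiplying by the plume speed. A minimal sketch, with illustrative numbers that are not from the study:

```python
import numpy as np

def so2_emission_rate(column_densities, pixel_width_m, plume_speed_ms):
    """Emission rate from one image transect across the plume.

    column_densities : SO2 column density per pixel along the transect (kg/m^2)
    pixel_width_m    : ground size of one pixel along the transect (m)
    plume_speed_ms   : plume transport speed (m/s)
    Returns the emission rate in t/d.
    """
    kg_per_s = column_densities.sum() * pixel_width_m * plume_speed_ms
    return kg_per_s * 86400.0 / 1000.0   # kg/s -> t/d

# Illustrative Gaussian cross-plume profile sampled at 10 m pixels.
x = np.linspace(-500.0, 500.0, 101)
cd = 2e-3 * np.exp(-(x / 150.0) ** 2)    # kg/m^2
rate = so2_emission_rate(cd, pixel_width_m=10.0, plume_speed_ms=5.0)
```

As the abstract notes, the dominant error sources in practice are the plume-speed estimate and the choice of integration transect, not the summation itself.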

  8. Intercomparison of SO2 camera systems for imaging volcanic gas plumes

    NASA Astrophysics Data System (ADS)

    Kern, Christoph; Lübcke, Peter; Bobrowski, Nicole; Campion, Robin; Mori, Toshiya; Smekens, Jean-François; Stebel, Kerstin; Tamburello, Giancarlo; Burton, Mike; Platt, Ulrich; Prata, Fred

    2015-07-01

    SO2 camera systems are increasingly being used to image volcanic gas plumes. The ability to derive SO2 emission rates directly from the acquired imagery at high time resolution allows volcanic process studies that incorporate other high time-resolution datasets. Though the general principles behind the SO2 camera have remained the same for a number of years, recent advances in CCD technology and an improved understanding of the physics behind the measurements have driven a continuous evolution of the camera systems. Here we present an intercomparison of seven different SO2 cameras. In the first part of the experiment, the various technical designs are compared and the advantages and drawbacks of individual design options are considered. Though the ideal design was found to be dependent on the specific application, a number of general recommendations are made. Next, a time series of images recorded by all instruments at Stromboli Volcano (Italy) is compared. All instruments were easily able to capture SO2 clouds emitted from the summit vents. Quantitative comparison of the SO2 load in an individual cloud yielded an intra-instrument precision of about 12%. From the imagery, emission rates were then derived according to each group's standard retrieval process. A daily average SO2 emission rate of 61 ± 10 t/d was calculated. Due to differences in spatial integration methods and plume velocity determination, the time-dependent progression of SO2 emissions varied significantly among the individual systems. However, integration over distinct degassing events yielded comparable SO2 masses. Based on the intercomparison data, we find an approximate 1-sigma precision of 20% for the emission rates derived from the various SO2 cameras. Though it may still be improved in the future, this is currently within the typical accuracy of the measurement and is considered sufficient for most applications.

  9. Range camera calibration based on image sequences and dense comprehensive error statistics

    NASA Astrophysics Data System (ADS)

    Karel, Wilfried; Pfeifer, Norbert

    2009-01-01

    This article concentrates on the integrated self-calibration of both the interior orientation and the distance measurement system of a time-of-flight range camera (photonic mixer device). Unlike other approaches that investigate individual distortion factors separately, in the presented approach all calculations are based on the same data set, which is captured without auxiliary devices serving as high-order reference, but with the camera being guided by hand. Flat, circular targets stuck on a planar whiteboard at known positions are automatically tracked throughout the amplitude layer of long image sequences. These image observations are introduced into a bundle block adjustment, which on the one hand results in the determination of the interior orientation. Capitalizing on the known planarity of the imaged board, the reconstructed exterior orientations furthermore allow for the derivation of reference values for the actual distance observations. Eased by the automatic reconstruction of the camera's trajectory and attitude, comprehensive statistics are generated, which are accumulated into a 5-dimensional matrix in order to be manageable. The marginal distributions of this matrix are inspected for the purpose of system identification, whereupon its elements are introduced into another least-squares adjustment, finally leading to clear range correction models and parameters.

  10. Camera model and calibration process for high-accuracy digital image metrology of inspection planes

    NASA Astrophysics Data System (ADS)

    Correia, Bento A. B.; Dinis, Joao

    1998-10-01

    High-accuracy digital image based metrology must rely on an integrated model of image generation that is able to consider simultaneously the geometry of the camera vs. object positioning, and the conversion of the optical image on the sensor into an electronic digital format. In applications of automated visual inspection involving the analysis of approximately plane objects, these models are generally simplified in order to facilitate the process of camera calibration. In this context, the lack of rigor in the determination of the intrinsic parameters in such models is particularly relevant. Aiming at the high-accuracy metrology of contours of objects lying on an analysis plane, and involving sub-pixel measurements, this paper presents a three-stage camera model that includes an extrinsic component of perspective distortion and the intrinsic components of radial lens distortion and sensor misalignment. The latter two factors are crucial in applications of machine vision that rely on the use of low-cost optical components. A polynomial model for the negative radial lens distortion of wide field of view CCTV lenses is also established.

  11. Imaging microscopic structures in pathological retinas using a flood-illumination adaptive optics retinal camera

    NASA Astrophysics Data System (ADS)

    Viard, Clément; Nakashima, Kiyoko; Lamory, Barbara; Pâques, Michel; Levecq, Xavier; Château, Nicolas

    2011-03-01

    This research is aimed at characterizing in vivo differences between healthy and pathological retinal tissues at the microscopic scale using a compact adaptive optics (AO) retinal camera. Tests were performed in 120 healthy eyes and 180 eyes suffering from 19 different pathological conditions, including age-related maculopathy (ARM), glaucoma and rare diseases such as inherited retinal dystrophies. Each patient was first examined using SD-OCT and infrared SLO. Retinal areas of 4°x4° were imaged using an AO flood-illumination retinal camera based on a large-stroke deformable mirror. Contrast was finally enhanced by registering and averaging raw images using classical algorithms. Cellular-resolution images could be obtained in most cases. In ARM, AO images revealed granular contents in drusen, which were invisible in SLO or OCT images, and allowed the observation of the cone mosaic between drusen. In glaucoma cases, visual field was correlated to changes in cone visibility. In inherited retinal dystrophies, AO helped to evaluate cone loss across the retina. Other microstructures, slightly larger in size than cones, were also visible in several retinas. AO provided potentially useful diagnostic and prognostic information in various diseases. In addition to cones, other microscopic structures revealed by AO images may also be of interest in monitoring retinal diseases.

  12. Improving estimates of leaf area index by processing RAW images in upward-pointing-digital cameras

    NASA Astrophysics Data System (ADS)

    Jeon, S.; Ryu, Y.

    2013-12-01

    Leaf Area Index (LAI) measurement using upward-pointing digital cameras on the forest floor has gained great attention because it makes continuous, high-accuracy LAI measurement feasible. However, an upward-pointing digital camera can underestimate LAI when photos are overexposed, which makes leaves silhouetted against bright sky disappear from the photo. Processing RAW images could reduce the likelihood of LAI underestimation. This study aims to develop a RAW image processing workflow and to compare RAW-derived LAI with JPEG-derived LAI. Digital photos were taken automatically three times per day (0.5, 1, and 1.5 hours before sunset) in both RAW and JPEG formats at the Gwangreung deciduous and evergreen forests in South Korea. We used the blue channel of the RAW images to quantify gap fraction, and then LAI. LAI estimates from JPEG and RAW images do not show substantial differences in the deciduous forest. However, LAI derived from RAW images in the evergreen forest, where the forest floor is fairly dark even in daytime, shows substantially less noise and greater values than the JPEG-derived LAI. This study concludes that LAI estimates should be derived from RAW images for more accurate measurement of LAI.
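    The gap-fraction step can be sketched as follows, assuming a simple brightness threshold on the blue channel and the common Beer-Lambert inversion LAI = -ln(P)/k; the threshold and the extinction coefficient k are illustrative assumptions, not values from the study:

```python
import numpy as np

def lai_from_blue_channel(blue, sky_threshold, k=0.5):
    """Estimate LAI from the blue channel of an upward-pointing photo.

    Pixels brighter than sky_threshold are classified as sky (canopy gaps).
    LAI follows the Beer-Lambert inversion LAI = -ln(P) / k, where P is
    the gap fraction; k = 0.5 assumes a spherical leaf-angle distribution.
    """
    gap_fraction = np.mean(blue > sky_threshold)
    return -np.log(gap_fraction) / k

# Synthetic example: roughly 30% of pixels are bright sky.
rng = np.random.default_rng(2)
blue = np.where(rng.random((100, 100)) < 0.3, 220, 40).astype(np.uint8)
lai = lai_from_blue_channel(blue, sky_threshold=128)
```

This also shows why overexposure matters: saturated leaf pixels cross the sky threshold, inflating P and driving the estimated LAI down, which is the underestimation the abstract describes.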

  13. COMPACT CdZnTe-BASED GAMMA CAMERA FOR PROSTATE CANCER IMAGING

    SciTech Connect

    CUI, Y.; LALL, T.; TSUI, B.; YU, J.; MAHLER, G.; BOLOTNIKOV, A.; VASKA, P.; DeGERONIMO, G.; O'CONNOR, P.; MEINKEN, G.; JOYAL, J.; BARRETT, J.; CAMARDA, G.; HOSSAIN, A.; KIM, K.H.; YANG, G.; POMPER, M.; CHO, S.; WEISMAN, K.; SEO, Y.; BABICH, J.; LaFRANCE, N.; AND JAMES, R.B.

    2011-10-23

    In this paper, we discuss the design of a compact gamma camera for high-resolution prostate cancer imaging using Cadmium Zinc Telluride (CdZnTe or CZT) radiation detectors. Prostate cancer is a common disease in men. Nowadays, a blood test measuring the level of prostate specific antigen (PSA) is widely used for screening for the disease in males over 50, followed by (ultrasound) imaging-guided biopsy. However, PSA tests have a high false-positive rate and ultrasound-guided biopsy has a high likelihood of missing small cancerous tissues. Commercial methods of nuclear medical imaging, e.g. PET and SPECT, can functionally image the organs, and potentially find cancer tissues at early stages, but their applications in diagnosing prostate cancer have been limited by the smallness of the prostate gland and the long working distance between the organ and the detectors comprising these imaging systems. CZT is a semiconductor material with wide band-gap and relatively high electron mobility, and thus can operate at room temperature without additional cooling. CZT detectors are photon-electron direct-conversion devices, thus offering high energy-resolution in detecting gamma rays, enabling energy-resolved imaging, and reducing the background of Compton-scattering events. In addition, CZT material has high stopping power for gamma rays; for medical imaging, a few-mm-thick CZT material provides adequate detection efficiency for many SPECT radiotracers. Because of these advantages, CZT detectors are becoming popular for several SPECT medical-imaging applications. Most recently, we designed a compact gamma camera using CZT detectors coupled to an application-specific integrated circuit (ASIC). This camera functions as a trans-rectal probe to image the prostate gland from a distance of only 1-5 cm, thus offering higher detection efficiency and higher spatial resolution. Hence, it potentially can detect prostate cancers at their early stages. 
The performance tests of this camera have been completed. The results show better than 6-mm resolution at a distance of 1 cm. Details of the test results are discussed in this paper.

  14. Compact CdZnTe-based gamma camera for prostate cancer imaging

    NASA Astrophysics Data System (ADS)

    Cui, Yonggang; Lall, Terry; Tsui, Benjamin; Yu, Jianhua; Mahler, George; Bolotnikov, Aleksey; Vaska, Paul; De Geronimo, Gianluigi; O'Connor, Paul; Meinken, George; Joyal, John; Barrett, John; Camarda, Giuseppe; Hossain, Anwar; Kim, Ki Hyun; Yang, Ge; Pomper, Marty; Cho, Steve; Weisman, Ken; Seo, Youngho; Babich, John; LaFrance, Norman; James, Ralph B.

    2011-06-01

    In this paper, we discuss the design of a compact gamma camera for high-resolution prostate cancer imaging using Cadmium Zinc Telluride (CdZnTe or CZT) radiation detectors. Prostate cancer is a common disease in men. Nowadays, a blood test measuring the level of prostate specific antigen (PSA) is widely used for screening for the disease in males over 50, followed by (ultrasound) imaging-guided biopsy. However, PSA tests have a high false-positive rate and ultrasound-guided biopsy has a high likelihood of missing small cancerous tissues. Commercial methods of nuclear medical imaging, e.g. PET and SPECT, can functionally image the organs, and potentially find cancer tissues at early stages, but their applications in diagnosing prostate cancer have been limited by the smallness of the prostate gland and the long working distance between the organ and the detectors comprising these imaging systems. CZT is a semiconductor material with wide band-gap and relatively high electron mobility, and thus can operate at room temperature without additional cooling. CZT detectors are photon-electron direct-conversion devices, thus offering high energy-resolution in detecting gamma rays, enabling energy-resolved imaging, and reducing the background of Compton-scattering events. In addition, CZT material has high stopping power for gamma rays; for medical imaging, a few-mm-thick CZT material provides adequate detection efficiency for many SPECT radiotracers. Because of these advantages, CZT detectors are becoming popular for several SPECT medical-imaging applications. Most recently, we designed a compact gamma camera using CZT detectors coupled to an application-specific integrated circuit (ASIC). This camera functions as a trans-rectal probe to image the prostate gland from a distance of only 1-5 cm, thus offering higher detection efficiency and higher spatial resolution. Hence, it potentially can detect prostate cancers at their early stages. 
The performance tests of this camera have been completed. The results show better than 6-mm resolution at a distance of 1 cm. Details of the test results are discussed in this paper.

  15. Diffuse reflection imaging of sub-epidermal tissue haematocrit using a simple RGB camera

    NASA Astrophysics Data System (ADS)

    Leahy, Martin J.; O'Doherty, Jim; McNamara, Paul; Henricson, Joakim; Nilsson, Gert E.; Anderson, Chris; Sjoberg, Folke

    2007-05-01

    This paper describes the design and evaluation of a novel, easy-to-use tissue viability imaging system (TiVi). The system is based on the methods of diffuse reflectance spectroscopy and polarization spectroscopy. The technique has been developed as an alternative to current imaging technology in the area of microcirculation imaging, most notably optical coherence tomography (OCT) and laser Doppler perfusion imaging (LDPI). The system is based on standard digital camera technology, and is sensitive to red blood cells (RBCs) in the microcirculation. Lack of clinical acceptance of both OCT and LDPI fuels the need for an objective, simple, reproducible and portable imaging method that can provide accurate measurements related to stimulus vasoactivity in the microvasculature. The limitations of these technologies are discussed in this paper. Uses of the Tissue Viability system include skin care products, drug development, and assessment of spatial and temporal aspects of vasodilation (erythema) and vasoconstriction (blanching).

  16. Advances In The Image Sensor: The Critical Element In The Performance Of Cameras

    NASA Astrophysics Data System (ADS)

    Narabu, Tadakuni

    2011-01-01

    Digital imaging technology and digital imaging products are advancing at a rapid pace. The progress of digital cameras has been particularly impressive. Image sensors now have smaller pixel size, a greater number of pixels, higher sensitivity, lower noise and a higher frame rate. Picture resolution is a function of the number of pixels of the image sensor. The more pixels there are, the smaller each pixel becomes, but the sensitivity and the charge-handling capability of each pixel can be maintained or even increased by raising the quantum efficiency and the saturation capacity of the pixel per unit area. Sony's many technologies can be successfully applied to CMOS image sensor manufacturing toward sub-2.0 µm pitch pixels and beyond.

  17. Improved Digitization of Lunar Mare Ridges with LROC Derived Products

    NASA Astrophysics Data System (ADS)

    Crowell, J. M.; Robinson, M. S.; Watters, T. R.; Bowman-Cisneros, E.; Enns, A. C.; Lawrence, S.

    2011-12-01

    Lunar wrinkle ridges (mare ridges) are positive-relief structures formed from compressional stress in basin-filling flood basalt deposits [1]. Previous workers have measured wrinkle ridge orientations and lengths to investigate their spatial distribution and infer basin-localized stress fields [2,3]. Although these plots include the most prominent mare ridges and their general trends, they may not have fully captured all of the ridges, particularly the smaller-scale ridges. Using Lunar Reconnaissance Orbiter Wide Angle Camera (WAC) global mosaics and derived topography (100 m pixel scale) [4], we systematically remapped wrinkle ridges in Mare Serenitatis. By comparing two WAC mosaics with different lighting geometry, and shaded relief maps made from a WAC digital elevation model (DEM) [5], we observed that some ridge segments and some smaller ridges are not visible in previous structure maps [2,3]. In the past, mapping efforts were limited by a fixed Sun direction [6,7]. For systematic mapping we created three shaded relief maps from the WAC DEM with solar azimuth angles of 0°, 45°, and 90°, and a fourth map was created by combining the three shaded reliefs into one, using a simple averaging scheme. Along with the original WAC mosaic and the WAC DEM, these four datasets were imported into ArcGIS, and the mare ridges of Imbrium, Serenitatis, and Tranquillitatis were digitized from each of the six maps. Since the mare ridges are often divided into many ridge segments [8], each major component was digitized separately, as opposed to the ridge as a whole. This strategy enhanced our ability to analyze the lengths, orientations, and abundances of these ridges. After the initial mapping was completed, the six products were viewed together to identify and resolve discrepancies in order to produce a final wrinkle ridge map. Comparing this new mare ridge map with past lunar tectonic maps, we found that many mare ridges were not recorded in the previous works. 
We also noted that, in some cases, the lengths and orientations of previously digitized ridges differ from those of the ridges digitized in this study. This multi-map digitizing method allows more accurate spatial characterization of mare ridges than previous approaches. We intend to map mare ridges at a global scale, producing a more comprehensive ridge map enabled by the higher resolution of the WAC data. References Cited: [1] Schultz P.H. (1976) Moon Morphology, 308. [2] Wilhelms D.E. (1987) USGS Prof. Paper 1348, 5A-B. [3] Carr M.H. (1966) USGS Geologic Atlas of the Moon, I-498. [4] Robinson M.S. (2010) Space Sci. Rev., 150, 82. [5] Scholten F. et al. (2011) LPSC XLII, 2046. [6] Fielder G. and Kiang T. (1962) The Observatory, No. 926, 8. [7] Watters T.R. and Konopliv A.S. (2001) Planetary and Space Sci., 49, 743-748. [8] Aubele J.C. (1988) LPSC XIX, 19.
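The multi-azimuth shaded-relief blending described above can be sketched as follows. This is a minimal illustration of the idea (a Horn-style hillshade averaged over three Sun azimuths), not the authors' actual WAC processing pipeline; the function names and the synthetic DEM are assumptions.

```python
import numpy as np

def hillshade(dem, azimuth_deg, altitude_deg=45.0, cellsize=100.0):
    """Shaded relief (0-1) from a DEM for one Sun position."""
    gy, gx = np.gradient(dem, cellsize)          # N-S and E-W slope components
    slope = np.arctan(np.hypot(gx, gy))
    aspect = np.arctan2(-gx, gy)
    az, alt = np.radians(azimuth_deg), np.radians(altitude_deg)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)

# Combine three azimuths (0, 45, 90 degrees) by simple averaging, so ridges
# of any orientation are illuminated in at least one component map.
dem = np.random.default_rng(0).random((50, 50)) * 500.0   # synthetic relief, metres
combined = np.mean([hillshade(dem, az) for az in (0, 45, 90)], axis=0)
```

Averaging keeps a ridge visible regardless of its strike, which is exactly what a single fixed Sun direction cannot guarantee.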

  18. Intraoperative Imaging Guidance for Sentinel Node Biopsy in Melanoma Using a Mobile Gamma Camera

    PubMed Central

    Dengel, Lynn T.; More, Mitali J.; Judy, Patricia G.; Petroni, Gina R.; Smolkin, Mark E.; Rehm, Patrice K.; Majewski, Stan; Williams, Mark B.; Slingluff, Craig L.

    2016-01-01

    Objective To evaluate the sensitivity and clinical utility of intraoperative mobile gamma camera (MGC) imaging in sentinel lymph node biopsy (SLNB) in melanoma. Background The false-negative rate for SLNB for melanoma is approximately 17%, for which failure to identify the sentinel lymph node (SLN) is a major cause. Intraoperative imaging may aid in detection of SLN near the primary site, in ambiguous locations, and after excision of each SLN. The present pilot study reports outcomes with a prototype MGC designed for rapid intraoperative image acquisition. We hypothesized that intraoperative use of the MGC would be feasible and that sensitivity would be at least 90%. Methods From April to September 2008, 20 patients underwent Tc99 sulfur colloid lymphoscintigraphy, and SLNB was performed with use of a conventional fixed gamma camera (FGC) and gamma probe, followed by intraoperative MGC imaging. Sensitivity was calculated for each detection method. Intraoperative logistical challenges were scored. Cases in which MGC provided clinical benefit were recorded. Results Sensitivity for detecting SLN basins was 97% for the FGC and 90% for the MGC. A total of 46 SLN were identified: 32 (70%) were identified as distinct hot spots by preoperative FGC imaging, 31 (67%) by preoperative MGC imaging, and 43 (93%) by MGC imaging pre- or intraoperatively. The gamma probe identified 44 (96%) independent of MGC imaging. The MGC provided defined clinical benefit as an addition to standard practice in 5 (25%) of 20 patients. Mean score for MGC logistic feasibility was 2 on a scale of 1-9 (1 = best). Conclusions Intraoperative MGC imaging provides additional information when standard techniques fail or are ambiguous. Sensitivity is 90% and can be increased. This pilot study has identified ways to improve the usefulness of an MGC for intraoperative imaging, which holds promise for reducing false negatives of SLNB for melanoma. PMID:21475019
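The per-method percentages reported above are simple detection fractions over the 46 confirmed SLN. A minimal sketch of that arithmetic (counts taken from the abstract; the dictionary keys are just labels):

```python
# Detection sensitivity per method = nodes detected / total confirmed SLN (46).
def sensitivity(detected, total):
    return detected / total

counts = {
    "FGC preoperative": 32,
    "MGC preoperative": 31,
    "MGC pre- or intraoperative": 43,
    "gamma probe": 44,
}
rates_pct = {method: round(100 * sensitivity(n, 46)) for method, n in counts.items()}
# Reproduces the abstract's 70%, 67%, 93%, and 96% figures.
```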

  19. Intraoperative Imaging Guidance for Sentinel Node Biopsy in Melanoma Using a Mobile Gamma Camera

    SciTech Connect

    Dengel, Lynn T; Judy, Patricia G; Petroni, Gina R; Smolkin, Mark E; Rehm, Patrice K; Majewski, Stan; Williams, Mark B

    2011-04-01

    The objective is to evaluate the sensitivity and clinical utility of intraoperative mobile gamma camera (MGC) imaging in sentinel lymph node biopsy (SLNB) in melanoma. The false-negative rate for SLNB for melanoma is approximately 17%, for which failure to identify the sentinel lymph node (SLN) is a major cause. Intraoperative imaging may aid in detection of SLN near the primary site, in ambiguous locations, and after excision of each SLN. The present pilot study reports outcomes with a prototype MGC designed for rapid intraoperative image acquisition. We hypothesized that intraoperative use of the MGC would be feasible and that sensitivity would be at least 90%. From April to September 2008, 20 patients underwent Tc99 sulfur colloid lymphoscintigraphy, and SLNB was performed with use of a conventional fixed gamma camera (FGC) and gamma probe, followed by intraoperative MGC imaging. Sensitivity was calculated for each detection method. Intraoperative logistical challenges were scored. Cases in which MGC provided clinical benefit were recorded. Sensitivity for detecting SLN basins was 97% for the FGC and 90% for the MGC. A total of 46 SLN were identified: 32 (70%) were identified as distinct hot spots by preoperative FGC imaging, 31 (67%) by preoperative MGC imaging, and 43 (93%) by MGC imaging pre- or intraoperatively. The gamma probe identified 44 (96%) independent of MGC imaging. The MGC provided defined clinical benefit as an addition to standard practice in 5 (25%) of 20 patients. Mean score for MGC logistic feasibility was 2 on a scale of 1-9 (1 = best). Intraoperative MGC imaging provides additional information when standard techniques fail or are ambiguous. Sensitivity is 90% and can be increased. This pilot study has identified ways to improve the usefulness of an MGC for intraoperative imaging, which holds promise for reducing false negatives of SLNB for melanoma.

  20. Can We Trust the Use of Smartphone Cameras in Clinical Practice? Laypeople Assessment of Their Image Quality

    PubMed Central

    Boissin, Constance; Fleming, Julian; Wallis, Lee; Hasselberg, Marie

    2015-01-01

    Abstract Background: Smartphone cameras are rapidly being introduced in medical practice, among other devices for image-based teleconsultation. Little is known, however, about the actual quality of the images taken, which is the object of this study. Materials and Methods: A series of nonclinical objects (from three broad categories) were photographed by a professional photographer using three smartphones (iPhone® 4 [Apple, Cupertino, CA], Samsung [Suwon, Korea] Galaxy S2, and BlackBerry® 9800 [BlackBerry Ltd., Waterloo, ON, Canada]) and a digital camera (Canon [Tokyo, Japan] Mark II). In a Web survey, a convenience sample of 60 laypeople "blind" to the types of camera rated the quality of each photograph individually and selected the best overall. We then measured how each camera scored by object category and as a whole, and whether a camera ranked best, using a Mann–Whitney U test for 2×2 comparisons. Results: There were wide variations between and within categories in the quality assessments for all four cameras. The iPhone had the highest proportion of images individually evaluated as good, and it also ranked best for more objects compared with other cameras, including the digital one. The ratings of the Samsung or the BlackBerry smartphone did not significantly differ from those of the digital camera. Conclusions: Whereas one smartphone camera ranked best more often, all three smartphones obtained results at least as good as those of the digital camera. Smartphone cameras can be a substitute for digital cameras for the purposes of medical teleconsultation. PMID:26076033
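The pairwise camera comparison described above uses the Mann–Whitney U test on ordinal quality ratings. A minimal sketch with `scipy.stats.mannwhitneyu`; the rating arrays below are hypothetical stand-ins, not the study's data:

```python
from scipy.stats import mannwhitneyu

# Hypothetical 1-5 quality ratings of the same objects from two cameras.
smartphone_ratings = [5, 4, 5, 4, 3, 5, 4, 4, 5, 3]
digital_ratings = [4, 3, 4, 3, 3, 4, 3, 4, 4, 3]

# Two-sided test: are the two rating distributions shifted relative to
# each other? (The U test is appropriate for ordinal, non-normal data.)
stat, p = mannwhitneyu(smartphone_ratings, digital_ratings, alternative="two-sided")
```

A small p-value would indicate one camera's ratings are systematically higher; in the study, the Samsung and BlackBerry did not differ significantly from the digital camera by this criterion.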

  1. Application of real-time single camera SLAM technology for image-guided targeting in neurosurgery

    NASA Astrophysics Data System (ADS)

    Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng

    2012-10-01

    In this paper, we propose an application of augmented reality technology for targeting tumors or anatomical structures inside the skull. The application is a combination of the technologies of MonoSLAM (Single Camera Simultaneous Localization and Mapping) and computer graphics. A stereo vision system is developed to construct geometric data of the human face for registration with CT images. Reliability and accuracy of the application are enhanced by the use of fiduciary markers fixed to the skull. MonoSLAM keeps track of the current location of the camera with respect to an augmented reality (AR) marker using the extended Kalman filter. The fiduciary markers provide a reference when the AR marker is invisible to the camera. The relationship between the markers on the face and the AR marker is obtained through a registration procedure using the stereo vision system and is updated on-line. A commercially available Android-based tablet PC equipped with a 320 × 240 front-facing camera was used for implementation. The system is able to provide a live view of the patient overlaid by the solid models of tumors or anatomical structures, as well as the missing part of the tool inside the skull.

  2. Electron-tracking Compton gamma-ray camera for small animal and phantom imaging

    NASA Astrophysics Data System (ADS)

    Kabuki, Shigeto; Kimura, Hiroyuki; Amano, Hiroo; Nakamoto, Yuji; Kubo, Hidetoshi; Miuchi, Kentaro; Kurosawa, Shunsuke; Takahashi, Michiaki; Kawashima, Hidekazu; Ueda, Masashi; Okada, Tomohisa; Kubo, Atsushi; Kunieda, Etuso; Nakahara, Tadaki; Kohara, Ryota; Miyazaki, Osamu; Nakazawa, Tetsuo; Shirahata, Takashi; Yamamoto, Etsuji; Ogawa, Koichi; Togashi, Kaori; Saji, Hideo; Tanimori, Toru

    2010-11-01

    We have developed an electron-tracking Compton camera (ETCC) for medical use. Our ETCC has a wide energy dynamic range (200-1300 keV) and wide field of view (3 sr), and thus has potential for advanced medical use. To evaluate the ETCC, we imaged the head (brain) and bladder of mice that had been administered with F-18-FDG. We also imaged the head and thyroid gland of mice using double tracers of F-18-FDG and I-131 ions.

  3. Formulation of image quality prediction criteria for the Viking lander camera

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Jobson, D. J.; Taylor, E. J.; Wall, S. D.

    1973-01-01

    Image quality criteria are defined and mathematically formulated for the prediction computer program which is to be developed for the Viking lander imaging experiment. The general objective of broad-band (black and white) imagery to resolve small spatial details and slopes is formulated as the detectability of a right-circular cone with surface properties of the surrounding terrain. The general objective of narrow-band (color and near-infrared) imagery to observe spectral characteristics is formulated as the minimum detectable albedo variation. The general goal to encompass, but not exceed, the range of the scene radiance distribution within a single, commandable camera dynamic-range setting is also considered.

  4. The facsimile camera - Its potential as a planetary lander imaging system

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Katzberg, S. J.; Kelly, W. L.

    1973-01-01

    The facsimile camera is an optical-mechanical scanning device which is an attractive candidate for planetary lander imaging systems and has been selected for the Viking/Mars mission because of its light weight, small size, and low power requirement. Other advantages are that it can provide good radiometric and photogrammetric accuracy because the complete field of view is scanned with a single photodetector located on or near the optical axis of the objective lens. In addition, this device has the potential capability of multispectral imaging and spectrometric measurements.

  5. High-frame-rate CCD cameras with fast optical shutters for military and medical imaging applications

    NASA Astrophysics Data System (ADS)

    King, Nicholas S. P.; Albright, Kevin L.; Jaramillo, Steven A.; McDonald, Thomas E.; Yates, George J.; Turko, Bojan T.

    1994-10-01

    Los Alamos National Laboratory (LANL) has designed and prototyped high-frame-rate intensified/shuttered Charge-Coupled-Device (CCD) cameras capable of operating at kilohertz frame rates (non-interlaced mode) with optical shutters capable of acquiring nanosecond-to-microsecond exposures each frame. These cameras utilize an Interline Transfer CCD, Loral Fairchild CCD-222 with 244 (vertical) × 380 (horizontal) pixels operated at pixel rates approaching 100 MHz. Initial prototype designs demonstrated single-port serial readout rates exceeding 2.97 kHz with greater than 5 lp/mm spatial resolution at shutter speeds as short as 5 ns. Readout was achieved by using a truncated format of 128 × 128 pixels by partial masking of the CCD and then subclocking the array at approximately 65 MHz pixel rate. Shuttering was accomplished with a proximity-focused microchannel plate (MCP) image intensifier (MCPII) that incorporated a high-strip-current MCP (28 µA/cm²) and a LANL design modification for high-speed stripline gating geometry to provide both fast shuttering and high repetition rate capabilities. Later camera designs use a close-packed quadrupole head geometry fabricated using an array of four separate CCDs (pseudo 4-port device). This design provides four video outputs with optional parallel or time-phased sequential readout modes. Parallel readout exploits the full potential of both the CCD and MCPII with reduced performance, whereas sequential readout permits 4× slower operation with improved performance by multiplexing, but requires individual shuttering of each CCD. 
The quad head format was designed with flexibility for coupling to various image intensifier configurations, including individual intensifiers for each CCD imager, a single intensifier with fiber optic or lens/prism coupled fanout of the input image to be shared by the four CCD imagers or a large diameter phosphor screen of a gateable framing type intensifier for time sequential relaying of a complete new input image to each CCD imager. Camera designs and their potential use in ongoing military and medical time-resolved imaging applications are discussed.

  6. Preliminary results from a single-photon imaging X-ray charge coupled device /CCD/ camera

    NASA Technical Reports Server (NTRS)

    Griffiths, R. E.; Polucci, G.; Mak, A.; Murray, S. S.; Schwartz, D. A.; Zombeck, M. V.

    1981-01-01

    A CCD camera is described which has been designed for single-photon X-ray imaging in the 1-10 keV energy range. Preliminary results are presented from the front-side illuminated Fairchild CCD 211, which has been shown to image well at 3 keV. The problem of charge-spreading above 4 keV is discussed by analogy with a similar problem at infrared wavelengths. The total system noise is discussed and compared with values obtained by other CCD users.

  7. ROPtool analysis of images acquired using a noncontact handheld fundus camera (Pictor)-a pilot study.

    PubMed

    Vickers, Laura A; Freedman, Sharon F; Wallace, David K; Prakalapakorn, S Grace

    2015-12-01

    The presence of plus disease is the primary indication for treatment of retinopathy of prematurity (ROP), but its diagnosis is subjective and prone to error. ROPtool is a semiautomated computer program that quantifies vascular tortuosity and dilation. Pictor is an FDA-approved, noncontact, handheld digital fundus camera. This pilot study evaluated ROPtool's ability to analyze high-quality Pictor images of premature infants and its accuracy in diagnosing plus disease compared to clinical examination. In our small sample of images, ROPtool could trace and identify the presence of plus disease with high accuracy. PMID:26691046

  8. The measurement of astronomical parallaxes with CCD imaging cameras on small telescopes

    SciTech Connect

    Ratcliff, S.J.; Balonek, T.J.; Marschall, L.A.; DuPuy, D.L.; Pennypacker, C.R.; Verma, R.; Alexov, A.; Bonney, V.

    1993-03-01

    Small telescopes equipped with charge-coupled device (CCD) imaging cameras are well suited to introductory laboratory exercises in positional astronomy (astrometry). An elegant example is the determination of the parallax of extraterrestrial objects, such as asteroids. For laboratory exercises suitable for introductory students, the astronomical hardware needs are relatively modest, and, under the best circumstances, the analysis requires little more than arithmetic and a microcomputer with image display capabilities. Results from the first such coordinated parallax observations of asteroids ever made are presented. In addition, procedures for several related experiments, involving single-site observations and/or parallaxes of earth-orbiting artificial satellites, are outlined.
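The parallax analysis the exercise calls for really is little more than arithmetic: under the small-angle approximation, the asteroid's distance is the site-to-site baseline divided by the measured angular shift. A minimal sketch with illustrative numbers (not from any particular observing run):

```python
import math

# Two sites separated by a known baseline image the asteroid against the
# background stars; its apparent position shifts by a small parallax angle.
baseline_km = 3000.0        # separation of the two observing sites (illustrative)
shift_arcsec = 15.0         # measured angular shift between the two images

theta_rad = math.radians(shift_arcsec / 3600.0)   # arcseconds -> radians
distance_km = baseline_km / theta_rad             # small-angle approximation
# ~4.1e7 km, i.e. roughly 0.28 AU for these example numbers
```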

  9. Development of a portable 3CCD camera system for multispectral imaging of biological samples.

    PubMed

    Lee, Hoyoung; Park, Soo Hyun; Noh, Sang Ha; Lim, Jongguk; Kim, Moon S

    2014-01-01

    Recent studies have suggested the need for imaging devices capable of multispectral imaging beyond the visible region, to allow for quality and safety evaluations of agricultural commodities. Conventional multispectral imaging devices lack flexibility in spectral waveband selectivity for such applications. In this paper, a recently developed portable 3CCD camera with significant improvements over existing imaging devices is presented. A beam-splitter prism assembly for 3CCD was designed to accommodate three interference filters that can be easily changed for application-specific multispectral waveband selection in the 400 to 1000 nm region. We also designed and integrated electronic components on printed circuit boards with firmware programming, enabling parallel processing, synchronization, and independent control of the three CCD sensors, to ensure the transfer of data without significant delay or data loss due to buffering. The system can stream 30 frames (3-waveband images in each frame) per second. The potential utility of the 3CCD camera system was demonstrated in the laboratory for detecting defect spots on apples. PMID:25350510

  10. Evaluating the capability of time-of-flight cameras for accurately imaging a cyclically loaded beam

    NASA Astrophysics Data System (ADS)

    Lahamy, Hervé; Lichti, Derek; El-Badry, Mamdouh; Qi, Xiaojuan; Detchev, Ivan; Steward, Jeremy; Moravvej, Mohammad

    2015-05-01

    Time-of-flight cameras are used for diverse applications ranging from human-machine interfaces and gaming to robotics and earth topography. This paper aims at evaluating the capability of the Mesa Imaging SR4000 and the Microsoft Kinect 2.0 time-of-flight cameras for accurately imaging the top surface of a concrete beam subjected to fatigue loading in laboratory conditions. Whereas previous work has demonstrated the success of such sensors for measuring the response at point locations, the aim here is to measure the entire beam surface in support of the overall objective of evaluating the effectiveness of concrete beam reinforcement with steel fibre reinforced polymer sheets. After applying corrections for lens distortions to the data and differencing images over time to remove systematic errors due to internal scattering, the periodic deflections experienced by the beam have been estimated for the entire top surface of the beam and at attached witness plates. The results have been assessed by comparison with measurements from highly accurate laser displacement transducers. This study concludes that both the Microsoft Kinect 2.0 and the Mesa Imaging SR4000 are capable of sensing a moving surface with sub-millimeter accuracy once the image distortions have been modeled and removed.
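The temporal-differencing step described above works because fixed systematic errors (such as the internal-scattering bias) are common to every frame and cancel when one epoch is subtracted from another. A minimal synthetic sketch of the idea (arrays and magnitudes are illustrative, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)
bias = rng.normal(0.0, 2.0, size=(40, 40))   # fixed per-pixel systematic error, mm
deflection = 0.5                             # true surface deflection between epochs, mm

# Two range images of the beam surface, each contaminated by the same bias
# plus independent measurement noise.
frame_t0 = bias + rng.normal(0.0, 0.05, size=(40, 40))
frame_t1 = bias + deflection + rng.normal(0.0, 0.05, size=(40, 40))

# Differencing cancels the bias; only the deflection (plus noise) remains.
estimated_deflection = float(np.mean(frame_t1 - frame_t0))
```

Note the 2 mm bias dwarfs the 0.5 mm signal in either frame alone, yet the difference recovers the deflection to well under the sub-millimetre level the study reports.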

  11. Development of a Portable 3CCD Camera System for Multispectral Imaging of Biological Samples

    PubMed Central

    Lee, Hoyoung; Park, Soo Hyun; Noh, Sang Ha; Lim, Jongguk; Kim, Moon S.

    2014-01-01

    Recent studies have suggested the need for imaging devices capable of multispectral imaging beyond the visible region, to allow for quality and safety evaluations of agricultural commodities. Conventional multispectral imaging devices lack flexibility in spectral waveband selectivity for such applications. In this paper, a recently developed portable 3CCD camera with significant improvements over existing imaging devices is presented. A beam-splitter prism assembly for 3CCD was designed to accommodate three interference filters that can be easily changed for application-specific multispectral waveband selection in the 400 to 1000 nm region. We also designed and integrated electronic components on printed circuit boards with firmware programming, enabling parallel processing, synchronization, and independent control of the three CCD sensors, to ensure the transfer of data without significant delay or data loss due to buffering. The system can stream 30 frames (3-waveband images in each frame) per second. The potential utility of the 3CCD camera system was demonstrated in the laboratory for detecting defect spots on apples. PMID:25350510

  12. Performance of CID camera X-ray imagers at NIF in a harsh neutron environment

    SciTech Connect

    Palmer, N. E.; Schneider, M. B.; Bell, P. M.; Piston, K. W.; Moody, J. D.; James, D. L.; Ness, R. A.; Haugh, M. J.; Lee, J. J.; Romano, E. D.

    2013-09-01

    Charge-injection devices (CIDs) are solid-state 2D imaging sensors similar to CCDs, but their distinct architecture makes CIDs more resistant to ionizing radiation [1-3]. CID cameras have been used extensively for X-ray imaging at the OMEGA Laser Facility [4,5] with neutron fluences at the sensor approaching 10⁹ n/cm² (DT, 14 MeV). A CID Camera X-ray Imager (CCXI) system has been designed and implemented at NIF that can be used as a rad-hard electronic-readout alternative for time-integrated X-ray imaging. This paper describes the design and implementation of the system, calibration of the sensor for X-rays in the 3-14 keV energy range, and preliminary data acquired on NIF shots over a range of neutron yields. The upper limit of neutron fluence at which CCXI can acquire useable images is ~10⁸ n/cm² and there are noise problems that need further improvement, but the sensor has proven to be very robust in surviving high yield shots (~10¹⁴ DT neutrons) with minimal damage.

  13. Application of adaptive optics in retinal imaging: a quantitative and clinical comparison with standard cameras

    NASA Astrophysics Data System (ADS)

    Barriga, E. S.; Erry, G.; Yang, S.; Russell, S.; Raman, B.; Soliz, P.

    2005-04-01

    Aim: The objective of this project was to evaluate high resolution images from an adaptive optics retinal imager through comparisons with standard film-based and standard digital fundus imagers. Methods: A clinical prototype adaptive optics fundus imager (AOFI) was used to collect retinal images from subjects with various forms of retinopathy to determine whether improved visibility into the disease could be provided to the clinician. The AOFI achieves low-order correction of aberrations through a closed-loop wavefront sensor and an adaptive optics system. The remaining high-order aberrations are removed by direct deconvolution using the point spread function (PSF), or by blind deconvolution when the PSF is not available. An ophthalmologist compared the AOFI images with standard fundus images and provided a clinical evaluation of all the modalities and processing techniques. All images were also analyzed using a quantitative image quality index. Results: This system has been tested on three human subjects (one normal and two with retinopathy). In the diabetic patient, vascular abnormalities were detected with the AOFI that could not be resolved with the standard fundus camera. Very small features, such as the fine vascular structures on the optic disc and the individual nerve fiber bundles, are easily resolved by the AOFI. Conclusion: This project demonstrated that adaptive optics images have great potential in providing clinically significant detail of anatomical and pathological structures to the ophthalmologist.
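Direct deconvolution with a known PSF, as used above for the residual high-order aberrations, can be sketched as a regularized inverse filter in the frequency domain. This is a generic stand-in for the study's method; the function name, the tiny blur kernel, and the regularizer `eps` are all illustrative assumptions:

```python
import numpy as np

def deconvolve(image, psf, eps=1e-6):
    """Regularized inverse-filter deconvolution (circular convolution model)."""
    H = np.fft.fft2(psf, s=image.shape)   # zero-padded PSF spectrum
    G = np.fft.fft2(image)
    # Divide out the PSF spectrum; eps prevents blow-up where |H| is small.
    F = G * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(F))

# Demonstration: blur a synthetic image with a known kernel, then restore it.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
psf = np.zeros((2, 2))
psf[0, 0], psf[0, 1], psf[1, 0] = 0.6, 0.2, 0.2   # mild asymmetric blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(psf, s=sharp.shape)))
restored = deconvolve(blurred, psf)
```

With a well-conditioned PSF the restoration is nearly exact; blind deconvolution (no PSF available) requires estimating `psf` jointly and is considerably harder.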

  14. 200 ps FWHM and 100 MHz repetition rate ultrafast gated camera for optical medical functional imaging

    NASA Astrophysics Data System (ADS)

    Uhring, Wilfried; Poulet, Patrick; Hanselmann, Walter; Glazenborg, René; Zint, Virginie; Nouizi, Farouk; Dubois, Benoit; Hirschi, Werner

    2012-04-01

    The paper describes the realization of a complete optical imaging device for clinical applications such as brain functional imaging by time-resolved, spectroscopic diffuse optical tomography. The entire instrument is assembled in a unique setup that includes a light source, an ultrafast time-gated intensified camera, and all the electronic control units. The light source is composed of four near-infrared laser diodes driven by a nanosecond electrical pulse generator working in a sequential mode at a repetition rate of 100 MHz. The resulting light pulses, at four wavelengths, are less than 80 ps FWHM. They are injected in a four-furcated optical fiber ended with a frontal light distributor to obtain a uniform illumination spot directed towards the head of the patient. Photons back-scattered by the subject are detected by the intensified CCD camera; they are resolved according to their time of flight inside the head. The very core of the intensified camera system is the image intensifier tube and its associated electrical pulse generator. The ultrafast generator produces 50 V pulses, at a repetition rate of 100 MHz and a width corresponding to the 200 ps requested gate. The photocathode and the Micro-Channel-Plate of the intensifier have been specially designed to enhance the electromagnetic wave propagation and reduce the power loss and heat that are prejudicial to the quality of the image. The whole instrumentation system is controlled by an FPGA-based module. The timing of the light pulses and the photocathode gating is precisely adjustable with a step of 9 ps. All the acquisition parameters are configurable via software through a USB plug, and the image data are transferred to a PC via an Ethernet link. The compactness of the device makes it well suited to bedside clinical applications.

  15. Experiences Supporting the Lunar Reconnaissance Orbiter Camera: the Devops Model

    NASA Astrophysics Data System (ADS)

    Licht, A.; Estes, N. M.; Bowman-Cisneros, E.; Hanger, C. D.

    2013-12-01

    Introduction: The Lunar Reconnaissance Orbiter Camera (LROC) Science Operations Center (SOC) is responsible for instrument targeting, product processing, and archiving [1]. The LROC SOC maintains over 1,000,000 observations with over 300 TB of released data. Processing challenges compound with the acquisition of over 400 Gbits of observations daily, creating the need for a robust, efficient, and reliable suite of specialized software. Development Environment: The LROC SOC's software development methodology has evolved over time. Today, the development team operates in close cooperation with the systems administration team in a model known in the IT industry as DevOps. The DevOps model enables a highly productive development environment that facilitates accomplishment of key goals within tight schedules [2]. The LROC SOC DevOps model incorporates industry best practices including prototyping, continuous integration, unit testing, code coverage analysis, version control, and utilizing existing open source software. Scientists and researchers at LROC often prototype algorithms and scripts in a high-level language such as MATLAB or IDL. After the prototype is functionally complete, the solution is implemented as production-ready software by the developers. Following this process ensures that all controls and requirements set by the LROC SOC DevOps team are met. The LROC SOC also strives to enhance the efficiency of the operations staff by way of weekly presentations and informal mentoring. Many small scripting tasks are assigned to the cognizant operations personnel (end users), allowing the DevOps team to focus on more complex and mission-critical tasks. In addition to leveraging open source software, the LROC SOC has also contributed to the open source community by releasing Lunaserv [3]. Findings: The DevOps software model very efficiently provides smooth software releases and maintains team momentum. 
Having scientists prototype their work has proven very efficient, as developers do not need to spend time iterating over small changes. Instead, these changes are realized in early prototypes and implemented before the task is seen by developers. The development practices followed by the LROC SOC DevOps team help facilitate a high level of software quality that is necessary for LROC SOC operations. Application to the Scientific Community: There is no replacement for having software developed by professional developers. While it is beneficial for scientists to write software, this activity should be seen as prototyping, which is then made production ready by professional developers. When constructed properly, even a small development team has the ability to increase the rate of software development for a research group while creating more efficient, reliable, and maintainable products. This strategy allows scientists to accomplish more, focusing on teamwork rather than software development, which may not be their primary focus. [1] Robinson et al. (2010) Space Sci. Rev. 150, 81-124. [2] DeGrandis (2011) Cutter IT Journal, Vol. 24, No. 8, 34-39. [3] Estes, N.M.; Hanger, C.D.; Licht, A.A.; Bowman-Cisneros, E., Lunaserv Web Map Service: History, Implementation Details, Development, and Uses, http://adsabs.harvard.edu/abs/2013LPICo1719.2609E.

  16. Linearisation of RGB camera responses for quantitative image analysis of visible and UV photography: a comparison of two techniques.

    PubMed

    Garcia, Jair E; Dyer, Adrian G; Greentree, Andrew D; Spring, Gale; Wilksch, Philip A

    2013-01-01

    Linear camera responses are required for recovering the total amount of incident irradiance, quantitative image analysis, spectral reconstruction from camera responses, and characterisation of spectral sensitivity curves. Two commercially available digital cameras equipped with Bayer filter arrays and sensitive to visible and near-UV radiation were characterised using biexponential and Bézier curves. Both methods successfully fitted the entire characteristic curve of the tested devices, allowing for an accurate recovery of linear camera responses, particularly those corresponding to the middle of the exposure range. Nevertheless, the two methods differ in the nature of the required input parameters and the uncertainty associated with the recovered linear camera responses obtained at the extreme ends of the exposure range. Here we demonstrate the use of both methods for retrieving information about scene irradiance, describing and quantifying the uncertainty involved in the estimation of linear camera responses. PMID:24260244
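Fitting a characteristic curve of the biexponential form mentioned above can be sketched with `scipy.optimize.curve_fit`. The model parameterisation, the synthetic calibration data, and the starting values are illustrative assumptions, not the paper's actual model or measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

# A biexponential saturation model of camera response r as a function of
# (relative) exposure x: two exponential terms capture the nonlinear
# shoulder and toe of the characteristic curve.
def biexponential(x, a1, b1, a2, b2):
    return a1 * (1.0 - np.exp(-b1 * x)) + a2 * (1.0 - np.exp(-b2 * x))

# Synthetic calibration data: known exposures vs. measured responses.
exposure = np.linspace(0.01, 2.0, 50)
response = biexponential(exposure, 0.7, 3.0, 0.3, 0.5)

# Fit the curve; distinct starting values break the symmetry of the two terms.
params, _ = curve_fit(biexponential, exposure, response, p0=(0.5, 2.0, 0.5, 1.0))
fitted = biexponential(exposure, *params)
```

Once fitted, inverting the curve maps raw pixel values back to a response linear in exposure, which is the linearisation the abstract refers to.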

  17. A Gaseous Compton Camera using a 2D-sensitive gaseous photomultiplier for Nuclear Medical Imaging

    NASA Astrophysics Data System (ADS)

    Azevedo, C. D. R.; Pereira, F. A.; Lopes, T.; Correia, P. M. M.; Silva, A. L. M.; Carramate, L. F. N. D.; Covita, D. S.; Veloso, J. F. C. A.

    2013-12-01

    A new Compton Camera (CC) concept based on a High Pressure Scintillation Chamber coupled to a position-sensitive Gaseous PhotoMultiplier for Nuclear Medical Imaging applications is proposed. The main goal of this work is to describe the development of a ∅25 × 12 cm³ cylindrical prototype, which will be suitable for scintimammography and for small-animal imaging applications. The possibility of scaling it to a useful human-size device is also under study. The idea is to develop a device capable of competing with the standard Anger Camera. Despite its large success, the Anger Camera still presents some limitations, such as low position resolution and only fair energy resolution at 140 keV. The CC offers a different solution, as it provides information about the incoming photon direction, avoiding the use of a collimator, which is responsible for a huge reduction (~10⁻⁴) of the sensitivity. The main problem of CCs is the Doppler broadening, which is responsible for the loss of angular resolution. In this work, calculations of the Doppler broadening in Xe, Ar, Ne, and their mixtures are presented. Simulations of the detector performance, together with a discussion of the gas choice, are also included.
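The event-by-event direction information a Compton camera exploits comes from Compton kinematics: the scatter angle follows from the photon energies before and after scattering. A minimal sketch of that relation (the example energies are illustrative, not measurements from this detector):

```python
import math

ME_C2_KEV = 511.0  # electron rest energy in keV

def scatter_angle_deg(e0_kev, e1_kev):
    """Compton scatter angle from incident (e0) and scattered (e1) photon energies."""
    cos_theta = 1.0 - ME_C2_KEV * (1.0 / e1_kev - 1.0 / e0_kev)
    return math.degrees(math.acos(cos_theta))

# Example: a 364 keV I-131 photon leaving the scatterer with 264 keV
# (i.e. ~100 keV deposited) scattered through roughly 62 degrees.
angle = scatter_angle_deg(364.0, 264.0)
```

Without a collimator, each event constrains the source to a cone of this half-angle about the scattered-photon direction; Doppler broadening smears the measured energies and hence this reconstructed angle, which is why the gas choice matters.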

  18. The computation of cloud base height from paired whole-sky imaging cameras

    SciTech Connect

    Allmen, M.C.; Kegelmeyer, W.P. Jr.

    1994-03-01

    A major goal for global change studies is to improve the accuracy of general circulation models (GCMs) capable of predicting the timing and magnitude of greenhouse gas-induced global warming. Research has shown that cloud radiative feedback is the single most important effect determining the magnitude of possible climate responses to human activity. Of particular value to reducing the uncertainties associated with cloud-radiation interactions is the measurement of cloud base height (CBH), both because it is a dominant factor in determining the infrared radiative properties of clouds with respect to the earth's surface and lower atmosphere and because CBHs are essential to measuring cloud cover fraction. We have developed a novel approach to the extraction of cloud base height from pairs of whole sky imaging (WSI) cameras. The core problem is to spatially register cloud fields from widely separated WSI cameras; once this is complete, triangulation provides the CBH measurements. The wide camera separation (necessary to cover the desired observation area) and the self-similarity of clouds defeat all standard matching algorithms when applied to static views of the sky. To address this, our approach is based on optical flow methods that exploit the fact that modern WSIs provide sequences of images. We describe the algorithm and present its performance as evaluated both on real data validated by ceilometer measurements and on a variety of simulated cases.
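The triangulation step, once a cloud feature has been registered in both views, can be sketched for the idealized case of a feature lying in the vertical plane through both camera sites (the paper's hard part, registering the fields via optical flow, is assumed done):

```python
import math

def cloud_base_height(baseline_m, zenith1_deg, zenith2_deg):
    """Height of a cloud feature matched in two upward-looking cameras a
    horizontal `baseline_m` apart, with the feature in the vertical plane
    through both sites. From x1 = h*tan(z1) and x1 - baseline = h*tan(z2):
    h = baseline / (tan z1 - tan z2)."""
    t1 = math.tan(math.radians(zenith1_deg))
    t2 = math.tan(math.radians(zenith2_deg))
    return baseline_m / (t1 - t2)

# A feature seen at 45 deg zenith angle from site A and 30 deg from site B,
# with the sites 2 km apart:
h = cloud_base_height(2000.0, 45.0, 30.0)
```

The same geometry explains why wide separation is needed: for a fixed angular measurement error, a longer baseline gives a smaller height error.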

  19. Digital image measurement of specimen deformation based on CCD cameras and Image J software: an application to human pelvic biomechanics

    NASA Astrophysics Data System (ADS)

    Jia, Yongwei; Cheng, Liming; Yu, Guangrong; Lou, Yongjian; Yu, Yan; Chen, Bo; Ding, Zuquan

    2008-03-01

    A method of digital image measurement of specimen deformation based on CCD cameras and Image J software was developed and used to measure the biomechanical behavior of the human pelvis. Six cadaveric specimens from the third lumbar vertebra to the proximal 1/3 of the femur were tested. The specimens, free of structural abnormalities, were dissected of all soft tissue, sparing the hip joint capsules and the ligaments of the pelvic ring and floor. Markers with a black dot on a white background were affixed to the key regions of the pelvis. Axial loading from the proximal lumbar spine was applied by MTS in increments from 0 N to 500 N, simulating a double-leg standing stance. Anterior and lateral images of the specimen were obtained through two CCD cameras. Digital 8-bit images were processed with Image J, image processing software freely available from the National Institutes of Health. The procedure included marker recognition, image inversion, sub-pixel reconstruction, image segmentation, and a center-of-mass algorithm based on the weighted average of pixel gray values. Vertical displacements of S1 (the first sacral vertebra) in the front view and micro-angular rotation of the sacroiliac joint in the lateral view were calculated from the marker movement. The digital image measurements showed the following: marker image correlation before and after deformation was excellent, with an average correlation coefficient of about 0.983. For the 768 × 576 pixel images (pixel size 0.68 mm × 0.68 mm), the displacement precision in our experiment was about 0.018 pixels and the relative error was about 1.11‰. The average vertical displacement of S1 was 0.8356 ± 0.2830 mm under a vertical load of 500 N, and the average micro-angular rotation of the sacroiliac joint in the lateral view was 0.584 ± 0.221°. The load-displacement curves obtained from our optical measurement system matched clinical results. Digital image measurement of specimen deformation based on CCD cameras and Image J software therefore shows good promise for biomechanical research, with the advantages of a simple optical setup, non-contact operation, high precision, and no special test-environment requirements.
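The center-of-mass step that yields sub-pixel marker positions can be sketched as a gray-value-weighted centroid on a synthetic marker (illustrative values, not the study's images):

```python
import numpy as np

def marker_centroid(img):
    """Sub-pixel marker position as the gray-value-weighted center of mass;
    the image is inverted first so a black dot on a white background carries
    the weight."""
    w = img.max() - img.astype(float)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return (ys * w).sum() / w.sum(), (xs * w).sum() / w.sum()

# Synthetic 9x9 white frame with a dark 3x3 dot centered at row 4, column 5.
img = np.full((9, 9), 255.0)
img[3:6, 4:7] = 10.0
cy, cx = marker_centroid(img)
```

Because the centroid averages over many pixels, its precision can be a small fraction of a pixel, consistent with the ~0.018-pixel figure reported above.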

  20. From the plenoptic camera to the flat integral-imaging display

    NASA Astrophysics Data System (ADS)

    Martínez-Corral, Manuel; Dorado, Adrián; Navarro, Héctor; Llavador, Anabel; Saavedra, Genaro; Javidi, Bahram

    2014-06-01

    Plenoptic cameras capture a sampled version of the map of rays emitted by a 3D scene, commonly known as the Lightfield. These devices have been proposed for multiple applications, such as calculating different sets of views of the 3D scene, removing occlusions, and changing the focused plane of the scene. They can also capture images that can be projected onto an integral-imaging monitor to display 3D images with full parallax. In this contribution, we report a new algorithm for transforming the plenoptic image in order to choose which part of the 3D scene is reconstructed in front of and behind the microlenses in the 3D display process.
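Changing the focused plane from a plenoptic capture is commonly done by shift-and-sum over the sub-aperture views. This integer-shift sketch illustrates the principle only; it is not the authors' specific transformation:

```python
import numpy as np

def refocus(views, positions, slope):
    """Shift-and-sum refocusing: each sub-aperture view is shifted in
    proportion to its aperture position and the results are averaged.
    `slope` selects the depth plane brought into focus (integer pixel
    shifts for brevity)."""
    acc = np.zeros_like(views[0], dtype=float)
    for view, (u, v) in zip(views, positions):
        shift = (int(round(slope * u)), int(round(slope * v)))
        acc += np.roll(view, shift, axis=(0, 1))
    return acc / len(views)

# Synthetic lightfield of a single point with 2 px of parallax per unit
# aperture offset: view (u, v) sees the point at (8 + 2u, 8 + 2v).
positions = [(-1, 0), (0, 0), (1, 0), (0, 1), (0, -1)]
views = []
for u, v in positions:
    img = np.zeros((16, 16))
    img[8 + 2 * u, 8 + 2 * v] = 1.0
    views.append(img)
refocused = refocus(views, positions, -2.0)   # realigns the point at (8, 8)
```

With the matching slope the point's copies align and reinforce; at any other slope they spread out, which is exactly the defocus blur of planes away from the chosen depth.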

  1. Dual-mode laparoscopic fluorescence image-guided surgery using a single camera

    PubMed Central

    Gray, Daniel C.; Kim, Evgenia M.; Cotero, Victoria E.; Bajaj, Anshika; Staudinger, V. Paul; Hehir, Cristina A. Tan; Yazdanfar, Siavash

    2012-01-01

    Iatrogenic nerve damage is a leading cause of morbidity associated with many common surgical procedures. Complications arising from these injuries may result in loss of function and/or sensation, muscle atrophy, and chronic neuropathy. Fluorescence image-guided surgery offers a potential solution for avoiding intraoperative nerve damage by highlighting nerves that are otherwise difficult to visualize. In this work we present the development of a single camera, dual-mode laparoscope that provides near simultaneous display of white-light and fluorescence images of nerves. The capability of the instrumentation is demonstrated through imaging several types of in situ rat nerves via a nerve specific contrast agent. Full color white light and high brightness fluorescence images and video of nerves as small as 100 µm in diameter are presented. PMID:22876351

  2. Design and realization of an image mosaic system on the CCD aerial camera

    NASA Astrophysics Data System (ADS)

    Liu, Haiying; Wang, Peng; Zhu, Haibin; Li, Yan; Zhang, Shaojun

    2015-08-01

    Stitching multi-route images into a panoramic image in real time has long been difficult in aerial photography, given the very large data volumes and high accuracy requirements of multi-route flight framing CCD cameras. An automatic aerial image mosaic system based on a GPU development platform is described in this paper. The SIFT feature extraction and matching modules used for motion model parameter estimation are parallelized with CUDA, which makes it possible to stitch multiple CCD images in real time. Aerial tests proved that the mosaic system meets the user's requirements, with 99% accuracy and a 30- to 50-fold speed improvement over a conventional mosaic system.
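Once SIFT matches are available, motion model parameter estimation reduces to a least-squares fit over the correspondences. A CPU-only sketch with hypothetical matched points (the system described above runs both the matching and the fit under CUDA):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine motion model from matched feature points.
    src, dst: (N, 2) arrays of corresponding (x, y) positions, N >= 3.
    Returns the 2x3 matrix M such that dst ~ M @ [x, y, 1]."""
    src = np.asarray(src, float)
    A = np.hstack([src, np.ones((len(src), 1))])   # (N, 3) design matrix
    M, *_ = np.linalg.lstsq(A, np.asarray(dst, float), rcond=None)
    return M.T

# Hypothetical matches related by a pure translation of (+12, -7) pixels:
src = np.array([[0, 0], [100, 0], [0, 100], [50, 50]])
dst = src + np.array([12, -7])
M = fit_affine(src, dst)
```

In practice the fit is wrapped in an outlier-rejection loop such as RANSAC, since a fraction of SIFT matches between overlapping aerial frames is wrong.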

  3. Digital tomographic imaging with time-modulated pseudorandom coded aperture and Anger camera.

    PubMed

    Koral, K F; Rogers, W L; Knoll, G F

    1975-05-01

    The properties of a time-modulated pseudorandom coded aperture with digital reconstruction are compared with those of conventional collimators used in gamma-ray imaging. The theory of this coded aperture is given and the signal-to-noise ratio in an element of the reconstructed image is shown to depend on the entire source distribution. Experimental results with a preliminary 4 × 4 cm pseudorandom coded aperture and an Anger camera are presented. These results include phantom and human thyroid images and tomographic images of a rat bone scan. The experimental realization of the theoretical advantages of the time-modulated coded aperture gives reason for continuing the clinical implementation and further development of the method. PMID:1194994
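The decoding principle behind pseudorandom coded apertures can be shown in one dimension: correlating the detector data with a bipolar copy of an m-sequence collapses the code's cyclic correlation to a delta function, so the source is recovered exactly. A sketch with a 7-element sequence (the actual aperture is 2-D and time-modulated):

```python
import numpy as np

# 7-element pseudorandom (m-)sequence; 1 = open aperture element.
code = np.array([1, 1, 1, 0, 1, 0, 0])
decode = 2 * code - 1   # bipolar pattern: cyclic correlation with `code` is a delta

def encode(source):
    """Detector data: cyclic correlation of the source with the open code."""
    n = len(source)
    return np.array([sum(source[(i + j) % n] * code[j] for j in range(n))
                     for i in range(n)], float)

def reconstruct(detector):
    """Correlate detector data with the bipolar decoding pattern; for an
    m-sequence the sidelobes cancel exactly, recovering the source."""
    n = len(detector)
    img = np.array([sum(detector[(i - j) % n] * decode[j] for j in range(n))
                    for i in range(n)])
    return img / code.sum()

src = np.array([0, 0, 4, 0, 0, 1, 0], float)
rec = reconstruct(encode(src))
```

The noise behavior differs from the noiseless algebra: every source element contributes counts to every detector element, which is why the abstract notes that the reconstructed signal-to-noise ratio depends on the entire source distribution.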

  4. High performance gel imaging with a commercial single lens reflex camera

    NASA Astrophysics Data System (ADS)

    Slobodan, J.; Corbett, R.; Wye, N.; Schein, J. E.; Marra, M. A.; Coope, R. J. N.

    2011-03-01

    A high performance gel imaging system was constructed using a digital single lens reflex camera with epi-illumination to image 19 × 23 cm agarose gels with up to 10,000 DNA bands each. It was found to give equivalent performance to a laser scanner in this high throughput DNA fingerprinting application using the fluorophore SYBR Green®. The specificity and sensitivity of the imager and scanner were within 1% using the same band identification software. Low and high cost color filters were also compared and it was found that with care, good results could be obtained with inexpensive dyed acrylic filters in combination with more costly dielectric interference filters, but that very poor combinations were also possible. Methods for determining resolution, dynamic range, and optical efficiency for imagers are also proposed to facilitate comparison between systems.

  5. Conceptual design of a camera system for neutron imaging in low fusion power tokamaks

    NASA Astrophysics Data System (ADS)

    Xie, X.; Yuan, X.; Zhang, X.; Nocente, M.; Chen, Z.; Peng, X.; Cui, Z.; Du, T.; Hu, Z.; Li, T.; Fan, T.; Chen, J.; Li, X.; Zhang, G.; Yuan, G.; Yang, J.; Yang, Q.

    2016-02-01

    The basic principles for designing a camera system for neutron imaging in low fusion power tokamaks are illustrated for the case of the HL-2A tokamak device. HL-2A has an approximately circular cross section, with total neutron yields of about 10¹² n/s under 1 MW neutral beam injection (NBI) heating. The accuracy in determining the width of the neutron emission profile and the plasma vertical position are chosen as relevant parameters for design optimization. Typical neutron emission profiles and neutron energy spectra are calculated by the Monte Carlo method. A reference design is assumed, for which the direct and scattered neutron fluences are assessed and the neutron count profile of the neutron camera is obtained. Three other designs are presented for comparison. The reference design is found to have the best performance for assessing the width of peaked to broadened neutron emission profiles. It also performs well for the assessment of the vertical position.

  6. MegaCam: the new Canada-France-Hawaii Telescope wide-field imaging camera

    NASA Astrophysics Data System (ADS)

    Boulade, Olivier; Charlot, Xavier; Abbon, P.; Aune, Stephan; Borgeaud, Pierre; Carton, Pierre-Henri; Carty, M.; Da Costa, J.; Deschamps, H.; Desforge, D.; Eppell, Dominique; Gallais, Pascal; Gosset, L.; Granelli, Remy; Gros, Michel; de Kat, Jean; Loiseau, Denis; Ritou, J.-.; Rouss, Jean Y.; Starzynski, Pierre; Vignal, Nicolas; Vigroux, Laurent G.

    2003-03-01

    MegaCam is an imaging camera with a 1 square degree field of view for the new prime focus of the 3.6 meter Canada-France-Hawaii Telescope. This instrument will mainly be used for large deep surveys ranging from a few to several thousands of square degrees in sky coverage and from 24 to 28.5 in magnitude. The camera is built around a CCD mosaic approximately 30 cm square, made of 40 large thinned CCD devices for a total of 20 K x 18 K pixels. It uses a custom CCD controller, a closed cycle cryocooler based on a pulse tube, a 1 m diameter half-disk as a shutter, a juke-box for the selection of the filters, and programmable logic controllers and fieldbus network to control the different subsystems. The instrument was delivered to the observatory on June 10, 2002 and first light is scheduled in early October 2002.

  7. Preliminary Monte Carlo study of coded aperture imaging with a CZT gamma camera system for scintimammography

    NASA Astrophysics Data System (ADS)

    Alnafea, M.; Wells, K.; Spyrou, N. M.; Guy, M.

    2007-04-01

    A solid-state Cadmium Zinc Telluride (CZT) gamma camera in conjunction with a Modified Uniformly Redundant Arrays (MURAs) Coded Aperture (CA) scintimammography (SM) system has been investigated using Monte Carlo simulation. The motivation is to utilise the enhanced energy resolution of CZT detectors compared to standard scintillation-based gamma cameras for scatter rejection. The effects of variations in lesion sizes and tumour-to-background-ratio were simulated in a 3D phantom geometry. Despite the enhanced energy resolution, we find that the open field geometry associated with the MURA CA imaging nonetheless requires shielding from non-specific background tracer uptake, and correction for out-of-plane activity. We find that a TBR of 20:1 is required for visualising a 10 mm wide lesion.

  8. Development of mid-wavelength and long-wavelength megapixel portable QWIP imaging cameras

    NASA Astrophysics Data System (ADS)

    Gunapala, S. D.; Bandara, S. V.; Liu, J. K.; Hill, C. J.; Rafol, S. B.; Mumolo, J. M.; Trinh, J. T.; Tidrow, M. Z.; LeVan, P. D.

    2005-10-01

    Mid-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) megapixel quantum well infrared photodetector (QWIP) focal plane arrays have been demonstrated with excellent imaging performance. The MWIR detector array has shown noise equivalent temperature difference (NETD) of 17 mK at 95 K operating temperature with f/2.5 optics at 300 K background and the LWIR detector array has given NETD of 13 mK at 70 K operating temperature with the same optical and background conditions as the MWIR array. Two portable prototype infrared cameras were fabricated using these two focal planes. The MWIR and the LWIR prototype cameras with similar optics have shown background limited performance (BLIP) at 90 K and 70 K operating temperatures respectively, at 300 K background. In this paper, we will discuss their performance in quantum efficiency, NETD, uniformity, and operability.

  9. Megapixel imaging camera for expanded H⁻ beam measurements

    SciTech Connect

    Simmons, J.E.; Lillberg, J.W.; McKee, R.J.; Slice, R.W.; Torrez, J.H.; McCurnin, T.W.; Sanchez, P.G.

    1994-02-01

    A charge coupled device (CCD) imaging camera system has been developed as part of the Ground Test Accelerator project at the Los Alamos National Laboratory to measure the properties of a large diameter, neutral particle beam. The camera is designed to operate in the accelerator vacuum system for extended periods of time. It would normally be cooled to reduce dark current. The CCD contains 1024 × 1024 pixels with a pixel size of 19 × 19 µm² and with four-phase parallel clocking and two-phase serial clocking. The serial clock rate is 2.5×10⁵ pixels per second. Clock sequence and timing are controlled by an external logic-word generator. The DC bias voltages are likewise located externally. The camera contains circuitry to generate the analog clocks for the CCD and also contains the output video signal amplifier. Reset switching noise is removed by an external signal processor that employs delay elements to provide noise suppression by the method of double-correlated sampling. The video signal is digitized to 12 bits in an analog to digital converter (ADC) module controlled by a central processor module. Both modules are located in a VME-type computer crate that communicates via ethernet with a separate workstation where overall control is exercised and image processing occurs. Under cooled conditions the camera shows good linearity with a dynamic range of 2000 and with dark noise fluctuations of about ±1/2 ADC count. Full well capacity is about 5×10⁵ electron charges.
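The double-correlated sampling mentioned above subtracts each pixel's post-reset sample from its post-charge-transfer sample, so the random reset (kTC) offset, which is common to both samples, cancels. A toy numeric sketch with invented levels:

```python
import numpy as np

def correlated_double_sampling(reset_level, video_level):
    """Double-correlated sampling: subtract the post-reset sample from the
    post-charge-transfer sample for each pixel read, cancelling the reset
    switching noise common to both samples."""
    return np.asarray(video_level, float) - np.asarray(reset_level, float)

rng = np.random.default_rng(0)
ktc = rng.normal(0.0, 5.0, 1000)     # random reset offset, one per pixel read
true_signal = 100.0
reset = 500.0 + ktc                  # sample taken just after reset
video = 500.0 + ktc + true_signal    # sample taken after charge transfer
cds = correlated_double_sampling(reset, video)
```

In the hardware described above the subtraction is done in analog electronics with delay elements before digitization, but the cancellation principle is the same.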

  10. Loop closure detection by algorithmic information theory: implemented on range and camera image data.

    PubMed

    Ravari, Alireza Norouzzadeh; Taghirad, Hamid D

    2014-10-01

    In this paper the problem of loop closing from depth or camera image information in an unknown environment is investigated. A sparse model is constructed from a parametric dictionary for every range or camera image as mobile robot observations. In contrast to high-dimensional feature-based representations, in this model the dimension of the sensor measurements' representations is reduced. Although loop closure detection is a clustering problem in a high-dimensional space, little attention has been paid to the curse of dimensionality in existing state-of-the-art algorithms. In this paper, a representation is developed from a sparse model of images, with a lower dimension than the original sensor observations. Exploiting algorithmic information theory, the representation is developed such that it is invariant to geometric transformations in the sense of Kolmogorov complexity. A universal normalized metric is used for comparison of complexity-based representations of image models. Finally, a distinctive property of the normalized compression distance is exploited for detecting similar places and rejecting incorrect loop closure candidates. Experimental results show the efficiency and accuracy of the proposed method in comparison to state-of-the-art algorithms and some recently proposed methods. PMID:24968363
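The normalized compression distance is computable with any off-the-shelf compressor standing in for the uncomputable Kolmogorov complexity. A sketch using zlib (the paper compares sparse image models, not raw byte strings as here):

```python
import os
import zlib

def ncd(x, y):
    """Normalized compression distance:
    (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    with C(.) the compressed length. Values near 0 indicate closely related
    data; values near 1 indicate unrelated data."""
    c = lambda b: len(zlib.compress(b, 9))
    cx, cy = c(x), c(y)
    return (c(x + y) - min(cx, cy)) / max(cx, cy)

scene = b"the quick brown fox jumps over the lazy dog " * 50
same_place = ncd(scene, scene)               # low: candidate loop closure
other_place = ncd(scene, os.urandom(2000))   # near 1: reject the candidate
```

Thresholding this distance is the mechanism for accepting similar places and rejecting incorrect loop closure candidates.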

  11. Multi-frame image processing with panning cameras and moving subjects

    NASA Astrophysics Data System (ADS)

    Paolini, Aaron; Humphrey, John; Curt, Petersen; Kelmelis, Eric

    2014-06-01

    Imaging scenarios commonly involve erratic, unpredictable camera behavior or subjects that are prone to movement, complicating multi-frame image processing techniques. To address these issues, we developed three techniques that can be applied to multi-frame image processing algorithms in order to mitigate the adverse effects observed when cameras are panning or subjects within the scene are moving. We provide a detailed overview of the techniques and discuss the applicability of each to various movement types. In addition to this, we evaluated algorithm efficacy with demonstrated benefits using field test video, which has been processed using our commercially available surveillance product. Our results show that algorithm efficacy is significantly improved in common scenarios, expanding our software's operational scope. Our methods introduce little computational burden, enabling their use in real-time and low-power solutions, and are appropriate for long observation periods. Our test cases focus on imaging through turbulence, a common use case for multi-frame techniques. We present results of a field study designed to test the efficacy of these techniques under expanded use cases.

  12. Optimization of camera exposure durations for multi-exposure speckle imaging of the microcirculation.

    PubMed

    Kazmi, S M Shams; Balial, Satyajit; Dunn, Andrew K

    2014-07-01

    Improved Laser Speckle Contrast Imaging (LSCI) blood flow analyses that incorporate inverse models of the underlying laser-tissue interaction have been used to develop more quantitative implementations of speckle flowmetry such as Multi-Exposure Speckle Imaging (MESI). In this paper, we determine the optimal camera exposure durations required for obtaining flow information with accuracy comparable to that of the prevailing MESI implementation utilized in recent in vivo rodent studies. A looping leave-one-out (LOO) algorithm was used to identify exposure subsets which were analyzed for accuracy against flows obtained from analysis with the original full exposure set over 9 animals comprising n = 314 regional flow measurements. From the 15 original exposures, 6 exposures were found using the LOO process to provide comparable accuracy, defined as being no more than 10% deviant from the original flow measurements. The optimal subset of exposures provides a basis set of camera durations for speckle flowmetry studies of the microcirculation and confers a two-fold faster acquisition rate and a 28% reduction in processing time without sacrificing accuracy. Additionally, the optimization process can be used to identify further reductions in the exposure subsets for tailoring imaging over less expansive flow distributions to enable even faster imaging. PMID:25071956
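The leave-one-out pruning loop can be sketched generically. In this toy stand-in a simple mean replaces the MESI flow fit and the exposure values are illustrative, not the study's 15 durations:

```python
import numpy as np

def loo_prune(exposures, estimate, reference, max_dev=0.10):
    """Greedy leave-one-out pruning: at each step drop the exposure whose
    removal leaves `estimate(subset)` closest to `reference`, as long as the
    fractional deviation stays within `max_dev`; stop when no removal does."""
    subset = list(exposures)
    while len(subset) > 1:
        best, best_dev = None, max_dev
        for e in subset:
            trial = [x for x in subset if x != e]
            dev = abs(estimate(trial) - reference) / abs(reference)
            if dev <= best_dev:
                best, best_dev = e, dev
        if best is None:
            break
        subset.remove(best)
    return subset

# Toy example: the "flow estimate" is just the mean over the exposure set.
exposures = [0.05, 0.1, 0.25, 0.5, 0.75, 1, 2, 3, 4, 5, 7.5, 10, 15, 25, 40]
reference = float(np.mean(exposures))
kept = loo_prune(exposures, lambda s: float(np.mean(s)), reference)
```

In the real pipeline `estimate` would be the MESI inverse-model fit evaluated over all regional flow measurements, and the 10% bound corresponds to the paper's deviation criterion.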

  13. Optimization of camera exposure durations for multi-exposure speckle imaging of the microcirculation

    PubMed Central

    Kazmi, S. M. Shams; Balial, Satyajit; Dunn, Andrew K.

    2014-01-01

    Improved Laser Speckle Contrast Imaging (LSCI) blood flow analyses that incorporate inverse models of the underlying laser-tissue interaction have been used to develop more quantitative implementations of speckle flowmetry such as Multi-Exposure Speckle Imaging (MESI). In this paper, we determine the optimal camera exposure durations required for obtaining flow information with accuracy comparable to that of the prevailing MESI implementation utilized in recent in vivo rodent studies. A looping leave-one-out (LOO) algorithm was used to identify exposure subsets which were analyzed for accuracy against flows obtained from analysis with the original full exposure set over 9 animals comprising n = 314 regional flow measurements. From the 15 original exposures, 6 exposures were found using the LOO process to provide comparable accuracy, defined as being no more than 10% deviant from the original flow measurements. The optimal subset of exposures provides a basis set of camera durations for speckle flowmetry studies of the microcirculation and confers a two-fold faster acquisition rate and a 28% reduction in processing time without sacrificing accuracy. Additionally, the optimization process can be used to identify further reductions in the exposure subsets for tailoring imaging over less expansive flow distributions to enable even faster imaging. PMID:25071956

  14. First responder thermal imaging cameras: development of performance metrics and test methods

    NASA Astrophysics Data System (ADS)

    Amon, Francine; Hamins, Anthony

    2006-05-01

    Thermal imaging cameras are rapidly becoming integral equipment for first responders for use in structure fires and other emergencies. Currently, there are no standardized performance metrics or test methods available to the users and manufacturers of these instruments. The Building and Fire Research Laboratory at the National Institute of Standards and Technology is developing performance evaluation techniques that combine aspects of conventional metrics such as the contrast transfer function (CTF), the minimum resolvable temperature difference (MRTD), and noise equivalent temperature difference (NETD) with test methods that accommodate the special conditions in which first responders use these instruments. First responders typically use thermal imagers when their vision is obscured due to the presence of smoke, dust, fog, and/or the lack of visible light, and in cases when the ambient temperature is uncomfortably hot. Testing has shown that image contrast, as measured using a CTF calculation, suffers when a target is viewed through obscuring media. A proposed method of replacing the trained observer required for the conventional MRTD test method with a CTF calculation is presented. A performance metric that combines thermal resolution with target temperature and sensitivity mode shifts is also being investigated. Results of this work will support the establishment of standardized performance metrics and test methods for thermal imaging cameras that are meaningful to the first responders that use them.
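The CTF-style measurement described above reduces, for the intensity profile across a bar target, to the Michelson contrast. A sketch with made-up profiles illustrating the contrast loss through obscuring media:

```python
import numpy as np

def michelson_contrast(profile):
    """Contrast of an intensity profile across a bar target:
    (Imax - Imin) / (Imax + Imin); 1 is a perfect target, 0 means the
    target is invisible."""
    p = np.asarray(profile, float)
    return (p.max() - p.min()) / (p.max() + p.min())

clear = [200, 200, 20, 20, 200, 200, 20, 20]   # bars viewed in clear air
smoky = [120, 120, 80, 80, 120, 120, 80, 80]   # same bars through smoke
c_clear = michelson_contrast(clear)
c_smoky = michelson_contrast(smoky)
```

Replacing the trained observer of the MRTD test with such a calculation makes the metric objective and repeatable, which is the substitution the abstract proposes.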

  15. Contactless multiple wavelength photoplethysmographic imaging: a first step toward "SpO2 camera" technology.

    PubMed

    Wieringa, F P; Mastik, F; van der Steen, A F W

    2005-08-01

    We describe a route toward contactless imaging of arterial oxygen saturation (SpO2) distribution within tissue, based upon detection of a two-dimensional matrix of spatially resolved optical plethysmographic signals at different wavelengths. As a first step toward SpO2-imaging we built a monochrome CMOS-camera with apochromatic lens and 3λ LED ringlight (λ1 = 660 nm, λ2 = 810 nm, λ3 = 940 nm; 100 LEDs per wavelength). We acquired movies at three wavelengths while simultaneously recording ECG and respiration for seven volunteers. We repeated this experiment for one volunteer at increased frame rate, additionally recording the pulse wave of a pulse oximeter. Movies were processed by dividing each image frame into discrete Regions of Interest (ROIs), averaging 10 × 10 raw pixels each. For each ROI, pulsatile variation over time was assigned to a matrix of ROI-pixel time traces with individual Fourier spectra. Photoplethysmograms correlated well with respiration reference traces at three wavelengths. Increased frame rates revealed weaker pulsations (main frequency components 0.95 and 1.9 Hz) superimposed upon respiration-correlated photoplethysmograms, which were heartbeat-related at three wavelengths. We acquired spatially resolved heartbeat-related photoplethysmograms at multiple wavelengths using a remote camera. This feasibility study shows potential for non-contact 2-D imaging reflection-mode pulse oximetry. Clinical devices, however, require further development. PMID:16133912
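The ROI-averaging and Fourier steps can be sketched as a block mean over each frame followed by a spectrum per ROI trace. The data here are synthetic (a 1 Hz heartbeat-like modulation on a flat background; frame rate and sizes are illustrative):

```python
import numpy as np

def roi_time_traces(frames, roi=10):
    """Average each frame over non-overlapping roi x roi blocks, yielding one
    photoplethysmographic time trace per ROI. frames: (T, H, W) array."""
    T, H, W = frames.shape
    h, w = H // roi, W // roi
    blocks = frames[:, :h * roi, :w * roi].reshape(T, h, roi, w, roi)
    return blocks.mean(axis=(2, 4))

# Synthetic movie: 1 Hz pulsatile modulation on a flat background, 30 fps.
fps, T = 30, 128
t = np.arange(T) / fps
frames = 100.0 + 5.0 * np.sin(2 * np.pi * 1.0 * t)[:, None, None] * np.ones((40, 40))
traces = roi_time_traces(frames)                     # shape (T, 4, 4)
trace = traces[:, 0, 0] - traces[:, 0, 0].mean()     # remove DC level
freqs = np.fft.rfftfreq(T, 1.0 / fps)
peak_hz = freqs[np.abs(np.fft.rfft(trace)).argmax()]
```

Averaging 10 × 10 pixels per ROI trades spatial resolution for signal-to-noise, which is what makes the weak heartbeat-related pulsation detectable above camera noise.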

  16. Real-Time On-Board Processing Validation of MSPI Ground Camera Images

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Werne, Thomas A.; Bekker, Dmitriy L.

    2010-01-01

    The Earth Sciences Decadal Survey identifies a multiangle, multispectral, high-accuracy polarization imager as one requirement for the Aerosol-Cloud-Ecosystem (ACE) mission. JPL has been developing a Multiangle SpectroPolarimetric Imager (MSPI) as a candidate to fill this need. A key technology development needed for MSPI is on-board signal processing to calculate polarimetry data as imaged by each of the 9 cameras forming the instrument. With funding from NASA's Advanced Information Systems Technology (AIST) Program, JPL is solving the real-time data processing requirements to demonstrate, for the first time, how signal data at 95 Mbytes/sec over 16 channels for each of the 9 multiangle cameras in the spaceborne instrument can be reduced on-board to 0.45 Mbytes/sec. This will produce the intensity and polarization data needed to characterize aerosol and cloud microphysical properties. Using the Xilinx Virtex-5 FPGA including PowerPC440 processors we have implemented a least squares fitting algorithm that extracts intensity and polarimetric parameters in real-time, thereby substantially reducing the image data volume for spacecraft downlink without loss of science information.
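Extracting intensity and polarimetric parameters by least squares can be illustrated with a generic rotating-analyzer model. This Malus-law sketch is a stand-in for the idea only, not MSPI's actual photoelastic-modulator signal model:

```python
import numpy as np

def fit_stokes(angles, intensities):
    """Least-squares retrieval of the linear Stokes parameters (S0, S1, S2)
    from rotating-analyzer samples, using the model
    I(theta) = 0.5 * (S0 + S1*cos(2*theta) + S2*sin(2*theta))."""
    A = 0.5 * np.column_stack([np.ones_like(angles),
                               np.cos(2 * angles),
                               np.sin(2 * angles)])
    S, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return S

# Synthetic samples from a known polarization state.
theta = np.linspace(0, np.pi, 16, endpoint=False)
I = 0.5 * (2.0 + 0.6 * np.cos(2 * theta) - 0.4 * np.sin(2 * theta))
S = fit_stokes(theta, I)
```

Fitting many raw samples down to a few parameters per pixel is also where the data-volume reduction comes from: only the fitted parameters, not the raw modulated samples, need to be downlinked.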

  17. Full 3-D cluster-based iterative image reconstruction tool for a small animal PET camera

    NASA Astrophysics Data System (ADS)

    Valastyán, I.; Imrek, J.; Molnár, J.; Novák, D.; Balkay, L.; Emri, M.; Trón, L.; Bükki, T.; Kerek, A.

    2007-02-01

    Iterative reconstruction methods are commonly used to obtain images with high resolution and good signal-to-noise ratio in nuclear imaging. The aim of this work was to develop a scalable, fast, cluster-based, fully 3-D iterative image reconstruction package for our small animal PET camera, the miniPET. The reconstruction package is developed to determine the 3-D radioactivity distribution from list-mode data sets and it can also simulate noise-free projections of digital phantoms. We separated the system matrix generation and the fully 3-D iterative reconstruction process. As the detector geometry is fixed for a given camera, the system matrix describing this geometry is calculated only once and used for every image reconstruction, making the process much faster. The Poisson and random noise sensitivity of the ML-EM iterative algorithm were studied for our small animal PET system with the help of the simulation and reconstruction tool. The reconstruction tool has also been tested with data collected by the miniPET from line- and cylinder-shaped phantoms and also a rat.
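The ML-EM update such a reconstruction package iterates can be sketched with a toy system matrix (two voxels, three detector bins, noise-free data):

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """ML-EM update for emission tomography:
    x <- x / (A^T 1) * A^T (y / (A x)); x stays non-negative by construction.
    A: (n_bins, n_voxels) system matrix, y: measured counts per bin."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])        # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                        # forward projection
        ratio = np.divide(y, proj, out=np.zeros_like(proj), where=proj > 0)
        x = x / sens * (A.T @ ratio)        # back-project ratios and update
    return x

# Tiny noise-free example with a known activity distribution.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
x_true = np.array([4.0, 1.0])
x_hat = mlem(A, A @ x_true)
```

Because every iteration reuses the same system matrix `A`, precomputing it once per camera geometry, as described above, removes the dominant setup cost from each reconstruction.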

  18. Superresolution imaging system on innovative localization microscopy technique with commonly using dyes and CMOS camera

    NASA Astrophysics Data System (ADS)

    Dudenkova, V.; Zakharov, Yu.

    2015-05-01

    Optical methods for studying biological tissue and cells at the micro- and nanoscale now step beyond the diffraction limit, and it is single-molecule localization techniques that achieve the highest spatial resolution. One of those techniques, called bleaching/blinking assisted localization microscopy (BaLM), relies on the intrinsic bleaching and blinking behavior characteristic of commonly used fluorescent probes. This feature is the basis of BaLM image series acquisition and data analysis. In our work, the blinking of a single fluorescent spot against a background of others is revealed by subtracting successive frames of the time series. Digital estimation then gives the center of the spot as the location of the fluorescent molecule, which is transferred to a second image whose resolution is set by the accuracy of the center localization, building up part of an image with improved resolution. This approach tolerates overlapping fluorophores and does not require single-photon sensitivity, so we use an 8.8-megapixel CMOS camera with a very small (1.55 µm) pixel size. This instrumentation, based on a Zeiss Axioscope 2 FS MOT, transfers the image from the object plane to the sensor at a scale below 100 nm/pixel with a 20x objective, giving the same resolution and five times the field of view of an EMCCD camera with 6 µm pixels. To optimize the excitation power, frame rate, and camera gain, we made estimates that take fluorophore behavior and equipment characteristics into account. Finally, we obtain clearly distinguishable details of the sample across the processed field of view.
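The frame-subtraction and localization steps of such BaLM-style processing can be sketched on a synthetic two-frame example (invented intensities; a real pipeline would also fit a PSF model rather than a bare centroid):

```python
import numpy as np

def blink_localizations(frames, threshold):
    """Subtract successive frames to expose fluorophores that switch on
    between frames, then return the weighted-centroid (sub-pixel) position
    of each detected event."""
    events = []
    for d in np.diff(frames.astype(float), axis=0):
        mask = d > threshold
        if mask.any():
            ys, xs = np.nonzero(mask)
            w = d[ys, xs]
            events.append(((ys * w).sum() / w.sum(), (xs * w).sum() / w.sum()))
    return events

# Two frames: a fluorophore turns on between them, spread over two pixels.
frames = np.zeros((2, 8, 8))
frames[1, 3, 4] = 30.0
frames[1, 4, 4] = 10.0
events = blink_localizations(frames, threshold=5.0)
```

The localization accuracy of each event, not the pixel pitch, sets the resolution of the accumulated super-resolved image, which is why a small-pixel CMOS sensor without single-photon sensitivity suffices.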

  19. Temperature dependent operation of PSAPD-based compact gamma camera for SPECT imaging

    PubMed Central

    Kim, Sangtaek; McClish, Mickel; Alhassen, Fares; Seo, Youngho; Shah, Kanai S.; Gould, Robert G.

    2011-01-01

    We investigated the dependence of image quality on the temperature of a position sensitive avalanche photodiode (PSAPD)-based small animal single photon emission computed tomography (SPECT) gamma camera with a CsI:Tl scintillator. Currently, nitrogen gas cooling is preferred to operate PSAPDs in order to minimize the dark current shot noise. Being able to operate a PSAPD at a relatively high temperature (e.g., 5 °C) would allow a more compact and simple cooling system for the PSAPD. In our investigation, the temperature of the PSAPD was controlled by varying the flow of cold nitrogen gas through the PSAPD module and varied from −40 °C to 20 °C. Three experiments were performed to demonstrate the performance variation over this temperature range. The point spread function (PSF) of the gamma camera was measured at various temperatures, showing variation of the full-width-half-maximum (FWHM) of the PSF. In addition, a 99mTc-pertechnetate (140 keV) flood source was imaged and the visibility of the scintillator segmentation (16 × 16 array, 8 mm × 8 mm area, 400 µm pixel size) at different temperatures was evaluated. Comparison of image quality was made at −25 °C and 5 °C using a mouse heart phantom filled with an aqueous solution of 99mTc-pertechnetate and imaged using a 0.5 mm pinhole collimator made of tungsten. The reconstructed image quality of the mouse heart phantom at 5 °C was degraded in comparison with that at −25 °C. However, the defect and structure of the mouse heart phantom were clearly observed, showing the feasibility of operating PSAPDs for SPECT imaging at 5 °C, a temperature that would not require nitrogen cooling. All PSAPD evaluations were conducted with an applied bias voltage that allowed the highest gain at a given temperature. PMID:24465051

  20. Airborne imaging for heritage documentation using the Fotokite tethered flying camera

    NASA Astrophysics Data System (ADS)

    Verhoeven, Geert; Lupashin, Sergei; Briese, Christian; Doneus, Michael

    2014-05-01

Since the beginning of aerial photography, researchers have used all kinds of devices (from pigeons, kites, poles, and balloons to rockets) to take still cameras aloft and remotely gather aerial imagery. To date, many of these unmanned devices are still used for what has been referred to as Low-Altitude Aerial Photography or LAAP. In addition to these more traditional camera platforms, radio-controlled (multi-)copter platforms have recently added a new aspect to LAAP. Although model airplanes have been around for several decades, the decreasing cost and the increasing functionality and stability of ready-to-fly multi-copter systems have proliferated their use among non-hobbyists. As such, they have become a very popular tool for aerial imaging. The overwhelming number of currently available brands and types (heli-, dual-, tri-, quad-, hexa-, octo-, dodeca-, deca-hexa and deca-octocopters), together with the wide variety of navigation options (e.g. altitude and position hold, waypoint flight) and camera mounts, indicates that these platforms are here to stay for some time. Given the multitude of still camera types and the image quality they are currently capable of, endless combinations of low- and high-cost LAAP solutions are available. In addition, LAAP allows for the exploitation of new imaging techniques, as it is often only a matter of lifting the appropriate device (e.g. video cameras, thermal frame imagers, hyperspectral line sensors). Archaeologists were among the first to adopt this technology, as it provided them with a means to easily acquire essential data from a unique point of view, whether for simple illustration purposes of standing historic structures or to compute three-dimensional (3D) models and orthophotographs from excavation areas. However, even very cheap multi-copter models require certain skills to pilot them safely. Additionally, malfunction or overconfidence might lift these devices to altitudes where they can interfere with manned aircraft. 
As such, the safe operation of these devices is still an issue, certainly when flying over locations that can be crowded (such as students on excavations or tourists walking around historic places). As the future of UAS regulation remains unclear, this talk presents an alternative approach to aerial imaging: the Fotokite. Developed at ETH Zürich, the Fotokite is a tethered flying camera that is essentially a multi-copter connected to the ground with a taut tether to achieve controlled flight. Crucially, it relies solely on onboard IMU (Inertial Measurement Unit) measurements to fly, launches in seconds, and is not classified as a UAS (Unmanned Aerial System) in, for example, the latest FAA (Federal Aviation Administration) UAS proposal. As a result it may be used for imaging cultural heritage in a variety of environments and settings with minimal training by non-experienced pilots. Furthermore, it is subject to less extensive certification, regulation and import/export restrictions, making it a viable solution for use at a greater range of sites than traditional methods. Unlike a balloon or a kite it is not subject to particular weather conditions and, thanks to active stabilization, is capable of a variety of intelligent flight modes. Finally, it is compact and lightweight, making it easy to transport and deploy, and its lack of reliance on GNSS (Global Navigation Satellite System) makes it possible to use in urban, overbuilt areas. After outlining its operating principles, the talk will present some archaeological case studies in which the Fotokite was used, hereby assessing its capabilities compared to the conventional UASs on the market.

  1. Toward Real-time quantum imaging with a single pixel camera

    SciTech Connect

    Lawrie, Benjamin J; Pooser, Raphael C

    2013-01-01

    We present a workbench for the study of real-time quantum imaging by measuring the frame-by-frame quantum noise reduction of multi-spatial-mode twin beams generated by four wave mixing in Rb vapor. Exploiting the multiple spatial modes of this squeezed light source, we utilize spatial light modulators to selectively transmit macropixels of quantum correlated modes from each of the twin beams to a high quantum efficiency balanced detector. In low-light-level imaging applications, the ability to measure the quantum correlations between individual spatial modes and macropixels of spatial modes with a single pixel camera will facilitate compressive quantum imaging with sensitivity below the photon shot noise limit.

  2. Noise modeling and estimation in image sequences from thermal infrared cameras

    NASA Astrophysics Data System (ADS)

    Alparone, Luciano; Corsini, Giovanni; Diani, Marco

    2004-11-01

In this paper we present an automated procedure devised to measure noise variance and correlation from a sequence, either temporal or spectral, of digitized images acquired by an incoherent imaging detector. The fundamental assumption is that the noise is signal-independent and stationary in each frame, but may be non-stationary across the sequence of frames. The idea is to detect areas within bivariate scatterplots of local statistics that correspond to statistically homogeneous pixels. After that, the noise PDF, modeled as a parametric generalized Gaussian function, is estimated from the homogeneous pixels. Results obtained by applying the noise model to images taken by an IR camera operated in different environmental conditions are presented and discussed. They demonstrate that the noise is heavy-tailed (tails longer than those of a Gaussian PDF) and spatially autocorrelated. Temporal correlation has been investigated as well and found to depend on the frame rate and, to a small extent, on the wavelength of the thermal radiation.
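The local-statistics idea above can be sketched in a few lines: tile the frame, compute per-tile statistics, and treat the lowest-variance tiles as homogeneous so their spread reflects noise alone. The block size, quantile threshold, and function name below are illustrative assumptions, not the authors' exact scatterplot procedure:

```python
import numpy as np

def estimate_noise_sigma(frame, block=8, quantile=0.1):
    """Estimate signal-independent noise std from the most homogeneous tiles.

    Tiles the frame, computes the sample std of each tile, and averages
    the lowest `quantile` fraction of stds, on the assumption that those
    tiles contain no scene structure (noise only).
    """
    h, w = frame.shape
    stds = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            tile = frame[i:i + block, j:j + block]
            stds.append(tile.std(ddof=1))
    stds = np.sort(np.array(stds))
    k = max(1, int(len(stds) * quantile))
    return stds[:k].mean()
```

Note that selecting the lowest-std tiles biases the estimate slightly low; a fit to the full scatterplot of (local mean, local std), as in the paper, avoids this.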

  3. First experience of DaTSCAN imaging using cadmium-zinc-telluride gamma camera SPECT.

    PubMed

    Farid, Karim; Queneau, Mathieu; Guernou, Mohamed; Lussato, David; Poullias, Xavier; Petras, Slavomir; Caillat-Vigneron, Nadine; Songy, Bernard

    2012-08-01

We report our first experience of brain DaTSCAN SPECT imaging using a cadmium-zinc-telluride gamma camera (CZT-GC) in 2 cases: a 64-year-old patient suffering from essential tremor and a 73-year-old patient presenting with atypical bilateral extrapyramidal syndrome. In both cases, 2 different acquisitions were performed and compared, using a double-head Anger-GC, followed immediately by a second acquisition on the CZT-GC. There were no significant visual differences between the images generated by the two GCs. Our first results suggest that DaTSCAN SPECT is feasible on a CZT-GC, allowing both injected dose and acquisition time reductions without compromising image quality. This experience needs to be evaluated in larger series. PMID:22785531

  4. MEMS-based thermally-actuated image stabilizer for cellular phone camera

    NASA Astrophysics Data System (ADS)

    Lin, Chun-Ying; Chiou, Jin-Chern

    2012-11-01

This work develops an image stabilizer (IS) that is fabricated using micro-electro-mechanical system (MEMS) technology and is designed to counteract hand-induced vibration in cellular phone cameras. The proposed IS has dimensions of 8.8 × 8.8 × 0.3 mm³ and is strong enough to suspend an image sensor. The processes utilized to fabricate the IS include inductively coupled plasma (ICP) processes, reactive ion etching (RIE) processes and the flip-chip bonding method. The IS is designed so that the electrical signals from the suspended image sensor are routed out through signal output beams, and the maximum actuating distance of the stage exceeds 24.835 µm when the driving current is 155 mA. By integrating the MEMS device with the designed controller, the proposed IS can decrease hand tremor by 72.5%.

  5. Measuring multivariate subjective image quality for still and video cameras and image processing system components

    NASA Astrophysics Data System (ADS)

    Nyman, Göte; Leisti, Tuomas; Lindroos, Paul; Radun, Jenni; Suomi, Sini; Virtanen, Toni; Olives, Jean-Luc; Oja, Joni; Vuori, Tero

    2008-01-01

    The subjective quality of an image is a non-linear product of several, simultaneously contributing subjective factors such as the experienced naturalness, colorfulness, lightness, and clarity. We have studied subjective image quality by using a hybrid qualitative/quantitative method in order to disclose relevant attributes to experienced image quality. We describe our approach in mapping the image quality attribute space in three cases: still studio image, video clips of a talking head and moving objects, and in the use of image processing pipes for 15 still image contents. Naive observers participated in three image quality research contexts in which they were asked to freely and spontaneously describe the quality of the presented test images. Standard viewing conditions were used. The data shows which attributes are most relevant for each test context, and how they differentiate between the selected image contents and processing systems. The role of non-HVS based image quality analysis is discussed.

  6. Evaluation of a multistage CdZnTe Compton camera for prompt γ imaging for proton therapy

    NASA Astrophysics Data System (ADS)

    McCleskey, M.; Kaye, W.; Mackin, D. S.; Beddar, S.; He, Z.; Polf, J. C.

    2015-06-01

A new detector system, Polaris J from H3D, has been evaluated for its potential application as a Compton camera (CC) imaging device for prompt γ rays (PGs) emitted during proton radiation therapy (RT) for the purpose of dose range verification. This detector system consists of four independent CdZnTe detector stages and a coincidence module, allowing the user to construct a Compton camera in different geometrical configurations and to accept both double and triple scatter events. Energy resolution for the 662 keV line from 137Cs was found to be 9.7 keV FWHM. The raw absolute efficiencies for double and triple scatter events were 2.2 × 10^-5 and 5.8 × 10^-7, respectively, for γ rays from a 60Co source. The position resolution for the reconstruction of a point source from the measured CC data was about 2 mm. Overall, due to the low efficiency of the Polaris J CC, the current system was deemed not viable for imaging PGs emitted during proton RT treatment delivery. However, using a validated Monte Carlo model of the CC, we found that by increasing the size of the detectors and placing them in a two stage configuration, the efficiency could be increased to a level to make PG imaging possible during proton RT.

  7. SWIR Geiger-mode APD detectors and cameras for 3D imaging

    NASA Astrophysics Data System (ADS)

    Itzler, Mark A.; Entwistle, Mark; Krishnamachari, Uppili; Owens, Mark; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir

    2014-06-01

The operation of avalanche photodiodes in Geiger mode by arming these detectors above their breakdown voltage provides high-performance single photon detection in a robust solid-state device platform. Moreover, these devices are ideally suited for integration into large format focal plane arrays enabling single photon imaging. We describe the design and performance of short-wave infrared 3D imaging cameras with focal plane arrays (FPAs) based on Geiger-mode avalanche photodiodes (GmAPDs) with single photon sensitivity for laser radar imaging applications. The FPA pixels incorporate InP/InGaAs(P) GmAPDs for the detection of single photons with high efficiency and low dark count rates. We present results and attributes of fully integrated camera sub-systems with 32 × 32 and 128 × 32 formats, which have 100 µm pitch and 50 µm pitch, respectively. We also address the sensitivity of the fundamental GmAPD detectors to radiation exposure, including recent results that correlate detector active region volume to sustainable radiation tolerance levels.

  8. Static laser speckle contrast analysis for noninvasive burn diagnosis using a camera-phone imager.

    PubMed

    Ragol, Sigal; Remer, Itay; Shoham, Yaron; Hazan, Sivan; Willenz, Udi; Sinelnikov, Igor; Dronov, Vladimir; Rosenberg, Lior; Bilenca, Alberto

    2015-08-01

Laser speckle contrast analysis (LASCA) is an established optical technique for accurate widefield visualization of relative blood perfusion when no or minimal scattering from static tissue elements is present, as demonstrated, for example, in LASCA imaging of the exposed cortex. However, when LASCA is applied to diagnosis of burn wounds, light is backscattered from both moving blood and static burn scatterers, and thus the spatial speckle contrast includes both perfusion and nonperfusion components and cannot be straightforwardly associated with blood flow. We extract from speckle contrast images of burn wounds the nonperfusion (static) component and discover that it conveys useful information on the ratio of static-to-dynamic scattering composition of the wound, enabling identification of burns of different depth in a porcine model in vivo within the first 48 h postburn. Our findings suggest that relative changes in the static-to-dynamic scattering composition of burns can dominate relative changes in blood flow for burns of different severity. Unlike conventional LASCA systems that employ scientific or industrial-grade cameras, our LASCA system is realized here using a camera phone, showing the potential to enable LASCA-based burn diagnosis with a simple imager. PMID:26271055
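For context, the spatial speckle contrast that LASCA starts from is K = σ/⟨I⟩ computed over a small sliding window. The sketch below shows only that standard computation, not the authors' static/dynamic decomposition; the window size and function name are assumptions:

```python
import numpy as np

def speckle_contrast_map(img, win=7):
    """Spatial speckle contrast K = std/mean over a sliding window.

    Low K indicates blurred speckle from moving scatterers (flow);
    high K indicates sharp speckle from static scatterers. Border
    pixels (half a window wide) are left at zero for simplicity.
    """
    img = img.astype(float)
    pad = win // 2
    h, w = img.shape
    K = np.zeros_like(img)
    for i in range(pad, h - pad):
        for j in range(pad, w - pad):
            roi = img[i - pad:i + pad + 1, j - pad:j + pad + 1]
            m = roi.mean()
            K[i, j] = roi.std(ddof=1) / m if m > 0 else 0.0
    return K
```

In practice this loop would be vectorized with a uniform filter, but the per-window form makes the definition explicit.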

  9. Computer-assisted liver-mass estimation from gamma-camera images.

    PubMed

    Eikman, E A; Mack, G A; Jain, V K; Madden, J A

    1979-02-01

We have devised a computer-assisted method for objective estimation of liver mass from the right lateral projection of radiocolloid images of the liver. Gamma-camera images were digitized, preprocessed, and stored in computer memory. The definition of liver for area measurement was adaptively determined by means of a Laplacian operator that measures the change in radioactivity slope associated with the liver margin. Individual thresholds were calculated for each of 16 subregions. A liver-mass index was derived from a linear regression model correlating the area of the right lateral projection with liver weight at autopsy in 50 patients whose livers weighed between 0.8 and 3.0 kg. The correlation coefficient found for this method was 0.83, using the equation: Liver mass [kg] = Area [cm²]/275 [cm²/kg] − 0.215 [kg]. Liver-mass estimates using an alternative computer-assisted method or representative manual methods adapted for gamma-camera images showed lower correlation with liver weight at autopsy. PMID:372502
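The abstract's regression can be evaluated directly; this helper simply encodes that equation (the function name is ours, and the fit is only reported as valid for the 0.8-3.0 kg autopsy range):

```python
def liver_mass_kg(area_cm2):
    """Liver-mass index from right-lateral projection area.

    Implements the abstract's regression: mass [kg] = area/275 - 0.215,
    with area in cm^2. Only meaningful within the fitted 0.8-3.0 kg range.
    """
    return area_cm2 / 275.0 - 0.215
```

For example, a projected area of 275 cm² maps to an estimated mass of 0.785 kg.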

  10. Intensified array camera imaging of solid surface combustion aboard the NASA Learjet

    NASA Technical Reports Server (NTRS)

    Weiland, Karen J.

    1992-01-01

An intensified array camera was used to image weakly luminous flames spreading over thermally thin paper samples in a low gravity environment aboard the NASA-Lewis Learjet. The aircraft offers 10 to 20 sec of reduced gravity during execution of a Keplerian trajectory and allows the use of instrumentation that is delicate or requires higher electrical power than is available in drop towers. The intensified array camera is a charge intensified device type that responds to light between 400 and 900 nm and has a minimum sensitivity of 10^-6 footcandles. The paper sample, either ashless filter paper or a lab wiper, burns inside a sealed chamber which is filled with 21, 18, or 15 percent oxygen in nitrogen at one atmosphere. The camera views the edge of the paper and its output is recorded on videotape. Flame positions are measured every 0.1 sec to calculate flame spread rates. Comparisons with drop tower data indicate that the flame shapes and spread rates are affected by the residual g level in the aircraft.
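The 0.1 s position sampling described above reduces to a spread rate with a line fit. The least-squares choice and the function name are assumptions, since the abstract does not state how the rate was computed:

```python
import numpy as np

def spread_rate(positions, dt=0.1):
    """Flame spread rate (position units per second) from flame-front
    positions sampled every dt seconds, via a least-squares line fit."""
    t = np.arange(len(positions)) * dt
    slope, _intercept = np.polyfit(t, positions, 1)
    return slope
```

A simple two-point difference would work for clean data, but the fit is more robust to the frame-to-frame jitter typical of videotape-derived measurements.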

  11. Noise Reduction in Brainwaves by Using Both EEG Signals and Frontal Viewing Camera Images

    PubMed Central

    Bang, Jae Won; Choi, Jong-Suk; Park, Kang Ryoung

    2013-01-01

Electroencephalogram (EEG)-based brain-computer interfaces (BCIs) have been used in various applications, including human-computer interfaces, diagnosis of brain diseases, and measurement of cognitive status. However, EEG signals can be contaminated with noise caused by the user's head movements. Therefore, we propose a new method that combines an EEG acquisition device and a frontal viewing camera to isolate and exclude the sections of EEG data containing this noise. This method is novel in the following three ways. First, we compare the accuracies of detecting head movements based on the features of EEG signals in the frequency and time domains and on the motion features of images captured by the frontal viewing camera. Second, the features of EEG signals in the frequency domain and the motion features captured by the frontal viewing camera are selected as optimal ones. The dimension reduction of the features and feature selection are performed using linear discriminant analysis. Third, the combined features are used as inputs to a support vector machine (SVM), which improves the accuracy in detecting head movements. The experimental results show that the proposed method can detect head movements with an average error rate of approximately 3.22%, which is smaller than that of other methods. PMID:23669713

  12. Fast high-resolution characterization of powders using an imaging plate Guinier camera

    NASA Astrophysics Data System (ADS)

    Gal, Joseph; Mogilanski, Dmitry; Nippus, Michael; Zabicky, Jacob; Kimmel, Giora

    2005-10-01

A new Huber Guinier camera G670 was installed on an Ultrax18-Rigaku X-ray rotating Cu anode source, with a monochromator (focal length B=360 mm) providing pure Kα1 radiation. The camera is used for powder diffraction applying transmission geometry. An imaging plate (IP) provides position-sensitive detection. In order to evaluate this new instrumental setup, quality data were collected for some classical reference materials such as silicon, quartz, some standards supplied by NIST USA and ceramic oxides synthesized in our laboratory. Each sample was measured at 4 kW for 1-2 min at 2θ from 0 to 100°. The results were compared with published references. The following desirable features are noted for the instrumental combination studied: production of high quality X-ray data at a very fast rate, very accurate intensity measurements, sharp diffraction patterns due to small instrumental broadening and a pure monochromatic beam, and small position errors for 2θ from 4 to 80°. There is no evidence for extra line broadening by the IP camera detector setup. It was found that the relatively high instrumental background can be easily dealt with and does not pose difficulty in the analysis of the data. However, fluorescence cannot be filtered.

  13. Correlating objective and subjective evaluation of texture appearance with applications to camera phone imaging

    NASA Astrophysics Data System (ADS)

    Phillips, Jonathan B.; Coppola, Stephen M.; Jin, Elaine W.; Chen, Ying; Clark, James H.; Mauer, Timothy A.

    2009-01-01

    Texture appearance is an important component of photographic image quality as well as object recognition. Noise cleaning algorithms are used to decrease sensor noise of digital images, but can hinder texture elements in the process. The Camera Phone Image Quality (CPIQ) initiative of the International Imaging Industry Association (I3A) is developing metrics to quantify texture appearance. Objective and subjective experimental results of the texture metric development are presented in this paper. Eight levels of noise cleaning were applied to ten photographic scenes that included texture elements such as faces, landscapes, architecture, and foliage. Four companies (Aptina Imaging, LLC, Hewlett-Packard, Eastman Kodak Company, and Vista Point Technologies) have performed psychophysical evaluations of overall image quality using one of two methods of evaluation. Both methods presented paired comparisons of images on thin film transistor liquid crystal displays (TFT-LCD), but the display pixel pitch and viewing distance differed. CPIQ has also been developing objective texture metrics and targets that were used to analyze the same eight levels of noise cleaning. The correlation of the subjective and objective test results indicates that texture perception can be modeled with an objective metric. The two methods of psychophysical evaluation exhibited high correlation despite the differences in methodology.

  14. Evaluation of a large format image tube camera for the shuttle sortie mission

    NASA Technical Reports Server (NTRS)

    Tifft, W. C.

    1976-01-01

A large format image tube camera of a type under consideration for use on the Space Shuttle Sortie Missions is evaluated. The evaluation covers the following subjects: (1) resolving power of the system; (2) geometrical characteristics of the system (distortion, etc.); (3) shear characteristics of the fiber optic coupling; (4) background effects in the tube; (5) uniformity of response of the tube (as a function of wavelength); (6) detective quantum efficiency of the system; and (7) astronomical applications of the system. It must be noted that many of these characteristics are quantitatively unique to the particular tube under discussion and serve primarily to suggest what is possible with this type of tube.

  15. Plume Imaging Using an IR Camera to Estimate Sulphur Dioxide Flux on Volcanoes of Northern Chile

    NASA Astrophysics Data System (ADS)

    Rosas Sotomayor, F.; Amigo, A.

    2014-12-01

Remote sensing is a fast and safe method to obtain gas abundances in volcanic plumes, in particular when access to the vent is difficult or during volcanic crises. In recent years, a ground-based infrared camera (NicAir) has been developed by Nicarnica Aviation, which quantifies SO2 and ash in volcanic plumes based on the infrared radiance at specific wavelengths through the application of filters. NicAir cameras have been acquired by the Geological Survey of Chile in order to study degassing of active volcanoes. This contribution focuses on a series of measurements done in December 2013 on volcanoes of northern Chile, in particular Láscar, Irruputuncu and Ollagüe, which are characterized by persistent quiescent degassing. During fieldwork, plumes from all three volcanoes showed regular behavior and the atmospheric conditions were very favorable (cloud-free and dry air). Four, two and one sets of measurements, up to 100 images each, were taken for Láscar, Irruputuncu and Ollagüe volcano, respectively. Matlab software was used for image visualization and processing of the raw data. For instance, data visualization is performed through the Matlab IPT functions imshow() and imcontrast(), and one algorithm was created for extracting necessary metadata. Image processing considers radiation at 8.6 and 10 µm wavelengths, due to differential SO2 and water vapor absorption. Calibration was performed in the laboratory through a detector correlation between digital numbers (raw data in image pixel values) and spectral radiance, and also in the field considering the camera's self-emission of infrared radiation. A gradient between the plume core and plume rim is expected, due to quick reaction of sulphur dioxide with water vapor, therefore a flux underestimate is also expected. Results will be compared with other SO2 remote sensing instruments such as DOAS and UV-camera. 
The implementation of this novel technique on Chilean volcanoes will be a major advance in our understanding of volcanic emissions and is also a strong complement for gas monitoring at active volcanoes such as Láscar, Villarrica, Lastarria and Cordón Caulle, among others, and in rough volcanic terrains, due to its portability, easy operation, fast data acquisition and data processing.

  16. Planetary Camera imaging of the counter-rotating core galaxy NGC 4365

    NASA Technical Reports Server (NTRS)

    Forbes, Duncan A.

    1994-01-01

We analyze F555W(V) band Planetary Camera images of NGC 4365, for which ground-based spectroscopy has revealed a misaligned, counter-rotating core. Line profile analysis by Surma indicates that the counter-rotating component has a disk structure. After deconvolution and galaxy modeling, we find photometric evidence at small radii to support this claim. There is no indication of a central point source or dust lane. The surface brightness profile reveals a steep outer profile and a shallow, but not flat, inner profile with the inflection radius occurring at 1.8 arcsec. The inner profile is consistent with a cusp.

  17. Development of wide-field, multi-imaging x-ray streak camera technique with increased image-sampling arrays

    NASA Astrophysics Data System (ADS)

    Heya, M.; Fujioka, S.; Shiraga, H.; Miyanaga, N.; Yamanaka, T.

    2001-01-01

In order to enlarge the field of view of a multi-imaging x-ray streak (MIXS) camera technique [H. Shiraga et al., Rev. Sci. Instrum. 66, 722 (1995)], which provides two-dimensionally space-resolved x-ray imaging with a high temporal resolution of 10 ps, we have proposed and designed a wide-field MIXS (W-MIXS) by increasing the number of image-sampling arrays. In this method, multiple cathode slits were used on the photocathode of an x-ray streak camera. The field of view of the W-MIXS can be enlarged up to 150-200 µm instead of 70 µm for a typical MIXS with a spatial resolution of 15 µm. A proof-of-principle experiment with the W-MIXS was carried out at the Gekko-XII laser system. A cross-wire target was irradiated by four beams of the Gekko-XII laser. The data streaked with the W-MIXS system were reconstructed as a series of time-resolved, two-dimensional x-ray images. The W-MIXS system has been established as an improved two-dimensionally space-resolved and sequentially time-resolved technique.

  18. Using the Standard Deviation of a Region of Interest in an Image to Estimate Camera to Emitter Distance

    PubMed Central

Cano-García, Ángel E.; Lázaro, José Luis; Infante, Arturo; Fernández, Pedro; Pompa-Chacón, Yamilet; Espinoza, Felipe

    2012-01-01

In this study, a camera to infrared diode (IRED) distance estimation problem was analyzed. The main objective was to define an alternative way to measure depth, using only the information extracted from the pixel grey levels of the IRED image to estimate the distance between the camera and the IRED. In this paper, the standard deviation of the pixel grey levels in the region of interest containing the IRED image is proposed as an empirical parameter to define a model for estimating camera to emitter distance. This model includes the camera exposure time, IRED radiant intensity and the distance between the camera and the IRED. An expression for the standard deviation model related to these magnitudes was also derived and calibrated using different images taken under different conditions. From this analysis, we determined the optimum parameters to ensure the best accuracy provided by this alternative. Once the model calibration had been carried out, a differential method to estimate the distance between the camera and the IRED was defined and applied, considering that the camera was aligned with the IRED. The results indicate that this method represents a useful alternative for determining depth information. PMID:22778608
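One way to read the abstract's model is as an invertible relation between the ROI standard deviation and distance. The inverse-square form and the calibration constant k below are purely illustrative assumptions; the paper's fitted expression is not reproduced here:

```python
import numpy as np

def distance_from_std(sigma, t_exp, intensity, k):
    """Invert a hypothetical calibrated model sigma = k * t_exp * I / d**2
    for the camera-to-IRED distance d.

    sigma     : standard deviation of ROI grey levels
    t_exp     : camera exposure time
    intensity : IRED radiant intensity
    k         : constant obtained from calibration images (assumed)
    """
    return np.sqrt(k * t_exp * intensity / sigma)
```

Whatever the true functional form, the workflow is the same: calibrate the model from images at known distances, then invert it for unknown ones.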

  19. New Mars Camera's First Image of Mars from Mapping Orbit (Full Frame)

    NASA Technical Reports Server (NTRS)

    2006-01-01

    The high resolution camera on NASA's Mars Reconnaissance Orbiter captured its first image of Mars in the mapping orbit, demonstrating the full resolution capability, on Sept. 29, 2006. The High Resolution Imaging Science Experiment (HiRISE) acquired this first image at 8:16 AM (Pacific Time). With the spacecraft at an altitude of 280 kilometers (174 miles), the image scale is 25 centimeters per pixel (10 inches per pixel). If a person were located on this part of Mars, he or she would just barely be visible in this image.

The image covers a small portion of the floor of Ius Chasma, one branch of the giant Valles Marineris system of canyons. The image illustrates a variety of processes that have shaped the Martian surface. There are bedrock exposures of layered materials, which could be sedimentary rocks deposited in water or from the air. Some of the bedrock has been faulted and folded, perhaps the result of large-scale forces in the crust or from a giant landslide. The image resolves rocks as small as 90 centimeters (3 feet) in diameter. It includes many dunes or ridges of windblown sand.

    This image (TRA_000823_1720) was taken by the High Resolution Imaging Science Experiment camera onboard the Mars Reconnaissance Orbiter spacecraft on Sept. 29, 2006. Shown here is the full image, centered at minus 7.8 degrees latitude, 279.5 degrees east longitude. The image is oriented such that north is to the top. The range to the target site was 297 kilometers (185.6 miles). At this distance the image scale is 25 centimeters (10 inches) per pixel (with one-by-one binning) so objects about 75 centimeters (30 inches) across are resolved. The image was taken at a local Mars time of 3:30 PM and the scene is illuminated from the west with a solar incidence angle of 59.7 degrees, thus the sun was about 30.3 degrees above the horizon. The season on Mars is northern winter, southern summer.

    [Photojournal note: Due to the large sizes of the high-resolution TIFF and JPEG files, some systems may experience extremely slow downlink time while viewing or downloading these images; some systems may be incapable of handling the download entirely.]

    NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Mars Reconnaissance Orbiter for NASA's Science Mission Directorate, Washington. Lockheed Martin Space Systems, Denver, is the prime contractor for the project and built the spacecraft. The HiRISE camera was built by Ball Aerospace & Technologies Corporation, Boulder, Colo., and is operated by the University of Arizona, Tucson.

  20. System for photometric calibration of optoelectronic imaging devices especially streak cameras

    DOEpatents

    Boni, Robert; Jaanimagi, Paul

    2003-11-04

A system for the photometric calibration of streak cameras and similar imaging devices provides a precise knowledge of the camera's flat-field response as well as a mapping of the geometric distortions. The system provides the flat-field response, representing the spatial variations in the sensitivity of the recorded output, with a signal-to-noise ratio (SNR) greater than can be achieved in a single submicrosecond streak record. The measurement of the flat-field response is carried out by illuminating the input slit of the streak camera with a signal that is uniform in space and constant in time. This signal is generated by passing a continuous wave source through an optical homogenizer made up of a light pipe or pipes in which the illumination typically makes several bounces before exiting as a spatially uniform source field. The rectangular cross-section of the homogenizer is matched to the usable photocathode area of the streak tube. The flat-field data set is obtained by using a slow streak ramp that may have a period from one millisecond (ms) to ten seconds (s), but is nominally one second in duration. The system also provides a mapping of the geometric distortions by spatially and temporally modulating the output of the homogenizer and obtaining a data set using the slow streak ramps. All data sets are acquired using a CCD camera and stored on a computer, which is used to calculate all relevant corrections to the signal data sets. The signal and flat-field data sets are both corrected for geometric distortions prior to applying the flat-field correction. Absolute photometric calibration is obtained by measuring the output fluence of the homogenizer with a "standard-traceable" meter and relating that to the CCD pixel values for a self-corrected flat-field data set.
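The flat-field correction step described above amounts to dividing the signal frame by a normalized flat-field frame. A minimal sketch of that step only, with assumed names; the geometric-distortion correction the patent applies first is omitted:

```python
import numpy as np

def flat_field_correct(signal, flat):
    """Divide out the camera's spatial sensitivity pattern.

    `flat` is a frame recorded under spatially uniform illumination
    (e.g. from the homogenizer). It is normalized to unit mean so the
    correction preserves the overall signal level.
    """
    flat_norm = flat / flat.mean()
    return signal / flat_norm
```

Normalizing the flat field to unit mean keeps the corrected frame in the same units as the raw signal, leaving absolute calibration to a separate traceable-meter measurement as the abstract describes.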

  1. Minimal Camera Networks for 3D Image Based Modeling of Cultural Heritage Objects

    PubMed Central

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-01-01

3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue "Lamassu". Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm. PMID:24670718

  2. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    PubMed

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-01-01

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue "Lamassu". Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm. PMID:24670718

  3. Efficient smart CMOS camera based on FPGAs oriented to embedded image processing.

    PubMed

Bravo, Ignacio; Baliñas, Javier; Gardel, Alfredo; Lázaro, José L.; Espinosa, Felipe; García, Jorge

    2011-01-01

This article describes an image processing system based on an intelligent ad-hoc camera, whose two principal elements are a high speed 1.2 megapixel Complementary Metal Oxide Semiconductor (CMOS) sensor and a Field Programmable Gate Array (FPGA). The latter is used to control the various sensor parameter configurations and, where desired, to receive and process the images captured by the CMOS sensor. The flexibility and versatility offered by the new FPGA families makes it possible to incorporate microprocessors into these reconfigurable devices, and these are normally used for highly sequential tasks unsuitable for parallelization in hardware. For the present study, we used a Xilinx XC4VFX12 FPGA, which contains an internal Power PC (PPC) microprocessor. In turn, this contains a standalone system which manages the FPGA image processing hardware and endows the system with multiple software options for processing the images captured by the CMOS sensor. The system also incorporates an Ethernet channel for sending processed and unprocessed images from the FPGA to a remote node. Consequently, it is possible to visualize and configure system operation and captured and/or processed images remotely. PMID:22163739

4. Spectral imaging with a CID camera. Final report, 4 February 1982-14 September 1984

    SciTech Connect

    Tarbell, T.D.

    1985-03-22

This report describes a program of spectral imaging observations of the solar atmosphere using the Sacramento Peak Vacuum Tower Telescope. The observations were obtained with Lockheed instruments including: an active tilt mirror for image motion compensation; polarization analyzer; narrowband tunable birefringent filter; photoelectric CID array camera; digital video image processor; and a microcomputer controller. Five observing runs were made, three of them with the entire system in operation. The images obtained were processed to measure magnetic and velocity fields in the solar photosphere with very high spatial resolution - 0.5 arcseconds on the best frames. Sets of these images have been studied to address three scientific problems: (1) the relationship among small magnetic flux tubes, downdrafts and granulation; (2) the puzzling flux changes in isolated magnetic features in a decaying active region; (3) the temporal power spectrum of the magnetogram signal in isolated flux tubes. Examples of the narrowband images are included in the report, along with abstracts and manuscripts of papers resulting from this research.

  5. Efficient Smart CMOS Camera Based on FPGAs Oriented to Embedded Image Processing

    PubMed Central

    Bravo, Ignacio; Baliñas, Javier; Gardel, Alfredo; Lázaro, José L.; Espinosa, Felipe; García, Jorge

    2011-01-01

This article describes an image processing system based on an intelligent ad-hoc camera, whose two principal elements are a high speed 1.2 megapixel Complementary Metal Oxide Semiconductor (CMOS) sensor and a Field Programmable Gate Array (FPGA). The latter is used to control the various sensor parameter configurations and, where desired, to receive and process the images captured by the CMOS sensor. The flexibility and versatility offered by the new FPGA families makes it possible to incorporate microprocessors into these reconfigurable devices, and these are normally used for highly sequential tasks unsuitable for parallelization in hardware. For the present study, we used a Xilinx XC4VFX12 FPGA, which contains an internal Power PC (PPC) microprocessor. In turn, this contains a standalone system which manages the FPGA image processing hardware and endows the system with multiple software options for processing the images captured by the CMOS sensor. The system also incorporates an Ethernet channel for sending processed and unprocessed images from the FPGA to a remote node. Consequently, it is possible to visualize and configure system operation and captured and/or processed images remotely. PMID:22163739

  6. Imaging system for cardiac planar imaging using a dedicated dual-head gamma camera

    DOEpatents

    Majewski, Stanislaw; Umeno, Marc M.

    2011-09-13

A cardiac imaging system employing dual gamma imaging heads co-registered with one another to provide two dynamic simultaneous views of the heart sector of a patient torso. A first gamma imaging head is positioned in a first orientation with respect to the heart sector and a second gamma imaging head is positioned in a second orientation with respect to the heart sector. An adjustment arrangement is capable of adjusting the distance between the separate imaging heads and the angle between the heads. With the angle between the imaging heads set to 180 degrees and operating in a range of 140-159 keV and at a rate of up to 500 kHz, the imaging heads are co-registered to produce simultaneous dynamic recording of two stereotactic views of the heart. The use of co-registered imaging heads maximizes the uniformity of detection sensitivity of blood flow in and around the heart over the whole heart volume and minimizes radiation absorption effects. A normalization/image fusion technique is implemented pixel-by-corresponding pixel to increase signal for any cardiac region viewed in two images obtained from the two opposed detector heads for the same time bin. The imaging system is capable of producing enhanced first pass studies, bloodpool studies including planar, gated and non-gated EKG studies, planar EKG perfusion studies, and planar hot spot imaging.
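A minimal sketch of the pixel-by-corresponding-pixel combination of the two co-registered heads; summing the two views is one plausible reading of the signal-increasing fusion step, and the count maps below are made up (the patent's actual normalization is not reproduced):

```python
import numpy as np

def fuse_opposed_views(head_a, head_b):
    """Combine two co-registered views pixel by corresponding pixel.

    Summing the counts from opposed heads for the same time bin increases
    the signal in every cardiac region; the geometric mean is another common
    choice for opposed-view combination (assumption, not from the patent).
    """
    return head_a + head_b

# Hypothetical 2x2 count maps from the two 180-degree opposed heads.
head_a = np.array([[10.0, 20.0], [30.0, 40.0]])
head_b = np.array([[40.0, 30.0], [20.0, 10.0]])
fused = fuse_opposed_views(head_a, head_b)
```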

  7. High-resolution imaging of the Pluto-Charon system with the Faint Object Camera of the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Albrecht, R.; Barbieri, C.; Adorf, H.-M.; Corrain, G.; Gemmo, A.; Greenfield, P.; Hainaut, O.; Hook, R. N.; Tholen, D. J.; Blades, J. C.

    1994-01-01

    Images of the Pluto-Charon system were obtained with the Faint Object Camera (FOC) of the Hubble Space Telescope (HST) after the refurbishment of the telescope. The images are of superb quality, allowing the determination of radii, fluxes, and albedos. Attempts were made to improve the resolution of the already diffraction limited images by image restoration. These yielded indications of surface albedo distributions qualitatively consistent with models derived from observations of Pluto-Charon mutual eclipses.

  8. Firefly: A HOT camera core for thermal imagers with enhanced functionality

    NASA Astrophysics Data System (ADS)

    Pillans, Luke; Harmer, Jack; Edwards, Tim

    2015-06-01

    Raising the operating temperature of mercury cadmium telluride infrared detectors from 80K to above 160K creates new applications for high performance infrared imagers by vastly reducing the size, weight and power consumption of the integrated cryogenic cooler. Realizing the benefits of Higher Operating Temperature (HOT) requires a new kind of infrared camera core with the flexibility to address emerging applications in handheld, weapon mounted and UAV markets. This paper discusses the Firefly core developed to address these needs by Selex ES in Southampton UK. Firefly represents a fundamental redesign of the infrared signal chain reducing power consumption and providing compatibility with low cost, low power Commercial Off-The-Shelf (COTS) computing technology. This paper describes key innovations in this signal chain: a ROIC purpose built to minimize power consumption in the proximity electronics, GPU based image processing of infrared video, and a software customisable infrared core which can communicate wirelessly with other Battlespace systems.

  9. Body-Based Gender Recognition Using Images from Visible and Thermal Cameras.

    PubMed

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-01-01

    Gender information has many useful applications in computer vision systems, such as surveillance systems, counting the number of males and females in a shopping mall, accessing control systems in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results, through various kinds of feature extraction and fusion methods, show that our approach is efficient for gender recognition through a comparison of recognition rates with conventional systems. PMID:26828487

  10. Estimation of Enterococci Input from Bathers and Animals on A Recreational Beach Using Camera Images

    PubMed Central

Wang, John D.; Solo-Gabriele, Helena M.; Abdelzaher, Amir M.; Fleming, Lora E.

    2010-01-01

Enterococci are used nationwide as a water quality indicator for marine recreational beaches. Prior research has demonstrated that enterococci inputs to the study beach site (located in Miami, FL) are dominated by non-point sources (including humans and animals). We have estimated their respective source functions by developing a counting methodology for individuals to better understand their non-point source load impacts. The method utilizes camera images of the beach taken at regular time intervals to determine the number of people and animal visitors. The developed method translates raw image counts for weekdays and weekend days into daily and monthly visitation rates. Enterococci source functions were computed from the observed number of unique individuals for average days of each month of the year, and from average load contributions for humans and for animals. Results indicate that dogs represent the larger source of enterococci relative to humans and birds. PMID:20381094

  11. Body-Based Gender Recognition Using Images from Visible and Thermal Cameras

    PubMed Central

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-01-01

    Gender information has many useful applications in computer vision systems, such as surveillance systems, counting the number of males and females in a shopping mall, accessing control systems in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results, through various kinds of feature extraction and fusion methods, show that our approach is efficient for gender recognition through a comparison of recognition rates with conventional systems. PMID:26828487

  12. Single-camera panoramic stereo imaging system with a fisheye lens and a convex mirror.

    PubMed

    Li, Weiming; Li, Y F

    2011-03-28

    This paper presents a panoramic stereo imaging system which uses a single camera coaxially combined with a fisheye lens and a convex mirror. It provides the design methodology, trade analysis, and experimental results using commercially available components. The trade study shows the design equations and the various tradeoffs that must be made during design. The system's novelty is that it provides stereo vision over a full 360-degree horizontal field-of-view (FOV). Meanwhile, the entire vertical FOV is enlarged compared to the existing systems. The system is calibrated with a computational model that can accommodate the non-single viewpoint imaging cases to conduct 3D reconstruction in Euclidean space. PMID:21451610

  13. Real time plume and laser spot recognition in IR camera images

    SciTech Connect

    Moore, K.R.; Caffrey, M.P.; Nemzek, R.J.; Salazar, A.A.; Jeffs, J.; Andes, D.K.; Witham, J.C.

    1997-08-01

    It is desirable to automatically guide the laser spot onto the effluent plume for maximum IR DIAL system sensitivity. This requires the use of a 2D focal plane array. The authors have demonstrated that a wavelength-filtered IR camera is capable of 2D imaging of both the plume and the laser spot. In order to identify the centers of the plume and the laser spot, it is first necessary to segment these features from the background. They report a demonstration of real time plume segmentation based on velocity estimation. They also present results of laser spot segmentation using simple thresholding. Finally, they describe current research on both advanced segmentation and recognition algorithms and on reconfigurable real time image processing hardware based on field programmable gate array technology.
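The simple-thresholding segmentation of the laser spot can be sketched as follows; the frame contents and threshold are invented for illustration (the paper's plume segmentation instead uses velocity estimation across frames):

```python
import numpy as np

def segment_spot(frame, threshold):
    """Segment a bright spot by simple thresholding and return the binary
    mask plus the centroid of the above-threshold pixels."""
    mask = frame > threshold
    ys, xs = np.nonzero(mask)
    return mask, (ys.mean(), xs.mean())

# Synthetic IR frame with a hypothetical 3x3 hot spot standing in for the laser.
frame = np.zeros((16, 16))
frame[5:8, 9:12] = 10.0
mask, (cy, cx) = segment_spot(frame, threshold=5.0)  # centroid at (6.0, 10.0)
```

The centroid would then serve as the feedback signal for steering the laser spot onto the plume.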

  14. A JPEG-like algorithm for compression of single-sensor camera image

    NASA Astrophysics Data System (ADS)

    Benahmed Daho, Omar; Larabi, Mohamed-Chaker; Mukhopadhyay, Jayanta

    2011-01-01

This paper presents a JPEG-like coder for image compression of single-sensor camera images using a Bayer Color Filter Array (CFA). The originality of the method is a joint scheme of compression and demosaicking in the DCT domain. In this method, the captured CFA raw data is first separated into four distinct components and then converted to YCbCr. A JPEG compression scheme is then applied. At the decoding level, the bitstream is decompressed until reaching the DCT coefficients. The latter are used for the interpolation stage. The obtained results are better than those obtained by the conventional JPEG in terms of CPSNR, ΔE2000 and SSIM. The obtained JPEG-like scheme is also less complex.
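The first step, separating the Bayer raw data into four distinct components, can be sketched as below, assuming an RGGB filter layout (the paper's actual CFA order may differ):

```python
import numpy as np

def split_bayer_rggb(cfa):
    """Separate a Bayer RGGB mosaic into its four quarter-resolution planes.

    The paper converts these planes to YCbCr and compresses them with a
    JPEG-like DCT scheme; the RGGB layout here is an assumption.
    """
    r  = cfa[0::2, 0::2]   # red sites: even rows, even columns
    g1 = cfa[0::2, 1::2]   # green sites on red rows
    g2 = cfa[1::2, 0::2]   # green sites on blue rows
    b  = cfa[1::2, 1::2]   # blue sites: odd rows, odd columns
    return r, g1, g2, b

cfa = np.arange(16).reshape(4, 4)  # toy 4x4 mosaic for demonstration
r, g1, g2, b = split_bayer_rggb(cfa)
```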

  15. High-resolution Vesta Low Altitude Mapping Orbit Atlas derived from Dawn Framing Camera images

    NASA Astrophysics Data System (ADS)

    Roatsch, Th.; Kersten, E.; Matz, K.-D.; Preusker, F.; Scholten, F.; Elgner, S.; Jaumann, R.; Raymond, C. A.; Russell, C. T.

    2013-09-01

    The Dawn Framing Camera (FC) acquired close to 10,000 clear filter images of Vesta with a resolution of about 20 m/pixel during the Low Altitude Mapping Orbit (LAMO) between December 2011 and April 2012. We ortho-rectified these images and produced a global high-resolution uncontrolled mosaic of Vesta. This global mosaic is the baseline for a high-resolution Vesta atlas that consists of 30 tiles mapped at a scale between 1:200,000 and 1:225,180. The nomenclature used in this atlas was proposed by the Dawn team and was approved by the International Astronomical Union (IAU). The whole atlas is available to the public through the Dawn GIS web page [http://dawn_gis.dlr.de/atlas].

  16. Method for searching the mapping relationship between space points and their image points in CCD camera

    NASA Astrophysics Data System (ADS)

    Sun, Yuchen; Ge, Baozhen; Lu, Qieni; Zou, Jin; Zhang, Yimo

    2005-01-01

BP Neural Network Method and Linear Partition Method are proposed to search the mapping relationship between space points and their image points in CCD cameras, which can be adopted to calibrate three-dimensional digitization systems based on optical methods. Both methods only need the coordinates of the calibration points and their corresponding image points' coordinates as parameters. The principle of the calibration techniques, including the formulas and the solution procedure, is deduced in detail. Calibration experiment results indicate that applying the Linear Partition Method to coplanar points enables its mean relative measurement error to reach 0.44 percent, and applying the BP Neural Network Method to non-coplanar points enables its testing accuracy to reach 0.5-0.6 pixels.
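As a simplified stand-in for the paper's two methods, a linear (affine) mapping from space points to image points can be fitted by least squares using only the point correspondences; the camera matrix and point sets below are synthetic:

```python
import numpy as np

def fit_linear_mapping(world_pts, image_pts):
    """Fit a linear mapping from 3D space points to 2D image points by least
    squares -- a simplified stand-in for the partition/BP-network methods."""
    n = world_pts.shape[0]
    A = np.hstack([world_pts, np.ones((n, 1))])   # homogeneous coordinates
    M, *_ = np.linalg.lstsq(A, image_pts, rcond=None)
    return M                                      # 4x2 mapping matrix

def apply_mapping(M, world_pts):
    n = world_pts.shape[0]
    return np.hstack([world_pts, np.ones((n, 1))]) @ M

# Synthetic calibration points generated by a known affine camera (assumption).
rng = np.random.default_rng(1)
world = rng.uniform(0, 1, (20, 3))
M_true = np.array([[500.0, 0.0], [0.0, 480.0], [120.0, -40.0], [320.0, 240.0]])
image = np.hstack([world, np.ones((20, 1))]) @ M_true
M = fit_linear_mapping(world, image)
residual = np.abs(apply_mapping(M, world) - image).max()
```

A real camera adds a perspective division and lens distortion, which is why the paper resorts to partitioned linear fits and a BP network rather than a single global linear map.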

  17. Design and fabrication of MEMS-based thermally-actuated image stabilizer for cell phone camera

    NASA Astrophysics Data System (ADS)

    Lin, Chun-Ying; Chiou, Jin-Chern

    2012-11-01

A micro-electro-mechanical system (MEMS)-based image stabilizer is proposed to counteract shaking in cell phone cameras. The proposed stabilizer (dimensions, 8.8 × 8.8 × 0.2 mm³) includes a two-axis decoupling XY stage and has sufficient strength to suspend an image sensor (IS) used for the anti-shaking function. The XY stage is designed to send electrical signals from the suspended IS by using eight signal springs and 24 signal outputs. The maximum actuating distance of the stage is larger than 25 μm, which is sufficient to resolve the shaking problem. Accordingly, the applied voltage for the 25 μm moving distance is lower than 20 V; the dynamic resonant frequency of the actuating device is 4485 Hz, and the rising time is 21 ms.

  18. Color video camera capable of 1,000,000 fps with triple ultrahigh-speed image sensors

    NASA Astrophysics Data System (ADS)

    Maruyama, Hirotaka; Ohtake, Hiroshi; Hayashida, Tetsuya; Yamada, Masato; Kitamura, Kazuya; Arai, Toshiki; Tanioka, Kenkichi; Etoh, Takeharu G.; Namiki, Jun; Yoshida, Tetsuo; Maruno, Hiromasa; Kondo, Yasushi; Ozaki, Takao; Kanayama, Shigehiro

    2005-03-01

    We developed an ultrahigh-speed, high-sensitivity, color camera that captures moving images of phenomena too fast to be perceived by the human eye. The camera operates well even under restricted lighting conditions. It incorporates a special CCD device that is capable of ultrahigh-speed shots while retaining its high sensitivity. Its ultrahigh-speed shooting capability is made possible by directly connecting CCD storages, which record video images, to photodiodes of individual pixels. Its large photodiode area together with the low-noise characteristic of the CCD contributes to its high sensitivity. The camera can clearly capture events even under poor light conditions, such as during a baseball game at night. Our camera can record the very moment the bat hits the ball.

  19. Reconstruction of refocusing and all-in-focus images based on forward simulation model of plenoptic camera

    NASA Astrophysics Data System (ADS)

    Zhang, Rumin; Liu, Peng; Liu, Dijun; Su, Guobin

    2015-12-01

    In this paper, we establish a forward simulation model of plenoptic camera which is implemented by inserting a micro-lens array in a conventional camera. The simulation model is used to emulate how the space objects at different depths are imaged by the main lens then remapped by the micro-lens and finally captured on the 2D sensor. We can easily modify the parameters of the simulation model such as the focal lengths and diameters of the main lens and micro-lens and the number of micro-lens. Employing the spatial integration, the refocused images and all-in-focus images are rendered based on the plenoptic images produced by the model. The forward simulation model can be used to determine the trade-offs between different configurations and to test any new researches related to plenoptic camera without the need of prototype.
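The spatial integration used for refocusing can be sketched as a shift-and-add over sub-aperture views extracted from the plenoptic image; the light-field layout and integer shifts below are simplifying assumptions (real renderers interpolate fractional shifts):

```python
import numpy as np

def refocus(light_field, shift):
    """Synthetic refocusing by shift-and-add over sub-aperture images.

    light_field: 4D array [u, v, s, t] of sub-aperture views; the integer
    `shift` selects the refocus depth by shifting each view in proportion
    to its (u, v) offset from the central view before averaging.
    """
    U, V, S, T = light_field.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = (u - U // 2) * shift
            dv = (v - V // 2) * shift
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

# A point source with one pixel of parallax per view: 3x3 views of 9x9 pixels.
U = V = 3
lf = np.zeros((U, V, 9, 9))
for u in range(U):
    for v in range(V):
        lf[u, v, 4 - (u - 1), 4 - (v - 1)] = 1.0

sharp = refocus(lf, 1)    # shifts realign the point into a single bright pixel
blurred = refocus(lf, 0)  # without refocusing, the point smears across views
```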

  20. Implementation of a continuous scanning procedure and a line scan camera for ?thin-sheet laser imaging microscopy

    PubMed Central

    Schacht, Peter; Johnson, Shane B.; Santi, Peter A.

    2010-01-01

    We report development of a continuous scanning procedure and the use of a time delay integration (TDI) line scan camera for a light-sheet based microscope called a thin-sheet laser imaging microscope (TSLIM). TSLIM is an optimized version of a light-sheet fluorescent microscope that previously used a start/stop scanning procedure to move the specimen through the thinnest portion of a light-sheet and stitched the image columns together to produce a well-focused composite image. In this paper, hardware and software enhancements to TSLIM are described that allow for dual sided, dual illumination lasers, and continuous scanning of the specimen using either a full-frame CCD camera and a TDI line scan camera. These enhancements provided a ~70% reduction in the time required for composite image generation and a ~63% reduction in photobleaching of the specimen compared to the start/stop procedure. PMID:21258493

  1. Lunar Reconnaissance Orbiter Camera Narrow Angle Cameras: Laboratory and Initial Flight Calibration

    NASA Astrophysics Data System (ADS)

    Humm, D. C.; Tschimmel, M.; Denevi, B. W.; Lawrence, S.; Mahanti, P.; Tran, T. N.; Thomas, P. C.; Eliason, E.; Robinson, M. S.

    2009-12-01

    The Lunar Reconnaissance Orbiter Camera (LROC) has two identical Narrow Angle Cameras (NACs). Each NAC is a monochrome pushbroom scanner, providing images with a pixel scale of 50 cm from a 50-km orbit. A single NAC image has a swath width of 2.5 km and a length of up to 26 km. The NACs are mounted to acquire side-by-side imaging for a combined swath width of 5 km. The NAC is designed to fully characterize future human and robotic landing sites in terms of scientific and resource merit, trafficability, and hazards. The North and South poles will be mapped at 1-meter-scale poleward of 85.5 degrees latitude. Stereo coverage is achieved by pointing the NACs off-nadir, which requires planning in advance. Read noise is 91 and 93 e- and the full well capacity is 334,000 and 352,000 e- for NAC-L and NAC-R respectively. Signal-to-noise ranges from 42 for low-reflectance material with 70 degree illumination to 230 for high-reflectance material with 0 degree illumination. Longer exposure times and 2x binning are available to further increase signal-to-noise with loss of spatial resolution. Lossy data compression from 12 bits to 8 bits uses a companding table selected from a set optimized for different signal levels. A model of focal plane temperatures based on flight data is used to command dark levels for individual images, optimizing the performance of the companding tables and providing good matching of the NAC-L and NAC-R images even before calibration. The preliminary NAC calibration pipeline includes a correction for nonlinearity at low signal levels with an offset applied for DN>600 and a logistic function for DN<600. Flight images taken on the limb of the Moon provide a measure of stray light performance. Averages over many lines of images provide a measure of flat field performance in flight. These are comparable with laboratory data taken with a diffusely reflecting uniform panel.
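The low-signal nonlinearity correction described above, a constant offset applied for DN>600 and a logistic function for DN<600, can be sketched as follows; every parameter value in this sketch is a placeholder, not an actual LROC calibration coefficient:

```python
import numpy as np

def correct_nonlinearity(dn, offset=2.0, a=2.0, k=0.01, x0=100.0):
    """Piecewise correction: subtract a constant offset above DN = 600 and
    a logistic term below. Parameter values are hypothetical placeholders."""
    dn = np.asarray(dn, dtype=float)
    logistic = a / (1.0 + np.exp(-k * (dn - x0)))
    return np.where(dn > 600, dn - offset, dn - logistic)

raw = np.array([50.0, 300.0, 700.0])
corrected = correct_nonlinearity(raw)  # low-DN pixels get the logistic term
```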

  2. In situ X-ray beam imaging using an off-axis magnifying coded aperture camera system.

    PubMed

    Kachatkou, Anton; Kyele, Nicholas; Scott, Peter; van Silfhout, Roelof

    2013-07-01

    An imaging model and an image reconstruction algorithm for a transparent X-ray beam imaging and position measuring instrument are presented. The instrument relies on a coded aperture camera to record magnified images of the footprint of the incident beam on a thin foil placed in the beam at an oblique angle. The imaging model represents the instrument as a linear system whose impulse response takes into account the image blur owing to the finite thickness of the foil, the shape and size of camera's aperture and detector's point-spread function. The image reconstruction algorithm first removes the image blur using the modelled impulse response function and then corrects for geometrical distortions caused by the foil tilt. The performance of the image reconstruction algorithm was tested in experiments at synchrotron radiation beamlines. The results show that the proposed imaging system produces images of the X-ray beam cross section with a quality comparable with images obtained using X-ray cameras that are exposed to the direct beam. PMID:23765302
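One standard way to remove blur given a modelled impulse response is Wiener deconvolution in the Fourier domain; the paper does not spell out its algorithm, so the formulation, PSF, and noise-to-signal ratio below are assumptions:

```python
import numpy as np

def wiener_deblur(image, psf, nsr=1e-4):
    """Invert a known blur (impulse response) by Wiener deconvolution.

    nsr is the assumed noise-to-signal power ratio that regularizes
    frequencies where the PSF response is weak.
    """
    H = np.fft.fft2(psf, s=image.shape)
    G = np.fft.fft2(image)
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F))

# Blur a synthetic beam footprint with a small box PSF, then deblur it.
beam = np.zeros((32, 32))
beam[12:20, 14:18] = 1.0
psf = np.zeros((32, 32))
psf[:3, :3] = 1.0 / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(beam) * np.fft.fft2(psf)))
restored = wiener_deblur(blurred, psf)
```

The instrument's actual impulse response additionally folds in the foil thickness, aperture shape, and detector point-spread function, as described above.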

  3. Global stratigraphy of the dwarf planet Ceres from RC2 imaging data of the Dawn FC camera

    NASA Astrophysics Data System (ADS)

Wagner, R. J.; Schmedemann, N.; Kneissl, T.; Stephan, K.; Otto, K.; Krohn, K.; Schröder, S.; Kersten, E.; Roatsch, T.; Jaumann, R.; Williams, D. A.; Yingst, R. A.; Crown, D.; Mest, S. C.; Russell, C. T.

    2015-10-01

    On March 6, 2015, the Dawn spacecraft was captured into orbit around Ceres. During its approach phase since Dec. 1, 2014, imaging data returned by the framing camera (FC) have increased in spatial resolution exceeding that of the Hubble Space Telescope. In this paper, we use these first images to identify and map global geologic units and to establish a stratigraphic sequence.

  4. Performance of the Aspect Camera Assembly for the Advanced X-Ray Astrophysics Facility: Imaging

    NASA Technical Reports Server (NTRS)

    Michaels, Dan

    1998-01-01

    The Aspect Camera Assembly (ACA) is a "state-of-the-art" star tracker that provides real-time attitude information to the Advanced X-Ray Astrophysics Facility - Imaging (AXAF-I), and provides imaging data for "post-facto" ground processing. The ACA consists of a telescope with a CCD focal plane, associated focal plane read-out electronics, and an on-board processor that processes the focal plane data to produce star image location reports. On-board star image locations are resolved to 0.8 arcsec, and post-facto algorithms yield 0.2 arcsec star location accuracies (at end of life). The protoflight ACA has been built, along with a high accuracy vacuum test facility. Image position determination has been verified to < 0.2 arcsec accuracies. This paper is a follow-on paper to one presented by the author at the AeroSense '95 conference. This paper presents the "as built" configuration, the tested performance, and the test facility's design and demonstrated accuracy. The ACA has been delivered in anticipation of a August, 1998 shuttle launch.

  5. CMOS detector arrays in a virtual 10-kilopixel camera for coherent terahertz real-time imaging.

    PubMed

    Boppel, Sebastian; Lisauskas, Alvydas; Max, Alexander; Krozer, Viktor; Roskos, Hartmut G

    2012-02-15

We demonstrate the in-principle applicability of antenna-coupled complementary metal oxide semiconductor (CMOS) field-effect transistor arrays as cameras for real-time coherent imaging at 591.4 GHz. By scanning a few detectors across the image plane, we synthesize a focal-plane array of 100×100 pixels with an active area of 20×20 mm², which is applied to imaging in transmission and reflection geometries. Individual detector pixels exhibit a voltage conversion loss of 24 dB and a noise figure of 41 dB for 16 μW of the local oscillator (LO) drive. For object illumination, we use a radio-frequency (RF) source with 432 μW at 590 GHz. Coherent detection is realized by quasioptical superposition of the image and the LO beam with 247 μW. At an effective frame rate of 17 Hz, we achieve a maximum dynamic range of 30 dB in the center of the image and more than 20 dB within a disk of 18 mm diameter. The system has been used for surface reconstruction resolving a height difference in the μm range. PMID:22344098

  6. Calibration of HST wide field camera for quantitative analysis of faint galaxy images

    NASA Technical Reports Server (NTRS)

    Ratnatunga, Kavan U.; Griffiths, Richard E.; Casertano, Stefano; Neuschaefer, Lyman W.; Wyckoff, Eric W.

    1994-01-01

    We present the methods adopted to optimize the calibration of images obtained with the Hubble Space Telescope (HST) Wide Field Camera (WFC) (1991-1993). Our main goal is to improve quantitative measurement of faint images, with special emphasis on the faint (I approximately 20-24 mag) stars and galaxies observed as a part of the Medium-Deep Survey. Several modifications to the standard calibration procedures have been introduced, including improved bias and dark images, and a new supersky flatfield obtained by combining a large number of relatively object-free Medium-Deep Survey exposures of random fields. The supersky flat has a pixel-to-pixel rms error of about 2.0% in F555W and of 2.4% in F785LP; large-scale variations are smaller than 1% rms. Overall, our modifications improve the quality of faint images with respect to the standard calibration by about a factor of five in photometric accuracy and about 0.3 mag in sensitivity, corresponding to about a factor of two in observing time. The relevant calibration images have been made available to the scientific community.
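The supersky-flat construction can be sketched by median-combining many object-free exposures; the frame count, gain field, and one-source-per-frame contamination model below are illustrative (the real pipeline also scales frames and rejects outliers before combining):

```python
import numpy as np

def supersky_flat(exposures):
    """Build a 'supersky' flat field by median-combining many exposures of
    random fields, then normalizing to unit mean. The median rejects the
    faint sources present in any single frame, leaving the common
    pixel-to-pixel response."""
    stack = np.median(np.stack(exposures), axis=0)
    return stack / stack.mean()

rng = np.random.default_rng(2)
gain = 1.0 + 0.02 * rng.standard_normal((8, 8))  # true pixel response (synthetic)
frames = []
for _ in range(31):
    sky = gain * 100.0                            # uniform sky through the response
    sky[rng.integers(0, 8), rng.integers(0, 8)] += 500.0  # one 'galaxy' per frame
    frames.append(sky)
flat = supersky_flat(frames)                      # recovers the response field
```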

  7. Road spectral and morphological characteristics based rectification of the fluctuation effect of mobile spectral line camera imaging

    NASA Astrophysics Data System (ADS)

    Chao, Guo; Lei, Yan

    2014-11-01

    Mobile spectral line camera imaging is currently developing quickly as a new terrestrial hyper-spectral remote sensing technique with high potential for acquiring spatial information. However, vehicle motion with fluctuation tends to introduce spatial inconsistency into the recorded line images. This study seeks a method to rectify the fluctuation effect common in mobile spectral line camera imaging. The methods used are as follows. First, road spectral characteristics were analyzed and the spectral features were extracted. Second, the morphological features of roads were derived, and the line images were rectified. The third step was to assess the result by comparing the rectified image to a reference acquired by manual interpretation. The result proved to be basically positive. The rectified image can be applied for registration and fusion with laser radar point clouds and CCD images for further applications. The fused data are of great significance for the extraction of road environment information and for environment monitoring.

  8. Geometric calibration of multi-sensor image fusion system with thermal infrared and low-light camera

    NASA Astrophysics Data System (ADS)

    Peric, Dragana; Lukic, Vojislav; Spanovic, Milana; Sekulic, Radmila; Kocic, Jelena

    2014-10-01

    A calibration platform for geometric calibration of a multi-sensor image fusion system is presented in this paper. Accurate geometric calibration of the extrinsic geometric parameters of the cameras, using a planar calibration pattern, is applied. Specific software was developed for the calibration procedure. The patterns used in geometric calibration are prepared with the aim of obtaining maximum contrast in both the visible and infrared spectral ranges - using chessboards whose fields are made of materials with different emissivities. Experiments were conducted in both indoor and outdoor scenarios. Important results of geometric calibration for the multi-sensor image fusion system are the extrinsic parameters, in the form of homography matrices used for the homography transformation from the object plane to the image plane. For each camera a corresponding homography matrix is calculated. These matrices can be used for registration of images from the thermal and low-light cameras. We implemented such an image registration algorithm to confirm the accuracy of the geometric calibration procedure in the multi-sensor image fusion system. Results are given for selected patterns - chessboards with fields made of materials with different emissivities. For the final image registration algorithm in a surveillance system for object tracking we have chosen a multi-resolution image registration algorithm which naturally combines with a pyramidal fusion scheme. The image pyramids generated at each time step of the image registration algorithm may be reused at the fusion stage, so that the overall number of calculations that must be performed is greatly reduced.
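
The registration step described above chains the two per-camera homographies through the common object plane. A minimal NumPy sketch, where the composition x_lowlight = H_lowlight · H_thermal⁻¹ · x_thermal and the function names are illustrative assumptions rather than the authors' code:

```python
import numpy as np

def apply_homography(H, pts):
    """Map N×2 pixel coordinates through a 3×3 homography H."""
    pts = np.asarray(pts, float)
    hom = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = hom @ H.T
    return mapped[:, :2] / mapped[:, 2:3]           # back to Cartesian

def register(H_thermal, H_lowlight, pts_thermal):
    """Transfer points from the thermal image to the low-light image
    via the common object plane (assumed composition)."""
    H = H_lowlight @ np.linalg.inv(H_thermal)
    return apply_homography(H, pts_thermal)
```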

  9. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    SciTech Connect

    Ingram, S; Rao, A; Wendt, R; Castillo, R; Court, L; Yang, J; Beadle, B

    2014-06-01

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination.
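
The structure-recovery step mentioned above can be illustrated with standard linear (DLT) triangulation, assuming the frame-to-frame tracking has already produced 3×4 projection matrices for two camera positions. This is a generic sketch of the triangulation method, not the authors' implementation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.

    P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates
    of the same surface point in the two frames.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3-D point is the null vector of A (smallest
    # singular vector), then dehomogenized.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```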

  10. Light field sensor and real-time panorama imaging multi-camera system and the design of data acquisition

    NASA Astrophysics Data System (ADS)

    Lu, Yu; Tao, Jiayuan; Wang, Keyi

    2014-09-01

    Advanced image sensors and powerful parallel data acquisition chips can be used to collect more detailed and comprehensive light field information. By using multiple single-aperture, high-resolution sensors to record light field data, and processing the light field data in real time, we can obtain a wide field-of-view (FOV), high-resolution image. Wide-FOV, high-resolution imaging has promising applications in navigation, surveillance and robotics. Quality-enhanced 3D rendering, very high resolution depth map estimation, high dynamic range and other applications can be obtained when these large light field data are post-processed. FOV and resolution are contradictory in a traditional single-aperture optical imaging system and cannot be reconciled very well. We have designed a multi-camera light field data acquisition system and optimized each sensor's spatial location and relations; it can be used for wide-FOV, high-resolution real-time imaging. The system uses 5-megapixel CMOS sensors and a field-programmable gate array (FPGA) to acquire the light field data, process it in parallel and transmit it to a PC. A common clock signal is distributed to all of the cameras, and the synchronization precision achieved by each camera is 40 ns. Using 9 CMOS sensors we built an initial system and obtained a high-resolution 360°×60° FOV image. It is intended to be flexible, modular and scalable, with much visibility and control over the cameras. The system uses the high-speed dedicated camera interface CameraLink for data transfer. The details of the hardware architecture, its internal blocks, the algorithms, and the device calibration procedure are presented, along with imaging results.

  11. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    PubMed

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically, once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Imaged subjects include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to the large quantity of antiques stored in museums. PMID:23112656

  13. The High Resolution Stereo Camera (HRSC): 10 Years of Imaging Mars

    NASA Astrophysics Data System (ADS)

    Jaumann, R.; Neukum, G.; Tirsch, D.; Hoffmann, H.

    2014-04-01

    The HRSC Experiment: Imagery is the major source of our current understanding of the geologic evolution of Mars in qualitative and quantitative terms. Imaging is required to enhance our knowledge of Mars with respect to geological processes occurring on local, regional and global scales and is an essential prerequisite for detailed surface exploration. The High Resolution Stereo Camera (HRSC) of ESA's Mars Express Mission (MEx) is designed to simultaneously map the morphology, topography, structure and geologic context of the surface of Mars as well as atmospheric phenomena [1]. The HRSC directly addresses two of the main scientific goals of the Mars Express mission: (1) high-resolution three-dimensional photogeologic surface exploration and (2) the investigation of surface-atmosphere interactions over time; and significantly supports: (3) the study of atmospheric phenomena by multi-angle coverage and limb sounding as well as (4) multispectral mapping by providing high-resolution three-dimensional color context information. In addition, the stereoscopic imagery especially characterizes landing sites and their geologic context [1]. The HRSC surface resolution and the digital terrain models bridge the gap in scales between the highest ground resolution images (e.g., HiRISE) and global coverage observations (e.g., Viking). This is also the case with respect to DTMs (e.g., MOLA and local high-resolution DTMs). HRSC is also used as a cartographic basis to correlate between panchromatic and multispectral stereo data. The unique multi-angle imaging technique of the HRSC supports its stereo capability by providing not only a stereo triplet but also a stereo quintuplet, making the photogrammetric processing very robust [1, 3]. The capabilities for three-dimensional orbital reconnaissance of the Martian surface are ideally met by HRSC, making this camera unique in the international Mars exploration effort.

  14. Automated co-registration of images from multiple bands of Liss-4 camera

    NASA Astrophysics Data System (ADS)

    Radhadevi, P. V.; Solanki, S. S.; Jyothi, M. V.; Nagasubramanian, V.; Varadan, Geeta

    Three multi-spectral bands of the Liss-4 camera of IRS-P6 satellite are physically separated in the focal plane in the along-track direction. The time separation of 2.1 s between the acquisition of first and last bands causes scan lines acquired by different bands to lie along different lines on the ground which are not parallel. Therefore, the raw images of multi-spectral bands need to be registered prior to any simple application like data visualization. This paper describes a method for co-registration of multiple bands of Liss-4 camera through photogrammetric means using the collinearity equations. A trajectory fit using the given ephemeris and attitude data, followed by direct georeferencing is being employed in this model. It is also augmented with a public domain DEM for the terrain dependent input to the model. Finer offsets after the application of this parametric technique are addressed by matching a small subsection of the bands (100×100 pixels) using an image-based method. Resampling is done by going back to original raw data when creating the product after refining image coordinates with the offsets. Two types of aligned products are defined in this paper and their operational flow is described. Datasets covering different types of terrain and also viewed with different geometries are studied with extensive number of points. The band-to-band registration (BBR) accuracies are reported. The algorithm described in this paper for co-registration of Liss-4 bands is an integral part of the software package Value Added Products generation System (VAPS) for operational generation of IRS-P6 data products.
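
The image-based refinement of residual offsets between band subsections can be sketched as a brute-force normalized cross-correlation search over a small shift window. This is illustrative only; the matching method actually used in VAPS is not specified in the abstract:

```python
import numpy as np

def band_offset(ref, tgt, max_shift=5):
    """Estimate the residual integer (row, col) offset between two band
    subsections by exhaustive normalized cross-correlation over a small
    search window around zero shift."""
    best, best_score = (0, 0), -np.inf
    h, w = ref.shape
    m = max_shift
    core = ref[m:h - m, m:w - m]                     # interior of reference
    core_n = (core - core.mean()) / core.std()
    for dr in range(-m, m + 1):
        for dc in range(-m, m + 1):
            win = tgt[m + dr:h - m + dr, m + dc:w - m + dc]
            win_n = (win - win.mean()) / win.std()
            score = (core_n * win_n).mean()          # normalized correlation
            if score > best_score:
                best_score, best = score, (dr, dc)
    return best
```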

  15. Imaging early demineralization on tooth occlusional surfaces with a high definition InGaAs camera

    NASA Astrophysics Data System (ADS)

    Fried, William A.; Fried, Daniel; Chan, Kenneth H.; Darling, Cynthia L.

    In vivo and in vitro studies have shown that high contrast images of tooth demineralization can be acquired in the near-IR due to the high transparency of dental enamel. The purpose of this study is to compare the lesion contrast in reflectance at near-IR wavelengths coincident with high water absorption with those in the visible, the near-IR at 1300-nm and with fluorescence measurements for early lesions in occlusal surfaces. Twenty-four human molars were used in this in vitro study. Teeth were painted with an acid-resistant varnish, leaving a 4×4 mm window in the occlusal surface of each tooth exposed for demineralization. Artificial lesions were produced in the exposed windows after 1 and 2-day exposure to a demineralizing solution at pH 4.5. Lesions were imaged using NIR reflectance at 3 wavelengths, 1310, 1460 and 1600-nm, using a high definition InGaAs camera. Visible light reflectance, and fluorescence with 405-nm excitation and detection at wavelengths greater than 500-nm, were also used to acquire images for comparison. Crossed polarizers were used for reflectance measurements to reduce interference from specular reflectance. The contrast of both the 24 hr and 48 hr lesions was significantly higher (P<0.05) for NIR reflectance imaging at 1460-nm and 1600-nm than for NIR reflectance imaging at 1300-nm, visible reflectance imaging, and fluorescence. The results of this study suggest that NIR reflectance measurements at longer near-IR wavelengths coincident with higher water absorption are better suited for imaging early caries lesions.
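
The lesion contrast compared across modalities above is typically computed from mean intensities of lesion and sound-enamel regions of interest. The definition below is a common one for reflectance images where lesions scatter strongly and appear bright; it is an assumption, since the abstract does not state the formula used:

```python
import numpy as np

def reflectance_contrast(lesion_roi, sound_roi):
    """Contrast of a demineralized lesion in a NIR reflectance image,
    assuming the lesion appears brighter than sound enamel.
    Assumed definition: C = (I_lesion - I_sound) / I_lesion, in [0, 1]."""
    i_lesion = float(np.mean(lesion_roi))
    i_sound = float(np.mean(sound_roi))
    return (i_lesion - i_sound) / i_lesion
```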

  16. Lymphoscintigraphic imaging study for quantitative evaluation of a small field of view (SFOV) gamma camera

    NASA Astrophysics Data System (ADS)

    Alqahtani, M. S.; Lees, J. E.; Bugby, S. L.; Jambi, L. K.; Perkins, A. C.

    2015-07-01

    The Hybrid Compact Gamma Camera (HCGC) is a portable optical-gamma hybrid imager designed for intraoperative medical imaging, particularly for sentinel lymph node biopsy procedures. To investigate the capability of the HCGC in lymphatic system imaging, two lymphoscintigraphic phantoms have been designed and constructed. These phantoms allowed quantitative assessment and evaluation of the HCGC for lymphatic vessel (LV) and sentinel lymph node (SLN) detection. Fused optical and gamma images showed good alignment of the two modalities allowing localisation of activity within the LV and the SLN. At an imaging distance of 10 cm, the spatial resolution of the HCGC during the detection process of the simulated LV was not degraded at a separation of more than 1.5 cm (variation <5%) from the injection site (IS). Even in the presence of the IS the targeted LV was detectable with an acquisition time of less than 2 minutes. The HCGC could detect SLNs containing different radioactivity concentrations (ranging between 1:20 to 1:100 SLN to IS activity ratios) and under various scattering thicknesses (ranging between 5 mm to 30 mm) with high contrast-to-noise ratio (CNR) values (ranging between 11.6 and 110.8). The HCGC can detect the simulated SLNs at various IS to SLN distances, different IS to SLN activity ratios and through varied scattering medium thicknesses. The HCGC provided an accurate physical localisation of radiopharmaceutical uptake in the simulated SLN. These characteristics of the HCGC reflect its suitability for utilisation in lymphatic vessel drainage imaging and SLN imaging in patients in different critical clinical situations such as interventional and surgical procedures.
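
The contrast-to-noise ratio values quoted above follow from region-of-interest statistics on the gamma images. A minimal sketch using the textbook definition (mean-count difference over background noise); the paper's exact normalization is an assumption here:

```python
import numpy as np

def cnr(node_roi, background_roi):
    """Contrast-to-noise ratio of a (simulated) sentinel lymph node:
    difference of mean counts between node and background regions,
    divided by the standard deviation of the background."""
    return (np.mean(node_roi) - np.mean(background_roi)) / np.std(background_roi)
```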

  17. A Color-Coded Single Camera Three-Dimensional Defocusing Particle Image Velocimetry System

    NASA Astrophysics Data System (ADS)

    Tien, Wei-Hsin; Dabiri, Dana

    2007-11-01

    A color-coded 3-D Defocusing Particle Image Velocimetry (3DDPIV) is a new modification of the 3-D measurement system originally developed by Willert & Gharib (1992). It uses a single lens with 3 color-coded pinholes to overcome limitations of image saturation due to multiple exposures of each particle, and a 3-CCD color camera for image acquisition. The spectrum difference between the color filters and the CCD sensors is solved by a color space linear transformation, separating each pinhole's exposure. The requirement for a high intensity light source prevalent in conventional lighting setups is solved by backlighting the field-of-view and seeding the flow with black particles. An effective pinhole separation, d', is proposed for use with multi-element lenses, and a multi-surface refraction correction to d' is also proposed. Calibration results of the system with and without fluid are performed and compared. The technique is successfully applied to a buoyancy-driven flow, and a three-dimensional velocity field is extracted. The image volume is 3.25 mm × 2.45 mm × 1.5 mm.

  18. Two Years of Digital Terrain Model Production Using the Lunar Reconnaissance Orbiter Narrow Angle Camera

    NASA Astrophysics Data System (ADS)

    Burns, K.; Robinson, M. S.; Speyerer, E.; LROC Science Team

    2011-12-01

    One of the primary objectives of the Lunar Reconnaissance Orbiter Camera (LROC) is to gather stereo observations with the Narrow Angle Camera (NAC). These stereo observations are used to generate digital terrain models (DTMs). The NAC has a pixel scale of 0.5 to 2.0 meters but was not designed for stereo observations and thus requires the spacecraft to roll off-nadir to acquire these images. Slews interfere with the data collection of the other instruments, so opportunities are currently limited to four per day. Arizona State University has produced DTMs from 95 stereo pairs for 11 Constellation Project (CxP) sites (Aristarchus, Copernicus crater, Gruithuisen domes, Hortensius domes, Ina D-caldera, Lichtenberg crater, Mare Ingenii, Marius hills, Reiner Gamma, South Pole-Aitken Rim, Sulpicius Gallus) as well as 30 other regions of scientific interest (including: Bhabha crater, highest and lowest elevation points, Highland Ponds, Kugler Anuchin, Linne Crater, Planck Crater, Slipher crater, Sears Crater, Mandel'shtam Crater, Virtanen Graben, Compton/Belkovich, Rumker Domes, King Crater, Luna 16/20/23/24 landing sites, Ranger 6 landing site, Wiener F Crater, Apollo 11/14/15/17, fresh craters, impact melt flows, Larmor Q crater, Mare Tranquillitatis pit, Hansteen Alpha, Moore F Crater, and Lassell Massif). To generate DTMs, the USGS ISIS software and SOCET SET® from BAE Systems are used. To increase the absolute accuracy of the DTMs, data obtained from the Lunar Orbiter Laser Altimeter (LOLA) are used to co-register the NAC images and define the geodetic reference frame. NAC DTMs have been used in examination of several sites, e.g. Compton-Belkovich, Marius Hills and Ina D-caldera [1-3]. LROC will continue to acquire high-resolution stereo images throughout the science phase of the mission and any extended mission opportunities, thus providing a vital dataset for scientific research as well as future human and robotic exploration. [1] B.L. 
Jolliff (2011) Nature Geoscience, in press. [2] Lawrence et al. (2011) LPSC XLII, Abst 2228. [3] Garry et al. (2011) LPSC XLII, Abst 2605.

  19. Field-programmable gate array-based hardware architecture for high-speed camera with KAI-0340 CCD image sensor

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Yan, Su; Zhou, Zuofeng; Cao, Jianzhong; Yan, Aqi; Tang, Linao; Lei, Yangjie

    2013-08-01

    We present a field-programmable gate array (FPGA)-based hardware architecture for a high-speed camera featuring fast auto-exposure control and color filter array (CFA) demosaicing. The proposed hardware architecture includes the design of charge-coupled device (CCD) drive circuits, image processing circuits, and power supply circuits. The CCD drive circuits translate the TTL (transistor-transistor logic) level timing sequences produced by the image processing circuits into the timing sequences under which the CCD image sensor can output analog image signals. The image processing circuits convert the analog signals to digital signals for subsequent processing; the TTL timing, auto-exposure control, CFA demosaicing, and gamma correction are accomplished in this module. The power supply circuits provide power for the whole system, which is very important for image quality: power noise affects image quality directly, and we reduce it very effectively in hardware. In this system the CCD is a KAI-0340, which can output 210 full-resolution frames per second, and our camera works outstandingly in this mode. Traditional auto-exposure control algorithms reach a proper exposure level so slowly that it was necessary to develop a fast auto-exposure control method; we present a new auto-exposure algorithm suited to high-speed cameras. Color demosaicing is critical for digital cameras because it converts the Bayer sensor mosaic output to a full-color image, which determines the output image quality of the camera. Complex algorithms achieve high quality but cannot be implemented in hardware, so a low-complexity demosaicing method is presented that can be implemented in hardware while satisfying quality requirements. Experimental results are given at the end of the paper.
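
A low-complexity demosaicing step of the kind described can be sketched as bilinear interpolation on an RGGB Bayer mosaic: for each colour channel, average the available samples in each pixel's 3×3 neighbourhood. This is a generic illustration of the technique class (the paper's actual hardware algorithm is not specified in the abstract); the wrap-around border handling via np.roll is a simplification:

```python
import numpy as np

def demosaic_bilinear(raw):
    """Low-complexity CFA demosaicing for an RGGB Bayer mosaic:
    per colour, average the available samples in each 3x3
    neighbourhood (borders wrap around via np.roll)."""
    h, w = raw.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r_mask = (yy % 2 == 0) & (xx % 2 == 0)   # R sites in an RGGB pattern
    b_mask = (yy % 2 == 1) & (xx % 2 == 1)   # B sites
    g_mask = ~(r_mask | b_mask)              # G sites
    rgb = np.zeros((h, w, 3))
    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        plane = np.where(mask, raw, 0.0)     # keep only this colour's samples
        cnt = mask.astype(float)
        num = sum(np.roll(np.roll(plane, dy, 0), dx, 1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        den = sum(np.roll(np.roll(cnt, dy, 0), dx, 1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        rgb[..., c] = num / np.maximum(den, 1.0)  # mean of available samples
    return rgb
```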

  20. ANTS a simulation package for secondary scintillation Anger-camera type detector in thermal neutron imaging

    NASA Astrophysics Data System (ADS)

    Morozov, A.; Defendi, I.; Engels, R.; Fraga, F. A. F.; Fraga, M. M. F. R.; Guerard, B.; Jurkovic, M.; Kemmerling, G.; Manzin, G.; Margato, L. M. S.; Niko, H.; Pereira, L.; Petrillo, C.; Peyaud, A.; Piscitelli, F.; Raspino, D.; Rhodes, N. J.; Sacchetti, F.; Schooneveld, E. M.; Van Esch, P.; Zeitelhack, K.

    2012-08-01

    A custom and fully interactive simulation package ANTS (Anger-camera type Neutron detector: Toolkit for Simulations) has been developed to optimize the design and operation conditions of secondary scintillation Anger-camera type gaseous detectors for thermal neutron imaging. The simulation code accounts for all physical processes related to the neutron capture, energy deposition pattern, drift of electrons of the primary ionization and secondary scintillation. The photons are traced considering the wavelength-resolved refraction and transmission of the output window. Photo-detection accounts for the wavelength-resolved quantum efficiency, angular response, area sensitivity, gain and single-photoelectron spectra of the photomultipliers (PMTs). The package allows for several geometrical shapes of the PMT photocathode (round, hexagonal and square) and offers a flexible PMT array configuration: up to 100 PMTs in a custom arrangement with the square or hexagonal packing. Several read-out patterns of the PMT array are implemented. Reconstruction of the neutron capture position (projection on the plane of the light emission) is performed using the center of gravity, maximum likelihood or weighted least squares algorithm. Simulation results reproduce well the preliminary results obtained with a small-scale detector prototype. ANTS executables can be downloaded from http://coimbra.lip.pt/~andrei/.

  1. Retrieval of Garstang's emission function from all-sky camera images

    NASA Astrophysics Data System (ADS)

    Kocifaj, Miroslav; Solano Lamphar, Héctor Antonio; Kundracik, František

    2015-10-01

    The emission function of ground-based light sources predetermines the skyglow features to a large extent, while most mathematical models used to predict the night sky brightness require information on this function. The radiant intensity distribution on a clear sky is experimentally determined as a function of zenith angle using the theoretical approach published only recently in MNRAS, 439, 3405-3413. We have made the experiments in two localities in Slovakia and Mexico by means of two digital single lens reflex professional cameras operating with different lenses that limit the system's field-of-view to either 180° or 167°. The purpose of using two cameras was to identify variances between the two different apertures. Images are taken at different distances from an artificial light source (a city) with the intention of determining the ratio of zenith radiance relative to horizontal irradiance. Subsequently, the information on the fraction of the light radiated directly into the upward hemisphere (F) is extracted. The results show that inexpensive devices can properly identify the upward emissions with adequate reliability as long as the clear sky radiance distribution is dominated by the largest ground-based light source. Highly unstable turbidity conditions can also make the parameter F difficult or even impossible to retrieve. Measurements at low elevation angles should be avoided due to a potentially parasitic effect of direct light emissions from luminaires surrounding the measuring site.

  2. Experimental Comparison of the High-Speed Imaging Performance of an EM-CCD and sCMOS Camera in a Dynamic Live-Cell Imaging Test Case

    PubMed Central

    Beier, Hope T.; Ibey, Bennett L.

    2014-01-01

    The study of living cells may require advanced imaging techniques to track weak and rapidly changing signals. Fundamental to this need is the recent advancement in camera technology. Two camera types, specifically sCMOS and EM-CCD, promise both high signal-to-noise and high speed (>100 fps), leaving researchers with a critical decision when determining the best technology for their application. In this article, we compare two cameras using a live-cell imaging test case in which small changes in cellular fluorescence must be rapidly detected with high spatial resolution. The EM-CCD maintained an advantage of being able to acquire discernible images with a lower number of photons due to its EM-enhancement. However, if high-resolution images at speeds approaching or exceeding 1000 fps are desired, the flexibility of the full-frame imaging capabilities of sCMOS is superior. PMID:24404178
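
The EM-CCD versus sCMOS trade-off described above can be made concrete with a textbook per-pixel SNR model: EM gain suppresses effective read noise but introduces a multiplicative excess-noise factor (about √2), while sCMOS has low read noise and no excess factor. The model and any parameter values are illustrative assumptions, not figures from the paper:

```python
import numpy as np

def snr(photons, qe, read_noise, dark=0.0, em_gain=1.0, excess=1.0):
    """Textbook per-pixel SNR model for comparing camera technologies.

    photons: photons reaching the pixel; qe: quantum efficiency;
    dark: dark/background electrons; em_gain: electron-multiplying gain
    (suppresses effective read noise, EM-CCD only); excess:
    multiplicative-noise factor (~sqrt(2) for EM-CCDs, 1 for sCMOS).
    """
    signal = qe * photons  # detected photoelectrons
    noise = np.sqrt(excess**2 * (signal + dark) + (read_noise / em_gain)**2)
    return signal / noise

# Illustrative (assumed) parameter sets for the two technologies:
snr_emccd = snr(50, 0.9, 45.0, em_gain=300.0, excess=np.sqrt(2.0))
snr_scmos = snr(50, 0.7, 1.5)
```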

  3. Research on auto-calibration technology of the image plane's center of 360-degree and all round looking camera

    NASA Astrophysics Data System (ADS)

    Zhang, Shaojun; Xu, Xiping

    2015-10-01

    The 360-degree all-round-looking camera, with characteristics suitable for automatic analysis and judgment of the carrier's ambient environment by image recognition algorithms, is usually applied in the opto-electronic radar of robots and smart cars. To ensure the stability and consistency of image processing results in mass production, the centers of the image planes of different cameras must coincide, which requires calibrating the position of the image plane's center. The traditional mechanical calibration method, and the electronic adjustment mode of entering offsets manually, both rely on human eyes, are inefficient, and exhibit a large error distribution. In this paper, an approach for auto-calibration of the image plane of this camera is presented. The image formed by the 360-degree all-round-looking camera is ring-shaped, consisting of two concentric circles: a smaller inner circle and a bigger outer circle. The technique exploits exactly these characteristics. By recognizing the two circles through a Hough transform algorithm and calculating the center position, we obtain the accurate center of the image, that is, the deviation between the optical axis and the center of the image sensor. The program then configures the image sensor chip through the I2C bus automatically, so the center of the image plane can be adjusted automatically and accurately. The technique has been applied in practice; it improves productivity and guarantees consistent product quality.
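
A simplified stand-in for the circle-detection step: for an ideal ring bounded by two concentric circles, the centroid of above-threshold pixels already coincides with the common centre. The paper's Hough transform additionally recovers the two radii and is robust to partial occlusion; this sketch only illustrates the centre estimate:

```python
import numpy as np

def ring_center(img, thresh):
    """Estimate the centre of a ring-shaped image as the centroid of
    above-threshold pixels. Valid for a complete, symmetric ring;
    a Hough transform is needed for partial or noisy rings."""
    ys, xs = np.nonzero(img > thresh)
    return xs.mean(), ys.mean()   # (x, y) in pixel coordinates
```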

  4. An accelerated threshold-based back-projection algorithm for Compton camera image reconstruction

    SciTech Connect

    Mundy, Daniel W.; Herman, Michael G.

    2011-01-15

    Purpose: Compton camera imaging (CCI) systems are currently under investigation for radiotherapy dose reconstruction and verification. The ability of such a system to provide real-time images during dose delivery will be limited by the computational speed of the image reconstruction algorithm. In this work, the authors present a fast and simple method by which to generate an initial back-projected image from acquired CCI data, suitable for use in a filtered back-projection algorithm or as a starting point for iterative reconstruction algorithms, and compare its performance to the current state of the art. Methods: Each detector event in a CCI system describes a conical surface that includes the true point of origin of the detected photon. Numerical image reconstruction algorithms require, as a first step, the back-projection of each of these conical surfaces into an image space. The algorithm presented here first generates a solution matrix for each slice of the image space by solving the intersection of the conical surface with the image plane. Each element of the solution matrix is proportional to the distance of the corresponding voxel from the true intersection curve. A threshold function was developed to extract those pixels sufficiently close to the true intersection to generate a binary intersection curve. This process is repeated for each image plane for each CCI detector event, resulting in a three-dimensional back-projection image. The performance of this algorithm was tested against a marching algorithm known for speed and accuracy. Results: The threshold-based algorithm was found to be approximately four times faster than the current state of the art with minimal deficit to image quality, arising from the fact that a generically applicable threshold function cannot provide perfect results in all situations. 
The algorithm fails to extract a complete intersection curve in image slices near the detector surface for detector event cones having axes nearly parallel to the image plane. This effect decreases the sum of the image, thereby also affecting the mean, standard deviation, and SNR of the image. All back-projected events associated with a simulated point source intersected the voxel containing the source and the FWHM of the back-projected image was similar to that obtained from the marching method. Conclusions: The slight deficit to image quality observed with the threshold-based back-projection algorithm described here is outweighed by the 75% reduction in computation time. The implementation of this method requires the development of an optimum threshold function, which determines the overall accuracy of the method. This makes the algorithm well-suited to applications involving the reconstruction of many large images, where the time invested in threshold development is offset by the decreased image reconstruction time. Implemented in a parallel-computing environment, the threshold-based algorithm has the potential to provide real-time dose verification for radiation therapy.
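The slice-by-slice thresholding step described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction of the idea, not the authors' code: the fixed half-pixel threshold below stands in for the optimized threshold function the paper develops, and all names are hypothetical.

```python
import numpy as np

def cone_slice_backprojection(apex, axis, half_angle, z, xs, ys, pixel):
    """Binary intersection of one Compton event cone with the image plane
    at height z: mark voxels whose (approximate) distance to the cone
    surface falls below half a pixel width."""
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    V = np.stack([X - apex[0], Y - apex[1], np.full_like(X, z - apex[2])], axis=-1)
    r = np.linalg.norm(V, axis=-1)
    cos_phi = V @ axis / np.maximum(r, 1e-12)          # angle to the cone axis
    phi = np.arccos(np.clip(cos_phi, -1.0, 1.0))
    # small-angle approximation of the voxel-to-cone-surface distance
    dist = np.abs(r * np.sin(phi - half_angle))
    return dist < 0.5 * pixel
```

Repeating this over all slices and all detector events, and summing the binary curves, yields the three-dimensional back-projection image that seeds the filtered or iterative reconstruction.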

  5. Multi-temporal database of High Resolution Stereo Camera (HRSC) images - Alpha version

    NASA Astrophysics Data System (ADS)

    Erkeling, G.; Luesebrink, D.; Hiesinger, H.; Reiss, D.; Jaumann, R.

    2014-04-01

Image data transmitted to Earth by Martian spacecraft since the 1970s, for example by Mariner and Viking, Mars Global Surveyor (MGS), Mars Express (MEx) and the Mars Reconnaissance Orbiter (MRO), show that the surface of Mars has changed dramatically and is continually changing [e.g., 1-8]. The changes are attributed to a large variety of atmospheric, geological and morphological processes, including eolian processes [9,10], mass wasting [11], changes of the polar caps [12] and impact cratering [13]. In addition, comparisons between Mariner, Viking and Mars Global Surveyor images suggest that more than one third of the Martian surface has brightened or darkened by at least 10% [6]. Albedo changes can affect the global heat balance and the circulation of winds, which can in turn drive further surface changes [14,15]. The High Resolution Stereo Camera (HRSC) [16,17] on board Mars Express (MEx) covers large areas at high resolution and is therefore well suited to detect the frequency, extent and origin of Martian surface changes. Since 2003, HRSC has acquired high-resolution images of the Martian surface and contributed to Martian research, with a focus on surface morphology, geology and mineralogy, the role of liquid water on the surface and in the atmosphere, volcanism, and the proposed climate change throughout Martian history, and has significantly improved our understanding of the evolution of Mars [18-21]. The HRSC data are available at ESA's Planetary Science Archive (PSA) as well as through the NASA Planetary Data System (PDS). Both platforms are frequently used by the scientific community and provide additional software and environments to generate map-projected and geometrically calibrated HRSC data.
However, while previews of the images are available, there is no way to quickly and conveniently survey the spatial and temporal availability of HRSC images in a specific region, which is essential for detecting the surface changes that occurred between two or more images.

  6. Mars Orbiter Camera Acquires High Resolution Stereoscopic Images of the Viking One Landing Site

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Two MOC images of the vicinity of the Viking Lander 1 (MOC 23503 and 25403), acquired separately on 12 April 1998 at 08:32 PDT and 21 April 1998 at 13:54 PDT (respectively), are combined here in a stereoscopic anaglyph. The more recent, slightly better quality image is in the red channel, while the earlier image is shown in the blue and green channels. Only the overlap portion of the images is included in the composite.
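The compositing scheme described here (the newer image driving the red channel, the older one the green and blue channels) is straightforward to reproduce. A minimal NumPy sketch, assuming the two frames are already cropped to their overlap and co-registered; the function and argument names are illustrative:

```python
import numpy as np

def make_anaglyph(recent, earlier):
    """Stack two co-registered grayscale frames into a red/cyan anaglyph:
    the more recent frame becomes the red channel, the earlier frame
    fills the green and blue channels."""
    if recent.shape != earlier.shape:
        raise ValueError("frames must be co-registered to the same shape")
    return np.stack([recent, earlier, earlier], axis=-1)
```

Viewed through red/blue glasses, each eye then sees one acquisition, and the parallax between the two viewing geometries produces the depth impression.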

    Image 23503 was taken at a viewing angle of 31.6° from vertical; 25403 was taken at an angle of 22.4°, for a difference of 9.4°. Although this is not as large a difference as is typically used in stereo mapping, it is sufficient to provide some indication of relief, at least in locations of high relief.

    The image shows the raised rims and deep interiors of the larger impact craters in the area (the largest crater is about 650 m/2100 feet across). It shows that the relief on the ridges is very subtle and that, in general, the Viking landing site is very flat. This result is, of course, expected: the VL-1 site was chosen specifically because it was likely to have only low to very low slopes, since steeper slopes represented a potential hazard to the spacecraft.

    Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.

  7. Space-bandwidth extension in parallel phase-shifting digital holography using a four-channel polarization-imaging camera.

    PubMed

    Tahara, Tatsuki; Ito, Yasunori; Xia, Peng; Awatsuji, Yasuhiro; Nishio, Kenzo; Ura, Shogo; Kubota, Toshihiro; Matoba, Osamu

    2013-07-15

    We propose a method for extending the space bandwidth (SBW) available for recording an object wave in parallel phase-shifting digital holography using a four-channel polarization-imaging camera. A linear spatial carrier of the reference wave is introduced to an optical setup of parallel four-step phase-shifting interferometry using a commercially available polarization-imaging camera that has four polarization-detection channels. Then a hologram required for parallel two-step phase shifting, which is a technique capable of recording the widest SBW in parallel phase shifting, can be obtained. The effectiveness of the proposed method was numerically and experimentally verified. PMID:23939081
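For context, the pixel-wise reconstruction behind (sequential) four-step phase shifting is the textbook formula sketched below; the parallel variants discussed in the paper interleave these phase-shifted samples spatially across the polarization-detection channels rather than in time. A sketch under the assumption of a plane reference wave of real amplitude `r_amp` (names are illustrative):

```python
import numpy as np

def four_step_reconstruct(i0, i90, i180, i270, r_amp=1.0):
    """Recover the complex object wave O from four holograms recorded with
    reference phase shifts 0, pi/2, pi, 3pi/2. Since
        I(d) = |O|^2 + r^2 + 2 r Re(O e^{-i d}),
    it follows that I0 - I180 = 4 r Re(O) and I90 - I270 = 4 r Im(O)."""
    return ((i0 - i180) + 1j * (i90 - i270)) / (4.0 * r_amp)
```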

  8. A Powerful New Imager for HST: Performance and Early Science Results from Wide Field Camera 3

    NASA Technical Reports Server (NTRS)

    Kimble, Randy

    2009-01-01

    Wide Field Camera 3 (WFC3) was installed into the Hubble Space Telescope during the highly successful Servicing Mission 4 in May 2009. WFC3 offers sensitive, high resolution imaging over a broad wavelength range from the near UV through the visible to the near IR (200 nm - 1700 nm). Its capabilities at the near-UV and near-IR ends of that range represent particularly large advances over those of previous HST instruments. In this talk, I will review the purpose and design of the instrument, describe its performance in flight, and highlight some of the initial scientific results from the instrument, including its use in deep infrared surveys in search of galaxies at very high redshift, in investigations of the global processes of star formation in nearby galaxies, and in the study of the recent impact on Jupiter.

  9. Evaluation of a CdTe semiconductor based compact gamma camera for sentinel lymph node imaging

    SciTech Connect

    Russo, Paolo; Curion, Assunta S.; Mettivier, Giovanni; Esposito, Michela; Aurilio, Michela; Caraco, Corradina; Aloj, Luigi; Lastoria, Secondo

    2011-03-15

    Purpose: The authors assembled a prototype compact gamma-ray imaging probe (MediPROBE) for sentinel lymph node (SLN) localization. This probe is based on a semiconductor pixel detector. Its basic performance was assessed in the laboratory and clinically in comparison with a conventional gamma camera. Methods: The room-temperature CdTe pixel detector (1 mm thick) has 256×256 square pixels arranged with a 55 μm pitch (sensitive area 14.08×14.08 mm²), coupled pixel-by-pixel via bump-bonding to the Medipix2 photon-counting readout CMOS integrated circuit. The imaging probe is equipped with a set of three interchangeable knife-edge pinhole collimators (0.94, 1.2, or 2.1 mm effective diameter at 140 keV) and its focal distance can be adjusted to set a given field of view (FOV). A typical FOV of 70 mm at 50 mm skin-to-collimator distance corresponds to a minification factor of 1:5. The detector is operated at a single low-energy threshold of about 20 keV. Results: For 99mTc at 50 mm distance, a background-subtracted sensitivity of 6.5×10⁻³ cps/kBq and a system spatial resolution of 5.5 mm FWHM were obtained for the 0.94 mm pinhole; corresponding values for the 2.1 mm pinhole were 3.3×10⁻² cps/kBq and 12.6 mm. The dark count rate was 0.71 cps. Clinical images in three patients with melanoma indicate detection of the SLNs with acquisition times between 60 and 410 s for an injected activity of 26 MBq of 99mTc, with prior localization by standard gamma camera lymphoscintigraphy. Conclusions: The laboratory performance of this imaging probe is limited by the pinhole collimator and by the necessity of working in minification due to the limited detector size. However, under clinical operating conditions, the CdTe imaging probe was effective in detecting SLNs with adequate resolution and acceptable sensitivity.
Sensitivity is expected to improve with the future availability of a larger CdTe detector permitting operation at shorter distances from the patient's skin.

  10. First estimates of fumarolic SO2 fluxes from Putana volcano, Chile, using an ultraviolet imaging camera

    NASA Astrophysics Data System (ADS)

    Stebel, K.; Amigo, A.; Thomas, H.; Prata, A. J.

    2015-07-01

    Putana is a stratovolcano in the central Andes volcanic zone in northern Chile, on the border with Bolivia. Fumarolic activity has been visible at its summit crater at 5890 m altitude from long distances since the early 1800s. However, because of its remote location, neither detailed geological studies nor gas-flux monitoring have been carried out, and its evolution therefore remains unknown. On November 28, 2012 an ultraviolet (UV) imaging camera was transported to Putana, and for about 30 min images of the fumaroles were recorded at 12 Hz. These observations provide the first measurements of SO2 fluxes from the fumarolic field of Putana and demonstrate the applicability of the UV camera for detecting such emissions. The measurement series was used to assess whether the sampling rate of the data influences the estimate of the gas flux. The results suggest that measurements made at 10 s and 1 min intervals capture the inherent (turbulent) variability in both the plume/wind speed and the SO2 flux. Relatively high SO2 fluxes, varying between 0.3 kg s⁻¹ and 1.4 kg s⁻¹ (26 t/day and 121 t/day assuming constant degassing throughout the day), were observed on November 28, 2012. Furthermore, we demonstrate how an optical flow algorithm can be integrated with the SO2 retrieval to calculate SO2 fluxes at the pixel level. Average values of 0.64 ± 0.20 kg s⁻¹ and 0.70 ± 0.53 kg s⁻¹ were retrieved from a "classical" transect method and the "advanced" optical-flow-based retrieval, respectively. Assuming constant emission at all times, these values correspond to an average annual SO2 burden of 20-22 kt.
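The flux figures quoted above follow from a simple transect integration. A hedged sketch of the arithmetic (column-density units, pixel scale and function names are illustrative, and a real retrieval must also project the wind vector onto the transect normal):

```python
import numpy as np

def so2_flux_kg_per_s(columns_kg_m2, pixel_size_m, plume_speed_m_s):
    """Emission rate from one plume cross section: integrate the SO2
    column densities along the transect, then multiply by the plume
    speed perpendicular to the transect."""
    return plume_speed_m_s * np.sum(columns_kg_m2) * pixel_size_m

def kg_per_s_to_t_per_day(flux_kg_s):
    """Daily-equivalent emission, assuming constant degassing."""
    return flux_kg_s * 86400.0 / 1000.0
```

The unit conversion reproduces the abstract's numbers: 1.4 kg s⁻¹ × 86.4 ≈ 121 t/day and 0.3 kg s⁻¹ ≈ 26 t/day.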

  11. Star Observations by Asteroid Multiband Imaging Camera (AMICA) on Hayabusa (MUSES-C) Cruising Phase

    NASA Astrophysics Data System (ADS)

    Saito, J.; Hashimoto, T.; Kubota, T.; Hayabusa AMICA Team

    MUSES-C is the first Japanese asteroid mission, and also a technology demonstration mission, to the S-type asteroid 25143 Itokawa (1998 SF36). It was launched on May 9, 2003, and renamed Hayabusa after the spacecraft was confirmed to be on its interplanetary orbit. The spacecraft performed an Earth swingby for a gravity assist on its way to Itokawa in May 2004. Arrival at Itokawa is scheduled for summer 2005. During the visit to Itokawa, remote-sensing observations with AMICA, NIRS (Near Infrared Spectrometer), XRS (X-ray Fluorescence Spectrometer), and LIDAR will be performed, and the spacecraft will descend and collect surface samples at touchdown. The captured asteroid sample will be returned to the Earth in the middle of 2007. The telescopic optical navigation camera (ONC-T), with seven bandpass filters (and one wide-band filter) and polarizers, is called AMICA (Asteroid Multiband Imaging CAmera) when used for scientific observations. AMICA's seven bandpass filters are nearly equivalent to the seven filters of the ECAS (Eight Color Asteroid Survey) system, so the spectroscopic data obtained can be compared with previous ECAS observations. AMICA also has four polarizers, located on one edge of the CCD chip (each covering 1.1 x 1.1 degrees). Using these polarizers, we can obtain polarimetric information on the target asteroid's surface. Since last November, we have planned test observations of several stars and planets with AMICA and have successfully obtained images. Here, we briefly report these observations and their calibration against ground-based observational data. In addition, we also present the current status of AMICA.

  12. Human detection based on the generation of a background image by using a far-infrared light camera.

    PubMed

    Jeon, Eun Som; Choi, Jong-Suk; Lee, Ji Hoon; Shin, Kwang Yong; Kim, Yeong Gon; Le, Toan Thanh; Park, Kang Ryoung

    2015-01-01

    The need for computer vision-based human detection has increased in fields such as security, intelligent surveillance and monitoring systems. However, the performance of human detection based on visible light cameras is limited by factors such as nonuniform illumination, shadows and low external light in the evening and at night. Consequently, human detection based on thermal (far-infrared light) cameras has been considered as an alternative. However, its performance is influenced by factors such as the low image resolution, low contrast and large noise of thermal images. It is also affected by the high temperature of backgrounds during the day. To solve these problems, we propose a new method for detecting human areas in thermal camera images. Compared to previous works, the proposed research is novel in the following four aspects. First, a background image is generated by median and average filtering; additional filtering procedures based on maximum gray level, size filtering and region erasing are applied to remove the human areas from the background image. Second, candidate human regions in the input image are located by combining the pixel and edge difference images between the input and background images. The thresholds for the difference images are adaptively determined based on the brightness of the generated background image. Noise components are removed by component labeling, a morphological operation and size filtering. Third, detected areas that may contain more than two human regions are merged or separated based on information in the horizontal and vertical histograms of the detected area. This procedure is also operated adaptively based on the brightness of the generated background image.
Fourth, a further procedure for the separation and removal of candidate human regions is performed based on the size and the height-to-width ratio of the candidate regions, considering the camera viewing direction and perspective projection. Experimental results with two types of databases confirm that the proposed method outperforms other methods. PMID:25808774
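The first two steps (background generation and adaptive differencing) can be sketched as follows. This is a simplified stand-in for the paper's pipeline: a plain median background and a threshold that merely scales with background brightness replace the tuned filtering and threshold rules described above, and all names are hypothetical.

```python
import numpy as np

def detect_candidates(frames, current, k=3.0):
    """Locate candidate foreground pixels in a thermal frame.

    frames  : stack of past frames used to build the background model
    current : the frame to analyze
    Returns (mask, background). The threshold scales with the spread of
    the difference image and with the mean background brightness."""
    background = np.median(frames, axis=0)            # background model
    diff = np.abs(current.astype(float) - background)  # pixel difference image
    thr = k * max(diff.std(), 1e-6) * (0.5 + background.mean() / 255.0)
    return diff > thr, background
```

In the full method, the resulting mask would then be cleaned by component labeling, morphology, and size filtering before the histogram-based merge/split steps.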

  13. Human Detection Based on the Generation of a Background Image by Using a Far-Infrared Light Camera

    PubMed Central

    Jeon, Eun Som; Choi, Jong-Suk; Lee, Ji Hoon; Shin, Kwang Yong; Kim, Yeong Gon; Le, Toan Thanh; Park, Kang Ryoung

    2015-01-01

    The need for computer vision-based human detection has increased in fields such as security, intelligent surveillance and monitoring systems. However, the performance of human detection based on visible light cameras is limited by factors such as nonuniform illumination, shadows and low external light in the evening and at night. Consequently, human detection based on thermal (far-infrared light) cameras has been considered as an alternative. However, its performance is influenced by factors such as the low image resolution, low contrast and large noise of thermal images. It is also affected by the high temperature of backgrounds during the day. To solve these problems, we propose a new method for detecting human areas in thermal camera images. Compared to previous works, the proposed research is novel in the following four aspects. First, a background image is generated by median and average filtering; additional filtering procedures based on maximum gray level, size filtering and region erasing are applied to remove the human areas from the background image. Second, candidate human regions in the input image are located by combining the pixel and edge difference images between the input and background images. The thresholds for the difference images are adaptively determined based on the brightness of the generated background image. Noise components are removed by component labeling, a morphological operation and size filtering. Third, detected areas that may contain more than two human regions are merged or separated based on information in the horizontal and vertical histograms of the detected area. This procedure is also operated adaptively based on the brightness of the generated background image.
Fourth, a further procedure for the separation and removal of candidate human regions is performed based on the size and the height-to-width ratio of the candidate regions, considering the camera viewing direction and perspective projection. Experimental results with two types of databases confirm that the proposed method outperforms other methods. PMID:25808774

  14. Versatile illumination platform and fast optical switch to give standard observation camera gated active imaging capacity

    NASA Astrophysics Data System (ADS)

    Grasser, R.; Peyronneaudi, Benjamin; Yon, Kevin; Aubry, Marie

    2015-10-01

    CILAS, a subsidiary of Airbus Defense and Space, develops, manufactures and sells laser-based optronics equipment for defense and homeland security applications. Part of its activity is related to active systems for threat detection, recognition and identification. Active surveillance and active imaging systems are often required to achieve identification capability for long-range observation in adverse conditions. To ease the deployment of active imaging systems, which are often complex and expensive, CILAS proposes a new concept. It consists of two apparatus working together. On one side, a patented versatile laser platform provides high peak power laser illumination for long-range observation. On the other side, a small camera add-on acts as a fast optical switch that selects only photons with a specific time of flight. Together, the versatile illumination platform and the fast optical switch form an independent unit, the so-called "flash module", which gives virtually any passive observation system gated active imaging capability in the NIR and SWIR.

  15. Image quality tests on the Canarias InfraRed Camera Experiment (CIRCE)

    NASA Astrophysics Data System (ADS)

    Lasso Cabrera, Nestor M.; Eikenberry, Stephen S.; Garner, Alan; Raines, S. Nicholas; Charcos-Llorens, Miguel V.; Edwards, Michelle L.; Marin-Franch, Antonio

    2012-09-01

    In this paper we present the results of image quality tests performed on the optical system of the Canarias InfraRed Camera Experiment (CIRCE), a visitor-class near-IR imager, spectrograph, and polarimeter for the 10.4 meter Gran Telescopio Canarias (GTC). The CIRCE optical system comprises eight gold-coated aluminum alloy 6061 mirrors. We present a surface roughness analysis of each individual component as well as the optical quality of the whole system. We found that the surface roughness of every individual mirror is within specification except for fold mirrors 1 and 2; we plan to have these components re-cut and re-coated. We used a flat 0.2-arcsecond pinhole mask placed in the focal plane of the telescope to perform the optical quality tests of the system. The pinhole mask covers the entire field of view of the instrument. The resulting image quality allows seeing-limited performance down to a seeing of 0.3 arcseconds FWHM. We also observed that our optical system produces a negative field curvature, which compensates for the field curvature of the Ritchey-Chretien GTC design once the instrument is on the telescope.

  16. Performance and calibration of the AXAF High-Resolution Camera I: imaging readout

    NASA Astrophysics Data System (ADS)

    Kenter, Almus T.; Chappell, Jon H.; Kobayashi, K.; Kraft, Ralph P.; Meehan, G. R.; Murray, Stephen S.; Zombeck, Martin V.; Fraser, George W.; Pearson, James F.; Lees, John E.; Brunton, Adam N.; Pearce, Sarah E.; Barbera, Marco; Collura, Alfonso; Serio, Salvatore

    1997-10-01

    The High Resolution Camera (HRC) will be one of the two focal plane instruments on the Advanced X-ray Astrophysics Facility (AXAF). AXAF will perform high resolution spectrometry and imaging in the X-ray band of 0.1 to 10 keV. The HRC instrument consists of two detectors: the HRC-I for imaging and the HRC-S for spectroscopy. Each HRC detector consists of a thin aluminized polyimide window, a chevron pair of microchannel plates (MCPs) and a crossed-grid charge readout. The HRC-I is a 100 by 100 mm detector optimized for high resolution imaging and timing; the HRC-S is an approximately 30 by 300 mm detector optimized to function as the readout for the low energy transmission grating spectrometer (LETGS). In this paper we present the absolute quantum efficiency, spatial resolution, point spread response function and count rate linearity of the HRC-I detector. Data taken at the HRC laboratory and at the Marshall Space Flight Center X-ray Calibration Facility are presented. The development of the HRC is a collaborative effort between the Smithsonian Astrophysical Observatory, the University of Leicester, UK, and the Osservatorio Astronomico G.S. Vaiana, Palermo, Italy.

  17. HERSCHEL/SCORE, imaging the solar corona in visible and EUV light: CCD camera characterization.

    PubMed

    Pancrazzi, M; Focardi, M; Landini, F; Romoli, M; Fineschi, S; Gherardi, A; Pace, E; Massone, G; Antonucci, E; Moses, D; Newmark, J; Wang, D; Rossi, G

    2010-07-01

    The HERSCHEL (helium resonant scattering in the corona and heliosphere) experiment is a rocket mission that was successfully launched last September from White Sands Missile Range, New Mexico, USA. HERSCHEL was conceived to investigate the solar corona in the extreme UV (EUV) and in the visible broadband polarized brightness and provided, for the first time, a global map of helium in the solar environment. The HERSCHEL payload consisted of a telescope, HERSCHEL EUV Imaging Telescope (HEIT), and two coronagraphs, HECOR (helium coronagraph) and SCORE (sounding coronagraph experiment). The SCORE instrument was designed and developed mainly by Italian research institutes and it is an imaging coronagraph to observe the solar corona from 1.4 to 4 solar radii. SCORE has two detectors for the EUV lines at 121.6 nm (HI) and 30.4 nm (HeII) and the visible broadband polarized brightness. The SCORE UV detector is an intensified CCD with a microchannel plate coupled to a CCD through a fiber-optic bundle. The SCORE visible light detector is a frame-transfer CCD coupled to a polarimeter based on a liquid crystal variable retarder plate. The SCORE coronagraph is described together with the performances of the cameras for imaging the solar corona. PMID:20428852

  18. Portable computer camera with a color image play-back LCD

    NASA Astrophysics Data System (ADS)

    Tsai, Yusheng Tim; Chan, Wen-Hsin

    1995-03-01

    A prototype portable digital still camera (DSC), usable both as a computer input device and as a consumer product, has been developed in our laboratory. The system performs auto focus, auto exposure, and auto white balance, and includes a zoom lens and its controller. The auto exposure algorithm uses fuzzy logic on picture segments for the exposure decision. A DSP ASIC has been developed for color image processing. The images are compressed with a JPEG IC and stored in a PCMCIA memory card. At a compression ratio of 18:1, a 2 Mbyte memory card can store about 36 color images. An energy-saving strategy implemented in a power control module yields two hours of operation, or about 200 pictures, on a standard camcorder battery. A color LCD panel is used for real-time display and playback. A SCSI interface driver has been designed to provide PC connection. The system also has NTSC and S-video terminals for real-time video display.

  19. Nanosecond frame cameras

    SciTech Connect

    Frank, A M; Wilkins, P R

    2001-01-05

    The advent of CCD cameras and computerized data recording has spurred the development of several new cameras and techniques for recording nanosecond images. We have made a side-by-side comparison of three nanosecond frame cameras, examining both their performance and their operational characteristics. The cameras comprise three gating/data-recording combinations: micro-channel plate/CCD, image diode/CCD, and image diode/film. The advantages and disadvantages of each device are discussed.

  20. Application Of A 1024X1024 Pixel Digital Image Store, With Pulsed Progressive Readout Camera, For Gastro-Intestinal Radiology

    NASA Astrophysics Data System (ADS)

    Edmonds, E. W.; Rowlands, J. A.; Hynes, D. M.; Toth, B. D.; Porter, A. J.

    1986-06-01

    We discuss the applicability of intensified x-ray television systems for general digital radiography and the requirements necessary for physician acceptance. Television systems for videofluorography limited to conventional fluoroscopic exposure rates (25 μR/s at the x-ray image intensifier), with particular application to the gastro-intestinal system, all suffer from three problems that tend to degrade the image: (a) lack of resolution, (b) noise, and (c) patient movement. The system described in this paper addresses each of these problems. Resolution is provided by the use of a 1024 x 1024 pixel frame store combined with a 1024-line video camera and a 10"/6" x-ray image intensifier. The problems of noise and sensitivity to patient movement are overcome by using a short but intense burst of radiation to produce the latent image, which is then read off the video camera in a progressive fashion and placed in the digital store. Hard copy is produced from a high resolution multiformat camera or a high resolution digital laser camera. It is intended that this PPR system will replace the 100 mm spot film camera in present use and will provide information in digital form for further processing and eventual digital archiving.

  1. Optimized Collimator Designs for Small Animal SPECT Imaging With a Compact Gamma Camera.

    PubMed

    Qi, Yujin

    2005-01-01

    The aim of this study was to design optimized pinhole and parallel-hole collimators for the development of a high-resolution microSPECT system using a compact pixellated scintillation detector. The detector has a field of view of 11 cm, with pixellated crystal elements of 1.0 mm pixel size and 1.12 mm pixel pitch. The relative resolution and sensitivity advantages of pinhole and parallel-hole collimators for imaging mice and rats were investigated using analytic formulations and Monte Carlo simulations. Optimized collimator designs were obtained by maximizing the system detection efficiency for a given object resolution, for 140 keV incident gamma photons. Our results indicate that this small field-of-view compact detector fitted with a conventional high-resolution parallel-hole collimator with a 4 cm hole length and 1.2 mm hexagonal hole size could not provide better than 2 mm resolution for imaging mice and rats. However, a pinhole collimator with a 10 cm focal length and a 1.0 mm aperture with a keel-edge design of 0.5 mm channel height can provide the desired resolutions. Its relative efficiency is about 2 times higher than that of the parallel-hole collimator for imaging mice at a distance of 3 cm from the collimator. In conclusion, the pinhole collimator is superior to the parallel-hole collimator for small animal imaging with a high-resolution compact gamma camera, but requires a carefully optimized design. PMID:17282561
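The resolution/sensitivity trade-off underlying this kind of optimization can be illustrated with the standard analytic pinhole formulas. This is a sketch using textbook expressions, not the paper's full model; the 1.12 mm intrinsic resolution plugged into the test below is an assumption based on the stated pixel pitch.

```python
import math

def pinhole_system_resolution(d_eff, src_dist, focal_len, r_intrinsic):
    """Object-plane FWHM resolution of a pinhole collimator: the geometric
    term combined in quadrature with the intrinsic detector resolution
    demagnified by M = focal_len / src_dist."""
    m = focal_len / src_dist
    r_geom = d_eff * (src_dist + focal_len) / focal_len
    return math.hypot(r_geom, r_intrinsic / m)

def pinhole_sensitivity(d_eff, src_dist):
    """On-axis geometric efficiency (fraction of emitted photons accepted)."""
    return d_eff**2 / (16.0 * src_dist**2)
```

With the abstract's 10 cm focal length and 1.0 mm aperture at a 3 cm source distance, these expressions give an object-plane resolution under 2 mm, consistent with the stated design goal.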

  2. Calibration and Validation of Images from the Mars Reconnaissance Orbiter Mars Color Imager (MARCI) and Context Camera (CTX) Instruments

    NASA Astrophysics Data System (ADS)

    Schaeffer, Derek; Bell, J. F., III; Malin, M.; Caplinger, M.; Calvin, W. M.; Cantor, B.; Clancy, R. T.; Haberle, R. M.; James, P. B.; Lee, S.; Thomas, P.; Wolff, M. J.

    2006-09-01

    The MRO CTX instrument is a monochrome (611 nm band center), linear-array CCD pushbroom camera with a nominal surface resolution of 6 m/pixel. The MARCI instrument is a 2-D CCD framing camera with 5 visible (420, 550, 600, 650, and 720 nm) and 2 UV (260 and 320 nm) filters, a 180° field of view, and a nominal resolution of about 1 km/pixel at nadir. Following Mars Orbit Insertion (MOI) in March 2006, CTX and MARCI images were acquired for initial instrument checkouts and validation of the pre-flight and in-flight calibration pipeline. CTX in-flight bias and dark current levels are derived from masked pixels at the edges of the array. A dark current model derived during pre-flight calibration is applied if the masked pixels exhibit a gradient across the field or noise above an acceptable threshold. The CTX flatfield removes residual pixel non-uniformities and a subtle "jail bar" effect caused by the CCD's alternating register readout. Radiances are derived from bias-, dark-, and flat-corrected images using pre-flight scaling factors. Dividing the average radiances by the solar spectral radiance convolved over the CTX filter transmission and applying a Minnaert phase angle correction yields an average I/F level in the CTX post-MOI Mars images near the expected value of 0.2. Bias and dark current subtraction of the MARCI images uses either a pre-flight model or dark-sky data from the far left or far right parts of the field (nominally off the Mars limb). The pre-flight flatfield data were modified based on in-flight performance to remove residual non-uniformities, and some residual pixel-dependent bias non-uniformities were also corrected using in-flight data. Bias-, dark-, and flat-corrected images were converted to radiance using pre-flight scaling factors. Phase-corrected 7-filter I/F values for the region of Mars imaged during the post-MOI campaign are consistent with previous data.
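The DN-to-I/F chain described for CTX can be summarized in one small function. This is a sketch only: `rad_scale` stands in for the pre-flight scaling factor, `k` for an assumed Minnaert exponent, and `solar_radiance` for the filter-convolved solar spectral radiance; none of these values come from the abstract.

```python
import numpy as np

def dn_to_iof(dn, bias, dark, flat, rad_scale, solar_radiance, mu0, mu, k=0.7):
    """Bias/dark subtraction, flat-field division, radiance scaling, I/F
    conversion, and a Minnaert phase-angle correction (I/F ~ mu0^k mu^(k-1),
    with mu0, mu the cosines of the incidence and emission angles)."""
    rad = (dn - bias - dark) / flat * rad_scale   # calibrated radiance
    iof = rad / solar_radiance                    # I/F before photometric correction
    return iof / (mu0**k * mu**(k - 1.0))         # Minnaert-corrected I/F
```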

  3. In vitro near-infrared imaging of occlusal dental caries using a germanium-enhanced CMOS camera

    NASA Astrophysics Data System (ADS)

    Lee, Chulsung; Darling, Cynthia L.; Fried, Daniel

    2010-02-01

    The high transparency of dental enamel in the near-infrared (NIR) at 1310 nm can be exploited for imaging dental caries without the use of ionizing radiation. The objective of this study was to determine whether the lesion contrast derived from NIR transillumination can be used to estimate lesion severity. Another aim was to compare the performance of a new Ge-enhanced complementary metal-oxide-semiconductor (CMOS) based NIR imaging camera with an InGaAs focal plane array (FPA). Extracted human teeth (n=52) with natural occlusal caries were imaged with both cameras at 1310 nm, and the image contrast between sound and carious regions was calculated. After NIR imaging, teeth were sectioned and examined using more established methods, namely polarized light microscopy (PLM) and transverse microradiography (TMR), to calculate lesion severity. Lesions were then classified into four categories according to severity. Lesion contrast increased significantly with lesion severity for both cameras (p<0.05). The Ge-enhanced CMOS camera, equipped with the larger array and smaller pixels, yielded higher contrast values than the smaller InGaAs FPA (p<0.01). These results demonstrate that NIR lesion contrast can be used to estimate lesion severity.
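The contrast between sound and carious regions can be computed from mean ROI intensities. The definition below, (I_sound − I_lesion)/I_sound with lesions appearing dark in transillumination, is one common choice and an assumption here, since the abstract does not give the exact formula.

```python
import numpy as np

def lesion_contrast(sound_roi, lesion_roi):
    """Contrast between a sound-enamel ROI and a lesion ROI, computed
    from mean intensities: (I_sound - I_lesion) / I_sound."""
    s = float(np.mean(sound_roi))
    l = float(np.mean(lesion_roi))
    return (s - l) / s
```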

  4. Dual-modality imaging in vivo with an NIR and gamma emitter using an intensified CCD camera and a conventional gamma camera

    NASA Astrophysics Data System (ADS)

    Houston, Jessica P.; Ke, Shi; Wang, Wei; Li, Chun; Sevick-Muraca, Eva M.

    2005-04-01

    Fluorescence-enhanced optical imaging measurements and conventional gamma camera images of human M21 melanoma xenografts were acquired for a "dual-modality" molecular imaging study. The αvβ3 integrin cell-surface receptors were imaged using a cyclic pentapeptide probe, cyclo(Lys-Arg-Gly-Asp-phe) [c(KRGDf)], which is known to target the membrane receptor. The probe, dual-labeled with a radiotracer, 111In, for gamma scintigraphy as well as with a near-infrared dye, IRDye800, was injected into six nude mice at a dose equivalent to 90 μCi of 111In and 5 nanomoles of near-infrared (NIR) dye. A 15-min gamma scan and an 800-millisecond NIR-sensitive ICCD optical image were collected 24 hours after injection of the dual-labeled probe. The image quality of the nuclear and optical data was compared, with the results showing similar target-to-background ratios (TBR) based on the origin of fluorescence and gamma emissions at the targeted tumor site. Furthermore, an analysis of SNR versus contrast showed greater sensitivity of optical over nuclear imaging for the subcutaneous tumor targets measured by surface regions of interest.
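
In its simplest form, the target-to-background ratio (TBR) compared across the two modalities is the ratio of mean ROI intensities; a minimal sketch (function name and ROI handling are assumptions):

```python
import numpy as np

def target_to_background(image, target_mask, background_mask):
    """Mean-intensity target-to-background ratio (TBR) from two ROI masks."""
    return float(image[target_mask].mean() / image[background_mask].mean())
```

Applied to co-registered optical and gamma images with the same tumor and background ROIs, this gives directly comparable TBR values.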

  5. Improved determination of volcanic SO2 emission rates from SO2 camera images

    NASA Astrophysics Data System (ADS)

    Klein, Angelika; Lübcke, Peter; Bobrowski, Nicole; Platt, Ulrich

    2015-04-01

    SO2 cameras determine the SO2 emissions of volcanoes with high temporal and spatial resolution. They thus visualize the plume morphology and give information about turbulence and plume dispersion. Moreover, emission rates can be determined from SO2 camera image series with high time resolution (as explained below); these data can help to improve our understanding of variations in the degassing regime of volcanoes. The first step in obtaining emission rates is to integrate the SO2 column amount along two different plume cross sections (ideally perpendicular to the direction of plume propagation); combined with wind speed information, this allows the determination of SO2 fluxes. A popular method to determine the mean wind speed relies on estimating the time lag between the SO2 signals derived for two cross sections of the plume at different distances downwind of the source. This can be done by searching for the maximum cross-correlation coefficient of the two signals. Another, more sophisticated method to obtain the wind speed is to apply an optical flow technique to a series of SO2 camera images, yielding a more detailed wind field in the plume. While the cross-correlation method only gives the mean wind speed between the two cross sections of the plume, the optical flow technique determines the wind speed and direction for each pixel individually (in other words, a two-dimensional projection of the entire wind field in the plume is obtained). While optical flow algorithms in general give more detailed information about the wind velocities in the volcanic plume, they may fail to determine wind speeds in homogeneous regions of the plume (i.e. regions with no spatial variation in SO2 column densities). Usually the wind speed is automatically set to zero in those regions, which leads to an underestimation of the total SO2 emission flux. This behavior was observed more than once in a data set of SO2 camera images taken at Etna, Italy in July 2014.
For those data the cross-correlation method leads to a more realistic result, which was close to simultaneously measured SO2 fluxes calculated from spectra taken by a zenith-looking differential optical absorption spectroscopy (DOAS) instrument traversing underneath the plume. In the analyzed data the flux determined with the cross-correlation method was twice the flux determined with the optical flow algorithm. We further investigated the potential error in the SO2 flux determination caused by a slant view onto the plume, a situation commonly encountered when observing volcanic SO2 fluxes by remote sensing techniques. Frequently it is difficult to determine the precise angle between the wind direction (i.e. plume propagation direction) and the observation direction. We find that if the plume propagation direction deviates by more than 20 degrees from the assumed wind direction, the determined SO2 flux can differ from the true value by more than 10 percent.
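
The cross-correlation retrieval described above reduces to: record the integrated SO2 signal at two cross sections over time, find the lag that maximizes their cross-correlation, and divide the section separation by that lag. A minimal sketch under those assumptions (names and the handling of non-positive lags are illustrative):

```python
import numpy as np

def wind_speed_from_lag(upwind_signal, downwind_signal, distance_m, dt_s):
    """Mean plume speed from the time lag maximizing the cross-correlation
    of SO2 signals at two plume cross sections a known distance apart."""
    a = upwind_signal - np.mean(upwind_signal)
    b = downwind_signal - np.mean(downwind_signal)
    corr = np.correlate(b, a, mode="full")
    lag = int(np.argmax(corr)) - (len(a) - 1)   # samples by which b trails a
    if lag <= 0:
        raise ValueError("downwind section saw the signal first; check geometry")
    return distance_m / (lag * dt_s)
```

Note that this yields a single bulk speed for the whole plume segment, in contrast to the per-pixel velocities of the optical flow approach.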

  6. Automated Registration of Images from Multiple Bands of Resourcesat-2 Liss-4 camera

    NASA Astrophysics Data System (ADS)

    Radhadevi, P. V.; Solanki, S. S.; Jyothi, M. V.; Varadan, G.

    2014-11-01

    Continuous and automated co-registration and geo-tagging of images from multiple bands of the Liss-4 camera is one of the interesting challenges of Resourcesat-2 data processing. The three arrays of the Liss-4 camera are physically separated in the focal plane in the along-track direction. Thus, the same line on the ground is imaged by the extreme bands with a time interval of as much as 2.1 seconds. During this time the satellite covers a distance of about 14 km on the ground and the earth rotates through an angle of about 30". Yaw steering is performed to compensate for earth-rotation effects, ensuring a first-level registration between the bands. This alone does not achieve perfect co-registration, however, because of attitude fluctuations, satellite movement, terrain topography, PSM steering, and small variations in the angular placement of the CCD lines in the focal plane (from the pre-launch values). This paper describes an algorithm based on the viewing geometry of the satellite to perform automatic band-to-band registration of Liss-4 MX images of Resourcesat-2 at Level 1A. The algorithm uses the principles of photogrammetric collinearity equations. The model employs orbit trajectory and attitude fitting with polynomials. Direct geo-referencing with a global DEM then maps every pixel in the middle band to a position on the surface of the earth for the given attitude. Attitude is estimated by interpolating measurement data obtained from star sensors and gyros, which are sampled at low frequency. When the sampling rate of the attitude information is low compared to the frequency of jitter or micro-vibration, images processed by geometric correction suffer from distortion. Therefore, a set of conjugate points is identified between the bands to perform a relative attitude error estimation and correction, which ensures the internal accuracy and co-registration of the bands. 
Accurate calculation of the exterior orientation parameters with GCPs is not required. Instead, the relative line-of-sight vector of each detector in the different bands is modeled in relation to the payload. With this method a band-to-band registration accuracy of better than 0.3 pixels could be achieved even in high-relief areas.
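
The conjugate-point step can be illustrated by estimating the residual band-to-band translation as a least-squares (mean) shift over matched points; the function and the (line, pixel) point format are assumptions, not the authors' implementation:

```python
import numpy as np

def estimate_band_offset(pts_band_a, pts_band_b):
    """Least-squares translation between conjugate points of two bands.
    Returns the mean (line, pixel) offset and the RMS residual after removal."""
    shifts = np.asarray(pts_band_b, dtype=float) - np.asarray(pts_band_a, dtype=float)
    offset = shifts.mean(axis=0)            # mean misregistration between bands
    residual = shifts - offset
    rms = float(np.sqrt((residual**2).sum(axis=1).mean()))
    return offset, rms
```

A sub-pixel RMS residual after removing the mean offset is the kind of internal-accuracy figure the 0.3-pixel result above refers to.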

  7. Calibration of the Lunar Reconnaissance Orbiter Camera

    NASA Astrophysics Data System (ADS)

    Tschimmel, M.; Robinson, M. S.; Humm, D. C.; Denevi, B. W.; Lawrence, S. J.; Brylow, S.; Ravine, M.; Ghaemi, T.

    2008-12-01

    The Lunar Reconnaissance Orbiter Camera (LROC) onboard the NASA Lunar Reconnaissance Orbiter (LRO) spacecraft consists of three cameras: the Wide-Angle Camera (WAC) and two identical Narrow Angle Cameras (NAC-L, NAC-R). The WAC is a push-frame imager with 5 visible-wavelength filters (415 to 680 nm) at a spatial resolution of 100 m/pixel and 2 UV filters (315 and 360 nm) with a resolution of 400 m/pixel. In addition to multicolor imaging, the WAC can operate in monochrome mode to provide a global large-incidence-angle basemap and a time-lapse movie of the illumination conditions at both poles. The WAC has a highly linear response, a read noise of 72 e- and a full-well capacity of 47,200 e-. The signal-to-noise ratio in each band is 140 in the worst case. There are no out-of-band leaks and the spectral response of each filter is well characterized. Each NAC is a monochrome pushbroom scanner, providing images with a resolution of 50 cm/pixel from a 50-km orbit. A single NAC image has a swath width of 2.5 km and a length of up to 26 km. The NACs are mounted to acquire side-by-side imaging for a combined swath width of 5 km. The NAC is designed to fully characterize future human and robotic landing sites in terms of topography and hazard risks. The North and South poles will be mapped at 1-meter scale poleward of 85.5° latitude. Stereo coverage can be provided by pointing the NACs off-nadir. The NACs are also highly linear. Read noise is 71 e- for NAC-L and 74 e- for NAC-R, and the full-well capacity is 248,500 e- for NAC-L and 262,500 e- for NAC-R. The focal lengths are 699.6 mm for NAC-L and 701.6 mm for NAC-R; the system MTF is 28% for NAC-L and 26% for NAC-R. The signal-to-noise ratio is at least 46 (terminator scene) and can be higher than 200 (high-sun scene). Both NACs exhibit a straylight feature, which is caused by out-of-field sources and is of a magnitude of 1-3%. 
However, as this feature is well understood it can be greatly reduced during ground processing. All three cameras were calibrated in the laboratory under ambient conditions. Future thermal vacuum tests will characterize critical behaviors across the full range of lunar operating temperatures. In-flight tests will check for changes in response after launch and provide key data for meeting the requirements of 1% relative and 10% absolute radiometric calibration.
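
As a rough consistency check on the figures above, a shot-noise-plus-read-noise model (neglecting dark current and other noise terms, an assumption) reproduces SNR of the reported order near the WAC full-well signal:

```python
import math

def shot_and_read_snr(signal_e, read_noise_e):
    """SNR of a CCD pixel with Poisson shot noise and Gaussian read noise,
    both expressed in electrons."""
    return signal_e / math.sqrt(signal_e + read_noise_e**2)
```

With the WAC numbers quoted above (47,200 e- full well, 72 e- read noise), this gives an SNR of about 206 at full well; lower-signal scenes fall toward the quoted worst case.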

  8. A depth camera for natural human-computer interaction based on near-infrared imaging and structured light

    NASA Astrophysics Data System (ADS)

    Liu, Yue; Wang, Liqiang; Yuan, Bo; Liu, Hao

    2015-08-01

    The design of a novel depth camera is presented, targeting close-range (20-60 cm) natural human-computer interaction, especially for mobile terminals. In order to achieve high precision throughout the working range, a two-step method is employed to match the near-infrared intensity image to absolute depth in real time. First, structured light produced by an 808-nm laser diode and a Dammann grating is used to coarsely quantize the output space of depth values into discrete bins. Then a learning-based classification forest algorithm predicts the depth distribution over these bins for each pixel in the image. Quantitative experimental results show that this depth camera achieves 1% precision over the range of 20-60 cm, indicating that the camera suits resource-limited, low-cost applications.
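
The two-step scheme (coarse depth bins, then a per-pixel distribution over those bins) can be sketched as follows; the bin count, range, and expected-value readout are illustrative assumptions:

```python
import numpy as np

def depth_bins(d_min=0.20, d_max=0.60, n_bins=16):
    """Quantize the 20-60 cm working range into uniform depth bins (edges in m)."""
    return np.linspace(d_min, d_max, n_bins + 1)

def expected_depth(bin_probs, edges):
    """Depth estimate from a per-pixel probability distribution over bins,
    as a classification forest would produce: probability-weighted bin centres."""
    centers = 0.5 * (edges[:-1] + edges[1:])
    return float(np.dot(bin_probs, centers))
```

The soft (probability-weighted) readout lets the final depth estimate land between bin centres, which is what allows sub-bin precision.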

  9. SPLASSH: Open source software for camera-based high-speed, multispectral in-vivo optical image acquisition.

    PubMed

    Sun, Ryan; Bouchard, Matthew B; Hillman, Elizabeth M C

    2010-01-01

    Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software's framework and provide details to guide users with development of this and similar software. PMID:21258475

  10. SPLASSH: Open source software for camera-based high-speed, multispectral in-vivo optical image acquisition

    PubMed Central

    Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.

    2010-01-01

    Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software's framework and provide details to guide users with development of this and similar software. PMID:21258475

  11. Performance Evaluations and Quality Validation System for Optical Gas Imaging Cameras That Visualize Fugitive Hydrocarbon Gas Emissions

    EPA Science Inventory

    Optical gas imaging (OGI) cameras have the unique ability to exploit the electromagnetic properties of fugitive chemical vapors to make invisible gases visible. This ability is extremely useful for industrial facilities trying to mitigate product losses from escaping gas and fac...

  12. Radioisotope guided surgery with imaging probe, a hand-held high-resolution gamma camera

    NASA Astrophysics Data System (ADS)

    Soluri, A.; Trotta, C.; Scopinaro, F.; Tofani, A.; D'Alessandria, C.; Pasta, V.; Stella, S.; Massari, R.

    2007-12-01

    Since 1997, our physics group, together with nuclear physicians, has studied imaging probes (IP): hand-held, high-resolution gamma cameras for radio-guided surgery (RGS). The present work aims to verify the usefulness of two updated IPs in different surgical operations. Forty patients scheduled for breast cancer sentinel node (SN) biopsy, five patients with nodal recurrence of thyroid cancer, seven patients with parathyroid adenomas, and five patients with neuroendocrine tumours (NET) were operated on under the guidance of an IP. We used two different IPs with fields of view of 1 and 4 in.², respectively, and an intrinsic spatial resolution of about 2 mm. The radioisotopes were 99mTc, 123I and 111In. The 1 in.² IP detected the SN in all 40 patients and more than one node in 24, whereas the Anger camera (AC) failed to locate the SN in four patients and detected true-positive second nodes in only nine patients. The 4 in.² IP was used for RGS of thyroid, parathyroid and NETs. It detected eight latero-cervical nodes; in the same patients, the AC detected five invaded nodes. The IP detected 10 parathyroid adenomas in 7 patients and five NETs in five patients. Both the 1 and 4 in.² IPs proved useful in all operations. Initial studies on SN biopsy were carried out on small series of patients to validate the IP and to demonstrate its effectiveness and usefulness alone or against conventional probes. We propose the use of the IP as a control method for legal documentation and as a guide to surgeon strategy before and after lesion removal.

  13. Factors affecting the repeatability of gamma camera calibration for quantitative imaging applications using a sealed source

    NASA Astrophysics Data System (ADS)

    Anizan, N.; Wang, H.; Zhou, X. C.; Wahl, R. L.; Frey, E. C.

    2015-02-01

    Several applications in nuclear medicine require absolute activity quantification of single photon emission computed tomography images. Obtaining a repeatable calibration factor that converts voxel values to activity units is essential for these applications. Because source preparation and measurement of the source activity using a radionuclide activity meter are potential sources of variability, this work investigated instrumentation and acquisition factors affecting repeatability using planar acquisition of sealed sources. The calibration factor was calculated for different acquisition and geometry conditions to evaluate the effect of the source size, lateral position of the source in the camera field-of-view (FOV), source-to-camera distance (SCD), and variability over time using sealed Ba-133 sources. A small region of interest (ROI) based on the source dimensions and collimator resolution was investigated to decrease the background effect. A statistical analysis with a mixed-effects model was used to evaluate quantitatively the effect of each variable on the global calibration factor variability. A variation of 1 cm in the measurement of the SCD from the assumed distance of 17 cm led to a variation of 1-2% in the calibration factor measurement using a small disc source (0.4 cm diameter) and less than 1% with a larger rod source (2.9 cm diameter). The lateral position of the source in the FOV and the variability over time had small impacts on calibration factor variability. The residual error component was well estimated by Poisson noise. Repeatability of better than 1% in a calibration factor measurement using a planar acquisition of a sealed source can be reasonably achieved. The best reproducibility was obtained with the largest source with a count rate much higher than the average background in the ROI, and when the SCD was positioned within 5 mm of the desired position. In this case, calibration source variability was limited by the quantum noise.
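
The calibration-factor measurement itself is a simple computation once an ROI is chosen; a minimal sketch, with background handling and units as assumptions rather than the study's exact procedure:

```python
import numpy as np

def calibration_factor(roi_counts, background_per_pixel, duration_s, activity_mbq):
    """Planar calibration factor in counts/s per MBq: background-subtracted
    ROI counts divided by acquisition time and source activity."""
    roi = np.asarray(roi_counts, dtype=float)
    net_counts = roi.sum() - background_per_pixel * roi.size
    return net_counts / duration_s / activity_mbq
```

Repeating this over acquisitions and geometries yields the per-condition factors fed into the mixed-effects variability analysis described above.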

  14. Development of Electron Tracking Compton Camera using micro pixel gas chamber for medical imaging

    NASA Astrophysics Data System (ADS)

    Kabuki, Shigeto; Hattori, Kaori; Kohara, Ryota; Kunieda, Etsuo; Kubo, Atsushi; Kubo, Hidetoshi; Miuchi, Kentaro; Nakahara, Tadaki; Nagayoshi, Tsutomu; Nishimura, Hironobu; Okada, Yoko; Orito, Reiko; Sekiya, Hiroyuki; Shirahata, Takashi; Takada, Atsushi; Tanimori, Toru; Ueno, Kazuki

    2007-10-01

    We have developed an Electron Tracking Compton Camera (ETCC) that reconstructs the 3-D tracks of the scattered electron in the Compton process for both sub-MeV and MeV gamma rays. By measuring both the directions and energies of not only the scattered gamma ray but also the recoil electron, the direction of the incident gamma ray is determined for each individual photon. Furthermore, the measured residual angle between the recoil electron and scattered gamma ray is quite powerful for kinematical background rejection. For the 3-D tracking of the electrons, a Micro Time Projection Chamber (μ-TPC) was developed using a new type of micro-pattern gas detector. The ETCC consists of this μ-TPC (10 × 10 × 8 cm³) and 6 × 6 × 13 mm³ GSO crystal pixel arrays with a flat-panel photomultiplier surrounding the μ-TPC for detecting scattered gamma rays. The ETCC provided an angular resolution of 6.6° (FWHM) at the 364 keV line of 131I. A mobile ETCC for medical imaging, fabricated in a 1 m cubic box, has been operated since October 2005. Here, we present imaging results for line sources and a phantom of the human thyroid gland using 364 keV gamma rays of 131I.

  15. Electrochemical camera chip for simultaneous imaging of multiple metabolites in biofilms.

    PubMed

    Bellin, Daniel L; Sakhtah, Hassan; Zhang, Yihan; Price-Whelan, Alexa; Dietrich, Lars E P; Shepard, Kenneth L

    2016-01-01

    Monitoring spatial distribution of metabolites in multicellular structures can enhance understanding of the biochemical processes and regulation involved in cellular community development. Here we report on an electrochemical camera chip capable of simultaneous spatial imaging of multiple redox-active phenazine metabolites produced by Pseudomonas aeruginosa PA14 colony biofilms. The chip features an 8 mm × 8 mm array of 1,824 electrodes multiplexed to 38 parallel output channels. Using this chip, we demonstrate potential-sweep-based electrochemical imaging of whole-biofilms at measurement rates in excess of 0.2 s per electrode. Analysis of mutants with various capacities for phenazine production reveals distribution of phenazine-1-carboxylic acid (PCA) throughout the colony, with 5-methylphenazine-1-carboxylic acid (5-MCA) and pyocyanin (PYO) localized to the colony edge. Anaerobic growth on nitrate confirms the O2-dependence of PYO production and indicates an effect of O2 availability on 5-MCA synthesis. This integrated-circuit-based technique promises wide applicability in detecting redox-active species from diverse biological samples. PMID:26813638

  16. 4 Vesta in Color: High Resolution Mapping from Dawn Framing Camera Images

    NASA Technical Reports Server (NTRS)

    Reddy, V.; LeCorre, L.; Nathues, A.; Sierks, H.; Christensen, U.; Hoffmann, M.; Schroeder, S. E.; Vincent, J. B.; McSween, H. Y.; Denevi, B. W.; Li, J.-Y.; Pieters, C. M.; Gaffey, M.; Mittlefehldt, D.; Buratti, B.; Hicks, M